text | id | metadata
---|---|---
http://arxiv.org/abs/1705.09824v1 | {
"authors": [
"Jiaozi Wang",
"Wen-ge Wang"
],
"categories": [
"cond-mat.stat-mech"
],
"primary_category": "cond-mat.stat-mech",
"published": "20170527144931",
"title": "Internal temperature of quantum chaotic systems at the nanoscale and its detection by a microscopic thermometer"
} |
|
This paper addresses the problem of automatic speech recognition (ASR) error detection and its use for improving spoken language understanding (SLU) systems. In this study, the SLU task consists in automatically extracting, from ASR transcriptions, semantic concepts and concept/value pairs in, e.g., a touristic information system. An approach is proposed for enriching the set of semantic labels with error-specific labels and for using a recently proposed neural approach based on word embeddings to compute well-calibrated ASR confidence measures. Experimental results are reported showing that it is possible to significantly decrease the Concept/Value Error Rate with a state-of-the-art system, outperforming previously published results on the same experimental data. It is also shown that, by combining an SLU approach based on conditional random fields with a neural encoder/decoder attention-based architecture, it is possible to effectively identify confidence islands and uncertain semantic output segments useful for deciding appropriate error handling actions by the dialogue manager strategy. Index Terms: spoken language understanding, speech recognition, robustness to ASR errors
§ INTRODUCTION
In spite of impressive research efforts and recent results, systems for semantic interpretation of text and speech still make errors. Some of the problems common to text and speech are: the difficulty of localizing concept mentions, ambiguities intrinsic to localized mentions, and the failure to identify sufficient contextual constraints for resolving interpretation ambiguities.
Additional problems are introduced by the interaction between a spoken language understanding (SLU) system and an error-prone automatic speech recognition (ASR) system. ASR errors may affect the mention of a concept or the value of a concept instance. Furthermore, the hypothesization of concepts and values depends, among other things, on the context in which their mention is localized. Thus, context errors may also introduce errors in concept mention localization and hypothesization. The focus of this paper[Thanks to the ANR agency for funding through the CHIST-ERA ERA-Net JOKER under the contract number ANR-13-CHR2-0003-05.] is on the introduction of suitable ASR confidence measures for localizing ASR word errors that may affect SLU performance. They are used as additional SLU features to be combined with lexical and syntactic features useful for characterizing concept mentions. For this purpose, an ASR error detection sub-system has been endowed with confidence features based on syntactic dependencies and other semantically relevant word features. Two SLU architectures equipped with further sets of confidence and specific word features are introduced. The architectures are based on conditional random fields (CRF) and on an encoder-decoder neural network structure with a mechanism of attention (NN-EDA). Experimental results showing significant reductions on the French MEDIA corpus for concepts and concept/value pairs confirm the expected benefit of introducing semantically specific ASR features. Optimal combinations of these architectures provide additional improvements, with a concept error rate (CER) relative reduction of 18.9% and a concept-value error rate (CVER) relative reduction of 10.3% with respect to a baseline described in <cit.>, which does not use these features and is based only on CRFs.
§ RELATED WORK
SLU systems are error prone. Part of their errors are caused by certain types of ASR errors. In general, ASR errors are reduced by estimating model parameters that minimize the expected word error rate <cit.>. The effect of word errors can be controlled by associating a single sentence hypothesis with word confidence measures. In <cit.>, methods are proposed for constructing confidence features that improve the quality of a semantic confidence measure. The methods proposed for confidence calibration are based on a maximum entropy model with distribution constraints, a conventional artificial neural network, and a deep belief network (DBN). The latter two methods show slightly superior performance but higher computational complexity compared to the first one. More recently <cit.>, new features and bidirectional recurrent neural networks (RNN) have been proposed for ASR error detection. Most SLU systems reviewed in <cit.> generate hypotheses of semantic frame slot tags expressed in a spoken sentence analyzed by an ASR system. The use of deep neural networks (DNN) appeared in more recent systems, as described in <cit.>. Bidirectional RNNs with long short-term memory (LSTM) have been used for semantic frame slot tagging <cit.>. In <cit.>, LSTMs have been proposed with a mechanism of attention for parsing text sentences into logical forms. Following <cit.>, in <cit.> a convolutional neural network (CNN) is proposed for encoding the representation of knowledge expressed in a spoken sentence.
This encoding is used as an attention mechanism for constraining the hypothesization of slot tags expressed in the same sentence. Most recent papers using sophisticated SLU architectures based on RNNs take as input the best sequence of word hypotheses passed by an ASR system. In this paper, two SLU architectures are considered. The first one, based on an encoder with bidirectional gated recurrent units (GRU) used for machine translation <cit.>, integrates context information with an attention-based decoder as in <cit.>. The second one integrates context information in the same architecture used in <cit.>, based on conditional random fields (CRF). Both SLU systems receive word hypotheses generated by the same ASR sub-system and scored with confidence measures computed by a neural architecture with new types of embeddings and semantically relevant confidence features.
§ ASR ERROR DETECTION AND CONFIDENCE MEASURE
Two different confidence measures are used for error detection. The first one is the word posterior probability computed with confusion networks as described in <cit.>. The other one is a variant of a new approach introduced in <cit.>. The latter measure is computed with a Multi-Stream Multi-Layer Perceptron (MS-MLP) architecture, fed by heterogeneous confidence features. Among them, the most relevant for SLU are word embeddings of the targeted word and its neighbors, the length of the current word, language model backoff behavior, part-of-speech (POS) tags, syntactic dependency labels and word governors. Other features, such as prosodic features and acoustic word embeddings described in <cit.> and <cit.>, could also be used but were not considered in the experiments described in this paper. Particular attention was paid to the word embedding computation, which is the result of a combination of different well-known word embeddings (CBOW, Skip-gram, GloVe) through the use of a neural auto-encoder, in order to improve the performance of this ASR error detection system <cit.>. The MS-MLP proposed here for ASR error detection has two output units. They compute scores for the Correct and Error labels associated with an ASR-generated hypothesis. This hypothesis is evaluated by the softmax value of the Correct label scored with the MS-MLP. Experiments have shown that this is a calibrated confidence measure, more effective than the word posterior probability when the comparison is based on the Normalized Cross Entropy (NCE) <cit.>, which measures the information contribution provided by confidence knowledge. Table <ref> shows the NCE values obtained by these two confidence measures on the MEDIA test data, whose details can be found in section <ref>. Figure <ref> shows the predictive capability of the MS-MLP confidence measure compared to the word posterior probability on the MEDIA test data. The curve shows the predicted percentage of correct words as a function of confidence intervals. The best measure is the one for which percentages are the closest to the diagonal line. Thanks to these confidence measures, we expect to get relevant information in order to better handle ASR errors in a spoken language understanding framework.
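For reference, the NCE values compared above can be computed from per-word confidence scores and 0/1 correctness labels using the standard Normalized Cross Entropy formulation; the following minimal sketch is a generic implementation of that formula, not the scoring script used in these experiments, and the variable names are illustrative:

import math

def nce(confidences, correct):
    """confidences: scores in (0, 1); correct: 0/1 labels, one per word."""
    n = len(correct)
    p_c = sum(correct) / n                      # overall proportion of correct words
    h_max = -(p_c * math.log2(p_c) + (1 - p_c) * math.log2(1 - p_c))
    h_conf = -sum(c * math.log2(x) + (1 - c) * math.log2(1 - x)
                  for x, c in zip(confidences, correct)) / n
    return (h_max - h_conf) / h_max             # 1 = perfect, 0 = uninformative

A well-calibrated measure gives a higher NCE because its scores are closer, in the log-likelihood sense, to the true correctness probabilities.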
§ SLU FEATURES AND ARCHITECTURES
Two basic SLU architectures are considered to carry out experiments on the MEDIA corpus (described in sub-section <ref>). The first one is an encoder/decoder recurrent neural architecture with a mechanism of attention (NN-EDA), similar to the one used for machine translation proposed in <cit.>. The second one is based on conditional random fields (CRF) <cit.>. Both architectures build their models on the same features, encoded with continuous values in the first one and with discrete values in the second one.
§.§ Set of Features
Word features, including those defined to facilitate the association of a word with a semantic content, are defined as follows:
* the word itself;
* its pre-defined semantic categories, which belong to:
  * MEDIA-specific categories, such as names of streets, cities or hotels, lists of room equipment, food types, ... (e.g. TOWN for Paris);
  * more general categories, such as figures, days, months, ... (e.g. FIGURE for thirty-three);
* a set of syntactic features: the MACAON tool <cit.> is applied to the whole turn in order to obtain, for each word, the following tags: the lemma, the POS tag, its word governor and its relation with the current word;
* a set of morphological features: the first 1-to-4-letter n-grams and the last 1-to-4-letter n-grams of the word, and a binary feature indicating whether the first letter is upper case;
* the two ASR confidence measures: the ASR posterior probability (pap) and the MS-MLP confidence measure described in section <ref>.
The two SLU architectures take all these features; only the two confidence measures may be taken partially: one, the other, or both, according to the best-performing configuration. The SLU architectures also need to be calibrated on their respective hyper-parameters in order to give the best results. The way the best configuration is chosen is described in <ref>.
§.§ Neural EDA system
The proposed RNN encoder-decoder architecture with an attention-based mechanism (NN-EDA) is inspired by a machine translation architecture and depicted in figure <ref>. The concept tagging process is considered as a translation problem from words (source language) to semantic concept tags (target language). The bidirectional RNN encoder is based on Gated Recurrent Units (GRU) and computes an annotation h_i for each word w_i of the input sequence w_1, ..., w_I. This annotation is the concatenation of the matching forward hidden layer state and backward hidden layer state obtained respectively by the forward RNN and the backward RNN comprising the bidirectional RNN. Each annotation contains summaries of the dialogue turn contexts respectively preceding and following the considered word. The sequence of annotations h_1, ..., h_I is used by the decoder to compute a context vector c_t (represented as a circle with a cross in figure <ref>). A context vector is recomputed after each emission of an output label. This computation takes into account a weighted sum of all the annotations computed by the encoder. The weighting depends on the current output target and is the core of the attention mechanism: a good estimation of these weights α_tj allows the decoder to choose the parts of the input sequence to pay attention to. This context vector is used by the decoder, in conjunction with the previously emitted output label y_t-1 and the current state s_t of the hidden layer of an RNN, to make a decision about the current output label y_t. A more detailed description of recurrent neural networks and attention-based models can be found in <cit.>.
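The context-vector computation described above can be sketched in a few lines of additive attention in the style of the machine-translation decoder cited above; this is a generic illustration rather than the exact NN-EDA implementation, and the array names and dimensions are assumptions:

import numpy as np

def attention_step(H, s_prev, W_a, U_a, v_a):
    """H: (I, 2d) encoder annotations; s_prev: (d,) previous decoder state.
    Returns the context vector c_t and the attention weights alpha_t."""
    scores = np.tanh(H @ W_a + s_prev @ U_a) @ v_a   # (I,) alignment scores e_tj
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                             # softmax over input positions
    c_t = alpha @ H                                  # (2d,) weighted sum of annotations
    return c_t, alpha

The decoder would then combine c_t with y_t-1 and its hidden state s_t to predict the concept tag y_t.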
§.§ CRF system
Past experiments described in <cit.> have shown that the best semantic annotation performance on manual and automatic transcriptions of the MEDIA corpus was obtained with CRF systems. More recently, in <cit.>, this architecture has been compared to the popular bidirectional RNN (bi-RNN). The result was that CRF systems outperform a bi-RNN architecture on the MEDIA corpus, while better results were observed with the bi-RNN on the ATIS corpus <cit.>. This is probably explained by the fact that MEDIA contains semantic contents whose mentions are more difficult to disambiguate, and CRFs make it possible to exploit complex contexts more effectively. For the sake of comparison with the best SLU system proposed in <cit.>, the Wapiti toolkit was used <cit.>. Nevertheless, the set of input features used by the system proposed in this paper is different from the one used in <cit.>. Among the novelties of our system, we consider syntactic and ASR confidence features, and our configuration template is different. After many experiments performed on DEV, our final feature template includes the previous and following instances of words and POS tags, as unigrams or bigrams, to associate a semantic label with the current word. Also associated with the current word are the semantic categories of the two previous and two following instances. The other features are only considered at the current position. Furthermore, the tool discretize4CRF[https://gforge.inria.fr/projects/discretize4crf/] is used to apply a discretization function to the ASR confidence measures in order to obtain discrete values that can be accepted as input features by the CRF.
§ EXPERIMENTAL SETUP AND RESULTS
Experiments were carried out with the MEDIA corpus as in <cit.>. For the sake of comparison, the results of their best understanding system are reported in this paper as a baseline. However, as the WER of the ASR used in this paper is lower (23.5%) than the one used for the baseline, rigorous conclusions can be drawn only from comparisons between the different SLU components introduced in this paper.
§.§ The MEDIA corpus
The MEDIA corpus was collected in the French Media/Evalda project <cit.> and deals with the negotiation of tourist services. It contains three sets of telephone human/computer dialogues, namely: a training set (TRAIN) with approximately 17.7k sentences, a development set (DEV) with 1.3k sentences and an evaluation set (TEST) containing 3.5k sentences. The corpus was manually annotated with semantic concepts characterized by a label and its value. Other types of semantic annotations (such as mode or specifiers) are not considered in this paper, to be consistent with the experimental results provided in <cit.>. Annotations also associate a word sequence with each concept. These sequences have to be considered as estimations of localized concept mentions. Evaluations are performed with the DEV and TEST sets and report concept error rates (CER) for concept labels only and concept-value error rates (CVER) for concept-value pairs. It is worth mentioning that the number of concepts annotated in a turn has a large variability and may exceed 30 annotated concepts. Among the concept types there are some, such as three different types of REFERENCE and CONNECTOR of application-domain entities, whose mentions are often short confusable sequences of words.
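The CER and CVER used above are edit-distance-based rates over concept (or concept-value) sequences, analogous to WER over words. A minimal sketch of such a computation, not tied to the official MEDIA scoring tools, could look like this:

def concept_error_rate(ref, hyp):
    """ref, hyp: lists of concept labels (or (concept, value) tuples for CVER).
    Returns (substitutions + insertions + deletions) / len(ref)."""
    # standard Levenshtein dynamic program
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)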
§.§ LIUM ASR system dedicated to MEDIA
For these experiments, a variant of the ASR system developed by LIUM that won the last evaluation campaign on the French language was used <cit.>. This system is based on the Kaldi speech recognition toolkit <cit.>. The training set used to estimate the parameters of the DNN (Deep Neural Network) acoustic models consists of 145,781 speech segments from several sources: the radio broadcast ESTER <cit.> and ESTER2 <cit.> corpora, which account for about 100 hours of speech each; the TV broadcast ETAPE corpus <cit.>, accounting for about 30 hours of speech; the TV broadcast REPERE train corpus, accounting for about 35 hours of speech; and other LIUM radio and TV broadcast data for about 300 hours of speech. In total, 565 hours of speech compose the training corpus. These recordings were converted to 8 kHz before training the acoustic models in order to be more appropriate to the MEDIA telephone data. As inputs, the DNNs are fed (for training and decoding) with MFCCs (Mel-Frequency Cepstrum Coefficients) concatenated with i-vectors, in order to adapt the acoustic models to speakers. The vocabulary of the ASR system contains all the words present in the MEDIA training and development corpora, i.e. about 2.5K words. A first bigram language model (LM) is applied during the decoding process to generate word lattices. These lattices are then rescored by applying a 3-gram language model. In order to get an SLU training corpus close to the test corpus, SLU models are trained on ASR transcriptions. To avoid dealing with errors made by an LM over-trained on the MEDIA training corpus, a leave-one-out approach was followed: all the dialogue files in the training and development corpora were randomly split into 4 subsets. Each subset was transcribed by using an LM trained on the manual transcriptions present in the 3 other subsets and linearly interpolated with a 'generic' language model trained on a large set of French newspapers crawled from the web, containing 77 million words. The test data was transcribed with an LM trained on the MEDIA training corpus and the same generic language model. As shown in table <ref>, word error rates for the training, development, and test corpora were around 23.5%.
§.§ Results
Tests were performed for both architectures with the MEDIA DEV set. The best configuration is chosen with respect to the best results observed on the DEV set and applied for obtaining the TEST results. These results, in terms of error rate, precision and recall for concepts (C) and concept-value pairs (CV), are reported for the best configuration of each architecture in Table <ref>. It appears that the CRF architecture significantly outperforms NN-EDA, which showed minor improvements with respect to the baseline. In order to evaluate the impact of the use of confidence measures among the input features, we performed the experiments summarized in Table <ref>. As can be seen, the confidence measure provided by the MS-MLP architecture brings relevant information to reduce the CER and the CVER. Other versions of the two systems were considered by adding to the usual MEDIA concept labels two more output tags. During training, these tags replace the usual ones when the hypothesized word is erroneous. If the erroneous hypothesized word supports a concept, it is associated with the ERROR-C tag, and with ERROR-N otherwise. During evaluation, hypothesized ERROR-C and ERROR-N tags are replaced by null (the tag indicating that the word does not convey any MEDIA information) in order to perform the usual MEDIA evaluation protocol.
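One reading of this relabeling is that it overrides the training label of each misrecognized word according to whether the aligned reference word carries a concept; a minimal sketch along those lines, assuming a word-level alignment between ASR output and the reference annotation is already available (the alignment procedure and the exact labeling convention are assumptions here):

def relabel_with_error_tags(aligned_tokens):
    """aligned_tokens: list of (hyp_word, ref_word, concept_label) triples,
    where concept_label is the label of the aligned reference word ('null' if none)."""
    new_labels = []
    for hyp_word, ref_word, label in aligned_tokens:
        if hyp_word == ref_word:
            new_labels.append(label)                 # correctly recognized word
        elif label != 'null':
            new_labels.append('ERROR-C')             # ASR error on a concept-supporting word
        else:
            new_labels.append('ERROR-N')             # ASR error outside any concept
    return new_labels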
Results on TEST, obtained with the best configuration observed on DEV, are reported in Table <ref>. The results in Table <ref> are similar to those in Table <ref>, but we can notice some small differences. For instance, precision is now better, even if the CER is not reduced for CRF while it is for NN-EDA. Using these four SLU systems, which can be executed in parallel, it is worth trying to see whether improvements can be obtained by their combination, with weights estimated for optimal performance on the DEV set. The results are reported in Table <ref> and compared with the ROVER <cit.> combination applied to the six SLU systems described in <cit.>. The results show 0.6% and 0.6% absolute reductions for CER and CVER with respect to the best CRF architecture, and 4.5% and 2.8% with respect to the baseline. Considering that the best results on manual transcriptions are above 10% on the TEST set, one may conclude that, with the solutions presented in this paper, the contribution of ASR errors to the overall SLU errors is lower than the errors observed on manual transcriptions. A detailed analysis of the errors observed on the automatic and manual transcriptions shows large common error contributions for concepts such as the three different types of reference, connectors between domain-relevant entities, and proper names that can be values of different attributes. These concepts are expressed by confusable words whose disambiguation requires complex context relations that cannot be automatically characterized (at least with the available amount of training data) by CRFs or by the type of attention mechanisms used in NN-EDA. Considering the cases in which all four systems provided the same output (consensus) for each word, a 0.96 precision with 0.72 recall was observed on the TEST set. Lack of consensus in the DEV and TEST sets appears to correspond in most cases to mentions of only a few types of concepts. This is a very interesting result, since it suggests that further investigation of these particular cases is an important challenge for future work.
§ CONCLUSIONS
Two variations of two SLU architectures, respectively based on CRFs and NN-EDA, have been considered. Using the MEDIA corpus, they were compared with the CRF SLU considered as the baseline, which provided the best results among the seven different approaches reported in <cit.>. The main novelties of the proposed SLU architectures are the use, among others, of semantically relevant confidence and input features. The CRF architectures outperformed the NN-EDA architectures, with significant improvement over the baseline. Nevertheless, NN-EDA architectures appeared to be useful when combined with the CRF ones. The results show that the interaction between the ASR and SLU components is beneficial. Furthermore, all the architectures show that most of the errors concern concepts whose mentions are made of short confusable sequences of words that remain ambiguous even when they can be localized. These concept types are difficult to detect, even on manual transcriptions, indicating that the interpretation of the MEDIA corpus is particularly difficult. Thus, suggested directions for future work should consider new structured mechanisms of attention capable of selecting features of distant contexts in a conversation history. The objective is to identify a sufficient set of context features for disambiguating local concept mentions. | http://arxiv.org/abs/1705.09515v1 | {
"authors": [
"Edwin Simonnet",
"Sahar Ghannay",
"Nathalie Camelin",
"Yannick Estève",
"Renato De Mori"
],
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
],
"primary_category": "cs.CL",
"published": "20170526103424",
"title": "ASR error management for improving spoken language understanding"
} |
A note on sampling graphical Markov models
Prasad Tetali
December 30, 2023
School of Mathematics, Georgia Institute of Technology. Email: [email protected]. Supported in part by NSF Grant DMS-1344199.
School of Mathematics and School of Computer Science, Georgia Institute of Technology. Email: [email protected]. Supported in part by NSF Grant DMS-1407657.
We consider sampling and enumeration problems for Markov equivalence classes. We create and analyze a Markov chain for uniform random sampling of the DAGs inside a Markov equivalence class. Though the worst case is exponentially slow mixing, we find a condition on the Markov equivalence class for polynomial-time mixing. We also investigate the ratio of Markov equivalence classes to DAGs and a Markov chain of He, Jia, and Yu for random sampling of sparse Markov equivalence classes.
Keywords: graphical Markov model; MCMC algorithm; reversible Markov chain
§ INTRODUCTION
A Bayesian network or DAG model is a type of statistical model used to capture causal relationships in data. The model consists of a directed acyclic graph (DAG) and a set of (dependent) random variables, one variable assigned to each vertex. The DAG encodes conditional independence relations among the random variables. These models are used in areas ranging from computational biology to artificial intelligence <cit.>. However, the correct DAG for a system can only be inferred from data up to a condition called Markov equivalence, where all DAGs in a Markov equivalence class represent the same statistical model <cit.>. Model selection algorithms face a balance between dealing with the more complicated structure of Markov equivalence classes or encountering inefficiencies and constraints while using DAGs. Works (resulting in partial success) towards understanding Markov equivalence classes through counting and random sampling have considered the questions of enumeration of (1) the Markov equivalence classes on a given number of vertices <cit.>, (2) the DAGs comprising a fixed Markov equivalence class <cit.>, and (3) all Markov equivalence classes corresponding to a fixed underlying undirected graph <cit.>. In spite of much research, the topic of exact enumeration remains stubbornly open. In the present work, we consider the problems of random sampling for questions (1) and (2). The graphs in a Markov equivalence class are exactly those that share the same skeleton and immoralities <cit.>: the skeleton of a directed graph is the underlying undirected graph obtained by removing direction from all the edges; a v-structure (also termed an immorality) at c occurs among vertices a, b, c whenever the induced subgraph on these vertices has the two directed edges (a,c) and (b,c) but not (a,b) or (b,a). An essential graph is a graphical representation of a Markov equivalence class that uses both directed and undirected edges. Since all graphs in a Markov equivalence class share the same skeleton, they only differ in the direction of edges. An edge is directed in the essential graph of a Markov equivalence class if that edge is directed in the same direction in all the graphs in the equivalence class; otherwise it is undirected. The partially directed graphs (PDAGs) resulting from this construction are distinct.
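Since Markov equivalence is determined by the skeleton and the immoralities, checking whether two DAGs are Markov equivalent reduces to comparing these two invariants. A minimal sketch, with DAGs given as sets of directed edges over a common vertex set (names are illustrative):

def skeleton(edges):
    return {frozenset(e) for e in edges}

def immoralities(vertices, edges):
    """Return the set of v-structures a -> c <- b with a, b non-adjacent."""
    parents = {v: {a for (a, b) in edges if b == v} for v in vertices}
    skel = skeleton(edges)
    vs = set()
    for c in vertices:
        ps = sorted(parents[c])
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                a, b = ps[i], ps[j]
                if frozenset((a, b)) not in skel:
                    vs.add((frozenset((a, b)), c))
    return vs

def markov_equivalent(vertices, edges1, edges2):
    return (skeleton(edges1) == skeleton(edges2) and
            immoralities(vertices, edges1) == immoralities(vertices, edges2))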
A result of Andersson et al. <cit.> characterizes the PDAGs that are essential graphs by four conditions <cit.>: (1) no partially directed cycles (i.e., the graph is a chain graph); (2) the subgraph formed by taking only the undirected edges is chordal; (3) the graph in Figure <ref> does not occur as an induced subgraph; and (4) every directed edge is strongly protected, where a directed edge u → v is strongly protected if it occurs in one of the four induced subgraphs in Figure <ref>. The essential graphs for Markov equivalence classes containing a single DAG are the essential graphs with only directed edges. In one direction, this follows since if the class has one DAG then all the edges are directed consistently within the class. In the other direction, if there are two DAGs in the class, they have the same skeleton and can only differ in the direction of some edge; that edge would then be undirected in the essential graph. A PDAG with no undirected edges has fewer conditions to satisfy in order to be an essential graph: (1) it is a DAG, and (2) every edge u → v is protected by being in one of the three induced subgraphs (a), (b), (c). An edge u → v is protected in a PDAG with only directed edges if {w | w → u} ≠ {w | w ≠ u, w → v}. If there exists a w in the first set but not the second, then u, v, w form the induced subgraph in (b). If w is in the second set but not the first, then either u → w or not; in the first case this forms the induced subgraph in (c), and otherwise the induced subgraph of (a). This paper comprises three sections. Section <ref> investigates a Markov chain for uniform generation of the DAGs in a Markov equivalence class. It finds a class of graphs on which the associated Markov chain is slow mixing, but also a condition for fast mixing. The key barrier to fast mixing is large cliques with substantial intersection (roughly half). Section <ref> gives a structure theorem for understanding Markov equivalence classes in terms of posets and uses the structure theorem to explore the ratio of DAGs to Markov equivalence classes. Section <ref> relates these observations to a Markov chain for uniform generation of Markov equivalence classes by He, Jia, and Yu <cit.>. In particular, this part provides a simpler, shorter proof of the fact that the chain due to He et al. is ergodic. This section concludes with the construction of sparse PDAGs with small Hamming distance but large distance in the chain using moves with positive probability. This highlights the fact that an analysis of convergence to equilibrium of the chain is unlikely to be successful using straightforward canonical path arguments or coupling techniques. The related problem of counting the number of DAGs in a Markov equivalence class has been studied combinatorially in <cit.> and algorithmically in recent work of He and Yu <cit.>. The latter's algorithm is best suited to graphs with many vertices adjacent to all other vertices. Our Markov chain is fast mixing on many graphs that lack this feature, but the simplest example (see Proposition <ref>) of a graph on which our Markov chain is slow mixing is well suited to their algorithm. There are, however, many graphs that are ill suited for both, such as chains of half-overlapping large cliques.
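As a concrete handle on the protected-edge condition for PDAGs with only directed edges, which is used repeatedly below, a small check directly on the parent sets might look like the following sketch (same edge-set representation as in the previous snippet; the acyclicity of the input is assumed, not verified):

def is_protected(edges, u, v):
    """Check {w : w -> u} != {w : w != u, w -> v} for a directed edge u -> v."""
    parents_u = {a for (a, b) in edges if b == u}
    parents_v = {a for (a, b) in edges if b == v and a != u}
    return parents_u != parents_v

def is_essential_dag(edges):
    """A DAG (as a set of directed edges) is an essential graph with only
    directed edges iff every one of its edges is protected."""
    return all(is_protected(edges, u, v) for (u, v) in edges)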
§ EDGE FLIP RANDOM WALK ON CHORDAL GRAPHS
This section investigates the mixing time of a Markov chain designed to pick random samples from the DAGs forming the equivalence class corresponding to a specific essential graph. This involves choosing acyclic orientations for the undirected edges of the essential graph in such a way as to form no v-configurations. (Recall that a v-configuration is an induced subgraph on three vertices depicted in Figure <ref>(a).) This depends only on the undirected edges of the essential graph and can be done for each connected undirected component separately. The Markov chain is then on v-configuration-free acyclic orientations of a connected chordal undirected graph G = (V,E). A step in the Markov chain reverses the direction of a single edge so as to give another such orientation. Let H_G be the graph whose vertices are the orientations of G and whose edges are the transitions of the Markov chain. Note that H_G is an |E|-regular graph, and the Markov chain has uniform stationary distribution. When drawing H_G, we will suppress self-loops. Any v-configuration-free acyclic orientation of a connected graph G has a unique source. Let G be a connected graph and let G̃ be a v-configuration-free acyclic orientation of G. Suppose there are two sources v and w in G̃. G is connected, so there exists a minimal-length undirected path from v to w, (v, v_1, v_2, ..., v_k, w). Since v and w are sources, the edge between v and v_1 is directed v → v_1 and the edge between v_k and w is directed v_k ← w. There must be at least one point on this path where right-directed edges meet left-directed edges, v_i → v_i+1 ← v_i+2. Since G̃ is v-configuration-free, there must be an edge from v_i to v_i+2 in G. However, this gives a shorter path from v to w. Therefore, there can only be a single source in G̃. As a side note, the chordality condition on the undirected component of a PDAG exists because only chordal graphs admit acyclic v-configuration-free orientations. This means the absence of a chordality assumption in the above proposition is not meaningful, as the result is vacuous for non-chordal graphs. The unique source of a v-configuration-free acyclic orientation of a connected graph G determines the orientation of all edges with one endpoint closer to the source than the other: they are oriented away from the source. The proof follows by induction on the distance from the edge to the source. If the edge is incident to the source, i.e. at distance zero, then the edge must be oriented away from the source. Suppose this holds for all edges at distance at most d from the source. Let e = {v,w} be an edge at distance d+1 from the source with v closer to the source. Then there must be an edge f = {v,z} incident to v at distance d whose orientation is forced to be towards v. Moreover, there cannot be an edge from z to w since v is closer to the source. If e were oriented towards v, then z → v ← w would form a v-configuration. Since the orientation is v-configuration-free, e is oriented away from the source. If G = K_n, the v-configuration-free acyclic orientations are in bijection with the permutations of the n vertices. This follows from Proposition <ref> by successively choosing, from the remaining vertices, a source, which orients the edges adjacent to that source. Only the edges between two consecutive choices of sources can be flipped, so this Markov chain is the adjacent transposition walk on the symmetric group S_n. The adjacency graph H_G is the permutohedron. If G is a path, or indeed any tree, H_G is isomorphic to G. Since there is a unique path from any vertex to any other vertex, by Proposition <ref> the states are entirely determined by the unique source of the orientation. A move in the Markov chain moves the source to an adjacent vertex. These two cases are the extreme examples, and the structure for any G can be viewed by decomposing into pieces of these types.
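Operationally, one step of this edge-flip chain picks a uniformly random edge, reverses it, and keeps the move only if the result is still an acyclic, v-configuration-free orientation (otherwise it stays put, producing the self-loops mentioned above). The following is a minimal, unoptimized sketch that simply re-checks both conditions globally after each proposed flip; orientations are stored as sets of directed edges:

import random

def is_acyclic(vertices, arcs):
    # Kahn's algorithm: repeatedly strip vertices with no incoming arcs.
    indeg = {v: 0 for v in vertices}
    for (a, b) in arcs:
        indeg[b] += 1
    stack = [v for v in vertices if indeg[v] == 0]
    seen = 0
    while stack:
        v = stack.pop()
        seen += 1
        for (a, b) in arcs:
            if a == v:
                indeg[b] -= 1
                if indeg[b] == 0:
                    stack.append(b)
    return seen == len(indeg)

def has_v_configuration(vertices, arcs):
    skel = {frozenset(e) for e in arcs}
    for c in vertices:
        parents = [a for (a, b) in arcs if b == c]
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                if frozenset((parents[i], parents[j])) not in skel:
                    return True
    return False

def edge_flip_step(vertices, arcs):
    """One step of the edge-flip walk; arcs is a non-empty set of directed edges."""
    a, b = random.choice(sorted(arcs))
    proposal = (arcs - {(a, b)}) | {(b, a)}
    if is_acyclic(vertices, proposal) and not has_v_configuration(vertices, proposal):
        return proposal
    return arcs          # rejected flip: self-loop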
The following structure theorem for chordal graphs is the key (see Theorem 12.3.11 in Diestel <cit.>): a graph is chordal if and only if it has a tree-decomposition into complete parts. A tree-decomposition of a chordal graph G is a tree T whose vertices are the maximal cliques of G such that, for any vertices t_1, t_2, t_3 ∈ T, if t_2 is along the unique path from t_1 to t_3, then t_1 ∩ t_3 ⊆ t_2. A maximal clique t is a non-follower in an orientation of G when for all directed edges (v,w) with w ∈ t, v ∈ t as well. The structure of the graph H_G is closely related to the structure of the tree decomposition of G. The following characterization of H_G will be the key to analyzing the v-configuration-free edge-flip random walk on G, or equivalently the random walk on H_G. As the v-configuration-free edge-flip walk on a clique is the random walk on a permutohedron, the key to understanding H_G is to describe how the permutohedra for the maximal cliques in the tree decomposition of G occur in H_G. To get H_G, first, for each maximal clique t_i, its graph H_t_i, a permutohedron, is dilated by taking the Cartesian product with other permutohedra. Then these pieces are glued together by identifying faces of the polytopes to form H_G. Given a tree-decomposition T of G and a maximal clique t_i, for each other maximal clique t_j let s_j be the clique immediately before t_j on the unique path from t_i to t_j. The dilation of t_i will be D_i = ∏_{j ≠ i} H_{t_j ∩ s_j^c}, where the product is the Cartesian product. Then, for t_i and t_j adjacent in T, glue H_t_i × D_i and H_t_j × D_j along H_{t_i ∩ t_j} × D_i,j, where D_i,j = H_{t_i ∩ t_j^c} × D_i = H_{t_j ∩ t_i^c} × D_j. H_G is formed by first making H_t_i × D_i for each maximal clique t_i in G. For each pair of maximal cliques with non-empty intersection, their respective pieces are identified along the faces H_{t_i ∩ t_j} × D_i,j. As shown above in Proposition <ref>, each acyclic, v-configuration-free orientation of G has a unique source. This source, as in Proposition <ref>, determines the orientation of all edges with one endpoint closer to the source than the other. Fixing only a source and orienting these edges breaks the graph into disjoint components, with independent choices of how to orient the disconnected pieces. This gives rise to the decomposition. The maximal cliques containing the source remain in the same component. To orient the remaining edges, sources are recursively chosen until all maximal cliques are disconnected. The non-followers are the maximal cliques containing all recursive choices of sources up to the point when the maximal cliques become disconnected. An orientation is part of H_t_i × D_i when t_i is a non-follower in that orientation. The gluing comes from orientations in which multiple cliques are non-followers. H_{t_i ∩ t_j} × D_i,j consists of all the orientations obtained by choosing the first |t_i ∩ t_j| recursive sources in t_i ∩ t_j. Let C(G) be the number of maximal cliques in G. For an orientation v ∈ H_G, let M(v) be the number of non-following cliques in v. The degree in H_G of an orientation is |G| - C(G) + M(v) - 1. The minimal degree is |G| - C(G). The degree of the vertices in a component for a clique H_t_i is |t_i| - 1, the number of adjacent transpositions on S_|t_i|. Taking Cartesian products adds the degrees of the graphs involved, so the degree of a vertex in H_t_i × D_i is |G| - C(G). It is left to count how many edges extend outside of H_t_i × D_i. These components are glued together along the pairs t_i, t_j with t_i adjacent to t_j in T and t_i ∩ t_j ≠ ∅.
The number of overlapping edges between the pieces glued together is the degree of a vertex in H_{t_i ∩ t_j} × D_i,j, namely |G| - C(G) - 1. Each gluing thus increases the degree of the orientations involved by one. The number of gluings involving an orientation is one less than the number of non-following cliques. There exist graphs G for which the edge-flip random walk mixes exponentially slowly. For instance, when G is made up of two cliques of size 2n/3 sharing n/3 vertices, the mixing time satisfies t_mix ≥ 4^{n-1}. G has two maximal cliques t_1 and t_2, each of size 2n/3, with intersection s of size n/3. For each maximal clique, its component is C_i = H_K_{2n/3} × H_K_{n/3}, identified along the face F = H_K_{n/3} × H_K_{n/3} × H_K_{n/3}, following our construction of H_G above. The face F corresponds to orientations where both maximal cliques are non-followers. Let R be one of these components with the intersection removed, C_1 ∖ F, i.e. the orientations where one clique is the leader. Let Q(A,B) be the chance of moving from A to B in one step of the random walk started from the uniform stationary distribution π. The bottleneck ratio of the random walk on H_G is
Φ_* ≤ Φ(R) = Q(R,R^C)/π(R).
The Markov chain is the edge-flip random walk on G, so each state has |E| possible moves. The probability Q(R,R^C) is the total number of edges from R to R^C in H_G divided by |E| times the number of orientations of G. Each orientation of G in F comes from a recursive choice of sources where the first n/3 sources are from s, the intersection of the maximal cliques, followed by an independent recursive choice of sources in t_1 ∖ s and t_2 ∖ s. From such an orientation, there is a single edge into R, corresponding to flipping the edge between the source chosen last in s and the source chosen first in t_1 ∖ s. Therefore, the number of edges from R to R^C is |F| = (n/3)!^3. The probability of R under the stationary distribution is |R| = (2n/3)!(n/3)! - (n/3)!^3 divided by the number of orientations of G. By inclusion-exclusion, the number of orientations is |C_1| + |C_2| - |F| = 2(2n/3)!(n/3)! - (n/3)!^3. Hence
Q(R,R^C)/π(R) = (1/|E|) · (n/3)!^3 / ((2n/3)!(n/3)! - (n/3)!^3) = (1/|E|) · 1/(binom(2n/3, n/3) - 1).
Using Stirling's approximation, binom(2n/3, n/3) ≈ 4^n/√(π n). For n ≥ 4, 4^n ≤ √(n/π)·4^n - n, and Φ(R) ≤ 4^{-n}. By Theorem 7.3 of <cit.>, we have t_mix ≥ 1/(4Φ_*). This means that for this G, t_mix ≥ 4^{n-1}. We will use a decomposition theorem of Madras and Randall (see Theorem 1.1 in <cit.>) to get an upper bound on the mixing time. Given a reversible Markov chain P on Ω = ⋃_{i=1}^m A_i with stationary distribution π, they construct two types of Markov chains: for each i, a (restriction) Markov chain restricted to the subset A_i, and a (projection) Markov chain on m states a_1, a_2, ..., a_m, with a_i representing the set A_i. The mixing times of these auxiliary chains are used to bound the mixing time of the original chain. Let the Markov chain inside each set be P_[A_i](x,B) = P(x,B) + 1_{x ∈ B} P(x, A^c_i) for x ∈ A_i and B ⊂ A_i. To define the chain over the subsets, let Θ := max_{x ∈ Ω} |{i : x ∈ A_i}|. Define the chain over the covering to be
P_H(a_i,a_j) := π(A_i ∩ A_j)/(Θ π(A_i)).
In the preceding framework, we have
Gap(P) ≥ (1/Θ^2) Gap(P_H) min_{i=1,...,m} Gap(P_[A_i]).
An upper bound on the mixing time of this random walk can be found through an upper bound on the mixing of the Markov chain for each maximal clique and for a constructed biased random walk on the tree from the tree decomposition. The Markov chains for each maximal clique are the adjacent transposition walk, which is well understood.
The random walk on the tree will be studied with comparison techniques. The only impediment to rapid mixing is a quantity relating to the overlap between cliques along a path in the tree decomposition. Let
o_G := (∑_i |t_i|! |D_i|) / (min_{(j,k) ∈ T} |t_j ∩ t_k|! |D_j,k|).
For a graph G on n vertices, let o_G be defined as above. Let T be a tree decomposition, with Θ being the maximal number of overlapping maximal cliques in G, and let t_max = max_i |t_i|. Then the spectral gap of the random walk on H_G satisfies
Gap(H_G) ≥ (o_G Θ^3 (|G| - |T|) diam(T))^{-1} · 2(1 - cos(π/t_max)).
Using the decomposition theorem of Madras and Randall <cit.>, an upper bound on the mixing time of the random walk on H_G can be obtained by understanding the mixing of a random walk P_A_i on each component H_t_i × D_i and the mixing of a Markov chain on the tree from the tree decomposition of G into maximal cliques. The random walks on the pieces H_t_i × D_i are Cartesian products of the adjacent transposition walk on the symmetric group. The random walk on a Cartesian product is the product chain on the components. The spectral gap of the product Γ̃ = Γ_1 × ... × Γ_d, with the chance of moving in the jth chain being w_j and the spectral gap of the jth chain being γ_j, is
γ̃ = min_j w_j γ_j.
Here the spectral gap of the random walk on H_K_i is (2/(i-1))(1 - cos(π/i)) by a result of Bacher <cit.>. Note that the degree of the vertices in H_K_i (or Cartesian products thereof) is the same as the degree of K_i (or Cartesian products thereof), namely i-1. Therefore, the chance of making a move in a given component of the product chain is the size of that clique minus one, over the number of vertices minus the number of cliques. This gives that γ̃ is at least
Gap(P_A_i) = γ̃ ≥ (2/(|G| - |T|)) (1 - cos(π/t_max)).
The Markov chain on the tree decomposition into cliques has transition probabilities constructed as follows. Let Θ denote the maximum degree of the tree. For two cliques t_i, t_j adjacent in the tree,
P_T(t_i,t_j) = π(H_{t_i ∩ t_j} × D_i,j)/(Θ π(H_t_i × D_i)) = (Θ binom(|t_i|, |t_i ∩ t_j|))^{-1}.
It has stationary distribution π(t_i) = |t_i|! |D_i| z^{-1}, where z = ∑_i |t_i|! |D_i|. The spectral gap of this Markov chain on the maximal cliques will be bounded by comparison with a Markov chain on the complete graph whose vertices are the maximal cliques and which has the same stationary distribution. For all i, j let P̃_T(t_i,t_j) = π(t_j). This Markov chain mixes in one step. The comparison technique of Diaconis and Saloff-Coste <cit.> is in terms of the quantity A below, where γ_k,l is the unique path in T from t_k to t_l:
A = max_{(t_i,t_j) ∈ T} (1/(P_T(t_i,t_j) π(t_i))) ∑_{k,l : (t_i,t_j) ∈ γ_k,l} |γ_k,l| π(t_k) π(t_l).
To simplify this, note that |γ_k,l| ≤ diam(T) and ∑_k π(t_k) = 1, and hence ∑_{k,l} |γ_k,l| π(t_k) π(t_l) ≤ diam(T). Additionally,
P_T(t_i,t_j) π(t_i) = |D_i,j| |t_i ∩ t_j|! / (Θ ∑_i |t_i|! |D_i|) ≥ 1/(o_G Θ).
Therefore, A ≤ o_G Θ diam(T). This gives, for the largest non-trivial eigenvalue β_1 of P_T, β_1 ≤ 1 - 1/A, and
Gap(P_T) ≥ 1/A ≥ (o_G Θ diam(T))^{-1}.
The result of <cit.> gives Gap(P) ≥ (1/Θ^2) Gap(P_H) min_i Gap(P_A_i), which from the bounds above gives
Gap(P) ≥ (o_G Θ^3 (|G| - |T|) diam(T))^{-1} · 2(1 - cos(π/t_max)).
When all the maximal cliques in G have the same size t and all intersecting cliques intersect along s vertices, then o_G = |T| binom(t,s) and the spectral gap satisfies
Gap(P) ≥ (|T| binom(t,s) (|G| - |T|) Θ^3 diam(T))^{-1} · 2(1 - cos(π/t)).
The extreme cases of G being a complete graph or a tree fall within the purview of this corollary, and the bound on the spectral gap is within a constant factor of the truth.
§ POSETS AND ESSENTIAL GRAPHS
A poset, or partially ordered set, is a combinatorial object defined on a set X with a set of relations P. The relations form a partial order in that they are reflexive (for all x ∈ X, x ≤ x), antisymmetric (not both x ≤ y and y ≤ x if y ≠ x), and transitive (if x ≤ y and y ≤ z, then x ≤ z). A few terms from posets are useful in looking at essential graphs. We say y covers x if x < y with no z such that x < z < y. A chain in a poset is a set C ⊆ X all of whose elements are totally ordered by the poset. The height of a point in a poset is the size of the largest chain with that point as the highest point in the total order. Every DAG can be reduced to a poset by establishing the relations v < w if w is reachable from v. For each labeled poset P, it is straightforward to count the number of essential graphs with only directed edges that reduce to it, as well as the number of DAGs. We recall here that a relation x < y in a poset P is called a cover relation if there is no z such that x < z < y in P. The number of DAGs is
∑_P ∏_{v ∈ P} 2^{d(v) - c(v)},
where the sum is over labeled posets, d(v) is the number of poset elements below v in the order, and c(v) is the number of elements covered by v. One can construct the DAGs that have a specific reachability poset by looking at the down set and cover relations of each point of the poset. The points covered by a point v are obliged to have directed edges to v. All other elements of the down set have the option of having a directed edge to v. Surprisingly, satisfying the three conditions that protect the edges is straightforward in this setting. The number of Markov equivalence classes of size one is
∑_P ∏_{v ∈ P} (2^{d(v) - c(v)} - 1_{c(v) = 1}),
where the sum is over labeled posets, d(v) is the size of the down set of v, and c(v) is the number of elements covered by v. From a labeled poset P, we count all the essential graphs with only directed edges that reduce to it. The DAG must include all the cover relations in P. For each point v in the poset, a choice can be made whether to include a directed arrow to v from any u in the down set of v that is not covered by v. If d(v) is the size of the down set of v and c(v) is the number of elements covered by v, this gives 2^{d(v)-c(v)} ways to have edges come into v. These edges are protected as long as the condition {w | w → u} ≠ {w | w ≠ u, w → v} is met. If v covers at least two other points, then no other vertex coming from the down set of v can have an incoming edge from both of these points, since they must be incomparable in the poset. If v covers only a single point, then there is a unique way for {w | w → u} ≠ {w | w ≠ u, w → v} to fail: if u is the point that v covers and u and v have edges from exactly the same vertices. This gives 2^{d(v) - c(v)} - 1_{c(v) = 1} ways to have protected directed edges coming into v.
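Both displayed counts can be checked by brute force for very small n: enumerate labeled DAGs, group them by reachability poset, and test the protected-edge condition directly. The following sketch does this, reusing the is_acyclic and is_protected helpers from the earlier snippets; it is feasible only for tiny n, and all names are illustrative:

from itertools import combinations, product

def all_dags(n):
    """Enumerate all labeled DAGs on vertices 0..n-1 as frozensets of directed edges."""
    pairs = list(combinations(range(n), 2))
    for choice in product((None, 0, 1), repeat=len(pairs)):
        edges = set()
        for (a, b), c in zip(pairs, choice):
            if c == 0:
                edges.add((a, b))
            elif c == 1:
                edges.add((b, a))
        if is_acyclic(range(n), edges):   # helper from the edge-flip sketch
            yield frozenset(edges)

def reachability_poset(n, edges):
    """Strict order as a frozenset of pairs (u, v) with v reachable from u."""
    reach = {v: {b for (a, b) in edges if a == v} for v in range(n)}
    changed = True
    while changed:                        # naive transitive closure
        changed = False
        for v in range(n):
            new = set().union(*(reach[w] for w in reach[v])) if reach[v] else set()
            if not new <= reach[v]:
                reach[v] |= new
                changed = True
    return frozenset((u, v) for u in range(n) for v in reach[u])

def poset_count(n, order, essential=False):
    """Product over v of 2^(d(v)-c(v)), with the -1 correction when c(v) = 1 if essential."""
    below = {v: {u for (u, w) in order if w == v} for v in range(n)}
    covers = {v: {u for u in below[v]
                  if not any(u in below[z] and z in below[v] for z in range(n))}
              for v in range(n)}
    total = 1
    for v in range(n):
        term = 2 ** (len(below[v]) - len(covers[v]))
        if essential and len(covers[v]) == 1:
            term -= 1
        total *= term
    return total

n = 3                                     # the enumeration grows very quickly with n
groups = {}
for dag in all_dags(n):
    groups.setdefault(reachability_poset(n, dag), []).append(dag)
for order, dags in groups.items():
    assert len(dags) == poset_count(n, order)
    essential = [d for d in dags if all(is_protected(d, u, v) for (u, v) in d)]
    assert len(essential) == poset_count(n, order, essential=True)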
§.§ Exact enumeration and discussion
The ratio of essential graphs to DAGs, and of essential graphs with only directed edges to DAGs, is of interest for determining the limit of the increased efficiency of working with essential graphs versus DAGs. Using essentially the same observation as above, Steinsky <cit.> uses inclusion-exclusion to get a recursive formula for a_n, the number of essential graphs with only directed edges, otherwise known as essential DAGs. The inclusion-exclusion works by adding a set of i maximal vertices and connecting n - i lower vertices to them, arriving at the formula
a_n = ∑_{i=1}^n (-1)^{i+1} binom(n,i) (2^{n-i} - (n-i))^i a_{n-i}.
This is done in the style of Robinson's recursive formula <cit.> for the number of DAGs,
a'_n = ∑_{i=1}^n (-1)^{i+1} binom(n,i) 2^{i(n-i)} a'_{n-i}.
Steinsky has a second paper, published in 2013, on enumerating Markov equivalence classes <cit.>. While both formulas are exceedingly useful for computation, their alternating character makes asymptotic analysis a challenge. While the poset construction of a formula for a_n would be next to useless for computing a_n for large n, its all-positive structure makes it more amenable to informing the ratio a'_n/a_n. Steinsky computed the ratio for n ≤ 300. By n = 200, the first 45 decimal places appear to have stabilized, so a'_n/a_n = 13.6517978587767... The q-Pochhammer symbol appears to be playing a role in this constant, as shown below by looking at certain families of posets. The product of Steinsky's approximation with the q-Pochhammer symbol that appears, a'_n/a_n · (1/2,1/2)_{n-2}, is just under 4, at 3.94... The largest contribution of an unlabeled poset to the formulas above comes from the total orders, where there are n! ways to label the poset. This family also has the largest number of DAGs per reachability poset, at ∏_{i=2}^n 2^{i-2} = 2^{binom(n-1,2)}, since this maximizes the down sets and minimizes the covered points. However, none of these DAGs is an essential DAG, since none of their edges is protected. One of the next largest contributions to the DAG formula comes from the unlabeled posets that form a total order aside from two incomparable elements at the bottom. There are n!/2 ways to label such a poset and ∏_{i=4}^n 2^{i-2} ways to construct a DAG from it. Now, a large number of these are essential graphs: specifically, ∏_{i=4}^n (2^{i-2} - 1) of them. This proportion is
∏_{i=4}^n (2^{i-2} - 1)/2^{i-2} = ∏_{i=4}^n (1 - (1/2)^{i-2}) = 2 (1/2,1/2)_{n-2},
where (a,q)_n = ∏_{i=0}^{n-1} (1 - a q^i) is the q-Pochhammer symbol. This ratio is quite a bit off from the ratio of essential graphs with only directed edges to DAGs, but by combining the two classes together, one gets a lot closer. There are four times as many DAGs from linear orders as from our almost-linear orders. This gives, instead of 2(1/2,1/2)_{n-2}, a ratio of (2/5)(1/2,1/2)_{n-2}. Steinsky's computations would match up with a leading fraction of ≈ 1/3.94. There should be a natural way to link up posets that lead to essential graphs with those that do not, resulting in ratios that closely approximate the real ratio. For instance, all essential graphs with only directed edges have, in their reachability poset, at least two points of height 1 covered by each point of height 2 (and all points of height 2 cover at least two points of height 1). By taking any pair of these height 1 points and covering one with the other, one arrives at a poset not giving an essential graph. Moreover, for a poset with a height 2 point not covering at least two height 1 points, there is a clear map to essential graph posets: make those two points incomparable. Moreover, since the q-Pochhammer symbol that appears will change as the posets get wider, the constant may be easier to achieve than it first appears.
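The two recursions above are straightforward to evaluate exactly with integer arithmetic, which is how a ratio like the one quoted can be reproduced; a short sketch (with a'_0 = a_0 = 1 as the base cases):

from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def num_dags(n):
    """Robinson's recursion for the number of labeled DAGs on n vertices."""
    if n == 0:
        return 1
    return sum((-1) ** (i + 1) * comb(n, i) * 2 ** (i * (n - i)) * num_dags(n - i)
               for i in range(1, n + 1))

@lru_cache(maxsize=None)
def num_essential_dags(n):
    """Steinsky's recursion for essential graphs with only directed edges."""
    if n == 0:
        return 1
    return sum((-1) ** (i + 1) * comb(n, i) * (2 ** (n - i) - (n - i)) ** i
               * num_essential_dags(n - i) for i in range(1, n + 1))

# Small checks: num_dags(3) == 25 and num_essential_dags(3) == 4.
# The ratio num_dags(n) / num_essential_dags(n) approaches the constant 13.65...
# reported above as n grows.
print(num_dags(20) / num_essential_dags(20))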
To extend this work to understanding the ratio of DAGs to all essential graphs, one needs to understand how undirected edges can be added as well as the directed edges. The undirected edges are not as independently placeable as the directed edges, due to the restriction that they must form a chordal graph. Work has been done on counting the size of the equivalence class given a fixed set of undirected edges in <cit.>, and on sampling inside the equivalence class in Section <ref>. This is an attractive question in that it depends only on the undirected edges present and not on any directed edges. It may also be beneficial to study how the undirected edges can be placed on top of the directed edges in the poset model. Undirected edges can be added to an essential graph with only directed edges as follows. Edges are only allowed between vertices u, v with directed edges coming in from exactly the same other vertices. This prevents the disallowed induced subgraphs of Figure <ref>. Care must also be taken to keep the edges coming out of these vertices protected. Using the protected-edge condition of Figure <ref>(d), the edges coming into v remain protected after the addition of undirected edges if at least two of the vertices with edges to v have no undirected edge between them. On top of these protection conditions, the undirected edges must also form a chordal graph. It seems that one of the alternate definitions of chordality might be more tractable. For instance, this definition seems more naturally suited to recursive constructions: a graph is chordal if its vertices can be partitioned into three sets A, C, B with the induced subgraph on C complete, no edges with one end in A and the other in B, and A and B chordal.
§ A MARKOV CHAIN ON ESSENTIAL GRAPHS
Essential graphs are difficult to count or generate exactly beyond about 20 vertices. He, Jia, and Yu <cit.> created a reversible Markov chain on essential graphs that can be used to sample uniformly from the essential graphs on a large number of vertices. It is designed to function under a sparsity condition of at most constant density. In order to produce a non-lazy chain, the authors choose to only allow moves that give a different essential graph. This has the effect of giving a non-uniform stationary distribution and making the chain harder to describe. Here, a lazy version of their chain is used in order to make formulas more explicit. He, Jia, and Yu utilize six moves on the state space of essential graphs on n vertices. The first four are changes to one edge. A pair of vertices is selected uniformly at random. Then a choice is made to attempt to add a directed edge, remove a directed edge, add an undirected edge, or remove an undirected edge. Adding an edge is only allowed if an edge is not already present. Removing an edge is similarly only allowed if that type of edge is already present. Furthermore, the resulting graph need not be an essential graph itself, but must be extendable by directing edges to give a DAG. The graph is then modified to be the essential graph of that DAG. A move to add an edge is furthermore only permissible if that edge remains of the type added in the essential graph. These additional steps of correcting the graph to be essential are done in order to make the chain reversible. The final two moves are to pick three vertices a, b, c and add or remove an immorality. An immorality can only be added if there are undirected edges a - b and b - c with a and c not adjacent. As before, the graph is then corrected to be an essential graph if possible, or the move is rejected. To remove an immorality at b, there must be directed edges a → b and c → b with no edge from a to c.
The move undirects these edges, again with the restriction that the graph repairs to an essential graph with those edges still undirected. He, Jia, and Yu show the chain is connected by exhibiting an iterative procedure of removing all undirected edges followed by removing some directed edge or a v-configuration, thus prescribing a path in the chain from any essential graph to the empty graph. The poset relationship described above can be used to give an explicit order in which edges can be removed in order to get to the empty graph. Such a procedure could be of use in a mixing time analysis of the chain.
* Remove all undirected edges.
* Starting with the maximal elements at height ≥ 2 of the poset for the essential graph, remove all directed edges into the maximal elements one at a time, in an order that keeps them from having the same incoming edges as their children. If a maximal element has multiple children, this can be done by removing the edges from the children last (removing any children at height 1 first). If there is a single child, remove all incoming edges in common with that child first. Recursively continue with the new maximal elements.
* Continue this until only immoralities consisting of elements at heights 1 and 2 are left. Remove these by first removing all possible directed edges one at a time, then, for each immorality, turning the immorality into undirected edges and removing those.
Every move in the above procedure is a move in the Markov chain proposed by He et al. <cit.>. Of the four conditions for an essential graph, removing undirected edges could only violate the requirement that the undirected edges form a chordal graph. Several of the alternate definitions of chordal graphs give an order in which to remove all the undirected edges while leaving the graph chordal at each step. For instance, one definition of chordal is that every chordal graph can be broken up into three sets A, C, B with C non-empty and complete, no edges between A and B, and A and B chordal. If A and B are independent, removing the edges from A to C and from B to C, followed by the edges adjacent to each vertex in C, leaves the graph chordal at each step. Recursively removing the edges inside A and B, then those from A and B to C, then those inside C, gives a way to remove all edges leaving the graph chordal at each step. For an essential graph with only directed edges, by Proposition <ref> an edge u → v is protected if {c | c → u} ≠ {c | c ≠ u, c → v}. For a maximal element v of the poset, there are no edges coming out of v, so removing edges coming into v does not endanger the protectedness of any edges going into other vertices. It is enough to ensure that the edges coming into v are protected at each step of their removal. Let w_1, ..., w_r be the vertices covered by v in the poset. Each of these is in the set {c | c ≠ u, c → v} whenever it is not u, but is never in {c | c → u}. As long as there are at least two vertices covered by v, all edges into v are protected, and all but those two can be removed in any order. Leave one of the edges from the highest vertex, w_i, that v covers. Since v has height at least 3, w_i has height at least 2, and {c | c → w_i} ≠ ∅. Therefore the edge w_i → v is protected if it is the last edge into v left. Remove the other edge, then this edge. Suppose instead that v covers a single vertex w, with the height of v at least 3. Since the edge w → v is protected, {c | c → w} ≠ {c | c ≠ w, c → v}. The presence of a vertex in the latter set but not in the former would mean v covered at least two vertices.
Then the first set contains at least one vertex u not in the other set. Remove all edges to v other than w → v in any order, as this edge will remain protected by the presence of u and the presence of w → v protects them while they exist. Finally, remove w → v. After recursively removing all maximal vertices of height at least 3, the essential graph left has only vertices of height 1 with directed edges leading to vertices of height 2. Each vertex of height two has at least two incoming edges since forming an immoral is the only way such edges can be protected. Prune each immoral down to two edges by removing single edges. Take one immoral a → b ← c with no other edges going to b. Then turning it into a - b - c forms an essential graph because no u → v - w edges were formed, and all other directed edges remain protected in an immoral. Then these undirected edges can be removed since they are the only undirected edges in the graph. Repeat with the other immorals. Define the Hamming distance between two essential graphs to be the number of edges, including direction, in which the graphs differ. We can show that using the basic four moves, at most two moves are needed to move between any two essential graphs at Hamming distance one. The chain is not connected by just these moves. The moves that add or remove an immoral (without changing the direction of other edges), a → b ← c versus a - b - c, however, suffice by the above procedure. Note that this should also mean the chain would be connected and reversible if the “repairing” moves that do not give essential graphs were left out. There is a path in the chain of length at most 2 between any two essential graphs at Hamming distance one. The first statement breaks into three cases. Let a,b be the two vertices at which the graphs differ. Case I is that one graph has no edge a to b while the other has a directed edge, WLOG, a → b. Case II is that one graph has no edge and the other has an undirected edge a - b. Case III is that one graph has a directed edge a → b and the other b → a. It is not possible to differ at a directed versus undirected edge, since by definition an edge in an essential graph is undirected if there is a DAG in which that arrow goes in either direction. Case I: Since both no edge and the directed edge form essential graphs, the Markov chain moves of adding a directed edge and removing a directed edge are legal, and this gives a one step path in either direction under the chain. Case II: Similarly to Case I, since both no edge and the undirected edge form essential graphs, the Markov chain moves of adding an undirected edge and removing an undirected edge give a one step path between the two essential graphs. Case III: We will show removing the directed edge and adding it back in the other direction gives a two step path in the chain. This case carries the complexity, since it is necessary to show that the intermediate move gives an essential graph only differing at that edge. The graph G obtained by removing the directed edge must still be chordal and contain no partially directed cycles since no undirected edges were changed and removing edges cannot add a cycle. The third criterion for an essential graph, no x → y - z, could also not be introduced by removing an edge. It is left to show all the directed edges remain protected. Suppose a → b is used as one of the edges in one of the four induced subgraphs that protect the edge u → v.
The fourth induced subgraph cannot be the relevant one, since switching the direction of any of the directed edges gives a partially directed cycle. That means u → v is protected by one of the first three conditions, which by Proposition <ref> are equivalent to {c| c→ u}≠{c| c≠ u, c → v}. The edges a to b are only relevant if one of u or v is a or b. Without loss of generality we will check the cases u=a and v=a. Suppose u = a and v ≠ b. In the graph with a → b, the edge is not in either set and the sets are still not equal. That means with no edge a to b the sets are still not equal and the edge is protected. Suppose v=a and u ≠ b. In the graph with b → a, the edge is not counted in either set and the sets are still not equal. That means without the edge a to b, the edge u to v is still protected. Unfortunately, the Markov chain does not give short paths between all essential graphs with Hamming distance two. For instance, consider the graph in Figure <ref>, where I_n stands in for an independent graph on n vertices. First note that neither the graph with both a-b and c-d nor the graph with neither a-b nor c-d is an essential graph. The first, because in order to protect the directed edges, there must be two non-adjacent vertices. The second, because then the undirected edges would have a four-cycle and fail to be chordal. There are two general approaches to get around this, either add more directed edges going up to the independent graph or delete/add undirected edges. In order to protect all the directed edges with both a-b and c-d present, an extra directed edge would have to be added going to each of the vertices in the independent set. This requires at least n edges. Alternatively, one can try to avoid breaking the chordal condition while manipulating undirected edges. This could be done by either deleting an edge in the cycle formed by a,c,b,d or adding extra chords in the cycles formed between the independent graphs and a,b,c,d. In order to delete the edge a-c, one first has to delete n edges from a or c to the independent graph to avoid forming a four-cycle. This takes order n moves. In adding chords to avoid making a cycle, a chord has to be added to each vertex in an independent graph, so again order n new edges must be added. Together, this means there is no path between these graphs in o(n) moves of this Markov chain. Moreover, this example has 5n+4 vertices and 12n + 5 edges, well inside the sparsity condition He, Jia, and Yu consider, namely that the number of edges is at most a small constant multiple of the number of vertices. Acknowledgment. We thank Caroline Uhler and Liam Solus for several helpful discussions on the topic of enumeration of Markov equivalence graphs.
"authors": [
"Megan Bernstein",
"Prasad Tetali"
],
"categories": [
"math.PR",
"math.CO",
"60J10, 60J20, 05C81"
],
"primary_category": "math.PR",
"published": "20170526205626",
"title": "On sampling graphical Markov models"
} |
Ulm University. Given a low frequency sample of an infinitely divisible moving average random field {∫_^d f(x-t)Λ(dx); t ∈^d } with a known simple function f, we study the problem of nonparametric estimation of the Lévy characteristics of the independently scattered random measure Λ. We provide three methods, a simple plug-in approach, a method based on Fourier transforms and an approach involving decompositions with respect to L^2-orthonormal bases, which allow us to estimate the Lévy density of Λ. For these methods, bounds for the L^2-error are given. Their numerical performance is compared in a simulation study. Keywords: Infinitely divisible random measure; stationary random field; Lévy process; moving average; Lévy density; Fourier transform; Banach fixed–point theorem. § INTRODUCTION Let Λ be a stationary infinitely divisible independently scattered random measure with Lévy characteristics (a_0,b_0,v_0), where a_0 ∈, b_0 ≥ 0 and v_0 is a Lévy density. Let furthermore X = { X(t);t ∈^d } be a moving average infinitely divisible random field on ^d defined by X(t) = ∫_^d f(x-t) Λ(dx), t ∈^d, with Lévy characteristics (a_1,b_1,v_1), where f = ∑_k=1^n f_k 𝕀_Δ_k is a simple function. Suppose a sample (X(t_1),…,X(t_N)) from X is available. The problem studied in this paper is the nonparametric estimation of (a_0,b_0,v_0). For any simple function f with congruent sets Δ_k, X(t) in (<ref>) has the same distribution as a linear combination of i.i.d. infinitely divisible random variables. Therefore, existence and uniqueness of a characteristic triplet (a_0,b_0,v_0) with the property that a certain linear combination of independent random variables with the corresponding infinitely divisible distribution leads to a random variable with Lévy characteristics (a_1,b_1,v_1) becomes a characterization problem for such distributions. For certain distributions, namely the Poisson and the Gaussian one as well as a mixture of both, all possible distributions for the summands in the linear combination can be described (see e.g. <cit.>). The disadvantage of those characterization theorems is that they do not give any information about the involved parameters (expectation and variance of each summand) and so it is not possible to derive sufficient conditions for the existence of a solution in terms of the kernel function f. Therefore, to solve the inverse problem, we prefer to use concrete relations between the characteristic triplets of X and Λ (Section 3) given in terms of f. The recent preprint <cit.> covers the case d=1, estimating the Lévy density v_0 of the integrator Lévy process { L_s} of a moving average process X(t)=∫_ f(t-s) d L_s, t∈. It is assumed that L^2_0<∞. The estimate is based on the inversion of the Mellin transform of the second derivative of the cumulant of X(0). A uniform error bound as well as the consistency of the estimate are given. It is not assumed that f is simple; however, the main results are subject to a number of quite restrictive integrability assumptions on x^2 v_0(x) and f as well as mixing properties of { L_s} that are tricky to check. Additionally, the logarithmic convergence rate shown there (cf. <cit.>) is too slow. In our approach, we develop the ideas of <cit.> and use the Banach fixed–point theorem combined with a recursive iteration procedure (Theorem <ref>) to give sufficient conditions for the existence of a (unique) solution of our (generally speaking, ill–posed) inverse problem v_1 ↦ v_0.
We consider simple functions f since * in applications, f is mainly discretely sampled,* anyf∈ L^1(^d) can be approximated in the·_1–norm by a sequence of simple f^(m)∈ L^1(^d) (attaining a finite number of values) arbitrarily well, * this allows us to use relatively simple arguments in the proofs and to avoid complex assumptions that are not easy to verify,* the L^2–convergence rate of our estimates of v_0 to its true value is O(N^-1), cf. Corollaries <ref> and <ref>.The case of arbitrary integrable f is considered in our forthcoming paper <cit.>. This paper is organized as follows: Section <ref> gives an introduction to the theory of infinitely divisiblerandom measures and stochastic integrals as well as a short overview on m-dependent and ϕ-mixing random fields together with some moment inequalities (cf. Section <ref>). In Section <ref>, we describe the inverse problem in detail and give formulas for the relationship between the characteristics (a_0, b_0, v_0) and (a_1, b_1, v_1). In Section <ref>, we obtain sufficient conditions for the existence and uniqueness of the solution of the direct problem, i.e. we propose conditions under which the mapping (a_0, b_0, v_0) ↦ (a_1, b_1, v_1) is a bijection. It turns out that this holds true if either one of the coefficients f_1,…,f_n dominates all the others or one of them repeats often enough in some sense.Estimates for the characteristic Lévy triplet of X are given in Section <ref> for pure jump infinitely divisible random fields. Here we use the ideas of <cit.>, <cit.> and <cit.> originally designedto estimatethe Lévy density of Lévy processes. The main result of this section is the proof of the upper bound for the L^2-error of the proposed estimator without the assumption of independence ofobservations X(t_1),…,X(t_N). The estimation error remains of the same structure as in the Lévy process case if the random field X is assumed to be m-dependent or ϕ-mixing. For the ease of reading, long proofs of the results of this section are moved to Appendix. Section <ref> provides three estimation approaches for the density v_0 of Λ. The first method is a simple plug-in approach. The second one, the Fourier method, is based on the idea of estimating first the Fourier transform of v_0 followed by another plug-in procedure. The last method uses orthonormal bases in the Hilbert space L^2[-A,A], A>0, for a representation of the solution v_0 of the inverse problem. After approximating v_0 by cutting off its expansion, the coefficients can be estimated by solving a system of linear equations. For all our methods, we propose upper bounds for the L^2-estimation error.In the last section, the performance of the methods is compared by numerical simulations. § PRELIMINARIES Introduce some notation that will be used throughout this paper.By ℬ(^d) we denote the Borel σ-field on the d-dimensional Euclidean space ^d.The Lebesgue measure on ^d is denoted by ν_d. We briefly write ν_d(dx) = dxif we integrate w.r.t. ν_d on ^d. The collection of all bounded Borel sets in ^dwill be denoted by ℰ_0(^d).For any measurable space (M, ℳ, μ) we denote by L^α(M), 1 ≤α < ∞, the space of all ℳ|ℬ()-mesurable functions f:M → with ∫_M |f|^α (x) μ(dx) < ∞. Equipped with the norm ||·||_α = ( ∫_M |f|^α (x) μ(dx) )^1/α, L^α(M) becomes a Banach space and even in the case α=2 a Hilbert space with scalar product ⟨ f,g ⟩_α=∫_M f(x)g(x)μ(dx), for any f,g ∈ L^2(M). WithL^∞(M) (i.e. if α = ∞) we denote the space of all real valued bounded functions on M. 
In case (M, ℳ, μ) = (, ℬ(), ν_1) we denote by H^δ() = { f ∈ L^2():∫_ | f|^2 (x)(1+x^2)^δ dx <∞}the Sobolev space of order δ > 0 equipped with the Sobolev norm ||f||_H^δ = || f(·) (1+·^2)^δ/2||_2, whereis the Fourier transform on L^2(). For f ∈ L^1(), f is defined by f (x) = ∫_ e^itxf(t)dt, x ∈. If (M, ℳ, μ) = (, 2^, μ) or (M, ℳ, μ) = ({1,…,n}, 2^{1,…,n}, μ), n ∈, with μ being the counting measure, then we write as usual l^α(M) instead of L^α(M) and all integrals above become sums. Throughout the rest of this paper (Ω, 𝒜, P) denotes a probability space. Note that in this case L^α(Ω) is the space of all random variables with finite α-th moment as well as ||X||_α = ( 𝔼|X|^α)^1/α, if 1 ≤α < ∞ and ||X||_α = sup_ω∈Ω X(ω) if α = ∞,for any X ∈ L^α(Ω). For an arbitrary set A we introduce furthermore the notation(A) for its cardinality. Let f={x∈^d: f(x)≠ 0} be the support set of a function f: ^d→. Denote by (A)=sup{ x-y _∞: x,y∈ A } the diameter of a bounded set A⊂^d. §.§ ID Random Measures and FieldsRecall some definitions and give a brief overview of infinitely divisible (ID) random measures and fields. Let Λ = {Λ(A);A ∈ℰ_0(^d)} be an ID random measureon some probability space (Ω, 𝒜, P), i.e. a random measure such that * for each sequence (E_m)_m∈ of disjoint sets in ℰ_0(^d) it holds * Λ(∪_m=1^∞ E_m) = ∑_m=1^∞Λ(E_m) a.s., whenever ∪_m=1^∞ E_m ∈ℰ_0(^d), * (Λ(E_m))_m∈ is a sequence of independent random variables. * the random variable Λ(A) has an ID distribution for any choice of A ∈ℰ_0(^d). Due to the infinite divisibility of the random variable Λ(A), its characteristic function, which will be denoted by φ_Λ(A), has a Lévy-Khintchin representation which will assumed to be of the formφ_Λ(A)(t) = exp{ν_d(A) K(t) },A ∈ℰ_0(^d),withK(t) = ita_0 - 1/2 t^2 b_0 + ∫_( e^itx - 1 - itx 𝕀_[-1,1](x) )v_0(x)dx,where a_0 ∈, 0 ≤ b_0 < ∞ and v_0 is a Lévy density, i.e. ∫_min{1,x^2}v_0(x)dx < ∞. The triplet (a_0,b_0,v_0) will be referred to asLévy characteristic of Λ. It uniquely determines the distribution of the process Λ. Ageneral form for the characteristic function of any ID random measure can be found in <cit.>. The particular structure of the characteristic function in (<ref>) means that the random measureΛ is stationary with control measure λ: ℬ() → [0,∞) given byλ(A) = ν_d(A) [|a_0| + b_0 + ∫_min{1,x^2} v_0(x)dx ],A ∈ℰ_0(^d). Now we can define the stochastic integral w.r.t. the ID random measure Λ. * Let f = ∑_j=1^n x_j 𝕀_A_j be a real simple function on ^d, where A_j ∈ℰ_0(^d) are pairwise disjoint. Then for every A ∈ℬ(^d) we define ∫_Af(x)Λ(dx) = ∑_j=1^n x_j Λ(A ∩ A_j). * A measurable function f:(^d,ℬ(^d))→ (, ℬ()) is said to be Λ-integrable, if there exists a sequence (f^(m))_m ∈ℕ of simple functions as in 1. such that * f^(m)→ f, λ-a.e. * for every A ∈ℬ(^d), the sequence ( ∫_A f^(m)(x)Λ(dx) ) _m ∈ℕ converges in probability as m →∞. In this case we set ∫_A f(x) Λ(dx) = _m→∞∫_A f^(m)(x)Λ(dx). A useful characterization of Λ-integrability is given in <cit.>. Now let {f(t - ·);t∈^d} be a family of Λ-integrable functions induced by the Borel measurable map f: ^d →. Then we define the ID moving average random field X = {X(t);t ∈^d} byX(t) = ∫_^d f(t-x)Λ(dx),t ∈^d.A random field is called ID if its finite dimensional distributions are ID. The random field X defined in (<ref>) is stationary and ID and the characteristicfunction of φ_X(0) of X(0) is given byφ_X(0)(u) = exp{∫_^d K(uf(s))ds },with K given in (<ref>). 
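For illustration, the exponent K and the characteristic function φ_Λ(A)(t) = exp{ν_d(A) K(t)} can be evaluated numerically for a given triplet (a_0,b_0,v_0). The following minimal Python sketch does this by simple numerical integration; the Gaussian choice of v_0 (integrable, so that Λ has compound Poisson marginals), the grid and all function names are our own illustrative choices and not part of the model.

import numpy as np

def levy_exponent(t, a0, b0, v0, x):
    # K(t) = i*t*a0 - t^2*b0/2 + int (e^{itx} - 1 - i*t*x*1_{[-1,1]}(x)) v0(x) dx,
    # with the integral approximated by a Riemann sum on the uniform grid x
    integrand = (np.exp(1j * t * x) - 1.0 - 1j * t * x * (np.abs(x) <= 1.0)) * v0(x)
    return 1j * t * a0 - 0.5 * b0 * t ** 2 + integrand.sum() * (x[1] - x[0])

a0, b0 = 0.0, 0.0                                            # purely non-Gaussian, no drift
v0 = lambda y: np.exp(-y ** 2 / 2.0) / np.sqrt(2.0 * np.pi)  # integrable Levy density
x = np.linspace(-10.0, 10.0, 4001)
nu_A = 1.0                                                   # Lebesgue measure of the set A
phi = [np.exp(nu_A * levy_exponent(t, a0, b0, v0, x)) for t in np.linspace(-5.0, 5.0, 11)]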
It is easy to see that∫_^d K(uf(s)) ds = i u a_1 - 1/2u^2 b_1 + ∫_ (e^iux-1-iux𝕀_[-1,1](x))v_1(x) dx with a_1 = ∫_ℝ^dU(f(s))ds, b_1 = b_0 ∫_^d f^2(s) ds v_1(x) =∫_S1/|f(s)|v_0( x/f(s)) ds, where a_1 ∈, b_1 ≥ 0, v_1 is the Lévy density of X(0), S =(f) = { s ∈^d:f(s) ≠ 0 } denotes the support of f and the function U is defined viaU(u) = u ( a_0 + ∫_ℝ x [ 𝕀_[-1,1](ux)- 𝕀_[-1,1](x) ] v_0(x)dx ). The triplet (a_1,b_1,v_1) is again referred to as Lévy characteristic (of X(0)) and determines the distribution ofX(0) uniquely. Note that due to Λ-integrability of f all integrals above are finite. This immediately implies that f ∈ L^1(^d) ∩ L^2(^d).For details on the theory of infinitely divisible measures and fields with spectral representation as well as proofs for the above stated facts we refer the interested reader to <cit.>.§.§ m-Dependent and ϕ-Mixing Random FieldsA random field X = {X(t), t ∈ T}, T⊆^d defined on (Ω, 𝒜, P) is called m if for some m∈ and any finite subsets U and Vof T the random vectors (X(u))_u∈ U and (X(v))_v∈ V are independent, whenever||u-v||_∞ = max_1≤ i ≤ d|u_i-v_i|> m,for all u=(u_1,…, u_d)^⊤∈ U and v = (v_1,…, v_d)^⊤∈ V. Note that a random field X as in (<ref>) is m-dependent, if the support S of f is bounded with m ≥(S). Besides, we define the notion of ϕ-mixing random fields. The mixing coefficient ϕ is defined as follows. For any U ⊂ T, let _U = σ(X(t), t∈ U) be the σ–field generated by random variables X(t), t∈ U. Let furthermore 𝒰 and 𝒱 be twosub-σ-fields of 𝒜. Defineϕ(𝒰,𝒱) := sup{|P(V|U)-P(V)|: V ∈𝒱,U ∈𝒰,P(U)≠ 0 }and for k,l,r ∈ ϕ_k,l(r) := sup{ϕ(_Γ_1,_Γ_2): card(Γ_1)≤ k, card(Γ_2)≤ l,d(Γ_1,Γ_2)≥ r},where d(Γ_1,Γ_2) := min{||i-j||_∞: i∈Γ_1, j∈Γ_2} for Γ_1, Γ_2 ⊂ T.A random field X = {X(t), t∈ T} on (Ω, 𝒜, P) is called ϕ-mixingor uniform mixing ifr→∞lim ϕ_k,l(r) = 0for any k,l ∈ℕ. Equation (<ref>) is called ϕ-mixing condition, see e.g. <cit.> for more details on mixing.§.§ Moment and Exponential Inequalities for Random FieldsIn the literature, one can find many moment and exponential inequalities for sums of independent and identically distributed random variables, e.g., the classical Rosenthal inequality<cit.>or the Bernstein inequality <cit.>.Similar inequalities hold true for random fields. For i∈^d define the set V_i^1 = {j∈^d:j <_lex i}, where <_lex denotes the lexicographic order. Let V_i^k = V_i^1∩{j∈^d:||i-j||_∞≥ k} for k≥ 2. For f(X(t))∈ L^1(Ω) set for k∈_k[f(X(t))] := [f(X(t))|_V_t^k].Figure <ref> shows the sets V_t^1 and V_t^k for some t=(t_1,t_2)∈^2. The following two results can be found in <cit.>. Let X = {X(t), t∈^d} be a centered and square-integrable random field. Let U ⊂^d be a finite subset. Then for any p≥ 2 it holds (|∑_t∈ UX(t)|^p)^1/p≤(2p∑_t∈ Ub_t,p/2(X))^1/2, where b_t,α(X) = X(t)^2_α + ∑_k∈ V_t^1X(k)_k-t_∞[X(t)]_α, for t∈ U and for any α≥ 1. Let X = {X(t), t∈^d} be a field of bounded and centered random variables. Set b = ∑_t∈ U b_t,∞(X). Then for any positive and real x it holds P (|∑_t∈ UX(t)|>x)≤exp{1/e-x^2/4eb}.Note that Theorem <ref> and Theorem <ref> are extensions of Burkholder's <cit.> and Azuma's <cit.> inequality for martingales. The next theorem <cit.> states a Rosenthal-type inequality for ϕ-mixing random fields. Let X= {X(t), t∈^d} be a random field. For p≥ 2 let c be the smallest even integer such that c≥ p. Assume ∑_r=1^∞ (r+1)^d(c-u+1)-1[ϕ_u,v(r)]^1/c<∞ for all u,v ∈ with u+v ≤ c, u,v≥ 2. Let U be a finite subset of ^d. 
If X(t) belongs to L^p(Ω) and is centered for all t∈ U, then there exists a positive constant C that depends on p and on the mixing coefficient ϕ_u,v(r) of X(t) such that |∑_t∈ UX(t)|^p ≤ C·max{∑_t∈ U|X(t)|^p,(∑_t∈ U|X(t)|^2)^p/2}. Additionally, the following result can be found in <cit.>. Let X = {X(t), t∈^d} be a strictly stationary field of bounded and centered random variables. Take h≥‖ X(0)‖_∞ and set B(ϕ) = ∑_j∈^d\ 0ϕ_∞,1(|j|)<∞. For any a_t ∈ [-1,1], t ∈^d set A(U):=∑_t∈ U|a_t| for U⊂^d. For any positive real x we have P(|∑_t∈ Ua_tX(t)|>x)≤exp{1/e-x^2/4(1+B(ϕ))A(U)eh^2}.§ INVERSE PROBLEM In this section, we give a description of the inverse problem treated in this paper. Let Λ = {Λ(A),A ∈ℰ_0(^d) } be a homogeneous ID random measure with Lévy characteristics (a_0,b_0,v_0). Consider f = ∑_k=1^n f_k 𝕀_Δ_k to be a simple function, where f_k ∈\{0} and Δ_k ∈ℰ_0(^d) pairwise disjoint, k=1,… ,n.Assume furthermore X = {X(t),t ∈^d} to be an ID moving average random field of the formX(t) = ∫_^d f(t-x) Λ(dx)= ∑_k=1^n f_k Λ(t-Δ_k) ,t∈^d,where t-A = {t-x:x ∈ A}⊂^d,t∈^d for an arbitrary set A. Given N ∈ℕ observations X(t_1),…,X(t_N) at points t_1,…,t_N ∈^d of the random field X, estimate the Lévy triplet (a_0,b_0,v_0) of the ID random measure Λ.Formulas (<ref>) and (<ref>) then become a_1= ∑_k=1^n U(f_k)ν_d(Δ_k),b_1 = b_0 ∑_k=1^n f_k^2 ν_d(Δ_k),v_1(x)= ∑_k=1^n ν_d(Δ_k)/|f_k|v_0 ( x/f_k) ,x ∈\{0},with U defined in (<ref>). For known a_1, b_1, v_0, theabove equations are easily solvable w.r.t. a_0 and b_0, thus providing an estimation approach for a_0 and b_0. So, given v_1, the main point is now to find a solution v_0 of the last equation. In the next section, we give some sufficient conditions under which a solution exists and is unique. § EXISTENCE AND UNIQUENESS OF A SOLUTION FOR V_0 In the following, we assume w.l.o.g. that ν_d(Δ_k)=1 for all k=1,…,n.Typically it is common to estimate x^n v_1(x) rather than v_1(x) itself, since many of the estimators for Lévy densities are based on derivatives of the Fourier transform (in the context of Lévy processes,see e.g. <cit.>). For this purpose let h:ℝ→ℝ be a measurable function such thatmin{ 1,·^2 } g(·) / h(·)∈ L^1()g ∈ L^2(), s(y)=sup_x {|h(x)| / |h(y x)|)} < ∞y≠ 0. A sufficient condition for (<ref>) to hold is∫_min{ 1,x^4 }/h^2(x) dx < ∞. Indeed, the Cauchy-Schwarz inequality yields∫_min{ 1,x^2 }| g(x)/h(x)| dx ≤( ∫_min{ 1,x^4 }/h^2(x) dx )^1/2 ||g||_2 < ∞.Examples of functions h satisfying (<ref>)–(<ref>) are h(x)=1, h(x)=|x|^β, β∈ (1/2,5/2) and h(x)=x^β, β=1,2. Consider the modified equation(h v_1)(x) = ∑_k=1^n 1/|f_k|h(x)/h(x/f_k)(h v_0)(x/f_k).It is understood in L^2()-sense, where it is assumed that g^(h)_0 = hv_0 and g^(h)_1 = hv_1 are both in L^2(). Let Q = {1 ≤ k ≤ n:f_k = f_1} be the set of all indices of coefficients f_k that coincide with f_1. Denote by n_1 = card(Q) its cardinality.Define s_k=s(f_1/f_k),k=1,…,n.The following theorem states conditions, under which equation (<ref>) has a unique solution for fixed g^(h)_1 ∈ L^2().Let a function h: → be given as above.Then equation (<ref>) has a unique solution g^(h)_0 ∈ L^2() for any g^(h)_1 ∈ L^2() ife(f,h)=1/n_1∑_k:f_k ≠ f_1s_k ·(|f_1|/|f_k| )^1/2 < 1.The solution is given by the formulag^(h)_0(·) = |f_1|/n_1h(·)/h(f_1·)g^(h)_1(f_1 ·) + ∑_j=1^∞ (-1)^j∑_i_1: f_i_1≠ f_1…∑_i_j: f_i_j≠ f_1( |f_1|/n_1 ) ^j+1/|f_i_1… f_i_j|h(·)/h ( f_1^j+1/f_i_1… f_i_j·) g^(h)_1( f_1^j+1/f_i_1… f_i_j·). Let g^(h)_1 ∈ L^2(). 
Define the operator φ_g^(h)_1:L^2() → L^2() byφ_g^(h)_1(r) = |f_1|/n_1h(·)/h(f_1 ·)g^(h)_1(f_1 ·)- ∑_k: f_k ≠ f_1|f_1|/n_1|f_k|h(·)/h(f_1/f_k·)r(f_1/f_k·)Then formula (<ref>) yields a fixed point of φ_g^(h)_1, i.e., is a solution of equation g^(h)_0 = φ_g^(h)_1(g^(h)_0). It is straight forward to see that for any functions u_1,u_2∈ L^2() it holds||φ_g^(h)_1(u_1)-φ_g^(h)_1(u_2)||_2≤ e(f,h) ||u_1-u_2||_2,i.e. φ_g^(h)_1 is a contraction. By Banach fixed-point theorem there exists a unique solution g^(h)_0 ∈ L^2() to the equation (<ref>) which shows the first part of the theorem. Relation (<ref>) can easily be obtained by iterating equation (<ref>)w.r.t. g^(h)_0. Note that the choice of f_1 in this setting is arbitrary. The statement of Theorem <ref> does not depend on a certain order of the coefficients f_1,…,f_n. In particular, this means that f_1 in the definitions of Q and n_1 can be replaced by any other coefficient f_j_0, j_0 ∈{2,…,n}. Consequently, substituting f_1 by f_j_0 in Theorem <ref> leads to the same solution g_0^(h). Indeed, let f_j, j ≠ 1 be any other coefficient that fulfills the conditions of Theorem <ref>, and let g̅_0^(h) be the corresponding solution of (<ref>). Then 0 = ∑_k=1^n 1/|f_k|h(x)/h(x/f_k) (g_0^(h) - g̅_0^(h))(x/f_k) Due to Theorem <ref>, this equation has a unique solution. Since 0 is a solution it thus follows that g_0^(h) - g̅_0^(h) = 0 (in L^2()-sense). Theorem <ref> gives sufficient conditions for the existence and uniqueness of a solution (<ref>) of equation (<ref>). If condition (<ref>) fails to hold, no solution as well as infinitely many solutions of (<ref>) are possible. One can easily construct corresponding examples illustrating that. Consider e.g. n=2, f_1 = 1 and f_2 = -1. Now choose h to be any odd function satisfying (<ref>)-(<ref>). Clearly condition (<ref>) is not fulfilled. Then (<ref>) becomes g_1^(h)(x) = g_0^(h)(x) + h(x)/h(-x)g_0^(h)(-x) = g_0^(h)(x) - g_0^(h)(-x). * Let g_1^(h)∈ L^2() be any even function, g^(h)_1 ≠ 0 a.e. Then (<ref>) has no solution since its right–hand side is odd. * If, on the other hand, g_1^(h)(x) = 0 a.e. then any even L^2-function g_0^(h) is a solution of (<ref>). Note that condition (<ref>) ensures that h(·)g_1^(h)(f_1·) / h(f_1·) ∈ L^2() for any g_1^(h)∈ L^2(). This condition is necessary. Consider e.g. g_1^(h) (x)= e^-|x|/2, h(x) = e^|x|, x ∈, as well as f_1 = f_2 = f_3 = 1/4, f_4 = 1/16. Then, except for (<ref>), all conditions of Theorem <ref> are fulfilled, but h(·)g_1^(h)(f_1·) / h(f_1·)∉L^2() in this case. Thus (<ref>) cannot be an L^2-solution.Condition (<ref>) is not necessary for the existence and uniqueness of a solution of equation (<ref>). As a counterexample, considern=3, f_1 = e^α, f_2 = e^2α, f_3 = e^3α, and h(x) = x. If 2log( -1 + √(5)/2)≤α≤ 2log( 1 + √(5)/2) then none of the coefficients fulfills (<ref>). In our paper <cit.> we prove necessary and sufficient conditions for existence and uniqueness of a solution of integral equation (<ref>). It can be shown that f = ∑_k=1^3 e^kα𝕀_Δ_k satisfies those conditions and hence there is a unique solution of (<ref>)for any g_1^(h)∈ L^2(). Condition (<ref>) means that one of the coefficients (here f_1) dominates all others either in its magnitude |f_1| or in its frequency n_1. To illustrate this, consider any power function h(x) = |x|^β with β∈ (1/2,5/2) and |x|^β v_1(x) ∈ L^2(). Then s_k =(|f_k|/|f_1|)^β, k=1,…,n and the equation is solvable w.r.t. 
|x|^β v_0(x) if1/n_1∑_k: f_k ≠ f_1( |f_k|/|f_1|)^β - 1/2 < 1.In particular, if n_1=1 this means that |f_1| > max{|f_2|,…,|f_n|}. If h is strictly positive and super-homogeneous of degree α,i.e. h(cx) ≥ c^α h(x),x∈ℝfor all c ≥ 0 and some α > 0,then condition (<ref>) is fulfilled if all the coefficients f_k have the same sign. Then (<ref>) holds if 1/n_1∑_k: f_k ≠ f_1( f_k/f_1)^α - 1/2 < 1. § ESTIMATION OF G^(H)_1 FOR PURE JUMP ID RANDOM FIELDS Modern statistical literature contains quite a number of methods to estimate the Lévy density v_1 of X(0) if d=1, i.e., X is a Lévy process, see <cit.>,<cit.> and references therein. They range from moment fitting and maximum likelihood ratio to inverse Fourier methods based on the empirical characterstic function of X(0). For simplicity, one often assumes that the drift and the Gaussian part of X(0) vanish, thus letting X be a pure jump Lévy process.In the recent preprint <cit.>, the problem of estimation of the Lévy measure of X(0)was solved for compound Poisson Lévy processesX using variational analysis on the cone of measures and the steepest descent method of minimizing of a certain risk functional implemented for the discrete (atomic) measures. The resulting estimate of v_1 can be obtained out of these measures by smoothing.For all our estimation approaches in the next section, either estimators for g^(h)_1 or at least for its Fourier transform [g^(h)_1] are required to proceed with the estimation of v_0. Therefore we adopted an estimation procedure from <cit.> for pure jump Lévy processes toestimate v_1. The main difference to Lévy processes is in our case the assumption of independent increments which obviously is not given for random fields in arbitrary dimension d. Nevertheless,assuming X to be m-dependent orϕ-mixing allows us to use the same ideas for the estimation of g^(h)_1. Consider a stationary random field X as in (<ref>) with characteristic function φ_X(0)(u) given byψ(u) := φ_X(0)(u) = 𝔼 e^iuX(0) = exp{∫_( e^iux- 1 )v_1(x)dx}.Note that its logarithm coincides with formula (<ref>) by taking a_1 = ∫_-1^1 x v_1(x) dx and b_1 = 0. Under the additional assumption ∫_ |x| v_1(x) dx < ∞ it holdsψ^' (u) = i ψ(u) ∫_ e^iux x v_1(x)dx = i ψ(u) [xv_1](u),that is equivalent to [g_1](u) = -i ψ^'(u)/ψ(u),where g_1(x) := g^(h)_1(x) = xv_1(x) (taking h(x) = x) and [g_1] denotes the Fourier transform of g_1.Now let X be discretely observed ona regular grid Δℤ^d with mesh size Δ > 0, i.e. we consider the random field Y = {Y_j ;j ∈^d}, whereY_j = X(Δ j), Δ j = (Δ j_1, …, Δ j_d),j = (j_1,…,j_d) ∈ℤ^d.For a finite nonempty set W ⊂ℤ^d with cardinality N = |W| let (Y_j)_j ∈ W be a sample from Y. By taking the empirical counterpartsψ̂(u) = 1/N∑_j ∈ W e^i u Y_j,θ̂(u) = 1/N∑_j ∈ WY_j e^i u Y_j,of ψ(u) and θ(u):=ψ^'(u) on the right–hand side of (<ref>) an estimator for [g_1] can be defined as[g_1](u) =-i θ̂(u)/ψ̃(u),where 1/ψ̃(u) := 1/ψ̂(u)𝕀{|ψ̂(u)| > N^-1/2}.The indicator function on the right hand side of (<ref>) ensures the stability of the estimator for small values of |ψ̂(u)|. Based on this idea Comte and Genon-Catalot <cit.> provided the estimator ĝ_1,l(x) = 1/2 π∫_-π l^ π l e^-ixu[g_1](u)dufor g_1. We make the following assumptions: for a k∈ (H1) g_1∈ L^1()∩ L^2() (H2)_k ∫_ |x|^k-1|g_1(x)|dx < ∞ (H3) ∃ c_ψ,C_ψ > 0 and β≥ 0 such that for all x ∈ c_ψ (1+x^2)^-β/2≤ |ψ(x)| ≤ C_ψ (1+x^2)^-β/2 (H4) g_1 ∈ H^β() where β>0 is as in(H3). Assumptions (H1)–(H2)_k are moment conditions for X(0). 
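Before discussing the role of assumptions (H3) and (H4), we sketch how the estimator ĝ_1,l defined above can be computed from a sample. The following Python code is only a schematic implementation (the number of quadrature nodes, the Riemann-sum discretization of the inverse Fourier integral and all function names are our own choices): it forms ψ̂ and θ̂, the stabilised ratio -iθ̂/ψ̃ and the truncated inverse Fourier integral over [-πl, πl].

import numpy as np

def estimate_g1(Y, x_eval, l, n_u=501):
    # empirical characteristic function psi_hat(u) and its weighted version theta_hat(u)
    Y = np.asarray(Y, dtype=float).ravel()
    N = Y.size
    u = np.linspace(-np.pi * l, np.pi * l, n_u)
    E = np.exp(1j * np.outer(u, Y))                  # shape (n_u, N)
    psi_hat = E.mean(axis=1)
    theta_hat = (Y * E).mean(axis=1)
    # 1/psi_tilde = (1/psi_hat) * 1{|psi_hat| > N^(-1/2)}
    keep = np.abs(psi_hat) > N ** (-0.5)
    F_g1 = np.where(keep, -1j * theta_hat / np.where(keep, psi_hat, 1.0), 0.0)
    # g1_hat_l(x) = (2*pi)^(-1) * int_{-pi l}^{pi l} e^{-i x u} F_g1(u) du (Riemann sum)
    vals = (np.exp(-1j * np.outer(x_eval, u)) * F_g1).sum(axis=1) * (u[1] - u[0])
    return np.real(vals) / (2.0 * np.pi)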
Assumptions(H3)– (H4) are used to compute L^2–error bounds andrates of convergence of Lévy density estimates, cf. <cit.>. For the random field Y we defineξ_t^(1)(u) = Y_tcos(uY_t)-(Y_0cos(uY_0)),ξ_t^(2)(u) = Y_tsin(uY_t)-( Y_0sin(uY_0)),ξ̃_t^(1)(u) = cos(uY_t)-cos(uY_0),ξ̃_t^(2)(u) = sin(uY_t)-sin(uY_0),where u ∈ℝ, t∈^d. Under condition (H2)_2, it holds X^2(0)<∞ and hence (ξ_t^(i)(u))^2<∞,(ξ̃_t^(i)(u))^2<∞ for i=1,2, t∈^d and u∈. Introduce the notation‖ξ‖_·=( ‖ξ‖_2^2 )^1/2 for any random function ξ:Ω×→ s.t. ξ∈ L^2(Ω×).The following L^2-error bounds for ĝ_1,l will be proven in Appendix. Assume that (H1), (H2)_4 hold and that we observe the strictly stationary random field Y = {Y_t, t∈^d}. Further assume that either (i) the field Y is m-dependent or (ii) the random field Y is ϕ-mixing such that equations (<ref>)–(<ref>) hold. Then for all l ∈ ‖ g_1-ĝ_1,l‖_·^2≤‖ g_1-g_1,l‖_2^2+K/N(√(|Y_0|^4) + ‖ g_1 ‖_1^2 ) ∫_-π l^π ldx/|ψ(x)|^2, where K>0 is a constant, g_1,l is given by g_1,l(x) = 1/2π∫_-π l^π l e^-iuxθ(u)/ψ(u)du for x∈, and N∈ is the sample size.Notice that random fields (<ref>) are m–dependent with m= ( f) since a simple function f has a compact support. Introduce the notation L:=g_1 _H^β^2.The following corollary is an immediate consequence of Theorem <ref>. If additionally (H3) and (H4) hold then the bound in Theorem <ref> can be improved to ‖ g_1-ĝ_1,l‖_·^2 ≤‖ g_1-g_1,l‖_2^2+K̃/N(1+√(|Y_0|^4)∫_-π l^π ldx/|ψ(x)|^2), where K̃>0 is constant. Under the assumptions of Corollary <ref> it holds ‖ g_1-ĝ_1,l‖_·^2≤L/(1+(π l)^2)^β +K̅/Nl (1+(π l)^2)^β, whereK̅=2π K c_ψ(√(|Y_0|^4)+g_1 _1^2 ). The upper bound (<ref>) allows to choose the cut–off parameter l>0 optimally by minimizing the right–hand side expression in (<ref>) numerically. Choosing N,l→ +∞ such that l^1+2β/N→ 0 yields the L^2–consistency of the estimate ĝ_1,l. § ESTIMATION OF THE LÉVY DENSITYV_0 In the following Section three different estimation approaches will be discussed.The plug-in and the Fourier method are both based on formula (<ref>), whereas the third one, which uses orthonormal bases (OnB's) in L^2(), is totally different from them. For this reason, the problem will be reformulated in terms of L^2–OnB's there. Nevertheless it turns out that the sufficient conditions for the existence of a solution do not change essentially. §.§ Plug-In EstimatorLet ĝ_1^(h) be an estimator for g_1^(h)=h· v_1. We now consider a simple plug-in estimator ĝ_0^(h) of g_0^(h)=h· g_0 defined byĝ_0^(h)(x) = |f_1|/n_1h(x)/h(f_1x)ĝ_1^(h)(f_1 x) + ∑_j=1^n_N (-1)^j ∑_i_1: f_i_1≠ f_1…∑_i_j: f_i_j≠ f_1( |f_1|/n_1)^j+1/|f_i_1… f_i_j|h(x)/h ( f_1^j+1/f_i_1… f_i_jx) ĝ_1^(h)( f_1^j+1/f_i_1… f_i_jx ),where N ∈ℕ denotes the sample size and n_N is a certain cut-off parameterdepending on N. The following theorem gives a bound for the mean square error||g_0^(h)-ĝ_0^(h)||_·. Consider g_0^(h)∈ L^2() and let ĝ_1^(h)∈ L^2() be an estimator of g_1^(h). Let furthermore the conditions of Theorem <ref> be fulfilled. 
Then with the notation given there it holdsg_0^(h) -ĝ_0^(h)_·≤|f_1|^1/2/n_1 s(f_1) ×( (1 + ∑_j=1^n_N (e(f,h))^j) g_1^(h)-ĝ_1^(h)_·+( e(f,h) )^n_N + 1 ||g_1^(h)||_2 /1-e(f,h) ).In particular, if ĝ_1^(h) is an L^2-consistent estimator for g_1^(h) (i.e., g_1^(h)-ĝ_1^(h)_ ·→ 0as N,n_N→∞) thenĝ_0^(h) is as well an L^2-consistentestimator for g_0^(h).First of all, we observe that for each k ∈ℕ and f_i_1,…,f_i_k≠ f_1 it holds|h(x)|/|h(f_1^k+1/f_i_1⋯ f_i_k x )| = |h(x)|/|h(f_1/f_i_1x)||h(f_1/f_i_1x)|/|h(f_1^2/f_i_1 f_i_2x )|⋯|h(f_1^k-1/f_i_1⋯ f_i_k-1x)|/|h(f_1^k/f_i_1⋯ f_i_k x )||h(f_1^k/f_i_1⋯ f_i_kx)|/|h(f_1^k+1/f_i_1⋯ f_i_k x )|≤ s_i_1 s_i_2⋯ s_i_k s(f_1).By relation (<ref>) and condition (<ref>), g_1^(h)∈ L^2() as well, cf. Lemma <ref>. Using formula (<ref>) it follows by triangle inequality and a simple integralsubstitution thatg_0^(h)-ĝ_0^(h)_·≤|f_1|^1/2/n_1 s(f_1) g_1^(h)-ĝ_1^(h)_· + ∑_k=1^n_N∑_i_1: f_i_1≠ f_1…∑_i_k: f_i_k≠ f_11/n_1^k+1( |f_1|^k+1/|f_i_1⋯ f_i_k|)^1/2 s(f_1^k+1/f_i_1⋯ f_i_k) g_1^(h)-ĝ_1^(h)_· + ∑_k=n_N+1^∞∑_i_1: f_i_1≠ f_1…∑_i_k: f_i_k≠ f_11/n_1^k+1( |f_1|^k+1/|f_i_1⋯ f_i_k|)^1/2 s(f_1^k+1/f_i_1⋯ f_i_k) ||g_1^(h)||_2≤|f_1|^1/2/n_1 s(f_1) g_1^(h)-ĝ_1^(h)_· + ∑_k=1^n_N∑_i_1: f_i_1≠ f_1…∑_i_k: f_i_k≠ f_11/n_1^k+1( |f_1|^k+1/|f_i_1⋯ f_i_k|)^1/2 s_i_1 s_i_2⋯ s_i_k s(f_1) g_1^(h)-ĝ_1^(h)_· + ∑_k=n_N+1^∞∑_i_1: f_i_1≠ f_1…∑_i_k: f_i_k≠ f_11/n_1^k+1( |f_1|^k+1/|f_i_1⋯ f_i_k|)^1/2 s_i_1 s_i_2⋯ s_i_k s(f_1)||g_1^(h)||_2 = |f_1|^1/2/n_1 s(f_1)( (1 + ∑_j=1^n_N( 1/n_1∑_k: f_k ≠ f_1( |f_1|/|f_k|)^1/2 s_k )^j)g_1^(h)-ĝ_1^(h)_·.. +( 1-1/n_1∑_k: f_k ≠ f_1( |f_1|/|f_k|)^1/2 s_k )^-1( 1/n_1∑_k: f_k ≠ f_1( |f_1|/|f_k|)^1/2 s_k )^n_N + 1g_1^(h)_2).Since 1/n_1∑_k: f_k ≠ f_1( |f_1|/|f_k|)^1/2 s_k < 1 the consistencyresult follows immediately from this approximation. Let g_0^(h)∈ L^p(), p≥ 1, and condition (<ref>) hold. Then g_1^(h)∈ L^p(). Using relation (<ref>), condition (<ref>) and triangle inequality, we get g_1^(h)_p ≤∑_k=1^n s(1/f_k)g_0^(h) _p .Using the estimator ĝ^(h)_0 in practice reveals that* the choice n_N=1,2,3 suffcies completely to get good results due to fast convergence of the geometric series in (<ref>),* ĝ^(h)_0 oscillates much in a neighborhood of the origin.Hence, one has to regularize it applying a usual smoothing procedure. Convolve ĝ^(h)_0 with a smoothing kernel K_b: →_+ which depends on its bandwidth b>0 and satisfies the following assumptions: (K1) K_b∈ L^1()∩ L^2(), ∫_ K_b(x)dx=1 for all b>0 (K2) sup_x | [K_b](x) | ≤C_K where C_K∈ (0,+∞) is a constant independent of b>0 (K3) |1- [K_b](x)|≤ c_1 min{1, b |x|} for all b>0, x ∈ where c_1>0 is a constant. For the resulting estimatorg̃^(h)_0(x)= ĝ^(h)_0 * K_b (x)=∫_ K_b(x-y)ĝ^(h)_0(y) dywe give an upper bound of its mean square error and prove its consistency as N, n_N→∞ and b→ +0. Let g^(h)_0 ∈ L^1() ∩ H^δ() for some δ > 1/2, and letĝ^(h)_1 ∈ L^1() ∩ L^2() be an estimator of g^(h)_1. For a kernel K_b satisfying assumptions(K1) –(K3), b∈ (0,1) it holdsg_0^(h)-g̃^(h)_0||_·≤C_K/2πg_0^(h)-ĝ_0^(h)_·+ ||g_0^(h)||_1^1/2||g_0^(h)||_H^δ^1/2 a_δ(b),where a_δ(b) = 𝒪( b^1 ∧(2δ-1)/4), δ≠ 5/2,𝒪( b (- log b)^1/4), δ = 5/2. 
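Before turning to the proof of this bound, we indicate how the truncated plug-in series and its smoothed version g̃_0^(h) can be evaluated on a grid. In the Python sketch below, h enters only through the ratio h(x)/h(cx), which is supplied as a function h_ratio(x, c) (for h(x) = x it is simply 1/c); the Gaussian choice of the kernel K_b, the generic estimator g1_hat and all names are our own illustrative choices.

import numpy as np
from itertools import product

def plug_in_g0(g1_hat, f, x, h_ratio, n_N=2):
    # truncated series with n_N terms for g0^(h), built from any estimator g1_hat of g1^(h)
    f = np.asarray(f, dtype=float)
    f1, n1 = f[0], np.sum(f == f[0])
    others = f[f != f[0]]                        # coefficients different from f_1
    out = (np.abs(f1) / n1) * h_ratio(x, f1) * g1_hat(f1 * x)
    for j in range(1, n_N + 1):
        for combo in product(others, repeat=j):  # multi-indices i_1,...,i_j with f_i != f_1
            prod = np.prod(combo)
            scale = f1 ** (j + 1) / prod
            out += ((-1) ** j * (np.abs(f1) / n1) ** (j + 1) / np.abs(prod)
                    * h_ratio(x, scale) * g1_hat(scale * x))
    return out

def smooth(vals, x, b):
    # g0_tilde = g0_hat * K_b with a Gaussian kernel; x must be uniform, symmetric, of odd length
    K = np.exp(-x ** 2 / (2.0 * b ** 2)) / (np.sqrt(2.0 * np.pi) * b)
    return np.convolve(vals, K, mode="same") * (x[1] - x[0])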
By triangle inequality, Plancherel identity and convolution property ofwe haveg_0^(h)-g̃^(h)_0||_· ≤g_0^(h)*K_b-ĝ_0^(h)*K_b_·+ g_0^(h)*K_b-g_0^(h)_·≤1/2π[g_0^(h)-ĝ_0^(h)][K_b]_·+ 1/2π[g_0^(h) ] ([ K_b]-1 ) _2 ≤C_K/2πg_0^(h)-ĝ_0^(h)_·+ 1/2π[ g_0^(h) ]([K_b]-1) _2,since ĝ^(h)_0 ∈ L^1() ∩ L^2() by relation (<ref>).By assumption (K3) and Cauchy-Schwartz inequality, we have ||[g_0^(h)] (1 - [K_b])||_2 = ( ∫_ | [g_0^(h)](x)|^2 |1-[K_b] (x)|^2 dx )^1/2≤ c_1 ||g_0^(h)||_1^1/2( ∫_ | [g_0^(h)](x)|(1+x^2)^δ/2 (1+x^2)^-δ/2min{ 1,b|x| }^2 dx )^1/2≤c_1||g_0^(h)||_1^1/2 ||g_0^(h)||_H^δ^1/2( ∫_min{ 1,b|x| }^4 (1+x^2)^-δ dx )^1/4. The rest of the proof follows by observing that for b∈(0,1) a_δ(b) := c_1/2π∫_min{ 1,b|x| }^4 (1+x^2)^-δ dx ≤c_1/π b^4 (∫_0^1 x^4 dx/(1+x^2)^δ+1/5-2δ+ b^2δ-5/2δ-5) + c_1/π(2δ-1)b^2δ-1= 𝒪( b^4 ∧ (2δ-1)), δ≠ 5/2,c_1/π b^4 ( ∫_0^1 x^4 dx/(1+x^2)^δ+1/4) -c_1/πb^4 log b = 𝒪( -b^4 log b ), δ = 5/2, as h → 0. There are many examples of kernels satisfying assumptions (K1)–(K3), e.g., the Gaussian kernel K_b(x)=1/√(2π) b e^-x^2/(2b^2). Since [K_b](x)=e^-b^2 x^2/2, (K1)–(K2) are trivial. Condition (K3) holds from theinequality|[ K_b]-1|=| e^-b^2x^2/2 - 1 |≤ b^2x^2/2≤ 2min{1, b|x| }, x∈,b>0.Another class of examples is provided by K_b(x) = K(x/b)/b, x ∈, where K ∈ L^1() ∩ L^2()is a nonnegative function such that ∫_ K(x)dx=1, ([K]) ⊆ [-1,1] and [K] is a Lipschitz continuous function. While (K1)–(K2) trivially hold in this case, (K3)can be seen from the following lemma. Let K:→_+ be as above. Then |1-[K_b](x)| ≤ c_1 min{ 1,b |x| },x ∈, where c_1 = max{ 1, L_K } with L_K > 0 being the Lipschitz constant of K. Because of ([K]) ⊆ [-1,1] and the Lipschitz continuity of [K] it follows |1-[K_b](x)| = |1-[K](bx)| = 1 ,b |x| > 1,≤ L_K b |x|, b |x| ≤ 1. Thus |1-[K_b](x)| ≤ c_1 min{ 1,b|x| }. Choose ĝ^(h)_1=ĝ_1,l, h(x)=x as in Section <ref>.Under the assumptions of Theorem <ref>, Corollary <ref> and Theorem <ref>the estimator g̃_0^(h) isL^2-consistent for g_0 as N, n_N→∞ and b→ +0.Applying Theorem <ref> and Corollary <ref> yields g_0-ĝ^(h)_0||_·→ 0 as N, l→∞ for any sequence n_N→∞. Relation a_δ(b)→ 0 as b→ +0 finishes the proof. The choice of bandwidth b>0 in (<ref>) can be made by solving the following minimization problem numerically:∂g̃^(h)_0/∂ b_2 →min_b>0,which means that we are seeking for a sufficiently smooth estimate g̃^(h)_0. Assuming that K_b is a C^1–smooth function of parameter b>0 and that the differentiation with respect to b and the integral can be interchanged we get by Plancherel identity and convolution property ofthat∂g̃^(h)_0/∂ b_2= [ ∂g̃^(h)_0/∂ b]_2= [ĝ^(h)_0 ] [ ∂ K_b/∂ b] _2→min_b>0.For easy particular functions K_b the Fourier transform of ∂ K_b/∂ b can be usually calculated explicitly. In contrast, [ĝ^(h)_0 ] has to be estimated from the data, compare Section <ref> for h(x)=x. There, we use the estimate [g_0] to assess [ĝ_0 ].§.§ Fourier ApproachA common strategy in the estimation of g_1^(h) (e.g. in the case of Lévy processes) is first to estimate its Fourier transform ℱ[g_1^(h)] and then to invert it. This causes an error in the estimation of the Fourier transform and additionally in the inversion procedure. Using plug-in estimatorsof Section <ref>, this may increase the estimation error for g_0^(h). For this reason, here we estimate ℱ[g_1^(h)] directly to recover g_0^(h). From now on, seth(x) = x^β for some β∈ℕ. 
In other words, equation (<ref>) is of the form g_0(·) =1/n_1(f_1)^β |f_1|^1-β g_1(f_1 ·)+ ∑_j=1^∞ (-1)^j ∑_i_1: f_i_1≠ f_1…∑_i_j: f_i_j≠ f_11/n_1^j+1( f_1^j+1/f_i_1… f_i_j)^β( |f_1|^j+1/|f_i_1… f_i_j|)^1-β g_1( f_1^j+1/f_i_1… f_i_j·), where g_0(x) = x^β v_0(x) and g_1(x) = x^β v_1(x). Suppose that g_0∈ L^1() and the conditions of Theorem <ref> are fulfilled. Then g_1∈ L^1() as well by Lemma <ref>.The following construction ofĝ_0,l(t) and ĝ_1,l(t) is motivated by estimation approaches for the characteristic triplet of Lévy processes (see e.g. <cit.>). Taking Fourier transforms on both sides of (<ref>) yields[g_0](t) = 1/n_1(f_1)^β |f_1|^-β[ g_1 ] ( t/f_1)+ ∑_j=1^∞ (-1)^j ∑_i_1: f_i_1≠ f_1…∑_i_j: f_i_j≠ f_11/n_1^j+1( f_1^j+1/f_i_1… f_i_j)^β( |f_1|^j+1/|f_i_1… f_i_j|)^-β[ g_1 ] ( f_i_1… f_i_j/f_1^j+1 t ) for t ∈. Let [g_1] be any estimator for the Fourier transform of g_1. Then we define the estimator [g_0] for [g_0] via[g_0](t) = 1/n_1(f_1)^β |f_1|^-β [g_1]( t/f_1)+ ∑_j=1^n_N (-1)^j ∑_i_1: f_i_1≠ f_1…∑_i_j: f_i_j≠ f_11/n_1^j+1( f_1^j+1/f_i_1… f_i_j)^β( |f_1|^j+1/|f_i_1… f_i_j|)^-β [g_1]( f_i_1… f_i_j/f_1^j+1 t ),t ∈. If [g_1] is locally square integrable, an estimator ĝ_0,l of g_0is constructed for some l > 0 asĝ_0,l(t) = 1/2 π∫_-π l^π l e^-itu [g_0](u)du, t ∈.The last expression can be rewritten asĝ_0,l(t)= 1/n_1(f_1)^β |f_1|^-β1/2 π∫_-π l^π l e^-itu [g_1]( u/f_1)du+ ∑_j=1^n_N (-1)^j ∑_i_1: f_i_1≠ f_1…∑_i_j: f_i_j≠ f_11/n_1^j+1( f_1^j+1/f_i_1… f_i_j)^β( |f_1|^j+1/|f_i_1… f_i_j|)^-β×1/2 π∫_-π l^π le^-itu [g_1]( f_i_1… f_i_j/f_1^j+1 u ) du = 1/n_1(f_1)^β |f_1|^1-βĝ_1,l/|f_1|(f_1 t) + ∑_j=1^n_N(-1)^j/n_1^j+1∑_i_1: f_i_1≠ f_1…∑_i_j: f_i_j≠ f_1( f_1^j+1/f_i_1… f_i_j)^β( |f_1|^j+1/|f_i_1… f_i_j|)^1-β×ĝ_1, | f_i_1… f_i_j/f_1^j+1|( f_1^j+1/f_i_1… f_i_j t )with ĝ_1,l(t) = 1/2 π∫_-π l^π l e^-itu [g_1](u)du being an estimator of g_1. The estimator (<ref>) from Section <ref> is locally square integrable. In this case an appropriate choice for the parameter l > 0 can be achieved e.g. by minimizing the right-hand side of (<ref>) for any fixed sample size N (see also the discussion following Corollary <ref>).Similar as in Theorem <ref> one can obtain an upper bound for the L^2-error. With the notationg_1,l(t) = 1/2 π∫_-π l^π l e^-itu [g_1](u)du we get‖ĝ_0,l - g_0 ‖_·≤1/n_1|f_1|^β(‖ĝ_1,l/|f_1| - g_1‖_· +( 1/n_1∑_k: f_k≠ f_1 s_k )^n_N + 1/1 - 1/n_1∑_k: f_k≠ f_1 s_k‖ g_1 ‖_2 ..+ ∑_j=1^n_N∑_i_1: f_i_1≠ f_1…∑_i_j: f_i_j≠ f_1s_i_1… s_i_j/n_1^j‖ĝ_1, | f_i_1… f_i_j/f_1^j+1| l -g_1‖_·) ,where s_k=(|f_k|/|f_1| )^β, k=2,…,n. Assumee(f, |·|^β+1/2)=1/n_1∑_k: f_k≠ f_1 s_k < 1.Choose the estimator ĝ_1,l of g_1 in an L^2–consistent way. Then, as N,l, n_N →∞in an appropriate manner, the above upper bound (<ref>) tends to zero, and ĝ_0,l is L^2–consistent for g_0. For instance, one can choose ĝ_1,l from Section <ref>, which is L^2–consistent under assumptions of Corollary <ref>.Assume, in addition to (<ref>), that |f_1|>max_k: f_k≠ f_1 |f_k| . By (<ref>), the upper bound of g_1- ĝ_1,l_· is monotonously non–decreasing in l. Since |f_i_1… f_i_j/f_1^j+1| l< l/|f_1|we get by (<ref>) and (<ref>) that‖ĝ_0,l - g_0 ‖_· ≤1/n_1|f_1|^β(( 1+∑_j=1^n_N e^j(f, |·|^β+1/2))O(1/l^2β+l^2β+1/N) .. 
+( e(f, |·|^β+1/2) )^n_N + 1/1 - e(f, |·|^β+1/2)‖ g_1 ‖_2 )≤1/n_1|f_1|^β(( 1+e(f, |·|^β+1/2) /1 - e(f, |·|^β+1/2)) O(1/l^2β+l^2β+1/N) ..+( e(f, |·|^β+1/2) )^n_N + 1/1 - e(f, |·|^β+1/2)‖ g_1 ‖_2 ) → 0as N,n_N,l→∞ such that l^2β+1/N→ 0.§.§ Orthonormal Basis Approach Since the series representation (<ref>) is sensitive to noise and bad estimates for v_1, the aim is to obtain an estimation approach that uses (local) orthonormal bases (e.g., Haar wavelets) of L^2. Moreover, from the numerical point of view it is much more convenient to find a solution only on a finite interval. For this reason, the problem of Section <ref> should be reformulated for functions on L^2() with support contained in a finite interval. For0 < A < ∞, considerU_A = {u ∈ L^2():u = 0a.e. on \ [-A,A]}to bethe closed linear subspace of L^2() equipped with the usual scalar product on L^2(). Find a function g_0^(h)∈ U_A that fulfills equation (<ref>) for fixed g_1^(h). Because of the scalings on the right hand side of this equation, we have to extend theassumptions on g_1^(h) and the coefficients |f_j| a bit. Let |f_1| ≥max_k: f_k ≠ f_1 |f_k| be the largest coefficient and define M = min{1, |f_1|}. Then for g_1^(h)∈ U_AMit follows that g_1^(h)(f_1 ·) ∈ U_A.Since |f_1| is the largest coefficient, it holds moreover that g_0^(h)(f_1/f_j x) = 0, for all |x| > A, i.e. g_0^(h)(f_1/f_j ·) ∈ U_A for all j=1,…,n. For this reason the restriciton φ_g_1^(h)|_U_A of the function φ_g_1^(h) from the proofof Theorem <ref> is a map on U_A. Then one can show the following theorem with the same arguments applied to φ_g_1^(h)|_U_A. Let h:→, s_k be as in Theorem <ref>, and let g_1^(h)∈ U_AM with M defined as before. Assume furthermore that |f_1| ≥max_k: f_k ≠ f_1 |f_k| and relation (<ref>) holds. Then there exists a unique function g_0^(h)∈ U_A such that g_1^(h)(·) = ∑_k=1^n 1/|f_k|h(·)/h(·/f_k)g_0^(h)(·/f_k)a.e. on [-AM, AM]. The solution g_0^(h) can be expressed as in (<ref>).Note that the solution g_0^(h) fulfills the equation φ_g_1^(h)|_U_A(g_0^(h)) = g_0^(h) a.e. on the whole interval [-A,A], whereas (<ref>) holds only on [-AM, AM], whichis merely the same if M = 1, i.e. in the case |f_1| ≥ 1. Notice that g_1^(h)∈U_AM means that the random field X has a compound Poisson marginal distribution if h(x)≡ 1.The last theorem stated the existence of a solution g_0^(h) of the fixpoint equation φ_g_1^(h)|_U_A(g_0^(h)) = g_0^(h) or equivalently forg̅_1(·):=h(·)/h(f_1 ·)g_1^(h)(f_1 ·) = ∑_k=1^n 1/|f_k|h(·)/h( f_1/f_k·)g_0^(h)(f_1/f_k·).Now let (ψ_n)_n ∈ be an orthonormal basis (OnB) of U_A. Since g_0^(h) = ∑_j=1^∞⟨ g_0^(h), ψ_j ⟩ψ_j it holds∑_k=1^n 1/|f_k|h(·)/h( f_1/f_k·)g_0^(h)(f_1/f_k·) = ∑_j=1^∞⟨ g_0^(h), ψ_j ⟩∑_k=1^n 1/|f_k|h(·)/h( f_1/f_k·) ψ_j(f_1/f_k·).Note that because of |f_1| ≥max_k: f_k ≠ f_1 |f_k| the function ψ_j(f_1/f_k ·) is in U_A for all k ∈ℕ. Setη_j (·) = ∑_k=1^n 1/|f_k|h(·)/h( f_1/f_k·) ψ_j(f_1/f_k·),j ∈ℕ .Then we can conclude that there exists a solution g_0^(h)∈ U_A of (<ref>) if and only if the function g̅_1 admits a representation g̅_1 = ∑_j=1^∞ x_j η_j with some l^2-sequence (x_j)_j ∈ℕ. In this case,a solution g_0^(h) is given by∑_j=1^∞ x_j ψ_j. It is unique if and only if the scalar sequence (x_j)_j ∈ℕ is unique. In other words, the problem is characterized by the operator T:l^2 → U_A,T: x = (x_j)_j ∈ℕ↦∑_j=1^∞ x_j η_j.If T is surjective there exists a solution. If it is bijective the solution is unique.It is clear now that under the conditions of Theorem <ref> the operator T is a bijection. 
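To make the system (η_j) concrete, the following Python sketch evaluates η_j on a grid for h(x) = x, in which case h(x)/h((f_1/f_k)x) = f_k/f_1. The piecewise constant choice of ψ_j (normalised indicators of a partition of [-A,A], a crude stand-in for a Haar-type basis), the values of f and A and all names are our own illustrative choices.

import numpy as np

def eta(psi_j, f, x):
    # eta_j(x) = sum_k |f_k|^{-1} * [h(x)/h((f_1/f_k)x)] * psi_j((f_1/f_k)x), written for h(x) = x
    f = np.asarray(f, dtype=float)
    out = np.zeros_like(x)
    for fk in f:
        out += (1.0 / np.abs(fk)) * (fk / f[0]) * psi_j((f[0] / fk) * x)
    return out

A, f = 6.0, [1.3, 0.2, 0.1, 0.1]
x = np.linspace(-A, A, 2401)

def cell_indicator(j, m=8):
    # normalised indicator of the j-th of m equal cells of [-A, A]
    w, left = 2.0 * A / m, -A + j * (2.0 * A / m)
    return lambda t: ((t >= left) & (t < left + w)) / np.sqrt(w)

etas = [eta(cell_indicator(j), f, x) for j in range(8)]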
Nevertheless, let us reformulate this theorem in terms of theOnB (ψ_l)_l ∈ℕ and give another proof for it.Let (ψ_l)_l ∈ℕ be an OnB of U_A, and let the conditions of Theorem <ref> be fulfilled. Then there exists a unique sequence x ∈ l^2 such that the operator T is one–to–one.We would like to show that the system (η_j)_j∈ℕ is a basis for U_A. First we show, by contradiction, that V:=span((η_j)_j∈ℕ) = U_A.Therefore assume that V ⊂ U_A. Since V is a closed subspace of U_A it follows by Riesz lemma (see e.g. <cit.>) that for any 0<δ < 1 there exists a function g_δ∈ U_A with ‖ g_δ‖_2=1such that ‖ g_δ - v ‖_2 ≥ 1-δ, for all v ∈ V. Now chooseδ :=( 1 -e(f,h))/2. Then we canwrite g_δ = ∑_k=1^∞⟨ g_δ, ψ_k ⟩ψ_k. Define the sequence x = (x_k)_k ∈ℕ∈ l^2 via x_k = |f_1|⟨g_δ, ψ_k ⟩ /n_1, k∈ℕ. Since ‖ g_δ‖_2=1 it follows ‖ x ‖_2=|f_1|/n_1. Clearly, we have ∑_k=1^∞ x_k η_k ∈ V. By triangle inequality, a substitution in the integral and the definition of s_k it can be observed that1 - δ≤‖ g_δ - ∑_k=1^∞ x_k η_k ‖_2= ‖1/n_1∑_j: f_j ≠ f_1|f_1|/|f_j|h(·)/h( f_1/f_j·)g_δ(f_1/f_j·) ‖_2≤ e(f,h),which is a contradiction to the fact that 1-δ >e(f,h), i.e. V = U_A. In the second step of the proof, weuse <cit.> to show that (η_j)_j∈ℕ is a basis for U_A.Therefore we have to verify the assumptions there.First of all, we observe that η_l are non-zero functions, since‖η_j ‖_2 = ‖n_1/|f_1|ψ_j + ∑_k: f_k≠ f_11/|f_k|h(·)/h( f_1/f_k·) ψ_j(f_1/f_k·)‖_2≥n_1/|f_1| - ‖∑_k: f_k≠ f_11/|f_k|h(·)/h( f_1/f_k·) ψ_j(f_1/f_k·)‖_2≥n_1/|f_1| - ∑_k: f_k≠ f_11/|f_k|‖h(·)/h( f_1/f_k·) ψ_j(f_1/f_k·)‖_2 ≥n_1/|f_1| - ∑_k: f_k≠ f_1 s_k ( 1/|f_k|· |f_1|)^1/2= n_1/|f_1|(1 - e(f,h)),where the latter is strictly positive, i.e. (η_j)_j ∈ is a sequence of non-zero functionsin the Hilbert space U_A. Now let (c_j)_j ∈ be an arbitrary real valued sequence andm,l ∈ with m ≤ l. Show that there exists a constant K such that‖∑_j=1^m c_j η_j ‖_2 ≤K ‖∑_j=1^l c_j η_j ‖_2. If c_1 = c_2 = … = c_l = 0 then this relation is obviouslytrue for any choice of K. Otherwise,∑_j=1^l c_j η_j _2 ≥n_1/|f_1|( 1- e(f,h) ) ( c_1^2+…+c_l^2 )^1/2 > 0.Thus, we have‖∑_j=1^m c_j η_j ‖_2/‖∑_j=1^l c_j η_j ‖_2 ≤e(f,h)( c_1^2+…+c_m^2 )^1/2/( 1-e(f,h) ) ( c_1^2+…+c_l^2 )^1/2≤e(f,h)/1-e(f,h) =: K.This means (η_j)_j∈ is a basis for U_A,i.e. for any function f ∈ U_A there is a unique scalar sequence (c_j(f))_j ∈ with f = ∑_j=1^∞ c_j(f) η_j. Since ‖ f ‖_2 = ‖∑_j=1^∞ c_j(f) η_j ‖_2 ≥n_1/|f_1|( 1- e(f,h) ) ( ∑_j=1^∞ c_j^2(f) )^1/2,the sequence (c_j(f))_j ∈ is furthermore an element of l^2. Choosing f = h(·)/h(f_1 ·)g_1^(h)(f_1 ·)completes the proof.Note that the proof of the last theorem shows that the system (η_j)_j ∈ℕ is a basis for the L^2-subspace U_A. Therefore we can orthonormalize it by Gram–Schmidt method to an OnB (e_j)_j ∈ℕ of U_A given by e_1 = η_1 / ||η_1||_2 and succesively e_k = η_k-∑_i=1^k-1⟨η_k,e_i ⟩ e_i /‖η_k-∑_i=1^k-1⟨η_k,e_i ⟩ e_i ‖_2,k=2,3,… .Now let ĝ̅̂_1 be any estimator for g̅_1 ∈ U_A and let P_m be the orthogonal projection of U_A onto the m-dimensional subspace V_m = span{η_1,…,η_m}=span{e_1,…,e_m} which is given by P_m f = ∑_j=1^m⟨ f,e_j ⟩ e_j. Define the sequence (ŷ_j)_j ∈ℕ byŷ_j = ⟨ĝ̅̂_1, e_j ⟩, 1 ≤ j ≤ m, 0,j > m.Then the orthogonal projection of ĝ̅̂_1 onto V_m isĝ̅̂_1,m := P_m ĝ̅̂_1 = ∑_j=1^∞ŷ_j e_j ( = ∑_j=1^m ŷ_j e_j ) .Now, an estimator ĝ_0,m^(h) for g_0^(h) will be constructed as follows: * Let (x̂_1,m,…,x̂_m,m) be the unique solution toŷ_j = ∑_i=1^m x̂_i,m⟨η_i, e_j ⟩,j =1,…,m .Setx̂_i = x̂_i,m ;1 ≤ i ≤ m 0;i > m. 
* Then we define ĝ_0,m ^(h)= ∑_i=1^∞x̂_i ψ_i ( = ∑_i=1^m x̂_i ψ_i ). Equation (<ref>) comes from the fact that for any f ∈ V_m, ∑_i=1^m λ_i η_i =∑_i=1^m ⟨ f, e_i ⟩ e_i if and only if⟨ f, e_i ⟩ = ∑_j=1^m λ_j ⟨η_j, e_i ⟩. Note that ⟨ e_i, η_j ⟩ = 0 whenever i > jsince η_j is a linear combination of e_1,…,e_j. In particular, formula (<ref>) stays true if j > m. Due to that, the systemof linear equations there becomes diagonal and can easily be solved by backward substitution. Let g̅_1 ∈ U_A and ĝ̅̂_1∈ U_A be an estimator of g̅_1. Let furthermore ĝ̅̂_1,m := P_m ĝ̅̂_1 be the orthogonal projection of ĝ̅̂_1 onto V_m. Then under the conditions of Theorem <ref> it holds for ĝ_0,m^(h) as in (<ref>) that g_0^(h) - ĝ_0,m^(h)_·≤|f_1|/n_1 ( 1 - e(f,h) )[ 2 ∑_j=m+1^∞ x_j η_j _2 + g̅_1 - ĝ̅̂_1,m_·], wherex_j=⟨g_0^(h), ψ_j ⟩, j∈. First of all, it holds ( ∑_j=1^∞[ ∑_i=1^m x_i ⟨η_i, e_j ⟩ - ŷ_j ]^2 )^1/2 = ( ∑_j=1^∞[ ∑_i=1^m x_i ⟨η_i, e_j ⟩ - ∑_i=1^m x̂_i ⟨η_i, e_j ⟩]^2 )^1/2 = ∑_i=1^m (x_i - x̂_i) η_i _2 ≥n_1/|f_1|( 1 - e(f,h) ) ( ∑_i=1^m (x_i - x̂_i)^2 )^1/2, and therefore ∑_i=1^m (x_i - x̂_i)^2≤( |f_1|/n_1 ( 1 - e(f,h) ))^2 ∑_j=1^∞[ ∑_i=1^m x_i ⟨η_i, e_j ⟩ - ŷ_j ]^2= ( |f_1|/n_1 ( 1 - e(f,h) ))^2 ∑_j=1^∞[y_j - ŷ_j - ∑_i=m+1^∞ x_i ⟨η_i, e_j ⟩]^2, with (y_j)_j ∈ℕ defined by y_j = ⟨g̅_1, e_j ⟩ = ∑_i=1^∞ x_i ⟨η_i, e_j ⟩, j ∈ℕ, compare (<ref>). Then g_0^(h) - ĝ_0,m^(h)_· = ∑_j=1^∞ x_j ψ_j - ∑_j=1^m x̂_j ψ_j_·≤∑_j=m+1^∞ x_j ψ_j_2 + ∑_j=1^m (x_j - x̂_j) ψ_j_·. By (<ref>) together with the triangle inequality we get ∑_j=1^m (x_j - x̂_j) ψ_j_·= ( 𝔼∑_i=1^m (x_i - x̂_i)^2 )^1/2≤|f_1|/n_1 ( 1 - e(f,h) )[ ( 𝔼∑_j=1^∞ (y_j - ŷ_j)^2 )^1/2 + ∑_i=m+1^∞ x_i η_i_2 ]=|f_1|/n_1 ( 1 - e(f,h) )[ g̅_1 - ĝ̅̂_1,m_· + ∑_i=m+1^∞ x_i η_i_2 ] . Taking into account that ∑_i=m+1^∞ x_i η_i _2 ≥n_1/|f_1|( 1 - e(f,h) ) ∑_j=m+1^∞ x_j ψ_j _2 the statement of the theorem follows by (<ref>).The term ∑_i=m+1^∞ x_i η_i _2 in (<ref>) is the approximation error of g̅_1=∑_i=1^∞ x_i η_i by the first m summands of its series. As m→∞, the upper bound(<ref>) tends to |f_1|/n_1 ( 1 - e(f,h) )g̅_1 - ĝ̅̂_1_·. In order to estimate g̅_1, the method of Section <ref> can be used if the random field X satisfies the assumptions given there. In this case, Corollaries <ref> and <ref> yield an upper bound for g̅_1 - ĝ̅̂_1_· leading to L^2–consistent estimates of g_0^(h). Since the estimator in (<ref>) is strongly oscillating, a smoothed version g̃_0,m^(h) = ĝ_0,m^(h)∗ K_b of ĝ_0,m^(h) is considered here, where K_b is a smoothing kernel with properties (K1)-(K3) from Section <ref>. It is clear that g_0^(h)∈ L^1(), ĝ_0,m^(h)∈ L^1() ∩ L^2(), because both are in U_A by assumption. If additionally g_0^(h)∈ H^δ() for some δ > 1/2 then it immediately follows from the proof of Theorem <ref> that ‖g̃_0,m^(h) - g_0^(h)‖_·≤C_K/2 π‖ĝ_0,m^(h) - g_0^(h)‖_· + ||g_0^(h)||_1^1/2||g_0^(h)||_H^δ^1/2 a_δ(b) with a_δ given in (<ref>). The bandwidth b>0 can be chosen as in Remark <ref>. § NUMERICAL PERFORMANCE OF THE ESTIMATORS In order to compare the three approaches of Section <ref>, we consider Λ(Δ) to be a compound Poisson random variableΛ(Δ) d=∑_k=1^N Y_k,where {Y_k}_k ∈ℕ is a sequence of independent and identically distributed random variables, independent of N ∼ Poi(ν_d(Δ)). Then for any simple function f = ∑_k=1^n f_k 𝕀_Δ_k with ν_d(Δ_k) = ν_d(Δ) for all k = 1,…,n it holdsX(0) d=∑_k=1^n f_k W_k,where W_1,…,W_n are i.i.d. with W_1 d=Λ(Δ). In the following examples, we assumed d=2, n = 4, f_1 = 1.3, f_2 = 0.2, f_3 = f_4 = 0.1 as well as ν_2(Δ) = 1. 
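A sample of such a field on an integer grid can be generated directly from the representation X(t) = ∑_k f_k Λ(t-Δ_k). In the Python sketch below the cells Δ_k are taken as four neighbouring unit squares (their exact placement is not specified above; it only affects the dependence structure, not the marginal law of X(0)), and the jumps Y_k are standard normal as in the first of the two examples considered below. Grid size, random seed and all names are our own choices.

import numpy as np

rng = np.random.default_rng(0)

def compound_poisson_cells(shape, jump_sampler, intensity=1.0):
    # i.i.d. values Lambda(cell) over unit cells: sum of N ~ Poi(intensity) jumps per cell
    counts = rng.poisson(intensity, size=shape)
    out = np.zeros(shape)
    for pos in zip(*np.nonzero(counts)):
        out[pos] = jump_sampler(counts[pos]).sum()
    return out

f = np.array([1.3, 0.2, 0.1, 0.1])
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]         # lattice positions of the cells Delta_k
n1, n2 = 120, 120
Z = compound_poisson_cells((n1 + 1, n2 + 1), lambda n: rng.standard_normal(n))

X = np.zeros((n1, n2))
for fk, (i, j) in zip(f, shifts):
    X += fk * Z[i:i + n1, j:j + n2]               # X(t) = sum_k f_k * Lambda(t - Delta_k)
sample = X.ravel()                                 # observations fed to the three estimators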
Thenv_0 is the density of the random variable Y_1, and due toformula (<ref>),v_1 is given by v_1(x) =1/1.3 v_0( x/1.3) + 1/0.2 v_0( x/0.2) + 2/0.1 v_0( x/0.1) or, equivalently, g_1(x) = g_0( x/1.3) +g_0( x/0.2) + 2 g_0( x/0.1),where g_1(x) = xv_1(x) and g_0(x) = xv_0(x), h(x)=x. Note that the coefficients f_1,…,f_4 fulfill conditions of Theorem <ref>, i.e. for giveng_1∈ L^2() there exists a solution g_0 ∈ L^2() to the above equation. In our examples, we simulated the random field X on an integer grid. The estimators for g_0 based on the correspondingsample with sample size N=10000 were compared to the original g_0for the following examples:Y_1 ∼ N(0,1), i.e. v_0(x) = (√(2π))^-1exp(-x^2 / 2 ) (Fig. <ref>-<ref>) and Y_1 ∼𝖤𝗑𝗉 (1), i.e. v_0(x) = exp(-x) 𝕀_(0,∞)(x) (Fig. <ref>-<ref>) For the estimators based on the Fourier method from Section <ref>, the parameter l=1 is chosen due toCorollary <ref>, cf. Section <ref>. For both the plug-in (Section <ref>) and the Fourier method, we usedfurthermore the cut-off parameter n_N = 1. For the smoothing procedure, the Epanechnikov kernel K_b(x) = 0.75 b^-1 (1-(xb^-1)^2) 𝕀{ |x|≤ b} with bandwidths b=0.5 and b=1.0was used in examples(<ref>)and (<ref>) respectively, chosen according to Remark <ref>. For the OnB method, Haar wavelets {ψ_j} on [-A,A] for A=6 were used together with the cut–off parameter m=7. The parameter l>0 andthe bandwidth b>0 for the estimator in (<ref>) (using Epanechnikov kernel K_b) were chosen based on a simulation study with differentparameters. It turned out that visually the best choice for the example in (<ref>) is l=4.5, b=0.7 whereas for the example in (<ref>) theparameters l=4.0, b=1.1 turned out to be optimal. Figures <ref> and <ref> show realizations of the estimatedg_0 (red) by our methods compared to the original g_0 (dashed) from examples(<ref>) and (<ref>).The empirical mean and the standard deviation of the mean square errors of our estimation (assessed upon estimation results for g_0 out of 100 simulations of X) are given in Table <ref>. It is seen there that plug-in and Fourier methods perform equally wellwhereasthe meanerror for the OnB method is significantly higher.Regarding their computation times (see Table <ref>), the Fourier approach outperforms the others since its algorithmis at least 10 times faster. To summarize, we recommend the Fourier method for the estimation of v_0 unless the plug-in approach can be used under milder assumptions on v_0 and v_1. Thisessentially depends on the estimator for v_1 which is chosen as a plug-in. § APPENDIX Here we give a proof of Theorem <ref> and its corollaries. Before doing so we prove auxiliarystatements. Let Y={Y_t, t∈^d} be a random field defined in (<ref>) satisfying (H2)_2 such that Y is either (i) m-dependent or (ii)ϕ-mixing and condition (<ref>) holds. Furthermore, let W⊂^d be a finite subset, N=(W), and let θ̂(u) = 1/N∑_t∈ WY_te^iuY_t and θ(u) =Y_0e^iuY_0. Then |θ̂(u)-θ(u)|^4 ≤C/N^2|Y_0|^4, where C>0 is a constant. It holds that |θ̂(u)-θ(u)|^4 =1/N^4[√((∑_t∈ Wξ^(1)_t(u))^2 +(∑_t∈ Wξ^(2)_t(u))^2 )]^4 ≤2/N^4[|∑_t∈ Wξ^(1)_t(u)|^4 + |∑_t∈ Wξ^(2)_t(u)|^4]. (i) By Theorem <ref> it holds for p=4, α=1 and i=1,2 |∑_t∈ Wξ^(i)_t(u)|^4 ≤(8 ∑_t∈ Wb_t,2(ξ_t^(i)))^2 = 2^6 ( ∑_t∈ W‖ξ_t^(i)(u)‖_2^2 + ∑_k∈ V_t^1‖ξ^(i)_k(u) _k-t_∞[ξ^(i)_t(u)]‖_1_:=D)^2. To determine expression D it is useful to decompose it into two parts. The first part consists of all k for which k_∞>m and the second part contains all other k. 
Hence, D = ∑_k∈ V_t^1k-t_∞> m‖ξ^(i)_k(u) _k-t_∞[ξ^(i)_t(u)]‖_1 +∑_k∈ V_t^1 k-t_∞≤ m‖ξ^(i)_k(u) _ k-t_∞[ξ^(i)_t(u)]‖_1. For the first part it holds due to m–dependence of {ξ^(i)_t(u)} that ∑_k∈ V_t^1k-t_∞> m‖ξ^(i)_k(u) [ξ^(i)_t(u)|_V_t^k-t_∞]‖_1 = ∑_k∈ V_t^1k-t_∞> m‖ξ^(i)_k(u) [ξ^(i)_t(u)]‖_1=0,since ξ^(i)_t(u) is centered. Furthermore, for the second sum in expression D it follows by Hölder inequality that ∑_k∈ V_t^1k-t_∞≤ m‖ξ^(i)_k(u) [ξ^(i)_t(u)|_V_t^k-t_∞]‖_1≤∑_k∈ V_t^1k-t_∞≤ m‖ξ^(i)_k(u) ‖_2 ‖[ξ^(i)_t(u)|_V_t^k-t_∞]‖_2 ≤‖ξ^(i)_ t(u)‖_2 ∑_k∈ V_t^1k-t_∞≤ m‖ξ^(i)_k(u) ‖_2. Let Ṽ_t^1:={k∈ V_t^1: k-t_∞≤ m} and n_t:=(Ṽ_t^1). This set is shown in Figure <ref> for d=2. Note that for i=1,2 due to stationarity of Y |ξ^(i)_t(u)|^p ≤ 2^p|Y_0|^p,p= 1,…, 4. Therefore, for all t∈ W and i=1,2 it holds that ‖ξ^(i)_t(u)‖_2≤ 2‖ Y_0 ‖_2. Applying this to (<ref>), we get ‖ξ^(i)_t(u)‖_2 ∑_k∈Ṽ_t^1‖ξ^(i)_k(u) ‖_2≤4n_t‖ Y_0‖_2^2. Moreover, it follows |∑_t∈ Wξ^(i)_t(u)|^4 ≤ 2^6(∑_t∈ W( 2‖ Y_0 ‖_2^2 + 4n_t‖ Y_0‖_2^2))^2 ≤ 2^6(2N ‖ Y_0 ‖_2^2 + 4n^*N‖ Y_0‖_2^2)^2=2^8 N^2 ‖ Y_0 ‖_2^4(1+2n^*)^2 with n^*:=t∈ Wmax{n_t}. By Ljapunov inequality, it holds |θ̂(u)-θ(u)|^4 ≤2^10/N^2(1+2n^*)^2 |Y_0|^4. (ii) Using Theorem <ref> with p=4 and applying the Ljapunov inequality we get |∑_t∈ Wξ^(i)_t(u)|^4 ≤ C_i·max{∑_t∈ W|ξ^(i)_t(u)|^4,(∑_t∈ W|ξ^(i)_t(u)|^2)^2}≤ C_i·max{∑_t∈ W|ξ^(i)_t(u)|^4,(∑_t∈ W(|ξ^(i)_t(u)|^4)^1/2)^2}=C_i·max{N|ξ^(i)_0(u)|^4,N^2|ξ^(i)_0(u)|^4} = C_i N^2|ξ^(i)_0(u)|^4 ≤ 2^4C_i N^2|Y_0|^4 for some constants C_i>0, i=1,2, where the last inequality follows by equation (<ref>). Thus, we have |θ̂(u)-θ(u)|^4 ≤2/N^4[2^4C_1N^2|Y_0|^4+ 2^4C_2N^2|Y_0|^4] = C/N^2|Y_0|^4, where C = 2^5(C_1+C_2)>0 is constant. If assumption (i) holds then the constant C is given by C = 2^10(1+2n^*)^2, where n^*≤ m^d is the maximum over the cardinalities of the sets Ṽ_t^1 for every t∈ W. Therefore, in the first case the constant C depends on m. In the second case the constant C = 2^5(C_1+C_2) depends on the mixing coefficient ϕ_u,v(r) by Theorem <ref>. Let ψ̂(u) = 1/N∑_t∈ We^iuY_t and ψ(u) =e^iuY_0 where N=(W). Under the assumptions of Lemma <ref>for p≥2 there exists a constant C_p>0 such that |ψ̂(u) - ψ(u)|^p≤C_p/N^p/2. Since x↦ |x|^p, p≥ 2 is a convex functionit holds |ψ̂(u)-ψ(u)|^p = 1/N^p|(∑_t∈ Wξ̃^(1)_t(u))^2 + (∑_t∈ Wξ̃^(2)_t(u))^2|^p/2≤2^p/2-1/N^p[|∑_t∈ Wξ̃^(1)_t(u)|^p +|∑_t∈ Wξ̃^(2)_t(u)|^p ]. (i) Applying Theorem <ref> withα=1 we get for i=1,2 |∑_t∈ Wξ̃^(i)_t(u)|^p≤(2p∑_t∈ W(‖ξ̃^(i)_t(u)‖_2^2 + ∑_k∈ V_t^1‖ξ̃^(i)_k(u)_k-t_∞[ξ̃^(i)_t(u)]‖_1))^p/2. Since |ξ̃^(i)_t(u)|≤ 2t∈^d, u∈,i=1,2 it follows ‖ξ̃^(i)_ t(u)‖_2^2 ≤4. Analogously to the calculations in the proof of Lemma <ref> (i) we observe ∑_k∈ V_t^1‖ξ̃^(i)_k(u)_k-t_∞[ξ̃^(i)_t(u)]‖_1 = ∑_k∈Ṽ_t^1‖ξ̃^(i)_k(u)[ξ̃^(i)_t(u)|_V_t^k-t_∞]‖_1≤ 4n_t, and hence |∑_t∈ Wξ̃^(i)_t(u)|^p ≤(2p∑_t∈ W (4+4n_t))^p/2≤ 2^3/2pp^p/2(N(1+n^*))^p/2. So all in all we get from (<ref>) that |ψ̂(u) - ψ(u)|^p ≤C_p/N^p/2 for the constant C_p=2^2p p^p/2 (1+n^*)^p/2>0 and n^*≤ m^d. (ii) Using Theorem <ref> andinequality (<ref>) it follows for p≥ 2 and i=1,2 that |∑_t∈ Wξ̃^(i)_t(u)|^p ≤ C_i·max{∑_t∈ W|ξ̃^(i)_t(u)|^p,(∑_t∈ W|ξ̃^(i)_t(u)|^2)^p/2}≤ C_i·max{N|ξ̃^(i)_t(u)|^p,N^p/2( |ξ̃^(i)_t(u)|^2 )^p/2}≤C_i N^p/22^p. By equation (<ref>) it finally follows |ψ̂(u)-ψ(u)|^p ≤C_p/N^p/2, where C_p = 2^3/2p-1(C_1+C_2)>0 is a constant depending on p and themixing coefficient of {ξ̃_t^(i)}, i=1,2. 
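(Illustrative addition, not part of the original proofs.) The next lemma involves the empirical quantities ψ̂(u), θ̂(u) and the regularized reciprocal 𝕀{|ψ̂(u)|≥ N^{-1/2}}/ψ̂(u); the short Python sketch below computes them on simulated data, replacing the dependent field by i.i.d. N(0,1) draws purely for simplicity, which is only a special case of the dependence settings (i) and (ii).

import numpy as np

rng = np.random.default_rng(1)

# Stand-in data for {Y_t, t in W}: i.i.d. N(0,1) draws, used here only for illustration.
Y = rng.normal(size=5_000)
N = Y.size
u = 0.7

psi_hat = np.mean(np.exp(1j * u * Y))        # empirical counterpart of E[e^{iuY_0}]
theta_hat = np.mean(Y * np.exp(1j * u * Y))  # empirical counterpart of E[Y_0 e^{iuY_0}]

# Regularized reciprocal used below: set to 0 whenever |psi_hat(u)| < N^{-1/2}.
inv_psi_tilde = 1.0 / psi_hat if abs(psi_hat) >= N ** -0.5 else 0.0

# For Y_0 ~ N(0,1): psi(u) = exp(-u^2/2) and theta(u) = i*u*exp(-u^2/2).
print(abs(psi_hat - np.exp(-u ** 2 / 2)))
print(abs(theta_hat - 1j * u * np.exp(-u ** 2 / 2)))
print(inv_psi_tilde)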
The following lemma is a generalization of <cit.> (proven there for independent random variables and p=1) to the case of weakly dependent random fields. Under the assumptions of Lemma <ref> together with condition (<ref>)there exists a constant C>0 such that for p∈ |1/ψ̃(u)-1/ψ(u)|^2p≤ C·min{N^-p/|ψ(u)|^4p,1/|ψ(u)|^2p}. 1.) Let |ψ(u)|<2N^-1/2. Then it holds |1/ψ̃(u) -1/ψ(u)|^2p = |𝕀{|ψ̂(u)|≥ N^-1/2}/ψ̂(u)-1/ψ(u)|^2p= |𝕀{|ψ̂(u)|≥ N^-1/2}· (ψ(u)-ψ̂(u))/ψ̂(u)ψ(u)+ψ̂(u)𝕀{|ψ̂(u)|< N^-1/2}/ψ̂(u)ψ(u)|^2p≤ 2^2p-1(1/|ψ(u)|^2p P(|ψ̂(u)|< N^-1/2) + [|ψ(u)- ψ̂(u)|^2p/|ψ̂(u)|^2p|ψ(u)|^2p𝕀{|ψ̂(u)|≥ N^-1/2}])≤ 2^2p-1(1/|ψ(u)|^2p + C_2p N^-p/N^-p|ψ(u)|^2p)= 𝒪(1/|ψ(u)|^2p), where the last inequality follows by Lemma <ref> and the fact that an indicator is always smaller or equal than 1. In this case, we get for |ψ(u)|<2N^-1/2 that N^-p/|ψ(u)|^4p = N^-p/|ψ(u)|^2p·1/|ψ(u)|^2p> N^-p/|ψ(u)|^2p·N^p/2^2p = 2^-2p/|ψ(u)|^2p. 2.) Let |ψ(u)|≥ 2N^-1/2. Then we get P(|ψ̂(u)|<N^-1/2) = P(|ψ(u)|-|ψ̂(u)|>|ψ(u)|-N^-1/2)≤ P(|ψ̂(u)-ψ(u)|>1/2|ψ(u)|) = P(|∑_t∈ W(e^iuY_t- e^iuY_0)|> N/2|ψ(u)|) = P ( √((∑_t∈ Wξ̃_t^(1)(u))^2 + (∑_t∈ Wξ̃_t^(2)(u))^2)> N/2|ψ(u)|) ≤ P (max_i=1,2{|∑_t∈ Wξ̃_t^(i)(u)|}>N/2√(2)|ψ(u)|)≤ P(|∑_t∈ Wξ̃_t^(1)(u)|>N/2√(2)|ψ(u)|) + P (|∑_t∈ Wξ̃_ t^(2)(u)|>N/2√(2)|ψ(u)|). To calculate this probability, we consider assumptions (i) and (ii) separately. (i) Here we can apply Theorem <ref> and we get for i=1,2 P(|∑_t∈ Wξ̃_t^(i)(u)|>N/2√(2)|ψ(u)|)≤exp{1/e- (N/2√(2)|ψ(u)|)^2/4eb_i}, where b_i = ∑_t∈ W b_t,∞(ξ̃^(i)) = ∑_t∈ W(‖(ξ̃_t^(i)(u))^2‖_∞ + ∑_k∈ V_t^1‖ξ̃_k^(i)(u) _k-t_∞[ξ̃_t^(i)(u)]‖_∞) and ‖ Z ‖_∞ := inf{c>0 : P(|Z|>c)=0} for a random variable Z. By inequality (<ref>)and m-dependence ∑_k∈ V_t^1‖ξ̃_k^(i)(u)_k-t_∞[ξ̃_t^(i)(u)]‖_∞ =∑_k∈Ṽ_t^1‖ξ̃_k^(i)(u) _k-t_∞[ξ̃_t^(i)(u) ]‖_∞≤ 4n_t. Therefore, b_i can be estimated as b_i ≤∑_t∈ W(4+4n_t)≤ 4N(1+n^*), i=1,2, with n^* as in the proof of Lemma <ref>. For expression (<ref>) we get P(|∑_t∈ Wξ̃_t^(1)(u)|>N/2√(2)|ψ(u)|) + P (|∑_t∈ Wξ̃_ t^(2)(u)|>N/2√(2)|ψ(u)|)≤2·exp{1/e-N|ψ(u)|^2/128e(1+n^*)}= 𝒪(N^-p/|ψ(u)|^2p). (ii) Apply Theorem <ref> to{ξ̃_t^(i)(u) } with a_t=1 for all t∈ W and h=2. Then, A(W)=N, and wehave P(|∑_t∈ Wξ̃_t^(1)(u)|>N/2√(2)|ψ(u)|) + P (|∑_t∈ Wξ̃_ t^(2)(u)|>N/2√(2)|ψ(u)|)≤ 2·exp{1/e-N^2/8|ψ(u)|^2/16(1+B(ϕ))Ne} = 2·exp{1/e- N|ψ(u)|^2/128(1+B(ϕ))e} = 𝒪(N^-p/|ψ(u)|^2p). So we get in both cases P (|ψ̂(u)|<N^-1/2) = 𝒪(N^-p/|ψ(u)|^2p). It holds that 1/|ψ̂(u)|^2p= |ψ(u)|^2p/|ψ(u)|^2p|ψ̂(u)|^2p= (|ψ(u)-ψ̂(u)+ψ̂(u)|^2/|ψ(u)|^2|ψ̂(u)|^2)^p≤(1/|ψ(u)|^2+|ψ̂(u)-ψ(u)|^2/|ψ̂(u)|^2|ψ(u)|^2)^p = 1/|ψ(u)|^2p(1+|ψ̂(u)-ψ(u)|^2 /|ψ̂(u)|^2)^p. Applying the binomial theorem and |ψ̂(u)|≥ N^-1/2 we get 1/|ψ(u)|^2p(1+|ψ̂(u)-ψ(u)|^2 /|ψ̂(u)|^2)^p = 1/|ψ(u)|^2p∑_k=0^ppk|ψ̂(u)-ψ(u)|^2k/|ψ̂(u)|^2k≤1/|ψ(u)|^2p∑_k=0^ppk|ψ̂(u)-ψ(u)|^2k/N^-k. Therefore, [|ψ̂(u)-ψ(u)|^2p/|ψ̂ (u)|^2p|ψ(u)|^2p𝕀{|ψ̂(u)|≥ N^-1/2}] ≤1/|ψ(u)|^4p[∑_k=0^ppk|ψ̂(u)-ψ(u)|^2k+2pN^k]≤1/|ψ(u)|^4p[∑_k=0^p pkC_2k+2p N^-k-pN^k] = 𝒪(N^-p/|ψ(u)|^4p). So all in all, it holds |𝕀{|ψ̂(u)|≥ N^-1/2}/ψ̂(u)-1/ψ(u)|^2p≤1/|ψ(u)|^2pP(|ψ̂(u)|<N^-1/2)+[|ψ̂(u)- ψ(u)|^2p/|ψ̂ (u)|^2p|ψ(u)|^2p𝕀{|ψ̂(u)|≥ N^-1/2}] = 𝒪(N^-p/|ψ(u)|^4p), that concludes the proof. Now we can finalize the proof of Theorem <ref>. 
Note that g_1 -g_1,l is orthogonal to ĝ_1,l-g_1,l, since ⟨ g_1-g_1,l,ĝ_1,l-g_1,l⟩ = ⟨ g_1,ĝ_1,l⟩ - ⟨ g_1, g_1,l⟩ - ⟨ g_1,l,ĝ_1,l⟩ + ⟨ g_1,l, g_1,l⟩ =1/2π(⟨[g_1],[ĝ_1,l]⟩ - ⟨[g_1], [g_1,l]⟩ - ⟨[g_1,l],[ĝ_1,l] ⟩ + ⟨[g_1,l],[g_1,l]⟩)=0 due to isometry property ofin L^2().By Pythagorean theorem we get ‖ g_1 - ĝ_1,l‖_2^2 = ‖ g_1 - g_1,l‖_2^2+‖ g_1,l-ĝ_1,l‖_2^2, and the second term can further be determined by ‖ĝ_1,l-g_1,l‖_2^2 = 1/2π‖[ĝ_1,l]-[g_1,l]‖_2^2 = 1/2π∫_-π l^π l|θ̂(x)/ψ̃ (x)-θ(x)/ψ(x)|^2dx. Furthermore, ‖ĝ_1,l-g_1,l‖_·^2 = 1/2π∫_-π l^π l|θ̂(x)/ψ̃(x) -θ̂(x)/ψ(x)+θ̂(x)/ψ(x)-θ(x)/ψ(x)|^2dx≤1/π[∫_-π l^π l|θ̂(x)(1/ψ̃(x)- 1/ψ(x))|^2dx + ∫_-π l^π l|θ̂(x)-θ(x)|^2/|ψ(x)|^2dx]=1/π[∫_-π l^π l[ |θ̂(x)-θ(x)+θ(x)|^2|1/ψ̃(x)- 1/ψ(x)|^2]dx + ∫_-π l^π l|θ̂(x)-θ(x)|^2/|ψ(x)|^2dx]≤1/π[2∫_-π l^π l[ |θ̂(x)-θ(x)|^2|1/ψ̃(x)- 1/ψ(x)|^2]dx + 2∫_-π l^π l[ |θ(x)|^2|1/ψ̃(x)- 1/ψ(x)|^2]dx + ∫_-π l^π l|θ̂(x)-θ(x)|^2/|ψ(x)|^2dx]= 2/π[∫_-π l^π l[ |θ̂(x)-θ(x)|^2|1/ψ̃(x)- 1/ψ(x)|^2]dx_=:(I) + ∫_-π l^π l|[g_1](x)·ψ(x)|^2|1/ψ̃(x)- 1/ψ(x)|^2dx_=:(II) + ∫_-π l^π l|θ̂(x)-θ(x)|^2/2|ψ(x)|^2dx_=:(III)]. First we calculate expression (I). Using the Cauchy-Schwarz inequality and applying Lemma <ref> and Lemma <ref> it holds [ |θ̂(x)-θ(x)|^2|1/ψ̃(x)- 1/ψ(x)|^2] ≤√(|θ̂(x)-θ(x)|^4)√(|1/ψ̃(x)- 1/ψ(x)|^4)≤√(c_1· c_2/N^2·|Y_0|^4/|ψ(x)|^4), where c_1,c_2>0 are some constants. Now we get ∫_-π l^π l[ |θ̂(x)-θ(x)|^2|1/ψ̃(x)- 1/ψ(x)|^2]dx ≤√(c_1· c_2)/N√(|Y_0|^4)∫_-π l^π l1/|ψ(x)|^2dx. For the second term (II) we use again Lemma <ref>. Then it holds |1/ψ̃(u)-1/ψ(u)|^2≤1/N·c/|ψ(x)|^4, for some c>0. Using this, we get ∫_-π l^π l|[g_1](x)·ψ(x)|^2|1/ψ̃(x)- 1/ψ(x)|^2dx ≤c/N∫_-π l^π l|[g_1](x)|^21/|ψ(x)|^2dx ≤c/N∫_-π l^π l(∫_|g_1(x)|dx)^2/|ψ(x)|^2dx = c/N‖ g_1‖_1^2∫_-π l^π l1/|ψ(x)|^2dx. Part (III) can be estimated by Ljapunov inequality and Lemma <ref> as |θ̂(x)-θ(x)|^2 ≤( |θ̂(x)-θ(x)|^4)^1/2≤√(C)/N√(|Y_0|^4). So putting all these results together, it follows for some constant K>0 that ‖ g_1-ĝ_1,l‖_·^2≤‖ g_1-g_1,l‖_2^2 + 1/π[2√(c_1· c_2)/N√(|Y_0|^4)∫_-π l^π l1/|ψ(x)|^2dx . . + 2c/N‖ g_1‖_1^2∫_-π l^π l1/|ψ(x)|^2dx + √(C)/N√(|Y_0|^4)∫_-π l^π l1/|ψ(x)|^2dx]≤‖ g_1-g_1,l‖_2^2 + K/N(√(|Y_0|^4)+ ‖ g_1 ‖_1^2 ) ∫_-π l^π l1/|ψ(x)|^2dx, that completes the proof. Consider expression (<ref>) in the proof of Theorem <ref>. Using assumptions(H3)– (H4) there, it holds ∫_-π l^π l|[g_1](x)|^2/|ψ(x)|^2dx ≤1/c_ψ^2∫_|[g_1](x)|^2(1+x^2)^βdx ≤L/c_ψ^2.Since F is an isometry of L^2() one has g-g_1,l_2^2=[g_1] 𝕀{ |·|>π l}_2^2= ∫_|x|>π l |[g_1](x)|^2(1+x^2)^β(1+x^2)^-β dx ≤max_|x|>π l (1+x^2)^-β∫_|[g_1](x)|^2(1+x^2)^βdx ≤L/(1+(π l )^2)^β by assumption(H4). Using(H3) one gets ∫_-π l^π ldx/|ψ(x)|^2≤c_ψ∫_-π l^π l(1+x^2)^βdx ≤ 2c_ψπ l (1+(π l)^2)^β. Plugging this into (<ref>) yields the result. § ACKNOWLEDGEMENTW. Karcher and E. Spodarev are grateful to E. V. Jensen for her hospitality during their stay at Aarhus University in February 2011 where this research was initiated. The authors thank M. Reiß for the fruitful discussions on the subject of the paper. They also acknowledge the valuable help of O. Moreva in implementing the algorithms of Section <ref>.unsrt | http://arxiv.org/abs/1705.09542v1 | {
"authors": [
"Wolfgang Karcher",
"Stefan Roth",
"Evgeny Spodarev",
"Corinna Walk"
],
"categories": [
"math.ST",
"stat.TH"
],
"primary_category": "math.ST",
"published": "20170526114927",
"title": "An Inverse Problem for Infinitely Divisible Moving Average Random Fields"
} |
[email protected]@[email protected]@[email protected] ^a Department of Physics, Gurukula Kangri Vishwavidyalaya,Haridwar 249 404, Uttarakhand, India ^b Kanoria PG Mahila Mahavidyalaya, Jaipur 302004, Rajasthan, India ^c Department of Physics, Govt. Degree College Narendranagar,Tehri Garhwal - 249 175, Uttarakhand, India^d Department of Physics, HNB Garhwal University,Srinagar Garhwal 246 174, India We study the motion of massless test particles in a five dimensional (5D) Myers-Perry black hole spacetime with two spin parameters. The behaviour of the effective potential in view of different values of black hole parameters is discussed in the equatorial plane. The frequency shift of photons is calculated which is found to depend on the spin parameter of black hole and the observed redshift is discussed accordingly. The deflection angle and the strong deflection limit coefficients are also calculated and their behaviour with the spin parameters is analysed in detail. It is observed that the behaviour of both deflection angle and strong field coefficient differs from Kerr black hole spacetime in four dimensions (4D) in General Relativity (GR) which is mainly due to the presence of two spin parameters in higher dimension. Strong lensing and Observables around 5D Myers-Perry black hole spacetimeK D Purohit ^d December 30, 2023 ============================================================================§ INTRODUCTION The Black holes (BHs) in Einstein's General Relativity (GR) are one of the most strangest and mysteriousobjects in the universe <cit.>. The most general spherically symmetric, vacuum solution of the Einstein field equations in GR is the well known Schwarzschild BH spacetime <cit.> in four dimensions.The study of Schwarzschild BH solution and its applications to the solar system is one of the accurate tests to verify the predictions made by GR.Further, a static solution to the Einstein-Maxwell field equations, which corresponds to the gravitational field of a charged, non-rotating, spherically symmetric body is the Reissner-Nordström spacetime <cit.>. The rotating generalization of the Schwarzschild black hole (BH) spacetime is Kerr BH spacetime in GR while the spacetime geometry in the region surrounding by a charged rotating BH is represented by the Kerr-Newman BH spacetime as a solution of Einstein-Maxwell equations in GR <cit.>.The GR which has revolutionized our understanding of the universe as a whole is now more than one hundred years old and the recent advancements in understanding the gravitational collapse and nature of BH solutions in diversified scenario is remarkable <cit.>.The deflection of light ray in a gravitational field is one of the crucial predictions of GR and the gravitational lensing is an important phenomena resulting due to the bending of light in the gravitational field of a massive object while passing close to that object. The strong gravitational lensing is caused by a compact objects like BHs with a photon sphere has distinctive features. It is worth mentioning that when the photons pass close to the photon sphere, the deflection angles become so large that an observer would detect two infinite sets of faint relativistic images on each side of the BH, which are produced by photons that make complete loops around the BH before reaching to the observer. 
These relativistic images may therefore provide us not only some important signatures about BHs in the universe, but might also helpful in verification of the alternative theories of gravity in their strong field regime. The gravitational lensing in weak field approximation studies the properties of galaxies and stars, but when BH is treated as a lens, it is no longer valid and the strong field limit is needed which is referred as strong deflection limit. Thus, it acts as a powerful indicator of the physical nature of the central celestial objects and then has been used to study in various theories of gravity. The study of the strong field limit lensing due to different BHs have received considerable attentions in recent years <cit.>. The development of lensing theory in the strong-field regime started with the study of gravitational lensing due to a Schwarzschild BH spacetime <cit.> and it is also shown that a supermassive BH like at the center of our Galaxy may be a suitable lens candidate <cit.>.In recent years, various interesting BH solutions in higher spacetime dimensions, especially in five dimensions <cit.>, have been the subject of intensive research, motivated by various ideas in brane-world cosmology, string theory and gauge/gravity duality <cit.>. It is worth to note that the uniqueness theorem does not hold in higher dimensions due to the fact that there are more degrees of freedom as compared to the usual four dimensions in GR. However, the discovery of black-ring solutions in five dimensions asserts that the non-trivial topologies are allowed in higher dimensions <cit.>. In particular, the Myers-Perry black hole (MPBH) spacetime <cit.> is a higher dimensional generalization of the four-dimensional Kerr BH spacetime in GR.The study of geodesic structure of massless particles in a given BH spacetime is one of the important ways to understand the gravitational field around a BH spacetime.The geodesic motion around various BH spacetimes in a variety of contexts (for timelike as well as null geodesics), both in GR and in alternating theories of gravity, are widely studied time and again <cit.>. The motion of both massive and massless particles in Myers-Perry <cit.> and Myers-Perry anti-de sitter BH spacetime <cit.> with equal rotation parameters has been studied in detail. Further, the complete set of analytical solutions of the geodesic equations in the general 5D Myers-Perry spacetime in terms of the Weierstrass function, for the case of two independent angular momenta, have been derived and discussed in <cit.>. Deimer et. al. also studied massive as well as massless test particles in the general 5D MPBH spacetime <cit.>. The main objective of this paper is to study the strong lensing in a 5D MPBH spacetime. We have calculated the deflection angle and other strong lensing parameters by using Bozza's method and the variation of deflection angle with spin parameter is investigated. We have used the units that fix the speed of light and the gravitational constant via 8π G = c^4 = 1. The paper is organised as follows: In Section II, the first integral of the geodesic equations and the effective potential in 5D MPBH spacetime are discussed. We have discussed the optical properties like frequency shift and the cone of avoidance from the null geodesics in Section III. The gravitational lensing aspects in the strong field limit is then discussed in detail in Section IV. Finally, the results obtained are summarised in Section V. 
§ EQUATIONS OF MOTION IN 5D MPBH SPACETIME To study the geodesics and strong lensing in 5D rotating MPBH spacetime background, we begin with the following metric of MPBH spacetime in the Boyer-Lindquist coordinates <cit.>,ds^2= ρ^2/ 4Δ dx^2+ρ^2 dθ^2 - dt^2 +(x+a^2) sin^2θdϕ^2 +(x+b^2)cos^2θ dψ^2 + 2m/ρ^2[dt+a sin^2θ dϕ+ b cos^2θ dψ]^2,with ρ^2 and Δ are defined asρ^2=x+a^2cos^2θ+b^2 sin^2θ , Δ=(x+a^2)(x+b^2)-2m x.The metric (<ref>) is singular when Δ=g_rr=0 and ρ^2=0. Here a and b are two spin parameters, and ϕ and ψ are angles bounded by the limit 0≤ϕ≤2π and 0≤ψ≤π/2. Following <cit.>, we use the coordinate x=r^2 instead of the radius r in order to simplify the calculations. It is worth noticing here that the metric (<ref>) reduces to 5D Tangherlini solution <cit.> for a=b=0. For the horizon structure of MPBH sapcetime with Δ=0, we obtainx_±=1/2[2 m-(a^2 + b^2)±√([2 m-(a^2 + b^2)]^2 - 4 a^2b^2)] . Here, x_+ and x_- denotes the outer horizon and the inner horizon respectively. The metric (<ref>) describes non-extremal BH for x_+>x_- and when x_+=x_-, one can obtain an extremal BH spacetime.The horizon exists when a^2+b^2<2m and [2m-(a^2+b^2)]^2≥4 a^2b^2. For the metric (<ref>) √(-g)=1/2sinθcosθρ^2.To study the geodesic structure of the 5D rotating Myers-Perry black hole, we begin with the Lagrangian which readsL = 1/2 g_μνẋ^μẋ^ν, where an overdot denotes the partial derivative with respect to an affine parameter. Therefore, the momenta calculated for the metric (<ref>) are:p_t= (-1+r_0^2/ρ^2)ṫ+r_0^2/ρ^2ϕ̇+r_0^2 b cos^2θ/ρ^2ψ̇, p_ϕ = r_0^2 a sin^2θ/ρ^2ṫ+(x+a^2+r_0^2 a^2 sin^2θ/ρ^2)sin^2θϕ̇+ r_0^2 a b sin^2θcos^2θ/ρ^2ψ̇, p_ψ = r_0^2 b cos^2θ/ρ^2ṫ + r_0^2 a b sin^2θcos^2θ/ρ^2ϕ̇ + (x+b^2+r_0^2 b^2 cos^2θ/ρ^2)cos^2θψ̇, p_x= ρ^2/4Δẋ, p_θ = ρ^2 θ̇, where p_t = -E, p_ϕ = L_ϕ and p_ψ=L_ψ correspond toenergy and angular momentum with respect to the respective rotation axisrespectively. Now considering the case for the equatorial plane, i.e., θ=π/2, which results in the conserved quantity along ψ direction, i.e., L_ψ=0. Henceforth, we will be using L_ϕ = L in our calculation. Thus the metric (<ref>) in the equatorial plane reads asds^2 = - A(x) dt^2 + B (x) dx^2 + C(x) dϕ^2 - D (x) dt dϕ,where the metric coefficients are described as below,A(x)=(1-2 m/ρ^2), B(x)= ρ^2/4 Δ, C(x)=x + a^2 + 2 m a^2/ρ^2,D(x)= -4 m a/ρ^2.The first integral of geodesic equations may then be expressed in terms of the above mentioned metric coefficients <cit.>, in the following form,ṫ = 4 C(x) E - 2 D(x) L/4 A(x) C(x) + D(x)^2,ẋ = ± 2 √(C(x) E^2 - D(x) EL - A(x) L^2/B (4 A(x) C(x) + D(x)^2)),ϕ̇ = 2 D(x) E + 4 A(x)L/4A(x)C(x) + D(x)^2.For null geodesics, ẋ from Eq. (<ref>), can be reconstructed asẋ^2 + V_eff = 0,which gives,V_eff =- 4 [C(x)E^2 - D(x)EL - A(x) L^2/B(x) (4 A(x)C(x) + D(x)^2)], = 1/x+b^2( -4 E^2(2 m a^2 + x^2 + x b^2 + a^2 x + a^2 b^2) - 16 m a E L + 4 L^2(x+ b^2 - 2 m)). The general behavior of effective potential as a function of x for different values of rotation parameter is presented in Fig. (<ref>). In particular, Fig. (<ref>a) represents the variation of potential with the spin parameter b forfixed value of a (=0.1) while Fig. (<ref>b) represents the variation of the potential with spin parameter a forfixed value of b (=0.1). The effective potential shows a maxima which corresponds to an unstable circular orbit. It is also observed that with the increase in the value of parameter b, the maximum of the effective potential is shifting towards the left (see Fig. 
(<ref>a)), i.e., the circular orbits also shift towards the central object accordingly whereas with the increase in the value of spin parameters a at fixed b, the peak is shifting towards the right (see Fig.(<ref>b)), which signifies the shifting of circular orbit away from the central object. § OBSERVABLES FOR PHOTONS In order to discuss the optical properties in 5D MPBH spacetime, the frequency shift and cone of avoidance are discussed below:§.§ Frequency shiftThe angular frequency associated with photons in a circular geodesic is one of the meaningful physical quantity. The angular frequency relative to a distant observer for unequal spin parameters is defined as below,Ω=dϕ/dt.Thus, using Eq. (<ref>) and Eq. (<ref>), the angular frequency given by Eq. (<ref>) can be calculated as below,Ω =(x+b^2-2 m) d - 2 m a/( x^2 + (a^2 + b^2 - 2 m) x + a^2 b^2 + 2 m a ) d + 2 m x + 2 m a^2 ,where d=L/E. The frequency shift may however be expressed as g = k_μu_o^μ/k_μu_e^μ, where k_μ are the covariant components of the photon four-momentum and u_o^μ (u_e^μ, respectively) are the contravariant components of the four-velocity of the observer (emitter). In case of static distant observer, the four-velocity reads u_o = (1,0,0,0,0) and in the case of emitter the four-velocity reads as u_e = (u_e^t,0,0,u_e^ϕ,0). The frequency shift then acquires the following form,g= 1/u_e^t(1- dΩ). The temporal component of the emitter four-velocity can be obtained from the norm of the four-velocity:u_e^t = [1-2m/ρ^2-(x+a^2+2ma^2/ρ^2)Ω^2]^-1/2,such that the expression for frequency shift now reads as,g = [1-2m/ρ^2-(x+a^2+2ma^2/ρ^2)Ω^2]^1/2/1-dΩ. Here, by considering the value of a in between 0 and 1, one automatically has a range for b (from the expressions a^2+b^2 <2m and [2m-(a^2+b^2)]^2 ≥ 4a^2b^2) with m=1 as, -0.4 ≥ b ≥ 0.4 or -2.412 ≥ b ≥ 2.412. The behaviour of the frequency shift for different values of different spin parameters is illustrated in Figs. (<ref>)- (<ref>). In particular in Fig. (<ref>), we have presented the frequency shift for different values of spin parameter a and b while keeping the other parameter constant respectively. The Fig. (<ref>a), shows the variation of frequency shift with spin parameter b at different values of spin parameter a, whereas Fig. (<ref>b), shows the variation of frequency shift with a at different values of b. As the frequency shift increases, the observed frequency of the photon decreases which in turn gives an equivalent increase in the corresponding wavelength of the photon. Hence, for the increasing frequency shifts, the photons get redshifted. One may note that the stronger is the gravitational field of a source, larger will be the energy loss of an incoming photon and also larger will be the observed redshift.From Fig. (<ref>), it can also be observed that for a particular value of parameter a, the redshift for photons around a MPBH spacetime increases with an increase in the value of spin parameter b while it decreases with an increase in the value of parameter a. It therefore signifies the strength of gravitational field which depends strongly on both the parameters at a fixed value of x. In Fig. (<ref>) andFig. (<ref>), the variation of frequency shift with x for different values of spin parameters a and b is shown respectively. It is also observedthat for a particular value of spin parameter b, there is a decrease in the redshift for the photons with an increase in the value of x. 
However, for fixed value of a, the redshift first increases and then decreases with x.§.§ Null geodesics in the observers frameLet us consider a point source at a given distance r from the centre emits light isotropically into all directions.A part of the light will then be captured by the BH, while another part will escape from the vicinity of BH. It is clear that the critical orbits are at the limit of infall or escape and therefore such orbits may define a cone with half opening angleψ in the observer's frame <cit.> such that,tanψ = k_x/k_ϕ.Here, k_x and k_ϕ are the components of the null vector field k = k^μ∂_μ for null geodesic in this spacetime. In general case, the null vector field k has four components (k^t, k^x, k^θ, k^ϕ). Using the equations for constant of motion, i.e., E = -g_tμ k^μ and L = g_ϕμ k^μ, we obtain the relations,k^t = G_t(E,L)/G,k^ϕ = G_ϕ(E,L)/G,where, G =A(x) C(x) +D(x)^2/4 and the functions G_t and G_ϕ are defined as below,G_t(E,L) =C(x) E -D(x) L/2,G_ϕ(E,L) = D(x) E/2+A(x) L.Using the constraint k· k = 0 (i.e (k^t)^2 + (k^x)^2 + (k^θ)^2 + (k^ϕ)^2 = 0) for null geodesics, we can then obtain an expression for k^x in equatorial plane,(k^x)^2= W/B(x) G,where W = C(x) E^2 - D(x) E L - A(x) L^2.Further, k_α = g_μν k^μ u^ν_(α) leads to the components of k in observers frame as follows,k_x = G_t/√(- G C(x))sinh q- ( √(-W/G)cosχ - L/√(-C(x))sinχ) cosh q, k_θ = 0,k_ϕ = √(-W/ G )sinχ + L/√(-C(x))cosχ.Using Eqs. (<ref>) and (<ref>) in Eq.(<ref>), one can then obtaintanψ = G_t/√(G)sinh q - (√( WC(x)/G)cosχ - L sinχ)cosh q/(√( WC(x)/G)sinχ + L cosχ).Here, we have considered the null geodesics in observer's frame moving with constant acceleration. However an arbitrary observer in the spacetime (<ref>) is determined by a velocity vector field u, (i.e., u^2 = -1 or +1, respectively for timelike and null geodesics) as,u = α∂_t+ β∂_x + γ∂_ϕ.The vector field can be parametrized by a pair of function (q(x), χ(x)) (see <cit.>) as given below,u= √(-C(x)/G)cosh q∂_t + 1/√(-B(x))sinh q cosχ∂_x + 1/√(-C(x))(sinh q sinχ - D(x)/2 √(G)cosh q )∂_ϕ.The other basis vectors u_i are,u_x= √(-C(x)/G)sinh q∂_t + 1/√(-B(x))cosh q cosχ∂_x+ 1/√(-C(x))(cosh q sinχ - D(x)/2 √(G)sinh q)∂_ϕ,u_ϕ =- sinχ/√(-B(x))∂_x + cosχ/√(-C(x))∂_ϕ.The (equatorial) trajectories of the observer (<ref>) are given by following equations,dt/dλ = √(-C(x)/G)cosh q, dx/dλ = √(1/-B(x))sinh q cosχ, dθ/dλ =0, dϕ/dλ = 1/√(-C(x))(sinh q sinχ - D(x)/2 √(G)cosh q ),where λ is an affine parameter. From Eqs. (<ref>)-(<ref>), the static observer (forχ = π/2 and tanh q = D(x)/2 √(G)) is determined as follows,cosh q = 2 √(G)/√(4 G - D(x)^2),sinh q = D(x)/√(4 G - D(x)^2).The radial trajectories (θ = constant) are given by the conditions,tanh q sinχ = D(x)/2 √(G), χ≠ 0.For the radial trajectories of the observer in equatorial plane i.e., χ = π/2, tanh q= D(x)/2 √(G). However for χ = π/4, the relation sinh q = D(x)/√(2 G)cosh q holds and therefore the observer's velocity vector u takes the following form,u = √(-C(x)/G)cosh q ∂_t + D(x)/2 √(-B(x) G)cosh q ∂_x ,and from the condition of velocity vector field one can easily obtain cosh q = √(G)/√( A(x) C(x) - D(x)^2/4). Thus the expression (<ref>) now reads as, tanψ = 1/√(G (A(x) C(x) - D^2(x)/4 ))×G_t D(x) - ( √(W C(x)/G) -L) G/( √(W C(x)/G) +L), which clearly indicates that the angle ψ depends on the two parameters of the null geodesics, i.e., E and L. § STRONG FIELD LENSING BY 5D MPBH SPACETIMEIn this section, we investigate the strong field lensing by a 5D MPBH spacetime given by Eq. 
(<ref>) for the case where both the observer and the source lie in the equatorial plane i.e., θ = π/2 <cit.>. The impact parameter is related to the minimum distance reached by the photon.In general, a light ray coming from infinity approaches the BH, reaches the minimum distance and then leaves again towards infinity. Using the geodesic equations, we find an implicit relation between angular momentum and the closest approach distance.Here for simplicity, we are considering E = 1, and at the minimum distance (say x_0) of photon trajectory (where V_eff = 0), we have <cit.>,L= - D(x_0) + √(4 A(x_0) C(x_0) + D(x_0)^2(x))/2 A(x_0), = 2 m a - ρ √(x_0ρ^2+a^2ρ^2-2 m x_0)/2 m - ρ^2.Now following <cit.>, the condition for the radius of photon sphere is,A(x_p) C^'(x_p) - A^'(x_p) C(x_p) +L(x_p) (A^'(x_p)D(x_p)- A(x_p) D^'(x_p)) = 0.Thus, the equation for the radius of photon sphere takes the following form,x_p^4+ ( -8 m+4 b^2) x_p^3 + ( 6 b^4-20mb^2+16 m^2-8 ma^2)x_p^2 - ( 16 mb^ 4- 4 b^6 - 16 b^2m^2 + 16 b^2ma^2) x_p + b^8-8 b^4ma^2 -4 b^6m +4 b^4m^2=0. In the limit a=0 and b=0, the radius of the photon sphere comes out as x_p = 4 m, i.e., the radius of photon sphere for the Tangherlini spacetime <cit.>.The radius of the photon sphere is plotted with respect to the spin parameter b in Fig. (<ref>) for different values of spin parameter a. One can easily notice from the plots that on increasing the value of spin parameter a the radius of photon sphere decreases.Depending on the direction of the rotation of BH there are two types of photon spheres, one for the photons winding in the same direction of rotation as the BH known as direct photons and the other one for photons winding in the opposite direction known as retrograde photons. One may also notice that both direct and retrograde photons have same impact, i.e., in both the cases the photons are not easily captured by increasing the value of spin parameter a.The deflection angle for photons coming from infinity can be written as,α(x_0) = ϕ(x_0) - π,where ϕ(x_0) is the total azimuthal angle, which evaluates to π for a straight line and becomes largerwith the bending of light ray in gravitational field. As the distance of closest approach x_0 decreases, the deflection angle increases accordingly. When x_0 reaches a minimum value, i.e.,the radius of the photon sphere, the deflection angle becomes very large, and a photon will be captured by the BH. Now, using Eq. (<ref>), azimuthal angle is given by,ϕ(x_0)=2 ∫_x_0^∞dϕ/dxdx, =2 ∫_x_0^∞√(B(x) | A(x_0) |) (D(x) + 2 L A(x))/√(4 A(x) C(x) + D^2(x))√(sgn(A(x_0)) P) dx,where,P= C(x) A(x_0) - A(x) C(x_0)+ L [A(x) D(x_0)- A(x_0) D(x)]. We can find the behaviour of the deflection angle very close to the photon sphere following Bozza <cit.>. The divergent integral is first split into two parts, one of which contains the divergence and the other is regular. Both these parts are expanded around the radius of photon sphere and approximated with the leading term. We first define two new variables y and z as,y = A(x), z = y-y_0/1-y_0,where y_0=A(x_0). The total azimuthal angle in terms of two new variables can be expressed as,ϕ(r_0)=∫_0^1R(z,x_0)f(z,x_0)dz,withR(z,x_0) = 2(1-y_0)/A'(x)√(B(x)|A(x_0)|)(D(x) + 2 L A(x))/√(4 A(x) C^2(x) + C(x) D^2(x)),f(z,x_0)=√(C(x))/√(X).Where, X = C(x) A(x_0) - A(x) C(x_0) + L (A(x) D(x_0)-A(x_0) D(x)). The function R(z,x_0) is regular for all the values of z and x_0, whereas f(z,x_0) diverges at z=0. 
So, the integral (<ref>) can be separated into two partsϕ(x_0)=ϕ_R(x_0)+ϕ_D(x_0),withϕ_D(x_0)=∫_0^1R(0,x_c)f_0(z,x_0)dz,and the regular partϕ_R(x_0)=∫_0^1g(z,x_0)dz,with g(z,x_0) = R(z,x_0)f(z,x_0) - R(0,x_c) f_0(z,x_0). In order to find the divergence of the integrand in Eq.(<ref>), we expand the argument of the square root of f(z,x_0) to second order in z:f(z,x_0)∼ f_0(z,x_0)=1/√(α z+β z^2+𝒪(z^3)),whereα = (1-A(x_0))/A'(x_0)C(x_0)×(A(x_0)C'(x_0)-A'(x_0)C(x_0) + L(A'(x_0)D(x_0)-A(x_0)D'(x_0))), β = (1-A(x_0))^2/2C(x_0)^2A'^3(x_0)× (2C(x_0)C'(x_0))A'^2(x_0)+(C(x_0)C”(x_0) -2C'^2(x_0))A(x_0)A'(x_0)-C(x_0)C'(x_0)A(x_0)A”(x_0) +L[A(x_0)C(x_0)(A”(x_0)D'(x_0)-A'(x_0)D”(x_0))+2A'(x_0)C'(x_0)(A(x_0)D'(x_0)-A'(x_0)D(x_0))]).Here, prime denotes the derivative with respect tox. At x_0=x_ps, α vanishes. The outermost solution of α=0 defines the photon sphere. In the strong deflection limit, the expression for the deflection angle <cit.> around the radius of the photon sphere reads as follows,α(u)=-a̅log(u/u(x_p)-1) +b̅+𝒪(u-u(x_p)).The strong field coefficients u(x_p), a̅ and b̅ are then given byu(x_p) = L|_x_0=x_p,a̅ = R(0,x_p)/2√(β_p) =√(2A(x_p)B(x_p)/A(x_p)C(x_p)”-A(x_p)”C(x_p)+u(x_p) Q),b̅ = -π+b_R+a̅log(4β_pC(x_p)/u(x_p)|A(x_p)|(D(x_p) +2u(x_p)A(x_p))),where, Q = A(x_p)” D(x_p) - A(x_p) D(x_p)” and b_R goes to zero for the case of 5D MPBH spacetime which is due to the presence of two spin parameters. Here, we have plotted the deflection angle and the parameters a̅ and b̅, with respect to both the spin parameters a and b. In Fig. (<ref>), the deflection angle with respect to spin parameter b for different values ofspin parameter a is graphically presented and it can be observed that the deflection angle first increases followed by a sharp decrease. The decrease in the bending angle depicts the weakening of the force of gravity in the case of 5D MPBH spacetime. It is also observed that in case of Kerr BH, the deflection angle monotonically increases and the variation of deflection angle is different for direct and retrograde photons <cit.>, whereas in case of 5D MPBH, the nature of deflection angle is identical for both, i.e., direct and retrograde photons.The coefficients of strong deflection limit a̅ andb̅ are also illustrated in Fig. (<ref>) and one can easily notice that for a fixed value of rotation parameter a, a̅ increases and b̅ decreases with an increase in the value of rotation parameter b. One cn also notice that that both the deflection coefficients diverge at the critical point which corresponds to an extremal black hole. We believe that this study might be useful for the investigation of relativistic images in context of mentioned BH spacetime in higher dimension. § SUMMARY AND CONCLUSIONSIn this article, we have investigated the frequency shift, cone of avoidance and strong gravitational lensing in the background of a 5D MPBH spacetime. Some of the important results obtained are summarised below. (i) The effective potential has a maximum which corresponds to an unstable circular orbit. It is observed that with the increase in the value of spin parameters a and b circular orbit shifts towards and away from the central object respectively.(ii) The frequency shift depends on the spin parameters a and b. The redshift becomes stronger with the increase in the value of spin parameter a whereas blueshift becomes stronger with the increase in the value of spin parameter b. 
For positive frequency shift the spin parameter a strengthens the gravitational field of a 5D MPBH spacetime.(iii) The behaviour of radius of photon sphere indicates that the photons are not easily captured with increasing the value of spin parameter a in case of both type of photon i.e., the direct and retrograde photon.(iv) The deflection angle first increases and then decreases with the increase in the value of spin parameter b for fixed values of parameter a. There is a significant effect of spin on deflection angle. A decrease in the bending angle shows the decrease in the gravitational strength of the MPBH spacetime. However, the deflection angle and strong field coefficients both changes in a similar way qualitatively with an increase in the spin parameters a and b.(v) The behaviour of deflection angle and strong field coefficients in 5D MPBH spacetime differs from the Kerr BH spacetime in 4D in GR due to the presence of two spin parameters and the observable quantities are more complex because the spin (a and b) breaks the spherical symmetry of the system. We believe that the results obtained herewith would be useful in the study of gravitational lensing around the rotating BHs in higher dimensions in future.§ ACKNOWLEDGMENTSThe authors (RSK and HN) would like to thank the Department of Science and Technology (DST), New Delhi for the financial support through grant no. SR/FTP/PS-31/2009. The author UP would like to thank IUCAA, Pune for academic visits and the Department of Physics, Gurukula Kangri Vishwavidyalay, Haridwar for providing the necessary support during the course of this work been done. The authors (HN and RU) are also thankful to IUCAA, Pune for support under visiting associateship program during their stay at IUCCA, where a part of this study was performed. The authors (RU and HN) would also like to thank the Science and Engineering Research Board (SERB), DST, New Delhi for financial support through the grant number EMR/2017/000339. 99 har03Hartle, J. B.: Gravity: An Introduction To Einstein's General Relativity. Pearson Education Inc., Singapore, 2003. wal84Wald, R. M.: General Relativity. University of Chicago Press, Chicago, USA, 1984. psj97Joshi, P. S.: Global aspects in gravitation and cosmology. Oxford University Press, Oxford, UK, 1997. wei04Weinberg, S. : Gravitation and Cosmology: Principles and Applications of General Theory of relativity. Jhon Wiley and Sons (Asia), Singapore 2004. ep04Poisson,E.: A relativists' toolkit: the mathematics of black hole mechanics. Cambridge University Press, Cambridge 2004. cha83Chandrasekhar, S.: The Mathematical Theory of Black Holes. Oxford Uni. Press, New York, 1983. sch83Schutz, B. F.: A first course in general relativity. Cambridge University press, Cambridge, 1983. eva13 Hackmann, E., and Xu, H.: Charged particle motion in Kerr-Newmann space-times. Phys. Rev. D87, 124030 (2013). SMCCarroll, S. M.: Lecture Notes on General Relativity; arXiv: 9712019[gr-qc]. TPPadmanabhan, T.: One Hundred Years of General Relativity: Summary, Status and Prospectus. arXiv: 1512.06672 [gr-qc]. GFREllis, G. F. R.: 100 Years of General Relativity. arXiv: 1509.01772v1[gr-qc]. AAAshtekar, A.: General Relativity and Gravitation: A Centennial Perspective. arXiv: 1409.5823[gr-qc]. sft Frittelly, S., Kling, T. P., and Newman, E. T.: Space-time perspective of Schwarzschild lensing.Phys. Rev. D 61, 064021 (2000). ksvVirbhadra, K. S., and Ellis, G. F. R.:Schwarzschild black hole lensing. Phys.Rev. D 62, 084003 (2000). schtTangherlini, F. 
R.:Schwarzschild field in n dimensions and the dimensionality of space problem. Nuovo Cim.27, 636 (1963). vbBozza, V.: Quasiequatorial gravitational lensing by spinning black holes in the strong field limit. Phys. Rev. D 67, 103006 (2003). vb1Bozza, V.:Gravitational lensing in the strong field limit. Phys. Rev. D 66, 103001 (2002). iye09 Iyer, S. V., and Hansen, E. C.: Light's Bending Angle in the Equatorial Plane of a Kerr Black Hole . Phys. Rev. D80, 124023 (2009). scht111Tsukamoto, N., Kitamura, T., Nakajima, K., and Asada, H.: Gravitatonal lensing in Tangherlini spacetime in the weak gravitational field and the strong gravitational field. Phys. Rev. D 90, 064043 (2014). upsggPapnoi, U., Atamurotov,F., Ghosh, S. G., and Ahmedov, B.: Shadow of five-dimensional rotating Myers-Perry black hole. Phys. Rev. D90, 024073 (2014). horowitzHorowitz, G. T.: Black Holes in Higher Dimensions. (Cambridge University Press, 2012). Emparan Emparan, R., and Reall, H. S.: A Rotating black ring solution in five-dimensions. Phys. Rev. Lett. 88, 101101 (2002). MPMyers, R. C., and Perry, M. J.: Black Holes in Higher Dimensional Space-Times. Ann. of Phys. 172, 304 (1986). dab97 Dabrowski, M. P., and Larsen, A. L.: Null strings in Schwarzschild space-time. Phys. Rev. D55, 6409-6414 (1997). fern12Fernando, S.: Null geodesics of Charged Black Holes in String Theory. Phys. Rev. D85,024033(2012). fer12Fernando, S.: Schwarzschild black hole surrounded by quintessence: Null geodesics. Gen. Rel. Grav. 44, 1857 (2012). fer14Fernando, S., Meadows, S., and Reis, K.: Null trajectories and bending of light in charged black holes with quintessence. arXiv: 1411.3192 [gr-qc](2014). nor05 Cruz, N., Olivares, M., and Villanueva, R. J.: The geodesic structure of the Schwarzschild Anti-de Sitter black hole. Class. Quant. Grav. 22, 1167-1190 (2005). pug11Pugliese, D., Quevedo, H., and Ruffini, R.: Circular motion of neutral test particles in Reissner-Nordström spacetime. Phys. Rev. D43, 3140 (1991). hio08Hioki, K., and Miyamoto, U.: Hidden symmetries, null geodesics, and photon capture in the Sen black hole. Phys. Rev. D78, 044007 (2008). rasprdUniyal, R., Nandan, H., Biswas, A., and Purohit, K. D.: Geodesic motion in R-charged black hole spacetimes. Phys. Rev.D92 , 084023 (2015). gus15 Gusin, P., Kusmierz, B., and Radosz, A.: Observers in spacetimes with spherical and axial symmetries. Quantum Physics 1 (3), 34-43 (2015). bha03Bhadra, A.: Gravitational lensing by a charged black hole of string theory. Phys. Rev. D67, 103009 (2003). pra11Pradhan, P., and Majumdar, P.: Circular Orbits in Extremal Reissner Nordstrom Spacetimes. Phys. Lett. A375, 474-479 (2011). stu14Stuchlik, Z., and Schee, J.: Circular geodesic of Bardeen and Ayon-Beato-Garcia regular black-hole and no-horizon spacetimes. Int. J. Mod. Phys. D24, 1550020 (2014). sch09 Schee, J., and Stuchlik, Z.: Profiles of emission lines generated by rings orbiting braneworld Kerr black holes. Gen. Rel. Grav. 41, 1795-1818 (2009). kol03Koley, R., Pal, S., and Kar, S.: Geodesics and geodesic deviation in a two-dimensional black hole. Am. J. Phys. 71, 1037 (2003). uni14Uniyal, R., Nandan, H., and Purohit, K., D.: Geodesic Motion in a Charged 2D Stringy Black Hole Spacetime. Mod. Phys. Lett. A29, 1450157 (2014). ras15 R. Uniyal, N. Chandrachani Devi, H. Nandan, and K. D. Purohit.: Geodesic Motion in Schwarzschild Spacetime Surrounded by Quintessence. Gen. Rel. Grav. 47, 16 (2015). eva10Hackmann, E.: Geodesic equations in black hole space-times with cosmological constant. Ph. D. 
Thesis, (University of Bremen, Germany)(2010). ru16Uniyal, R.: Geodesic congruences around various spacetime backgrounds. Ph. D. Thesis, (Gurukula Kangri Vishwavidyalaya, Haridwar, Uttarakhand, India)(2016). mak94Maki, T., and Shiraishi, K.: Motion of test particles around a charged dilatonic black hole. Class. Quant. Grav. 11, 227-238 (1994). fer03Fernando, S., Krug, D., and Curry, C.: Geodesic structure of static charged black hole solutions in 2+1 dimensions. Gen. Rel. Grav. 35, 1243-1261 (2003). eva08Hackmann, E., Kagramanova, V., Kunz, J., and Lammerzahl, C.: Analytic solutions of the geodesic equation in higher dimensional static spherically symmetric space-times. Phys. Rev. D78, 124018 (2008). nan08Dasgupta, A., Nandan, A., and Kar, S.: Kinematics of deformable media. Annals Phys. 323, 1621-1643 (2008). anv09Dasgupta, A., Nandan, H., and Kar, S.: Kinematics of flows on curved, deformable media. Int. J. Geom. Meth. Mod. Phys. 6, 645-666 (2009). das09 Dasgupta, A., Nandan, H., and Kar, S.: Kinematics of geodesic flows in stringy black hole backgrounds. Phys. Rev. D79, 124004 (2009). das12 Dasgupta, A., Nandan, H., and Kar, S.: Geodesic flows in rotating black hole backgrounds. Phys. Rev. D85, 104037 (2012). RefR11H. Nandan and R. Uniyal.: Geodesic flows in a Charged black hole spacetime with quintessence. Eur. Phys. J. C77, 552 (2017). ras17a R. Uniyal, H. Nandan and K. D. Purohit.: Null geodesics and observables around Kerr-Sen black hole. Class. Quantum. Grav. 35, 025003 (2018). gho10Ghosh, S., Kar, S., and Nandan, H.: Confinement of test particles in warped spacetimes. Phys. Rev. D82, 024040 (2010). rav15 Kuniyal, R. S., Uniyal, R., Nandan, H., and Zaidi, A.: Geodesic flows around charged black holes in two dimensions. Astrophys. Space Sci. 357, 92 (2015). fuji09Fujita, R., and Hikida, W.: Analytic solution of bound timelike geodesic orbits in Kerr spacetime. Class. Quantum Grav. 26, 135002 (2009). MP5DaKagramanova, V., and Reimers, S.: Analytic treatment of geodesics in five-dimensional Myers-Perry space-times. Phys. Rev. D86, 084029 (2012). MP5DbDiemer, V., Kunz, J., Lämmerzahl, C., and Reimers, S.: Dynamics of test particles in the general five-dimensional Myers-Perry spacetime. Phys. Rev. D89, 124026 (2014). MPADSDelsate, T., Rocha, J. V., and Santarelli, R.: Geodesic motion in equal angular momenta Myers-Perry-AdS spacetimes. Phys. Rev. D92, 084028 (2015). Fr03Frolov, V. P., and Stojkovic, D.: Particle and light motion in a space-time of a five-dimensional rotating black hole. Phys. Rev. D 68, 064011 (2003). | http://arxiv.org/abs/1705.09232v2 | {
"authors": [
"Ravi Shankar Kuniyal",
"Hemwati Nandan",
"Uma Papnoi",
"Rashmi Uniyal",
"K D Purohit"
],
"categories": [
"gr-qc"
],
"primary_category": "gr-qc",
"published": "20170525153556",
"title": "Strong lensing and Observables around 5D Myers-Perry black hole spacetime"
} |
Incomplete graphical model inference via latent tree aggregationGeneviève Robin^1, Christophe Ambroise^2 and Stééphane Robin^3^1CMAP, UMR 7641, École Polytechnique, X-POP, INRIA, Palaiseau, France^2LaMME, Université Paris-Saclay, Université d'Évry val d'Essonne, Évry, France^3 MIA-Paris, AgroParisTech, INRA, Université Paris-Saclay, Paris, FranceDecember 30, 2023 ======================================================================================================================================================================================================================================================================================================================================================== Graphical network inference is used in many fields such as genomics or ecology to infer the conditional independence structure between variables, from measurements of gene expression or species abundances for instance. In many practical cases, not all variables involved in the network have been observed, and the samples are actually drawn from a distribution where some variables have been marginalized out. This challenges the sparsity assumption commonly made in graphical model inference, since marginalization yields locally dense structures, even when the original network is sparse.We present a procedure for inferring Gaussian graphical models when some variables are unobserved, that accounts both for the influence of missing variables and the low density of the original network. Our model is based on the aggregation of spanning trees, and the estimation procedure on the Expectation-Maximization algorithm. We treat the graph structure and the unobserved nodes as missing variables and compute posterior probabilities of edge appearance. To provide a complete methodology, we also propose several model selection criteria to estimate the number of missing nodes. A simulation study and an illustration flow cytometry data reveal that our method has favorable edge detection properties compared to existing graph inference techniques. The methods are implemented in an R package. § INTRODUCTION§.§ Motivations Graphical models have been extensively studied and used in a wide variety of contexts, to represent complex dependency structures.In many practical cases however, it is more than likely that some variables involved in the network were in fact not observed. Such missing variables are interpreted as actors that were not measured but nonetheless influence the measurements, or experimental conditions that were not taken into account. In the perspective of unrevealing the conditional independence structure, this can lead to both inference issues and interpretation problems.The existence of unobserved variables can be naturally encompassed in the graphical model framework, by assuming there exists a 'full' graph describing the conditional independence structure of the joint distribution of observed and hidden variables. Observations are then samples of the marginal distribution of the observed variables only. From a graph-theoretical point of view, marginalizing hidden variables means removing them from the node set and marrying their children together, thus forming complete subgraphs, i.e. cliques. Hence, the conditional independence structure among observed variables is described by a marginal graph containing locally dense structures. This violates the sparsity assumption on which the majority of graph inference methods are based. 
Moreover, an identifiability problem arises in the hidden variable setting, since infinitely many full graphs induce the same marginal structure.In this paper we are interested in both checking if some variables are indeed missing in the graph and, if it is the case, inferring the complete graphical model. We address these problem in the context of Gaussian graphical models. §.§ Incomplete Gaussian graphical models Consider a multivariate Gaussian random vector parametrized by its precision matrixX ∈ℝ^p+r∼𝒩(0,K^-1), p,r≥ 1, K∈ℝ^(p+r) × (p+r)≻ 0,where ≻ denotes positive definiteness. We assume that X can be decomposed as X = (X_O,X_H),where X_O ∈ℝ^p denotes a set of observed variables and X_H ∈ℝ^r a set of hidden variables. In genomics, the hidden variables are understood as genes or experimental conditions that were not measured but nonetheless influence the results of the experiments. The goal of graphical model inference is to uncover the conditional independence structure of X, described by the following full graphG = ({1,… ,p, p+1, … , p+r}, E),where E is the set of undirected edges, such that {i,j}∈ E if and only if X_i and X_j are dependent conditionally to X_{1,…, p+r}∖{i,j}, which we denote X_i⊥̸X_j | X_{1,…, p+r}∖{i,j}. In the Gaussian setting we consider, the set of edges E is nicely determined by the non-zero entries of K <cit.>: For all(i,j)∈{1,…,p+r}^2, i≠ j, {i,j}∈ Eif and only ifK_ij≠ 0.The precision matrix K can be written block-wise to differentiate the terms corresponding to observed and latent variables:K = [K_O K_OH; K_HOK_H ].From (<ref>) and the Schur complement formula <cit.> we deduce that the marginal distribution of the observed variables isX_O ∼𝒩(0, K_m^-1), K_m = K_O - K_OHK_H^-1K_HO.The conditional independence structure of X_0 is thus described by the following marginal graphG_m = ({1,… ,p}, E_m),where E_m is the set of undirected edges given by the non-zero entries of K_m. Consider a sample (X_O^1,…, X_O^n) of n independent realizations of the marginal distribution of X_O ∼𝒩(0, K_m^-1). From such measurements, standard statistical tasks are to infer the full graph G or the marginal graph G_m; in this article we tackle both problems. §.§ Contributions and related workMethods to perform graphical model inference with unobserved variables have been proposed in the past. Some use the Expectation-Maximization (EM) algorithm <cit.>, its variational approximation described in <cit.>, or the Bayesian structural EM algorithm<cit.>. A lot of attention has also been brought to a regularized approach described in <cit.>, based on the analysis of the sum of low-rank and sparse matrices. Alternatives based on this method were also proposed by <cit.>, <cit.> and <cit.>. A major concern in the latent variable framework is identifiability; in general, identifiability constraints are very complex, as those derived in <cit.> for their model, which rely on algebraic geometry properties of low-rank and sparse matrices. On the contrary, in the particular case of trees (acyclic graphs), the conditions for identifying the joint graph from the marginal graph only, described in <cit.>, are very simple. In this article, we propose to exploit this property to build an inference strategy based on the EM algorithm and spanning trees. Latent tree models were studied in the context of phylogenetic tree learning; the Neighbor-Joining algorithm <cit.> among others is a popular method in this field. 
More recently, a method called Recursive Grouping was proposed in <cit.>, to reconstruct tree structures from partially observable data. We emphasize the fact that all these methods learn a single tree from data. In the present, we take advantage of two key properties of tree-structured graphical models. First, we can specify under which conditions they remain identifiable in presence of missing variables. Second, treating trees as random, we can easily integrate over the whole set of spanning trees, thanks to an algebra result called the Matrix-Tree theorem <cit.>. To our knowledge, no method for latent variable graphical model inference is based on mixtures of trees, which constitute the main novelty of our approach.Our contribution can be casted in the framework of <cit.>, who considered a special mixture of Bayesian network <cit.> where each network involved in the mixture is tree-shaped. <cit.> show the interest of such a model both in terms of tractability and interpretation. <cit.> also use the same framework to estimate the joint distribution of the observed variables and <cit.> aim at characterizing such distributions, but none of them is interested in the inference of the structure of the graphical model itself. A first difference with these tree-based methods is that we do not limit ourselves to a fixed number of trees but consider a mixture over all possible trees. Second, and more importantly, we extend the framework to the hidden variable setting.Our inference strategy is based on the EM algorithm. The computations at the E step are tractable thanks to the Matrix-Tree theorem, which enables us to integrate over the whole set of spanning trees, as opposed to the M step of <cit.> that relies on the Chow-Liu algorithm <cit.>. This approach enables us to compute posterior probabilities of edge appearance, as proposed by <cit.> in the fully observable setting. To our knowledge, no other existing approach provides such an edge-specific measure of reliability. The final inference of the graph relies on the ranking of these probabilities, therefore we estimate graphs with general structures, though our method is based on trees.Although we mostly focus on the inference of the graph structure, we also obtain an estimate of the precision matrix of the joint distribution of the observed and hidden variables, as a by-product of the EM algorithm. Our first contribution is to define, in Section <ref>, a latent tree aggregation model for graphical model inference in the presence of hidden variables and to give identifiability conditions. In Section <ref>, we introduce our procedure based on the EM algorithm to infer the parameters of the joint distribution and probabilities of edge appearance, and to estimate the number of missing nodes. In Section <ref> we show on synthetic data that our method compares favorably to competitors in terms of edge detection. Finally we illustrate the procedure on flow cytometry data analysis in Section <ref>.§ LATENT TREE AGGREGATION MODEL§.§ Identifiability conditionsAssume the full graph G defined in (<ref>) is tree structured. We now characterize the class of trees that are statistically identifiable in our model, i.e. such that the full graph G is uniquely determined by the marginal structure G_m. We assume without loss of generality that the observed and hidden variables are ordered, i.e. X_i is observed for all i∈{1,…,p} and hidden for all i∈{p+1,…,p+r}, and denote for some set A by Card(A) its cardinality. 
For i∈{1,… p+r}, we defineE_i = { j∈{1,… p+r}; {i,j}∈ E} .The following conditions on G and K, derived from <cit.>, <cit.> and <cit.>, guarantee statistical identifiability.[Identifiability conditions] * For all (i,j)∈{p+1,p+r}^2, {i,j}∉ E;* For all i∈{p+1,p+r}, Card(E_i)≥ 3;* Two nodes connected by an edge are neither perfectly independent nor perfectly dependent. These conditions stem from the simple graphical properties of spanning trees. Indeed, the maximal cliques of a tree are of size two, therefore if (i) no edge connects two hidden nodes and (ii) all hidden variables have at least three neighbors, there is exactlyone hidden node for every clique of size more than or equal to 3 in G_m, as illustrated in Figure <ref>, and the class of identifiable trees is now fully characterized. In particular, hubs (central hidden nodes) are identifiable, while recovering chains of hidden nodes, or hidden nodes located at the leaves of the tree, is hopeless. An important feature is that our identifiability conditions allow sparsity in G_m, contrary to what happens in the sparse plus low-rank model of <cit.>. Indeed, identifiable graph structures in their case will typically have a small number of central hidden variables (hubs), and marginal graphs will therefore be densely connected, nay complete. This is an important difference with our model, and we will see in Section <ref> that the inferred marginal structures are in fact very different. §.§ Fixed unknown tree We now turn to the description of our Latent Tree Aggregation model, and start with a simple procedure where we infer a single tree structure. Let 𝒯 be the set of spanning trees with p+r nodes, and assume the graphical model associated with X, that we now write T∈𝒯, is tree-shaped.Assume further that, conditionally on T, the vector X = (X_O, X_H) is drawn from the Gaussian distribution 𝒩(0, K_T^-1), where K_T has a tree-structured support determined by the edges of T, and can be decomposed inK_T = [K_T,O K_T,OH; K_T,HOK_T,H ].In the complete data setting where X is fully observed but T is unknown, the Chow-Liu algorithm <cit.> computes the tree of maximum likelihood T̂ from empirical observations, and the coefficients of the matrix K_T̂ can be computed easily using a result of <cit.> and the empirical covariance matrix. Building T̂ in this case boils down to finding a maximum spanning tree, which can be done with Kruskal's algorithm <cit.>. If variables are now hidden but the underlying tree T and K_T are known, the conditional distribution of the hidden variables given the observed ones isX_H|X_O∼𝒩(μ_H|O, K_H|O^-1) ,μ_H|O = -K_T,HOX_O, K_H|O = K_T,H.From these two results, we can derive an EM algorithm to infer the tree-structured graph underlying the distribution of X in the hidden variables setting, which runs iteratively until convergence, with the following steps at iteration h+1, h≥ 1.E-step: Evaluation of the conditional expectation of the complete log-likelihood with respect to the current value K^h of the parameter, namely: _X_H| X_O; K^hlog p(X_O, X_H; K).M-step: Maximization of (<ref>) with respect to K to update K^h into K^h+1, using the Chow-Liu algorithm.§.§ Random unknown tree The inference method described above is very simple, but the tree assumption is restrictive, and we expect poor results when it is violated. To overcome this, we choose to treat T as a random variable. Doing so, we are able to compute a posterior probability of appearance for every possible edge in the graph. 
Ranking them in the decreasing order, we can infer a graph of general structure, even though our model is based on spanning trees. Denote by E_T the set of edges of T. We assume T to be drawn from a distribution defined by a matrix π such that π_ij = P({i,j}∈ E_T).The edges of T are drawn independently, such thatP(T)∝∏_{i,j}∈ E_Tπ_ij.Prior information about the existence of each edge is easily encoded in a distribution of this form, and a non-informative choice of prior is to set the π_ij to be equal for all i,j, i.e. all trees have the same probability to be drawn so every edge has the same probability to be part of the drawn tree.We then assume the existence of a full symmetric matrix K with block decomposition given in (<ref>), the entries of which have to be estimated. For every T∈𝒯 we define the corresponding (p+r)×(p+r) matrix K_T, with off-diagonal term K_T, ij = K_ij if {i,j}∈ E_T and zeros otherwise. The diagonal term K_T, ii both depend on K_ii and on the degree of node i in T. Its expression derived from <cit.> is given in (<ref>), Appendix <ref>. Note that K does not need to be positive definite, although it may be desirable for the numerical stability of the algorithm. The joint distribution of (X_O, X_H) is a mixture of centered Gaussian distributions:(X_O, X_H) ∼∑_T ∈𝒯 p(T) 𝒩(X_0, X_H; 0, K_T^-1).We develop this random unknown tree model further in Section <ref> where we propose an inference procedure. For every possible edge {i,j}, we will compute the quantityα_ij = ∑_T∈𝒯 T∋{i,j}P(T|X_O),that we interpret as edge specific probabilities of appearance. First, we derive conditional distributions that will be necessary. In particular, we show that these distributions factorize over the edges. §.§ Some conditional distributions Let us first compute the joint distribution of T and X_H conditionally on X_O which will be needed in Section <ref>:P(T, X_H |X_O) = P(T |X_O)P(X_H|X_O, T).On the one hand P(X_H|X_O,T) =𝒩(μ_H|O,T,K_H|O,T). On the other hand, P(T|X_O) ∝ P(T)P(X_O|T)∝(∏_{i,j}∈ E_Tπ_ij)(K_T,m)^n/2/(2π)^np/2_(1)exp(-n/2tr(K_T,mΣ_O))_(2),whereK_T,m=K_T,O-K_T,OH(K_T,H)^-1K_T,HO. Terms (1) and (2) can be expressed as products over the edges of T. We directly give the results and leave the derivations to Appendix <ref>. Let us defined_ij =(K_iiK_jj-K_ij^2/K_iiK_jj)^n/2 t_ij =exp(-nK_ijΣ_ij) ∀{i,j}∈{1,…,p}^2,f_ih = exp(n/2∑_k∈ OK_ihK_hkΣ_ki/K_hh)∀{i,h}∈{1,…,p}×{p+1,…,p+r} and finally m_ij = {[t_ij {i,j}∈{1,…,p}^2;f_ij {i,j}∈{1,…,p}×{p+1,…,p+r};f_ij {i,j}∈{p+1,…,p+r}×{1,…,p}; 1 {i,j}∈{p+1,…,p+r}^2; ].. We obtain that the conditional distributionP(T|X_O) nicely factorizes over the edges of T:P(T|X_O)∝P(T) P(X_O|T)∝∏_{i,j}∈ E_Tπ_ij d_ij m_ij.We also need to compute the normalizing constant of P(T) and P(T|X_O) – that is, respectively,∑_T ∏_{i, j}∈ E_Tπ_ijand∑_T ∏_{i, j}∈ E_Tπ_ij d_ij m_ij. Those constants can be computed with the same complexity as a determinant, i.e. in O(p^3) operations, using the Matrix-Tree theorem that we now state. For a matrix W of weights w_ij, we define the Laplacian Δ = (Δ_ij)_i,j∈ V^2 associated to matrix W byΔ_ij= { -w_ijif i≠ j,∑_jw_ijifi=j. .Let W=(w_ij)_(i,j)∈ V^2 be a symmetric matrix of weights and Δ its associated Laplacian. For (u,v)∈ V^2, let Δ_uv be the (u,v)-th minor of Δ. Then all Δ_uv are equal andΔ_uv = ∑_T∈𝒯∏_{i,j}∈ E_Tw_ij :=Z(W).In Section <ref>, we will need to compute similar quantities after removing a given edge. Furthermore, we will need to compute such a quantity for all possible edges. 
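To illustrate Theorem <ref>, the following sketch computes Z(W) as a principal minor of the Laplacian and checks it against a brute-force enumeration of spanning trees on a small graph. It is written in Python with numpy purely for illustration; the weight matrix is arbitrary and not part of the model described here.

```python
import numpy as np
from itertools import combinations

def Z(W):
    """Sum over all spanning trees of the product of edge weights (Theorem 1),
    obtained as a principal minor of the Laplacian associated with W."""
    W = np.asarray(W, dtype=float).copy()
    np.fill_diagonal(W, 0.0)                 # diagonal weights play no role
    L = np.diag(W.sum(axis=1)) - W           # Laplacian Delta
    return np.linalg.det(L[1:, 1:])          # delete the row/column of node 0

def Z_brute_force(W):
    """Enumerate all subsets of p-1 edges and keep those forming a spanning tree."""
    p = len(W)
    edges = [(i, j) for i in range(p) for j in range(i + 1, p)]
    total = 0.0
    for tree in combinations(edges, p - 1):
        parent = list(range(p))              # union-find to detect cycles
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for i, j in tree:
            ri, rj = find(i), find(j)
            if ri == rj:
                acyclic = False
                break
            parent[ri] = rj
        if acyclic:                          # p-1 acyclic edges = a spanning tree
            total += np.prod([W[i, j] for i, j in tree])
    return total

rng = np.random.default_rng(0)
A = rng.uniform(0.5, 2.0, size=(5, 5))
W = (A + A.T) / 2                            # symmetric positive edge weights
print(Z(W), Z_brute_force(W))                # the two values coincide
```

In the model above, the weights w_ij are instantiated as π_ij or as γ_ij = π_ij d_ij m_ij, and the same determinant computation yields the normalizing constants in (<ref>).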
This can be achieved in an efficient manner for all edges at a time thanks to a corollary of Theorem <ref> given in <cit.>, Theorem 3. § INFERENCE OF THE RANDOM UNKNOWN TREE MODEL §.§ EM algorithm Because the proposed model involves unobserved variables, the EM algorithm <cit.> is a natural framework to carry the inference out. Importantly, two hidden layers appear in the model: the latent tree T and the signal at the unobserved nodes X_H. We show that these two hidden layers can be handled, thanks to the matrix-tree theorem <cit.> introduced in Section <ref>. We first remind that the EM algorithm aims at maximizing the log-likelihood of the observed data log p(X_O; K) with respect to the parameter K, alternating two steps in an iterative manner. At iteration h we perform:E-step: Evaluation of all the conditional moments involved in the the conditional expectation of the complete log-likelihood with the current value K^h of the parameter, namely: _X_H, T | X_O; K^hlog p(X_O, X_H, T ; K);M-step: Maximization of (<ref>) with respect to K to update K^h into K^h+1.We now give the details of how those two steps are performed.E-step. The conditional expectation of the complete log-likelihood writes _T | X_O; K^h(_X_H | X_O, Tlog p(X_O, X_H, T; K) )=_T | X_O; K^h(log p(T) + _X_H | X_O, T; K^h[log p(X_O, X_H| T; K) ] ).Thanks to the tree structure of the graphical model, we have a simple form for the latter term:_X_H | X_O, T; K^h[log p(X_O, X_H| T; K) ] = ∑_{i, j}∈ T p_ij(K),wherep_ij(K) is -2K_ijΣ_ij if both i ≠ j are observed, 2K_ij W_ij^h if i is observed and j is hidden, -K_iiΣ_ii if i = j is observed and -K_ii B_ii^h if i = j is hidden,variance and covariance matrices being given byW_HO^h = (K_H^h)^-1 K_HO^h Σ_O, V_H^h = (K_H^h)^-1 K_HO^h Σ_O K_OH^h (K_H^h)^-1, B_H^h = (K_H^h)^-1 + V_H^h.As explained in Section <ref>, the diagonal term K_ii should actually depend on the tree T. Wework here with a common parameter K_ii, which may result in non-positive definite matrices K_T. To circumvent this issue, we project the estimated matrix K on the cone of positive definite matrices at each step of the EM algorithm. In the case where the tree T is supposed to be fixed, the calculation of the conditional distribution (<ref>) is replaced by the determination of the conditionally most probable tree, likewise in the classification EM introduced by <cit.>. M-step. Combined with p(T) ∝∏_{i, j}∈ Tπ_ij and with the conditional distribution of T, p(T |X_O; K^h) ∝∏_{i, j}∈ Tγ_ij given in (<ref>) (with γ_ij = π_ij d_ij m_ij), we get that_X_H, T | X_O; K^hlog p(X_O, X_H, T ; K) ∝_X_H, T | X_O; K^h[∑_{i, j}∈ Tlogπ_ij + p_ij(K) ] ∝∑_T (∏_{k, ℓ}∈ Tγ_kℓ^h) [∑_{i, j}∈ Tlogπ_ij + p_ij(K) ]where the normalizing constant does depend on K^h but not on K. Hence, at the M-step we need to maximize with respect to K∑_T (∏_{k, ℓ}∈ Tγ_kℓ^h) [∑_{i, j}∈ T p_ij(K) ] = ∑_i < j A_ijp_ij(K)where all A_ij = ∑_T: {i, j}∈ T(∏_{k, ℓ}∈ Tγ_kℓ^h) can be computed in O((p+r)^3) using Theorem 3 from <cit.>. The resulting update formulas of K are given in Appendix <ref>. Initialization. The behavior of the EM-algorithm is known to strongly depend on its starting point. Our initialization strategy is described in Appendix <ref>. 
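As an illustration of the quantities manipulated at the E step, here is a minimal numpy sketch of the conditional moments W^h, V^h, B^h defined above, together with one standard way (eigenvalue clipping, an assumption of this sketch rather than a description of the authors' implementation) to project the estimated K onto the cone of positive definite matrices.

```python
import numpy as np

def e_step_moments(K, Sigma_O, p, r):
    """Conditional moments of the hidden block:
    W = K_H^{-1} K_HO Sigma_O,
    V = K_H^{-1} K_HO Sigma_O K_OH K_H^{-1},
    B = K_H^{-1} + V."""
    K_HO = K[p:p + r, :p]
    K_H = K[p:p + r, p:p + r]
    K_H_inv = np.linalg.inv(K_H)   # diagonal under the identifiability conditions
    W = K_H_inv @ K_HO @ Sigma_O
    V = K_H_inv @ K_HO @ Sigma_O @ K_HO.T @ K_H_inv
    B = K_H_inv + V
    return W, V, B

def project_positive_definite(K, eps=1e-6):
    """Clip the eigenvalues of a symmetric matrix to make it positive definite."""
    values, vectors = np.linalg.eigh(K)
    return vectors @ np.diag(np.clip(values, eps, None)) @ vectors.T
```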
§.§ Edge probability and model selection In this section, we derive a series of quantities of interest for practical inference.Edge probability.In the perspective of network inference, we need to compute the probability for an edge to be part of the tree given the observed data, that is, for edge {k, l},α_kl := P({k, l}∈ T | X_O).This probability can be computed for all edges at a time in O((p+r)^3) thanks to Theorem 3 from <cit.>. It depends on the marginal distribution of the tree P(T) given in (<ref>) parametrized with π_ij, which controls the marginal probability of the edge p_ij^0 := P({i, j}∈ E_T) in a complex manner. In a decision making perspective, it may be desirable to set this probability to an uninformative value such as 1/2. This probability change can be achieved in O(p+r)^2 <cit.>.Conditional entropy of the tree.We are also interested in the variability of the distribution of the tree given the observed data, measured by its entropy. Denoting Z_O the normalizing constant of the conditional distribution P(T|X_O), we have thatH(T|X_O) = - ∑_T P(T|X_O) log P(T|X_O)= - ∑_T P(T|X_O)(-log Z_O + ∑_kl ∈ Tlogγ_kl)=log Z_O - ∑_kllogγ_kl(∑_T: kl ∈ T P(T|X_O) )=log Z_O - ∑_klα_kllogγ_klwhich can be computed with complexity O((p+r)^2), once the edge probabilities α_kl have been computed. Because our model involves two hidden variables (T and X_H), one may be interested in the conditional entropy of all hidden variables, that isH(T, X_H|X_O) = H(T|X_O) + _T|X_O[ H(X_H|T, X_O)].For the second term, we observe that the conditional distribution of X_H given both T and X_O is a Gaussian distribution with variance K^-1_H (which is diagonal), whatever T and X_O. As a consequence, H(X_H|T, X_O) is constant, so we get that_T|X_O[ H(X_H|T, X_O)] = r log(2π e)/2 - 1/2∑_i ∈ Hlog(K_ii).Model selection.We now turn to the estimation of the unknown number of hidden nodes r. First, a standard Bayesian Information Criterion (BIC) can be defined as BIC(r) = log p(X_O; K) - (r) where the penalty term depends on the number of independent parameters in K, that is(r) =(p(p+1)/2 + rp + r) log n/2.Note that the maximized log-likelihood can be computed aslog p(X_O; K) = [log p(X_O, X_H, T) | X_O; K] + H(X_H, T|X_O, K).In the context of classification, <cit.> introduced an Integrated Complete Likelihood (ICL) criterion where the conditional entropy of the hidden variable is added to the penalty. The rationale behind ICL is a preference for models with lower uncertainty for the hidden variables. Because we are mostly interested in network inference, it seems desirable to penalize only for the conditional entropy of the tree. This leads to the following criterionICL_T(r) = log p(X_O; K) - H(T|X_O) - (r)where H(T|X_O) is given by (<ref>). In situations where a reliable prediction of the hidden node X_H is of interest, both entropies can be used in the penalty leading toICL_T, X_H(r) = log p(X_O; K) - H(T, X_H|X_O) - (r).§ NUMERICAL EXPERIMENTS§.§ Experimental setup Data synthesis in our framework requires the simulation of a graph and of a sparse inverse covariance matrix with matching support. We simulated graphs of two different structures which are given in Figure <ref>, namely a random tree and an Erdös-Renyi graph with density 0.1 containing p=20 nodes. 
The binary incidence matrix of the graph is then transformed by randomly flipping the sign of some elements in order to simulate both positively and negatively correlated variables. Positive definiteness of this precision matrix K is ensured by adding a large enough constant to the diagonal. We choose the missing nodes at random among those that satisfy the identifiability conditions described in Section <ref>. The difficulty of detecting missing edges is related to the value of the correlations between the missing nodes and their children. Recall that the marginal precision matrix can be written as K_m = K_O - K_OHK_H^-1K_HO. We measure the difficulty of detecting the second term K_OHK_H^-1K_HO with the ratio SNR = ‖K_OHK_H^-1K_HO‖_2^2/‖K_O‖_2^2. As it increases, the amplitude of the signal coming from the marginalized nodes indeed increases compared to the signal coming from the observed nodes. We control this ratio by multiplying terms in the precision matrix by a constant ε that we vary: K = [K_O ε K_OH; ε K_HOε K_H ]. In the experiments we will consider two settings where ε∈{1, 10}. A Gaussian sample of size n=30 with zero mean and the above concentration matrix is then simulated 50 times; the results we present below are averaged over the 50 samples. The total complexity of our inference method is O(n(p+r)^3), where r is the (fixed) number of missing nodes. To simulate marginalization, we simply remove the chosen variable in all samples.

§.§ Edge detection We focus this experiment on the ability to recover existing edges of the network, that is, the nonzero entries of the concentration matrix. This is a binary decision problem where the compared algorithms are considered as classifiers. The decision made by a binary classifier can be summarized using four numbers: True Positives (TP), False Positives (FP), True Negatives (TN) and False Negatives (FN). We have chosen to draw ROC curves - power (power=TP/(FN+TP)) versus false positive rate (FPR=FP/(FP+TN)) - to display this information and compare how well the methods perform. The performance of five algorithms was tested on all the simulated graph structures: the Chow-Liu algorithm <cit.>, the graphical lasso <cit.> (Glasso), the EM of <cit.> (EM-Glasso), the EM algorithm searching for a fixed unknown tree using the Chow-Liu algorithm (EM-Chow-Liu), and our EM algorithm for tree aggregation (Tree Aggregation). Note that the Chow-Liu and Glasso algorithms do not consider missing variables whereas the other approaches do. We compare all methods in terms of marginal graph inference and only the methods considering missing nodes in terms of full graph inference. We put a special emphasis on the inclusion of 'spurious' edges - that is, edges resulting from marginalization - in the inferred marginal graph. Technically, spurious edges are edges from the marginal graph linking neighbors of the missing nodes in the full graph. To this aim, we plot the fraction IS/S of included spurious edges (IS) among the total number of spurious edges (S) versus the density of the inferred graph: (FP+TP)/[p(p-1)/2]. The interpretation of this curve differs from that of a ROC curve. An ideal method would keep IS/S at 0 until the end, meaning that the corresponding curve should be pushed down to the bottom-right corner. The results are displayed in Figures <ref> and <ref>. The Chow-Liu algorithm and its EM version are very fast to converge and provide very similar solutions to the inference problem.
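A sketch of how the quantities behind these curves can be computed, given the true edge set, the set of spurious edges induced by marginalization, and an inferred edge set (Python, for illustration only; thresholding at `tol` to decide which estimated entries count as edges is an arbitrary choice of this sketch):

```python
import numpy as np

def edge_set(K, tol=1e-8):
    """Edges = off-diagonal entries of a (possibly estimated) precision matrix
    whose absolute value exceeds a small threshold."""
    K = np.asarray(K)
    p = K.shape[0]
    return {(i, j) for i in range(p) for j in range(i + 1, p) if abs(K[i, j]) > tol}

def detection_scores(true_edges, inferred_edges, p, spurious_edges=None):
    """Power, false positive rate, graph density and, optionally, the fraction
    IS/S of included spurious edges used in the second type of curve."""
    all_pairs = {(i, j) for i in range(p) for j in range(i + 1, p)}
    tp = len(true_edges & inferred_edges)
    fp = len(inferred_edges - true_edges)
    fn = len(true_edges - inferred_edges)
    tn = len(all_pairs) - tp - fp - fn
    scores = {"power": tp / (tp + fn) if tp + fn else 0.0,
              "FPR": fp / (fp + tn) if fp + tn else 0.0,
              "density": (tp + fp) / len(all_pairs)}
    if spurious_edges is not None:
        scores["IS/S"] = (len(spurious_edges & inferred_edges) / len(spurious_edges)
                          if spurious_edges else 0.0)
    return scores
```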
On the marginal graph, even when the true model is a tree, both algorithms do not seem to provide better results than Glasso.Glasso and Tree Aggregation perform equally well, and better than EM-Glasso, at inferring the marginal graph. On the full graph Tree Aggregation performs slightly better than EM-Glasso, which tends to overestimate the number of children of the missing node and therefore has a higher false positive rate. This is in accordance with its underlying model, which assumes that all observed nodes have a hidden parent. Each of these false positive edges in the complete graph induces several false positive edges in the marginal graph. Interestingly, though Tree Aggregation is tailored to infer the full graph, it performs as well as Glasso at predicting the marginal graph, which is the primary target of Glasso. §.§ Model selectionWe now assess the performance of the proposed model selection criteria on the same simulated datasets, in which r=1 node is missing. In all simulations, the criteria ICL_T, X_H and ICL_T displayed very similar results, the conditional entropy of X_H being very small as compared to this of T. As a consequence, we only provide the results for ICL_T (hereafter named simply ICL). Figure <ref> shows that, for both network topologies, the BIC and ICL criteria display very similar behaviors and that they all detect the existence of a missing node. When the full network is tree-shaped (Figure <ref>, top), all criteria are maximal for r=1, whereas the choice between r=1 and r=2 is more difficult for the Erdös network. We repeat the experiment, this time without marginalizing any node. The results shown in Figure <ref> show that the BIC criterion doesn't detect any hidden node, contrary to the ICL criterion. Nonetheless the values of ICL for 0, 1, 2 and 3 hidden nodes are much tighter than in the previous example. § FLOW CYTOMETRY DATA ANALYSISWe applied our procedure to the inference of the Raf cellular signaling network based on flow cytometry data. The Raf network is implied in the regulation of cellular proliferation. The data were collected by <cit.> and later used by <cit.> and <cit.> in network inference experiments. Flow cytometry measurements consist in sending unique cells suspended in a fluid through a laser beam, and measuring parameters of interest by collecting the light re-emitted by the cell by diffusion or fluorescence. In this study, the parameters of interest are the activation level of 11 proteins and phospholipids involved in the Raf pathway, and are measured by flow cytometry across 100 different cells. Though the true structure of this network is unknown, experiments have highlighted a consensus pathway that we used as gold standard to assess the performance of our algorithm. The consensus network displayed in Figure <ref> is far from being a tree. We removed one protein from the dataset, which amounts to hide the corresponding node (in red in Figure <ref>), and applied our algorithm to this marginal data. Using hierarchical clustering initialization we inferred models with r = 0 to 3 hidden nodes. Figure <ref> (left) shows that the three proposed model selection criteria agree on the true model, that is r=1. The same figure shows sthat ICL_T and ICL_T, X_H are almost equal and both lower than BIC, meaning that the conditional entropy is mostly due to the uncertainty on the tree. The performances of the methods described in Section <ref> are compared on this example in Figure <ref>. 
The results are similar to those obtained in the simulation study. The proposed latent tree-based approach performs better than the EM-glasso when trying to infer the full graph. The methods also performs well for the marginal graph. In terms of spurious edges, Tree Aggregation displays a plateau, along which the inclusion of spurious edges is delayed compared to Glasso and EM-Glasso.Finally, we analyzed the complete dataset from <cit.>, without removing any node. Model selection criteria are given in Figure <ref> (right): they all agree on the absence of a missing node, which is consistent with the biological consensus on the Raf pathway.§ DISCUSSIONWe proposed a method for graphical model inference with missing variables. Uncovering such a latent structure provides additional hints in the interpretation of the underlying graphical model. For example, the inference of a missing variable allows to pinpoint a group of observed variables, which are related to this unobserved variable. Our procedure relies on spanning trees and the computations are performed efficiently using the Matrix-Tree theorem. We have defined a model with a two-layer hidden structure where the graph as well as the missing nodes are treated as latent variables. We derived conditional distributions of the latent variables given the observations and developed an inference procedure based on the EM algorithm. We also propose model selection criteria to determine the presence of a hidden structure, as well as the choice of the number of missing variables. We observed on a simulation study that the tree constraint, that we overcome by computing posterior edge probabilities, is not too costly in practice. An implementation of the method ispublicly available through the R package [Thepackage is available onGitHub <https://github.com/cambroise/LITree>]. Directions of future work include the extension to non-Gaussian (such as counts) and temporal data. plainnat § COMPUTATION OF THE CONDITIONAL DISTRIBUTIONS We show that the conditional distribution of the tree given the observations factorizes over the edges of the tree.P(T|X_O) ∝ P(T)P(X_O|T)∝(∏_{i,j}∈ E_Tπ_ij)(K_T,M)^n/2/(2π)^np/2_(1)exp(-n/2tr(K_T,MΣ_O))_(2),We first focus on theterm (1). A linear algebra result based on the Schur complement states that (K_T)=[K_T,O K_T,OH; K_T,HOK_T,H ] =(K_T,H)(K_T,O-K_T,OH(K_T,H)^-1K_T,HO_K_T,M),which finally gives with (K_T,H)>0 by definition (K_T,M)=(K_T) / (K_T,H). The assumptions on the hidden nodes for identifiability give that K_T,H is diagonal and (K_T,H)=∏_h∈ HK_hh is independent of T. Therefore we only need to express (K_T) as a product over the edges of T. We know from a result of <cit.> on decomposable graphs that the precision matrix and determinant of tree-structured graphs can be decomposed simply, with [K_{I,J}] denoting the matrix equal to K on indices I× J and 0 elsewhere,K_T=∑_i∈ V[K_{i,i}]+∑_{i,j}∈ V^2 {i,j}∈ E_T[K_{i,j}]-[K_{i,i}]-[K_{j,j}],which givestr(K_TΣ)=∑_i∈ VK_iiΣ_ii+∑_{i,j}∈ V^2 {i,j}∈ E_T2K_ijΣ_ij-K_iiΣ_ii-K_jjΣ_jj.The approximation mentioned in Section <ref> arises precisely here, where K_ii should actually be K_T, ii. We can also decompose the determinant of K_T as(K_T) = ∏_i∈ V([K_{i,i}]) ∏_{i,j}∈ E_T([K_{i,j}])/K_iiK_jj,where [K_{i,j}] stands for the sub-matrix K where only the ith and jth rows and columns are kept and with (K_T,H) = ∏_h∈ H K_hh and V=O⋃ H,(K_T,M) = ∏_i∈ O([K_{i,i}]) ∏_{i,j}∈ E_T([K_{i,j}])/K_iiK_jj. 
§ FORMULAS FOR THE M-STEPWe need to set the derivative of the objective function E given (<ref>) wrt to each K_ij to 0. Depending on the status of nodes i and j, K_ij must satisfy the following:i, j ∈ O^2 , i≠ j: K_ij^h+1= ( 1-√(1+4Σ_ij^2K_ii^hK_jj^h)) / 2Σ_ij.;i, j ∈ O× H :K_ij^h+1= ( -1+√(1+4(W_ij^h)^2K_ii^hK_jj^h)) / 2W_ij^h.;i = j ∈ O:1/K_ii^h+1+∑_k∈ V(K_ik^h)^2/K_ii^h+1K_kk^h-(K_ik^h)^2α^h_ik = Σ_ii;i = j ∈ H:1/K_ii^h+1+∑_k∈ V(K_ik^h)^2/K_ii^h+1K_kk^h-(K_ik^h)^2α^h_ik = B^h_ii. § INITIALIZATIONAs the EM-algorithm is highly dependent on its starting point, initialization should be carefully undertaken. As a consequence, although this step is overlooked in most publications, we choose to describe it precisely in this appendix. In our case, it requires an initial graph structure as well as initial values for the missing nodes. Our initialization scheme relies on three stages. First we perform a clustering step and treat the clusters as groups of nodes which share a hidden parent. Then, we initialize the missing variables as the first principal component of the matrix containing their children. Finally, from this completed data, we infer an initial tree using the Chow-Liu algorithm. Let us now describe the details of the clustering procedure. We span all the possible triplets of nodes, and merge together the triplet for which the assumption that they had a common hidden parent resulted in the biggest gain in terms of likelihood of the observed realizations. Once the 'best' triplet is selected, we can repeat the same procedure iteratively in order to form clusters in a hierarchical manner. At every level of the hierarchy we have a set of cliques in which the nodes share the same parent and a set of nodes that have not yet been assigned to a clique. For computational reasons we restricted the search to the triplets in which at least one pair of nodes was connected by an edge in the current estimate of the structure. The likelihood gain induced by merging two cliques was penalized for the complexity of the model with the BIC criterion <cit.>. We show below the dendrogram obtained with this hierarchical clustering procedure, and the cliques (colored nodes) obtained by cutting the hierarchy at the level chosen with BIC. This was done on synthetic data, where we generated 2000 samples of a Gaussian network with 50 nodes. | http://arxiv.org/abs/1705.09464v2 | {
"authors": [
"Geneviève Robin",
"Christophe Ambroise",
"Stéphane Robin"
],
"categories": [
"stat.ME"
],
"primary_category": "stat.ME",
"published": "20170526073600",
"title": "Incomplete graphical model inference via latent tree aggregation"
} |
http://arxiv.org/abs/1705.09481v2 | {
"authors": [
"C. D. R. Azevedo",
"D. Gonzalez-Diaz",
"S. F. Biagi",
"C. A. B. Oliveira",
"C. A. O. Henriques",
"J. Escada",
"F. Monrabal",
"J. J. Gómez-Cadenas",
"V. Álvarez",
"J. M. Benlloch-Rodríguez F. I. G. M. Borges",
"A. Botas",
"S. Cárcel",
"J. V. Carrión",
"S. Cebrián",
"C. A. N. Conde",
"J. Díaz",
"M. Diesburg",
"R. Esteve",
"R. Felkai",
"L. M. P. Fernandes",
"P. Ferrario",
"A. L. Ferreira",
"E. D. C. Freitas",
"A. Goldschmidt",
"R. M. Gutiérrez",
"J. Hauptman",
"A. I. Hernandez",
"J. A. Hernando Morata",
"V. Herrero",
"B. J. P. Jones",
"L. Labarga",
"A. Laing",
"P. Lebrun",
"I. Liubarsky",
"N. Lopez-March",
"M. Losada",
"J. Martín-Albo",
"A. Martínez",
"A. D. McDonald",
"C. M. B. Monteiro",
"F. J. Mora",
"L. M. Moutinho",
"J. Muñoz Vidal",
"M. Musti",
"M. Nebot-Guinot",
"P. Novella",
"D. Nygren",
"B. Palmeiro",
"A. Para",
"J. Pérez",
"M. Querol",
"J. Renner",
"L. Ripoll",
"J. Rodríguez",
"L. Rogers",
"F. P. Santos",
"J. M. F. dos Santos",
"L. Serra",
"D. Shuman",
"A. Simón",
"C. Sofka",
"M. Sorel",
"T. Stiegler",
"J. F. Toledo",
"J. Torrent",
"Z. Tsamalaidze",
"J. F. C. A. Veloso",
"R. Webb",
"J. T. White",
"N. Yahlali"
],
"categories": [
"physics.ins-det"
],
"primary_category": "physics.ins-det",
"published": "20170526085349",
"title": "Microscopic simulation of xenon-based optical TPCs in the presence of molecular additives"
} |
|
Generating Time-Based Label Refinements to Discover More Precise Process Models

Niek Tax (Eindhoven University of Technology, Eindhoven, The Netherlands; [email protected], corresponding author), Emin Alasgarov (Bol.com, Utrecht, The Netherlands; [email protected]), Natalia Sidorova (Eindhoven University of Technology, Eindhoven, The Netherlands; [email protected]), Reinder Haakma (Philips Research, Eindhoven, The Netherlands; [email protected]), and Wil M.P. van der Aalst (Eindhoven University of Technology, Eindhoven, The Netherlands; [email protected])

Process mining is a research field focused on the analysis of event data with the aim of extracting insights related to dynamic behavior. Applying process mining techniques on data from smart home environments has the potential to provide valuable insights into (un)healthy habits and to contribute to ambient assisted living solutions. Finding the right event labels to enable the application of process mining techniques is however far from trivial, as simply using the triggering sensor as the label for sensor events results in uninformative models that allow for too much behavior (i.e., the models are overgeneralizing). Refinements of sensor-level event labels suggested by domain experts have been shown to enable discovery of more precise and insightful process models. However, there exists no automated approach to generate refinements of event labels in the context of process mining. In this paper we propose a framework for the automated generation of label refinements based on the time attribute of events, allowing us to distinguish behaviorally different instances of the same event type based on their time attribute. We show on a case study with real-life smart home event data that using automatically generated refined labels in process discovery, we can find more specific, and therefore more insightful, process models. We observe that one label refinement could have an effect on the usefulness of other label refinements when used together. Therefore, we explore four strategies to generate useful combinations of multiple label refinements and evaluate those on three real-life smart home event logs.

Keywords: Knowledge discovery for smart home environments; Circular statistics; Process mining

§ INTRODUCTION Process mining is a fast-growing discipline that combines knowledge and techniques from data mining, process modeling, and process model analysis <cit.>. Process mining techniques analyze events that are logged during process execution. Today, such event logs are readily available and contain information on what was done, by whom, for whom, where, when, etc. Events can be grouped into cases (process instances), e.g., per patient for a hospital log, or per insurance claim for an insurance company. Process discovery plays an important role in process mining, focusing on extracting interpretable models of processes from event logs. One of the attributes of the events is usually used as the event label, and its values become transition/activity labels in the process models generated by process discovery algorithms. The scope of process mining has broadened in recent years from business process management to other application domains, one of them being the analysis of events of human behavior with data originating from sensors in smart home environments <cit.>. Table <ref> shows an example of such an event log.
Events in the event log are generated by, e.g., motion sensors placed in the house, power sensors placed on appliances, open/close sensors placed on closets and cabinets, etc. Particularly challenging in applying process mining in this application domain is the extraction of meaningful event labels that allow for the discovery of insightful process models. Simply using the sensor that generates an event (the sensor column in Table <ref>) as event label is shown to produce non-informative process models that overgeneralize the event log and allow for too much behavior <cit.>. Abstracting sensor-level events into events at the level of human activity (e.g., eating, sleeping, etc.) using activity recognition techniques helps to discover more behaviorally constrained and more insightful process models <cit.>. However, the applicability of this approach relies on the availability of a reliable diary of human behavior at the activity level, which is often expensive or sometimes even impossible to obtain.Existing approaches that aim at mining temporal relations from smart home environment data <cit.> do not support the rich set of temporal ordering relations that are found in the process models <cit.>, which amongst others include sequential ordering, (exclusive) choice, parallel execution, and loops.In our earlier work <cit.>, we showed that better process models can be discovered by taking the name of the sensor that generated the event as a starting point for the event label and then refining these labels using information on the time within the day at which the event occurred. The refinements used in <cit.> were based on domain knowledge, and not identified automatically from the data. In this paper, we aim at the automatic generation of semantically interpretable label refinements that can be explained to the user, by basing label refinements on data attributes of events. We explore methods to bring parts of the timestamp information to the event label in an intelligent and fully automated way, with the end goal of discovering behaviorally more precise and therefore more insightful process models. Initial work on generating label refinements based on timestamp information was started in <cit.>. Here, we extend the work started in <cit.> in two ways. First, we propose strategies to select a set of time-based label refinements from candidate time-based label refinements. Furthermore, add an evaluation of the technique in the form of a case study on a real-life smart home dataset.We start by introducing basic concepts and notations used in this paper in Section <ref>. In Section <ref>, we introduce a framework for the generation of event labels refinements based on the time attribute. In Section <ref>, we apply this framework on a real-life smart home dataset and show the effect of the refined event labels on process discovery. In Section <ref>, we address the case of applying multiple label refinements together. We continue by describing related work in Section <ref> and conclude in Section <ref>.§ BACKGROUND In this section, we introduce basic notions related to event logs and relabeling functions for traces and then define the notions of refinements and abstractions. We also introduce some Petri net basics. 
We use the usual sequence definition, and denote a sequence by listing its elements, e.g., we write ⟨ a_1,a_2,…,a_n⟩ for a (finite) sequence s:{1,…,n}→ A of elements from some alphabet A, where s(i)=a_i for any i ∈{1,…,n}; |s| denotes the length of sequence s; s_1 s_2 denotes the concatenation of sequences s_1 and s_2. A languageover an alphabet A is a set of sequences over A.An event is the most elementary element of an event log. Let ℐ be a set of event identifiers, 𝒯 be the time domain, and 𝒜_1 ×…×𝒜_n be an attribute domain consisting of n attributes (e.g., resource, activity name, cost, etc.). An event is a tuple e=(i,a_t,a_1,…,a_n), with i∈ℐ, a_t∈𝒯, and (a_1, …, a_n)∈𝒜_1 ×…×𝒜_n. The event label of an event is the attribute set (a_1,…,a_n). Functions 𝑖𝑑(e), 𝑙𝑎𝑏𝑒𝑙(e), and 𝑡𝑖𝑚𝑒(e) respectively return the id, the event label and the timestamp of event e. ℰ=ℐ×𝒯×𝒜_1 ×…×𝒜_n is a universe of events over 𝒜_1, …, 𝒜_n. The rows of Table <ref> are events from an event universe over the event attributes address, sensor, and sensor value.Events are often considered in the context of other events. We call E⊆ℰ an event set if E does not contain multiple events with the same event identifier. The events in Table <ref> together form an event set. A trace σ is a finite sequence formed by the events from an event set E⊆ℰ that respects the time ordering of events, i.e., for all k,m∈, 1≤ k < m ≤ |E|, we have: 𝑡𝑖𝑚𝑒(σ(k))≤𝑡𝑖𝑚𝑒(σ(m)). We define the universe of traces over event universe ℰ, denoted Σ(ℰ), as the set of all possible traces over ℰ. We omit ℰ in Σ(ℰ) and use the shorter notation Σ when the event universe is clear from the context. Often it is useful to partition an event set into smaller sets in which events belong together according to some criterion. We might for example be interested in discovering the typical behavior within a household over the course of a day. In order to do so, we can group together events with the same address and the same day-part of the timestamp, as indicated by the horizontal lines in Table <ref>. For each of these event sets, we can construct a trace; timestamps define the ordering of events within the trace. For events of a trace having the same timestamps, an arbitrary ordering can be chosen within a trace. An event partitioning function is a function 𝑒𝑝: ℰ→ T_id that defines the partitioning of an arbitrary set of events E⊆ℰ from a given event universe ℰ into event sets E_1,…,E_j,… where each E_j is the maximal subset of E such that for any e_1, e_2∈ E_j, 𝑒𝑝(e_1)= 𝑒𝑝(e_2); the value of ep shared by all the elements of E_j defines the value of the trace attribute T_id. Note that multidimensional trace attributes are also possible, i.e., a combination of the name of the person performing the event activity and the date of the event, so that every trace contains activities of one person during one day. The event sets obtained by applying an event partitioning can be transformed into traces (respecting the time ordering of events). An event log L is a finite set of traces L ⊆Σ(ℰ) such that ∀σ∈ L: ∀ e_1,e_2∈σ: 𝑒𝑝(e_1)=ep(e_2). A_L⊆𝒜_1 ×…×𝒜_n denotes the alphabet of event labels that occur in log L. The traces of a log are often transformed before doing further analysis: very detailed but not necessarily informative event descriptions are transformed into some coarse-grained and interpretable labels. 
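To make the event partitioning concrete, here is a small sketch (Python; the example events are made up and only mimic the structure of Table <ref>) that groups sensor events into one trace per household and day, ordered by timestamp. The relabeling of the events in such traces into coarser, interpretable labels is what is discussed next.

```python
from collections import defaultdict
from datetime import datetime

# Each event: (id, timestamp, address, sensor, value), mimicking Table 1.
events = [
    (1, datetime(2017, 5, 27, 7, 3), "main street 1", "bedroom door", "open"),
    (2, datetime(2017, 5, 27, 7, 5), "main street 1", "frontdoor", "open"),
    (3, datetime(2017, 5, 28, 18, 40), "main street 1", "fridge", "open"),
]

def partition_into_traces(events):
    """Event partitioning function ep(e) = (address, date): one trace per
    household and day, with the events of a trace ordered by timestamp."""
    cases = defaultdict(list)
    for e in events:
        cases[(e[2], e[1].date())].append(e)
    return {case: sorted(evts, key=lambda e: e[1]) for case, evts in cases.items()}

for case_id, trace in partition_into_traces(events).items():
    print(case_id, [e[3] for e in trace])  # each trace as a sequence of sensor labels
```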
For the labels of the log in Table <ref>, the sensor values could be abstracted to on and off, or labels can be redefined to a subset of the event attributes, e.g., leaving the sensor values out completely. After this relabeling step, some traces of the log can become identically labeled (the event id's would still be different). The information about the number of occurrences of a sequence of labels in an event log is highly relevant for process mining, since it allows process discovery algorithms to differentiate between the mainstream behavior of a process (i.e., frequently occurring behavioral patterns) and the exceptional behavior. Let ℰ_1, ℰ_2 be event universes. A function l: ℰ_1 →ℰ_2 is an event relabeling function when it satisfies 𝑖𝑑(e)=𝑖𝑑(l(e)) and 𝑡𝑖𝑚𝑒(e)=𝑡𝑖𝑚𝑒(l(e)) for all events e∈ℰ_1. A relabeling function can be used to obtain more useful event labels than the full set of event attribute values, by lifting those elements of the attribute space to the label that result in strong ordering relations in the resulting log. We lift l to event logs. Let ℰ,ℰ_1,ℰ_2 be event universes with ℰ,ℰ_1,ℰ_2 being pairwise different. Let l_1: ℰ→ℰ_1 and l_2: ℰ→ℰ_2 be event relabeling functions. Relabeling function l_1 is a refinement of relabeling function l_2, denoted by l_1≼ l_2, iff ∀ e_1,e_2∈ℰ: 𝑙𝑎𝑏𝑒𝑙(l_1(e_1))=𝑙𝑎𝑏𝑒𝑙(l_1(e_2)) ⟹ 𝑙𝑎𝑏𝑒𝑙(l_2(e_1))=𝑙𝑎𝑏𝑒𝑙(l_2(e_2)); l_2 is then called an abstraction of l_1. The goal of process discovery is to discover a process model that represents the behavior seen in an event log. The activities/transitions in this discovered process model describe allowed orderings over the labels of the events in the event logs. A frequently used process modeling notation in the process mining field is the Petri net notation <cit.>. Petri nets are directed bipartite graphs consisting of transitions and places, connected by arcs. Transitions represent activities, while places represent the enabling conditions of transitions. Labels are assigned to transitions to indicate the type of activity that they model. A special label τ is used to represent invisible transitions, which are only used for routing purposes and not recorded in the log. A labeled Petri net N=⟨ P,T,F,A_M,ℓ⟩ is a tuple where P is a finite set of places, T is a finite set of transitions such that P ∩ T = ∅, F ⊆ (P × T) ∪ (T × P) is a set of directed arcs, called the flow relation, A_M is an alphabet of labels representing activities, with τ∉ A_M being a label representing invisible events, and ℓ:T→ A_M∪{τ} is a labeling function that assigns a label to each transition. For a node n ∈ P ∪ T we use ∙ n and n ∙ to denote the set of input and output nodes of n, defined as ∙ n ={n'|(n',n)∈ F} and n ∙ ={n'|(n,n')∈ F}. An example of a Petri net can be seen in Figure <ref>, where circles represent places and rectangles represent transitions. Gray transitions having a smaller width represent invisible, or τ, transitions. A state of a Petri net is defined by its marking M ∈ℕ^P being a multiset of places. A marking is graphically denoted by putting M(p) tokens on each place p∈ P. A pair (N,M) is called a marked Petri net. State changes occur through transition firings. A transition t is enabled (can fire) in a given marking M if each input place p∈∙ t contains at least one token. Once a transition fires, one token is removed from each input place of t and one token is added to each output place of t, leading to a new marking.
An accepting Petri net is a 3-tuple (N,M_i,M_f) with N a labeled Petri net, M_i an initial marking, and M_f a final marking. Visually, places that belong to the initial marking contain a token (e.g., p_1 in Figure <ref>), and places that belong to the final marking are depicted as places with a gray hatched (shaded) fill. Many process modeling notations, including accepting Petri nets, have formal executional semantics, and a model M defines a language of accepting traces ℒ(M). The language of a Petri net consists of all sequences of activities that have a firing sequence through the Petri net that starts in the initial marking and ends in the final marking. For the Petri net in Figure <ref>, the language of accepting traces is {⟨ A,B,D,E,F⟩,⟨ A,B,D,F,E⟩,⟨ A,C,D,E,F⟩, ⟨ A,C,D,F,E⟩}. In words: the process starts with activity A, followed by a choice between activity B and C, followed by activity D, and finally followed by activities E and F in parallel (i.e., they can occur in any order). We refer the reader to <cit.> for a more thorough introduction to Petri nets. For an event log L and a process model M we say that L is fitting on process model M if L⊆ℒ(M). Precision is related to the behavior that is allowed by a process model M that was not observed in the event log L, i.e., ℒ(M)∖ L. The aim of process discovery is to discover a process model based on an event log L that has both high fitness (i.e., it allows for the behavior seen in the log) and high precision (i.e., it does not allow for too much behavior that was not seen in the log). Many process discovery algorithms have been proposed throughout the years, including techniques based on Integer Linear Programming and the theory of regions <cit.>, Inductive Logic Programming <cit.>, maximal pattern mining <cit.>, or based on heuristic techniques <cit.>. We refer the reader to <cit.> for a thorough introduction to several process discovery techniques. In process discovery tasks on event logs from the business process management domain, events are often simply relabeled to the value of an activity name attribute, which stores a generally understood name for the event (e.g., receive loan application, or decide on building permit application). However, event logs from the smart home environment domain generally do not contain a single attribute such that relabeling on that attribute enables the discovery of insightful process models <cit.>. In this paper we explore a strategy to refine an event label that is based on the name of the sensor in a smart home with information about the time in the day at which the sensor was triggered.

§ A FRAMEWORK FOR TIME-BASED LABEL REFINEMENTS In this section, we describe a framework to generate an event label that contains partial information about the event timestamp, in order to make the event labels more specific while preserving interpretability. Note that by bringing time-in-the-day information to the event label we aim at uncovering daily routines of the person under study. We take a clustering-based approach by identifying dense areas in time-space for each label. The time part of the timestamps consists of values between 00:00:00 and 23:59:59, equivalent to the timestamp attribute from Table <ref> with the day-part of the timestamp removed. This timestamp can be transformed into a real number time representation in the interval [0,24).
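For later use in the clustering steps, the mapping from timestamps to the interval [0,24), and from there to angles on the clock circle, can be sketched as follows (Python; the concrete timestamp is just an example):

```python
from datetime import datetime
import math

def time_of_day(ts: datetime) -> float:
    """Drop the date part and map a timestamp to the interval [0, 24)."""
    return ts.hour + ts.minute / 60 + ts.second / 3600

def to_radians(hours: float) -> float:
    """Map a time of day in [0, 24) to an angle in [0, 2*pi) on the clock circle."""
    return 2 * math.pi * hours / 24

print(to_radians(time_of_day(datetime(2017, 5, 27, 18, 30))))  # ~4.84 radians
```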
We chose to apply soft clustering (also referred to as fuzzy clustering), which has the benefit of assigning to each data point a likelihood of belonging to each cluster. A well-known approach to soft clustering is based on the combination of the Expectation-Maximization (EM) <cit.> algorithm with mixture models, which are probability distributions consisting of multiple components of the same probability distribution. Each component in the mixture represents one cluster, and the probability of a data point belonging to that cluster is the probability that this cluster generated that data point. The EM algorithm is used to obtain a maximum likelihood estimate of the mixture model parameters, i.e., the parameters of the probability distributions in the mixture.A well-known type of mixture model is the Gaussian Mixture Model (GMM), where the components in the mixture distributions are normal distributions. The data space of time is, however, non-Euclidean: it has a circular nature, e.g., 23.99 is closer to 0 than to 23. This circular nature of the data space introduces problems for GMMs. Figure <ref> illustrates the problem of GMMs in combination with circular data by plotting the timestamps of the bedroom sensor events of the Van Kasteren <cit.> real-life smart home event log. The GMM fitted to the timestamps of the sensor events consists of two components, one with the mean at 9.05 (in red) and one with a mean at 20 (in blue). The histogram representation of the same data shows that some events occurred just after midnight, which on the clock is closer to 20 than to 9.05. The GMM, however, is unaware of the circularity of the clock, which results in a mixture model that seems inappropriate when visually comparing it with the histogram. The standard deviation of the mixture component with a mean at 9.05 is much higher than one would expect based on the histogram as a result of the mixture model trying to explain the data points that occurred just after midnight. The field of circular statistics (also referred to as directional statistics), concerns the analysis of such circular data spaces (cf. <cit.>). In this paper, we use a mixture of von Mises distributions to capture the daily patterns.Here, we introduce a framework for generating refinements of event labels based on time attributes using techniques from the field of circular statistics. This framework consists of three stages to apply to the set of timestamps of a sensor: Data-model pre-fitting stageA known problem with many clustering techniques is that they return clusters even when the data should not be clustered. In this stage, we assess if the events of a certain sensor should be clustered at all, and if so, how many clusters it contains. For sensor types that are assessed to not be clusterable (i.e., the data consists of one cluster), the procedure ends and the succeeding two stages are not executed. Data-model fitting stageIn this stage, we cluster the events of a sensor type by timestamp using a mixture consisting of components that take into account the circularity of the data. The clustering result obtained in the fitting stage is now a candidate label refinement. The label can be refined based on the clustering result by adding the assigned cluster to the label of the event, e.g., open/close fridge can be relabeled into three distinct labels open/close fridge 1, open/close fridge 2, and open/close fridge 3 in case the timestamps of the fridge where clustered into three clusters. 
Data-model post-fitting stage In this stage, the quality of the candidate label refinements is assessed from both a cluster quality perspective and a process model (event ordering statistics) perspective. The label is only refined when the candidate label refinement is 1) based on a clustering that has a sufficiently good fit with the data, and 2) helps to discover a more insightful process model. If the candidate label refinement does not pass one of the two tests, the label refinement candidate will not be applied (i.e., the label will remain to only consist of the sensor name). We now proceed with introducing the three stages in detail.

§.§ Data-model pre-fitting stage This stage consists of three procedures: a test for uniformity, a test for unimodality, and a method to select the number of clusters in the data. If the timestamps of a sensor type are considered to be uniformly distributed or follow a unimodal distribution, the data is considered to not be clusterable, and the sensor type will not be refined. If the timestamps are neither uniformly distributed nor unimodal, then the procedure for the selection of the number of clusters will decide on the number of clusters used for clustering.

§.§.§ Uniformity Check Rao's spacing test <cit.> tests the uniformity of the timestamps of the events from a sensor around the circular clock. This test is based on the idea that uniform circular data is distributed evenly around the circle, and n observations are separated from each other by 2π/n radians. The null hypothesis is that the data is uniform around the circle. Given n successive observations f_1,…,f_n, either clockwise or counterclockwise, the test statistic U for Rao's Spacing Test is defined as U = 1/2∑_i = 1^n| T_i - λ|, where λ = 2π/n, T_i = f_{i+1} - f_i for 1 ≤ i ≤ n - 1 and T_n = (2π - f_n) + f_1.

§.§.§ Unimodality Check Hartigan's dip test <cit.> tests the null hypothesis that the data follows a unimodal distribution on a circle. When the null hypothesis can be rejected, we know that the distribution of the data is at least bimodal. Hartigan's dip test measures the maximum difference between the empirical distribution function and the unimodal distribution function that minimizes that maximum difference.

§.§.§ Selecting the Number of Mixture Components The Bayesian Information Criterion (BIC) <cit.> introduces a penalty for the number of model parameters to the evaluation of a mixture model. Adding a component to a mixture model increases the number of parameters of the mixture with the number of parameters of the distribution of the added component. The likelihood of the data given the model can only increase by adding extra components; adding the BIC penalty results in a trade-off between the number of components and the likelihood of the data given the mixture model. BIC is formally defined as BIC = -2 * ln(L̂) + k * ln(n), where L̂ is a maximized value for the data likelihood, n is the sample size, and k is the number of parameters to be estimated. A lower BIC value indicates a better model. We start with one component and iteratively increase the number of components from k to k+1 as long as the decrease in BIC is larger than 10, which is shown to be an appropriate threshold in <cit.>.

§.§ Data-model fitting stage A generic approach to estimate a probability distribution from data that lies on a circle or any other type of manifold (e.g., the torus and sphere) was proposed by Cohen and Welling in <cit.>.
However, their approach estimates the probability distribution on a manifold in a non-parametric manner, and it does not use multiple probability distribution components, making it unsuitable as a basis for clustering.We cluster events generated by one sensor using a mixture model consisting of components of the von Mises distribution, which is the circular equivalent of the normal distribution. This technique is based on the approach of Banerjee et al. <cit.> that introduces a clustering method based on a mixture of von Mises-Fisher distribution components, which is a generalization of the 2-dimensional von Mises distribution to n-dimensional spheres. A probability density function for a von Mises distribution with mean direction μ and concentration parameter κ is defined as pdf(θ|μ, κ) = 1/2π I_0(κ)e^κcos(θ - μ), where mean μ and data point θ are expressed in radians on the circle, such that 0 ≤θ≤ 2π, 0 ≤μ≤ 2π, κ≥ 0. I_0 represents the modified Bessel function of order 0, defined as I_0(k) = 1/2π∫_0^2π e^κcos(θ)dθ. As κ approaches 0, the distribution becomes uniform around the circle. As κ increases, the distribution becomes relatively concentrated around the mean μ and the von Mises distribution starts to approximate a normal distribution. We fit a mixture model of von Mises components using the package movMF <cit.> provided in R, using the number of components found with the BIC procedure of the pre-fitting stage. A candidate label refinement is created based on the clustering result, where the original label based on the sensor type is refined into a new number of distinct labels, each representing one von Mises component, where each event is relabeled according to the von Mises component that has the assigns the highest likelihood to the timestamp of that event. §.§ Data-model post-fitting stageThis stage consists of two procedures: a statistical test to assess how well the clustering result fits the data, and a test to assess whether the ordering relations in the log become stronger by applying the relabeling function (i.e., whether it becomes more likely to discover a precise process model with process discovery techniques). §.§.§ Goodness-of-fit testAfter fitting a mixture of von Mises distributions to the sensor events, we perform a goodness-of-fit test to check whether the data could have been generated from this distribution. We describe the Watson U^2 statistic <cit.>, a goodness-of-fit assessment based on hypothesis testing.The Watson U^2 statistic measures the discrepancy between the cumulative distribution function F(θ) and the empirical distribution function F_n(θ) of some sample θ drawn from some population and is defined as U^2 = n∫_0^2π[ F_n(θ) - F(θ) - ∫_0^2π{ F_n(ϕ) - F(ϕ) } dF(ϕ) ]^2 dF(θ).§.§.§ Control flow testThe clustering obtained can be used as a label refinement where we refine the original event label into a new label for each cluster. We assess the quality of this label refinement from a process perspective using the label refinement evaluation method described in <cit.>. This method tests whether the log statistics that are used internally in many process discovery algorithms become significantly more deterministic by applying the label refinement. Hence, we test whether the models become more precise after time-based label refinement. 
An example of such a log statistic is the direct successor statistic: #_L,>^+(b,c) is the number of occurrences of b in the traces of L that are directly followed by c, i.e., in some σ∈ L, i∈{1,…,|σ|} we have label([σ(i)])=b and label([σ(i+1)])=c, likewise, #_L,>^-(b,c) is the number of occurrences of b which are not directly followed by c. This control-flow test <cit.> outputs a p-value that indicates whether such log statistics of refined activities a_1,a_2,… of some activity a change with statistical significance. When #_L,>^+(b,c)=#_L,>^-(b,c) the entropy of b being directly followed by c is 1 bit, equal to a coin toss. In addition to the p-value, the test returns an information gain value, which indicates the ratio of the decrease in the total bits of entropy in the log statistics as a result of applying the label refinement. Information gain can be used as a selection criterion for label refinements when there are multiple sensor types that can be refined according to the three steps of this framework. While the entropy of a single log statistic cannot increase by applying a label refinement, the information gain of a refinement can still be negative when it is not useful, as it increases the number activities in the log and therefore also increases the total number of log statistics. § CASE STUDYWe apply our time-based label refinements approach to the real-life smart home dataset described in Van Kasteren et al. <cit.>. The Van Kasteren dataset consists of 1285 events divided over fourteen different sensors. We segment in days from midnight to midnight to define cases. Figure <ref> shows the process model discovered on this event log with the Inductive Miner infrequent <cit.> process discovery algorithm with 20% filtering, which is a state-of-the-art process discovery algorithm that discovers a process model that describes the most frequent 80% of behavior in the log. Note that this process model overgeneralizes, i.e., it allows for too much behavior. At the beginning a (possibly repeated) choice is made between five transitions. At the end of the process, the model allows any sequence over the alphabet of five activities, where each activity occurs at least once.We illustrate the framework by applying it to the bedroom door sensor. Rao's spacing test results in a test statistic of 241.0 with 152.5 being the critical value for significance level 0.01, indicating that we can reject the null hypothesis of a uniformly distributed set of bedroom door timestamps. Hartigan's dip test results in a p-value of 3.95×10^-4, indicating that we can reject the null hypothesis that there is only one cluster in the bedroom door data. Figure <ref> shows the BIC values for different numbers of components in the model. The figure indicates that there are two clusters in the data, as this corresponds to the lowest BIC value. Table <ref> shows the mean and κ parameters of the two clusters found by optimizing the von Mises mixture model with the EM algorithm. A value of 0≡2π radii equals midnight. After applying the von Mises mixture model to the bedroom door events and assigning each event to the maximum likelihood cluster we obtain a time range of [3.08-10.44] for cluster 1 and a time range of [17.06-0.88] for cluster 2. The Watson U^2 test results in a test statistic of 0.368 and 0.392 for cluster 1 and 2 respectively with a critical value of 0.141 for a 0.01 significance level, indicating that the data is likely to be generated by the two von Mises distributions found. 
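The steps applied above to the bedroom door sensor can be sketched as follows. The paper fits the mixture with the movMF package in R; the Python version below is only an illustration, with the concentration parameter updated via the approximation of Banerjee et al. and synthetic timestamps standing in for the real sensor data. The significance thresholds of Rao's spacing test and the Watson U^2 test come from published tables and are not computed here.

```python
import numpy as np

def vm_pdf(theta, mu, kappa):
    """Von Mises density of Section 3 (np.i0 is the modified Bessel function I_0)."""
    return np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * np.i0(kappa))

def rao_spacing_statistic(theta):
    """Rao's spacing test statistic U, returned in degrees so that it can be
    compared with tabulated critical values (152.5 in the case study above)."""
    f = np.sort(np.asarray(theta) % (2 * np.pi))
    gaps = np.append(np.diff(f), 2 * np.pi - f[-1] + f[0])
    return np.degrees(0.5 * np.sum(np.abs(gaps - 2 * np.pi / len(f))))

def fit_von_mises_mixture(theta, k=2, n_iter=200, seed=0):
    """EM for a k-component von Mises mixture on the circle."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta)
    mu = rng.uniform(0, 2 * np.pi, size=k)
    kappa, pi = np.ones(k), np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E step: responsibilities of each component for each timestamp
        dens = np.stack([pi[j] * vm_pdf(theta, mu[j], kappa[j]) for j in range(k)])
        resp = dens / np.maximum(dens.sum(axis=0, keepdims=True), 1e-300)
        # M step: mean direction, concentration and weight per component
        for j in range(k):
            w = resp[j]
            C, S = np.sum(w * np.cos(theta)), np.sum(w * np.sin(theta))
            mu[j] = np.arctan2(S, C) % (2 * np.pi)
            r_bar = min(np.hypot(C, S) / w.sum(), 0.999)   # numerical safeguard
            kappa[j] = r_bar * (2 - r_bar ** 2) / (1 - r_bar ** 2)
            pi[j] = w.sum() / len(theta)
    return mu, kappa, pi, resp.argmax(axis=0)

# Synthetic example: one morning and one evening cluster of sensor timestamps.
rng = np.random.default_rng(1)
hours = np.concatenate([rng.normal(7.5, 1.0, 40), rng.normal(21.5, 1.5, 40)]) % 24
theta = 2 * np.pi * hours / 24
print(rao_spacing_statistic(theta))            # far above the tabulated critical value
mu, kappa, pi, assignment = fit_von_mises_mixture(theta, k=2)
print(np.degrees(mu) / 15)                     # component means back on the 24h clock
```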
The label refinement evaluation method <cit.> finds statistically significant differences between the events from the two bedroom door clusters with regard to their control-flow relations with other activities in the log for 10 other activities using the significance level of 0.01, indicating that the two clusters are different from a control-flow perspective. Figure <ref> shows the process model discovered with the Inductive Miner infrequent with 20% filtering after applying this label refinement to the Van Kasteren event log. The process model still overgeneralizes the overall process, but the label refinement does help to restrict the behavior, as it shows that the evening bedroom door events are succeeded by one or more events of type groceries cupboard, freezer, cups cupboard, fridge, plates cupboard, or pans cupboard, while the morning bedroom door events are followed by one or more frontdoor events. It seems that this person generally goes to the bedroom in-between coming home from work and starting to cook. The loop of the frontdoor events could be caused by the person leaving the house in the morning for work, resulting in no logged events until the person comes home again by opening the frontdoor. Note that in Figure <ref> bedroom door and frontdoor events can occur an arbitrary number of times in any order. Figure <ref> furthermore does not allow for the bedroom door to occur before the whole block of kitchen-located events at the beginning of the net. In the process mining field multiple quality criteria exist to express the fit between a process model and an event log. Two of those criteria are fitness <cit.>, which measures the degree to which the behavior that is observed in the event log can be replayed on the process model, and precision <cit.>, which measures the degree to which the behavior that was never observed in the event log cannot be replayed on the process model. Low precision typically indicates an overly general process model, that allows for too much behavior. Typically we aim for process models with both high fitness and precision, therefore one can consider the harmonic mean of the two, often referred to as F-score. The bedroom door label refinement described above improves the precision of the process model found with the Inductive Miner infrequent (20% filtering) <cit.> from 0.3577 when applied on the original event log to 0.4447 when applied on the refined event log and improves the F-score from 0.5245 to 0.6156.The label refinement framework allows for refinement of multiple activities in the same log. For example, label refinements can be applied iteratively. Figure <ref> shows the effect of a second label refinement step, where Plates cupboard using the same methodology is refined into two labels, representing time ranges [7.98-14.02] and [16.05-0.92] respectively. This refinement shows the additional insight that the evening version of the Plates cupboard occurs directly before or after the microwave. Generating multiple label refinements, however, comes with the problem that the control-flow test <cit.> is sensitive to the order in which label refinements are applied. Because label refinements change the event log, it is possible that after applying some label refinement A, some other label refinement B starts passing the control-flow test that did not pass this test before, or fails the test while it passed before. Additionally, applying one label refinement can change the information gain of applying another label refinement afterwards. 
For example, when #_L,>^+(b,c)=#_L,>^-(b,c), i.e., b is followed by c 50% of the time, the entropy of this log statistic is 1, equal to a coin toss. Some label refinement A which refines b into b_1,b_2, where b_1 is always followed by c and b_2 is never followed by c, is a good label refinement from an information gain point of view, as it decreases the entropy of the log statistic to zero. Some other label refinement B, which refines c into c_1,c_2 such that all b's are directly followed by c_1's and never by c_2's, also leads to information gain. However, applying refinement B after having already applied refinement A does not lead to any further information gain, since refinement A has already made it deterministic whether or not b is followed by any c. Ineffective label refinements might even harm process discovery, as each refinement decreases the frequencies with which activities are observed, thereby decreasing the amount of evidence for certain control-flow relations.
§ ON THE ORDERING OF LABEL REFINEMENTS

As shown in Section <ref>, the outcome of the control flow test of a label refinement can depend on whether other label refinements that passed the tests of the pre-fitting and post-fitting stages have already been applied. Therefore, in this section, we explore the effect of the ordering of label refinements on real-life event logs. We explore this effect by evaluating four strategies to select a set of label refinements to apply to the event log. Each of the strategies assumes the desired number k of label refinements to be given.

All-at-once: In this strategy we naively ignore the influence of the interplay between label refinements on the outcome of the control flow test and select the top k label refinements in a single step based on their information gain calculated using the original event log, to which the other selected label refinements are not applied.

Greedy Search: We first apply the best label refinement in terms of information gain, then refine the event log using this label refinement, and then iterate to find the next label refinement, calculating the information gain using the refined event log from the previous step.

Exhaustive Search: This strategy exhaustively tries all combinations of label refinements and searches for the combination of label refinements that jointly leads to the largest information gain. While the label refinement combinations that are found with this strategy are optimal in terms of information gain, this strategy can quickly become computationally intractable for event logs that contain many activities.

Beam Search: In beam search, only a predetermined number b (called the beam size) of best partial solutions are kept as candidates, i.e., only the best b combinations in terms of information gain that were found consisting of n label refinements are explored to search for a new set of n+1 label refinements. This is an intermediate strategy in-between greedy and exhaustive search, with greedy search being a beam search with b=1 and exhaustive search being a beam search with b=∞. A code sketch of these selection strategies is given below.

We apply these four strategies on three event logs from the human behavior domain and measure the fitness, precision, and F-score of the model discovered with the Inductive Miner infrequent <cit.> with 20% filtering after each label refinement. The first event log is the Van Kasteren <cit.> event log, which we introduced in Section <ref>. The other two event logs are two different households of a smart home experiment conducted by MIT <cit.>.
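The four selection strategies above can be phrased as one search routine. The following is a minimal beam-search sketch, not the ProM implementation: greedy search corresponds to beam size 1, and exhaustive search to an unbounded beam. The helpers candidate_refinements and apply_refinement are hypothetical stand-ins for the refinement generation and application steps of the framework, and information_gain is assumed to score a single refinement on the current log.

```python
def select_refinements(log, candidate_refinements, apply_refinement,
                       information_gain, k, beam_size=3):
    # Each beam entry is (chosen refinements, refined log, accumulated gain).
    beam = [((), log, 0.0)]
    for _ in range(k):
        expanded = []
        for chosen, current, gain_so_far in beam:
            for ref in candidate_refinements(current):
                if ref in chosen:
                    continue
                gain = information_gain(current, ref)
                if gain <= 0.0:
                    # Stopping criterion: only keep refinements with positive gain.
                    continue
                expanded.append((chosen + (ref,),
                                 apply_refinement(current, ref),
                                 gain_so_far + gain))
        if not expanded:
            break                      # no further refinement improves the statistics
        expanded.sort(key=lambda entry: entry[2], reverse=True)
        beam = expanded[:beam_size]    # keep only the b best partial solutions
    return max(beam, key=lambda entry: entry[2])

# Greedy search: select_refinements(..., beam_size=1).
# Exhaustive search on small instances: use a beam size larger than the number of
# possible combinations, e.g. beam_size=10**9.
```

The all-at-once strategy is not a special case of this routine: it scores every candidate once on the original log and takes the top k directly, without re-applying refinements in between.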
The log Household A of the MIT experiment contains 2701 events spread over 16 days, with 26 different sensors. The Household B log contains 1962 events spread over 17 days and 20 different sensors. Figure <ref> shows the results. On all three event logs the precision can be improved considerably through label refinements. Note that when applying only one label refinement all four strategies are identical. When refining a second label the four strategies all select the same label refinement on all three logs. Therefore the F-score, fitness, and precision for two refined labels happen to be identical. Figure <ref> shows that for the MIT household A data set there are 7 sensor types that can be refined, i.e., they passed the statistical tests of the pre-fitting stage and their obtained clustering passed the goodness-of-fit test. For the MIT household B data set there are 10 activities that can be refined, and there are 8 activities that can be refined for the Van Kasteren data set. However, since the F-score for all strategies drops again after a few label refinements, not all of those label refinements lead to better process models. The four strategies perform very similarly in terms of F-score. Exhaustive search outperforms the other strategies for a few refinements on some logs; however, such improvements come with considerable computation times. On the MIT household B log, which has 10 possible label refinements, it takes about 25 minutes on an Intel i7 processor to evaluate all possible combinations of refinements. On logs with even more possible refinements the exhaustive strategy can quickly become computationally infeasible. The all-at-once strategy, which is computationally very fast and only takes milliseconds to compute, shows almost identical performance for MIT household A and Van Kasteren. When making six or more refinements on the MIT household B log, the performance of the all-at-once strategy lags behind the other strategies, indicating that the label refinements that were applied earlier cause the later label refinements to be less effective. However, the optimum in F-score for this log lies at three refinements; therefore the sixth refinement, where a performance difference between the non-exhaustive strategies emerges, should not be performed with any of the strategies in the first place.

Since the F-score decreases again when applying too many label refinements, it is important to have a stopping criterion that prevents refining the event log too much. The dashed line in Figure <ref> shows the results when we only refine a label when the information gain of the refinement is larger than zero. On the MIT household A and B logs this stopping criterion causes all strategies to stop at the best combination of label refinements in terms of F-score, consisting of one and three refinements, respectively. This indicates that the control flow test <cit.> provides a useful stopping criterion for label refinements.

All strategies except the exhaustive search strategy suggest as the fourth refinement for MIT household B a refinement that decreases the F-score sharply, only for it to increase again with a fifth refinement. This is caused by an unhelpful refinement being found as the fourth refinement by those strategies, which causes the frequencies to drop below the filtering threshold of the Inductive Miner, leading to a model that is less precise.
At the fifth refinement, the follows statistics of other activities drop as well, causing the follows statistics that dropped in the fourth refinement to be relatively higher and above the threshold again. On the Van Kasteren log the optimum in F-score is to make only one refinement, although the F-score after applying the second and third refinement as found by the exhaustive and beam search is almost identical. The all-at-once strategy stops after applying only two refinements while the other strategies apply a third refinement. The best refinement combination found with the all-at-once strategy using the stopping criterion is identical to the refinement combination found with the other strategies, suggesting that in practice the differences between the four approaches are small. On real-life smart home environment event logs the effect that one label refinement influences the control flow test outcome of others is limited.Figures <ref> and <ref> shows the process model that are discovered with the Inductive Miner infrequent with 20% filtering respectively from the original MIT household A event log and the event log obtained after applying the optimal combination of label refinements found in the results of Figure <ref>. Because of the silent transitions, the process model discovered from the original event log allows for almost all orderings over the sensor types. Even though the transition labels in the process model discovered from the refined event log are not readable because of the size, it is clear from the structure of the process model that it is much more behaviorally specific, containing a mix of sequential orderings, parallel blocks, and choices over the sensor types. Especially interesting is the part indicated by the blue dashed ellipse, which contains a parallel block consisting of a cabinet, the oven and burner, and the dishwasher, showing a clearly recognizable cooking routine. Furthermore, the part indicated by the red dotted ellipse indicates a sequentially ordered part, consisting of some door sensor registering the opening of a door, followed by starting the washing machine and then the laundry dryer.The time-based label refinement generation framework as well as the four strategies to generate multiple label refinements on the same event log are implemented and publicly available in the process mining toolkit ProM <cit.> as part of the LabelRefinements[<https://svn.win.tue.nl/repos/prom/Packages/LabelRefinements/>] package.§ RELATED WORK We classify related work into three categories. The first category of related work concerns techniques from the process mining field, specifically focusing on techniques that, like our approach, focus on refining activity labels. The second category of related work, also originating from the process mining area, focuses on the interplay between ordering between process activities and external information, such as time. The third category of related work originates from the ambient intelligence and smart home environments field, focusing on work on mining temporal relations between human activities. We use these three categories to structure this section. §.§ Label Splits and Refinements in Process MiningThe task of finding refinements of event labels in the event log is closely related to the task of mining process models with duplicate activities, in which the resulting process model can contain multiple transitions/nodes with the same label. 
From the point of view of the behavior allowed by a process model, it makes no difference whether a process model is discovered on an event log with refined labels, or whether a process model is discovered with duplicate activities such that each transition/node of the duplicate activity precisely covers one version of the refined label. However, a refined label may also provide additional insights as the new labels are explainable in terms of time. The first process discovery algorithm capable of discovering duplicate tasks was proposed by Herbst and Karagiannis in 2004 <cit.>, after which many others have been proposed, including the Evolutionary Tree Miner <cit.>, the α^*-algorithm <cit.>, the α^#-algorithm <cit.>, the EnhancedWFMiner <cit.>. An alternative approach has been proposed by Vázques-Barreiros <cit.> et al., who describe a local search based approach to repair a process model to include duplicate activities, starting from an event log and a process model without duplicate activities. Existing work on mining models with duplicate activities all base their duplicate activities on how well the event log fits the process model, and do not try to find semantic differences between the different versions of the activities in the form of attribute differences.The work that is closest to our work is the work by Lu et al. <cit.>, who describe an approach to pre-process an event log by refining event labels with the goal of discovering a process model with duplicate activities. The method proposed by Lu et al., however, does not base the relabelings on data attributes of those events and only uses the control flow context, leaving uncertainty whether two events relabeled differently are actually semantically different. §.§ Data-Aware Process MiningAnother area of related work is data-aware process mining, where the aim is to discover rules with regard to data attributes of events that decide decision points in the process. De Leoni and van der Aalst <cit.> proposed a method that discovers data guards for decision points in the process based on alignments and decision tree learning. This approach relies on the discovery of a behaviorally well-fitting process model from the original event log. When only overgeneralizing process models (i.e., allowing for too much behavior) can be discovered from an event log, the correct decision points might not be present in the discovered process model at all, resulting in this approach not being able to discover the data dependencies that are in the event log. Our label refinements use data attributes prior to process discovery to enable the discovery of more behaviorally constrained process models by bringing parts of the event attribute space to the event label. §.§ Temporal Relation Mining for Smart Home EnvironmentsGalushka et al. <cit.> provide an overview of temporal data mining techniques and discuss their applicability to data from smart home environments. Many of the techniques described in the overview focus on real-valued time series data, instead of discrete sequences which we assume as input in this work. For discrete sequence data, Galushka et al. <cit.> propose the use of sequential rule mining techniques, which can discover rules of the form “if event a occurs then event b occurs with time T”.Huynh et al. <cit.> proposed to use topic modeling to mine activity patterns from sequences of human events. 
Topic modeling originates from the field of natural language processing and addresses the challenge to find topics in textual documents and assign a distribution over these topics to each document. However, the discovered topics do not represent the human activities in terms of control-flow ordering constructs like sequential ordering, concurrent execution, choices, and loops. Ogale et al. <cit.> proposed an approach to describe the temporal relation between human behavior activities from video data using context-free grammars, using the human poses extracted from the video as the alphabet. Like Petri nets, context-free grammars define a formal language over its alphabet. However, Petri nets have a graphical representation, which is lacking for grammars. Furthermore, as shown by Peterson <cit.>, Petri net languages are a subclass of context-sensitive languages, and some Petri net languages are not context-free. This indicates that some relations over activities that can be expressed in Petri nets cannot be expressed in a context-free grammar.One particularly related technique, called TEmporal RElation Discovery of Daily Activities (TEREDA), was proposed by Nazerfard et al. <cit.>. TEREDA leverages temporal association rule mining techniques to mine ordering relations between activities as well as patterns in their timestamp and duration. The ordering relations between activities that are discovered by TEREDA are restricted to the form “activity a follows activity b”, where our proposed approach of modeling the relations with Petri nets allow for modeling of more complex relations between larger number of activities, such as: “the occurrences of activity b that are preceded by activity a are followed by both activity d and e, but in arbitrary order”. The patterns in the timestamps are obtained by fitting a Gaussian Mixture Model (GMM) with the Expectation-Maximization (EM) algorithm, thereby ignoring problems caused by the circularity of the 24-hour clock introduced in this paper.Jukkala and Cook <cit.> propose a method to mine temporal relations between activities from smart home environments logs where the temporal relation patterns are expressed in Allen's interval algebra <cit.>. Allen's interval algebra allows the expression of thirteen distinct types of temporal relations between two activities based on both the start and end timestamps of these activities. The approach of Jukkala and Cook <cit.> is limited to describing the relations between pairs of activities, and more complex relations between three or higher numbers of activities cannot be discovered. The aim of mining the patterns in Allen's interval algebra representation is to increase the accuracy of activity recognition systems, while our goal is knowledge discovery.Several papers from the process mining area have focused on mining temporal relations between activities from smart home event logs. Leotta et al. <cit.> postulate three main research challenges for the applicability of process mining technique for smart home data. One of those three challenges is to improve process mining techniques to address the less structured nature of human behavior as compared to business processes. Our technique addresses this challenge, as the time-based label refinements help in uncovering relations between activities with process mining techniques that could not be found without applying time-based label refinements.DiMaggio et al. <cit.> and Sztyler et al. 
<cit.> propose to mine Fuzzy Models <cit.> to describe the temporal relations between human activities. The Fuzzy Miner <cit.>, a process discovery algorithm that mines a Fuzzy Model from an event log, is a process discovery algorithm that is designed specifically for weakly structured processes. However, Fuzzy Models, in contrast to Petri nets, do not define a formal language over the activities, and are therefore not precise on what activity orderings are allowed and which are not. While mining a Fuzzy Model description of human activities is less challenging compared to mining a process model with formal semantics, it is also limited in the insights that can be obtained from it.Finally, insights in the human routines can be obtained through the discovery of Local Process Models <cit.>, which bridges process mining and sequential pattern mining by finding patterns that include high-level process model constructs such as (exclusive) choices, loops, and concurrency. However, Local Process Models, as opposed to process discovery, only give insight into frequent subroutines of behavior and do not provide the global picture of the behavior throughout the day from start to end.§ CONCLUSION & FUTURE WORK We have proposed a framework based on techniques from the field of circular statistics to refine event labels automatically based on their timestamp attribute. We have shown on a real-life event log that this framework can be used to discover label refinements that allow for the discovery of more insightful and behaviorally more specific process models. Additionally, we explored four strategies to search combinations of label refinements. We found that the difference between an all-at-once strategy, which ignores that one label refinement can have an effect on the usefulness of other label refinements, and other more computationally expensive strategies is often limited. An interesting area of future work is to explore the use of other types of event data attributes to refine labels, e.g., power values of sensors. A next research step would be to explore label refinements based on a combination of data attributes combined. This introduces new challenges, such as the clustering on partially circular and partially Euclidean data spaces. Additionally, other time-based types of circles than the daily circle described in this paper, such as the week, month, or year circle, are worth investigating. * ios1 | http://arxiv.org/abs/1705.09359v2 | {
"authors": [
"Niek Tax",
"Emin Alasgarov",
"Natalia Sidorova",
"Wil M. P. van der Aalst",
"Reinder Haakma"
],
"categories": [
"cs.LG",
"cs.AI",
"cs.DB"
],
"primary_category": "cs.LG",
"published": "20170525210120",
"title": "Generating Time-Based Label Refinements to Discover More Precise Process Models"
} |
Equivalences Between Network Codes With Link Errors and Index Codes With Side Information Errors Jae-Won Kim and Jong-Seon No, Fellow, IEEE J.-W. Kim and J.-S. No are with the Department of Electrical and Computer Engineering, INMC, Seoul National University, Seoul 08826, Korea (e-mail: [email protected], [email protected]). Received September 15, 2016; accepted March 16, 2017
==========================================================================

In this paper, new equivalence relationships between a network code with link errors (NCLE) and an index code with side information errors (ICSIE) are studied. First, for a given network coding instance, the equivalent index coding instance is derived, where an NCLE is converted to the corresponding ICSIE and vice versa. Next, for a given index coding instance, the equivalent network coding instance is also derived, where an ICSIE is converted to the corresponding NCLE and vice versa if a pair of encoding functions of an original link and the duplicated link are functionally related in the network code. Finally, several properties of an NCLE are derived from those of the equivalent ICSIE using the fact that the NCLE and the ICSIE are equivalent.

Index Terms: index codes, index codes with side information errors (ICSIE), network codes, network codes with link errors (NCLE), side information, side information graph

§ INTRODUCTION

Network coding was introduced in <cit.> to improve the throughput gain of terminals in a network structure, where a source node transmits information to terminal nodes through links and internal nodes. In order to improve the throughput gain, some internal nodes encode their incoming symbols, which is called network coding. In <cit.>, it was proved that a linear network code for multicast in a network can achieve the max-flow bound. For multicast cases, there exist some algorithms to construct network codes achieving the maxflow-mincut capacity for a single source <cit.>, <cit.>. In contrast to the error-free link case, a network code dealing with erroneous data on links was also studied, referred to as a network code with link errors (NCLE) in this paper. As erroneous data on links in a network are considered, the number of overall link errors in a network that network codes can overcome was studied in <cit.>, <cit.>. Index coding was introduced in <cit.> for satellite communication systems which consist of one sender and several receivers. A sender has to transmit messages to receivers through a broadcast channel, and receivers want to receive some messages and also know some messages a priori as side information. Owing to its applications and relevance to other problems, index coding has attracted significant attention and various index coding schemes have accordingly been researched.
For example, the optimal linear index coding scheme based on rank minimization over finite fields was introduced in <cit.> and random index coding was studied for infinitely long message length <cit.>.In addition to researches on the index coding schemes, relevance to other problems has been researched such as the equivalence between network coding and index coding, topological interference management, and duality with distributed storage systems <cit.>, <cit.>, <cit.>. There are also many researches on variations of index coding instances. For example, erroneous broadcast channels were considered in <cit.> and coded side information was studied in <cit.>, <cit.>. Moreover, blind index coding instances where a sender only knows the probability distribution of side information were researched <cit.> and functional index coding instances were introduced in <cit.>. In contrast to conventional assumptions on side information, an index code in which side information errors exist, called an index code with side information errors (ICSIE) was studied in <cit.>.Among these researches, we focus on an equivalence between network coding and index coding <cit.> in which their equivalence was introduced and a corresponding index coding instance was derived for a given network coding instance. It was also shown that any network codes can be converted to the corresponding index codes and vice versa. However, the equivalence between two problems for a given index coding instance was not presented in <cit.>. In <cit.>, they showed an equivalence between network computation and functional index coding for a given network coding instance and also suggested their relation for a given index coding instance with the corresponding models for both a network coding instance and an index coding instance, called the equivalent index coding instance and network coding instance, respectively. However, their models of the corresponding instances are defined in a different manner, that is, if a given network coding instance is converted to the corresponding index coding instance and converted back to the network coding instance again, the re-converted network coding instance differs from the originally given network coding instance. Similarly, the same problem occurs to a given index coding instance. Thus, we propose a method to solve these problems in this paper.In this paper, we show new equivalences between an NCLE and an ICSIE for both a given network coding instance and a given index coding instance. For a given network coding instance, the corresponding index coding instance is derived in a manner similar to that in an earlier study <cit.> and convertibility of their solutions is proved. For a given index coding instance, we modify a given side information graph by adding receivers, messages, and edges or by deleting some edges in order to derive the corresponding network coding instance in a similar manner. We also show the convertibility of their solutions if a pair of encoding functions of an original link and the duplicated link are functionally related in the network code. Our models of the corresponding instances not only offer convertibility of the coding solutions but also ensure that a given network (index) coding instance is identical to the re-converted network (index) coding instance from the corresponding index (network) coding instance. 
Moreover, the equivalent index coding instance of a given network coding instance does not contain the receiver t̂_ all, which was given in the earlier studies <cit.>, <cit.>. In <cit.>, it was noted that an equivalence between secure network and index coding can be achieved without t̂_ all. Similarly, we prove in detail that the receiver t̂_ all of the corresponding index coding instance is redundant in general. Since an NCLE and an ICSIE are equivalent, we derive several properties of an NCLE from the properties of an ICSIE such as the property of redundant links and the relationship between the conventional network code with error-free links and an NCLE.The paper is organized as follows. Several definitions, notations, and problem settings are given in Section <ref>. The main results on equivalence relationships between an NCLE and an ICSIE for both a given network coding instance and a given index coding instance are derived in Section <ref>. In Section <ref>, several properties of an NCLE are derived from those of an ICSIE based on the equivalence between an NCLE and an ICSIE. Finally, conclusions are presented in Section <ref>.§ PRELIMINARYIn this section, we define network codes with link errors and index codes with side information errors and then state their problem settings and notations, where hatted notations are used for index coding to avoid confusion. §.§ Notations Some of the notations are defined as follows: * Z[n] denotes a set of positive integers {1,2,...,n}.* Let 𝔽_q be the finite field of size q, where q is a power of prime and 𝔽_q^*=𝔽_q∖{0}.* For the vector X∈𝔽_q^n, wt(X) denotes Hamming weight of X.* Let X_D be a sub-vector (X_i_1,X_i_2,…,X_i_|D|) of a vector X=(X_1,X_2,…,X_n)∈𝔽_q^n for a subset D={i_1,i_2,…,i_|D|}⊆ Z[n], where i_1<i_2<…<i_|D|.§.§ Network Codes With Link ErrorsIn this paper, in order to provide an equivalence between an NCLE and an ICSIE for any given index coding instance, a generalized network coding scenario is considered, where each internal node can resolve their erroneous incoming symbols.For this scenario, if we know the probability distribution of the link errors, the throughput gain can be improved by assigning suitable error resistance capabilities to the internal nodes in a network structure. That is, large error resistance capabilities of internal nodes for the vulnerable links can improve throughput gain of an entire network because error propagation may be moderated. 
In this perspective, we introduce a new network code which deals with erroneous data on links.First, we introduce a network coding instance with a network structure 𝔾=(V,E,ℱ), where V and E denote the sets of nodes and edges in 𝔾, respectively and a vector of the error resistance capabilities δ described by a directed acyclic graph and a function of terminals ℱ as follows:* S̅⊆ V denotes a set of source nodes in 𝔾, where source nodes do not have incoming links.* S denotes a set of source messages, that is, s̅∈S̅ has some elements s∈ S.* T⊆ V denotes a set of terminal nodes in 𝔾, where terminal nodes do not have outgoing links.* ℱ denotes a function of the terminal nodes in 𝔾, which indicates a set of indices of each terminal's desired messages.* For a link e=(u,v)∈ E, In(e) denotes a set of incoming links of u, where u,v∈ V.* In the case of u∈S̅, In(e) denotes a set of messages that u has and In(t) denotes a set of incoming links of t for which t∈ T.* At the ends of the links, errors may occur due to transmissions through links, referred to as link errors and source nodes may have erroneous source symbols.* δ=(δ_e_1,...,δ_e_|E|,δ_t_1,...,δ_t_|T|) is a vector whose elements correspond to the error resistance capability for each outgoing link from the node and terminal in E∪ T.* When it is straightforward, we regard s, e, and t as some indices. In this network coding instance, we assume the followings: * Each message is one symbol in 𝔽_q.* Each link carries one symbol in 𝔽_q.* X_s denotes an element of a message vector X∈𝔽_q^|S|.* X_e denotes a symbol on a link e for which e∈ E.* For a set A⊆ S, X_A denotes a sub-vector of X and for a set B⊆ E, X_B denotes a vector consisting of |B| symbols of the corresponding links.* X̃_A=(X̃_1,...,X̃_|A|) and X̃_B=(X̃_1,...,X̃_|B|) denote vectors with erroneous symbol elements. Next, we describe node processing in the network code as in Fig. <ref>, where e^'_i, 1≤ i≤ l, denote the incoming edges of a node u and e_i, 1≤ i≤ k, denote the outgoing edges of u. At the node u, outgoing symbols for edges e_i, 1≤ i≤ k, are computed by encoding functions as X_e_i=F_e_i(X̃_e^'_1,...,X̃_e^'_l), 1≤ i ≤ k. We consider a network code capable of resolving some link errors. Assume that there are less than or equal to δ_e_1 symbol errors in the incoming links of u. If an encoding function F_e_1 can make a correct encoded outgoing symbol X_e_1 from l incoming symbols with less than or equal to δ_e_1 symbol errors, then F_e_1 is said to have an error resistance capability δ_e_1. When u is a source node, incoming symbols of u denote the source messages possessed by u, meaning that up to δ_e_1 message symbols are erroneous. Similarly, the decoding function D_t of a terminal t∈ T is said to have an error resistance capability δ_t if D_t can correctly obtain a decoded vector X_ℱ(t) from incoming symbols with less than or equal to δ_t symbol errors. In such a case, a network code with link errors is summarized as follows. Let δ=(δ_e_1,...,δ_e_|E|,δ_t_1,...,δ_t_|T|) be a vector whose elements correspond to the error resistance capability for each outgoing link and terminal in E∪ T. 
Then, a network code with link errors with parameters (δ,𝔾) over 𝔽_q, denoted by a (δ,𝔾)-NCLE consists of:* An encoding function F_e: 𝔽_q^| In(e)|→𝔽_q for e∈ E* A decoding function D_t: 𝔽_q^| In(t)|→𝔽_q^|ℱ(t)| for t∈ T * Satisfying F_e(X_ In(e))=F_e(X̃_ In(e)) and D_t(X_ In(t))=D_t(X̃_ In(t)) for any e∈ E and t∈ T, where wt(X_ In(e)-X̃_ In(e))≤δ_e and wt(X_ In(t)-X̃_ In(t))≤δ_t.Note that the error resistance capabilities are defined for encoding and decoding functions, that is, encoding functions for the outgoing links of one node can have different error resistance capabilities despite the fact that they have identical erroneous incoming symbols. The above encoding functions of links are local functions. However, the global function F̅_e is defined as F̅_e(X̃_S)=F_e(X̃_ In(e)). §.§ Index Codes With Side Information Errors We introduce index codes with side information errors as in <cit.>. First, an index coding instance is described as follows:* There are one sender which has n information messages as X̂=(X̂_1,…,X̂_n)∈𝔽_q^n and m receivers (or users) R_1,R_2,…,R_m, having sub-vectors of X̂ as side information.* Let 𝒳_i be the set of side information indices of a receiver R_i for i∈ Z[m].* Each receiver R_i wants to receive some elements in X̂, referred to as the wanted messages denoted by X̂_f(i), where f(i) represents the set of indices of the wanted messages of R_i and f(i)∩𝒳_i=ϕ. * A side information graph 𝒢 shows the wanted messages and side information of all receivers and the sender knows 𝒢. A side information graph is a bipartite graph which consists of message nodes and receiver nodes. A directed edge from a message node to a receiver node means that the receiver wants to receive that message. Conversely, a directed edge from a receiver node to a message node means that the receiver has that message as side information.* Let δ_s=(δ_s^(1),...,δ_s^(m)) be a vector whose elements correspond to the side information error resistance capability of each receiver.* The sender transmits messages to receivers through an error-free broadcast channel.Next, a (δ_s,𝒢)-index code with side information errors is introduced. We consider an index code which can overcome arbitrary side information errors for each receiver, where each receiver does not know which side information is erroneous. Specifically, each receiver R_i has a side information error resistance capability δ_s^(i) such that the receiver can decode the wanted messages even though less than or equal to δ_s^(i) symbols of side information are erroneous. Then, the (δ_s,𝒢)-index code with side information errors is described as follows <cit.>. Let δ_s=(δ_s^(1),...,δ_s^(m)) be the vector of side information error resistance capabilities. An index code with side information errors with parameters (δ_s,𝒢) over 𝔽_q, denoted by a (δ_s,𝒢)-ICSIE is a set of codewords having: * An encoding function F̂: 𝔽_q^n→𝔽_q^N* A set of decoding functions D̂_1,D̂_2,…,D̂_m such that D̂_i: 𝔽_q^N×𝔽_q^|𝒳_i|→𝔽_q^|f(i)| satisfyingD̂_i(F̂(X̂),X̂_𝒳̃_i)=X̂_f(i)for all i∈ Z[m], X̂∈𝔽_q^n, and wt(X̂_𝒳_i-X̂_𝒳̃_i)≤δ_s^(i), where X̂_𝒳̃_i=(X̂_1̃,...,X̂_|̃𝒳̃_̃ĩ|̃) is the erroneous side information vector of a receiver R_i. A (δ_s,𝒢)-ICSIE in <cit.> is a linear code. However, a general index code containing a nonlinear case is considered in this paper. 
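The decodability requirement in the definition above can be checked mechanically for small instances. The following is a brute-force sketch over 𝔽_q that is feasible only for toy parameters; the dictionary-based description of the receivers and the two-receiver toy instance are illustrative assumptions and not notation from this paper.

```python
from itertools import product

def hamming_distance(u, v):
    return sum(int(a != b) for a, b in zip(u, v))

def is_valid_icsie(q, n, receivers, encode):
    # Checks, by exhaustive enumeration, that every receiver recovers its wanted
    # messages for every message vector and every side information error pattern
    # of weight at most its error resistance capability.
    alphabet = range(q)
    for x in product(alphabet, repeat=n):
        codeword = encode(x)
        for r in receivers:
            wanted = tuple(x[j] for j in r["wants"])
            true_side = tuple(x[j] for j in r["side"])
            for noisy_side in product(alphabet, repeat=len(true_side)):
                if hamming_distance(noisy_side, true_side) > r["delta"]:
                    continue
                if r["decode"](codeword, noisy_side) != wanted:
                    return False
    return True

# Toy instance over GF(2): two receivers, each knowing the other's message,
# with zero side information error resistance capability.
encode = lambda x: ((x[0] + x[1]) % 2,)
receivers = [
    {"wants": (0,), "side": (1,), "delta": 0,
     "decode": lambda c, s: ((c[0] - s[0]) % 2,)},
    {"wants": (1,), "side": (0,), "delta": 0,
     "decode": lambda c, s: ((c[0] - s[0]) % 2,)},
]
print(is_valid_icsie(2, 2, receivers, encode))  # True
```

No linearity is assumed for either the encoder or the decoders, in line with the remark above; the enumeration costs on the order of q^n codeword evaluations times q^|𝒳_i| error patterns per receiver, so it is only meant to make the definition concrete.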
Thus, we should modify and re-prove some of the properties of a (δ_s,𝒢)-ICSIE.Let ℐ(q,𝒢,δ_s) be a set of vectors defined byℐ(q,𝒢,δ_s)=⋃_i∈ Z[m]ℐ_i(q,𝒢,δ_s^(i))where ℐ_i(q,𝒢,δ_s^(i))={Ẑ∈𝔽_q^n |wt(Ẑ_𝒳_i)≤2δ_s^(i), Ẑ_f(i)≠0}.Then, a property of a (δ_s,𝒢)-ICSIE is given in the following theorem.A (δ_s,𝒢)-ICSIE is valid if and only if F̂(X̂)≠F̂(X̂^')for all X̂-X̂^'∈ℐ(q,𝒢,δ_s). Each receiver R_i has to recover X̂_f(i) using the received codeword F̂(X̂) and the side information X̂_𝒳̃_i. Then, the sender has to encode some confusing messages as different codewords. Because each receiver R_i is only interested in X̂_f(i), the codewords of distinct messages with an identical X̂_f(i) do not need to be distinguished. Moreover, the codewords of two messages X̂ and X̂^' such that wt(X̂_𝒳_i-X̂^'_𝒳_i)>2δ_s^(i) do not need to be distinguished because they can be distinguished by the side information of R_i. Thus, only problematic types of messages can be represented by two messages X̂ and X̂^' such that X̂_f(i)≠X̂^'_f(i) and wt(X̂_𝒳_i-X̂^'_𝒳_i)≤2δ_s^(i). Given that X̂ and X̂^' are confusing, F̂(X̂)≠F̂(X̂^') should be satisfied for all i∈ Z[m]. For a linear (δ_s,𝒢)-ICSIE, (<ref>) becomes ẐG≠0 for all Ẑ∈ℐ(q,𝒢,δ_s), where G is the corresponding generator matrix. Let Φ be a set of subsets of Z[n] defined byΦ={B⊆ Z[n]||𝒳_i∩ B|≥2δ_s^(i)+1for alli∈ Z[m]s.t.f(i)∩ B≠ϕ}for the side information graph 𝒢 of a (δ_s,𝒢)-ICSIE. Then, we have the following definition for a δ_s-cycle. For a (δ_s,𝒢)-ICSIE, a subgraph 𝒢^' of 𝒢 is termed a δ_s-cycle if the set of message node indices of 𝒢^' is an element of Φ (i.e., B) and the set of user node indices of 𝒢^' consists of i∈ Z[m] such that f(i)∈ B and its edges consist of the corresponding edges in 𝒢. The graph 𝒢 is said to be δ_s-acyclic if there is no δ_s-cycle in 𝒢. A δ_s-cycle is an important subgraph for a (δ_s,𝒢)-ICSIE problem because the existence of the δ_s-cycle is a necessary and sufficient condition for the possibility to reduce its codelength. The following lemma shows the importance of the δ_s-cycle. 𝒢 is δ_s-acyclic if and only if N_ opt^q(δ_s,𝒢)=n for a (δ_s,𝒢)-ICSIE, where N_ opt^q(δ_s,𝒢) is the optimal codelength.The sufficiency part is similarly proved as in <cit.> by showing the linear index code with codelength n-1, that is, (X̂_1+X̂_2,X̂_2+X̂_3,...,X̂_n-1+X̂_n). The necessity part is based on the fact that ℐ(q,𝒢,δ_s) is a set of all vectors in 𝔽_q^n except for 0 if 𝒢 is δ_s-acyclic. Specifically, because Z[n] is not a δ_s-cycle, we can assume that there is at least one receiver R_1 with a wanted message X̂_1 and |𝒳_1∩ Z[n]|≤ 2δ_s^(1) without loss of generality. Then, every Ẑ∈𝔽_q^n with Ẑ_1≠ 0 is included in ℐ(q,𝒢,δ_s). Similarly, because Z[n]∖{1} is not a δ_s-cycle, we can assume that there is at least one receiver R_2 with a wanted message X̂_2 and |𝒳_2∩{Z[n]∖{1}}|≤ 2δ_s^(2). Then, every Ẑ∈𝔽_q^n with Ẑ_1=0 and Ẑ_2≠ 0 is included in ℐ(q,𝒢,δ_s). The similar result for R_i is that every Ẑ∈𝔽_q^n with Ẑ_1=Ẑ_2=⋯=Ẑ_i-1=0 and Ẑ_i≠ 0 is included in ℐ(q,𝒢,δ_s), which means that ℐ(q,𝒢,δ_s)=𝔽_q^n∖{0}. In this case, all of the message vectors in 𝔽_q^n should be encoded to different codewords. Thus, N_ opt^q(δ_s,𝒢)=n.§ EQUIVALENCES BETWEEN NETWORK CODES WITH LINK ERRORS AND INDEX CODES WITH SIDE INFORMATION ERRORSIn this section, we prove the equivalences between network codes with link errors and index codes with side information errors. First, their equivalence is proved for a given network coding instance, similar to an earlier approach in <cit.>. 
We also show some differences from that in <cit.> for the corresponding index coding instance. Second, their equivalence is proved for a given index coding instance. For a given index coding instance, Gupta and Rajan defined the corresponding network coding instance and showed an equivalence between a network computation problem and a functional index coding problem <cit.>. However, the equivalence in <cit.> for a given index coding instance has some weak points, which will be explained in this section. In order to mitigate these weak points, for a given index coding instance, we introduce a corresponding network coding instance which differs from that in <cit.> and show a different equivalence relationship between them. In the following definition, the equivalence between two problems is described. NCLE and ICSIE problems are said to be equivalent if and only if the NCLE can be converted to the corresponding ICSIE and vice versa. §.§ Equivalence for a Given Network Coding Instance For a given network coding instance, we can construct the corresponding index coding instance in a manner similar to that in the aforementioned research <cit.>. The differences between our corresponding model and that in <cit.> are the error resistance capabilities and the existence of the receiver t̂_ all. In what follows, the relationship between two coding instances of a (δ,𝔾)-NCLE and the corresponding (δ_s,𝒢)-ICSIE is given as follows:* A sender of a (δ_s,𝒢)-ICSIE has a message X̂=(X̂_S, X̂_E) and there are |E|+|T| receivers, each of which is a corresponding receiver R_e of a link or R_t of a terminal in a given network coding instance.* For e∈ E, R_e of the (δ_s,𝒢)-ICSIE can be described as 𝒳_e= In(e), f(e)={e}, and δ_s^(e)=δ_e.* For t∈ T, R_t of the (δ_s,𝒢)-ICSIE can be described as 𝒳_t= In(t), f(t)=ℱ(t), and δ_s^(t)=δ_t.* The codelength of the (δ_s,𝒢)-ICSIE is the number of links in a (δ,𝔾)-NCLE. This relationship is derived from a given network coding instance. Fig. <ref> shows an example of a network coding instance and the corresponding index coding instance. Before showing validity of this model, some restrictions for the corresponding index coding instance should be satisfied as shown in the following proposition. The corresponding index coding instance of a given network coding instance should satisfy the followings: * In the corresponding side information graph 𝒢, there is no cycle in a subgraph which consists of {R_e | e∈ E} and {X̂_e | e∈ E}.* Each element of {X̂_e | e∈ E} should be wanted by one receiver.* If R_i has X̂_s as side information for s∈ S, R_i cannot have X̂_e as side information for e∈ E.If a network structure is valid, the network structure is directed acyclic and the source nodes are not intermediate. Thus, to be directed acyclic, 1) should be satisfied. 2) is due to our setting of the relation and 3) should be satisfied because the source nodes are not intermediate nodes.From Proposition <ref>, we note that the corresponding models do not cover all index coding instances but cover some index coding instances satisfying conditions in the above proposition necessarily. However, it is important to note that all network structures can be covered by this model. In this model, one difference from that in <cit.> is the existence of the receiver t̂_ all which can be described as 𝒳_t̂_ all=S, f(t̂_ all)=E, and δ_s^(t̂_ all)=0. In fact, the existence of t̂_ all in <cit.> originates from a directed acyclic network structure. 
Thus, an identical result can be obtained even if we remove t̂_ all in the corresponding index coding instance as in the following proposition. For a given network coding instance, the modeling of the corresponding index coding instance in <cit.> obtains an identical result even if the receiver t̂_ all is removed, that is, t̂_ all is redundant. From 1) of Proposition <ref>, we can see that there is no cycle in a subgraph which consists of {R_e | e∈ E} and {X̂_e | e∈ E} and thus there is no δ_s-cycle. Since this subgraph is δ_s-acyclic, the optimal codelength for the subgraph is |E| by Lemma <ref>, meaning that every vector Ẑ∈𝔽_q^|S|+|E| such that Ẑ_S=0 and Ẑ_E≠0 belongs to ℐ(q,𝒢,δ_s). In <cit.>, t̂_ all wants to receive X̂_E and has X̂_S as side information with δ_s^(t̂_ all)=0. In fact, we do not need t̂_ all because there is no cycle in the subgraph mentioned above. From Theorem <ref>, ℐ_t̂_ all(q,𝒢,δ_s^(t̂_ all)) is the set of all vectors in 𝔽_q^|E|+|S| such that Ẑ_S=0 and Ẑ_E≠0. Since every vector in ℐ_t̂_ all(q,𝒢,δ_s^(t̂_ all)) is already included in ℐ(q,𝒢,δ_s), the receiver t̂_ all can be removed from the corresponding index coding instance. Thus, we can remove the receiver t̂_ all from the corresponding index coding instance. From the proof of Proposition <ref> and Lemma <ref>, the following observation is given.The optimal index codelength of the corresponding index coding instance is larger than or equal to |E|. At this point, we prove validity of this model and the equivalence between an NCLE and an ICSIE. In order to show that they are equivalent, the following lemma is needed. In the equivalent (δ_s,𝒢)-ICSIE for a given network coding instance, there is a unique X̂_E such that F̂(X̂_S,X̂_E)=σ for any codeword σ∈𝔽_q^|E| and X̂_S.From Proposition <ref>, there is no cycle in a subgraph which consists of the set of receivers {R_e | e∈ E} and the set of messages {X̂_e | e∈ E}. Thus, N_opt^q(δ_s,𝒢)≥ |E| and we assume that a (δ_s,𝒢)-ICSIE with codelength |E| exists. Since different symbols of X̂_E for given X̂_S result in different codewords and the codelength is |E| by Lemma <ref>, there exists unique X̂_E such that F̂(X̂_S,X̂_E)=σ for the above conditions.Next, the equivalence between an NCLE and an ICSIE for a given network coding instance is shown in the following theorem.For a given network coding instance, a (δ,𝔾)-NCLE exists if and only if the corresponding (δ_s,𝒢)-ICSIE exists.This can be proved by a method similar to that in <cit.> but the differences are the existence of t̂_ all and the fact that there are link errors and side information errors.Necessity: Assume that there exists a (δ,𝔾)-NCLE. First, the encoding function of the corresponding (δ_s,𝒢)-ICSIE is defined as F̂(X̂)=X̂_B=(X̂_B(e): e∈ E) such that X̂_B(e)=X̂_e+F̅_e(X̂_1,...,X̂_|S|). Next, we define the decoding functions and show that all receivers in the corresponding index coding instance can recover what they want.It is already given that R_e can be described as 𝒳_e= In(e), f(e)={e}, and δ_s^(e)=δ_e. Thus, for each e^'∈ In(e), the decoder can compute X̂_B(e^')-X̂_ẽ^̃'̃, which can be an erroneous value of F̅_e^'(X̂_1,...,X̂_|S|). Then, δ_s^(e) symbols of them can be erroneous. Since the link e in the network coding instance has an error resistance capability δ_e=δ_s^(e), evaluating these symbols with F_e results in the correct value of F̅_e(X̂_1,...,X̂_|S|). Now, R_e can obtain X̂_e by subtracting F̅_e(X̂_1,...,X̂_|S|) from X̂_B(e). 
It is also given that R_t can be described as 𝒳_t= In(t), f(t)=ℱ(t), and δ_s^(t)=δ_t. Similar to the R_e case, R_t can recover what it wants because it can obtain F̅_e(X̂_1,...,X̂_|S|) for all e∈ In(t), whose δ_s^(t) symbols can be erroneous. However, evaluating these symbols using the decoding function D_t of the network coding instance results in the correct values because δ_t=δ_s^(t).Sufficiency: Assume that there exists a (δ_s,𝒢)-ICSIE together with its codeword σ by Lemma <ref>. Then, the encoding functions of the links and the decoding functions of the terminals in the corresponding network coding instance are defined using the decoding functions of the (δ_s,𝒢)-ICSIE. For e∈ E, F_e is defined as a function whose output is X_e=D̂_e(σ,(X̃_e^' : e^'∈ In(e))). For t∈ T, D_t is defined as a function whose output is D̂_t(σ,(X̃_e^' : e^'∈ In(t))). Without loss of generality, we assume that X_S=X̂_S and then, there is a unique X̂_E such that F̂(X̂_S,X̂_E)=σ for any X̂_S from Lemma <ref>. For e ∈ E, D̂_e(σ,(X̃_e^' : e^'∈ In(e)))=X̂_e because δ_e=δ_s^(e) and X̂_e is unique. For t∈ T, D̂_t(σ,(X̃_e^' : e^'∈ In(t)))=X̂_ℱ(t)=X_ℱ(t) because δ_t=δ_s^(t). Thus, Theorem <ref> tells us that an NCLE problem is equivalent to the corresponding ICSIE problem when a network coding instance is given. The following example shows the equivalence between the two problems.Suppose that a given network coding instance and the corresponding side information graph with δ_s are given as in Fig. <ref>. A network code for Fig. <ref> can be described as follows: * X_e_1=X_e_2=X_e_5=X_e_8=X_e_10=X_s_1* X_e_3=X_e_4=X_e_7=X_e_12=X_s_2* X_e_6=X_e_9=X_e_11=X_s_1+X_s_2* D_t is given in Algorithm 1. Then, the corresponding index code for Fig. <ref> can be described as follows: * The transmitted codeword F̂(X̂)=X̂_B=(X̂_B(e) : e∈ E) consists of 12 components.* X̂_B(e_i)=X̂_e_i+X̂_s_1 for i∈{1, 2, 5, 8, 10}* X̂_B(e_i)=X̂_e_i+X̂_s_2 for i∈{3, 4, 7, 12}* X̂_B(e_i)=X̂_e_i+X̂_s_1+X̂_s_2 for i∈{6, 9, 11}* The decoding functions of receivers can be defined as in Theorem <ref>. Thus, by finding a network code for a given network coding instance, we can find the corresponding index code. Furthermore, if we find an index code with codelength |E| for Fig. <ref>, we can obtain the corresponding network code for Fig. <ref>. §.§ Equivalence for a Given Index Coding Instance Similarly, for a given index coding instance, we can construct the corresponding network coding instance. Since some index coding instances cannot be converted to corresponding network coding instances, as noted in Proposition <ref>, it is necessary to modify a given index coding instance to use the previous relationship between two coding instances. Specifically, some receivers and messages are added to the given index coding instance, where 𝒢 becomes 𝒢^' and δ_s becomes δ_s^'.For simplicity, we can assume that every receiver wants to receive only one message because a receiver who wants to receive more than one message can be split into receivers with identical side information. Now, we explain how to make 𝒢^', δ_s^', and the corresponding network coding instance including 𝔾 and δ.In order to make the corresponding network coding instance using the same relationship between two coding instances in the previous section, it is necessary to determine which part is X̂_E or X̂_S for a given side information graph 𝒢. X̂_e is said to be a unicast message if X̂_e is wanted by one receiver. 
Suppose that the maximal unicast acyclic subgraph of 𝒢 consists of unicast messages X̂_E and the receivers who want one of X̂_E, that is, {R_e | e∈ E}. In such a case, the validity of the corresponding network structure is not guaranteed but for the directed acyclic network structure, it is guaranteed when we convert 𝒢 to the corresponding network structure as in the previous section. Next, let X̂_S and {R_t | t∈ T} be the remaining messages and the remaining receivers, respectively. Subsequently, we modify 𝒢 to 𝒢^' and determine X̂_E^' in 𝒢^' in order to ensure the validity of the corresponding network structure. Specifically, we determine X̂_E^' by adding several messages to X̂_E and determine {R_e^' | e^'∈ E^'} by adding a number of receivers to {R_e | e∈ E}. Accordingly, some of the corresponding links of these added receivers are referred to as duplicated links, as explained later. Before choosing X̂_E^', we classify problematic cases based on the outgoing edges of the receivers in 𝒢, which should be modified to obtain 𝒢^'.There are six problematic cases based on message nodes and four problematic cases based on receiver nodes for the modification of the side information graph as in the following claim. The problematic cases based on the outgoing edges of receivers in a side information graph 𝒢 can be classified into the following ten cases, which should be modified to make the modified side information graph 𝒢^'.* X̂_s has one incoming edge from {R_t | t∈ T} for s∈ S.* X̂_s has more than one incoming edge from {R_t | t∈ T} for s∈ S.* X̂_s has one incoming edge from {R_e^' | e^'∈ E} and one incoming edge from {R_t | t∈ T} for s∈ S. * X̂_e has one incoming edge from {R_e^' | e^'∈ E} and one incoming edge from {R_t | t∈ T} for e∈ E. * X̂_e has more than one incoming edge from {R_e^' | e^'∈ E} for e∈ E. * X̂_e has more than one incoming edge from {R_t | t∈ T} for e∈ E.* R_e has one outgoing edge to {X̂_s | s∈ S} and one outgoing edge to {X̂_e^' | e^'∈ E} for e∈ E.* R_t has one outgoing edge to {X̂_s | s∈ S} for t∈ T.* R_t has more than one outgoing edge to {X̂_s | s∈ S} for t∈ T.* R_t has one outgoing edge to {X̂_s | s∈ S} and one outgoing edge to {X̂_e^' | e^'∈ E} for t∈ T. In contrast to the ten problematic cases in Claim <ref>, there are ten cases which do not need to be modified for 𝒢^' as: * X̂_e has one incoming edge from {R_e^' | e^'∈ E} for e∈ E. * X̂_e has one incoming edge from {R_t | t∈ T} for e∈ E.* X̂_s has one incoming edge from {R_e^' | e^'∈ E} for s∈ S.* X̂_s has more than one incoming edge from {R_e^' | e^'∈ E} for s∈ S.* R_e has one outgoing edge to {X̂_s | s∈ S} for e∈ E.* R_e has more than one outgoing edge to {X̂_s | s∈ S} for e∈ E.* R_e has one outgoing edge to {X̂_e^' | e^'∈ E} for e∈ E.* R_e has more than one outgoing edge to {X̂_e^' | e^'∈ E} for e∈ E.* R_t has one outgoing edge to {X̂_e^' | e^'∈ E} for t∈ T.* R_t has more than one outgoing edge to {X̂_e^' | e^'∈ E} for t∈ T. For example, the above case 1) can be described as e^' being an incoming edge of e, which does not violate the network structure. Thus, the case 1) does not need to be modified.At this point, we suggest how to modify the ten problematic cases in Claim <ref> so that the corresponding network coding instance is valid as in Fig. <ref>.The case 1) in Claim <ref> is described as R_t having X̂_s as side information for t∈ T and s∈ S, implying that the terminal and the source are identical. 
To address this, we add a new link-related receiver R_e having X̂_s as side information with δ_s^(e)=0 and wanting the corresponding message X̂_e. In addition, we delete the incoming edge of X̂_s from R_t and add a new edge from R_t to X̂_e, after which we have the corresponding network structure as shown in Fig. <ref>. The cases 2), 3), 8), 9), and 10) can be solved similarly to the case 1).The case 4) indicates that the terminal node is the intermediate node, that is, R_t and R_e^' have X̂_e as side information. This can be modified by adding a new link-related receiver R_e^'' having side information identical to that of R_e with the identical δ_s^(e^'')=δ_s^(e) and the corresponding message X̂_e^'', where e^'' is a duplicated link of e. We delete the edge from R_e^' to X̂_e and add a new edge from R_e^' to X̂_e^'' and thus we have the corresponding network structure as shown in Fig. <ref>. The case 5) can be a problem when two receivers with X̂_e as the side information have different side information as in Fig. <ref>. This situation can be modified by a method similar to that in the case 4).The case 6) is described as one in which R_t and R_t^' have X̂_e as side information, which means that the terminals t and t^' are identical. This situation can be modified by adding a new link-related receiver R_e^' having the side information identical to that of R_e with the same δ_s^(e^')=δ_s^(e) and the corresponding message X̂_e^', where e^' is a duplicated link of e. We delete the edge from R_t to X̂_e and add a new edge from R_t to X̂_e^', after which we have the corresponding network structure as shown in Fig. <ref>. The case 7) indicates that the source node is the intermediate node, that is, R_e has X̂_s and X̂_e^' as side information. This can be modified by adding a new link-related receiver R_e^'' having X̂_s as side information with δ_s^(e^'')=0 and the corresponding message X̂_e^''. We delete the edge from R_e to X̂_s and add a new edge from R_e to X̂_e^''. Accordingly, we have the corresponding network structure as shown in Fig. <ref>.By solving the above problematic cases and modifying 𝒢 with δ_s to 𝒢^' with δ_s^', the valid corresponding network coding instance can be derived from any index coding instance. Once the corresponding network coding instance is derived, we show the equivalence between an NCLE and an ICSIE for a given index coding instance as in the following theorem. For a given side information graph 𝒢 with δ_s, a (δ_s^',𝒢^')-ICSIE with codelength |E^'| exists if and only if the corresponding (δ,𝔾)-network code with link errors exists. By solving the problematic cases in Claim <ref>, we can determine {R_e^' | e∈ E^'} and X̂_E^'. Since we can have the valid network coding instance, a (δ_s^',𝒢^')-ICSIE with the codelength |E^'| exists if and only if the corresponding (δ,𝔾)-network code with link errors exists by Theorem <ref>. Our main concern is the equivalence between an index code for 𝒢 and a network code for 𝔾. However, Theorem <ref> shows the equivalence between an index code for 𝒢^' and a network code for 𝔾. Thus, we have the following corollary. A (δ_s,𝒢)-ICSIE with the codelength |E| exists if and only if the corresponding (δ,𝔾)-network code with link errors exists, where each encoding function of the duplicated links is a function of each encoding function of the original links in the network code. 
Using the problematic cases in Claim <ref>, we prove the corollary, where the same notations for the above cases are used.Necessity: Assume that a (δ_s,𝒢)-ICSIE with codelength |E| exists. For the case 1) in Claim <ref>, we need one more transmission X̂_e+X̂_s. Then, R_t can still obtain what R_t wants because R_t can obtain X̂_s̃ from X̂_e+X̂_s-X̂_ẽ, which is the same situation as before. Trivially, R_e can obtain X̂_e from X̂_e+X̂_s-X̂_s. For the case 4), we also need one more transmission X̂_e+X̂_e^''. Then, R_e^' can still obtain what R_e^' wants because R_e^' can obtain X̂_ẽ from X̂_e+X̂_e^''-X̂_ẽ^̃'̃'̃. R_e^'' can easily obtain X̂_e^'' from X̂_e+X̂_e^''-X̂_e because R_e^'' can recover X̂_e. For the case 5), we need one more transmission X̂_e+X̂_e^'''. Then, R_e^'' can still obtain what R_e^'' wants because R_e^'' can obtain X̂_ẽ from X̂_e+X̂_e^'''-X̂_ẽ^̃'̃'̃'̃. Clearly, R_e^''' can obtain X̂_e^''' from X̂_e+X̂_e^'''-X̂_e because R_e^''' can recover X̂_e. For the case 6), we need one more transmission X̂_e^'+X̂_e. Then, R_t can still obtain what R_t wants because R_t can obtain X̂_ẽ from X̂_e^'+X̂_e-X̂_ẽ^̃'̃. R_e^' can easily recover X̂_e^' from X̂_e^'+X̂_e-X̂_e because R_e^' can recover X̂_e. For the case 7), we also need one more transmission X̂_s+X̂_e^''. Then, R_e can obtain X̂_e because R_e can obtain X̂_s̃ from X̂_s+X̂_e^''-X̂_ẽ^̃'̃'̃, which is identical to the earlier situation. R_e^'' can easily obtain X̂_e^'' from X̂_s+X̂_e^''-X̂_s. Thus, a (δ_s^',𝒢^')-ICSIE with codelength |E^'| exists if a (δ_s,𝒢)-ICSIE with codelength |E| exists using additional transmissions as described above because the number of additional transmissions is |E^'|-|E| and thus the corresponding (δ,𝔾)-NCLE exists by Theorem <ref>. Since the encoding functions of the corresponding network code are defined by the decoding functions of a given index code and each decoding function of the added receivers is a function of each decoding function of the original receivers, each encoding function of duplicated links is a function of each encoding function of the original links in the network code. Sufficiency: Suppose that a (δ,𝔾)-NCLE exists. Then, we have a (δ_s^',𝒢^')-ICSIE with codelength |E^'|, that is, X̂_B=(X̂_B(e): e∈ E^'), where X̂_B(e)=X̂_e+F̅_e(X̂_1,...,X̂_|S|). In fact, selecting |E| components of the given index code is sufficient for making a (δ_s,𝒢)-ICSIE with codelength |E|. For the case 1), R_t can obtain what R_t wants even if we do not transmit X̂_e+F̅_e(X̂_1,...,X̂_|S|). R_t simply needs F̅_e(X̂_s) related to e. Since R_t has X̂_s̃ as side information in 𝒢, R_t can calculate F̅_e(X̂_s), which may be erroneous. The case 7) is derived similarly to how the case 1) was derived. For the case 4), showing that R_e^' can obtain X̂_e^' even though we do not transmit X̂_e^''+F̅_e^''(X̂_1,...,X̂_|S|) is sufficient. Since R_e^' has X̂_ẽ as side information in 𝒢, R_e^' can have F̅_e(X̂_1,...,X̂_|S|), which may be erroneous. Thus, R_e^' can obtain X̂_e^' if F̅_e^'' is a function of F̅_e. The cases 5) and 6) are derived similarly to the case 4).For a given index coding instance, we can make the corresponding network coding instance by introducing some links, referred to as duplicated links, which are in fact duplicated encoding functions. The duplicated links in the corresponding network coding instance can always be made depending on the original links but some of them can be made independent in the perspective of network coding. 
Thus, the sufficiency in Corollary <ref> holds for the intended dependent situations, where each encoding function of the duplicated links is a function of each encoding function of the original links. In <cit.>, the corresponding network coding instance can easily be converted from the original index coding instance. However, the given original network coding instance is different from the network coding instance re-converted from the corresponding index coding instance. That is, assume that for a given network coding structure 𝔾 with δ, we have the corresponding side information graph 𝒢 with δ_s. If we derive the corresponding network coding structure 𝔾^' from 𝒢 using the method in <cit.>, 𝔾^' is always different from 𝔾. Similarly, the same problem occurs for a given index coding instance. However, for a given network coding instance, we can make 𝔾=𝔾^' with δ=δ^' using the proposed modification of the side information graph and ensure convertibility between the two codes when the encoding functions of the duplicated links are functions of the encoding functions of the original links in the corresponding network code. A similar approach can be applied to a problem for a given index coding instance. Suppose that a given side information graph 𝒢 with δ_s is given in Fig. <ref>. Then, a modified side information graph 𝒢^' with δ^'_s and the corresponding network coding instance are shown in Fig. <ref> and Fig. <ref>, respectively. We assume the field size q=2.We first find the maximal unicast acyclic subgraph of 𝒢 and determine X̂_E, X̂_s, and the corresponding receivers as in Fig. <ref>. Subsequently, we can make a modified side information graph by solving the case 4) in Claim <ref> twice. An index code for 𝒢 with codelength |E|=3 is (X̂_s+X̂_e_1,X̂_e_1+X̂_e_2,X̂_e_2+X̂_e_3). Then, every receiver can recover what it wants. For example, R_t can calculate X̂_s+X̂_e_1, X̂_s+X̂_e_2, and X̂_s+X̂_e_3 from the received codeword. Since δ_s^(t)=1 and R_t has X̂_ẽ_̃1̃, X̂_ẽ_̃2̃, and X̂_ẽ_̃3̃ as side information, subtracting them from X̂_s+X̂_e_1, X̂_s+X̂_e_2, and X̂_s+X̂_e_3, respectively results in the true symbol by majority decoding. With Theorem <ref> and Corollary <ref>, we can find an index code for 𝒢^' as (X̂_s+X̂_e_1,X̂_e_1+X̂_e_2,X̂_e_2+X̂_e_3,X̂_e_3+X̂_e_4,X̂_e_2+X̂_e_5) and a network code for 𝔾 as follows:* X_e_1=D̂_e_1(0,(X_e^' : e^'∈ In(e_1)))=0+0+X_e_5=X_e_5* X_e_2=0+0+X_e_4=X_e_4* X_e_3=0+0+0+X_s=X_s* X_e_4=0+0+0+0+X_s=X_s* X_e_5=0+0+0+X_e_4=X_e_4* D_t=D̂_t(0,(X̃_e^' : e^'∈ In(t))) By Claim <ref>, e_4 is a duplicated link of e_3 and e_5 is a duplicated link of e_2. It is clear that X_e_4 is a function of X_e_3 and X_e_5 is a function of X_e_2. Similarly, we can find an index code for 𝒢 from a network code for 𝔾 if each encoding function of the duplicated links is a function of each encoding function of the original links in the network code.In general, a given side information graph can be converted to several distinct modified side information graphs but any modified side information graph can be re-converted to the original side information graph. Furthermore, there is a one-to-one correspondence between a modified index coding instance and the corresponding network coding instance. 
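The decoding step in the worked example above is easy to sanity-check numerically. The Python sketch below (not part of the original analysis) enumerates all message assignments over GF(2) and verifies that the terminal receiver R_t recovers X̂_s by majority decoding from the three estimates X̂_s+X̂_e_1, X̂_s+X̂_e_2 and X̂_s+X̂_e_3, for the error-free case and for every single erroneous side-information symbol (δ_s^(t)=1).

```python
import itertools

# Toy GF(2) check of the index code (Xs+Xe1, Xe1+Xe2, Xe2+Xe3) from the example above.
def encode(xs, xe1, xe2, xe3):
    return (xs ^ xe1, xe1 ^ xe2, xe2 ^ xe3)

def decode_Rt(codeword, side):
    c1, c2, c3 = codeword
    s1, s2, s3 = side
    # R_t forms Xs+Xe1 = c1, Xs+Xe2 = c1+c2, Xs+Xe3 = c1+c2+c3, then subtracts side information
    estimates = [c1 ^ s1, c1 ^ c2 ^ s2, c1 ^ c2 ^ c3 ^ s3]
    return max(set(estimates), key=estimates.count)      # majority decoding

ok = True
for xs, xe1, xe2, xe3 in itertools.product([0, 1], repeat=4):
    codeword = encode(xs, xe1, xe2, xe3)
    for err in (None, 0, 1, 2):                          # no error, or one flipped side-information symbol
        side = [xe1, xe2, xe3]
        if err is not None:
            side[err] ^= 1
        ok &= (decode_Rt(codeword, side) == xs)
print("R_t always recovers Xs:", ok)                     # expected: True
```

With this check in hand, we return to the general correspondence between instances.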
Thus, for a given index coding instance, there are several distinct corresponding network coding instances but each corresponding network coding instance can be re-converted to the original index coding instance.§ RELATIONSHIP OF SOME PROPERTIES In this section, several properties of an NCLE are introduced using the properties of an ICSIE. Since the equivalence between an NCLE and an ICSIE is shown when either a network coding instance or an index coding instance is given, we can utilize the properties of an ICSIE to derive those of an NCLE. First, we introduce a property of an ICSIE in the following lemma, which is similar to that in <cit.>. Suppose that a (0,𝒢̅)-IC problem is constructed by deleting any less than or equal to min(2δ_s^(i), |𝒳_i|) outgoing edges from each receiver R_i in a (δ_s,𝒢)-ICSIE problem. That is, each receiver of 𝒢̅ has larger than or equal to max(0, |𝒳_i|-2δ_s^(i)) side information symbols and then it becomes the conventional index coding problem. Then, N_ opt^q(0,𝒢̅)≤ N_ opt^q(δ_s,𝒢). This is proved similarly to the method in <cit.> and thus we omit it here. Lemma <ref> shows the relationship between the conventional index code and an ICSIE. Thus, we can infer that a property between the conventional network code with δ=0 and an NCLE is derived by Lemma <ref> as in the following theorem. Let E_v be a set of outgoing links of v∈ V and δ_v be min{δ_e | e∈ E_v}. If a (δ,𝔾)-NCLE exists for a given network structure 𝔾, there exists the conventional network code with δ=0 after deleting arbitrary 2δ_v links from In(v) for all v∈ V in 𝔾.Let 𝒢̅ be the side information graph of the corresponding index code of the conventional network code and 𝒢 be the side information graph of the corresponding index code of a (δ,𝔾)-NCLE. Instead of deleting arbitrary 2δ_v incoming links for all v∈ V∖S̅ in 𝔾, we can consider these links as incoming links of dummy nodes with no outgoing link. Then, from Theorem <ref> and Observation <ref>, N_opt^q(0,𝒢̅)=|E| if the conventional network code is valid. From Lemma <ref>, N_ opt^q(0,𝒢̅)≤ N_ opt^q(δ_s,𝒢) and N_ opt^q(δ_s,𝒢)=|E| if the (δ,𝔾)-NCLE is feasible. If the conventional network code is not valid, N_ opt^q(0,𝒢̅)>|E|, which results in the fact that the (δ,𝔾)-NCLE is not feasible.Thus, we can find the conventional network code whenever we have an NCLE. Next, another property of an NCLE is introduced. Before showing it, we define an independent component of an index code. A component X̂_e in X̂_E is said to be independent if fixing the value of X̂_e results in reduction of the code dimension by one.Independent components in the corresponding index coding instance are always in X̂_E because a network structure is directed acyclic and N_ opt^q(δ_s,𝒢)=|E|. Now, we introduce a property related to the redundant links of a network code. From the equivalence between a network code and an index code, we can infer that redundant links may be related to some properties of an index code as shown in the following theorem. Redundant links in a network code are equivalent to independent components in X̂_E of the corresponding index code.Necessity: If e is a redundant link in the given network code, removing e does not affect the feasibility of the given network code. If we remove e, the corresponding index code should have codelength |E|-1 by Theorem <ref>. 
It means that removing R_e and X̂_e from the corresponding index coding problem results in reduction of the code dimension by one.Sufficiency: If X̂_e is an independent component, fixing its value causes reduction of the code dimension by one and this index code is feasible by Theorem <ref>. Thus, we can say that e is a redundant link.§ CONCLUSIONS In this paper, a new equivalence between an NCLE and an ICSIE was proposed. In order to provide the equivalence between an NCLE and an ICSIE, we considered a new type of a network code, referred to as a (δ,𝔾)-NCLE, where the intermediate nodes can resolve incoming errors.First, we showed the equivalence between a (δ,𝔾)-NCLE and a (δ_s,𝒢)-ICSIE for a given network coding instance with 𝔾 and δ. We also showed that the corresponding side information graph does not need the receiver t̂_ all, which is contained in the previous models <cit.>, <cit.>. In addition to the equivalence between an NCLE and an ICSIE for a given network coding instance, their equivalence was also derived for a given index coding instance. For a given side information graph 𝒢 with δ_s, we derived the corresponding network coding instance with 𝔾 and δ by modifying 𝒢 with δ_s to 𝒢^' with δ^'_s. With the proposed method of modifying 𝒢, we showed an equivalence between a (δ,𝔾)-NCLE and a (δ_s,𝒢)-ICSIE for a given index coding instance if a pair of encoding functions of the original link and the duplicated link are functionally related.Finally, several properties of a (δ,𝔾)-NCLE were derived from the properties of a (δ_s,𝒢)-ICSIE using their equivalence relationship.10networkflow R. Ahlswede, N. Cai, R. Li, and R. Yeung, “Network information flow,” IEEE Trans. Inf. Theory, vol. 46, no. 4, pp. 1204–1216, 2000.linearnetworkcode N. Cai, R. Li, and R. Yeung, “Linear network coding,” IEEE Trans. Inf. Theory, vol. 49, no. 2, pp. 371–381, 2003.networkalgorithm1 R. Koetter and M. Medard, “An algebraic approach to network coding,” IEEE/ACM Trans. Networking, vol. 11, no. 5, pp. 782–795, 2003.networkalgorithm2 S. Jaggi, P. Sanders, P. A. Chou, M. Effros, S. Egner, K. Jain, and L. M. G. M. Tolhuizen, “Polynomial time algorithms for multicast network code construction,” IEEE Trans. Inf. Theory, vol. 51, no. 6, pp. 1973–1982, 2005.networkerror1 R. W. Yeung and N. Cai, “Network error correction, part 1 and part 2,” Commun. and Inf. Systems, vol. 6, pp. 19–36, 2006.networkerror2 Z. Zhang, “Linear network-error correction codes in packet networks,” IEEE Trans. Inf. Theory, vol. 54, no. 1, pp. 209–218, 2008.Informed Y. Birk and T. Kol, “Informed-source coding-on-demand (ISCOD) over broadcast channels,” in Proc. IEEE Conf. on Comput. Commun. (INFOCOM), San Francisco, CA, 1998, pp. 1257–1264.ICSI Z. Bar-Yossef, Y. Birk, T. S. Jayram, and T. Kol, “Index coding with side information,” IEEE Trans. Inf. Theory, vol. 57, no. 3, pp. 1479–1494, 2011.RI F. Arbabjolfaei, B. Bandemer, Y.-H. Kim, E. Sasoglu, and L. Wang, “On the capacity region for index coding,” Proc. IEEE Int. Symp. Inf. Theory (ISIT), Jul. 2013, pp. 962–966.EQU M. Effros, S. EI Rouayheb, and M. Langberg, “An equivalence between network coding and index coding,” IEEE Trans. Inf. Theory, vol. 61, no. 5, pp. 2478–2487, 2015.TIM S. A. Jafar, “Topological interference management through index coding,” IEEE Trans. Inf. Theory, vol. 60, no. 1, pp. 529–568, 2014.DSS A. Mazumdar, “On a duality between recoverable distributed storage and index coding,” Proc. IEEE Int. Symp. Inf. Theory (ISIT), Jun. 2014, pp. 1977–1981.SECIC S. H. Dau, V. 
Skachek, and Y. M. Chee, “Error correction for index coding with side information,” IEEE Trans. Inf. Theory, vol. 59, no. 3, pp. 1517–1531, 2013.BCSI K. W. Shum, M. Dai, and C. W. Sung, “Broadcasting with coded side information,” 2012 IEEE 23rd International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), vol. 89, no. 94, pp. 9–12, Sep. 2012.ICCSI N. Lee, A. G. Dimakis, and R. W. Heath, “Index coding with coded side information,” IEEE Commun. Lett., vol. 19, no. 3, pp. 319–322, 2015. BlindIC D. T. H. Kao, M. A. Maddah-Ali, and A. S. Avestimehr, “Blind index coding,” IEEE Trans. Inf. Theory, vol. 63, no. 4, pp. 2076–2097, 2017.FIC A. Gupta and B. S. Rajan, “Error-correcting functional index codes, generalized exclusive laws and graph coloring,” Proc. IEEE Int. Conf. Commun., Kuala Lumpur, Malaysia, Oct. 2016, pp. 1–7.ICSIEJ. W. Kim and J. S. No, “Index coding with erroneous side information,” [Online]. Available: http://arxiv.org/abs/1703.09361 FEQU A. Gupta and B. S. Rajan, “A relation between network computation and functional index coding problems,” IEEE Trans. Commun., vol. 65, no. 2, pp. 705–714, 2017.SEQU L. Ong, B. N. Vellambi, J. Kliewer, and P. L. Yeoh, “An equivalence between secure network and index coding,” 2016 IEEE Globecom Workshops (GC Wkshps), Washington, DC, 2016, pp. 1–6. | http://arxiv.org/abs/1705.09429v1 | {
"authors": [
"Jae-Won Kim",
"Jong-Seon No"
],
"categories": [
"cs.IT",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20170526042944",
"title": "Equivalences Between Network Codes With Link Errors and Index Codes With Side Information Errors"
} |
Center for Astrophysics, Space Physics, and Engineering Research (CASPER), Baylor University, Waco, Texas 76798-7310, USACenter for Astrophysics, Space Physics, and Engineering Research (CASPER), Baylor University, Waco, Texas 76798-7310, USACenter for Astrophysics, Space Physics, and Engineering Research (CASPER), Baylor University, Waco, Texas 76798-7310, USACenter for Astrophysics, Space Physics, and Engineering Research (CASPER), Baylor University, Waco, Texas 76798-7310, USACenter for Astrophysics, Space Physics, and Engineering Research (CASPER), Baylor University, Waco, Texas 76798-7310, USACenter for Astrophysics, Space Physics, and Engineering Research (CASPER), Baylor University, Waco, Texas 76798-7310, USAThe spontaneous rotation of small dust clusters confined inside a cubical glass box in the sheath of a complex plasma was observed in experiment. Due to strong coupling between the dust particles, these clusters behave like a rigid-body where cluster rotation is contingent upon their configuration and symmetry. By evaluating the effects of distinct contributing forces, it is postulated that the rotation observed is driven by the net torque exerted on the cluster by the ion wake force. The configuration and symmetry of a cluster determines whether the net torque induced by the ion wake force is nonzero, in turn leading to cluster rotation. A COPTIC (Cartesian mesh, oblique boundary, particles and thermals in cell) simulation is employed to obtain the ion wake potential providing a theoretical model of cluster rotation which includes both the ion wake force and neutral drag and predicts rotation rates and direction in agreement with experimental results. These results are then used to diagnose the ion flow within the box.Dust cluster spin in complex (dusty) plasmas Truell W. Hyde December 30, 2023 ============================================§ INTRODUCTIONComplex plasmas consist of small solid microparticles immersed in a plasma environment, and are the subject of widespread interest across a rich variety of research fields <cit.>. Once injected into the plasma, microparticles become negatively charged due to the greater thermal velocity of the electrons compared to the ions. The particles interact through a shielded Coulomb potential, and many different dust structures in a plasma have been realized in ground-based laboratories and under microgravity conditions on board the International Space Station (ISS). Examples of these structures include vertical strings <cit.>, Yukawa or Coulomb balls <cit.>, 2D <cit.> and quasi-2D systems <cit.> and 3D <cit.> Coulomb crystals. Among the various strongly-coupled systems formed by dust particles, multiple vertical strings were recently utilized as a system for diagnosing structural phase transitions by Hyde et al. <cit.>. Dusty plasma systems display interesting collective effects such as vortices <cit.>, dust acoustic waves (DAW) <cit.>, and dust rotation <cit.>. Rotational dust motion is generally classified into one of two categories, rigid-body rotation Ω(ρ) ≃ const, with ρ the rotation radius <cit.>, and sheared differential rotation <cit.>. In the majority of cases presented in the literature to date, cluster rotation has been shown to be driven by externally controlled parameters triggered by rotating electric fields <cit.>, axial magnetic fields <cit.>, or rotating electrodes <cit.>. Cheung et al. 
applied an axial magnetic field to induce dust cluster rigid-body rotation, proposing that the radial confinement electric field is modified by the magnetic field, which in turn changes the angular velocity of the dust cluster <cit.>. Klindworth et al. found that structural transitions, together with intershell rotation of the cluster, can be excited by exerting a torque on the cluster using two opposing laser beams. They also found that the decoupling of the shells within these finite clusters can occur creating a transition from cluster to intershell rotation by altering the Debye shielding. In this case, the intershell rotation barrier of a sixfold cluster is about twice as large as the Coulomb case <cit.>. Recently, two innovative techniques using rotating electrodes and rotating electric fields were employed to investigate dust cluster rotation. The first technique is based on the assumption that the effects of the Coriolis force 2m(v⃗×Ω⃗) and the Lorentz force Q(v⃗×B⃗) are equivalent, allowing the study of magnetic field effects on complex plasmas without the necessity of installing a high power magnet setup <cit.>. This technique allows experiments to be implemented through adoption of a rotating electrode to set the background neutral gas into rotation, with the subsequent gas drag driving the dust cluster rotation <cit.>. Another interesting technique for driving clusters into rotation is through the use of a rotating electric field (see Refs. <cit.>). Rotation is sustained by combining the torque created by the ion-drag and the field generated by the rotating electric field <cit.>. In the present paper, structures consisting of multiple vertical strings are used as probes to study the spontaneous rotation of clusters trapped in a glass box. Rotations of clusters having asymmetric configurations are observed to take place naturally. We propose that such spontaneous cluster rotation is caused by the torque due to the ion wake force exerted on the asymmetric cluster. This allows the ion flow to be investigated using the rotation of the cluster. § EXPERIMENTAL SETUPThe experiment described here was conducted in a modified gaseous electronics conference (GEC) radiofrequency (rf) discharge cell <cit.>. The lower electrode has a diameter of 8 cm and is capacitively coupled to a rf signal generator operated at a frequency of 13.56 MHz. The upper electrode consists of a ring having a diameter of 8 cm, which is grounded, as are the surrounding cell walls. The vertical separation between the upper and lower electrodes is 1.9 cm. A dust dispenser above the grounded ring serves to introduce dust particles into the plasma, with oscilloscopes used to monitor the rf voltage and self-bias generated at the lower rf electrode. All experiments were conducted in argon gas at pressures between 100 and 200 mTorr. Melamine formaldehyde (MF) microparticles having a mass density of 1.514 g/cm^3 and a diameter of 8.89 μm (as supplied by the manufacturer) were used. Particles were illuminated employing either a vertical or horizontal sheet of laser light. A Sony XC-HR50 charge-coupled device (CCD) camera operated at a frame rate of 60 fps and a Photron Fastcam 1024 PCI high-speed camera operated at a frame rate of 250 or 500 fps, were used to record the trajectories of the dust particles. In all experiments, the dust particles were confined in an open-ended glass box with a height of 12.7 mm and a width of 10.5 mm placed on the powered lower electrode <cit.>, as shown in Fig. 
<ref>.§ RESULTSMultiple string structures were observed to form inside the glass box for neutral gas pressures between 100 and 200mTorr and rf powers between 1.37 and 5.92 W. Dust cluster symmetry was observed to determine spontaneous rotation, with this rotation directly related to dust particle configuration. Symmetric cluster configurations were observed to exhibit little or no rotation; however, when this symmetry was broken, spontaneous rotation of the cluster was observed. Cluster symmetry was determined primarily by the number of particles and system confinement. In this case, symmetric structures were formed using a glass box of cubical geometry, which provides an isotropically harmonic trap potential in the central region of the box <cit.>. Fig. <ref> (a)-(e) shows a series of representative symmetric structures formed in this manner with (a)-(d) showing symmetric multiple-string structures consisting of dust particles arranged as two to five, two-particle strings. A three, three-particle chain structure comprised of nine particles is presented in Fig. <ref>(e). No appreciable rotation for any of the clusters shown in Fig. <ref> was observed. Once cluster symmetry was broken, spontaneous rotation was observed. An asymmetric N = 5 cluster is shown in Fig. <ref>. (A movie showing the complete rotation of this cluster is attached as Supplemental Material.) The direction of rotation of such asymmetric clusters can be either clockwise or counterclockwise, depending on cluster chirality. (See Fig. <ref> (a) and (b), for two five-particle clusters, along with their reconstructed 3D models as shown in Figs. <ref> (c) and (d).) The clusters shown in Fig. <ref> (a) and (b) rotated counterclockwise and clockwise, respectively, once formed. (Two movies are attached in the Supplemental Material to demonstrate this chirality-related rotation.)Fig. <ref> (a) and (b) illustrate representative particle trajectories in the horizontal plane (i.e., imaged by the top view camera) for the asymmetric cluster shown in Fig. <ref> and symmetric structure shown in Fig. <ref> (c), respectively. Interestingly, the center ofrotation for the asymmetric cluster is not located at the projection of the cluster's COM in the horizontal plane, which is given by r⃗_com =∑r⃗_i/N, where r⃗_i is the position of each dust particle with respect to the center of the box and height above the lower electrode and N is the total number of particles comprising the cluster. Fig. <ref> (c) and (d) show the rotational orientation of the clusters over time. As shown, the asymmetric cluster exhibits a uniform angular rotation speed, which increases as the power is decreased (Fig. 5(c)) while Fig. <ref> (d) shows only a small change in orientation of the symmetric cluster, with a maximum rotation speed of 0.055 s^-1. The angular speed ω andthe height of the COM h_COM of the clusters are summarized in Table <ref> for various experimental conditions. For fixed rf power, the angular speed of the cluster decreases with increasing pressure as shown. § ROTATIONAL MECHANISMS AND DISCUSSION We propose that the spontaneous rotation observed for the small clusters described in this experiment is induced by the torque due to the ion wake field force when applied to the cluster once cluster symmetry is broken. 
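Before quantifying this mechanism, note that the angular speeds and rotation centres reported in the results above follow directly from the top-view particle tracks. The Python sketch below illustrates one such extraction on a synthetic track (the frame rate, centre position and noise level are placeholders, not the experimental values): the rotation centre is obtained from a least-squares circle fit and the angular speed from a linear fit to the unwrapped azimuth.

```python
import numpy as np

rng = np.random.default_rng(0)
fps, omega_true = 60.0, 2.0                      # frame rate [1/s] and true angular speed [rad/s]
t = np.arange(0.0, 20.0, 1.0 / fps)
xc, yc, R = 0.8, -0.5, 2.0                       # rotation centre and radius [mm] (placeholders)
x = xc + R * np.cos(omega_true * t) + 0.02 * rng.normal(size=t.size)
y = yc + R * np.sin(omega_true * t) + 0.02 * rng.normal(size=t.size)

# Least-squares circle fit (Kasa form): x^2 + y^2 = 2*xc*x + 2*yc*y + c
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
xc_fit, yc_fit, _ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]

# Angular speed from a linear fit to the unwrapped azimuth about the fitted centre
phi = np.unwrap(np.arctan2(y - yc_fit, x - xc_fit))
omega_fit = np.polyfit(t, phi, 1)[0]
print(f"rotation centre = ({xc_fit:.2f}, {yc_fit:.2f}) mm, omega = {omega_fit:.2f} rad/s")
```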
At equilibrium, the net torque τ⃗_net that causes the cluster's rotation is balanced by the torque τ⃗_d due to the neutral drag force created by the cluster's uniform rotation,τ⃗_net = τ⃗_d.The neutral drag torque τ⃗_d is given asτ⃗_d = ∑_i=1^N'r⃗_⃗j⃗×F⃗_dj,where F⃗_dj= -m_dβv⃗_j is the neutral drag force applied on the jth particle, r⃗_j is the position of the particle measured from the axis of rotation, v⃗_j is the tangential velocity of the jth dust particle and β is the Epstein drag coefficient defined asβ = δ8/πP/aρ v_th,n,where δ is a coefficient depicting the reflection of the neutral gas atoms from the surface of the dust, (δ = 1.26 ± 0.13 for MF particles in argon gas <cit.>), P is the gas pressure, a the particle radius, ρ is the particle's mass density and v_th,n = √(8k_BT_n/π m_n) is the mean thermal velocity of the neutral gas. The temperature of the neutral gas is taken to be T_n = 300 K and the mass of the argon gas m_n = 6.64 × 10^-26 kg. The neutral drag torque τ⃗_d is calculated and summarized in Table <ref> for four different clusters under different experimental conditions. The torque driving the rotation can be written asτ⃗_net = ∑_j=1^Nr⃗_⃗j⃗×F⃗_j,where F⃗_j is the total force exerted on the jth particle excluding the neutral drag force, and is given byF⃗_j = F⃗_elecj + F⃗_ionj + F⃗_interjwith F⃗_elecj being the electric field force,F⃗_ionj the ion wake field force, and F⃗_interj the interparticle force from all other particles exerted on the jth particle. Inasmuch as F⃗_elecj = ∇⃗ U_j is a conservative force, zero work is done by F⃗_elecj moving a dust particle through a complete rotational trajectory, i.e. ∮F⃗_elecj· dr⃗_j = 0. As such, F⃗_elecj will not produce steady-state rotation for either symmetric or asymmetric structures since it cannot feed energy to the system <cit.>. F⃗_interj is an internal force between dust particles and can not contribute to rotation of the structures. This leaves the ion wake field force F⃗_ionj as one possible contributor to the observed rotation. Thus, Eq. (4) can now be rewritten asτ⃗_net = ∑_j=1^Nr⃗_⃗j⃗×F⃗_ionj.In order to determine F⃗_ionj, a point charge model was employed to model the ion wake field <cit.>. Assuming the ion wakefield acts as a positive point charge located beneath each dust particle, the ion wake field force experienced by the jth dust particle F⃗_ionj is given byF⃗_ionj = ∑_k ≠ j^NQ_dq_w(R⃗_k - r⃗_j)/4πε_0|R⃗_k - r⃗_j|^3,where Q_d, q_w are the dust charge and wakefield point charge, R⃗_k = r⃗_k - z_wẑ is the location of the point charge located a distance z_w beneath the kth particle, and ε_0 is the vacuum permittivity. Since the cluster's rotation axis is in the vertical direction, only the horizontal component of F⃗_ionj contributes to the driving torque for the rotation. The location (z_w) and magnitude (q_w) of the wakefield point charge depends on the experimental conditions, since the power and pressure settings determine the particle charge and ion drift speed. Changes to the rf power also alter the electron and ion density, as well as change the electron temperature, which determines the bias on the lower electrode (establishing the ion drift velocity) and the dust surface charge. The electron Debye length under representative experimental conditions was estimated based on the results presented in Ref. <cit.> (see Table <ref>). The charge on a dust grain within the sheath of a rf discharge is generally on the order of 1000e per μm diameter. 
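To give a feeling for the magnitudes involved, the sketch below evaluates the Epstein drag coefficient and the ion-wake point-charge torque defined above for a hypothetical five-particle cluster. The geometry, the grain charge Q_d, the wake charge q_w and its offset z_w are illustrative assumptions rather than retrieved values, and positions are measured from the rotation axis. For rigid rotation about the vertical axis the drag torque is m_d β Ω Σρ_j², so the equilibrium angular speed follows directly from the torque balance.

```python
import numpy as np

kB, eps0, e = 1.380649e-23, 8.8541878e-12, 1.602176634e-19
a, rho_d = 4.445e-6, 1514.0                 # MF grain radius [m] and mass density [kg/m^3]
m_d = rho_d * 4.0 / 3.0 * np.pi * a**3      # grain mass
T_n, m_n = 300.0, 6.64e-26                  # neutral temperature [K], argon atom mass [kg]
P, delta = 150 * 0.133322, 1.26             # 150 mTorr in Pa; Epstein reflection coefficient

v_thn = np.sqrt(8 * kB * T_n / (np.pi * m_n))
beta = delta * (8 / np.pi) * P / (a * rho_d * v_thn)     # Epstein drag coefficient [1/s]

Q_d = 13000 * e            # upstream grain charge (assumed)
q_w = 0.2 * Q_d            # wake point-charge magnitude (assumed)
z_w = 0.3e-3               # wake charge offset below each grain [m] (assumed)

# Hypothetical asymmetric 5-particle cluster; (x, y, z) measured from the rotation axis [m]
r = 1e-3 * np.array([[0.00,  0.00, 0.4], [0.50,  0.10, 0.4], [0.50,  0.10, 0.0],
                     [-0.20, -0.45, 0.4], [-0.20, -0.45, 0.0]])

def wake_torque_z(r):
    """Vertical component of the net torque from the point-charge wake model above."""
    tau = np.zeros(3)
    for j in range(len(r)):
        F = np.zeros(3)
        for k in range(len(r)):
            if k != j:
                d = (r[k] - np.array([0.0, 0.0, z_w])) - r[j]   # grain j -> wake charge of grain k
                F += Q_d * q_w * d / (4 * np.pi * eps0 * np.linalg.norm(d) ** 3)
        tau += np.cross(r[j], F)
    return tau[2]

# Rigid rotation about the vertical axis: |tau_drag,z| = m_d * beta * Omega * sum(rho_j^2)
rho2 = np.sum(r[:, 0]**2 + r[:, 1]**2)
Omega = abs(wake_torque_z(r)) / (m_d * beta * rho2)
print(f"beta = {beta:.1f} s^-1, equilibrium Omega ~ {Omega:.2g} rad/s")
```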
Using the result from previous experiments under similar experimental conditions the dust charge was assumed to be ∼12700e <cit.>. However, we analyzed the motion assuming Q_d = 12000e, 13000e, and 14000e to determine the extent to which the dust charge influences the rotation rate. Additionally, theoretical and experimental results have shown that downstream dust grains are decharged relative to the upstream grains <cit.>. Accordingly, a charge of 0.7, 0.8, and 0.9Q_d was assumed for the lower grains in a cluster. (See Table <ref>.) Estimates for the point charge and its location downstream from a particle were obtained employing the COPTIC (Cartesian mesh, oblique boundary, particles and thermals in cell) code developed by Hutchinson <cit.>. In this simulation, grains are represented as point charges immersed in a collisionless plasma using uniform external drifting-Maxwellian ion distributions with T_i/T_e = 0.01. Calculations are performed on a 44 × 44 × 96 cell grid with nonuniform mesh spacing over a cubical domain of 10 × 10 × 25 Debye lengths, where the ions are flowing along the ẑ-direction with a drift velocity v_d expressed as a Mach number M = v_d/c_s where c_s = √(T_e/m_i) is the cold-ion sound speed. The code is run with the point charge representing a dust particle located at position (0,0,0). The analytical part of this point charge extends to radius r_p = 0.1 λ_De. At this distance, the floating potential ϕ_p = -0.25T_e/e with T_e = 2.585 eV. Distinct drift velocities ranging from 0.1 to 3.3 were used in the COPTIC program to determine the maximum value of the wake potential and its location. Fig. <ref> (a) shows the wake potential profile along the axial direction normalized to the dust grain potential as a function of the ion drift velocity M. As can be seen, the maximum wake potential ϕ_max achieves its peak value for M = 0.8 (Fig. <ref> (b)), with its position z_w shifting further away from the dust grain for increasing drift velocity (see Fig. <ref> (c)). The magnitude of the wakefield point charge q_w can be calculated as q_w ≈ Q_dϕ_max/|ϕ_p|, where ϕ_p is the value of the dust surface potential. The theoretical rotation speed Ω can now be calculated based on q_w and z_w where using the COPTIC model to determine the ion flow velocity which best matches the experimental results. Assuming λ_De for the experimental conditions shown in Table <ref>, the magnitude and location of the wake point charge over the range of ion drift velocities were fit employing a higher order polynomial, and then used to calculate the torque on each of the clusters listed in Table <ref>. This torque was then equated to the neutral drag torque (Eq. 2) to determine the values of q_w and z_w needed to balance the torques, allowing an estimate of the ion drift speed to be obtained. As shown in Table <ref>, the ion drift speed found using all possible values of Q_d varies by less than 1.8%. The results of these calculations assuming an upstream dust charge Q_d = 13000e and all estimates of decharging for the downstream particle are shown in Fig. <ref> and Table <ref>.As shown in Fig. <ref>, there are always two values of M which produce a rotational speed matching the experimentally measured value, one for M < 0.7 and one for M > 0.7. As observed in this experiment, the levitation height of the cluster decreases as the power is increased, as does the rotation rate. 
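The last step of this procedure amounts to a one-dimensional root search. The sketch below is entirely schematic: the functions standing in for the COPTIC-derived quantities (a wake potential peaking near M ≈ 0.8 and a wake-charge offset growing with M, as described above), the scale factor and the measured rotation speed are placeholders, and in the actual analysis Ω(M) comes from the polynomial fits to the COPTIC output combined with the torque balance of the previous sketch. It only illustrates that, for a measured rotation speed, two candidate drift speeds can be found, one on each side of the wake-potential maximum, so that the physical branch has to be selected from the observed trends.

```python
import numpy as np
from scipy.optimize import brentq

# Schematic stand-ins (placeholders) for the COPTIC-derived fits
phi_ratio = lambda M: 0.35 * np.exp(-((M - 0.8) / 0.6) ** 2)     # ~ phi_max / |phi_p|
z_w_norm = lambda M: 0.5 + 1.2 * M                               # ~ z_w in Debye lengths

def predicted_omega(M, scale=60.0):
    """Toy Omega(M): wake torque ~ q_w(M), with a geometric factor that decays as the wake
    charge recedes from the grains. Replace with the full torque balance for real estimates."""
    return scale * phi_ratio(M) / (1.0 + z_w_norm(M)) ** 2

omega_measured = 2.5                # rad/s, of the order of the measured cluster spin rates
M_low = brentq(lambda M: predicted_omega(M) - omega_measured, 0.1, 0.8)
M_high = brentq(lambda M: predicted_omega(M) - omega_measured, 0.8, 3.0)
print(f"candidate ion drift speeds: M = {M_low:.2f} or M = {M_high:.2f}")
```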
As the ion drift velocity M increases closer to the lower electrode <cit.> the trend for decreasing Ω with increasing power (and thus increasing M) points to ion drift velocities > 0.7, as shown in Fig. <ref> (a). At fixed power, reducing the pressure causes a cluster's rotation speed to increase while the height of its COM increases. Thus, the Mach number should decrease with decreasing pressure, as is seen in Fig. <ref> (b). Finally, as shown in Fig. <ref> (c), the rotation rates for symmetric clusters are very small over a wide range of ion drift velocities. Calculated ion drift speeds are consistent with the expected increase in these values, given the power range explored in the experiment. According to the trends shown in Fig. <ref> (a) and (b), as the power and pressure are increased further, the rotation speeds of the clusters should be reduced to almost negligible amounts. However, this was not observed experimentally since when the power or pressure exceeded certain critical values, the structure of the cluster changed. Thus asymmetric structures were always observed to have rotation rates on the order of 1-10 rad/s.§ CONCLUSIONSClusters of a small number of dust particles were produced within a glass box placed on the lower electrode of a GEC rf cell. Self-excited rotation was observed for asymmetric structures with a uniform rotation speed, whereas no appreciable rotation was produced for symmetric structures. The asymmetric clusters were found to rotate about a vertical axis not passing through the center of mass.It was proposed that the spontaneous rotation for the small asymmetric dust clusters observed is probably induced by the net torque applied on the cluster due to the ion wake force. It was shown that symmetric clusters experience a very small net torque, and do not rotate. The rotation direction of the asymmetric cluster was determined by the conformational chirality of the specific structure, causing the cluster to spin either clockwise or counterclockwise.The torque induced by the ion wake force was calculated employing the ion wakefield point charge model, where the magnitude and location of the wakefield point charge was determined using the COPTIC code. Balancing the opposing torques induced by gas drag and the ion wake field allows the ion flow to be estimated within the glass box, which was found to be ∼1.0 M for rf power 1.7 < P < 2.0 W. This result is consistent with the values generally assumed for these experimental conditions <cit.>. These results are in rough agreement wiht Nosenko et al. <cit.> who also found rotating particle pairs which they ascribed to the interaction with the ion wake field. Using the theory of Lampe et al. <cit.>, they estimated the Mach number of the ion flow to be 2.26 in the plasma sheath of an rf discharge at 157 mTorr and 5 W. Higher Mach numbers were also suggested by the results of this experiment in analyzing the small rotations observed for symmetric clusters formed at higher rf power.§ ACKNOWLEDGMENTSupport from NSF/DOE Grant No. 1414523 and NSF/NASA Grant No. 1740203 is gratefully acknowledged. 99fortovbook V. E. Fortov and G. E. Morfill, Complex and Dusty Plasmas From Laboratory to Space (CRC Press, 2010).chu94 J. H. Chu and L. I, Phys. Rev. Lett. 72, 4009 (1994).thomas96 H. M. Thomas and G. E. Morfill, Nature 379, 806 (1996).fortov04 V. E. Fortov, A. G. Khrapak, S. A. Khrapak, V. I. Molotkov, and O. F. Petrov, Phys. Usp. 47, 447 (2004).shuklaRev P. K. Shukla and B. Eliasson, Rev. Mod. Phys. 
81, 25 (2009).baumgartner09 H. Baumgartner, D. Block, and M. Bonitz, Contrib. Plasma Phys. 49, 281 (2009).bonitz10 M. Bonitz, C. Henning, and D Block, Rep. Prog. Phys. 73, 066501 (2010).hartmann10 P. Hartmann, A. Douglass, J. C. Reyes, L. S. Matthews, T. W. Hyde, A. Kovács, and Zoltán Donkó, Phys. Rev. Lett. 105, 115004 (2010).hartmann14 P. Hartmann, A. Z. Kovács, A. M. Douglass, J. C. Reyes, L. S. Matthews, and T. W. Hyde, Phys. Rev. Lett. 113, 025002 (2014).melzer06 A. Melzer, Phys. Rev. E 73, 056404 (2006).kong11 J. Kong, T. W. Hyde, L. Matthews, K. Qiao, Z. Zhang, and A. Douglass, Phys. Rev. E 84, 016411 (2011).arp04 O. Arp, D. Block, A. Piel, and A. Melzer, Phys. Rev. Lett. 93, 165004 (2004).bonitz06 M. Bonitz, D. Block, O. Arp, V. Golubnychiy, H. Baumgartner, P. Ludwig, A. Piel, and A. Filinov, Phys. Rev. Lett. 96, 075001 (2006).knapek07 C. A. Knapek, D. Samsonov, S. Zhdanov, U. Konopka, and G. E. Morfill, Phys. Rev. Lett. 98, 015004 (2007).nosenko09 V. Nosenko, S. K. Zhdanov, A. V. Ivlev, C. A. Knapek, and G. E. Morfill, Phys. Rev. Lett. 103, 015001 (2009).woon04 W. Y. Woon and L. I, Phys. Rev. Lett. 92, 065003 (2004).chan07 C. L. Chan and L. I, Phys. Rev. Lett. 98, 105002 (2007).klumov10 B. Klumov, G. Joyce, C. Räth, P. Huber, H. Thomas, G. E. Morfill, V. Molotkov, and V. Fortov, Europhys. Lett. 92, 15003 (2010).khrapak11 S. A. Khrapak, B. A. Klumov, P. Huber, V. I. Molotkov, A. M. Lipaev, V. N. Naumkin, H. M. Thomas, A. V. Ivlev, G. E. Morfill, O. F. Petrov, V. E. Fortov, Yu. Malentschenko, and S. Volkov, Phys. Rev. Lett. 106, 205001 (2011).kong13 T. W. Hyde, J. Kong, and L. S. Matthews, Phys. Rev. E 87, 053106 (2013).vaulina03 O. S. Vaulina, A. A. Samarian, O. F. Petrov, B. W. James, and V. E. Fortov, New J. Phys. 5, 82 (2003).schwabe14 M. Schwabe, S. Zhdanov, C. Räth, D. B. Graves, H. M. Thomas, and G. E. Morfill, Phys. Rev. Lett. 112, 115002 (2014).schwabe07 M. Schwabe, M. Rubin-Zuzic, S. Zhdanov, H. M. Thomas, and G. E. Morfill, Phys. Rev. Lett. 99, 095002 (2007).menzel10 K. O. Menzel, O. Arp, and A. Piel, Phys. Rev. Lett. 104, 235002 (2010).yousefi14 R. Yousefi, A. B. Davis, J. Carmona-Reyes, L. S. Matthews, and T. W. Hyde, Phys. Rev. E 90, 033101 (2014).nosenko V. Nosenko, A. V. Ivlev, S. K. Zhdanov, M. Fink, and G. E. Morfill, Phys. Plasmas, 16, 083708 (2009).worner11 L. Wörner, V. Nosenko, A. V. Ivlev, S. K. Zhdanov, H. M. Thomas, G. E. Morfill, M. Kroll, J. Schablinski, and D. Block, Phys. Plasmas 18, 063706 (2011).worner12 L. Wörner, C. Räth, V. Nosenko, S. K. Zhdanov, H. M. Thomas, G. E. Morfill, J. Schablinski, and D. Block, Europhys. Lett. 100, 35001 (2012).laut I. Laut, C. Räth, L. Wörner, V. Nosenko, S. K. Zhdanov, J. Schablinski, D. Block, H. M. Thomas, and G. E. Morfill, Phys. Rev. E 89, 023104 (2014).konopka00 U. Konopka, D. Samsonov, A. V. Ivlev, J. Goree, V. Steinberg, and G. E. Morfill, Phys. Rev. E 61, 1890 (2000).sato01 N. Sato, G. Uchida, T. Kaneko, S. Shimizu, and S. Iizuka, Phys. Plasmas 8, 1786 (2001).cheung03 F. Cheung, A. Samarian, and B. James, New J. Phys. 5, 75 (2003).cheung04 F. Cheung, A. Samarian, and B. James, Phys. Scr. T107, 229 (2004).hou05 L. Hou, Y. Wang, and Z. L. Mišković, Phys. Plasmas 12, 042104 (2005).huang11 F. Huang, Y. H. Liu, M. F. Ye, and L. Wang, Phys. Scr. 83, 025502 (2011).carstensen09 J. Carstensen, F. Greiner, L. Hou, H. Maurer, and A. Piel, Phys. Plasmas 16, 013702 (2009).kahlert12 H. Kählert, J. Carstensen, M. Bonitz, H. Löwen, F. Greiner, and A. Piel, Phys. Rev. Lett. 109, 155003 (2012).hartmann13 P. Hartmann, Z. Donkó, T. 
Ott, H. Kählert, and M. Bonitz, Phys. Rev. Lett. 111, 155002 (2013).schablinski14 J. Schablinski, D. Block, J. Carstensen, F. Greiner, and A. Piel, Phys. Plasmas 21, 073701 (2014).klindworth00 M. Klindworth, A. Melzer, A. Piel, and V. A. Schweigert, Phys. Rev. B 61, 8404 (2000).kong14 J. Kong, K. Qiao, L. S. Matthews, and T. W. Hyde, Phys. Rev. E 90, 013107 (2014).qiao14 K. Qiao, J. Kong, J. Carmona-Reyes, L. S. Matthews, and T. W. Hyde, Phys. Rev. E 90, 033109 (2014).binliu B. Liu, J. Goree, and V. Nosenko, Phys. Plasmas 10, 9 (2003).flanagan09 T. M. Flanagan and J. Goree, Phys. Rev. E 80, 046402 (2009).qiao13 K. Qiao, J. Kong, E. V. Oeveren, L. S. Matthews, and T. W. Hyde, Phys. Rev. E 88, 043103 (2013).vladimirov95 S. V. Vladimirov and M. Nambu, Phys. Rev. E 52, R2172 (1995).goree95 F. Melandsϕ and J. Goree, Phys. Rev. E 52, 5312 (1995).ishihara97 O. Ishihara and S. V. Vladimirov, Phys. Plasmas 4, 69 (1997). lampe00 M. Lampe, G. Joyce, G. Ganguli, and V. Gavrishchaka, Phys. Plasmas 7, 3851 (2000).ivlev03 A. V. Ivlev, U. Konopka, G. Morfill, and G. Joyce, Phys. Rev. E 68, 026405 (2003).kompaneets07 R. Kompaneets, U. Konopka, A. V. Ivlev, V. Tsytovich, and G. Morfill, Phys. Plasmas 14, 052108 (2007). zhdanov09 S. K. Zhdanov, A. V. Ivlev, and G. E. Morfill, Phys. Plasmas 16, 083706 (2009). nosenko14 V. Nosenko, A. V. Ivlev, R. Kompaneets, and G. Morfill,Phys. Plasmas 21, 113701 (2014).block15 D. Block and W. J. Miloch, Plasma Phys. Control. Fusion 57, 014019 (2015). hutch11pop I. H. Hutchinson, Phys. Plasmas 18, 032111 (2011).hutch11prl I. H. Hutchinson, Phys. Rev. Lett. 107, 095001 (2011). hutch12pre I. H. Hutchinson, Phys. Rev. E 85, 066409 (2012). hutch13pop I. H. Hutchinsonand C. B. Haakonsen, Phys. Plasmas 20, 083701 (2013). hutch13ppcf I. H. Hutchinson, Plasma Phys. Control. Fusion 55, 115014 (2013). douglass11 A. Douglass, V. Land, L. S. Matthews, and T. W. Hyde, Phys. Plasmas 18, 083706 (2011). nosenko12 V. Nosenko, S. K. Zhdanov, H. M. Thomas, J. Carmona-Reyes, and T. W. Hyde, Europhys. Lett. 112, 45003 (2015). lampe12 M. Lampe, T. B. Röcker, G. Joyce, S. K. Zhdanov, A. V. Ivlev, and G. E. Morfill, Phys. Plasmas 19, 113703 (2012). | http://arxiv.org/abs/1705.09683v1 | {
"authors": [
"Bo Zhang",
"Jie Kong",
"Mudi Chen",
"Ke Qiao",
"Lorin S. Matthews",
"Truell W. Hyde"
],
"categories": [
"physics.plasm-ph"
],
"primary_category": "physics.plasm-ph",
"published": "20170526191027",
"title": "Dust cluster spin in complex (dusty) plasmas"
} |
Roberto Orosei,1 Angelo Pio Rossi,2 Federico Cantini,3 Graziella Caprarelli,4,5 Lynn M. Carter,6 Irene Papiano,7 Marco Cartacci,8 Andrea Cicchetti,8, and Raffaella Noschese81Istituto Nazionale di Astrofisica, Istituto di Radioastronomia, Via Piero Gobetti 101, 40129 Bologna, Italy 2Department of Physics and Earth Sciences, Jacobs University Bremen, Campus Ring 1, 28759 Bremen, Germany 3Ecole Polytechnique Federale de Lausanne, Space Engineering Center, EPFL ESC, Station 13, 1015 Lausanne, Switzerland 4University of South Australia, Div ITEE, GPO Box 2471, Adelaide SA 5001, Australia 5International Research School of Planetary Sciences, Viale Pindaro 42, Pescara 65127, Italy 6The University of Arizona, Lunar and Planetary Laboratory, 1629 E University Blvd, Tucson, AZ 85721-0092, USA 7Liceo Scientifico Augusto Righi, Viale Carlo Pepoli 3, 40123 Bologna, Italy 8Istituto Nazionale di Astrofisica, Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, ItalyLucus Planum, extending for a radius of approximately 500 km around 181^∘ E, 5^∘ S, is part of the Medusae Fossae Formation (MFF), a set of several discontinuous deposits of fine-grained, friable material straddling across the Martian highland-lowland boundary. The MFF has been variously hypothesized to consist of pyroclastic flows, pyroclastic airfall, paleopolar deposits, or atmospherically-deposited icy dust driven by climate cycles. MARSIS, a low–frequency subsurface–sounding radar carried by ESA's Mars Express, acquired 238 radar swaths across Lucus Planum, providing sufficient coverage for the study of its internal structure and dielectric properties. Subsurface reflections were found only in three areas, marked by a distinctive surface morphology, while the central part of Lucus Planum appears to be made of radar–attenuating material preventing the detection of basal echoes. The bulk dielectric properties of these areas were estimated and compared with those of volcanic rocks and ice–dust mixtures. Previous interpretations that east Lucus Planum and the deposits on the north–western flanks of Apollinaris Patera consist of high–porosity pyroclastic material are strongly supported by the new results. The north–western part of Lucus Planum is likely to be much less porous, although interpretations about the nature of the subsurface materials are not conclusive. The exact origin of the deposits cannot be constrained by radar data alone, but our results for east Lucus Planum are consistent with an overall pyroclastic origin, likely linked to Tharsis Hesperian and Amazonian activity.§ INTRODUCTION Lucus Planum, extending for a radius of approximately 500 km around 181^∘ E, 5^∘ S, is part of the Medusae Fossae Formation (MFF), a set of several discontinuous deposits of fine-grained, friable material straddling across the Martian highland-lowland boundary (e.g. <cit.>).The MFF covers an extensive area, spanning latitudinally more than 1000 km and longitudinally some 6000 km. It is separated into several discontinuous lobes (Fig. <ref>). The lobe that occupies the central part of the Medusae Fossae Formation is known as Lucus Planum <cit.> (or alternatively, lobe B <cit.>. In the recently revised global geologic map of Mars <cit.> two units make up Lucus Planum, namely the Hesperian and Amazonian-Hesperian transitional units (respectively Htu and AHtu) (Fig. 
<ref>) <cit.>.The MFF has been variously hypothesized to consist of pyroclastic flows <cit.>, pyroclastic airfall <cit.>, paleopolar deposits <cit.>, or atmospherically-deposited icy dust driven by climate cycles <cit.>. A branching positive relief system within Lucus Planum was interpreted by <cit.> as an ancient fluvial system originating from seepage sapping, implying that Lucus Planum was volatile-rich. The MFF shows evidence of a complex history of deposition, erosion and exhumation of both landforms and deposits <cit.>. Both erosional and depositional landforms are visible at different stratigraphic levels, resulting in complex morphologies.Two sounding radars have been flown on Martian missions: MARSIS <cit.> and SHARAD <cit.>. Both instruments are synthetic aperture, low frequency radars carried by ESA's Mars Express and NASA's Mars Reconnaissance Orbiter, respectively. They transmit low-frequency radar pulses that penetrate below the surface, and are reflected by dielectric discontinuities in the subsurface. MARSIS is optimized for deep penetration, with a free-space range resolution of approximately 150 m, a footprint size of 10-20 km across-track and 5-10 km along-track. SHARAD has tenfold better resolution, at the cost of reduced penetration. Parts of the MFF have been probed by both of these sounding radars <cit.>, revealing a dielectric permittivity of the MFF material that is consistent with either a substantial component of water ice or a low-density, ice-poor material. While the work by <cit.> was focused on Lucus Planum, estimates of dielectric properties by <cit.> were based on observations over Zephyria Planum, in the westernmost part of the Medusae Fossae Formation, and the area between Gordii Dorsum and Amazonis Mensa, at the Eastern end of the MFF.The dielectric permittivity of the MFF material <cit.> is consistent with either a substantial component of water ice or a low-density, ice-poor material. There is no evidence for internal layering from SHARAD data <cit.>, despite the fact that layering at scales of tens of meters has been reported in many parts of the MFF <cit.>. This lack of detection can be the result of one or more factors, such as high interface roughness, low dielectric contrast between materials, or discontinuity of the layers. § METHOD Operating since mid-2005, MARSIS has acquired 238 swaths of echoes across Lucus Planum, shown in Fig. <ref>. Each swath consists of a few hundred observations, for a total of over 38,000 echoes. Data are affected by the dispersion and attenuation of the radar signal caused by ionospheric plasma, but a number of methods has been developed over the years to attenuate or compensate such effects <cit.>. Data used in this work have been processed using the methodology described by <cit.>, which consists in the maximization of the signal power in an interval centred around the strongest echo through the differential variation of the phase of the components of the Fourier signal spectrum.MARSIS data acquired continuously during the movement of the spacecraft are usually displayed in the form of radargrams, grey-scale images in which the horizontal dimension is distance along the ground track, the vertical one is the round trip time of the echo, and the brightness of the pixel is a function of the strength of the echo (ref. to example in Fig. <ref>). The first step in data analysis consisted in the visual inspection of radargrams to determine their quality. 
Observations were discarded if the ionospheric distortion compensation algorithm had failed, if spurious signals from the electronics of the spacecraft were present, or if exceptional ionosphere conditions resulted in a severe attenuation or absence of the signal. This reduced the number of radargrams suitable for further analysis by approximately 25%.The next step consisted in the identification of subsurface echoes in radargrams, which is complicated by the so called “clutter”, that is by echoes coming from off-nadir surface features, such as craters or mountains, and reaching the radar after the nadir surface echo. As clutter can dwarf subsurface echoes, numerical electromagnetic models of surface scattering have been developed <cit.> to validate the detection of subsurface interfaces in MARSIS data. They are used to produce simulations of surface echoes, which are then compared to the ones detected by the radar: any secondary echo visible in radargrams but not in simulations is interpreted as caused by subsurface reflectors (Fig. <ref>).To analyse clutter, a code for the simulation of radar wave surface scattering was developed, based on the algorithm of <cit.>. The MOLA topographic dataset <cit.> was used to represent the Martian surface as a collection of flat plates called facets. Radar echoes were computed as the coherent sum of reflections from all facets illuminated by the radar. The computational burden of simulations required the use of the SuperMUC supercomputer at the Leibniz–Rechenzentrum, Garching, Germany.Subsurface reflections in Lucus Planum are usually weak and often have a diffuse appearance (Fig. <ref>). Several methods were attempted to automatically identify such reflections in radargrams, but eventually a supervised procedure was used, in which an operator manually selects a few points marking the position of the interface in a radargram, and then the procedure itself outlines the interface and records its aerocentric coordinates, its time delay from the surface echo, and its reflected power. The confidence in the retrieved coordinates is based on the accuracy of the reconstructed Mars Express trajectory, which is estimated to be a fraction of the MARSIS footprint size. To better determine the position and power of subsurface echoes, radar signals have been interpolated with the Fourier interpolation method to reduce the sampling interval to 0.1 μs. The precision in the determination of the time delay is assumed to be the one-way delay resolution (or 0.5 μs, corresponding to 150 m free-space), while the uncertainty in echo power is considered to be below 0.5 dB because of the interpolation. § RESULTS A total of 97 subsurface reflectors were identified, extending along track over distances up to 500 kilometres. Their distribution across Lucus Planum is shown in Fig. <ref>. In spite of several high-quality radargrams crossing the central part of Lucus Planum, only a handful of subsurface interfaces could be detected there, most of which are shallow, often associated with pedestal craters. Reflectors concentrate in specific areas: the deposits on the north-western flanks of Apollinaris Patera, the rugged terrain North of Tartarus Scopulus and the large lobe located North-East of Memnonia Sulci. The contours of these areas follow closely morphologically distinct provinces within Lucus Planum, which suggests that variations in surface morphology could be tied to changes in the material forming Lucus Planum. These areas are outlined in Fig. 
<ref> and labelled “A”, “B” and “C”, respectively.Figure <ref> shows the apparent depth of reflectors, estimated from the measured round-trip time delay between surface and subsurface echo by: z = cτ/ 2 √(ε) where z is depth, c the speed of light in vacuo, τ the round-trip time delay between surface and subsurface echo, and ε is the real part of the relative complex permittivity (also called dielectric constant) of the Lucus Planum material. The apparent depth z_a was computed assuming that ε is equal to 1, corresponding to the permittivity of free space: z_a = cτ/ 2 Apparent depths overestimate the thickness of Lucus Planum by a factor comprised between √(3) and 3, depending on the nature of the material through which the wave propagates <cit.>.Estimates of permittivity for the different regions of Lucus Planum provide some insight on their nature and a more precise evaluation of their thickness. Following the approach first presented in <cit.> and used also in <cit.>, we produced an independent estimate of the thickness of Lucus Planum assuming that the deposits rest on a surface in lateral continuity with the surrounding topography, and that MARSIS echoes come from such surface. The white contours in Fig. <ref> encompass those areas in which MOLA topography was removed, and then interpolated from the remaining topographic information through the natural neighbour method <cit.>.The difference between the actual topography and the interpolated basal topography of Lucus Planum provides an estimate of the depth of the base of Lucus Planum, z_i. By inserting z_i in Eq. <ref>, solving Eq. <ref> by cτ, and rearranging and simplifying equal terms, we obtain: z_a = √(ε) z_i from which we see that the slope of the best-fit line in a plot of interpolated vs. apparent depth provides an estimate of √(ε). The resulting plots for areas A, B and C are shown in Fig. <ref>. Because of the large dispersion of data points in some of the plots, the best-fit line was computed using the least absolute deviations method <cit.>, which is less sensitive to outliers than the least squares method.The slopes and constant terms of the best-fit lines in Fig. <ref> are reported in Table <ref>. For each value, the corresponding 95% confidence interval of the estimate is listed, providing some insight on the quality of the data fit. Table <ref> reports also estimates of ε, obtained from the values of slopes and their corresponding 95% confidence bounds through Eq. <ref>.From Eq. <ref>, the value of the constant term in best-fit lines should be zero, different from what is reported in Table <ref>. The presence of a constant term indicates a systematic error in the evaluation of z_a, z_i or both. Because the range resolution of MARSIS is about 150 m in free space <cit.>, the constant terms in Table <ref> correspond to a few range resolution cells. One possible explanation is that the interpolation method failed to provide a correct estimate of the basal topography: because Lucus Planum straddles the dichotomy boundary, the topography beneath it is expected to be complex, affecting the precision of results. 
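The fit itself is straightforward to reproduce. The sketch below applies a least-absolute-deviations line fit to synthetic (z_i, z_a) pairs, which are placeholders standing in for the MARSIS picks of one area, and converts the slope to a dielectric constant via ε = slope²; the added constant mimics the systematic offset discussed here. Because such an offset enters only through the intercept, it does not, by itself, bias the slope from which ε is obtained.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
z_i = rng.uniform(200.0, 2500.0, 80)                      # interpolated basal depths [m] (synthetic)
eps_true = 3.0
z_a = np.sqrt(eps_true) * z_i + 250.0 + rng.normal(0.0, 150.0, 80)   # apparent depths with an offset
z_a[:6] += 1500.0                                         # a few outliers; LAD is robust to these

def lad(params, x, y):
    m, q = params
    return np.sum(np.abs(y - (m * x + q)))                # least-absolute-deviations objective

start = np.polyfit(z_i, z_a, 1)                           # least-squares starting point
fit = minimize(lad, start, args=(z_i, z_a), method="Nelder-Mead")
slope, intercept = fit.x
print(f"slope = {slope:.3f} -> eps = {slope**2:.2f}, intercept = {intercept:.0f} m")
```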
Another possibility is a systematic overestimation of the time delay of subsurface echoes in radargrams, perhaps because subsurface reflections are less sharp than surface ones, and the manual determination of their exact position introduces additional uncertainties.Permittivity is a complex quantity: its real part affects the velocity of an electromagnetic wave, while its imaginary part is related to the dissipation (or loss) of energy within the medium. The ratio between the imaginary and the real part of the complex permittivity is called the loss tangent. Estimating the loss tangent of the material within Lucus Planum provides an additional constraint on its nature and can be used by way of checking on the significance of the values of ε in Table <ref>.The loss tangent over parts of the Medusae Fossae formation was estimated from the rate of decay of the subsurface echo power as a function of time delay by <cit.>. Following a similar approach, we assumed that the surface and the subsurface interfaces over Lucus Planum are smooth at MARSIS frequencies, meaning that the RMS height of topography is a fraction of the wavelength, and that Lucus Planum consists only of non-magnetic, low loss material. While a higher roughness would cause only a fluctuation of surface and subsurface power without affecting the mean rate of subsurface power decay with depth, the assumption that Lucus Planum consists of a low loss, non-magnetic material is validated by previous results <cit.>, and would result in little or no subsurface interface detections if violated. Under these assumptions, following <cit.>, the surface echo power P_s can be written as follows: P_s = P_t ·(G λ/ 8 π H )^2 ·| R_s |^2 with P_t the transmitted power, G the antenna gain, λ the wavelength, H the spacecraft altitude and R_s the surface Fresnel reflection coefficient at normal incidence. Analogously, the subsurface echo power P_ss can be computed through the following expression: P_ss= P_t ·(G λ/ 8 π ( H + z ) )^2 ·( 1 - | R_s |^2 )^2 · | R_ss|^2 ·exp( -2 π f tanδτ) where R_ss is the subsurface Fresnel reflection coefficient at normal incidence, f the radar frequency, tanδ the loss tangent of the Lucus Planum material, here assumed to be constant through its entire thickness, while z and τ have been defined in Eq. <ref>.By dividing Eq. <ref> and Eq. <ref>, and then taking the natural logarithm of the result, the following expression is obtained: ln(P_ss/ P_s ) = -2 π f tanδτ + K where K is a term depending on R_s and R_ss. The topography of Lucus Planum is characterized by a roughness that is not negligible compared to the MARSIS wavelength <cit.>. This implies that P_s and P_ss fluctuate around a mean value that is a function of statistical parameters characterizing the topography, such as RMS height and RMS slope <cit.>. Under the assumption that such parameters do not vary significantly within each of the three areas A, B and C, then roughness will cause only a variation of the value of parameter K and the addition of a random noise to ln( P_ss / P_s ) in Eq. <ref>.Other factors connected to the internal structure of the Lucus Planum and Apollinaris Patera deposits are unlikely to affect Eq. <ref> significantly. A surface layer thinner than the vertical resolution of the radar can generate interferences so as to drastically reduce surface reflectivity, as in the case of the CO_2 layer over the SPLD identified by <cit.>. However, such coherent effects require a very smooth surface and are strongly frequency-dependent. 
Both the rougher surface of Lucus Planum and Apollinaris Patera <cit.>, and the fact that such dependence on frequency was not found in the data seem to rule out the presence of such a layer.Other material inhomogeneities in the dielectric properties at depths below the vertical resolution of MARSIS would tend to produce surface echoes whose power is dominated by the dielectric permittivity of the layers closest to the surface, as discussed in <cit.>. This effect would alter P_ss/P_s, but it would not change the rate at which this quantity decreases with depth, that is the first term of the right side of Eq. <ref>. Random inhomogeneities within the deposits, whose characteristic size is comparable to the MARSIS wavelength, would result in volume scattering, that is in the diffusion of electromagnetic radiation within the deposits away from the direction of propagation.The diffuse, weak echoes between surface and basal reflections visible in Fig. <ref> could be caused by volume scattering, although they could also originate from surface roughness. Volume scattering cannot be easily characterized from the measure of backscattered radiation, but it would attenuate the subsurface radar echo. This effect cannot be separated from dielectric attenuation, and it would thus lead to a systematic overestimate of tanδ from Eq. <ref>, which thus constitutes an upper bound for the true dielectric attenuation.With these caveats, the slope of the best-fit line in a plot of 2 π f τ (that is the number of cycles completed by the radar wave within Lucus Planum) vs. the natural logarithm of the subsurface to surface echo power ratio will provide an estimate of tanδ. Such plots for areas A, B and C, with the corresponding best-fit lines, are shown in Fig. <ref>. In analogy with the estimation of ε, the best-fit line was computed using the least absolute deviations method.The slopes and constant terms of the best-fit lines in Fig. <ref> are reported in Table <ref>. For each value we report the corresponding 95% confidence interval of the estimate, to provide some insight on the quality of the data fit. Table <ref> reports also the corresponding estimates of tanδ with their 95% confidence bounds. § DISCUSSION The lack of subsurface reflections in the central part of Lucus Planum can be the result of several factors, some of which depend on surface properties. A high topographic roughness at scales comparable to the radar wavelength causes scattering of the incident pulse, resulting in weaker surface and subsurface echoes. However, RMS heights estimated from MOLA data both over baselines of a few to several kilometers <cit.> and within the MOLA footprint <cit.> are higher in area C, where subsurface detections are frequent, than in the central part of Lucus Planum. Another possibility is that the basal roughness is higher in its central part. Because subsurface echoes appear to be associated with areas of distinct surface morphology, a third possibility is that the central part of Lucus Planum consists of denser, more radar-attenuating material.Values of ε in areas A and C are similar to those found by <cit.> and <cit.>, while those in area B appears to be higher, although the estimate is affected by a larger uncertainty. The same trend, both in values and confidence intervals, is observed also for tanδ. It can also be seen in Fig. <ref> that the spatial density of subsurface interface detections is much higher in areas A and C than in area B, in spite of a comparable density of coverage (see Fig. <ref>). 
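(For reference, the loss-tangent values in the table come from the same least-absolute-deviations machinery applied to the relation ln(P_ss/P_s) = −2πf tanδ τ + K derived above; a synthetic-data sketch of that fit, with placeholder numbers, is given below before returning to the comparison between the areas.)

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
f_radar = 4.0e6                                   # one of the MARSIS band centres [Hz]
tan_delta_true = 5e-3                             # placeholder loss tangent
tau = rng.uniform(2e-6, 25e-6, 60)                # subsurface-echo delays [s] (synthetic)
x = 2 * np.pi * f_radar * tau                     # radians completed by the wave in the deposit
lnratio = -tan_delta_true * x - 1.0 + rng.normal(0.0, 0.7, 60)   # ln(Pss/Ps) with scatter

lad = lambda p: np.sum(np.abs(lnratio - (p[0] * x + p[1])))      # least-absolute-deviations objective
fit = minimize(lad, np.polyfit(x, lnratio, 1), method="Nelder-Mead")
print(f"tan(delta) = {-fit.x[0]:.1e}  (true {tan_delta_true:.1e})")
```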
Surface roughness at kilometre scale in area B is similar to that of the central part of Lucus Planum and smaller than that of area C <cit.>, in spite of the different surface morphology, while roughness in area B at hundred-meters scale is comparable to that of area C <cit.>. Because the dearth and weakness of subsurface echoes in area B do not correlate with a higher surface roughness compared to areas A and C, we favour the interpretation that, in spite of the large uncertainties, the higher value of the complex relative permittivity in area B is an indication of a change in bulk dielectric properties with respect to areas A and C.The relative dielectric constant of volcanic rocks such as those thought to constitute the Martian crust is variable, ranging between 2.5 for pumice to about 10 or even higher for dense basalts <cit.>. Following a search in the literature and a set of new measurements between 0.01 and 10 MHz, <cit.> concluded that the main factor in determining the value of ε is porosity, finding the following empirical relation for dacitic rocks: ( ε)^0.96 = Φ + 6.51 ( 1 - Φ) where Φ is porosity. Such relation is similar to estimates for other non-basaltic rocks, and holds also for basalts, although with a greater variability due perhaps to Fe-Ti oxide mineral content <cit.>. Modelling the dependence of ε on porosity through Eq. <ref>, and inverting such equation to obtain estimates of Φ, it is found that the values of ε in Table <ref> are consistent with a porosity between 0.6 and 0.9 for area A, up to 0.85 for area B, and between 0.7 and 0.8 for area C. Such high values are typical of volcanic rocks extruded through explosive, rather than effusive, processes. In their study of the correlation between the distribution of porosity in pyroclasts and eruption styles, <cit.> found that porosity values above 0.5 are characteristic of the product of explosive basaltic eruptions <cit.> or even highly explosive subplinian, plinian or ultraplinian eruptions, whose deposits derive from fall–out or pyroclastic density currents.Potential sources of Martian pyroclastic deposits have been discussed in the literature <cit.> and might be largely related to Tharsis. Also Apollinaris Patera is at close reach for Lucus Planum <cit.>. The role of possibly buried volcanic edifices has been suggested by <cit.>: evidence of such edifices has not been found so far in the area.Tridymite has recently been discovered by the Curiosity rover within lacustrine sediments in Gale Crater <cit.>, suggesting the presence of silica–rich volcanics within the crater's watershed. If the Medusae Fossae Formation is composed of explosive volcanic material, the upper portion of Mount Sharp, which is thought to be a part of the MFF <cit.>, could be a source for that material.Another plausible explanation of the nature of Lucus Planum materials and the Medusae Fossae Formation in general is that they might consist of ice–rich dust or ice–laden porous rock, although previous estimates of dielectric properties based on radar data proved inconclusive <cit.>.The permittivity of a mixture of ice and dust can be estimated using a mixing formula. Because of the lack of knowledge about the size and shape of pores or ice inclusions in the rock, in the following analysis we selected the general Polder–van Santen model <cit.>. 
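Before turning to the mixing model, the porosity inversion just described can be made explicit with a short, purely illustrative calculation; the permittivity values below are placeholders spanning a plausible range, since the actual estimates for areas A, B and C are listed in the table referenced above and are not reproduced here.

import numpy as np

def porosity_from_permittivity(eps_r):
    # Invert the empirical relation eps_r**0.96 = phi + 6.51*(1 - phi) quoted above.
    return (6.51 - eps_r ** 0.96) / (6.51 - 1.0)

for eps in (2.0, 2.5, 3.0, 3.5):
    print(f"eps = {eps:.1f}  ->  porosity = {porosity_from_permittivity(eps):.2f}")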
This formula is one of the simplest and yet more widely used, and it has the special property that it treats the inclusions and the hosting material symmetrically; it balances both mixing components with respect to the unknown effective medium, using the volume fraction of each component as a weight: ( 1 - f )ε_h - ε_eff/ε_h + 2 ε_eff + fε_i - ε_eff/ε_i + 2 ε_eff = 0 where f is the volume fraction of inclusions in the mixture, ε_h is the permittivity of the host material, ε_i that of the inclusions, and ε_eff the effective permittivity of the mixture.Water ice has a relative dielectric constant well within the range of values typical of porous rocks (2–6), while its loss tangent can vary by orders of magnitude as a function of temperature in the range 100-270 K, which is applicable to Martian conditions. Using the empirical formulas presented in <cit.> and a mean surface temperature of 210 K, typical for the latitudes of Lucus Planum according to <cit.>, it is found that the real part of the permittivity of water ice is ≈ 3.1, and the loss tangent is ≈ 5 · 10^-5. We hypothesized that the relative dielectric constant of the rocky component in the Lucus Planum material could range from 7 to 15 <cit.>, and that its loss tangent could independently vary between 10^-3 and 10^-1 <cit.>. The Polder–van Santen mixing rule was then used to model the effective permittivity of all possible combinations of relative dielectric constant, loss tangent and porosity, similarly to the method described in <cit.>.Comparing the results with the estimated values in areas A, B and C, we found that no mixture of rock and ice could produce a complex permittivity compatible with that of areas A and C. It is possible to obtain compatible permittivity values for these two areas using a three-component mixture, that is rock, ice and void, but the significance of this result, given the weakly constrained multi-dimensional parameter space, is difficult to assess. In the map of water-equivalent hydrogen content for the Martian soil produced by <cit.>, the Lucus Planum area appears to be relatively water–rich, with a water–equivalent hydrogen content estimated at around 8%. This value however is referred to the first meter of depth, while the dielectric permittivity derived from MARSIS data is an average over the whole thickness of the Lucus Planum and Apollinaris Patera deposits.For area B, mixtures with an ice volume fraction between 0.3 and 0.9 and a loss tangent for the rocky material comprised between 3 · 10^-3 and 3 · 10^-2 could return a range of permittivity values consistent with estimates. To determine the significance of this result, we also computed the effective permittivity of a mixture of rock and void (empty pores) over the same parameter space. We found that values consistent with those of area B could be obtained for a range of porosity and loss tangent values similar to that of the mixture of rock and ice. We thus conclude that the nature of the bulk material in area B cannot be reliably determined using only the data provided by this analysis.Area C presents the highest number and density of subsurface detections, and the smallest uncertainty in the estimates of dielectric properties. We therefore inserted in Eq. <ref> the value of ε from Table <ref> to estimate the thickness of the Lucus Planum deposits in such area, and then interpolated this quantity over area C through the natural neighbour method. The result is included in Fig. 
<ref>, in which the colour–coded thickness is layered on a shaded relief map of Lucus Planum. The deposits are several hundred meters thick on average, locally reaching a thickness up to 1.5 kilometres, for a total volume of ≈ 6.8 · 10^4 km^3. The deposit thickness varies positively with regional elevations, being higher in the South, and lower in the North.Overall, Lucus Planum subsurface as sounded by MARSIS appears to be locally to regionally inhomogeneous. This can be interpreted in terms of complex, multi–process components of the deposits constituting Lucus Planum and possibly the Medusa Fossae Formation as a whole <cit.>. An interplay between a possibly dominating volcano–sedimentary component and local, possibly late–stage erosional and partially depositional episodes could be envisaged. Such episodes on or in the vicinity of Lucus Planum likely occurred in the relatively recent past <cit.>, leading to extensive resedimentation of Lucus Planum materials <cit.>.The dielectric properties of the north–western part of Lucus Planum, implying a higher density compared to the other radar–transparent areas, and the inferred strong attenuation of the radar signal in its central part could be interpreted as due to the presence of indurated sedimentary deposits. Their existence within Lucus Planum is consistent with extensive reworking of those deposits through time <cit.>. Although such deposits could be compositionally similar to the overall MFF materials <cit.>, they could be locally remobilised, thus changing their architecture, structure and texture, including their degree of cementation <cit.>. Their Hesperian–Amazonian age <cit.> would match both late stage valley network as well as vigorous Tharsis activity. MARSIS data cannot shed much light on small– to medium–scale lateral and vertical variations, which will require additional work at an appropriate scale.The possibility of sampling with Mars Science Laboratory on its way uphill on Mount Sharp in Gale Crater some material, even not necessarily in–situ but made available through mass wasting and resedimentation, would allow for some indirect ground truth: pyroclastic, possibly acidic, volcanic material of an age comparable with that of Lucus Planum (Hesperian to Amazonian) would offer support to a pyroclastic origin of the MFF. On the other hand, resedimented material so far from the MFF main bodies would not allow for volatiles to be embedded and preserved. § CONCLUSIONS MARSIS acquired 238 radar swaths across Lucus Planum, providing sufficient coverage for the study of the internal structure and dielectric properties of this part of the MFF. Subsurface reflections were found only in three areas, marked by a distinctive surface morphology, while the central part of Lucus Planum appears to be made of radar–attenuating material preventing the detection of basal echoes. The bulk dielectric constant of these areas was estimated by comparing their apparent thickness from radar data with their basal topography, extrapolated from the surrounding terrains. The complex part of the dielectric permittivity was derived from the weakening of basal echoes as a function of apparent depth, yielding results that are consistent with the estimated dielectric constant. The inferred bulk properties were compared with known materials such as volcanic rocks and ice–dust mixtures. 
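As a sketch of how the comparison with ice–dust mixtures summarized above can be carried out, the symmetric Polder–van Santen form written out in the Discussion reduces to a quadratic in the effective permittivity and can be evaluated for complex inputs. The rock end member below is an assumed, illustrative value (not one of the fitted ones); the water-ice values are those quoted in the text for a 210 K mean temperature.

import numpy as np

def polder_van_santen(eps_host, eps_incl, f_incl):
    # Solve (1-f)*(eps_h - e)/(eps_h + 2e) + f*(eps_i - e)/(eps_i + 2e) = 0 for e.
    # Clearing denominators gives the quadratic 2*e**2 - b*e - eps_h*eps_i = 0.
    b = (1.0 - f_incl) * (2.0 * eps_host - eps_incl) + f_incl * (2.0 * eps_incl - eps_host)
    roots = np.roots([2.0, -b, -eps_host * eps_incl])
    return roots[np.argmax(roots.real)]        # physical root (positive real part)

eps_ice = 3.1 * (1.0 + 1j * 5e-5)              # water ice near 210 K, as quoted above
eps_rock = 9.0 * (1.0 + 1j * 1e-2)             # an assumed dense-rock end member
for f_ice in (0.3, 0.6, 0.9):
    e = polder_van_santen(eps_rock, eps_ice, f_ice)
    print(f"ice fraction {f_ice:.1f}: eps_eff = {e.real:.2f}, tan(delta) = {e.imag / e.real:.1e}")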
The interpretation that the eastern area of Lucus Planum and the deposits on the north–western flanks of Apollinaris Patera consist of high–porosity pyroclastic material is strongly supported by results, while north–western Lucus Planum is likely to be much less porous. No conclusion could be drawn about the presence of pore ice.All evidence points to Lucus Planum being highly inhomogeneous. The exact origin of the deposits cannot be constrained by radar data alone, but our results are consistent with an overall pyroclastic origin as suggested by <cit.> and <cit.> for the MFF. The geological complexity of the subsurface revealed through MARSIS data is consistent with a combination of processes acting through space and time, including fluvial (possibly outflow–related) activity occurred during the emplacement of the Htu and AHtu units <cit.>, as well as eolian deposition <cit.>. The overall surface textural and topographical heterogeneity might be linked to post–emplacement erosional processes related to regional wind dynamics <cit.> on a variably indurated substrate. The evidence in this work was not sufficient to demonstrate the presence of an ice-related component in the central part of Lucus Planum, although this cannot be conclusively excluded. A full understanding of such a complex geological history will require the integration of several datasets at different scales and with different resolutions.This work was supported by the Italian Space Agency (ASI) through contract no. I/032/12/1. The numerical code for the simulation of surface scattering was developed at the Consorzio Interuniversitario per il Calcolo Automatico dell'Italia Nord–Orientale (CINECA) in Bologna, Italy. Simulations were produced thanks to the Partnership for Advanced Computing in Europe (PRACE), awarding us access to the SuperMUC computer at the Leibniz–Rechenzentrum, Garching, Germany through project 2013091832. Test simulations were run Jacobs University CLAMV HPC cluster, and we are grateful to Achim Gelessus for his support. This research has made use of NASA's Astrophysics Data System. APR has been supported by the European Union FP7 and Horizon 2020 research and innovation programmes under grant agreements #654367 (EarthServer–2) and #283610 (EarthServer). Data used in this analysis have been taken from the MARSIS public data archive, which is currently undergoing validation before publication on the Planetary Science Archive of the European Space Agency () and mirroring on NASA's Planetary Data System Geosciences Node (). Simulations used in the identification of subsurface interfaces will be published in the same archives at a yet undefined date, but they are already available in the following public repository: . 46urlstyle[Alberti et al.(2012)Alberti, Castaldo, Orosei, Frigeri, and Cirillo]2012JGRE..117.9008A Alberti, G., L. Castaldo, R. Orosei, A. Frigeri, and G. Cirillo (2012), Permittivity estimation over Mars by using SHARAD data: the Cerberus Palus area, Journal of Geophysical Research (Planets), 117, E09008, 10.1029/2012JE004047.[Armand et al.(2003)Armand, Smirnov, and Hagfors]2003RaSc...38.1090A Armand, N. A., V. M. Smirnov, and T. Hagfors (2003), Distortion of radar pulses by the Martian ionosphere, Radio Science, 38, 1090, 10.1029/2002RS002849.[Bloomfield and Steiger(1983)]LAD Bloomfield, P., and W. L. Steiger (1983), Least Absolute Deviations, Progress in Probability and Statistics, vol. 
6, Birkh'́auser Boston, Boston, Massachusetts, 10.1007/978-1-4684-8574-5.[Bradley et al.(2002)Bradley, Sakimoto, Frey, and Zimbelman]2002JGRE..107.5058B Bradley, B. A., S. E. H. Sakimoto, H. Frey, and J. R. Zimbelman (2002), Medusae Fossae Formation: New perspectives from Mars Global Surveyor, Journal of Geophysical Research (Planets), 107, 5058, 10.1029/2001JE001537.[Broz̆ et al.(2014)Broz̆, C̆adek, Hauber, and Rossi]broz2014epsl Broz̆, P., O. C̆adek, E. Hauber, and A. P. Rossi (2014), Shape of scoria cones on Mars: Insights from numerical modeling of ballistic pathways, Earth and Planetary Science Letters, 406, 14–23, 10.1016/j.epsl.2014.09.002.[Campbell and Watters(2016)]2016JGRE..121..180C Campbell, B. A., and T. R. Watters (2016), Phase compensation of MARSIS subsurface sounding data and estimation of ionospheric properties: New insights from SHARAD results, Journal of Geophysical Research (Planets), 121, 180–193, 10.1002/2015JE004917.[Cartacci et al.(2013)Cartacci, Amata, Cicchetti, Noschese, Giuppi, Langlais, Frigeri, Orosei, and Picardi]2013Icar..223..423C Cartacci, M., E. Amata, A. Cicchetti, R. Noschese, S. Giuppi, B. Langlais, A. Frigeri, R. Orosei, and G. Picardi (2013), Mars ionosphere total electron content analysis from MARSIS subsurface data, Icarus, 223, 423–437, 10.1016/j.icarus.2012.12.011.[Carter et al.(2009)Carter, Campbell, Watters, Phillips, Putzig, Safaeinili, Plaut, Okubo, Egan, Seu, Biccari, and Orosei]2009Icar..199..295C Carter, L. M., B. A. Campbell, T. R. Watters, R. J. Phillips, N. E. Putzig, A. Safaeinili, J. J. Plaut, C. H. Okubo, A. F. Egan, R. Seu, D. Biccari, and R. Orosei (2009), Shallow radar (SHARAD) sounding observations of the Medusae Fossae Formation, Mars, Icarus, 199, 295–302, 10.1016/j.icarus.2008.10.007.[Feldman et al.(2004)Feldman, Prettyman, Maurice,Plaut, Bish, Vaniman, Mellon, Metzger, Squyres, Karunatillake, Boynton,Elphic, Funsten, Lawrence, and Tokar]2004JGRE..109.9006F Feldman, W. C., T. H. Prettyman, S. Maurice, J. J. Plaut, D. L. Bish,D. T. Vaniman, M. T. Mellon, A. E. Metzger, S. W. Squyres, S. Karunatillake,W. V. Boynton, R. C. Elphic, H. O. Funsten, D. J. Lawrence,and R. L. Tokar (2004), Global distribution of near-surface hydrogen on Mars,Journal of Geophysical Research (Planets), 109, E09006, 10.1029/2003JE002160.[Grima et al.(2014)Grima, Blankenship, Young, and Schroeder]2014GeoRL..41.6787G Grima, C., D. D. Blankenship, D. A. Young, and D. M. Schroeder (2014), Surface slope control on firn density at Thwaites Glacier, West Antarctica: Results from airborne radar sounding, Geophysical Research Letters, 41, 6787–6794, 10.1002/2014GL061635.[Harrison et al.(2010)Harrison, Balme, Hagermann, Murray, and Muller]harrison2010mapping Harrison, S., M. Balme, A. Hagermann, J. Murray, and J.-P. Muller (2010), Mapping medusae fossae formation materials in the southern highlands of Mars, Icarus, 209(2), 405–415.[Harrison et al.(2013)Harrison, Balme, Hagermann, Murray, Muller, and Wilson]2013P SS...85..142H Harrison, S. K., M. R. Balme, A. Hagermann, J. B. Murray, J.-P. Muller, and A. Wilson (2013), A branching, positive relief network in the middle member of the Medusae Fossae Formation, equatorial Mars - Evidence for sapping?, Planetary and Space Science, 85, 142–163, 10.1016/j.pss.2013.06.004.[Head and Kreslavsky(2004)]2004LPI....35.1635H Head, J. W., III, and M. 
Kreslavsky (2004), Medusae Fossae Formation: Ice-rich Airborne Dust Deposited During Periods of High Obliquity?, paper presented at the 35th Lunar and Planetary Science Conference, Lunar and Planetary Institute, Houston, Texas.[Hynek et al.(2003)Hynek, Phillips, and Arvidson]2003JGRE..108.5111H Hynek, B. M., R. J. Phillips, and R. E. Arvidson (2003), Explosive volcanism in the Tharsis region: Global evidence in the Martian geologic record, Journal of Geophysical Research (Planets), 108, 5111, 10.1029/2003JE002062.[Kerber(2014)]2014LPI....45.2672K Kerber, L. (2014), The Distribution and Diversity of Layering Within the Medusae Fossae Formation, paper presented at the 45th Lunar and Planetary Science Conference, Lunar and Planetary Institute, The Woodlands, Texas.[Kerber and Head(2010)]kerber2010icarus Kerber, L., and J. W. Head (2010), The age of the Medusae Fossae Formation: Evidence of Hesperian emplacement from crater morphology, stratigraphy, and ancient lava contacts, Icarus, 206, 669–684, 10.1016/j.icarus.2009.10.001.[Kerber and Head(2012)]kerber2012progression Kerber, L., and J. W. Head (2012), A progression of induration in Medusae Fossae Formation transverse aeolian ridges: evidence for ancient aeolian bedforms and extensive reworking, Earth Surface Processes and Landforms, 37(4), 422–433.[Kerber et al.(2011)Kerber, Head, Madeleine, Forget, and Wilson]2011Icar..216..212K Kerber, L., J. W. Head, J.-B. Madeleine, F. Forget, and L. Wilson (2011), The dispersal of pyroclasts from Apollinaris Patera, Mars: Implications for the origin of the Medusae Fossae Formation, Icarus, 216, 212–220, 10.1016/j.icarus.2011.07.035.[Kreslavsky and Head(2000)]2000JGR...10526695K Kreslavsky, M. A., and J. W. Head (2000), Kilometer-scale roughness of Mars: Results from MOLA data analysis, Journal of Geophysical Research, 105, 26,695–26,712, 10.1029/2000JE001259.[Mandt et al.(2008)Mandt, de Silva, Zimbelman, and Crown]2008JGRE..11312011M Mandt, K. E., S. L. de Silva, J. R. Zimbelman, and D. A. Crown (2008), Origin of the Medusae Fossae Formation, Mars: Insights from a synoptic approach, Journal of Geophysical Research (Planets), 113(E12), E12011, 10.1029/2008JE003076.[Mätzler(1998)]1998ASSL..227..241M Mätzler, C. (1998), Microwave Properties of Ice and Snow, in Solar System Ices, Astrophysics and Space Science Library, vol. 227, edited by B. Schmitt, C. de Bergh, and M. Festou, p. 241, 10.1007/978-94-011-5252-5_10.[Mellon et al.(2004)Mellon, Feldman, and Prettyman]2004Icar..169..324M Mellon, M. T., W. C. Feldman, and T. H. Prettyman (2004), The presence and stability of ground ice in the southern hemisphere of Mars, Icarus, 169, 324–340, 10.1016/j.icarus.2003.10.022.[Morris et al.(2016)Morris, Vaniman, Blake, Gellert, Chipera, Rampe, Ming, Morrison, Downs, Treiman, Yen, Grotzinger, Achilles, Bristow, Crisp, Des Marais, Farmer, Fendrich, Frydenvang, Graff, Morookian, Stolper, and Schwenzer]morris2016pnas Morris, R. V., D. T. Vaniman, D. F. Blake, R. Gellert, S. J. Chipera, E. B. Rampe, D. W. Ming, S. M. Morrison, R. T. Downs, A. H. Treiman, A. S. Yen, J. P. Grotzinger, C. N. Achilles, T. F. Bristow, J. A. Crisp, D. J. Des Marais, J. D. Farmer, K. V. Fendrich, J. Frydenvang, T. G. Graff, J.-M. Morookian, E. M. Stolper, and S. P. 
Schwenzer (2016), Silicic volcanism on Mars evidenced by tridymite in high-SiO_2 sedimentary rock at Gale crater, Proceedings of the National Academy of Sciences, 113(26), 7071–7076, 10.1073/pnas.1607098113.[Mouginot et al.(2008)Mouginot, Kofman, Safaeinili, and Herique]2008P SS...56..917M Mouginot, J., W. Kofman, A. Safaeinili, and A. Herique (2008), Correction of the ionospheric distortion on the MARSIS surface sounding echoes, Planetary and Space Science, 56, 917–926, 10.1016/j.pss.2008.01.010.[Mouginot et al.(2009)Mouginot, Kofman, Safaeinili, Grima, Herique, and Plaut]2009Icar..201..454M Mouginot, J., W. Kofman, A. Safaeinili, C. Grima, A. Herique, and J. J. Plaut (2009), MARSIS surface reflectivity of the south residual cap of Mars, Icarus, 201, 454–459, 10.1016/j.icarus.2009.01.009.[Mueller et al.(2011)Mueller, Scheu, Kueppers, Spieler, Richard, and Dingwell]2011JVGR..203..168M Mueller, S., B. Scheu, U. Kueppers, O. Spieler, D. Richard, and D. B. Dingwell (2011), The porosity of pyroclasts as an indicator of volcanic explosivity, Journal of Volcanology and Geothermal Research, 203, 168–174, 10.1016/j.jvolgeores.2011.04.006.[Neumann et al.(2003)Neumann, Abshire, Aharonson, Garvin, Sun, and Zuber]2003GeoRL..30.1561N Neumann, G. A., J. B. Abshire, O. Aharonson, J. B. Garvin, X. Sun, and M. T. Zuber (2003), Mars Orbiter Laser Altimeter pulse width measurements and footprint-scale roughness, Geophysical Research Letters, 30, 1561, 10.1029/2003GL017048.[Nouvel et al.(2004)Nouvel, Herique, Kofman, and Safaeinili]2004RaSc...39.1013N Nouvel, J.-F., A. Herique, W. Kofman, and A. Safaeinili (2004), Radar signal simulation: Surface modeling with the Facet Method, Radio Science, 39, RS1013, 10.1029/2003RS002903.[Ogilvy(1991)]ogilvy Ogilvy, J. A. (1991), Theory of Wave Scattering from Random Rough Surfaces, IOP Publishing, Bristol.[Picardi and Sorge(2000)]2000SPIE.4084..624P Picardi, G., and S. Sorge (2000), Adaptive compensation of ionosphere dispersion to improve subsurface detection capabilities in low-frequency radar systems, in Eighth International Conference on Ground Penetrating Radar, Proceedings of SPIE, vol. 4084, edited by D. A. Noon, G. F. Stickley, and D. Longstaff, pp. 624–629.[Picardi et al.(2005)Picardi, Plaut, Biccari, Bombaci, Calabrese, Cartacci, Cicchetti, Clifford, Edenhofer, Farrell, Federico, Frigeri, Gurnett, Hagfors, Heggy, Herique, Huff, Ivanov, Johnson, Jordan, Kirchner, Kofman, Leuschen, Nielsen, Orosei, Pettinelli, Phillips, Plettemeier, Safaeinili, Seu, Stofan, Vannaroni, Watters, and Zampolini]2005Sci...310.1925P Picardi, G., J. J. Plaut, D. Biccari, O. Bombaci, D. Calabrese, M. Cartacci, A. Cicchetti, S. M. Clifford, P. Edenhofer, W. M. Farrell, C. Federico, A. Frigeri, D. A. Gurnett, T. Hagfors, E. Heggy, A. Herique, R. L. Huff, A. B. Ivanov, W. T. K. Johnson, R. L. Jordan, D. L. Kirchner, W. Kofman, C. J. Leuschen, E. Nielsen, R. Orosei, E. Pettinelli, R. J. Phillips, D. Plettemeier, A. Safaeinili, R. Seu, E. R. Stofan, G. Vannaroni, T. R. Watters, and E. Zampolini (2005), Radar Soundings of the Subsurface of Mars, Science, 310, 1925–1928, 10.1126/science.1122165.[Polder and van Santen(1946)]polvan46 Polder, D., and J. H. van Santen (1946), The effective permeability of mixtures of solids, Physica, 12, 257–271.[Porcello et al.(1974)Porcello, Jordan, Zelenka, Adams, Phillips, Brown, Ward, and Jackson]1974IEEEP..62..769P Porcello, L. J., R. L. Jordan, J. S. Zelenka, G. F. Adams, R. J. Phillips, W. E. Brown, Jr., S. H. Ward, and P. L. 
Jackson (1974), The Apollo lunar sounder radar system., Proceedings of the IEEE, 62, 769–783.[Rust et al.(1999)Rust, Russell, and Knight]1999JVGR...91...79R Rust, A. C., J. K. Russell, and R. J. Knight (1999), Dielectric constant as a predictor of porosity in dry volcanic rocks, Journal of Volcanology and Geothermal Research, 91, 79–96, 10.1016/S0377-0273(99)00055-4.[Safaeinili et al.(2007)Safaeinili, Kofman, Mouginot, Gim, Herique, Ivanov, Plaut, and Picardi]2007GeoRL..3423204S Safaeinili, A., W. Kofman, J. Mouginot, Y. Gim, A. Herique, A. B. Ivanov, J. J. Plaut, and G. Picardi (2007), Estimation of the total electron content of the Martian ionosphere using radar sounder surface echoes, Geophysical Research Letters, 34, L23204, 10.1029/2007GL032154.[Schultz and Lutz(1988)]1988Icar...73...91S Schultz, P., and A. B. Lutz (1988), Polar wandering of Mars, Icarus, 73, 91–141, 10.1016/0019-1035(88)90087-5.[Scott and Tanaka(1982)]1982JGR....87.1179S Scott, D. H., and K. L. Tanaka (1982), Ignimbrites of Amazonis Planitia region of Mars, Journal of Geophysical Research, 87, 1179–1190, 10.1029/JB087iB02p01179.[Seu et al.(2007)Seu, Phillips, Biccari, Orosei, Masdea, Picardi, Safaeinili, Campbell, Plaut, Marinangeli, Smrekar, and Nunes]2007JGRE..112.5S05S Seu, R., R. J. Phillips, D. Biccari, R. Orosei, A. Masdea, G. Picardi, A. Safaeinili, B. A. Campbell, J. J. Plaut, L. Marinangeli, S. E. Smrekar, and D. C. Nunes (2007), SHARAD sounding radar on the Mars Reconnaissance Orbiter, Journal of Geophysical Research (Planets), 112, E05S05, 10.1029/2006JE002745.[Sibson(1981)]natural Sibson, R. (1981), A brief description of natural neighbor interpolation, in Interpolating multivariate data, edited by V. Barnett, pp. 21–36, John Wiley & Sons, New York.[Sihvola(2000)]sihvola00 Sihvola, A. (2000), Mixing Rules with Complex Dielectric Coefficients, Subsurface Sensing Technologies and Applications, 1, 393–415, 10.1023/A:1026511515005.[Smirnov and Yushkova(2013)]2013SoSyR..47..430S Smirnov, V. M., and O. V. Yushkova (2013), The influence of the ionosphere in subsurface Martian soil sounding experiments and a method of its correction, Solar System Research, 47, 430–436, 10.1134/S0038094613060099.[Smith et al.(2001)Smith, Zuber, Frey, Garvin, Head, Muhleman, Pettengill, Phillips, Solomon, Zwally, Banerdt, Duxbury, Golombek, Lemoine, Neumann, Rowlands, Aharonson, Ford, Ivanov, Johnson, McGovern, Abshire, Afzal, and Sun]2001JGR...10623689S Smith, D. E., M. T. Zuber, H. V. Frey, J. B. Garvin, J. W. Head, D. O. Muhleman, G. H. Pettengill, R. J. Phillips, S. C. Solomon, H. J. Zwally, W. B. Banerdt, T. C. Duxbury, M. P. Golombek, F. G. Lemoine, G. A. Neumann, D. D. Rowlands, O. Aharonson, P. G. Ford, A. B. Ivanov, C. L. Johnson, P. J. McGovern, J. B. Abshire, R. S. Afzal, and X. Sun (2001), Mars Orbiter Laser Altimeter: Experiment summary after the first year of global mapping of Mars, Journal of Geophysical Research, 106, 23,689–23,722, 10.1029/2000JE001364.[Spagnuolo et al.(2011)Spagnuolo, Grings, Perna, Franco, Karszenbaum, and Ramos]2011P SS...59.1222S Spagnuolo, M. G., F. Grings, P. Perna, M. Franco, H. Karszenbaum, and V. A. Ramos (2011), Multilayer simulations for accurate geological interpretations of SHARAD radargrams, Planetary and Space Science, 59, 1222–1230, 10.1016/j.pss.2010.10.013.[Tanaka(2000)]2000Icar..144..254T Tanaka, K. L. 
(2000), Dust and Ice Deposition in the Martian Geologic Record, Icarus, 144, 254–266, 10.1006/icar.1999.6297.[Tanaka et al.(2014)Tanaka, Skinner, Dohm, Irwin III, Kolb, Fortezzo, Platz, Michael, and Hare]tanaka2014geologic Tanaka, K. L., J. A. Skinner, J. M. Dohm, R. P. Irwin III, E. J. Kolb, C. M. Fortezzo, T. Platz, G. G. Michael, and T. Hare (2014), Geologic map of Mars, US Department of the Interior, US Geological Survey.[Thomson et al.(2011)Thomson, Bridges, Milliken, Baldridge, Hook, Crowley, Marion, de Souza Filho, Brown, and Weitz]2011Icar..214..413T Thomson, B. J., N. T. Bridges, R. Milliken, A. Baldridge, S. J. Hook, J. K. Crowley, G. M. Marion, C. R. de Souza Filho, A. J. Brown, and C. M. Weitz (2011), Constraints on the origin and evolution of the layered mound in Gale Crater, Mars using Mars Reconnaissance Orbiter data, Icarus, 214, 413–432, 10.1016/j.icarus.2011.05.002.[Ulaby et al.(1986)Ulaby, Moore, and Fung]ulaby1986microwave Ulaby, F. T., R. K. Moore, and A. K. Fung (1986), Microwave Remote Sensing: Active and Passive, no. v. 3 in Artech House microwave library, Addison-Wesley Publishing Company, Advanced Book Program/World Science Division.[Watters et al.(2007)Watters, Campbell, Carter, Leuschen, Plaut, Picardi, Orosei, Safaeinili, Clifford, Farrell, Ivanov, Phillips, and Stofan]2007Sci...318.1125W Watters, T. R., B. Campbell, L. Carter, C. J. Leuschen, J. J. Plaut, G. Picardi, R. Orosei, A. Safaeinili, S. M. Clifford, W. M. Farrell, A. B. Ivanov, R. J. Phillips, and E. R. Stofan (2007), Radar Sounding of the Medusae Fossae Formation Mars: Equatorial Ice or Dry, Low-Density Deposits?, Science, 318, 1125–1128, 10.1126/science.1148112.[Wilson and Head(1994)]wilson1994mars Wilson, L., and J. W. Head (1994), Mars: Review and analysis of volcanic eruption theory and relationships to observed landforms, Reviews of Geophysics, 32(3), 221–263.[Zhang et al.(2009)Zhang, Nielsen, Plaut, Orosei, and Picardi]2009P SS...57..393Z Zhang, Z., E. Nielsen, J. J. Plaut, R. Orosei, and G. Picardi (2009), Ionospheric corrections of MARSIS subsurface sounding signals with filters including collision frequency, Planetary and Space Science, 57, 393–403, 10.1016/j.pss.2008.11.016. | http://arxiv.org/abs/1705.09110v1 | {
"authors": [
"Roberto Orosei",
"Angelo Pio Rossi",
"Federico Cantini",
"Graziella Caprarelli",
"Lynn M. Carter",
"Irene Papiano",
"Marco Cartacci",
"Andrea Cicchetti",
"Raffaella Noschese"
],
"categories": [
"astro-ph.EP"
],
"primary_category": "astro-ph.EP",
"published": "20170525095043",
"title": "Radar sounding of Lucus Planum, Mars, by MARSIS"
} |
Instituto de Física, Universidade Federal de Uberlândia, C.P. 593, 38400-902, Uberlândia, MG, Brazil We have investigated the energetic stability and the electronic properties of metal-organic topological insulator bilayers (BLs), -BL, with M=Ni and Pt, using calculations and a tight-binding model. Our findings show that -BL is an appealing platform for electronic band structure engineering based on the topologically protected chiral edge states. The energetic stability of the BLs is ruled by van der Waals interactions, with the AA stacking being the energetically most stable one. The electronic band structure is characterized by a combination of bonding and anti-bonding kagome band sets (KBSs), revealing that -BL presents a Z_2-metallic phase, whereas -BL may present either a Z_2-metallic phase or a quantum spin Hall phase. Those non-trivial topological states were confirmed by the formation of chiral edge states in -BL nanoribbons. We show that the localization of the edge states can be controlled with an external electric field normal to the BL, which breaks the mirror symmetry. Hence, the sign of the electric field selects in which layer each set of edge states is located. Such control over the (layer) localization of the topological edge states brings an additional and interesting degree of freedom to tune the transport properties in layered metal-organic topological insulators. Tuning the topological states in metal-organic bilayers F. Crasto de Lima, Gerson J. Ferreira, and R. H. Miwa December 30, 2023 ========================================================= § INTRODUCTION Two-dimensional (2D) topological insulators based on organic hosts have been the subject of numerous studies addressing not only fundamental issues, but also future technological applications. In a seminal work, Wang et al. <cit.> predicted a non-trivial topological phase in an organic lattice composed of a monolayer (ML) of three benzene molecules bonded to metal atoms, Pb and Bi. Soon after the successful synthesis of 2D metal-organic ML lattices of nickel bis(dithiolene) <cit.>, theoretical studies based on calculations and a single-orbital tight-binding (TB) model predicted a non-trivial topological phase in , characterized by the topological invariant Z_2 [=1 in ], and the formation of spin-polarized chiral edge states at the time-reversal-invariant momenta (TRIM) <cit.>. By exploiting the large variety of (possible) combinations of metal-organic hosts, other metal-organic frameworks (MOFs) with a non-trivial topological phase have been proposed in the past few years. For instance, keeping the kagome lattice of , but substituting Ni with Mn atoms, Zhao et al. <cit.> verified the quantum anomalous Hall (QAH) state in (MnC_4S_4)_3. Here, the appearance of a ferromagnetic phase, mediated by the unpaired Mn-3d electrons, breaks the time-reversal symmetry of the original system. Further QAH states have also been predicted in 2D lattices of (i) trans-Au-THTAP, where the ferromagnetism arises due to a half-filled flat band <cit.>; and (ii) triphenyl-manganese (MnC_4H_5)_3 <cit.>, where ferromagnetically coupled Mn atoms are connected by benzene rings forming a honeycomb lattice. By keeping the same honeycomb structure of the benzene host, and substituting Mn with Pb atoms (triphenyl-manganese→triphenyl-lead), a non-magnetic ground state has been predicted, in which the spin-orbit coupling (SOC) promotes the QSH phase in (PbC_4H_5)_3 <cit.>. Further
Furtherinvestigations <cit.> pointed out that, mediated by an externalelectric field,the (PbC_4H_5)_3 lattice presents anenergetically stable ferrimagnetic QAH phase. Meanwhile, the recentlysynthesizedNi_3(C_18H_12N_6)_2 MOF <cit.> canbe considered as theexperimental realization of the so called topologicalZ_2-metallic phase <cit.> in MOFs. Itis characterized by a kagome lattice, witha global energy gap at the edge of the Brillouin zone (K point), whereas theenergy dispersion of the flat (kagome band) along the Γ–K directiongives rise to a local gap at the Γ point <cit.>.The design of 2D systems based on the MOFsis not limited by themetal↔organic-host combinations. Based on the recent concept ofvan der Waals (vdW) heterostructures <cit.>,we may access aset of new/interesting electronic properties by stacking 2D MOFs, as we havetestified in inorganic layered materials <cit.>. Currently we are facing a suitablesynergybetween theexperimentalworks addressing the successful synthesis of stacked 2DMOFs <cit.>, andtheoreticalstudies aiming the understanding of their physical properties; and propose thedesign of new atomic structures <cit.>focusing on a set of desired electronic properties. For instance, the control ofthe topological states in stacked MOFs.In this paper we investigatethe energetic stability and the electronicproperties of (M=Ni and Pt)bilayers, -BLs. The present study wascarried out througha combination of calculations andTB model. Theenergetic stability of the -BLs is ruled by vdW interactions; where (i) theelectronic band structure of the most likely BL configuration (AA stacking) ischaracterized by a combination of bonding and anti-bondingkagome band sets(KBSs).The non-trivial nature of theenergy gaps, induced by the SOC,wasverified through the calculation of the edge states in -BL nanoribbons(NRs). (ii) Turning on an external electric field normal to the BL, we find thatthe electronic contributions from each ML are no longer symmetric; giving risetoan interlayer separation between the bonding and anti-bonding KBSs. By mapping the localization of the edge states,we findthat theyfollow thesame spacial separation pattern, showing that the (layer)localization of thetopologically protected edge states in -BL NRs canbe tuned by the externalelectric field. Based upon the calculations and a phenomenological model,we can infer that (i) and (ii), described above, will also take place in othervdW metal-organic BLs charaterized by a superposition of kagome bands. § METHODThe calculations were performed based on the DFT approach, as implementedin the VASP code<cit.>. The exchange correlation term was described usingthe GGA functional proposed by Perdew, Burke and Ernzerhof (PBE)<cit.>. TheKohn-Sham orbitals are expanded in a plane wave basis set with an energy cutoffof 400 eV. The 2D Brillouin Zone (BZ) is sampled according to the Monkhorst-Packmethod<cit.>, using a gamma-centered 4×4×1 meshfor atomic structure relaxation and 6×6×1 mesh to obtain theself-consistent total charge density. The electron-ion interactions are takeninto account using the Projector Augmented Wave (PAW) method<cit.>. All geometries have been relaxed until atomic forceswere lower than 0.025 eV/Å. The metal-organic frameworkmonolayersystem is simulated considering a vacuum region in the direction perpendicularto the layers of at least 16 Å. For MOF bilayers the van der Waalsinteraction (vdW-DF2<cit.>)was considered to correctly describe thesystem. 
In this bilayer system the vacuum region is increased to at least24 Å to avoid periodic images interaction. The real-space tight-binding (TB) Hamiltonian of kagome-hexagonallattice <cit.> in the presence of intrinsic spin-orbit coupling can be written as H_TB = H_0 + H_SOwhere each term is given byH_0 = t_1 ∑_⟨ i j⟩; α c_i α^† c_j α+ t_2 ∑_⟨⟨ i j ⟩⟩; α c_i α^†c_j α ;H_SO = i λ_1∑_⟨ i j ⟩ c_i^†σ· (d_kj×d_ik) c_j + i λ_2∑_⟨⟨ i j ⟩⟩ c_i^†σ·(d_kj×d_ik) c_j ; Here, c_i α^† and c_i α are thecreation and annihilation operators for an electron with spin α on sitei; σ are the spin Pauli matrices.As depicted in Figs. <ref>(d) and (e), d_ik andd_kj are the vectors connecting thei-th and j-th sites to the k-th nearest-neighbor in common;t_i and λ_i are the strength of hopping and spin-orbit terms. The ⟨ i j ⟩ and ⟨⟨ i j ⟩⟩ refer to sums over nearest-neighbor and next-nearest-neighbor, respectively. SeeSec. I of Supplemental Material (SM) <cit.>for more details.§ RESULTS AND DISCUSSIONS§.§ MonolayerThe metal organic frameworkof monolayer, -ML, M = Ni and Pt,presents a hexagonal atomic structure [Fig. <ref>(a)], which can beviewed as a kagome lattice [Fig. <ref>(b)], where each site is occupiedby a (MC_4S_4) molecule [Fig. <ref>(c)]. At the equilibrium geometry,we found that presents a lattice parameter (a) of 14.70 Å, whichisin good agreement with recent experimentalmeasurements <cit.>, andDFTresults <cit.>. Forwe obtained a = 15.06 Å, as the Ptcovalent radius is greater than Ni, which is also in agreement with recentresults <cit.>.The electronic band structures of both MOFsexhibit the typical kagome energy bands above the Fermi level (E_ F),within E_ F < E < E_ F + 0.8 eV. These aregraphene-like energy bands, with a Dirac cone at the K point,degeneratedwith a nearly flat band at the Γ point, as shown inFigs. <ref>(a1) and (b1). Such degeneracies are removedby the SOC. In -ML we find non-trivial global energy gapsof 4 meV (indirect)and 14 meV (direct at the K point), and a local gap of17 meV, between c2 and c3 at the Γ point, Fig. <ref>(a2).Those (SOC induced) energy gaps are larger in -ML, i.e. 72 meV atΓ and 60 meV at K, as shown in Fig. <ref>(b2). Dueto the energy dispersion of c3, the former is not a global gap. The largerenergy dispersion of c3can be attributed the next-nearest-neighborinteractions among the Pt atoms <cit.>. The electronicbandstructures projected to the atomic orbitals, Figs. <ref>(a3) and (b3),reveal that the kagome band set of both systems are formed by thehybridization of C and S p_z orbitals of the organic host, with the metald_xz and d_yz orbitals. As will be discussed below, such a hybridizationpictureis quite relevant for the electronic properties of the bilayer systems. The energy gapsinduced by the SOC between c1 and c2 at K, and between c2and c3 at Γ [Figs. <ref>(a2) and (b2)] characterize the QSHphase of, and . The topological phase of is wellknown <cit.>. Here, based on the evolution of the Wannier ChargeCenters (WCC), we found Z_2 = 1 for both MOFs (Details in theSM <cit.>). However, due to the energy dispersion of c3 in, the energy gap c2-c3at the Γ point is a local gap, giving riseto the so calledZ_2-metallic state <cit.>. Further verificationof the QSH phase can be done by mapping theedge states of - and -ML.Based on the TB approach,we calculated the energy bands of - and -MLNRs. As depicted in Figs. <ref>(a4) and (b4), the formation of chiralspin-polarized edge states, degenerated at the TRIM,confirms the non-trivialtopological phases of the and MLs. 
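To make the kagome band structure discussed above concrete, a minimal numerical sketch of the nearest-neighbour limit of the tight-binding model (t_2 = λ_2 = 0) is given below. The hopping and SOC strengths are arbitrary illustrative values rather than the fitted parameters, and the sign pattern in the SOC block follows from the triangle geometry of the kagome lattice; the script prints the SOC-induced gaps at K (c1-c2) and Γ (c2-c3) and the width of the nearly flat top band.

import numpy as np

# Nearest-neighbour kagome model with intrinsic SOC (t2 = lambda2 = 0), one spin sector.
a1 = np.array([1.0, 0.0])                      # lattice vectors, units of the lattice parameter
a2 = np.array([0.5, np.sqrt(3.0) / 2.0])
d = [a1 / 2.0, a2 / 2.0, (a2 - a1) / 2.0]      # vectors connecting nearest-neighbour sites

def h_k(k, t1=1.0, lam1=0.05, spin=+1):
    c = np.array([np.cos(k @ di) for di in d])
    h0 = -2.0 * t1 * np.array([[0.0, c[0], c[1]],
                               [c[0], 0.0, c[2]],
                               [c[1], c[2], 0.0]], dtype=complex)
    # chirality signs nu_ij = sgn[(d_kj x d_ik)_z], worked out from the kagome triangles
    hso = 2.0j * lam1 * spin * np.array([[0.0,   c[0], -c[1]],
                                         [-c[0], 0.0,   c[2]],
                                         [ c[1], -c[2],  0.0]], dtype=complex)
    return h0 + hso

G = np.array([0.0, 0.0])                       # high-symmetry points of the hexagonal BZ
K = np.array([4.0 * np.pi / 3.0, 0.0])
M = np.array([np.pi, np.pi / np.sqrt(3.0)])

def segment(p0, p1, n=200):
    return np.array([np.linalg.eigvalsh(h_k(p0 + (p1 - p0) * s)) for s in np.linspace(0.0, 1.0, n)])

bands = np.vstack([segment(G, K), segment(K, M), segment(M, G)])
eK, eG = np.linalg.eigvalsh(h_k(K)), np.linalg.eigvalsh(h_k(G))
print("SOC gap at K (c1-c2):", eK[1] - eK[0])
print("SOC gap at Gamma (c2-c3):", eG[2] - eG[1])
print("width of the nearly flat top band:", bands[:, 2].max() - bands[:, 2].min())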
We have examined the formation ofedge states forother edge geometries as detailed in <cit.>. §.§ Bilayer In this section, based on calculations, firstly weinvestigate the energetic stability, and the electronic properties of the BL systems; and next by combining calculations and thephenomenological model described below,we provide a comprehensive understanding of the interlayer-electronic tuning processes mediated by anexternal electric field and interlayer separation.The energetic stability of -BL was examined by considering a set ofdifferent/interface geometries, aligning sites X and Y [for X, Y= A, B, G, H, and M, as indicated in Fig. <ref>(a)], i.e. the X site ofone layer above the Y site of the other. In Table <ref> we show theaveraged interlayer equilibrium distance (d_0), the root-mean-square deviation(⟨δ z|$⟩) of the atomic position perpendicularly to thesheet, and the BL binding energy (E^b). Here, we defineE^bas,E^b =2E^(ML) - E^(BL), whereE^(ML)is the total energy of anisolated monolayer, andE^(BL)is the total energy of the -BL for agivenstaking configuration. We found that the AA staking is the most stableone, withE^bof 9.99 and 8.46 meV/Å^2(69 and 62 meV/atom) for -BLand -BL, respectively. Followed by the AG stacking by 0.70 and0.36 meV/Å^2(4.8 and 2.6 meV/atom). The energetic stability of those-BLs isruled by vdW interactions.It is worth noting thatthe binding strength of the -BL is larger compared with other energeticallystable 2D-vdW systems like graphene <cit.> andboron-nitride <cit.>bilayers. There are no chemicalbonds at the (MS_4C_4)_3/(MS_4C_4)_3interface region, where wefoundd_0of 3.64 and 3.66 Åfor and BLs, and⟨δz|=⟩ 0.01Å, thus indicating that the corrugations of the sheetsarenegligible in the AA stacking. In contrast, the other stackinggeometriespresent⟨δz|$⟩ between 0.1 and 0.2 Å. Next we discuss the electronic properties of the energeticallymost stableandBLs. Initially, we will examinethe electronic bandstructurewithout the SOC. The electronic structure of the BLs can bedescribed as a combination of anti-bonding(KBS^+) and bonding (KBS^-) kagome band sets, indicated by orangeand greensolid lines inFigs. <ref>(a1) and (b1). The Dirac bands of each KBSs arepreserved, where the KBS^+ and KBS^- are separated (in energy) by Δ;giving rise to one Dirac point at about E_ F+0.6 eVand anotherlying on the Fermi level. Here,Δ provide a measureof the interlayercoupling between the MLs <cit.>.Further projected energy bands [Figs. <ref>(a2) and (b2)]show that(i)each layer exhibits the same electronic contribution on the KBS^+ andKBS^-, where(ii) the energy bands are composed by d_xz andd_yz orbitals of thetransition metals (Ni and Pt) hybridized with C and Sp_z orbitals of the organic host. The SOC yields energy gaps at the Dirac points (E^D_ g). For instance, in-BL [Fig. <ref>(a3)] we find aenergy gap of 18 meV in KBS^- (E_ g^ D-). This is a local energygap, due to the presence of partially occupied metallic bands near theΓ point. The SOC also induces energy gaps at the Γ point. As shownin Fig. <ref>(a3), we find a small local gap of 4 meV in the KBS^+ (E_ g^Γ+) near the Fermi level, and another local gap of 25 meVat E_ F+0.2 eV in the KBS^- (E_ g^Γ-). 
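The bonding/anti-bonding splitting Δ quoted above, and the field-induced layer polarization analysed in the following paragraphs, can be illustrated with a single-band, two-level sketch of the layer space. This is an assumed simplification of the full model introduced below, with no interface screening included, so the numbers are indicative only.

import numpy as np

def layer_split(delta, eps):
    # 2x2 layer block [[-eps, delta/2], [delta/2, +eps]]; the common band energy E_k
    # only shifts both eigenvalues and is dropped.
    h = np.array([[-eps, 0.5 * delta], [0.5 * delta, +eps]])
    e, v = np.linalg.eigh(h)                   # ascending: bonding (-), then anti-bonding (+)
    return e, np.abs(v[0, :]) ** 2             # layer-1 weights of the two states

delta = 0.63                                   # zero-field interlayer splitting (eV) quoted above
d_layers = 3.6                                 # interlayer distance (Angstrom)
for E_ext in (0.0, 0.05, 0.10, 0.20):          # field in V/Angstrom
    eps = 0.5 * d_layers * E_ext               # unscreened potential difference, eps = (d/2) E_ext
    e, w1 = layer_split(delta, eps)
    print(f"E_ext = {E_ext:.2f}: KBS+/- separation = {e[1] - e[0]:.2f} eV, "
          f"layer-1 weight of the bonding state = {w1[0]:.2f}")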
In contrast, BL presents a global gap of 22 meV at the Fermi level (E_g^Γ+), followed by E_ g^ D- of 60 meV, and a local gap of50 meV at the Γ point (E_ g^Γ-) [Fig.<ref>(b3)].As will be discussed below, thoseenergy gaps induced by the SOC will dictatethe formation of topologically protected edge states in the -BLs.To model the DFT results presented above, we propose a phenomenological Hamiltonian to describe the interaction between layers. Assuming the mirror symmetry of the AA stacking, the Hamiltonian reads H_s = h_3 × 3(k) ⊗ τ_0 +Δ/2 𝕀_3 × 3⊗τ_x,where, h_3 × 3( k), represents the Hamiltonian of each monolayerseparately, diagonal on the base {|#L;n, k⟩} (n = 1, 2, 3 bands,# = 1, 2 layers), which gives the kagome band dispersions; τ_j(j=0, x, y, z) are the Pauli matrix in the layer space, and Δ /2 thecoupling term between the layers. Inthis model, each layer will interactforming the highest energy (anti-bonding, |+⟩) and the lowest energy(bonding, |-⟩)KBSs,energetically separated by Δ.Inthis case, the Dirac bands at the Fermi level are given by the bondingKBSs, green solid lines in Fig. <ref>(a1) and (b1). The mirrorsymmetry imposesthat |⟨#L|±||⟩^2=1/2, for # = 1, 2. The mirror symmetry can be suppressed uponthe interaction of the -BLs witha solid surface, or due to the presence of an external electricfieldperpendicular to the layer. The latter can be expressed byadding apotential difference between the layers in H_s,H = H_s -ε 𝕀_3 × 3⊗τ_z.Here, the potential difference due to only the external electric field(E^ ext) will beε = (d/2) E^ ext, but the charge rearrangement at the/interface can reduce this potential difference such that,ε = σ E^ ext. Further discussion on the proposed model canbe found in the Supplemental Material <cit.>, Sec. II. Therefore, inthis model the contribution of each layer to an given state is E^ extdependent. Initially, the effect of external electric fieldwas studied based on the approach.In Figs. <ref>(a1) and (b1) we present theelectronic band structures of the and BLs forE^ ext=0. The mirror symmetry is fulfilled and both layer contributes equally foreach state. The size of red circles isproportional to the layer contributionto eachstate, |⟨# L|n, k||⟩^2. By turning on theexternal electric field (E^ ext≠ 0), there is an unbalance on thecharge density distribution between the MLs,Figs. <ref>(a2)and (b2);followed by an increase on the energy separation between the kagomebands, Δ=0.63→0.94 eV as the electric field module increasefrom 0.0→0.2 eV/Å in-BL. In contrast, such an increase ofΔ, as a function of the external field, is almost negligible in -BL.For the electric field module increasing from 0.0→0.5 eV/Å, the separation between the kagome bands changes by less than 0.03 eV(Δ=0.59→0.61 eV).The dependence of Δwiththe external electric field can be understoodby analyzing the changes on the total charge density (Δρ) as afunction of E^ ext and the interlayer distance d. For a givenvalue of d, we can define Δρ as,Δρ = ρ(E^ ext) - ρ(0),where ρ(E^ ext) and ρ(0) represent the total charge densities oftheBL at E^ ext≠0 and E^ ext=0, respectively. Ourresults of Δρ for the and BLs show that, (i)at theequilibrium geometry (d_0=3.6 Å), there is no charge transfer between theMLs [Fig. <ref>(a3)]; in contrast (ii)a net chargetransfer takes place between the MLs [Fig. 
<ref>(b3)].Such a net charge transfer gives rise to an intrinsic local electric field whichcan be written as, E^ loc = -α E^ ext; reducing the potentialdifference between the MLs, in agreement with the small changes on theenergy separation between the kagome bands, Δ. Byincreasing the interlayer distance, for instance d_0→ d_1=3.9 Å,we found that (i) the electronic interaction between theMLs reduces, aswell as the coupling term Δ. We found Δ=0.41 eV ( calculations) for bothBLs; and (ii)there isa reduction on the netcharge transfer between the MLs due to the external electric field, as depictedin Figs. <ref>(a4) and (b4) forand , respectively. As shown in Figs. <ref>(a2) and (b2), the layer contribution onthe KBSs can be controlled byan external electric field. Here we will considerthe electronic states around the Dirac point nearthe the Fermi level, indicated by (blue) rectangles inFigs. <ref>(a1)-(a2) and <ref>(b1)-(b2). Thecalculated partial charge densities within those rectangles, |⟨1L|-||⟩^2and |⟨2L|-||⟩^2, are shown in Figs. <ref>(c) and (d)for E^ ext from 0 to 0.5 eV/Å. Our results are indicatedby colored circles, and solid lines indicate the onesobtained by usingthephenomenological model. At E^ ext=0 wehave |⟨1L|-||⟩^2 =|⟨2L|-||⟩^2 = 0.5, i. e.both layers present the same electroniccontribution as the mirror symmetry is fulfilled. For lower values ofE^ ext,ε≪Δ /2, the electronic contribution of each layer exhibits a linear behaviour, where thetangent modulus is σ/Δ <cit.>. The separation of thepartial charge densities between the MLs isstrengthened for larger interlayerdistances. For instance, at the equilibrium geometry, d_0=3.6 Å, wefind |⟨1L|-||⟩^2=0.27 and |⟨2L|-||⟩^2=0.73, which corresponds toa charge densityseparation ratio (η),η=|⟨1L|-||⟩^2/|⟨2L|-||⟩^2of 0.37 for E^ ext=0.1 eV/Åin -BL;increasing d to 3.9 Å,the charge density separation increases,η=0.20 for the same value ofE^ ext. On the other hand, the net charge transfers between the MLsresult in ε≪Δ / 2 for a greater range of E^ ext,giving rise to a linear response of the layer contribution, even for E^ext = 0.5 eV/Å, black and red lines in Fig. <ref>(d).Indeed, for d_0=3.6 Å thecharge density separation is very small,wefind |⟨1L|-||⟩^2=0.40 and |⟨2L|-||⟩^2=0.60,η=0.67 forE^ ext = 0.50 eV/Å. On the other hand, increasing the interlayerdistance to d = 3.9 Å, the charge transfer is suppressed[Fig. <ref>(b4)], andwe find η=0.19for E^ext=0.10 eV/Å, which ispractically the result obtained in -BL.It is worth noting that (i) by inverting the E^ extdirection, the layer localization also inverts (1L ↔ 2L), and(ii) in the present scenario thecharge density separation in BLs isruled by the suppression of the mirror symmetry. Here we haveconsidered the suppression of the mirror symmetry through an external electricfield, but the same behavior is expected in other cases, e.g. the presence of asubstrate. In the next section we discuss the bilayers ribbons and the locationof the topologically protected edgestates, by the breaking of the mirrorsymmetry. §.§ Bilayer Nanoribbon In this section we will discuss the edge states of BL nanoribbons,in order to provide a more complete picture of the electronic properties of the-BLs.Here, the electronic band structure of - and -BLs, obtainedthrough calculations,was fitted within the TB approach considering theintralayer and interlayer hoppings, andtwo orbitals (A and B) per site ofthe kagome bilayer-lattice (details in Sec. I of the SM). As shown inFigs. 
<ref>(a) and (b), the energy dispersions obtained through the TB Hamiltonian (red lines) present a reasonably good correspondence with the ones obtained by the calculations approach (blue circles), where the main features of the band structure are well described. Similarly to the monolayers, the bilayers also have a ℤ_2 = 1 topological invariant. However, here ℤ_2 = 1 holds for each of the two orthogonal subspaces. For the mirror-symmetric case, these are the anti-bonding (KBS^+) and bonding (KBS^-) states. For a finite E^ext it is still possible to define two orthogonal sets [see Supplemental Material <cit.>], which are similar to the KBS^± states. Consequently, in the following we find two sets of edge states, one for each orthogonal subspace. In order to identify these topologically protected edge states, we have considered nanoribbon widths (W) of ∼ 51 and ∼ 52 nm for the two BLs, Figs. <ref>(c) and (d). | http://arxiv.org/abs/1705.09345v2 | {
"authors": [
"F. Crasto de Lima",
"Gerson J. Ferreira",
"R. H. Miwa"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20170525200851",
"title": "Tuning the topological states in metal-organic bilayers"
} |
1Cornell University, SpaceSciences Building, Ithaca, NY 14853, USA 2European Southern Observatory, Karl-Schwarzschild-Straße 2, D-85748 Garching, Germany 3Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ, UK 4Instituto de Astrofisica de Canarias, E-38200 La Laguna, Tenerife, Spain 5Departamento de Astrofisica, Universidad de La Laguna, E-38205 La Laguna, Tenerife, Spain 6Astrophysics Group, Imperial College London, Blackett Laboratory, Prince Consort Road, London SW7 2AZ, UK 7Department of Physics and Astronomy, University of California, Irvine, CA 92697, USA 8Astronomy Centre, Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK 9Department of Physics, University of Oxford, Keble Road, Oxford OX1 3RH, UK 10Space Science and Technology Department, Rutherford Appleton Laboratory, Chilton, Didcot, Oxfordshire OX11 0QX, UK 11Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1, Canada 12Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany [email protected] report the detection of 27, a dusty, starbursting major merger at a redshift of z=5.655, using the Atacama Large Millimeter/submillimeter Array (ALMA). 27 was selected from Herschel/SPIRE and APEX/LABOCA data as an extremely red “870 μm riser” (i.e., S_250μ m<S_350μ m<S_500μ m<S_870μ m), demonstrating the utility of this technique to identify some of the highest-redshift dusty galaxies. A scan of the 3 mm atmospheric window with ALMA yields detections ofandemission, and a tentative detection ofemission, which provides an unambiguous redshift measurement. The strength of the CO lines implies a large molecular gas reservoir with a mass of M_ gas=2.5×10^11 (α_ CO/0.8) (0.39/r_51) , sufficient to maintain its ∼2400yr^-1 starburst for at least ∼100 Myr. The 870 μm dust continuum emission is resolved into two components, 1.8 and 2.1 kpc in diameter, separated by 9.0 kpc, with comparable dust luminosities, suggesting an ongoing major merger. The infrared luminosity of L_ IR≃2.4×10^13implies that this system represents a binary hyper-luminous infrared galaxy, the most distant of its kind presently known. This also implies star formation rate surface densities of Σ_ SFR=730 and 750yr^-1 kpc^2, consistent with a binary “maximum starburst”. The discovery of this rare system is consistent with a significantly higher space density than previously thought for the most luminous dusty starbursts within the first billion years of cosmic time, easing tensions regarding the space densities of z∼6 quasars and massive quiescent galaxies at z≳3. § INTRODUCTIONDetailed studies of dusty star-forming galaxies (DSFGs) at high redshift selected at (sub-)millimeter wavelengths (submillimeter galaxies, or SMGs) over the past two decades have shown them to be a key ingredient in our understanding of the early formation of massive galaxies (seefor reviews). The brightest, “hyper-luminous” DSFGs (hyper-luminous infrared galaxies, or HyLIRGs) represent some of the most luminous, massive galaxies in the early universe, reaching infrared luminosities of L_ IR>10^13 , and star formation rates in excess of 1000yr^-1, emerging from compact regions only few kiloparsec in diameter <cit.>. 
While the general DSFG population is thought to be somewhat heterogeneous <cit.>, these HyLIRGs are likely major mergers of gas-rich galaxies <cit.>, and they may also be associated with protoclusters of galaxies, which represent some of the most overdense environments in the early universe <cit.>.Due to their high dust content, it is common that most of the stellar light in DSFGs is subject to dust extinction, rendering their identification out to the highest redshifts notoriously difficult. While many DSFGs were found at z=2–3.5 relatively early on (e.g., ), more than a decade passed between the initial discovery of this galaxy population and the identification of the first examples at z>4 <cit.> and z>5 <cit.>.Once the Herschel Space Observatory was launched, it became possible to develop color selection techniques to systematically search for the most distant DSFGs in large-area surveys like the Herschel Multi-tiered Extragalactic Survey <cit.>.Since the peak of the far-infrared (FIR) spectral energy distribution (SED) shifts through the 250, 350 and 500 μm bands probed by Herschel's Spectral and Photometric Imaging Receiver (SPIRE), the most distant sources typically appear “red” between these bands, i.e., S_250μ m<S_350μ m<S_500μ m, with steeper (“ultra-red”) color criteria resulting in the selection of potentially more distant sources <cit.>. Based on FIR photometric redshift estimates, the median redshifts of these sources have been suggested to be ⟨z⟩3.7 to 4.7, where different redshift values are obtained for different samples due to the exact color cutoffs, flux density limits, and redshift fitting techniques chosen (e.g., ; see also ). Spectroscopic confirmation of a subsample of 25 sources based on CO rotational lines, an indicator of the molecular gas that fuels the intense star formation in these systems (seefor a review), has verified the higher median redshifts compared to general DSFG samples (e.g., ; D. Riechers et al., in prep.; Fudamoto et al., in prep.). These studies find redshifts as high as z6.34 (). In an alternative approach, surveys with the South Pole Telescope (SPT) have revealed a sample of gravitationally-lensed DSFGs selected at 1.4 and 2 mm with a spectroscopic median redshift of ⟨z⟩3.9 <cit.>. A substantial fraction of this sample would also fulfill Herschel-red sample selection criteria.With this paper, we aim to extend the Herschel-red and ultra-red criteria through the identification of “extremely red” DSFGs with S_250μ m<S_350μ m<S_500μ m<S_870μ m. Such “870 μm riser” galaxies should, in principle, lie at even higher redshifts than the bulk of the red DSFG population. We here present detailed follow-up observations of the first such source we have identified in the Herschel HerMES data, 2HERMES S250 SF J043657.7–543810 (hereafter: 27).We use a concordance, flat ΛCDM cosmology throughout, with H_069.6Mpc^-1, Ω_ M0.286, and Ω_Λ0.714.§ DATA §.§ Herschel/PACS+SPIRE 27 was observed with the Herschel Space Observatory as part of HerMES, covering 7.47 deg^2 in the Akari Deep Field South (ADFS). The field was observed for 18.1 hr with the PACS and SPIRE instruments in parallel mode, resulting in nominal instrumental noise levels of 49.9, 95.1, 25.8, 21.2, and 30.8 mJy (5σ rms) at 110, 160, 250, 350, and 500 μm, respectively.[Quoted sensitivities are single-pixel rms values, which are worse than the flux uncertainties of point sources achieved after employing matched filtering techniques (e.g., ).] The flux scale is accurate to ∼5%. 
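For reference, the adopted cosmology translates the measured redshift into the distances and scales used implicitly throughout. The short astropy-based sketch below is illustrative and not part of the original analysis; it simply evaluates the quoted parameters at z = 5.655.

import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=69.6 * u.km / u.s / u.Mpc, Om0=0.286)   # cosmology quoted above
z = 5.655

print(f"luminosity distance : {cosmo.luminosity_distance(z):.0f}")
print(f"proper scale        : {cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec):.2f} per arcsec")
print(f"age of the Universe : {cosmo.age(z).to(u.Myr):.0f}")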
27 was detected at 250, 350, and 500 μm, but not shortwards. Flux densities were extracted using Starfinder and SussExtractor, and from the band-merged xID250 catalog published as part of HerMES DR4. This yields S_ 250μ m=(14.3±2.3), (13.0±2.6), and (14.3±2.3) mJy, S_ 350μ m(20.3±2.4), (18.5±2.5), and (19.1±2.3) mJy, and S_ 500μ m(22.0±2.6), (22.2±2.9), and (24.0±2.7) mJy, respectively. These uncertainties do not include the contribution due to source confusion, which typically dominates. We however note that the source is relatively isolated in the SPIRE maps (Fig. <ref>). xID250-based flux densities are adopted in the following (Table <ref>).From these data, 27 was selected as a “red source” (i.e., S_250μ m<S_350μ m<S_500μ m) for further follow-up observations. §.§ APEX/LABOCA We observed 27 at 870 μm with the Large APEX bolometer camera (LABOCA) mounted on the 12 m Atacama Pathfinder EXperiment (APEX) telescope. Observations were carried out on 2012 September 17 as part of program M-090.F-0025-2012, resulting in 3.4 hr on source time. Individual scans had a length of ∼7 min, resulting in a map that fully samples the ∼11 arcmin diameter field-of-view of LABOCA. Pointing was checked on nearby quasars every hour, and was stable to within ∼3” rms. The effective FWHM beam size, as measured on the pointing source J2258–280, was 19.2”. Precipitable water vapor columns varied between 0.4 and 1.3 mm, corresponding to zenith atmospheric opacities of 0.2–0.4 in the LABOCA passband. This resulted in an rms noise level of 1.8 mJy beam^-1 at the position of 27 (3.7 mJy beam^-1 map average) in a map smoothed to 27” resolution. The flux density scale was determined through observations of Uranus and Neptune, yielding an accuracy of ∼7%. Data reduction was performed with the BoA package, applying standard calibration techniques. These observations were used to select 27 as an “extremely red” source with S_250μ m<S_350μ m<S_500μ m<S_870μ m (Fig. <ref>; Table <ref>).§.§ ALMA 870 μm We observed 870 μm continuum emission toward 27 using ALMA (project ID: 2013.1.00001.S; PI: Ivison). Observations were carried out on 2015 August 31 with 33 usable 12 m antennas under good weather conditions in an extended array configuration (baseline range: 15–1466 m). This resulted in 5.1 min of usable on source time, centered on the Herschel/SPIRE 500 μm position. The nearby quasar J0425–5331 was observed regularly for pointing, amplitude and phase calibration, while J0538–4405 was observed for bandpass calibration, and J0519–4546 was used for absolute flux calibration, leading to <10% calibration uncertainty. The correlator was set up with two spectral windows of 1.875 GHz bandwidth (dual polarization) each per sideband, centered at a local oscillator frequency of 343.463325 GHz, with a frequency gap of 8 GHz between the sidebands.Data reduction was performed using version 4.7.1 of the Common Astronomy Software Applications (casa) package. Data were mapped using the CLEAN algorithm with “natural” and robust 0.5 weighting, resulting in synthesized beam sizes of 020×017 and 017×014 at rms noise values of 99 and 108 μJy beam^-1 in the phase center over the entire 7.5 GHz bandwidth, respectively. Due to its distance from the phase center, the noise is increased by a primary beam attenuation factor of 1.62 at the position of 27.§.§ ALMA 3 mm We scanned the 84.077033–113.280277 GHz frequency range to search for spectral lines toward 27 using ALMA (project ID: 2016.1.00613.S; PI: Riechers). 
Observations were carried out under good weather conditions during six runs between 2017 January 5 and 9 with 40–47 usable 12 m antennas in a compact array configuration (baseline range: 15–460 m). We used five spectral setups, resulting in a total on source time of 45.7 min (7.8–14.1 min per setup), centered on the ALMA 870 μm position. The nearby quasar J0425–5331 was observed regularly for pointing, amplitude and phase calibration. J0519–4546 was used for bandpass and absolute flux calibration, leading to <10% calibration uncertainty.The correlator was set up with two spectral windows of 1.875 GHz bandwidth (dual polarization) each per sideband, at a sideband separation of 8 GHz. Full frequency coverage was attained by shifting setups in frequency by ∼3.75 GHz, such that subsequent settings filled in part of the IF gap in the first spectral setup. This allowed us to cover the full range of ∼29.21 GHz without significant gaps in frequency, but resulted in some frequency overlap near 97.5 GHz (see Fig. <ref> for effective exposure times across the full band).Data reduction was performed using version 4.7.1 of the casa package. Data were mapped using the CLEAN algorithm with “natural” and robust 0.5 weighting, resulting in synthesized beam sizes of 313×236 and 248×186 at rms noise values of 11.2 and 13.6 μJy beam^-1 in the phase center over a line-free bandwidth of 27.40 GHz after averaging all spectral setups, respectively. Spectral line cubes mapped with “natural” weighting at 86.6, 103.9, and 113.0 GHz yield beam sizes of 368×272, 305×226, and 283×217 at rms noise levels of 352, 509, and 297 μJy beam^-1 per 19.55, 19.55, and 58.65 MHz bin, respectively. Imaging the same data at 103.9 GHz with robust –2 (“uniform”) weighting yields a beam size of 211×156 at ∼1.9 times higher rms noise.§.§ Spitzer/IRAC 27 was covered with Spitzer/IRAC at 3.6 and 4.5 μm between 2011 November 17–21 (program ID: 80039; PI: Scarlata) and targeted for deeper observations on 2015 May 24 (program ID: 11107; PI: Perez-Fournon). Data reduction was performed using the MOPEX package using standard procedures. Absolute astrometry was obtained relative to Gaia DR1, yielding rms accuracies of 0.04” and 0.06” in the 3.6 and 4.5 μm bands, respectively. Photometry was obtained with the SExtractor package, after de-blending from two foreground objects and sky removal using GALFIT.§.§ VISTA and WISE The position of 27 was covered by the VISTA Hemisphere Survey (VHS) DR4 on 2010 November 19 and by the Wide-field Infrared Survey Explorer (WISE) as part of the allWISE survey between 2010 January 19 and 2011 January 30. 27 is not detected in the VHS 1.25, 1.65, and 2.15 μm (J/H/K_ s) bands. It is strongly blended with a nearby star (m_ gaia=18.20) in the 3.4 and 4.6 μm (W1 and W2) bands, such that no useful limit can be obtained. It also remains undetected in the 12 and 22 μm (W3 and W4) bands. § RESULTS§.§ Continuum Emission We detect strong continuum emission at 3 mm and 870 μm at peak significances of ∼39 and 28σ toward 27, yielding flux densities of (0.512±0.023) and (28.1±0.9) mJy, respectively (Figs. <ref>, bottom right and <ref>, respectively). 
The emission is marginally resolved at 3 mm, and it breaks up into two components of similar strength separated by 1.49” in the high-resolution 870 μm data, with flux densities of (15.70±0.76) and (12.43±0.56) mJy for the northern and southern components (hereafter: , or “mal” 말, the horse, and , or “yong” 용, the dragon), respectively.[Extracted from a map tapered to 0.8” resolution.] The two components thus contain the full single-dish 870 μm flux. Both components are spatially resolved. Two-dimensional Gaussian fitting yields deconvolved sizes of (0.303±0.030)×(0.213±0.027) and (0.341±0.031)×(0.146±0.025) arcsec^2 forand S, respectively. After removal of a bright foreground star, some faint residual emission is seen at 3.6 and 4.5 μm near the position of 27 and consistent with the expected flux levels (Fig. <ref>), but higher resolution observations would be required to confirm its mid-infrared detection (Fig. <ref>; Tab. <ref>). Given the lack of a candidate lensing galaxy at short wavelengths or arc-like structure in the high-resolution ALMA data, there presently is no evidence for strong gravitational lensing (i.e., flux magnification factors μ_ L≥2), but detailed imaging with the Hubble Space Telescope would be required to further investigate the possibility of strong or weak lensing.l c l 27 continuum photometry Wavelength Flux densitya Telescope(μm) (mJy)1.25 <0.015VISTA/VHS 1.65 <0.022VISTA/VHS 2.15 <0.020VISTA/VHS 3.6b (2.33±0.74)×10^-3 Spitzer/IRAC 4.5b (4.20±0.82)×10^-3 Spitzer/IRAC 12 <0.6WISE 22 <3.6WISE 110<30 Herschel/PACS 160<57 Herschel/PACS 250c,d 14.3±2.3Herschel/SPIRE 350c,d 19.1±2.3Herschel/SPIRE 500c,d 24.0±2.7Herschel/SPIRE 870c 25.4±1.8APEX/LABOCA 87028.1±0.9ALMA 3000 0.512±0.023 ALMA (scan)aLimits are 3σ. bPossibly contaminated by foreground sources, and hence, considered as upper limits only in the SED fitting. cUsed for initial color/photometric redshift selection. dUncertainties do not account for confusion noise, which is 5.9, 6.3, and 6.8 mJy (1σ) at 250, 350, and 500 μm, respectively <cit.>.§.§ Line Emission A search of the 3 mm spectral sweep reveals two strong features near 86.6 and 103.9 GHz detected at ∼19 and 12σ significance, respectively. Together with a third, tentative feature near 113.0 GHz recovered at 2.3σ significance, we obtain a unique (median) redshift solution at z=5.6550±0.0001, identifying the features as , , andemission (Fig. <ref>, top).[No spectral lines are detected in the ALMA 870 μm data.] Theline recovery is marginal at best and near the edge of the spectral range. Thus, an independent confirmation of this feature is required. The line emission is marginally resolved on the longest baselines and elongated along the axis that separates the two continuum source components, and thus, is consistent with emerging from both sources (Fig. <ref>). From Gaussian fitting to the line profiles, we obtain peak flux densities of S_ line=(3.89±0.28), (3.75±0.43), and (1.55±0.37) mJy at FWHM linewidths of dv=(651±59), (710±103), and (503±163) , respectively.[The CO line redshifts agree within <1σ, where 1σ=25 and 43for the CO J=5→4 and 6→5 lines, respectively. The fit of theline indicates a blueshift by -(237±64)with respect to theline, which we consider to be due to limited signal-to-noise ratio. Another possible explanation is that the H_2O emission may preferentially emerge from one of the components of 27, assuming a small centroid velocity shift between both components. 
Fixing the line centroid to that of theline yields S_ line=(0.99±0.31) mJy and dv=(915±380) , i.e., a ∼15% higher line flux. This difference is not significant.]This implies integrated line fluxes of (2.68±0.20), (2.82±0.34), and (0.83±0.22) Jyand line luminosities of L'_ CO=(11.96±0.92) and (8.73±1.07) and L'_ H_2O=(2.17±0.58)×10^10 , respectively (Table <ref>). This yields a / line brightness temperature ratio of r_65=0.73±0.10, which is consistent with the average value for SMGs within the uncertainties (r_65=0.66; ), but significantly lower than that found in the z=5.3 SMG AzTEC-3 (r_65=1.03±0.16; ). Thus, assuming the average / line brightness temperature ratio for SMGs of r_51=0.39 <cit.>, we find aluminosity of L'_ CO(1-0)=3.1×10^11 , i.e., ∼50× that of Arp 220 (e.g., ).[Assuming the r_51=0.56 value of AzTEC-3 instead would yield L'_ CO(1-0)=2.1×10^11<cit.>.] We also find a / ratio of r_ WC=0.25±0.14, which is ∼2.5× lower than in Arp 220 and the z=6.34 starburst HFLS3 (), and ∼1.5× lower than in the z∼3.5 strongly-lensed starbursts G09v1.97 and NCv1.143 (; D. A. Riechers et al., in preparation). This is consistent with a moderate interstellar medium excitation for a starburst system. § ANALYSIS AND DISCUSSION§.§ Spectral Energy Distribution Properties To determine the overall spectral energy distribution properties of 27, we have fit modified black-body (MBB) models to the continuum data between 1.25 μm and 3 mm (Fig. <ref>).[Confusion noise and flux scale uncertainties were added in quadrature where appropriate.] We adopt the method described by <cit.> and <cit.>, using an affine-invariant Markov Chain Monte-Carlo (MCMC) approach, and joining the MBB to a ν^α power law on the blue side of the SED peak. We fit optically-thin models, with the power-law slope α, the dust temperature T_ dust, and the spectral slope of the dust emissivity β_ IR as fitting parameters, using the observed-frame 500 μm flux density as a normalization factor. We also fit “general” models that allow for wavelength-dependent changes in optical depth, adding the wavelength λ_0=c/ν_0 where the optical depth τ_ν=(ν/ν_0)^β_ IR reaches unity as an additional fitting parameter.The optically-thin fitting procedure yields statistical mean values of T_ dust=59.9^+42.7_-33.4 K, β_ IR=2.3^+0.6_-1.1, and α=6.2^+5.0_-3.9.[α is only poorly constrained by the data.]The general fit yields mean values of λ_0=195^+39_-41 μm, T_ dust=55.3^+7.8_-7.6 K, β_ IR=3.0^+0.5_-0.5, and α=9.8^+6.7_-6.1.The fit also implies rest-frame infrared (8–1000 μm) and far-infrared (42.5–122.5 μm) luminosities of L_ IR=2.42^+0.48_-0.47×10^13and L_ FIR=1.64^+0.27_-0.27×10^13 , respectively.[The measured L_ IR agrees to within ∼2% with independent estimates based on integrating a normalized MAGPHYS-based SED template based on the z=6.34 starburst HFLS3 (), showing that the adopted power-law approximation of the short wavelength emission has a minor impact on the measured quantities.] Assuming a dust absorption coefficient of κ_ν=2.64 m^2kg^-1 at 125 μm <cit.>, we also find a dust mass of M_ dust=4.4^+2.3_-2.4×10^9 .[Given the limited photometry, the uncertainties may be somewhat under-estimated.]Assuming a <cit.> stellar initial mass function, these parameters suggest a total star formation rate (SFR) of ∼2400yr^-1.Given the limited SED constraints in the rest-frame optical, we obtain an estimate for the stellar mass M_⋆ of 27 by normalizing the MAGPHYS-based SED template of HFLS3 in Fig. <ref> to the observed-frame 4.5 μm limit. 
This yields M_⋆<1.2×10^11 . §.§ Molecular Gas Mass, Gas-to-Dust Ratio, and Gas Depletion Time The L'_ CO(1-0) value of 27 (based on the adopted r_51=0.39) implies a total molecular gas mass of M_ gas=2.5×10^11 (α_ CO/0.8) (0.39/r_51) .[We here adopt a conversion factor of α_ CO=0.8()^-1 for nearby ultra-luminous infrared galaxies and SMGs <cit.>.]Taken at face value, this yields a gas-to-dust ratio of M_ gas/M_ dust≃60, which is comparable to that in the z=6.34 starburst HFLS3 and within the range of values found for nearby infrared-luminous galaxies <cit.>, but ∼4× lower than that for the z=5.30 starburst AzTEC-3 <cit.>. At its current SFR, this implies a gas depletion time of τ_ dep=M_ gas/SFR≃100 Myr, consistent with the general SMG population <cit.>.l c c c Line fluxes and luminositiesin 27. Transition I_ line L'_ line L_ line[Jy ] [10^10 Kpc^2] [10^8 ] 2.68 ± 0.20 11.96 ± 0.92 7.32 ± 0.56 2.82 ± 0.348.73 ± 1.07 9.24 ± 1.13a 0.83 ± 0.222.17 ± 0.58 2.96 ± 0.80 aTentative detection. Independent confirmation is required. Quoted uncertainties are from Gaussian fitting to the line profile near the edge of the spectral range. We consider the true flux uncertainty to be at least ∼45%, consistent with line map-based estimates. §.§ Star Formation Rate and Gas Surface Densities, Gas Dynamics, and Conversion Factor The apparent 870 μm continuum sizes of(mal) and(yong) imply physical sizes of (1.83±0.18)×(1.28±0.16) and (2.05±0.18)×(0.87±0.15) kpc^2 at z=5.655, which are comparable to the ∼2.5 kpc diameters found for other z>4 dusty starbursts like AzTEC-3, HFLS3, and SGP-38326 at similar wavelengths <cit.>. Assuming that their flux ratios at 870 μm are representative at the peak of the SED, this implies L_ IR surface densities of Σ_ IR=7.3 and 7.5×10^12kpc^-2 and SFR surface densities of Σ_ SFR=730 and 750yr^-1 kpc^-2, at SFRs of ∼1350 and 1070yr^-1, respectively, consistent with what is expected for “maximum starbursts” <cit.>. These Σ_ SFR values are comparable to those found in other HyLIRGs at z>4 like AzTEC-3, HFLS3, and SGP-38326 <cit.>, but significantly higher than for the bulk of the DSFG population <cit.>.Assuming a common CO linewidth and using the sizes and flux ratio measured in the 870 μm continuum emission, we can obtain approximate constraints on the dynamical masses M_ dyn of(mal) and(yong) by adopting an isotropic virial estimator <cit.>. We here increase the assumed source radii by a factor of 1.5 to account for the typical difference between the measured Gaussian sizes of gas and dust emission in SMGs, likely caused by decreasing dust optical depth towards the outskirts of the starbursting regions <cit.>. We find M_ dyn^ N=3.25×10^11and M_ dyn^ S=3.66×10^11 . Taken at face value, and conservatively assuming that 100% of the dynamical mass is due to molecular gas (i.e., neglecting the potentially major contributions due to stellar mass and dark matter, and the likely minor contributions due to dust and black hole masses), this implies an upper limit of α_ CO<2.25()^-1, which is consistent with the assumptions made above. This limit drops to α_ CO<1.8()^-1 when including the M_⋆ limit at face value in the estimate. Adopting α_ CO=0.8()^-1 instead suggests gas fractions of f_ gas=M_ gas/M_ dyn=0.41 and 0.32 forand S, respectively. This is comparable to other SMGs <cit.>. Under the same assumptions, we find gas surface densities of Σ_ gas^ N=7.3 and Σ_ gas^ S=8.1×10^10kpc^2. 
These values are at the high end of, but consistent with the spatially-resolved Schmidt-Kennicutt “star formation law” <cit.>, providing some of the first constraints on this relation at z∼6. § CONCLUSIONS We have identified a massive, dust-obscured binary HyLIRG at a redshift of z=5.655, using ALMA. Our target 27 was selected as a “870 μm riser”, fulfilling an FIR color criterion of S_250μ m<S_350μ m<S_500μ m<S_870μ m. Among 25 Herschel-red sources (i.e., “500 μm risers”, fulfilling S_250μ m<S_350μ m<S_500μ m) spectroscopically confirmed to date (e.g., ; Riechers et al., in prep.) and ∼300 photometrically-identified Herschel-red sources (; S. Duivenvoorden et al., in prep.), 27 is the only point source to fulfill this additional criterion, implying that such sources are likely very rare. Of the spectroscopic red sample, all sources are at z<5.5 with the exception of HFLS3 at z=6.34, which however had an additional criterion of 1.3×S_350μ m<S_500μ m applied in its selection <cit.>. 27 is significantly redder than HFLS3 in its 870 μm/500μm color (1.06 vs. 0.70). Of the 39 spectroscopically confirmed, 1.4+2.0 mm-selected sample from the SPT survey, only SPT 0243–49 at z=5.6991 fulfills the “870 μm-riser” criterion <cit.>. While not providing a complete selection of z≫5 DSFGs, this shows the potentially very high median redshifts of such sources, which likely significantly exceeds that of the parent sample of red sources.[SPT 0459–59 at z=4.7993 does not fulfill the selection criterion with the revised LABOCA 870 μm flux of (61±8) mJy found by <cit.>. However, our discussion would remain largely unchanged if we included this source.]The apparent submillimeter fluxes of this source are ∼3× higher than those of 27, but SPT 0243–49 is strongly gravitationally lensed and intrinsically less than half as bright as 27 (having two components of 6.2 and 5.2 mJy at 870 μm; ). It thus is not a binary HyLIRG.The overall properties of the binary HyLIRG 27 are perhaps most similar to lower-redshift sources like SGP-38326 at z=4.425 <cit.>. It likely represents a major merger of two already massive galaxies (>3×10^11each) at z∼6 leading to the formation of an even more massive galaxy, and it contains several billion solar masses of dust that must have formed at even earlier epochs. Its existence is consistent with previous findings of an apparently significantly higher space density of luminous dusty starbursts back to the first billion years of cosmic time than previously thought, which may be comparable to the space density of the most luminous quasars hosting supermassive black holes at the same epochs (e.g., ). While the flux limits achieved by the deepest Herschel SPIRE surveys are perhaps not sufficiently sensitive to account for the bulk of dusty galaxies at z>5, the population uncovered so far could be of key importance for understanding the early formation of some of the most massive quiescent galaxies at z≳3 (e.g., ). Despite its extreme properties, 27 is only barely sufficiently bright and isolated to allow identification in the deep ADF-S SPIRE data. Of the >1000 deg^2 surveyed with SPIRE (e.g., ), only ∼110 deg^2 are sufficiently deep and high quality to identify “extremely red” sources as bright as 27 without the aid of strong gravitational lensing. 
Our results indicate that such sources are rare, with space densities as low as 9×10^-3 deg^-2 if our measurement is representative, but they could remain hidden in larger numbers among strongly-lensed and/or 500 μm “dropout” samples with strong detections longward of 850 μm, identified in large-area surveys with JCMT/SCUBA-2, APEX/LABOCA, ACT and SPT, and future facilities like CCAT-prime. The authors wish to thank the anonymous referee for a helpful and constructive report.The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This paper makes use of the following ALMA data: ADS/JAO.ALMA# 2016.1.00613.S and ADS/JAO.ALMA# 2013.1.00001.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/ NRAO and NAOJ. D.R. acknowledges support from the National Science Foundation under grant number AST-1614213 to Cornell University. T.K.D.L. acknowledges support by the NSF through award SOSPA4-009 from the NRAO. This research makes use of data obtained with Herschel, an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA, through the HerMES project. HerMES is a Herschel Key Program utilising Guaranteed Time from the SPIRE instrument team, ESAC scientists and a mission scientist. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This publication made use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This work is based on observations made with APEX under Program ID: M-090.F-0025-2012, and also based on observations obtained as part of the VISTA Hemisphere Survey, ESO Progam 179.A-2010 (PI: McMahon).ALMA, APEX(LABOCA), Herschel(PACS and SPIRE), Spitzer(IRAC), WISE, ESO:VISTAyahapj | http://arxiv.org/abs/1705.09660v2 | {
"authors": [
"Dominik A. Riechers",
"T. K. Daisy Leung",
"Rob J. Ivison",
"Ismael Perez-Fournon",
"Alexander J. R. Lewis",
"Rui Marques-Chaves",
"Ivan Oteo",
"Dave L. Clements",
"Asantha Cooray",
"Josh Greenslade",
"Paloma Martinez-Navajas",
"Seb Oliver",
"Dimitra Rigopoulou",
"Douglas Scott",
"Axel Weiss"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20170526180000",
"title": "Rise of the Titans: A Dusty, Hyper-Luminous \"870 micron Riser\" Galaxy at z~6"
} |
[email protected] Authors contributed equally to this work. Institute for Infocomm Research [email protected] Institute for Infocomm Research CentraleSupélec [email protected] Institute for Infocomm Research [email protected] Institute for Infocomm Research [email protected] Institute for Infocomm Research [email protected] National Cancer Centre Singapore [email protected] Hunan University [email protected] Chesed Radiology [email protected] Corresponding authors. Institute for Infocomm Research [email protected] Institute for Infocomm Research Nanyang Technological University We present a deep learning framework for computer-aided lung cancer diagnosis. Our multi-stage framework detects nodules in 3D lung CAT scans, determines if each nodule is malignant, and finally assigns a cancer probability based on these results. We discuss the challenges and advantages of our framework. In the Kaggle Data Science Bowl 2017, our framework ranked 41^st out of 1972 teams. <ccs2012> <concept> <concept_id>10010147.10010178.10010224.10010245.10010250</concept_id> <concept_desc>Computing methodologies Object detection</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010178.10010224.10010245.10010251</concept_id> <concept_desc>Computing methodologies Object recognition</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010257.10010293.10010294</concept_id> <concept_desc>Computing methodologies Neural networks</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010405.10010444.10010449</concept_id> <concept_desc>Applied computing Health informatics</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Computing methodologies Object detection [500]Computing methodologies Object recognition [500]Computing methodologies Neural networks [500]Applied computing Health informaticsDeep Learning for Lung Cancer Detection: Tackling the Kaggle Data Science Bowl 2017 Challenge Vijay Chandrasekhar[2] May 24, 2017 ===============================================================================================§ INTRODUCTIONCancer is one of the leading causes of death worldwide, with lung cancer being among the leading cause of cancer related death. In 2012, it was estimated that 1.6 million deaths were caused by lung cancer, while an additional 1.8 million new cases were diagnosed <cit.>.Screening for lung cancer is crucial in the early diagnosis and treatment of patients, with better screening techniques leading to improved patient outcome. The National Lung Screening Trial found that screening with the use of low-dose helical computed tomography (CT) reduced mortality rates by 20% compared to single view radiography in high-risk demographics <cit.>. However, screening for lung cancer is prone to false positives, increasing costs through unnecessary treatment and causing unnecessary stress for patients <cit.>. Computer-aided diagnosis of lung cancer offers increased coverage in early cancer screening and a reduced false positive rate in diagnosis.The Kaggle Data Science Bowl 2017 (KDSB17) challenge was held from January to April 2017 with the goal of creating an automated solution to the problem of lung cancer diagnosis from CT scan images <cit.>. In this work, we present our solution to this challenge, which uses 3D deep convolutional neural networks for automated diagnosis. 
§.§ Related Work Computer-aided diagnosis (CAD) is able to assist doctors in understanding medical images, allowing for cancer diagnosis with greater sensitivity and specificity, which is critical for patients. <cit.> survey CAD pipelines, separating them into preprocessing, feature extraction, selection, and finally classification. They further document the use of logistic regression, decision trees, k-nearest neighbour, and neural networks in existing approaches.<cit.> use an SVM over MRI scan texture features to detect prostate cancer in patients. The winners of the Camelyon16 challenge <cit.>, for example, detect breast cancer from images of lymph nodes.Deep convolutional neural networks (CNN) have proven to perform well in image classification <cit.>, object detection <cit.>, and other visual tasks. They have found great success in medical imaging applications <cit.>, and are for example able to detect skin cancer metastases <cit.>, achieving substantially better sensitivity performance than human pathologists. These methods all operate on two-dimensional images, typically a cross-sectional image of the affected body part.In comparison, CT image scans are three-dimensional volumes and are usually anisotropic. Deep networks have also been shown to perform well in 3D segmentation <cit.>, and have been successfully adapted from 2D to 3D <cit.>. <cit.> have demonstrated a 3D deep learning framework to perform automatic prostate segmentation. <cit.> perform the segmentation and classification of lung cancer nodules separately.The LUNA16 challenge <cit.> had two tasks: detecting pulmonary nodules using CT scans, and reducing the false positive rate from identifying these nodules. The former was solved by <cit.> using UNet <cit.> on stacks of 3 consecutive horizontal lung slices; and the latter was won by <cit.> by applying multi-contextual 3D CNNs.The top two teams in the Kaggle Data Science Bowl 2017 have published their solutions to the challenge <cit.>. Both teams proceed to an intermediate step before giving patients a cancer probability. After having identified regions of possible abnormalities (nodules), the team placing second, <cit.>, uses 17 different 3D CNNs to extract medically relevant features about nodules. Then, they aggregate the predictions of these nodules attributes into a patient-level descriptor. The team placing first <cit.> detects nodules via a 3D CNN, then uses the highest confidence detections as well as manual nodule labelling to predict cancer via a simple classifier. §.§ Key Challenges One key characteristic of lung cancer is the presence of pulmonary nodules, solid clumps of tissue that appear in and around the lungs <cit.>. These nodules are visible in CT scan images and can be malignant (cancerous) in nature, or benign (not cancerous). In cancer screening, radiologists and oncologists examine CT scans of the lung volume to identify nodules and recommend further action: monitoring, blood tests, biopsy, etc. Specifically, lung cancer is screened through the presence of nodules <cit.>.To build a system for computer-aided diagnosis (CAD) of lung cancer, we investigate the following approaches:* A single-stage network that automatically learns cancer features and produces a cancer probability directly from CT scan images of patients.* A multi-stage framework that first localizes nodules, classifies the malignancy of each one, and finally produces a cancer probability of patients. 
In our initial experiments however, the single-stage network, implemented as a 3D CNN, fails to converge across a wide set of hyperparameters, performing only slightly better than random chance.Factoring the problem into multiple stages on the other hand significantly improves convergence. Even when the single-stage network fails to converge in training, each stage of our pipeline can be easily trained to convergence, as illustrated in Figure <ref>. Our framework as detailed in this work therefore focuses on a multi-stage pipeline, focusing on detection and classification of pulmonary nodules. This approach presents the following problems: the shape and size of nodules vary, and benign nodules can look very similar to malignant ones. Furthermore, the presence of blood vessels in the lung makes distinguishing nodules a challenging task, especially on 2D image slices. This makes the task more suitable for 3D CNNs which are better able to identify nodules based on their structure in 3D space. § TECHNICAL APPROACH §.§ Data While the Kaggle Data Science Bowl 2017 (KDSB17) dataset provides CT scan images of patients, as well as their cancer status, it does not provide the locations or sizes of pulmonary nodules within the lung. Therefore, in order to train our multi-stage framework, we utilise an additional dataset, the Lung Nodule Analysis 2016 (LUNA16) dataset, which provides nodule annotations. This presents its own problems however, as this dataset does not contain the cancer status of patients. We thus utilise both datasets to train our framework in two stages.§.§.§ LUNA16 The Lung Nodule Analysis 2016 (LUNA16) dataset is a collection of 888 axial CT scans of patient chest cavities taken from the LIDC/IDRI database<cit.>, where only scans with a slice thickness smaller than 2.5 mm are included. In each scan, the location and size of nodules are agreed upon by at least 3 radiologists. There is no information regarding the malignancy or benignity of each nodule or the cancer status of the associated patient.In total, 1186 nodules are annotated across 601 patients. We use 542 patients as a training set and the remaining 59 as a validation set. A slice from one patient, with a single nodule location shown, can be seen in Figure <ref>.§.§.§ Kaggle Data Science Bowl 2017 The Kaggle Data Science Bowl 2017 (KDSB17) dataset is comprised of 2101 axial CT scans of patient chest cavities. Of the 2101, 1595 were initially released in stage 1 of the challenge, with 1397 belonging to the training set and 198 belonging to the testing set. The remaining 506 were released in stage 2 as a final testing set.Each CT scan was labelled as `with cancer' if the associated patient was diagnosed with cancer within 1 year of the scan, and `without cancer' otherwise. Crucially, the location or size of nodules are not labelled. Figure <ref> contains a sample slice from this dataset.§.§.§ PreprocessingEach scan is comprised of multiple 2D axial scans taken in sequence with pixel values in the range (-1024, 3071), corresponding to Hounsfield radiodensity units. The number of slices, slice thickness, and scale vary between scans.We normalize pixel values to the (0, 1) range and stack the 2D slices in sequence to produce a 3D volume. The entire 3D volume is scaled and padded, while maintaining the true aspect ratio using the embedded scale information, into a 512 volume. 
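A minimal sketch of this preprocessing step is given below (our own illustration, not the authors' code). It assumes the scan is available as a NumPy array of Hounsfield units together with its voxel spacing; the interpolation order and the zero-padding convention are our choices.

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_scan(hu_volume, spacing_mm, out_size=512):
    """Map HU values in (-1024, 3071) to [0, 1] and fit the volume into an
    out_size^3 cube, preserving the physical aspect ratio via zero-padding.
    hu_volume:  (z, y, x) array of Hounsfield units.
    spacing_mm: (dz, dy, dx) voxel spacing in millimetres."""
    vol = np.clip(hu_volume, -1024, 3071).astype(np.float32)
    vol = (vol + 1024.0) / 4095.0                      # normalize to [0, 1]

    # Rescale so the longest physical axis spans out_size voxels (isotropic voxels).
    extent = np.array(vol.shape) * np.asarray(spacing_mm, dtype=float)
    factors = out_size * (extent / extent.max()) / np.array(vol.shape)
    vol = zoom(vol, factors, order=1)

    vol = vol[:out_size, :out_size, :out_size]         # guard against rounding overshoot
    pads = [(0, out_size - s) for s in vol.shape]
    return np.pad(vol, pads, mode="constant")
```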
Due to the increased GPU memory usage involved with volumetric data, we then separate this volume into overlapping 128 crops with a 64 voxel stride, to be processed by our pipeline in parallel.§.§ Architecture Our framework is divided into four separate neural networks. They are the: * nodule detector, which accepts a normalized 3D volume crop from a CT scan and identifies areas that contain nodules;* malignancy detector, which operates similarly to the nodule detector, but further classifies nodules as benign (non-cancerous) or malignant (cancerous);* nodule classifier, which accepts individual nodule volumes and similarly classifies them as benign or malignant; and–* patient classifier, which accepts features from the malignancy detector and nodule classifier and yields the probability of the patient having cancer. The nodule detector is used to detect regions that contain nodules. The malignancy detector provides a class probability map over grid cells of each cell containing benign, malignant, or no nodules. Separate code extracts and preprocesses nodule volumes and runs the classifier on each, yielding the probability of malignancy for each nodule. Finally, the patient classifier pools features from the classifier and features from the malignancy detector; producing the probability of the patient having lung cancer. Figure <ref> graphically shows the structure of our pipeline.§.§.§ Nodule Detectors Common frameworks for object detection (such as Faster RCNN <cit.>) produce precise bounding boxes around objects of interest. As our task does not require perfect localization, we instead divide the search space into a uniform grid and perform detection in each grid cell. This was inspired by the class probability map of the YOLO network <cit.>.We base our architecture on the pre-activation version of ResNet-101 <cit.>, which uses fewer parameters than other state-of-the-art networks while achieving comparably high accuracy on visual tasks. Our modified ResNet is described in Table <ref>. Notably, we use 3D convolutions and pooling, and replace the global average pooling operation with a 1 convolution. Additionally, we substitute rectified linear units (ReLU) with Leaky ReLU units (α = 0.1), to improve convergence. We interpret the output as a class probability map of nodules occurring inside the corresponding receptive field. Our modifications also narrow the receptive field of each output node to 16 voxels to better localize nodules as illustrated in Figure <ref>, which shows the distribution of nodule radii in the LUNA16 dataset. A 128 input volume produces an 8 output class probability map. The nodule detector provides a distribution over two classes {`has-nodule', `no-nodule'}, while the malignancy detector provides a distribution over three {`malignant', `benign', `no-nodule'}. An example of the output map can be seen in Figure <ref>.The output of the malignant detector is provided directly to the patient-level classifier. It acts as a global feature, providing information on the distribution of malignant nodules through the entire volume without providing specific information about any nodule.As nodules are sparsely distributed through the scan, we expect there to be a strong class imbalance. We address this by weighting our cross-entropy function during training. In the nodule detector, we balance the loss by calculating a weight per-batch and apply it to the weaker class as in Equation <ref>. 
Loss(p, q)= -1/|C|∑_c ∈ C w(c) · p(c) log q(c)where p is the predicted distribution, q is the true distribution, C is {`no-nodule', `has-nodule'} f_c is the frequency of class c in the mini-batch, and– w(c) = {[ f_no-nodule/f_nodule if c is nodule;1otherwise ]. In the malignancy detector, we slightly alter the loss function and generalise it to allowing balancing of multiple classes as in Equation <ref>. We did not have time to retrain the initial nodule detector with this generalised loss, but expect similar performance. Loss(p, q)= -1/|C|∑_c ∈ C1/f_c· p(c) log q(c)where p is the predicted distribution, q is the true distribution, C is {`malignant', `benign', `no-nodule'}, and– f_c is the frequency of class c in the mini-batch.§.§.§ Nodule ClassifierEvery grid cell reported to contain a nodule by the nodule detector is extracted from the original volume. Note that the nodules classifier works on the detector output and not on the malignancy detector one. This allows the framework to be more robust by ensembling results from the separate networks.Contiguous grid cells with nodules are assumed to contain parts of the same nodule, and are stitched together. All nodules are scaled and zero-padded to fit into a 32 volume.The nodule classifier takes as input the 32 volume with the nodule and classifies it as malignant or benign. As the size of the input is smaller, training the classifier on the detected nodules is significantly easier than on the entire scan volume. The classifier is based on the pre-activation version of ResNet-18 and is described in Table <ref>.§.§.§ Patient ClassifierTo create a single feature vector for each patient, the global patient features from the malignancy nodule detector are combined with the local nodule features from the nodule classifier. We aggregate the output of the malignancy detector by combining crops for each patient and constructing a density histogram over the softmax output, with 32 bins for each class in {`malignant', `benign', `no-nodule'}. The number of crops containing a nodule is also appended to this feature. We pool the nodule classifier outputs by computing the number of nodules, minimum, maximum, mean, standard deviation, and sum of the softmax output, as well as a 10 bin density histogram. In the case of a patient without any detected nodule, all these values are set to zero.The features from both networks are then weighted and concatenated into a 113 dimensional vector. This acts as input to a simple neural network with two hidden layers followed by ReLU non-linearity as shown in Table <ref>. This network produces the two-class probability of the cancer status of the patient.The use of histograms to perform pooling in neural networks is a long-accepted technique, dating to the earliest development of neural networks<cit.>.§.§ Training§.§.§ Nodule Detector, LUNA16Each volume crop in LUNA16 is preprocessed then divided into a uniform grid, with each cell of size 16. If the bounding box of a nodule intersects with a grid cell, that cell is deemed to be labelled `has-nodule'; other cells are labelled `no-nodule'. To save time, we sample only 128 random crops from each patient for training, duplicating crops with nodules to maintain class balance.We train for 100 000 iterations of 24 mini-batches, with a learning rate of 0.01 and weight decay of 10^-4. 
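For reference, the per-batch class-frequency weighting used in the detector losses above can be sketched as follows (NumPy; our own illustrative formulation and averaging convention, not the authors' implementation):

```python
import numpy as np

def balanced_cross_entropy(probs, labels, n_classes, eps=1e-7):
    """Cross-entropy over a mini-batch with per-class weights 1/f_c, where f_c is the
    frequency of class c in that batch (classes absent from the batch get weight 0).
    probs:  (batch, n_classes) predicted class probabilities (softmax output).
    labels: (batch,) integer ground-truth classes."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    freqs = counts / counts.sum()
    weights = np.where(freqs > 0, 1.0 / np.maximum(freqs, eps), 0.0)
    nll = -np.log(np.clip(probs[np.arange(len(labels)), labels], eps, 1.0))
    return float(np.mean(weights[labels] * nll))

# Toy batch dominated by the 'no-nodule' class (label 0): the rarer class gets a larger weight.
probs = np.array([[0.9, 0.05, 0.05], [0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.6, 0.2, 0.2]])
labels = np.array([0, 0, 1, 0])
print(balanced_cross_entropy(probs, labels, n_classes=3))
```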
The Adam optimizer <cit.> is used with default parameters of (β_1 = 0.9, β_2 = 0.999).§.§.§ Malignancy Detector, KDSB17 This network is initialized with the weights of the trained nodule detector network, modified to return a distribution over three classes C = {`malignant', `benign', `no-nodule'}. We fine-tune this network on the KDSB17 dataset: cells classified by the nodule detector as `has-nodule' are classified as `malignant' or `benign' depending on the cancer status of the patient; other cells are classified as `no-nodule'. We only train and test on crops that contain a nodule and maintain class balance by duplication. This is an optional step done to save time. We fine-tune this model, first for 20 000 iterations with a learning rate of 0.01, then for 30 000 iterations with a learning rate of 0.001.§.§.§ Nodule ClassifierOver the KDSB17 dataset, we detect between 0 and 10 nodule grid cells per scan. We stack and average detection results from over-lapping crops and consider detections with a confidence above 0.5 as indicating the presence of a nodule. We then extract all the detected nodules from all the patients, and scale with zero-padding to a fixed value of 32 for training. Each of these nodules are classified independently. Generally, in a patient with cancer, only a few of the nodules present are malignant. All nodules in patients without cancer are benign. Benign and malignant nodules are difficult to distinguish, even for experienced radiologists; Figure <ref> compares sample cross-sections. Doctors often use the nodule size as a first criteria in nodules examination: cancerous nodules tend to be the largest, and usually larger than a particular threshold <cit.>.For the classifier, we explore different methods of labelling nodules and build the set of malignant nodules using the heuristic measure described in Section <ref>.Malignant nodules are far less prevalent than benign (around 1 for 7). To rebalance the classes, we augment the set of malignant nodules by flipping and 90-degree rotations.We trained from scratch for 6 000 iterations (stopping early) using the Adam optimizer with a batch size of 32, a learning rate of 0.001, and a weight-decay of 10^-4. We split the training data into an actual training set and a validation one, the latter corresponding to 10% of original training patients.§.§.§ Patient Classifier The patient classifier is trained to associate the pooled outputs from the malignancy detector and classifier with the cancer status of the patient, as provided in the KDSB17 dataset. We initialise weights as in <cit.>, and train with the Adam optimizer for 2 000 iterations using a learning rate of 0.001, with all the data used as a single batch. To prevent overfitting, we train the patient classifier on an augmented version of the KDSB17 training set (augmented via volume transpose), and use a weight decay of 10^-4. Output is clipped to [0.1, 0.9].§.§.§ Strategies for Labelling Training DataTo classify individual nodules we need to obtain labels for each nodule. We do not have any such data, and obtaining radiologist annotation on individual nodules was not feasible. We know that all nodules in patients without cancer are benign; and we use heuristic methods to label nodules within patients with cancer as benign or malignant:We compare two different heuristic methods to assign labels to nodules: the patient-label strategy, and the largest-nodule strategy. 
The patient-label strategy is the simplest possible, where we label all nodules from a patient with cancer as malignant. The largest-nodule strategy assumes that in patients with cancer, the largest nodule and all nodules at least some proportion w of that nodule are malignant. We used the latter in the Kaggle competition.In contrast, the malignancy detector uses the patient-label heuristic. This is a deliberate simplification to avoid the computational overhead of backpropagation through the nodule extractor. The classifier does not incur such an overhead because it operates on nodules after the extractor has assembled them from grid cells. § RESULTS AND DISCUSSION In medical diagnostics, it is common to present classifier performance using sensitivity (the true positive rate) and specificity (the true negative rate) instead of accuracy. To assess overall classification relevance, we also compute the F1-score. In the KDSB 2017, candidates were evaluated using the log-loss metric. We evaluate each component in the entire pipeline and present our results.§.§ PerformanceWe evaluate the patient classifier directly on the stage 1 and stage 2 test sets of the KDSB17 dataset. On stage 1 test data, we observe sensitivity of 0.719, specificity of 0.716, and Log-Loss of 0.47707, ranking our entry as 71^st during the first round of the competition. Sensitivity and specificity are computed by setting the probability threshold separating the two classes on the classifier's output at 0.25.When testing on stage 2 test data in the second phase of the KDSB17 contest, we observe a Log-Los of 0.52712 ranking our entry as 41^st out of 1972 teams, placing us in the top 3%. As the competition organizers only reported the log loss, we are unable to make a direct comparison of approaches. We present our ranking in Table <ref>. During the competition, only 4 features were used from the nodule classifier, number of nodules, mean, std, and sum of the softmax output. Post-competition, we use additional features as described in Section <ref>. We present updated results in Table <ref> and compare the overall performance of each component and their contribution to the final result.§.§ Component Training§.§.§ Nodule & Malignancy Detector To verify the nodule detector's performance, we evaluate on the validation set of LUNA16 and observe sensitivity of 0.697, specificity of 0.999, and F1-score of 0.740. We similarly evaluate the malignancy detector on the stage 1 test set of KDSB17 and observe sensitivity of 0.317, specificity of 0.997, and F1-score of 0.269. The metrics for the malignancy detector are calculated with only the malignant nodules as the positive class.While the nodule detector performs well, the malignancy detector has comparatively poor performance. This is likely due to the additional class increasing the complexity of the task.Additionally, the malignancy detector is trained on a version of the KDSB17 dataset where nodules are labelled using a naive method of labelling all nodules in a cancer patient as malignant. This naive patient-labelling method might introduce noise into the groundtruth labels of the annotated dataset, thus impeding the learning of the network. 
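For concreteness, the two labelling heuristics compared in this section reduce to a few lines (our own sketch, not the authors' code; whether w is applied to nodule diameter or volume is not stated in the text, so a generic size measure is used):

```python
def label_nodules(nodule_sizes, patient_has_cancer, strategy="largest-nodule", w=0.7):
    """Heuristic nodule labels for one patient: 1 = malignant, 0 = benign."""
    if not patient_has_cancer or not nodule_sizes:
        return [0] * len(nodule_sizes)       # all nodules of cancer-free patients are benign
    if strategy == "patient-label":
        return [1] * len(nodule_sizes)       # every nodule inherits the patient's label
    largest = max(nodule_sizes)              # largest-nodule strategy
    return [1 if size >= w * largest else 0 for size in nodule_sizes]

sizes = [30.0, 24.0, 12.0, 6.0]              # nodule sizes of one cancer patient
print(label_nodules(sizes, True))                      # -> [1, 1, 0, 0]
print(label_nodules(sizes, True, "patient-label"))     # -> [1, 1, 1, 1]
```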
As labelling of nodules is done by the nodule detector, errors in labelling also propagate down to the malignancy detector as well as nodule classifier.§.§.§ Nodule ClassifierWe evaluate the classifier's performance using sensitivity, specificity, as well as F1-score.Initially training with 10 000 steps resulted in almost perfect scores while evaluating the model on training data, indicating a high chance of over-fitting. Additionally, the F1-score started to deteriorate rapidly. To prevent this, we stop training early, before performance on the training set flattens. Table <ref> shows the classifier performance on the testing set after several training durations. For the final architecture, we use a model trained for 6 000 steps. It provides the best trade-off between all three criteria. The quality of these classification results directly relies on the quality of the nodules labelling strategy that was used. A strategy that mislabelled many nodules would result in lower classifier performance.In Table <ref> we present the classification results of the patient-label strategy and the largest-nodule strategy (see Section <ref>. )with w=90% and w=70% when applied to the classifier, trained for 6 000 steps. Largest nodule with w=70% gives an average F1-score and the best trade-off between specificity and sensitivity. This suggests it is the method leading to the least number of mislabelled training nodules. Cancerous nodules are usually the largest ones, and not all nodules are systematically malignant.In our pipeline, we thus use the largest-nodule strategy (w=70%) for the classifier.Changing the strides in the ResNet architecture is essential. With the original stride values from the ResNet-18 architecture, training is less efficient. Convolutional filters become larger than the convolutional feature maps, harming learning and leading to very poor sensitivity and F1-score. In consequence, strides are removed in blocks 1, 2 and 4. Table <ref> shows the classifier performance on the testing set after 6 000 training steps. § CONCLUSIONS AND FUTURE WORKDetecting lung cancer in a full 3D CT-Scan is a challenging task. Directly training a single-stage network is futile, but factoring our solution into multiple stages makes training tractable. Due to imperfect datasets, our approach leveraged the LUNA16 dataset to train a nodule detector, and then refined that detector with the KDSB17 dataset to provide global features. We use that, and pool local features from a separate nodule classifier, to detect lung cancer with high accuracy.The quality of our method was validated by the competition, in which we placed 41^st out of 1972 teams (top 3%). §.§ Improvements There are many ways in which we can extend our method:Our method makes little use of the peculiarities of cancer nodules, and so will likely improve with advice from medical professionals. <cit.>, who placed second in the competition, used 17 different CNNs to extract the diameter, lobulation, spiculation, calcification, sphericity, and other features of nodules. These features are commonly used to classify nodules <cit.>, and help the network better learn about malignancy and cancers.When fine-tuning the nodule classifier, we rely on heuristic methods to determine which nodules are malignant and which are benign. These heuristic methods have not been experimentally validated, and so remain of dubious quality. 
We can instead apply unsupervised learning techniques together with a small set of radiologist-labelled nodules to directly learn the difference between malignant and benign nodules.§.§ Future Work The generality of our method also suggests that it can be adapted to other tumour- and cancer diagnosis problems. The design of our pipeline does not rely on any particular feature of lungs or lung cancer, and so we can easily adapt our pipeline to other nodular cancers, or perhaps other diseases.We have many more negative examples than positive examples, and so we need to balance our classes to improve classification performance. Current balancing techniques rely on classical data augmentation (flipping, rotation, etc.), though we would like to investigate advanced techniques such as 3D Generative Adversarial Networks (3D GANs) <cit.>. GANs are a relatively novel invention, and such a fusion technique may yield insight into both GANs and lung cancer. Radiologists do not arrive at a diagnosis of lung cancer from a single CT scan <cit.>. They diagnose a particular type of lung cancer using a sequence of CT scans over a few months. They match the behaviour of the nodules over time with a particular subtype of lung cancer. To match radiologist-level accuracy on the task, we need to develop a time-varying model of lung cancer that can effectively include a progression of CT scans.Patient-level priors also significantly affect diagnoses. Age, sex, smoking behaviour, familial co-occurrence, occupation, etc. are all factors that influence the likelihood of developing lung cancer. Adding this heterogeneous data will yield significant improvement in detection performance.From a clinical perspective, detecting the subtype and optimizing treatment options is a vitally important problem that machine learning can tackle. Our long-term goal is to model the disease itself, so that we can detect it, predict its behaviour, and treat it optimally. ACM-Reference-Format | http://arxiv.org/abs/1705.09435v1 | {
"authors": [
"Kingsley Kuan",
"Mathieu Ravaut",
"Gaurav Manek",
"Huiling Chen",
"Jie Lin",
"Babar Nazir",
"Cen Chen",
"Tse Chiang Howe",
"Zeng Zeng",
"Vijay Chandrasekhar"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20170526053629",
"title": "Deep Learning for Lung Cancer Detection: Tackling the Kaggle Data Science Bowl 2017 Challenge"
} |
Interactive Lévy Flight in Interest Space]Interactive Lévy Flight in Interest Space ^1 School of Systems Science, Beijing Normal University, Beijing 100875, P.R.China ^* [email protected], [email protected] to the well-studied topic of human mobility in real geographic space, very few studies focus on human mobility in virtual space, such as interests, knowledge, ideas, and so forth. However, it relates to the issues of management of public opinions, knowledge diffusion, and innovation. In this paper, we assume that the interests of a group of online users can span a Euclidean space which is called interest space, and the transfers of user interests can be modeled as the Lévy Flight on the interest space. To consider the interaction between users, we assume that the random walkers are not independent but interact each other indirectly via the digital resources in the interest space. The model can successfully reproduce a set of scaling laws for describing the growth of the attention flow networks of real online communities, and the ranges of the exponents of the scaling are similar with the empirical data. Further, we can infer parameters for describing the individual behaviors of the users according to the scaling laws of the empirical attention flow network. Our model can not only provide theoretical understanding on human online behaviors, but also has wide potential applications, such as dissemination and management of public opinions, online recommendation, etc. Keywords: human mobility, Lévy Flight, collective attention, interest space, attention flow networks§ INTRODUCTION Everything is moving. To understand the mobility patterns for human kinds is of great importance since it relates to epidemics<cit.>, urban planning<cit.>, and other issues in modern city<cit.>. Lots of studies on human mobility in real space have been made in past decades<cit.>. For instance, it is found that Lévy Flight<cit.>, one of the most famous random walk model which is significantly distinguished from Brownian motion<cit.>, can be used to characterize human movements. However, human mobility does not only take place in real space exclusively, but also in virtual space<cit.>. For example, our consciousness always jumps between different ideas, which can be understood as a virtual movement in interest space<cit.>. A large amount of users surfing on an online community, and jumping between different posts, can be also understood as collective movements in interest space<cit.>. Although virtual space seems to be less solid than physical space, the study of it is of great significance because it may help us to understand the issues of psychology therapy<cit.>, dissemination<cit.> and management of public opinion<cit.>, online recommendation<cit.>, and so on.Some important conclusions for collective users in online community have been achieved. For example, conventional studies of human dynamics usually concerned the statistics on a single interest (digital resource) such as access time, frequency, and so forth, or the distributions of the number of visiting pages and the decay patterns of popularity. Besides, human being is a social animal<cit.>. Thus, a large amount of attention is paid on how people interact, correlate, and connect each other in the studies of social networks<cit.>. The interaction and correlation between people can also be reflected by statistical laws<cit.>. 
For example, the super-linear scaling of productivity is found in both cities and online communities<cit.>, which means cooperation between people may promote the growth of the per capita productivity for human organizations in a faster rate than their sizes. While, the widely existed sub-linear scaling law of diversity of places or interests indicates a slower increase of diversification. This is also an emergent indirect results of interactions between people. Although human dynamics and complex networks have drawn a wide attention, little concern is made with the sequential movements of users on interests because the concept of the interest space is not apparent. Recently, some attempts have been made to visualize the virtual space by attention flow network model<cit.>, on which nodes represent digital resources (posts, tags, and articles etc.). The network is constructed according to the collective behaviors of a large number of users. However, the attention flow network is built according to the data of users<cit.>, i.e., the representation of the interest space rather than the space itself.In this paper, we focus on collective movements of users in their interest space. Here, we assume that all the possible interests of users span a Euclidean space, in which adjacent points standing for similar interests, and users performing random walks of Lévy Flights in the space. However, naive Lévy Flight model<cit.> can not reproduce the required scaling laws because of the absence of interactions. Thus, we build an interactive Lévy Flight model, in which, interactions occur between users indirectly bridged by digital resources. The model successfully reproduces all the concerned scaling laws, and the range of scaling exponents can be also calibrated by adjusting the parameters in the model. Our model can not only deepen our understanding, but also may largely improve the accuracy of predictions on user behaviors<cit.>, leading to wide applications on recommendation<cit.>, searching<cit.>, user profiling<cit.>, etc. § MODEL To understand the scaling phenomena of production and diversity, and the relationship between users and digital resources, we construct an interactive Lévy Flight model to simulate user behaviors. Let’s consider an online community (such as Baidu Tieba, Stack Exchange or Flickr, etc.), which containing a large number of registered users. All their interests can span an interest space which is modeled by a 2-dimensional Euclidean plane (see figure <ref>-(A)). In which, each cell represents one possible interest, such as a kind of music style, or a type of article, etc. Two cells are adjacent standing for they representing similar contents. Meanwhile, articles, tags, Q&As, and so forth digital resources generated by users can be projected onto this interest space. We use C(X,t) to denote the number of units projected at X and time t, where X is the coordinate of the cell. The cell occupied by at least one unit of digital resource, i.e., C(X,t)>0, is called an Active Site meaning that the resource has the corresponding theme as the interest. In figure <ref>-(A), S0, S1,...,S4 represent active sites and occupied by digital resources. When a digital resource is generated by a user, it will be projected on the space and be able to be visited or read by other users. Thus, all users interact each other indirectly through the active sites.Users’ sequential behaviors such as browsing, posting, Q&A, and others in a session can be viewed as a walk in the interest space. 
We assume that the user's walk follows a 2-dimensional Lévy Flight law, meaning that the movement pattern of a user is basically random and the probability density function of the movement distance l in one jump is a power law: P(l) ∼ l^-λ. This movement rule describes how a user's interest transfers: the user's interest stays in a narrow area for a long time, but occasionally performs a long-range jump with a small probability. λ (with values in [1,3]) is the exponent of the Lévy Flight; it characterizes how wide the interests of users are. If λ is small, the users frequently perform long-range random jumps, meaning that they have wide interests, and vice versa. Thus, λ parameterizes a trade-off between familiarity and novelty: users usually consume familiar topics, but occasionally they also seek out new information. In figure <ref>-(A), the arcs labeled with the same number represent the flights of one user. For example, the user with label 1 visits S0, S1, and S4 sequentially. Next, we consider the interaction between users. We know that if a community already has abundant digital resources (such as many posts in a forum), users will continue to visit these resources; otherwise they will lose interest and quit quickly. To characterize this feature, without loss of generality, we assume that a user keeps jumping randomly from a cell X as long as X is active (it holds at least one unit of digital resource); otherwise, he or she leaves the space from X. We denote the position of user i at time t by X_t^i, and set it to ∅ when the user has quit the community, so we have X_t^i = X_t-1^i + ξ if C(X_t-1^i, t-1) > 0, and X_t^i = ∅ if C(X_t-1^i, t-1) = 0, where ξ is a random displacement whose length follows equation (<ref>). On the other hand, each user generates new digital resources with a certain probability during the random walk. That is, if user i jumps to cell X, he or she adds a new digital resource at X with probability p. Thus, we have: C(X, t) = C(X, t-1) + ∑_i=1^N δ(X - X_t^i) η_i, where δ is the Dirac delta function, which equals 1 only if its argument is 0; η_i is a random number following a 0-1 distribution with probability p of being 1; and N is the total number of users performing Lévy Flights in the space. Next, we consider the situation with N users. Suppose that in one simulation (a session), N users are placed at the origin of the interest space and begin to perform Lévy Flights from the origin simultaneously. Although they do not interact directly, they can influence each other via the active sites. The simulation ends at time T(N) when all users have exited the space. Apparently, T(N) increases with N, implying that the indirect interactions can keep users living in the community for a longer time, since the probability that they encounter each other's resources increases. One trajectory is generated for each user over his or her lifespan in the community. To observe the collective behaviors of these Lévy Flighters, we construct an attention flow network as shown in figure <ref>-(B). The so-called attention flow network is an open flow network<cit.> in which nodes represent digital resources (active sites) and weighted directed links represent the transition flows between two nodes formed by the collective behaviors of the random walkers.
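To make the above rules concrete, the following is a minimal simulation sketch in Python (our own illustrative code, not the authors' implementation; the lattice discretization, the seeding of the origin with one resource, and all parameter values are assumptions made here for illustration). It implements the three ingredients just described — power-law jump lengths with exponent λ, termination when a walker lands on an inactive cell, and deposition of a new resource with probability p — and returns the trajectories from which the attention flow network discussed next is built.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_step(lam, l_min=1.0):
    """One 2-D jump whose length follows P(l) ~ l**(-lam) for l >= l_min."""
    u = rng.random()
    length = l_min * (1.0 - u) ** (-1.0 / (lam - 1.0))   # inverse-transform sampling
    angle = rng.uniform(0.0, 2.0 * np.pi)
    return length * np.cos(angle), length * np.sin(angle)

def simulate_session(n_users, lam=2.0, p=0.1, max_steps=10000):
    """Interactive Levy flights of n_users walkers on an integer lattice of
    interest cells; returns per-user trajectories and the resource counts C(X)."""
    resources = {(0, 0): 1}                 # seed the origin so walkers can start
    positions = [(0.0, 0.0)] * n_users      # continuous walker positions
    alive = [True] * n_users
    trajectories = [[(0, 0)] for _ in range(n_users)]
    for _ in range(max_steps):
        if not any(alive):
            break
        for i in range(n_users):
            if not alive[i]:
                continue
            x, y = positions[i]
            cell = (int(round(x)), int(round(y)))
            if resources.get(cell, 0) == 0:  # inactive cell: the user quits here
                alive[i] = False
                continue
            dx, dy = levy_step(lam)          # one interest transfer (jump)
            x, y = x + dx, y + dy
            positions[i] = (x, y)
            new_cell = (int(round(x)), int(round(y)))
            trajectories[i].append(new_cell)
            if rng.random() < p:             # deposit a new digital resource
                resources[new_cell] = resources.get(new_cell, 0) + 1
    return trajectories, resources

trajectories, resources = simulate_session(n_users=200, lam=2.0, p=0.1)
print("walkers:", len(trajectories), " active sites:", len(resources))
```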
In the model, the weight of the edge connecting active sites X and Y can be defined as: W(X, Y)=∑_τ=1^T-1∑_i=1^Nδ(X_τ^i-X)·δ(X_τ+1^i-Y) Two special nodes, the source and the sink, are added to represent the environment. When a random walker enters the interest space at an active site X, a unit of flux from the source to node X is added to the attention flow network. On the other hand, a unit of flux from node X to the sink is added if the last site of a random walker's visit is X. The attention flow network characterizes the collective properties of a large number of users for both the simulated model and the empirical data. § SIMULATION To validate our model, we study how the network properties change as the size of the system increases, and check whether the same scaling laws can be reproduced by our model. Here, we use the total number of users N as the measure of the size. In fact, this quantity is also the total influx to the attention network for a given simulation. In the following, we focus on how the macroscopic properties change with N, concentrating on three basic macroscopic variables. First, A=∑_X∑_YW(X, Y) measures the activity of the community; it is defined as the total number of interest transitions (jumps). Second, D=∑_X∑_t=1^Tδ(C(X, t)) is the total number of active sites, i.e., the total number of nodes in the network. It measures the diversity of interests of all members of the community. Third, E=∑_X∑_Yδ(W(X, Y)) is the total number of edges of the network, and it measures the diversification of interest transitions. According to our simulated data, all three variables scale with N with different exponents, i.e., A ∼ N^α, D ∼ N^β, E ∼ N^γ, where α, β, and γ are exponents characterizing the relative growth speed of these quantities with respect to the size of the system, as shown in figure (<ref>). To compare with the simulated data, we also plot the empirical scaling laws for the same quantities in two representative online communities, Baidu Tieba (each jump represents a click, see figure <ref>-(a,b,c)) and Stack Exchange (each jump represents an answering behavior, see figure <ref>-(d,e,f)). We observe that all the communities follow the same scaling laws as the simulated results, and the values of the exponents are also similar. First, we notice that the exponent α is always larger than 1.0 for different p values in the simulations. This observation also holds for the empirical data. As shown in figure <ref>-(a, b), we systematically calculate the exponents of 1000 Baidu Tiebas and 136 communities in Stack Exchange and plot the distributions of the exponents. It is clear that the distributions of the four exponents are nearly normal; the distribution of α is right-skewed, and its average value is approximately 1.25, which is significantly larger than 1.0. Some small Tiebas have exponents less than 1.0 because their scaling properties are not statistically significant. We further confirm the super-linear relationship between A and N for more online communities, as shown in table <ref>. All the exponents are larger than 1. Among them, communities with intensive interactions between users, such as Baidu Tieba, Stack Exchange, and Digg, always have larger exponents. In fact, we can understand the exponent α as an indicator measuring the intensity of the social interactions between users in a community. According to (<ref>) we derive: A/N ∼ N^α-1. This indicates that the average number of jumps per user increases with the size of the system N if α>1, and the relative speed of this increase grows with α.
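In practice, the exponents α, β, and γ are obtained from log–log fits of A, D, and E against N, aggregated over sessions or communities of different size. A minimal sketch of this measurement step (our own illustrative code; the numbers in the example arrays are placeholders, not the empirical values of table <ref>):

```python
import numpy as np
from collections import Counter

def network_measures(trajectories):
    """Aggregate user trajectories into the attention flow network and return
    A (total flux), D (distinct visited cells, a proxy for the active sites)
    and E (distinct directed transitions)."""
    weights = Counter()
    nodes = set()
    for traj in trajectories:
        nodes.update(traj)
        for x, y in zip(traj[:-1], traj[1:]):
            weights[(x, y)] += 1             # W(X, Y): aggregated jump counts
    return sum(weights.values()), len(nodes), len(weights)

def fit_exponent(sizes, values):
    """Least-squares slope of log(values) versus log(sizes), i.e. the exponent."""
    slope, _intercept = np.polyfit(np.log(sizes), np.log(values), 1)
    return slope

# Placeholder data: one (N, A) pair per simulated session or per community.
N = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
A = np.array([1.2e3, 2.9e3, 7.0e3, 1.7e4, 4.1e4])
print("alpha ~ %.2f" % fit_exponent(N, A))
```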
Therefore, if α is large, the average activity generated by the users is sensitive to the total number of users. This characterizes the nonlinearity of the interactions between users. For comparison, we investigate the possible intervals of α in our simulations. As shown in Fig. 5, the exponent α increases with the probability p, which means that as the propensity of users to generate activities increases, the average intensity of interaction also increases. If we understand the activity as a kind of production by the users, then the exponent α characterizes the productivity of the group of users. If it is easy for users to express their interests (p increases), the online community is more productive. Next, we analyze the exponent β, which governs the scaling between the number of nodes of the attention flow network and the size of the system. This scaling law indicates how the diversification of the digital resources generated by the users changes with the size. We find that both for the simulations (see figure <ref>) and the empirical data (see figure <ref>), the exponents are significantly less than one, which indicates a sub-linear scaling between diversity and the size of the system. Such sub-linear scaling is commonly observed in other complex systems. The total number of edges E of the attention flow network measures the diversification of distinct transitions between pairs of nodes. However, there is a large spread in the exponent γ: both super-linear and sub-linear scaling are possible for different communities, and in the simulations there is a transition from sub-linear to super-linear. When p increases, both diversification exponents (β and γ) increase. That means the propensity of users to generate content accelerates the relative speed at which diverse contents are produced compared to the growth of the system size. Thus, the average number of distinct contents generated increases with the size of the system. It is interesting to observe another scaling behavior between E and D, E ∼ D^θ, for which a super-linear relationship is observed. This phenomenon is found in a large number of networks and is known as densification. Our model successfully reproduces this phenomenon, and the exponent θ fluctuates around 1.5. All the ranges of the exponents of the model are consistent with those of the empirical data, which implies that our model captures the scaling behaviors in the data. We further test how the exponent of the Lévy Flight influences the other exponents. The results are shown in figure <ref>. The qualitative characteristics of the dependence of the exponents on p do not change dramatically; however, the ranges over which the exponents vary are different. We also note that the range of the fluctuation of γ is relatively small for different p, but changes dramatically with the exponent λ. Thus, we conjecture that the exponent γ depends almost exclusively on λ, and we suppose that this dependence can be used to infer the value of λ for a real community. § INFERENCES FOR PARAMETERS Λ AND P Next, we infer the parameters λ and p from the empirical exponents by using the maximum likelihood principle for each community. We suppose that the real exponents α, β, and θ are randomly sampled from the model, and that the exponents follow normal distributions with centers determined by the model and standard deviation σ for given p and λ, that is: P(α, β, θ|p, λ) ∼ exp(-[(α-α(p, λ))^2+(β-β(p, λ))^2+(θ-θ(p, λ))^2]/σ^2), where α(p,λ), β(p,λ), and θ(p,λ) are the exponents generated by the model for given parameters p and λ, which can be read off from the dependencies shown in figure <ref>.
To infer p and λ from given empirical measurements of α_i, β_i, and θ_i, we attempt to maximize the likelihood (eq. <ref>), that is: p̂, λ̂ = argmax_p,λ P(α, β, θ|p, λ). Equivalently, we need to minimize the distance: D=√((α-α(p,λ))^2+(β-β(p,λ))^2+(θ-θ(p,λ))^2). That is, we should find the most probable parameters p and λ such that the simulated exponents are closest to the empirical ones. In figure <ref> (a, b), we show all the inferred parameters for Baidu Tieba (a) and the communities of Stack Exchange (b). We notice that all the Tiebas can be roughly separated into two groups according to their parameters; they have similar p values (around 0.1) but different λ values. We know that λ characterizes how dissimilar a user's interests can be across one transition. Thus, the users of Tiebas with small λ tend to have relatively wide interests. All the Tiebas have very small p values, meaning that the tendency to post a new thread is much weaker than the tendency to click. In contrast, the communities of Stack Exchange are mostly concentrated in the region 1.0<λ<2.0 and 0<p<0.4. That means the users of Stack Exchange generally have wide interests and do not like to post questions. However, compared to Tieba, Stack Exchange communities always have larger p values, meaning that asking a question (relative to answering one) is easier than posting a thread (relative to clicking threads). § DISCUSSION In this paper we build an interactive Lévy Flight model to simulate the random walk behaviors of users in a virtual interest space. We assume that the users can interact indirectly via the digital resources. Two important parameters controlling the Lévy Flight behavior, i.e., how wide the users' interests are and the propensity of a user to deliver a post, determine the structure of the attention flow network. We compare the statistical properties of the attention flow network with those of empirical online communities from the perspective of scaling laws. Four different scaling laws characterize how the macroscopic quantities of activity, diversification of resources generated by users, and diversification of interest transfers scale with the number of users, together with the densification relation between the number of edges and the number of nodes; the exponents characterize the relative growth speeds. All the scaling behaviors and the ranges of the exponents in the simulations are in accordance with the empirical data. We can then infer the two important parameters p and λ once the exponents are measured. Therefore, the interest transitions of users may be characterized by a simple random walk model on a 2-dimensional space spanned by the interests of the users. The key that may explain the origin of the scaling laws that we have observed for the empirical communities is the indirect interaction between users. In our model, we assume that users stay in the system only if they can find published digital resources that feed their interests. This is the key to the indirect interaction and to the super-linear scaling law of activity, because when the number of users increases, the interactions between users also increase, but at a faster rate. This work does not only provide theoretical understanding of online communities, but also implies potential applications. First, the scaling exponents can be treated as novel indicators to characterize the growth of communities. For example, the exponent α may indicate the level of interactive stickiness of a community, since it increases with the intensity of the interactions between users.
The merits of adopting the exponents to quantify the communities include the stability of the exponents and their independence of the size of the community. Therefore, we can make a reasonable evaluation of a forum or a community even when it is small. Second, we can infer the parameters from the measured exponents. These parameters describe the behaviors of users. Thus, our work makes it possible to infer individual behavior solely from the macroscopic performance of the collective, and, conversely, to anticipate the macroscopic behavior if the individual parameters are known. Third, we pave a path connecting mobility in the real and virtual worlds. Our model shows that human mobility in the virtual world may follow the same statistical laws as in the real world, and that the interactions between people may play an important role. Finally, the current model has drawbacks. First, we only provide indirect evidence for mobility in the virtual world; the space may not be 2-dimensional or even Euclidean. Second, the model simplifies human behaviors to a large extent, which may not be adequate if other factors need to be considered. Third, more empirical data should be collected to test our model. § ACKNOWLEDGMENTS We gratefully acknowledge funding support from the National Natural Science Foundation of China (grant 61673070), the Fundamental Research Fund for the Central Universities (grant 310421103) and the Beijing Normal University Interdisciplinary Project. | http://arxiv.org/abs/1705.09462v1 | {
"authors": [
"Fanqi Zeng",
"Li Gong",
"Jing Liu",
"Jiang Zhang",
"Qinghua Chen",
"Ruyue Xin"
],
"categories": [
"cs.SI"
],
"primary_category": "cs.SI",
"published": "20170526073329",
"title": "Interactive Levy Flight in Interest Space"
} |
[Corresponding author: ][email protected] für Theoretische Physik, Universität Regensburg, D-93040 Regensburg, Germany Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg, Germany Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg, Germany Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg, GermanyWe present a semiclassical study of the spectrum of a few-body system consisting of two short-range interacting bosonic particles in one dimension, a particular case of a general class of integrable many-body systems where the energy spectrum is given by the solution of algebraic transcendental equations. By an exact mapping between δ-potentials and boundary conditions on the few-body wavefunctions, we are able to extend previous semiclassical results for single-particle systems with mixed boundary conditions to the two-body problem. The semiclassical approach allows us to derive explicit analytical results for the smooth part of the two-body density of states that are in excellent agreement with numerical calculations. It further enables us to include the effect of bound states in the attractive case. Remarkably, for the particular case of two particles in one dimension, the discrete energy levels obtained through a requantization condition of the smooth density of states are essentially in perfect agreement with the exact ones. Semiclassics in a system without classical limit:The few-body spectrum of two interacting bosons in one dimension Klaus Richter July 12, 2017 ===================================================================================================================§ INTRODUCTIONThe discovery by Bethe of quantum many-body systems admitting analytical expressions for eigenstates and eigenenergies in terms of the implicit solutions of algebraic transcendental equations <cit.> marked the birth of a whole branch of mathematical physics, namely, the theory of quantum integrable models (for an introduction see <cit.>). Since then, these models have served as the playground to study the role of symmetries and conservation laws for the stationary and dynamical properties of many-body systems <cit.>.Solvable models had been mainly restricted to the area of mathematical physics because they involve apparently unphysical δ-type interparticle interactions and are commonly restricted to one-dimensional (1D) motion. This situation has drastically changed with the successful preparation of quantum states of interacting cold atoms <cit.>, especially in regimes where the system can be considered essentially 1D, as in elongated optical traps <cit.>. As it turned out that interactions between neutral cold atoms are described with astonishing precision by δ-type potentials <cit.>, the knowledge accumulated during almost one century of work on solvable models found its way into cold-atom physics during the last decade. Hence, the experimental study of solvable many-body systems has become a very active field that provides hints for other hitherto inaccessible regimes where external potentials destroy integrability.There are, however, at least two situations where the detailed level-by-level calculation of the many-body spectrum using methods of integrable quantum systems overshoots in the problem of calculating macroscopic properties like the microcanonical partition function. One example is the thermodynamic limit, where quantum fluctuations are suppressed and the spectrum behaves effectively smooth. 
In this regime the appropriate tool is the so-called thermodynamic Bethe ansatz <cit.>. The second situation appears when studying interacting bosonic systems in the mesoscopic short-wavelength regime, where a separation of scales between the smooth and oscillatory part of the level density allows for the approximation of the spectrum as a smooth function. This is a well known procedure often referred to as the Thomas-Fermi approximation <cit.>.The Thomas-Fermi density of states is of paramount importance as it fixes the energy-dependence of the mean-level spacing, the fundamental energy scale that heralds the appearance of quantum effects and determines the relative importance of external perturbations. In this context, cold atom systems pose an interesting problem: since the Thomas-Fermi approximation explicitly requires a classical limit for the quantum mechanical Hamiltonian, it is not well defined for systems with δ-like interactions where the classical limit only exists away from the collisions.All in all, in systems with few particles interacting through short-range interactions, the semiclassical limit appears to be difficult to handle already when we ask for a very fundamental quantity as the mean level spacing. Our objective in this paper is to show how, by using techniques imported from the calculation of Thomas-Fermi approximations in single-particle systems with mixed boundary conditions, one can obtain a rigorous definition of the smooth density of states in systems of few particles interacting through δ-potentials, even when the latter do not have a classical limit.§ ONE PARTICLE IN A Δ-POTENTIAL In this section we present the general formalism for the calculation of the smooth part of the density of states (DOS) for d-dimensional billiards with either (d-1)-dimensional δ-barriers [i.e. a δ-function potential along a (d-1)-dimensional manifold] inside the volume or with Robin (or mixed) boundary conditions on the surface. The latter case was already discussed by Balian and Bloch <cit.> and by Sieber <cit.>, but in all three cases the derivations were done via the energy dependent Greens function whereas we use a formalism based on 1D propagators <cit.>. For this we need the propagator for a δ-potential in 1D, which is known exactly from a path integral calculation <cit.> (see also <cit.> for higher dimensions). Notably, quantum mechanical properties, like the appearance of a bound state for attractive interaction, are well hidden inside the results (see e.g. <cit.>). Therefore we take an alternative approach which is more instructive for our purposes.§.§ Propagator for a particle in a δ-potential The 1D-propagator for the δ-barrier is derived in a straightforward way by first calculating the exact propagator for a particle on a line with Dirichlet boundary conditions at x=± L and a δ-potential V(x)=(ħ^2κ/m) δ(x) with κ∈ and then taking the limit L→∞. The solutions of the stationary Schrödinger equation (-ħ^2/2mx+ħ^2κ/mδ(x))ψ(x)=Eψ(x) for the confined system are well known (see e.g. <cit.>) and can be separated into symmetric and antisymmetric solutions due to the symmetry of the problem. The latter are not affected by the δ-potential and coincide with the antisymmetric solutions of a particle in a box: ψ_n^(a)(x)=1/√(L)sin(k_n^(a) x), k_n^(a)=nπ/L, n∈. The symmetric solutions are given by ψ_n^(s)(x) =A_n/√(L)sin(k_n^(s)(|x|-L)), k_n^(s) L =nπ-arctan(k_n^(s)/κ), where A_n is a normalization constant that depends on k_n^(s). The transcendental equation in Eq. 
(<ref>) has exactly one solution for every n∈ and another nontrivial solution for n=0 if κ is negative. In the case κ L∈(-1,0) this solution is real whereas the case κ L<-1 yields a purely imaginary wave number which corresponds to a negative energy, referred to as a bound state in the following. The two types of states are continuously connected by the zero energy solution ψ_0^(s)(x)=√(3/2L^3)(|x|-L) valid for κ L=-1. In the limit L→∞ the state ψ_0^(s) will always be bound irrespective of the value of κ. The exact propagator for the confined system can now be written as K_⌊δ⌋(x',x,t) = ∑_n=1^∞-ıħ t/2m(k_n^(a))^2ψ_n^(a)(x')ψ_n^(a)(x) + ∑_n=0^∞-ıħ t/2m(k_n^(s))^2ψ_n^(s)(x')ψ_n^(s)(x). In the case κ≥0 both summations start with n=1. In the limit L→∞ this yields (see Appendix <ref>) K_δ(x',x,t) = 1/2π∫_-∞^∞k-ıħ t/2mk^2cos(k(x'-x)) - 1/2π∫_-∞^∞k-ıħ t/2mk^2ik(|x'|+|x|)/1+ik/κ- Θ(-κ)κıħ t/2mκ^2κ(|x'|+|x|), where the last term originates from the separate treatment of the (bound) state ψ_0^(s) (Θ is the Heaviside step function). The first integral in Eq. (<ref>) can be evaluated directly by means of a Gaussian (or Fresnel) integral and yields the well-known expression for the free propagator K_0(x',x,t)=(x'-x). The second integral in Eq. (<ref>) can be evaluated in the same way after replacing 1/1+ık/κ=∫_0^∞ϵ-(1+ık/κ)ϵ. Finally, the propagator for the Hamiltonian in Eq. (<ref>) reads K_δ(x',x,t)=K_0(x',x,t)+K_κ(x',x,t), with the deviation from free propagation given by K_κ (x',x,t) = 1∓ 1/2κıħ t/2mκ^2-κ(|x'|+|x|)- κ√(m/2πıħ t)∫_0^∞ϵ-κϵ-m/2iħ t(|x'|+|x|±ϵ)^2. Here, we identified κ=|κ| and the upper (lower) signs stand for a repulsive (attractive) potential. This result generalizes the propagator found in <cit.>, which is restricted to the repulsive case, and is equivalent to the result from path integral approaches <cit.>. Note that the second term in Eq. (<ref>) can be written as -∫_0^∞ϵκ-κϵK_0(-|x'|,|x|±ϵ,t), which is closely related to the correction -K_0(-x',x,t) obtained for a Dirichlet boundary condition (κ→∞) at x=0 <cit.>. It thus can be interpreted as the propagation from x to x' via the δ-potential, taking a detour or shortcut of length ϵ weighted with the density κ-κϵ.§.§ The DOS for a billiard with a δ-barrier The DOS of a d-dimensional system can be written as the inverse Laplace transform of the trace of the propagator <cit.>: ρ(E)=β∫dxK(x,x,t=-ıħβ)E. Let Ω be the configuration space of a d-dimensional billiard with a classically impenetrable thin barrier inside the volume which can be approximated by a δ-potential along a (d-1)-dimensional smooth manifold. By only taking into account short-time propagation inside the billiard we can locally approximate the barrier by (d-1)-dimensional planes and thus treat the coordinate perpendicular to the barrier as independent from the remaining d-1 tangential coordinates. In this case the local approximation for the propagator can be written as K^(d)(x',x,t) = K_0^(d-1)(x_∥',x_∥,t) K_δ(x_⊥',x_⊥,t) = K_0^(d)(x',x,t) + K_0^(d-1)(x_∥',x_∥,t) K_κ(x_⊥',x_⊥,t). Here, K_0^(d)(x',x,t) is the d-dimensional free propagator and x_⊥ and x_∥ are the coordinates perpendicular and tangential to the barrier. The trace is now calculated assuming that the integration of the perpendicular direction converges rapidly and is thus independent of the position at the barrier. 
By introducing the interaction strength in units of energy, μ=ħ^2κ^2/2m, this yields ∫_ΩdxK(x,x,t=-ıħβ) = V_Ω(m/2πħ^2β)^d/2+ S_δ/2(m/2πħ^2β)^d-1/2×(-1±μβ(√(μβ))+(1∓ 1)μβ) with the d-dimensional volume V_Ω of the configuration space Ω and the surface S_δ of the barrier. The DOS is then given by the inverse Laplace transform of Eq. (<ref>). For d=1 we have to set S_δ=1 and the DOS is given by ρ(E)=V_Ω(m/2πħ^2)^1/2Θ(E)/√(π E) -1/2δ(E) ±1/2π√(μ/E)Θ(E)/E+μ +1∓ 1/2δ(E+μ). In all other cases the result is ρ(E) = V_Ω(m/2πħ^2)^d/2E^d-2/2/Γ(d/2)Θ(E) - S_δ/2(m/2πħ^2)^d-1/2E^d-3/2/Γ(d-1/2)Θ(E) ±S_δ/2(m/2πħ^2)^d-1/2{β1/β^d-1/2μβ(√(μβ))E + (1∓ 1) (E+μ)^d-3/2/Γ(d-1/2)Θ(E+μ)}. A closed formula for the inverse Laplace transform β1/β^d-1/2μβ(√(μβ))E for arbitrary dimensions d>1 is given in Appendix <ref>. For varying strength of the δ-potential along the surface of the barrier, i.e., κ=κ(x), the surface S_δ has to be replaced by the integral operator ∫_S_δd-1x. Note that the boundary conditions at the boundary ∂Ω of the billiard are not yet included and the approximation of a flat barrier may fail near the boundary. Furthermore, the above approximation does not include curvature corrections and contains only information on the smooth part of the DOS. The result could, in principle, be improved by using periodic orbit theory following <cit.>, but this would be at the expense of generality. Now consider a different setup without a δ-barrier inside the billiard but with Robin- (or mixed) boundary conditions ∂/∂ x_⊥ψ(x)|_x_s=κψ(x_s),x_s∈∂Ω. In 1D this is equivalent to a δ-potential (ħ^2κ/m) δ(x_s) at the surface (i.e., the end points of the line segment) while only allowing for solutions symmetric to the endpoints in a coordinate space extended beyond the latter. This means that in the approximation of a locally flat surface of the boundary we only have to replace the propagator K_δ in the above derivation by its symmetry-projected equivalent K_δ^+(x',x,t) = K_δ(x',x,t)+K_δ(-x',x,t) = K_0(x',x,t)+K_0(-x',x,t) + 2K_κ(x',x,t) while taking the trace perpendicular to the boundary to one side only. One can easily see that this only adds an additional surface term S_δ/4(m/2πħ^2)^d-1/2E^d-3/2/Γ(d-1/2)Θ(E) to Eq. (<ref>), which corresponds to a Neumann boundary condition, while the other terms remain unchanged. This is due to the fact that the Robin boundary condition is equivalent to a δ-potential on the surface combined with a Neumann boundary condition in the sense of reflection symmetry in the extended space. The above result reproduces the first term in the expansion derived in <cit.>. Note that our approach also comprises the two-dimensional (2D) case, which was treated separately in <cit.>. This will be essential in the next section, where we will use the results to derive the Weyl expansion for an interacting system. § TWO PARTICLES ON A LINE SEGMENT §.§ Configuration space In this section we consider two identical particles on a line with Dirichlet boundary conditions at q=0 and q=L [two particles in a box, see Fig. <ref>(a)] as an idealized model for confined particles. The particles shall interact only when they are at the same point, which is realized by a δ-potential. Furthermore we restrict ourselves to either bosons with zero spin or fermions with spin 1/2. In order to shorten notation, we use imaginary time and choose scaled units, i.e., β=it/ħandħ^2/2m=1. Inside the configuration space Ω={q∈^2| 0<q_i<L} the Hamiltonian is given by (q_1,q_2)=-∂^2/∂ q_1^2-∂^2/∂ q_2^2+√(8) δ(q_1-q_2). 
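As an aside before introducing relative and center-of-mass coordinates: quantization conditions of the type kL = nπ − arctan(k/κ), which appeared for the confined single-particle problem above and reappear below for the exact two-particle spectrum, have to be solved numerically. The following is a minimal root-finding sketch (our own illustrative code and parameter values, not part of the original derivation; the bracketing intervals follow from the monotonicity of the arctangent):

```python
import numpy as np
from scipy.optimize import brentq

def symmetric_levels(kappa, L, n_max):
    """Real roots of k*L = n*pi - arctan(k/kappa): symmetric states of a delta
    potential of strength kappa at the center of a box of half-length L."""
    roots = []
    for n in range(1, n_max + 1):
        f = lambda k: k * L - n * np.pi + np.arctan(k / kappa)
        if kappa > 0:      # repulsive: root lies in ((n-1/2)pi/L, n*pi/L)
            lo, hi = (n - 0.5) * np.pi / L, n * np.pi / L
        else:              # attractive: root lies in (n*pi/L, (n+1/2)pi/L)
            lo, hi = n * np.pi / L, (n + 0.5) * np.pi / L
        roots.append(brentq(f, lo + 1e-12, hi - 1e-12))
    return np.array(roots)

def bound_state(kappa, L):
    """Imaginary wave number k = 1j*kt of the n = 0 state for kappa*L < -1,
    from the analytic continuation kt = -kappa*tanh(kt*L); None otherwise."""
    if kappa * L >= -1.0:
        return None
    g = lambda kt: kt + kappa * np.tanh(kt * L)
    return brentq(g, 1e-9, -kappa)

kappa, L = -3.0, 2.0        # illustrative values only
print(symmetric_levels(kappa, L, 5))
print("bound state: kt =", bound_state(kappa, L))
```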
After introducing relative and center of mass coordinates x_1=1/√(2)(q_1-q_2), x_2=1/√(2)(q_1+q_2) the Hamiltonian reads (x_1,x_2)=-∂^2/∂ x_1^2-∂^2/∂ x_2^2+2 δ(x_1). For bosons, the wave functions must be symmetric with respect to particle exchange and, in the case of fermions, an interaction can only occur if the particles have different spin, i.e., the wave functions must be symmetric, too. We can therefore restrict ourselves to bosons. For =0 the system of two indistinguishable particles is equivalent to a system of one quasi-particle of the same mass in the fundamental domain ℱ={q∈Ω| q_1≥ q_2} while requiring a Neumann boundary condition on the line q_1=q_2 <cit.>. In the interacting case the same arguments yield the very same equivalence but with a Robin boundary condition ∂/∂ x_1ψ(x_1,x_2)|_x_1=0=ψ(0,x_2) on the symmetry line instead, as illustrated in Fig. <ref>(b),(c). §.§ Smooth part of the DOS Using the above equivalence of two particles on a line to one quasi-particle in the fundamental domain we can calculate the DOS directly from Eqs. (<ref>) and (<ref>): ρ(E) = L^2/8πΘ(E) - (2+√(2))L/8πΘ(E)/√(E)±√(2)L/4πΘ(E)/√(E+) + √(2)L/2πΘ(E+)/√(E+)+ ρ_c(E), with =^2 in scaled units (<ref>). We introduced the abbreviation =(1∓ 1)/2 here in order to shorten notation. The last term ρ_c(E) represents the contributions coming from the corners in the fundamental domain. Here we need the contribution from a π/4 corner with Dirichlet and Robin boundary conditions along the rays. The exact propagator for such a corner can be derived from the propagator for a π/2 corner with Robin boundary conditions on the axes, K_π/2(x',x,t)=K_δ^+(x_1',x_1,t)K_δ^+(x_2',x_2,t). The Dirichlet boundary condition at x_1=x_2 can be satisfied by antisymmetrizing K_π/2 with respect to this line. This yields the expression K_π/4(x',x,t) = 1/2[K_π/2((x_1',x_2'),(x_1,x_2),t)-K_π/2((x_2',x_1'),(x_1,x_2),t)]. The factor 1/2 was chosen such that the trace can be taken in the first quadrant instead of only integrating the inside of the corner. This is possible due to the symmetry of K_π/2. It is now convenient to write the symmetric propagator (<ref>) as K_δ^+(x',x,t)=K_0(x',x,t)+K_R(x',x,t) with K_R(x',x,t)=K_0(-x',x,t)+2K_κ(x',x,t). Here, K_R(x',x,t) can be interpreted as the correction to the free propagator representing the propagation from x to the point x' via the boundary with mixed boundary condition. The propagator K_π/4 now takes the form K_π/4(x',x,t) = 1/2[K_0(x_1',x_1,t)K_0(x_2',x_2,t) + K_R(x_1',x_1,t)K_R(x_2',x_2,t) + K_0(x_1',x_1,t)K_R(x_2',x_2,t) + K_R(x_1',x_1,t)K_0(x_2',x_2,t) -K_0(x_2',x_1,t)K_0(x_1',x_2,t) - K_R(x_2',x_1,t)K_R(x_1',x_2,t) - K_0(x_2',x_1,t)K_R(x_1',x_2,t) - K_R(x_2',x_1,t)K_0(x_1',x_2,t)]. The interpretation of the different terms for x'=x is shown in Figs. <ref> and <ref>. Tracing the above propagator will lead to different contributions to the DOS. The first term can be identified as a volume term, and the third and fourth can be combined to a surface term corresponding to mixed boundary conditions. The surface terms for the Dirichlet boundary are given by the fifth and part of the sixth term. All these contributions have to be dropped to get the parts that arise only from the corner: K_π/4(x',x,t) = 1/2[K_R(x_1',x_1,t)K_R(x_2',x_2,t) - K_R(x_2',x_1,t)K_R(x_1',x_2,t) + K_0(-x_2',x_1,t)K_0(-x_1',x_2,t) - K_0(x_2',x_1,t)K_R(x_1',x_2,t) - K_R(x_2',x_1,t)K_0(x_1',x_2,t)]. The inverse Laplace transform (<ref>) of the trace of this propagator yields the corner contribution. 
For the first term in Eq. (<ref>) the trace can be calculated separately for each coordinate and the inverse Laplace transform can be calculated as a convolution of the density β∫_0^∞x K_R(x,x,t)E= β(-1/4±1/2β(√(β))+β)E= -1/4δ(E) ±1/2π√(/E)Θ(E)/E+ +δ(E+) with itself. The remaining four terms of the propagator (<ref>) are traced as a whole because most of the integrals do not have to be evaluated as they cancel mutually after some elementary manipulations. Altogether, the π/4 corner contribution is ρ _π/4(E) = 5/32δ(E) + 1/4π√(/E+)Θ(E)/E+2∓1/8π√(/E)Θ(E)/E+∓1/4π√(2/E)Θ(E)/E+2- [ 1/4δ(E+) + 1/2π√(/E+)Θ(E+)/E+2]. Note that all the -dependent expressions give multiples of δ(E) in the limit → 0 (Neumann case), as expected. Combining this result with the contribution from a π/2 Dirichlet-Dirichlet corner (see, e.g., <cit.>) we finally obtain the entire DOS for the system ρ(E) = L^2/8πΘ(E) - (2+√(2))L/8πΘ(E)/√(E)±√(2)L/4πΘ(E)/√(E+) + √(2)L/2πΘ(E+)/√(E+)+ 3/8δ(E) ∓1/4π√(/E)Θ(E)/E+∓1/2π√(2/E)Θ(E)/E+2 + 1/2π√(/E+)Θ(E)/E+2- [ 1/2δ(E+) + 1/π√(/E+)Θ(E+)/E+2]. The above result can be represented in a shorter form if we consider positive energies only. Then the cases of an attractive and a repulsive interaction only differ in the overall sign of the μ̃-dependent corner corrections: ρ_+(E) = L^2/8π - (2+√(2))L/8π1/√(E) + √(2)L/4π1/√(E+)∓[ 1/4π√(/E)1/E+ + 1/2π√(2/E)1/E+2. -. 1/2π√(/E+)1/E+2] +3/8δ(E). Note that the corner corrections are monotonous and nonzero for E>0 but vanish for E→∞. This means that the corner corrections are most important for energies near the ground state or, in the case of an attractive interaction, close to E=0. In the regime -<E<0 the DOS is conveniently expressed in terms of the excitation energy E^*=E+: ρ_-(E^*)=√(2)L/2πΘ(E^*)/√(E^*)-1/2δ(E^*)-1/π√(/E^*)Θ(E^*)/E^*+. The first two terms correspond exactly to the DOS for one particle of mass 2m on a line with Dirichlet boundary conditions at the end points: for very strong interaction the two particles behave as one single particle. The third term represents the interplay of boundary reflections and the interaction. If integrated from -μ to 0 it reduces the level counting function by exactly 1/2 bound state.§.§ Comparison with numerical calculations The comparison with the exact quantum mechanical DOS will be done using the level counting function 𝒩(E)=∫_-∞^EE'ρ(E'). In all plots E and μ̃ will be given in units of 1/L^2 (κ̃ in units of 1/L) which is equivalent to setting L=1 in 𝒩. In Fig. <ref> the semiclassical level counting function is plotted for -25≤≤ 25 in steps of Δ=10. For E<0 the three curves representing the attractive cases κ=-25,-15,-5 resemble 1D single-particle counting functions. The exact solutions of the Schrödinger equation for the Hamiltonian (<ref>) can be found by using that the system is symmetric with respect to the line x_2=L/√(2). This means that we can determine a full set of energy eigenstates that are either symmetric or antisymmetric to that line [see Fig. <ref>(a)]. Except for normalization, this is equivalent to considering only the lower part of the fundamental domain while requiring either Neumann or Dirichlet boundary conditions at the symmetry line [see Fig. <ref>(b)]. The Dirichlet boundary condition at the line x_1=x_2 corresponds to the antisymmetric solutions of the extended system shown in Fig. <ref>(c). Moreover, the normalization constants for the original and the extended system are the same. The solutions can be written straightforwardly as antisymmetrized products of 1D wave functions. 
ψ_mn^D = 1/2[ψ_m^D(x_1)ψ_n^D(x_2)-ψ_m^D(x_2)ψ_n^D(x_1)], k_m,k_n∈Spec^D, 0≤ m<n, ψ_mn^N = 1/2[ψ_m^N(x_1)ψ_n^N(x_2)-ψ_m^N(x_2)ψ_n^N(x_1)], k_m,k_n∈Spec^N, 0≤ m<n. Here, the 1D wave functions and the sets Spec^N/D are defined by ψ_n^D(x) = A_n^Dsin(k_n(x-d)), Spec^D = {k_n| k_n d=nπ-arctan(k_n/),n∈_0}∪{k_0=ık̃_0 |k̃_0=-tanh(k̃_0d),k̃_0>0}, ψ_n^N(x) = A_n^Ncos(k_n(x-d)), Spec^N = {k_n| k_n d=nπ-π/2-arctan(k_n/),n∈}∪{k_0=ık̃_0|k̃_0=-(k̃_0 d),k̃_0>0}, where k_0∈Spec^D is chosen either positive or purely imaginary with positive imaginary part depending on the value of κ d with d=L/√(2). The energy eigenvalues of the system are now given as E_mn=k_m^2+k_n^2 with k_m,k_n either both in Spec^D or both in Spec^N. The energies have been calculated numerically and the comparison to the smooth level counting function is shown in Fig. <ref> for the repulsive case (=10) and in Fig. <ref> for the attractive case (=-25). The value =10 for the repulsive case has been chosen such that the resulting level counting function lies well in between the two limits =0 and →∞ corresponding to non-interacting bosons and fermions (denoted as 𝒩_N and 𝒩_Din Fig. <ref>). For ≥ 0 numerical calculations show that the ratio of corner corrections in the DOS to the full semiclassical DOS at the ground state has a maximum of about 8% for ≈ 5.814 which is very close to the value at which the equation E_0=^2 for the ground state energy holds (E_0 varies smoothly from E_0=2π^2 for =0 to E_0=5π^2 for =∞). As this ratio decreases with the energy the corner corrections can be neglected in the DOS for high energies. This holds also true for the level counting function, where they decrease rapidly from 3/8 to -1/8 as the energy increases. Figure 7 (attractive case) shows that the corner corrections are very important for negative energies and result in a shift by one level in the level counting function at E=0. In fact the corner corrections give a contribution that nearly exactly reproduces the quantum mechanical energy eigenstates. By integrating Eq.(<ref>) to get 𝒩_-(E^*) and substituting k^*=√(E^*) the equation 𝒩_-(k_n^*)+1/2=n for n∈ yields k_n^*d=nπ/2-arctan(k_n^*/) which gives exactly the allowed real wave numbers in Spec^D and Spec^N. Since the negative quantum-mechanical energy levels are always given as E=k_0^2+k_n^2 with k_0 purely imaginary and k_n real we can see that the error in the approximation (<ref>) of the eigenenergies lies only in the assumption that the imaginary wave numbers k_0 in both Spec^D and Spec^N are equal to -ı. This approximation is very good for large absolute values ofand is still reasonable as long as the requantization with ρ_- makes sense, i.e., the ground state energy is negative. The error Δ E=|k_0^2-^2| of the semiclassically requantized energies is plotted in Fig. <ref> for -20<<-2. The ground state energy changes sign at =-(3π)/(2√(2))≈-3.332 (semiclassical prediction, while the quantum mechanical result yields ≈-3.286). At this point the error is less than 0.45/L^2 which is small compared to the mean level spacing [ρ_-(μ̃)]^-1≈ 19/L^2. This shows that the corner corrections are essential for E<0 and can be used to requantize the system in this regime. Moreover one can use the level counting function directly to find the number of bound states in the system. § CONCLUSION We presented an alternative derivation for the propagator for a δ-potential in Sec. <ref>. 
The resulting expression has a natural interpretation by means of free propagation and hard-wall reflection, taking exponentially weighted detours or shortcuts in the cases of repulsive or attractive interaction, respectively. The propagator can be used to calculate the single-particle DOS for arbitrary shaped billiards of any dimension with mixed boundary conditions and/or δ-barriers inside the volume. Although the calculations do not include curvature terms the formalism is straightforward and allowed us to calculate the contributions from a Dirichlet-Robin (π/4)-corner to the DOS which, to our knowledge, have not been calculated before. The corrections given by these corners are, as expected, of lowest order in the energy but give very accurate corrections at low positive energies, where the repulsive and the attractive case differ only in an overall sign. In the attractive case the corner corrections allow for the requantization of the system for negative energies, which shows the accuracy of the approximations used in the formalism. By using the equivalence of the 1D two-particle problem with δ-interactions to the 2D problem in the fundamental domain with mixed boundary conditions we presented a powerful tool for the treatment of few-body systems. Its natural application lies in the approximation of the DOS for an arbitrary number of δ-interacting particles. It is important to note that the scattering and bound states of such a system with infinite volume are known exactly <cit.>. This means that the short-time propagators required for the treatment of higher particle numbers can, in principle, be calculated directly from them. Our approach presented here can be extended to include smooth external potentials, which allowed us to examine the thermodynamics of a system of δ-interacting bosons in a harmonic confinement <cit.>. B.G. thanks the Studienstiftung des deutschen Volkes for support. We further acknowledge support through the Deutsche Forschungsgemeinschaft. § CONTINUUM LIMIT OF THE PROPAGATOR The antisymmetric solutions of the confined system have equidistant k's, i.e., Δ k=k_n+1-k_n=π/L for arbitrary n. So the first line of Eq. (<ref>) is easily verified to have the continuum limit 1/2π∫_-∞^∞k-ıħ t/2mk^2[cos(k(x'-x))-cos(k(x'+x))], whereas the second line needs a more careful treatment. Omitting the indices we can write 2L/A^2ψ(x')ψ(x)= cos(k(|x'|-|x|))-cos(k(|x'|+|x|-2L)). Using trigonometric identities and identifying sin(2kL) =2tan(kL)/1+tan^2(kL)=-2k/κ/1+(k/κ)^2 cos(2kL) =1-tan^2(kL)/1+tan^2(kL)=1-(k/κ)^2/1+(k/κ)^2 this yields 2L/A^2ψ(x')ψ(x)= cos(k(|x'|+|x|))+cos(k(|x'|-|x|))- 2ik(|x'|+|x|)/1+i, where the absolute values in the first two arguments can be dropped due to symmetry. Now, observing that A^2=1+O(L^-1) and Δ k_n=k_n+1-k_n=π/L+O(L^-2)for all n as L goes to infinity, the continuum limit can be performed exactly in the same way as for the antisymmetric solutions. The bound state solution of the confined system has an exact limit for the free space, i.e., an exponentially decaying wave function localized at the potential. Summing up all the contributions we get Eq. (<ref>). § INVERSE LAPLACE TRANSFORMATION In Eq. (<ref>) we need the inverse Laplace transform f_n(E) of functions of the form F_n(β)=β^-nμβ(√(μβ)) with 2n∈_0. To this end one can use the property of the two-sided Laplace transformation E∫_-∞^EE'f(E')β=1/βEf(E)β to get f_n recursively from f_0 or f_1/2. We will take a different approach that yields the full expression. 
It is straightforward to show nF_n+1(β)=(μ-β)F_n(β)-√(μ/π)β^-(n+1/2). This can be used to verify by induction the formula F_n(β) = Γ(γ)/Γ(n)(μ-β)^n-γF_γ(β) - √(μ/π)∑_k=1^n-γΓ(n-k)/Γ(n)(μ-β)^k-1β^-(n-k+1/2), where γ:= 1 n∈,1/2n+1/2∈. Now one can use the linearity of the Laplace transformation and the identity E-Ef(E)β=βEf(E)β to get f_n (E) = Γ(γ)/Γ(n)(E+μ)^n-γf_γ(E)- - √(μ/π)∑_k=1^n-γΓ(n-k)/Γ(n)Γ(n-k+1/2)(E+μ)^k-1E^n-k-1/2. The functions f_γ are given by <cit.> f_γ(E)= 2/πarctan(√(E/μ))Θ(E)γ=1,1/√(π)Θ(E)/√(E+μ) γ=1/2. | http://arxiv.org/abs/1705.09637v2 | {
"authors": [
"Benjamin Geiger",
"Juan-Diego Urbina",
"Quirin Hummel",
"Klaus Richter"
],
"categories": [
"cond-mat.quant-gas",
"nlin.SI",
"quant-ph"
],
"primary_category": "cond-mat.quant-gas",
"published": "20170526161927",
"title": "Semiclassics in a system without classical limit: The few-body spectrum of two interacting bosons in one dimension"
} |
[email protected] Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 USA Princeton Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 USA Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 USA Princeton Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 USA School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui 230026, China Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 USA Princeton Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 USALarge amplitude waves in magnetized plasmas, generated either by external pumps or internal instabilities, can scatter via three-waves interactions. While three-wave scatterings in either forward or backward geometry are well-known, what happens when waves propagate at angles with one another in magnetized plasmas remains largely unknown, mainly due to the analytical difficulty of this problem. In this paper, we overcome this analytical difficulty and find a convenient formula for three-wave coupling coefficients in cold, uniform, magnetized plasmas in the most general geometry. This is achieved by systematically solving the fluid-Maxwell model to second order using a multiscale perturbative expansion. The general formula for the coupling coefficient becomes transparent when we reformulate it as the S matrix element of a quantized Lagrangian. Using the quantized Lagrangian, it is possible to bypass the perturbative solution and directly obtain the nonlinear coupling coefficient from the linear response of plasmas. To illustrate how to evaluate the cold coupling coefficient, we give a set of examples where the participating waves are either quasi-transverse or quasi-longitudinal. In these examples, we determine the angular dependence of three-wave scattering, and demonstrate that backscattering is not necessarily the strongest scattering channel in magnetized plasmas, in contrast to what happens in unmgnetized plasmas. Our approach gives a more complete picture, beyond the simple collimated geometry, of how injected waves can decay in magnetic confinement devices, as well as how lasers can be scattered in magnetized plasma targets. Three-wave scattering in magnetized plasmas:from cold fluid to quantized Lagrangian Nathaniel J. Fisch December 30, 2023 ===================================================================================== § INTRODUCTIONCoherent three-wave scattering is perhaps the simplest and the most common type of nonlinear interaction in plasmas. It happens, for example, in magnetic confinement devices, where waves injected by antenna arrays decay to other waves <cit.>. In the case where the wave is injected to drive current in a tokamak <cit.>, there is a possibility that the lower hybrid current drive is affected by unwanted decays near the tokamak periphery <cit.>. Even more importantly, three-wave scattering also happens, for example, in laser implosion experiments <cit.>, where high intensity lasers interact with plasmas. During magnetized implosions, where the magnetic field is imposed to enhance particle confinement <cit.>, multiple laser beams may scatter and reflect one another via magnetic resonances. In fact, the magnetic resonances can be utilized to mediate energy transfer between laser beams to achieve pulse amplification <cit.>, where three-wave scattering plays an essential role. 
Despite of its importance, coherent three-wave scattering, well-studied in unmagnetized plasma <cit.>, remains poorly understood when plasmas become magnetized, except in the simple forward or backward geometry, where the participating waves are collimated. This situation is mostly due to the analytical difficulty when external magnetic field is present.Such difficulty deserves to be overcome in the midst of recent developments in strong magnetic field technologies <cit.>. Using these technologies, magnetic fields on the order of mega-Gauss or even giga-Gauss can be produced. Such strong magnetic field makes electron gyrofrequency comparable to the plasma frequency in laser implosion experiments, in which the anisotropy introduced by the magnetic field can play a prominent role. Since multiple laser beams usually propagate at angles to one another and with the magnetic field during laser-driven implosions, understanding the angular dependence of three-wave scattering in magnetized plasma becomes indispensable for making a knowledgeable choice of the experimental setups to optimize laser-plasma coupling. By far, most theoretical work on laser scattering in magnetized plasmas is focused on the simple collimated geometry. In this simple geometry, three kinds of theories have been developed.The first kind is coupled mode theory, which searches for normal modes of the nonlinear equations <cit.>. The normal modes are typically linear combinations of fluctuating quantities, and the equations satisfied by normal modes are formally simple. However, these equations hide the complexity of the nonlinear problem inside their complicated coupling coefficients, from which little physical meaning has been extracted. The second kind is nonlinear current theory, which describes three-wave parametric interaction by adding a nonlinear source term into the Maxwell's equation. The nonlinear current can be expressed in terms of a coupling tensor, which is combined with the dielectric tensor to give a nonlinear dispersion relation of the system.Using fluid models, parametric growth rates have been obtained for extraordinary wave pump <cit.>, lower hybrid wave pump <cit.>, as well as the right- and left-circularly polarized wave pumps <cit.>. To capture thermal effects, a simple treatment retains only thermal corrections to the dielectric tensor <cit.>. A more complete treatment also include thermal corrections to the coupling tensor <cit.>. However, beyond the simple collimated geometry, such treatment becomes so cumbersome that decades of efforts have been spent on just simplifying the expressions <cit.>, with very little extractable physical results <cit.>. Beside the coupled mode theory and the nonlinear current theory, the third kind of theory uses Lagrangian formulation. In this more systematic approach, the interaction Lagrangian is obtained either from the Low's Lagrangian <cit.>, or the oscillation-center Lagrangian <cit.> by expanding plasma response to the third order. Although transparent in formalism, three-wave interactions in magnetized plasma, where the waves are not collimated, remains to be analyzed systematically, in generality, and in detail. In this paper, we overcome the analytical difficulty in fluid theory and obtain angular dependence of three-wave scattering in cold, uniform, magnetized plasmas in the most general geometry. 
This is achieved by systematically solving the fluid-Maxwell system to second order in the perturbation series, where secular terms are removed using a multiscale expansion. Using this technique, we manage to obtain an expression for the coupling coefficient that is not only explicit, but also convenient, from which illuminating physical results can be extracted. Moreover, we show that the formula for the coupling coefficient, which contains six permutations of the same structure, naturally arises as the scattering matrix (S matrix) element of a quantized Lagrangian. This refreshing perspective, emerging from detailed cold fluid calculations, offers a high-level methodology, through which three-wave coupling can be easily computed.The cold fluid results are applicable when the wave lengths of participating waves are much longer than both the Debye length and the typical gyroradius. Within the applicable range of the fluid model, our non-relativistic perturbative treatment is valid when the amplitudes of waves are small enough, so that the linear eigenmode structures are preserved, and spectrum broadening is limited.This paper is organized as follows. In Sec. <ref>, we solve the fluid-Maxwell system to second order using a multiscale expansion, in the case where the fluctuation contains a discrete spectrum of waves. In Sec. <ref>, we simplify the general equation in the simple case where there are only three linear waves participating in the interaction. In Sec. <ref>, we distill the classical theory into a quantized Lagrangian, where the formula for three-wave coupling becomes obvious. In Sec. <ref>, we illustrate the general cold fluid results using a set of examples, where the participating waves are either purely electrostatic or purely electromagnetic. The conclusion and discussion are given in Sec. <ref>, and supplementary materials are provided in the Appendixes.§ PERTURBATIVE SOLUTION OF FLUID-MAXWELL SYSTEMIn the fluid regime, where both the Debye length and the typical gyroradius are much smaller than the shortest wavelength, charged particles in the plasma respond collectively to perturbations. In this situation, the plasma system is well described by the fluid-Maxwell equations∂_t n_s = -∇·(n_s𝐯_s),∂_t𝐯_s = -𝐯_s·∇𝐯_s+e_s/m_s(𝐄+𝐯_s×𝐁),∂_t𝐁 = -∇×𝐄,∂_t𝐄 =c^2∇×𝐁-1/ϵ_0∑_se_sn_s𝐯_s.The continuity equation [Eq. (<ref>)] describes the conservation of particles of species s, whose density is n_s and average velocity is 𝐯_s. The momentum equation [Eq. (<ref>)] governs how the velocity field 𝐯_s change due to both the advection and the Lorentz force, where e_s and m_s are the charge and mass of individual particles of species s. Finally, the magnetic field 𝐁 evolves according to the Faraday's law [Eq. (<ref>)], and the electric field 𝐄 evolves according to the Maxwell-Ampère's law [Eq. (<ref>)], where the current density is contributed by all charged species in the system.The fluid-Maxwell equations [Eqs. (<ref>)-(<ref>)] are a system of nonlinear hyperbolic partial differential equations. Such a system of equations are in general difficult to solve. Nevertheless, when fluctuation near equilibrium is small, nonlinearities may be regarded as perturbations, and the equations may be solved perturbatively. To see when nonlinearities may be regarded as perturbations, we can normalize equations such that all quantities become dimensionless numbers. For example, we may normalize time to the plasma frequency ω_p and distance to the skin depth c/ω_p. 
We may further normalize mass to electron mass m_e, charge to elementary charge e, density to unperturbed density n_s0, and velocity to the speed of light c. Finally, we can normalize electric field to m_ecω_p/e and normalize magnetic field to m_eω_p/e.With the above normalizations, the fluid-Maxwell equation can be written in dimensionless form. In this form, nonlinearities are products of small numbers and are therefore even smaller, provided that the perturbations are small.In the absence of nonlinearities, the general solution to the fluid-Maxwell system is a spectrum of linear waves with constant amplitudes. Now imaging turning on nonlinearities adiabatically, then waves start to scatter one another, whose amplitudes start to evolve slowly in space and time. This physical picture may be translated into a formal mathematical procedure. Formally, to solve the fluid-Maxwell equations peturbatively, it is helpful to keep track of terms by inserting an auxilliary small parameter λ≪1 in the perturbation series, and let the adiabatic parameter λ→ 1 in the end, mimicking the adiabatic ramping up of nonlinearities. The electric field, the magnetic field, the density, and the velocity can be expanded in asymptotic series𝐄 = 𝐄_0+λ𝐄_1+λ^2𝐄_2+…, 𝐁 = 𝐁_0+λ𝐁_1+λ^2𝐁_2+…,n_s = n_s0+λ n_s1+λ^2n_s2+…, 𝐯_s = 𝐯_s0+λ𝐯_s1+λ^2𝐯_s2+…,where a self-consistent equilibrium is given by 𝐄_0=0 and 𝐯_s0=0, while the background magnetic field 𝐁_0 and density n_s0 are some constants. It is well-known that if we only expand field amplitudes, the naive asymptotic solution will contain secular terms for nonlinear problems. To remove the secular terms, we also need to do a multiscale expansion<cit.> in both space and timex^i = x^i_(0)+1/λ x^i_(1)+1/λ^2x^i_(2)+…,t = t_(0)+1/λ t_(1)+1/λ^2t_(2)+…,where x^i is the i-th components of vector 𝐱. In the above expansion, x^i_(0) is the shortest spatial scale. In comparison, one unit of x^i_(1) is 1/λ times longer that one unit of x^i_(0), and so on. Similarly, t_(0) is the fastest time scale, and one unit of t_(n) is 1/λ^n times longer that one unit of t_(0). In the above multiscale expansion, different spatial and temporal scales are regarded as independent∂_i^(a)x^j_(b)=δ_i^jδ^(a)_(b), ∂_t^(a)t_(b)=δ^(a)_(b),and by chain rule, the total spatial and temporal derivatives are∂_i = ∂_i^(0)+λ∂_i^(1)+λ^2∂_i^(2)+…, ∂_t = ∂_t(0)+λ∂_t(1)+λ^2∂_t(2)+….Using the multiscale expansion (<ref>)-(<ref>), together with expansion in field amplitudes (<ref>)-(<ref>), secular terms can be removed and the perturbative solution is well behaved. In Appendix <ref>, we demonstrate how the multiscale expansion can be successively applied to a hyperbolic system of ordinary differential equations.§.§ First order equationsAlthough the first order equations and their solutions are well-known <cit.>, here let us briefly review some important results, in order to introduce some new notations that will be used in the next subsection. To obtain first order equations, we expand fields, space, and time in fluid-Maxwell equations, and collect all the O(λ) terms∂_t(0)𝐁_1 = -∇_(0)×𝐄_1, ∂_t(0)𝐯_s1 = e_s/m_s(𝐄_1+𝐯_s1×𝐁_0), ∂_t(0) n_s1 = -n_s0∇_(0)·𝐯_s1, ^(0)_ijE_1^j = -1/ϵ_0∑_se_sn_s0∂_t(0)v^i_s1. Here, we have written the equations in the order that we are going to use them. The electric field equation (<ref>) is obtained by substituting the Faraday's law (<ref>) into the Maxwell-Ampère's equation (<ref>), and then making the multiscale expansion. 
This procedure introduces the zeroth order differential operator^(0)_ij:=(∂_t(0)^2-c^2∇_(0)^2)δ_ij+c^2∂_i^(0)∂_j^(0).This operator is the d'Alembert wave operator projected in the transverse direction. Since the first order equations are linear, the general solution is a superposition of plane waves. Let us write the electric field in the form𝐄_1=1/2∑_𝐤∈𝕂_1ℰ_𝐤^(1) e^iθ_𝐤,where ℰ_𝐤^(1)(t_(1),𝐱_(1);t_(2),𝐱_(2);…) is the slowly varying complex wave amplitude, and θ_𝐤=𝐤·𝐱_(0)-ω_𝐤t_(0) is the fast varying wave phase. The summation over the wave vector 𝐤 runs over a discrete spectrum 𝕂_1. In order for 𝐄_1∈ℝ^3 to be a real vector, whenever 𝐤∈𝕂_1 is in the spectrum, -𝐤 must also be in the spectrum. Moreover, the amplitude ℰ_𝐤^(1) must satisfy the reality condition ℰ_-𝐤^(1)=ℰ_𝐤^(1)*. Therefore, it is natural to introduce the notations𝐳_-𝐤 = 𝐳_𝐤^*, α_-𝐤 = -α_𝐤,for any complex vector 𝐳∈ℂ^3 and real scalar α∈ℝ that are labeled by subscript 𝐤. For example, the complex vector ℰ_-𝐤=ℰ_𝐤^*, and the real scalar θ_-𝐤=-θ_𝐤. Using the above notations, the reality condition is conveniently built into the symbols. In the spectral expansion Eq. (<ref>), it is tempting to write the summation over the discrete wave vector 𝐤 as an integral over some continuous spectrum. However, such a treatment would be very cumbersome due to double counting, because the wave amplitude ℰ_𝐤, which can vary on slow spatial and temporal scales, already has a spectral width. The first order magnetic field 𝐁_1, velocity field 𝐯_s1, and density field n_s1 can be expressed in terms of the first order electric field 𝐄_1. Substituting expression (<ref>) for the electric field into the first order fluid-Maxwell equations (<ref>)-(<ref>), we immediately find𝐁_1 = 1/2∑_𝐤∈𝕂_1𝐤×ℰ^(1)_𝐤/ω_𝐤e^iθ_𝐤, 𝐯_s1 = ie_s/2m_s∑_𝐤∈𝕂_1𝔽_s,𝐤ℰ^(1)_𝐤/ω_𝐤e^iθ_𝐤,n_s1 = ie_sn_s0/2m_s∑_𝐤∈𝕂_1𝐤·𝔽_s,𝐤ℰ^(1)_𝐤/ω_𝐤^2e^iθ_𝐤.Here, we introduce the forcing operator 𝔽_s,𝐤:ℂ^3→ℂ^3, acting on any complex vector 𝐳∈ℂ^3 by 𝔽_s,𝐤𝐳:=γ_s,𝐤^2[𝐳+iβ_s,𝐤𝐳×𝐛-β_s,𝐤^2(𝐳·𝐛)𝐛].In the above definition, 𝐛 is the unit vector in the 𝐁_0 direction, γ_s,𝐤^2:=1/(1-β_s,𝐤^2) is the magnetization factor, β_s,𝐤:=Ω_s/ω_𝐤 is the magnetization ratio, and Ω_s=e_sB_0/m_s is the gyrofrequency of species s. It is clear from Eq. (<ref>) that the forcing operator 𝔽_s,𝐤 is related to the linear electric susceptibility χ_s,𝐤 byχ_s,𝐤=-ω_ps^2/ω^2_𝐤𝔽_s,𝐤,where ω^2_ps=e_s^2n_s0/ϵ_0m_s is the squared plasma frequency of species s. While the susceptibility χ_s,𝐤 is typically used in linear theories, the forcing operator 𝔽_s,𝐤 will be much more convenient when we discuss nonlinear effects. Note that in the limit B_0→ 0, the forcing operator 𝔽_s,𝐤→𝐈 becomes the identity operator, and χ_s becomes the cold unmagnetized susceptibility.The forcing operator 𝔽_s,𝐤 will be extremely useful later on when we solve the second order equations. Therefore, let us observe a number of important properties of this operator. For brevity, we will suppress the subscript s,𝐤, with the implied understanding that all quantities have the same subscript. First, the operator satisfies the vector identity 𝔽𝐳=𝐳+iβ(𝔽𝐳)×𝐛.This identity guarantees that the velocity field 𝐯_s1, given by Eq. (<ref>), satisfies the first order momentum equation (<ref>). Second, 𝔽 is a self-adjoint operator with respect to the inner product ⟨𝐰,𝐳⟩:=𝐰^†𝐳,𝐰^†𝔽𝐳=(𝔽𝐰)^†𝐳, for all complex vectors 𝐳, 𝐰∈ℂ^3.Using this property, we can move 𝔽 from acting on one vector to acting on the other vector in an inner product pair.
Third, it is a straightforward calculation to show that𝔽^2=𝔽-ω∂𝔽/∂ω,where the dependence of 𝔽 on ω comes from β and γ in definition (<ref>). Indeed, using its definition, 𝔽 satisfies an obvious identity𝔽(-ω)=𝔽^*(ω),which can also be written as 𝔽_-𝐤=𝔽^*_𝐤. Lastly, when two frequencies ω_1 and ω_2 are involved, we have an nontrivial quadratic identity(β_1-β_2)𝔽_1𝔽_2=β_1𝔽_1-β_2𝔽_2,which can be shown by straight forward calculation. Using this identity, we can reduce higher powers of the forcing operators to their linear combinations. Combining with property Eq. (<ref>), the above identity can generate a number of other similar identities. Properties (<ref>)-(<ref>) will enable important simplifications when we solve the second order equations.Having expressed other first order perturbations in terms of 𝐄_1, the electric field equation(<ref>) constrains the relations between the wave amplitude ℰ^(1)_𝐤, the wave frequency ω_𝐤, and the wave vector 𝐤. Substituting the expression (<ref>) for 𝐯_s1 into the electric field equation, we obtain the first order electric field equation in the momentum spaceω_𝐤^2ℰ^(1)_𝐤+c^2𝐤×(𝐤×ℰ^(1)_𝐤)=∑_sω_ps^2𝔽_s,𝐤ℰ^(1)_𝐤,which must be satisfied for individual wave vector 𝐤 in the spectrum. The above equation can be written in a matrix form 𝔻_𝐤ℰ^(1)_𝐤=0, where the dispersion tensor𝔻_𝐤^ij:=(ω_𝐤^2-c^2𝐤^2)δ^ij +c^2k^ik^j-∑_sω_ps^2𝔽_s,𝐤^ij.The matrix equation has nontrivial solutions when the wave vector 𝐤 and wave frequency ω_𝐤 are such that the linear dispersion relation 𝔻(𝐤,ω_𝐤)=0 is satisfied. When the dispersion relation is indeed satisfied, solving the matrix equation gives wave polarizations. It is well-known that in magnetized plasmas, the eignemodes are two mostly electromagnetic waves and a number of mostly electrostatic hybrid waves. In Appendix <ref>, we review the dispersion relations and wave polarizations when waves propagate at arbitrary angles with respect to the background magnetic field. Finally, to introduce one more operator that will be useful for solving the second order equations, let us calculate the wave energy. The average energy carried by linear waves can be found by summing up average energy carried by fields and particles. For a single linear wave with wave vector 𝐤, after averaging on t_(0) and 𝐱_(0) scale, the wave energyU_𝐤 = ϵ_0/2⟨𝐄_1^2⟩_(0) +1/2μ_0⟨𝐁_1^2⟩_(0)+1/2∑_sn_s0m_s⟨𝐯_s1^2⟩_(0)= ϵ_0/4ℰ^(1)*_𝐤·ℍ_𝐤ℰ^(1)_𝐤,where we introduce the normalized wave energy operatorℍ_𝐤 := 2𝕀-∑_sω_ps^2/ω_𝐤∂𝔽_s,𝐤/∂ω_𝐤= 1/ω_𝐤∂(ω_𝐤^2ϵ_𝐤)/∂ω_𝐤.Here, ϵ_𝐤=𝕀+∑_sχ_s,𝐤 is the dielectric tensor, and we have used the Eq. (<ref>), which relates the forcing operator to the susceptibility. When evaluating ⟨𝐁_1^2⟩, we have used expression (<ref>) for 𝐁_1, followed by simplification using the momentum space electric field equation (<ref>). This term is then combined with ⟨𝐯_s1^2⟩, calculated using Eq. (<ref>) for 𝐯_s1. The final result is simplified using identity (<ref>) for the forcing operator 𝔽_s,𝐤. Now that we have introduced the wave energy operator ℍ_𝐤, the momentum space electric field equation (<ref>) can be converted into a form that is closely related to the wave energy∂ω_𝐤/∂ k_lω_𝐤ℍ_𝐤^ijℰ^(1)j_𝐤 =c^2(2k_lδ_ij-k_iδ_jl-k_jδ_il)ℰ^(1)j_𝐤.This form of the first order electric field equation is obtained by taking ∂/∂ k_l derivative on both side of Eq. (<ref>). Notice that although ℰ^(1)_𝐤 is labeled by 𝐤, it does not explicitly depend on 𝐤. This alternative form of the first order electric field equation will be useful when we solve the second order equations. 
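The algebraic properties of the forcing operator listed above are easy to check numerically. The following Python sketch (not part of the original derivation; the function name forcing_operator and all numerical values are illustrative assumptions) builds 𝔽 as a 3×3 complex matrix from its definition and verifies the self-adjointness, the identity 𝔽^2=𝔽-ω∂𝔽/∂ω, and the quadratic identity, with the frequency derivative approximated by a central finite difference.

```python
import numpy as np

def forcing_operator(omega, Omega_s, b):
    """Forcing operator F_{s,k} as a 3x3 complex matrix.
    omega  : wave frequency; Omega_s : gyrofrequency e_s B0 / m_s of species s
    b      : unit vector along the background magnetic field
    """
    b = np.asarray(b, dtype=float)
    beta = Omega_s / omega                     # magnetization ratio
    gamma2 = 1.0 / (1.0 - beta**2)             # magnetization factor
    K = np.array([[0.0, -b[2], b[1]],
                  [b[2], 0.0, -b[0]],
                  [-b[1], b[0], 0.0]])         # K z = b x z, so z x b = -K z
    return gamma2 * (np.eye(3) - 1j * beta * K - beta**2 * np.outer(b, b))

# toy parameters (illustrative only)
omega, Omega_e = 2.0, 0.7
b = np.array([0.0, 0.0, 1.0])
F = forcing_operator(omega, Omega_e, b)

# self-adjointness with respect to <w, z> = w^dagger z
assert np.allclose(F, F.conj().T)

# F^2 = F - omega dF/domega, with dF/domega from a central finite difference
d = 1e-6
dF = (forcing_operator(omega + d, Omega_e, b) -
      forcing_operator(omega - d, Omega_e, b)) / (2 * d)
assert np.allclose(F @ F, F - omega * dF)

# quadratic identity (beta1 - beta2) F1 F2 = beta1 F1 - beta2 F2
w1, w2 = 1.3, 3.1
F1, F2 = forcing_operator(w1, Omega_e, b), forcing_operator(w2, Omega_e, b)
b1, b2 = Omega_e / w1, Omega_e / w2
assert np.allclose((b1 - b2) * F1 @ F2, b1 * F1 - b2 * F2)
```

In this sketch the checks are purely numerical consistency tests; they hold for any frequencies away from the gyrofrequency, where the magnetization factor is finite.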
§.§ Second order equationsTo obtain the second order equations, we collect all the O(λ^2) terms in the asymptotic expansions. The resultant second order equations are ∂_t(0)𝐁_2 = -∂_t(1)𝐁_1-∇_(1)×𝐄_1-∇_(0)×𝐄_2,∂_t(0)𝐯_s2 = -∂_t(1)𝐯_s1-𝐯_s1·∇_(0)𝐯_s1 +e_s/m_s(𝐯_s1×𝐁_1+𝐄_2+𝐯_s2×𝐁_0),∂_t(0) n_s2 = -∂_t(1) n_s1-∇_(0)·(n_s1𝐯_s1) -n_s0(∇_(1)·𝐯_s1+∇_(0)·𝐯_s2),^(0)_ijE_2^j = -^(1)_ijE_1^j-1/ϵ_0∑_se_s[n_s0∂_t(1)v^i_s1 +∂_t(0)(n_s1v^i_s1)+n_s0∂_t(0)v^i_s2].Again, the electric field equation (<ref>) is obtained by substituting Faraday's law into the Maxwell-Ampère's equation. In doing so, we introduce the first order differential operator^(1)_ij: = 2(∂_t(0)∂_t(1)-c^2∂_l^(0)∂_l^(1))δ_ij+ c^2(∂_i^(0)∂_j^(1)+∂_i^(1)∂_j^(0)).This operator mixes fast and slow scales, and will govern how wave amplitudes vary on the slow scales due to interactions that happen on the fast scale.To solve the second order equations, notice that although the second order equations are nonlinear in 𝐁_1, 𝐯_s1, and n_s1, they are nevertheless linear in𝐄_2, 𝐁_2, 𝐯_s2, and n_s2. Therefore, we may solve for the second order perturbations from the linear equations, regarding nonlinearities in first order perturbations as source terms. The general solution to such a system of linear equations is again a superposition of plane waves. Let us write the second order electric field𝐄_2=1/2∑_𝐤∈𝕂_2ℰ_𝐤^(2) e^iθ_𝐤.Similar to the first order expansion (<ref>), in the above expression, ℰ_𝐤^(2)(t_(1),𝐱_(1);t_(2),𝐱_(2);…) is the second order slowly varying complex wave amplitude, θ_𝐤 is the fast wave phase, and 𝕂_2 is the spectrum of second order fluctuations, which contains -𝐤 whenever 𝐤∈𝕂_2.The second order spectrum 𝕂_2 is highly constrained and will need to be determined from the second order electric field equation, once the first order spectrum 𝕂_1 is given.Before we candetermine 𝕂_2 and ℰ_𝐤^(2), we need to express 𝐁_2 in terms of 𝐄_2. Plugging in expressions for the first order fluctuations Eqs. (<ref>) and (<ref>) into the second order Faraday's law Eq. (<ref>), the second order magnetic field can be expressed as𝐁_2 = 1/2∑_𝐤∈𝕂_2𝐤×ℰ^(2)_𝐤/ω_𝐤e^iθ_𝐤+ 1/2∑_𝐤∈𝕂_1(∇_(1)×ℰ_𝐤^(1)/iω_𝐤 +𝐤×∂_t(1)ℰ_𝐤^(1)/iω_𝐤^2)e^iθ_𝐤.The first line has the same structure as 𝐁_1, except now the summation is over the second order spectrum 𝕂_2. The second line involves slow derivatives of the first order amplitude ℰ^(1)_𝐤. These derivatives, still unknown at this step, will be determined later from the second order electric field equation. Similarly, the second order velocity 𝐯_s2 can be solved from Eq. (<ref>). One way of solving this equation is by first taking the Fourier transform on t_(0) and 𝐱_(0) scale. Then in the Fourier space, the resultant algebraic equation can be readily solved using the property (<ref>) of the forcing operator. After taking the inverse Fourier transform, the second order velocity can be expressed as𝐯_s2 = ie_s/2m_s∑_𝐤∈𝕂_2𝔽_s,𝐤ℰ_𝐤^(2)/ω_𝐤e^iθ_𝐤+ e_s/2m_s∑_𝐤∈𝕂_1𝔽^2_s,𝐤∂_t(1)ℰ_𝐤^(1)/ω_𝐤^2e^iθ_𝐤- e_s^2/4m_s^2∑_𝐪,𝐪'∈𝕂_1𝔽_s,𝐪+𝐪'(𝐋^s_𝐪,𝐪'+𝐓^s_𝐪,𝐪')/ω_𝐪+ω_𝐪'e^iθ_𝐪+iθ_𝐪'.The first two lines of the above expression is in analogy to the expression (<ref>) for 𝐁_2. The third line comes from beating of nonlinearities. In particular, the 𝐯_s1×𝐁_1 nonlinearity introduce a longitudinal beating 𝐋^s_𝐪,𝐪'=(𝔽_s,𝐪ℰ_𝐪^(1))×(𝐪'×ℰ_𝐪'^(1))/ω_𝐪ω_𝐪'.In addition, the Euler derivative 𝐯_s1·∇_(0)𝐯_s1, which is responsible for generating turbulence in neutral fluids, gives rise to a turbulent beating𝐓^s_𝐪,𝐪'=(𝔽_s,𝐪ℰ_𝐪^(1))(𝐪·𝔽_s,𝐪'ℰ_𝐪'^(1))/ω_𝐪ω_𝐪'.The third line in Eq. 
(<ref>) may be simplified using the quadratic property (<ref>) of the forcing operator. This simplification will be done later when we discuss interaction of three waves in the next section.Using similar method, we can find the expression for the second order density n_s2. Although the expression for n_s2 is not indispensable for studying three-wave scattering, we present it here because it will become useful when one studies four-wave or even higher order interactions. The second order density can be expressed asn_s2 = e_sn_s0/2m_s[∑_𝐤∈𝕂_2i𝐤·𝔽_s,𝐤ℰ^(2)_𝐤/ω_𝐤^2e^iθ_𝐤+ ∑_𝐤∈𝕂_1(𝐤·(𝔽_s,𝐤+𝔽^2_s,𝐤)∂_t(1)ℰ_𝐤^(1)/ω_𝐤^3+∇_(1)·𝔽_s,𝐤ℰ_𝐤^(1)/ω_𝐤^2)e^iθ_𝐤]- e_s^2n_s0/4m_s^2∑_𝐪,𝐪'∈𝕂_1(𝐪+𝐪')·𝐑^s_𝐪,𝐪'/(ω_𝐪+ω_𝐪')^2e^iθ_𝐪+iθ_𝐪'.The above three lines are in analogy to those for 𝐯_s2 in Eq. (<ref>). In the third line, the quadratic response 𝐑^s_𝐪,𝐪'=𝔽_s,𝐪+𝐪' (𝐋^s_𝐪,𝐪'+𝐓^s_𝐪,𝐪') +(1+ω_𝐪/ω_𝐪')𝐂^s_𝐪,𝐪',where the longitudinal beating 𝐋^s_𝐪,𝐪' and the turbulent beating 𝐓^s_𝐪,𝐪' are given by Eqs. (<ref>) and (<ref>). The third term, proportional to 𝐂^s_𝐪,𝐪', comes from the divergence of the nonlinear current ∇_(0)·(n_s1𝐯_s1), which introduces the current beating 𝐂^s_𝐪,𝐪'=(𝔽_s,𝐪ℰ_𝐪^(1))(𝐪'·𝔽_s,𝐪'ℰ_𝐪'^(1))/ω_𝐪ω_𝐪'.Although the form of 𝐂^s_𝐪,𝐪' is similar to that of𝐓^s_𝐪,𝐪' , the physics of these two types of beating are nevertheless very different.Having expressed second order fluctuationsin terms of 𝐄_2, we can obtain an equation that only involves electric perturbations. Substituting expressions (<ref>), (<ref>), and (<ref>) into the second order electric field equation (<ref>), we can eliminate 𝐯_s1, n_s1, and 𝐯_s2. The resultant equation can be simplified using the first order electric field equation (<ref>), as well as property (<ref>) of the forcing operator. The second order electric field equation can then be put into a rather simple and intuitive form∑_𝐤∈𝕂_2𝔻_𝐤ℰ^(2)_𝐤e^iθ_𝐤 +i∑_𝐤∈𝕂_1ω_𝐤ℍ_𝐤d_t(1)^𝐤ℰ^(1)_𝐤 e^iθ_𝐤= i/2∑_s,𝐪,𝐪'∈𝕂_1𝐒^s_𝐪,𝐪'e^iθ_𝐪+iθ_𝐪'.The left-hand-side are modifications of the first order spectrum, as consequences of three-wave scatterings on the right-hand-side. In the above equation, the dispersion tensor 𝔻_𝐤=𝔻^*_-𝐤 is defined by Eq. (<ref>), the normalized wave energy operator ℍ_𝐤=ℍ^*_-𝐤 is defined by Eq. (<ref>), and d_t(1)^𝐤=d_t(1)^-𝐤 is the advective derivatived_t(1)^𝐤:=∂_t(1)+∂ω_𝐤/∂𝐤·∇_(1),which advects the wave envelope at the wave group velocity 𝐯_g=∂ω_𝐤/∂𝐤 on the slow scale t_(1) and 𝐱_(1). In Eq. (<ref>), the three-wave scattering strength 𝐒^s_𝐪,𝐪'=e_sω_ps^2/2m_s(𝐑^s_𝐪,𝐪'+𝐑^s_𝐪',𝐪),where the quadratic response 𝐑^s_𝐪,𝐪' is given by Eq. (<ref>). Notice that the scattering strength 𝐒^s_𝐪,𝐪' is proportional to the density n_s0. This is intuitive because three-wave scattering cannot happen in the vacuum. Hence, all three-wave scatterings come from charged particle response, which is additive and therefore proportional to the density. Also notice that 𝐒^s_𝐪,𝐪' is proportional to the charge-to-mass ratio. This is also intuitive because e_s/m_s is the coefficient by which charged particles respond to the electric field. Let us observe a number of properties of the scattering strength 𝐒^s_𝐪,𝐪'. 
First, by construction, the scattering strength is symmetric with respect to 𝐪,𝐪', namely,𝐒^s_𝐪,𝐪'=𝐒^s_𝐪',𝐪.In addition, using notation (<ref>) and (<ref>), it is easy to see that the reality condition for 𝐒_𝐪,𝐪' is𝐒_𝐪,𝐪'^s*=-𝐒^s_-𝐪,-𝐪'.Moreover, it turns out that the scattering strength 𝐒^s_𝐪,𝐪' satisfies the important identity𝐒^s_𝐪,-𝐪=0.This identity can be shown by a straightforward calculation using the limiting form 𝔽(ω)→𝐛𝐛 when ω→ 0. Identity (<ref>) guarantees that no zero-frequency mode with ω_𝐤=0 will arise in the second order electric field equation. Without this important identity, any change in the wave amplitude would be faster than the zero-frequency mode, a situation that would violate the multiscale assumption. Fortunately, due to identity (<ref>), the multiscale perturbative solution is well justified. Now that we have obtained the second order electric field equation (<ref>), we can use it to constrain the spectrum 𝕂_2 and the amplitude ℰ_𝐤^(2). In order to satisfy (<ref>), the coefficient of each Fourier exponent e^iθ_𝐤 must be matched on both sides of the equation. To match the spectrum on the right-hand-side of Eq. (<ref>), which is generated by the beating of first order perturbations, we can take the second order spectrum to be𝕂_2=(𝕂_1^0⊕𝕂_1^0)∖𝕂_1^0,where the set 𝕂_1^0:=𝕂_1⋃{0}. We define the direct sum of two sets G_1,G_2⊆ G, where G is an additive group, by G_1⊕ G_2:={g_1+g_2|g_1∈ G_1, g_2∈ G_2}. We can exclude the zero vector 0 from the second order spectrum 𝕂_2 using property (<ref>) of the scattering strength. We also exclude vectors that are already contained in the first order spectrum 𝕂_1, such that the matrix 𝔻_𝐤 is invertible for all 𝐤∈𝕂_2. Since the matrix is invertible, the second order amplitude ℰ_𝐤^(2) is determined byℰ_𝐤^(2)=i𝔻^-1_𝐤∑_s𝐒^s_𝐪,𝐪',where 𝐪, 𝐪'∈𝕂_1 are such that 𝐤=𝐪+𝐪'∈𝕂_2. Here, the factor 1/2 has been removed using the symmetry property 2𝐒^s_𝐪,𝐪'=𝐒^s_𝐪,𝐪'+𝐒^s_𝐪',𝐪. We can put the above abstract notations in more intuitive language as follows. The first order spectrum contains all the “on-shell" waves, which satisfy the dispersion relation 𝔻(𝐤,ω_𝐤)=0 for all 𝐤∈𝕂_1, whereas the second order spectrum 𝕂_2 contains all the “off-shell" waves generated by beating. These “off-shell" waves do not satisfy the linear dispersion relation, and their amplitudes are driven by the beating of two “on-shell" waves.To illustrate the abstract notations introduced above, let us consider the simplest example where the spectrum 𝕂_1 contains only one “on-shell" wave, namely, 𝕂_1={𝐤,-𝐤}. In this case, the second order spectrum 𝕂_2={2𝐤,-2𝐤} contains the second harmonic. Matching the Fourier exponents, the “on-shell" equation is ω_𝐤ℍ_𝐤d_t(1)^𝐤ℰ^(1)_𝐤=0.The other “on-shell" equation is the complex conjugate of the above equation. Since ℍ_𝐤 enters the wave energy (<ref>), this matrix is positive definite and therefore nondegenerate. Hence, the above equation can be written as d_t(1)^𝐤ℰ^(1)_𝐤=0, which says that the wave amplitude is a constant of advection. Next, matching coefficients of the other Fourier exponent, we obtain the “off-shell" equation for the second harmonic𝔻_2𝐤ℰ^(2)_2𝐤=i∑_s𝐒^s_𝐤,𝐤.After inverting the matrix 𝔻_2𝐤, this equation gives the amplitude of the second harmonic in terms of the amplitude of the first harmonic. Moreover, since the complex amplitude ℰ^(2)_2𝐤 also encodes the phase information, the above equation also tells us how the second harmonic is phase-locked with the fundamental.
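The bookkeeping of the second order spectrum can be illustrated with a short script. The sketch below (illustrative only; the wave vectors are arbitrary integer triples and the helper name is an assumption) constructs 𝕂_2=(𝕂_1^0⊕𝕂_1^0)∖𝕂_1^0 for a discrete first order spectrum, confirming that the single-wave example produces only the second harmonic, while for a resonant triplet the on-shell wave vectors are excluded from 𝕂_2.

```python
from itertools import product

def second_order_spectrum(K1):
    """Build K2 = (K1^0 + K1^0) \\ K1^0 for a discrete first-order spectrum.
    K1 is an iterable of 3-tuples; -k is added so the reality condition holds.
    """
    neg = lambda k: tuple(-x for x in k)
    K1 = {tuple(k) for k in K1}
    K1 |= {neg(k) for k in K1}
    K1_0 = K1 | {(0, 0, 0)}                       # K1 plus the zero vector
    sums = {tuple(a + b for a, b in zip(p, q)) for p, q in product(K1_0, K1_0)}
    return sums - K1_0                            # drop 0 and on-shell modes

# single on-shell wave: K2 contains only the second harmonics
print(second_order_spectrum([(1, 0, 0)]))         # {(2, 0, 0), (-2, 0, 0)}

# resonant triplet k1 = k2 + k3: resonant sums are excluded from K2
k1, k2, k3 = (3, 0, 0), (2, 1, 0), (1, -1, 0)
K2 = second_order_spectrum([k1, k2, k3])
assert k1 not in K2 and k2 not in K2 and k3 not in K2
```

This is pure set bookkeeping; the physics (which off-shell amplitudes are actually driven) is supplied by the scattering strength on the right-hand side of Eq. (<ref>).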
§ SCATTERING OF THREE RESONANT ON-SHELL WAVESIn this section, we illustrate the general theory developed in Sec. <ref> with thesimplest nontrivial example where the spectrum contains exactly three resonant “on-shell" waves.Without loss of generality, suppose the three waves satisfies the resonance conditions𝐤_1 = 𝐤_2+𝐤_3, ω_𝐤_1 = ω_𝐤_2+ω_𝐤_3,where all ω's are positive. The above resonance condition can also be written more compactly as θ_𝐤_1=θ_𝐤_2+θ_𝐤_3. In this case, the spectrum 𝕂_1={𝐤_1, 𝐤_2, 𝐤_3, (𝐤→-𝐤)}. Using Eq. (<ref>), we find the second order spectrum 𝕂_2={2𝐤_1, 2𝐤_2, 2𝐤_3, 𝐤_1+𝐤_2, 𝐤_2-𝐤_3, 𝐤_3+𝐤_1, (𝐤→-𝐤)}. Notice that resonant waves, such as 𝐤_1=𝐤_2+𝐤_3, are not contained in the second order spectrum 𝕂_2. In this way, we avoid the ambiguous partition between ℰ^(2)_𝐤, and d_t(1)^𝐤ℰ^(1)_𝐤. In another word, all perturbative corrections to the first order amplitude ℰ^(1)_𝐤 are accounted for by its slow derivatives.Using the electric field equation (<ref>), we can extract the “off-shell" equations by matching coefficients of Fourier exponents. There are twelve “off-shell" equations, six of which are complex conjugations of the following six “off-shell" equations𝔻_2𝐤_1ℰ^(2)_2𝐤_1 = i∑_s𝐒^s_𝐤_1,𝐤_1, 𝔻_2𝐤_2ℰ^(2)_2𝐤_2 = i∑_s𝐒^s_𝐤_2,𝐤_2, 𝔻_2𝐤_3ℰ^(2)_2𝐤_3 = i∑_s𝐒^s_𝐤_3,𝐤_3, 𝔻_𝐤_1+𝐤_2ℰ^(2)_𝐤_1+𝐤_2 = i∑_s𝐒^s_𝐤_1,𝐤_2, 𝔻_𝐤_2-𝐤_3ℰ^(2)_𝐤_2-𝐤_3 = i∑_s𝐒^s_𝐤_2,-𝐤_3, 𝔻_𝐤_3+𝐤_1ℰ^(2)_𝐤_3+𝐤_1 = i∑_s𝐒^s_𝐤_3,𝐤_1.Since the dispersion tensor 𝔻_𝐪 for “off-shell" waves are non-degenerate, the second order amplitudes ℰ^(2)_𝐤 can be found by simply inverting the above matrix equations, which gives the second order amplitudes in terms of the first order amplitudes.Similarly, we can extract the “on-shell" equations from the second order electric field equation (<ref>). There are six “on-shell" equations, three of which are complex conjugation of the following three “on-shell" equations ω_𝐤_1ℍ_𝐤_1d_t(1)^𝐤_1ℰ^(1)_𝐤_1 = ∑_s𝐒^s_𝐤_2,𝐤_3, ω_𝐤_2ℍ_𝐤_2d_t(1)^𝐤_2ℰ^(1)_𝐤_2 = ∑_s𝐒^s_𝐤_1,-𝐤_3, ω_𝐤_3ℍ_𝐤_3d_t(1)^𝐤_3ℰ^(1)_𝐤_3 = ∑_s𝐒^s_𝐤_1,-𝐤_2.These “on-shell" equations govern how the first order amplitudes ℰ^(1)_𝐤 evolve on the slow scales due to scattering of the three waves. The left-hand-side of these equations is basically the passive advection of wave envelopes at the wave group velocities. The right-hand-side of these equations is redistribution of wave action and energy due to three-wave scattering. §.§ Action conservation of on-shell equationsBy the conservative nature of the redistribution process, the “on-shell" equations (<ref>)-(<ref>) conserve the total wave action U/ω, as well as the total wave energy U. As will be proven in the next paragraph, the local conservation laws of wave actions are d_t(1)^𝐤_1U_𝐤_1/ω_𝐤_1 +d_t(1)^𝐤_2U_𝐤_2/ω_𝐤_2 = 0,d_t(1)^𝐤_3U_𝐤_3/ω_𝐤_3-d_t(1)^𝐤_2U_𝐤_2/ω_𝐤_2 = 0,where U_𝐤, given by Eq. (<ref>), is the energy of the linear wave with wave vector 𝐤. The first conservation law (<ref>) implies that the total number of wave quanta in the incident wave and the scattered wave is a constant. This is intuitive because, in the absence of damping, whenever a quanta of the 𝐤_1 mode is annihilated, it is consumed to create a quanta of the 𝐤_2 mode. Analogously, the second conservation law (<ref>) says that whenever a quanta of the 𝐤_2 mode is created, a quanta of the 𝐤_3 mode must also be created by the three-wave process (<ref>). 
As a consequence of wave action conservation, the total wave energy is also conserved during resonant three-wave interactiond_t(1)^𝐤_1U_𝐤_1+d_t(1)^𝐤_2U_𝐤_2+d_t(1)^𝐤_3U_𝐤_3=0.This local energy conservation law can be obtained by linearly combining Eqs. (<ref>) and (<ref>), and use the frequency resonance condition (<ref>). The conservation of wave energy is also intuitive, because in the absence of damping and other waves, three-wave scattering can only redistribute energy among the three waves.The above conservation lawscan be proven by noting the following properties of the scattering strength 𝐒^s_𝐪,𝐪'. First, using formula (<ref>) for the scattering strength, together with the quadratic identity (<ref>) of the forcing operator 𝔽, we can obtain a simple expression for 𝐒^s_𝐤_2,𝐤_3 𝐒_2,3 = eω_p^2ω_1/2mω_2ω_3[(ℰ_3·𝔽_2ℰ_2)(𝔽_1𝐤_3) +(ℰ_2·𝔽_3ℰ_3)(𝔽_1𝐤_2)/ω_1 +(𝔽_3ℰ_3)(𝐤_1·𝔽_2ℰ_2) -(𝔽_1ℰ_3)(𝐤_3·𝔽_2ℰ_2)/ω_2 +(𝔽_2ℰ_2)(𝐤_1·𝔽_3ℰ_3) -(𝔽_1ℰ_2)(𝐤_2·𝔽_3ℰ_3)/ω_3],where we have abbreviated ω_j:=ω_𝐤_j, ℰ_j:=ℰ_𝐤_j^(1), 𝔽_j:=𝔽_s,𝐤_j, and suppressed other species label s for simplicity. The expression for 𝐒_1,-3 can be obtained easily from Eq. (<ref>) using the replacement rule 1→ 2, 2→ 1, 3→-3, where the minus sign is interpreted using notations (<ref>) and (<ref>). Similarly, to obtain the expression for 𝐒_1,-2, we can replace 1→ 3, 2→ 1, 3→-2 in Eq. (<ref>). Having obtained expressions for 𝐒_2,3, 𝐒_1,-3, and 𝐒_1,-2, we can use the self-adjoint property (<ref>) of the forcing operator to show, by straight forward calculations, that the scattering strength for three resonant waves satisfies the following identitiesℰ_1·𝐒^*_2,3/ω_1^2+ℰ^*_2·𝐒_1,-3/ω_2^2 = 0, ℰ^*_2·𝐒_1,-3/ω_2^2 -ℰ^*_3·𝐒_1,-2/ω_3^2 = 0.Then the action conservation Eqs. (<ref>) and (<ref>), as well as the energy conservation Eq. (<ref>), are immediate consequences of the above identities. One may be puzzled by the expression (<ref>) for 𝐒_2,3. After all, why 𝐒_2,3 is given by those six particular combinations of vectors 𝔽_𝐪ℰ_𝐪' and 𝔽_𝐪𝐪', weighted by inner products ℰ_𝐪·𝔽_𝐪'ℰ_𝐪' and 𝐪·𝔽_𝐪'ℰ_𝐪', as well as signed frequencies ±1/ω? At first glance, there seems to be no obvious pattern. However, action conservation laws, given by Eqs. (<ref>) and (<ref>), clearly indicate that 𝐒_2,3, 𝐒_1,-3, and 𝐒_1,-2 are originated from a single term from the variational principle. In Sec. <ref>, we will write down the Lagrangian that generate the three “on-shell" equations (<ref>)-(<ref>). From the Lagrangian, it will become obvious why Eq. (<ref>) looks the way it is.§.§ Three-wave equationsBefore unveiling the deeper reason leading to the expression of the scattering strength, let us first extract a number of observable consequences of three-wave interactions. When one is not concerned with the vector dependence of the complex wave amplitude ℰ_𝐤, the “on-shell" equations (<ref>)-(<ref>) can be written as three scalar equations, called the three-wave equations. To remove the vector dependence, let us decompose ℰ^(1)_𝐤=𝐞_𝐤ε_𝐤, where 𝐞_𝐤 is the complex unit vector satisfying 𝐞^*_𝐤·𝐞_𝐤=1. This decomposition is not unique due to the U(1) symmetry 𝐞_𝐤→ e^iα𝐞_𝐤 and ε_𝐤→ e^-iαε_𝐤. By requiring the scalar amplitude ε_𝐤∈ℝ to be real valued, the symmetry group of the above decomposition is reduced to the ℤ_2 symmetry ε→-ε. 
The convective derivative of the complex wave amplitude d_t(1)^𝐤ℰ^(1)_𝐤=𝐞_𝐤d_t(1)^𝐤ε_𝐤+ε_𝐤 d_t(1)^𝐤𝐞_𝐤, can be decomposed into change due to the scalar amplitude and the change due to the rotation of the complex unit vector.The left-hand-sides of the “on-shell" equations are closely related to the energy of the linear waves. Denote the dimensionless wave energy coefficientu_𝐤:=1/2𝐞_𝐤^†ℍ_𝐤𝐞_𝐤. Then the wave energy Eq. (<ref>) can be written as U_𝐤=ϵ_0u_𝐤ε_𝐤^2/2. Notice that the energy coefficient u_𝐤>0 is always real and positive, because the matrix ℍ_𝐤 is Hermitian and positive definite.Taking inner product with 𝐞_𝐤^* on both sides of the “on-shell" equations and sum the result with its Hermitian conjugate, we obtain u_𝐤d_t(1)^𝐤ε_𝐤+1/2ε_𝐤 d_t(1)^𝐤u_𝐤=∑_s[𝐞_𝐤^†𝐒_𝐪,𝐪'^s/ω_𝐤+h.c.]/4. From this expression, we see the combination ε_𝐤 u_𝐤^1/2 will be particularly convenient. Let us nondimensionalizethe electric field amplitude by electron massa_𝐤:=eε_𝐤/m_ecω_𝐤u_𝐤^1/2. Then the “on-shell" equations can then be written in terms of the normalized wave amplitude d_t(1)^𝐤a_𝐤=e/(4m_ecω_𝐤 u_𝐤^1/2)∑_s(𝐞_𝐤^†𝐒^s_𝐪,𝐪'/ω_𝐤+h.c.).From this equation, we see only the real part of 𝐞^†𝐒 affects how the amplitude change, while the imaginary part affects how the direction 𝐞 rotates on the complex unit sphere.The right-hand-sides of the “on-shell" equations are originated from a single scattering term. As can be seen from identities (<ref>) and (<ref>), there exist some dimensionless scattering strength Θ^s, such that e_sω_ps^2/2m_scε_1ε_2^*ε_3^*/ω_1ω_2ω_3Θ^s:=-ℰ_1·𝐒^*_2,3/ω_1^2 =ℰ^*_2·𝐒_1,3̅/ω_2^2 =ℰ^*_3·𝐒_1,2̅/ω_3^2,where we have abbreviate ε_j:=ε_𝐤_j,and used the notation j̅=-j. Using formula (<ref>) for 𝐒_2,3, we see that the normalized scattering strength can be written as the summation of strengths of six scattering channelsΘ^s = Θ_1,2̅3̅^s+Θ_2̅,3̅1^s+Θ_3̅,12̅^s+ Θ_1,3̅2̅^s+Θ_2̅,13̅^s+Θ_3̅,2̅1^s,where the normalized scattering strength due to each channel is given by the simple formulaΘ_i,jl^s=1/ω_j(c𝐤_i·𝐟_s,j)(𝐞_i·𝐟_s,l) .In the above formula, the vector 𝐟_s,j is defined by 𝐟_s,j:=𝔽_s,𝐤_j𝐞_j, and we have abbreviated 𝐞_j:=𝐞_𝐤_j. In general, the normalized scattering strength Θ^s=Θ^s_r+iΘ^s_i contains both real and imaginary parts. In Sec. <ref>, we will show that the normalized scattering strength Θ^s is related to the reduced S matrix element of the quantized theory, and the six scattering channels correspond to the six ways of contracting a single interaction vertex. Having expressed both the left- and the right-hand-side of the “on-shell" equations as scalars, we can now write down the three-wave equationsd_t(1)^𝐤_1a_1 = -Γ/ω_1a_2a_3,d_t(1)^𝐤_2a_2 = +Γ/ω_2a_3a_1,d_t(1)^𝐤_3a_3 = +Γ/ω_3a_1a_2,where a_j:=a_𝐤_j are the real-valued normalized wave amplitudes, and Γ is the coupling coefficient. Notice that due to the residual ℤ_2 symmetrya_j→-a_j, the sign of Γ is insignificant, as long as Eq. (<ref>) has the opposite sign as Eqs. (<ref>) and (<ref>). Combining Eqs. (<ref>)-(<ref>), the coupling coefficient is given by Γ=∑_sZ_sω_ps^2Θ^s_r/4M_s(u_1u_2u_3)^1/2,where Z_s:=e_s/e is the normalized charge, M_s:=m_s/m_e is the normalized mass of species s, and u_j:=u_𝐤_j is the wave energy coefficient. As expected, only the real part Θ^s_r of the normalized scattering strength affects the wave amplitude. Also notice when density n_s0→ 0, coupling due to species s vanishes as expected. 
The numerator of the coupling coefficient measures how strongly the three waves are coupled by the scattering strength, and the denominator measures how energetically expensive it is to excite the linear waves, as measured by the wave energy coefficients. It is instructive to count how many degrees of freedom the three-wave coupling coefficient Γ contains. For each wave, its 4-momentum is constrained by one dispersion relation. Once the 4-momentum is fixed, the wave polarization is determined by the dispersion tensor up to the wave amplitude, on which Γ does not depend. Therefore, for each wave, there are three degrees of freedom. Since the resonance conditions give another four constraints, there are in total 3×3-4=5 independent variables. Hence, in the absence of additional symmetry, the three-wave coupling coefficient Γ is a function of five independent variables in a given plasma. Once the coupling coefficient is obtained in a given situation, the nonlinear three-wave equations Eqs. (<ref>)-(<ref>) may be solved using a number of techniques. For the homogeneous problem, where the spatial derivatives are zero, the equations become a system of nonlinear ordinary differential equations, and the general solution is given by the Jacobi elliptic functions <cit.>. Similarly, in one dimension, the steady state problem, where the time derivatives are zero, can also be solved in terms of the Jacobi elliptic functions <cit.>. As a trivial extension, traveling wave solutions in one spatial dimension can also be found <cit.>, using the coordinate transform ξ=x-vt. In addition to these periodic solutions, the nonlinear three-wave equations also have compact solutions, such as the N-soliton solutions <cit.>. More general solutions may also be constructed using the inverse scattering method <cit.>. In this paper, we will not be concerned with solving the three-wave equations, and will only focus on calculating the coupling coefficient. Without solving the three-wave equations, a number of experimental observables can already be extracted from the coupling coefficient. For example, Γ can be related to the growth rate of parametric instabilities. Consider the parametric decay instability where a pump wave with frequency ω_1 decays into two waves with frequencies ω_2 and ω_3. Suppose the pump has constant amplitude a_1, and the decay waves have no spatial variation. Then solving the linearized three-wave equations, we find a_2 and a_3 grow exponentially with rateγ_0=|Γ a_1|/√(ω_2ω_3). The experimentally observed linear growth rate will be somewhat different from γ_0 due to wave damping. Wave damping, both collisional and collisionless, can be taken into account by inserting a phenomenological damping term ν a into the left-hand-side of the three-wave equations. Solving the linearized equations, the growth rate, modified by wave damping, isγ=√(γ_0^2+((ν_2-ν_3)/2)^2)-(ν_2+ν_3)/2, where ν_2 and ν_3 are the phenomenological damping rates of the two decay waves. In addition to wave damping, the experimentally observed growth rate can also be modified by the frequency mismatch δω=ω_1-ω_2-ω_3. When the frequency mismatch is much smaller than the spectral width of the waves, the three waves can still couple almost resonantly. To find the growth rate in the presence of small δω, promote the amplitudes a to complex values and change variables to α_j:=a_je^-itδω/2 for j=2 and 3. This change of variables is equivalent to modifying the damping rates to ν'_2:=ν_2+iδω/2 and ν_3^'*:=ν_3-iδω/2.
Therefore, the growth rate of parametric decay instability, modified by both weak damping and small frequency mismatch isγ'=√(γ_0^2+(ν_2-ν_3+iδω/2)^2)-ν_2+ν_3/2. The frequency mismatch δω not only introduces amplitude modification, but also results in phase modification. In the following discussions, we shall only be concerned with the growth rate γ_0 as observable, ignoring wave damping and frequency mismatch.§ LAGRANGIAN OF THREE-WAVE INTERACTIONNow that we know how the coupling coefficient can be related to experimental observables, let us unveil why its formula looks the ways it is. Recall in the previous section, we show that the three-wave scattering strengths 𝐒_𝐪,𝐪' satisfies the action conservation laws.Motivated by these conservation laws, here in this section, we show that the three “on-shell" equations (<ref>)-(<ref>) can be derived from a classical three-wave Lagrangian. More importantly, we will show that all terms in the classical interaction Lagrangian arise from essentially one term after quantizing the Lagrangian. To write down the Lagrangian, it is more convenient to use the gauge field A^μ instead of the electric or magnetic fields. Since we will later quantize the Lagrangian, it is convenient to use the temporal gauge A^0=0. In temporals gauge, the electric field is related to the vector potential by 𝐀_𝐤=ℰ_𝐤/ω_𝐤, which, in the natural units ħ=c=1, has the dimension of energy M. Similarly, we can dimensionalize the wave energy operator ℍ byΛ_𝐤:=ω_𝐤ℍ_𝐤,which then has the dimension of energy M as it should.Having defined the necessary operators, we can now write down the classical three-wave action for the three “on-shell" equationsS_c=∫ d^4x_(1)(ℒ_c0+ℒ_cI),where the integrations over space and time are on the slow scales x_(1) and t_(1). Abbreviating the subscripts 𝐤_j as j, the Lagrangian of freely advecting wave envelopesℒ_c0=∑_j=1^3𝐀_j^*· iΛ_jd^j_t(1)𝐀_j,where the complex amplitude 𝐀_j(x_(1),t_(1)) is a function of the slow spatial and temporal scales, and the advective derivative d^j_t(1) is defined by Eq. (<ref>). It is easy to show that ℒ_c0 gives rise to a real-valued action S_c0 after integrating by part. The second term in the classical action [Eq. (<ref>)] is the three-wave interaction Lagrangian ℒ_cI=-i(Ξ-Ξ^*),which is obviously real-valued. Using Eq. (<ref>), the three waves interact through the couplingΞ=A_1 A_2^* A_3^*∑_se_sω_ps^2/2m_scΘ^s,where Θ^s is the normalized scattering strength [Eq. (<ref>)], and the A's are the scalar amplitudes of the three waves. Clearly, the coupling Ξ has mass dimension M^4, and hence the action S_cI is dimensionless in the natural unit as expected.Now that we have written down the Lagrangian, we can find the classical equations of motion by taking variations with respect to 𝐀_1, 𝐀_2, and 𝐀_3, or equivalently, their independent complex conjugates. Using the self-adjointness [Eq. (<ref>)] of the forcing operator, it is straight forward to verify that the three “on-shell" equations (<ref>)-(<ref>) are the resultant equations.The classical three-wave Lagrangian ℒ_c=ℒ_c0+ℒ_cI has U(1) symmetries, which lead to the action conservation laws. For example, the Lagrangian is invariant under the following global U(1) transformation𝐀_1 →e^iα𝐀_1, 𝐀_2 →e^iα𝐀_2, 𝐀_3 → 𝐀_3,where α is an arbitrary real constant. Under the above transformation, the infinitesimal variation of the Lagrangian is zero δℒ_c=0, while the infinitesimal variation δ𝐀_1=iα𝐀_1, δ𝐀_2=iα𝐀_2, and δ𝐀_3=0, giving rise to a Noether's current. 
In fact, we have an even stronger symmetry δΞ=0 for any α. Therefore this U(1) symmetry leads to the identity𝐀_1·δΞ/δ𝐀_1-𝐀_2^*·δΞ/δ𝐀_2^*=0,which is exactly the action conservation law Eq. (<ref>). Using similar arguments, other action conservation laws can be derived from other global U(1) symmetries.The large number of terms contained in the classical Lagrangian can be reduced to essentially two terms when we quantized the Lagrangian, in which the gauge field becomes real valued. Before introducing the quantized Lagrangian, it is helpful to review the second quantization notations. For simplicity, we will omit the subscripts for the slow spatial and temporal variables x_(1) and t_(1), with the implied understanding that all spatial and temporal dependences are on the full scales. Let us promote the gauge field 𝐀 to quantized operator𝐀̂:=∫d^3𝐤/(2π)^31/√(2ω_𝐤)(𝐞_𝐤â_𝐤e^-ikx+𝐞_𝐤^*â^†_𝐤e^ikx),where kx:=ω_𝐤t-𝐤·𝐱 is the Minkowski inner product, 𝐞_𝐤 is the unit polarization vector, and the summation over branches of the dispersion relation is implied. The annihilation operator â_𝐤 and the creation operator â^†_𝐤 satisfies the canonical commutation relations for bosons, where the nontrivial commutator is[â_𝐩,â^†_𝐤]=(2π)^3δ^(3)(𝐩-𝐤). Using the standard normalization, the single boson state |𝐤⟩:=√(2ω_𝐤)â^†_𝐤|0⟩,where |0⟩ is the vacuum state. Then we have the following Wick contractions𝐀̂|𝐤𝐀̂|𝐤⟩ = 𝐞_𝐤e^-ikx, ⟨𝐤|𝐀̂⟨𝐤|𝐀̂ = 𝐞_𝐤^*e^ikx.Let us also promote the displacement operator for species s to act on the operator 𝐀̂ byΠ̂_s𝐀̂:=i∫d^3𝐤/(2π)^31/√(2ω_𝐤)(𝔽_s,𝐤𝐞_𝐤/ω_𝐤â_𝐤e^-ikx-𝔽_s,𝐤^*𝐞_𝐤^*/ω_𝐤â^†_𝐤e^ikx),where the minus sign in front of the second term comes from notation Eq. (<ref>). Taking time derivative of the displacement operator, ∂_t(Π̂_s𝐀̂) is the velocity operator for species s, which is proportional to the current operator. Now we are ready to write down the quantized Lagrangian, which contains a kinetic term and a single three-wave coupling termℒ=𝐀̂^†iΛ d_t 𝐀̂ -∑_se_sω_ps^2/2m_s(Π̂_s𝐀̂)_i(∂_i𝐀̂_j)∂_t(Π̂_s𝐀̂)_j.Here, the i and j indices in the second term are the spatial indices, and summation over repeated indices is assumed. The first term ℒ_0 closely resembles the kinetic term of quantum electrodynamics (QED), with the Dirac spinor replaced by the gauge field, and the Dirac gamma matrices replaced by the Λ energy matrix. The second term ℒ_I is the three-wave interaction Lagrangian, which is nonvanishing only if the background density of some species s is nonzero. Notice that the three-wave interaction is nonrenormalizable, which is not unexpected in an effective field theory.To make sense of the quantized Lagrangian, we recognize that the displacement Π̂_s𝐀̂ is proportional to the polarization density 𝐏, and the velocity ∂_t(Π̂_s𝐀̂) is proportional to the current density 𝐉. Therefore, the three-wave interaction Lagrangian is of the form ℒ_I∝ P^i(∂_i𝐀_j)J^j, where the polarization and current density are determined by linear response. Although one may not have guessed this form of the interaction Lagrangian, it makes the following intuitive sense: in the absence of the third wave, the electromagnetic field interacts with the particle fields through 𝐀_jJ^j in the temporal gauge; now when the third wave is present, it modulates the medium through which the electromagnetic field advects, giving rise to the P^i(∂_i𝐀_j)J^j interaction. In this interaction term, there is no reason why a particular wave should only be responsible for 𝐏, 𝐀, or 𝐉. 
Therefore, the three waves can switch their roles, and the total interaction is given by linear superpositions of all possible permutations.To see how the quantized Lagrangian, with the linear superposition principle built in, gives rise to the classical Lagrangian, let us compute the S matrix element of the three-wave decay 𝐤_1→𝐤_2+𝐤_3. The S matrix element ⟨𝐤_2,𝐤_3|iℒ_I|𝐤_1⟩=iℳ e^i(k_2+k_3-k_1)x, where the reduced matrix element iℳ can be represented by Feynman diagrams: iℳ equals the diagram in which the external photon lines 1, 2, and 3 attach to the three vertices of the interaction term, plus the 5 permutations of the external lines. Since there are three external boson lines, each connecting to one of the three vertices, there are in total 3!=6 Feynman diagrams. In the above Feynman diagram, the interaction vertex to which line 1 is connected is the usual QED vertex, whereas vertices 2 and 3 appear only when there are background particle fields <cit.>. The arrow between vertices 1 and 3 indicates the direction of momentum flow, and also labels the vertex on which the ∂_t derivative acts. The above Feynman diagram corresponds to the particular Wick contraction-ie_sω_ps^2/2m_s⟨𝐤_2,𝐤_3|(Π̂_s𝐀̂)_j(∂_j𝐀̂_l)∂_t(Π̂_s𝐀̂)_l|𝐤_1⟩ = ie_sω_ps^2/2m_scΘ_1,2̅3̅^s, in which each external photon is contracted with one of the three field operators. Summing with the other five Feynman diagrams, the reduced S matrix element in the quantum theory is related to the normalized scattering strength in the classical theory by the simple relationℳ=∑_se_sω_ps^2/2m_scΘ^s. From the Lagrangian perspective, the classical three-wave coupling is related to the quantized interaction through the S matrix iΞ=A_1A_2^*A_3^*⟨𝐤_2,𝐤_3|iℒ_I|𝐤_1⟩ e^i(k_1-k_2-k_3)x. Using the above relation, we immediately recover the classical three-wave coupling by computing the S matrix element using the quantized Lagrangian. Alternatively, one may simply regard the Lagrangian Eq. (<ref>) as a classical Lagrangian, and substitute Eq. (<ref>) as the spectral expansion of the gauge field. Then after integrating over spacetime, ∫ d^4xexp[i(k_1-k_2-k_3)x]=(2π)^4δ^(4)(k_1-k_2-k_3) will select out the six resonant terms from the interaction Lagrangian. Now that we understand how the classical theory and the quantized theory are connected, we may postulate that the three-wave coupling always arises from the P^i(∂_i𝐀_j)J^j term in the effective Lagrangian, regardless of the plasma model that is used to calculate the linear response. In the cold fluid model, the linear response is expressed in terms of the forcing operator 𝔽. By modifying this operator to include thermal or even quantum effects, and plugging it into the formalism we have developed, the three-wave scattering strength may be evaluated immediately. Having obtained the normalized scattering strength, as well as the wave energy coefficients in that particular plasma model, one can then compute the three-wave coupling coefficient using Eq. (<ref>). We have thus conjectured a prescription for computing three-wave coupling, without the need for going through the perturbative solution of the equations. The coupling coefficient then enters the three-wave equation, which governs the evolution of the envelopes of the three waves.
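As a numerical check of the action and energy conservation laws, and of the growth rate formula quoted above, the homogeneous three-wave equations can be integrated directly. The Python sketch below uses illustrative frequencies and an arbitrary coupling coefficient (none of these numbers come from this paper) to integrate Eqs. (<ref>)-(<ref>) without spatial dependence, verify the Manley-Rowe invariants, and compare the early-time growth of the daughter waves with γ_0=|Γ a_1|/√(ω_2ω_3).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Homogeneous three-wave equations; all numbers below are illustrative.
w1, w2, w3 = 1.0, 0.7, 0.3          # resonant frequencies, w1 = w2 + w3
Gamma = 0.05                         # coupling coefficient (arbitrary value)

def rhs(t, a):
    a1, a2, a3 = a
    return [-Gamma / w1 * a2 * a3,
            +Gamma / w2 * a3 * a1,
            +Gamma / w3 * a1 * a2]

a0 = [1.0, 1e-6, 1e-6]               # strong pump, small daughter-wave seeds
sol = solve_ivp(rhs, (0.0, 400.0), a0, rtol=1e-10, atol=1e-12)
a1, a2, a3 = sol.y

# Manley-Rowe relations: the wave actions ~ w_k a_k^2 are pairwise conserved
n1, n2, n3 = w1 * a1**2, w2 * a2**2, w3 * a3**2
assert np.allclose(n1 + n2, n1[0] + n2[0], atol=1e-6)
assert np.allclose(n3 - n2, n3[0] - n2[0], atol=1e-6)
# total wave energy ~ w_k^2 a_k^2 is conserved as well
E = w1 * n1 + w2 * n2 + w3 * n3
assert np.allclose(E, E[0], atol=1e-6)

# early-time e-folding rate of the seeds versus gamma0 = |Gamma a1|/sqrt(w2 w3)
gamma0 = abs(Gamma * a0[0]) / np.sqrt(w2 * w3)
mask = (sol.t > 20) & (sol.t < 60)                 # linear, undepleted stage
fitted = np.polyfit(sol.t[mask], np.log(np.abs(a2[mask])), 1)[0]
print(f"gamma0 = {gamma0:.4f}, fitted growth rate = {fitted:.4f}")
```

After the pump depletes, the amplitudes exchange energy periodically, which is the elliptic-function behavior mentioned earlier; the invariants remain conserved throughout.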
§ SCATTERING OF QUASI-TRANSVERSE AND QUASI-LONGITUDINAL WAVESThe three-wave coupling coefficient (<ref>) can be readily evaluated in the cold fluid model using the wave energy coefficient Eq. (<ref>) and the normalized scattering strength Eq. (<ref>). In the most general geometry (Fig. <ref>), we need to ensure that the resonance conditions Eqs. (<ref>) and (<ref>) are satisfied by three otherwise arbitrary “on-shell" waves. The evaluation becomes particularly easy when the waves are either quasi-transverse (T) or quasi-longitudinal (L). In these situations, the wave dispersion relations are simplified, and hence matching the resonance conditions becomes an easy task. Moreover, for both T and L waves, the wave polarization vectors 𝐞 are at special angles with respect to the wave vectors 𝐤, so that the expressions for the wave energy and scattering strength can be further simplified. It is possible that T and L waves couple with other waves that have both electrostatic and electromagnetic components, but in this section, we will only give examples where all three waves are either T or L waves. Although there are in general four different three-wave triplets: {T,T,T}, {T,T,L}, {T,L,L}, and {L,L,L}, only two of these triplets can couple resonantly.From Appendix <ref>, we know the T waves are electromagnetic waves with ω≫ω_p,|Ω_e|, and the L waves are electrostatic waves with ω→ω_r, for some resonance ω_r. Since the frequencies of T waves are much higher than the frequencies of L waves, only the following types of interactions can match the frequency resonance T⇌ T+L, L⇌ L+L. A typical scenario for the TTL interaction is the scattering of lasers. For example, an incident laser is scattered inelastically by some plasma wave and thereafter propagates in some other direction with shifted frequency. Similarly, a typical scenario for the LLL interaction is the scattering of antenna waves. For example, a plasma wave, launched by some antenna array, can decay into two other plasma waves propagating in some other directions. In what follows, we will consider these two scenarios in detail.§.§ T⇌ T+L scatteringConsider the decay of a pump laser (ω_1) into a scattered laser (ω_2) and a plasma wave (ω_3). Since the frequencies ω_1,2≫Ω_s, the magnetization ratios β_1,2≃ 0 and the magnetization factors γ_1,2≃ 1 for any species. Consequently, the forcing operators 𝔽_1,2≃𝕀 are approximately the identity operator, and the lasers are therefore transverse electromagnetic waves. As for the plasma wave, using the quasi-longitudinal approximation 𝐞_3≃𝐤̂_3, the inner product is purely real𝐤̂_3·𝐟̂^*_s,3≃𝐤̂_3·𝔽_s,3𝐤̂_3=γ_s,3^2(1-β_s,3^2cos^2θ_3),where θ_3 is the angle between 𝐤_3 and 𝐛 as shown in Fig. <ref>, and 𝐤̂_3 is the unit vector along the 𝐤_3 direction. With this basic setup, we can readily evaluate Eq. (<ref>), the coupling coefficient. Let us first calculate the wave energy coefficients Eq. (<ref>), which enter the denominator of the coupling coefficient. Since 𝔽_1,2≃𝕀, the wave energy coefficients for the lasers are simplyu_1≃ u_2≃ 1. As for the plasma wave, after taking the frequency derivative in Eq. (<ref>), the wave energy coefficient for the quasi-longitudinal wave isu_3≃ 1+∑_sω_ps^2/ω_3^2γ_s,3^4β_s,3^2sin^2θ_3. As expected, u_3 is always positive, although γ_s,3^2 can be either positive or negative, depending on whether β_s,3 is smaller or larger than one. To find the normalized scattering strength Eq. (<ref>), which enters the numerator of Γ, we again use the fact ω_1,2≫ω_3.
Since the wave vectors are comparable in magnitude, the dominant terms of the coupling strength are the two terms proportional to 1/ω_3, if the inner product 𝐞_1·𝐟_2^*≃𝐟_1·𝐞_2^*≃𝐞_1·𝐞_2^* is of order unity. Using the resonance condition 𝐤_1-𝐤_2=𝐤_3, the dominant term of the TTL scattering strength isΘ^s≃-ck_3/ω_3 (𝐤̂_3·𝔽_s,3𝐤̂_3) (𝐞_1·𝐞_2^*), where the inner product 𝐤̂_3·𝔽_s,3𝐤̂_3 is given explicitly by Eq. (<ref>). Now that we have simplified both the denominator and the numerator of Eq. (<ref>), a simple formula for the three-wave coupling coefficient Γ can be obtained. Having obtained an explicit formula for the coupling coefficient, we can use it to obtain expressions for experimental observables. For example, the linear growth rate γ_0 [Eq. (<ref>)] can be decomposed asγ_0=γ_R|ℳ_T|, where γ_R is the backward Raman growth rate when the plasma is unmagnetized γ_R=√(ω_1ω_p)/2|a_1Re(𝐞_1^*·𝐞_2)|, and ℳ_T is the normalized growth rate of the TTL scattering. The normalized growth rate is proportional to the coupling coefficient Γ=ω_p^2μ/4 up to some kinematic factorℳ_T=1/2(ω_p^3/ω_1ω_2ω_3)^1/2μ_T, where the normalized coupling coefficient μ_T is given by μ_T≃∑_sZ_s/M_sω_ps^2/ω_p^2ck_3/ω_3𝐤̂_3·𝔽_s,3𝐤̂_3/u_3^1/2, in the TTL approximation. In the unmagnetized limit B_0→ 0, we have β_3→ 0 and γ_3→ 1. Since the ion mass is much larger than the electron mass, we have μ_T→ -ck_3/ω_3. Moreover, since the lasers can only couple through the Langmuir wave in cold unmagnetized plasma, we have ω_3→ω_p. Then the normalized growth rate ℳ_T→ ck_3/2√(ω_1ω_2). Finally, in the backward scattering geometry ck_3=ck_1+ck_2≃ω_1+ω_2≃2ω_0, where we have denoted ω_0:=ω_1≃ω_2. We see ℳ_T→ 1 in the unmagnetized limit as expected. The normalized growth rate becomes particularly simple when the waves propagate at special angles. For example, consider the situation where the three waves propagate along the magnetic field 𝐁_0, and the plasma wave ω_3=ω_p is the Langmuir wave. Since γ_s,3^2 remains finite as θ_3→ 0, the normalized growth rate for collimated parallel wave propagation isℳ_T∥^P≃-1/2ck_3/√(ω_1ω_2), where we have used M_i≫1 to drop the summation over species. The above is exactly the same as the unmagnetized result, which is expected because the plasma wave is not affected by the parallel magnetic field. To give another simple example, consider the situation where the three waves are collimated and propagate perpendicular to the magnetic field 𝐁_0. In a cold electron-ion plasma, there are two L waves in the perpendicular direction: the upper-hybrid (UH) wave and the lower-hybrid (LH) wave. Let us first consider scattering mediated by the UH wave ω_3≃ω_UH≃√(ω_p^2+Ω_e^2). In this situation, the magnetization factors γ_3,e^2≃ω_UH^2/ω_p^2 and γ_3,i^2≃ 1. Since M_i≫1, the dominant contribution to both the wave energy coefficient and the scattering strength comes from electrons. The wave energy coefficient u_3≃ω_UH^2/ω_p^2, and the normalized coupling coefficient μ_T≃-ck_3/ω_p. Therefore, the normalized growth rate for collimated perpendicular wave propagation mediated by the UH wave isℳ_T⊥^UH≃-1/2ck_3/√(ω_1ω_2)(ω_p/ω_UH)^1/2. Similarly, let us consider scattering mediated by the LH wave ω_3≃ω_LH≃√(|Ω_e|Ω_i)ω_p/ω_UH. Since the LH frequency satisfies Ω_i≪ω_LH≪|Ω_e|, the magnetization ratios β_3,e≫1 and β_3,i≪1. Consequently, the magnetization factors γ_3,e^2≃-1/β_3,e^2 and γ_3,i^2≃ 1. When ω_p and |Ω_e| are comparable, electron contributions again dominate.
The wave energy coefficient u_3≃ω_UH^2/Ω_e^2, and the normalized coupling coefficient μ_T≃ ck_3ω_LH/ω_UH|Ω_e|. Therefore, the normalized growth rate for LH wave mediation in the collimated perpendicular geometry is ℳ_T⊥^LH≃1/2ck_3/√(ω_1ω_2)ω_p^3/2ω_LH^1/2/ω_UH|Ω_e|.The above examples recover results known in <cit.>, who analyze the same problem in the restricted geometry where the the waves are collimated and propagate perpendicular to the magnetic field.In more general geometry, where the waves are not collimated and propagate at angles with respect to the magnetic field, we can evaluate the normalized growth rate using the following procedure, mimicking what happens in an actual experiment where the plasma density and magnetic field strength are known. First, we shine a laser with frequency ω_1 into the plasma at some angle θ_1 with respect to the magnetic field. Then the wave vector k_1 is known from the dispersion relation. Second, we observe the scattered laser using some detector placed at angle θ_2 with respect to the magnetic field, and point the detector at angle α_2 with respect to the incoming laser. Suppose the detector can measure the frequency ω_2 of the scattered laser, then from this frequency information, we immediately know k_2 from the dispersion relation, as well as ω_3=ω_1-ω_2 from the resonance condition. Next, we can calculate k_3=√(k_1^2+k_2^2-2k_1k_2cosα_2) using the resonance condition. Finally, we can determine θ_3 by inverting ω_3=ω_r(θ_3), where ω_r is the angle-dependent resonance frequency. Using this procedure, the normalized growth rate can be readily evaluated numerically.Conversely, we may diagnose the plasma density and magnetic field using information measured from laser scattering experiment. Using measured scattering intensities, which can be related to the growth rate, one may be able to fit plasma parameters such that the angular dependence of ℳ_T, measured from experiments, matches what is expected from the theory. §.§.§ Parallel pumpTo demonstrate how to evaluate the normalized growth rate ℳ_T, consider the example where the incident laser propagate along the magnetic field, while the scattered laser propagate at some angle θ_2. In this case α_2=θ_2, and by cylindrical symmetry, ℳ_T depends on only one free parameter θ_2, as plotted in Fig. <ref> for hydrogen plasma with ω_1/ω_p=10. When there are only two charged species, as in the case of hydrogen plasma, there are three electrostatic resonances the lasers can scatter from (Fig. <ref>). The first resonance is the upper resonance, whose frequency asymptotes to the upper-hybrid frequency ω_UH when θ_3→π/2. When scattered from the upper resonance (red curves), the scattered laser is frequency down-shifted (Δω=ω_1-ω_2) by the largest amount. The second resonance is the lower resonance, whose frequency asymptotes to the lower-hybrid frequency ω_LH when θ_3→π/2. When scattered from the lower resonance (orange curves), the scattered laser is frequency shifted by either |Ω_e| in over-dense plasma (|Ω_e|<ω_p), or by ω_p in under-dense plasma (|Ω_e|>ω_p), when θ_3→0. The third resonance is the bottom resonance, whose frequency asymptotes to 0 when θ_3→π/2. When scattered from the bottom resonance (blue curves), the scattered laser is frequency shifted by at most Ω_i when θ_3→0. Since Ω_i is much smaller than other frequency scales, the frequency shift Δω for scattering off the bottom resonance is not discernible in Fig. <ref>c and <ref>d. 
In terms of the normalized growth rate (upper panels), we see ℳ_T→1 when the laser is backscattered from the Langmuir resonance with Δω→ω_p, while ℳ_T→ 0 when the laser is scattered from the cyclotron resonances with Δω→|Ω_e|,Ω_i. For the Langmuir-like resonance, ℳ_T increases monotonically with θ_2. In contrast, for cyclotron-like resonances, ℳ_T peaks at intermediate θ_2, and becomes zero for exact backscattering. To better understand the angular dependence of the normalized growth rate ℳ_T, let us find its asymptotic expressions. In the limit ω_1,2≫ω_3, the wave vector k_2/k_1≃ 1 and k_3/k_1≃ 2sin(θ_2/2). At finite angle θ_2>0, we can approximate θ_3≃(π-θ_2)/2. For even larger θ_2, we can also approximate the resonance frequency ω_3 using Eqs. (<ref>)-(<ref>), because θ_3∼ 0 is now small. These asymptotic geometric relations will be useful when we evaluate the coupling coefficient.First, consider scattering off the Langmuir-like resonance ω_3∼ω_p. Since γ_3,s is finite, the lowest order angular dependence comes from k_3. Taking the limit θ_3→ 0, we recover Eq. (<ref>). Now retaining the angular dependence of k_3, we can grossly approximate|ℳ_T^p|≃sin(θ_2/2). This approximation is of course very crude, but it captures the monotonically increasing behavior for scattering off the Langmuir-like resonance. In fact, the above result becomes a very good approximation when the magnetic field B_0→0. In this unmagnetized limit, we recover the angular dependence of Raman scattering.Second, consider scattering off the electron-cyclotron-like resonance ω_3∼|Ω_e|. Notice that in this case, the magnetization factor γ_3,e^2≫ 1 for small θ_3. Nevertheless, since both the numerator and the denominator contain this factor, ℳ_T remains finite. For electrons, the magnetization ratio β_3,e≃1. Using Eq. (<ref>), which is valid when ω_p|Ω_e|, the magnetization factor γ_3,e^2≃(Ω_e^2-ω_p^2)/(ω_p^2sin^2θ_3). In comparison, β_3,i≪ 1 and γ_3,i^2≃1. Hence the dominant contribution comes from electrons. Substituting these into the formula Eq. (<ref>), we see that to leading order|ℳ_T^e|≃1/2(ω_p/ω_3)^1/2sinθ_2, where ω_3 as a function of θ_2 is given by Eq. (<ref>), with θ_3≃(π-θ_2)/2. From Eq. (<ref>), we see |ℳ_T^e| reaches its maximum when the laser is scattered almost perpendicularly to the magnetic field. The maximum value scales roughly as |ℳ_T^e|∼√(ω_p/|Ω_e|)/2, which can be very large in weakly magnetized plasmas, as long as the cold fluid approximation remains valid. Away from θ_2∼π/2, the normalized growth rate |ℳ_T^e| falls off to zero. This falloff is expected, because exciting the cyclotron resonance is energetically forbidden. Finally, consider scattering off the ion-cyclotron-like resonance ω_3∼Ω_i. In this case, the ion contribution to the wave energy coefficient is no longer negligible, because β_3,i≃1 and γ_3,i^2≃Ω_e/Ω_itan^2(θ_2/2)≫1, as can be seen from Eq. (<ref>). The scattering strength is still dominated by electrons, for which β_3,e≫1, and γ_3,e^2≃-1/β_3,e^2≪1. Substituting these into Eq. (<ref>), the normalized growth rate|ℳ_T^i|≃1/2(ω_pΩ_i/|Ω_e|ω_3)^1/2sinθ_2. We see the above result is rather similar to Eq. (<ref>), except that ω_3∼Ω_i has very weak angular dependence. Therefore, |ℳ_T^i| is very well approximated by Eq. (<ref>). The normalized growth rate peaks almost at θ_2=π/2, reaching a maximum |ℳ_T^i|∼√(ω_p/|Ω_e|)/2, which can be very large in weakly magnetized plasmas. Similar to the electron cyclotron case, |ℳ_T^i| falls off to zero for parallel scattering.
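The procedure described above for evaluating ℳ_T lends itself to a short numerical routine. The sketch below is a rough implementation of Eqs. (<ref>) and (<ref>) under the TTL approximation; the plasma parameters, the use of the unmagnetized electromagnetic dispersion c^2k^2=ω^2-ω_p^2 for the lasers, and the function name M_T are assumptions made here for illustration. For parallel backscattering off the Langmuir-like resonance it returns a value close to unity, as in the collimated parallel limit.

```python
import numpy as np

def M_T(w1, w2, w3, k3, theta3, wp_s, Omega_s, Z_s, M_s, c=1.0):
    """Normalized TTL growth rate, evaluated from the cold-fluid formulas.
    w1, w2, w3 : pump, scattered-light and plasma-wave frequencies
    k3, theta3 : plasma-wave wavenumber and angle to B0
    wp_s, Omega_s, Z_s, M_s : per-species plasma frequency, gyrofrequency,
                              charge number and mass ratio (arrays)
    The caller must supply a resonant triplet; this routine only evaluates
    the TTL coupling formula.  Only beta^2 enters, so the sign of Omega_s
    is immaterial here.
    """
    wp_s, Om = np.asarray(wp_s, float), np.asarray(Omega_s, float)
    Z, M = np.asarray(Z_s, float), np.asarray(M_s, float)
    wp2 = np.sum(wp_s**2)                       # total plasma frequency squared
    beta2 = (Om / w3)**2                        # magnetization ratio squared
    gam2 = 1.0 / (1.0 - beta2)                  # magnetization factor
    kFk = gam2 * (1.0 - beta2 * np.cos(theta3)**2)       # khat3 . F khat3
    u3 = 1.0 + np.sum(wp_s**2 / w3**2 * gam2**2 * beta2) * np.sin(theta3)**2
    mu_T = np.sum(Z / M * wp_s**2 / wp2 * (c * k3 / w3) * kFk) / np.sqrt(u3)
    return 0.5 * np.sqrt(wp2**1.5 / (w1 * w2 * w3)) * mu_T

# illustrative numbers: hydrogen plasma, w1 = 10 wp, |Omega_e| = 0.5 wp
wp, r = 1.0, 0.5
wp_s    = np.array([wp, wp / np.sqrt(1836.0)])
Omega_s = np.array([r * wp, r * wp / 1836.0])
Z_s, M_s = np.array([-1.0, 1.0]), np.array([1.0, 1836.0])

w1, w3, theta3 = 10.0 * wp, wp, 0.0          # parallel backscatter off Langmuir
w2 = w1 - w3
k1, k2 = np.sqrt(w1**2 - wp**2), np.sqrt(w2**2 - wp**2)
k3 = k1 + k2                                  # exact backscattering
print(abs(M_T(w1, w2, w3, k3, theta3, wp_s, Omega_s, Z_s, M_s)))  # ~ 1
```

For oblique geometries one would first determine ω_3 and θ_3 from the resonance condition and the angle-dependent resonance frequency, exactly as in the experimental procedure described above, and then call the same routine.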
§.§.§ Perpendicular pump Consider the other special case where the pump laser propagates perpendicular to the magnetic field. In this geometry, it is natural to plot the normalized growth rate |ℳ_T| in spherical coordinates (Fig. <ref>), where the polar angle θ_2 is measured from the magnetic field 𝐁_0, and the azimuthal angle ϕ_2 is measured from the wave vector 𝐤_1. By symmetry of this setup, it is obvious that ℳ_T(ϕ_2,θ_2) =ℳ_T(ϕ_2,π-θ_2) =ℳ_T(-ϕ_2, θ_2). Therefore, it is sufficient to consider the range θ_2∈[0,π/2] and ϕ_2∈[0,π]. By matching the 𝐤 resonance, we can read θ_3 from the spherical coordinates (ϕ_2,θ_2), and thereafter read the frequency shift ω_3 from Fig. <ref>. As for the growth rate, in an electron-ion plasma, when scattered from the upper resonance (Fig. <ref>a), backscattering has the largest growth rate. For scattering off the lower resonance (Fig. <ref>b), |ℳ_T| reaches its maximum for both backscattering and nearly parallel scattering, where the scattered laser propagates almost parallel to the magnetic field. In comparison, when scattering off the bottom resonance (Fig. <ref>c), the normalized growth rate peaks for nearly backward scattering, but falls to zero for exact backscattering.To better understand the angular dependence of the normalized growth rate, let us consider its asymptotic expressions for two special cases. The first special case is when all waves lie in the plane perpendicular to the magnetic field, namely, when θ_2=90^∘. In this case, the angle θ_3 is fixed to 90^∘, and the frequencies of the plasma resonances are also fixed to ω_UH, ω_LH, or zero. Therefore, the angular dependence only comes from k_3. In the limit ω_1,2≫ω_3, we have k_3≃2k_1sin(ϕ_2/2). Using Eqs. (<ref>) and (<ref>), it is easy to see that, for scattering off the UH and LH waves in the perpendicular plane,|ℳ_T⊥^UH| ≃ (ω_p/ω_UH)^1/2sin(ϕ_2/2),|ℳ_T⊥^LH| ≃ [ω_p^3/2ω_LH^1/2/(ω_UH|Ω_e|)]sin(ϕ_2/2). Now let us calculate ℳ_T⊥^b for scattering off the bottom resonance. Using the asymptotic expression Eq. (<ref>) for ω_3, we see that although the magnetization ratio β_3,s→∞, the product β_3,scosθ_3 remains finite as θ_3→π/2. Since the magnetization factor γ_3,s^2≃-1/β_3,s^2 is vanishingly small, it is easy to see that ℳ_T⊥^b∝√(ω_3), which goes to zero when θ_3→π/2. Hence, for scattering off the bottom resonance in the perpendicular plane,|ℳ_T⊥^b|=0,and the scattering is completely suppressed. Consequently, exact backscattering from the bottom resonance is also suppressed.To see how ℳ_T^b climbs up from zero, consider another special case where 𝐤_2 is in the plane spanned by 𝐤_1 and 𝐛. In this case, it is more natural to consider ℳ_T as a function of α_2, the angle between 𝐤_1 and 𝐤_2, as plotted in Fig. <ref>. Let us find the asymptotic expression of ℳ_T^b when α_2∼π. In this limit, we have θ_3∼π/2, and the resonance frequency ω_3 can be approximated by Eq. (<ref>). Then the magnetization ratios β_3,e^2≃Ω_e^2/Ω_i^2+|Ω_e|/(Ω_icos^2θ_3) and β_3,i^2≃1+Ω_i/(|Ω_e|cos^2θ_3). Consequently, the magnetization factors can be well approximated by γ_3,e^2≃-1/β_3,e^2 and γ_3,i^2≃-|Ω_e|cos^2θ_3/Ω_i. Moreover, since ω_1,2≫ω_3, the angle θ_3≃α_2/2 and the wave vector k_3≃2k_1sin(α_2/2). Substituting these into the formula Eq. (<ref>), we see that when α_2∼π, the normalized growth rate|ℳ_T^b|^2≃[ζ(1+ζcos^2(α_2/2))]^3/2sin^2(α_2/2)cos(α_2/2) / {r^3+r[1+ζ(1+ζcos^2(α_2/2))^2]sin^2(α_2/2)}, where r:=|Ω_e|/ω_p and ζ:=M_i/Z_i≫1. To see the lowest order angular dependence, we can use a cruder but simpler approximation |ℳ_T^b|^2≃ζ^1/2cos(α_2/2)/r.
We see |ℳ_T^b| increases sharply from zero away from exact backscattering. Using result Eq. (<ref>), we find in the other limit α_2∼0, the normalized growth rate|ℳ_T^b|≃sin^2α_2/2/r^1/2(1-1/ζtan^2α_2/2)^-3/4. We see scattering from the bottom resonance can be strong when the plasma is weakly magnetized, as long as the scattering angle is away from exact forward or backward scattering.In summary, the TTL scattering in magnetized plasma is mostly due to density beating Eq. (<ref>), and the modification due to the magnetic field can be represented by the normalized growth rate ℳ_T. In magnetized plasmas, cyclotron-like resonances, in addition to the Langmuir-like resonance, contribute to the scattering of the T waves. When scattered from the Langmuir-like resonance, both the wave energy coefficient and the scattering strength are finite. Therefore in this case, the angular dependence of ℳ_T comes mostly from k_3, which reaches maximum for backscattering. In contrast, for scattering from cyclotron-like resonances, both the scattering strength and the wave energy coefficient can blow up. Their ratio, ℳ_T, goes to zero when the scattering angles are such that the L wave frequency approaches either zero or the cyclotron frequencies. In addition, ℳ_T can also become zeros at special angles where scattering from electrons and ions exactly cancel. Away from these special angles, scattering from cyclotron-like resonances, which increases with decreasing magnetic field, typically have growth rates that are comparable to scattering from Langmuir-like resonances. When the plasma parameters are known, we can determine the angular dependence of ℳ_T using formula Eq. (<ref>). This knowledge can be used to choose injection angles of two lasers such that their scattering is either enhanced or suppressed. Conversely, by measuring angular dependence of ℳ_T in laser scattering experiments, one may be able to fit plasma parameters to match Eq. (<ref>). This provide a diagnose method from which the magnetic field, as well as the plasma density and composition can be measured.§.§ L⇌ L+L scatteringIn this subsection, we consider the other scenario where the three-wave scattering happens between three resonant quasi-longitudinal waves. This happens, for example, when we launch an electrostatic wave into the plasma by some antenna arrays. When the wave power is strong enough to overcome damping, it may subsequently decay to two other waves if the resonance conditions can be satisfied. The decay waves are not necessarily electrostatic, but for the purpose of illustrating the general results in Sec. <ref>, we will only give examples where the two decay waves are also electrostatic. The coupling strength between three L waves can be simplified using the approximation that the waves are quasi-longitudinal. Substituting 𝐞_i≃𝐤̂_i into formula (<ref>) and using the frequency resonance condition (<ref>), the normalized scattering strength for LLL scattering can be written asΘ^s≃ - ck_1ω_1/ω_2ω_3(𝐤̂_1 ·𝔽_s,2^*𝐤̂_2)(𝐤̂_1·𝔽_s,3^*𝐤̂_3)+ ck_2ω_2/ω_3ω_1(𝐤̂_2 ·𝔽_s,1𝐤̂_1)(𝐤̂_2·𝔽_s,3^*𝐤̂_3)+ ck_3ω_3/ω_1ω_2(𝐤̂_3 ·𝔽_s,1𝐤̂_1)(𝐤̂_3·𝔽_s,2^*𝐤̂_2),where k_i:=|𝐤_i| is the magnitude of the wave vector, and 𝐤̂_i is the unit vector along 𝐤_i direction. It is easy to recognize that 𝐤̂_i·(𝔽_s,j/ω_j)𝐤̂_j is the projection of quiver velocity 𝐯̂_j in 𝐤̂_i direction. Therefore, the couplings between three L waves may also be interpreted as density beating. 
The first term in Θ^s is proportional to the rate of creating wave 1 by annihilating waves 2 and 3, the second term is proportional to the rate of annihilating waves 3 and 1̅ to create wave 2̅, and the last term can be interpreted similarly. The interference between these processes determines the overall scattering strength.Having obtained expressions for the normalized scattering strength (<ref>) and wave energy (<ref>), we can immediately evaluate the coupling coefficient (<ref>), and find expressions for experimental observables. For example, the linear growth rate γ_0 [Eq. (<ref>)] of the parametric decay instability can be written asγ_0=γ_L|ℳ_L|,where γ_L is purely determined by the pump waveγ_L=1/2ck_1|a_1|.The normalized growth rate for LLL scatteringℳ_L=ω_p/2ck_1(ω_p^2/ω_2ω_3)^1/2μ_L,is the product of a kinematic factor with the coupling coefficient Γ=ω_p^2μ/4. In the LLL approximation, the normalized coupling coefficientμ_L≃∑_sZ_s/M_sω_ps^2/ω_p^2Θ^s_r/(u_1u_2u_3)^1/2,where Θ^s_r is the real part of Eq. (<ref>). Again, notice when density of species s goes to zero, its contribution to μ_L also goes to zero as expected.To evaluate the normalized growth rate ℳ_L, we can use the following procedure to mimic what happens in an actual experiment. Suppose we know the species density and magnetic field, then we know what resonances are there in the plasma. We can then launch a pump wave at resonance frequency ω_1 using some antenna array. The antenna array not only inject a wave at the given frequency, but also selects the wave vector k_1 and the wave direction θ_1.To observe the decay waves, we can place a probe at some angle θ_2 with respect to the magnetic field, and some azimuthal angle ϕ_2 in a spherical coordinate. The probe can measure fluctuations of the plasma potential and therefore inform us about the wave frequency ω_2. Then we immediately know ω_3=ω_1-ω_2 from the three-wave resonance condition. Moreover, since the third wave is a magnetic resonance, the frequency ω_3 constrains the angle θ_3 at which the third wave can propagate. However, a simple probe cannot measure the wave vector, so we will have to solve k_2 and k_3 from the resonance condition (<ref>), which can be written in components ask_3^2=k_1^2+k_2^2-2k_1k_2cosα_2, k_3cosθ_3=k_1cosθ_1-k_2cosθ_2.Here α_2=α_2(θ_1,θ_2,ϕ_2) is the angle between 𝐤_1 and 𝐤_2. The above system of quadratic equations have two solutions in general. This degeneracy comes from the symmetry 2↔3, because we cannot distinguish whether the probe is measuring wave 2 or wave 3, both of which are electrostatic resonances. If the solutions k_2 and k_3 are both real and positive, then the three-wave resonance conditions can be satisfied, and three-wave decay will happen once the pump amplitude a_1 exceeds the damping threshold. In another word, we control ω_1 and 𝐤_1 by the antenna array, measure ω_2 using probes, and infer ω_3, 𝐤_2, and 𝐤_3 by solving resonance conditions. With these information, the analytical formula of the normalized growth rate ℳ_L can be readily evaluated numerically. §.§.§ Parallel pumpTo demonstrate how to evaluate the normalized growth rate ℳ_L, consider the example where the pump wave is launched along the magnetic field (θ_1=0). In an electron-ion plasma, this geometry allows the antenna to launch three electrostatic waves: the Langmuir wave, the electron cyclotron wave, or the ion cyclotron wave. 
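Before specializing to particular pump waves, the k-matching step described above can be sketched in a few lines of Python. The function below solves the quadratic system for the magnitudes k_2 and k_3, given k_1 and the angles; it assumes cosθ_3 ≠ 0, and the numbers in the usage example are arbitrary (for a parallel pump, α_2 = θ_2).

```python
import numpy as np

def match_k(k1, theta1, theta2, theta3, alpha2):
    """Solve the wave-vector matching conditions
         k3^2 = k1^2 + k2^2 - 2 k1 k2 cos(alpha2),
         k3 cos(theta3) = k1 cos(theta1) - k2 cos(theta2),
       for the unknown magnitudes k2, k3 > 0.  alpha2 is the angle between k1 and k2,
       which in general depends on (theta1, theta2, phi2).  Returns a list of (k2, k3)
       pairs; it may be empty (no resonance) or contain two entries (degenerate case)."""
    c1, c2, c3, ca = np.cos(theta1), np.cos(theta2), np.cos(theta3), np.cos(alpha2)
    # Eliminate k3; after multiplying by cos^2(theta3) this is A k2^2 + B k2 + C = 0
    A = c2**2 - c3**2
    B = 2.0 * k1 * (c3**2 * ca - c1 * c2)
    C = k1**2 * (c1**2 - c3**2)
    roots = np.roots([A, B, C]) if abs(A) > 1e-12 else np.array([-C / B])
    sols = []
    for k2 in roots:
        if abs(k2.imag) > 1e-9 or k2.real <= 0:
            continue
        k2 = k2.real
        k3 = (k1 * c1 - k2 * c2) / c3      # parallel component; requires cos(theta3) != 0
        if k3 > 0:
            sols.append((k2, k3))
    return sols

# Example: pump along B (theta1 = 0), probe at theta2 = 60 deg, resonance cone theta3 = 40 deg
print(match_k(1.0, 0.0, np.deg2rad(60), np.deg2rad(40), np.deg2rad(60)))
```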
In the regime where ω_p∼|Ω_e|∼|ω_p-Ω_e|≫Ω_i, four decay modes are allowed by the resonance conditions: u→ l+l, l→ l+l, l→ l+b, and b→ b+b, where we have labeled waves by the resonance branch they belong to, and u, l, and b denote the upper, lower and bottom resonances. First, let us consider the case where the pump wave is the Langmuir wave (Fig. <ref>a, <ref>b). In this case, the magnetization factor γ_1 is finite, the wave energy coefficient u_1=1, and 𝔽_s,1𝐤̂_1=𝐤̂_1. The normalized scattering strength (<ref>) contains the following four simple inner products:(𝐤̂_1·𝔽_s,2^*𝐤̂_2) =(𝐤̂_2·𝔽_s,1𝐤̂_1)=cosθ_2; (𝐤̂_1·𝔽_s,3^*𝐤̂_3)= (𝐤̂_3·𝔽_s,1𝐤̂_1)=cosθ_3, as well as two other inner products (𝐤̂_2·𝔽_s,3^*𝐤̂_3)=cosθ_2cosθ_3-γ_s,3^2sinθ_2sinθ_3; and (𝐤̂_3·𝔽_s,2^*𝐤̂_2)=cosθ_3cosθ_2-γ_s,2^2sinθ_3sinθ_2. Substituting these inner products into Eq. (<ref>), and using the resonance condition (<ref>), the normalized scattering strength can be immediately found. In the above expressions, θ_2 is the independent variable, and ω_2 is measured. Then we can determine θ_3 from ω_3(θ_3)=ω_1-ω_2 using Eq. (<ref>), and solve for k_2 and k_3 from Eqs. (<ref>) and (<ref>). Finally, with the above information, the normalized matrix element ℳ_L can be readily evaluated. When pumped at the Langmuir frequency (ω_1=ω_p), the resonance conditions constrain the plasma parameters and angles at which the three-wave decay can happen. In over-dense plasma (eg. Fig. <ref>a), the Langmuir wave is in the upper resonance, so the resonance condition can be satisfied only if ω_p<2|Ω_e|. Having satisfied this condition, the u→ l+l decay can happen if θ_2<θ_b^o, where θ_b^o is the angle such that ω_l(θ_b^o)=ω_p-|Ω_e|. In comparison, in under-dense plasma (eg. Fig. <ref>b), the Langmuir wave is in the lower resonance, and therefore can always decay. One decay mode is l→ l+l, which can happen for θ_2>θ_a^u, where ω_l(θ_a^u)=ω_p-ω_LH. Another decay mode is l→ l+b. When ω_2=ω_l, this decay mode happens for 0<θ_2<θ_i^u, where ω_l(θ_i^u)=ω_p-Ω_i; whereas when ω_2=ω_b, this decay mode can happen at any θ_2. Finally, using the symmetry 2↔3, the constrains on θ_3 can be readily deduced.For Langmuir wave pump, the normalized growth rate reaches maximum for symmetric decay, where ω_2=ω_3=ω_p/2. Let us find the asymptotic expression of ℳ_L in the symmetric case, so as to get a sense of how the normalized growth rate scales with plasma parameters. The symmetric angle θ_s can be solved from Eq. (<ref>). Using ω_p∼|Ω_e|≫Ω_i, we find sin^2θ_s≃3[1-ω_p^2/(4Ω_e^2)]/4. Then the wave energy coefficient u_2=u_3≃1+3ω_p^2/(4Ω_e^2-ω_p^2), where the sub-dominant ion contribution in Eq. (<ref>) has been dropped. To solve for the degenerate wave vectors in the symmetric case, it is more convenient to consider the two limits:θ_2=θ_s-ϕ, θ_3=θ_s+ϕ, and θ_2=θ_s-ϕ, θ_3=π-θ_s-ϕ, and then let ϕ→0. Solving Eqs. (<ref>) and (<ref>) for the wave vectors, the two solutions are k_2^-/k_1≃1/(2cosθ_s) and k_2^+/k_1≃sinθ_s/(2sinϕ). For the k_2^- solution, all terms are finite, and the normalized scattering strength is dominated by electron contribution Θ_e^-≃-3ck_1[1+ω_p^2/(2Ω_e^2)]/(4ω_p). Consequently, the normalized growth rate for symmetric k^- scatteringℳ_L^-(ω_p→ω_p/2,ω_p/2)≃3/4(1-ω_p^2/4Ω_e^2).Notice that this decay mode can happen only if |Ω_e|≥ω_p/2. 
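For quick numerical estimates, the two asymptotic results just obtained for the symmetric k^- decay of a parallel Langmuir pump can be packaged as a small helper. This is a minimal sketch: the function name is ours and the sample values of r are arbitrary.

```python
import numpy as np

def langmuir_pump_symmetric(r):
    """Symmetric k^- decay of a parallel Langmuir pump (omega_1 = omega_p) into two
    lower-resonance waves at omega_p/2, using the asymptotic expressions above.
    Here r = |Omega_e|/omega_p, and this decay mode requires r >= 1/2."""
    if r < 0.5:
        raise ValueError("symmetric k^- decay requires |Omega_e| >= omega_p/2")
    factor = 1.0 - 1.0 / (4.0 * r**2)
    sin2_theta_s = 0.75 * factor                 # sin^2(theta_s)
    theta_s_deg = np.degrees(np.arcsin(np.sqrt(sin2_theta_s)))
    M_minus = 0.75 * factor                      # |M_L^-| at the symmetric angle
    return theta_s_deg, M_minus

for r in (0.5, 1.0, 2.0):                        # arbitrary sample values of |Omega_e|/omega_p
    theta_s, M = langmuir_pump_symmetric(r)
    print("r = %.1f : theta_s = %5.1f deg, |M_L^-| = %.3f" % (r, theta_s, M))
```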
To see what happens to the k_2^+ solution, we need to keep the dominant terms, and expand ω_2≃ω_p/2-ω_s'ϕ and ω_3≃ω_p/2+ω_s'ϕ, where the angular derivative of the lower resonance ω_l(θ) can be evaluated at the symmetric angle using Eq. (<ref>) to be ω_s'/ω_p≃-2Ω_e^2sin(2θ_s)/(2Ω_e^2+ω_p^2). Since the ion terms do not contain a singularity, the normalized scattering strength is again dominated by electrons, Θ_e^+≃3ck_1[1+5ω_p^2/(4Ω_e^2)]/(8ω_p). Consequently, the normalized growth rate for symmetric k^+ scattering is ℳ_L^+(ω_p→ω_p/2,ω_p/2)≃ -ℳ_L^-/2[1+3ω_p^2/(2(ω_p^2+2Ω_e^2))], where ℳ_L^- is given by Eq. (<ref>). Since ω_p≤2|Ω_e|, it is easy to see that |ℳ_L^+| is always smaller than |ℳ_L^-|. Moreover, wave damping tends to be smaller for the k_2^- solution. Therefore, the dominant decay mode in experiments will be the k^- mode, where the two decay waves propagate symmetrically at the angle θ_s with respect to the parallel pump wave. Second, let us consider the case where the pump wave is the electron cyclotron wave (Fig. <ref>c, <ref>d). In this case, β_e,1∼1 and the magnetization factor γ_e,1^2≃(Ω_e^2/ω_p^2-1)/sin^2θ_1 approaches infinity, so the dominant contribution comes from electrons. Keeping track of the dominant terms as θ_1→0 and using the small-angle expansion Eq. (<ref>), the inner products are (𝐤̂_2·𝔽_e,1𝐤̂_1 )≃∓γ_e,1^2sinθ_1sinθ_2, and (𝐤̂_3·𝔽_e,1𝐤̂_1) ≃±γ_e,1^2sinθ_1sinθ_3. The other four inner products that enter Eq. (<ref>) are the same as before. Keeping terms ∝1/sinθ_1, the leading term of the normalized scattering strength can be readily found. Although the normalized scattering strength is divergent as θ_1→0, the normalized growth rate remains finite. This is because the divergence in Θ_e cancels the divergence in the wave energy coefficient u_1≃(ω_p^2-Ω_e^2)^2/(ω_p^2Ω_e^2sin^2θ_1), which enters the denominator of ℳ_L. Following the procedure in the first example, the normalized growth rate can be readily obtained. When an intense electron cyclotron pump (ω_1=|Ω_e|) exceeds the damping threshold, a number of decay modes are possible. In over-dense plasma (e.g., Fig. <ref>c), the electron cyclotron wave is in the lower resonance, and three-wave decay is always possible. One decay mode is l→ l+l, which can happen for θ_2>θ_a^o, where ω_l(θ_a^o)=|Ω_e|-ω_LH. Another decay mode is l→ l+b, which can happen for any θ_2 if ω_2=ω_b, and can happen for 0<θ_2<θ_i^o if ω_2=ω_l, where ω_l(θ_i^o)=|Ω_e|-Ω_i. In comparison, in under-dense plasma (e.g., Fig. <ref>d), the electron cyclotron wave is in the upper resonance. The resonance condition can be satisfied if |Ω_e|<2ω_p, and the u→ l+l decay can happen if θ_2<θ_b^u, where ω_l(θ_b^u)=|Ω_e|-ω_p. We see that the angular constraints for electron-cyclotron-pump decay are reciprocal to those for the Langmuir pump. For the electron cyclotron pump, the normalized growth rate crosses zero and therefore vanishes for symmetric k^- decay, while reaching its maximum for symmetric k^+ decay. Let us find the asymptotic expression for ℳ_L^+ to get a sense of how the normalized growth rate scales with plasma parameters. Again, we can find the symmetric angle θ_s from Eq. (<ref>), which gives sin^2θ_s≃3[1-Ω_e^2/(4ω_p^2)]/4. Then the wave energy coefficients are u_2=u_3≃2(1+2ω_p^2/Ω_e^2)/3. To find the leading behavior of the scattering strength, consider the limit θ_2=θ_s-ϕ, θ_3=π-θ_s-ϕ, and let ϕ→0. In this limit, the wave vector k_2^+/k_1≃sinθ_s/(2sinϕ)→∞, and the frequencies can be expanded as ω_2≃|Ω_e|/2-ω_s'ϕ and ω_3≃|Ω_e|/2+ω_s'ϕ, where the angular derivative ω_s' can again be solved from Eq.
(<ref>) to be ω_s'/Ω_e≃2ω_p^2sin(2θ_s)/(Ω_e^2+2ω_p^2). Keeping the dominant terms as ϕ→0, the normalized scattering strength is |Θ_e^+|≃ ck_1sin(2θ_s)(1-r^2)(1-r^2/4)/(sinθ_1Ω_e), where r:=|Ω_e|/ω_p. Since the ion contributions are subdominant, the normalized growth rate for symmetric k^+ scattering is |ℳ_L^+(Ω_e→Ω_e/2,Ω_e/2)|≃(r/4)√((3-3r^2/4)^3(1+3r^2/4))/(2+r^2). We see that ℳ_L^+ is nonzero for 0<r<2, and reaches a maximum of ∼0.38 when r∼0.92. The normalized growth rate can be related to the decay rate in experiments, once wave damping is taken into account. Finally, let us consider the case where the electrostatic pump wave is at the ion cyclotron frequency (Fig. <ref>e). Since Ω_i is much smaller than any other characteristic wave frequencies, the only possible decay mode is b→ b+b. Such decay can happen for any angle θ_2, because the resonance conditions can always be satisfied. Similar to what happens in the previous example, the normalized growth rate ℳ_L changes sign and therefore vanishes for symmetric k^- decay, while reaching its maximum for symmetric k^+ decay. Now let us give an estimate of the maximum value of ℳ_L^+. Since the magnetization factor γ_1,i^2≃ζ/tan^2θ_1→∞, where ζ:=M_i/Z_i≫1, the ion terms dominate. The divergent inner products are (𝐤̂_2·𝔽_i,1𝐤̂_1)≃∓γ_i,1^2sinθ_1sinθ_2 and (𝐤̂_3·𝔽_i,1𝐤̂_1)≃±γ_i,1^2sinθ_1sinθ_3. The other four inner products are finite and similar to those found before. Using these inner products and keeping the leading terms, the normalized scattering strength is |Θ_i^+|≃ ck_1Ω_e^2cosθ_s/(2Ω_i^3sinθ_1), where we have expanded near the symmetric angle as before, with ω_s'≃9Ω_esin(2θ_s)/16. The symmetric angle, very close to π/2, can be estimated from Eq. (<ref>) to be cos^2θ_s≃Ω_i/(3|Ω_e|). The wave energy coefficients are u_1≃ω_p^2|Ω_e|/(Ω_i^3sin^2θ_1) and u_2=u_3≃16ω_p^2/(9Ω_i|Ω_e|). Substituting these results into formula (<ref>), the normalized growth rate for symmetric k^+ decay is |ℳ_L^+(Ω_i→Ω_i/2,Ω_i/2)|≃3√(3)/32Ω_i/ω_p. We see that in a typical plasma, where ω_p≫Ω_i, the decay mode b→ b+b is orders of magnitude weaker than the other decay modes. Nevertheless, when compared with the pump frequency ω_1=Ω_i, the growth rate of the three-wave decay instability is not necessarily small. §.§.§ Perpendicular pump In this subsection, we use another set of examples to illustrate how to evaluate the normalized growth rate ℳ_L, by considering the cases where the pump wave propagates perpendicular to the magnetic field. In this geometry, the pump frequency in an electron-ion plasma can be either the upper-hybrid frequency ω_UH or the lower-hybrid frequency ω_LH. For three-wave decay to happen, the frequency resonance condition (<ref>) must be satisfied. Since the lower-hybrid frequency ω_LH≫Ω_i, it is not possible to match the frequency resonance condition with a LH pump wave in a uniform plasma. By a similar consideration, for a UH pump wave, the decay mode u→ u+u is also forbidden. However, other decay modes of the UH pump are possible. Using the expression ω_UH^2≃ω_p^2+Ω_e^2, we see that u→ u+b is always possible; u→ u+l is possible if 2/√(ζ)≲ r≲√(ζ)/2, where ζ=M_i/Z_i≫1 is the normalized mass-to-charge ratio for ions; and u→ l+l is possible only if 1/√(3)≤ r≤√(3). Here, r=|Ω_e|/ω_p is the ratio of the electron cyclotron frequency to the plasma frequency. In what follows, we consider r in the range where all three decay modes are possible. In addition to the frequency condition, the wave vector resonance conditions (<ref>) must also be satisfied for three-wave decay to happen. To see when Eq.
(<ref>) can be satisfied in this perpendicular geometry, it is convenient to work in spherical coordinates, where the polar angle θ is measured from the magnetic field 𝐛, and the azimuthal angle ϕ is measured from 𝐤_1. In these spherical coordinates, the wave vectors 𝐤_2 and 𝐤_3 are constrained to lie on the two cones spanning angles θ_2, π-θ_2 and θ_3, π-θ_3. Then 𝐤_2 and 𝐤_3 can reside along the lines generated by cutting the two cones with a plane passing through 𝐤_1. When |cosθ_2|>|cosθ_3|, the plane starts to intercept both cones when |cosϕ_3|≥|cosϕ_c|, where the critical angle satisfies sinϕ_c=tanθ_2/tanθ_3. When the strict inequality holds, for each 𝐤_3, there are two solutions for 𝐤_2 such that the resonance conditions (<ref>) are satisfied. By the exchange symmetry 2↔3, we immediately know what happens when |cosθ_2|<|cosθ_3|. The resonance condition (<ref>) constrains where in the θ_2-ϕ_2 plane the normalized growth rate ℳ_L can take nonzero values. Having matched the resonance conditions, the normalized growth rate in polar coordinates can be readily evaluated (Fig. <ref>). To understand the angular dependence of ℳ_L, it is useful to notice that due to the exchange symmetry ℳ_L(2,3)=ℳ_L(3,2), the normalized growth rate ℳ_L(θ_2,ϕ_2) in one region can be mapped to ℳ_L(θ_2',ϕ_2') in another region. To be more specific, when ω_2 is on the upper resonance (Fig. <ref>a), the normalized growth rate ℳ_L is nonzero in two regions. The first region is θ_2<θ_u^a, where ω_u(θ_u^a)=ω_UH-ω_LH. In this region, the decay mode u_1→ u_2+l_3 is allowed, where ω_3 is on the lower resonance. By the exchange symmetry, this region can be mapped to the island on the bottom right corner of Fig. <ref>b, in which ω_2 is on the lower resonance instead. The other region in Fig. <ref>a where ℳ_L is nonzero is the narrow strip θ_2>θ_u^b, where ω_u(θ_u^b)=ω_UH-Ω_i. In this region, the decay mode u_1→ u_2+b_3 is allowed, where ω_3 is on the bottom resonance. Exchanging 2↔3, this region corresponds to the case where ω_2 is on the bottom resonance instead (Fig. <ref>c). The remaining decay mode is u_1→ l_2+l_3, where both decay waves are on the lower resonance. This decay mode is allowed within the large region on the left of Fig. <ref>b. This region has a straight boundary at θ_2=θ_l^m, where ω_l(θ_l^m)=ω_UH/2. To the left of this boundary, we have θ_2<θ_3, so there is only one solution for k_2. To the right of this boundary, we have θ_2>θ_3, so both k_2^- and k_2^+ solutions exist as long as sinϕ_2<tanθ_3/tanθ_2. Whenever both solutions exist, Fig. <ref> shows the k^- branch, which has weaker damping. In those degenerate cases, the k^+ branch is usually comparable to the k^- branch. An exception is shown in the inset of Fig. <ref>c', where the k^+ branch is dominant for the u_1→ b_2+u_3 decay, corresponding to forward scattering of the UH pump with little frequency shift. For the u→ u_2+l_3 decay (Fig. <ref>a), one important decay channel has ω_2∼|Ω_e| propagating almost parallel to 𝐛 in the backward direction (ϕ_2=180^∘), and the other decay wave propagating almost perpendicular to 𝐛 in the forward direction (ϕ_3=0^∘). To see how ℳ_L scales with plasma parameters, let us find its asymptotic expression when θ_2→0. In this limit ω_2→|Ω_e|, so the magnetization factor γ_2,e^2 is divergent. Then the dominant terms of the coupling strength (<ref>) come from the 𝔽_e,2 terms.
The divergent inner products are (𝐤̂_̂1̂·𝔽_e,2^*𝐤̂_̂2̂)≃-γ_e,2^2sinθ_2 and (𝐤̂_̂3̂·𝔽_e,2^*𝐤̂_̂2̂)≃-γ_e,2^2sinθ_2sinθ_3, and we also need the finite inner products (𝐤̂_̂1̂·𝔽_e,3^*𝐤̂_̂3̂)≃γ_e,3^2sinθ_3 and (𝐤̂_̂3̂·𝔽_e,1𝐤̂_̂1̂)≃γ_e,1^2sinθ_3. Then the leading term of the normalized scattering strength is Θ_e≃ ck_1γ_e,1^2γ_e,2^2γ_e,3^2 (ω_1^2-ω_3^2)sinθ_2sinθ_3/(ω_1ω_2ω_3), where we have used the resonance condition k_3sinθ_3=k_1. The angle θ_3 can be estimated from Eq. (<ref>) using ω_3≫Ω_i, which gives sin^2θ_3≃(ω_3^2-ω_p^2)(ω_3^2-Ω_e^2)/(ω_p^2Ω_e^2). Then the wave energy coefficient u_3≃(2ω_3^2-ω_UH^2)(ω_3^2-Ω_e^2). As for the other two wave energy coefficients, using previous results, we know u_1=ω_UH^2/ω_p^2 and u_2≃(Ω_e^2-ω_p^2)^2/(Ω_e^2ω_p^2sin^2θ_2). Substituting these into Eqs. (<ref>) and (<ref>), we find the normalized growth rate |ℳ_L(ω_UH→|Ω_e|,ω_3)|≃ω_3(ω_3+ω_UH)/ω_p√(2(ω_UH^2-2ω_3^2)),where ω_3=ω_UH-|Ω_e| is the resonance frequency. From previous discussion, we know this decay mode can happen as long as 1/√(3)≤ r≲√(ζ)/2. Within this parameter range, it is easy to see that Eq. (<ref>) decreases monotonically with increasing magnetic field. The maximum value ℳ_L=√(3)/2 is attained at r=1/√(3), where ω_3=|Ω_e|=ω_UH/2 such that the decay is symmetric.For the u→ l_2+l_3 decay (Fig. <ref>b), the dominant decay channel is the symmetric decay, where ω_2=ω_3=ω_1/2. In the symmetric decay geometry, we have θ_3=π-θ_2 and ϕ_3=-ϕ_2. Then the wave vector resonance condition becomes k_2=k_3=k_1/(2sinθ_2cosϕ_2). The symmetric decay angle θ_2=θ_s can be estimated from Eq. (<ref>) using ω_2=ω_UH/2≫Ω_i, which gives cos^2θ_s≃3ω_UH^4/(16ω_p^2Ω_e^2). Since the frequencies are far away from cyclotron frequencies, all the magnetization factors are finite. Then the inner products (𝐤̂_̂1̂·𝔽_s,2^*𝐤̂_̂2̂) ≃γ_s,2^2(cosϕ_2+iβ_s,2sinϕ_2)sinθ_2, (𝐤̂_̂2̂·𝔽_s,1𝐤̂_̂1̂)≃γ_s,1^2(cosϕ_2+iβ_s,1sinϕ_2)sinθ_2,(𝐤̂_̂3̂·𝔽_s,2^*𝐤̂_̂2̂)≃-1+γ_s,2^2sin^2θ_2(2cos^2ϕ_2+iβ_s,2sin2ϕ_2-β_s,2^2), and by exchanging 2↔ 3, we can easily find the other three inner products. Substituting these inner products into Eq. (<ref>), the normalized scattering strength becomes particularly simple when ϕ_2→π/2. In this limit k_2,k_3→∞, but the products k_2cosϕ_2=-k_3cosϕ_3 remains finite. Keeping nonzero terms as ϕ_2→π/2, the scattering strength simplifies to Θ_e^+≃- 2ck_1ω_UH^3/[ω_p^2(3Ω_e^2-ω_p^2)]. The electron terms also dominate the wave energy coefficients u_2=u_3≃2ω_UH^2/(3Ω_e^2-ω_p^2). Gathering the above results, the normalized growth rate for symmetric k^+ scattering is|ℳ_L^+(ω_UH→ω_UH/2,ω_UH/2)|≃ω_p/ω_UH.The above special value of ℳ_L is approximately the maximum in Fig. <ref>b, where θ_2=θ_s and ϕ_2=90^∘. Notice that this special case is singular in wave vector k_2,k_3→∞, and hence will be suppressed by wave damping. Therefore, the dominant decay channels observed in experiment will happen at smaller angle ϕ_2<90^∘ in the symmetric decay geometry.Finally, for the u→ b_2+u_3 decay (Fig. <ref>c), the dominant decay channel has ω_2∼ω_UH propagating almost perpendicular to 𝐛 in the forward direction, and ω_3∼Ω_i propagating either in the forward or backward direction. As an example, let us consider symmetric forward scattering where ϕ_2=ϕ_3=0 and θ_2=π-θ_3=θ_s. In this geometry, k_2^-=k_3^-=k_1/(2sinθ_s). Since θ_s∼π/2, we can estimate the symmetric angle using asymptotic expressions Eqs. (<ref>) and (<ref>). 
Substituting these expressions into he frequency resonance condition (<ref>), we obtain cos^2θ_s≃2Ω_iω_UH^3/(Ω_e^2ω_p^2)∼0, where we have used that ω_p^2|Ω_e|/(2ω_UH^3)≲0.2 is always a small number. Then the wave energy u_2≃ u_1=ω_UH^2/ω_p^2, and u_3≃ω_p^2[1+2ω_UH^3/(ω_p^2|Ω_e|)]^2/(Ω_i|Ω_e|). Now that the magnetization factors are all finite, the inner products are simply (𝐤̂_̂1̂·𝔽_s,2^*𝐤̂_̂2̂) ≃γ_s,2^2sinθ_2, (𝐤̂_̂2̂·𝔽_s,1𝐤̂_̂1̂)≃γ_s,1^2sinθ_2,(𝐤̂_̂3̂·𝔽_s,2^*𝐤̂_̂2̂)≃cosθ_3cosθ_2+γ_2,s^2sinθ_3sinθ_2, and the three other inner products can be obtained by exchanging 2↔ 3. Again, the scattering is mostly due to electrons, for which γ_e,1^2≃γ_e,2^2≃ω_UH^2/ω_p^2 and γ_e,3^2≃-ω_3^2/Ω_e^2≪cosθ^2_s. Therefore, the dominant term comes from the second line of Eq. (<ref>), which gives the scattering strength Θ_e^-≃- ck_1Ω_iω_UH^5/(ω_3Ω_e^2ω_p^4). Substituting these results into formula (<ref>) and (<ref>), we immediately see that the normalized growth rate for forward scattering is|ℳ_L^-(ω_UH→ω_UH,Ω_i)|≃ω_p/4√(ω_UH|Ω_e|)(ω_3/Ω_i)^1/2,where ω_3=ω_b(θ_s)∼Ω_i can be obtained from Eq. (<ref>). Using the above result, we can also find the symmetric nearly backward scattering ℳ_L^+ by replacing the coefficient 1/4 with k_2^+/(2k_1). The symmetric nearly backward scattering channel has divergent k_2^+, and therefore can have very large growth rate in the absence damping.§ CONCLUSION AND DISCUSSIONIn summary, we solve the cold fluid-Maxwell system to second order in the multiscale perturbation series in the most general geometry (Sec. <ref>), where a discrete spectrum of waves interact in triplets through quadratic nonlinearities [Eq. (<ref>)]. Due to nonlinear interactions, three-wave scatterings change the envelopes of “on-shell" waves as they advect, as well as generate a spectrum of `off-shell" waves due to wave beating. The coupling of wave triplets are described by the scattering strength 𝐒_𝐪,𝐪' [Eq. (<ref>)], which includes the effects of the 𝐯_s1×𝐁_1 nonlinearity, the 𝐯_s1·∇_(0)𝐯_s1 nonlinearity, as well as the ∇_(0)·(n_s1𝐯_s1) nonlinearity . By introducing the forcing operator [Eq. (<ref>)], we manage to give a convenient formula [Eq. (<ref>)] for the three-wave scattering strength in the most general geometry.When there are only three resonant “on-shell" waves participating in the interaction (Sec. <ref>), the three scattering strengths [Eq. (<ref>)] are closely related to one another due to action conservation. The action conservation laws are manifested by the three-wave equations [Eqs. (<ref>)-(<ref>)], which describe how the amplitudes of waves evolve, regardless of the changes in their phases and polarizations. The three-wave equations contain one essential parameter, the coupling coefficient [Eq. (<ref>)], whose explicit formula is given in terms of the wave energy coefficient [Eq. (<ref>)] and the normalized scattering strength [Eq. (<ref>)]. The coupling coefficient contains five degrees of freedom, and can be readily evaluated once the participating waves and their geometry are specified.The general formula of the scattering strength becomes particularly transparent once we quantize the classical three-wave Lagrangian. Using the quantized Lagrangian [Eq. (<ref>)], all six terms of the scattering strength arise from a single cubic interaction ∝ P^i(∂_iA_j)J^j as six permutations of the Feynman diagrams [Eq. (<ref>)].We postulate that this form of the three-wave interaction is independent of the plasma model that one uses to calculate the linear response. 
In this paper, the linear response is calculated using the cold fluid model. More generally, the linear response may be calculated using the kinetic model or even quantum models. Then, using the relation between the S matrix element and the three-wave scattering strength [Eq. (<ref>)], the three-wave coupling coefficient may be directly computed without going through the perturbative solution of the equations. To demonstrate how to evaluate the cold fluid coupling coefficient, we give a set of examples where all three participating waves are either quasi-transverse (T) or quasi-longitudinal (L) (Sec. <ref>). As an experimental observable, we compute the growth rate of the three-wave decay instability [Eq. (<ref>)], which is proportional to the coupling coefficient when wave damping is ignored. For TTL decay (Sec. <ref>.<ref>), the scattering is due to the density perturbation of the L wave, and the normalized growth rate is conveniently given by formulas Eqs. (<ref>) and (<ref>). For LLL decay (Sec. <ref>.<ref>), the scattering is due to the density beating of the three L waves, and the normalized growth rate is given by the explicit formulas Eqs. (<ref>), (<ref>) and (<ref>). We evaluate these formulas numerically for the cases where the pump wave is either parallel or perpendicular to the magnetic field, while the decay waves propagate at arbitrary angles. To facilitate understanding of the angular dependences, we also find asymptotic expressions of the normalized growth rate in limiting cases. The above examples elucidate the previously unknown angular dependence of three-wave scattering when a strong magnetic field is present. In contrast to the unmagnetized case, backscattering is not necessarily the fastest growing instability in a magnetized plasma. For example, in the TTL scattering (Fig. <ref>,<ref>,<ref>), which happens when two lasers interact via a magnetic resonance, exact backscattering may be suppressed, while nearly perpendicular scattering may be enhanced. For another example, in the LLL scattering (Fig. <ref>,<ref>), which can happen when an electrostatic wave launched by antenna arrays decays to two other longitudinal waves, symmetric decays are usually favored whenever possible, but asymmetric decays can also be important at special angles. The above collisionless, cold-fluid results will need to be modified when kinetic or collisional effects become important. Besides wave damping [Eq. (<ref>)], a major modification comes from the alteration of the linear eigenmode structure, which will be constituted of Bernstein waves instead of hybrid waves. In addition, the weak-coupling results obtained in this paper will need to be modified when three-wave interactions become strong. This happens when wave amplitudes become nonperturbative, so that relativistic effects become non-negligible and the linear eigenmode structure becomes strongly distorted. Despite the above caveats, the importance of this work is twofold. First, the formulation we develop in this paper preserves the general mathematical structure, thereby enabling profound simplifications of the most general results, from which illuminating physical consequences can be extracted. Second, the uniform, collisionless, and cold fluid results we have obtained serve as the baseline for understanding the angular dependence of three-wave scattering in magnetized plasmas, which is important both for magnetic confinement devices and for laser-plasma interactions in magnetized environments. This research is supported by NNSA Grant No.
DE-NA0002948 and DOE Research Grant No. DE-AC02-09CH11466. § MULTISCALE PERTURBATIVE SOLUTION OF A SYSTEM OF ODES In Sec. <ref>, we use a multiscale expansion to solve a system of nonlinear hyperbolic partial differential equations. To facilitate understanding of the multiscale expansion, here we demonstrate how it can be successfully applied to the following system of ordinary differential equations, which are hyperbolic in the absence of perturbations: ẋ = y+ϵ f(x,y), ẏ = -x+ϵ g(x,y), where ẋ and ẏ denote the time derivatives of x(t) and y(t), respectively, f and g are some polynomials, and ϵ≪1 is a small parameter enabling us to find the perturbative solution. The above system of equations may be solved perturbatively using the expansions x(t) = x_0(t)+ϵ x_1(t)+ϵ^2x_2(t)+…, y(t) = y_0(t)+ϵ y_1(t)+ϵ^2y_2(t)+…. However, a naive perturbative solution using only the above expansions will fail due to nonlinearity: the notorious secular terms will arise, which increase monotonically in time and quickly render the perturbative solution invalid. To remove the secular terms, we need to also expand the time scales t = t_0+1/ϵ t_1+1/ϵ^2t_2+…, ∂_t = ∂_0+ϵ∂_1+ϵ^2∂_2+…, where one unit of the slow time scale t_n is worth 1/ϵ^n units of the fastest time scale t_0. By regarding the different time scales as independent variables, the total time derivative is expressed, using the chain rule, as the summation of the derivatives on each time scale, ∂_n:=∂/∂ t_n. Substituting expansions (<ref>)-(<ref>) into Eqs. (<ref>) and (<ref>) and collecting terms according to their order in ϵ, we obtain a series of equations. The ϵ^0-order equations are simply the equations for a simple harmonic oscillator: ∂_0x_0-y_0 = 0, ∂_0y_0+x_0 = 0. For real-valued x and y, the general solution is well known: x_0 = a_0e^it_0+c.c., y_0 = ia_0e^it_0+c.c., where c.c. stands for the complex conjugate, and the complex amplitude a_0=a_0(t_1,t_2,…) can be a function of the slow variables. If we truncate the solution at this order, then x and y oscillate harmonically with constant amplitude. On the other hand, if we move on to the next order, the perturbations ϵ f(x,y) and ϵ g(x,y) will in general cause the amplitude a_0 to vary on slow time scales, which will be described by higher order equations. The ϵ^1-order equations start to couple perturbations on different time scales: ∂_1x_0+∂_0x_1-y_1-f_0 = 0, ∂_1y_0+∂_0y_1+x_1-g_0 = 0, where f_0:=f(x_0,y_0) and g_0:=g(x_0,y_0), in which x_0 and y_0 are given by Eqs. (<ref>) and (<ref>). The above two equations contain three unknowns x_1, y_1, and ∂_1a_0. Therefore, we can use the extra degree of freedom to remove secular terms. To do that, let us first separate the variables x_1 and y_1 and rewrite the ϵ^1-order equations as ∂_0^2x_1+x_1+2∂_1y_0 = u_1, ∂_0^2y_1+y_1-2∂_1x_0 = v_1, where the source terms are u_1[a_0] := ∂_0f_0+g_0, v_1[a_0] := ∂_0g_0-f_0. Substituting x_0 and y_0 into the polynomials f and g, we can write f_0=∑_nf_0ne^int_0+c.c. and g_0=∑_ng_0ne^int_0+c.c., where f_0n and g_0n are some functionals of a_0. Then the source terms can be written similarly as u_1=∑_n u_1ne^int_0+c.c. and v_1=∑_n v_1ne^int_0+c.c., where u_1n=g_0n+inf_0n and v_1n=-f_0n+ing_0n. To solve the ϵ^1-order equations (<ref>) and (<ref>), we can match coefficients of the Fourier exponents and split the equations into two sets. The first set of equations governs how the amplitude a_0 evolves on the slow time scale t_1, and can be written as ∂_1x_0=-1/2(v_11e^it_0+c.c.) and ∂_1y_0=1/2(u_11e^it_0+c.c.).
These two equations are redundant, as can be seen from the relations between x_0 and y_0, as well as the definitions of u_11 and v_11. Both of these equations result in the same equation for a_0, which absorbs the secular term: ∂_1a_0=1/2(f_01-ig_01), where the right-hand side is some functional of a_0. This first-order ODE for a_0 can usually be integrated, after which a_0 is a known function of t_1. The other set of equations governs x_1 and y_1: ∂_0^2x_1+x_1 = ∑_n≠1u_1ne^int_0+c.c., ∂_0^2y_1+y_1 = ∑_n≠1v_1ne^int_0+c.c. Having removed the secular terms, the above equations are now secular-free, and can be readily solved by x_1=a_1e^it_0+∑_n≠1u_1n/(1-n^2)e^int_0+c.c., y_1=b_1e^it_0+∑_n≠1v_1n/(1-n^2)e^int_0+c.c. The amplitudes a_1 and b_1 are clearly related by the ϵ^1-order equations, which give b_1=ia_1-1/2(f_01+ig_01). Notice that in the perturbation series Eq. (<ref>), we can always redefine a_0+ϵ a_1→ a_0'. Hence it is sufficient to set the amplitude a_1=0. In this way, we obtain an x-majored solution, where the amplitude of e^it_0 for x is completely given by a_0, whereas the amplitude of e^it_0 for y is given by the summation b_0+ϵ b_1+…. Alternatively, by setting b_1=0, we can of course also obtain a y-majored solution, which we will not pursue here. For the three-wave scattering studied in this paper, it is sufficient to truncate at this order. The solution is then constituted of oscillations with slowly varying amplitudes. To show the general structure of the multiscale expansion, it is instructive to carry out the solution to the next order here. The ϵ^2-order equations are ∂_2x_0+∂_1x_1+∂_0x_2-y_2-f_1 = 0, ∂_2y_0+∂_1y_1+∂_0y_2+x_2-g_1 = 0, where f_1:=x_1∂_xf_0+y_1∂_yf_0 and g_1:=x_1∂_xg_0+y_1∂_yg_0. In the above two equations, there are three unknowns x_2, y_2, and ∂_2a_0. So again, we can use the extra degree of freedom to remove the secular terms. Separating the variables x_2 and y_2, we can rewrite the equations as ∂_0^2x_2+x_2+2∂_2y_0 = u_2, ∂_0^2y_2+y_2-2∂_2x_0 = v_2. Since we set a_1=0 for the x-majored solution, the source terms are functionals of a_0 only: u_2[a_0] := ∂_0f_1+g_1+∂_1^2x_0-2∂_1y_1-∂_1 f_0, v_2[a_0] := ∂_0g_1-f_1+∂_1^2y_0+2∂_1x_1-∂_1 g_0. Again, since f and g are polynomials, we can write f_1=∑_nf_1ne^int_0+c.c. and g_1=∑_ng_1ne^int_0+c.c. Then the source terms can be written similarly as u_2=∑_n u_2ne^int_0+c.c. and v_2=∑_n v_2ne^int_0+c.c., where v_21=iu_21=i∂_1^2a_0+ig_11-∂_1g_01-f_11, and for n≥2, we have u_2n=inf_1n-∂_1f_0n+g_1n-2∂_1v_1n/(1-n^2) and v_2n=ing_1n-∂_1g_0n+f_1n+2∂_1u_1n/(1-n^2). To solve the ϵ^2-order equations (<ref>) and (<ref>), we can use a similar procedure to split the equations into two sets. The first set of equations is again redundant, and can be written as a single equation governing how the amplitude a_0 evolves on the slow time scale t_2: ∂_2a_0=1/2(f_11-ig_11)-i/4∂_1(f_01+ig_01). Regarding t_1 as a parameter, the above equation is a first-order ODE for a_0(t_2), which can usually be integrated. The second set of equations is similar to Eqs. (<ref>) and (<ref>), with u_1n and v_1n replaced by u_2n and v_2n, respectively. The solutions to these secular-free equations are similar to Eqs. (<ref>) and (<ref>) with the order index “1" replaced by “2", in which the second-order amplitudes a_2 and b_2 are again related by the ϵ^2-order equations: b_2=ia_2-1/2(f_11+ig_11)-i/4∂_1(f_01+ig_01). To obtain the x-majored solution, we again set a_2 to zero.
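As a concrete illustration of the first-order recipe above, the following SymPy sketch extracts the Fourier coefficients f_01 and g_01 and prints the amplitude equation. The perturbation chosen here (f=0, g=-x+2x^3) is the nonlinear test case suggested at the end of this appendix, and the helper name fourier_coeff is ours.

```python
import sympy as sp

t0 = sp.symbols('t0', real=True)
a0 = sp.symbols('a0')                      # complex slow amplitude a_0(t_1)

# Zeroth-order solution: x0 = a0*exp(i t0) + c.c.,  y0 = i*a0*exp(i t0) + c.c.
x0 = a0*sp.exp(sp.I*t0) + sp.conjugate(a0)*sp.exp(-sp.I*t0)
y0 = sp.I*a0*sp.exp(sp.I*t0) - sp.I*sp.conjugate(a0)*sp.exp(-sp.I*t0)

# Perturbations f(x,y) and g(x,y); here the nonlinear test case f = 0, g = -x + 2x^3
f0 = sp.Integer(0)
g0 = -x0 + 2*x0**3

def fourier_coeff(expr, n):
    """Coefficient of exp(i n t0) in a trigonometric polynomial of t0."""
    integrand = sp.powsimp(sp.expand(expr)*sp.exp(-sp.I*n*t0), combine='exp')
    return sp.simplify(sp.integrate(integrand, (t0, 0, 2*sp.pi))/(2*sp.pi))

f01 = fourier_coeff(f0, 1)
g01 = fourier_coeff(g0, 1)

# First-order amplitude equation:  d a0/d t1 = (f01 - i*g01)/2
da0_dt1 = sp.simplify((f01 - sp.I*g01)/2)
print('d a0/d t1 =', da0_dt1)
# Expected (up to equivalent rewriting):  I*a0*(1 - 6*a0*conjugate(a0))/2,
# i.e. a purely amplitude-dependent frequency shift, so |a0| stays constant on the t1 scale.
```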
By the obvious analogy between the ϵ^1- and ϵ^2-order equations, the above procedures can be readily extended to higher order in the perturbation series.In summary, using the multiscale expansion Eqs. (<ref>)-(<ref>), we convert a system of ODEs (<ref>)-(<ref>) to a series of equations. The general solution for a system of hyperbolic ODEs is rapid oscillations with slowly varying amplitudes. The first order amplitude equation Eq. (<ref>) governs how the amplitude varies on t_1 time scale, and the higher order amplitude equations, such as Eq. (<ref>) governs how the amplitude evolves on even slower time scales. By summing up solutions on each order, which may include not only oscillations with fundamental frequency, but also higher harmonics such as Eqs. (<ref>) and (<ref>), we can obtain a perturbative solution to the system of ODEs, majored in any one of its variables.To see how the multiscale expansion work in practice, interested readers are encouraged to test it on the following two examples. The first is a linear example, where f(x,y)=-x and g(x,y)=0. The exact solution to this linear case can be easily obtained. The second is a nonlinear example, where f(x,y)=0 and g(x,y)=-x+2x^3. The exact solutions to this nonlinear case are the Jacobi elliptic functions. One can expand the exact solutions in ϵ, and check order by order that it matches the perturbative solution obtained using the multiscale expansion. § LINEAR WAVES IN COLD MAGNETIZED PLASMASIn Sec. <ref>.<ref>, we obtain the first order electric field equation (<ref>) in the momentum space. The solutions to this matrix equation give the linear eigenmodes of the cold fluid-Maxwell system. In this appendix, we review properties of the linear waves, in order to facilitate understanding of their scatterings discussed in this paper.To discuss properties of the linear waves, it is convenient to choose the coordinate system where the uniform magnetic field is in the z-direction. In this coordinate, the forcing operator Eq. (<ref>) has matrix representation𝔽_s,𝐤 = ( [γ^2_s,𝐤iβ_s,𝐤γ^2_s,𝐤0; -iβ_s,𝐤γ^2_s,𝐤γ^2_s,𝐤0;001 ]).Having fixed the z-axis, we can rotate the coordinate system, such that the wave vector 𝐤=(k_⊥,0,k_∥)=k(sinθ,0,cosθ), where θ is the angle between 𝐤 and 𝐛. In this coordinate system, the matrix representation of the dispersion tensor (<ref>) can be easily found.Then the first order electric field equation 𝔻_𝐤ℰ^(1)_𝐤/ω_𝐤^2=0 can be written as( [ S-n_∥^2 -iD n_⊥ n_∥;iD S-n^2 0; n_⊥ n_∥ 0 P-n_⊥^2 ]) ([ ℰ_x^(1); ℰ_y^(1); ℰ_z^(1) ])=0,where n=ck/ω is the refractive index, n_⊥=nsinθ, and n_∥=ncosθ are projections in the perpendicular and parallel directions. Following Stix's notations <cit.>, the components of the dielectric tensor areS = 1-∑_sω_ps^2/ω^2-Ω_s^2,D = ∑_sΩ_s/ωω_ps^2/ω^2-Ω_s^2,P = 1-∑_sω_ps^2/ω^2.In the above expressions, we have omitted the 𝐤-subscripts for both ω and ℰ^(1). The expressions for S and D can be simplified, using identities in quasi-neutral electron-ion plasma, in which n_e=Z_in_i, so Ω_iω_pe^2+Ω_eω_pi^2=0 and Ω_i^2ω_pe^2+Ω_e^2ω_pi^2+ω_p^2Ω_eΩ_i=0, where ω_p^2=∑_sω_ps^2 is the plasma frequency squared.The electric field equation (<ref>) has nontrivial solution if and only if the dispersion tensor is degenerate. This is equivalent to requiring the determinant of the dispersion tensor to be zero, which gives a constraint between ω and 𝐤, called the dispersion relation. 
In the above coordinate system, using Stix's notation, the dispersion relation can be written asAn^4-Bn^2+C=0,where the coefficients of the quadratic equation of n^2 areA = Ssin^2θ+Pcos^2θ,B = RLsin^2θ+PS(1+cos^2θ),C = PRL,which are functions of ω only, independent of the wave vector. In the above expressions, R=S+D and L=S-D are the right- and left-handed components of the dielectric tensor. The quadratic dispersion relation (<ref>) has two solutions n_±^2=(B± F)/(2A), where F^2=B^2-4AC=(RL-PS)^2sin^4θ+4P^2D^2cos^2θ. Since F^2≥0, we see the two solutions n_±^2 are both real. However, n_±^2 is not always positive, so each solution may contain many branches, emanating from cutoff frequencies ω_c, at which C(ω_c)=0 so that n^2=0. For example, in electron-ion plasma (Fig. <ref>a), the cutoff frequencies are at ω_R, ω_p, and ω_L, and the dispersion relation contains two electromagnetic-like branches, for which ω→ ck as k→∞, as well as three electrostatic-like branches, for which ω→ω_r as k→∞, where ω_r is some resonance frequencies. The resonance frequencies are asymptotic values of ω on electrostatic branches when k→∞. As the frequency approaches the resonance frequencies from the below, the refractive index n^2_±→∞, so we can find ω_r by solving A(ω_r)=0. In electron-ion plasma, this equation for resonance frequencies can be explicitly written as0 = ω_r^6-ω_r^4(ω_p^2+Ω_e^2+Ω_i^2)-ω_p^2Ω_e^2Ω_i^2cos^2θ+ ω_r^2[ω_p^2(Ω_e^2+Ω_i^2)cos^2θ-ω_p^2Ω_eΩ_isin^2θ+Ω_e^2Ω_i^2].The above cubic equation for ω_r^2 has three solutions (Fig. <ref>), which can be ordered from large to small as the upper (ω_u), lower (ω_l), and bottom (ω_b) resonance. When θ∼ 0 or π, the resonance frequencies approaches ω_p, |Ω_e|, and Ω_i. Keeping the next order angular dependence, the three resonance frequencies, when sinθ∼0, can be approximated byω_r^2/ω_p^2 ≃ 1-Ω_e^2sin^2θ/Ω_e^2(2-cos^2θ)-ω_p^2, ω_r^2/Ω_e^2 ≃ 1-ω_p^2sin^2θ/ω_p^2(2-cos^2θ)-Ω_e^2, ω_r^2/Ω_i^2 ≃ 1-Ω_i/|Ω_e|tan^2θ.In the other limit, when θ∼π/2, the resonance frequencies approaches the upper-hybrid frequency ω_UH, the lower hybrid frequency ω_LH, and 0. Keeping the next order angular dependence, the upper, lower, and bottom resonance frequencies, when cosθ∼ 0, can be approximated by ω_u^2/ω_UH^2 ≃ 1- ω_p^2Ω_e^2cos^2θ/(ω_p^2+Ω_e^2)^2+ω_p^2Ω_e^2cos^2θ,ω_l^2/ω_LH^2 ≃ 1+Ω_e^2cos^2θ/Ω_e^2cos^2θ+|Ω_e|Ω_i(1+cos^2θ), ω_b^2/Ω_i^2≃ |Ω_e|cos^2θ/Ω_i+|Ω_e|cos^2θ.The above asymptotic expressions for resonance frequency ω_r are extremely useful when we approximate the scattering strength and wave energy coefficients. When frequencies approaches resonances, the waves becomes longitudinal. On the other hand, the wave becomes transverse when frequencies approaches infinity. For intermediate frequencies, we can find the wave polarization by solving for eigenmodes of the electric field equation (<ref>).In the wave coordinate 𝐤̂, 𝐲̂, and 𝐤̂×𝐲̂, we can write ℰ_k=ℰcosϕ, ℰ_y=-iℰsinϕcosψ, and ℰ_×=ℰsinϕsinψ, where we have omitted the superscript of ℰ^(1). Then the polarization anglestanψ = Sn^2-RL/n^2Dcosθ,tanϕ = Pcosθ/(n^2-P)sinθsinψ. Notice that ℰ_×/ℰ_y=itanψ is imaginary. Therefore, the wave is elliptically polarized in general. Also notice that the polarization ray ℰ̂ is invariant under transformations (ϕ,ψ)→(ϕ±180^∘,ψ) and(ϕ,ψ)→(-ϕ,ψ±180^∘). Therefore, the polarization angles (Fig. <ref>b) can be interpreted up to these identity transformations. Finally, notice that ψ_± for the n^2_± solutions satisfies the identity tanψ_+tanψ_-=-1. 
Hence, polarizations of these two frequency-degenerate eigenmodes are always orthogonal in the transverse plane.

[1] R. P. H. Chang and M. Porkolab, Phys. Rev. Lett. 32, 1227 (1974).
[2] C. S. Liu and V. Tripathi, Phys. Rep. 130, 143 (1986).
[3] N. J. Fisch, Phys. Rev. Lett. 41, 873 (1978).
[4] N. J. Fisch, Rev. Mod. Phys. 59, 175 (1987).
[5] R. Cesario, A. Cardinali, C. Castaldo, F. Paoletti, W. Fundamenski, S. Hacquin, et al., Nucl. Fusion 46, 462 (2006).
[6] J. Myatt, H. Vu, D. DuBois, D. Russell, J. Zhang, R. Short, and A. Maximov, Phys. Plasmas 20, 052705 (2013).
[7] M. Hohenberger, P.-Y. Chang, G. Fiksel, J. Knauer, R. Betti, F. Marshall, D. Meyerhofer, F. Séguin, and R. Petrasso, Phys. Plasmas 19, 056306 (2012).
[8] S. A. Slutz and R. A. Vesey, Phys. Rev. Lett. 108, 025003 (2012).
[9] O. V. Gotchev, P. Y. Chang, J. P. Knauer, D. D. Meyerhofer, O. Polomarov, J. Frenje, C. K. Li, M. J.-E. Manuel, R. D. Petrasso, J. R. Rygg, F. H. Séguin, and R. Betti, Phys. Rev. Lett. 103, 215004 (2009).
[10] Y. Shi, H. Qin, and N. J. Fisch, Phys. Rev. E 95, 023211 (2017).
[11] R. Davidson, Methods in Nonlinear Plasma Theory (Elsevier, 1972).
[12] J. Weiland and H. Wilhelmsson, Coherent Non-linear Interaction of Waves in Plasmas (Pergamon Press, 1977).
[13] U. Wagner, M. Tatarakis, A. Gopal, F. Beg, E. Clark, A. Dangor, R. Evans, M. Haines, S. Mangles, P. Norreys, et al., Phys. Rev. E 70, 026401 (2004).
[14] J. Santos, M. Bailly-Grandvaux, L. Giuffrida, P. Forestier-Colleoni, S. Fujioka, Z. Zhang, P. Korneev, R. Bouillaud, S. Dorard, D. Batani, et al., New J. Phys. 17, 083051 (2015).
[15] S. Fujioka, Z. Zhang, K. Ishihara, K. Shigemori, Y. Hironaka, T. Johzaki, A. Sunahara, N. Yamamoto, H. Nakashima, T. Watanabe, et al., Sci. Rep. 3 (2013).
[16] A. Sjölund and L. Stenflo, Z. Phys. A 204, 211 (1967).
[17] B. K. Shivamoggi, Phys. Scripta 25, 637 (1982).
[18] C. Grebogi and C. Liu, J. Plasma Phys. 23, 147 (1980).
[19] H. C. Barr, T. J. M. Boyd, L. R. T. Gardner, and R. Rankin, Phys. Fluids 27, 2730 (1984).
[20] A. Vyas, R. K. Singh, and R. Sharma, Phys. Plasmas 23, 012107 (2016).
[21] H. Sanuki and G. Schmidt, J. Phys. Soc. Jpn. 42, 664 (1977).
[22] N. M. Laham, A. S. A. Nasser, and A. M. Khateeb, Phys. Scripta 57, 253 (1998).
[23] P. M. Platzman, P. A. Wolff, and N. Tzoar, Phys. Rev. 174, 489 (1968).
[24] S. Ram, Plasma Phys. 24, 885 (1982).
[25] T. J. M. Boyd and R. Rankin, J. Plasma Phys. 33, 303 (1985).
[26] L. Stenflo, J. Plasma Phys. 4, 585 (1970).
[27] L. Stenflo, Phys. Scripta 1994, 15 (1994).
[28] G. Brodin and L. Stenflo, Phys. Scripta 85, 035504 (2012).
[29] J. Larsson, L. Stenflo, and R. Tegeback, J. Plasma Phys. 16, 37 (1976).
[30] L. Stenflo, Phys. Scripta T107, 262 (2004).
[31] J. Galloway and H. Kim, J. Plasma Phys. 6, 53 (1971).
[32] T. Boyd and J. Turner, J. Math. Phys. 19, 1403 (1978).
[33] I. Y. Dodin and A. V. Arefiev, Phys. Plasmas 24, 032119 (2017).
[34] L. Debnath, Nonlinear Partial Differential Equations for Scientists and Engineers (Springer Science & Business Media, 2011).
[35] T. Stix, Waves in Plasmas (American Institute of Physics, 1992).
[36] A. Jurkus and P. Robson, Proc. IEE Part B: Electronic and Communication Engineering 107, 119 (1960).
[37] J. Armstrong, N. Bloembergen, J. Ducuing, and P. Pershan, Phys. Rev. 127, 1918 (1962).
[38] R. Harvey and G. Schmidt, Phys. Fluids 18, 1395 (1975).
[39] J. Armstrong, S. Jha, and N. Shiren, IEEE J. Quantum Electron. 6, 123 (1970).
[40] K. Nozaki and T. Taniuti, J. Phys. Soc. Jpn. 34, 796 (1973).
[41] Y. Ohsawa and K. Nozaki, J. Phys. Soc. Jpn. 36, 591 (1974).
[42] V. Zakharov and S. Manakov, Zh. Eksp. Teor. Fiz. 69, 1654 (1975).
[43] J. G. Turner and M. Baldwin, Phys. Scripta 37, 549 (1988).
[44] M. J. Ablowitz, D. J. Kaup, and A. C. Newell, Stud. Appl. Math. 53, 249 (1974).
[45] D. J. Kaup, A. Reiman, and A. Bers, Rev. Mod. Phys. 51, 275 (1979).
[46] Y. Shi, N. J. Fisch, and H. Qin, Phys. Rev. A 94, 012124 (2016).
"authors": [
"Yuan Shi",
"Hong Qin",
"Nathaniel J. Fisch"
],
"categories": [
"physics.plasm-ph"
],
"primary_category": "physics.plasm-ph",
"published": "20170527025035",
"title": "Three-wave scattering in magnetized plasmas: from cold fluid to quantized Lagrangian"
} |
Hidden symmetries in N-layer dielectric stacks Riichiro Saito^1 December 30, 2023 ============================================== We study the complexity of computing the VC Dimension and Littlestone's Dimension. Given an explicit description of a finite universe and a concept class (a binary matrix whose (x,C)-th entry is 1 iff element x belongs to concept C), both can be computed exactly in quasi-polynomial time (n^O(log n)). Assuming the randomized Exponential Time Hypothesis (ETH), we prove nearly matching lower bounds on the running time, that hold even for approximation algorithms. § INTRODUCTION A common and essential assumption in learning theory is that the concepts we want to learn come from a nice, simple concept class, or (in the agnostic case) they can at least be approximated by a concept from a simple class. When the concept class is sufficiently simple, there is hope for good (i.e. sample-efficient and low-error) learning algorithms. There are many different ways to measure the simplicity of a concept class. The most influential measure of simplicity is the VC Dimension, which captures learning in the PAC model. We also consider Littlestone's Dimension <cit.>, which corresponds to minimizing mistakes in online learning (see Section <ref> for definitions). When either dimension is small, there are algorithms that exploit the simplicity of the class, to obtain good learning guarantees.Two decades ago, it was shown (under appropriate computational complexity assumptions) that neither dimension can be computed in polynomial time <cit.>; and these impossibility results hold even in the most optimistic setting where the entire universe and concept class are given as explicit input (a binary matrix whose (x, C)-th entry is 1 iff element x belongs to concept C).The computational intractability of computing the (VC, Littlestone's) dimension of a concept class suggests that even in cases where a simple structure exists, it may be inaccessible to computationally bounded algorithms (see Discussion below).In this work we extend the results of <cit.> to show that the VC and Littlestone's Dimensions cannot even be approximately computed in polynomial time. We don't quite prove that those problems are -hard: both dimensions can be computed (exactly) in quasi-polynomial (n^O(log n)) time,hence it is very unlikely that either problem is -hard.Nevertheless, assuming the randomized Exponential Time Hypothesis (ETH) [The randomized ETH (rETH) postulates that there is no 2^o(n)-time Monte Carlo algorithms that solves 3 on n variables correctly with probability at least 2/3 (i.e. 3∉(2^o(n))).] <cit.>,we prove essentially tight quasi-polynomial lower bounds on the running time - that hold even against approximation algorithms.Assuming Randomized ETH, approximating VC Dimension to within a (1/2 + o(1))-factor requires n^log^1 - o(1)n time.There exists an absolute constant ε > 0 such that, assuming Randomized ETH, approximating Littlestone's Dimension to within a (1 - ε)-factor requires n^log^1 - o(1)n time.§.§ DiscussionAs we mentioned before, the computational intractability of computing the (VC, Littlestone's) dimension of a concept class suggests that even in cases where a simple structure exists, it may be inaccessible to computationally bounded algorithms. 
We note however that it is not at all clear that any particular algorithmic applications are immediately intractable as a consequence of our results.Consider for example the adversarial online learning zero-sum game corresponding to Littlestone's Dimension: At each iteration, Nature presents the learner with an element from the universe; the learner attempts to classify the element, and loses a point for every wrong classification; at the end of the iteration, the correct (binary) classification is revealed. The Littlestone's Dimension is equal to the worst case loss of the Learner before learning the exact concept. (see Section <ref> for a more detailed definition.)What can we learn from the fact that the Littlestone's Dimension is hard to compute? The first observation is that there is no efficient learner that can commit to a concrete mistake bound. But this does not rule out a computationally-efficient learner that plays optimal strategy and makes at most as many mistakes as the unbounded learner. We can, however, conclude that Nature's task is computationally intractable! Otherwise, we could efficiently construct an entire worst-case mistake tree (for a concept class C, any mistake tree has at most | C| leaves, requiring | C|-1 oracle calls to Nature).On a philosophical level, we think it is interesting to understand the implications of an intractable, adversarial Nature. Perhaps this is another evidence that the mistake bound model is too pessimistic?Also, the only algorithm we know for computing the optimal learner's decision requires computing the Littlestone's Dimension. We think that it is an interesting open question whether an approximately optimal computationally-efficient learner exists.In addition, let us note that in the other direction, computing Littlestone's Dimension exactly implies an exactly optimal learner. However, since the learner has to compute Littlestone's Dimension many times, we have no evidence that an approximation algorithm for Littlestone's Dimension would imply any guarantee for the learner.Finally, we remark that for either problem (VC or Littlestone's Dimension), we are not aware of any non-trivial approximation algorithms.§.§ TechniquesThe starting point of our reduction is the framework of “birthday repetition” <cit.>. This framework has seen many variations in the last few years, but the high level approach is as follows: begin with a hard-to-approximate instance of a 2CSP (such as 3-Color), and partition the vertices into √(n)-tuples.On one hand, by the birthday paradox, even if the original graph is sparse, we expect each pair of random √(n)-tuples to share an edge; this is crucial for showing hardness of approximation in many applications. On the other hand our reduction size is now approximately N ≈ 2^√(n) (there are 3^√(n) ways to color each √(n)-tuple), whereas by ETH solving 3-Color requires approximately T(n) ≈ 2^n time, so solving the larger problem also takes at least T(n) ≈ N^log N time. VC Dimension The first challenge we have to overcome in order to adapt this framework to hardness of approximation of VC Dimension is that the number of concepts involved in shattering a subset S is 2^|S|.Therefore any inapproximability factor we prove on the size of the shattered set of elements, “goes in the exponent” of the size of the shattering set of concepts. 
Even a small constant factor gap in the VC Dimension requires proving a polynomial factor gap in the number of shattering concepts (obtaining polynomial gaps via “birthday repetition” for simpler problems is an interesting open problem <cit.>). Fortunately, having a large number of concepts is also an advantage: we use each concept to test a different set of 3-Color constraints chosen independently at random; if the original instance is far from satisfied, the probability of passing all 2^Θ(|S|) tests should now be doubly-exponentially small (2^-2^Θ(|S|))! More concretely, we think of half of the elements in the shattered set as encoding an assignment, and the other half as encoding which tests to run on the assignments.Littlestone's Dimension Our starting point is the reduction for VC Dimension outlined in the previous paragraph. While we haven't yet formally introduced Littlestone's Dimension, recall that it corresponds to an online learning model. If the test-selection elements arrive before the assignment-encoding elements, the adversary can adaptively tailor his assignment to pass the specific test selected in the previous steps. To overcome this obstacle, we introduce a special gadget that forces the assignment-encoding elements to arrive first; this makes the reduction to Littlestone's Dimension somewhat more involved.Note that there is a reduction by <cit.> from VC Dimension to Littlestone's Dimension.Unfortunately, their reduction is not (approximately) gap-preserving, so we cannot use it directly to obtain Theorem <ref> from Theorem <ref>. §.§ Related WorkThe study of the computational complexity of the VC Dimension was initiated by Linial, Mansour, and Rivest <cit.>, who observed that it can be computed in quasi-polynomial time. <cit.> proved that it is complete for the class which they define in the same paper. <cit.> reduced the problem of computing the VC dimension to that of computing Littlestone's Dimension, hence the latter is also -hard. (It follows as a corollary of our Theorem <ref> that, assuming ETH, solving any -hard problem requires quasi-polynomial time.)Both problems were also studied in an implicit model,where the concept class is given in the form of a Boolean circuit that takes as input an element x and a concept c and returns 1 iff x ∈ c. Observe that in this model even computing whether either dimension is 0 or not is already -hard. Schafer proved that the VC Dimension is Σ_3^-complete <cit.>, while the Littlestone's Dimension is -complete <cit.>. <cit.> proved that VC Dimension is Σ_3^-hard to approximate to within a factor of almost 2; can be approximated to within a factor slightly better than 2 in ; and is -hard to approximate to within n^1-ε.Another line of related work in the implicit model proves computational intractability of PAC learning (which corresponds to the VC Dimension).Such intractability has been proved either from cryptographic assumptions, e.g. <cit.> or from average case assumptions, e.g. <cit.>. 
<cit.> showed a “computational” separation between PAC learning and online mistake bound (which correspond to the VC Dimension and Littlestone's Dimension, respectively):if one-way function exist, then there is a concept class that can be learned by a computationally-bounded learner in the PAC model, but not in the mistake-bound model.Recently, <cit.> introduced a generalization of VC Dimension which they call Partial VC Dimension, and proved that it is -hard to approximate (even when given an explicit description of the universe and concept class). Our work is also related to many other quasi-polynomial lower bounds from recent years, which were also inspired by “birthday repetition”; these include problems like Densest k-Subgraph <cit.>, Nash Equilibrium and related problems <cit.> and Community Detection <cit.>. It is interesting to note that so far “birthday repetition” has found very different applications, but they all share essentially the same quasi-polynomial algorithm: The bottleneck in those problem is a bilinear optimization problem max_u,v u^⊤Av, which we want to approximate to within a (small) constant additive factor. It suffices to find an O(log n)-sparse sample v̂ of the optimal v^*; the algorithm enumerates over all sparse v̂'s <cit.>. In contrast, the problems we consider in this paper have completely different quasi-polynomial time algorithms: For VC Dimension, it suffices to simply enumerate over all log||-tuples of elements (wheredenotes the concept class and log|| is the trivial upper bound on the VC dimension) <cit.>. Littlestone's Dimension can be computed in quasi-polynomial time via a recursive “divide and conquer” algorithm (See Appendix <ref>).§ PRELIMINARIESFor a universe (or ground set) , a concept C is simply a subset ofand a concept classis a collection of concepts. For convenience, we sometimes relax the definition and allow the concepts to not be subsets of ; all definitions here extend naturally to this case.The VC and Littlestone's Dimensions can be defined as follows. A subset S ⊆ is said to be shattered by a concept classif, for every T ⊆ S, there exists a concept C ∈ such that T = S ∩ C.The VC Dimension (, ) of a concept classwith respect to the universeis the largest d such that there exists a subset S ⊆ of size d that is shattered by .A depth-d instance-labeled tree ofis a full binary tree of depth d such that every internal node of the tree is assigned an element of . For convenience, we will identify each node in the tree canonically by a binary string s of length at most d.A depth-d mistake tree (aka shattered tree <cit.>) for a universeand a concept classis a depth-d instance-labeled tree ofsuch that, if we let v_s ∈ denote the element assigned to the vertex s for every s ∈{0, 1}^< d, then, for every leaf ℓ∈{0, 1}^d, there exists a concept C ∈ that agrees with the path from root to it, i.e., that, for every i < d, v_ℓ_ i∈ C iff ℓ_i + 1 = 1 where ℓ_ i denote the prefix of ℓ of length i.The Littlestone's Dimension (, ) of a concept classwith respect to the universeis defined as the maximum d such that there exists a depth-d mistake tree for ,. An equivalent formulation of Littlestone's Dimension is through mistakes made in online learning, as stated below. This interpretation will be useful in our proof. An online algorithmis an algorithm that, at time step i, is given an element x_i ∈ and the algorithm outputs a prediction p_i ∈{0, 1} whether x is in the class. After the prediction, the algorithm is told the correct answer h_i ∈{0, 1}. 
For a sequence (x_1, h_1), …, (x_n, h_n), prediction mistake ofis defined as the number of incorect predictions, i.e., ∑_i ∈ n1[p_ih_i]. The mistake bound offor a concept classis defined as the maximum prediction mistake ofover all the sequences (x_1, h_1), …, (x_n, h_n) which corresponds to a concept C ∈ (i.e. h_i = 1[x_i ∈ C] for all i ∈ [n]). For any universeand any concept class , (, ) is equal to the minimum mistake bound of , over all online algorithms.The following facts are well-know and follow easily from the above definitions.For any universeand concept class , we have(, ) (, ) log ||.For any two universes _1, _2 and any concept class ,(, _1 ∪_2) (, _1) + (, _2).§.§ Label Cover and PCP As is standard in hardness of approximation, the starting point for our reductions will be the following problem called Label Cover.A Label Cover instance = (A, B, E, Σ, {π_e}_e ∈ E) consists of a bipartite graph (A, B, E), an alphabet Σ, and, for every edge (a, b) ∈ E, a projection constraint π_(a, b): Σ→Σ.An assignment (aka labeling) foris a function ϕ: A ∪ B →Σ. The value of ϕ, _(ϕ) is defined as the fraction of edges (a, b) ∈ E such that π_(a, b)(ϕ(a)) = ϕ(b); these edges are called satisfied edges. The value of the instance , (), is defined as the maximum value among all assignments ϕ: A ∪ B →Σ. Throughout the paper, we often encounter an assignment that only labels a subset of A ∪ B but leaves the rest unlabeled. We refer to such assignment as a partial assignment to an instance; more specifically, for any V ⊆ A ∪ B, a V-partial assignment (or partial assignment on V) is a function ϕ: V →Σ. For notational convenience, we sometimes write Σ^V to denote the set of all functions from V to Σ.We will use the following version of the PCP Theorem by Moshkovitz and Raz, which reduces 3SAT to the gap version of Label Cover while preserves the size to be almost linear.For every n and every ν = ν(n) > 0, solving 3SAT on n variables can be reduced to distinguishing between the case that a bi-regular instance of Label Cover with |A|, |B|, |E| = n^1 + o(1)(1/ν) and |Σ| = 2^(1/ν) is satisfiable and the case that its value is at most ν.§.§ Useful Lemmata We end this section by listing a couple of lemmata that will be useful in our proofs.Let X_1, …, X_n be i.i.d. random variables taking value from {0, 1} and let p be the probability that X_i = 1, then, for any δ > 0, we have[∑_i=1^n X_i(1 + δ) np]2^-δ^2np/3 if δ < 1, 2^-δ np/3 otherwise.For any bi-regular bipartite graph G = (A, B, E), let n = |A| + |B| and r = √(n)/log n. When n is sufficiently large, there exists a partition of A ∪ B into U_1, …, U_r such that∀ i ∈ [r], n/2r |U_i| 2n/rand∀ i, j ∈ [r], |E|/2r^2 |(U_i × U_j) ∩ E|, |(U_j × U_i) ∩ E| 2|E|/r^2.Moreover, such partition can be found in randomized linear time (alternatively, deterministic n^O(log n) time).§ INAPPROXIMABILITY OF VC DIMENSIONIn this section, we present our reduction from Label Cover to VC Dimension, stated more formally below. We note that this reduction, together with Moshkovitz-Raz PCP (Theorem <ref>), with parameter δ =1/log n gives a reduction from 3SAT on n variables to VC Dimension of size 2^n^1/2 + o(1) with gap 1/2 + o(1), which immediately implies Theorem <ref>.For every δ > 0, there exists a randomized reduction from a bi-regular Label Cover instance = (A, B, E, Σ, {π_e}_e ∈ E) such that |Σ| = O_δ(1) to a ground setand a concept classsuch that, if n ≜ |A| + |B| and r ≜√(n) / log n, then the following conditions hold for every sufficiently large n. 
* (Size) The reduction runs in time |Σ|^O(|E|(1/δ)/r) and ||, |||Σ|^O(|E|(1/δ)/r).* (Completeness) Ifis satisfiable, then (, )2r.* (Soundness) If () δ^2 / 100, then (, )(1 + δ)r with high probability.In fact, the above properties hold with high probability even when δ and |Σ| are not constants, as long as δlog(1000nlog |Σ|) / r. We remark here that when δ = 1 / log n, Moshkovitz-Raz PCP produces a Label Cover instance with |A| = n^1 + o(1), |B| = n^1 + o(1) and |Σ| = 2^(n). For such parameters, the condition δlog(1000nlog |Σ|) / r holds for every sufficiently large n. §.§ A Candidate Reduction (and Why It Fails) To best understand the intuition behind our reduction, we first describe a simpler candidate reduction and explain why it fails, which will lead us to the eventual construction. In this candidate reduction, we start by evoking Lemma <ref> to partition the vertices A ∪ B of the Label Cover instance = (A, B, E, Σ, {π_e}_e ∈ E) into U_1, …, U_r where r = √(n)/log n. We then create the universeand the concept classas follows: * We make each element incorrespond to a partial assignment to U_i for some i ∈ [r], i.e., we let = {x_i, σ_i| i ∈ [r], σ_i ∈Σ^U_i}. In the completeness case, we expect to shatter the set of size r that corresponds to a satisfying assignment σ^* ∈Σ^A ∪ B of the Label Cover instance , i.e., {x_i, σ^*|_U_i| i ∈ [r]}. As for the soundness, our hope is that, if a large set S ⊆ gets shattered, then we will be able to decode an assignment forthat satisfies many constraints, which contradicts with our assumption that () is small. Note that the number of elements ofin this candidate reduction is at most r · |Σ|^O(|E|(1/δ)r) = 2^(√(n)) as desired.* As stated above, the intended solution for the completeness case is {x_i, σ^*|_U_i| i ∈ [r]}, meaning that we must have at least one concept corresponding to each subset I ⊆ [r]. We will try to make our concepts “test” the assignment; for each I ⊆ [r], we will choose a set T_I ⊆ A ∪ B of (√(n)) vertices and “test” all the constraints within T_I. Before we specify how T_I is picked, let us elaborate what “test” means: for each T_I-partial assignment ϕ_I that does not violate any constraints within T_I, we create a concept C_I, ϕ_I. This concept contains x_i, σ_i if and only if i ∈ I and σ_i agrees with ϕ_I (i.e. ϕ_I|_T_I ∩ U_i = σ_i|_T_I ∩ U_i). Recall that, if a set S ⊆ is shattered, then each ⊆ S is an intersection between S and C_I, ϕ_I for some I, ϕ_I. We hope that the I's are different for differentso that many different tests have been performed on S.Finally, let us specify how we pick T_I. Assume without loss of generality that r is even. We randomly pick a perfect matching between r, i.e., we pick a random permutation π_I: [r] → [r] and let (π_I(1), π_I(2)), …, (π_I(r - 1), π_I(r)) be the chosen matching. We pick T_I such that all the constraints in the matchings, i.e., constraints between U_π_I(2i - 1) and U_π_I(2i) for every i ∈ [r/2], are included. More specifically, for every i ∈ [r], we include each vertex v ∈ U_π_I(2i - 1) if at least one of its neighbors lie in U_π_I(2i) and we include each vertex u ∈ U_π_I(2i) if at least one of its neighbors lie in U_π_I(2i - 1). By Lemma <ref>, for every pair in the matching the size of the intersection is at most 2|E|/r^2, so each concept contains assignments to at most 2|E|/r variables; so the total size of the concept class is at most 2^r · |Σ|^2|E|/r. Even though the above reduction has the desired size and completeness, it unfortunately fails in the soundness. 
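Before turning to the counterexample, the following minimal Python sketch illustrates the matching-based test selection just described. The data representation (blocks as sets of vertices, the constraint graph as a set of edges) and all names are our own simplifications for illustration and are not part of the formal reduction; the same mechanism, with ℓ independent matchings per seed, reappears in the final reduction below.

```python
import random

def test_set_from_random_matching(blocks, edges):
    """Pick a random perfect matching of the blocks U_1, ..., U_r (r even) and
    return the test set T: every vertex with at least one neighbour in the
    block matched to its own block."""
    r = len(blocks)
    order = list(range(r))
    random.shuffle(order)  # the random permutation pi_I defining the matching
    neighbours = {}
    for u, v in edges:
        neighbours.setdefault(u, set()).add(v)
        neighbours.setdefault(v, set()).add(u)
    test_set = set()
    for t in range(0, r, 2):  # matched pair (pi_I(2i-1), pi_I(2i))
        a, b = blocks[order[t]], blocks[order[t + 1]]
        test_set |= {u for u in a if neighbours.get(u, set()) & b}
        test_set |= {v for v in b if neighbours.get(v, set()) & a}
    return test_set
```

This is the set T_I whose size is bounded via Lemma <ref> in the size analysis above.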
Let us now sketch a counterexample. For simplicity, let us assume that each vertex in T_[r] has a unique neighbor in T_[r]. Note that, since T_[r] has quite small size (only (√(n))), almost all the vertices in T_[r] satisfy this property w.h.p., but assuming that all of them satisfy this property makes our life easier.Pick an assignment ∈Σ^V such that none of the constraints in T_[r] is violated. From our unique neighbor assumption, there is always such an assignment. Now, we claim that the set S_≜{x_i, |_U_i| i ∈ [r]} gets shattered. This is because, for every subset I ⊆ [r], we can pick another assignment σ' such that σ' does not violate any constraint in T_[r] and σ'|_U_i = |_U_i if and only if i ∈ I. This implies that {x_i, |_U_i| i ∈ I} = S ∩ C_[r], σ' as desired. Note here that such σ' exists because, for every i ∉ I, if there is a constraint from a vertex a ∈ U_i ∩ A to another vertex b ∈ T_[r]∩ B, then we can change the assignment to a in such a way that the constraint is not violated[Here we assume that |π_(a, b)^-1((b))| > 1; note that this always holds for Label Cover instances produced by Moshkovitz-Raz construction.]; by doing this for every i ∉ I, we have created the desired σ'. As a result, (, ) can still be as large as r even when the value ofis small. §.§ The Final Reduction In this subsection, we will describe the actual reduction. To do so, let us first take a closer look at the issue with the above candidate reduction. In the candidate reduction, we can view each I ⊆ [r] as being a seed used to pick a matching. Our hope was that many seeds participate in shattering some set S, and that this means that S corresponds to an assignment of high value. However, the counterexample showed that in fact only one seed (I = [r]) is enough to shatter a set. To circumvent this issue, we will not use the subset I as our seed anymore. Instead, we create r new elements y_1, …, y_r, which we will call test selection elements to act as seeds; namely, each subset H ⊆ will now be a seed. The benefit of this is that, if S ⊆ is shattered and contains test selection elements y_i_1, …, y_i_t, then at least 2^t seeds must participate in the shattering of S. This is because, for each H ⊆, the intersection of S with any concept corresponding to H, when restricted to , is always H ∩{y_i_1, …, y_i_t}. Hence, each subset of {y_i_1, …, y_i_t} must come a from different seed.The only other change from the candidate reduction is that each H will test multiple matchings rather than one matching. This is due to a technical reason: we need the number of matchings, ℓ, to be large in order get the approximation ratio down to 1/2 + o(1); in our proof, if ℓ = 1, then we can only achieve a factor of 1 - ε to some ε > 0. The full details of the reduction are shown in Figure <ref>.Before we proceed to the proof, let us define some additional notation that will be used throughout. * Every assignment element of the form x_i, σ_i is called an i-assignment element; we denote the set of all i-assignment elements by _i, i.e., _i = {x_i, σ_i|σ_i ∈Σ^U_i}. Letdenote all the assignment elements, i.e., = ⋃_i _i.* For every S ⊆, let I(S) denote the set of all i ∈ [r] such that S contains an i-assignment element, i.e., I(S) = {i ∈ [r] | S ∩_i ∅}.* We call a set S ⊆ non-repetitive if, for each i ∈ [r], S contains at most one i-assignment element, i.e., |S ∩_i|1. Each non-repetitive set S canonically induces a partial assignment ϕ(S): ⋃_i ∈ I(S) U_i →Σ. 
This is the unique partial assignment that satisfies ϕ(S)|_U_i = σ_i for every x_i, σ_i∈ S* Even though we define each concept as C_I, H, σ_H where σ_H is a partial assignment to a subset T_H ⊆ A ∪ B,it will be more convenient to view each concept as C_I, H, σ where σ∈Σ^V is the assignment to the entire Label Cover instance. This is just a notational change: the actual definition of the concept does not depend on the assignment outside T_H. * For each I ⊆ [r], let U_I denote ⋃_i ∈ I U_i. For each σ_I ∈Σ^U_I, we say that (I, σ_I) passes H ⊆ if σ_I does not violate any constraint within T_H. Denote the collection of H's that (I, σ_I) passes by (I, σ_I).* Finally, for any non-repetitive set S ⊆ and any H ⊆, we say that S passes H if (I(S), ϕ(S)) passes H. We write (S) as a shorthand for (I(S), ϕ(S)). The output size of the reduction and the completeness follow almost immediately from definition.Output Size of the Reduction. Clearly, the size ofis ∑_i ∈ [r] |Σ|^|U_i| r · |Σ|^n/r |Σ|^O(|E|(1/δ)/r). As for ||, note first that the number of choices for I and H are both 2^r. For fixed I and H, Lemma <ref> implies that, for each matching π_H^(t), the number of vertices from each U_i with at least one constraint to the matched partition in π_H^(t) is at most O(|E|/r^2). Since there are ℓ matchings, the number of vertices in T_H = _1(M_H(1)) ∪⋯∪_r(M_H(r)) is at most O(|E|ℓ/r). Hence, the number of choices for the partial assignment σ_H is at most |Σ|^O(|E|(1/δ)/r). In total, we can conclude thatcontains at most |Σ|^O(|E|(1/δ)/r) concepts. Completeness. Ifhas a satisfying assignment σ^* ∈Σ^V, then the set S_σ^* = {x_i, σ^*|_U_i| i ∈ [r]}∪ is shattered because, for any S ⊆ S_σ^*, we have S = S_σ^*∩ C_I(S), S ∩, σ^*. Hence, (, )2r.The rest of this section is devoted to the soundness analysis. §.§ Soundness In this subsection, we will prove the following lemma, which, combined with the completeness and output size arguments above, imply Theorem <ref>.Let (, ) be the output from the reduction in Figure <ref> on input . If () δ^2 / 100 and δlog(1000nlog |Σ|) / r, then (, )(1 + δ)r w.h.p. At a high level, the proof of Lemma <ref> has two steps: * Given a shattered set S ⊆, we extract a maximal non-repetitive set ⊆ S such thatpasses many (2^|S| - ||) H's. If || is small, the trivial upper bound of 2^r on the number of different H's implies that |S| is also small. As a result, we are left to deal with the case that || is large.* When || is large,induces a partial assignment on a large fraction of vertices of . Since we assume that () is small, this partial assignment must violate many constraints.We will use this fact to argue that, with high probability,only passes very few H's, which implies that |S| must be small. The two parts of the proof are presented in Subsection <ref> and <ref> respectively. We then combine them in Subsection <ref> to prove Lemma <ref>.§.§.§ Part I: Finding a Non-Repetitive Set That Passes Many TestsThe goal of this subsection is to prove the following lemma, which allows us to, given a shattered set S ⊆, find a non-repetitive setthat passes many H's.For any shattered S ⊆, there is a non-repetitive setof size |I(S)| |()|2^|S| - |I(S)|. We will start by proving the following lemma, which will be a basis for the proof of Lemma <ref>. Let C, C' ∈ correspond to the same H (i.e. C = C_I, H, σ and C' = C_I', H, σ' for some H ⊆, I, I' ⊆ [r], σ, σ' ∈Σ^V).For any subset S ⊆ and any maximal non-repetitive subset ⊆ S, if ⊆ C and ⊆ C', then S ∩ C = S ∩ C'. 
The most intuitive interpretation of this lemma is as follows. Recall that if S is shattered, then, for each ⊆ S, there must be a concept C_I_, H_, σ_ such that = S ∩ C_I_, H_, σ_. The above lemma implies that, for each ⊇, H_ must be different. This means that at least 2^|S| - || different H's must be involved in shattering S. Indeed, this will be the argument we use when we prove Lemma <ref>.[Lemma <ref>] Let S, be as in the lemma statement. Suppose for the sake of contradiction that there existsH ⊆, I, I' ⊆ [r], σ, σ' ∈Σ^V such that ⊆C_I, H, σ, ⊆C_I', H, σ' and S ∩ C_I, H, σ S ∩ C_I', H, σ'.First, note that S ∩ C_I, H, σ∩ = S ∩ H ∩ = S ∩ C_I', H, σ'∩. Since S ∩ C_I, H, σ S ∩ C_I', H, σ', we must have S ∩ C_I, H, σ∩ S ∩ C_I', H, σ'∩. Assume w.l.o.g. that there exists x_i, σ_i∈ (S ∩ C_I, H, σ) ∖ ( S ∩ C_I', H, σ').Note that i ∈ I(S) = I() (where the equality follows from maximality of ). Thus there exists σ'_i ∈Σ^U_i such that x_i, σ'_i∈⊆ C_I, H, σ∩C_I', H, σ'. Since x_i, σ'_i is in both C_I, H, σ and C_I', H, σ', we have i ∈ I ∩ I' andσ|__i(M_H(i)) = σ'_i|__i(M_H(i)) = σ'|__i(M_H(i)).However, since x_i, σ_i∈ (S ∩ C_I, H, σ) ∖ ( S ∩ C_I', H, σ'), we have x_i, σ_i∈ C_I, H, σ∖ C_I', H, σ'. This implies thatσ|__i(M_H(i)) = σ_i|__i(M_H(i))σ'|__i(M_H(i)),which contradicts to (<ref>). In addition to the above lemma, we will also need the following observation, which states that, if a non-repetitiveis contained in a concept C_I, H, σ_H, thenmust pass H. This observation follows definitions.If a non-repetitive setis a subset of some concept C_I, H, σ_H, then H ∈(). With Lemma <ref> and Observation <ref> ready, it is now easy to prove Lemma <ref>.[Lemma <ref>] Pickto be any maximal non-repetitive subset of S. Clearly, || = |I(S)|. To see that |()|2^|S| - |I(S)|, consider anysuch that ⊆⊆ S. Since S is shattered, there exists I_, H_, σ_ such that S ∩ C_I_ , H_ , σ_ =. Since ⊇, Observation <ref> implies that H_∈(). Moreover, from Lemma <ref>, H_ is distinct for every . As a result, |()|2^|S| - |I(S)| as desired. §.§.§ Part II: No Large Non-Repetitive Set Passes Many TestsThe goal of this subsection is to show that, if () is small, then w.h.p. (over the randomness in the construction) every large non-repetitive set passes only few H's. This is formalized as Lemma <ref> below.If () δ^2/100 and δ 8 / r, then, with high probability, for every non-repetitive setof size at least δ r, |()|100n log |Σ|. Note that the mapping ↦ (I(), ϕ()) is a bijection from the collection of all non-repetitive sets to {(I, σ_I) | I ⊆ [r], σ_I ∈Σ^U_I}. Hence, the above lemma is equivalent to the following.If () δ^2/100 and δ8 / r, then, with high probability, for every I ⊆ [r] of size at least δ r and every σ_I ∈Σ^U_I, |(I, σ_I)|100n log |Σ|. Here we use the language in Lemma <ref> instead of Lemma <ref> as it will be easier for us to reuse this lemma later. To prove the lemma, we first need to bound the probability that each assignment σ_I does not violate any constraint induced by a random matching. More precisely, we will prove the following lemma.For any I ⊆ [r] of size at least δ r and any σ_I ∈Σ^U_I, if π: [r] → [r] is a random permutation of [r], then the probability that σ_I does not violate any constraint in ⋃_i ∈ [r]_i(M(i)) is at most (1 - 0.1δ^2)^δ r / 8 where M(i) denote the index that i is matched with in the matching (π(1), π(2)), …, (π(r - 1), π(r)).Let p be any positive odd integer such that p δ r / 2 and let i_1, …, i_p - 1∈ [r] be any p - 1 distinct elements of [r]. 
We will first show that conditioned on π(1) = i_1, …, π(p - 1) = i_p - 1, the probability that σ_I violates a constraint induced by π(p), π(p + 1) (i.e. in _π(p)(π(p + 1)) ∪_π(p + 1)(π(p))) is at least 0.1δ^2.To see that this is true, let I_ p = I ∖{i_1, …, i_p - 1}. Since |I| δ r, we have |I_ p| = |I| - p + 1 δ r / 2 + 1. Consider the partial assignment σ_ p = σ_I|_U_I_ p. Since ()0.01δ^2, σ_ p can satisfy at most 0.01δ^2 |E| constraints. From Lemma <ref>, we have, for every ij ∈ I_ p, the number of constraints between U_i and U_j are at least |E|/r^2. Hence, there are at most 0.01δ^2r^2 pairs of i < j ∈ I_ p such that σ_ p does not violate any constraint between U_i and U_j. In other words, there are at least |I_ p|2 - 0.01δ^2r^20.1δ^2r^2 pairs i < j ∈ I_ p such that σ_ p violates some constraints between U_i and U_j. Now, if π(p) = i and π(p + 1) = j for some such pair i, j, then ϕ() violates a constraint induced by π(p), π(p + 1). Thus, we have[σ_Idoes not violate a constraint induced by π(p), π(p + 1) ⋀_t=1^p - 1π(t) = i_t]1 - 0.1δ^2. Let E_p denote the event that σ_I does not violate any constraints induced by π(p) and π(p + 1). We can now bound the desired probability as follows.[σ_Idoes not violate any constraint in ⋃_i ∈ [r]_i(M(i))][⋀_oddp ∈ [δ r / 2 + 1] E_p] = ∏_oddp ∈ [δ r / 2 + 1][E_p ⋀_oddt ∈ [p - 1] E_t ] (From(<ref>))∏_oddp ∈ [δ r / 2 + 1] (1 - 0.1δ^2)(1 - 0.1δ^2)^δ r / 4 - 1,which is at most (1 - 0.1δ^2)^δ r / 8 since δ 8/r. We can now prove our main lemma.[Lemma <ref>] For a fixed I ⊆ [r] of size at least δ r and a fixed σ_I ∈Σ^U_I, Lemma <ref> tells us that the probability that σ_I does not violate any constraint induced by a single matching is at most (1 - 0.1δ^2)^δ r/8. Since for each H ⊆ the construction picks ℓ matchings at random, the probability that (I, σ_I) passes each H is at most (1 - 0.1δ^2)^δℓ r/8. Recall that we pick ℓ = 80/δ^3; this gives the following upper bound on the probability:[(I, σ_I)passesH] ≤(1 - 0.1δ^2)^δℓ r/8 = (1 - 0.1δ^2)^10r/δ^2 (1/1 + 0.1δ^2)^10r/δ^2 2^-rwhere the last inequality comes from Bernoulli's inequality.Inequality (<ref>) implies that the expected number of H's that (I, σ_I) passes is less than 1. Since the matchings M_H are independent for all H's, we can apply Chernoff bound which implies that[|(I, σ_I)|100n log |Σ|]2^-10n log |Σ| = |Σ|^-10 n. Finally, note that there are at most 2^r |Σ|^n different (I, σ_I)'s. By union bound, we have[∃ I ⊆ [r], σ_I ∈Σ^U_Is.t. |I| δ r AND |(I, σ_I)|100n log |Σ| ] (2^r |Σ|^n)(|Σ|^-10n)|Σ|^-8n,which concludes the proof. §.§.§ Putting Things Together[Lemma <ref>] From Lemma <ref>, every non-repetitive setof size at least δ r, |()|100 n log |Σ|. Conditioned on this event happening, we will show that (, )(1 + δ) r. Consider any shattered set S ⊆. Lemma <ref> implies that there is a non-repetitive setof size |I(S)| such that |()|2^|S| - |I(S)|. Let us consider two cases: * |I(S)| δ r. Since () ⊆(), we have |S| - |I(S)||| = r. This implies that |S|(1 + δ)r.* |I(S)| > δ r. From our assumption, |()|100 n log |Σ|. Thus, |S||I(S)| + log(100 n log |Σ|)(1 + δ)r where the second inequality comes from our assumption that δlog(1000nlog |Σ|) / r.Hence, (, )(1 + δ)r with high probability.§ INAPPROXIMABILITY OF LITTLESTONE'S DIMENSION We next proceed to Littlestone's Dimension. The main theorem of this section is stated below. 
Again, note that this theorem and Theorem <ref> implies Theorem <ref>.There exists ε > 0 such that there is a randomized reduction from any bi-regular Label Cover instance = (A, B, E, Σ, {π_e}_e ∈ E) with |Σ| = O(1) to a ground setand a concept classessuch that, if n ≜ |A| + |B|, r ≜√(n) / log n and k ≜ 10^10|E|log |Σ| / r^2, then the following conditions hold for every sufficiently large n. * (Size) The reduction runs in time 2^rk· |Σ|^O(|E|/r) and ||, ||2^rk· |Σ|^O(|E|/r).* (Completeness) Ifis satisfiable, then (, )2rk.* (Soundness) If ()0.001, then (, )(2 - ε)rk with high probability. §.§ Why the VC Dimension Reduction Fails for Littlestone's Dimension It is tempting to think that, since our reduction from the previous section works for VC Dimension, it may also work for Littlestone's Dimension. In fact, thanks to Fact <ref>, completeness for that reduction even translates for free to Littlestone's Dimension. Alas, the soundness property does not hold. To see this, let us build a depth-2r mistake tree for ,, even when () is small, as follows. * We assign the test-selection elements to the first r levels of the tree, one element per level. More specifically, for each s ∈{0, 1}^< r, we assign y_|s| + 1 to s.* For every string s ∈{0, 1}^r, the previous step of the construction gives us a subset ofcorresponding to the path from root to s; this subset is simply H_s = {y_i ∈| s_i = 1}. Let T_H_s denote the set of vertices tested by this seed H_s. Let ϕ_s ∈Σ^V denote an assignment that satisfies all the constraints in T_H_s. Note that, since T_H_s is of small size (only (√(n))), even if () is small, ϕ_s is still likely to exist (and we can decide whether it exists or not in time 2^Õ(√(n))).We then construct the subtree rooted at s that corresponds to ϕ_s by assigning each level of the subtree x_i, ϕ_s|_U_i. Specifically, for each t ∈{0, 1}^ r, we assign x_|t| - r + 1, ϕ_t_ r|_U_|t| - r + 1 to node t of the tree. It is not hard to see that the constructed tree is indeed a valid mistake tree. This is because the path from root to each leaf l ∈{0, 1}^2r agrees with C_I(l), H_l_ r, ϕ_lr (where I(l) = {i ∈ [r] | l_i = 1}). §.§ The Final Reduction The above counterexample demonstrates the main difference between the two dimensions: order does not matter in VC Dimension, but it does in Littlestone's Dimension. By moving the test-selection elements up the tree, the tests are chosen before the assignments, which allows an adversary to “cheat” by picking different assignments for different tests. We would like to prevent this, i.e., we would like to make sure that, in the mistake tree, the upper levels of the tree are occupied with the assignment elements whereas the lower levels are assigned test-selection elements. As in the VC Dimension argument, our hope here is that, given such a tree, we should be able to decode an assignment that passes tests on many different tests. Indeed we will tailor our construction to achieve such property. Recall that, if we use the same reduction as VC Dimension, then, in the completeness case, we can construct a mistake tree in which the first r layers consist solely of assignment elements and the rest of the layers consist of only test-selection elements. Observe that there is no need for different nodes on the r-th layer to have subtrees composed of the same set of elements; the tree would still be valid if we make each test-selection element only work with a specific s ∈{0, 1}^r and create concepts accordingly. 
In other words, we can modify our construction so that our test-selection elements are = {y_I, i| I ⊆ [r], i ∈ [r]} and the concept class is {C_I, H, σ_H| I ⊆ [r], H ⊆, σ_H ∈Σ^T_H} where the condition that an assignment element lies in C_I, H, σ_H is the same as in the VC Dimension reduction, whereas for y_I', i to be in C_I, H, σ_H, we require not only that i ∈ H but also that I = I'. Intuitively, this should help us, since each y_I, i is now only in a small fraction (2^-r) of concepts; hence, one would hope that any subtree rooted at any y_I, i cannot be too deep, which would indeed implies that the test-selection elements cannot appear in the first few layers of the tree.Alas, for this modified reduction, it is not true that a subtree rooted at any y_I, i has small depth; specifically, we can bound the depth of a subtree y_I, i by the log of the number of concepts containing y_I, i plus one (for the first layer). Now, note that y_I, i∈ C_I', H, σ_H means that I' = I and i ∈ H, but there can be still as many as 2^r - 1· |Σ|^|T_H| = |Σ|^O(|E|/r) such concepts. This gives an upper bound of r + O(|E|log|Σ|/r) on the depth of the subtree rooted at y_I, i. However, |E|log|Σ|/r = Θ(√(n)log n) = ω(r); this bound is meaningless here since, even in the completeness case, the depth of the mistake tree is only 2r.Fortunately, this bound is not useless after all: if we can keep this bound but make the intended tree depth much larger than |E|log|Σ|/r, then the bound will indeed imply that no y_I, i-rooted tree is deep. To this end, our reduction will have one more parameter k = Θ(|E|log|Σ|/r) where Θ(·) hides a large constant and the intended tree will have depth 2rk in the completeness case; the top half of the tree (first rk layers) will again consist of assignment elements and the rest of the tree composes of the test-selection elements. The rough idea is to make k “copies” of each element: the assignment elements will now be {x_i, σ_i, j| i ∈ [r], σ_i ∈Σ^U_i, j ∈ [k]} and the test-selection elements will be {y_I, i, j| I ⊆ [r] × [k], j ∈ [k]}. The concept class can then be defined as {C_I, H, σ_H| I ⊆ [r] × [k], H ⊆ [r] × [k], σ_H ∈Σ^T_H} naturally, i.e., H is used as the seed to pick the test set T_H, y_I', i, j∈ C_I, H, σ_H iff I' = I and (i, j) ∈ H whereas x_i, σ_i, j∈ C_I, H, σ_H iff (i, j) ∈ I and σ_i|_(I, σ_I) = σ_H|_(I, σ_I). For this concept class, we can again bound the depth of y_I, i-rooted tree to be rk + O(|E|log|Σ|/r); this time, however, rk is much larger than |E|log|Σ|/r, so this bound is no more than, say, 1.001rk. This is indeed the desired bound, since this means that, for any depth-1.999rk mistake tree, the first 0.998rk layers must consist solely of assignment elements.Unfortunately, the introduction of copies in turn introduces another technical challenge: it is not true any more that a partial assignment to a large set only passes a few tests w.h.p. (i.e. an analogue of Lemma <ref> does not hold). By Inequality (<ref>), each H is passed with probability at most 2^-r, but now we want to take a union bound there are 2^rk≫ 2^r different H's.To circumvent this, we will define a map τ: ([r] × [k]) →([r]) and use τ(H) to select the test instead of H itself. The map τ we use in the construction is the threshold projection where i is included in H if and only if, for at least half of j ∈ [k], H contains (i, j). 
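Concretely, here is a minimal sketch of this threshold projection; representing H as a Python set of pairs (i, j) is our own choice for illustration, not part of the construction.

```python
def threshold_projection(H, r, k):
    """tau(H): block i in [r] is kept iff H contains (i, j) for at least half of j in [k]."""
    return {i for i in range(1, r + 1)
            if sum((i, j) in H for j in range(1, k + 1)) >= k / 2}

# Example with r = 2, k = 4:
# threshold_projection({(1, 1), (1, 2), (2, 1)}, 2, 4) == {1}
```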
To motivate our choice of τ, recall that our overall proof approach is to first find a node that corresponds to an assignment to a large subset of the Label Cover instance; then argue that it can pass only a few tests, which we hope would imply that the subtree rooted there cannot be too deep. For this implication to be true, we need the following to also hold: for any small subset ⊆([r]) of τ(H)'s, we have that (τ^-1(), [r] × [k]) is small. This property indeed holds for our choice of τ (see Lemma <ref>).With all the moving parts explained, we state the full reduction formally in Figure <ref>.Similar to our VC Dimension proof, we will use the following notation: * For every i ∈ [r], let _i ≜{x_i, σ_i, j|σ_i ∈Σ^U_i, j ∈ [k]}; we refer to these elements as the i-assignment elements. Moreover, for every (i, j) ∈ [r] × [k], let _i, j≜{x_i, σ_i, j|σ_i ∈Σ^U_i}; we refer to these elements as the (i, j)-assignment elements.* For every S ⊆, let I(S) = {i ∈ [r] | S ∩_i ∅} and IJ(S) = {(i, j) ∈ [r] × [k] | S ∩_i, j∅}.* A set S ⊆ is non-repetitive if |S ∩_i, j|1 for all (i, j) ∈ [r] × [k].* We say that S passesif the following two conditions hold: * For every i ∈ [r] such that S ∩_i∅, all i-assignment elements of S are consistent on T_|_U_i, i.e., for every (i, σ_i, j), (i, σ'_i, j') ∈ S, we have σ_i|_U_i = σ'_i|_U_i.* The canonically induced assignment on T_ does not violate any constraint (note that the previous condition implies that such assignment is unique).We use (S) to denote the collection of all seeds ⊆ [r] that S passes. We also use the following notation for mistake trees: * For any subset S ⊆ and any function ρ: S →{0, 1}, let [ρ] ≜{C ∈|∀ a ∈ S, a ∈ C ⇔ρ(a) = 1} be the collections of all concept that agree with ρ on S. We sometimes abuse the notation and write [S] to denote the collection of all the concepts that contain S, i.e., [S] = {C ∈| S ⊆ C}.* For any binary string s, let (s) ≜{∅, s_ 1, …, s_ |s| - 1} denote the set of all proper prefixes of s. * For any depth-d mistake tree , let v_, s denote the element assigned to the node s ∈{0, 1}^ d, and let P_, s≜{v_, s'| s' ∈(s)} denote the set of all elements appearing from the path from root to s (excluding s itself). Moreover, let ρ_, s: P_, s→{0, 1} be the function corresponding to the path from root to s, i.e., ρ_, s(v_, s') = s_|s'| + 1 for every s' ∈(s).Output Size of the Reduction The output size of the reduction follows immediately from a similar argument as in the VC Dimension reduction. The only different here is that there are 2^rk choices for I and H, instead of 2^r choices as in the previous construction.Completeness. Ifhas a satisfying assignment σ^* ∈Σ^V, we can construct a depth-rk mistake treeas follows. For i ∈ [r], j ∈ [k], we assign x_i, σ^*|_U_i, j to every node in the ((i - 1)k + j)-th layer of . Note that we have so far assigned every node in the first rk layers. For the rest of the vertices s's, if s lies in layer rk + (i - 1)k + j, then we assign y_I(ρ^-1_, s(1)), i, j to it. It is clear that, for a leaf s ∈{0, 1}^rk, the concept C_I(ρ^-1_, s(1)), H_, s, σ^* agrees with the path from root to s where H_, s is defined as {(i, j) ∈ [r] × [k] | y_I(ρ^-1_, s(1)), i, j∈ρ^-1_, s(1)}. Hence, (, )2rk. §.§ Soundness Next, we will prove the soundness of our reduction, stated more precisely below. For brevity, we will assume throughout this subsection that r is sufficiently large, and leave it out of the lemmas' statements. 
Note that this lemma, together with completeness and output size properties we argue above, implies Theorem <ref> with ε = 0.001. Let (, ) be the output from the reduction in Figure <ref> on input . If ()0.001, then (, )1.999rk with high probability. Roughly speaking, the overall strategy of our proof of Lemma <ref> is as follows: * First, we will argue that any subtree rooted at any test-selection element must be shallow (of depth 1.001rk). This means that, if we have a depth-1.999rk mistake tree, then the first 0.998rk levels must be assigned solely assignment elements.* We then argue that, in this 0.998rk-level mistake tree of assignment elements, we can always extract a leaf s such that the path from root to s indicates inclusion of a large non-repetitive set. In other words, the path to s can be decoded into a (partial) assignment for the Label Cover instance .* Let the leaf from the previous step be s and the non-repetitive set be . Our goal now is to show that the subtree rooted as s must have small depth. We start working towards this by showing that, with high probability, there are few tests that agree with . This is analogous to Part II of the VC Dimension proof.* With the previous steps in mind, we only need to argue that, when |()| is small, the Littlestone's dimension of all the concepts that contains(i.e. ([], )) is small. Thanks to Fact <ref>, it is enough for us to bound ([], ) and ([], ) separately. For the former, our technique from the second step also gives us the desired bound; for the latter, we prove that ([], ) is small by designing an algorithm that provides correct predictions on a constant fraction of the elements in . Let us now proceed to the details of the proofs.§.§.§ Part I: Subtree of a Test-Selection Assignment is Shallow For any y_I, i, j∈, ([{y_I, i, j}], )rk + (4|E|ℓ/r)log |Σ|1.001 rk. Note that the above lemma implies that, in any mistake tree, the depth of the subtree rooted at any vertex s assigned to some y_I, i, j∈ is at most 1 + 1.001rk. This is because every concept that agrees with the path from the root to s must be in [{y_I, i, j}], which has depth at most 1.001rk.[Lemma <ref>] Consider any C_I', H, σ_τ(H)∈[{y_I, i, j}], ). Since y_I, i, j∈ C_I', H, σ_τ(H), we have I = I'. Moreover, from Lemma <ref>, we know that |_i(M_τ(H)(i))|4|E|ℓ/r^2, which implies that |T_τ(H)|4|E|ℓ/r. This means that there are only at most |Σ|^4|E|ℓ/r choices of σ_τ(H). Combined with the fact that there are only 2^rk choices of H, we have |[{y_I, i, j}]|2^rk· |Σ|^4|E|ℓ/r. Fact <ref> then implies the lemma. §.§.§ Part II: Deep Mistake Tree Contains a Large Non-Repetitive Set The goal of this part of the proof is to show that, for mistake tree of , of depth slightly less than rk, there exists a leaf s such that the corresponding path from root to s indicates an inclusion of a large non-repetitive set; in our notation, this means that we would like to identify a leaf s such that IJ(ρ^-1_, s(1)) is large. Since we will also need a similar bound later in the proof, we will prove the following lemma, which is a generalization of the stated goal that works even for the concept class [] for any non-repetitive . To get back the desired bound, we can simply set = ∅. For any non-repetitive setand any depth-d mistake treeof , [], there exists a leaf s ∈{0, 1}^d such that |IJ(ρ^-1_, s(1)) ∖ IJ()|d - r. The proof of this lemma is a double counting argument where we count a specific class of leaves in two ways, which ultimately leads to the above bound. 
The leaves that we focus on are the leaves s ∈{0, 1}^d such that, for every (i, j) such that an (i, j)-assignment element appears in the path from root to s but not in , the first appearance of (i, j)-assignment element in the path is included. In other words, for every (i, j) ∈ IJ(P_, s) ∖ IJ(), if we define u_i, j≜inf_s' ∈(s), v_, s'∈_i, j |s'|, then s_u_i, j + 1 must be equal to 1. We call these leaves the good leaves. Denote the set of good leaves ofby _,.Our first way of counting is the following lemma. Informally, it asserts that different good leaves agree with different sets ⊆ [r]. This can be thought of as an analogue of Lemma <ref> in our proof for VC Dimension. Note that this lemma immediately gives an upper bound of 2^r on |_, |.For any depth-d mistake treeof , [] and any different good leaves s_1, s_2 ∈_,, if C_I_1, H_1, σ_1 agrees with s_1 and C_I_2, H_2, σ_2 agrees with s_2 for some I_1, I_2, H_1, H_2, σ_1, σ_2, then τ(H_1) τ(H_2).Suppose for the sake of contradiction that there exist s_1s_2 ∈_,, H_1, H_2, I_1, I_2, σ_1, σ_2 such that C_I_1, H_1, σ_1 and C_I_2, H_2, σ_2 agree with s_1 and s_2 respectively, and τ(H_1) = τ(H_2). Let s be the common ancestor of s_1, s_2, i.e., s is the longest string in (s_1) ∩(s_2). Assume w.l.o.g. that (s_1)_|s| + 1 = 0 and (s_2)_|s| + 1 = 1.Consider the node v_, s in treewhere the paths to s_1, s_2 split; suppose that this is x_i, σ_i, j. Therefore x_i, σ_i, j∈ C_I_2, H_2, σ_2∖ C_I_1, H_1, σ_1.We now argue that there is some x_i, σ'_i, j (with the same i,j but a different assignment σ'_i) that is in both concepts, i.e. x_i, σ'_i, j∈ C_I_2, H_2, σ_2∩ C_I_1, H_1, σ_1. We do this by considering two cases:* If (i, j) ∈ IJ(), then there is x_i, σ'_i, j∈⊆ C_I_1, H_1, σ_1, C_I_2, H_2, σ_2 for some σ'_i ∈Σ^U_i. * Suppose that (i, j) ∉ IJ(). Since s_1 is a good leaf, there is some t ∈(s) such that v_, t = x_i, σ'_i, j for some σ'_i ∈Σ^U_i and t is included by the path (i.e. s_|t| + 1 = 1). This also implies that x_i, σ'_i, j is in both C_I_1, H_1, σ_1 and C_I_2, H_2, σ_2. Now, since both x_i, σ_i, j and x_i, σ'_i, j are in the concept C_I_2, H_2, σ_2, we have (i, j) ∈ I_2 andσ_i|__i(M_τ(H_1)) = σ_2|__i(M_τ(H_1)) = σ'_i|__i(M_τ(H_1)).On the other hand, since C_I_1, H_1, σ_1 contains x_i, σ'_i, j but not x_i, σ_i, j, we have (i, j) ∈ I_1 andσ_i|__i(M_τ(H_2))σ_1|__i(M_τ(H_2)) = σ'_i|__i(M_τ(H_2)).which contradicts (<ref>) since τ(H_1) = τ(H_2). Next, we will present another counting argument which gives a lower bound on the number of good leaves, which, together with Lemma <ref>, yields the desired bound.[Lemma <ref>] For any depth-d mistake treeof [],, let us consider the following procedure which recursively assigns a weight λ_s to each node s in the tree. At the end of the procedure, all the weight will be propagated from the root to good leaves. * For every non-root node s ∈{0, 1}^ 1, set λ_s ← 0. For root s = ∅, let λ_∅← 2^d.* While there is an internal node s ∈{0, 1}^< d such that λ_s > 0, do the following: * Suppose that v_s = x_i, σ_i, j for some i ∈ [r], σ_i ∈Σ^U_i and j ∈ [k].* If so far no (i, j)-element has appeared in the path or in , i.e., (i, j) ∉ IJ(P_, s) ∪ IJ(), then λ_s1←λ_s. 
Otherwise, set λ_s0 = λ_s1 = λ_s/2.* Set λ_s ← 0.The following observations are immediate from the construction: * The total of λ's over all the tree, ∑_s ∈{0, 1}^ dλ_d always remain 2^d.* At the end of the procedure, for every s ∈{0, 1}^ d, λ_s0 if and only if s ∈_,.* If s ∈_,, then λ_s = 2^|IJ(ρ^-1_, s(1)) ∖ IJ()| at the end of the execution.Note that the last observation comes from the fact that λ always get divides in half when moving down one level of the tree unless we encounter an (i, j)-assignment element for some i, j that never appears in the path or inbefore. For any good leaf s, the set of such (i, j) is exactly the set IJ(ρ^-1_, s(1)) ∖ IJ().As a result, we have 2^d = ∑_s ∈_,2^|IJ(ρ^-1_, s(1)) ∖ IJ()|. Since Lemma <ref> implies that |_, |2^r, we can conclude that there exists s ∈_, such that |IJ(ρ^-1_, s(1)) ∖ IJ()|d - r as desired. §.§.§ Part III: No Large Non-Repetitive Set Passes Many Test The main lemma of this subsection is the following, which is analogous to Lemma <ref>If ()0.001, then, with high probability, for every non-repetitive setof size at least 0.99rk, |()|100n log |Σ|.For every I ⊆ [r], let U_I ≜⋃_i ∈ I U_i. For every σ_I ∈Σ^U_I and every ⊆, we say that (I, σ_I) passesif σ_I does not violate any constraint in T_. Note that this definition and the way the test is generated in the reduction is the same as that of the VC Dimension reduction. Hence, we can apply Lemma <ref> with δ = 0.99, which implies the following: with high probability,for every I ⊆ [r] of size at least 0.99 r and every σ_I ∈Σ^U_I, |(I, σ_I)|100n log |Σ| where (I, σ_I) denote the set of all 's passed by (I, σ_I). Conditioned on this event happening, we will show that, for every non-repetitive setof size at least 0.99 rk, |()|100n log |Σ|.Consider any non-repetitive setof size 0.99rk. Let σ_I() be an assignment on U_I() such that, for each i ∈ I(), we pick one x_i, σ_i, j∈ (if there are more than one such x's, pick one arbitrarily) and let σ_I()|_U_i = σ_i. It is obvious that () ⊆(I(), σ_I()). Sinceis non-repetitive and of size at least 0.99rk, we have |I()|0.99r, which means that |(I(), σ_I())|100n log |Σ| as desired. §.§.§ Part IV: A Subtree ContainingMust be Shallow In this part, we will show that, if we restrict ourselves to only concepts that contain some non-repetitive setthat passes few tests, then the Littlestone's Dimension of this restrictied concept class is small. Therefore when we build a tree for the whole concept class , if a path from root to some node indicates an inclusion of a non-repetitive set that passes few tests, then the subtree rooted at this node must be shallow. For every non-repetitive set , ([], )1.75rk - || + r + 1000k√(r)log(|()| + 1).We prove the above lemma by bounding ([], ) and ([], ) separately, and combining them via Fact <ref>. First, we can bound ([], ) easily by applying Lemma <ref> coupled with the fact that |IJ()| = || for every non-repetitive . This immediately gives the following corollary.For every non-repetitive set , ([], )rk - || + r.We will next prove the following bound on ([], ). Note that Corollary <ref>, Lemma <ref>, and Fact <ref> immediately imply Lemma <ref>.For every non-repetitive set , ([], )0.75rk + 500k √(r)log(|()| + 1).The overall outline of the proof of Lemma <ref> is that we will design a prediction algorithm whose mistake bound is at most 0.75rk + 1000k √(r)log |()|. Once we design this algorithm, Lemma <ref> immediately implies Lemma <ref>. 
To define our algorithm, we will need the following lemma, which is a general statement that says that, for a small collection of H's, there is a some ^* ⊆ [r] that agrees with almost half of every H in the collection.Let ⊆([r]) be any collections of subsets of [r], there exists ^* ⊆ [r] such that, for every ∈, |^* Δ|0.5r + 1000√(r)log(|| + 1) where Δ denotes the symmetric difference between two sets.We use a simple probabilistic method to prove this lemma. Let ^r be a random subset of [r] (i.e. each i ∈ [r] is included independently with probability 0.5). We will show that, with non-zero probability, |^r Δ|0.5r + 1000√(r)log(|| + 1) for all ∈, which immediately implies that a desired ^* exists.Fix ∈. Observe that |^r Δ| can be written as ∑_i ∈ [r]1[i ∈ (^r Δ)]. For each i, 1[i ∈ (^r Δ)] is a 0, 1 random variable with mean 0.5 independent of other i' ∈ [r]. Applying Chernoff bound here yields[|^r Δ| > 0.5r + 1000√(r)log(|| + 1)]2^-log^2(|| + 1)1/|| + 1. Hence, by union bound, we have[∃∈, |^r Δ| > 0.5r + 1000√(r)log(|| + 1)] ||/|| + 1 < 1.In other words, |^r Δ|0.5r + 1000√(r)log(|| + 1) for all ∈ with non-zero probability as desired. We also need the following observation, which is an analogue of Observation <ref> in the VC Dimension proof; it follows immediately from definition of (S).If a non-repetitive setis a subset of some concept C_I, H, σ_τ(H), then τ(H) ∈(). With Lemma <ref> and Observation <ref> in place, we are now ready to prove Lemma <ref>.[Lemma <ref>] Let ^* ⊆ [r] be the set guaranteed by applying Lemma <ref> with = (). Let H^* ≜^* × [k].Our prediction algorithm will be very simple: it always predicts according to H^*; i.e., on an input[We assume w.l.o.g. that input elements are distinct; if an element appears multiple times, we know the correct answer from its first appearance and can always correctly predict it afterwards.] y ∈, it outputs 1[y ∈ H^*]. Consider any sequence (y_1, h_1), …, (y_w, h_w) that agrees with a concept C_I, H, σ_τ(H)∈[]. Observe that the number of incorrect predictions of our algorithm is at most |H^* Δ H|.Since C_I, H, σ_τ(H)∈[], Observation <ref> implies that τ(H) ∈(). This means that |τ(H) Δ^*|0.5r + 1000√(r)log(|| + 1). Now, let us consider each i ∈ [r] ∖ (τ(H) Δ^*). Suppose that i ∈τ(H) ∩^*. Since i ∈τ(H), at least k/2 elements of _i are in H and, since i ∈^*, we have _i ⊆ H^*. This implies that |(H^* Δ H) ∩ Y_i|k/2. A similar bound can also be derived when i ∉τ(H) ∩^*. As a result, we have|H^* Δ H|= ∑_i ∈ [r] |(H^* Δ H) ∩ Y_i| = ∑_i ∈τ(H) Δ^* |(H^* Δ H) ∩ Y_i| + ∑_i ∈ [r] ∖ (τ(H) Δ^*) |(H^* Δ H) ∩ Y_i|(|τ(H) Δ^*|)(k) + (r - |τ(H) Δ^*|)(k/2)0.75rk + 500k√(r)log(|| + 1),concluding our proof of Lemma <ref>. §.§.§ Putting Things Together [Lemma <ref>] Assume that ()0.001. From Lemma <ref>, we know that, with high probability, |()|100nlog|Σ| for every non-repetitive setof size at least 0.99rk. Conditioned on this event, we will show that (, )1.999rk.Suppose for the sake of contradiction that (, ) > 1.999rk. Consider any depth-1.999rk mistake treeof ,. From Lemma <ref>, no test-selection element is assigned to any node in the first 1.999rk - 1.001rk - 10.997rk levels. In other words, the tree induced by the first 0.997rk levels is simply a mistake tree of ,. By Lemma <ref> with = ∅, there exists s ∈{0, 1}^0.997rk such that |IJ(ρ_, s^-1(1))|0.997rk - r0.996rk.Since |IJ(ρ_, s^-1(1))|0.996rk, there exists a non-repetitive set ⊆ρ_, s^-1(1) of size 0.996rk. Consider the subtree rooted at s. This is a mistake tree of [ρ_, s], of depth 1.002rk. 
Since ⊆ρ_, s^-1(1), we have [ρ_, s] ⊆[]. However, this implies1.002rk([ρ_, s], ) ([], ) (From Lemma <ref>) 1.75rk - 0.996rk + r + 100k√(r)log(|()| + 1) (From Lemma <ref>) 0.754rk + r + 100k√(r)log(100nlog|Σ| + 1) = 0.754rk + o(rk),which is a contradiction when r is sufficiently large. § CONCLUSION AND OPEN QUESTIONSIn this work, we prove inapproximability results for VC Dimension and Littlestone's Dimension based on the randomized exponential time hypothesis. Our results provide an almost matching running time lower bound of n^log^1 - o(1) n for both problems while ruling out approximation ratios of 1/2 + o(1) and 1 - ε for some ε > 0 for VC Dimension and Littlestone's Dimension respectively.Even though our results help us gain more insights on approximability of both problems, it is not yet completely resolved. More specifically, we are not aware of any constant factor n^o(log n)-time approximation algorithm for either problem; it is an intriguing open question whether such algorithm exists and, if not, whether our reduction can be extended to rule out such algorithm. Another potentially interesting research direction is to derandomize our construction; note that the only place in the proof in which the randomness is used is in Lemma <ref>.A related question which remains open, originally posed by Ben-David and Eiron <cit.>, is that of computing the self-directed learning[Roughly, self-directed learning is similar to the online learning model corresponding to Littlestone's dimension, but where the learner chooses the order elements; see <cit.> for details.] mistake bound.Similarly, it may be interesting to understand the complexity of computing (approximating) the recursive teaching dimension <cit.>.§ ACKNOWLEDGEMENTWe thank Shai Ben-David for suggesting the question of approximability of Littlestone's dimension, and several other fascinating discussions. We also thank Yishay Mansour and COLT anonymous reviewers for their useful comments. Pasin Manurangsi is supported by NSF Grants No. CCF 1540685 and CCF 1655215.Aviad Rubinstein was supported by a Microsoft Research PhD Fellowship, as well as NSF grant CCF1408635 and Templeton Foundation grant 3966. This work was done in part at the Simons Institute for the Theory of Computing.alpha § QUASI-POLYNOMIAL ALGORITHM FOR LITTLESTONE'S DIMENSION In this section, we provides the following algorithm which decides whether (, )d in time O(|| · (2||)^d). Since we know that (, ) log ||, we can run this algorithm for all d log || and compute Littlestone's Dimension of , in quasi-polynomial time. There is an algorithm that, given a universe , a concept classand a non-negative integer d, decides whether (, )d in time O(|| · (2||)^d).Our algorithm is based on a simple observation: if an element x belongs to at least one concept and does not belong to at least one concept, the maximum depth of mistake trees rooted at x is exactly1+min{([x → 0], ), ([x → 1], )}. Recall from Section <ref> that [x → 0] and [x → 1] denote the collection of concepts that exclude x and the collection of concepts that include x respectively.This yields the following natural recursive algorithm. For each x ∈ such that [x → 0], [x → 1] ∅, recursively run the algorithm on ([x → 0], , d - 1) and ([x → 1], , d - 1). If both executions return NO for some x, then output NO. Otherwise, output YES. When d = 0, there is no need for recursion as we can just check whether ||1.Finally, we note that the running time can be easily proved by induction on d. 
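For concreteness, the following Python sketch implements this recursive decision procedure; the representation of concepts as sets of elements and all identifiers are our own, and the wrapper simply iterates d up to the log|C| bound mentioned above, which yields the quasi-polynomial-time computation of Littlestone's Dimension.

```python
def ldim_at_most(universe, concepts, d):
    """Decide whether Littlestone's Dimension of (universe, concepts) is at most d."""
    if len(concepts) <= 1:     # at most one concept left: dimension 0
        return True
    if d == 0:                 # two or more concepts: dimension at least 1
        return False
    for x in universe:
        c0 = [c for c in concepts if x not in c]   # concepts excluding x
        c1 = [c for c in concepts if x in c]       # concepts including x
        if c0 and c1:
            # if both restrictions have dimension > d - 1, a mistake tree rooted
            # at x has depth > d, so the answer is NO
            if not ldim_at_most(universe, c0, d - 1) and \
               not ldim_at_most(universe, c1, d - 1):
                return False
    return True


def littlestone_dimension(universe, concepts):
    """Compute the dimension by trying d = 0, 1, ...; it is at most log2(#concepts)."""
    d = 0
    while not ldim_at_most(universe, concepts, d):
        d += 1
    return d

# Example: the four subsets of {0, 1} as concepts over the universe {0, 1, 2};
# littlestone_dimension([0, 1, 2], [set(), {0}, {1}, {0, 1}]) returns 2.
```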
| http://arxiv.org/abs/1705.09517v1 | {
"authors": [
"Pasin Manurangsi",
"Aviad Rubinstein"
],
"categories": [
"cs.CC"
],
"primary_category": "cs.CC",
"published": "20170526103801",
"title": "Inapproximability of VC Dimension and Littlestone's Dimension"
} |
trees, matrix, arrows, calc, patterns, decorations.pathreplacingplain thmTheorem[section] *thm*Theorem lem[thm]Lemma *lem*Lemma cor[thm]Corollary *cor*Corollary prop[thm]Proposition *prop*Propositiondefinition d?f[thm]Definition *d?f*Definitionrem[thm]Remark *rem*Remark ex[thm]Example *ex*Example tabl[thm]Table tabl*Table smallpmatrix ( [ )#1#1e century⌈⌉ alphaurl Lattices of minimal covolume in _n() Franois ThilmanyThe author is supported by the Fonds National de la Recherche, Luxembourg (AFR grant 11275005)================================================================================================================== The objective of this paper is to determine the lattices of minimal covolume in _n(), for n ≥ 3. The answer turns out to be the simplest one: _n() is, up to automorphism, the unique lattice of minimal covolume in _n(). In particular, lattices of minimal covolume in _n() are non-uniform when n ≥ 3, contrasting with Siegel's result for _2(). This answers for _n() the question of Lubotzky: is a lattice of minimal covolume typically uniform or not?§ INTRODUCTION§.§ A brief history The study of lattices of minimal covolume in _n originated with Siegel's work <cit.> on _2(). Siegel showed that in _2(), a lattice of minimal covolume is given by the (2,3,7)-triangle group. He raised the question to determine which lattices attain minimum covolume in groups of isometries of higher-dimensional hyperbolic spaces. For _2(), which acts on hyperbolic 3-space, the minimum among non-uniform lattices was established by Meyerhoff <cit.>; among all lattices in _2(), the minimum was exhibited more recently by Gehring, Marshall and Martin <cit.>, and is attained by a uniform lattice. Lubotzky established the analogous result <cit.> for _2(_q((t^-1))), where this time _2(_q[t]) attains the smallest covolume. Lubotzky observed that in this case, as opposed to the (2,3,7)-triangle group in _2(), the lattice of minimal covolume is not uniform; he then asked whether, for a lattice of minimal covolume in a semi-simple Lie group, the typical situation is to be uniform, or not. Progress has been made on this question, and Salehi Golsefidy showed <cit.> that for most Chevalley groups G of rank at least 2, G(_q[t]) is the unique (up to isomorphism) lattice of minimal covolume in G( _q((t^-1))). Salehi Golsefidy also obtained <cit.> that for most simply connected almost simple groups over _q((t^-1)), a lattice of minimal covolume will be non-uniform (provided Weil's conjecture on Tamagawa numbers holds). On the other side of the picture, when the rank is 1, Belolipetsky and Emery <cit.> determined the lattices of minimal covolume among arithmetic lattices in (n,1)() (n ≥ 4) and showed that they are non-uniform. For (n,1)(), Emery and Stover <cit.> determined the lattices of minimal covolume among the non-uniform arithmetic ones, but to the best of the author's knowledge, this has not been compared to the uniform arithmetic ones in this case.Unfortunately, in the rank 1 case, it is not known whether a lattice of minimal covolume is necessarily arithmetic.The above results give a partial answer to the question of Lubotzky in these two respective situations. In this paper, we intend to contribute to the question for _n(). We show that, up to automorphism, the non-uniform lattice _n() is the unique lattice of minimal covolume in _n().§.§ Outline The goal of the present paper is to prove the following theorem.Let n ≥ 3 and let Γ be a lattice of minimal covolume for some (any) Haar measure in _n(). 
Then σ(Γ) = _n() for some (algebraic) automorphism σ of _n().The argument relies in an indispensable way on the important work of Prasad <cit.> and Borel and Prasad <cit.> (there will be multiple references to results contained in these two articles). We will proceed as follows.We start with a lattice Γ of minimal covolume in _n(). Using Margulis' arithmeticity theorem and Rohlfs' maximality criterion, we find a number field k, an archimedean place v_0 and a simply connected absolutely almost simple k-group G for which Γ is identified with the normalizer of a principal arithmetic subgroup Λ in G(k_v_0). The latter means that there is a collection of parahoric subgroups {P_v}_v ∈ V_f such that Λ consists precisely of the elements of G(k) whose image in G(k_v) lies in P_v for all v ∈ V_f. This allows us to express the covolume of Γ as μ(G(k_v_0) / Γ) = [Γ: Λ]^-1 μ(G(k_v_0) / Λ).The factor μ(G(k_v_0) / Λ) can be computed using Prasad's volume formula <cit.>, and the result depends on the arithmetics of k and of the parahorics P_v, as well as on the quasi-split inner form of G. On the other hand, the index [Γ: Λ] can be controlled using techniques developed by Rohlfs <cit.>, and Borel and Prasad <cit.>. The bound depends namely on the first Galois cohomology group of the center of G and on its action on the types of the parahorics P_v. Once we have an estimate on the covolume of Γ, we can compare it to the covolume of _n() in _n(). We argue that for the former not to exceed the latter, it must be that k is , G is an inner form of _n, and all the parahorics are hyperspecial. This is carried out in sections <ref>-<ref>. Finally, using local-global techniques, we conclude that Γ must be the image of _n() under some automorphism of _n(). §.§ Acknowledgements First of all, the author wishes wholeheartedly to thank Alireza Salehi Golsefidy, to whom the author is greatly indebted for suggesting the subject of the present work, for his precious insight regarding some of the key points at issue, and for his helpful remarks throughout the completion of this project. The author also wishes to thank Mikhail Belolipetsky and Gopal Prasad for their very helpful comments, and Jake Postema for interesting conversations about some of the number-theoretical aspects of this work. Lastly, the author is very grateful for the financial support of the Fonds National de la Recherche du Luxembourg, that allowed him to devote full attention to this project. §.§ Notation and preliminaries The contents of the paper will assume familiarity with the theory of algebraic groups, Bruhat-Tits theory and basic number theory. We refer the reader to <cit.> for an exposition of some of these topics and a more complete list of the available literature. As much as possible, we will follow the notation adopted by Borel and Prasad in <cit.> and <cit.>. * , , ,respectively denote the sets of strictly positive natural, rational, real and complex numbers. For p a place or a prime, _p denotes the field of p-adic numbers and _p its ring of p-adic integers. _p denotes the finite field with p elements.* In what is to follow, we will fix a number field k of degree m, and V, V_∞ and V_f will always denote the set of places, archimedean places and non-archimedean places of k. We will always normalize each non-archimedean place v so that v =. * For v ∈ V, k_v will denote the v-adic completion of k. For v ∈ V_f, k_v is the maximal unramified extension of k_v, _v denotes the residue field of k at v and q_v = #_v is the cardinality of the latter. 
* _k denotes the ring of adeles of k, and the adeles ofwill be abbreviated . * When working with the adele points G(_k) (or variations of them, e.g. finite adeles) of an algebraic group G, we will freely identify G(k) with its image in G(_k) under the diagonal embedding, and vice-versa. * For l a finite extension of k, we denote D_l the absolute value of the discriminant of l (over ) and _l/k the relative discriminant of l over k; h_l is the class number of l. The units of l will be denoted by U_l, and the subgroup of roots of unity in l by μ(l). * G will be a simply connected absolutely almost simple group (of type A_r) defined over k. We denote r = n-1 its absolute rank, and for v ∈ V_f, r_v is its rank over k_v. *denotes the quasi-split inner k-form of G, l will denote its splitting field. * _n denotes the special unitary group defined overassociated to the positive-definite hermitian form |z_1| + … + |z_n| on ^n. Its group _n() of real points is the usual special unitary group, the unique compact connected simply connected almost simple Lie group of type _n-1. * ζ denotes Riemann's zeta function. * For n ∈, we set ñ = 1 or 2 if n is respectively odd or even.* For x ∈, x denotes the ceiling of x, that is the smallest integer n such that n ≥ x. * V_n will denote the quantity ∏_i=1^n-1i!/(2π)^i+1. § THE SETTINGOn _n, we pick a left-invariant exterior form ω_0 of highest degree which is defined over . The form ω_0 induces a left-invariant form on _n(), also to be denoted ω_0, which in turn induces a left-invariant form on _n() through their common Lie algebra. Let c_0 ∈ be such that _n() has volume 1 for the Haar measure determined in this way by c_0 ω_0; we denote μ_0 the Haar measure given by c_0 ω_0 on _n(). Computing the covolume of _n() goes back to Siegel <cit.>, and for this particular measure, it is given byμ_0(_n()/ _n()) = ( ∏_i=1^r i!/(2π)^i+1) ·∏_i=2^nζ(i).(To obtain this, one can for example use <cit.>; see <ref> below. For the lattice Λ = _n(), one can take P_v = _n(_v), so that e(P_v) = (q_v-1)q_v^n^2-1/∏_i=0^n-1 (q_v^n - q_v^i) = ∏_i=2^n 1/1 - q_v^-i and ∏_v ∈ V_f e(P_v) = ∏_i=2^n ζ(i).) Let Γ be a lattice of minimal covolume for μ_0 in _n() (the existence of such a lattice can be obtained using the Kazhdan-Margulis theorem, see for example <cit.>); in particular, Γ is a maximal lattice. By Margulis' arithmeticity theorem <cit.> and Rohlfs' maximality criterion <cit.> combined, there is a number field k, a place v_0 ∈ V_∞, a simply connected absolutely almost simple group G defined over k, and a parahoric subgroup P_v of G(k_v) for each v ∈ V_f, such that:* k_v_0 =, * there is an isomorphism ι : _n → G defined over k_v_0 (in particular, _n() ≅ G(k_v_0)), * the collection {P_v}_v ∈ V_f is coherent, i.e. ∏_v ∈ V_∞ G(k_v) ×∏_v ∈ V_f P_v is an open subgroup of the adele group G(_k), * ι(Γ) is the normalizer of the lattice Λ = G(k) ∩ι(Γ) in G(k_v_0), and Λ = G(k) ∩∏_v ∈ V_f P_v is the principal arithmetic subgroup determined by the collection {P_v}_v ∈ V_f.This already imposes the signature of k and of the splitting field l of the quasi-split inner formof G. Indeed, for any archimedean place v ≠ v_0, the group G(k_v) must be compact (otherwise Λ would be dense in G(k_v_0) by strong approximation). In consequence, k_v ≅ for v ∈ V_∞ - {v_0} (otherwise G(k_v) ≅_n() is not compact) and k is totally real. Note that in fact, for each v ∈ V_∞ - {v_0}, G(k_v) is isomorphic to _n(), the unique compact connected simply connected almost simple Lie group of type _n-1. 
Recall that since G is of type A, either l = k or l is a quadratic extension of k. Regardless, if v ∈ V_∞ - {v_0}, it may not be that l embeds into k_v: indeed, if this happens, thensplits over k_v, and thus G would be an inner k_v-form of _n. This prohibits G(k_v) from being compact, as inner k_v-forms of _n are isotropic when n ≥ 3. Thus, in the former case, when G is an inner k-form, it must be that V_∞ - {v_0} is empty, i.e. l = k =. In the latter case, when G is an outer k-form, for each v ∈ V_∞ - {v_0} the real embedding k → k_v extends to two (conjugate) complex embeddings of l. On the other hand, G, hence , splits over k_v_0, thus l embeds in k_v_0. Combined, we see in this case that the signature of l is (2, m-1). On G, we pick a left-invariant exterior form ω of highest degree which is defined over k. The form ω induces a left-invariant form on G(k_v_0), also to be denoted ω, which in turn induces a left-invariant form on _n() through their common Lie algebra. Let c ∈ be such that _n() has volume 1 for the Haar measure determined in this way by c ω; we denote μ the Haar measure determined by c ω on G(k_v_0). By construction, μ agrees with the measure induced from μ_0 through the isomorphism ι. In what follows, we will freely identify _n() with G(k_v_0), Γ with its image ι(Γ) and μ_0 with μ. With this, we haveμ_0(_n() / Γ) = μ(G(k_v_0) / Γ) = [Γ: Λ]^-1 μ(G(k_v_0) / Λ). § PRASAD'S VOLUME FORMULAWe fix a left-invariant exterior form ω_qs defined over k on the quasi-split inner k-formof G. As before, ω_qs induces for each v ∈ V_∞ an invariant form on (k_v), and in turn on any maximal compact subgroup of () through their common Lie algebra. (Note again that such a maximal compact subgroup can be identified with _n().) For each v ∈ V_∞, we choose c_v ∈ k_v such that the corresponding maximal compact subgroup has measure 1 for the Haar measure determined in this way by c_v ω_qs. Let φ: G → be an isomorphism, defined over some Galois extension K of k, such that φ^-1∘^γφ is an inner automorphism of G for all γ in the Galois group of K over k. Then φ induces an invariant form ω^* = φ^* (ω_qs) on G, defined over k. Once again, ω^* induces for each v ∈ V_∞ a form on G(k_v) and then a form on any maximal compact subgroup of G() through their Lie algebras. It turns out <cit.> that the volume of any such maximal compact subgroup for the Haar measure determined in this way by c_v ω^* is 1. This implies in particular that the Haar measure determined on G(k_v_0) by c_v_0ω^* is actually the measure μ that we constructed earlier. For each v ∈ V_∞, we endow G(k_v) with the Haar measure μ_v determined by c_v ω^*. As we observed, μ_v_0 = μ, and for v ∈ V_∞ - {v_0}, G(k_v) is compact, hence μ_v(G(k_v)) = 1 by definition of μ_v.The product G_∞ = ∏_v ∈ V_∞ G(k_v) is then endowed with the product measure μ_∞ = ∏_v ∈ V_∞μ_v.The lattice Λ embeds diagonally in G_∞; we will abusively denote its image by Λ as well. If F is a fundamental domain for Λ in G(k_v_0), then F_∞ = F ×∏_v ∈ V_∞ - {v_0} G(k_v) is a fundamental domain for Λ in G_∞. Thereforeμ_∞(G_∞ / Λ) = μ_∞(F_∞) = μ_v_0(F) ·∏_v ∈ V_∞ - {v_0}μ(G(k_v)) = μ(G(k_v_0) / Λ). Using this observation, the main result from <cit.> allows us to compute μ(G(k_v_0)/ Λ) = D_k^1/2 G (D_l / D_k^[l:k])^1/2()( ∏_i=1^r i!/(2π)^i+1)^[k:]∏_v ∈ V_f e(P_v). 
VHere, l is the splitting field of the quasi-split inner k-formof G (l is k or a quadratic extension of k), r=n-1 is the absolute rank of G, () = 0 ifis split, otherwise () = 1/2r(r+3) if r is even or () = 1/2(r-1)(r+2) if r is odd, and e(P_v) = q_v^(M_v + _v)/2/#M_v(_v) is the inverse of the volume of P_v for a particular measure.We refer to <cit.> for the unexplained notation (in the present setting, S = V_∞ consists only of real places). § AN UPPER BOUND ON THE INDEXFor the convenience of the reader, we briefly recollect the upper bound on the index [Γ: Λ] developed by Borel and Prasad. The complete exposition, proofs and references are to be found in <cit.> (in the present setting, = {v_0}, G'=G, Γ' = Γ, etc.). For each place v ∈ V_f, we fix a maximal k_v-split torus T_v of G; we also fix an Iwahori subgroup I_v of G(k_v) such that the chamber in the affine building of G(k_v) fixed by I_v is contained in the apartment corresponding to T_v. We denote by Δ_v the basis determined by I_v of the affine root system of G(k_v) relative to T_v. (G(k_v)), hence also the adjoint group G(k_v), acts on Δ_v; we denote by ξ_v: G(k_v) →(Δ_v) the corresponding morphism. Let Ξ_v be the image of ξ_v. Let C be the center of G and φ: G →G the natural central isogeny, so that there is an exact sequence of algebraic groups1 → C → G G→ 1.This sequence gives rise to long exact sequences (of pointed sets), which we store in the following commutative diagram (v ∈ V). 1 rC(k)rdG(k) rφd G(k) rδd ^1(k,C) rd ^1(k,G) d1 rC(k_v)rG(k_v) rφ G(k_v) rδ_v ^1(k_v,C) r ^1(k_v,G) L_vWhen v ∈ V_f, we have that ^1(k_v, G) = 1 by a result of Kneser <cit.> and thus δ_v induces an isomorphism G(k_v) / φ(G(k_v)) ≅^1(k_v, C).Recall that ξ_v is trivial on φ(G(k_v)). Thus ξ_v induces a map ^1(k_v, C) →Ξ_v, which we abusively denote by ξ_v as well.Let Δ = ∏_v ∈ V_fΔ_v, Ξ = ⊕_v ∈ V_fΞ_v and Θ = ∏_v ∈ V_fΘ_v, where Θ_v ⊂Δ_v is the type of the parahoric P_v associated to Λ. Ξ acts on Δ componentwise, and we denote by Ξ_Θ_v the stabilizer of Θ_v in Ξ_v and Ξ_Θ the stabilizer of Θ in Ξ.The morphisms ξ_v induce a mapξ: ^1(k,C) →Ξ: c ↦ξ(c) = (ξ_v(c_v))_v ∈ V_fwhere c_v denotes the image of c in ^1(k_v, C). With this, we define^1(k,C)_Θ = { c ∈^1(k,C) |ξ(c) ∈Ξ_Θ} ^1(k,C)'_Θ = { c ∈^1(k,C)_Θ| c_v_0 = 1 } ^1(k,C)_ξ ={ c ∈^1(k,C) |ξ(c) = 1 }.Borel and Prasad <cit.> use the exact sequence due to Rohlfs1 → C(k_v_0) / (C(k) ∩Λ) →Γ / Λ→δ(G(k)) ∩^1(k,C)'_Θ→ 1.Since k_v_0 =, C(k_v_0) = {1} or {1,-1} depending whether n is odd or even. In particular, it follows that C(k_v_0) = C(k) ∩Λ and Γ / Λ≅δ(G(k)) ∩^1(k,C)'_Θ.Also, it is clear that the kernel of ξ restricted to δ(G(k)) ∩^1(k,C)'_Θ is contained in δ(G(k)) ∩^1(k,C)_ξ, implying that #( δ(G(k)) ∩^1(k,C)'_Θ) ≤#( δ(G(k)) ∩^1(k,C)_ξ) ·∏_v ∈ V_f#Ξ_Θ_v, and in turn,[Γ: Λ] ≤#(δ(G(k)) ∩^1(k,C)_ξ) ·∏_v ∈ V_f#Ξ_Θ_v≤#^1(k,C)_ξ·∏_v ∈ V_f#Ξ_Θ_v. IIn the next two subsections, we try to control the size of δ(G(k)) ∩^1(k,C)_ξ. We distinguish the case where G is an inner k-form of _n from the case G is an outer k-form. For the former, we follow the argument of <cit.>. In the latter, we will adapt to our setting a refinement of the bounds of Borel and Prasad due to Mohammadi and Salehi Golsefidy <cit.>. Except for minor modifications, all the material in this section can be found in these two sources.§.§ The inner caseAlthough in the inner case we have already established that k =, we will discuss it for an arbitrary (totally real) field k, as this will be useful to treat the outer case as well. 
Let us thus assume G is an inner k-form, i.e. (by the classification) G is isomorphic to _n' for some central division algebraover k of index d = n/n'. Similarly, over k_v, G is isomorphic to _n_v_v for some central division algebra _v over k_v of index d_v = n / n_v. The center C of G is isomorphic to μ_n, the kernel of the map _1 →_1: x ↦ x^n, and thus for any field extension K of k, ^1(K, C) may (and will in this paragraph) be identified with K^× / K^× n (where K^× n = { x^n | x ∈ K^×}). With this identification, the canonical map ^1(k,C) →^1(k_v, C) corresponds to the canonical map k^× / k^× n→ k_v^× / k_v^× n. The action of ^1(k_v, C) on Δ_v can be described as follows: Δ_v is a cycle of length n_v, on which G(k_v) acts by rotations, i.e. Ξ_v can be identified with / n_v. The action of ^1(k_v, C) is then given by the morphism k_v^× / k_v^× n→/ n_v: x ↦ v(x)n_v.From this description, we see that x ∈ k_v^× / k_v^× n acts trivially on Δ_v precisely when v(x) ∈ n_v; in particular, if G splits over k_v, x acts trivially if and only if v(x) ∈ n. We can form the exact sequence1 → k_n / k^× n→^1(k,C)_ξ⊕_v ∈ V_f / n ,where k_n = { x ∈ k^×| v(x) ∈ n for allv ∈ V_f}. By the above, the image of ^1(k,C)_ξ lies in the subgroup ⊕_v ∈ V_f n_v/ n.Let T be the set of places v ∈ V_f where G does not split over k_v, i.e. for which n_v ≠ n. Then the exact sequence yields #^1(k,C)_ξ≤#(k_n / k^× n) ·∏_v ∈ T d_v.The proof of <cit.> shows that #(k_n / k^× n) ≤ h_k ñ n^[k:]-1, where ñ = 1 or 2 if n is respectively odd or even. In the case k =, which will be of interest later, it is indeed clear that #(_n / ^× n) = ñ.§.§ The outer caseSecond, we assume G is an outer k-form. The centers of G and of the quasi-split inner formof G are k-isomorphic, hence there is an exact sequence1 → C →_l/k(μ_n) μ_n → 1,where μ_n denotes the kernel of the map _1 →_1: x ↦ x^n as above, _l/k denotes the restriction of scalars from l to k, and N is (induced by) the norm map of l/k. The long exact sequence associated to it yields1 →μ_n(k) / N(μ_n(l)) →^1(k,C) → l_0 / l^× n→ 1where l_0 / l^× n denotes the kernel of the norm map N: l^× / l^× n→ k^× / k^× n. The Hasse principle for simply connected groups allows us to writeG(k) rδd ^1(k,C) rd ^1(k,G) d90∼ ∏_v ∈ V_∞G(k_v) r(δ_v)_v ∏_v ∈ V_∞^1(k_v,C) r ∏_v ∈ V_∞^1(k_v,G). If n is odd, we can make the following simplifications: μ_n(k) = {1} and thus ^1(k,C) ≅ l_0 / l^× n in (<ref>); using the analogous sequence for k_v, we also have ^1(k_v,C) ≅{1} for v ∈ V_∞. Thus, in (<ref>), we read that δ is surjective and conclude δ(G(k)) ≅ l_0 / l^× n. If n is even, a weaker conclusion holds provided l has at least one complex place, i.e. if V_∞≠{v_0}. Indeed, if v_1 ∈ V_∞ - {v_0}, so that l _k k_v_1 =, then (l _k k_v_1)^× / (l _k k_v_1)^× n = {1} and the long exact sequences associated to (<ref>) read1 r {± 1}rd90∼ ^1(k,C) rdl_0 / l^× nrd1 1 r {± 1}r∼ ^1(k_v_1,C) r1 r 1.The first row splits, and thus we may identify ^1(k,C) ≅{± 1}⊕ l_0 / l^× n; then l_0 / l^× n is precisely the kernel of the canonical map ^1(k,C) →^1(k_v_1,C). Now since the adjoint map G(k_v_1) →G(k_v_1) is surjective (recall that G(k_v_1) ≅_n()), we have in (L_v_1) that the image of G(k) in ^1(k_v_1,C) is trivial, hence δ(G(k)) ⊂ l_0 / l^× n. If n is even and V_∞ = {v_0}, then k =. 
We have, for each v ∈ V_f,1 r μ(k)/N(μ(l))rd ^1(k,C) rdl_0 / l^× nrd1 1 r μ(k_v)/N(μ_n(lk_v))r ^1(k_v,C) r (N: lk_v → k_v)/(l k_v)^× nr 1.We observe that μ(k) / N(μ(l)) (≅{± 1}) acts trivially on Δ_v for every v ∈ V_f (see for example <cit.>), hence the action factors through l_0 / l^× n. Thus # H^1(k,C)_ξ = 2 ·# l_ξ / l^× n, where l_ξ/ l^× n = { x ∈ l_0/ l^× n|ξ(x) = 1}, so that the bound we establish below will hold with an extra factor ñ in the case k =. It remains to understand the action of l_0 / l^× n on Δ. Let x ∈ l and let(x) = ∏_^i_^i_·∏_''^i_'·∏_””^i_”be the unique factorization of the fractional ideal of l generated by x, where (, ) (resp. ', ”) runs over the set of primes of l that lie over primes of k that split over l (resp. over inert primes of k, over ramified primes of k). When x ∈ l_0, N(x) ∈ k^× n and thus n divides i_ + i_, 2i_' and i_”. Observe that v ∈ V_f splits over l if and only if l embeds into k_v, that is, if and only if ( splits over k_v and) G is an inner k_v-form of _n. In particular, at such a place v, G is isomorphic to _n_v_v for some central division algebra _v over k_v of index d_v = n / n_v.In <cit.>, it is shown that when v splits asover l, the action of x ∈ l_0 is analogous to the inner case described in <ref>, hence x acts trivially on Δ_v if and only if n divides d_v i_ (and thus n also divides d_v i_), i.e. v_ (x) = 0n_v (and v_(x) = 0n_v).When v is inert, say v corresponds to ', then x acts trivially on Δ_v if and only if n divides i_' <cit.>. Let T be the set of places v ∈ V_f such that v splits over l and G is not split over k_v, and let T^l be a subset of the finite places of l consisting of precisely one extension of each v ∈ T, so that restriction to k defines a bijection from T^l to T. By the discussion above, we can form an exact sequence1 → (l_n ∩ l_0) / l^× n→ l_ξ / l^× n⊕_w ∈ T^l / n,where l_n = { x ∈ l^×| w(x) ∈ n for each normalized finite place w of l} and l_ξ / l^× n = { x ∈ l_0/ l^× n|ξ(x) = 1}. Moreover, the image of l_ξ / l^× n lies in the subgroup ⊕_w ∈ T^l n_v/ n. Thus, if we assume k ≠ (so that we may identify δ(G(k)) with a subgroup of l_0 / l^× n), #( δ(G(k)) ∩^1(k,C)_ξ) ≤#(l_ξ / l^× n) ≤#( (l_n ∩ l_0) / l^× n) ·∏_v ∈ T d_v.We get the concrete bound on the index [Γ: Λ] ≤ h_l ñ^m n ·∏_v ∈ T d_v ·∏_v ∈ V_f#Ξ_Θ_vby combining this with (<ref>) and lemma <ref>. If k =, we have instead[Γ: Λ] ≤ h_l ñ^m+1 n ·∏_v ∈ T d_v ·∏_v ∈ V_f#Ξ_Θ_v.§ THE FIELD K IS We set m= [k:] and as before, n=r+1. The purpose of this section is to show that k =, i.e. m = 1. We start by recalling that if P_v is special (in particular, if it is hyperspecial), i.e. Θ_v consists of a single special (resp. hyperspecial) vertex of Δ_v, then Ξ_Θ_v is trivial. Regardless of the type Θ_v, we have #Ξ_Θ_v≤ñ unless G is an inner k_v-form of _n (say G ≅ SL_n_v (_v)), in which case #Ξ_Θ_v≤#Δ_v = n_v, where n_v-1 is the rank of G over k_v. (For example, this can be seen explicitly on all the possible relative local Dynkin diagrams Δ_v for G(k_v), enumerated in <cit.> or <cit.>. In the inner case, the Dynkin diagram is a cycle on which the adjoint group acts as rotations.) By a result of Kneser <cit.>, G is quasi-split over the maximal unramified extension k_v of k_v for any v ∈ V_f. This means that over k_v, G is isomorphic to . 
The quasi-split k-forms of simply connected absolutely almost simple groups of type A_n-1 are well understood <cit.>: either ≅_n, or ≅_n,l, the special unitary group associated to the split hermitian form on l^n, where l is a quadratic extension of k equipped with the canonical involution (incidentally, l is the splitting field of _n,l, in accordance with the notation introduced). Thus, over k_v, only these two possibilities arise for G. (Nonetheless,might split over k_v; in fact, it does so except at finitely many places.) In particular, the rank r_v of G over k_v is either r, or the ceiling of r/2.§.§ The inner caseThe case where G is an inner k-form of _n (i.e. when l=k) has been treated in section <ref>. We observed that if G is an inner k_v-form of _n for some v ∈ V_∞, then G(k_v) cannot be compact. This forced V_∞ = {v_0} and thus k =.§.§ The outer caseHere we settle the case where G is an outer k-form of _n, i.e. when [l:k] = 2. We observed in section <ref> that l has two real embeddings (extending k → k_v_0) and m-1 pairs of conjugate complex embeddings. Suppose that m > 1.Let T be the finite set of places v ∈ V_f such that v splits over l and G is not split over k_v. Then, according to section <ref>, we have[Γ: Λ] ≤ h_l ñ^m n ·∏_v ∈ T d_v ·∏_v ∈ V_f#Ξ_Θ_vwhere ñ = 1 or 2 if n is odd or even, and h_l denotes the class number of l. Combined with (<ref>), we find (abbreviating V_n = ∏_i=1^n-1i!/(2π)^i+1)μ(G(k_v_0)/ Γ) ≥ñ^-m n^-1 h_l^-1 D_k^n^2 -1/2 (D_l / D_k^2)^1/2() V_n^m·∏_v ∈ T d_v^-1·∏_v ∈ V_f#Ξ_Θ_v^-1·∏_v ∈ V_f e(P_v). We use <cit.> and the observations made at the begining of section <ref> to study the local factors of the right-hand side.* If v ∈ T, then we use e(P_v) ≥ (q_v-1) q_v^(n^2 -n^2 d_v^-1 - 2)/2 to obtain d_v^-1·#Ξ_Θ_v^-1· e(P_v) ≥ n^-1· (q_v-1) q_v^n^2 /4 -1 > 1 when n ≥ 4. When n = 3, then d_v = 3 and we also have d_v^-1·#Ξ_Θ_v^-1· e(P_v) ≥ n^-1· (q_v-1) q_v^n^2 /3 -1 > 1 (lemma <ref>). * If v ∉ T but P_v is special, then #Ξ_Θ_v = 1 and e(P_v) > 1, thus #Ξ_Θ_v^-1· e(P_v) > 1. * If v ∉ T, P_v is not special and G is not split over k_v, then we use that e(P_v) ≥ (q_v+1)^-1 q_v^r_v+1 to obtain #Ξ_Θ_v^-1· e(P_v) ≥ñ^-1· (q_v+1)^-1 q_v^(n-1)/2+1 > 1 (lemma <ref>).* If v ∉ T, P_v is not special but G splits over k_v, then P_v is properly contained in a hyperspecial parahoric H_v. There is a canonical surjection H_v →_n(_v), under which the image of P_v is the proper parabolic subgroup P_v of _n(_v) whose type consists of the vertices belonging to the type of P_v in the Dynkin diagram obtained by removing the vertex corresponding to H_v in the affine Dynkin diagram of G(k_v). In particular, it follows that [H_v: P_v] = [_n(_v) : P_v] and we may compute using lemma <ref>e(P_v) = [H_v: P_v] · e(H_v) > [H_v: P_v] > q^n-1.Hence #Ξ_Θ_v^-1· e(P_v) >n^-1 q^n-1 > 1.Multiplying all the factors together, we have that ∏_v ∈ T d_v^-1·∏_v ∈ V_f#Ξ_Θ_v^-1·∏_v ∈ V_f e(P_v) > 1and we can thus write μ(G(k_v_0)/ Γ) > ñ^-m n^-1 h_l^-1 D_k^n^2 -1/2 (D_l / D_k^2)^1/2() V_n^m.Recall that D_l / D_k^2 is the norm of the relative discriminant 𝔡_l/k of l over k; in particular, D_l / D_k^2 is a positive integer. 
Note also that () ≥ 5 if n ≥ 3.We combine this with two number-theoretical bounds: from the results in <cit.>, we use that h_l^-1D_l ≥1/100( 12/π)^2m; from Minkowski's geometry of numbers, we recall (k is totally real)D_k^1/2≥m^m/m!.Altogether, we obtainμ(G(k_v_0)/ Γ)> 1/100 ñ^m( 12/π)^2m D_k^n^2 -5/2 (D_l / D_k^2)^1/2() -1 V_n^m n^-1≥1/100ñ^m( 12/π)^2m( m^m/m!)^n^2 -5 V_n^m n^-1.We consider the function M: ×→ defined byM(m,n) = 1/100 ñ^m( 12/π)^2m( m^m/m!)^n^2 -5( ∏_i=1^n-1i!/(2π)^i+1)^m-1 n^-1.M is strictly increasing in both variables, provided m ≥ 2 and n ≥ 6 (lemma <ref>). In consequence, if m ≥ 2, n ≥ 9, μ(G(k_v_0)/ Γ)/μ(_n()/ _n()) > M(m,n)/∏_i=2^nζ(i) > M(2,9)/∏_i=2^∞ζ(i) >1,cf. lemma <ref>, and Γ is not of minimal covolume. In a similar manner, we would like to show that m cannot be large. To this end, Odlyzko's bounds on discriminants <cit.> are well-suited. We have D_k^1/2 > A^m · E,with A = 29.534^1/2 and E = e^-4.13335.Combining with (<ref>), we obtainμ(G(k_v_0)/ Γ)> 1/100ñ^m( 12/π)^2m( A^m E )^n^2 -5 V_n^m n^-1.We consider the function M': ×→ defined by M'(m,n) = 1/100 ñ^m( 12/π)^2m( A^m E )^n^2 -5( ∏_i=1^n-1i!/(2π)^i+1)^m-1 n^-1.M' is also strictly increasing in both variables, provided m ≥ 4 and n ≥ 4 (lemma <ref>). This means that if m ≥ 6, n ≥ 4, μ(G(k_v_0)/ Γ)/μ(_n()/ _n()) > M'(m,n)/∏_i=2^nζ(i) > M'(6,4)/∏_i=2^∞ζ(i) >1,(cf. table <ref> and lemma <ref>) and Γ is not of minimal covolume. We may thus restrict our attention to the range 4 ≤ n ≤ 8 and 2 ≤ m ≤ 5 (we will treat the case n=3 with a separate argument at the end of this section). By further sharpening our estimates on the discriminant, we will show that all these values are excluded as well, forcing m=1. From the bound (<ref>) and the estimate μ(G(k_v_0)/ Γ) ≤μ(_n()/ _n()) < 2.3 · V_n (<ref>), we deduce an upper bound on the discriminant of k:D_k< ( 230 ñ^m( π/12)^2m (D_l / D_k^2)^1 - 1/2() V_n^1-m n )^2/n^2-5≤( 230 ñ^m( π/12)^2m V_n^1-m n )^2/n^2-5 =: C(m,n).As can be seen by comparing the values of C (table <ref>) with the smallest discriminants (table <ref>), this bound already rules out n ≥ 7. We use these two tables to obtain information about D_k. A lower bound on D_k in turn will give us a bound on the relative discriminant: using (<ref>) again, D_l / D_k^2 < ( 230 ñ^m( π/12)^2m D_k^5-n^2/2 V_n^1-m n )^2/() -2.We proceed to rule out all values of m. In what follows, unless specified otherwise, any bound on D_k is obtained using (<ref>), (<ref>) or (<ref>), and any upper bound on D_l / D_k^2 using (<ref>). Claims made on the existence of a field l satisfying certain conditions are always made with the underlying assumption that l is a quadratic extension of k of signature (2,m-1). m=5 gives 14641 ≤ D_k ≤ 15627 (and n=4). A quick look in the online database of number fields <cit.> shows [The database <cit.> provides a certificate of completeness for certain queries. All allusions made here refer to searches that are proven complete.However, it is important to note that in <cit.>, class numbers are computed assuming the generalized Riemann hypothesis (the rest of the data being unconditional). The class numbers referred to in this paper were therefore all verified using PARI/GP'scommand. A PARI/GP script of this process is available on the author's page (http://www.math.ucsd.edu/ fthilman/research/mincovsln/classnumberscertificate.html).]that there is only one such field (with D_k = 14641). 
Now for l, Odlyzko's bound <cit.> reads D_l > (29.534)^2 · (14.616)^8 · e^-8.2667≥ 4.66756 · 10^8and in particular, we compute that D_l/ D_k^2 ≥ 2.177 (hence D_l / D_k^2 ≥ 3). On the other hand, (<ref>) yields D_l / D_k^2 < 1.271,ruling out this case.m=4 gives 725 ≤ D_k ≤ 1741 (and n = 4). A quick look in the database <cit.> shows that there are three fields satisfying this requirement, with discriminants respectively 725, 1125, 1600. * If D_k = 1600, then D_l / D_k^2 < 1.365, hence D_l = D_k^2 = 2560000. But, as observed in the database, there are no fields l of signature (2,3) with D_l ≤ 3950000. Unfortunately, the database has no complete records for fields with signature (2,3) and discriminants past 3950000. We will thus need to refine our bounds to be able to treat the two other possible values for D_k. First, we go back to our bound on the class number h_l: as in <cit.>, we use Zimmert's bound R_l ≥ 0.04 · e^2 · 0.46 + (m-1) · 0.1 on the regulator of l along with the Brauer-Siegel theorem (with s=2) to deduceh_l ≤ 100 · e^- 0.82 - 0.1 · m· (2π)^-2m·ζ(2)^2m·D_l≤ 29.523 ·( π/12)^8 · D_l.Using this, we may rewrite the bound (<ref>) as D_l / D_k^2 < ( 67.9029 ñ^4( π/12)^8 D_k^5-n^2/2 V_n^-3 n )^2/() -2.* If D_k = 1125, then our new bound yields D_l / D_k^2 ≤ 2, hence D_l ≤ 2 D_k^2 = 2531250 and this is ruled out by the database. * If D_k = 725, then our new bound yields D_l / D_k^2 ≤ 11, hence D_l ≤ 11 D_k^2 = 5781875. Selmane <cit.> has computed all fields of signature (2,3) that possess a proper subfield and have discriminant D_l ≤ 6688609. It turns out that among those, only the field with discriminant -5781875 can be an extension of k. As observed in the online database, this field has class number 1. Substituting this information in (<ref>), we see that the right-hand side exceeds 2.3 · V_n. m=3 gives 49 ≤ D_k ≤ 194 (and n = 4 or 5). A quick look in the database <cit.> shows that there are four fields satisfying this requirement, with discriminants respectively 49, 81, 148, 169.* If D_k = 169, then D_l / D_k^2 < 1.661 hence D_l = D_k^2 = 28561. There are no fields l with D_l ≤ 28000. * If D_k = 148, then D_l / D_k^2 ≤ 2. There are no fields l with D_l / 148^2 = 1 or 2. * If D_k = 81, then D_l / D_k^2 ≤ 24. An extensive search in the database shows that this can only be satisfied by one field l, with discriminant D_l = 81^2 · 17. It has class number h_l = 1, hence we may substitute this information in (<ref>) and compute that the right-hand side exceeds 2.3 · V_n. * If D_k = 49, then D_l / D_k^2 ≤ 155. An extensive search in the database shows that there are 6 fields l satisfying this condition. They correspond to D_l / D_k^2 = 13, 29, 41, 64, 97 or 113, and all have class number 1. Then, in (<ref>), the right-hand side again exceeds 2.3 · V_n (note that it suffices to check this for the smallest value of D_l/D_k^2). m=2 gives 5 ≤ D_k ≤ 21 (and 4 ≤ n ≤ 6). It is well known (and can be observed in the database <cit.>) that there are 6 fields satisfying this requirement, with discriminants respectively 5, 8, 12, 13, 17, 21. From (<ref>), we see that D_l/ D_k^2 ≤ 214, 38, 8, 6, 2, 1 respectively.* If D_k = 21 or 17, we observe that D_l ≤ 578. There are no fields with D_l ≤ 578 that can be extensions of k in these cases. * If D_k = 13, then the database exhibits only one possible field l with D_l = 13^2 · 3. This field has trivial class group, and using this information in (<ref>), we see that the right-hand side exceeds 2.3 · V_n. 
* If D_k = 12, then there are again no fields with D_l ≤ 8 D_k^2. * If D_k = 8, then there are 11 candidates l with D_l ≤ 38 · 8^2, and all have trivial class group. The one with smallest relative discriminant has D_l / D_k^2 = 7. For this field (hence for all of them), the right-hand side of (<ref>) is again too large. * If D_k = 5, there are 25 candidates l with D_l ≤ 214 · 5^2, and all have trivial class group. The one with smallest relative discriminant has D_l = 11. This field (hence all of them) is one more time excluded by (<ref>).It remains to deal with the case n=3. First, we proceed as above, using lemma <ref>, M'(16,3) ≃ 4.6751..., and ζ(2) ·ζ(3) < 1.97731 to see that μ(G(k_v_0)/ Γ)/μ(_3()/ _3()) > M'(m,3)/ζ(2) ·ζ(3) >1provided m ≥ 16. Hence we may restrict our attention to the range 2 ≤ m ≤ 15. Unfortunately, this bound on the degree of k is too large to allow us to work with a number field database. Of course, the reason this bound is large is that the powers of D_k and D_l appearing in (<ref>) are very small. In turn, the bound we used for the class number h_l was very greedy in terms of D_l, aggravating the situation. In fact,we can use (<ref>) and one of Odlyzko's bounds <cit.> for D_l to obtain a lower bound on h_l:h_l ≥D_k^-1 D_l^5/2 V_3^m-1/3 ·ζ(2) ·ζ(3)≥D_l^2 V_3^m-1/3 ·ζ(2) ·ζ(3) > (25.465^2 · 13.316^2m-2· e^-7.0667)^2 · V_3^m-1/3 ·ζ(2) ·ζ(3).We record the values of this bound in table <ref> (for small values of m, we used the actual minimum for D_l to obtain this lower bound for h_l).To solve this issue, we use the following trick. The Hilbert class field L of l has degree [L: ] = 2 m h_l, signature (2h_l, (m-1) h_l) and discriminant D_L = D_l^h_l. Hence, when the class number is large, we can use Odlyzko's bounds <cit.> for D_L in order to improve our bounds on D_l. Namely, we haveD_l = D_L^1/h_l > 60.015^2· 22.210^2m-2· e^-80.001/h_l.We record this bound for D_l in table <ref>. Now using D_l ≥ D_k^2, we may rewrite (<ref>) asζ(2) ·ζ(3) · V_3 > μ(G(k_v_0)/ Γ) > 1/300( 12/π)^2m D_l · V_3^mand check that this inequality contradicts the bound in table <ref> as soon as m ≥ 4. For m=3 and m=2, the bound reads respectively D_l ≤ 4578732 and D_l ≤ 13643. Finally, to treat the remaining two cases, we can use the online database <cit.>. If m=3, we observe that all fields of signature (2,2) with discriminant D_l ≤ 4578732 have class number either h_l = 1 or h_l = 2; this contradicts (<ref>) and table <ref>. Similarly, if m=2, we observe in the database that all fields of signature (2,1) with discriminant D_l ≤ 13643 also have class number either h_l = 1 or h_l = 2. This is again a contradiction to (<ref>) and table <ref>. 
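For readers who want to reproduce the numerology of the case analysis above, the following Python sketch evaluates V_n = ∏_{i=1}^{n-1} i!/(2π)^{i+1} and the discriminant bound C(m,n) as they are defined earlier in this section; it is only an illustrative aid, and the function names V and C are ad hoc choices rather than notation from the text.

from math import pi, factorial

def V(n):
    # V_n = prod_{i=1}^{n-1} i! / (2*pi)^(i+1), as in the notation section.
    out = 1.0
    for i in range(1, n):
        out *= factorial(i) / (2 * pi) ** (i + 1)
    return out

def C(m, n):
    # C(m, n) = (230 * ntilde^m * (pi/12)^(2m) * V_n^(1-m) * n)^(2/(n^2 - 5)),
    # with ntilde = 1 for n odd and 2 for n even, as in the upper bound on D_k above.
    ntilde = 2 if n % 2 == 0 else 1
    base = 230 * ntilde ** m * (pi / 12) ** (2 * m) * V(n) ** (1 - m) * n
    return base ** (2.0 / (n ** 2 - 5))

print(C(5, 4))   # roughly 1.56e4, consistent with the range 14641 <= D_k <= 15627 used for m = 5
print(C(4, 4))   # roughly 1.74e3, consistent with the range 725 <= D_k <= 1741 used for m = 4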
Below is a summary of the various discriminant bounds that were used in this section to exclude a given couple (m,n) from giving rise to a lattice of minimal covolume.[scale=0.8] (Origin) at (0,0); (XAxisMin) at (-1,0); (XAxisMax) at (17,0); (YAxisMin) at (0,-8); (YAxisMax) at (0,1); [thick,black] (XAxisMin) – (XAxisMax); [thick,black] (YAxisMin) – (YAxisMax); [thick,black] (-1,1) – (0,0); at (0,1) [below left,inner sep=3pt] m; at (-1,0) [above right,inner sep=3pt] n;at (-0.5,-0.5) 3; at (-0.5,-1.5) 4; at (-0.5,-2.5) 5; at (-0.5,-7.5) 10;at (0.5,0.5) 1; at (1.5,0.5) 2; at (2.5,0.5) 3; at (3.5,0.5) 4; at (4.5,0.5) 5; at (9.5,0.5) 10; at (14.5,0.5) 15; [pattern = north east lines] (1.1,-5.9) – (1.1,-3.8) – (1.9,-3.8) – (1.9,-2.8) – (2.9,-2.8) –(2.9,-2.1) – (3.9,-2.1) – (3.9,-5.9) – cycle; [white] (1.4,-5.6) – (1.4,-4.1) – (2.2,-4.1) – (2.2,-3.1) – (3.2,-3.1) –(3.2,-2.4) – (3.6,-2.4) – (3.6,-5.6) – cycle;[rotate=90] at (1.5,-5) <ref>;in 1,2,...,17 in -1,-2,...,-8[draw,circle,inner sep=1pt,fill] at (-0.5,+0.5) ; (1,-8) – (1,-6) – (2,-6) – (2,-4) – (5,-4) – (5,-3) – (11,-3) – (11,-2) – (17,-2) – (17,-2.4) – (11.4,-2.4) – (11.4, -3.4) – (5.4,-3.4) – (5.4,-4.4) – (2.4, -4.4) – (2.4,-6.4) – (1.4, -6.4) – (1.4, -8) – cycle; [pattern=north west lines] (1.1,-10) – (1.1,-6.1) – (2.1,-6.1) – (2.1,-4.1) – (5.1,-4.1) – (5.1,-3.1) – (11.1,-3.1) – (11.1,-2.1) – (17.1,-2.1)–(17.1,-3.1);[rotate=90] at (2,-7.5) Minkowski; (3,-8) – (3,-3) – (4,-3) – (4,-2) – (5,-2) – (5,-1) – (14,-1) – (14,0) – (17,0) – (17,-0.4) – (15.4, -0.4) – (15.4,-1.4) – (5.4,-1.4) – (5.4,-2.4) – (4.4,-2.4) – (4.4,-3.4) – (3.4,-3.4) – (3.4,-8) – cycle;[pattern = north west lines] (3.1,-10) – (3.1,-3.1) – (4.1,-3.1) – (4.1,-2.1) – (5.1,-2.1) – (5.1,-1.1) – (15.1,-1.1) – (15.1,-0.1) – (17.1,-0.1) – (17.1,-2.1); at (8.5,-2) Odlyzko;(0.1,-8) – (0.1,-0.1) – (0.9,-0.1) – (0.9,-8); [rotate=90] at (0.5,-4.95) (sections <ref> and <ref>);[pattern = north west lines] (1.5,-3.5) circle (0.2) (1.5,-2.5) circle (0.2) (1.5,-1.5) circle (0.2) (1.5,-0.5) circle (0.2) (2.5,-2.5) circle (0.2) (2.5,-1.5) circle (0.2) (2.5,-0.5) circle (0.2) (3.5,-1.5) circle (0.2) (4.5,-1.5) circle (0.2); at (2,-2) Case by case; [pattern=north east lines] (3.1,-0.9) – (3.1,-0.1) – (14.9,-0.1) – (14.9,-0.9) – cycle;at (8.5,-0.5) Class field + Odlyzko; § G IS AN INNER FORM OF _NThe purpose of this section is to show that G is an inner k-form of _n, i.e. thatsplits over k. Let us thus suppose, for contradiction, that [l : k] > 1. We have shown in section <ref> that k=, so that the bounds (<ref>) and (<ref>) obtained in <ref> can be adapted as follows: (the extra factor ñ is due to the correction in the index bound when k=, cf. section <ref>)μ(G(k_v_0)/ Γ)> ñ^-2 n^-1 h_l^-1 D_l^1/2() V_n ≥1/100 ñ^2( 12/π)^2 D_l^1/2() -1 V_n n^-1. First, let us assume that h_l ≠ 1. Since l is totally real, this implies D_l ≥ 40. Note that () ≥1/2(r^2+r-2) = 1/2(n^2 - n -2). Thereforeμ(G(k_v_0)/ Γ) > 1/100ñ^2( 12/π)^2 40^1/4(n^2-n-6) V_n n^-1. We consider the function N: → defined byN(n) = 1/100 ñ^2( 12/π)^2 40^1/4(n^2-n-6) n^-1.N is strictly increasing, provided n ≥ 2 (lemma <ref>). In consequence, if n ≥ 4, then N(n) ≥ N(4) ≃ 2.30692... and thus μ(G(k_v_0)/ Γ)/μ(_n()/ _n()) > N(n)/∏_i=2^nζ(i) > N(4)/∏_i=2^∞ζ(i) >1,hence Γ is not of minimal covolume. For n=3 we notice that () = 5, so thatμ(G(k_v_0)/ Γ) > 1/300( 12/π)^2 40^3/2· V_3> 12.3035 · V_3and Γ is not of minimal covolume. 
Second, if h_l = 1, then at least D_l ≥ 5 and we may consider the function N': → defined by N'(n) = ñ^-2 n^-1 5^1/4(n^2 - n - 2).N' is strictly increasing (lemma <ref>) and N'(4) ≃ 3.49385..., thusμ(G(k_v_0)/ Γ)/μ(_n()/ _n()) > N(n)/∏_i=2^nζ(i) > N(4)/∏_i=2^∞ζ(i) >1,and Γ is not of minimal covolume. For n=3, we use again that () = 5 to see thatμ(G(k_v_0)/ Γ) > 1/3· 5^5/2· V_3> 18.6338 · V_3and Γ is not of minimal covolume. This forces l = k and G to be an inner form.§ THE PARAHORICS P_V ARE HYPERSPECIAL AND G SPLITS AT ALL PLACESSo far, we have established that k = l = and G is an inner k-form of _n; thus, G is isomorphic to _n' for some central division algebraover k of index d = n/n'. Similarly, over k_v, G is isomorphic to _n_v_v for some central division algebra _v over k_v of index d_v = n / n_v. Recall that T is the finite set of places v ∈ V_f where G does not split over k_v, and let T' be the finite set of places v ∈ V_f where P_v is not a hyperspecial parahoric; of course, T ⊂ T'. The goal of this section is to show that T' is empty. According to section <ref>, we have#^1(k,C)_ξ≤ñ·∏_v ∈ T d_v,with d_v ≥ 2 if v ∈ T.Also, as we noted at the begining of section <ref>, #Ξ_Θ_v≤ n_vif v ∈ T,#Ξ_Θ_v≤ r+1 = nif v ∈ T',#Ξ_Θ_v = 1otherwise.Combined with (<ref>) and (<ref>), we obtainμ(G(k_v_0)/ Γ)≥ñ^-1 V_n ·∏_v ∈ T d_v^-1·∏_v ∈ T n_v^-1·∏_v ∈ T' - T n^-1·∏_v ∈ V_f e(P_v) = ñ^-1 V_n ·∏_v ∈ T' n^-1·∏_v ∈ V_f e(P_v). Recall that for any v ∈ V_f, e(P_v) > 1. If v ∈ T, then according to <cit.>, we have e(P_v) ≥ (q_v-1)q_v^1/2(n^2 - n^2 d_v^-1 -2)≥ (q_v-1) q_v^1/4n^2 - 1.Now if T is not empty, then by looking at the Hasse invariant of , it appears that d_v ≥ 2 for at least two (finite) places. This means that T has at least two elements, and using lemma <ref>, we see that if n≥ 4,∏_v ∈ T n^-1 e(P_v) ≥ (n^-1 (2 -1) ·2^1/4n^2 - 1) · (n^-1 (3-1) · 3^1/4n^2 - 1) ≥ 27.If n=3, then actually d_v = 3 for at least two (finite) places, and ∏_v ∈ T n^-1 e(P_v) ≥ (n^-1 (2 -1) ·2^1/3n^2 - 1) · (n^-1 (3-1) · 3^1/3n^2 - 1) = 8.In particular, it is clear from (<ref>) that Γ is not of minimal covolume. Hence it must be that T is empty and G splits everywhere. On the other hand, if v ∈ T' - T, then P_v is properly contained in a hyperspecial parahoric H_v. As discussed previously, there is a canonical surjection H_v →_n(_v), under which the image of P_v is the proper parabolic subgroup P_v of _n(_v) whose type consists of the vertices belonging to the type of P_v in the Dynkin diagram obtained by removing the vertex corresponding to H_v in the affine Dynkin diagram of G(k_v). In particular, it follows that [H_v: P_v] = [_n(_v) : P_v] and thus using lemma <ref>,e(P_v) = [H_v: P_v] · e(H_v) ≥ q_v^n-1· e(H_v).Of course, as G splits everywhere, we have that e(H_v) is equal to the corresponding factor e(_n(_v)) = ∏_i=2^n 1/1 - q_v^-i for _n(_v). In consequence,μ(G(k_v_0)/ Γ)/μ(_n()/ _n())≥ñ^-1∏_v ∈ T' n^-1·∏_v ∈ V_f e(P_v)/∏_v ∈ V_f e(_n(_v))≥ñ^-1∏_v ∈ T' (n^-1 q_v^n-1) ≥ 1with equality only if n = 4, T' = {2} and #Ξ_Θ_2 = 4. Notice however that this bound is rather rough; by examining the types of the parahorics carefully, one obtains much better bounds. For example, to achieve #Ξ_Θ_v = n, P_v must be an Iwahori subgroup, in which case [H_v: P_v] ≥ q_v^(n^2-n)/2 in lemma <ref>. This rules out the equality case above and thus T' must be empty as well. § CONCLUSION As we have shown in section <ref>, G splits over k_v for all v ∈ V_f and thus for all v ∈ V. 
As before, letbe a central division algebra over k (= ) of degree d such that G ≅_n'() over k. Now since G splits at all places, we have for any v ∈ V that G(k_v) ≅_n(k_v), or in other words, that the group of elements of reduced norm 1 in _n'() _k k_v is isomorphic to _n(k_v). This implies that _n'() _k k_v ≅_n(k_v), i.e. _v = _k k_v splits over k_v. It then follows from the AlbertBrauerHasseNoether theorem that = k and in turn G(k) ≅_n(k) and G is split over k. From hereon, we will thus identify G with _n through this isomorphism, to be denoted η. Since each parahoric P_v is hyperspecial, for each v ∈ V_f there exists g_v ∈_n(_v) such that g_v P_v g_v^-1 = _n(_v). As the family {P_v} is coherent, we may assume that g_v = 1 except for finitely many v ∈ V_f. In this way, g = (1, (g_v)_v ∈ V_f) determines an element of the adele group _n(). The class group of _n overis trivial <cit.>, therefore_n() = (_n() ×∏_v ∈ V_f_n(_v)) ·_n(),and we can write g = (1, (g'_v h)_v ∈ V_f) for g'_v ∈_n(_v) and h ∈_n(). In consequence, hP_vh^-1 = g_v'^-1_n(_v) g'_v = _n(_v), and thus h Λ h^-1 = h_n()h^-1∩∏_v ∈ V_f h P_v h^-1 = _n() ∩∏_v ∈ V_f_n(_v) = _n().In turn, h Γ h^-1 = _n(), as _n() (or equivalently Λ) is its own normalizer in _n(). One way to obtain this fact is using Rohlfs' exact sequence (see section <ref>). Indeed, clearly C(k_v_0) = C(k) ∩Λ, and on the other hand, since Λ is given by hyperspecial parahorics, we may identify^1(k,C)'_Θ = { x ∈^× / ^× n| v(x) ∈ n for v ∈ V_f, and x ∈^× n} = {1}.Hence Γ / Λ is trivial as claimed. Finally, retracing our identifications, we find that _n() is the image of Γ under the automorphism σ: _n()G(k_v_0) _n() _n() of _n() (here c_h denotes conjugation by h). This concludes the proof of the Let n ≥ 3 and let Γ be a lattice of minimal covolume for some (any) Haar measure in _n(). Then σ(Γ) = _n() for some (algebraic) automorphism σ of _n(). § APPENDIX: BOUNDS FOR SECTIONS <REF> THROUGH <REF>Let k be a totally real number field of degree m and let l be a quadratic extension of k of signature (2m_1,m_2), so that m = m_1 + m_2. Let n ∈ and set l_0 = { x ∈ l^×| N_l/k(x) ∈ k^× n} and l_n = { x ∈ l^×| w(x) ∈ n for each normalized finite place w of l}. Then #( (l_n ∩ l_0) / l^× n) ≤#( μ(l) / μ(l)^n ) ·ñ^m-1 n^m_1·#_n,where μ(l) is the group of roots of unity of l, ñ = 1 or 2 depending if n is odd or even, and _n is the n-torsion subgroup of the class groupof l. Moreover, if N_l/k is surjective from U_l onto U_k / {± 1}, then#( (l_n ∩ l_0) / l^× n) ≤#( μ(l) / μ(l)^n ) · n^m_1·#_n.According to <cit.>, there is an exact sequence1 → U_l/U_l^n → l_n / l^× n→_n → 1,where U_l denotes the group of units of the ring of integers of l, and _n is the n-torsion subgroup of the class groupof l. Intersecting with l_0 / l^× n yields#( (l_n ∩ l_0) / l^× n) ≤#( (U_l ∩ l_0) / U_l^n ) ·#_n. Dirichlet's units theorem states that U_l is the internal direct product F_l ×μ(l) of F_l, the free abelian subgroup of U_l (of rank 2m_1+m_2 -1) generated by some system of fundamental units, and μ(l), the subgroup of roots of unity in l^×. Since μ(l) ⊂ l_0, we also have that U_l ∩ l_0 is the internal direct product of F_l ∩ l_0 and μ(l). Additionally, it is clear that under this identification, U_l^n corresponds to the subgroup F_l^n ×μ(l)^n of (F_l ∩ l_0) ×μ(l). In consequence, #( (U_l ∩ l_0) / U_l^n ) = #( (F_l ∩ l_0) / F_l^n ) ·#( μ(l) / μ(l)^n ),and it remains to study (F_l ∩ l_0)/ F_l^n; to this end, we switch to additive notation. 
We write L for the free abelian group U_l / μ(l) (canonically isomorphic to F_l) in additive notation, and M for its free subgroup U_k / {± 1} (of rank m-1) consisting of units lying in k. The norm N_l/k induces a map N: L → M, and in turn a map L/ nL → M /nM also denoted by N, whose kernel L_0 / nL corresponds precisely to (F_l ∩ l_0) / F_l^n. In other words, the sequence0 → L_0 / nL → L / nLM / nMis exact. It is clear that # (L/nL) = n^2m_1 + m_2 -1 and #(M/nM) = n^m-1. If N is surjective, then it follows that # (L_0 /nL) = n^m_1. In any case, we have 2M ⊂ N(L) hence we may write#(N(L)+nM/nM) = #( N(L) + nM/2M + nM) ·#( 2M +nM/nM).As 2M + nM = ñM, we have #( 2M +nM/nM) = ( n/ñ)^m-1 and the lemma follows. The function ×→ defined by E(n,q) = n^-1· (q-1) q^n^2 /4 -1 is increasing in both n and q provided n, q ≥ 2. In consequence, n^-1· (q-1) q^n^2 /4 -1 > 1 provided n≥ 4. Similarly, n^-1· (q-1) q^n^2 /3 -1 > 1 provided n ≥ 3. We compute, for n, q ≥ 2,E(n,q+1)/E(n,q) = q (q+1)^1/4n^2 - 1/(q-1) q^1/4n^2 -1 = q^2 (q+1)^1/4n^2/(q^2 -1) q^1/4n^2 > 1.andE(n+1,q)/E(n,q) = n/n+1· q^1/4(2n+1)≥2/3· 2^5/4 > 1.Thus E is strictly increasing in n and q if n, q ≥ 2, and E(4,2) = 2. The proof of the second inequality is analogous. Let n, q ∈ with q ≥ 2. Then ñ^-1· (q+1)^-1 q^(n+1)/2 > 1 provided n ≥ 3.Observe that E(n,q) = q^(n+1)/2/(q+1) ñ is increasing in n and strictly increasing in q, asE(n+1,q)/E(n,q) = ñ/n+1 q^2 - ñ≥ 1andE(n,q+1)/E(n,q) = (q+1)(q+1)^(n+1)/2/(q+2) q^(n+1)/2 = (q^2+2q +1)(q+1)^(n+1)/2-1/(q^2 + 2q)q^(n+1)/2-1 > 1.Finally E(3,2) = 4/3. The function M: ×→ defined byM(m,n) = 1/100 ñ^m( 12/π)^2m( m^m/m!)^n^2 -5( ∏_i=1^n-1i!/(2π)^i+1)^m-1 n^-1(where ñ = 1 or 2 if n is odd or even) is strictly increasing in both m and n, provided m ≥ 2 and n ≥ 6.For F a function of two integer variables m and n, we denote ∂_m F (resp. ∂_n F) the function defined by ∂_m F(m,n)= F(m+1,n)/F(m,n) (resp. ∂_n F(m,n)= F(m,n+1)/F(m,n)). In order to show that M increases in m (resp. in n), we intent to show that ∂_m M > 1 (resp. ∂_n M > 1). We have∂_m M(m,n)= 144/π^2 ñ((m+1)^m/m^m)^n^2-5·∏_i=1^n-1i!/(2π)^i+1 ∂_n M(m,n)= (ñ/n+1)^m·n/n+1·(m^m/m!)^2n+1(n!/(2π)^n+1)^m-1and thus∂_m (∂_n M)(m,n) = ∂_n (∂_m M)(m,n) = ñ/n+1·((m+1)^m/m^m)^2n+1·n!/(2π)^n+1.Now if m ≥ 2 and n ≥ 4, then (m+1)^m/m^m≥9/4 and we have∂_m (∂_n M)(m,n) ≥1/2( 9/4)^2n+1n!/(2 π)^n+1 = 9/16 π·( 81/16 π)^n·n!/2^n≥9/16 π·( 81/16 π)^4 > 1.This means that provided m ≥ 2 and n ≥ 4, ∂_m M increases in n and ∂_n M increases in m. Finally, assuming m ≥ 2 and n ≥ 6 respectively, we have∂_m M(m,6)= 144/2 π^2((m+1)^m/m^m)^31·∏_i=1^5i!/(2π)^i+1≥144/2π^2(9/4)^31·∏_i=1^5i!/(2π)^i+1 > 1∂_n M(2,n)= (ñ/n+1)^2 n/n+1· 2^2n+1·n!/(2π)^n+1≥3/14· 2^nn!/π^n+1≥3/14· 2^6·6!/π^7> 1hence ∂_m M(m,n) > 1 and ∂_n M(m,n) > 1 provided m ≥ 2 and n ≥ 6, completing the proof. The table below contains some values of the function M from lemma <ref>. 
0.60[ (n, m)12345678;20.0364756 0.003370120.000276781 0.00002157711.63315 × 10^-61.21281 × 10^-78.88761 × 10^-9 6.44933 × 10^-10;30.0486342 0.002318760.000177084 0.00001665851.76356 × 10^-62.01469 × 10^-72.42731 × 10^-83.04153 × 10^-9;40.01823780.0002142399.19392 × 10^-66.99962 × 10^-77.37412 × 10^-89.57798 × 10^-91.43998 × 10^-9 2.41175 × 10^-10;50.02918050.0008602600.0002674340.0002357650.0003751600.000873531 0.00265357 0.00980934;60.01215850.000715847 0.001623630.0185268 0.52802027.14892107.97221884.;70.02084320.037445311.982337981.04.41409× 10^8 1.18530× 10^13 5.71337× 10^17 4.24155× 10^22;8 0.00911891 0.55691235451.14.88495 × 10^103.84324 × 10^179.29477 × 10^244.92580 × 10^324.65827 × 10^40;90.0162114685.655 2.23863× 10^11 3.83726× 10^21 6.20398× 10^32 4.26138× 10^44 8.04066× 10^56 3.19899× 10^69; 10 0.00729513306071.9.29184 × 10^173.98641 × 10^322.82701 × 10^481.22281 × 10^651.87055 × 10^827.27033 × 10^99; 110.0132639 1.40574× 10^10 1.27888× 10^28 4.91209× 10^48 5.79785× 10^70 6.22507× 10^933.12510× 10^1174.89869× 10^141 ]The function M': ×→ defined byM'(m,n) = 1/100 ñ^m( 12/π)^2m( A^m E )^n^2 -5( ∏_i=1^n-1i!/(2π)^i+1)^m-1 n^-1(where ñ = 1 or 2 if n is odd or even, and A = 29.534^1/2, E = e^-4.13335) is strictly increasing in both m and n, provided m ≥ 4 and n ≥ 4. Moreover, M'(m,n) is strictly increasing in m provided n ≥ 3.In order to show that M' increases in m (resp. in n), we intend to show that ∂_m M > 1 (resp. ∂_n M > 1); the notation is as in lemma <ref>. We have∂_m M'(m,n)= 144/π^2 ñ· A^n^2-5·∏_i=1^n-1i!/(2π)^i+1 ∂_n M'(m,n)= (ñ/n+1)^m( A^m E )^2n+1(n!/(2π)^n+1)^m-1(n/n+1)and thus∂_m (∂_n M')(m,n) = ∂_n (∂_m M')(m,n) = ñ/n+1· A^2n+1·n!/(2π)^n+1.As clearly A^2 > 2π, we have (if n ≥ 3)∂_m (∂_n M')(m,n) > 1/2· A ·n!/2π > 1.This means that ∂_m M' increases in n and ∂_n M' increases in m. Assuming respectively m ≥ 1 and n ≥ 4, we have∂_m M'(m,3) =144/π^2· A^4·2/(2π)^5 > 1∂_n M'(4,n) ≥1/2^4·( A^4 E )^2n+1·(n!)^3/(2 π)^3n+3·4/5≥1/2^4·( A^4 E )^9·(6!)^3/(2 π)^21·4/5 > 1hence ∂_m M'(m,n) > 1 and ∂_n M'(m,n) > 1 provided m ≥ 4 and n ≥ 4. Moreover, ∂_m M'(m,n) > 1 if n ≥ 3, completing the proof. The table below contains some values of the function M' from lemma <ref>. 0.57[(n, m) 1 2 3 4 5 6 7 8; 20.418729 0.0142379 0.0004841240.0000164615 5.59732 × 10^-7 1.90323 × 10^-86.47149 × 10^-102.20047 × 10^-11; 3 2.80041 × 10^-6 7.27880 × 10^-60.00001891900.0000491740 0.000127813 0.000332209 0.0008634740.00224433; 43.99708 × 10^-142.79970 × 10^-11 1.96100 × 10^-80.00001373560.00962086 6.73878 4720.083.30611 × 10^6; 51.84711 × 10^-232.62212 × 10^-16 3.72231 × 10^-9 0.0528412 750123.1.06486× 10^131.51165× 10^202.14591× 10^27; 61.68676 × 10^-352.85139 × 10^-234.82016 × 10^-11 81.4827 1.37743 × 10^14 2.32849 × 10^26 3.93621 × 10^38 6.65400 × 10^50; 74.80891 × 10^-491.09207 × 10^-292.48000 × 10^-10 5.63189× 10^91.27896× 10^292.90442× 10^486.59571× 10^671.49783× 10^87; 82.65506 × 10^-656.66279 × 10^-381.67200 × 10^-10 4.19583 × 10^17 1.05293 × 10^45 2.64229 × 10^72 6.63074 × 10^991.66396 × 10^127; 94.52005 × 10^-831.88536 × 10^-45 7.86407 × 10^-83.28019× 10^301.36821× 10^68 5.70695× 10^105 2.38043× 10^143 9.92906× 10^180;10 1.47804 × 10^-1031.08376 × 10^-54 7.94662 × 10^-6 5.82681 × 10^43 4.27247 × 10^923.13276 × 10^1412.29707 × 10^1901.68431 × 10^239;11 1.48182 × 10^-1253.59121 × 10^-630.8703372.10928× 10^62 5.11187× 10^124 1.23887× 10^187 3.00243× 10^249 7.27644× 10^311 ]The table below contains some values of C(m,n) = ( 230 ñ^m( π/12)^2m V_n^1-m n )^2/n^2-5. 
0.8[(n, m) 1 2 3 4 5 6 7 8; 3 6.87691 125.979 2307.81 42276.9 774473. 1.41876× 10^7 2.59904× 10^8 4.76120× 10^9; 4 2.40966 21.6241 194.053 1741.42 15627.4 140239. 1.25850× 10^6 1.12937× 10^7; 5 1.54762 8.80582 50.1044 285.090 1622.14 9229.86 52517.2 298819.; 6 1.40247 6.73460 32.3393 155.292 745.707 3580.86 17195.1 82570.5; 7 1.23838 4.82334 18.7864 73.1708 284.992 1110.01 4323.37 16839.0; 8 1.20619 4.19700 14.6037 50.8142 176.811 615.221 2140.69 7448.64; 9 1.13928 3.44306 10.4054 31.4468 95.0368 287.215 868.006 2623.24 ]The table below contains the absolute value of the smallest discriminant D_k of a totally real number field of degree m (see for example <cit.> or <cit.>). 1[ m 1 2 3 4 5 6 7 8; min D_k 1 549 725 1464130012520134393 282300416 ]The tables below contains some values of H(m) =(A^2 B^2m-2 E)^2 V_3^m-1/3 ·ζ(2) ·ζ(3) for A =25.465, B = 13.316, E= e^-7.0667 if m ≥ 5, and otherwise H(m) is obtained from (<ref>) using the smallest discriminant for the signature (2,m-1) (see <cit.>). 1[ m 2 3 4 5 6 7 8 9;H(m) 2.603 5.527 26.39 87.71 563.23616.4 23222.2 149118. ]1[ m101112131415;H(m) 9.58 × 10^56.15× 10^63.95× 10^72.54× 10^81.63× 10^9 1.05× 10^10 ]The table below contains some values of 60.015^2· 22.210^2m-2· e^-80.001/H(m), where H(m) is as in table <ref>. 0.9[m2345678;D_l > 8.05 × 10^-8 454.012.08× 10^10 8.57 × 10^13 9.13 × 10^16 5.08 × 10^192.55× 10^22 ]0.9[m9 10 11 12 13 14 15;D_l > 1.26 × 10^25 6.23 × 10^27 3.07 × 10^30 1.52 × 10^33 7.48 × 10^35 3.69 × 10^38 1.82 × 10^41 ]The function N: → defined byN(n) = 1/100 ñ^2( 12/π)^2 40^1/4(n^2-n-6) n^-1.(where ñ = 1 or 2 if n is odd or even) is strictly increasing provided n ≥ 2. The same holds forN'(n) = ñ^-2 n^-1 5^1/4(n^2 - n - 2).We computeN(n+1)/N(n) = ñ^2/n+1^2· 40^1/2 n·n/n+1≥1/4· 40 ·2/3 > 1.The proof for N' is analogous.∏_i=2^∞ζ(i) < 2.3We haveln∏_i=9^∞ζ(i)= ∑_i=9^∞ln (1 + (ζ(i) - 1)) ≤∑_i=9^∞ (ζ(i)-1) = ∑_i=9^∞∑_j=2^∞1/j^i= ∑_j=2^∞1/j^9∑_i=0^∞1/j^i = ∑_j=2^∞1/j^9j/j-1≤ 2 ∑_j=2^∞1/j^9 = 2(ζ(9)-1);hence ∏_i=2^∞ζ(i) ≤exp(2ζ(9)-2) ·∏_i=2^8 ζ(i) < 2.3Let P be a parabolic subgroup of _n(_q) and let n_1, n_2, …, n_k be integers such that the complement of the type θ of P in the Dynkin diagram of _n(_q) consists of k- #θ connected components of respectively n_1-1, n_2 -1, …, n_k-#θ -1 vertices and n_k- #θ +1 = n_k- #θ +2 = … = n_k = 1. Then [_n(_q): P] ≥ q^1/2(n^2 - ∑_i=1^k n_i^2). In particular, if P is a proper parabolic subgroup, then [_n(_q): P] ≥ q^n-1.Without loss of generality, we may assume that P contains the subgroup B of upper triangular matrices and that elements of P are of the form( [ 2c2*n_11|c* ⋯ * * * *;2c1|c* ⋯ * * * *; 1-3 01c|0 n_21|c⋯ * * * *;3-4⋮ ⋮1c|⋮1c|⋱ ⋮ ⋮ ⋮ ⋮; 4-7 0 0 01c|⋯ 3c3*n_k-11|c*; 0 0 01c|⋯3c1|c*; 0 0 01c|⋯3c1|c*; 5-8 0 0 0 ⋯ 0 01c|0 n_k ])where n_i indicates a block in _n_i(_q), ∗ indicates an arbitrary entry in _q, and the determinant of the whole matrix is 1. Hence#P = ∏_j=1^n_1 - 1 (q^n_1 - q^j) …∏_j=1^n_k - 1 (q^n_k - q^j) · q^1/2(n^2 - ∑_i=1^k n_i^2)/q-1and#_n(_q)/#P = ∏_j=0^n-1 (q^n - q^j)/∏_j=0^n_1 - 1 (q^n_1 - q^j) …∏_j=0^n_k - 1 (q^n_k - q^j) · q^1/2(n^2 - ∑_i=1^k n_i^2)= q^n(n-1)/2·∏_j=1^n (q^j - 1)/q^1/2(n^2 - ∑_i=1^k n_i^2) q^1/2∑_i=1^k n_i(n_i -1)·∏_j=1^n_1 (q^j - 1) …∏_j=1^n_k (q^j - 1)= ∏_j=1^n (q^j - 1)/∏_j=1^n_1 (q^j - 1) …∏_j=1^n_k (q^j - 1)= q^1/2(n(n-1) - ∑_i=1^k n_i(n_i -1))·∏_j=1^n_1 (q^j - 1) ·∏_j=1^n_2(q^j - q^-n_1) …∏_j=1^n_k (q^j - q^- ∑_i=1^k-1 n_i)/∏_j=1^n_1 (q^j - 1) …∏_j=1^n_k (q^j - 1).Of course, n(n-1) - ∑_i=1^k n_i(n_i -1) = (n^2 - ∑_i=1^k n_i^2). 
Now the ratio on the right-hand side is clearly greater than 1, since, taken in order, each factor in the numerator is bigger than the corresponding one in the denominator. Finally, we observe that if P is proper, then k ≥ 2 and n^2 - ∑_i=1^k n_i^2 ≥ 2n_1 n_2 ≥ 2(n-1). Department of Mathematics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0112. E-mail address: [email protected] | http://arxiv.org/abs/1705.09742v2 | {
"authors": [
"François Thilmany"
],
"categories": [
"math.GR",
"math.NT",
"math.RT",
"22E40"
],
"primary_category": "math.GR",
"published": "20170526234835",
"title": "Lattices of minimal covolume in SL_n(R)"
} |
| http://arxiv.org/abs/1705.09540v1 | {
"authors": [
"Pu Qiao",
"Xingzhi Zhan"
],
"categories": [
"math.CO"
],
"primary_category": "math.CO",
"published": "20170526114321",
"title": "On vertex types of graphs"
} |
[email protected] T.C.M. group, Cavendish Laboratory, J. J. Thomson Avenue, Cambridge, CB3 0HE, United KingdomT.C.M. group, Cavendish Laboratory, J. J. Thomson Avenue, Cambridge, CB3 0HE, United KingdomMax Planck Institute for the Physics of Complex Systems, Nöthnitzer Str. 38, 01187 Dresden, GermanyRudolf Peierls Centre for Theoretical Physics, 1 Keble Road, Oxford, OX1 3NP, United Kingdom NRC Kurchatov institute, 1 Kurchatov sq., 123182, Moscow, Russia We study the time evolution after a quantum quench in a family of models whose degrees of freedom are fermions coupled tospins, where quenched disorder appears neither in the Hamiltonian parameters nor in the initial state. Focussing on the behaviour of entanglement, both spatial and between subsystems, we show that the model supports a state exhibiting combined area/volume law entanglement, being characteristic of the quantum disentangled liquid. This behaviour appears for one set of variables, which is related via a duality mapping to another set, where this structure is absent. Upon adding density interactions between the fermions, we identify an exact mapping to an XXZ spin-chain in a random binary magnetic field, thereby establishing the existence of many-body localizationwith its logarithmic entanglement growth in a fully disorder-free system. Absence of Ergodicity without Quenched Disorder: from Quantum Disentangled Liquids to Many-Body Localization D. L. Kovrizhin December 30, 2023 ===============================================================================================================The intriguing problem of the interplay between interactions and disorder in a quantum system has been fuelling research in this field since Anderson's original work <cit.>. Recent progress in understanding physical phenomena associated with this interplay <cit.> has firmly placed many-body localization (MBL) ideas among the central paradigms of many-body physics <cit.>. These exciting developments moved disordered interacting systems into the focus of attention, not least because MBL offers new important insights into the fundamental questions of ergodicity and its breaking, such as concepts of eigenstate thermalization hypothesis <cit.>, beyond the realm of integrable models. Because the presence/absence of ergodicity defines the way a generic system relaxes towards an equilibrium state, there are many interesting connections between the physics of MBL, and non-equilibrium quantum physics, e.g. quantum quenches.One of such connections, recently proposed theoretically <cit.>, suggests a new non-ergodic state of matter – the quantum disentangled liquid (QDL) – which complements the established phenomenology of relaxation in isolated many-body quantum systems. The defining feature of these quantum liquids is that they are unable to fully thermalize because of interactions, thus making unnecessary the usual requirements for ergodicity breaking, such as integrability or quenched disorder. The idea of QDLs can be traced back to early works of Kagan and Maksimov on interaction-induced localization, discussed in the context of solid Helium <cit.>. One QDL scenario is that of heavy particles which thermalize, while light particles evade thermalization by localizing on the heavy particles. More recent studies of heavy-light particle models suggest that this physical picture of sub-diffusive dynamics, while present, is only transient, and gives way to ergodic behaviour at long times. Hence, these systems have been dubbed quasi-MBL <cit.>. 
Similar phenomenology has been observed in the corresponding quantum dynamics of classical glassy models <cit.>.Intriguingly, some evidence for QDL-like behaviour, showing different timescales for equilibration of two subsystems, has been observed in cold-atom experiments <cit.>.In a recent paper <cit.> we proposed a disorder-free spin-fermion model, which exhibits complete localization of the fermion subsystem. Its remarkable feature is that disorder, a prerequisite for localization, only emerges dynamically. This is highlighted via an exact duality mapping between spin/fermion degrees of freedom. This non-linear transformation reveals the presence of an extensive number of conserved quantities playing the role of the disorder potential. In the dual representation the model becomes that of free-fermions, and there is an important question as to what extent the physics that we found is robust to adding perturbations to our model. Here we propose and study such an interacting extension, showing that it can be mapped exactly onto a random field XXZ spin-chain – the drosophila of MBL <cit.>.A standard diagnostic for MBL and QDL behaviour is the bipartite entanglement entropy. Many-body localization can be distinguished from its non-interacting counterpart – Anderson localization <cit.> – via the post-quench logarithmic growth of entanglement compared with the area-law saturation of entanglement correspondingly <cit.>. QDLs on the other hand can be identified using projective measures of entanglement entropy of separate species <cit.>. One of these obeys an area-law scaling, while the other together with the full system show the volume-law. The original proposal of <cit.> provides explicit examples of many-body wave functions showing QDL phenomenology. However, the search for a microscopic Hamiltonian supporting quantum disentangled liquid has so far proved to be inconclusive. In this Letter we demonstrate two central results obtained within our model; the many-body localization without quenched disorder, and a microscopic Hamiltonian showing QDL behaviour. Here we focus on the results for the time evolution of entanglement entropy after a quantum quench, which are obtained using a combination of duality mappings, exact diagonalization, and matrix-product state (MPS) based time evolution.Our work comes at a time when exceptional progress has been made in experimental realization of controlled isolated quantum systems <cit.> and in simulating lattice gauge theories coupled to fermionic matter <cit.> – of which our system is an example. This is driven in part by MBL and general questions about thermalization, or lack-thereof, in such systems. The Hamiltonian we present is simple enough that it should be implementable in similar set-ups, and being able to tune the localization length should minimize the effect of system size limitations. We have a system that violates the eigenstate thermalization hypothesis in the two ways that we present in this paper, and with a novel disorder-free mechanism.Model and its mapping to XXZ chain in a random field.— In our previous work <cit.> we introduced a model of spinless fermions, f̂_j, hopping between sites of a 1D lattice, that are coupled to spins-1/2, σ̂_j,j+1, living on the bonds. Here we extend this model by adding nearest-neighbour interactions between the fermionsĤ_̂f̂ = -J∑_ i jσ̂^z_i,jf̂^†_i f̂_j - h∑_j σ̂^x_j-1,jσ̂^x_j,j+1 + Δ∑_j (2n̂_j -1)(2n̂_j+1 - 1),where n̂_j = f̂^†_j f̂_j is the fermion density operator. 
Without loss of generality we assume that all parameters of the Hamiltonian are non-negative. The model possesses an extensive number of conserved quantities (charges), identified by a duality mapping which we outline here for completeness, see details in <cit.>. We define τ-spins on the sites of the lattice through the duality transformation <cit.>τ̂^z_j = σ̂^x_j-1,jσ̂^x_j,j+1, τ̂^x_jτ̂^x_j+1 = σ̂^z_j,j+1.The charges q̂_j ≡τ̂^z_j (-1)^n̂_j, commute with the Hamiltonian also in the presence of fermion interactions Δ≠ 0. Finally, in terms of new fermion operators ĉ_j = τ̂^x_j f̂_j the Hamiltonian can be recast in the following formĤ_̂q̂ = -J∑_ i jĉ^†_i ĉ_j + h∑_j q̂_j (2n̂_j - 1)+ Δ∑_j (2n̂_j -1)(2n̂_j+1 - 1),where n̂_j = ĉ^†_j ĉ_j = f̂^†_jf̂_j, and q̂_j have eigenvalues ± 1. The Hamiltonian (<ref>) is equivalent to an XXZ chain in a magnetic field via Jordan-Wigner transformation <cit.>, where the value of the magnetic field on each lattice site is given by ± 2h, and the signs are fixed for any given configuration of q_j's. In the following we investigate the emergence of QDL and MBL behavior using the time evolution of the entanglement entropy after a global quantum quench with the Hamiltonian (<ref>), and initial states being tensor products of spin and fermion degrees of freedom. For simplicity we assume that at t=0 the σ-spins are polarized along the z-axis, and the f-fermions are described by the Slater determinant corresponding to a charge density wave. Thus, initial states |0 = |↑↑⋯_σ⊗ |ψ_f transform into an equal-weight superposition of charge configurations |0 = 1/√(2^N)∑_{q_j} = ± 1 |q_1 q_2 ⋯ q_N ⊗ | ψ_c, with |ψ_f equivalent to |ψ_c <cit.>. Note that the choice of a spin-polarized initial state is dictated purely by its simplicity, while the physics remains the same for any typical spin state. Exceptions are a zero-measure subset of special states, e.g., there is a simple product state of spins each (anti-)aligned with the x-axis and fermions in a tensor product of local occupations (such as a CDW) which maps to fermions in a single uniform charge sector. In this setup the problem maps to a paradigmatic MBL system – the XXZ spin-chain in a random magnetic field <cit.>. In our case the field has a binary nature, in other words it takes only two values ± 2h, as in Refs. <cit.> where MBL behaviour is also observed. Note, that here disorder is determined by the conserved charges q̂_j which are themselves related to the physical degrees of freedom of Eq. (<ref>). Our choice of the initial state results in averaging over all charge configurations, thereby generating emergent random binary magnetic fields.Quantum Disentangled Liquid.—A fresh perspective using entanglement measures <cit.> was recently proposed in the context of localization in a disorder-free system given by a mixture of heavy and light particles <cit.>. These developments brought forward the notion of a quantum disentangled liquid – a state of matter which is defined by different behaviour of the entanglement entropy of its subspecies. However, to our knowledge, no microscopic Hamiltonian conclusively exhibiting this behaviour has been identified so far. Here we show that the model we suggested in <cit.>, even in the non-interacting case of Δ=0, does realize the phenomenology of QDLs.The quantum disentangled liquid was defined in <cit.> via Projective Bipartite Entanglement Entropy (PBEE). Here we briefly review the definition of PBEE for the case of a system with two components in a pure state |ψ. 
Let α and β label the components, and P̂^γ_ϕ be a projector onto the state |ϕ of the species γ∈{α, β}. This projector is related to a measurement of the single component. We also spatially partition our system into two subsystems, A and B. The algorithm for calculating PBEE for the component α is as follows:(i) project the state |ψ onto the state |ϕ of species β, i.e. |ψ_ϕ = P^β_ϕ | ψ;(ii) define the reduced density matrix ρ^ϕ_A = Tr_B |ψ_ϕψ |_ϕ; (iii) compute the von Neumann entanglement entropy S^ϕ_A = -Tr_A[ρ^ϕ_A logρ^ϕ_A]; (iv) the PBEE for the species α is then defined asS^α_PBEE = ∑_ϕ ||ψ_ϕ|^2 S^ϕ_A,where the sum for the entropies S^ϕ_A is weighted with the probabilities of states |ψ_ϕ. Crucially a QDL has volume-law scaling of the total bipartite von Neumann entropy S and S^α_PBEE for one species, but the area-law for the other species S^β_PBEE. In Fig. <ref>(a) we show bipartite entanglement entropy for the full system for Δ=0, h/J = 20 after a quench from a charge density wave fermion state. The entropy exhibits initial linear growth followed by an area-law plateau which eventually gives way to the volume-law scaling (note the dependence on the system size). The extent of the plateau scales as (h/J)^2for h/J ≫ 1, as shown in the inset; it is absent for h/J < 1. This behaviour can be attributed to a separation of timescales, which is particularly crisp in our case of binary disorder, where for h/J ≫ 1, a pair of adjacent sites with opposite values of q_j correspond to a high energy barrier. Traversing such a barrier is a process parametrically suppressed in h/J, while motion between such barriers takes place on shorter timescales. The latter can only producearea-law scaling of the entanglement entropy, while the former can act on longer timescales, resulting in equilibration of the spins and a concomitantvolume-law scaling for theentanglement entropy. Note that the same two localization regimes also appear inthe disorder-averaged entanglement entropy of a simple tight-binding model with binary disorder. It is directly related to PBEE projected onto the charge sectors in our model, S^c_PBEE shown in Fig. <ref>(a), because our choice of spin polarized initial state leads to an equal weight superposition of all disordered charge configurations.The PBEEs for the original degrees of freedom, the f-fermions and σ-spins, are shown in Fig. <ref>(b). The data is scaled to highlight the fact that both PBEEs have the same qualitative behaviour, and match the entanglement entropy of the composite system. In terms of the f and σ degrees of freedom, the long time limit does not suggest the QDL behaviour since all three measures develop volume-law scaling (see inset). However, in terms of new degrees of freedom, after the mapping to c-fermions and conserved charges, we do find the phenomenology of the QDL. The corresponding PBEEs obey area and volume law scaling, respectively, as shown in Fig. <ref>. Importantly, we find area-law scaling of the PBEE for a macroscopic fraction of the degrees of freedom. Furthermore, since the localization behaviour persists for all system sizes <cit.>, and there is a direct relation between the area-law scaling of S^c_PBEE and the localization of fermions, this allows us to infer that this behaviour holds in the thermodynamic limit. These contrasting results highlight the subtlety of defining a QDL, most crucially an appropriate choice of the measurement basis. 
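The projective measure defined in steps (i)-(iv) above is straightforward to evaluate numerically for small systems. A minimal NumPy sketch is given below; it assumes a toy chain of L sites in which each site hosts one two-level spin (the projected species β) and one two-level particle (species α), with amplitudes ordered site by site as (spin, particle), and it uses a random pure test state purely for illustration.

import numpy as np

def half_chain_entropy(phi, d_left, d_right):
    # von Neumann entropy of the left block of a normalized pure state phi
    s = np.linalg.svd(phi.reshape(d_left, d_right), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def projective_entropy(psi, L):
    # S^alpha_PBEE: project onto every spin configuration (species beta),
    # then average the spatial half-chain entropy of the remaining particles
    t = psi.reshape([2] * (2 * L))        # slots: (spin_1, particle_1, ..., spin_L, particle_L)
    total = 0.0
    for conf in np.ndindex(*([2] * L)):   # 2^L spin measurement outcomes
        idx = [slice(None)] * (2 * L)
        for site, s in enumerate(conf):
            idx[2 * site] = s             # fix the spin slots to this outcome
        phi = t[tuple(idx)].reshape(-1)   # unnormalized particle wavefunction
        w = float(np.vdot(phi, phi).real) # outcome probability
        if w < 1e-14:
            continue
        total += w * half_chain_entropy(phi / np.sqrt(w), 2 ** (L // 2), 2 ** (L - L // 2))
    return total

L = 4
rng = np.random.default_rng(0)
psi = rng.normal(size=4 ** L) + 1j * rng.normal(size=4 ** L)
psi /= np.linalg.norm(psi)
print("S_PBEE for the particle species:", projective_entropy(psi, L))

Replacing the random test state by a state evolved under a concrete Hamiltonian turns this sketch into the diagnostic plotted above.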
While the dynamics of the f and c fermions is closely related, e.g., all density correlators are the same, they are connected via non-linear and non-local transformation with a string of spin operators. Disorder-free MBL.—We now turn to our second main result related to the interacting fermion case Δ≠ 0.Here, the system (<ref>) can be mapped to an XXZ model with a random magnetic field of binary nature q_j h→± h via a standard Jordan-Wigner transformation, S^+_j=ĉ_j^†(-1)^∑_l<jn̂_l and S^z_j=n̂_j-1/2, yieldingĤ_XXZ = -J∑_j (Ŝ^+_j Ŝ^-_j+1+Ŝ^-_j Ŝ^+_j+1)+ 4Δ∑_jŜ^z_jŜ^z_j+1 + 2h∑_jq_j Ŝ^z_j.Usually studied with continuously sampled disorder, but also considered with binary disorder in Ref. <cit.>, the random field XXZ model serves as an important example of a model showing many-body localized behaviour <cit.>. Here, we find that MBL phenomenology extends to our model, even without quenched disorder. MBL is often distinguished from Anderson localization by the logarithmic growth of entanglement entropy after a quench whilst preserving area-law scaling <cit.> with the system size. We use this diagnostic for the initial charge density wave fermion state. The time evolution under the Hamiltonian (<ref>) is computed using exact diagonalization for N=12 and by an MPS algorithm for N=20 (with the help of the iTensor library <cit.>), where we use second-order Trotter decomposition with error of compression at each step less than 3× 10^-7 up to a maximum bond dimension χ = 700.In Fig. <ref>(a) we present the results for the time evolution of entanglement entropy after a quench from a charge density wave initial state. In the case of Δ=0 we observe an area-law plateau at long times, as identified in Fig. <ref>(a). Upon increasing Δ/J to 0.1 we find a change of behaviour with entanglement entropy growing without saturation, which also obeys area-law scaling with respect to different size partitions (not shown) and is evident from comparing results for N=12 and N=20. The same data shown in a semi-log plot (see inset) confirms that this is consistent with the logarithmic growth of entanglement. The averaged density imbalance between neighbouring sites, Δρ(t) ∝∑_j | 0 | n̂_j(t) - n̂_j+1(t) |0|, along with the time-averaged value 1/t∫^t_0 dτ Δρ(τ), is shown in Fig. <ref>(b). For both Δ/J = 0, 0.1, the density imbalance oscillates about the long-time value Δρ(∞) > 0, directly establishing non-ergodicity on these very large timescales <cit.>. The electron interactions (Δ0) lead to additional damping of the oscillations around this value. For a more detailed investigation of the XXZ spin chain with binary disorder we point the reader to Refs. <cit.>Discussion.—We presented an extension of our model of disorder-free localization discussed previously in Ref. <cit.>. The model shows rich phenomenology, from quantum disentangled liquids to many-body localization. Our results explicitly demonstrate that the usual assumption that the MBL phase requires quenched disorder is false.Using the time evolution of entanglement after a quantum quench we demonstrated that our model also shows quantum disentangled liquid behaviour in this observable, and we highlight the dependence of the definition of QDL on the choice of the measurement basis. In spite of a close relationship between the f and c fermions that we consider, the projective entanglement involved in the definition of QDLs reveal a stark difference between the two measurements. 
We have also identified a family of models that realise QDLs, namely extensions to our model, which commute with the conserved charges, as well as the models with non-interacting auxillary spins of Ref. <cit.>. The model can be extended by adding to the Hamiltonian a number of other terms commuting with conserved charges. A particularly interesting example is a simple longitudinal field ∼∑_i σ_i^x. This term confines excitations of the spin sector and corresponds to a non-local interaction after the mapping to c-fermions. Another possible extension is to give dynamics to the conserved charges which could help to establish how robust the physics of disorder-free MBL is to perturbations.The model we discussed in this Letter marks an intersection of many-body localization and quantum disentangled liquids. Recent experimental progress on controlled isolated quantum systems, and in particular the simulation of lattice gauge theories, makes it accessible with current capabilities <cit.>. It provides a new setting for studying old and general open questions about the relaxation of isolated many-body quantum systems.§ ACKNOWLEDGEMENTSWe are grateful to Fabian Essler for enlightening and encouraging suggestions, and for bringing to our attention the importance of the XXZ mapping. We would like to thank Thomas Veness for his suggestions on the manuscript. A. S. acknowledges EPSRC for studentship funding under Grant No. EP/M508007/1. J. K. is supported by the Marie Curie Programme under EC Grant agreements No.703697. The work of D. L. K. was supported by EPSRC Grant No. EP/M007928/1.R. M. was in part supported by DFG under grant SFB 1143.30 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Anderson(1958)]Anderson1958 author author P. W. Anderson, 10.1103/PhysRev.109.1492 journal journal Phys. Rev. volume 109, pages 1492 (year 1958)NoStop [Gornyi et al.(2005)Gornyi, Mirlin, and Polyakov]Gornyi:2005xy author author I. V. Gornyi, author A. D. Mirlin, and author D. G. Polyakov, https://link.aps.org/doi/10.1103/PhysRevLett.95.206603 journal journal Physical Review Letters volume 95, pages 206603 (year 2005)NoStop [Basko et al.(2006)Basko, Aleiner, and Altshuler]Basko2006 author author D. Basko, author I. Aleiner, and author B. Altshuler, 10.1016/j.aop.2005.11.014 journal journal Ann. Phys. (N. Y). volume 321, pages 1126 (year 2006)NoStop [Altshuler et al.(1997)Altshuler, Gefen, Kamenev, and Levitov]Altshuler:1997fk author author B. L. Altshuler, author Y. Gefen, author A. Kamenev,and author L. S. Levitov, https://link.aps.org/doi/10.1103/PhysRevLett.78.2803 journal journal Physical Review Letters volume 78, pages 2803 (year 1997)NoStop [Altman and Vosk(2015)]Altman_review2015 author author E. Altman and author R. Vosk,@noopjournal journal Annual Review of Condensed Matter Physics volume 6, pages 383 (year 2015)NoStop [Nandkishore and Huse(2015)]Nandkishore2015 author author R. Nandkishore and author D. A. Huse, 10.1146/annurev-conmatphys-031214-014726 journal journal Annu. Rev. Condens. Matter Phys. volume 6, pages 15 (year 2015)NoStop [Srednicki(1994)]Srednicki1994 author author M. Srednicki, 10.1103/PhysRevE.50.888 journal journal Phys. Rev. E volume 50, pages 888 (year 1994)NoStop [Pino et al.(2016)Pino, Ioffe, and Altshuler]Pino2016 author author M. Pino, author L. B. Ioffe, and author B. L. 
Altshuler,@noopjournal journal Proceedings of the National Academy of Sciences volume 113, pages 536 (year 2016)NoStop [Pino et al.()Pino, Kravtsov, Altshuler, and Ioffe]Pino2017 author author M. Pino, author V. Kravtsov, author B. L. Altshuler,and author L. B. Ioffe, @noopjournal arXiv:1704.07393 NoStop [Grover and Fisher(2014)]Grover2014 journal author author T. Grover and author M. P. A. Fisher, 10.1088/1742-5468/2014/10/P10010 journal journal J. Stat. Mech. Theory Exp. volume 2014, pages P10010 (year 2014)NoStop [Veness et al.(2016)Veness, Essler, and Fisher]Veness2016 author author T. Veness, author F. H. L. Essler,and author M. P. A. Fisher, https://arxiv.org/pdf/1611.02075.pdf http://arxiv.org/abs/1611.02075 (year 2016), http://arxiv.org/abs/1611.02075 arXiv:1611.02075 NoStop [Garrison et al.(2017)Garrison, Mishmash, and Fisher]Garrison2017 author author J. R. Garrison, author R. V. Mishmash,and author M. P. A. Fisher, 10.1103/PhysRevB.95.054204 journal journal Phys. Rev. B volume 95, pages 054204 (year 2017)NoStop [Kagan and Maksimov(1984)]Kagan author author Y. Kagan and author L. A. Maksimov, 0038-5646/84/070201-10 journal journal JETP volume 60, pages 201 (year 1984)NoStop [Yao et al.(2016)Yao, Laumann, Cirac, Lukin, and Moore]Yao2014 author author N. Y. Yao, author C. R. Laumann, author J. I. Cirac, author M. D. Lukin,and author J. E. Moore, 10.1103/PhysRevLett.117.240601 journal journal Phys. Rev. Lett. volume 117, pages 240601 (year 2016)NoStop [Papić et al.(2015)Papić, Stoudenmire, and Abanin]Papic2015 author author Z. Papić, author E. M. Stoudenmire,and author D. A. Abanin, 10.1016/j.aop.2015.08.024 journal journal Ann. Phys. (N. Y). volume 362, pages 714 (year 2015)NoStop [Schiulaz et al.(2015)Schiulaz, Silva, and Müller]Schiulaz2015 author author M. Schiulaz, author A. Silva, and author M. Müller,10.1103/PhysRevB.91.184202 journal journal Phys. Rev. B volume 91, pages 184202 (year 2015)NoStop [van Horssen et al.(2015)van Horssen, Levi, and Garrahan]vanHorssen2015 author author M. van Horssen, author E. Levi,and author J. P. Garrahan, 10.1103/PhysRevB.92.100305 journal journal Phys. Rev. B volume 92, pages 100305 (year 2015)NoStop [Hickey et al.(2016)Hickey, Genway, and Garrahan]Hickey2016 author author J. M. Hickey, author S. Genway, and author J. P. Garrahan,http://stacks.iop.org/1742-5468/2016/i=5/a=054047 journal journal Journal of Statistical Mechanics: Theory and Experiment volume 2016, pages 054047 (year 2016)NoStop [Bloch()]Bloch_private author author I. Bloch, @noopjournal private communicationNoStop [Smith et al.(2017)Smith, Knolle, Kovrizhin, and Moessner]Smith2017 journal author author A. Smith, author J. Knolle, author D. L. Kovrizhin,and author R. Moessner, https://arxiv.org/pdf/1701.04748.pdf http://arxiv.org/abs/1701.04748 journal journal arXiv:1701.04748(year 2017)NoStop [Žnidarič et al.(2008)Žnidarič, Prosen, and Prelovšek]Znidaric2008 author author M. Žnidarič, author T. Prosen,and author P. Prelovšek, 10.1103/PhysRevB.77.064426 journal journal Phys. Rev. B volume 77, pages 064426 (year 2008)NoStop [Bardarson et al.(2012)Bardarson, Pollmann, and Moore]Bardason2012 author author J. H. Bardarson, author F. Pollmann, and author J. E. Moore,10.1103/PhysRevLett.109.017202 journal journal Phys. Rev. Lett. volume 109,pages 017202 (year 2012)NoStop [Enss et al.(2017)Enss, Andraschko, and Sirker]Enss2016 author author T. Enss, author F. Andraschko, and author J. Sirker, 10.1103/PhysRevB.95.045121 journal journal Phys. Rev. 
B volume 95, pages 045121 (year 2017)NoStop [Schreiber et al.(2015)Schreiber, Hodgman, Bordia, Luschen, Fischer, Vosk, Altman, Schneider, and Bloch]Schreiber2015 author author M. Schreiber, author S. S. Hodgman, author P. Bordia, author H. P. Luschen, author M. H. Fischer, author R. Vosk, author E. Altman, author U. Schneider,and author I. Bloch, 10.1126/science.aaa7432 journal journal Science volume 349, pages 842 (year 2015)NoStop Andraschko2014 author author F. Andraschko, author T. Enss, and author J. Sirker,10.1103/PhysRevLett.113.217201 journal journal Phys. Rev. Lett. volume 113,pages 217201 (year 2014)NoStop Tang2015 author author B. Tang, author D. Iyer, and author M. Rigol,10.1103/PhysRevB.91.161109 journal journal Phys. Rev. B volume 91,pages 161109(R) (year 2015)NoStop [Choi et al.(2016)Choi, Hild, Zeiher, Schauss, Rubio-Abadal, Yefsah, Khemani, Huse, Bloch, and Gross]Choi2016 author author J.-Y. Choi, author S. Hild, author J. Zeiher, author P. Schauss, author A. Rubio-Abadal, author T. Yefsah, author V. Khemani, author D. A. Huse, author I. Bloch,and author C. Gross, 10.1126/science.aaf8834 journal journal Science volume 352, pages 1547 (year 2016)NoStop [Zhang et al.(2016)Zhang, Hess, Kyprianidis, Becker, Lee, Smith, Pagano, Potirniche, Potter, Vishwanath, Yao, and Monroe]Zhang2017 author author J. Zhang, author P. W. Hess, author A. Kyprianidis, author P. Becker, author A. Lee, author J. Smith, author G. Pagano, author I. D. Potirniche, author A. C. Potter, author A. Vishwanath, author N. Y. Yao, and author C. Monroe, 10.1038/nature21413 journal journal Nature volume 543, pages 217 (year 2017)NoStop [Martinez et al.(2016)Martinez, Muschik, Schindler, Nigg, Erhard, Heyl, Hauke, Dalmonte, Monz, Zoller, and Blatt]Martinez2016 author author E. A. Martinez, author C. A. Muschik, author P. Schindler, author D. Nigg, author A. Erhard, author M. Heyl, author P. Hauke, author M. Dalmonte, author T. Monz, author P. Zoller,and author R. Blatt, 10.1038/nature18318 journal journal Nature volume 534, pages 516 (year 2016)NoStop [Kramers and Wannier(1941)]Kramers1941 author author H. A. Kramers and author G. H. Wannier, 10.1103/PhysRev.60.252 journal journal Phys. Rev. volume 60, pages 252 (year 1941)NoStop [Fradkin and Susskind(1978)]Fradkin1978 author author E. Fradkin and author L. Susskind, 10.1103/PhysRevD.17.2637 journal journal Phys. Rev. D volume 17, pages 2637 (year 1978)NoStop [Essler and Fagotti(2016)]Essler2016 author author F. H. L. Essler and author M. Fagotti, 10.1088/1742-5468/2016/06/064002 journal journal J. Stat. Mech. Theory Exp. volume 2016, pages 064002 (year 2016)NoStop [Serbyn et al.(2013)Serbyn, Papi ćć, and Abanin]Serbyn2013 author author M. Serbyn, author Z. Papi ćć,and author D. A. Abanin, 10.1103/PhysRevLett.110.260601 journal journal Phys. Rev. Lett. volume 110, pages 260601 (year 2013)NoStop [ITensor()]iTensor author author ITensor, http://itensor.org journal http://itensor.org NoStop [Paredes et al.(2005)Paredes, Verstraete, and Cirac]Paredes2005 journal author author B. Paredes, author F. Verstraete,and author J. I. Cirac, 10.1103/PhysRevLett.95.140501 journal journal Phys. Rev. Lett. volume 95, pages 140501 (year 2005)NoStop | http://arxiv.org/abs/1705.09143v2 | {
"authors": [
"Adam Smith",
"Johannes Knolle",
"Roderich Moessner",
"Dmitry L. Kovrizhin"
],
"categories": [
"cond-mat.str-el"
],
"primary_category": "cond-mat.str-el",
"published": "20170525121736",
"title": "Absence of Ergodicity without Quenched Disorder: from Quantum Disentangled Liquids to Many-Body Localization"
} |
[footnoteinfo]This research is supported in part by a Hong Kong Research Grant council (RGC) grant (No. 15206915), the Air Force Office of Scientific Research (AFOSR) and the Office of Naval Research Global (ONRS) under agreement number FA2386-16-1-4065, and the Australian Research Council under grant number DP180101805. Corresponding author G. Zhang. Tel. +852 2766 6936. Fax +852 2764 4382.PolyU]Qing [email protected], PolyU]Guofeng [email protected], ANU]Ian R. [email protected] [PolyU]Department of Applied Mathematics, Hong Kong Polytechnic University, Hong Kong SAR, China.[ANU]Research School of Engineering, Australian National University, Canberra, ACT, 2601, Australia.Open quantum systems; quantum filtering; quantum information geometry; exponential quantum projection filter. An approximate exponential quantum projection filtering scheme is developed for a class of open quantum systems described by Hudson-Parthasarathy quantum stochastic differential equations, aiming to reduce the computational burden associated with online calculation of the quantum filter. By using a differential geometric approach, the quantum trajectory is constrained in a finite-dimensional differentiable manifold consisting of an unnormalized exponential family of quantum density operators, and an exponential quantum projection filter is then formulated as a number of stochastic differential equations satisfied by the finite-dimensional coordinate system of this manifold. A convenient design of the differentiable manifold is also presented through reduction of the local approximation errors, which yields a simplification of the quantum projection filter equations. It is shown that the computational cost can be significantly reduced by using the quantum projection filter instead of the quantum filter. It is also shown that when the quantum projection filtering approach is applied to a class of open quantum systems that asymptotically converge to a pure state, the input-to-state stability of the corresponding exponential quantum projection filter can be established. Simulation results from an atomic ensemble system example are provided to illustrate the performance of the projection filtering scheme. It is expected that the proposed approach can be used in developing more efficient quantum control methods.§ INTRODUCTIONThe past decades have witnessed tremendous advances in quantum technologies which allow us to effectively probe and manipulate matter at the level of atoms (e.g., <cit.>, <cit.>, <cit.>, <cit.>). A basic requirement in realizing these technologies is to infer the unknown quantum system states from measurements. Nevertheless, two fundamental nonclassical features manifested by quantum systems are that i) any quantum measurement scheme can extract in principle only partial information from the observed quantum system; and ii) any quantum measurement inevitably changes the quantum system states in a probabilistic way (<cit.>, <cit.>, <cit.>). As a result, any measurement based quantum feedback control problem is essentially a problem of stochastic control theory with partial observations and can generally be converted into a control problem for a quantum filter with fully accessible states, as in classical stochastic control theory (<cit.>, <cit.>, <cit.>,<cit.>, <cit.>). 
In this context, the quantum system and observations are modelled as a pair of quantum stochastic differential equations, while the quantum filter, also known as the quantum trajectory, is a dynamic equation driven by the classical output signal of a laboratory measuring device (<cit.>, <cit.>, <cit.>). A quantum filter recursively updates the information state of a quantum system undergoing continual measurements and provides real-time information that can be fed into the quantum controller. Therefore, real time solution of the quantum filter equations is essential in implementing a quantum feedback control setup, which, however, tends to be computationally expensive, especially when the quantum system has a high dimension (<cit.>). In order to make the implementation more efficient, several approaches have been proposed in the literature concerning the approximation or model reduction of quantum filter, to mention a few, see (<cit.>, <cit.>). In <cit.>, an extended Kalman filtering approach was developed for a class of open quantum systems subject to continuous measurement, where time-varying linearization was applied to the system dynamics and a Kalman filter was designed for the linearized system. The proposed approach performs well for nearly linear quantum systems. A numerical approach to reducing the computational burden associated with calculating quantum trajectories was discussed in <cit.> and was used to demonstrate a two-qubit feedback control scheme. It was shown in simulation studies that a high approximation accuracy can be achieved even when a small number of integration steps is involved. The main goal of this paper is to approximate the optimal quantum filter using a lower-dimensional quantum projection filter, motivated by the pioneering work on projection filtering for classical stochastic systems by Brigo, Hanzon and LeGland (<cit.>, <cit.>). The basic idea of projection filtering is to constrain the optimal filter to remain in a finite-dimensional submanifold embedded in the state space of the filter. Then the projection filter can be expressed as a set of dynamic equations satisfied by the local coordinates of this submanifold. The problem of quantum projection filtering has been addressed in <cit.> where the information state of a highly nonlinear quantum model of a strongly coupled two-level atom in an optical cavity was approximately determined by a tractable set of stochastic differential equations. However, the approach in <cit.> requires exact prior knowledge of an invariant set of the solutions to the quantum filter equations. In other words, a finite-dimensional family of densities is already known to be a good approximation of the information state. This restrictive assumption was removed in <cit.> where an unsupervised learning identification algorithm was developed to determine the structure of the submanifold. However, the identification algorithm itself could be time consuming when a more general and complex open quantum system is considered instead of the simple two-level quantum system in <cit.>. In this paper, we design an exponential quantum projection filter for a general atom-laser interaction system subject to continuous homodyne detection, using a differential-geometric method in quantum information geometry theory. We propose a finite-dimensional differentiable submanifold consisting of an unnormalized exponential family of quantum density operators, on which a quantum Fisher metric structure is rigorously defined. 
Then through a projection operation, the solutions to the unnormalized quantum filter equations are maintained in this submanifold. In other words, the resulting quantum trajectory becomes a curve on the finite-dimensional manifold and the unnormalized quantum filter equation reduces to a set of recursive equations satisfied on the corresponding finite-dimensional coordinate system. We also present a convenient design of the differentiable manifold, by which the local approximation errors are significantly reduced and the quantum projection filter equations are simplified. In addition, it is shown that when the projection filtering strategy is applied to a class of open quantum systems that asymptotically converge to a pure state, the input-to-state stability of the corresponding exponential quantum projection filter can be established.This paper is organized as follows. Section 2 introduces some preliminaries on the quantum system model, quantum filter and quantum information geometry. Section 3 presents the main contributions of this paper. Here, we first derive the exponential quantum projection filter equations and provide a convenient design of the differentiable manifold in Subsection 3.1. Then we apply the projection filtering strategy to an asymptotically stable open quantum system and analyze the behaviour of the corresponding exponential quantum projection filter in Subsection 3.2. Section 4 applies the proposed approach to an atomic ensemble interacting with an electromagnetic field and demonstrates the approximation performance through simulation studies. Section 5 concludes this paper.Notation. i=√(-1). Here we use the Roman type character i to distinguish the imaginary unit from the index i. A^† represents the complex conjugate transpose of matrix A. s_1(A),...,s_n(A) are the singular values of matrix A which are arranged in decreasing order, i.e., s_1(A)≥ s_2(A)≥...≥ s_n(A)≥ 0. (A) is the trace of matrix A. [A, B]=AB-BA is the commutator of matrices A and B. I is the identity matrix. A is the max norm of matrix A. ℝ^n and ℂ^n represent the n-dimensional real vector space and complex vector space, respectively. § PRELIMINARIES §.§ System Formulation and Quantum FilterWe sketch the open quantum system model under consideration in this section; a more detailed description can be found in (<cit.>, <cit.>, <cit.>) and the references therein. In this paper, we consider a typical physical scenario from quantum optics. An arbitrary quantum system G, e.g., an atomic ensemble, is in weak interaction with an external single-channel laser field that is initially in the vacuum state. A cavity is used to increase the interaction strength between the light and the quantum system. One of the cavity mirrors, through which a forward mode of the electromagnetic field scatters off, is made slightly leaky such that information about the quantum system G is extracted using a homodyne detector. The single-channel probe laser field has an annihilation operator b(t) and a creation operator b^†(t), which are operators defined on a symmetric Fock space ℰ that can be decomposed into the past and future components in the form of a tensor product ℰ=ℰ_t]⊗ℰ_(t. Let B(t)=∫_0^tb(s)ds and B^†(t)=∫_0^tb^†(s)ds be integrated annihilation and creation field operators on ℰ, respectively. In this paper, the laser field is supposed to be canonical, that is, dB(t)dB^†(t)=dt, dB^†(t)dB(t)=dB^†(t)dB^†(t)=dB(t)dB(t)=0.Let us denote by ℋ_𝒬 the Hilbert space of the quantum system G and suppose (ℋ_𝒬)=n<∞. 
The composite system composed of the atomic system and the field is assumed to be isolated. Then its temporal Heisenberg-picture evolution can be described by a unitary operator U(t) on the tensor product Hilbert space ℋ_𝒬⊗ℰ, which satisfies the following Hudson-Parthasarathy quantum stochastic differential equation[We have assumed ħ=1 by using atomic units in this paper.]:dU(t)={(-H-1/2L^†L)dt+LdB^†(t)-L^†dB(t)}U(t)with the initial condition U(0)=I, where H is the initial Hamiltonian of the quantum system G, and L is a coupling operator, or measurement operator that describes how the system interact with the input field. The joint system state π_0⊗|υ><υ| is given by some quantum state π_0 in ℋ_𝒬 and the vacuum state |υ>. In the Heisenberg picture, an initial system operator X evolves to j_t(X)=U^†(t)(X⊗ I)U(t) at time t. Using the quantum Itô rules, j_t(X) satisfies the following quantum master equation:dj_t(X)= j_t(ℒ_L, H(X))dt +j_t([L^†,X])dB(t)+j_t([X,L])dB^†(t),where ℒ_L, H is the so-called Lindblad generator:ℒ_L, H(X)=[H,X]+L^†XL-1/2(L^†LX+XL^†L).A homodyne detector measures the observable Y(t)=U^†(t)Q(t)U(t) where Q(t)=B(t)+B^†(t) is the real quadrature of the input laser field and generates a classical photocurrent signal. The so-called self-nondemolition property, i.e., [Y(s), Y(t)]=0 for all s≤ t enables monitoring Y(t) continuously and interpreting Y(t) as a classical signal (photocurrent). By the Itô rules, Y(t) satisfies dY(t)=U^†(t)(L+L^†)U(t)dt+dQ(t).Equations (<ref>) and (<ref>) form the system-observation pair of our model. As in classical stochastic control theory, the goal of quantum filtering is to find the least-mean-square estimate of the system observable j_t(X) given the prior observations Y(s), 0≤ s ≤ t, that is, to derive an expression for the quantum conditional expectation π_t(X)=𝔼(j_t(X)|𝒴(t)) where 𝒴(t) is the commutative von Neumann algebra generated by the observation process Y(s), 0≤ s ≤ t. From the so-called nondemolition condition, i.e.,[j_t(X), Y(s)]=0 for all s≤ t, π_t(X) can be isomorphically interpreted as a classical conditional expectation and is thus well defined (<cit.>, <cit.>). The dynamic equation satisfied by π_t(X) has been derived as(<cit.>, <cit.>):dπ_t(X)= π_t(ℒ_L, H(X))dt+(π_t(L^†X+XL)..-π_t(L^†+L)π_t(X))(dY(t)-π_t(L^†+L)dt).In this paper, we are mainly concerned with the adjoint form of the quantum filter in (<ref>). Defining the conditional quantum density matrix ρ_t by π_t(X)=(ρ_tX), the filter equation in (<ref>) yieldsdρ_t=ℒ_L, H^†(ρ_t)dt+𝒟_L(ρ_t)(dY(t)-(ρ_t(L+L^†)dt),with ρ_0=π_0. Here ℒ_L, H^† is the adjoint Lindblad generator:ℒ_L, H^†(X)=-[H,X]+LXL^†-1/2(L^†LX+XL^†L),and 𝒟_L(X)=LX+XL^†-X(X(L+L^†)).Note that the quantum filter (<ref>) is a classical stochastic differential equation that is driven by the Wiener type classical photocurrent signal Y(t) and can thus be conveniently implemented on a classical signal processor. Equation (<ref>) has been widely used in applications including quantum state estimation and quantum feedback control (<cit.>, <cit.>, <cit.>), where in time calculation of (<ref>) is essential. However, one has to solve a system of n^2-1 recursive Itô stochastic differential equations in order to determine the conditional probability density ρ_t defined on ℋ_𝒬. A high computational cost will arise if the atomic system has a large number of energy levels. 
The main goal of this paper is to reduce the dimension of the filtering equations while guaranteeing acceptable approximation performance.§.§ Quantum Information GeometryThis subsection will introduce some foundations of quantum information geometry theory. A more detailed formulation can be found in Chapter 7 of the book (<cit.>). Let the set of all self-adjoint operators on the Hilbert space ℋ_𝒬 be denoted by𝔸={A|A=A^†}.Subsequently, we focus on the geometry of the set of nonnegative self-adjoint operators which is denoted byℚ={ρ|ρ≥ 0, ρ∈𝔸}.Hence ℚ is a closed subset of 𝔸 and is naturally regarded as a real manifold with dimension (ℚ)=n^2. Apparently, the tangent space at each point ρ to ℚ, which is denoted by 𝒯_ρ(ℚ), is identified with 𝔸. When a tangent vector X∈𝒯_ρ(ℚ) is considered as an element of 𝔸 by this identification, we denote it by X^(m) and call it the mixture representation (m-representation) of X. When a coordinate system [ε^i], i=1,2,...,n^2, is given on ℚ so that each state is parameterised as ρ_ε, the m-representation of the natural basis vector of the tangent vector space is identified with(∂_i)^(m)=∂_i,where ∂_i:=∂ρ_ε /∂ε^i. Naturally, {∂_i} are linearly independent and𝒯_ρ_ε(ℚ)={∂_i}.A differentiable manifold is not naturally endowed with an inner product structure. We need to add a Riemannian structure to the manifold. To be specific, we define a Riemannian metric on ℚ. The symmetrized inner product is employed to define the inner product {≪,≫_ρ, ρ∈ℚ} on 𝔸 (<cit.>):≪ A,B≫_ρ=1/2(ρ AB+ρ BA), ∀ A, B∈𝔸.Based on this inner product, we define another useful representation called the e-representation of a tangent vector X∈𝒯_ρ(ℚ) as the self-adjoint operator X^(e)∈𝔸 satisfying≪ X^(e), A≫_ρ=(X^(m)A), ∀ A∈𝔸.Using the e-representation defined above, we define an inner product <,> on 𝒯_ρ(ℚ) by <X,Y>_ρ =≪ X^(e), Y^(e)≫_ρ=(X^(m)Y^(e)),∀ X, Y∈𝒯_ρ(ℚ).Then g=<,> forms a Riemmanian metric on ℚ which may be regarded as a quantum version of the Fisher metric. The components of this metric are given byg_ij=<∂_i,∂_j>_ρ=(∂_i^(m)∂_j^(e)).Example 2.1.Consider a qubit system, the Hilbert space ℋ_𝒬 is identified with ℂ^2. DenoteQ_1=I+σ_z/2,Q_2=I-σ_z/2,Q_3=σ_x, Q_4=σ_y,whereσ_x,σ_y,σ_z are Pauli matrices described by σ_x= ( [ 0 1; 1 0 ]), σ_y= ( [0 -i;i0 ]) σ_z= ( [10;0 -1 ]).Then Q_i∈ℚ⊂𝔸. Each ρ_ε∈ℚ can be represented byρ_ε=∑_i=1^4ε^i Q_i.In this case, one has(∂_i)^(m)=∂_i=Q_i,and𝒯_ρ_ε(ℚ)={∂_i},respectively. Then, given any X∈𝒯_ρ_ε(ℚ), its m-representation X^(m) is a linear combination of Q_i and its e-representation can be derived from (<ref>). § AN EXPONENTIAL QUANTUM PROJECTION FILTER: DESIGN AND ANALYSISIn this section, we propose a projection filtering approach to approximating the quantum filter equation in (<ref>), using differential geometric methods in quantum information geometry theory. The basic idea of the projection filtering strategy is illustrated in Fig. 1. We consider to apply a projection operation to a space of unnormalized quantum density operators and map the optimal quantum filter equation onto a fixed lower-dimensional submanifold. A natural basis will be derived for the tangent space at each point of this submanifold, and a local projection operation can be defined with respect to a quantum Fisher metric to map the infinitesimal increments generated by the quantum filter equation onto such tangent spaces. The resulting stochastic vector field on the submanifold then defines the dynamics of the approximation filter. 
In this paper, we consider to use a submanifold consisting of an unnormalized exponential family of quantum density operators. It is noted that quantum density operators in the exponential form is useful in practice, examples being Gaussian states and general thermal states (<cit.>, <cit.>).§.§ Design of the Quantum Projection FilterThe quantum projection filter equation will be derived in this subsection. We start from the unnormalized version of the quantum filter equation in (<ref>):dρ̅_t=ℒ_L, H^†(ρ̅_t)dt+(Lρ̅_t+ρ̅_tL^†)dY(t),where ρ̅_t is the unnormalized information state corresponding to ρ_t such that ρ_t=ρ̅_t/(ρ̅_t). ρ̅_t is initially set to be ρ̅_0=ρ_0=π_0. The unnormalized filter equation (<ref>) is used since its linear form is easier to manipulate compared with the nonlinear filter equation in (<ref>). It is worth mentioning that in order to illustrate the unnormalized quantum filter using a differential manifold structure, one must interpret the stochastic differential equation in (<ref>)using Stratonovich integral theory because Itô's rule is incompatible with a manifold structure <cit.>. We have the following result.Lemma 3.1. The Itô quantum stochastic differential equation in (<ref>) can be equivalently rewritten as the following Stratonovich quantum stochastic differential equation:dρ̅_t=(-[H, ρ̅_t]-𝒮_L(ρ̅_t))dt+(Lρ̅_t+ρ̅_tL^†)∘ dY(t),where𝒮_L(ρ̅_t)=(L+L^†)Lρ̅_t+ρ̅_tL^†(L+L^†)/2.Proof. The proof of Lemma 3.1 is given in Appendix.Now we design the quantum projection filter following the scheme illustrated in Fig. 1. On one hand, it follows from (<ref>) that ρ̅_t is nonnegative and self-adjoint. Thus the totality of the unnormalized quantum density matrix ρ̅_t is identified with the set ℚ in (<ref>). It can be verified that the two terms -i[H, ρ̅_t]-𝒮_L(ρ̅_t) and Lρ̅_t+ρ̅_tL^† on the right hand side of (<ref>) are vectors in 𝒯_ρ(ℚ), or equivalently, operators belonging to the set 𝔸 in (<ref>).On the other hand, the submanifold is designed to be a C^∞ manifold consisting of an exponential family of unnormalized quantum density operators:𝕊={ρ̅_θ}={e^1/2∑_i=1^mθ_iA_iρ_0e^1/2∑_i=1^mθ_iA_i},where the submanifold operators A_i∈𝔸, i∈{1,2,...,m} are mutually commutating and pre-designed.We suppose that the entire submanifold 𝕊 can be covered by a single coordinate chart (𝕊,θ=(θ_1,...,θ_m)∈Θ), where Θ is an open subset of ℝ^m containing the origin. Then we have {𝕊}=m. According to the chain rule in Stratonovich stochastic calculus, we have dρ̅_θ=∑_i=1^m ∂̅_i∘ dθ_i,where ∂̅_i:=∂ρ̅_θ/∂θ_i. Assuming the set {∂̅_1,...,∂̅_m} is linearly independent, then this set forms an m-representation of the natural basis of 𝒯_ρ̅_θ(𝕊); i.e., the tangent vector space at each point ρ̅_θ to 𝕊. We have𝒯_ρ̅_θ(𝕊)={∂̅_i, i=1,...,m.}.A direct calculation using Stratonovich stochastic calculus yields∂ρ̅_θ/∂θ_i=1/2(A_iρ̅_θ+ρ̅_θA_i).It then follows directly from (<ref>) and (<ref>) that ∂̅_i^(e)=A_i. Thus each component of the quantum Fisher metric in (<ref>) is given by a real-valued function of θ:g_ij(θ) =≪∂̅_i^(e), ∂̅_j^(e)≫_ρ̅_θ=(ρ̅_θA_iA_j)=(ρ_0e^1/2∑_i=1^mθ_iA_iA_iA_je^1/2∑_i=1^mθ_iA_i),because the operator e^1/2∑_i=1^mθ_iA_iA_iA_je^1/2∑_i=1^mθ_iA_i is self-adjoint. The quantum Fisher information matrix is an m× m dimensional real matrix given by G(θ)=(g_ij(θ)). 
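As a quick sanity check of these identities, the following NumPy sketch verifies numerically that A_i acts as the e-representation of the natural basis vector ∂̅_i and that the resulting Fisher matrix entries reduce to (ρ̅_θ A_i A_j); the 4-level example, the choice of two commuting projectors as the operators A_i and the test point θ are illustrative assumptions only.

import numpy as np
from scipy.linalg import expm

# two commuting self-adjoint submanifold operators on a 4-level system
A = [np.diag([1.0, 0.0, 0.0, 0.0]).astype(complex),
     np.diag([0.0, 0.0, 0.0, 1.0]).astype(complex)]

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho0 = M @ M.conj().T
rho0 /= np.trace(rho0).real                # generic initial density operator

theta = np.array([0.3, -0.7])              # arbitrary test point of the chart
Lam = 0.5 * sum(t * a for t, a in zip(theta, A))
rho_bar = expm(Lam) @ rho0 @ expm(Lam)     # point of the exponential family

# m-representation of the natural basis: d(rho_bar)/d(theta_i) = (A_i rho_bar + rho_bar A_i)/2
dbar = [0.5 * (a @ rho_bar + rho_bar @ a) for a in A]

# e-representation check: (1/2) tr(rho_bar (A_i B + B A_i)) equals tr(dbar_i B) for self-adjoint B
B = rng.normal(size=(4, 4))
B = B + B.T
for a, d in zip(A, dbar):
    lhs = 0.5 * np.trace(rho_bar @ (a @ B + B @ a))
    rhs = np.trace(d @ B)
    print("e-representation defect:", abs(lhs - rhs))

# Fisher matrix two ways: g_ij = tr(dbar_i A_j) and g_ij = tr(rho_bar A_i A_j)
G1 = np.array([[np.trace(d @ a).real for a in A] for d in dbar])
G2 = np.array([[np.trace(rho_bar @ ai @ aj).real for aj in A] for ai in A])
print("G(theta) =")
print(G1)
print("max difference between the two expressions:", np.max(np.abs(G1 - G2)))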
Then an orthogonal projection operation Π_θ can be defined for every θ∈Θ as follows:𝔸⟶𝒯_ρ̅_θ(𝕊)ν⟼∑_i=1^m∑_j=1^m g^ij(θ) <ν,∂̅_j>_ρ̅_θ∂̅_i,where the matrix (g^ij(θ)) is the inverse of the quantum information matrix G(θ).Consider a curve in 𝕊 around the point ρ̅_θ to be of the form ζ: t ↦ρ̅_θ_t. This corresponds to a real curve γ: t ↦θ_t in Θ around the real vector θ, through the coordinate chart (𝕊,θ). Let us consider that the curve ζ starts from the initial condition thatρ̅_θ_0=π_0, or equivalently, the curve ζ starts from θ_0=0. The unnormalized exponential quantum projection filter is then defined as the following quantum stochastic differential equation on the m-dimensional differentiable manifold 𝕊:dρ̅_θ_t =Π_θ_t(-[H,ρ̅_θ_t])dt+Π_θ_t(-𝒮_L(ρ̅_θ_t))dt+Π_θ_t(Lρ̅_θ_t+ρ̅_θ_tL^†)∘ dY(t).From the definition of the manifold 𝕊 in (<ref>), the projection quantum filter can be equivalently written using the equations satisfied by the real curve γ in Θ. Denote θ_t=(θ_1(t),...,θ_m(t))'. An explicit form of the curve equations is given in the following theorem.Theorem 3.1. The real curve γ: t ↦θ_t satisfies the following recursive stochastic differential equation:dθ_t=G(θ_t)^-1{Ξ(θ_t)dt+Γ(θ_t)∘ dY(t)},with the initial conditions θ_i(0)=0, i=1,...,m, where Ξ(θ_t) and Γ(θ_t) are both m-dimensional column vectors of real functions on θ_t. The jth elements of these quantities are given byΞ_j(θ_t)=(ρ̅_θ_t([H, A_j]-A_j(L+L^†)L+L^†(L+L^†)A_j/2)),and Γ_j(θ_t)=(ρ̅_θ_t(A_jL+L^†A_j)),respectively. Proof. Applying the projection operation in (<ref>) and the chain rule in (<ref>) to the filter equation (<ref>) yieldsdρ̅_θ_t=∑_i=1^m ∂̅_i∘ dθ_i(t) =∑_i=1^m∑_j=1^m g^ij(θ) ((-[H,ρ̅_θ_t]-𝒮_L(ρ̅_θ_t))A_j)∂̅_idt+∑_i=1^m∑_j=1^m g^ij(θ) ((Lρ̅_θ_t+ρ̅_θ_tL^†)A_j)∂̅_i∘ dY(t) =∑_i=1^m∑_j=1^m g^ij(θ)(ρ̅_θ_t([H,A_j]-𝒮_L^†(A_j)))dt∂̅_i+∑_i=1^m∑_j=1^m g^ij(θ)(ρ̅_θ_t(A_jL+L^†A_j))∘ dY(t)∂̅_i,where 𝒮_L^†(X)=X(L+L^†)L+L^†(L+L^†)X/2 is the adjoint of the operator 𝒮_L in (<ref>). The differential equation in (<ref>) can be obtained by comparing the coefficients of the natural basis {∂̅_i} from both sides of (<ref>).The stochastic differential equation (<ref>) combined with the equation (<ref>) determines the unnormalized projection quantum density operator. In this paper, (<ref>) or (<ref>) is called the quantum projection filter. The approximate quantum information state ρ̃_t can be then simply obtained asρ̃_t=ρ̅_θ_t/(ρ̅_θ_t).It can be observed that only a system of m stochastic differential equations is needed to be calculated in order to determine ρ̃_t. Recall that one has to calculate a collection of n^2-1 stochastic differential equations in determining the information state ρ_t in the quantum filter equation (<ref>). Thus the computational cost would be reduced significantly if the number m is chosen to be small.The above design procedure requires the predesign of the submanifold operators A_i, i=1,...,m. The remainder of this subsection will be devoted to a convenient design method for these self-adjoint operators through reduction of the local approximation errors. In fact, the proposed approximation scheme in Theorem 3.1 is implemented through two steps. First, the right-hand side of (<ref>) is evaluated at the current projection filter quantum density operator ρ̅_θ(t) on 𝕊, instead of the true density operator ρ̅_t. However, the right-hand side vectors -i[H, ρ̅_t], -𝒮_L(ρ̅_t) and Lρ̅_t+ρ̅_tL^† will generally make the solutions leave the manifold 𝕊. 
Thus a second approximation is made by projecting these vector fields onto the linear tangent vector space 𝒯_ρ̅_θ(𝕊). In the remainder of this subsection, we will present a design of the submanifold 𝕊 by considering the local errors for the quantum projection filter occurring in the second approximation step at time t.Following similar ideas as in <cit.>, we define at each point ρ̅_θ_t the prediction residual as 𝔓(t)=-[H, ρ̅_θ_t]-Π_θ_t(-[H, ρ̅_θ_t]),and two correction residuals asℭ_1(t)=-𝒮_L(ρ̅_θ_t)-Π_θ_t(-𝒮_L(ρ̅_θ_t))andℭ_2(t)=Lρ̅_θ_t+ρ̅_θ_tL^†-Π_θ_t(Lρ̅_θ_t+ρ̅_θ_tL^†),respectively.Although it is not required in Theorem 3.1, the following assumption will be essential in the subsequent analysis in this paper. Assumption 3.1. The coupling operator is self-adjoint, i.e., L=L^†.This assumption is practically reasonable in many experimental settings; e.g., trapping a cold atomic ensemble in an optical cavity (<cit.>, <cit.>). Since L is self-adjoint, it admits a spectral decomposition L=∑_i=1^n_0λ_iP_L_i, where n_0≤ n is the number of nonzero eigenvalues of L, the set {λ_i} contains all of the nonzero real eigenvalues of L, and {P_L_i} is a set of projection operators that satisfies P_L_jP_L_k=δ_jkP_L_k. Then one has the following result:Theorem 3.2. The correction residuals ℭ_1 and ℭ_2 are both identically zero for all t≥ 0, if the submanifold in (<ref>) is designed according to m=n_0,A_i=P_L_i.Moreover, the exponential quantum projection filter (<ref>) becomes dθ_t=G(θ_t)^-1(ρ̅_θ_t[H, A_j])dt-2α dt+2β dY(t),where α=(λ_1^2,...,λ_m^2)' and β=(λ_1,...,λ_m)'.Proof. From the definitions of the natural basis in (<ref>), the projection operation in (<ref>) and the correction residuals in (<ref>) and (<ref>), one hasℭ_1(t)=Π_θ_t(L^2ρ̅_θ_t+ρ̅_θ_tL^2)-(L^2ρ̅_θ_t+ρ̅_θ_tL^2) =∑_k=1^mλ_k^2{Π_θ_t(P_L_k^2ρ̅_θ_t+ρ̅_θ_tP_L_k^2)-(P_L_k^2ρ̅_θ_t+ρ̅_θ_tP_L_k^2)}=∑_k=1^mλ_k^2{Π_θ_t(A_kρ̅_θ_t+ρ̅_θ_tA_k)-(A_kρ̅_θ_t+ρ̅_θ_tA_k)}=∑_k=1^m2λ_k^2{Π_θ_t(∂̅_k)-∂̅_k}=0,andℭ_2(t) =Π_θ_t(Lρ̅_θ_t+ρ̅_θ_tL)-(Lρ̅_θ_t+ρ̅_θ_tL) =∑_k=1^m2λ_k{Π_θ_t(∂̅_k)-∂̅_k}=0.Through the design method in Theorem 3.1, the components of the quantum Fisher metric in (<ref>) are given byg_ij(θ)=(ρ̅_θA_iA_j)=δ_ij(ρ̅_θA_i), i,j∈{1,...,m},and the quantum Fisher matrix G(θ)=(g_ij(θ)) becomes a diagonal matrixG(θ)={(ρ̅_θA_1),...,(ρ̅_θA_m)}.The jth elements of the vector functions Ξ(θ_t) and Γ(θ_t) in (<ref>) are given byΞ_j(θ_t) ={ρ̅_θ_t([H, A_j]-(A_jL^2+L^2A_j))}, =(ρ̅_θ_t[H, A_j])-2λ_j^2(ρ̅_θA_j),andΓ_j(θ_t)=(ρ̅_θ_t(A_jL+LA_j))=2λ_j(ρ̅_θA_j),respectively. Then (<ref>) can be concluded by substituting (<ref>), (<ref>) and (<ref>) into the filter equation (<ref>).It has been shown in Theorem 3.1 that, by using the design scheme as in (<ref>), the correction residuals ℭ_1 and ℭ_2 are both eliminated while the prediction residual 𝔓(t) still exists. In general, it is difficult to analyze 𝔓(t) which depends on the trajectory of the quantum projection filter. However, in a special case, an upper bound of 𝔓(t) can be derived and the exponential quantum projection filter (<ref>) can be further simplified. The unnormalized quantum filter (<ref>) and the exponential quantum filter (<ref>) are both driven by the classical photocurrent Y(t) which is a Wiener process with bounded drift under some classical probability measure P. Using Girsanov's theorem, however, one can always find a measure P' that is equivalent to P such that Y(t) is a Wiener process with zero drift on the interval [0,T], where T>0 is a fixed time called the final time (Page 458, <cit.>). 
Let 𝔼̂ denote the expectation operation with respect to the measure P'. One has the following result. Theorem 3.3. When [H, L]=0, if the submanifold in (<ref>) is designed according to (<ref>), the exponential quantum projection filter (<ref>) becomes dθ_t=-2α dt+2β dY(t),and the correction residuals ℭ_1 and ℭ_2 are both identically zero for all t≥ 0. Moreover, the prediction residual 𝔓(t) satisfies𝔼̂√((𝔓(t)^2))≤√((X_0^2)), t≥ 0,where X_0=-[H, ρ_0].Proof. Since [H, L]=0 and A_i=P_L_i is the projection operator of L, one has [H, A_i]=0, i=1,2,...m. Then the evolution of the coordinate system θ_t in (<ref>) reduces to a set of independent Itô stochastic differential equations in (<ref>). Next we prove (<ref>). Denote Λ(t)=1/2∑_i=1^mθ_i(t)A_i. Then the submanifold (<ref>) can be rewritten as 𝕊={ρ̅_θ}={e^Λ(t)ρ_0e^Λ(t)} and𝔓(t)=-[H, ρ̅_θ_t]-Π_θ_t(-[H, ρ̅_θ_t])= e^Λ(t)X_0e^Λ(t)-∑_i=1^m∑_j=1^m g^ij(θ) (e^Λ(t)X_0e^Λ(t)A_j)∂̅_i, = e^Λ(t)X_0e^Λ(t)+∑_i=1^m∑_j=1^m g^ij(θ) (ρ̅_θ[A_j, H])∂̅_i,= e^Λ(t)X_0e^Λ(t).It then follows from Lemma A2 in the Appendix that 𝔼̂√((𝔓(t)^2))=𝔼̂√((e^2Λ(t)X_0e^2Λ(t)X_0))≤ 𝔼̂√(∑_i=1^ms_i(e^2Λ(t)X_0e^2Λ(t)X_0))≤ 𝔼̂√(∑_i=1^ms_i^2(e^2Λ(t))s_i^2(X_0))≤ √(∑_i=1^ms_i^2(X_0))𝔼̂s_1(e^2Λ(t))=√((X_0^2))max_i𝔼̂e^θ_i(t).By using the Itô rules, one can calculate from (<ref>) thatde^θ_i(t) =-2λ_i^2e^θ_i(t)dt+1/2e^θ_i(t)(2λ_i)^2dt+2λ_ie^θ_i(t)dY(t) =2λ_ie^θ_i(t)dY(t),which implies that 𝔼̂(e^θ_i(t))≡𝔼̂(e^θ_i(0))=1. Then (<ref>) can be concluded from (<ref>) and (<ref>). The proof is thus completed.Under some conditions, the exponential quantum projection filter could be an exact expression for the quantum filter (<ref>). The following model reduction result is a corollary of Theorem 3.2.Corollary 3.1. When the system Hamiltonian H=0, ρ̅_θ_t≡ρ̅_t if the submanifold is designed according to (<ref>). §.§ Practical Stability of the Quantum Projection FilterIn this subsection, we will analyze the time behaviour of theexponential quantum projection filter on the interval [0,T]. Before proceeding, the following notation is introduced. Let 𝒪 be an orthonormal basis of ℋ_𝒬. For the quantum filter equation in (<ref>) and for any ψ∈𝒪, letT_ψ=inf{t≥ 0|ρ_t=|ψ><ψ|}.Definition 3.1. (<cit.>) The quantum filter (<ref>) fulfils a nondemolition condition if there exists an orthonormal basis 𝒪 such that for any ψ∈𝒪(ρ_t|ψ><ψ|)=1, ∀ t≥ T_ψ.The stable states |ψ><ψ|, ψ∈𝒪, are called pointer states of the quantum filter.Let |ψ_0><ψ_0| be a particular pointer state of the quantum filter (<ref>). Then the Hilbert space ℋ_𝒬 can be decomposed in the direct sum ℋ_𝒬=ℋ_S⊕ℋ_R where ℋ_S=ℂ|ψ_0>. This yields a convenient decomposition of all matrices on X∈ℋ_𝒬, that is, by choosing an appropriate basis, X can be written asX=( X_S X_PX_Q X_R),where X_S, X_R, X_P and X_Q are operators from ℋ_S to ℋ_S, ℋ_R to ℋ_R, ℋ_R to ℋ_S, and ℋ_S to ℋ_R, respectively. Denote P̅_S=( I 00 0) and P̅_R=( 0 00 I ) the orthogonal projectors on ℋ_S and ℋ_R, respectively. Definition 3.2. The quantum filter (<ref>) is said to be strongly globally asymptotically stable (SGAS), if it fulfils a nondemolition condition for an orthonormal basis 𝒪 and there is a pointer state |ψ_0><ψ_0|∈𝒪 such that, ∀ρ_0,lim_t →∞ρ_t-P̅_S ρ_t P̅_S=0,Similar to (<ref>), define an operation from ℋ_R to ℋ_R:ℒ_L_R, H_R(X_R)=[H_R,X_R]+L_R^†X_RL_R-1/2(L_R^†L_RX_R+X_RL_R^†L_R).and denote its spectral abscissa as:Δ_0:=min{-(λ)|λ∈(ℒ_L_R, H_R)}.The following lemma can be concluded directly from Theorem 3 in (<cit.>) and Lemma 2.7 in (<cit.>).Lemma 3.2. 
The quantum filter (<ref>) is SGAS, if and only if [H, L]=0 and Δ_0>0.In addition, the following useful lemma is introduced.Lemma 3.3. (<cit.>) For any constant scalar ϵ>0, there exists an operator K_R>0 on ℋ_R such thatℒ_L_R, H_R(K_R)≤ -(Δ_0-ϵ)K_R.We are ready to present the main result of this subsection.Theorem 3.4. Suppose a continuously monitored open quantum system modelled by (<ref>) and (<ref>) has a quantum filter (<ref>) which is SGAS. Then for any positive scalar ϵ such that Δ_0>ϵ>0 there exists a positive operator K_R>I such that the solution to the exponential quantum projection filter (<ref>) satisfies:(P̅_Rρ̅_θ_t)≤ ((K_R)(P̅_Rρ_0)-(K_R)/Δ_0-ϵs_1(X_0))e^-(Δ_0-ϵ)t+(K_R)/Δ_0-ϵs_1(X_0), ,where X_0=-[H,ρ_0] as defined in Theorem 3.3.Proof. Since the quantum filter (<ref>) is SGAS, it is implied from Lemma 3.2 that [H, L]=0 and Δ_0>0. By designing the submanifold according to (<ref>), it follows from Theorems 3.2 and 3.3 that the correction residuals defined in (<ref>) and (<ref>) vanish, and the prediction residual in (<ref>) becomes𝔓(t) =e^1/2∑_i=1^mθ_i(t)A_iX_0e^1/2∑_i=1^mθ_i(t)A_i.The unnormalized exponential quantum projection filter (<ref>) can be rewritten as dρ̅_θ_t ={-[H,ρ̅_θ_t]-𝒮_L(ρ̅_θ_t)}dt-𝔓(t)dt+(Lρ̅_θ_t+ρ̅_θ_tL)∘ dY(t),which can be converted into an Itô stochastic differential equation using Lemma 1:dρ̅_θ_t =ℒ_L, H^†(ρ̅_θ_t)dt-𝔓(t)dt+(Lρ̅_θ_t+ρ̅_θ_tL)dY(t).Let ρ̂_θ_t=𝔼̂(ρ̅_θ_t) and 𝔓̂(t)=𝔼̂(𝔓(t)). Also, let ρ̂_θ_t=( (ρ̂_θ_t)_S (ρ̂_θ_t)_P(ρ̂_θ_t)_Q (ρ̂_θ_t)_R ), ρ_0=( (ρ_0)_S (ρ_0)_P(ρ_0)_Q (ρ_0)_R) and 𝔓̂(t)=(𝔓_S(t)𝔓̂_P(t) 𝔓_Q(t)𝔓̂_R(t) ) be the decompositions of ρ̂_θ_t, ρ_0 and 𝔓̂(t) corresponding to the subspace decomposition ℋ_𝒬=ℋ_S⊕ℋ_R, respectively. A direct calculation on (<ref>) yieldsd(ρ̂_θ_t)_R={ℒ_L_R, H_R^†((ρ̂_θ_t)_R)-𝔓̂_R(t)}dt,which has a solution (ρ̂_θ_t)_R=e^tℒ_L_R, H_R^†(ρ_0)_R-∫_0^te^(t-s)ℒ_L_R, H_R^†𝔓̂_R(s)ds.Let K=( 0 00 K_R) and V_K(ρ̅_θ_t)=(Kρ̂_θ_t). Then it follows from (<ref>) thatV_K(ρ̅_θ_t)=(Kρ̂_θ_t)=(K_R(ρ̂_θ_t)_R)=(e^tℒ_L_R, H_R^†(ρ_0)_RK_R)-∫_0^t(e^(t-s)ℒ_L_R, H_R^†𝔓̂_R(s)K_R)ds=(e^tℒ_L_R, H_RK_R(ρ_0)_R)-∫_0^t(e^(t-s)ℒ_L_R, H_RK_R𝔓̂_R(s))ds.On one hand, because e^tℒ_L_R, H_R is a strictly positive map and K_R>0, it follows that e^tℒ_L_R, H_RK_R>0 (<cit.>). Then from Lemma 3.3 and Lemma A1 in Appendix, one has that by choosing ϵ<Δ_0,(e^tℒ_L_R, H_RK_R(ρ_0)_R) ≤(e^tℒ_L_R, H_RK_R)((ρ_0)_R)≤ e^-(Δ_0-ϵ)t(K_R)((ρ_0)_R).One the other hand, based on Theorem 3.3, Lemma 3.3 and Lemma A2 in Appendix,-(e^(t-s)ℒ_L_R, H_RK_R𝔓̂_R(s))≤∑_i=1^n-1s_i(e^(t-s)ℒ_L_R, H_RK_R𝔓̂_R(s))≤∑_i=1^n-1s_i(e^(t-s)ℒ_L_R, H_RK_R)s_i(𝔓̂_R(s))≤(e^(t-s)ℒ_L_R, H_RK_R)s_1(𝔓̂_R(s))≤ e^-(Δ_0-ϵ)(t-s)(K_R)𝔼̂{s_1^2(∑_i=1^m e^1/2θ_i(t)A_i)}s_1(X_0)=e^-(Δ_0-ϵ)(t-s)(K_R)s_1(X_0)max_i𝔼̂(e^θ_i(t)) =e^-(Δ_0-ϵ)(t-s)(K_R)s_1(X_0).Then one has that-∫_0^t(e^(t-s)ℒ_L_R, H_RK_R𝔓̂_R(s))ds≤∫_0^te^-(Δ_0-ϵ)(t-s)ds(K_R)s_1(X_0)=s_1(X_0)(K_R)/Δ_0-ϵ(1-e^-(Δ_0-ϵ)t).In addition, it is noted that the inequality (<ref>) still holds when multiplying K_R by any positive scalar. Thus one can choose K_R ≥ I such that V_K(ρ̅_θ_t)=(Kρ̂_θ_t)≥(P̅_Rρ̂_θ_t)=𝔼̂|(P̅_Rρ̅_θ_t)|,because (P̅_Rρ̅_θ_t) is strictly positive.Then (<ref>) can be concluded from (<ref>), (<ref>) and (<ref>). It is noted that (P̅_Rρ̅_θ_t) in (<ref>) serves as a linear Lyapunov function candidate for the subspace ℋ_S=ℂ|ψ_0> (See Theorem 1.2 in (<cit.>)). Thus Theorem 3.3 can be treated as an input-to-state stability result for the exponential quantum projection filter. 
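The martingale property 𝔼̂(e^θ_i(t))=1 that underpins the bounds in Theorems 3.3 and 3.4 is also easy to verify numerically. Assuming [H, L]=0 so that Theorem 3.3 applies, the coordinates obey the decoupled Itô equations dθ_i=-2λ_i^2 dt+2λ_i dY(t); the Euler–Maruyama sketch below (step size, horizon and the λ_i values are illustrative and are not those of the later example) simulates them under the zero-drift measure P' and checks that the sample mean of e^θ_i(T) stays close to 1. The approximate state would then be reconstructed as ρ̄_θ_t=e^Λ(t)ρ_0 e^Λ(t) with Λ(t)=1/2∑_iθ_i(t)A_i.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_reduced_filter(lambdas, T=1.0, n_steps=2048, n_traj=5000):
    """Euler-Maruyama integration of d(theta_i) = -2*lambda_i^2 dt + 2*lambda_i dY(t),
    the decoupled coordinate equations of Theorem 3.3, with Y a standard Wiener
    process (zero-drift measure P').  Returns theta(T) for each trajectory."""
    lam = np.asarray(lambdas, dtype=float)
    dt = T / n_steps
    theta = np.zeros((n_traj, lam.size))
    for _ in range(n_steps):
        dY = np.sqrt(dt) * rng.standard_normal((n_traj, 1))   # same innovation drives all i
        theta += -2.0 * lam**2 * dt + 2.0 * lam * dY
    return theta

theta_T = simulate_reduced_filter([0.5, -0.5])     # illustrative lambda_i values

# martingale property used in the proofs:  E'[exp(theta_i(t))] = 1  for every i
print(np.exp(theta_T).mean(axis=0))                # ~ [1.0, 1.0] up to Monte Carlo error
```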
When [H, ρ_0]=0, or equivalently, X_0≡ 0, Theorem 3.3 reduces to an asymptotical stability result. As in Corollary 3.1, we have the following alternative model reduction result:Corollary 3.2. Suppose [H, ρ_0]=0 and the quantum filter (<ref>) fulfils a nondemolition condition, then ρ̅_θ_t≡ρ̅_t if the submanifold is designed according to (<ref>). § EXAMPLE: A SPIN-J SYSTEM WITH DISPERSIVE COUPLINGThe illustrative physical model used consists of an ensemble of N atomic spins interacting dispersively with an electromagnetic laser field (<cit.>, <cit.>, <cit.>). Here N is a positive integer. The atomic sample is placed in a strongly driven and heavily damped optical cavity and all atomic transitions probing state|1> are assumed to be detuned far from the cavity resonance. A homodyne detector is used to continuously monitor the light scattered from the cavity. Similar setups have been exploited in experiments producing spin squeezed states which have practical applications in several metrology tasks like magnetometers (<cit.>) and atomic clocks (<cit.>). The collective properties of N two-level atoms can be conveniently described by a spin-J system, i.e., a collection of N=2J spin-1/2 systems (<cit.>). Let the internal states, |0> and |1>, of each atom be the degenerate two-level ground state. The collective spin operators are given by J_α=∑_k=1^N1/2σ_α^(k) (α=x,y,z), where σ_α^(k)=I^⊗ (k-1)⊗σ_α⊗ I^⊗ (N-k) are the Pauli operators for each particle.Consider applying a magnetic field with regulatable strength u(t) in the y-direction. u(t) can either act as a control signal that depends on the output of the quantum filter in a measurement based feedback control scheme, or an external disturbance signal in quantum estimation. Assuming that the cavity has a sufficiently large decay rate, the cavity dynamics can be adiabatically eliminated (<cit.>, <cit.>, <cit.>) and the temporal evolution of this spin-J system can be described by the following quantum stochastic differential equation:dU(t)= {(-u(t)J_y-1/2μ J_z^2)dt..+√(μ)J_z(dB^†(t)-dB(t))}U(t),where μ is the effective coupling strength. The dispersive interaction between the atomic ensemble and the laser field introduces a phase shift of the cavity field that linearly depends on the total population difference, a quantity characterized by the self-adjoint operator J_z since N is conserved. A continuous and noisy observation of J_z can be accomplished through a quantum nondemolition (QND) measurement of the field observable B(t)+B^†(t), that is dY(t)=2√(μ)U^†(t)J_zU(t)dt+dQ(t),as in (<ref>). The corresponding quantum filter conditioned on the observation process Y(s), 0≤ s ≤ t is given bydρ_t= ℒ_√(μ)J_z, u(t)J_y^†(ρ_t)dt +𝒟_√(μ)J_z(ρ_t)(dY(t)-2√(μ)(ρ_tJ_z)).Calculating the filter equation in (<ref>) is generally equivalent to solving a collection of 4^N-1 stochastic differential equations, which is computational expensive even when the number of atoms in the ensemble is small. In this simulation, we consider the simplest case that N=2, that is, a two-qubits system with dispersive coupling (<cit.>). In this case, the solution to a total of 15 stochastic differential equations is generally necessary. Nevertheless, based on Theorem 3.2, one can approximately calculate the information state ρ_t using an exponential quantum projection filter consisting of only 2 stochastic differential equations. To be specific, the submanifold 𝕊 in (<ref>) can be designed according to (<ref>). 
That is, the dimension of the manifold is chosen to be m=2 and the two submanifold operators are given byA_1=P_L_1=(1,0,0,0),A_2=P_L_2=(0,0,0,1)respectively, where P_L_1 and P_L_2 are the two projection operators of √(μ)J_z corresponding to its two nonzero eigenvalues though spectral decomposition. The exponential quantum projection filter is given by (<ref>) with λ_1=√(μ)/2 and λ_2=-√(μ)/2. For the system state, we assume that initially the first atom is prepared to be in 0.25*|1><1|+0.75*|0><0| and the second atom is prepared to be in 0.5*|1><1|+0.5*|0><0|. In the simulation, the photocurrent is simulated from dY(t)=2√(μ)(ρ_tJ_z)dt+dW(t) and is used to drive the exponential quantum projection filter.Monte Carlo simulations have been conducted by using the discretization approach as in <cit.>. The simulation parameters used are as follows: the simulation interval is t∈ [0,T] with T=1, the normally distributed variance is δ t=T/N_0 with N_0=2^12, and the step size is chosen to be Δ t=2δ t. The effective coupling strength μ is set to be μ=1. The disturbance signal, as shown in Fig. 2, is set to be u(t)=5e^-5ta where a is a random variable with standard normal distribution. The projection filtering strategy in Theorem 3.2 allows us to approximate the quantum information state ρ_t by ρ̃_t=ρ̅_θ_t/(ρ̅_θ_t). The approximation performance of the proposed approximation filtering scheme is demonstrated by comparing the probabilities that each qubit is in its one particular internal states, calculated from the quantum filter equation in (<ref>) and the exponential quantum projection filter equation in (<ref>), respectively. In other words, we compare the trajectories of the process ρ_t(1,1), ρ_t(2,2), ρ_t(3,3) and ρ_t(4,4) with ρ̃_t(1,1), ρ̃_t(2,2), ρ̃_t(3,3) and ρ̃_t(4,4), respectively, which are depicted in the four subfigures of Fig. 3. The Frobenius norm of the difference between ρ_t and ρ̃_t, i.e., √(((ρ_t-ρ̃_t)^2)) is shown in Fig. 4. One can observe that ρ_t and ρ̃_t are very close over this time interval. This implies that in feedback control of the atomic ensemble one may use ρ̃_t instead of ρ_t in the controller in order to achieve a computationally more efficient design, although there might be small reduction of the control performance because of the approximation errors as shown in Fig. 3. Simulations also show that when the disturbance signal is set to be u(t)≡ 0, then ρ_t≡ρ̃_t, which coincides with Corollary 3.1. In order to illustrate the stability analysis result in Theorem 3.4, we further consider the case that the external magnetic field with strength u(t) is applied to the atomic ensemble in the z-direction. In this case, the system Hamiltonian is given by H=u(t)J_z and thus commutes with the measurement operator L=√(μ)J_z. All other parameters and settings remain the same and the simulation results are shown in Fig. 5. One can observe that ρ_t in the filter (66) converges asymptotically to the target state ρ_tg=(0,0.25,0,0.75), while the corresponding quantum projection filter is input-to-state stable, which is as expected from Theorem 3.4.§ CONCLUSIONSIn this paper, a quantum projection filtering strategy is developed for a class of open quantum systems subject to homodyne detection. An exponential quantum projection filter is derived by defining a Riemann metric structure on a manifold consisting of an exponential family of unnormalized quantum density operators, which enables more efficient calculation of the quantum information state. 
The convergence capability of the exponential quantum projection filter for a special class of open quantum systems is also discussed. Simulations from an atomic ensemble with dispersive coupling show that the exponential quantum projection filter is able to approximate the quantum filter with a high accuracy. A number of open problems that deserve further research efforts are summarized as follows: * Design of a submanifold such that the approximation errors are minimized;* Application of the quantum projection filter to feedback control of quantum systems;* Quantum projection filtering for open quantum systems with multiple Lindblad operators; and* Extension of the approach to the case of infinite dimensional quantum systems. § APPENDIXLemma A1. (Page 269, <cit.>). If A and B are positive semidefinite matrices, then0≤(AB) ≤(A)(B).Lemma A2. (Page 177, <cit.>). Let A and B are n× n matrices, Then,∑_i=1^ks_i(AB)≤∑_i=1^ks_i(A)s_i(B), 1≤ k ≤ n.Proof of Lemma 3.1. Let t_0<t_1<t_2...<t_p<T be a partition of any time interval [t_0,T] and let the positive integer p be big enough. A direct discretization of the filter equation (<ref>) yieldsρ̅_t_i+1 ≃ρ̅_t_i+(-i[H, ρ̅_t]-𝒮_L(ρ̅_t))Δ t_i+(Lρ̅_t+ρ̅_tL^†)Δ Y(t_i), i=0,1,...,p-1,where Δ t_i=t_i+1-t_i and Δ Y(t_i)=Y(t_i+1)-Y(t_i).It is noted that Y(t) is a classical Wiener process. Thus, when p→∞, one has Δ Y(t_i)Δ Y(t_i)=Δ t_i and Δ Y(t_i) Δ t_i=0. From the definition of the Stratonovich integral and (<ref>), one has(s) ∫_t_0^T Lρ̅_t+ρ̅_t L^†∘ dY(t)=lim_p→∞∑_k=0^p L(ρ̅_t_k+1+ρ̅_t_k)+(ρ̅_t_k+1+ρ̅_t_k)L^†/2Δ Y(t_k)=(I) ∫_t_0^T Lρ̅_t+ρ̅_t L^†dY(t)+lim_p→∞∑_k=0^p L(Lρ̅_t_k+ρ̅_t_kL^†)+(Lρ̅_t_k+1+ρ̅_t_kL^†)L^†/2Δ t(t_k)=(I) ∫_t_0^T Lρ̅_t+ρ̅_t L^†dY(t)+1/2∫_t_0^TLLρ̅_t+ρ̅_tL^†L^†+2Lρ̅_tL^†dt.Lemma 3.1 can be obtained by substituting (<ref>) into (<ref>). § ACKNOWLEDGMENT Discussions with Prof. Haidong Yuan and Prof. Bo Qi are very much appreciated.99 [Akimoto & Hayashi (2011)]Akimoto2011 Akimoto D. & Hayashi M. (2011) Discrimination of the change point in a quantum setting. Physical Review A, 83, 052328. [Amari & Nagaoka (2000)]Amari2000 Amari S. & Nagaoka H.(2000). Methods of Information Geometry. Oxford: Oxford University Press. [Belavkin (1992)]Belavkin1992 Belavkin V.P. (1992). Quantum stochastic calculus and quantum nonlinear filtering. Journal of Multivariate Analysis, 42, 171-201. [Benoist & Pellegrini (2014)]Benoist2014 Benoist T. & Pellegrini C. (2014). Large time behavior and convergence rate for quantum filters under standard non demolition conditions.Communications in Mathematical Physics, 331, 703-723. [Benoist et al. (2017)]Benoist2017 Benoist T., Pellegrini C. & Ticozzi F. (2017). Exponential stability of subspaces for quantum stochastic master equations. Annales Henri Poincaré, 1-30. [Bouten et al. (2007)]Bouten2007 Bouten L., van Handel R. & James M. R. (2007). An introduction to quantum filtering. SIAM Journal on Control and Optimization, 46, 2199-2241.[Breitenbecker & Grümm (1972)]Breitenbecker1972 Breitenbecker M. & Grümm H.R. (1972). Note on trace inequalities. Communications in Mathematical Physics, 26, 276-279.[Breuer & Petruccione (2002)]Breuer2002 Breuer H. -P. & Petruccione F. (2002). The Theory of Open Quantum Systems. Oxford, U.K.: Oxford University Press. [Brigo et al. (1998)]Brigo1998 Brigo D., Hanzon B. & LeGland F. (1998). A differential geometric approach to nonlinear filtering: the projection filter. IEEE Transactions on Automatic Control, 43, 247-252. [Brigo et al. (1999)]Brigo1999 Brigo D., Hanzon B. & LeGland F. (1999). 
Approximate filtering by projection on the manifold of exponential densities. Bernoulli, 5, 495-543. [Cohen (1988)]Cohen1988 Cohen J.E. (1988). Spectral inequalities for matrix exponentials. Linear Algebra and its Applications, 111, 25-28. [Emzir et al. (2016)]Emzir2016 Emzir M.F. , Woolley M.J. & Petersen I.R. (2016) A Quantum Extended Kalman Filter. arXiv:1603.01890v1[quant-ph].[Evans et al. (1978)]Evans1978 Evans D.E. & Høegh-Krohn R. (1978) Spectral properties of positive maps on C*- algebras. Journal of the London Mathematical Society, 2, 345-355.[Gardiner & Zoller (2000)]Gardiner2000 Gardiner C. W. & Zoller P. (2000). Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics. 2nd Edition. New York: Springer-Verlag.[Gao et al. (2016)]Gao2016 Gao Q., Dong D. & Petersen I.R. (2016) Fault tolerant quantum filtering and fault detection for quantum systems. Automatica, 71, 125-134. [Gibilisco et al. (2009)]Gibilisco2009 Gibilisco P., Imparato D. & Isola T. (2009) Quantum covariance, quantum fisher information and the uncertainty principle. IEEE Transactions on Information Theory, 55, 439-443. [Hamerly & Mabuchi (2012)]Hamerly2012 Hamerly R. & Mabuchi H. (2012). Advantages of coherent feedback for cooling quantum oscillators. Physical Review Letters, 109, 173602. [Higham (2001)]Higham2001 Higham D. (2001). An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Review, 43, 525-546.[Jiang (2014)]Jiang2014 Jiang Z. (2014). Quantum fisher information for states in exponential form. Physical Review A, 89, 032128.[Liu et al. (2016)]Liu2016 Liu Y., Shankar S., Ofek N. et al. (2016). Comparing and combining measurement-based and driven-dissipative entanglement stabilization. Physical Review X, 6, 011022.[Mirrahimi & van Handel (2007)]Mirrahimi2007 Mirrahimi M. & van Handel R. (2007). Stabilizing feedback control for quantum systems. SIAM Journal on Control and Optimization, 46, 445-467. [Nielsen et al. (2009)]Nielsen2009 Nielsen A., Hopkins A. & Mabuchi H. (2009). Quantum filter reduction for measurement-feedback control via unsupervised manifold learning. New Journal of Physics, 11, 105043. [Rouchon & Ralph (2015)]Rouchon2015 Rouchon P. & Ralph J.F. (2015). Efficient quantum filtering for quantum feedback control. Physical Review A, 91, 012118.[Song et al. (2016)]Song2016 Song H., Zhang G. & Xi Z. (2016). Continuous-mode multi-photon filtering. SIAM Journal on Control and Optimization, 54, 1602-1632.[Stockton et al. (2004)]Stockton2004 Stockton J.K., Geremia J., Doherty A.C. & Mabuchi H. (2004). Robust quantum parameter estimation: coherent magnetometry with feedback. Physical Review A, 69, 032109. [Ticozzi & Viola (2008)]Ticozzi2008 Ticozzi F. & Viola L. (2008). Quantum Markovian subsystems: invariance, attractivity and control.IEEE Transactions on Automatic Control, 53, 2048-2063. [Ticozzi & Viola (2009)]Ticozzi2009 Ticozzi F. & Viola L. (2009). Analysis and synthesis of attractive quantum Markovian dynamics. Automatica, 45, 2002-2009.[Thomsen et al. (2002)]Thomsen2002 Thomsen L.K., Mancini S. & Wiseman H.M. (2002). Spin squeezing via quantum feedback. Physical Review A, 65, 061801(R).[van Handel et al. (2005a)]Handel2005a van Handel R., Stockton J.K. & Mabuchi H. (2005) Modelling and feedback control design for quantum state preparation.Journal of Optics B: Quantum and Semiclassical Optics, 7, S179-S197. [van Handel & Mabuchi (2005b)]Handel2005b van Handel R. & Mabuchi H. 
(2005) Quantum projection filter for a highly nonlinear model in cavity QED.Journal of Optics B: Quantum and Semiclassical Optics, 7, S226-S236. [van Handel et al. (2005c)]Handel2005c van Handel R., Stockton J.K. & Mabuchi H. (2005) Feedback control of quantum state reduction.IEEE Transactions on Automatic Control, 50, 768-780. [Viola & Knill (2003)]Viola2003 Viola L. & Knill E. (2003). Robust dynamical decoupling of quantum systems with bounded controls. Physical Review Letters, 90, 037901.[Wineland et al. (1994)]Wineland1994 Wineland D.J., Bollinger J.J., Itano W.M. & Heinzen D.J. (1994). Squeezed atomic states and projection noise in spectroscopy. Physical Review A, 50, 67-88.[Wiseman & Milburn (1993)]Wiseman1993 Wiseman H.M. & Milburn G.J. (1993). Quantum theory of field-quadrature measurements.Physical Review A, 47, 642-662.[Wiseman & Milburn (2010)]Wiseman2009 Wiseman H.M. & Milburn G.J. (2010). Quantum Measurement and Control. Cambridge, U.K.: Cambridge University Press. | http://arxiv.org/abs/1705.09114v2 | {
"authors": [
"Qing Gao",
"Guofeng Zhang",
"Ian R. Petersen"
],
"categories": [
"math-ph",
"math.MP",
"quant-ph"
],
"primary_category": "math-ph",
"published": "20170525095653",
"title": "An Exponential Quantum Projection Filter for Open Quantum Systems"
} |
Seismic sensitivity of Normal-mode Coupling to Lorentz stresses in the Sun
Shravan M. Hanasoge

Understanding the governing mechanism of solar magnetism remains an outstanding challenge in astrophysics. Seismology is the most compelling technique with which to infer the internal properties of the Sun and stars. Waves in the Sun, nominally acoustic, are sensitive to the emergence and cyclical strengthening of magnetic field, evidenced by measured changes in resonant oscillation frequencies that are correlated with the solar cycle. The inference of internal Lorentz stresses from these measurements has the potential to significantly advance our appreciation of the dynamo. Indeed, seismological inverse theory for the Sun is well understood for perturbations in composition, thermal structure and flows but is not fully developed for magnetism, owing to the complexity of the ideal magnetohydrodynamic (MHD) equation. Invoking first-Born perturbation theory to characterize departures from spherically symmetric hydrostatic models of the Sun and applying the notation of generalized spherical harmonics, we calculate sensitivity functions of seismic measurements to the general time-varying Lorentz stress tensor. We find that eigenstates of isotropic (i.e. acoustic only) background models are dominantly sensitive to isotropic deviations in the stress tensor and much more weakly so to anisotropic stresses (and therefore challenging to infer). The apple cannot fall far from the tree.
Sun: helioseismology—Sun: interior—Sun: oscillations—waves—hydrodynamics
§ INTRODUCTION
The cycling of the Sun's magnetic field, occurring on the time scale of approximately 11 years, causes luminosity changes and affects Earth's climate and space and geo-magnetic environments <cit.>. Magnetism in the Sun is a multi-scale phenomenon, ranging from the system size (R_⊙ = 695,700 km) to a few km. Understanding Lorentz stresses on large scales lends insight to the processes that drive the solar dynamo. Because the internal layers of the Sun are opaque to radiation and therefore inaccessible by optical imaging, seismology provides a unique and powerful technique with which to study the interior. A variety of seismic measurements in the Sun are used to infer its properties, such as global <cit.> and local normal-mode frequencies <cit.>, wave travel times <cit.>, mode coupling <cit.> and holograms <cit.>. For instance, at solar maxima when Lorentz stresses reach their peak magnitudes, solar normal mode frequencies are observed to be elevated in relation to their values at solar minima <cit.>. Using measured changes in frequencies or other seismic measurements to infer the internal state of the Sun is the goal of helioseismology. Linear magnetohydrodynamics (MHD), the theory of small-amplitude wave propagation in magnetised media, is used to describe the physics of helioseismic oscillations <cit.>.
Acoustic waves are transformed to magnetosonic and incompressible Alfvén waves, akin to vibrations of elastic media <cit.>. Inviscid fluids only support pressure stresses, which act locally isotropically. In contrast, magnetic fields and flows cause waves to propagate anisotropically.Alfvén waves, which propagate only in the presence of magnetic fields, behave as vertically polarised shear waves, thereby adding to the anisotropy. It is important to recognise that wave propagation in the limit of vanishingly small magnetic fields is not the same as in the zero-field case since Alfvén waves exist is the former and cannot exist in the latter. Because the wavelength of Alfvén waves scales linearly with magnetic field strength, it becomes infinitesimally small in the limit of vanishing field strength. In contrast, when the field strength is identically zero, waves do not disperse into magneto-acoustic and Alfvén modes, and are described as purely acoustic oscillations. Indeed, for this reason, MHD may act as a singular perturbation to an otherwise hydrostatic state although <cit.> showed that regular perturbation theory could be used to effectively predict changes in seismic measurements due to magnetic fieldswhen the ratio of magnetic-to-hydrostatic pressure is small. However, owing to the tensor nature of the MHD equation, it has thus far not been possible to obtain a formal relationship between magnetic fields and attendant deviations in seismic measurements <cit.>. At the heart of the inverse problem is this relationship, i.e. the construction of sensitivity functions or kernels that capture the dependence of seismic measurements to perturbations in the solar model. It has been possible to obtain kernels for sound-speed and flow anomalies <cit.> and for numerically computing small deviations around an existing magnetised state <cit.>. However without the theoretical machinery to account for the full anisotropy of the MHD equation, modelling the direct influence of general Lorentz stresses on seismic variables has eluded resolution thus far <cit.>. Prior approaches invoke assumptions on the field geometry to make the problem tractable, however at the cost of potentially diminishing inferential accuracy <cit.>.Hydrodynamic pressure increases rapidly with depth in the Sun, implying that Lorentz stresses grow comparatively weaker, allowing for the application of perturbation theory. Whereas in near-surface layers, magnetic pressure is comparable to or greater than hydrodynamic pressure, and therefore surface magnetism represents a large deviation (e.g. sunspots). This latter problem deals with perturbing around a given model of a sunspot to fit seismic measurements and requires the application of iterative numerical methods <cit.>. In contrast, the present technique allows for the direct inference of the Lorentz stress and treats it as a perturbation from a hydrostatic state.Applying solid-Earth mode theory and treating field as a regular perturbation to the helioseismic wave equation, we derive the scattering matrix due to Lorentz stresses for mode coupling-measurements. Geophysical mode theory is particularly well suited to the problem at hand because it has been designed to address wave physics in the anisotropic Earth <cit.>. 
Resonant modes, which are computed for a given spherically symmetric structure model of the Sun, are nominally “uncoupled" in that they are independent of one another.Deviations from this spherically symmetric state cause mode scattering, inducing correlations among different modes in the reference model and they become “coupled". For temporally stationary perturbations to a given linear wave operator, mode scattering occurs at constant frequency. Here we allow the perturbation to vary in time and model the resultant coupling between modes at different frequencies as well. The proximity of modes to one another, i.e. in terms of spatial and temporal frequencies, determines the extent of mode coupling. The closer the modes are, the stronger the scattering-induced correlation. Although we only outline the theory for mode-coupling measurements, the formalism here is immediately suitable to computing Lorentz kernels for normal-mode and travel-time measurements. § HELIOSEISMIC MEASUREMENTSFor a non-rotating, non-magnetic,undamped, spherically symmetric model of the Sun, the linear acoustic wave equation for displacement (,ω) is given by <cit.>_0 = -ρ ω^2-(ρ c^2 · + ρ· g)- g·(ρ) =0,where ω is temporal frequency, c(r) the sound speed, g(r) is gravity, ρ(r) the density andthe covariant spatial derivative and_0 is the unperturbed wave operator. The eigenfrequencies and eigenfunctions of the Hermitian operator_0 in equation (<ref>) are real. We employ spherical coordinates with radius, colatitude and longitude denoted by = (r,θ,ϕ) and unit vectors (,_θ,_ϕ) respectively. Non-radial variations in ρ, c, rotation, material circulations and magnetism are considered perturbations to the operator (<ref>). For the analysis here, we assume that ρ is only a function of radius.A general wavefieldmay be written in terms of mode eigenfunctions _k thus = ∑_k a_k(ω) _k(), where a_k denotes the contribution of mode k. Resonant modes are identified by quantum numbers k=(ℓ,m,n), where ℓ is spherical-harmonic degree, m azimuthal order and n, radial order. Writing equation (<ref>) in operator notation for the eigenfunction _k associated with mode k,_0_k = ρ ω_k^2_k,where ω_k is the (real) resonant frequency.The eigenfunctions _k form an orthonormal basis when integrated over the solar volume ⊙,∫_⊙ d ρ ^*_m·_n = δ_mn.Modes are continuously randomly excited by near-surface convection in the Sun, resulting in stochastic time series' a_k(t) for each mode k.For an unperturbed spherically symmetric solar model, we have ⟨ a^ω'*_ja^ω_k⟩ = |R^ω_k|^2 δ(ω - ω') δ_jk <cit.>, whereR^ω_k = 1/ω̅_k^2 - ω^2,which only contributes at frequencies close to resonance. Note that the dependence on frequency is now expressed through a superscript to be consistent with <cit.> and <cit.>. Solar modes experience a small degree of attenuation γ_k≪ω_k that we take into account by perturbing only the eigenfrequency ω̅_k ≈ω_k- iγ_k/2 in equation (<ref>), leaving the eigenfunction unchanged. Thus the cross-spectral measurement, ⟨ a^ω'*_ja^ω_k⟩ when j k, is non-zero only when solar structure departs from purely acoustic spherical symmetry. 
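As a small numerical illustration of why only near-resonant frequencies contribute (with made-up numbers rather than solar values), the response R^ω_k above is sharply peaked about ω_k, with a width in |R^ω_k|^2 set by the attenuation γ_k:

```python
import numpy as np

def mode_response(omega, omega_k, gamma_k):
    """Resonant response R^omega_k = 1/(omega_bar_k^2 - omega^2),
    with the damped eigenfrequency omega_bar_k = omega_k - i*gamma_k/2."""
    omega_bar = omega_k - 0.5j * gamma_k
    return 1.0 / (omega_bar**2 - omega**2)

# illustrative numbers (not solar values): a mode at omega_k = 1 with weak damping
omega_k, gamma_k = 1.0, 1e-2
omega = np.linspace(0.95, 1.05, 2001)
power = np.abs(mode_response(omega, omega_k, gamma_k))**2

# the response is strongly peaked at resonance, with a width set by gamma_k
above_half = omega[power > power.max() / 2]
print("peak at omega =", omega[np.argmax(power)])
print("FWHM / gamma_k ~", (above_half[-1] - above_half[0]) / gamma_k)
```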
The Michelson Doppler Imager <cit.> and Helioseismic and Magnetic Imager <cit.> space missions, which have together observed some 20 years of the spherical-harmonic coefficients a_k(t), allow us to measure these deviations.Now consider a time-varying perturbation to the operator, δ_ω, which will in turn modify the wavefield by an amount δ,(-ρ ω^2+ _0 +δ_ω)( + δ) = 0.The subscript ω on δ denotes the frequency dependence of the perturbation (arising from its time variability). The perturbed wavefield is written as a linear superposition of the original eigenfunctions,δ = ∑_jδ a^ω_j _j.With some algebra <cit.>, we arrive at a model for cross-spectral correlations,⟨a^ω+σ_k' δ a^ω*_k + δ a^ω+σ_k'a^ω*_k⟩≈ HΛ^k'_k(σ),whereH= R^ω+σ_k'|R^ω_k|^2 + R^ω_k|R^(ω+σ)*_k'|^2 and the coupling or scattering matrix ΛΛ^k'_k(σ) = -∫_⊙ d^*_k'·δ_σ _k,captures the extent of scattering, mediated by perturbation operator δ_σ, from mode k to k'.In equation (<ref>), δ_σ is the perturbation operator measured at temporal frequency channel σ.Because the Lorentz stress is a real quantity in the spatio-temporal domain and linear MHD is self-adjoint <cit.>, we have Λ^k*_k'(-σ) = Λ^k'_k(σ).A general time-varying magnetic field in spherical geometry is written thus(, σ) = ∑_s=0^∞∑_t=-s^s( u^t_s Y^t_s+ v^t_sY^t_s)+w^t_s × Y^t_s,where Y^t_s are spherical harmonics of azimuthal order t and spherical harmonic degree s, u^t_s(r,ω), v^t_s(r,ω) constitute polodial-field coefficients and the w^t_s(r,ω) term represents toroidal field and ω is temporal frequency. The toroidal component by construction is solenoidal, i.e. ·(w^t_s × Y^s_t) = 0. In order to enforce · = 0, the poloidal coefficients must obey ∂_r(r^2 u^t_s) = s(s+1) r v^t_s. Manipulating vectors and tensors in spherical geometry is simplified when using generalised spherical harmonics <cit.>. The generalised coordinate system is given by _0 = _r,_+ = -(_θ + i_ϕ)/√(2),_- = (_θ - i_ϕ)/√(2), ^*_0 = _0, ^*_+ = - _-,^*_- = - _+.Eigenfunctions for an unperturbed spherically symmetric solar model may be expanded using spheroidal functions thus <cit.>,_k =∑_ℓ,mu^m_ℓ Y^m_ℓ + v^m_ℓ Y^m_ℓ =∑_ℓ,mξ^0_k Y^0,m_ℓ _0+ ξ^-_k Y^-1,m_ℓ _- + ξ^+_k Y^1,m_ℓ _+,where Y^Nm_ℓ, which are generalised spherical harmonics, are related to elements of the Wigner rotation matrixY^Nm_ℓ= d^ℓ_Nm(θ,ϕ) e^imϕ <cit.>. Equation (<ref>) states that the simplest form of the solar eigenfunction comprises entirely spheroidal modes and lacks toroidal modes such as shear waves <cit.>, resulting in ξ_k^+ = ξ_k^-. Equation (<ref>) for a general field is also rewritten using ±,0 notation,(,σ) =∑_s,tB^0_st Y^0t_s_0 + B^+_st Y^1,t_s _+ + B^-_st Y^-1,t_s _-, and the solenoidal condition on the field translates toB_st^+ + B_st^- = ∂_r(r^2B_st^0)/rΩ^s_0.The s,t indices occur as subscripts in equation (<ref>) for convenience.§ MHD EQUATION The action of magnetism is described using linearized ideal MHD, a model of small-amplitude fluctuations about an equilibrium <cit.>. The time-varying Lorentz-stress tensor =, whereis the field, perturbs operator (<ref>) thus,δ = -·[ · +()^T· -·-2· + (:· - :)·:/2 ].We outline the derivation of equation (<ref>) in Appendix <ref>. The dependence ofon ω is not explicitly stated to reduce notational burden. Denoting the strain tensor _k = [_k + (_k)^T]/2 and the unit dyad by , i.e. 
()_ij = δ_ij, the coupling coefficient linking two modes k = (ℓ, m, n) and k' = (ℓ', m', n') is Λ^k'_k = ∫_⊙ d :[_k·^*_k' + ^*_k'·(_k)^T- ^*_k' ·_k- _k·^*_k' + ·_k ·^*_k'/2],where (_k)_ij = ∂_i ξ_k,j and (_k)^T_ij = ∂_j ξ_k,i. Using generalized spherical harmonics, we may expand (,σ), where σ is temporal frequency, thus(,σ) = ∑_i,j∑_s,t h^ij_st(r,σ)Y^i+j,t_s _i_j , where s is spherical harmonic degree, t is azimuthal order, h_st^ij(r,σ) is the s,t coefficient of the i,j component of the tensor , andi, j are ± or0. Becauseis symmetric and real in the spatio-temporal domain, the following relationships hold h_st^0+ = h_st^+0, h_st^0- = h_st^-0, h_st^-+ = h_st^+-,(-1)^t h^ij_st(r,-σ) = [h^ij_s,-t(r,σ)]^*.Thusonly has 6 independent components and we use (h_st^++,h_st^+0,h_st^00,h_st^+-,h_st^-0,h_st^–) to represent the tensor.Note that the inverse problem is for Lorentz stresses and not the field itself. The solenoidal condition on magnetic field could not readily be translated to an equivalent constraint on the Lorentz stress. We therefore do not incorporate it in the present analysis. After tedious algebra (see Appendices <ref> and <ref>), we obtain the followingrelationΛ^k'_k=∑_s,t∫_⊙ dr _st^00 h_st^00 + _st^++ [h_st^– (-1)^ℓ'+ℓ+s + h_st^++ ] + 2_st^0+ [h_st^0- (-1)^ℓ'+ℓ+s + h_st^0+] + 2_st^+- h_st^+- , where , defined in Appendix <ref>, denote kernels for different components of the stress tensor, ℓ and ℓ' are the harmonic degrees associated with modes k and k' respectively that have become coupled due to Lorentz stresses. Using Wigner-3j rules, integration over the 3-D sphere have been simplified to a 1-D integral over radius. Kernels ^00 and ^+-, whose superscripts sum to zero, capture the seismic sensitivity to isotropic, on-diagonal components of the stress tensor, the radial and transverse magnetic energies respectively. Kernels ^0+ and ^++ represent the sensitivity to off-diagonal, anisotropic Lorentz stresses. We show examples of ^+- and ^00 kernels in Figure <ref> and ^++ and ^0+ kernels in Figure <ref>.These are for self-coupled modes ℓ= ℓ' and n=n'. In Figures <ref> and <ref>, we show cross-coupled kernels ℓ = ℓ' and n≠ n'. In general we find that coupled modes are significantly more sensitive to isotropic components of the Lorentz stress tensor than the anisotropic terms. The following relations connect stresses in real space to the intermediate ±,0 variables,B_r B_r(,σ)= ∑_s,t h_st^00 Y^0,t_s,B_r B_θ(,σ)= ∑_s,th_st^0- Y^-1,t_s - h_st^0+ Y^1,t_s/√(2),B_r B_ϕ(,σ)= -i∑_s,th_st^0- Y^-1,t_s + h_st^0+ Y^1,t_s/√(2),B_θ B_θ(,σ) = ∑_s,t h_st^++ Y^2,t_s -2h_st^+- Y^0,t_s + h_st^– Y^-2,t_s/2,B_θ B_ϕ(,σ)= i∑_s,th_st^++ Y^2,t_s - h_st^– Y^-2,t_s/2,B_ϕ B_ϕ(,σ)= ∑_s,t-h_st^++ Y^2,t_s- 2h_st^+- Y^0,t_s - h_st^– Y^-2,t_s/2 .The inverse problem (<ref>) indicates that modes with even ℓ + ℓ' + s are only sensitive to h^00, h^+-, h^– + h^++, h^0+ + h^0- those with odd ℓ + ℓ' + s only sense h^– - h^++ and h^0+ - h^0-. This is also encountered when imaging flows for instance, where kernels with odd ℓ + ℓ' + s are sensitive only to toroidal flows and kernels with even ℓ + ℓ' + s are sensitive only to poloidal flows <cit.>.§ DISCUSSIONThe foregoing analysis brings to light a technique to elegantly compute the influence of general anisotropic magnetic stresses on seismic variables. The results pave the way for formally inferring the Lorentz stress tensor using seismic measurements <cit.>. We find that modes are much more sensitive to the diagonal components of the tensor, i.e. 
the magnetic energies, than off-diagonal, anisotropic terms. The kernels appear to naturally separate out regular and singular perturbations associated with magnetic fields. The component of the Lorentz stress that is a regular perturbation is isotropic, behaving as sound-speed anomalies might, as demonstrated by the kernels in Figure <ref>. Seismic measurements are primarily sensitive to radial and transverse magnetic energies, which are the diagonal components the stress tensor. In contrast, the anisotropic behaviour of the magnetic field represents a singular perturbation to the original model, since the background does not contain anisotropy(e.g. adding rotation or magnetic fields to it could induce anisotropy). A direct manifestation of anisotropy is the appearance of Alfvén waves, which is not permitted in hydrodynamics. Modes computed around a hydrostatic background are far less sensitive to these stresses, as Figure <ref> suggests.Indeed, deviations from models are primarily of the same character as the model itself and isotropy only begets deviations of the isotropic kind. In appendix <ref>, we derive flow kernels using generalized spherical harmonics. However, in deriving these sensitivities, we ignore the second-order flow term, i.e. that goes as ·(ρu u·) where u is the flow velocity. This term possesses some similarities with magnetic perturbations in that it takes the form u u and flows obey the continuity condition · (ρ u) = 0, akin to the divergence-free condition on magnetic fields. The second-order flow perturbation to the wave equation couples eigenfunctions according to ρ( u·_k)·( u·^*_k'), taking on mathematical structure somewhat different from the magnetic terms of appendix <ref>. While it may therefore be possible to distinguish between the coupling effects of the two, the data likely will not support discerning such subtleties owing to systematic effects such as spatial and temporal leakage. Moreover, inferring sub-surface magnetic fields will be difficult given that magnetic fields at the surface, where the magnetic-to-gas pressure is much greater than unity, couple modes strongly <cit.>. In particular, sunspots have locally very strong fields and the assumption of linearity between the perturbation and corresponding deviation in the measurement may break down. This implies that inferring Lorentz stresses in the sub-surface is a challenging problem. The present technique may also be used to model instantaneous and classical travel-time and amplitude measurements. First-Born scattering theory relies on Green's functions for computing kernels, and since Green's functions for spherically symmetric models may be expressed using equation (<ref>), the same vector harmonic basis as the eigenfunctions, the analysis proceeds unchanged. To describe normal-mode-frequency sensitivity, we may use self-coupling kernels as in Figures <ref> and <ref>.§ ACKNOWLEDGMENTSSMH is grateful to David Al-Attar for a most useful conversation that set the analysis in motion. He also acknowledges support from Ramanujan fellowship SB/S2/RJN-73, the Max-Planck partner group program and thanks NYUAD's Center for Space Science. Jishnu Bhattacharya helped greatly by re-calculating and verifying these expressions using Mathematica.mnras§ MHD PERTURBATION Linearized ideal MHD is a model of small amplitude fluctuations about an equilibrium <cit.>. 
The perturbation to the operator due to the presence of magnetism is given byδ = -·[ · +·-2· -(·) -(·) + B^2·- :+ ·B^2/2 ],where the notation a :b = ∑_i,j a_ij b_ji. The term ·[(·) +(·)] in Einstein-index notation is ∂_i[ξ_j∂_j(B_i) B_k] + ∂_i(B_iξ_j∂_j B_k) = ∂_i[ξ_j∂_j (B_i B_k)], i.e. ·[·()]. Writing the Lorentz-stress tensor as =, the quantity of central interest here, we may rewrite the perturbation operator asδ = -·[ · +()^T· -2· -· +:·- :+ ·:/2 ]. We consider the coupling integral (<ref>) along with the definition of the operator (<ref>) term by term,-∫_⊙ d ^*_k'··(·_k) = -∫_⊙ d ·[·(_k)·^*_k'] +∫_⊙ d :[(_k)·(^*_k')^T], -∫_⊙ d ^*_k'··[(_k)^T·] = -∫_⊙ d ·[(_k)^T··^*_k'] +∫_⊙ d :[(_k)·(^*_k')], 2∫_⊙ d ^*_k'··(·_k) = 2∫_⊙ d ·(·^*_k' ·_k)-2∫_⊙ d :(^*_k') ·_k, ∫_⊙ d ^*_k'··(_k·) = ∫_⊙ d ·[_k·()·^*_k'] - ∫_⊙ d _k·():(^*_k'), - ∫_⊙ d _k·():(^*_k') = - ∫_⊙ d · [_k:(^*_k')] +∫_⊙ d :(^*_k')·_k, -∫_⊙ d ^*_k'··(·_k:) = -∫_⊙ d ·(^*_k' ·_k:) + ∫_⊙ d :( ·^*_k' ·_k), ∫_⊙ d ^*_k'··( :_k) = ∫_⊙ d ·(^*_k' :_k) - ∫_⊙ d :[(_k) ·^*_k'], -1/2∫_⊙ d ^*_k'··[ _k·(:)] = -1/2∫_⊙ d ·[^*_k' _k·(:)]+ 1/2∫_⊙ d _k·(:) ·^*_k', 1/2∫_⊙ d _k·(:) ·^*_k' = 1/2∫_⊙ d ·(_k ·^*_k' :) - 1/2∫_⊙ d :( ·_k ·^*_k').The boundary contributions are assumed to vanish, allowing us to write the full coupling integral asΛ^k'_k = ∫_⊙ d :{(_k)·[(^*_k')^T + ^*_k'] - (^*_k')·_k - (_k)·^*_k' + 1/2 ·_k ·^*_k'}.Sinceand ε are symmetric tensors, we may reduce the expression furtherΛ^k'_k = ∫_⊙ d :{_k·^*_k' + ^*_k'·(_k)^T - ^*_k' ·_k - _k·^*_k' + 1/2 ·_k ·^*_k'},where ε_k = [_k + (_k)^T]/2, and (_k)_ij = ∂_i ξ_k,j, (_k)^T_ij = ∂_j ξ_k,i.§ TENSOR MANIPULATION Manipulating vectors and tensors in spherical geometry is simplified when using generalised spherical harmonics <cit.>. The generalised coordinate system is given by _0 = _r,_+ = -(_θ + i_ϕ)/√(2),_- = (_θ - i_ϕ)/√(2), ^*_0 = _0, ^*_+ = - _-,^*_- = - _+.and we have _i·_j =0 with the exception of _0·_0 = 1, _+·_- = - 1. The following relations are also relevant, _+·^*_+ = 1 = _-·^*_-. Using the rules and terminology of covariant differentiation developed by <cit.> and defining the tensor _k = _k, we obtain _k = ∑_ℓ = 0^∞∑_m=-ℓ^ℓ∑_α,β T_k^αβ Y^α+β, m_ℓ_α_β,T_k^– = ξ_k^-|- = 1/rΩ^ℓ_2 U^-,m_ℓ ,T_k^0- = ξ_k^-|0 = ^-,m_ℓ,T_k^+- = ξ_k^-|+ = 1/r[Ω^ℓ_0 U^-,m_ℓ - U^0,m_ℓ],T_k^-0 = ξ_k^0|- = 1/r[Ω^ℓ_0 U^0,m_ℓ - U^-,m_ℓ],T_k^00 = ξ_k^0|0 = ^0,m_ℓ,T_k^+0 = ξ_k^0|+ = 1/r[Ω^ℓ_0 U^0,m_ℓ - U^+,m_ℓ],T_k^-+ = ξ_k^+|- = 1/r[Ω^ℓ_0 U^+,m_ℓ - U^0,m_ℓ],T_k^0+ = ξ_k^+|0 = ^+,m_ℓ,T_k^++ = ξ_k^+|+ = 1/rΩ^ℓ_2 U^+,m_ℓ(r). The symbol ξ_k^a|b denotes the derivative of ξ_k^a with respect to the b coordinate. The terms ξ^0,±_k and U, V are described in appendix <ref> and coefficients Ω^ℓ_2 and Ω^ℓ_0 are defined in equations (<ref>) and (<ref>). Owing to the degeneracy between the ± components of the eigenfunction (Eq. [<ref>]), we have the following equivalences, T_k^– = T_k^++, T_k^+0 = T_k^-0, T_k^-+ = T_k^+- and T_k^0+ = T_k^0-. The trace of this tensor is given by Tr(T_k) = Tr(ε_k) =T_k^00 - T_k^-+ - T_k^+- = T_k^00 - 2 T_k^+- = {^0,m_ℓ - 1/r[Ω^ℓ_0 (U^-,m_ℓ + U^+,m_ℓ) - 2U^0,m_ℓ]},where α, β take on the values 0, ±1. 
The symmetric strain tensor _k = [(_k)^T + _k]/2 is given by_k = ∑_ℓ = 0^∞∑_m=-ℓ^ℓ∑_α,βε_k^αβ Y^α+β, m_ℓ_α_β, ε_k^– = ξ_k^-|- = 1/rΩ^ℓ_2 U^-,m_ℓ, ε_k^-0 = ε_k^0-= ξ_k^-|0 + ξ_k^0|-/2 = 1/2{^-,m_ℓ + 1/r[Ω^ℓ_0 U^0,m_ℓ - U^-,m_ℓ] }, ε_k^-+ = ε_k^+- =ξ_k^-|+ + ξ_k^+|-/2 = 1/2r[Ω^ℓ_0 (U^-,m_ℓ + U^+,m_ℓ) - 2U^0,m_ℓ], ε_k^00 = ξ_k^0|0 = ^0,m_ℓ, ε_k^0+ = ε^+0 = ξ_k^0|+ + ξ_k^+|0/2 = 1/2{^+,m_ℓ + 1/r[Ω^ℓ_0 U^0,m_ℓ - U^+,m_ℓ]}, ε_k^++ = ξ_k^+|+ = 1/rΩ^ℓ_2 U^+,m_ℓ(r).Because U^- = U^+ (see Eq. [<ref>] of Appendix <ref>), we may simplify these equations to obtain ε_k^-+ = ε_k^+- = 1/r(Ω^ℓ_0 U^+,m_ℓ - U^0,m_ℓ), ε_k^++ = ε_k^–, ε_k^0+ = ε_k^+0 = ε_k^0- = ε_k^-0,T_k^0+ = T_k^0-, T_k^-0 = T_k^+0,T_k^++ = T_k^– = ε_k^++ = ε_k^–,T_k^00 = ε_k^00,T_k^+- = T_k^-+=ε_k^+-, Tr(ε_k) = T_k^00 - 2T_k^+- = ^0+ 2/r(U^0-Ω^ℓ_0 U^+). We expand (,σ) thus(,σ) = ∑_s = 0^∞∑_t=-s^s h^ij_st(r,σ) Y^i+j,t_s _i _j,where h^ij_st is the (i,j) component of the tensor , andi, j take on values -1, +1 or0. We list the components,^++ = ∑_s = 0^∞∑_t=-s^s h^++_st Y^2t_s,^+- = ∑_s = 0^∞∑_t=-s^s h^+-_st Y^0t_s, ^+0 = ∑_s = 0^∞∑_t=-s^s h^+0_st Y^1t_s,^-0 = ∑_s = 0^∞∑_t=-s^s h^0-_st Y^-1t_s, ^00 = ∑_s = 0^∞∑_t=-s^s h^00_st Y^0t_s,^– = ∑_s = 0^∞∑_t=-s^s h^–_st Y^-2t_s. § DERIVING SENSITIVITY KERNELS We obtain the coupling integral in equation (<ref>) thusΛ^k'_k= ∑_s,t∫_⊙ d ∑_α, β h_st^αβ Y^α+β,t_s {e_αβ/2Tr(ε^*_k') Tr(ε_k) (Y^0m'_ℓ')^*Y^0m_ℓ. . - [e_αγ e_βδ ε^γδ_k Tr(ε^*_k') (Y^0m'_ℓ')^* Y^γ+δ,m_ℓ + ε^αβ*_k' Tr(ε_k) (Y^α+β,m'_ℓ')^* Y^0m_ℓ].. + ∑_μ,γ[e_μβT^μγ_k ε^γα*_k' (Y^γ+α,m'_ℓ')^* Y^γ+μ,m_ℓ+ e_αμT^μγ_k ε^γβ*_k'(Y^γ+β,m'_ℓ')^* Y^γ+μ,m_ℓ]},where we denote the dot product _α·_β = e_αβ. From the definitions of the unit vectors for generalized coordinates in equation (<ref>), e_αβ = 0 with the exceptions e_00 =1 and e_+- = e_-+ = -1. The expression resolves into the following 1D problem Λ^k'_k= ∑_s,t ∫_⊙ drh_st^++ _st^++ +h_st^00 _st^00 +h_st^– _st^– +(h_st^+0 + h_st^0+) _st^+0 +(h_st^-0 + h_st^0-) _st^-0+(h_st^-+ + h_st^+-) _st^-+,where we acknowledge the symmetry of the tensor .Owing to the Wigner addition rules (see Appendix <ref>), and because h_st^++ is attached to the harmonic Y^2,t_s, we have the following expression for ^++_st,_st^++ = 4π (-1)^m' [ℓ' s ℓ; -m' t m ][-ε^++_k ε^00*_k' [ ℓ'sℓ;02 -2 ]+ 2 T^-0_k ε^0+*_k' [ ℓ'sℓ; -12 -1 ]. .- ε^00_k ε^++*_k'[ ℓ'sℓ; -220 ]]. A similar analysis may be applied to obtain the kernel for h_st^–,_st^– = 4π (-1)^m' [ℓ' s ℓ; -m' t m ][-ε^++_k ε^00*_k' [ ℓ'sℓ;0 -22 ]+ 2T^+0_k ε^0-*_k' [ ℓ'sℓ;1 -21 ]. .- ε^00_k ε^++*_k'[ ℓ'sℓ;2 -20 ]] = (-1)^ℓ'+ℓ + s_st^++,where we have used the degeneracy of the ± components of the eigenfunction (see appendix <ref>). Next we compute the kernel for the h_st^0+ + h_st^+0 component, 2_st^0+ = 2_st^+0 = 4π (-1)^m' [ℓ' s ℓ; -m' t m ] { [(T^0+_k- T^+0_k) ε_k'^00* - 2T^+0_k ε_k'^+-*] [ ℓ'sℓ;01 -1 ]. . - 2ε^+-_k ε^0+*_k [ ℓ'sℓ; -110 ]+ 2ε_k^–ε^-0*_k'[ ℓ'sℓ;11 -2 ] +2T_k^0+ε^++*_k'[ ℓ'sℓ; -211 ]}. The symmetries between the ± terms encourage us to consider h_st^0- + h_st^-0 next, and we obtain, _st^0- = _st^-0 = (-1)^ℓ'+ℓ + s_st^0+.The penultimate term is h_st^00 whose kernel is_st^00 = 4π (-1)^m' 1+(-1)^ℓ'+ℓ + s /2[ℓ' s ℓ; -m' t m ] { -4T_k^0- ε_k'^-0* [ ℓ'sℓ; -101 ]..+ (2ε^+-_k + ε^00_k)(2ε^+-*_k' + ε^00*_k') /2[ ℓ'sℓ;000 ]}. Finally, we have the expression for the kernel for h_st^+- + h_st^-+,2_st^+- = 4π (-1)^m' [ℓ' s ℓ; -m' t m ] 1+(-1)^ℓ'+ℓ + s /2 { 4T_k^+0 ε_k'^-0* [ ℓ'sℓ; -101 ].. 
- 4ε_k^++ ε_k'^++* [ ℓ'sℓ; -202 ] - ε^00*_k'ε^00_k[ ℓ'sℓ;000 ]}= 2_st^-+, where we have exploited the symmetric nature of the Lorentz stress tensor. The structure of these kernels allows for rewriting the inverse problem thus,Λ^k'_k= ∑_s,t ∫_⊙ dr_st^++ [h_st^++ + (-1)^ℓ'+ℓ+s h_st^–] + _st^00 h_st^00 +2 _st^+0 [h_st^+0 + (-1)^ℓ'+ℓ+s h_st^0-] +2 _st^-+ h_st^-+,and is therefore sensitive to the sums or differences between various components of h depending on whether the sum ℓ' + ℓ + s is even or odd. Given the complexity of these expressions, we additionally verified the kernels using Mathematica. § LIST OF SYMBOLS γ_ℓ = √(2ℓ+1/4π), Ω^ℓ_N = √(1/2(ℓ+N)(ℓ-N+1)), Ω^ℓ_0 = Ω^ℓ_1,Ω^ℓ_-1 = Ω^ℓ_2.The definition of the Wigner-3j symbol is∫_0^2π dϕ∫_0^π dθ sinθ (Y^N'm'_ℓ')^* Y^N”m”_ℓ” Y^Nm_ℓ = 4π(-1)^(N'-m')[ℓ'ℓ” ℓ; -N'N” N ][ℓ'ℓ” ℓ; -m'm” m ].Each Wigner symbol is non-zero only if the elements in the second row sum to zero, i.e. N” + N - N' =0 and m” + m - m' =0. We also use[ℓ'ℓ” ℓ;N' -N”-N ] = (-1)^ℓ'+ℓ+s[ℓ'ℓ” ℓ; -N'N” N ].§ CONVERTING FROM THE GENERALIZED TO SPHERICAL COORDINATESThe eigenfunctions of a spherically symmetric model U,V in generalized coordinates areξ_k^0 = γ_ℓU, ξ_k^+ = ξ_k^- = γ_ℓ Ω^ℓ_0V. We have the following relations between the ±,0 vectors to the (r,θ,ϕ) representation,_0 = ,_- = _θ - i_ϕ/√(2),_+ = -_θ + i_ϕ/√(2).To reconstruct the real-space version of , we first note that= ∑_s,t[h_st^++_+ _+ Y^2,t_s + (h_st^0+_0 _+ + h_st^+0_+ _0) Y^1,t_s . . + (h_st^00_0 _0+ h_st^+-_+ _- + h_st^-+_- _+) Y^0,t_s. . +(h_st^0-_0 _- + h_st^-0_- _0) Y^-1,t_s+h_st^–_- _- Y^-2,t_s].The components ofare obtained by dotting with (, _θ , _ϕ). We compute the following,·_- = 0,_θ·_- = 1/√(2),_ϕ·_- = -i/√(2), ·_0 = 1,_θ·_0 = 0,_ϕ·_0 = 0, ·_+ = 0,_θ·_+ = -1/√(2),_ϕ·_+ = -i/√(2). Becauseis symmetric, we only need six components,B_r B_r = : = ∑_s,t h_st^00 Y^0,t_s ,B_r B_θ =_θ: = 1/√(2)[∑_s,t h_st^0- Y^-1,t_s - h_st^0+ Y^1,t_s],B_r B_ϕ =_ϕ: = -i/√(2)[∑_s,t h_st^0- Y^-1,t_s + h_st^0+ Y^1,t_s],B_θ B_θ = _θ _θ: = 1/2[∑_s,t h_st^++ Y^2,t_s -2h_st^+- Y^0,t_s + h_st^– Y^-2,t_s],B_θ B_ϕ = _θ _ϕ: = i/2[∑_s,t h_st^++ Y^2,t_s - h_st^– Y^-2,t_s],B_ϕ B_ϕ = _ϕ _ϕ: = -1/2[∑_s,t h_st^++ Y^2,t_s+ 2h_st^+- Y^0,t_s + h_st^– Y^-2,t_s]. We rewrite the above equations in terms of sums and differences in the h_stB_r B_r= : = ∑_s,t h_st^00 Y^0,t_s ,B_r B_θ= 1/√(2)[∑_s,t[(h_st^0- + h_st^0+)Y^-1,t_s -Y^1,t_s/2] + (h_st^0-- h_st^0+)Y^-1,t_s + Y^1,t_s/2],B_r B_ϕ= -i/√(2)[∑_s,t[(h_st^0- + h_st^0+)Y^-1,t_s +Y^1,t_s/2] + (h_st^0-- h_st^0+)Y^-1,t_s - Y^1,t_s/2],B_θ B_θ=1/2[∑_s,t[(h_st^– + h_st^++)Y^-2,t_s+Y^2,t_s/2 -2h_st^+- Y^0,t_s] + (h_st^– - h_st^++)Y^-2,t_s - Y^2,t_s/2],B_θ B_ϕ= -i/2[∑_s,t[(h_st^– + h_st^++)Y^-2,t_s - Y^2,t_s/2] + (h_st^– - h_st^++)Y^-2,t_s + Y^2,t_s/2],B_ϕ B_ϕ=-1/2[∑_s,t[(h_st^– + h_st^++)Y^-2,t_s+Y^2,t_s/2 +2h_st^+- Y^0,t_s] + (h_st^– - h_st^++)Y^-2,t_s - Y^2,t_s/2].Because of the 1 + (-1)^ℓ' + ℓ + s multiplying factor in equations (<ref>) and (<ref>), the components h_st^+- and h_st^00 are only sensed by modes when the sum ℓ + ℓ' + s is even. Similarly, the even or odd parity of ℓ' + ℓ + s determines whether we are able to infer sums or differences, i.e. h_st^–(-1)^ℓ'+ℓ+s + h_st^++ and h_st^0- (-1)^ℓ'+ℓ+s + h_st^0+ respectively (Eqs. [<ref>] through [<ref>]). 
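These selection rules can be spot-checked numerically. The sketch below (the test degrees ℓ', s, ℓ and the azimuthal entries are arbitrary choices) uses SymPy's wigner_3j to confirm that a symbol vanishes unless its second row sums to zero, that the all-zero symbol vanishes for odd ℓ'+s+ℓ (the origin of the 1+(-1)^(ℓ'+ℓ+s) factors in the kernels), and the parity relation obtained by flipping the signs of the second row.

```python
from sympy import simplify
from sympy.physics.wigner import wigner_3j

lp, s, l = 3, 2, 4              # arbitrary test degrees (l', s, l); l'+s+l = 9 is odd

# the symbol vanishes unless the entries of the second row sum to zero
print(wigner_3j(lp, s, l, -1, 2, -1))     # -1 + 2 - 1 = 0  -> allowed
print(wigner_3j(lp, s, l, -1, 2, 0))      # sums to 1       -> identically zero

# the all-zero symbol vanishes when l'+s+l is odd
# (source of the 1+(-1)^(l'+l+s) factors in the kernels)
print(wigner_3j(lp, s, l, 0, 0, 0))       # -> 0

# flipping the signs of the second row multiplies the symbol by (-1)**(l'+s+l)
left = wigner_3j(lp, s, l, 1, -2, 1)
right = (-1)**(lp + s + l) * wigner_3j(lp, s, l, -1, 2, -1)
print(simplify(left - right))             # -> 0
```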
This effect is akin to being able to infer toroidal flows only when ℓ' + ℓ + s is odd and poloidal flows when ℓ' + ℓ + s is even <cit.> and Appendix <ref>.§ DERIVING FLOW KERNELSWe sketch the technique to compute kernels for flows using generalized coordinates. Indeed, <cit.> discuss this possibility in their Appendix C but do not pursue it. We begin by expressing a general flow field _0 thus_0 = ∑_s=0^∞∑_t=-s^s u^+_stY^1,t_s _+ + u^0_stY^0,t_s _0 +u^-_stY^-1,s_t _-.The relationship between the ±,0 symbols and poloidal and toroidal flow components is <cit.>u^t_s = u^0_st/γ_s,v^t_s = u^-_st + u^+_st/2γ_sΩ^s_0,w^t_s = i/2γ_sΩ^s_0(u^-_st - u^+_st),where u^t_s, v^t_s represent the poloidal flow components and w^t_s is the toroidal flow component. The perturbation to the wave operator (<ref>) due to advection is given byδ = -2iω ρ_0·,and recalling equation (<ref>), the coupling between two modes k and k' induced by flows is given byΛ^k'_k = 2iω ∫_⊙ d ρ_0·(_k)·^*_k'.In generalized-coordinate notation, this becomesΛ^k'_k = 2iω ∫_⊙ d [- ρ u^+_stT^–_k ξ^-*_k'(Y^-1m'_ℓ')^* Y^1t_s Y^-2m_ℓ- ρu^+_stT^-0_k ξ^0*_k'(Y^0m'_ℓ')^* Y^1t_s Y^-1m_ℓ - ρu^+_stT^-+_k ξ^+*_k'(Y^1m'_ℓ')^* Y^1t_s Y^0m_ℓ + ρu^0_stT^0-_k ξ^-*_k'(Y^-1m'_ℓ')^* Y^0t_s Y^-1m_ℓ + ρu^0_stT^00_k ξ^0*_k'(Y^0m'_ℓ')^* Y^0t_s Y^0m_ℓ + ρu^0_stT^0+_k ξ^+*_k'(Y^1m'_ℓ')^* Y^0t_s Y^1m_ℓ - ρu^-_stT^+-_k ξ^-*_k'(Y^-1m'_ℓ')^* Y^-1t_s Y^0m_ℓ - ρu^-_stT^+0_k ξ^0*_k'(Y^0m'_ℓ')^* Y^-1t_s Y^1m_ℓ -ρu^-_stT^++_k ξ^+*_k'(Y^1m'_ℓ')^* Y^-1t_s Y^2m_ℓ]. The spherical integration reduces all these terms to Wigner-3j symbols,Λ^k'_k = 8iπω (-1)^m'∫_⊙ dr ρ r^2 [ℓ' s ℓ; -m' t m ][ u^+_stT^–_k ξ^-*_k' [ ℓ'sℓ;11 -2 ].. - u^+_stT^-0_k ξ^0*_k' [ ℓ'sℓ;01 -1 ] + u^+_stT^-+_k ξ^+*_k' [ ℓ'sℓ; -110 ] + u^-_stT^+-_k ξ^-*_k' [ ℓ'sℓ;1 -10 ].. - u^-_stT^+0_k ξ^0*_k' [ ℓ'sℓ;0 -11 ] +u^-_stT^++_k ξ^+*_k' [ ℓ'sℓ; -1 -12 ]- u^0_stT^0-_k ξ^-*_k' [ ℓ'sℓ;10 -1 ]..+ u^0_stT^00_k ξ^0*_k' [ ℓ'sℓ;000 ] - u^0_stT^0+_k ξ^+*_k' [ ℓ'sℓ; -101 ]].Because of the ± degeneracy in the eigenfunctions and the corresponding expressions for T, i.e. T^0± = T^0∓, T^±0 = T^∓0, T^++ = T^– and T^+- = T^-+, this expression may reduced,Λ^k'_k = 8iπω (-1)^m'∫_⊙ dr ρ r^2 [ℓ' s ℓ; -m' t m ]{[u^+_st + (-1)^ℓ'+ℓ+s u^-_st][ T^–_k ξ^-*_k' [ ℓ'sℓ;11 -2 ]. ... -T^-0_k ξ^0*_k' [ ℓ'sℓ;01 -1 ] +T^-+_k ξ^+*_k' [ ℓ'sℓ; -110 ]]..+ u^0_st 1+ (-1)^ℓ'+ℓ+s/2[T^00_k ξ^0*_k'[ ℓ'sℓ;000 ] -2 T^0+_k ξ^+*_k' [ ℓ'sℓ; -101 ]]}.Equation (<ref>) states that u^0_st, which from equation (<ref>) is directly proportional to the radial flow, can only be inferred for even values of the sum ℓ'+ℓ + s (the Wigner-3j symbol with all zeros in the second row is zero for odd ℓ'+ℓ+s). Similarly, depending on whether ℓ'+ℓ+s is even or odd, we correspondingly recover the sum u^+_st + u^-_st or difference u^+_st - u^-_st, giving us alternate access to the poloidal flow (for even ℓ' + ℓ +s) and toroidal flow (when ℓ' + ℓ +s is odd). See also <cit.>, <cit.> and <cit.> for more details. | http://arxiv.org/abs/1705.09431v2 | {
"authors": [
"Shravan M. Hanasoge"
],
"categories": [
"astro-ph.SR",
"physics.geo-ph"
],
"primary_category": "astro-ph.SR",
"published": "20170526045213",
"title": "Seismic sensitivity of Normal-mode Coupling to Lorentz stresses in the Sun"
} |
Colorado School of Mines, Golden, Colorado 80401, USA We explore the quantum many-body physics of a three-component Bose-Einstein condensate (BEC) in an optical lattices driven by laser fields in V and Λconfigurations. We obtain exact analytical expressions for the energy spectrum andamplitudes of elementary excitations, and discover symmetries among them.We demonstrate that the applied laser fields induce a gap in the otherwise gapless Bogoliubov spectrum.We find that Landau damping of the collective modes above the energy of the gap is carried by laser-induced roton modes and is considerably suppressed compared to the phonon-mediated damping endemic to undriven scalar BECs.03.75.Kk, 03.75.Mn, 42.50.Gy, 67.85.-d, 63.20.kg Absence of Landau damping in driven three-component Bose–Einstein condensate in optical lattices Gavriil Shchedrin, Daniel Jaschke, and Lincoln D. Carr Received September 15, 2016; accepted March 16, 2017 ================================================================================================Multicomponent Bose-Einstein condensates (BECs) are a unique form of matter that allow one to explore coherent many-body phenomena in a macroscopic quantum system by manipulating its internal degrees of freedom <cit.>. The ground state of alkali-based BECs, which includes ^7 Li, ^23 Na, and ^87 Rb, is characterized by the hyperfine spin F, that can be best probed in optical lattices, which liberate its 2F+1 internal components and thus provides a direct access to its internal structure <cit.>. Driven three-component F=1 BECs in V and Λ configurations (see Fig. <ref>(b) and <ref>(c)) are totally distinct from two-component BECs <cit.> due to the light interaction with three-level systems that results in the laser-induced coherence between excited states and ultimately leads to a number of fascinating physical phenomena, such as lasing without inversion (LWI) <cit.>, ultraslow light <cit.>, and quantum memory <cit.>. The key technique behind these phenomena is electromagnetically induced transparency (EIT) <cit.>, which is based on the elimination of real and imaginary parts of the susceptibility upon applying a coherent resonant drive to a gas of three-level atoms, that opens a transparency window in otherwise optically opaque atomic media <cit.>. The vanishing imaginary part of the susceptibility results in an extremely small group velocity of light, which led to the observation of unprecedented seven orders of magnitude slowdown of light propagation through a BEC of ^23Na atoms <cit.>. The notion of non-dissipative dark-state polaritons not only yields a simple and elegant description of slow light phenomena <cit.>, but also provides an efficient way to store and retrieve individual quantum states, i.e., quantum memory <cit.>. Apart from physical phenomena achieved by the light-induced coherence in three-level systems, confined multicomponent BECs allowed the experimental realization of a number of fundamental physical concepts including the observation of the spin Hall effect <cit.>, creation of exotic magnetic <cit.> andtopological states <cit.>, and observation of Dirac monopoles <cit.>.However, access to these rich physical phenomena is limited or entirely excluded by the damping processes of the collective modes of the BEC <cit.>. The damping of the collective modes manifests itself in the metastable nature of spinor BEC, which dictates its properties and the many-body phenomena governed by it. 
Suppression of damping processes for collective excitations has been previously discovered experimentally and described theoretically in several BEC contexts, e.g., absence of Beliaev damping, which governs the decay process of a single collective mode into two collective excitations of a lower energy, for a quasi-2D dipolar gas <cit.>; the Quantum Zeno mechanism, responsible for a diminished decay rate of collective excitations in a quantum degenerate fermionic gas of polar molecules confined in optical lattices <cit.>; and suppression of the Landau decay rate of collective excitations for a Bose-Fermi superfluid mixture <cit.>. However, in all these systems the energy spectrum is gapless for small momenta, and therefore, Landau damping is carried out predominantly by the phonons. In contrast to all these past studies, in this Letter we calculate exact analytical expressions for the Landau damping rate in spinor three-component BECs in optical lattices driven by microwave fields in both V and Λ configurations. The resulting generalized energy spectrum, Rabi-Bogoliubov (RB) amplitudes, and symmetries among them allow us to explore near-equilibrium BEC dynamics.We find that the laser fields induce a gap (see Fig. <ref>) in the energy spectrum, preventing collective excitation from Landau damping, and thus, enabling a metastable state in driven spinor BEC. The laser-induced gap in the energy spectrum results in zero group velocity and non-zero current for the collective excitations lying above the energy of the gap. Therefore, roton modes are induced, which significantly suppress Landau damping rate in spinor BECs compared to the phonon-mediated Landau damping in undriven scalar BECs.We begin with the second-quantized Hamiltonian for a driven three-component BEC,l H=∫d𝐫 ∑_j=a,b,c ψ_j^†(𝐫) ( -ħ^2/2m∇^2+ V(𝐫) -μ_j ) ψ_j(𝐫)+ 1/2 ∫d𝐫 ∑_j=a,b,c ψ_j^†(𝐫) ( ∑_j'=a,b,c g_jj' ψ_j'^†(𝐫) ψ_j'(𝐫) ) ψ_j(𝐫)+ Ω_s/2 ∫d𝐫 ( e^iΔt ψ_a^†(𝐫) ψ_b(𝐫) + e^-iΔt ψ_b^†(𝐫) ψ_a(𝐫) ) + Ω_p/2 ∫d𝐫 ( e^iΔt ψ_c^†(𝐫) ψ_b(𝐫) + e^-iΔt ψ_b^†(𝐫) ψ_c(𝐫) ) , where we have chosen a V-configuration (see Fig. <ref>b) for concreteness. Here we introduced a Bose field operator ψ_j(𝐫) which annihilates a particle determined by the mass m, position 𝐫, and internal state j=b (a,c) for a particle in the ground (excited) state. The lattice potential is assumed to have a simple cubic form, V(𝐫)=V_0∑_i=1^3sin^2(k_L r_i), where k_L=π/a_L is the lattice vector and a_L is the lattice constant. The coupling constant g_jj' determines the interaction between particles occupying the internal states j and j'.The laser fields are characterized by the Rabi frequencies Ω_s and Ω_p and equal detuning Δ from excited states. Initially, the BEC is prepared in the ground state |b⟩.For sufficiently deep lattices, or alternatively in the long-wavelength approximation, one can safely adopt the lowest band approximation, and perform expansion of the bosonic field operators ψ_j(𝐫) in the Wannier basis ψ_j(𝐫)=∑_n b _njw_j(𝐫-𝐫_n). Throughout the paper, we will adopt the index convention, according to which the first argument of the field operator index denotes the site in an optical lattice, and the second argument indicates the internal state. Inserting this expansion into Eq. 
(<ref>) we obtain,l H=-∑_j=a,b,c ∑_ ⟨m,n⟩ J^jj_mn (b ^†_mj b _nj+b ^†_nj b _mj ) - ∑_j=a,b,c μ_j ∑_nb ^†_nj b _nj + ∑_j,j'=a,b,c U_jj'/2 ∑_ nb ^†_njb ^†_nj'b _nj'b _nj + Ω_s/2 ∑_n ( e^iΔtb ^†_nab _nb + e^-iΔtb ^†_nbb _na )+ Ω_p/2 ∑_n ( e^iΔtb ^†_ncb _nb + e^-iΔtb ^†_nbb _nc ) , where we truncated the sum to the nearest neighbors, indicated by ⟨ m,n⟩. Here the hopping integral is J^ij_mn= -∫d𝐫 w^*_i(𝐫-𝐫_m) [ -ħ^2/2m∇^2+ V(𝐫) ] w^*_j(𝐫-𝐫_n) , the on-site interaction isl U_jj'= g_jj' ∫d𝐫 w^*_j(𝐫) w^*_j'(𝐫) w_j'(𝐫) w_j(𝐫) ,In order to formulate Eq. (<ref>) in k-space we introduce the Fourier transform of the creation and annihilation operators,lb _nj=1/√(N_L) ∑_k exp[-i𝐤𝐫_n] a_kj , where N_L is number of lattice cites. In order to linearize the Fourier-transformed Hamiltonian given by Eq. (<ref>) we expand the operators near their average values, i.e., a_kj=⟨a_0j⟩+(a_kj-⟨a_0j⟩). The average value of the field operator ⟨a_0j⟩ is given in terms of the number ofparticles occupying the zero momentum state N_0j, i.e. ⟨a_0j⟩=√(N_0j). The matrix, which describes coupling between particles identified by the internal state j={a,b,c}is given by, ( [ n_aU_aa √(n_an_b)U_ab √(n_an_c)U_ac; √(n_bn_a)U_ba n_bU_bb √(n_bn_c)U_ac; √(n_cn_a)U_ca √(n_cn_b)U_cb n_cU_cc ])≡( [ u s t; s u s; t s u ]) . Here we have introduced the average filling factor n_j=N_0j/N_L for the particles characterized by the internal state j and momentum k=0.For brevity, we consider the simplified case, s=0 and t=0. However, the main physical results concerning the structure of the energy spectrum, the amplitudes of elementary excitations of a driven three-component BEC, and symmetries among them, obtained in the most general case match the description here. The diagonalization of the Fourier-transformedHamiltonian can be accomplished via the generalizedtransformation. This transformation is carried out by the quasi-particle operators α_k,a, ζ_kc, and β_k,b, that annihilate a particle occupying the internal state j=a, j=c (excited states) and j=b (ground state), correspondingly. The transformation from the particle to the quasi-particle basis is given by the linear combination, a_kj= 𝒰_kα_ka+ 𝒱^*_kα_-ka^†+ 𝒲_kβ_kb+ 𝒴^*_kβ_-kb^† +𝒳_kζ_kc+ 𝒵^*_kζ_-kc^†. The generalizedamplitudes(see Fig. <ref>) are the subject to a constraint 𝒰_k^2-𝒱_k^2+𝒲^2_k- 𝒴^2_k +𝒳^2_k - 𝒵^2_k=1, which ensures the bosonic commutation relation for the quasi-particle creation and annihilation operators, i.e., [α_kj, α^†_-k,j' ]=δ_jj', and[β_kj, β^†_-k,j' ]=δ_jj', and[ζ_kj, ζ^†_-k,j' ]=δ_jj'.In the basis of quasiparticle creation and annihilation operators, theHamiltonian acquires a diagonal form, l H_eff= 1/2∑_k ( E_a(k) α_a,k^† α_a,k+ E_b(k)β_b,k^† β_b,k + E_c(k)ζ_b,k^† ζ_b,k ) ,where E_a(k), E_b(k), and E_c(k) are the three branches of theenergy spectrum, see Fig. <ref>, for both V and Λ configurations. 
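As a small bookkeeping illustration of the coupling matrix written above, the following sketch builds it for equal filling factors and a deliberately symmetric choice of on-site couplings (both are my own toy inputs, chosen only so that the (u, s, t) structure is manifest); the simplified case used below corresponds to switching off s and t.

```python
import numpy as np

# Illustrative only: equal filling factors n_j = N_0j / N_L = 1/3 and a
# symmetric choice of on-site couplings, so the (u, s, t) form is exact.
n = dict(a=1/3, b=1/3, c=1/3)
U = {('a', 'a'): 1.0, ('b', 'b'): 1.0, ('c', 'c'): 1.0,   # U_jj
     ('a', 'b'): 0.3, ('b', 'c'): 0.3, ('a', 'c'): 0.1}   # U_ab = U_bc and U_ac
U.update({(q, p): v for (p, q), v in list(U.items())})    # symmetrize U_jj' = U_j'j

states = ('a', 'b', 'c')
M = np.array([[np.sqrt(n[p] * n[q]) * U[(p, q)] for q in states] for p in states])
print(M)   # [[u, s, t], [s, u, s], [t, s, u]] with u = 1/3, s = 0.1, t = 1/30
# The simplified case discussed in the text corresponds to s = 0 and t = 0.
```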
In the case of the V-configuration, the energy spectrum is obtained from the condition [M_V-1(E/2)]=0, wherel M_V= ( [h_k+Δ -Ω_s/20u00; -Ω_s/2h_k -Ω_p/20u0;0 -Ω_p/2h_k+Δ00u; -u00 -h_k-ΔΩ_s/20;0 -u0Ω_s/2 -h_kΩ_p/2;00 -u0Ω_p/2 -h_k-Δ ] ).Here, the tunneling parameter h_k=u + t_k, with t_k = 4Jsin^2(ka_L/2), is given in terms of the tunneling amplitude J≡J^jj_mn, momentum k, and the lattice constant a_L.Thus, thespectrum is E^V_a,±(k) =±√(4_k^2+4 (t_k+u) Δ +Δ ^2+Ω^2+2σ), E^V_b,±(k) =±√(4 _k^2+4 (t_k+u) Δ +Δ ^2+Ω^2-2σ), E^V_c,±(k) =±√((t_k+Δ) (t_k+2 u+Δ)) where _k= √(t_k (t_k+2 u)) is the standard Bogoliubov spectrum <cit.>, Ω=√(Δ^2+Ω_s^2+Ω_p^2) is the effective Rabi frequency of the combined laser fields Ω_s(t) and Ω_p(t) and σ≡(2 (t_k+u)+Δ)Ω. We find a set of new symmetries that holds among the generalizedamplitudes,𝒱_k^2(E_a,Ω)=- 𝒰^2_k(-E_a,Ω) , 𝒲^2_k(E_b,Ω) =𝒰^2_k(E_b,-Ω),𝒴^2_k(E_b,Ω)=- 𝒰^2_k(-E_b,-Ω) , 𝒳^2_k(E_c)=- 𝒵^2_k(-E_c) . The new symmetries summarized by Eq. (<ref>) are the direct generalization of the intrinsic symmetries of the standardamplitudes, i.e., u^2_k(E)=-v^2_k(-E) <cit.>. These symmetries generate the complete set of the generalized amplitudes from a single amplitude 𝒰^2_k(E_a,Ω), which for a V-system is explicitly given by𝒰^2_k(E_a,Ω) =Ω _p^2+Ω _s^2+(2 (t_k+u)+E^V_a)(Ω-Δ)/4 E^V_aΩ and for the Λ-system, 𝒰^2_k(E_a,Ω) = Ω _s^2 [Ω _p^2+Ω _s^2+(2 (t_k+u)+E^Λ_a) (Ω-Δ)]/4 E^Λ_a(Ω _p^2+Ω _s^2) Ω The symmetries in the V-system result in cancellation of the amplitudes 𝒳_k(E_c) and 𝒵_k(E_c), while for the Λ-system we have 𝒳^2_k(E_c) =Ω _p^2(t_k+u+E^Λ_c)/2 E^Λ_c(Ω _p^2+Ω _s^2) Here E^Λ_a(k), E^Λ_b(k), and E^Λ_c(k) are solutions of the eigenvalue problem for the Λ configuration governed by M_Λ= ( [ h_k+Δ -Ω _s/2 -Ω _p/2 u 0 0; -Ω _s/2 h_k 0 0 u 0; -Ω _p/2 0 h_k 0 0 u;-u 0 0-h_k-ΔΩ _s/2Ω _p/2; 0-u 0Ω _s/2-h_k 0; 0 0-uΩ _p/2 0-h_k ]) The eigenvalues in Λ-configuration are given in terms of the energy spectrum for the V-system, Eq. (<ref>),l E^Λ_a(k)=E^V_a(k)E^Λ_b(k)=E^V_b(k)E^Λ_c,±(k)=±√(t_k (t_k+2 u))In the long wavelength limit theamplitudes 𝒲_k, 𝒴_k are purely imaginary. Therefore, we are left with the real-valuedamplitudes 𝒰_k(E), 𝒱_k(E), 𝒳_k(E), and 𝒵_k(E), that could be further simplified in case of resonant driving fields, i.e. Δ=0. For the laser fields in the V-configuration we have,l 𝒰^V_k(E_a)=√( [E_a +2 (t_k+u + Ω_0/2)]/ (4 E_a )) , 𝒱^V_k(E_a)=-√( [-E_a +2 (t_k+u+Ω_0/2)]/ (4 E_a)), while for the Λ-system theamplitudes can be simplified into,l 𝒰^Λ_k(E_a)= Ω_s/Ω_0 √( E_a +2 (t_k+u + Ω_0/2)/4 E_a ) , 𝒱^Λ_k(E_a)=- Ω_s/Ω_0 √( -E_a +2 (t_k+u+Ω_0/2)/4 E_a ). The amplitudes 𝒳^Λ_k(E_c)= Ω_p/Ω_0 u_k(E_c) and 𝒳^Λ_k(E_c)= Ω_p/Ω_0v_k(E_c) are given in terms of the standardamplitudes u_k and v_k. The effective Rabi frequency simplifies into Ω_0≡Ω(Δ=0)=√(Ω_s^2+Ω_p^2) and E_a=√((2 t_k+Ω) (2 t_k+4 u+Ω)) and E^Λ_c=√(t_k (t_k+2 u)).Introducing E=E_a/2, we obtain the following expression for the Landau damping rate for the laser fields in the V configuration,Γ^V_L = -πħω_q2π/(2πħ)^3( 4 √(N)g_jj/2√(ω_q)/√(2)√(u+s))^21/q×β/β∫dp 1/v_gp^2/E1/(e^βE-1)( 3/4E/(u+s))^2 . Here ω_q and q are the frequency and momentum of the collective mode, respectively. In the Λ-configuration the Landau damping rate is Γ^Λ_L=(Ω_s/Ω_0)^2Γ^V_L + (Ω_p/Ω_0)^2Γ_L,where Γ_L is the usual Landau damping rate in the laser-free case <cit.>. 
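As a quick numerical illustration of the driven spectrum, the sketch below (toy parameter values of my own choosing, units ħ = a_L = 1) evaluates the closed-form V-configuration branch E^V_a(k) at resonant drive Δ = 0. It verifies that the undriven limit Ω_s = Ω_p = 0 recovers twice the gapless Bogoliubov energy ε_k, and that a finite drive opens a gap at k = 0, with the quasiparticle energy identified as E = E_a/2 as done further below.

```python
import numpy as np

# Toy parameters (illustrative only): hopping J, on-site interaction u,
# Rabi frequencies Omega_s, Omega_p; units hbar = a_L = 1.
J, u = 1.0, 0.5
Omega_s, Omega_p = 0.8, 0.6
Omega0 = np.sqrt(Omega_s**2 + Omega_p**2)   # effective Rabi frequency at Delta = 0

k = np.linspace(0.0, np.pi, 400)            # quasimomentum in the first Brillouin zone
t_k = 4.0 * J * np.sin(k / 2.0)**2          # lattice dispersion
eps_k = np.sqrt(t_k * (t_k + 2.0 * u))      # standard Bogoliubov spectrum

def E_a(t_k, Omega):
    """Upper V-configuration branch E^V_a at resonance (Delta = 0),
    with sigma = 2*(t_k + u)*Omega."""
    sigma = 2.0 * (t_k + u) * Omega
    return np.sqrt(4.0 * t_k * (t_k + 2.0 * u) + Omega**2 + 2.0 * sigma)

# Undriven limit: E_a reduces to twice the gapless Bogoliubov energy.
assert np.allclose(E_a(t_k, 0.0), 2.0 * eps_k)

# Driven case: a gap opens at k = 0 for the quasiparticle energy E = E_a / 2.
print("gap E_a(0)/2            =", E_a(0.0, Omega0) / 2.0)
print("sqrt(Om(4u + Om)) / 2   =", np.sqrt(Omega0 * (4.0 * u + Omega0)) / 2.0)
```

The second printed value reproduces the gap expression quoted further below, so the two ways of reading off the k → 0 limit agree.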
The Landau damping acquires a particularly simple form if we introduce the density of the laser-induced roton modes,ρ_r = 4π/3(2πħ)^3∫dpp^2E^2/v_g(-/ E1/(e^βE-1)) = 4π/3(2πħ)^3(-β/β) ∫_E_0^∞dE p^2/v_g^2E/(e^βE-1) . Here we have introduced the group velocity v_g= E(p)/ p, the Boltzmann factorβ=1/k_BT, and the Boltzmann constant k_B. The Taylor expansion of the energyaround zero momentum returns, E≃E_0+E_2p^2/2,where that gap in the spectrum is E_0=√(Ω(4 u+Ω))/2, and the curvature of the spectrum is E_2=(2 u+Ω)/ [m^*√(Ω(4 u+Ω))]. The effective mass is given by m^*=1/(Ja_L^2). Finally, we can express the rate of Landau damping for a three-component BEC driven by the laser fields in a V-configuration in terms of the density of the laser-induced roton modes, Γ^V_L= θ(ħω_q-E_0) 27π/16ħω_qρ_r/ρ(ω_q) . Here the spectral density of the collective modes, ρ(ω_q)= q(u+s)^3/(g_jj^2Nω_q), is given in terms the q= √(2(ħω_q-E_0)/E_2). We immediately find that the collective modes characterized by the energy not exceeding the energy of the gap (ħω_q < E_0) are free from Landau damping, i.e., Γ_L = 0. Therefore, the gap in the energy spectrum produced by the applied laser fields effectively protects low-lying collective modes from Landau damping. For the collective modes lying above the energy of the gap, the Landau damping rate scales with the density of the laser-induced roton modes, which in the limit of low temperatures behaves as ρ_r≃β^-2. In the limiting case of laser-free condensate, i.e., Ω_s=Ω_p=0, thespectrum simplifies to the standardspectrum. As a result, laser-modified Landau damping rate Eq.(<ref>) reduces to the well-known result <cit.> for the phonon-mediated Landau damping of the collective modes in scalar BEC, Γ_L(Ω_s = Ω_p = 0)= 27π/16ħω_qρ_n/ρ≃1/β^4, defined in terms of the density of a phonon gas ρ_n=2π^2T^4/(45ħ^3c^5) <cit.>. Thus, Landau damping rate of the collective excitations in a driven three-level BEC is significantly slowed down compared to scalar laser-free BEC, where damping processes are mediated by phonons.Experimentally, the absence of Landau damping in driven three-component condensate can be verified by means of the two-photon Bragg spectroscopy. This technique was successfully applied in measuring Beliaev damping of the collective modes <cit.>, which revealed a complete absence of the collision of quasiparticles below a critical momentum in a BEC of ^87 Rb atoms. In case of Beliaev damping of collective modes in a laser-free BEC, as well as in case of Landau damping in a laser-driven spinor condensate, both physical systems are characterized by a critical energy, below which collision of the collective modes and the corresponding damping processes are entirely excluded. Thus, we conclude that despite the fact that the collision of the quasiparticles reported in the experiment <cit.> was governed by Beliaev damping, we anticipate the same results for Landau damping of collective modes in a laser-driven three-component spinor Bose-Einstein condensates. In conclusion, we investigated the quantum many-body physics of a three-component BEC confined in optical lattices and driven by laser fields in both V and Λ configurations. We found that the applied laser fields create a gap in the spectrum that shields collective excitation of the condensate lying below the energy of the gap from Landau damping. 
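A minimal evaluation of the small-momentum expansion quoted above, again with illustrative parameter values and units ħ = k_B = a_L = 1, makes the damping threshold explicit: collective modes with ħω_q < E_0 are protected from Landau damping, while those above the gap decay at a rate controlled by the laser-induced roton density.

```python
import numpy as np

# Illustrative parameters (not taken from the paper); units hbar = k_B = a_L = 1.
J, u   = 1.0, 0.5          # hopping and on-site interaction
Omega  = 1.0               # effective Rabi frequency of the combined drive
m_star = 1.0 / J           # effective mass m* = 1/(J a_L^2) with a_L = 1

# Laser-induced gap and curvature of the expansion E ~ E_0 + E_2 p^2 / 2
E0 = np.sqrt(Omega * (4.0 * u + Omega)) / 2.0
E2 = (2.0 * u + Omega) / (m_star * np.sqrt(Omega * (4.0 * u + Omega)))
print(f"gap E_0 = {E0:.3f},  curvature E_2 = {E2:.3f}")

# Landau damping is switched off below the gap: Gamma_L ~ theta(omega_q - E_0).
for omega_q in (0.5 * E0, 0.9 * E0, 1.5 * E0):
    status = "protected (Gamma_L = 0)" if omega_q < E0 else "damped, rate set by roton density"
    print(f"omega_q = {omega_q:.3f}: {status}")
```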
Above the gap, Landau damping is proportional to the density of the laser-induced roton modes, and is substantially suppressed compared to the Landau damping rate in an undriven scalar condensate carried by the phonons.This advance provides a prescription for the realization of electromagnetically induced transparencyand other exciting three-level phenomena in multicomponent Bose-Einstein condensates. Theauthorsgratefullyacknowledgestimulatingdiscussions with Marc Valdez and Logan Hillberry. This material is based in part upon work supported by the US National Science Foundation under grant numbers PHY-1306638, PHY-1207881, and PHY-1520915, and the US Air Force Office of Scientific Research grant number FA9550-14-1-0287. 42 fxundefined [1]ifx#1fnum [1]#1firstoftwosecondoftwo fx [1]#1firstoftwosecondoftwonoop [0]secondoftworef[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0]rl [1]href #1 @bib@innerbibempty[Kawaguchi and Ueda(2012)]kawaguchi2012spinor author author Yuki Kawaguchi and author Masahito Ueda, title title Spinor Bose–Einstein condensates, @noopjournal journal Phys. Rep. volume 520,pages 253–381 (year 2012)NoStop [Ueda(2010)]ueda2010fundamentals author author MasahitoUeda, @nooptitle Fundamentals and new frontiers of Bose-Einstein condensation (publisher World Scientific, year 2010)NoStop [Pethick and Smith(2008)]pethick2008bose author author C.J. Pethick and author H. Smith, @nooptitle Bose-Einstein condensation in dilute gases (publisher Cambridge,year 2008)NoStop [Davis et al.(1995)Davis, Mewes, Andrews, van Druten, Durfee, Kurn, and Ketterle]davis1995bose author author K. B. Davis, author M. O. Mewes, author M. R. Andrews, author N. J. van Druten, author D. S. Durfee, author D. M. Kurn,and author W. Ketterle, title title Bose-Einstein condensation in a gas of sodium atoms, @noopjournal journal Phys. Rev. Lett. volume 75, pages 3969–3973 (year 1995)NoStop [Anderson et al.(1995)Anderson, Ensher, Matthews, Wieman, and Cornell]anderson1995observation author author Mike H Anderson, author Jason R Ensher, author Michael R Matthews, author Carl E Wieman,and author Eric A Cornell, title title Observation of Bose-Einstein condensation in a dilute atomic vapor, @noopjournal journal Science volume 269, pages 198 (year 1995)NoStop [Bradley et al.(1997)Bradley, Sackett, and Hulet]bradley1997bose author author C. C. Bradley, author C. A. Sackett,and author R. G. Hulet, title title Bose-Einstein condensation of lithium: Observation of limited condensate number,@noopjournal journal Phys. Rev. Lett.volume 78, pages 985–989 (year 1997)NoStop [Chang et al.(2004)Chang, Hamley, Barrett, Sauer, Fortier, Zhang, You, andChapman]chang2004observation author author M.-S. Chang, author C. D. Hamley, author M. D. Barrett, author J. A. Sauer, author K. M. Fortier, author W. Zhang, author L. You,and author M. S.Chapman, title title Observation of spinor dynamics in optically trapped ^87Rb Bose-Einstein condensates, @noopjournal journal Phys. Rev. Lett. volume 92,pages 140403 (year 2004)NoStop [Miesner et al.(1999)Miesner, Stamper-Kurn, Stenger, Inouye, Chikkatur, and Ketterle]miesner1999observation author author H.-J. Miesner, author D. M. Stamper-Kurn, author J. Stenger, author S. Inouye, author A. P. Chikkatur,andauthor W. Ketterle, title title Observation of metastable states in spinor Bose-Einstein condensates, @noopjournal journal Phys. Rev. Lett. 
volume 82,pages 2228–2231 (year 1999)NoStop [Barrett et al.(2001)Barrett, Sauer, and Chapman]barrett2001all author author M. D. Barrett, author J. A. Sauer,and author M. S. Chapman,title title All-optical formation of an atomic Bose-Einstein condensate, @noopjournal journal Phys. Rev. Lett. volume 87,pages 010404 (year 2001)NoStop [Stenger et al.(1998)Stenger, Inouye, Stamper-Kurn, Miesner, Chikkatur, and Ketterle]stenger1998spin author author J Stenger, author S Inouye, author DM Stamper-Kurn, author H-J Miesner, author AP Chikkatur,and author W Ketterle, title title Spin domains in ground-state Bose-Einstein condensates, @noopjournal journal Nature volume 396, pages 345–348 (year 1998)NoStop [Chang et al.(2005)Chang, Qin, Zhang, You, andChapman]chang2005coherent author author Ming-ShienChang, author Qishu Qin, author Wenxian Zhang, author Li You,and author Michael S Chapman, title title Coherent spinor dynamics in a spin-1 Bose condensate, @noopjournal journal Nature physics volume 1, pages 111–116 (year 2005)NoStop [Scully et al.(1989)Scully, Zhu, and Gavrielides]PhysRevLett.62.2813 author author Marlan O.Scully, author Shi-YaoZhu,and author AthanasiosGavrielides, title title Degenerate quantum-beat laser: lasing without inversion and inversion without lasing, @noopjournal journal Phys. Rev. Lett. volume 62, pages 2813–2816 (year 1989)NoStop [Harris(1989)]PhysRevLett.62.1033 author author S. E. Harris, title title Lasers without inversion: Interference of lifetime-broadened resonances, @noopjournal journal Phys. Rev. Lett. volume 62, pages 1033–1036 (year 1989)NoStop [Hau et al.(1999)Hau, Harris, Dutton, and Behroozi]hau1999light author author Lene VestergaardHau, author Stephen EHarris, author ZacharyDutton,and author Cyrus HBehroozi, title title Light speed reduction to 17 metres per second in an ultracold atomic gas,@noopjournal journal Nature volume 397, pages 594–598 (year 1999)NoStop [Kash et al.(1999)Kash, Sautenkov, Zibrov, Hollberg, Welch, Lukin, Rostovtsev, Fry, and Scully]PhysRevLett.82.5229 author author Michael M.Kash, author Vladimir A.Sautenkov, author Alexander S.Zibrov, author L. Hollberg, author George R. Welch, author Mikhail D. Lukin, author Yuri Rostovtsev, author Edward S. Fry,and author Marlan O. Scully, title title Ultraslow group velocity and enhanced nonlinear optical effects in a coherently driven hot atomic gas, @noopjournal journal Phys. Rev. Lett. volume 82, pages 5229–5232 (year 1999)NoStop [Phillips et al.(2001)Phillips, Fleischhauer, Mair, Walsworth, and Lukin]PhysRevLett.86.783 author author D. F. Phillips, author A. Fleischhauer, author A. Mair, author R. L. Walsworth,andauthor M. D. Lukin, title title Storage of light in atomic vapor,@noopjournal journal Phys. Rev. Lett.volume 86, pages 783–786 (year 2001)NoStop [Boller et al.(1991)Boller, Imamo ğğlu, andHarris]PhysRevLett.66.2593 author author K.-J. Boller, author A. Imamo ğğlu,and author S. E. Harris, title title Observation of electromagnetically induced transparency, @noopjournal journal Phys. Rev. Lett. volume 66, pages 2593–2596 (year 1991)NoStop [Harris(1997)]harris1997today author author Stephen EHarris, title title Electromagnetically induced transparency, @noopjournal journal Physics Today volume 50, pages 36–42 (year 1997)NoStop [Lukin(2003)]RevModPhys.75.457 author author M. D. Lukin, title title Colloquium: Trapping and manipulating photon states in atomic ensembles, @noopjournal journal Rev. Mod. Phys. 
volume 75, pages 457–472 (year 2003)NoStop [Fleischhauer et al.(2005)Fleischhauer, Imamoglu, and Marangos]RevModPhys.77.633 author author Michael Fleischhauer, author Atac Imamoglu,and author Jonathan P.Marangos, title title Electromagnetically induced transparency: Optics in coherent media,@noopjournal journal Rev. Mod. Phys.volume 77, pages 633–673 (year 2005)NoStop [Fleischhauer and Lukin(2000)]PhysRevLett.84.5094 author author M. Fleischhauer and author M. D. Lukin, title title Dark-state polaritons in electromagnetically induced transparency, @noopjournal journal Phys. Rev. Lett. volume 84, pages 5094–5097 (year 2000)NoStop [Fleischhauer and Lukin(2002)]PhysRevA.65.022314 author author M. Fleischhauer and author M. D. Lukin, title title Quantum memory for photons: Dark-state polaritons, @noopjournal journal Phys. Rev. A volume 65,pages 022314 (year 2002)NoStop [Julsgaard et al.(2004)Julsgaard, Sherson, Cirac, Fiurášek, and Polzik]julsgaard2004experimental author author Brian Julsgaard, author Jacob Sherson, author J Ignacio Cirac, author Jaromír Fiurášek,and author Eugene SPolzik, title title Experimental demonstration of quantum memory for light, @noopjournal journal Nature volume 432, pages 482–486 (year 2004)NoStop [Lvovsky et al.(2009)Lvovsky, Sanders, and Tittel]lvovsky2009optical author author Alexander ILvovsky, author Barry CSanders,and author WolfgangTittel, title title Optical quantum memory, @noopjournal journal Nature photonics volume 3, pages 706–714 (year 2009)NoStop [Kielpinski et al.(2001)Kielpinski, Meyer, Rowe, Sackett, Itano, Monroe, and Wineland]kielpinski2001decoherence author author David Kielpinski, author V Meyer, author MA Rowe, author CA Sackett, author Wayne M Itano, author C Monroe,and author David J Wineland, title title A decoherence-free quantum memory using trapped ions, @noopjournal journal Science volume 291, pages 1013–1015 (year 2001)NoStop [Li et al.(2014)Li, Natu, Paramekanti, and Sarma]li2014chiral author author XiaopengLi, author Stefan S Natu, author Arun Paramekanti,and author S Das Sarma, title title Chiral magnetism and spontaneous spin hall effect of interacting Bose superfluids,@noopjournal journal Nature Comm.volume 5 (year 2014)NoStop [Stamper-Kurn and Ueda(2013)]stamper2013spinor author author Dan M Stamper-Kurn and author MasahitoUeda, title title Spinor Bose gases: Symmetries, magnetism, and quantum dynamics,@noopjournal journal Rev. Mod. Phys.volume 85, pages 1191 (year 2013)NoStop [Choi et al.(2012)Choi, Kwon, and Shin]choi2012observation author author Jae-yoonChoi, author Woo JinKwon,and author Yong-ilShin, title title Observation of topologically stable 2d skyrmions in an antiferromagnetic spinor Bose-Einstein condensate, @noopjournal journal Phys. Rev. Lett. volume 108,pages 035301 (year 2012)NoStop [Williams and Holland(1999)]williams1999preparing author author J.E. Williams and author M.J. Holland, title title Preparing topological states of a Bose–Einstein condensate, @noopjournal journal Nature volume 401, pages 568–572 (year 1999)NoStop [Ray et al.(2014)Ray, Ruokokoski, Kandel, Möttönen, and Hall]ray2014observation author author M.W. Ray, author E. Ruokokoski, author S. Kandel, author M. Möttönen,andauthor D.S. 
Hall, title title Observation of Dirac monopoles in a synthetic magnetic field, @noopjournal journal Nature volume 505, pages 657–660 (year 2014)NoStop [Natu and Wilson(2013)]PhysRevA.88.063638 author author Stefan S.Natu and author Ryan M.Wilson, title title Landau damping in a collisionless dipolar Bose gas, @noopjournal journal Phys. Rev. A volume 88, pages 063638 (year 2013)NoStop [Natu and Das Sarma(2013)]PhysRevA.88.031604 author author Stefan S.Natu and author S. Das Sarma, title title Absence of damping of low-energy excitations in a quasi-two-dimensional dipolar Bose gas, @noopjournal journal Phys. Rev. A volume 88, pages 031604 (year 2013)NoStop [Pixley et al.(2015)Pixley, Li, and Das Sarma]PhysRevLett.114.225303 author author J. H. Pixley, author Xiaopeng Li, and author S. Das Sarma,title title Damping of long-wavelength collective modes in spinor Bose-Fermi mixtures, @noopjournal journal Phys. Rev. Lett. volume 114, pages 225303 (year 2015)NoStop [Phuc et al.(2013)Phuc, Kawaguchi, and Ueda]phuc2013beliaev author author Nguyen ThanhPhuc, author Yuki Kawaguchi,and author MasahitoUeda, title title Beliaev theory of spinor Bose-Einstein condensates, @noopjournal journal Annals of Physics volume 328, pages 158–219 (year 2013)NoStop [Sun et al.(2016)Sun, Hu, Wen, Liu, Juzeliūnas, and Ji]sun2016ground author author Qing Sun, author Jie Hu, author Lin Wen, author W-M Liu, author Gediminas Juzeliūnas,and author An-Chun Ji, title title Ground states of a Bose-einstein condensate in a one-dimensional laser-assisted optical lattice, @noopjournal journal Scientific Reports volume 6 (year 2016)NoStop [Natu et al.(2014)Natu, Campanello, and Das Sarma]PhysRevA.90.043617 author author Stefan S.Natu, author L. Campanello,and author S. Das Sarma, title title Dynamics of correlations in a quasi-two-dimensional dipolar Bose gas following a quantum quench, @noopjournal journal Phys. Rev. A volume 90, pages 043617 (year 2014)NoStop [Yan et al.(2013)Yan, Moses, Gadway, Covey, Hazzard, Rey, Jin, andYe]yan2013observation author author Bo Yan, author Steven A Moses, author Bryce Gadway, author Jacob P Covey, author Kaden RA Hazzard, author Ana Maria Rey, author Deborah S Jin,and author Jun Ye, title title Observation of dipolar spin-exchange interactions with lattice-confined polar molecules, @noopjournal journal Nature volume 501, pages 521–525 (year 2013)NoStop [Ferrier-Barbut et al.(2014)Ferrier-Barbut, Delehaye, Laurent, Grier, Pierce, Rem, Chevy, and Salomon]ferrier2014mixture author author Igor Ferrier-Barbut, author Marion Delehaye, author Sebastien Laurent, author Andrew T Grier, author Matthieu Pierce, author Benno S Rem, author Frédéric Chevy, and author Christophe Salomon,title title A mixture of Bose and Fermi superfluids, @noopjournal journal Science volume 345, pages 1035–1038 (year 2014)NoStop [Zheng and Zhai(2014)]PhysRevLett.113.265304 author author Wei Zheng and author Hui Zhai,title title Quasiparticle lifetime in a mixture of Bose and Fermi superfluids, @noopjournal journal Phys. Rev. Lett. volume 113, pages 265304 (year 2014)NoStop [Pitaevskii and Stringari(1997)]pitaevskii1997landau author author L.P. Pitaevskii and author S. Stringari, title title Landau damping in dilute Bose gases, @noopjournal journal Phys. Lett. A volume 235,pages 398–402 (year 1997)NoStop [Landau and Lifshitz(1980)]landau1980statistical author author L. D. Landau and author E.M. Lifshitz, @nooptitle Statistical Physics, Vol. 
volume I (publisher Pergamon,year 1980)NoStop [Katz et al.(2002)Katz, Steinhauer, Ozeri, and Davidson]PhysRevLett.89.220401 author author N. Katz, author J. Steinhauer, author R. Ozeri,and author N. Davidson, title title Beliaev damping of quasiparticles in a Bose-Einstein condensate, @noopjournal journal Phys. Rev. Lett. volume 89,pages 220401 (year 2002)NoStop | http://arxiv.org/abs/1705.10199v2 | {
"authors": [
"Gavriil Shchedrin",
"Daniel Jaschke",
"Lincoln D. Carr"
],
"categories": [
"cond-mat.quant-gas"
],
"primary_category": "cond-mat.quant-gas",
"published": "20170526064643",
"title": "Absence of Landau damping in driven three-component Bose-Einstein condensate in optical lattices"
} |
∂ i.e. e.g. etc. et al.→↔<∼>∼M_ plḍ^1Institute for Theoretical Studies, ETH Zurich, Clausiusstrasse 47, 8092 Zurich, Switzerland ^2Department of Physics, Faculty of Science, Tokyo University of Science, 1-3, Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan ^3Centro Multidisciplinar de Astrofisica - CENTRA, Departamentode Fisica, Instituto Superior Tecnico - IST, Universidade de Lisboa - UL, AvenidaRovisco Pais 1, 1049-001 Lisboa, Portugal We present a family of exact black-hole solutions ona static spherically symmetric background in second-ordergeneralized Proca theories with derivative vector-fieldinteractions coupled to gravity. We also derive non-exact solutions in power-law coupling models including vectorGalileons and numerically show the existence ofregular black holes with a primary hair associatedwith the longitudinal propagation. The intrinsic vector-fieldderivative interactions generally give rise to a secondary hairinduced by non-trivial field profiles.The deviation from General Relativity is most significantaround the horizon and hence there is a golden opportunity for probingthe Proca hair by the measurements of gravitational waves (GWs) inthe regime of strong gravity.04.50.Kd,04.70.BwHairy black-hole solutions in generalized Proca theoriesLavinia Heisenberg^1,Ryotaro Kase^2, Masato Minamitsuji^3, andShinji Tsujikawa^2 December 30, 2023 ========================================================================================= § INTRODUCTION The no-hair conjecture of black holes (BHs) <cit.>was originally suggested bythe existence of uniqueness theorems for Schwarzschild,Reissner-Nordström (RN), and Kerr solutionsin General Relativity (GR) <cit.>.However, there are several assumptions for provingthe absence of hairs besides mass, charge, and angular momentumin the form of no-hair theorems.One of such assumptions for a scalar field ϕ is that the standard canonical term ∇ _μϕ∇^μϕ/2is the only field derivative in the action <cit.>.Hence, the no-hair theorem of Ref. <cit.> loses its validityfor theories containing non-canonical kinetic terms. There are theories with non-canonical scalars with non-linearderivative interactions-like Galileons <cit.>and its extension to Horndeski theories <cit.>.In shift-symmetric Horndeski theories, a no-hair theorem for staticand spherically symmetric BHs was proposed <cit.>by utilizing the regularity of a Noether current on the horizon. A counterexample of a hairy BH evading one of the conditions discussed in Ref. <cit.> was advocated for the scalarfield linearly coupled to a Gauss-Bonnet term <cit.>.For a time-dependent scalar with non-mininal derivative coupling tothe Einstein tensor, there is also a stealth Schwarzschild solution with anon-trivial field profile <cit.>.For a massless vector field in GR, the static and sphericallysymmetric BH solution is described by the RN metric withmass M and charge Q. The introduction of a vector-fieldmass breaks the U(1) gauge symmetry, which allows thepropagation of the longitudinal mode.For this massive Proca field, Bekenstein showed <cit.>that a static BH does not have a vector hair.The vector field A^μ vanishes throughout the BHexterior from the requirement that a physical scalarconstructed from A^μ is bounded on a non-singular horizon.In this case, the static and spherically symmetric BH solution is described by the Schwarzschild metricwith mass M.The Bekenstein's no hair theorem <cit.> cannot beapplied to the massive vector fieldwith non-linear derivative interactions.In Refs. 
<cit.>the action of generalized Proca theories was constructed by demandingthe condition that the equations of motion are up to second order to avoid the Ostrogradski instability.An exact static and spherically symmetric BH solution withthe Abelian vector hair[In this paper we focus onthe Abelian vector field, but there are hairyBH solutions for non-Abelian Yang-Mills fields <cit.>.A complex Abelian vector field can also give rise to hairyKerr solutions <cit.>.]was found in Ref. <cit.>for the LagrangianL=(M_ pl^2/2) R-F_μνF^μν/4 +β_4G^μνA_μA_ν with the specific couplingβ_4=1/4, where M_ pl is the reduced Planck mass, R and G_μν are the Ricci scalar and Einstein tensor respectively, and F_μν=A_ν;μ-A_μ;ν(a semicolon represents a covariant derivative) is the field strength. This is a stealth Schwarzschild solution containing mass M alonewith a temporal vector component A_0=P+Q/rand a non-vanishing longitudinal component A_1, where r is thedistance from the center of spherical symmetry. Unlike the RN solution present for the massless vector in GR,the Proca hair P is physical but P as well as the electric chargeQ does not appear in metrics.The exact BH solutions studied in Ref. <cit.> have been extendedto non-asymptotically flat solutions <cit.>,rotating solutions <cit.>,and solutions for β_4≠ 1/4<cit.>.All these studies considered only theβ_4 G^μνA_μA_ν coupling.It is crucial to investigate whether the self-derivative interactionsof generalized Proca theories give rise to compelling new hairy BHsolutions. In this paper we provide a systematic prescription forconstructing new BH solutions in generalized Proca theories onthe static and spherically symmetric background given bythe line element ds^2 =-f(r) dt^2 +h^-1(r)dr^2 +r^2( dθ^2+sin^2θ dφ^2) ,with the vector field A_μ=(A_0(r),A_1(r),0,0),where f(r), h(r), A_0(r), and A_1(r) are arbitrary functions of r.We will derive exact solutions under the conditionof a constant norm of the vector field A_μ A^μ= constant. We also numerically obtain hairy BH solutions for power-law couplingmodels including vector Galileons. Unlike scalar-tensor theories in which hairy BH solutions exist onlyfor restrictive cases, we will show that the presence ofa temporal vector component besides a longitudinal scalar modegives rise to a bunch of hairy BH solutions in broad classesof models.The generalized Proca theories are given by theactionS=∫ d^4x √(-g)( F+∑_i=2^6 L_i) ,whereF=-F_μνF^μν/4, and <cit.>L_2=G_2(X) , L_3=G_3(X) A^μ_;μ , L_4=G_4(X)R+G_4,X[(A^μ_;μ)^2 -A_ν_;μA^μ^;ν]-2g_4(X)F , L_5=G_5(X)G_μνA^ν^;μ -G_5,X/6 [ (A^μ_;μ)^3 -3A^μ_;μA_σ_;ρA^ρ^;σ +2A_σ_;ρA^ρ^;νA_ν^;σ] -g_5(X)F̃^αμF̃^β_μA_β;α ,L_6=G_6(X) L^μναβ A_ν;μA_β;α +G_6,X/2F̃^αβF̃^μνA_μ;αA_ν;β.The functions G_2,3,4,5,6 and g_4,5 depend onX=-A_μA^μ/2, with the notationG_i,X=∂ G_i/∂ X. The vector field A^μ hasnon-minimal couplings with the Ricci scalar R,the Einstein tensor G_μν, and the double dual Riemann tensorL^μναβ= E^μνρσ E^αβγδR_ρσγδ/4,where E^μνρσ is theLevi-Civita tensorand R_ρσγδ is the Riemann tensor.The dual strength tensor F̃^μν is defined byF̃^μν= E^μναβF_αβ/2. The Lagrangians containing g_4,g_5,G_6 correspond to intrinsicvector modes that vanish in the scalar limitA^μ→π^;μ.Throughout the analysis we take into account theEinstein-Hilbert term M_ pl^2/2 in G_4(X). § EXACT SOLUTIONS The exact solution of Ref. <cit.> was found forthe model G_4(X)=M_ pl^2/2+β_4 Xwith β_4=1/4. For this solution there are two relations f=h , X=X_c ,where X_c is a constant. 
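For orientation, the constant-norm condition can be made explicit on the static ansatz introduced above. The sympy sketch below (my own illustration; the symbols simply stand for the values of f, h, A_0, A_1 at a fixed radius) solves X = -A_μA^μ/2 = X_c for the longitudinal component and checks that, for f = h, its square reduces to (A_0^2 - 2 f X_c)/f^2, i.e. the relation quoted next.

```python
import sympy as sp

# Treat the metric functions and vector components at a fixed radius as
# positive symbols; the check below is purely algebraic.
f, h, A0, A1, Xc = sp.symbols('f h A_0 A_1 X_c', positive=True)

# For ds^2 = -f dt^2 + dr^2/h + r^2 dOmega^2 and A_mu = (A_0, A_1, 0, 0):
# g^{tt} = -1/f and g^{rr} = h, so X = -A_mu A^mu / 2 = A_0^2/(2 f) - h A_1^2 / 2.
X = A0**2 / (2 * f) - h * A1**2 / 2

# Imposing the constant-norm condition X = X_c and solving for A_1:
sol = sp.solve(sp.Eq(X, Xc), A1)[0]
assert sp.simplify(sol**2 - (A0**2 - 2 * f * Xc) / (f * h)) == 0

# With f = h this squares to (A_0^2 - 2 f X_c)/f^2, i.e. A_1 = +- sqrt(A_0^2 - 2 f X_c)/f.
assert sp.simplify(sol.subs(h, f)**2 - (A0**2 - 2 * f * Xc) / f**2) == 0
print("constant norm X = X_c fixes the longitudinal component as expected")
```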
On using these conditions for the vector fieldA_μ, it follows thatA_1=±√(A_0^2-2fX_c)/f .Introducing the tortoise coordinate dr_*=dr/f(r), the scalar product A_μdx^μ reduces toA_0 du_± around the horizon, where u_±=t ± r_*.The advanced and retarded null coordinates u_+and u_- are regular at the future and past event horizons,respectively. Hence the regularity of solutions at the corresponding (future or past)horizon is ensured for each branch of(<ref>), which is analogous tothe case of shift-symmetric scalar-tensor theories <cit.>.We will search for exact solutions byimposing the two conditions (<ref>). Provided the condition G_4,XX(X_c)=0 is satisfiedfor the quartic-order coupling G_4(X), the equationof motion for A_1 reduces toG_4,X(rf'+f-1)A_1=0, where a prime representsthe derivative with respect to r.As long as G_4,X≠ 0,there are two branches characterized byrf'+f-1=0 or A_1=0. The first gives rise to the stealth Schwarzschild solutionf=h=1-2M/r found in Ref. <cit.>.In this case the temporal vector component obeysA_0”+2A_0'/r=0, whose integrated solution isgiven by A_0=P+Q/r. Since the constant P is independent of M and Q,it is regarded as a primary hair <cit.>.The other two independent equations are satisfiedfor G_4,X(X_c)=1/4 and X_c=P^2/2, so the longitudinalmode (<ref>) readsA_1=±√(2P(MP+Q)r+Q^2)/(r-2M). A concrete model satisfying the above mentionedconditions isG_4(X)=G_4(X_c)+1/4( X-X_c)+∑_n=3 b_n ( X-X_c )^n ,where b_n's are constants.The model G_4(X)=M_ pl^2/2+X/4 corresponds to the special case of Eq. (<ref>).Besides the non-vanishing A_1 solution there existsanother branch A_1=0 for the couplingsG_i(X) with even i-index, in which casethe relation A_0^2(r)=2f(r)X_cholds from Eq. (<ref>). For the quartic coupling G_4(X) the equation for A_0 can be satisfiedunder the conditions G_4,X(X_c)=0 and2r f f”-rf'^2+4ff'=0.The latter leads to the solution f=(C-M/r)^2 with two integrated constants C and M.For the consistency with the other two equations of motion,we require that C=1 and G_4(X_c)=X_c/2.Hence we obtain the extremal RN BH solution f=h=( 1-M/r)^2, A_0=P-PM/r , A_1=0,where P=±√(2X_c). An explicit model realizing this solution isG_4(X)=X_c/2 +∑_n=2 b_n( X-X_c )^n .For the metric (<ref>),P depends on M by reflecting the fact thatthe charge Q=-PM has a special relation withthe mass M. Hence the Proca hair is of thesecondary type.For the cubic coupling G_3(X) the equationfor A_1 readsG_3,X[ f^2 (rf'+4f)A_1^2 +rA_0 (2fA_0'-f'A_0) ]=0 ,so there are two branches satisfying (i) G_3,X(X_c)=0 or(ii) G_3,X(X_c) ≠ 0. For the branch (i) the consistency with the other equationsrequires that 2 ( rf'+f-1 )M_ pl^2 +r^2 A_0'^2=0 and A_0”+2A_0'/r=0, sothe integrated solutions are of the RN forms: f=h=1-2M/r+Q^2/2M_ pl^2 r^2 , A_0=P+Q/r ,with the non-vanishing longitudinal mode (<ref>). This exact solution can be realized by the modelG_3(X)=G_3(X_c)+∑_n=2 b_n( X-X_c )^n .Unlike the RN solution in GR with G_3(X)=0,P in Eq. (<ref>) has the meaningof the primary hair with the non-vanishing longitudinalmode (<ref>). The branch (ii) corresponds to the case in which the termsin the square bracket of Eq. (<ref>) vanishes.On using Eq. 
(<ref>) and imposing the asymptoticallyflat boundary condition f → 1 for r →∞,we obtain the extremal RN BH solution (<ref>) withP=±√(2)M_ pl.For the quintic coupling G_5(X) the temporal componentobeys A_0”+2A_0'/r=0 under the conditions (<ref>),so the resulting solution is A_0=P+Q/r.Imposing the condition G_5,X(X_c)=0 further,the equation for A_1 reduces to(A_0A_0'-X_cf')A_1^2G_5,XX=0 and hencethere are two branches satisfying(i) A_0A_0'=X_c f' or (ii) A_1=0.For the branch (i),the resulting solutions are given bythe RN solutions (<ref>) with the particularrelations P=-2MM_ pl^2/Q and X_c=M_ pl^2. The longitudinal mode (<ref>) reduces to A_1=±2M_ pl^3√(2(2M^2M_ pl^2-Q^2)) r^2/Q[2M_ pl^2r(2M-r)-Q^2] ,whose existence requires the condition2M^2M_ pl^2>Q^2.Since P depends on M and Q, theProca hair P is secondary.This exact solution can be realized by the modelG_5(X)=G_5(X_c)+∑_n=2 b_n( X-M_ pl^2 )^n .Another branch A_1=0 is the specialcase of Eq. (<ref>), i.e., Q^2=2M^2M_ pl^2,under which the solution is given by the extremalRN BH solution (<ref>)with P=±√(2)M_ pl.The sixth-order coupling G_6(X) has the two branches(i) A_1=0 or (ii) A_0'=0.For the branch (i) there exists an exact solution if the twoconditions G_6(X_c)=0 and G_6,X(X_c)=0 hold. This is the extremal RN BH solution (<ref>) with X_c=M_ pl^2 and P=±√(2)M_ pl,which can be realized for the model G_6(X)=∑_n=2 b_n ( X-M_ pl^2 )^n .The branch (ii) corresponds to A_0= constant,in which case we obtain the stealth Schwarzschild solutionf=h=1-2M/r. This exists for general couplings G_6(X)with arbitrary values of A_1. Since we are now imposing the second condition of Eq. (<ref>), the longitudinal mode is fixed to beA_1=±√(r[(A_0^2-2X_c)r+4MX_c])/(r-2M). § POWER-LAW COUPLINGS So far we have imposed the conditions(<ref>) to derive exact solutions, but we willalso study BH solutions for the power-law models G_i(X)=β̃_i X^n ,g_j(X)=γ̃_jX^n ,where n is a positive integer, and β̃_̃ĩ andγ̃_j are coupling constants[ For the dimensionless coupling constants, we use thenotations β_i andγ_j in the following.]with i=3,4,5,6 and j=4,5.Let us begin with the cubic vector-Galileoninteraction G_3(X)=β_3 X. Then, the longitudinal mode obeys A_1=±√(rA_0(f'A_0-2fA_0')/fh(rf'+4f)) .Around the horizon characterized by the radius r_h,we expand f,h,A_0 in the formsf=∑_i=1^∞ f_i(r-r_h)^i , h=∑_i=1^∞ h_i(r-r_h)^i , A_0=a_0+∑_i=1^∞ a_i(r-r_h)^i ,where f_i,h_i,a_0,a_i are constants.To recover the RN solutions of the formf=h=(r-r_h)(r-μ r_h)/r^2 in the limit β_3 → 0,where the constant μ is in the range 0<μ<1 so that r=r_h corresponds to the outer horizon,we choose f_1=h_1=(1-μ)/r_h .Taking the positive branch of A_1 with a_0>0and picking up linear-order terms in β_3,the effect of the coupling β_3 starts to appearat second order of (r-r_h)^i, such that a_1=M_ pl√(2μ)/r_h , a_2=-M_ pl√(2μ)/r_h^2 +α_2 β_3 ,f_2=2μ-1/r_h^2+ F_2 β_3 , h_2=2μ-1/r_h^2+ H_2 β_3 ,where α_2,F_2,H_2 depend onthe three parameters (h_1, r_h, a_0). The coupling β_3 induces the difference betweenthe metrics f and h. The leading-order longitudinal mode around r=r_h is given by A_1=a_0/[f_1(r-r_h)],so the scalar product A_μdx^μbecomes A_μdx^μ≃ a_0du_+,which is regular at the future horizon r=r_h. 
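The near-horizon behaviour can be checked directly from the closed-form relation for A_1 given above. The sketch below uses made-up expansion coefficients (they are not the α_2, F_2, H_2 of the text) and confirms numerically that (r - r_h) A_1 → a_0/f_1 when f_1 = h_1, which is what renders A_μ dx^μ ≃ a_0 du_+ finite at the future horizon.

```python
import numpy as np

# Toy near-horizon data (illustrative): r_h, f_1 = h_1, a_0, plus small
# higher-order coefficients of my own choosing.
rh, f1, a0 = 1.0, 0.6, 0.3
f2, h2, a1 = 0.1, 0.05, 0.2

def f(r):   return f1 * (r - rh) + f2 * (r - rh)**2
def h(r):   return f1 * (r - rh) + h2 * (r - rh)**2
def df(r):  return f1 + 2.0 * f2 * (r - rh)
def A0(r):  return a0 + a1 * (r - rh)
def dA0(r): return a1

def A1(r):
    """Positive branch of the cubic-Galileon relation
    A_1 = sqrt( r A_0 (f' A_0 - 2 f A_0') / (f h (r f' + 4 f)) )."""
    num = r * A0(r) * (df(r) * A0(r) - 2.0 * f(r) * dA0(r))
    den = f(r) * h(r) * (r * df(r) + 4.0 * f(r))
    return np.sqrt(num / den)

for eps in (1e-3, 1e-4, 1e-5):
    r = rh + eps
    print(f"eps = {eps:.0e}:  (r - r_h) * A1 = {eps * A1(r):.6f},  a0/f1 = {a0 / f1:.6f}")
```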
We also search for asymptotic flat solutions at spatialinfinity (r →∞) by expanding f,h,A_0 in the forms f=1+∑_i=1^∞f̃_i/r^i, h=1+∑_i=1^∞h̃_i/r^i, A_0=P+∑_i=1^∞ã_i/r^i .For the cubic Galileon, the asymptotic solution for A_1reduces to A_1=∑_i=2^∞b̃_i/r^i,where the first-order coefficient b̃_1vanishes from the background equations of motion. The iterative solutions are given byf=1-2M/r-P^2M^3/(6M_ pl^2r^3)+ O(1/r^4),h=1-2M/r-P^2M^2/(2M_ pl^2r^2) -P^2M^3/(2M_ pl^2r^3)+ O(1/r^4), andA_0=P-PM/r-PM^2/(2r^2)+ O(1/r^3), wherewe have set f̃_1=h̃_1=-2M. The coefficient b̃_2 and thecoupling β_3 begin to appear at the orders of1/r^4 and 1/r^5, respectively, in f,h,A_0. In Fig. <ref> we plot one example of numericallyintegrated solutions outside the horizon derived by usingthe boundary conditions (<ref>)-(<ref>)around r=r_h. The solutions in the two asymptoticregimes smoothly join each other without anydiscontinuity. As estimated above, the longitudinal modebehaves as A_1 ∝ (r-r_h)^-1 for r ≃ r_hand A_1 ∝ r^-2 for r ≫ r_h.Since the time t can be reparametrized such thatf shifts to 1 at spatial infinity, we haveperformed this rescaling of f after solving the equationsof motion up to r=2 × 10^7r_h. In Fig. <ref> thedifference between h and f manifests itself in theregime of strong gravity with the radius r ≲ 100 r_h.Since the two asymptotic solutions discussed above are continuous,the three parameters (b̃_2, M, P)appearing in the expansion (<ref>) with A_1=∑_i=2^∞b̃_i/r^i are related tothe three parameters (h_1, r_h, a_0) arisingin the expansion (<ref>), asb̃_2=b̃_2(h_1, r_h, a_0), M=M(h_1, r_h, a_0), andP=P(h_1,r_h,a_0). Since b̃_2 is not fixed by the two parametersM and P alone, this is regarded as a primary hair.For the cubic interaction G_3(X)=β_3 M_ pl^2(X/M_ pl^2)^n with n ≥ 2, there is thenon-vanishing A_1 branchsatisfying the relation (<ref>). In this case, the property of two asymptotic solutions (<ref>) and (<ref>) is similar to that discussedfor n=1. The solutions are also regular throughout theBH exterior with the difference between f andh induced by β_3.There is also another branch obeyingA_1=±√(A_0^2/(fh)) ,for which the resulting solutions correspond to the RNsolutions (<ref>). Indeed, this exact solutionis the special case of the model (<ref>) withX_c=0 and G_3(X_c)=0.Let us proceed to the quartic couplingG_4(X)=β_4 M_ pl^2 (X/M_ pl^2)^nwith n ≥ 2. In general, we have two branchescharacterized by (i) A_1 ≠ 0 or (ii) A_1=0. For n ≥ 3 there exists the non-vanishing A_1branch (<ref>) with the RN solutions (<ref>). Another non-vanishing A_1 branch gives rise tohairy BH solutions with f ≠ h.Indeed, the solutions around the horizon are given bythe expansion (<ref>) with the couplingβ_4 appearing at second order (i=2) in f, hand at first order (i=1) in A_0.They are characterized by the three parameters(h_1, r_h,a_0) under the condition (<ref>). The solutions expanded at spatial infinity arethe RN solutions (<ref>) with correctionsinduced by β_4.If n=2, for example, such corrections to f,h,A_0arise at second order in 1/r^2, e.g.,δ f=3P^2Q^2(5P^2-8M_ pl^2)β_4/ (4M_ pl^6 r^2),δ h=3P^2Q^2(11P^2-16M_ pl^2)β_4/ (4M_ pl^6 r^2), andδ A_0=PQ^2 (3P^2-4M_ pl^2)β_4/ (M_ pl^4 r^2), respectively. 
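To visualise the hair-induced departure from f = h at large distances, the sketch below evaluates the truncated large-r expansions quoted above for the cubic vector Galileon, with illustrative values of M and P in units M_pl = 1. The displayed orders do not yet contain β_3 or b̃_2, and the full profiles in the intermediate region require the numerical integration of the field equations described in the text.

```python
import numpy as np

# Toy parameters in units M_pl = 1 (illustrative only).
M, P = 1.0, 0.4

def f_asym(r):   # f  = 1 - 2M/r - P^2 M^3 / (6 r^3) + O(1/r^4)
    return 1.0 - 2.0 * M / r - P**2 * M**3 / (6.0 * r**3)

def h_asym(r):   # h  = 1 - 2M/r - P^2 M^2 / (2 r^2) - P^2 M^3 / (2 r^3) + O(1/r^4)
    return 1.0 - 2.0 * M / r - P**2 * M**2 / (2.0 * r**2) - P**2 * M**3 / (2.0 * r**3)

def A0_asym(r):  # A_0 = P - P M / r - P M^2 / (2 r^2) + O(1/r^3)
    return P - P * M / r - P * M**2 / (2.0 * r**2)

for r in (5.0, 10.0, 50.0, 100.0):
    diff = h_asym(r) - f_asym(r)
    print(f"r = {r:6.1f}:  A_0 = {A0_asym(r):.4f},  h - f = {diff:+.2e}"
          f"   (~ -P^2 M^2/(2 r^2) = {-P**2 * M**2 / (2.0 * r**2):+.2e})")
```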
The longitudinal mode behaves asA_1 ∝ (r-r_h)^-1 for r ≃ r_h andA_1 ∝ r^-1/2 for r ≫ r_h.Numerically we confirmed that the solution aroundr=r_h smoothly connects to that in theasymptotic regime r ≫ r_h, so the parameters(P,Q,M) are related to (h_1, r_h,a_0).Therefore, the Proca hair P is of the primary type.For the second branch A_1=0, the solutions(<ref>) around r=r_h are subject tothe constraint a_0=0. Hence they are expressed interms of the two parameters (h_1,r_h) with the coupling β_4 appearing at the orderof (r-r_h)^3 in f,h,A_0 for n=2. At spatial infinity, the effect of β_4 works ascorrections to the RN solutions (<ref>).For n=2 the leading-order corrections to f,h,A_0are given, respectively, byδ f=-4P^3(MP+Q)β_4/(M_ pl^4r),δ h=3P^4Q^2β_4/(4M_ pl^6 r^2), andδ A_0=-P^3Q(2MP+Q)β_4/(2M_ pl^4r^2).The matching of two asymptotic solutions has been alsoconfirmed numerically, so (P,M,Q) depend on thetwo parameters (h_1, r_h) alone. HenceP corresponds to the secondary hair.The sixth-order couplingG_6(X)=(β_6/M_ pl^2 )(X/M_ pl^2)^nwith the power n ≥ 0 has the branch satisfyingA_1^2/A_0^2=(3h-1)/[fh{(2n+1)h-1}]besides A_0'=0, A_1=0, andA_1=±√(A_0^2/(fh)) (the last one is present for n ≥ 3).However, the first one does not exist in the region 1/(2n+1)<h<1/3 outside the horizon. Since the second and fourth branches correspond to theSchwarzschild and RN solutions, respectively,the branch A_1=0 alone leads to the solutionswith f ≠ h.The U(1)-invariant interaction derived byHorndeski<cit.> corresponds to n=0,in which case the coupling β_6 appears in theexpansion (<ref>) around r=r_h at second orderfor f,h and at first order for A_0, with a_0 unfixed.For n ≥ 1 the effect of β_6 arises at n+1order in Eq. (<ref>), with a_0=0. At spatial infinity, the leading-order corrections to theRN solutions (<ref>) read δ f=-P^2nQ^2β_6/(2^1+nM_ pl^4+2nr^4),δ h=(2n-1)MP^2nQ^2β_6/(2^1+nM_ pl^4+2nr^5),andδ A_0=-MP^2nQ β_6/(2^nM_ pl^2+2nr^4), which match with those derived by Horndeski in the U(1)-invariant case (n=0) <cit.>.For n ≥ 0, the numerically integrated solutions are regularthroughout the horizon exterior with the difference betweenf and h. When n=0,P has no physical meaningdue to the U(1) gauge symmetry, so there are two physical hairs M and Qrelated to the parameters h_1 and r_h around the horizon.For n ≥ 1 the Proca hair P is secondary, which reflects the fact that (P,M,Q) depend on (h_1,r_h) alone.The quintic coupling G_5(X)=β_5 (X/M_ pl^2)^n does notlead to regular solutions with A_1 ≠ 0 due to the divergenceat h=1/(2n+1). For the intrinsic vector-mode couplingsg_4(X)=γ_4(X/M_ pl^2)^n andg_5(X)=(γ_5/M_ pl^2)(X/M_ pl^2)^nwith n ≥ 1, there are hairy regular BH solutionswith f ≠ h characterized by A_1=0 andA_1=±√(A_0^2/[(1+2n)fh]),respectively. The couplings γ_4and γ_5give rise to corrections to the RN solutions (<ref>),where the near-horizon expansion (<ref>) can beexpressed in terms of the two parameters h_1 and r_h with a_0=0. In this case the Proca hair P is dependent on M andQ at spatial infinity, so it is of the secondary type. 
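For comparison with the quartic case, the following sketch evaluates the leading large-r corrections quoted above for the G_6 coupling at a few values of n (toy parameter values, units M_pl = 1), making their r^{-4} and r^{-5} falloff explicit; this is faster than the 1/r^2 corrections found above for the quartic power-law model.

```python
import numpy as np

# Toy parameters in units M_pl = 1 (illustrative only).
M, Q, P, beta6 = 1.0, 0.3, 0.5, 0.1

def corrections(n, r):
    """Leading large-r corrections induced by G_6 = (beta6/M_pl^2)(X/M_pl^2)^n."""
    df  = -P**(2 * n) * Q**2 * beta6 / (2.0**(1 + n) * r**4)
    dh  = (2 * n - 1) * M * P**(2 * n) * Q**2 * beta6 / (2.0**(1 + n) * r**5)
    dA0 = -M * P**(2 * n) * Q * beta6 / (2.0**n * r**4)
    return df, dh, dA0

for n in (0, 1, 2):          # n = 0 is the U(1)-invariant Horndeski coupling
    df, dh, dA0 = corrections(n, r=10.0)
    print(f"n = {n}:  df = {df:+.2e},  dh = {dh:+.2e},  dA0 = {dA0:+.2e}")
```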
§ CONCLUSIONS We have systematically constructed new exact BH solutionsunder the conditions (<ref>) and also obtained a family of hairynumerical BH solutions with f ≠ h for the power-law models (<ref>).For the cubic and quartic couplingsG_3(X)=β̃_3 X^n and G_4(X)=β̃_4 X^n,there exist non-vanishing A_1 branches with the primary Proca hairwith the difference between f and h manifesting around the horizon.For the intrinsic vector-mode couplingsG_6(X)=β̃_6 X^n, g_4(X)=γ̃_4X^n,g_5(X)=γ̃_5X^n with n ≥ 1,there are regular BH solutions (RN solutionswith corrections induced by the couplings) characterized by the secondaryProca hair P.Since astronomical observations of BHs have increased their accuracies,there will be exciting possibilities for probing deviations from GR in theforeseeable future, e.g., in the measurements of innermost stable circular orbits.GWs emitted from quasi-circular BH binariescan generally place tight bounds on modified gravitational theories with large deviations from GR in the regime of strong gravity <cit.>.The future GW measurementswill be able to measurethe Proca charge P through the corrections to the Schwarzschild orRN metrics and the precise determination of polarizations. The existence of such a new vector hair will shed new lighton the construction of unified theories connecting gravitational theorieswith particle theories. Our analysis in the strong gravity regime is also complementary to the cosmological analysis with the late-timeacceleration <cit.> and the solar-system constraints <cit.>.The combination of them will allow us to probe vector-tensor theories in all scales in astrophysics and cosmology.§ ACKNOWLEDGMENTSL. H. thanks financial support from Dr. Max Rössler,the Walter Haefner Foundation and the ETH Zurich Foundation. R. K. is supported by the Grant-in-Aid for Young Scientists B of the JSPS No. 17K14297.M. M. is supported by FCT-Portugalthrough Grant No. SFRH/BPD/88299/2012.S. T. is supported by the Grant-in-Aid for Scientific Research Fund of the JSPS No. 16K05359 andMEXT KAKENHI Grant-in-Aid forScientific Research on Innovative Areas “Cosmic Acceleration” (No. 15H05890).99WheelerR. Ruffini and J. A. Wheeler, Phys. Today24,No. 1, 30 (1971).IsraelW. Israel,Phys. Rev.164, 1776 (1967). CarterB. Carter, Phys. Rev. Lett.26, 331 (1971). HawkingS. W. Hawking,Commun. Math. Phys.25, 152 (1972). BekensteinJ. D. Bekenstein,Phys. Rev. D51, R6608 (1995). Galileon1A. Nicolis, R. Rattazzi and E. Trincherini,Phys. Rev. D79, 064036 (2009).Galileon2C. Deffayet, G. Esposito-Farese and A. Vikman,Phys. Rev. D79, 084003 (2009).HorndeskiG. W. Horndeski,Int. J. Theor. Phys.10, 363 (1974). GaoC. Deffayet, X. Gao, D. A. Steer and G. Zahariade,Phys. Rev. D84, 064039 (2011). HuiL. Hui and A. Nicolis,Phys. Rev. Lett.110, 241104 (2013).Soti1T. P. Sotiriou and S. Y. Zhou,Phys. Rev. Lett.112, 251102 (2014).BabichevE. Babichev and C. Charmousis,JHEP1408, 106 (2014).Bekenstein2J. D. Bekenstein,Phys. Rev. D5, 1239 (1972). HeisenbergL. Heisenberg,JCAP1405, 015 (2014). Tasinato G. Tasinato,JHEP1404, 067 (2014). Allys E. Allys, P. Peter and Y. Rodriguez,JCAP1602, 004 (2016). Jimenez2016J. B. Jimenez and L. Heisenberg,Phys. Lett. B757, 405 (2016).colored R. Bartnik and J. Mckinnon,Phys. Rev. Lett.61, 141 (1988);M. S. Volkov and D. V. Galtsov,JETP Lett.50, 346 (1989); P. Bizon,Phys. Rev. Lett.64, 2844 (1990). KerrProcaC. Herdeiro, E. Radu and H. Runarsson,Class. Quant. Grav.33, 154001 (2016). ChagoyaJ. Chagoya, G. Niz and G. Tasinato,Class. Quant. Grav.33, 175007 (2016).MinamiM. 
Minamitsuji, Phys. Rev. D94, 084039 (2016).Babichev17E. Babichev, C. Charmousis and M. Hassaine,JHEP1705, 114 (2017).Chagoya2J. Chagoya, G. Niz and G. Tasinato,Class. Quant. Grav.34, no. 16, 165002 (2017).HerdeiroC. A. R. Herdeiro and E. Radu,Int. J. Mod. Phys. D24, 1542014 (2015).Horndeski76G. W. Horndeski, J. Math. Phys.17, 1980 (1976). HorndeskiBHG. W. Horndeski, Phys. Rev. D17, 391 (1978). YagiK. Yagi, N. Yunes and T. Tanaka, Phys. Rev. Lett.109, 251105 (2012).cosmoA. De Felice, L. Heisenberg, R. Kase, S. Mukohyama,S. Tsujikawa and Y. l. Zhang,JCAP1606, 048 (2016).screeningA. De Felice, L. Heisenberg, R. Kase, S. Tsujikawa,Y. l. Zhang and G. B. Zhao,Phys. Rev. D93, 104016 (2016). | http://arxiv.org/abs/1705.09662v2 | {
"authors": [
"Lavinia Heisenberg",
"Ryotaro Kase",
"Masato Minamitsuji",
"Shinji Tsujikawa"
],
"categories": [
"gr-qc",
"astro-ph.CO",
"hep-th"
],
"primary_category": "gr-qc",
"published": "20170526180001",
"title": "Hairy black-hole solutions in generalized Proca theories"
} |
-60pt 115pt 60pt -38pt 5pt=Γ=ω=σ Effective Temperatures and Radiation Spectra fora Higher-Dimensional Schwarzschild-de-SitterBlack-Hole P. Kanti andT. Pappas Division of Theoretical Physics, Department of Physics,University of Ioannina, Ioannina GR-45110, Greece AbstractThe absence of a true thermodynamical equilibrium for an observer located in the causal area of a Schwarzschild-de Sitter spacetime has repeatedly raisedthe question of the correct definition of its temperature. In this work, we consider five different temperatures for a higher-dimensionalSchwarzschild-de Sitter black hole: the bare T_0, the normalised T_BH and three effective ones given in terms of both the black hole and cosmological horizon temperatures. We find that these five temperatures exhibit similarities but also significant differences in their behaviour as the number of extra dimensions and the value of the cosmological constant are varied. We then investigate theireffect on the energy emission spectra of Hawking radiation. We demonstrate that the radiation spectra for the normalised temperature T_BH– proposed by Bousso and Hawking over twenty years ago – leads to the dominant emission curve while the other temperatures either support a significant emission rate only at a specific Λ regime or they have their emission rates globally suppressed. Finally, we compute the bulk-over-brane emissivity ratio andshow that the use of different temperatures may lead to different conclusions regarding the brane or bulk dominance.§ INTRODUCTION The novel theories, that postulate the existence of additional spacelike dimensions in nature <cit.> with size much larger than the Planck length or even infinite, have in fact an almost 20-year life-time. During that period, several aspects of gravity, cosmology and particle physics have been reconsidered in the context of these higher-dimensional theories. Black hole solutions have been intensively studied since the existence of extra dimensions affects both their creation and decay processes (for more information on this, one may consult the reviews <cit.>-<cit.>). The presence of the brane(s) in the model with warped extra dimensions <cit.> has proven so far to be an unsurmountable obstacle for the construction of analytical solutions describing regular black holes. As a result, most of the study of the decay process of a higher-dimensional black hole has been restricted in the context of the model with large extra dimensions <cit.>, where the latter are assumed to be empty, and thus flat, and where the self-energy of the brane may be ignored compared to the black-hole mass.It is in the context of this theory that analytical expressions describing higher-dimensional black holes may be written, and theemission of particles, comprising the Hawking radiation <cit.>, may be studied in detail. Historically, the first solution describing a higher-dimensional, spherically-symmetric black hole appeared in the '60s, and is known as the Tangherlini solution <cit.>. The solution describes a higher-dimensional analogue of the Schwarzschild solutionof the General Theory of Relativity that is formed also in the presence of a cosmological constant. Therefore, this solution constitutes in fact an improvement of the assumption made in the context of the large extra dimensions scenario where the extra space is absolutely empty: here, the extra dimensions are filled with a constant distribution of energy, or with some field configuration that effectively acts as a constant distribution of energy. 
For a positive cosmological constant, the solution describes a higher-dimensional Schwarzschild-de Sitter black-hole spacetime.Although the emission of Hawking radiation from higher-dimensional, spherically-symmetric or rotating black holes has been extensively studied in the literature (for a partial only list see <cit.>-<cit.> or the aforementioned reviews <cit.>-<cit.>), the analyses focused on the higher-dimensional Schwarzschild-de Sitter black holes are only a few. The first such work <cit.> contained an analytic study of thegreybody factor for scalar fields propagating on the brane and in the bulk, and in addition provided exact numerical results for the radiation spectra in both emission channels. A subsequent analytic work <cit.> extended the aforementioned analysis by determining the next-to-leading-order term in the expansion of the greybody factor. An exact numerical study <cit.> then considered the emission of fields with arbitrary spin from a higher-dimensional Schwarzschild-de-Sitter black hole. A series of three, more recent works studied the case of a scalar field having a non-minimal coupling to the scalar curvature: the first <cit.> studied the case of a purely 4-dimensional Schwarzschild-de-Sitter black hole, the second <cit.> considered the scalar field propagating either in the higher-dimensional bulk or being restricted on a brane, and a third one <cit.> provided exact numerical results for the greybody factors and radiation spectra in the same theory. A few additional works<cit.> have also appeared that studied the greybody factors for fields propagating in the background of variants of a Schwarzschild-de-Sitter black hole.However, over the years, the question of what is the correct notion of the temperature of a Schwarzschild - de Sitter (SdS) spacetime has risen. This spacetime contains a black hole whose event horizon sets the lower boundary of the causally connected spacetime. But it also contains a positive cosmological constant that gives rise to a cosmological horizon, the upper boundary of the causal spacetime. An observer living at any point of this causal area is never in a true thermodynamical equilibrium - the two horizons have each one its own temperature, expressed in terms of their surface gravities <cit.>, and thus an incessant flow of thermal energy (from the hotter black-hole horizon to the colder cosmological one) takes place at every moment. In addition, the SdS spacetime lacks an asymptotically-flat limit where the black-hole parameters may be defined in a robust way. The latter problem was solved in <cit.> where anormalised black-hole temperature was proposed that made amends for the lack of an asymptotic limit. Then, assuming that the value of the cosmological constant is small and the two horizons are thus located far away from each other, one could formulate two independent thermodynamics.Despite the above, the question of what happens as the cosmological constant becomes larger and the two horizons come closer still persisted. It was this question that gave rise to the notion of theeffective temperature for an SdS spacetime<cit.>, namely one that implements both the black-hole and the cosmological horizon temperatures (for a review on this, see <cit.>). 
A number of additional works have appeared in the literature with similar or alternative approaches on the thermodynamics of de Sitter spacetimes <cit.>-<cit.>, however, the question of the appropriate expression of the SdS black-hole temperature still remains open.Up to now, no work has appeared in the literature that makes a comprehensive study of the different temperatures for an SdS spacetime and compare their predictions forthe corresponding Hawking radiation spectra. In fact, previous works that study the radiation spectra from a four-dimensional or higher-dimensional SdS black hole make use of either itsbare temperature T_0, based on its surface gravity, or the normalised one T_BH, at will. In the context of this work, we will perform such a comprehensive study, and we will derive and compare the derived radiation spectra. We will do so not only for the aforementioned two SdS black-hole temperatures but also for three additional effective temperatures for the SdS spacetime, namely T_eff-, T_eff+ and T_effBH– the use of one of the latter temperatures may be unavoidable for large values of the cosmological constant when the two horizons lie so close that the independent thermodynamics no longer hold. To address the above, we will also extend the regime of values of the cosmological constant that has been studied in the literature so far, and consider the entire allowed regime, from a very small value up to its maximum critical value <cit.>. To make our analysis as general as possible, we will consider a higher-dimensional SdS spacetime. We will then study the properties of the different temperatures both in terms of the value of the cosmological constant but also of the number of extra spacelike dimensions. The corresponding Hawking radiation spectra will then be produced for scalar fields, both minimally and non-minimally coupled to gravity, propagating either on our brane or in the bulk.As we will see, the different temperatures will lead to different energy emission rates for the black hole, each one with its own profile in terms of the bulk cosmological constant, number of extra dimensions and value of the non-minimal coupling constant. In addition, each temperature will lead to different conclusions regarding the dominance of the brane or of the bulk.The outline of our paper is as follows: in Section 2, we present the theoretical framework of our analysis, the gravitational background, the equations of motion for the scalar field as well as the different definitions of the temperature of an SdS spacetime. In Sections 3 and 4, we derive the energy emission rates for bulk and brane scalar fields, having a minimal or non-minimal coupling to gravity, respectively. In Section 5, we calculate the bulk-over-brane emissivity ratio and, in Section 6, we summarise our analysis and present our conclusions.§ THE THEORETICAL FRAMEWORK§.§ The Gravitational Background We will start by considering a higher-dimensional gravitational theory with D=4+n total number of dimensions. The action functional of the theory will also contain a positive cosmological constant Λ, and will therefore readS_D=∫ d^4+nx √(-G) (R_D/2 κ^2_D - Λ) .In the above, R_D is the higher-dimensional Ricci scalar and κ^2_D=1/M_*^2+n the higher-dimensional gravitational constant associated with the fundamental scale of gravity M_*. 
If we vary the above action with respect to the metric tensor G_MN, we obtain the Einstein field equations, which have the form R_MN-1/2 G_MN R_D = κ^2_D T_MN = -κ^2_D G_MN Λ, with the only contribution to the energy-momentum tensor T_MN coming from the bulk cosmological constant. The above set of equations admits a spherically-symmetric solution of the form <cit.> ds^2 = - h(r) dt^2 + dr^2/h(r) + r^2 dΩ_2+n^2, where dΩ_2+n^2 is the line-element of the (2+n)-dimensional unit sphere given by dΩ_2+n^2=dθ^2_n+1 + sin^2θ_n+1 (dθ_n^2 + sin^2θ_n ( ... + sin^2θ_2 (dθ_1^2 + sin^2θ_1 dφ^2) ... )), with 0 ≤φ < 2π and 0 ≤θ_i ≤π, for i=1, ..., n+1. The radial function h(r) is found to have the explicit form <cit.> h(r) = 1-μ/r^n+1 - 2 κ_D^2 Λ r^2/[(n+2)(n+3)]. The above gravitational background describes a (4+n)-dimensional Schwarzschild-de Sitter (SdS) spacetime, with the parameter μ related to the black-hole mass M through the relation <cit.> μ=κ^2_D M Γ[(n+3)/2]/[(n+2) π^(n+3)/2]. The horizons of the SdS black hole follow from the equation h(r)=0 – this has, in principle, (n+3) roots; however, not all of them are real and positive. In fact, the SdS spacetime may have two, one or zero horizons, depending on the values of the parameters M and Λ <cit.>. Here, we will ensure that the values of M and Λ are in the regime that supports the existence of two horizons, the black-hole horizon r_h and the cosmological one r_c, with r_h<r_c. However, the degenerate case that results in the Nariai limit <cit.>, in which the two horizons coincide, will also be investigated.
The higher-dimensional background (<ref>) is seen by gravitons and particles with no Standard-Model quantum numbers that may propagate in the bulk. All ordinary particles, however, are restricted to live on our 4-dimensional brane <cit.>, and therefore propagate on a different gravitational background. The latter follows by projecting the (4+n)-dimensional background (<ref>) onto the brane, and it is realised by fixing the values of the extra angular coordinates, θ_i=π/2, for i=2, ..., n+1. Then, we obtain the 4D line-element ds^2 = - h(r) dt^2 + dr^2/h(r) + r^2 (dθ^2 + sin^2θ dφ^2), with the metric function h(r) preserving its form, given by Eq. (<ref>), and thus its dependence on both the number of additional spacelike coordinates n and the value of the bulk cosmological constant Λ.
§.§ The Temperature of the Schwarzschild-de Sitter Black Hole
The temperature of a black hole is traditionally defined in terms of its surface gravity k_h at the location of the horizon <cit.>. The latter quantity is expressed as k_h^2 =-1/2 lim_r → r_h (D_M K_N)(D^M K^N), where D_M is the covariant derivative and K=γ_t ∂/∂ t is the timelike Killing vector, with γ_t a normalization constant. In the case where the gravitational background is spherically-symmetric, Eq. (<ref>) takes the simpler form <cit.> k_h=1/2 1/√(-g_tt g_rr) |g_tt,r|_r=r_h. When the above expression is employed for the line-element (<ref>) of a higher-dimensional Schwarzschild-de Sitter black hole, we obtain the following expression for its temperature <cit.> T_0 =k_h/2π=1/(4π r_h) [(n+1)-(n+3) Λ̃r_h^2], where we have defined, for convenience, the quantity Λ̃=2 κ^2_D Λ/[(n+2)(n+3)], and used the condition h(r_h)=0 to replace μ in terms of r_h and Λ̃.
The Schwarzschild-de Sitter spacetime is characterized, in the most generic case, by the presence of a second horizon, the cosmological horizon r_c.
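To make the above definitions concrete, the following short numerical sketch (an illustration written for this discussion, not code taken from the analysis of this paper) fixes the black-hole horizon at r_h=1, so that μ follows from h(r_h)=0, locates the cosmological horizon r_c by root-finding, and evaluates the bare temperature T_0. The parameter choices n=2, Λ̃=0.1 and κ_D^2=1 are illustrative assumptions only.

import numpy as np
from scipy.optimize import brentq

n = 2          # number of extra dimensions (assumed example value)
lam_t = 0.1    # Lambda-tilde = 2 kappa_D^2 Lambda / [(n+2)(n+3)], in units of r_h^(-2)
r_h = 1.0      # black-hole horizon radius, fixed to unity

# h(r_h) = 0 fixes the mass parameter mu for the chosen r_h and Lambda-tilde
mu = r_h**(n + 1) * (1.0 - lam_t * r_h**2)

def h(r):
    # Metric function h(r) = 1 - mu/r^(n+1) - Lambda-tilde * r^2
    return 1.0 - mu / r**(n + 1) - lam_t * r**2

# Global maximum of h(r) lies at r_0^(n+3) = (n+1) mu / (2 Lambda-tilde)
r_0 = ((n + 1) * mu / (2.0 * lam_t))**(1.0 / (n + 3))

# The cosmological horizon sits between r_0 (where h > 0 if two horizons exist)
# and the pure de Sitter radius 1/sqrt(Lambda-tilde) (where h < 0 for mu > 0)
r_c = brentq(h, r_0, 1.0 / np.sqrt(lam_t))

# Bare temperature T_0 = [(n+1) - (n+3) Lambda-tilde r_h^2] / (4 pi r_h)
T_0 = ((n + 1) - (n + 3) * lam_t * r_h**2) / (4.0 * np.pi * r_h)

print(f"mu = {mu:.4f},  r_c = {r_c:.4f},  T_0 = {T_0:.4f}")

Varying lam_t in this sketch towards its critical value, where r_h and r_c merge, can be used to trace the behaviour of T_0 with Λ discussed below.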
As a result, one may define another surface gravity k_c, this time at the location of r_c, and a temperature for the cosmological horizon <cit.>, namely <cit.> T_c = - k_c/2π=-1/(4π r_c) [(n+1)-(n+3) Λ̃r_c^2], where care has been taken so that T_c is positive-definite since r_h<r_c <cit.>. The presence of the second horizon with its own temperature makes the thermodynamics of the Schwarzschild-de Sitter spacetime significantly more complicated, as compared to the cases of either asymptotically Minkowski or Anti de Sitter spacetimes <cit.>. The two temperatures, T_0 and T_c, are in principle different, therefore an observer located at an arbitrary point of the causal region r_h<r<r_c is not in thermodynamical equilibrium. The usual approach adopted in the literature is to make the assumption that the two horizons are located far away and therefore each one can have its own independent thermodynamics <cit.> – this assumption, however, is valid only for small values of the cosmological constant and thus it imposes a constraint on all potential analyses.
In <cit.>, a modified expression for the temperature of the black hole was proposed, namely T_BH = 1/√(h(r_0)) 1/(4π r_h) [(n+1)-(n+3) Λ̃r_h^2], in which a normalization factor √(h(r_0)) was introduced involving the value of the metric function at its global maximum r_0. This point follows from the condition h'(r)=0 and is given by <cit.> r_0^n+3=(n+1) μ/(2Λ̃). There, the metric function assumes the value h(r_0)=1-μ/r_0^n+1 -Λ̃r_0^2=1/(n+1) [(n+1) - (n+3) Λ̃r_0^2]. The above is the maximum value that the metric function attains as it interpolates between the two zeros at the two horizons. The point r_0 is the closest the Schwarzschild-de Sitter spacetime comes to an asymptotically flat region: it is here that the effects of the black-hole and cosmological horizons cancel out and an observer can stay at rest <cit.>. Mathematically, the normalization factor √(h(r_0)) appears from the normalization of the Killing vector, K_M K^M=-1: this condition is satisfied in asymptotically flat spacetime for γ_t=1 but, at r=r_0, this factor should be γ_t=1/√(h(r_0)).
Including this normalisation factor in Eq. (<ref>) is a step forward in defining the black-hole temperature in a non-asymptotically flat spacetime; however, this factor significantly modifies the properties of T_0. In Fig. <ref>(a,b), we depict the dependence of the two temperatures, T_0 and T_BH, on the cosmological constant, for two values of the number of extra dimensions, n=2 and n=5. For low n, as Λ increases, T_0 monotonically decreases, in accordance to Eq. (<ref>), whereas T_BH predominantly increases - the latter is caused by the variation in the value of h(r_0) that, in most of the allowed Λ regime, causes an enhancement in T_BH. For large values of n, the monotonic decrease of T_0 remains unaffected while the increase of T_BH holds only for the lower range of values of Λ. Even in this case, the value of T_BH is constantly larger than that of T_0 (see, also, <cit.> for a similar comparison and conclusions). The two temperatures match only in the limit Λ→ 0 when they reduce to the temperature of a higher-dimensional Schwarzschild black hole. A radically different behaviour appears in the opposite limit, the Nariai or extremal limit <cit.>: as Λ approaches its maximum allowed value, the two horizons approach each other and eventually coincide, with r_h=r_c. In that limit, the combination inside the square brackets in Eq.
(<ref>), and thus T_0 itself, vanishes[Although, for arbitrary n, this is very difficult to prove analytically, for special values of n we may easily confirm it: for n=0, the Nariai limit is reached when M^2 Λ=1/9 and then r_h^2=1/Λ; for n=1, the two horizons coincide when μΛ̃=1/4 and then r_h^2=1/(2Λ̃). In both cases, we may easily see that Eq. (<ref>) vanishes. For higher values of n, the vanishing of Eq. (<ref>) may be easily confirmed numerically.], a feature that is clearly shown in Fig. <ref>. On the contrary, in the critical limit, T_BH assumes an asymptotic constant value; this is caused by the fact that its numerator and denominator both tend to zero, with their ratio approaching a constant number.
In Fig. <ref>, we show the dependence of T_0 and T_BH on the number of extra dimensions n, for two different fixed values of the cosmological constant, Λ=0.1 and Λ=0.8 (we have set for simplicity κ_D^2=1, therefore Λ is given in units of r_h^-2). We observe again that the `normalised' temperature T_BH remains always larger than the `bare' one T_0; however, this dominance gets softer as n increases, and almost disappears for small values of Λ.
The temperature of a black hole is one of the important factors that determine the Hawking radiation emission spectra. Only a handful of works exist in the literature that study the emission of Hawking radiation from a Schwarzschild-de Sitter black hole, either 4-dimensional or higher-dimensional, and these use both definitions of its temperature, Eq. (<ref>) <cit.> or Eq. (<ref>) <cit.>, at will. In addition, during recent years, the notion of the effective temperature of the Schwarzschild-de Sitter spacetime has emerged, which involves both temperatures T_0 and T_c, in an attempt to unify the thermodynamical description of this spacetime. In the most popular of the analyses, a thermodynamical first law for a Schwarzschild-de Sitter black hole is written in which the black-hole mass plays the role of the enthalpy of the system (M=-H), the cosmological constant that of the pressure (P=Λ/8π), while the entropy is the sum of the entropies of the two horizons (S=S_h+S_c) <cit.>. In this picture, an effective temperature emerges that has the form T_eff-=(1/T_c-1/T_0)^-1=T_0 T_c/(T_0-T_c). The above expression was obtained for the case of a 4-dimensional Schwarzschild-de Sitter black hole. However, the arguments leading to the formulation of the aforementioned first thermodynamical law had no explicit dependence on the dimensionality of spacetime. Therefore, we expect that the functional form of the effective temperature T_eff- for the case of a (4+n)-dimensional Schwarzschild-de Sitter black hole will still be given by Eq. (<ref>), but with the individual temperatures T_0 and T_c now assuming their higher-dimensional forms, Eqs. (<ref>) and (<ref>). Then, the explicit form of T_eff- in D=4+n dimensions will be the following: T_eff-=-1/4π [(n+1)^2 -(n+1)(n+3) Λ̃(r_h^2+r_c^2) +(n+3)^2 Λ̃^2 r_h^2 r_c^2] / {(r_h +r_c) [(n+1) -(n+3) Λ̃r_h r_c]}. In the limit r_h → 0, the above expression for the effective temperature reduces to that of the cosmological horizon T_c, as expected. However, the limit r_c →∞ (or, equivalently, Λ̃→ 0) leads to a vanishing result: the effective temperature does not interpolate between the black-hole temperature T_0 and the cosmological one T_c, as one may have expected; in fact, the limit Λ→ 0 is a particular one since it is equivalent to a vanishing pressure of the system, which in the relevant analyses is always assumed to be positive.
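To compare these definitions concretely, the sketch below (again an illustration, not the code behind the figures of this work) evaluates T_0, T_c, the normalised temperature T_BH and the effective temperature T_eff- for a few assumed values of Λ̃, with n=2, r_h=1 and κ_D^2=1. It also makes visible the limiting behaviour noted above: as Λ̃ is lowered, T_0 and T_BH approach the Schwarzschild value (n+1)/(4π r_h), while T_eff- is dragged to zero together with T_c.

import numpy as np
from scipy.optimize import brentq

def sds_temperatures(n, lam_t, r_h=1.0):
    mu = r_h**(n + 1) * (1.0 - lam_t * r_h**2)               # from h(r_h) = 0
    h = lambda r: 1.0 - mu / r**(n + 1) - lam_t * r**2
    r_0 = ((n + 1) * mu / (2.0 * lam_t))**(1.0 / (n + 3))    # maximum of h(r)
    r_c = brentq(h, r_0, 1.0 / np.sqrt(lam_t))               # cosmological horizon

    T0 = ((n + 1) - (n + 3) * lam_t * r_h**2) / (4 * np.pi * r_h)    # bare
    Tc = -((n + 1) - (n + 3) * lam_t * r_c**2) / (4 * np.pi * r_c)   # cosmological horizon
    Tbh = T0 / np.sqrt(h(r_0))                                       # normalised (Bousso-Hawking)
    Teff_minus = T0 * Tc / (T0 - Tc)                                 # effective, with S = S_h + S_c
    return T0, Tc, Tbh, Teff_minus

for lam_t in (0.3, 0.1, 0.01, 0.001):                        # assumed example values
    T0, Tc, Tbh, Tm = sds_temperatures(n=2, lam_t=lam_t)
    print(f"lam_t = {lam_t:6.3f}:  T_0 = {T0:.4f}  T_c = {Tc:.5f}  "
          f"T_BH = {Tbh:.4f}  T_eff- = {Tm:.5f}")

The remaining two effective temperatures, introduced next, are built from the same ingredients by the obvious replacements (a relative sign in the combination, or T_0 → T_BH).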
That is, by construction, T_eff- is valid for non-vanishing cosmological constant - but this is exactly the regime where the need for an effective temperature really emerges since, in the limit of small Λ, the horizons r_h and r_c are located so far away from each other that the independent thermodynamics at the two horizons do indeed hold.In Figs. <ref> and <ref>, we depict also the behaviour of T_eff- in terms of the value of the cosmological constant Λ and the number of extra dimensions n, respectively. The effective temperature T_eff- is an increasing function of Λ, and, similarly to the case of the normalised temperature T_BH, it assumes a non-vanishing constant value at the critical limit - as in the case of T_BH, the numerator and denominator of Eq. (<ref>) both go to zero with their ratio tending to a constant number. On the contrary, T_eff- is a decreasing function of the number of extra dimensions n.The effective temperature T_eff- was found to exhibit some unphysical properties, especially in the case of charged de Sitter black holes where the aforementioned expression may take on negative values or exhibit infinite jumps at the critical point. For this reason, in <cit.> (see also <cit.>) a new expression for the effective temperature of a Schwarzschild-de Sitter spacetime was proposed, namely the following T_eff+=(1/T_c+1/T_0)^-1=T_0 T_c/T_0 + T_c .The above proposal was characterised as an `ad hoc' one, that would follow from an analysis similar to that leading to T_eff- in which the entropy of the system would be the difference of the entropies of the two horizons, i.e. S=S_c-S_h, instead of their sum.In the higher-dimensional case, the aforementioned alternative effective temperature has the explicit form T_eff+=1/4π (n+1)^2 -(n+1)(n+3) Λ̃(r_h^2+r_c^2) +(n+3)^2 Λ̃^2 r_h^2 r_c^2/(r_h -r_c) [(n+1) +(n+3) Λ̃r_h r_c] .In the limit r_h → 0, T_eff+ reduces again to T_c. When Λ→ 0, it also exhibits the same behaviour as T_eff- by going to zero. However, near the critical point, T_eff+ has a distinct behaviour as it vanishes instead of taking a constant value. This is in accordance with Eq. (<ref>) where the numerator clearly approaches zero faster than the denominator. It is perhaps the vanishing of T_eff+ near the critical point that helps to avoid the infinite jumps and makes this alternative effective temperature more physically acceptable. The complete behaviour of T_eff+ in terms of the cosmological constant isdepicted in Fig. <ref>; its decreasing behaviour in terms of n is also shown in Fig. <ref>. Inspired by the above analysis, here we propose a third, alternative form for the effective temperature of a Schwarzschild-de Sitter spacetime. Its functional form is the following T_effBH=(1/T_c-1/T_BH)^-1=T_BH T_c/T_BH - T_c ,and it matches the one of T_eff-, but with the normalised black-hole temperature T_BH in the place of the bare one T_0. Our proposal may be considered as an equally `ad hoc' one compared to that of (<ref>); however, T_effBH would follow from exactly the same analysis that gave rise to T_eff- (with S=S_h+S_c) with the only differencebeing the consideration that the `correct' black-hole temperature, due to the absence of asymptotic flatness, is T_BH instead of T_0. 
Its explicit form in a spacetime with D=4+n dimensions is T_effBH=-1/4π [(n+1)^2 -(n+1)(n+3) Λ̃(r_h^2+r_c^2) +(n+3)^2 Λ̃^2 r_h^2 r_c^2] / {(r_h √(h(r_0)) +r_c) [(n+1) -(n+3) Λ̃r_h r_c]}. The above definition shares many characteristics with the effective temperature T_eff-: it also reduces to T_c when r_h → 0 and it vanishes in the limit Λ→ 0. But it also exhibits the same attractive behaviour near the critical point as T_eff+ by going to zero; this is due to the fact that, as we approach the critical point, T_BH in Eq. (<ref>) is a constant while T_c vanishes. The complete profile of T_effBH as a function of the cosmological constant is depicted in Fig. <ref>, while its similar behaviour in terms of n, compared to the other effective temperatures, is shown in Fig. <ref>. Observing Fig. <ref>, it is interesting to note that T_effBH matches T_eff- over an extended low Λ-regime, and then coincides with T_0 in the high Λ-regime[One may wonder whether an alternative effective temperature could be defined along the lines of Eq. (<ref>) but with a normalised temperature for the cosmological horizon too, i.e. T_cBH=T_c/√(h(r_0)). As one may see, such a temperature would have a similar behaviour to T_eff- in the small Λ-regime but would have an ill-defined behaviour near the critical point, where it diverges.].
§ HAWKING RADIATION FOR MINIMALLY-COUPLED SCALAR FIELDS
In the previous section, we examined in detail the characteristics of two temperatures for the Schwarzschild-de Sitter black hole, the bare T_0 and the normalised one T_BH, as well as three effective temperatures for the Schwarzschild-de Sitter spacetime to which the SdS black hole belongs, namely T_eff-, T_eff+ and T_effBH. In this section, we proceed to derive and compare the radiation spectra for scalar fields emitted by the SdS black hole, for each one of the aforementioned five temperatures. Our analysis will focus on the higher-dimensional case and will present radiation spectra for scalar fields emitted both on the brane and in the bulk. To this end, we also need the greybody factors for brane and bulk scalar fields propagating in the SdS background. These have been derived analytically, in the limit of small cosmological constant, in <cit.> and numerically, for arbitrary values of Λ, in <cit.>. Since here we are interested in deriving the form of the spectra for the complete range of Λ, we will use the exact results derived in <cit.>. For the sake of completeness, we will briefly review the method for calculating the scalar greybody factors in an SdS spacetime - for more information, interested readers may look in <cit.>.
We will start from the emission of scalar fields on the brane. The equation of motion of a free, massless scalar field minimally-coupled to gravity and propagating in the brane background (<ref>) has the form 1/√(-g) ∂_μ(√(-g) g^μν∂_νΦ)=0. If we assume a factorized ansatz for the field, i.e. Φ(t,r,θ,φ)= e^-iω t R(r) Y(θ,φ), where Y(θ,φ) are the usual scalar spherical harmonics, we obtain a radial equation for the function R(r) of the form 1/r^2 d/dr(hr^2 dR/dr) + [ω^2/h -l(l+1)/r^2] R=0. As was shown in <cit.>, in the near-horizon regime, the above equation takes the form of a hypergeometric equation. Its solution, when expanded in the limit r → r_h, takes the form of an ingoing free wave, namely R_BH≃ A_1 f^α_1 = A_1 e^-i(ω r_h/A_h) ln f, where A(r)=(n+1)-(n+3) Λ̃r^2 and A_h=A(r=r_h).
Also, f is a new radial variable defined through the relation r → f(r) = h(r)/(1- Λ̃r^2). For simplicity, we may appropriately choose the arbitrary constant A_1 so that R_BH(r_h)=1. The above expression serves as a boundary condition for the numerical integration of Eq. (<ref>). The second boundary condition comes from the near-horizon value of the first derivative of the radial function (<ref>), for which we obtain <cit.> dR_BH/dr|_r_h≃ -i ω/h(r). Near the cosmological horizon, the radial equation (<ref>) takes again the form of a hypergeometric differential equation whose general solution, in the limit r → r_c and f → 0, is written as <cit.> R_C ≃ B_1 e^-i(ω r_c/A_c) ln f + B_2 e^i(ω r_c/A_c) ln f. In the above, A_c=A(r=r_c), and B_1,2 are the amplitudes of the ingoing and outgoing free waves. Then, the greybody factor, or equivalently the transmission probability, for the scalar field is given by |A|^2=1-|B_2/B_1|^2. The B_1,2 amplitudes are found by numerically integrating Eq. (<ref>), starting close to the black-hole horizon, i.e. from r=r_h+ϵ, where ϵ=10^-6-10^-4, and proceeding towards the cosmological horizon (again, for more information on this, see <cit.>). The exact numerical analysis demonstrated that, for a minimally-coupled, massless scalar field propagating on the brane, the greybody factor is enhanced over the whole energy regime as the cosmological constant Λ increases.
Having at our disposal the exact values of the greybody factor |A|^2, we may now proceed to derive the differential energy emission rate for brane scalars. This is given by the expression <cit.> d^2E/dt dω=1/2π ∑_l N_l |A|^2 ω/[exp(ω/T)-1], where ω is the energy of the emitted particle, and N_l=2l+1 the multiplicity of states that, due to the spherical symmetry, have the same angular-momentum number <cit.>. Also, T is the temperature of the black hole - this will be taken to be equal to T_0, T_BH, T_eff-, T_eff+ and T_effBH, respectively, in order to derive the corresponding radiation spectra. As was demonstrated in <cit.>, the dominant modes of the scalar field are the ones with the lowest values of l - in fact, all modes higher than l = 7 have negligible contributions to the total emission rate.
In Fig. <ref>, we depict the differential energy emission rates for a higher-dimensional Schwarzschild-de Sitter black hole for the case of n=2 and for four different values of the bulk cosmological constant (Λ=0.8, 2, 4, 5). For the first, small value of Λ, all the effective temperatures have an almost vanishing value, therefore the corresponding spectra are significantly suppressed; it is the two black-hole temperatures, T_0 and T_BH, that lead to significant emission rates, with the latter dominating over the former, in accordance to the behaviour presented in Fig. <ref>. As Λ increases to the value of 2, the effective temperatures, and their corresponding spectra, start becoming important; at the same time, the emission spectrum for the bare temperature T_0 is suppressed whereas the one for the normalised T_BH is enhanced.
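The numerical procedure just outlined can be illustrated with the short script below. It is a simplified reimplementation written for this presentation - not the code used to produce the figures of this work - and it treats only the brane-localised, minimally-coupled mode: the radial equation is integrated from r_h+ϵ towards r_c with the boundary conditions quoted above, the solution is matched to the two free waves near the cosmological horizon, and |A|^2=1-|B_2/B_1|^2 is returned. The values n=2, Λ̃=0.1, l=0, the offsets ϵ and the integration tolerances are assumptions made for the example.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

n, lam_t, r_h, l = 2, 0.1, 1.0, 0              # assumed example parameters (kappa_D^2 = 1)
mu = r_h**(n + 1) * (1.0 - lam_t * r_h**2)     # from h(r_h) = 0

h  = lambda r: 1.0 - mu / r**(n + 1) - lam_t * r**2
hp = lambda r: (n + 1) * mu / r**(n + 2) - 2.0 * lam_t * r       # h'(r)
A  = lambda r: (n + 1) - (n + 3) * lam_t * r**2

r_0 = ((n + 1) * mu / (2.0 * lam_t))**(1.0 / (n + 3))
r_c = brentq(h, r_0, 1.0 / np.sqrt(lam_t))                       # cosmological horizon

def greybody(omega, eps=1e-4):
    # Radial equation rewritten as R' = S/(h r^2),  S' = -[omega^2 r^2/h - l(l+1)] R
    def rhs(r, y):
        R, S = y
        return [S / (h(r) * r**2), -(omega**2 * r**2 / h(r) - l * (l + 1)) * R]

    # Boundary conditions at r = r_h + eps:  R = 1,  dR/dr = -i omega/h  =>  S = -i omega r^2
    r_start, r_match = r_h + eps, r_c - eps
    y0 = np.array([1.0, -1j * omega * r_start**2], dtype=complex)
    sol = solve_ivp(rhs, (r_start, r_match), y0, rtol=1e-9, atol=1e-12)
    R, S = sol.y[0, -1], sol.y[1, -1]
    Rp = S / (h(r_match) * r_match**2)

    # Match to R_C = B1 e^{-ik ln f} + B2 e^{+ik ln f}, with k = omega r_c / A_c
    k = omega * r_c / A(r_c)
    dlnf_dr = hp(r_match) / h(r_match) + 2 * lam_t * r_match / (1 - lam_t * r_match**2)
    W = Rp / (1j * k * dlnf_dr)                   # equals -B1 e^{-ik ln f} + B2 e^{+ik ln f}
    B1_wave, B2_wave = (R - W) / 2.0, (R + W) / 2.0   # the phase factors have unit modulus
    return 1.0 - abs(B2_wave / B1_wave)**2        # |A|^2 = 1 - |B2/B1|^2

for omega in (0.2, 0.5, 1.0, 2.0):
    print(f"omega = {omega:4.1f}   |A|^2 = {greybody(omega):.4f}")

With the offsets pushed towards ϵ=10^-6 and tighter tolerances, the l=0 result at small ω should approach the non-vanishing asymptotic value discussed next; higher partial waves and the bulk equation are handled in exactly the same way.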
For Λ=4 and 5, finally, the radiation spectrum for T_BH is further enhanced while the one for T_eff- has also become important - it is these two temperatures that tend to a constant, non-vanishing value at the critical limit; on the contrary, all three remaining temperatures, T_0, T_eff+ and T_effBH, tend to zero, thus causing a suppression to the corresponding spectra.
Let us also note that the traditional shape of the energy emission curves – starting from zero and reaching a maximum value before vanishing again – is severely distorted. The presence of the cosmological constant leads to a non-vanishing asymptotic value of the greybody factor in the limit ω→ 0 <cit.>, given by |A|^2=4 r_h^2 r_c^2/(r_c^2+r_h^2)^2+O(ω). The above holds for the case of minimally-coupled, massless scalar fields propagating in the brane background, and leads to a significant emission rate of extremely soft, low-energetic particles – this feature is evident in all plots of Fig. <ref>. In addition, when the temperature employed has a small value, like the effective temperatures in the low and intermediate Λ-regime or T_0, T_eff+ and T_effBH near the critical limit, the emission curve never reaches a maximum at an energy larger than zero; rather, it exhibits only the `tail', and monotonically decreases towards zero. The case of an even higher-dimensional Schwarzschild-de Sitter black hole with n=5 is shown in Fig. <ref>. A similar behaviour to the one presented in the case of n=2 is also observed here: for low values of Λ, the radiation spectra for all effective temperatures are suppressed; as Λ increases, they get moderately enhanced, while for large values of Λ only the one for T_eff- takes up significant values. The radiation spectrum for the bare temperature T_0 starts at its highest values for small Λ and is constantly suppressed as the value of the cosmological constant increases. The radiation spectrum for the normalised black-hole temperature T_BH is the one that dominates over the whole Λ-regime – even in the high Λ-regime, where T_BH is suppressed with Λ according to Fig. <ref>(b), the compensating enhancement of the greybody factor <cit.> causes the overall increase of the differential energy emission rate.
Let us also study the emission of scalar fields from a higher-dimensional Schwarzschild-de Sitter black hole in the bulk. The equation of motion of a free, massless field propagating in the bulk is also given by the covariant equation (<ref>), but with the projected metric tensor g_μν of Eq. (<ref>) being replaced by the higher-dimensional one G_MN given in Eq. (<ref>). Assuming again a factorized form Φ(t,r,θ_i,φ) = e^-iω t R(r) Ỹ(θ_i,φ), where Ỹ(θ_i,φ) are the hyperspherical harmonics <cit.>, we obtain the following radial equation <cit.> 1/r^n+2 d/dr(hr^n+2 dR/dr) + [ω^2/h -l(l+n+1)/r^2] R=0. The above differential equation may again be solved analytically for small Λ <cit.> but, for the purpose of comparing the radiation spectra over the entire Λ regime, we turn again to numerical integration. This has been performed in <cit.> by following an analysis similar to the one for brane scalar fields. The asymptotic solutions of Eq. (<ref>) near the black-hole and cosmological horizons take similar forms to the brane ones, with their expanded forms (<ref>) and (<ref>) being identical. The same boundary conditions (<ref>)-(<ref>) were used for the numerical integration from the black-hole to the cosmological horizon.
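Whether for the brane or the bulk channel, the spectra shown in the figures are assembled from the corresponding greybody factors in the same way; the snippet below sketches this final step for the brane case, with N_l = 2l+1 (the bulk case differs only in the multiplicity factor of the hyperspherical harmonics). It is illustrative only: the constant stand-in greybody, set to the low-energy asymptotic value quoted above, is used purely so that the example runs on its own, and the temperature values are assumed numbers rather than ones read off the figures. In an actual computation, the mode-by-mode |A_l|^2(ω) - such as those produced by the previous sketch - would be supplied instead.

import numpy as np

def emission_rate(omega, T, greybody, l_max=7):
    # d^2E/dtdω = (1/2π) Σ_l (2l+1) |A_l|^2 ω / [exp(ω/T) - 1], summed up to l = l_max
    partial_sum = sum((2 * l + 1) * greybody(omega, l) for l in range(l_max + 1))
    return partial_sum * omega / (2.0 * np.pi) / np.expm1(omega / T)

r_h, r_c = 1.0, 3.1                                    # assumed horizon radii (illustration)
flat_A2 = 4 * r_h**2 * r_c**2 / (r_h**2 + r_c**2)**2   # crude stand-in greybody value
toy_greybody = lambda omega, l: flat_A2

temperatures = {"T_0": 0.20, "T_BH": 0.28, "T_eff-": 0.02}   # assumed example values
for name, T in temperatures.items():
    print(f"{name:6s}:  rate(omega=0.1) = {emission_rate(0.1, T, toy_greybody):.4f}   "
          f"rate(omega=1.0) = {emission_rate(1.0, T, toy_greybody):.4f}")

Note that with a frequency-independent greybody the rate is a monotonically decreasing function of ω - precisely the `tail-only' shape of the emission curves described above for the low-temperature cases.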
The exact value of the greybody factor for bulk scalar fields, for arbitrary values of the particle and spacetime parameters, was again derived via Eq. (<ref>), and found to be an increasing function of the bulk cosmological constant.In Fig. <ref>, we display the differential energy emission rates for bulk scalar fields emitted by a 6-dimensional (n=2) SdS black hole, and for four different values of the cosmological constant. Similarly to the behaviour observed in the case of brane emission, the radiation spectrum for the normalised temperature T_BH is the one that dominates and gets enhanced as Λ increases, under the combined effect of the temperature and greybody profiles. The spectrum for the bare temperature T_0, starting from significant values for low Λ, is again monotonically suppressed as Λ increases approaching its maximum critical value. The spectra for all effective temperatures start from extremely low values and only the one for T_eff- manages to reach non-negligible values – this takes place only near the critical limit where T_eff- acquires a constant value. If we allow for a larger value of the number of extra dimensions, i.e. n=5, the general behaviour of the emission curves remains the same, as can be seen from the plots in Fig. <ref>, drawn for four different values of the cosmological constant. Here, the additional suppression of all effective temperatures with n keeps even more the corresponding radiation spectra at low values. Also in the bulk, all emission curves tend to a non-vanishing value in the limit ω→ 0. This is again due to the non-zero asymptotic value of the greybody factor at the very low-energy regime. However, this value for bulk emission is <cit.>|A^2|= 4 (r_hr_c)^(n+2)/(r_c^n+2+r_h^n+2)^2+O(ω) .The above expression is suppressed with the number of extra dimensions n and this is the reason why this feature is more difficult to discern in the 9-dimensional emission curves of Fig. <ref> compared to the 6-dimensional ones ofFig. <ref>– it is nevertheless visible in the zoom-in plotsthat have been added in Fig. <ref>. The numerical analysis performed in the context of the present work serves not only as a comparison of the radiation emission curves when different expressions for the temperature of the SdS spacetime are used, but also as an extension to the previous results obtained in <cit.> where the normalised temperature T_BH was employed. There, exact results for the radiation spectra were produced but the range of values of the cosmological constant was much more restricted, i.e. Λ∈ [0.01, 0.3],therefore, the regime of large values of Λ, including the critical limit, was never studied. Here, we have performed a thorough analysis of the Λ regime for all different temperatures, and thus we have the complete picture of how the corresponding radiation spectra behave as a function of the value of the cosmological constant. Overall, after having performed both the brane and the bulk analysis, we may conclude that it is the black-hole temperatures T_0 and T_BH that lead to Hawking radiation emission curves with the typical shape, i.e. start from a low value at the low-energy regime, rise to a maximum height and then slowly die out at the high-energy regime. In fact, even the T_0 spectrum loses this typical shape as Λ increases. Of the effective temperatures, only T_eff- manages to mimic this behaviour, and does so only close to the critical limit. If we focus on the most typical radiation spectra, i.e. 
the ones derived for the normalised temperature T_BH, we could comment on some additional features that emerge from the more thorough study, in terms of the Λ-regime, performed in the present work. Our current results have confirmed the enhancement of the corresponding radiation spectra in terms of both the number of extra dimensions n and the value of the cosmological constant, as found in <cit.>. As Λ increases, the non-zero asymptotic value of each curve in the limit ω→ 0 is enhanced, thus increasing the probability of the emission of very low-energetic particles. In addition, for large values of n, as Λ increases, all emission curves, for brane and bulk propagation alike, show a significant shift of the peak of the curves towards the lower part of the spectrum. Therefore, we may conclude that the presence of a cosmological constant gives a significant boost to both low and intermediate-energy free, massless scalar particles, and it does so more effectively the larger the number of extra dimensions is. Finally, in <cit.> it was found that, for Λ in the regime [0.01, 0.3], the brane emission channel for free, massless scalar fields is always dominant compared to the bulk channel. Here, we observe that for larger values of Λ the situation is radically changed: even for small values of n, i.e. n=2, the comparison of the vertical axes of the plots of Figs. <ref> and <ref> reveals that the bulk emission curve has surpassed, by a factor of two, the brane one, for values of Λ larger than 4. As the dimensionality of spacetime increases, the bulk dominance becomes more important: for n=5, the comparison of the vertical axes of the plots of Figs. <ref> and <ref> now tells us that the bulk dominates over the brane for values of Λ > 10, i.e. for more than half the allowed regime of values of the cosmological constant, by a factor that ranges between 3 and 20.
§ HAWKING RADIATION SPECTRA FOR NON-MINIMALLY-COUPLED SCALAR FIELDS
In this section, we will consider the case of scalar particles propagating either on the brane or in the bulk and having a non-minimal coupling to gravity. This coupling is realised through a quadratic function ξΦ^2, where ξ is a constant, multiplying the appropriate scalar curvature (with the value ξ=0 corresponding to the minimal coupling). The reason for studying such a theory is two-fold: first, the presence of the non-minimal coupling acts as an effective mass term for the scalar field, and therefore the effect of the mass on the radiation spectra may be studied; second, for large values of the coupling constant ξ, it was found that the enhancement of the radiation spectra with the cosmological constant – which, for the normalised temperature T_BH, was also evident in the results of the previous section – changes to a suppression in the low-energy and intermediate-energy regimes <cit.>. It would thus be interesting to see what the effect of the non-minimal coupling would be on the radiation spectra over the complete Λ regime and for different temperatures.
For a scalar field propagating in the bulk, its higher-dimensional action would read S_Φ=-1/2 ∫ d^4+nx √(-G) [ξΦ^2 R_D +∂_M Φ ∂^M Φ], where G_MN is again the higher-dimensional metric tensor defined in Eq. (<ref>), and R_D the corresponding curvature given by the expression R_D=2(n+4) κ^2_D Λ/(n+2), in terms of the bulk cosmological constant.
The equation of motion of the bulk scalar field now reads 1/√(-G) ∂_M(√(-G) G^MN∂_N Φ) = ξ R_D Φ, or, more explicitly, 1/r^n+2 d/dr(hr^n+2 dR/dr) + [ω^2/h -l(l+n+1)/r^2-ξ R_D] R=0. In the above, we have decoupled the radial part of the equation by considering the same factorized ansatz, namely Φ(t,r,θ_i,φ) = e^-iω t R(r) Ỹ(θ_i,φ), as in the previous section. The action functional for a scalar field propagating on the brane background and having also a quadratic, non-minimal coupling to the scalar curvature will have a form similar to Eq. (<ref>). However, now the metric tensor G_MN will be replaced by the projected-on-the-brane one g_μν given in Eq. (<ref>), and the higher-dimensional Ricci scalar R_D by the four-dimensional one R_4, which is found to be <cit.> R_4=24 κ_D^2 Λ/[(n+2)(n+3)] + n(n-1) μ/r^n+3. The equation for the radial part of the brane-localised, non-minimally coupled scalar field then follows from Eq. (<ref>) by setting n=0 and replacing R_D with R_4, and reads 1/r^2 d/dr(hr^2 dR/dr) + [ω^2/h -l(l+1)/r^2-ξ R_4] R=0.
Both equations (<ref>) and (<ref>) were solved analytically in <cit.> and numerically in <cit.>. As is clear from both equations, the non-minimal coupling term acts as an effective mass term; therefore, any increase in the coupling function ξ causes a suppression to the radiation spectra, in accordance to previous studies of massive scalar fields <cit.>. In addition, in <cit.>, it was found that, as ξ exceeds the value of approximately 0.3, any increase in the value of the cosmological constant causes a suppression in the low and intermediate part of the spectrum. In the light of the above, here we will consider a value for the non-minimal coupling constant well beyond that critical value, namely we will choose ξ=1. We will also study the complete Λ-regime and compute the radiation spectra for all five temperatures, T_0, T_BH, T_eff-, T_eff+, and T_effBH. We will use again the exact numerical results for the brane and bulk greybody factors, which follow from an analysis identical to that in the minimal-coupling case – although the coupling constant ξ modifies the form of the effective potentials that the brane and bulk scalar fields have to overcome to reach infinity <cit.>, it has no effect at the asymptotic regimes of the two horizons; therefore, the asymptotic solutions (<ref>) and (<ref>) as well as the boundary conditions (<ref>)-(<ref>) remain the same.
Starting from the emission of non-minimally-coupled scalar fields on the brane, in Fig. <ref> we depict the differential energy emission rates for a 6-dimensional SdS black hole, and for the values Λ=2, 2.8, 4 and 5 of the bulk cosmological constant. We first note that, in the presence of ξ, the emission curves have returned to their typical shape: as was found in <cit.>, and confirmed also here, the non-minimal coupling destroys the non-zero asymptotic value of the scalar greybody factor in the low-energy limit; as a result, all emission curves emanate from zero at the low-energy regime. Moreover, the larger the value of ξ, the later in terms of ω the emission curves rise above the zero value, in accordance to the effect that the mass of the scalar particle has on the spectra <cit.>. Also, by comparing the vertical axes of Figs.
<ref>(b,c) and <ref>(a,c), respectively, we observe that the radiation spectra in the non-minimal case are indeed significantly suppressed, in accordance to the previous discussion. This suppression is due to the fact that the greybody factors for both brane and bulk scalar fields decrease with any increase in the non-minimal coupling constant ξ, and is therefore common to the radiation spectra for the different temperatures. As a result, the inclusion of the non-minimal coupling does not modify the general picture drawn in the previous section. However, some of the radiation spectra are more sensitive to the changes brought by the presence of the non-minimal coupling. For example, in Fig. <ref>, drawn for the minimal-coupling case, we observe that, for the three effective temperatures and T_0, the maxima of all emission curves are located at the very low-energy limit; the relatively small magnitude of these temperatures, compared to that of T_BH, combined with the enhanced value of the greybody factor for ultra-soft particles, makes the emission of low-energetic particles much more favourable for the black hole. When the non-minimal coupling is introduced, the emission of soft particles becomes disfavoured and the radiation spectra for the aforementioned four temperatures are significantly suppressed. The radiation spectrum for the normalised temperature T_BH is also suppressed; however, its relatively large value also allows for the significant emission of higher-energetic particles, and these are not significantly affected by the non-minimal coupling. As a result, the relative enhancement of the T_BH radiation spectrum compared to the remaining ones is extended by the non-minimal coupling. As the critical limit is approached, only the T_eff- spectrum manages again to reach comparable values, due to its asymptotic, non-zero value in that regime.
A similar behaviour is also observed in the case where the number of extra dimensions takes larger values. We have performed the same analysis for n=5, and found that all emission curves for non-minimally coupled brane scalar fields return again to their typical shape and thus have the emission of low-energy particles suppressed. For small values of Λ, and due to the enhancement with n that characterizes both T_0 and T_BH (see Fig. <ref>), the difference in the corresponding two radiation spectra is smaller compared to the case with n=2; as Λ increases, however, the T_0 radiation spectrum is constantly suppressed, reaching a negligible value at the critical limit. Of the effective temperatures, only T_eff- manages to support a relatively significant spectrum, and that is realised very close to the critical limit.
We now turn to the case of non-minimally-coupled scalar fields emitted in the bulk. The radiation spectra for the different temperatures and for the case with n=2 are now depicted in Fig. <ref>, again for the value ξ=1 and for the same four values of the cosmological constant. A similar picture emerges also here: the T_0 radiation spectrum is significant only in the low Λ regime, the T_eff- one becomes important near the critical limit, while the other two radiation spectra, for T_eff+ and T_effBH, fail to acquire any significant value in any Λ regime. The radiation spectrum for T_BH is the one that dominates over the whole energy regime and for the entire Λ range.
The same behaviour is also observed for n=5. Let us finally note that the dominance of the bulk emission channel in the large Λ regime <cit.> is confirmed also in the case of non-minimal coupling, and even for models with a small number of extra dimensions. As the comparison of the vertical axes of Figs. <ref> and <ref> reveals, the differential energy emission rate in the bulk exceeds that on the brane as soon as Λ becomes approximately larger than 3, and stays dominant for the remaining half of the allowed range.
§ BULK-OVER-BRANE RELATIVE EMISSIVITIES
A final question that we would like to address in this section is that of the effect of the different temperatures on the total emissivities in the bulk and on the brane, and more particularly on the bulk-over-brane emissivity ratio. In our previous work <cit.>, we calculated the total power emitted by the SdS black hole over the whole frequency range in both the brane and bulk channels, by employing the Bousso-Hawking T_BH normalization for the temperature. Here, we generalise this analysis to cover all five temperatures T_0, T_BH, T_eff-, T_eff+ and T_effBH, and compare the corresponding results. We also extend our previous study by considering the whole range of values for the bulk cosmological constant, from a vanishing value up to its critical limit.
The quantity of interest, namely the ratio of the total power emitted in the bulk over the corresponding total power on the brane, for the case with n=2 and for four different values of the coupling constant ξ, i.e. ξ=0, 0.5, 1, 2, is presented in Tables 1 through 4. The five columns of each Table give the total ratio for five values of the cosmological constant that span the entire allowed range, i.e. Λ=0.3, 1, 2, 4, 5. Let us first see how the change in the value of Λ affects our results. For small values of Λ, and independently of the value of ξ, the brane emission channel clearly dominates over the bulk one; however, as Λ increases, the bulk emission channel gradually becomes more and more important. This is due to the fact that, for an increasing cosmological constant, the bulk emission curves move to the right, thus allowing for the emission of a larger number of high-energetic particles compared to the brane, while the maximum height of the bulk curves also soon surpasses that of the brane curves by a factor of 3. For the T_BH and T_eff- temperatures, which retain a significant value near the critical limit, the bulk-over-brane ratio well exceeds unity, thus rendering the bulk channel the dominant one in the emission process of the black hole - the tendency of T_BH to overturn the power ratio in favour of the bulk channel was already anticipated by the results of <cit.>. The only exception to the above behaviour is the one exhibited by the bare temperature T_0: the enhancement of the bulk-over-brane ratio with Λ is observed only in the case of minimal coupling, whereas this ratio decreases for all values ξ≠ 0 as Λ increases towards its critical value. We may interpret this as the result of the disappearance of the low-energy modes as soon as the coupling constant ξ takes a non-vanishing value: the emission curves for T_0 have their maxima in the low-energy regime and are thus mostly affected when these are banned from the emission spectrum – according to our results, this change affects the bulk channel more than the brane one, causing the suppression of the bulk-over-brane ratio.
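The last step behind the numbers collected in the Tables - the integration of the brane and bulk spectra over the whole frequency range - can be sketched as follows. The two spectra are represented here by placeholder arrays with merely indicative shapes; in the actual computation they would be the mode-summed differential emission rates obtained as in the previous sections.

import numpy as np
from scipy.integrate import trapezoid

omega = np.linspace(1e-3, 6.0, 2000)                 # frequency grid (illustrative range)

# Placeholder spectra standing in for the computed d^2E/dtdω curves
spectrum_brane = omega**2 * np.exp(-omega / 0.40)
spectrum_bulk  = 2.0 * omega**2 * np.exp(-omega / 0.35)

power_brane = trapezoid(spectrum_brane, omega)       # total power emitted on the brane
power_bulk  = trapezoid(spectrum_bulk,  omega)       # total power emitted in the bulk

print(f"bulk-over-brane ratio = {power_bulk / power_brane:.3f}")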
If we now turn our attention to the role of the non-minimal coupling constant ξ in the value of the bulk-over-brane ratio, we find that the overall behaviour is a suppression of this quantity as ξ increases. This behaviour holds for almost all values of the cosmological constant, apart from the lower part of its allowed regime where, in contrast, the bulk-over-brane ratio exhibits an enhancement without, however, exceeding unity. On the other hand, despite the suppression with ξ, the bulk-over-brane ratio retains values above unity when Λ tends to its critical limit.
When we increase the number of extra dimensions, all the above effects become amplified. In Tables 5 through 8, we display the value of the bulk-over-brane ratio for the case with n=5, for the same four values of the non-minimal coupling constant ξ and for five indicative values of the bulk cosmological constant, i.e. Λ=1, 4, 10, 13 and 18, which again span the entire allowed regime. The dominance of the bulk channel over the brane one for T_BH and T_eff-, as Λ approaches its critical limit, is now much more prominent, with the overall energy emitted in the bulk surpassing that emitted on the brane by a factor even larger than 10. The suppression of the energy ratio as ξ increases is also obvious here, but again this suppression does not prevent the bulk from becoming the dominant channel at the critical limit. What is different from the n=2 case is that the enhancement with ξ for small values of the cosmological constant, noted also in the case with n=2, is now adequate to cause the dominance of the bulk channel over the brane one for the bare T_0 and normalised T_BH temperatures - for the latter temperature, this effect was also observed in <cit.>.
§ CONCLUSIONS
Over the years, the study of the thermodynamics of the Schwarzschild-de Sitter spacetime has proven to be a challenging task. The existence of two different horizons, the black-hole and the cosmological one – each with its own temperature expressed in terms of its surface gravity – results in the absence of a true thermodynamical equilibrium. On the other hand, the absence of an asymptotically-flat limit led to the formulation of a normalised temperature for the black hole <cit.> more than two decades ago. Both problems become more severe in the limit of a large cosmological constant, when the two horizons are located so close to each other that the argument of the two independent thermodynamics, valid at the two horizons, comes into question. As a result, the notion of the effective temperature of the SdS spacetime was proposed <cit.>, which implements both the black-hole and the cosmological horizon temperatures.
In the context of the present work, we have focused on the case of the higher-dimensional Schwarzschild-de Sitter black hole, and have formed a set of five different temperatures: the bare black-hole temperature T_0, based on its surface gravity, the normalised black-hole temperature T_BH, and three effective temperatures for the SdS spacetime, T_eff-, T_eff+ and T_effBH – the latter three are inspired by four-dimensional analyses, where the cosmological constant plays the role of the pressure of the system, and are combinations of the black-hole and cosmological horizon temperatures. We have first studied the dependence of the aforementioned temperatures on the value of the cosmological constant, as this is varied from zero to its maximum allowed value, set by the critical limit where the two horizons coincide.
In the limit of vanishing cosmological constant, the black-hole temperatures T_0 and T_BH reduce to the temperature of an asymptotically-flat, higher-dimensional Schwarzschild black hole, as expected; on the other hand, all three effective temperatures tend to zero, an artificial behaviour due to the fact that Λ (or, equivalently, the pressure of the system) is not allowed to vanish. In the opposite limit, that of the critical value, it is the normalised T_BH and effective T_eff- temperatures that have a common behaviour, reaching a non-vanishing asymptotic value; the other three temperatures all vanish in the same limit. We then examined the dependence of the temperatures on the number of extra dimensions. Here, the five temperatures were found to fall again into two categories: the black-hole temperatures T_0 and T_BH are both enhanced with n, while all effective temperatures are predominantly suppressed. Overall, the normalised T_BH temperature was found to be the dominant one for all values of Λ and n.
The set of five temperatures was then used to derive the Hawking radiation spectra for a free, massless scalar field propagating both on the brane and in the bulk. We considered the cases where the number of extra dimensions had a small (n=2) and a large (n=5) value: in each case, we chose four different values for the cosmological constant that covered the allowed regime from zero to the critical value. For both brane and bulk radiation spectra, the emission curves closely followed the behaviour of the temperatures: for small Λ, the emission curves for all effective temperatures were significantly suppressed, while the ones for the black-hole temperatures were the dominant ones. As Λ increased, the emission rate for the bare T_0 started to become suppressed while the one for the effective T_eff- started to become important. Near the critical limit, it is the two temperatures with non-vanishing values, T_BH and T_eff-, that lead to the dominant emission curves. It is worth noting that the two effective temperatures T_eff+ and T_effBH support a non-negligible emission rate only for intermediate values of the cosmological constant, where they favour the emission of very low-energetic scalar particles. The emission rate for the normalised temperature T_BH is the one that constantly rises as Λ gradually increases, being clearly the dominant one: for n=2, the peak of the emission curve on the brane for T_BH rises to a height that is 2 times larger than that for T_0 in the low Λ-regime and 5 times larger than that for T_eff- in the high Λ-regime; these factors increase even more as n increases, or when we study the bulk emission channel.
For the case of a minimally-coupled scalar field, all emission curves were found to have non-zero asymptotic values in the very low-energy part of the spectrum, due to the well-known behaviour of the greybody factor both on the brane and in the bulk. As a result, a significant number of soft particles are expected to be emitted; in fact, for the three effective temperatures T_eff-, T_eff+, T_effBH (for small values of Λ) and for T_0 (for large values of Λ), this is where the peak of the emission curves is located. When the non-minimal coupling to the scalar curvature is turned on, the emission curves for all five temperatures resume their usual shape. The general behaviour regarding the comparative strength of the emission curves for the different temperatures, observed in the case of the minimal coupling, also holds here.
The emission curve for the normalised temperature T_BH is again the dominant one over the entire Λ-regime, with only the emission curves for T_0 and T_eff- reaching significant values at low and large values of Λ, respectively.The exact analysis performed in the context of this work serves not only as a comparison of the radiation spectra, that follow by using differenttemperatures for the Schwarzschild-de Sitter spacetime, but also as a source of information regarding their behaviour as the cosmological constantvaries from a very small value to the largest allowed one at the critical limit. The complete radiation spectra reveal that as Λ increases, the emission of energy from the black hole along the brane and bulk channels very quickly become comparable, and even for low values of the number of extra dimensions, the bulk emission eventually dominates over the brane one. The exact total emissivities that were calculated in Section 5 demonstrated exactly this effect: apart from the case of T_0 when ξ≠ 0, the bulk-over-brane ratio exhibits a significant enhancement as Λ increases and, in fact, renders the bulk channel the dominant emission channel of the SdS black hole for the temperatures T_BH and T_eff-, i.e. for the temperatures that retain a non-vanishing value near the critical limit. In addition, when the number of extra dimensions is large enough, the bulk was found to dominate over the brane even for values of Λ much lower than its critical limit as long as the value of the non-minimal coupling constant ξ was large enough; in this case, the bulk dominance was obtained also for the bare temperature T_0. In conclusion, choosing a particular form for the temperature of an SdS black hole, i.e. the bare, the normalised or an effective one, plays a paramount rolein the form of the obtained radiation spectra. Some of the suggested temperatures fail even to produce a significant emission rate, others lead to an emission only for very small or very large values of the bulk cosmological constant. Our results clearly reveal that the normalised temperature T_BH, the one that makes amends for the absence of an asymptotically-flat limit in a Scwarzschild-de Sitter spacetime, is the one that produces the most robust radiation spectra over the entire regime of the bulk cosmological constant. Acknowledgement T.P. would like to thank the Alexander S. Onassis Public Benefit Foundation for financial support. 99ADD N. Arkani-Hamed, S. Dimopoulos and G. R. Dvali,Phys. Lett. B429, 263 (1998); Phys. Rev. D59, 086004 (1999); I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. R. Dvali,Phys. Lett. B436, 257 (1998). RS L. Randall and R. Sundrum, Phys. Rev. Lett. 83 (1999) 3370; Phys. Rev. Lett. 83 (1999) 4690.Kanti:2004 P. Kanti,Int. J. Mod. Phys. A19, 4899–4951 (2004). Cavaglia M. Cavaglia,Int. J. Mod. Phys. A18 (2003) 1843. Landsberg:2003br G. L. Landsberg,Eur. Phys. J. C33, S927–S931 (2004). Majumdar:2005ba A. S. Majumdar and N. Mukherjee,Int. J. Mod. Phys. D14, 1095–1129 (2005). Park S. C. Park,Prog. Part. Nucl. Phys.67, 617–650 (2012). Webber B. Webber,eConf C0507252, T030 (2005) [hep-ph/0511128]]. Casanova A. Casanova and E. Spallucci,Class. Quant. Grav.23, R45–R62 (2006).Kanti:2008eq P. Kanti,Lect. Notes Phys.769, 387–423 (2009).Kanti:2012jh P. Kanti,Rom. J. Phys.57, 879–893 (2012).Winstanley:2007hj E. Winstanley,arXiv:0708.2656 [hep-th] (2007).Kanti:2009sz P. Kanti, J. Phys. Conf. Ser.189, 012020 (2009). PKEW P. Kanti and E. Winstanley,arXiv:1402.3952 [hep-th]. Hawking S. W. Hawking,Commun. Math. 
Phys.43, 199–220 (1975)Tangherlini F. R. Tangherlini,Nuovo Cim.27 (1963) 636. KMR P. Kanti and J. March-Russell,Phys. Rev. D66, 024023 (2002);Phys. Rev. D67, 104019 (2003). HK1 C. M. Harris and P. Kanti, JHEP 0310, 014 (2003). graviton-schw A. S. Cornell, W. Naylor and M. Sasaki,JHEP0602, 012 (2006);V. Cardoso, M. Cavaglia and L. Gualtieri,Phys. Rev. Lett.96, 071301 (2006); JHEP 0602, 021 (2006); S. Creek, O. Efthimiou, P. Kanti and K. Tamvakis, Phys. Lett. B635, 39 (2006).DHKW C. M. Harris and P. Kanti,Phys. Lett. B633 (2006) 106;G. Duffy, C. Harris, P. Kanti and E. Winstanley, JHEP0509, 049 (2005).CKW M. Casals, P. Kanti and E. Winstanley, JHEP 0602, 051 (2006). CDKW1 M. Casals, S. Dolan, P. Kanti and E. Winstanley, JHEP 0703, 019 (2007); JHEP 0806, 071 (2008). IOP D. Ida, K. y. Oda and S. C. Park, Phys. Rev. D67, 064025 (2003) [Erratum-ibid. D69, 049901 (2004)];Phys. Rev.D71, 124039 (2005);Phys. Rev. D73, 124022 (2006). CEKT S. Creek, O. Efthimiou, P. Kanti and K. Tamvakis,Phys. Rev. D75 (2007) 084043;Phys. Rev. D76 (2007) 104013;Phys. Lett. B656, 102 (2007).rot-other V. P. Frolov and D. Stojkovic,Phys. Rev. D67, 084004 (2003);H. Nomura, S. Yoshida, M. Tanabe and K. i. Maeda, Prog. Theor. Phys.114, 707 (2005); E. Jung and D. K. Park,Nucl. Phys. B 731, 171 (2005); Mod. Phys. Lett. A 22, 1635 (2007);S. Chen, B. Wang, R. K. Su and W. Y. Hwang,JHEP 0803, 019 (2008). graviton-rot H. Kodama,Prog. Theor. Phys. Suppl.172, 11 (2008); Lect. Notes Phys.769, 427 (2009);J. Doukas, H. T. Cho, A. S. Cornell and W. Naylor, Phys. Rev. D80 (2009) 045021; P. Kanti, H. Kodama, R. A. Konoplya, N. Pappas and A. Zhidenko,Phys. Rev. D80 (2009) 084016.FST A. Flachi, M. Sasaki and T. Tanaka,JHEP 0905 (2009) 031.CDKW3M. Casals, S. R. Dolan, P. Kanti and E. Winstanley, Phys. Lett. B680 (2009) 365. Stojkovic-ang D. C. Dai and D. Stojkovic, JHEP 1008 (2010) 016.Sampaio-ang M. O. P. Sampaio,JHEP 1203 (2012) 066.GBK J. Grain, A. Barrau and P. Kanti, Phys. Rev. D72 (2005) 104016.FS1 V. P. Frolov and D. Stojkovic,Phys. Rev. D66, 084002 (2002); Phys. Rev. Lett.89, 151302 (2002);D. Stojkovic,Phys. Rev. Lett.94, 011603 (2005). tense-brane D. C. Dai, N. Kaloper, G. D. Starkman and D. Stojkovic,Phys. Rev.D75, 024043 (2007);T. Kobayashi, M. Nozawa, Y. Takamizu,Phys. Rev.D77, 044022 (2008).JorgeR. Jorge, E. S. de Oliveira and J. V. Rocha,Class. Quant. Grav.32, no. 6, 065008 (2015). DongR. Dong and D. Stojkovic,Phys. Rev. D92, no. 8, 084045 (2015). Panotopoulos G. Panotopoulos and A. Rincon,arXiv:1611.06233 [hep-th]. Miao Y. G. Miao and Z. M. Xu,arXiv:1704.07086 [hep-th]. KGB P. Kanti, J. Grain and A. Barrau,Phys. Rev. D71 (2005) 104002. Harmark T. Harmark, J. Natario and R. Schiappa,Adv. Theor. Math. Phys.14 (2010) 727. WuS. F. Wu, S. y. Yin, G. H. Yang and P. M. Zhang,Phys. Rev. D78, 084010 (2008). Crispino L. C. B. Crispino, A. Higuchi, E. S. Oliveira and J. V. Rocha,Phys. Rev. D87 (2013) 10,104034. KPP1 P. Kanti, T. Pappas and N. Pappas,Phys. Rev. D90, no. 12, 124077 (2014). KPP3 T. Pappas, P. Kanti and N. Pappas,Phys. Rev. D94 (2016) no.2,024035. AndersonP. R. Anderson, A. Fabbri and R. Balbinot,Phys. Rev. D91, no. 6, 064061 (2015). SporeaC. A. Sporea and A. Borowiec,Int. J. Mod. Phys. D25, no. 04, 1650043 (2016). Ahmed J. Ahmed and K. Saifullah,arXiv:1610.06104 [gr-qc]. Fernando S. Fernando,arXiv:1611.05337 [gr-qc]. Boonserm P. Boonserm, T. Ngampitipan and P. Wongjun,arXiv:1705.03278 [gr-qc]. GH1 G. W. Gibbons and S. W. Hawking,Phys. Rev. D15 (1977) 2738. GH2 G. W. Gibbons and S. W. Hawking,Phys. Rev. D15 (1977) 2752. 
BH R. Bousso and S. W. Hawking,Phys. Rev. D54 (1996) 6312. Shankar S. Shankaranarayanan,Phys. Rev. D67 (2003) 084026. Urano M. Urano, A. Tomimatsu and H. Saida,Class. Quant. Grav.26 (2009) 105010. Lahiri S. Bhattacharya and A. Lahiri,Eur. Phys. J. C73 (2013) 2673. Bhatta S. Bhattacharya,Eur. Phys. J. C76 (2016) no.3,112. LiMa H. F. Li, M. S. Ma and Y. Q. Ma,Mod. Phys. Lett. A32 (2017) no.02,1750017 [arXiv:1605.08225 [hep-th]]. Mann_review D. Kubiznak, R. B. Mann and M. Teo,Class. Quant. Grav.34 (2017) no.6,063001.Romans L. J. Romans,Nucl. Phys. B383 (1992) 395. Traschen D. Kastor and J. H. Traschen,Phys. Rev. D47 (1993) 5370. Cai R. G. Cai,Phys. Lett. B525 (2002) 331; Nucl. Phys. B628 (2002) 375. Ghezelbash:2001vs A. M. Ghezelbash and R. B. Mann,JHEP0201 (2002) 005. Sekiwa Y. Sekiwa,Phys. Rev. D73 (2006) 084009. Cvetic M. Cvetic, G. W. Gibbons and C. N. Pope,Phys. Rev. Lett.106 (2011) 121301. Dolan B. P. Dolan, D. Kastor, D. Kubiznak, R. B. Mann and J. Traschen,Phys. Rev. D87 (2013) no.10,104017. Ma1 M. S. Ma, H. H. Zhao, L. C. Zhang and R. Zhao,Int. J. Mod. Phys. A29 (2014) 1450050. Zhao H. H. Zhao, L. C. Zhang, M. S. Ma and R. Zhao,Phys. Rev. D90 (2014) no.6,064018. Zhang L. C. Zhang, M. S. Ma, H. H. Zhao and R. Zhao,Eur. Phys. J. C74 (2014) no.9,3052. Ma2 M. S. Ma, L. C. Zhang, H. H. Zhao and R. Zhao,Adv. High Energy Phys.2015 (2015) 134815 doi:10.1155/2015/134815 [arXiv:1410.5950 [gr-qc]]. Guo X. Guo, H. Li, L. Zhang and R. Zhao,Phys. Rev. D91 (2015) no.8,084009;Guo:2016iqn X. Guo, H. Li, L. Zhang and R. Zhao,Adv. High Energy Phys.2016 (2016) 7831054. Araujo A. Araujo and J. G. Pereira,Int. J. Mod. Phys. D24 (2015) no.14,1550099. Kubiznak D. Kubiznak and F. Simovic,Class. Quant. Grav.33 (2016) no.24,245001. McInerney J. McInerney, G. Satishchandran and J. Traschen,Class. Quant. Grav.33 (2016) no.10,105007. Li H. F. Li, M. S. Ma, L. C. Zhang and R. Zhao,Nucl. Phys. B920 (2017) 211.Liu H. Liu and X. h. Meng,arXiv:1611.03604 [gr-qc]. Pourhassan B. Pourhassan, S. Upadhyay and H. Farahani,arXiv:1701.08650 [physics.gen-ph]. Nariai H. Nariai, Sci. Rep. Tohoku Univ., I., 35 (1951) 62.MP R. C. Myers and M. J. Perry,Annals Phys.172, 304 (1986). Molina C. Molina,Phys. Rev. D68 (2003) 064007. York J. W. York, Jr.,Phys. Rev. D31 (1985) 775.Labbe J. Labbe, A. Barrau and J. Grain, PoS HEP2005 (2006) 013 [hep-ph/0511211]. BanderM. Bander and C. Itzykson,Rev. Mod. Phys.38, 330 (1966). Muller C. Muller, inLecture Notes in Mathematics: Spherical Harmonics (Springer-Verlag, Berlin-Heidelberg, 1966).Page D. N. Page, Phys. Rev.D16 (1977) 2402. Jung E. l. Jung, S. H. Kim and D. K. Park,Phys. Lett. B586 (2004) 390;JHEP 0409 (2004) 005;Phys. Lett. B602 (2004) 105. Sampaio M. O. P. Sampaio, JHEP0910 (2009) 008;JHEP1002 (2010) 042. KNP1 P. Kanti and N. Pappas,Phys. Rev. D82 (2010) 024039. | http://arxiv.org/abs/1705.09108v2 | {
"authors": [
"Panagiota Kanti",
"Thomas Pappas"
],
"categories": [
"hep-th",
"astro-ph.CO",
"gr-qc",
"hep-ph"
],
"primary_category": "hep-th",
"published": "20170525094848",
"title": "Effective Temperatures and Radiation Spectra for a Higher-Dimensional Schwarzschild-de-Sitter Black-Hole"
} |
Identifying the underlying models in a set of data points contaminated by noise and outliers, leads to a highly complex multi-model fitting problem. This problem can be posed as a clustering problem by the projection of higher order affinities between data points into a graph, which can then be clustered using spectral clustering. Calculating all possible higher order affinities is computationally expensive. Hence in most cases only a subset is used. In this paper, we propose an effective sampling method to obtain a highly accurate approximation of the full graph required to solve multi-structural model fitting problems in computer vision. The proposed method is based on the observation that the usefulness of a graph for segmentation improves as the distribution of hypotheses (used to build the graph) approaches the distribution of actual parameters for the given data. In this paper, we approximate this actual parameter distribution using a k-th order statistics based cost function and the samples are generated using a greedy algorithm coupled with a data sub-sampling strategy.The experimental analysis shows that the proposed method is both accurate and computationally efficient compared to the state-of-the-art robust multi-model fitting techniques. The code is publicly available from https://github.com/RuwanT/model-fitting-cbs. Model-fitting , Spectral clustering , Data segmentation , motion segmentation , Hyper-graph Effective Sampling: Fast Segmentation Using Robust Geometric Model Fitting Ruwan Tennakoon, Alireza Sadri, Reza Hoseinnezhad, and Alireza Bab-Hadiashar, Senior Member, IEEE R.B. Tennakoon, A. Sadri, R. Hoseinnezhad and A. Bab-Hadiashar are with the School of Engineering, RMIT University, Melbourne, Australia.E-mail: [email protected] 30, 2023 ================================================================================================================================================================================================================================================================================================================= § INTRODUCTIONRobust fitting of geometric models to data contaminated with both noise and outliers is a well studied problem with many applications in computer vision <cit.>. Visual data often contain multiple underlying structures and there are pseudo-outliers (measurements representing structured other than the structure of interest <cit.>) as well as gross-outliers (produced by errors in the data generation process). Fitting models to this combination of data involves solving a highly complex multi-model fitting problem. The above multi-model fitting problem can be viewed as a combination of two sub problems: data labeling and model estimation. Although solving one of the sub-problems, when the solution to the other is given, is straightforward, solving both problems simultaneously remains a challenge.Traditional approaches to multi-model fitting were based on fit and remove strategy: apply a high breakdown robust estimator (e.g. RANSAC <cit.>, least k-th order residual) to generate a model estimate and remove its inliers to prevent the estimator from converging to the same structure again. However, this approach is not optimal as errors made in the initial stages tend to make the subsequent steps unreliable (e.g. small structures can be absorbed by models that are created by accidental alignment of outliers with several structures) <cit.>. To address this issue, energy minimization methods have been proposed. 
They are based on optimizing a cost function consisting of a combination of data fidelity and model complexity (number of model instances) terms <cit.>. In this approach, the cost function is optimized to simultaneously recover the number of structures and their data association. Commonly such cost functions are optimized using discrete optimization methods (metric labeling <cit.>). They start form a large number of proposal hypotheses and gradually converge to the true models. The outcome of those methods depends on the appropriate balance between the two terms in the cost function (controlled by an input parameter) as well as the quality of initial hypotheses. The method proposed in this paper is primarily designed to avoid the use of parameters that are difficult to tune.Sensitivity to the parameters included for the summation of terms with different dimensions is also an issue associated with the application of several other subspace learning and clustering methods. For instance, Robust-PCA <cit.> splits the data matrix into a low-rank matrix and a sparse error matrix. The aim is to minimize the cost function (which is a norm of the error matrix) while it is regularized by a rank of representation matrix. In factorization methods such as <cit.> the low-rank representation is obtained by learning a dictionary and coefficients for each data point. The effect of regularization is included using a parameter. These parameters often depend on noise scales, complexity of structures and even depend on the number of underlying structures and their data points. As such, these variables vary between data-sets and therefore limits the application of those methods.Another approach to multi-model fitting is to pose the problem as a clustering problem <cit.> <cit.>. In this approach, the idea is that a pure sample (members of the same structure) of the observed data from a cluster can be represented by a linear combination of other data points from the same cluster. Then the relations of all points to each sampled subset can encode the relations between data points. For example Sparse Subspace Clustering SSC <cit.> tries to find a sparse block-diagonal matrix that relates data points in each cluster. The optimization task in this work is to minimize the error as well as the L_1 norm of this latent sparse matrix. In contrast, the regularization term in LRR <cit.> uses nuclear norm of this sparse matrix. Our proposed method is computationally faster than these methods and does not need the parameter brought in both cases for the regularization. Recently <cit.> gave a deterministic analysis of LRR and suggested that the regularization parameter can be estimated by looking at the number of data points. Although this improves the speed and accuracy of those methods, it remains unclear what would happen when the number of data points is very high (similar to databases studied in this work). We should also note that methods such as LRSR <cit.> and CLUSTEN <cit.>, with more constraints for the regularization and therefore more parameters, have also been proposed. A similar strategy is also taken to solve the problem of Global Dimension Minimization in <cit.> which is used to estimate the fundamental matrix for the problem of two-view motion segmentation. The method is somewhat more accurate than LRR and SSC but it is computationally expensive.Another widely used clustering method is called Spectral Clustering <cit.>. 
The main idea is to search for possible relations between data points and form a graph that encodes the relations obtained by this search. Spectral clustering, based on eigen-analysis of a pairwise similarity graph, finds a partitioning of the similarity graph such that the data points between different clusters have very low similarities and the data points within a cluster have high similarities.A simple measure of similarity between a pair of points lying on a vector field is the euclidean distance. However, such measures based on just two points will not work when the problem is to identify data points that are explained by a known structure with multiple degrees of freedom. For instance, in a 2D line fitting problem, any two points will perfectly fit a line irrespective of their underlying structure, hence a similarity cannot be derived by just using two points. In such cases an effective similarity measure can be devised using higher order affinities (e.g. for a 2D line fitting problem least square error between three or more points will provide a suitable affinity measure indicating how well those points approximate a line <cit.>).There are several methods to represent higher order affinities using either a hyper-graph or a higher order tensor.Since spectral clustering cannot be applied directly to those higher order representations, they are commonly projected to a graph (discussed further in background). It is also known that the number of elements in a higher order affinity tensor (or number of edges in a hyper-graph) will increase exponentially with the order of the affinities (h), which is directly related to the complexity of the model (p). Hence, for complex models it would not be computationally feasible (in terms of memory utilization or computation time) to generate the full affinity tensor (or hyper-graph) even for a moderate size dataset. The commonly used method to overcome this problem is to use a sampled version of the full tensor (or hyper-graph) obtained using random sampling <cit.>, <cit.>. The information content of the projected graph heavily dependents on the quality of the samples used <cit.>, <cit.>, <cit.> and we analyze this behavior in background.In this paper, we propose an efficient sampling method called cost based sampling (CBS), to obtain a highly accurate approximation of the full graph required to solve multi-structural model fitting problems in computer vision. The proposed method is based on the observation that the usefulness of a graph for segmentation improves as the distribution of hypotheses (used to build the graph) approaches the actual parameter distribution for the given data. The approach is similar to the one proposed in <cit.> where Mixture of Gaussian is used to find the structures in the parameter space. The search is initialized by a few Gaussians and the parameters of the mixture is obtained through Expectation-Maximization steps. The grouping strategy is based on the above mentioned optimization approach and similarly involves the use of a regularization parameter that is difficult to tune. When the number of Gaussians is too low, which is to seek a few perfect samples, the noise cannot be characterized properly and some structures may be missed. Increasing the number of Gaussians is computationally expensive for the EM part. This is where our approach is most effective. 
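To make the 2D line example above concrete, the following sketch (Python/NumPy, written purely for illustration and not part of any cited implementation) shows why a pairwise measure is uninformative for line fitting and how a higher order affinity over a tuple becomes informative: a line is fitted to two sampled points and the affinity is driven by how well the remaining points of the tuple agree with it. The function names and the value of σ are our own choices.

```python
import numpy as np

def line_from_points(p, q):
    # Homogeneous line l = (a, b, c) through 2D points p and q, normalized so a^2 + b^2 = 1.
    l = np.cross(np.append(p, 1.0), np.append(q, 1.0))
    return l / np.linalg.norm(l[:2])

def higher_order_affinity(tuple_pts, sigma=0.05):
    # Fit a line to the first two points of the tuple, then measure how well
    # the remaining points agree with it; two points alone always fit perfectly.
    l = line_from_points(tuple_pts[0], tuple_pts[1])
    residuals = tuple_pts[2:] @ l[:2] + l[2]          # signed point-to-line distances
    return np.exp(-np.sum(residuals ** 2) / (2.0 * sigma ** 2))

collinear = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
bent = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
print(higher_order_affinity(collinear))  # close to 1: the triple supports a single line
print(higher_order_affinity(bent))       # close to 0: no single line explains the triple
```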
Our proposed method benefits from a fast greedy optimization method to generate many samples and makes use on the inherent robustness of Spectral Clustering for occasional samples that may not be perfect. The underlying assumption in this approach is that the parameter distribution can reveal the underlying structures and the generation of many good samples is the key to properly construct the distribution for successful clustering. This basic approach can be implemented with different choices of cost functions and optimization methods. The choice of the optimization method mostly determines the speed and the choice of the cost function affects the accuracy. For example, LBF <cit.> attempts to improve the generated samples of the cost function (chosen to be the β-number of the residuals of a model) by guiding the samples and increasing their size. Its optimization method is slower than our proposed method, which uses the derivatives of the cost function and the chosen cost function is very steep around the structures, which makes the initialization of the method very difficult and can lead to missing structures. The recipe to overcome these shortcomings is based on using extra constraint, such as spatial contiguity, to ensure the purity of samples before increasing their sizes. In this paper, we approximate this actual parameter distribution using the k-th order cost function, which in turn enables us to generate samples using a greedy algorithm that incorporates a faster optimization method. The advantage of the proposed method is that it only uses information present in data with respect to a putative model and does not require any additional assumptions such as spatial smoothness. The main contribution of this paper is the introduction of a fast and accurate data segmentation method based on effective combination of the accuracy of a new sampling method with the speed of a good clustering method. The paper presents a reformulation of these methods in way that it makes them complementary. The proposed sampler is ensured to visit all structures in data (by a high probability) and guide each sample to represent the closest structure. This is achieved by focusing on the distribution of putative models in parameter space and by providing samples with highest likelihoods from each structure. The choice of maximum likelihood method plays an important role in the speed of the sampler where the accuracy is still preserved. Furthermore, compared to other techniques, the proposed method incorporates less sensitive parameters that are difficult to tune. In particular, we compare the proposed method with ones using a scale parameter to combine two unrelated cost functions. Such a parameter is often data dependent and difficult to tune for a general solution. The rest of this paper is organized as follows. background discusses the use of clustering techniques for robust model fitting and the need for better sampling methods. method describes the proposed method in detail and experimentalanalysis presents experimental results involving real data, and comparisons with state-of-the-art model-fitting techniques. Additional discussion regarding the merits and shortcomings of the method is presented in Discussion followed by a conclusion in conclusion. § BACKGROUND Consider the problem of clustering data points X = [ x_i ]^N_i=1; x_i ∈ℝ^d assuming that there are underlying models (structures) Θ = [ θ^(j)]_j=1^m; θ^(j)∈ℝ^p that relate some of those points together. 
Here N is the number of data points and m is the number of structures in the dataset, with the zeroth structure assigned to outliers. Clustering a dataset in such a way that elements of the same group have higher similarity than elements in different groups is a well-studied problem with attractive solutions like spectral clustering. Spectral clustering operates on a pairwise undirected graph with an affinity matrix, G, that contains affinities between pairs of points in the dataset. As explained earlier, for model fitting applications, only affinities of higher than pairwise order reveal a useful similarity measure, and spectral clustering cannot be directly applied to higher order affinities. Agarwal <cit.> introduced an algorithm where the higher order affinities (in multi-structural multi-model fitting problems) were represented as a hyper-graph. They proposed a two-step approach to partition a hyper-graph with affinities of order h=p+1 (p is the number of parameters of the model). In the first step, the hyper-graph was approximated with a weighted graph using a clique averaging technique. The resulting graph was then segmented using spectral clustering. Constructing the hyper-graph with all possible p+1 edges is very expensive to implement. As such, they used a sampled version of the hyper-graph constructed by random sampling. Govindu <cit.> posed the same problem in a tensor theoretic approach where the higher order affinities were represented as an h-dimensional tensor 𝒫. Using the relationship between the higher order SVD (HOSVD) of the h-mode representation and the eigenvalue decomposition, <cit.> showed that the super-symmetric tensor 𝒫 (the similarity does not depend on the ordering of points in the h-tuple) can be decomposed into a pairwise affinity matrix using G = PP^⊤. Here P is the flattened matrix representation[The flattened matrix (P_d) along dimension d is a matrix with each column obtained by varying the index along dimension d while holding all other dimensions fixed.] of 𝒫 along any dimension. The size of the matrix P is still very large. For example, the size of P for a similarity tensor constructed using h-tuples from a dataset containing N data points is N × N^h-1. As with the hyper-graphs, to make the computation tractable, Govindu <cit.> suggested using a sampled version of the flattened matrix (H ≈ P). Each column of H was obtained using the residuals to a model (θ) estimated using h-1 randomly picked data points. In the remainder of the text we adopt this tensor theoretic approach. The sampling strategy used to construct the sample matrix H critically affects the clustering and thus the overall performance of the model fitting solution. §.§ Why distribution of sampling is important?
G_[N × N] = HH^⊤ = ∑_l=1^n_H H^(l) H^(l)⊤ = ∑_l=1^n_H G^(l), where H^(l) is the l-th column of H corresponding to the hypothesis θ_l, G^(l) = H^(l) H^(l)⊤ ∈ℝ^N × N is the contribution of hypothesis θ_l to the overall affinity matrix (G), and n_H is the total number of hypotheses. When a model hypothesis θ_l is close to an underlying structure in data (Hypothesis A in lineEx:subfig1), the inlier points of that structure would have relatively small residuals and the resulting G^(l) (lineEx:subfig2) would have high affinities between the inliers and low affinity values for all other point pairs (outlier-outlier, outlier-inlier). On the other hand, when a model hypothesis θ_l is far (in parameter space) from any underlying structure, the presumption is that the resulting residuals would be large, leading to G^(l)≈0_[N × N]. However, as seen in lineEx:subfig1 (for Hypothesis B), this is not always the case in model fitting. It is highly likely that there exist some data points that give small residuals even for such a hypothesis (far from any underlying model), leading to high H(i,l) values. The resulting G^(l) (lineEx:subfig3) would have high affinities between some unrelated points, which can be seen as noise in the overall graph. The effect of these bad hypotheses can be amplified by the fact that the normalization factor σ is often overestimated (using robust statistical methods) when the hypothesis θ_l is far (in parameter space) from any underlying structure. It is important to note that if none of the hypotheses (used in constructing the graph) are close to an underlying structure, then the overall graph would not have higher affinities between the data points in that structure and the clustering methods would not be able to segment that structure. The above example shows that the sampling process influences the level of noise in the graph. While spectral clustering can tolerate some level of noise, it has been proved that this noise level is related to the size of the smallest cluster we want to recover (the tolerable noise level goes up rapidly with the size of the smallest cluster) <cit.>. As model fitting often involves recovering small structures, it is highly important to limit the noise level in the affinity matrix. For any two data points x_i, x_j we can write: G(i,j) = 1/n_H ∑_l=1^n_H g_ij(θ_l) ≈∫ P_θ· g_ij(θ) dθ, where g_ij(θ_l) = e^-(r^2_θ_l(i) + r^2_θ_l(j))/2σ^2 and P_θ is the distribution from which the hypotheses are drawn. For any model fitting problem with p > 2 there exist an infinite number of models θ_l where g_ij(θ_l) → 1. This implies that for any two points, G(i,j) (according to gij) can be maximized or minimized by choosing P_θ accordingly. For a graph to have the block diagonal structure suitable for clustering, G(i,j) needs to be large for x_i ∧ x_j ∈θ_t and small otherwise. If hypotheses are selected from a Gaussian mixture distribution with sharp peaks around the underlying model parameters and low density elsewhere, with θ_t representing the true underlying structures, we have: P_θ = ∑_t=1^mϕ_t 𝒩(θ_t, Σ_t), and the edge weights approach the following values when Σ_t →0: G(i,j) →ϕ_t if x_i ∧ x_j ∈θ_t, and G(i,j) → 0 if x_i ∧ x_j ∉θ_t. This G results in a graph that has a block diagonal structure suitable for clustering. Of course, generating sample hypotheses from this distribution is not possible because it is unknown until the problem is solved. This point is further illustrated using a simple model fitting experiment on a synthetic dataset containing four lines. Each line contains 100 data points with additive Gaussian noise 𝒩(0, 0.02^2), while 50 gross outliers were also added to those lines.
First, 500 hypotheses were generated using uniform sampling, random sampling (using 5-tuples) and the sampling scheme proposed in this paper (CBS). These hypotheses were then used to generate the three graphs shown in examleGraph. As the data is arranged based on the structures membership, a properly constructed graph should show a block diagonal structure with high similarities between points in the same structure and low similarities for data from different structures.The figure shows that while the CBS method has resulted in a graph favorable for clustering the other two sampling strategies have produced graphs with little information. The corresponding hypothesis distributions (examleGraph (e-f)) show that only CBS has generated high amount of hypotheses closer to the underlying structure.Govindu <cit.> used randomly sampled h-1 (for affinities of order h) data points and calculated a column of H by computing the affinity from those to each point in the dataset. It is well known that the probability of obtaining a clean sample, leading to a hypothesis close to a true structure in data, decreases exponentially with the size of the tuple <cit.>. Hence it becomes increasingly unlikely to obtian a good graph for models with high number of parameters using random sampling. There are several techniques in the literature that try to tackle the clustering problem by tapping into available information regarding the likelihood distribution of good hypotheses. For instance, spectral curvature clustering <cit.>, which is an algorithm designed for affine subspace clustering, employs an iterative sampling mechanism that increases the chance of finding good hypotheses. In this scheme, a randomly chosen affinity matrix (H) is used to build a graph and partitions it using the spectral clustering method to generate an initial segmentation of the dataset. Data points within each segment of this clustering are then sampled to generate a new set of columns of H. This process is repeated several times to improve the final clustering results.Similarly, Ochs and Brox <cit.> used higher order affinities in a hyper-graph setting for motion segmentation of video sequences. In their method, the affinity matrix is obtained using a sampling strategy that is partly random and partly deterministic. The higher order affinities are based on 3-tuples generated by choosing two points randomly. The third points are then chosen as a mixture of 12 nearest neighbor points and 30 random 3rd points.The previous guided sampling approaches generate the columns of the affinity matrix using the minimal size tuples. Purkait <cit.> advocated the use of larger tuples and showed that if those tuples are selected correctly, the hypotheses distribution would be closer to the true model parameters compared to smaller tuples. However selecting larger all inlier (correct) tuples using random sampling is highly unlikely. Purkait <cit.> suggested to use Random Cluster Models (RCM) <cit.> to improve the sampling efficiency. RCM is based on selecting the tuples iteratively in a way that at every iteration the samples are selected using the segmentation results obtained by enforcing the spatial smoothness on the results of the previous iteration. This approach is particularly advantageous where the application satisfies the spatial smoothness requirements. 
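The graph construction discussed in this section can be summarized in a few lines. The sketch below (Python/NumPy, our own illustration rather than code from any of the cited works; `residual_fn` and the hypothesis list are placeholders for whatever model and sampler are being studied, and scikit-learn's spectral clustering is assumed to be available) builds the sampled flattened matrix H column by column and projects it to the pairwise affinity matrix G = HH^⊤ before clustering.

```python
import numpy as np
from sklearn.cluster import SpectralClustering   # assumed available

def build_affinity(X, hypotheses, residual_fn, sigma):
    # H(i, l) = exp(-r_{theta_l}(i)^2 / (2 sigma^2));  G = H H^T.
    H = np.zeros((X.shape[0], len(hypotheses)))
    for l, theta in enumerate(hypotheses):
        r = residual_fn(X, theta)                 # residual of every point to hypothesis l
        H[:, l] = np.exp(-r ** 2 / (2.0 * sigma ** 2))
    return H @ H.T

def segment(X, hypotheses, residual_fn, sigma, n_clusters):
    G = build_affinity(X, hypotheses, residual_fn, sigma)
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(G)
```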
Our proposed approach for constructing the affinity matrix, without relying on the existence of spatial smoothness, is explained in the next section. § PROPOSED METHOD This section describes a new approach to the multi-structural model fitting problem. Similar to <cit.>, <cit.>, we approach multi-structural fitting as a clustering problem with the intention of applying spectral clustering. In this approach, the pairwise affinity matrix G for spectral clustering is obtained by projecting the higher order affinity tensor (𝒫) via multiplying an approximated flattened matrix H with its transpose. For affinities of order h, each column of H is obtained by sampling h-1 data points and calculating the affinity of each point to those sampled points. The affinity of a data point i to an (h-1)-tuple is calculated as e^-r^2_θ_l(i)/(2σ^2), where θ_l is the model fitted to the (h-1)-tuple and σ is the normalization factor. For the sake of clarity, in the remainder of this text, an (h-1)-tuple (τ_l) used to generate a column of H is referred to as an edge while its respective model (θ_l) is called a hypothesis. As discussed in background, the way we sample the edges affects the information content of the resulting graph, and our ultimate goal is to sample edges in such a way that the distribution of their associated hypotheses resembles the true distribution of the model parameters. While the true distribution of the model parameters for a given dataset p(θ | X) is unknown until the problem is solved, using Bayes' theorem it can be written as follows: p(θ|X) ∝ p(X|θ) p(θ), where p(X|θ) is the likelihood of observing data X under the model θ and p(θ) is the prior distribution of θ. Given that the prior is uninformative (i.e. any parameter vector is equally likely), the posterior is largely determined by the data (the posterior is data-driven) and can be approximated by: p(θ|X) ∝ p(X|θ). A robust objective function is often used in multi-structural model fitting applications to quantify the likelihood of existence of a structure in data <cit.>. On that basis, we would argue that it can be a good approximation of the model parameters likelihood. For example, the sample consensus objective function as employed in RANSAC is expected to have a peak in places where a true structure is present (in the parameter space) and low values where there are no structures. It should be noted here that when there are structures of different sizes, the sample consensus function assigns higher values to larger structures (hence it is biased towards large structures). In this work, we select the cost function of the least k-th order statistics (LkOS) estimator as the objective function, as it has been shown to perform with stability and a high breakdown point <cit.> in various applications and it is not biased towards large structures (LkOS is biased towards structures with low variance, which is a desirable property). A modified version of the LkOS cost function used in <cit.> is as follows: C(θ) = ∑_j=0^p-1 r_i_k-j,θ^2(θ), where r_i^2(θ) is the i-th sorted squared residual with respect to model θ and i_k,θ is the index of the k-th sorted squared residual with respect to model θ. Here k refers to the minimum acceptable size of a structure in a given application and its value should be significantly larger than the dimension of the parameter space (k ≫ p).
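A minimal NumPy rendering of this cost is given below; it follows our reading of the equation above (the sum of the p sorted squared residuals ending at the k-th one) and is only an illustrative sketch, not the authors' MATLAB implementation.

```python
import numpy as np

def lkos_cost(residuals, k, p):
    # Sum of the p sorted squared residuals ending at the k-th one (k is 1-based).
    r2 = np.sort(residuals ** 2)      # r2[i] is the (i+1)-th sorted squared residual
    return np.sum(r2[k - p:k])
```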
Because the above cost function is designed to have minima around the underlying structures, the model parameters likelihood function can be expressed as: P_θ∝ p(X|θ) ≈ 1/Z e^-C(θ). The above function is highly non-linear and its evaluation over the entire parameter space, required for calculating the normalizing constant Z, would not be feasible. The common approach for sampling from a distribution that can only be evaluated up to a proportional constant at specified points is to use the Markov Chain Monte Carlo (MCMC) method (e.g. by using the Metropolis-Hastings algorithm). However, such algorithms need a good update distribution to be effective, and simple update distributions like a random walk would be inefficient and may not traverse the full parameter space <cit.>. In particular, setting up random walk distributions needs information regarding the span of the model parameters, which is unknown until the problem is solved. §.§ Sampling edges using the robust cost function Using derivatives of the order statistics function in kthSortedCostFunction, a greedy iterative sampling strategy was proposed in <cit.> that is intentionally biased towards generating data samples from a structure in the data. This sampling strategy was then used to generate putative model hypotheses for different size tuples in conjunction with the fit and remove strategy to recover multiple structures in data <cit.>, <cit.>. Because the fit and remove strategy is susceptible to errors in the initial stages, the sampling had to be reinitialized (randomly) several times to reduce the probability of error propagation in the sequential fit and remove stages. In this paper, we propose a modified version of this iterative update procedure (recalled in Algorithm <ref>) to generate model estimates (edges) that are close to the peaks of the true parameter density function p(θ|X). Each edge used in constructing the H matrix of the proposed method is obtained as follows: Initially an h-tuple (h = p+2) is picked according to the inclusion weights W (this will be explained later). Using this tuple as the starting point, the following update is run until convergence. A model hypothesis is generated using the selected tuple, and the residuals from each data point to this hypothesis are calculated. These residuals are then sorted and the h points around the k-th sorted index are selected as the updated tuple for the next iteration. In practice, the above update step has the following property: If the current h-tuple is a clean sample (all inliers) from a structure in data, there is a high probability that the next sample will also be from the same structure, as there should be at least k points agreeing with each true structure. On the other hand, if the current hypothesis is not supported by k points (not a structure in data), the next hypothesis would be at a distance in the parameter space. It is shown that residuals of a data structure with respect to an arbitrary hypothesis have a high probability of clustering together in the sorted residual space <cit.>, <cit.>. As the next sample is selected from the sorted residual space, the probability of hitting a clean sample would then be higher than selecting it randomly. Following <cit.>, we use the following criterion to decide whether the update procedure has converged to a structure in data: F_stop = ( [ 1/h ∑_j=k-h+1^k r_i_j,θ_(l-1)^2(θ_l) ]_(a) < r_i_k,θ_l^2(θ_l) ) ∧ ( [ 1/h ∑_j=k-h+1^k r_i_j,θ_(l-2)^2(θ_l) ]_(b) < r_i_k,θ_l^2(θ_l) ).
Here (a) and (b) are the squared residuals of the edge points in iterations l-1 and l-2 with respect to the current parameters θ_l. This criterion checks the data points associated with the two previous samples to see if the average residuals of those points (with respect to the current parameters) are still lower than the inclusion threshold associated with having k points (assuming that a structure has at least k points implies that data points with residuals less than r_i_k,θ_l^2(θ_l) are all inliers). This indicates that the samples selected in the last three iterations are likely to be from the same structure hence the algorithm has converged. §.§ Sub-sampling dataAlthough the above update procedure has a high probability of generating an edge that results in a hypothesis close to a peak in p(θ|X), there is no guarantee that all the structures present in the data will be visited given that the update step is reinitialized from random locations. If some of the structures were not visited by the sampling procedure, the resulting graph would not contain the information required to identify those structures.To ensure that the algorithm would visit all the structures in data, we propose to use a data sub-sampling strategy. Each run of the the update procedure in Algorithm <ref> is executed only on a subset of data selected based on an inclusion weight (W). The inclusion weight, which is initialized to one, is designed in such a way that at every iteration, it will give higher importance to data points that are not modeled by the hypothesis used in the previous iterations. This will progressively increase the chance of unmodeled data to be included in the sampling process. This idea is similar to the Bagging predictors <cit.> with boosting <cit.>,<cit.> in machine learning. In Bagging predictors multiple subsets of data formed by bootstrap replicates of the dataset are used to estimate the models, which are then aggregated to get the final model. Boosting improves the bagging process by giving importance to unclassified data points in successive classifiers. The overall edge generation procedure is as follows: A data subset of size N_s is sampled from data using the inclusion weights W without replacement (W is normalized in sampleData(·) function). This sub-sample is then used in the update procedure in algorithm <ref>, which produces an edge. Next the inclusion weights W of the inliers to the above hypothesis are decreased while the inclusion weights of the remaining points are increased. This process is repeated for a fixed number of iterations.The complete steps of the proposed method (CBS) are listed in Algorithm <ref>. The scale of noise plays a crucial role in the success of segmentation methods. In spectral clustering based model fitting methods, the scale is used to convert the residuals to an affinity measure. While most competing algorithms require this as an input parameter <cit.>, <cit.>, the proposed method estimates the scale of noise from the given data. In this implementation, we selected the MSSE <cit.> to estimate the scale of noise. The MSSE algorithm requires a constant threshold T as an input. This threshold defines the inclusion percentage of inliers. Assuming a normal distribution for noise, it is usually set to 2.5, i.e. T=2.5 will include 99% of normally distributed inliers. 
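Putting the pieces of this section together, the sketch below (Python/NumPy) is our own simplified rendering of the edge generation loop; `fit_model` and `residual_fn` stand in for the geometric model being fitted, the stopping test mirrors the F_stop criterion, the noise scale is assumed to come from an MSSE-style estimate with T = 2.5 as described above, and the exact re-weighting factors are our own choice rather than values prescribed by the paper.

```python
import numpy as np

def greedy_edge(X, k, h, fit_model, residual_fn, max_iter=50, rng=np.random):
    # One CBS edge: start from a random h-tuple and repeatedly re-fit on the
    # h points just below the k-th sorted residual until F_stop holds.
    idx = rng.choice(len(X), h, replace=False)
    prev = []                                        # index sets of the last two iterations
    for _ in range(max_iter):
        theta = fit_model(X[idx])
        r2 = residual_fn(X, theta) ** 2
        order = np.argsort(r2)
        thresh = r2[order[k - 1]]                    # k-th sorted squared residual
        if (len(prev) == 2 and
                np.mean(r2[prev[-1]]) < thresh and
                np.mean(r2[prev[-2]]) < thresh):     # F_stop: last two tuples still inliers
            break
        prev = (prev + [idx])[-2:]
        idx = order[k - h:k]                         # updated tuple for the next iteration
    return idx, theta

def update_weights(W, r2, scale, down=0.5, up=2.0):
    # Down-weight inliers of the accepted hypothesis (MSSE-style threshold, T = 2.5)
    # and up-weight the rest, steering later runs towards unvisited structures.
    inliers = r2 < (2.5 * scale) ** 2
    W = W * np.where(inliers, down, up)
    return W / W.sum()
```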
Desirable properties of this estimator for dealing with small structures were discussed in <cit.>.§ EXPERIMENTAL RESULTS We have evaluated the performance of the proposed method for multi-object motion segmentation in several well-known datasets. The results of the proposed cost-based sampling method (CBS) were then compared with state-of-the-art robust multi-model fitting methods. The selected methods use higher order affinities Spectral Curvature Clustering (SCC <cit.>, HOSC<cit.> and OB <cit.>) or are based on energy minimization (RCMSA <cit.>, PEARL <cit.> and QP-MF <cit.>).The accuracy of all methods was evaluated using the commonly used clustering error(CE) measure given in <cit.>:CE = min_Γ∑_i=1^Nδ( L^*(i) ≠ L_r^Γ (i) ) /N× 100where L^*(i) is the true label of point i, L_r(i) is the label obtained via the method under evaluation and Γ is a permutation of labels. The function δ(·) returns one when the input condition is true and zero otherwise.The proposed CBS algorithm was coded in MATLAB (The code is publicly available: https://github.com/RuwanT/model-fitting-cbs) and the results for competing methods were generated using the code provided by the authors of those works.The experiments were run on a Dell Precision M3800 laptop with Intel i7-4712HQ processor. §.§ Analysis of the proposed methodIn this section we investigate the significance of each part of the proposed algorithm and the effect of its parameters on its accuracy. This analysis was conducted using a Two-view motion segmentation problem (see twoviewMS for more details).We used the “posters-checkerboard” sequence from RAS dataset <cit.> to evaluate the significance of the main components of the CBS method. This sequence contain three rigid moving objects with 100, 99, 81 point matches respectively and 99 outlier points.In the first experiment the matrix H was generated with edges obtained by: pure random sampling (RDM), with the CBS method without the sub-sampling strategy, i.e. lines 3, 7-10 removed from Algotihm <ref> (CBS-nSS) and the complete proposed method (CBS) respectively. For each sampling method the number of hypothesis (n_H) was varied and the mean clustering error and the run time was recorded (averaged over 100 runs per each n_H). posterCheckerboard:subfig5 shows the variation of mean clustering error with the sampling time (computing time). The results show that for this problem accurate identification of models could not be achieved with pure random sampling even when large number of edges were sampled. It also shows that the sub-sampling strategy of the proposed CBS method significantly contributes towards accurate and efficient identification of the underlying models in data. Next we use the same image sequence to study the variations in accuracy of the proposed method with the value of parameter k.This parameter defines the minimal acceptable size for a structure (in number of points) in a given application. Here we vary the value of k from 10 to 80 (CBS use edge of size 10 and the smallest structure in this sequence has only 81 points hence any value outside this range is not realistic). The number of hypothesis was set to 100 for both sampling methods. Results plotted in posterCheckerboard:subfig6 show that for CBS-nSS and CBS the clustering error reduces steeply up to around k=20. In CBS-nSS the CE remains relatively unchanged after that while in CBS the clustering error start to increase when k goes beyond 40. 
This behavior can be explained as follows: The CBS method estimates the scale of noise fromdata and the analysis of <cit.> showed that the estimation of the noise scale from data requires at least 20 data points to limit the effects of finite sample bias. As such, the CBS method would not have high accuracy when k<20. In addition the data sub-sampling in CBS reduces the number of points available for each run of the sample generator hence the increased clustering error for large k values. Using large values for k is also not desirable because the smaller size structures would be ignored. Next, we compared the proposed hypothesis generation process against several well known sampling methods for robust model fitting (e.g. MultiGS <cit.> and Lo-RANSAC <cit.>). These methods are designed to bias the sampling process towards selecting points from a structure in data.For completeness we have also included pure spatial sampling (generate hypothesis using points closer in space picked via a KDtree) and random sampling. Similar to the proposed method the hypothesis from these sampling methods were used to generate a graph which is cut to perform the clustering. The posterCheckerboard:subfig6 shows that the CBS method is capable of generating highly accurate clusterings faster than other sampling methods. It should be noted here that while we have only presented the results for one two-view motion segmentation case, similar trends were observed across all other problems tested in this paper. §.§ Two-view motion segmentation Two-view motion segmentation is the task of identifying the points correspondences of each object in two views of a dynamic scene that contains multiple independently moving objects. Provided that the point matches between the two views are given as [X_1, X_2] where X_i = (x, y, 1)^⊤ is a coordinate of a point in view i, each motion can be modeled using the fundamental matrix F ∈ℛ^3 × 3as <cit.>:X_1^⊤ F X_2 = 0The distance from a given model to a point pair can be measured using the Sampson distance <cit.>.We tested the performance of the CBS method on the Adelaide-RMF dataset <cit.> which contains key-point matches (obtained using SIFT) of dynamic scenes together with the ground truth clustering. The clustering error and the computational time of the CBS method on each sequence together with those of the competing methods (PEARL, FLOSS, RCMSA and QP-MF) are given in fundamentalRes. The results show that in comparison to the competing methods, the proposed method has achieved comparable or better accuracy over all sequences. Moreover, on average the computation time of the proposed method is around 4 times less than that of QP-MF and twice that of the RCMSA when its computational bottlenecks are implemented using C (MATLAB MEX) whereas our method is implemented using simple MATLAB script. One would expect significant improvements in terms of speed by using C language implementation.In these experiments the parameter k of the proposed method was set to k = min(0.1 × N , 20). The number of samples in QP-MF was set to 200 (determined through trial and error: no significant improvement of accuracy was observed when thenumber of samples were increased beyond 200 for a test sequence). §.§ 3D-motion segmentation of rigid bodiesThe objective of 3D motion segmentation is to identify multiple moving objects using point trajectories through a video sequence. 
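Before leaving the two-view setting described above, the sketch below gives the point-pair residual used there: the Sampson distance of a correspondence to a hypothesized fundamental matrix F, written for the X_1^⊤ F X_2 = 0 convention of this section (a standard first-order approximation in our own NumPy rendering, not code from the paper).

```python
import numpy as np

def sampson_distance(F, X1, X2):
    # X1, X2: homogeneous correspondences as 3xN arrays with X1^T F X2 ~ 0 for inliers.
    FX2 = F @ X2                                  # epipolar lines associated with view-2 points
    FtX1 = F.T @ X1                               # epipolar lines associated with view-1 points
    e = np.sum(X1 * FX2, axis=0)                  # algebraic error x1^T F x2 per correspondence
    denom = FX2[0] ** 2 + FX2[1] ** 2 + FtX1[0] ** 2 + FtX1[1] ** 2
    return e ** 2 / denom
```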
If the projections (to the image plane) of N points tracked through F frames are available, [x_fα ]_α =1 … N^f=1 … F: x_fα∈ℛ^2, then <cit.> has shown that the point trajectories P_α =[ x_1α, y_1α, x_2α, …, x_Fα, y_Fα ]^⊤∈ℛ^2F that belong to a single rigid moving object are contained within a subspace of rank ≤ 4, under the affine camera projection model. Hence, the problem of 3D motion segmentation can be reduced to a subspace clustering problem. One of the characteristics of subspace segmentation is that the dimension of the subspaces may vary between two and four, depending on the nature of the motions. This means that the model we are estimating is not fixed. The proposed method, which was not specifically developed to solve this problem (unlike some competing techniques <cit.>), is not capable of identifying the number of dimensions of a given motion and requires this information as an input. In our implementation we have used the eigenvalues of the sampled data points to select a dimension d of the model such that 2 ≤ d ≤ 4. We utilized the commonly used “checkerboard” image sequences in the Hopkins 155 dataset <cit.> to evaluate the CBS algorithm. This dataset contains trajectory information of 104 video sequences that are categorized into two main groups depending on the number of motions in each sequence (two or three motions). The clustering error (mean and median) and the computation time for CBS together with competing higher order affinity based methods are shown in hopkins155. The results show that CBS has achieved clustering accuracies comparable to those achieved by competing methods while being significantly faster than those methods (especially on the three-motion sequences). For completeness we have also included the results for some energy minimization (PEARL <cit.>, QP-MF <cit.>) and fit & remove (RANSAC, HMSS <cit.>) based methods as reported in <cit.>. To gain a better understanding of the methods with good accuracy across all sequences, we have plotted the cumulative distributions of the errors per sequence in hopkinHist:subfig1 (two-motion sequences) and hopkinHist:subfig2 (three-motion sequences). For algorithms with random elements the mean error across 100 runs is used. To provide a qualitative measure of the performance, the final segmentation results of several sequences in the Hopkins 155 dataset, where CBS was both successful and unsuccessful, are shown in hopkinQuality. The sequences contained in the Hopkins 155 dataset are outlier-free. In order to test robustness to outliers, we added synthetically generated outlier trajectories to each three-motion sequence of the Hopkins 155 dataset[The MATLAB code provided by http://www.vision.jhu.edu/data/hopkins155/ was used.]. The clustering results of the CBS method together with those obtained by the best performing method (SCC) are plotted in hopkinHist:subfig3. The results show that CBS was able to achieve high accuracy in the presence of outliers on a higher number of sequences. It should be noted here that the SSC algorithm is not designed to handle outliers and therefore was not included in this analysis. §.§ Long-term analysis of moving objects in video The point trajectories of the “Hopkins 155” dataset used in the above analysis are hand-tuned (i.e.
the point trajectories of each sequence are cleaned by a human such that they do not contain gross outliers or incomplete trajectories). Recently, the more realistic “Berkeley Motion Segmentation Dataset” (BMS-26) was introduced by <cit.>, <cit.> for long-term analysis of moving objects in video. This dataset consists of point trajectories obtained by running a state-of-the-art feature point tracker (the large displacement optical flow <cit.>) on 26 videos directly, without any further post-processing. Thus, those feature trajectories contain noise and outliers and, most importantly, include incomplete trajectories. Incomplete trajectories are trajectories that do not run for the whole duration of the video; they can appear in any frame of the video and disappear on or before the last frame. These incomplete trajectories are mainly caused by occlusion and disocclusion. The traditional approach of using two views to segment objects is susceptible to short term variations (e.g. a human standing still for a short time can be merged with the background). Hence Brox and Malik <cit.> proposed long-term video analysis where a similarity between two point trajectories was used to build a graph that was segmented using spectral clustering. Such pairwise affinities only model translations and do not account for scaling and rotation. Ochs and Brox <cit.> used affinities defined on higher order tuples, which results in a hyper-graph. Using a nonlinear projection, this hyper-graph was then converted to an ordinary graph which was segmented using spectral clustering. In this analysis we use the approach proposed by Ochs and Brox <cit.> where the motion of an object is modeled using a special similarity transform 𝒯∈ SSim(2), with parameters scaling (s), rotation (α) and translation (v). The distance from a trajectory (c_i(t) → c_i(t')) to the model 𝒯_t is calculated using the L_2-distance d_𝒯_t,i = ‖𝒯_t c_i(t) - c_i(t')‖. A motion hypothesis 𝒯_t at time t can be obtained using two or more point trajectories that exist in the interval [t,t']. In our implementation we used edges of size h=p+2=4 to generate hypotheses. It should be noted here that the distance measure is only valid if the trajectories used to generate the hypothesis and the trajectory to which the distance is calculated all coexist in time. Hence a distance of infinity is assigned to all the points that do not exist in the time interval [t,t']. This behavior causes complications in the weight update of the proposed method, as now some trajectories can be identified as outliers even though they belong to the same object. To overcome this we uniformly sample small windows (of size 7 frames) and limit the weight updates to that window alone. Another important feature of this dataset is that most sequences have a large number of frames and data points (e.g. the sequence "tennis", even with 8-times down-scaling <cit.>, includes more than 450 frames and 40,000 data points). Storing a graph of that size is challenging, especially on a PC. Hence, in cases where the number of frames is large, we divide the video into a few large windows (e.g. 100 frames) and solve the problem in each large window independently. Next we calculated the mutual distance between each structure in different windows and clustered them using k-means to get the desired number of structures. The number of clusters is a parameter selected such that it would result in reasonable accuracy with the least over-segmentation.
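As a concrete rendering of the trajectory-to-hypothesis distance d_𝒯_t,i used above, the sketch below applies a hypothesized similarity transform (scaling s, rotation α, translation v, estimated elsewhere from a few co-existing trajectories) to the position of trajectory i at time t and measures how far it lands from the observed position at time t'; the infinite distance for non-coexisting trajectories follows the text. This is our own illustrative Python, not the authors' code.

```python
import numpy as np

def ssim2_distance(s, alpha, v, c_t, c_tp, exists_t, exists_tp):
    # L2 distance between the transformed position at time t and the observation at t';
    # infinite if the trajectory does not cover both frames.
    if not (exists_t and exists_tp):
        return np.inf
    R = np.array([[np.cos(alpha), -np.sin(alpha)],
                  [np.sin(alpha),  np.cos(alpha)]])
    return np.linalg.norm(s * (R @ c_t) + v - c_tp)
```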
Once the clusterings were obtained, they were evaluated using the method provided along with the dataset (man-made masks on specific frames of the videos). We compare our results with <cit.>, <cit.>, which are based on higher order affinities. The results given in longtermVid show that our method has achieved accuracies similar to those methods, with significant improvements in computation time. The computation time is related to the number of hyper-edges used: OB used N^2 × (30+12) hyper-edges in their implementation, whereas HOSC used 2N/5 + N. In contrast, our method uses fewer hyper-edges (N/10) selected using the k-th order cost function. The results show that if the edges are selected appropriately, similar accuracies can be achieved, and a lower number of edges means a lower computational time. We also note here that while the two competing methods <cit.>, <cit.> use spatial contiguity in selecting the edges to construct the affinity graph, the proposed method has not used any such additional information. § DISCUSSION The proposed method requires the value of k, which defines the minimal acceptable size for a structure in a given application, as an input. Any robust model fitting method needs to establish the minimal acceptable structure size (either explicitly or implicitly), or else it may result in a trivial solution. For example, if we are given a set of 2D points and asked to identify lines in data without any additional constraint, there would be no basis to exclude the trivial solution because any two points will result in a perfect line. Hence, in order to find a meaningful solution there must be some additional constraints, such as the minimal acceptable size for a structure. The proposed method estimates the scale of noise from data, and the analysis of <cit.> showed that the estimation of the noise scale from data requires at least around 20 data points to limit the effects of finite sample bias. This leads to a lower bound of k around 20. Similar to competing clustering based methods (e.g. SCC <cit.>, SSC <cit.>), the proposed method also requires prior knowledge of the number of clusters. This is one of the limitations of the proposed method. The problem of identifying the number of structures and the scale of noise simultaneously is still a highly researched area. Remaining outliers can always be seen as members of a model with large noise values. Zelnik-Manor and Perona <cit.> proposed a method to automatically estimate the number of clusters in a graph using eigenvector analysis. Since our focus in this paper is on efficiently generating the graph (not on how to cluster it), we have not included this in the evaluations. Some model fitting methods that are based on energy minimization <cit.> are devised to estimate the number of structures given the scale of noise. They achieve this by adding a model complexity term to the cost function that penalizes additional structures in a given solution. However, these methods require an additional parameter that balances the data fidelity cost with the model complexity (the number of structures in <cit.>). Our experiments on <cit.> showed that the output of these methods was heavily dependent on this parameter and required hand tuning on each image (of fundamentalRes) to generate reliable results. The proposed method uses a data sub-sampling strategy based on a set of inclusion weights to bias the algorithm to produce edges from different structures. These inclusion weights are iteratively calculated using the inlier/outlier dichotomy for each edge.
However, in cases where there is additional information about the problem, such as spatial contiguity, one can use it to improve the sub-sampling. For example, in two-view motion segmentation, the Euclidean distance between points can be used to construct a KDtree, which can then be used to do the sampling directly (i.e. select an initial point randomly and include the N_s points closest to that point as the data sub-sample). It is important to note that in the performance evaluations of this paper we have not used any such additional information. § CONCLUSION In this paper we proposed an efficient sampling method to obtain a highly accurate approximation of the full graph required to solve multi-structural model fitting problems in computer vision. The proposed method is based on the observation that the usefulness of a graph for segmentation improves as the distribution of hypotheses (used to build the graph) approaches the actual parameter distribution for the given data. In this paper we approximate this actual parameter distribution using the k-th order statistics cost function, and the samples are generated using a greedy algorithm coupled with a data sub-sampling strategy. The performance of the algorithm in terms of accuracy and computational efficiency was evaluated on several instances of multi-object motion segmentation problems and was compared with state-of-the-art model fitting techniques. The comparisons show that the proposed method is both highly accurate and computationally efficient. § ACKNOWLEDGMENT This research was partly supported under the Australian Research Council (ARC) Linkage Projects funding scheme. Fischler1981 M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM, vol. 24, no. 6, pp. 381–395, Jun. 1981.Delong2012 A. Delong, L. Gorelick, O. Veksler, and Y. Boykov, “Minimizing energies with hierarchical costs,” International Journal of Computer Vision, vol. 100, no. 1, pp. 38–58, 2012.Elhamifar2013 E. Elhamifar and R. Vidal, “Sparse subspace clustering: Algorithm, theory, and applications,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 35, no. 11, pp. 2765–2781, 2013.Haifeng2003 C. Haifeng and P. Meer, “Robust regression with projection based m-estimators,” in Proceedings. Ninth IEEE International Conference on Computer Vision, 2003, pp. 878–885 vol.2.Stewart1997 C. V. Stewart, “Bias in robust estimation caused by discontinuities and multiple structures,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 8, pp. 818–833, 1997.Zuliani2005 M. Zuliani, C. S. Kenney, and B. S. Manjunath, “The multiransac algorithm and its application to detect planar homographies,” in IEEE International Conference on Image Processing, ICIP., vol. 3, 2005, pp. III–153–6.Boykov2001 Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 23, no. 11, pp. 1222–1239, 2001.candes2011robust E. J. Candès, X. Li, Y. Ma, and J. Wright, “Robust principal component analysis?” Journal of the ACM (JACM), vol. 58, no. 3, p. 11, 2011.cabral2013unifying R. Cabral, F. De La Torre, J. P. Costeira, and A. Bernardino, “Unifying nuclear norm and bilinear factorization approaches for low-rank matrix decomposition,” in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 2488–2495.Agarwal2005 S.
Agarwal, L. Jongwoo, L. Zelnik-Manor, P. Perona, D. Kriegman, and S. Belongie, “Beyond pairwise clustering,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR., vol. 2, 2005, pp. 838–845.Govindu2005 V. M. Govindu, “A tensor decomposition for geometric grouping and segmentation,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR., vol. 1, 2005, pp. 1150–1157 vol. 1.LRRliu2013robust G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma, “Robust recovery of subspace structures by low-rank representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 171–184, 2013.LRRDetliu2016deterministic G. Liu, H. Xu, J. Tang, Q. Liu, and S. Yan, “A deterministic analysis for lrr,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 3, pp. 417–430, 2016.LRSRNCwang2016lrsr J. Wang, D. Shi, D. Cheng, Y. Zhang, and J. Gao, “Lrsr: Low-rank-sparse representation for subspace clustering,” Neurocomputing, 2016.TIP16kim2016robust E. Kim, M. Lee, and S. Oh, “Robust elastic-net subspace representation,” IEEE Transactions on Image Processing, 2016.poling2014new B. Poling and G. Lerman, “A new approach to two-view motion segmentation using global dimension minimization,” International Journal of Computer Vision, vol. 108, no. 3, pp. 165–185, 2014.Ng2002 A. Y. Ng, M. I. Jordan, and Y. Weiss, “On spectral clustering: Analysis and an algorithm,” in Advances in Neural Information Processing Systems 14, T. Dietterich, S. Becker, and Z. Ghahramani, Eds.1em plus 0.5em minus 0.4emMIT Press, 2002, pp. 849–856.Chen2009 G. Chen and G. Lerman, “Spectral curvature clustering (scc),” International Journal of Computer Vision, vol. 81, no. 3, pp. 317–330, 2009.Ochs2012 P. Ochs and T. Brox, “Higher order motion models and spectral clustering,” in IEEE Conference on Computer Vision and Pattern Recognition, CVPR., 2012, pp. 614–621.Purkait2014 P. Purkait, T.-J. Chin, H. Ackermann, and D. Suter, Clustering with Hypergraphs: The Case for Large Hyperedges, ser. Lecture Notes in Computer Science.1em plus 0.5em minus 0.4emSpringer International Publishing, 2014, vol. 8692, ch. 44, pp. 672–687.MoGCVPRli2015subspace B. Li, Y. Zhang, Z. Lin, and H. Lu, “Subspace clustering by mixture of gaussian regression,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 2094–2102.LBFzhang2012hybrid T. Zhang, A. Szlam, Y. Wang, and G. Lerman, “Hybrid linear modeling via local best-fit flats,” International journal of computer vision, vol. 100, no. 3, pp. 217–240, 2012.Balakrishnan2011 S. Balakrishnan, M. Xu, A. Krishnamurthy, and A. Singh, “Noise thresholds for spectral clustering,” in Advances in Neural Information Processing Systems 24, J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, Eds.1em plus 0.5em minus 0.4emCurran Associates, Inc., 2011, pp. 954–962. [Online].Swendsen1987 R. H. Swendsen and J.-S. Wang, “Nonuniversal critical dynamics in monte carlo simulations,” Physical review letters, vol. 58, no. 2, p. 86, 1987.Rousseeuw2005 P. J. Rousseeuw and A. M. Leroy, Robust regression and outlier detection.1em plus 0.5em minus 0.4emJohn Wiley & Sons, 2005, vol. 589.Bab-Hadiashar2008 A. Bab-Hadiashar and R. Hoseinnezhad, “Bridging parameter and data spaces for fast robust estimation in computer vision,” in Digital Image Computing: Techniques and Applications (DICTA), 2008, 2008, Conference Proceedings, pp. 1–8.Andrieu2003 C. Andrieu, N. de Freitas, A. 
| http://arxiv.org/abs/1705.09437v1 | {
"authors": [
"Ruwan Tennakoon",
"Alireza Sadri",
"Reza Hoseinnezhad",
"Alireza Bab-Hadiashar"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20170526053907",
"title": "Effective Sampling: Fast Segmentation Using Robust Geometric Model Fitting"
} |
Topology Induced Oscillations in Majorana Fermions in a Quasiperiodic Superconducting Chain
Indubala I Satija
December 30, 2023
====================

Random-effects meta-analyses are very commonly used in medical statistics. Recent methodological developments include multivariate (multiple outcomes) and network (multiple treatments) meta-analysis. Here we provide a new model and corresponding estimation procedure for multivariate network meta-analysis, so that multiple outcomes and treatments can be included in a single analysis. Our new multivariate model is a direct extension of a univariate model for network meta-analysis that has recently been proposed. We allow two types of unknown variance parameters in our model, which represent between-study heterogeneity and inconsistency. Inconsistency arises when different forms of direct and indirect evidence are not in agreement, even having taken between-study heterogeneity into account. However, consistency is often assumed in practice, and so we also explain how to fit a reduced model which makes this assumption. Our estimation method extends several other commonly used methods for meta-analysis, including the method proposed by DerSimonian and Laird (1986). We investigate the use of our proposed methods in the context of a real example.

§ INTRODUCTION

Meta-analysis, the statistical process of pooling the results from separate studies, is commonly used in medical statistics and now requires little introduction. The univariate random-effects model is often used for this purpose. This model has recently been extended to the multivariate (multiple outcomes; Jackson et al., 2011) and network (multiple treatments; Lu and Ades, 2004) meta-analysis settings. In a network meta-analysis, more than two treatments are included in the same analysis. The main advantage of network meta-analysis is that, by using indirect information contained in the network, more precise and coherent inference is possible, especially when direct evidence for particular treatment comparisons is limited. Here we describe a new model that extends the random-effects modelling framework to the multivariate network meta-analysis setting, so that both multiple outcomes and multiple treatments may be included in the same analysis.

Other multivariate extensions of univariate methods for network meta-analysis have previously been proposed. For example, Achana et al. (2014) analyse multiple correlated outcomes in multi-arm studies in public health. Efthimiou et al. (2014) propose a model for the joint modelling of odds ratios on multiple endpoints. Efthimiou et al. (2015) develop another model that is a network extension of an alternative multivariate meta-analytic model that was originally proposed by Riley et al. (2008). A network meta-analysis of multiple outcomes with individual patient data has also been proposed by Hong et al. (2015) under both contrast-based and arm-based parameterizations, and Hong et al. (2016) develop a Bayesian framework for multivariate network meta-analysis. These multivariate network meta-analysis models are based on the assumption of consistency in the network, extending the approach introduced by Lu and Ades (2004). In contrast to these previously developed methods, the method proposed here relaxes the consistency assumption. This assumption is sometimes found to be false across the entire network (Veroniki et al., 2013).
We model the inconsistency using a design-by-treatment interaction, so that different forms of direct and indirect evidence may not agree, even after taking between-study heterogeneity into account. However, we assume that the design-by-treatment interaction terms follow normal distributions, and so conceptualise inconsistency as another source of random variation. This allows us to achieve the dual aim of estimating meaningful treatment effects whilst also allowing for inconsistency in the network. Although we allow inconsistency in the network, we propose a relatively simple model. Our preference for a simple model is because the between-study covariance structure is typically hard to identify accurately in multivariate meta-analyses (Jackson et al., 2011) and also because network meta-analysis datasets are usually small (Nikolakopoulou et al., 2014). The new model that we propose for multivariate network meta-analysis is a direct generalisation of the univariate network meta-analysis model proposed by Jackson et al. (2016), which is a particular form of the design-by-treatment interaction model (Higgins et al., 2012).

In addition to proposing a new model for multivariate network meta-analysis, we also develop a corresponding new estimation method. This estimation method is based on the method of moments and extends a wide variety of related methods. In particular, we extend the estimation method described by DerSimonian and Laird (1986) by building directly on its matrix-based extension to multivariate meta-analysis (Chen et al., 2012; Jackson et al., 2013). We adopt the usual two-stage approach to meta-analysis, where the estimated study-specific treatment effects (including the within-study covariance matrices) are computed in the first stage. We give some information about how this first stage is performed, but our focus is the second stage, where the meta-analysis model is fitted. The paper is set out as follows. In section 2, we briefly describe the univariate model for network meta-analysis to motivate our new multivariate network meta-analysis model in section 3. We present our new estimation method in section 4 and we apply our methods to a real dataset in section 5. We conclude with a short discussion in section 6.

§ A UNIVARIATE NETWORK META-ANALYSIS MODEL

Here we describe our univariate modelling framework for network meta-analysis (Jackson et al., 2016; Law et al., 2016). Without loss of generality, we take treatment A as the reference treatment for the network meta-analysis. The other treatments are B, C, etc. We take the design d as referring only to the set of treatments compared in a study. For example, if the first design compares treatments A and B only, then d=1 refers to two-arm studies that compare these two treatments. We define t to be the total number of treatments included in the network, and t_d to be the number of treatments included in design d. We define D to be the number of different designs, N_d to be the number of studies of design d, and N = ∑_d=1^D N_d to be the total number of studies. We will use the word 'contrast' to refer to a particular treatment comparison or effect in a particular study, for example the 'AB contrast' in the first study. We model the estimated relative treatment effects, rather than the average outcomes in each arm, and so perform contrast-based analyses. We define Y_di to be the c_d × 1 column vector of estimated relative treatment effects from the ith study of design d, where c_d=t_d-1.
We define n_d = N_d c_d to be the total number of estimated treatment effects that design d contributes to the analysis, and n = ∑_d=1^D n_d to be the total number of estimated treatment effects that contribute to the analysis. To specify the outcome data Y_di, we choose a baseline treatment in each design d. The entries of Y_di are then the estimated effects of the other c_d treatments included in design d relative to this baseline treatment. For example, if we take d=2 to indicate the 'CDE design' then c_2=2. Taking C as the baseline treatment for this design, the two entries of the Y_2i vectors are the estimated relative effects of treatment D compared to C and of treatment E compared to C. For example, the entries of the Y_di could be estimated log-odds ratios or mean differences. We use normal approximations for the within-study distributions. We define S_di to be the c_d × c_d within-study covariance matrix corresponding to Y_di. We treat all S_di as fixed and known in analysis. Ignoring the uncertainty in the S_di is acceptable provided that the studies are reasonably large and is conventional in meta-analysis, but this approximation is motivated by pragmatic considerations because it greatly simplifies the modelling. We do not impose any constraints on the form of S_di other than that they must be valid covariance matrices. The lead diagonal entries of the S_di are within-study variances that can be calculated using standard methods. Assuming that the studies are composed of independent samples for each treatment, the other entries of the S_di are calculated as the variance of the average outcome (for example the log odds or the sample mean) of the baseline treatment.

We define δ_1^AB, δ_1^AC, ⋯, δ_1^AZ, where Z is the final treatment in the network, to be treatment effects relative to the reference treatment A, and call them basic parameters (Lu and Ades, 2006). We use the subscript 1 when defining the basic parameters to emphasise that they are treatment effects for the first (and in this section, only) outcome. We define c=t-1 to be the number of basic parameters in the univariate setting. Treatment effects not involving A can be obtained as linear combinations of the basic parameters and are referred to as functional parameters (Lu and Ades, 2006). For example the average treatment effect of treatment E relative to treatment C, δ_1^CE = δ_1^AE - δ_1^AC, is a functional parameter. We define the c × 1 column vector δ=(δ_1^AB, δ_1^AC, ⋯, δ_1^AZ)^T and design-specific c_d × c design matrices 𝐙_(d). We use the subscript (d) in these design matrices to emphasise that they apply to each individual study of design d; we reserve the subscript d for design matrices that describe regression models for all outcome data from this design. If the ith entry of the 𝐘_di is the estimated treatment effect of treatment J relative to the reference treatment A then the ith row of 𝐙_(d) contains a single nonzero entry: 1 in the (j-1)th column, where j is the position of J in the alphabet. If instead the ith entry of the 𝐘_di is the estimated treatment effect of treatment J relative to treatment K, K ≠ A, then the ith row of 𝐙_(d) contains two nonzero entries: 1 in the (j-1)th column and -1 in the (k-1)th column.

Our univariate model for network meta-analysis is

Y_di = 𝐙_(d) δ + Θ_di + Ω_d + ϵ_di

where Θ_di ∼ N(0, τ^2_β 𝐏_c_d), Ω_d ∼ N(0, τ^2_ω 𝐏_c_d), ϵ_di ∼ N(0, 𝐒_di), all Θ_di, Ω_d and ϵ_di are independent, and 𝐏_c_d is the c_d × c_d matrix with ones on the leading diagonal and halves elsewhere.
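To illustrate the role of 𝐏_c_d, the following toy sketch (not part of the original paper; the numerical value of τ^2_β is invented) builds the matrix with ones on the diagonal and halves elsewhere and checks numerically that every pairwise contrast in a design, whether or not it involves the design's baseline treatment, has the same between-study variance, which is the symmetry argument made in the next paragraph.

```python
import numpy as np

def P(c):
    """c x c matrix with ones on the leading diagonal and halves elsewhere."""
    return 0.5 * (np.eye(c) + np.ones((c, c)))

tau2_beta = 0.04                 # illustrative between-study variance
cov = tau2_beta * P(2)           # CDE design: rows/columns index the CD and CE effects

print(np.diag(cov))              # [0.04 0.04] -> variances of the CD and CE effects
a = np.array([-1.0, 1.0])        # contrast giving the DE effect
print(a @ cov @ a)               # 0.04 again, so the DE effect has the same variance
```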
We refer to τ^2_β and τ^2_ω as the between-study variance, and the inconsistency variance, respectively. The term Θ_di is a study-by-treatment interaction term that models between-study heterogeneity. The model Θ_di ∼ N(0, τ^2_β 𝐏_c_d) implies that the heterogeneity variance is the same for all contrasts for every study, regardless of whether or not the comparison is relative to the baseline treatment (Lu and Ades, 2004). Other simple choices of 𝐏_c_d, such as allowing the off-diagonal entries to differ from 0.5, violate this symmetry between treatments. For example, in the case d=2 indicating the CDE design, the between-study variances for the CD and CE effects in this study are given by the two diagonal entries of τ^2_β 𝐏_c_d, which are both τ^2_β. The between-study variance for the effect of E relative to D is (-1, 1) τ^2_β 𝐏_c_d (-1, 1)^T, which is also τ^2_β. The Ω_d are design-by-treatment interaction terms that model inconsistency in the network. The model Ω_d ∼ N(0, τ^2_ω 𝐏_c_d) implies that the inconsistency variance is the same for all contrasts for every design; other simple choices of 𝐏_c_d also violate this symmetry.

To describe all estimates from all studies, we stack the Y_di from the same design to form the n_d × 1 column vector Y_d = (Y_d1^T, ⋯, Y^T_dN_d)^T, and we then stack these Y_d to form the n × 1 column vector Y = (Y_1^T, ⋯, Y^T_D)^T. Jackson et al. (2016) then use three further matrices that we also define here because they will be required to describe the estimation procedure that follows. The matrix M_1 is defined as an n × n square matrix where m_1ij=0 if the ith and jth entries of Y, i,j=1, ⋯, n, are estimates from different studies; otherwise m_1ii = 1, and m_1ij = 1/2 for i ≠ j. The matrix M_2 is defined as an n × n square matrix where m_2ij=0 if the ith and jth entries of Y, i,j=1, ⋯, n, are estimates from different designs; otherwise m_2ij = 1 if the ith and jth entries of Y are estimates of the same treatment comparison (for example, treatment A compared to treatment B) and m_2ij = 1/2 if these entries are estimates of different treatment comparisons. The supplementary materials show a concrete example of how these two matrices are formed. Jackson et al. (2016) also define an n × c univariate design matrix 𝐙, where if the ith entry of 𝐘 is an estimated treatment effect of treatment J relative to the reference treatment A then the ith row of 𝐙 contains a single nonzero entry: 1 in the (j-1)th column, where j is the position of J in the alphabet. If instead the ith entry of 𝐘 is an estimated treatment effect of treatment J relative to treatment K, K ≠ A, then the ith row of 𝐙 contains two nonzero entries: 1 in the (j-1)th column and -1 in the (k-1)th column. Defining 𝐒_d = diag(𝐒_d1, ⋯, 𝐒_dN_d), and then 𝐒 = diag(𝐒_1, ⋯, 𝐒_D), model (<ref>) can be presented for the entire dataset as

𝐘 ∼ N(𝐙 δ, τ^2_β M_1 + τ^2_ω M_2 + 𝐒)

§ A MULTIVARIATE NETWORK META-ANALYSIS MODEL

We now explain how to extend the univariate model in section 2 to the multivariate setting to handle multiple outcomes. We define p to be the number of outcomes, and so the dimension of the network meta-analysis, so that we now consider the case where p>1. The Y_di are now pc_d × 1 column vectors, where the Y_di contain c_d column vectors of length p. For example, in a p=5 dimensional meta-analysis and continuing with the example where d=2 indicates the CDE design, we have c_2=2.
The Y_2i are then 10 × 1 column vectors where, taking C as the baseline treatment for this design, the first five entries of the Y_2i are the estimated relative treatment effects of D compared to C and the second five entries are the corresponding estimates for E compared to C. We define the pc × 1 column vector δ=(δ_1^AB, δ_1^AC, ⋯, δ_1^AZ, δ_2^AB, δ_2^AC, ⋯, δ_2^AZ, ⋯, δ_p^AB, δ_p^AC, ⋯, δ_p^AZ)^T, so that this vector contains the basic parameters for each outcome in turn. When p=1 the vector δ reduces to its definition in the univariate setting, as given in section 2. We define Σ_β and Σ_ω to be p × p unstructured covariance matrices that are multivariate generalisations of τ^2_β and τ^2_ω. These two matrices contain the between-study variances and covariances, and the inconsistency variances and covariances, respectively, for all p outcomes. We refer to Σ_β and Σ_ω as the between-study covariance matrix, and the inconsistency covariance matrix, respectively. We continue to treat the within-study covariance matrices S_di as if fixed and known in analysis, but these are now pc_d × pc_d matrices. The entries of the S_di matrices that describe the covariance of estimated treatment effects for the same outcome can be obtained as in the univariate setting. However the other entries of S_di, that describe the covariance between treatment effects for different outcomes, are harder to obtain in practice. A variety of strategies for dealing with this difficulty have been proposed (Jackson et al., 2011; Wei and Higgins, 2013).

§.§ The proposed multivariate model for network meta-analysis

In the multivariate setting, to allow correlations between estimated treatment effects for different outcomes, both within studies and designs, we propose that model (<ref>) is generalised to

Y_di = X_(d) δ + Θ_di + Ω_d + ϵ_di

where X_(d) = ((I_p ⊗ Z_(d)1)^T, ⋯, (I_p ⊗ Z_(d)c_d)^T)^T, Z_(d)i is the ith row of Z_(d), Θ_di ∼ N(0, 𝐏_c_d ⊗ Σ_β), Ω_d ∼ N(0, 𝐏_c_d ⊗ Σ_ω) and ϵ_di ∼ N(0, S_di), where all Θ_di, Ω_d and ϵ_di are independent, and ⊗ is the Kronecker product. The random Θ_di and Ω_d continue to model between-study heterogeneity, and inconsistency, respectively. Recalling that δ contains the basic parameters for each outcome in turn, the design matrices X_(d) provide the correct linear combinations of basic parameters to describe the mean of all estimated treatment effects in Y_di. Model (<ref>) reduces to model (<ref>) in one dimension. The definition of 𝐏_c_d means that Σ_β and Σ_ω are the between-study covariance matrix, and inconsistency covariance matrix, for all contrasts. We continue to define Y as in the univariate setting, where Y contains n column vectors of estimated treatment effects that are of length p, so that Y is an np × 1 column vector in the multivariate setting. We define the multivariate np × pc design matrix 𝐗 = ((𝐈_p ⊗ 𝐙_1)^T, ⋯, (𝐈_p ⊗ 𝐙_n)^T)^T, where 𝐙_i is the ith row of 𝐙. Model (<ref>) can be presented for the entire dataset as

𝐘 ∼ N(𝐗 δ, M_1 ⊗ Σ_β + M_2 ⊗ Σ_ω + 𝐒)

where we continue to define 𝐒 as in the univariate case. Matrices M_1 and M_2 are the same as in the univariate setting, and so continue to be n × n matrices. Model (<ref>) is a linear mixed model for network meta-analysis and is conceptually similar to other models of this type (Piepho et al., 2012). If Σ_ω = 0 then all Ω_d=0 and there is no inconsistency; we refer to this reduced model as the 'consistent model'.
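The following sketch (illustrative only; the toy network, the numerical values of Σ_β and Σ_ω and the placeholder within-study covariance are all invented assumptions) shows how M_1 and M_2 can be built from study, design and comparison labels following the definitions above, and how the marginal covariance M_1 ⊗ Σ_β + M_2 ⊗ Σ_ω + 𝐒 of the model just displayed is then assembled with Kronecker products.

```python
import numpy as np

# one record per estimated treatment effect: a hypothetical toy network with
# two AB studies, one AC study and one three-arm ABC study (AB and AC contrasts)
study  = np.array([0, 1, 2, 3, 3])
design = np.array([0, 0, 1, 2, 2])
comp   = np.array(["AB", "AB", "AC", "AB", "AC"])
n = len(study)

M1 = np.zeros((n, n))
M2 = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if study[i] == study[j]:                    # zero across different studies
            M1[i, j] = 1.0 if i == j else 0.5
        if design[i] == design[j]:                  # zero across different designs
            M2[i, j] = 1.0 if comp[i] == comp[j] else 0.5

# marginal covariance of Y for p = 2 outcomes (values invented for illustration)
p = 2
Sigma_beta  = np.array([[0.04, 0.01], [0.01, 0.09]])
Sigma_omega = np.array([[0.02, 0.00], [0.00, 0.02]])
S = 0.1 * np.eye(n * p)                             # placeholder within-study covariance
V = np.kron(M1, Sigma_beta) + np.kron(M2, Sigma_omega) + S
print(V.shape)                                      # (10, 10)
```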
If both Σ_β = 0 and Σ_ω = 0 then all studies estimate the same effects to within-study sampling error and we refer to this model as the 'common-effect and consistent model'. Missing data (unobserved entries of 𝐘) are common in applications as not all studies may provide data for all outcomes and contrasts. When there are missing outcome data, the model for the observed data is the marginal model for the observed data implied by (<ref>), where any rows of 𝐘 that contain missing values are discarded. We will use a non-likelihood based approach for making inferences and so assume any data are missing completely at random (Seaman et al., 2013). We define the diagonal np × np missing indicator matrix 𝐑, where 𝐑_ii=1 if 𝐘_i is observed, 𝐑_ii=0 if 𝐘_i is missing, and 𝐑_ij=0 if i ≠ j.

§ MULTIVARIATE ESTIMATION: A NEW METHOD OF MOMENTS

Our estimation procedure is motivated by the univariate method proposed by DerSimonian and Laird (1986). This was developed in the much simpler setting where each study provides a single estimate Y_i, and where the random-effects model Y_i ∼ N(δ, τ^2+S_i) is assumed. This estimation method for τ^2 uses the Q statistic, where Q=∑ S_i^-1(Y_i - δ̂)^2 and δ̂ = ∑ S_i^-1 Y_i/∑ S_i^-1 is the pooled estimate under the common-effect model (τ^2=0). Now consider an alternative representation of this Q statistic. Taking Y = (Y_1, ⋯, Y_n)^T, S = diag(S_1, ⋯, S_n) and W = S^-1 means that Q = tr(𝐖(𝐘-𝐘̂)(𝐘-𝐘̂)^T), where 𝐘̂ is obtained under the common-effect model. To obtain a p × p matrix generalisation of Q for multivariate analyses, we replace the trace operator with the block trace operator in this expression (Jackson et al., 2013). The block trace operator is a generalisation of the trace that sums over all n of the p × p matrices along the main block diagonal of an np × np matrix. This produces a p × p matrix. In the absence of missing data we can write our multivariate generalisation of the Q statistic, btr(𝐖(𝐘-𝐘̂)(𝐘-𝐘̂)^T), as a weighted sum of outer products of p × 1 vectors of residuals under the common-effect and consistent model. Hence the distribution of btr(𝐖(𝐘-𝐘̂)(𝐘-𝐘̂)^T) depends directly on the magnitudes of the unknown variance components.

§.§ A Q matrix for multivariate network meta-analysis

We define a within-study precision matrix 𝐖 corresponding to 𝐒. If there are no missing outcome data in 𝐘 then we define 𝐖 = 𝐒^-1, where 𝐒 is taken from model (<ref>). If there are missing data in 𝐘 then the entries of 𝐖 that correspond to observed data are obtained as the inverse of the corresponding entries of the within-study covariance matrix of reduced dimension (equal to that of the observed data) and the other entries of 𝐖 are set to zero. For example, consider the case where 𝐘 is a 6 × 1 vector but only the second and fifth entries are observed; this corresponds to much less outcome data than would be used in practice but provides an especially simple example. Then we define 𝐒_r, where the subscript r indicates a dimension reduction, as a 2 × 2 matrix whose entries are the within-study variances and covariances of the two observed entries of 𝐘. The 6 × 6 precision matrix 𝐖 then has all zero entries in the first, third, fourth and sixth rows and columns. However the remaining entries of 𝐖 are the entries of the 2 × 2 matrix 𝐒_r^-1, so that 𝐖_22 = (𝐒_r^-1)_11, 𝐖_25 = (𝐒_r^-1)_12, 𝐖_52 = (𝐒_r^-1)_21, and 𝐖_55 = (𝐒_r^-1)_22. We define 𝐘̂ to be the fitted value of 𝐘 under the common-effect and consistent model (Σ_β = Σ_ω = 0), so that 𝐘̂ = 𝐇𝐘 where 𝐇 = 𝐗(𝐗^T 𝐖𝐗)^-1𝐗^T𝐖.
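The small numerical sketch below (with invented values for 𝐒_r, the design matrix and the outcomes; it is purely illustrative) reproduces the 6 × 1 worked example just described: it builds the precision matrix 𝐖 when only the second and fifth entries of 𝐘 are observed, and then forms the fitted values 𝐘̂ = 𝐇𝐘.

```python
import numpy as np

obs = np.array([False, True, False, False, True, False])   # entries 2 and 5 observed

# within-study covariance of the two observed entries (S_r in the text; values invented)
S_r = np.array([[0.25, 0.10],
                [0.10, 0.40]])

W = np.zeros((6, 6))
idx = np.where(obs)[0]
W[np.ix_(idx, idx)] = np.linalg.inv(S_r)   # rows/columns for unobserved entries stay zero
print(np.round(W, 3))

# fitted values under the common-effect and consistent model, Y_hat = H Y,
# with a hypothetical 6 x 2 design matrix and missing outcomes imputed arbitrarily
X = np.tile(np.eye(2), (3, 1))
Y = np.where(obs, np.array([0.0, 1.2, 0.0, 0.0, 0.7, 0.0]), 0.0)
H = X @ np.linalg.pinv(X.T @ W @ X) @ X.T @ W
Y_hat = H @ Y
```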
We also define an asymmetric np × np matrix (Jackson et al., 2013)

𝐐 = 𝐖{𝐑(𝐘-𝐘̂)}{𝐑(𝐘-𝐘̂)}^T = 𝐖(𝐘-𝐘̂)(𝐘-𝐘̂)^T𝐑

Our definitions of 𝐖 and 𝐑 mean that 𝐖𝐑 = 𝐖, which results in the simplified version of 𝐐 in (<ref>). From the first form given in (<ref>), we have that the residuals 𝐘-𝐘̂ are pre-multiplied by 𝐑, so that any residuals that correspond to missing outcome data do not contribute to 𝐐. Furthermore missing outcome data do not contribute to 𝐘̂ because they have no weight under the common-effect and consistent model. Hence we can impute missing outcome data with any finite value without changing the value of 𝐐. This is merely a convenient way to handle missing data numerically and has no implications for the statistical modelling.

§.§ Design specific Q matrices for multivariate network meta-analysis

In order to identify the full model, we will require design-specific versions of 𝐐 that only use data from a particular design. As in the univariate setting, we stack the outcome data from design d to form the vector 𝐘_d = (𝐘^T_d1, ⋯, 𝐘^T_dN_d)^T. In the multivariate setting the vector 𝐘_d contains n_d estimated effects each of length p, so that 𝐘_d is now a pn_d × 1 column vector. We define the design-specific n_d × n_d matrix M_1^d, where m^d_1ij=0 if the ith and jth estimated effects (of length p) in 𝐘_d, i,j=1, ⋯, n_d, are from separate studies; otherwise m^d_1ii = 1 and m^d_1ij = 1/2 for i ≠ j. We define the pn_d × pc_d design matrix 𝐗_d which is obtained by stacking identity matrices of dimension pc_d, where we include one such identity matrix for each study of design d. Hence 𝐗_d = 1_N_d ⊗ 𝐈_pc_d, where 1_N_d is the N_d × 1 column vector where every entry is one. We also define the pc_d × 1 column vector β_d = X_(d) δ + Ω_d. An identifiable design-specific marginal model for outcome data from design d only, that is implied by model (<ref>), is

𝐘_d ∼ N(𝐗_d β_d, M_1^d ⊗ Σ_β + 𝐒_d)

where 𝐒_d = diag(𝐒_d1, ⋯, 𝐒_dN_d). We can also calculate design-specific versions of (<ref>) where we calculate all quantities, including the fitted values, using just the data from studies of design d. We define these pn_d × pn_d design-specific matrices as

𝐐_d = 𝐖_d(𝐘_d-𝐘̂_d)(𝐘_d-𝐘̂_d)^T𝐑_d

where 𝐖_d, 𝐑_d and 𝐘̂_d in (<ref>) are defined in the same way as 𝐖, 𝐑 and 𝐘̂ in (<ref>) but where only data from design d are used. Hence 𝐑_d and 𝐖_d are the missing indicator matrix, and the within-study precision matrix, of 𝐘_d, respectively. We compute 𝐘̂_d = 𝐇_d 𝐘_d where 𝐇_d = 𝐗_d(𝐗_d^T 𝐖_d 𝐗_d)^-1𝐗_d^T𝐖_d. When computing 𝐇_d we take the matrix inverse to be the Moore-Penrose pseudoinverse. This is so that any design-specific regression corresponding to this hat matrix that is not fully identifiable (due to missing outcome data) can still contribute to the estimation. We use model (<ref>) to derive the properties of 𝐐_d in equation (<ref>).

§.§ The estimating equations

We base our estimation on the two p × p matrices btr(𝐐) and ∑_d=1^D btr(𝐐_d), where 𝐐 and 𝐐_d are given in (<ref>) and (<ref>), respectively. Specifically, we match these quantities to their expectations to estimate the unknown variance parameters using the method of moments.

§.§.§ Evaluating E[btr(𝐐)] and deriving the first estimating equation

We define 𝐀 = (𝐈_np-𝐇)^T 𝐖 and 𝐁 = (𝐈_np - 𝐇)^T𝐑, which are known np × np matrices. We also divide the matrices 𝐀 and 𝐁 into n^2 blocks of p × p matrices, and write 𝐀_i,j and 𝐁_i,j, i,j=1, ⋯, n, to mean the ith by jth blocks of 𝐀 and 𝐁 respectively.
Hence 𝐀_i,j and 𝐁_i,j are both p × p matrices. In the supplementary materials we show that

E[btr(𝐐)] = ∑_i=1^n∑_j=1^n∑_k=1^n m_1ij 𝐀_k,i Σ_β 𝐁_j,k + ∑_i=1^n∑_j=1^n∑_k=1^n m_2ij 𝐀_k,i Σ_ω 𝐁_j,k + btr(𝐁).

We apply the vec(·) operator to both sides of the previous equation and use the identity vec(𝐀𝐗𝐁) = (𝐁^T ⊗ 𝐀)vec(𝐗) (see Henderson and Searle, 1981), to obtain

vec(E[btr(𝐐)]) = 𝐂 vec(Σ_β) + 𝐃 vec(Σ_ω) + 𝐄

where

𝐂 = ∑_i=1^n∑_j=1^n∑_k=1^n m_1ij 𝐁_j,k^T ⊗ 𝐀_k,i, 𝐃 = ∑_i=1^n∑_j=1^n∑_k=1^n m_2ij 𝐁_j,k^T ⊗ 𝐀_k,i and 𝐄 = vec(btr(𝐁)).

Upon substituting E[btr(𝐐)] = btr(𝐐), Σ_β = Σ̂_β and Σ_ω = Σ̂_ω in equation (<ref>), the method of moments gives one estimating equation in the vectorised form of the two unknown covariance matrices.

§.§.§ Evaluating E[btr(𝐐_d)] and deriving the second estimating equation

Model (<ref>) depends upon one unknown covariance matrix, Σ_β. The intuition is that, upon using all D of the 𝐐_d matrices in (<ref>) and the method of moments to estimate Σ_β, we will then be able to estimate the other unknown covariance matrix Σ_ω using the first estimating equation. We define design-specific 𝐀_d = (𝐈_pn_d-𝐇_d)^T 𝐖_d and 𝐁_d = (𝐈_pn_d - 𝐇_d)^T𝐑_d, where 𝐀_d and 𝐁_d are known pn_d × pn_d matrices. We also divide the matrices 𝐀_d and 𝐁_d into n_d^2 blocks of p × p matrices, and write 𝐀_d,i,j and 𝐁_d,i,j, i,j=1, ⋯, n_d, to mean the ith by jth blocks of 𝐀_d and 𝐁_d respectively. In the supplementary materials we show that

vec(E[∑_d=1^D btr(𝐐_d)]) = (∑_d=1^D 𝐂_d) vec(Σ_β) + ∑_d=1^D 𝐄_d

where

𝐂_d = ∑_i=1^n_d∑_j=1^n_d∑_k=1^n_d m^d_1ij 𝐁_d,j,k^T ⊗ 𝐀_d,k,i and 𝐄_d = vec(btr(𝐁_d)).

Upon substituting E[∑_d=1^D btr(𝐐_d)] = ∑_d=1^D btr(𝐐_d) and Σ_β = Σ̂_β in (<ref>), we obtain a second estimating equation from the method of moments.

§.§ Solving the estimating equations and performing inference

We solve the estimating equation resulting from (<ref>) for vec(Σ̂_β) and substitute this estimate into the estimating equation resulting from (<ref>) and solve for vec(Σ̂_ω).

§.§.§ Estimating Σ_β under the consistent model

Some applied analysts may prefer to assume the consistent model (Σ_ω=0). As in the univariate case (Jackson et al., 2016), we have two possible ways of estimating Σ_β under the consistent model: we can use the estimating equation resulting from (<ref>) with Σ_ω = 0 or the estimating equation resulting from (<ref>) as in the full model. Also as in the univariate case, we suggest the former option because it uses the information gained by assuming consistency when estimating Σ_β. However this first option is valid only under the consistent model.

§.§.§ 'Truncating' the estimates of the unknown covariance matrices so that they are symmetric and positive semi-definite

As in the univariate case, there is the problem that the point estimates of the two unknown covariance matrices are not necessarily positive semi-definite. The method of moments does not even initially enforce the constraint that the point estimates of the unknown covariance matrices are symmetrical (Chen et al., 2012; Jackson et al., 2013). We produce symmetric estimators corresponding to an estimated covariance matrix Σ̂ as (Σ̂^T + Σ̂)/2 (Chen et al., 2012; Jackson et al., 2013). This also corresponds to taking the average of estimates that result from our Q and Q_d matrices and their transposes (Jackson et al., 2013). We then write these symmetric estimators in terms of their spectral decomposition (Chen et al., 2012; Jackson et al., 2013) and truncate any negative eigenvalues to zero to provide the final symmetric positive semi-definite estimated covariance matrices.
Specifically, we define the truncated estimate corresponding to the symmetrical Σ̂ as Σ̂^+ = ∑_i=1^p max(0, λ_i) 𝐞_i 𝐞_i^T, where λ_i is the ith eigenvalue of the symmetric Σ̂ and 𝐞_i is the corresponding normalised eigenvector.

§.§.§ Inference for δ

Inference for δ then proceeds as a weighted regression where all weights are treated as fixed and known. Writing V̂ as the estimated variance of Y in (<ref>), in the absence of missing outcome data we have δ̂ = (X^T V̂^-1 X)^-1 X^T V̂^-1 Y where Var(δ̂) = (X^T V̂^-1 X)^-1. In the presence of missing data we can, under our missing completely at random assumption, apply these standard formulae for weighted regression to the observed outcomes. Alternatively and equivalently, we can impute the missing outcome data in Y with an arbitrary value and replace V̂^-1 with the precision matrix corresponding to V̂, calculated in the way explained for S in section 4.1 (Jackson et al., 2011). Approximate confidence intervals and hypothesis tests for all basic parameters for all outcomes then immediately follow by taking δ̂ to be approximately normally distributed. Inferences for functional parameters follow by taking appropriate linear combinations of δ̂.

§.§ Special cases of the estimation procedure

In the supplementary materials we show that the proposed method reduces to two previous methods in special cases. If all studies are two-arm studies and consistency is assumed then the proposed method reduces to the matrix-based method for multivariate meta-regression (Jackson et al., 2013). The proposed multivariate method reduces to the univariate DerSimonian and Laird method for network meta-analysis (Jackson et al., 2016) when p=1.

§.§ Model identification

If the necessary standard matrix inversions resulting from the estimating equations from (<ref>) and (<ref>) cannot be performed then both unknown variance components cannot be identified using the proposed method. A minimum requirement for any multivariate modelling is that the common-effect and consistent model must be identifiable. This means that there must be some information (direct or indirect) about each basic parameter for all outcomes. Two or more studies of the same design must provide data for all possible pairs of outcomes to identify Σ_β. Two or more studies of different designs must provide data for all possible pairs of outcomes to identify Σ_ω. If these conditions are satisfied then the model will be identifiable. In situations where our model is not identifiable we suggest that simpler models should be considered instead. Possible strategies for this include considering models of lower dimension or the consistent model. In practice it is highly desirable to have more than the minimum amount of replication required, both within and between designs, so that the model is well identified. We make some pragmatic decisions in the next section for our example to provide sufficient replication within designs, in order to estimate Σ_β with reasonable precision.

§ EXAMPLE

The methodology developed in this paper is now applied to an illustrative example in relapsing remitting multiple sclerosis (RRMS). Multiple sclerosis (MS) is an inflammatory disease of the brain and spinal cord and RRMS is a common type of MS. The effectiveness of a new treatment is typically measured to assess its impact on relapse rate and odds of disease progression. Magnetic Resonance Imaging (MRI) allows measurement of the number of new or enlarging lesions in the brain.
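Before turning to the data, the following sketch (purely illustrative, with invented inputs; it is not the authors' software) shows the final estimation steps described above: symmetrising a moment estimate, truncating it to a positive semi-definite matrix via its spectral decomposition, and the weighted-regression step giving δ̂ and Var(δ̂), from which standard errors for functional parameters follow via contrast vectors.

```python
import numpy as np

def truncate_psd(Sigma_hat):
    """Symmetrise a moment estimate and set any negative eigenvalues to zero."""
    sym = 0.5 * (Sigma_hat + Sigma_hat.T)
    lam, E = np.linalg.eigh(sym)
    return (E * np.maximum(lam, 0.0)) @ E.T

Sigma_raw = np.array([[0.05, -0.02],          # invented, asymmetric and not PSD
                      [0.04, -0.01]])
Sigma_plus = truncate_psd(Sigma_raw)

def gls(X, Vinv, Y):
    """delta_hat = (X'V^-1X)^-1 X'V^-1 Y and Var(delta_hat) = (X'V^-1X)^-1."""
    cov = np.linalg.inv(X.T @ Vinv @ X)
    return cov @ X.T @ Vinv @ Y, cov

# toy weighted regression with invented numbers
X = np.array([[1.0], [1.0], [1.0]])
Vinv = np.diag([4.0, 2.0, 1.0])
Y = np.array([0.3, 0.5, 0.1])
delta_hat, cov = gls(X, Vinv, Y)

# a functional parameter's standard error follows from a contrast vector a:
a = np.array([1.0])
se = np.sqrt(a @ cov @ a)
```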
Three outcomes are included in our analyses, so that p=3 in the full three dimensional network meta-analysis. These three outcomes are: (1) the log rate ratio of new or enlarging MRI lesions; (2) the log annualised relapse rate ratio; and (3) the log disability progression odds ratio. Relapse is defined as the appearance of new, worsening or recurrence of neurological symptoms that can be attributable to MS, accompanied by an increase of a score on the Expanded Disability Status Scale (EDSS) and also functional-systems score(s), lasting at least 24 hours, preceded by neurologic stability for at least 30 days. Disability progression is defined as an increase in EDSS score that was sustained for 12 weeks, with an absence of relapse at the time of assessment. Negative basic parameters indicate that treatments B-F are beneficial compared to treatment A throughout.

Data in this illustrative example were obtained from ten randomised controlled trials of six treatment options (coded in the network data as treatments A to F): placebo (A), interferon beta-1b (B), interferon beta-1a (C), glatiramer (D), and two doses of fingolimod, 0.5 mg (E) and 1.25 mg (F). Three of the fingolimod trials were three-arm (two doses and a control) and are included as three-arm studies. Three trials of interferon beta (one 1a and two 1b) were three-arm (also two doses and a control), and these were included as separate two-arm trials (each dose against the control, with the number of participants in each control arm halved). This ignores the differences in doses of interferon beta and was a pragmatic decision to help provide an identifiable network. Briefly, in this example there is very little replication within designs, so that identifying Σ_β well is very difficult without making pragmatic decisions such as this. Sormani et al. (2010) also treat these particular studies as two separate studies in this way, which helps them to identify their meta-regression models. Treating these three studies as separate two-arm trials means that the data are analysed as being from thirteen studies and a summary of the resulting data structure is shown in Table <ref>. There are eight different designs in Table <ref> and so there is relatively little replication within designs, even when including three of the three-arm studies as separate two-arm studies. See Bujkiewicz et al. (2016) for further details of these data. Figure 1 provides network diagrams that show the number of comparisons between each pair of treatments on the edges. In these diagrams the three-arm studies (Table <ref>) are taken to contribute three comparisons, for example the CEF study contributes CE, CF and EF comparisons.
Two estimates of treatment effect from this study contribute to analyses however because C is taken as the baseline; the study's estimated EF treatment effect contains no additional information once its CE and CF contrasts are included in the analysis.

Treatment effect estimates of each treatment relative to the reference treatment A (placebo); entries are estimate (se) for each contrast.

| Outcome | Model | AB | AC | AD | AE | AF |
|---|---|---|---|---|---|---|
| MRI (y_1) | univariate (y_1) | -0.95 (0.39) | -1.00 (0.21) | -0.68 (0.50) | -1.38 (0.26) | -1.52 (0.26) |
| MRI (y_1) | bivariate (y_1, y_2) | -0.94 (0.39) | -1.00 (0.21) | -0.68 (0.50) | -1.39 (0.26) | -1.53 (0.26) |
| MRI (y_1) | bivariate (y_1, y_3) | -0.96 (0.39) | -0.98 (0.22) | -0.66 (0.50) | -1.38 (0.26) | -1.51 (0.26) |
| MRI (y_1) | trivariate (y_1, y_2, y_3) | -0.96 (0.39) | -0.97 (0.22) | -0.67 (0.50) | -1.38 (0.26) | -1.51 (0.26) |
| Relapse rate (y_2) | univariate (y_2) | -0.35 (0.10) | -0.25 (0.09) | -0.34 (0.11) | -0.81 (0.12) | -0.78 (0.12) |
| Relapse rate (y_2) | bivariate (y_1, y_2) | -0.35 (0.10) | -0.25 (0.09) | -0.34 (0.11) | -0.81 (0.12) | -0.78 (0.12) |
| Relapse rate (y_2) | bivariate (y_2, y_3) | -0.36 (0.11) | -0.23 (0.10) | -0.33 (0.12) | -0.80 (0.13) | -0.77 (0.13) |
| Relapse rate (y_2) | trivariate (y_1, y_2, y_3) | -0.36 (0.11) | -0.23 (0.10) | -0.33 (0.12) | -0.80 (0.13) | -0.77 (0.13) |
| Disability progression (y_3) | univariate (y_3) | -0.46 (0.25) | -0.11 (0.21) | -0.42 (0.25) | -0.33 (0.25) | -0.37 (0.24) |
| Disability progression (y_3) | bivariate (y_2, y_3) | -0.47 (0.25) | -0.10 (0.21) | -0.43 (0.25) | -0.37 (0.25) | -0.37 (0.25) |
| Disability progression (y_3) | bivariate (y_1, y_3) | -0.46 (0.25) | -0.11 (0.21) | -0.42 (0.25) | -0.34 (0.25) | -0.38 (0.25) |
| Disability progression (y_3) | trivariate (y_1, y_2, y_3) | -0.47 (0.25) | -0.10 (0.21) | -0.43 (0.25) | -0.37 (0.25) | -0.37 (0.25) |

Table <ref> shows the estimates of the basic parameters (treatment effects relative to the reference treatment, placebo) obtained from univariate network meta-analyses, bivariate analyses for all three combinations of pairs of outcomes and the trivariate analysis. The results are similar across all analyses, and conclusions from univariate and multivariate analyses are the same. This is disappointing because multivariate analyses have not resulted in more precise inference. The entries of Σ̂_β and Σ̂_ω are shown in Table <ref>. The positive estimates obtained for the unknown variance components suggest that this example exhibits some between-study heterogeneity and inconsistency. In order to assess the impact of the unknown variance components, we also fitted the consistent model and the common-effect and consistent model (results not shown) using all three outcomes (p=3). On average, the standard errors of the fifteen basic parameters from the full model are 35% greater (range: 13% to 84%) than those from the consistent model, which in turn are 58% (range: 8% to 128%) greater than those from the common-effect and consistent model. Both the between-study heterogeneity and inconsistency have notable impact.

The multivariate analysis adds to the univariate analyses in two main ways. Firstly, the finding that the multivariate analysis is in good agreement with the univariate analyses is a particularly important finding for treatment effects on MRI where a substantial proportion of data were missing. It has been demonstrated by Kirkham et al. (2012) that a multivariate approach to meta-analysis can help obtain more accurate estimates in the presence of outcome reporting bias. Hence the multivariate analysis reduces concerns that this univariate analysis is affected by reporting bias. Secondly joint inferences for all three outcomes are possible under the multivariate model.
For example, and as we might anticipate, in our example the estimated log annualised relapse rate ratios and log disability progression odds ratios are highly positively correlated; from Var(δ̂) in our three dimensional multivariate meta-analysis, the correlations between the five pairs of estimated basic parameters for these two outcomes are all between 0.63 and 0.75. Medical decision making based jointly on these two outcomes should take this high positive correlation into account, and this is only possible by using a multivariate approach. For example, a formal decision analysis involving these two outcomes should be based on their joint distribution rather than their two marginal distributions.

§ DISCUSSION

We have proposed a new model for dealing with both multiple treatment contrasts and multiple outcomes, to provide a framework for conducting multivariate network meta-analysis. By using a matrix-based method of moments estimator, our methodology naturally builds on previous work (such as the well-known DerSimonian and Laird approach) and is computationally very fast, relative to other potential estimation approaches such as REML or MCMC; this is especially the case in very high dimensions and so our methodology is particularly advantageous for ambitious analyses of this type. The main disadvantage is that, as a necessary consequence of its semi-parametric nature, the method of moments is not based on sufficient statistics and so is not fully efficient. The loss in efficiency relative to maximum likelihood estimation awaits investigation but we anticipate that this will be less serious for inferences about the average effects than the unknown variance components. Furthermore the within-study normal approximations used in our model are not necessarily very accurate even in moderately sized studies. Since our analysis uses a general design matrix, the modelling may easily be extended by adding study level covariates to describe and fit multivariate network meta-regressions. In the network meta-analysis setting these regressions have the potential to explain the reasons for inconsistency and model multiple dose level responses. Our method of moments estimation can be combined with approaches that 'inflate' confidence intervals from a frequentist random effects meta-analysis (Hartung and Knapp, 2001; Jackson and Riley, 2014). In conclusion, we have developed a new model and estimation method for multivariate network meta-analysis, which can describe multiple treatments and multiple correlated outcomes.

§ ACKNOWLEDGEMENTS

DJ, IRW and ML are (or were) employed by the UK Medical Research Council [Unit Programme number U105260558]. SB was supported by the Medical Research Council (MRC) Methodology Research Programme [New Investigator Research Grant MR/L009854/1].

§ REFERENCES

Achana, F. A., Cooper, N. J., Bujkiewicz, S., Hubbard, S. J., Kendrick, D., Jones, D. R. and Sutton, A. J. (2014). Network meta-analysis of multiple outcome measures accounting for borrowing of information across outcomes. BMC Medical Research Methodology 14, 92.
Bujkiewicz, S., Thompson, J. R., Riley, R. D. and Abrams, K. R. (2016). Bayesian meta-analytical methods to incorporate multiple surrogate endpoints in drug development process. Statistics in Medicine 35, 1063–1089.
Chen, H., Manning, A. K. and Dupuis, J. (2012). A method of moments estimator for random effect multivariate meta-analysis. Biometrics 68, 1278–1284.
DerSimonian, R. and Laird, N. (1986). Meta-analysis in clinical trials.
Controlled Clinical Trials 7, 177–188.
Efthimiou, O., Mavridis, D., Cipriani, A., Leucht, S., Bagos, P. and Salanti, G. (2014). An approach for modelling multiple correlated outcomes in a network of interventions using odds ratios. Statistics in Medicine 33, 2275–2287.
Efthimiou, O., Mavridis, D., Riley, R. D., Cipriani, A. and Salanti, G. (2015). Joint synthesis of multiple correlated outcomes in networks of interventions. Biostatistics 16, 84–97.
Hartung, J. and Knapp, G. (2001). On tests of the overall treatment effect in meta-analysis with normally distributed responses. Statistics in Medicine 20, 1771–1782.
Henderson, H. V. and Searle, S. R. (1981). The vec-permutation matrix, the vec operator and Kronecker products: a review. Linear and Multilinear Algebra 9, 271–288.
Higgins, J.P.T., Jackson, D., Barrett, J.K., Lu, G., Ades, A.E. and White, I.R. (2012). Consistency and inconsistency in network meta-analysis: concepts and models for multi-arm studies. Research Synthesis Methods 3, 98–110.
Hong, H., Fu, H., Price, K. L. and Carlin, B. P. (2015). Incorporation of individual-patient data in network meta-analysis for multiple continuous endpoints, with application to diabetes treatment. Statistics in Medicine 34, 2794–2819.
Hong, H., Chu, H., Zhang, J. and Carlin, B. P. (2016). A Bayesian missing data framework for generalized multiple outcome mixed treatment comparisons. Research Synthesis Methods 7, 6–22.
Jackson, D., Riley, R. and White, I. R. (2011). Multivariate meta-analysis: potential and promise (with discussion). Statistics in Medicine 30, 2481–2510.
Jackson, D., White, I.R. and Riley, R.D. (2013). A matrix-based method of moments for fitting the multivariate random effects model for meta-analysis and meta-regression. Biometrical Journal 55, 231–245.
Jackson, D. and Riley, R. (2014). A refined method for multivariate meta-analysis and meta-regression. Statistics in Medicine 33, 541–554.
Jackson, D., Law, M., Barrett, J.K., Turner, R., Higgins, J.P.T., Salanti, G. and White, I.R. (2016). Extending DerSimonian and Laird's methodology to perform network meta-analyses with random inconsistency effects. Statistics in Medicine 35, 819–839.
Kirkham, J. J., Riley, R. D. and Williamson, P. R. (2012). A multivariate meta-analysis approach for reducing the impact of outcome reporting bias in systematic reviews. Statistics in Medicine 31, 2179–2195.
Kulinskaya, E., Dollinger, M.B. and Bjørkestøl, K. (2011). Testing for Homogeneity in Meta-Analysis I. The One-Parameter Case: Standardized Mean Difference. Biometrics 67, 203–212.
Law, M., Jackson, D., Turner, R., Rhodes, K. and Viechtbauer, W. (2016). Two new methods to fit models for network meta-analysis with random inconsistency effects. BMC Medical Research Methodology 16, 87.
Lu, G. and Ades, A. (2004). Combination of direct and indirect evidence in mixed treatment comparisons. Statistics in Medicine 23, 3105–3124.
Lu, G. and Ades, A. (2006). Assessing evidence consistency in mixed treatment comparisons. Journal of the American Statistical Association 101, 447–459.
Nikolakopoulou, A., Chaimani, A., Veroniki, A., Vasiliadis, H.S., Schmid, C.H. and Salanti, G. (2014). Characteristics of Networks of Interventions: A Description of a Database of 186 Published Networks. PLoS One 9, 1: e86754.
Piepho, H.P., Williams, E.R. and Madden, L.V. (2012). The Use of Two-Way Linear Mixed Models in Multitreatment Meta-Analysis. Biometrics 68, 1269–1277.
Riley, R.D., Thompson, J. R. and Abrams, K. R. (2008).
An alternative model for bivariate random-effects meta-analysis when the within-study correlations are unknown. Biostatistics 9, 172–186.
Riley, R.D., Price, M.J., Jackson, D., Wardle, M., Gueyffier, F., Wang, J., Staessen, J.A. and White, I.R. (2015). Multivariate meta-analysis using individual participant data. Research Synthesis Methods 6, 157–174.
Sormani, M.P., Bonzano, L., Roccatagliata, L., Mancardi, G.L., Uccelli, A. and Bruzzi, P. (2010). Surrogate endpoints for EDSS worsening in multiple sclerosis. A meta-analytic approach. Neurology 75, 302–309.
Seaman, S., Galati, J., Jackson, D. and Carlin, J. (2013). What Is Meant by “Missing at Random?”. Statistical Science 28, 257–268.
Searle, S.R. (1971). Linear Models. Wiley, New York.
Veroniki, A., Vasiliadis, H.S., Higgins, J.P. and Salanti, G. (2013). Evaluation of inconsistency in networks of interventions. International Journal of Clinical Epidemiology 42, 332–345.
Wei, Y. and Higgins, J.P.T. (2013). Estimating within-study covariances in multivariate meta-analysis with multiple outcomes. Statistics in Medicine 32, 1191–1205.

§ SUPPLEMENTARY MATERIALS

§ MULTIVARIATE ESTIMATION

§.§ An important result

In order to evaluate the expectations required, we will need to be able to compute expressions of the form btr(A(M ⊗ Σ)B), where A and B are np × np matrices, M is an n × n matrix and Σ is a p × p matrix. We continue to use the notation A_i,j to denote the ith by jth block of A, where these blocks are p × p matrices. For any three np × np matrices A, B and C, we have

(ACB)_k,l = ∑_i=1^n∑_j=1^n A_k,i C_i,j B_j,l

This is just the law of matrix multiplication applied to blocks. Then taking C = M ⊗ Σ so that, from the definition of the Kronecker product, C_i,j = m_ij Σ, we have

(A(M ⊗ Σ)B)_k,l = ∑_i=1^n∑_j=1^n m_ij A_k,i Σ B_j,l

To obtain the block trace, we sum the matrices along the main block diagonal: we take l=k to obtain the matrices along the main diagonal and sum over k to obtain

btr(A(M ⊗ Σ)B) = ∑_i=1^n∑_j=1^n∑_k=1^n m_ij A_k,i Σ B_j,k

The use of equation (<ref>), with the appropriate matrices, almost immediately results in the expected values required in section 4.

§.§ The estimating equations

In this section we prove the results given in section 4 of the main paper. We do not redefine all quantities or give the size of all matrices and vectors; see the main paper for these details. As in the univariate approach of Jackson et al. (2016), we will base our estimation on the two quantities btr(𝐐) and ∑_d=1^D btr(𝐐_d) where D is the number of different designs. We match these quantities to their expectations to estimate the unknown variance parameters. We therefore need to evaluate E[btr(𝐐)] and E[btr(𝐐_d)].

§.§.§ Evaluating E[btr(𝐐)] and deriving the first estimating equation

As in Jackson et al. (2013), by direct calculation we have that 𝐖𝐇𝐖^-1 = 𝐇^T and ((𝐈_np - 𝐇)^T)^2 = (𝐈_np - 𝐇)^T; if 𝐖 is not invertible because outcome data are missing then we can justify the use of the identity 𝐖𝐇𝐖^-1 = 𝐇^T and the expectation that follows in the limit, where the precision p attributed to missing data tends towards zero from above, p → 0^+ (Jackson et al., 2013). Furthermore we can use the identity 𝐖 = 𝐒^-1 in this limit. We also have that 𝐘 - 𝐘̂ = (𝐈_np - 𝐇)𝐘 and E[𝐘 - 𝐘̂] = 0.
Hence from the definition of 𝐐 we have E[𝐐] = 𝐖 Var[𝐘-𝐘̂] 𝐑. From these results, taking the variance of 𝐘 from model (3) of the main paper, we can evaluate

E[𝐐] = 𝐀(M_1 ⊗ Σ_β + M_2 ⊗ Σ_ω)𝐁 + 𝐁

where 𝐀 = (𝐈_np-𝐇)^T 𝐖 and 𝐁 = (𝐈_np - 𝐇)^T𝐑. Here 𝐀 and 𝐁 are known np × np matrices. For estimation purposes we require E[btr(𝐐)] = btr(E[𝐐]). We write 𝐀_i,j and 𝐁_i,j to mean the ith by jth blocks of 𝐀 and 𝐁 respectively, so that 𝐀_i,j and 𝐁_i,j are both p × p matrices. Then, using (<ref>), we have

E[btr(𝐐)] = ∑_i=1^n∑_j=1^n∑_k=1^n m_1ij 𝐀_k,i Σ_β 𝐁_j,k + ∑_i=1^n∑_j=1^n∑_k=1^n m_2ij 𝐀_k,i Σ_ω 𝐁_j,k + btr(𝐁)

§.§.§ Evaluating E[btr(𝐐_d)] and deriving the second estimating equation

Then we follow very similar, but much simpler, arguments as in the previous section to derive the result that we require. We define design-specific hat matrices 𝐇_d = 𝐗_d(𝐗_d^T 𝐖_d 𝐗_d)^-1𝐗_d^T𝐖_d and also design-specific pn_d × pn_d 𝐀 and 𝐁 matrices

𝐀_d = (𝐈_pn_d-𝐇_d)^T 𝐖_d and 𝐁_d = (𝐈_pn_d - 𝐇_d)^T𝐑_d

In equation (<ref>) we take the matrix inverse to be the Moore-Penrose pseudoinverse. This is because, in the presence of missing outcome data, the design-specific regression corresponding to this hat matrix may not be identifiable (for example, if studies of a particular design do not provide data for one or more of the outcomes). In such instances this design may still provide information about some of the unknown between-study variance components and so it is not desirable to exclude the design from this part of the estimation procedure. By computing (<ref>) using this pseudoinverse we obtain a suitable hat matrix (Searle, 1971; page 221, his equations 126 and 127). Furthermore all the necessary properties of the hat matrix are retained when using the pseudoinverse when computing (<ref>) and we retain unbiased fitted values (Searle, 1971; page 181). Following a simpler version of the arguments in the previous section and the main paper, taking the variance of 𝐘_d from model (5) of the main paper, and upon applying the vec operator, we obtain

vec(E[btr(𝐐_d)]) = 𝐂_d vec(Σ_β) + 𝐄_d

where 𝐂_d = ∑_i=1^n_d∑_j=1^n_d∑_k=1^n_d m^d_1ij 𝐁_d,j,k^T ⊗ 𝐀_d,k,i and 𝐄_d = vec(btr(𝐁_d)). We then sum equation (<ref>) across all designs in order to obtain

vec(E[∑_d=1^D btr(𝐐_d)]) = (∑_d=1^D 𝐂_d) vec(Σ_β) + ∑_d=1^D 𝐄_d

§.§ Special cases of the estimation procedure (an extended version of section 4.5)

The proposed method reduces to two previous methods in special cases. If all studies are two-arm studies (and so provide a single contrast) and consistency is assumed then the proposed method reduces to the matrix-based method for multivariate meta-regression (Jackson et al., 2013). This is because we then have Σ_ω = 0, so that the second triple sum in our expression for E[btr(𝐐)] is zero; furthermore the first triple summation in this expression can be reduced to a double summation, because M_1 is an identity matrix for multivariate meta-regression (Jackson et al., 2013; their equation A.1.). Furthermore the proposed multivariate method also reduces to the univariate DerSimonian and Laird method for network meta-analysis (Jackson et al., 2016) when p=1. This is because, in one dimension, the 𝐐 matrices all reduce to the Q random scalars used in the estimation procedure suggested by Jackson et al. (2016). This can be shown by replacing the block trace operator with the more familiar trace of a matrix (btr is the trace when p=1) in the definition of the 𝐐 matrices and using the identity tr(AB) = tr(BA). These two special cases are in turn generalisations of methods such as that proposed by DerSimonian and Laird (1986).
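As a quick numerical sanity check of the block-trace identity derived in the "important result" subsection, and of the p=1 reduction used above, the following sketch (illustrative only; all matrices are randomly generated assumptions) compares the direct evaluation of btr(A(M ⊗ Σ)B) with the triple-sum expression, and confirms that the block trace equals the ordinary trace when p=1.

```python
import numpy as np

def btr(A, p):
    """Block trace: sum of the p x p blocks on the main block diagonal of A."""
    n = A.shape[0] // p
    return sum(A[k*p:(k+1)*p, k*p:(k+1)*p] for k in range(n))

rng = np.random.default_rng(0)
n, p = 4, 3
A = rng.normal(size=(n*p, n*p))
B = rng.normal(size=(n*p, n*p))
M = rng.normal(size=(n, n))
Sig = rng.normal(size=(p, p))

block = lambda X, i, j: X[i*p:(i+1)*p, j*p:(j+1)*p]

lhs = btr(A @ np.kron(M, Sig) @ B, p)                      # direct evaluation
rhs = sum(M[i, j] * block(A, k, i) @ Sig @ block(B, j, k)  # triple-sum expression
          for i in range(n) for j in range(n) for k in range(n))
print(np.allclose(lhs, rhs))                 # True

print(np.allclose(btr(A, 1), np.trace(A)))   # block trace is the trace when p = 1
```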
There is however one caveat when stating that the new multivariate method reduces to the univariate method proposed by Jackson et al. (2016) when p=1. This is because the account of Jackson et al. (2016) does not mention the possibility of missing outcome data and so we have implicitly taken all data to be observed in the argument used in the previous paragraph.

§ EXAMPLE OF MATRICES 𝐌_1 AND 𝐌_2

We provide a concrete example of the matrices 𝐌_1 and 𝐌_2, in order to clarify how they are computed. We take such an example from Law et al. (2016), which comprises thirteen studies with the following study designs: AB, BC, BC, BC, BC, BC, BD, BD, CD, CD, ABD, BCD, BCD. This is the same type of network as used in the simulation study below. The two matrices for this example are given explicitly below, where we can see that these matrices contain blocks that are comprised of blocks of 𝐏_c_d, where in 𝐌_1 the blocks are formed by studies and in 𝐌_2 the blocks are formed by designs.

𝐌_1 =
[ 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ]
[ 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ]
[ 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 ]
[ 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 ]
[ 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 0 0 0 1 1/2 0 0 0 0 ]
[ 0 0 0 0 0 0 0 0 0 0 1/2 1 0 0 0 0 ]
[ 0 0 0 0 0 0 0 0 0 0 0 0 1 1/2 0 0 ]
[ 0 0 0 0 0 0 0 0 0 0 0 0 1/2 1 0 0 ]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1/2 ]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1/2 1 ]

𝐌_2 =
[ 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ]
[ 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 ]
[ 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 ]
[ 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 ]
[ 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 ]
[ 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 0 0 0 1 1/2 0 0 0 0 ]
[ 0 0 0 0 0 0 0 0 0 0 1/2 1 0 0 0 0 ]
[ 0 0 0 0 0 0 0 0 0 0 0 0 1 1/2 1 1/2 ]
[ 0 0 0 0 0 0 0 0 0 0 0 0 1/2 1 1/2 1 ]
[ 0 0 0 0 0 0 0 0 0 0 0 0 1 1/2 1 1/2 ]
[ 0 0 0 0 0 0 0 0 0 0 0 0 1/2 1 1/2 1 ]

| http://arxiv.org/abs/1705.09112v1 | {
"authors": [
"Dan Jackson",
"Sylwia Bujkiewicz",
"Martin Law",
"Richard D Riley",
"Ian White"
],
"categories": [
"stat.ME"
],
"primary_category": "stat.ME",
"published": "20170525095301",
"title": "A matrix-based method of moments for fitting multivariate network meta-analysis models with multiple outcomes and random inconsistency effects"
} |
The four-dimensional on-shell three-point Amplitude in spinor-helicity formalism and BCFW recursion relations

Andrea Marzolla

Physique Théorique et Mathématique and International Solvay Institutes, Université Libre de Bruxelles, C.P. 231, 1050 Brussels, Belgium.

Abstract: Lecture notes on Poincaré-invariant scattering amplitudes and tree-level recursion relations in spinor-helicity formalism. We illustrate the non-perturbative constraints imposed on on-shell amplitudes by the Lorentz Little Group, and review how they completely fix the three-point amplitude involving either massless or massive particles. Then we present an introduction to tree-level BCFW recursion relations, and some applications to massless scattering, where the derived three-point amplitudes are employed.

§.§ Note to the reader The present notes collect and integrate the subjects taught over five hours at the XII Modave Summer School in Mathematical Physics (http://www.ulb.ac.be/sciences/ptm/pmif/Rencontres/ModaveXII/lectures.html). The audience was composed almost entirely of young PhD students, so the lectures were intended for researchers who are new to this specific field. Accordingly, these notes require as preliminary notions no more than the standard material of any master studies in theoretical physics, namely some basics of group theory, quantum field theory and complex analysis. Moreover, the shortness of the course imposed a restricted selection of topics. The study of theoretical and mathematical aspects of on-shell scattering amplitudes is an extremely broad field of research, which has experienced great vitality and development in recent years. This work does not aim to any extent at being an exhaustive review of the subject, of which we already have a number of eminent examples (see for instance <cit.>). This text is rather meant to be an initiating tool for students and researchers who are interested in working on these topics. Therefore, at the risk of being redundant, we provide enough details for the computations to be reproduced by the reader. On the other hand, we try on every occasion to refer to other material which extends and develops the subjects that we neglect in our discussion, in order to guide the interested reader.

§ INTRODUCTION The most prolific source of experimental evidence for high energy physics is given by large particle accelerators, where the measured observables are the probability amplitudes of scattering processes. Such measured quantities are matched with theoretical predictions which are computed mainly by techniques based on Feynman diagrammatics. These techniques are derived from the Lagrangian formulation of quantum field theory, and rely on perturbative expansion and renormalizability. They thus apply only to weak and renormalizable interactions. However, since the very beginning of quantum field theory, an alternative non-perturbative analytic approach to scattering amplitudes was suggested <cit.>. Such an approach, initially developed by Chew <cit.>, and by many others afterwards, was prominent in the 50's and 60's, going under the name of the “S-matrix program”. The idea was that the S-matrix could be entirely reconstructed starting from a few first principles: analyticity, unitarity and crossing properties of the S-matrix, together with its symmetries.
The program, after some initial success, did not manage to achieve its goal, and was eventually supplanted by the field-theoretical approach, namely after the affirmation of quantum chromodynamics as the theory of the strong interaction. It then went through a period of rediscovery during the 70's and 80's, in the framework of two-dimensional perturbative string theory, but it is only in very recent times that the S-matrix perspective has experienced a new revival, more as a complement, rather than a substitute, of perturbative quantum field theory. Ironically, this new wave of success mainly concerns Yang-Mills theories, which were responsible for burying the S-matrix program half a century ago. The new elements that made the recent advancements achievable are the focus on massless particles, with the extensive use of the spinor-helicity formalism as a simplifying operational framework, and the addition of more symmetry (supersymmetry, conformal symmetry).

The symmetries of spacetime have a crucial role in scattering theory, since they underlie the very notion of particle-state: what we call an elementary particle in high-energy physics corresponds to a unitary representation of the spacetime symmetry group. But spacetime symmetries do not only classify the external states participating in the scattering, they also yield constraints on the form of the scattering amplitude. Such symmetry-based constraints do not rely on the existence of a local Lagrangian, nor on the validity of a perturbative expansion, so they must apply to any kind of particles (any mass, any spin) and interactions (strong, non-renormalizable, ...).

In these notes we will remain in the `maximally realistic' situation of scattering events in four-dimensional flat spacetime, and we will show how mere Poincaré invariance is enough to fix the kinematic dependence of the three-point amplitude, where the external states are either massless or massive. The most general non-perturbative Poincaré-invariant three-point amplitude is already quite an interesting result on its own, but it is even more significant since it can be used as a building block to construct higher-point amplitudes, through on-shell recursion relations. We present here an introduction to tree-level BCFW (Britto-Cachazo-Feng-Witten) recursion relations, and we carry out some practical applications to simple but emblematic examples, with an attentive regard towards the details of the computations. Furthermore, the entirety of our derivation will be realized in spinor-helicity formalism. The advantages of such a formalism, besides being particularly convenient to express on-shell massless amplitudes, are two main ones: in the first place, the constraints coming from Lorentz symmetry take a particularly simple and effective form in spinor language; secondly, the spinor description naturally extends to complex momenta, which are essential for on-shell recursive techniques, since these are based on complex analysis.

Before immersing ourselves in the actual matter, let us mention those works of other authors that most inspired the present manuscript. Among its spiritual fathers, this work particularly owes to its predecessor <cit.>, the notes of the lectures given on similar topics by Eduardo Conde at the ninth edition of the same Modave School, in 2013. Moreover, the co-authored work with Eduardo <cit.> constitutes a significant part of the material of these notes.
Other main inspirational sources for the adopted overall perspective are Weinberg's textbook on Quantum Field Theory <cit.>, the more modern book by Henn and Plefka <cit.>, together with the already cited review by Elvang and Huang <cit.>.More specifically, for Section 1, where we briefly sketch some preliminary notions from S-matrix theory, we refer to Chapter 3 and 4 of Weinberg's book <cit.>, to the historical book <cit.>, and to Conde's notes <cit.>. In Section 2 we review the Lorentz Little Group for massless and massive representations. Then in Section 3 we introduce the spinor-helicity formalism, both for massless and massive momenta, establishing our notation, which is the same as in the useful practical compendium <cit.>. In Section 4, essentially based on the already cited work <cit.>, we show how the constraints coming from Lorentz symmetry completely fix the three-point amplitude in spinor-helicity formalism, for either massless or massive external particles. For Section 5, where we present BCFW recursive techniques and some illustrative examples, we are grateful to the works: <cit.>. Introduction§ PRELIMINARIES ON THE S-MATRIX Let us start with setting some preliminary definitions regarding the S-matrix. The probability amplitude of a scattering process is defined asymptotically, that is at past and future infinity: we have some initial (interaction-free) particle content, and after a `long' time, during which some interaction happens, we get some other final (interaction-free) particle content. More formally, the probability amplitude of the transitions between some initial state |i⟩ of the physical Hilbert space at time -∞ and some final state |f⟩ at time +∞ is defined as the expectation value (inner product on the Hilbert space)S_fi=⟨ f | i ⟩ .This defines the elements of the S-matrix. We should intend the final and initial states as ranging over an orthonormal basis of multi-particle states. So, from the completeness relation, it follows that the S-matrix has to be unitary.We can subtract from the S-matrix the trivial case of transitions with no interaction happening, to get the part that actually contains the interactions:S_fi-δ_fi = T_fi . Then, we demand translational invariance of the S-matrix, namely ⟨ f | i ⟩ = ⟨ e^-ia_μ𝔭_f^μf | e^-ia_μ𝔭_i^μi ⟩ = e^-ia_μ(𝔭_f^μ-𝔭_i^μ)⟨ f | i ⟩ ,where 𝔭_i represents the sum of incoming four-momenta, and 𝔭_f the sum of outgoing four-momenta. Of course, for arbitrary a_μ, this identity can be true only if the four-momentum is conserved. So we can extract an overall delta function of momenta and write what we will actually call the amplitude M:T_fi=-2π i δ(𝔭_f-𝔭_i) M_fi ,with some conventional numerical factor. Then the unitarity condition for S translates into the following condition on M:S^† S = 𝕀 = S S^†⇒ M-M^† = 2π iM M^† . Finally, we use the crossing symmetry of scattering amplitudes, that is the equivalence of interpreting outgoing particles as incoming antiparticles. In this way we can always treat all the particles as incoming, and replace M_fi, where we have m incoming and n-m outgoing particles, by M_n, where all particles are incoming.Another crucial property of the S-matrix is the cluster decomposition principle. This is just a consequence of the assumption that experiments that are sufficiently distant in space are uncorrelated, which is a mild version of locality of the interactions. 
The consequence is that the total S-matrix of distant, uncorrelated scattering processes factorizes into the product of the S-matrices of each of these processes. Then we can restrict to the part of the S-matrix where there is an actual exchange of momentum among all the n particles involved (that is, we look at a scattering event in a given, localized experiment), which is called the connected part of the S-matrix.The advantage we will gain from considering the connected part is that it contains no delta function other than the total momentum conservation one, whereas the non-connected part contains additional delta functions corresponding to subsets of particles going through the process without interacting with the rest. On the other hand, the non-connected pieces can be recursively constructed from the lower-point connected one, so the connected amplitudes are really the objects we need to determine in order to have the whole S-matrix. Thus, in the following, by M_n we will always implicitly mean the connected component of the amplitude, rather then the whole n-point amplitude. For more details about the cluster decomposition principle we refer to standard literature on the subject (for instance chapter 4 of Weinberg's QFT book <cit.>).We have then just sketched the basic properties of S-matrix: unitarity, crossing, cluster decomposition (which is related to locality). We have also already imposed translational invariance, but in case of a larger spacetime symmetry group we can impose further constraints on the S-matrix. It is precisely what we are going to do, for the four-dimensional Lorentz group. In order to do it systematically, we will start reviewing the representation theory of Poincaré group, which classifies the different kinds of one-particle states, furnishing a basis for our physical Hilbert space. This is crucial for the definition of S-matrix itself, as it is clear from the defining relation (<ref>). § POINCARÉ REPRESENTATIONS AND THE LITTLE GROUPSo, as we have just seen, the scattering amplitude is defined out of the states of the physical Hilbert space, which in turn are classified by the irreducible unitary representations of the symmetry group of spacetime ( group in our case). Moreover, the states will transform under the symmetry transformations in a determined way, and so the amplitudes will inherit such transformation properties.So, our preliminary effort is about reviewing the representation theory of Poincaré group. We will particularly focus on the Little Group, which will play a central role in the subject of Section <ref>, providing the crucial constraints for the three-point amplitude.The algebra of Poincaré group possesses two Casimir operators. A Casimir is an element of the universal enveloping algebra of a given Lie algebra 𝔤, that commutes with all the elements of the algebra 𝔤. Since an irreducible representation has no Casimir beside the identity, the Casimir operators of a group can be used to classify its irreducible representations. The classification of representations of Poincaré group in four dimensions is due to Wigner <cit.>, and it is referred to as Wigner classification.[ In these notes we will remain in four dimensions, but an analogous classification can be realized for any dimensions bigger than one (see for instance <cit.>). 
]The Casimir operators of the Poincaré group are the squared norm of the translation generator (momentum), P^2, and the squared norm of the Pauli-Lubanski operator, W^2, where

W_λ = 1/2 ϵ_λμνρ M^μν P^ρ ,

with M^μν the generators of the Lorentz algebra. Since the Casimir operators commute with all other transformations of the group, their eigenstates can be chosen as the physical states: the respective eigenvalues are not affected by any transformation of the group and can be considered as intrinsic properties of the state/particle.

The eigenvalue of the squared momentum operator is the squared mass. This divides the Hilbert space into separate classes, according to whether the mass is zero or different from zero, respectively leading to massless and massive representations of the Poincaré group[ Tachyonic momenta (i.e. with negative squared mass) constitute a third distinct class, but we discard them as unphysical here. The vacuum (p≡0) also corresponds to a separate, yet trivial, case.]. The eigenvalues connected to the Pauli-Lubanski operator, instead, will be related to the helicity/spin of the particle. The Pauli-Lubanski operator generates the Little Group (LG) of a given four-momentum p. The LG of p is the stabilizer subgroup of the Lorentz group with respect to p, which is defined as the subgroup of proper orthochronous Lorentz transformations that leave p invariant:

LG_p = { Λ_p ∈ L_+^↑  |  Λ_p p = p } .

We can thus label our physical states |p, a⟩ thanks to the eigenvalues of the momentum operator (i.e. p) and of the Pauli-Lubanski operator (represented for the moment by the generic label a), and decompose the action of any unitary representation U of a generic Lorentz transformation Λ in the following way

U(Λ)|p, a⟩ = ∑_a' D_aa'(Λ) |Λp, a'⟩ .

If the considered Lorentz transformation is a LG transformation, then the expression (<ref>) clearly reduces to

U(Λ_p)|p, a⟩ = ∑_a' D_aa'(Λ_p) |p, a'⟩ ,

which does not touch p, but can affect the other labels a, related to the LG. In the next sections we will see how the Pauli-Lubanski operator actually generates the LG, defining a basis of physical states, and we will explicitly determine the form of the representations D_aa'. Since the LG will highlight the fundamentally different nature of massless and massive particles, we will discuss light-like and time-like momenta in distinct sub-sections.

§.§ Little Group for massless momenta Let us then consider a light-like four-vector p, that is, p^2=0. There exists a Lorentz transformation L_p that moves p to the special frame

L_p p ≡ k = (E,0,0,E) .

Then transformations Λ_k of the LG of k give transformations Λ_p of the LG of p through the composition Λ_p = L_p^-1 Λ_k L_p. The LG of k can be intuitively identified with ISO(2), the group of isometries of the Euclidean p_1–p_2 plane. Let us verify that the Pauli-Lubanski operator corresponding to k indeed generates this ISO(2) group. From the definition (<ref>), we have

W_0 = E M^12 ≡ E J^3 ;  W_1 = E (-M^23 - M^02) ≡ -E (J^1+K^2) ;  W_2 = E (-M^31 + M^01) ≡ -E (J^2-K^1) ;  W_3 = -E J^3 = -W_0 ;

where J^i are spatial rotations around the respective axes, and K^i are Lorentzian boosts along the respective directions. The operators W_1, W_2, J^3 verify the algebra (following from the Lorentz commutation relations):

[W_i, W_j] = 0 ,  [J^3, W_i] = i ϵ_ij3 W_j ,  with i,j = 1,2,

which is indeed the algebra of ISO(2). Thus a generic LG transformation for k acts on a massless state |k, a⟩ as e^{-iα W_1} e^{-iβ W_2} e^{-iθ J^3} |k, a⟩. The eigenstates of W_1, W_2 turn out to have continuous eigenvalues, which would lead to continuous spin.
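To make this algebra a little more tangible, here is a minimal numerical sketch (Python with NumPy/SciPy; the matrix conventions and variable names are mine) which builds the Lorentz generators in the four-by-four vector representation and checks that the combinations J^1+K^2, J^2-K^1 and J^3 — the ones appearing in W_1, W_2, W_0 above — annihilate the reference momentum k=(E,0,0,E) and close the ISO(2) algebra. Since real matrix generators are used, the commutators appear without the factors of i of the quantum-mechanical convention.

```python
import numpy as np
from scipy.linalg import expm

# Real 4x4 Lorentz generators in the vector representation, acting on
# column vectors (p0, p1, p2, p3); finite transformations are expm(theta*G).
def rot(i, j):           # rotation in the spatial (i, j) plane
    G = np.zeros((4, 4)); G[i, j] = -1.0; G[j, i] = 1.0
    return G

def boost(i):            # boost along the i-th spatial axis
    G = np.zeros((4, 4)); G[0, i] = 1.0; G[i, 0] = 1.0
    return G

J1, J2, J3 = rot(2, 3), rot(3, 1), rot(1, 2)
K1, K2, K3 = boost(1), boost(2), boost(3)

E = 3.7
k = np.array([E, 0.0, 0.0, E])
W1, W2 = J1 + K2, J2 - K1            # proportional to W_1, W_2 above (factor -E dropped)
comm = lambda A, B: A @ B - B @ A

# The three generators annihilate k, so their exponentials stabilize it:
for G in (W1, W2, J3):
    assert np.allclose(G @ k, 0)
    assert np.allclose(expm(0.83 * G) @ k, k)

# ...and they close the ISO(2) algebra (no i's: real matrix generators):
assert np.allclose(comm(W1, W2), 0)
assert np.allclose(comm(J3, W1),  W2)
assert np.allclose(comm(J3, W2), -W1)
print("ISO(2) little-group checks passed")
```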
Even if such a possibility constitutes a current matter of research <cit.>, it is discarded in standard particle physics (requiring the action of W_1, W_2 to be trivial on the physical states), in favor of a quantized spin. In fact, the eigenstates of J^3 possess discrete eigenvalues, which correspond to the two opposite values of the helicity of a massless particle. This can be seen directly from the definition of helicity, that is the projection of the spin onto the direction of motion; with our special choice of four-momentum k (<ref>), we have indeed:

H = J⃗·P⃗/|P⃗|  ⇒  H_k = J⃗·k⃗/|k⃗| = E J^3/E = J^3 .

Then we choose the eigenstates of the helicity operator as our physical massless particle-states, i.e.

H |p, h⟩ = h |p, h⟩ ,  for p^2=0,

where the helicity eigenvalue can take two opposite values h = ± s, with s the spin of the massless particle. Then, a general little group transformation acts as follows on the state in the special frame k:

U(Λ_k)|k, h⟩ = e^{-iθ J^3}|k, h⟩ = e^{-iθ h}|k, h⟩ ;

and equivalently for a general light-like four-momentum p:

U(Λ_p)|p, h⟩ = U(L_p^-1 Λ_k L_p)|p, h⟩ = e^{-iθ h}|p, h⟩ .

We notice, comparing this expression to the more general (<ref>), that the representations of the massless LG are diagonal in the helicity basis, i.e. D_hh' = δ_hh' e^{-iθ h}; which is obvious, because the LG is one-dimensional, so it has only one generator and we have chosen as basis the eigenstates of such generator. Yet, this is possible only because the helicity is a Lorentz invariant (Lorentz transformations cannot reverse the direction of motion of a massless particle). It will not be the case for massive representations, as we will see in the next section.

§.§ Little Group for massive momenta As in the massless case, we can bring a generic time-like momentum P, P^2=m^2, to a special frame (the rest frame), through a certain Lorentz boost L_P,

L_P P ≡ K = (m,0,0,0) ,

and then we can retrieve the LG transformations for P from those for K: Λ_P = L_P^-1 Λ_K L_P. From (<ref>) we can realize that the LG is given in this case by SO(3), the group of three-dimensional spatial rotations. Again we can derive the generators of the LG from the Pauli-Lubanski operator (<ref>),

W_0 = 0 ;  W_i = 1/2 m ϵ_ijk0 M^jk ≡ -m J^i .

Of course, the generators of spatial rotations J^i by definition reconstruct the algebra of SO(3), which is equivalent to the algebra of SU(2), i.e.

[J^i, J^j] = i ϵ^ijk J^k .

In the rest frame, the total angular momentum, which is in general the sum of orbital and intrinsic angular momentum, J⃗ = L⃗ + S⃗, is given just by the intrinsic angular momentum, the spin S⃗. The Casimir is thus W^2 = -m^2 J⃗^2 = -m^2 S⃗^2, and we choose the particle-states to be eigenstates of the Casimir, with eigenvalues s(s+1) for J⃗^2, where s defines the spin of the massive particle. Yet, as we know from our quantum mechanics courses, this is not sufficient to define a basis of the Hilbert space: we need to choose the component of the spin along one direction (which we will call J_0), and the corresponding eigenstates with eigenvalue σ will give the complete basis.
Then we can label our massive states by s and σ, and writeJ⃗^2Ps,σ = s(s+1)Ps,σ ; J^0Ps,σ = σ Ps,σ , J^±Ps,σ = σ^± Ps,σ± 1 ;where σ^±≡√((s∓σ)(s+1±σ)), and the generators J_± are defined in the standard way in order to satisfy thecommutation relations[J^+,J^-]=2J^0,[J^0,J^±]=± J^± .As announced in previous section, we notice that, since the spin projection σ, contrarily to the helicity, is not a Lorentz invariant (it is indeed shifted by transformations generated by J_+ and J_-), the representations of massive LG transformations will not be diagonal in the eigenstates of J_0: U(Λ_P)Ps,σ = ∑_σ'D_σσ'(Λ_P)Ps,σ' .This is a crucial difference that we will have to take into account when we will try to extract constraints for the amplitudes from the massive LG.§.§ Commentary: the LG transformations and the amplitude Before moving on, we want to make more explicit the link between LG transformations and the amplitude, which is the physical object we are interested in. The fact is that, since the amplitude is made out of a direct product of in-going (out-going) states, it will inherit the transformation properties of the states under Lorentz transformations. Referring to expression (<ref>), we can writeM_n({p_i};{a_i})Λ⟶ (∑_a'_jD_a_ja'_j(Λ,p_j)) M_n({Λ p_i};{a'_i}),where the Lorentz transformation is acting on the j-th state inside the n-point amplitude M_n with n external momenta p_i. In particular, for infinitesimal LG transformations it readsH_jM_n({p_i};{a_i}) = h_j M_n({p_i};{a_i}),for p_j massless, from eq. (<ref>), andJ^0_jM_n({p_i};{a_i})= σ_j M_n({p_i};…, σ_j,…), J^±_jM_n({p_i};{a_i})= σ^±_j M_n({p_i};…, σ_j± 1,…),for p_j massive, from eq.s (<ref>) and (<ref>).The fact that the amplitude has to transform in a proper way under LG transformations yields strong constraints on the general form of the amplitude. Yet it is not immediate to extract such constraints from equations (<ref>-<ref>), we need to write them in a more explicit way. For such purpose, the spinor-helicity formalism furnishes a language to translate LG equations in a ready-to-use and effective form, in terms of simple linear differential equations.Hence in next section we will introduce and review the spinor-helicity formalism, before making large use of it in the rest of our discussion. § SPINOR-HELICITY FORMALISM The spinor-helicity formalism is an ubiquitous ingredient of the recent vague of successes in computing scattering amplitudes with on-shell methods. Yet, it does not contain anything magic. It is just a language which translates null-norm four-vectors, transforming under the (1/2,1/2) representation, into a pair of Weyl bi-spinors, transforming under the (1/2,0) and (0,1/2) representations. This has the advantage of implementing by construction the massless on-shell condition, as we will see. The essential fact behind all this is that the (proper ortochronous) Lorentz group L(ℝ) is homomorphic to . This two-to-one correspondence can be more naturally understood and explicitly constructed for the complexification of the Lorentz group, i.e. L(ℂ).[ This constitutes a first reason to consider complex momenta throughout the discussion of these notes.] Using the Pauli matrices plus the identity, σ^μ=(𝕀,σ⃗), we can associate a complex two-by-two matrix to any four-vector:[ℂ^4 ⟶M_2(ℂ);p_μ=(p_0,p_1,p_2,p_3)⟼ σ^μ_aȧp_μ=([p_0+p_3 p_1-ip_2; p_1+ip_2p_0-p_3 ])=p_aȧ. 
]In this way, a complex Lorentz transformation acting on a complex four-vector corresponds to the action of two SL(2,ℂ) transformations conjugately acting on the complex two-by-two matrix:

L(ℂ) ⟶ SL(2,ℂ)×SL(2,ℂ) ;  Λ: p_μ ↦ Λ_μ^ν p_ν  ⟼  ζ(Λ), ξ(Λ): p_aȧ ↦ ζ(Λ)_a^b p_bḃ ξ(Λ)^ḃ_ȧ .

Demanding the transformed four-vector to match the transformed two-by-two matrix, we derive the defining map between Λ and ζ(Λ), ξ(Λ):

p'_aȧ = ζ(Λ)_a^b p_bḃ ξ(Λ)^ḃ_ȧ = ζ(Λ)_a^b σ^ν_bḃ ξ(Λ)^ḃ_ȧ p_ν , together with p'_aȧ = σ^μ_aȧ p'_μ = σ^μ_aȧ Λ_μ^ν p_ν  ⟹  ζ(Λ)_a^b σ^μ_bḃ ξ(Λ)^ḃ_ȧ = σ^ν_aȧ Λ_ν^μ .

The fact that the transformation matrices ζ, ξ have to belong to SL(2,ℂ) descends from the defining property of the Lorentz group, the conservation of the norm: (Λp)^2 = p^2. In the two-by-two matrix language the norm is given by the determinant of the matrix:

|p| = p_0^2 - p⃗^2 = p^2 .

Then

|p| = |ζ p ξ| = |ζ| |p| |ξ|  ⇔  |ζ| |ξ| = 1 .

There is a redundancy in such defining property, since we can always rescale ζ and ξ in the following way,

ζ ⟶ C^-1 ζ ,  ξ ⟶ C ξ ,  with C ∈ ℂ ,

without spoiling the condition (<ref>). Then, if we take C=|ζ|, the redefined ζ gets unit determinant, and so ξ as well must have unit determinant, in order to satisfy (<ref>):

|ζ| = 1 = |ξ|  ⟹  ζ, ξ ∈ SL(2,ℂ) .

We have thus constructed the homomorphism between L(ℂ) and SL(2,ℂ)×SL(2,ℂ). It is a homomorphism, and not an isomorphism, because there is still a leftover redundancy in sending ζ, ξ into -ζ, -ξ, so that we have a two-to-one map. The quotient of SL(2,ℂ)×SL(2,ℂ) by ℤ_2 then gives a one-to-one map, i.e. an isomorphism.

If we want to recover the real-valued case, we have to impose a reality condition p_μ^* = p_μ, which yields in turn p_aȧ^† = p_aȧ, as a straightforward consequence of the hermiticity of the Pauli matrices. Imposing hence this reality condition on the transformed momentum-matrices, we get

ζ p ξ = (ζ p ξ)^† = ξ^† p ζ^†  ⇔  ξ ≡ ζ^† ,

i.e.: the “right-handed” transformation must be the conjugate of the “left-handed” one. So we have to take the diagonal SL(2,ℂ) in the product of (<ref>) in order to get the homomorphism with the (real) proper orthochronous Lorentz group, L_+^↑(ℝ).

Finally, we can specialize to massless momenta and define the spinor-helicity formalism. A null four-vector translates into a two-by-two matrix with null determinant. A complex two-by-two matrix with null determinant can always be expressed as the direct product of a pair of complex two-dimensional vectors:

|p| = 0  ⇒  p_aȧ = λ_a ⊗ λ̃_ȧ = ([ λ_1λ̃_1 λ_1λ̃_2 ; λ_2λ̃_1 λ_2λ̃_2 ]) .

In the following we will keep the tensor product implicit, using the lighter notation: p = λλ̃. You can notice that the advantage of writing a null momentum in such a way is that the on-shell condition |λλ̃| = 0 is built-in, and any expression written in this formalism is automatically on-shell, with no need to enforce this condition by hand.

We now define a number of conventions and shorthand notations for spinor products, which we will use extensively in the rest of these notes. First of all, we can take contractions of two bi-spinors, which are SL(2,ℂ) invariant in the same way as scalar products of four-vectors are Lorentz invariant:

⟨λ μ⟩ ≡ λ^a μ_a = ϵ^ab λ_b μ_a ,  [λ̃ μ̃] ≡ λ̃_ȧ μ̃^ȧ = ϵ^ȧḃ λ̃_ȧ μ̃_ḃ ,

with ϵ^12 = 1 = ϵ^1̇2̇, ϵ_12 = -1 = ϵ_1̇2̇, and ϵ^ac ϵ_cb = δ^a_b. These inner products are obviously antisymmetric and, in particular, their vanishing implies that the two spinors are proportional.
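The construction above is easy to check numerically. The following minimal sketch (Python/NumPy; function names and the random seed are mine) maps a four-vector to its two-by-two matrix, verifies that the determinant reproduces the Minkowski norm, checks that a random pair of unit-determinant matrices acting as p ↦ ζ p ξ preserves that determinant, and confirms the antisymmetry of the ϵ-contraction defined above (implemented here with ϵ^12 = 1, so that ⟨λμ⟩ = λ_2μ_1 - λ_1μ_2).

```python
import numpy as np

sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

to_matrix = lambda p: sum(p[mu] * sigma[mu] for mu in range(4))   # p_{a adot}
norm2 = lambda p: p @ eta @ p                                     # Minkowski p^2

rng = np.random.default_rng(0)
p = rng.normal(size=4)
P = to_matrix(p)
assert np.isclose(np.linalg.det(P).real, norm2(p))                # det(p_{a adot}) = p^2

def random_unit_det():                                            # a random SL(2,C) element
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    return A / np.sqrt(np.linalg.det(A))

zeta, xi = random_unit_det(), random_unit_det()
assert np.isclose(np.linalg.det(zeta @ P @ xi), np.linalg.det(P)) # norm is preserved

# Spinor contraction with eps^{12} = 1:  <l m> = l_2 m_1 - l_1 m_2
eps = np.array([[0.0, -1.0], [1.0, 0.0]])
angle = lambda l, m: l @ eps @ m
l = rng.normal(size=2) + 1j * rng.normal(size=2)
m = rng.normal(size=2) + 1j * rng.normal(size=2)
assert np.isclose(angle(l, m), -angle(m, l)) and np.isclose(angle(l, l), 0)
print("SL(2,C) and spinor-product checks passed")
```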
The Minkowski scalar product translates into a product of two-by-two matrices (non necessarily with null determinant), as follows:2p·q=ϵ^abϵ^ȧḃp_aȧ q_bḃ .In the case of light-like momenta, we can replace the matrices by a pair of spinors, obtaining2 p_i· p_j = |λ_i⟩λ_j⟨λ̃_j|λ̃_i≡|i⟩j⟨j|i ,where in the last passage we have introduced a shorthand notation that we will largely use throughout these notes. Another shorthand notation that we will adopt is the following one, for the product of a light-like vector k=κκ̃ with a generic four-momentum p:2k·p=ϵ^abϵ^ȧḃ κ_aκ̃_ȧp_bḃ≡κpκ̃ ,which naturally reduces to something of the kind of (<ref>) in the particular case where also p is light-like.§.§ Spinor-helicity formalism for massive particles The main power of the spinor-helicity description for null momenta is the automatic implementation of the on-shell condition, as we have just seen. If we want to apply this formalism to massive momenta, we loose such advantage, since we do not know how to enforce the massive on-shell condition |p|=m^2 by construction. Nevertheless, as we will see in next section, a second advantage of spinor-helicity formalism is the simple and effective form that LG operators take in this language. Such simplicity and effectiveness is realized both for massless and massive particles. Moreover, if we want to consider amplitudes where massless and massive particles are involved at the same time, it is sensible to treat them on the same footing.There are two standard ways of expressing a time-like momentum P in terms of bi-spinors, as discussed by Dittmaier <cit.>. We choose the strategy of representing P as the sum of two null momenta, p and q:P_μ=p_μ+q_μ⇒ P_aȧ=λ_aλ̃_ȧ+μ_aμ̃_ȧ , 2p·q=|λ⟩μ⟨μ̃|λ̃=m^2.Notice that there are many ways of decomposing a time-like vector in terms of two light-like vectors. This description introduces thus a redundancy, which is not physical and will need to be removed at a certain point. We should not confuse this non-uniqueness with the one that yet we have for massless momenta described in terms of bi-spinors (<ref>): in this latter case we can act with -transformations on λ and λ̃ such that the final two-by-two matrix is left unchanged. But these are nothing else that the physical LG transformations, which, as we will see later on, will be precious for constraining the amplitude. On the contrary, the physical massive LG invariance will mix up with the non-physical redundancy introduced by our ambiguous decomposition (<ref>). It will be crucial in section (<ref>) to distinguish the two of them, and to remove the latter.We have thus briefly seen how to apply the spinor-helicity formalism to massive particles in a very simple way, which yet yields some disadvantages. Now we will see how to write the LG equations in this formalism, bringing out the advantages of our choice. § CONSTRAINING THE AMPLITUDE: THE LG EQUATIONS The aim of this section is to translate equations (<ref>) and (<ref>) in spinor language, and use them to constrain the amplitude. In this context, the asymptotic states, instead of being labeled in terms of massless or massive four-momenta, will be labeled by pairs of bi-spinors, {λ_i,λ̃_i}. Since the LG for massless and massive momenta is fundamentally different, we will treat the two cases separately. The derived LG equations yield constraints on the dependency of the n-point amplitude on its kinematic variables (spinor products). 
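Before turning to the LG constraints themselves, a short numerical illustration of this kinematic dictionary may be useful. The sketch below (Python/NumPy; the explicit parametrization of λ in terms of the momentum components is a common choice, not the only possible one) factorizes a real null momentum into λ and λ̃ = λ^*, verifies the identity 2p·q = ⟨pq⟩[qp], and assembles a massive momentum out of two null ones, checking that ⟨λμ⟩[μ̃λ̃] indeed equals m².

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda p, q: p @ eta @ q
angle  = lambda a, b: a[1] * b[0] - a[0] * b[1]     # <ab> = a_2 b_1 - a_1 b_2
square = lambda a, b: a[0] * b[1] - a[1] * b[0]     # [ab] = a_1 b_2 - a_2 b_1

def null_momentum(E, theta, phi):
    """A real light-like four-vector with energy E and direction (theta, phi)."""
    n = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)])
    return np.concatenate(([E], E * n))

def spinors(p):
    """lambda, lambda~ with p_{a adot} = lambda_a lambda~_adot (real p, p0 + p3 > 0)."""
    lam = np.array([np.sqrt(p[0] + p[3]), (p[1] + 1j * p[2]) / np.sqrt(p[0] + p[3])])
    return lam, lam.conj()

p, q = null_momentum(2.0, 0.3, 1.1), null_momentum(5.0, 1.9, -0.4)
lp, ltp = spinors(p)
lq, ltq = spinors(q)

# The momentum matrix really is the outer product lambda_a lambda~_adot:
Pmat = np.array([[p[0] + p[3], p[1] - 1j * p[2]], [p[1] + 1j * p[2], p[0] - p[3]]])
assert np.allclose(Pmat, np.outer(lp, ltp))

# 2 p.q = <pq>[qp]:
assert np.isclose(2 * dot(p, q), angle(lp, lq) * square(ltq, ltp))

# A massive momentum as the sum of two null ones, with m^2 = <pq>[qp]:
P = p + q
assert np.isclose(dot(P, P), angle(lp, lq) * square(ltq, ltp))
print("spinor-helicity kinematics checks passed, m^2 =", dot(P, P))
```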
For the 3-point amplitude, such constraints will be enough to completely determine the form of the amplitude, both in the massless and in the massive case. §.§ Massless LG equations We have seen that the massless LG corresponds to rotations around the direction of motion. For instance, in the special frame k_μ=(E,0,0,E) it is given by rotations around the third axis. These are a subgroup of (real) Lorentz transformations parametrized by an angle θ, acting on a four-momentum p as follows:R_3(θ)p = ([ 1 0 0 0; 0cos(θ) -sin(θ) 0; 0sin(θ)cos(θ) 0; 0 0 0 1 ]) ([ p_0; p_1; p_2; p_3 ]) .How this transformation translates intotransformations acting on bi-spinors? The answer can be worked out straightforwardly from the relation (<ref>), and is given byζ_R_3(θ)^ab_aλ_b = ± e^-iθ/2σ^3λ=±([ e^-iθ/2 0; 0e^iθ/2 ]) ([ λ_1; λ_2 ]) ,where the sign ambiguity corresponds to the ℤ_2 ambiguity in the homomorphism between Lorentz and . The special frame k in spinor language is given byk_aȧ=κ_aκ^†_ȧ= ([ 2E0;00 ]) , withκ_a=([ √(2E); 0 ]) .Hence, whereas the momentum-matrix is trivially invariant under LG transformations, i.e. , as it should be, it is not the case for each bi-spinor by itself:ζ_R_3κ=e^-iθ/2κ , κ^†ζ^†_R_3=e^+iθ/2κ^† .So the LG is acting non-trivially on the spinors (yielding a phase), and actually these transformations, which we have here derived for the special frame k, are the same for any generic frame (see the appendix A of <cit.> for more details), namelyλ ⟶e^-iθ/2 λ , λ̃ ⟶e^+iθ/2λ̃ .In any case, however, the momentum matrix p is conserved under such transformations, whereas the momentum-spinors are scaling in a precise way, and so the amplitude scales accordingly.The differential operator that generates the infinitesimal version of transformations (<ref>) isH=-1/2(λ^a∂λ^a∂λ^a-λ̃_ȧ∂λ^a∂λ̃_ȧ) ≡ -1/2(λ∂λ∂λ-λ̃∂λ∂λ̃),so that the helicity equation (<ref>) becomes(λ∂λ∂λ-λ̃∂λ∂λ̃)λ,λ̃h=-2h λ,λ̃h ,for the massless one-particle state, and consequently for the n-point amplitude,(λ_j∂λ∂λ_j-λ̃_j∂λ∂λ̃_j) M_n({λ_i,λ̃_i},{a_i}) =-2h_j M_n({λ_i,λ̃_i},{a_i}),when the j-th particle is massless, from eq. (<ref>). This equation constitutes a sort of Ward identity for the n-point amplitude, and it is as powerful as to completely constrain the form of the amplitude for the lowest-point case, i.e. n=3, as it was first derived by Benincasa and Cachazo <cit.>, and as we will see immediately. §.§ The massless three-point amplitude The three-point amplitude for three massless particles has to be zero for real external momenta: a massless particle cannot decay into two other massless particles, except for aligned momenta and helicities summing to zero. This can be seen just by applying momentum conservation to the case of three light-like four-vectors, as it is shown in <cit.>. Momentum conservation can be satisfied only if the spatial momenta are aligned. Then, being the helicity the projection of the spin along the direction of motion, the conservation of the spin imposes h_1+h_2+h_3=0[ This turns out to be a somehow sick case (see the comments on page sick), except for the case of three massless scalars, where yet the 3-point amplitude is just a constant, the cubic scalar coupling.].If we allow for complex momenta, instead, we can have non-trivial three-point amplitudes for arbitrary values of the helicities. 
The advantage of considering complex momenta and so having non-zero massless three-point amplitudes will be clear in section <ref>, when we will use the power of complex analysis to glue together two three-point amplitudes to obtain a four-point one. Sending afterwards the complex momenta to real ones, we will get a non-zero final result, which is the actual physical four-point amplitude with real-valued external momenta.So we derive now the most general Poincaré-invariant three-point function with complex massless external momenta, keeping in mind that it will constitute the fundamental brick to construct higher-point amplitudes. To do that, we just consider three times the LG equation (<ref>) for n=3:(λ_j∂λ∂λ_j-λ̃_j∂λ∂λ̃_j) M_3^h_1,h_2,h_3({λ_i,λ̃_i}) =-2h_j M_3^h_1,h_2,h_3({λ_i,λ̃_i}), withj=1,2,3.The amplitude will depend only on Lorentz invariants,invariants in our case, that is the scalar products of spinors (<ref>-<ref>). Then it is convenient to change to those variables:[ x_1=|2⟩3, x_2=|3⟩1, x_3=|1⟩2;; y_1=⟨3|2, y_2=⟨1|3, y_3=⟨2|1; ]where we have used the shorthand notation (<ref>). If we use the chain rule, i.e.λ_1∂λ∂λ_1=x_2∂x∂ x_2+x_3∂x∂ x_3 ,and so on, we can recast the three equations (<ref>) in the following way (omitting the subscript 3 on M from now on):(x_1∂_1-y_1∂̃_1) M^{h_i}({x_i,y_i}) = (h_1-h_2-h_3)M^{h_i}({x_i,y_i}),(x_2∂_2-y_2∂̃_2) M^{h_i}({x_i,y_i}) = (h_2-h_3-h_1)M^{h_i}({x_i,y_i}),(x_3∂_3-y_3∂̃_3) M^{h_i}({x_i,y_i}) = (h_3-h_1-h_2)M^{h_i}({x_i,y_i});where we have used the shorthand notations ∂_i and ∂̃, for partial derivatives with respect to x_i and y_i respectively.The most general solution[ This kind of solutions for partial differential equations are standardly obtained through the https://en.wikipedia.org/wiki/Method_of_characteristicsmethod of characteristics. ] for this system of equations isM^{h_i}({x_i,y_i})= x_1^h_1-h_2-h_3x_2^h_2-h_3-h_1x_3^h_3-h_1-h_2f(x_1y_1, x_2y_2, x_3y_3) = y_1^h_2+h_3-h_1y_2^h_3+h_1-h_2y_3^h_1+h_2-h_3 f̃(x_1y_1, x_2y_2, x_3y_3) ,where we have a pre-factor encoding the proper LG scaling, and an undetermined function depending on the combinations x_iy_i, which precisely vanishes under the action of the differential operators x_i∂_i-y_i∂̃_i. We have written the solution in two different ways, just to make manifest that there are many equivalent ways to write the pre-factor, and in order to retrieve the actual form of the amplitude we need to explicitly determine the function f (or f̃).Actually, we still have to impose momentum conservation, which yields relations among our six variables. We will discuss these relations for generic n external momenta in section <ref>, but for three massless momenta the analysis is straightforward. We obtain for instance0=p_1^2=(-p_2-p_3)^2=2p_2· p_3=|2⟩3⟨3|2= x_1y_1.So we have to choose x_1=0, or y_1=0. Let us try with x_1=|2⟩3=0. This means that λ_2 and λ_3 are proportional (aligned in the two-dimensional vector space of the spinors). In addition, in a two-dimensional vector space three vectors cannot be linearly independent, so λ_1=αλ_2+βλ_3. Therefore all the `non-tilded' spinors have to be proportional: λ_1∝λ_2∝λ_3. Of course, if we had started with y_1=0, we would have obtained that all the `tilded' spinors should be proportional. 
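The collinear configurations just described are easy to generate explicitly for complex momenta. The sketch below (Python/NumPy; names and seed are mine) builds a three-point kinematics in which all square brackets vanish while the angle brackets do not, by taking all λ̃_i proportional to a common spinor with coefficients dictated by the Schouten identity; it then evaluates the holomorphic solution for the helicity choice (-1,-1,+1), i.e. ⟨12⟩³/(⟨23⟩⟨31⟩), and verifies its little-group scaling t^{-2h_1} under λ_1 → t λ_1 (with λ̃_1 → λ̃_1/t understood, so that p_1 is unchanged).

```python
import numpy as np

rng = np.random.default_rng(1)
angle  = lambda a, b: a[1] * b[0] - a[0] * b[1]     # <ab>
square = lambda a, b: a[0] * b[1] - a[1] * b[0]     # [ab]

# Three arbitrary (complex) holomorphic spinors:
l1, l2, l3 = (rng.normal(size=2) + 1j * rng.normal(size=2) for _ in range(3))
# Anti-holomorphic spinors all proportional to a common xi, with coefficients
# <23>, <31>, <12>: the Schouten identity then guarantees momentum conservation.
xi = rng.normal(size=2) + 1j * rng.normal(size=2)
lt1, lt2, lt3 = angle(l2, l3) * xi, angle(l3, l1) * xi, angle(l1, l2) * xi

moms = [np.outer(l, lt) for l, lt in ((l1, lt1), (l2, lt2), (l3, lt3))]
assert np.allclose(sum(moms), 0)                               # conservation
assert all(np.isclose(np.linalg.det(P), 0) for P in moms)      # all on-shell
assert np.isclose(square(lt1, lt2), 0)                         # all [ij] = 0 ...
assert not np.isclose(angle(l1, l2), 0)                        # ... but <ij> != 0

def M_H(l1, l2, l3, h1, h2, h3, g=1.0):
    """Holomorphic 3-point solution, coupling g; here x1=<23>, x2=<31>, x3=<12>."""
    x1, x2, x3 = angle(l2, l3), angle(l3, l1), angle(l1, l2)
    return g * x1**(h1 - h2 - h3) * x2**(h2 - h3 - h1) * x3**(h3 - h1 - h2)

h = (-1, -1, +1)
M = M_H(l1, l2, l3, *h)                       # = <12>^3 / (<23><31>) for these helicities

# Little-group scaling of leg 1: the amplitude must rescale by t^(-2 h_1) = t^2 here.
t = 0.7 + 0.2j
assert np.isclose(M_H(t * l1, l2, l3, *h), t**(-2 * h[0]) * M)
print("complex 3-point kinematics and little-group scaling checks passed")
```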
Thus momentum conservation requires that all x_i=0, or all y_i=0.So there are only two forms for the amplitude to be finite and not trivially zero[ We remind that we are considering the connected part of the amplitude, so that we know that it does not contain any delta functions. ], which areM^{h_i}_H=g_Hx_1^h_1-h_2-h_3x_2^h_2-h_3-h_1x_3^h_3-h_1-h_2 , wheny_i=0∀ i ,M^{h_i}_A=g_Ay_1^h_2+h_3-h_1y_2^h_3+h_1-h_2y_3^h_1+h_2-h_3 , whenx_i=0∀i ,respectively corresponding to the following values of the function f:f = g, and f = g̃ (x_1y_1)^h_2+h_3-h_1(x_2y_2)^h_3+h_1-h_2(x_3y_3)^h_1+h_2-h_3 . It is eventually possible, by imposing an additional physical requirement, to determine which of the two forms of the amplitude is the correct one, depending on the values of the helicities. We remind the reader our initial remark of this section: for real external momenta the massless three-point function is trivially zero (except for specific values of the helicities, s.t. h_1+h_2+h_3=0). The reality conditions read y_i=x_i^*, implying that x_i=0=y_i for every i. Then the amplitude (<ref>) vanishes (and does not explode) for h_1+h_2+h_3<0, whereas the amplitude (<ref>) vanishes (and does not explode) for h_1+h_2+h_3>0. The choice remains ambiguous precisely for the case h_1+h_2+h_3=0, where actually the amplitude can be not vanishing. But except the trivial case of three scalars, where the amplitude is just a constant, i.e. the cubic coupling, there is no known interaction that yields a three-particle process with h_1+h_2+h_3=0, and actually it can be shown that, for theories where we can construct higher-point amplitudes out of the three-point ones, such interactions are ruled out <cit.>.Thus the (complex-valued) massless three-point amplitude is fixed by Poincaré invariance up to a constant, the coupling constant g_H or g_A, as it was first derived by Benincasa and Cachazo <cit.>. We stress that the results (<ref>) and (<ref>) are non-perturbative, since they rely only on Poincaré invariance plus the requirement that the amplitude be non-singular. This last requirement applies to the full non-perturbative physical amplitude, as well as tree-level amplitudes. In a perturbative expansions, as we all know, intermediate steps are typically non-finite. Such kinds of `partial' amplitudes, as long as they obey Lorentz invariance, should still obey the most general form (<ref>)[ The general form (<ref>) does not appear in the original paper <cit.>, but has been instead proposed in <cit.>, where an example of loop divergent amplitude, matching (<ref>) rather than (<ref>-<ref>), is given as well. ]. In next section we will extend the successful strategy, that has allowed us to determine the massless three-point amplitude, to the case where one, two, or three massive particles participate in the scattering. The derivation will be less straightforward, but just as successful.§.§ Massive LG equations As we have seen in section <ref>, an n-point amplitude that involves a massive particle has to obey the massive LG equations (<ref>) for the corresponding leg. The generators J_0, J_+, J_- respect the three-dimensionalalgebra (<ref>). We want to turn these equations in spinor-helicity language, that is spinor differential operators acting on the amplitude. 
As in previous section we can consider the action of a LG transformation on a four-vector, and from that infer the corresponding transformations on the spinors.We recall our description of a massive momentum-matrix in spinor-helicity formalism (<ref>), and we go to the rest frame K_μ=(m,0,0,0), where we haveK_aȧ= ( [ m 0; 0 m ]) = λ_aλ̃_ȧ+μ_aμ̃_ȧ= ( [ λ_1λ̃_1+μ_1μ̃_1 λ_1λ̃_2+μ_1μ̃_2; λ_2λ̃_1+μ_2μ̃_1 λ_2λ̃_2+μ_2μ̃_2 ]) ,with the following conditions on the spinor components:λ_1λ̃_2+μ_1μ̃_2= 0 =λ_2λ̃_1+μ_2μ̃_1 , λ_1λ̃_1-μ_2μ̃_2= 0 =μ_1μ̃_1-λ_2λ̃_2 ,λ_1λ̃_1+λ_2λ̃_2= m =μ_1μ̃_1+μ_2μ̃_2 .The LG is given by , the three-dimensional spatial rotations. If R is a generic element of , then the correspondingmatrices acting on spinors, ζ_R, ζ_R^†, turn out to be elements ofsubgroup. This can be easily checked taking rotations around specific reference axis, like the one in (<ref>), and composing them to obtain a generic rotation, for instance by the standard parametrization through Euler angles. Then we takeζ_R=([ab; -b^*a^* ]) ,with|a|^2+|b|^2=1,and we check that indeed the correct LG transformation property for K_aȧ is fulfilled:ζ_R^ab_a K_bḃζ^†_R^ḃ_aȧ = ζ_R^ab_aλ_bλ̃_ḃζ^†_R^ḃ_aȧ + ζ_R^ab_aμ_bμ̃_ḃζ^†_R^ḃ_aȧ = K_aȧ .The last identity holds in virtue of relations (<ref>), which are specific to this frame. We also remark that the total momentum is invariant under thesetransformations, whereas λ_aλ̃_ȧ and μ_aμ̃_ȧ are not separately invariant. Of course we can consider a generic boosted frame P=L_P^-1K, and after having found the proper formulation of boosts in terms ofmatrices, we would obtain the LG transformations for a generic frame:ζ_L_P^-1 ζ_R ζ_L_PPζ_L_P^† ζ^†_R ζ_L_P^†^-1 =P. But this road would not be convenient for us, since the generators of this group of transformations do not write in a nice form in spinor-helicity formalism. For instance the J^0 generator corresponding to the transformations ζ_R readsJ^0=-1/2(λ_1∂/∂λ_1-λ_2∂/∂λ_2-λ̃_1∂/∂λ̃_1+λ̃_2∂/∂λ̃_2+μ_1∂/∂μ_1-μ_2∂/∂μ_2-μ̃_1∂/∂μ̃_1+μ̃_2∂/∂μ̃_2).One can see that it cannot be written in a compact form for λ,λ̃ and μ,μ̃, contrary to the operator (<ref>). Let us instead consider anothertransformation, namely([ λ; μ ])→ U([ λ; μ ]),( λ̃μ̃ ) →( λ̃μ̃ )U^† ,withU∈ .Our massive momentum is always invariant under such transformations, independently of the frame:( λμ ) U^⊺U^⊺^†([ λ̃; μ̃ ]) = ( λμ )([ λ̃; μ̃ ])= λλ̃+μμ̃ . So these transformations U are perfect candidates as massive LG transformations. Moreover, U is actually a four-by-four matrix, which is an actualmatrix composed with the two-by-two identity, i.e.:U=([α 𝕀_2β 𝕀_2; -β^* 𝕀_2α^* 𝕀_2;]) , with|α|^2+|β|^2=1 .So, in (<ref>), U is acting in the same way on both components λ_1 and λ_2 of the bi-spinor λ, and in the same way on both components μ_1 and μ_2 of the bi-spinor μ. Then the infinitesimal generator J_0 for these transformations[ A basis of generators ofmatrices is given by J^0=σ^3/2=[10;0 -1 ], J^+=σ^1+iσ^2/2=[ 0 1; 0 0 ], J^-=σ^1-iσ^2/2=[ 0 0; 1 0 ], from which the respective differential operators are derived. ] readsJ^0 = -1/2( λ_1∂λ∂λ_1 +λ_2∂λ∂λ_2 -λ̃_1∂λ∂λ̃_1 -λ̃_2∂λ∂λ̃_2 -μ_1∂μ∂μ_1 -μ_2∂μ∂μ_2 +μ̃_1∂μ∂μ̃_1 +μ̃_2∂μ∂μ̃_2) == -1/2( λ∂λ∂λ -λ̃∂λ∂λ̃ -μ∂μ∂μ +μ̃∂μ∂μ̃).So this operator recasts in a nice form in terms of λ and μ, analogous to that of the helicity operator (<ref>). The same holds true for the other generators, and we can summarize:J^0=-1/2( λ∂λ∂λ -μ∂μ∂μ -λ̃∂λ∂λ̃ +μ̃∂μ∂μ̃), J^+=-μ∂λ∂λ +λ̃∂μ∂μ̃ , J^-=-λ∂μ∂μ +μ̃∂λ∂λ̃ . 
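It is a useful exercise to check explicitly that these three differential operators really close the su(2) algebra. The minimal sympy sketch below (symbol names are mine; the test monomial is arbitrary, any smooth function of the eight spinor components would do) verifies [J^+,J^-] = 2J^0 and [J^0,J^±] = ±J^± by direct application.

```python
import sympy as sp

l1, l2, lt1, lt2, m1, m2, mt1, mt2 = sp.symbols('l1 l2 lt1 lt2 m1 m2 mt1 mt2')
lam, lamt, mu, mut = (l1, l2), (lt1, lt2), (m1, m2), (mt1, mt2)

def op(coeff, dvar, f):
    """The contracted first-order operator  coeff_a * d f / d dvar_a ."""
    return coeff[0] * sp.diff(f, dvar[0]) + coeff[1] * sp.diff(f, dvar[1])

J0 = lambda f: -sp.Rational(1, 2) * (op(lam, lam, f) - op(mu, mu, f)
                                     - op(lamt, lamt, f) + op(mut, mut, f))
Jp = lambda f: -op(mu, lam, f) + op(lamt, mut, f)   # J^+ = -mu d/dlam + lam~ d/dmu~
Jm = lambda f: -op(lam, mu, f) + op(mut, lamt, f)   # J^- = -lam d/dmu + mu~ d/dlam~

# An arbitrary polynomial test function of the eight spinor components:
f = l1**3 * l2 * lt1 * lt2**2 * m1 * m2**2 * mt1 * mt2**3

assert sp.expand(Jp(Jm(f)) - Jm(Jp(f)) - 2 * J0(f)) == 0   # [J+, J-] = 2 J0
assert sp.expand(J0(Jp(f)) - Jp(J0(f)) - Jp(f)) == 0       # [J0, J+] = +J+
assert sp.expand(J0(Jm(f)) - Jm(J0(f)) + Jm(f)) == 0       # [J0, J-] = -J-
print("su(2) algebra of the massive little-group operators verified")
```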
We underline that there is an isomorphic map between the U-transformations and thetransformations ζ_R. It can be derived explicitly in the rest frame (and then extended through boosts to any frame), imposing the identityU([ λ; μ ]) = ([ ζ_Rλ; ζ_Rμ ])component by component. So these U-transformations are full-fledged LG transformations, and we will legitimately use operators (<ref>–<ref>) to constrain the amplitude. For a more formal derivation of such representation of the LG generators, the reader can check Section 2 and Appendix A of <cit.>. We eventually remark that these groups of transformations, either U or ζ_R, are the physical LG transformations, but they are not the largest group of transformations that leave P=λλ̃+μμ̃ invariant. Indeed we can keep λ, λ̃ unchanged and scale μ, μ̃ as followsμ⟶ t μ , μ̃⟶ t^-1μ ,or the other way round. Such transformations leave P invariant, but they are nottransformations, neither of the form (<ref>) nor of the form (<ref>). This ambiguity is related to the redundancy in our description of a time-like momentum in terms of null ones (<ref>), as we have already stressed there: we can of course apply the respective massless LG transformations on each of the null momentum in the decomposition independently of the other; but this strictly depends on the given decomposition, while the actual massive LG transformation cannot depend on how we choose to decompose the massive momentum. So these additional transformations are not physical and we will have to demand the amplitude not to depend on them.But let us first proceed to the analysis of the constraints given by the massive LG equations on the three-point amplitude.§.§ The massive three-point amplitude The starting point are the massive LG equations (<ref>), which we rewrite here for the three-point amplitude:J^0_jM_3({p_i};…, σ_j,…) = σ_j M_3({p_i};…, σ_j,…), J^±_jM_3({p_i};…, σ_j,…)= σ^±_j M_3({p_i};…, σ_j± 1,…).As we have remarked in advance around equations (<ref>), the massive LG equations are such that the eq. (<ref>) is an eigenvalue equation exactly as the helicity one (<ref>), whereas the eq.s (<ref>) are relating different amplitudes. Of course, we would like to have a maximal number of differential equations for the same function, in order to hope to solve the system. The smart thing we can do is considering the amplitude where all massive particles are in the lowest value of their spin projection, σ_i=-s_i, so that the action of J_- annihilates such amplitude:J_j^-M_3({p_i};…, -s_j,…) = 0.Then we have two simple equations for this amplitude for each massive particle: this last one, and the one corresponding to the action of J_0 (<ref>). With the operators J_i^+ we can then raise the value of spin projections σ_i, and obtain all the other amplitudes with higher values of σ_i. Moreover, once we get to the highest value of the spin projection, σ_i=+s_i, we can act one more time with J^+_i and again we annihilate the amplitude:(J_j^+)^2s_j+1M_3({p_i};…, -s_j,…) = 0.This is yielding an additional constraint on this amplitude, but it is much more involved than (<ref>) and (<ref>). So we will keep it for the end, considering for the moment only the simpler equations for J_i^0 and J_i^-.Summarizing, we want to consider the three-point amplitudes involving massive particles that are in the lowest value of their spin component, σ_i=-s_i, as well as massless particles with arbitrary helicity, h_i=± s_i. 
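The logic of sitting on the lowest spin component can be visualized with ordinary spin matrices: in the (2s+1)-dimensional representation, the lowering operator annihilates the state with σ = -s, and 2s+1 applications of the raising operator annihilate everything. A small numerical illustration (Python/NumPy, standard angular-momentum matrix elements; this is only meant to motivate the two extra conditions written above):

```python
import numpy as np

def raising(s):
    """J^+ in the basis |s,m>, m = -s,...,+s; <s,m+1|J^+|s,m> = sqrt(s(s+1)-m(m+1))."""
    dim = int(round(2 * s + 1))
    Jp = np.zeros((dim, dim))
    for k in range(dim - 1):
        m = -s + k
        Jp[k + 1, k] = np.sqrt(s * (s + 1) - m * (m + 1))
    return Jp

for s in (0.5, 1.0, 1.5, 2.0):
    Jp = raising(s)
    Jm = Jp.T                                   # lowering operator in this real basis
    dim = Jp.shape[0]
    lowest = np.zeros(dim); lowest[0] = 1.0     # the state |s, -s>
    assert np.allclose(Jm @ lowest, 0)                          # J^- |s,-s> = 0
    assert np.allclose(np.linalg.matrix_power(Jp, dim), 0)      # (J^+)^(2s+1) = 0
print("spin-ladder checks passed")
```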
Taking into account the expressions of the massive LG operators in spinor formalism (<ref>-<ref>), together with the helicity equation (<ref>) for each massless legs, we have the following system of equations for this `lowest-component' amplitude:{[ H_j M^{a_i} = h_jM^{a_i} ,ifa_j=h_j ;;; [ J_j^0 M^{a_i} = -s_iM^{a_i} ,; J_i^- M^{a_i}=0 , ] ifa_j=-s_j . ].Again we have omitted the subscript of M to make notation lighter.We now treat separately the cases with one, two, and three massive external states, from the simplest to the most involved. But before proceeding we discuss briefly the kinematic constraints coming from momentum conservation and on-shell conditions.§.§.§ subsubsection Kinematic constraints: momentum conservation and on-shell conditions Since we use one pair of spinors for each massless momentum and two pairs for each massive one, we will need four pairs of spinors for the one-massive, two-massless case, five pairs for the two-massive, one-massless case, and six pairs for the three-massive case. Anyway, for all three cases momentum conservation turns into momentum conservation for (four, five, six) massless momenta. In general, momentum conservation for m massive momenta and n-m massless ones in our description is equivalent to momentum conservation of n+m massless momenta.We consider thus the general case of n massless momenta, which means n `non-tilded' and n `tilded' bi-spinors. Out of 2n spinors we can build 1/2n (n-1) `angle' products (<ref>) and 1/2n (n-1) square products (<ref>).First, the massless on-shell condition for each of the n momenta is automatically implemented thanks to spinor-helicity formalism, and this translates, as we have already seen in section <ref>, into the geometrical statement that three bi-spinors cannot be linearly independent, i.e.|j⟩k λ_i+|k⟩i λ_j+|i⟩j λ_k=0.This fact goes under the name of Schouten identity. We can then choose for instance λ_1 and λ_2 as projecting directions, and use (<ref>) to express any of the angle products involving neither λ_1 nor λ_2 in terms of|1⟩2,|1⟩i,|2⟩i, withi=3,…, n.These are 2(n-2)+1=2n-3 independent variables.We can then consider momentum conservation, which in spinor-helicity formalism reads∑_i=1^nλ_iλ̃_i=0.If we contract this equation with λ_1 and λ_2 respectively, we obtainλ̃_1=-∑_i=3^n |i⟩2/|1⟩2λ̃_i, and λ̃_2=-∑_i=3^n |1⟩i/|1⟩2λ̃_i.We see that in this way only the angle products that we have chosen as independent variables in (<ref>) appear in the relations (<ref>). With them we can express all the square products involving either λ̃_1 or λ̃_2 in terms of⟨i|jwithi,j≠ 1,2,which are 1/2(n-2)(n-3) variables. When n>5, the variables (<ref>) are not all independent since there are Schouten identities relating them, so we can further reduce the number of square products to 2(n-4)+1=2n-7.Thus, thanks to massless on-shell conditions and momentum conservation, we have reduced the total number of independent variables from the initial n(n-1) to{[2n-3 + 1/2(n-2)(n-3) = 1/2 n (n-1)if n≤5;2n-3 +2n-7 = 2(2n-5)if n>5 ].. This conclusion is completely general, and holds for any kinematic process with n conserved massless momenta. In our case, since we want to consider massive momenta, we have one additional condition per each massive particle, the massive on-shell condition (<ref>).If we refer to our counting of LG differential equations for massless and massive external particles (<ref>), we can already compare the number of equation to the number of independent variables for each case. 
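Two ingredients of this counting are easily verified numerically: the Schouten identity among any three two-component spinors, and the fact that solving for λ̃_1 and λ̃_2 as above automatically makes the total momentum matrix vanish. A minimal sketch (Python/NumPy; the number of legs and the random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
angle = lambda a, b: a[1] * b[0] - a[0] * b[1]          # <ab>
rnd = lambda: rng.normal(size=2) + 1j * rng.normal(size=2)

# Schouten identity: <jk> l_i + <ki> l_j + <ij> l_k = 0 for any three spinors.
li, lj, lk = rnd(), rnd(), rnd()
assert np.allclose(angle(lj, lk) * li + angle(lk, li) * lj + angle(li, lj) * lk, 0)

# Momentum conservation for n massless legs: pick l_1..l_n and l~_3..l~_n freely,
# then l~_1, l~_2 are fixed by contracting sum_i l_i l~_i = 0 with l_1 and l_2.
n = 6
lam = [rnd() for _ in range(n)]
lamt = [None, None] + [rnd() for _ in range(n - 2)]
lamt[0] = -sum(angle(lam[i], lam[1]) / angle(lam[0], lam[1]) * lamt[i] for i in range(2, n))
lamt[1] = -sum(angle(lam[0], lam[i]) / angle(lam[0], lam[1]) * lamt[i] for i in range(2, n))

total = sum(np.outer(lam[i], lamt[i]) for i in range(n))
assert np.allclose(total, 0)                             # sum_i p_i = 0
print("Schouten identity and momentum-conservation checks passed")
```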
For three massless particles we had three equations and three independent variables. For one massive and two massless particles we have four equations from (<ref>), and six independent variables from (<ref>), which further reduce to five because of one massive on-shell condition. For two massive and one massless particles, we have five equations and eight independent variables. And for three massive particles we have six equations and eleven variables.Such an unfair comparison (clearly displayed in Table <ref>) could make us believe that we will hardly be able to completely determine the amplitude as in the fully massless case. Nonetheless, we are going to see how this is indeed possible.§.§.§ subsubsection One-massive two-massless amplitude We first consider the three-point amplitude with one massive particle and two massless one. We decide to parametrize the involved momenta through four pairs of spinors in the following wayP_1=λ_1λ̃_1+λ_4λ̃_4 , p_2=λ_2λ̃_2 , p_3=λ_3λ̃_3,with the mass condition reading|1⟩4 ⟨4|1 =m^2. From the system (<ref>), we have four equations for the three-point amplitude M^h_1,h_2,-s_3, which in this section will be denote simply by M. Using the expressions (<ref>–<ref>) for the massive LG operators, and (<ref>) for the helicity operators, we can write (λ_1∂λ∂λ_1 -λ_4∂λ∂λ_4 -λ̃_1∂λ∂λ̃_1 +λ̃_4∂λ∂λ̃_4) M =+2s_1M, (λ_1∂λ∂λ_4 -λ̃_4∂λ∂λ̃_1) M =0;(λ_2∂λ∂λ_2 -λ̃_2∂λ∂λ̃_2) M =-2h_2M ; (λ_3∂λ∂λ_3 -λ̃_3∂λ∂λ̃_3) M =-2h_3M.As we have done in section <ref>, since we know that the amplitude can only depend on -invariant products of spinors, we can denote[ x_1=|2⟩3, x_2=|3⟩1, x_3=|1⟩2, x_4=|3⟩4, x_5=|2⟩4, x_6=|1⟩4,; y_1=⟨3|2, y_2=⟨1|3, y_3=⟨2|1, y_4=⟨4|3, y_5=⟨4|2, y_6=⟨4|1. ] Then we can use the chain rule to translate the differential operator in (<ref>) in terms of these twelve variables, obtaining(x_3∂_5-x_2∂_6+y_6∂̃_2-y_5∂̃_3) M =0 ,(x_2∂_2+x_3∂_3-x_5∂_5-x_6∂_6-y_2∂̃_2-y_3∂̃_3+y_5∂̃_5+y_6∂̃_6) M =2s_1M,(x_1∂_1+x_3∂_3+x_5∂_5-y_1∂̃_1-y_3∂̃_3-y_5∂̃_5) M =-2h_2M,(x_1∂_1+x_2∂_2+x_6∂_6-y_1∂̃_1-y_2∂̃_2-y_6∂̃_6) M =-2h_3M ,where again we have used the same shorthand notations as in (<ref>) for partial derivatives with respect to x_i and y_i. From the discussion of previous section, we know that only five variables over twelve are independent. It is convenient to choose λ_2 and λ_3 as reference directions in (<ref>) (and consequently λ̃_2 and λ̃_3 in (<ref>)), expressing in this way all the variables in terms of x_1, x_2, x_3, x_4, x_5 (this choice is completely arbitrary, but it will reveal as the most convenient to express the result in its simplest form). We get the following expressions for the other variables:x_6=-x_1x_4+x_2x_5/x_3 ; [y_1=m^2/x_1 ,y_2=m^2x_5/x_1x_4 , y_3=-m^2/x_1x_4x_1x_4+x_2x_5/x_3 ,;y_4=m^2/x_4 ,y_5=m^2x_2/x_1x_4 ,y_6=m^2x_3/x_1x_4 . 
] Now we can use the chain rule the other way round to express the system in terms only of the chosen independent variables, obtaining∂_5 M=0 ,(x_2∂_2+x_3∂_3-x_5∂_5) M = +2s_1M ,(x_1∂_1+x_3∂_3+x_5∂_5) M =-2h_2 M ,(x_1∂_1+x_2∂_2) M = -2h_3 M .Indeed the chain rule for the changes of variable (<ref>) yieldsx_1∂_1 ⟶ x_1∂_1 -x_1x_4/x_3 ∂_6 -m^2/x_1 ∂̃_1 -m^2x_5/x_1x_4 ∂̃_2 +m^2x_2x_5/x_1x_4x_3 ∂̃_3 -m^2x_2/x_1x_4 ∂̃_5 -m^2x_3/x_1x_4 ∂̃_6, x_2∂_2 ⟶ x_2∂_2 -x_2x_5/x_3 ∂_6 -m^2x_2x_5/x_1x_4x_3 ∂̃_3 +m^2x_2/x_1x_4 ∂̃_5, x_3∂_3 ⟶ x_3∂_3 +x_1x_4+x_2x_5/x_3 ∂_6 +m^2/x_1x_4x_1x_4+x_2x_5/x_3 ∂̃_3 +m^2x_3/x_1x_4 ∂̃_6, x_4∂_4 ⟶ x_4∂_4 -x_1x_4/x_3 ∂_6 -m^2x_5/x_1x_4 ∂̃_2 +m^2x_2x_5/x_1x_4x_3 ∂̃_3 -m^2/x_4 -m^2x_2/x_1x_4 ∂̃_5 -m^2x_3/x_1x_4 ∂̃_6, x_5∂_5 ⟶ x_5∂_5 -x_2x_5/x_3 ∂_6 +m^2x_5/x_1x_4 ∂̃_2 -m^2x_2x_5/x_1x_4x_3 ∂̃_3.If we substitute these rules into (<ref>), we get precisely the system (<ref>). Notice that it is not granted at all, that the operators in (<ref>) can be obtained from the operators (<ref>), namely containing only differentials of the independent variables, upon the application of the constraints (<ref>). Some magic is happening, reflecting the compatibility of the constraints (<ref>), coming from Poincaré invariance, with our LG differential operators.[ If we take for instance the differential operator x∂_x+y∂_y with the constraint y=x^2, it cannot be expressed as x∂_x, which instead with this constraint is giving x∂_x→ x∂_x+2y∂_y. It would be interesting to explicitly show why the constraints coming from Poincaré invariance happen to be compatible with the LG differential operators, at least in all the cases discussed here.] It can be taken as a confirmation of the consistency of our treatment.Let us now solve the system (<ref>). The first equation tells us that the amplitude does not depend on x_5, so that the other three equations yield exactly the same system as in the massless case (<ref>)! (with h_1 replaced by -s_1) Moreover, we note that x_4 is not appearing in the equations, so that there is no constraint at all on the dependency of the amplitude on that variable. Thus, the most general solution for the one-massive-leg lowest-component three-point amplitude isM^-s_1, h_2,h_3 =x_1^-s_1-h_2-h_3x_2^h_2-h_3+s_1x_3^h_3-h_2+s_1f_1(x_4) = |1⟩2^h_3-h_2+s_1 |2⟩3^-s_1-h_2-h_3 |3⟩1^h_2-h_3+s_1f_1(|1⟩4),where f_1 is an arbitrary function, which depends on |1⟩4 and on other parameters of the interaction, like the mass m and the coupling constant g. The mass dimension of f_1 is fixed since the three-point amplitude must have mass dimension equal to one. Then we can factorize the dimensionful part of f_1,f_1(|1⟩4)=g m^1-[g]-s_1+h_2+h_3 f̃_1(|1⟩4m),where g is the coupling constant of the interaction and [g] is its mass dimension. In this way the function f̃_1 is now dimensionless, depending only on the dimensionless argument |1⟩4/m. Furthermore, we will argue that f̃_1 is just a constant.Indeed, we notice that the argument |1⟩4 is related to our ambivalent choice of decomposition of the massive momentum P_1. Following the considerations on page fakeLG, we can apply the transformation (<ref>) on λ_4, and make it scale, whereas we leave λ_1 untouched: in this way, requiring the amplitude to be independent of such unphysical scaling is equivalent to demanding f̃_1 in (<ref>) to be constant. 
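Before absorbing that constant into the coupling, it is reassuring to feed the claimed solution back into the reduced system. The sympy sketch below (a verification only, with f_1 kept as a generic function of x_4 and the spin and helicities left symbolic) confirms that all four equations are satisfied identically.

```python
import sympy as sp

x1, x2, x3, x4, x5 = sp.symbols('x1 x2 x3 x4 x5', positive=True)
s1, h2, h3 = sp.symbols('s1 h2 h3')
f1 = sp.Function('f1')

# Claimed lowest-component amplitude: x1^(-s1-h2-h3) x2^(h2-h3+s1) x3^(h3-h2+s1) f1(x4)
M = x1**(-s1 - h2 - h3) * x2**(h2 - h3 + s1) * x3**(h3 - h2 + s1) * f1(x4)

eqs = [
    sp.diff(M, x5),                                                            # dM/dx5 = 0
    x2 * sp.diff(M, x2) + x3 * sp.diff(M, x3) - x5 * sp.diff(M, x5) - 2 * s1 * M,
    x1 * sp.diff(M, x1) + x3 * sp.diff(M, x3) + x5 * sp.diff(M, x5) + 2 * h2 * M,
    x1 * sp.diff(M, x1) + x2 * sp.diff(M, x2) + 2 * h3 * M,
]
assert all(sp.simplify(e) == 0 for e in eqs)
print("one-massive-leg LG system solved by the claimed amplitude")
```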
We can then absorb it into the coupling constant, obtaining the following final expression for the physical one-massive leg three-point amplitude <cit.>:M^-s_1, h_2,h_3 = g m^1-[g]-s_1+h_2+h_3 |1⟩2^h_3-h_2+s_1 |2⟩3^-s_1-h_2-h_3 |3⟩1^h_2-h_3+s_1 ,which is thus completely determined by Poincaré invariance, exactly as its massless sibling. But contrarily to its massless counterpart, this amplitude is non-zero even for real kinematics, representing the decay of a massive particle into two massless ones. So, it constitutes a full non-perturbative result, being derived only from symmetry-based considerations.The attentive reader might remember now of the cumbersome constraint (<ref>), and wonder how it could further constrain the amplitude, as it is already `fully' determined. Actually, if one acts 2s_1+1 times on the amplitude (<ref>) with the spin-raising operator for particle 1, i.e.J_1^+ = -λ_4∂λ∂λ_1 +λ̃_1∂λ∂λ̃_4 ,and require the result to vanish, the following condition on the helicities of the two massless particles is obtained <cit.>:h_2-h_3 = {-s_1,-s_1+1, …,s_1-1,s_1} .Such condition on the difference of the helicities of the two massless particles is a very basic relation descending from the conservation of angular momentum. Indeed, we can quickly see the case where the difference between the helicities is maximized, that is the frame where the spatial momenta of the three particles are aligned:P⃗_1|P⃗_1|=±p⃗_2|p⃗_2|=∓p⃗_3|p⃗_3| .Then, from angular momentum conservation, J⃗_1+J⃗_2+J⃗_3=0, and using the definition of helicity (<ref>), we have∓J⃗_3·p⃗_3|p⃗_3| = (-J⃗_1-J⃗_2)·P⃗_1|P⃗_1| ⇒ ∓ h_3 = s_1 ∓J⃗_2·p⃗_2|p⃗_2| = s_1 ∓ h_2 ⇔ |h_2-h_3|=s_1,in agreement with (<ref>).We conclude this section with a nice, straightforward application of the formula (<ref>). A renowned fact in particle physics is the impossibility of a massive vector boson to decay into two photons, which goes under the name of Landau-Yang theorem <cit.>. We can indeed take the result (<ref>), set s_1=1, and consider the two cases where the helicities of the massless spin-1 particles have either the same sign or opposite sign:M^-1, ±1 ±1 = f_1 |1⟩2|3⟩1|2⟩3(|2⟩3)^∓ 2 ; M^-1, ±1 ∓1 = f_1 |1⟩2|3⟩1|2⟩3(|1⟩2|3⟩1)^∓ 2 .You can see that in both case, if we switch particle 2 and particle 3, the amplitude flips sign. But since particles 2 and 3 are identical bosonic particles, their exchange should not affect the amplitude. We conclude that this amplitude has to be zero. The simplicity and shortness of this proof should be appreciated, compared to traditional derivations of the Landau-Yang theorem. Moreover, notice that with the formula (<ref>) the statement can be easily generalized to massless particles of higher spin: a spin-1 massive particle cannot decay into two massless identical bosonic particles (i.e.: of any arbitrary integer spin). §.§.§ subsubsection Two-massive one-massless amplitude The case of two massive and one massless particles goes in a completely analogous way as the one we have just considered. We takeP_1=λ_1λ̃_1+λ_4λ̃_4 , P_2=λ_2λ̃_2+λ_5λ̃_5 , p_3=λ_3λ̃_3,with the mass conditions|1⟩4 ⟨4|1 =m_1^2 , |2⟩5 ⟨5|2 =m_2^2. For the amplitude where the two massive particles are in their lowest spin component, the system of LG equations reads(λ_1∂λ∂λ_1 -λ_4∂λ∂λ_4 -λ̃_1∂λ∂λ̃_1+λ̃_4∂λ∂λ̃_4) M=+2s_1M , (λ_1∂λ∂λ_4 -λ̃_4∂λ∂λ̃_1) M =0 ; (λ_2∂λ∂λ_2 -λ_5∂λ∂λ_5 -λ̃_2∂λ∂λ̃_2+λ̃_5∂λ∂λ̃_5) M=+2s_2M , (λ_2∂λ∂λ_5 -λ̃_5∂λ∂λ̃_2) M =0 ; (λ_3∂λ∂λ_3 -λ̃_3∂λ∂λ̃_3) M=-2h_3M. 
So we have five equations, and from the five pairs of spinors in (<ref>) we can form twenty spinor products, which reduce to eight independent ones after applying the kinematic constraints (<ref>). We choose as independent variables the following ones,[x_1=|2⟩3, x_2=|3⟩1 , x_3=|1⟩2 , x_4=|1⟩4 ,;x_5=|2⟩5, x_6=|2⟩4 , x_7=|1⟩5 , y_8= ⟨5|4. ]so seven angle-products and one square-product. Again, the LG equations can be perfectly recast in terms of differential of the independent variables only, obtainingx_1∂_1 M= (s_2-s_1-h_3) M, | ∂_6 M= 0, x_2∂_2 M= (s_1-s_2-h_3) M, | ∂_7 M= 0,(x_3∂_3+y_8∂̃_8)M =(s_1+s_2+h_3)M.The most general solution to this system is <cit.>M^-s_1,-s_2, h_3 = x_1^s_2-s_1-h_3x_2^s_1-s_2-h_3x_3^s_1+s_2+h_3f_2(x_4, x_5, y_8/x_3) = |1⟩2^s_1+s_2+h_3 |3⟩1^s_1-s_2-h_3 |2⟩3^s_2-s_1-h_3f_2(|1⟩4,|2⟩5,⟨5|4|1⟩2) .We recognize again a factor carrying the proper Lorentz-wise scaling, again similar to that of the massless amplitude, together with an undetermined function f_2 of three variables. We can repeat the dimension-based considerations of (<ref>), and rewrite f_2 asf_2(|1⟩4,|2⟩5,⟨5|4/|1⟩2)= g m_1^1-[g]-s_1-s_2+h_3f̃_2(|1⟩4/m_1,|2⟩5/m_2,⟨5|4/|1⟩2 ;m_2/m_1),so that f̃_2 is dimensionless. We notice that the first arguments of f_2 (or f̃_2) are the angle-products related to the mass, so we can expect the function not to depend on them, because of the argument about the non-physical scaling of λ_4,λ̃_4 (and λ_5,λ̃_5), that we have already applied to f_1; but the third argument contains actual kinematic information. We can then resort to the constraints of the form (<ref>). We have two of them in the present case, i.e.(J^+_1)^2s_1+1M^-s_1,-s_2, h_3=0,(J^+_2)^2s_2+1M^-s_1,-s_2, h_3=0,whose respective actions on the amplitude (<ref>)[ See Appendix B of the original paper <cit.> for details of the calculation. ] determine the following two rational expressions for f_2:f_2 =∑_k=0^2s_1 c^(1)_k(|1⟩4,|2⟩5) (1+|1⟩4|2⟩5/m_2^2⟨5|4|1⟩2)^s_1+s_2+h_3-k , f_2 =∑_k=0^2s_2 c^(2)_k(|1⟩4,|2⟩5) (1+|1⟩4|2⟩5/m_1^2⟨5|4|1⟩2)^s_1+s_2+h_3-k .The coefficients c_k^(1), c_k^(2) are still undetermined functions of their arguments, but now they depend only on the `mass' variables, and so, requiring the amplitude to be invariant under the non-physical scaling of λ_4,λ̃_4 (λ_5,λ̃_5) independently of λ_1,λ̃_1 (λ_2,λ̃_2), we obtain that they must be constants. Notice that on the other hand the remaining combinations of variables appearing as the power bases in (<ref>-<ref>) are invariant under the non-physical scalings of λ_4,λ̃_4 and λ_5,λ̃_5, consistently.We can go further realizing that these two expressions for f_2 have to be equivalent, since they describe the same function. Let us first assume different masses, m_1≠ m_2. Then, if we develop both expressions (<ref>-<ref>) in powers of the common variable ⟨5|4|1⟩2, and require the coefficients of equal powers to match, we will be forced to set to zero some of the constants c_k^(i). And in particular, when |h_3|>s_1+s_2, all of them will have to be zero. Again we find a constraint on the possible values of the helicity of the massless particle, namely <cit.>h_3={-s_1-s_2, -s_1-s_2+1, …, s_1+s_2-1, s_1+s_2} ,and again for a case where the amplitude is physical, i.e. non-zero for real momenta, as the masses are different.Let us then consider the case where the masses are equal, which corresponds to a kinematically forbidden process. 
Up to truncating the longer of the two series, the expressions (<ref>) and (<ref>) are automatically matching, without need of any restriction on h_3. This parallels the massless case, where the amplitude is also forbidden for real kinematics, and indeed we had no constraints on the values of the helicities.So, we have completely determined also the three-point amplitude with one massless and two massive legs, up to now several constants, corresponding to some different kinds of coupling. But how many of them? If we take for instance s_1≤s_2, we can convince ourselves that from the matching of (<ref>) and (<ref>) the following number of surviving (non-zero) constants c_k is given, depending on the values of the helicity of the massless particle <cit.>:# of couplings = {[ s_1+s_2-h_3+1if s_2-s_1≤h_3≤s_2+s_1;2s_1+1if-s_2+s_1≤h_3≤s_2-s_1; s_1+s_2+h_3+1if-s_2-s_1 ≤h_3≤-s_2+s_1; ].,which is always no more than 2s_1+1, with s_1≡min{s_1,s_2}.Let us conclude with an example, to see in practice how these different couplings can arise. Consider the QED three-point vertex, that is two massive spin-12 fermions interacting with a massless vector boson. Here the electrons/positrons have the same mass and same spin, so we are not facing a physical amplitude representing the decay of a massive particle, but nevertheless we have a three-point vertex which intervenes in intermediate steps of the perturbative calculation of physical higher points amplitudes.Let us take both fermions in their lowest spin component -12, and consider a photon of helicity -1. The formulæ (<ref>) and (<ref>), with s_1=s_2=12 and h_3=-1, yieldM^-1/2,-1/2,-1 = e m^-1-[e] |2⟩3|3⟩1 c_0+c_1+c_0 ξ/1+ξ , withξ=|1⟩4|2⟩5/m^2⟨5|4|1⟩2 ,and where we have renamed the coupling constant e, foreseeing future identification with the electromagnetic coupling. We see that we have indeed two independent constants and two different functional structures. If we want to construct the same amplitude through a Lagrangian approach, we realize that in the Lagrangian of QED we have just one three-point vertex, i.e. e ψ̅γ^μ A_μψ, with a single coupling constant related to the electric charge of the electron. However, the quantum corrected three-point vertex exhibits, already at one loop, a second piece, proportional to γ^μν=i/2[γ^μ,γ^ν]:Γ^μ_loops= γ^μ G_1(p_3^2) +i/2mγ^μνp_3_ν G_2(p_3^2) .This latter quantum-generated term is responsible for the anomalous gyromagnetic moment of the electron[ This is standard material of any textbook on Quantum Field Theory. Check for instance Chapter 6 of Peskin-Schroeder's book <cit.>, or Sections 10.6 and 11.3 of Weinberg's book <cit.>. ]. Considering the following expressions for the polarization vector of the photon of negative helicity[ The polarization vector is defined in spinor helicity formalism by means of an arbitrary reference spinor, that here we choose to be λ_4. This ambiguity of definition is related to gauge transformations, and one particular choice corresponds to a gauge fixing. The final answer is gauge invariant, and so not depending on this choice. For instance, it can be easily check that choosing λ̃_5 instead of λ̃_4 as reference spinor would not change the result. 
] and for the wave functions of the Dirac fermions in spinor/helicity formalism,σ^μϵ^-_μ(p_3)=λ_3 λ̃_4/⟨3|4 , v̅_-(P_1)=(|1⟩4m λ̃_4, λ_1), u_-(P_2)=([ |2⟩5/m λ̃_5; λ_2 ]),we can write the electron-electron-photon three point amplitude from the QED renormalized cubic Lagrangian, L_QED^ren=ψ̅(e γ^μ A_μ+eg/2m iγ^μνF_μν)ψ, which is taking into account the quantum contributions. We obtainM^-1/2,-1/2,-1 =v̅_-(P_1)(e γ^μ+eg/2m iγ^μνp_3_ν)ϵ^-_μ(p_3) u_-(P_2) =e/m |2⟩3|3⟩1(-ξ/1+ξ+g/2),which is precisely matching the expression (<ref>), once we recall that the electromagnetic coupling is dimensionless. To complete the matching we have to choose c_0=g/2-1 and c_1=1.We hope with this example to have shown how different Lorentz structures can arise for the same external particle content, and this independently of the adopted formalism. §.§.§ subsubsection Three-massive amplitude The three massive case is completely analogous to the previously considered cases, just more involved due to the increasing number of variables, so we review it very briefly. We have here three pairs of spinors, which describe the three massive momenta,[ P_1=λ_1λ̃_1+λ_4λ̃_4 , P_2=λ_2λ̃_2+λ_5λ̃_5 , P_3=λ_3λ̃_3+λ_6λ̃_6 ,;with|1⟩4⟨4|1=m_1^2 ,|2⟩5⟨5|2=m_2^2 ,|3⟩6⟨3|6=m_3^2 ; ]thus we have thirty spinor products, eleven of which are independent. Since we have six LG equations in the system (<ref>) for three massive legs, then we expect the amplitude to depend on an undetermined function of five arguments. From the previous examples we can already guess that three of these arguments will be the angle-products related to the mass (that we are able to get rid of by imposing the non-dependence of the amplitude on the non-physical scaling), whereas the other two arguments would contain some square-products. Indeed, the final result is <cit.>:M^-s_1,-s_2,-s_3 == |1⟩2^s_1+s_2-s_3 |3⟩1^s_3+s_1-s_2 |2⟩3^s_2+s_3-s_1f_3(|1⟩4,|2⟩5,|3⟩6,⟨5|4|1⟩2,⟨4|6|3⟩1).We find again the by now usual pre-factor embodying the LG scaling, and then the expected undetermined function of the remaining five variables, which we can again make dimensionless as in the previous cases: f_3=g m_1^1-[g]-s_1-s_2-s_3f̃_3(|1⟩4/m_1,|2⟩5/m_2,|3⟩6/m_3,⟨5|4|1⟩2,⟨4|6|3⟩1 ;m_2/m_1,m_3/m_1). We can now expect this function to be specified by applying the spin-raising operators as for the two-massive case (<ref>), which is true, even if more complicated. Indeed the three spin-raising operators, corresponding to particle 1, 2 and 3, are in this caseJ^+_1 = -λ_4∂λ∂λ_1 +λ̃_1∂λ∂λ̃_4 , J^+_2 = -λ_5∂λ∂λ_2 +λ̃_2∂λ∂λ̃_5 , J^+_1 = -λ_6∂λ∂λ_3 +λ̃_3∂λ∂λ̃_6 .As you can see, the second operator acts only on the argument containing λ̃_5 in the function (<ref>), the third operator acts only on the argument containing λ̃_6, whereas the first operator acts on both of them. This makes the solution to the equation (J^+_i)^2s_i+1M=0 easy to find for i=2,3, but vary hard for i=1.For i=2,3 respectively, the following expressions can be obtained <cit.>:f_3(…; ⟨5|4|1⟩2,⟨4|6|3⟩1)= ∑_k=0^2s_2 c^(2)_k(…; ⟨4|6|3⟩1) (|2⟩5⟨5|4|1⟩2+|3⟩6⟨4|6|3⟩1+m_1^2|1⟩4)^s_1-s_2-s_3+k = ∑_k=0^2s_3 c^(3)_k(…; ⟨5|4|1⟩2) (|2⟩5⟨5|4|1⟩2+|3⟩6⟨4|6|3⟩1+m_1^2|1⟩4)^s_1-s_2-s_3+k ,where the dots stand for the variables |1⟩4,|2⟩5,|3⟩6, which we know that the amplitude will be eventually independent of. But you see that in this case the coefficients c^(2)_k and c^(3)_k are not necessarily constants, since they depend also on a square-product, ⟨4|6|3⟩1 and ⟨5|4|1⟩2 respectively. 
Of course, if we require the matching of these two different expressions of the same function, and impose the additional constraint coming from the action of J^+_1, we could in principle fully specify the form of f_3, and extract as well some restrictions on the allowed spins. But in practice this is unfortunately too cumbersome, and it is unlikely possible to obtain a final expression for arbitrary spins, as for f_2 (<ref>-<ref>). However, it is (easily) feasible to work it out case by case, with given (little) values of the spins.We conclude this section remarking that also this amplitude will eventually depend on several constants. If we take s_1 to be the highest of the three spins, then we can be quickly convinced that the number of constants cannot exceed s_2·s_3. Of course, it would be interesting to precisely determine this number. § BRITTO-CACHAZO-FENG-WITTEN RECURSION RELATIONS Recursion relations for scattering amplitudes are in general relations connecting n-point amplitudes to lower-point ones, that can be thus applied recursively to construct arbitrarily-high-point amplitudes from lower-point information. You see that if we dispose of such a powerful tool, then the three-point amplitudes, which we have determined in chapter <ref>, constitute the fundamental starting point from which we would be able to recursively construct any other higher-point amplitude.Before proceeding, we clarify that recursion relations are (so far) based on arguments that are valid order by order in the perturbative expansion. So, the methods we will discuss in this chapter are not non-perturbative, as those discussed in the previous one, which were based on symmetries.The first principles that we will use to derive the on-shell recursion relations are locality, analyticity, unitarity. Locality enters the discussion through the cluster decomposition principle, which we have introduced in Chapter <ref>, page cluster. This assures that the amplitude, once we have singled out the delta function of momentum conservation, exhibits no other delta-like singularities. Yet, locality of interaction will not be explicitly manifest in this context, as it is in a Lagrangian formulation of quantum field theory; actually, it can be violated in some intermediate steps[ In the Lagrangian approach, the price for manifest locality is the gauge redundancy: Feynman diagrams are gauge dependent, even if the final results, after precise cancellations among different terms, is of course gauge invariant. In BCFW recursions some intermediate pieces can present non-local singularities, which are removed from the final result by mutual cancellations. On the other hand, gauge invariance is assured along each step. ]. Analiticity is the assumption that the amplitude is an analytic function of the kinematic variables. This eventually means that its singularity structure, since we have ruled out delta functions, is made of poles and branch cuts. Moreover, we are allowed to analytically continue the amplitude to complex momenta, in order to exploit the power of complex analysis and determine it from its singularities. Unitarity is essential to sensibly define a probability amplitude. For the S-matrix, as we have already seen, unitarity results in the condition (<ref>). This implies in particular that at the locus of a singularity, which corresponds to one or several of the involved particles going on-shell, the amplitude factorizes into sub-amplitudes with lower number of external legs and/or at lower perturbative order. 
It is clear that this factorization property is the crucial one to have recursion relations, and so we will discuss it a bit more in detail.Consider an n-particles scattering process. Imagine that a subgroup of the n external momenta, containing k of them, squares to one of the physical masses of the considered asymptotic states:𝔭_^2=(∑_p_i∈ p_i)^2=m^2 .This would corresponds to the production of an intermediate particle, which would then decay into the remaining n-k particles, with the amplitude factorizing into two sub-amplitudes, with k+1 and n-k+1 external legs respectively, exchanging the intermediate particle (see fig. <ref>).Of course we need 2≤k≤n-2, for each of the sub-amplitudes in (<ref>) to have at least three legs. Any intermediate state through which this factorization can occur is call factorization channel. Such splitting of the total process into two sub-processes is not only possible, but infinitely more likely than all the particles interacting together at once. This infinitely greater probability is embodied by a simple pole singularity in the amplitude, located in momentum space where the on-shell condition of the intermediate particle is met. In formulæ:M_n ∼∑_ M_n-k+1 1/|_^2-m^2|M_k+1 ,where M_n is at a given perturbative order, and _ is the one defined in (<ref>).[ For a formal derivation of the factorization property from unitarity of the S-matrix, you can check Chapter 4 of the book “The Analytic S-Matrix” <cit.>, or Section 1.6 of Conde's lecture notes <cit.>. ]The formula (<ref>) is potentially defining an on-shell recursion relation, that is it can be used in the opposite direction (from right to left) to build an n-point amplitude from lower-point on-shell information. We stress on the word on-shell (both of the pieces that the amplitude factorizes in are physical, gauge-invariant, on-shell amplitudes), since there exist off-shell recursion relations as well, where the amplitude is reconstructed recursively, but from off-shell information (Berends-Giele recursion relations <cit.> are an example).Of course we should consider the possibility of more than one particle exchanged in the factorization channel, and so two or more internal propagators going simultaneously on-shell, yielding a branch cut, instead of a simple pole. This actually corresponds to loop contributions, whereas at tree-level we can only have simple poles. That is why we restrict to three level from now on, where we are completely able to establish recursion relations for the amplitude.Now we have to provide an operational method to actually compute the contributions of the poles in (<ref>), in order to realize recursion relations. That will be achieved by using the power of complex analysis and Cauchy theorem, by virtue of the assumed analyticity of the S-matrix.recursion relations entail indeed complex deformation of some of the external momenta. Britto-Cachazo-Feng-Witten is a particular form of recursion relations, where only two (the minimal amount) of the external momenta are deformed. They were first discovered by Britto, Cachazo, and Feng <cit.> in the context of one-loop Yang-Mills amplitudes, and then directly proven for generic tree-level amplitudes by the same three authors together with Witten <cit.>.Let us then consider a tree-level amplitude with n external (real) momenta, and let us shift two of them, for i=a,b, in the following wayp_a⟶ p_a -z q ≡p̂_a , p_b⟶ p_b +z q ≡p̂_b ,withz∈ℂ .Of course this kind of shift does not affect momentum conservation, ∑_ip_i=0. 
But we want to preserve `on-shellness' as well, then:{[ p̂_a^2 = p_a^2 -2zq · p_a +z^2 q^2 = p_a^2; p̂_b^2 = p_b^2 +2zq · p_b +z^2 q^2 = p_b^2 ].⇔{[ q^2 = 0; q · p_a = 0 = q · p_b ]. .So the shifting momentum q must be light-like, and orthogonal to both p_a and p_b. In four or higher dimensions, such a q always exists, if we allow it to be complex.Then the amplitude, expressed in terms of these shifted momenta, gets a dependency on the complex variable z. It is by construction an holomorphic function of z, and it matches for z=0 the original amplitude. Then, moving away from the origin in the z-complex-plane, we will intercept the singular points, eventually corresponding to physical poles. The gain of deforming momenta is indeed that of translating the physical poles in the kinetic variables into poles in the unique holomorphic variable z.Then we can use Cauchy theorem on the `shifted' amplitude M̂^(a,b)_n(z), and stateR_n^∞=∑_z_≠0z=z_Res[M̂^(a,b)_n(z)/z] + M̂^(a,b)_n(0) ,where z_ are the locations of the poles, R_n^∞ represents the residue at complex infinity, and we have singled out from the sum the residue in zero, which is actually the physical, non-shifted amplitude that we want to determine. The residues at finite z can be determined in a general fashion, whereas the residue at infinity can be an issue. In some specific cases it can be proven to be zero, or explicitly computed, either resorting to Lagrangian-based arguments, or to other principles. More in general, the existence or not of the residue at infinity depends on the pair of momenta, p_a and p_b, that we decide to deform. So, as we will see, it may occur that for some particular shifts the residue at infinity vanishes, letting us to safely apply the recursion formula, whereas for some other shifts it does not vanish. In any case, we have to deal with this issue, if we want to apply these techniques.Let us assume from now on that we are in a case where the residue at infinity vanishes for the chosen shift (a,b). Then any pole z_ would correspond to a factorization channel, and so to a partition of the external particles intoand(complement of ), and from the factorization property (<ref>) we can writeM_n=M̂^(a,b)_n(0)=-∑_z_≠0z=z_Res[M̂^(a,b)_n(z)/z]= ∑_z_≠0 M̂_(z_) 1/ 𝔭_^2-m^2M̂_(z_) .Besides the sum over the poles, that is over specific partitions of the external particles, we have also to take a sum over all possible internal states (different masses m, helicities, spins, etc...), which we have omitted here. Of course, the partition must contain at least two momenta and at most n-2 of them, since the sub-amplitudes cannot have less than three legs.In order to be able to explicitly write down (<ref>), we need to determine the locations of the poles. To do so, we require the shifted internal momentum to go on-shell at the location of the pole. Of course, for any given shift (a,b), only partitions where the shifted momenta p̂_a and p̂_b are on opposite sides give a shifted internal momentum, thus contributing to the sum. So let us call Å a subset of external momenta containing p_a, andthe complementary subset, which is containing p_b. Then the internal momentum will be given by _Å=∑_p_i∈Å p_i, which will inherit the same shift (<ref>) as p_a: _Å=_Å-zq. The location of the pole z_Å is the value of z such that the shifted momentum _Å goes on-shell, that is:0=_Å^2-m^2 = (_Å^2-m^2)(1-z 2q·_Å/ _Å^2-m^2 ) = (_Å^2-m^2)(1-z/z_Å) ⇒ z_Å=_Å^2-m^2/2q·_Å . 
Then we can finally rewrite (<ref>) asM_n=∑_Å M̂_(z_Å) 1/ _Å^2-m^2M̂_Å(z_Å) ,where z_A is given by (<ref>), and again we are omitting the sum over different internal physical states.We have thus a very general picture of how on-shell recursion relation can be derived for tree-level amplitudes. We stress that, whereas the rest of these notes is firmly grounded in four dimensions, the derivation depicted here does not rely on anything specific to four dimensions, and it holds indeed in higher dimensions as well[ In lower dimensions there is the issue that the shifting momentum q with the required properties does not exist. However, in some cases recursion relations can be generalized to three dimensions, as for instance for Chern-Simons theories with matter <cit.>. ]. Furthermore, our discussion is valid for massive particles as well as for massless ones, even if the original BCFW papers <cit.> were dealing only with massless Yang-Mills theory. The extension of BCFW recursion to massive particles is due to Badger et al. <cit.>. At the same time, the amplitudes and the involved momenta could be equivalently expressed in spinor-helicity formalism as well as in four-vector language. However, in particular for massless particles, since we are dealing with complexified and on-shell momenta, the spinor-helicity formalism is the ideal tool for expressing on-shell recursion relations; and indeed it was used in the original papers <cit.>.So, before moving to some practical applications of BCFW recursions (for massless particles), we briefly reformulate the BCFW shift in spinor-helicity language.§.§.§ BCFW shift in spinor-helicity formalism Let us consider the shift (<ref>) when both p_a and p_b are light-like. Then we can write p_a=λ_aλ̃_a, and p_b=λ_bλ̃_b. The shifting momentum q has to be light-like anyway, so we also write q=μμ̃. Then the BCFW shift (<ref>) rephrases asλ_aλ̃_a⟶ λ_aλ̃_a -z μμ̃ , λ_bλ̃_b⟶ λ_bλ̃_b +z μμ̃ ,and the additional orthogonality conditions for q read|μ⟩λ_a⟨λ̃_a|μ̃ = 0 = |μ⟩λ_b⟨λ̃_b|μ̃.There are two distinct solutions satisfying such conditions, that is either q=λ_aλ̃_b, or q=λ_bλ̃_a. Notice that for both solutions only one spinor for p_a and only one spinor for p_b are shifted, namely:q=λ_aλ̃_b⇒ {[ λ̃_a→ λ̃_a -z λ̃_b;λ_b→ λ_b +z λ_a ].; q=λ_bλ̃_a⇒ {[λ_a→ λ_a -z λ_b; λ̃_b→ λ̃_b +z λ̃_a ]..The first option is conventionally referred to as [a,b⟩-shift, while the second one as ⟨ a,b]-shift.We are now ready to apply these techniques to build up tree-level massless amplitudes.§.§ Parke-Taylor formula Parke-Taylor formula is a stunning result for n-point tree-level gluon amplitudes, which was `empirically' inferred by Parke and Taylor in 1986 <cit.>. It is a formula for maximally helicity violating (MHV) n-gluon tree-level amplitudes in Yang-Mills theory. MHV means that all gluons have the same helicity, except for two of them. It is maximally helicity violating, since actually amplitudes where all gluons have the same helicity or at most one has different helicity both vanish for any number of external particles:A_n(±,…,±)=0, A_n(∓,±,…,±)=0,where we have used the letter A, rather than M to specifically indicate a tree-level amplitude, and the same we will do from now on. 
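Before using the shift, it may be helpful to check its defining properties numerically. The sketch below is ours, not part of the original notes: in the bispinor representation p_αα̇ = λ_α λ̃_α̇ one has p^2 proportional to det p (up to a sign convention), so it is immediate to verify that q = λ_aλ̃_b is null and orthogonal to p_a and p_b, and that the [a,b⟩-shift preserves both on-shellness and the sum p_a + p_b. The random seed and the value of z are of course arbitrary, and the kinematics is complex, which is all we need here.

import numpy as np

rng = np.random.default_rng(1)
rnd = lambda: rng.normal(size=2) + 1j*rng.normal(size=2)     # random Weyl spinor

la, lta = rnd(), rnd()                    # p_a = lambda_a lambdatilde_a
lb, ltb = rnd(), rnd()                    # p_b = lambda_b lambdatilde_b
pa, pb = np.outer(la, lta), np.outer(lb, ltb)

sq  = lambda p: np.linalg.det(p)                      # p^2 (up to a sign convention)
dot = lambda p, q: 0.5*(sq(p + q) - sq(p) - sq(q))    # p.q from (p+q)^2 = p^2+q^2+2p.q

q = np.outer(la, ltb)                     # shift vector of the [a,b>-shift

print(abs(sq(q)), abs(dot(q, pa)), abs(dot(q, pb)))   # all vanish

z = 0.7 - 0.3j                            # arbitrary complex shift parameter
pa_hat, pb_hat = pa - z*q, pb + z*q
print(abs(sq(pa_hat)), abs(sq(pb_hat)))               # shifted legs stay massless
print(np.max(np.abs(pa_hat + pb_hat - pa - pb)))      # their sum is unchanged

All printed numbers vanish up to rounding, mirroring the conditions q^2=0, q·p_a=q·p_b=0 derived above.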
Moreover, we are here considering color-ordered gluon amplitudes, which means that the color structure, coming from traces of the generators of the non-abelian gauge group, has been singled out, yielding an amplitude where the order of the particle is fixed, yet still enjoying a symmetry under cyclic permutations of the external legs. It is thanks to this cyclic invariance that we have always the right to move the gluon of different helicity to the first position. These results are recovered by Feynman graph calculations after cancellation of various (gauge dependent) terms. They can be proven by Lagrangian techniques based on Lorentz structures (see for instance Section 2.7 of <cit.>). At loop-level, these amplitudes are not vanishing anymore.The first non trivial tree-level amplitudes are the MHV ones, where two gluons have different helicities from all the others, and Parke and Taylor realized that, even with increasing number of external legs, they keep a very simple form, i.e.:A_n(…,i^-,…,j^-,…)= |i⟩j^4|1⟩2|2⟩3⋯|n-1⟩n|n⟩1 , A_n(…,i^+,…,j^+,…)= ⟨j|i^4⟨1|n⟨n|n-1⋯⟨3|2⟨2|1 ,where we have omitted the powers of the coupling constant for the sake of neatness. This very simple results is again coming out of precise cancellations among different Feynman diagrams, whose number dramatically increases with increasing number of external legs n:[ See Appendix A of <cit.>, if you really want to check it...] n 3 4 5 6 7 ⋯ 10 # of diagrams 1 4 25 220 2485 ⋯ 10^·525^·900 The formula (<ref>), guessed by Parke and Taylor, was first proven by Berends and Giele <cit.> through their off-shell recursion relations. We will show here the inductive proof <cit.> based on BCFW on-shell recursion relations.The starting amplitude is the three-point one, which we can write, for two negative helicities and two positive helicities respectively, from the expressions (<ref>-<ref>)[ The reader can notice the all-plus or all-minus amplitudes can be non-zero from (<ref>-<ref>). However, they would have a coupling of different dimensions with respect to the coupling of the amplitudes (<ref>-<ref>), corresponding thus to a different kind of cubic interaction, if considered as tree-level vertices. As we have already underlined, all-plus and all-minus amplitudes vanish only at tree-level in Yang-Mills theory, but they can be non trivial at loop-level. Expressions (<ref>-<ref>) are non-perturbative, so they of course take into account beyond tree-level possibilities. ]. Again omitting the coupling constant, we haveA_3(-,-,+)≡ M_H^-1,-1,+1 = |1⟩2^4/|1⟩2|2⟩3|3⟩1 , A_3(+,+,-)≡ M_A^+1,+1,-1 = ⟨2|1^4/⟨2|1⟨1|3⟨3|2 ,which indeed match the Parke-Taylor expressions (<ref>) for n=3.So, we already have that Parke-Taylor formula holds for n=3. Now we want to use BCFW recursion to prove it for arbitrary n. We assume that the formula holds for n-1, and we will obtain the formula for n external legs. Another ingredient that we will use is the fact that all the amplitudes with at most one different helicity vanish (<ref>-<ref>). We postpone the proof of that at the end of this section, for the sake of readability. We will show the computation for the `mostly-plus' MHV amplitude, M_n^tree(-,-,…), the computation for the `mostly-minus' being completely identical. First of all we have to assure that there exists a shift that makes the residue at infinity vanish. As it was shown already in the original BCFW paper <cit.>[ The argument of <cit.> is based on Feynman rules. 
In <cit.> the large z behaviors under different shifts for four-dimensional Yang-Mills theory have been determined also by non-Lagrangian approaches. ], this is the case for gluon amplitudes with the shifts [-,-⟩ (⟨-,-]), [-,+⟩ (⟨+,-]), [+,+⟩ (⟨+,+]). On the contrary, the term at infinity does not vanish for the shift [+,-⟩ (⟨-,+]). As an indication that something is wrong with these latter combinations, we can check a fortiori on the results (<ref>) that they explode for z→∞ precisely when we shift the `non-tilded' spinor of a negative-helicity leg and the `tilded' spinor of a positive-helicity leg; on the contrary, under all other shifts, we find a fall-off as z^-1 or faster.We choose the valid shift ⟨ n^+,1^-], that is, from eq.s (<ref>),λ_n→ λ_n -z λ_1 , λ̃_1→ λ̃_1 +z λ̃_n ,which will turn out to make the computation particularly simple. Then we consider the factorization formula (<ref>): since the amplitude factorizes around poles in z, then the shifted legs 1̂ and n̂ have to appear in opposite sub-amplitudes, otherwise the internal momentum would not be shifted. Moreover, since we are considering color-ordered amplitudes, the position of the external legs cannot shuffle. Thus, we writeA_n(1^-,2^-,…) = ∑_k=3^n-2∑_ h_k=± A_n-k+1(k+1^+,…,n̂^+,𝔭̂_k^h_k)1/ 𝔭_k^2 A_k+1(1̂^-,2^-,…,k^+,-𝔭̂_k^-h_k) ,where 𝔭_k=p_1+⋯+p_k=-(p_k+1+⋯+p_n). You see that the exchanged momentum 𝔭_k has to appear with opposite sign and helicity in the two sub-amplitudes, since it is incoming on one side whereas is outgoing on the other side. Then we notice that the sub-amplitude on the left side can have at most one negative helicity, so for (<ref>-<ref>) it vanishes for any number of external legs, except three. The three-point amplitude for all positive helicities is zero as well, so the only non-vanishing contribution is for k=n-2 and h_k=-1, that isA_n(1^-,2^-,…) = A_3(n-1^+,n̂^+,-𝔭̂_n^-) 1/ 𝔭_n^2 A_n-1(1̂^-,2^-,…,n-2^+,+𝔭̂_n^+) ,where 𝔭_n=p_n+p_n-1. You see that the n-point Parke-Taylor formula is indeed recovered from the n-1 version, together with the basic three-point block.Inferring the expression for the three-point amplitudes from (<ref>), and using the Parke-Taylor formula (<ref>) for n-1, we obtainA_n(1^-,2^-,…) = ⟨n|n-1^3⟨n-1|𝔭̂_n⟨𝔭̂_n|n1/ 𝔭_n^2 |1⟩2^4|1⟩2⋯|n-2⟩-𝔭̂_n|-𝔭̂_n⟩1 ,where we have already used n̂]≡ n] and 1̂⟩≡1⟩, since our shift (<ref>) affects λ_n and λ̃_̃1̃, and not λ̃_n and λ_1. A technical point would be the choice of sign in expressing -𝔭̂_n=|-𝔭̂_n⟩[-𝔭̂_n|=-|𝔭̂_n⟩[𝔭̂_n| in terms of spinors. The ℤ_2 ambiguity in the definition allows us to attribute the minus sign either to the `angle' spinor (|-𝔭̂_n⟩≡-|𝔭̂_n⟩, [-𝔭̂_n|≡[𝔭̂_n|), or to the `square' spinor. However, it makes no difference, since |-𝔭̂_n⟩ appears an even number of times in (<ref>).Then, we have𝔭_n^2=2p_n· p_n-1 = |n⟩n-1⟨n-1|n ,|n-2⟩𝔭̂_n⟨𝔭̂_n|n = n-2𝔭̂_nn = n-2p̂_n+p_n-1n = |n-2⟩n-1⟨n-1|n , |𝔭̂_n⟩1⟨n-1|𝔭̂_n = 1𝔭̂_nn-1 = |1⟩n̂⟨n|n-1 = |1⟩n⟨n|n-1 ,where we have used the definitions (<ref>–<ref>), and in the last line the fact that |1⟩n̂=|1⟩n, which comes straightforwardly from (<ref>).Putting all this into (<ref>) we immediately get the desired result:A_n(1^-,2^-,…) = |1⟩2^4|1⟩2⋯|n⟩1 .The simplicity and conciseness of this proof is one of the neatest demonstration of the power of BCFW recursion for tree-level amplitudes. The key element of such power is that the different contributions to BCFW recursion relation are made up of gauge-invariant objects: the on-shell sub-amplitudes and the exchanged propagators. 
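As an aside, and purely as an illustration of ours rather than something contained in the original references, the final formula is simple enough to be coded up directly. The sketch below evaluates the colour-ordered mostly-plus MHV amplitude from randomly chosen angle spinors (the coupling is stripped off, and the sign convention for the bracket ⟨ab⟩ is an arbitrary choice) and checks its little-group weights: rescaling λ_m → tλ_m must produce a factor t^{+2} for a negative-helicity leg and t^{-2} for a positive-helicity one.

import numpy as np

rng = np.random.default_rng(2)
n = 6
lams = [rng.normal(size=2) + 1j*rng.normal(size=2) for _ in range(n)]

ang = lambda a, b: a[0]*b[1] - a[1]*b[0]          # <ab>, sign convention assumed

def mhv(lams, i, j):
    # colour-ordered mostly-plus MHV amplitude <ij>^4 / (<12><23>...<n1>)
    n = len(lams)
    den = np.prod([ang(lams[k], lams[(k+1) % n]) for k in range(n)])
    return ang(lams[i], lams[j])**4 / den

i, j = 0, 3                                       # the two negative-helicity legs
A = mhv(lams, i, j)

t = 1.3 + 0.4j
for m, h in [(i, -1), (1, +1)]:                   # one negative and one positive leg
    scaled = list(lams)
    scaled[m] = t*lams[m]
    print(np.isclose(mhv(scaled, i, j), t**(-2*h)*A))   # True, True

Since the mostly-plus formula depends on the angle spinors only, the compensating rescaling λ̃_m → λ̃_m/t plays no role in this particular check. The brevity of both the formula and the check again reflects the fact that the objects entering the recursion are individually gauge invariant.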
The gauge redundancy of local Lagrangians leads to a proliferation of gauge dependent contributions, the Feynman diagrams. Gauge invariance is restored in the final result upon cancellations among the various gauge dependent terms. So, the gauge redundancy is somehow the price to pay for manifest locality, which we have in a local field theory. On the contrary, in on-shell factorization, locality is not manifest: it underlies in the fact that singularities come from poles of propagators. However, in more involved calculations than this one, where more than one contributions sum up, some additional `spurious' poles can arise, meaning poles which are not those of the physical factorization channels. These non-physical poles eventually cancel out among different contributions in the recursive formula, in an analogous way as gauge dependency is cleared up by sum of different Feynman diagrams. However, generally on-shell recursive techniques (where they apply) drastically reduce the number of contributions with respect to Feynman diagrams.Moreover, we have noticed that, provided we can assure that we have no residue at infinity, we are free to choose the BCFW-shift among various possibilities. Of course, different shifts must yield the same result for a given amplitude, yet the number of contributions and the simplicity of the computation may depend on the chosen shift. So, it is often possible to choose a more favorable shift that optimizes the BCFW computation[ For instance, we invite you to derive again the result (<ref>) by the [n-1^+,n^+⟩ or [1^-,2^-⟩ shifts. You would encounter one difficulty more with respect to the computation presented here. ].On the other hand, this possibility of computing the same amplitude through different BCFW shifts can also be used to constrain the amplitude. Indeed, in some cases different shifts give different results for the same amplitude. Requiring the final answer to be unique thus gives some conditions on the considered amplitude, and so on the form of the relative interaction. It is the idea at the core of the four-particle test, which we illustrate in the next section. Before moving to next section, let us show that starting form the three-point vertices (<ref>-<ref>) we cannot obtain non-vanishing amplitudes with at most one different helicity (<ref>-<ref>), which completes our proof of Parke-Taylor formula.Quite straightforwardly, the four-amplitudes with all negative or all positive helicities cannot be generated from the considered vertices (<ref>-<ref>), since, as we have seen, the intermediate particle must have opposite helicity on opposite sides. If A_4(∓,∓,∓,∓)=0, then recursively all other amplitudes with all equal helicities are zero as well.In order to prove that also the amplitudes where only one helicity differs from the others are zero (<ref>), we have to show that certain BCFW contributions vanish. In particular, we will see that any three-point sub-amplitude of the form (<ref>-<ref>) where one of the shifted spinors appears explicitly is zero. This can be intuitively understood recalling that in an on-shell `holomorphic' three-point amplitude (<ref>) the relative `tilded' spinors are all proportional (all the square-products vanishing), while in an on-shell `anti-holomorphic' three-point amplitude (<ref>) the relative `non-tilded' spinors are all proportional (all the angle-products vanishing). 
We have already seen in the derivation of the Parke-Taylor formula that the shifted momenta have to be on different sides of the factorization in order to contribute: in such case we use the shifting variable z to put the exchanged momentum on-shell, so getting on-shell amplitudes on both sides of the factorization. With the BCFW shift, we shift only one `non-tilded' spinor and only one `tilded' spinor. The shift of the `tilded' (`non-tilded') spinor makes all the square-products (angle-products) vanish at the location of the pole, so that the `holomorphic' (`anti-holomorphic') three-point sub-amplitude is on-shell and non-zero, whereas the `anti-holomorphic' (`holomorphic') sub-amplitude would contain the shifted `tilded' (`non-tilded') spinor and be zero.Let us see this in a practical case, which will be a bit tedious, but useful to be convinced once for all, and then automatically discard these kind of contributions in any future computation. Let us thus consider a BCFW contribution of the kindA_3(-𝔭̂_ij^-,i^ -,^ +)1/𝔭_ij^2 ⋯ = |𝔭̂_ij⟩i^3|i⟩|⟩𝔭̂_ij 1/𝔭_ij^2 ⋯ ,where 𝔭̂_ij=p_i+p̂_j, and the dots stands for the other on-shell sub-amplitude which completes the factorization. We are performing a ⟨ j^+,k^h_k]-shift (which is a `safe' shift, independently of the value of h_k!), with p_k being of course in the sub-amplitude in the dots, and so we are shifting the `non-tilded' spinor of p_j:λ_j→ λ_j -z λ_k .We want to prove that the three-point sub-amplitude in (<ref>), which explicitly exhibits the shifted spinor λ_j, is made of spinor products that all vanish at the location of the pole, and so vanishes as well (three powers in the numerator dominate over two powers in the denominator).Requiring as usual the intermediate momentum to be on-shell, we find the location of the pole z_ij,0=𝔭̂_ij^2=|i⟩⟨j|i=⟨j|i(|i⟩j-z_ij|i⟩k) ⇔ z_ij=|i⟩j|i⟩k .Therefore at the location of the pole the angle-product |i⟩ goes to zero. Then we have𝔭̂_ij= λ_iλ̃_i+(λ_j-z_ij λ_k) λ̃_j = λ_iλ̃_i-|k⟩i λ_j+|i⟩j λ_k/|i⟩k λ̃_j = λ_i(λ̃_i+|j⟩k/|i⟩k λ̃_j) ⇒|𝔭̂_ij⟩=λ_i ,[𝔭̂_ij|= λ̃_i+|j⟩k/|i⟩kλ̃_j ,where we have used the Schouten identity (<ref>) in the last step of the first line. Thus, we have that |⟩𝔭̂_ij=|⟩i, which goes to zero at the location of the pole (<ref>). To deal with the numerator in (<ref>), we multiply and divide[ Of course, we have to worry about whether we are multiplying and dividing for something vanishing. Using the value of [𝔭̂_ij| in (<ref>), it can be immediately checked that ⟨i|𝔭̂_ij is not zero at the location of the pole. ] for ⟨i|𝔭̂_ij^3, and use⟨i|𝔭̂_ij|𝔭̂_ij⟩i=i𝔭̂_iji = |i⟩⟨j|i ,which is also proportional to the vanishing angle-product |i⟩.Thus, as announced, all the spinor products of the three-point sub-amplitude in (<ref>) go to zero at the location of the pole as |i⟩∝ z-z_ij. In virtue of one power more at the numerator, the three-point sub-amplitude vanishes, and so the whole BCFW term (<ref>).With identical procedure, it can be checked for all the cases where the `non-tilded' shifted spinor explicitly appears in a `holomorphic' sub-amplitudeand for all the cases where the `tilded' shifted spinor explicitly appears in a `anti-holomorphic' three-point sub-amplitude. We summarize these useful results, which will be helpful already in next section:A_3(-𝔭̂_ij^-,i^ -,^ +) = A_3(-𝔭̂_ij^-,^ -,i^+) = A_3(i^ -,^ -,-𝔭̂_ij^+)=0 ,when λ_j is shifted; A_3(-𝔭̂_kl^+,l^+,k̂^-)= A_3(-𝔭̂_kl^+,k̂^+,l^-) = A_3(k̂^+,l^+,-𝔭̂_kl^-)=0 , when λ̃_k is shifted. 
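These statements are again easy to confirm numerically. The following sketch (ours, not from the original text) builds the configuration discussed above out of random spinors, takes the pole location z_ij = ⟨ij⟩/⟨ik⟩ and the shift of λ_j quoted above, and checks that the angle product of i with the shifted j vanishes, that the intermediate momentum goes on shell, and that its angle spinor is proportional to λ_i; the bracket sign convention is again an assumption.

import numpy as np

rng = np.random.default_rng(3)
rnd = lambda: rng.normal(size=2) + 1j*rng.normal(size=2)
ang = lambda a, b: a[0]*b[1] - a[1]*b[0]             # <ab>, sign convention assumed

li, lti = rnd(), rnd()        # massless leg i
lj, ltj = rnd(), rnd()        # massless leg j, whose angle spinor is shifted
lk = rnd()                    # angle spinor of the reference leg k of the shift

z_ij = ang(li, lj) / ang(li, lk)                     # location of the pole
lj_hat = lj - z_ij*lk                                # lambda_j -> lambda_j - z lambda_k

p_hat = np.outer(li, lti) + np.outer(lj_hat, ltj)    # intermediate momentum at the pole

print(abs(ang(li, lj_hat)))                          # <i j-hat> -> 0
print(abs(np.linalg.det(p_hat)))                     # p_hat^2 -> 0: it goes on shell
print(abs(ang(li, p_hat[:, 0])), abs(ang(li, p_hat[:, 1])))   # columns ~ lambda_i

All outputs vanish up to rounding: at z = z_ij the angle spinors of the `holomorphic' sub-amplitude degenerate, which is the mechanism behind the vanishing of the contributions listed above.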
With these results we can now analyze the four-point amplitude where all helicities are equal but one. As you can see from figure <ref>, such four-point amplitudes necessarily factorizes into two three-point sub-amplitudes of the same kind, that is either both `holomorphic' (<ref>), or both `anti-holomorphic' (<ref>). In both cases, since the shifted momenta have to be on different sides of the factorization, one of the two sub-amplitudes contains the `non-tilded' shifted spinor, and the other one contains the `tilded' shifted spinor. So, either they are both `holomorphic' or both `anti-holomorphic', one of the two sub-amplitudes will explicitly exhibit one of the shifted spinors, and so it will vanishes. Thus, the whole four-point amplitude where all helicities but one are equal is zero. Then we straightforwardly obtain by induction that all higher-point amplitudes of the form (<ref>) vanish as well.§.§ Four-particles test In principle, if we compute a given amplitude through BCFW with different shifts, we expect to obtain the same result. However, if we specify to some particular form of three-point interaction, different shifts can give different results for the same amplitude. This of course is not sensible, and so we require the two results to be equivalent, deriving some condition on the kind of interaction we are considering. Essentially, we demand the four-point amplitude to be compatible with the chosen form of three-point interaction. This requirement, very similar in spirit to what is done in the bootstrap paradigm, was dubbed `four-particle test' by Benincasa and Cachazo, as they introduced it in their work <cit.>.We present here as an example a particularly nice application which was worked out in that paper <cit.>. We consider the cubic vertices of Yang-Mills gauge vector bosons (without color-ordering), that is vertices made of three spin-1 massless particles of various species (colors). The four-particle test in this case requires the coupling constants of such vertices to satisfy the Jacobi identity, letting emerge their connection with the structure constants of a non-abelian gauge group. This result is quite surprising, since in our basic initial ingredients (the existence of Lorentz-invariant three-point amplitudes of massless vector bosons of different kinds) there is no assumption of an underlying Lie algebra.The three-point vertices we are going to consider are thusA(i^-_a,j^-_b,k^+_c)=κ_abc|i⟩j^3|j⟩k|k⟩i , A(i^+_a,j^+_b,k^-_c)=κ_abc⟨j|i^3⟨i|k⟨k|j ,whose form is given by (<ref>, <ref>), where we allow for different coupling constants depending on the species of the external particles. On the contrary, we take the coupling to be the same for both expressions in (<ref>), that is for vertices with the same species of particles but opposite helicities. This corresponds to consider a parity-invariant interaction (sign inversion of the spatial part of momentum produces flipping of the helicity). Since the two vertices are connected by complex conjugation, we also have that the couplings are real. Moreover, these amplitudes flip sign when we exchange two particles, so the couplings must be completely antisymmetric in order not to violate the crossing symmetry (changing labels should not affect the amplitude). Finally, from the fact that the three-point amplitude has mass dimensions one, we have that the considered coupling is dimensionless, [κ_abc]=0.We want to build four-point amplitudes out of this type of three-point interaction, using BCFW techniques. 
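Before running through the test, it is worth anticipating its outcome: as will be derived just below, consistency of the two BCFW constructions forces the couplings κ_abc to obey the Jacobi identity. For a concrete antisymmetric choice of couplings this is easy to check numerically; the sketch below is ours, with the SU(2) structure constants ε_abc used as a hypothetical example of such couplings.

import numpy as np

# hypothetical example: take kappa_abc to be the SU(2) structure constants,
# i.e. the totally antisymmetric Levi-Civita symbol in three 'colours'
kappa = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    kappa[a, b, c] = +1.0
    kappa[a, c, b] = -1.0

jacobi = (np.einsum('abe,cde->abcd', kappa, kappa)
          + np.einsum('ace,dbe->abcd', kappa, kappa)
          + np.einsum('ade,bce->abcd', kappa, kappa))
print(np.max(np.abs(jacobi)))     # 0.0

Any set of structure constants of a Lie algebra would of course do equally well; the point of the four-particle test below is the converse, namely that BCFW consistency of the four-gluon amplitude forces precisely this identity on the κ_abc.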
As we have seen in detail at the end of previous section, because of (<ref>), the four-point amplitudes where at most one helicity differs from the others are all zero. So the only remaining possibility is the configuration with two positive and two negative helicities. Let us explicitly work out the case A_4(1_a^-,2_b^+,3_c^-,4_d^+), first using the [1^-,2^+⟩-shift, then the [1^-,4^+⟩-shift, and eventually requiring the two outcomes to be equal.For the the [1^-,2^+⟩-shift, we have the following two contributions (fig. <ref>)[ As displayed in figure <ref>, with the first contribution we could have considered also the one with flipped helicity for the intermediate momentum 𝔭̂_14: but this term is zero, again because it involves three-point amplitudes that explicitly contain the shifted variables, λ̃_1 and λ_2, which thus vanish at the location of the pole, as one of (<ref>). ]A_4^[1,2⟩(1_a^-,2_b^+,3_c^-,4_d^+) = = A(𝔭̂_14^+_e,2̂_b^+,3_c^-) 1/𝔭_14^2A(1̂_a^-,-𝔭̂_14^-_e,4_d^+) + A(2̂_b^+,4_d^+,𝔭̂_13^-_e) 1/𝔭_13^2A(3_c^-,1̂_a^-,-𝔭̂_13^+_e) = κ_daeκ_bce|1⟩4⟨4|1 ⟨2|𝔭̂_14^3⟨𝔭̂_14|3⟨3|2|1⟩𝔭̂_14^3|𝔭̂_14⟩4|4⟩1 + κ_caeκ_bde|1⟩3⟨3|1 ⟨4|2^3⟨2|𝔭̂_13⟨𝔭̂_13|4|3⟩1^3|1⟩𝔭̂_13|𝔭̂_13⟩3 ,with 𝔭_14=p_1+p_4≡-p_2-p_3, and 𝔭_13=p_1+p_3≡-p_2-p_4.The location of the poles is given by0=𝔭̂_14^2=|1⟩4(⟨4|1-z_14⟨4|2)⟺ z_14=⟨1|4⟨2|4=-|2⟩3|1⟩3 ; 0=𝔭̂_13^2=|1⟩3(⟨3|1-z_13⟨3|2)⟺ z_13=⟨3|1⟨3|2=-|2⟩4|1⟩4 .For the first term in (<ref>), we then work out|1⟩𝔭̂_14⟨𝔭̂_14|2 = |1⟩4⟨4|2 ,|4⟩𝔭̂_14⟨𝔭̂_14|3 = |4⟩1⟨1̂|3 = -|1⟩4⟨2|4(⟨1|3⟨2|4+⟨3|2⟨1|4) = |1⟩4⟨2|4⟨2|1⟨3|4 ,where for the second line we have used the expression of the location of the pole z_14 (<ref>) and the Schouten identity (<ref>); and|1⟩𝔭̂_13⟨𝔭̂_13|4 = |1⟩3⟨3|4 ,|3⟩𝔭̂_13⟨𝔭̂_13|2 = |3⟩1⟨1̂|2 = |3⟩1⟨1|2 ,for the second term in (<ref>). We obtain soA_4^[1,2⟩(1_a^-,2_b^+,3_c^-,4_d^+) = κ_daeκ_bce|1⟩4⟨4|1 |1⟩4⟨2|4^4⟨2|1⟨3|2⟨3|4 + κ_caeκ_bde|1⟩3⟨3|1 |1⟩3^2⟨2|4^3|1⟩3⟨2|1⟨3|4 ,This expression can be simplified further thanks to momentum conservation, since|1⟩4⟨2|4= -1p_42=1p_1+p_2+p_32=|1⟩3⟨3|2, |1⟩3⟨3|4=1p_34= -|1⟩2⟨2|4,and it finally nicely readsA_4^[1,2⟩(1_a^-,2_b^+,3_c^-,4_d^+) = -|1⟩3^2⟨2|4^2/ s (κ_daeκ_bce/ t + κ_caeκ_bde/ u) ,where the standard definitions of the Mandelstam variables are employed, i.e.: s=(p_1+p_2)^2 , t=(p_1+p_4)^2 , u=(p_1+p_3)^2 .We now compute the same amplitude through the [1^-,4^+⟩-shift. Similarly to the previous case, we have the following two contributionsA_4^[1,4⟩(1_a^-,2_b^+,3_c^-,4_d^+) == A(4̂^+_d,𝔭̂_12^+_e,3_c^-) 1/𝔭_12^2A(-𝔭̂_12^-_e,1̂_a^-,2_b^+) + A(2_b^+,4̂_d^+,𝔭̂_13^-_e) 1/𝔭_13^2A(1̂_a^-,3_c^-,-𝔭̂_13^+_e)= κ_abeκ_cde|1⟩2⟨2|1 ⟨𝔭̂_12|4^3⟨4|3⟨3|𝔭̂_12|𝔭̂_12⟩1^3|1⟩2|2⟩𝔭̂_12 + κ_aceκ_bde|1⟩3⟨3|1 ⟨4|2^3⟨2|𝔭̂_13⟨𝔭̂_13|4|1⟩3^3|3⟩𝔭̂_13|𝔭̂_13⟩1 .The location of the poles are now given by0=𝔭̂_12^2=|1⟩2(⟨2|1-z_12⟨2|4)⟺ z_12=⟨2|1⟨2|4=-|3⟩4|3⟩1 ; 0=𝔭̂_13^2=|1⟩3(⟨3|1-z̃_13⟨3|4)⟺z̃_13=⟨3|1⟨3|4=-|4⟩2|1⟩2 .With completely analogous manipulations as in the previous computation, we getA_4^[1,4⟩(1_a^-,2_b^+,3_c^-,4_d^+) = κ_abeκ_cde|1⟩2⟨2|1 |1⟩2⟨2|4^4⟨1|4⟨4|3⟨3|2 + κ_aceκ_bde|1⟩3⟨3|1 |1⟩3^2⟨2|4^3|1⟩3⟨3|2⟨1|4 = -|1⟩3^2⟨2|4^2/ t (κ_abeκ_cde/ s + κ_aceκ_bde/ u) .We have repeated the computation for completeness, but notice we could have obtained the expression (<ref>) from (<ref>), by simply exchanging the labels 2 with 4 and b with d.Now we have nothing else to do than comparing the two different results for M_4, (<ref>) and (<ref>), and require them to coincide. 
This yields0 = A_4^[1,2⟩-A_4^[1,4⟩ = |1⟩3^2⟨2|4^2/ st (κ_abeκ_cde -κ_daeκ_bce +κ_aceκ_bde( s+t)/ u) = |1⟩3^2⟨2|4^2/ st (κ_abeκ_cde +κ_adeκ_bce +κ_aceκ_dbe) ,where we have used the antisymmetric property of κ, and the fact that s+t+u=0 by momentum conservation. We have so retrieved the Jacobi identity for the coupling constants, as announced,κ_abeκ_cde +κ_aceκ_dbe +κ_adeκ_bce =0 .We know that from Yang-Mills theory the coupling constants of the gauge vector bosons are expressed as g_YM f_abc, where f_abc are the structure constants of the Lie algebra of the underlying non-abelian gauge group, which indeed satisfy by definition the Jacobi identity.With this charming result, which is a simple but surprisingly non-trivial test of BCFW techniques, we conclude our limited review of applications of BCFW. We hope that we have given the reader the starting tools for continuing the exploration of such a promising technique.§ EPILOGUE We have arrived to the conclusion of our short travel through the world of scattering amplitudes. We hope these notes be a self-consistent story, which can be read in a whole, from the beginning to the end. At same time we wish them to be easily accessible at any point by the reader who is looking for a specific information.[ We would be grateful to the reader for any commentary or remark, to be addressed to the mailto:[email protected]. ] Before bidding farewell to the reader, we shortly summarize the outlines of our discussion.After having recalled the basic properties of a scattering amplitude deriving from a consistent S-matrix theory, we have assumed Poincaré invariance as the symmetry of the spacetime. The representation theory of Poincaré group allowed us to define the asymptotic states corresponding to fundamental particles participating in the scattering process, and provided some constraints for the amplitude. These constraints are coming from the Little Group transformation properties, which are distinct for massless and massive particles. Nonetheless, for the simplest case of the three-point amplitude, the Little Group equations, together with momentum conservation and on-shell conditions, are able to completely fix the kinematic dependency of the amplitude, either the external legs are massless or massive. Some of these results vanish for real kinematics, but some others correspond to physical processes, constituting thus non-perturbative expressions.Then we have gone beyond the three-point case, thanks to BCFW recursion relations, which on the other hand are based on the validity of a perturbative expansion. We have used them at tree-level to prove by induction the Parke-Taylor formula, and to derive the Jacobi identity for non-abelian Yang-Mills couplings.As already mentioned, the BCFW shift has been generalized to massive particles <cit.>, so a natural continuation of the subject presented here would be to use the massive three-point amplitudes (<ref>, <ref>, <ref>) as building blocks to construct higher-point massive amplitudes. The reader may wonder about the extension of recursive techniques to loop-level. Actually, it is a very though open question, due to the much more involved singularity structure of loop level. First of all, loop-level recursive techniques apply to the integrand rather than the whole amplitude. Then, several on-shell analytic methods for loop integrands exist (see <cit.> for a relatively recent review), but the direct extension of BCFW to loop-level presents some obstructions (well illustrated in Section 7 of <cit.>). 
In the special case of the planar limit of 𝒩=4 super-Yang-Mills, such obstructions have been overcome, and full recursive formulæ for all-loop amplitudes have been derived, based on momentum-twistor duality and the positive Grassmannian <cit.>.In conclusion, we hope to have given a flavor of the fact that on-shell methods, non-perturbative as perturbative ones, if they are far from replacing local quantum field theory, as it was maybe the intention of the original S-matrix program, constitute at least a practically useful and theoretically enlightening alternative, which complements and extends Lagrangian-based techniques.§.§ AknowledgementsFollowing chronological order, let me thank Gabriele Travaglini, since it is by his lectures at Lisbon Summer School on String Theory and Holography in July 2014 that I got first interested in the field of scattering amplitudes. Then I have to thank Eduardo Conde, for having eventually pulled me into this field, and for the enriching and strong relationship of professional collaboration and friendship, on which large part of this work rests on.Furthermore, I am grateful to all the participants of the http://www.ulb.ac.be/sciences/ptm/pmif/Rencontres/ModaveXII/lectures.htmlXII Modave Summer School in Mathematical Physics, for being a patient as well as attentive audience. I finally thank Céline Zwikel, Eduardo Conde, Riccardo Argurio, Roberto Oliveri, and Victor Lekeu for reading the final drafts of these notes, and returning precious feedback. This work is supported by IISN-Belgium (convention 4.4503.15).fancy[] []utphys | http://arxiv.org/abs/1705.09678v1 | {
"authors": [
"Andrea Marzolla"
],
"categories": [
"hep-th"
],
"primary_category": "hep-th",
"published": "20170526184039",
"title": "The four-dimensional on-shell three-point amplitude in spinor-helicity formalism and BCFW recursion relations"
} |
Fermions in Two Dimensions: Scattering and Many-Body Properties
Alexander Galea^1, Tash Zielinski^1, Stefano Gandolfi^2, Alexandros Gezerlis^1
^1 Department of Physics, University of Guelph, Guelph, ON N1G 2W1, Canada
^2 Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
Ultracold atomic Fermi gases in two dimensions (2D) are an increasingly popular topic of research. The interaction strength between spin-up and spin-down particles in two-component Fermi gases can be tuned in experiments, allowing for a strongly interacting regime where the gas properties are yet to be fully understood. We have probed this regime for 2D Fermi gases by performing T = 0 ab initio diffusion Monte Carlo (DMC) calculations. The many-body dynamics are largely dependent on the two-body interactions; therefore we start with an in-depth look at scattering theory in 2D. We show the partial-wave expansion and its relation to the scattering length and effective range. Then we discuss our numerical methods for determining these scattering parameters. We close out this discussion by illustrating the details of bound states in 2D. Transitioning to the many-body system, we use variationally optimized wave functions to calculate ground-state properties of the gas over a range of interaction strengths. We show results for the energy per particle and parametrize an equation of state. We then proceed to determine the chemical potential for the strongly interacting gas.
§ INTRODUCTION In recent years cold atomic gas experiments have seen novel developments <cit.>. With the advent of Feshbach resonances it is now possible to probe the interactions of these systems in regimes ranging from tightly bound Bose-Einstein condensate (BEC) dimers to weakly interacting Bardeen-Cooper-Schrieffer (BCS) pairs. This has allowed for the possibility of experimental verification of ground-state properties, providing a strong motivation for further investigation into these cold dilute gas systems. The BEC-BCS crossover of cold atomic gases is of particular interest due to the existence of a scale-independent unitary regime. In application to the crossover, mean-field theory provides a quantitatively inadequate description, which is, however, useful as a guidepost. As a result, the unitary regime has been the target of several first-principles Quantum Monte Carlo attempts to determine the ground-state properties of these dilute cold atomic gas systems <cit.>. Intriguingly, the BEC-BCS crossover of cold atomic gases is closely related to the physics of neutron matter in compact stars, which is found on the BCS side of the crossover <cit.>. A rich area of study subject to ongoing investigation is that of low dimensionality <cit.>. These dilute cold atomic gases have been trapped using anisotropic potentials, resulting in a quasi-2D pancake-shaped gas cloud. Very recent experiments have used box potentials to directly probe the physics of homogeneous two-dimensional Fermi gases <cit.>. These systems of reduced dimensionality display properties that are distinct from the analogous phenomena in 3D. As will be discussed in detail in the following section, the main difference arises due to the logarithmic dependence on the coupling that appears in 2D. Mean-field BCS theory has also been applied in 2D <cit.>. As in 3D, the 2D regime is not well described by the two-dimensional BCS theory in the crossover region.
As a result,the determination of ground-state properties of strongly interacting Fermi gases has been attempted with Quantum Monte Carlo methods starting with a pioneering calculation using DMC <cit.>, which was later updated using Auxiliary-Field Quantum Monte Carlo (AFQMC) <cit.>, as well as DMC using a more sophisticated wave function that included several variational parameters <cit.>. These previous works focused on the determination of the contact parameter and ground state energies throughout the crossover. In this paper westart with the two-body interaction from which our effective range and scattering length are determined. This is of particular importance to low-energy scattering phenomena, and is directly applied to the many body 2D s-wave problem. In section 2 we first provide a self-contained discussion of scattering in 2D using the partial-wave expansion. Then we define the effective range and scattering length parameters and go over their determination. Finally we discuss the formation of bound states in 2D, noting the substantial differences between 2D scattering and the well-known 3D scattering phenomena. In section 3 we discuss the strongly interacting 2D Fermi gas in the BEC-BCS crossover. First, we give a detailed discussion of the pairing function used in the many-body wave function. We explicitly determine the variational parameters used in the pairing function. A brief overview of the DMC method is given, then explicit variationally optimized DMC results for a range of interactions are shown. The equation of state is fit to these DMC results and then the corresponding chemical potential is calculated. § SCATTERING The two-body problem is the starting point for our many-body study. Since three-dimensional (3D) scattering is a more familiar topic, we make sure to highlight crucial differences between this theory and the 2D one. We start with a partial-wave expansion of the time-independent Schrödinger equation and then define relevant scattering parameters, namely the scattering length a_2D and effective range r_e. Next we show our techniques for determining a_2D and r_e. Finally we describe and illustrate bound states in 2D.§.§ Partial Wave ExpansionDescribing the interaction of two particles in their center of mass reference frame, the two-body problem is reduced to that of one body with mass m_r in the presence of a potential that represents the interaction.This m_r is the reduced mass of the two-body system, and simplifies to m/2 if the particles have equal mass.The problem is further simplified by taking the interaction to be spherically symmetric. In this case the potential V( r)→ V(r), where r=| r|=√(r^2_x+r^2_y) is the separation distance between particles.The time-dependent wave functions for such a system can be written as ψ( r,t)=ψ( r)e^-iEt/ħ, where ψ( r) is an eigenfunction of the time-independent Schrödinger equation:-ħ^2/2m_r∇^2 ψ(r,θ)+V(r)ψ(r,θ)=Eψ(r,θ) ,and the scattering energy E is the eigenvalue. The second angle (usually denoted with ϕ in physics) which ranges from 0 to π in 3D has no meaning in 2D. Inserting the Laplacian for 2D polar coordinates, we can rewrite Eq. 
(<ref>) as1/r∂/∂ r(r∂ψ(r,θ)/∂ r)+1/r^2∂^2 ψ(r,θ)/∂θ^2+ k^2 ψ(r,θ) - 2m_r/ħ^2V(r)ψ(r,θ)=0 ,where k^2 =2m_rE/ħ^2.Now we perform a partial-wave expansion, separating the wave function into a sum of the product of radial and angular terms (l will turn out to be the orbital angular momentum):ψ(r,θ)=∑_l=0^∞a_l R_l(r) T_l(θ) .After this substitution and some simple rearranging we find the following equation, which must be satisfied for all l:-[k^2 - 2m_r/ħ^2V(r)] = 1/r∂/∂ r(r∂ R_l(r)/∂ r)1/R_l(r) + 1/r^2(1/T_l(θ))∂^2 T_l(θ)/∂θ^2 ,where we have removed the summation over partial waves. This can be rewritten asf(r)+g(θ)=0 ,where f depends only on r and g includes all the angular dependence.If we imagine keeping r fixed, then g(θ)=(1/T_l(θ))∂^2 T_l(θ)/∂θ^2 must be the same for all values of θ if Eq. (<ref>) is to be consistent.The angular functions T_l(θ), therefore, must satisfy the wave equation:∂^2 T_l(θ)/∂θ^2=-c_l T_l(θ) ,where c_l is some constant (which can be different for each value of l). The general solution can be written asT_l(θ)=a_lsin(√(c_l) θ)+b_lcos(√(c_l) θ) .Considering, for example, an incident beam at θ=0, we assume symmetric scattering and can thereby eliminate the odd sin function dependence.Then, with the simple periodic condition T_l(0)=T_l(2π), we can determine that √(c_l) must be an integer l and Eq. (<ref>) can be rewritten as <cit.>:∂^2 T_l(θ)/∂θ^2=-l^2 T_l(θ) ,with the normalized solutionT_l(θ)=1/√(π)cos(lθ) .Like the Legendre polynomials involved in the analogous 3D partial wave expansion, the T_l(θ) functions are linearly independent and therefore we were justified in removing the summation for Eq. (<ref>).Using Eq. (<ref>) to simplify the angular term and making the substitution R_l(r)→ u_l(r)/√(r) to simplify the radial term, Eq. (<ref>) becomes-[k^2 - 2m_r/ħ^2V(r) - l^2/r^2]=(1/u_l(r)√(r))∂/∂ r[r ∂/∂ r(u_l(r)/√(r))]= (1/u_l(r)√(r))∂/∂ r[√(r) (∂ u_l(r)/∂ r-u_l(r)/2r)]= 1/u_l(r)√(r)[∂^2 u_l(r)/∂ r^2√(r)+u_l(r)/4r^3/2] = 1/u_l(r)∂^2 u_l(r)/∂ r^2+1/4r^2 .Rearranging, we find the Schrödinger equation for the 2D reduced radial wave function u_l(r):- ∂^2 u_l(r)/∂ r^2 = u_l(r)[k^2 - 2m_r/ħ^2V(r) - l^2-1/4/r^2] ,Differing from the 3D case, the solution to the 2D equation is related to the radial wave function by u_l(r)=√(r)R_l(r) instead of u_l(r)=r R_l(r).For purely s-wave scattering we consider only the l=0 partial wave:-∂^2 u_0(r)/∂ r^2 = u_0(r) [k^2 - 2m_r/ħ^2 V(r) + 1/4r^2] .The wave function u_0(r) can be solved for numerically, although analytic solutions exist for simple potentials such as the square well.The singularity at r=0 in Eq. (<ref>) means that boundary conditions must be carefully selected. In the asymptotic region, when the radial separation is larger than the range of the potential, Eq. (<ref>) simplifies to∂^2 u_0(r)/∂ r^2+u_0(r)[k^2+1/4r^2]=0 .We stress at this point the significance of the 1/r^2 term in this equation: note that it is present even for purely s-wave scattering. This is to be compared with the 3D case, where a centifugal barrier is only present for beyond-s-wave partial waves. 
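The algebra leading from the partial-wave equation to the reduced radial form above is easy to verify with a computer algebra system. The following sketch (Python with SymPy; the symbol names are ours and purely illustrative) substitutes R_l(r) = u_l(r)/√r into the l-th partial-wave equation and checks that it coincides with the reduced equation containing the (l^2-1/4)/r^2 term:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
l, k, mr, hbar = sp.symbols('l k m_r hbar', positive=True)
u = sp.Function('u')
V = sp.Function('V')

# 2D substitution R_l(r) = u_l(r)/sqrt(r)
R = u(r) / sp.sqrt(r)

# l-th partial-wave equation written as lhs = 0:
# (1/r) d/dr(r dR/dr) - (l^2/r^2) R + [k^2 - 2 m_r V(r)/hbar^2] R = 0
lhs = (sp.diff(r*sp.diff(R, r), r)/r
       - (l**2/r**2)*R
       + (k**2 - 2*mr*V(r)/hbar**2)*R)

# Reduced radial equation: u'' + [k^2 - 2 m_r V/hbar^2 - (l^2 - 1/4)/r^2] u = 0
reduced = (sp.diff(u(r), r, 2)
           + (k**2 - 2*mr*V(r)/hbar**2 - (l**2 - sp.Rational(1, 4))/r**2)*u(r))

# The two expressions must agree up to the overall factor 1/sqrt(r).
print(sp.simplify(sp.expand(lhs*sp.sqrt(r) - reduced)))   # prints 0
```

The printed difference vanishes identically for symbolic l, confirming in particular the 1/(4r^2) contribution that survives for purely s-wave scattering.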
As a result of this 1/r^2, together with the different relationships between u_0(r) and R_0(r), the 2D problem always supports a bound state for pairwise attractive interactions(regardless of the strength of the interaction), in direct contradistinction to what goes on in the 3D problem.A solution for u_0(r) can be written in terms of Bessel functions of the first (J_n) and second (N_n) kind <cit.>:u_0(r)∝√(r)[J_0(kr)cosδ_0-N_0(kr)sinδ_0] .By calculating u_0(r) at two radial separations r_1 and r_2 beyond the range of the potential (always for a small k), the s-wave phase shifts δ_0 can be calculated with the relationδ_0=K̃N_0(kr_1)-N_0(kr_2)/K̃J_0(kr_1)-J_0(kr_2) ,where K̃=u_0(r_2) √(r_1)/u_0(r_1) √(r_2). §.§ Scattering length and effective rangeTo define the s-wave scattering length a_2D and the effective range r_e we first look at the zero-energy Schrödinger equation in the asymptotic region. We take Eq. (<ref>) outside the range of the potential and set k=0 to find:∂^2 y_0(r)/∂ r^2=-1/4r^2y_0(r) ,where y_0(r) represents the asymptotic form of u_0(r). In general, we can write the solution y_0(r)=√(r)(α+βlog(r)) <cit.>.In this and all other instances we take log to represent the natural logarithm.The 2D scattering length a_ 2D, in analogy to the 3D interpretation, is defined as the r-intercept of y_0(r).For sufficiently short range potentials, such as those used in Section <ref>, a_ 2D can also be given by the r-intercept of u_0(r) for even strongly bound states.Using the condition y_0(a_ 2D)=0, where a_ 2D is the 2D scattering length, we set α=-βlog(a_ 2D) and the solution becomes y_0(r)=β√(r)log(r/a_ 2D).The choice for β will influence the effective range; we set β=-1 and use the solutiony_0(r)=-√(r)log(r/a_ 2D) .An analogous parameter to the above β is encountered for the 3D case, where it can be determined by setting y_0(0)=1.Ideally, we would use this condition for 2D scattering as well, however y_0(r) in 2D does not extrapolate to r < 0 and always approaches 0 at the origin. This is related to the fact that a_ 2D, unlike the 3D scattering length, can never be negative. In other words, a 2-particle bound state exists for even arbitrarily weak attraction.Our choice of β=-1 for the 2D case is consistent with work done by Adhikari et al.<cit.>.The effective range is related to the area between u_0(r) and y_0(r).In 2D this is defined by the integral <cit.>:r_e^2 = 4 ∫_0^∞[y_0^2(r) - u_0^2(r)]_k → 0 dr,and is the second-order term in the effective-range expansion relating low-energy phase shifts δ_0 to the scattering parameters a_ 2D and r_e.In 2D for small values of k: <cit.>δ_0|_k→ 0 ≈2/π[γ + log(ka_ 2D/2)] + k^2 r_e^2/4 ,where γ≈ 0.577215 is Euler's constant.Although differing from the 3D expansion, this relationship has the same implication; low-energy scattering is independent of the details of the potential.The logarithmic scattering-length dependence in Eq. (<ref>) is characteristic of 2D interactions and also appears in the asymptotic solution y_0(r), Eq. (<ref>), which describes the kinetic energy alone.To determine a_2D and r_e for an arbitrary potential we solve for u_0(r) numerically and determine the asymptotic form y_0(r) by extrapolating from two points, r_1 and r_2, outside the range of the potential. We consider two generalizations of Eq. 
(<ref>), and use the fact that u_0(r)=y_0(r) for both r_1 and r_2 to writeu_0(r_1)=ξ√(r_1)log(r_1/a_ 2D) and u_0(r_2)=ξ√(r_2)log(r_2/a_ 2D) ,where ξ has been introduced (in place of β) for the purposes of determining a_ 2D by extrapolation.This is done using the following equations, which we find by working from Eq. (<ref>):a_ 2D=r_1 exp[-u_0(r_1)/ξ√(r_1)] , where ξ=√(r_1) u_0(r_2)-√(r_2) u_0(r_1)/√(r_1 r_2) log(r_2/r_1) .The scattering length is then used to determine y_0(r) as given in Eq. (<ref>), which is then in turn used to scale u_0(r) such that u_0(r)=y_0(r) at r_1 and r_2. The effective range r_e can then be determined by solving Eq. (<ref>), which depends on the difference between y^2_0(r) and u^2_0(r).In order to find the correct asymptotic form y_0(r), it's important to solve u_0(r) up to sufficiently large r, outside the range of the potential.Another technical detail has to do with the scattering energy E=ħ^2 k^2/2m_r.The parameters a_ 2D and r_e are defined in the limit of E → 0.To make sure that finite-energy scattering effects are not influencing our determination of the parameters, we reduce E until the results for a_ 2D and r_e have converged. §.§ Bound States in 2DBy plotting the scattering length as a function of the potential depth, we can visualize the formation of bound states.In this respect, a difference exists between the 2D and 3D scattering theories.As the depth of V(r) is increased, the scattering length a_ 2D approaches 0 then diverges to +∞ when a new bound state is created; whereas a_ 3D changes from -∞ to +∞ when a new bound state is formed.In each case a scattering length of +∞ corresponds to a weakly bound state that becomes tighter as scattering length decreases.The binding energy of the particle pair in 2D is given byϵ_b = -4ħ^2 /m a^2_ 2D e^2γ ,where there would also be small correction terms for a finite effective range, but we use this zero-range expression for concreteness. This compares to the 3D case where, for equal mass particles, the binding energy of the two particle state is ϵ_b, 3D = - ħ^2 / (m a^2_ 3D).We note that an alternate definition of a_ 2D is sometimes used in other work: a'_ 2D=a_ 2De^γ/2, such that ϵ_b = - ħ^2/(m a'^2_ 2D). For the examples in this section and the DMC results presented in Section <ref>, we use the modified Pöschl-Teller potential:V(r)=-v_0ħ^2/m_rμ^2/cosh^2(μ r) ,where r is the interparticle spacing. This potential is purely attractive and continuous. The parameters v_0 and μ roughly correspond to the depth and inverse width respectively and are tuned such that V(r) reproduces the desired scattering parameters a_ 2D and r_e.For a given μ, the scattering length and effective range exhibit a repetitive pattern of spiking up and then decaying as a function of v_0. This is illustrated in Fig. <ref>, where we have set μ/k_F=100.Here we have introduced the Fermi wave vector k_F that has units of inverse length (as does μ) and is related to the 2D number density n by:k_F=√(2π n) .When a_ 2D diverges to +∞, a new (and initially, arbitrarily weak) bound state is created.In other words, at these values of v_0, the potential becomes deep enough to support an additional bound state.Near these locations, in this example, we see the effective range become very large.The v_0 values where a_ 2D→∞ depend on the specific μ value selected.In the case of an attractive square well potential, the effective range integral in Eq. 
(<ref>) will give a negative number for very strongly bound states, causing the effective range to be imaginary.If plotting scattering parameters for the square well in the same style as Fig. <ref>, we would see a very similar plot.A major distinction is that r_e would continue decreasing to zero and then become imaginary as v_0 is increased.Instead of gradually increasing as we approach a new bound state, r_e^2 becomes increasingly large and negative before diverging to a large positive value after the bound state threshold is surpassed. Starting in the region where only one bound state can exist (plotted in the inset of Fig. <ref>), we will look at the wave function evolution as the depth of V(r) is increased.In the top panel of Fig. <ref>, we show the reduced radial wave function u_0(r) (solid line) and the asymptotic form y_0(r) (dashed line), as defined by Eq. (<ref>) in the limit as k→ 0 and Eq. (<ref>) respectively.These wave functions are plotted for v_0=0.1, 0.2, and 0.4, with the corresponding potentials shown in the bottom panel.We have expressed V(r) in units of the Fermi energy ϵ_F=ħ^2 k^2_F/2m and set μ/k_F=100.The 3 states in this figure are weakly bound and have large scattering lengths which are identified as the points where y_0(r) would become zero.These values are determined by extrapolating u_0(r) as described in Section <ref>.In this example, the asymptotic zone is reached far before the wave functions cross the r-axis and therefore u_0(a_ 2D)=y_0(a_ 2D)=0. As v_0 is increased further, the non-zero node of y_0(r) becomes increasingly central (i.e., a_ 2D→ 0).As depicted in Fig. <ref>, we find that y_0(r) and u_0(r) cross the r-axis at dramatically different locations for very strongly bound states and look far more distinct than at smaller v_0.In the bottom panel, we plot the effective range integrand in Eq. (<ref>).Here the dotted line simply marks the r-axis.Curves correspond to wave functions plotted in the top panel, which can be distinguished by line thickness.For v_0=1.2, the integrand is almost completely positive and peaks where the difference between u_0(r) and v_0(r) is maximum.As v_0 is increased, the features become more pronounced and we find large negative contributions to the effective range integral.For this example we find the positive contributions are dominant for any v_0.As discussed above, this is not generally true (e.g., for the square well potential where r_e becomes imaginary).When a new bound state is formed, the scattering length diverges discontinuously to +∞ and an extra node exists.In Fig. <ref>, we show the wave function behaviour past this threshold, where the potential supports two bound states.We plot V(r)/ϵ_F in the bottom panel as was done in Fig. <ref>.In this figure, however, the scale has increased by an order of magnitude.The scattering length of each state is roughly the same as a_ 2D for the equivalent state in Fig. <ref>. § STRONGLY INTERACTING 2D FERMI GASES IN THE BEC-BCS CROSSOVERNow we shift to the many-body context of dilute Fermi gases with tunable interactions. 
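Before doing so, we note that the two-body analysis above reduces in practice to a short numerical routine. The sketch below (Python with NumPy/SciPy, working in units ħ = m_r = k_F = 1 so that 2m_r/ħ^2 = 2) integrates the s-wave reduced radial equation for the modified Pöschl-Teller potential at small k, extracts a_2D from the two-point extrapolation formulas given above, and estimates r_e from the effective-range integral. The values of v_0, μ, k, the matching radii, and the starting radius are illustrative only, and the initial conditions assume the regular u_0 ∝ √r behaviour near the origin.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Units: hbar = m_r = k_F = 1, so 2*m_r/hbar^2 = 2.
v0, mu = 0.2, 100.0          # illustrative Poschl-Teller parameters
k = 1.0e-3                   # small k, approximating the k -> 0 limit

def V(r):
    return -v0*mu**2/np.cosh(mu*r)**2

def rhs(r, y):
    u, du = y
    return [du, -u*(k**2 - 2.0*V(r) + 0.25/r**2)]

# The regular solution behaves like u_0 ~ sqrt(r) near the origin.
r0, rmax = 1.0e-4, 0.2
sol = solve_ivp(rhs, (r0, rmax), [np.sqrt(r0), 0.5/np.sqrt(r0)],
                method='DOP853', dense_output=True, rtol=1e-10, atol=1e-12)

# Matching radii outside the range of the potential (~ 1/mu).
r1, r2 = 0.05, 0.10
u1, u2 = sol.sol(r1)[0], sol.sol(r2)[0]

# Two-point extrapolation for the scattering length.
xi = (np.sqrt(r1)*u2 - np.sqrt(r2)*u1)/(np.sqrt(r1*r2)*np.log(r2/r1))
a2d = r1*np.exp(-u1/(xi*np.sqrt(r1)))
print("k_F a_2D =", a2d, "   eta = log(k_F a_2D) =", np.log(a2d))

# Effective range: rescale u_0 so that u_0 = y_0 at r_1, then integrate.
# (For this potential the integral is positive; cf. the square-well case.)
rr = np.linspace(r0, rmax, 40001)
uu = sol.sol(rr)[0]*(-np.sqrt(r1)*np.log(r1/a2d))/u1
y0 = -np.sqrt(rr)*np.log(rr/a2d)
re2 = 4.0*trapezoid(y0**2 - uu**2, rr)
print("r_e^2 =", re2, "   k_F r_e =", np.sqrt(re2))
```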
We study interaction strengths where the gas is in between a BEC state of tightly bound pairs (dimers) and a weakly paired BCS superfluid.In this regime, where the coupling of opposite-spin particles is intermediate, the gases are said to be strongly interacting and their properties are not fully understood.As expected, we find that the mean-field BCS calculation, which gives the correct energy on each side of the crossover, is unreliable in between.In this section we describe our many-body system including the interaction parametrizations and many-body wave function. We briefly introduce our DMC method before showing ground-state energy results for a range of interaction strengths. We first calculate the energy per particle and then parametrize an equation of state (EOS) in order to determinethe chemical potential. §.§ The BEC-BCS Crossover in 2DDue to the omnipresent bound state, and therefore a positive scattering length for all interaction strengths, identifying the exact region of BEC-BCS crossover point is not as obvious in 2D as in 3D. Here the crossover interaction strength is chosen to be the value at which the chemical potential switches signs; this is a reasonably intuitive choice. For k_F a_2D≫ 1 we encounter the BCS limit and for k_F a_2D≪ 1 we have the corresponding BEC limit. Calculations are done for a range of interaction strengths, defined asη = log(k_F a_ 2D) ,in order to determine the gas properties for a large fraction of the crossover. The number density of the many-body system is fixed such that the Fermi wave vector k_F, as defined in Eq. (<ref>), is constant. To change the interaction strength we vary a_ 2D. Given that n=N/A is the number density of the system (where N is the number of particles and A the area of the periodic box) and r_0=1 / √(π n) is the mean interparticle spacing, the diluteness requirement is satisfied by taking r_e ≪ r_0. We maintain a constant effective range of k_Fr_e=0.006 by adjusting μ as v_0 is varied.§.§ Many-Body Wave FunctionTo describe the strongly interacting Fermi gas for any attraction strength, we use the Jastrow-BCS many-body trial wave function <cit.>:Φ_ BCS( R) =A [ϕ( r_11') ϕ( r_22') ... ϕ( r_N_↑ N'_↓)] , Ψ_T( R) = ∏_ij'f_J(r_ij')Φ_ BCS( R),where the anti-symmetry requirement of Ψ_T( R) for the Fermi gas is enforced by the operator A. Correlations between interacting particles are accounted for through the Jastrow terms f_J(r_ij'). The pairing functions ϕ( r) are expressed asϕ( r) = ∑_nα_n e^ik_ n· r + β̃(r),which contains variational parameters α_n for each momentum state up to some level n_max and the β(r) function to account for higher-momentum contributions. 
This two-body function encodes details of the many-body system which vary with the interaction strength.The spherically symmetric short-range function is given by:β̃(r) = β(r)+β(L-r)-2 β(L/2) r ≤ L/2 , = 0 r > L/2 ,β (r) = [ 1 + c br ] [ 1 - e^ - d b r ] e^ - b r /d b r ,which contains variational parameters b, c and d.This form of the beta function has been used for 3D calculations, and we have explicitly checked its behaviour in 2D.Specifically, when calculating the local energy given by Ψ^-1_T( R) ĤΨ_T( R), we need to evaluate terms of the form ∂β̃(r) / ∂α_i, where α_i is the coordinate of a specific particle (e.g., x_2, y_3…) and r=√(Δ x^2_ij+Δ y^2_ij) is the radial separation between two particles.Defining Δα_ij as the projection of r along a coordinate (e.g., Δ x_ij or Δ y_ij) we can write:∂β̃(r)/∂α_i = ∂β̃(r)/∂ r ∂ r/∂α_i= ∂β̃(r)/∂ r 1/22 Δα_ij/√(Δ x^2_ij+Δ y^2_ij) ∂Δα_ij/∂α_i= ∂β̃(r)/∂ r Δα_ij/r ∂Δα_ij/∂α_i=±∂β̃(r)/∂ r Δα_ij/r ,where ∂Δα_ij/∂α_i can be positive or negative 1.For example, ∂ (x_2-x_5)/∂ x_2=1 and ∂ (x_2-x_5)/∂ x_5=-1.The result in Eq. (<ref>) has a singularity at r=0 due to the 1/r term.This can cause large fluctuations in the local energy for small r, therefore we define the variational parameter c in Eq. (<ref>) such that∂β̃(r)/∂ r|_r=0 = [∂β(r)/∂ r+∂β(L-r)/∂ r - 2 ∂β(L/2)/∂ r]_r=0 = 0 .Making use of L'Hôpital's rule, it is mostly straightforward to show thatc=2+2dbL+(dbL)^2e^bL(1+d)+2db^2L^2e^bL(1+d)+2bL-2e^dbLbL -2e^dbL/2b^2L^2(e^dbL-1-d+de^bL(1+d)) .In this work, we set b=0.5k_F and d=5, as done by Gandolfi et al. <cit.> for the 3D unitary Fermi gas. With these values of b and d, we find c ≃ 3.5. §.§ DMCTo determine ground-state properties of Fermi gases we use DMC to project the ground state Φ_0 from the trial wave function Ψ_T( R). This is done by propagating in imaginary time τ=it:Φ_0=Ψ(τ→∞), Ψ(τ) = e^-(Ĥ-E_T)τΨ_T(𝐑) , where the trial energy E_T is a constant offset applied to the Hamiltonian.DMC expectation values are determined by averaging over a set of equilibrated configurations.In this work we use the mixed estimate to calculate the energy:⟨Ĥ⟩_M= ⟨Ψ_T | Ĥ | Ψ(τ) ⟩/⟨Ψ_T | Ψ(τ) ⟩= ⟨Ψ_T | Ĥe^-(Ĥ-E_T)τ | Ψ_T ⟩/⟨Ψ_T | e^-(Ĥ-E_T)τ | Ψ_T ⟩= ⟨Ψ_T | e^-(Ĥ-E_T)τ/2Ĥ e^-(Ĥ-E_T)τ/2 | Ψ_T ⟩/⟨Ψ_T | e^-(Ĥ-E_T)τ/2 e^-(Ĥ-E_T)τ/2 | Ψ_T ⟩= ⟨Ψ(τ/2) | Ĥ | Ψ(τ/2) ⟩/⟨Ψ(τ/2) | Ψ(τ/2) ⟩ .In the second line we wrote the explicit form of the imaginary-time evolved ket. The propagator is then used to act on the trial wave function bra straightforwardly due to the fact that it commutes with the Hamiltonian. Finally taking τ → ∞we see that this is the energy of the ground state.§.§ Equation of StateA mean-field calculation <cit.> gives a ground-state energy per particle ofE_ BCS = E_ FG + ϵ_b/2 ,for Fermi gases in the BEC-BCS crossover, where the binding energy ϵ_b is given by Eq. (<ref>). This is expected to be accurate for weakly paired systems in the BCS limit in which E/N → E_ FG. The BEC limit of tightly bound pairs is also expected to be reasonably well described by mean field as the energy scale grows rapidly by many orders of magnitude due to large binding energies. The QMC results vary dramatically from the mean-field description in the crossover but become increasingly similar to mean-field predictions in each limit.Our DMC calculations for a range of interactions strengths are shown in Fig.. <ref>. The errors represent statistical uncertainty, which becomes larger as the energy scale increases. We use the Jastrow-BCS wave function Φ_ BCS(𝐑), Eq. 
(<ref>), which contains parameters that are optimized for each η independently. We have compared our results to previous ab-initio work, finding significantly lower energies than prior ground-state DMC results <cit.> in the crossover regime and excellent agreement with AFQMC <cit.>. More recently another QMC study has emerged <cit.> that finds DMC results in agreement with ours. Note that we here provide more results in the BCS side than where available previously <cit.>. In order to calculate other ground-state properties of strongly interacting Fermi gases, we calculate an EOS for our thermodynamic-limit energy results E_ TL. This quantity is a finite-size corrected version of the results in Fig. <ref>. The correction ranges from zero on the BEC side of the crossover to ∼0.041 E_ FG in the BCS regime. Using similar methods as previous ab initio studies <cit.>, we parametrize the EOS using three functions. In the crossover regime we fit to a 7th-order polynomial:f(η) = ∑^7_i=0c_iη^i .This is joined by a dimer form in the BEC regime <cit.> and an expansion in 1/η in the BCS regime <cit.>. The dimer form is given by:f^ BEC(η) = 1/2x[1-log(x)+d/x+∑_i=0^2 c_i [log(x)]^i/x^2] ,where x = log[4π/(k_F a_d)^2] ≈ 3.703 - 2η (for the dimer scattering length a_d ≈ 0.557a_ 2D <cit.>) and d = logπ + 2γ + 0.5.The BCS form is given by:f^ BCS (η) = 1 - 1/η + ∑_i=2^4c^i/η^i .Values of c_i in Eq. (<ref>) and Eq. (<ref>) are determined using continuity conditions for f, ∂ f/∂η, and ∂^2 f/∂η^2 at the matching points. Our EOS parameters are included in Tab. <ref>. The matching points were selected as η = -0.25 and η = 2.5. We found these values result in the most optimal overall fit while including as much of the crossover polynomial function as possible.Also, we ensure that our matching point for Eq. (<ref>) is selected on the BEC side of the crossover. §.§ Chemical PotentialWe have determined the chemical potential μ using our EOS.In order to derive a relationship, we defineζ(η) = E_ TL(η)/N1/E_ FG = 2E_ TL(η)/Nϵ_F ,where E_ TL(η) is the total ground-state energy. This quantity is related to our parametrized EOS, where we fit to f(η)=(E_ TL/N-ϵ_b/2)/E_ FG; the new quantity ζ(η) can easily be determined by adding ϵ_b/2 in units of E_ FG to f(η). Noting that ϵ_F = (π N ħ^2)/(mA) in 2D (using k^2_F=2π N/A), the chemical potential is related to ζ(η) as follows:μ = ∂ E_ TL/∂ N = ∂/∂ N[ ϵ_F N ζ(η)/2]= 1/2∂ (ϵ_F N)/∂ Nζ(η) + ϵ_F N/2∂ζ(η)/∂η∂η/∂ N= 1/2∂/∂ N( π N^2 ħ^2/mA) ζ(η) + ϵ_F N/2∂ζ(η)/∂η∂η/∂ N= ( π N ħ^2/mA) ζ(η) + ϵ_F/4∂ζ(η)/∂η= ϵ_F ζ(η) + ϵ_F/4∂ζ(η)/∂η .In the fourth line we have evaluated ∂η/∂ N using∂ [log(k_F a_ 2D)]/∂ N = ∂ [log(√(2π N/A) a_ 2D)]/∂ N = 1/2N .Expressing the chemical potential in units of the non-interacting Fermi gas, we findμ/E_ FG = 2( ζ(η) + 1/4∂ζ(η)/∂η) .Our result is plotted in Fig. <ref>, which showsthe chemical potential with half of the two-body binding energy subtracted.Our DMC determination is thethick blue line, which is qualitatively similar to the chemical potential one gets starting fromthe earlier DMC results <cit.> (which we generated using an analogous fitting strategy). The dotted vertical line corresponds to the interaction strength (η≈ 0.65) where the chemical potential changes sign. Comparing our results with experimentally extracted values of the chemical potential <cit.>,we find a nice match in the deep BEC regime. 
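The step from a fitted ζ(η) to the chemical potential can be automated with a centered finite difference for ∂ζ/∂η, as in the short sketch below; the ζ used there is a placeholder chosen for illustration only, not the parametrization of Tab. <ref>, and the constant ζ = 1 free-gas limit is included as a sanity check (it returns μ/E_FG = 2, i.e. μ = ε_F).

```python
import numpy as np

def mu_over_EFG(zeta, eta, h=1.0e-5):
    """Evaluate mu/E_FG = 2*(zeta + (1/4) d(zeta)/d(eta)) by centered differences."""
    dzeta = (zeta(eta + h) - zeta(eta - h))/(2.0*h)
    return 2.0*(zeta(eta) + 0.25*dzeta)

# Placeholder EOS for illustration only (NOT the fitted parametrization).
zeta_demo = lambda eta: 1.0 + 0.3*np.exp(-eta)

for eta in (-1.0, 0.0, 1.0, 2.0, 3.0):
    print(f"eta = {eta:5.2f}   mu/E_FG = {mu_over_EFG(zeta_demo, eta):.4f}")

# Free Fermi gas: zeta = 1 independent of eta, so mu = eps_F = 2*E_FG.
print(mu_over_EFG(lambda eta: 1.0, 0.0))   # -> 2.0
```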
Differences become more significant in the BCS limit, probablydue to finite-temperature and quasi-2D effects in the experiment.§ SUMMARY AND CONCLUSION In this paper we presented a detailed discussionof scattering in the context of a two-body system confined to two dimensions. We then determined the radial wave function and defined its asymptotic form, which in a purely attractive potential always produces a bound state in contrast to the 3D case. Continuing to explore 2D scattering phenomena we illustrated the dependence of the effective ranger_e and the scattering length aon the attractive potential's strength. Varying the potential we plotted the divergence of r_e and a upon the approach of new bound states.We also demonstrated the radial wave function and associated asymptotic form dependence on the attractive potential strength, where stronger attractive potentials produce more tightly bound states with small positive scattering lengths.Having quantified the 2D two body interacting system in detail, we moved to the many-body problem. Starting with the BCS determinant we looked at its composite pairing functions and specifically the variational parameters. We presented the explicit form of the β pairing function, and briefly described the mixed-estimate method. Then we provided QMC ground-state energy results for various interaction strengths within the BEC-BCS crossover range. Fitting to these energies with a 7th-order polynomial an equation of state was determined. The expression for the chemical potential was derived according to the EOS, for which fitting parameters were explicitly provided. To conclude, the careful study of two-dimensional scattering properties in conjunction with a non-perturbative many-body method containing several variational parameters (like DMC), has led to dependable predictions for the properties of strongly correlated physical systems. Overall, we observe that two-dimensional strongly interacting cold Fermi gases constitute an exciting new development, where theory can be confronted by impressive experimental work. The authors would like to thank G. E. Astrakharchik, T. Enss, J. Thywissen, and E. Vitalifor helpful discussions. This work was supported in partby the Natural Sciences and Engineering Research Council (NSERC) of Canada, theCanada Foundation for Innovation (CFI), the Early Researcher Award (ERA) program of the OntarioMinistry of Research, Innovation and Science, the US Department of Energy, Office of Nuclear Physics, under Contract DE-AC52-06NA25396, and the LANL LDRD program.Computational resources were provided by SHARCNET, NERSC, and Los Alamos Open Supercomputing. The authors would like to acknowledge the ECT* for its warm hospitality during the “Superfluidity and Pairing Phenomena”workshop in March 2017, where part of this work was carried out.99Bloch:2008 I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008).Giorgini:2008 S. Giorgini, L. P. Pitaevskii, and S. Stringari, Rev. Mod. Phys. 80, 1215 (2008).Levinsen:2015 J. Levinsen and M. M. Parish, Annu. Rev. Cold At. Mol. 3, 1 (2015).Carlson:2003 J. Carlson, S.Y. Chang, V. R. Pandharipande, and K. E. Schmidt, Phys. Rev. Lett. 91, 050401 (2003).Chang:2004 S. Y. Chang, V. R. Pandharipande, J. Carlson, and K. E. Schmidt, Phys. Rev. A 70, 043602 (2004).Astrakharchik:2004 G. E. Astrakharchik, J. Boronat, J. Casulleras, and S. Giorgini, Phys. Rev. Lett. 93, 200404 (2004).Forbes:2011 M. M. Forbes, S. Gandolfi, and A. Gezerlis, Phys. Rev. Lett. 106, 235303 (2011).Gandolfi:2011 S. 
Gandolfi, K. E. Schmidt, and J. Carlson, Phys. Rev. A 83, 041601 (2011).Forbes:2012 M. M. Forbes, S. Gandolfi, and A. Gezerlis, Phys. Rev. A 86, 053603 (2012).Gezerlis:2008 A. Gezerlis and J. Carlson, Phys. Rev. C 77, 032801 (2008). Carlson:2012 J. Carlson, S. Gandolfi, and A. Gezerlis, Prog. Theor. Exp. Phys. 01A209 (2012).Stein:2012 M. Stein, X.-G. Huang, A. Sedrakian, and J. W. Clark, Phys. Rev. C 86,062801(R) (2012).Gandolfi:2015 S. Gandolfi, A. Gezerlis, and J. Carlson, Annu. Rev. Nucl. Part. Sci. 65, 303 (2015). Buraczynski:2016 M. Buraczynski, and A. Gezerlis, Phys. Rev. Lett. 116, 152501 (2016). Lacroix:2017 D. Lacroix, A. Boulet, M. Grasso, C.-J. Yang, arXiv:1704.08454.Gunter:2005 K. Günter, T. Stöferle, H. Moritz, M. Köhl, and T. Esslinger, Phys. Rev. Lett. 95, 230401 (2005). Liu:2010 X.-J. Liu, H. Hu, and P. D. Drummond, Phys. Rev. B 82, 054524 (2010).Martiyanov:2010 K. Martiyanov, V. Makhalov, and A. Turlapov, Phys. Rev. Lett. 105, 030404 (2010). Valiente:2011 M. Valiente, N. T. Zinner, and K. Molmer, Phys. Rev. A 84, 063626 (2011). Frohlich:2011 B. Fröhlich, M. Feld, E. Vogt, M. Koschorreck, W. Zwerger, and M. Köhl, Phys. Rev. Lett. 106, 105301 (2011). Feld:2011 M. Feld, B. Fröhlich, E. Vogt, M. Koschorreck, and M. Köhl, Nature. 480, 75-78 (2011). Orel:2011 A. A. Orel, P. Dyke, M. Delehaye, C. J. Vale, and H. Hu, New J. Phys. 13, 113032 (2011). Makhalov:2014 V. Makhalov, K. Martiyanov, and A. Turlapov, Phys. Rev. Lett. 112, 045301 (2014).Bauer:2014 M. Bauer, M. M. Parish, and T. Enss, Phys. Rev. Lett. 112, 135302 (2014). Mulkerin:2015 B. C. Mulkerin, K. Fenech, P. Dyke, C. J. Vale, X.-J. Liu, and H. Hu, Phys. Rev. A 92, 063636 (2015). He:2015 L. He, H. Lü, G. Cao, H. Hu, X.-J. Liu, Phys. Rev. A 92, 023620 (2015). Klawunn:2016 M. Klawunn, Phys. Lett. A, 380, 2650 (2016).Anderson:2015 E. R. Anderson and J. E. Drut, Phys. Rev. Lett. 115, 115301 (2015).He:2016 L. He, Ann. Phys. (N.Y.) 373, 470 (2016).Wong:2015 W. Ong, C.-Y. Cheng, I. Arakelyan, and J. E. Thomas, Phys. Rev. Lett. 114, 110403 (2015). Murthy:2015 P. A. Murthy, I. Boettcher, L. Bayha, M. Holzmann, D.Kedar, M. Neidig, M. G. Ries, A. N. Wenz, G. Zürn, and S. Jochim, Phys. Rev. Lett. 115, 010401 (2015). Ries:2015 M. G. Ries, A. N. Wenz, G. Zürn, L. Bayha, I. Boettcher, D. Kedar, P. A. Murthy, M. Neidig, T. Lompe, and S. Jochim, Phys. Rev. Lett. 114, 230401 (2015). Fenech:2016 K. Fenech, P. Dyke, T. Peppler, M. G. Lingham, S. Hoinka, H. Hu, and C. J. Vale, Phys. Rev. Lett. 116, 045302 (2016). Boettcher:2016 I. Boettcher, L. Bayha, D. Kedar, P. A. Murthy, M. Neidig, M. G. Ries, A. N. Wenz, G. Zürn, S. Jochim, and T. Enss, Phys. Rev. Lett. 116, 045303 (2016). Rammelmuller:2015 L. Rammelmüller, W. J. Porter and J. E. Drut, Phys. Rev. A, 93, 033639 (2016). Martiyanov:2016 K. Martiyanov, T. Barmashova, V. Makhalov, and A. Turlapov, Phys. Rev. A 93, 063622 (2016).Cheng:2016 C. Cheng, J. Kangara, I. Arakelyan, and J. E. Thomas, Phys. Rev. A 94, 031606 (2016). Luciuk:2017 C. Luciuk, S. Smale, F. Böttcher, H. Sharum, B. A. Olsen, S. Trotzky, T. Enss, and J. H. Thywissen, Phys. Rev. Lett., 118, 130405 (2017). Hueck:2017 K. Hueck, N. Luick, L. Sobirey, J. Siegl, T. Lompe, H. Moritz, arXiv:1704.06315.Miyake:1983 K. Miyake, Prog. Theor. Phys. 69, 1794 (1983). Randeria:1990 M. Randeria, J.-M. Duan, and L.-Y. Shieh, Phys. Rev. Lett. 62, 981 (1989); Phys. Rev. B 41, 327 (1990). Bertaina:2011 G. Bertaina and S. Giorgini, Phys. Rev. Lett. 106, 110403 (2011).Shi:2015 H. Shi, S. Chiesa, and S. Zhang, Phys. Rev. 
A 92, 033603 (2015).Galea:2016 A. Galea, H. Dawkins, S. Gandolfi, and A. Gezerlis, Phys. Rev. A 93, 023602 (2016).Adhikari:1986a S. K. Adhikari, Am. J. Phys. 54, 362 (1986).Khuri:2009 N. N. Khuri, A. Martin, J.-M. Richard, and T. T. Wu, J. Math. Phys. 50, 072105 (2009).Adhikari:1986 S. K. Adhikari, W. G. Gibson, and T. K. Lim, J. Chem. Phys. 85, 5580 (1986).Madeira:2017 L. Madeira, S. Gandolfi, and K. E. Schmidt, Phys. Rev. A 95, 053603 (2017).Engelbrecht:1992 J. R. Engelbrecht, M. Randeria, and L. Zhang, Phys. Rev. B 45, 10135 (1992).Petrov:2003 D. S. Petrov, M. A. Baranov, and G. V. Shlyapnikov, Phys. Rev. A 67, 031601(R) (2003).Enss:2015 T. Enss, private communication (2015). | http://arxiv.org/abs/1705.09310v2 | {
"authors": [
"Alexander Galea",
"Tash Zielinski",
"Stefano Gandolfi",
"Alexandros Gezerlis"
],
"categories": [
"cond-mat.quant-gas",
"nucl-th"
],
"primary_category": "cond-mat.quant-gas",
"published": "20170525180457",
"title": "Fermions in Two Dimensions: Scattering and Many-Body Properties"
} |
Vol.0 (201x) No.0, 000–000Lowell Center for Space Science and Technology, University of Massachusetts Lowell, Lowell, MA, 01854, USA.Department of Mathematical Sciences, University of Massachusetts Lowell, Lowell, MA, 01854, USA. E-mail: [email protected] NASA Goddard Space Flight Center, Laboratory for High-Energy Astrophysics, Code 663, Greenbelt, MD 20771, USA. Email: [email protected] consider the geometric Titius-Bode rule for the semimajor axes of planetary orbits. We derive an equivalent rule for the midpoints of the segments between consecutive orbits along the radial direction and we interpret it physically in terms of the work done in the gravitational field of the Sun by particles whose orbits are perturbed around each planetary orbit. On such energetic grounds, it is not surprising that some exoplanets in multiple-planet extrasolar systems obey the same relation. But it is surprising that this simple interpretation of the Titius-Bode rule also reveals new properties of the bound closed orbits predicted by Bertrand's theorem and known since 1873.Christodoulou and Kazanas Titius-Bode Rule and Orbits of Bertrand's TheoremA Physical Interpretation of the Titius-Bode Rule and its Connection to the Closed Orbits of Bertrand's TheoremDimitris M. Christodoulou1,2 Demosthenes Kazanas3Received 2017 month day; accepted 2017 month day =================================================================================================================§ INTRODUCTION The numerical algorithm called the Titius–Bode “law" has been known for 250 years <cit.>. It relies on an ad-hoc geometric progression to describe the positions of the planets in thesolar system and works fairly well out to Uranus but no farther <cit.>. The same phenomenology has also been applied to the satellites of thegaseous giant planets <cit.>. Two modern brief reviews of the history along with criticisms ofthis rule have been written by <cit.> and <cit.>. Currently, the general consensus is that a satisfactory physical basis has not been found for this numerical coincidence despite serious efforts by many researchers over the past three centuries. Furthermore, opinions differ on whether such a physical basis exists at all.Apparently, many researchers still believe that the Titius–Bode algorithm does have a physical foundation and continue to work on this problem. In particular, the last decade of the twentieth century saw a resurgence of investigations targeting precisely two questions: the origin of the “law"<cit.> and its statistical robustness against the null hypothesis <cit.>. Furthermore, in this century, some extrasolar systems have been discovered in which the planets appear to obey the Titius-Bode rule and the rule is used as a predictor of additional planets yet to be discovered in these multiple-planet systems <cit.>.In <ref>, we examine the Titius-Bode rule in its original form, that of a geometric progression of the semimajor axes of most of the planetary orbits in the solar system. By inductive reasoning, we associate the geometric rule with the work done in the gravitational field of the Sun by perturbed particles orbiting in the vicinity of planetary orbits, but we find that the spacing of the semimajor axes is not the right qualifier of the physical profile dictated by the Sun's gravitational potential. 
Then we derive another rule for a group of hypothetical orbits that are equally spaced between the actual semimajor axes and we interpret this rule physically in terms of the gravitational potential differences of particles perturbed around the actual orbits of the planets. Our results support the discovery of <cit.> <cit.> that such an arrangement of orbits implies that the protoplanets do not interfere with one another during their formation stage, thus a planet is expected to be formed at every available orbit of the geometric progression. Furthermore, our results reveal new geometric properties (see the Appendix) of the bound closed orbits predicted in spherical potentials by the celebrated theorem of <cit.>. In <ref>, we summarize and discuss these results. § TITIUS-BODE RULE REWRITTEN AND INTERPRETED PHYSICALLY In its original form, the Titius-bode rule dictates that the semimajor axes of most planetary orbits are in geometric progression. (In some forms, an additional term of 0.4 is added ad hoc in order to reproduce the innermost three planets that appear to be in arithmetic progression.) The geometric progression is described formally by two equivalent relations: Consider three consecutive orbits with semimajor axes a_1, a_2, and a_3 (Fig. <ref>); then the intermediate axis must be the geometric mean of its neighboring axes, viz.a_2 = √(a_1 a_3),or equivalently1/a_2-a_1 - 1/a_3-a_2 = 1/a_2.The form of eq. (<ref>) contains reciprocal distances and this is a sufficient hint that the relation could be associated with the central gravitational potential due to the Sun. But, as illustrated in Figure <ref>, such a simple association is not entirely straighforward because the distances (a_2-a_1) and (a_3-a_2) are not central, i.e., they are not measured from the Sun. In order to recast the rule in terms of central reciprocal distances, we define hypothetical orbits that are equidistant between the semimajor axes. In Figure <ref>, such orbits would cross the ray from S at the midpoints M_12 and M_23. Their radial coordinates arem_12 = 1/2(a_1+a_2)andm_23 = 1/2(a_2+a_3) ,respectively. The sequence m_12, m_23, ... of intermediate radii forms a geometric progression with the same ratio as that of the a_1, a_2, a_3, ... sequence. Eliminating a_1 and a_3 between eqs. (<ref>) and (<ref>), the Titius-Bode rule is transformed to the equivalent form1/m_12 + 1/m_23 = 2/a_2,which implies that a_2 is the harmonic mean of m_12 and m_23. As we describe in the Appendix, this is an important geometric property that is valid only in a central -1/r gravitational potential and its physical meaning can be easily deduced: eq. (<ref>) can be rewritten in a form that can be interpreted in terms of central potential differences, viz.G M(1/m_12 - 1/a_2)= G M(1/a_2 - 1/m_23) ,where G is the gravitational constant and M is the mass of the central object that creates the gravitational field.Consider now particles oscillating about the intermediate orbit O_2 (this includes also the protoplanetary core early in its formation and before it settles down to O_2). It is evident that the work done by a particle at m_12 to reach a_2 is the same as the work done by the field to a particle at m_23 that reaches a_2. In other words, the gravitational field allows orbit O_2 in Figure <ref> to utilize the entire area between the hypothetical orbits through M_12 and M_23 and to accummulate matter while sharing half-way with orbits O_1 and O_3 the areas between them. 
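The chain of identities above — geometric progression, harmonic-mean form, and equal potential differences — is elementary, but a few lines of code make the equivalence concrete. In the sketch below the common ratio is arbitrary and chosen only for illustration, and GM is set to unity since it multiplies both sides of the work balance.

```python
# Check that a geometric progression of semimajor axes implies the
# harmonic-mean relation and the equal-work (potential-difference) balance.
q = 1.7                       # illustrative common ratio
a1 = 1.0
a2, a3 = q*a1, q**2*a1        # a2 = sqrt(a1*a3) by construction

m12 = 0.5*(a1 + a2)           # midpoints along the radial direction
m23 = 0.5*(a2 + a3)

# Harmonic-mean form: 1/m12 + 1/m23 = 2/a2
print(1.0/m12 + 1.0/m23, 2.0/a2)

# Work balance (GM = 1): 1/m12 - 1/a2 = 1/a2 - 1/m23
print(1.0/m12 - 1.0/a2, 1.0/a2 - 1.0/m23)
```

Both pairs of printed numbers coincide for any choice of q and a_1, as required.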
This arrangement of orbits in a geometric progression ensures that adjacent orbits do not interfere with one another, a result that was first found by <cit.> who started with intersecting planitesimal orbits and derived the Titius-Bode rule for a surface density profile of the solar nebula when the interactions ceased. Our derivation above starts with the Titius-Bode rule and it is effectively the converse of Laskar's derivation.This “harmonic-mean” sharing by protoplanets of the in-between areas has also been used empirically in the seminal work of <cit.> who distributed planetary material in annuli around the current orbits of planets in order to derive a surface density profile for the solar nebula. Our calculation justifies this empirical notion on energetic grounds: eq. (<ref>) describes the energy balance of a harmonic oscillator in spherical (radial) coordinates with different amplitudes on either side of orbit O_2(a_2) and it is in contrast to the simple harmonic oscillator in which the deviations (a_2-a_1)/2 and (a_3-a_2)/2 from the equilibrium position a_2 are equal because of the linear nature of the restoring force <cit.>.§ SUMMARY AND DISCUSSION§.§ Summary We have described a physical interpretation of the Titius-Bode rule by considering, not the present positions of the planets in the solar system, but the “regions of occupancy” utilized by neighboring protoplanets during their efforts to collect and accummulate material as they orbit in the solar nebula: according to eq. (<ref>), the work done by a particle to move out from an interior orbit through M_12 (Fig. <ref>) to the next outer planetary orbit O_2 is the same as the work done to a particle that falls into the gravitational field from M_23 to O_2. The importance of protoplanets sharing half-way their in-between regions is twofold. First, the protoplanets do not cross into the orbits of their neighbors as they oscillate about their equilibrium orbits and continue to accummulate material <cit.>. This behavior ensures that some object or objects will be found in every single radial location a_1, a_2, a_3, ..., even in the predicted location between Mars and Jupiter (where the asteroid belt resides). Second, after the remaining disk gas disperses or gets accreted by the Sun and the planets emerge in their final settled orbits, the long-term dynamical stability of the solar system is strengthened because these orbits are as far away from one another as possible, and neighboring planets may interact only weakly by tidal forces that exert only minor perturbations to the positions of their neighbors <cit.>. Such weak interactions are contingent upon the absence of resonant orbits which is an observed fact for the planets in our solar system. §.§ Solar Nebula In <cit.>, we derived exact solutions of the Lane-Emden equation with rotation for the solar nebula <cit.> assuming it is an isothermal gas. The isothermal solutions of the Lane-Emden equations are very much relevant to the problem at hand: they show that protoplanetary cores are trapped inside local gravitational potential wells in which they can collect matter and grow in time. The distances of these localized potential wells from the protosun are in geometric progression as a result of the differential rotation of the solar nebula (that tapers off at the inner region and at the farthest outer regions of the nebula, where the planetary orbits appear to follow arithmetic progressions). 
The present result comes to strengthen the argument that planets grow locally inside deep gravitational potential wells that extend half-way between adjacent planetary orbits: on energetic grounds, solid protoplanetary cores share the disk space in the solar nebula between adjacent orbits and they collect material by various processes that make matter settle down to the potential minima, whereas the gas can flow inward and continue its accretion on to the central protosun. Furthermore, this model argues against excessively large migrations of protoplanets in the solar nebula <cit.>. Protoplanetary cores can move radially only within the bounds of their local gravitational potential wells (radii m_12 and m_23 in eq. (<ref>) for orbit O_2 in Fig. <ref>). §.§ Extrasolar Multiplanet Systems It is not surprising that at least some extrasolar systems exhibit similar characteristic distributions of exoplanetary orbits. Their protoplanetary disks may have had similar energetic and stability properties as our solar nebula, a similarity that apparently is neither universal nor wide-spread <cit.>. As for the location of the habitable zone and its planets in extrasolar systems <cit.>, we believe that the outcome depends crucially on the differential rotation and surface density profiles of each particular protoplanetary disk <cit.> irrespective of whether the Titius-Bode rule is applicable or not.§.§ Connection to the Closed Orbits of Bertrand's Theorem Eq. (<ref>) shows that perturbed particle orbits around a circular equilibrium orbit such as O_2(a_2) in Figure <ref> have different amplitudes, say A_1 and A_2>A_1, on either side of the equilibrium radius a_2. This is required so that the potential differences between a_2 and the maximum radial displacements be equal in magnitude, an assertion of the Work-Energy Theorembetween the equilibrium radius a_2 and the radii of the turning points of the oscillation where the radial velocity goes to zero. The result is a restriction placed on the two amplitudes that must be related by1/a_2-A_1 + 1/a_2+A_2 = 2/a_2,that is, radius a_2 is the harmonic mean of the radii of the turning points. This property is valid only for bound closed orbits in a -1/r gravitational potential and it is derived in the Appendix, where we also analyze closed orbits in an r^2 gravitational potential <cit.>. It turns out that the latter orbits exhibit another precise symmetry altogether: radius a_2 is the geometric mean of the radii of the turning points.We thank the reviewers of this article for their comments that led to a clearer presentation of our ideas. DMC is obliged to Joel Tohline for advice and guidance over many years.§ APPENDIX A: THE GEOMETRY OF BOUND CLOSED ORBITS IN SPHERICAL POTENTIALS§.§ A1. Newton-Kepler -1/r Potential Consider an equilibrium orbit r=a in a -1/r potential and assume that the maximum radial deviation is ± A on either side of r=a. At the turning points r=a± A, the radial velocity is zero (ṙ=0) and the total energy per unit mass can then be written as <cit.>E =L^2/2r^2 -GM/r,where the specific angular momentum satisfies L^2 =GMa, thus eq. (<ref>) can be written in the formE/ GM = a/2r^2 - 1/r =const.Applied to the turning points r=a± A, this equation yieldsa/2(a-A)^2 - 1/a-A = a/2(a+A)^2 - 1/a+A,a strict requirement for energy conservation. This requirement is satisfied only for A=0 which implies that the amplitude of the oscillation cannot be the same on either side of r=a. 
We consider next two different amplitudes A_1>0 and A_2>A_1 on either side of the equilibrium orbit r=a. After some elementary algebra, energy conservation (eq. (<ref>)) at the turning points r=a-A_1 and r=a+A_2 yields 1/A_1 - 1/A_2 = 2/a,or equivalently1/a-A_1 + 1/a+A_2 = 2/a.This last equation shows that, in a -1/r potential, the equilibrium radius a is the harmonic mean of the radii of the turning points a-A_1 and a+A_2 (as was also found in eq. (<ref>) for orbit O_2 and points M_12, M_23 in Fig. <ref>).§.§ A2. Isotropic Hooke r^2 Potential The isotropic harmonic-oscillator potential, written as Ω^2 r^2/2 (Ω=const.), cannot support arbitrarily large oscillations of equal amplitude on either side of the equilibrium orbit r=a either. The same analysis leads to an energy equation analogous to eq. (<ref>), but here L^2=Ω^2 a^4, thusE/Ω^2/2 = a^4/r^2 + r^2 =const.When energy conservation is applied between the turning points r=a± A, we obtain three solutions, A=0 and two extraneous solutions A=± a√(2). The solution A=a√(2) is of course rejected because A>a. We consider next two different amplitudes A_1>0 and A_2>A_1 on either side of the equilibrium orbit r=a. After some elementary algebra, energy conservation (eq. (<ref>)) at the turning points r=a-A_1 and r=a+A_2 yields1/A_1 - 1/A_2 = 1/a,or equivalently(a-A_1)(a+A_2) = a^2 .This last equation shows that, in a harmonic r^2 potential, the equilibrium radius a is the geometric mean of the radii of the turning points a-A_1 and a+A_2.[Bertrand(1873)]ber73 Bertrand, J. 1873, C. R. Acad. Sci. Paris, 77, 849[Bovaird & Lineweaver(2013)]bov13 Bovaird, T., & Lineweaver, C. H. 2013, , 448, 3608[Bovaird et al.(2015)]bov15 Bovaird, T., Lineweaver, C. H., & Jacobsen, S. K. 2015, , 448, 3608[Christodoulou & Kazanas(2007)]chr07 Christodoulou, D. M., & Kazanas, D. 2007, arXiv:0706.3205[Christodoulou & Kazanas(2017)]chr17b Christodoulou, D. M., & Kazanas, D. 2017, RAA, submitted[Danby(1988)]dan88 Danby, J. M. A. 1988, Fundamentals of Celestial Mechanics, 2nd edition (Richmond: Willmann-Bell) [Dubrulle & Graner(1994)]dub94 Dubrulle, B., & Graner, F. 1994, A&A, 282, 269[Emden(1907)]emd1907 Emden, R. 1907, Gaskugeln (Leipzig: B. G. Teubner)[Goldstein(1950)]gol50 Goldstein, H. 1950, Classical Mechanics (Reading, MA: Addison-Wesley)[Gomes et al.(2004)]gom04 Gomes, R. S., Morbidelli, A., & Levison, H. F. 2004, Icarus, 170, 492[Gomes et al.(2005)]gom05 Gomes, R. S., Gallardo, T., Fernández, J. A., & Brunini, A. 2005,Celest. Mech. Dyn. Astron., 91, 109[Graner & Dubrulle(1994)]gra94 Graner, F., & Dubrulle, B. 1994, A&A, 282, 262[Hayes & Tremaine(1998)]hay98 Hayes, W., & Tremaine, S. 1998, Icarus, 135, 549[Hooke(1678)]hoo1678 Hooke, R. 1678, De Potentia Restitutiva, or of Spring. Explaining the Power of Springing Bodies (London: J. Martyn)[Huang & Bakos(2014)]hua14 Huang, C. X., & Bakos, G. Á. 2014, , 442, 674[Jaki(1972)]jak72 Jaki, S. 1972, Am. J. Phys., 40, 93[Jiang et al.(2015)]jia15 Jiang, I.-G., Yeh, L.-C., & Hung, W.-L. 2015, , 449, L65[Kane et al.(2016)]kan16 Kane, S. R., Hill, M. L., Kasting, J. F., et al. 2016, , 830, 1[Lane(1870)]lan1870 Lane, L. J. H. 1870, Amer. J. Sci. Arts, Second Series, 50, 57[Laskar(2000)]las00 Laskar, J. 2000, Phys. Rev. Lett., 84, 3240[Laskar & Petit(2017)]las17 Laskar, J., & Petit, A. 2017, A&A, in press (arXiv:1703.07125) [Lecar(1973)]lec73 Lecar, M. 1973, Nature, 242, 318[Levison et al.(2007)]lev07 Levison, H. F, Morbidelli, A., Gomes, R. S., & Backman, D. 2007, in Protostars and Planets V, ed. B. Reipurth, D. Jewitt, & K. 
Keil (Tucson: Univ. of Arizona Press), 669[Li et al.(1995)]li95 Li, X. Q., Zhang, H., & Li, Q. B. 1995, A&A, 304, 617[Lynch(2003)]lyn03 Lynch, P. 2003, , 341, 1174[Murray & Dermott(1999)]mur99 Murray, C. D., & Dermott, S. F. 1999, Solar System Dynamics (Cambridge, UK: Cambridge Univ. Press)[Neuhäuser & Feitzinger(1986)]neu86 Neuhäuser, R., & Feitzinger, J. 1986, A&A, 170, 174[Nieto(1972)]nie72 Nieto, M. M. 1972, The Titius–Bode Law of Planetary Distances: Its History and Theory (Oxford, UK: Pergamon Press)[Nottale et al.(1997)]not97 Nottale, L., Schumacher, G., & Gay, J. 1997, A&A, 322, 1018[Poveda & Lara(2008a)]pov08a Poveda, A., & Lara, P. 2008a, Rev. Mex. Ast. Astrof., 34, 49[Poveda & Lara(2008b)]pov08b Poveda, A., & Lara, P. 2008b, Rev. Mex. Ast. Astrof., 44, 243[Weidenschilling(1977)]wei77 Weidenschilling, S. J. 1977, A&SS, 51, 153 | http://arxiv.org/abs/1705.09356v6 | {
"authors": [
"Dimitris M. Christodoulou",
"Demosthenes Kazanas"
],
"categories": [
"astro-ph.EP"
],
"primary_category": "astro-ph.EP",
"published": "20170525204928",
"title": "A Physical Interpretation of the Titius-Bode Rule and its Connection to the Closed Orbits of Bertrand's Theorem"
} |
| http://arxiv.org/abs/1705.09311v3 | {
"authors": [
"N. Fraija",
"P. Veres",
"B. B. Zhang",
"R. Barniol Duran",
"R. L. Becerra",
"B. Zhang",
"W. H. Lee",
"A. M. Watson",
"C. Ordaz-Salazar",
"A. Galvan-Gamez"
],
"categories": [
"astro-ph.HE"
],
"primary_category": "astro-ph.HE",
"published": "20170525180652",
"title": "Theoretical Description Of GRB 160625B with Wind-to-ISM Transition and Implications for a Magnetized Outflow"
} |
http://arxiv.org/abs/1705.09298v2 | {
"authors": [
"Yuan-Ming Lu",
"Ying Ran",
"Masaki Oshikawa"
],
"categories": [
"cond-mat.str-el"
],
"primary_category": "cond-mat.str-el",
"published": "20170525180003",
"title": "Filling-enforced constraint on the quantized Hall conductivity on a periodic lattice"
} |
|
| http://arxiv.org/abs/1705.09820v1 | {
"authors": [
"V. Karas",
"O. Kopacek",
"D. Kunneriath",
"M. Zajacek",
"A. Araudo",
"A. Eckart",
"J. Kovar"
],
"categories": [
"astro-ph.HE"
],
"primary_category": "astro-ph.HE",
"published": "20170527140851",
"title": "Plunging neutron stars as origin of organised magnetic field in galactic nuclei"
} |
A geometric multigrid method for isogeometric compatible discretizations of the generalized Stokes and Oseen problems Christopher Coley,[Ann and H.J. Smead Aerospace Engineering Sciences, 426 UCB, University of Colorado Boulder 80309, USA] [ E-mail: [email protected]]Joseph Benzaken[Department of Applied Mathematics, 526 UCB, University of Colorado, Boulder, CO 80309, USA],John A. Evans^*====================================================================================================================================================================================================================================================================================================== In this paper, we present a geometric multigrid methodology for the solution of matrix systems associated with isogeometric compatible discretizations of the generalized Stokes and Oseen problems.The methodology provably yields a pointwise divergence-free velocity field independent of the number of pre-smoothing steps, post-smoothing steps, grid levels, or cycles in a V-cycle implementation.The methodology relies upon Scwharz-style smoothers in conjunction with specially defined overlapping subdomains that respect the underlying topological structure of the generalized Stokes and Oseen problems.Numerical results in both two- and three-dimensions demonstrate the robustness of the methodology through the invariance of convergence rates with respect to grid resolution and flow parameters for the generalized Stokes problem as well as the generalized Oseen problem provided it is not advection-dominated.Keywords: Geometric multigrid; Isogeometric compatible discretizations; Isogeometric divergence-conforming discretizations; Generalized Stokes flow; Generalized Oseen flow; Overlapping Schwarz smoothers§ INTRODUCTION Isogeometric compatible discretizations[Depending on context, isogeometric compatible discretizations may also be referred to as isogeometric discrete differential forms, structure-preserving discretizations, divergence-conforming discretizations, or curl-conforming discretizations.] 
have recently arisen as an attractive candidate for the spatial discretization of fluid flow problems <cit.>.These discretizations comprise a discrete Stokes complex <cit.> and may be interpreted as smooth generalizations of Raviart-Thomas-Nédélec finite elements <cit.>.When applied to incompressible flow problems, isogeometric compatible discretizations produce pointwise divergence-free velocity fields and hence exactly satisfy mass conservation.As a result, they preserve the balance law structure of the incompressible Navier-Stokes equations, and in particular, they properly conserve mass, linear and angular momentum, energy, vorticity, enstrophy (in the two-dimensional setting), and helicity (in the three-dimensional setting) in the inviscid limit <cit.>.Isogeometric compatible discretizations have recently been applied to Cahn-Hilliard flow <cit.>, turbulent flow <cit.>, and fluid-structure interaction <cit.> where improved results were attained in comparison with state-of-the-art discretization procedures.Despite the promise of isogeometric compatible discretizations, very little research has been conducted so far in the area of efficient linear solvers.In fact, only the performance of Krylov subspace methods in conjunction with block preconditioners has been investigated in prior work <cit.>.The objective of the current work is to introduce an optimally efficient linear solution procedure for isogeometric compatible discretizations of the generalized Stokes and Oseen problems.It should be noted that there are many different candidates in this regard.For instance, there exist efficient physics-based splitting methods such as the inexact Uzawa algorithm <cit.>.However, these techniques rely on suitable Schur complement approximations which can be difficult to design in the context of generalized Oseen flow.Alternatively, one can employ a multigrid method in conjunction with a Vanka smoother <cit.>, a Uzawa smoother <cit.>, or a Braess-Sarazin smoother <cit.>.While these techniques generally do not require accurate Schur complement approximations, they typically involve specially tuned relaxation parameters.Perhaps more concerning is the fact that all of the aforementioned procedures do not return a pointwise-divergence free velocity field unless the linear solver is fully converged.To overcome the issues associated with the aforementioned linear solution procedures, we present a geometric multigrid methodology which relies upon Schwarz-style smoothers <cit.> in conjunction with specially defined overlapping subdomains that respect the underlying topological structure of the generalized Stokes and Oseen problems.This methodology is inspired by multigrid and auxiliary space preconditioning methodologies for divergence-conforming discontinuous Galerkin formulations of Stokes flow <cit.> and multigrid methodologies for compatible finite element discretizations of Darcy and Maxwell problems <cit.>.We prove that our methodology yields a pointwise divergence-free velocity field independent of the number of pre-smoothing steps, post-smoothing steps, grid levels, or cycles in a V-cycle implementation.We also demonstrate by numerical example that our methodology is optimally efficient and robust in that it exhibits convergence rates independent of the grid resolution and flow parameters for the generalized Stokes problem as well as the generalized Oseen problem provided it is not advection-dominated.It should be mentioned that the only user-defined constants in our methodology are the number of 
pre-smoothing steps and post-smoothing steps as well as the scaling factor if one elects to use an additive Schwarz smoother rather than a multiplicative Schwarz smoother. However, we have found that our method is optimally efficient regardless of the number of pre- and post-smoothing steps selected. An outline of the remainder of the paper is as follows. In Section <ref>, we motivate the need for efficient linear solvers for the generalized Stokes and Oseen problems through a discussion of temporal discretization of the Navier-Stokes equations. In Section <ref>, we discuss spatial discretization of the generalized Stokes and Oseen problems. In Section <ref>, we introduce the Stokes complex and demonstrate how to construct isogeometric compatible discretizations which commute with this complex. In Section <ref>, we present our structure-preserving geometric multigrid methodology, and we prove that this methodology indeed yields discrete velocity fields which are divergence-free. In Section <ref>, we apply the proposed multigrid method to a selection of generalized Stokes and Oseen problems. Finally, in Section <ref>, we provide concluding remarks.
§ TEMPORAL DISCRETIZATION OF THE NAVIER-STOKES EQUATIONS AND THE GENERALIZED STOKES AND OSEEN PROBLEMS
To motivate the need for efficient linear solvers for the generalized Stokes and Oseen problems, we first demonstrate how such problems arise through semi-implicit temporal discretization of the incompressible Navier-Stokes equations subject to homogeneous Dirichlet boundary conditions. For d ∈ ℤ_+, let Ω ⊂ ℝ^d denote an open, Lipschitz bounded domain, let Γ denote the boundary of Ω, and let T ∈ ℝ_+. Given ν ∈ ℝ_+, f: Ω × (0,T) → ℝ^d, and u_0: Ω → ℝ^d, the strong form of the Navier-Stokes problem then reads as follows: Find u: Ω × [0,T] → ℝ^d and p: Ω × (0,T) → ℝ such that: [ ∂u/∂t + u·∇u - νΔu + ∇p = f, (x,t) ∈ Ω × (0,T); ∇·u = 0, (x,t) ∈ Ω × (0,T); u = 0, (x,t) ∈ Γ × (0,T); u|_t=0 = u_0, x ∈ Ω ] Above, u denotes the velocity field, p denotes the pressure field, ν denotes the kinematic viscosity, f denotes the force per unit mass, and u_0 denotes the initial velocity field. The velocity field is uniquely specified by the Navier-Stokes problem while the pressure field is unique up to a constant. To discretize in time, we first define a sequence of time instances t_0 < t_1 < t_2 < … < t_N such that t_0 = 0 and t_N = T, and we denote the velocity and pressure solutions at the n^th time instance as u^(n) and p^(n) respectively for n = 0, …, N. We further define t_n+1/2 = (t_n + t_n+1)/2. Without loss of generality, we assume that the time instances are equi-spaced, and we define the time step size to be Δt = t_n+1 - t_n. The velocity solution at n = 0 is given by u^(0) = u_0, while to find the velocity and pressure solutions at each subsequent time instance, we must discretize the Navier-Stokes problem in time. We discuss two demonstrative semi-implicit temporal discretization schemes herein, though the following discussion also applies to other semi-implicit temporal discretization schemes [With a fully implicit time discretization scheme, one must turn to a nonlinear solution procedure such as Newton's method. However, with Newton's method, one solves a sequence of generalized Oseen problems. Hence, the multigrid methodology discussed here can also be employed to solve these problems.]. Let us first consider the standard Crank-Nicolson/Adams-Bashforth scheme <cit.>. In this approach, a central difference approximation of the unsteady term and linear interpolation approximations of the diffusive
and pressure force terms are employed: ∂u/∂t(t_n+1/2) ≈ (u^(n+1) - u^(n))/Δt, Δu(t_n+1/2) ≈ (Δu^(n+1) + Δu^(n))/2, ∇p(t_n+1/2) ≈ (∇p^(n+1) + ∇p^(n))/2. The advection term at t_n+1/2 is alternatively approximated using Taylor-series expansions involving time instances t_n-1 and t_n, resulting in [This approximation is not properly defined for n = 0. Consequently, the approximation is replaced by u^(n)·∇u^(n) for n = 0 in practice.]: (u·∇u)(t_n+1/2) ≈ (3/2) u^(n)·∇u^(n) - (1/2) u^(n-1)·∇u^(n-1). Collecting the above approximations, we find that the resulting generalized Stokes system holds for each n = 0, …, N-1: [ σ u^(n+1) - νΔu^(n+1) + ∇p^(n+1) = f_GS^(n+1), x ∈ Ω; ∇·u^(n+1) = 0, x ∈ Ω; u^(n+1) = 0, x ∈ Γ ] where σ = 2/Δt and: f_GS^(n+1) = f(t_n+1/2) + σ u^(n) - 3 u^(n)·∇u^(n) + u^(n-1)·∇u^(n-1) + νΔu^(n) - ∇p^(n). The above generalized Stokes system is reaction-dominated for small time step sizes and diffusion-dominated for large time step sizes. We demonstrate later that our geometric multigrid methodology is robust for both of these regimes. The advantage of the Crank-Nicolson/Adams-Bashforth scheme is that the advection term is handled in a purely explicit manner. After spatial discretization, this leads to a symmetric matrix problem. However, the disadvantage of the scheme is that it is stable only if the time step is chosen sufficiently small so as to satisfy a CFL condition. With this in mind, we next consider an unconditionally stable semi-implicit scheme introduced by Guermond <cit.>. In this scheme, the unsteady, diffusive, and pressure force terms are approximated as before, but the advection term is approximated as follows [This approximation is also not properly defined for n = 0. Consequently, the approximation is replaced by u^(n)·∇((u^(n) + u^(n+1))/2) for n = 0 in practice.]: (u·∇u)(t_n+1/2) ≈ ((3/2) u^(n) - (1/2) u^(n-1)) · ∇((u^(n+1) + u^(n))/2). Note that the advection velocity is approximated in an explicit manner while the gradient is approximated in an implicit manner. Collecting the above approximations, we find that the resulting generalized Oseen system holds for each n = 0, …, N-1: [ σ u^(n+1) + a^(n+1)·∇u^(n+1) - νΔu^(n+1) + ∇p^(n+1) = f_GO^(n+1), x ∈ Ω; ∇·u^(n+1) = 0, x ∈ Ω; u^(n+1) = 0, x ∈ Γ ] where σ = 2/Δt, a^(n+1) = (3/2) u^(n) - (1/2) u^(n-1), and: f_GO^(n+1) = f(t_n+1/2) + σ u^(n) - a^(n+1)·∇u^(n) + νΔu^(n) - ∇p^(n). In contrast with the generalized Stokes system obtained earlier, the above system admits different behavior based on not only the scalars σ and ν but also the advection velocity a^(n+1). We demonstrate later that our geometric multigrid methodology is robust for this system provided it is not advection-dominated. This holds if a CFL-like condition is satisfied.
§ SPATIAL DISCRETIZATION OF THE GENERALIZED STOKES AND OSEEN PROBLEMS
Now that we have motivated the need for efficient linear solvers for the generalized Stokes and Oseen problems, we turn to the question of spatial discretization. In this section, we present the basic ingredients associated with a mixed Galerkin discretization. Later, we will specialize to the setting of isogeometric compatible discretizations.
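Before discussing the spatial discretization in detail, the role that these linear systems play in practice can be summarized schematically. The following Python sketch (not the authors' implementation) renders the Crank-Nicolson/Adams-Bashforth update loop; the operators laplacian, gradient, advect, and the routine solve_generalized_stokes are hypothetical placeholders standing in for a spatial discretization and for the linear solver developed later in the paper.

import numpy as np

def cnab_time_loop(u0, p0, dt, n_steps, nu, forcing,
                   laplacian, gradient, advect, solve_generalized_stokes):
    # March the semi-discretized Navier-Stokes equations with the
    # Crank-Nicolson/Adams-Bashforth scheme; each step requires one
    # generalized Stokes solve with reaction coefficient sigma = 2/dt.
    sigma = 2.0 / dt
    u_prev, u_curr, p_curr = u0.copy(), u0.copy(), p0.copy()
    for n in range(n_steps):
        t_half = (n + 0.5) * dt
        A_n = advect(u_curr, u_curr)        # u^(n) . grad u^(n)
        A_nm1 = advect(u_prev, u_prev)      # equals A_n when n = 0 (see footnote)
        # Right-hand side f_GS^(n+1), assembled term by term as in the text.
        rhs = (forcing(t_half) + sigma * u_curr - 3.0 * A_n + A_nm1
               + nu * laplacian(u_curr) - gradient(p_curr))
        u_next, p_next = solve_generalized_stokes(sigma, nu, rhs)
        u_prev, u_curr, p_curr = u_curr, u_next, p_next
    return u_curr, p_curr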
§.§ Weak Formulation of the Generalized Stokes and Oseen Problems To begin, we must state a weak formulation for the generalized Stokes and Oseen Problems.We strictly consider the case of homogeneous Dirichlet boundary conditions without loss of generality.Before proceeding, we must first define suitable velocity and pressure test spaces:H^1_0(Ω) := { v∈ H^1(Ω) :v=0Γ}L^2_0(Ω) := { q ∈ L^2(Ω) : ∫_Ωq dΩ = 0 }We also assume that σ, ν∈ℝ_+, a∈ H^1_0(Ω), and f∈ L^2(Ω), and we assume that the advection velocity is divergence-free, that is, ∇· a≡ 0.With these assumptions in hand, the weak form of the generalized Stokes or Oseen problem is stated as follows:Find u∈ H^1_0(Ω) and p ∈ L^2_0(Ω) such that:a( v, u) - b( v,p) + b( u,q) = ℓ( v)for all v∈ H^1_0(Ω) and q ∈ L^2_0(Ω) where:a( v, u) := {[∫_Ωσ v· u d Ω + ∫_Ων∇ v : ∇ u d Ω generalized Stokes; ∫_Ωσ v· u d Ω + ∫_Ω v·(a·∇ u)d Ω + ∫_Ων∇ v : ∇ u d Ωgeneralized Oseen;]. b( v,p) := ∫_Ω( ∇· v) p d Ω ℓ( v) := ∫_Ω v· f dΩ§.§ Mixed Galerkin Approximation of the Generalized Stokes and Oseen Problems To discretize in space using a mixed Galerkin formulation, we first must specify finite-dimensional approximation spaces for the velocity and pressure fields.We denote these spaces as V_h ⊂ H^1_0(Ω) and Q_h ⊂ L^2_0(Ω) respectively, but we defer the discussion of suitable approximation spaces to Section <ref>.With approximation spaces defined, the mixed Galerkin formulation of the generalized Stokes or Oseen problem is stated as follows: Find u_h ∈ V_h and p_h ∈Q_h such thata( v_h, u_h) - b( v_h,p_h) + b( u_h,q_h) = ℓ( v_h)for all v_h ∈ V_h and q ∈Q_h.It should be noted that the velocity and pressure approximation spaces may not be arbitrarily selected.Instead, they should be chosen such that the Babuška-Brezzi inf-sup condition is satisfied <cit.>.We later select isogeometric divergence-conforming discretizations for spatial discretization which indeed satisfy such a condition. §.§ Weak Enforcement of No-Slip Boundary Conditions The no-slip boundary condition, u× n =0 where n is the outward facing normal to Γ, leads to the formation of boundary layers for wall-bounded flows.High mesh resolution is required near boundary layers to accurately represent associated sharp layers, so when the no-slip condition is strongly enforced in a mixed Galerkin formulation, inaccurate flow field approximations are obtained for insufficiently-resolved boundary layer meshes. It has recently been shown that superior results can be achieved by imposing the no-penetration boundary condition strongly and the no-slip boundary condition weakly using a combination of upwinding and Nitsche's method <cit.>. With such an approach, we first specify finite-dimensional velocity and pressure approximation spaces as before, but we only require that the corresponding discrete velocity fields satisfy v· n = 0. 
That is, we specify V_h ⊂ H^1_n(Ω) = { v∈ H^1(Ω):v· n = 0 Γ} and Q_h ⊂ L^2_0(Ω).The corresponding formulation for the generalized Stokes or Oseen problem is then stated as: Find u_h ∈ V_h and p_h ∈Q_h such that:a_h( v_h, u_h) - b( v_h,p_h) + b( u_h,q_h) = ℓ( v_h)for all v_h ∈ V_h and q_h ∈Q_h where:a_h( v_h, u_h) := a( v_h, u_h) - ∫_Γν v_h ·∇_ n u_hd Γ - ∫_Γν∇_ n v_h · u_hd Γ + ∫_ΓC_I ν/h v_h · u_hd ΓAbove, h is the wall-normal element mesh size and C_I is a positive constant that must be chosen sufficiently large to ensure coercivity of the bilinear form a_h(·,·).Appropriate values for the constant C_I can be obtained by solving element-wise eigenvalue problems or by appealing to analytical upper bounds for the trace inequality <cit.>.We choose to weakly enforce no-slip boundary conditions throughout the remainder of this work.This not only leads to more accurate numerical results, but it also ensures proper solution behavior in the limit of zero viscosity <cit.>. §.§ The Matrix Problem The formulation given by (<ref>) yields a linear matrix system when the discrete velocity and pressure spaces are provided basis functions. Let { N^v_i }_i=1^n_v denote a set of vector basis functions for V_h where n_v = dim( V_h), and let { N^q_i }_i=1^n_q denote a set of scalar basis functions for Q_h where n_q = dim(Q_h). Then the resulting matrix system takes the form: [ [ A - B; B^T 0 ]] ( [ u; p ]) = ( [ f; 0 ])where:[ A]_ij := a_h( N^v_i, N^v_j) [ B]_ij := b( N^v_i, N_j^q) [ f]_i:= ℓ( N^v_i)Moreover, this matrix system can be written concisely as:K U =Fwhere the matrix K has the block structure in (<ref>) and the vectors U and F are vectors representing the group variables in (<ref>).§ THE STOKES COMPLEX AND ISOGEOMETRIC COMPATIBLE DISCRETIZATIONSIt remains to specify suitable velocity and pressure approximation spaces for the generalized Stokes and Brinkman problems.In this section, we present a particular selection of velocity and pressure approximation spaces which is not only inf-sup stable but also yields pointwise divergence-free discrete velocity fields.Before doing so, however, we first introduce the so-called Stokes complex which succinctly captures the fundamental theorem of calculus and expresses the differential relationships between potential, velocity, and pressure fields. §.§ The Stokes ComplexThe Stokes complex <cit.> is a cochain complex of the form:0 @>>> Φ @>∇⃗>> Ψ @>∇⃗×>>V @>∇⃗·>> Q @>>> 0in the three-dimensional setting where:Φ := H^1_0(Ω)Ψ := {ψ∈ L^2(Ω): ∇⃗×ψ∈ H^1(Ω) ψ× n =0Γ}V :=H^1_n(Ω)Q := L^2_0(Ω)are infinite-dimensional spaces of scalar potential fields, vector potential fields, velocity fields, and pressure fields. 
The Stokes complex is a smoothed version of the classical L^2 de Rham complex, and when the domain Ω⊂ℝ^3 is simply connected with simply connected boundary, the Stokes complex is exact.This means that every pressure field may be represented as the divergence of a velocity field, every divergence-free velocity field may be represented as the curl of a vector potential field, and every curl-free vector potential field may be represented as the gradient of a scalar potential field.An analogous two-dimensional Stokes complex also exists, though for brevity, the interested reader is referred to <cit.> for more details.It has been shown in previous works that the Stokes complex endows the generalized Stokes and Oseen problems with important underlying topological structure.In particular, the infinite-dimensional inf-sup condition may be derived from the complex <cit.>.As such, there is impetus for developing finite-dimensional approximations of the Stokes complex.Such discrete complexes are referred to as discrete Stokes complexes, and when these complexes are endowed with special commuting projection operators, they form the following commuting diagram with the Stokes complex:0 @>>> Φ @>∇⃗>> Ψ @>∇⃗×>>V @>∇⃗·>> Q @>>> 0@. @VV Π_ϕ V @VV Π_ψ V @VV Π_v V @VV Π_q V 0 @>>> Φ_h @>∇⃗>> Ψ_h @>∇⃗×>>V_h @>∇⃗·>> Q_h @>>> 0where Φ_h, Ψ_h, V_h, and Q_h are discrete scalar potential, vector potential, velocity, and pressure spaces and Π_ϕ : Φ→Φ_h, Π_ψ: Ψ→Ψ_h, Π_v :V→ V_h, and Π_q : Q→Q_h are the aforementioned commuting projection operators.Remarkably, when V_h and Q_h are selected as velocity and pressure approximation spaces in a mixed Galerkin formulation of the generalized Stokes or Oseen problem, the resulting approximation scheme is inf-sup stable and free of spurious oscillations and the returned discrete velocity solution will be pointwise divergence-free <cit.>.Both of these properties are a direct consequence of the commuting diagram above, and for the sake of completeness, we prove the second property below. Assume that the discrete velocity and pressure spaces V_h and Q_h are associated with a discrete complex which commutes with the Stokes complex.Suppose v_h ∈ V_h satisfies b( v_h,q_h) = 0 for every q_h ∈Q_h.Then ∇· v_h = 0 pointwise.Let q_h = ∇· v_h.Then ∇· v_h ^2_L^2(Ω) = b( v_h,q_h) = 0 and the desired result follows. While we have demonstrated the benefit of using velocity and pressure spaces coming from a discrete Stokes complex, we have not yet described how to arrive at such spaces.In this paper, we turn to the use of so-called isogeometric compatible B-spline discretizations which are the focus of the next two subsections. §.§ Univariate and Multivariate B-splines The basic building blocks of isogeometric compatible B-spline discretizations, like any isogeometric analysis technology, are B-splines <cit.>.B-splines are piecewise polynomial functions, but unlike C^0-continuous finite elements, B-splines may exhibit high levels of continuity.Univariate B-splines are constructed by first specifying a polynomial degree p[The notation p is used for both the pressure field as well as the polynomial degree.Thus, the reader should discern what term p refers to in various portions of the paper by context.], a number of basis functions n, and an open knot-vector Ξ = {ξ_0,ξ_1,…,ξ_n+p+1}, a non-decreasing vector of knots ξ_i such that the first and last knot are repeated p+1 times. 
We assume without loss of generality that the first and last knot are 0 and 1 respectively such that the domain of the knot vector is (0,1). With a knot vector in hand, univariate B-spline basis functions are defined recursively through the Cox-de Boor formula: [ N̂_i,p(ξ) := (ξ - ξ_i)/(ξ_i+p - ξ_i) N̂_i,p-1(ξ) + (ξ_i+p+1 - ξ)/(ξ_i+p+1 - ξ_i+1) N̂_i+1,p-1(ξ), p > 0; N̂_i,0(ξ) := { 1 if ξ_i ≤ ξ < ξ_i+1; 0 elsewhere } ] Figure <ref> shows example sets of univariate B-spline basis functions. We can alternatively define B-splines not from the knot vector itself, but instead from a vector of unique knot values ζ = {ζ_1, ζ_2, …, ζ_n_k} and a regularity vector α = {α_1, α_2, …, α_n_k} such that the B-splines have α_j continuous derivatives across ζ_j. By construction, α_1 = α_n_k = -1. We will later employ the convention α - 1 = { -1, α_2 - 1, …, α_n_k-1 - 1, -1 }. Given a set of knot-vectors and polynomial degrees, multivariate B-spline basis functions are obtained through a tensor-product of univariate B-spline basis functions: N̂_i,p(ξ) := ∏_k=1^d N̂_i_k,p_k(ξ_k), where i = (i_1, i_2, …, i_d) and p = (p_1, p_2, …, p_d). We denote the corresponding space of multidimensional B-splines over the parametric domain Ω̂ = (0,1)^d as: S^p_1,p_2,…,p_d_α_1,α_2,…,α_d(ℳ_h) := { f : Ω̂ → ℝ | f(ξ) = ∑_i a_i N̂_i,p(ξ) }, where α_j is the regularity vector associated with the j^th direction where j = 1, …, d and ℳ_h is the parametric mesh defined by the vectors of unique knot values in each parametric direction. Note that the space is fully characterized by the polynomial degrees, regularity vectors, and parametric mesh as indicated by the notation. For ease of notation, however, we drop the dependence on the parametric mesh and instead use S^p_1,p_2,…,p_d_α_1,α_2,…,α_d = S^p_1,p_2,…,p_d_α_1,α_2,…,α_d(ℳ_h) in what follows.
§.§ Isogeometric Compatible B-splines
We are now in a position to define isogeometric compatible B-splines. Their definition is made possible through the observation that the derivatives of univariate B-splines of degree p are univariate B-splines of degree p - 1. Since multivariate B-splines are tensor-products of univariate B-splines, the aforementioned property naturally generalizes to higher dimension, allowing us to build a discrete Stokes complex of B-spline spaces <cit.>. We first define such a discrete Stokes complex in the parametric domain Ω̂ = (0,1)^d for both d = 2 and d = 3 before constructing a discrete Stokes complex in the physical domain of interest using a set of structure-preserving push-forward/pull-back operators. In the two-dimensional setting, we define the following B-spline spaces over the unit square: Ψ̂_h := { ψ̂_h ∈ S_α_1,α_2^p_1,p_2 : ψ̂_h = 0 on Γ̂ }, V̂_h := { v̂_h ∈ S_α_1,α_2-1^p_1,p_2-1 × S_α_1-1,α_2^p_1-1,p_2 : v̂_h · n = 0 on Γ̂ }, Q̂_h := { q̂_h ∈ S_α_1-1,α_2-1^p_1-1,p_2-1 : ∫_Ω̂ q̂_h dΩ̂ = 0 }, where Ψ̂_h is the B-spline space of streamfunctions, V̂_h is the B-spline space of flow velocities, and Q̂_h is the B-spline space of pressures. These discrete spaces are endowed with B-spline basis functions {N̂^ψ_i }_i=1^n_ψ, {N̂^v_i }_i=1^n_v, and {N̂^p_i }_i=1^n_q, respectively, where n_ψ is the number of streamfunction basis functions, n_v is the number of velocity basis functions, and n_q is the number of pressure basis functions, all of which can be inferred from the chosen polynomial degrees and knot vectors.
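For reference, the Cox-de Boor recursion above can be implemented in a few lines. The following self-contained Python sketch evaluates univariate B-spline basis functions from an open knot vector, with the usual convention that empty knot spans contribute zero.

import numpy as np

def bspline_basis(i, p, knots, xi):
    # Evaluate N_{i,p}(xi) by the Cox-de Boor recursion; 0/0 terms are
    # treated as zero, as is standard for repeated knots.
    if p == 0:
        return 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    value = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0.0:
        value += (xi - knots[i]) / denom * bspline_basis(i, p - 1, knots, xi)
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0.0:
        value += (knots[i + p + 1] - xi) / denom * bspline_basis(i + 1, p - 1, knots, xi)
    return value

# Example: maximally smooth quadratic B-splines on a uniform open knot vector.
p = 2
knots = np.array([0.0, 0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0])
n = len(knots) - p - 1                           # number of basis functions
values = [bspline_basis(i, p, knots, 0.4) for i in range(n)]
assert abs(sum(values) - 1.0) < 1e-12            # partition of unity

Tensor products of such univariate bases, with the degrees and regularities indicated in the definitions of Ψ̂_h, V̂_h, and Q̂_h above, furnish the compatible spaces used in what follows.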
One can readily show that these spaces form the following discrete Stokes complex:0 @>>> Ψ̂_h @>∇⃗^⊥>> V̂_h @>∇⃗·>> Q̂_h @>>> 0and provided the functions in the B-spline pressure space are at least C^0-continuous, there exist a set of commuting projection operators that make the above discrete complex commute with the Stokes complex.Thus, we refer to the spaces Ψ̂_h, V̂_h, and Q̂_h as compatible B-spline spaces.As mentioned previously, if we select V̂_h and Q̂_h as velocity and pressure approximation spaces in a mixed Galerkin formulation of the generalized Stokes or Oseen problems, then the resulting scheme yields a pointwise divergence-free velocity field.The degrees of freedom associated with compatible B-splines are associated with the geometrical entries of the underlying control mesh.This is graphically illustrated in Figure <ref> which shows that streamfunction degrees of freedom are associated with control points, velocity degrees of freedom are associated with (and aligned normal to) control edges, and pressure degrees of freedom are associated with control cells.Each degree of freedom corresponds to a particular basis function, and to visualize these basis functions, we have selected four degrees of freedom in Figure <ref> and visualized the respective basis functions in Figure <ref>[Note that the pressure basis function we have highlighted does not have zero average over the parametric domain.In practice, we enforce this constraint using a Lagrange multiplier rather than to the individual pressure basis functions.].In the three-dimensional setting, we define the following B-spline spaces over the unit cube:Φ̂_h:= {ϕ̂_h ∈ S_α_1,α_2,α_3^p_1,p_2,p_3: ϕ̂_h = 0 Γ̂} Ψ̂_h:= {ψ̂_h ∈ S_α_1-1,α_2,α_3^p_1-1,p_2,p_3× S_α_1,α_2-1,α_3^p_1,p_2-1,p_3× S_α_1,α_2,α_3-1^p_1,p_2,p_3-1 : ψ̂_h × n =0Γ̂} V̂_h:= {v̂_h ∈S_α_1,α_2-1,α_3-1^p_1,p_2-1,p_3-1× S_α_1-1,α_2,α_3-1^p_1-1,p_2,p_3-1× S_α_1-1,α_2-1,α_3^p_1-1,p_2-1,p_3 : v̂_h · n = 0 Γ̂} Q̂_h:= {q̂_h ∈ S_α_1-1,α_2-1,α_3-1^p_1-1,p_2-1,p_3-1 : ∫_Ω̂q̂_h dΩ̂ = 0 }where Φ̂_h is the B-spline space of scalar potentials, Ψ̂_h is the B-spline space of vector potentials, V̂_h is the B-spline space of flow velocities, and Q̂_h is the B-spline space of pressures. These discrete spaces are endowed with the basis functions {N̂^ϕ_i }_i=1^n_ϕ, {N̂^ψ_i }_i=1^n_ψ, {N̂^v_i }_i=1^n_v, and {N̂^p_i }_i=1^n_q, respectively, where n_ϕ is the number of scalar potential basis functions, n_ψ is the number of vector potential basis functions, n_v is the number of velocity basis functions, and n_q is the number of pressure basis functions, all of which can be inferred from the chosen polynomial degrees and knot vectors. 
Once again, one can show that the above spaces form the following discrete Stokes complex:0 @>>> Φ̂_h @>∇⃗>> Ψ̂_h @>∇⃗×>> V̂_h @>∇⃗·>> Q̂_h @>>> 0and provided the functions in the B-spline pressure space are at least C^0-continuous, there exist a set of commuting projection operators that make the above discrete complex commute with the Stokes complex.Heretofore, we have discussed how to construct compatible B-splines in the parametric domain.To define compatible B-splines in the physical domain Ω, we need to first define a piece-wise smooth bijective mapping F : Ω̂→Ω.This mapping can be defined using Non-Uniform Rational B-splines (NURBS), for instance, as is commonly done in the isogeometric analysis community <cit.>.With this mapping in hand, we define two-dimensional compatible B-spline spaces in the physical domain via the relations:Ψ_h:= {ψ_h ∈Ψ: ψ_h ∘ F∈Ψ̂_h }V_h:= { v_h ∈ V: det( J) J^-1 v_h ∘ F∈V̂_h } Q_h:= { q_h ∈Q: det( J) q_h ∘ F∈Q̂_h }and three-dimensional compatible B-spline spaces via the relations:Φ_h:= {ϕ_h ∈Φ: ϕ_h ∘ F∈Φ̂_h } Ψ_h:= {ψ_h ∈Ψ:J^-Tψ_h ∘ F∈Ψ̂_h }V_h:= { v_h ∈ V: det( J) J^-1 v_h ∘ F∈V̂_h } Q_h:= { q_h ∈Q: det( J) q_h ∘ F∈Q̂_h }whereJ = ∂_ξ F is the Jacobian of the parametric mapping. Corresponding basis functions in the physical domain are defined via push-forwards of the basis functions in the parametric domain, and we denote the discrete velocity basis functions as { N^v_i }_i=1^n_v and the basis functions for other quantities in analogous fashion.It is easily shown that the compatible B-spline spaces in the physical domain also comprise a discrete complex which commutes with the Stokes complex.The compatible B-splines in the physical domain are referred to as isogeometric compatible B-splines as they are built from B-splines, the basis building blocks of geometric modeling, and they are defined on the exact geometry of the problem of interest. §.§ B-spline RefinementOne more concept needs to be introduced before proceeding forward, namely the concept of B-spline refinement.For a fixed set of polynomial degrees, B-spline refinement is carried out by a process referred to as knot insertion <cit.>.In the univariate setting, we start with a particular knot vector Ξ and then insert a sequence of knots to arrive at a refined knot vector Ξ̃ such that Ξ⊂Ξ̃.The B-spline basis functions associated with the original knot vector, denoted as {N̂_i_p(ξ) }_i=1^n, can be represented as linear combinations of the basis functions associated with the refined knot vector, denoted as {Ñ_i,p(ξ) }_i=1^ñ, using a transformation matrix T.This relationship is expressed mathematically as:N̂_i,p(ξ) =∑_j = 1^ñ [ T]_ijÑ_j,p(ξ)for i = 1, …, n.Consequently, if a B-spline function takes the form:û(ξ) = ∑_i=1^nû_i N̂_i,p(ξ)it can be alternately be represented as:û(ξ) = ∑_j=1^ñũ_j Ñ_j,p(ξ)where:ũ_j =∑_i = 1^n [ T]_ijû_ifor j = 1, …, ñ.Figure <ref> depicts the action of knot insertion for univariate quadratic B-splines. 
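The single-knot-insertion step underlying this refinement can be sketched as follows; the routine below is a standard implementation (often attributed to Boehm) and is assumed rather than taken from the paper. Applying it to the canonical unit coefficient vectors assembles, column by column, the coefficient transfer map û ↦ ũ, that is, the transpose of the matrix T introduced above.

import numpy as np

def insert_knot(coeffs, knots, p, xbar):
    # Insert the knot value xbar once into the degree-p spline defined by
    # (coeffs, knots); the represented function is unchanged. It is assumed
    # that xbar lies strictly inside the knot-vector domain.
    knots = np.asarray(knots, dtype=float)
    k = np.searchsorted(knots, xbar, side='right') - 1   # xbar in [knots[k], knots[k+1])
    n = len(coeffs)
    new_coeffs = np.empty(n + 1)
    for i in range(n + 1):
        if i <= k - p:
            alpha = 1.0
        elif i >= k + 1:
            alpha = 0.0
        else:
            alpha = (xbar - knots[i]) / (knots[i + p] - knots[i])
        c_prev = coeffs[i - 1] if i > 0 else 0.0
        c_curr = coeffs[i] if i < n else 0.0
        new_coeffs[i] = alpha * c_curr + (1.0 - alpha) * c_prev
    return new_coeffs, np.insert(knots, k + 1, xbar)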
B-spline refinement in the multivariate setting (including the compatible B-spline setting) is carried out in a tensor-product fashion, and the transformation matrix T takes the same form in both the parametric domain and the physical domain.There exist a variety of algorithms capable of performing knot insertion <cit.> which can be used to construct the transformation matrix T, so we do not discuss this construction further in this paper.§ A STRUCTURE-PRESERVING GEOMETRIC MULTIGRID METHODOLOGYAt last, we are ready to present our geometric multigrid methodology for isogeometric compatible discretizations of the generalized Stokes and Oseen problems.We begin this section by reviewing the basics of the geometric multigrid approach as well as the required ingredients in the setting of an isogeometric compatible discretization.Then, we introduce the Schwarz-style smoothers which our methodology leans upon.We then show that our methodology preserves the divergence-free constraint on the velocity field and that it effectively ellipticizes the underlying system of interest.We limit our discussion to the V-cycle algorithm, though our approach can also be applied within a W-cycle or Full Multigrid framework <cit.>. §.§ Nested B-spline Stokes Complexes, Intergrid Transfer Operators, and the V-Cycle Algorithm Assume that we have a sequence of nested B-spline Stokes complexes that have been obtained through knot insertion.We denote the discrete velocity and pressure spaces associated with this sequence as { V_ℓ}_ℓ = 0^n_ℓ and {Q_ℓ}_ℓ = 0^n_ℓ respectively where n_ℓ is the number of levels, and we note that:V_0⊂ V_1⊂…⊂ V_n_ℓ Q_0⊂Q_1⊂…⊂Q_n_ℓand:∇· V_ℓ = Q_ℓfor each ℓ = 0, …, n_ℓ.Level ℓ = 0 corresponds to the coarsest mesh while level ℓ = n_ℓ corresponds to the finest mesh.The action of knot insertion not only allows for B-spline refinement, but it also provides the intergrid transfer operators associated with a geometric multigrid method.Namely, we can build prolongation operators:P^v_ℓ:V_ℓ→ V_ℓ+1 P^q_ℓ: Q_ℓ→Q_ℓ+1for ℓ = 0, …, n_ℓ - 1 using the construction provided in Subsection <ref>.We encode the action of these prolongation operators in the matrices P^v_ℓ and P^q_ℓ such that the following refinement operations hold:N^v_i,ℓ(ξ)= ∑_j [ P^v_ℓ]_ji N^v_j,ℓ+1(ξ) N^q_i,ℓ(ξ)= ∑_j [ P^q_ℓ]_ji N^q_j,ℓ+1(ξ)for ℓ = 0, …, n_ℓ - 1 where { N^v_i,ℓ}_i=1^n_v,ℓ and { N^q_i,ℓ}_i=1^n_q,ℓ denote the velocity and pressure B-spline basis functions associated with level ℓ.Moreover, the degrees of freedom associated with pressure and velocity fields on the ℓ^ level can be transferred to the the (ℓ+1)^ level via the expressions:u_ℓ+1 =P^v_ℓ u_ℓp_ℓ+1 =P^q_ℓ p_ℓAs is standard with a Galerkin formulation, restriction operators are constructed as the adjoint or transpose of the prolongation operators, namely R^v_ℓ+1 = (P^v_ℓ)^*, R^q_ℓ+1 = (P^q_ℓ)^*, R^v_ℓ+1 = ( P^v_ℓ)^T, and R^q_ℓ+1 = ( P^q_ℓ)^T for ℓ = 0, …, n_ℓ - 1.Finally, we define a prolongation matrix P_ℓ for the full group variable such that:U_ℓ+1 = [ [ u_ℓ+1; p_ℓ+1 ]] = [ [ P^v_ℓ 0; 0 P^q_ℓ ]] [ [ u_ℓ; p_ℓ ]] =P_ℓ U_ℓfor ℓ = 0, …, n_ℓ-1.The corresponding restriction matrix for level ℓ is given by R_ℓ+1 = ( P_ℓ)^T.We need a few more ingredients before stating the multigrid V-cycle algorithm for our discretization scheme.First of all, we need to form the matrix system associated with the finest level, K U =F.We then form the system matrices associated with coarser levels via the relation K_ℓ =R_ℓ+1 K_ℓ+1 P_ℓ for ℓ = 0, …, n_ℓ-1 where K_n_ℓ =K.Second of all, we need to 
choose a smoother for each level ℓ which we encode in a smoothing matrix S_ℓ, and we need to select a number of pre-smoothing steps ν_1 and post-smoothing steps ν_2. Third of all, we need to choose a suitable initial guess U for the solution on the finest level. Then, one V-cycle corresponds to a single call of the form MGV(n_ℓ, U, F) to the recursive function defined below <cit.>. Note that the solution U is updated within the algorithm stated above. Hence, additional V-cycles simply correspond to additional calls of the form MGV(n_ℓ, U, F).
§.§ Overlapping Schwarz Smoothers on Compatible Subdomains
At this juncture, we have not yet determined what smoother to employ. We turn to the use of overlapping Schwarz smoothers <cit.> with specially chosen overlapping subdomains which respect the underlying topological structure of the generalized Stokes and Oseen problems <cit.>. Namely, for each level ℓ, we define a collection of subdomains {Ω_i,ℓ}_i where each individual subdomain is defined as the support of a discrete streamfunction basis function in the two-dimensional setting: Ω_i,ℓ := supp(N^ψ_i,ℓ), and a discrete vector potential basis function in the three-dimensional setting: Ω_i,ℓ := supp(N^ψ_i,ℓ). It is easily seen that the subdomains form a cover of the physical domain, that is: Ω = ⋃_i Ω_i,ℓ. For each subdomain, we define discrete velocity and pressure subspaces V_i,ℓ ⊂ V_ℓ and Q_i,ℓ ⊂ Q_ℓ, respectively, as V_i,ℓ := { v_h ∈ V_ℓ : supp v_h ⊆ Ω_i,ℓ } and Q_i,ℓ := { q_h ∈ Q_ℓ : supp q_h ⊆ Ω_i,ℓ }. In the two-dimensional setting, we define a discrete streamfunction subspace for each subdomain as: Ψ_i,ℓ := { ψ_h ∈ Ψ_ℓ : supp ψ_h ⊆ Ω_i,ℓ }, and in the three-dimensional setting, we define a discrete vector potential subspace for each subdomain as: Ψ_i,ℓ := { ψ_h ∈ Ψ_ℓ : supp ψ_h ⊆ Ω_i,ℓ }. The degrees of freedom associated with all of the aforementioned subspaces are illustrated in the two-dimensional case in Figure <ref> for two separate subdomains. By construction, dim(Ψ_i,ℓ) = 1 in the two-dimensional setting and dim(Ψ_i,ℓ) = 1 in the three-dimensional setting.
Moreover, dim(V_i,ℓ) = 4 and dim(Q_i,ℓ) = 3 in both the two- and three-dimensional settings [From Figure <ref>, it appears that dim(Q_i,ℓ) = 4. However, the functions in Q_i,ℓ must satisfy a zero average constraint, so the dimension is one less than what is observed from the figure.]. Thus, the subspaces associated with each subdomain form the following exact discrete Stokes complex in the two-dimensional setting: 0 @>>> Ψ_i,ℓ @>∇⃗^⊥>> V_i,ℓ @>∇⃗·>> Q_i,ℓ @>>> 0, and the following exact discrete Stokes complex in the three-dimensional setting: 0 @>>> Ψ_i,ℓ @>∇⃗×>> V_i,ℓ @>∇⃗·>> Q_i,ℓ @>>> 0. Thus, as previously suggested, our choice of subdomains indeed respects the underlying topological structure of the generalized Stokes and Oseen problems. With our subdomains defined, we can now describe our choice of smoothers, namely additive and multiplicative Schwarz smoothers using our prescribed subdomains. In this direction, let E^v_i,ℓ and E^q_i,ℓ denote the velocity and pressure subdomain restriction matrices for a given level ℓ and subdomain i that take the full set of velocity and pressure degrees of freedom associated with level ℓ and map them to the set of velocity and pressure degrees of freedom associated with the subdomain Ω_i,ℓ. Additionally, let: E_i,ℓ := [ [ E^v_i,ℓ 0; 0 E^q_i,ℓ ] ] denote the subdomain restriction matrix for a given level ℓ and subdomain i for the full group variable. Then, the action of the additive Schwarz smoother is defined through: S^-1_ℓ = η ( ∑_i E_i,ℓ^T ( E_i,ℓ K_ℓ E_i,ℓ^T)^-1 E_i,ℓ ), where η ∈ (0,1) is a suitably chosen scaling factor <cit.>, while the action of the multiplicative Schwarz smoother is defined through: S^-1_ℓ = [ I - ∏_i ( I - E_i,ℓ^T ( E_i,ℓ K_ℓ E_i,ℓ^T)^-1 E_i,ℓ K_ℓ ) ] K^-1_ℓ. The additive and multiplicative Schwarz smoothers are generalizations of the classical Jacobi and Gauss-Seidel smoothers, and indeed they can be implemented in an efficient, iterative manner. For both of these smoothers, a sequence of local matrix problems of the form: K_i,ℓ U_i,ℓ = F_i,ℓ, where K_i,ℓ = E_i,ℓ K_ℓ E_i,ℓ^T, must be solved. It is easily seen that: K_i,ℓ = [ [ A_i,ℓ - B_i,ℓ; B_i,ℓ^T 0 ] ], where: A_i,ℓ = E^v_i,ℓ A_ℓ ( E^v_i,ℓ)^T and B_i,ℓ = E^v_i,ℓ B_ℓ ( E^q_i,ℓ)^T. Thus, with both the additive and multiplicative Schwarz smoothers, a discrete generalized Stokes or Oseen problem is solved for each subdomain. In the next subsection, we further clarify this interpretation in a variational setting. With the additive Schwarz smoother, the subdomain problems are solved independently, and their respective solutions are summed together and multiplied through by a scaling factor as indicated above. With the multiplicative Schwarz smoother, the subdomain problems are solved in a sequential fashion in analogy with the Gauss-Seidel smoother.
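A schematic rendering of these smoothers and of the V-cycle they are embedded in is given below, assuming SciPy sparse matrices and a precomputed list of degree-of-freedom index sets per subdomain; the authors' actual algorithm listing may differ in details such as how residuals are recomputed or how the coarsest level is solved. Because each local solve is itself a small saddle-point problem of the form above, the velocity update it returns is discretely divergence-free, which is the property exploited in the next subsection.

import numpy as np
import scipy.sparse.linalg as spla

def multiplicative_schwarz_sweep(K, u, f, subdomain_dofs):
    # One multiplicative (Gauss-Seidel-like) Schwarz sweep: loop over the
    # subdomains, restrict the current residual, solve the local saddle-point
    # problem K_{i,l}, and immediately update the global iterate.
    for dofs in subdomain_dofs:
        r = f - K @ u
        K_loc = K[dofs, :][:, dofs].toarray()
        u[dofs] += np.linalg.solve(K_loc, r[dofs])
    return u

def v_cycle(level, u, f, K, P, subdomains, nu1=1, nu2=2):
    # Recursive V-cycle; K[l] are the level matrices, P[l] the prolongation
    # matrices from level l to l+1 (restriction is taken as the transpose).
    if level == 0:
        return spla.spsolve(K[0].tocsc(), f)              # coarsest-level solve
    for _ in range(nu1):                                  # pre-smoothing
        u = multiplicative_schwarz_sweep(K[level], u, f, subdomains[level])
    r_coarse = P[level - 1].T @ (f - K[level] @ u)        # restrict residual
    e_coarse = v_cycle(level - 1, np.zeros_like(r_coarse), r_coarse,
                       K, P, subdomains, nu1, nu2)
    u = u + P[level - 1] @ e_coarse                       # coarse-grid correction
    for _ in range(nu2):                                  # post-smoothing
        u = multiplicative_schwarz_sweep(K[level], u, f, subdomains[level])
    return u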
§.§ Preservation of the Divergence-free Constraint Now that we have presented our choice of smoother, we demonstrate that our geometric multigrid methodology preserves the divergence-free constraint on the velocity field.Provided that the initial guess for the V-cycle algorithm satisfies the divergence-free constraint, it is sufficient to show that each smoothing step provides velocity updates that are divergence-free.One application of either the additive or multiplicative Schwarz smoother at level ℓ is akin to solving a collection of local subdomain problems of the form: Find δ u_i,ℓ∈ V_i,ℓ and δ p_i,ℓ∈Q_i,ℓ such that:a_h( v_h,δ u_i,ℓ) - b( v_h,δ p_i,ℓ) + b(δ u_i,ℓ,q_h) = ℓ( v_h) - a_h( v_h, u_h) + b( v_h,p_h) - b( u_h,q_h)for all v_h ∈ V_i,ℓ and q_h ∈Q_i,ℓ where u_h and p_h are the approximate discrete velocity and pressure solutions.For the additive Schwarz smoother, the approximate discrete velocity and pressure solutions are updated following the solution of all of the local problems according to:u_h← u_h + η(∑_i δ u_i,ℓ) p_h← p_h + η(∑_i δ p_i,ℓ)while for the multiplicative Schwarz smoother, the approximate discrete velocity and pressure solutions are updated following the solution of each individual local problem according to:u_h← u_h + δ u_i,ℓp_h← p_h + δ p_i,ℓFor each subdomain problem, if the approximate discrete velocity solution is divergence-free, it holds that:b(δ u_i,ℓ,q_h) = 0for all q_h ∈Q_i,ℓ.Since Q_i,ℓ = ∇· V_i,ℓ, we can select q_h = ∇·δ u_i,ℓ to find:∇·δ u_i,ℓ^2_L^2(Ω) = b(δ u_i,ℓ,q_h) = 0and thus the solution to the local problem is also divergence-free.Thus, if the initial guess for the V-cycle algorithm satisfies the divergence-free constraint, each subsequent application of the Schwarz smoother at any given level ℓ preserves the divergence-free constraint as well. §.§ Efficacy of the Structure-Preserving Geometric Multigrid Methodology We conclude here with a short discussion of the efficacy of our geometric multigrid methodology.We restrict our discussion to the three-dimensional setting without loss of generality.Recall that the spaces Ψ_i,ℓ, V_i,ℓ, and Q_i,ℓ form a discrete Stokes complex for a given level ℓ and subdomain i.Thus, we can express the velocity solution δ u_i,ℓ∈ V_i,ℓ to (<ref>) in terms of the curl of a vector potential δψ_i,ℓ∈Ψ_i,ℓ provided the velocity is divergence-free, and this vector potential can be obtained via the reduced subdomain problem: Find δψ_i,ℓ∈Ψ_i,ℓ such that:a_h(∇×ζ_h,∇×δψ_i,ℓ) = ℓ(∇×ζ_h) - a_h(∇×ζ_h, u_h)for all ζ_h ∈Ψ_i,ℓ.This is precisely the subdomain problem associated with the global semi-elliptic generalized Maxwell problem with hyperresitivity <cit.>: Find ψ∈Ψ_h such that:a_h(∇×ζ^h,∇×ψ^h) = ℓ(∇×ζ^h)for all ζ^h ∈Ψ_h.It is known that a geometric multigrid methodology based on the use of Schwarz smoothers posed on structure-preserving subdomains is optimally convergent for Maxwell problems <cit.>.Consequently, we can expect that at least the discrete velocity solutions will converge in our approach.§ NUMERICAL RESULTSWe now present a series of numerical tests illustrating the effectiveness of our proposed geometric multigrid methodology. 
Each of the tests correspond to problems with homogeneous Dirichlet boundary conditions applied along the entire domain boundary.In our discretization scheme, no-penetration boundary conditions are enforced strongly and no-slip boundary conditions are enforced weakly using a penalty constant of C_I = 4(p-1) where p is the polynomial degree which is taken to be equal in each parameteric direction.It should be noted that p refers to the polynomial degree of the discrete streamfunction space in the two-dimensional case and the discrete scalar potential space in the three-dimensional case.Hence, for p = 2, the discrete pressure fields are piecewise bilinear/trilinear B-splines rather than piecewise biquadratic/triquadratic B-splines.Maximally smooth B-splines defined on uniform knot vectors are utilized throughout.For all of the following tests, we define convergence as the number of V-cycles required to reduce the initial residual by a factor of 10^6.We always initialize the V-cycle algorithm using a random initial guess which satisfies the divergence-free constraint on the velocity field.For each V-cycle, one pre-smoothing and two post-smoothing steps are employed using either the multiplicative or additive Schwarz smoother.For the additive Schwarz smoother, a scaling factor of η = 0.5 is employed.For all the problems presented here, a single element is used for the coarsest mesh and we investigate the convergence behavior for various levels of refinement.We report on the convergence behavior of our method for both the generalized Stokes and Oseen problems as well as a selection of different problem parameters, polynomial degrees, domain geometries, and number of spatial dimensions (2D and 3D).With respect to problem parameters, we consider the ratios between reaction and diffusion and advection and diffusion, which we express through a Damköhler number (Da) and a Reynolds number (Re).We define these numbers as:Da = σ L^2/νand Re = | a|L/ν.where L is a characteristic length scale which is taken to be one throughout. §.§ Two-dimensional generalized Stokes flow in a square domain We first consider a two-dimensional generalized Stokes problem posed on the square domain (0,1)^2.In particular, we consider a forcing:f = σu - νΔu + ∇ pcorresponding to the manufactured solution <cit.>:u = [ [2e^x (-1 + x)^2 x^2 (y^2 - y)(-1 + 2y); (-e^x (-1 + x) x (-2 + x (3 + x))(-1 + y)^2 y^2 ]] p = (-424 + 156e + (y^2 - y) (-456 + e^x (456 + x^2 (228 - 5 (y^2 - y))+ 2x (-228 + (y^2 -y)) + 2x^3 (-36 + (y^2 - y)) + x^4 (12 + (y^2-1)))))The velocity field associated with this exact solution is plotted in Figure <ref>.We first present convergence results for our multigrid method using the multiplicative Schwarz smoother and polynomial degrees p = 2 and p = 3 in Table <ref>.It is clear that for a given polynomial order, the convergence behavior is robust with respect to both the number of levels of refinement and the problem parameters.We also observe that as polynomial order is increased, the convergence behavior deteriorates, albeit slightly.This is consistent with previously observed behavior for isogeometric analysis <cit.>.We next present convergence results for our multigrid method using the additive Schwarz smoother and polynomial degree p = 2 in Table <ref>.As expected, overall convergence is slower with additive Schwarz than with multiplicative Schwarz, although in this case the method is still robust with regard to both the number of levels of refinement and the problem parameters. 
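The manufactured forcing used above is straightforward to generate symbolically. The sketch below (sympy assumed) differentiates the velocity field quoted in the text together with a zero-mean stand-in pressure; the divergence of the velocity simplifies to zero because it is the perpendicular gradient of a scalar streamfunction, and the benchmark pressure expression quoted above can be substituted in directly for the actual test case.

import sympy as sp

x, y, sigma, nu = sp.symbols('x y sigma nu')

# Velocity components of the manufactured solution quoted in the text.
u1 = 2*sp.exp(x)*(x - 1)**2*x**2*(y**2 - y)*(2*y - 1)
u2 = -sp.exp(x)*(x - 1)*x*(x**2 + 3*x - 2)*(y - 1)**2*y**2

# Illustrative zero-mean stand-in pressure on the unit square.
p_exact = sp.sin(sp.pi*x)*sp.sin(sp.pi*y) - 4/sp.pi**2

div_u = sp.simplify(sp.diff(u1, x) + sp.diff(u2, y))      # expected to be 0
f1 = sigma*u1 - nu*(sp.diff(u1, x, 2) + sp.diff(u1, y, 2)) + sp.diff(p_exact, x)
f2 = sigma*u2 - nu*(sp.diff(u2, x, 2) + sp.diff(u2, y, 2)) + sp.diff(p_exact, y)
print(div_u)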
§.§ Two-dimensional generalized Stokes flow in a quarter annulus We next consider a two-dimensional generalized Stokes problem posed on a quarter annulus.The domain is described in Figure <ref> where r_i = 0.075 and r_o = 0.225.We consider a manufactured solution achieved by mapping the solution presented in (<ref>)-(<ref>) to the quarter-annulus domain using a quadratic rational Bézier parametric mapping and appropriate push-forward operators.The velocity field associated with the exact solution are also plotted in Figure <ref>.We present convergence results for our multigrid method using the multiplicative Schwarz smoother and polynomial degree p = 2 in Table <ref>.Compared with the square domain, the number of V-cycles required for convergence is larger.However, the method is still robust with respect to both the number of levels of refinement and the problem parameters. §.§ Three-dimensional generalized Stokes flow in a cube domainWe next consider a three-dimensional generalized Stokes problem posed on the unit cube (0,1)^3.In particular, we consider a forcing:f = σu - νΔu + ∇ pcorresponding to the manufactured solution <cit.>:u = ∇×ψ ψ = [ [ x(x-1)y^2(y-1)^2z^2(z-1)^2;0; x^2(x-1)^2y^2(y-1)^2z(z-1) ]] p = sin(π x) sin(π y) - 4/π^2Streamlines colored by velocity magnitude associated with the exact solution are plotted in Figure <ref>.We present convergence results for our multigrid method using the multiplicative Schwarz smoother and polynomial degrees p = 2 and p = 3 in Table <ref>.Convergence appears to be much quicker in the three-dimensional setting.Notably, one V-cycle appears to be sufficient to reduce the residual by six orders of magnitude for a sufficient number of levels for both p = 2 and p = 3 and irrespective of the Damköhler number.We believe this may be due to the fact each velocity degree of freedom is updated twice as many times in each iteration of the smoother in the three-dimensional case as compared to the two-dimensional case. §.§ Three-dimensional generalized Stokes flow in a hollow cylinder sectionThe final generalized Stokes problem considered in this paper is a three-dimensional problem posed on a hollow cylinder section.The domain for this problem is simply the quarter annulus from before extruded in the z-direction by a depth of d = 0.1.We consider a manufactured solution achieved by mapping the solution presented in (<ref>)-(<ref>) to the hollow cylinder section domain using a quadratic rational Bézier parametric mapping and appropriate push-forward operators. Streamlines colored by velocity magnitude associated with the exact solution are plotted in Figure <ref>.We present convergence results for our multigrid method using the multiplicative Schwarz smoother and polynomial degree p = 2 in Table <ref>.Incredibly, one V-cycle again appears to be sufficient to reduce the residual by six orders of magnitude for a sufficient number of levels irrespective of the Damköhler number. 
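For completeness, the quarter-annulus geometry (and, after extrusion in z by the depth d = 0.1, the hollow-cylinder section) can be parameterized by a rational quadratic Bezier arc blended linearly in the radial direction. The sketch below is one such parameterization, not necessarily the exact NURBS map used by the authors.

import numpy as np

def quarter_annulus(xi, eta, r_i=0.075, r_o=0.225):
    # Map (xi, eta) in the unit square to the quarter annulus: a rational
    # quadratic Bezier arc (control points (1,0), (1,1), (0,1) with weights
    # 1, 1/sqrt(2), 1) traces the unit quarter circle, and eta blends
    # linearly between the inner and outer radii.
    b = np.array([(1 - xi)**2, 2*xi*(1 - xi), xi**2])
    w = np.array([1.0, 1.0/np.sqrt(2.0), 1.0])
    px = np.array([1.0, 1.0, 0.0])
    py = np.array([0.0, 1.0, 1.0])
    denom = np.dot(w, b)
    cx = np.dot(w*px, b) / denom
    cy = np.dot(w*py, b) / denom
    r = r_i + eta*(r_o - r_i)
    return r*cx, r*cy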
§.§ Two-dimensional generalized Oseen flow in a square domain
We now turn our attention to the generalized Oseen problem. We first consider a two-dimensional generalized Oseen problem posed on the square domain (0,1)^2. We manufacture a solution with a forcing: f = σ u + a·∇u - νΔu + ∇p, where u and p are defined as in (<ref>)-(<ref>) such that the resulting solution is the same as the unit square generalized Stokes problem. Note that the advection velocity is taken to be the manufactured velocity field. We present convergence results for our multigrid method using the multiplicative Schwarz smoother, polynomial degree p = 2, and various Reynolds and Damköhler numbers in Table <ref>. When the Reynolds number is low, the advection terms become negligible, and thus the method performs as it did on the 2D generalized Stokes problem. As the Reynolds number is increased, the advection term becomes more significant. In this case, we have observed favorable convergence behavior as long as the Damköhler number is at least as large as the Reynolds number. When the system becomes advection-dominated, on the other hand, the multigrid method fails to converge. We expect that improved results may be obtained through the use of an alternative smoother which respects the directionality of the advection velocity.
§.§ Three-dimensional generalized Oseen flow in a cube domain
We conclude by considering a three-dimensional generalized Oseen problem posed on the unit cube (0,1)^3. We manufacture a solution with a forcing: f = σ u + a·∇u - νΔu + ∇p, where u and p are defined as in (<ref>)-(<ref>) such that the resulting solution is the same as the unit cube generalized Stokes problem. As with the two-dimensional generalized Oseen problem, the advection velocity is taken to be the manufactured velocity field. We present convergence results for our multigrid method using the multiplicative Schwarz smoother, polynomial degree p = 2, and various Reynolds and Damköhler numbers in Table <ref>. The same trends that were observed for the two-dimensional case are observed here as well. Namely, when the system is not advection-dominated, we achieve excellent convergence behavior. Also, as was the case with the generalized Stokes flow, the three-dimensional case exhibits improved convergence as compared to the two-dimensional case.
§ CONCLUSIONS
In this paper, we presented a structure-preserving geometric multigrid methodology for isogeometric compatible discretizations of the generalized Stokes and Oseen problems which relies upon Schwarz-style smoothers in conjunction with specially chosen subdomains. We proved that our methodology yields a pointwise divergence-free velocity field independent of the number of pre-smoothing steps, post-smoothing steps, grid levels, or cycles in a V-cycle implementation, and we demonstrated the efficiency and robustness of our methodology by numerical example. Specifically, we found that our methodology exhibits convergence rates independent of the grid resolution and flow parameters for the generalized Stokes problem as well as the generalized Oseen problem provided it is not advection-dominated. We also discovered that, somewhat surprisingly, our methodology exhibits improved convergence rates in the three-dimensional setting as compared with the two-dimensional setting. We envision several avenues for future work. First of all, we plan to conduct a full mathematical analysis of our methodology. We anticipate that this analysis will largely follow the same program of work as laid out in a recent geometric multigrid
paper for divergence-conforming discontinuous Galerkin formulations of Stokes flow <cit.>.Second, we would like to extend the applicability of our methodology to advection-dominated Oseen problems.We anticipate the need for upwind-based line smoothers in such a setting <cit.>.Third, we plan to extend our methodology to multi-patch geometries and adaptive isogeometric compatible discretizations <cit.>.Initial results in this area are quite encouraging.Finally, we plan to extend our methodology to multi-physics problems, including coupled flow transport, fluid-structure, and magnetohydrodynamics.§ ACKNOWLEDGEMENTThis material is based upon work supported by the Air Force Office of Scientific Research under Grant No. FA9550-14-1-0113.wileyj | http://arxiv.org/abs/1705.09282v1 | {
"authors": [
"Christopher Coley",
"Joseph Benzaken",
"John A. Evans"
],
"categories": [
"math.NA"
],
"primary_category": "math.NA",
"published": "20170525175646",
"title": "A geometric multigrid method for isogeometric compatible discretizations of the generalized Stokes and Oseen problems"
} |
Cloud-free skies for the puffiest known super-Neptune?Instituto de Astrofísica de Canarias, Vía Láctea s/n, E-38205 La Laguna, Tenerife, Spain [email protected] de Astrofísica, Universidad de La Laguna, SpainKey Laboratory of Planetary Sciences, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008, ChinaDepartment of Physics, University of Warwick, Coventry CV4 7AL, UKWASP-127b is one of the lowest density planets discovered to date. With a sub-Saturn mass (M_ p=0.18 ± 0.02 M_J) and super-Jupiter radius (R_ p= 1.37 ± 0.04 R_J), it orbits a bright G5 star, which is about to leave the main-sequence. We aim to explore WASP-127b's atmosphere in order to retrieve its main atmospheric components, and to find hints for its intriguing inflation and evolutionary history. We used the ALFOSC spectrograph at the NOT telescope to observe a low resolution (R∼330, seeing limited) long-slit spectroscopic time series during a planetary transit, and present here the first transmission spectrum for WASP-127b. We find the presence of a strong Rayleigh slope at blue wavelengths and a hint of Na absorption, although the quality of the data does not allow us to claim a detection. At redder wavelengths the absorption features of TiO and VO are the best explanation to fit the data. Although higher signal-to-noise ratio observations are needed to conclusively confirm the absorption features, WASP-127b seems to posses a cloud-free atmosphere and is one of the best targets to perform further characterization studies in the near future.A feature-rich transmission spectrum for WASP-127b E. Palle1,2 G. Chen1,2,3 J. Prieto-Arranz1,2 G. Nowak1,2 F. Murgas1,2 L. Nortmann1,2 D. Pollacco4 K. Lam4 P. Montanes-Rodriguez1,2 H. Parviainen1,2 N. Casasayas-Barris1,2Received Month 00, 2017; accepted Month 00, 2017 =============================================================================================================================================================================================================================================================================================§ INTRODUCTIONThe atmospheres of exoplanets are a unique window to investigate the planetary chemistry, which can help improve our understanding of planetary interior properties and provide links to planet formation and migration histories <cit.>. Transmission spectroscopy retrieves the absorption and scattering signatures from the atmosphere at the planetary day-night terminator region. These signatures are only imprinted on the stellar light when it is transmitted through the planetary atmosphere during a transit, and they can be extracted through the differential method when compared to out-of-transit measurements. Such studies have been carried out by many ground-based large telescope and space telescope, in a wide range of spectral resolutions <cit.>, resulting in robust detections of Na, K, H_2O, CO, and scattering hazes (see the inventory listed inand ). A recent HST+Spitzer survey led by <cit.> performed a comparative study on ten hot Jupiters covering 0.3-5 μ m. 
This diverse hot Jupiter sample reveals a continuum from clear to cloudy atmospheres, and suggests clouds/hazes as the cause of weakened spectral features.As the investigated sample increases, it is fundamental to construct a spectral sequence for exoplanets, for a global picture of population characteristics and formation/evolution scenarios, as we have achieved for stars and brown dwarfs.In the near-future the JWST will provide spectral resolutions at high SNR with a large wavelength coverage 0.6-28 μ m that can distinguish among different atmospheric compositions. However, ground-based observations can also complement JWST by extending the wavelength range to λ < 600 nm, which is critical to examine spectral signatures arising from Rayleigh scattering, Na, or TiO/VO <cit.>. The ideal starting point are low density planets, which are more likely to host extended atmospheric envelopes that can produce stronger transmission signals if cloud-free.WASP-127b <cit.>, with a mass of 0.18 ± 0.02 M_J and a radius of 1.37 ± 0.04 R_J, is the puffiest, lowest density, planet discovered to date. It has an orbital period of 4.18 days, and orbits a bright parent star (V = 10.2), which makes it a very interesting object for atmospheric follow-up studies.WASP-127b's host star is a G5 star which is at the end of the main-sequence phase and moving to the sub-giant branch <cit.>. Moreover, the unusually large radius (compared to its sub-Saturn mass) cannot be explained by the standard coreless model <cit.>, and places it into the short-period Neptune desert, a region between Jovian and super-Earth planets with a lack of detected planets <cit.>. Several inflation mechanisms have been proposed to explain this inflation, including tidal heating, enhanced atmospheric opacity, Ohmic heating, and/or re-inflation by host star when moving towards the RGB phase <cit.>, although no concluding observations have yet been established to favour one or the other. Therefore, the formation and evolution mechanisms of WASP-127b are very intriguing, given its transition size between these two classes of planets.§ OBSERVATIONS AND DATA REDUCTIONWe observed one transit of WASP-127b on the night of February 23rd, 2017, using the Andalucia Faint Object Spectrograph and Camera (ALFOSC) mounted at the 2.5 m Nordic Optical Telescope (NOT) at ORM observatory. ALFOSC has a field of view of 6'.4x6'.4 and a 2048x2048 E2V detector with a pixel size of 0”.2. The observation was carried out in the long-slit mode using a 40" wide slit to avoid flux losses, and placing both WASP-127 and a reference star simultaneously aligned into the slit. The reference star TYC 4916-897-1 is located 40”.5 away from WASP-127 and it is about one magnitude fainter (V = 11.2) over the observed spectral range. Grism #4 was used covering simultaneously the spectral range from 320-960 nm. Observations started at 23:45 UT and ended at 05:36 UT, resulting in a time series of 746 spectra. Exposure times were set to 20s. The transit of WASP-127b (T_14) started at 00:19 UT and ended at 04:38 UT, resulting in 554 spectra taken within transit. The night was clear, with a relatively stable seeing of around 0".5 during the full observation. The airmass changed from 1.35 to 1.19, then to 2.45.Data reduction was carried out using the approach outlined in <cit.> for similar OSIRIS long-slit data taken with the GTC. 
The one-dimensional spectra (see Figure <ref>) were extracted using the optimal extraction algorithm <cit.> with an aperture diameter of 13 pixels, which minimized the scatter for the white-color light curves created from various trial aperture sizes. The time stamp was centered on mid-exposure and converted into the Barycentric Dynamical Time standard <cit.>. Misalignment between the target and reference stars in the wavelength solutions and any spectral drifts were corrected. Then the requested wavelength range of a given pass-band was converted to a pixel range, and the flux was summed to generate the time series.A broad-band (white-color) light curve was integrated from 395 nm to 945 nm, excluding the range of 755–765 nm to eliminate the noise introduced by the oxygen-A band <cit.>, and used to derive the transit parameters in Figure <ref>. Moreover several narrow band light curves were constructed to study the wavelength-dependence of the transit depth and derive the transmission spectrum (see Figure <ref> for the band ranges).§ LIGHT-CURVE ANALYSISThe light-curve data were modeled in the approach detailed in <cit.>. In brief, the light-curve model contains two multiplicative components. One component describes the astrophysical signal, which adopts the analytic transit model 𝒯(p) proposed by <cit.>. The other component describes the systematics of telluric or instrumental origins in a fully parametric form or in a semi-empirical form, which is designated as the baseline model ℬ(c_i). The transit model 𝒯(p) was parameterized as orbital period P, inclination i, scaled semi-major axis a/R_⋆, planet-to-star radius ratio R_ p/R_⋆, mid-transit time T_ mid, and limb-darkening coefficients u_i, where a circular orbit was assumed. The orbital period P was fixed to 4.178062 days as reported by <cit.>. A quadratic limb-darkening law was adopted and conservatively constrained by Gaussian priors of width σ=0.1, whose central values were calculated from the ATLAS atmosphere models following <cit.> with stellar parameters T_ eff=5750 K, log g=3.9, and [Fe/H]=-0.18. The baseline model ℬ(c_i) consisted of a selected combination of auxiliary state vectors, including spectral and spatial position drifts (x, y), spectra's full width at half maximum (FWHM) in the spatial direction (s_y), airmass (z), and time sequence (t). The Bayesian information criterion <cit.> was used to find the baseline model that can best remove the systematics. For the white-color light-curve, the modelℬ_𝓌 = c_0+c_1s_y+c_2zgave the lowest BIC value. The second best model yields a value of ΔBIC=3.3 higher. For the spectroscopic light-curves, the model was chosen in a semi-empirical form:ℬ_𝓈𝓅ℯ𝒸(λ) = 𝒮_𝓌×(c_0+c_1s_y(λ)+c_2t+c_3t^2),which inherited a common-mode component 𝒮_𝓌 determined from the white-color light-curve. The common-mode systematics 𝒮_𝓌 were derived after dividing the white-color light-curve by the best-fitting transit model 𝒯(p).The Transit Analysis Package <cit.>, customized for our purposes, was employed to perform the Markov chain Monte Carlo analysis. The correlated noise was taken into account by the wavelet-based likelihood function proposed by <cit.>. The overall transit parameters were determined from the white-color light-curve, whose best-fitting values and associated uncertainties were calculated as the median and 1σ percentiles of the posterior probability distributions and listed in Table <ref>. 
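As an illustration of the model-selection step, the following simplified sketch compares candidate baseline models by weighted linear least squares and BIC = χ² + k ln N. In the actual analysis the transit model and baseline are fit jointly with MCMC, so the snippet below (with hypothetical array names flux_over_transit, err, sy, z, t) only mirrors the logic of the comparison.

import numpy as np

def fit_baseline(design, data, err):
    # Weighted linear least squares; returns coefficients and chi^2.
    w = 1.0 / err
    coeff, *_ = np.linalg.lstsq(design * w[:, None], data * w, rcond=None)
    chi2 = np.sum(((data - design @ coeff) / err)**2)
    return coeff, chi2

def compare_baselines(flux_over_transit, err, sy, z, t):
    # flux_over_transit: white-color light curve divided by a trial transit
    # model; sy, z, t: spatial FWHM, airmass, and time vectors.
    ones = np.ones_like(t)
    candidates = {
        'c0 + c1*sy + c2*z': np.column_stack([ones, sy, z]),
        'c0 + c1*z': np.column_stack([ones, z]),
        'c0 + c1*t + c2*t^2': np.column_stack([ones, t, t*t]),
    }
    n_pts = len(flux_over_transit)
    scores = {}
    for name, design in candidates.items():
        _, chi2 = fit_baseline(design, flux_over_transit, err)
        scores[name] = chi2 + design.shape[1]*np.log(n_pts)   # BIC
    return scores   # the candidate with the smallest BIC is retained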
For the spectroscopic light-curves, only the planet-to-star radius ratio R_ p/R_⋆, the limb-darkening coefficients u_i, and the baseline coefficients c_i were fit, while the other transit parameters were fixed to the ones determined from the white-color light-curve. The wavelength-dependent radius ratios are presented in Table <ref>. The white-color and spectroscopic light-curves are shown in Fig. <ref> and <ref>, respectively. § RESULTS AND DISCUSSION §.§ Second order contamination When using the grism #4 with ALFOSC, second order contamination can be present due to the overlap in the detector of different diffraction orders<cit.>. To check this issue, on March 21st 2017, we performed consecutive observations of WASP-127 with the grism #4, with and without the second order blocking filters #101 (GG475) and #102 (OG515). We find that for WASP-127, second order contamination of the stellar flux appears at 1% level at 655 nm and rises nearly monotonically reaching 10% at 900 nm. Following the approach of <cit.>, we were able to directly remove the second order component of the blue light from the first order stellar spectra, and then to derive a new transmission spectrum. As shown in Fig. <ref>, this correction makes the transit depths slightly smaller at red wavelengths (λ≳ 600 nm), which agree with the original ones well within the error bars and still show the same relative spectral shape.§.§ Transmission spectrumTo interpret the transmission spectrum of WASP-127b, a series of atmospheric models with an isothermal temperature structure were generated using thecode <cit.>. Various metallicities, chemical compositions (with or without the presence of Na, K, TiO, VO), and weather conditions (clear,hazy, or cloudy) were considered. We also analytically calculated a pure Rayleigh scattering model following the approach of <cit.>, and used a simple flat straight line to represent the gray absorbing clouds.It is clear from Fig. <ref> that the transmission spectrum is not flat, but on the contrary it has strong spectral features.It is not surprising that we can detect spectral features even using a relatively small aperture telescope, given that one atmospheric scale height, H, of WASP-127b corresponds to approximately 2500 km (equivalent to a signal of 510 ppm) assuming an H-He atmosphere, and that the amplitude of a given spectral signature can typically achieve about 5H <cit.>.At the bluer wavelengths, the spectrum shows a decreasing slope with λ, which seems to indicate the presence of Rayleigh scattering. A hint of Na absorption is seen (although statistically insignificant), with the band centered on the Na doublet presenting a larger R_ p/R_⋆ value than the surrounding bands. Unfortunately, the analysis of narrower pass bands around Na did not provide more information but increasing noise (not shown). No K absorption is seen. Toward the red, strong absorptions from TiO and VO molecules seem to dominate the spectral shape. Fitting the different models to the whole spectral range (415-885 nm) or the blue spectral range free of second order (415–655 nm), the one with the minimumis always precisely the model including only TiO/VO, and with an enhanced Rayleigh slope indicative of some haze in the atmosphere (see Figure <ref>, and Table <ref> forfitting results). 
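The scale-height estimate quoted earlier in this section can be cross-checked with a few lines of arithmetic. The planet mass and radius are the values given in the introduction; the equilibrium temperature (about 1400 K), the mean molecular weight of an H/He atmosphere (mu of about 2.3) and the stellar radius (about 1.39 solar radii) are not quoted in this text and are assumed here as representative values, so the numbers below are only an order-of-magnitude check of the ~2500 km and ~510 ppm figures.

G, k_B, m_u = 6.674e-11, 1.381e-23, 1.661e-27          # SI constants
M_J, R_J, R_sun = 1.898e27, 7.149e7, 6.957e8

M_p, R_p = 0.18 * M_J, 1.37 * R_J                      # values quoted in the text
T_eq, mu, R_star = 1400.0, 2.3, 1.39 * R_sun           # assumed inputs (not given in the text)

g = G * M_p / R_p**2                                   # surface gravity
H = k_B * T_eq / (mu * m_u * g)                        # pressure scale height
dD_per_H = 2.0 * R_p * H / R_star**2                   # transit-depth change per scale height

print(f"g = {g:.2f} m/s^2,  H = {H/1e3:.0f} km  (text: ~2500 km)")
print(f"signal = {dD_per_H*1e6:.0f} ppm per H,  5H ~ {5*dD_per_H*1e6:.0f} ppm  (text: ~510 ppm per H)")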
We find that the best fitting models are metal poor, which is interesting because the host star is also metal poor ([Fe/H]= -0.18 ± 0.06).Given the relatively cool equilibrium temperature of WASP-127b <cit.>, the tentative inference of the TiO/VO molecules is somewhat unexpected and intriguing. For planets with equilibrium temperatures lower than ∼1900 K, TiO could be cold trapped in the deep atmospheric layers when the temperature-pressure profile crosses the condensation curve <cit.>. Several other possibilities could also account for TiO/VO's absence in the upper atmosphere <cit.>. Until now only two very hot Jupiters, that is WASP-121b <cit.> and WASP-48b <cit.>, have shown evidence of TiO/VO in the transmission spectrum. If the presence of TiO/VO were true in WASP-127b's relatively “cool” atmosphere, one possible scenario to avoid the cold trap could be that the stellar irradiation is directly deposited into WASP-127b's deep interior which thereby changes the deep temperature profile <cit.>. § CONCLUSIONSWe have observed one transit of WASP-127b, an inflated, sub-Neptune mass planet. Because of its low density, the observed atmospheric scale height signals are large, and even with the NOT telescope we could retrieve its transmission spectrum. After considering the possible effects of second order contamination in the spectra, the spectrum shows the presence of a strong Rayleigh-like slope at blue wavelengths and a hint of Na absorption, although the quality of the data does not allow us to claim a detection. At redder wavelengths the absorption features of TiO and VO are the best explanation to fit the observed data. While the SNR is small, these findings are enough to conclude that the atmosphere of WASP-127b is either completely or partially cloud-free.The brightness of its host star, a close-by comparison star, its extraordinary inflation, and its short period, all contribute to make WASP-127b a prime target for further followup with ground- and space-based facilities, including the JWST, which will be able to confirm our findings and extend them into the infrared regime. Finding the physical mechanism(s) responsible for this inflation will help us understand how this type of planets evolve and how their fate is tied to that of their host star. This article is based on observations made in the Observatorios de Canarias del IAC with the NOT telescope operated on the island of La Palma by the NOTSA in the Observatorio del Roque de los Muchachos (ORM).This work is partly financed by the Spanish MINECO through grants ESP2013-48391-C4-2-R, and ESP2014-57495-C2-1-R.G.C. acknowledges the support by the National NSF of China (Grant No. 11503088) and the Nat. Sci. Found. of Jiangsu Province (Grant No. BK20151051). DLP is supported by the UK's STFC and a Royal Society Wolfson Merit award. aa§ APPENDIX A: SPECTRO-PHOTOMETRIC DATA Observed color light curves are shown in Figure <ref> and the derived transit depths at each spectral pass band are given here in Table <ref>. | http://arxiv.org/abs/1705.09230v1 | {
"authors": [
"E. Palle",
"G. Chen",
"J. Prieto-Arranz",
"G. Nowak",
"F. Murgas",
"L. Nortmann",
"D. Pollacco",
"K. Lam",
"P. Montanes-Rodriguez",
"H. Parviainen",
"N. Casasayas-Barris"
],
"categories": [
"astro-ph.EP"
],
"primary_category": "astro-ph.EP",
"published": "20170525153307",
"title": "A feature-rich transmission spectrum for WASP-127b"
} |
Version 1.6 as of December 30, 2023 Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK National Physical Laboratory, Hampton Road, Teddington, TW11 0LW, UK Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK Geophysical Laboratory, Carnegie Institution of Washington, Washington, DC 20015, USA Ludwig Maximilian University of Munich, 80539 München, Germany Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UKWe provide a fundamental insight into the microscopic mechanisms of the ageing processes.Using large scale molecular dynamics simulations of the prototypical ferroelectric material PbTiO_3, we demonstrate that the experimentally observed ageing phenomena can be reproduced from intrinsic interactions of defect-dipoles related to dopant-vacancy associates, even in the absence of extrinsic effects. We show that variation of the dopant concentration modifies the material's hysteretic response. We identify a universal method to reduce loss and tune the electromechanical properties of inexpensive ceramics for efficient technologies. Improving the Functional Control of Aged Ferroelectrics using Insights from Atomistic Modelling D. M. Duffy December 30, 2023 ===============================================================================================Technologies utilising ferroelectric components are ubiquitous in modern devices, being used from mobile phones, diesel engine drive injectors and sonar to print heads and non-volatile memory <cit.>. Doping with transition-metals has been shown experimentally to improve electromechanical properties of widely used ferroelectrics. For example, doping of BaTiO_3, PbTiO_3, (PZT) and (PMNPT) is used to improve the functional properties and efficiency of these simple and cheap oxides <cit.>. However, the fundamental origin of the electromechanical improvements is not understood and requires full characterisation to enable properties to be directly tuned for purpose and functional lifetimes to be accurately predicted. Dopant interactions can be classified as intrinsic (bulk/volume) or extrinsic (boundary). Extrinsic coupling is associated with domain wall and grain boundary effects. Defects, including dopants and vacancies, migrate to domain walls and subsequently pin their propagation, resulting in fatigue of the material's switching properties <cit.>. Intrinsic effects occur independent of interaction with domain walls, as they arise due to the interaction between defect induced dipoles p⃗_d and the spontaneous polarisation of the domain surrounding the defect site P⃗_s. Strong evidence from electron paramagnetic spin resonance (ESR) and density functional theory (DFT) calculations has shown dopants/impurities, such as iron Fe^3+ and copper Cu^2+, in PZT (or Mn^2+ in BaTiO_3) substitute the B-cations as acceptors, which bind to charge compensating oxygen vacancies V_O^2- to form thermodynamically stable defect complexes <cit.>. In Kröger-Vink notation, the divalent dopant-vacancy associates can be written as (B^”_Ti+V^∙∙_O)^×, where B^”_Ti is an unspecified divalent dopant subsituting a Ti^4+ site, V_O^∙∙ is an oxygen vacancy with a +2e charge relative to the defect free site, ' identifies a negative charge unit (-e), ∙ represents a positive charge unit (+e) and × stands for charge neutrality. 
Density functional theory calculations have shown defect-dipoles p⃗_d spontaneously form for (Fe^'_Ti+V^∙∙_O)^∙ <cit.> and (Cu^”_Ti+V^∙∙_O)^× <cit.> associates in PbTiO_3 and (Mn^”_Ti+V^∙∙_O)^× in BaTiO_3 <cit.>, and the energetically favourable oriention is along the polar axis [001]. Group-IIIB and group-VB acceptor substitutes on Ti sites in PbTiO_3 have been shown to form immobile clusters of dopant-vacancy associates which have different structures when the associate is aligned parallel or perpendicular to the polar axis <cit.>. (V_Pb^”+V_O^∙∙)^× divacancy complexes in PbTiO_3 have been calculated to have a local dipole moment twice the bulk value <cit.>.Ageing is simply defined as the change in a material's properties over time. It has been proposed that in aged ferroelectrics, defect-dipoles produced from dopant-vacancy associates will slowly rotate to align in parallel with the domain symmetry to minimise its energy state <cit.>. The co-alignment and subsequent correlated behaviour of these aged defect-dipoles has been proposed to create a macroscopically measurable internal bias, which in turn has been conjectured to be responsible for experimentally observed ageing phenomena, including a 10-40 fold increase in piezoelectric coefficients, shifts in the hysteresis along the electric field axis and pinched/double hysteresis loops typically associated with antiferroelectrics <cit.>.In this letter, we use large scale classical molecular dynamics to model ageing arising from defect-dipoles of dopant-vacancy associates in tetragonal bulk lead titanate (PbTiO_3). We show that all the experimentally observed large signal effects (P-E and S-E hysteresis) of aged prototype perovskite ferroelectrics; pinched and double hysteresis, shifted hysteresis and a large recoverable electromechanical response can be reproduced from intrinsic effects alone and we identify the microscopic mechanisms of each case.We study ideal and aliovalent-doped bulk PbTiO_3 using classical molecular dynamics (MD) asimplemented in the DL_POLY code <cit.>. We use the adiabatic core-shell interatomic potentials derived in Gindele et al <cit.> that reproduces the properties of bulk and thin films of PbTiO_3 in excellent agreement with DFT calculations <cit.>. The prototype PbTiO_3 has been chosen as it has a single ferroelectric phase, which reduces competing effects and because it is a parent compound for two of the most widely used ferroelectric materials in industry (PZT/PMNPT).In this study we investigate volume effects, therefore, three-dimension periodic boundary conditions are implemented to mimic an infinite crystal, devoid of surfaces, interfaces and grain boundaries. We choose a moderate supercell constructed from 12× 12 × 12 unit cells, approximately 125 nm^3, corresponding to 8,640 atoms (for the ideal bulk). This system size is large enough for ensemble sampling but sufficiently small to prevent the formation of 90^∘ domain walls. We use the Smooth Particle Mesh Ewald (SPME) summation for the calculation of Coulomb interactions. Coupling between strain and polarisation is enabled using the constant-stress Nosé-Hoover (Nσ T) ensemble with thermostat and barostat relaxation times of 0.01 ps and 0.1 ps, respectively. A 0.2 fs timestep is used in all instances. Initial calculations were run at 100 K to prevent diffusion of the vacancies <cit.> and to allow the correct characterisation of each effect. 
The temperature dependence, for the range from 50 K to 400 K, is then investigated.We calculate polarisation - electric field (P-E) hysteresis using a quasistatic approach. Starting at 0 kV/mm, the electric field is cycled between the limits ±150 kV/mm in 16.7 kV/mm intervals. For each field strength the system is restarted using the coordinates, velocities and forces from the previous calculation and equilibrated for 4 ps to enable the system to equilibrate following the E-field impulse. This is followed by an 8 ps production run over which statistics are collected (total of 12 ps per iteration). We calculate the local polarisation by considering conventional Ti-centred unit cells as implemented in references <cit.>. Further details are provided in the Supplementary Information. A dopant-vacancy concentration n_d=100(N_Ti^ideal-N_B^”)/N_Ti^ideal is introduced into the supercell initially containing N_Ti^ideal Ti atoms, by randomly selecting a total ofN_B^” Ti atoms to be replaced with generic divalent dopants B_Ti^”. Each dopant is coordinated by six nearest neighbouring oxygen-sites from which a charge compensating oxygen vacancy, V_O^∙∙, can be introduced. This configuration mimics (B^”_Ti+V^∙∙_O)^× dopant-vacancy associates observed from ESR experiments (Figure <ref>a). In experiments it is observed that the properties of an aged sample can be removed by heating above the Curie temperature for a long period and then rapidly quenching. It has been hypothesised that during this `un-ageing' process in the cubic phase of the prototype ferroelectric, each orientation of the defect-dipole is equally probable such that vacancies will thermally hop between the neighbouring oxygen site adjacent to the dopant and eventually 1/6 defect-dipoles will populate each of the six possible directions <cit.>. These are then frozen when quenched into the ferroelectric phase. Ferroelectrics can then be intentionally aged again by applying a bias field for a significantly long period. It is believed this causes defect-dipoles to align. Even in the absence of an ageing field, defect-dipoles in a sample left for a long period will align with the spontaneous polarisation of the domain <cit.>. When constructing the supercell for a particular simulation, the choice of which oxygen is removed neighbouring the dopant depends on the aged/unaged condition:(1) Unaged condition.To simulate unaged tetragonal PbTiO_3 we assign N_B^”/6 defect-dipoles along each of the six possible orientations causing the total moment to cancel, Figure <ref>b.(2) Aged condition.To simulate an aged PbTiO_3 sample, each V_O^∙∙ is selected to situate on the oxygen-site along the ageing direction (defined below) relative to its associated dopant. For these simulations we arbitrarily choose the ageing direction along +x̂ (see Figure <ref>a). This initialises all defect dipoles p⃗_d as parallel, polarised along [1̅00] as shown in Figure <ref>c.The ageing direction is defined relative to the driving field for the hysteresis characterisation. If the defect dipoles are co-aligned with the driving field we label this as aged(∥) (Figure <ref>d), whereas perpendicular alignments are labelled aged(⊥) (see Figure <ref>e). The strain is calculated as Δϵ=(c_0-c)/c where c_0 is the relaxed lattice constant (parallel to the drive field orientation) under no applied field. 
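The bookkeeping of the aged and unaged defect-dipole configurations described above can be sketched as follows. This is an illustrative reconstruction rather than the actual simulation input (which is prepared for DL_POLY): n_d is interpreted here as the percentage of Ti sites replaced by dopants, so that n_d = 1.38% in the 12x12x12 cell corresponds to 24 dopant-vacancy associates (a multiple of six, as the unaged condition requires), and the returned vectors are the positions of the charge-compensating oxygen vacancy relative to its dopant; the associated defect dipole points in the opposite direction, e.g. along [-100] when ageing along +x.

import numpy as np

AXES = np.array([[+1, 0, 0], [-1, 0, 0],
                 [0, +1, 0], [0, -1, 0],
                 [0, 0, +1], [0, 0, -1]])          # six nearest-neighbour O sites around a dopant

def vacancy_directions(n_cells=12, n_d_percent=1.38, aged=True,
                       ageing_axis=(1, 0, 0), seed=0):
    # Unaged: N/6 vacancies along each of the six directions, so the net moment cancels.
    # Aged:   every vacancy sits on the O site along the ageing direction (all dipoles parallel).
    n_ti = n_cells**3                              # 12^3 = 1728 Ti sites in the ideal supercell
    n_dopants = round(n_d_percent / 100 * n_ti)    # 1.38% -> 24 associates
    assert n_dopants % 6 == 0, "the unaged condition needs a multiple of six associates"
    if aged:
        return np.tile(ageing_axis, (n_dopants, 1))
    rng = np.random.default_rng(seed)
    return rng.permutation(np.repeat(AXES, n_dopants // 6, axis=0))

for aged in (False, True):
    d = vacancy_directions(aged=aged)
    print(f"aged={aged}: {len(d)} associates, net vacancy direction = {d.sum(axis=0)}")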
Further details of the computational methodology are provided in the Supplementary Information.Firstly we discuss results obtained at 100 K, to observe the ideal behaviour without thermal diffusion or hopping. The calculated P-E hysteresis of PbTiO_3 in response to an external driving field is shown in Figure <ref>a for the defect-free bulk, unaged and the two aged conditions. In all instances, the response is highly non-linear, typical of ferroelectrics. For the ideal bulk case a symmetric, square loop indicative of a hard ferroelectric is observed. We note our bulk coercive field E_c^int corresponds to the material's intrinsic coercive field, which greatly exceeds those measured experimentally for Pb-based ferroelectrics <cit.>. This is because our model excludes grain boundaries, surfaces and domain walls which would all act as nucleation sites, which lower the energy barrier for reversal in physical samples. Our result of 130 kV/mm matches other MD models <cit.> and is in excellent agreement with the intrinsic coercive field of 150 kV/mm calculated using density functional perturbation theory <cit.>.Figure <ref>a shows the hysteresis of an aged single domain simulated sample, with a defect concentration of 1.38%, in response to a driving field perpendicular to the direction in which the material was aged. Interestingly, when the system is equilibrated with no applied field the spontaneous polarisation P⃗ reorientates parallel to the ageing direction (See Supplementary Figure 2 and cartoon schematic in Figure <ref>d). This shows the internal bias created from the defect-dipoles is sufficient to overcome the switching barrier <cit.>. This observation provides direct evidence supporting the work of Zhang et al <cit.> who observed that non-switching defect-dipoles from (Mg^”_Ti-V_O^∙∙)^× associates in BaTiO_3 create restoring forces that promote reversible domain switching. Under the application of the perpendicular driving field there is an almost linear response until 67 kV/mm (≈ E_c^int/2), at which point the field strength is sufficient to switch the polarisation parallel to the drive field. As the electric field decreases to zero, the polarisation again reorientates along the ageing axis such that no remnant polarisation P_r remains in the poling direction. Thus in our work, the iconic double-hysteresis indicative of aged ferroelectrics is observed without the requirement of either domain walls or grain boundaries <cit.>.When poling parallel to the ageing orientation (Figure <ref>e) the system exhibits a shifted hysteresis curve along the electric-field axis as shown in Figure <ref>a. Such an effect is well documented in the literature when there is a preferred orientation of the defect dipoles in the poling direction <cit.>. In the unaged simulation we observe a symmetric square E-P hysteresis loop (Figure <ref>a). The computed coercive field of the unaged PbTiO_3 is reduced relative to the ideal bulk value by 35% (0.65E_C^int). The reduction of the coercive field from E_c^int occurs because the dopant-vacancy associates break local symmetry creating localised areas where the activation energy for nucleation of reverse domains is reduced <cit.>. The electrostrain (S-E) of each condition is shown in Figure <ref>b. Symmetric butterfly S-E curves are observed for both the ideal (n_d=0%) and unaged (n_d=1.38%) simulations, in excellent agreement with unaged ferroelectrics measured by experiment <cit.>. 
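For reference, the loop parameters discussed above (coercive field and remnant polarisation) can be extracted from a simulated P-E curve with a short helper such as the one below. The loop here is a synthetic stand-in with a coercive field of about 85 kV/mm, roughly the unaged value 0.65 E_c^int quoted above; the function names and the tanh parameterisation are purely illustrative.

import numpy as np

def zero_crossings(x, y):
    # Linearly interpolated values of x at the points where y changes sign.
    s = np.where(np.signbit(y[:-1]) != np.signbit(y[1:]))[0]
    return x[s] - y[s] * (x[s + 1] - x[s]) / (y[s + 1] - y[s])

def loop_parameters(E, P):
    # Coercive fields (where P = 0) and remnant polarisations (P at E = 0).
    return zero_crossings(E, P), zero_crossings(P, E)

# Synthetic loop sampled like the quasi-static protocol above
# (field cycled between +/-150 kV/mm in 16.7 kV/mm steps).
E_up = np.arange(-150.0, 150.1, 16.7)
P_up = 0.8 * np.tanh((E_up - 85.0) / 10.0)     # ascending branch, switches near +E_c
E_dn = E_up[::-1]
P_dn = 0.8 * np.tanh((E_dn + 85.0) / 10.0)     # descending branch, switches near -E_c
E, P = np.concatenate([E_up, E_dn]), np.concatenate([P_up, P_dn])

E_c, P_r = loop_parameters(E, P)
print("coercive fields [kV/mm]:", np.round(E_c, 1))
print("remnant polarisations  :", np.round(P_r, 2))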
In order to test the validity of results in comparison to experiment, we calculate the d_33 piezoelectric tensor coefficient of bulk PbTiO_3 using a fluctuation-perturbation theory approach <cit.> and second derivative matrices using the GULP package <cit.> (see Supplementary Information for further details). We find d_33=47±1 pC/N from the fluctuations, and 49 pC/N from the derivatives, which agree with each other and agree well with values measured on polycrystalline PbTiO_3 films (52-65 pC/N) <cit.>.Ageing parallel to the poling field is shown to induce an asymmetric S-E hysteresis (Figure <ref>b). Instances of large asymmetric S-E loops have been experimentally reported in a range of ferroelectric materials <cit.>. In <cit.>, the authors report a strain difference of 0.15% in Li doped (Bi_0.5Na_0.4K_0.1)_0.98Ce_0.2TiO_3 ceramics which they propose is due to alignment of (Li^”'_Ti-V_O^∙∙)^' associates. Using our prototypical system, we provide evidence that the asymmetry is likely to arise from an excess orientation along the poling direction and is a general feature of ageing that may be exploited for technological applications.It has been observed that ageing of doped BaTiO_3 is capable of producing a large recoverable non-linear electric field induced strain of 0.75%; far greater than those measured in PZT or PMNPT <cit.>. It is argued that a strong restoring force from aligned defect-dipoles enables polar axis rotation parallel to the defect-dipoles enabling reversible switching of 90^∘ domains and could lead to the realisation of strain values of 6% in PbTiO_3. In Figure <ref>b, we show that ageing perpendicular to a poling field in PbTiO_3 leads to a large recoverable strain in excess of 4.5% (black-diamonds). This large non-linear strain arises from the reorientation of the polar axis along the ageing direction due to the internal bias from the defect dipoles at subswitching fields (see Supplementary Figure 2. c→ a, Δϵ = (c_0-a)/a). Observing the switching behaviour, we find that, in this instance, the 90^∘ switching occurs via near-homogeneous polarisation rotation over a small field range rather than nucleation and growth of 90^∘ domains - a switching mechanism predicted in bulk PbTiO_3 <cit.> and BaTiO_3 <cit.>. Therefore, we further show the volume effect of the dopant-vacancy associates to be the fundamental cause of this ageing phenomenon and a domain wall mechanism is not required for the full reproduction of experimental observations. Our ageing results are in excellent agreement with a complementary bond valence model study of ageing in ideal BaTiO_3 using fixed dipoles introduced into the crystal structure <cit.>.The results of an investigation into the effect of temperature on the ageing phenomenon in PbTiO_3 is shown in Figure <ref>a for n_d=1.38%. As the temperature increases, a decrease in the effective coercive field and saturation polarisation are observed, corresponding to a narrowing of the double hysteresis as indicated by the trend arrows. This is analogous to the behaviour known for the square loop of the ideal prototype. Near room temperature under poling fields comparable to E_c, vacancy hopping becomes thermally activated causing limited events whereby a subset of defect dipoles reorientate. This was observed by tracking the displacement of each oxygen atom relative to its initial position. 
This reorientation can create asymmetric loops as seen at 400 K (0.67T_c), clearly demonstrating that at high temperatures/large fields the defect-dipoles can readily realign, elucidating the microscopic mechanism for aged → unaged transitions. No defect-dipoles were observed to switch below 300 K (0.5T_c). At 300 K a single hopping event was observed (1/24 vacancies) and two (1/12 vacancies) at 400 K, over the full hysteresis. We note that due to the relatively short simulation times these hopping frequencies will be under-sampled for accurate statistics and will be an interesting subject for future investigation. Defect concentrations close to and above 1.38% are shown to form closed double hysteresis loops described previously (Fig. <ref>b). As the concentration is increased the enclosed area of the hysteresis loops decrease due to the increased strength of the internal bias, which lowers the barrier for the reorientation of the polar axis. For intermediate defect concentrations (0.78% in this model), we find pinched hysteresis loops are produced. This form of P-E loop is the most common large signal observation noted in experimental studies of aged ferroelectrics <cit.>. We find that the work dissipated (area enclosed by the P-E loop) decreases with the dopant level (Fig. <ref>c).Increased defect concentrations start to pinch the square loop which, upon further increases, leads to a closed double hysteresis and gradual reduction of area. Thus, the dissipated energy losses, effective coercive fields and hysteretic behaviour of ferroelectric materials can be controlled by varying the applied fields and dopant levels. We note that in our study we are limited by the constraint of zero total dipole moment in our unaged simulation cell, which restricts the number of dopants N_B^” to factors of six. Thus, the concentrations identifying pinching and double hysteresis are, in fact, upper bounds. In conclusion, we use molecular dynamics to model ageing in boundary-free single domain doped PbTiO_3. We show that all the large-signal characteristics of ageing: pinched/double hysteresis, hysteresis shifts and large recoverable non-linear strains, can be reproduced from intrinsic effects of defect-dipoles from dopant-vacancy associates alone, resulting from the net defect dipole orientation with respect to the poling field. Varying the concentration of dopants was found to modify the material's hysteretic response, suggesting a mechanism for tuning ferroelectric and electromechanical properties for enhanced device performance. This work identifies and clarifies the microscopic mechanisms involved the ageing phenomena and suggests practical methods to inexpensively improve functional performance of ferroelectric ceramic based technologies. Funding was provided by the EPSRC (EP/G036675/1) via the Centre for Doctoral Training in Molecular Modelling and Materials Science at University College London and the National Measurement Office of the UK Department of Business Innovation and Skills. Computer services on Archer were provided via membership of the UK's HPC Materials Chemistry Consortium funded by EPSRC (EP/L000202). We acknowledge the use of the UCL facilities LEGION and GRACE, and computational resources at the London Centre for Nanotechnology. REC acknowledges support of the US Office of Naval Research, the ERC Advanced grant ToMCaT, and the Carnegie Institution for Science. | http://arxiv.org/abs/1705.09709v1 | {
"authors": [
"J. B. J. Chapman",
"R. E. Cohen",
"A. V. Kimmel",
"D. M. Duffy"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20170526203911",
"title": "Improving the Functional Control of Aged Ferroelectrics using Insights from Atomistic Modelling"
} |
| http://arxiv.org/abs/1705.09511v1 | {
"authors": [
"Marek Napiorkowski",
"Jaroslaw Piasecki"
],
"categories": [
"cond-mat.quant-gas",
"cond-mat.stat-mech"
],
"primary_category": "cond-mat.quant-gas",
"published": "20170526100809",
"title": "Thermodynamic equivalence of two-dimensional imperfect attractive Fermi and repulsive Bose gases"
} |
=116.2 cm 22.75 cm -1.25 cm -0.0 cm equationsectionh.c.% C.L.𝚍𝚒𝚊𝚐 1 g𝒢ℋℒØ𝒪𝒰𝒴 eV GeV TeVΔ m^2_solΔ m^2_atmΛ_LFVΛ_Lμ_Lhep-ph/***FTUAM-17-8IFT-UAM/CSIC-17-046SISSA24/2017/FISI 1cmbold Revisiting Minimal Lepton Flavour Violationin the Light of Leptonic CP Violation normal .3cm 0.5cm D.N. Dinh ^a),b),L. Merlo ^c),S.T. Petcov ^d),e),R. Vega-Álvarez ^c).7cm ^a) Mathematical and high energy physics group, Institute of physics,Vietnam academy of science and technology, 10 Dao Tan, Ba Dinh, Hanoi, Viet Nam.1cm ^b) Department of Physics, University of Virginia, Charlottesville, VA 22904-4714, USA.1cm ^c) Departamento de Física Teórica and Instituto de Física Teórica, IFT-UAM/CSIC,Universidad Autónoma de Madrid, Cantoblanco, 28049, Madrid, Spain.1cm ^d) SISSA and INFN-Sezione di Trieste, Via Bonomea 265, 34136 Trieste, Italy.1cm ^e) Kavli IPMU, University of Tokyo (WPI), Tokyo, Japan.3cm [l].9 E-mail:[email protected], [email protected], [email protected], [email protected] 0.5cmThe Minimal Lepton Flavour Violation (MLFV) framework is discussed after the recent indication for CP violation in the leptonic sector. Among the three distinct versions of MLFV, the one with degenerate right-handed neutrinos will be disfavoured, if this indication is confirmed. The predictions for leptonic radiative rare decays and muon conversion in nuclei are analysed, identifying strategies to disentangle the different MLFV scenarios. The claim that the present anomalies in the semi-leptonic B-meson decays can be explained within the MLFV context is critically re-examined concluding that such an explanation is not compatible with the present bounds from purely leptonic processes.[1]Table of Contentstableofcontents § INTRODUCTION The discovery <cit.> of a non-vanishing reactor angle θ^ℓ_13 in the lepton mixing matrix led to a huge fervour in the flavour community and to a deep catharsis in the model building approach.When the value of this angle was still unknown, the closeness to a maximal mixing value of the atmospheric angle θ^ℓ_23 was suggesting a maximal oscillation between muon- and tau-neutrinos: in terms of symmetries of the Lagrangian acting on the flavour space, it could be described by a discrete Abelian Z_2 symmetry, which, in turn, implied a vanishing reactor angle. The simplicity and the elegance of this pattern, i.e. one maximal angle and one vanishing one, convinced part of the community that Nature could have made us a favour and that neutrino physics could indeed be described, at least in the atmospheric and reactor sectors, by this texture <cit.>. An approach followed for such constructions was to write a Lagrangian whose leading order terms described specific textures for the Yukawa matrices, leading to θ^ℓ_13=0^∘ and θ^ℓ_23=45^∘. Often, this was done such that the Yukawa matrix for the charged leptons was diagonal while the Yukawa matrix for the light active neutrinos was diagonalised by the so-called Tri-Bimaximal mixing matrix <cit.>, whichpredicts, besides a vanishing reactor mixing angle and a maximal atmospheric one θ^ℓ_23=45^∘, a solar angle satisfying to sin^2θ^ℓ_12=1/3, in a very good agreement with the neutrino oscillation data.Pioneer models can be found in Refs. <cit.>, where the discrete non-Abelian group A_4 was taken as a flavour symmetry of the lepton sector. Several distinct proposals followed, i) attempting to achieve the Tri-Bimaximal pattern, but with other flavour symmetries (see for example Refs. 
<cit.>); or ii) adopting other mixing patterns to describe neutrino oscillations, such as the Bimaximal mixing[Bimaximal mixing can be obtained by assuming the existence of an approximate U(1) symmetry corresponding to the conservation of the non-standard lepton charge L'= L_e - L_μ - L_τ and additional discrete μ - τ symmetry <cit.>.] <cit.>, the Golden Ratio mixing <cit.> and the Trimaximal mixing <cit.>; iii) analysing the possible perturbations or modifications to Bimaximal mixing, Tri-Bimaximal mixing etc., arising from the charged lepton sector <cit.>, vi) implementing the so-called quark-lepton complementarity <cit.> which suggests that the lepton and quark sectors should not be treated independently, but a common dynamics could explain both the mixings <cit.>. Further details could be found for example in these reviews <cit.>. After the discovery of a non-vanishing θ^ℓ_13 and the improved sensitivity on the other two mixing angles, which pointed out that θ^ℓ_23 best fit is not 45^∘ (the most recent global fits on neutrino oscillation data can be found in Refs. <cit.>), models based on discrete symmetries underwent to a deep rethinking. A few strategies have been suggested: introduction of additional parameters in preexisting minimal models, see for example Refs. <cit.>; implementation of features that allow sub-leading corrections only in specific directions in the flavour space <cit.>; search for alternative flavour symmetries or mixing patterns that lead already in first approximation to θ^ℓ_13≠ 0^∘ and θ^ℓ_23≠ 45^∘ <cit.>. One can fairly say that the latest neutrino data can still be described in the context of discrete symmetries, but at the prize of fine-tunings and/or less minimal mechanisms.Alternative approaches to discrete flavour model building strengthened after 2011 and, in particular, constructions based on continuous symmetries were considered interesting possibilities: models based on the simple U(1) (e.g. Refs. <cit.>) or based on SU(3) (e.g. Refs. <cit.>)orthe so-called Minimal Flavour Violation (MFV) <cit.>, and its leptonic versions <cit.>, dubbed MLFV. The latter is a setup where the flavour symmetry is identified with the symmetry of the fermionic kinetic terms, or in other words, the symmetry of the SM Lagrangian in the limit of vanishing Yukawa couplings: it is given by products of U(3) factors, one for each fermion spinor of the considered spectrum. Fermion masses and mixings are then described once the symmetry is broken. This approach allows to relate any source of flavour and CP violation in the SM and beyond to the Yukawa couplings, such that any flavour effect can be described in terms of fermion masses and mixing angles. The M(L)FV is not a complete model, as fermion masses and mixings are just described while their origin is not explained (attempts to improve with this respect can be found in Refs. <cit.>). It is instead a framework where observed flavour violating observables are described in agreement with data and unobserved flavour violating signals are not expected to be observed with the current experimental sensitivities, but could be observable in the future planned experiments with significantly higher sensitivity, assuming the New Physics (NP) responsible for these phenomenology at the TeV scale or slightly higher <cit.>.The recent indication of a relatively large Dirac CP violation in the lepton sector <cit.> represented a new turning point in the sector. 
Present data prefer a non-zero Dirac CP phase, δ^ℓ_CP, over CP conservation at more than 2σ's, depending on the specific neutrino mass ordering. Moreover, the best fit value for the leptonic Jarlskog invariant, J^ℓ_CP≃-0.033 <cit.>, is numerically much larger in magnitude than its quark sibling, J^ℓ_CP≃3.04× 10^-5 <cit.>, indicating potentially a much larger CP violation in the lepton sector than in the quark sector.In the field of discrete flavour models, this indication translated into looking, for the first time, for approaches and/or contexts where, besides the mixing angles, also the lepton phase(s) were predicted: new models were presented with the CP symmetry as part of the full flavour symmetry <cit.>; studies on the mixing patterns and their modifications to provide realistic descriptions of oscillation data were performed <cit.>; an intense activity was dedicated to investigate sum rules involving neutrino masses, mixing angles and δ^ℓ_CP <cit.>.The indication for CP violation in the lepton sector also had an impact on models based on continuous flavour symmetries. In particular, one very popular version of MLFV <cit.> strictly requires CP conservation as a working assumption and therefore, if this indication is confirmed, this setup will be disfavoured. The first goal of this paper is to update previous studies on MLFV in the light of the last global fit on neutrino oscillation data and to discuss the impact of the recent indication for CP violation in the lepton sector. Indeed, the last studies on MLFV date back to the original papers in 2005 <cit.> and 2011 <cit.>, before the discovery of a non-vanishing θ^ℓ_13 and lacking any information about the leptonic CP phase. The search for an explanation of the heterogeneity of fermion masses and mixings, the so-called Flavour Puzzle, is just a part of the Flavour Problem of particle physics. A second aspect of this problem is related to the fact that models involving NP typically introduce new sources of flavour violation. Identifying the mechanism which explains why the experimentally measured flavour violation is very much consistent with the SM predictions is a crucial aspect in flavour physics. The use of flavour symmetries turned out to be useful also with this respect: a very well-known example is the MFV setup, as previously discussed, whose construction was originally meant exactly to solve this aspect of the Flavour Problem. Promising results have been obtained also with smaller symmetries than the MFV ones, both continuous <cit.> and discrete <cit.>.The Flavour Problem becomes even more interesting after the indications for anomalies in the semi-leptonic B-meson decays: the angular observable P'_5 in the B→ K^∗μ^+μ^- decay presents a tension with the SM prediction of 3.7σ <cit.> and 2σ <cit.>, considering LHCb and Belle data, respectively; the Branching Ratio of B_s→ϕμ^+μ^- is in tension with the SM prediction at 3.2σ <cit.>; the ratio R_D^∗^ℓ≡ BR( B→ D^(∗)τν)_exp/BR( B→ D^(∗)ℓν)_exp× BR( B→ D^(∗)ℓν)_SM/ BR( B→ D^(∗)τν)_SM with ℓ=e, μ indicates a 3.9σ violation of τ/ℓ universality<cit.>; the ratio R_K≡ BR(B^+→ K^+μ^+μ^-)/BR(B^+→ K^+e^+e^-) is in a 2.6σ tension with the SM prediction <cit.>, indicating lepton universality violation in the e/μ sector. The latter has been confirmed also by the recent announcement of the measure of R_K^∗≡ BR(B^0→ K^∗0μ^+μ^-)/BR(B^0→ K^∗0e^+e^-) is in a 2.4-2.5σ (2.2-2.4σ) tension with the SM prediction in the central-q^2 region (low-q^2 region)<cit.>. 
Under the assumption that these anomalies are due to NP, and not due to an underestimation of the hadronic effects <cit.> or due to a statistical fluctuation, a global analysis on b→ s data can attempt to identify the properties of the underlying theory. Adopting an effective description, these results can be translated into constraints of the Wilson coefficients of the Hamiltonian describing Δ B=1 decays: the results of such analysis <cit.> are that the anomalies can be explained with a modification of the Wilson coefficients C_9 and C_10 defined as ^eff_Δ B=1⊃ -4G_F√(2)e^2(4π)^2V_tbV^∗_ts [ s γ_μ P_L b][ℓγ^μ(C_9+C_10γ_5)ℓ]+ where V is the CKM matrix, P_L=(1-γ_5)/2 is the usual left-handed (LH) chirality projector, b and s refer to the bottom and strange quarks, respectively, ℓ are the charged leptons, and the pre-factors refer to the traditional normalisation. Writing each of the coefficients as the sum of the purely SM contribution and the NP one, C_i=C^SM_i+δ C_i, the results of a one-operator-at-a-time analysis <cit.> suggest lepton universality violation in the e/μ sector quantifiable in δ C^e_9=-δ C^e_10∈[+0.56, +1.02]andδ C^μ_9=-δ C^μ_10∈[-0.81, -0.48] @1σ ,corresponding to 4.3σ and 4.2σ tension with the SM predictions, respectively.The hypothetical underlying theory, which manifests itself at low energies with these features, will necessarily respect the SM gauge invariance, and therefore will also contribute to b→ c processes and hopefully solve the R^ℓ_D^(∗) anomalies.Several attempts have been presented in the literature to explain the deficit on C_9 and/or C_10, including the MLFV approach: Ref. <cit.> considers the version of MLFV introduced in Ref. <cit.> and constraints on the Lagrangian parameters and on the Lepton Flavour Violating (LFV) scale have been obtained requiring to reproduce the values of δ C^e_9 and δ C^e_10 aforementioned. A second goal of this paper is to revisit the results presented in Ref. <cit.> considering the constraints from purely leptonic observables, such as radiative rare decays and μ→ e conversion in nuclei. Moreover, the analysis will be extended to the other versions of MLFV <cit.>.The structure of the paper can easily be deduced from the table of content: first, in Sect. <ref>, basic concepts of MFV and MLFV will be recalled, underlying the differences between the distinct versions of MLFV; then, in Sect. <ref>, several processes in the lepton sector will be discussed considering the last global fit on neutrino data and the recent indication for leptonic CP violation; in Sect. <ref>, the anomalies in the b→ s decays will be discussed, pointing out the differences with respect to previous literature; finally, concluding remarks will be presented in Sect. <ref>.§ MINIMAL (LEPTON) FLAVOUR VIOLATION If a theory of NP, with a characteristic scale of a few TeVs, behaves at low energy accordingly to the MFV ansatz, i.e. the SM Yukawa couplings are the only sources of flavour and CP violation even beyond the SM, then its flavour protection is guaranteed: the large majority of observed flavour processes in the quark sector are predicted in agreement with data <cit.>; unseen flavour changing processes, for example leptonic radiative rare decays, are predicted to have strengths which are inside the present experimental sensitivity <cit.>.In the modern realisation of the MFV ansatz, the flavour symmetry corresponds to the one arising in the limit of vanishing Yukawa couplings. 
This massless Lagrangian is left invariant under a tridimensional unitary transformations in the flavour space associated to each fermion spinor. In the quark sector, it is given by _Q× U(1)_B× U(1)_A^u× U(1)_A^dwith_Q= SU(3)_q_L× SU(3)_u_R× SU(3)_d_R ,where q_L refer to the SU(2)_L-doublet of quarks, and u_R and d_R to the SU(2)_L-singlets. The Abelian terms can be identified with the Baryon number, and with two axial rotations, in the up- and down-quark sectors respectively, which do not distinguish among the distinct families <cit.>. On the contrary, the non-Abelian factors rule the interactions among the generations and govern the amount of flavour violation: they are the key ingredients of MFV and will be in the focus of the analysis in which follows.The explicit quark transformations read [ q_L∼ ( 3, 1, 1)__Q u_R∼ (1,3, 1)__Q d_R∼ (1, 1,3)__Q;q_L→_q_Lq_Lu_R→_u_Ru_Rd_R→_d_Rd_R ,;] where _i∈ SU(3)_i are 3× 3 unitary matrices acting in the flavour space. The quark Lagrangian is invariant under these transformations, except for the Yukawa interactions: _Q=- q_L Y_u H̃ u_R- q_L Y_d H d_R +,where Y_i are 3× 3 matrices in the flavour space, H is the SU(2)_L-double Higgs field, and H̃=iσ_2 H^∗. _Q can be made invariant under _Q promoting the Yukawa matrices to be spurion fields,i.e. auxiliary non-dynamical fields, denoted by _u and _d, with specific transformation properties under the flavour symmetry: [ _u∼( 3,3, 1)__Q _d∼( 3, 1,3)__Q; _u→_q_L _u ^†_u_R _d→_q_L _d ^†_d_R . ] Once the Yukawa spurions acquire a background value, the flavour symmetry is broken and in consequence fermions masses and mixings are generated. A useful choice for these background values is to identify them with the SM Yukawa couplings: in a given basis, Y_d is diagonal and describes only down-type quark masses, while Y_u contains non-diagonal entries and accounts for both up-type quark masses and the CKM matrix V: _u≡ Y_u=√(2)vV^†M̂_u,_d≡ Y_d=√(2)vM̂_d,where v=246 GeV is the Higgs vacuum expectation value (VEV) defined by H^0=v/√(2), and M̂_u,d are the diagonal mass matrices for up- and down-type quarks, M̂_u≡(m_u, m_c, m_t) ,M̂_d≡(m_d, m_s, m_b) . When considering low-energy flavour processes, they can be described within the effective field theory approach through non-renormalisable operators suppressed by suitable powers of the scale associated to the messenger of the interaction. These structures could violate the flavour symmetry _Q, especially if they describe flavour changing observables. As for the Yukawa Lagrangian, a technical way out to recover flavour invariance is to insert powers of the Yukawa spurions. Once the spurions acquire background values, the corresponding processes are predicted in terms of quark masses and mixings. Several studies already appeared addressing this topic <cit.> and, as already mentioned at the beginning of this section, the results show that flavour data in the quark sector are well described within the MFV(-like) approach. Indeed, the Yukawa spurions act as expanding parameters and processes described by effective operators with more insertions of the spurions obtain stronger suppressions[The top Yukawa represents an exception as it cannot be technically taken as an expanding parameter. This aspect has been treated in Refs. <cit.>, where a resummation procedure has been illustrated.].MFV, however, cannot be considered a complete flavour model, as there is not explanation of the origin of quark masses and mixings. 
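As a quick numerical illustration of these background values (with representative quark masses in GeV and a crude, Cabibbo-only real approximation of the CKM matrix, since neither set of inputs is tabulated in the text), the sketch below also makes explicit the point raised in the footnote above, namely that the top Yukawa is the one entry that cannot serve as an expanding parameter:

import numpy as np

v = 246.0                                        # GeV
m_u, m_c, m_t = 2.2e-3, 1.27, 173.0              # representative up-type masses (GeV)
m_d, m_s, m_b = 4.7e-3, 0.096, 4.18              # representative down-type masses (GeV)
lam = 0.225                                      # Cabibbo angle; CKM truncated to its 1-2 block
V = np.array([[1 - lam**2 / 2, lam, 0.0],
              [-lam, 1 - lam**2 / 2, 0.0],
              [0.0, 0.0, 1.0]])

Y_u = np.sqrt(2) / v * V.T @ np.diag([m_u, m_c, m_t])   # background value of the up-type spurion
Y_d = np.sqrt(2) / v * np.diag([m_d, m_s, m_b])         # background value of the down-type spurion

print("max |Y_u| entry:", np.abs(Y_u).max())     # ~ sqrt(2) m_t / v ~ 1: the top Yukawa
print("max |Y_d| entry:", np.abs(Y_d).max())     # ~ sqrt(2) m_b / v ~ 0.02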
There have been attempts to go from the effective-spurionic approach to a more fundamental description, promoting the Yukawa spurions to be dynamical fields, called flavons, acquiring a non-trivial VEV. The corresponding scalar potentials have been discussed extensively with interesting consequences <cit.>: a conclusive dynamical justification for quark masses and mixing is still lacking, but the results are encouraging as the potential minima lead, at leading order, to non-vanishing masses for top and bottom quarks and to no mixing.§.§ The Lepton Sector The lepton sector is more involved with respect to the quark one, due to the lack of knowledge on neutrino masses: indeed, while the charged lepton description mimics the one of down-quarks, light active neutrino masses, and then the lepton mixing, cannot be described within the SM.Several ways out have been presented in the literature to provide a description for the lepton sector, and the focus here will be on two well-defined approaches, one maintaining the SM spectrum but relaxing the renormalisability criterium, and the other adding new particles in a still renormalisable theory.§.§.§ Minimal Field Content (MFC)Giving up with renormalisability, active neutrino masses can be described via the so-called Weinberg operator <cit.>, a non-renormalisable operator of canonical dimension 5 which breaks explicitly Lepton number by two units, Ø_W=12(ℓ^c_LH̃^∗)g_ν(H̃^†ℓ_L) ,where ℓ_L^c≡ C ℓ_L^T, C being the charge conjugation matrix (C^-1γ_μ C = - γ^T_μ), g_ν is an adimensional symmetric 3×3 matrix in the flavour space andis the scale of Lepton Number Violation (LNV). The flavour symmetry arising from the kinetic terms in this case is _L× U(1)_L× U(1)_A^ewith_L=SU(3)_ℓ_L× SU(3)_e_R ,where U(1)_L is the Lepton number while U(1)_A^e is an axial rotation in ℓ_L and e_R, and the non-Abelian transformations of the leptons read [ ℓ_L∼ ( 3, 1)__L e_R∼ (1,3)__L; ℓ_L→_ℓ_Lℓ_L e_R→_e_Re_R . ] The part of the Lagrangian describing lepton masses and mixings, _L= -ℓ_L Y_e H e_R - Ø_W +,is not invariant under _L, but this can be cured by promoting Y_e and g_ν to be spurion fields, _e and _ν, transforming as [_e∼( 3,3)__L _ν∼ ( 6,1)__L; _e→_ℓ_L _e ^†_e_R _ν→^∗_ℓ_L _ν ^†_ℓ_L . ] Lepton masses and the PMNS matrix U arise once _e and _ν acquire a background value that can be chosen to be _e≡ Y_e=√(2)vM̂_ℓ , _ν≡ g_ν =2v^2U^∗M̂_ν U^† ,with M̂_ℓ,ν being the diagonal matrices of the charged lepton and active neutrino mass eigenvalues, M̂_ℓ≡(m_e, m_μ, m_τ) ,M̂_ν≡(m_ν_1, m_ν_2, m_ν_3) ,and U defined as the product of four matrices <cit.>,U=R_23(θ^ℓ_23)· R_13(θ^ℓ_13,δ^ℓ_CP)· R_12(θ^ℓ_12)·(1,e^iα_21/2,e^iα_31/2) ,with R_ij(θ^ℓ_ij) a generic rotation of the angle θ^ℓ_ij in the ij sector, with the addition of the Dirac CP phase δ^ℓ_CP in the reactor sector, and α_21,31 the Majorana phases<cit.>.As discussed for the quark case, Y_e and g_ν act as expanding parameters: operators with more insertions of these spurions describe processes that receive stronger suppressions. This perturbative treatment requires, however, that the largest entries in Y_e and g_ν are at most Ø(1). The charged lepton Yukawa satisfies to this condition as the largest entry is ∼ m_τ/v. 
The neutrino spurion g_ν is instead function of : requiring that |g_ν ij|<1 leads to an upper bound on the LNV scale, which depends on |(U^∗M̂_ν U^†)_ij| that is a function of the type of neutrino mass spectrum (NO or IO), of the value of the lightest neutrino mass and of the values of the Majorana and Dirac CP violation phases. The lowest upper bound is given approximately by: ≃v^22g_ν√()≲ 6× 10^14 . It will be useful for the phenomenological discussion in the next sections to remember that the spurion combination _ν^† _ν transforms as ( 8, 1)__L and to introduce the quantity Δ≡ g_ν^† g_ν=4Λ^2_Lv^4UM̂^2_ν U^† . §.§.§ Extended Field Content (EFC)Enlarging the SM spectrum by the addition of three RH neutrinos N_R leads to the so-called type I Seesaw context <cit.>, described by the following Lagrangian: _L–SS=-ℓ_L Y_e H e_R-ℓ_L Y_νH̃ N_R-12N^c_RY_N N_R+ ,where Y_e, Y_ν and Y_N are adimensional 3×3 matrices in the flavour space, whilestands for the scale of Lepton number violation, broken by two units by the last term on the right of this equation. Assuming a hierarchy betweenand v, ≫ v, it is then possible to easily block-diagonalise the full 6× 6 neutrino mass matrix, and obtain the induced masses for the light active neutrinos: in terms of the parameter g_ν appearing in the Weinberg operator in Eq. (<ref>), they are given by g^†_ν=Y_νY_N^-1Y_ν^T . The fermionic kinetic terms of the SM extended with 3 RH neutrinos manifest the following flavour symmetry: _L× U(1)_L× U(1)_A^e× U(1)_A^Nwith_L=SU(3)_ℓ_L× SU(3)_e_R× SU(3)_N_R ,under which leptons transform as [ ℓ_L∼ ( 3, 1, 1)__L e_R∼ (1,3, 1)__L N_R∼ (1, 1,3)__L;ℓ_L→_ℓ_Lℓ_Le_R→_e_Re_RN_R→_N_RN_R , ] and where U(1)_A^N is an axial transformation associated to N_R and SU(3)_N_R is a new rotation that mixes thethree RH neutrinos. The Lagrangian in Eq. (<ref>) breaks explicitly _L defined in Eq. (<ref>), but the invariance can be technically restored promoting Y_E, Y_ν and Y_N to be spurions fields, _E, _ν and _N, transforming as [ _e∼( 3,3, 1)__L _ν∼( 3, 1,3)__L _N ∼ (1, 1, 6)__L; _e→_ℓ_L _e ^†_e_R _ν→_ℓ_L _ν ^†_N_R _N→^∗_N_R _N ^†_N_R . ] Lepton masses and mixing are then described when these spurion fields acquire the following background values: _e≡ Y_e=√(2)vM̂_ℓ ,_ν_N^-1_ν^T≡ Y_ν Y_N^-1Y_ν^T=2v^2UM̂_ν U^T .Differently from the quark sector and the MFC lepton case, it is not possible to identify a unique choice for _ν and _N, as only the specific combination in Eq. (<ref>) can be associated to the neutrino mass eigenvalues and the PMNS matrix entries. This is a relevant aspect as it nullifies the MLFV flavour protection. Indeed, the basic building blocks for several processes, such as radiative leptonic decays or leptonic conversions, are fermionic bilinears of the type ℓ_L^iΓℓ_L^j, ℓ_L^iΓℓ_L^c j, ℓ_L^iΓ e_R^j and e_R^iΓ e_R^j, with Γ standing for combination of Dirac γ matrices and/or Pauli σ matrices. In the unbroken phase, these terms are invariant under the flavour symmetry contracting the flavour indices with combinations of the spurions transforming as ( 8, 1, 1)__L, ( 6, 1, 1)__L, ( 3,3, 1)__L, and (1,8, 1)__L, among others. These spurion combinations are distinct from the combination of _ν and _N that appears in Eq. 
(<ref>): a few examples are( 8, 1, 1)__L _ν_ν^† ,_e _e^† ,_ν_N^†_N _ν^†,(_ν_ν^†)^2,…( 6, 1, 1)__L _ν_N^†_ν^T,_ν_N^†_N_N^†_ν^T,_ν_N^†_ν^T _ν^∗_ν^T ,…( 3,3, 1)__L _e,_ν_ν^†_e,_e_e^†_e,_ν_N^†_N _ν^†_e,…(1,8, 1)__L _e^†_e,_e^†_ν_ν^†_e,_e^†_ν_N^†_N_ν^†_e,… In consequence, one concludes that it is not possible to express any flavour changing process involving leptons in terms of lepton masses and mixings, losing in this way the predictive power of MLFV.This problem can be solved, and predictivity can be recovered, if all the information of neutrino masses and mixing would be encoded into only one spurion background among Y_ν and Y_N, being the other proportional to the identity matrix. Technically, this corresponds to break _L following two natural criteria.I):_L→ SU(3)_ℓ_L× SU(3)_e_R× SO(3)_N_R× CP <cit.>.Under the assumption that the three RH neutrinos are degenerate in mass, i.e. , SO(3)_N_R is broken down to SO(3)_N_R and the transformation _N_R in Eq. (<ref>) is then an orthogonal matrix. The additional assumption of no CP violation in the lepton sectoris meant to force Y_e and Y_ν to be real[Strictly speaking, the condition of CP conservation in the leptonic sector forces the Dirac CP phase to be equal to δ^ℓ_CP={0, π} and the Majorana CP phases to be α_21,31={0, π, 2π}. However, Y_ν is real only if α_21,31={0, 2π}, and therefore α_21,31=π needs to be disregarded in order to guarantee predictivity. The CP conservation condition assumed in this context is then stronger than the strict definition.]. With this simplifications, all flavour changing effects involving leptons can be written in terms of Y_ν Y_ν^T and Y_e, as can be easily deduced from Eq. (<ref>). In this case, Eq. (<ref>) simplifies toY_ν Y_ν^T=2v^2UM̂_ν U^T≡Δ ,eventually redefiningby reabsorbing the norm of Y_N, and therefore any flavour changing process can be described in terms of lepton masses and mixings. The last equivalence in the previous equation is a definition that will be useful in the phenomenological analysis.As for the MFC case, requiring that the spurions respect the perturbativity regime leads to an upper bound on the LNV scale: ≃v^22Y_ν Y_ν^T√()≲ 6× 10^14 ,numerically the same as the one in Eq. (<ref>). II):_L→ SU(3)_ℓ_L+N_R× SU(3)_e_R <cit.>.Assuming that the three RH neutrinos transform as a triplet under the same symmetry group of the lepton doublets,ℓ_L, N_R∼ ( 3, 1)__L e_R∼ (1,3)__L ,then the Schur's Lemma guarantees that _ν transforms as a singlet of the symmetry group and then Y_ν is a unitary matrix <cit.>, which can always be rotated to the identity matrix by a suitable unitary transformation acting only on the RH neutrinos. The only sensible quantities in this context are _e and _N, which now transform as_e∼( 3,3)__L_N ∼ ( 6, 1)__L .The background value of _N would eventually encode the norm of Y_ν, in order to consistently take Y_ν=. In this basis, neutrino masses and the lepton mixing are encoded uniquely into Y_N,Y_N=v^22U^∗M̂_ν^-1U^† .Moreover, all the spurion combinations in Eq. (<ref>) can be written only in terms of Y_e and Y_Nand therefore any flavour changing process can be predicted in terms of lepton masses and mixing. It will be useful in the phenomenological analysis that follows to introduce the quantity Δ≡ Y_N^† Y_N =v^44μ^2_LUM̂_ν^-2U^† . Contrary to what occurs in the MFC and the EFCI cases, the perturbativity condition on Y_N allows to extract a lower bound on the LNV scale: ≃v^22Y_N^-1√()≳ 6× 10^14 . 
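The scale of about 6x10^14 GeV appearing in the three perturbativity conditions above follows from the same simple estimate, Lambda ~ v^2/(2 m_nu) with m_nu ~ sqrt(Delta m^2_atm); a minimal numerical check, using a representative Delta m^2_atm of 2.5x10^-3 eV^2 since the fitted value is quoted only later:

v = 246e9                      # eV
dm2_atm = 2.5e-3               # eV^2, representative atmospheric mass-squared difference
m_nu = dm2_atm**0.5            # ~0.05 eV: heaviest light neutrino for a vanishing lightest mass
Lambda_L = v**2 / (2 * m_nu)   # LNV scale at which the largest neutrino-spurion entries become O(1)
print(f"Lambda ~ {Lambda_L/1e9:.1e} GeV")   # ~ 6e14 GeV, as quoted above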
Similarly to what discussed for the quark sector, none of the two versions of the MLFV provide an explanation for the origin of lepton masses and mixing, and therefore cannot be considered complete models. In Refs. <cit.> attempts have been presented to provide a dynamical explanation for the flavour puzzle in the lepton sector: as for the quark sector, the results are not conclusive, but highlighted interesting features. Indeed, for the MLFV version with an SO(3)_N_R symmetry factor associated to the RH neutrinos, the minima of the scalar potential, constructed by promoting _e and _ν to be dynamical fields, allow a maximal mixing and a relative maximal Majorana CP phase between two almost degenerate neutrino mass eigenvalues. This seems to suggest that the large angles in the lepton sector could be due to the Majorana nature of neutrinos, in contrast with the quark sector where this does not occur.No dedicated analysis of the scalar potential arising in the second version of MLFV has appeared in the literature, although the results are not expected to be much different from the ones in the quark sector. However, as a conclusive mechanism to explain lepton masses and mixing is still lacking, both the versions of MLFV remain valid possibilities. As anticipated in Sect. <ref>, the recent indication for a relatively large leptonic CP violation, if confirmed, would disfavour EFCI, due to the required reality of Y_ν. However, in the present discussion and in the analysis that follows, EFCI will not be discarded yet, as the assumption of CP conservation is a distinctive feature of this low-energy description of the lepton sector, but could be avoided in more fundamental ones. Indeed, a model constructed upon thegauged lepton flavour symmetry SU(3)_ℓ_L× SU(3)_e_R× SO(3)_N_R, without any further hypothesis on CP in the lepton sector, is shown in Ref. <cit.> to be as predictive as EFCI: indeed, with the Dirac CP phase taken at its best fit value, this gauged flavour model presents several phenomenological results similar to the onesof EFCI discussed in Ref. <cit.>. This motivates to consider EFCI as a valid context to describe lepton flavour observables, even if results which show a strong dependence on the value of the Dirac CP phase should be taken with a grain of salt. § PHENOMENOLOGY IN THE LEPTON SECTORIn this section, the phenomenology associated to the MFC, EFCI and EFCII cases will be discussed considering specifically leptonic radiative rare decays and μ→ e conversion in nuclei. While these analyses have already been presented in the original MLFV papers <cit.>, in the review part of the present paper the latest discovered value of the reactor angle and the recent indication of non-vanishing CP phase in the leptonic sector will be considered.The input data that will be used in what follows are the PDG values for the charged lepton masses <cit.> m_e=0.51 MeV , m_μ=105.66 MeV , m_τ=1776.86±0.12 MeV ,where the electron and muon masses are taken without errors as the sensitivities are negligible, and the results of the neutrino oscillation fit from Ref. <cit.> reported in Table <ref>.The value of the lightest neutrino mass and the neutrino mass ordering are still unknown. For this reason, the results of this section will be discussed in terms of the values of the lightest neutrino mass and for both the Normal Ordering (NO) and the Inverted Ordering (IO). 
The measured parameters are taken considering their 2σ error bands [EW running effects<cit.> are negligible in the analysis presented here.]: this is to underly the impact of the raising indication for a leptonic CP violation. §.§ The LFV Effective Lagrangian The rates of charged LFV processes, i.e. μ→ e+γ, μ→ 3e,and μ→ e conversion in nuclei among others, are predicted to be unobservably small in the minimal extension of the SM with light massive Dirac neutrinos, in which the total lepton charge is conserved <cit.>. As a consequence, the rates of such processes have a remarkable sensitivity to NP contributions.The main observables that will be discussed here are lepton radiative rare decays and μ→ e conversion in nuclei. Other leptonic observables which are typically very sensible to NP are ℓ→ℓ'ℓ'ℓ” decays, and especially the μ→3 e decay, given the significant increase of the sensitivity of the planned experiments. However, these processes do not provide additional information for the results that will be obtained in the following, and therefore they will not be further considered.Assuming the presence of new physics at the scaleresponsible for these observables characterised by a much lower typical energy, one can adopt the description in terms of an effective Lagrangian[The effective Lagrangian reported here corresponds to the linearly realised EWSB. An alternative would be to considered a non-linear realisation and the corresponding effective Lagrangian dubbed HEFT <cit.>. In this context, however, a much larger number of operators should be taken into consideration and a slightly different phenomenology is expected <cit.>. The focus in this paper is on the linear EWSB realisation and therefore the HEFT Lagrangian will not be considered in what follows.]: the relevant terms are then given by[A few other operators are usually considered in the effective Lagrangian associated to these LFV observables, but the corresponding effects are negligible. See Ref. <cit.> for further details.]^eff_LFV=1^2∑_i=1^5c^(i)_LLØ_LL^(i)+1^2(∑_j=1^2 c_RL^(j)Ø_RL^(j)+) ,where the Lagrangian parameters are real coefficients[The reality of the Lagrangian parameters guarantees that no sources of CP violation are introduced beyond the SM. A justification of this approach can be found in Ref. <cit.>.] of order 1 and the operators have the form[The notation chosen for the effective operators matches the one of the original MLFV paper <cit.>. It is nowadays common to adopt an other operator basis introduced in Ref. <cit.>. The link between the two bases is given by: Ø_LL^(1) → Q_φℓ^(1) ,Ø_LL^(2) → Q_φℓ^(3) ,Ø_LL^(3) → Q_ℓ q^(1) ,Ø_LL^(4d) → Q_ℓ d ,Ø_LL^(4u) → Q_ℓ d ,Ø_LL^(5) → Q_ℓ q^(3) ,Ø_RL^(1) → Q_eB ,Ø_RL^(2) → Q_eW . ]: Ø_LL^(1) =iℓγ^μℓ_L H^† D_μ H , Ø_LL^(2) =iℓγ^μσ^aℓ_L H^†σ^a D_μ H ,Ø_LL^(3) =ℓγ^μℓ_Lqγ_μ q_L , Ø_LL^(4d) =ℓγ^μℓ_Ldγ_μ d_R ,Ø_LL^(4u) =ℓγ^μℓ_Luγ_μ u_R , Ø_LL^(5) =ℓγ^μσ^aℓ_Lqγ_μσ^a q_L ,Ø_RL^(1) =g' ℓ H σ^μν e_R B_μν , Ø_RL^(2) =g ℓ H σ^μνσ^a e_R W^a_μν . The Ø_LL^(i) structures are invariant under the flavour symmetries without the necessity of introducing any spurion field, but they can only contribute to flavour conserving observables. The LFV processes aforementioned can only be described by the insertion of specific spurion combinations transforming as 8 under SU(3)_ℓ_L, whose flavour indices are contracted with those of the lepton bilinear ℓ_L^iΓℓ_L^j in Ø_LL^(i), Γ being a suitable combination of Dirac and/or Pauli matrices. 
The specific spurion combinations depend on the considered model: some examples are _ν^† _ν in MFC, _ν_ν^† in EFCI and _ν_N^†_N _ν^† in EFCII. Interestingly, once the spurions acquire their background values, these combinations reduce to the expressions for Δ in Eqs. (<ref>), (<ref>) and (<ref>), respectively.The Ø_RL^(i) operators, instead, are not invariant under the flavour symmetry _L and require the insertion of spurion combinations transforming as ( 3,3) under SU(3)_ℓ_L× SU(3)_e_R. The simplest combination of this kind is the charged lepton Yukawa spurion _e, whose background value, however, is diagonal. Requiring as well that these structures describe LFV processes, it is necessary to insert more elaborated combinations: some examples are _ν^† _ν_e in MFC, _ν_ν^†_e in EFCI and _N^†_N _e in EFCII. Once the spurions acquire background values, these combinations reduce to Δ Y_e, with the specific expression for Δ depending on the case considered.From the previous discussion one can deduce that the relevant quantity that allows to describe LFV processes in terms of lepton masses and mixings is Δ, beside the diagonal matrix Y_e. It is then instructive to explicitly write the expression for Δ in the three cases under consideration and distinguishing between the NO and the IO for the neutrino mass spectrum[The expression for Δ in the IO case may differ from what reported in Ref. <cit.>, due to a different definition taken for the atmospheric mass squared difference.].1. Minimal Field Content _L=SU(3)_ℓ_L× SU(3)_e_R. Expliciting Eq. (<ref>), the off-diagonal entries of Δ can be written as Δ_μ e= 4^2v^4[s_12c_12c_23c_13(m_ν_B-m_ν_A)+s_23s_13c_13e^iδ(m_ν_C-s_12^2m_ν_B-c_12^2 m_ν_A)] ,Δ_τ e= 4^2v^4[-s_12c_12s_23c_13(m_ν_B-m_ν_A)+c_23s_13c_13e^iδ(m_ν_C-s_12^2m_ν_B-c_12^2m_ν_A)] ,Δ_τμ= 4^2v^4{s_23c_23[c_13^2m_ν_C+(s_12^2s_13^2-c_12^2)m_ν_B+(c_12^2s_13^2-s_12^2)m_ν_A]+. . +s_12c_12s_13(s_23^2e^-iδ-c_23^2e^iδ)(m_ν_B-m_ν_A)} ,where, for brevity of notation, s_ij and c_ij stand for the sine and cosine of the leptonic mixing angles θ^ℓ_ij, δ stands for the Dirac CP phase δ^ℓ_CP, and a generic notation for M̂_ν has been adopted in the definition of Δ: M̂^2_ν≡(m_ν_A, m_ν_B, m_ν_C) .The three parameters m_ν_A,B,C depend on the neutrino mass ordering: for the NO casem_ν_A=0 , m_ν_B= , m_ν_C= ,and for the IO casem_ν_A=- , m_ν_B= , m_ν_C=0 .Notice that there is no dependence on the lightest neutrino mass in these expressions. This has an interesting consequence because Δ_i≠ j are completely fixed, apart for the common scale .2. Extended Field Content I) _L=SU(3)_ℓ_L× SU(3)_e_R× SO(3)_N_R× CP. From Eqs. (<ref>), one gets the following explicit expressions for the off-diagonal entries of Δ: Δ_μ e= 2/v^2[s_12c_12c_23c_13(m_ν_B-m_ν_A)+s_23s_13c_13e^iδ(e^-2iδm_ν_C-s_12^2m_ν_B-c_12^2 m_ν_A)] ,Δ_τ e= 2/v^2[-s_12c_12s_23c_13(m_ν_B-m_ν_A)+c_23s_13c_13e^iδ(e^-2iδm_ν_C-s_12^2m_ν_B-c_12^2m_ν_A)] ,Δ_τμ= 2/v^2{s_23c_23(c_13^2m_ν_C-c_12^2m_ν_B-s_12^2m_ν_A)+. . 
+s_12c_12s_13e^iδ(s_23^2-c_23^2)(m_ν_B-m_ν_A)+s_23c_23s^2_13e^2iδ(s_12^2m_ν_B+c_12^2 m_ν_A)} ,where a generic notation -different from the one in the MFC case- for M̂_ν has been adopted: M̂_ν≡(m_ν_A, m_ν_B, m_ν_C) .The three parameters m_ν_A,B,C are now defined bym_ν_A=m_ν_1 , m_ν_B=e^iα_21√(+m_ν_1^2) , m_ν_C=e^iα_31√(+m_ν_1^2) ,for the NO case, m_ν_1<m_ν_2<m_ν_3, and bym_ν_A=√(-+m_ν_3^2) , m_ν_B=e^iα_21√(+m_ν_3^2) , m_ν_C=e^iα_31m_ν_3 ,for the IO case, m_ν_3<m_ν_1<m_ν_2.The hypothesis of CP conservations fixes the Dirac and Majorana CP phases to be δ={0,π} and α_21,31=0 in these expressions. Indeed, while Δ_ij would be real even for α_21,31=π and therefore no CPV process would be described with Δ insertions, Y_ν would be complex and then it would not be possible to express the spurions insertions in Eq. (<ref>) in terms of low-energy parameters, losing the predictivity power of MLFV.In the strong hierarchical limit, m_ν_1≪ m_ν_2<m_ν_3 in the NO case and m_ν_3≪ m_ν_1<m_ν_2 in the IO one, and setting the lightest neutrino mass to zero, the expressions for m_ν_A,B,C reduce to the square root of those for the MFC case, as can be deduced comparing Eqs. (<ref>) and (<ref>), and the results for Δ_i≠ j get simplified. Also in this case, only one parameter remains free, that is the LNV scale .When the neutrino mass hierarchy is milder or the eigenvalues are almost degenerate, the lightest neutrino mass cannot be neglected and represents a second free parameters of Δ_i≠ j, besides .3. Extended Field Content II) _L=SU(3)_ℓ_L+N_R× SU(3)_e_R. The expressions for the off-diagonal entries of Δ that follow from Eqs. (<ref>) can be obtained from the expressions in Eq. (<ref>), by substituting 4^2v^4→v^44^2 and taking the following notation for M̂_ν: M̂^-2_ν≡(m_ν_A, m_ν_B, m_ν_C) ,with m_ν_A,B,C given bym_ν_A=1/m^2_ν_1 , m_ν_B=1/+m_ν_1^2 , m_ν_C=1/+m_ν_1^2 ,for the NO case, andm_ν_A=1/-+m_ν_3^2 , m_ν_B=1/+m_ν_3^2 , m_ν_C=1/m_ν_3 ,for the IO case.The limits for the lightest neutrino mass being zero are not well defined for this case, as it would lead to an infinity in the expressions for Δ_i≠ j. Differently from the other two cases, only a moderate neutrino mass hierarchy is then allowed. Finally, these expressions depend on two free parameters, the lightest neutrino mass and the LNV scale .§.§ Rare Radiative Leptonic Decays and Conversion in Nuclei In the formalism of the effective Lagrangian reported in the Eq. (<ref>), the Beyond SM (BSM) contributions to the branching ratio of leptonicradiative rare decays are given byB_ℓ_i→ℓ_jγ≡Γ(ℓ_i→ℓ_jγ)Γ(ℓ_i→ℓ_jν_iν_j)= 384π^2e^2v^44^4|Δ_ij|^2|c_RL^(2)-c_RL^(1)|^2 ,being e the electric charge, and where the corrections of the Wilson coefficient due to the electroweak renormalisation from the scale of NP down to the mass scale of the interested lepton<cit.> have been neglected, and the limit m_ℓ_j≪ m_ℓ_i has been taken.The same contributions to the branching ratio for μ→ e conversion in a generic nucleus of mass number A readB_μ→ e^A= 32 G_F^2m_μ^5Γ_capt^Av^44^4|Δ_μ e|^2|((14-s_w^2)V^(p)-14V^(n))(c_LL^(1)+c_LL^(2))+.+32(V^(p)+V^(n))c_LL^(3)+(V^(p)+12V^(n))c_LL^(4u)+(12V^(p)+V^(n))c_LL^(4d)+.+12(-V^(p)+V^(n))c_LL^(5)-eD_A4(c_RL^(2)-c_RL^(1))^*|^2 ,where s_W≡sinθ_W=0.23, V^(p), V^(n) and D are dimensionless nucleus-dependent overlap integrals that can be found in Tab. 
<ref> for Aluminium and Gold, that also contains the numerical values for decay rate of the muon capture, which has been used to normalise the decay rate for the μ→ e conversion.The experimental bounds on these processes that will be considered in the numerical analysis are the following:B_μ→ eγ<5.7× 10^-13 <cit.> (6× 10^-14 <cit.>) ,B_τ→ eγ<5.2× 10^-8 <cit.>(10^-9÷ 10^-10 <cit.>) ,B_τ→μγ<2.5× 10^-7 <cit.>(10^-8÷ 10^-9 <cit.>) ,B^Au_μ→ e<7× 10^-13 <cit.> ,B^Al_μ→ e<6× 10^-17 <cit.> ,where the values in the brackets and the bound on B^Al_μ→ e refer to future expected sensitivities.§.§.§ Bounds on the LFV Scale The bounds on the LNV scales, determined in Eqs. (<ref>), (<ref>) and (<ref>), can be translated into bounds on the LFV scale when considering the experimental limits in the rare processes introduced above. Indeed, after substituting the expressions for Δ, defined in Eqs. (<ref>), (<ref>) and (<ref>), into the Eqs. (<ref>) and (<ref>), one can rewrite these expressions extracting the dependence on the NP scales:B_ℓ_i→ℓ_j(γ)≡()^4 B_ℓ_i→ℓ_j(γ)[c_i] , for the MFC caseB_ℓ_i→ℓ_j(γ)≡(v^2)^2 B_ℓ_i→ℓ_j(γ)[m_ν^lightest,c_i] , for the EFCI caseB_ℓ_i→ℓ_j(γ)≡(v^2)^4 B_ℓ_i→ℓ_j(γ)[m_ν^lightest,c_i] , for the EFCII case where the square brackets list the free parameters, that is the lightest neutrino mass (only for the EFCI and EFCII cases) and the effective Lagrangian parameters c_i.The numerical analysis reveals that the strongest bounds on thecomes from the data on μ→ e conversion in gold, although similar results are provided by the data on leptonic radiative rare decays. The corresponding parameter space is shown in Fig. <ref>, obtained taking the best fit values for the quantities in Tab. <ref> (for the EFCI case, the Dirac CP phase can only acquire two values, 0 and π) and the data from Tab. <ref>. Although these plots have been generated for the NO neutrino spectrum, they hold for the IO case as well, as no difference is appreciable. On the other hand, a dependence on the strength of the splitting between neutrino masses can be found for the EFC scenarios: the plots reported here illustrate the almost degenerate case, where the lightest neutrino mass is taken to be Ø(0.1); stronger hierarchies result in a more constrained parameter space. Finally, the plot for EFCI refers to δ^ℓ_CP=π, but the other case with δ^ℓ_CP=0 is almost indistinguishable.The upper bound onfor the MFC case reduce the parameter space, although it cannot be translated into upper bounds on : largersimply further suppresses the expected values for the branching ratios of the observables considered.Moreover, no lower bound can be drown: requiring to close the experimental bound for the μ→ e conversion, smallrequires small , leading at the same time to tune g_ν to small values, in order to reproduce the correct masses for the light active neutrinos, see Eq. (<ref>). The same occurs for EFCI, forand Y_ν, although, in this case, this can be well justified considering the additional Abelian symmetries appearing in Eq. (<ref>), as discussed in Ref. <cit.>. When considering the EFCII case, the lower bound onremoves a large part of the parameter space, but does not translate into a lower bound on : for example, forat its lower bound in Eq. 
(<ref>),must be larger than 10^5 in order to satisfy to the present bounds on B_μ→ e^Au; however, for larger values of ,can be smaller, down to the TeV scale for ∼ 10^17, although in this case a tuning on |Y_N| is necessary in order to reproduce correctly the lightness of the active neutrino masses. The absence of evidence of NP in direct and indirect searches at colliders and low-energy experiments suggests that NP leading to LFV should be heavier than a few TeV. In the optimistic scenario that NP is just behind the corner and waiting to be discovered in the near future, an indication of the LNV scale could be extracted from the plots in Fig. <ref>. Indeed, if μ→ e conversion in nuclei is observed, ∼ 10^3÷10^4 will lead to ∼10^12÷ 10^13 for MFC, ∼10^9÷ 10^10 for EFCI, and ∼10^16÷10^17 for EFCII. In the EFC scenarios, the LNV scale is associated to the masses of the RH neutrinos, that therefore turn out to be much heavier than the energies reachable at present and future colliders. An exception is the case where additional Abelian factors are considered in the flavour symmetry that allows to separate the LNV scale and the RH neutrino masses <cit.>: this opens the possibility of producing sterile neutrinos at colliders and then of studying their interactions in direct searches. §.§.§ Ratios of Branching Ratios The information encoded in Eq. (<ref>) are not limited to the scales of LFV and LNV. Studying the ratios of branching ratios between the different processes reveals characteristic features that may help to disentangle the different versions of MLFV. To shorten the notation,R^t→ sγ_i→ jγ≡B_ℓ_t→ℓ_sγB_ℓ_i→ℓ_jγ ,will be adopted in the analysis that follows. These observables do not depend on the LFV and LNV scales, nor on the Lagrangian coefficients. They are sensible to the neutrino oscillation parameters and, for the EFC cases, to the mass of the lightest active neutrino. For MFC, they do not even depend on m_ν^lightest: although the corresponding plots only contain points along an horizontal line, they will be reported in the next subsections in order to facilitate the comparison with the other cases.The two branching ratios with the best present sensitivities, the one for μ→ e conversion in nuclei and the one for μ→ eγ, have the same dependence on Δ_μ e and therefore their ratio is not sensitive to the charged lepton and neutrino masses and to the neutrino mixing. Instead, as pointed out in Ref. <cit.>, this ratio may be sensitive to the chirality of the effective operators contributing to these observables. The comparison between Eqs. (<ref>) and (<ref>) shows that only B^A_μ→ e is sensitive to Ø^(i)_LL, and thus any deviation from B_μ→ e^AB_μ→ eγ=π D_A^2would be a signal of this set of operators.In the scatter plots that follow, neutrino oscillation parameters are taken from Tab. <ref> as random values inside their 2σ error bands. The lightest neutrino mass is taken in the range m_ν^lightest⊂[0.001, 0.1] and the results for the NO and IO spectra are shown with different colours. In these figures, the density of the points should not be interpreted as related to the likelihood of differently populated regions of the parameter space.§.§.§ R^μ→ eγ_τ→μγ In the upper left, upper right and lower left panes in Fig. <ref>, the results are reported for the ratio of the branching ratios of the μ→ eγ and τ→μγ decays for the MFC, EFCI and EFCII cases, respectively. 
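The scatter points described above can be generated along the following lines. The sketch below builds on the inputs and the pmns helper introduced earlier, restricts itself to the MFC case with NO spectrum, and uses the fact that the overall prefactor and the Wilson coefficients are common to all ℓ_i→ℓ_jγ modes and cancel in the ratios, which therefore reduce to ratios of |Δ_ij|^2; the scan ranges are illustrative 2σ-like intervals rather than the exact ones of Table <ref>:

# MFC, NO spectrum: Delta is proportional to U diag(0, dm2_21, dm2_31) U^dagger and the
# ratio R(mu->e gamma / tau->mu gamma) reduces to |Delta_mue|^2 / |Delta_taumu|^2.
rng = np.random.default_rng(1)

def delta_mfc_no(th12, th13, th23, delta):
    U = pmns(th12, th13, th23, delta)
    return U @ np.diag([0.0, dm2_21, dm2_31]) @ U.conj().T

ratios = []
for _ in range(5000):
    th12 = rng.uniform(np.radians(31.5), np.radians(35.5))   # illustrative 2-sigma-like ranges
    th13 = rng.uniform(np.radians(8.0), np.radians(9.0))
    th23 = rng.uniform(np.radians(41.0), np.radians(51.0))
    dcp = rng.uniform(0.0, 2.0 * np.pi)
    D = delta_mfc_no(th12, th13, th23, dcp)                  # flavour order (e, mu, tau)
    ratios.append(abs(D[1, 0]) ** 2 / abs(D[2, 1]) ** 2)

print(f"R(mu->e gamma / tau->mu gamma), MFC, NO: {min(ratios):.3f} -- {max(ratios):.3f}")

The IO assignments of the masses and the EFCI case can be obtained by replacing the diagonal entries with the corresponding expressions given above.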
Figure <ref> is a summarising figure where all the three plots are shown together to facilitate the comparison and to make clearer the non-overlapping areas.As Fig. <ref> shows, R^μ→ eγ_τ→μγ is independent of the lightest neutrino mass. The two sets of points corresponding to NO and IO spectra almost overlap, making it very hard to distinguish between the two neutrino mass orderings.In Fig. <ref>, the dependence on m_ν^lightest can be slightly appreciated and the predictions for two mass orderings do not overlap when the spectrum is hierarchical. In the NO case there are two branches associated with the two values of δ^ℓ_CP: the values associated with the δ^ℓ_CP=0-branch are very close to those for the IO spectrum and correspond to the positive sum of the two terms on the right-hand side of Eq. (<ref>);the values associated with the δ^ℓ_CP=π-branch are smaller by about one order of magnitude, which reflects a partial cancellation between the two terms in the right-hand side of Eq. (<ref>). In the IO case there is only one branchbecause the first term on the right-hand side of Eq. (<ref>) is dominant.AsFig. <ref> shows, the points for the two mass orderings overlap in the quasi-degenerate limit down to masses of about 0.05. However,they show different profiles in the hierarchical limit. In the IO case the ratio of branching ratios under discussion is almost constant with m_ν^lightest. In the NO case the ratio R^μ→ eγ_τ→μγ can be as small as few × 10^-4 at ∼ 0.012, while for m_ν 1 < 0.01 the ratio is R^μ→ eγ_τ→μγ > 1. As discussed in Ref. <cit.>, this can be understood from Eqs. (<ref>) and (<ref>): in the NO case and strong mass hierarchy, the dominant contribution is proportional to 1/m_ν_1 and therefore R^μ→ eγ_τ→μγ gets enhanced; while when the spectrum is almost degenerate and in the IO case, the dominant contribution is suppressed by the sine of the reactor angle and the dependence on the lightest neutrino mass is negligible. In Fig. <ref>, where the three cases are shown altogether, it can be seen that all the cases overlap for the IO spectrum and in the quasi-degenerate limit forthe NO spectrum, predictingR^μ→ eγ_τ→μγ≅ 0.02÷0.07. When the mass spectrum is of NO type and hierarchical, the ratio spans values from 0.004 to 10. Interestingly, if this ratio is observed to be larger than 0.1, or smaller than 0.004, then only the EFCII with NO spectrum can explain it. Notice that, given the current limits on B_μ→ e γ, values smaller than ∼ 6× 10^-4 would be testable in the future planned experiments searching for τ→μγ. §.§.§ R^μ→ eγ_τ→ eγ The ratio R^μ→ eγ_τ→ eγ exhibits features which are very similar to those of the ratio R^μ→ eγ_τ→μγ. Figs. <ref> and <ref> are very similar to Figs. <ref> and <ref>: the profiles of the points are the same, only the area spanned is different, as indeed R^μ→ eγ_τ→ eγ is predicted to be by almost one order of magnitude larger than R^μ→ eγ_τ→μγ. Similar conclusions, however, apply. Fig. <ref>, instead, shows an interesting difference with respect to its sibling Fig. <ref>: the IO and the NO points cover almost the same nearly horizontal area both for quasi-degenerate masses and for a hierarchical mass spectrum, the NO region being slightly wider. Only for values of the lightest neutrino mass between 0.01 and 0.02, there could be an enhancement or a suppression of R^μ→ eγ_τ→ eγ in the EFCII case. 
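The dependence on the lightest neutrino mass just described can be explored explicitly with the same ingredients. The sketch below, again relying on the inputs and the pmns helper defined earlier, evaluates the EFCII expressions for the NO spectrum, in which M̂_ν enters Δ through its inverse square; the chosen values of m_ν^lightest are purely illustrative:

# EFCII, NO spectrum: Delta is proportional to U diag(1/m1^2, 1/m2^2, 1/m3^2) U^dagger,
# so the ratios acquire a dependence on the lightest neutrino mass m1 (in eV below).
def delta_efc2_no(m1, th12, th13, th23, delta):
    m2 = (dm2_21 + m1 ** 2) ** 0.5
    m3 = (dm2_31 + m1 ** 2) ** 0.5
    U = pmns(th12, th13, th23, delta)
    return U @ np.diag([1.0 / m1 ** 2, 1.0 / m2 ** 2, 1.0 / m3 ** 2]) @ U.conj().T

for m1 in (0.001, 0.005, 0.012, 0.02, 0.05, 0.1):
    D = delta_efc2_no(m1, theta12, theta13, theta23, delta_cp)
    r_tau_e = abs(D[1, 0]) ** 2 / abs(D[2, 0]) ** 2    # R(mu->e gamma / tau->e gamma)
    r_tau_mu = abs(D[1, 0]) ** 2 / abs(D[2, 1]) ** 2   # R(mu->e gamma / tau->mu gamma)
    print(f"m_lightest = {m1:5.3f} eV:  R_tau_e = {r_tau_e:9.3g}   R_tau_mu = {r_tau_mu:9.3g}")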
Such an enhancement or suppression is a distinctive feature that could allow one to disentangle EFCII from the other cases: values of R^μ→ eγ_τ→ eγ larger than 10 or smaller than 0.04 can only be explained by a NO neutrino spectrum in the case of EFCII. Notice that, given the current limits on B_μ→ e γ, values smaller than 0.006 would be testable in the future planned experiments searching for τ→ eγ. §.§.§ R^τ→ eγ_τ→μγ The ratio R^τ→ eγ_τ→μγ is almost indistinguishable from the ratio R^μ→ eγ_τ→μγ except for the EFCII case with NO neutrino mass spectrum. For the other cases the conclusions for R^τ→ eγ_τ→μγ are almost the same as those reached for R^μ→ eγ_τ→μγ. One can see that values of R^τ→ eγ_τ→μγ smaller than 0.01 or larger than 0.1 can only be explained by EFCII with NO neutrino spectrum. Summarising, the study of these three ratios can provide relevant information if their values are found to be larger than 0.1 (10) for R^μ→ eγ_τ→μγ and R^τ→ eγ_τ→μγ (for R^μ→ eγ_τ→ eγ), or smaller than 0.004 for R^μ→ eγ_τ→μγ, 0.01 for R^τ→ eγ_τ→μγ, and 0.04 for R^μ→ eγ_τ→ eγ: such values can be explained only in the case of EFCII with NO spectrum. If large values for R^μ→ eγ_τ→μγ and R^τ→ eγ_τ→μγ are found, this would point to a relatively small value of the lightest neutrino mass, smaller than 0.008; this should occur consistently with a value of R^μ→ eγ_τ→ eγ between 0.1 and 10. If, instead, R^μ→ eγ_τ→ eγ is found to be much larger than 10, this would imply a lightest neutrino mass between 0.008 and 0.04; consistently, R^μ→ eγ_τ→μγ and R^τ→ eγ_τ→μγ should remain smaller than 1. Finally, if no signal is seen in any of the three ratios and bounds of 0.004 (0.01) [0.04] or smaller can be obtained for R^μ→ eγ_τ→μγ (R^τ→ eγ_τ→μγ) [R^μ→ eγ_τ→ eγ], this would be consistent with a lightest neutrino mass between 0.01 and 0.02; otherwise MLFV cannot explain this feature. On the other hand, all three MLFV versions, for both mass orderings, can explain values of these ratios inside the aforementioned regions, generally between 0.01 and 0.1: this case would be the least favourable for distinguishing the different setups. These results are generically in agreement with previous analyses performed in Refs. <cit.>, and the differences are due to the updated input data used here. §.§.§ B^A_μ→ e As shown in Eq. (<ref>), the ratio of the two branching ratios with the best present sensitivities is independent of Δ and can be used to obtain information about the chirality of the operators contributing to the μ→ e conversion process. On the other hand, if the observation (or non-observation) of the leptonic radiative rare decays allows one to identify the MLFV realisation from Figs. <ref>, <ref> and <ref>, the branching ratio of μ→ e conversion in nuclei could provide the missing information necessary to fix the LFV scale. As an example, one can assume that an upper bound on R^μ→ eγ_τ→μγ of about 0.004 has been set, which could be explained by EFCII with a NO neutrino spectrum and a lightest neutrino mass of about 0.014. The upper bound on B^Au_μ→ e then implies an upper bound of 5.7× 10^-17 on the corresponding combination of the LNV and LFV scales. By fixing the LNV scale to its lower bound, one finds that these observables constrain the LFV scale to be larger than about 2× 10^6. The future expected sensitivity on B^Al_μ→ e is better than the presently achieved one by four orders of magnitude.
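The corresponding gain in reach follows from a simple scaling argument: at fixed LNV scale the branching ratios scale as the inverse fourth power of the LFV scale, so an improvement of the sensitivity by four orders of magnitude strengthens the bound on the scale by one order of magnitude. A rough numerical illustration, neglecting the order-one differences between the gold and aluminium overlap integrals and capture rates and expressing the scales in GeV, is:

# B(mu->e, nucleus) scales as Lambda_LFV^{-4} at fixed LNV scale, so a 10^4 better
# sensitivity tightens the bound on Lambda_LFV by a factor 10^{4/4} = 10.
current_bound = 2e6        # illustrative current bound from B(mu->e, Au), in GeV
gain = 1e4                 # expected sensitivity improvement with aluminium

print(f"future LFV-scale reach ~ {current_bound * gain ** 0.25:.1e} GeV")   # ~ 2e7 GeV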
A negative results of the planned future searches forμ→ e conversion would imply a bound on the LFV scale of about 10^7.§ B→ S ANOMALIES The effective Lagrangian in Eq. (<ref>) contains the operators which provide the most relevant contributions to the b→ s anomalies under discussion[The complete effective Lagrangian that describes effects in B physics can be found in Ref. <cit.>. In particular, another operator, with respect to the reduced list in Eq. (<ref>), would contribute at tree level to C_9, e_Rγ^μ e_R q_Lγ_μ q_L: this contribution is however negligible for the observables discussed here <cit.>, and then this operator is not considered in the present discussion.]: they are Ø^(3)_LL and Ø^(5)_LL, which contribute at tree level to the Wilson coefficients C_9 and C_10 defined in Eq. (<ref>), satisfying to δ C_10=-δ C_9.Focussing on the flavour structure of Ø^(3)_LL and Ø^(5)_LL, the two operators are invariant under the MFV flavour symmetry _Q×_L, but can only describe flavour conserving observables which predict universality conservation in both the quark and lepton sectors. In order to describe a process with quark flavour change, it is then necessary to insert powers of the quark Yukawa spurion _u. The dominant contributions would arise contracting the flavour indices of the quark bilinear with _u_u^†: once the spurions acquire their background values, the b→ s transitions are weighted by the V_tbV_ts^∗ factor appearing in Eq. (<ref>). Notice that, as (Y_u)_33=y_t≈1, an additional insertion of _u_u^† is not negligible and modifies the dominant contributions by (1+ y_t^2) factors. Further insertions of _u_u^† turn out to be unphysical, as they can be written as combinations of the linear and quadratic terms through the Cayley-Hamilton theorem. The complete spurion insertions in Ø^(3,5)_LL can then be written as ζ_1_u_u^†+ζ_2(_u_u^†)^2, with ζ_1,2 arbitrary coefficients, reflecting the independence of each insertion: the net contribution to the operator is then given by V_tbV_ts^∗(ζ_1y_t^2+ζ_2y_t^4).The anomalies in the angular observable P'_5 of B→ K^∗μ^+μ^-, in the ratios R_K and R_K^∗, and in the Branching Ratio of B_s→ϕμ^+μ^- are linked to the possible violation of leptonic universality. NP contributions leading to these effects can be described in terms of insertions of spurion combinations transforming under 8 of SU(3)_ℓ_L. The simplest structure is _e_e^† that, in the basis defined in Eq. (<ref>), is diagonal and therefore cannot lead to lepton flavour changing transitions. The phenomenological analysis associated to the insertion of this spurionic combination has been performed in Ref. <cit.>, where the focus was in understanding the consequences of having a setup where lepton universality is violated but lepton flavour is conserved. In Ref. <cit.>, the Abelian factors in Eq. (<ref>) are considered as active factors of the flavour symmetry and this leads to background values for _e, whose largest eigenvalue is of order 1. It should be noticed that strong constraints on this setup arise when considering radiative electroweak corrections as discussed in Ref. <cit.>.Focussing only on the non-Abelian factors, as in the tradicional MLFV, the largest entry of Y_e is of the order of 0.01, as can be seen from Eq. (<ref>). In this scenario, the insertion of _e is subdominant with respect to the insertion of the neutrino spurions: the most relevant are _ν^† _ν in the MFC, _ν_ν^† in the EFCI and _N^†_N in the EFCII. 
Once the spurions acquire background values, these contributions reduce to the Δ characteristic of each case. Similarly to what discussed above for Y_u, if the largest eigenvalue of Δ is of order 1, then additional insertions of the neutrino spurions need to be taken into consideration. The specific contribution depends on the model considered and only a generic form ∑^2_n=0ξ_n Δ^n can be generically written, where ξ_n are arbitrary Lagrangian coefficients, and where the sum is stopped at n=2 due to the Cayley-Hamilton theorem.In Ref. <cit.> the EFCI context has been considered and several processes have been studied, discussing the viability of this version of MLFV to consistently describe the b→ s anomalies.The aim of this section is to critically revisit the analysis of Ref. <cit.>, and to investigate the other two versions of MLFV. As already mentioned, EFCI will be disfavoured if the Dirac CP violation in the leptonic sector is confirmed, and therefore the viability of MFC and EFCII to describe the b→ s anomalies, consistently with the other (un)observed flavour processes in the B sector, becomes an interesting issue. Moreover, the results obtained in the previous section will be explicitly considered. §.§ B Semi-Leptonic Decays In order to facilitate the comparison with Ref. <cit.> similar assumptions will be taken. First of all, setting C_10^SM=-C_9^SM and considering that the contributions from Ø_LL^(3,5) satisfy to δ C_10=-δ C_9, one can consider a single Wilson coefficient in Eq. (<ref>): for definiteness, C_9 will be retain in what follows. A second relevant assumption is on the matching between the effective operators of the high-energy Lagrangian defined at Λ_LFV, Eq. (<ref>), and the low-energy phenomenological description in Eq. (<ref>): only the tree level relations will be considered in the following, while effects from loop-contributions and from the electroweak running will be neglected. The latter has been recently shown in Ref. <cit.> to lead to a rich phenomenology, especially in EWPO and τ sector.Considering explicitly the contributions from Ø_LL^(3,5), and specifying the flavour indexes, one can write δ C_9,ℓℓ'=πα_emv^2^2(c_LL,ℓℓ'^(3)+c_LL,ℓℓ'^(5)) ,where c_LL,ℓℓ'^(i) can be written in a notation that makes explicit the dependence on the neutrino spurion background[In Ref. <cit.> a slightly different notation has been adopted, wherec_LL,ℓℓ'^(i)=α_emπ^2v^2[ξ̃^(i)_0δ_ℓℓ'+ξ̃^(i)_1Δ_ℓℓ'+ξ̃^(i)_2Δ_ℓℓ'] ,with ξ̃^(i)_j=π√(2)α_emG_F^2 (ζ^(i)_1y_t^2+ζ^(i)_2y_t^4)ξ^(i)_i . ]:c_LL,ℓℓ'^(i)=(ζ^(i)_1y_t^2+ζ^(i)_2y_t^4)(ξ^(i)_0δ_ℓℓ'+ξ^(i)_1Δ_ℓℓ'+ξ^(i)_2Δ_ℓℓ') . In order to explain lepton universality violation, the contributions proportional to ξ^(i)_1, ξ^(i)_2, etc. should be at least comparable with ξ^(i)_0. Consequently, this requires Δ_ℓℓ∼ 1, and this allows to fix the scale of LNV: indeed, the bounds in Eqs. (<ref>), (<ref>) and (<ref>) become equalities, =6× 10^14 , for MFC =6× 10^14 , for EFCI and EFCII . The bounds from LFV purely leptonic processes discussed in the previous section allows to translate this result into specific values for the LFV scale: from the bounds on μ→ e conversion in nuclei, Fig. <ref>, one obtains that =4.4× 10^5 , for MFC =2× 10^5 , for EFCI =10^5 , for EFCII .With these results at hand, the order of magnitude for δ C_9 turns out to be δ C_9=1.3×10^-4 , for MFC δ C_9=6.5×10^-4 , for EFCI δ C_9=2.6×10^-3 , for EFCII ,estimating only the pre-factors appearing in Eq. (<ref>). These values should now be compared with the ones in Eq. 
(<ref>), necessary to explain the anomalies in b→ s decays: the version of MLFV that contributes most to the C_9 Wilson coefficient is EFCII, but its contributions are two orders of magnitude too small to explain the B anomalies. Only by accident could the order-one parameters in Eq. (<ref>) combine to compensate such a suppression, and this would correspond to an extremely tuned situation. The conclusion that can be drawn from this analysis is that none of the three versions of MLFV can explain deviations from the SM predictions in the Wilson coefficient C_9 larger than a few per mil, once the bounds from leptonic radiative decays and muon conversion in nuclei are taken into consideration, contrary to what has been presented in previous literature. If the anomalies in the B sector are confirmed, it will then be necessary to extend the MLFV context. Attempts in this direction have already appeared in the literature, although not motivated by the search for an explanation of the b→ s decay anomalies. The flavour symmetry of the M(L)FV is a continuous global symmetry and therefore, once the spurions are promoted to dynamical fields, its spontaneous breaking leads to the appearance of Goldstone bosons. Although it would be possible to provide masses for these new states, this would require an explicit breaking of the flavour symmetry. An alternative is to gauge the symmetry <cit.>: the would-be Goldstone bosons are eaten by the flavour gauge bosons that enrich the spectrum. In recent papers <cit.>, a specific gauge boson arising from the chosen gauged flavour symmetry has the couplings required to explain the b→ s anomalies mentioned here. § CONCLUSIONS MFV is a framework that describes fermion masses and mixings and, at the same time, provides a sort of flavour protection against beyond-the-Standard-Model contributions to flavour processes. The lack of knowledge of the origin of neutrino masses is reflected in a larger freedom when implementing the MFV ansatz in the lepton sector: three distinct versions of MLFV have been proposed in the literature. In the present paper, an update of the phenomenological analyses of these setups has been presented, considering the most recent fit of the neutrino oscillation data. The recent indication of CP violation in the leptonic sector, if confirmed, would disfavour the very popular MLFV version <cit.> called here EFCI, where right-handed neutrinos are assumed to be degenerate at tree level and the flavour symmetry is SU(3)_ℓ_L× SU(3)_e_R× SO(3)_N_R× CP. The study of the predictions of these frameworks for flavour changing processes has been presented, focussing on leptonic radiative rare decays and muon conversion in nuclei, which provide the most stringent bounds. A strategy to disentangle the different MLFV possibilities has been described: in particular, the upcoming experiments searching for μ→ eγ and μ→ e conversion in aluminium could have the power to pinpoint the scenario described here as EFCII <cit.>, characterised by the flavour symmetry SU(3)_ℓ_L+N_R× SU(3)_e_R, if the neutrino mass spectrum is normal ordered. An interesting question is whether the present anomalies in semi-leptonic B-meson decays can find an explanation within the M(L)FV context. Contrary to what is claimed in the literature, such an explanation would require a scale of New Physics that turns out to be excluded once purely leptonic processes are considered, the limits on the rate of muon conversion in nuclei being the most constraining.
These anomalies could find a solution extending/modifying the M(L)FV setup, for example, by gauging the flavour symmetry. § ACKNOWLEDGEMENTS L.M. thanks the department of Physics and Astronomy of the Università degli Studi di Padova for the hospitality during the writing up of this paper and Paride Paradisi for useful comments on this project and for all the enjoyable discussions during this visit. D.N.D thanks the Department of Physics of the University of Virginia for the hospitality and P.Q. Hung for the exciting discussions and kind helps.D.N.D. acknowledges partial support by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under the grant 103.01-2014.89, and by the Vietnam Education Foundation (VEF) for the scholarship to work at the Department of Physics of the University of Virginia. L.M. and S.T.P acknowledge partial financial support by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreements No 690575 and No 674896. The work of L.M. was supported in part also by “Spanish Agencia Estatal de Investigación” (AEI) and the EU “Fondo Europeo de Desarrollo Regional” (FEDER) through the project FPA2016-78645-P, and by the Spanish MINECO through the Centro de excelencia Severo Ochoa Program under grant SEV-2012-0249 andby the Spanish MINECO through the “Ramón y Cajal” programme (RYC-2015-17173). The work of S.T.P. was supported in part by the INFN program on Theoretical Astroparticle Physics (TASP) and by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan.100Abe:2011sj T2K Collaboration, K. Abeet. al.,Phys. Rev. Lett.107 (2011) 041801, [http://arxiv.org/abs/1106.2822 arXiv:1106.2822].Adamson:2011qu MINOS Collaboration, P. Adamsonet. al.,Phys. Rev. Lett.107 (2011) 181802, [http://arxiv.org/abs/1108.0015 arXiv:1108.0015].Abe:2011fz Double Chooz Collaboration, Y. Abeet. al.,Phys. Rev. Lett.108 (2012) 131801, [http://arxiv.org/abs/1112.6353 arXiv:1112.6353].An:2012eh Daya Bay Collaboration, F. P. Anet. al.,Phys. Rev. Lett.108 (2012) 171803, [http://arxiv.org/abs/1203.1669 arXiv:1203.1669].Ahn:2012nd RENO Collaboration, J. K. Ahnet. al.,Phys. Rev. Lett.108 (2012) 191802, [http://arxiv.org/abs/1204.0626 arXiv:1204.0626].Fukuyama:1997ky T. Fukuyama and H. Nishiura, Proceeding of 1997 Shizuoka Workshop on Masses and Mixings of Quarks and Leptons, World Scientific Pub. Comp. (1997), [http://arxiv.org/abs/hep-ph/9702253 arXiv:hep-ph/9702253]Altarelli:1998sr G. Altarelli and F. Feruglio,JHEP11 (1998) 021, [http://arxiv.org/abs/hep-ph/9809596 hep-ph/9809596].Harrison:2002er P. F. Harrison, D. H. Perkins, and W. G. Scott,Phys. Lett.B530 (2002) 167, [http://arxiv.org/abs/hep-ph/0202074 hep-ph/0202074].Harrison:2002kp P. F. Harrison and W. G. Scott,Phys. Lett.B535 (2002) 163–169, [http://arxiv.org/abs/hep-ph/0203209 hep-ph/0203209].Xing:2002sw Z.-z. Xing,Phys. Lett.B533 (2002) 85–93, [http://arxiv.org/abs/hep-ph/0204049 hep-ph/0204049].Ma:2001dn E. Ma and G. Rajasekaran,Phys. Rev.D64 (2001) 113012, [http://arxiv.org/abs/hep-ph/0106291 hep-ph/0106291].Babu:2002dz K. S. Babu, E. Ma, and J. W. F. Valle,Phys. Lett.B552 (2003) 207–213, [http://arxiv.org/abs/hep-ph/0206292 hep-ph/0206292].Altarelli:2005yp G. Altarelli and F. Feruglio,Nucl. Phys.B720 (2005) 64–88, [http://arxiv.org/abs/hep-ph/0504165 hep-ph/0504165].Altarelli:2005yx G. Altarelli and F. Feruglio,Nucl. Phys.B741 (2006) 215–235, [http://arxiv.org/abs/hep-ph/0512103 hep-ph/0512103].Altarelli:2006kg G. Altarelli, F. 
Feruglio, and Y. Lin,Nucl. Phys.B775 (2007) 31–44, [http://arxiv.org/abs/hep-ph/0610165 hep-ph/0610165].deMedeirosVarzielas:2006fc I. de Medeiros Varzielas, S. F. King, and G. G. Ross,Phys. Lett.B648 (2007) 201–206, [http://arxiv.org/abs/hep-ph/0607045 hep-ph/0607045].Feruglio:2007uu F. Feruglio, C. Hagedorn, Y. Lin, and L. Merlo,Nucl. Phys.B775 (2007) 120–142, [http://arxiv.org/abs/hep-ph/0702194 hep-ph/0702194]. [Erratum: Nucl. Phys.B836,127(2010)].Bazzocchi:2009pv F. Bazzocchi, L. Merlo, and S. Morisi,Nucl. Phys.B816 (2009) 204–226, [http://arxiv.org/abs/0901.2086 arXiv:0901.2086].Bazzocchi:2009da F. Bazzocchi, L. Merlo, and S. Morisi,Phys. Rev.D80 (2009) 053003, [http://arxiv.org/abs/0902.2849 arXiv:0902.2849].Petcov:1982ya S. T. Petcov,Phys. Lett.B110 (1982) 245–249.Vissani:1997pa F. Vissani,http://arxiv.org/abs/hep-ph/9708483 hep-ph/9708483.Barger:1998ta V. D. Barger, S. Pakvasa, T. J. Weiler, and K. Whisnant,Phys. Lett.B437 (1998) 107–116, [http://arxiv.org/abs/hep-ph/9806387 hep-ph/9806387].Kajiyama:2007gx Y. Kajiyama, M. Raidal, and A. Strumia,Phys. Rev.D76 (2007) 117301, [http://arxiv.org/abs/0705.4559 arXiv:0705.4559].Rodejohann:2008ir W. Rodejohann,Phys. Lett.B671 (2009) 267–271, [http://arxiv.org/abs/0810.5239 arXiv:0810.5239].King:2011zj S. F. King and C. Luhn,JHEP09 (2011) 042, [http://arxiv.org/abs/1107.5332 arXiv:1107.5332].Frampton:2004ud P. H. Frampton, S. T. Petcov, and W. Rodejohann,Nucl. Phys.B687 (2004) 31–54, [http://arxiv.org/abs/hep-ph/0401206 hep-ph/0401206].Romanino:2004ww A. Romanino,Phys. Rev.D70 (2004) 013003, [http://arxiv.org/abs/hep-ph/0402258 hep-ph/0402258].Altarelli:2004jb G. Altarelli, F. Feruglio, and I. Masina,Nucl. Phys.B689 (2004) 157–171, [http://arxiv.org/abs/hep-ph/0402155 hep-ph/0402155].Hochmuth:2007wq K. A. Hochmuth, S. T. Petcov, and W. Rodejohann,Phys. Lett.B654 (2007) 177–188, [http://arxiv.org/abs/0706.2975 arXiv:0706.2975].Petcov:1993rk S. T. Petcov and A. Yu. Smirnov,Phys. Lett.B322 (1994) 109–118, [http://arxiv.org/abs/hep-ph/9311204 hep-ph/9311204].Minakata:2004xt H. Minakata and A. Yu. Smirnov,Phys. Rev.D70 (2004) 073009, [http://arxiv.org/abs/hep-ph/0405088 hep-ph/0405088].Altarelli:2009gn G. Altarelli, F. Feruglio, and L. Merlo,JHEP05 (2009) 020, [http://arxiv.org/abs/0903.1940 arXiv:0903.1940].Toorop:2010yh R. de Adelhart Toorop, F. Bazzocchi, and L. Merlo,JHEP08 (2010) 001, [http://arxiv.org/abs/1003.4502 arXiv:1003.4502].Meloni:2011fx D. Meloni,JHEP10 (2011) 010, [http://arxiv.org/abs/1107.0221 arXiv:1107.0221].Altarelli:2010gt G. Altarelli and F. Feruglio,Rev. Mod. Phys.82 (2010) 2701–2729, [http://arxiv.org/abs/1002.0211 arXiv:1002.0211].Grimus:2011fk W. Grimus and P. O. Ludl,J. Phys.A45 (2012) 233001, [http://arxiv.org/abs/1110.6376 arXiv:1110.6376].Altarelli:2012ss G. Altarelli, F. Feruglio, and L. Merlo,Fortsch. Phys.61 (2013) 507–534, [http://arxiv.org/abs/1205.5133 arXiv:1205.5133].Bazzocchi:2012st F. Bazzocchi and L. Merlo,Fortsch. Phys.61 (2013) 571–596, [http://arxiv.org/abs/1205.5135 arXiv:1205.5135].King:2013eh S. F. King and C. Luhn,Rept. Prog. Phys.76 (2013) 056201, [http://arxiv.org/abs/1301.1340 arXiv:1301.1340].King:2015aea S. F. King,J. Phys.G42 (2015) 123001, [http://arxiv.org/abs/1510.02091 arXiv:1510.02091].Capozzi:2016rtj F. Capozzi, E. Lisi, A. Marrone, D. Montanino, and A. Palazzo,Nucl. Phys.B908 (2016) 218–234, [http://arxiv.org/abs/1601.07777 arXiv:1601.07777].Esteban:2016qun I. Esteban, M. C. Gonzalez-Garcia, M. Maltoni, I. Martínez-Soler, and T. 
Schwetz,JHEP01 (2017) 087, [http://arxiv.org/abs/1611.01514 arXiv:1611.01514].Capozzi:2017ipn F. Capozzi, E. Di Valentino, E. Lisi, A. Marrone, A. Melchiorri, and A. Palazzo,http://arxiv.org/abs/1703.04471 arXiv:1703.04471.Ma:2011yi E. Ma and D. Wegman,Phys. Rev. Lett.107 (2011) 061803, [http://arxiv.org/abs/1106.4269 arXiv:1106.4269].King:2011ab S. F. King and C. Luhn,JHEP03 (2012) 036, [http://arxiv.org/abs/1112.1959 arXiv:1112.1959].Lin:2009bw Y. Lin,Nucl. Phys.B824 (2010) 95–110, [http://arxiv.org/abs/0905.3534 arXiv:0905.3534].Altarelli:2009kr G. Altarelli and D. Meloni,J. Phys.G36 (2009) 085005, [http://arxiv.org/abs/0905.0620 arXiv:0905.0620].Varzielas:2010mp I. de Medeiros Varzielas and L. Merlo,JHEP02 (2011) 062, [http://arxiv.org/abs/1011.6662 arXiv:1011.6662].Altarelli:2012bn G. Altarelli, F. Feruglio, L. Merlo, and E. Stamou,JHEP08 (2012) 021, [http://arxiv.org/abs/1205.4670 arXiv:1205.4670].Toorop:2011jn R. de Adelhart Toorop, F. Feruglio, and C. Hagedorn,Phys. Lett.B703 (2011) 447–451, [http://arxiv.org/abs/1107.3486 arXiv:1107.3486].deAdelhartToorop:2011re R. de Adelhart Toorop, F. Feruglio, and C. Hagedorn,Nucl. Phys.B858 (2012) 437–467, [http://arxiv.org/abs/1112.1340 arXiv:1112.1340].Froggatt:1978nt C. D. Froggatt and H. B. Nielsen,Nucl. Phys.B147 (1979) 277–298.Altarelli:2000fu G. Altarelli, F. Feruglio, and I. Masina,JHEP11 (2000) 040, [http://arxiv.org/abs/hep-ph/0007254 hep-ph/0007254].Altarelli:2002sg G. Altarelli, F. Feruglio, and I. Masina,JHEP01 (2003) 035, [http://arxiv.org/abs/hep-ph/0210342 hep-ph/0210342].Buchmuller:2011tm W. Buchmuller, V. Domcke, and K. Schmitz,JHEP03 (2012) 008, [http://arxiv.org/abs/1111.3872 arXiv:1111.3872].Altarelli:2012ia G. Altarelli, F. Feruglio, I. Masina, and L. Merlo,JHEP11 (2012) 139, [http://arxiv.org/abs/1207.0587 arXiv:1207.0587].Bergstrom:2014owa J. Bergstrom, D. Meloni, and L. Merlo,Phys. Rev.D89 (2014), no. 9 093021, [http://arxiv.org/abs/1403.4528 arXiv:1403.4528].King:2001uz S. F. King and G. G. Ross,Phys. Lett.B520 (2001) 243–253, [http://arxiv.org/abs/hep-ph/0108112 hep-ph/0108112].King:2003rf S. F. King and G. G. Ross,Phys. Lett.B574 (2003) 239–252, [http://arxiv.org/abs/hep-ph/0307190 hep-ph/0307190].Chivukula:1987py R. S. Chivukula and H. Georgi,Phys. Lett.B188 (1987) 99–104.DAmbrosio:2002vsn G. D'Ambrosio, G. F. Giudice, G. Isidori, and A. Strumia,Nucl. Phys.B645 (2002) 155–187, [http://arxiv.org/abs/hep-ph/0207036 hep-ph/0207036].Cirigliano:2005ck V. Cirigliano, B. Grinstein, G. Isidori, and M. B. Wise,Nucl. Phys.B728 (2005) 121–134, [http://arxiv.org/abs/hep-ph/0507001 hep-ph/0507001].Davidson:2006bd S. Davidson and F. Palorini,Phys. Lett.B642 (2006) 72–80, [http://arxiv.org/abs/hep-ph/0607329 hep-ph/0607329].Gavela:2009cd M. B. Gavela, T. Hambye, D. Hernandez and P. Hernandez, JHEP0909 (2009) 038, [http://arxiv.org/abs/0906.1461 0906.1461].Alonso:2011jd R. Alonso, G. Isidori, L. Merlo, L. A. Munoz, and E. Nardi,JHEP06 (2011) 037, [http://arxiv.org/abs/1103.5461 arXiv:1103.5461].Anselm:1996jm A. Anselm and Z. Berezhiani,Nucl. Phys.B484 (1997) 97–123, [http://arxiv.org/abs/hep-ph/9605400 hep-ph/9605400].Barbieri:1999km R. Barbieri, L. J. Hall, G. L. Kane, and G. G. Ross, http://arxiv.org/abs/hep-ph/9901228 hep-ph/9901228.Berezhiani:2001mh Z. Berezhiani and A. Rossi,Nucl. Phys. Proc. Suppl.101 (2001) 410–420, [http://arxiv.org/abs/hep-ph/0107054 hep-ph/0107054].Feldmann:2009dc T. Feldmann, M. Jung, and T. Mannel,Phys. Rev.D80 (2009) 033003, [http://arxiv.org/abs/0906.1523 arXiv:0906.1523].Alonso:2011yg R. Alonso, M. 
B. Gavela, L. Merlo, and S. Rigolin,JHEP07 (2011) 012, [http://arxiv.org/abs/1103.2915 arXiv:1103.2915].Nardi:2011st E. Nardi, Phys. Rev. D84 (2011) 036008,[http://arxiv.org/abs/1105.1770 arXiv:1105.1770].Alonso:2012fy R. Alonso, M. B. Gavela, D. Hernandez, and L. Merlo,Phys. Lett.B715 (2012) 194–198, [http://arxiv.org/abs/1206.3167 arXiv:1206.3167].Alonso:2013mca R. Alonso, M. B. Gavela, D. Hernandez, L. Merlo, and S. Rigolin,JHEP08 (2013) 069, [http://arxiv.org/abs/1306.5922 arXiv:1306.5922].Alonso:2013nca R. Alonso, M. B. Gavela, G. Isidori, and L. Maiani,JHEP11 (2013) 187, [http://arxiv.org/abs/1306.5927 arXiv:1306.5927].Fong:2013dnk C. S. Fong and E. Nardi, Phys. Rev. D89 (2014) no.3, 036008, [http://arxiv.org/abs/1307.4412 arXiv:1307.4412].Cirigliano:2006su V. Cirigliano and B. Grinstein,Nucl. Phys.B752 (2006) 18–39, [http://arxiv.org/abs/hep-ph/0601111 hep-ph/0601111].Grinstein:2006cg B. Grinstein, V. Cirigliano, G. Isidori, and M. B. Wise,Nucl. Phys.B763 (2007) 35–48, [http://arxiv.org/abs/hep-ph/0608123 hep-ph/0608123].Paradisi:2009ey P. Paradisi and D. M. Straub,Phys. Lett.B684 (2010) 147–153, [http://arxiv.org/abs/0906.4551 arXiv:0906.4551].Grinstein:2010ve B. Grinstein, M. Redi, and G. Villadoro,JHEP11 (2010) 067, [http://arxiv.org/abs/1009.2049 arXiv:1009.2049].Feldmann:2010yp T. Feldmann,JHEP04 (2011) 043, [http://arxiv.org/abs/1010.2116 arXiv:1010.2116].Guadagnoli:2011id D. Guadagnoli, R. N. Mohapatra, and I. Sung,JHEP04 (2011) 093, [http://arxiv.org/abs/1103.4170 arXiv:1103.4170].Buras:2011zb A. J. Buras, L. Merlo, and E. Stamou,JHEP08 (2011) 124, [http://arxiv.org/abs/1105.5146 arXiv:1105.5146].Buras:2011wi A. J. Buras, M. V. Carlucci, L. Merlo, and E. Stamou,JHEP03 (2012) 088, [http://arxiv.org/abs/1112.4477 arXiv:1112.4477].Alonso:2012jc R. Alonso, M. B. Gavela, L. Merlo, S. Rigolin, and J. Yepes,JHEP06 (2012) 076, [http://arxiv.org/abs/1201.1511 arXiv:1201.1511].Alonso:2012pz R. Alonso, M. B. Gavela, L. Merlo, S. Rigolin, and J. Yepes,Phys. Rev.D87 (2013), no. 5 055019, [http://arxiv.org/abs/1212.3307 arXiv:1212.3307].Lopez-Honorez:2013wla L. Lopez-Honorez and L. Merlo,Phys. Lett.B722 (2013) 135–143, [http://arxiv.org/abs/1303.1087 arXiv:1303.1087].Barbieri:2014tja R. Barbieri, D. Buttazzo, F. Sala, and D. M. Straub,JHEP05 (2014) 105, [http://arxiv.org/abs/1402.6677 arXiv:1402.6677].Alonso:2016onw R. Alonso, E. Fernandez Martínez, M. B. Gavela, B. Grinstein, L. Merlo, and P. Quilez,JHEP12 (2016) 119, [http://arxiv.org/abs/1609.05902 arXiv:1609.05902].Crivellin:2016ejn A. Crivellin, J. Fuentes-Martin, A. Greljo, and G. Isidori,Phys. Lett.B766 (2017) 77–85, [http://arxiv.org/abs/1611.02703 arXiv:1611.02703].Forero:2014bxa D. V. Forero, M. Tortola, and J. W. F. Valle,Phys. Rev.D90 (2014), no. 9 093006, [http://arxiv.org/abs/1405.7540 arXiv:1405.7540].Blennow:2014sja M. Blennow, P. Coloma, and E. Fernandez-Martínez,JHEP03 (2015) 005, [http://arxiv.org/abs/1407.3274 arXiv:1407.3274].Capozzi:2013csa F. Capozzi, G. L. Fogli, E. Lisi, A. Marrone, D. Montanino, and A. Palazzo, Phys. Rev.D89 (2014) 093018, [http://arxiv.org/abs/1312.2878 arXiv:1312.2878].Olive:2016xmw Particle Data Group Collaboration, C. Patrignaniet. al.,Chin. Phys.C40 (2016), no. 10 100001.Feruglio:2012cw F. Feruglio, C. Hagedorn, and R. Ziegler,JHEP07 (2013) 027, [http://arxiv.org/abs/1211.5560 arXiv:1211.5560].Holthausen:2012dk M. Holthausen, M. Lindner, and M. A. Schmidt,JHEP04 (2013) 122, [http://arxiv.org/abs/1211.6953 arXiv:1211.6953].Feruglio:2013hia F. Feruglio, C. Hagedorn, and R. Ziegler,Eur. Phys. 
J.C74 (2014) 2753, [http://arxiv.org/abs/1303.7178 arXiv:1303.7178].Girardi:2013sza I. Girardi, A. Meroni, S. T. Petcov, and M. Spinrath,JHEP02 (2014) 050, [http://arxiv.org/abs/1312.1966 arXiv:1312.1966].Branco:2015gna G. C. Branco, I. de Medeiros Varzielas, and S. F. King,Nucl. Phys.B899 (2015) 14–36, [http://arxiv.org/abs/1505.06165 arXiv:1505.06165].Ding:2015rwa G.-J. Ding and S. F. King,Phys. Rev.D93 (2016) 025013, [http://arxiv.org/abs/1510.03188 arXiv:1510.03188].Varzielas:2016zjc I. de Medeiros Varzielas, S. F. King, C. Luhn, and T. Neder,Phys. Rev.D94 (2016), no. 5 056007, [http://arxiv.org/abs/1603.06942 arXiv:1603.06942].Shimizu:2014ria Y. Shimizu, M. Tanimoto, and K. Yamamoto,Mod. Phys. Lett.A30 (2015) 1550002, [http://arxiv.org/abs/1405.1521 arXiv:1405.1521].Petcov:2014laa S. T. Petcov,Nucl. Phys.B892 (2015) 400–428, [http://arxiv.org/abs/1405.6006 arXiv:1405.6006].Girardi:2015vha I. Girardi, S. T. Petcov, and A. V. Titov,Eur. Phys. J.C75 (2015) 345, [http://arxiv.org/abs/1504.00658 arXiv:1504.00658].Girardi:2015rwa I. Girardi, S. T. Petcov, A. J. Stuart, and A. V. Titov,Nucl. Phys.B902 (2016) 1–57, [http://arxiv.org/abs/1509.02502 arXiv:1509.02502].King:2013psa S. F. King, A. Merle, and A. J. Stuart,JHEP12 (2013) 005, [http://arxiv.org/abs/1307.2901 arXiv:1307.2901].Ballett:2013wya P. Ballett, S. F. King, C. Luhn, S. Pascoli, and M. A. Schmidt,Phys. Rev.D89 (2014), no. 1 016016, [http://arxiv.org/abs/1308.4314 arXiv:1308.4314].Ballett:2014dua P. Ballett, S. F. King, C. Luhn, S. Pascoli, and M. A. Schmidt,JHEP12 (2014) 122, [http://arxiv.org/abs/1410.7573 arXiv:1410.7573].Girardi:2014faa I. Girardi, S. T. Petcov, and A. V. Titov,Nucl. Phys.B894 (2015) 733–768, [http://arxiv.org/abs/1410.8056 arXiv:1410.8056].Gehrlein:2016wlc J. Gehrlein, A. Merle, and M. Spinrath,Phys. Rev.D94 (2016), no. 9 093003, [http://arxiv.org/abs/1606.04965 arXiv:1606.04965].Barbieri:2011ci R. Barbieri, G. Isidori, J. Jones-Perez, P. Lodone, and D. M. Straub,Eur. Phys. J.C71 (2011) 1725, [http://arxiv.org/abs/1105.2296 arXiv:1105.2296].Barbieri:2011fc R. Barbieri, P. Campli, G. Isidori, F. Sala, and D. M. Straub,Eur. Phys. J.C71 (2011) 1812, [http://arxiv.org/abs/1108.5125 arXiv:1108.5125].Barbieri:2012uh R. Barbieri, D. Buttazzo, F. Sala, and D. M. Straub,JHEP07 (2012) 181, [http://arxiv.org/abs/1203.4218 arXiv:1203.4218].Barbieri:2012bh R. Barbieri, D. Buttazzo, F. Sala, and D. M. Straub,JHEP10 (2012) 040, [http://arxiv.org/abs/1206.1327 arXiv:1206.1327].Barbieri:2015yvd R. Barbieri, G. Isidori, A. Pattori, and F. Senia,Eur. Phys. J.C76 (2016), no. 2 67, [http://arxiv.org/abs/1512.01560 arXiv:1512.01560].Bordone:2017anc M. Bordone, G. Isidori, and S. Trifinopoulos, http://arxiv.org/abs/1702.07238 arXiv:1702.07238.Feruglio:2008ht F. Feruglio, C. Hagedorn, Y. Lin, and L. Merlo,Nucl. Phys.B809 (2009) 218–243, [http://arxiv.org/abs/0807.3160 arXiv:0807.3160].Ishimori:2008uc H. Ishimori, T. Kobayashi, H. Okada, Y. Shimizu, and M. Tanimoto,JHEP04 (2009) 011, [http://arxiv.org/abs/0811.4683 arXiv:0811.4683].Feruglio:2009hu F. Feruglio, C. Hagedorn, Y. Lin, and L. Merlo,Nucl. Phys.B832 (2010) 251–288, [http://arxiv.org/abs/0911.3874 arXiv:0911.3874].Feruglio:2009iu F. Feruglio, C. Hagedorn, and L. Merlo,JHEP03 (2010) 084, [http://arxiv.org/abs/0910.4058 arXiv:0910.4058].Toorop:2010ex R. de Adelhart Toorop, F. Bazzocchi, L. Merlo, and A. Paris,JHEP03 (2011) 035, [http://arxiv.org/abs/1012.1791 arXiv:1012.1791]. [Erratum: JHEP01,098(2013)].Toorop:2010kt R. de Adelhart Toorop, F. Bazzocchi, L. Merlo, and A. 
| http://arxiv.org/abs/1705.09284v2 | {
"authors": [
"D. N. Dinh",
"L. Merlo",
"S. T. Petcov",
"R. Vega-Alvarez"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170525175957",
"title": "Revisiting Minimal Lepton Flavour Violation in the Light of Leptonic CP Violation"
} |
We present an extragalactic survey using observations from the Atacama Large Millimeter/submillimeter Array (ALMA) to characterise galaxy populations up to z = 0.35: the Valparaíso ALMA Line Emission Survey (VALES). We use ALMA Band-3 CO(1–0) observations to study the molecular gas content in a sample of 67 dusty normal star-forming galaxies selected from the Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS). We have spectrally detected 49 galaxies at >5σ significance and 12 others are seen at low significance in stacked spectra. CO luminosities are in the range of (0.03-1.31)×10^10 K km s^-1 pc^2, equivalent to log(M_gas/M_⊙) = 8.9 - 10.9 assuming an α_CO = 4.6 M_⊙ (K km s^-1 pc^2)^-1, which perfectly complements the parameter space previously explored with local and high-z normal galaxies. We compute the optical to CO size ratio for 21 galaxies resolved by ALMA at ∼3.5 arcsec resolution (6.5 kpc), finding that the molecular gas is on average ∼0.6 times more compact than the stellar component. We obtain a global Schmidt-Kennicutt relation, given by log[Σ_SFR/(M_⊙ yr^-1 kpc^-2)] = (1.26 ± 0.02) × log[Σ_M_H2/(M_⊙ pc^-2)] - (3.6 ± 0.2). We find a significant fraction of galaxies lying at `intermediate efficiencies' between a long-standing mode of star-formation activity and a starburst, especially at L_IR = 10^11-12 L_⊙. Combining our observations with data taken from the literature, we propose that star formation efficiencies can be parameterised by log[SFR/M_H2] = 0.19 × (log L_IR - 11.45) - 8.26 - 0.41 × arctan[-4.84 (log L_IR - 11.45)]. Within the redshift range we explore (z<0.35), we identify a rapid increase of the gas content as a function of redshift.
galaxies: high-redshift – galaxies: ISM – infrared: galaxies – submillimeter: galaxies – ISM: lines and bands
§ INTRODUCTION Understanding the way in which galaxies form and evolve throughout cosmic time is one of the major challenges of extragalactic astrophysics. Recently, theoretical models adopting a ΛCDM cosmology have been successful in probing the hierarchical gravitational growth of dark matter haloes, which is then associated with the large-scale structure of the observed baryonic matter <cit.>. On smaller scales, however, the physical processes that control galaxy growth have intricate non-linear dependencies that make their explanation far from trivial (e.g. ). One of the key observations used to constrain galaxy formation and evolution models is the behaviour of the cosmic star-formation rate density. Understanding the cosmic evolution of the interplay between the observed star formation rate (SFR), molecular gas content (M_gas), global stellar mass content (M_⋆) and gas-phase metallicity (Z) is a major goal in this field of research. We therefore require a detailed knowledge of the origin and the properties of the gas reservoir that ignites and sustains the star formation activity in galaxies at different epochs. The accretion of gas into the potential wells of galaxies, either from the inter-galactic medium or via galaxy-galaxy interactions, provides the gas reservoir for ongoing and future star formation <cit.>. Most stars form in giant molecular clouds (GMCs), in which the majority of the mass is in the form of molecular hydrogen (H_2).
The lack of a permanent dipole moment in this molecule means that direct measurements of cold H_2 gas are extremely difficult <cit.>.Thus, an alternative approach to study the molecular gas content is through observations of carbon monoxide (CO) line emission of low-J transitions (e.g. J=2-1 or J=1-0) – the best standard tracer of the total mass in molecular gas <cit.>. Even though this tracer has been historically used as a tracer of the molecular gas mass, the ^12C^16O(J=1-0) [hereafter CO(1–0)] emission line is optically thick, hence the dynamics of the system becomes critical for converting luminosities into masses <cit.>.For instance, in the case of a merger where dynamical instabilities are large and the system is not virialised, Doppler-broadening could affect the line profiles and the emitting regions could be more dispersed throughout the inter stellar medium (ISM), thus enhancing the CO emission compared to that from a virialised system of the same mass <cit.>.In dense, optically-thick virialised GMCs, it is found that α_ CO∼5 M_⊙ (K km s^-1 pc^2)^-1, whereas α_ CO∼0.8M_⊙ (K km s^-1 pc^2)^-1 in more dynamically disrupted systems, such as in Ultra Luminous Infrared Galaxies (ULIRGs; ). On the other hand, α_ CO may be boosted in low-metallicity environments due to a lack of shielding dust that enhances photo-dissociation of the CO molecule <cit.>. For instance, <cit.> find a parametrisation of α_ CO in terms of gas metallicity, where α_ CO∝ Z^-0.65 (mixing both low and high-z galaxies), similar to that found by <cit.>. A higher redshifts, a flatter slope has been suggested ().Recent observations taken with the Herschel Space Observatory[Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia with an important participation from NASA.] <cit.> of local star-forming galaxies suggest the existence of at least two different mechanisms triggering the star formation. Taking into account the L_ FIR/ M_H2 ratio (where L_ FIR is the far-IR luminosity) as a tracer of the star-formation efficiency,<cit.> find an unusual point at ∼ 80 L_⊙ M^-1_⊙ at which average properties of the neutral and ionised gas change significantly, this observationis broadly consistent with a scenario of a highly compressed and more efficient mode of star formation that creates higher ionisation parameters that cause the gas to manifest in low line to continuum ratios. This value is similar to the one at which <cit.> and <cit.> claim a transition to a more efficient star-formation mode, above the so-called `main-sequence' for star-forming galaxies (e.g. ). The different mechanisms controlling the star-formation activity are thought to be the product of dynamical instabilities, where higher efficiencies are seen in more compact and dynamically disrupted systems, such as in Ultra Luminous.Over the last few years, significant efforts have been made to characterise the star formation activity of normal and starburst galaxies at low-z (e.g. ). The construction of large samples of galaxies with direct molecular gas detections (via CO emission) has remained a challenge. Beyond the local Universe, CO detections are limited to the most massive/luminous yet rare galaxies.For example, <cit.> report detections of the CO(J=1-0) transition for 11 ULIRGs with an average redshift of z=0.38. For these ULIRGs, the molecular gas mass as a function of look-back time demonstrates a dramatic rise by almost an order of magnitude from the current epoch out to 5 Gyr ago. 
In addition, <cit.> presented 18 detected ULIRGs at z∼0.2-0.6 for CO(1–0), CO(2–1) and CO(3–2) with an average CO luminosity of L'_ CO(1-0)=2 × 10^10 K km s^-1 pc^2, finding that the amount of gas available for a galaxy quickly increases as a function of redshift. Moreover, <cit.> presented the properties of 17 Herschel-selected ULIRGs (L_ IR > 10^11.5 L_⊙) at z=0.2-0.8, showing that the previously observed evolution of ULIRGs at those redshifts is already taking place by z∼0.3. Nevertheless, the observation of `normal' galaxies at these redshifts (and beyond) has so far been, at least, restricted.The advent of the Atacama Large Millimeter/submillimeter Array (ALMA) opens up the possibility to explore the still unrevealed nature of the `normal' star forming galaxies (SFGs) at low/high-z redshift. In this work, we exploit the Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS[<http://www.h-atlas.org/>]; ) and the state-of-the-art capabilities of ALMA to characterise the CO(1-0) line emission (ν_ rest=115.271 GHz) of `normal' star-forming and mildly starburst galaxies up to z=0.35.This paper is organised as follows. Section <ref> explains the sample selection, observing strategy and data reduction. In Section <ref>, we present the main results and the implications of these new ALMA observations to the global context of galaxy evolution. Our conclusion is summarised in Section <ref>. Throughout this work, we assume a ΛCDM cosmology adopting the values H_ 0=70.0 km s^-1Mpc^-1, Ω_ M = 0.3 and Ω_Λ=0.7 for the calculation of luminosity distances and physical scales[We use Ned Wright's online calculator <http://www.astro.ucla.edu/w̃right/CosmoCalc.html.>].§ OBSERVATIONS §.§ H-ATLAS sampleThe galaxies presented in this paper have been selected from the equatorial fields of the H-ATLAS survey (∼160deg^2; ) and observed during ALMA Cycle-1 and Cycle-2 (programs 2012.1.01080.S & 2013.1.00530.S; P.I. E. Ibar). All galaxies have a >3σ detection with both the Photoconductor Array Camera and Spectrometer (PACS) at 160 μm and the Spectral and Photometric Imaging Receiver (SPIRE) at 250 μm, i.e. they are detected near the peak of the spectral energy distribution (SED) of a normal and local star-forming galaxy. All galaxies have been unambiguously identified in the Sloan Digital Sky Survey (SDSS; ) presenting a significant probability for association (reliability R>0.8; ).The optical counterparts to the Herschel-detected galaxies all have high-quality spectra from the Galaxy and Mass Assembly survey (GAMA[<http://www.gama-survey.org/>]; z_qual ≥ 3; ).Slightly different selection criteria were used in each cycle to construct the list of ALMA targets. In Cycle-1, we selected a representative sample of 41 galaxies with the following criteria:0.15 < z < 0.35 (the upper threshold in redshift corresponds to the limits at which the CO(1–0) line moves out of frequency range covered by Band-3 of ALMA); S_ 160μ m>100mJy; SDSS sizes isoa <100; and a reduced χ^2<1.5 when fitting the far-IR/submm SED using a modified black body (following a similar approach as in ). 
On the other hand, in Cycle-2 we targeted 27 galaxies that have previous Herschel PACS [C ii] spectroscopy as shown by <cit.> and so added the following criteria: 0.02<z<0.2 (the threshold is defined by the point where the [C ii] is redshifted to the edge of the PACS spectrometer); S_ 160μ m>150 mJy; Petrosian SDSS radii smaller than 150; sources do not have >3σ PACS 160 μm detections within 2 arcmin (to ensure reliable on-off sky subtraction).Combining Cycle-1 and Cycle-2 observations, we construct one of the largest samples of CO(1-0) detected galaxies at 0.02<z<0.35 (see Fig. <ref>). We highlight that some of the main advantages of our sample over previous studies of far-IR-selected galaxies are: (1) we cover fainter L_ 8-1000μ m≈ 10^10-12 L_⊙ and less massive M_ dust≈ 1.5 × 10^7-8 M_⊙ ranges than IRAS–selected samples, i.e. our samples are not significantly biased towards powerful ULIRGs that potentially have complex merger morphologies as those described by <cit.> and <cit.>; (2) the sample selection dominated by the 160 μm and 250 μm photometry gives relatively low dust temperature estimates (25<T_ dust/ K<60) and reduces (but not entirely) the well known bias towards high dust temperatures evidenced in 60 μm-selected IRAS–samples <cit.>; (3) the wealth of ancillary data already available for all the sources <cit.>; and (4) the redshift range puts galaxies far enough so galaxies can be imaged with a single ALMA pointing in Band-3 – it does not require large mosaicking (using the Atacama Compact Array) campaigns as in more local galaxy samples.These reasons enable us to address our science goals using a much simpler but wider parameter space for the diagnostics of interest (see Fig. <ref>). §.§ Observational strategy ALMA Cycle-1 observations were taken in Band-3 between December 2013 and March 2014 (see Table <ref>), spending approximately 3 to 9 minutes on-source in each source.Scheduling blocks (SBs) were designed to detect the CO(1-0) emission line down to a root mean square (rms) of 1.5 mJy beam^-1 at 50 km s^-1 channel width and at ∼ 3” - 4” resolution (the most compact configuration). On the other hand, Cycle-2 observations were taken in Band-3 on January 2015 and SBs were designed to observe the CO(1–0) emission line but down to 2 mJy beam^-1 at 30 km s^-1. Even though ALMA is not specifically designed as a `survey-like' telescope, we setup our experiment to minimise the number of spectral tunings needed to observe all sources independently. We make use of the fact that our targets come from three equatorial H-ATLAS/GAMA fields which are ∼ 4×14 deg^2 size, providing large numbers of galaxies at similar redshifts. We modified the `by-default' approach provided by the ALMA Observing Tool (OT) by setting source redshifts to zero, but fixing the spectral windows (SPW) manually in order to cover the widest possible spectral range, i.e. redshift range. We optimised the central frequency position of the SPWs (over ∼7.5GHz) to maximise the number of sources with the CO(1–0) line redshifted into the ranges covered by our SPWs. This observing strategy allowed us to spectrally resolve the CO(1–0) emission in 49 galaxies (see Fig. <ref>; ∼ 70% of the whole sample), while in 12 others we see low signal to noise emission in collapsed spectra (moment-0). §.§ Data reductionA summary of all ALMA observations are shown in Table <ref>. 
To process all observations in a standardised way, we developed a common pipeline within the Common Astronomy Software Applications[<http://casa.nrao.edu/index.shtml>] (casa version 4.4.0). Based on the standard pipeline for data processing, we designed our own structured pipeline for calibration, concatenation and imaging. The structure was designed in modules, taking into account the vast amount of data and the flexibility needed when flagging corrupted data. When a science goal has more than one observation, we re-calibrate the phase calibrator to an average flux density (usually variations are seen at ≲15%) and bootstrap this scaling to the targets before concatenating the observations. The bandpass, flux and phase calibrators for each data set can be seen in Table <ref>. In the first instance, imaging was performed using the task clean at different spectral resolutions (from 20 to 100 km s^-1 in steps of 10 km s^-1). We sought the resolution that provided the highest number of non-cleaned point-like detections >5.0σ within the data-cube (R.A.-Dec.-ν) near the expected source position. If the source was undetected, then we created the cube at 100 km s^-1 channel width. After choosing the best resolution, we ran task clean again but this time applying a primary beam correction, manually cleaning the CO line emission down to a threshold of 3.0σ, and choosing an image size of 256×256 pixels with roughly 5 pixels (semi-major axis) per synthesised beam full width half maximum (fwhm). We used the optically-derived spectroscopic redshifts (z_spec) of each source in a barycentric velocity reference frame. The final cubes were created using natural weighting, resulting in image cubes with typical synthesised beams of 3" - 4". The physical sizes for each source, i.e. the deconvolved major axes (in kpc), are given in Table <ref>.
§.§ Source properties
§.§.§ CO emission We get an average rms level of 1.6 mJy beam^-1 (at 50 km s^-1) for both Cycle-1 and Cycle-2. We identify 49 galaxies (out of 67) with a >5σ peak line detection in at least one spectral channel (in all binned channel widths from 10 to 100 km s^-1). For the 49 spectrally detected galaxies we determine the central frequency (ν_obs) of the CO emission line by using a single Gaussian fit to the spectra. We found that central frequencies are in agreement and within the scatter of the expected GAMA optical redshifts (see column v_obs in Table <ref>). The fitted fwhm of the CO line in our sample covers a range of 67 - 805 km s^-1. All the spectra with spectrally resolved CO signal are displayed in Fig. <ref>, whereas non-detections are summarised in Table <ref>. The velocity-integrated CO flux densities (S_CO Δv in units of Jy km s^-1) were obtained by collapsing the data cubes between ±1× fwhm centred on the line (see yellow range shown in Fig. <ref>). The 2D intensity map is then fitted with a 2D-Gaussian for all detected sources using the task gaussfit within casa. Errors in these measurements are taken directly from gaussfit's outputs. In seven cases the CO emission is not well fitted by a 2D-Gaussian, so we have used an irregular aperture covering the whole extension of the emission. Errors for those aperture measurements come from the standard deviation of fluxes measured in random sky regions around the source. We find measurements in the range of 2.2 - 20.8 Jy km s^-1, with an average value of 6.9 ± 0.2 Jy km s^-1.
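As a minimal illustration of the line-measurement step described above (and not the actual casa-based pipeline), the following Python sketch fits a single Gaussian to a synthetic CO(1–0) spectrum and then integrates the channels within ±1× fwhm of the fitted line centre; the channel width, noise level and line parameters are arbitrary placeholders rather than values from our data.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, v0, sigma):
    # Single Gaussian line profile in velocity space (amplitude in mJy, v in km/s).
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

# Synthetic spectrum: 50 km/s channels with 1.6 mJy rms noise (placeholder values).
rng = np.random.default_rng(1)
velocity = np.arange(-2000.0, 2000.0, 50.0)                  # km/s
spectrum = gaussian(velocity, 12.0, 30.0, 110.0)             # mJy
spectrum += rng.normal(0.0, 1.6, velocity.size)

# Fit amplitude, centre and width, starting from the brightest channel.
p0 = [spectrum.max(), velocity[np.argmax(spectrum)], 100.0]
popt, _ = curve_fit(gaussian, velocity, spectrum, p0=p0)
amp, v0, sigma = popt
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma              # km/s

# Velocity-integrated flux density over +/- 1 x fwhm around the fitted centre.
in_line = np.abs(velocity - v0) <= fwhm
channel_width = 50.0                                         # km/s
s_co_dv = spectrum[in_line].sum() * channel_width * 1e-3     # Jy km/s

print(f"v0 = {v0:.1f} km/s, fwhm = {fwhm:.0f} km/s, S_CO dv = {s_co_dv:.2f} Jy km/s")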
We get 21 galaxies which are spatially resolved in CO, based on a fitted semi-major axis √(2) times larger than the major axis of the synthesised beam. For non-detections, we collapsed the cubes (moment 0 maps) between ± 250 km s^-1 centred at the expected observed frequency – a range consistent with the average line width we derive for the whole sample (251.6 ± 38.3 km s^-1). In these stacked spectra 12 other galaxies show emission (ensuring a correct optical counterpart and redshift association). We provide these measurements in Table <ref>. In these collapsed maps the rms values range between 0.04 and 5.35 Jy km s^-1 (at 100 km s^-1 channel width), with an average of 1.64 Jy km s^-1. Some spectra show double line profiles providing valuable dynamical information. Our kinematic results will be published in Molina et al. (in prep.). We stress, however, that our single Gaussian profiles shown in Fig. <ref> are only used to define the spectral range used to collapse the cubes, from which we obtain the intensity maps to extract the velocity-integrated flux densities. We look at how much the velocity-integrated flux densities could change if we use double Gaussian profiles to fit the emission lines (in 16 spectra). Collapsing the cubes between the lower and the upper fwhm bound limits (of both Gaussians), and comparing these to those obtained from a single Gaussian fit, we find that fluxes decrease by ∼5% (on average), although with a large scatter (∼30%). We decide to stick to the single Gaussian fit to estimate the fwhm to collapse the cubes.
§.§.§ IR emission For each galaxy, we measure the IR luminosity by fitting the rest-frame SED constructed with photometry from IRAS, Wide-field Infrared Survey Explorer (WISE), and Herschel PACS and SPIRE instruments, using a modified black body that is forced to follow a power law at the high-frequency end of the spectrum. The fit constrains the dust temperature (T_d), the dust emissivity index (β), the mid-IR slope (α_mid-IR), and the normalisation. Then we integrate the flux of the best-fitting SED between 8 and 1000 μm to obtain the total IR luminosity (L_IR), i.e., L_IR(8-1000 μm) = 4π D_L^2(z) ∫_ν_1^ν_2 S_ν dν. The uncertainties on the IR luminosity are obtained by randomly varying the broadband photometry within the observational uncertainties in a Monte-Carlo simulation (100 times). Our results are listed in Table <ref>. We estimate the SFR following SFR(M_⊙ yr^-1) = 10^-10 × L_IR assuming a <cit.> Initial Mass Function (IMF), where L_IR is in units of L_⊙ <cit.>, and we assume a 1.72 factor to convert from a Salpeter to a Chabrier IMF.
§.§.§ SED fitting All of our galaxies are present in the GAMA Panchromatic Data Release[<http://gama-psi.icrar.org>] () that provides imaging for over 230 deg^2 with photometry in 21 bands extending from the far-ultraviolet to far-infrared from a range of facilities that currently includes the GALaxy Evolution eXplorer (GALEX), Sloan Digital Sky Survey (SDSS), Visible and Infrared Telescope for Astronomy (VISTA), WISE, and Herschel, meaning that the spectral energy distribution between 0.1–500 μm is available for each galaxy. These observed rest-frame SEDs have all been modelled with the Bayesian SED fitting code, MAGPHYS <cit.>, which fits the panchromatic SED from a library of optical and infrared SEDs derived from a generalised multi-component model of a galaxy, whilst giving special consideration to the dust–energy balance (see Fig. <ref>). Although Driver et al. (in prep.)
will present a complete catalogue and analysis of all the GAMA SEDs modelled with MAGPHYS and the corresponding best-fit model parameters (see also ), in our present study we use the derived stellar masses and their uncertainties, which we calculate from the lower (16th percentile) and upper (84th percentile) limits of the probability distribution function associated with the stellar mass given by the best-fit model, as presented in Table <ref>. We briefly assess the quality of the fitting by comparing the stellar masses and IR luminosities derived from MAGPHYS to those estimates from our previous study presented in <cit.>. Both of these parameters demonstrate satisfactory agreement with a mean scatter of between 0.15 and 0.2 dex (see Fig. <ref>). In contrast, our derived star formation rates show a constant systematic offset across the parameter range (of a factor of 2, where the MAGPHYS values are lower than the estimates obtained from L_IR), which likely arises from differences in SFR definition/calibration (see e.g. ). However, removing this systematic offset yields a mean scatter of 0.2 dex.
§.§.§ Morphological properties In order to explore the morphological properties of our galaxies, we use the GAMA Panchromatic Swarp Imager[<http://gama-psi.icrar.org/psi.php>] to extract multi-wavelength imaging from GALEX, SDSS, VISTA and WISE. We classify each source (based on visual inspection agreed by four members of our team) into three different categories according to the prominence of key morphological features: a Bulge (`B'), Disk (`D') and Merger-Irregular (`M'). If the source presents more than one morphology, we mark the first letter as the dominant morphology. If the source has multiple neighbouring systems, then we add `C' to denote these `companions'. In the following, we refer to our galaxies as `B', `D' or `M' dominated galaxies. We also note that this morphological classification is used to define the most suitable α_CO to then compute M_H_2 (this is discussed in <ref>).
§ RESULTS AND DISCUSSION
§.§ Morphological description We have made a census of the different optical/near-IR morphologies present in our sample, according to the morphological classification scheme explained in <ref>. From a total of 49 spectrally detected sources, we have identified 18 as B–dominated, 26 as D–dominated and 5 as M–dominated galaxies (see the morphology column in Table <ref>). By definition, the 5 M–dominated galaxies present signs of possible morphological disruption: 3 galaxies are clear interacting systems (with two or more companions), and 2 show traces of the late stages of merger events. In the case of the spectrally undetected galaxies, we have identified 11 as B–dominated, 4 as D–dominated and 3 as M–dominated galaxies. We do not identify any clear morphological difference between CO–detected and –undetected galaxies. ALMA observations spatially resolve the CO emission in 21 galaxies (see <ref>). We calculate the deconvolved fwhm of the semi-major axis (R_FWHM) using the task gaussfit (within casa), finding CO sizes in the range of 3.4 - 15.2 arcsec (3.7 - 35.0 kpc in physical units), usually resolved at a significance of ∼7σ (median value). We compare the optical and CO sizes by using the Petrosian radius in r-band (R_P,Opt) and the Petrosian radius in CO (R_P,CO), using Eqn. (1) from <cit.>. We find values for R_P,CO in the range of 1.9 - 5.3 arcsec (2.8 - 14.0 kpc), with an average of 3.6 arcsec (6.7 kpc).
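The conversion between the angular sizes quoted above and the physical sizes in kpc only depends on the cosmology adopted in this work (H_0 = 70 km s^-1 Mpc^-1, Ω_M = 0.3, Ω_Λ = 0.7). The short Python sketch below uses astropy for this conversion and for forming an optical-to-CO size ratio; it is only a hedged illustration, and the example redshift and sizes are placeholders rather than entries from our tables.

import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# Cosmology adopted in this work.
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def arcsec_to_kpc(theta_arcsec, z):
    # Convert an angular size in arcsec to a proper physical size in kpc.
    scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    return (theta_arcsec * u.arcsec * scale).value

# Illustrative (placeholder) values for one galaxy.
z_example = 0.15
r_co_arcsec = 3.6          # CO Petrosian radius (R_P,CO differs from R_FWHM by ~2%)
r_opt_arcsec = 5.8         # r-band Petrosian radius, R_P,Opt

r_co_kpc = arcsec_to_kpc(r_co_arcsec, z_example)
size_ratio = r_opt_arcsec / r_co_arcsec

print(f"R_P,CO = {r_co_kpc:.1f} kpc, R_P,Opt/R_P,CO = {size_ratio:.2f}")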
For our sample,we find that the mean and scatter of the R_ P,Opt/R_ P, COdistribution are 1.6±0.5 (i.e. the CO emission is typically smaller than the stellar; see Fig. <ref>). Previous studies have shown a CO-to-optical ratio of unity for `main-sequence' galaxies, locally <cit.> and at high-z <cit.>. In a different luminosity range, <cit.> found that the sizes of Sub-millimetre Galaxies (SMGs) at z = 2.6-4 in optical HST imaging are around four times larger than in CO. Taking into account the typical values of sSFR/sSFR(MS) for our 21 resolved galaxies, we explore if the optical–to–CO ratio changes as a function of sSFR.We perform a Kolmogorov-Smirnov (KS) for both `main sequence' and `starburst' R_ P,Opt/R_ P, CO populations (using sSFR/sSFR_ MS=4.0 as a threshold; see the definition of `main-sequence' in Fig. <ref>), finding a 90% probability that both populations come from the same parent distribution (see Fig. <ref>). This little difference might be a product of the small deviation seen from the main sequence (`starburstiness') or the six spatially resolved starburst galaxies presented here. §.§ Correlations between 𝐋_ 𝐈𝐑 and 𝐋'_ 𝐂𝐎We compute the CO luminosity following <cit.>, L'_ CO = 3.25 × 10^7 S_ CO Δ vν^-2_ obsD^2_L(1+z)^-3 [ K km s^-1 pc^2], where S_ CO Δ v is the velocity-integrated flux density in units of Jy km s^-1, ν_ obs is the observed frequency of the emission line in GHz, D_ L is the luminosity distance in Mpc, and z is the redshift. We find that the values for L'_ CO are in the range of (0.03 - 1.31) ×10^10K km s^-1 pc^2, with a median of (0.3 ± 0.1)× 10^10 K km s^-1 pc^2 (see Fig. <ref>). We note that our survey expands the parameter space explored before by previous similar studies, such as: <cit.> at (0.3-7)× 10^10 K km s^-1 pc^2; <cit.> at (4-9)× 10^10 K km s^-1 pc^2; <cit.> at (0.5-2)×10^10 K km s^-1 pc^2. Based on the IR luminosities derived from the H-ATLAS photometry ( <ref>), we find that the ratios between L_ IR and L'_ CO are similar to those found in normal local star-forming galaxies <cit.>. However, our galaxies have smaller L_ IR/L'_ CO ratio than typical (U)LIRGs in the same redshift range by a factor of ∼10 (see right panel of Fig <ref>). We compute a linear regression (in log scale) to the L_ IR versus L'_ CO relation for our spectrally detected B– and D–dominated galaxies, finding: log [ L_ IR / L_⊙] = (0.95 ± 0.04) ×logL'_ CO/[ K km s^-1 pc^2] + (2.0 ± 0.4). This parametrisation is within 1σ of the value previously presented by <cit.>, and supports the clear linearity between these two quantities. However, this slope is steepercompared with that found by <cit.> in high-z SMG,local ULIRGs and LIRGs (∼ 0.5-0.7). We confirm that most of our detected galaxies (blue circles in left panel Fig. <ref>) follow the so-called `sequence of disks', associated to a long-standing mode of star-formation.We remark, however, that if we include in the statistics those galaxies which are not spectrally detected in CO (blue stars in left panel Fig. <ref>), although have low signal to noise emission in collapsed (moment 0 maps), the scatter in the correlation significantly increases. This indicates that deeper observations are required to provide details for theco-existence of different modes of star-formation. We suggest that within the L_ IR=10^11-12 L_⊙ range,there might be a break (or a significant increment of the scatter) of the linear relation between the CO and far-IR luminosities <cit.>.If we add into the statistics all samples presented in Fig. 
<ref> in addition to our B– and D–dominated galaxies, we find in Eqn. <ref> a slope and normalisation of (0.99 ± 0.02) and (1.7 ± 0.2), respectively. Although these parameters are in agreement with the slopes found in previous studies <cit.>, we should highlight the growing evidence that the star formation efficiencies increase with redshift (e.g. ); therefore, combining galaxy samples at different epochs might be an oversimplification of the analysis (see Fig. <ref>). For those spectrally identified CO galaxies, we do not identify any clear variation of the L_IR/L'_CO ratio as a function of redshift (up to z=0.35; see right panel of Fig. <ref>). Our results are consistent with previous works that have shown a constant average L_IR/L'_CO in `main-sequence' galaxies up to z∼0.5 <cit.>. The scatter on the L_IR/L'_CO ratio, however, increases if non-spectrally detected galaxies are included, an effect which is mainly dominated by the 0.15<z<0.35 galaxy population. Using L'_CO, we compute the molecular gas mass (M_H_2) assuming an α_CO conversion factor dependent on the morphological classification (see <ref>). We adopt α_CO = 4.6 M_⊙ (K km s^-1 pc^2)^-1 for the B– and D–dominated galaxies (which includes the contribution of He; ), while α_CO = 0.8 M_⊙ (K km s^-1 pc^2)^-1 for mergers/interacting galaxies. For B– and D–dominated galaxies, we find M_H_2 values in the range of log(M_H_2/M_⊙) = 8.9 - 10.9 with a median of log(M_H_2/M_⊙) = 10.31 ± 0.1, while for M–dominated galaxies values are in the range of log(M_H_2/M_⊙) = 9.3 - 9.8 with a median of log(M_H_2/M_⊙) = 9.6 ± 0.2. Performing a linear regression to the SFR versus M_H_2 relation using our B– and D–dominated galaxies, we obtain: log[SFR/M_⊙ yr^-1] = (0.95±0.04) × log(M_H_2/M_⊙) - (8.6±0.5). In this work, we significantly increase the number of previously detected galaxies at log[M_H_2/M_⊙] ∼ 9-11. Our sample complements the `gap' between local spirals and `normal' high-z colour-selected galaxies (Fig. <ref>). We note that our M–dominated galaxies are shifted towards higher SFRs, and closer to the local ULIRGs described by <cit.>. At lower redshifts almost all galaxies follow a tight relationship between SFR and M_H2; nevertheless, we identify that galaxies at the upper side of the redshift distribution (0.15<z<0.35) tend to show a higher scatter in this correlation. The scatter is larger when low signal-to-noise detections are included (see stars in the left panel of Fig. <ref>). If we combine our observations with all samples shown in the left panel of Fig. <ref>, then in Eqn. <ref> we obtain a slope of 1.08±0.02 with a normalisation of -9.8±0.2. In both cases, one of the main factors controlling the scatter of the correlation is the different α_CO conversion factor chosen for M–dominated galaxies. It is worth pointing out that deeper ALMA observations to spectrally detect the CO emission from all of our galaxies would probably confirm a population of optically unresolved ULIRG-like galaxies with high star formation efficiencies (filling the `gap' between local spirals and ULIRGs). As shown in the left panel of Fig. <ref> (see blue stars), this population could significantly affect the slope and the scatter of the correlation. The major uncertainty in our molecular gas mass estimates originates from the assumption of the α_CO conversion factor. Indeed, assuming a different α_CO can change M_H2 by over a factor of six (around 500 times higher than observational errors).
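For concreteness, the following Python sketch strings together the conversions discussed above: the L'_CO expression quoted in the previous subsection, the morphology-dependent α_CO, and the SFR = 10^-10 × L_IR calibration, giving a molecular gas mass and a depletion time for a single, entirely illustrative set of inputs. It is a simplified sketch under the stated assumptions, not our measurement code, and all numerical inputs are placeholders.

import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def co_line_luminosity(s_co_dv_jy_kms, nu_obs_ghz, z):
    # L'_CO in K km/s pc^2, following the standard conversion quoted in the text.
    d_l_mpc = cosmo.luminosity_distance(z).to(u.Mpc).value
    return 3.25e7 * s_co_dv_jy_kms * nu_obs_ghz**-2 * d_l_mpc**2 * (1.0 + z)**-3

# Placeholder inputs for a single source (not values from our tables).
z = 0.20
nu_obs = 115.271 / (1.0 + z)           # GHz, redshifted CO(1-0)
s_co_dv = 6.9                          # Jy km/s

l_co = co_line_luminosity(s_co_dv, nu_obs, z)

# Morphology-dependent conversion factor, in M_sun (K km/s pc^2)^-1.
alpha_co = {"B": 4.6, "D": 4.6, "M": 0.8}
m_h2 = alpha_co["D"] * l_co            # M_sun

# SFR from the IR luminosity (Chabrier IMF) and the implied depletion time.
l_ir = 3.0e11                          # L_sun, placeholder
sfr = 1.0e-10 * l_ir                   # M_sun / yr
tau_dep_gyr = m_h2 / sfr / 1.0e9       # Gyr

print(f"L'_CO = {l_co:.2e} K km/s pc^2, M_H2 = {m_h2:.2e} M_sun, "
      f"tau_dep = {tau_dep_gyr:.1f} Gyr")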
On the other hand, considering the metallicity range of our sample <cit.> and using the α_CO parametrisation for star-forming galaxies made by <cit.>, we find that the α_CO could vary by a factor of four, with a tendency to lower values than 4.6 M_⊙ (K km s^-1 pc^2)^-1. In the left panel of Fig. <ref>, galaxies can be shifted in position using a different α_CO, producing an artificial bimodal behaviour for the star-formation activity in these galaxies. This can clearly affect the reliability of claims for the existence of `disk' and `starburst' sequences. In Molina et al. (in prep.), we use kinematic arguments to confront the bimodality of the α_CO conversion factor (e.g. ).
§.§ The Schmidt-Kennicutt relation We introduce M_gas = M_H_2 + M_H i as the total mass content in molecular and atomic gas. As we do not have direct M_H i observations, we estimate these using Eqn. 4 from <cit.>: log[M_H i/M_⋆] = -1.73238 (g-r) + 0.215182 μ_i - 4.08451, where g and r are the photometric magnitudes in those filters, and μ_i is the i–band surface brightness (SDSS filters). This approximation provides a 0.31 dex scatter for the estimate. For our sample, using Eqn. <ref> we find that the contribution is in general small (although non-negligible), with a mean ratio of M_H i/M_H2 ∼ 0.2 (see Fig. <ref>). Our 49 spectrally–detected CO sources have SFRs in the range of 1-94 M_⊙ yr^-1, with a median value of 15±1 M_⊙ yr^-1. For those which are spatially resolved in CO (21 in total), we estimate the SFR and M_gas surface densities by dividing the measured values by the area of a two-sided disk (2π R^2_FWHM), where R_FWHM is the deconvolved fwhm of the semi-major axis measured in the ALMA CO images (see Table <ref>). In Fig. <ref> (right), we show the Schmidt-Kennicutt relation <cit.> comparing our samples with previous ones taken from the literature. For those spectrally detected and spatially resolved CO galaxies, we obtain values for log[Σ_SFR/(M_⊙ yr^-1 kpc^-2)] in the range of -2.61 to -1.23, with a median of -2.18±0.1. Most of the spatially resolved ones (16) are D–dominated while the rest (5) are B–dominated galaxies. The B– and D–dominated galaxies have on average 3 times higher Σ_SFR than local spiral galaxies, but around 30-70 times lower values than normal BzK galaxies at high-z. On the other hand, for the same spectrally detected CO galaxies, log[Σ_Mgas/(M_⊙ pc^-2)] values range between 0.55 and 1.71, with a median value of 1.04 ± 0.29. In terms of CO emission, we do not find any remarkable difference between our B– and D–dominated galaxies (although they do have different morphological optical features); the sample has on average 2 times greater Σ_Mgas than local spiral galaxies. According to estimations of the molecular and atomic gas content in nearby galaxies <cit.>, there is strong evidence that the atomic gas saturates at column gas densities higher than ∼10 M_⊙ pc^-2. This is attributed to a natural threshold for the atomic to molecular gas transition <cit.>. As shown in Fig. <ref>, our atomic gas estimates decrease as a function of Σ_Mgas (as expected from the M_H i saturation), although the large scatter dominating Eqn. <ref> still predicts a non-negligible fraction of atomic gas above Σ_Mgas > 10 M_⊙ pc^-2. Something to highlight is that the left and right panels of Fig. <ref> should behave similarly; nevertheless, we find that most of our galaxies (blue circles) lying near the BzK population disappear after considering our spatially resolved CO selection criterion.
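To make the bookkeeping of this subsection explicit, the sketch below evaluates the photometric H i estimate of Eqn. 4 and the two-sided-disk surface densities used in the Schmidt-Kennicutt diagram; all numerical inputs (colour, surface brightness, masses, SFR and R_FWHM) are illustrative placeholders, not values taken from our tables.

import numpy as np

def log_mhi_over_mstar(g_minus_r, mu_i):
    # Photometric H I estimate (Eqn. 4 above); quoted scatter is 0.31 dex.
    return -1.73238 * g_minus_r + 0.215182 * mu_i - 4.08451

def surface_density(quantity, r_fwhm_kpc):
    # Quantity spread over a two-sided disk of area 2 pi R_FWHM^2, per pc^2.
    area_pc2 = 2.0 * np.pi * (r_fwhm_kpc * 1.0e3) ** 2
    return quantity / area_pc2

# Placeholder values for a single galaxy.
m_star = 4.0e10                          # M_sun
m_h2 = 1.0e10                            # M_sun
sfr = 10.0                               # M_sun / yr
g_minus_r, mu_i = 0.8, 19.5              # SDSS colour and i-band surface brightness
r_fwhm_kpc = 7.0                         # deconvolved CO semi-major axis, kpc

m_hi = m_star * 10.0 ** log_mhi_over_mstar(g_minus_r, mu_i)
m_gas = m_h2 + m_hi

sigma_h2 = surface_density(m_h2, r_fwhm_kpc)               # M_sun / pc^2
sigma_gas = surface_density(m_gas, r_fwhm_kpc)             # M_sun / pc^2
sigma_sfr = surface_density(sfr, r_fwhm_kpc) * 1.0e6       # M_sun / yr / kpc^2

print(f"M_HI/M_H2 = {m_hi / m_h2:.2f}")
print(f"log Sigma_H2 = {np.log10(sigma_h2):.2f}, log Sigma_gas = {np.log10(sigma_gas):.2f}")
print(f"log Sigma_SFR = {np.log10(sigma_sfr):.2f}")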
To explore this further, we compare the median values of M_H_2 and R_ CO using resolved galaxies (in CO) at different redshift bins (centred at 0.07 and 0.2) in order to identify a possible evolution in physical size and mass of gas content. Previous studies have shown that at fixed M_⋆ the averaged effective radius vary as R_ eff∝ (1 + z)^-0.8 <cit.>. If we assume that the stars and the molecular gas follow the same spatial distribution (which is actually not the case; see Fig. <ref>), the measured CO sizes are expected to decrease by a factor of 1.1 between z = 0.07 and z = 0.2. This variation is not sufficient to explain what we observe in Fig. <ref>. Considering the results coming from <ref> to estimate Σ_ SFR and Σ_ Mgas for spatially unresolved CO galaxies, we used the Petrosian optical radius divided by the mean R_ P,Opt/R_ P, CO=1.6 ratio found for our galaxies (note that R_ FWHM and R_ P, CO differ by only ∼2%). Brown and green circles in Fig. <ref> correspond to CO-unresolved B–/D–dominated and M–dominated galaxies, respectively. The inset panel in Fig. <ref> shows clearly that most of the spatially unresolved CO galaxies are those which are more distant (at 0.15 < z < 0.35). This analysis demonstrates that our sample perfectly complements the parameter space in the Schmidt-Kennicutt relation that joins the local spiral galaxies with those `normal' at high-z.Using our data, we perform a linear regression in Fig. <ref> using all B–/D–dominated galaxies which are spatially resolved in CO (21 sources) and unresolved in CO but resolved in the optical (23 sources). Being aware of the possible biases introduced by our H i estimates, we provide a parametrisation for two cases; M_ H2 and M_ gas, Σ_ SFR/ M_⊙yr^-1kpc^-2 = (1.16± 0.05)× log[Σ_ M_gas/ M_⊙ pc^-2] - (3.3 ± 0.1) (1.27 ± 0.05)× log[Σ_ M_H2/ M_⊙ pc^-2] - (3.6 ± 0.1) These results are consistent with previously analyses usingensembles of clumps composing galaxies atz=1-2 (e.g. ) andstar-forming disks with near-solar metallicities <cit.>. Presenting Eqn. <ref> for the molecular and total gas helps to see the way the M_ H i could affect our results. In particular we find that the slope is flatter when M_gas are used. We highlight that given that these galaxies have been selected in the far-IR, our results are not significantly affected by the assumptions in geometrical modelling of the dust as previous in <cit.> and <cit.> studies.If we take into account the samples of galaxies belonging to the sequence of disks shown in right panel of Fig. <ref> (including z=1.0 - 2.3 normal galaxies, BzK and z ∼ 0.5 disks, spiral galaxies and both our spatially resolved and unresolved B–/D–dominated galaxies) in our linear regression of Eqn. <ref>, we obtaina slope of (1.26 ± 0.02) and a normalisation of (3.6 ± 0.2). Mentioned before, this should be taken with caution as there is growing evidence for a cosmic evolution of the star formation efficiency, effective radius andgas content <cit.>, implying that the combination of samples at different epochs might be mixing intrinsically different populations. §.§ Star formation efficiencyWe define the star formation efficiency as SFE = SFR/M_ H2 and the M_H2 consumption time-scale (τ_ M_H2) as SFE^-1. In the left panel of Fig. <ref>, we show the SFE vs. L_ IR for our galaxies including other galaxy samples taken from the literature. 
Two distinctive types of galaxies are evident: those galaxies that present a long-standing mode of star formation with τ_ M_H2∼1.3 Gyr; and those affected by a much faster starburst processes with τ_ M_H2∼0.2 Gyr.We identify a significant number of sources that are located in the `transition zone' between both the sequence of disks and the sequence of starbursts (left panel in Fig. <ref>), with SFEs in the range of 4.3 - 11.7 Gyr^-1, and with a median of 8.5 ± 0.1 Gyr^-1 (e.g. similar to those SMGs at z = 0.22 - 0.25 presented by ). These sources seem to suggest the co-existence of both modes of star-formation at intermediate efficiencies.We note that 39 of our sources are located in the long-lasting mode, with SFEs in the range of 0.42 - 4.32 Gyr^-1,and with amedian of 0.8 ± 0.1 Gyr^-1. We highlight the evidence for galaxies located in the`transition zone' between `normal' and `starburst'confirming a break in SFE at L_IR≈ 10^11-12 L_⊙,which could indicate the possibility of a single evolutionary path(with a large scatter) rather than a sharp bimodal behaviour inevolution (see <ref>). This is inagreement with previous findings by <cit.>. We propose an empirical best-fit parametrisation to describe the dependence of the SFE on L_IR (based in all z<0.4 samples included in left panel of Fig. <ref>):log [ SFR/M_H2 ( yr^-1)] = 0.19 × (log [L_IR/L_⊙]-ϕ)+α+βarctan[ρ (log [L_IR/L_⊙]-ϕ) ],where α=-8.26, β=-0.41, ρ=-4.84 and ϕ=11.45. Thisfunction has a scatter of σ=0.5 dex.In this work we highlight that our method to compute the molecular gas masses is directly using CO(1-0), not assuming any particular conversion for high-J transitions, facilitating the interpretation of the results. <cit.>, for example, obtain different star formation modes for normal and starburst/SMG galaxies, which are likely affected by the different methods behind the computation of both gas masses at high redshift (using higher-J CO transitions) and different α_ CO for each type of galaxy. In spite of the remaining uncertainties on the assumptions used to derive M_H2, the detection of galaxies in the `transition zone', including spectrally detected/undetected with α_ CO=4.6 (K km s^-1 pc^2)^-1 and mergers with a smaller α_ CO by a factor of six, supports the scenario of a smooth increase of SFE as a function of L_IR.This has been hinted before in galaxies at z=1.6 by <cit.>, where as they explored sources above the `main sequence' they tentatively concluded a smooth increase of SFE instead of a bimodality in star formation modes. §.§ Evolution of the molecular gas fractionIn this section, we explore the evolution of the molecular gas fraction (f_H_2) as a function of redshift. We introducethe molecular gas mass to the stellar mass ratio as: M_H_2/M_⋆ = τ_ M_H2× sSFR, thus, the gas fraction can be calculated as f_H2 = M_ H2/(M_ H2+M_⋆). We find our sample covers a wide range of values f_H_2∼0.04 - 0.71 (for those sources spectrallyCO-detected above 5σ), which are similar to those shown by the Evolution of molecular Gas in Normal Galaxies <cit.> survey in normal star-forming galaxies. Compared to local ULIRGs, where f_H_2 ranges at 3-5% (e.g. ), our gas fractions are typically higher than those, although if we only consider those M–dominated galaxies we find similar values to those seen in local ULIRGs (lying near the lower f_H_2 envelope defined for starburst galaxies by ).We identify that our f_ H_2 values show a tendency to increase as afunction of redshift (see Fig. 
<ref>) – probably productof a selection effect induced by the Herschel detectability(these are dusty galaxies). <cit.> suggest a rapidincrease of the average fraction of molecular gas with redshift,similarly to what we find in our analysis. Based on recent works <cit.>, there is growingevidence for a rapid galaxy evolution at low redshifts. Particularly, <cit.> find evidence for fast evolution of the dust mass content of galaxies up to z=0.5, a result that suggest that the molecular gas content also rapidly evolves in samples of Herschel-selectedgalaxies. Actually, using galaxies taken from this same work,<cit.> suggests that this rapid evolution goes togetherwith a significant increment of the gas density (up to z=0.2), aidedby predictions from photo-dissociation region modelling.In the right panel of Fig. <ref>, the two black curves show semi-analytic prescriptions for galaxy formation and evolution of the molecular ISM computed by <cit.>, based on an empirical star formation law to estimate the molecular gas mass. Black solid lines correspond to mass halo models of M_ h = 10^11 and 10^12 M_⊙h^-1, from thinnest to thickest, respectively, that trace our B– and D–dominated galaxies.These models suggest that molecular gas mass content andSFR densities increase as a function of redshift, in roughagreement with what we see in our B– and D–dominated galaxies. On the other hand, the M–dominated galaxies are apparently associated to more massive dark matter halos of ∼10^12M_⊙.§ CONCLUSIONSIn this paper, we present the VALES survey – one of thelargest samples of CO detected normal star-forming galaxiesup to z=0.35. We use the ALMA telescope to estimate the molecular gas content via CO(1–0) emission in a sample of 67 dusty star-forming galaxies. Sources are bright far-IR emitters (S_ 160μ m≥100mJy; L_ IR≈10^10-12 M_⊙) selected from the equatorial fields of the H-ATLAS survey (with SFRs in the range of 1.4-94.2 M_⊙yr^-1). We have spectroscopically detected 49 galaxies (72 % of the sample) with a >5σ CO peak line significance and 12 others are detected in collapsed spectra at low signal to noise. We find that 21 galaxies are spatially resolved in CO (with physical sizes in the range of 3.7-35.1 kpc, allowing a multi-wavelength exploration over a wide parameter space. We summarise our main results as follows: * Based on a visual inspection to the optical/near-IR photometryof the 49 spectrally CO-detected galaxies, we classify 36% as being dominated by a (B)ulge morphology, 53% as a (D)isk morphology, and 11% show evidence for a (M)erger event or interaction.We spatially resolve 21 galaxies which on average show optical-to-CO size ratios of∼1.6±0.5, hence the molecular gas is more concentrated towards the central regions than the stellar component. * Our sample explores the L'_ CO luminosity range of0.3×10^10 K km s^-1 pc^2 expanding theparameter space to fainter values than previous relevant COsurveys at similar redshifts. Aided by the morphologicalclassification (assuming standard α_ CO conversion factorsfor disks and mergers), we estimate a range of M_ H_2= 10^8.9-10.9 M_⊙ for Bulge- andDisk-dominated galaxies while 10^9.3-9.8 M_⊙ forMerger-dominated galaxies. * We explore the Schmidt-Kennicutt relation usingvalues for global Σ_ SFR and Σ_ Mgas derived from a combination of CO and optical radii.Our sample perfectly complements the parameter space that joins both, local and high-z `normal' galaxy samples. 
We find a best linear fit with a power-law slope of 1.27±0.05 and 1.16±0.05 when using M_H2 and M_H2+H i, respectively. * The median SFE of our sample is 8.5 Gyr^-1 (with values in the range of 0.4-11.7 Gyr^-1). Even though most of our galaxies follow a long-standing mode of star-formation activity, we provide evidence for a population with efficiencies in the `intermediate valley' between normal star formation in disks and more rapid/violent starburst episodes. Within some galaxies there may be a mixture of star-formation modes occurring at the same time. We propose the existence of a continuous transition for the star formation efficiencies as a function of far-IR luminosities. * We estimate the molecular gas fraction, finding values in the range of f_H_2 = 0.06 - 0.34. Our observations suggest a strong increment of the gas fraction as a function of redshift (up to z=0.35), faster than semi-analytical model predictions. This rapid evolution might be affected by the selection criteria, as we are selecting Herschel-detected galaxies with preferentially high dust content. To conclude, we note that one of the main uncertainties in this work is produced by the different CO conversion factors between CO luminosity and molecular gas mass, which undoubtedly impact our estimates. Two of the most evident drivers of these uncertainties are the dynamical state of the galaxies and the metallicity. We are putting special emphasis on tackling the uncertainty on the molecular gas mass estimates using: dynamical modelling of resolved galaxies (Molina et al. in prep), the physical conditions of the interstellar molecular gas within them (), and the calibration between the dust continuum luminosity and interstellar gas content <cit.>.
§ ACKNOWLEDGEMENTS EI and TMH acknowledge CONICYT/ALMA funding Program in Astronomy/PCI Project N^∘:31140020. M.A. acknowledges partial support from FONDECYT through grant 1140099. This paper makes use of the following ALMA data: ADS/JAO.ALMA 2012.1.01080.S & ADS/JAO.ALMA 2013.1.00530.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The Herschel-ATLAS is a project with Herschel, which is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. The H-ATLAS website is <http://www.h-atlas.org/>. PACS has been developed by a consortium of institutes led by MPE (Germany) and including UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France); MPIA (Germany); INAF-IFSI/OAA/OAP/OAT, LENS, SISSA (Italy); IAC (Spain). This development has been supported by the funding agencies BMVIT (Austria), ESA-PRODEX (Belgium), CEA/CNES (France), DLR (Germany), ASI/INAF (Italy), and CICYT/MCYT (Spain). SPIRE has been developed by a consortium of institutes led by Cardiff University (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); and Caltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC, UKSA (UK); and NASA (USA).
GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is <http://www.gama-survey.org/>. D.R. acknowledges support from the National Science Foundation under grant number AST-1614213 to Cornell University. H.D. acknowledges financial support from the Spanish Ministry of Economy and Competitiveness (MINECO) under the 2014 Ramón y Cajal program MINECO RYC-2014-15686. LD, SJM and RJI acknowledge support from European Research Council Advanced Investigator Grant COSMICISM, 321302; SJM and LD are also supported by the European Research Council Consolidator Grant CosmicDust (ERC-2014-CoG-647939, PI H L Gomez). | http://arxiv.org/abs/1705.09826v2 | {
"authors": [
"V. Villanueva",
"E. Ibar",
"T. M. Hughes",
"M. A. Lara-López",
"L. Dunne",
"S. Eales",
"R. J. Ivison",
"M. Aravena",
"M. Baes",
"N. Bourne",
"P. Cassata",
"A. Cooray",
"H. Dannerbauer",
"L. J. M. Davies",
"S. P. Driver",
"S. Dye",
"C. Furlanetto",
"R. Herrera-Camus",
"S. J. Maddox",
"M. J. Michalowski",
"J. Molina",
"D. Riechers",
"A. E. Sansom",
"M. W. L. Smith",
"G. Rodighiero",
"E. Valiante",
"P. van der Werf"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20170527145506",
"title": "VALES: I. The molecular gas content in star-forming dusty H-ATLAS galaxies up to z=0.35"
} |
Department of Computer Science, University College London, WC1E 6BT, London, U.K. Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Straße 3, 57068 Siegen, Germany Department of Computer Science, University College London, WC1E 6BT, London, U.K. Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China Grid states form a discrete set of mixed quantum states that can be described by graphs. We characterize the entanglement properties of these states and provide methods to evaluate entanglement criteria for grid states in a graphical way. With these ideas we find bound entangled grid states for two-particle systems of any dimension and multiparticle grid states that provide examples for the different aspects of genuine multiparticle entanglement. Our findings suggest that entanglement theory for grid states, although being a discrete set, has already a complexity similar to the one for general states. 03.65.Ud, 03.67.Mn Entanglement properties of quantum grid states Simone Severini
§ INTRODUCTION Entanglement is a fundamental phenomenon of quantum theory <cit.> and is the key to the successes in the steadily maturing field of quantum technologies <cit.>. A rich mathematical theory of entanglement has been developed in recent years <cit.>, with one of its main aims to devise techniques to detect and quantify the entanglement present in a physical system. This direction has seen some success, with a number of results being applied in a laboratory setting <cit.>. In general however, testing if a density matrix describes a state that is entangled or separable is highly non-trivial: so far, no necessary and sufficient criterion for separability has been discovered that is efficiently computable. In a perhaps discouraging development, the problem of deciding if an arbitrary density matrix is separable turns out to be NP-hard <cit.>, suggesting that an efficient “silver bullet” entanglement criterion is permanently out of reach. In this work we propose the study of a simple family of quantum states called grid states as a toy model for mixed state entanglement. Grid states are represented using a combinatorial object called a grid-labelled graph <cit.>, and their entanglement properties can be determined by considering the structure of this graph. We show that despite their deceptively simple definition, grid states can exhibit a rich variety of entanglement properties. In particular, we demonstrate that there are bipartite bound entangled grid states in all dimensions. We also extend the grid state framework to multiparticle states, explicitly constructing a 3× 3× 3 grid state that is positive under partial transpose (PPT) over all bipartitions, but is genuinely multipartite entangled. This provides an example of a state which cannot be characterized by the method of PPT mixtures <cit.>, which is the strongest criterion for multiparticle entanglement so far. Note that the fact that many NP-complete problems are about graphs <cit.> gives further motivation for the study of grid states: it may be possible to prove that determining separability of grid states is NP-hard by reduction from a graph problem, e.g., Subgraph Isomorphism. Such a result would imply the known NP-hardness result for the more general problem. A proof of NP-hardness for these states would strengthen the complexity lower bound for the separability problem in its full generality.
The fact that we are able to demonstrate non-trivial entanglement structure in grid statesgives weight to the idea that this problem is computationally intractable. In this way, our approachalso initiates a new strategy in studying entanglement. So far, many works have been concerned with the study of certain families of quantum states (e.g., with symmetries), where the separability problem is simplified or can be solved <cit.>. Contrary to that, our strategy is to identify a small and discrete family of states, for which the separability problemhas a similar complexity as in the general case. We believe that this can be a way to shed newlight on open problems in quantum information theory. We add that recently the separabilityproblem for so-called Dicke-diagonal states has been shown to be NP-hard <cit.>, butthis is a continuous family of states. Moreover, due to the high symmetry, not all possibletypes of entanglement are present in this type of states, e.g, a multiparticle state that isseparable for one bipartition is already fully separable <cit.>.§ GRID STATESWe say that a quantum state is an m× n grid state if it is the uniformmixture of pure states of the form (|ij⟩-|kl⟩)/√(2), with 0≤ i,k<mand 0≤ j,l<n. Such a mixed state can be represented on a graph with mn verticesarranged in an m× n grid by associating each state |ij⟩-|kl⟩/√(2)with an edge between vertices (i,j) and (k,l). We call such a graph a grid-labelledgraph for the implicit Cartesian labelling of the vertices. For example, Fig. <ref>shows a grid-labelled graph that corresponds to the uniform mixture over the Bell states|Ψ^-⟩ and |Φ^-⟩. In general, if G is a grid-labelled graph, we denoteits corresponding grid state density matrix by ρ(G). It is straightforward to see that two different grid-labelled graphs lead to two different quantum states. When context allows, we refer togrid-labelled graphs simply as graphs. In Ref. <cit.> it is shown that for any grid-labelled graph G, the density matrixρ(G) corresponds to the Laplacian matrix of G, normalised to have trace 1.The Laplacian matrix of a graph with k vertices is an k× k matrix L(G) where eachdiagonal entry [L(G)]_ii is equal to the degree of vertex i; each off diagonal entry[L(G)]_ij is -1 if there is an edge between vertices i and j, and 0 otherwise.Considering the Laplacian matrix of a graph as the density matrix of a quantum stateis an approach initiated by Braunstein et al. <cit.>, and further developedin Refs. <cit.>, where it is shown that entanglement propertiesof the state are manifested in the structure of the corresponding graph. A drawback of theoriginal approach is that the entanglement properties of the state change when the vertices arelabelled in a different way. The study of grid-labelled graphs by the authors inRef. <cit.> remedies this issue by imposing the Cartesian vertex labelling.We also add that mixtures of Bell-type states were used in Ref. <cit.>to construct bound entangled states, but this employed a different strategy.The fact that the density matrix of a grid state corresponds to the Laplacian of thecorresponding graph means that a number of results from the already establishedliterature on graph Laplacian states can be brought to bear on grid states.In particular, the entanglement criterion of the positivity of the partialtranspose (PPT) <cit.> can be formulated in terms of grid states.For a given graph G, positivity of ρ(G)^T_B can be determined by consideringanother graph G^Γ. 
This is constructed from G by flipping the edges in each rectangle: an edge {(i,j),(k,l)} belongs to G^Γ if and only if {(i,l),(k,j)}belongs to G (see Fig. <ref> for an example).By definition, separable statesare of the form ρ_AB= ∑_k p_k ρ^(k)_A ⊗ρ^(k)_B,where the p_k form a probability distribution, and if a state is not separable then it is entangled. The PPT criterion statesthat for a separable state the partial transposition has no negative eigenvalues,ρ(G)^T_B≥ 0. For grid states, it can be shown that ρ(G) is PPTiff the degree of (i,j) in G is equal to the degree of (i,j) in G^Γ,for all vertices (i,j) <cit.>.Hence, if taking the partial transpose of G does not preserve the degrees ofthe vertices then ρ(G) is entangled. Naturally, this “degree criterion”is necessary and sufficient for separability in 2× 2 and 2× 3 grid states.Remarkably, it is also necessary and sufficient for graph Laplacian states inℂ^2⊗ℂ^q <cit.>, and so this is also the case for2× q grid states. It is easily verified that the grid states illustrated in Fig. <ref> (a) and in Fig. <ref> (a) satisfy the degreecriterion and are therefore positive under partial transpose (PPT). However,it can be verified by the computable cross norm or realignment criterion <cit.>that the statein Fig. <ref> (a) is entangled. Such a state, constructed inRef. <cit.> and referred to as a cross-hatch state in <cit.> is therefore bound entangled. Bound entangled states are at the heart of many problems in quantum information theory <cit.>, therefore it is highly desirableto identify such states in the bipartite or multipartite setting. § THE RANGE CRITERION AND GRAPH SURGERYOur main tool for showing that grid states have a rich entanglementstructure is a graphical way to evaluate the range criterion <cit.>. This criterion is one of the main criteria to detect bound entanglement inthe bipartite and multipartite setting. The criterion is stated like so:if a bipartite density operator ρ_AB is separable then there existsa set of product vectors P={|a_1⟩ |b_1⟩,…, |a_r⟩ |b_r⟩}such that P spans the range of ρ_AB and, at the same time,{|a_1⟩ |b_1^*⟩,…, |a_r⟩ |b_r^*⟩} spans therange of ρ_AB^T_B. Note, however, that it is in general very difficult to determine all sets of product vectors that span a given subspace. The range criterioncan be immediately generalised to the multipartite case <cit.>.We use the following corollary: if a rank r density operator has less than r productvectors in its range then it is entangled. In order to utilize this, we demonstrate atechnique we call graph surgery as a way of determining properties of therange of a grid state. The surgery procedure removes edges from a grid-labelledgraph, and it can be shown that the graph that results has the same number of productvectors in its range as the original graph. Surgery can be applied repeatedly,often producing grid states whose ranges are easily determined.In particular, let G be a graph with an isolated vertex (i,j), meaning that this vertex has no neighbours and so degree 0. Then we obtain another graph G_(i,j)^Rby performing the procedure row surgery on the isolated vertex. This consists of two steps:Row Surgery:(1) CUT: Remove all edges attached to vertices in row i. (2) STITCH: For every pair of vertices not on row i: if therewas a path between them that has been destroyed by the CUT step,then add an edge between them.Note that the STITCH step is not unique: any edge can be added, providedit reconnects the components that have been disconnected. 
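The CUT/STITCH procedure is mechanical enough to be implemented directly. The Python sketch below is one possible reading of it, under the assumption that a grid-labelled graph is stored as a set of frozenset edges over (row, column) vertex pairs; the particular STITCH choice made here, reconnecting the split pieces of each original component pairwise, is only one of the admissible options, and all function names are ours rather than the paper's.

```python
from itertools import product, combinations

def components(vertices, edges):
    """Connected components of a graph; isolated vertices form their own components."""
    adj = {v: set() for v in vertices}
    for e in edges:
        a, b = tuple(e)
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(frozenset(comp))
    return comps

def row_surgery(m, n, edges, row):
    """Row surgery at an isolated vertex on row `row` (the column index is irrelevant):
    CUT all edges touching that row, then STITCH previously connected pieces back together."""
    verts = set(product(range(m), range(n)))
    before = components(verts, edges)
    cut = {e for e in edges if all(v[0] != row for v in e)}          # CUT step
    comp_of = {v: c for c in components(verts, cut) for v in c}
    stitched = set(cut)
    for comp in before:                                              # STITCH step
        off_row = [v for v in comp if v[0] != row]
        for a, b in combinations(off_row, 2):
            if comp_of[a] != comp_of[b]:                             # path destroyed by CUT
                stitched.add(frozenset({a, b}))
                comp_of = {v: c for c in components(verts, stitched) for v in c}
    return stitched

def column_surgery(m, n, edges, col):
    """Column surgery = row surgery on the transposed grid."""
    flip = lambda es: {frozenset({(j, i) for (i, j) in e}) for e in es}
    return flip(row_surgery(n, m, flip(edges), col))

# Illustrative call on an assumed 3x3 graph with a single diagonal edge.
edges = {frozenset({(0, 0), (1, 1)})}
print(sorted(map(sorted, row_surgery(3, 3, edges, 2))))
```

Iterating these two operations at the relevant isolated vertices, and branching over the row and column variants, is what the arguments that follow carry out by hand.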
As we shall seelater, it does not matter which edge (or edges) are added. In the same way,we can define column surgery, which produces G_(i,j)^C in an analogous mannerbut acting on column j. Examples of these operations are shown in Figs. <ref> and <ref>. In Fig. <ref>we demonstrate the result of performing row surgery (b) and column surgery (c) on vertex(1,1) of graph (a). Since the CUT step does not disconnect any connected components ofthe graph in this case, the STITCH step is not required. In Fig. <ref> wedemonstrate a more complicated example of surgery. The row surgery on vertex (1,4) removesthe pre-existing path between vertices (0,4) and (3,1). This can be rectified byadding an edge between (3,1) and (0,4). Our results follow from the following observation, which is formalised and proved inthe Appendix. Observation 1. Any product vector |α⟩|β⟩ in the range of ρ(G) mustbe in the range of ρ(G_(i,j)^R) or the range of ρ(G_(i,j)^C),for any isolated vertex (i,j) of G.This means that we can iterate row surgery and column surgery and simplify the graph,this can easily be done with the help of a computer <cit.>. During this iteration, it is clear that not all isolated vertices yield new informationabout the range of a grid state when surgery is performed. Consider a row i whereevery vertex is isolated [e.g., the second row in Fig. <ref>(b)].Then, performing row surgery on any vertex (i,j) [e.g., on vertex (1,0) inFig. <ref>(b)] on that row has no effect and we obtain the trivialstatement that a product vector in the range of ρ(G) is in the range of ρ(G) or ρ(G^C_(i,j)).This is also the case for isolated vertices on an isolated column. So, one should focus on isolated vertices which give new information and we therefore call isolated vertices that are on a non-isolated row and column viable [e.g.,vertex (0,0) in Fig. <ref>(b)]. Starting from a viable vertex,surgeries can be iterated until there are no longer any viable vertices in the graph,at which point the range of the graph can sometimes be easily determined.§ BIPARTITE ENTANGLEMENTNow we demonstrate how the surgery procedure can be applied in conjunctionwith Observation 1 to determine families of bipartite bound entangled states.We first demonstrate that the grid state corresponding to the cross-hatch graphin Fig. <ref>(a) is entangled. The isolated vertex in the middleis viable. Applying row surgery on this middle vertex yields graph (b), whilecolumn surgery gives graph (c). Due to the rotational symmetry, we consider onlythe former, which has two viable vertices, (0,0) and (2,2).Starting with (0,0), row surgery eliminates both edges giving the empty graph,and column surgery eliminates one, leaving the graph with a single diagonal edge.Another surgery can eliminate this edge, giving the empty graph. It is clear thatany sequence of surgeries starting at(2,2) has a similar outcome. So, allsequences of surgeries will terminate in the empty graph. Observation 1 tells us that any product vector in the range of ρ(G) must bein the range of one of these empty graphs, which is not possible because theyhave zero-dimensional ranges. So, there are no product vectors in the range and the state ρ(G) is entangled. Since it is PPT, it is bound entangled.The cross-hatch structure of the graph can be generalised to arbitrary grid sizes.It is easily checked that these graphs all are PPT. 
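The PPT check invoked here reduces, by the degree criterion, to comparing vertex degrees of G and G^Γ. The sketch below builds the trace-normalised Laplacian density matrix of a grid-labelled graph, forms G^Γ by the edge-flipping rule, and runs the degree test; since the exact edge sets of the figures are not reproduced in the text, it is exercised on the two-edge 2×2 graph used earlier to introduce grid states, and all names are our own.

```python
import numpy as np
from itertools import product

def laplacian_density(m, n, edges):
    """Trace-normalised Laplacian of an m x n grid-labelled graph.
    Vertex (i, j) is identified with the basis state |ij>, i.e., index i*n + j."""
    d = m * n
    L = np.zeros((d, d))
    for (i, j), (k, l) in edges:
        a, b = i * n + j, k * n + l
        L[a, a] += 1
        L[b, b] += 1
        L[a, b] -= 1
        L[b, a] -= 1
    return L / np.trace(L)

def partial_transpose_graph(edges):
    """Edge-flipping rule: {(i,j),(k,l)} is in G^Gamma iff {(i,l),(k,j)} is in G."""
    return {frozenset({(i, l), (k, j)}) for (i, j), (k, l) in edges}

def degree_map(m, n, edges):
    deg = {v: 0 for v in product(range(m), range(n))}
    for e in edges:
        for v in e:
            deg[v] += 1
    return deg

def degree_criterion_holds(m, n, edges):
    """PPT test for grid states: degrees of G and G^Gamma must coincide."""
    g = {frozenset(e) for e in edges}
    return degree_map(m, n, g) == degree_map(m, n, partial_transpose_graph(g))

# Uniform mixture of (|00>-|11>)/sqrt(2) and (|01>-|10>)/sqrt(2):
edges = [((0, 0), (1, 1)), ((0, 1), (1, 0))]
rho = laplacian_density(2, 2, edges)
print("PPT by degree criterion:", degree_criterion_holds(2, 2, edges))
# Cross-check for this 2x2 example: smallest eigenvalue of rho^T_B should be >= 0.
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
print("min eigenvalue of rho^T_B:", np.linalg.eigvalsh(rho_pt).min())
```

For this 2×2 example G^Γ coincides with G, so the degree criterion, which is necessary and sufficient on 2×q grids, certifies separability; for the cross-hatch graphs the same test confirms the PPT property claimed above.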
It is clear from similar reasoning to the 3× 3 case that for all grid sizes all sequences of surgeries terminate with empty graphs: every subgraph of a cross-hatch graph has at least one viable vertex. So all cross-hatch graphs correspond to bound entangled states. The second bipartite example is the square-loop graph G, see Fig. <ref>(a). Performing row surgery on the viable isolated vertex (2,2) gives us graph (b), which has two viable isolated vertices: (1,4) and (4,1). Row surgery on (1,4) yields graph (c). We ask the reader to verify that the surgeries can be iterated in a similar way, and that any sequence of surgeries leads to one of two graphs: the 5× 5 empty graph, or the `X'-shaped graph (d). Since the graph G has 25 vertices and 11 connected components, the grid state is of rank 25-11=14, see Lemma 2 in the Appendix. If ρ(G) were separable then its range must have a product basis. But the `X'-shaped grid state has rank 25-23=2 and the empty graph is of rank zero. So, these graphs do not contain enough product vectors. Finally, note that the cross-hatch states are edge states <cit.>, as there are no product vectors in their range. Edge states are highly entangled bound entangled states, lying at the border between PPT and NPT states. Further, all grid states are Schmidt rank two: by definition they are equal to uniform mixtures of pure states of the form (|ij⟩-|kl⟩)/√(2). § MULTIPARTICLE ENTANGLEMENT Interestingly, the constructions can be generalized to the multiparticle case, yielding further examples of quantum states with surprising entanglement properties. Let us consider graphs on an l× m× n grid, which correspond to tripartite grid states ρ_ABC∈ℂ^l⊗ℂ^m⊗ℂ^n. The cross-hatch construction can be generalised to the 3× 3× 3 grid, which is illustrated in Fig. <ref>. Analogously to the bipartite case, a link between two vertices ijk and rst corresponds to the state (|ijk⟩-|rst⟩)/√(2). First, one can see by direct inspection that the graph has a symmetry, leading to a permutationally invariant state. Then, one can directly check that the state is PPT for any of the possible bipartitions A|BC, B|AC, and C|AB. In addition, one can apply the iteration of surgeries for the bipartitions. After nine iterations, one arrives at an empty graph, proving that there are no product vectors of the type |ϕ⟩_A ⊗|ψ⟩_BC (or similar vectors for other bipartitions) in the range of ρ(G) <cit.>. This implies that the state is genuinely multiparticle entangled <cit.>. So, this state is an example where the entanglement criterion of PPT mixtures fails <cit.>. This criterion is the strongest criterion for multiparticle entanglement, and so far only three examples of states are known which cannot be detected by it <cit.>. This demonstrates that even weak and rare forms of multiparticle entanglement can be found among grid states. It is clear how to generalise this cross-hatch structure to the l× l × l case. Indeed, it seems likely that such states exist in the N-partite case, and can be constructed by connecting faces of the N-dimensional hypercube. § CONCLUSION We have shown that grid states can be highly non-trivial entangled states. Based on graphical ways to evaluate the PPT criterion and the range criterion, we have demonstrated that for all bipartite dimensions there exist bound entangled grid states. We have generalized grid states to the multiparticle case, and again it turned out that these states can have complicated entanglement properties.
This makes grid states a valuable test-bed for variousentanglement criteria.The diversity of the states we can generate with this formalism can be interpreted to meanthat testing separability of even this restricted class of states may be NP-hard. Furthermore,perhaps a graph theoretic reduction could be used in a hardness proof, potentially simplifyingthe argument of Gurvits <cit.>. On the other hand, the elegance of the graphical description makes the formalism an attractive tool for the study of quantumentanglement and the interplaybetween different entanglement criteria.For further work, it would be highly desirable to derive algorithms to prove separabilityof grid states in a graphical language. This is needed to analyze the algorithmic complexityof the separability problem for grid states further. Second, it would be useful if one canidentify graphical transformations that keep the entanglement properties invariant, as theyinduce only local unitary transformations of the state. Similar rules are known for the familiesof cluster states and graph states <cit.>. Finally, natural generalisationsof the grid state concept would include hypergraphs and weighted graphs. We thank Felix Huber and Danial Dervovic for helpful discussions. This work has been supported by the UK EPSRC (EP/L015242/1), the ERC (Consolidator Grant 683107/TempoQ), and the DFG. JL thanks the Theoretical QuantumOptics group at Universität Siegen for their hospitality.§ APPENDIXOur results follow by application of Observation 1, which we prove here. We firstrestate it in a more formal manner. In what follows, we denote the kernel and rangeof a density operator ρ by K(ρ) and R(ρ) respectively.Observation 1'. Let G be a grid-labelled graph with isolated vertex (i,j)∈ V(G). For all productvectors |α⟩ |β⟩∈ℂ^m⊗ℂ^n, if|α⟩ |β⟩∈ R[ρ(G)] then|α⟩ |β⟩∈ R[ρ(G_(i,j)^R)] or |α⟩ |β⟩∈ R[ρ(G_(i,j)^C)].Before proving this result, we need two lemmata. The following lemma provides a characterization of the range of a grid state. For its formulation, we denoteby C(G) the set of connected components of a graph. Here, also disconnected vertices are considered to constitute a connected component. For example, the cross hatch graph in Fig. 2(a) in the main text has five connected components,|C(G)|=5. We also associate with every grid-labelled graph G the state|G⟩=∑_(i,j)∈ V^'(G)|ij⟩, where V^'(G)={(i,j)∈ V(G) : d[(i,j)]>0} is the set of vertices of G with non-zero degree. This construction can also be applied to a single connected component S ∈ C(G).Lemma 2. Let G be an m× n grid-labelled graph, and let C(G) denote the set ofits connected components. Then |ψ⟩∈ R(ρ(G)) if and only if|ψ⟩⊥|S⟩ for all S ∈ C(G). This implies that for any m× n vertex grid-labelled graph G, the dimensionof the kernel of ρ(G) is equal to the number of connected components |C(G)|.Therefore, the rank of ρ(G) is equal to m × n-|C(G)|.Proof of Lemma 2. For all graphs G, ρ(G) is Hermitean so |ψ⟩∈ R(ρ(G)) ifand only if |ψ⟩⊥ K[ρ(G)]. For any connected component S∈ C(G) with k vertices,ρ(S)|S⟩=0, so |S⟩∈ K[ρ(S)]. Since S isconnected, it has a spanning tree T with k-1 edges. The edgesof T correspond to a set of linearly independent vectors(|ij⟩-|kl⟩)/√(2) in the range of R[ρ(S)],so dim(K[ρ(S)])≤ k-(k-1)=1. Therefore,K[ρ(S)]=span_ℂ(|S⟩). 
The density operator ρ(G) can be decomposed in terms ofC(G), ρ(G) =1/2|E|∑_{(i,j),(k,l)}∈ E(G)(|ij⟩-|kl⟩)(⟨ ij|- ⟨ kl|)=1/2|E|∑_S∈ C(G)2|E(S)|ρ(S)=∑_S∈ C(G)|E(S)|/|E(G)|ρ(S).By definition the components S have no edges in common, so|ψ⟩⊥ K[ρ(G)] if and only if|ψ⟩⊥ K[ρ(S)]=span_ℂ(|S⟩)for all S∈ C(G).To proceed we will need to define the vectors |G_i,*⟩= ∑_(k,l)∈ V(G) k≠ i|kl⟩ and |G_*,j⟩= ∑_(k,l)∈ V(G) l≠ j|kl⟩.for any subgraph G of a grid-labelled graph. Then we have:Lemma 3. Let G be a grid-labelled graph with m× n vertices. If a state |ψ⟩ is orthogonal to all states in * {|S_i,*⟩ : S∈ C(G)} and {|i,1⟩,…,|i,n⟩} then |ψ⟩∈ R[ρ(G_(i,j)^R)]; * {|S_*,j⟩ : S∈ C(G)} and {|1,j⟩,…,|m,j⟩} then |ψ⟩∈ R[ρ(G_(i,j)^C)]. Proof of Lemma 3. It is clear that G_(i,j)^R can be obtained by considering the effect of surgery on each connected component of G separately. For such a component S ∈ C(G), we have that K[ρ(S)]=∑_(k,l)∈ V(S)|kl⟩. Performing the CUT step of surgery on row i of S removes all edges to vertices in that row, which introduces new isolated vertices. The STITCH step then ensures that the remnants of the graph remain connected. Therefore, if a state |ψ⟩ is orthogonal to ∑_(k,l)∈ V(S)|kl⟩ for k≠ i, and is orthogonal to {|i,q⟩} for all of the new isolated vertices (i,q), then it is in the range of ρ(S_(i,j)^R) by Lemma 2. It is clear that if |ψ⟩ is orthogonal to each of the states |S_i,*⟩ for S∈ C(G), as well as all the isolated vertex states |i,1⟩,…, |i,m⟩ introduced by performing CUT on each component then it is in the range of ρ(G_(i,j)^R) by Lemma 2. By similar reasoning, the same is true for the graph obtained by column surgery. We may now prove the Observation.Proof of Observation 1'. Since (i,j) is isolated then |i,j⟩∈ K[ρ(G)].Therefore, if |α⟩|β⟩∈ R(ρ(G)) theneither |α⟩⊥ |i⟩ or |β⟩⊥ |j⟩.Suppose the former is the case. Then clearly |α⟩|β⟩is orthogonal to all |i,1⟩,…,|i,m⟩. Further, we know thatfor all S∈ C(G), |α⟩|β⟩ is orthogonal to |S⟩,and so must be orthogonal to |S_i,*⟩. Therefore, by Lemma 3 it mustbe in the range of ρ(G_(i,j)^R). If we instead assume that|β⟩⊥ |j⟩ then by similar reasoning,|α⟩|β⟩∈ R[ρ(G_(i,j)^C)].25 bell J. S. Bell, Physics 1, 3, 195 (1964). werner R. F. Werner, Phys. Rev. A 40, 4277 (1989). metrology V. Giovannetti, S. Lloyd, and L. Maccone, Nat. Photon. 5, 222 (2011). msmtbased H. J. Briegel, D. E. Browne, W. Dür, R. Raussendorf, and M. Van den Nest, Nat. Phys. 5, 19 (2009). horodecki R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Rev. Mod. Phys. 81, 865 (2009). detecting O. Gühne and G. Tóth, Phys. Rep. 474, 1 (2009). gurvits L. Gurvits, Proc. of the 35th ACM Symp. on Theory of Comp. (STOC), pp. 10-19, (2003). sevag S. Gharibian, Quantum Inf. Comput. 10, 343 (2010). ioannou L. M. Ioannou, Quantum Inf. Comput. 7, 4 (2007). combent J. Lockhart and S. Severini, arXiv:1605.03564. pptmixerB. Jungnitsch, T. Moroder, and O. Gühne,Phys. Rev. Lett. 106, 190502 (2011). gj M. R. Garey and D. S. Johnson, “Computers and Intractability: A Guide to the Theory of NP-Completeness”, W. H. Freeman & Co. New York, NY, USA (1979). sym1 K. G. H. Vollbrecht and R. F. Werner, Phys. Rev. A 64, 062307 (2001). sym2 C. Eltschka and J. Siewert, Phys. Rev. Lett. 108, 020502 (2012). sym3 L.E. Buchholz, T. Moroder, and O. Gühne,Ann. Phys. (Berlin) 528, 278 (2016). yu N. Yu, Phys. Rev. A 94, 060101(R) (2016).turaJ. Tura, A. Aloy, R. Quesada, M. Lewenstein, and A. Sanpera,Quantum 2, 45 (2018).eckertK. Eckert, J. Schliemann, D. Bruß,and M. Lewenstein,Ann. Phys. 
(New York) 299, 88 (2002).ichikawaT. Ichikawa, T. Sasaki, I. Tsutsui, and N. Yonezawa, Phys. Rev. A 78, 052105 (2008). braunstein1 S. L. Braunstein, S. Ghosh, and S. Severini, Ann. Comb. 10, 291 (2006). braunstein2 S. L. Braunstein, S. Ghosh, T. Mansour, S. Severini, and R. C. Wilson, Phys. Rev. A 73, 012320 (2006). hildebrand R. Hildebrand, S. Mancini, and S. Severini, Math. Struct. in Comp. Science 8, 205 (2008). wu C. W. Wu, Phys. Lett. A 351, 18 (2006). piani M. Piani, Phys. Rev. A 73, 012345 (2006). peres A. Peres, Phys. Rev. Lett. 77, 1413 (1996). horodeckisep M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A 223,1 (1996). rudolph O. Rudolph, Quantum Inf. Process. 4, 219 (2005). mr K. Chen, L. Wu, Quantum Inf. Comput. 3, 193 (2003). be1 K. Horodecki, M. Horodecki, P. Horodecki, and J. Oppenheim,Phys. Rev. Lett. 94, 160502 (2005). be2A. Acín, J. I. Cirac, and Ll. Masanes,Phys. Rev. Lett. 92, 107903 (2004). range P. Horodecki, Phys. Lett. A 232, 333 (1997). mathematica A Mathematica file for performing iterations of surgery operations is available upon request. edgestates M. Lewenstein, B. Kraus, J. I. Cirac, andP. Horodecki,Phys. Rev. A 62, 052310 (2000).pianimoraM. Piani and C. Mora,Phys. Rev. A 75, 012305 (2007).hubersenguptaM. Huber and R. Sengupta,Phys. Rev. Lett. 113, 100501 (2014). korea K.-C. Ha and S.-H. Kye,Phys. Rev. A 93, 032315 (2016).heinM. Hein, J. Eisert, and H.J. Briegel,Phys. Rev. A 69, 062311 (2004).tsimakuridzeN. Tsimakuridze and O. Gühne, J. Phys. A: Math. Theor. 50, 195302 (2017). | http://arxiv.org/abs/1705.09261v2 | {
"authors": [
"Joshua Lockhart",
"Otfried Gühne",
"Simone Severini"
],
"categories": [
"quant-ph",
"math.CO"
],
"primary_category": "quant-ph",
"published": "20170525170407",
"title": "Entanglement properties of quantum grid states"
} |
| http://arxiv.org/abs/1705.09347v1 | {
"authors": [
"Yoram Zarai",
"Michael Margaliot",
"Tamir Tuller"
],
"categories": [
"q-bio.SC"
],
"primary_category": "q-bio.SC",
"published": "20170525201250",
"title": "Ribosome Flow Model with Extended Objects"
} |
| http://arxiv.org/abs/1705.09687v1 | {
"authors": [
"Ovidiu Costin",
"Gerald V. Dunne"
],
"categories": [
"hep-th",
"cond-mat.other",
"math-ph",
"math.MP"
],
"primary_category": "hep-th",
"published": "20170526191855",
"title": "Convergence from Divergence"
} |
Stochastic Assume-Guarantee Contracts for Cyber-Physical System Design UnderProbabilistic Requirements Jiwei Li^1, Pierluigi Nuzzo^2, Alberto Sangiovanni-Vincentelli^3, Yugeng Xi^1, Dewei Li^1 ^1 Department of Automation, Shanghai Jiao Tong University. Email: [email protected], {ygxi,dwli}@sjtu.edu.cn^2 Department of Electrical Engineering, University of Southern California, Los Angeles. Email: [email protected] ^3 EECS Department, University of California, Berkeley. Email: [email protected] ===================================================================================================================================================================================================================================================================================================================================================================================================================================== We develop an assume-guarantee contract framework for the design of cyber-physical systems, modeled as closed-loop control systems, under probabilistic requirements. We use a variant of signal temporal logic, namely, Stochastic Signal Temporal Logic (StSTL) to specify system behaviors as well as contract assumptions and guarantees, thus enabling automatic reasoning about requirements of stochastic systems. Given a stochastic linear system representation and a set of requirements captured by bounded StSTL contracts, we propose algorithms that can check contract compatibility, consistency, and refinement, and generate a controller to guarantee that a contract is satisfied, following a stochastic model predictive control approach. Our algorithms leverage encodings of the verification and control synthesis tasks into mixed integer optimization problems, and conservative approximations of probabilistic constraints that produce both sound and tractable problem formulations. We illustrate the effectiveness of our approach on a few examples, including the design of embedded controllers for aircraft power distribution networks. § INTRODUCTION Large and complex Cyber-Physical Systems (CPSs), such as intelligent buildings, transportation, and energy systems, cannot be designed in a monolithic manner. Instead, designers use hierarchical and compositional methods, which allow assembling a large and complex system from smaller and simpler components, such as pre-defined library blocks. Contract-based design is emerging as a unifying formal compositional paradigm for CPS design and has been demonstrated on several applications <cit.>. It supports requirement engineering by providing formalisms and mechanisms for early detection of integration errors, for example, by checking compatibility between components locally, before performing expensive, global system verification tasks. However, while a number of contract and interface theories have appeared to support deterministic system models <cit.>, the development of contract frameworks for stochastic systems under probabilistic requirements is still in its infancy. Deterministic approaches fall short of accurately capturing those aspects of practical systems that are subject to variability (e.g., due to manufacturing tolerances, usage, and faults), noise, or model uncertainties. While trying to meet the specifications over the entire space of uncertain behaviors,they tend to produce worst-case designs that are overly conservative. 
Moreover, several design requirements in practical applications cannot be rigidly defined, and would be better expressed as probabilistic constraints, e.g., to formally capture that “the room temperature in a building shall be in a comfort region with a confidence level larger than 80% at any time during a day.” Providing support for reasoning about probabilistic behaviors and for the development of robust design techniques that can avoid over-design is, therefore, crucial. This need becomes increasingly more compelling as a broad number of safety-critical systems, such as autonomous vehicles, uses machine learning and statistical sensor fusion algorithms to infer information from the external world.An obstacle to the development of stochastic contract frameworks and their adoption in system design stems from the computational complexity of the main verification and synthesis tasks for stochastic systems (see, for example, <cit.>), which are needed to perform concrete computations with contracts.A few proposals toward a specification and contract theory for stochastic systemshave recently appeared, e.g., based on Interactive Markov Chains <cit.>, Constraint Markov Chains <cit.>, and Abstract Probabilistic Automata <cit.>. However, these frameworks mostly use contract representations based on automata, whichare more suitable to reason about discrete-state discrete-time system abstractions. Theytend to favor an imperative specification style, and may show poor scalability when applied to hybrid systems. A declarative specification style is often deemed as more practical for system-level requirement specification and validation, since it retains a better correspondence between informal requirements and formal statements.In this paper, we develop an A/G contract framework for automated design of CPSsmodeled as closed-loop control systems under probabilistic requirements. We aim to identify formalisms for contract representation and manipulation that effectively trade expressiveness with tractability: (i) they are rich enough to represent hybrid system behaviors using a declarative style; (ii) they are amenable to algorithms for efficient computation of contract operations and relations.We address these challenges by leveraging an extension of Signal Temporal Logic (STL) <cit.>, namely, Stochastic Signal Temporal Logic (StSTL), to support the specification of probabilistic constraints in the contract assumptions and guarantees. We show that the main verification tasks for bounded StSTL contracts on stochastic linear systems, i.e., compatibility, consistency, and refinement checking, as well as the synthesis of stochastic Model Predictive Control (MPC) strategies can all be translated into mixed integer programs (MIPs) which can be efficiently solved by state-of-the-art tools. Since probabilistic constraints on stochastic systems cannot be expressed in closed analytic form except for a small set of stochastic models <cit.>, we propose conservative approximations to provide optimization problem formulations that are both sound and tractable. We illustrate the effectiveness of our approach with a few examples, including the synthesis of controllers for an aircraft electric power distribution system. Related Work. A generic assume-guarantee (A/G) contract framework for probabilistic systems that can also capture reliability and availability properties using a declarative style has been recently proposed <cit.>. 
Our work differs from this effort, since it is not based on a probabilistic notion of contract satisfiability. In our approach, probabilistic constraints appear, instead, as predicates in the contract assumptions and guarantees. We express assumptions and guarantees using StSTL, which is an extension of STL <cit.>. STL was proposed for the specification of properties of continuous-time real-valued signals and has been previously used in CPS design <cit.>. A few probabilistic extensions of temporal logics have been proposed over the years to express properties of stochastic systems.Among these, Probabilistic Computation Tree Logic (PCTL) was introduced to expresses properties over the realizations (paths) of finite-state Markov chains and Markov decision processes <cit.> by extending the Computation Tree Logic (CTL) <cit.>. While PCTL can reason about global system executions and uncertainties about the times of occurrence of certain events,certain applications are rather concerned with capturing the uncertainty on the value of a signal at a certain time. This is the case, for instance, in the deployment of stochastic MPC schemes in different domains.By using StSTL, we can express requirements where uncertainty is restricted to probabilistic predicates and does not involve temporal operators.While being expressive enough to cover the applications of interest, this restriction is also convenient, since it allows directly translating design and verification problems into optimization and feasibility problems with chance (probabilistic) constraints that can be efficiently solved using off-the-shelf tools. Closely related to StSTL, Probabilistic Signal Temporal Logic (PrSTL) <cit.> has been recently proposedto specify properties and design controllers for deterministic systems in uncertain environments, captured by Gaussian stochastic processes.Our work is different since it focuses on developing a comprehensive contract framework that supports both verification and control synthesis tasks. Our framework can reason about a broader class of systems, including linear systems with additive and control-dependent noise and Markovian jump linear systems.Moreover, it supports non-Gaussian probabilistic constraints that cannot be captured in closed analytic form, by formulating encodings of synthesis and verification tasks that can produce sound and efficient approximations. § PRELIMINARIESAs we aim to extend the Assume-Guarantee (A/G) contract framework <cit.> to stochastic systems, we start by providing some background on A/G contracts and Stochastic Signal Temporal Logic (StSTL). §.§ Assume-Guarantee Contracts: An OverviewagcThe notion of contracts originates fromassume-guarantee reasoning <cit.>, which has been known for a long time as a hardware and software verification technique. However,its adoption in the context ofreactive systems, i.e., systems that maintain an ongoing interaction with their environment, such as CPSs, has been advocated only recently <cit.>. We provide an overview of A/G contracts starting with a generic representation of a component. We associate to it a set of properties that the component satisfies, expressed with contracts. The contracts will be used to verify the correctness of the composition and of the refinements. A component is an element of a design, characterized by a set of variables (input or output), a set ofports (input or output), and a set of behaviors over its variables and ports. 
Components can be connected together by sharing certain ports under constraints on the values of certain variables. Behaviors are generic and could be continuous functions that result from solving differential equations, or sequences of values or events recognized by an automaton. To simplify, we use the same term “variables” to denote both component variables and ports. With a slight abuse of notation, we also use M to denote the set of behaviors of component M. A contract C for a component M is a triple (V, A, G), where V is the set of component variables, and A and G are sets of behaviors over V <cit.>. A represents the assumptions that M makes on its environment, and G represents the guarantees provided by M under the environment assumptions. A component M satisfies a contract C whenever M and C are defined over the same set of variables, and all the behaviors of M are contained in the guarantees of C once they are composed (i.e., intersected) with the assumptions, that is, when M ∩ A ⊆ G. We denote this satisfaction relation by writing M ⊨ C, and we say that M is an implementation of C. However, a component E can also be associated with a contract C as an environment. We say that E is a legal environment of C, and write E ⊨_E C, whenever E and C have the same variables and E ⊆ A. A contract C = (V, A, G) is in canonical form if the union of its guarantees G and the complement Ā of its assumptions A is coincident with G, i.e., G = G ∪ Ā, where Ā denotes the complement of A. Any contract C can be turned into a contract in canonical form C' by taking A' = A and G' = G ∪ Ā. We observe that C and C' possess identical sets of environments and implementations. The two contracts C and C' are then equivalent. Because of this equivalence, in what follows, we assume that all contracts are in canonical form. A contract is consistent when the set of implementations satisfying it is not empty, i.e., it is feasible to develop implementations for it. This amounts to verifying that G ≠ ∅, where ∅ denotes the empty set. Let M be any implementation; then C is compatible if there exists a legal environment E for M, i.e., if and only if A ≠ ∅. The intent is that a component satisfying contract C can only be used in the context of a compatible environment. Contracts can be combined according to different rules. Composition (⊗) of contracts can be used to construct complex global contracts out of simpler local ones. Let C_1 and C_2 be contracts over the same set of variables V. Reasoning on the compatibility and consistency of the composite contract C_1 ⊗ C_2 can then be used to assess whether there exist components M_1 and M_2 such that their composition is valid, even if the full implementation of M_1 and M_2 is not available. To reason about consistency between different abstraction layers in a design, contracts can be ordered by establishing a refinement relation. We say that C refines C', written C ≼ C', if and only if A ⊇ A' and G ⊆ G'. Refinement amounts to relaxing assumptions and reinforcing guarantees. Clearly, if M ⊨ C and C ≼ C', then M ⊨ C'. On the other hand, if E ⊨_E C', then E ⊨_E C. In other words, contract C refines C' if C admits fewer implementations than C', but more legal environments than C'. We can then replace C' with C. Finally, to combine multiple requirements on the same component that need to be satisfied simultaneously, the conjunction (∧) of contracts can also be defined so that, if a component M satisfies the conjunction of C_1 and C_2, i.e., M ⊨ C_1 ∧ C_2, then it also satisfies each of them independently, i.e., M ⊨ C_1 and M ⊨ C_2.
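These set-level definitions can be made executable on a toy example. The sketch below, entirely our own construction, models behaviors as labels drawn from a small finite universe (rather than the closed-loop trajectories considered later) so that saturation, satisfaction, legal environments, consistency, compatibility, and refinement can be checked directly.

```python
from dataclasses import dataclass

# Toy universe of behaviors; in the paper behaviors are system trajectories,
# here we only need opaque labels to exercise the set-level definitions.
UNIVERSE = frozenset({"b1", "b2", "b3", "b4"})

@dataclass(frozen=True)
class Contract:
    A: frozenset   # assumptions
    G: frozenset   # guarantees

    def saturate(self):
        """Canonical form: G := G ∪ complement(A)."""
        return Contract(self.A, self.G | (UNIVERSE - self.A))

def satisfies(M, C):
    """M ⊨ C  iff  M ∩ A ⊆ G."""
    return (M & C.A) <= C.G

def legal_environment(E, C):
    """E ⊨_E C  iff  E ⊆ A."""
    return E <= C.A

def consistent(C):
    return len(C.saturate().G) > 0

def compatible(C):
    return len(C.A) > 0

def refines(C1, C2):
    """C1 ≼ C2 iff A1 ⊇ A2 and G1 ⊆ G2, checked on saturated contracts."""
    S1, S2 = C1.saturate(), C2.saturate()
    return (S1.A >= S2.A) and (S1.G <= S2.G)

C2 = Contract(frozenset({"b1"}), frozenset({"b1", "b2"})).saturate()
C1 = Contract(frozenset({"b1", "b2"}), frozenset({"b1"})).saturate()
M = frozenset({"b1", "b3"})
print(satisfies(M, C1), refines(C1, C2), consistent(C1), compatible(C1))
```

Refinement is checked on the saturated contracts, in line with the convention above that all contracts are assumed to be in canonical form.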
We refer the reader to the literature <cit.> for the formal definitions and mathematical expressions of contract composition and conjunction. In the following, we provide concrete representations of some of these operations and relations using operations on StSTL formulas. §.§ Stochastic Signal Temporal Logic (StSTL) We use StSTL to formalize requirements for discrete-time stochastic system and express both contract assumptions and guarantees. However, similarly to STL, StSTL also extends to continuous-time systems. Stochastic System. We consider a discrete-time stochastic system in a classic closed-loop control configuration as shown in Fig. <ref>. The system dynamics are given byx_0 = x̅_0,x_k+1 = f(x_k,u_k,w_k),k=0,1,…where f is an arbitrary measurable function <cit.>, x_k∈ℝ^n_x is the system state, x̅_0 is the initial state, u_k∈ℝ^n_u is the (control) input, and {w_k}_k=0^∞ is a random process on a complete probability space, which we denote as (Ω, ℱ, 𝒫), using the standard notation, respectively, for the sample space, the set of events, and the probability measure on them <cit.>. Each element ℱ_k of the filtrationℱ denotes the σ-algebra generated by the sequence {w_t}_t=0^k, while we set ℱ_-1 = {∅,Ω} as being the trivial σ-algebra. We assume that the input u_k is a function of the system states {x_t}_t=0^k and both x_k and u_k are ℱ_k-1-measurable random variables <cit.>. We also denote as z_k = (x_k,u_k,w_k) the vector of all the system variables at time k. Finally, we abbreviate as z = z_0, z_1, … a system behavior and as z^H = z_0, …, z_H-1 its truncation over the horizon H.StSTL Syntax and Semantics. StSTL formulas are defined over atomic predicates represented by chance constraints of the formμ ^[p] := 𝒫{μ(v) ≤ 0}≥ p,where μ(·) is a real-valued measurable function, v is a random variable on the probability space (Ω, ℱ, 𝒫), and p ∈ [0,1]. The truth value of μ ^[p] is interpreted based on the satisfaction of the chance constraint, i.e., μ ^[p] is true (denoted with ⊤) if and only if μ(v) ≤ 0 holds with probability larger than or equal to p. StSTL also supports deterministic predicates as a particular case. If μ(v) is deterministic, then μ^[p] holds for any value of p if and only if μ(v) ≤ 0 holds. In this case, we can omit the superscript [p].We define the syntax of an StSTL formula as follows:ψ := μ ^[p] | ψ | ψ∨ϕ | ψ _[t_1,t_2]ϕ | _[t_1,t_2]ψ,where μ ^[p] is an atomic predicate, ψ and ϕ are StSTL formulas, t_1, t_2 ∈ℝ_+ ∪{+∞}, andandare, respectively, theuntil and globally temporal operators. Other operators, such as conjunction (), weak until (), or eventually () are also supported and can be expressed using the operators in (<ref>).The semantics of an StSTL formula can be defined recursively as follows:(z,k)μ ^[p] ↔𝒫{μ(z_k) ≤ 0}≥ p, (z,k)ψ ↔( (z,k) ψ) (z,k)ψ∨ϕ ↔ (z,k) ψ∨ (z,k) ϕ,(z,k)ψ_[t_1,t_2]ϕ ↔∃ i ∈ [k+t_1,k+t_2]: (z,i) ϕ (∀ j ∈[k+t_1,i-1]: (z,j) ψ),(z,k)_[t_1,t_2]ψ ↔∀ i ∈ [k+t_1,k+t_2]: (z,i) ψ. As an example, (z,k) _[t_1,t_2]ϕ means that ϕ holds for all times t between t_1 and t_2.Intervals may also be open or unbounded, e.g., of the form [t_1,+∞).In this paper, we focus on bounded StSTL formulas, that is, formulas that contain no unbounded operators.StSTL reduces to STL for deterministic systems, with the exception that the atomic predicate has the form μ(v)≤ 0 rather than μ(v) > 0, as in STL. A difference between StSTL and PrSTL is in the interpretation of the negation of an atomic predicate. 
In PrSTL the semantics of negation is probabilistic, i.e., if (z,t) λ^ϵ_t_α_t holds for an atomic PrSTL predicateλ^ϵ_t_α_t, which is equivalent to stating that 𝒫{λ_α_t (z_t)<0} > 1 - ϵ_t, then (z,t) λ^ϵ_t_α_t is interpreted as 𝒫{λ_α_t (z_t)>0} > 1 - ϵ_t, so that λ^ϵ_t_α_t and λ^ϵ_t_α_t can be true at the same time. StSTL keeps, instead, the standard semantics of logic negation.§ PROBLEM FORMULATIONWe can concretely express the sets of behaviors A and G in a contract using temporal logic formulas <cit.> and, in particular, StSTL formulas. We then define an StSTL A/G contract as a triple (V,ϕ_A,ϕ_G), where ϕ_A and ϕ_G are StSTL formulas over the set of variables V. The canonical form of (V,ϕ_A,ϕ_G) can be achieved by setting ϕ_G := ϕ_A →ϕ_G. The main contract operators can then be mapped into entailment of StSTL formulas.We define below the verification and synthesis problems addressed in this paper.[Contract Consistency and Compatibility Checking]Given a stochastic system representation S as in (<ref>) and a bounded StSTL contract C = (V, ϕ_A, ϕ_G) on the system variables V, determine whether C is consistent (compatible), that is, whether ϕ_G (ϕ_A) is satisfiable. [Contract Refinement Checking]Given a stochastic system representation S as in (<ref>) and bounded StSTL contracts C_1 = (V, ϕ_A1, ϕ_G1)and C_2 = (V, ϕ_A2, ϕ_A2) on the system variables V, determine whether C_1 ≼ C_2, that is,ϕ_A2→ϕ_A1 and ϕ_A1→ϕ_G2 are both valid.[Synthesis from Contract]Given a stochastic system representation S as in (<ref>), a bounded StSTL contract C = (V, ϕ_A, ϕ_G)on the system variables V, and time horizon H, determine a control trajectory u^H such that (z^H,0) ϕ_A→ϕ_G.We consider the following system description:x_k+1 = [ 1 1; 0 1 ] x_k + [ 1 + 0.3w_k,1-0.2w_k,2;-0.2w_k,2 1 + 0.3w_k,1 ] u_k, where w_k = [w_k,1,w_k,2]^Tfollows a standard Gaussian distribution, i.e., w_k ∼𝒩(0,I) for all k, I being the identity matrix. We assume that the first state variable at time 0, [1,0] x_0, is in the interval [1,2] and require that with probability smaller than 0.7 the first state variable at time 2 does not exceed 1. We can formalize this requirement with the following StSTL contract C_1 = (ϕ_A1, ϕ_G1) in canonical form: ϕ_A1:=(1 ≤ [1,0] x_0)([1,0] x_0 ≤ 2),ϕ_G1:= ϕ_A1→ (𝒫{[1,0]x_2≤ 1}≥ 0.7),where, for brevity, we drop the set of variables in the contract tuple.Assumptions and guarantees are expressed by logical combinations of arithmetic constraints over real numbers and chance constraints, all supported by StSTL. We intend to verify the consistency of C_1.Given the assumption on the distribution of w_k, it is possible to show that there exists a constant matrix Λ_1^1/2∈ℝ^3× 3 such that the constraint 𝒫{[1,0] x_2≤ 1}≥ 0.7 translates into a deterministic constraint[Details on how to compute such a matrix Λ_1^1/2 are provided in Sec. <ref>.]f(x_0,u_0,u_1) ≤ 0, where f(.)= [1,2] x_0 + [1,1,1,0][ u_0; u_1 ] -1 ++ F^-1(0.7) Λ_1^1/2[ u_0; u_1; 1 ]_2, F^-1 is the inverse cumulative distribution of a standard normal random variable, and . _2 is the ℓ_2 norm. Hence, the contract is consistent if and only if there exists (x_0,u_0,u_1) that satisfies([1,0]x_0 < 1) ∨ ([1,0]x_0 > 2) ∨ f(x_0,u_0,u_1) > 0. To solve this problem, we can translate (<ref>) into a mixed integer program by applying encoding techniques proposed in the literature <cit.>. 
However, since one of the constraints in (<ref>) is non-convex, using a nonlinear solver may be inefficient and usually requires the knowledge of bounding boxes for all the decision variables. Moreover, analytical expressions of chance constraints may not be even available in general <cit.>. Similar considerations hold for the problems of checking compatibility,refinement, and for the generation of MPC schemes.Sec. <ref> addresses the issue highlighted in Example <ref> by providing techniques for systematically computing mixed integer linearapproximations of chance constraints and bounded StSTL formulas for three common classes of stochastic linear systems. To effectively perform the verification and synthesis tasks in Problem <ref>-<ref>, we look forboth under- and over-approximations of StSTL formulas. For example, if the under-approximation of (<ref>) is feasible, then we can conclude thatC_1 is consistent. However, infeasibility of the under-approximation is not sufficient to conclude about contract inconsistency; for this purpose, we need to prove that the over-approximation of (<ref>) is infeasible.encodingFor instance, sufficient and necessary conditions for the satisfiability of 𝒫((1,0)x_2≤ 1) ≥ 0.7 in (<ref>) can be, respectively, expressed by the following linear constraints: (1,2)x_0 + [1,1,1,0][ u_0; u_1 ] - 1 + F^-1(0.7) ∑_j=1^5 |e_j^T T [ u_0; u_1; 1 ]| ≤ 0,(1,2)x_0 + [1,1,1,0][ u_0; u_1 ] - 1 + F^-1(0.7)/√(5)∑_j=1^5 |e_j^T T [ u_0; u_1; 1 ]| ≤ 0,where e_j^T is the jth row of the identity matrix I. The constraints in (<ref>) can be easily linearized and are, therefore, more tractable than the one in (<ref>). § MIP ENCODING OF BOUNDED STSTL We present algorithms for the translation of bounded StSTL formulas into mixed integer constraints on the variables of a stochastic system.A MIP under-approximation of an StSTL formula ψ is a set of mixed integer constraints 𝒞^S(ψ) whose feasibility is sufficient to ensure the satisfiability of ψ.A MIP over-approximation of ψ is a set of mixed integer constraints 𝒞^N(ψ) which must be feasible if ψ is satisfiable. When tractable closed-form translations of chance constraints are available, the formula under- and over-approximations coincide and provide an equivalent encoding of the satisfiability problem. Otherwise, our framework provides under- and over-approximations in the form of mixed integer linear constraints. We start by discussing the translation of atomic predicates.§.§ MIP Translation of Chance ConstraintsOur goal is to translate chance constraints into sets of deterministic constraints that can be efficiently solved and provide a sound formulation for our verification and synthesis tasks.Since approximation techniquesdepend on the structure of the function μ(·) and the distribution of z_k at each time k, we detail solutions for three classes of dynamical systems and chance constraints that arise in various application domains. We denote by S(μ^[p]) ≤ 0 the under-approximation of the chance constraint, i.e., the set of mixed integer constraints whose feasibility is sufficient to guarantee the predicate satisfaction. Similarly, we denote by N(μ^[p])≤ 0 the chance constraint over-approximation, i.e., the set of constraints whose feasibility is necessary for the predicate satisfiability. For simplicity, we present approximations of nonlinear constraints consisting of single linear constraints. 
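To make the role of the two approximations concrete, the sketch below checks a single Gaussian chance constraint 𝒫{a^T w ≤ b} ≥ p in three ways: exactly, through its mean plus F^-1(p) times the standard deviation, and through the ℓ_1-norm bounds that yield a sufficient (under-approximating) and a necessary (over-approximating) linear condition for p ≥ 0.5; a Monte Carlo estimate is added as a sanity check. The numerical values and names are placeholders of ours, and the time-indexed constructions of the following subsections are not reproduced.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def chance_constraint_checks(a, b, mean, cov, p):
    """P{a.w <= b} >= p for w ~ N(mean, cov): exact Gaussian test plus the
    l1-norm bounds giving sufficient (under-) and necessary (over-) conditions."""
    d = len(a)
    half = np.linalg.cholesky(cov)          # one valid square root of cov
    std_term = half.T @ a                   # so that ||std_term||_2 = std of a.w
    l2 = np.linalg.norm(std_term, 2)
    l1 = np.linalg.norm(std_term, 1)
    q = norm.ppf(p)                         # F^{-1}(p); p >= 0.5 assumed here
    exact      = a @ mean + q * l2 <= b
    sufficient = a @ mean + q * l1 <= b                # uses ||x||_2 <= ||x||_1
    necessary  = a @ mean + q * l1 / np.sqrt(d) <= b   # uses ||x||_1 <= sqrt(d)||x||_2
    return exact, sufficient, necessary

a = np.array([1.0, 2.0])
mean, cov = np.array([0.1, -0.2]), np.array([[0.5, 0.1], [0.1, 0.3]])
b, p = 1.5, 0.7
print(chance_constraint_checks(a, b, mean, cov, p))

# Monte Carlo sanity check of the exact reformulation.
w = rng.multivariate_normal(mean, cov, size=200_000)
print("empirical P{a.w <= b} =", np.mean(w @ a <= b))
```

Because ‖x‖_2 ≤ ‖x‖_1 ≤ √(d) ‖x‖_2, feasibility of the first bound implies the exact constraint, while the exact constraint implies feasibility of the second, which is precisely how S(μ^[p]) ≤ 0 and N(μ^[p]) ≤ 0 are used.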
Piecewise-affine approximations can also be used to arbitrarily improve the approximation accuracy <cit.> at higher computation costs.§.§.§ Linear Systems with Additive and Control-Dependent Noise We consider the class of stochastic linear systems governed by the following dynamics x_k+1 = A x_k + B_k u_k + ζ_k, [B_k,ζ_k]= [B̅_k,ζ̅_k] + ∑_l=1^N [B̃_l,ζ̃_l] w_k,l,where w_k = [w_k,1,…,w_k,N]^T ∈ℝ^N follows the normal distribution 𝒩(w̅_k, Θ_k), and B̅_k and ζ̅_k, for each k, and B̃_l and ζ̃_l, for each l ∈{1,…,N}, are constant matrices and vectors, respectively. The resulting matrix B_k and vector ζ_k are stochastic and model, respectively, a multiplicative and and additive noise term.This model has been used, for instance, to represent motion dynamics under corrupted control signals <cit.> or networked control systems affected by channel fading <cit.>. Requirements such as policy gains or bounds on the states for these systems are often expressed by the following chance constraint:𝒫{μ(z_k) ≤ 0}≥ p,μ(z_k) = a^T x_k + b^T u_k + c. The next result provides an exact encoding for (<ref>). Let u_[0,k]=[ u_0^T,…,u_k^T ]^T be the vector of the control inputs from u_0 to u_k.We denote by Θ_k^(l_1 l_2) the l_1-th row and l_2-th column element of the covariance matrix Θ_k, and by F^-1 the inverse cumulative distribution function of a standard normal random variable. The chance constraint (<ref>) on the behaviors of the system in (<ref>) is equivalent toλ_1 (x_0, u_[0,k]) + F^-1(p) λ_2 (x_0, u_[0,k]) ≤ 0,where λ_1 is given byλ_1(x_0, u_[0,k]) = a^T A^k x_0 + b^T u_k + c + ∑_t=1^k a^T A^k-t (ζ̅_t-1 + B̅_t-1 u_t-1) + ∑_t=1^k∑_l=1^N a^T A^k-t (ζ̃_l+ B̃_l u_t-1) w̅_t-1,l,and λ_2 is an ℓ_2-norm of the system inputsλ_2(x_0, u_[0,k]) = Λ_k-1^1/2[ u_[0,k-1]^T, 1 ]^T _2.The scaling matrix Λ_k-1^1/2 is deterministic for the given dynamics (<ref>) and chance constraint (<ref>) and can be computed as a square root matrix of Λ_k-1, obtained as follows:Λ_k-1= [ Λ_1,1 Λ_1,2; Λ_1,2^T Λ_2,2 ],Λ_1,1= diag(α_k-1, …, α_0), Λ_1,2= [β_k-1, …, β_0]^T, Λ_2,2= ∑_t = 1^k ∑_l_1 = 1^N ∑_l_2 = 1^N a^T A^k-tζ̃_l_1 a^T A^k-tζ̃_l_2Θ_t-1^(l_1 l_2), ∀ t∈{0,…,k-1}:α_t = ∑_l_1 = 1^N ∑_l_2 = 1^N B̃_l_1^T (A^t)^T a a^T A^t B̃_l_2Θ_k-1-t^(l_1 l_2), β_t= ∑_l_1 = 1^N ∑_l_2 = 1^N a^T A^t ζ̃_l_1 a^T A^t B̃_l_2Θ_k-1-t^(l_1 l_2). The state x_k of the stochastic system (<ref>) is known to be a linear function of the Gaussian sequence {w_t}_t=0^k-1, hence it follows a Gaussian distribution. This also applies to μ(z_k). In fact, by substituting (<ref>) into the expression for μ(z_k), we obtain μ(z_k) =a^T A^k x_0 + b^T u_k + c + ∑_t=1^k a^T A^k-t (ζ̅_t-1 + B̅_t-1 u_t-1) + ∑_t=1^k∑_l=1^N a^T A^k-t (ζ̃_l+ B̃_l u_t-1) w_t-1,l.Therefore, μ(z_k) is linear in the random variables w_t-1,l, l∈{1,…,N} and also follows a Gaussian distribution. Next, we derive the mean and the standard deviation of μ(z_k). Since the random vector w_t-1 follows the Gaussian distribution 𝒩(w̅_t-1, Θ_k), the expectation of its l-th element w_t-1,l is w̅_t-1,l. Letλ_1 = 𝔼{μ(z_k)} be the expectation of μ(z_k). 
Then, we obtain λ_1 = a^T A^k x_0 + b^T u_k + c + ∑_t=1^k a^T A^k-t (ζ̅_t-1 + B̅_t-1 u_t-1) + ∑_t=1^k∑_l=1^N a^T A^k-t (ζ̃_l+ B̃_l u_t-1) w̅_t-1,l, which is (<ref>).To derive the standard deviation of μ(z_k), we first write μ̃ = μ(z_k) - 𝔼{μ(z_k)} into a more compact form, μ̃ = ℬ_k-1u_[0,k-1] + 𝒵_k-1 = [ℬ_k-1,𝒵_k-1] [ u_[0,k-1]; 1 ], where ℬ_k-1 and 𝒵_k-1 are random matrices defined as follows ℬ_k-1= ∑_l=1^N [ a^T A^k-1B̃_lw̃_0,l, …, a^T B̃_lw̃_k-1,l],𝒵_k-1= ∑_t=1^k ∑_l=1^N a^T A^k-tζ̃_lw̃_t-1,l,w̃_t-1,l= w_t-1,l - w̅_t-1,l. Then, we obtain 𝔼{μ̃^2}= 𝔼{[ u_[0,k-1]^T, 1 ] [ ℬ_k-1^T; 𝒵_k-1^T ][ℬ_k-1,𝒵_k-1] [ u_[0,k-1]; 1 ]} =[ u_[0,k-1]^T, 1 ] 𝔼{[ ℬ_k-1^T; 𝒵_k-1^T ][ℬ_k-1,𝒵_k-1] }[ u_[0,k-1]; 1 ] and, by renaming the positive semidefinite matrix Λ_k-1 = 𝔼{[ ℬ_k-1^T; 𝒵_k-1^T ][ℬ_k-1,𝒵_k-1] }, we can finally write 𝔼{μ̃^2} = Λ_k-1^1/2[ u_[0,k-1]^T, 1 ]^T _2^2 = λ^2_2, saying thatλ_2 in (<ref>) corresponds to the standard deviation of μ(z_k). The full expression for Λ_k-1 in (<ref>) can be obtained by computing the expectation 𝔼{·} and observing that 𝔼{w̃_t,l} = 0 and 𝔼{w̃_t,l_1w̃_t,l_2} = Θ_t^(l_1 l_2), which leads to (<ref>). Finally, the chance constraint (<ref>) on the random variable μ(z_k) following the distribution 𝒩(λ_1, λ_2) is equivalent to λ_1 + F^-1(p) λ_2 ≤ 0, which corresponds to (<ref>), as we wanted to prove. In (<ref>), λ_1 is a linear function of its variables, and λ_2 is an ℓ_2-norm of the system inputs. While (<ref>) is convex when p ≥ 0.5, this is no longer the case forp < 0.5.In both cases, we provide an efficient linear approximation by applying a classical norm inequality to derive lower and upper bound functions λ_2^u and λ_2^lfor λ_2(.)as follows:λ_2^u (x_0, u_[0,k])= ∑_j=1^k n_u + 1|e_j^T Λ_k-1^1/2[ u_[0,k-1]; 1 ]|,λ_2^l (x_0, u_[0,k])= 1/√(k n_u + 1)λ_2^u (x_0, u_[0,k]),where e_j^T is the j-th row of the identity matrix I and n_u is the dimension of u_k. Then, an under-approximation S(μ^[p]) ≤ 0 for (<ref>) is given byλ_1 (x_0, u_[0,k]) + F^-1(p) λ_2^u (x_0, u_[0,k]) ≤ 0,p ≥ 0.5λ_1 (x_0, u_[0,k]) + F^-1(p) λ_2^l (x_0, u_[0,k]) ≤ 0,p < 0.5. Similarly, an over-approximation N(μ^[p]) ≤ 0 can be obtained as follows: λ_1 (x_0, u_[0,k]) + F^-1(p) λ_2^l (x_0, u_[0,k]) ≤ 0,p ≥ 0.5λ_1 (x_0, u_[0,k]) + F^-1(p) λ_2^u (x_0, u_[0,k]) ≤ 0,p < 0.5.§.§.§ Markovian Jump Linear SystemsMarkovian jump linear systems are frequently used to model discrete transitions, for instance, due to component failures, abrupt disturbances, or changes in the operating points of linearized models of nonlinear systems <cit.>. They are characterized by the following dynamicsx_k+1 = A_k x_k + B_k u_k + ζ_k, [A_k,B_k,ζ_k]= [A(w_k),B(w_k),ζ(w_k)],where A_k, B_k, ζ_k are all functions of w_k,and the sequence {w_k}_k=0^∞ is a discrete-time finite-state Markov chain. We assume that, for all k, w_k takes a value w^l_k∈{w^0, …, w^N}.We use w_[0,k-1] and w^[l_0,l_k-1] to denote, respectively, the random trajectory w_0,…,w_k-1 and a particular scenario w^l_0,…,w^l_k-1. 𝒫{w_[0,k-1] = w^[l_0,l_k-1]} is the probability of occurrence of the scenario w^[l_0,l_k-1]. Moreover, for each scenario, we introduce a binary variable b(w^[l_0,l_k-1]) which evaluates to 1 if and only if μ(z_k) ≤ 0 holds for the scenario w^[l_0,l_k-1]. Then, an exact encoding for the chance constraint (<ref>) on a Markovian jump linear system is given by the following result. 
The chance constraint (<ref>) on the behaviors of the system in (<ref>) is equivalent to the following MIL constraints∑_t = 0^k-1∑_l_t = 0^N b(w^[l_0,l_k-1]) 𝒫{w_[0,k-1] = w^[l_0,l_k-1]}≥ p,λ(x_0, u_[0,k], w^[l_0,l_k-1]) ≤ 0 ↔ b(w^[l_0,l_k-1]) = 1,where λ(x_0, u_[0,k], w^[l_0,l_k-1]) ≤ 0 enforces that the particular scenario satisfies the chance constraint. λ(·) can be computed as follows:λ(·) =a^T 𝒜_k-1 x_0 + ℬ_k-1u_[0,k] + 𝒵_k-1 + c𝒜_k-1 = [A(w^l_k-1), ⋯, A(w^l_0)],ℬ_k-1 = [ a^T 𝒜_k-1 B(w^l_0), …,a^T B(w^l_k-1), b^T ]𝒵_k-1 =a^T 𝒜_k-1ζ(w^l_0) + … + a^T ζ(w^l_k-1),with u_[0,k] = [u_0^T,…,u_k^T]^T. For a given scenario w^[l_0,l_k-1] for the Markovian jump linear system in (<ref>), the system state x_k is a deterministic function of u_[0,k-1] = [u_0^T,…,u_k-1^T]^T. We can then express the constraint μ(z) = a^T x_k + b_i^T u_k + c ≤ 0 as in (<ref>). The probability 𝒫{a^T x_k + b^T u_k + c ≤ 0} can be computed by considering all the possible scenarios for w_[0,k-1] as follows: 𝒫{a^T x_k + b^T u_k + c ≤ 0} = ∑_t = 0^k-1∑_l_t = 0^N 𝒫{a^T x_k + b^T u_k + c ≤ 0, w^[l_0,l_k-1]} = ∑_t = 0^k-1∑_l_t = 0^N 𝒫{a^T x_k + b^T u_k + c ≤ 0 | w^[l_0,l_k-1]}· = ∑_t = 0^k-1∑_l_t = 1^H 𝒫{w_[0,k-1] = w^[l_0,l_k-1]}.Whether the constraint a^T x_k + b^T u_k + c ≤ 0 is satisfied or not under a given scenario w^[l_0,l_k-1] is a deterministic event, hence the probability 𝒫{a^T x_k + b^T u_k + c ≤ 0 | w^[l_0,l_k-1]} is either 1 or 0, and corresponds to the value of the binary indicator variable b(w^[l_0,l_k-1]). By introducingb(w^[l_0,l_k-1]) into (<ref>), the chance constraint 𝒫{a^T x_k + b^T u_k + c ≤ 0}≥ p reduces to the first constraint in (<ref>), where the probability 𝒫{w_[0,k-1] = w^[l_0,l_k-1]} is given by the transition probability matrix of the Markov chain. The second constraint in (<ref>) directly descends from the definition of b(w^[l_0,l_k-1]). Therefore, constraints (<ref>) and (<ref>) provide an exact encoding of the chance constraint (<ref>) for a Markovian jump linear system, which is what we wanted to prove. The implication in (<ref>) can be translated into MIL constraints using standard techniques <cit.>. §.§.§ Deterministic Systems with Measurement NoiseWe consider a systemx_k+1 = A x_k + B u_k, ξ_k = [ x_k; u_k ],subject to constraints of the form𝒫{μ(z_k)≤ 0}≥ p, μ(z_k) = w_k^T ξ_k + c,where w_k follows the normal distribution 𝒩(w̅_k, Θ_k). This setting can be used to represent uncertainties in perception, e.g., in the detection of environment obstacles to the trajectory of autonomous systems <cit.>. As for the system in Sec. <ref>, an exact translation of (<ref>) <cit.> leads tow̅_k^T ξ_k + c + F^-1(p) Θ_k^1/2ξ_k_2 ≤ 0,which may result in non-convex constraint. Again, by using a norm inequality to bound the ℓ_2-norm in (<ref>), we provide an under-approximation of (<ref>) in the formw̅_k^T ξ_k + c + F^-1(p) ∑_j=1^n_z|e_j^T Θ_k^1/2ξ_k| ≤ 0,p ≥ 0.5,w̅_k^T ξ_k + c + F^-1(p)/√(n_ξ)∑_j=1^n_ξ|e_j^T Θ_k^1/2ξ_k| ≤ 0,p < 0.5,where e_j is the j-th column of the identity matrix, and an over-approximation in the formw̅_k^T ξ_k + c + F^-1(p)/√(n_ξ)∑_j=1^n_ξ|e_j^T Θ_k^1/2ξ_k| ≤ 0,p ≥ 0.5,w̅_k^T ξ_k + c + F^-1(p) ∑_j=1^n_ξ|e_j^T Θ_k^1/2ξ_k| ≤ 0,p < 0.5. Table <ref> provides a summary of the encodings in this section.§.§ MIP Under-Approximation We construct a MIP under-approximation 𝒞_k^S(ψ) of a formula ψ by assigning a binary variable b^S_k(ψ) to the formula such thatb^S_k (ψ) = 1 → (z,k) ψ.We then traverse the parse tree ofψ and associate binary variables with all the sub-formulas in ψ. 
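Before the individual encoding rules are stated, the parse-tree bookkeeping can be previewed in code: every node of the formula gets a fresh Boolean variable, and the constraints linking a node to its children (or, for an atomic predicate, to the big-M relaxation of its chance-constraint under-approximation) are collected recursively. The sketch below is our own illustration: only an atomic/globally fragment is shown, constraints are emitted as strings instead of being handed to a MIP solver, and the predicate name and big-M value are arbitrary.

```python
from dataclasses import dataclass

BIG_M = 1e4                 # assumed large enough for the variable ranges at hand
constraints = []
_fresh = iter(range(10_000))

def new_bool(tag):
    return f"b{next(_fresh)}_{tag}"

@dataclass
class Atom:                 # chance predicate mu^[p]; its linear under-approximation
    name: str               # S(mu^[p])(z_k) <= 0 is kept symbolic here

@dataclass
class Globally:             # bounded globally operator over [t1, t2]
    t1: int
    t2: int
    sub: object

def encode_under(phi, k):
    """Return a Boolean variable b such that b = 1 forces (z, k) to satisfy phi."""
    if isinstance(phi, Atom):
        b = new_bool(f"{phi.name}@{k}")
        # atomic rule: S(mu^[p])(z_k) <= (1 - b) * M
        constraints.append(f"S({phi.name})(z_{k}) <= (1 - {b})*{BIG_M}")
        return b
    if isinstance(phi, Globally):
        b = new_bool(f"G@{k}")
        subs = [encode_under(phi.sub, k + i) for i in range(phi.t1, phi.t2 + 1)]
        # globally rule: b holds iff all sub-formula variables hold, linearised as usual
        for s in subs:
            constraints.append(f"{b} <= {s}")
        constraints.append(f"{b} >= {' + '.join(subs)} - {len(subs) - 1}")
        return b
    raise NotImplementedError(type(phi))

top = encode_under(Globally(0, 2, Atom("mu1^[0.8]")), k=0)
constraints.append(f"{top} = 1")     # require the top-level formula to hold
print("\n".join(constraints))
```

A MIP solver would receive such constraints together with the system dynamics and the remaining decision variables; here they are only printed.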
Following the semantics in Sec. <ref>, the logical relation between ψ and its sub-formulas is then recursively captured using mixed integer constraints. The translation terminates when all the atomic predicates are translated.Our encoding is different from the ones previously proposed for deterministic STL formulas <cit.>, in that the truth value of the Boolean variable b associated to each atomic predicate (μ≤ 0) is not equivalent to the predicate satisfaction. Instead, b = 1 is only a sufficient condition for predicate satisfaction, as we are only able to associate b with an under-approximationS(μ^[p])(z_k) ≤ 0. Because b=0 cannot encode the logical negation of the predicate, we deal with atomic predicates and their negations separately. Specifically, weconvert any formula into its negation normal form and associate distinct Boolean variables, e.g., b and b̅, to each atomic predicate and its negation, respectively. We use both b and b̅ to translate any Boolean and temporal operator involving the predicate or its negation in the formula. We illustrate this approach on some special cases below. ψ = μ^[p]: We requires that b_k^S(μ^[p]) = 1 implies the feasibility of a sufficient condition for (z,k)μ^[p] by the following constraintS(μ^[p])(z_k) ≤ (1 - b_k^S(μ^[p]))M,where M is a sufficiently large positive constant (“big-M” encoding technique) <cit.>, and S(μ^[p])(z_k) ≤ 0 is the chance constraint under-approximation. ψ = μ^[p]: If an under-approximation S(μ^[p])(z_k) ≤ 0 is available, then we require S(μ^[p])(z_k) ≤ (1 - b_k^S(μ^[p]))M. Otherwise, we recall that 𝒫(μ(z_k) ≤ 0) < p is equivalent to 𝒫(μ(z_k) > 0) > 1-p. To bring this predicate into a standard form, we require that 𝒫(-μ(z_k) + ϵ≤ 0) ≥ 1-p + ϵ, where ϵ > 0 is a sufficiently small real constant. We can then use the encoding in (<ref>) to obtainS((-μ + ϵ)^[1-p + ϵ])(z_k) ≤ (1 - b_k^S(μ^[p]))M.ψ = _[t_1,t_2]ϕ: To encode the bounded globally predicate we add to 𝒞_k^S(ψ) the mixed integer linear constraintb_k^S(_[t_1,t_2]ϕ) ↔∧_i=t_1^t_2 b_k+i^S(ϕ),requiring that b_k^S(_[t_1,t_2]ϕ) = 1 if and only if b_k+i^S(ϕ) = 1 for all i ∈ [t_1, t_2]. The conjunction of the b_k+i^S(ϕ) is then translated into mixed integer linear constraints using standard techniques <cit.>. ψ = _[t_1,t_2]ϕ: When globally is negated, we augment 𝒞_k^S(ψ) with the mixed integer linear constraintb_k^S( (_[t_1,t_2]ϕ)) ↔∨_i=t_1^t_2 b_k+i^S(ϕ),showing how we push the negation of a formula to its sub-formulas in a recursive fashion until we reach the atomic predicates.For brevity, we omit the encoding for the other temporal operators, which directly follows from the semantics in Sec. <ref> and the approach in (<ref>) and (<ref>).If (<ref>) and (<ref>) are linear, then 𝒞_k^S(ψ) is a mixed integer linear constraint set.Based on the above procedure, the following theorem summarizes the property of the MIP under-approximation. 𝒞_k^S(ψ) is a MIP under-approximation of ψ, i.e., if 𝒞_k^S(ψ) is feasible and z^* is a solution, then ψ is satisfiable and (z^*, k)ψ. We first prove the theorem for the atomic predicates μ^[p] and μ^[p]. We observe that 𝒞_k^S(μ^[p]) is equivalent to the conjunction of the constraints (b_k^S(μ^[p]) = 1) and (<ref>). If 𝒞_k^S(μ^[p]) is feasible, then S(μ^[p])(z_k)≤ 0 must hold. Since S(μ^[p])(z_k)≤ 0 is a sufficient condition for the satisfaction of the predicate, we conclude (z^*,k)μ ^[p]. Similarly, the feasibility of 𝒞_k^S(μ^[p]) implies(z^*,k)μ ^[p] using constraint (<ref>). 
We now consider a formula ψ such that Theorem <ref> holds for all its sub-formulas. Without loss of generality, we discuss ψ = ϕ_1 _[t_1,t_2]ϕ_2; the same proof structure can be applied to other temporal or logical operators. 𝒞_k^S(ψ) contains the following constraints b_k^S(ψ) = 1,b_k^S(ψ) = ∨_i=t_1^t_2 (b_k+i^S(ϕ_2) ∧_j=t_1^i-1b_k+j^S(ϕ_1)), 𝒞_k+i^S(ϕ_1)∖{b_k+i^S(ϕ_1) = 1}, 𝒞_k+j^S(ϕ_2)∖{b_k+j^S(ϕ_2) = 1}, for all i∈ [t_1,t_2] and j∈ [t_1,t_2-1]. We use 𝒞_k+i^S(ϕ_1)∖{b_k+i^S(ϕ_1) = 1} to denote the set of constraints in 𝒞_k+i^S(ϕ_1) except for the constraint (b_k+i^S(ϕ_1) = 1). If 𝒞_k^S(ψ) is feasible, then b_k^S(ψ) = 1 must hold, hence there exists i∈ [t_1,t_2] such that b_k+i^S(ϕ_2) ∧_j=t_1^i-1b_k+j^S(ϕ_1) = 1. We then obtain that b_k+i^S(ϕ_2) = 1 holds as well as b_k+j^S(ϕ_1) = 1, ∀ j ∈ [t_1,i-1]. This ensures that 𝒞_k+i^S(ϕ_1) and 𝒞_k+j^S(ϕ_2), ∀ j ∈ [t_1,i-1], are feasible. Since Theorem <ref> holds for ϕ_1 and ϕ_2, we also have (z^*,k+i)ϕ_2 and (z^*,k+j)ϕ_1 ∀ j ∈ [t_1,i-1], hence (z^*,k)ϕ_1 _[t_1,t_2]ϕ_2, which is what we wanted to prove. It is possible that both the𝒞_k^S(ψ) and 𝒞_k^S(ψ)under-approximations are infeasible, in which casewe cannot make any conclusion on whether ψ or ψ are satisfiable. To conclude on the unsatisfiability of a formula, we resort to a MIP over-approximation. §.§ MIP Over-Approximation To generate an over-approximation of ψ, we associate a binary variable b^N_k (ψ) to ψ and seek for a set of mixed integer constraints 𝒞_k^N(ψ) so that(z,k) ψ→ b^N_k (ψ) = 1.Creating an over-approximation only differs in the interpretation of the atomic propositions, since we now use deterministic mixed integer constraints that are necessary for the satisfaction of the chance constraints in the formula. As in Sec. <ref>, we deal with an atomic predicate and its negation separately, and provide necessary condition for their satisfaction as follows.ψ = μ^[p]: We assign a binary variable b_k^N(μ^[p]) so that, if the over-approximation N(μ^[p])(z_k) ≤ 0 is not satisfied, then b_k^N(μ^[p]) is false. We, therefore, add the following mixed integer constraint:N(μ^[p])(z_k) ≤ (1 - b_k^N(μ^[p]))M,where M is a large enough positive constant <cit.>.ψ = μ^[p]: If an over-approximation N(μ^[p])(z_k) ≤ 0 is available, then we add a binary variable b_k^N(μ^[p]) and the mixed integer constraintN(μ^[p])(z_k) ≤ (1 - b_k^N(μ^[p]))M.Otherwise, since 𝒫(μ(z_k) ≤ 0) < pimplies 𝒫(-μ(z_k) ≤ 0) ≥ 1- p we requireN((-μ)^[1-p])(z_k) ≤ (1 - b_k^N(μ^[p]))M. Other logic and temporal operators are encoded as in Sec. <ref>.By similar arguments, we obtain the result below. 𝒞_k^N(ψ) is a MIP over-approximation for the formula ψ, i.e., if 𝒞_k^N(ψ) is infeasible,thenψ is unsatisfiable. Weneed to prove that (z,k)ψ is sufficient for the feasibility of 𝒞_k^N(ψ). Let first ψ be the atomic proposition μ^[p]. Since N(μ^[p])(z_k) ≤ 0 is a necessary condition for the satisfaction of μ^[p], we obtain (z,k)μ^[p]→ N(μ^[p])(z_k) ≤ 0. Then, if μ^[p] is satisfiable, the conjunction of (<ref>) and b_k^N(μ^[p]) = 1 holds, which is equivalent to the feasibility of 𝒞_k^N(ψ). A similar argument can be used for μ^[p]. When ψ is a generic formula, let Theorem <ref> hold for the sub-formulas of ψ. Then, if a sub-formula is satisfiable, its over-approximation is feasible. Without loss of generality, we consider ψ = (ϕ_1 _[t_1,t_2]ϕ_2). (z,k)ψ is equivalent to ∧_i=t_1^t_2 ((z,k+i)ϕ_2 ∨_j=t_1^i-1 (z,k+j)ϕ_1) being true, meaning that for all i ∈ [t_1,t_2] either (z,k+i)ϕ_2 holds or there exists j ∈ [t_1,i-1] such that (z,k+j)ϕ_1. 
Since both ϕ_1 and ϕ_2 are sub-formulas of ψ, (z,k+i)ϕ_2 and(z,k+j)ϕ_1 imply, respectively, that 𝒞_k+j^N(ϕ_1) and 𝒞_k+i^N(ϕ_2) are feasible. We deduce that for all i ∈ [t_1,t_2] either b_k+i^N(ϕ_2)= 1 or there exists j ∈ [t_1,i-1] such that b_k+j^N(ϕ_1) = 1. Since the relation between b_k^N(ψ), b_k+j^N(ϕ_1), and b_k+i^N(ϕ_2), as encoded in 𝒞_k^N(ψ), is b_k^N(ψ) = ∧_i=t_1^t_2 (b_k+i^N(ϕ_2) ∨_j=t_1^i-1b_k+j^N(ϕ_1)), we infer that b_k^N(ψ) = 1 is feasible. The feasibility of 𝒞_k^N(ψ) is then proved sincea feasible solution for 𝒞_k^N(ψ) can be obtainedby solving the conjunction of the constraints 𝒞_k+j^N(ϕ_1)∖{b_k+j^N(ϕ_1) = 1} for all j∈[t_1, t_2-1], 𝒞_k+i^N(ϕ_2) ∖{b_k+i^N(ϕ_2) = 1} for all j∈[t_1, t_2], constraint (<ref>), and b_k^N(ψ) = 1.§ CONTRACT-BASED VERIFICATION AND SYNTHESIS We formulate verification and synthesis procedures that leverage under- and over-approximations of bounded StSTL contracts to solve Problem <ref>-<ref> for the classes of stochastic systems introduced in Sec. <ref>.A first result providessound procedures to check contract consistency and compatibility (Problem <ref>). LetS be a stochastic system belonging to one of the classes introduced in Sec. <ref> (Table <ref>); let C = (ϕ_A,ϕ_G) be an A/G contract where ϕ_A and ϕ_G are bounded StSTL formulas over the system variables. If over- and under-approximations are available for both ϕ_A and ϕ_A ∨ϕ_G, then the following hold: * If 𝒞_0^S(ϕ_A) is feasible, then C is compatible. * If 𝒞_0^N(ϕ_A) is infeasible, then C is not compatible. * If 𝒞_0^S(ϕ_A ∨ϕ_G) is feasible, then C is consistent. * If 𝒞_0^N(ϕ_A ∨ϕ_G) is infeasible, then C is not consistent. By Theorem <ref>, if 𝒞_0^S(ϕ_A) is feasible, then ϕ_A is satisfiable, which indicates that C is compatible. On the other hand, by Theorem <ref>, if 𝒞_0^N(ϕ_A) is infeasible, then ϕ_A is unsatisfiable, hence C is incompatible. The results on consistency can be obtained in the same way.The following result addresses refinement checking(Problem <ref>). LetS be a stochastic system belonging to one of the classes introduced in Sec. <ref> (Table <ref>); let C_1 = (ϕ_A1,ϕ_G1) and C_2 = (ϕ_A2, ϕ_G2) be A/G contracts whose assumptions and guarantees arebounded StSTL formulas over the system variables. If over- and under-approximations are available for ψ_1 = ϕ_A2∨ϕ_A1 and ψ_2 = (ϕ_A1∧ϕ_G1) ∨ (ϕ_A2∨ϕ_G2), then the following hold: * If 𝒞_0^N(ψ_1) and 𝒞_0^N(ψ_2) are infeasible, then C_1 ≼ C_2. * If 𝒞_0^S(ψ_1) or 𝒞_0^S(ψ_2) are feasible, then C_1 ⋠C_2. The proof proceeds as in Theorem <ref>, by directly applying the definition of contract refinement. By Theorem <ref>, if 𝒞_0^N(ψ_1) and 𝒞_0^N(ψ_2) are infeasible, then ψ_1 and ψ_2 are unsatisfiable, hence ψ_1 and ψ_2 are valid. We therefore obtain than ϕ_A2→ϕ_A1 and (ϕ_A1∨ϕ_G1) → (ϕ_A2∨ϕ_G2) are valid, hence C_1 ≼ C_2 by definition.Similarly, 𝒞_0^S(ψ_1) or 𝒞_0^S(ψ_2) being feasible implies that either ψ_1 or ψ_2 are not valid formulas by Theorem <ref>. We therefore conclude that C_1 ⋠C_2 holds. The above decision procedures are not complete. For instance, it is possible that 𝒞_0^S(ϕ_A) is infeasible and 𝒞_0^N(ϕ_A) is feasible, in which case we are not able to conclude on the satisfiability of ϕ_A. In this case, we increasingly refine piecewise-affine under- and over-approximations of chance constraints until we obtain an answer.Finally, as an application of Theorem <ref>,we provide a framework for the design of stochastic MPC schemes using StSTL contracts. 
We show how a stochastic optimization problem can be generated by enforcing contract consistency on the system in Fig. <ref> to obtain a control trajectory which solves Problem <ref>. [Generation of Stochastic MPC Schemes]In stochastic MPC, the controller measures the plant state x_k at time k and derives a control input u_k by solving a stochastic optimization problem. The plant state x_k+1 is a function of u_kand the random external signal w_k according to the system dynamics.Given a stochastic system described as in (<ref>), where the environment input (disturbance) w_k at each time k follows a distribution 𝒟, let the bounded StSTL contractC = (Q x_0 ≤r,ϕ) capture the system requirement that ϕ be satisfied if the initial state x_0 is in the polyhedron represented by set of linear inequalities Q x_0 ≤r for a fixed matrix Q and vector r.Control synthesis can then be formulated as the problem of finding a control trajectory u that makes C consistent and optimizes a predefined cost. For a finite horizon H, this translates into requiring that the guarantees of C are satisfiable in the context of its assumptions, hence the conjunction of the following constraints (z^H,0)(Qx̅_0 ≤ r) →ϕ, x_k+1 = f(x_k,u_k,w_k),w_k ∼𝒟,x_0 = x̅_0,for k=0,1,…, H-1, must be feasible, while optimizing a cost function J_H(x_0, u^H). By calling ψ := (Qx_0 ≤ r) →ϕ and using Theorem <ref>, we can finally solve this problem using the under-approximation 𝒞_0^S(ψ) obtained as described in Sec. <ref> over the horizon H, which provides the following stochastic optimization problem:min_u^H J_H(x_0, u^H), s.t.𝒞_0^S(ψ)to be executed in a receding horizon fashion. It is then possible to extend previous results on MPC from STL specifications <cit.> to stochastic linear systems. § CASE STUDIES We implemented the verification and synthesis procedures in Sec. <ref> in the Matlab toolbox SCAnS (Stochastic Contract-based Analysis and Synthesis).As shown in Fig. <ref>, SCAnS receives as inputs a system description in one of the classes of Sec. <ref>, a set of bounded StSTL contracts, a time horizon H, and a set of verification or synthesis tasks. In the verification flow, SCAnS computes under- and over-approximations of contract assumptions and guarantees and perform consistency, compatibility, and refinement checking of user-defined contracts using the results in Theorem <ref> and Theorem <ref>. In the synthesis flow, SCAnS follows the procedure in Example <ref> to generate a stochastic optimization problem from a user-defined contract, which can be executed in a receding horizon scheme. We illustrate the effectiveness of our approach on two examples. The first example utilizes both under- and over-approximations of StSTL formulas to perform contract compatibility, consistency, and refinement checking. The second example uses a formula under-approximation to synthesize an MPC controller for an aircraft power distribution network. SCAnS uses Yalmip <cit.> to formulate mixed integer programs, Gurobi <cit.> to solve mixed integer linear programs, and bmibnb (in Yalmip) to solve mixed integer nonlinear programs. All experiments ran on a 3.2-GHz Intel Core i5 processor with 4-GB memory. §.§ Contract-Based Verification We check compatibility and consistency for the contract and system in Example <ref>. By applying Theorem <ref> and the under-approximation in Sec. <ref>, we find that 𝒞_0^S(ϕ_A1) is feasible, and so is 𝒞_0^S(ϕ_A1∨ϕ_G1). Therefore, contract (ϕ_A1,ϕ_G1) is both compatible and consistent.Since the system is in the class of Sec. 
<ref>, our encoding uses (<ref>) and (<ref>). Given a contract C_2 defined as follows: ϕ_A2 := [1,0]x_0 ≤ 3,ϕ_G2 := ϕ_A2→_[1,3] (𝒫{[1,0]x_2≤ 2}≥ 0.6),we can also check that C_2 ≼ C_1 by using the results in Theorem <ref>. Moreover, to show the effectiveness of the proposed approximation, we increase the system dimension by redefining the dynamics as follows:x_k+1 = A x_k + B_k u_k, B_k= I + 0.3[ w_k,1; ⋱; w_k,1 ] -0.2[ w_k,2; ⋰; w_k,2 ]where A is a Jordan matrix constructed using blocks of dimension 2 as in (<ref>). Contract refinement checking on a system with 100 state variables took about 20 ms using the proposed approximate encoding, which is a 20× reduction in execution time with respect to the exact encoding. §.§ Requirement Analysis and Control Synthesis for Aircraft Electric Power Distribution An aircraft power system distributes power from generators (engines) to loads by configuring a set of electronic control switches denoted as contactors <cit.>. As shown in the simplified diagram of Fig. <ref>, physical components of a power system include generators, AC and DC buses, Transformer and Rectifier Units (TRUs), contactors (C1-C11), loads, and batteries. The controller, which is also denoted as Load Management System (LMS) and is not shown in the figure, determines the configuration of the contactors at each time instant, in order to provide the required power to the loads, while being subject to a set of constraints, e.g., on the battery charge level. A hierarchical LMS structure was proposed for aircraft power systems, which adopts two controller levels and is based on a deterministic model of the system <cit.>. A high-level LMS (HL-LMS) operates at a lower frequency (e.g., 0.1 Hz) and provides advice on the contactor configuration as obtained by solving an optimization problem. The control objective is to provide power to the highest number of loads at each time (minimize load shedding) and reduce the switching frequency of contactors, hence the wear-and-tear associated with switching. A low-level LMS (LL-LMS), working at a faster frequency (e.g., 1 Hz) takes critical decisions to place the system in safety mode by shedding non-essential loads every time a generator fails. The LL-LMS accepts the suggestion of the HL-LMS only if it is safe.We adopt the same model for the system architecture and the dynamics as in this reference design <cit.>. The system state is represented by the state of charge of the batteries, which are allowed to, respectively, discharge or charge when the generator power is insufficient or redundant with respect to the load power. The system contains a number of generators N_s = 3 and a number of AC (DC) buses N_b = 2, where each bus must be connected to a functional generator or TRU to receive power. Each DC bus has N_sl = 10 sheddable loads and N_nsl = 10 non-sheddable loads, which are shown as lumped components in Fig. <ref>. The maximum power supplied by the three generators is 100 kW (GEN1), 100 kW (GEN2), and 85 kW (GEN3). However, differently from the reference design <cit.>, the power demand of each load is now a Gaussian random variable. The average power demand assumes the values in Table II of our reference <cit.>, while the variance is 0.1 times larger than the average value. A controller based on stochastic MPC has been recently proposed for a similar power system model <cit.>. 
In this section, we show that SCAnS is able to automatically design a controller that follows the same approach but can handle a richer set of specifications.We use StSTL to express the control specification ψ for the HL-LMS, involving both deterministic constraints on the network connectivity <cit.> and stochastic constraints on the battery levels. Sample requirements in ψ, over a time horizon of 20 steps, are formalized as follows: * The battery charge level B_j shall not be less than 0.3 with probability larger than or equal to 0.95, i.e., □_[1,20] (0.3 - B_j)^[0.95], j = 1,…,N_b, * If the battery level B_j at time 0 is less than or equal to 0.25, then there exists a time in at most 5 steps at which B_j equals or exceeds 0.4 with probability larger than or equal to 0.95, i.e., for all j = 1,…, N_b: (B_j - 0.25 ≤ 0) →⊤ _[0,5] (0.4 - B_j)^[0.95], * If a generator is unhealthy, then it is disconnected from the buses. By denoting with h = (h_1,…,h_N_s) the binary vector indicating the health status of the generators, where 1 stands for “healthy," and with δ_j = [δ_1,j,…,δ_N_s,j]^T the vector whose componentδ_i,j is 1 if and only if generator i ∈{1,…,N_s} is connected to bus j, this requirement can be translated as □_[0,20] (δ_i,j -h_i ≤ 0), ∀ i ∈{1, …, N_s }.By calling ψ the conjunction of all system requirement assertions, such as the ones above, the system-level contract isC_S = ( (∀ j ∈{1,…,N_b}: B_j0∈ [0.2,1]) ∧∑_j=1^N_s h_j ≥ 2, ψ),stating that the specification ψ must be satisfied if the initial battery level is between 0.2 and 1 (20% and 100% of the full level of charge) and if there are at least two healthy generators.SCAnS was able to verify the consistency of C_S using the result in Theorem <ref> and generate a stochastic MPC scheme for the HL-LMS. We relied on the mixed integer linear under-approximation of ψ into the constraint set 𝒞_0^S(ψ) because of the large number of variables (more than 400) in the optimization problems. When parsing ψ, deterministic constraints encoding the atomic propositions(0.3 - B_j)^[0.95] were formulated using (<ref>). 𝒞_0^S(ψ) and the control objective formed the optimization problem solved by the HL-LMS every 10 s to provide suggestions to the LL-LMS. We observe that constraint (<ref>), capturing more complex transient behaviors, was not present in previous formulations <cit.>, while it could be easily expressed in StSTL and automatically accounted for in our MPC scheme.In every simulation run, GEN2 is shut down at time 34 to test the response of the LMS. The contactor signals indicating the connection of the 3 generators to the 2 AC buses are in Fig. <ref>. First, we observe that the LL-LMS connects GEN3 to bus 2 at time 34 to immediately replace the faulty generator GEN2, before the HL-LMS can respond to this event at time 40. Meanwhile, because the average total power consumption of either bus 1 or bus 2 exceeds 85 kW (the maximum power supplied by GEN3), the LL-LMS sheds the loads at time 34 in Fig. <ref>. Conversely, the HL-LMS does not detect this shutdown until time 40. Once a new optimal configuration is computed, as shown in Fig. <ref>, the HL-LMS realizes that GEN2 must indeed be disconnected from bus 2 (requirement (<ref>)) and proposes a configuration that connects GEN1 and GEN3 alternatively to the two buses. This prevents load shedding (all loads are now powered again) and better resource utilization, since the battery can now be effectively charged when GEN1 is connected and then used to provide extra power when GEN3 is connected. 
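The chance-constrained battery requirement stated above as □_[1,20] (0.3 - B_j)^[0.95] also lends itself to a direct empirical check on closed-loop simulations. A minimal sketch follows; the trajectory array is a hypothetical stand-in for the simulated battery profiles, and the per-time satisfaction rate it reports is the quantity discussed next.

import numpy as np

rng = np.random.default_rng(0)
n_runs, horizon = 500, 21
# hypothetical stand-in for simulated battery trajectories: B[r, t] is the
# charge level of one battery in run r at time step t
B = np.clip(0.45 + 0.02 * rng.standard_normal((n_runs, horizon)).cumsum(axis=1), 0.0, 1.0)

threshold, target_prob = 0.3, 0.95
sat_rate = (B >= threshold).mean(axis=0)        # empirical P{B_t >= 0.3} at each time step
print("minimum satisfaction rate over [1,20]:", sat_rate[1:].min())
print("requirement met at all times:", bool((sat_rate[1:] >= target_prob).all()))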
While the switching activity increases in this new configuration, the switching frequency is always compatible with the requirements and is minimized by the MPC scheme. The trajectories of the battery charge level from 50 simulation runs are shown in Fig. <ref>. We see that the constraint (<ref>) is effective, since the battery level mostly remains above 0.3 after time 0. Moreover, most of the battery profiles starting from the initial condition B_1,0 = B_2,0 = 0.225 climb above 0.4 before time 5, which is consistent with requirement (<ref>). Finally, the rate of satisfaction of the constraint B_j ≥ 0.3, as estimated using 500 simulation runs, is larger than 0.95 at all times, which is consistent with requirement (<ref>). One optimization run takes 0.05 s on average and 0.24 s in the worst case.§ CONCLUSIONS We developed an assume-guarantee contract framework and a supporting tool for the automated verification of certain classes of stochastic linear systems and the generation of stochastic Model Predictive Control (MPC) schemes. Our approach leverages Stochastic Signal Temporal Logic to specify system behaviors and contracts, together with algorithms that can efficiently encode and solve contract compatibility, consistency, and refinement checking problems using conservative approximations of probabilistic constraints. We illustrated the effectiveness of our approach on a few examples, including the control of aircraft electrical power distribution systems. Our tool can automatically design stochastic MPC schemes for a richer set of specifications than in previous work. Future work includes the investigation of mechanisms to improve the accuracy and scalability of our framework. | http://arxiv.org/abs/1705.09316v2 | {
"authors": [
"Jiwei Li",
"Pierluigi Nuzzo",
"Alberto Sangiovanni-Vincentelli",
"Yugeng Xi",
"Dewei Li"
],
"categories": [
"cs.SY",
"cs.LO"
],
"primary_category": "cs.SY",
"published": "20170525182451",
"title": "Stochastic Assume-Guarantee Contracts for Cyber-Physical System Design Under Probabilistic Requirements"
} |
| http://arxiv.org/abs/1705.09629v1 | {
"authors": [
"Daniel G. Figueroa",
"Mikhail Shaposhnikov"
],
"categories": [
"hep-lat",
"astro-ph.CO",
"hep-ph"
],
"primary_category": "hep-lat",
"published": "20170526155833",
"title": "Lattice implementation of Abelian gauge theories with Chern-Simons number and an axion field"
} |
| http://arxiv.org/abs/1705.09664v2 | {
"authors": [
"Adam Coogan",
"Stefano Profumo"
],
"categories": [
"astro-ph.HE",
"hep-ph"
],
"primary_category": "astro-ph.HE",
"published": "20170526180002",
"title": "Origin of the tentative AMS antihelium events"
} |
Implicit Regularization in Matrix Factorization Suriya Gunasekar [email protected] Blake Woodworth [email protected] Srinadh Bhojanapalli [email protected] Behnam Neyshabur [email protected] Nathan Srebro [email protected] December 30, 2023 ====================================================================================================================================================================== We study implicit regularization when optimizing an underdetermined quadratic objective over a matrix X with gradient descent on a factorization of X. We conjecture and provide empirical and theoretical evidence that with small enough step sizes and initialization close enough to the origin, gradient descent on a full dimensional factorization converges to the minimum nuclear norm solution. § INTRODUCTION When optimizing underdetermined problems with multiple global minima, the choice of optimization algorithm can play a crucial role in biasing us toward a specific global minimum, even though this bias is not explicitly specified in the objective or problem formulation. For example, using gradient descent to optimize an unregularized, underdetermined least squares problem would yield the minimum Euclidean norm solution, while using coordinate descent or preconditioned gradient descent might yield a different solution. Such implicit bias, which can also be viewed as a form of regularization, can play an important role in learning. In particular, implicit regularization has been shown to play a crucial role in training deep models <cit.>: deep models often generalize well even when trained purely by minimizing the training error without any explicit regularization, and when there are more parameters than samples and the optimization problem is underdetermined. Consequently, there are many zero training error solutions, all global minima of the training objective, some of which may generalize horribly. Nevertheless, our choice of optimization algorithm, typically a variant of gradient descent, seems to prefer solutions that do generalize well. This generalization ability cannot be explained by the capacity of the explicitly specified model class (namely, the functions representable in the chosen architecture). Instead, it seems that the optimization algorithm biases us toward a “simple” model, minimizing some implicit “regularization measure”, and that generalization is linked to this measure. But what are the regularization measures that are implicitly minimized by different optimization procedures? As a first step toward understanding implicit regularization in complex models, in this paper we carefully analyze implicit regularization in matrix factorization models, which can be viewed as two-layer networks with linear transfer. We consider gradient descent on the entries of the factor matrices, which is analogous to gradient descent on the weights of a multilayer network. We show how such an optimization approach can indeed yield good generalization properties even when the problem is underdetermined.
We identify the implicit regularizer as the nuclear norm, and show that even when we use a full dimensional factorization, imposing no constraints on the factored matrix, optimization by gradient descent on the factorization biases us toward the minimum nuclear norm solution.Our empirical study leads us to conjecture that with small step sizes and initialization close to zero, gradient descent converges to the minimum nuclear norm solution, and we provide empirical and theoretical evidence for this conjecture, proving it in certain restricted settings.§ FACTORIZED GRADIENT DESCENT FOR MATRIX REGRESSIONWe consider least squares objectives over matrices X∈^n× n of the form:min_X ≽ 0 F(X)=𝒜(X)-y_2^2.where 𝒜:^n× n→Ṟ^m is a linear operator specified by 𝒜(X)_i=A_iX, A_i∈^n× n, and y∈^m.Without loss of generality, we consider only symmetric positive semidefinite (p.s.d.) X and symmetric linearly independent A_i (otherwise, consider optimization over a larger matrix [ W X; X^⊤ Z ] with 𝒜 operating symmetricallyon the off-diagonal blocks). In particular, this setting covers problems including matrix completion (where A_i are indicators, <cit.>), matrix reconstruction from linear measurements <cit.> and multi-task training (where each column of X is a predictor for a deferent task and A_i have a single non-zero column, <cit.>).We are particularly interested in the regime where m ≪ n^2, in which case (<ref>) is an underdetermined system with many global minima satisfying 𝒜(X)=y.For such underdetermined problems, merely minimizing (<ref>) cannot ensure recovery (in matrix completion or recovery problems) or generalization (in prediction problems).For example, in a matrix completion problem (without diagonal observations), we can minimize (<ref>) by setting all non-diagonal unobserved entries to zero, or to any other arbitrary value.Instead of working on X directly, we will study a factorization X=UU^⊤. We can write (<ref>) equivalently as optimization over U as,min_U∈^n× d f(U)=𝒜(UU^⊤)-y_2^2.When d<n, this imposes a constraint on the rank of X, but we will be mostly interested in the case d=n, under whichno additional constraint is imposed on X (beyond being p.s.d.) and (<ref>) is equivalent to (<ref>). Thus, if m ≪ n^2,then (<ref>) with d=n is similarly underdetermined and can be optimized in many ways — estimating a global optima cannot ensure generalization (e.g. imputing zeros in a matrix completion objective). Let us investigate what happens when we optimize (<ref>) by gradient descent on U.To simulate such a matrix reconstruction problem, we generated m ≪ n^2 random measurement matrices and set y = 𝒜(X^*) according to some planted X^*≽ 0. We minimized (<ref>) by performing gradient descent on U to convergence, and then measured the relative reconstruction error X-X^*_F.Figure <ref> shows the normalized training objective and reconstruction error as a function of the dimensionality d of the factorization,for different initialization and step-size policies, and three different planted X^*.First, we see that (for sufficiently large d) gradient descentindeed finds a global optimum, as evidenced by the training error (the optimization objective) being zero. 
This is not surprising since with large enough d this non-convex problem has no spurious local minima <cit.>and gradient descent converges almost surely to a global optima <cit.>; there has also been recent work establishing conditions for global convergence for low d <cit.>.The more surprising observation is that in panels (a) and (b), even when d>m/n, indeed even for d=n, we still get goodreconstructions from the solution of gradient descent withinitializationU_0 close to zero and small step size.In this regime, (<ref>) is underdetermined and minimizing it does not ensure generalization. To emphasize this, we plot the reference behavior of a rank unconstrained global minimizer X_gd obtained via projected gradient descent for (<ref>) on the X space. For d<n we also plot an example of an alternate “bad"rank d global optima obtained with an initialization based on SVD of X_gd (`SVD Initialization'). When d<m/n, we understand how the low-rank structure can guarantee generalization <cit.> and reconstruction <cit.>. What ensures generalization when d≫ m/n? Is there a strong implicit regularization at play for the case of gradient descent on factor space and initialization close to zero? Observing the nuclear norm of the resulting solutions plotted in Figure <ref> suggests that gradient descent implicitly induces a low nuclear norm solution.This is the case even for d=n when the factorization imposes no explicit constraints. Furthermore, we do not include any explicit regularization and optimization is run to convergence without any early stopping.In fact, we cansee a clear bias toward low nuclear norm even in problems where reconstruction is not possible: in panel (c) of Figure <ref> the number of samples m=nr/4 is much smaller than those required to reconstruct a rank r ground truth matrix X^*. The optimization in (<ref>) is highly underdetermined and there are many possible zero-error global minima, but gradient descent still prefers a lower nuclear norm solution.The emerging story is that gradient descent biases us to a low nuclear norm solution, and we already know how having low nuclear norm can ensure generalization <cit.> and minimizing the nuclear norm ensures reconstruction <cit.>.Can we more explicitly characterize this bias?We see that we do not always converge precisely to the minimum nuclear norm solution. In particular, the choice of step size and initialization affects which solution gradient descent converges to.Nevertheless, as we formalize in Section <ref>, we argue that when U is full dimensional, the step size becomes small enough, and the initialization approaches zero, gradient descent will converge precisely to a minimum nuclear norm solution, i.e. to _X≽0X_* s.t. 𝒜(X)=y. § GRADIENT FLOW AND MAIN CONJECTURE The behavior of gradient descent with infinitesimally small step size is captured by the differential equation U̇_t := U_tt = -∇ f(U_t) with an initial condition for U_0.For the optimization in(<ref>) this isU̇_t = -𝒜^*(𝒜(U_tU_t^⊤)-y)U_t,where 𝒜^*:Ṟ^m→^n× n is the adjoint of 𝒜 and is given by 𝒜^*(r)=∑_ir_iA_i. 
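For concreteness, the following is a small numerical sketch of these dynamics and of the kind of simulation described above (random symmetric measurements, a planted low-rank p.s.d. X^*, a full-dimensional factor initialized near zero); it is not the exact setup behind the figures, and the step size and iteration count are illustrative and may need tuning.

import numpy as np

rng = np.random.default_rng(0)
n, m, r = 10, 60, 1                                   # dimension, measurements, planted rank
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2                    # symmetric measurement matrices A_i
A /= np.linalg.norm(A, axis=(1, 2), keepdims=True)
U_star = rng.standard_normal((n, r))
X_star = U_star @ U_star.T                            # planted low-rank p.s.d. X*
y = np.einsum('kij,ij->k', A, X_star)                 # y_i = <A_i, X*>

U = 1e-4 * rng.standard_normal((n, n))                # full-dimensional factor, init near zero
eta = 5e-3
for _ in range(50_000):
    resid = np.einsum('kij,ij->k', A, U @ U.T) - y    # residual A(UU^T) - y
    U -= eta * np.einsum('k,kij->ij', resid, A) @ U   # Euler step of U_dot = -A^*(r) U

X = U @ U.T
print("training error:", np.linalg.norm(np.einsum('kij,ij->k', A, X) - y))
print("nuclear norm of X vs X*:",
      np.linalg.svd(X, compute_uv=False).sum(),
      np.linalg.svd(X_star, compute_uv=False).sum())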
Gradient descent can be seen as a discretization of (<ref>), and approaches (<ref>) as the step size goes to zero.The dynamics (<ref>) define the behavior of the solution X_t=U_t U_t^⊤ and using the chain rule we can verify that Ẋ_t = U̇_tU_t^⊤ + U_tU̇_t^⊤ = -𝒜^*(r_t)X_t - X_t𝒜^*(r_t), where r_t = 𝒜(X_t)-y is a vector ofthe residual.That is, even though the dynamics are defined in terms of specific factorization X_t=U_t U_t^⊤, they are actually independent of the factorization and can be equivalently characterized asẊ_t = -𝒜^*(r_t)X_t - X_t𝒜^*(r_t).We can now define the limit pointX_∞(X_init) := lim_t→∞ X_t for the factorized gradient flow(<ref>) initialized at X_0=X_init.We emphasize that these dynamics are very different from the standard gradient flow dynamics of (<ref>) on X, corresponding to gradient descent on X, which take the form Ẋ_t = -∇ F(X_t) = -𝒜^*(r_t). Based on the preliminary experiments in Section <ref> and a more comprehensive numerical study discussed in Section <ref>, we state our main conjecture as follows:For any full rank X_init, if X̂=lim_α→ 0 X_∞(α X_init) exists and is a global optima for (<ref>) with 𝒜(X̂)=y, then X̂∈_X≽ 0 X_* s.t.𝒜(X) = y.Requiring a full-rank initial point demands a full dimensional d=n factorization in (<ref>).The assumption of global optimality in the conjecture is generally satisfied: for almost all initializations, gradient flow will converge to a local minimizer <cit.>, and when d=n any such local minimizer is also global minimum <cit.>.Since we are primarily concerned with underdetermined problems, we expect the global optimum to achieve zero error, i.e. satisfy 𝒜(X)=y.We already know from these existing literature that gradient descent (or gradient flow) will generally converge to a solution satisfying 𝒜(X)=y; the question we address here is which of those solutions will it converge to. The conjecture implies the same behavior for asymmetric problems factorized as X = UV^⊤ with gradient flow on (U,V), since this is equivalent to gradient flow on thep.s.d. factorization of [[ W X; X^⊤ Z ]].§ THEORETICAL ANALYSIS We will prove our conjecture for the special case where the matrices A_i commute, and discuss the more challenging non-commutative case. But first, let us begin by reviewing the behavior of straight-forward gradient descent on X for the convex problem in (<ref>). Warm up:Consider gradient descent updates on the original problem (<ref>) in X space, ignoring the p.s.d. constraint.The gradient direction ∇ F(X) = 𝒜^*(𝒜(X)-y) is always spanned by the mmatrices A_i.Initializing at X_init=0, we will therefore always remain in the m-dimensional subspace ℒ = { X=𝒜^*(s) | s∈^m }.Now consider the optimization problem min_X X^2_F s.t. 𝒜(X)=y.The KKT optimality conditions for this problem are 𝒜(X)=yand ∃ν s.t.X=𝒜^*(ν).As long as we are in ℒ, the second condition is satisfied, and if we converge to a zero-error global minimum, then the first condition is also satisfied.Since gradient descent stays on this manifold, this establishes that if gradient descent converges to a zero-error solution, it is the minimum Frobenius norm solution. Getting started: 𝐦=1 Consider the simplest case of the factorized problem when m=1 with A_1=A and y_1=y. The dynamics of (<ref>) are given by Ẋ_t =- r_t (A X_t + X_tA), where r_t is simply a scalar, and the solution for X_t is given by, X_t= exp( s_t A )X_0exp( s_t A ) where s_T = -∫_0^T r_tdt. 
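As a quick numerical sanity check of this closed form (a sketch with an arbitrary choice of A, X_0 and y), one can integrate the flow with small Euler steps while accumulating s_t, and compare against exp(s_t A) X_0 exp(s_t A):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2    # a single symmetric measurement
X0 = 0.1 * np.eye(n)                                  # full-rank initialization
X, s, y, dt = X0.copy(), 0.0, 1.0, 1e-4

for _ in range(20_000):
    r = np.sum(A * X) - y                             # residual <A, X> - y
    X = X - dt * r * (A @ X + X @ A)                  # Euler step of the flow
    s += -dt * r                                      # s_t = -integral of r_t

print("deviation from closed form:", np.abs(X - expm(s * A) @ X0 @ expm(s * A)).max())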
AssumingX̂ = lim_α→0X_∞ (α X_0) exists and 𝒜(X̂) = y,we want to show X̂ is an optimum for the following problemmin_X≽ 0X_*s.t.𝒜(X) = y.The KKT optimality conditions for (<ref>) are:∃ν∈ℝ^m s.t.𝒜(X) = y X ≽ 0 𝒜^*(ν) ≼ I(I - 𝒜^*(ν))X = 0We already know that the first condition holds, and the p.s.d. condition isguaranteed by the factorization of X. The remaining complementary slackness and dual feasibility conditions effectively require that X̂ is spanned by the top eigenvector(s) of A. Informally, looking to the gradient flow path above, for any non-zero y, as α→ 0 it is necessary that |s_∞|→∞ in order to converge to a global optima, thus eigenvectors corresponding to the top eigenvalues of A will dominate the span of X_∞(α X_init).What we can prove: Commutative 𝐀_𝐢_𝐢∈[𝐦] The characterization of the the gradient flow path from the previous section can be extended to arbitrary m in the case that the matrices A_i commute, i.e. A_iA_j = A_jA_i for all i,j. Defining s_T = -∫_0^T r_tdt – a vector integral, we can verify by differentiating that solution of(<ref>) is X_t = exp( 𝒜^*(s_t))X_0exp( 𝒜^*(s_t) )In the case where matrices A_i_i=1^m commute,ifX̂ = lim_α→ 0X_∞(α I) exists and is a global optimum for (<ref>) with 𝒜(X̂) = y,then X̂∈_X≽0X_* s.t. 𝒜(X) = y. It suffices to show that such a X̂ satisfies the complementary slackness and dual feasibility KKT conditions in (<ref>). Since the matrices A_i commute and are symmetric, they are simultaneously diagonalizable by a basis v_1,..,v_n, and so is 𝒜^*(s) for any s ∈ℝ^m. This implies that for any α, X_∞(α I) given by (<ref>) and its limit X̂ also have the same eigenbasis. Furthermore, since X_∞(α I) converges to X̂, the scalars v_k^⊤ X_∞(α I) v_k → v_k^⊤X̂ v_k for each k ∈ [n]. Therefore, λ_k(X_∞(α I)) →λ_k(X̂), where λ_k(·) is defined as the eigenvalue corresponding to eigenvector v_k and not necessarily the k^th largest eigenvalue.Let β = -logα, then λ_k(X_∞(α I)) = exp(2λ_k(𝒜^*(s_∞(β))) - 2β). For all k such that λ_k(X̂) > 0, by the continuity of log, we have 2λ_k(𝒜^*(s_∞(β))) - 2β - logλ_k(X̂) → 0 λ_k(𝒜^*(s_∞(β)/β)) - 1 - logλ_k(X̂)/2β→ 0.Defining ν(β) = s_∞(β)/β, we conclude that for all k such that λ_k(X̂)≠0,lim_β→∞λ_k(𝒜^*(ν(β))) = 1. Similarly, for each k such that λ_k(X̂) = 0,exp(2λ_k(𝒜^*(s_∞(β))) - 2β) → 0 exp(λ_k(𝒜^*(ν(β))) - 1)^2β→ 0.Thus, for every ϵ∈(0,1], for sufficiently large βexp(λ_k(𝒜^*(ν(β))) - 1) < ϵ^1/2β < 1 λ_k(𝒜^*(ν(β))) < 1.Therefore, we have shown that lim_β→∞𝒜^*(ν(β)) ≼ I and lim_β→∞𝒜^*(ν(β))X̂ = X̂ establishing the optimality of X̂ for (<ref>). Interestingly, and similarly to gradient descent on X, this proof does not exploit the particular form of the “control" r_t and only relies on the fact that the gradient flow path stays within the manifoldℳ = X = exp( 𝒜^*(s) )X_initexp( 𝒜^*(s) ) | s ∈ℝ^m. Since the A_i's commute, we can verify that the tangent space of ℳ at a point X is given by T_Xℳ = SpanA_iX + XA_i_i∈[m], thus gradient flow will always remain in ℳ.For any control r_t such that following Ẋ_t = -𝒜^*(r_t)X_t - X_t𝒜^*(r_t) leads to a zero error global optimum, that optimum will be a minimum nuclear norm solution. 
This implies in particular that the conjecture extends to gradient flow on (<ref>) even when the Euclidean norm is replaced by certain other norms, or when only a subset of measurements are used for each step (such as in stochastic gradient descent).However, unlike gradient descent on X, the manifold ℳ is not flat, and the tangent space at each point is different.Taking finite length steps, as in gradient descent, would cause us to “fall off" of the manifold.To avoid this, we must take infinitesimal steps, as in the gradient flow dynamics.In the case that X_init and the measurements A_i are diagonal matrices, gradient descent on (<ref>) is equivalent to a vector least squares problem, parametrized in terms of the square root of entries:Let x_∞(x_init) be the limit point of gradient flow on min_u∈ℝ^nA x(u)- y_2^2 with initialization x_init, where x(u)_i=u_i^2, A∈^m × n and y∈^m. If x̂ = lim_α→ 0 x_∞(α1⃗) exists and Ax̂ = y, then x̂∈_x∈^m_+x_1 s.t. Ax = y. The plot thickens: Non-commutative 𝐀_𝐢_𝐢∈[𝐦] Unfortunately, in the case that the matrices A_i do not commute, analysis is much more difficult. For a matrix-valued function F, texp(F_t) isequal to Ḟ_̇ṫexp(F_t) only when Ḟ_̇ṫ and F_t commute. Therefore, (<ref>) is no longer a valid solution for (<ref>).Discretizing the solution path, we can express the solution as the “time ordered exponential": X_t = lim_ϵ→0(∏_τ=t/ϵ^1 exp(- ϵ𝒜^*(r_τϵ) ) )X_0(∏_τ=1^t/ϵexp(- ϵ𝒜^*(r_τϵ) )),where the order in the products is important.IfA_i commute, the product of exponentials is equal to an exponential of sums, which in the limit evaluates to the solution in (<ref>). However, since in general exp(A_1)exp(A_2) ≠exp(A_1+A_2), the path (<ref>) is not contained in the manifold ℳ defined in (<ref>).It is tempting to try to construct a new manifold ℳ' such that SpanA_iX + XA_i_i∈[m]⊆ T_Xℳ' and X_0 ∈ℳ', ensuring the gradient flow remains in ℳ'. However, sinceA_i's do not commute, by combining infinitesimal steps along different directions, it is possible to move (very slowly) in directions that are not of the form 𝒜^*(s)X + X𝒜^*(s) for any s ∈ℝ^m. The possible directions of movements indeed corresponds to the Lie algebra defined by the closure of A_i_i=1^m under the commutator operator [A_i,A_j] := A_iA_j - A_jA_i. Even when m=2, this closure will generally encompass all of , allowing us to approach any p.s.d. matrix X with some (wild) control r_t. Thus, we cannot hope to ensure the KKT conditions for an arbitrary control as we did in the commutative case — it is necessary to exploit the structure of the residuals 𝒜(X_t) - y in some way.Nevertheless, in order to make finite progress moving along a commutator direction like [A_i,A_j]X_t + X_t[A_i,A_j]^⊤, it is necessary to use an extremely non-smooth control, e.g., looping 1/ϵ^2 times between ϵ steps in the directions A_i,A_j,-A_i,-A_j, each such loop making an ϵ^2 step in the desired direction.We expect the actual residuals r_t to behave much more smoothly and that for smooth control the non-commutative terms in the expansion of the time ordered exponential (<ref>) are asymptotically lower order then the direct term 𝒜^*(s) (as X_init→ 0).This is indeed confirmed numerically, both for the actual residual controls of the gradient flow path, and for other random controls. 
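A direct way to discretize this expression numerically (a sketch; a multiplicative update of this form is also what we use in Section <ref> to emulate staying on a valid steering path) is to replace each infinitesimal factor by a matrix exponential with a small step ε. Unlike an additive gradient step, this update keeps X symmetric p.s.d. and full rank for any step size; the step size and iteration count below are illustrative.

import numpy as np
from scipy.linalg import expm

def steering_step(X, A_list, y, eps):
    # one multiplicative step X <- e^{-eps A^*(r)} X e^{-eps A^*(r)}
    resid = np.array([np.sum(Ai * X) for Ai in A_list]) - y
    G = expm(-eps * sum(ri * Ai for ri, Ai in zip(resid, A_list)))
    return G @ X @ G

rng = np.random.default_rng(0)
n, m = 6, 8
A_list = [(B + B.T) / 2 for B in rng.standard_normal((m, n, n))]
u = rng.standard_normal((n, 1))
y = np.array([np.sum(Ai * (u @ u.T)) for Ai in A_list])   # planted rank-one X* = u u^T

X = 1e-3 * np.eye(n)
for _ in range(20_000):
    X = steering_step(X, A_list, y, eps=1e-2)
print("residual norm:",
      np.linalg.norm(np.array([np.sum(Ai * X) for Ai in A_list]) - y))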
§ EMPIRICAL EVIDENCEBeyond the matrix reconstruction experiments of Section <ref>, we also conductedexperiments with similarly simulated matrix completion problems, including problems where entries are sampled from power-law distributions (thus not satisfying incoherence),as well as matrix completion problem on non-simulated Movielens data. In addition to gradient descent, we also looked more directly at the gradient flow ODE (<ref>) and used a numericalODE solver provided as part of<cit.>. But we still uses a finite (non-zero) initialization.We also emulated stayingon a valid “steering path" by numerically approximating the time ordered exponential of <ref> — for a finite discretization η,instead of moving linearly in the direction of the gradient ∇ f(U) (like in gradient descent), we multiply X_t on right and left by e^-η𝒜^*(r_t). The results of these experiments are summarized in Figure <ref>. In these experiments, we again observe trends similar to those in Section <ref>.In some panels in Figure <ref>, we do see a discernible gap between the minimum nuclear norm global optima and the nuclear norm of the gradient flow solution with U_0_F=10^-4. This discrepancy could either bedue to starting at a non-limit point of U_0, or numerical issue arising from approximations to the ODE, or it could potentially suggest a weakening of the conjecture. Even if the later case were true, the experiments so far provide strong evidence for atleast approximate versions of our conjecture being true under a wide range of problems.*Exhaustive search Finally, we also did experiments on an exhaustive grid search over small problems, capturing essentially all possible problems of this size. We performed an exhaustive grid search for matrix completion problem instances in symmetric p.s.d. 3× 3 matrices.With m=4, there are 15 unique masks or {A_i}_i∈[4]'s that are valid symmetric matrix completion observations. For each mask, we fill the m=4 observations with all possible combinations of 10 uniformly spaced values in the interval [-1,1]. This gives us a total of 15× 10^4 problem instances. Of these problems instances, we discard the ones that do not have a valid PSD completion and run the ODE solver on every remaining instance with a random U_0 such that U_0_F=α̅, for different values of α̅. Results on the deviation from the minimum nuclear norm are reported in Figure <ref>.For small α̅=10^-5, 10^-3, most of instances of our grid search algorithm returned solutions with near minimal nuclear norms, and the maximum deviation is within the possibility of numerical error. This behavior also decays for α̅=1. § DISCUSSION It is becoming increasingly apparent that biases introduced by optimization procedures, especially for under-determined problems, are playing a key role in learning.Yet, so far we have very little understanding of the implicit biases associated with different non-convex optimization methods.In this paper we carefully study such an implicit bias in a two-layer non-convex problem, identify it, and show how even though there is no difference in the model class (problems (<ref>) and (<ref>) are equivalent when d=n, both with very high capacity), the non-convex modeling induces a potentially much more useful implicit bias.We also discuss how the bias in the non-convex case is much more delicate then in convex gradient descent: since we are not restricted to a flat manifold, the bias introduced by optimization depends on the step sizes taken.Furthermore, for linear least square problems (i.e. 
methods based on the gradients w.r.t. X in our formulation), any global optimization method that uses linear combination of gradients, including conjugate gradient descent, Nesterov acceleration and momentum methods, remains on the manifold spanned by the gradients, and so leads to the same minimum norm solution.This is not true if the manifold is curved, as using momentum or passed gradients will lead us to “shoot off” the manifold.Much of the recent work on non-convex optimization, and matrix factorization in particular, has focused on global convergence: whether, and how quickly, we converge to a global minima<cit.>.In contrast, we address the complimentary question of which global minima we converge to.There has also been much work on methods ensuring good matrix reconstruction or generalization based on structural and statistical properties<cit.>.We do not assume any such properties, nor that reconstruction is possible or even that there is anything to reconstruct—for any problem of the form (<ref>) we conjecture that (<ref>) leads to the minimum nuclear norm solution.Whether such a minimum nuclear norm solution is good for reconstruction or learning is a separate issue already well addressed by the above literature. We based our conjecture on extensive numerical simulations, with random, skewed, reconstructible, non-reconstructible, incoherent, non-incoherent, and and exhaustively enumerated problems, some of which is reported in Section <ref>.We believe our conjecture holds, perhaps with some additional technical conditions or corrections.We explain how the conjecture is related to control on manifolds and the time ordered exponential and discuss a possible approach for proving it. | http://arxiv.org/abs/1705.09280v1 | {
"authors": [
"Suriya Gunasekar",
"Blake Woodworth",
"Srinadh Bhojanapalli",
"Behnam Neyshabur",
"Nathan Srebro"
],
"categories": [
"stat.ML",
"cs.LG"
],
"primary_category": "stat.ML",
"published": "20170525175524",
"title": "Implicit Regularization in Matrix Factorization"
} |
[email protected] S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, Kolkata 700098, India [email protected] S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, Kolkata 700098, IndiaWe consider the effect of relativistic boosts on single particle Gaussian wave packets. The coherence of the wave function as measured by the boosted observer is studied as a function of the momentum and the boost parameter. Using various formulations of coherence it is shown that in general the coherence decays with the increase of the momentum of the state, as well as the boost applied to it. Employing a basis-independent formulation, we show however, that coherence may be preserved even for large boosts applied on narrow uncertainty wave packets. Our result is exemplified quantitatively for practically realizable neutron wave functions. 03.65.-w, 03.67.-aPreservation of quantum coherence under Lorentz boost for narrow uncertainty wave packets A. S. Majumdar December 30, 2023 =========================================================================================§ INTRODUCTION The realization that the physical world is both relativistic and quantum mechanical at the fundamental level has inspired the development of much of modern physics. Quantum information science that has origins in some key foundational questions <cit.> raised in the previous century, has undergone a rapid phase of development over the last several years. However, an overwhelming majority of such studies have been performed in the domain of nonrelativistic quantum information. A number of information theoretic protocols though rely for their implementation on photons for which there exists no nonrelativistic approximation.The relevance and impact of relativistic effects on the concepts of quantum information was first pointed out by Peres et al. <cit.>. In particular, considering a single qubit state in the framework of relativity, it was shown that the spin entropy of the qubit increases with respect to an inertial observer even due to pure boost as a resultof the coupling of the momentum degrees of freedom with the spin. The situation becomes worse in case of an arbitrary Lorentz transformation which may completely decohere a single qubit state forbidding single qubit communication without shared reference frames <cit.>. The study of relativistic quantum information is important not only due to the intricacies of the fundamental issues involved, but also due to its applications in diverse domains as discussed in several works. The relativistic generalization of the EPR experiment was first considered by Czachor <cit.>. The effects of observer dependence on entanglement have been widely studied by Fuentes et al. <cit.>. It has been observed that relativistic considerations impose additional constraints on the security of quantum key distribution <cit.>. Additionally, relativistic quantum information is essential to thestudy of the black hole information paradox <cit.>, and may be of relevance in information theoretic concepts applied to quantum gravity <cit.> and cosmology <cit.>. It has been recently realized that quantum coherence <cit.> is the most basic feature of quantumness of single systems responsible for superposition of quantum states, from which all quantum correlations arise in composite systems. Defined in a quantitative manner based on the framework of resource theory <cit.>,<cit.>,<cit.>quantum coherence may be exploited to perform quantum tasks. 
Several operational measures of quantum coherence have been proposed <cit.>, <cit.>, enabling it to be used for detection of genuine non-classicality in physical states. However, as is the case with entanglement, there exists no unique quantifier of coherence. Problems of physical consistency arising out of basis dependent formulations of coherence measures have been noted <cit.>. On the other hand, basis independent measures of coherence have also been formulated <cit.>, <cit.>, which manifest the intrinsic randomness contained in a quantum state. In the present work our motivation is to investigate the behaviour of quantum coherence in the relativistic scenario. Though relativistic quantum information has been studied earlier in the context of entropies of single systems as well as entanglement of composite systems <cit.>, the question as to how coherence behaves under relativistic transformations remains to be analysed. Our aim here is to partially fill this gap in the literature in the context of single particle states. Specifically, we study quantitatively the change in coherence of a single particle Gaussian state under the application of Lorentz boosts employing various coherence quantifiers. Our results exhibit a generic loss of coherence for the relativistic observer. However, using a basis independent measure we show that coherence may be preserved to a large extent for narrow wave packets, enabling the possibility of single qubit communication without sharing of reference frames. The plan of this paper is as follows. In the next section we present a brief overview of the different basis dependent measures and one basis independent measure that we have used in our subsequent analysis. In section III we provide a description of the behaviour of a single particle quantum state under relativistic boost. In section IV we compute the coherence of a spin-1/2 particle with Gaussian momentum distribution using the different measures of coherence. A specific example of a narrow uncertainty wave packet using neutron parameters is presented in section V, showing that basis independent coherence is indeed preserved under relativistic boosts. We make some concluding remarks in section VI. § MATHEMATICAL PRELIMINARIES: COHERENCE MEASURES The defining properties that any functional C mapping states ρ to non-negative real numbers should satisfy in order for it to be a proper coherence measure are <cit.>: (i) C(ρ) should vanish for any incoherent state, (ii) monotonicity under incoherent completely positive and trace preserving (ICPTP) maps, and (iii) convexity. Several candidate measures have been suggested which satisfy the above criteria:
* l_1-norm: C_l_1(ρ) = ∑_i ≠ j |ρ_ij|
* Relative Entropy of Coherence: C_rel. ent.(ρ) = S(ρ_diag) - S(ρ), where S is the von Neumann entropy and ρ_diag is the state containing only the diagonal elements of ρ.
* Skew Information: If an observable X is measured on the state ρ, the skew information is given by <cit.>, <cit.>, ℐ(ρ, X) = -1/2 Tr{[√(ρ),X]^2}. Because of the square root term this quantity cannot be expressed directly in terms of observables, but it is possible to set a nontrivial lower bound which can be measured experimentally. For a generic state of the form ρ = 1/2(1 + n⃗·Σ), the skew information corresponding to the observable Σ_3 is given by <cit.> ℐ(ρ,Σ_3) = (1-√(1-|n⃗|^2)) (n_1^2 + n_2^2), where n⃗ is the Bloch vector and {Σ_i} are the Pauli matrices.
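For a single qubit these quantifiers are straightforward to evaluate numerically. The following is a minimal sketch (for an arbitrary illustrative Bloch vector, not one of the states considered later) computing the l_1-norm, the relative entropy of coherence, and the skew information with respect to Σ_3 directly from the definitions above.

import numpy as np

def coherence_measures(rho, X):
    # l1-norm, relative entropy of coherence, and skew information I(rho, X)
    c_l1 = np.sum(np.abs(rho - np.diag(np.diag(rho))))          # sum of |off-diagonal| entries
    def entropy(m):
        ev = np.linalg.eigvalsh(m)
        ev = ev[ev > 1e-12]
        return float(-np.sum(ev * np.log2(ev)))
    c_rel = entropy(np.diag(np.diag(rho))) - entropy(rho)       # S(rho_diag) - S(rho)
    ev, V = np.linalg.eigh(rho)
    sqrt_rho = V @ np.diag(np.sqrt(np.clip(ev, 0, None))) @ V.conj().T
    comm = sqrt_rho @ X - X @ sqrt_rho
    skew = -0.5 * np.trace(comm @ comm).real                    # -1/2 Tr [sqrt(rho), X]^2
    return c_l1, c_rel, skew

paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]
n_vec = [0.5, 0.3, 0.4]                                         # illustrative Bloch vector, |n| < 1
rho = 0.5 * (np.eye(2, dtype=complex) + sum(c * s for c, s in zip(n_vec, paulis)))
print(coherence_measures(rho, paulis[2]))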
§.§ Basis Independent Measure of Coherence:The coherence quantifiers defined above are basis dependent, i.e, the amount of coherence in a quantum state quantified by those measures depends upon the bases in which the state is represented.Recently, a basis independent quantifier of coherence has been defined, which measures the intrinsic randomness contained in a quantum state. A Frobenius-norm based measure <cit.> is defined as 𝒞(ρ) = √(dd-1)‖ρ - ρ_⋆‖_Fwhere d is the dimension of the Hilbert space that spans ρ, and ρ_⋆ = 𝕀_d/d is the maximally mixed state. The Frobenius-norm is given by ‖ A ‖_F = √(Tr(A^†A)). Frobenius-norm is normalized to guarantee 𝒞(ρ) ∈ [0,1]. The most significant property of this measure is that it is basis independent, i.e, unitary invariant, 𝒞(ρ) = 𝒞(U ρ U^†) owing to the fact that the maximally mixed state ρ_⋆ is the only state that remains invariant under arbitrary unitary transformations.Eq.(<ref>) can be rewritten as 𝒞(ρ) = √(dd-1∑^d_j=1(λ_j - 1d)^2)where {λ_j} is the eigenvalues of ρ. The above quantityis a measure of purity, and𝒞^2(ρ) is proportional to the Brukner-Zeilinger information (BZI) <cit.> which is an operational notion defined as the sum of individual measures of information over a complete set of mutually complementary observables (MCO) <cit.>. BZI is itself invariant under the unitary transformation of the quantum state or equivalently of the choice of the measured set of MCO. § SINGLE PARTICLE QUANTUM STATE UNDER RELATIVISTIC BOOST:In Minkowski space-time positive energy, massive, single particle states furnish a spinor representation of the Poincaré group <cit.>. The basesof representation space are labelled by {|𝐩,j⟩}, where 𝐩 is the spatial components ofthe 4-momentum p^μ with p^0=√(𝐩^2+m^2). m is the rest mass of the particle. j is total angular momentum along a quantization axis and equal to the intrinsic spin s of the particle in its rest frame. Normalization is defined as <cit.> ⟨𝐩',j'|𝐩,j|=⟩δ(𝐩'-𝐩)δ_j'j. For Lorentz transformation Λ the basis state transforms under unitary transformation U(Λ) given byU(Λ)|𝐩,j⟩ = √((Λ p)^0p^0)∑_j'D_j j'(W(Λ,𝐩))|Λ𝐩,j'⟩We will assume j to be discrete. Λp is the spatial component of the Lorentz transformed 4-momentum. W(Λ,𝐩) is an element of the little group of the Poincaré group and D(W(Λ,𝐩)) is its unitary representation. For a massive particle W(Λ,𝐩) ∈ SO(3),hence D(W(Λ,𝐩)) ∈ SU(2).If the 4-momentum of the particleis parametrized byp^μ = (m coshβ, m sinhβf̂)where m be the mass of the particle, and the velocity of the frame O^Λ is 𝐯 = tanhα ê then,the representation of D(W(Λ,𝐩))is given by<cit.>D(W(Λ,𝐩)) = cosϕ21 + i sinϕ2 (Σ·𝐧̂)where cosϕ2 = coshα2coshβ2 + sinhα2sinhβ2(ê·𝐟̂)√(12 + 12coshαcoshβ + 12sinhαsinhβ(ê·𝐟̂)) sinϕ2𝐧̂ = sinhα2sinhβ2(ê×𝐟̂)√(12 + 12coshαcoshβ + 12sinhαsinhβ(ê·𝐟̂))with ϕ and 𝐧̂ being respectively, theangle and axis of Wigner rotation.A pure state may be written as |ψ⟩ = ∑_s∫ d𝐩 ψ(𝐩) |𝐩⟩⊗ a_s|s⟩w.r.t laboratory reference frame O. An observer O^Λ boosted by Lorentz transformation Λ w.r.t O sees the state (<ref>) as |ψ^Λ⟩ = ∑_s∫ d𝐩 √((Λ p)^0p^0)ψ(𝐩)a_s∑_s' D_s s'(W(Λ,𝐩))|Λ𝐩,s'⟩The state in equation(<ref>) is separable in spin and momentum, but not the state in equation (<ref>), since W(Λ,𝐩) is a function of momentum p and so is D(W(Λ,𝐩)). 
In equation (<ref>) the basis states have undergone a momentum dependent rotation known as the Wigner rotation, resulting in a coupling between spin and momentum which is known as spin-momentum entanglement <cit.>. A single particle spin-1/2 state <cit.> given by ρ = ∑_s_1,s_2∫∫ d𝐩_1 d𝐩_2 ψ(𝐩_1)ψ^∗(𝐩_2) a_s_1 a^∗_s_2|𝐩_1,s_1⟩⟨𝐩_2,s_2| may be traced over the momentum degrees of freedom to obtain the spin reduced density matrix, given by <cit.> ρ_s = ∑_s_1,s_2∫∫∫ d𝐩 d𝐩_1 d𝐩_2 ψ(𝐩_1)ψ^∗(𝐩_2) a_s_1 a^∗_s_2 ⟨𝐩|𝐩_1,s_1⟩⟨𝐩_2,s_2|𝐩⟩ = ∑_s_1,s_2∫ d𝐩 ψ(𝐩)ψ^∗(𝐩) a_s_1 a^∗_s_2|s_1⟩⟨s_2|. The density matrix in the frame of the boosted observer is given by ρ^Λ = ∑_s_1,s_2∫∫ d𝐩_1 d𝐩_2 √((Λ p_1)^0 (Λ p_2)^0/(p_1^0 p_2^0)) ψ(𝐩_1) ψ^∗(𝐩_2) a_s_1 a^∗_s_2 ∑_s_1', s_2' D_s_1 s_1'(W(Λ,𝐩_1))|Λ𝐩_1,s_1'⟩⟨Λ𝐩_2,s_2'| D^†_s_2 s_2'(W(Λ,𝐩_2)). The corresponding reduced density matrix is hence given by ρ^Λ_s = ∑_s_1,s_2, s_1', s_2'∫ d𝐩 |ψ(𝐩)|^2 a_s_1 a^∗_s_2 D_s_1 s_1'(W(Λ,𝐩))|s_1'⟩⟨s_2'| D^†_s_2 s_2'(W(Λ,𝐩)), where we have used δ(Λ𝐩_1 - Λ𝐩_2) = (p_1^0/(Λ p_1)^0) δ(𝐩_1 - 𝐩_2). The reduced density matrix defined in this way is not covariant, as the transformation law of the secondary variable (spin) depends not only upon the Lorentz transformation Λ, but also upon the primary variable (the spatial components of the 4-momentum). A boosted single particle Gaussian wave packet of the form e^{-𝐩^2/2σ^2} was studied in Ref. <cit.> to obtain the von-Neumann entropy of the spin reduced density matrix (SRDM) in both the rest and the Lorentz boosted frames. A larger entropy was obtained in the boosted frame O^Λ, indicating the loss of information. The entropies corresponding to the SRDM have also been obtained in other works assuming the 4-momentum to be discrete <cit.>.§ COHERENCE OF A SPIN-1/2 PARTICLE WITH GAUSSIAN MOMENTUM DISTRIBUTION UNDER RELATIVISTIC BOOST Let us consider the single particle state |ψ⟩ = (1/√(2))∫ d𝐩 ψ(𝐩) |𝐩⟩⊗ (|0⟩ + |1⟩) with momentum p^μ = (m coshβ, m sinhβ x̂) = (p^0, p_x x̂) w.r.t. the observer O (for simplicity we consider a one dimensional velocity of the particle). The density matrix corresponding to the state |ψ⟩ is given by ρ = (1/2)∫∫ d𝐩_1 d𝐩_2 ψ(𝐩_1)ψ^∗(𝐩_2)|𝐩_1⟩⟨𝐩_2|⊗ (1 + σ_1). Assuming ψ(𝐩) to be normalised, the SRDM corresponding to ρ is ρ_s = (1/2)(1 + σ_1). In the frame of O^Λ moving with velocity v = tanhα ẑ, the state of the particle is given by |ψ^Λ⟩ = (1/√(2))∫ d𝐩 √((Λ p)^0/p^0) ψ(𝐩) D(W(Λ,𝐩)) (|Λ𝐩,0⟩ + |Λ𝐩,1⟩), where D(W(Λ,𝐩)) = cos(ϕ_p_x/2) + i sin(ϕ_p_x/2) σ_2 and cos(ϕ_p_x/2) = cosh(α/2)cosh(β/2)/√(1/2 + (1/2)coshα coshβ), sin(ϕ_p_x/2) = sinh(α/2)sinh(β/2)/√(1/2 + (1/2)coshα coshβ), with the axis of rotation being along the direction ẑ×x̂ = ŷ. Substituting Eqs. (<ref>, <ref>) in Eq. (<ref>) we have |ψ^Λ⟩ = (1/√(2))∫ d𝐩 √((Λ p)^0/p^0) ψ(𝐩) [(cos(ϕ_p_x/2) + sin(ϕ_p_x/2))|Λ𝐩,0⟩ + (cos(ϕ_p_x/2) - sin(ϕ_p_x/2))|Λ𝐩,1⟩]. The density matrix corresponding to the state |ψ^Λ⟩ is given by ρ^Λ = (1/2)∫∫ d𝐩_1 d𝐩_2 √((Λ p_1)^0 (Λ p_2)^0/(p_1^0 p_2^0)) ψ(𝐩_1)ψ^∗(𝐩_2) [ A_p_x1A_p_x2|Λ𝐩_1,0⟩⟨Λ𝐩_2,0| + A_p_x1B_p_x2|Λ𝐩_1,0⟩⟨Λ𝐩_2,1| + A_p_x2B_p_x1|Λ𝐩_1,1⟩⟨Λ𝐩_2,0| + B_p_x1B_p_x2|Λ𝐩_1,1⟩⟨Λ𝐩_2,1|], where A_p_xi = (cos(ϕ_p_xi/2) + sin(ϕ_p_xi/2)) and B_p_xi = (cos(ϕ_p_xi/2) - sin(ϕ_p_xi/2)). Using Eq. (<ref>) the SRDM corresponding to ρ^Λ is given by ρ^Λ_s = (1/2) ∫ d𝐩 |ψ(𝐩)|^2 [ A_p_x^2|0⟩⟨0| + A_p_xB_p_x(|0⟩⟨1| + |1⟩⟨0|) + B_p_x^2|1⟩⟨1|]. We will now calculate ρ^Λ_s for two particular forms of ψ(𝐩).
Since we have assumed the velocity of the particle to be along the x-axis, we will consider the following forms of ψ(𝐩) = f(p_x)δ(p_y)δ(p_z), with f(p_x) given by * case (i): (corresponding to the Gaussian wave packet centred at zero) f(p_x) = (1/(√(π)σ)^1/2) e^{-(1/2)(p_x/σ)^2} * case (ii): (corresponding to the Gaussian wave packet centred at 𝔭) f(p_x) = (1/(√(π)σ)^1/2) e^{-(1/2)((p_x-𝔭)/σ)^2}, where 𝔭 is a constant. Eq. (<ref>) may hence be written as ρ^Λ_s = (1/2) ∫ dp_x |f(p_x)|^2 [ A_p_x^2|0⟩⟨0| + A_p_xB_p_x(|0⟩⟨1| + |1⟩⟨0|) + B_p_x^2|1⟩⟨1|]. Henceforth we will use p instead of p_x for convenience. Now, substituting coshβ = √(1 + p^2/m^2), sinhβ = p/m, coshα = b and sinhα = a, we get A_p^2 = 1 + (a p/m)/(1 + b√(1 + p^2/m^2)), B_p^2 = 1 - (a p/m)/(1 + b√(1 + p^2/m^2)), A_pB_p = (b + √(1 + p^2/m^2))/(1 + b√(1 + p^2/m^2)). The components of the SRDM ρ^Λ_s are given by ρ^Λ_s,11 = (1/2)∫ dp |f(p)|^2 (1 + (a p/m)/(1 + b√(1 + p^2/m^2))), ρ^Λ_s,22 = (1/2)∫ dp |f(p)|^2 (1 - (a p/m)/(1 + b√(1 + p^2/m^2))), ρ^Λ_s,12 = ρ^Λ_s,21 = (1/2)∫ dp |f(p)|^2 ((b + √(1 + p^2/m^2))/(1 + b√(1 + p^2/m^2))). Under the approximation (σ/m) ≪ 1, we obtain the components of the density matrix analytically, given by ρ^Λ_s,11 = ρ^Λ_s,22 = 1/2, ρ^Λ_s,12 = ρ^Λ_s,21 = 1/2 - (1/8)((coshα - 1)/(coshα + 1)) (σ/m)^2. For larger uncertainty we calculate the integral numerically. We first plot the dependence of ρ^Λ_s,12 = ρ^Λ_s,21 on the uncertainty σ of the state and the rapidity parameter α of the boosted observer in figures <ref> and <ref>. The purpose of studying ρ^Λ_s,12 = ρ^Λ_s,21 is that in the basis dependent framework the coherence is manifested by the off-diagonal elements of the density matrix. So, a decrease in ρ^Λ_s,12 implies decoherence. This results from the spin-momentum entanglement induced by the Wigner rotation <cit.>. The computations displayed in the plots are done taking the mass m ≈ 0.5 MeV (case of an electron) and 𝔭 = 1/(2√(3)) MeV (momentum of an electron moving with half of the speed of light). It can be checked using Eq. (<ref>) that for small values of (σ/m) the analytical result provides a good approximation to the numerical calculation. As σ→ 0, it can be observed that ρ^Λ_s,12→ 1/2. The amount of this decoherence increases with α, as seen in the plots. In the case of the wave packet centred at zero (Figure <ref>), there is no decoherence as σ→ 0, whatever the value of α. This is because zero uncertainty implies that the particle is at rest and the pure boost due to O^Λ does not induce a Wigner rotation. However, for σ≠ 0 there is decoherence due to spin-momentum entanglement. In the case of the wave packet centred at 𝔭 (Figure <ref>), decoherence due to spin-momentum entanglement is again clearly exhibited. Note that there is a Wigner rotation due to two noncollinear boosts corresponding to the motion of the particle and the observer, respectively. When the uncertainty tends to zero, i.e., σ→ 0, implying that the momentum of the particle tends to a single sharp value 𝔭, there is no spin-momentum entanglement. But since the quantum state will undergo a pure rotation (cos(ϕ_𝔭/2) 1 + i sin(ϕ_𝔭/2) Σ_2) in this case, the basis dependent density matrix elements undergo a corresponding change. This is evident from the drop in the values of ρ^Λ_s,12 with increasing α even for σ→ 0 in Figure <ref>. Now, with the components of ρ^Λ_s, we will study the change in coherence under relativistic boost using the different coherence quantifiers mentioned in Section II. First we study the basis dependent quantifiers. Note first that in this case the l_1-norm (<ref>) is simply C_l_1 = 2ρ^Λ_s,12, and hence its values can be read off from figures <ref> and <ref>.
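The momentum integrals defining the components of ρ^Λ_s are one dimensional and can be checked numerically. The sketch below (ours, with the electron parameters quoted above; variable names are illustrative) compares the exact integral for ρ^Λ_s,12 with the small-(σ/m) approximation for the wave packet centred at zero.

```python
import numpy as np
from scipy.integrate import quad

def rho12_boosted(sigma, alpha, m=0.5, p0=0.0):
    # rho^Lambda_{s,12} = 1/2 * Int |f(p)|^2 (b + sqrt(1+p^2/m^2)) / (1 + b sqrt(1+p^2/m^2)) dp
    # f(p): Gaussian of width sigma centred at p0 (cases (i) and (ii) of the text)
    b = np.cosh(alpha)
    f2 = lambda p: np.exp(-((p - p0) / sigma) ** 2) / (np.sqrt(np.pi) * sigma)
    integrand = lambda p: f2(p) * (b + np.sqrt(1.0 + (p / m) ** 2)) / (1.0 + b * np.sqrt(1.0 + (p / m) ** 2))
    val, _ = quad(integrand, p0 - 10 * sigma, p0 + 10 * sigma)
    return 0.5 * val

def rho12_small_sigma(sigma, alpha, m=0.5):
    # analytic result for (sigma/m) << 1, wave packet centred at zero
    return 0.5 - 0.125 * ((np.cosh(alpha) - 1.0) / (np.cosh(alpha) + 1.0)) * (sigma / m) ** 2

sigma, alpha = 0.05, 2.0   # sigma in MeV, rapidity of the boost
print(rho12_boosted(sigma, alpha))       # numerical integral
print(rho12_small_sigma(sigma, alpha))   # nearly identical for small sigma/m
```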
The maximum value of the coherence of the state ρ_s is 1 when measured by the l_1-norm and the skew information (<ref>), and ln2 when calculated with the relative entropy (<ref>). The coherence corresponding to ρ_s^Λ using the relative entropy measure is plotted in figures <ref> and <ref> for wave packets centred at zero and 𝔭, respectively. Similarly, the skew information versus the uncertainty and the boost parameter is plotted in figures <ref> and <ref>. As discussed earlier, it is clear from the plots that in the case of wave packets centred at zero (figures <ref> and <ref>), the coherence is maximum if either α or σ goes to zero. For wave packets centred at 𝔭 (figures <ref> and <ref>), the coherence attains its maximum value only when α = 0. The reason for the sharp edge in the plot of the skew information is the positive square-root in equation (<ref>). The above plots all correspond to basis dependent measures of coherence. If we demand that coherence should represent a physical property of the system independent of the choice of basis, we should consider the coherence of a single particle quantum state under relativistic boost using a basis independent measure. Now, using the Frobenius norm based measure (<ref>) we compute the coherence C_F of the state ρ_s^Λ. This is displayed in figures <ref> and <ref> for case (i) and case (ii), respectively. It can be seen from both figures that the value of C_F does not fall off with increasing boost α when σ goes to zero. This is a significant result even when the wave packet is centred at 𝔭, contrasting with the case of all of the basis dependent measures, i.e., C_l_1 from figure <ref>, C_rel.ent from figure <ref>, and the skew information I from figure <ref>. The invariance of the Frobenius norm under unitary transformations <cit.> leads to the preservation of this basis independent measure of coherence for small uncertainty wave packets, since a pure basis rotation is unable to impact the value of coherence even for large boosts. So, suppose there are two parties Alice (O) and Bob (O^Λ) who do not share any reference frame, and Alice possesses a single party state whose spread in momentum is narrow enough. Then Bob, a relativistically moving observer, can access the qubit from his own frame, in which he will not see the qubit decohered. It can be checked that such a feature would also be obtained by using more general 3-dimensional wave packets. However, when the value of (σ/m) increases, decoherence becomes effective due to spin-momentum entanglement for large α, as expected. § EXAMPLES Let us now consider the specific case of a narrow uncertainty wave packet. For the situation in which the particle is nonrelativistic in Alice's frame, having low momentum, there exist several techniques to produce narrow uncertainty wave packets, such as using hydrogen atoms cooled in the millikelvin range <cit.>, or ultracold neutrons (UCN) having average kinetic energy < 300 neV. UCN, because of their low energy, are very sensitive to magnetic, gravitational and material potentials <cit.>. Their significance in quantum gravity experiments has been proposed <cit.>, <cit.>. It is regarded that a gravitational field would make a qubit decohere <cit.>, and hence it would be interesting to apply the Frobenius norm based measure of coherence in examples involving the action of gravity on quantum states.
Below we provide a particular example of the computation of the Frobenius norm based measure of coherence using neutron parameters. Let us consider the state |ψ⟩ = (1/(√(π)σ)^3/2) ∫ d𝐩 e^{-𝐩^2/(2σ^2)} |𝐩⟩⊗|0⟩ with respect to the frame O, where 𝐩 = (p_x,p_y,p_z) is the 3-momentum of the particle and p^0 = √(𝐩^2 + m^2). The boosted observer O^Λ has velocity v = tanhα ẑ. Thus, using the equations (<ref>, <ref>, <ref>) we find the representation of Wigner's little group given by D(W(Λ,𝐩)) = (1/[ (p^0 + m) (p^0 coshα + p_z sinhα + m) ]^1/2) × [ (p^0 + m)cosh(α/2) + p_z sinh(α/2) - i sinh(α/2)(-p_xσ_y + p_yσ_x) ]. From the above expression we can calculate the transformed state |ψ^Λ⟩. The SRDM corresponding to |ψ^Λ⟩ is given by ρ^Λ_s = (1/(√(π)σ)^3) ∫ d𝐩 e^{-𝐩^2/σ^2} [ M/(AB)   0 ;   0   N/(AB) ], where A = (p^0 + m), B = (p^0 coshα + p_z sinhα + m), M = A^2 cosh^2(α/2) + p^2_z sinh^2(α/2) + A p_z sinhα, N = (p^2_x + p^2_y) sinh^2(α/2). Using equation (<ref>) we obtain the coherence of the state ρ^Λ_s, which is displayed in figure <ref>, where we have used the rest mass of the neutron, 939.36 MeV. It can be seen that the loss of coherence is negligible even for large values of the boost α when (σ/m) is small. For a narrow uncertainty wave packet the components of the Bloch vector can be obtained analytically <cit.> in the approximation (σ/m) ≪ 1: ρ^Λ_s = (1/2)[ 1+n_z   0 ;   0   1-n_z ], where n_z = 1 - ( (σ/(2m)) tanh(α/2) )^2. Using the above formula, the Frobenius norm measure of coherence in this case turns out to be 𝒞(ρ^Λ_s) = 1 - ( (σ/(2m)) tanh(α/2) )^2. In the case of the UCN the upper bound of the kinetic energy is around 300 neV. Assuming this value to represent the upper bound of σ, from Eq. (<ref>) we see that the loss of information due to decoherence resulting from relativistic spin-momentum entanglement is rather negligible, of the order ∼ 10^-30. There is another kind of experimentally available neutron, called the thermal neutron, whose average kinetic energy is around 0.025 eV <cit.>. With the corresponding value of σ it can be checked that the loss of coherence is again negligible, of the order ∼ 10^-20. § CONCLUSIONS In the present work we have studied the behaviour of various coherence quantifiers under relativistic boosts. We find, using several measures of coherence such as the l_1-norm, the relative entropy of coherence <cit.>, and the skew information <cit.>, that a relativistic observer measures a reduced value of coherence compared to the coherence of the pure quantum state in its rest frame. Such a result follows, as expected, from the coupling of the spin and momentum degrees of freedom that originates due to the Wigner rotation encountered by the single particle quantum state under relativistic boost <cit.>. The above form of decoherence is a generic feature obtained for all measures of coherence, including a basis independent formulation <cit.> that we have employed here. The most significant aspect of our results is, however, the preservation of coherence measured through the basis independent Frobenius norm for the case of narrow uncertainty wave packets. We have shown explicitly using neutron state parameters that the loss of coherence is negligible not only for ultracold but for thermal neutrons as well. This makes it possible for a relativistic observer to recognize a narrow uncertainty wave packet as a pure state with the help of the above measure. Our analysis indicates that in order to place coherence as a resource in a relativistic framework, basis independent formulations are necessary.
Recently, resource theory of asymmetry or reference framehas emerged which treats individual formulations of coherence measures as special cases <cit.>. It has been noted that both operational and geometric perspectives are in general significant for resource theory <cit.>-<cit.>,<cit.>,<cit.>. Since Frobenius-norm has a geometric perspective <cit.>, the basis independent measure of coherence <cit.>employed here clearly has geometric interpretation.Moreover, the Frobenius norm based measure is square of the BZI and hence,has an operational notion too. There are several protocols of communication using the BZI, such as in the case of quantum state estimation <cit.>, quantum teleportation <cit.>, and violation of Bell inequalities <cit.>. Following the formulation <cit.>, it may be feasible in a relativistic framework for Bob to perform quantum state estimation of the qubit possessed by Alice. Communication using single partite states without sharing reference frames <cit.> may thus indeed be possible Acknowledgements: ASM acknowledges support from the project SR/S2/LOP- 08/2013 of the DST, India. 1000epr A. Einstein, D. Podolsky, and N. Rosen, Phys. Rev. 47, 777 (1935).schrod E. Schrodinger, Proc. Cambridge Philos. Soc. 31, 553 (1935); 32, 446 (1936).bell J. S. Bell, Physics (Long Island City, N.Y.) 1, 195 (1964); J. F. Clauser, M. A. Horne, A. Shimony et al., Phys. Rev. Lett. 23, 880 (1969).peres A. Peres, P. F. Scudo and D. R. Terno, Phys. Rev. Lett. 88, 230402 (2002).peresrev A. peres and D. R. Terno, Rev. Mod. Phys. 76, 93 (2004).17 S. D. Bartlett, T. Rudolph, R. W. Spekkens, Phys. Rev. Lett. 91, 027901 (2003).18 S. D. Bartlett, D. R. Terno, Phys. Rev. A 71, 012302 (2005).czachor M. Czachor, Phys. Rev. A 55, 72 (1997).fuentes I. Fuentes-Schuller and R. B. Mann, Phys. Rev. Lett. 95, 120404 (2005); P. M. Alsing, I. Fuentes-Schuller, R. B. Mann, and T. E. Tessier, Phys. Rev. A 74, 032326 (2006);G. Adesso, I. Fuentes-Schuller, and M. Ericsson, Phys. Rev. A. 76, 062112 (2007).fuentes_inrt Paul M Alsing and Ivette Fuentes, Class. Quantum Grav. 29, 224001 (2012).barrett J. Barrett, L. Hardy, and A. Kent, Phys. Rev. Lett. 95, 010503 (2005).bh S. Lloyd, Phys. Rev. Lett. 96, 061302 (2006); J. Eisert, M. Cramer, and M. B. Plenio Rev. Mod. Phys. 82, 277 (2010).qg E. Livine and D. Terno, Rev. D 75 084001 (2007).cosmo A. S. Majumdar, D. Home and S. Sinha, Phys. Lett. B 679, 167 (2009);J. Maldacena, Progress of Physics, 64, 10 (2016).3 T. Baumgratz, M. Cramer, M. B. Plenio, Phys. Rev. Lett. 113, (2014) 140401. 1 M. Horodecki, J. Oppenheim, Int. J. Mod. Phys. B 27, 1345019 (2013). 2 F. G. S. L. Brandão, G. Gour, Phys. Rev. Lett. 115, 070503 (2015). 6 G. Gour, R. W. Spekkens, New J. Phys. 10, 033023 (2008). 7 G. Gour, I. Marvian, R. W. Spekkens, Phys. Rev. A 80, 012307 (2009). 8 I. Marvian, R. W. Spekkens, New J. Phys. 15, 033001 (2013). 9 I. Marvian, R. W. Spekkens, Phys. Rev. A 90, 062110 (2014). 10 I. Marvian, R. W. Spekkens,Nat. Commun. 5, 3821 (2014). 11 I. Marvian, R. W. Spekkens, P. Zanardi, Phys. Rev. A 93, 052331 (2016). 12 I. Marvian, R. W. Spekkens, Phys. Rev. A 94, 052324 (2016). 4 D. Girolami, Phys. Rev. Lett. 113, (2014) 170401. 5 C. Napoli, T. R. Bromley, M. Cianciaruso, M. Piani, N. Johnston, G. Adesso, Phys. Rev. Lett. 116, (2016) 150502.13 E. Chitambar, G. Gour, arXiv:1602.06969. 14 Y. Yao, G. H. Dong, X. Xiao & C. P. Sun, Scientific Reports 6, Article number: 32010 (2016). 15 W. C. Wang, M. F. Fang, arXiv:1701.05110. 
28 <http://www.dunningham.org/papers/PALGE_2013_PhD_Thesis.pdf>ent J. Dunningham, V. Palge, and V. Vedral Phys. Rev. A 80, 044302 (2009); A. G. S. Landulfo and A. C. Torres Phys. Rev. A 87, 042339 (2013).19 M. Banik, P. Deb, S. Bhattacharya, arXiv:1408.6653. 20 C. Brukner, A. Zeilinger, Phys. Rev. Lett. 83, 3354 (1999). 21 C. Brukner, A. Zeilinger, Phys. Rev. A 63, 022113 (2001).24 Weinberg. S, 1995, Quantum Theory of Fields: Vol-I (Cambridge Univ. Press, Cambridge). 26 T. F. Jordan, A. Shaji, E. C. G. Sudarshan, Phys. Rev. A 73, 032104 (2006). 30 F. R. Halpern, Special Relativity and Quantum Mechanics (Prentice-Hall, Englewood Cliffs, NJ,1968). 27 V. Palge, J. Dunningham, Annals of Physics 363 (2015) 275-304.29 T. F. Jordan, A. Shaji, E. C. G. Sudarshan, Phys. Rev. A 75, 022101 (2007).34 <http://lansce.lanl.gov/facilities/ultracold-neutrons/about.php> 35 G. V. Kulin, A. I. Frank, S. V. Goryunov, D. V. Kustov, P. Geltenbort, M. Jentschel, A. N. Strepetov, V. A. Bushuev, arXiv: 1502.03243. 36 T. Jenke, G. Cronenberg, M. Thalhammer, T. Rechberger, P. Geltenbort, H. Abele, arXiv: 1510.03078.37 I. Pikovski, M. Zych, F. Costa, C. Brukner, Nature Physics 11, 668-672 (2015).38 N. J. Carron,An Introduction to the Passage of Energetic Particles Through Matter, (Taylor and Francis, 2007), p 308. 23 V. Vedral, M. B. Plenio, M. A. Rippin, P. L. Knight, Phys. Rev. Lett. 78, 2275 (1997).22 R. Bhatia, Matrix Analysis (Springer, 1997) 31 J. Rehacek, Z. Hradil, Phys. Rev. Lett. 88, 130401 (2002). 32 J. Lee, M. S. Kim, Phys. Rev. Lett. 84, 4236 (2000). 33 C. Brukner, M. Zukowski, A.Zeilinger,arXiv:quant-ph/0106119. | http://arxiv.org/abs/1705.09265v1 | {
"authors": [
"Riddhi Chatterjee",
"A. S. Majumdar"
],
"categories": [
"quant-ph",
"gr-qc",
"hep-th"
],
"primary_category": "quant-ph",
"published": "20170525171355",
"title": "Preservation of quantum coherence under Lorentz boost for narrow uncertainty wave packets"
} |
Performance Optimization of Co-Existing Underlay Secondary Networks Pratik Chakraborty Bharti School of Telecom. Technology and ManagementIndian Institute of Technology DelhiNew Delhi-110016, IndiaEmail: [email protected] Shankar Prakriya Department of Electrical Engineering and Bharti School of Telecom. Technology and ManagementIndian Institute of Technology DelhiNew Delhi-110016, IndiaEmail: [email protected] 30, 2023 ========================================================================================================================================================================================================================================================================================================================================================================================= In this paper, we analyze the throughput performance of two co-existing downlink multiuser underlay secondary networks that use fixed-rate transmissions. We assume that the interference temperature limit (ITL) is apportioned to accommodate two concurrent transmissions using an interference temperature apportioning parameter so as to ensure that the overall interference to the primary receiver does not exceed the ITL. Using the derived analytical expressions for throughput, when there is only one secondary user in each network, or when the secondary networks do not employ opportunistic user selection (use round robin scheduling for example), there exists a critical fixed-rate below which sum throughput with co-existing secondary networks is higher than the throughput with a single secondary network. We derive an expression for this critical fixed-rate. Below this critical rate, we show that careful apportioning of the ITL is critical to maximizing sum throughput of the co-existing networks. We derive an expression for this apportioning parameter.Throughput is seen to increase with increase in number of users in each of the secondary networks. Computer simulations demonstrate accuracy of the derived expressions.§ INTRODUCTIONA rapid increase in wireless devices and servicesin the past decade or so has led to a demand for very high data rates over the wireless medium. With such prolific increase in data traffic, mitigating spectrum scarcity and more efficient utilization of under-utilized spectrum has drawn attention of researchers both in academia and in the industry. Cognitive radios (CR) are devices that have shown promise in alleviating these problems of spectrum scarcity and low spectrum utilization efficiencies. In underlay mode of operation of cognitive radios, both secondary (unlicensed) and primary (licensed) users co-exist and transmit in parallel such that the total secondary interference caused to the primary user is below a predetermined threshold <cit.> referred to as the interference temperature limit (ITL). This ensures that primary performance in terms of throughput or outage is maintained at a desired level. Most of the analysis to date in underlay CR literature is confined to one secondary node transmitting with full permissible power and catering to its own set of receivers, while maintaining service quality of the primary network. For such secondary networks, performance improvement is achieved by exploiting diversity techniques <cit.>, resource allocation <cit.>, increasing the number of hops <cit.>, etc. Cognitive radios have attracted research interest due to the possibility of great increase in spectrum utilization efficiency. 
Researchers have proposed the ideaof concurrent secondary transmissions to further increase throughput (and therefore spectrum utilization efficiency), where two or more cognitive femtocells reuse the spectrum of a macrocelleither in a overlay, interweave or underlay manner <cit.>. By deploying femtocells, operators can reduce the traffic on macro base stations and also improve data quality among femtocell mobile stations due to short range communication. Toimplement such an underlay scheme, the major hindrance ismitigation of interferences among inter-femtocell users and careful handling ofinterferences from femtocell transmitters to the users of the macro cell <cit.>. A comprehensive survey of such heterogeneous networks, their implementation and future goals can be found in <cit.> (and references therein). In this paper, we consider two co-existing downlink multiuser underlay networks.We show that throughput with two co-existing secondary networks is larger than with one secondary network in some situations. Since throughput performance is ensured, this implies the possibility of increase in spectrum utilization efficiency. The main contributions of our paper are as follows: * Unlike other works onco-existing secondary networks that focus onoptimization <cit.> and game theoretic approaches <cit.>, we present an analytical closed form sum throughput expression for two co-existing secondary multiuser downlink networks using fixed-rate transmissions by the secondary nodes. * We evaluate analytically the maximum secondary fixed rate by sources that yields higher throughput with concurrent transmissions in two co-existing secondary networks. Beyond this rate,switching to a single secondary transmission is better. * We propose an optimal ITL apportioning parameter to further improve the sum throughput performance when two secondary sources transmit at the same time. * We show that sum throughput improves with user selection in individual secondary networks.The derived expressions and insights are a useful aid to system designers. § SYSTEM MODEL AND PROBLEM FORMULATIONWe consider two cognitive underlay downlink networks[Although primary and secondary networks are often assumed to be licensed and unlicensed users respectively, this need not always be the case.They can indeed be users of the same network transmitting concurrently to increase spectrum utilization efficiency. The same logic extends to two co-existing secondary networks. This eliminates most of the difficulties associated with interference channel estimation, security, etc.], where two secondary transmitters S_1 and S_2 transmit symbols concurrently in the range of a primary network by selecting their best receivers R_1i^* (among R_1i receivers, i ∈ [1,L]) and R_2i^* (among R_2i receivers, i ∈ [1,M]) respectively, from their cluster of users (Fig. <ref>). We assume that the two secondary networksare located relatively far apart so that the same frequency can be reused by S_1 and S_2 concurrently. We ensure that the total secondary interference caused to the primary receiver R_P is below ITL by careful apportioning of power between S_1 and S_2.All channels are assumed to be independent, and ofquasi-static Rayleigh fading type. The channels between S_1 and R_1i are denoted by h_1i∼ CN(0,1/λ_11), i ∈ [1,L]. The channels between S_2 and R_2i are denoted by h_2i∼ CN(0,1/λ_22), i ∈ [1,M]. Due to concurrent secondary transmissions, each transmitter interferes with the receivers of the other cluster. 
The interference channels between S_1 and R_2i are denoted by g_1i∼ CN(0,1/μ_12), i ∈ [1,M], with g_1i^* being the channel to the intended receiver R_2i^*. The interference channels between S_2 and R_1i are denoted by g_2i∼ CN(0,1/μ_21), i ∈ [1,L], with g_2i^* being the channel to the intended receiver R_1i^*. The channels to R_P from S_1 and S_2 are denoted by g_1P∼ CN(0,1/μ_1P) and g_2P∼ CN(0,1/μ_2P) respectively. We neglect primary interference at the secondary nodes assuming the primary transmitter to be located far away from the secondary receivers, which is a common assumption in CR literature, and well justified on information theoretic grounds <cit.>, <cit.>. Zero-mean additive white Gaussian noise of variance σ_n^2 is assumed at all terminals. As in all underlay networks, it is assumed that S_1 and S_2 can estimate |g_1P|^2 and |g_2P|^2 respectively by observing the primary reverse channel, or using pilots transmitted by R_P.In every signaling interval, S_1 transmits unit energy symbols x with power P_S1 = α I_P/|g_1P|^2 and S_2 transmits unit energy symbols z with power P_S2 = (1-α) I_P/|g_2P|^2, where I_P denotes the ITL, and 0<α<1 denotes the power allocation parameter which apportions I_P between S_1 and S_2 respectively. We use peak interference type of power control at S_1 and S_2 instead of limiting the transmit powers with a peak power due to the following reasons: * It is well known that the performance of CR networks exhibits an outage floor after a certain peak power and does not improve beyond a point when transmit powers are limited by interference constraints. * Since sufficient peak power is typically available, this assumption is quite reasonable. It is in this regime where cognitive radios are expected to operate. Such an assumption is also common in prior underlay CR literature <cit.>. * It keeps the analysis tractable, leading to precise performance expressions that offer useful insights. It also allows us to derive expressions for important parameters of practical interest in the normal range of operation of secondary networks, and can yield insights of interest to system designers.The received signals (y_R_1i and y_R_2i) at R_1i and R_2i can be written as follows: rCl y_R_1i=√(αI_P/|g_1P|^2)h_1ix + √((1-α) I_P/|g_2P|^2)g_2iz + n_R_1i, i ∈[1,L] y_R_2i=√((1-α) I_P/|g_2P|^2)h_2iz + √(αI_P/|g_1P|^2)g_1ix + n_R_2i, i ∈[1,M], where n_R_1i,n_R_2i∼ CN(0,σ^2_n) are additive white Gaussian noise samples at R_1i and R_2i respectively. When transmitters S_1 and S_2 select the receivers R_1i^* and R_2i^* with strongest link to them in their individual cluster, the instantaneous signal-to-interference-plus-noise ratios (SINRs) Γ_1 and Γ_2 at R_1i^* and R_2i^* can be written asfollows:rCl Γ_1=αI_P max_i ∈[1,L][|h_1i|^2]/|g_1P|^2 /(1-α) I_P |g_2i^*|^2/|g_2P|^2 + σ_n^2 Γ_2=(1-α) I_P max_i ∈[1,M][|h_2i|^2]/|g_2P|^2 /αI_P |g_1i^*|^2/|g_1P|^2 + σ_n^2. We note that the random variables |h_ij|^2 and |g_ij|^2 in (<ref>) follow the exponential distribution with mean values 1/λ_ii and 1/μ_ij respectively. In the following section, we derive sum throughput expression for this co-existing secondary network. It gives a measure of spectrum utilization with or without concurrent transmissions by sources in co-existing secondary networks. 
§ SUM THROUGHPUT OF THE SECONDARY NETWORK FOR FIXED RATE TRANSMISSION SCHEME When the secondary nodes transmit with a fixed rate R, the sum throughput τ_sum is given by: τ_sum = (1-p_out_1)R + (1-p_out_2)R, where p_out_1 and p_out_2 are the outage probabilities of the two secondary user pairs S_1-R_1i^* and S_2-R_2i^*, respectively.§.§ Derivation of p_out_1: The outage probability p_out_1 is defined as follows: p_out_1 = Pr{Γ_1 < γ_th}, where γ_th = 2^R-1. For notational convenience, we define the random variable X = max_i ∈ [1,L][|h_1i|^2]. Clearly, it has cumulative distribution function (CDF) F_X(x) = (1 - e^{-λ_11x})^L. Thus, p_out_1 can be rewritten and evaluated as follows: p_out_1 = Pr{X < ((1-α)/α)γ_th(|g_1P|^2/|g_2P|^2)|g_2i^*|^2 + (γ_thσ_n^2/(αI_P))|g_1P|^2} = 𝔼[ ( 1 - e^{-λ_11[((1-α)/α)γ_th(|g_1P|^2/|g_2P|^2)|g_2i^*|^2 + (γ_thσ_n^2/(αI_P))|g_1P|^2]})^L ] = 𝔼[ 1 - ∑_j=1^L \binom{L}{j}(-1)^{j+1} e^{-λ_11 j[((1-α)/α)γ_th(|g_1P|^2/|g_2P|^2)|g_2i^*|^2 + (γ_thσ_n^2/(αI_P))|g_1P|^2]} ], where 𝔼[.] denotes the expectation over the random variables |g_1P|^2, |g_2P|^2 and |g_2i^*|^2. We evaluate p_out_1 by successive averaging over the random variables |g_2i^*|^2, |g_2P|^2 and |g_1P|^2 using standard integrals <cit.> and <cit.>. A final closed form expression for p_out_1 can be derived as follows (details omitted due to space limitations): p_out_1 = 1 - ∑_j=1^L \binom{L}{j}(-1)^{j+1} [ 1/(1 + λ_11 j γ_th σ_n^2/(μ_1P α I_P)) - (μ_2Pλ_11/(μ_1Pμ_21)) j ((1-α)/α)γ_th · { ln( (1 + λ_11 j γ_th σ_n^2/(μ_1P α I_P)) / ((μ_2Pλ_11/(μ_1Pμ_21)) j ((1-α)/α)γ_th) ) + ((μ_2Pλ_11/(μ_1Pμ_21)) j ((1-α)/α)γ_th)/(1 + λ_11 j γ_th σ_n^2/(μ_1P α I_P)) - 1 } / [1 + λ_11 j γ_th σ_n^2/(μ_1P α I_P) - (μ_2Pλ_11/(μ_1Pμ_21)) j ((1-α)/α)γ_th]^2 ]. §.§ Derivation of p_out_2: The outage probability p_out_2 is defined as follows: p_out_2 = Pr{Γ_2 < γ_th}. Due to the identical structure of the SINRs Γ_1 and Γ_2, p_out_2 in (<ref>) can be derived in the same manner as p_out_1, and its final closed form expression is: p_out_2 = 1 - ∑_k=1^M \binom{M}{k}(-1)^{k+1} [ 1/(1 + λ_22 k γ_th σ_n^2/(μ_2P (1-α) I_P)) - (μ_1Pλ_22/(μ_2Pμ_12)) k (α/(1-α))γ_th · { ln( (1 + λ_22 k γ_th σ_n^2/(μ_2P (1-α) I_P)) / ((μ_1Pλ_22/(μ_2Pμ_12)) k (α/(1-α))γ_th) ) + ((μ_1Pλ_22/(μ_2Pμ_12)) k (α/(1-α))γ_th)/(1 + λ_22 k γ_th σ_n^2/(μ_2P (1-α) I_P)) - 1 } / [1 + λ_22 k γ_th σ_n^2/(μ_2P (1-α) I_P) - (μ_1Pλ_22/(μ_2Pμ_12)) k (α/(1-α))γ_th]^2 ]. § OPTIMAL POWER ALLOCATION AND CRITICAL TARGET RATE Our objective is to find the optimum α (denoted by α^*) that maximizes τ_sum. From (<ref>), it is clear that α^* = arg max_α(τ_sum). In the normal mode of operation, the interference channel variances are small (μ_1P and μ_2P are large) so that λ_11 ≪ μ_1PI_P and λ_22 ≪ μ_2PI_P. Hence, the terms λ_11 j γ_th σ_n^2/(μ_1P α I_P) and λ_22 k γ_th σ_n^2/(μ_2P(1-α) I_P) in (<ref>) and (<ref>), respectively, are small quantities for practical values of target rates and can be ignored. (Computing α^* for high target rates is not required, as will become apparent in subsequent discussions.) Thus p_out_1 and p_out_2 reduce to the following form with x = (μ_2Pλ_11/(μ_1Pμ_21))((1-α)/α) and y = (μ_1Pλ_22/(μ_2Pμ_12))(α/(1-α)): p_out_1 ≈ ∑_j=1^L \binom{L}{j}(-1)^{j+1} γ_th x j(γ_th x j - ln(γ_th x j) - 1)/(1-γ_th x j)^2, p_out_2 ≈ ∑_k=1^M \binom{M}{k}(-1)^{k+1} γ_th y k(γ_th y k - ln(γ_th y k) - 1)/(1-γ_th y k)^2. Using the first order rational approximation of the logarithm <cit.>, ln(z) ≈ 2(z-1)/(z+1), in (<ref>), which is close to (or follows) the logarithm function for a large range of z (and is also used in the underlay literature <cit.>), we obtain z(z-ln(z)-1)/(1-z)^2 ≈ z/(z+1). Hence, p_out_i, i ∈{1,2}, in (<ref>) can further be approximated as: p_out_1 ≈ 1 - ∑_j=1^L \binom{L}{j}(-1)^{j+1} 1/(γ_th x j+1), p_out_2 ≈ 1 - ∑_k=1^M \binom{M}{k}(-1)^{k+1} 1/(γ_th y k+1).
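The closed form and approximate outage expressions above can be cross-checked by direct Monte Carlo simulation of the SINR in (<ref>). The following Python sketch (ours, with illustrative parameter values) estimates p_out_1 from its definition.

```python
import numpy as np

rng = np.random.default_rng(1)

def outage_mc(R, alpha, I_P, L, lam11, mu21, mu1P, mu2P, sigma_n2=1.0, n=200_000):
    # Monte Carlo estimate of p_out_1 = Pr{Gamma_1 < 2^R - 1}
    gamma_th = 2.0 ** R - 1.0
    h1 = rng.exponential(1.0 / lam11, size=(n, L)).max(axis=1)   # best of the L links
    g2 = rng.exponential(1.0 / mu21, size=n)                     # cross interference channel
    g1P = rng.exponential(1.0 / mu1P, size=n)                    # S1 -> primary channel
    g2P = rng.exponential(1.0 / mu2P, size=n)                    # S2 -> primary channel
    sinr = (alpha * I_P * h1 / g1P) / ((1.0 - alpha) * I_P * g2 / g2P + sigma_n2)
    return np.mean(sinr < gamma_th)

# Example: symmetric statistics, L = 1 user, I_P = 100 (20 dB), equal power split
print(outage_mc(R=1.0, alpha=0.5, I_P=100.0, L=1,
                lam11=1.0, mu21=1.0, mu1P=1.0, mu2P=1.0))
```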
Obtaining α^* for general L and M is mathematically tedious, and it can be evaluated offline by numerical search[We note that there is no dependence on instantaneous channel estimates.]. However, we present a closed form α^* for the special case when L=M=1. By taking the first derivative of τ_sum with respect to α using p_out_1 and p_out_2 in (<ref>), and equating it to zero, a closed form α^* can be obtained[We will present a detailed proof in the extended version of this paper.], with the root in [0,1] being: α^* ≈ 1/(1 + (μ_1P/μ_2P) √((λ_22/λ_11)(μ_21/μ_12))). By taking the second derivative of τ_sum with respect to α, and upon substitution of α^* from (<ref>), an expression is obtained which can either be positive or negative depending on the value of γ_th (details are omitted due to space constraints). By equating this expression to zero and solving for γ_th (or equivalently for R), a closed form expression for the critical target rate R=R_c (for L=M=1) can be obtained as: R_c ≈ log_2(1 + √(μ_12μ_21/(λ_11λ_22))). When R<R_c, τ_sum is concave with respect to α and concurrent transmission offers higher throughput. When R>R_c, switching to a single secondary transmission is optimal, as τ_sum is convex with respect to α. For generalized L and M users, R_c and α^* can be evaluated by an offline numerical search. For larger L and M (multiple secondary users in each network), when a round robin scheduling scheme is used, the channel characteristics are exponential (the same as when L=M=1), and (<ref>) and (<ref>) remain valid for α^* and R_c. We emphasize that R_c and α^* both depend only on statistical channel parameters and do not require real-time computation. § SIMULATION RESULTS In this section, we present simulation results to validate the derived expressions and bring out useful insights. We assume 𝔼[|h_ij|^2] ∝ d_ii^-ϕ, d_ii being the normalized distance between the transmitter and the intended receiver in cluster i, where i ∈{1,2} and j ∈{L,M}. Again, 𝔼[|g_ij|^2] ∝ r_ij^-ϕ is assumed, where r_ij is the normalized distance between the transmitter of cluster i and the receiver of cluster j, where i ∈{1,2} and j ∈{L,M,P}. The path-loss exponent is denoted by ϕ (assumed to be 3 in this paper). In Fig. 2 we plot τ_sum vs α for different target rates. The system parameters chosen are as follows: d_11 = 2 units, d_22 = 1 unit, r_1P = r_2P = 3 units, r_12 = 4 units, r_21 = 3 units. L=M=1 and I_P = 20 dB are assumed. When target rates are below R_c=3.9724 (as calculated from (<ref>)), there is an improvement in sum throughput of the order of 1 bpcu when the optimum α is chosen for concurrent transmission. If R exceeds R_c, switching to a single secondary network is best. This happens because with high target rates both user pairs suffer link outages, and mutual interference further degrades performance. Switching to a single network not only improves transmit power, but also nullifies the interference from the other network, which cumulatively improves outage and throughput performance. In Fig. 3 we plot τ_sum vs α assuming L=M=1 for varying channel parameters, target rates and ITL to show that α^*, as evaluated in (<ref>), gives a fairly accurate and robust measure of the optimal ITL apportioning between S_1 and S_2, and improves sum throughput performance. The system parameters chosen for the first plot are as follows: d_11 = 1 unit, d_22 = 2 units, r_1P = 4 units, r_2P = 3 units, r_12 = 3 units, r_21 = 3.5 units and I_P is chosen as 10 dB.
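Before quoting the resulting values, we note that the closed forms for α^* and R_c, and the offline grid search mentioned for general L and M, can be evaluated as in the sketch below (ours; the mapping λ_ii = d_ii^ϕ and μ_ij = r_ij^ϕ follows from the stated proportionalities of the channel variances, since the means equal the inverse rate parameters). With the first parameter set just listed it reproduces α^* ≈ 0.1058 and R_c ≈ 3.70.

```python
import numpy as np
from math import comb

def alpha_star(lam11, lam22, mu12, mu21, mu1P, mu2P):
    # closed-form ITL apportioning parameter for L = M = 1 (statistical parameters only)
    return 1.0 / (1.0 + (mu1P / mu2P) * np.sqrt((lam22 / lam11) * (mu21 / mu12)))

def critical_rate(lam11, lam22, mu12, mu21):
    # fixed rate below which concurrent transmission outperforms a single network (L = M = 1)
    return np.log2(1.0 + np.sqrt((mu12 * mu21) / (lam11 * lam22)))

def alpha_star_numeric(R, L, M, lam11, lam22, mu12, mu21, mu1P, mu2P, grid=2001):
    # offline grid search of the approximate sum throughput for general L, M
    gth = 2.0 ** R - 1.0
    a = np.linspace(1e-3, 1.0 - 1e-3, grid)
    x = (mu2P * lam11) / (mu1P * mu21) * (1.0 - a) / a
    y = (mu1P * lam22) / (mu2P * mu12) * a / (1.0 - a)
    s1 = sum((-1) ** (j + 1) * comb(L, j) / (gth * x * j + 1.0) for j in range(1, L + 1))
    s2 = sum((-1) ** (k + 1) * comb(M, k) / (gth * y * k + 1.0) for k in range(1, M + 1))
    return a[int(np.argmax(R * (s1 + s2)))]

# First parameter set of Fig. 3: phi = 3, d11=1, d22=2, r1P=4, r2P=3, r12=3, r21=3.5
lam11, lam22 = 1.0 ** 3, 2.0 ** 3
mu1P, mu2P, mu12, mu21 = 4.0 ** 3, 3.0 ** 3, 3.0 ** 3, 3.5 ** 3
print(alpha_star(lam11, lam22, mu12, mu21, mu1P, mu2P))    # ~0.106
print(critical_rate(lam11, lam22, mu12, mu21))             # ~3.70
print(alpha_star_numeric(1.0, 1, 1, lam11, lam22, mu12, mu21, mu1P, mu2P))
```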
R=1 is assumed to ensure that R<R_c=3.7037 (so that concurrent transmission is advantageous). α^*=0.1058 is obtained from (<ref>). In the second plot, we assume the following parameters: d_11 = 2 unit, d_22 = 1 units, r_1P = 3 units, r_2P = 4 units, r_12 = 4 units, r_21 = 3 units and I_P is chosen as 25 dB. R=2 is assumed to ensure that R<R_c=3.9724 (so that concurrent transmission is advantageous). α^*=0.9117 is obtained from (<ref>). We note, for symmetric channel conditions, ie λ_11=λ_22, μ_12=μ_21 and μ_1P=μ_2P, α^*=0.5, implying equal resource allocation between S_1 and S_2. In addition we have the following observations: 1) α decreases when the ratio μ_1P/μ_2P increases, or when S_2 is closer to the primary than S_1. This implies throughput can be maximized if more power is allocated to S_2 (thereby improving its outage), as S_1 has a weaker channel to primary (has more available power) and can meet its outage requirement with less transmit power. 2) α decreases with increase in λ_22/λ_11. In other words, when S_1-R_1i^* channel is better than S_2-R_2i^*, S_1 is able to meet its outage requirement with less power, and more power needs to be allocated to S_2 to improve performance. 3) α decreases with the ratio μ_21/μ_12, or when the channel between S_1 to R_2i^* is better than the channel between S_2 to R_1i^*. Thus, allocating more power to S_2 causes less interference to users of S_1, which improves the overall throughput. In Fig. 4, we plot τ_sum in (<ref>) vs R and show the effect of number of users in the two networkson sum throughput performance with concurrent transmissions. We choose parameters as follows: d_11 = d_22 = 1 unit, r_1P = r_2P = 3 units and r_12 = r_21 = 3 units. α=0.5 and I_P = 20 dB is chosen. Clearly,τ_sum increases with L and M. From (<ref>), it is also clear that R_c increases with user selection (this R_c refers to a network having generalized L and M users, which is not derived in this paper. However, intuitively it is clear that user selection statistically improves the main channels, thereby increasing R_c as in (<ref>)), which causes a rightward shift of the peaks of τ_sum. As also evident from earlier discussions, τ_sum first increases and then decreases after a certain critical rate as both S_1-R_1i^* and S_2-R_2i^* links start to suffer from outages, thereby decreasing the overall throughput performance with concurrent transmissions.§ CONCLUSIONIn this paper we analyze the sum throughput performance of two co-existing underlay multiuser secondary downlink networks utilizing fixed-rate transmissions. In the single user scenario, or in a multiuser scenario without opportunistic user selection, we establish that there exists a fixed critical rate beyond which co-existing secondary networks results in lower throughput. During concurrent secondary transmissions, we establish that user selection as well as judicious interference temperature apportioning, can increase throughput performance. § ACKNOWLEDGMENTThisworkwassupportedbyInformationTechnologyResearchAcademy through sponsored project ITRA/15(63)/Mobile/MBSSCRN/01. The authors thank Dr. Chinmoy Kundu for his inputs on this work. IEEEtran | http://arxiv.org/abs/1705.09098v1 | {
"authors": [
"Pratik Chakraborty",
"Shankar Prakriya"
],
"categories": [
"cs.IT",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20170525091520",
"title": "Performance Optimization of Co-Existing Underlay Secondary Networks"
} |
Driving mechanisms and streamwise homogeneity in molecular dynamics simulations of nanochannel flows Vicente Bitrián^1,[email protected] Javier Principe^1,2,[email protected] ^1 Departament de Mecànica de Fluids, Universitat Politècnica de Catalunya, Eduard Maristany, 10-14, 08019, Barcelona, Spain. ^2 Centre Internacional de Mètodes Numèrics en Enginyeria,Parc Mediterrani de la Tecnologia, Esteve Terrades 5, 08860, Castelldefels, Spain. December 30, 2023 ============================================================================================================================================================================================================================================================================================================================================================================================== In molecular dynamics simulations, nanochannel flows are usually driven by a constant force, that aims to represent a pressure difference between inlet and outlet, and periodic boundary conditions are applied in the streamwise direction resulting in an homogeneous flow. The homogeneity hypothesis can be eliminated adding reservoirs at the inlet and outlet of the channel which permits to predict streamwise variation of flow properties. It also opens the door to drive the flow by applying pressure gradient instead of a constant force. We analyze the impact of these modeling modifications in the prediction of the flow properties and we show when they make a difference with respect to the standard approach.It turns out that both assumptions are irrelevant when low pressure differences are considered, but important differences are observed at high pressure differences. They include the density and velocity variation along the channel (the mass flow rate is constant) but, more importantly, the temperature increase and slip length decrease. Because viscous heating is important at high shear rates, these modeling issues are also linked to the use of thermostating procedures. Specifically, selecting the region where the thermostat is applied has a critical influence on the results. Whereas in the traditional homogeneous model the choices are limited to the fluid and/or the wall, in the inhomogeneous cases the reservoirs are also available, which permits to leave the region of interest, the channel, unperturbed. § INTRODUCTIONThe molecular dynamics (MD) configuration most commonly used to simulate nanochannel flows is shown in channel-homogeneous. The fluid particles are bounded by solid particles that model a wall, periodic boundary conditions are assumed in the streamwise and spanwise directions and the flow is driven applying a constant external force.This driving mechanism has raised long-standing criticism for it requires a huge force to be applied, which generates an important amount of heat that, in turn, requires dissipative mechanisms (thermostating), and only represents an applied pressure difference when the pressure gradient is assumed to be constant everywhere <cit.>. Using this configuration implies assuming that the flow is streamwise homogeneous, which makes the problem easier by reducing it to one spatial dimension, the other two (streamwise and spanwise) being only statistical. 
On the other hand developing effects are eliminated from the beginning.wall=[circle,fill=red,draw=red!50!black]fluid=[circle,fill=blue,draw=green]Nevertheless, this configuration has been used for years to study the flow slip over solid surfaces and it is still widely used, see e.g. <cit.>. Molecular dynamics simulations are performed integrating the equations of motion of individual molecules. Introducing the interactions between them results in a system whose size is the number of molecules. Apart from these interactions, the external force driving the system is also introduced. As the system is isolated, the work performed by the external force driving the flow results in an increase of internal energy. Therefore, the only way to reach a steady state is through the introduction of a dissipative mechanism. This term is included assuming that the fluid is “in contact with a thermal bath” or “a reservoir” <cit.> which extracts energy from the system. There are many possibilities, but the more commonly used in nonequilibrium MD simulations are the Langevin, Nosé-Hoover, Berendsen or DPD thermostats, see e.g. <cit.> and <cit.>. In the context of nanochannels the traditional approach <cit.> was to apply a thermostat in the whole channel while assuming the flow to be homogeneous in the streamwise and spanwise directions by applying periodic boundary conditions, as shown in channel-homogeneous. Wall particles are fixed but their interaction with fluid is kept, which results in fluid-solid interface friction. This model was improved including wall particles into the model, i.e. integrating their equations of motion too <cit.>. In this case, apart from the external force acting over all fluid particles in the streamwise direction, wall particles are also constrained to move around equilibrium positions by applying external forces to them (typically derived from quadratic potentials). This permits to apply a thermostat on the wall particles too. In an effort to minimize its impact, some authors <cit.> apply the thermostat to one solid layer only, the one being further from the fluid, shown in black color in channel-homogeneous. In fact, a rigorous derivation of this procedure has been developed in <cit.> and <cit.> and named stochastic boundary conditions, showing that applying a thermostat on the external border of a solid accounts for the influence of an infinitely large solid thermal bath around it.After this improvement, the next natural question is whether the fluid should be thermostats or not and there is a consensus in the literature about answering negatively <cit.>, considering that cooling through the walls is the only “realistic” <cit.> dissipative mechanism “mimicking real experiments” <cit.>. While in the case of the solid walls the thermal bath has a clear physical meaning, in the case of the fluid it has not. Besides, transport properties, viscosity and conductivity, shear stress and slip over a solid surface, measured by the slip length L_s, were shown to depend on the thermostat parameter Γ <cit.>. Applying thermostats to shear flows has also been put into question by <cit.> because they remove heat at rates that are higher than the rate of conduction of heat across the fluid. The implication of this fact is the lack of time to maintain redistribution of energy across the system which implies that the steady states reached depend on the degrees of freedom the thermostat is coupled to. 
This effect is specially severe at high shear rates and the (perhaps over-pessimistic) conclusion by <cit.> is that the effort “should be directed to simulate lower shear rates”.At this point, two well-established facts collide: the application of an external force generates an important amount of heat and the application of a thermostat is unphysical. On top of that, assuming periodic boundary conditions, and therefore streamwise homogeneity, eliminates an intrinsic heat transfer mechanism present in any (nano)channel, the transfer of heat by convection, which makes the model definitively unrealistic.Another important heat transfer mechanism is the generation by shear friction. While in macroscopic flows this term is usually negligible (except at very low Re numbers, i.e. creeping flows) at nanoscales this term is very important, specially at high shear rates. In that case the flow cannot be considered isothermal. To understand the balance of these mechanisms a simple model can be developed from the macroscopic energy conservation equation <cit.>ρ c_pdT/dt-β T dp/dt = -∇·𝐪 + Φwhere c_p is the specific heat, ρ the mass density, β the (possibly temperature-dependent) thermal expansion coefficient, p the pressure, 𝐪 the internal heat flux, and Φ is the Rayleigh function representing the mechanical dissipation of energy in sheared motion, proportional to the viscosity and the square of velocity gradients in Newtonian fluids. It is worth noting that the second term of the left-hand side of temperature is only relevant in compressible fluids and it is negligible for nearly incompressible ones, representing a heat sink due to the energy required by dilatation to occur. It is also relevant that this equation is equivalent to the one obtained by a statistical treatment of molecular equations of motion <cit.>, namely∂ E/∂ t + ∇·( E 𝐮 + 𝐪 - 𝐮·σ) = 0where 𝐮 is the fluid velocity and E=ρ(e+u^2/2+ψ) is the total energy per unit volume, that includes the internal energy e and the external potential ψ whose gradient is the applied driving force. Only after using kinetic and potential energy conservation temperature is obtained, which does not include the external force (whose work cancels with potential energy variation). The two terms on the left hand side of temperature come from the calculation of the internal energy variation, which, apart from the energy variation due to temperature changes, includes the energy variation by dilatation that vanishes for incompressible flows, as mentioned.Assuming a one dimensional steady flow that is cooled (or heated) from the walls and modeling that by a Newton law with convection coefficient h, which accounts for the heat conduction in the fluid and the Kapitza resistance of the interface <cit.>, we get from temperatureṁ c_p dT/dx - u A β T dp/dx = - P h (T-T_w) + Φ A where ṁ is the mass flow rate across a section of the channel of length L, cross sectional area A and perimeter P. The terms on the left-hand side represent convection heat transfer, the first term on the right-hand side cooling through walls and the second one viscous heating (given by Φ = μγ^2 with μ the shear viscosity and γ the shear rate). 
Once again, we emphasize that the second term on the left-hand side is negligible for incompressible flows but it turns out to play an important role otherwise, as it will be shown below.The solution of 1D-temperature can be obtained assuming the variables multiplying the temperature in the second term on the left-hand side to be constant, a restrictive hypothesis that, in any case, permits to understand the implications of neglecting it or not. Observe that in this case, equation 1D-temperature has a uniform solutionT_u = Ph/( Ph - u A βdp/dx) T_w+ Φ A/( Ph - u A βdp/dx). In general when the flow enters the channel at a temperature T_i, it is cooled (or heated) along the channel, according to ( T - T_u ) = ( T_i - T_u ) e^-( Ph - u A βdp/dx) L/ṁ c_px/L, reaching equilibrium asymptotically. This simple model permits to conclude that only if the inlet temperature is T_u the flow can be homogeneous. Otherwise an exponential increase or decrease is to be expected. Besides, when the pressure gradient and the shear rate are negligible, i.e. - u A βdp/dx≪ Ph and Φ A ≪ P h, taking the inlet temperature as the wall temperature results in an isothermal flow (the heat produced by shear is easily dissipated through the walls). This simple model also permits to understand how different configurations and flow driving mechanisms impact on the energy balance. In the traditional streamwise homogeneous model the terms in the left-hand side of 1D-temperature vanish and the equilibrium between heat generation by shear and cooling through walls determines the fluid temperature. Applying a thermostat can be a way of representing the dissipative terms neglected (observe that the pressure gradient is negative and therefore the second term on the left-hand side of 1D-temperature is dissipative).In this article we discuss an alternative configuration and a driving mechanism similar to others recently proposed <cit.>. In fact, many alternative driving mechanisms to study nanochannel flows have been proposed for years. The alternative proposed by <cit.> is the introduction of a “reflecting particle membrane”, a Maxwell daemon that precludes (with a given probability) the particles to cross it in one direction, thus generating a pressure gradient. Whereas the method to drive the flow does not introduce additional energy into the system (thus not requiring thermostating), the pressure gradient is difficult to control (which is done through the given probability). Another approach <cit.> is to fix the pressure in the external reservoirs by introducing rigid movable plates normal to the flow, which introduces a time dependent driving mechanism (the size of the reservoirs changes). In <cit.> the channel walls are moved as in the Couette problem whereas the flow is stopped by a cross sectional wall. The methods proposed by <cit.> and the one we study here are small variations of the so-called “reservoir method” first proposed by <cit.>. The pressure difference is generated applying a constant force in the reservoirs whereas they differ on how the temperature or density is controlled. Some authors introduce reservoirs while driving the flow with a constant force and a Nosé-Hoover thermostat applied in the whole domain, including the channel <cit.>.However, these alternatives have not been widely used, specially in the study of hydrodynamic slip, one of the reasons being the important increase in the computational cost. 
Abandoning the streamwise periodicity makes the problem two-dimensional (the spanwise direction being still statistical). On the other hand, the advantages of these improved models have not been demonstrated. The case of an inlet temperature substantially higher than that of the walls was analyzed in <cit.> but considering only thermal effects, the flow assumed to be hydrodynamically fully developed. On the other hand when the inlet temperature is similar to that of the walls, the fluid is heated inside the channel and hydrodynamic effects appear, e.g. the maximum velocity increases and the slip length decreases along the channel. We document these effects in results after detailed description of the methods in methods. We summarize the main conclusions in conclusions.§ SIMULATION METHODWe construct a nanochannel flow by confining a monoatomic fluid between two smooth solid walls. Both the fluid and walls are composed by atoms which interact through the pairwise Lennard-Jones (LJ) potential,V_ij(r_ij) = 4ϵ_ij[ ( σ_ij/r_ij) ^12 - ( σ_ij/r_ij) ^6 ] ,r_ij<r_c0 ,r_ij≥ r_c where r_ij = |𝐫_i - 𝐫_j | is the distance between atoms i and j whose positions are 𝐫_i and 𝐫_j, and ϵ_ij and σ_ij are the energy and length scales of the potential, respectively. The subscripts i and j indicate the atom types (hereinafter f stands for fluid atoms and w for wall ones), and the calculation of the interactions of each particle is truncated at a cut-off distance r_c = 2.5σ, since we have verified that the results do not change appreciably by increasing r_c. All the physical units in this work are expressed in LJ units (that is, in terms of the characteristic fluid length σ = σ_ff, energy ϵ = ϵ_ff, and atomic mass m = m_f). For liquid argon these values are σ = 3.4 Å, ϵ = 1.65 × 10^-21 J and m = 6.63 × 10^-26 kg respectively.In all our simulations, the interaction between wall and fluid atoms is chosen to be as intense as that between fluid monomers, ϵ_fw = ϵ_ff, which is considered highly hydrophilic <cit.>, and σ_fw = σ_ff. Each atom of the thermal wall is tethered around its equilibrium position via a quadratic potential,V_wall( 𝐫) = K_w ( 𝐫 - 𝐫_0 )^2 ,where 𝐫 is the position of the wall atom and 𝐫_0 its equilibrium position, and K_w models the stiffness of the wall <cit.>. For the current work we have used a value K_w = 600ϵ / σ ^2, which is inside the interval of values commonly used in MD studies, and has been proved to accomplish the two basic requirements for wall stiffness: (i) it is not too small, thus preventing the melting of the wall according to the Lindemann criterion <cit.>, and (ii) its associated frequency is low enough to allow the correct integration of the equations of motion of the wall atoms without reducing the time step <cit.>. The mass of wall particles is m_w = 10 m_f in order to reduce the vibration frequency, and they do not interact with each other to reduce computational time (ϵ_ww = 0, since it does not affect significantly the wall dynamics).Taking into account the volume accessible for the fluid, the average fluid mass density in all our simulations is ρ_f = 0.86 m σ ^-3. With regard to the walls, they form a face-centered cubic (fcc) lattice of number density equal to 3.90σ^-3, which implies an equilibrium nearest-neighbor distance of 0.71σ. The wall planes in contact with the fluid are (010) faces, with the [100] orientation of the fcc lattice aligned with the shear flow direction (x). 
The number of fluid, N_f, and wall atoms, N_w, vary from one studied configuration to another, and are detailed below.All the simulations have been carried out using the LAMMPS package <cit.>. The equations of motion are integrated using the velocity Verlet algorithm, with a time step of Δ t = 0.002τ, where τ = ( mσ^2/ϵ)^1/2 is the characteristic LJ time (τ = 2.16 × 10^-16 s for liquid argon). In the initial configuration the fluid particles are arranged in the positions of a fcc lattice, and the equilibration runs lasted typically 5 × 10^5 steps. Once the steady state is reached, a production run of a minimum of 10^6 steps (2× 10^3τ) is performed to average the data. The simulation domain is divided in bins of size Δ x = 1.5σ and Δ y = 0.5σ to discretize the collected data.The components of the local stress tensor in each spatial bin have been computed following the Irving-Kirkwood method <cit.>, that is,𝐏 (𝐫_bin) = 1/V_bin< ∑_i∈ bin^N_bin m_i [ 𝐮_i(t) - 𝐮(𝐫_bin,t) ][ 𝐮_i(t) - 𝐮(𝐫_bin,t) ]>+ 1/2V_bin<∑_i∈ bin^N_bin∑_j i^N 𝐫_ij(t) 𝐅_ij(t) >where the sum of the kinetic term includes the N_bin particles which are inside the bin located at 𝐫_bin at time t, and the potential term involves the interaction of particles i inside the bin with all the other atoms j of the system (in or outside the bin); 𝐅_ij is the sum of internal forces exerted on i by j, 𝐮(𝐫_bin,t) the average velocity in the bin, and V_bin its volume.In order to control the temperature in some region of the computational domain the dissipative particle dynamics (DPD) thermostat is considered. This type of thermostat is considered to be particularly suitable for nonequilibrium MD <cit.> since, among other advantages, it is a profile-unbiased thermostat <cit.>; that is, does not need to assume a predetermined streaming velocity profile. This virtue of the DPD thermostat is due to the fact that it involves relative velocities between pairs of particles, 𝐮_ij = 𝐮_i - 𝐮_j, instead of individual velocities as in other thermostats commonly used (e.g. Langevin thermostat). Hence, the equations of motion in the thermostated region arem_i d𝐮_i/dt= - ∑_j ≠ i∇_𝐫_i V_ij (r_ij) + 𝐅^D_i + 𝐅^R_i where two extra terms are added to the force resulting from the interatomic potential. 𝐅^D_i denotes the dissipative force on particle i and 𝐅^R_i the corresponding random force. Both are expressed as a sum of pairwise contributions,𝐅^D_i = ∑_j≠ i𝐅^D_ij = - ∑_j≠ iΓ w^2(r_ij)( 𝐫̂_ij·𝐮_ij) 𝐫̂_ij𝐅^R_i = ∑_j≠ i𝐅^R_ij = ∑_j≠ i√(2k_BTΓ) w(r_ij) α_ij𝐫̂_ij where 𝐫̂_ij=𝐫_ij/|𝐫_ij|, 𝐫_ij = 𝐫_i - 𝐫_j, T is the target temperature, Γ the friction coefficient (Γ = 1.0 mτ^-1 in our simulations), α_ij a Gaussian white noise variable that fulfills the condition α_ij = α_ji, and w(r) is a weighting function of r_ij. The usual choice is w(r_ij) = 1- r/r_c ,r_ij<r_c0 ,r_ij≥ r_cPrevious features are common to all the models simulated in this work. In the rest of this section we describe the specificities of the various studied models, that differ essentially in the driving mechanism of the flow, the thermostated regions, and the geometry of the channel. §.§ A simple approach: the streamwise homogeneous (SH) flow modelThe configuration geometry of the SH flow model is shown in channel-homogeneous. The channel length in the flow direction, L_x, varies from 200 to 400σ depending on the case, its width (measured as the distance between the wall planes in contact with the fluid) is L_y = 30.0σ, and its depth L_z = 10.0σ. 
Both the upper and lower walls consist of four fcc layers separated by a distance0.50σ (that is, the wall thickness is Δ y_w = 1.50σ). Then, the number of fluid and wall atoms in the simulation cell vary from N_f = 49980 and N_w = 39600 (for Lx =200 σ) to N_f = 99960 and N_w = 79200 (for Lx =400 σ). Periodic boundary conditions are applied in x and z directions. As specified in channel-homogeneous, the plane y=0 cuts the channel through its center and x=0 at the entrance.The flow is generated by applying a constant external force (per unit mass) f_x in x direction on all the fluid atoms. The interval of forces simulated in this work goes from f_x = 0.010ϵ / mσ to f_x = 0.040ϵ / mσ. Modeling the walls as non-rigid allows for the heat generated by friction to be removed through them. The wall temperature is fixed to the value T_w = 1.1ϵ/k_B by applying the DPD thermostat described above only to the wall atoms. §.§ Abandoning homogeneity: the streamwise inhomogeneous force driven (SIFD) flow model In a first step towards a more realistic model, the homogeneity hypothesis is abandoned and the configuration shown in channel-developing is studied.We consider a central channel of the same dimensions as in the SH model, L_x × L_y × L_z = 200-400σ× 30σ× 10σ, limited by the same fcc walls of thickness Δ y_w = 1.50σ. This is the domain of interest, where the fluid properties are extracted. But now we add two open reservoirs of length L_res = 50σ outside it, both on the left and on the right of the channel, where the fluid can move freely in the vertical (y) direction. Periodic boundary conditions are applied in the three directions. Again, we choose the origin in such a way that y=0 at the center of the channel and x=0 at the entrance (and, then, x coordinates take negative values at the left reservoir).We apply a DPD thermostat, dpdweight_function, to the fluid particles, but only when they are in the reservoirs outside the domain of interest. In such a way, we fix the temperature of the fluid at the inlet to be T_in = 1.1ϵ /k_B and we leave the fluid completely free inside the channel. As in SH model, the walls are also thermostated toT_w = 1.1ϵ/k_B, and the flow is driven by a constant external force f_x exerted on every fluid atom in the whole simulated domain.Unlike the SH flow model, the SIFD model allows for the evolution of the fluid properties along the channel, like the local temperature, and therefore incorporates the heat transfer by convection. The presence of the reservoirs also makes it possible to analyze the channel entrance effects, like pressure losses. The basic idea behind a model like this is to explicitly separate the domain of interest, where the system evolves according its natural dynamics, without being restricted by artificial forces or constraints, from the surroundings where constraints are applied to induce the desired fluid conditions at the entrance of the region of interest. This is, precisely, the great difficulty when periodicity is abandoned in MD: how to impose the proper boundary conditions to couple both regions adequately. In fact, this is also the main challenge to build hybrid models which couple molecular dynamics with continuum dynamics <cit.>.As mentioned in intro, other works have previously proposed different boundary conditions for generating inhomogeneous flows on nanopores and one of the reasons why the use of this kind of models is not generalized is that enlarging a system to include a region outside the domain of interest has a computational cost. 
In our case, the number of fluid atoms grows to N_f = 78300 (for L_x = 200σ) and N_f = 128280 (for L_x = 400σ). Nevertheless, it is affordable given the tremendous amount of computing power available in supercomputers and the maturity of the simulation software.A simplification of the configuration of this SIFD model is presented in channel-developing2. Again we consider a channel of length L_x and we add left and right reservoirs that are extensions of the channel. In these reservoirs the flow is thermostated and periodic boundary conditions are applied but the vertical motion is constrained by (fictitious) extensions of the channel walls. This configuration does not account for hydrodynamic entrance effects but it will be important to understand the relation to the SH model.There are therefore two types of SIFD models according to the type of reservoir: the SIFD model with open reservoirs (SIFD-OR) and with closed reservoirs (SIFD-CR). §.§ Leaving the channel unperturbed: the streamwise inhomogeneous pressure driven (SIPD) flow model Finally, we have simulated another model in which body forces no longer exist inside the channel. The configuration is the same shown in channel-developing, but now a pressure gradient is generated between the inlet and outlet reservoirs by applying a force only on those fluid atoms located far from the channel. In particular, we apply an external force of magnitudef_x =Δ p/ρ_fΔ z on all the fluid atoms located inside two regions of width L_res/3 at the edges of the simulation cell (one at the beginning of the left reservoir and the other at the end of the right one), but not outside them. Δ z = 2L_res/3 is the total length of both regions where the force is applied, and Δ p is the pressure difference created. The DPD thermostat described above is applied in the reservoirs of length L_res to fix the temperature to 1.1ϵ/k_B. In this configuration the heat generated by the external force is dissipated inplace, as far from the channel as possible trying to minimize the disturbance caused in the system. § RESULTS §.§ Homogeneous flow The fluid properties obtained by atomistic simulations of periodic homogeneous flows in nanochannels are rather well understood. In homo_vs_inhomo_y we show the averaged density, temperature, velocity, and pressure profiles for the SH model. As previously mentioned, this model maintains the wall temperature fixed but it does not thermostat the fluid, which is widely accepted to be the more realistic option for homogeneous flows <cit.>. Nevertheless, the evacuation of the viscous heat through the walls does not avoid a significant temperature increase in the fluid, as we shall see shortly. The constant force applied to induce the flow in this case has been f_x = 0.020ϵ/mσ, which despite being a value in the range of those commonly used in MD exceeds the gravity force by a factor of 1.5 × 10^11.The fluid density shows a clear layered structure (with at least six marked layers separated by a distance ∼ 0.9σ) in the region near to the atomic walls, where the surface effects are visible. On the other hand, it is constant and equals the bulk value in the center of the channel. This fact suggests that the channel is wide enough to assume the continuum equations to be valid at this scale <cit.>. In particular, the streaming velocity can be determined from the momentum equationρ u_x ∂ u_x/∂ x= - ∂ p/∂ x + μ∂^2 u_x/∂ y^2 + ρ f_x where ρ is the fluid mass density, and u_x the x component of the streaming velocity. 
Since there is no variation along the channel, the well-known quadratic profile is recovered,u_x ( y )=ρ f_x/2μ ( h^2/4 - y^2 + hL_s ) where h is the distance between the solid-liquid interfaces at the top and bottom walls. The position of the solid-liquid interface (that is, the point of closest approach where the boundary condition is imposed) is not well defined. To take into account the excluded volume effects, we locate the interface at a distance of 0.5σ from the wall innermost fcc planes <cit.>, and then h = 29σ. The slip length L_s is defined as the additional length, relative to the interface, at which the linearly extrapolated fluid tangential velocity vanishes,|∂ u_x/∂ y( y= ± h/2 ) |L_s = u_swith u_s = u_x(± h/2) the slip velocity at the interface. As it can be seen in homo_vs_inhomo_y, the solution in velocity_quadratic fits accurately the velocity profile assuming a value around μ∼ 2.4ϵτσ^-3 for the viscosity, which coincides with that obtained from the simulated shear stress, μ= P_xy( ∂ u_x / ∂ y )^-1 = 2.45 ± 0.10ϵτσ^-3, and is close to those obtained with similar models <cit.>. The simulated flow rate is then consistent with that obtained from the quadratic profileQ =L_z h^3/12μρ f_x( 1 + 6L_s/h)With regard to the temperature, again the homogeneity simplifies the energy balance equation,ρ c_p u_x ∂ T/∂ x - β T u_x ∂ p/∂ x=μ( ∂ u_x/∂ y)^2 + κ( ∂^2 T/∂ y^2) , which reduces to a quartic profile for the temperature <cit.>,T ( y )=ρ^2 f_x^2/12κμ [ ( h/2)^4 - y^4 +h^3 L_K/2] with κ the thermal conductivity of the system and L_K the Kapitza length, which is defined equivalently to the slip length in slip_length but changing u_x by T <cit.>. As it can be seen in homo_vs_inhomo_y the temperature profile is satisfactorily fitted by a quartic function. §.§ Streamwise inhomogeneous force-driven flow Unless stated explicitly, in this subsection we discuss the results obtained using the SIFD-CR model (that is, with the outer fixed-temperature reservoirs confined by the walls, the configuration shown in channel-developing2). We will show below that the results with the SIFD-OR model (using the open-reservoirs configuration in channel-developing) are qualitatively similar, and will discuss the slight differences. From homo_vs_inhomo_y, it could seem that the differences between homogeneous and non-homogeneous models are not that noticeable (except for the temperature). But we must take into account that, whereas the homogeneous profiles remain unaltered along the channel, the fluid properties in the inhomogeneous model evolve through x. In homo_vs_inhomo_y, then, we present the fluid profiles in a particular section (x=150σ, far enough from the entrance) only as an example. It is more convenient to analyze the results along the direction of the flow, as we do in homo_vs_inhomo_x.One of the main distinctive features of inhomogeneous models is their compressibility: the fluid density ρ diminishes significantly along the channel, and this reduction is more pronounced for higher external forces, as expected. Note that only the values inside the non-thermostated channel have physical meaning; those in the reservoirs are artificial because of the applied thermostat and the imposed periodicity. On the contrary, the pressure is almost constant in the flow direction (a slight gradient is observed only for very high forces). 
The reason of this behavior seems to be in the configuration used: since both reservoirs are limited by walls, viscous forces are high enough to equilibrate the external force in these regions (last two terms in momentum), and thus an appreciable pressure difference is not created between channel ends. On the contrary, we will see that a clear pressure gradient arises when f_x is applied in open reservoirs, in which friction is much less important, as it is the case in channel-developing. The compressibility of the flow allows the variation of the velocity along the channel, in such a way that the mass flow rate is constant. As it is shown in homo_vs_inhomo_x (c) and velocity_developing, when the fluid enters the non-thermostated channel its velocity profile starts to develop (in the reservoirs, the thermostat restrains the fluid and its velocity remains constant). Due to the friction, velocity gradually reduces in the regions near the walls, and therefore the fluid is accelerated at the center of the section to maintain the mass flow rate (velocity_developing). As a consequence the slip decreases (and shear rate γ increases) in the streamwise direction. The shear continues to grow downstream until the friction force equilibrates the external force; downstream this entry region, the flow is fully developed. Whether a given channel is long enough to consider the flow hydrodynamically developed should be determined when designing the simulation of nanoscale flows. On the basis of the results of this work, forces higher than 0.02ϵ/mσ requires entry lengths longer than 400σ. However, it is common to find considerably shorter channels in the literature in which entrance effects are neglected.Another noticeable impact of the change in the configuration is that the well-established quadratic profile for the velocity in velocity_quadratic may no longer be valid for non-homogeneous flows, since the first term in momentum equation, momentum, does not vanish. The solution becomes significantly more complex, but as a first approximation one can assume that the velocity gradient ∂ u_x / ∂ x does not depend on y (this is almost exactly true in our simulations). In this case, the new solution has the formu_x ( y )= u_0 [1 - A cosh( λ_0y ) ]whereu_0 = f_x/∂ u_x / ∂ x,λ_0^2 = ρ∂ u_x / ∂ x/μ andA^-1 =cosh( λ_0 h/2) + L_s λ_0sinh( λ_0 h/2) This solution reduces to velocity_quadratic when the velocity gradient is small. Although we have indeed confirmed that the hyperbolic profile fits better the results than the quadratic one for high forces, the difference in our simulations is small (it is only noticeable near the boundary; see the inset in velocity_developing). Nevertheless, it should be taken into account in future studies or for more intense driving forces, as an accurate velocity fit can affect the calculation of the slip length. Consequently, the volumetric flow rate expression in flow_rate, used regularly in the literature for obtaining the slip length from experimental flow rate measures <cit.>, should be also modified to take into account the non-homegeneity, to beQ = L_z u_0 [ h - 2A/λ_0sinh( λ_0h/2) ]More important than the change in the expression is the fact that the flow rate is variable in the streamwise direction. But over all the features of this model for inhomogeneous flows, there is one that makes it clearly more realistic than the traditional homogeneous models: it incorporates the fluid cooling by convection along the channel. 
As it occurs in real (nano)channels, the fluid at the entrance is colder than at the outlet, and this makes the temperature to gradually rise due to the viscous heat generated by friction. The evolution of T in homo_vs_inhomo_x(b) is qualitatively similar to the simple unidimensional model drafted in intro, 1D-temperaturetemp, and tends asymptotically to a constant value. Again, we have found that, for high external forces, the length of full thermal development is longer than the simulated channels. It must be noted that the asymptotic value to which T tends in the case of f_x = 0.020ϵ / mσ coincides with the temperature of the homogeneous model with the same applied force. This fact leads us to conclude that, while representing convection by a thermostat when assuming an homogeneous channel is a crude approximation, modeling cooling only through walls also fails to describe heat transfer, especially at the entrance, and overestimates the temperature at the channel. Only in low-shear regime (forces lower than f_x = 0.010ϵ / mσ in our model) temperature is approximately uniform and the role of convection less important, as thermal conduction is effective enough, a case in which the SH model without fluid thermostating provides similar results. It is also interesting to analyze how the temperature distribution across the channel varies with x. Results presented in temperature_developing show marked differences between the profiles as the flow progresses. At the inlet, where the convective cooling is intense, the thermal jump at the boundary is very pronounced and heat is transferred by conduction from the walls to the center of the channel. Only at sections where convection ceases to play a major role the temperature profile resembles that obtained in homogeneous models (see homo_vs_inhomo_y(b)). Understanding this behavior requires noticing that the convective term in the energy energy does not vanish now. Only to find an approximate solution, we can assume that specific heat, density, viscosity and thermal conductivity do not vary appreciably with y; that the temperature gradient is also approximately independent on y, and the pressure gradient negligible (the last two hypothesis have been checked to be valid here). Finally, for the sake of simplicity we take the solution velocity_quadratic for the velocity (since we have seen that solution velocity_hypcosin offers similar results for the simulations presented in this work). With these approximations, we getT ( y )= a_2 y^2 - a_4 y^4 + a_0 where a_2 =f_x/4 μρ^2/κ c_p ∂ T/∂ x( h^2/4 + hL_s ) , a_4=ρ f_x/24 μ ( ρ/κ c_p ∂ T/∂ x + 2 ρ f_x/κ) and a_0 =ρ f_x/2 μ ( ρ/κ c_p ∂ T/∂ x + 2 ρ f_x/κ) h^3/24( L_K +h/8)- ρ f_x/2 μρ/κ c_p ∂ T/∂ x( h^2/4 + hL_s ) h/2( L_K +h/4)where L_K is the Kapitza length. In temperature_developing it is shown that temperature profiles may indeed be very well fitted by this solution. At the beginning of the channel the convective term dominates and the profile is eminently quadratic; on the contrary, near the end where the flow is almost thermally developed a_2 ≈ 0 and the profile is ∝ y^4.Finally, we have focused on the results obtained for the flow slip over the solid surface. 
The study of the slip observed at microscales and nanoscales remains to be of great interest at present, among other reasons, because of its potential technological utility for nanoscale flows: since the occurrence of slip in nanochannels reduces the fluid-solid friction and increases the flow rate, it is a phenomenon sought in many nanofluidic applications <cit.>. Therefore, being able to establish a specific boundary condition for fluid flows over solid surfaces would be fundamental for understanding flows in these scales. But, although this phenomenon has been extensively investigated from experimental, theoretical and computational points of view <cit.>, there are some issues that are still controversial, like the slip dependence on shear rate. Since the seminal work of <cit.>, some authors have reported (both in experimental and simulation studies) a non-bounded monotonic increase of the slip length with shear rate and the existence of a critical shear rate at which L_s diverges <cit.>. However, other researchers have found that slip lengthtends to a finite constant value at high shear <cit.>. It is important to emphasize that in all these works an homogeneous flow is assumed, the first group applying a thermostat to the fluid and the second only to the solid <cit.>.In slip_length_vs_x we can see the slip length for the SIFD flow model, calculated from definition in slip_length and the velocity profiles obtained in MD simulation. The slip length is higher at the inlet, decreases along x and tends to a constant value when the flow is hydrodynamically developed (between 2σ and 6σ in the simulated range of forces). It is worth pointing out that, for f_x = 0.020ϵ / mσ, the slip length value at high x approximately coincides with that obtained with the homogeneous model and the same force. We confirm again, then, that the study of homogeneous flow can describe the developed flow, but not its developing behavior. We also see that shear rate increases along the channel, as it can be readily understood from the increasing slope of velocity profiles at the boundaries in velocity_developing (see inset). The fact that slip reduces with growing shear rate could appear to be in contradiction with those works that, in the line of <cit.>, conclude that slip grows with shear. However, those works assume constant temperature. Temperature variation affects the slip, as has already been highlighted in the literature <cit.>: when T increases, fluid particles become more active and a higher number of them are able to penetrate in the region of interaction with the wall, in such a way that the momentum transfer between the fluid and the solid improves, and the velocity slip between them decreases <cit.>. We suggest, therefore, that the evolution of slip in the channel is due to the rise in temperature in it. In order to support this conclusion, we have carried out ten extra MD simulations of the homogeneous model with f_x = 0.020ϵ / mσ, but now thermostating also the fluid (with a DPD thermostat) to force the fluid temperature and density to be equal to those at ten different sections of the channel. The results, shown with squared symbols in slip_length_vs_x, confirm that temperature is the crucial factor that makes the slip to decrease along the channel, even if the shear rate gradually increases. It also explains the smaller L_s for higher f_x (notice that at lower forces, thermal fluctuations cause noisier results, and additional averaging would be needed to smooth them). §.§ Open or closed reservoirs? 
A few comments on the effects of assuming open reservoirs outside the channel (configuration in channel-developing instead of that in channel-developing2) should also be made, since they can shed some light on the discussion about boundary conditions choice in MD simulations of nanoflows. The main difference with respect to the closed-reservoirs case reported so far lies on the pressure gradient created out of the channel (see open_vs_closed_x(d)). The lack of walls in the open reservoirs causes much flatter velocity profiles (in the reservoir) than those in the closed ones, as shown in reservoir_velocity, and then, much smaller viscous forces in these regions. As a result, a positive pressure gradient appears to compensate the external force (see momentum). This pressure difference between channel ends translates in a pressure drop inside the channel, and in an extra force on the confined fluid which adds to f_x. Its effects are not minor, since -∂ p / ∂ x is comparable to ρ f_x. It must therefore be concluded that simulated systems with the same force but different boundary conditions may not be dynamically equivalent, and this must be taken into account when designing the model to simulate.As a consequence a higher flow rate is observed when open reservoirs are considered, as it can be seen from open_vs_closed_x(a) and open_vs_closed_x(c) (note that, for example at x≃ 150 σ, the densities are similar but the velocity is bigger when open reservoirs are considered). Evidently, the hydraulic resistance of closed reservoirs is higher.On the other side, although the higher force exerted on the confined fluid in the open-reservoirs configuration (and the corresponding higher shear rate) could suggest a more intensive heating, the temperature distribution along the channel is not substantially different (and shows even a lower T) from the closed-reservoirs case (see open_vs_closed_x(b)). The explanation for this behavior can be found in the second term of the left-hand side of the energy equation, energy: a fraction of the heat transferred to the fluid is devoted to increase the fluid temperature, but another part goes to diminish the fluid pressure (unlike what happens with closed reservoirs). Also the variation of shear viscosity in the flow direction could affect T distribution at some extent (see a detailed discussion of this issue at the next subsection). To conclude this subsection we also note thatin the case of closed reservoirs, caution is recommended when choosing the reservoirs height (in y direction), since it can affect the results in some measure. The reason is that, for a given channel width L_y, increasing the reservoirs height (and then the wall thickness Δ y_w) results in bigger pressure losses at the entrance (since the flow contraction is more abrupt), which, in turn, results in a reduction in the pressure gradient inside the channel. This influences the fluid properties obtained because, as discussed by <cit.>, it is this pressure gradient, and not the pressure difference between reservoirs, which characterizes the flow (see also <cit.>). 
We have checked that entrance losses increase indeed if reservoirs height is enlarged, but it affects only slightly the presented results.§.§ Streamwise inhomogeneous pressure-driven flow Finally, we now move to discuss the third and last type of models studied in this work, which should be, a priori, the most realistic to simulate nanoflows, since its driving mechanism is not a fictitious external force that disturbs significantly the behavior of the fluid inside the channel, but a pressure gradient (obviously, induced also by a force but applied in this case far enough from the channel).Firstly, it has been confirmed that the application of a force of the magnitude in force_model3 in the margin regions of length L_res/3 (see channel-developing) translates in a pressure difference between the ends of the channel which coincides with the Δ p value imposed in force_model3 with satisfactory accuracy (less than a 10% discrepancy). In pressure_driven_streamwise_profiles(d) we present the pressure profiles for a channel of length L_x = 200σ and four different Δ p values, chosen to create the same driving in the confined fluid as the one in the SIFD-OR model shown in open_vs_closed_x (that is, the value of Δ p in the SIPD model is chosen such that Δ p / L_x in this model equals the average driving Δ p / L_x + ρ_ff_x in the SIFD-OR model for each value of f_x shown in open_vs_closed_x). This choice aims to compare dynamically equivalent flows.Observe that in the SIPD model the induced pressure gradient is much bigger than in the SIFD one. As it will be shown throughout this section, the pressure variation affects the rest of thermodynamic fluid properties and changes notably the results analyzed so far. Also note that pressure losses at the channel entrance, between the point where the external force is no longer applied and the inlet at x=0, are barely appreciable.The induced pressure difference causes a significantly more pronounced variation of the density along the channel than in the SIFD models with the same total force (see pressure_driven_streamwise_profiles(a)). In fact this density variation limits the applicability of this kind of models, since if the pressure drop is too high, ρ will diminish sufficiently to provoke a phase change at the exit of the channel. This imposes a limitation on the maximum Δ p applied in MD simulations, and on the channel length for a given pressure gradient. This is the reason why we are reporting results only for L_x = 200σ: simulations with larger L_x demand also larger Δ p to induce a certain gradient, and the phase transition occurs. Also related to density variation, it should be noted that the profiles in the wall-normal direction (y) show a more marked structure near the walls (with more clearly located atomic layers) at the beginning of the channel, where density is higher, as is apparent in pressure_driven_wall_normal_profiles(a) and pressure_driven_wall_normal_profiles(d). On the other hand, it has been verified that in our simulations fluid density evolves with pressure in a qualitatively similar fashion to that reported by <cit.>, who obtained the phase diagram of a Lennard-Jones fluid at equilibrium by MD simulations. 
As it can be seen in pressure_viscosity_exp(a), for small Δ p the p-ρ relation approaches the equilibrium equation of state, while for larger Δ p the pressure is slightly higher than the one at equilibrium but the functional relation with the density is similar.The averaged velocity also shows a faster growth in the channel when flow is induced by a difference in pressure (as it can be seen comparing pressure_driven_streamwise_profiles(c) with open_vs_closed_x(c), which is consistent with the greater density drop and the requirement of mass flow rate conservation along the channel. We can also observe that the gradient of u_x progressively increases along x, and it is clearly larger at the exit; that is, the fluid is more accelerated near the end than at the beginning of the channel. Besides, this effect is more pronounced for higher Δ p, in fact it is hardly noticeable for Δ p = 2.0 but clearly visible for Δ p = 8.0. One might ask for the physical cause of this behavior. Since pressure gradient does not change appreciably along x, we suggest that, again, it is the intense change of fluid properties in the channel (in this case, shear viscosity) which explains it. Figure <ref>(b) includes the results for viscosity as a function of x for different Δ p, extracted from the simulated shear stress through P_xy=μ( ∂ u_x / ∂ y ). μ lowering along x is in fact significant, being more marked as pressure gradient is increased. This tendency is consistent with the results of <cit.>, who reported both theoretical and MD calculations for shear viscosity at a wide range of temperatures, and showed that μ diminishes when ρ decreases (see the inset in pressure_viscosity_exp(b)). This behavior indicates that friction is reduced along the channel, and then explains the increase in the gradient of u_x. Precisely at those regions where ∂ u_x / ∂ x grows, the hyperbolic function velocity_hypcosin starts to differ from the quadratic function velocity_quadratic, and one can confirm that it is more suitable to fit the velocity profiles, although the discrepancy is still small (as an example, see the velocity in a point near the end of the channel for Δ p = 8.0ϵ / σ^3 in pressure_driven_vel_fitting).But certainly the most significant difference observed in our MD simulations between the SIFD and the SIPD flow models resides in the temperature distribution along the channel. If we look at pressure_driven_streamwise_profiles(b), we clearly observe that in SIPD models T raises to a much lesser extent than in SIFD models (open_vs_closed_x(b)). The difference is important enough to conclude that the choice of proper boundary conditions is a fundamental question in MD simulations of nanoflows, and must be addressed carefully. In this work we suggest that the cause of this disparity in the evolution of T is twofold. In the first place, as we mentioned for the case of models with an external force and open reservoirs, the term - β T u_x ∂ p/∂ x in the energy equation, energy, acts as an effective cooling mechanism. The internal energy increase produced by the viscous heat does not directly result in a temperature increase, as it would occur in an incompressible flow, due to the energy required by the pressure loss to occur. As the flow velocity u_x increases along the channel, this contribution becomes higher and temperature growth becomes progressively slower (for Δ p larger than 6.0 ϵ / σ^3 one can even observe a slight T reduction at the end of the channel). 
The second factor that contributes to moderate the temperature is shear viscosity, that, as we have seen, decreases in the flow direction, then causing a gradual reduction of the friction. The relative importance of these two causes is not clear, and deserves further research. What we do know is that both become much more important in models in which flow is induced by a pressure gradient, since the streamwise pressure and viscosity variation increases notably with respect to those driven by a uniform external force. With regard to the form of temperature distribution across the flow (pressure_driven_wall_normal_profiles(b)), we see again that T profiles meet the functional form derived in temperature_quartic+quadratic. Compared to those presented in temperature_developing for SIFD flow models, the quadratic term (we recall that it vanishes for SH flow models) dominates over the fourth-order term. § CONCLUSIONS A careful analysis of three different MD models for the flow in nanochannels has been reported. The traditional SH (force driven) flow model, which does not account for the variation of properties along the channel, permits to predict the density, temperature, pressure and velocity profiles (and thus slip length) when low forces are applied. In this case the heat generation by friction is small and it is easily dissipated by thermal conduction to the walls, where it is finally dissipated by the thermostat applied there. Other heat transfer (cooling) mechanisms, convection and dilatation, are missing as they are incompatible with an homogeneous flow. Therefore, at higher forces an inhomogeneous model must be used to capture flow developing profiles if the associated computational cost can be afforded.When two reservoirs are added at the inlet and the outlet and the fluid is thermostated there to fix the inlet temperature, streamwise variation of the flow can be predicted. The main difficulty here is that the results depend on the design of the reservoirs. If the reservoirs are surrounded by (fictitious) extensions of the walls no pressure gradient is generated because the flow is driven by an external force that balances the viscous dissipation, in the same way as it occurs inside the channel. If open reservoirs are considered, the velocities outside the channel are almost uniform and a pressure gradient is generated. In the former case, we have seen that also the reservoirs size can affect the pressure distribution inside the channel, although the influence in the results presented in this work is minor.The inclusion of reservoirs outside the domain of interest allows us to analyze the streamwise evolution of the shape of velocity and temperature profiles in the wall-normal direction. In particular, the appearance of a quadratic term in T(y) as a consequence of convection is discussed. For the higher forces in the range studied in this work, the flow is not fully (hydrodynamically and thermally) developed at the end of the channel, despite the large simulated channel lengths. The usual homogeneous simulations ignore this developing behavior, as well as the stabilization of the slip length along the channel.The pressure gradient inside the channel has an important influence on the results. Even if the pressure profile were constant across the channel section, a pressure gradient is equivalent to a constant external force only for incompressible flows. 
At high pressure differences, heat generation makes compressibility effects important, the density cannot be assumed to be constant, and the dilatation work acts as a heat sink. Therefore, the results obtained using the SIPD model are substantially different from those obtained using the SIFD model, especially regarding the temperature distribution. It has been demonstrated that the temperature growth along the channel is much smaller in SIPD than in SIFD models. Both the energy required by the pressure loss and the streamwise variation of viscosity are identified as the factors which explain this behavior. In this respect it is worth noting that if a low-cost SH model is to be used, thermostating the fluid to account for the missing heat transfer mechanisms will produce better (but still inaccurate) results than those obtained with the SH model in which only the walls are thermostated. This conclusion is obtained by comparing the temperature distributions in open_vs_closed_x and pressure_driven_streamwise_profiles: the temperature obtained with the most realistic model (SIPD) does not exceed 1.3 ϵ/k_B whereas the temperatures obtained using the SH model almost double this value. It is therefore less inaccurate to consider the temperature fixed at its inlet value (1.1 ϵ/k_B). Nevertheless, we remark once again that the use of homogeneous models will only provide a first approximation due to their inability to describe the streamwise variation of flow properties. | http://arxiv.org/abs/1705.09544v1 | {
"authors": [
"Vicente Bitrián",
"Javier Principe"
],
"categories": [
"physics.flu-dyn"
],
"primary_category": "physics.flu-dyn",
"published": "20170525175123",
"title": "Driving mechanisms and streamwise homogeneity in molecular dynamics simulations of nanochannel flows"
} |
Hidden symmetries in N-layer dielectric stacks Riichiro Saito^1 December 30, 2023 ============================================== Recent work using plasmonic nanosensors in a clinically relevant detection assay reports extreme sensitivity based upon a mechanism termed inverse sensitivity, whereby reduction of substrate concentration increases reaction rate, even at the single-molecule limit. This near-homœopathic mechanism contradicts the law of mass action.The assay involves deposition of silver atoms upon gold nanostars, changing their absorption spectrum. Multiple additional aspects of the assay appear to be incompatible with settled chemical knowledge, in particular the detection of tiny numbers of silver atoms on a background of the classic `silver mirror reaction'. Finally, it is estimated here that the reported spectral changes require some 2.5E11 times more silver atoms than are likely to be produced. It is suggested that alternative explanations must be sought for the original observations. § INTRODUCTION Rodriguez-Lorenzo et al. <cit.> report an ultra-sensitive method for detecting analytes that can be recognised by an antibody. The PSA protein is used to demonstrate the technique. The basis of the assay is for the antigen to be recognised by antibodies conjugated with the glucose oxidase enzyme (GOx), which then produces hydrogen peroxide. The H_2O_2 in turn reduces silver ions, the resulting silver atoms being deposited on gold nanoparticles (`nanostars'). The deposition is detected by a blueshift of the absorption spectrum of the solution of gold nanoparticles. The reactions are summarised in Fig. <ref> of this analysis.§ INVERSE SENSITIVITY Rodriguez-Lorenzo et al. <cit.> report bizarre, less-is-more reaction kinetics, according to which the reaction proceeds more quickly as the substrate concentration is reduced close to zero. In their own words (from the abstract of their paper): However, because conventional transducers generate a signal that is directly proportional to the concentration of the target molecule, ultralow concentrations of the molecule result in variations in the physical properties of the sensor that are tiny, and therefore difficult to detect with confidence. Here we present a signal-generation mechanism that redefines the limit of detection of nanoparticle sensors by inducing a signal that is larger when the target molecule is less concentrated. The approximate form of the kinetics is sketched in Fig. <ref>A. As the substrate concentration is increased, the reaction rate rises abruptly from zero and then declines logarithmically (the authors' plots are semi-logarithmic) from a peak at extraordinarily low concentrations. In the GOx-detection experiment of their Fig. 1c, that peak occurs at a concentration where less than one molecule of GOx is expected to be present in the reaction volume (this is calculated in the next section). In contrast, the law of mass action states that the reaction rate is proportional to the product of the substrate concentrations (more accurately activities). 
Since only the analyte concentration is varied in the present experiments and it only appears with first-order kinetics, the reaction rate should simply be proportional to the analyte concentration at low concentrations[There may be some uncertainty regarding the dependence of the blueshift of the absorbance peak on the amount of silver deposited, but the former can be assumed to be an increasing function of the latter, so the conclusions reached here would be unaffected by the details of the relation.]. `Inverse sensitivity' appears to be spectacularly incompatible with the law of mass action. The `explanation' offered by Rodriguez-Lorenzo et al. <cit.> for this discrepancy is that spontaneous nucleation of pure silver nanoparticles at high concentrations bypasses the deposition of silver on the gold nanostars. Inspection of the assay reactions (Fig. <ref> of this analysis) shows that only the right-hand branch, in which silver ions are deposited on the gold nanostars, leads to the blueshift used to detect the analyte. Thus, the presence of a competing nucleation reaction can only reduce reaction sensitivity (Fig. <ref>B of this analysis), irrespective of the dependence of silver production, nucleation and deposition on analyte concentration. Nucleation cannot increase assay sensitivity[It is possible that in their conceptual argument the authors have confused the slope of the analyte-blueshift curve, which could conceivably become negative at high silver concentrations, with the absolute blueshift. In any case, what matters is the sensitivity at low analyte concentrations, where the nucleation reaction is unlikely to proceed.]. § SENSITIVITY AND NOISE The assay is reported to have extraordinary sensitivity and exceptionally low noise levels. Fig. 2c of the paper reports the detection of the difference between zero glucose oxidase and 1E-20g/ml glucose oxidase, which represents an average of 0.04 molecules of GOx (MW = 160kDa) per ml. The precise reaction volume is not reported in the paper, but would need to have been about 10ml to have had a 50% chance of containing a single molecule of GOx. A reaction volume of 1ml was used elsewhere in the paper. As no statistics are given for this figure, this observation may have been a lucky event whose replication was never attempted. Fig. 4 shows a quantification of the variability of the assay. In both panels a and b, we see that 1E-18g PSA in the reaction volume of 1ml is detected with fantastic precision compared to the amount of 1E-19g. At both concentrations, the standard deviation of the assay signal is in most cases smaller than the symbol and in all cases smaller than a few percent of the maximum signal. But 1E-18g/ml of PSA represents an average of just 23 molecules in the reaction volume of 1 ml. Such small quantities would necessarily exhibit stochastic variation in the number of molecules present. By Poisson statistics, 23 molecules should be associated with a standard deviation of √(23), equivalent to 21%. This moreover represents a minimum. The signal amplification required to detect such small quantities would certainly contribute additional (high levels of) noise. Yet the authors consistently report improbably low standard deviations. This amazing sensitivity is at odds with a publication that predated Rodriguez-Lorenzo et al. Li et al.
<cit.>, who used a variation of the present assay to detect glucose (of which more below) with an excess of GOx (as opposed to detecting GOx with excess glucose). Li et al. report a detection threshold of 10 nM (although their Fig. 1 suggests that values in the micromolar range might be more realistic). Even if a GOx molecule will obviously produce more silver than a glucose molecule (estimated below), the difference between the claimed detection thresholds for Li et al. and Rodriguez-Lorenzo et al. is extreme: 1E-8M vs. 6E-23M, a factor of 1.6E14.That single-molecule sensitivity is rendered even more unexpected by another result in Li et al. <cit.>. In their Fig. S7, they compare the abilities of glucose and H_2O_2 to reduce silver ions. They report that H_2O_2 is much less effective. From this we deduce that each molecule of H_2O_2 is by no means guaranteed to reduce a silver ion. A poor yield at this stage of the assay would reduce its sensitivity even further, making single-molecule detection even more implausible. § SILVER MIRROR REACTION Another problem is that the deposition of silver is triggered using a mixture of AgNO_3 and NH_3. The authors describe silver being deposited on the gold nanoparticles (or aggregating via nucleation and growth) as a result of reduction by the H_2O_2 produced by glucose oxidase. In order for this to allow detection of single molecules, a strict requirement is that absolutely no silver at all be deposited in the absence of GOx and the H_2O_2 it produces. However, it turns out that the assay reaction probably contained two sources of reductants that were neither acknowledged nor apparent in the results. Either of these sources would generate background reductant concentration in excess of that arising during the claimed detection of single analyte molecules.The authors seem to have been unaware that they were using a classic classroom reaction called the `silver mirror reaction'. The mixture of AgNO_3 and NH_3 is called Tollen's reagent and is used to detect aldehydes, whose presence triggers the deposition of a visually impressive silver layer on any available surface. A nice description of the reaction for motivating secondary school chemistry classes can be found on the Royal Chemistry Society web site <cit.>. As demonstrated in that example, the reaction will produce a positive in the presence of glucose, which has an aldehyde form in solution. The problem is that in the assay of Rodriguez-Lorenzo et al., 100mM glucose is present as the substrate for glucose oxidase. It seems inconceivable that it would not produce much more silver deposition than the tiny amounts of H_2O_2 produced by a few glucose oxidase molecules.The paper by Li et al. <cit.> provides direct support for our assertion that glucose would reduce silver and generate a signal, because they apply this assay precisely for the detection of glucose! The 100mM glucose present in all experiments of Rodriguez-Lorenzo et al. would therefore generate a saturating reduction of silver, against which background it would presumably be impossible to detect single-molecule signals. In any case, these expected and demonstrated background signals are simply absent from the results reported by Rodriguez-Lorenzo et al.The assay potentially contains a second source of reductant able to swamp single-molecule signals. Luo et al. <cit.> report that gold nanoparticles can catalyse the oxidation of glucose, producing H_2O_2. This catalysis is quite efficient for bare nanoparticles. 
Some coatings of the gold can prevent the catalysis and this may pertain to the experiments of Rodriguez-Lorenzo et al. However, the covering would have to be perfect to allow single-molecule detection. § NANOPARTICLE NUMBERS There are two further issues with quantitative aspects of the assay as reported by the authors. I give a brief overview before expounding the detailed arguments. The first problem is that the quantities of enzyme involved will produce absolutely tiny amounts of H_2O_2 and correspondingly tiny amounts of silver—enough to deposit only a single atom on each of a very small fraction of the gold nanoparticles present. It is extremely unlikely that addition of a single atom will detectably change the absorbance spectrum of the nanoparticle. The second and related problem is that the expected large fraction of unmodified nanoparticles appears not to contribute to the reported spectrum. Because the assay signal is the absorbance of a dilute solution of nanoparticles, each nanoparticle will contribute approximately independently to that absorbance. In the absence of silver deposition, a control spectrum is obtained. Modified nanoparticles would have a different spectrum depending on the degree of modification. If a solution contains modified and unmodified nanoparticles, a simple mixture of the two spectra should be obtained. However, even under conditions where a very large fraction of nanoparticles must have been unmodified, their dominant contribution to the mixture spectrum was apparently absent. The more detailed explanations follow below and in the next section. The assay is in two stages. H_2O_2 is produced by the action of GOx attached to the nanostars for 1 hour, then the silver ions are added to trigger the silver deposition and/or nucleation, which are allowed to proceed for another 2 hours. The precise reaction mixture for the second stage is 0.1mM AgNO_3 + 40mM NH_3 added to the 10mM MES buffer (pH 5.9) already present. A first remark is that GOx is presumably totally inactivated by the basic pH ≥ 10 of the second stage after addition of NH_3 (see Fig. 5 of ref <cit.>). It also seems that GOx is strongly inhibited by silver ions <cit.>. So it is unnecessary to consider H_2O_2 and silver produced next to the nanostars, just the H_2O_2 concentration existing in the bulk solution at the end of the first stage and the silver it produces during the second stage. There is therefore no kinetic advantage in attaching the GOx to the nanostars. What is the concentration of H_2O_2? The authors have omitted details about the GOx used, so we'll assume it is the most active one available from Sigma: G7141, with an activity of 100000–250000 units/g <cit.>. The unit definition is: One unit will oxidize 1.0 μmole of β-D-glucose to D-gluconolactone and H_2O_2 per min at pH 5.1 at 35 °C, equivalent to an O_2 uptake of 22.4 μl/min. If the reaction mixture is saturated with oxygen, the activity may increase by up to 100%. Another Sigma page <cit.> indicates that the final glucose concentration under the conditions for the unit definition is 1.61% w/v or 90mM—similar to the 100mM used by the authors. Consider Fig. 2 and in particular the spectra in panel b for zero glucose oxidase (black, blue) and 1E-20g/ml GOx (red). Using the enzyme activity values just given, it can be calculated that this low concentration of GOx would produce an H_2O_2 concentration of 1.5E-16M after 1 hour. Generously assuming the production of one silver atom per H_2O_2 molecule, 9E4 silver atoms per ml would be produced.
(Above, we mentioned results that suggest that this conversion is far from complete, which would result in many fewer silver atoms.) We now calculate the number of nanostars. The concentration of nanostars is presumably the same as in the assays: [Au] = 0.25mM (Methods). We'll also need the following values: nanostar diameter 60nm (Fig. 2a; Methods), so radius 30nm; density of gold 19.3g/ml; atomic weight of gold 197. The volume of a nanostar (assumed spherical) would be 1E-16ml. This would contain 2E-15g of gold or 1E-17 moles. So 1ml of 0.25mM [Au] should contain 2.3E10 nanostars. There would therefore only be enough silver to deposit just one atom on each of 0.0004% (about 1 in 260000) of the nanostars. The rest would have no deposited silver. As mentioned above, such a minimal modification as deposition of a single silver atom is very unlikely to produce a detectable change of absorbance of a nanostar; we estimate in the next section the amount of silver deposition necessary to create the spectral changes reported. Furthermore, at least 99.9996% of the nanostars must be unaltered. They would necessarily have the same spectrum as those in the zero GOx control. The small admixture of the 0.0004% of nanostars each modified by a single silver atom will presumably make very little difference. Yet hugely different spectra are reported. Please compare again the black and red spectra, and consider that the difference is supposed to result from 0.0004% of nanostars having a single silver ion deposited on them. In reality, a spectrum dominated by the majority unmodified nanostars and therefore almost identical to the control spectrum would be expected. A similar, if slightly less extreme, problem exists for the PSA assays of Fig. 4, which show a very strong signal at 1E-18g/ml PSA and for which the exact gold concentration is specified (i.e. [Au] = 0.25mM). If we make the very generous assumption that each PSA molecule has attached to it 100 GOx molecules, still only about 1 in 4 nanostars will receive a solitary silver ion, with the rest being unaltered. § EXPECTED BLUESHIFT I now estimate the amount of silver deposition required to produce the reported spectral shifts of nanostar absorbance. In general, unadorned gold nanoparticles are associated with a (relatively) red absorbance peak, while those with silver shells display a peak that is closer to the blue. The spectral peaks in Rodriguez-Lorenzo et al. are rather red-shifted compared to most of the spectra in the literature, presumably because of the relatively large size of the present nanoparticles. The key observation is that under conditions where silver is supposed to have been deposited on the nanostars, there is no sign of the spectral peak attributable to the unmodified gold nanostars. In particular, the spectrum for 1E-20g/ml GOx of Fig. 2b (red) shows no sign of the peak seen in the control spectra (black and blue). This suggests that the majority of nanostars have been coated with a silver layer sufficient to obscure the gold peak. I'll try to estimate this thickness with reference to work in the literature. This simple calculation will assume spherical nanoparticles. Conveniently, the density and atomic weight of silver (10.3g/ml and 108) and of gold (19.3g/ml and 197) are such that the two metals contain very similar numbers of atoms per unit volume. Kim et al. <cit.> measure spectra before and after silver deposition. They report the spectra of gold-core nanoparticles with silver shells for different mole fractions of the two metals.
By a little elementary geometry, we can obtain the thickness T_Ag of the silver shell from the radius of the gold core (r_Au) and the silver mole fraction (m_Ag): T_Ag = r_Au[(1 + m_Ag/(1-m_Ag))^(1/3) - 1]. The volume of silver per nanoparticle is V_Ag = 4/3π((r_Au + T_Ag)^3 - r_Au^3) and, if the volume is in cubic metres, the number of silver atoms is then N_Ag = 1E7 × 10.3 V_Ag × 6E23/108. Fig. 2 of ref <cit.> shows the growth of a blueshifted peak that eventually obscures the red peak from the gold core. Two particle sizes of diameters 13 nm and 25 nm were tested. With the smaller one, none of the silver mole fractions tested obscured the gold peak in the way seen in Fig. 2b of Rodriguez-Lorenzo et al. Such an effect is, however, observed with the larger particles. The largest silver mole fraction for which the gold peak is still larger than the silver one (and therefore still definitely detectable) is 0.25. This corresponds to an average silver layer thickness of about 1.3 nm. Even on such small nanoparticles this would imply 1E5 silver atoms per nanoparticle. (Note that the nanostars are larger and have an increased surface area because of their shape, but my aim here is to avoid overestimating the number of silver atoms.) If there are 2.3E10/ml nanostars (see previous section), that would imply that 1 ml of solution would require deposition of at least 2.3E15 silver atoms to achieve the observed spectral shift. The discrepancy with the maximum number of 9E4 that could be produced by 1E-20g/ml GOx (calculated above) is a mere factor of 2.5E11. Beside this large number, the various imprecisions in my calculation (size of the nanostars, any specific plasmonic effects associated with the vertices of the nanostars) are probably irrelevant. § SUMMARY The premise of inverse sensitivity in Rodriguez-Lorenzo et al. <cit.>, that a competing reaction can increase the sensitivity of an assay at the single-molecule limit, seems to be kinetic nonsense. They report detection of GOx when the reaction volume would only rarely have contained a single molecule. The detection of small numbers of analyte molecules does not display the stochastic variability expected. The detection of tiny numbers of silver atoms is implicitly claimed, but the assay conditions contain a textbook reaction for producing silver atoms in large quantities independently of the analyte detection mechanism. The complete disappearance of the spectral peak of gold nanostars unmodified by silver atoms is hard to reconcile with the estimate that only a tiny fraction of stars will receive even a single silver atom. The apparent discrepancy between the amount of silver likely to be produced by analyte detection and that estimated to be required to produce the changes of the absorbance spectrum is a factor of at least 2.5E11. The authors should provide a more plausible explanation for their observations. § ACKNOWLEDGEMENTS This analysis is based upon comments I posted on the PubPeer platform as Peer 2: <https://pubpeer.com/publications/3E8208F0654769A44C22D4E78DA2B8>. My attention was drawn to the article by the initial comments on the paper by Peer 1. | http://arxiv.org/abs/1705.09509v2 | {
"authors": [
"Boris Barbour"
],
"categories": [
"q-bio.QM",
"physics.chem-ph"
],
"primary_category": "q-bio.QM",
"published": "20170526100610",
"title": "Inverse sensitivity of plasmonic nanosensors at the single-molecule limit"
} |
Two characteristic polynomials corresponding to graphical networks over min-plus algebra
Sennosuke WATANABE^a, Yuto TOZUKA^b, Yoshihide WATANABE^c, Aito YASUDA^d, Masashi IWASAKI^e
^a Department of General Education, National Institute of Technology, Oyama College, 771 Nakakuki, Oyama City, Tochigi, 323-0806 Japan ^b Graduate School of Science and Engineering, Science of Environment and Mathematical Modeling, Doshisha University, 1-3 Tatara Miyakodani, Kyotanabe, 610-0394 Japan ^c Faculty of Science and Engineering, Department of Mathematical Sciences, Doshisha University, 1-3 Tatara Miyakodani, Kyotanabe, 610-0394 Japan ^d,e Faculty of Life and Environmental Sciences, Kyoto Prefectural University, 1-5 Nakaragi-cho, Shimogamo, Sakyo-ku, Kyoto, 606-8522 Japan E-mail addresses: ^[email protected], ^[email protected], ^[email protected]
In this paper, we investigate characteristic polynomials of matrices in min-plus algebra. Eigenvalues of min-plus matrices are known to be the minimum roots of the characteristic polynomials based on tropical determinants, which are designed by emulating standard determinants. Moreover, minimum roots of characteristic polynomials have a close relationship to graphs associated with min-plus matrices, consisting of vertices and directed edges with weights. The literature has yet to focus on the other roots of min-plus characteristic polynomials. Thus, here we consider how to relate the 2nd, 3rd, … minimum roots of min-plus characteristic polynomials to graphical features. We then define new characteristic polynomials of min-plus matrices by considering an analogue of the Faddeev-LeVerrier algorithm that generates the characteristic polynomials of linear matrices. We conclusively show that minimum roots of the proposed characteristic polynomials coincide with min-plus eigenvalues, and observe the other roots as in the study of the already known characteristic polynomials. We also give an example to illustrate the difference between the already known and proposed characteristic polynomials. Keywords: Circuit, Directed and weighted graph, Eigenvalue problem, Faddeev-LeVerrier algorithm, Min-plus algebra. § INTRODUCTION Various fields of mathematics consider min-plus algebra, which is an abstract algebra with an idempotent semiring structure. The arithmetic operations of min-plus algebra are min(a,b) and a+b for a, b∈ℝ_min:=ℝ∪{∞}, where ℝ is the set of all real numbers. Although its operators differ from those of the well-known linear algebra, the eigenvalue problem is fundamental in both types of algebra. The min-plus eigenvalue problem was shown in Gondran-Minoux <cit.> and Zimmermann <cit.> to have a close relationship with the shortest path problem on graphs consisting of vertices and edges, where every edge links two distinct vertices. Directions are added to edges in directed graphs, and a value is associated with each edge in weighted graphs. Matrices whose entries are min-plus numbers are sometimes considered with respect to directed and weighted graphs. Such matrices are called min-plus matrices, and are practically defined by assigning the weight of the edge from vertex i to vertex j to the (i,j) entry. According to Gondran-Minoux <cit.> and Zimmermann <cit.>, if a min-plus matrix has an eigenvalue, this eigenvalue reflects a significant feature in the network on a directed and weighted graph that is associated with the min-plus matrix.
There exist circuits whose average weights are the eigenvalue, where a circuit signifies a closed path without crossing;its average weight is given by the ratio of the sum of all weights to the vertex number.Conversely, in the network involving circuits, the minimum of the average weights of circuitscoincides with the eigenvalue of the corresponding min-plus matrix. Moreover, the eigenvalues of min-plus matrices are the minimum roots of the characteristic polynomials, defined using tropical determinants, which correspond to the determinants over linear algebra, over min-plus algebra <cit.>. However, the 2nd, 3rd, … minimum roots have not yet been related to graphical features. Thus, the first goal of this paper is to identify graphical significance of the 2nd, 3rd, … minimum roots.Over linear algebra, the QR, qd and Jacobi algorithms are representative numerical solvers for eigenvalue problem <cit.>.The divided-and-conquer and bisection algorithms are also the famous linear eigenvalue solvers <cit.>.In contrast, few procedures for min-plus eigenvalues have been studied, with the exception of work by Maclagan-Sturmfels <cit.>.Moreover, to the best of our knowledge,min-plus eigenvalue procedures based on linear equivalents have not been yet addressed in the literature. Thus, the second goal of this paper is to propose new eigenvalue algorithms for min-plus algebra by emulating the Faddeev-LeVerrier algorithm <cit.> in linear algebra. The Faddeev-LeVerrier algorithm employs only linear scalar and matrix arithmetic, which can be intuitively replaced with min-plus one. Strictly speaking, the Faddeev-LeVerrier algorithm generates not eigenvalues but characteristic polynomials of square matrices.In other words, our second goal is to essentially show how to derive new characteristic polynomials of min-plus matrices.The remainder of this paper is organized as follows.Section 2 describes elementary scalar and matrix arithmetic over the min-plus algebra. Section 3 and 4 explain the relationships between min-plus matrices and the corresponding networks,and linear factorizations of min-plus polynomials, including an effective preconditioning, respectively.In Section 5, we elucidate not only minimum roots, but also the other roots of given min-plus characteristic polynomialsgiven using tropical determinants from the perspective of graphical networks. In Section 6, by considering an analogue of the Faddeev-LeVerrier algorithm,we derive new characteristic polynomials of min-plus matrices, then clarify their features in the comparison with known characteristic polynomials.Finally, in Section 7, we provide concluding remarks. § MIN-PLUS ARITHMETICIn this section, we present elementary definitions and properties concerning min-plus algebra. We first focus on scalar arithmetic over the min-plus algebra, and then present the matrix arithmetic. For a,b∈ℝ_min, min-plus algebra has only two binary arithmetic operators, ⊕ and ⊗, which have the following definitions,a⊕ b=min{a,b}, a⊗ b= a+b.We can easily check that both ⊕ and ⊗ are associative and commutative,and ⊗ is distributive with respect to ⊕, namely, for a,b,c∈ℝ_min, a⊗ (b⊕ c)=(a⊗ b)⊕ (a⊗ c). Moreover, we may regard ε =+∞ and e=0 as identities with respect to⊕ and ⊗, respectively because for any a∈ℝ_min, a⊕ε =min{a,+∞}=a, a⊗ e=a+0=a.Using the identity e, we can uniquely define the inverse of a∈ℝ_min∖{ε}with respect to ⊗, denoted by b, asa⊗ b=e.Since it holds thata⊗ε =a+∞=ε,the identity ε =+∞ with respect to ⊕ is absorbing for ⊗. 
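To fix ideas, here is a small worked example of our own (not taken from the cited references): for a=3 and b=5 in ℝ_min,

a⊕ b=min{3,5}=3, a⊗ b=3+5=8,

and the ⊗-inverse of a=3 is b=-3, since 3⊗ (-3)=3-3=0=e.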
Here, we consider matrices whose entries are all ℝ_min numbers to be min-plus matrices. For positive integers m and n, we designate the set of all m-by-n min-plus matrices as ℝ_min^m× n. Since the min-plus matrices appearing in the later sections are all square matrices,we hereinafter limit discussion to n-by-n min-plus matrices. For A=(a_ij),B=(b_ij)∈ℝ_min^n× n, the sum A⊕ B =([A⊕ B]_ij)∈ℝ_min^n× nand product A⊗ B=([A⊗ B]_ij)∈ℝ_min^n× n are respectively given as:[A⊕ B]_ij=a_ij⊕ b_ij=min{a_ij,b_ij},and[A⊗ B]_ij=⊕^k_ℓ=1(a_iℓ⊗ b_ℓ j)=ℓ=1,2,…,kmin{a_iℓ+b_ℓ j}.Moreover, for α∈ℝ_min and A=(a_ij)∈ℝ_min^n× n,the scalar multiplication α⊗ A =([α⊗ A]_ij)∈ℝ_min^n× n is defined as [α⊗ A]_ij=α⊗ a_ij.§ GRAPHS AND MIN-PLUS EIGENVALUESIn this section, we first give a short explanation for min-plus matrices corresponding to graphs which are not functional graphs. Then, we review the relationships between the eigenvalues of min-plus matrices and the corresponding graphs. Let v_1,v_2,…, v_m denote vertices on the graph G,and let e_i,j=(v_i,v_j) be edges which link the vertices v_i and v_j. The edge e_i,i=(v_i,v_i) is called a loop. Moreover, let V:={v_1,v_2,…,v_m} and E:={e_i,j | (i,j)∈σ}where σ is the set of all pairs of i and j such that the edge e_i,j exists. Then, two sets V and E uniquely determine the graph G. Thus, such G is often expressed as G=(V,E). If G is a directed graph, then e_i,j are directed edges whose tail and head vertices are v_i and v_j, respectively. Further, if G is a directed and weighted graph,then the real number w(e_i,j) is assigned to each edge e_i,j, and is called the weight. The pair 𝒩=(G,w) is often called the network on the graph G. The following definition gives the so-called weighted adjacency matrices associated with networks. For the network 𝒩 involving m vertices,an m-by-m weighted adjacency matrix A(𝒩)=(a_ij) is given using ℝ_min numbers as a_ij={[ w((v_i,v_j)) if (v_i,v_j)∈ E,; +εotherwise . ].It is emphasized here that the weighted adjacency matrix A(𝒩) is a min-plus matrix. Conversely, for any matrix A∈ℝ_min^n× n,there exists a network whose weighted adjacency matrix coincides with A. We hereinafter denote such a network by 𝒩(A). If the vertex indices i(0),i(1),…,i(s) are different from each other,and edges e_i(0),i(1), e_i(1),i(2),…,e_i(s-1),i(s) exist,then P=(v_i(0),v_i(1),…,v_i(s)) is a path on the network 𝒩. For the path P, the length ℓ (P) denotes the edge number s,and the weight sum ω (P) designates the sum of the edge weights:ω(P)=∑_k=0^s-1w((v_i(k),v_i(k+1)))=∑_k=0^s-1a_i(k)i(k+1) =⊗_k=0^s-1a_i(k)i(k+1).Moreover, the path P with i(0)=i(s) is just a circuit,and its length and weight are calculated in the same manner as those in path P. The following definition describes the average weight of the circuit C.For the circuit C, the average weight ave(C) is given by ave(C)=ω(C)ℓ(C).The eigenvalues and eigenvectors of matrices play important roles in both linear algebra and min-plus algebra. The following definition determines the eigenvalues and eigenvectors of the min-plus matrix. For the min-plus matrix A∈ℝ_min^n× n, if there exist λ∈ℝ_min andx∈ℝ_min^n∖{(ε,ε,…,ε)^⊤} satisfying A⊗x=λ⊗x,then λ and x are an eigenvalue and its corresponding eigenvector. The eigenvalues of the min-plus matrix were shown in Baccelli et al. <cit.> and Gondran-Minoux <cit.>to have interesting relationships with circuits in the network. 
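Before stating these relationships precisely, we illustrate the above definitions with a small example of our own. Let

A=[ 1 3; 2 1 ]∈ℝ_min^2× 2.

The network 𝒩(A) has two vertices, each with a loop of weight 1, an edge of weight 3 from v_1 to v_2 and an edge of weight 2 from v_2 to v_1. Its circuits are the two loops, each with average weight 1, and the circuit (v_1,v_2,v_1) with average weight (3+2)/2=5/2. The matrix product follows the rule above, e.g., [A⊗ A]_11=min{1+1,3+2}=2. Taking x=(e,e)^⊤=(0,0)^⊤, we obtain A⊗x=(min{1,3},min{2,1})^⊤=(1,1)^⊤=1⊗x, so λ =1 is an eigenvalue of A, and it equals the minimum average weight of the circuits in 𝒩(A).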
If the min-plus matrix A∈ℝ_min^n× n has an eigenvalue λ≠ε,there exists a circuit in the network 𝒩(A) whose average weight is equal to λ.The minimum of the average weights of circuits in the network 𝒩(A)coincides with the minimum eigenvalue of the min-plus matrix A∈ℝ_min^n× n. In particular, Theorem <ref> suggests that we can algebraically computethe minimum of average weights of circuits in 𝒩(A) without grasping pictorial situations. § FACTORIZATION OF MIN-PLUS POLYNOMIALSIn this section, we briefly review Maclagan-Sturmfels <cit.> with regards to linear factorization over the min-plus algebra,and then describe a preconditioning algorithm in linear factorizations which will be helpful in later sections. We now consider the so-called min-plus polynomial of degree n with respect to x,p(x)=x^n⊕ c_1⊗ x^n-1⊕⋯⊕ c_n-1⊗ x⊕ c_n.where x^k:=x⊗ x⊗⋯⊗ x_k times=kxand c_1,c_2,…,c_n∈ℝ_min are the coefficients. The following proposition gives the necessary and sufficient condition for factorizing the min-plus polynomial p(x)into linear factors as p(x)=(x⊕ c_1)⊗ [x⊕ (c_2-c_1)]⊗⋯⊗ [x⊕ (c_n-c_n-1)].[Maclagan-Sturmfels <cit.>] The min-plus polynomial p(x) can be completelyfactorized into linear factors if, and only if, the coefficientsa_0,a_1,…,a_n-1∈ℝ_min satisfy the following inequality,c_1≤ c_2-c_1≤⋯≤ c_n-c_n-1.Regarding p(x) as the min-plus function with respect to x, we see that p(x) is piecewise linear. This is because p(x)=min{nx,c_1+(n-1),…,c_n-1+x,c_n}.Thus, the functional graph consists of line segments and rays. Figure <ref> shows an example of the functional graph of x^2⊕ 2⊗ x⊕ 6. The piecewise linearity implies that p(x) has a finite number of break points. Roots of the min-plus function p(x) coincide with the values of the x-coordinates of break points. From Figure <ref>, we can thus factorize p(x) as p(x)=(x⊕ 2)⊗ (x⊕ 4). It is emphasized here that two distinct min-plus polynomials,p(x) and p'(x), are sometimes factorized using common linear factors, which differs from over linear algebra. If the linear factorizations of p(x) and p'(x) are the same,then we recognize that p(x) is equivalent to p'(x). To distinguish p(x)=q(x), namely, p(x) is completely equal to p'(x),we express p(x)≡ p'(x) if p(x) is equivalent to p'(x). We later need to find equivalent min-plus polynomialsto observe the characteristic polynomials of min-plus matrices. Although it is not so difficult to derive equivalent min-plus polynomials,we show how to reduce them to equivalent ones that can be directly factorized into linear factors. Such algorithms, to the best of our knowledge, have not previously been presented. Therefore, we describe an algorithm for constructingan equivalent polynomial that can be factorized into linear factors.Constructing p'(x)= x^n⊕ c'_1⊗ x^n-1⊕⋯⊕ c'_n-1⊗ x⊕ c'_n which is equivalent to p(x)=x^n⊕ c_1⊗ x^n-1⊕⋯⊕ c_n-1⊗ x⊕ c_n,namely, p(x)≡ p'(x).Input: The coefficients c_1,c_2,…,c_n in the min-plus polynomial p(x).Output:The coefficients c'_1,c'_2,…,c'_n in the equivalent min-plus polynomial p'(x).01:Set c_1=0 and i:=0.02:Set c_j=ε if p(x) does not involve x^n-j.03: Compute T_k:=(c_k -c_i)/(k-i) for k=i+1,i+2,…,n.04: Find integer m such that T_m=min_k=i+1,i+2,…,n T_k.05: Compute c'_i+1,c'_i+2,…,c'_m as c'_ℓ={[ c_i+(ℓ-i)c_m-c_im-i,ℓ=i+1,i+2,…,m-1,; c_m,ℓ=m. ].06: Overwrite i:=m.07: Set c'_n:=c_n if i=n. Otherwise, go back to line 03.§ ALL ROOTS OF CHARACTERISTIC POLYNOMIALS OF MIN-PLUS MATRICES Characteristic polynomials of matrices over linear algebra have roots which are just the eigenvalues. 
However, to the best of our knowledge, the characteristic polynomials of min-plus matrices have not yet been strictly defined. Min-plus characteristic polynomials can be, for example, given using the tropical determinant. Such characteristic polynomials have minimum roots which coincide with minimum eigenvaluesand the minimums of average weights in the corresponding networks. The literature has not discussed whether the other roots are eigenvalues or not,nor whether they are meaningful features or not in the network. In this section, we thus clarify the relationship between the 2nd, 3rd,…,minimum roots and the average weights of circuits in a special network. We first review characteristic polynomials of min-plus matrices using the tropical determinant <cit.>. For the min-plus matrix A=(a_ij)∈ℝ_min^n× n,the tropical determinant, denoted tropdet(A), is defined by:tropdet(A)=⊕_σ∈ S_na_1σ(1)⊗ a_2σ(2)⊗⋯⊗ a_nσ(n),where S_n is the symmetric group of permutations of {1,2,…,n}. The following definition then determines the characteristic polynomial of A. For the min-plus matrix A∈ℝ_min^n× n, the characteristic polynomial g_A(x) is given by g_A(x)=tropdet(A⊕ x⊗ I),where I is the n-by-n identity matrixwhose (i,j) entries are 0 if i=j, or ε otherwise. To distinguish the distinct circuits in the network 𝒩(A),that are associated with the min-plus matrix A∈ℝ_min^n× n,we hereinafter use the notation C(ℓ_i,p_i)as the circuit of length ℓ_i and with the average weight p_i. Moreover, we prepare a set of circuits with a length sum of ℓ̃_i in the network 𝒩(A). Here, we regard the extended circuit of length ℓ̃_i, and designate itas C̃(ℓ̃_i,p̃_i) where p̃_i is the average weight. Of course, simple circuits are members of extended circuits,and the weight sum of C̃(ℓ̃_i,p̃_i) is ℓ̃_ip̃_i for each i. According to Maclagan-Sturmfels <cit.>,we can easily derive a propositionconcerning the relationships between coefficients of the characteristic polynomialand the weight sums of extended circuits in the network.For the min-plus matrix A∈ℝ_min^n× n,let us assume that the characteristic polynomial g_A(x) is expanded as g_A(x)≡ x^n⊕ c_1⊗ x^n-1⊕⋯⊕ c_n-1⊗ x⊕ c_n.Then, each coefficient c_j coincides with the minimum of the weight sums of the extended circuitsin the set of the separated and extended circuits𝒞_j:={C̃(ℓ̃_i,·)|ℓ̃_i=j}in the network 𝒩(A) that are associated with A. Now, we consider the case where k separate circuits C(ℓ_1,p_1),C(ℓ_2,p_2),…, C(ℓ_k,p_k)existin the network 𝒩. Strictly speaking, C(ℓ_1,p_1),C(ℓ_2,p_2),…, C(ℓ_k,p_k)are distinct to each other and every vertex belongs to at most one circuit in the network 𝒩. Without loss of generality, we may assume that p_1≤ p_2≤⋯≤ p_k. Moreover, we recognize that the extended circuitC̃(ℓ̃_i,p̃_i) is homogeneousif all simple circuits in C̃(ℓ̃_i,p̃_i) have the same average weight p̃_i. We then see that, in the case where C(ℓ_1,p_1), C(ℓ_2,p_2),…,C(ℓ_k,p_k) are separate circuits,j homogeneous extended circuits C̃(ℓ̃_1,p̃_1),C̃(ℓ̃_2,p̃_2),…,C̃(ℓ̃_j,p̃_j) exist where p̃_1< p̃_2<⋯<p̃_jand j≤ k in the network 𝒩. This is key role to deriving the following two main theorems in this section. Let us assume that all circuits are separated in the network 𝒩(A)associated with the min-plus matrix A∈ℝ_min^n× n. Then the characteristic polynomial g_A(x) can be factorized into linear factors of the formg_A(x)≡ (x⊕p̃_1)^ℓ̃_1⊗ (x⊕p̃_2)^ℓ̃_2⊗⋯⊗ (x⊕p̃_k)^ℓ̃_k⊗ x^r,where r:=n-(ℓ̃_1+ℓ̃_2+⋯+ℓ̃_k).Without loss of generality, we may assume p̃_1<p̃_2<⋯<p̃_k. 
We first prove that p̃_1,p̃_2,…,p̃_k are roots ofg_A(x)=x^n⊕ c_1⊗ x^n-1⊕⋯⊕ c_n-1⊗ x⊕ c_n. It is obvious that the leading term x^n becomes np̃_1 at x=p̃_1. From Proposition <ref>, the coefficient c_ℓ̃_1 is equal tothe minimum of weight sums of the extended circuits in the set 𝒞_ℓ̃_1. Since p̃_1 is the minimum average weight, c_ℓ̃_1=ℓ̃_1p̃_1. Thus, we can simplify the term c_ℓ̃_1⊗ x^n-ℓ̃_1 asℓ̃_1p̃_1+(n-ℓ̃_1)p̃_1=np̃_1 at x=p̃_1. Similarly, for all i≠ℓ̃_1, c_i⊗ x^n-i=c_i+(n-i)p̃_1 at x=p̃_1. If c_i+(n-i)p̃_1<np̃_1, namely, c_i/i<p_1,then the homogeneous extended circuit C̃(i,p̃_0) exists where p̃_0<p̃_1. This contradicts the assumption that p̃_1 is the minimum average weight. Thus, we conclude that x=p̃_1 is a root of g_A(x). Moreover, we can easily derive c_ℓ̃_1+ℓ̃_2⊗ x^n-ℓ̃_1-ℓ̃_2 =ℓ̃_1p̃_1+(n-ℓ̃_1)p̃_2 at x=p̃_2. This is because Proposition <ref> immediately leads toc_ℓ̃_1+ℓ̃_2=ℓ̃_1p̃_1+ℓ̃_2p̃_2. Simultaneously, we can observe that c_ℓ̃_1+x^n-ℓ̃_1 =ℓ̃_1p̃_1+(n-ℓ̃_1)p̃_2. Thus, to prove that p̃_2 is a root of g_A(x), it is necessary to show that, for alli≠ℓ̃_1,ℓ̃_1+ℓ̃_2, c_i⊗ x^n-i≥p̃_1ℓ̃_1 + (n-ℓ̃_1) p̃_2 at x=p̃_2, namely,c_i-ip̃_2 ≥ℓ̃_1(p̃_1-p̃_2). Recalling here that p̃_1<p̃_2, we see that c_i-ip̃_2<0, namely,c_i/i<p̃_2 if c_i-ip̃_2<ℓ̃_1(p̃_1-p̃_2). This implies that c_i/i=p̃_1 for i≠ℓ̃_1,but c_i/i≠p̃_1 for i≠ℓ̃_1. Therefore, we recognize that p̃_2 is also a root of g_A(x). Along the same lines, we observe that, for m=2,3,…,k, only two terms:ℓ̃_1p̃_1⊗ℓ̃_2p̃_2⊗⋯⊗ℓ̃_m-1p̃_m-1⊗ x^n-ℓ̃_1-ℓ̃_2-⋯-ℓ̃_m-1 and ℓ̃_1p̃_1⊗ℓ̃_2p̃_2⊗⋯⊗ℓ̃_mp̃_m ⊗ x^n-ℓ̃_1-ℓ̃_2-⋯-ℓ̃_m become bothℓ̃_1p̃_1+⋯+ℓ̃_m-1p̃_m-1 +(n-ℓ̃_1-ℓ̃_2-⋯-ℓ̃_m-1) at x=p̃_m, and are the minimum among all terms in g_A(x). This suggests that x=p̃_m is a root of g_A(x). Next, we examine the linear factorization of g_A(x). We can update c_1,c_2,…,c_ℓ̃_1 asp̃_1,2p̃_1,…,ℓ̃_1p̃_1, respectively, using Algorithm <ref>. We then see that x^n,c_1⊗ x^n-1,…,c_ℓ̃_1⊗ x^n-ℓ̃_1are equal to each other at x=p̃_1. Similarly, Algorithm <ref> updates c_ℓ̃_1+1,c_ℓ̃_1+2,…, c_ℓ̃_1+ℓ̃_2 as ℓ̃_1p̃_1+p̃_2,ℓ̃_1p̃_1 +2p̃_2,…,ℓ̃_1p̃_1+ℓ̃_2p̃_2, then, it holds thatc_ℓ̃_1⊗ x^n-ℓ̃_1=c_ℓ̃_1+1⊗ x^n-ℓ̃_1-1=⋯ =c_ℓ̃_1+ℓ̃_2⊗ x^n-ℓ̃_1-ℓ̃_2 at x=p̃_2.Applying Algorithm <ref> repeatedly, we see that c_i+1-c_i={[ p̃_1, i=0,1,…,ℓ̃_1-1,; p̃_2,i=ℓ̃_1,ℓ̃_1+1,…,ℓ̃_1+ℓ̃_2-1,; ⋮; p̃_k, i=ℓ̃_1+ℓ̃_2+⋯+ℓ̃_k-1,…,ℓ̃_1+ℓ̃_2+⋯+ℓ̃_k-1+ℓ̃_k-1, ].where c_0=0.If r=n-(ℓ̃_1+ℓ̃_2+⋯+ℓ̃_k)=0,then it immediately follows from Proposition <ref> thatg_A(x)=(x⊕p̃_1)^ℓ̃_1⊗ (x⊕p̃_2)^ℓ̃_2⊗⋯⊗ (x⊕p̃_k)^ℓ̃_k.If r>0, then there is no extended circuits greater in length thann-r in the network 𝒩(A). This is because there exist r vertices that do not belong to any circuits. Thus, we can overwrite the coefficients c_n-r+1,c_n-r+2,…,c_n-1 andthe constant term c_n with 0. Therefore, we have g_A(x)=(x⊕p̃_1)^ℓ̃_1⊗(x⊕p̃_2)^ℓ̃_2⊗⋯⊗ (x⊕p̃_k)^ℓ̃_k⊗ x^r.For the min-plus matrix A∈ℝ_min^n× n, assume that all circuits are separatedin the network 𝒩(A) associated with A. If the characteristic polynomial g_A(x) can be factorized into linear factors of the formg_A(x)≡ (x⊕p̃_1)^ℓ̃_1⊗ (x⊕p̃_2)^ℓ̃_2⊗⋯⊗ (x⊕p̃_k)^ℓ̃_k⊗ x^r, then there exist homogeneous extended circuitsC̃(ℓ̃_1,p̃_1),C̃(ℓ̃_2,p̃_2), …,C̃(ℓ̃_k,p̃_k).Similarly to prove Theorem <ref>,assume that p̃_1<p̃_2<⋯<p̃_k. Here, we focus on the case r=0. 
Going over the proof of Theorem <ref>, we see that g_A(x) is equivalent toĝ_A(x) =x^n⊕ℓ̃_1p̃_1⊗ x^n-ℓ̃_1⊕(ℓ̃_1p̃_1⊗ℓ̃_2p̃_2)⊗x^n-ℓ̃_1-ℓ̃_2⊕⋯⊕ (ℓ̃_1p̃_1⊗⋯⊗ℓ̃_k-1p̃_k-1) ⊗ x^ℓ̃_k⊕ (ℓ̃_1p̃_1⊗⋯⊗ℓ̃_kp̃_k) The coefficients ℓ̃_1p̃_1,ℓ̃_1p̃_1⊗ℓ̃_2p̃_2, …,ℓ̃_1p̃_1⊗ℓ̃_2p̃_2⊗⋯⊗ℓ̃_k-1p̃_k-1 and the constant term ℓ̃_1p̃_1⊗ℓ̃_2p̃_2 ⊗⋯⊗ℓ̃_kp̃_k imply that the network 𝒩(A) includesthe homogeneous extended circuits C̃(ℓ̃_1,p̃_1), C̃(ℓ̃_2,p̃_2),…,C̃(ℓ̃_k,p̃_k). From Theorems <ref> and <ref>, we can conclude that the 2nd, 3rd, …kthminimum roots of the characteristic polynomial g_A(x) are equal to the average weightsp̃_2,p̃_3,…,p̃_k, respectively, if, and only if,the circuits C(ℓ_1,p̃_1), C(ℓ_2,p̃_2),…,C(ℓ_k,p̃_k) are all separated. § NEW CHARACTERISTIC POLYNOMIALS In this section, we propose new characteristic polynomials of min-plus matricesby imagining the analogue of the Faddeev-LeVerrier algorithm <cit.>, which is an algorithm for generating characteristic polynomials of matrices in linear algebra. In the Faddeev-LeVerrier algorithm, only the sums and products of scalars and matricesconstruct the characteristic polynomials of linear matrices. In fact, for a linear matrix A∈ℝ^n× n, the coefficients c_1,c_2,…,c_nappearing in the characteristic polynomial x^n+c_1x^n-1+⋯+c_n-1x+c_nis recursively given as c_1=-Tr(A), c_2=-1/2Tr(A^2+c_1A),⋮ c_n=-1/nTr(A^n+c_1A^n-1+⋯+c_n-1A).Thus, we can derive new characteristic polynomials of min-plus matrices based on this method. For the min-plus matrix A∈ℝ_min^n× n, the characteristic polynomial ĝ_A(x)ĝ_A(x)=x^n⊕ c_1⊗ x^n-1⊕⋯⊕ c_n-1⊗ x⊕ c_n,is recursively given as c_1=Tr(A), c_2=Tr(A^2⊕ c_1⊗ A),⋮ c_n=Tr(A^n⊕ c_1⊗ A^n-1⊕⋯⊕ c_n-1⊗ A),where A^k=A^k-1⊗ A for k=2,3,…,n. It is remarkable that the new characteristic polynomial ĝ_A(x) usually differs fromthe already known characteristic polynomial g_A(x). The following theorem gives the relationship between the minimum root ofthe characteristic polynomial ĝ_A(x) and the eigenvalue of the min-plus matrix A∈ℝ_min^n× n. For the min-plus matrix A∈ℝ_min^n× n,the minimum root of the characteristic polynomial ĝ_A(x) is equal to the eigenvalue of A.With the help of Proposition <ref>,we may prove that the minimum root, denoted p_min,is just the minimum of average weights of circuits in the network 𝒩(A). Regarding ĝ_A(x) as the function with respect to x,we recall that p_min coincides with the minimum of the x-coordinatesof breakpoints on the corresponding xy functional graph. It is worth noting here that the breakpoints with the minimum x-coordinate are the intersectionof two lines y=nx and y=c_i(n-i)x for some i. Thus, we derive p_min=c_i/i. It remains to proven that c_i/i becomes the minimum of the average weightsof circuits in the network 𝒩(A). Obviously, the coefficient c_1= Tr(A) is equal to the minimum of the weight sumsof circuits in the set 𝒞_1. Taking into account that the diagonals of A^2 and c_1⊗ A are the weight sumsof all extended circuits in 𝒞_2, we see that c_2=Tr(A^2⊕ c_1⊗ A)expresses the minimum of the weight sums of all extended circuits in 𝒞_2. Similarly, c_i signifies the minimum of the weight sumsof all extended circuits in 𝒞_i. Thus, p_min=min_ic_i/i is equal to the minimum of the average weightsof all extended circuits in 𝒞_i. Simultaneously, we see that the average weights of all extended circuitsin the network 𝒩(A) are equal to or larger than p_min. Moreover, if the minimum of the average weights of extended circuits in 𝒞_ℓ̃_iis p_min, then C̃(ℓ̃_i,p̃_i) is a simple circuit. 
This is because, if C̃(ℓ̃_i,p̃_i) is not a simple circuit, namely,C̃(ℓ̃_i,p̃_i)= {C(ℓ_1,p_1),C(ℓ_2,p_2),…,C(ℓ_k,p_j)},then the average weight min_i=1,2,…,j{ave(C(ℓ_j,p_j))} is smaller than p_min. Therefore, we conclude that p_min becomes the minimum of the average weightsof the circuits in the network 𝒩(A). Although two characteristic polynomials, g_A(x) and ĝ_A(x) are essentially distinct,they are equivalent to each other in a special case. If all circuits are simple and separated in the network 𝒩(A),then two characteristic polynomials g_A(x) and ĝ_A(x) satisfy g_A(x)≡ĝ_A(x). Corollary <ref> can be provided via the proofs of Theorems <ref>, <ref> and <ref>. We here give an example to illustrate the differencebetween two characteristic polynomials g_A(x) and ĝ_A(x). For the min-plus matrix A=[ ε ε 2 ε ε ε ε; 3 ε ε 2 ε ε ε; ε 1 3 9 1 ε ε; ε 6 ε ε ε 2 ε; ε ε ε ε ε 2 1; ε ε ε ε ε ε 1; ε ε ε ε ε ε ε ],we obtain two characteristic polynomials g_A(x)=x^7⊕ 3⊗ x^6⊕ 8⊗ x^5⊕ 6⊗ x^4⊕ 20⊗ x^3,ĝ_A(x)=x^7⊕ 3⊗ x^6⊕ 6⊗ x^5⊕ 6⊗ x^4⊕ 9⊗ x^3 ⊕ 12⊗ x^2⊕ 12⊗ x⊕ 15.Using Algorithm <ref>,we can factorize g_A(x) and ĝ_A(x) as g_A(x)≡ (x⊕ 2)^3⊗ (x⊕ 14)⊗ x^3,ĝ_A(x)≡ (x⊕ 2)^6⊗ (x⊕ 3).As shown in Theorems <ref>, <ref> and <ref>,the minimum roots of g_A(x) and ĝ_A(x) are certainly both the eigenvalue of A. However, since the 2nd minimum roots of g_A(x) and ĝ_A(x) are 14 and 3, respectively,they are not equal to each other. In actuality, limited to computing the eigenvalue of A, we can equivalently simplify ĝ_A(x) as ğ_A(x)=x^4⊕ 3⊗ x^3⊕ 6⊗ x^2⊕ 6⊗ x⊕ 9 ≡ (x⊕ 2)^3⊗ (x⊕ 3).In other words, it is not necessary to determine the coefficientsc_5=c_2⊕ 6=12,c_6=c_3⊕ 6=12,c_7=c_4⊕ 6=15 to compute the eigenvalue. Obviously, the linear factorization of ğ_A(x) is easier than that of g_A(x). New characteristic polynomials are thus expected to gain more advantage, as the matrix-size increases. § CONCLUDING REMARKS In this paper, we focused on all the roots of the already known characteristic polynomials of matrices,which are given from the links of vertices in networks on graphs, over min-plus algebra,and presented distinct new characteristic polynomials. First, we briefly explained scalar and matrix arithmetic over min-plus algebra,the eigenvalues of min-plus matrices and the minimum average weights of circuits in networks,and the linear factorizations of min-plus polynomials.We then described a preconditioning algorithm for performing effective linear factorizations.Of course, the eigenvalues of min-plus matrices are the minimum roots of the already known characteristic polynomials.In other words, the minimum roots coincide with the minimum average weights of circuits in the corresponding networks. Restricting the case to one where all circuits are completely separated in networks,we next showed that the 2nd, 3rd, … minimum roots of the already known characteristic polynomialsare just the 2nd, 3rd, … minimum average weights, respectively.Finally, we propose new characteristic polynomials whose minimum roots are also the eigenvalues of min-plus matrices,and showed that they are equivalent to the already known characteristic polynomialsif all circuits are completely separated in networks.We provided an example to verify the difference between the already known and proposed characteristic polynomials.The example simultaneously suggests that the proposed characteristic polynomials can be substantially reducedif the edge number is not large in the corresponding networks. 
Thus, the proposed characteristic polynomials may be, so to speak, minimal polynomials. Future work will focus on examining this aspect and designing reduction algorithms.

Acknowledgements
This work was partially supported by Grants-in-Aid for Scientific Research (C) No. 26400208 from the Japan Society for the Promotion of Science.

References
[AMO] R.K. Ahuja, T.L. Magnanti and J.B. Orlin, Network Flows, Prentice-Hall, 1993.
[BCOQ] F. Baccelli, G. Cohen, G.J. Olsder and J.P. Quadrat, Synchronization and Linearity, Wiley, 1992.
[D] J. Demmel, Applied Numerical Linear Algebra, SIAM, Philadelphia, 1997.
[Fad] D.K. Faddeev and V.N. Faddeeva, Computational Methods of Linear Algebra, W.H. Freeman & Co Ltd, 1963.
[GL] G.H. Golub and C.F. Van Loan, Matrix Computations, 4th edn., Johns Hopkins University Press, Baltimore, 2013.
[GM] M. Gondran and M. Minoux, Graphs, Dioids and Semirings, Springer-Verlag, 2008.
[MS] D. Maclagan and B. Sturmfels, Introduction to Tropical Geometry, AMS, 2015.
[R] H. Rutishauser, Lectures on Numerical Mathematics, Birkhäuser, Boston, 1990.
[Zim] U. Zimmermann, Linear and Combinatorial Optimization in Ordered Algebraic Structures, North-Holland Publishing Company, 1981. | http://arxiv.org/abs/1705.09513v1 | {
"authors": [
"Sennosuke Watanabe",
"Yuto Tozuka",
"Yoshihide Watanabe",
"Aito Yasuda",
"Masashi Iwasaki"
],
"categories": [
"math.CO"
],
"primary_category": "math.CO",
"published": "20170526102934",
"title": "Two characteristic polynomials corresponding to graphical networks over min-plus algebra"
} |
This manuscript corresponds to simmer version 3.6.4 and was typeset on December 05, 2017. For citations, please use the version accepted on the JSS when available.§ INTRODUCTION The complexity of many real-world systems involves unaffordable analytical models, and consequently, such systems are commonly studied by means of simulation. This problem-solving technique precedes the emergence of computers, but tool and technique got entangled as a result of their development. As defined by <cit.>, simulation “is the process of designing a model of a real system and conducting experiments with this model for the purpose either of understanding the behaviour of the system or of evaluating various strategies (within the limits imposed by a criterion or a set of criteria) for the operation of the system.”Different types of simulation apply depending on the nature of the system under consideration. A common model taxonomy classifies simulation problems along three main dimensions <cit.>: (i) deterministic vs. stochastic, (ii) static vs. dynamic (depending on whether they require a time component), and (iii) continuous vs. discrete (depending on how the system changes). For instance, Monte Carlo methods are well-known examples of static stochastic simulation techniques. On the other hand, Discrete-event simulation (DES) is a specific technique for modelling stochastic, dynamic and discretely evolving systems. As opposed to continuous simulation, which typically uses smoothly-evolving equational models, DES is characterised by sudden state changes at precise points of (simulated) time.Customers arriving at a bank, products being manipulated in a supply chain, or packets traversing a network are common examples of such systems. The discrete nature of a given system arises as soon as its behaviour can be described in terms of events, which is the most fundamental concept in DES. An event is an instantaneous occurrence that may change the state of the system, while, between events, all the state variables remain constant.The applications of DES are vast, including, but not limited to, areas such as manufacturing systems, construction engineering, project management, logistics, transportation systems, business processes, healthcare and telecommunications networks <cit.>. The simulation of such systems provides insights into the process' risk, efficiency and effectiveness. Also, by simulation of an alternative configuration, one can proactively estimate the effects of changes to the system. In turn, this allows one to get clear insights into the benefits of process redesign strategies (e.g., extra resources). A wide range of practical applications is prompted by this, such as analysing bottlenecks in customer services centres, optimising patient flows in hospitals, testing the robustness of a supply chain or predicting the performance of a new protocol or configuration of a telecommunications network.There are several world views, or programming styles, for DES <cit.>. In the activity-oriented approach, a model consists of sequences of activities, or operations, waiting to be executed depending on some conditions. The simulation clock advances in fixed time increments. At each step, the whole list of activities is scanned, and their conditions, verified. Despite its simplicity, the simulation performance is too sensitive to the election of such a time increment. Instead, the event-oriented approach completely bypasses this issue by maintaining a list of scheduled events ordered by time of occurrence. 
Then, the simulation just consists in jumping from event to event, sequentially executing the associated routines. Finally, the process-oriented approach refines the latter with the addition of interacting processes, whose activation is triggered by events. In this case, the modeller defines a set of processes, which correspond to entities or objects of the real system, and their life cycle.simmer <cit.> is a DES package for R which enables high-level process-oriented modelling, in line with other modern simulators. But in addition, it exploits the novel concept of trajectory: a common path in the simulation model for entities of the same type. In other words, a trajectory consist of a list of standardised actions which defines the life cycle of equivalent processes. This design pattern is flexible and simple to use, and takes advantage of the chaining/piping workflow introduced by the magrittr package <cit.>.Let us illustrate this with a simple example taken from <cit.>, Section 5.3.1: Consider a simple engineering job shop that consists of several identical machines. Each machine is able to process any job and there is a ready supply of jobs with no prospect of any shortages. Jobs are allocated to the first available machine. The time taken to complete a job is variable but is independent of the particular machine being used. The machine shop is staffed by operatives who have two tasks: enumi.* RESET machines between jobs if the cutting edges are still OK.* RETOOL those machines with cutting edges that are too worn to be reset. In addition, an operator may be AWAY while attending to personal needs. Figure <ref> shows the activity cycle diagram for the considered system. Circles (READY, STOPPED, OK, WAITING) represent states of the machines or the operatives respectively, while rectangles (RUNNING, RETOOL, RESET, AWAY) represent activities that take some (random) time to complete. Two kind of processes can be identified: shop jobs, which use machines and degrade them, and personal tasks, which take operatives AWAY for some time. There is a natural way of simulating this system with simmer which consist in considering machines and operatives as resources, and describing the life cycles of shop jobs and personal tasks as trajectories.First of all, let us instantiate a new simulation environment and define the completion time for the different activities as random draws from exponential distributions. Likewise, the interarrival times for jobs and tasks are defined (NEW_JOB, NEW_TASK), and we consider a probability of 0.2 for a machine to be worn after running a job (CHECK_JOB). R> library(simmer) R> R> set.seed(1234) R> R> (env <- simmer("Job Shop"))simmer environment: Job Shop | now: 0 | next:R> RUNNING <- function() rexp(1, 1) R> RETOOL <- function() rexp(1, 2) R> RESET <- function() rexp(1, 3) R> AWAY <- function() rexp(1, 1) R> CHECK_WORN <- function() runif(1) < 0.2 R> NEW_JOB <- function() rexp(1, 5) R> NEW_TASK <- function() rexp(1, 1) The trajectory of an incoming job starts by seizing a machine in READY state. It takes some random time for RUNNING it after which the machine's serviceability is checked. An operative and some random time to RETOOL the machine may be needed, and either way an operative must RESET it. Finally, the trajectory releases the machine, so that it is READY again. On the other hand, personal tasks just seize operatives for some time. 
R> job <- trajectory()R+ seize("machine")R+ timeout(RUNNING)R+ branch( R+ CHECK_WORN, continue = TRUE, R+ trajectory()R+ seize("operative")R+ timeout(RETOOL)R+ release("operative") R+ )R+ seize("operative")R+ timeout(RESET)R+ release("operative")R+ release("machine") R> R> task <- trajectory()R+ seize("operative")R+ timeout(AWAY)R+ release("operative") Once the processes' trajectories are defined, we append 10 identical machines and 5 operatives to the simulation environment, as well as two generators for jobs and tasks. R> envR+ add_resource("machine", 10)R+ add_resource("operative", 5)R+ add_generator("job", job, NEW_JOB)R+ add_generator("task", task, NEW_TASK)R+ run(until=1000)simmer environment: Job Shop | now: 1000 | next: 1000.09508921831Resource: machine | monitored: TRUE | server status: 3(10) | queue... Resource: operative | monitored: TRUE | server status: 2(5) | queue... Generator: job | monitored: 1 | n_generated: 5177 Generator: task | monitored: 1 | n_generated: 995The simulation has been run for 1000 units of time, and the simulator has monitored all the state changes and lifetimes of all processes, which enables any kind of analysis without any additional effort from the modeller's side. For instance, we may extract a history of the resource's state to analyse the average number of machines/operatives in use as well as the average number of jobs/tasks waiting for an assignment. R> aggregate(cbind(server, queue) resource, get_mon_resources(env), mean) resource server queue 1 machine 7.987438 1.0355590 2 operative 3.505732 0.4441298 The development of the simmer package started in the second half of 2014. The initial need for a DES framework for R came up in projects related to process optimisation in healthcare facilities. Most of these cases involved patients following a clear trajectory through a care process. This background is not unimportant, as it lead to the adoption and implementation of a trajectory concept at the very core of simmer's DES engine. This strong focus on clearly defined trajectories is somewhat innovative and, more importantly, very intuitive. Furthermore, this framework relies on a fast C++ simulation core to boost performance and make DES modelling in R not only effective, but also efficient.Over time, the simmer package has seen significant improvements and has been at the forefront of DES for R. Although it is the most generic DES framework, it is however not the only R package which delivers such functionality. For example, the SpaDES package <cit.> focuses on spatially explicit discrete models, and the queuecomputer package <cit.> implements an efficient method for simulating queues with arbitrary arrival and service times. Going beyond the R language, the direct competitors to simmer are SimPy <cit.> and SimJulia <cit.>, built for respectively the Python and Julia languages.§ THE SIMULATION CORE DESIGN The core of any modern discrete-event simulator comprises two main components: an event list, ordered by time of occurrence, and an event loop that extracts and executes events. In contrast to other interpreted languages such as Python, which is compiled by default to an intermediate byte-code, R code is purely parsed and evaluated at runtime[Some effort has been made in this line with the compiler package, introduced in R version 2.13.0 <cit.>, furthermore, a JIT-compiler was included in R version 3.4.0.]. 
This fact makes it a particularly slow language for DES, which consists of executing complex routines (pieces of code associated to the events) inside a loop while constantly allocating and deallocating objects (in the event queue).In fact, first attempts were made in pure R by these authors, and a minimal process-based implementation with R6 classes <cit.> proved to be unfeasible in terms of performance compared to similar approaches in pure Python. For this reason, it was decided to provide a robust and fast simulation core written in C++. The R API interfaces with this C++ core by leveraging the Rcpp package <cit.>, which has become one of the most popular ways of extending R packages with C or C++ code.The following subsections are devoted to describe the simulation core architecture. First, we establish the DES terminology used in the rest of the paper. Then, the architectural choices made are discussed, as well as the event queue and the simultaneity problem, an important topic that every DES framework has to deal with. §.§ Terminology This document uses some DES-specific terminology, e.g., event, state, entity, process or attribute. Such standard terms can be easily found in any textbook about DES (refer to <cit.>, for instance). There are, however, some simmer-specific terms, and some elements that require further explanation to understand the package architecture.Resource A passive entity, as it is commonly understood in standard DES terminology. However, simmer resources are conceived with queuing systems in mind, and therefore they comprise two internal self-managed parts:Server which, conceptually, represents the resource itself. It has a specified capacity and can be seized and released.Queue A priority queue of a certain size. Manager An active entity, i.e., a process, that has the ability to adjust properties of a resource (capacity and queue size) at run-time.Generator A process responsible for creating new arrivals with a given interarrival time pattern and inserting them into the simulation model.Arrival A process capable of interacting with resources or other entities of the simulation model. It may have some attributes and prioritisation values associated and, in general, a limited lifetime. Upon creation, every arrival is attached to a given trajectory.Trajectory An interlinkage of activities constituting a recipe for arrivals attached to it, i.e., an ordered set of actions that must be executed. The simulation model is ultimately represented by a set of trajectories.Activity The individual unit of action that allows arrivals to interact with resources and other entities, perform custom routines while spending time in the system, move back and forth through the trajectory dynamically, and much more.§.§ Architecture Extending an R package (or any other piece of software written in any interpreted language) with compiled code poses an important trade-off between performance and flexibility: placing too much functionality into the compiled part produces gains in performance, but degrades modelling capabilities, and vice versa. The following lines are devoted to discuss how this trade-off is resolved in simmer.Figure <ref> sketches a UML (Unified Modelling Language) description of the architecture, which constitutes a process-based design, as in many modern DES frameworks. We draw the attention now to the C++ classes (depicted in white).The first main component is the Simulator class. 
It comprises the event loop and the event queue, which will be addressed in the next subsection. The Simulator provides methods for scheduling and unscheduling events. Moreover, it is responsible for managing simulation-wide entities (e.g., resources and generators) and facilities (e.g., signaling between processes and batches) through diverse C++ unordered maps:* Maps of resources and processes (generators, arrivals and managers) by name.* A map of pending events, which allows to unschedule a given process.* Maps of signals subscribed by arrivals and handlers defined for different signals.* Maps for forming batches of arrivals, named and unnamed. This class also holds global attributes and monitoring information. Thus, monitoring counters, which are derived from the Monitor class, are centralised, and they register every change of state produced during the simulation time. There are five types of built-in changes of state that are recorded by calling Simulator's record_*() methods:* An arrival is accepted into a resource (served or enqueued). The resource notifies about the new status of its internal counters.* An arrival leaves a resource. The resource notifies the new status of its internal counters, and the arrival notifies start, end and activity times in that particular resource.* A resource is modified during runtime (i.e., a change in the capacity or queue size). The resource notifies the new status of its internal counters.* An arrival modifies an attribute, one of its own or a global one. The arrival notifies the new value.* An arrival leaves its trajectory by exhausting the activities associated (considered as finished) or because of another reason (non-finished, e.g., it is rejected from a resource). The arrival notifies global start, end and activity times. As mentioned in the previous subsection, there are two types of entities: passive ones (Resource) and active ones (processes Generator, Arrival and Manager). Generators create new arrivals, and the latter are the main actors of the simulation model. Managers can be used for dynamically changing the properties of a resource (capacity and queue size). All processes share a run() method that is invoked by the event loop each time a new event is extracted from the event list.There is a fourth kind of process not shown in Figure <ref>, called Task. It is a generic process that executes a given function once, and it is used by arrivals, resources, activities and the simulator itself to trigger dynamic actions or split up events. A Task is for instance used under the hood to trigger reneging or to broadcast signals after some delay.The last main component, completely isolated from the Simulator, is the Activity class. This abstract class represents a clonable object, chainable in a double-linked list to form trajectories. Most of the activities provided by simmer derive from it. Fork is another abstract class (not depicted in Figure <ref>) which is derived from Activity. Any activity supporting the definition of sub-trajectories must derive from this one instead, such as Seize, Branch or Clone. All the activities must implement the virtual methods print() and run().Finally, it is worth mentioning the couple of blue circles depicted in Figure <ref>. 
They represent the points of presence of R in the C++ core, i.e., where the core interfaces back with R to execute custom user-defined code.In summary, the C++ core is responsible for all the heavy tasks, i.e., managing the event loop, the event list, generic resources and processes, collecting all the statistics, and so on. And still, it provides enough flexibility to the user for modelling the interarrival times from R and execute any custom user-defined code through the activities. §.§ The event queue The event queue is the most fundamental part of any DES software. It is responsible for maintaining a list of events to be executed by the event loop in an ordered fashion by time of occurrence. This last requirement establishes the need for a data structure with a low access, search, insertion and deletion complexity. A binary tree is a well-known data structure that satisfies these properties, and it is commonly used for this purpose. Unfortunately, binary trees, or equivalent structures, cannot be efficiently implemented without pointers, and this is the main reason why pure R is very inefficient for DES.In simmer, the event queue is defined as a C++ multiset, a kind of associative container implemented as a balanced tree internally. Apart from the efficiency, it was selected to support event unscheduling through iterators. Each event holds a pointer to a process, which will be retrieved and run in the event loop. Events are inserted in the event queue ordered by 1) time of occurrence and 2) priority. This secondary order criterion is devoted to solve a common issue for DES software called the simultaneity problem.§.§.§ The simultaneity problem As noted by <cit.>, <cit.>, there are many circumstances from which simultaneous events (i.e., events with the same timestamp) may arise. How they are handled by a DES framework has critical implications on reproducibility and simulation correctness.As an example of the implications, let us consider an arrival seizing a resource at time t_i-1, which has capacity=1 and queue_size=0. At time t_i, two simultaneous events happen: 1) the resource is released, and 2) another arrival tries to seize the resource. It is indisputable what should happen in this situation: the new arrival seizes the resource while the other continues its path. But note that if 2) is executed before 1), the new arrival is rejected (!). Therefore, it is obvious that release events must always be executed before seize events.If we consider a dynamically managed resource (i.e., its capacity changes over time) and, instead of the event 1) in the previous example, the manager increases the capacity of the resource, we are in the very same situation. Again, it is obvious that resource managers must be executed before seize attempts.A further analysis reveals that, in order to preserve correctness and prevent a simulation crash, it is necessary to break down resource releases in two parts with different priorities: the release in itself and a post-release event that tries to serve another arrival from the queue. Thus, every resource manager must be executed after releases and before post-releases. 
This and other issues are solved with a priority system (see Table <ref>) embedded in the event list implementation that provides a deterministic and consistent execution of simultaneous events.

* Modify a generator (e.g., activate or deactivate it)
* Resource release
* Manager action (e.g., resource capacity change)
* Resource post-release (i.e., serve from the queue)
* Generate new arrivals
* …
* General activities
* Other tasks (e.g., a timer for reneging)

Table: Priority system (in decreasing order) and events associated.

§ THE SIMMER API The R API exposed by simmer comprises two main elements: the simmer environment (or simulation environment) and the trajectory object, which are depicted in Figure <ref> (blue classes). As we will see throughout this section, simulating with simmer simply consists of building a simulation environment and one or more trajectories. For this purpose, the API is composed of verbs and actions that can be chained together. For ease of use, these have been made fully compatible with the pipe operator (%>%) from the magrittr package.

§.§ The trajectory object A trajectory can be defined as a recipe and consists of an ordered set of activities. The idea behind this concept is very similar to the idea behind dplyr for data manipulation <cit.>. To borrow the words of H. Wickham, “by constraining your options, it simplifies how you can think about” discrete-event modelling. Activities are verbs that correspond to common functional DES blocks.

The trajectory() method instantiates the object, and activities can be appended using the %>% operator:

R> traj0 <- trajectory() %>%
R+   log_("Entering the trajectory") %>%
R+   timeout(10) %>%
R+   log_("Leaving the trajectory")

The trajectory above illustrates the two most basic activities available: displaying a message (log_()) and spending some time in the system (timeout()). An arrival attached to this trajectory will execute the activities in the given order, i.e., it will display “Entering the trajectory”, then it will spend 10 units of (simulated) time, and finally it will display “Leaving the trajectory”.

The example uses fixed parameters: a string and a numeric value respectively. However, at least the main parameter for all activities (this is specified in the documentation) can also be what we will call a dynamical parameter, i.e., a function. Thus, although not quite useful yet, the following is also valid:

R> traj1 <- trajectory() %>%
R+   log_(function() "Entering the trajectory") %>%
R+   timeout(function() 10) %>%
R+   log_(function() "Leaving the trajectory")

Also, trajectories can be split apart, joined together and modified:

R> traj2 <- join(traj0[c(1, 3)], traj0[2])
R> traj2[1] <- traj2[3]
R> traj2

trajectory: anonymous, 3 activities
Activity: Timeout | delay: 10
Activity: Log | message
Activity: Timeout | delay: 10

There are many activities available. We will briefly review them by categorising them into different topics.

§.§.§ Arrival properties Arrivals are able to store attributes and modify these using set_attribute(). Attributes consist of pairs (key, value) (character and numeric respectively) which by default are set per arrival unless they are defined as global. As we said before, all activities support at least one dynamical parameter. In the case of set_attribute(), this is the value parameter.

Attributes can be retrieved in any R function by calling get_attribute(), whose first argument must be a simmer object.
For instance, the following trajectory prints 81: R> env <- simmer() R> R> traj <- trajectory()R+ set_attribute("weight", 80)R+ set_attribute("weight", function() get_attribute(env, "weight") + 1)R+ log_(function() paste0("My weight is ", get_attribute(env, "weight"))) Arrivals also hold a set of three prioritisation values for accessing resources:priority A higher value equals higher priority. The default value is the minimum priority, which is 0.preemptible If a preemptive resource is seized, this parameter establishes the minimum incoming priority that can preempt this arrival (the activity is interrupted and another arrival with a priority greater than preemptible gains the resource). In any case, preemptible must be equal or greater than priority, and thus only higher priority arrivals can trigger preemption.restart Whether the ongoing activity must be restarted after being preempted. These three values are established for all the arrivals created by a particular generator, but they can also be dynamically changed on a per-arrival basis using the set_prioritization() and get_prioritization() activities, in the same way as attributes.§.§.§ Interaction with resources The two main activities for interacting with resources are seize() and release(). In their most basic usage, they seize/release a given amount of a resource specified by name. It is also possible to change the properties of the resource with set_capacity() and set_queue_size().The seize() activity is special in the sense that the outcome depends on the state of the resource. The arrival may successfully seize the resource and continue its path, but it may also be enqueued or rejected and dropped from the trajectory. To handle these special cases with total flexibility, seize() supports the specification of two optional sub-trajectories: post.seize, which is followed after a successful seize, and reject, followed if the arrival is rejected. As in every activity supporting the definition of sub-trajectories, there is a boolean parameter called continue. For each sub-trajectory, it controls whether arrivals should continue to the activity following the seize() in the main trajectory after executing the sub-trajectory. R> patient <- trajectory()R+ log_("arriving...")R+ seize( R+ "doctor", 1, continue = c(TRUE, FALSE), R+ post.seize = trajectory("accepted patient")R+ log_("doctor seized"), R+ reject = trajectory("rejected patient")R+ log_("rejected!")R+ seize("nurse", 1)R+ log_("nurse seized")R+ timeout(2)R+ release("nurse", 1)R+ log_("nurse released") R+ )R+ timeout(5)R+ release("doctor", 1)R+ log_("doctor released") R> R> env <- simmer()R+ add_resource("doctor", capacity = 1, queue_size = 0)R+ add_resource("nurse", capacity = 10, queue_size = 0)R+ add_generator("patient", patient, at(0, 1))R+ run()0: patient0: arriving... 0: patient0: doctor seized 1: patient1: arriving... 1: patient1: rejected! 1: patient1: nurse seized 3: patient1: nurse released 5: patient0: doctor released The value supplied to all these methods may be a dynamical parameter. On the other hand, the resource name must be fixed. There is a special mechanism to select resources dynamically: the select() activity. It marks a resource as selected for an arrival executing this activity given a set of resources and a policy. 
There are several policies implemented internally that can be accessed by name:shortest-queue The resource with the shortest queue is selected.round-robin Resources will be selected in a cyclical nature.first-available The first available resource is selected.random A resource is randomly selected. Its resources parameter is allowed to be dynamical, and there is also the possibility of defining custom policies. Once a resource is selected, there are special versions of the aforementioned activities for interacting with resources without specifying its name, such as seize_selected(), set_capacity_selected() and so on.§.§.§ Interaction with generators There are four activities specifically intended to modify generators. An arrival may activate() or deactivate() a generator, but also modify with set_trajectory() the trajectory to which it attaches the arrivals created, or set a new interarrival distribution with set_distribution(). For dynamically selecting a generator, the parameter that specifies the generator name in all these methods can be dynamical. R> traj <- trajectory()R+ deactivate("dummy")R+ timeout(1)R+ activate("dummy") R> R> simmer()R+ add_generator("dummy", traj, function() 1)R+ run(10)R+ get_mon_arrivals()name start_time end_time activity_time finished replication 1 dummy012 1 TRUE 1 2 dummy134 1 TRUE 1 3 dummy256 1 TRUE 1 4 dummy378 1 TRUE 1 §.§.§ Branching A branch is a point in a trajectory in which one or more sub-trajectories may be followed. Two types of branching are supported in simmer. The branch() activity places the arrival in one of the sub-trajectories depending on some condition evaluated in a dynamical parameter called option. It is the equivalent of an if/else in programming, i.e., if the value of option is i, the i-th sub-trajectory will be executed. On the other hand, the clone() activity is a parallel branch. It does not take any option, but replicates the arrival n-1 times and places each one of them into the n sub-trajectories supplied. R> env <- simmer() R> R> traj <- trajectory()R+ branch( R+ option = function() round(now(env)), continue = c(FALSE, TRUE), R+ trajectory()R+ trajectory()R+ )R+ clone( R+ n = 2, R+ trajectory()R+ trajectory()R+ )R+ synchronize(wait = TRUE)R+ log_("out") R> R> envR+ add_generator("dummy", traj, at(1, 2))R+ run() 1: dummy0: branch 1 2: dummy1: branch 2 2: dummy1: clone 0 2: dummy1: clone 1 2: dummy1: out Note that clone() is the only exception among all activities supporting sub-trajectories that does not accept a continue parameter. By default, all the clones continue in the main trajectory after this activity. To remove all of them except for one, the synchronize() activity may be used.§.§.§ Loops There is a mechanism, rollback(), for going back in a trajectory and thus executing loops over a number of activities. This activity causes the arrival to step back a given amount of activities (that can be dynamical) a number of times. If a check function returning a boolean is supplied, the times parameter is ignored and the arrival determines whether it must step back each time it hits the rollback. R> hello <- trajectory()R+ log_("Hello!")R+ timeout(1)R+ rollback(amount = 2, times = 2) R> R> simmer()R+ add_generator("hello_sayer", hello, at(0))R+ run() 0: hello_sayer0: Hello! 1: hello_sayer0: Hello! 2: hello_sayer0: Hello! 
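As a complement, the check-based variant of rollback() can be sketched as follows. This is a minimal illustration of our own (the attribute name count and the parameter values are arbitrary); it uses an attribute as a loop counter, following the get_attribute() pattern shown earlier, so that the arrival steps back until the counter reaches three:

R> env <- simmer()
R>
R> hello_check <- trajectory() %>%
R+   set_attribute("count", 0) %>%
R+   log_("Hello!") %>%
R+   timeout(1) %>%
R+   set_attribute("count", function() get_attribute(env, "count") + 1) %>%
R+   rollback(amount = 3, check = function() get_attribute(env, "count") < 3)
R>
R> env %>%
R+   add_generator("hello_sayer", hello_check, at(0)) %>%
R+   run()

Here rollback() steps back three activities (to the log_() call) for as long as the check function returns TRUE, so the greeting should be printed three times, as in the times-based example above.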
§.§.§ Batching Batching consists of collecting a number of arrivals before they can continue their path in the trajectory as a unit[A concrete example of this is the case where a number of people (the arrivals) together take, or rather seize, an elevator (the resource).]. This means that if, for instance, 10 arrivals in a batch try to seize a unit of a certain resource, only one unit may be seized, not 10. A batch may be splitted with separate(), unless it is marked as permanent. R> roller <- trajectory()R+ batch(10, timeout = 5, permanent = FALSE)R+ seize("rollercoaster", 1)R+ timeout(5)R+ release("rollercoaster", 1)R+ separate() By default, all the arrivals reaching a batch are joined into it, and batches wait until the specified number of arrivals are collected. Nonetheless, arrivals can avoid joining the batch under any constraint if an optional function returning a boolean, rule, is supplied. Also, a batch may be triggered before collecting a given amount of arrivals if some timeout is specified. Note that batches are shared only by arrivals directly attached to the same trajectory. Whenever a globally shared batch is needed, a common name must be specified.§.§.§ Asynchronous programming There are a number of methods enabling asynchronous events. The send() activity broadcasts one or more signals to all the arrivals subscribed to them. Signals can be triggered immediately or after some delay. In this case, both parameters, signals and delay, can be dynamical. Arrivals are able to block and wait() until a certain signal is received.Arrivals can subscribe to signals and (optionally) assign a handler using the trap() activity. Upon a signal reception, the arrival stops the current activity and executes the handler[The handler parameter accepts a trajectory object. Once the handler gets called, it will route the arrival to this sub-trajectory.] if provided. Then, the execution returns to the activity following the point of interruption. Nonetheless, trapped signals are ignored when the arrival is waiting in a resource's queue. The same applies inside a batch: all the signals subscribed before entering the batch are ignored. Finally, the untrap() activity can be used to unsubscribe from signals. R> t_blocked <- trajectory()R+ trap( R+ "you shall pass", R+ handler = trajectory()R+ log_("got a signal!") R+ )R+ log_("waiting...")R+ wait()R+ log_("continuing!") R> R> t_signal <- trajectory()R+ log_("you shall pass")R+ send("you shall pass") R> R> simmer()R+ add_generator("blocked", t_blocked, at(0))R+ add_generator("signaler", t_signal, at(5))R+ run() 0: blocked0: waiting... 5: signaler0: you shall pass 5: blocked0: got a signal! 5: blocked0: continuing! By default, signal handlers may be interrupted as well by other signals, meaning that a handler may keep restarting if there are frequent enough signals being broadcasted. If an uninterruptible handler is needed, this can be achieved by setting the flag interruptible to FALSE in trap().§.§.§ Reneging Besides being rejected while trying to seize a resource, arrivals are also able to leave the trajectory at any moment, synchronously or asynchronously. Namely, reneging means that an arrival abandons the trajectory at a given moment. The most simple activity enabling this is leave, which immediately triggers the action given some probability. Furthermore, renege_in() and renege_if() trigger reneging asynchronously after some timeout t or if a signal is received respectively, unless the action is aborted with renege_abort(). 
Both renege_in() and renege_if() accept an optional sub-trajectory, out, that is executed right before leaving. R> bank <- trajectory()R+ log_("Here I am")R+ renege_in( R+ 5, R+ out = trajectory()R+ log_("Lost my patience. Reneging...") R+ )R+ seize("clerk", 1)R+ renege_abort()R+ log_("I'm being attended")R+ timeout(10)R+ release("clerk", 1)R+ log_("Finished") R> R> simmer()R+ add_resource("clerk", 1)R+ add_generator("customer", bank, at(0, 1))R+ run() 0: customer0: Here I am 0: customer0: I'm being attended 1: customer1: Here I am 6: customer1: Lost my patience. Reneging... 10: customer0: Finished§.§ The simulation environment The simulation environment manages resources and generators, and controls the simulation execution. The simmer() method instantiates the object, after which resources and generators can be appended using the %>% operator: R> env <- simmer() R> R> envR+ add_resource("res_name", 1)R+ add_generator("arrival", traj0, function() 25)simmer environment: anonymous | now: 0 | next: 0Resource: res_name | monitored: TRUE | server status: 0(1) | queue... Generator: arrival | monitored: 1 | n_generated: 0Then, the simulation can be executed, or run(), until a stop time: R> envR+ run(until=40)25: arrival0: Entering the trajectory 35: arrival0: Leaving the trajectorysimmer environment: anonymous | now: 40 | next: 50Resource: res_name | monitored: TRUE | server status: 0(1) | queue... Generator: arrival | monitored: 1 | n_generated: 2There are a number of methods for extracting information, such as the simulation time (now()), future scheduled events (peek()), and getters for obtaining resources' and generators' parameters (capacity, queue size, server count and queue count; number of arrivals generated so far). There are also several setters available for resources and generators (capacity, queue size; trajectory, distribution).A simmer object can be reset() and re-run. However, there is a special method, wrap(), intended to extract all the information from the C++ object encapsulated into a simmer environment and to deallocate that object. Thus, most of the getters work also when applied to wrapped environments, but such an object cannot be reset or re-run anymore.§.§.§ Resources A simmer resource, as stated in Section <ref>, comprises two internal self-managed parts: a server and a priority queue. Three main parameters define a resource: name of the resource, capacity of the server and queue_size (0 means no queue). Resources are monitored and non-preemptive by default. Preemption means that if a high priority arrival becomes eligible for processing, the resource will temporarily stop the processing of one (or more) of the lower priority arrivals being served. For preemptive resources, the preempt_order defines which arrival should be stopped first if there are many lower priority arrivals, and it assumes a first-in-first-out (FIFO) policy by default. Any preempted arrival is enqueued in a dedicated queue that has a higher priority over the main one (i.e., it is served first). The queue_size_strict parameter controls whether this dedicated queue must be taken into account for the queue size limit, if any. If this parameter enforces the limit, then rejection may occur in the main queue.§.§.§ Generators Three main parameters define a generator: a name_prefix for each generated arrival, a trajectory to attach them to and an interarrival distribution. Parameters priority, preemptible and restart have been described in Section <ref>. 
The monitoring flag accepts several levels in this case:

* Level 0: no monitoring enabled.
* Level 1: arrival monitoring.
* Level 2: level 1 + attribute monitoring.

The interarrival distribution must return one or more interarrival times for each call. Internally, generators create as many arrivals as values returned by this function. They do so with zero delay and re-schedule themselves with a delay equal to the sum of the values obtained. Whenever a negative interarrival value is obtained, the generator stops.

§.§ Monitoring and data retrieval

There are three methods for obtaining monitored data (if any) about arrivals, resources and attributes. They can be applied to a single simulation environment or to a list of environments, and the returned object is always a data frame, even if no data was found. Each processed simulation environment is treated as a different replication, and a numeric column named replication is added to every returned data frame with environment indexes as values.

get_mon_arrivals() Returns timing information per arrival: name of the arrival, start_time, end_time, activity_time (time not spent in resource queues) and a flag, finished, that indicates whether the arrival exhausted its activities (or was rejected). By default, this information refers to the arrivals' entire lifetime, but it may be obtained on a per-resource basis by specifying per_resource=TRUE.

get_mon_resources() Returns state changes in resources: resource name, time instant of the event that triggered the state change, server count, queue count, capacity, queue_size, system count (server + queue) and system limit (capacity + queue_size).

get_mon_attributes() Returns state changes in attributes: name of the attribute, time instant of the event that triggered the state change, name of the key that identifies the attribute and its value.

§ MODELLING WITH SIMMER

The following subsections aim to provide some basic modelling examples. The topics addressed are queuing systems, replication, parallelisation and some best practices. We invite the reader to learn about a broader selection of activities and modelling techniques available in the package vignettes, which cover the use of attributes, loops, batching, branching, shared events, reneging and advanced uses of resources, among others.

§.§ Queuing systems

The concept of trajectory developed in simmer emerges as a natural way to simulate a wide range of problems related to Continuous-Time Markov Chains (CTMC), and more specifically to the so-called birth-death processes and queuing systems. Indeed, simmer not only provides very flexible resources (with or without queue), branches, delays and arrival generators, but they are bundled in a very comprehensive framework of verbs that can be chained with the pipe operator. Let us explore the expressiveness of a simmer trajectory using a traditional queuing example: the M/M/1. The package vignettes include further examples on M/M/c/k systems, queueing networks and CTMC models.

In Kendall's notation <cit.>, an M/M/1 system has exponential arrivals (the first M), a single server (the 1) with exponential service time (the second M) and an infinite queue (implicit M/M/1/∞). For instance, people arriving at an ATM at rate λ, waiting their turn in the street and withdrawing money at rate μ. These are the basic parameters of the system, whenever ρ < 1:

ρ = λ/μ ≡ server utilization,
N = ρ/(1-ρ) ≡ average number of customers in the system,
T = N/λ ≡ average time in the system (Little's law).

If ρ ≥ 1, it means that the system is unstable: there are more arrivals than the server is capable of handling and the queue will grow indefinitely.
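For the concrete rates used in the following example, λ = 2 and μ = 4, these formulas yield ρ = 0.5, N = 1 and T = 0.5. As an illustrative sketch (not part of the original code), the theoretical values can be computed directly in R and later compared against the simulation output:

R> lambda <- 2
R> mu <- 4
R> rho <- lambda/mu        # server utilization: 0.5
R> N <- rho/(1 - rho)      # average number of customers in the system: 1
R> T_sys <- N/lambda       # average time in the system: 0.5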
The simulation of an M/M/1 system is quite simple using simmer: R> library(simmer) R> R> set.seed(1234) R> R> lambda <- 2 R> mu <- 4 R> rho <- lambda/mu R> R> mm1.traj <- trajectory()R+ seize("mm1.resource", amount=1)R+ timeout(function() rexp(1, mu))R+ release("mm1.resource", amount=1) R> R> mm1.env <- simmer()R+ add_resource("mm1.resource", capacity=1, queue_size=Inf)R+ add_generator("arrival", mm1.traj, function() rexp(1, lambda))R+ run(until=2000) After the parameter setup, the first code block defines the trajectory: each arrival will seize the resource, wait some exponential random time (service time) and release the resource. The second code block instantiates the simulation environment, creates the resource, attaches an exponential generator to the trajectory and runs the simulation for 2000 units of time. Note that trajectories can be defined independently of the simulation environment, but it is recommended to instantiate the latter in the first place, so that trajectories are able to extract information from it (e.g., the simulation time).As a next step, we could extract the monitoring information and perform some analyses. The extension package simmer.plot <cit.> provides convenience plotting methods to, for instance, quickly visualise the usage of a resource over time. Figure <ref> gives a glimpse of this simulation using this package. In particular, it shows that the average number of customers in the system converges to the theoretical value given by Equation (<ref>). §.§ Replication and parallelisation Typically, running a certain simulation only once is useless. In general, we will be interested in replicating the model execution many times, maybe with different initial conditions, and then perform some statistical analysis over the output. This can be easily achieved using standard R tools, e.g., lapply() or similar functions.Additionally, we can leverage the parallelised version of lapply(), mclapply(), provided by the parallel package, to speed up this process. Unfortunately, parallelisation has the shortcoming that we lose the underlying C++ objects when each thread finishes. To avoid losing the monitored data, the wrap() method can be used to extract and wrap these data into a pure R object before the C++ object is garbage-collected.The following example uses mclapply() and wrap() to perform 100 replicas of the M/M/1 simulation from the previous section (note that the trajectory is not redefined): R> library(simmer) R> library(parallel) R> R> set.seed(1234) R> R> mm1.envs <- mclapply(1:100, function(i)R+ simmer()R+ add_resource("mm1.resource", capacity=1, queue_size=Inf)R+ add_generator("arrival", mm1.traj, function() rexp(100, lambda))R+ run(until=1000/lambda)R+ wrap() R+ , mc.set.seed=FALSE) With all these replicas, we could, for instance, perform a t-test over N, the average number of customers in the system: R> mm1.data <- R+ get_mon_arrivals(mm1.envs)R+ dplyr::group_by(replication)R+ dplyr::summarise(mean = mean(end_time - start_time)) R> R> t.test(mm1.data[["mean"]]) One Sample t-testdata:mm1.data[["mean"]] t = 94.883, df = 99, p-value < 2.2e-16 alternative hypothesis: true mean is not equal to 0 95 percent confidence interval:0.4925143 0.5135535 sample estimates: mean of x 0.5030339§.§ Best practices DES modelling can be done in an event-by-event basis, but this approach is fairly tedious and mostly unpractical. 
Instead, modern process-oriented approaches commonly relate to the identification of resources and processes in a given problem, and the interactions between them. The simmer package internally follows this paradigm and exposes generic resources and processes (arrivals, in simmer terminology), so that the user can implement all the interactions as action sequences (trajectories).There are usually multiple valid ways of mapping the identified resources and processes into the elements exposed by the simmer API. For example, let us suppose that we would like to model an alarm clock beeping every second. In this case, the beep may be identified as a process, so that we have different beeps (multiple arrivals) entering a beep trajectory once per second: R> beep <- trajectory()R+ log_("beeeep!") R> R> env <- simmer()R+ add_generator("beep", beep, function() 1)R+ run(2.5)1: beep0: beeeep! 2: beep1: beeeep! But instead, identifying the alarm clock as the process is equally valid, and then we have a single alarm (single arrival) producing all the beeps in a loop: R> alarm <- trajectory()R+ timeout(1)R+ log_("beeeep!")R+ rollback(2) R> R> env <- simmer()R+ add_generator("alarm", alarm, at(0))R+ run(2.5)1: alarm0: beeeep! 2: alarm0: beeeep! These are two common design patterns in simmer for which the outcome is the same, although there are subtle differences that depend on the problem being considered and the monitoring requirements. Furthermore, as a model becomes more complex and detailed, the resulting mapping and syntax may become more artificious. These issues are shared in different ways by other frameworks as well, such as SimPy, and arise due to their generic nature.Furthermore, the piping mechanism used in the simmer API may invite the user to produce large monolithic trajectories. However, it should be noted that it is usually better to break them down into small manageable pieces. For instance, the following example parametrises the access to a resource, where G refers to arbitrary service times, and n servers are seized. Then, it is used to instantiate the trajectory shown in the former M/M/1 example: R> xgn <- function(resource, G, n) R+ trajectory()R+ seize(resource, n)R+ timeout(G)R+ release(resource, n) R> R> (mm1.traj <- xgn("mm1.resource", function() rexp(1, mu), 1))trajectory: anonymous, 3 activitiesActivity: Seize| resource: mm1.resource, amount: 1 Activity: Timeout| delay: 0x556861e81830 Activity: Release| resource: mm1.resource, amount: 1Standard R tools (lapply() and the like) may also be used to generate large lists of trajectories with some variations. These small pieces can be concatenated together into longer trajectories using join(), but at the same time, they allow for multiple points of attachment of arrivals.During a simulation, trajectories can interact with the simulation environment in order to extract or modify parameters of interest such as the current simulation time, attributes, status of resources (get the number of arrivals in a resource, get or set resources' capacity or queue size), or status of generators (get the number of generated arrivals, set generators' attached trajectory or distribution). The only requirement is that the simulation object must be defined in the same R environment (or a parent one) before the simulation is started. Effectively, it is enough to detach the run() method from the instantiation (simmer()), namely, they should not be called in the same pipe. 
But, for the sake of consistency, it is a good coding practice to instantiate the simulation object always in the first place as follows: R> set.seed(1234) R> env <- simmer() R> R> traj <- trajectory()R+ log_(function() paste0("Current simulation time: ", now(env))) R> R> env <- envR+ add_generator("dummy", traj, at(rexp(1, 1)))R+ run()2.50176: dummy0: Current simulation time: 2.50175860496223 § PERFORMANCE EVALUATION This section investigates the performance of simmer with the aim of assessing its usability as a general-purpose DES framework. A first subsection is devoted to measuring the simulation time of a simple model relative to SimPy and SimJulia. The reader may find interesting to compare the expressiveness of each framework. Last but not least, the final subsection explores the cost of calling R from C++, revealing the existent trade-off, inherent to the design of this package, between performance and model complexity.All the subsequent tests were performed under Fedora Linux 25 running on an Intel Core2 Quad CPU Q8400, with R 3.3.3, Python 2.7.13, SimPy 3.0.9, Julia 0.5.1 and SimJulia 0.3.14 installed from the default repositories. Absolute execution times presented here are specific to this platform and configuration, and thus they should not be taken as representative for any other system. Instead, the relative performance should be approximately constant across different systems. §.§ Comparison with similar frameworks A significant effort has been put into the design of simmer in order to make it performant enough to run general and relatively large simulation models in a reasonable amount of time. In this regard, a relevant comparison can be made against other general-purpose DES frameworks such as SimPy and SimJulia. To this effect, we retake the M/M/1 example from Section <ref>, which can be bundled into the following test: R> library(simmer) R> R> test_mm1_simmer <- function(n, m, mon=FALSE)R> mm1 <- trajectory()R> seize("server", 1)R> timeout(function() rexp(1, 1.1))R> release("server", 1) R> R> env <- simmer()R> add_resource("server", 1, mon=mon)R> add_generator("customer", mm1, function() rexp(m, 1), mon=mon)R> run(until=n) R>With the selected arrival rate, λ=1, this test simulates an average of n arrivals entering a nearly saturated system (ρ=1/1.1). Given that simmer generators are able to create arrivals in batches (i.e., more than one arrival for each function call) for improved performance, the parameter m controls the size of the batch. Finally, the mon flag enables or disables monitoring.Let us build now the equivalent model using SimPy, with base Python for random number generation. We prepare the Python benchmark from R using the rPython package <cit.> as follows: R> rPython::python.exec(" R> import simpy, random, time R> R> def test_mm1(n): R> def exp_source(env, lambd, server, mu): R> while True: R> dt = random.expovariate(lambd) R> yield env.timeout(dt) R> env.process(customer(env, server, mu)) R> R> def customer(env, server, mu): R> with server.request() as req: R> yield req R> dt = random.expovariate(mu) R> yield env.timeout(dt) R> R> env = simpy.Environment() R> server = simpy.Resource(env, capacity=1) R> env.process(exp_source(env, 1, server, 1.1)) R> env.run(until=n) R> R> def benchmark(n, times): R> results = [] R> for i in range(0, times): R> start = time.time() R> test_mm1(n) R> results.append(time.time() - start) R> return results R> ") Equivalently, this can be done for Julia and SimJulia using the rjulia package <cit.>. 
Once more, n controls the number of arrivals simulated on average: R> rjulia::julia_init() R> rjulia::julia_void_eval(" R> using SimJulia, Distributions R> R> function test_mm1(n::Float64) R> function exp_source(env::Environment, lambd::Float64, R> server::Resource, mu::Float64) R> while true R> dt = rand(Exponential(1/lambd)) R> yield(Timeout(env, dt)) R> Process(env, customer, server, mu) R> end R> end R> R> function customer(env::Environment, server::Resource, mu::Float64) R> yield(Request(server)) R> dt = rand(Exponential(1/mu)) R> yield(Timeout(env, dt)) R> yield(Release(server)) R> end R> R> env = Environment() R> server = Resource(env, 1) R> Process(env, exp_source, 1.0, server, 1.1) R> run(env, n) R> end R> R> function benchmark(n::Float64, times::Int) R> results = Float64[] R> test_mm1(n) R> for i = 1:times R> push!(results, @elapsed test_mm1(n)) R> end R> return(results) R> end R> ") It can be noted that in both cases there is no monitoring involved, because either SimPy nor SimJulia provide automatic monitoring as simmer does. Furthermore, the resulting code for simmer is more concise and expressive than the equivalent ones for SimPy and SimJulia, which are very similar.We obtain the reference benchmark with n=1e4 and 20 replicas for both packages as follows: R> n <- 1e4L R> times <- 20 R> R> ref <- data.frame( R> SimPy = rPython::python.call("benchmark", n, times), R> SimJulia = rjulia::j2r(paste0("benchmark(", n, ".0, ", times, ")")) R> ) As a matter of fact, we also tested a small DES skeleton in pure R provided in <cit.>. This code was formalised into an R package called DES, available on GitHub[<https://github.com/matloff/des>] since 2014. The original code implemented the event queue as an ordered vector which was updated by performing a binary search. Thus, the execution time of this version was two orders of magnitude slower than the other frameworks. The most recent version on GitHub (as of 2017) takes another clever approach though: it supposes that the event vector will be short and approximately ordered; therefore, the event vector is not sorted anymore, and the next event is found using a simple linear search. These assumptions hold for many cases, and particularly for this M/M/1 scenario. As a result, the performance of this model is only ∼2.2 times slower than SimPy. Still, it is clear that pure R cannot compete with other languages in discrete-event simulation, and DES is not considered in our comparisons hereafter.Finally, we set a benchmark for simmer using microbenchmark, again with n=1e4 and 20 replicas for each test. Figure <ref> shows the output of this benchmark. simmer is tested both in monitored and in non-monitored mode. The results show that the performance of simmer is equivalent to SimPy and SimJulia. The non-monitored simmer shows a slightly better performance than these frameworks, while the monitored simmer shows a slightly worse performance.At this point, it is worth highlighting simmer's ability to generate arrivals in batches (hence parameter m). To better understand the impact of batched arrival generation, the benchmark was repeated over a range of m values (1, …, 100). The results of the batched arrival generation runs are shown in Figure <ref>. This plot depicts the average execution time of the simmer model with (red) and without (blue) monitoring as a function of the generator batch size m. 
The black dashed line sets the average execution time of the SimPy model to serve as a reference. The performance with m=1 corresponds to what has been shown in Figure <ref>. But as m increases, simmer performance quickly improves and becomes ∼1.6 to 1.9 times faster than SimPy. Surprisingly, there is no additional gain with batches greater than 40-50 arrivals at a time, but there is no penalty either with bigger batches. Therefore, it is always recommended to generate arrivals in big batches whenever possible.

§.§ The cost of calling R from C++

The C++ simulation core provided by simmer is quite fast, as we have demonstrated, but performance is adversely affected by numerous calls to R. The practice of calling R from C++ is generally strongly discouraged due to the overhead involved. However, in the case of simmer, it not only makes sense, but is even fundamental in order to provide the user with enough flexibility to build all kinds of simulation models. Nevertheless, this cost must be known, and taken into account whenever higher performance is needed.

To explore the cost of calling R from C++, let us define the following test:

R> library(simmer)
R> 
R> test_simmer <- function(n, delay) {
R+   test <- trajectory() %>%
R+     timeout(delay)
R+ 
R+   env <- simmer() %>%
R+     add_generator("test", test, at(1:n)) %>%
R+     run(Inf)
R+ 
R+   arrivals <- get_mon_arrivals(env)
R+ }

This toy example performs a very simple simulation in which n arrivals are attached (in one shot, thanks to the convenience function at()) to a test trajectory at t=1, 2, ..., n. The trajectory consists of a single activity: a timeout with some configurable delay that may be a fixed value or a function call. Finally, after the simulation, the monitored data is extracted from the simulation core to R. Effectively, this is equivalent to generating a data frame of n rows (see the example output in Table <ref>).

Name    Start time   End time   Activity time   Finished   Replication
test0   1            2          1               TRUE       1
test1   2            3          1               TRUE       1
test2   3            4          1               TRUE       1

Output from the test_simmer() function.

As a matter of comparison, the following test_R_for() function produces the very same data using base R:

R> test_R_for <- function(n) {
R>   name <- character(n)
R>   start_time <- numeric(n)
R>   end_time <- numeric(n)
R>   activity_time <- numeric(n)
R>   finished <- logical(n)
R> 
R>   for (i in 1:n) {
R>     name[i] <- paste0("test", i-1)
R>     start_time[i] <- i
R>     end_time[i] <- i+1
R>     activity_time[i] <- 1
R>     finished[i] <- TRUE
R>   }
R> 
R>   arrivals <- data.frame(
R>     name=name,
R>     start_time=start_time,
R>     end_time=end_time,
R>     activity_time=activity_time,
R>     finished=finished,
R>     replication = 1
R>   )
R> }

Note that we are using a for loop to mimic the behaviour of simmer's internals, i.e., how monitoring is performed, but we concede the advantage of pre-allocated vectors to R. A second base R implementation, which builds upon the lapply() function, is implemented as the test_R_lapply() function:

R> test_R_lapply <- function(n) {
R>   as.data.frame(do.call(rbind, lapply(1:n, function(i) {
R>     list(
R>       name = paste0("test", i - 1),
R>       start_time = i,
R>       end_time = i + 1,
R>       activity_time = 1,
R>       finished = TRUE,
R>       replication = 1
R>     )
R>   })))
R> }

The test_simmer(), test_R_for() and test_R_lapply() functions all produce exactly the same data in a similar manner (cf. Table <ref>).
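The comparison discussed next was generated with the microbenchmark package; a minimal sketch of such a call, assuming the three test functions above are defined and using the parameters reported below, could look as follows:

R> library(microbenchmark)
R> 
R> n <- 1e5L
R> microbenchmark(
R+   test_simmer(n, 1),
R+   test_simmer(n, function() 1),
R+   test_R_for(n),
R+   test_R_lapply(n),
R+   times = 20
R+ )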
Now, we want to compare how a delay consisting of a function call instead of a fixed value impacts the performance of simmer, and we use test_R_for() and test_R_lapply() as yardsticks.

To this end, the microbenchmark package <cit.> is used. The benchmark was executed with n=1e5 and 20 replicas for each test. Table <ref> shows a summary of the resulting timings. As we can see, simmer is ∼4.4 times faster than for-based base R and ∼3.6 times faster than lapply-based base R on average when we set a fixed delay. On the other hand, if we replace it with a function call, the execution becomes ∼6.5 times slower, or ∼1.5 times slower than for-based base R. This is still quite a good result if we take into account the fact that base R pre-allocates memory, and that simmer is doing a lot more internally. But still, these results highlight the overheads involved and encourage the use of fixed values instead of function calls whenever possible.

Expr                            Min         Mean        Median      Max
test_simmer(n, 1)               429.8663    492.365     480.5408    599.3547
test_simmer(n, function() 1)    3067.9957   3176.963    3165.6859   3434.7979
test_R_for(n)                   2053.0840   2176.164    2102.5848   2438.6836
test_R_lapply(n)                1525.6682   1754.028    1757.7566   2002.6634

Execution time (milliseconds).

§ SUMMARY

The simmer package presented in this paper brings a generic yet powerful process-oriented Discrete-Event Simulation framework to R. simmer combines a robust and fast simulation core written in C++ with a rich and flexible R API. The main modelling component is the activity. Activities are chained together with the pipe operator into trajectories, which are common paths for processes of the same type. simmer provides a broad set of activities, and allows the user to extend their capabilities with custom R functions.

Monitoring is automatically performed by the underlying simulation core, thereby enabling the user to focus on problem modelling. simmer enables simple replication and parallelisation with standard R tools. Data can be extracted into R data frames from a single simulation environment or a list of environments, each of which is marked as a different replication for further analysis.

Despite the drawbacks of combining R calls into C++ code, simmer shows good performance combined with high flexibility. It is currently one of the most extensive DES frameworks for R and provides a mature workflow for truly integrating DES into R processes.

§ ACKNOWLEDGEMENTS

We thank the editors and the anonymous referee for their thorough reviews and valuable comments, which have been of great help in improving this paper. Likewise, we thank Norman Matloff for his advice and support. Last but not least, we are very grateful for vignette contributions by Duncan Garmonsway, and for all the fruitful ideas for new or extended features suggested by several users via the simmer-devel mailing list and GitHub.
"authors": [
"Iñaki Ucar",
"Bart Smeets",
"Arturo Azcorra"
],
"categories": [
"stat.CO"
],
"primary_category": "stat.CO",
"published": "20170527003810",
"title": "simmer: Discrete-Event Simulation for R"
} |
Birte Schmidtmann (corresponding author, [email protected]), MathCCES, RWTH Aachen, Schinkelstr. 2, 52062 Aachen
Pawel Buchmüller, Institute of Mathematics, Heinrich-Heine University Düsseldorf, Universitätsstr. 1, 40225 Düsseldorf, Germany
Manuel Torrilhon, MathCCES, RWTH Aachen, Schinkelstr. 2, 52062 Aachen

Keywords: limiter functions, finite volume schemes, third-order accuracy, non-uniform grids, 2D AMR

In this paper we extend the recently developed third-order limiter function H_3L^(c) [J. Sci. Comput., (2016), 68(2), pp. 624–652] to make it applicable for more elaborate test cases in the context of finite volume schemes. This work covers the generalization to non-uniform grids in one and two space dimensions, as well as two-dimensional Cartesian grids with adaptive mesh refinement (AMR). The extension to 2D is obtained by the common approach of dimensional splitting. In order to apply this technique without loss of third-order accuracy, the order-fix developed by Buchmüller and Helzel [J. Sci. Comput., (2014), 61(2), pp. 343–368] is incorporated into the scheme. Several numerical examples on different grid configurations show that the limiter function H_3L^(c) maintains the optimal third-order accuracy on smooth profiles and avoids oscillations in case of discontinuous solutions.

§ INTRODUCTION

In the context of finite volume schemes, one of the building blocks for obtaining higher-order accuracy is the reconstruction of interface values <cit.>. There are many different approaches using linear and non-linear reconstruction functions. Restricting ourselves to the compact stencil of only the cell of interest and its direct neighbors, the best order of accuracy we can obtain is third order. This can, e.g., be achieved by constructing a quadratic polynomial whose average over each of the three cells of interest needs to agree with the cell average of the solution in these cells. This reconstruction yields third-order accuracy; however, the resulting scheme is linear, causing oscillations at discontinuities <cit.>. One possibility to work around this is the use of limiter functions in the MUSCL framework <cit.>. These limiters use three cell mean values per reconstruction and are total variation diminishing (TVD); however, they generally yield second-order accuracy, see <cit.> and references therein. One of their major drawbacks is the loss of accuracy near extrema <cit.>, also called extrema clipping.

In 1987, Harten et al. <cit.> presented the essentially non-oscillatory (ENO) scheme, further developed by Liu et al. <cit.> to become the weighted essentially non-oscillatory (WENO) scheme. This method enjoys great popularity and has been extended by many authors. One of the most widespread enhancements are the smoothness indicators proposed by Jiang and Shu <cit.>, which increase the order of accuracy. This scheme will be referred to as WENO-JS. Another development, incorporating a global higher-order smoothness indicator, was proposed by Borges et al. <cit.> and is named WENO-Z. We compare our results to the third-order versions of these two methods.

Another approach is the use of non-polynomial reconstructions, such as hyperbolic reconstruction schemes, cf. Marquina <cit.>, or local double-logarithmic reconstruction schemes, cf. Artebrant and Schroll <cit.>. Based on their work, Čada and Torrilhon developed a third-order limiter function avoiding the above-mentioned extrema clipping. Our recent article <cit.> continued this work, introducing the third-order limiter function H_3L^(c).
This function contains a decision criterion able to distinguish between smoothextrema and discontinuities. This limiter was first developed and tested for one-dimensionalhyperbolic conservation laws on uniform grids in the context of finite volume methods. Now, theaim is to extend the scheme to make it applicable for numerical test cases on non-uniform meshesand in two space dimensions.The paper is structured as follows: in Sec. <ref> we recall the formulation of the third-order limiter function H_3L^(c) in one space dimension for equidistant grids. Then, the limiter is extended for the use of non-equidistant grids. Sec. <ref> explains the extension to two space dimensions and the order-fix which allows to maintain high-order accuracy within the flux-splitting framework. Here, we firstly treat Cartesian grids in Sec. <ref>, then introduce a parallel adaptive-mesh-refinement (AMR) framework in Sec. <ref>, and in Sec. <ref> extend the theory of non-uniform grids to two space dimensions. Numerical results visualizing the theoretical concepts are presented in Sec. <ref> and in Sec. <ref> we draw some conclusions. Finally, more details on the formulation of the limiter function on uniform as well as non-uniform grids can be found in the appendix. § THIRD-ORDER LIMITER IN ONE SPACE DIMENSION Achieving high-order accuracy with finite volume schemes requires large stencils. This leads to an increase in communication among grid cells and is undesirable when thinking of parallel codes and boundaries. Therefore, we want to remain on the most compact stencil of three cells in one-dimension and five cells in two space dimensions. This means, the stencil consists of the cell of interest and its direct neighbors. For the sake of simplicity, we restrict the theoretical development of the limiter function to one dimensional scalar equations. The transition from the one-dimensional formulation to two-dimensions is obtained via a dimensional splitting. The exact procedure is explained in Sec. <ref>. Also, the theory easily extends to systems of conservation laws by applying it component-wise. In the one-dimensional case of Cartesian grids, we divide the domain of interest Ω⊂ℝ in non-overlapping cells C_i=[x_i-12, x_i+12) such that Ω=⋃_i C_i. Denote by x_i the cell centers and by Δ x_i = x_i+1/2 - x_i-1/2 the size of cell C_i. Fig. <ref> depicts the here-introduced notation for the equidistant case Δ x_i ≡Δ x ∀ i. In this work we are interested in hyperbolic conservation law of the form ∂_t u(x⃗,t) + ∇· f(u(x⃗,t)) = 0 with suitable initial conditions u(x⃗, 0)=u_0(x⃗), x⃗=x in 1D and x⃗=(x,y)^T in 2D, respectively. To avoid boundary effects, we impose periodic boundary conditions. Integrating Eq. (<ref>) in one space dimension over cell C_i and dividing by the cell width Δ x_i yields an exact update formula for the cell mean values 1Δ x_i∫_C_i u(x,t) dx. This formulation however requires the exact solution of Riemann problems at each cell boundary <cit.>. To avoid this costly procedure, approximate Riemann solvers are incorporated and the exact cell mean values are approximated by u̅_i. The update formula for u̅_i is then given by the so-called semi-discrete scheme d u̅_i/d t = -1/Δ x_i( f̂_i+1/2 - f̂_i-1/2), with numerical flux functions f̂_i+1/2 = f̂(u^(-)_i+1/2, u^(+)_i+1/2) and f̂_i-1/2 = f̂(u^(-)_i-1/2, u^(+)_i-1/2). These functions take as input the left and right limiting values at the cell interface, see Fig. <ref>. 
One could simply insert the left and right cell mean values, however, this only yields a first-order accurate scheme <cit.>. In order to achieve higher-order accuracy, one way is to use reconstructions for the interface values. As described in more detail in <cit.>, the reconstructed interface values of cell i can be written in the general form u^(-)_i+1/2 = u̅_i + 12H(δ_i-1/2,δ_i+1/2) = M(u̅_i-1, u̅_i, u̅_i+1), u^(+)_i-1/2 = u̅_i - 12H(δ_i+1/2,δ_i-1/2) = P(u̅_i-1, u̅_i, u̅_i+1). The function H fully determines the way limiting is performed and thus the order of accuracy of the resulting scheme. The undivided differences between neighboring cells are denoted by δ_i-1/2 = u̅_i-u̅_i-1 δ_i+1/2 = u̅_i+1-u̅_i. Remark: The standard form for reconstructions, e.g. found in <cit.> reads u^(-)_i+1/2 = u̅_i + 12 ϕ(θ_i)δ_i-1/2, u^(+)_i-1/2 = u̅_i - 12 ϕ(θ_i^-1)δ_i+1/2 with the ratio of consecutive gradients θ_i = δ_i-1/2/δ_i+1/2 which acts as a smoothness indicator and a monovariant limiter function ϕ. This form, introduced by <cit.> relates to Eq. (<ref>) via ϕ(θ_i)δ_i-1/2 = H(δ_i-1/2,δ_i+1/2). More details on the two-variate form and its advantages can be found in <cit.>.§.§ Formulation for Equidistant Grids In this section we will shortly recall the formulation of third-order limiter functions for equidistant grids developed in <cit.>. Starting with a quadratic ansatz function, evaluated on (x_i-1, u̅_i-1), (x_i, u̅_i), (x_i+1, u̅_i+1), we can obtain an unlimited reconstruction formulation which yields a third-order accurate scheme. Rewriting the polynomial reconstruction in the form (<ref>) yields u^(∓)_i±1/2 = u̅_i ±1/2 δ_i+1/2 + δ_i±1/2 + δ_i-1/2/3 leading to the function H_3(δ_i-1/2, δ_i+1/2) :=1/3(2 δ_i+1/2 + δ_i-1/2). For purely smooth functions, the full (unlimited) third-order reconstruction shows good results. However, for solutions containing discontinuities, spurious oscillations develop since a linear higher-order method is not monotonicity preserving <cit.>. Therefore, we need to apply non-linear reconstruction functions. In <cit.> we recently constructed a limiter function, called H_3L, based on a double logarithmic ansatz function, first proposed by Artebrant and Schroll <cit.> and further developed by Čada and Torillhon <cit.>. Furthermore, we designed the combined limiter function H_3L^(c) which includes a decision criterion η = η(δ_i-1/2,δ_i+1/2), able to distinguish between smooth extrema and discontinuities. The combined limiter applies the full third-order reconstruction H_3 on parts which are classified as smooth and switches to the limited function H_3L if η indicates large gradients. It reads H_3L^(c) (δ_i-1/2,δ_i+1/2) := H_3(δ_i-1/2,δ_i+1/2)if η(δ_i-1/2,δ_i+1/2) <1 H_3L(δ_i-1/2,δ_i+1/2)if η(δ_i-1/2,δ_i+1/2) ≥ 1. The formulation for H_3L and the decision criterion η, as well as more details on H_3L^(c) are given in <ref> and in <cit.>. §.§ Formulation for Non-Equidistant Grids For general grids, the size of cell C_i, denoted by Δ x_i is not uniform for all cells, i.e. Δ x_i ≠Δ x ∀ i, see Fig. <ref>. In this case, the definition of the undivided differences δ _i±1/2 Eq. (<ref>) is not meaningful anymore and new concepts need to be developed. Starting again with the full third-order reconstruction, consider a quadratic polynomial p_i(x) in cell i that has to maintain the cell averages in the three cells C_i+ℓ, ℓ∈{-1, 0, 1}. 
This polynomial is then evaluated at the cell boundaries x_i±1/2 and yields the reconstructed cell interface values u_i+1/2^(-) = p_i(x_i+1/2) !=u̅_i + 1/2 H_3,neq u_i-1/2^(+) = p_i(x_i-1/2) !=u̅_i - 1/2H_3,neq. Even though this procedure is similar to the full third-order reconstruction on uniform grids, Eq. (<ref>), the reconstruction function H_3,neq differs from H_3 since the different cell sizes need to be taken into account. The full (unlimited) third-order reconstruction function reads H_3,neq(δ _i-1/2,δ _i+1/2, Δ x_i, Δ x_i-1, Δ x_i+1) = Δ x_i/Δ_i1/3(2 Δ_i-1/2/Δ_i+1/2δ _i+1/2 + Δ x_i+1/Δ_i-1/2δ _i-1/2) with the abbreviations Δ_i = Δ x_i-1+Δ x_i+Δ x_i+1/3, Δ_i-1/2 = Δ x_i-1+Δ x_i/2,Δ_i+1/2 = Δ x_i+Δ x_i+1/2. As in the equidistant case, the reconstructed interface values, Eq. (<ref>), can be compactly refomulated as u^(∓)_i±1/2 = u̅_i ±Δ x_i/2 Δ x_i-1δ_i+1/2 + Δ x_i δ_i±1/2 +Δ x_i+1δ_i-1/2/Δ x_i-1+Δ x_i+Δ x_i+1withδ_i-1/2 = δ _i-1/2/Δ_i-1/2,δ_i+1/2 = δ _i+1/2/Δ_i+1/2. It can easily be seen that for equidistant grids, i.e. Δ x_i-1=Δ x_i=Δ x_i+1≡Δ x, the abbreviated terms reduce to Δ_i = Δ_i-1/2 = Δ_i+1/2 = Δ x and therefore, the formulas for H_3,neq and H_3 match, as expected. Eq. (<ref>) and (<ref>) indicate that for non-equidistant meshes, the equivalent of the undivided differences δ_i±1/2 are the scaled slopes u_i+1/2^(-) : δ _i-1/2→ Δ x_i+1δ _i-1/2/Δ_i-1/2 = Δ x_i+1δ_i-1/2 δ _i+1/2→ Δ_i-1/2 δ _i+1/2/Δ_i+1/2 = Δ x_i/2 δ_i+1/2 + Δ x_i-1/2 δ_i+1/2 for the reconstruction of the right interface of cell C_i and u_i-1/2^(+): δ _i-1/2→ Δ_i+1/2 δ_i-1/2/Δ_i-1/2 =Δ x_i/2 δ_i-1/2 + Δ x_i+1/2 δ_i-1/2 δ _i+1/2→ Δ x_i-1δ_i+1/2/Δ_i+1/2 =Δ x_i-1δ_i+1/2 for the reconstruction of the left interface of cell C_i. These expressions resemble the smoothness indicators introduced by Jiang and Shu <cit.>, which are given by Δ x_i δ_i-1/2 and Δ x_i δ_i+1/2. In order to generalize the third-order limiter function developed in <cit.>, we replace the undivided differences as mentioned above to obtain the reconstructions u_i+1/2^(-) = u̅_i + 1/2 H^(c)_3L( Δ x_i+1δ_i-1/2, Δ_i-1/2 δ_i+1/2) u_i-1/2^(+) = u̅_i - 1/2H^(c)_3L( Δ x_i-1δ_i+1/2, Δ_i+1/2 δ_i-1/2). with the limiter function H^(c)_3L (<ref>) described in Sec. <ref>. The non-equidistant version of the limiter function can be defined as a function H^(c)_3L,neq, given by H^(c)_3L,neq(δ _i-1/2,δ _i+1/2, Δ x_i, Δ x_i-1, Δ x_i+1)= H^(c)_3L( Δ x_i+1δ _i-1/2/Δ_i-1/2, Δ_i-1/2 δ _i+1/2/Δ_i+1/2) H^(c)_3L,neq(δ _i+1/2,δ _i-1/2,Δ x_i, Δ x_i+1, Δ x_i-1)= H^(c)_3L( Δ x_i-1δ_i+1/2/Δ_i+1/2, Δ_i+1/2 δ_i-1/2/Δ_i-1/2). The decision criterion η (<ref>) for non-uniform meshes reads η(δ_1,δ_2) = √(δ_1^2+δ_2^2)/√(5/2) α dx^2, where δ_1,δ_2 are the same input arguments as for H^(c)_3L,neq, see Eq. (<ref>) and dx is the average mesh size, dx = (∑_i Δ x_i)/# cells. § THIRD-ORDER LIMITER IN TWO SPACE DIMENSIONS In this section we extend the third-order limiter function to two space dimensions covering three core areas. First, we discuss how to apply the scheme on uniform Cartesian grids. Then we extend the method to adaptively refined grids. The last part of this section explains how the method can be used on rectangular grids which are non-uniform in x- and y-direction. §.§ Formulation for 2D Cartesian Grids In two space dimensions, the domain of interest Ω is divided into non-overlapping cells C_i,j=[x_i-1/2, x_i+1/2)× [y_j-1/2, y_j+1/2) such that Ω=⋃_i,j C_i,j. Denote by (x_i,y_j) the cell center of cell C_i,j. 
The mesh width is given by Δ x_i = x_i+1/2 - x_i-1/2 and Δ y_j = y_j+1/2 - y_j-1/2. Furthermore we denote by ũ̅̃_i,j the cell–averaged value over cell C_i,j and by ũ_i+1/2,j and u̅_i,j+1/2 the interface–averaged values over the corresponding interface. The tilde notation ·̃ denotes the average in y-direction and bar ·̅ denotes the average in x-direction as in the one-dimensional case. Integrating a hyperbolic conservation law of the form ∂_t u(x,y,t) + ∂_x f(u(x,y,t)) + ∂_y g(u(x,y,t)) = 0 over cell C_i,j and dividing by the cell area Δ x_iΔ y_j yields the two-dimensional semi-discrete flux-differencing finite volume scheme <cit.> d ũ̅̃_i,j/d t = -1/Δ x_i(f̃̂̃_i+1/2,j - f̃̂̃_i-1/2,j) -1/Δ y_j( ĝ̅_i,j+1/2 - ĝ̅_i,j-1/2). Here, the numerical flux functions f̂_i+1/2,j andĝ_i,j+1/2 are approximations to averages of the flux across the corresponding interface <cit.> f̃̂̃_i+1/2,j ≈1/Δ y_j∫_y_j-1/2^y_j+1/2 f(u(x_i+1/2,y,t)) dy, ĝ̅_i,j+1/2 ≈1/Δ x_i∫_x_i-1/2^x_i+1/2 g(u(x,y_j+1/2,t)) dx. Similar to other finite volumen methods, applying the scheme described in Sec. <ref> in a dimension–by–dimension fashion results in a second order scheme, see e.g. <cit.><cit.><cit.>. In order to remain third-order accurate, we apply the fourth order transformation proposed by Buchmüller and Helzel <cit.>. Incorporating this so-called order-fix, the scheme can be summarized as follows. * Compute the averaged values of the conserved quantities at the cell interfaces in the interior of cell C_i,j for all i,j using the one-dimensional limiter functions described in Sec. <ref> ũ^(-)_i+1/2,j = ũ̅̃_i,j + 12H(δ_i-1/2,j,δ_i+1/2,j),ũ^(+)_i-1/2,j = ũ̅̃_i,j - 12H(δ_i+1/2,j,δ_i-1/2,j),u̅^(-)_i,j+1/2 = ũ̅̃_i,j + 12H(δ_i,j-1/2,δ_i,j+1/2),u̅^(+)_i,j-1/2 = ũ̅̃_i,j - 12H(δ_i,j+1/2,δ_i,j-1/2). Here, the reconstruction function H can be the unlimited third-order reconstruction H_3, the limiter function H_3L^(c) or any other third-order limiter fitting the setting. The undivided differences in two-dimensions, δ_i±1/2,j, δ_i, j±1/2, are defined similarly to their one-dimensional equivalents, Eq. (<ref>), δ_i-1/2,j = ũ̅̃_i,j -ũ̅̃_i-1,j δ_i+1/2,j = ũ̅̃_i+1, j -ũ̅̃_i,j δ_i, j-1/2 = ũ̅̃_i,j -ũ̅̃_i,j-1 δ_i, j+1/2 = ũ̅̃_i,j+1 -ũ̅̃_i,j. * Compute point values of the conserved quantities at the center of each cell interface, i.e. compute u_i+1/2,j^(±)= ũ_i+1/2,j^(±) - 1/24( ũ_i+1/2,j-1^(±) - 2 ũ_i+1/2,j^(±) + ũ_i+1/2,j+1^(±)), u_i,j+1/2^(±)= u̅_i,j+1/2^(±) - 1/24( u̅_i-1,j+1/2^(±) - 2 u̅_i,j+1/2^(±) + u̅_i+1,j+1/2^(±)). * Compute fluxes at the center of the cell interfaces using the computed point values and a consistent numerical flux function, i.e. f̂_i+1/2,j=f̂(u^(-)_i+1/2,j, u^(+)_i+1/2,j), ĝ_i,j+1/2=ĝ(u^(-)_i,j+1/2, u^(+)_i,j+1/2). * Compute averaged values of the numerical flux function, i.e. compute f̃̂̃_i+1/2,j= f̂_i+1/2,j + 1/24( f̂_i+1/2,j-1 - 2 f̂_i+1/2,j + f̂_i+1/2,j+1),ĝ̅ _i,j+1/2= ĝ_i,j+1/2 + 1/24( ĝ_i-1,j+1/2 - 2 ĝ_i,j+1/2 + ĝ_i+1,j+1/2). * Use a high–order accurate Runge–Kutta method for the update in time. In this work, we use the strong stability preserving third–order Runge–Kutta method described by Gottlieb et. al. <cit.>. This procedure is quite robust even when discontinuities are present.Nevertheless in some situations an unphysical state may be created, therefor we apply a simple limiting as suggested by Buchmüller et. al. <cit.>.Details on this limiting procedure can be found in the original paper. Step 1. comprises the reconstruction function H(·,·) that has been described inSec. <ref>. 
For purely smooth solutions, the full third-order reconstructionH_3 can be used in this step. However, when discontinuities are present, the limiter functionH_3L^(c), Eq. (<ref>), is more advisable since oscillations are prevented.In principle, in the dimension-splitting approach described above, H_3L^(c) canbe applied in the same manner as in one dimension. The decision criterionη (see Appendix <ref>, Eq. (<ref>)) can also be used withoutany changes in the splitting approach. Only the definition of the radius of the asymptoticregion, α (see Appendix <ref>, Eq. (<ref>)) has to be adapted.It is defined asα = max_(x,y)∈Ω\Ω_d|Δ u_0(x,y)| in the two-dimensional scheme. Again, Ω is the domain of interest and Ω_d⊂Ω the subset containing discontinuities.§.§ Adaptive Mesh Refinement (AMR)For computations in two dimensions, we use the parallel AMR framework Racoon developed by Dreher and Grauer <cit.>.Both, the grid adaptivity and the parallelization are based on a block–structure.In a 2-dimensional space, a grid of level ℓ consists of (2^2)^ℓ = 4^ℓ blocks. Computations can be performed simultaneouslyon each block and due to the Cartesian grid structure within each block, we can simply apply the method described inSec. <ref>.A typical block is illustrated in Fig. <ref>.The cells in the gray region are ghost cells needed for the communication between the blocks.For refinement a block of level ℓ is replaced by 2^d blocks of level ℓ+1. These blocks may then be further refined until the maximum refinement level is reached.There are three reasons for refinement.* Some refinement criteria is met.Here we compute δ = |q_i-1,j- 2 q_i,j + q_i+1,j| +|q_i,j-1 - 2 q_i,j + q_i,j+1| /|q_i,j| Δ xΔ yfor each cell.If δ is bigger than a predefined threshold δ_0, the cell is marked for refinement and therefore the block will be refined.* Neighbouring blocks are may also refined, so that after refinement the region with the marked cell is surrounded by fine blocks.In 2D for example, if a cell in the upper left part of a block is marked for refinement, then the upper block, the block on the left-hand side, and the block in the upper left diagonal direction will be refined as well.* Finally the grid needs to be properly nested, that is the level of neighboring blocks is not allowed to differ by more than one.Which may lead to further refinement.In Fig. <ref>, a typical block structure is illustrated. The blocks in this figure are of level 2-5, where light gray corresponds to 2 and increasingdarkness corresponds to increasing refinement level.As mentioned above, ghost cells are used for communication between blocks and need to be updated in every stage of the time stepping scheme. In most cases this means simply copying the cell–averaged data from the neighboring block.To transfer data from a fine block to a coarse block, the values of the corresponding cells are averaged.Values for the fine block are created by polynomial reconstruction using data of the coarse block.The same is procedure is applied when a block is refined or coarsen again, see<cit.> for more details.§.§ Non-Uniform Rectangular 2D GridsIn this section we consider non-uniform two-dimensional meshes. The Cartesian grid cells are transformed into non-uniform cells in x- and y-direction by addinga perturbation to the cell centers (x_i, y_j). In this work we used the transformationx_i→ x_i + δ x sin(c_x π x_i), y_j→y_j + δ y sin (c_yπy_j) with the constants δ x, c_x, δ_y, c_y, which determine the structure of the mesh. 
This procedure yields rectangles that are still aligned with the x- and y-axes but exhibit differentcell sizes. In order to apply the numerical schemes presented in the first sections of this paper, we need to adaptthe schemes as follows: The general structure of the numerical algorithm for two-dimensional Cartesiangrids, introduced in Sec. <ref>, remains the same. We only need to adapt step 1, i.e.the reconstruction of the interface values. The first step of the algorithm for non-uniform grids reads * Compute the averaged values of the conserved quantities at the cell interfaces in the interior of cell C_i,j for all i,j using the one-dimensional limiter functions for non-uniform grids, described in Sec. <ref>. ũ^(-)_i+1/2,j = ũ̅̃_i,j + 12H(δ_i-1/2,j,δ_i+1/2,j,Δ x_i, Δ x_i-1, Δ x_i+1),ũ^(+)_i-1/2,j = ũ̅̃_i,j - 12H(δ_i+1/2,j,δ_i-1/2,j, Δ x_i, Δ x_i+1, Δ x_i-1),u̅^(-)_i,j+1/2 = ũ̅̃_i,j + 12H(δ_i,j-1/2,δ_i,j+1/2, Δ y_j, Δ y_j-1, Δ y_j+1),u̅^(+)_i,j-1/2 = ũ̅̃_i,j - 12H(δ_i,j+1/2,δ_i,j-1/2, Δ y_j, Δ y_j+1, Δ y_j-1). Here, H = H_3, neq or H = H^(c)_3L,neq. The two-dimensional undivided differences δ_i±1/2,j, δ_i, j±1/2, are defined as above, Eq. (<ref>) and are adapted to the non-uniform setting as explained in Sec. <ref>. Steps 2.-5. remain the same, see Sec. <ref>.§ NUMERICAL EXAMPLESIn this section we present different numerical examples validating the concepts introduced in Sec. <ref>. We first prove in Sec. <ref> that third-order accuracy isobtained on non-equidistant one-dimensional grids. In Sec. <ref>, the vortex evolutionperformed on a two-dimensional Cartesian mesh shows that also in 2D, the limiter yields third-order accuracy.Then, Sec. <ref> presents the two-dimensional advection equation on a non-uniformCartesian mesh. Finally, the double Mach reflection, Sec. <ref> and the two-dimensionalRiemann problem with four shocks show the excellent performance of H_3L^c using AMR.All simulations were performed using the third-order accurate strong stability preserving Runge-Kutta (SSP-RK3)time integrator developed by Gottlieb et. al. <cit.>.§.§ Testing the Convergence Order on a Non-Uniform 1D Grid In this section we want to verify that the extension of the third-order limiterfunction H_3L^c from equidistantto non-equidistant grids still yields third order accurate solutions. Thus, weconsider the linear advection equation with smooth initial conditions u_t+ u_x= 0 u(x,0)= sin(2π x) on the domain [0,1] with periodic boundary conditions. In order to verifythe order of convergence we carry out simulations with N=25× 2^j, j=0,…,6grid cells with end time t_end=1.0 and CFL number 0.95. Since we areinterested in non-equidistant grids, the original grid is perturbed by addingc_1·sin(c_2 2π x_i+1/2) to each cell boundary x_i+1/2 withsome constants c_1, c_2∈ℝ. In this test case, c_1 = (10· c_2)^-1and c_2=5 have been applied. Fig. <ref> shows the exact solutionas well as the solution obtained with H_3L^c on a grid with 25 cells.This solution is compared to the third order WENO method developed by Liu et. al. <cit.> with the smootheness measure by Jiang and Shu <cit.>.This scheme is denoted by WENO-JS. The choice of depicting a coarse grid emerges fromthe fact that the non-equidistant mesh structure is well-visible. 
Also, the improvedsolution quality of the limiter function can be best observed on coarse meshes, as forfine grids, all convergent methods look the same.Finally, to verify that the limiter function is third-order accurate on non-equidistant grids,Table <ref> displays the L_1- and L_∞-errors of H_3L^c.The corresponding empirical order of convergence (EOC) is obtained by log(err_j+1/err_j)/log(N_j/N_j+1). It can be seen that the limiter function obtainsthe desired accuracy already on coarse meshes.§.§ 2D Advection Equation on Non-Uniform Rectangular GridsThis numerical problem verifies the accuracy of the two-dimensional numerical scheme on non-uniformgrids, described in Sec. <ref> and <ref>. We consider thetwo-dimensional linear advection equation with smooth initial conditions u_t+ au_x + b u_y= 0 u(x, y, 0)=u_0(x, y) = 1/2sin(π x)sin(π y). The computational domain is set to Ω = [-1, 1]× [-1, 1] and the non-uniformity is obtainedby Eq. (<ref>) with δ x=0.1, c_x=2, δ y=0.1, c_y=1.Appying an advection speed of either (a,b)= (1,0) or (a,b)= (1,1) and the simulation timeT_end = 2, the initial condition can be used as exact solution. Thus, the L_1-errorof the numerical solution u_ij^n can easily be computed asu_ij^n - u_0(x_i,y_j)_1 = |C_i,j|∑_i,j|u_ij^n - u_0(x_i,y_j)|.For the simulation, the CFL condition 0.5 has been imposed and for the decision criterion ηof the limiter functionH_3L,neq^(c), the input value α = π^2 is obtained by Eq. (<ref>). In order to verify the order of convergence, we carry out simulations withN={5× 5, 10× 10, 20× 20, 30× 30, 50× 50} grid cells. The mesh with30× 30 grid cells, perturbed as described above, is depicted in Fig. <ref>and the solution obtained using is shown in Fig. <ref>. The L_1-errors andthe corresponding empirical orders of convergence are given in Table <ref>. The errors for advection speed (1,0)are by a mean factor of 0.7 better than the errors of the solutions advectedin diagonal direction (1,1). Nevertheless, both simulations yield third order accuracy, see Table <ref>§.§ 2D Vortex EvolutionThis problem, originally proposed by Hu <cit.>, describes a two-dimensional vortex evolution on the periodic domain [-7,7]×[-7,7], where the flow is described by the Euler equations.The initial data consists of a mean flow ρ=u=v=p=1, perturbed by[δρ; δ u; δ v; δ p ] = [(1+δ T)^1/(γ-1)-1; -yσ/2πe^0.5(1-r);xσ/2πe^0.5(1-r); (1+δ T)^γ/(γ -1)-1 ]. The perturbations in density and pressure are expressed in terms of perturbation in temperature, δ T, given by δ T =-(γ-1)σ^2/8γπ^2e^1-r^2, with r^2 = x^2 + y^2, the adiabatic index γ=1.4, and the vortex strength σ = 5. The initial data is also used as a reference solution at time t=14, where it agrees with the exact solution. For the limiter function we need to compute the radius of the asymptotic region, α, given by Eq. (<ref>). In this case, α = 7.9 is used for the simulation.By applying the method on a two-dimensional Cartesian grid, as suggested inSec. <ref>, we obtain the full third order, as shown in Table <ref>. The results compare well with the third-order WENO-Z3 implementation developed by Borgeset. al. <cit.> and further improved by Don and Borges <cit.>. Both schemes, the H^(c)_3L limiter function and WENO-Z3, are implementedfollowing the algorithm described in Sec. <ref>.§.§ Double Mach ReflectionIn this section we apply the limiter function on a Cartesian grid with AMR, as described inSec. <ref>. The test case consists of the double Mach reflection problemproposed by Woodward and Colella <cit.>. 
It describes a Mach 10 shock reflection off a 30-degree wedge. The computational domain is the rectangle [0,3] × [0,1]. To obtain the same resolution in both the x- and y-direction, each block contains 36 × 12 mesh cells. We set level 3 as the coarsest level and allow up to 4 additional refinements; thus, the finest level corresponds to a discretization with 4608 × 1536 mesh cells. The refinement threshold is set to δ_0 = 2000. Due to the constant initial data, α turns out to be 0, cf. Eq. (<ref>). Therefore, the combined limiter function H_3L^(c), Eq. (<ref>), reduces to H_3L, see Eq. (<ref>).

Fig. <ref> shows the result of the simulation using the third-order limiter function H_3L at time t_end=0.2, including the block structure. A close-up view of the Mach stem region is shown in Fig. <ref>. The computations were performed with four (Fig. <ref> and <ref>) and five (Fig. <ref> and <ref>) levels of refinement. For comparison, we also show the results of a third-order WENO-Z reconstruction on the left-hand side. In direct comparison, the H^(c)_3L scheme produces more roll-ups in the inner region. This is a desired feature since the slip line is physically unstable, indicating that the scheme with H^(c)_3L introduces less numerical viscosity than WENO-Z3.

§.§ 2D Riemann Problem

The next test case we consider is a configuration of four interacting shocks in the domain [0,1]^2. The initial values have the form

(ρ, p, u, v)(x,y,0) =
  (1.5, 1.5, 0.0, 0.0)              for x>0.5, y>0.5,
  (0.5323, 0.3, 1.2060, 0.0)        for x<0.5, y>0.5,
  (0.1380, 0.029, 1.2060, 1.2060)   for x<0.5, y<0.5,
  (0.5323, 0.3, 0.0, 1.2060)        for x>0.5, y<0.5.

This test case was originally proposed by Schulz-Rinne <cit.> along with several other configurations of 2D Riemann problems. Due to the constant initial data we set here α = 0 for the limiter H^(c)_3L, see <ref>. Fig. <ref> shows the results at final time t_end = 0.3. The results obtained by applying the limiter function H^(c)_3L are compared to results computed with the third-order WENO-Z reconstruction. As for the double Mach reflection, Sec. <ref>, the scheme with H^(c)_3L introduces less numerical viscosity than WENO-Z3.

§ CONCLUSION

In this work we have extended the recently proposed third-order limiter function H^(c)_3L <cit.> from one-dimensional equidistant grids to non-uniform and Cartesian AMR meshes in two space dimensions. For the reconstruction of interface values, the presented limiter function takes into account the smallest possible stencil for reaching third-order accuracy. Thus, in one space dimension the reconstruction requires three cell mean values and in two dimensions five cells. For the limiter to be applicable to one-dimensional non-equidistant grids, the undivided differences δ_i-1/2 = u̅_i-u̅_i-1 and δ_i+1/2 = u̅_i+1-u̅_i have been adapted to remain meaningful. The resulting expressions are closely related to the smoothness indicators by Jiang and Shu <cit.>. A numerical test case verifies that the resulting scheme yields the desired third-order accuracy.

For the two-dimensional test cases, the popular approach of dimension splitting has been applied. In order to use this method without loss of third-order accuracy, we use the order-fix developed in <cit.>. The scheme has first been extended to Cartesian grids; we then showed that it can be incorporated into a scheme with adaptive mesh refinement, and that two-dimensional, non-uniform rectangular grids also yield third-order accurate solutions. The resulting scheme has been tested on a number of numerical examples and shows the desired third-order accuracy.
We also compared the limiter function to the third-order WENO-Z3 reconstruction <cit.>and obtain equally good results. seems to introduce less viscosity into the scheme, thusbeing able to better reproduce the physically instable details of the double Mach reflection problem. § APPENDIX §.§ Brief Summary of H_3L^(c) on Uniform-Grids In this section we want to recall the third-order limiter function developed in <cit.>. In one space dimension on equidistant grids it reads H_3L(δ_i-1/2,δ_i+1/2) = (δ_i+1/2) max(0,min((δ_i+1/2) H_3, max(-(δ_i+1/2)δ_i-1/2, min(2 (δ_i+1/2)δ_i-1/2, (δ_i+1/2) H_3, 1.5 |δ_i+1/2|)))). As described in <cit.>, there exist cases, where the limiter function H_3L is unable to distinguishing between smooth extrema and discontinuities due to the constraint of using three cells only. Therefore, the decision criterion η was introduced, which is able to distinguish between smooth extrema and discontinuities in most cases. The switch function is defined by η=η(δ_i-1/2,δ_i+1/2) = √((δ_i-1/2)^2+(δ_i+1/2)^2)/√(5/2) α Δ x^2, where α denotes the maximum second derivative of the initial conditionsα≡max_x ∈Ω\Ω_d |u_0^''(x)|. Here, Ω is the computational domain and Ω_d⊂Ω the subset containing discontinuities. This means that the maximum second derivative is only considered in smooth parts of the domain. With the switch function η, the combined limiter function reads H_3L^(c) (δ_i-1/2,δ_i+1/2) = H_3(δ_i-1/2,δ_i+1/2)if η <1 H_3L(δ_i-1/2,δ_i+1/2)if η≥ 1. Note that for performance reasons, instead of computing η in every cell by using (<ref>) we precompute τ = 5/2(αΔ x^2)^2 and use H_3L^(c) (δ_i-1/2,δ_i+1/2) = H_3(δ_i-1/2,δ_i+1/2)if δ^2_i-1/2+δ^2_i+1/2<τ H_3L(δ_i-1/2,δ_i+1/2)if δ^2_i-1/2+δ^2_i+1/2≥τ instead of Eq. (<ref>). This leads to a significant reduction of computational time.§.§ Derivation of the Full-Third-Order Reconstruction on Non-Equidistant Grids The quadratic polynomial p_i(x) which satisfies 1/Δ x_i+ℓ∫_x_i-1/2+ℓ^x_i+1/2+ℓ p_i(x) dx = u̅_i+ℓ,ℓ∈{-1, 0, 1} is given by p_i(x)= a (x-x_i)^2 + b (x-x_i) + cwith a= 1/2Δ x_i+Δ x_i+1/2(u_i-1-u_i) + Δ x_i+Δ x_i-1/2(u_i+1-u_i) /(Δ x_i+Δ x_i-1/2)(Δ x_i+Δ x_i+1/2)Δ_i , b= u_i (Δ x_i+1-Δ x_i-1) (2 Δ_i+3Δ x_i) /(Δ x_i+Δ x_i-1/2)(Δ x_i+Δ x_i+1/2)Δ_i - 2/3 u_i-1(Δ x_i+Δ x_i+1/2) (2(Δ x_i+Δ x_i+1)/2+Δ x_i+1) /(Δ x_i+Δ x_i-1/2)(Δ x_i+Δ x_i+1/2)Δ_i + 2/3 u_i+1(Δ x_i+Δ x_i-1/2) (2(Δ x_i+Δ x_i-1)/2+Δ x_i-1) /(Δ x_i+Δ x_i-1/2)(Δ x_i+Δ x_i+1/2)Δ_i , c= u_i (Δ x_i(6Δ x_i^2+9Δ x_i (Δ x_i-1+Δ x_i+1)+4 (Δ x_i-1+Δ x_i+1)^2)+4Δ x_iΔ x_i-1Δ x_i+1) / 4 (Δ x_i+Δ x_i-1) (Δ x_i+Δ x_i+1) 3Δ_i + 4Δ x_i-1Δ x_i+1 (Δ x_i-1+Δ x_i+1)u_i - Δ x_i^2 (u_i-1 (Δ x_i+Δ x_i+1)+u_i+1 (Δ x_i+Δ x_i-1)) / 4 (Δ x_i+Δ x_i-1) (Δ x_i+Δ x_i+1) 3Δ_i . Evaluating this polynomial at x_i+ and rearranging yields the formulationu^(-)_i+1/2= u̅_i + 12H_3,neq(δ _i-1/2,δ _i+1/2, Δ x_i, Δ x_i-1, Δ x_i+1) with H_3,neq given byH_3,neq(δ _i-1/2,δ _i+1/2, Δ x_i, Δ x_i-1, Δ x_i+1)=Δ x_i/Δ_i1/3 (2 Δ_i-1/2/Δ_i+1/2δ _i+1/2 + Δ x_i+1/Δ_i-1/2δ _i-1/2) with the abbreviations Δ_i = Δ x_i-1+Δ x_i+Δ x_i+1/3, Δ_i-1/2 = Δ x_i-1+Δ x_i/2,Δ_i+1/2 = Δ x_i+Δ x_i+1/2. Evaluating and rearranging p_i(x_i-) yields u^(+)_i-1/2= u̅_i - 12H_3,neq(δ _i+1/2,δ _i-1/2, Δ x_i, Δ x_i+1, Δ x_i-1) with the same reconstruction function H_3,neq. Note however, that the order of the argumentshas changed in this case leading to H_3,neq(δ _i+1/2,δ _i-1/2, Δ x_i, Δ x_i+1, Δ x_i-1)=Δ x_i/Δ_i1/3 (2 Δ_i+1/2/Δ_i-1/2δ_i-1/2 + Δ x_i-1/Δ_i+1/2δ_i+1/2).§ REFERENCES apa | http://arxiv.org/abs/1705.10608v1 | {
"authors": [
"Birte Schmidtmann",
"Pawel Buchmüller",
"Manuel Torrilhon"
],
"categories": [
"math.NA",
"cs.NA"
],
"primary_category": "math.NA",
"published": "20170526215347",
"title": "Third-order Limiting for Hyperbolic Conservation Laws applied to Adaptive Mesh Refinement and Non-Uniform 2D Grids"
} |
A Viral Timeline Branching Process to Study a Social Network

Ranbir Dhounchak, IEOR, IIT Bombay, India, [email protected]
Kavitha, IEOR, IIT Bombay, India, [email protected]
Altman, INRIA, France, [email protected]
=============================================================================

Table of contents
* Part-I - Viral Marketing Branching Processes in OSNs
* Part-II - Competitive Viral Marketing Branching Processes in OSNs

Part-I: Viral Marketing Branching Processes in OSNs

We consider the inherent timeline structure of the appearance of content in online social networks (OSNs) while studying content propagation. We model the propagation of a post/content of interest by an appropriate multi-type branching process. The branching process allows one to predict the emergence of global macro properties (e.g., the spread of a post in the network) from the laws and parameters that determine local interactions. The local interactions largely depend upon the timeline structure (an inverse stack capable of holding many posts, one dedicated to each user), the number of friends (i.e., connections) of users, etc. We explore the use of multi-type branching processes to analyze the viral properties of the post, e.g., to derive the expected number of shares, the probability of virality of the content, etc. In OSNs the new posts push down the existing contents in timelines, which can greatly influence content propagation; our analysis considers this influence. We find that one is led to draw erroneous conclusions when the timeline (TL) structure is ignored: a) for instance, even less attractive posts are shown to get viral; b) ignoring the TL structure also yields erroneous growth rates. More importantly, one cannot capture some interesting paradigm shifts/phase transitions, e.g., virality chances are not monotone in the network activity parameter, as shown by the analysis that includes the TL influence. In the last part, we integrate online auctions into our viral marketing model. We study the optimization problem considering real-time bidding. We again compare the analysis with and without the TL structure for varying activity levels of the network. We find that the analysis without the TL structure fails to capture the relevant phase transitions, thereby making the study incomplete.

Keywords: Viral marketing, Branching processes, Online social network, Martingales, Online auctions.

§ INTRODUCTION

The advent of the Internet has transformed the advertising industry in various ways. With constant year-on-year growth in the number of users, the global user base of the Internet passed the 3.5 billion mark in 2017, constituting nearly half of the earth's population <cit.>. This has made the Internet a powerful tool for organizations to interact with users and advertise their products/services in a personalized manner. In particular, Online Social Networks (OSNs) such as Facebook, Twitter, YouTube, etc., play an instrumental role in the overall digital advertising of the products/services of various organizations.
Users on these OSNs keep exchanging volumes of information/data in the form of images, blogs, texts, videos, etc. Owing to the immense activity of users in OSNs, marketing/advertising companies promote their commercial content by leveraging the strengths of these OSNs. In viral marketing, the content providers (CPs)/advertisers create content that is appealing to the users (e.g., giving offers, discounts, or advertising in an attractive manner). When users find the service/product good enough, they involuntarily spread the word about it, triggering word-of-mouth. Users share the content with their friends, and the information is thus spread through OSNs. In an abstract sense, the information spreads like a virus from one person to another, hence the name viral marketing. However, content propagation has additional complexities which must be incorporated in the model to investigate the phenomenon accurately, and we study the same in this paper.

§.§ Motivation and scope of research: Timeline structure

OSNs store volumes of information consumed by the users. At the user level, these pieces of information (called posts) are arranged in reverse chronological order for display to users <cit.>. In other words, posts appear at different levels (each level holds one post) based on their newness on each user's page in an OSN, for instance, the News Feed in Facebook. We call this reverse chronological arrangement of the posts a `timeline' (TL), one dedicated to each user. This order of storing content on timelines (TLs) and the related dynamics have a great influence on content propagation. However, the viral marketing literature pays no attention to the TL structure of the posts/content appearing on a user's page. We study the content propagation phenomenon over OSNs considering the inherent TL structure.

A typical example of the TL structure (for three users) is shown in Figure <ref>. Referring to Figure <ref>, the natural question to ask is: how many users have a particular post of interest? The next immediate one is: at what level does that post reside (i.e., its position)? For instance, all three users have the post `A[5]' on their TLs, but at different levels. It is clear that posts positioned at the top of the TLs receive more attention/visibility than the ones at lower levels. Further, the arrival of new content keeps shifting/pushing down the existing contents of a TL. Consequently, a particular content of interest may reach lower levels before the user visits its TL, and may miss the user's attention. Thus, the content of interest can potentially be missed because of such shift transitions. Technically, a user can scroll through an indefinite number of posts at various levels. However, it is known that users' attention is limited to the first few levels <cit.>. We consider this aspect while analysing content propagation in the viral marketing scenario. We observe (theoretically as well as numerically) that this aspect makes a huge difference in the conclusions drawn from such a study. In addition, the TL structure also influences content propagation because of other aspects: a) multiple posts with the same user (the attention gets divided); and b) decreasing interest towards reading content at lower levels of the TLs, etc. We find that these key aspects must be incorporated into the model to study content propagation accurately. We discover that, without taking these key elements into consideration, one is led to draw erroneous conclusions.
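To make the timeline mechanics concrete, the following small Python sketch mimics a TL as an inverse stack: every new share enters at the top, the existing posts shift one level down, and a post pushed below level N is lost. The sketch is only an illustration of the mechanism described above; the class name, the cap N and the post labels are our own choices and not part of the model's notation.

class Timeline:
    """A user's timeline: an inverse stack that keeps at most N posts."""
    def __init__(self, N=5):
        self.N = N            # number of levels the user may scroll through
        self.levels = []      # levels[0] is the top (newest) post

    def push(self, post):
        """A friend shares `post`: it enters at level 1, the rest shift down."""
        self.levels.insert(0, post)
        lost = self.levels[self.N:]          # posts pushed below level N disappear
        self.levels = self.levels[:self.N]
        return lost

    def level_of(self, post):
        """Level (1-indexed) of `post`, or None if it has already been lost."""
        return self.levels.index(post) + 1 if post in self.levels else None

tl = Timeline(N=5)
tl.push("CP-post")                 # the tracked post enters at level 1
for other in ["A[1]", "A[2]"]:
    tl.push(other)                 # unrelated shares push the CP-post down
print(tl.level_of("CP-post"))      # -> 3: two shift transitions moved it to level 3

Pushing five or more unrelated posts after the CP-post makes level_of return None, which is exactly the loss-by-shifting effect discussed above.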
Further when content of competing content providers circulates through the same social network, at the same point of time, there would be lot more influences on the propagation of thecontent. These influences are more complicatedwithtimeline structure; for example, user might neglect the content of a low influential content provider when it has(simultaneously) competing content.Or alternatively one might be interested in forwarding all, or the user might be interested only in forwarding the post that appears first.It might be possible that content of a content provider gets viral, but not that of the others etc.These aspects require study of decomposable branching processesand the same along with the propagation of the competing content is considered in Part-II <cit.> of this work.§.§.§ Our approach and contribution: Branching processes and viral marketingContent propagation over OSNs follows a number of models based on factors such as empirical evidences, the structure of an OSN, etc. There has been an extensive literature on the content propagation over the OSNs, and an important approach for modeling the dynamics of content propagation has been the branching processes (see <cit.>, etc). Authors in <cit.> studied information diffusion in the real viral marketing campaigns (involving 31000 individuals) and showed that the branching processes explain the dynamics of information diffusion.The branching processes (BPs) are adequate to incorporate the characteristics of content diffusion (e.g., phase transition-epidemic threshold) and provide explicit expressions for many important performance measures. As an example, authors in <cit.> provided a discrete time branching model to predict the spread of a campaign. They estimated, using the theory of branching processes, the campaign's performance via various measures such as the number of forwarded e-mails, the number of viral e-mails, etc. as a function of system parameters.Other studies (e.g., <cit.>),reinforce that the branching process can well fit the content propagation trajectories collected from real data.In a branching process, a parent produces identically and independently distributed (IID) offsprings. When one models content propagation over an OSN as a branching process, any parent should produce identical number of offspring(identical to the other parents) and independent of the offspring produced previously. Moreover, the parents keep producing offspring even when the population explodes. This is possible only when the OSN has infinite population.One can assume that the OSNs with huge user base have infinite population (unbounded number of users). Further, when users have identically distributed number of friends, then the BPs can model the content propagation over OSNs. This simplifies modeling and analysis. We use Multi-Type Branching Processes (MTBPs) (e.g., <cit.>) tomodelthe influence of TL structure oncontent propagation in OSNs.We further extract the realistic features of content sharing in a typical OSN and incorporate them appropriately in our model. The branching processes can mimic most of the phenomenon that influences the content propagation. For example, one can model the effects of multiple posts being forwarded to the same friend,and multiple forwards of the same post, etc.A post on a higher level in TL has better chances of being read by the user. Posts of appealing nature, e.g., containing irresistible offers, have a great chance of being in circulation, and we call it the post quality factor. 
Posts of similar nature appearing at lower levels on the TL have smaller chances of appreciation, etc.To study all these factors, one needs to differentiate the TLs that have the `post'at different levels, and this is possible only through multitype BPs. The following are some elements of our approach: * Using the well-known results of multi-type branching processes (MTBPs), we obtain closed-form expressions for some performance measures which provide insights for the performance of the campaign, e.g., visibility of the content, virality[A post is said to be viral if a large number of posts are already shared and if the number of users with posts is exploding with time.], etc.For the other measures, we either have approximate (time asymptotic) closed-form expressions or simple fixed point equations whose solution provides the required measures. * We study the influence of various network (structural) parameters on the content propagation. We also study the effects of posts sliding down theTLs due to network activity. As themean number of friends (network activity) increases, one can expect contents to spread more rapidly (monotonous behavior). Contrary to that, we discover non-monotonous behavior in the virality of a content. This phenomenon is fundamentally due to TL structure. In the last part, we integrate online auctions into our viral marketing model. We study an optimization problem considering real-time bidding.We compare the study considering theTL structure to that without considering the TL structure for varying activity levels of the network. Our observations are similar: drastically differentoptimizers and the analysis without TL structure fails to capture the relevant phase transitions.§.§ Related workThe huge growth in the activity on the Internet has generated wide interest in understanding content propagation on the Internet. Previously, peer to peer (P2P) networks (e.g., <cit.>)have played an important role. While of late there has been a lot of interest in content propagation over OSNs (e.g., <cit.>, etc). The P2P networks pull the required information from their peers, while in viral marketing the information is pushed for marketing purposes. In viral marketing, one needs to keep pushing information by passing it on to seed nodes to keep the flow going on.We consider content propagation over OSNs like Facebook, Twitter, etc where the information (called post/message) is again pushed, but involuntarily.Here the post/content is forwarded to few initial seeds, and the post gets viralbased on the interest generated among the users and the extensive sharing. There is a vast literature that studies the propagation of contentover OSNs.Many models discretize the time and study content propagation across the discrete time slots(e.g., <cit.>). As argued in <cit.> and references therein, a continuous time version (events occurs at continuously distributed random time instances)is abetter model(<cit.> ) and we consider the same. In majority of the works which primarily use graph-theoretic models, the information is spread at maximum to one user at any message forward event (e.g., <cit.> etc.).However, when a user visits a OSN (e.g., Facebook, Twitter, etc.),it typically forwards multiple posts and typically (each post) to multiple friends. Authors in (<cit.>, etc) study viral marketing problem,where the marketing message is pushedcontinuously via emails,banner advertisements,or search engines, etc. 
This scenario allows multiple forwards of the same post, and is analysed using BPs.However, they do not consider the influence of other posts using the same medium,and the other effects of TLs.As already mentioned, these aspects majorly influence the analysis. Branching processes have been used in analysing various types of networks, such as,polling systems (<cit.>) which have been used to model local area networks andP2P networks (<cit.>), etc. We use branching processes not only to study the time evolution of the contents of interest (extinction and viral growth) but also to provide a spatio-temporal description of the process. We model the evolution of the number of timelines that have a givencontent at a given level of the timeline (e.g., top of the timeline). § SYSTEM DESCRIPTION We consider a giant OSN, e.g., Facebook, Twitter, VKontakte, etc. In this paper, we track a content of interest corresponding to one specific content provider. In the second part of this work, we consider the contents ofmultiple (competing) content providers[The extension of this work is submitted separately for details see Part-II <cit.> ]. Users use these networks to connect to other users to share photos, news, events/activities taking place around them, commercial contents, etc. We briefly refer to these pieces of information as a post. Recall that these posts appear at different levels (on the screen) based on their newness, for instance, News Feed on Facebook. When a user visits[The users 'visit' OSNs at random intervals of time and in each `visit' it browses some/all new posts.] the OSN, it reads the posts on its timeline and shares a post, upon finding it appealing/useful, with some of its friends (users connected to him). In this sharing process, the post appears on the top level of the timelines of those friends with whom the post was shared. This brings about a change in the appearance of contents on the timelinesof recipients of the post. Basically, the existing contents of these TLs shift one level down each. And a user can share as many posts as it wants. The number of shares of a particular post by a particular user depends upon: a) the distribution of its number of friends;and b) the extent to which the user liked the post.And extensive sharing of the post amongst the users potentially makes the post viral. It is evident that the sharing of a post depends on how engaging the content provider (CP) designs its post.There are some more aspects which influence the content propagation. For example, users may become reluctant to read/share the contents on the lower levels of their TLs. When they see multiple posts of similar nature, they may appreciate few posts while the remaining ones receive reduced attention.We study all those aspects and the dynamics created by the actions (e.g., like, share, etc)of the users, whichhave a major impact on the propagation of the commercial content. §.§ Continuous time branching processesThe continuous time branching processes (CTBPs)are often good candidates for modeling viral marketing models. We describe these processes briefly as follows.LetX(0) be the number of initial particles in a CTBP. Each of these particles stays alive for an exponentially distributed time with parameter say λ and then dies.The `death' times of the particles are independent of the others. Hence, the first death occurs afterexponentially distributed time with parameter X(0)λ. Upon its death, it produces a random number (say ζ) of offspring, which join the existing population. 
The number of particles immediately after the death of the first particle changes to X(0) - 1 + ζ. The `death' times are again exponentially distributed(by the memoryless property), and the process continues. It is well-known that (under certain assumptions) the BPs have certain dichotomy:a) either the population gets extinct(death of last individual TL), or b) the population grows exponentially fast with time;and c) there is no third way. §.§ Dynamics ofcontent propagation and branching process The content propagation in a typical OSN is as follows. Let us say we are interested in the propagation of post-P when the process starts with X(0) number of seed TLs. We track the post-P till first N levels of TLs. It is important to note that X(0) remain unread before their respective users become aware of the contents on the TLs, i.e., before they visit their TLs. We call this TLs asnumber of unread TLs (NU-TLs). If a user, amongX(0), visiting its TL findspost-P attractive, it reads the post and may share the same with a random number of its friends.And post-P would be placed on the top level of the recipient TLs. As shown inFigure <ref>, the recipient TL haspost-P on the top, and remaining posts shift down one level each.If some more posts are shared again with some of these recipient TLs, the contents further shift down of the corresponding TLs.For instance, when one more post is shared after the post-P with the same shared user, the post-P resides on the second level of the corresponding TL.We first argue that the continuous time version of the branching process fits the content propagation better than the discrete counterpart. In a CTBP,any one of the existing particles `dies'after exponentially distributed time while in a discrete time version all the particles of a generation `die' together. When the number of copies of CP-post grows fast (i.e., when the post is viral), the time period between two subsequent changes decreases rapidly as time progresses. This is also well captured by CTBP, which mimics the content dynamics better.As the underlying OSN is huge, one can say that the visit times of users are virtually independent of the each other. We assume memory-less visit times, i.e., the users visit their TLs at intervals that are exponentially distributed as in a CTBP. The sharing process generates a random number, say ζ, of new TLs holding post-P. If the user does not read or share the post after visiting its TL, then ζ = 0. If sharing process is independent and identical across all the users, the new TLs ζ so generated resemble IIDoffspring in a CTBP and the effective NU-TLs with post-P may appear like the particles of a CTBP.When one of the users of these NU-TLs (including the new ones) visits its TLand starts sharing thepost-P (as before), then the content propagation dynamics again resemble a CTBP. However, the CTBPdescribed above does not capture some aspects related to post-propagation process. Post-P can disappear from some of the TLs, before the corresponding user's visits. To be precise, the post-P would disappear from a TL with (N-l+1) or more shares, if initially post-P were at level l. For example, the post 'A[5]' is lost by the arrival of post-Pin Figure <ref>.In all, the propagation of content in an OSNis influenced by two factors:a) the evolution ofTLs with post–P when some other postsare shared with them(contents on the TL shift down); and b) the sharing dynamics of post–P between different TLs. 
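To illustrate the single-type CTBP described in the previous subsection and its extinction/explosion dichotomy, the following rough Monte Carlo sketch can be used: every particle lives for an exp(λ) time, so the first death in a population of size x occurs after an exp(xλ) time, and the dying particle is replaced by a Poisson number of offspring. The Poisson choice for ζ and all parameter values are our own assumptions for illustration; a population that hits the cap is counted as exploding.

import random, math

def poisson(mu):
    """Knuth's sampler; adequate for small means."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_ctbp(lam=1.0, mean_offspring=1.5, x0=1, t_max=30.0, cap=1000):
    """Single-type CTBP; returns the population size when it dies out,
    explodes past `cap`, or reaches time `t_max` (0 signals extinction)."""
    t, x = 0.0, x0
    while 0 < x < cap and t < t_max:
        t += random.expovariate(x * lam)   # first of x independent exp(lam) clocks
        x += poisson(mean_offspring) - 1   # dying particle replaced by its offspring
    return x

runs = [simulate_ctbp() for _ in range(1000)]
print("fraction extinct:", sum(x == 0 for x in runs) / len(runs))

With mean offspring 1.5, the empirical extinction fraction comes out close to the root of q = e^{1.5(q-1)} (about 0.42), which previews the fixed-point characterization used later for the timeline process.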
If we consider a CTBPwith a single type of population,all the particles will have the same death rate and offspring distribution (e.g., <cit.>).However, the disappearance of post-P from a TL depends upon the level at which the post is available.Further, we will see that many more aspects of the dynamics depend upon the level at which the post-P resides.Thus clearly, the single type CTBP is not sufficient, and we require amulti-typecontinuous time branching process (MTBP). An MTBP describes the population dynamics in the scenarios with a finite number of population-types. All the particles belonging to one type have the same death rate and offspring distribution; however, these parameters could be different across different types. To model the rich behaviour of the propagation dynamics,we will require (details in later sections) a particle of a certain type to produce offspring of other types. This modeling feature is readily available with MTBPs. We will show that the propagation dynamics can be well modeled by an appropriate MTBP, where for any l ≤ N, all the TLs with post-P in level l form one type of population. We use the following feature of the branching processes: it suffices to study the evolution of the population with one initial/seed particle. To be more specific, the analysis starting with multiple seeds can be derived using the analysis with one seed particle (details are in later sections). Assumptions: We track the post of the CP and study the time evolution of the post overTLs till first N levels. We assume a TL with posts of the CPs is not written[We say a TL is written when a friend of it shares a post which changes its content.] with the post of the same CP again. In a huge social network, it reasonable to assume that the probability of the same post being shared again with any user is very small. Also, one can find applications that satisfy such assumptions.As an example, consider few organizations that plan to advertise their products using a coupon system. Also, consider that these coupons can be shared with friends.But a user with one or two such coupons can not be shared with another coupon at a later point of time.To avoid multiple shares to the same user, there is a control mechanism. Any user sharing the coupons with its friends, needs to declare the recipients in a list which disables the share of coupons to the same recipients.§ SINGLE CONTENT PROVIDER MODELWe consider a single CP and refer to its post as the CP-post. The TLs containing CP-post may have itat any level from one to N. These TLs also contain the other posts, and the movement of these posts can also affect the propagation of the CP-post. And our focus would be on CP-post. We say a user is of type l, if its TL contains the CP-post on level land the top l-1 levels do not contain the CP-post.Let X_l(t) represent the number of unread TLs (NU-TLs) oftype lat time t. We study the time evolution of { X_l (t) }_l. We will show below that the N-valued vector process X(t) := {X_1(t),X_2(t),⋯,X_N(t)} is an MTBPunder suitable conditions.§.§ Modeling detailsBirth-death process via shift and share transitions:To model content propagation process by an appropriatebranching process, one needs to specify the `death' of an existing parent(a TL with `unread' CP-post in our case) and the distribution of its offspring. A user of type l is said to `die' either when its TL is written by another user or when the user itself wakes up (visits its TL) and shares the post with some of its friends. 
In the former event, exactly one user of type (l+1) (if l < N) is `born' while the latter event gives birth to a random number of offspring of types 1 or 2 or ⋯ N.If i-1 (with i ≤ N) posts are shared with the same user after the CP-post, then the CP-post is available on the i-th level and we will have a type i offspring. Assumethat a user produces offspring of type i with probability ρ_i and that ρ_1 > 0.Note that ∑_i ρ_i=1. In general, users have lethargy to view/read all the posts. We represent this via a level based reading probability, r_l, which represents the probability that a typical user reads the post on the level l. It is reasonable to assume r_1 ≥ r_2 ⋯≥ r_N. We have two types of transitions that modify the MTBP, which we call shift and share transitions. In the share transition, a user first reads the CP-post and based on the interest generated, it shares CP-post with a random number of friends. The Figure <ref> below describes the share transition.In the shift transition, user with CP-post is written by other users, and the position of CP-post shifts down. CP-post propagation dynamics:Let𝒢_1 represent the subset of users with CP-post at some level, while𝒢_2 contains the other users.We assume the OSN(and hence 𝒢_2) has infinitely many users and note𝒢_1 at time t has,X(t):= ∑_l≤ N X_l (t), number of users.Group G_2 has an infinite number of users/agents, and this remains the same irrespective of the size of G_1, which is finite at any finite time.Thus, the transitions between G_2 andG_1 are more significant, and one can neglect the transitions withinG_1.It is obvious that we are not interested in transitions withinG_2 (users without CP-post). We thus model the action of these groups inthe following consolidated manner:*Share transition: Any user from 𝒢_1 wakes up after exp(ν) time (exponentially distributed with parameter ν)to visit its TL and writes to a random (IID) number of users of 𝒢_2 (refer to Figure <ref>).*Shift transition:The TL ofany user ofG_1 is written byone of the users of G_2, and the time intervals between two successive writes are exponentially distributed with parameter λ (refer to Figure <ref>).The state of the network, X (t), changes when the first of the above-mentioned events occurs. At time t, we have X(t) (see equation (<ref>)) number of users in 𝒢_1 and thus (first) one of them wakes up according to exponential distribution with parameter X(t)ν.Similarly, the first TL/user of the group G_1 is written with a post after exponential time with parameter X(t)λ.Thus, the state X (t), changesafter exponential time with parameter X(t)λ + X(t)ν. Thus, the rate of transitions at any time is proportional to X(t), the number of NU-TLs at that time, and hence, the rate of transitions increase sharply as time progresses, when the post gets viral. Considering all the modeling aspects,the IID offspring generated by one l-type user are summarized as below (w.p. means with probability):ξ_l ={[ e_l+11_l < Nw.p.θ:= λ/λ + ν; ζe_i w.p. (1-θ ) r_lρ_i ∀ i ≤ N;0 w.p. (1-θ)(1-r_l). ].where e_l represents standard unit vector of size N with one in the l-th position, 1_A represents the indicator, ζ is the random number of friends to whom the post is shared and r_l is the probability the user reads/views a post on level l. Figure <ref> demonstrates the transitions. Recall that users (offspring) of type i are produced with probability ρ_i during the share transitions. 
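A one-draw sampler for the offspring rule ξ_l above can be sketched as follows; it is an illustration only, and the Poisson choice for the number of shares ζ (with mean mη) and the example values of θ, r and ρ are our assumptions rather than quantities fixed by the model.

import random, math

def poisson(mu):
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample_offspring(l, N, theta, r, rho, m, eta):
    """One draw of xi_l: a length-N vector of offspring counts by type."""
    child = [0] * N
    if random.random() < theta:              # shift transition, w.p. theta
        if l < N:
            child[l] = 1                     # exactly one type-(l+1) TL is born
        return child                         # for l = N the post simply drops off
    if random.random() >= r[l - 1]:          # user wakes up but does not read level l
        return child
    i = random.choices(range(1, N + 1), weights=rho)[0]   # offspring type, drawn from rho
    child[i - 1] = poisson(m * eta)          # zeta recipients, all counted as type i
    return child

N, theta = 5, 0.3
r = [0.9 * 0.8 ** j for j in range(N)]       # non-increasing reading probabilities r_1..r_N
rho = [0.5, 0.25, 0.15, 0.07, 0.03]          # sums to one
print(sample_offspring(1, N, theta, r, rho, m=4.0, eta=0.8))

Averaging many such draws, type by type, reproduces the rows of the mean offspring matrix that underlies the generator matrix of the next subsection.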
From equation (<ref>) the offspring distribution is identical at all time instances t, ζ can be assumed independent across users, and hence the ξ_l are IID offspring from any type-l user. Further, all the transitions occur after memoryless exponential times, and hence X(t) is an MTBP with N types (e.g. <cit.>).

PGFs and post quality factor: Let f_F(s, β) be the probability generating function (PGF) of the number of friends, 𝔽, of a typical user, parametrized by β. For example, f_F(s, β) = exp(β(s-1)) stands for Poisson distributed 𝔽, while f_F(s, β) = (1-β)/(1-β s) stands for geometric 𝔽. Let m = f'_F(1, β) represent the corresponding mean. A user shares the post with some/all of its friends (ζ of equation (<ref>)) based on how engaging the post is. Let the post quality factor η quantify the extent of the CP-post engagement on a (continuous) scale of 0 to 1, where η = 0 means the worst and η = 1 the best quality. We assume that the mean of the number of shares is proportional to this quality factor. In other words, m(η) = mη represents the post quality dependent mean of the random shares. Let f(s, η, β) represent the PGF of ζ. For example, for Poisson 𝔽, the PGF and the expected value of ζ are given respectively by
f(s, η, β) = f_F(s, ηβ) = exp(βη(s-1)),   m(η) = ηβ.
For geometric 𝔽, one may assume the post quality dependent parameter β_η = (1-β)/(1-β+βη), m(η) = ηβ, and then the PGF of ζ is given by f(s, η, β) = f_F(s, β_η) = (1-β_η)/(1-β_η s). One can derive such PGFs for other distributions of 𝔽. Interestingly enough, we find that most of the analysis does not depend upon the distribution of 𝔽 but only on its expected value. Let s := (s_1, ⋯, s_N) and let
f̄(s, η) := ∑_{i=1}^N f(s_i, η, β) ρ_i.
The post quality factor dependent PGF of the offspring distribution of the overall branching process is then given by (see equation (<ref>)):
h_l(s) = θ( s_{l+1} 1_{l<N} + 1_{l=N} ) + (1-θ) r_l f̄(s, η) + (1-θ)(1-r_l).

§.§ Generator matrix

The key ingredient required for the analysis of any MTBP is its generator matrix. We begin with the generator for the MTBP that represents the evolution of unread TLs with the CP-post. We refer to this process briefly as TL-CTBP, the timeline continuous time branching process. The generator matrix A is given by A = (a_lk)_{N×N}, where
a_lk = a_l ( ∂h_l(s)/∂s_k |_{s=1} - 1_{l=k} )
and a_l represents the transition rate of a type-l particle (see <cit.> for details). For our case, from the previous discussions, a_l = λ + ν for all l. Further, using equation (<ref>), the matrix A for our single CP case is given by (with c := (1-θ)mη, c_l = cρ_l)
A = (λ + ν) [  c_1 r_1 - 1,    c_2 r_1 + θ,    ⋯,    c_{N-1} r_1,            c_N r_1 ;
               c_1 r_2,        c_2 r_2 - 1,    ⋯,    c_{N-1} r_2,            c_N r_2 ;
               ⋮ ;
               c_1 r_{N-1},    c_2 r_{N-1},    ⋯,    c_{N-1} r_{N-1} - 1,    c_N r_{N-1} + θ ;
               c_1 r_N,        c_2 r_N,        ⋯,    c_{N-1} r_N,            c_N r_N - 1  ],
that is, a_lk = (λ+ν)( c_k r_l - 1_{l=k} + θ 1_{k=l+1, l<N} ).
The largest eigenvalue and the corresponding eigenvectors of the above generator matrix are instrumental in obtaining the analysis of the TL-CTBP (<cit.>), and the following lemma establishes important properties about the same. We also prove that the resulting TL-CTBP is positive regular[A matrix B is called positive regular (irreducible) if there exists an n such that the matrix B^n has all strictly positive entries. A BP is positive regular when its mean matrix is positive regular. With A as generator, the positive regularity is guaranteed if e^A is positive regular (e.g. <cit.>).], which is an important property that establishes the simultaneous survival/extinction of all the types of TLs.
i) When 0 < θ < 1, the matrix e^{At} for any t > 0 is positive regular.
ii) Let α be the maximal real eigenvalue of the generator matrix A.
This eigen value lies in the real interval, i.e.,α∈(r.c-1,r.c -1+θ) (λ+ν), whereinner product r.c := ∑_i=1^Nr_ic_i. When the reading probabilities have special form r_l = d_1 d_2^l (for some0 ≤ d_1, d_2 ≤ 1), then α→(r.c -1+θ d_2)(λ + ν)asN →∞.iii)The left and right eigenvectors u, v corresponding to α satisfy the following equationsc_1r.u= σ u_1 and c_1r.v=σ v_N where σ := α/( λ+ ν) + 1. We haveu_l= ∑_i = 0^l-1ρ_l-i/ρ_1(θ/σ)^i u_1,2 ≤ l ≤ N v_l = ∑_i = 0^ N-l(θ/σ)^i r_l+i/r_N v_N 1 ≤ l ≤ N-1. Proof: The proof is given in Appendix. At the finest details, we now developed a full-fledged MTBP that models the content propagation.The multitype continuous time branching processes (MTBPs) are well studied in the literature (e.g., <cit.>).The analysis of MTBP largely depends upon its generator matrix.Lemma <ref> describes the characteristics of the generator matrix specific to our model.It yields in positive regularity of TL-CTBP, i.e., the generator matrix A (<ref>) is positive regular. The largest eigenvalue α of A characterizes the growth rate of NU-TLs. We later see that left and right eigenvectors, u andv (corresponding to α) characterize the visibility of the CP-post.Using the characterizations of Lemma <ref> and the rich theory of MTBPs,we derive various performance measures specific to this content propagation. The CP would be interested in many related performance measures as a function of post quality factor and we consider the same in the next section. §.§ Performance analysis If the CP invests sufficiently in preparing the content/post and ensures a good quality, the post can get viral.It is important to note that the overall evolution of post depends on the number of seed TLs with CP-post. It is sufficient to consider one seed TL to derive various performance measures corresponding. This is because of properties of the branching processes: the analysis of process is quite similar when started with multiple seed TLs (i.e., growth rate is same).The central questions in a branching process which are also relevant to our content propagation process include:a) What is the extinction probability, i.e., the probability with which the entire population gets extinct?;b) What is the rate at which the population grows?;and c) What is the total progeny? etc.We apply the well-known results addressing the above questions to our context and derive some performance measures.We employ fixed point techniques to obtain the other performance measures. We begin with the probability of extinction. §.§.§ Extinction probabilities Depending upon the context of the problem, for instance, an awareness campaign, the CP may be interested in knowing the chances of dissemination of its information to a large population, i.e., the chance of virality of its post. This probability can be obtained directly using the extinction probability of the corresponding MTBP, as explained below.The CP-post is said to be extinct when it disappears completely off the OSN, i.e., none of the N-length TLs contain the CP-post eventually (as time progresses). Let q_l be theprobability withwhich the process gets extinct when TL-CTBP starts with one TL of type l,q_l :=P(X(t) = 0 for some t>0|X(0)=e_l).Let q := {q_1, q_2, ⋯, q_N} represent the vector of extinction probabilities. Underpositive regularity conditions of Lemma <ref>.(i) when a BPis not extinct, the populationgrows exponentially fast to infinity(see <cit.> , etc).This fact is established for our TL-CTBP in Theorem <ref>, provided in the later subsections. 
Thus, we have a dichotomy: the post gets viral with the exponential rate when it is not extinct and dies off completely otherwise.And hence the extinction probability equals one minus the probability of virality.Assume0 < θ < 1 andE[log] < ∞with log() := 0 when = 0. Then clearly E[ ζ log ζ ] < ∞ for any post quality factor η.Hence we have the following: (i)If α≤ 0,extinction occurs w.p.1, i.e., q =1 = (1, ⋯, 1);(ii) If α > 0, then[Vector q <s ifq_i < s_i for all components i.] q < 1, i.e., the post gets viral with positive probability irrespective of the type of the seed TL. In this case the extinction probability vector q is the unique solution ofthe equation,h(s) = s,and liesin the interior of [0,1]^N.Proof It follows from <cit.> and by Lemma <ref>. ▪ It is easy to verify that the hypotheses of this lemma are easily satisfied by many distributions. For example, Poisson, Geometric etc. satisfyE[log] < ∞.By Lemma <ref>.(ii) the extinction probabilities are obtained by solvingh( s) =s. The extinction probability can be obtained by conditioning on events and is given asbelow, when the process starts a type-l TL:q_l = θ (q_l+11_{l < N}+1_{l = N}) + (1-θ)r_l( q,η) + (1-θ)(1-r_l). The above simplifies to:q_N-l = (q_N -1) ∑_i = 0^l θ^l-ir_N-i/r_N+ 11 ≤ l < N, and the solution of the above provides the extinction probabilities.Virality Threshold:By Lemma <ref>.(ii)the CP-post gets viral, i.e., theTL-CTBP survives and explodes with non-zero probability, when α> 0. When N is sufficiently large, by Lemma <ref>.(ii) and Lemma <ref>.(ii),α≈(m η (1-θ) ρ.r - 1+ θ d_2 ) (λ + ν) = (m ηρ.r- 1)ν- (1- d_2) λ.It is well-known that the BPs survive with positive probability if the largest eigenvalue of the generator matrix, A, is positive (supercritical process). We have an (almost) equivalent of the same, i.e.; the TL-CTBP can survive when mη(1-θ)ρ.r > 1 -θ d_2 (see (<ref>)), for a BP pitted against the shifting process. The virality threshold, denoted by η̅, is definedin terms of network parameters and is given by η̅>1 -θ d_2/m (1-θ)ρ.r.Thus, the virality chances are influenced by post qualityη, shift factor (1-θ),by the types of posts produced as given by ρ, the mean number of friends m andthe reading probabilities r. In effect, the virality chances are influenced by factor, (1-θ )ηρ.r. No-TL Case: What if all the effects of the TLs were neglected?Majority of the works (e.g., <cit.>) considers study of content propagation without considering TL structure, and as mentioned before, this is an incomplete study. We would like to compare our conclusions with the case when the effects of TLs are neglected.When users do not posses TL structure, there will be:* No notion of post residing at various levels, i.e., all posts reside at one level only, and so N = 1; consequently its reading probability is one (r_1 = 1), andfurther,ρ_1 =1, ρ_i = 0∀ i >1.* No notion of shifting effect, consequently θ = 0 which is equivalent to saying λ = 0.The remaining modeling details of the content propagation are the same as before. In the view ofthis, it is evident that the content propagates according toa single type continuous time Markov branching process.Thus, the analysis of this case boils down to a special case of the TL model (with λ = 0, N = 1, ρ_1 = 1, r_l = 1 ∀ l ).For this special case, fromequation(<ref>),the rate of growth say α_No-TL is given by: α_No-TL≈(m η- 1)ν. Observe that the postgets viral when m η > 1, as is well understood in branching and viralmarketing literature (<cit.>). 
However, as mentioned before this neglects the key aspects of content propagation—effects of TLs. It isaccompanied by an erroneous conclusionthat the virality chances areinfluencedby m and η only. While in reality there is additional influence, whichis summarized by factor (1-θ )ηρ.r.InNo-TL case, the extinction probability is obtained is given by solvingq = f(q,η) (substituting the parameter values in equation (<ref>)). It again becomes evident that the effect of post residing at various levels is disappeared. The extinction probability is the same, whether it is started with one CP-post on level 1 or level 9.This is again a wrong interpretation and the solutions of the equation(<ref>)/(<ref>)provide the correct extinction probabilities which considers the influence of TLs. Influence of the Network Connectivity on Extinction Probability: We call an OSN sparsely connected when a sizable portion of the users have less number of friends (random), i.e., when they have a smaller mean number of friends, m=E[𝔽]. Whereas in a densely/highly connected OSN, a sizable number of users have a large number of friends and hence m is large.We study the impact of network connectivity on the extinction probability from sparsely connected OSN to densely connected OSN. When the mean mincreases, the network becomes more active as the sharing of different posts becomes more pronounced. The TLs are flooded with different posts rapidly, so do the TLs containing post-P,and one might anticipate an increase in its virality chances.However, these TLs also receive the other posts rapidly, resulting in rapid shifts to their contents.Thus, with an increase in m,the λ increases, and so does θ. We observean interesting phenomenon in Figure <ref>,with respect to the virality chances1-q_ρ := ∑_l q_l ρ_l,when λ is set proportional to mean m. To begin with, the virality chances 1-q_ρ improve (q_ρ decreases) with mean m, as anticipated.However, if one increases m further,we notice an increase in q_ρ.Basically, increased m implies more shares ofpost-P to new users but it also implies post-P is missed more often.This phenomenon is mainly observed because of timeline structure: when TL structure is neglected, any user will view all the posts with equal interest irrespective of their levels. And consequently, one would not have noticed the effect of mon extinction probability (as in No-TL case). There seems to be an optimal number of mean friends, which is best suited for post propagation. §.§.§ Time evolution of the NU-TLsThe number of unread timelines (NU-TLs), at various time instances, may serve as an indicator of the reach of the CP-post. The reach of CP-post is another yardstick of the campaign effectiveness. In this section, we obtain the time evolution of NU-TLs.We have the following theorem which is instrumental in obtaining the expected number of NU-TLs in viral scenario: Let (Ω, ℱ, ℙ) be an appropriate probability space andlet {ℱ_t } be the natural filtration forTL-CTBP X(.), i.e.,for each t, ℱ_t is the σ-algebra generated by {X(t');t' ≤ t}. 
Theprocess {v.X(t)e^-α t;t≥ 0}, with v, α as in Lemma <ref>,is a non negative martingale(with natural filtration)lim_t→∞X(t, ω)e^-α t = W(ω)u for almost all ω, where W is a non negative random variable that satisfies[ We use E_l and P_l to represent the conditional expectation andprobability respectively when TL-CTBPstarts with one l-type TL.]: P_l(W=0)=q_l, E_l[W] = v_l for each l, withu.v=1.Proof: Under the assumptions of Lemma <ref>, the TL-CTBP satisfies the hypotheses ofTheorem 1 of <cit.>. ▪The CP-post gets extinct on the sample paths with W =0 in equation (<ref>) (see <cit.> for details, also observe that P_l(W=0)=q_lfrom the above theorem). Itgets viralin the complementary paths, i.e., whenW > 0,as is also evident from the limit[Note that for large t, the NU-TLsX(t, ω) ≈ W(ω)ue^α t, which grows exponentially fast when W(ω) >0.] given by equation (<ref>). On the viral paths, we have two important measures: 1) the growth rate α, and 2) the visibility of the post. The growth rate characterizes the rate at which the post spreads through the OSN.From (<ref>),the TLs grow exponentially fast with time at the rate α (given by equation (<ref>)), i.e., as according to e^α t. And the other measure,the visibility of the postcan be determined by the number of potential users that can read the post and thereby get influenced to buy the product/service. Recall that users attention is limited to the first few number of levels. Clearly,the visibility of the post depends on the level at which it resides on the NU-TLs. The more the number of TLs having post on higher levels, the more the visibility. The number of potential users viewing the post on the level l is approximately r_l u_l e^α t (large t) where u_i is the i-th component of vector u. We define the visibility of the post at level sayl as thefraction of NU-TLs holding the post at level l after a long time t which is given as u_l/∑_i u_i.We also obtain the time evolution of the expected value of NU-TLs. This result is obtained as a corollary of Theorem <ref>. Whenα > 0 and starting with onetype-iseed TL, ∑_l = 1^N E_i[ X_l(t)] = e^α tv_i ∑_l u_l. Further, when r_i = d_1d_2^i,ρ_i = ρ̃ρ^i (with ∑_i ρ_i=1 and 0 < ρ≤ 1) for all i, we have∑_l = 1^N E[ X_l(t)] =ϱ e^α t d_2^i-1ϱ:=(1-d_2 ρ) (1/1-ρ - θ/ρ1/σ -θ) (σ - θ d_2)(σρ -θ)/(σ -θ) (ρ -θ). Proof: Using the fact that v.X(t)e^-α t is a martingale and u.v = 1, one can write the followingE[v.X(t)e^-α t]=E[v.X(0)e^-α× 0] = v_i; u. E[v.X(t)e^-α t] = uv_i(X_i(0) = 1 ) u. E[v.X(t)e^-α t] =u.vE[X(t)e^-α t]= E[X(t)e^-α t] = uv_i ∵u.v = 1.Thus E[X(t)] = uv_i e^α t. Further, by taking the sum of individual component of the expected value of the random vector ∑_l = 1^N E[ X_l(t)] =E[∑_l = 1^N X_l(t)] = e^α tv_i ∑_l u_l ∵ NSubstituting the value of v_i ∑_l u_l from equation (<ref>) in Appendix,we get the desired result∑_l = 1^N E[ X_l(t)] =e^α t d_2^i-1 (1-d_2 ρ) (1/1-ρ - θ/ρ1/σ -θ) (σ - θ d_2)(σρ -θ)/(σ -θ) (ρ -θ) = ϱ e^α t d_2^i-1 .▪ §.§.§ Time evolution of the number of sharesWe derive another important performance measure, the expected number of shares of the post,before a given timet. This measure gives the total spread of the post, i.e., the total number of shares a post gets in the given time-frame(e.g., the number of shares in Facebook). It is basically the total number of distinct TLs (i.e., users)that received a copy of the Post before time t. 
It is important to observe here that `number of shares' is different from thewell known `total progeny'[The total number of offspring produced so far, by the BP.] of the underlying BP.The `number of shares' is due to offspring generated by share transition only, while the `total progeny'is due to both `share' as well as `shift' transition offspring. We discuss the number of shares in viral (q < 1) and non viral (sure extinction) scenario.Viral scenarios:We employ probability generation based technique to obtain the time evolution of number of shares.Let Y(t)bethe accumulated number ofsharestill time t andlet Y =lim_t →∞ Y(t) (can also be infinity) be the eventual number of shares. The following Lemma captures the time evolution of number of shares.Let y (t) := [y_1 (t) ⋯, y_N(t)] with y_l (t) := E_l [Y(t)] = E[ Y(t) |X(0) =e_l], the expected number of shares till time t when started with one l-type TL for each l. If α > 0, we have y (t) = e^At ( 1 + (λ+ ν) A^-1 k ) - (λ+ ν)A^-1k k=[1-θ, 1-θ, ⋯, 1-θ,1 ]^T.Proof: The proof is given in Appendix.Thus,the expected number of shares grow exponentially fast,with time, for viral scenarios.Further, the growth rate α (see eqn. (<ref>)) is the same as that for the unread posts. From(<ref>),for large t, the expected shares when started with one type-l particle is:y_l (t) ≈e_l,0 e^α te_l,0 = v_l ∑_i=1^N u_i(1 + ν/α ) + v_l λ/α u_N. Non viral scenarios:When population gets extinct with probability one, the expectednumber oftotal shares is finite. One can directly obtain the expected number of shares by conditioning on the first transition event as follows:y_l := E_l[Y] =θ y_l+11_{l < N} + (1-θ) r_l (mη + m ηy.ρ); l≤ N.On recursively simplifying the above system of equations backward, we obtain the following for any l ≤ Ny_l =(1-θ)mη(1+y.ρ) ∑_i=0^N-lθ^N-l-i r_N-i. Summing the above over l after multiplying with ρ_l, we obtain:y.ρ = ∑_l=1^N ρ_l y_l=(1-θ)mη(1+y.ρ)∑_l ρ_l ∑_i=0^N-lθ^N-l-i r_N-i .Thus, the FP equation for y.ρ islinear and hence we have a unique FP solution for y.ρwhenever(1-θ)mη∑_l ρ_l ∑_i=0^N-lθ^N-l-i r_N-i <1.If (1-θ)mη r.ρ-1+θ=r.c-1+θ < 0, fromLemma <ref>.(ii) α < 0 and the process would be extinct w.p. one. In this scenario:(1-θ)mη∑_l ρ_l ∑_i=0^N-lθ^N-l-i r_N-i ≤(1-θ)mη∑_l ρ_lr_l ∑_i=0^N-lθ^N-l-i= (1-θ)mη∑_l ρ_lr_l 1- θ^N-l-1/1-θ= m η r.ρ < 1 because r_1 ≥ r_2 ⋯≥ r_N. We can similarly show using the limit of the eigenvalue α of Lemma <ref>, that when the process is extinct w.p. one, the above condition isalways satisfiedasymptotically.To be more precise the condition is satisfied for allN bigger than a threshold N̅, whenever the process is extinct w.p. one.We thus have the followingunique FP for y.ρ under the conditions discussed above:y.ρ = (1-θ)mη∑_l ρ_l ∑_i=0^N-lθ^N-l-i r_N-i/ 1-(1-θ)mη∑_l ρ_l ∑_i=0^N-lθ^N-l-i r_N-i.One can substitute the above in equation (<ref>) to obtain y_l for all l:y_l = (1-θ)mη∑_i=0^N-lθ^N-l-i r_N-i/ 1-(1-θ)mη∑_l ρ_l ∑_i=0^N-lθ^N-l-i r_N-i. Also, it is easy to verify that the FP is unique by uniqueness of the FP solutions for y.ρ.Note that in No-TL case, the number of shares is computed using y = E[Y] = mη(1+y)y = mη/1-mηwhich is again inaccurate. 
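For a quick numerical illustration of the closed form y(t) = e^{At}(1 + (λ+ν)A^{-1}k) - (λ+ν)A^{-1}k, the sketch below assembles the generator A described earlier (entries c_k r_l - 1_{l=k} plus θ on the first superdiagonal, scaled by λ+ν), reads off the growth rate α as its largest real eigenvalue, and evaluates the expected shares at a given time. All parameter values are assumptions chosen only so that α > 0 (the viral case in which the formula applies), and the sketch assumes A is non-singular.

import numpy as np
from scipy.linalg import expm

def generator(N, lam, nu, m, eta, r, rho):
    """Generator A of the TL-CTBP."""
    theta = lam / (lam + nu)
    c = (1 - theta) * m * eta * np.asarray(rho)     # c_k = (1 - theta) * m * eta * rho_k
    A = np.outer(np.asarray(r), c) - np.eye(N)      # entry (l, k): c_k r_l - 1_{l=k}
    A += theta * np.eye(N, k=1)                     # shift term: + theta at (l, l+1)
    return (lam + nu) * A

def expected_shares(A, lam, nu, t):
    """y(t) = e^{At}(1 + (lam+nu) A^{-1} k) - (lam+nu) A^{-1} k."""
    N = A.shape[0]
    k = np.full(N, 1 - lam / (lam + nu)); k[-1] = 1.0
    w = (lam + nu) * np.linalg.solve(A, k)
    return expm(A * t) @ (np.ones(N) + w) - w

N, lam, nu, m, eta = 5, 1.0, 1.0, 6.0, 0.7          # illustrative values only
r = [0.9 * 0.8 ** j for j in range(N)]
rho = [0.5, 0.25, 0.15, 0.07, 0.03]
A = generator(N, lam, nu, m, eta, r, rho)
alpha = float(max(np.linalg.eigvals(A).real))       # growth rate of the viral phase
print("alpha =", round(alpha, 3))
print("y(5)  =", np.round(expected_shares(A, lam, nu, 5.0), 1))

The same matrix A also yields the left and right eigenvectors u and v of Lemma <ref>, so the growth rate and the visibility profile can be obtained from a single eigen-decomposition.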
Special case: Sayr_i = d_1d_2^i,ρ_i = ρ̃ρ^i (with ∑_i ρ_i=1 and 0 < ρ≤ 1) for all i, one can easilysimplify the above.We have the following∑_l ρ_l ∑_i=0^N-lθ^N-l-i r_N-i =d_1 ρ̃∑_l ρ^l ∑_i=0^N-lθ^N-l-i d_2^N-i = d_1 ρ̃∑_l ρ^l d_2^l ∑_i=0^N-lθ^N-l-i d_2^N-l-i = d_1 ρ̃∑_l ρ^l d_2^l ∑_i=0^N-lθ^ i d_2^i = d_1 ρ̃∑_l ρ^l d_2^l (1- (θ d_2)^N-l+1 ) /1-θ d_2 =d_1 ρ̃/1-θ d_2 ( ρ d_2∑_l=0^N-1ρ^l d_2^l - d_2 (θ d_2)^Nρ∑_l=0^N-1ρ^lθ^-l)= d_1 ρ̃/1-θ d_2 ( ρ d_2 1- (ρ d_2)^N/1-ρ d_2 -(d_2)^N+1ρθ^N - ρ^N/θ(θ - ρ)). Substituting this in equation (<ref>) andunder the limit N →∞, we obtain the following compact expression (whereρ̃= (1-ρ)/ρnow in the limit):y.ρ≈O_mean/1- O_meanO_mean :=(1-θ)mη (1- ρ) d_1d_2 /(1-θ d_2)(1-ρ d_2). §.§ Validation of the number of shares We validate our theoretical expression for the expected number of shares by Monte Carlo simulation based on a real dataset; Stanford Large Network Dataset Collection (SNAP) dataset as provided in ego-Facebook, Social Networks section <cit.>. The dataset consists of friends' list of 4039 Facebook users and undirected connections among them. The sum of the number of friends of all these 4039 users (undirected connections) stands at 88234. To judiciously validate the theoretical finding, we add new users to the existing dataset as it has insufficient users originally. Basically, we split the friends of the nodes that have higher degree of connections into multiple sets. We then created new users and madeundirected connectionsby randomly choosing the nodes from each of the above-mentioned sets. (We now have a total of 20109 users.)We emulate our content propagation model on the above dataset as follows. We represent each user by a TL comprising five levels (N = 5). The starting type-1 seed TL reads the CP-post with probability r_1 shares it with a random number of friends from its friends' list (as in the dataset) while influenced by the post quality factor η. We incorporated all the other details, e.g., shifting, the lifetime of a TL, etc into the simulation.We obtain the number of shares in each sample path (realization) at fixed points in time. We then computed the average number of shares generated in such 8000 sample paths at each of the fixed time instances y_1(t), i.e., the time evolution of the expected number of shares. We plot the time evolution of the expected number of shares obtained theoretically and via simulation in Figure <ref>).And we compute the number of sharestheoretically using the same set of values. For the sake of convenience, we use the natural log scale on the y-axis. As the number of users are finite (dataset), the trajectory of log y_1(t)begins to saturate as time elapses in Monte Carlo simulation. While theoretically, the expected number of shares continues to grow indefinitely.Wesee inFigure <ref>) that the theoretical trajectoryof log y_1(t) matches well to that of the simulation based trajectory till saturation. § VIRAL MARKETING AND REAL TIME BIDDING The performance measures obtained in the previous sections can be useful in many advertisement/campaign related objectives such as brand awareness, search engine optimization, maximizing the number of clicks to a post/advertisement (ad), etc. In this section, we will study online auctioning for advertisements in viral marketing using the performance measures as obtained in the previous sections.The publishers of OSNs sell the advertisement inventory/space to various content providers (CPs) via auction mechanism commonly known as real time bidding (<cit.>). 
For example, Facebook auctions billions of advertisement space inventory every day, and the advertisements(ads) of the winners are served. Real-time bidding enables the CPs to automatically submit their bids in real time, and the advertisement of the highest worth (based on bid amount and its performance) is thus served. By virtue of auctioning, a natural competition occurs among the CPs for winning auctions. A content provider (CP) has to win the auction to get sufficient number of seed (initial) timelines.The virality/sharing of the post further depends upon the quality of the advertisement/post (recall the post quality factor η). On summarizing, the CP has to invest in two aspects: a) the bid amount to win the auction, and b) the amount spent to the design of the post (η). Recall that designing of a post could include providing authentic information about your services/products, or providing quality content,or giving offers, etc.Inappropriately tailored post can make users lose interest in the post, and thereby reducing the virality chances.Content providers (CPs) typically have wide-ranging objectives while advertising on OSNs. For example, a CP may be in interested in enhancing the brand awareness of its products. Brand awareness plays a central role in users' decision making for a purchase. Such an objective is achieved if the brand promotional post gets viral.Recall, we say a post gets viral if it spreads on a massive scale via its sharing among the users.Given that a post gets viral, a CP may be interested in knowing how fast the post spreads, i.e., the rate of virality. Other objectives, a CP may be interested in, include:maximizing the number of clicks on its post, improving its reputation, increasing its presence in the marketplace, etc. In previous sections, we derived some of these performance measures. For example, we obtained the time evolution of the number of shares and NU-TLs which characterize the rate of virality. We also obtained the expression for the probability of virality. On the other hand, in non-viral (sure extinction) scenarios, we computed the expected number of total shares before extinction. We providedexplicit expressions for some of the performance measures as a function of controllable parameters while others are represented as the solutions of appropriateFP (fixed point) equations. One can use these measures to study a relevant optimization problem taking auctions into account. In particular, and without loss of generality, we take the expected shares/NU-TLs as an indicative of the performance of CP's posts. §.§Optimal budget allocationIn the single CP model[When we consider the study of competing content in Part-II, theCPs further compete over relative visibility of their own content (details in Part-II <cit.>). ], the CP (indirectly) competes with otherCPs only for advertisement inventory space (i.e., for winning initial seeds). This is because the other CPs are advertising unrelated content.We consider the details related to winning auctions, and then the resultant rewards derived by single CP. The CP has to first win the auction, and then its post will propagate via the forwards/shares as discussed before. Recall that these shares/forwards generate revenue to the CP.Therefore, it becomes important for the (concerned) CP to know the bid distribution of the other CPs/advertisers. In particular, we need the highest bid of the advertisers participating in the auction. 
Authors in <cit.> show that the maximum bid value follows the log-normal distribution withparameters mean μ_b and varianceσ_b^2. Let 𝐁 denote the distribution of the maximum bid values, it follows form <cit.> log𝐁∼ N(μ_b, σ^2_b)N(,)As mentioned before, we take the expected value of NU-TLs as one among several choices of performance measures to study the optimization problem. More specifically,we take the sum of the expected number of users with CP-post at various levels, ∑_l=1^N E[X_l(t)] for some large t, as an indicator of CP's revenue. Note that when the characteristics of underlying social network (e.g., sparsely connected) are such that the probability of extinction is one, non-viral scenario, the CP getszero reward as the NU-TLs become zero after some time.Whereas in the viral scenarios (i.e.,q < 1), we have η̅≤η≤ 1 (see (<ref>)) and the CP gets∑_l=1^N E[X_l(t)] provided that it wins the auction.Authors in <cit.> state that the winner of the auction is decided based on the bid amount and the corresponding quality of the post/advertisement collectively. In other words, the CP wins the auction when the bid amount x and η collectively exceeds the bid distribution, i.e., x η > 𝐁. Thus, the probability of winning the bid is P( 𝐁 < x η), which is the cumulative density function (CDF) of log-normal distribution. Given that the CP wins the auction, its content is placed at the top-level of one TL, i.e., we begin content propagation with one seed TL of type-1. By the time the seed user visits its TL, the post might have shifted down or might disappear completely off the TL. In all, the CP invests: 1) x for the bid amount to win the auction, and 2)κ_1 η for preparing the post and κ_1 >0.Let us say the CP wants to maximize its utility, denoted by 𝐂(x,η) where𝐂(x,η)=(log E(∑_l X_l(t))-κ_2(x +κ_1 η))P( 𝐁 < x η),ifη̅≤η≤ 1 0,else,where the weightage κ_2 capturestrade-off between the rewardlog E(∑_l X_l(t))and the overall cost x +κ_1 η. The close-form expression of CDF of log-normal distribution with erf as the error functionP( 𝐁 < x η) = 1/2 + 1/2(log x η - μ_b/√(2)σ_b) = 1/2 + sign (f(xη))1/√(π)∫_0^f(x η) e^-z^2 dz, where sign(a) = +1 if a ≥ 0 and -1 otherwise; and f(xη)= log x η - μ_b/√(2)σ_b. Using Corollary <ref>, we rewrite it as𝐂(x,η)=(log( e^α tv_i ∑_l u_l ) - κ_2( x +κ_1 η)) (1/2 + sign(f(xη))1/√(π)∫_0^f(x η) e^-z^2 dz ), ifη̅≤η≤ 1 0,else.Thus, the optimization problem is stated as: O1: max_x,η𝐂(x,η) s.t. x ≥ 0 0 ≤η≤ 1.In some scenarios, the CP is constrained by limited budget. Given a budget amount say B̅, how toallocate/divide the sameinto bid amount x andηrelated cost,such that the revenue is maximized;in other words we consider constrained optimization (revenue maximization) problemunderthe budget constraint ℬ(x,η) = x + κ_1 η≤B̅.This leads to the formulation of a variant of the above stated optimization problem: O2: max_x,η log E(∑_l X_l(t))P(𝐁≤ x η) s.t. η̅≤η≤ 1,x ≥ 0, x + κ_1 η≤B̅.Optimizers of the above problem give the best allocation of theavailablebudget to the following factors: 1) winning the auction,and 2) maintaining post quality such that overall spending does not exceed B̅. Any pair of optimizers, (x^*,η^*), ofO2satisfy x^* + κ_1 η^* = B̅.Proof The proof is given in Appendix.Due to the complex nature of the underlying objective functions, it is hard toanalyse both of the optimization problems analytically.In particular, we are interested in obtaining the optimizers and study their variations with different system parameters in both the optimization problems. 
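The optimizers reported in the figures below can be obtained numerically; the following Python sketch illustrates one simple way to organize such a computation (it is not the code used to generate our figures). The function expected_log_shares is a hypothetical placeholder for the branching-process expression of log E(∑_l X_l(t)), and all numerical parameter values are illustrative assumptions. For O2, Proposition <ref> allows restricting the search to the line x = B̅ − κ_1 η, so a one-dimensional search over η suffices.

```python
import numpy as np
from scipy.stats import norm

# Illustrative parameters (assumptions, not the values used for the figures).
mu_b, sigma_b = 0.5, 0.4      # log-normal parameters of the highest competing bid
kappa1, kappa2 = 2.0, 0.3     # cost of post quality, cost weight in O1
B_bar = 3.5                   # total budget in O2
eta_bar = 0.2                 # minimum quality needed for virality (eta-bar of the text)

def expected_log_shares(eta):
    """Placeholder for log E[sum_l X_l(t)]; in practice this comes from the
    branching-process formulas (e.g. the e^{alpha t} expression)."""
    return 2.0 + 3.0 * eta    # purely illustrative monotone surrogate

def win_prob(x, eta):
    """P(B < x*eta) for a log-normal highest bid."""
    if x * eta <= 0:
        return 0.0
    return norm.cdf((np.log(x * eta) - mu_b) / sigma_b)

def utility_O1(x, eta):
    if not (eta_bar <= eta <= 1.0):
        return 0.0
    return (expected_log_shares(eta) - kappa2 * (x + kappa1 * eta)) * win_prob(x, eta)

# O1: crude grid search over (x, eta).
xs = np.linspace(0.01, 10.0, 400)
etas = np.linspace(eta_bar, 1.0, 200)
vals = np.array([[utility_O1(x, e) for e in etas] for x in xs])
i, j = np.unravel_index(vals.argmax(), vals.shape)
print("O1 optimizers:", xs[i], etas[j], "value:", vals[i, j])

# O2: by the tightness result, x = B_bar - kappa1*eta, so search over eta only.
etas2 = np.linspace(eta_bar, min(1.0, B_bar / kappa1), 400)
obj2 = [expected_log_shares(e) * win_prob(B_bar - kappa1 * e, e) for e in etas2]
k = int(np.argmax(obj2))
print("O2 optimizers:", B_bar - kappa1 * etas2[k], etas2[k], "value:", obj2[k])
```

The grid search is used only for transparency; any bounded nonlinear solver (e.g., a quasi-Newton method with box constraints) could replace it once expected_log_shares is supplied by the formulas of the previous sections.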
Let us denote by C^* and C^*_con the optimal objective values of O1 and O2 respectively. We compare and contrast the optimizers and objective values of O1 and O2 in the plots below. Figures <ref> and <ref> depict the CP's spending 1) on the bid amount for winning the auction, x; and 2) on the post quality factor η, in order to maximize its utility. When the CP has a limited budget, i.e., as in optimization problem O2, we see in Figure <ref> that x^* increases whereas η^* decreases as the mean number of friends (m) increases. Eventually, both settle at their respective constant values, i.e., x^* ≈ 2.08, η^* ≈ 0.69. This pattern is attributed to two factors: 1) the cost factor for η, i.e., κ_1, is comparable to B̅; and 2) the increasing mean number of friends accounts for the steady decrease of η to 0.69. In other words, as m increases, the post can get viral with a smaller η (see equation (<ref>)), and hence the CP tends to proportionally invest more in winning the bid. This kind of trend is seen only when x and η are coupled, as in the budget constraint. In optimization problem O1, due to the absence of the budget constraint, we see in Figure <ref> that x^* increases without restriction and η^* also increases to its maximum value of one as the network activity (measured by m) increases. Hence, we see higher optimal objective values attained in O1 compared to those of O2. Basically, the CP can utilize the increasing connectivity of the network (i.e., increasing m) by investing more in both x and η. Note that the trend is different for the extinction probability, as in Figure <ref>: there we saw that as the connectivity of the network increases, the chances of virality decrease. However, as seen now, while the chances of virality reduce, the expected number of shares still improves. In O2, the CP steadily increases its allocation to x, and hence proportionally invests less in η; both eventually settle at constant values.

When the mean number of friends increases, it is natural that the CPs would bid more, as it becomes easier for a content to get viral (recall α ∝ m). In other words, an increase in m implies an increase in μ_b (μ_b ∝ m). When the mean of the bid distribution increases, it gets difficult to win the auction in the constrained problem O2 (see Figure <ref>). Consequently, the CP has to invest more in winning the auction, which comes at the cost of reducing the post quality η (x cannot increase without restriction due to the budget constraint). Further, as explained earlier, the increasing mean number of friends accounts for the steady decrease in η^*; thereby x^* increases and both of them converge to fixed values, as can be seen in Figure <ref>. Note that the objective value C_con^* in this case decreases after m ≈ 5, because the allocation to x is considerably higher than that seen in Figure <ref>. Again, in Figure <ref>, without the budget constraint (in O1), we do not see this trend: the optimal value increases with an increase in m, as x^* can take unrestricted values.

Impact of timeline structure on optimizers: Earlier we studied the impact of the TL structure on the post propagation. We now see through Figures <ref> and <ref> how neglecting the TL structure influences the optimizers. In the No-TL case, the optimal values in both versions O1 and O2 are higher than those of their respective timeline scenarios, overestimating the realistic optimal value. Also, the realizable optimal objective value (TL case) is further compromised when it is accompanied by adopting the No-TL optimizers.
They may be sub-optimal for TL scenario for, e.g., in the context of O2 problem with No-TL case concluding x^* ≈ 2.27,η^* ≈ 0.64 (see Figure <ref>) to be optimizers is fallacious (the actual optimizers in the TLcase as in Figures <ref> and <ref> are x^* ≈ 2.08,η^* ≈ 0.69). Thus, ignoring TL can cause a CP to make sub-optimal decisions and may indicatefalse trends. § CONCLUSIONSWe studied the propagation of a post of interest over a huge OSN.We modeled the propagation of the post, considering the timeline (TL) structure, by an appropriate multi-type branching process. We found that the underlying branching process exhibits a certain dichotomy: either the post gets extinct or gets viral. We obtained various performance measures such as the time evolution of the number of unread posts, the expected number of shares, the probability of virality, etc. We showed that the expected number of shares grow at the same rate as the number of unread posts. We compare our results with the results that one would obtain without considering TL structure. We discovered that without considering the TL structure, one leads to draw erroneous conclusions. For instance, we found that a study without TLs shows that even less attractive posts can get viral. It also indicates erroneous growth rates.More importantly, we also observe that without TL effects, one cannot capture some interesting paradigm shifts/phase transitions in certain behavioral patterns. For example, as the network becomes more active, one anticipates that it is more beneficial to engage in the network. The studies which do not incorporate these effects of TL lead to this erroneous conclusion; and argue that the virality chances increase monotonically as the mean number of friends increases (m). We demonstrated that virality chances do not increase monotonically with the number of friends. After a certain value of m, it decreases for some intermittently active networks (medium m values). To be more specific, for some range of parameters,less active networks are preferable to more active networks. Lastly, we integrated online auctions into our viral marketing model. We studied the optimizationproblem considering the online auctions. We again compared the study with and without considering TL structure for varying activity levels of the network. We observe that the analysis without considering TL structure fails to capture phase transitions, thereby making the overall study incomplete. Our study provides a framework using which, one can estimate important performance measures related to content propagation over online social networks, which further can be used in solving relevant optimization/game theoretic problems. apacite1IntStat Meeker, Mary, and Liang Wu. "Internet trends 2018." (2018).1BranchVMVan der Lans, Ralf, et al. "A viral branching model for predicting the spread of electronic word of mouth." Marketing Science 29.2 (2010): 348-365.1BranchNonMarkov Iribarren, Jose Luis, and Esteban Moro. "Branching dynamics of viral information spreading." Physical Review E 84.4 (2011): 046116. 1VirBranch Stewart, David B., Michael T. Ewing, and Dineli R. Mather. "A conceptual framework for viral marketing." Australian and New Zealand Marketing Academy (ANZMAC) Conference 2009 (Mike Ewing and Felix Mavondo 30 November 2009–2 December 2009). 2009. 1YU X. Yang and G.D. Veciana, Service Capacity of Peer to Pe er Networks, Proc. of IEEE Infocom 2004 Conf., March 7-11, 2004, Hong Kong, China. 1TLLit Piantino, S., Case, R., Funiak, S., Gibson, D. 
K., Huang, J., Mack, R. D., ... & Young, S. (2014). U.S. Patent No. 8,726,142. Washington, DC: U.S. Patent and Trademark Office.1Efficient Chen, Wei, Yajun Wang, and Siyu Yang. "Efficient influence maximization in social networks." Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2009.part2 Dhounchak, Ranbir, Veeraruna Kavitha, and Eitan Altman. “Part-II: CompetitiveViral Marketing Branching Processes in OSNs.”arXiv preprint arXiv:1705.09828RumorSpread1 Doerr, Benjamin, Mahmoud Fouz, and Tobias Friedrich. "Why rumors spread so quickly in social networks." Communications of the ACM 55.6 (2012): 70-75. 1CPonGraph Du, MFB Nan, Yingyu Liang, and L. Song. "Continuous-time influence maximization for multiple items." CoRR, abs/1312.2164 (2013). 1Scroll Nielsen, Jakob. "Scrolling and attention." Nielsen Norman Group (2010). 1xeta Mahdian, Mohammad, and Kerem Tomak. "Pay-per-action model for online advertising." Proceedings of the 1st international workshop on Data mining and audience intelligence for advertising. ACM, 2007.1R1 J.A.C. Resing, "Polling systems and multitype branching processes", Queueing Systems, December 1993.1R2 Xiangying Yang and Gustavo de Veciana,Service Capacity of Peer to Peer Networks, IEEE Infocom 2004. 1SNAP <https://snap.stanford.edu/data/>1xu Eitan Altman, Philippe Nain, Adam Shwartz, Yuedong Xu"Predicting the Impact of Measures Against P2P Networks: Transient Behaviour and Phase Transition", IEEE Transactions on Networking (ToN),pp. 935-949,2013. 1AthreyaBook Krishna B Athreya and Peter E Ney. Branching processes, volume 196. Springer Science & Business Media, 2012.1AthreyaPaper Krishna Balasundaram Athreya. Some results on multitype continuous time markov branching processes. The Annals of Mathematical Statistics, pages 347–357, 1968.1BidEst Cui, Ying, et al. ”Bid landscape forecasting in online ad exchange marketplace." Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2011.1Harris Theodore E Harris. The theory of branching processes. Courier Corporation, 2002. § APPENDIX Proof of Lemma <ref>:i)The matrix e^At for any t > 0 is positive regular iff e^A is (<cit.>), because A+I has only non-negative entries. Thus it is sufficient to prove e^A is positive regular. Without loss of generality we can drop the multiplier λ + ν. Then the matrix A can be written in the following way A = A_1 + A_2, where A_1 = [ c_1 r_1c_2r_1 + θ ·c_N-1r_1c_Nr_1; c_1 r_2c_2r_2 ·c_N-1r_2c_Nr_2; · · · · ·; c_1 r_N-1c_2r_N-1 ·c_N-1r_N-1 c_N r_N-1 + θ; c_1 r_Nc_2r_N ·c_N-1r_N c_N r_N ]andA_2 = Diag(-1) is the diagonal matrix with `-1' on all the diagonals. Thus, e^A = e^A_1 e^A_2 =e^-1 e^A_1since the matrices commute. For any i, one can expresse^A_i= I + A_i + A_i^2/2!+ A_i^3/3! + ⋯,where I is the identity matrix. Also e^A_2 = e^-1 I commutes withe^A_1.A matrix is positive regular if there exists an n such that A^n has all positive entries.If c_l > 0 and r_l > 0 for all l, then A_1 is trivially positive regularand hence e^A is also positive regular. Consider a general case, where some of the constants can be zero, in particular consider the case withc_l = 0∀ l > 1 and c_1 > 0. 
For this case: A_1 = [ c_1 r_1 θ 0 0 · 0 0 0 0; c_1 r_2 0 θ 0 · 0 0 0 0; c_1 r_3 0 0 θ · 0 0 0 0; c_1 r_4 0 0 0 · 0 0 0 0; · · · · · · · · ·; c_1 r_N-3 0 0 0 · 0 θ 0 0; c_1 r_N-2 0 0 0 · 0 0 θ 0; c_1 r_N-1 0 0 0 · 0 0 0 θ; c_1 r_N 0 0 0 · 0 0 0 0 ] Then it is clear thatA_1^2 = [c_1^2 r_1^2 +θ c_1 r_2θ c_1r_1 θ^2 · 0 0;c^2_1r_1 r_2+θ c_1 r_3 θ c_1 r_2 0 · 0 0; ⋮ ⋮ ⋮ ⋮ ⋮; c^2_1r_1 r_N-2 +θ c_1 r_N-1θ c_1r_N-2 0 · 0 θ^2; c^2_1r_1 r_N-1 +θ c_1 r_Nθ c_1r_N-1 0 · 0 0; c_1^2 r_1 r_N θc_1r_N 0 · 0 0 ] The third power A_1^3 = A_1^2 A_1 will have first three columns positive because the first two columns in A_1^2 is have strict positive terms and the first 2× 3 sub matrix of A_1 [ c_1 r_1 θ 0; c_1 r_2 0 θ ] has at least one positive entryin every column. Continuing this way one can verify that A_1^Nhas all positive entries by induction.Basically onceA_1^n has first n columns with onlypositive entries, because the first n× (n+1) sub-matrix of A_1 has atleast one positive entry inevery column, the matrix A_1^n+1= (A_1^n) × A_1 will have its first n+1 columns with onlypositive entries.Further A^n has only non negative entries for any n ∈ℕ. From (<ref>) it is direct that e^A_1 is positive regular and so is e^-1e^A_1.For the general case, when only some of {c_l} are non-zero since terms are non-negative, the positive regularity follows from the above case and expansion (<ref>). The result is true as long as c_1 > 0. Proof of parts (ii)-(iii):We proved that e^A is positive regular. By Frobenius-Perron theory of positive regular matrices: a)there exists an eigenvalue, call it e^α, of the matrix e^A whose algebraic and geometric multiplicities are one and which dominates all the other eigenvalues in the absolute sense. In fact, α would be a real eigenvalue of matrix A, anditdominates the real components of all other eigenvalues of the matrix A;b) there exists a left eigenvector u and a righteigenvector v, bothwith all positive components,corresponding to α.Fix one such set of left and righteigenvectors u, v.Note that the eigenvectors of matrices A and e^A are the same.Any left eigenvector of α, in particularu,satisfies uA = αu and hence we get the following system of equations relatingu and α(λ + ν) c_1r.u - (λ + ν)u_1= α u_1c_1r.u = α +λ + ν/λ + ν u_1,c_lr.u+ θ u_l-1= α +λ + ν/λ + ν u_l, l ≥ 2 .Simplifying the above we obtain the following relation among various components ofleft eigenvectoru:for any l ≤ Nu_l = ∑_i = 0^l-1ρ_l-i/ρ_1(θ/σ)^i u_1;∑_i=1^Nu_i =∑_l=1^Nρ_l/ρ_1∑_i=0^N-l(θ/σ)^iu_1 σ := α +λ + ν/λ + ν.Following exactly the same procedure, we obtain the relation among various components of righteigenvectorv which arev_l = ∑_i = 0^N-lr_l+i/r_N(θ/σ)^i v_N∀ l = 1,2, ⋯, N-1.This completes the proof of part (iii).Fix u, vas before, and considerthe following linearfunction of σ':P(σ') := (r.c) r.u + θ∑_i = 1^N-1 r_i+1 u_i- σ' r.u where r.c := ∑_i = 1^Nr_i c_i etc. Multiplying eitherside ofthe equation (<ref>)with r_l and then summing over lwe notice thatσ isa zero of P(.). In other words,eigenvalue α = (σ^*- 1)(λ+ν), where σ^*is a zero of P(.)Because u_i >0 for all l,r.u >0 and similarlyr.c > 0. Thusσis the only zero of P(.).It is clear thatP( r.c)= θ∑_i = 1^N-1 r_i+1 u_i > 0. Since r_is are monotonic, i.e., because r_1 ≥ r_2≥⋯≥ r_N,P(r.c + θ)= θ∑_i = 1^N-1 r_i+1 u_i- θr.u< 0. Thus, the only zero ofP(.) 
lies in theopen interval interval (r.c, r.c + θ ).Thus α∈( r.c-1, r.c + θ -1 )(λ+ν).Consider the special case withr_l = d_1d_2^l, where d_1 and d_2 ≤ 1 are constants, then clearly the only root of equation (<ref>) σ equals σ = r.c+ θ d_2∑_i = 1^N-1 r_i u_ir.u = r.c+ θ d_2(1-r_N u_N/r.u) .Now westudy the convergence of σ as N →∞. It is obvious that the eigenvectors/eigenvalues corresponding to different N would be different.We would normalize them by choosing the eigenvector uwith u_1 = 1 for any N.With such a choice,it is clear from (<ref>) that u_N remains bounded even when we let N →∞. Thus as N →∞σ =r.c+ θ d_2(1-r_N u_N/r.u) →r.c+ θ d_2 as N →∞ ∵(r_N → 0).Thus,as the number of TL levels increase the largest eigenvalue, α of matrix A converges to (r.c+ θ d_2 -1 )(λ + ν). Computation of v_l ∑_l u_l: Referring to Theorem <ref>, the left and right eigenvectors of the matrix Aare u_l = ∑_i = 0^l-1ρ_l-i/ρ_1(θ/σ)^i u_1, v_l = ∑_i = 0^N-lr_l+i/r_N(θ/σ)^i v_N; l ≥ 2 σ =α/(λ + ν) +1. When ρ_i = ρ̃ρ^i, r_i = d_1d_2^i with0 < d_1,d_2, ρ < 1. On substituting these values, we obtain u_l =1 -(θ/σρ)^l /1-θ/σρρ^l-1 u_1= ρ^l-1 -1/ρ(θ/σ)^l /1-θ/σρu_1;∑_l =1^N u_l =u_1/( 1- θ/σρ)(1-ρ^N/1-ρ - θ/σ×ρ1 -(θ/σ)^N/1-θ/σ) v_l =1- (θ d_2/σ)^N-l+1/1 - θ d_2/σd_2^l-Nv_N =d_2^l/d_2^N- (θ/σ)^N -l+1 d_2^N-l+1 + l-N/1 - θ d_2/σv_N= d_2^l/d_2^N - (θ/σ)^N+1 d_2 (σ/θ)^l/1 - θ d_2/σv_N.v_l ∑_l u_l =d_2^l/d_2^N- (θ/σ)^N+1 d_2 (σ/θ)^l/1 - θ d_2/σv_N u_1/( 1- θ/σρ)(1-ρ^N/1-ρ - θ/σ×ρ1 -(θ/σ)^N/1-θ/σ)=d_2^l/d_2^N- (θ/σ)^N+1 d_2 (σ/θ)^l/(1 - θ d_2/σ)(1 - θ/σ)(1-ρ^N/1-ρ - θ/σ×ρ1 -(θ/σ)^N/1-θ/σ) u_1 v_N.We require the value of u_1v_N towards obtaining v_l ∑_l u_l. For this, we will usethe fact that u.v = 1. So,∑_l=1^N u_l v_l =∑_l=1^N ρ^l-1 -1/ρ(θ/σ)^l /1-θ/σρd_2^l/d_2^N- (θ/σ)^N+1 d_2 (σ/θ)^l/1 - θ d_2/σ u_1v_N = 1.Observe that∑_l=1^N(ρ^l-1 -1/ρ(θ/σ)^l ) ×( d_2^l/d_2^N- (θ/σ)^N+1 d_2 (σ/θ)^l) =d_2/d_2^N∑_l=1^N (d_2 ρ)^l-1- 1/ρ d_2^N∑_l=1^N (θ d_2/σ)^l - ∑_l=1^N (θ/σ)^N+1d_2/ρ(σρ/θ)^l + ∑_l=1^N d_2/ρ(θ/σ)^N+1 = d_2/d_2^N1-(d_2ρ)^N/1-d_2ρ- 1/ρ d_2^Nθ d_2/σ1-(θ d_2/σ)^N/1-θ d_2/σ - d_2/ρ(θ/σ)^N+1σρ/θ1-(σρ/θ)^N/1-σρ/θ + d_2/ρ(θ/σ)^N+1 N. Substituting this value back to ∑_l=1^N u_l v_l, we getu_1v_N/(1-θ/σρ)(1-θ d_2/σ)(d_2/d_2^N1-(d_2ρ)^N/1-d_2ρ- 1/ρ d_2^Nθ d_2/σ1-(θ d_2/σ)^N/1-θ d_2/σ - d_2/ρ(θ/σ)^N+1σρ/θ1-(σρ/θ)^N/1-σρ/θ + d_2/ρ(θ/σ)^N+1 N ) =1 u_1v_N= (1-θ/σρ)(1-θ d_2/σ)(d_2/d_2^N1-(d_2ρ)^N/1-d_2ρ- 1/ρ d_2^Nθ d_2/σ1-(θ d_2/σ)^N/1-θ d_2/σ - d_2/ρ(θ/σ)^N+1σρ/θ1-(σρ/θ)^N/1-σρ/θ + d_2/ρ(θ/σ)^N+1 N ) Now substitutingthe value of u_1 v_N in the equation <ref> v_l ∑_l u_l =d_2^l/d_2^N- (θ/σ)^N+1 d_2 (σ/θ)^l/(1 - θ d_2/σ)(1 - θ/σ)(1-ρ^N/1-ρ - θ/σ×ρ1 -(θ/σ)^N/1-θ/σ) ×(1-θ/σρ)(1-θ d_2/σ)(d_2/d_2^N1-(d_2ρ)^N/1-d_2ρ- 1/ρ d_2^Nθ d_2/σ1-(θ d_2/σ)^N/1-θ d_2/σ - d_2/ρ(θ/σ)^N+1σρ/θ1-(σρ/θ)^N/1-σρ/θ + d_2/ρ(θ/σ)^N+1 N )=d_2^l/d_2^N- (θ/σ)^N+1 d_2 (σ/θ)^l/(1 - θ/σ)(1-ρ^N/1-ρ - θ/σρ1 -(θ/σ)^N/1-θ/σ) ×(1-θ/σρ)(d_2/d_2^N1-(d_2ρ)^N/1-d_2ρ- 1/ρ d_2^Nθ d_2/σ1-(θ d_2/σ)^N/1-θ d_2/σ - d_2/ρ(θ/σ)^N+1σρ/θ1-(σρ/θ)^N/1-σρ/θ + d_2/ρ(θ/σ)^N+1 N )=d_2^l - (θ d_2/σ)^N+1(σ/θ)^l/(1 - θ/σ)(1-ρ^N/1-ρ - θ/σρ1 -(θ/σ)^N/1-θ/σ) ×(1-θ/σρ)(d_2 1-(d_2ρ)^N/1-d_2ρ- θ d_2/σρ1-(θ d_2/σ)^N/1-θ d_2/σ - 1/ρ(θ d_2/σ)^N+1σρ/θ1-(σρ/θ)^N/1-σρ/θ + 1/ρ(θ d_2/σ)^N+1 N ) . 
On simplifying and using property of limits: lim_ N →∞ v_l ∑_l =1^N u_l=lim_ N →∞d_2^l - (θ d_2/σ)^N+1(σ/θ)^l/(1 - θ/σ)(1-ρ^N/1-ρ - θ/σρ1 -(θ/σ)^N/1-θ/σ)× lim_ N →∞(1-θ/σρ)(d_2 1-(d_2ρ)^N/1-d_2ρ- θ d_2/σρ1-(θ d_2/σ)^N/1-θ d_2/σ - d_2 (θ d_2/σ)^N -( d_2 ρ)^N/1-σρ/θ + 1/ρ(θ d_2/σ)^N+1 N ) .Note that in viral scenario α > 0 andσ > 1 and hence θ/σ < 1, solim_ N →∞(θ/σ)^N+1 = 0,lim_ N →∞(θ d_2/σ)^N+1 = 0,lim_ N →∞N (θ d_2/σ)^N+1= 0∵ d_2 <1. In what follows, for any fixed l, we have lim_ N →∞d_2^l - (θ d_2/σ)^N+1(σ/θ)^l/(1 - θ/σ)(1-ρ^N/1-ρ - θ/σρ1 -(θ/σ)^N/1-θ/σ) =d_2^l/(1 - θ/σ)(1/1-ρ - θ/σρ1 /1-θ/σ)lim_ N →∞(1-θ/σρ)(d_2 1-(d_2ρ)^N/1-d_2ρ- θ d_2/σρ1-(θ d_2/σ)^N/1-θ d_2/σ - d_2 (θ d_2/σ)^N -( d_2 ρ)^N/1-σρ/θ + 1/ρ(θ d_2/σ)^N+1 N ) =(1-θ/σρ)( d_2/1-d_2ρ- θ d_2/σρ1/1-θ d_2/σ)Substituting these limits (equations <ref> and <ref>) in equation <ref>, we getlim_ N →∞ v_l ∑_l =1^N u_l=d_2^l/(1 - θ/σ)(1/1-ρ - θ/σρ1 /1-θ/σ)(1-θ/σρ)( d_2/1-d_2ρ- θ d_2/σρ1/1-θ d_2/σ) =d_2^l-1/(1 - θ/σ)(1/1-ρ - θ/ρ1 /σ-θ)(1-θ/σρ)( 1/1-d_2ρ- θ/ρ1/σ -θ d_2)Thus, we havev_l ∑_l u_l =d_2^l-1 (1-d_2 ρ) (1/1-ρ - θ/ρ1/σ -θ) (σ - θ d_2)(σρ -θ)/(σ -θ) (ρ -θ) Proof of Lemma <ref> : Let j_ x= {j_x_1,j_x_2,⋯,j_x_N} be the number of TLs of type 1,2,⋯,N respectively, and y be the total number of shares. It is easy to observe thaty ≥∑_i j_x_i. We write it in short form as y ≥ j_ x.Define s_ x^ j_ x := Π_is_x_i^j_x_i. Then the PGF of TL-CTBP,when started with one type-1 particle,can be written as[ P_(e_1, 1) → ( j_ x, y)(t) is the probability that state (e_1, 1)(one type-1 particle and 1 total progeny/number of shares)after time t getstransformed to population vector j_ x and number of shares y ] F_1(s, t) = ∑_ j_ x = 0^∞∑_y ≥ j_ x^∞ P_(e_1, 1) → ( j_ x, y)(t) s_ x^ j_ xs_y^yand,δ F_1(s, t)/δ t= ∑_ j_ x = 0^∞∑_y ≥ j_ x^∞ P'_(e_1, 1) → ( j_ x, y)(t) s_ x^ j_ x s_y^y.This is obtained by conditioning on the events of the first transition. Note that the populations generated by two parents evolve independently of each other and the procedure is similar to the standard procedure used in these kind of computations (e.g., <cit.>). Let ξ = ( ξ_1, ξ_2, ⋯, ξ_N ) represent the offspring produced by one parent of type-1 and let := ∑_iξ_i. By backward equation, we have P'_1k(t) = ∑_jq_1jP_jk(t); in our case it isδ F_1(s, t)/δ t=(λ +ν) ((1-θ) r_1 ∑_ξ∑_ j_ x = 0^∞∑_y ≥ j_ x^∞ P_1(ξ)P_( ξ,+1 )→ ( j_ x, y)( t )s_ x^ j_ xs_y^y+θ F_2(s, t)-∑_ j_ x = 0^∞∑_y ≥ j_ x^∞ P_(e_1, 1) → ( j_ x, y)(t) s_ x^ j_ xs_y^y + (1-θ)(1-r_1)s_y)δ F_1(s, t)/δ t=(λ +ν)( (1-θ) r_1 ∑_ξ P_1(ξ) Π_i =1^N(∑_ j_ x = 0^∞∑_y ≥ j_ x^∞ P_(e_i, 1) → ( j_ x, y)(t) s_ x^ j_ xs_y^y)^ξ_i s_y+θ F_2(s, t) -F_1(s, t)+(1-θ)(1-r_1)s_y)δ F_1(s, t)/δ t=(λ +ν)( (1-θ) r_1 s_y f_1(F(s, t) ) +θ F_2(s, t)- F_1(s, t)+(1-θ)(1-r_1)s_y )where F(s, t) := { F_1(s, t),F_2(s,t), ⋯, F_N(s,t)}. Similarly we can writefor any lδ F_l(s, t)/δ t=(λ +ν)( (1-θ) r_l s_y f_l(F(s, t) ) + θ( 1_l<N F_l+1(s, t) +s_y 1_l = N) - F_l(s, t)+(1-θ)(1-r_l)s_y ).Let ẏ_l(t) = δ^2 F_l(s,t)/δ tδ s_y|_s= 1 ∀ l ={1,2,⋯, N} represent the time derivative of number shares till time t when started with a type l progenitor. 
We have the following expression ẏ_1(t) =(λ +ν) ( (1-θ) r_1 f_1(1) + (1-θ) r_1 ∑_i=1^Nδ f_1(F(s,t) /δ F_i(s,t)δ F_i(s,t)/δ s_y|_s= 1 + (1-θ)(1-r_1)1 +θδ F_2(s,t)/δ s_y|_s= 1 -δ F_1(s,t)/δ s_y|_s= 1 )=(λ +ν) ( (1-θ) r_1+ (1-θ)(1-r_1) + (1-θ) r_1 mη∑_i=1^Nρ_iy_i(t) + θy_2(t)-y_1(t) )=(λ +ν) ( 1-θ + r_1∑_i=1^N c_iy_i(t)+ θ y_2(t) -y_1(t) ).Similarly, we can write the above for any lẏ_l(t) = (λ +ν) (1- θ +r_l∑_i=1^N c_iy_i(t) +θy_l+1(t)1_l<N - y_l(t) +θ1_l =N).The above can be writtenin matrix form as 1/λ + ν[ [ẏ_1(t);ẏ_2(t); ⋮; ẏ_N-1 (t);ẏ_N(t) ]]=[ [c_1 r_1 -1c_2r_1 + θ ⋯c_N-1r_1c_Nr_1; c_1 r_2 c_2r_2 -1 ⋯c_N-1r_2c_Nr_2; ⋮; c_1 r_N-1c_2r_N-1 ⋯ c_N-1r_N-1 -1 c_N r_N-1 + θ; c_1 r_Nc_2r_N ⋯c_N-1r_Nc_N r_N -1; ]] [ [y_1(t);y_2(t); ⋮; y_N-1 (t);y_N(t) ]] + [ [ 1-θ; 1-θ; ⋮; 1-θ; 1 ]]. Solving the above set of equations, we obtain:[ [y_1(t);y_2(t); ⋮; y_N-1 (t);y_N(t) ]] = e^At[ [y_1(0);y_2(0); ⋮; y_N-1 (0);y_N(0) ]]+ e^At∫_0^te^-As(λ + ν)[ [ 1-θ; 1-θ; ⋮; 1-θ; 1 ]] ds = e^At[ [y_1(0);y_2(0); ⋮; y_N-1 (0);y_N(0) ]]+ e^At A^-1( I -e^-At) (λ + ν)[ [ 1-θ; 1-θ; ⋮; 1-θ; 1 ]] With y (t) := {y_1(t), y_2(t), ⋯, y_N(t)}, we can represent the above as:y (t) := [ [y_1(t);y_2(t); ⋮; y_N-1 (t);y_N(t) ]] = e^At(+ (λ + ν) A^-1k) - (λ + ν)A^-1kwhere k= [1-θ, 1-θ, ⋯, 1-θ,1 ]^T.y(t) = e^At( 1 + (λ + ν) A^-1k) - (λ + ν)A^-1k From <cit.>, e^At can be approximatedfor large t.By which, we can writey (t)≈ e^α t v u'( 1 + (λ + ν) A^-1k) - (λ + ν)A^-1k≈ e^α t(v∑_i = 1^N u_i + λ + ν/αvu'k)- (λ + ν)A^-1k≈ v e^α t ( ∑_iu_i( 1 + λ + ν/α (1-θ))+ λ + ν/α u_N) - (λ + ν)A^-1k≈ v e^α t∑_iu_i(1 +1-θ/ r.c -1 + θ d_2 ) - (λ + ν)A^-1k.Proof of Proposition <ref>: We will prove that at optimality the budget constraint is tight, i.e., x + κ_1 η = B̅ using the Lagrangian relaxation method. To do so, we first change the inequality budget constraint, x + κ_1 η≤B̅ , to equality constraint as follows. Let s^2 (ensuring it to be ≥ 0) be slack variablesuch that x + κ_1 η + s^2 = B̅.Similarly, we have η - s_1^2 =η̅,η+ s_2^2 = 1, x -s_3^2 = 0η≥η̅, η≤ 1where s_1^2, s_2^2, s_3^2 are the slack/surplus variables.The Lagrangian function 𝐿(x,η, Λ) with Lagrangian multiplier Λ, Λ_1,Λ_2,Λ_3 is given asmax_x,ηlog E(∑_l X_l(t))P( Bid ≤ x η) -Λ(B̅ -x - κ_1 η - s^2) - Λ_1 ( η̅ -η + s_1^2) - Λ_2 (1-η- s_2^2 ) - Λ_3(x-s_3^2)_𝐿(x,η, Λ,Λ_1,Λ_2,Λ_3).The critical points oflog E(∑_l X_l(t))P( Bid ≤ x η) with the given constraint, say G:= -x - κ_1 η - s^2,G_1:= η̅ - η + s_1^2, G_2:= 1-η- s_2^2 ,G_3:= x-s_3^2 are obtained by solving the following system of simultaneous equations[see <http://users.wpi.edu/ pwdavis/Courses/MA1024B10/1024_Lagrange_multipliers.pdf>] [and <http://www.math.harvard.edu/archive/21a_spring_09/PDF/11-08-Lagrange-Multipliers.pdf>] ∂log E(∑_l X_l(t))P( 𝐁 < x η) /∂ x = Λ∂ G/∂ x + Λ_1∂ G_1/∂ x + Λ_2∂ G_2/∂ x +Λ_3∂ G_3/∂ x log E(∑_l X_l(t)) e^-f(xη)^2/√(2 π)σ_b x = -Λ + Λ_3∂log E(∑_l X_l(t))P( 𝐁≤ x η) /∂η=Λ∂ G/∂η + Λ_1∂ G_1/∂η + Λ_2∂ G_2/∂η +Λ_3∂ G_3/∂η log E(∑_l X_l(t)) e^-f(xη)^2/√(2 π)σ_b η+ P( 𝐁≤ x η)∂log E(∑_l X_l(t))/∂η =-κ_1Λ -Λ_1-Λ_2 ∂log E(∑_l X_l(t))P( 𝐁≤ x η) /∂ s =Λ∂ G/∂ s + Λ_1∂ G_1/∂ s + Λ_2∂ G_2/∂ s +Λ_3∂ G_3/∂ s0 = 2Λ sand also x + κ_1 η +s^2 = B̅.We now compute the gradient w.r.t. 
to s_1, s_2, s_3:[Note that we mainly require equations <ref> and <ref> for this proof.]∂log E(∑_l X_l(t))P( 𝐁≤ x η) /∂ s_1 = Λ∂ G/∂ s_1 + Λ_1∂ G_1/∂ s_1 + Λ_2∂ G_2/∂ s_1 +Λ_3∂ G_3/∂ s_1 0 = -2s_1 Λ_1∂log E(∑_l X_l(t))P( 𝐁≤ x η) /∂ s_2 = Λ∂ G/∂ s_2 + Λ_1∂ G_1/∂ s_2 + Λ_2∂ G_2/∂ s_2 +Λ_3∂ G_3/∂ s_2 0 = 2s_2 Λ_2∂log E(∑_l X_l(t))P( 𝐁≤ x η) /∂ s_3 = Λ∂ G/∂ s_3 + Λ_1∂ G_1/∂ s_3 + Λ_2∂ G_2/∂ s_3 +Λ_3∂ G_3/∂ s_3 0 = -2s_3 Λ_3 η̅ - η + s_1^2 = 0,1-η- s_2^2 =0 , x-s_3^2 =0. Referring to equation <ref>, we have either s_3 =0 or Λ_3 = 0. If s_3 = 0, then x =0 (see equation <ref>); which is clearly not an optimal solution (zero objective value)as the objective can be improve when x>0. In particular, we do not need to compute Λ_1, Λ_2, s_1, s_2, s_3 for this proof. We only need to prove that s = 0. For this,observe that equation(<ref>) gives that either Λ = 0 or s = 0. However, Λ 0 because log E(∑_l X_l(t)) e^-f(xη)^2/√(2 π)σ_b x is positive (recall Λ_3 = 0). Therefore, we must have s = 0, which consequently brings out the tightness of budget constraint x + κ_1 η= B̅. Hence proved.Part-II: CompetitiveViral Marketing Branching Processes in OSNs CompetitiveViral Marketing Branching Processes in OSNsWe study the content propagation of competing contents in Online Social Networks. Wemodel the propagation of competing posts/contents by an appropriatebranching process. The underlying branching process turns out to be decomposable.Consequently, the evolution of the competing posts can be drastically different from each other. We utilize the existing theory of branching process and our newly developed results on decomposable branching process to study this problem. We obtain various performance measures such as the time evolution of the population of one of competing posts, extinction probabilities, etc. We also compare our results with the results that one would obtain without considering the timeline structure. We find that one leads to draw erroneous conclusions when the timeline structure is ignored. At last, we formulate a game theoretic framework to study the competition considering the online auctions. We numerically compute the Nash equilibria.Keywords: Viral marketing, Branching processes, Online social network, Game theory,Martingales, Online auctions. § INTRODUCTION In viral marketing, the content providers (CPs)/advertisers create contents/posts that are appealing to the users. When a user finds a post about products/services attractive, it spreads a word about it. The post is transmitted from one user to its neighbour, which causes a chain reaction. By the extensive sharing/transmission of a postthe post spreads on a massive scale, then we say the post got viral and hence this process is called viral marketing. In Part-I (<cit.>) of thiswork, we studied viral marketing branching process for the propagation of posts corresponding to a content provider (CP). In this paper, we will extend this study to investigate the propagation of postscorresponding to competing content providers.Online social network and timelines: Online Social Networks (OSNs) store volumes of information about the users. An important feature of these OSNs is the timeline(TL) structure of the appearance of the posts. Each post appears at a certain level based on its newnesson each user's page in an OSN, for instance, News Feed in Facebook. We call this reverse chronological appearance of the posts a `timeline' (TL). There is one TL dedicated for each user. 
As mentioned inPart-I <cit.>, no attention is paid to the TL structure of the posts/contents appearing on a user's page in viral marketing literature. We study the content propagation of competing CPs over OSNs, considering the inherent TL structure. A typical example of TL structure (for three users) with competing content (say posts P and Q) is shown inFigure <ref>. The figureshows the TL, consisting of different posts at different levels, forthree users.Users 1 and 2 have both the posts, while user 3 has only post-P. Our goal is to understand the propagation of these competing posts.Here a natural questionto ask is:at given time, how many users have post-P or post-Q?;and the next immediate one is at what level does that post reside (i.e., the position)? For instance, all the three users have the post `post-P' on their TLs, but at different levels. It is clearthat the posts positioned on the top of the TLs receive more attention/visibility compared to the ones at lower levels. Further, when a user has two or more posts ofcompeting nature, it may pay more attention to the one at top level; for example, user 2 may pay more attention to post-Q while user 1 may pay relativelymore attention to post P.Note also that the arrival of new contents keeps shifting/pushing down the existing contents ofa TL. Thus, a particular content of interest may reach lower levels before the user visits[The users 'visit' OSNs at random intervals of time and in each `visit' it browses some/all the new posts.]its TL, and the user may miss it. Technically, a user can scroll through indefinite number of posts. However, it is known that users' attention is limited to the first few levels <cit.>. Asin Part-I <cit.>, we consider this aspect in analysing content propagation, further,considering the extra complications that arise because of competing content on TLs. Without these key elements, one leads to draw erroneous conclusions (see Part-I <cit.> for similar results in the context of propagation of a single post). Methodology: Similar to Part-I, we model the content propagation of (competing) posts as an appropriate branching process(see Part-I <cit.> for more details). The branching processes can mimic most of the phenomenon that influences the content propagation;one can model the effects of multiple posts being forwarded to the same friend,and multiple forwards of the same post, etc. Further, werequirea multitype branching process to model this propagation.This is because we require separate counts of TLswith the given post (say post-P)at each levelandat any time instance. A post on a higher level in TL has better chances of being read by the user. Posts of appealing nature, e.g., containing irresistible offers, have a great chance of being in circulation, and we call it the post quality factor. Posts of similar nature appearing at lower levels on the TL have smaller chances of appreciation, etc.To study all these factors, one needs to differentiate the TLs that have the `post'at different levels, and this is possible only through multitype branching processes (BPs). Further to the above, we have more factors to considerwhile studying the competing contents. One may have TLs with one particular content (e.g., post-P or Q),or may have TLs with both the posts but at different levels. The propagation of a post (say post-P) is impacted bythatofthe other post (e.g., post-Q).Thisimpact is largely different than that considered in Part-I <cit.>because of the competition between the two posts. 
When a new post is received by a TL with post-P: a) post-P can only lose its position in the higher levels when the new post is unrelated to post-P (as in Part-I); b) if the new post is the (competing) post-Q, then in addition post-P may receive reduced attention from the user of that TL. In Part-I <cit.>, where we studied the propagation of content belonging to a single content provider, we required non-decomposable (or irreducible) branching processes, whereas for the propagation of competing content we find that a decomposable branching process is needed. The analysis of decomposable processes is far more complicated, and they are relatively less studied objects in the literature. §.§ An overview of branching processes As our analysis uses the theory of branching processes extensively, we briefly present an overview of the branching process literature. Branching processes can be categorized on a number of factors, for example, discrete versus continuous time branching processes (classification by time), single type versus multi-type branching processes, critical, super-critical or sub-critical, etc. Further, each category has subcategories, giving rise to numerous variants of branching processes (e.g., <cit.>, <cit.>). In a multitype Markovian continuous time branching process (CTBP), a particle lives for an exponentially distributed random time (e.g., <cit.>). It produces a random number of offspring of various types, independently of the other particles, and then dies; and this continues. The underlying generator matrix, say A, plays a vital role in carrying out the analysis of a continuous time branching process (CTBP). When the matrix e^At is positive regular[Matrix e^At is positive regular if each entry of it is strictly positive for some t_0 >0.], the underlying CTBP is classified as irreducible/non-decomposable. In this case, the branching process exhibits a certain dichotomy (e.g., <cit.>): a) either all the types survive together and grow exponentially (in time) at the same rate; or b) all the types get extinct after some time. Additionally, the largest eigenvalue of A, say α, determines the growth rate, the extinction probability, etc. The CTBP is called subcritical, critical or supercritical based on whether α < 0, α = 0 or α > 0 respectively. When α ≤ 0, the population gets extinct with probability one, whereas when α > 0, the CTBP can survive on some sample paths. Further, on these sample paths, all types grow exponentially fast at the common rate α, provided the CTBP is non-decomposable <cit.>. When the process is such that particles of certain types do not produce offspring of certain other types, we have a very different variety of branching process, called a decomposable branching process. In this process, the types get partitioned into different classes, where the types across different classes may have different characteristics. These processes behave significantly differently from non-decomposable processes. First and foremost, the dichotomy no longer holds, i.e., a particular class (a group of types) may thrive/survive whereas another gets extinct. Secondly, the types in different classes may have different growth rates, etc.
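This lack of dichotomy is easy to observe in simulation. The following sketch (a toy example of our own, not a model from the literature) simulates a two-class continuous time branching process in which the first class is subcritical but seeds the second, supercritical, class; all rates and offspring means are illustrative assumptions. On most sample paths the first class dies out while the second class it has seeded survives and grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rates/means, chosen only to illustrate decomposable behaviour:
# class-M particles are subcritical but can seed class-E particles,
# class-E particles are supercritical and never produce class-M particles.
death_rate = 1.0
mean_MM, mean_ME, mean_EE = 0.8, 0.5, 1.5

def sample_path(t_max=15.0, cap=5000):
    n_M, n_E, t = 5, 0, 0.0          # start with 5 class-M particles only
    while t < t_max and 0 < n_M + n_E < cap:
        total_rate = death_rate * (n_M + n_E)
        t += rng.exponential(1.0 / total_rate)
        if rng.random() < n_M / (n_M + n_E):   # a class-M particle dies and reproduces
            n_M += rng.poisson(mean_MM) - 1
            n_E += rng.poisson(mean_ME)
        else:                                  # a class-E particle dies and reproduces
            n_E += rng.poisson(mean_EE) - 1
    return n_M, n_E

paths = [sample_path() for _ in range(200)]
frac_M_extinct = np.mean([m == 0 for m, _ in paths])
frac_E_alive = np.mean([e > 0 for _, e in paths])
print(f"class M extinct in {frac_M_extinct:.0%} of paths, "
      f"class E alive in {frac_E_alive:.0%} of paths")
```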
It turns out that the branching process (BP) modeling the competing content is a decomposable branching process, and this necessitated an extension of the theory of the corresponding BPs. In particular, we obtain the time evolution of the expected value of a metric that resembles the well-known total progeny (or total population, i.e., the total population from the start, including the perished particles) of the BP. This quantity helps in estimating the expected `number of shares'. Decomposable processes are relatively less studied in comparison with non-decomposable ones. There are strands of literature (e.g., <cit.>) that study discrete time decomposable branching processes, but to the best of our knowledge, there is no dedicated literature on decomposable continuous time branching processes (DCTBPs). Some analysis could probably be derived using the discrete-time results. However, there are many more questions that need to be answered directly for the continuous time versions, and we consider the same. Authors in <cit.> studied the multitype decomposable branching process in the discrete time framework. Their study mainly focused on investigating the growth rates of the particles belonging to different classes. Authors in <cit.> investigated the class-wise extinction probabilities, where the extinction of a class is shown to be the minimal non-negative solution of the extinction probability equation, but with added constraints. We mainly study the `total progeny' of the different classes in a decomposable branching process, in the context of continuous time processes. Continuous time processes are useful in studying many practical problems such as cancer biology, viral marketing, etc. In these problems, it becomes important to know the following: the growth rates of the different classes, the growth rates of the `total progeny' of the different classes, the probability that the particles of one specific class explode while the others get extinct, etc. We find that the results required for our analysis are missing in the current literature related to decomposable branching processes. In this way, we contribute to decomposable branching processes. In particular, we derive a performance measure similar to the total progeny (which is well analyzed for irreducible branching processes) for decomposable branching processes in the continuous time framework. We employ the well-known results of branching processes and our newly developed results on decomposable processes to study competing posts. Decomposable branching processes are relatively less studied objects, particularly in the continuous time framework. As is usual practice in the theory of decomposable branching processes, we group the various types into different irreducible classes. These irreducible classes evolve according to well-studied non-decomposable/irreducible branching processes (when they start within their own class), and we study their further evolution when they are interconnected according to a reducible generator matrix. § SYSTEM DESCRIPTION We consider an OSN with a large number of users, for example, Facebook, Twitter, etc. Users use these networks to share pieces of information such as messages, photos, videos, etc. We briefly refer to these pieces of information as posts.
The posts are stored in a reverse chronological order on inverse stacks which we refer to as timelines (TLs).When a user visitsthe OSN, it reads the posts on its timeline (TL)and shares a post, upon finding it appealing/useful, with some of its friends.Due to this, the shared post appears on the top level of the timelines of those friends with whom the post was shared. This brings about a change in the appearance of contents on the timelinesof the recipients of the post. Basically, the existing contents of these TLs shift one level down. And a user can share as many posts as it wants. The number of shares of a particular post by a particular user depends upon: a) the distribution ofnumber of friends of the user;b)the level in the TL at which the post resides; and c) the extent to which the user liked the post etc. Basically,the sharing of a post depends on how engaging the content provider (CP) designs its post.And extensive sharing of the post amongst the users/friends potentially makes the post viral. There are many more aspects which influence the content propagation (see Part-I <cit.>). Users may become reluctant to read/share the contents on the lower levels of their TLs. When they see multiple posts of similar nature, they may appreciate few posts while the remaining receive reduced attention.We study all these aspects and the dynamics created by the actions (e.g., like, share, etc)of the users, whichhave a major impact on the propagation of the commercial content.Further, in this paper,we consider propagation of multiple posts which compete with each other; for example two competitors can spread simultaneously their advertisementsthrough the same social network and users responseto one of the postsdepends also upon the post of the competitor.We consider multi-type continuous time branching process to model the propagation of competing content. We begin with the description of the relevant dynamics and that in an appropriate branching process.§.§ Dynamics ofcontent propagation and branching process The content propagation in a typical OSN is as follows. Let us say we are interested in the propagation of two competing posts, namelypost-P and post-Q,when the process starts with X(0) number of seed TLs. Some of these X(0) TLs have onlypost-P/post-Q, while some others have both the posts.Further, the tagged (post-P/post-Q) posts can be residing anywhere on the first N-levels of the corresponding TLs; we track these posts only till first N levels of the TL.Note that the posts of theseX(0) TLs remain unread before their respective users visit their TLs.Thus, we call theseasnumber of unread TLs (NU-TLs). If a user, amongX(0), visiting its TL findspost-P/post-Q or bothattractive, it reads the post(s) and may share the same with a random number of its friends.And post-P/post-Q or bothwould be placed on the top level(s)of the recipient TLs (see Figure <ref>). As shown in the figure, the recipient TL haspost-P and post-Q on the top levels, and the existing posts shift down by two levels. Further, it is clear (in this example) that thepost-Qis shared before thepost-P.If some more posts are shared with some of these recipient TLs, their contents further shift down.For instance, in Figure <ref>,if one more post is shared afterpost-P, the post-Pwould reside on the second level (and post-Q on level 3) of theTL. As argued in Part-I <cit.>, the continuous time version of the branching process fits the content propagation better than the discrete counterpart. 
In a CTBP,any one of the existing particles `dies'after exponentially distributed time while in a discrete time version all the particles of a generation `die' together; theusers of theTLs withpost-P/post-Qvisit their respective TLs at different instances of times.As the underlying OSN is huge, one can say that the visit times of users are virtually independent of the each other. We assume that a user visits its TL after exponentially distributed time.When the number of copies of a CP-post grows fast (i.e., when the post is viral), the time period between two subsequent changes decreases rapidly as time progresses. This is also well captured by CTBP, which mimics the content dynamics better.The sharing process generates a random number, say ζ, of new TLs holding post-P or post-Q or both. If the user does not read or share the post after visiting its TL, then ζ = 0. If sharing process is independent and identical across all the users, the new TLs ζ so generated resemble IIDoffspring in a CTBP and the effective NU-TLs with one or both of the postsmay appear like the particles of a CTBP.When one of the users of these number of unread TL(NU-TLs) visits its TLand starts sharing thepost-P/post-Q or both (as before), then the content propagation dynamics again resemble that in a CTBP. However, the CTBPdescribed above does not capture some aspects related to the modeling of the post-propagation process. Post-P/post-Q can disappear from some of the TLs, before the corresponding user's visit. For example, post-P would disappear from a TL with (N-l+1) or more shares (before user's visit), if initially post-P were at level l.Further, TLs with only one of the two posts are different from the ones that have both, etc.Thus, we will need (continuous time branching processes) CTBPswith multiple types of particles to model this kind of content propagation.Further, with competing content, one of posts may get viral and the other may get extinct.This kind of an effect is not seen in irreducible BPs. Thus, the BPs that model our processcannot be irreducible, they will have to bedecomposable(e.g., <cit.>). § MODELING DETAILS We consider two competing content providers (CPs) and refer to them as CP-1 and CP-2 respectively. The competing CPs are operating in a similar kind of business, e.g., tourism industry, hotel/restaurant services, manufacturing businesses, etc. And they havecompeting contents/posts. We track the posts of both the CPs till first N levels of the TLs.The propagation of competingcontent can bemodelled by a multitype branching process (MTBP).As already mentioned, we have multiple types of population, and they further can be classified into three classes, as explained below.§.§ Different Typesof TLs §.§.§ Exclusive-typesThere are two classes in this category and each class contains the usersholding the post ofone of the competing CPs only on their TLs.Let X_l,0(t) be the number of unread TLs having CP-1 post at level l and these donot contain CP-2 post at time t. And let X^1_ex(t) : = {X_1,0(t),⋯, X_N,0(t)}andX^2_ex(t) := {X_0,1(t), ⋯, X_0,N(t)} denote the population vector of NU-TLs holding CP-1 and CP-2 posts respectively.These types are exactly like those in Part-I (<cit.>). We describe the details with one exclusive type post (say post-P) and we refer to it as CP-post. 
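For simulation purposes (and for indexing the generator matrices introduced below), it is convenient to encode a tracked TL by the pair of levels occupied by post-P and post-Q. The following sketch is a minimal illustration of such an encoding, listing the exclusive types together with the mixed types defined in the next subsection, and implementing the shift transition described next; the function and variable names are ours and purely illustrative.

```python
from itertools import chain

N = 5  # number of tracked TL levels (illustrative)

# Types, in the order used later for the block generator matrix:
# mixed types (1,2),(2,1),(2,3),(3,2),...,(N-1,N),(N,N-1),
# then exclusive CP-1 types (l,0), then exclusive CP-2 types (0,k).
mixed_types = list(chain.from_iterable(((l, l + 1), (l + 1, l)) for l in range(1, N)))
ex1_types = [(l, 0) for l in range(1, N + 1)]
ex2_types = [(0, k) for k in range(1, N + 1)]
all_types = mixed_types + ex1_types + ex2_types
index_of = {typ: i for i, typ in enumerate(all_types)}

def shift(typ):
    """One shift transition: both tracked posts slide one level down;
    a post pushed below level N is dropped, and the TL leaves the tracked
    set (returns None) if it ends up with neither post."""
    l, k = typ
    l = l + 1 if 0 < l < N else (0 if l == N else l)
    k = k + 1 if 0 < k < N else (0 if k == N else k)
    return (l, k) if (l, k) != (0, 0) else None

print(len(all_types), "types;", shift((1, 2)), shift((N - 1, N)), shift((N, 0)))
```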
We have two types of transitions namely shift and share transitions that modify the NU-TLs holding exclusively the CP-post.Figure <ref> demonstrates the transitions.Share Transitions:In the share transition, a user first reads the CP-post and based on the interest generated, it shares CP-post with a random number of friends.When a user visits its TL, it reads some/all posts located on its TL and shares them with some of its friends.We illustrate the sharing transition in Figure <ref>.The user reads (and shares) the posts residing on different levels, with varying levels of interest based on many factors (as in Part-I <cit.>). Firstly, it reads posts on higher levels with higher probabilities than those on lower levels; the interest can also depend upon the influence of the content provider; and it can further depend upon the quality of the post etc.We assign probability r_i for reading the post at level-i, and note that r_1 ≥ r_2 ⋯≥ r_N. If the user finds the post interesting, which is determined by the post quality factor η, it shares the post with random number of its friends. And if the user shares more posts along with the tagged post (e.g., post-P), the level of thepost(inthe recipient TLs) changes accordingly. We consider this aspect into our model by defining ρ_i, where ρ_i is the probability of sharing i number of posts (see Figure <ref>). Furthermore, a user can respond more actively to the post of a more influential CP. Let w_j(≥ 1) be the influence factor of withj=1 or 2, and we assume thatthe post quality factorofis given by w_j η_j. Thus, the CP with high influence factor can obtain good results even with alower post quality.To simplify the notation, we use η_j to represent w_j η_j and with thisnotation,η_j ∈ [0, 1/w_j]. Shift Transition:In the shift transition, when the TL of user with CP-post is written by other users, the position of CP-post shifts down (see Figure <ref> and <ref>). CP-post propagation dynamics:Let𝒢_1 represent the subset of users with CP-post[Similarly𝒢_2contains subset ofusers withpost of CP-2.] at some level, while𝒢 contains the other users of social network without any post of our interest (i.e., post-P and post-Q).We assume the OSN(and hence 𝒢) has infinitely many users and note𝒢_1 at time t has,X_ex^1(t):= ∑_l≤ N X_l,0 (t), number of users.Group G has an infinite number of users/agents, and this remains the same irrespective of the size of G_1 (andG_2), which is finite at any finite time.Thus, the transitions between G andG_1 are more significant, and one can neglect the transitions withinG_1.It is obvious that we are not interested in transitions withinG (users without CP-posts). We thus model the action of these groups inthe following consolidated manner:* In the share transition, any user from 𝒢_1 wakes up after exp(ν) time (exponentially distributed with parameter ν)to visit its TL and writes to a random (IID) number of users of 𝒢(refer to Figure<ref>).* In the shift transition,The TL ofany user ofG_1 is written byone of the users of G, and the time intervals between two successive writes are exponentially distributed with parameter λ.Thestate[to be more precise,the components of the entire system state, corresponding to the post of CP-1.] of the network, X_ex^1 (t), changes when the first of the above-mentioned events occurs. 
At time t, we have X_ex^1(t) (see equation (<ref>)) number of users in 𝒢_1 and thus (first) one of them wakes up according to exponential distribution with parameter X_ex^1(t)ν.Similarly, the first TL/user of the group G_1 is written with a post after exponential time with parameter X_ex^1(t)λ.Thus, the state X_ex^1 (t), changesafter exponential time with parameter X_ex^1(t)λ + X_ex^1(t)ν. Thus, the rate of transitions at any time is proportional to X_ex^1(t), the number of NU-TLs at that time, and hence, the rate of transitions increase sharply as time progresses, when the post gets viral. Considering all the modeling aspects,the IID offspring generated by one (l,0)-type user are summarized as below (w.p. means with probability):ξ_l,0 ={[ e_l+11_l < Nw.p.θ:= λ/λ + ν; ζe_i w.p. (1-θ ) r_lρ_i ∀ i ≤ N;0 w.p. (1-θ)(1-r_l). ].where e_l represents standard unit vector of size N with one in the l-th position, 1_A represents the indicator, ζ is the random number of friends to whom the post is shared and r_l is the probability the user reads/views a post on level l. Recall that users (offspring) of exclusive type (0,i) are produced with probability ρ_i during the share transitions.From equation (<ref>) the offspring distributionisidentical at alltimeinstances t, ζ can be assumedindependent across users, andhenceξ_l are IID offspring from any type l user. Further, all the transitions occur after memoryless exponential times, and hence X_ex^1 (t) by itself isan MTBP withN- types(e.g. <cit.>), when one starts only this exclusive type TLs.PGFs and post quality factor: Let f_F( s, β) be the probability generating function (PGF) of the number of friends, , of atypical user, parametrized by β.For example,f_F(s, β) = exp (β (s-1) ) stands for Poisson distributed , f_F(s,β) = (1-β)/(1- β s) stands for geometric.Let m = f'_F (1, β) represent the corresponding mean. A user shares the post with some/all of its friends (ζ of equation (<ref>)) based on how engaging the post is. As mentioned before,the post quality factor η quantifies the extent of the CP-post engagement on a (continuous) scale of 0 to w_1,where η = 0 means the worst and η =w_1 is the best quality. We assume that the meanofthe number of sharesis proportional to this quality factor.In other words, m (η) = m η represents the post quality dependent mean of the random shares.Let f(s, η, β) represent the PGF of ζ.For example, for Poissonfriends, the PGF and the expected value of ζare given respectively by:f (s, η, β) = f_F(s, ηβ) = exp (βη (s-1) )m (η) = ηβ. For Geometric friends, one may assume the post quality dependent parameter β_η =(1-β)/(1-β+βη),m(η) = ηβ .And then the PGF of ζ is given by f(s,η,β) = f_F(s, β_η) = (1-β_η)/(1-β_η s). One can derive such PGFs for other distributions of .Interestingly enough, we find that mostof the analysis does not depend upon the distribution ofbut only on its expected value. Let s :=(s_1, ⋯,s_N ) and(s, η ):= ∑_i = 1^N f(s_i,η, β) ρ_i. Thepost quality factor dependentPGF,of the offspring distribution of the overall branching process,is given by (see equation (<ref>)): h_l,0(s)=θ(s_l+11_l<N + 1_l=N) + (1-θ)r_l(s, η ) +(1-θ)(1-r_l). Generator matrix The key ingredient required for analysis of anyMTBPis its generator matrix. We begin with the generator forMTBP that represents the evolution of unread TLs with CP-post. We refer to this process briefly as TL-CTBP, timeline continuous time branching process. 
The generator matrix, A, is given by A = (a_lk)_N× N, where a_lk= a_l( ∂ h_l( s)/∂ s_k | _ s =1 - 1_{l=k})and a_l represents the transition rate of atype-l particle (see <cit.> for details).For our case, from previous discussions a_l = λ + ν for all l.Further, using equation (<ref>), thematrix Afor our single CP case is given by (with c := (1-θ) mη, c_l = cρ_l)A_ex^1 = (λ + ν) [ [c_1 r_1 -1c_2r_1 + θ ⋯c_N-1r_1c_Nr_1; c_1 r_2 c_2r_2 -1 ⋯c_N-1r_2c_Nr_2; ⋮; c_1 r_N-1c_2r_N-1 ⋯ c_N-1r_N-1 -1 c_N r_N-1 + θ; c_1 r_Nc_2r_N ⋯c_N-1r_Nc_N r_N -1; ]] .The exclusive types corresponding to CP-2 can be defined in exactly a similar way.§.§.§ Mixed-types:These are the TLs having the posts of both the CPs (i.e., bothpost-P and post-Q), i.e., the TLs in 𝒢_1 ∩𝒢_2.Denote by X_l,k (t)the number of users withpost-P on the l-th level andpost-Q on the k-th level of their TLs at time t. We classify theseTLs as (l,k) type TLs. We consider the analysis with initial TLs having the post-P and post-Q on the top levels, i.e., we begin with either (1,2) or (2,1) type TLs.It is not difficult to start with other types of TLs, but the expressions become complicated, and we would like to explain the results in a simplified manner.Now, with a shift transition, a (1,2) type (a (2,1) type) gets converted to a (2,3)type (a (3,2) type respectively), which further gets converted to (3,4) (to (4,3)) type with another shift, and so on. Thus, we have 2(N-1) mixed-type TLs, whichat time t aregiven by,X_mx (t)= ( X_mx1 (t), X_mx2(t) )with X_mx1 (t) :={ X_1,2(t),X_2,3(t), ⋯, X_N-1,N (t) } , X_mx2(t) :={ X_2,1 (t), X_3,2(t), ⋯,X_N,N-1 (t) }.And the group𝒢, as before, has all the other TLs without the post of either CP.Transitions: Recall that a shift transition occurs when a user of 𝒢 writes to a user/TL of 𝒢_1/𝒢_2. In this event, the exclusive-type TLs are changed in as described in (<ref>). While for the mixed-types, the position of each post slides down by one level. For example an (l,k) type TL(withl = k+1 or k-1) gets convertedto (l+1,k+1) type when l, k < N; and(N-1, N) and(N, N-1) type TLsget converted to exclusive-types (N,0) and (0, N)respectively.In the share transition,the exclusive-type TLs propagate as in the case of single CP. Whereas a mixed-type TL, say (l,l+1),undergoes thefollowing changes when subjected to the share transition * The user first views the post-P with probability (w.p.) r_l and shares the same with some of its friends (as in single CP case). * The post-Q is below post-P and recall that the posts are of similar nature. The interest of the users to read the second post of similar naturewould be lesser. We assume thattheuser views the second post w.p.δ. * When the user views/reads both the posts, it can sharepost-P alone with some of its friends, post-Q alone with some others, and both the posts with some more.Otherwise, only post-P is shared. And the TLs of exclusive-types are produced when a user shares post-P or post-Q only.While mixed-type TLs are produced when it shares both the posts.* When only one CP's post is shared,e.g.post-P, it can produce type (i,0) w.p. ρ̅_i,i = 1,2⋯, N-1 and that ∑_i=1^N-1ρ̅_̅i̅ =1. It can not produce (N, 0) type as the user has already discarded one post, that ofCP2.Recall that a TL of type i is produced when (i-1) more posts are shared with it after the CP's post.When both the posts are shared with the same friend, the mixed-type (i+1, i) and (i, i+1)(withi < N) areproducedw.p. 
pρ̅_i and (1-p)ρ̅_i respectively.With high probability, post-P is shared first followed by sharing of post-Q,as we started with (l, l+1) TL.Hence, the order of the posts in the recipient TLs would be reversed with high probability, andp would, in general, be larger than (1-p). We have similartransitions with (l+1,l) type TLs also.§.§.§PGF and the overall generator matrix The PGF for the two CPs case can be obtained using the above modeling details as before. The random number of friends with whom both the posts are shared, is now parametrized by η_1 η_2. Whereas the random number of friends with whom exclusive post-P or post-Q is shared, is parametrized by η_1(1-η_2) or η_2 (1-η_1) respectively. For example, if the number of friends, , is Poisson with parameter β, then random (sampled) number of friends with whom both the posts are shared isPoisson with parameterβη_1 η_2.Denote byh_l, l+1 ( s)the PGF for (l,l+1) type with the following notations: s := {s^1_ex, s^2_ex,s_mx1, s_mx2}, s_mx1 = {s_l,l+1}, s_mx2 = {s_l+1,l},(s, η) := ∑_i =1^N-1f( s_i, η,β)ρ̅_̅i̅.We obtain h_l, l+1 ( s) by conditioning on the events of the first transition, h_l, l+1 ( s)=θ ( s_l+1,l+21_l<N-1 + s_N,01_l=N-1)+(1-θ)(1-r_l)+ (1-θ)r_l(1-δ)( s^1_ex, η_1 )+(1-θ)r_l δ(( (1-p)(s_mx1, η_1η_2 ) + p(s_mx2, η_1 η_2 ))( s^1_ex, η_1(1-η_2) )( s^2_ex, η_2 (1-η_1) ) )h_l+1, l ( s)=θ ( s_l+2,l+11_l<N-1 + s_0,N1_l=N-1)+(1-θ)(1-r_l)+(1-θ)r_l (1-δ)( s^2_ex, η_2 ) + (1-θ)r_lδ(( p(s_mx1, η_1η_2 ) + (1-p)(s_mx2, η_1 η_2 )) ( s^1_ex, η_1(1-η_2) )( s^2_ex, η_2 (1-η_1) ) ).And the PGF for exclusive-types isas in the single CP case, e.g., h_l, 0 ( s) = h_l,0 ( s^1_ex). The generator matrix𝔸 has the following structure:𝔸 = [[A_mx A^1_mx,ex A^2_mx,ex; 0A^1_ex 0; 0 0A^2_ex ]],where: a) matricesA^j_ex for j = 1, 2areas in (<ref>); b) the matrix A_mxcorresponds to transitions within the mixed-types and is given by the following when types are arranged in the following order(1,2),(2,1),(2,3),(3,2),⋯, (N-1,N),(N, N-1), A_mx= (λ + ν)[[ z'_1r_1 -1z_1 r_1θ +z'_2 r_1…z'_N-1r_1 z_N-1r_1;z_1 r_1 z'_1r_1 -1z_2 r_1… z_N-1r_1z'_N-1r_1;z'_1r_2z_1 r_2 z'_2r_2- 1…z'_N-1r_2z_N-1 r_2;z_1 r_2z'_1r_2z_2 r_2… z_N-1r_2z'_N-1r_2;⋮⋮⋮⋱⋮; z'_1 r_N-2z_1 r_N-2z'_2r_N-2… θ + z'_N-1 r_N-2z_N-1 r_N-2;z_1 r_N-2z'_1r_N-2z_2 r_N-2… z_N-1r_N-2θ + z'_N-1r_N-2; z'_1 r_N-1z_1 r_N-1 z'_1⋯ z'_N-1 r_N-1-1z_N-1 r_N-1;z_1 r_N-1z'_1r_N-1z_2 r_N-1… z_N-1r_N-1z'_N-1 r_N-1 -1 ]],withc_mx := δ(1-θ)η_1η_2 m, z'_i := (1-p)c_mxρ̅_̅i̅ and z_i := p c_mxρ̅_̅i̅for all i; andc) the matrix A^j_mx,ex for j=1,2 represents the transitions from mixed-types to exclusive-types (exclusive CP types) andA^1_mx,ex= 1,0 2,0⋯N-1,0 N,01,2c_mx,1 r_1 ρ̅_1c_mx,1 r_1 ρ̅_2⋯c_mx,1 r_1ρ̅_N-10 2,1 c'_mx,1 r_1 ρ̅_1c'_mx,1 r_1 ρ̅_2⋯c'_mx,1 r_1 ρ̅_N-10 2,3 c_mx,1 r_2 ρ̅_1c_mx,1 r_2 ρ̅_2⋯c_mx,1 r_2 ρ̅_N-10 3,2c'_mx,1 r_2 ρ̅_1c'_mx,1 r_2 ρ̅_2⋯c'_mx,1 r_2 ρ̅_N-10⋮ ⋮ ⋮ ⋱ ⋮ ⋮N-2, N-1c_mx,1 r_N-2ρ̅_1c_mx,1 r_N-2ρ̅_2⋯c_mx,1 r_N-2ρ̅_N-10N-1, N-2 c'_mx,1 r_N-2ρ̅_1c'_mx,1 r_N-2ρ̅_2⋯c'_mx,1 r_N-1ρ̅_N-10N-1, Nc_mx,1 r_N-1ρ̅_1c_mx,1 r_N-1ρ̅_2⋯c_mx,1 r_N-1ρ̅_N-1 θN, N-1 c'_mx,1 r_N-1ρ̅_1c'_mx,1 r_N-1ρ̅_2⋯c'_mx,1 r_N-1ρ̅_N-10 (λ + ν)where c_mx,j :=(1-θ)mη_j [1-δ+ δ (1-η_-j )] =(1-θ)mη_j(1-δη_-j) c'_mx,j :=(1-θ)m δη_j (1-η_-j )-j :=1 1_{j = 2} + 21_{j = 1} Because of0 sub-matrices of(<ref>), the matrix𝔸 is not positive regular, and hence, the underlying MTBP is decomposable (e.g., <cit.>). 
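As a numerical sanity check on the block structure above, the following sketch assembles the exclusive generator A_ex^1 and the mixed generator A_mx for a small example and reads off their largest real eigenvalues, which govern the growth rates analyzed in the next subsection. All parameter values are illustrative assumptions, and ρ and ρ̅ are taken uniform.

```python
import numpy as np

# Illustrative parameters (not from the paper): N levels, rates lambda/nu,
# mean friends m, quality factors eta_1, eta_2, discount delta, order prob p.
N, lam, nu, m = 5, 1.0, 1.5, 4.0
theta = lam / (lam + nu)
eta1, eta2, delta, p = 0.9, 0.7, 0.5, 0.7
r = 0.9 * 0.6 ** np.arange(1, N + 1)          # reading probabilities r_l = d1 * d2^l
rho = np.ones(N) / N                           # rho_i
rho_bar = np.ones(N - 1) / (N - 1)             # rho_bar_i, i < N

# Exclusive CP-1 generator: (lambda+nu) * ( r_l * c_k - I + theta on the superdiagonal )
c = (1 - theta) * m * eta1
A_ex1 = (lam + nu) * (np.outer(r, c * rho) - np.eye(N) + theta * np.eye(N, k=1))

# Mixed-type generator, types ordered (1,2),(2,1),(2,3),(3,2),...,(N-1,N),(N,N-1)
c_mx = delta * (1 - theta) * eta1 * eta2 * m
z, z_p = p * c_mx * rho_bar, (1 - p) * c_mx * rho_bar
M = 2 * (N - 1)
A_mx = np.zeros((M, M))
for l in range(1, N):                          # rows for the pair (l,l+1) / (l+1,l)
    A_mx[2*l - 2, 0::2], A_mx[2*l - 2, 1::2] = z_p * r[l - 1], z * r[l - 1]
    A_mx[2*l - 1, 0::2], A_mx[2*l - 1, 1::2] = z * r[l - 1], z_p * r[l - 1]
    if l < N - 1:                              # shift: (l,l+1)->(l+1,l+2), (l+1,l)->(l+2,l+1)
        A_mx[2*l - 2, 2*l] += theta
        A_mx[2*l - 1, 2*l + 1] += theta
A_mx = (lam + nu) * (A_mx - np.eye(M))

# Largest real eigenvalues: growth rates of the exclusive and mixed populations
alpha_1 = np.max(np.real(np.linalg.eigvals(A_ex1)))
alpha_mx = np.max(np.real(np.linalg.eigvals(A_mx)))
print(f"alpha_1 = {alpha_1:.3f}, alpha_mx = {alpha_mx:.3f}")
```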
§.§ Analysis of the mixed-type populationFrom the structure of generator matrix 𝔸 given by (<ref>), it is clear that thesubgroup of types corresponding to mixed populations,{ (l, k) : l ≥ 1, k ≥ 1 l = k+1 k = l+1 }, survive on their own. A mixed-type can be produced only by another mixed-type. Note thatthe mixed-types can produce exclusive CP types, but not the other way round (see the matrix in (<ref>)).Thus, the extinction/virality analysis of the mixed population can beobtained independently.To begin with, we have the following result. i) If 0 < θ, p < 1,matrix e^A_mx t for any t > 0, is positive regular.ii) Let α_mx be the largest eigenvalue of the generator matrix A_mx. Thenα_mx∈ ( (c_mx r.ρ̅-1), (c_mx r.ρ̅ -1 + θ) )(λ+ν) where the reading probability vectorr is redefined as (r_1 ⋯ r_N-1).When r_l = d_1d_2^l ∀ l:α_mx→(c_mx r.ρ̅ -1 + θ d_2)(λ+ν), c_mx := δ(1-θ)η_1η_2 mN →∞.iii)Further, the left eigenvector u_mx = (u_mx,1, ⋯, u_mx, 2N-2) corresponding to α_mx satisfies for any 2≤ l≤ N-1:u_mx,2l-1 =∑_i = 0^l-1ρ̅_l-i/ρ̅_1(θ/σ_mx)^i u_mx,1 ;u_mx,2l=∑_i = 0^l-1ρ̅_l-i/ρ̅_1(θ/σ_mx)^i u_mx,2; σ_mx = α_mx/λ + ν + 1. And the right eigenvector v_mx = (v_mx,1, ⋯, u_mx, 2N-2) corresponding to α_mx satisfies for any 1 ≤ l≤ N-2:v_mx,2l-1 = ∑_i = 0^N-1-lr_l+i/r_N-1(θ/σ_mx)^i v_mx,2N-3; v_mx,2l-2 = ∑_i = 0^N-1-lr_l+i/r_N-1(θ/σ_mx)^i v_mx,2N-2.iv) The process{v_mx. X_mx(t)e^-α_mx t}is anon-negative martingale, where v_mx is right eigenvector andlim_t→∞X_mx(t, ω)e^-α_mx t = W_mx(ω)u_mx for almost allω.Proof: The proof is given in Appendix A. From part (ii) of the above Theorem, the mixed TLs get viral when c_mx r.ρ̅ > 1, and the rate of explosion on viral paths equals α_mx. Whenα_mx< 0, the mixed population gets extinct surely, i.e., P(X_mx (t) = 0t > 0|X_mx (0) =e_l,k ) = 1However, before the extinction, the mixed-type TLs generate exclusive-type TLs which then evolve on their own. Further, these exclusive-type TLs canget viral ifα_j >0 (j=1 or 2).And this is possible because α_mx is less than α_j for j =1,2. For example, consider the case withρ_l= ρ̅_l = 1_{ l = 1} and as N →∞,α_j → (1-θ) m η_j -1+ θ d_2 while α_mx→c_mx-1+ θ d_2 = (1-θ) δ m η_1 η_2 -1+ θ d_2; and clearly α_j > α_mx.We will now deviate to derive some results in a special type of decomposable branching process; these results would be usedlater for analyzing the propagation characteristics of competing posts.§ TYPE-CHANGING DECOMPOSABLE BRANCHING PROCESS In our example, `mx' (mixed)classparticles produce the offspring of all the classes whereas an 'ex' (exclusive) class particle produces offspring of its class only. This allows us to split the generator matrix into two sub-matrices as below, which would facilitateindependent/separateanalysis of each 'ex' class:[ [A_mx A^1_mx,ex; 0A^1_ex ]] [ [A_mx A^2_mx,ex; 0A^2_ex ]].In particular, we are interested in deriving the time evolution of`expected net progeny'(a measureliketotal progeny, which would be defined soon), which would represent the expected number of shares in our social network context. The above processes are slightly different from the usual decomposablebranching process;the difference lies in theevents of the transition/reproduction epoch.Any parent at the transition epoch either produces a random number of offspring (as is usually considered in branching processes)or its type gets changed;one of the two events takes place. And then it dies.We consider a decomposable branching process consists of two irreducible classes, namely mixed (M_x) class and exclusive (E_x) class. 
Particles of M_x class produce particles of M_x class as well as that of E_x class. While particles of E_x produces particles of E_x class only. In one of the two events at a transition epoch, a particle of l ∈ M_x type wakes up after exponentially distributed time with parameter ν, i.e., exp(ν) and produces a random number of offspring. Whereas in the other event, the type of the particle gets changed after exp(λ) time. It is easy to see that probability of the former event is 1-θ and that of the latteris θ where θ = λ / (λ+ ν). We refer to this process briefly as type-changingdecomposable branching process (TC-DBP).Let m_l,k represent the expected number of offspring of type k produced by a parent of type l, where l and k can be of E_x class or M_x class. And a_l,k is the probability that a l ∈ M_x particle gets converted to a k ∈ M_x particle. In a similar way, type change transitions are allowed within E_xclass, however, there are no type changes possible from one class to another.The generator matrix of such a TC-DBP has the following structure:[ [A_mx A_mx,ex; 0A_ex ]], where A_mx represents all the transitions between types belonging to class M_x, A_ex represents all the transitions between types belonging to class E_x,whileA_mx,ex represents the transition between M_x and E_x (offspring ofclass E_x producedby class M_x).The matrix A_mx includes type change as well as real transitions as below: A_mx :=[[ θ a_1,1 + (1-θ)m_1,1-1θ a_1,2 + (1-θ) m_1,2⋯θ a_1,M + (1-θ) m_1,M; θ a_2,1 + (1-θ)m_2,1θ a_2,2 + (1-θ)m_2,2 -1⋯ θ a_2,M + (1-θ)m_2,M; ⋮;θ a_M,1 + (1-θ) m_M,1 θ a_M,2 + (1-θ)m_M,2⋯ θ a_M,M + (1-θ) m_M,M -1;]].The matrix A_ex= (( θ a_l, k + (1-θ) m^e_l,k- 1_l=k)) has exactly similar structure, with the only difference being that now l, k ∈ E_x, but a_l,k are the same (i.e., type change transitions are the same).There are no type changes from one class to another, henceA_mx, ex= (( m_l,k )), with l ∈ M_x and k ∈ E_x. Our focus is to investigate the evolution of the number of shares of E_x class particles when started with a particle of M_x. Note that this kind of branching processes can model various real-world applications,including our social network example.§.§Analysis: time evolution of the expectednet progeny We are now ready to study the time evolution of the expected net progenyin TC-DBP. Prior to that, we define the relevant terms.Two different notions of `total' progeny: We emphasize that there are two different notions for the total progeny in TC-DBP, as opposed to the standard one in the branching processes. One may view the type-changing as the production of one offspring of a different type, and thereby adding one to the total progeny (for each type-change). This phenomenon is the usual way the total progeny is counted in standard BPs.Alternatively, one may not view type-change as an offspring, which can lead to a different (new) notion of total progeny that counts only the new offspring.For instance, as we already discussed, in a social networkone needs only the count of total shares (the number of distinct users shared with the post of interest).We refer to the progeny that does not count the type-changes as `net progeny', while the one that counts all the transitions as the usual `total progeny.'In this section, we derive the time evolution of the expected net progeny.Using the results on net progeny one can also obtain corresponding results for total progeny[To the best of our knowledge there are no results on total progeny of decomposable branching processes.] 
in the standard decomposable branchingprocesses (whichis of independent importance).It is clear that the total progeny ofa decomposable branching process is obtained by substituting θ = 0 in the expression for the expected net progeny of appropriate TC-DBP.We obtain these results by using slight tweaks of the existing methods to analyze such branching processes; in particular we study the functions representing the time evolution ofexpected net progeny asfixed points in some appropriate Banachspacesand derive the required analysis by obtaining the approximate fixed point solutions(for applications like our social network).Thenet (total) progeny at a time instance, say t,represents the total accumulated population (i.e., including the dying particles) of all types till t, without (with) considering the type-changes. We study the evolution of thenet progeny for both exclusive class and mixed class particles. When one starts only with 'exclusive' parents, i.e.,parents from class E_x and consider expected net progeny of exclusive class,then the resulting branching process is well known irreducible multi-type continuous time branching process (e.g., viral branching process representing the propagation of single post in Par1-<cit.>). We derived the net progeny (a.k.a., number of shares)in Part-I using standard tools, but here we would require the same without considering the count of initial population.That is provided in Appendix B and the solution in general case is given by (when the matrix is invertible):y^e (t) = (e^ A_ex t - I )λ_ν(1-θ ) A_ex^-1 [ [ ∑_k ∈ E_xm^e_1,k; ∑_k ∈ E_xm^e_2,k;⋮; ∑_k ∈ E_xm^e_N,k ] ].For the special case of OSNs, we have the following simplification (see Appendix B):y^e_l (t)≈ ( e^α_e t- 1) h_l^eh_l^e =λ_ν(1-θ) r_l m η_1/α_el ∈ E_x, and α_e ≈m η_1 ∑_lr_lρ_l+ λ_νθΔ_r-1, when r_l+1 / r_l = Δ_r and ρ̅_l = ρ_l for all l. The above result is true even if A_ex is not invertible;further whenα_e > 0, i.e.,when the exclusive cangetviral, then α_e is the Perron root of A_ex (see Appendix B).We now focus on investigating the evolution of the expected net progeny ofexclusive class when the process starts with a mixed class particle. We obtain this by first deriving appropriate fixed point equations.§.§.§ Derivation of an appropriate fixed point (FP) equation Denote by y_l (t) theexpected net progeny of E_x class till time t when the process is initiated with one type-l particle ofM_x class, and y(t) = {y_l (t)}_l represents the `net progeny' vector till time t and y =y (·) represents the vector ofnet progeny time evolution. We will show that y satisfies a fixed point equation in an appropriate functional space, i.e.,y =z wherez: = {z_l(· ) }_l represent finite number of waveforms on time interval [0, ∞) and satisfies an appropriate fixed point equation, z_l (t)=G_l (z) (t)(for all l,t). We arrive at the fixed point function G = {G_l} by conditioning on the events related to the first transition epoch. Let the random variable τ represent the time instance of the first transition epoch, whichis exponentially distributedwith parameter λ+ν. Conditioning on the first transition events, we observe that the net progenyy (·) should satisfy the following fixed point equation (for all l and t):y_l (t) = G_l ( y) (t):= θ∫_0^t∑_k ∈ M_xa_l, ky_k (t-τ) (λ+ν) e^- (λ+ν) τ dτ+(1-θ ) ∫_0^t∑_k ∈ M_xm_l,k (1+ y_k (t-τ))(λ+ν) e^- (λ+ν) τ dτ+ (1-θ)∫_0^t ∑_k∈ E_x m_l,k( 1 +y^e_k (t-τ) )(λ+ν)e^- (λ+ν) τ dτ.The above is due to the following reasons: * The type-l undergoes a shift transition w.p. 
θ, its type gets changed to type-k of thesame class(i.e., M_x).* The type-l undergoes a share transition w.p. 1-θ,itproduces m_l,k offspring belonging to either classM_x or E_x. As per example, it produces particles of M_x whenboth post-P and post-Q are shared, whereas particles of E_x are produced when only one of the posts is shared. In Appendix C, we showed thatG( ·) is a contraction mapping and hence has unique fixed point solution. In view of the uniqueness, it suffices to obtain any solution of G, which is considered immediately.§.§.§ Solution of the fixed point equation The solution of the fixed point equation is derived in Appendix C and it equals (when the matrices are invertible)y (t)=( e^A_mx t -I )c_v_0 +e^A_mx t∫_0^te^- A_mx s A_mx, exe^ A_ex sA_ex^-1 c_v_1ds c_v_0= A_mx^-1λ_ν(1-θ )( [ [ ∑_k ∈ M_xm_1,k; ∑_k ∈ M_xm_2,k;⋮; ∑_k ∈ M_xm_N,k ]] + A_mx, ex (1- A_ex^-1 c_v_1 ) ) c_v_1= λ_ν (1-θ) [ [ ∑_k ∈ E_xm^e_1,k; ∑_k ∈ E_xm^e_2,k;⋮; ∑_k ∈ E_xm^e_N,k ] ] . We then showed that the net progeny has the following simplifiedform for our OSN example (details in Appendix C,and approximation is good as N →∞):y_l (t) ≈g_l + h_l e^α_e t+ o_le^α̅ t , for some appropriate coefficients {h_l, o_l, g_l}.As in exclusive case, this approximation is true even the matrices are not invertible, we would only require that someeigen values are positive (e.g., Case I in Appendix C). We discuss more details of this representation in the coming sections.Net progeny when started with Mixed class: We assume the following structure for fixed point waveform, y_l(t) =g_l + h_l e^α_e t+ o_l e^α̅ tl ∈ M_x and α̅is a constant (which we will find out). We show that these kind of functions indeed satisfy the requiredfixed point equations.Towards this, we have the following Lemma.Let α_e, α_mx be the largest eigenvalue of the matrices (A_ex - I ) λ_ν, (A_mx- I ) λ_ν respectively.When α_e>0, i.e. the exclusive class is super-critical, a solution of the above fixed point equation y_l= G_l(y )is the following: * When the M_x population gets extinct with probability one (i.e., when α_mx<0), then y_l (t) = y^e_l(t) =g^e_l + h^e_l e^α_e t ,g_l^e = - h_l^e.* When the M_x population survives with non zero probability (i.e., when α_mx>0), then y_l (t) =g_l + h_l e^α_e t+ o_le^α̅ twhere g_l, h_l, o_l are as givenas: h_l =λ_ν(1-θ) ∑_k∈ E_x ∪ M_xm_l,k/α_e- (1-θ) λ_ν∑_k∈ E_x m_l,kh^e_k /( α̅-α_e)+ (1-θ) λ_ν/ ( α̅-α_e) ∑_k ∈ M_x ∪ E_xα̅(λ_ν+ α_e)m_l,k -(1-θ) λ_να̅/ ( α̅-α_e)(λ_ν∑_k ∈ M_x(θa_l, k + (1-θ) m_l,k )∑_k'∈ E_x ∪ M_xm_k, k' ) g_l = -λ_ν(1-θ) ∑_k∈ E_x ∪ M_xm_l,k/α_e + (1-θ) λ_ν∑_k ∈ E_x m_l,kh^e_k /α̅-(1-θ) λ_ν/α̅α_e ∑_k ∈ M_x ∪ E_x (λ_ν+ α_e)m_l,k+(1-θ) λ_ν/α̅α_e(λ_ν∑_k ∈ M_x(θa_l, k + (1-θ) m_l,k )∑_k'∈ E_x ∪ M_xm_k, k' ). o_l = (1-θ) λ_να_e ∑_k∈ E_x m_l,kh^e_k /( α̅-α_e) α̅+ (1-θ) λ_ν/ ( α̅-α_e) α̅ (λ_ν∑_k ∈ M_x(θa_l, k + (1-θ) m_l,k )∑_k'∈ E_x ∪ M_xm_k, k'-(λ_ν+ α_e) ∑_k ∈ M_x ∪ E_xm_l,k )and α̅ is as given in equation (<ref>), i.e., α̅=(eig ( A_mx )- 1 )λ_ν. Assume that α_mx is the only eigenvalue of (A_mx- I ) λ_ν larger than zero,then we have α̅ = α_mx.Proof The proof is given in Appendix C. From (<ref>), for Social network example:o_l = r_l (1-θ) λ_ν/( α̅-α_e) α̅ ( λ_νΔ_r θ +(1-θ) λ_νm η_1∑_k c_k^m r_k- (λ_ν +α_e)m η_1)Thus, we notice that for decomposable branching processes, the growth rates of thecurrent population as well as the total shares (<ref>) areinfluenced by two distinct exponential functions. 
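The closed-form expression for y(t) can be evaluated with matrix exponentials and vector-valued quadrature. The sketch below is only an illustration of how one might do so (assuming A_ex is invertible); the constant vectors c_v0 and c_v1 are treated as inputs, computed beforehand from the offspring means as displayed above.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

def net_progeny(t, A_mx, A_mx_ex, A_ex, c_v0, c_v1):
    """Evaluate y(t) = (e^{A_mx t} - I) c_v0
                     + e^{A_mx t} * int_0^t e^{-A_mx s} A_mx_ex e^{A_ex s} A_ex^{-1} c_v1 ds
    by vector-valued quadrature of the integral term."""
    w = np.linalg.solve(A_ex, c_v1)                       # A_ex^{-1} c_v1
    integrand = lambda s: expm(-A_mx * s) @ A_mx_ex @ expm(A_ex * s) @ w
    integral, _err = quad_vec(integrand, 0.0, t)
    E_t = expm(A_mx * t)
    return (E_t - np.eye(A_mx.shape[0])) @ c_v0 + E_t @ integral

# Usage: with A_mx, A_mx_ex, A_ex the TC-DBP generator blocks and c_v0, c_v1 as above,
#   y_vec = net_progeny(3.0, A_mx, A_mx_ex, A_ex, c_v0, c_v1)
# gives the expected E_x-class net progeny at t = 3 for each starting M_x type.
```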
Also,both of them (like in other variants of the branching process) are influenced by the same growth patterns.We now return to our OSN example.§ CP-WISE PERFORMANCE MEASURES Mixed-type TLs keep producing the exclusive-typesas well as their own type TLs till driven to extinction, i.e., when none of the TLs contain both the CP posts. Once the mixed-types get extinct,the leftover exclusive-types do not influence each other (matrix (<ref>)), and hence, they evolve on their own. Nevertheless,their survival/growth depends upon the effects created by mixed-types before death.When the mixed population gets viral,total CP population (sum of exclusive-type and mixed-type TLs having that particular CP's post) is clearly influenced by mixed-types. The mixed population thus gives an impetus to the propagation of exclusive CP-1 and CP-2 posts with different degrees, and consequently, introduces competition between these posts for relative visibility. In other words, the more the number ofexclusive-type TLs generated by the mixed-types, the better it is for the corresponding CP (owning the said exclusive-type TLs). To summarize, the evolution of the population corresponding to a particular CPdepends upon the competition regardless of whether the source of the competition (i.e., mixed-types) dies out or not.Recall that the underlying MTBP is decomposable. This MTBP behaves significantly different from the MTBP in the single CP scenario. Here it may happen that the population corresponding to a particular CP gets extinct with probability one while the other can get viral with positive probability. Further, they can have different growth rates in the event of virality. §.§ CP-wise extinction probabilitiesWe say is extinct when all the mixed-type andexclusive CP-j type TLs get extinct. By Lemma 1 of Part-I <cit.>, the sub-matrix A_ex^j of (<ref>) is irreducible.Thus, all exclusive-type TLs of one CP survive/die together when the process starts with exclusive-type TL of the same CP. And the same is the case for mixed population when started with a mixed type TL, as matrix A_mx is irreducible.With e_l,k as the unit vector with one only at (l,k) position where l, k =l+1 or l-1, we define the extinction probability of as below: q_l,k^j := P(X^j_ex(t) = 0, X_mx (t)= 0 t > 0|X (0) =e_l,k). q^j := { q^j_ex,q^j_mx1,q^j_mx2}q^j_ex:= {q^j_l, 0}_l, q^j_mx1 := {q^j_l, l+1}_l q^j_mx2 := {q^j_ l+1, l}_l.Byconditioning again on the events of first transition, we obtain q_mx1^1 via fixed point (FP) equations:q_l,l+1^1= θ ( q_l+1,l+2^11_{l < N-1} + 1_{l < N-1 } q_N, 0^1 )+(1-θ)(1-r_l) + (1-θ)r_l [ (1-δ) (q^1_ex,η_1 ) (q^1_ex,η_1(1-η_2) )+δ( p(q^1_mx2,η_1 2) +(1-p)(q^1_mx1,η_12)) ] ,(s', η) := ∑_i =1^N-1f( s_i', η,β)ρ̅_̅i̅and η_12 := η_1 η_2. One can write the fixed point (FP) equations for q_mx2^1, q_mx1^2 and q_mx2^2 in a similar way. And the expression for q_ex^j starting from exclusive CP types is same as that in thesingle CP scenario (<cit.>). We further have the following: When q^j_ex <1 = (1, ⋯, 1),we have unique solution in the interior of[0, 1]^2N-2, i.e., q^j_mx1<1,q_mx2^j <1. When q^j_ex =1,we have that ( q^j_mx1,q_mx2^j) =1 is the unique solution, under extra assumption that ρ_N = 0 andρ̅_i = ρ_i for all i < N.Proof: The proof is given in Appendix A. Thus when the exclusive types get extinct with probability one when started with exclusive types,thenthey get extinct with probability one even upon starting withmixed types. 
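In practice, the extinction probabilities are computed by monotone fixed point iteration started from the zero vector, which converges to the smallest root of q = h(q). The sketch below illustrates this for the exclusive CP-1 types (whose FP equations are as in the single CP case), assuming Poisson friends; the resulting q^1_ex can then be substituted as a constant vector into the mixed-type equations above. Parameter values are illustrative only.

```python
import numpy as np

def extinction_probs(pgf, N, tol=1e-12, max_iter=10000):
    """Smallest fixed point of q = h(q) on [0,1]^N via monotone iteration from 0.
    `pgf` maps a vector q in [0,1]^N to the vector (h_1(q), ..., h_N(q))."""
    q = np.zeros(N)
    for _ in range(max_iter):
        q_new = pgf(q)
        if np.max(np.abs(q_new - q)) < tol:
            break
        q = q_new
    return q

# Exclusive CP-1 types with Poisson(eta*beta) shares and h_{l,0} as in the text.
N, theta, eta, beta = 5, 0.4, 0.9, 3.0
r = 0.9 * 0.6 ** np.arange(1, N + 1)
rho = np.ones(N) / N
f = lambda s, e: np.exp(beta * e * (s - 1.0))    # PGF of zeta for Poisson friends

def h_exclusive(q):
    G = np.sum(f(q, eta) * rho)                  # sum_i f(q_i, eta, beta) rho_i
    shift = np.append(q[1:], 1.0)                # q_{l+1} if l < N, else 1
    return theta * shift + (1 - theta) * r * G + (1 - theta) * (1 - r)

q_ex = extinction_probs(h_exclusive, N)
print(np.round(q_ex, 4))
```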
When the exclusive can survive upon starting with their own types, then the exclusives can survive with positive probability even when started with mixed types. §.§ Evolution ofexclusive-type NU-TLs inpresence of mixed-typesThe evolution of exclusive CP population when started with exclusive-type TLs is same as in single CP scenario, and we have the following resultfor sufficiently large t X_ex^i(t)^T v^i≈ W_ie^α_i t,X_ex^i(t)≈ W_ie^α_i tu^i ;i = 1,2;-W_i is a non negative random variable, with P(W_i = 0) = extinction probability (<cit.>)-u^i,v^i are the normalized left and right eigenvectors corresponding to the eigenvalue α_i (Perron root) of the matrix A_ii for i = 1,2.We derive the evolution of exclusive-types when started with a mixed-type TL.Consider any i, by<cit.> we have the following result, when α_mxα_i.Let( X_ex^i(t), X_mx(t )) be the number of unread TLs having post of CP-i at various levelsat time t as before and let ℱ^i_t := σ{X_mx(t'), X^i_ex(t');t' ≤ t } be the natural sigma algebra.When this process starts with a mixed-type TL, then the stochastic process:{X^i_ex(t) ·v^ie^-α_i te^-α_i tX_mx(t)(α_i I - A_mx)^-1 A^i_mx,ex·v^i;ℱ^i_t; t ≥ 0 }is a martingale, where the vectors v^i, u^i are as defined above. Remarks: 1) Thus even in the presence of mixed population, the population corresponding to exclusive types eventually evolves with a growth rate given by α_i;when multiplied with e^-α_i t the otherwise exploding process (exploding with time) converges to a limit. 2)By Martingale property,the weighted sum of expected values ofthe individual components(all arecolumn vectors and · is the dot product),E[X^i_ex(t) ] ·v^i +E [X_mx(t) ] (α_i I - A_mx)^-1 A^i_mx,ex·v^i = e^α_1 t ( X_mx(0)(α_1 I - A_mx)^-1 A^1_mx,ex·v^i ). This is due to the following. From Lemma <ref>(without loss of generality consider i=1) and since the expected value of martingales are constant with respect to time, 9.56 E[X^1_exm(t).v^1e^-α_1 t +e^-α_1 tX_mx(t)(α_1 I - A_mx)^-1 A^1_mx,exv^1] =X_mx(0)(α_1 I - A_mx)^-1 A^1_mx,exv^1 E[X^1_exm(t) ]v^1.u^1+E [X_mx(t)(α_1 I - A_mx)^-1 A^1_mx,ex]v^1. u^1= e^α_1 tX_mx(0)(α_1 I - A_mx)^-1 A^1_mx,exv^1.u^1E[X^1_exm(t) ]+E [X_mx(t) ] (α_1 I - A_mx)^-1 A^1_mx,ex= e^α_1 tX_mx(0)(α_1 I - A_mx)^-1 A^1_mx,exNow using E [X_mx(t) ](α_1 I - A_mx)^-1 A^1_mx,ex≈∑_l v_l^mxe^α_mx tu_mx(α_1 I - A_mx)^-1 A^1_mx,ex E[X^1_exm(t) ]+ ∑_l v_l^mxe^α_mx tu_mx(α_1 I - A_mx)^-1 A^1_mx,ex ≈e^α_1 tX_mx(0)(α_1 I - A_mx)^-1 A^1_mx,ex. From Theorem <ref>.iv (under appropriate second moment conditions), we have for all large t:E[X^i_ex(t) ] ·v^i ≈c_ons1 e^α_mxt + c_ons2 e^α_1 t c_ons1= - (E [W_mx ] u_mx(α_i I - A_mx)^-1 A^i_mx,ex·v^i ) c_ons2 =( X_mx(0)(α_1 I - A_mx)^-1 A^1_mx,ex·v^i ).Thus the growth of the expected numberof TLs with a CPi-post (E[X^i_ex(t) ]), when the process starts with mixed-type TLs,is governedbytwo exponential curves.E[X^1_exm(l,t) ] =h̅_l e^α_1 t - o̅_l e^α_mx t;h̅_land o̅_l are the l-th component of the vectors X_mx(0)(α_1 I - A_mx)^-1 A^1_mx,ex and ∑_l v_l^mxu_mx(α_1 I - A_mx)^-1 A^1_mx,ex respectively. §.§Evolution of the expected number of shares In the single CP scenario, we derived the evolution of the expected number of shares, which serves as an important performance measure for the spread of the content. We now derive the time evolution of the expected number of shares in the two-CP scenario; which, in this case, is instrumental in obtaining performance measures such as relative visibility. 
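Before turning to the share counts, note that the mean-evolution statements of this and the previous subsection can be checked numerically from the exact mean of the MTBP, E[X(t)] = X(0) e^{𝔸 t}. The sketch below assumes the block matrices have been assembled from the displayed formulas (A_mx and A^1_ex, e.g., as in the earlier sketch); it is illustrative only.

```python
import numpy as np
from scipy.linalg import expm

def mean_populations(A_block, x0, times):
    """Exact mean evolution of the MTBP: E[X(t)] = x0 @ expm(A_block * t),
    with A_block the (decomposable) generator and x0 the initial type counts."""
    return np.array([x0 @ expm(A_block * t) for t in times])

# Sketch of use with the block generator of equation (<ref>), of sizes
# M = 2(N-1) (mixed) and N + N (the two exclusive classes); start from one (1,2)-type TL:
#   A_block = np.block([[A_mx,             A1_mx_ex,         A2_mx_ex        ],
#                       [np.zeros((N, M)), A_ex1,            np.zeros((N, N))],
#                       [np.zeros((N, M)), np.zeros((N, N)), A_ex2           ]])
#   x0 = np.zeros(M + 2 * N); x0[0] = 1.0
#   t_grid = np.linspace(1.0, 6.0, 6)
#   means = mean_populations(A_block, x0, t_grid)
#   cp1_exclusive = means[:, M:M + N].sum(axis=1)      # E[ sum_l X^1_ex(l, t) ]
#   # for large t, np.diff(np.log(cp1_exclusive)) / np.diff(t_grid) approaches alpha_1
```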
Let Y^j_l,k(t) be the total number of shares of posttill time t andy^j_l,k(t) represents its expected value, when started with one TL of the type (l,k) with k = l + 1 or l-1. Note that these shares include mixed-type shares as well as exclusive-type shares belonging to CP-j. We present the evolution for the number of shares to CPs' posts in non-viral and viral scenarios.§.§.§ Number of shares in non-viral scenario:With m < 1, any post (CP1-Post or CP2-Post)gets extinct with probability one. Even with m > 1,the CPj-post canget extinct with probability one,depending on other parameters.We refer to this as non-viral scenario. We obtain the total expected shares (before extinction) when the process starts with a TL of mixed/exclusive type. Recall that{y_l,k ^j} is expected number of shares of post when started with (l,k) type TL y_l,k^j = E[lim_t →∞Y^j (t) |X(0)=e_l,k]; j = 1,2.These { y_l,k^j }canbe obtained bysolving appropriate FP equations (below) as in the single CP scenario (<cit.>).Without loss of generality we consider shares of CP-1. Let y_mx1^j :={y^j_l, l+1}, y_mx2^j :={y^j_ l+1, l} and y^j_mx:=y^j_mx1+ y^j_mx2, by appropriate conditioningy_l,l+1=θ ( 1_{l < N-1} y_l+1,l+2 + 1_{l = N-1} y_N,0 )+(1-θ)r_l (1-δ) mη_1 (1 + y_ex1.ρ̅)+(1-θ)r_lδmη_1 [(1-η_2)(1+ y_ex1.ρ̅) +η_2(1+ py_mx1.ρ̅ +(1-p)y_mx2.ρ̅) ]where y_ex1 = { y_1,0^1, y_2, 0^1, ⋯, y_N-1,0}. And again for any l < N,y_l+1,l=1_{l < N-1}θ y_l+2,l+1 +(1-θ)r_lδmη_1 [(1-η_2)(1+ y_ex1.ρ̅)η_2(1+ py_mx1.ρ̅ +(1-p)y_mx2.ρ̅ )]. For the special case with ρ̅_l =ρ̅^l/∑_i=1^N-1ρ̅^i,r_l = d_1 d_2^l and as N →∞, we have(with -j := 2 1_{j=1} + 1 1_{j=2})y^j_mx.ρ̅ →( 2 c_jδ[ (1 + y^j_ex.ρ̅)(1-η_-j) + η_-j]+c_j (1-δ) (1 + y^j_ex.ρ̅))O^*_mx/1-c_mxO^*_mx;O_m_x^*=d_1d_2(1-ρ̅)/(1-d_2 ρ̅)(1-θ d_2) y^1_ex = { y_1,0,⋯, y_N-1,0}; y^2_ex = { y_0,1,⋯, y_0,N-1}. Here y^j_exjis similar to that in Part-I <cit.>, and{ y_l,k^j } with k = l+1 or l-1can be computeduniquely using {y^j_mxρ̅} and equation (<ref>) and backward induction. The derivation of these limits and expressions areprovided in Appendix D.§.§.§ Number of shares in viral scenarioIn viral scenarios, we have two sub-cases: 1) one when the mixed survives with non zero probability, and 2)when the mixed population gets extinct, but exclusive types may survive.In both these cases, the expected number of shares explode with time.This analysis can be obtained using net progeny of section <ref>, which is provided by (<ref>).We explain the results for the case with p=0 for ease of explanation, one can easily extend the results to general case using the results of Appendix B and C.Consider without loss of generality the number of shares for CP1-post.When one starts with (l, l+1)-type (and with p=0) one should consider the following modelling details for using expression (<ref>):a_l, k := a_(l,l+1),(k, k+1) = a_(l,0), (k,0) =1_{k = l+1}1_l < N, l,k ∈ M_xl, k ∈ E_x, m_l,k :=m_(l,l+1),(k, k+1)=m η_1 η_2 δρ̅_kr_l, l,k ∈ M_x , m_l,k := m_(l,l+1),(k,0) = m η_1r_l ρ̅_k (1-δ + δ (1-η_2)), l ∈ M_x,k ∈ E_x m^e_l,k :=m_(l, 0),(k, 0) =m η_1 r_l ρ_k,l ∈ E_x,k ∈ E_x. 
When one starts with ( l+1, l)-type (and with p=0) one should consider the following modelling details:a_l, k := a_(l+1, l),(k+1, k) = a_(l,0), (k,0) =1_{k = l+1}1_l < N; l,k ∈ M_xl, k ∈ E_x, m_l,k := m_(l+1, l),(k+1, k) =m η_1 η_2 δρ̅_kr_l, l,k ∈ M_x ,m_l,k :=m_(l+1, l),(k,0) = m η_1 r_lρ̅_k (1-δ + δ (1-η_2)) l ∈ M_x,k ∈ E_x m^e_l,k:= m_(l, 0),(k, 0) =m η_1 r_l ρ_kl ∈ E_x,k ∈ E_x.We derivedmuch simplified expressionsfor the same in Appendix C (case I andII),when ρ̅_l = ρ_l for all l,p =0and when r_l+1/r_l = Δ_r for all l < N and we reproduce the same here:Whenone starts with (l, l+1) type particle, irrespective of whether α_mx >0 or not, the expected number of shares/net progeny evolves exactly as when one started with an exclusive particle at l level. It is not difficult to see this equality with p=0, if one closely introspects the two evolutions (when one counts all shares that belonging to CP1). This case is studied as Case I in Appendix C and the final result is the following: y_l,l+1^1 (t)≈ λ_ν(1-θ) m η_1 r_l /α_1(e^α_1 t - 1 ) l,α_1≈ λ_ν( θΔ_r - 1+ (1-θ)m η_1∑_k ρ_kr_k ).One can have another approximate solution for this sub-case (see Appendix C, Case 2 for more details).When one starts with (l+1,l) type particle, from Case II we have: y_l+1, l^1 (t) ≈ h_l (e^α_1 t -1 )+ o_l (e^α_mx t - 1) o_l = r_l (1-θ) λ_ν/(α_mx -α_1) α_mx ( λ_νΔ_r θ + (1-θ) λ_νm^2 η_1^2 δ (1-η_2)+ η_2δ) ∑_k ρ̅_kr_k- (λ_ν +α_1)m η_1δ),h_l = λ_ν(1-θ) m η_1δr_l - o_l α_mx/α_1 ,α_mx ≈ λ_ν ( θΔ_r - 1 + (1-θ) mη_1 δη_2 ∑_k ρ̅_k r_k) α_1≈ λ_ν( θΔ_r - 1+ (1-θ)m η_1∑_k ρ_kr_k ).Appealing to<cit.>,the expected sharesgrow as the sum of two exponential curves where the first part corresponds toexclusive-types and the second one corresponds to the mixed-types. We translate the result of Theorem 4 of <cit.> to our case as follows.Without loss of generality we consider shares of with j = 1. With M_x= { (l,l+1), (l+1, l):l0, k0 } andE_x= { (l, 0) } from <cit.>, we have (for example,(l, l+1) ∈ M_x and l+1 < N )a_(l,l+1),(k, k+1) = 1_{k = l+1};m_(l,l+1),(k, k+1) =(z_k + z'_k) r_l, m_(l, 0),(k, 0) =m η_1 r_l ρ_km_(l,l+1),(k,0) = m r_l ρ̅_k (1-δ + δ (1-η_2)) ,m_(l+1, l),(k,0) =m r_l+1ρ̅_k (1-δ + δ (1-η_2)).Substituting the above, we have the following result for the expected number of shares to (with j =1) posty_l,l+1^j (t)≈g_l + h_l e^α_j t + o_l e^α_mx twhere g_l= -∑_i =1^N-1(z_i + z'_i) r_l/α_j +∑_i =1^N-1 c_mx,jr_l ρ̅_i /α_mx + ∑_i =1^N-1(z_i r_l + z'_i r_l + c_mx,jr_l ρ̅_i ) ( λ + ν +α_j )/α_mxα_j + 1 /α_mxα_j((λ + ν)θ+ ∑_i =1^N-1( z_i r_l+ z'_i r_l ) ∑_i =1^N-1(z_i r_l+ z'_i r_l + c_mx,jr_l ρ̅_i )) h_l=∑_i =1^N-1(z_i r_l + z'_i r_l + c_mx,jr_l ρ̅_i )/α_j- c_mx,jr_l ρ̅_i /( α_mx-α_j)+ α_mx( λ + ν +α_j ) / ( α_mx-α_j) ∑_i =1^N-1(z_i r_l + z'_i r_l + c_mx,jr_l ρ̅_i )-α_mx/ ( α_mx-α_j)((λ + ν)θ+ ∑_i =1^N-1( z_i r_l+ z'_i r_l ) ∑_i =1^N-1(z_i r_l + z'_i r_l + c_mx,jr_l ρ̅_i )) o_l = α_j c_mx,jr_l ρ̅_i/( α_mx-α_j) α_mx+ 1 / ( α_mx-α_j) α_mx ((λ + ν)θ+ ∑_i =1^N-1( z_i r_l+ z'_i r_l ) ∑_i =1^N-1(z_i r_l + z'_i r_l + c_mx,jr_l ρ̅_i ) ) -∑_i =1^N-1(z_i r_l+ z'_i r_l + c_mx,jr_l ρ̅_i ) ( λ + ν +α_j ) / ( α_mx-α_j) α_mx.One can write similar expressions for CP2-post and for arbitrary p. As mentioned before, α_mx≤α_j for j=1,2. 
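The large-N approximations above are easy to evaluate. The following sketch computes α_1, α_mx and the approximate share trajectory y^1_{l,l+1}(t) for the (l,l+1) start, under the stated assumptions ρ̅_l = ρ_l, p = 0 and r_{l+1}/r_l = Δ_r (and with α_1 > 0, i.e., in the viral regime); all numerical values are illustrative.

```python
import numpy as np

def share_growth_approx(t, l, lam, nu, theta, m, eta1, eta2, delta, r, rho, Delta_r):
    """Approximate expected CP-1 shares starting from one (l, l+1)-type TL:
    y_{l,l+1}(t) ~ lam_nu (1-theta) m eta1 r_l / alpha_1 * (exp(alpha_1 t) - 1),
    with alpha_1 and alpha_mx given by the large-N approximations in the text."""
    lam_nu = lam + nu
    alpha_1 = lam_nu * (theta * Delta_r - 1
                        + (1 - theta) * m * eta1 * np.sum(rho * r))
    alpha_mx = lam_nu * (theta * Delta_r - 1
                         + (1 - theta) * m * eta1 * delta * eta2 * np.sum(rho * r))
    y = lam_nu * (1 - theta) * m * eta1 * r[l - 1] / alpha_1 * (np.exp(alpha_1 * t) - 1.0)
    return y, alpha_1, alpha_mx

# Illustrative parameters (not from the paper's experiments)
N, lam, nu, m = 6, 1.0, 1.5, 4.0
theta, eta1, eta2, delta = lam / (lam + nu), 0.9, 0.7, 0.5
d1, d2 = 0.9, 0.6
r = d1 * d2 ** np.arange(1, N + 1)
rho = np.ones(N) / N
y, a1, amx = share_growth_approx(t=3.0, l=1, lam=lam, nu=nu, theta=theta, m=m,
                                 eta1=eta1, eta2=eta2, delta=delta,
                                 r=r, rho=rho, Delta_r=d2)
print(f"alpha_1={a1:.3f}, alpha_mx={amx:.3f}, y_(1,2)(3) ~ {y:.1f}")
```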
One can argue that y_l,l+1^j (t) grows with rate α_j in the long run, i.e., the growth rate of the expected number of shares to when mixed-type TLs get viral is carried by dominating rate (α_j) which is same as the growth rate ofshares in the single CP case.§.§.§ Numerical Evidence In Figures <ref> and <ref>, we consider Social network examples with details as given in the figure.Here we plotted the Monte Carlo (MC) based estimates of expected net progeny (or the expected number of shares) at different time points and also the theoretical estimatesobtained using approximate fixed point solutions given by (<ref>) and (<ref>).In these figures we are starting with either one each of (l, l+1)(for all l < N) or one each of (l+1, l) type, i.e.,with a total ofN-1 particles. We observe that the theoretical approximate solutions well approximate the net progeny trajectories estimated by MC basedsimulations.Based on these examples and many more such examples(not mentioned in this paper), we found that: a) the theoretical approximate solutions well approximate the MC based estimates when one starts with {(l, l+1)} types for almost all sets of the parameters;b) there is a good match between the two, for the sub-cases when one starts with {(l+1, l)} types only for small θ; the approximation is very good at small θ (for most of the cases withθ < 0.2).Thus we derived various relevant performance measures related to content propagation in the presence of competing posts. One can use these performance measures for any relevant optimization or game theoretic problems. In the following section we use some of these measures for problems related to online auctions and viral marketing.When the exclusive-typescan get viral (α_j > 0) and the mixed TLs get extinct w.p. 1 (α_mx < 0):Again appealing to <cit.>, the expected number of shares grows at rate α_j which is same as in the case of the single CP model. Basically, the mixed population produces exclusive-types before driven to extinction. The exclusive population then grows independently, and hence the overall growth rate is given by α_j only. Thus,y_l,l+1^j (t) =e^j_l,l+1 e^α_j t,where e^j_l,l+1 is the l-th component of the matrix-(λ + ν) (A_ex^-1k), where k is defined in Lemma 3 of Part-I <cit.>.Observe that the growth rate is again the same asthat in the Part-I <cit.>. § VIRAL MARKETING AND REAL TIME BIDDING The performance measures obtained in the previous sections can be useful in many advertisement/campaign related objectives such as brand awareness, search engine optimization, maximizing the number of clicks to a post/advertisement (ad), etc. In this section, we will study online auctioning for advertisements in viral marketing using the performance measures as obtained in the previous sections.The publishers of OSNs sell the advertisement inventory/space to various content providers (CPs) via auction mechanism commonly known as real time bidding (<cit.>). For example, Facebook auctions billions of advertisement space inventory every day, and the advertisements(ads) of the winners are served. Real-time bidding enables the CPs to automatically submit their bids in real time, and the advertisement of the highest worth (based on bid amount and its performance) is thus served. By virtue of auctioning, a natural competition occurs among the CPs for winning auctions. 
Further, a content provider (CP) has to win the auction to get a sufficient number of seed (initial) timelines. The virality/sharing of the post further depends upon the quality of the advertisement/post (recall the post quality factor η). To summarize, the CP has to invest in two aspects: a) the bid amount to win the auction, and b) the amount spent on the design of the post (η). Recall that the design of a post could include providing authentic information about its services/products, providing quality content, giving offers, etc. An inappropriately tailored post can make users lose interest in the post, thereby reducing the chances of virality. Content providers (CPs) typically have wide-ranging objectives while advertising on OSNs. For example, a CP may be interested in enhancing the brand awareness of its products. Brand awareness plays a central role in users' decision making for a purchase. Such an objective is achieved if the brand promotional post gets viral. Recall that we say a post gets viral if it spreads on a massive scale via its sharing among the users. Given that a post gets viral, a CP may be interested in knowing how fast the post spreads, i.e., the rate of virality. Other objectives a CP may be interested in include maximizing the number of clicks on its post, improving its reputation, increasing its presence in the marketplace, etc. In the previous sections, we derived some of these performance measures. For example, we obtained the time evolution of the number of shares and NU-TLs, which characterize the rate of virality. We also obtained the expression for the probability of virality. On the other hand, in non-viral (sure extinction) scenarios, we computed the expected number of total shares before extinction. We provide explicit expressions for some of the performance measures as functions of the controllable parameters, while others are represented as the solutions of appropriate FP (fixed point) equations. One can use these measures to study a relevant optimization problem taking auctions into account. In particular, and without loss of generality, we take the expected shares/NU-TLs as indicative of the performance of the CPs' posts. §.§ Games with auctionWe now discuss this problem in the context of direct competition between the two CPs. We formulate a game, as before, considering online auctions additionally. As mentioned before, a natural competition is induced between the CPs due to the propagation of the competing posts through the same OSN. As before, we also have competition due to winning auctions. We study this competition by formulating an appropriate game theoretic framework. We begin with the description of the utility functions of the CPs in the game.As before, we take the utility of the CP as the “number of TLs having its post at time t, i.e., X(t)", as one among those choices. Further, each CP incurs twofold costs as before: 1) the cost of winning the auction, and 2) the cost of the post quality. The net utility is thus obtained by subtracting this total cost from the revenue generated from X(t). As there are two CPs, we assume that two sequential auctions are conducted and each CP participates only in one auction. The auctions so conducted can result in the following outcomes: 1) each CP wins its own auction, 2) CP-1 wins while CP-2 loses the auction or vice versa, and 3) both CPs lose their respective auctions. We disregard the third outcome as then there is no post of interest in the network.
The second outcomes gives rise to the propagation of the post corresponding to one specific CP only, whose analysis is carried out in a single CP scenario. Whereas the first outcome leads to the propagation of both the CPs' posts, i.e., production of the mixed population in addition to the exclusive CP types populations. With this, we now describe the net utility, say 𝐂_i(x_1,x_2, η_1, η_2 ) for i = 1,2, derived by the CP-i as below: 𝐂_1(x_1,x_2, η_1, η_2 )=(log E(∑X^1_ex(t) ) -κ_2(x_1 + κ_1 η_1) )P( 𝐁 < x_1 η_1) P( 𝐁> x_2 η_2) + 0 × P(𝐁 > x_1 η_1) + (log E(∑X^1_exm(t) ) -κ_2(x_1 + κ_1 η_1) )P( 𝐁 < x_1 η_1) P(𝐁 < x_2 η_2)𝐂_2(x_1,x_2, η_1, η_2 )=(log E(∑X^2_ex(t) ) - κ_2(x_2 + κ_1 η_2) )P(𝐁 < x_2 η_2) P(𝐁 > x_1 η_1)+(log E(∑X^2_exm(t) ) -κ_2(x_2 + κ_1 η_2) )P(𝐁 < x_1 η_1) P(𝐁 < x_2 η_2).We study the game theoretic problem in the budget constraint framework 𝐵_i(x_i, η_i) := x_i + κ_1 η_i ≤B̅ for CP-i where i = 1,2.We obtain the well-known solution concept to this game, Nash Equilibrium (NE), using best response method.The best response of any CP say CP-1, (x_1^*,η_1^*), against any strategy(x_2, η_2) of the other CP satisfies x_1^* + κ_1 η_1^* = B̅. Proof: Observe that the best response of CP-1 is computed by solving the optimization problem which is, essentially, similar to the problem O2. Now appealing to Proposition 1 of Part-I <cit.>, it follows.By the above proposition, each CP now has only one variable to choose and the other one x_i is obtained through the equality constraint, i.e., x_i =B̅ -κ_1η_i;i = 1,2. And this suffices to study the game with one controllable variable only. NE is an equilibrium strategy say (η_1^*, η_2^*)deviating unilaterally from which neither of the CPs wouldbenefit.Existenceand uniqueness would be a topic of future research. We numerically compute theNE usinggradient andbest-response dynamics based algorithm. We vary the m and study the NE in two scenarios: 1) when m is directly proportional to μ_b, and 2)when m is directly proportional to λ in the figures below. We see that in both the Figure <ref> and Figure <ref>, the more influential CP (i.e., CP-1) chooses η_1^* = 1/w_1 (maximum value of η) under the Nash strategy.Whereas the Nash strategy of less influential one (CP-2) decreases initially, and then there is a jump discontinuity followed by a gradual decrease again. This is because the weaker CP has to invest more money in winning the auction initially (m is small) as it gives higher returns compared to that in η, and consequently the allocation on η shrinks (in both figures). Beyond a threshold m, investing in η earns more revenue. And the gradual decrease is due to the following: in Figure <ref>, winning auction gets difficult as m is directly proportional to μ_b and hence CP needs to invest more in auction;whereas in Figure <ref>, the growth rate α decreases as λ increases investing in x yields higher returns compared to that in η.In No-TL case,we see in Figure <ref> that the Nash strategiesshow monotonous behavior as the network activity increase (m). This pattern is quite different from that seen in Figure <ref> in the Nash strategy of CP-2, η^*_2.In all, we see that the performance measures and the optimizersare drastically different in the study with and without considering the TL structure. Further, as the network activity increases, one anticipates that a good quality content can easily get viral. However, this is not true because of the shifting effect. 
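The best-response dynamics used above to compute the Nash equilibrium can be sketched as follows. The revenue terms log E(∑X^1_ex(t)) and log E(∑X^1_exm(t)) are replaced by hypothetical stand-in functions (in the paper they come from the branching-process measures derived earlier), the bid 𝐁 is taken exponential with mean μ_b, w_1 = 1, and the budget constraint is used with equality as per the Proposition above; convergence of the iteration is not guaranteed in general.

```python
import numpy as np
from scipy.stats import expon
from scipy.optimize import minimize_scalar

# Hypothetical stand-ins for the revenue terms; illustrative shapes only.
def log_rev_single(eta):              # own post propagates alone (opponent lost)
    return np.log1p(50.0 * eta)

def log_rev_mixed(eta_i, eta_j):      # both posts propagate and compete
    return np.log1p(50.0 * eta_i * (1.0 - 0.4 * eta_j))

B_bar, kappa1, kappa2, mu_b = 2.0, 1.0, 0.3, 1.0      # budget, cost weights, bid scale
win = lambda x, eta: expon(scale=mu_b).cdf(x * eta)   # P(B < x * eta): win probability

def utility(eta_i, eta_j):
    x_i, x_j = B_bar - kappa1 * eta_i, B_bar - kappa1 * eta_j   # budget used with equality
    cost = kappa2 * B_bar                                        # kappa2 * (x_i + kappa1*eta_i)
    p_i, p_j = win(x_i, eta_i), win(x_j, eta_j)
    return (p_i * (1 - p_j) * (log_rev_single(eta_i) - cost)
            + p_i * p_j * (log_rev_mixed(eta_i, eta_j) - cost))

def best_response(eta_j):
    res = minimize_scalar(lambda e: -utility(e, eta_j), bounds=(1e-3, 1.0), method="bounded")
    return res.x

eta1, eta2 = 0.5, 0.5
for _ in range(50):                                   # best-response dynamics
    eta1, eta2 = best_response(eta2), best_response(eta1)
print(f"approximate NE: eta1* = {eta1:.3f}, eta2* = {eta2:.3f}")
```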
Recall that as m increases, contents get pushed down rapidly by the arrival of new posts, and hence the content gets missed more often before a user visits its TL.These important aspects are missed when the TL structure is ignored.§ CONCLUSIONS We modeled the propagation of competing posts by multi-type branching process. As the underlying branching process is decomposable, the entire analysis is different from that in the Part-I. We found that the dichotomy no longer holds, i.e., one of the competing posts may get viral while the other gets extinct;different types of populations can have different growth rates. We obtained various performance measures, using the previous results, specific to individual CP such as CP-wise extinction probabilities, the expected number of shares to a CP's post in the viral and non-viral scenario, etc. We conjectured using partial theoretical arguments that the expected number of shares corresponding to one CPgrow exponentially fast with time (if viral) in the presence of the competing posts. We verified the same numerically.We found that the virality chances of a post are greatly influenced by the competing post propagation. We then formulated a non-cooperative game between competing CPs using the CP-wise performance measures and studied the relevant Nash equilibria. Again, we found that the study without considering the TL structure cannot capture accurately the competition induced due to the propagation of competing posts. More importantly, we also observe that without TL effects, one cannot capturesome interesting paradigm shifts/phase transitions in certain behavioral patterns. For example, as the network becomes more active, one anticipatesthat it is more beneficial to engage in the network. The studies which do not include the effects of TL often leads to this erroneous conclusion; and argue that the virality chances increase monotonically as the mean number of friends increases (m). We demonstrated that the virality chances does not increase monotonically with the number of friends. After a certain value of m, it actually decreases for some intermittently active networks (medium m values). To be more specific, for some range of parameters,less active networks are preferable to more active networks. IntStat Meeker, Mary, and Liang Wu. "Internet trends 2018." (2018).BranchVMVan der Lans, Ralf, et al. "A viral branching model for predicting the spread of electronic word of mouth." Marketing Science 29.2 (2010): 348-365. BranchNonMarkov Iribarren, Jose Luis, and Esteban Moro. "Branching dynamics of viral information spreading." Physical Review E 84.4 (2011): 046116. VirBranch Stewart, David B., Michael T. Ewing, and Dineli R. Mather. "A conceptual framework for viral marketing." Australian and New Zealand Marketing Academy (ANZMAC) Conference 2009 (Mike Ewing and Felix Mavondo 30 November 2009–2 December 2009). 2009. YU X. Yang and G.D. Veciana, Service Capacity of Peer to Pe er Networks, Proc. of IEEE Infocom 2004 Conf., March 7-11, 2004, Hong Kong, China. TLLit Piantino, S., Case, R., Funiak, S., Gibson, D. K., Huang, J., Mack, R. D., ... & Young, S. (2014). U.S. Patent No. 8,726,142. Washington, DC: U.S. Patent and Trademark Office. dec1 Vatutin, Vladimir, et al. "A decomposable branching process in a Markovian environment." International Journal of Stochastic Analysis 2012 (2012). BidEst Cui, Ying, et al. ”Bid landscape forecasting in online ad exchange marketplace." 
Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2011. arxDhounchak, Ranbir, and Veeraruna Kavitha. “Decomposable Branching Processes and ViralMarketing." arXiv preprint arXiv:1907.00160 (2019).Ranbir2 Dhounchak, Ranbir, Veeraruna Kavitha, and Eitan Altman. “Part-I: Viral Marketing Branching Processes in OSNs.”arXiv preprint arXiv:1705.09828Efficient Chen, Wei, Yajun Wang, and Siyu Yang. "Efficient influence maximization in social networks." Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2009.RumorSpread Doerr, Benjamin, Mahmoud Fouz, and Tobias Friedrich. "Why rumors spread so quickly in social networks." Communications of the ACM 55.6 (2012): 70-75. CPonGraph Du, MFB Nan, Yingyu Liang, and L. Song. "Continuous-time influence maximization for multiple items." CoRR, abs/1312.2164 (2013). Scroll Nielsen, Jakob. "Scrolling and attention." Nielsen Norman Group (2010). xeta Mahdian, Mohammad, and Kerem Tomak. "Pay-per-action model for online advertising." Proceedings of the 1st international workshop on Data mining and audience intelligence for advertising. ACM, 2007.R1 J.A.C. Resing, "Polling systems and multitype branching processes", Queueing Systems, December 1993.R2 Xiangying Yang and Gustavo de Veciana,Service Capacity of Peer to Peer Networks, IEEE Infocom 2004. SNAP <https://snap.stanford.edu/data/> HautS. Hautphenne, "Extinction probabilities of supercritical decomposable branching processes." Journal of Applied Probability,639-651, 2012. KestenH. Kesten, and BP. Stigum. "Limit theorems for decomposable multi-dimensional Galton-Watson processes." Journal of Mathematical Analysis and Applications, 1967.xu Eitan Altman, Philippe Nain, Adam Shwartz, Yuedong Xu"Predicting the Impact of Measures Against P2P Networks: Transient Behaviour and Phase Transition", IEEE Transactions on Networking (ToN),pp. 935-949,2013. AthreyaBook Krishna B Athreya and Peter E Ney. Branching processes, volume 196. Springer Science & Business Media, 2012.AthreyaPaper Krishna Balasundaram Athreya. Some results on multitype continuous time markov branching processes. The Annals of Mathematical Statistics, pages 347–357, 1968. Harris Theodore E Harris. The theory of branching processes. Courier Corporation, 2002.FirstOptSundaram, Rangarajan K, "A first course in optimization theory", Cambridge university press, 1996.§ APPENDIX A Proof of Theorem <ref> :The proof of part-i is as follows. The generator matrix A_mx is[ [ z'_1r_1 -1z_1 r_1θ +z'_2 r_1…z'_N-1r_1 z_N-1r_1;z_1 r_1 z'_1r_1 -1z_2 r_1… z_N-1r_1z'_N-1r_1;z'_1r_2z_1 r_2 z'_2r_2- 1…z'_N-1r_2z_N-1 r_2;z_1 r_2z'_1r_2z_2 r_2… z_N-1r_2z'_N-1r_2;⋮⋮⋮⋱⋮⋮; z'_1 r_N-2z_1 r_N-2z'_2r_N-2… θ + z'_N-1 r_N-2z_N-1 r_N-2;z_1 r_N-2z'_1r_N-2z_2 r_N-2… z_N-1r_N-2θ + z'_N-1r_N-2; z'_1 r_N-1z_1 r_N-1 z'_1⋯ z'_N-1 r_N-1-1z_N-1 r_N-1;z_1 r_N-1z'_1r_N-1z_2 r_N-1… z_N-1r_N-1z'_N-1 r_N-1 -1 ]].First, we prove thate^A_mx is positive regular for any 0< θ, p < 1.As in the case of Lemma 1 of Part-I <cit.>, we prove the result for a specialcase withz_l = z'_l = 0∀ l > 1 and z_1 > 0, z'>0. 
Theresult again follows for the general case because all the terms involved are non-negative.For this special case, the matrix A_mx+Ihas the followingform with all z_l r_k or z_l'r_k terms being strictly positive because 0 < p < 1:A_mx+I =[ [ z_1' r_1z_1 r_1θ0…0000;z_1 r_1 z_1' r_10θ…0000; z_1' r_2z_1 r_200…0000;z_1 r_2 z_1' r_200…0000;⋮⋮⋮⋮⋱⋮⋮⋮⋮; z_1' r_N-2z_1 r_N-200…00θ0;z_1 r_N-2 z_1' r_N-200…000θ; z_1' r_N-1z_1 r_N-100⋯0000;z_1 r_N-1 z_1' r_N-100…0000 ]];Positive regularity of a matrix is determined by the existence of all positive terms in some power of the given matrix. Since the matrix A_mx+I has only non-negative entries, it is sufficient to check zero, non-zero structure (the location of zero and non zero terms in the given matrix and not the exact values) of the resulting powers of the matrices (A_mx+I)^n. The matrix (A_mx+I)is exactly similar in zero non-zero structureasthe second power A_1^2 given in Part-I <cit.>.Thus, positive regularity follows in exactly similar lines.Proof of parts (ii)-(iii):We follow exactly the same procedure as in the proof of parts (ii)-(iii) of Lemma 1 of Part-I <cit.>. We mention only the differences with respect to that proof. Let u_mx= { u_mx,1, u_mx,2, ⋯,u_mx,2N-3,u_mx,2N-2} be the left eigenvector of A_mx, corresponding to largest eigenvalue α_mx, both ofwhichexist because of positive regularity given by part (i).On solving u_mx A_mx = α_mxu_mx as before, we have the following system of equations with σ_mx = α_mx/(λ + ν) + 1:z_1 r.u_mx,e +z'_1 r.u_mx,o = σ_mxu_mx,1,z_l r.u_mx,e + z'_l r.u_mx,o + θu_mx,2l-3=σ_mx u_mx,2l-1;∀l ≥2 z'_1 r.u_mx,e + z_1 r.u_mx,o=σ_mx u_mx,2,z'_l r.u_mx,e + z_lr.u_mx,o +θu_mx,2l-2 =σ_mx u_mx,2l;∀l ≥2. where r.u_mx,o: = ∑_i = 1^N-1 r_iu_mx,2i-1, r.u_mx,e := ∑_i = 1^N-1 r_i u_mx,2i andu_mx,-1(u_mx,-2) :=0. Now, we write the expression of u_mx,2l-1 and u_mx,2l in terms of u_mx,1 and u_mx,2 receptively as done before in the single CP case.After simplifying equations (<ref>) , we have the following for any 2 ≤ l ≤ N-1 u_mx,2l-1=∑_i = 0^l-1ρ̅_l-i/ρ̅_1(θ/σ_mx)^i u_mx,1; u_mx,2l=∑_i = 0^l-1ρ̅_l-i/ρ̅_1(θ/σ_mx)^i u_mx,2. Following the same procedure, we obtain the relation among various components of righteigenvectorv_mx. Thus, we have(∀ l = 1,⋯ N-2)v_mx,2l-1 = ∑_i = 0^N-1-lr_l+i/r_N-1(θ/σ_mx)^i v_mx,2N-3; v_mx,2l-2 = ∑_i = 0^N-1-lr_l+i/r_N-1(θ/σ_mx)^i v_mx,2N-2.Recall c_mx=δ(1-θ)mη_1η_2, the above equations can be rewritten as c_mxρ̅_1 (p r.u_mx,e +(1-p) r.u_mx,o)=σ_mx u_mx,1, c_mxρ̅_1 ((1-p)r.u_mx,e +p r.u_mx,o) = σ_mx u_mx,2and c_mxρ̅_l (p r.u_mx,e +(1-p) r.u_mx,o)+ θ u_mx,2l-3=σ_mxu_mx,2l-1∀ l ≥ 2c_mxρ̅_l ((1-p) r.u_mx,e +p r.u_mx,o)+ θ u_mx,2l-2=σ_mxu_mx,2l-2∀ l ≥ 2.On multiplying with r_i and adding all even and odd term equations separately, we obtain c_mxr.ρ̅(p r.u_mx,e +(1-p) r.u_mx,o)+ θ∑_i = 1^N-1 r_i u_mx,2i-3 = σ_mxr.u_mx,o; for odd terms, c_mxr.ρ̅((1-p)r.u_mx,e +p r.u_mx,o)+ θ∑_i = 1^N-1 r_i u_mx,2i-2 = σ_mxr.u_mx,e; for even terms.On adding the above equations, we obtain the following linear equation P(σ_mx) = c_mxr.ρ̅(r.u_mx,o +r.u_mx, e) + θ∑_i = 1^N-1 r_i (u_mx,2i-3 + u_mx,2i-2)- σ_mx(r.u_mx,o +r.u_mx,e),and σ_mx = (α_mx+ λ + ν)/(λ + ν) would be the only zero of it.Now P( c_mxr.ρ̅) > 0 and P( c_mxr.ρ̅ + θ) < 0 (again using monotonicity of the reading probabilities). And due to similar reasons as in the singe CP case, the largest eigenvalue lies in theintervalα_mx∈(c_mxr.ρ̅-1 , c_mxr.ρ̅+ θ -1)(λ+ν). 
Let us assume r_l = d_1d_2^l, onceu_mx,2N-2 + u_mx,2N-3 are both bounded for any N.In what follows, theroot of equation (<ref>) for this specialcase σ_mx = c_mxr.ρ̅ + θ d_2 ∑_i = 1^N-2 r_i ( u_mx,2i-1 + u_mx,2i)(r.u_mx,o +r.u_mx,e) c_mxr.ρ̅ +θ d_2 (1-r_N-1( u_mx,2N-2 + u_mx,2N-3)/r.u_mx,o +r.u_e),converges to the following because r_N = d_1 d_2^N → 0,σ_mx→ c_mxr.ρ̅+ θ d_2 as N →∞ . Thus, as the number of TL levels increases the largest eigenvalue, α_mx of matrix A_mx converges to (c_mxr.ρ̅ + θ d_2 -1 )(λ + ν).Part-iv: Observe that{X_mx(t) } evolves according to non-decomposable BP when the process starts with a mixed-type TL, as in the single CP case. Now appealing to Theorem 1 of Part-I <cit.>, the proof follows.Proof of Lemma <ref>: Existence:We consider q_ex^1 a constant vector, as explained below. We have a continuous mapping from [0,1]^2N-2into [0,1]^2N-2 i.e. over compact set. By Brouwer's fixed point theorem there exists a solution to the given system of equations. Uniqueness: The exclusive CP1 types evolve on their own, and by Lemma 2 of Part-I <cit.>, we have a unique solution to the relevant fixed point equations in unit cube [0,1]^N which provide the extinction probabilities of CP1 population when started with one of its exclusive-types. That is, we have a unique q^1_ex={q^l_l,0}_l which represents the extinction probabilities for any given set of system parameters.We treat them as constants while studying the fixed point equations of the other equations that provide the extinction probabilities when started with a mixed population, (q_mx1q_mx2).One can rewrite the fixed point equations corresponding to this set of the extinction probabilities as below for any l < N, after suitable simplification: q_l,l+1^1 =K_1l (pg_mx1 +(1-p) g_mx2 )(q^1_ex,η_1(1-η_2) ) +K_2l(1-δ) (q^1_ex,η_1 )+K_3l+ θ^N-lq^1_N,0q_l+1,l^1= K_1l( (1-p)g_mx1 + pg_mx2)(q^1_ex,η_1(1-η_2) )+ K_2l(1-δ) +K_3l +θ^N-lwhereK_2l=(1-θ) ∑_i=0^N-l-1θ^i r_l+i; K_3l= (1-θ) ∑_i=0^N-l-1θ^i (1-r_l+i)K_1l= K_2lδ g_mx1=(q_mx1,η_1 η_2 );g_mx2 =(q_mx2,η_1η_2 ).Consider the following weighted sum over l, of terms f (q_l,l+1^1, η_1η_2, β) and f(q_l+1,l^1,η_1η_2, β)∑_l=1^N-1ρ̅_l f (q_l,l+1^1) ∑_l=1^N-1ρ̅_l f(q_l+1,l^1) , and note that these precisely equal g_mx1 and g_mx2 respectively.Thus usingthe right hand side (RHS) of equation (<ref>),we have the following two dimensional equation,Ψ = (Ψ_1, Ψ_2), whose fixed point provides (g_mx1, g_mx2):Ψ_1 (g_1, g_2 ) = ∑_i=1^N-1f ( θ^N-iq^1_N,0 +K_1i (pg_1 +(1-p) g_2 )(q^1_ex,η_1(1-η_2) )+ K_2i (1-δ) (q^1_ex,η_1 ) +K_3i , η_1η_2,β) ρ̅_iΨ_2 (g_1, g_2 ) =∑_i=1^N-1 f ( θ^N-i +K_1i ( (1-p)g_1 + pg_2) (q^1_ex,η_1(1-η_2) )+ K_2i (1-δ) + K_3i,η_1η_2, β)ρ̅_i. It is easy verify for any l that K_1l+ K_2l (1-δ) + K_3l =K_2l+ K_3l =(1- θ) ∑_i=0^N-l-1θ^i =(1-θ^N-l);and hence thatθ^N-l + K_1l +K_2l (1-δ) + K_3l= 1. Thus for any q^1_ex≤ 1 we[Here ≤ represents the usual partial order between two Euclidean vectors, i.e., a <bif and only if a_i < b_i for all i and a≤ bif a_i ≤ b_i for all i .] have: θ^N-l +K_1l(q^1_ex,η_1(1-η_2) )+ K_2l (1-δ) + K_3l≤ 1θ^N-lq^1_N,0 + K_1l(q^1_ex,η_1(1-η_2) ) + K_2l (1-δ) (q^1_ex,η_1 ) +K_3l≤ 1 .Case 1When q^1_ex <1:When q^1_ex < 1,(q^1_ex,η_1 ) < 1 as well as(q^1_ex,η_1(1-η_2) )< 1 and so we have strict inequality in (<ref>)and thusΨ_j (1,1) < 1 for each j.Considerj=1 without loss of generality. 
Thus, Ψ_1 (1,g_2) < 1 for any g_2≤ 1.Consider the one-variablefunction g →Ψ_1 (g, g_2), represented byΨ_1^g_2 (g) := Ψ_1 (g, g_2),for any fixed g_2, which is clearly a continuous and monotone function. Let id(g) := g represent the identity function.From the definition of Ψ, clearlyΨ_1^g_2 (0) > 0for any g_2. Hence Ψ_1^g_2 (0) - id(0) > 0 while Ψ_1^g_2 (1) - id(1) < 0. Thus, by intermediate value theorem as applied to the (continuous) function Ψ_1^g_2 () - id(), there exists at least one point at which it crosses the 45-degree line, the straight linethrough origin (0,0)and (1,1). Note that the intersection points of this 45-degree line and a function are precisely the fixed points of that function.It is easy to verify that the derivative of the function Ψ_1^g_2 (partial derivative of Ψ_1 with respect to the second variable)is positive.Thus Ψ_1^g_2 () for any fixed g_2 is continuous increasing strict convex function. If Ψ_1^g_2 () function were to cross 45-degree line more than oncebefore reaching Ψ_1^g_2 (1) < 1 at 1, then it wouldhave to cross the 45-degree line three times (recallΨ_1^g_2 (0)>0).However, this is not possible because any strict convex real function crosses anystraight line at maximum twice. Thus, there exists exactly one point in interval [0,1] at which Ψ_1^g_2 () crosses 45-degree line, which would be itsunique fixed point.Thus, for any g_2 there exists a unique fixed point of the mapping Ψ_1^g_2 () in the interval [0,1] and call the unique fixed pointas g^*(g_2).It is easy to verify that this fixed point is minimizer of the following objective function parametrized by g_2:min_g ∈ [0,1] Φ (g, g_2) Φ (g, g_2):=(Ψ_1 (g, g_2) - g)^2.The function Φ is jointly continuous, convex in (g, g_2) and the domain of optimization is same for all g_2. Further, for each g_2 by previous arguments there exists unique optimizer in [0,1]. Thus, by <cit.>, the fixed point function g^*(.) is continuous,and convex function.We now obtain the overall (two dimensional) fixed point via the solution of the following one-dimensional fixed point equation. Γ(g) := Ψ_2 ( g^*(g) ,g).f ( θ^N-i +K_1l ( (1-p)g_1 + pg_2) (q^1_ex,η_1(1-η_2) )+ K_2l (1-δ) + K_3l,η_1η_2, β)Let K_4l : = θ^N-l + K_2l (1-δ) + K_3l and K_5l :=K_1l(q^1_ex,η_1(1-η_2) ). With thesedefinitions:Γ(g) = ∑_l=1^N-1 f ( K_4l+K_5l ( (1-p)g^*(g) + pg ) ,η_1η_2,β)ρ̅_lConsider any 0≤γ, g, g' ≤ 1 andby convexity of g^* andmonotonicity of Ψ_2 we haveΓ(γ g + (1-γ) g')=∑_l=1^N-1 f (K_4l+K_5l( (1-p) g^*(γ g + (1-γ) g') + p[γ g + (1-γ) g' ] ) ,η_1η_2,β)ρ̅_l≤ ∑_l=1^N-1 f (K_4l+K_5l( (1-p)[γ g^*(g) + (1-γ)g^*(g') ] + p[γ g + (1-γ) g'] ),η_1η_2,β)ρ̅_l=∑_l=1^N-1 f (K_4l+K_5l(γ [ (1-p)g^*(g) + pg ] + (1-γ)[ (1-p)g^*(g')+ p g']),η_1η_2,β)ρ̅_l; f≤∑_l=1^N-1 ( γ f (K_4l+K_5l [(1-p)g^*(g) + pg] ,η_1η_2,β) +(1-γ)f ( K_4l+K_5l [ (1-p)g^*(g')+ p g'],η_1η_2,β))= γΓ(g)+ (1-γ) Γ(g').This shows that Γ is convex, further we have Γ (1) < 1 and Γ (0) > 0.Note here that g^*(0) > 0 because Ψ_1(0,0) >0. Thus, using similar arguments as before we establish the existence of unique fixed point g_2^* for functionΓ.Therefore, ( g^*(g_2^*), g_2^*) represents the unique fixed point, in unit cube [0, 1]^2, of the two dimensional function Ψ.This establishes the existence and uniqueness of extinction probabilities (g_12, g_21).The uniqueness of other extinction probabilities is now direct from equation (<ref>). 
Case 2When q^1_ex = 1:Consider that we start with one of the following three TLs: one exclusivetype (l, 0), onemixedtype (l, l+1) or one mixed-type (l+1, l).Considerthe scenario in which theCP-1 population gets extinct at the first transitionepoch itself,when started with one (l, 0) type. This can happenif oneof the following twoevents occur: a)the TL does not view post-P (w.p. r_l);or b) the TL sharespost-Pto none (0) of its friends. In either of the two events the CP-1 population gets extinct even when started with mixed TLs(l, l+1) or (l+1, l). Thus, the event of extinction at first transition epoch starting with one (l, 0) TL implies extinction at first transition epoch when started with either one(l, l+1) TL or one (l+1, l) TL.Say the number of shares at first transition epoch were non-zero and say they equal x_i of type (i,0) for each iwhen started with one (l,0) type TL. This proof is given under extra assumption that ρ_N = 0 and that ρ̅_i = ρ_i.We assume the following is the scenario under assumption.When we start with mixed-type(l, l+1) (or (l+1, 1) type respectively),post-P is shared with ∑_i x_i number of Friends as when started withexclusive (l,0) type.Out of these, some are now converted to mixed TLs because the parent TL also shares CP-2 post.And a convertedtype (i, 0) offspring becomes (i, i+1) offspring w.p. p(w.p. (1-p)respectively) and (i+1, i) w.p. (1-p)(w.p. (1-p) respectively).When started with mixed-type (l+1, l) it is possible that some out of ∑ x_i shares of CP1 post are discarded (w.p. δ) because the TL would have viewed the post-Q first and would be discouraged to view post-P. Thus,in either case, with or without extinction atfirst transition epoch,the resulting eventsare inclined towards survival with bigger probabilitywhen started with one exclusive (l,0) type than when started with either of the mixed-type TLs. Basically, the aforementionedarguments can be applied recursively to arrive atthis conclusion, and hence the probabilities of extinctions satisfy the following inequalities:q_l, 0^1≤ q_l, l+1^1 q_l, 0^1≤ q_ l+1, l^1l < N-1.Further,with q^1_ex = 1, it easy to verify thatΨ_i(1, 1)=1 for i=1 as well as 2. Thus, we have unique extinction probabilities, q_mx1 =1 andq_mx2 =1. § APPENDIX B: EVOLUTION OFEXCLUSIVE NET PROGENY The evolution of the size of the population of exclusive/mixed class when initiated with its own class particle(s) is obtained using the well-known theory of non-decomposable BPs, in particular Lemma 3 ofPart-I (<cit.>)provides the time evolution of expected net progeny;it is easy to observe that the `number of shares' isthe net progeny.According to <cit.> thenet progenyof an E_x classtill time t when initiated with type-l particle ofits own class, i.e., with l ∈ E_x,is represented by y^e_l(t)and is provided in the vector form.In <cit.>the study is about the net progeny when one started with one l-particle, for anyl ∈ E_x andafter setting y_l^e(0) = 1. We require a small change to facilitate study of net progeny whenstarted in mixed class; we require thaty^e_l(0) = 0, i.e., net progeny at time 0 is set to 0; inother words, the initial particle is not counted as an offspring, but its offsprings, offsprings of offsprings so on toform the progeny.We consider this study, using similar fixed point equations as in section <ref>.Consider only exclusive types, evolving on their own;when you start in E_xclass, the particles produce offsprings of onlyE_xclass. 
Conditioning on the events of first transition,y_l (t) (for any l ∈ E_x) satisfies the following fixed point equation y^e_l (t) = θ∫_0^t∑_k ∈ E_xa_l, ky^e_k (t-τ) (λ+ν) e^- (λ+ν) τ dτ +(1-θ)∫_0^t ∑_k∈ E_x m^e_l,k( 1 +y^e_k (t-τ) )(λ+ν)e^- (λ+ν) τ dτ. By change of variable from t-τ = s we obtain:y^e_l (t) = e^- (λ+ν) t θ∫_0^t∑_k ∈ E_xa_l, ky^e_k (s) (λ+ν) e^ (λ+ν) s ds + e^- (λ+ν) t (1-θ)∫_0^t ∑_k∈ E_x m^e_l,k( 1 +y^e_k (s) )(λ+ν) e^ (λ+ν) sd s. Differentiating we obtain (with λ_ν := λ + ν):d y_l^e (t) /dt = - λ_ν y^e_l (t)+ λ_ν∑_k ∈ E_x( θa_l, k+ (1- θ )m^e_l,k ) y^e_k (t) + λ_ν (1-θ) ∑_k∈ E_x m^e_l,k .In vector form, dy ^e (t) /dt=A_ex y (t) +λ_ν (1-θ ) [ [ ∑_k ∈ E_xm^e_1,k; ∑_k ∈ E_xm^e_2,k;⋮; ∑_k ∈ E_xm^e_N,k ] ] .Thusthe solution is given by:y^e (t) = (e^ A_ex t - I )λ_ν(1-θ ) A_ex^-1 [ [ ∑_k ∈ E_xm^e_1,k; ∑_k ∈ E_xm^e_2,k;⋮; ∑_k ∈ E_xm^e_N,k ] ]. For the special case, we have:y^e_l (t)= ( e^α_e t- 1) h_l^eh_l^e =λ_ν(1-θ) r_l m η_1/α_e α_e =m η_1 ∑_lr_lρ_l-1 . We claimthat the solution of (<ref>) has the following approximate (approximation good as N →∞)structure for the special case of social networks:y_l(t) =g^e_l + h^e_l e^α_e tl ∈ E_xt.Directly substituting the above representation of {y_l(t)} in both sides of the fixed point equation, we have the following (λ_ν := λ+ ν):g^e_l + h^e_l e^α_e t=y_l (t)= θ∫_0^t∑_k ∈ E_xa_l, k (g^e_k + h^e_k e^α_e (t - τ))λ_ν e^-λ_ντ dτ +(1-θ)∫_0^t ∑_k∈ E_x m^e_l,k ( 1 + (g^e_k + h^e_k e^α_e (t - τ)) )λ_ν e^-λ_ντ dτ= [θ∑_k ∈ E_xa_l, k g^e_k + (1-θ) ∑_k ∈ E_x m^e_l,k(1+g^e_k) ](1- e^-λ_ν t )+e^α_e t [θ∑_k ∈ E_xa_l, k h^e_k + (1-θ) ∑_k ∈ E_x m^e_l,kh^e_k] (1-e^- (λ_ν +α_e) t)λ_ν/λ_ν + α_e .Thus, if one can find the solution (for coefficients {g_l^e, h_l^e } and α_e) of the followingequations,it is easy to verify that the waveforms given by (<ref>)are a fixed point solution of G:g^e_l= θ∑_k ∈ E_xa_l, k g^e_k+ (1-θ) ∑_k∈ E_x m^e_l,k(1+g^e_k) , h^e_l=(θ∑_k ∈ E_xa_l, k h^e_k+ (1-θ) ∑_k ∈ E_x m^e_l,kh^e_k ) λ_ν/λ_ν+ α_e.and that- θ∑_k ∈ E_xa_l, k g^e_k - (1-θ) ∑_k∈ E_x m^e_l,k(1+g^e_k)- [θ∑_k ∈ E_xa_l, k h^e_k + (1-θ) ∑_k∈ E_x m^e_l,kh^e_k ] λ_ν/α_e + λ_ν= 0l. The last one is required as we musthave the coefficients of e^- λ_νt term zero for all t. The above implies g_l + h^e_l = 0.Further we also requirey_l(0) =0, i.e., the user with post of interest has not shared,for which again we needg_l + h^e_l = 0.Now using(<ref>) and(<ref>), we have for any l ∈E_x:g^e_l + α_e + λ_ν/λ_νh^e_l =θ∑_k ∈ E_xa_l, k (g^e_k +h^e_k) +(1-θ) ∑_k∈ E_x m^e_l,k(1+g^e_k +h^e_k) =(1-θ) ∑_k∈ E_x m^e_l,k, h_l^e = ( (1-θ) ∑_k∈ E_x m^e_l,k ) λ_ν/α_e l.Thus, y^e_l(t )= λ_ν(1-θ) ∑_k∈ E_x m^e_l,k/α_e(e^α_e t - 1 ), if the { h_l^e } given by (<ref>) satisfies(<ref>) with an appropriate α_e.Note thatα_1 > 0 is a required condition for virality;otherwise (by positiveness of {m_l,k^e}) the coefficients {h_l^e} are negative and then thesolution y_l^e (t)settles to apositive limit (which in our OSN context represents the eventual expected number of shares before extinction) and does not explode (i.e., no virality) . Derivation of α_e: From (<ref>) we have:(1-θ) ∑_k∈ E_x m^e_l,k = [θ∑_k ∈ E_xa_l, k h^e_k + (1-θ) ∑_k∈ E_x m^e_l,kh^e_k ] α_e /α_e + λ_ν ,and this equation is to be used to derive α_e. We consider this for the example of our Social network.Social network example: Here E_x = {1, 2, ⋯, N},m^e_l, k= r_l c^e_kwith c^e_k := m η_1 ρ_kanda_l,k = 1_k = l+1 1_l < N.Assume r_l+1/r_l = Δ_r a constant(independent of l). 
For thiscasewe require for each l:(1-θ) r_l ∑_k c^e_k(α_e + λ_ν ) = θ r_l+1 (1-θ)λ_ν∑_k c^e_k+ λ_ν(1-θ)^2r_l∑_k c^e_k r_k ∑_k' c^e_k'Or we need(α_e + λ_ν ) = θ r_l+1/r_lλ_ν+ (1-θ) λ_ν∑_k c^e_k r_kl ∈ E_x.Thus α_eequals:α_e ≈λ_ν( θΔ_r - 1+ (1-θ)m η_1 ∑_k ρ_k r_k ).This is only approximation, which is accurate as N →∞, because the above equations are not satisfied forl = N. For this case, a_N, k=0 for all k in social network example, thusone needs to satisfy the following equation r_N(α_e + λ_ν ) =r_N(1-θ)m η_1 ∑_k ρ_k r_k,which is approximately true because r_N ≈ 0 (recall r_N → 0 as N →∞).From equations(<ref>) and(<ref>) it is clear that α_e and {h_l^e} are the eigen value and eigen vector of matrix A_ex; further α_e is the Perron root (largest eigen value)as{h_l^e}is the vector of all positive elements. In all, y_l^e (t) ≈λ_ν(1-θ) m η_1 r_l /α_e(e^α_e t - 1 ) l ,as ∑_l c_k^e = m η_1.§ APPENDIX C:EXPECTED NET PROGENY,WITH MIXED POPULATION§.§ Uniqueness of solution of GDefine the following norm on the space of waveforms y(· ) = {y_l (· )}_l ||y ||_ϕ := ∑_l ||y_l||_ϕ ||y_l||_ϕ := ∫_0^∞|y_l (t) |ϕ e^- ϕ t dt.and observe that ∫_0^∞ | ∫_0^t∑_k ∈ M_xa_l, k (y_k (t-τ)- z_k (t-τ))(λ+ν) e^- (λ+ν) τ dτ | ϕ e^- ϕ t dt ≤ ∫_0^∞∫_τ ^∞∑_k ∈ M_xa_l, k |y_k (t-τ)- z_k (t-τ)|ϕ e^- ϕ t dt (λ+ν) e^- (λ+ν) τ dτ≤ ∫_0^∞∫_0^∞∑_k ∈ M_xa_l, k |y_k (s)- z_k (s)|ϕ e^- ϕ (s+τ)ds (λ+ν) e^- (λ+ν) τ dτ≤ ∑_k ∈ M_xa_l, k∫_0^∞ϕ e^- ϕ s||y_k - z_k||_ϕ (λ+ν) e^- (λ+ν) τ dτ= λ+ν/λ+ν + ϕ∑_k ∈ M_xa_l, k ||y_k - z_k||_ϕ .Working in a similar way,we have||G(y )-G ( z) ||_ϕ ≤ θλ+ν/λ+ν + ϕ∑_k ∈ M_x ∑_la_l, k ||y_k - z_k||_ϕ+ (1-θ) λ+ν/λ+ν + ϕ∑_k ∈ M_x ∑_lm_l, k ||y_k - z_k||_ϕIf one chooses a suitable ϕ_s such that we have the following for some ζ_s < 1:max_k(∑_l ( θa_l, k+ (1-θ) m_l,k ) )λ+ν/λ+ν + ϕ_s= ζ_s, then||G(y )-G ( z) ||_ϕ_s ≤ ζ_s∑_k ∈ M_x ||y_k - z_k||_ϕ_s =ζ_s|y -z ||_ϕ_s . Thus G( ·) is a contraction mapping and hence has unique fixed point solution. 
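Before turning to the explicit solution of G, the exclusive-class formula obtained in Appendix B can be checked numerically for the social-network kernel: the exact matrix-exponential solution y^e(t) = (e^{A_ex t} - I) A_ex^{-1} b and the single-mode form y^e_l(t) = λ_ν(1-θ) m η_1 r_l (e^{α_e t} - 1)/α_e should agree up to an error of order r_N. The sketch below (Python, using scipy) is only an illustration; all parameter values are arbitrary placeholders.

import numpy as np
from scipy.linalg import expm

# Sanity check (illustration only) of the Appendix B solution for the exclusive class.
N, m, eta1, theta, lam_nu = 30, 10, 0.5, 0.3, 1.0
d1, d2, rho = 0.8, 0.6, 0.7
r = d1 * d2 ** np.arange(1, N + 1)                  # r_l = d1 d2^l, l = 1..N
rho_l = rho ** np.arange(1, N + 1)
rho_l /= rho_l.sum()                                # rho_k, k = 1..N

shift = np.eye(N, k=1)                              # a_{l,k} = 1{k = l+1}
M_e = np.outer(r, m * eta1 * rho_l)                 # m^e_{l,k} = r_l c^e_k
A_ex = lam_nu * (theta * shift + (1 - theta) * M_e - np.eye(N))
b = lam_nu * (1 - theta) * M_e.sum(axis=1)          # forcing vector

alpha_e = lam_nu * (theta * d2 - 1 + (1 - theta) * m * eta1 * rho_l @ r)
for t in (1.0, 3.0, 5.0):
    exact = (expm(A_ex * t) - np.eye(N)) @ np.linalg.solve(A_ex, b)
    approx = b * (np.exp(alpha_e * t) - 1) / alpha_e
    print(t, np.max(np.abs(exact - approx)))        # discrepancy of order r_N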
§.§ Solution of the fixed point equation G As y_l (t) satisfies the following fixed point equationfor any l ∈ M_x,y_l (t) = θ∫_0^t∑_k ∈ M_xa_l, ky_k (t-τ) (λ+ν) e^- (λ+ν) τ dτ+ (1-θ ) ∫_0^t∑_k ∈ M_x m_l,k (1+ y_k (t-τ))(λ+ν) e^- (λ+ν) τ dτ+(1-θ)∫_0^t ∑_k∈ E_x m_l,k( 1 +y^e_k (t-τ) )(λ+ν)e^- (λ+ν) τ dτ.After change of variable from t-τ to s and differentiating we get :dy_l (t)/dt = -λ_νy_l (t) + λ_ν(θ∑_k ∈ M_xa_l, k y_k(t)+ (1-θ )∑_k ∈ M_xm_l,k (1+ y_k (t)) +(1-θ) ∑_k∈ E_x m_l,k( 1 +y^e_k (t) ))dy_l (t)/dt = λ_ν( ∑_k ∈ M_x (θ a_l, k +(1-θ )m_l,k - 1_l =k )y_k(t)+ (1-θ )∑_k ∈ M_xm_l,k+(1-θ) ∑_k∈ E_x m_l,k( 1 +y^e_k (t) )) .In other words in vector notationdy (t)/d t =A_mx y (t)+ λ_ν (1-θ ) [ [ ∑_k ∈ M_xm_1,k; ∑_k ∈ M_xm_2,k;⋮; ∑_k ∈ M_xm_N,k ] ] +λ_ν (1-θ ) A_mx, ex [ [ 1 + y_1^e (t); 1 + y_2^e (t); ⋮; 1 + y_N^e (t) ] ].Using the standard tools of ODEs and the solution{y^e_l (.)} derived in (<ref>) we havey (t)=( e^A_mx t -I )c_v_0 +e^A_mx t∫_0^te^- A_mx s A_mx, exe^ A_ex sA_ex^-1 c_v_1ds c_v_0= A_mx^-1λ_ν(1-θ )( [ [ ∑_k ∈ M_xm_1,k; ∑_k ∈ M_xm_2,k;⋮; ∑_k ∈ M_xm_N,k ]] + A_mx, ex (1- A_ex^-1 c_v_1 ) ) c_v_1= λ_ν (1-θ) [ [ ∑_k ∈ E_xm^e_1,k; ∑_k ∈ E_xm^e_2,k;⋮; ∑_k ∈ E_xm^e_N,k ] ] .§.§ Simplified form for Social network example: To keep the explanations simple we do not consider[In actuality, the group M_x (in social network example)has two sub-classes:the CP1-post is in higher levelthan CP2-post inin one sub-class and it isvice versa in other.We consider the former case (p=0 and when we start with CP1-post at higher level), while the calculations easily extend to the other case (p=1) and a combination of the two cases (0 < p < 1) also. ] p, the probability of swapping the order of the two posts in the recipient TLs. We consider the number of shares of CP1-post, which can be obtained by computing the net progeny.Here E_x = {1^e, 2^e, ⋯, N^e} (the notation ^e is only required to differentiate between mixed and exclusive types, but we discard this notationwhen things are obvious),and M_x = {1, 2, ⋯, N-1}.When one starts with (l, l+1)- mixed type of section <ref>(i.e., with CP1-post at level l and CP2-post at l+1 level and since p=0, we have,m_l,k = r_l c^m_k(for any l ∈ M_x) with c^m_k := mη_1 δη_2 ρ̅_k and c^m_k := m η_1 (1- δη_2)ρ̅_k respectively fork ∈ M_x and k ∈ E_x.We also have m^e_l,k = r_l c^e_k with c^e_k := mη_1ρ_k. Further we assumeρ̅_l = ρ_llΔ_r = r_l+1 /r_ll < N. Now we have the following:∑_k ∈ M_xm_l,k =mη_1 η_2 δ r_l l ∈ M_x,∑_k ∈ E_xm_l,k= mη_1 (1- η_2 δ )r_ll ∈ M_x,∑_k ∈ E_xm^e_l,k=mη_1 r_l l ∈ E_x,∑_k ∈ E_x ∪ M_xc_k^m= mη_1= ∑_ k ∈ E_xc_k. Here r_l remains the same for mixed as well as exclusive types. We claimunder this special case that the net progeny has the following approximate (approximation good as N →∞) simplified structure:y_l(t) =g_l + h_l e^α_e t+ o_l e^α̅ tl ∈ M_xt, with α̅ = α_mx the largest eigen value of matrix A_mx and appropriate {g_l, h_l, o_l} and α_e is the Perron root/largest (positive) eigen value of A_ex. 
We prove our claim in the following:Directly substituting the above representation of {y_l(t)} in both sides of the fixed point equation, we have the following (λ_ν := λ+ ν):g_l + h_l e^α_e t+ o_l e^α̅ t=y_l (t)= θ∫_0^t∑_k ∈ M_xa_l, k (g_k + h_k e^α_e (t - τ)+ o_k e^α̅ (t - τ) )λ_ν e^-λ_ντ dτ+ (1-θ ) ∫_0^t∑_k ∈ M_xm_l,k (1+ g_k + h_k e^α_e (t - τ)+ o_k e^α̅ (t - τ) )λ_ν e^-λ_ντ dτ +(1-θ)∫_0^t ∑_k∈ E_x m_l,k ( 1 + (g^e_k + h^e_k e^α_e (t - τ)) )λ_ν e^-λ_ντ dτ= [θ∑_k ∈ M_xa_l, k g_k+ (1-θ) ∑_k ∈ M_xm_l,k (1+ g_k) + (1-θ) ∑_k ∈ E_x m_l,k(1+g^e_k) ](1- e^-λ_ν t )+e^α_e t [θ∑_k ∈ M_xa_l, k h_k + (1-θ) ∑_k ∈ M_xm_l,kh_k + (1-θ) ∑_k ∈ E_x m_l,kh^e_k] (1-e^- (λ_ν +α_e) t)λ_ν/λ_ν + α_e + e^α̅ t [θ∑_k ∈ M_xa_l, k o_k + (1-θ) ∑_k ∈ M_xm_l,ko_k] (1-e^- (λ_ν +α̅) t)λ_ν/λ_ν + α̅.Thus, if one can find the solution (for coefficients {g_l, h_l, o_l} and α̅)of the followingequations,it is easy to verify that the waveforms given by (<ref>)are a fixed point solution of G:g_l= θ∑_k ∈ M_xa_l, k g_k+ (1-θ) ∑_k ∈ M_xm_l,k (1+ g_k) + (1-θ) ∑_k∈ E_x m_l,k(1+g^e_k) , h_l=(θ∑_k ∈ M_xa_l, k h_k + (1-θ) ∑_k ∈ M_xm_l,kh_k + (1-θ) ∑_k ∈ E_x m_l,kh^e_k ) λ_ν/λ_ν+ α_eo_l=(θ∑_k ∈ M_xa_l, k o_k + (1-θ) ∑_k ∈ M_xm_l,ko_k)λ_ν/λ_ν+α̅, andwe alsorequireg_l + h_l + o_l = 0l, because we musthave the coefficients of e^- λ_νt term zero for all t. - θ∑_k ∈ M_xa_l, k g_k- (1-θ) ∑_k ∈ M_xm_l,k(1+ g_k)- (1-θ) ∑_k∈ E_x m_l,k(1+g^e_k) - [θ∑_k ∈ M_xa_l, k h_k + (1-θ) ∑_k ∈ M_xm_l,kh_k + (1-θ) ∑_k∈ E_x m_l,kh^e_k ] λ_ν/α_e + λ_ν-[θ∑_k ∈ M_xa_l, k o_k + (1-θ) ∑_k ∈ M_xm_l,ko_k]λ_ν/α̅ +λ_ν = 0l.The above condition is also required as y_l(0) =0 (for all l), i.e., the user with post of interest has not shared. Further computations are considered in two cases. Case I, without α̅ term:We will first try for a solutionwith o_l = 0.If such a solution is possible, then by uniqueness this is the solution and in this case we will not require an additionalpositive α̅. In our OSN example, such a thing is possible. Of course we would require some conditions as shown below. Further and more importantly, we are finding here approximate solutionfor the example of OSNs; the solution might be unique, however one may have more than one approximation (for the same unique solution); we will see that one sub case (when started with (l, l+1) types) has another approximation in Case 2 provided below.With o_l = 0, one requiresg_l = - h_l for all l. Now using(<ref>) and(<ref>), we have:g_l + α_e + λ_ν/λ_νh_l =θ∑_k ∈ M_xa_l, k (g_k +h_k) + (1-θ) ∑_k ∈ M_xm_l,k (1+ g_k + h_k) + (1-θ) ∑_k∈ E_x m_l,k(1+g^e_k +h^e_k)=(1-θ) ∑_k ∈ M_xm_l,k +(1-θ) ∑_k∈ E_x m_l,k .By substituting -g_l = h_l and solving we get (for each l):h_l=λ_ν(1-θ) ∑_k∈ E_x ∪ M_xm_l,k/α_e= - g_l. y_l(t ) =-λ_ν(1-θ) ∑_k∈ E_x ∪ M_xm_l,k/α_e+ λ_ν(1-θ) ∑_k∈ E_x ∪ M_xm_l,k/α_ee^α_e t. 
Further from (<ref>) α_e should satisfy (for all l ∈ M_x) the followingas well as (<ref>):(1-θ) ∑_k ∈ M_x ∪ E_xm_l,k =[θ∑_k ∈ M_xa_l, k h_k + (1-θ) ∑_k ∈ M_xm_l,kh_k + (1-θ) ∑_k∈ E_x m_l,kh^e_k ] α_e /α_e + λ_ν .Consider nowthe example of OSNs (see equations (<ref>) and (<ref>) and the description above it), i.e., for whichα_ehas to equivalently satisfy the following for alll:α_e + λ_ν = θΔ_r λ_ν1_ l< N + λ_ν(1-θ) ∑_k ∈ M_x ∪ E_x c_k^mr_k .So we are done (for all l < N) if same α_e satisfies the above as well as (<ref>), i.e.,ifthe following is true:m η_1 ∑_k ρ_k r_k = ∑_k ∈ E_x c_k r_k =∑_k ∈ M_x ∪ E_x c_k^mr_k = m η_1 ∑_k ρ̅_k r_k; and this is true by conditions (<ref>) and (<ref>).Once again, the equation (<ref>) is satisfied only approximately for the case when l = N. However when θ = 0, it is satisfied exactly and hence the solution is the exact and hence is the unique solution.Another sub case:When we startwith(l+1, l) types andconsider net progeny corresponding to CP1-post, then it is easy to verify that we will have ∑_k ∈ M_xm_l,k =mη_1 η_2 δ r_l l ∈ M_x( (l+1,l)),∑_k ∈ E_xm_l,k= mη_1 (1- η_2) δr_ll ∈ M_x, ∑_k ∈ E_xm^e_l,k=mη_1 r_l l ∈ E_x, .For this sub-case we will have∑_ k ∈ E_xc^e_k=mη_1 ∑_k ∈ E_x ∪ M_xc_k^m= m η_1 δ.The rest of the details can go through, however equation (<ref>) is not satisfied. Thus we can't have this kind of a solution. For this sub-case the solution would be given by Case II, provided the value α̅ (computed below) is positive.Thus we need that the potential values of {h_l} given by (<ref>) satisfy the following:h_l α_e = λ_ν∑_k ∈ M_x( θ a_l,k+ (1-θ)m_l,k- 1_l=k ) h_k + λ_ν (1-θ) ∑_k∈ E_x m_l,kh^e_k l ∈ M_x.We would consider this only for special case. Recall {h_l^e } is eigen vector of A_ex matrix corresponding to the largest eigen value α_e for the special case and equals λ_νc̅^er /α_e and for the special case,{h_l} given by (<ref>) alsosimplifies to λ_νc̅^er /α_e (because c̅^m + c̅^me = c̅^e as seen from (<ref>) and from (<ref>)α_e = λ_ν ( m η_1 ∑_lr_lρ_l-1 ) However unfortunately the RHS of the equation becomes(after cancelling λ_ν, r_l λ_νc̅^e / α_e etc and note ∑_k ∈ E_x m_l,k + ∑_k ∈ M_x m_l,k=) λ_ν ( m η_1 ∑_lr_lρ̅_l-1 )and hence are not equalCase II, with additional α̅ term:When Case I is not possible, we should try with o_l0and appropriate α̅, and recall by uniquenessif such a solution is possible, it would be the solution[Once again we are only obtaining approximate solutions and one may have more than one approximate solution even when the solution is unique.].When o_l0using g_l + h_l + o_l = 0 andg_l^e + h_l^e = 0 for each land further using (<ref>)-(<ref>) we getg_l + λ_ν + α_e/λ_ν h_l +λ_ν + α̅/λ_ν o_l= θ∑_k ∈ M_xa_l, k(g_k + h_k + o_k) + ∑_k ∈ M_x (1-θ) m_l,k(1+g_k + h_k + o_k )+∑_k ∈ E_x (1-θ) m_l,k(1+g^e_k + h^e_k )=∑_k∈ E_x ∪ M_x(1-θ) m_l,k,h_l =[ - g_l +(1-θ) ∑_k∈ E_x ∪ M_x m_l,k- o_l α̅ +λ_ν/λ_ν ] λ_ν/λ_ν + α_e =[ -g_l -o_l +(1-θ) ∑_k∈ E_x ∪ M_x m_l,k- o_l α̅/λ_ν ] λ_ν/λ_ν + α_e. 
Thush_l =λ_ν(1-θ) ∑_k∈E_x ∪M_xm_l,k - o_lα̅/ α_e , g_l = -h_l - o_l = -λ_ν(1-θ) ∑_k∈E_x ∪M_xm_l,k - o_l( α̅-α_e) / α_e .Now summing equations (<ref>)-(<ref>),andby using h_l+ g_l + o_l = 0and h_l^e = -g_l^e in equation (<ref>),we have: 0 = g_l + h_l + o_l =[θ∑_k ∈ M_xa_l, k g_k+ (1-θ) ∑_k ∈ M_xm_l,k (1+ g_k) + (1-θ) ∑_k ∈ E_x m_l,k(1+g^e_k) ]+ [θ∑_k ∈ M_xa_l, k h_k + (1-θ) ∑_k ∈ M_xm_l,kh_k + (1-θ) ∑_k∈ E_x m_l,kh^e_k ] ( 1 -α_e /λ_ν+ α_e ) +[θ∑_k ∈ M_xa_l, k o_k + (1-θ) ∑_k ∈ M_xm_l,ko_k]( 1- α̅/λ_ν+α̅ ) =-[θ∑_k ∈ M_xa_l, k h_k + (1-θ) ∑_k ∈ M_xm_l,kh_k + (1-θ) ∑_k∈ E_x m_l,kh^e_k ] α_e/λ_ν+ α_e- [θ∑_k ∈ M_xa_l, k o_k + (1-θ) ∑_k ∈ M_xm_l,ko_k]α̅/λ_ν+α̅ +(1-θ) ∑_k ∈ M_x ∪ E_xm_l,k.Substitutingthe value of h_k as given in equation (<ref>), we get:- ∑_k ∈ M_x (θa_l, k+(1-θ)m_l,k) (λ_ν(1-θ) ∑_k'∈ E_x ∪ M_xm_k,k' - o_kα̅/α_e )α_e/λ_ν+ α_e - (1-θ) ∑_k∈ E_x m_l,kh^e_k α_e/λ_ν+ α_e - [θ∑_k ∈ M_xa_l, k o_k + (1-θ) ∑_k ∈ M_xm_l,ko_k]α̅/λ_ν+α̅+ (1-θ) ∑_k ∈ M_x ∪ E_xm_l,k =0.Simplifying, [θ∑_k ∈ M_xa_l, k o_k + (1-θ) ∑_k ∈ M_xm_l,ko_k]( α̅/λ_ν+ α_e-α̅/λ_ν+α̅ )=(1-θ) ∑_k∈ E_x m_l,kh^e_kα_e/λ_ν+ α_e - (1-θ) ∑_k ∈ M_x ∪ E_xm_l,k +[ ∑_k ∈ M_x(θa_l, k + (1-θ) m_l,k )∑_k'∈ E_x ∪ M_x (1-θ) m_k, k' ] λ_ν/λ_ν+ α_e.Finally using equation (<ref>),o_l λ_ν+α̅/λ_ν ( α̅/λ_ν+ α_e-α̅/λ_ν+α̅ )=( 1-θ) ∑_k∈ E_x m_l,kh^e_kα_e/λ_ν+ α_e- (1-θ) ∑_k ∈ M_x ∪ E_xm_l,k+ (1-θ)[ ∑_k ∈ M_x(θa_l, k + (1-θ) m_l,k )∑_k'∈ E_x ∪ M_xm_k, k' ] λ_ν/λ_ν+ α_e .Thus for any l ∈ M_x,o_l = (1-θ) λ_να_e ∑_k∈ E_x m_l,kh^e_k /( α̅-α_e) α̅+ (1-θ) λ_ν/ ( α̅-α_e) α̅ (λ_ν∑_k ∈ M_x(θa_l, k + (1-θ) m_l,k )∑_k'∈ E_x ∪ M_xm_k, k'-(λ_ν+ α_e) ∑_k ∈ M_x ∪ E_xm_l,k ) . Similarly, one can obtain the closed form expressions for h_l and g_l by substituting the value of o_l in equation (<ref>). h_l^e=( (1-θ) ∑_k∈ E_x m^e_l,k ) λ_ν/α_e≈ ( (1-θ)m η_1 r_l ) λ_ν/α_e l ∈ E_x, for Social network example.From (<ref>), for Social network example (<ref>), o_l simplifies as belowo_l= r_l (1-θ) λ_ν/( α̅-α_e) α̅ ( λ_νΔ_r θ1_ l < N + (1-θ) λ_νm^2 η_1^2∑_k ρ̅_kr_k- (λ_ν +α_e)m η_1) ≈r_l (1-θ) λ_ν/( α̅-α_e) α̅λ_νΔ_r θ(1 - m η_1)l, r_N ≈ 0and theseor equivalently vector {r_l } should satisfy equation (<ref>) with appropriate α̅, i.e., we will requirer_l=(θ∑_k ∈ M_xa_l, k r_k + (1-θ) ∑_k ∈ M_xm_l,kr_k)λ_ν/λ_ν+α̅,which for social network example (special case) translates to satisfy the following:r_l=(θΔ_rr_l+ (1-θ) r_l∑_k ∈ M_x c_k^m r_k)λ_ν/λ_ν+α̅.Thusthe following α̅ satisfies all equationsα̅=λ_ν ( θΔ_r - 1 + (1-θ) mη_1 δη_2 ∑_k ρ̅_k r_k), except for the case with l = N, for which it is an approximationas in Case 1.Again as seen above {o_l} (or equivalently {r_l}) is a vector of all positive or all negative entries, thus α̅would be the unique Perron root ofA_mx (see equation (<ref>)), when it is positive definite (i.e., when α̅ > 0). This is the second approximate solution for OSNs, whenone starts with(l, l+1) types. It is easy to verify that the two approximations coincide when θ = 0; easy to verify that o_l = 0 for all l and h_l with case 2 coincides with h_l of case 1. In this case the solution in fact is exact, as there is no difference between l < N and l = Nin(<ref>). For this case, h_l =λ_ν(1-θ) m η_1 r_l- o_lα̅/α_e l ∈ M_x.It is easy to check that the sub-case (<ref>)(starting with (l+1,l types) mentioned at the end of Case I can also satisfy the conditions of this case and has thesolution with same α̅, but witho_l =r_l (1-θ) λ_ν/( α̅-α_e) α̅ ( λ_νΔ_r θ + (1-θ) λ_νm^2 η_1^2 δ (1-η_2 + η_2δ) ∑_k ρ̅_kr_k- (λ_ν +α_e)m η_1δ )h_l= λ_ν(1-θ) m η_1 δr_l- o_lα̅/α_e l ∈ M_x. 
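The two growth exponents obtained above for the social-network example can also be compared directly with the leading eigenvalues (Perron roots) of A_ex and A_mx. The following sketch (Python) does this; it is not part of the derivation, all parameter values are arbitrary placeholders, and agreement is only up to the O(r_N) truncation discussed in the text.

import numpy as np

# Comparison (illustration only) of the closed-form exponents with the leading
# eigenvalues of A_ex and A_mx for the social-network kernels.
N, m, eta1, eta2, delta, theta, lam_nu = 25, 10, 0.5, 0.4, 0.6, 0.3, 1.0
d1, d2, rho = 0.8, 0.6, 0.7                      # r_l = d1 d2^l, Delta_r = d2
r = d1 * d2 ** np.arange(1, N + 1)               # r_1 .. r_N
rho_l = rho ** np.arange(1, N + 1)
rho_l /= rho_l.sum()                             # rho_k, k = 1..N (exclusive class)
rho_bar = rho ** np.arange(1, N)
rho_bar /= rho_bar.sum()                         # rho-bar_k, k = 1..N-1 (mixed class)

def generator(dim, c_row):
    # A = lam_nu * (theta * shift + (1 - theta) * outer(r, c_row) - I)
    shift = np.eye(dim, k=1)                     # a_{l,k} = 1{k = l+1}
    return lam_nu * (theta * shift + (1 - theta) * np.outer(r[:dim], c_row) - np.eye(dim))

perron = lambda A: np.max(np.linalg.eigvals(A).real)

# Exclusive class: c^e_k = m * eta1 * rho_k
alpha_e_num = perron(generator(N, m * eta1 * rho_l))
alpha_e_cf = lam_nu * (theta * d2 - 1 + (1 - theta) * m * eta1 * rho_l @ r)

# Mixed class: c^m_k = m * eta1 * delta * eta2 * rho-bar_k
alpha_m_num = perron(generator(N - 1, m * eta1 * delta * eta2 * rho_bar))
alpha_m_cf = lam_nu * (theta * d2 - 1
                       + (1 - theta) * m * eta1 * delta * eta2 * rho_bar @ r[:N - 1])

print("alpha_e :", alpha_e_num, "vs closed form", alpha_e_cf)
print("alpha_mx:", alpha_m_num, "vs closed form", alpha_m_cf)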
§ APPENDIX D: EXPECTED NUMBER OF SHARES IN NON-VIRAL SCENARIOWe have y_l,k^j = E[lim_t →∞Y^j (t) |X(0)=e_l,k]. y_l,k^j canbe obtained by solving appropriate FP equations.These FP equations are obtained by conditioning on the events of the first transition, as before. Here we have some additional events depending upon the starting TL. When a mixed type TL is subjected to the `share transition',then we can have shares exclusively of the post of one of the CPs, and or shares of both the posts. Whereas when an exclusive CP-typeTLis subjected to a `share transition',only exclusive types are engendered, as in single CP.With the `shift' transition, we have similar changes as in the single CP case. Below we obtain the expected shares for CP-1 without loss of generality and hence suppress the superscript ^j for remaining discussions.Let Y_l,k = lim_t →∞ Y_l,k (t) be the totalnumber of shares of post-P, before extinction, when started with one TL of (l,k) type(with k = l+1 or l-1). Lety_l,k := E[Y_l,k] be its expected value. The totalnumber of sharesof any CP post is finite on the extinction paths. Thus,by conditioning on the events of first transition epoch,one can write the following recursive equations for any l < N: y_l,l+1=θ ( 1_{l < N-1} y_l+1,l+2 + 1_{l = N-1} y_N,0 )+(1-θ)r_l (1-δ) mη_1 (1 + y_ex1.ρ̅)+(1-θ)r_lδmη_1 [(1-η_2)(1+ y_ex1.ρ̅) +η_2(1+ py_mx1.ρ̅ +(1-p)y_mx2.ρ̅) ]where y_ex1 = { y_1,0^1, y_2, 0^1, ⋯, y_N-1,0}. And again for any l < N,y_l+1,l=1_{l < N-1}θ y_l+2,l+1 +(1-θ)r_lδmη_1 [(1-η_2)(1+ y_ex1.ρ̅) η_2(1+ py_mx1.ρ̅ +(1-p)y_mx2.ρ̅ )]. One can easily solve the above set of linear equations to obtain thefixed point solution, by first obtaining the solutions for y_mx1ρ̅+ y_mx2ρ̅y_mx1 :={ y_1,2^1, y_2, 3^1, ⋯, y^1_N-1,N}y_mx2:={ y_2,1^1, y_3,2^1, ⋯, y^1_N, N-1}. We carry out the analysis for the special case with reading probabilities: r_i = d_1 d_2^i.Recall c_j =(1-θ) mη_j. Define the following which will be used only in this part:B_ex1, δ= c_1δ(1-η_2)(1+ y_ex1.ρ̅), B_ex1, 1-δ= c_1 (1-δ) (1 + y_ex1.ρ̅),C̅_mx1= c_mx(1+ (p y_mx1 + (1-p)y_mx2).ρ̅)C̅_mx2= c_mx(1+ ((1-p) y_mx1 + py_mx2).ρ̅)The first two quantities can be computed from the expected number ofshares given as in Part-I <cit.> (in the single CP model), while the remaining are obtained by solving the above FP equations. Now we can rewrite the equations(<ref>)-(<ref>) in the following manner for the special case[One can easily write down the equations for general case, but are avoid to simplify the expressions.] with r_i = d_1 d_2^i y_l,l+1=θ y_l+1,l+2+( B_ex1, 1-δ + C̅_mx2 + B_ex1, δ) d_1 d_2^ll < N-1y_N-1,N=θ y_N,0 + ( B_ex1, 1-δ + C̅_mx2 + B_ex1, δ) d_1 d_2^N-1 .Solving these equations using backward recursion: y_N-2,N-1= θ^2 y_N,0 +( B_ex1, 1-δ + C̅_mx2 + B_ex1, δ)(θd_1 d_2^N-1 + d_1 d_2^N-2). 
and then continuing in a similar way y_N-l,N-l+1=θ^l y_N,0 +( B_ex1, 1-δ + C̅_mx2 + B_ex1, δ) d_1 d_2^N-l[ ∑_i = 0^l-1(θ d_2 )^i].One can rewrite it as the following for any l < N: y_l,l+1= θ^N-l y_N,0 +( B_ex1, 1-δ + C̅_mx2 + B_ex1, δ) d_1 d_2^l [ ∑_i = 0^N-l-1(θ d_2 )^i] = θ^N-l y_N,0 +( B_ex1, 1-δ + C̅_mx2 + B_ex1, δ) d_1 d_2^l 1-(θ d_2 )^N-l/1-θ d_2.In exactly similar lines, for any l < N, we have:y_l+1,l=θ y_l+2,l+1+ ( C̅_mx1 + B_ex1, δ) d_1 d_2^l.This simplifies to the following for any l < N:y_l+1,l=θ^N-l+ ( C̅_mx1 + B_ex1, δ) d_1 d_2^l 1-(θ d_2 )^N-l/1-θ d_2.Multiplying the left hand sides of the equations (<ref>) and (<ref>) with ρ̅_l and summing it up we obtainy_mx1.ρ̅ and y_mx2.ρ̅ respectively:y_mx1.ρ̅=∑_l < Nρ̅_lθ^N-ly_N, 0+( B_ex1, 1-δ + C̅_mx2 + B_ex1, δ)O_mx y_mx2.ρ̅=∑_l < Nρ̅_lθ^N-l+(C̅_mx1 + B_ex1, δ)O_mxO_mx :=d_1 ∑_l(d_2^l ρ̅_l ) -(θ d_2)^N ( ρ̅_l / θ^l) /(1-d_2 ρ̅)(1-θ d_2) .Note that for general r_l which need not be d_1 d_2^l, we will haveO_mx := ∑_l < Nρ̅_l∑_i=0^N-l-1θ^i r_l+i . On adding equations (<ref>) and (<ref>)y_mx1.ρ̅ + y_mx2.ρ̅=∑_l < Nρ̅_lθ^N-l (1 +y_N,0)+ ( B_ex1, 1-δ + C̅_mx1 + C̅_mx2 + 2 B_ex1, δ) O_mx .This implies using (<ref>) y_mx1.ρ̅ + y_mx2.ρ̅=∑_l < Nρ̅_lθ^N-l (1 +y_N,0) +( B_ex1, 1-δ +c_mx( 2 + y_mx1.ρ̅ + y_mx2.ρ̅)+ 2 B_ex1, δ) O_mx .We have unique fixed point solution (whenc_mx O_mx < 1) fory^1_mxρ̅:= y_mx1.ρ̅ + y_mx2.ρ̅, which equalsy^1_mxρ̅ = ∑_l < Nρ̅_lθ^N-l (1 +y_N,0) + (2( B_exj, δ + c_mx)+ B_exj, 1-δ)O_m_x/1-c_mxO_m_x.In the above y_N,0 and y_ex1 of equation (<ref>) can be obtained as in Part-I<cit.> We obtain further simpler expressions for the special case, whenρ̅_l = ρ̃̅̃ρ̅^l (with ρ̅ < 1) withρ̃̅̃ = 1/∑_i=1^N-1ρ̅^i= (1- ρ̅)/ρ̅(1-ρ̅^N-1)and when N →∞. Observe thatO_mx = d_1ρ̃̅̃∑_i=1^N-1(d_2 ρ̅)^i- (d_2θ)^N(ρ̅/θ)^i/1-θ d_2 → d_1 d_2(1-ρ̅) /(1-d_2 ρ̅)(1-θ d_2) , as N →∞ because ρ̃̅̃∑_i=0^N-1(d_2θ)^N(ρ̅/θ)^i=(d_2θ)^N(1-ρ̅^N)/(1- ρ̅)θ^N - ρ̅^N/θ - ρθ^-N+1 → 0.In a similar wayρ̃̅̃θ^N∑_l < N ( ρ̅/θ)^l= ρ̃̅̃θθ^N - ρ̅^N/θ - ρ̅→ 0. And y_N,0 can be bounded as N→∞ (see Part-I <cit.> for details). Thus as N →∞ for anyj=1,2:y^j_mxρ̅ →( 2 c_jδ[ (1 + y^j_exj.ρ̅)(1-η_-j) + η_-j]+c_j (1-δ) (1 + y^j_exj.ρ̅))O_mx/1-c_mxO_mxO_m_x → d_1d_2(1-ρ̅)/(1-d_2 ρ̅)(1-θ d_2) -j := 2 1_{j=1} + 1 1_{j=2}. Here y^j_exjis similar to that in Part-I <cit.>, and{ y_l,k^j } with k = l+1 or l-1can be computeduniquely using {y^j_mxρ̅}.apacite | http://arxiv.org/abs/1705.09828v4 | {
"authors": [
"Ranbir Dhounchak",
"Veeraruna Kavitha",
"Eitan Altman"
],
"categories": [
"math.PR",
"cs.SI"
],
"primary_category": "math.PR",
"published": "20170527150716",
"title": "A Viral Timeline Branching Process to study a Social Network"
} |
definition defnDefinitionplain theorem[defn]Theorem conjecture[defn]Conjecture corollary[defn]Corollary lemma[defn]Lemma proposition[defn]Proposition remark rem[defn]Remarkdefinition exmp[defn]Example calculation calculationNUM ⇔⇒⇐ Minimum quantum resources for strong non-localityS. Abramsky, R. S. Barbosa, G. Carù, N. de Silva, K. Kishida, S. MansfieldMinimum quantum resources for strong non-locality Samson AbramskyRui Soares BarbosaGiovanni CarùDepartment of Computer ScienceUniversity of Oxford{samson.abramsky, rui.soares.barbosa, giovanni.caru}@cs.ox.ac.ukNadish de Silva Department of Computer ScienceUniversity College [email protected] Kohei Kishida, Department of Computer ScienceUniversity of [email protected] Shane Mansfield School of InformaticsUniversity of [email protected]========================================================================================================================================================================================================================================================================================================================================================================================================================================================================== We analyse the minimum quantum resources needed to realise strong non-locality, as exemplified e.g. by the classical GHZ construction. It was already known that no two-qubit system, with any finite number of local measurements, can realise strong non-locality. For three-qubit systems, we show that strong non-locality can only be realised in the GHZ SLOCC class, and with equatorial measurements. However, we show that in this class there is an infinite family of states which are pairwise non LU-equivalentthat realise strong non-locality with finitely many measurements. These stateshave decreasing entanglement between one qubit and the other two, necessitating an increasing number of local measurements on the latter. § INTRODUCTION In this paper, we aim at identifying the minimum quantum resources needed to witness strong contextuality <cit.>, and more specifically, strong (or maximal) non-locality. Non-locality is, of course, a fundamental phenomenon in quantum mechanics – both from a foundational point of view, and with respect to quantum information and computation, in which it plays a central rôle.The original form of Bell's argument <cit.>, as well as its now more standard formulation due to Clauser, Horne, Shimony, and Holt (CHSH) <cit.>, rests on deriving an inequality that must be satisfied by probabilities arising from any local realistic theory, but which is violated by those predicted by quantum mechanics for a particular choice of a state and a finite set of measurements. Greenberger, Horne, Shimony, and Zeilinger (GHSZ) <cit.> gave a stronger, inequality-free argument for quantum non-locality. This depended only on the possibilistic aspects of quantum predictions, i.e. on which joint outcomes given a choice of measurements have non-zero probability, regardless of the actual value of the probabilities. Their argument was later simplified by Mermin <cit.>. Whereas the Bell–CHSH argument used local measurements on a two-qubit system prepared in a maximally entangled state, the GHZ–Mermin argument required a three-qubit system in the GHZ state. Subsequently, Hardyshowed that one can indeed find a proof of non-locality “without inequalities”, i.e. based on possibilistic information alone, using a bipartite, two-qubit system <cit.>. 
Hardy's argument works on any two-qubit entangled state bar the maximally entangled ones <cit.>. In fact, a similar argument works on almost all n-qubit states <cit.>, the exceptions being those states which are products of one-qubit states and two-qubit maximally entangled states, which provably do not admit any non-locality argument “without inequalities”<cit.>. However,there is an important logical distinction between the GHSZ and Hardy possibilistic arguments.Abramsky and Brandenburger <cit.> introduced a general mathematical framework for contextuality, in which non-locality arises as a particular case. This approach studies these phenomena at a level of generality that abstracts away from the particularities of quantum theory. The point is that contextuality and non-locality are witnessed by the empirical data itself, without presupposing any physical theory. For this reason, one deals with “empirical models”– tables of data for a given experimental scenario, obtained from empirical observations or predicted by some physical theory, specifying probabilities of joint outcomes for the allowed sets of compatible measurements.Various kinds of contextuality (or, in particular, non-locality) arguments were studied and classified at this abstract level, leading to the introduction of a qualitative hierarchy of strengths of contextuality in <cit.>, with further refinements in <cit.>. The classic arguments for quantum non-locality, familiar from the literature, sit at different levels in this hierarchy. There is a strict relationship of strengths of non-locality, rendered asBell < Hardy < GHZwhere these representative examples correspond, respectively, to probabilistic non-locality, possibilisticnon-locality, and strongnon-locality.Strong contextuality (or, in particular, non-locality)arises when there is no assignment of outcomes to all the measurements consistent with the events that the empirical model deems possible, i.e. to which it attributes non-zero probability. It is exactly this impossibility which is shown by Mermin's classic argument in <cit.>. Strong contextuality is also the highest level of contextuality in a different, quantitative sense. It turns out to coincide with the notion of maximalcontextuality, the property that an empirical model admits no proper decomposition into a convex combination of a non-contextual model and another model. This corresponds to attaining the maximum value of 1 for the contextual faction, a natural measure of contextuality introduced in<cit.> as a generalisation of the notion of non-local fraction <cit.>. The contextual fraction is shown in <cit.> to be equal to the maximal normalised violation of a contextuality-witnessing inequality. Hence, a model is strongly contextual if and only if it violates a generalised Bell inequality up to its algebraic bound.Strong non-locality is particularly relevant to quantum computing. It is exhibited, for example, by all graph states under stabiliser measurements <cit.>, which provide resource states and measurements for universal quantum computing via the one-way or measurement-based model <cit.>. It is also known to be necessary for increasing computational power in certain models of measurement-based quantum computing with restricted classical co-processing <cit.>. 
For instance, in <cit.> it was shown that GHZ strong non-locality enables a linear classical co-processor to implement the non-linear 𝖠𝖭𝖣 function, and subsequently in <cit.> that it enables the function to be implemented in a secure delegated way. Moreover, strong non-locality has important consequences for certain information processing tasks: in particular, it is known to be required for perfect strategies <cit.> in certain cooperative games <cit.>.§.§ Summary of resultsIn this paper, our aim is to analyse the minimum quantum resources needed to realise strong non-locality. More precisely, we consider n-qubit systems viewed as n-partite systems,[We know by a result of Heywood and Redhead <cit.> that strong contextuality can be realisedusing a bipartite system, but with a qutrit at each site. Hence our focus on qubits.] where each party can perform one-qubit local projective measurements.[Throughout this paper, we focus on projective measurements. The more general POVMs are justified as physical processes by 's dilation, since they are described as projective measurements in a larger physical system. Given that we are interested in characterising the minimum resources needed in order to witness strong non-locality, it seems reasonable to focus on PVMs, which do not need to be seen as measurements on a part of a larger system. ] We shall consider the case where each party has a finite set of measurements available – this is what corresponds to the standard experimental scenarios for non-locality. *The first result we present is limitative in character. It shows that strong non-locality cannot be realised by a two-qubit system with any finite number of local measurements. This result was already proven, using different terminology, in <cit.>. However, we include it for completeness and because its proof is useful as a warm-up for proving the other results in this paper.[Note that, in the same paper, it is also shown that the result applies to any bipartite state where one of the systems is a qubit, by an application of Schmidt decomposition of any bipartite state. This means that the optimal dimention in which strong non-locality can be realised is 2 × 2 × 2 = 8, i.e. a three-qubit system, since a two-qutrit system has dimension 9.] There is a subtle counterpoint to this in a result from <cit.>, which shows that using a maximally entangled bipartite state, and an infinite family of local measurements, strong non-locality is achieved “in the limit” in a suitable sense. More precisely, as more and more measurements from the family are used, the local fraction – the part of the behaviour which can be accounted for by a local model – tends to 0, or equivalently the non-local fraction tends to 1. There is an interesting connection to this in our results for the tripartite case.However, there is a practical advantage in being able to witness strong non-locality with a fixed finite number of measurements. If one wishes to design an experimental test for maximal non-locality, it is desirable that one can increase precision, i.e. increase the lower bound on the non-local fraction, without needing to expand the experimental setup – in particular, the number of measurement settings required to be performed – but rather by simply performing more runs of the same experiment. *Having shown that strong non-locality cannot be realised in the two-qubit case, we turn to the analysis of three-qubit systems. 
Of course, we know by the classical GHSZ–Mermin construction that strong non-locality can be achieved in this case, using the GHZ state and PauliX and Y measurements on each of the qubits. Our aim is to analyse for which states, and with respect to which measurements, can strong non-locality be achieved. We use the classification into SLOCC classes for tripartite qubit systems from <cit.>. According to this analysis, there are two maximal SLOCC classes, the GHZ and W classes. Below these, there are the degenerate cases of products of an entangled bipartite state with a one-qubit state, e.g. AB-C. By the previous result, these degenerate cases cannot realise strong non-locality. We furthermore show that no state in the W class can realise strong non-locality, for any choice of finitely-many local measurements. *This leaves us with the GHZ SLOCC class. We use the detailed description of this class as a parameterised family of states from <cit.>.We first show that any state in this class witnessing strong non-localitywith finitely many local measurements must satisfy a number of constraints on the parameters. In particular, the state must be balanced in the sense that the coefficients in its unique linear decomposition into a pair of product states have the same complex modulus. We furthermore show that only equatorial measurements need be considered (the equators being uniquely determined by the state) – no other measurements can contribute to a strong non-locality argument.*Having thus narrowed the possibilities for realising strong non-locality considerably, we find a new infinite family of models displaying strong non-locality using states within the GHZ SLOCC class that are not LU-equivalent to the GHZ state. The states in this family start from GHZ and tend in the limit to the state ⊗|+⟩ in the AB–C class with maximal entanglement on the first two qubits,and in product with the third. This family is actually closely related to the construction from <cit.> in which an increasing number of measurements on a bipartite maximally entangled state eventually squeezes the local fraction to zero in the limit. Our family is obtained by adding a third qubit to this setup, with two available local measurements, and some entanglement between the first two qubits and the third one, thus allowingstrong non-locality to be witnessed with a finite number of measurements. There is a trade-off between the number of measurement settings available on the first two qubits – and, consequently, the lower bound for the non-local fraction these measurements can witness – and the amount of entanglement necessary between the third qubit and the original two. Outline.The remainder of this article is organised as follows: Section <ref> summarises some background material on non-locality and entanglement classification of three-qubit states, Section <ref> shows that strong non-locality cannot be witnessed by two-qubit states and a finite number of local measurements; Section <ref> does the same for three-qubit states in the SLOCC class of W; Section <ref> deals with states in the SLOCC class of GHZ, deriving conditions on these necessary for strong non-locality; Section <ref> presents the family of strong non-locality arguments using states in the GHZ-SLOCC class; and Section <ref> concludes with some discussion of open problems and further directions. Detailed proofs of all the results are found in the Appendix. 
§ BACKGROUND§.§ Measurement scenarios and empirical models We summarise some of the main ideas of <cit.>, with particular emphasis on non-locality. This is merely an instance of contextuality in a particular kind of measurement scenarios known as multipartite Bell-type scenarios. For each notion, we introduce the general definition followed by its specialisation to multipartite Bell-type scenarios.Measurement scenarios are abstract descriptions of experimental setups. In general, a measurement scenario is described by a set of measurement labels X, a set of outcomes O, and a coverof X consisting of measurement contexts, i.e. maximal sets of measurements that can be jointly performed. We are typically interested in measurement scenarios with finite X, but for technical reasons it will be useful to consider scenarios with infinitely many measurements in order to prove results about all their finite `subscenarios' at once. Throughout this paper, we shall also restrict our attention to dichotomic measurements, with outcome set O = -1,+1. This is a reasonable restriction, especially since our main focus shall be projective measurements on single qubits. Multipartite Bell-type scenarios are a particular kind of measurement scenario which can be thought to describe multiple parties at different sites, each independently choosing to perform one of a number of measurements available to them. More formally, an n-partite Bell-type scenario is described by sets X_1, …, X_n labelling the measurements available at each site (so that XX_1 ⊔⋯⊔ X_n), with maximal contexts corresponding to a single choice of measurement for each party, or in other words a tuple= m_1, …, m_n∈ X_1 ×⋯× X_n (so ≅∏_i=1^n X_i).An empirical model is a collection of probabilistic data representing possible results of running the experiment represented by a measurement scenario. Given a measurement scenario X,,O, an empirical model on that scenario is a familye_C_C ∈ where each e_C ∈(O^C) is a distribution over the set of joint outcomes to the measurements of C. Given an assignment sCO of outcomes to each measurement in C,the value e_C(s) is the probability of obtaining the outcomes determined by s when jointly performing the measurements in the context C. In the particular case of a Bell-type scenario, we have a family e_m∈(O^n)_m∈∏_i X_i of probability distributions. Given a vector of outcomes = o_1,…,o_n∈ O^n, the probability e_() of obtaining the joint outcomesupon performing the measurementsat each site is often denoted in the literature on non-locality as follows:e_() =( | ) = (o_1, …, o_n | m_1, …, m_n) Empirical models are usually assumed to satisfy a compatibility condition: that marginal distributions agree on overlapping contexts, i.e. for all C and C' in , e_C|_C ∩ C' = e_C'|_C ∩ C'. In the case of multipartite scenarios, this corresponds to the familiar no-signalling condition.§.§ Contextuality and non-localityAn empirical model is said to be non-contextual if there is a distribution onassignments of outcomes to all the measurements, d ∈(O^X), that marginalises to the empirical probabilities for each context, i.e. C ∈ d|_C = e_C. Note that this means there is a deterministic, non-contextual hidden-variable theory with the set of global assignments O^X serving as a canonical hidden variable space. 
Indeed, the existence of such a global distribution is in fact equivalent to the existence of a probabilistic hidden variable theory that is factorisable, a notion that in multipartite scenarios specialises to the standard formulation of Bell locality: there is a set of hidden variables Λ, a distribution in h∈(Λ), and ontic probabilities ( | , λ) that are consistent with the empirical ones, i.e. for all ∈ and ∈ O_n ∑_λ∈Λ( | , λ)h(λ) = ( | ) = e_() and that factorise when conditioned on each λ∈Λ, i.e.( | , λ) = ∏_i=1^n (o_i | m_i,λ)where the probabilities on the right-hand side are obtained as the obvious marginals.The equivalence between the two formulations of non-contextuality or locality – in terms of a probability distribution on global assignments (canonical deterministic hidden variable theory) and in terms of factorisable hidden variable theory – was proven in <cit.> for general measurement scenarios, vastly extending a result by Fine <cit.>. This justifies viewing non-locality as the special case of contextuality in multipartite systems.For some empirical models, it suffices to consider their possibilistic content, i.e. whether events are possible (non-zero probability) or impossible (zero probability), to detect the presence of contextuality. In this case, we say that the model is logically contextual. An even stronger form of contextuality, which will be our main concern in this article, arises when no global assignment of outcomes to all measurements is consistent with the events deemed possible by the model: the empirical model e is said to be strongly contextual if there is no assignment gXO such that C ∈ e_C(g|_C) > 0. In the particular case of multipartite scenarios, such a global assignment is determined by a family of maps g_iX_iO for each site i so that g = _i=1^n g_i_i=1^n X_iO. The consistency condition then reads: for any choice of measurements = m_1, …, m_n∈∏ X_i, writing g() = g_1(m_1),…,g_n(m_n), we havee_(g()) = (g() | ) = (g_1(m_1),…,g_n(m_n) | m_1, …, m_n) > 0 As mentioned in Section <ref>, strong contextuality was shown in <cit.> to exactly capture the notion of maximal contextuality. The proof of this equivalence depends crucially on the finiteness of the number of measurements. If one would consider an infinite number of measurements, a situation could occur in which there is a global assignment g consistent with the model, in the sense that C ∈ e_C(g|_C)>0, but where inf_C∈e_C(g|_C) = 0, in which case g does not correspond to any positive fraction of the model. This will indeed be the case for all the consistent global assignments described in this paper. Note, however, that proving the failure of strong contextuality in a scenario with an infinite number of measurements, even if the witnessing global assignment hasinf_C ∈e_C(g|C) = 0, is nonetheless sufficient to show that maximal contextuality cannot be realised using only a finite subset of the measurements.§.§ Quantum realisable modelsWe are mainly concerned with empirical models that are realisable by quantum systems. This means that one can find a quantum state and associate to each measurement label a quantum measurement in the same Hilbert space such thatmeasurements in the same context commute and the probabilities of the various outcomes are given by the Born rule.More specifically, we are concerned with models arising from n-qubit systems with local, i.e. single-qubit, measurements. 
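As a concrete illustration of these definitions — a sketch that is not part of the development in this paper — the following Python snippet builds the empirical model of the tripartite GHZ state under local Pauli X and Y measurements directly from the Born rule, and then checks every global assignment of outcomes to the six measurement labels against the resulting supports. None is consistent, which is exactly the GHZ–Mermin strong non-locality argument referred to in the Introduction; the same brute-force test applies to any finite multipartite empirical model.

import numpy as np
from itertools import product

# Brute-force strong-contextuality check for the GHZ-Mermin scenario.
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)                     # (|000> + |111>)/sqrt(2)
meas = {'X': np.array([[0, 1], [1, 0]], dtype=complex),
        'Y': np.array([[0, -1j], [1j, 0]], dtype=complex)}

def projector(op, outcome):
    # Rank-1 projector onto the eigenspace of a one-qubit observable.
    vals, vecs = np.linalg.eigh(op)
    v = vecs[:, np.argmax(vals * outcome)]           # eigenvector for eigenvalue `outcome`
    return np.outer(v, v.conj())

def prob(context, outcomes):
    # Born-rule probability of a joint outcome for local measurements on GHZ.
    P = projector(meas[context[0]], outcomes[0])
    for c, o in zip(context[1:], outcomes[1:]):
        P = np.kron(P, projector(meas[c], o))
    return float(np.real(ghz.conj() @ P @ ghz))

contexts = list(product('XY', repeat=3))
support = {ctx: {o for o in product((+1, -1), repeat=3) if prob(ctx, o) > 1e-9}
           for ctx in contexts}

labels = [(i, s) for i in range(3) for s in 'XY']    # six measurement labels

def consistent(g):                                    # g maps each label to +1/-1
    return all(tuple(g[(i, ctx[i])] for i in range(3)) in support[ctx]
               for ctx in contexts)

found = any(consistent(dict(zip(labels, vals)))
            for vals in product((+1, -1), repeat=6))
print("consistent global assignment exists:", found)  # False: strongly non-local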
The Bloch sphere representation of one-qubit pure states will be useful: assuming a preferred orthonormal basis |0⟩,|1⟩ of ^2, we shall use the notationcosθ/2|0⟩ +e^iφsinθ/2|1⟩for any θ∈ [0,π] and φ∈ [0, 2π).Any single-qubit projective measurement is fully determined by specifying such a normalised vector in ^2, namely the pure state corresponding to the +1 eigenvalue or outcome. Hence, the set of local measurements for a single qubit is labelled by = [0,π] × [0, 2π)The quantum measurement determined by (θ,φ) ∈ has eigenvalues O = +1,-1 with the eigenvector corresponding to outcome o ∈ O given by:if o = +1 π-θφ + π if o = -1 Throughout this paper, we shall be considering the n-partite measurement scenario with X_i = for every site. Measurement contexts correspond to a choice of single qubit measurements for each of the n sites, represented by a tuple = ,…,. Performing all the measurements of a context in parallel yields an outcome = o_1, …, o_n∈ O^n. The vector corresponding to this outcome is denoted⊗⋯⊗We shall also find it useful to write ⊗⋯⊗ = +1, …, +1for the vector corresponding to the joint outcome assigning +1 at every site.An n-qubit state |ψ⟩ determines an empirical model e^|ψ⟩ for this measurement scenario:e^|ψ⟩_() = ^|ψ⟩(o_1,…,o_n | ,…,)|ψ|^2We are concerned with checking for strongly non-local behaviour on such a model. As explained in the previous section, this amounts to checking for the existence of maps g_iO for each site such that for any choice of measurements , the corresponding outcome has positive probability:e_(g)= ^|ψ⟩(g_1,…, g_n | ,…,) = |gψ|^2 > 0Given that these are quantum probabilities, we can rephrase this condition in terms of non-vanishing amplitudes: gψ≠ 0.The following fact will be used throughout. Suppose we want to check the consistency with the empirical model of a given global assignment g = _i=1^n g_i. If this assignment satisfiesi ∈1, …,ng_i(θ,φ) = -g_i(π - θ,φ+π)that is, measurements with +1 eigenstates diametrically opposed in the Bloch spehere (i.e. measurements that are the negation of each other) are assigned opposite outcomes, theng_i =if g_i = +1 π-θφ + π if g_i = -1( g_i(π - θ,φ+θ) = +1)meaning that g = '' with g_i(θ'_i,φ_i') = +1 for all i. In other words,should we wish to calculate the amplitude for a joint outcomeon a given context , we may equivalently calculate the amplitude for the joint outcome +1,…,+1 on a new context (',') obtained by substituting θ_i ↦π - θ_i and φ_i ↦π + φ_i for all i such that o_i = -1. Therefore, it suffices to verify the equation gψ≠ 0 for all contexts whose measurements are all assigned +1. Indeed, the same is true if (<ref>) is relaxed to simply say that g_i(π - θ,φ+π)=-1g_i(θ,φ)=+1. Incidentally, even though we shall not need this fact, note that if there is any global assignment consistent with the model, there will be one that satisfies (<ref>), for this would only require a subset of the conditions.We conclude this subsection with two observations regarding these particular quantum empirical models. First, note that local unitaries (LU) on the state don't affect non-locality, or indeed strong non-locality, of the resulting empirical model. This follows from the fact that by moving from the Schrödinger to the Heisenberg picture, we may equivalently leave the state fixed and apply the corresponding unitaries to the sets of available local measurements. 
Since the available local measurements are all the projective one-qubit measurements, a local unitary, which can be seen as a rotation of the Bloch sphere, merely maps this set to itself. Secondly, if we are dealing with a product state of n-qubits, |ψ⟩ = |ψ_1⟩⊗⋯⊗|ψ_n⟩, then the resulting empirical model is necessarily local. This is because the probabilities factorise:^|ψ⟩( | ) = |ψ|^2 = |∏_i=1^nψ_i|^2 = ∏_i=1^n|ψ_i|^2§.§ SLOCC classes of three-qubit states A classification of multipartite quantum states by their degree of entanglement is given by the notion of LOCC (local operations and classical communication) equivalence <cit.>.A protocol is said to be LOCC if it is of the following form: each party may perform local measurements and transformations on their system, and may communicate measurement outcomes to the other parties, so that local operations may be conditioned on measurement outcomes anywhere in the system. A state |ψ_1⟩ is LOCC-convertible to a state |ψ_2⟩ if there exists a LOCC protocol that deterministically produces |ψ_2⟩ when starting with |ψ_1⟩.Intuitively, such a protocol cannot increase the degree of entanglement and so we think of |ψ_1⟩ as being at least as entangled as |ψ_2⟩.The notion of LOCC-convertibility defines a preorder [A preorder is a reflexive and transitive relation; i.e. it is like a partial order except that it can deem two distinct elements equivalent.] on multipartite states that in turn yields a notion of LOCC-equivalence of states: the states |ψ⟩ and |ϕ⟩ are LOCC-equivalent when |ψ⟩ is LOCC-convertible to |ϕ⟩ and vice versa.The LOCC-convertibility preorder then naturally defines a partial order on the collection of LOCC equivalence classes of states.A coarser classification of multipartite quantum states is given by relaxing the requirement that our conversion protocols succeed deterministically to the requirement that they succeed with non-zero probability <cit.>.The previous paragraph holds true for SLOCC (stochastic LOCC) mutatis mutandis. Note that equivalence of two states under LU transformations implies their SLOCC-equivalence. More generally, two states are SLOCC-equivalent if and only if they are related by an invertible local operator (ILO) <cit.>. Dür, Vidal, and Cirac <cit.> classified the SLOCC classes of three-qubit systems and found there to be exactly six classes (see Figure <ref>).The GHZ and W states are representatives of the two maximal, non-comparable classes.Three intermediate classes are characterised by bipartite entanglement between two of the qubits, which are in a product with the third.Finally, the minimal class is given by product states.By the last observation in the previous section, it is obvious that a state in the A–B–C class cannot realise non-locality, and that the case of a state in one of the intermediate classes can be reduced to that of the two qubits that are entangled. Hence, we shall first discuss strong non-locality for two-qubit states and then proceed in turn to each of the maximal SLOCC classes of three-qubit states, W and GHZ. § TWO-QUBIT STATES ARE NOT STRONGLY NON-LOCAL Every two-qubit state can be written, up to LU, uniquely as =cosδ|00⟩+sinδ|11⟩,where δ∈ [0,π/4]. The state (<ref>) is either: the product state |00⟩, which is obviously non-contextual since it is separable, when δ=0; or an entangled state in the SLOCC class of the Bell state |Φ^+⟩=1/√(2)(|00⟩+|11⟩), when δ > 0.[equivalent to <cit.>]theoremthmBipartite Two-qubit states do not admit strongly non-local behaviour. 
This proof rests on defining an explicit global assignment g:⊔→ O consistent with the possible events of the empirical model. More specifically, the map g is obtained by assigning outcome +1 to one hemisphere of the Bloch sphere, and -1 to the other, with special conditions on the poles and a slight asymmetry between the two parties. We start by computing the amplitude ψ of measuring (,)=(θ_1,φ_1),(θ_2,φ_2) on the general state (<ref>) and obtaining joint outcome +1,+1:ψ=cosδcosθ_1/2cosθ_2/2+ sinδsinθ_1/2sinθ_2/2e^-i(φ_1+φ_2)Since δ=0 gives rise to a product state, we will assume δ≠ 0.We define the following maps:g_1 O(θ,φ) +1 if θ=π or (θ≠ 0and φ∈[-π/2,π/2))-1 if θ=0or (θ≠π and φ∈[π/2,3π/2) )g_2 O(θ,φ) +1 if θ=π or (θ≠ 0and φ∈(-π/2,π/2])-1 if θ=0or (θ≠π and φ∈(π/2,3π/2] )and let g g_1⊔ g_2⊔O be a global assignment. A graphical representation of the map g can be found in Figure <ref>.Let (,) be a context whose individual measurements are mapped to +1 by g (see Section <ref> for why this is sufficient). In particular, it holds that θ_1,θ_2≠ 0. Since δ≠ 0, we have ssinδsinθ_1/2sinθ_2/2>0ccosδcosθ_1/2cosθ_2/2≥ 0.If θ_1=π or θ_2=π, then c=0, which implies ψ=se^-i(φ_1+φ_2)≠ 0. Otherwise, φ_1∈[-π/2,π/2), φ_2∈(-π/2,π/2] and ψ=c+se^-i(φ_1+φ_2) is the sum of a positive real number and a non-zero complex number. For it to be zero, the latter must be real and negative, hence φ_1+φ_2=π 2π,which cannot be satisfied in the domain of φ_1,φ_2. § W-SLOCC STATES ARE NOT STRONGLY NON-LOCALA general state in the SLOCC class of the W state =1/√(3)(|001⟩+|010⟩+|100⟩) can be written, up to LU, as=√(a)|001⟩+√(b)|010⟩+√(c)|100⟩+√(d)|000⟩,where a,b,c∈ℝ_>0 and d 1-(a+b+c)∈ℝ_≥0. Indeed, we can obtainfromby applying the following ILO to :( √(a) √(b)0√(c)) ⊗( √(3)00√(3b)/√(a)) ⊗ I.In order to prove that W-SLOCC states are not strongly non-local, we will need the following lemma, which generalises the argument used in the proof of Theorem <ref> to show that the amplitude could not be zero.Let z_1,…,z_m∈ℂ, and r∈ℝ_≥0. If ∑_i=1^mz_i+r=0,then one of the following holds: (i) z_1=⋯ = z_m=r=0; (ii) there exists a z_k∈ℝ_< 0; (iii) there exists 1≤ k, l≤ m such that (z_k)∈(0,π) and (z_l)∈(-π,0).If all the z_i are real, then, since r is non-negative, we must have either (i) or (ii). Now, suppose there is a 1≤ k≤ m such that (z_k)≠ 0. By (<ref>), we have ∑_i=1^n(z_i)=0. Thus,∑_i≠ k(z_i)=-(z_k) ∑_i≠ k|z_i|sin((z_i))=-|z_k|sin((z_k)).Hence, there exists at least one l≠ k for which the sign of (z_l) is opposite to that of (z_k),which implies that z_l and z_k are in different sides of the real axis, implying the condition about (z_l) and (z_k). theoremthmWSLOCC States in the SLOCC class of W do not admit strongly non-local behaviour.Similarly to the bipartite case of Theorem <ref>, the key idea of the proof is the definition of a global assignment g:⊔⊔→ O whose restriction to each context is contained in the support of the model. 
Once again, g is obtained by partitioning the Bloch sphere into two hemispheres to which are assigned different outcomes, with asymmetric polar conditions across the parties.We start by computing the amplitudeof measuring (,) on the general state (<ref>) and obtaining joint outcome +1,+1,+1: = √(a)(cosθ_1/2cosθ_2/2sinθ_3/2 e^-iφ_3)_ z_3∈ℂ+√(b)(cosθ_1/2cosθ_3/2sinθ_2/2 e^-iφ_2)_ z_2∈ℂ+√(c)(cosθ_2/2cosθ_3/2sinθ_1/2 e^-iφ_1)_ z_1∈ℂ+√(d)(cosθ_1/2cosθ_2/2cosθ_3/2)_ r∈ℝ_≥ 0.Define the following functions:h=g_1=g_2 O(θ,φ) +1 if θ=0or (θ≠π and φ∈(-π,0])-1 if θ=π or (θ≠ 0and φ∈(0,π]) g_3 O(θ,φ) +1 if θ=π or (θ≠ 0and φ∈(-π,0])-1 if θ= 0or (θ≠π and φ∈(0,π])and let g h⊔ h⊔ g_3⊔⊔O be a global assignment. The map g is graphically represented in Figure <ref>. Let (,) be a context whose individual measurements are mapped to +1 by g. In particular, θ_1,θ_2≠π and θ_3≠ 0. Since a>0, we have |z_3|=√(a)cosθ_1/2cosθ_2/2sinθ_3/2>0,which implies z_3≠ 0. Now, if θ_3=π, then z_1=z_2=r=0 and =z_3≠ 0.Otherwise, θ_3≠π and φ_3∈(-π,0], implying that (z_3)=-φ_3∈[0,π). For i=1,2, we either have θ_i=0 or φ_i∈(-π, 0], implying that z_i=0 or (z_i)=-φ_i∈[0,π). Using Lemma <ref>, we conclude that ≠ 0: (i) fails because z_3≠ 0, while (ii) and (iii) fail because (z_i)∈[0,π) whenever z_i≠ 0. § STRONG NON-LOCALITY IN THE SLOCC CLASS OF GHZ§.§ The n-partite GHZ state and local equatorial measurements Before we tackle the general case of GHZ-SLOCC states, we consider the GHZ state itself. We show that equatorial measurements are the only relevant ones in the study of strong non-locality for this state. In fact, this holds for the general n-partite GHZ state,1/√(2)(|0⟩^⊗ n+|1⟩^⊗ n)and consequentely, in light of the remark towards the end of Section <ref>, for any state in its LU class. In the next section, we generalise this result to arbitrary states in the SLOCC class of the tripartite GHZ state, and study conditions for strong non-locality within this class. theoremthmGHZn Any strongly non-local behaviour ofcan be witnessed using only equatorial measurements. That is, there is a global assignment g consistent with the model e^ in all contexts that are not exclusively composed of equatorial measurements. The proof is achieved using a construction of a global assignment similar to the ones previously discussed. First, we derive the formula for the amplitudeof measuring (,) and obtaining joint outcome +1,…,+1: = 1/√(2)( ∏_i=1^n cosθ_i/2 + e^-i ∑_i=1^n φ_i∏_i=1^n sinθ_i/2). Consider the function hO(θ,φ) +1 if θ∈[0,π/2]-1 if θ∈(π/2,π]i.e. h assigns +1 to the equator and the northern hemisphere, and -1 to the southern hemisphere. Let g_i=1^n h_i=1^n O. We show that this global assignment is consistent with the probabilities at all contexts that include at least a non-equatorial measurement. Let (, ) be a context whose measurements are mapped to +1 by g. In particular, θ_i≤π/2 for all i. If =0, then∏_i=1^n cosθ_i/2 = -e^-i (∑_i=1^n φ_i)∏_i=1^n sinθ_i/2Taking the modulus of both sides and dividing the right-hand by the left-hand side yields:∏_i=1^n tanθ_i/2 = 1which is verified if and only if θ_i=π/2 for all 1≤ i ≤ n. §.§ Balanced GHZ-SLOCC states and local equatorial measurements A general state in the SLOCC class of the GHZ state can be written, up to LU, as=√(K)(cosδ|000⟩ + sinδ e^iΦ|φ_1⟩|φ_2⟩|φ_3⟩),where K=(1+2cosδsinδcosαcosβcosγcosΦ)^-1, and|φ_1⟩ = cosα|0⟩+sinα|1⟩, |φ_2⟩ = cosβ|0⟩+sinβ|1⟩, |φ_3⟩ = cosγ|0⟩+sinγ|1⟩,for some δ∈(0,π/4], α,β,γ∈ (0,π/2], and Φ∈[0,2π). 
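As a quick numerical sanity check of the normalisation constant K quoted above (an illustrative sketch added here, not part of the original text), one can construct the parametrised state for random admissible values of δ, α, β, γ, Φ and confirm that it has unit norm:

```python
# Illustrative check (not from the paper): the constant K above normalises
#   sqrt(K) * ( cos(delta)|000> + sin(delta) e^{i Phi} |phi_1>|phi_2>|phi_3> )
# for any admissible choice of delta, alpha, beta, gamma, Phi.
import numpy as np

rng = np.random.default_rng(0)

def one_qubit(angle):
    """|phi> = cos(angle)|0> + sin(angle)|1>."""
    return np.array([np.cos(angle), np.sin(angle)], dtype=complex)

for _ in range(5):
    delta = rng.uniform(0.0, np.pi / 4)
    alpha, beta, gamma = rng.uniform(0.0, np.pi / 2, size=3)
    Phi = rng.uniform(0.0, 2 * np.pi)

    K = 1.0 / (1.0 + 2 * np.cos(delta) * np.sin(delta)
               * np.cos(alpha) * np.cos(beta) * np.cos(gamma) * np.cos(Phi))

    ket000 = np.zeros(8, dtype=complex); ket000[0] = 1.0
    prod = np.kron(np.kron(one_qubit(alpha), one_qubit(beta)), one_qubit(gamma))
    psi = np.sqrt(K) * (np.cos(delta) * ket000
                        + np.sin(delta) * np.exp(1j * Phi) * prod)

    print(np.round(np.linalg.norm(psi), 12))   # prints 1.0 every time
```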
Indeed,is obtained fromvia the ILO√(2K)( cosδ sinδcosα e^iΦ0sinδsinα e^iΦ ) ⊗(1cosβ0sinβ ) ⊗(1cosγ0sinγ ). In order to prove the results of this section, it is convenient to describein a slightly different form. By applying local unitaries, we can rewrite it as =√(K)(cosδ|v_λ_1⟩|v_λ_2⟩|v_λ_3⟩+sinδ e^iΦ|w_λ_1⟩|w_λ_2⟩|w_λ_3⟩),where|v_λ⟩ = λ0 =cosλ/2|0⟩+sinλ/2|1⟩, |w_λ⟩ =π-λ0=sinλ/2|0⟩+cosλ/2|1⟩for some λ_i ∈ [0,π/2), i=1,2,3. The action of this LU can be thought of as choosing a new orthonormal basis for each qubit: a graphical illustration of this process can be found in Figure <ref>.A key advantage of this LU-equivalent description of a general state in the GHZ SLOCC class is that the equator of the i-th qubit's Bloch sphere coincides with the great circle that bisects the i-th components of the two unique product states that form a linear decomposition of the state. Note that any state in the GHZ SLOCC class thus uniquely defines an equator in each Bloch sphere. It is to the measurements lying on these that we refer as being equatorial.We say that a state in the GHZ SLOCC class is balanced if the coefficients in its unique linear decomposition into a pair of product states have the same complex modulus – when the state is written in the form (<ref>), this corresponds to having δ = π/4, hence cosδ = sinδ = 1/√(2). Let |v_λ⟩ and |w_λ⟩ be given as in (<ref>), with λ∈[0,π/2), and consider a measurement (θ,φ) with θ∈[0,π/2), i.e. with +1 eigenstate in the `northern hemisphere'. Then |⟨θ,φ|v_λ⟩|>|⟨θ,φ|w_λ⟩|. We have |⟨θ,φ|v_λ⟩|>|⟨θ,φ|w_λ⟩|| cosθ/2cosλ/2+sinθ/2sinλ/2e^-iφ| >| cosθ/2sinλ/2+sinθ/2cosλ/2e^-iφ||1+tanλ/2tanθ/2e^-iφ|>|tanλ/2+tanθ/2e^-iφ|,where, for the last step, we divide both sides by cosλ/2cosθ/2, which is never 0 since λ,θ∈[0,π/2). Let xtanλ/2 and ytanθ/2, then|1+xye^-iφ|>|x+ye^-iφ|⇔ |1+xy(cosφ-isinφ)|>|x+y(cosφ-isinφ)|⇔ 1+2xycosφ+x^2y^2>x^2+2xycosφ+y^2⇔ 1+x^2y^2-x^2-y^2>0⇔ (1-x^2)(1-y^2)>0and this is always verified since x,y∈[0,1) by the definition of the domains of θ and λ.We use this lemma to generalise Theorem <ref> to arbitrary states in the SLOCC class of the tripartite GHZ state.theoremthmEquatorialBalanced A state in the SLOCC class of GHZ that displays strong non-locality must be balanced. Moreover, any such strongly non-local behaviour can be witnessed using only equatorial measurements. The proof of this theorem can be derived by taking advantage of the special properties of balanced states and combining them with the argument used for Theorem <ref>. As before, we compute the amplitude :=√(K)(cosδ∏_i=1^3v_λ_i+sinδ e^iΦ∏_i=1^3w_λ_i)Take hO as defined in the proof of Theorem <ref> and let g h⊔ h⊔ h. We claim that g is consistent with the empirical probabilities at all contexts that include at least a non-equatorial measurement. Letbe a context whose measurements are all mapped to +1 by g. In particular, θ_i≤π/2 for i=1,2,3. If =0, thencosδ∏_i=1^3v_λ_i=-sinδ e^iΦ∏_i=1^3w_λ_i,and taking the complex modulus of both sides,cosδ∏_i=1^3|v_λ_i|=sinδ∏_i=1^3|w_λ_i|Since δ∈(0,π/4] we have cosδ≥sinδ, with equality iff δ=π/4. By Lemma <ref>, we conclude that this equation can only be satisfied if δ=π/4 (i.e. the state is balanced) and θ_i=π/2 for i=1,2,3 (i.e. all the measurements are equatorial).§.§ Further restrictionsThe theorem above allows us to reduce the scope of our search for strongly non-local behaviour in the SLOCC class of GHZ to: (i) balanced states, i.e. 
those of the form√(K/2)(|v_λ_1⟩|v_λ_2⟩|v_λ_3⟩+e^iΦ|w_λ_1⟩|w_λ_2⟩|w_λ_3⟩),determined by a tuple =λ_1,λ_2,λ_3∈[0,π/2)^3 and a phase Φ, where |v_λ⟩ and |w_λ⟩ are given as in (<ref>); (ii) local equatorial measurements in the sense defined above, i.e. those with +1 eigenstate|φ⟩|π/2,φ⟩=1/√(2)(|0⟩+e^iφ|1⟩)for φ∈[0,2π). Given this premise, we are interested in understanding when the amplitude function ⟨|⟩ is 0. We have:⟨|⟩=0∏_i=1^3⟨φ_i|v_λ_i⟩+e^iΦ∏_i=1^3⟨φ_i|w_λ_i⟩=0 ∏_i=1^3⟨φ_i|w_λ_i⟩ = -e^-iΦ∏_i=1^3⟨φ_i|v_λ_i⟩∏_i=1^3⟨φ_i|w_λ_i⟩ = -e^-iΦ∏_i=1^3e^-iφ_i⟨φ_i|w_λ_i⟩∏_i=1^3e^iφ_i⟨φ_i|w_λ_i⟩⟨φ_i|w_λ_i⟩^^-1=-e^-iΦ∏_i=1^3e^iφ_i(⟨φ_i|w_λ_i⟩/|⟨φ_i|w_λ_i⟩|)^2=-e^-iΦ∑_i=1^3(φ_i+2⟨φ_i|w_λ_i⟩)=π-Φ 2πwhere to get (<ref>) we use ⟨φ|v_λ⟩ = 1/√(2)(cosλ/2+sinλ/2e^-iφ) = e^-iφ/√(2)(cosλ/2e^iφ+sinλ/2) = e^-iφ⟨φ|w_λ⟩.and for the last step we take the argument of two complex numbers of norm 1. Definingβ(λ, φ)φ+ 2⟨φ|w_λ⟩=φ-2arctan(sinλ/2sinφ/cosλ/2+sinλ/2cosφ),we can rewrite the condition above as⟨|⟩=0 ∑_i=1^3β(λ_i,φ_i)=π-Φ 2π propositionpropLambda If λ_1+λ_2+λ_3>π/2, the statedoes not admit strongly non-local behaviour. We start by showing that the map β(λ,φ), seen as a function of φ, is strictly increasing for all λ∈[0,π/2). To see this, it is sufficient to compute the derivative:λ∈[0,π/2),φ∈[0,2π)∂/∂φβ(λ,φ)=cosλ/1+cosφsinλThis is strictly positive since cosλ>0 and cosφsinλ>-1 since 0≤sinλ <1.Now, define a function h[0,2π)O byh(φ) +1 if φ∈(-π/2,π/2]-1 if φ∈(π/2,3π/2]and let g h⊔ h⊔ h. Take a contextwhose measurements are assigned +1 by g, i.e. φ_i∈(-π/2,π/2]. Using the fact that β(λ,-) is increasing, we have|∑_i=1^3β(λ_i,φ_i)|≤∑_i=1^3|β(λ_i,φ_i)|≤∑_i=1^3β(λ_i,π/2)=∑_i=1^3(π/2-λ_i)=3π/2-∑_i=1^3λ_i<3π/2-π/2=π.Consequently, ∑_i=1^3β(λ_i,φ_i)≠π 2π, hence by (<ref>), ⟨|⟩≠0 as required. § A FAMILY OF STRONGLY NON-LOCAL THREE-QUBIT MODELS theoremthmGHZfamily Let m∈ℕ_>0 and N 2m an even number. Consider the tripartite measurement scenario with X_1=X_2=0,…,N-1 and X_3=0,N/2. The empirical model determined by the state 0,0,λ_N0, where λ_Nπ/2-π/N, with the measurement label i at each site interpreted as the local equatorial measurement cosiπ/Nσ_X+siniπ/Nσ_Y (i.e. the measurement with +1 eigenstate π/2iπ/N), is strongly non-local.This proof rests on deriving, using the algebraic structure of _2N, a (conditional) system of linear equations over ℤ_2 that must be satisfied by any global assignment consistent with the possible events of the empirical model, yet does not admit any solution. This seems to be closely related to the general concept of all-vs-nothing (AvN) arguments introduced in <cit.>, but does not quite fit this setting. The reason is that the system of linear equations that a global assignment g must satisfy depends on the value that g assigns to a particular measurement. In that sense, this could be seen as a conditional version of an AvN argument. Consider a context i,j,k∈ X_1× X_2× X_3, with i,j∈0,…,N-1, k ∈0,m, and a triple of outcomes a_i,b_j,c_k∈ℤ_2^3 for the measurements in the context.[For this proof, it is convenient to relabel +1, -1, × as 0,1,⊕, where ⊕ denotes addition modulo 2.]From equation (<ref>), we know that measuring i,j,k and obtaining outcomes a_i,b_j,c_k has probability zero if and only if β(0,iπ/N+a_iπ)+β(0,jπ/N+b_jπ)+β(π/2-π/N,kπ/N+c_kπ)=π 2πWith simple computations, we can show that β(0,φ)=φ for all φ∈[0,2π), and thatβ(π/2-π/N,c_0π)=c_0πandβ(π/2-π/N,π/2+c_mπ)=(-1)^c_mπ/N. 
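These identities are straightforward to confirm numerically from the explicit arctan form of β given above; the following sketch (an illustration, not part of the original proof) checks them for several even values of N:

```python
# Illustrative numerical check (not part of the original proof) of the three
# identities quoted above, using the explicit arctan form of beta(lambda, phi).
import numpy as np

def beta(lam, phi):
    return phi - 2 * np.arctan(np.sin(lam / 2) * np.sin(phi)
                               / (np.cos(lam / 2) + np.sin(lam / 2) * np.cos(phi)))

for N in (2, 4, 6, 10, 20):                      # N = 2m, even
    lam_N = np.pi / 2 - np.pi / N
    # beta(0, phi) = phi
    phis = np.linspace(0.0, 2 * np.pi, 7, endpoint=False)
    assert np.allclose(beta(0.0, phis), phis)
    # beta(lam_N, c0 * pi) = c0 * pi               for c0 in {0, 1}
    assert np.allclose([beta(lam_N, 0.0), beta(lam_N, np.pi)], [0.0, np.pi])
    # beta(lam_N, pi/2 + cm * pi) = (-1)^cm * pi/N (mod 2*pi) for cm in {0, 1}
    assert np.isclose(beta(lam_N, np.pi / 2), np.pi / N)
    assert np.isclose(beta(lam_N, 3 * np.pi / 2) % (2 * np.pi), 2 * np.pi - np.pi / N)

print("all identities verified")
```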
An arbitrary global assignment is defined by choosing outcomes for all the measurements in X_1 ⊔ X_2 ⊔ X_3:a_0,…, a_N-1, b_0,…, b_N-1, c_0, c_m∈ℤ_2.By (<ref>) and (<ref>), such an assignment is consistent with the probabilities of the empirical model at every context if and only ifiπ/N+a_iπ+jπ/N+b_jπ+c_0π≠π 2π∀ i,j∈0,…,N-1iπ/N+a_iπ+jπ/N+b_jπ+(-1)^c_mπ/N≠π2π∀ i,j∈0,…,N-1We will proceed to show that this system admits no solution, which implies strong non-locality. By identifying the group kπ/Nk∈ℤ_2N with ℤ_2N, we can equivalently rewritei+a_iN+j+b_jN+c_0N≠ N2N∀ i,ji+a_iN+j+b_jN+(-1)^c_m≠ N 2N∀ i,ji+j+N(a_i⊕ b_j⊕ c_0)≠ N 2N∀ i,ji+j+(-1)^c_m+N(a_i⊕ b_j)≠ N 2N∀ i,ja_i⊕ b_j⊕ c_0=0∀ i,js.t.i+j=0a_i⊕ b_j⊕ c_0=1∀i,js.t.i+j=Na_i⊕ b_j=0∀i,js.t.i+j+(-1)^c_m=0a_i⊕ b_j=1∀ i,js.t.i+j+(-1)^c_m=N.a_0⊕ b_0⊕ c_0=0a_i⊕ b_N-i⊕ c_0=1∀ is.t. 1≤ i≤ N-1a_i⊕ b_N-i-1=1∀ is.t.0≤ i≤ N-1ifc_m=0a_0⊕ b_1=0a_1⊕ b_0=0 ifc_m=1a_i⊕ b_N+1-i=1∀ is.t. 2≤ i≤ N-1 Since N=2m is even, if we sum all the N equations from the first two lines we obtain⊕_i=0^N-1a_i⊕⊕_j=0^N-1b_j=1.On the other hand, if we sum any of the other two groups of N equations we get⊕_i=0^N-1a_i⊕⊕_j=0^N-1b_j=0,showing that the system is unsatisfiable regardless of whether c_m=0 or c_m=1. This new family of strongly non-local three-qubit systems is tightly connected to a construction on two-qubit states due to Barrett, Kent, and Pironio <cit.>. In particular, our empirical models restricted to the first two parties coincide, up to a rotation of the equatorial measurements, to those used in <cit.>. The local fraction of these bipartite empirical models tends to zero as the number of measurements increases, but obviously none of them are strongly non-local. Despite the lack of strong non-locality in the bipartite systems constructed in <cit.>, we show that it is possible to witness strongly non-local behaviour with a finite amount of measurements by adding a third qubit with some entanglement, and only two local measurements – Pauli X and Y– available on it. An interesting aspect is that there is a trade-off between the number of measuring settings available on the first two qubits and the amount of entanglement between the third qubit and the system comprised of the other two.We illustrate this by computing the bipartite von Neumann entanglement entropy between the first two qubits and the third, i.e. the von Neumann entropy of the reduced state of 0,0,λ0 corresponding to the third qubit, as a function of λ. Let ρ_ABC denote the density matrix of 0,0,λ0. The reduced density matrix corresponding to the third qubit isρ_C(λ)=_AB[ρ_ABC]=⟨00|__ABρ_ABC|00⟩__AB+⟨11|__ABρ_ABC|11⟩__AB=1/2(1 2cosλ/2sinλ/22cosλ/2sinλ/21 ).The eigenvalues of ρ_C(λ) are ϵ_±(λ)1/2(1±sinλ). Hence, by rewriting ρ_C(λ) in its eigenbasis, we can easily compute the von Neumann entropy S_C as a function of λ:S_C(λ) -[ ρ_C(λ)log_2ρ_C(λ)]=-ϵ_+(λ)log_2ϵ_+(λ)- ϵ_-(λ)log_2ϵ_-(λ)The plot of the function S_C(λ) is shown in Figure <ref>. Notice that the entanglement entropy is maximal, i.e. equal to 1, when N=2, in which case λ_2=0 and so 0,0,λ_20=. This corresponds to the usual GHSZ argument with Pauli measurements X,Y for each qubit. On the other hand, S(λ) becomes arbitrarily small as N →∞, when λ_N →π/2 and0,0,λ_N0 approaches the state ⊗|+⟩, which has no entanglement between the first two qubits and the third.§ OUTLOOK Our analysis of strong non-locality for three-qubit systems has been quite extensive. We shall discuss a number of directions for further research. 
*First, it remains to complete our classification of all instances of three-qubit strong non-locality.*The original GHSZ–Mermin model witnesses the yet stronger algebraic notion of all-versus-nothing (AvN) non-locality, formalised in a general setting in <cit.>, and indeed provides one of the motivating examples for considering this kind of non-locality. The family of strongly non-local models introduced in Section <ref> does not fit this framework exactly.Nevertheless, our proof of strong non-locality does make essential use of the algebraic structure of ℤ_2N (or the circle group), in what amounts to a conditional version of an AvN argument. One may wonder whether a similar property will hold for all instances of three-qubit strong non-locality.*This family also highlights an inter-relationship between non-locality, entanglement and the number of measurements available, and raises the question of whether this is an instance of a more general relationship. *Finally, while the present results provide necessary conditions for strong non-locality in three-qubit states, the more general question of characterising strong non-locality of n-qubit states, where little is known about SLOCC classes, remains open. § ACKNOWLEDGMENTSThis work was carried out in part while some of the authors visited the Simons Institute for the Theory of Computing (supported by the Simons Foundation) at the University of California, Berkeley, as participants of the Logical Structures in Computation programme (AB, RSB, GC, NdS, SM), and while SM was based at the Institut de Recherche en Informatique Fondamentale, Université Paris Diderot – Paris 7. Support from the following is also gratefully acknowledged: EPSRC EP/N018745/1 (SA, RSB) and EP/N017935/1 (NdS), `Contextuality as a Resource in Quantum Computation'; EPSRC Doctoral Training Partnership and Oxford–Google Deepmind Graduate Scholarship (GC); U.S. AFOSR FA9550-12-1-0136, `Topological & Game-Semantic Methods for Understanding Cyber Security' (KK); Fondation Sciences Mathématiques de Paris, post-doctoral research grant otpFIELD15RPOMT-FSMP1, `Contextual Semantics for Quantum Theory' (SM). eptcs | http://arxiv.org/abs/1705.09312v1 | {
"authors": [
"Samson Abramsky",
"Rui Soares Barbosa",
"Giovanni Carù",
"Nadish de Silva",
"Kohei Kishida",
"Shane Mansfield"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20170525180834",
"title": "Minimum quantum resources for strong non-locality"
} |
^1 National Fusion Research Institute, Daejeon 34133, Korea ^2 Ulsan National Institute of Science and Technology, Ulsan 689-798, Korea ^3 Pohang University of Science and Technology, Pohang, Gyungbuk 790-784, Korea ^4 Columbia University, New York, NY 10027, USA ^5 Seoul National University, Seoul 08826, Korea ^6 University of California at Davis, Davis, CA 95616, [email protected] Multiscale interaction between the magnetic island and turbulence has been demonstrated through simultaneous two-dimensional measurements of turbulence and temperature and flow profiles. The magnetic island and turbulence mutually interact via the coupling between the electron temperature (T_e) gradient, the T_e turbulence, and the poloidal flow. The T_e gradient altered by the magnetic island is peaked outside and flattened inside the island. The T_e turbulence can appear in the increased T_e gradient regions. The combined effects of the T_e gradient and the the poloidal flow shear determine two-dimensional distribution of the T_e turbulence. When the reversed poloidal flow forms, it can maintain the steepest T_e gradient and the magnetic island acts more like a electron heat transport barrier. Interestingly, when the T_e gradient, the T_e turbulence, and the flow shear increase beyond critical levels, the magnetic island turns into a fast electron heat transport channel, which directly leads to the minor disruption.Keywords: magnetic island, turbulence, multiscale interactionMultiscale interaction between a large scale magnetic island and small scale turbulence M J Choi^1, J Kim^1, J-M Kwon^1, H K Park^1,2, Y In^1, W Lee^1, K D Lee^1, G S Yun^3, J Lee^2, M Kim^2, W-H Ko^1, J H Lee^1, Y S Park^4, Y-S Na^5, N C Luhmann Jr^6, B H Park^1 December 30, 2023 ===================================================================================================================================================================================§ INTRODUCTION A large scale magnetic island in tokamak plasmas was known to degrade the plasma confinement by increasing the radial transport along the reconnected field line. Recent studies, however, have found that the transport physics near the magnetic island can be much more complicated due to various multiscale interactions between the island and small scale turbulence <cit.>. For example, the helical topology of the magnetic island results in 3D perturbation of the magnetic flux surfaces, and profiles of plasma temperature and density are also modified accordingly. Change of pressure profile will be accompanied by changes of flow profile and turbulent fluctuation <cit.>. On the other hand, small scale turbulence driven by the background profile can trigger the onset of a magnetic island through a nonlinear beating <cit.> or affect the nonlinear growth of the island <cit.>. Multiscale interaction between the magnetic island and turbulence is multi-directional and the transport physics near the magnetic island is complicated. This paper focuses on effects of the magnetic island on profiles and turbulence and its consequence for the electron heat transport or the nonlinear stability of the magnetic island. Inside the magnetic island, a pressure profile flattens when the island size grows sufficiently large so that the parallel transport along the reconnected field line becomes dominant over the perpendicular transport <cit.>. Outside the magnetic island, a pressure profile can be radially steepened because the magnetic flux surfaces are perturbed to be close to each other <cit.>. 
Reduction of turbulent fluctuation by loss of the pressure gradient inside the flat magnetic island has been observed in <cit.>. Experimental measurements of the poloidal flow in the vicinity of the magnetic island have been reported in <cit.>, and they found that the flow shear across the island can be important in multiscale interactions and consequently in the transport across the island. Recent simulation studies have predicted multiscale interactions via both pressure and flow profiles. They made detail observations such as the localized turbulence distribution <cit.> or the poloidal vortex flow around the magnetic island <cit.>. The turbulence level is expected to be insignificant across the O-point region probably due to small pressure gradient inside the magnetic island and the strong flow shear outside the magnetic island <cit.>. The turbulent transport is only significant close to the X-point <cit.>. Simultaneous experimental measurements of turbulence and flow in a two-dimensional (2D) space are required to fully validate those multiscale interactions in numerical simulations.In this paper, the T_e profile, the T_e turbulence, and the poloidal flow near the m/n=2/1 magnetic island (m and n are the poloidal and toroidal mode number, respectively) are measured directly and simultaneously in 2D space for the first time. Both the T_e and flow profiles altered by the magnetic island are indeed important in multiscale interactions. The two-dimensional T_e turbulence distribution is determined by the combined effect of the T_e gradient (turbulence drive) and the poloidal flow (turbulence suppression/convection). In particular, when the reversed poloidal flow forms around the magnetic island, the steepest T_e gradient is obtained in the inner region (r<r_si where r_si represents the inner separatrix of the magnetic island) and the magnetic island acts more like a barrier of the electron heat transport until the transport bifurcation occurs. In section <ref>, multiscale interaction in the reversed poloidal flow state is described and compared qualitatively with previous studies. In section <ref>, coupled evolution of the T_e gradient, the T_e turbulence, and the poloidal flow towards the reversed flow state and the transport bifurcation phenomena are discussed. Summary and conclusion are given in section <ref>. § MULTISCALE INTERACTION IN THE REVERSED POLOIDAL FLOW STATE §.§ Experimental set-up In the Korea Superconducting Tokamak Advanced Research (KSTAR; major radius R = 180 cm and minor radius a = 50 cm) experiment #13371, the plasma was heated by 1 MW neutral beam injection and kept in the low confinement mode with the plasma current I_p = 0.7 MA, the safety factor at the 95% magnetic flux surface q_95∼ 4.6, and the Spitzer resistivity η∼ 1.4× 10^-7. The non-rotating m/n=2/1 magnetic island was induced by an external n=1 resonant magnetic perturbation (RMP) field. Coil current for the n=1 RMP field was increased in time as shown in Fig. <ref>, and above a critical threshold value the n=1 field penetrates deep into the plasma. The toroidal flow speed (V_t) near the q = 2 region measured by the charged exchange spectroscopy (CES) <cit.> dropped to almost zero within the measurement error (± 5 km/s) during the penetration. The core electron temperature from the electron cyclotron emission (ECE) diagnostics indicates that the sawtooth crash became very frequent and small <cit.>. 
A slow decrease in the line averaged electron density, often referred to as the density pump-out, was also observed. The major disruption occurs with the continuously increased n=1 field <cit.>.For measurements of the T_e profile, the T_e turbulence, and the poloidal flow, the 1D ECE diagnostics and the 2D ECE imaging (ECEI) diagnostics <cit.> were utilized. The ECEI diagnostics was cross-calibrated <cit.> using the axis-symmetric T_e profile from the absolutely calibrated ECE diagnostics and the EFIT reconstructed equilibrium <cit.> in the period w/o the magnetic island in Fig. <ref>. The poloidal flow velocity could be deduced from the vertical pattern velocity (v_pt) <cit.> estimated using two vertically adjacent ECEI channels. A spatial resolution of the ECEI diagnostics is close to 2 cm in both radial and vertical directions and a temporal resolution is 2 μs. Note that effects of the relativistic shift, the Doppler broadening, and finite poloidal field <cit.> for the radial channel positions are more or less canceled out in this plasma condition, and the cold resonance positions could be used. In the outer region (r > r_so where r_so means the outer separatrix of the magnetic island), T_e measurement is uncertain because the ECE diagnostic capability becomes marginal. In terms of the optical depth (τ) <cit.>, it is close to or less than 1 in the outer region while close to 3 in the inner region and in between 1 and 3 inside the magnetic island.§.§ The T_e profile with the magnetic island When the m/n=2/1 magnetic island is induced, the T_e profile is altered along the magnetic topology of the island and it is no longer axis-symmetric. The radial T_e profiles measured by the ECE diagnostics in the high field side and the 2D T_e profile by the ECEI diagnostics in the low field side at different toroidal angles are shown in Fig. <ref>. The T_e profile inside the magnetic island flattens probably due to the fast parallel transport along the reconnected field line <cit.> and/or the negligible turbulence spreading <cit.>. The full width of the magnetic island (W) will be close to or larger than 5 cm which is larger than the typical critical width (W_c ∼ 1.0 cm) for the T_e flattening in the KSTAR L-mode plasmas <cit.>. Note that the separatrix of the magnetic island in the 2D T_e profile can be roughly estimated by the temporal behavior of the electron temperature. A full 2D measured electron temperature profile and a proper modeling with a synthetic diagnostics are needed to estimate the magnetic island full width accurately <cit.>, especially when the localized (not uniform) and dynamic turbulence exists around the magnetic island which can affect the perpendicular electron heat transport characteristics <cit.>. In contrast to the flattened T_e profile inside the magnetic island, the T_e profile in the inner region (r < r_si) becomes more steepened with increase of the core T_e level (Fig. <ref>(a)). In particular, the T_e gradient increases towards the O-point region as indicated by widths of the orange arrows in the 2D T_e profile in Fig. <ref>(b). More closely packed magnetic flux surfaces due to the magnetic island may induce some local T_e profile modifications <cit.>. 
In order to understand formation of the global peaked T_e profile, the electron heat transport around the magnetic island needs to be studied with measurements of the T_e turbulence and the poloidal flow as follows.§.§ The T_e turbulence and its characteristics To estimate the electron turbulent heat transport near the magnetic island, the T_e fluctuations measured by the ECEI diagnostics are analyzed. For example, Figs. <ref> (a)—(c) are the cross coherence of δ T_e / ⟨ T_e ⟩≡ (T_e - ⟨ T_e ⟩) / ⟨ T_e ⟩ where ⟨ ⟩ means the time average. It represent the coherent fraction of the total δ T_e / ⟨ T_e ⟩ fluctuation power. They are calculated using two vertically adjacent ECEI channels for t= 7.35–7.40 s in the plasma #13371. One inside the magnetic island does not show a significant coherent fluctuation, but the others show some coherent fluctuation power. In the inner region where the T_e gradient is increased significantly, the fluctuation power over a broad frequency band (0 ≤ f ≤ 75 kHz) is measured clearly. The T_e gradient may be considered as a predominant drive of this turbulent fluctuation. In fact, the coherence increases with the T_e gradient as shown in Fig. <ref>. Note that the weak fluctuation power over a narrow frequency band (0 ≤ f ≤ 30 kHz) is measured in the outer region. A detail 2D distribution of the T_e turbulence level can be investigated by calculating the summed cross coherence image using more ECEI channels. The cross coherence only above a significance level is summed over a 10–75 kHz band to make the summed coherence image. Note that a 0–10 kHz band was neglected because some channels suffer from 4 kHz electronics noise in this experiment. Each dot in the images in Fig. <ref>(d) and Figs. <ref>(a) and <ref>(b) represents the summed coherence estimated using the channel at that position and the one below. Note that one row of the ECEI channels had a low signal-to-noise ratio and reliable coherence calculations in two rows near the midplane are not available. The smooth and continuous 2D T_e profile in Fig. <ref>(b) is obtained by interpolations. The summed coherence image in Fig. <ref>(d) shows that the strong T_e turbulence is localized both radially and poloidally in the inner region. It has the maximum close to the inner separatrix of the magnetic island near the X-point. The insignificant (<2) summed coherence is observed inside the magnetic island, and weak but meaningful coherence is observed in the outer region. The T_e turbulence distribution has been further studied in a similar KSTAR plasma #15638 in which the toroidal phase of the applied n=1 field is slowly varying at the frequency of 2 Hz. In that experiment, both the X-point and O-point regions can be captured in the ECEI view frame in different time periods (20 ms each) and the δ T_e / ⟨ T_e ⟩ summed coherence images are obtained as shown in Figs. <ref>(a) and <ref>(b), respectively. The summed coherence is insignificant everywhere for the O-point period, which implies the small turbulent electron heat transport there <cit.>. For the X-point period, it is found that the significant coherence is not only localized but also poloidally asymmetric against the X-point. Note that this localized turbulence follows the X-point, which is rotating with the RMP field, with a constant poloidal shift.The localized asymmetric turbulence near the X-point region strongly suggests that the T_e gradient is not the only control parameter in growth of the T_e turbulence. 
The poloidal flow can be important as it will be discussed in next section. In fact, the poloidal shift of the turbulence with respect to the X-point coincides with the direction of the local poloidal flow <cit.>. This locality of the island-associated T_e turbulence is consistently observed in other experiment <cit.>. Although it is beyond the scope of this paper, the radial locality may also imply that the magnetic island itself can be important in driving the turbulence via a direct nonlinear coupling <cit.>. At this point, it would be helpful to provide some quantitative characteristics of the T_e turbulence such as the rms amplitude, correlation lengths, and the poloidal wavenumber. Firstly, the rms amplitude can be measured by integrating the cross power spectral density between vertically adjacent channels over a 10–75 kHz band, and the maximum turbulence rms amplitude is about 2.5 ± 0.25 % in the plasma #13371. Note that the 2D rms amplitude has the almost same distribution with the summed coherence image in Fig. <ref>(d). Next, the summed coherence images in Figs. <ref>(c)–(f) are calculated especially for estimation of correlation lengths in the plasma #15638. Pair of a fixed reference channel (indicated by a black cross) and other channel are used to estimate the correlation length defined as a range of the significant summed cross coherence. The correlation length is found to be not uniform and has a finite poloidal (2–6 cm) and radial (2–3 cm) range. Note that it is a little larger than the radial correlation lengths of density fluctuation across the magnetic island in <cit.>. Lastly, the poloidal wavenumber of the T_e turbulence can be estimated from the cross phase (ΔΘ) between vertically adjacent ECEI channels. Figs. <ref>(a) and <ref>(b) represent the vertical ECEI cross phase measured in the inner and outer region of the plasma #13371, respectively. Fluctuations in a range of k_θρ_i ≈ΔΘ/Δ zρ_i ≤ 0.4 were revealed in the most channels in the inner region and in some channels in the outer region where ρ_i is the ion gyroradius. The vertical distance between two adjacent channels (Δ z) was set to be about 2 cm and detectable poloidal wavenumber is roughly limited to k_θρ_i ≤ 0.4 in this experiment.§.§ The poloidal flow Using the slope of the coherent vertical ECEI cross phase, the vertical group velocity in the laboratory frame, or the pattern velocity (v_pt), can be measured <cit.>. Fig. <ref>(c) shows 2D measurement of the v_pt near the magnetic island for t= 7.35–7.40 s in the plasma #13371. Note that the v_pt measured with uncertainty less than 0.8 km/s is only shown. For the accurate v_pt measurement, the ECEI data should have sufficient power of the coherent fluctuation and record length (at least 50 ms). In this state, the v_pt in the inner region is positive (a counter clockwise or the electron diamagnetic direction), and its speed is radially peaked near the separatrix of the magnetic island. More importantly, it is not uniform in poloidal direction, i.e. it increases toward the O-point region. Therefore, the positive radial shear of the poloidal flow (dv_pt/dr ≥ 10^5 s^-1) forms in the inner region and it also increases toward the O-point region. This v_pt behavior is consistent with the numerical simulation results <cit.>, and can explain that the T_e turbulence is not detected and the steep T_e profile is maintained near the O-point region. 
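The cross-phase-slope estimate of v_pt mentioned at the start of this subsection can be illustrated schematically as follows. Only the ~2 cm channel spacing and the 2 μs sampling come from the text; the toy signal, frequency band, and sign convention are assumptions of this sketch.

```python
# Illustrative sketch of the pattern-velocity estimate: fit the slope of the
# vertical cross phase over the coherent band and convert it to a velocity
# with the ~2 cm channel spacing.  Toy data only, not the actual analysis.
import numpy as np
from scipy import signal

fs = 500e3       # sampling rate [Hz] from the 2 us time resolution
dz = 0.02        # vertical spacing between adjacent channels [m]

def pattern_velocity(ch_upper, ch_lower, fband=(10e3, 75e3)):
    # conj(upper)*lower: a pattern reaching the upper channel later gives a
    # positive phase slope (the overall sign depends on this convention)
    f, Pxy = signal.csd(ch_upper, ch_lower, fs=fs, nperseg=1024)
    band = (f >= fband[0]) & (f <= fband[1])
    phase = np.unwrap(np.angle(Pxy[band]))       # cross phase [rad]
    slope = np.polyfit(f[band], phase, 1)[0]     # d(phase)/df [rad/Hz]
    return 2 * np.pi * dz / slope                # v_pt [m/s]

# toy data: a band-limited pattern sweeping upward at 2 km/s
rng = np.random.default_rng(2)
sos = signal.butter(4, [10e3, 60e3], btype="bandpass", fs=fs, output="sos")
base = signal.sosfilt(sos, rng.standard_normal(int(0.05 * fs)))
delay = int(dz / 2e3 * fs)                        # transit time in samples (= 5)
ch_lower, ch_upper = base[delay:], base[:-delay]  # upper channel lags the lower
print(f"recovered v_pt ~ {pattern_velocity(ch_upper, ch_lower) / 1e3:.1f} km/s")
```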
In addition, the v_pt is reversed across the magnetic island and the strong negative radial shear of the poloidal flow (-dv_pt/dr > 10^5 s^-1) develops across the island. Although it was not possible to measure the flow inside the flat and quiet magnetic island, the poloidal flow around and inside the island is expected to have a vortex structure <cit.>. This 2D sheared poloidal flow can prohibit a turbulent eddy from developing across the magnetic island <cit.> and from spreading into the island <cit.>.Origin of the poloidal flow is not clearly understood yet. The toroidal flow decreases significantly after the field penetration and its contribution would be negligible in the v_pt. The diamagnetic drift may serve as a nearly uniform and small background, considering evolution of poloidal flows in Fig. <ref>(d). Note that from t = 7.50 s to t = 7.55 s the plot E showed a drastic change (+4 km/s) which is hardly explained by the 10% increase of the electron temperature gradient. The E × B flow <cit.> or zonal flow driven by the turbulence itself <cit.> may play a role in the measured v_pt. § THE POLOIDAL FLOW REVERSAL AND THE TRANSPORT BIFURCATION In previous experiments, the applied RMP field strength keeps increasing in time, and it may not be appropriate to study the temporal coupled evolution between the T_e gradient, the T_e turbulence, and the poloidal flow.In the experiment #16150, the constant and non-rotating n=1 RMP field is applied and the plasma is maintained in the mode-locking state without the major disruption. A repetitive minor disruption is observed as the plasma evolves in time with the constant RMP field, and the coupled evolution is studied for a single minor disruption cycle.Four distinctive phases are observed during a single minor disruption cycle as illustrated in T_e profiles in Fig. <ref>(a). The temporal evolutions of the T_e gradient in the inner region, the T_e turbulence level (the summed cross coherence) at different positions (A, B, C, and D), and the poloidal flow (v_pt) at different positions (A, B, C, D, and E) are shown in Figs. <ref>(b)–(d), respectively. Note that the summed coherence in Fig. <ref>(c) and the summed coherence image for the phase 1 (Fig. <ref>(e)) and the phase 2 (Fig. <ref>(f)) are obtained using pair of vertically adjacent ECEI channels and the cross coherence over a 0–60 kHz band in which there is no electronics noise in this experiment.In the initial phase 1, the T_e gradient in the inner region is not very steep but increasing in time as shown in Fig. <ref>(b). The summed cross coherence image in Fig. <ref>(e) shows that the coherent fluctuation power is relatively weak but peaked across the X-point of the magnetic island. The negative poloidal flow is sheared across the X-point as shown in the v_pt measurement at B, D, and E in Fig. <ref>(d), but near the X-point it seems not to be strong enough to affect the turbulence distribution. Although the accurate entire two-dimensional flow measurement was not available in this phase due to the marginal turbulent fluctuation power except the X-point region, the localized turbulence near the X-point implies that the flow shear may be effective beyond the X-point region.The transition from the phase 1 to the phase 2 involves with a rapid increase of the T_e gradient, i.e. T_e increases at the core region and decreases slightly in the q≥2 region, as well as changes of the 2D T_e turbulence level and poloidal flow. 
Note that the 2D estimated magnetic island geometry (indicated by the dashed purple line) is also perturbed as seen decrease of the summed coherence at D, which might involve change of the island full width. The line averaged density is nearly constant in the transition and decreases by a few percent later in phase 3. The electron density profile measured by the Thomson scattering system <cit.> becomes a little broader but it is not clear due to the unsatisfactory measurement condition. In the phase 2, the 2D T_e turbulence distribution is changed as shown in Fig. <ref>(f) as Fig. <ref>(d), and the reversed poloidal flow forms as shown in the v_pt measurement at A, C, and E in Fig. <ref>(d) as Fig. <ref>(c). The poloidal flow reversal can be originated from change in v_E × B around the magnetic island by the nonlinear resonant low n electrostatic mode <cit.> or the response potential to the magnetic perturbation in the initial shear flow <cit.>. The strongly sheared flow developed across the magnetic island can prohibit the turbulence growth or convection across the X-point <cit.> and shift the T_e turbulence level upwards in the inner region as observed in Fig. <ref>(c) and <ref>(f), which can explain the T_e gradient increase in phase 2.A sudden decrease of electron temperature in the q≥2 region occurs in the phase 3 through some unknown process (possibly related to edge modes), which leads to a jump in the T_e gradient and the T_e turbulence in the inner region. In addition, the stronger radial shear of the poloidal flows in the inner region (difference between A and E in Fig. <ref>(d)) and across the X-point (difference between A and C in Fig. <ref>(d)) are observed. When all the T_e gradient, the T_e turbulence, and the flow shear increase significantly, a massive fast (∼ 100 μ s) T_e collapse occurs. Note that the T_e profile collapses in two steps, i.e. the local q≈2 region collapse and the q ≤ 1 region collapse, which is very similar with the large minor disruption in <cit.> where the RMP field was not applied. The role of the magnetic island has been changed from a barrier of the electron heat transport (from phase 1 to phase 3) to a fast channel (from phase 3 to phase 4). The observed transport bifurcation may be relevant to either the bifurcation observed in <cit.>, secondary innstabilities <cit.>, or the vortex flow shear destabilization of the long wavelength fluctuation <cit.>. § SUMMARY AND CONCLUSION The 2D profiles of T_e and poloidal flow and the 2D T_e turbulence distribution are closely coupled around the magnetic island. The magnetic island and turbulence mutually interact via this coupling which has a critical effect on the electron heat transport. The magnetic island can play as either a barrier or a fast channel of the electron heat transport. In particular, the magnetic island acts more like an electron heat transport barrier when the poloidal flow is perturbed to have a strongly sheared profile. The speed of the flow is peaked near the separatrix of the magnetic island increasing towards the O-point region. The positive flow shear in the inner region would suppress the T_e turbulence around the O-point region, and the T_e turbulence level is only significant in the narrow region close to the X-point region. The negative flow shear across the magnetic island would prevent a turbulent eddy from growing across the X-point and from spreading into the island. 
In this state, the poloidal flow developed around the magnetic island seems to regulate the electron turbulent heat transport across the magnetic island. However, when the T_e gradient, the T_e turbulence, and the flow shear exceed critical levels, the transport bifurcation occurs and a massive heat transport event follows. The role of the magnetic island on the electron thermal transport is more complicated than a direct thermal loss channel. This experiment clearly demonstrates multiscale nonlinear interaction between a large scale magnetohydrodynamic instability and small scale turbulence and its importance on the electron thermal transport. It may provide some physical insights to understand the internal transport barrier formation <cit.> or the RMP edge localized mode suppression <cit.>. More researches focused on the validation of the ongoing gyrokinetic simulation with the experimental observation will be done in near future. § ACKNOWLEDGEMENTOne of the authors (M. J. C.) acknowledges helpful discussions with Dr. J. Seol, Dr. J.-H. Kim, Dr. M. Leconte, Dr. S. Zoletnik, and Dr. L. Bardóczi. This work is supported by Korea Ministry of Science, ICT and Future Planning under KSTAR project (Contract No. OR1509) and under NFRI R&D programs (NFRI-EN1741-3), and also partly supported by NRF Korea under Grant No. NRF-2014M1A7A1A03029865 and NRF-2014M1A7A1A03029881. § REFERENCES iopart-num 10 url<#>1#1urlprefixURL Thyagaraja:2005cy Thyagaraja A, Knight P J, de Baar M R, Hogeweij G M D and Min E 2005 Physics of Plasmas 12 090907Diamond:2006hx McDevitt C J and Diamond P H 2006 Physics of Plasmas 13 032302Nakajima:2007dn Ishizawa A and Nakajima N 2007 Nuclear Fusion 47 1540–1551Militello:2008kb Militello F, Waelbroeck F L, Fitzpatrick R and Horton W 2008 Physics of Plasmas 15 050701Wang:2009by Wang Z X, Li J Q, Kishimoto Y and Dong J Q 2009 Physics of Plasmas 16 060703Waelbroeck:2009hz Waelbroeck F L, Militello F, Fitzpatrick R and Horton W 2009 Plasma Physics and Controlled Fusion 51 015015Muraglia:2009dk Muraglia M, Agullo O, Benkadda S, Garbet X, Beyer P and Sen A 2009 Physical Review Letters 103 145001Connor:2009je Wilson H R and Connor J W 2009 Plasma Physics and Controlled Fusion 51 115007Poli:2009ce Poli E, Bottino A and Peeters A G 2009 Nuclear Fusion 49 075010Ishizawa:2010bh Ishizawa A and Nakajima N 2010 Physics of Plasmas 17 072308Muraglia:2011ck Muraglia M, Agullo O, Benkadda S, Yagi M, Garbet X and Sen A 2011 Physical Review Letters 107 095003Hornsby:2015da Hornsby W A, Migliano P, Buchholz R, Zarzoso D, Casson F J, Poli E and Peeters A G 2015 Plasma Physics and Controlled Fusion 57 054018Hornsby:2011ib Hornsby W A, Siccinio M and Peeters A G 2011 Plasma Physics and Controlled Fusion 53 054008Muraglia:2017fz Muraglia M, Agullo O, Poy A, Benkadda S, Dubuit N, Garbet X and Sen A 2017 Nuclear Fusion 57 072010Fitzpatrick:1995ud Fitzpatrick R 1995 Physics of Plasmas 2 825–838deVriesPC:1997gz de Vries, P C, Waidmann G, Kramer-Flecken A, Donné A J H and Schuller F C 1997 Plasma Physics and Controlled Fusion 39 439–451Bardoczi:2016gj Bardóczi L, Rhodes T L, Carter T A, Bañón Navarro A, Peebles W A, Jenko F and McKee G 2016 Physical Review Letters 116 215001Bardoczi:2017im Bardóczi L, Rhodes T L, Bañón Navarro A, Sung C, Carter T A, La Haye R J, McKee G R, Petty C C, Chrystal C and Jenko F 2017 Physics of Plasmas 24 056106Ida:2002ga Ida K, Ohyabu N, Morisaki T, Nagayama Y, Inagaki S, Itoh K, Liang Y, Narihara K, Kostrioukov A Y, Peterson B J, Tanaka K, Tokuzawa T, Kawahata K, Suzuki H, 
Komori A and LHD Experimental Group 2001 Physical Review Letters 88 015002Ida:2004ix Ida K, Inagaki S, Tamura N, Morisaki T, Ohyabu N, Khlopenkov K, Sudo S, Watanabe K, Yokoyama M, Shimozuma T, Takeiri Y, Itoh K, Yoshinuma M, Liang Y, Narihara K, Tanaka K, Nagayama Y, Tokuzawa T, Kawahata K, Suzuki H, Komori A, Akiyama T, Ashikawa N, Emoto M, Funaba H, Goncharov P, Goto M, Idei H, Ikeda K, Isobe M, Kaneko O, Kawazome H, Kobuchi T, Kostrioukov A, Kubo S, Kumazawa R, Masuzaki S, Minami T, Miyazawa J, Morita S, Murakami S, Muto S, Mutoh T, Nakamura Y, Nakanishi H, Narushima Y, Nishimura K, Noda N, Notake T, Nozato H, Ohdachi S, Oka Y, Osakabe M, Ozaki T, Peterson B J, Sagara A, Saida T, Saito K, Sakakibara S, Sakamoto R, Sasao M, Sato K, Sato M, Seki T, Shoji M, Takeuchi N, Toi K, Torii Y, Tsumori K, Watari T, Xu Y, Yamada H, Yamada I, Yamamoto S, Yamamoto T, Yoshimura Y, Ohtake I, Ohkubo K, Mito T, Satow T, Uda T, Yamazaki K, Matsuoka K, Motojima O and Fujiwara M 2004 Nuclear Fusion 44 290–295Zhao:2015gra Zhao K J, Shi Y J, Hahn S H, Diamond P H, Sun Y, Cheng J, Liu H, Lie N, Chen Z P, Ding Y H, Chen Z Y, Rao B, Leconte M, Bak J G, Cheng Z F, Gao L, Zhang X Q, Yang Z J, Wang N C, Wang L, Jin W, Yan L W, Dong J Q, Zhuang G and J-TEXT team 2015 Nuclear Fusion 55 073022Rea:2015he Rea C, Vianello N, Agostini M, Cavazzana R, De Masi G, Martines E, Momo B, Scarin P, Spagnolo S, Spizzo G, Spolaore M and Zuin M 2015 Nuclear Fusion 55 113021Estrada:2016gz Estrada T, Ascasíbar E, Blanco E, Cappa A, Hidalgo C, Ida K, López-Fraguas A and van Milligen B P 2016 Nuclear Fusion 56 026011Hornsby:2010fh Hornsby W A, Peeters A G, Snodin A P, Casson F J, Camenen Y, Szepesi G, Siccinio M and Poli E 2010 Physics of Plasmas 17 092301Poli:2010hy Poli E, Bottino A, Hornsby W A, Peeters A G, Ribeiro T, Scott B D and Siccinio M 2010 Plasma Physics and Controlled Fusion 52 124021Navarro:2017ei Bañón Navarro A, Bardóczi L, Carter T A, Jenko F and Rhodes T L 2017 Plasma Physics and Controlled Fusion 59 034004–12Izacard:2016de Izacard O, Holland C, James S D and Brennan D P 2016 Physics of Plasmas 23 022304Ishizawa:2009es Ishizawa A and Nakajima N 2009 Nuclear Fusion 49 055015Hu:2016kea Hu Z Q, Wang Z X, Wei L, Li J Q and Kishimoto Y 2016 Nuclear Fusion 56 016012Lee:2011cb Lee H, Song E j, Park Y d, Oh S g and Ko W H 2011 Rev Sci Instrum 82 063510–6Piron:2016fa Piron C, Martin P, Bonfiglio D, Hanson J, Logan N C, Paz-Soldan C, Piovesan P, Turco F, Bialek J, Franz P, Jackson G, Lanctot M J, Navratil G A, Okabayashi M, Strait E, Terranova D and Turnbull A 2016 Nuclear Fusion 56 106012Hender:1992wq Hender T C, Fitzpatrick R, Morris A W, Carolan P G, Durst R D, Edlington T, Ferreira J, Fielding S J, Haynes P S, Hugill J, Jenkins I J, La Haye R J, Parham B J, Robinson D C, Todd T N, Valovic M and Vayakis G 1992 Nuclear Fusion 32 2091–2117Yun:2014kv Yun G S, Lee W, Choi M J, Lee J, Kim M, Leem J, Nam Y, Choe G H, Park H K, Park H, Woo D S, Kim K W, Domier C W, Luhmann N C, Ito N, Mase A and Lee S G 2014 Review of Scientific Instruments 85 11D820Choi:2016ga Choi M J, Park H K, Yun G S, Nam Y B, Choe G H, Lee W and Jardin S 2016 Review of Scientific Instruments 87 013506Park:2011go Park Y S, Sabbagh S A, Berkery J W, Bialek J M, Jeon J M, Hahn S H, Eidietis N, Evans T E, S W Y, Ahn J W, Kim J, Yang H L, You K I, Bae Y S, Chung J, Kwon M, Oh Y K, Kim W C, Kim J Y, Lee S G, Park H K, Reimerdes H, Leuer J and Walker M 2011 Nuclear Fusion 51 053001Lee:2016bl Lee W, Leem J, Yun G S, Park H K, Ko S H, Choi M J, Wang W X, Budny R V, 
Ethier S, Park Y S, Luhmann N C, Domier C W, Lee K D, Ko W H, Kim K W and KSTAR Team 2016 Physics of Plasmas 23 052510Lee:2016kua Lee J, Yun G S, Choi M J, Kwon J M, Jeon Y M, Lee W, Luhmann N C and Park H K 2016 Physical Review Letters 117 075001–5Rathgeber:2013ej Rathgeber S K, Barrera L, Eich T, Fischer R, Nold B, Suttrop W, Willensdorfer M, Wolfrum E and ASDEX Upgrade team 2013 Plasma Physics and Controlled Fusion 55 025004Hutchinson:2002ws Hutchinson I H 2002 Principles of Plasma Diagnostics 2nd ed (Cambridge University Press)Choi:2014kj Choi M J, Yun G S, Lee W, Park H K, Park Y S, Sabbagh S A, Gibson K J, Bowman C, Domier C W, Luhmann N C, Bak J G, Lee S G and the KSTAR Team 2014 Nuclear Fusion 54 083010Bardoczi:2016gjba Bardóczi L, Rhodes T L, Carter T A, Crocker N A, Peebles W A and Grierson B A 2016 Physics of Plasmas 23 052507Agullo:2017ig Agullo O, Muraglia M, Benkadda S, Poyé A, Dubuit N, Garbet X and Sen A 2017 Physics of Plasmas 24 042309Lucas:2016tr Morton L A 2016 Turbulence and transport in large magnetic islands Annual Meeting of American Physical Society Division of Plasma Physics (San Jose) p PI3.2Hu:2014ksa Hu Z Q, Wang Z X, Wei L, Li J Q and Kishimoto Y 2014 Nuclear Fusion 54 123018FernandezMarina:2017el Fernández-Marina F, Estrada T, Blanco E and García L 2017 Physics of Plasmas 24 072513Wang:2009jua Wang Z X, Li J Q, Dong J Q and Kishimoto Y 2009 Physical Review Letters 103 015004Ciaccio:2015wb Ciaccio G, Schmitz O, Spizzo G, Abdullaev S S, Evans T E, Frerichs H and White R B 2015 Physics of Plasmas 22 102516–9Leconte:2017jn Leconte M and Kim J H 2017 Nuclear Fusion 57 086037Li:2009fh Li J, Kishimoto Y, Kouduki Y, Wang Z X and Janvier M 2009 Nuclear Fusion 49 095007Lee:2010el Lee J H, Oh S T and Wi H M 2010 Review of Scientific Instruments 81 10D528Choi:2016jz Choi M J, Park H K, Yun G S, Lee W, Luhmann N C, Lee K D, Ko W H, Park Y S, Park B H and In Y 2016 Nuclear Fusion 56 066013Ida:2016ka Ida K, Kobayashi T, Yoshinuma M, Suzuki Y, Narushima Y, Evans T E, Ohdachi S, Tsuchiya H, Inagaki S and Itoh K 2016 Nuclear Fusion 56 092001Li:2014jg Li J, Kishimoto Y and Wang Z X 2014 Physics of Plasmas 21 020703Waelbroeck:2001dl Waelbroeck F L, Connor J W and Wilson H R 2001 Physical Review Letters 87 215003Wolf:2003wx Wolf R C 2003 Plasma Physics and Controlled Fusion 45 R1–R91Chung:ug Chung J, Kim H S, Jeon J M, Kim J, Choi M J, Ko J S, Lee K D, Lee H H, Yi S, Kwon J M, Hahn S H, Ko W H, Lee J H and Yoon S W to be published in Nuclear FusionEvans:2006hra Evans T E, Moyer R A, Burrell K H, Fenstermacher M E, Joseph I, Leonard A W, Osborne T H, Porter G D, Schaffer M J, Snyder P B, Thomas P R, Watkins J G and West W P 2006 Nature Physics 2 419–423Jeon:2012hu Jeon Y M, Park J K, Yoon S W, Ko W H, Lee S G, Lee K D, Yun G S, Nam Y U, Kim W C, Kwak J G, Lee K S, Kim H K and Yang H L 2012 Physical Review 109 035004 | http://arxiv.org/abs/1705.09487v2 | {
"authors": [
"M. J. Choi",
"J. Kim",
"J. -M. Kwon",
"H. K. Park",
"Y. In",
"W. Lee",
"K. D. Lee",
"G. S. Yun",
"J. Lee",
"M. Kim",
"W. -H. Ko",
"J. H. Lee",
"Y. S. Park",
"Y. -S. Na",
"N. C. Luhmann Jr",
"B. H. Park"
],
"categories": [
"physics.plasm-ph"
],
"primary_category": "physics.plasm-ph",
"published": "20170526090645",
"title": "Multiscale interaction between a large scale magnetic island and small scale turbulence"
} |
( ^1 Department of Physics, Indian Institute of Technology Kharagpur, W.B. 721302, India ^2 Department of Physics, Virginia Tech, Blacksburg, VA 24061, U.S.A ^3 Centre for Theoretical Studies, Indian Institute of Technology Kharagpur, W.B. 721302, India ^4Department of Physics and Astronomy, Clemson University, Clemson, SC 29634, U.S.A In condensed matter physics, the term “chiral anomaly” implies the violation of the separate number conservation laws of Weyl fermions of different chiralities in the presence of parallel electric and magnetic fields. One effect of chiral anomaly in the recently discovered Dirac and Weyl semimetals is a positive longitudinal magnetoconductance (LMC).Here we show that chiral anomaly and non-trivial Berry curvature effects engender another striking effect in WSMs, the planar Hall effect (PHE). Remarkably, PHE manifests itself when the applied current, magnetic field, and the induced transverse “Hall" voltage all lie in the same plane, precisely in a configuration in which the conventional Hall effect vanishes. In this work we treat PHE quasi-classically, and predict specific experimental signatures for type-I and type-II Weyl semimetals that can be directly checked in experiments.In the presence of parallel electric and magnetic fields, the violation of separate number conservation laws for the three dimensional left and right handed Weyl fermions is known as the chiral anomaly. The recent discovery of Weyl and Dirac semimetals has paved the way for experimentally testing the effects of chiral anomaly via longitudinal magneto-transport measurements. More recently, a type-II Weyl semimetal (WSM) phase has been proposed, where the nodal points possess a finite density of states due to the touching between electron- and hole- pockets.It has been suggested that the main difference between the two types of WSMs (type-I and type-II) is that in the latter, chiral anomaly and the associated longitudinal magnetoresistance are strongly anisotropic, vanishing when the applied magnetic field is perpendicular to the direction of tilt of Weyl fermion cones in a type-II WSM. We analyze chiral anomaly in a type-II WSM in quasiclassical Boltzmann framework, and find that the anomaly related longitudinal magneto-transport has a non-vanishing contribution along any arbitrary direction. Chiral anomaly as origin of planar Hall effect in Weyl semimetals Sumanta Tewari^1,4================================================================= Introduction: In condensed matter physics the Weyl equation, originally introduced in high energy physics <cit.>, describes the low energy quasiparticles near the touching of a pair of non-degenerate bands in a class of topological systems known as Weyl semimetals (WSM) <cit.>.In WSMs the momentum space touching points of non-degenerate pairs of bands act as source and sink of Abelian Berry curvature, an analog of magnetic field but defined in momentum space <cit.>. WSMs violate spatial inversion and/or time reversal symmetries <cit.>, and are topologically protected by a non-zero flux of Berry curvature across the Fermi surface. By Gauss's theorem, the flux of the Berry curvature known as Chern number is related to the strength of the magnetic monopole enclosed by the Fermi surface, and is quantized to integer values. It can be shown that <cit.> in WSMs the Weyl points come in pairs of positive and negative monopole charges (also called chirality) and the net monopole charge summed over all the Weyl points in the Brillouin zone vanishes. 
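As a compact numerical illustration of this quantisation (a sketch added for reference, not part of the paper), the monopole charge of an idealised isotropic node H(k) ∝ χ k·σ — the k·p form introduced just below, with all velocities equal — can be obtained by integrating the winding density of d̂(k) = χ k̂ over a small sphere enclosing the node:

```python
# Illustrative sketch: the "monopole charge" (Chern number) of an idealised
# isotropic Weyl node H(k) ~ chi * k . sigma, obtained by integrating the
# winding density of d_hat(k) = chi * k_hat over a sphere around the node.
# Sign/orientation conventions are a choice, not tied to any reference.
import numpy as np

def monopole_charge(chi, n=400):
    th = np.linspace(1e-4, np.pi - 1e-4, n)      # polar angle on the sphere
    ph = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    d = chi * np.stack([np.sin(TH) * np.cos(PH),
                        np.sin(TH) * np.sin(PH),
                        np.cos(TH)])             # d_hat(k) on the unit sphere
    d_th = np.gradient(d, th, axis=1)
    d_ph = np.gradient(d, ph, axis=2)
    winding = (d * np.cross(d_th, d_ph, axis=0)).sum(axis=0)
    return winding.sum() * (th[1] - th[0]) * (ph[1] - ph[0]) / (4 * np.pi)

for chi in (+1, -1):
    print(chi, round(monopole_charge(chi), 3))   # ~ +1.0 and -1.0
```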
In 𝐤·𝐩 theory, the effective Hamiltonian for the low energy linearly dispersing quasiparticles near an isolated Weyl point situated at momentum space point 𝐊 can be written as,H_𝐤 = ∑_i=1^3v_i (𝐤_i)σ_i,where the crystal momenta 𝐤_i are measured from the band degeneracy point 𝐊, ħ=c=1, and σ_is are the three Pauli matrices. WSMs evince many anomalous transport and optical properties, such as anomalous Hall effect in time reversal broken WSMs, dynamic chiral magnetic effect related to optical gyrotropy and natural optical activity in inversion broken WSMs <cit.>, and, most importantly, negative longitudinal magnetoresistance in the presence of parallel electric and magnetic fields due to non-conservation of separate electron numbers of opposite chirality for relativistic massless fermions, an effect known as the chiral or Adler-Bell-Jackiw anomaly <cit.>.In the absence of parallel electric and magnetic fields in WSMs, as for relativistic chiral fermions in high energy physics, the numbers of right and left handed Weyl fermions (i.e. Weyl fermions of different chiralities) are separately conserved. However, in the presence of externally imposed parallel electric and magnetic fields, the separate number conservation laws are violated <cit.>, leaving only the total number of fermions to be conserved.An important criterion for the existence of chiral anomaly is the unbounded linear dispersion of the quasiparticles, and in the continuum theory the particles from one Weyl point transfer to the other through the infinite Dirac sea. In a solid state system, in addition to the externally applied electric field there is always a periodic electric field due tothe crystal, and the dispersion relations are bounded. Hence, it will seem impossible to observe any tangible effects of chiral anomaly in any solid state system. But, in the presence of a relaxation mechanism, the scattering rate cuts off the effects of periodic electric field (Bloch oscillations), thus allowing the effects of anomaly to manifest in longitudinal magnetotransport measurements.To describe this effect for strong magnetic fields the system must be described quantum mechanically leading to Landau levels. The lowest Landau level in the resulting Landau level spectrum is chiral, dispersing with positive or negative velocities depending on the chirality of the Weyl node. In the simultaneous presence of an electric field in parallel to the magnetic field, the electrons are accelerated, leading to charge pumping from Weyl node of one chirality to the other.For weak magnetic fields for which the Landau level quantization is wiped out by disorder effects, a semiclassical description <cit.> of magnetoresistance suggests that 𝐄·𝐁≠ 0 leads to a positive LMC as a result of chiral anomaly, while the transverse magnetoresistance remains positive and conventional. Consistent with this picture, recently, several experimental groups have found the evidence of chiral anomaly induced positive LMC in Dirac and Weyl materials <cit.>.In this paper we discuss a second effect of chiral anomaly, the so-called planar Hall effect <cit.>, i.e. appearance of an in-plane transverse voltage when the co-planar electric and magnetic fields are not perfectly aligned to each other. The planar Hall conductivity σ_yx, i.e. 
the transverse conductivity measured across the ŷ direction perpendicular to the applied electric field and current in the x̂ direction in the presence of a magnetic field in x-y plane making an angle θ with the x axis, is known to occur in ferromagnetic systems <cit.> with dependence on θ similar to what we find here for WSMs. It has also been observed recently with similar angular dependence in the surface state of a topological insulator where it has been linked to magnetic field induced anisotropic lifting of the protection of the surface state from backscattering <cit.>. Here we develop a quasi-classical theory of planar Hall effect in Weyl semimetals, where the electron or hole Fermi surfaces enclose non-zero fluxes of Berry curvature in momentum space.Unlike anomalous Hall effect understood quasi-classically in terms of Berry curvature effects <cit.>, to the best of our knowledge planar Hall effect has so far not been discussed as a topological response function. Our treatment of PHE in terms of chiral anomaly and non-trivial Berry curvature, along with specific experimental signatures in type-I and type-II WSMs, is an important first step to fill this gap.Model Hamiltonian: The momentum space Hamiltonian for a generic single chiral Weyl node can be expressed asH_𝐤^χ=ħv_F(χ𝐤·σ+Ck_xσ_0)where v_F is the Fermi velocity, χ is the chirality associated with the Weyl node, σ represent the vector of Pauli matrices, σ_0 is the identity matrix, and C is the tilt parameter which can be taken along the k_x direction without any loss of generality <cit.>.When the anisotropy is zero i.e. C=0, electron and hole bands touch at the Weyl point leading to a point like Fermi surface. When the anisotropy along k_x is small enough (C=0.5), the Fermi surface is still point-like and is classified as the type-I Weyl node. With the increase in anisotropy (|C|> 1), electron and hole pockets now appear at the Fermi surface leading to a distinct phase which is classified as a type-II Weyl node.Planar Hall effect: We will now investigate the electronic contributions to the planar Hall conductivity in the quasi-classical Boltzmann formalism <cit.>. The Boltzmann formalism is valid because for the scattering time τ∼ 10^-13 s in typical Dirac and Weyl semimetals <cit.>, and the effective mass m^*∼ 0.11 m_e <cit.> , ω_cτ∼ 0.3 < 1 for typical magnetic field B∼ 3-5 T, where ω_c = eB/m^*c is the cyclotron frequency. Additionally, we use the standard relaxation time approximation <cit.> which assumes that any perturbation in the system decays exponentially with a characteristic time constant τ. This approximation is valid in isotropic systems with elastic (impurity dominated) scattering processes generally valid in WSMs <cit.>. We begin with the linear response equation for the charge current (𝐉) to external perturbative fields (electric field 𝐄 and temperature gradient ∇ T), which is given byJ_a=σ_abE_b+α_ab(-∇_b T)where σ̂ and α̂ are different conductivity tensors. A phenomenological Boltzmann transport equation can be written as <cit.> (∂/∂ t+ṙ·∇_𝐫+𝐤̇·∇_𝐤)f_𝐤,𝐫,t=I_coll{f_𝐤,𝐫,t}where on the right side I_coll{f_𝐤,𝐫,t} is the collision integral which incorporates the effects of electron correlations and impurity scattering. We are interested in computing the electron distribution function which is given by f_𝐤,𝐫,t. Since we are primarily interested in steady-state solutions to the Boltzmann equation, Eq. 
(<ref>) can be rewritten as(ṙ·∇_𝐫+𝐤̇·∇_𝐤)f_k=f_eq-f_𝐤/τ(𝐤)where we have invoked the relaxation time approximation (RTA) for the collision integral and also dropped the 𝐫 dependence of f_𝐤,𝐫,t, valid for spatially uniform fields. The relaxation time τ (𝐤) on the Fermi surface can in general have a momentum dependence but we will ignore this dependence in our work as it doesn't change any of our qualitative conclusions. The function f_eq is the equilibrium Fermi-Dirac distribution function which describes electron distribution in the absence of any external fields.The intrinsic contribution to charge conductivity can be calculated from a Kubo formula for an ideal lattice. This contribution is however related to the topological properties of the Bloch states and essentially reduces to the integral of Berry phases over cuts of Fermi surface segments <cit.>. Therefore a quasi-classical formalism incorporating Berry phase effects suffices to describe the low-energy transport properties of a generic Weyl semimetal.It is now well established that the low energy transport properties are substantially modified due to the Berry curvature of the electron wave functions <cit.>.To calculate planar Hall effect, we apply an electric field (𝐄) along the x-axis and a magnetic field (𝐁) in the xy plane at a finite angle θ from the x-axis, i.e. 𝐁=Bcosθx̂+Bsinθŷ, 𝐄=Ex̂.In the presence of Berry curvature associated with a single chiral Weyl node, the quasi-classical equations of motion are <cit.> ṙ=D(𝐁,Ω_𝐤)[𝐯_𝐤+e/ħ(𝐄×Ω_𝐤)+e/ħ(𝐯_𝐤·Ω_𝐤)𝐁] ħ𝐤̇=D(𝐁,Ω_𝐤)[e𝐄+e/ħ(𝐯_𝐤×𝐁)+e^2/ħ(𝐄·𝐁)Ω_𝐤]Here, D(𝐁,Ω_𝐤)=(1+e/ħ(𝐁.Ω_𝐤))^-1 is the phase space factor, where Ω_𝐤 is the Berry curvature, and 𝐯_𝐤 is the group velocity <cit.>. For ease of notation hereafter we will simply denote D(𝐁,Ω_𝐤) by D, dropping the implied 𝐁 and Ω_𝐤 dependence. Substituting the above equations of motion into the steady state Boltzmann equation Eq. (<ref>), it then takes the form(eEv_x/ħ+e^2/ħBEcosθ(𝐯_𝐤.Ω_𝐤))∂ f_eq/∂ϵ+eB/ħ^2(-v_zsinθ∂/∂ k_x(v_xsinθ-v_ycosθ)∂/∂ k_z+v_zcosθ∂/∂ k_y)f_𝐤 =f_eq-f_𝐤/DτWe solve the above equation by assuming the following ansatz for the deviation of the electron distribution function δ f_𝐤=f_𝐤-f_eq δ f_𝐤=(eDEτv_x+ e^2DBEτcosθ (𝐯_𝐤·Ω_𝐤)/ħ+𝐯·Γ)(∂ f_eq/∂ϵ)where Γ is correction factor due to magnetic field 𝐁. Plugging δ f_𝐤 into Eq. (<ref>), we haveeB/ħ^2(-v_zsinθ∂/∂ k_x+v_zcosθ∂/∂ k_y+(v_xsinθ-v_ycosθ)∂/∂ k_z) (eEDτ(v_x+eBcosθ/ħ(𝐯_𝐤·Ω_𝐤))+𝐯·Γ)=𝐯·Γ/DτWe now calculate the correction factor Γ which vanishes in the absence of any magnetic field 𝐁 by expanding the inverse band-mass which arises in Eq. (<ref>), and noting the fact that the above equation is valid for all values of velocity.The Boltzmann distribution function f_𝐤 is then evaluated to be,f_𝐤 =f_eq-eDEτ(v_x+eBcosθ/ħ(𝐯_𝐤·Ω_𝐤))∂ f_eq/∂ϵ-eDEτ(v_xc_xsinθ+v_yc_ycosθ+v_zc_z))∂ f_eq/∂ϵwhere c_x, c_y and c_z are correction factors which incorporate Berry phase effects and are related to Γ (see supplementary material).In the absence of any thermal gradient, the charge current can be written as 𝐉=e∫[d^3k]D^-1ṙf_𝐤, accounting for the modified density of states due to the phase space factor D.Substituting f_k into this equation and comparing it with Eq. (<ref>), we now arrive at the expression for the longitudinal electrical conductivityσ_xx =e^2∫d^3k/(2π)^3τ [D(v_x+eBcosθ/ħ(𝐯_𝐤·Ω_𝐤))^2](-∂ f_eq/∂ϵ)where we have dropped the other terms which vanish upon integration around a single Weyl node, or are of a much smaller order of magnitude compared to others in typical Weyl metals. 
In the above equation the anomalous velocity factor eBcosθ/ħ(𝐯_𝐤·Ω_𝐤) arises due to the topological chiral anomaly term which gives a finite 𝐁-dependent longitudinal electrical conductivity, which is otherwise absent for a regular Fermi liquid.When θ=0, we recover the formula for LMC for parallel 𝐄 and 𝐁 fields as derived in earlier works <cit.>. Now substituting f_k from Eq. (<ref>) into Eq. (<ref>), we then arrive at the following expression for the electrical Hall conductivityσ_yx =e^2∫d^3k/(2π)^3Dτ(-∂ f_eq/∂ϵ) [(v_y+eBsinθ/ħ(𝐯_𝐤·Ω_𝐤)) (v_x+eBcosθ/ħ(𝐯_𝐤·Ω_𝐤))]-e^2/ħ∫d^3k/(2π)^3Ω_zf_eq+e^2∫d^3k/(2π)^3τ (sinθ c_xv_x+cosθ c_yv_y+c_zv_z)v_y(-∂ f_eq/∂ϵ) In the above expression the second momentum space integral (of the Berry curvature Ω_z) in the above equation corresponds to the regular anomalous Hall contribution (σ_xy^a) from a single Weyl node.Summed over all the nodes this term is non-zero for time reversal broken WSMs but vanishes for inversion broken WSMs as the integral over the Berry curvature vanishes in the presence of time reversal symmetry. We shall not consider this term any further as we are only interested in the chiral anomaly induced contribution to the Hall conductivity. We also note that our present Boltzmann treatment with energy independent scattering time defined on the Fermi surface is valid for μ>>k_BT, ħω_c <cit.>, and in this limit the values of the terms involving c_x, c_y, c_z are orders of magnitude smaller than the contribution from the rest in Eq. (<ref>).We then arrive at our final expression for the chiral anomaly induced planar Hall conductivity,σ_yx^ph =e^2∫d^3k/(2π)^3Dτ(-∂ f_eq/∂ϵ) [eBsinθ/ħ(𝐯_𝐤·Ω_𝐤) (v_x+eBcosθ/ħ(𝐯_𝐤·Ω_𝐤))]where the superscript `ph' stands for “planar Hall” effect.Eqs. (<ref>,<ref>) are the central results of this paper. The numerical calculations to compute LMC and PHC have been performed for a prototype lattice model of a time reversal symmetry breaking Weyl semimetal with the lattice regularization providing a physical ultra-violetcut-off to the momentum integrals. The prototype lattice model is given by,H_𝐤=H^L(𝐤)+H^T(𝐤)where H^L produces a pair of Weyl nodes of type-I at (± k_0,0,0) <cit.>,H^L(𝐤) =(m(cos(k_yb)+cos(k_zc)-2)+2t(cos(k_xa)-cos k_0))σ_1-2tsin(k_yb)σ_2-2tsin(k_zc)σ_3Here, m is the mass and t is hopping parameter. The second term of the Hamiltonian H^T tilts the nodes along k_x direction, and can be written as,H^T(𝐤)=γ(cos(k_xa)-cos k_0)σ_0where γ is the tilt parameter. We first examine Eq. (<ref>) for γ=0, the case of atype-I WSM. After performing the momentum space integrals, and retaining only the non-vanishing terms, σ^ph_xy is given by σ_yx^ph =e^2∫d^3k/(2π)^3Dτ(-∂ f_eq/∂ϵ) e^2B^2sinθcosθ/ħ^2(𝐯_𝐤·Ω_𝐤)^2Clearly, when θ=0,π/2, σ_yx^ph=0, as expected, and the net Hall conductivity is determined by the Berry phase induced anomalous Hall contribution (if present as in the case of a time reversal broken WSM). But σ_yx^ph in Eq. (<ref>) is generically non-zero for any other arbitrary angle.Using Eqs. (<ref>,<ref>) we can now express σ_xx and σ_yx^ph in terms of the diagonal components of the conductivity tensor, σ_∥ and σ_⊥, corresponding to the cases when the current flows along and perpendicular to the magnetic field. Substituting θ=0 and θ=π/2 into Eq. (<ref>), we haveσ_∥=σ+e^4∫d^3k/(2π)^3Dτ(-∂ f_eq/∂ϵ) B^2/ħ^2(𝐯_𝐤·Ω_𝐤)^2σ_⊥=σEq. (<ref>) and Eq. (<ref>) thus take the formσ_xx=σ_⊥+Δσcos^2θσ_yx^ph=Δσsinθcosθwhere Δσ=σ_∥-σ_⊥, gives the anisotropy in conductivity due to chiral anomaly. 
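As an illustrative numerical cross-check of the two relations just obtained (a Python sketch added here for convenience, not part of the original derivation), the code below evaluates the single-node, T -> 0 Fermi-surface integrals written above for an untilted node with ε = v_F|𝐤|, 𝐯 = v_F k̂ and Ω = -χ k̂/2k², in arbitrary units with e = ħ = τ = 1 and assumed toy values v_F = k_F = 1, B = 0.2. Only the chiral-anomaly terms kept in the text are retained (the intrinsic anomalous-Hall piece and the c_i corrections are dropped, as the text does), and the output confirms σ_xx(θ) = σ_⊥ + Δσ cos²θ and σ_yx^ph(θ) = Δσ sinθ cosθ with a common Δσ that grows as B².

    import numpy as np

    vF, kF, chi = 1.0, 1.0, +1                      # toy parameters; e = hbar = tau = 1

    def sigmas(B, theta, n=400):
        # T -> 0 Fermi-surface angular integrals for a single untilted Weyl node
        c = -1.0 + (np.arange(n) + 0.5)*(2.0/n)     # cos(polar angle of k), midpoint grid
        p = np.linspace(0.0, 2*np.pi, n, endpoint=False)
        C, P = np.meshgrid(c, p, indexing='ij')
        S = np.sqrt(1.0 - C**2)
        khat = np.stack([S*np.cos(P), S*np.sin(P), C])
        v = vF*khat
        Om = -chi*khat/(2.0*kF**2)                  # Berry curvature on the Fermi surface
        Bv = np.array([B*np.cos(theta), B*np.sin(theta), 0.0])
        D = 1.0/(1.0 + np.einsum('i,ijk->jk', Bv, Om))   # phase-space factor
        vO = np.einsum('ijk,ijk->jk', v, Om)             # v . Omega
        wx = v[0] + B*np.cos(theta)*vO              # velocity plus chiral-anomaly piece
        wy = v[1] + B*np.sin(theta)*vO
        meas = kF**2/(vF*(2*np.pi)**3)*(2.0/n)*(2*np.pi/n)
        return np.sum(D*wx*wx)*meas, np.sum(D*wy*wx)*meas

    B = 0.2
    s_perp = sigmas(B, np.pi/2)[0]
    dsig = sigmas(B, 0.0)[0] - s_perp               # Delta sigma = sigma_par - sigma_perp
    for th in (np.pi/6, np.pi/4, np.pi/3):
        sxx, syx = sigmas(B, th)
        print(th, sxx - (s_perp + dsig*np.cos(th)**2), syx - dsig*np.sin(th)*np.cos(th))
    print(dsig, sigmas(2*B, 0.0)[0] - sigmas(2*B, np.pi/2)[0])   # ~ 4x larger at 2B

The residuals printed in the loop are negligible compared to Δσ, and doubling B roughly quadruples Δσ, illustrating the B² amplitude discussed next.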
The amplitude of planar Hall conductivity shows B^2-dependence i.e. Δσ∝ B^2 for any value of θ except for θ=0 and θ=π/2 as shown in Fig. <ref>(b) whereas LMC has a finite value for all field directions and follows the B^2 dependence except at θ=π/2 (Inset of Fig. <ref>(b)). The longitudinal magnetoconductivity has the angular dependence of cos^2θ which is shown in Fig. <ref>(c), leading to the anisotropic magnetoresistance (AMR) <cit.>, whereas the planar Hall conductivity follows the cosθsinθ dependence as depicted in Fig. <ref>(d). Note that the planar Hall conductivity discussed here does not satisfy the antisymmetry property of regular Hall conductivity (σ_xyρ_yx=-1) since its origin is linked to the topological chiral anomaly term and not to a conventional Lorentz force, and this fact can be used to remove the regular Hall contribution from the total Hall response to isolate PHE in experiments by taking measurements with both positive and negative B. On the other hand, the planar Hall effect can be distinguished from the anomalous Hall effect by taking measurements with both B=0 and B ≠ 0 and subtracting the background (B=0) contribution. In Fig. <ref> we have plotted the numerically calculated LMC (σ_xx) for a type-II WSM as a function of 𝐁, where 𝐄 is applied along the tilt direction (x axis). Our calculations suggest that the LMC follows a 𝐁-linear dependence <cit.> when both the applied magnetic field and electric field are parallel to the tilt axis (also valid for 0 ≤θ<π/2 in the first quadrant of the plane, Fig. <ref>(a)) as depicted in Fig. <ref>(a). For non-zero magnetic field, LMC shows cosθ dependence for the same configuration of the applied 𝐄 and 𝐁 as shown in Fig. <ref>(b). Further, the planar Hall conductivity (σ_yx^ph at θ=π/4) computed using Eq. (<ref>) also follows the linear 𝐁 dependence at any angle 0<θ≤π/2 in the first quadrant of the plane when the applied 𝐄 is along the tilt axis, and it also shows the sinθ angular dependence at finite magnetic field for the same configuration. However, the B-dependence of both LMC and PHC is quadratic in B when the electric field is applied perpendicular to the tilt direction. The appropriate systems for the measurement of this anisotropy are the recently discovered type-II WSMs such as WTe_2 <cit.> and MoTe_2 <cit.>.Conclusion: In this work we have presented a quasi-classical theory of the chiral anomaly induced planar Hall effect in Weyl semimetals. We derived an analytical expression for the planar Hall conductivity and also elucidated its generic behavior for type-I and type-II WSMs. Unlike the anomalous Hall effect <cit.>, to the best of our knowledge PHE has not been described as a topological response function in terms of Berry phases, and our unified treatment of PHE and LMC in terms of chiral anomaly and Berry phase effects, together with experimental predictions in type-I and type-II Weyl semimetals, is an important first step in this direction.Acknowledgment: The authors (SN and AT) acknowledge the computing facility from DST-Fund for S and T infrastructure (phase-II) Project installed in the Department of Physics, IIT Kharagpur, India.ST acknowledges support from ARO Grant No: (W911NF-16-1-0182).Peskin M. E. Peskin, and D. V. Schroeder, An introduction to quantum field theory, Westview, (1995).Murakami1:2007 S. Murakami, New Journal of Physics 9, 356 (2007).Murakami2:2007 S. Murakami, S. Iso, Y. Avishai, M. Onoda, and N. Nagaosa, Phys. Rev. B 76, 205304 (2007).Yang:2011 K. Y. Yang, Y. M. Lu, and Y. Ran, Phys. Rev. B 84, 075129 (2011).Burkov1:2011 A. A.
Burkov, M. D. Hook, and L. Balents, Phys. Rev. B 84, 235126 (2011).Burkov:2011 A. A. Burkov and Leon Balents, Phys. Rev. Lett. 107, 127205, (2011).Volovik G. E. Volovik, Universe in a helium droplet, (Oxford University Press, 2003).Wan:2011 X. Wan, A. M. Turner, A. Vishwanath, and S. Y. Savrasov, Phys. Rev. B 83, 205101 (2011).Xu:2011 G. Xu, H. Weng, Z. Wang, X. Dai, and Z. Fang, Phys. Rev. Lett. 107, 186806 (2011).Xiao:2010 D. Xiao, M. C. Chang, and Q. Niu, Rev. Mod. Phys. 82, 1959 (2010).Nielsen:1981 H. B.Nielsen and M. Ninomiya, Phys. Lett. B 105 219 (1981).Nielsen:1983 H. B. Nielsen and M. Ninomiya, Phys. Lett. B 130, 389 (1983).Goswami:2015 P. Goswami, G. Sharma, S. Tewari, Phys. Rev. B 92, 161110 (2015).Zhong S. Zhong, J. Orenstein, J. E. Moore, Phys. Rev. Lett. 115, 117403 (2015).Goswami:2013 P. Goswami and S. Tewari, Phys. Rev. B 88, 245107 (2013).Bell:1969 J. S. Bell and R. A. Jackiw, Nuovo Cimento A 60, 47 (1969).Aji:2012 V. Aji, Phys. Rev. B 85 241101 (2012).Adler:1969 S. Adler, Phys. Rev. 177, 2426 (1969).Zyuzin:2012 A. A. Zyuzin, S. Wu, and A. A. Burkov, Phys. Rev. B 85, 165110 (2012).Son:2013 D. T. Son and B. Z. Spivak, Phys. Rev. B 88, 104412 (2013).Kim:2014 Ki-Seok Kim, Heon-Jung Kim, and M. Sasaki, Phys. Rev. B 89, 195137, (2014).He:2014 L. P. He, X. C. Hong, J. K. Dong, J. Pan, Z. Zhang, J. Zhang, and S. Y. Li, Phys. Rev. Lett. 113, 246402 (2014).Liang:2015 T. Liang, Q. Gibson, M. N. Ali, M. Liu, R. J. Cava, N. P. Ong,Nat Mater 14, 280 (2015).CLZhang:2016 C.-L. Zhang, S.-Y. Xu, I. Belopolski, Z. Yuan, Z. Lin, B. Tong, G. Bian, N. Alidoust, C.-C. Lee, S.-M. Huang, T.-R. Chang, G. Chang, C.-H. Hsu, H.-T. Jeng, M. Neupane, D. S. Sanchez, H. Zheng, J. Wang, H. Lin, C. Zhang, H.-Z. Lu, S.-Q. Shen, T. Neupert, M. Z. Hasan, and S. Jia, Nat. Commun. 7, 10735 (2016).QLi:2016 Q. Li, D. E. Kharzeev, C. Zhang, Y. Huang, I. Pletikosic, A. V. Fedorov, R. D. Zhong, J. A. Schneeloch, G. D. Gu, and T. Valla, Nat. Phys. 12, 550 (2016). Xiong J. Xiong, S. K. Kushwaha, T. Liang, J. W. Krizan, M. Hirschberger, W. Wang, R. J. Cava, and N. P. Ong, Science 350, 413 (2015).Hirsch M. Hirschberger, S. Kushwaha, Z. Wang, Q. Gibson, S. Liang, C. A. Belvin, B. A. Bernevig, R. J. Cava, and N. P. Ong, Nat Mater 15, 1161 (2016).Burkov:2017 A. A. Burkov, arXiv:1704.05467 (2017).KY:1968 Vu Dinh Ky, Phys. stat. sol. 26, 565 (1968).Dobrowolska:2007 Z. Ge, W. L. Lim, S. Shen, Y. Y. Zhou, X. Liu, J. K. Furdyna, and M. Dobrowolska, Phys. Rev. B 75, 014407 (2007).Bowen_2005 M. Bowen, K.-J. Friedland, J. Herfort, H.-P. Schönherr, and K. H. Ploog, Phys. Rev. B 71, 172401 (2005).Keizer_2007 S. T. B. Goennenwein, R. S. Keizer, S. W. Schink, I. van Dijk, T. M. Klapwijk, G. X. Miao, G. Xiao, and A. Gupta, Appl. Phys. Lett. 90, 142509 (2007).Friedland_2006 K.-J. Friedland, M. Bowen, J. Herfort, H. P. Schönherr, and K. H. Ploog, J. Phys.: Condens. Matter 18, 2641 (2006).Taskin:2017 A. A. Taskin, Henry F. Legg, Fan Yang, Satoshi Sasaki, Yasushi Kanai, Kazuhiko Matsumoto, Achim Rosch, Yoichi Ando, arXiv:1703.03406 (2017).Jung T. Jungwirth, Q. Niu, and A. H. MacDonald, Phys. Rev. Lett. 88, 207208 (2003).Soluyanov:2015 A. A. Soluyanov, D. Gresch, Z. Wang, Q. Wu, M. Troyer, X. Dai, and B. A. Bernevig, Nature 527, 495 (2015).XiongEPL J. Xiong, S. K. Kushwaha, J. W. Krizan, T. Liang, R. J. Cava, N.P. Ong, Europhys. Lett. 114, 27002 (2016).Watzman_2017 Sarah J. Watzman, Timothy M. McCormick, Chandra Shekhar, Shu-Chun Wu, Yan Sun, Arati Prakash, Claudia Felser, Nandini Trivedi, and Joseph P. 
Heremans, arXiv:1703.04700 (2017).ShekharNature C. Shekhar, A.K. Nayak, Y. Sun, M. Schmidt, M. Nicklas, I. Leermakers, U. Zeitler, Z. Liu, Y. Chen, W. Schnelle, J. Grin, C. Felser, B.Yan, Nature Physics 11, 645 (2015).Lundgren:2014 R. Lundgren, P. Laurell, and G. A. Fiete, Phys. Rev. B 90 165115 (2014).Ziman John. M. Ziman, Electrons and phonons: the theory of transport phenomena in solids. Oxford, UK: Clarendon Press, (2001). Duval:2006 C. Duval, Z. Horvth, P. A. Horvthy, L. Martina, and P. C. Stichel, Mod. Phys. Lett. B, 20, 373 (2006).Niu:2006 D. Xiao, Y. Yao, Z. Fang, and Q. Niu, Phys. Rev. Lett, 97, 026603 (2006).Sharma:2016 G. Sharma, P. Goswami, and S. Tewari, Phys. Rev. B 93, 035116 (2016).Nandini_2017 Timothy M. McCormick, Itamar Kimchi, and Nandini Trivedi, Phys. Rev. B 95, 075133 (2017).Pan J. P. Pan, Solid State Physics, edited by F. Seitz and D. Turnbull, Vol. 5 (Academic, New York, 1957) pp. 1-96.Hong:1995 K. Hong and N. Giordano, Phys. Rev. B 51, 9855 (1995).Tang:2003 H. X. Tang, R. K. Kawakami, D. D. Awschalom, and M. L. Roukes, Phys. Rev. Lett, 90, 107201 (2003).Sharma2:2016 G. Sharma, P. Goswami, and S. Tewari, Phys. Rev. B 96, 045112 (2017).Wu_2016 Yun Wu, Daixiang Mou, Na Hyun Jo, Kewei Sun, Lunan Huang, S. L. Bud’ko, P. C. Canfield, and Adam Kaminski, Phys. Rev. B 94, 121113 (R) (2016).Deng_2016 Ke Deng, Guoliang Wan, Peng Deng, Kenan Zhang, Shijie Ding, Eryin Wang, Mingzhe Yan, Huaqing Huang, Hongyun Zhang, Zhilin Xu, Jonathan Denlinger, Alexei Fedorov, Haitao Yang, Wenhui Duan, Hong Yao, Yang Wu, Shoushan Fan, Haijun Zhang, Xi Chen, and Shuyun Zhou, Nat. Phys. 12, 1105-1110 (2016). Son:2012 D.T. Son and N. Yamamoto, Phys. Rev. Lett. 109, 181602 (2012). Goswami:2013 P. Goswami and S. Tewari, Phys. Rev. B 88 245107 (2013). Grushin:2012 A.G. Grushin, Phys. Rev. D 86, 045001 (2012). Fukushima:2008 K. Fukushima, D.E. Kharzeev, H.J. Warringa, Phys. Rev. D 78, 074033 (2008). Franz:2013 M.M Vazifeh, and M. Franz, Phys. Rev. Lett. 111 027201 (2013). Chen:2013 Chen, Y., Si Wu, and A. A. Burkov, Phys. Rev. B 88 125105 (2013). §SUPPLEMENTARY INFORMATIONIn this section we provide the detailed calculation to find the correlation factor Γ starting from the Eq. (<ref>) of the main text. The correlation factor arises due to applied magnetic field (𝐁). Using the expression of band-mass tensor m_ij^-1=1/ħ^2∂^2ϵ_k(𝐤)/∂ k_i∂ k_j, we have from the Eq. 
(<ref>)e^2BEDτ[v_z(cosθ/m_xy-sinθ/m_xx)+(v_xsinθ-v_ycosθ)/m_xz+ eBcosθ/ħ(-v_z(Ω_x/m_xx+Ω_y/m_xy+Ω_z/m_xz)sinθ+v_z(Ω_x/m_xy+Ω_y/m_yy+Ω_z/m_yz) cosθ+(v_xsinθ-v_ycosθ)(Ω_x/m_xz+Ω_y/m_yz+Ω_z/m_zz))]+eB[-v_zsinθ(Γ_x/m_xx+Γ_y/m_xy+Γ_z/m_xz)+v_zcosθ(Γ_x/m_xy+ Γ_y/m_yy+Γ_z/m_yz) +(v_xsinθ-v_ycosθ)(Γ_x/m_xz+Γ_y/m_yz+Γ_z/m_zz)]=1/Dτ(v_xΓ_x+v_yΓ_y+v_zγ_z)The above equation can be rewritten ase^2BEDτ[v_z(cosθ/m_xy-sinθ/m_xx)+(v_xsinθ-v_ycosθ)/m_xz+ eBcosθ/ħ(-v_zPsinθ+v_zRcosθ+(v_xsinθ-v_ycosθ)T)] +eB[-v_zsinθ(Γ_x/m_xx+Γ_y/m_xy+Γ_z/m_xz)+v_zcosθ(Γ_x/m_xy+ Γ_y/m_yy+Γ_z/m_yz)+(v_xsinθ-v_ycosθ)(Γ_x/m_xz+Γ_y/m_yz+Γ_z/m_zz)] =1/Dτ(v_xΓ_x+v_yΓ_y+v_zγ_z)where P=Ω_x/m_xx+Ω_y/m_xy+Ω_z/m_xz, R=Ω_x/m_xy+Ω_y/m_yy+Ω_z/m_yz and T=Ω_x/m_xz+Ω_y/m_yz+Ω_z/m_zz.Now, imposing the condition that the above equation is valid for all values of v_x, v_y, and v_z, we havee^2BEDτ(sinθ/m_xz+eB sinθcosθ/ħT)+eBsinθ (Γ_x/m_xz+Γ_y/m_yz+Γ_z/m_zz)-Γ_x/Dτ=0 e^2BEDτ(cosθ/m_xz+eB cos^2θ/ħT)+eBcosθ (Γ_x/m_xz+Γ_y/m_yz+Γ_z/m_zz)+Γ_y/Dτ=0 e^2BEDτ[(cosθ/m_xy-sinθ/m_xx)+eB cosθ/ħ(Rcosθ-Psinθ)]+eB[cosθ(Γ_x/m_xy+ Γ_y/m_yy+Γ_z/m_yz) -sinθ(Γ_x/m_xx+Γ_y/m_xy+Γ_z/m_xz)]-Γ_z/Dτ=0After solving the above three equations, we haveΓ_z=N(M_1M_2+M_3M_4)/1/(Dτ)^2-(eBcosθ/m_yz-eBsinθ/m_xz)^2-eB/m_zzM_4; Γ_y=-cosθ[NM_3+Γ_zeB/m_zz]/M_2;Γ_x=sinθ[NM_3+Γ_zeB/m_zz]/M_2where,N=e^2EBDτ ;M_1=-sinθ/m_xx+cosθ/m_xy+eBcosθ/ħ(Rcosθ-Psinθ);M_2=1/Dτ-eBsinθ/m_xz+eBcosθ/m_yzM_3=eBcosθ/ħT+1/m_xz;M_4=eBsin 2θ/m_xy-eBcos^2θ/m_yy-eBsin^2θ/m_xxTo write the Boltzmann distribution function f_k explicitly we have defined Γ_x=eEτ c_xsinθ, Γ_y=eEτ c_ycosθ, and Γ_z=eEτ c_z. where,Using the above equations, we can write the Boltzmann distribution function f_k as given in the Eq. (<ref>) of the main text. | http://arxiv.org/abs/1705.09308v2 | {
"authors": [
"S. Nandy",
"Girish Sharma",
"A. Taraphder",
"Sumanta Tewari"
],
"categories": [
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.mes-hall",
"published": "20170525180349",
"title": "Chiral anomaly as origin of planar Hall effect in Weyl semimetals"
} |
a]Brian Batell,[email protected] b,c,d,1]Michael A. Fedderke,10000-0002-1319-1622 [email protected] b,c,d]and Lian-Tao Wang [email protected] [a]Pittsburgh Particle Physics, Astrophysics, and Cosmology Center, Department of Physics and Astronomy, University of Pittsburgh, 3941 O'Hara Street, Pittsburgh, PA 15260, USA [b]Department of Physics, The University of Chicago, 5640 S Ellis Ave, Chicago, IL 60637, USA [c]Enrico Fermi Institute, The University of Chicago, 5640 S Ellis Ave, Chicago, IL 60637, USA [d]Kavli Institute for Cosmological Physics, The University of Chicago, 5640 S Ellis Ave, Chicago, IL 60637, USA1705.09666We describe a composite Higgs scenario in which a cosmological relaxation mechanism naturally gives rise to a hierarchy between the weak scale and the scale of spontaneous global symmetry breaking. This is achieved through the scanning of sources of explicit global symmetry breaking by a relaxion field during an exponentially long period of inflation in the early universe. We explore this mechanism in detail in a specific composite Higgs scenario with QCD-like dynamics, based on an ultraviolet SU(N)_TC `technicolor' confining gauge theory with three Dirac technifermion flavors.We find that we can successfully generate a hierarchy of scales ξ≡⟨ h ⟩^2 / F_π^2 ≳ 1.2 × 10^-4 (i.e., compositeness scales F_π∼ 20 TeV) without tuning. This evades all current electroweak precision bounds on our (custodial violating) model. While directly observing the heavy composite states in this model will be challenging, a future electroweak precision measurement program can probe most of the natural parameter space for the model. We also highlight signatures of more general composite Higgs models in the cosmological relaxation framework, including some implications for flavor and dark matter.Relaxation of the Composite Higgs Little Hierarchy [ Received: date / Accepted: date ================================================== § INTRODUCTIONThe cosmological relaxation scenario of Graham, Kaplan, and Rajendran <cit.> provides a novel approach to the hierarchy problem of the Standard Model (SM).In this scenario, the vacuum expectation value (vev) of an axion field <cit.>, dubbed the relaxion, slowly rolls through a trans-Planckian excursion down a very flat shift-symmetry-breaking potential during an exponentially long period of low-scale inflation, in the process dynamically `scanning' the value of the Higgs squared-mass parameter. Although the bare Higgs squared-mass parameter can be assumed natural (i.e., positive and of the order of the cutoff), it is eventually scanned through zero, triggering spontaneous electroweak symmetry breaking (EWSB). The breaking of electroweak symmetry gives rise to a back-reaction[ In the simplest realization <cit.>, the back-reaction is supplied by the emergence of the periodic QCD vacuum potential <cit.> following EWSB, with V_qcd∝ m_π^2 f_π^2 ∝ m_u,d∝ v. ] on the flat relaxion scanning potential which, combined with energy dissipation from Hubble friction, stalls the relaxion rolling, dynamically locking in a small, technically natural value of the Higgs vev and Higgs mass.[ Related ideas utilizing Hubble friction and back-reaction from field dynamics are employed in the warm inflation scenario <cit.>. 
] While some of the required ingredients may appear rather exotic from an effective field theory perspective, the scenario offers a fresh perspective on the hierarchy problem, in the spirit of the self-organized criticality proposal of Giudice:2008bi, and is worthy of further exploration (see [s]Espinosa:2015eda,Hardy:2015laa,Antipin:2015jia,Patil:2015oxa,Jaeckel:2015txa,Gupta:2015uea,Batell:2015fma,Matsedonskyi:2015xta,DiChiara:2015euo,Ibanez:2015fcv,Fonseca:2016eoo,Evans:2016htp,Kobayashi:2016bue,Farakos:2016hly,Hook:2016mqo,Higaki:2016cqb,Choi:2016luu,McAllister:2016vzi,Choi:2016kke,Flacke:2016szy,Lalak:2016mbv,You:2017kah,Evans:2017bjs,Agugliaro:2016clv,Beauchesne:2017ukw for some recent studies).See also [s]Abbott:1984qf,Dvali:2003br,Dvali:2004tma,Arkani-Hamed:2016rle,Arvanitaki:2016xds for other cosmological approaches to naturalness.The relaxion models presented in Graham:2015cka are based on the QCD axion, or an extended strong dynamics with a non-QCD axion. These simple models are able to extend the cutoff of the SM to scales that are parametrically larger than the weak scale, but still well below the GUT or Planck scales. In other words, these simple models are not able to fully address the `big' hierarchy problem, but instead can offer a solution to the `little' hierarchy problem—i.e., a way to understand the absence of new particles at the LHC, as well as deviations from the SM predictions in precision flavor, electroweak, and Higgs measurements. While it is possible that more complex models (perhaps with additional scanning fields <cit.>) can extend the cutoff further into the ultraviolet (UV) and perhaps even all the way to the Planck scale, one can also imagine that the new physics that emerges at the cutoff is of a more conventional type, such as supersymmetry or compositeness, which shields the Higgs against arbitrary short-distance physics. Supersymmetric completions of the relaxion were investigated in [s]Batell:2015fma,Evans:2016htp.In this paper, we consider the relaxion scenario in the context of composite Higgs (CH) models <cit.>[ See also [s]Terazawa:1976xx,Terazawa:1979pj for some early work proposing the Higgs as a bound state of constituent fermions. ] (see Panico:2015jxa for a recent review). In such models, the big hierarchy problem is ameliorated by the assumption that the Higgs is a composite object of heavy `technifermions' bound together by the agency of strong `technicolor' (TC) gauge dynamics <cit.>. Above the confinement scale, the theory is one of free fermion constituents whose masses are technically natural. As in QCD, dimensional transmutation accounts for the hierarchy between the ultimate cutoff scale (e.g., the GUT or Planck scale) and the confinement scale of the composite theory. Below the confinement scale, the theory is that of the pseudo-Nambu–Goldstone bosons (pNGBs) <cit.>—four of which comprise the Higgs doublet—of a spontaneous global symmetry breaking triggered by the strong dynamics when it confines.Due to explicit global symmetry breaking, the pNGB Higgs develops a potential, and vacuum misalignment arguments dictate that the Higgs vev in such models is expected to be of the same order as the compositeness scale, whereas phenomenological viability of CH models demands that the Higgs vev, ⟨ h ⟩ = 246 GeV, should lie somewhat below the compositeness scale, F_π. 
This is summarized by the well-known requirement ξ ≡⟨ h ⟩^2/F_π^2 ≪ 1, which encapsulates the little hierarchy problem in CH models.Our aim in this paper is to demonstrate, within the context of an explicit CH model, that a large hierarchy between the weak scale and the global symmetry breaking scale, (<ref>), can be achieved in a technically natural fashion by invoking the cosmological relaxation mechanism. The essential feature is that as the relaxion evolves in the early universe, it scans the techniquark masses, which provide a source of explicit global symmetry breaking. Since the Higgs potential is controlled by such explicit symmetry breaking, this manifests in the low energy effective theory as a scanning of the Higgs potential, allowing the relaxation mechanism to be implemented in a manner similar to Graham:2015cka.UV completions of CH models based on strong technicolor dynamics generally give rise to the cosets SU(N_F)/SO(N_F), SU(N_F)/Sp(N_F), and [SU(N_F)× SU(N_F)]/SU(N_F), when N_F technifermions are in a real, pseudoreal, or complex representation, respectively, of the technicolor gauge group <cit.>. While the relaxation mechanism can be implemented with any of these cosets, we will construct and investigate a concrete model with QCD-like dynamics, based on angauge group with N_F = 3 Dirac flavors (an `L + N' model). This leads to the global symmetry breaking pattern SU(3)× SU(3) × U(1) → SU(3) × U(1) ⊃ SU(2)_W× U(1)_Y. Indeed, this theory can in many ways be viewed as a scaled up copy of QCD. Interestingly, this is the smallest in the class of QCD-like cosets, [SU(N_F)× SU(N_F)]/SU(N_F), which furnishes a Higgs doublet. However, this coset is not usually considered for CH models since it does not contain the custodial symmetry group, SU(2) × SU(2) <cit.>, which protects against large tree-level corrections to the electroweak precision T parameter <cit.>.In our scenario, however, this coset can indeed be viable since the relaxation mechanism will naturally generate the large hierarchy in (<ref>), allowing the T parameter to be adequately suppressed and compatible with precision electroweak measurements.In our construction, the relaxion is taken to have axion-like couplings to both thetechnicolorgauge groupandthe QCDgauge group. An appropriate chiral rotation of the technifermion fields leads to a coupling of the relaxion to the techniquark masses.As the techniquark mass terms explicitly break the global symmetry and contribute to the composite Higgs potential, this coupling provides the basis for the scanning mechanism.We construct the low-energy Chiral Lagrangian, taking into account the large radiative corrections to the Higgs potential due to the top quark, and show that the potential contains the requisite relaxion–Higgs couplings to effect electroweak symmetry breaking and halt the relaxion evolution once itdynamically rolls through some critical value. 
The strong-CP problem is addressed as in Graham:2015cka with a slope-drop mechanism:we assume the scanning potential for the relaxion arises from a coupling to the inflaton, such that it dominates the rolling during inflation but disappears post-inflation, allowing the effective QCD θ-angle to relax to small values.By design, the relaxation mechanism pushes the dynamics stabilizing the weak scale to higher scales, making experimental confirmation of the scenario more challenging.While there is no guarantee that the framework can be fully tested with near-term experiments, there are certainly some experimental opportunities worth pursuing.In the specific model studied here, the spectrum of the pNGB sector in our model consists of a light composite Higgs state, and four additional ultra-heavy composite technimesons, which are either neutral or only charged under the electroweak (EW) gauge group. It will be challenging to directly probe such a heavy spectrum, even at proposed future hadron colliders such as the SPPC <cit.> or FCC-hh <cit.>. More promisingly, future improvements in the measurements of electroweak precision observables (EWPO) such as the T parameter at the ILC <cit.>, CEPC <cit.>, or FCC-ee <cit.> have the potential <cit.> to probe this model over most of the natural parameter space. More generally, there is potentially a diverse set of experimental probes for this and other composite Higgs theories within the cosmological relaxion framework, including tests of flavor and CP violation, electroweak precision measurements, dark matter and axion searches, and collider searches for new states. We will highlight some of these opportunities.Our work is not the first to consider the cosmological relaxation mechanism of Graham:2015cka in the context of composite Higgs models; however, important details of our model differ significantly from previous work <cit.>.In particular, we consider in our work only a single composite Higgs doublet, whereas [s]Antipin:2015jia,Agugliaro:2016clv both analyze Type-I Two Higgs Doublet Models with one elementary and one composite Higgs doublet. Another major difference is that we require the additional axion-like coupling of the relaxion to QCD to stall its rolling, following Graham:2015cka, whereas in [s]Antipin:2015jia,Agugliaro:2016clv, the relaxion rolling is stalled by virtue of the Higgs-vev-dependent barriers for ϕ that appear as a result of the non-QCD strong gauge dynamics.The rest of this paper is structured as follows:we begin in cartoon with a discussion of a simplified `cartoon' picture of the mechanism we wish to explore in this paper, in order to orient the reader before we delve into the detailed construction and analysis of our explicit model; in this section we also review both the post-inflation slope-drop mechanism of Graham:2015cka as it applies to our model to solve the strong-CP problem, and the clockwork mechanism <cit.> that may potentially generate the requisite super-Planckian axion decay constants for our model. 
In constituent_model we begin our explicit model construction, by presenting the constituent UV model that defines the underlying theory for the CH sector; we also specify the effective four-fermion interactions that give rise to the CH Yukawa couplings, specify the relaxion sector, and make some initial manipulations to the model to allow construction of the low-energy Chiral Lagrangian.In chiral_lag we explicitly construct the Chiral Lagrangian describing the composite states of the theory defined in constituent_model, and we extract those terms from the Chiral Lagrangian which are required to obtain the spectrum of the theory and understand its vacuum structure. Section <ref> contains our detailed analysis of the effective potential for the model, along with our analysis of the properties of the broken and symmetric electroweak phases of the theory. The relaxion potential is discussed in relaxion_potential. We present a summary of our analytical results and a numerical investigation of the model parameter space in summary_numerical. A general discussion of some additional phenomenologically interesting considerations applicable to both our model, and more general composite Higgs models with large F_π, is given in Section <ref>. We conclude in conclusion. Appendix <ref> gives the closed-form expression for a general exponentiated SU(3) matrix, which is of some utility in our analysis. Appendix <ref> contains a more general analysis of the EWSB dynamics of our full model, in which we relax one of the simplifying assumptions made in effective_potential. § A SIMPLIFIED `CARTOON' MODELIn order to facilitate a better understanding of the ideas we will explore in this paper, we present in this section a cartoon picture of the mechanism that we develop in greater detail in the sections to follow.In order to successfully implement the relaxion mechanism of Graham:2015cka in a composite Higgs model, we need to engineer three essential components: (a) a CH–relaxion coupling, (b) a potential for the relaxion which is sufficiently flat and which causes the field to slow-roll in the correct direction in field space to trigger dynamical EWSB, and (c) a mechanism to create barriers in the relaxion potential that stall its slow-roll once EWSB is triggered.We will achieve (a) and (c) by assuming that the relaxion ϕ is an axion of both the strongly coupled TC gauge group that confines to yield the composite Higgs state, and of QCD (see clockwork): ⊃g_s^2/16π^2[ ϕ/f - θ_qcd] G_μνG^μν+ g_tc^2/16π^2ϕ/FG_tc μνG^μν_tc, where f and F are dimensionful parameters with F ≫ f. Appropriate to the level of our cartoon picture in this section, we will discuss only an approximate low-energy pseudo-Nambu–Goldstone boson (pNGB) description of the composite sector of the theory, via the Chiral Lagrangian. Ifis the matrix-valued field of pNGBs which include among them the composite Higgs state h, then once the axion-type couplings are rotated into the technifermion mass matrices in the underlying constituent theory, and we include the large, dominant radiative effects of the top quark,[ There are also subdominant effects from gauge loops. These will not change the qualitative picture we explore in this paper, and we ignore them. ] the following terms will appear in the effective potential for the model:V∼ - c_mF_π^2Me^iϕ/F + h.c. 
- c_ty_t^2 N_c ^2 F_π^2 /16π^2 | ·Δ |^2 + V_ϕ(ϕ) + V_qcd∼ - c_mF_π^2 m cos( h/F_π)cos( ϕ/F) - c_ty_t^2 N_c ^2 F_π^2 /16π^2sin^2( h/F_π) + V_ϕ(ϕ) + V_qcd, where c_t and c_m are perturbatively incalculable 𝒪(1) numbers, F_π is the compositeness scale associated with spontaneous global flavor symmetry breaking, ≈ (4π/√(N)) F_π is the cutoff scale of the composite theory, and we have taken m to be a representative mass of the technifermions. Furthermore, in (<ref>), V_ϕ(ϕ) is an additional relaxion potential which will be discussed in cartoonRelaxionV, and Δ is the appropriate projection operator that extracts the part ofto which the top quark couples (i.e., the Higgs doublet).We also emphasize that (<ref>) is highly schematic—much of the development in the following sections is precisely to deal with the more complicated structures that actually appear when evaluating (<ref>) in a realistic theory. Nevertheless, this simplified picture captures the essential features of the model. It also suffices for the present discussion to merely assume that V_qcd is a cosine periodic potential: V_qcd∼ - ^4 cos( ϕ/f - θ_qcd), where ^4 depends linearly on the Higgs vevthrough its dependence on the quark masses m_q: ^4 ∼ m_π^2 f_π^2 ∝ m_q ∝.§.§ Electroweak Symmetry BreakingThe dynamical picture to bear in mind is that while the relaxion is slow-rolling down its potential during an exponentially long period of low-scale inflation, the other fields respond by assuming their instantaneous equilibrium vacuum expectation values, such that the effective potential is minimized with ϕ held fixed. Therefore, before we return to a discussion of the relaxion rolling, consider first the dynamics of the h field per (<ref>); we will ignore the dynamics of any other states in the theory—this topic will consume much of our attention in the concrete model we analyze in the following sections.If we define cos(/F)≡ ( c_t y_t^2 N_c) / (8π^2 c_m m)>0 [we assume c_t, c_m>0], then the minimization condition for the potential (<ref>) in the h-direction is ∂_h V∝sin( /F_π) [ cos(ϕ/F)/cos(/F) - cos( /F_π) ] =0, where we have ignored correction terms ∼^4 / ( F_π^2 ) ≪ 1. We see that =0 is always a solution to (<ref>); whether or not there are other solutions in the region near /F_π≈ 0 depends on the relative sizes of cos(ϕ/F) and cos(/F). For cos(ϕ/F) > cos(/F), the [ ⋯]-bracket in (<ref>) cannot be set to zero for any value of h, so no additional solution(s) exists; however, for cos(ϕ/F) < cos(/F), two additional solutions to (<ref>) appear symmetrically around h=0. A graphical sketch of the situation is shown in the left panel of cartoon_plots, both for cos(ϕ/F) > cos(/F) and vice versa. Clearly, if ϕ rolls to larger values from some initial value satisfying ϕ<,[ We do not view this as a tuning. The initial value of ϕ must merely satisfy cos(ϕ_init./F) > cos(/F) to ensure a stable EW-symmetric vacuum to start with; this occurs for a large fraction of the available parameter space. See also Graham:2015cka, wherein an analogous mild assumption about the qualitative size of the initial value of the relaxion field is made. ] as it crossesit triggers a dynamical destabilization of the h=0 solution leading to a dynamically generated spontaneous EWSB. 
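To make the destabilization just described concrete, the following Python sketch (an illustration only; the inputs c_t = 0.35, y_t = 0.94, c_m = 1, Λ_TC = 80 TeV, F_π = 20 TeV and m = 3.2 TeV are assumed benchmark-style values in the spirit of the estimates later in this section, and the cutoff factors compressed out of the displayed formulas are restored on dimensional grounds) minimizes the h-dependent part of the cartoon potential at fixed relaxion value and shows that ⟨h⟩ switches on only once cos(φ/F) drops below cos(φ_c/F).

    import numpy as np

    cm, ct, yt, Nc = 1.0, 0.35, 0.94, 3            # assumed O(1) coefficients
    Lam, Fpi, m = 80.0, 20.0, 3.2                  # TeV; illustrative benchmark values

    cos_phic = ct*yt**2*Nc*Lam/(8*np.pi**2*cm*m)   # critical value, cf. the definition above
    print("cos(phi_c/F) =", round(cos_phic, 4))

    def V(h, cphi):
        # h-dependent part of the cartoon potential, in TeV^4, at fixed cos(phi/F) = cphi
        return (-cm*Lam*Fpi**2*m*np.cos(h/Fpi)*cphi
                - ct*yt**2*Nc*Lam**2*Fpi**2/(16*np.pi**2)*np.sin(h/Fpi)**2)

    h = np.linspace(0.0, 0.5*np.pi*Fpi, 200001)    # scan 0 <= h/F_pi <= pi/2
    for eps in (+1e-2, +1e-4, -1e-4, -1e-2):       # cos(phi/F) just above/below critical
        cphi = cos_phic + eps
        print("cos(phi/F) = %.4f  ->  <h> = %.3f TeV" % (cphi, h[np.argmin(V(h, cphi))]))
    # <h> stays at zero while cos(phi/F) > cos(phi_c/F); a nonzero vev develops continuously
    # once phi rolls just past phi_c, and it remains small if the roll stalls close to phi_c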
Per the mechanism developed in Graham:2015cka, once the h field gets a non-zero vev , the QCD barriers grow in size and rapidly stall the slow-roll of the relaxion field ϕ in the vicinity of , locking in a small, technically natural .Note that for the discussion in the previous paragraph to work, we must demand that 0<cos(/F)≤1, which implies a lower bound on the masses of the fermions charged under the strong dynamics (assuming that c_t,c_m>0): m ≥ N_cc_t/c_my_t^2/8π^2.§.§ The Higgs MassSome mild residual tuning is required to obtain the correct Higgs mass. From (<ref>)s]cartoonV2 and (<ref>), it is straightforward to derive that[ We remind the reader that -3mu(x)≡sin(x)/x ≃ 1 - x^2/6 + 𝒪(x^4) for small x. ] m_h^2= 4 c_t (N_c/N) [ y_t/√(2)( /F_π) ]^2 = 4 c_t (N_c/N) m_t^2 . Therefore, assuming that the relaxion mechanism has already selected the correct value of the Higgs vev(and by implication the correct top mass m_t), the residual tuning can be estimated by comparing the expected c_t ∼𝒪(1) with the required c_t ∼( N/N_c) ( 1/2m_h/m_t)^2 ∼ 0.1(N/3). The residual tuning for the Higgs mass is thus at worst on the order of an additional one-in-ten tuning, and may even be much milder if N∼ 10.§.§ The Relaxion PotentialIn order to achieve condition (b) and obtain a sufficiently flat relaxion potential which also drives the relaxion field toward the critical value, we will need to add in an extra potential term for the relaxion, V_ϕ(ϕ).This is illustrated in the right panel of cartoon_plots: without an additional term V_ϕ(ϕ) in the potential, during the EW-symmetric phase the relaxion would roll in the incorrect direction in field space to give rise to the dynamical EWSB we have just discussed. Following Graham:2015cka, we thus add a linear potential for the relaxion, to obtain the correct rolling. For convenience, we will choose to parametrize this term asV_ϕ(ϕ) = - γ c_mF_π^2 m / Fϕ, where γ is an entirely free parameter of as-yet-unknown size controlling the slope of the linear potential.[ We emphasize that this parametrization is only for convenience in writing expressions like (<ref>), and should be considered with care: γ may depend on other parameters in the theory in such a way that any naïve conclusions drawn from (<ref>) about the behaviour of V_ϕ in various parameter limits may be wrong; in particular, we do not intend to imply that V_ϕ(ϕ) vanishes in the m→0 or F →∞ limits. ] In the EW-symmetric phase, we would then have V(ϕ;h=0)∼ - c_mF_π^2 m [ cos( ϕ/F) + γϕ/ F] ; assuming slow-roll in the EW-symmetric phase this yields ∂_t ϕ∝ - ∂_ϕ V(ϕ;h=0)=c_m F_π^2 m/F[γ - sin( ϕ/F) ], which provides a lower bound γ≳ 1 such that ∂_t ϕ>0 for ϕ∈[0,].With an appropriate V_ϕ added in, the rolling direction is now correct, as is illustrated again in the right panel of cartoon_plots (which is shown schematically for γ∼ 1). In order for the QCD barriers to be effective in stopping the rolling of the relaxion field at ϕ^* =+ δϕ where 0<δϕ≪, the slope of the QCD barriers and the `relaxion rolling slope' must match approximately at the stopping point during inflation. If we naïvely[ Since the dynamical origin of the additional slope will in general be different from the strong TC dynamics, γ∼ 1 would appear to require some accidental coincidence. ] were to consider that γ∼1 then, up to 𝒪(1) numbers, c_m F_π^2 m/F ∼^4/f ⇒ F∼ c_m F_π^2 m/^41mu f(γ∼ 1) . 
Taking ∼80 TeV, F_π∼√(N) / 4π∼20 TeV for N∼10, m ∼ 3 TeV, ∼√( m_π f_π)≈ 110 MeV,[ We used the QCD neutral pion mass m_π≈ 135 MeV <cit.>, and the QCD pion decay constant f_π≈ 93 MeV. ] and c_m =1, this implies that F ∼ (7 × 10^20)f. This means that with the usual QCD Peccei–Quinn <cit.> symmetry breaking scale f∼ 10^11 GeV, this model will require F ∼ 7 × 10^31 GeV if we want the compositeness scale on the order of 20 TeV; we will comment on the viability of such a large dimensionful scale (F ≫) in clockwork.However, we know from Graham:2015cka that using the QCD barriers to stop the relaxion rolling results in a severe strong-CP problem if a significant non-QCD relaxion slope persists to the present day, because the stopping point for the relaxion is displaced from the minimum of the QCD potential. In order to alleviate this constraint, we will utilize the mechanism of post-inflation slope-drop proposed in Graham:2015cka: the basic idea of this mechanism is that the slope of the relaxion scanning potential V_ϕ(ϕ) should originate via a coupling to a field σ during inflation, γ = γ(σ), such that γ = γ_i during inflation, but when the σ field rolls to end inflation, it causes the slope of the scanning potential to disappear: γ→ 0. Thus, if we were to naïvely assume that γ_i∼1during inflation, and that slope-drop mechanism sends γ→ 0 at the end of inflation as σ rolls, the strong-CP problem would not be alleviated (see cartoon_plots_slope_drop). Suppose then that instead of considering the parameter regime γ_i ∼ 1, we consider γ_i ≫ 1, such that the relaxion rolling slope is entirely dominated by the linear contribution from the coupling to the σ-field. Then the estimate (<ref>) is modified by an additional factor of γ_i:γ_i1mu c_m F_π^2 m/F ∼^4/f ⇒ F∼γ_i1mu c_m F_π^2 m/^41mu f(γ_i ≫ 1). Now, taking the same parameter estimates as just below (<ref>), this implies that F ∼γ_i × (7× 10^31) GeV. For large γ_i, F is proportionally larger than the estimate at (<ref>), so that the overall slope of the additional potential term, ∂_ϕ V_ϕ = γ_i ( c_mF_π^2 m / F ) ∝γ_i/F, remains of the correct size to cancel against the QCD barriers and stop the rolling, while the contribution to the relaxion potential from the strong TC dynamics is highly suppressed. It is this suppression of the contribution from the strong dynamics in this region of parameter space that allows the slope-drop mechanism to work (see cartoon_plots_slope_drop).The estimate for the post-inflation settling point for the relaxion is obtained from (<ref>)s]cartoonV2, (<ref>) and (<ref>) with γ→0: ∂_ϕ V= 0⇒| c_mF_π^2 m/Fcos( /F_π) sin(ϕ/F) |∼| Λ^4/fsin( ϕ/f - θ_qcd) |. Post slope-drop, the relaxion will only roll a distance |Δϕ| ∼ f from its initial stopping point (see cartoon_plots_slope_drop), so the changes in sin( ϕ / F) and inwill be negligible.Taking sin( ϕ / F) ∼𝒪(1), using that the scanning mechanism selects cos(/ F_π ) ∼ 1 (i.e., ξ≪ 1) for the purposes of this estimate, and noting that θ_qcd^eff. = ( θ_qcd -ϕ/f ) 2π, we estimate that c_mF_π^2 m/F∼Λ^4/f|2mu sin( θ_qcd^eff.) |; but using (<ref>), we then have |2musin( θ_qcd^eff.) |∼1/γ_i ⇒|2muθ_qcd^eff.|∼1/γ_i . Therefore, we must have γ_i ∼ 10^10 to obtain an appropriately small QCD θ-angle, |1muθ_qcd^eff.|∼ 10^-10. 
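The numerology quoted in this paragraph is simple to reproduce; the few lines of Python below (a numerical restatement added for convenience, using the same representative inputs listed above) evaluate the slope-matching estimate and the associated effective θ-angle.

    # slope matching during inflation:  gamma_i * c_m * Lam * Fpi^2 * m / F  ~  Lqcd^4 / f
    GeV, TeV = 1.0, 1.0e3
    cm, Lam, Fpi, m = 1.0, 80*TeV, 20*TeV, 3*TeV
    Lqcd, f, gamma_i = 0.11*GeV, 1e11*GeV, 1e10    # Lqcd ~ sqrt(m_pi f_pi); gamma_i ~ 1/theta_eff

    F = gamma_i*cm*Lam*Fpi**2*m*f/Lqcd**4
    print("F ~ %.1e GeV  (= %.1e GeV x gamma_i)" % (F, F/gamma_i))   # ~ 7e31 GeV x gamma_i
    print("F/f ~ %.1e ;  theta_qcd_eff ~ 1/gamma_i = %.0e" % (F/f, 1.0/gamma_i))

For γ_i = 10^10 this gives F of order 10^41-10^42 GeV, matching the scales used in the self-consistency discussion that follows.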
We emphasize that the large dimensionless parameter γ_i is merely an artifact of our parametrization of V_ϕ(ϕ); the physical content of the statement that γ_i ≫ 1 is that the slope of the relaxion potential contributed by the strong dynamics should be highly suppressed compared to the slope of V_ϕ(ϕ); i.e., F must be taken larger than it would be were γ_i ∼ 1 (see clockwork). §.§ Self-ConsistencyIn addition to the `slope matching' condition (<ref>), Graham:2015cka presented a number of restrictions on the relaxion mechanism which must be satisfied to achieve self-consistency. As applied to this cartoon model, these restrictions are as follows:Vacuum energy domination. The total change in the relaxion energy density as ϕ rolls must be a sub-leading correction to the energy density driving inflation. The relaxion ϕ must roll from an initial position in field space near ϕ=0 to a value nearin order to trigger EWSB. While the exact value fordepends on parameter choices, we can generically assume that, because ϕ enters the Higgs potential as cos(ϕ/F), -2mu will not be orders of magnitude different from F. Therefore, V_I= 3 H_I^2 ^2 ≫Δ V(ϕ,h) ∼γ_i1mu c_mF_π^2 m ⇒ H_I≳( γ_ic_m F_π^2 m/3^2)^1/2, where = 1/ √(8π G_n)≈ 2.4 × 10^18 GeV is the reduced Planck mass, V_I is the energy density driving inflation, and H_I is the Hubble constant during inflation.Barrier formation. The Hubble scale must be low enough that the QCD barriers form: H_I ≲. Classical beats quantum. It is necessary to impose a constraint such that classical rolling of the relaxion dominates over quantum fluctuations, so that following inflation each patch of the universe obtains a vev of order the weak scale. In a Hubble time, Δϕ_cl. ∼ϕ̇Δ t_H ≈1/3 | ∂_ϕ V|/H_I^2∼γ_i c_mF_π^2 m/3F H_I^2,whileΔϕ_quantum ∼ H_I so Δϕ_cl. ≳Δϕ_quantum ⇒ H_I ≲( γ_i c_mF_π^2 m/3F)^1/3. Sufficiently many e-folds. For ϕ to roll a distance on the order of F in field space [ To avoid the assumption of a situation in which the initial value of ϕ is tuned to be near , we consider this conservative condition; were ϕ_init. accidentally closer to , the relaxion would not need to roll so far. See also footnote <ref>. ] given N_e e-folds of inflation requires that F ∼Δϕ ≈ϕ̇Δ t ≈ϕ̇N_e/H_I≈1/3 |∂_ϕ V|/H_I^2 N_e ⇒N_e∼3H_I^2F^2/γ_i c_mF_π^2 m . Combining (<ref>)s]slopeMatchingCartoon, (<ref>), and (<ref>) gives an upper limit on F:[ Using (<ref>) in place of (<ref>) gives a much weaker upper limit for all reasonable values of f. ] F≤( 3 ^6 f/^4)^1/3 = ( 8 × 10^41 GeV) ( f/10^11 GeV)^1/3. Using (<ref>), ≈ (4π/√(N))F_π≈ 4 F_π×(10/N)^1/2, and writing[ These estimates are consistent with the parameter choices appearing just below (<ref>). ] m ≈ N_cc_t/c_m y_t^2 /8π^2≈c_t/25c_m —which saturates (<ref>)—transforms (<ref>) into an upper limit on , the UV cutoff of the composite theory: ≲( 128π^4 √(3)/c_t y_t^2NN_c)^1/4γ_i^-1/4( ^4 ^3/f)^1/6 = ( 3 × 10^7 GeV) ( N/10)^-1/4( 1/γ_i)^1/4( f/10^11 GeV)^-1/6= ( 8 × 10^4 GeV) ( N/10)^-1/4( 10^10/γ_i)^1/4( f/10^11 GeV)^-1/6= ( 8 × 10^4 GeV) ( N/10)^-1/4( θ_qcd^eff./10^-10)^1/4( f/10^11 GeV)^-1/6. We therefore find that the relaxation mechanism can indeed explain in a technically natural fashion a large hierarchy between the UV cutoff of the composite theory and the weak scale; that is, we have provided a realization of the picture advocated from the outset in which compositeness addresses the big hierarchy problem while the relaxion explains the little hierarchy.
We also remark here that the parameter point discussed earlier, ∼ 80 TeV, does not obviously run afoul of any of the self-consistency conditions when the effective QCD θ-angle is appropriately small. Additionally, (<ref>)s]slopeMatchingCartoon and (<ref>) together provide the most stringent available upper bound on the scale of inflation: V_I^1/4≲(√(3)^4/f)^1/6^1/2 = ( 6 × 10^6 GeV) ( f/10^11 GeV)^-1/6, while (<ref>)s]slopeMatchingCartoon and (<ref>) together provide a lower bound on the scale of inflation: V_I^1/4≳( F/f)^1/4 = ( 6 × 10^6 GeV) ( f/10^11 GeV)^-1/4( F/8 × 10^41 GeV)^1/4. The number of e-folds is bounded by (<ref>)s]domination and (<ref>): N_e ≳( F/)^2= ( 1 × 10^47) ( F/8 × 10^41 GeV)^2. As in the original relaxion model <cit.>, these results indicate that an extremely long period of low-scale inflation, and an ultra-trans-Planckian relaxion field excursion are required to make the model viable.Beginning in constituent_model, we devote significant effort to the presentation and detailed analysis of a concrete model that realizes the mechanism which we have just described schematically.§.§ Clockwork MechanismAchieving the hierarchy F≫ f required for the viability of our model requires further model building.One possibility is the `clockwork' mechanism of [s]Choi:2015fiu,Kaplan:2015fuy, which we will briefly review here as it applies to our model;see [s]Kim:2004rp,Choi:2014rja,delaFuente:2014aca for earlier work in the context of inflation, and [s]Giudice:2016yja,Kehagias:2016kzt,Farina:2016tgd,Ahmed:2016viu,Hambye:2016qkf,Craig:2017cda for further recent theoretical and phenomenological investigations of the clockwork. The clockwork mechanism postulates the existence of (M+1) complex scalar fields φ_j (j=0,1,…,M) interacting via the Lagrangian ℒ ⊃∑_j=0^M ( |∂_μφ_j|^2 + μ^2| φ_j |^2 - λ|φ_j|^4 ) + ϵ∑_j=0^M-1( φ_j^†φ^3_j+1 + h.c.), where ϵ≪λ.If ϵ = 0, this Lagrangian would exhibit a global [U(1)]^M+1 symmetry; all (M+1) of these global symmetries are spontaneously broken such that φ_j ≡1/√(2) (f_0 + ρ_j ) exp[ - i π_j / f_0 ] with f_0≡μ^2/λ, giving rise to (M+1) NGBs, π_j. When ϵ≠ 0, M of the U(1) symmetries are additionally explicitly broken, which masses-up M of the NGBs.The residual unbroken global U(1) has an interesting pattern of charges, with φ_j having charge q_j = 3^-j; the corresponding massless NGB is taken to be the relaxion. The radial modes have masses m_ρ_j≈√(2λ) f_0.In order to obtain couplings of the relaxion to both QCD and the strong TC dynamics, we assume a KSVZ-type <cit.> axion model, with vector-like fermions charged under QCD coupled to the field φ_K (where K is an integer obeying 0≤ K ≪ M), and fermions charged under the strong TC dynamics coupled to the field φ_M. These fermions will obtain masses m_K,M = y_K,M f_0 / √(2), where y_K,M are the Yukawa couplings of the φ_K,M fields to the fermions. Integrating out these fermions in the usual way generates the usual axion couplings of the π_K and π_M fields to, respectively, the QCD and TC field strength tensors.Ref. <cit.> supplied the general procedure for the diagonalization of the tri-diagonal mass matrix for the π_j that arises from the explicit breaking terms in (<ref>); this procedure was elaborated on in detail in [s]Giudice:2016yja,Farina:2016tgd.
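The essential output of that diagonalization is easy to preview numerically. The sketch below (a Python illustration with a short chain, M = 8, chosen only for readability; it is not code from the cited references) builds the quadratic pNGB mass matrix generated by the ε φ_j†φ_{j+1}³ couplings, which at quadratic order is proportional to Σ_j (π_j - 3π_{j+1})², diagonalizes it, and shows that the unique massless eigenvector has site profile ∝ 3^{-j}. Its normalized overlap with the last site is therefore suppressed by roughly (2√2/3)·3^{-M}, which is the statement that the zero mode couples to a field strength attached to site M (or K) with an exponentially enhanced effective decay constant.

    import numpy as np

    M, q = 8, 3
    K = np.zeros((M+1, M+1))                       # quadratic form  sum_j (pi_j - q pi_{j+1})^2
    for j in range(M):                             # (overall epsilon f_0^2 prefactor dropped)
        v = np.zeros(M+1); v[j], v[j+1] = 1.0, -q
        K += np.outer(v, v)

    w, U = np.linalg.eigh(K)
    zm = U[:, 0]/U[0, 0]                           # massless eigenvector, first component set to 1
    print("smallest eigenvalue :", w[0])           # ~ 1e-15, i.e. an exact zero mode
    print("zero-mode profile   :", np.round(zm, 6))
    print("expected 3^-j       :", [round(q**(-j), 6) for j in range(M+1)])

    norm = np.linalg.norm([q**(-j) for j in range(M+1)])
    print("site-M overlap of unit-norm zero mode = %.3e  vs  3^-M = %.3e" % (q**(-M)/norm, q**(-M)))
    print("norm -> 3/(2*sqrt(2)) = %.4f at large M" % (3/(2*np.sqrt(2))))
    # the massless mode therefore couples at site M (site K) with an effective decay constant
    # ~ (3/2sqrt2) 3^M f_0  ( (3/2sqrt2) 3^K f_0 ), i.e. the F and f quoted in the text below

The factor 3/(2√2) ≈ 1.06 reproduced by the norm is the same O(1) normalization that appears in the decay constants quoted next.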
Letting the massless NGB be the relaxion ϕ, and calling the other mass-eigenstate pNGB fields a_n (these fields are sometimes called the `gears' of the clockwork mechanism), it is straightforward to show that the Lagrangian for the (p)NGBs after the vector-like fermions are integrated out can be expressed in terms of the mass-eigenstate fields as ℒ ⊃1/2 (∂_μϕ )^2 +(g_s)^2/16π^2[ ϕ/f - θ_qcd^0 ]G_μνG^μν +(g_tc)^2/16π^2ϕ/FG_tc μνG^μν_tc+ ∑_n=1^M [ [ 1/2 (∂_μ a_n )^2 - 1/2m_n^2 a_n^2 +(g_s)^2/16π^2a_n/f_K^(n) G_μνG^μν +(g_tc)^2/16π^2a_n/f_M^(n)G_tc μνG^μν_tc ]], where m_n are pNGB masses that obey 2√(ϵ)f_0 ≤ m_n ≤ 4√(ϵ) f_0; f_j^(n) are decay constants which are generically roughly of the order √( (M+1)/2 ) f_0; and, crucially, f≡3/2√(2)3^K f_0 ≈ 3^K f_0 and F≡3/2√(2)3^M f_0≈ 3^M f_0. With moderate K and M, this exponential enhancement of the decay constants makes it straightforward to engineer large hierarchies: F≫ f ≫ the weak scale.Two scenarios suggest themselves: (a) f_0 ∼ f [i.e., K=0 and M≫1], and (b) f_0∼ a few (tens of) TeV ≪ f [i.e., 1≪ K≪ M].Scenario (a): f_0 ∼ f. To obtain F ∼γ_i × (6.6× 10^31)GeV∼ 6.6× 10^41 GeV for γ_i ∼ 10^10 with f∼10^11 GeV, we need M ∼ 65. The only light field is the relaxion, while the radial modes and gears have masses of order √(2λ) f and 3√(ϵ) f, respectively.The radial modes are thus heavy and, provided ϵ is not exponentially small, so too are the gears. Additionally, the vector-like fermions charged under QCD and the strong TC dynamics—which were integrated out to give rise to the dimension-5 axion couplings—have masses on the order of y f/√(2); assuming that y ∼𝒪(1) (or at least not exponentially small), these too are unobservably heavy. While the mechanism thus allows for the exponential scale separation F≫ f ≫ the weak scale, there are no additional experimental signatures which are accessible at any current or proposed collider, as all the additional new states are extremely massive. Scenario (b): f_0∼ a few (tens of) TeV ≪ f. Taking F ∼ 6.6× 10^41 GeV, f∼10^11 GeV, and f_0 ∼ 10 TeV, we find that we need K ∼ 15 and M ∼ 79.The radial modes and gears have masses of order √(2λ) f_0 and 3√(ϵ) f_0, respectively. While the radial modes masses are thus around a few (tens of) TeV since λ∼ 1, the gears could easily have masses below a TeV for reasonably small ϵ. Moreover, the couplings of the gears and radial modes to the QCD and TC field strength tensors have `decay constants' of order ∼ 3f_0 and ∼ 6f_0, respectively. Additionally, the colored and TC-charged vector-like fermions that were integrated out to give rise to the dimension-5 axion couplings would also have masses on the order of yf_0/√(2). Therefore, in this scenario we expect that additional experimental signatures would be accessible at current or future colliders: the gears and radial modes, and the colored fermions from the KSVZ mechanism, could all presumably be produced strongly if they are light enough. § CONSTITUENT MODELWe begin our concrete model construction by specifying in detail the underlying constituent UV model for the technicolor dynamics. This underlying constituent model can be broken into three distinct components: (a) the CH sector, (b) the terms which give rise to the CH–SM Yukawa couplings, and (c) the relaxion sector. 
We discuss each of these components in turn in [s]CH-sector–<ref>.§.§ Composite Higgs SectorOur CH model is constructed to exhibit the global (TC-flavor) symmetry breaking pattern × U(1)_V→ SU(3)_V× U(1)_V→ SU(2)× U(1)× U(1), where the first step arises from spontaneous chiral symmetry breaking, and the second arises due to the addition of explicit breaking terms (technifermion masses, Yukawas, and gauge couplings). A remaining global SU(2)× U(1) is gauged and identified as the SM electroweak gauge group, ×. The gauge group which will confine to give rise to the composite Higgs state is assumed to be an SU(N) group under which all SM field content transforms as singlets.[ See, e.g., [s]Katz:2003sn,Galloway:2010bp,Barnard:2013zea,Ferretti:2013kya,Cacciapaglia:2014uja,Vecchi:2015fma,Ma:2015gra,Sannino:2016sfx,Galloway:2016fuo for some recent studies of UV-complete composite Higgs models with strong TC dynamics. ] The SM is augmented by three (Dirac) technifermions which transform in the fundamental of this gauged TC group, hereinafter referred to as . In order to allow for the existence of a bound state of the technifermions with the correct SM quantum numbers to be interpreted as the Higgs, we demand that the three (Dirac) technifermion species consist of andoublet 𝕃, and an -singlet ℕ; additionally, we demand that 𝕃 carry SMhypercharge of +1/2, while ℕ is taken to be neutral under . For the remainder of this paper, we will write all fermion fields as left-handed[ That is, transforming under the (1/2,0) representation of the Lorentz group <cit.>. ] two-component Weyl spinors;[ We generally follow the notational conventions of Dreiner:2008tw. Explicitly, in the Weyl basis, a Dirac fermion 𝔽 can be expressed in terms of the two-component Weyl fermions F and F^c as 𝔽 = [ F_α; [ ( F^c )^†]^α̇ ], where α is a (1/2,0) Lorentz spinor index, and α̇ is a (0,1/2) Lorentz spinor index. ] our naming conventions for the two-component Weyl fermions, and the matter-field gauge charges, are given in gauge_charges. Note that given those charges, LN^c ∼ ( 1 , 1 , 2 )_+1/2, where contraction of the TC fundamental and anti-fundamental indices on L and N^c, respectively, is understood. As these are the quantum numbers of the SM Higgs, the requisite composite Higgs state will be formed after confinement.Introducing the N additional hypercharged -doublets L and L^c will modify the running of the couplings g_1 and g_2. If N is taken too large, there is a possibility that Landau poles in these couplings may occur below the Planck scale. A simple check indicates that with N≲ 14, no such poles should appear in either coupling.In order to impose the globalTC-flavor symmetry in our Lagrangian and make it manifest, we arrange L and N into a (3,1) TC-flavor multiplet χ, and we arrange L^c and N^c into a (1,3̅) TC-flavor multiplet χ^c: χ ≡[ L; N ] andχ^c≡[ L^c; N^c ]. If U_L≡exp[ i α_L^a T^a ] and U_R≡exp[ i α_R^a T^a ] are, respectively, forwardandtransformations, we then have χ→ U_Lχ and χ^c →χ^c U_R^†. 
Note also that the χ and χ^c transform in the fundamental and anti-fundamental, respectively, of the gaugedgroup, given the representations assigned in gauge_charges.Thus far, the TC-gauge-coupling and kinetic terms for the χ and χ^c fields can be written in manifestly TC-flavor andinvariant fashion as ⊃ i χ^†( σ̅· D ) χ + i χ^c ( σ· D ) ( χ^c )^†, where D_μ{χ , (χ^c)^†} ⊃( ∂_μ - i g_TC A_μ^TC) {χ , (χ^c)^†}, where we have suppressed all indices, and have written the matrix-valued TC gauge field A_μ^TC≡ (A_μ^TC)^n( T_TC)^n, where n=1,…,(N^2-1), with T_TC the fundamental-representation generators for thegroup.Gauging thesubgroup of the TC-flavor group is straightforward. We assume that the matrix representatives of the generators T^a of the SU(3)_L,R transformations are given by T^a = 1/2λ^a, where λ^a are the usual Gell-Mann matrices <cit.> (the Dynkin index is 1/2).[ We have suppressed TC-flavor indices here and have just written T^a as the generators for either simple SU(3)_L,R factor in the TC-flavor group. At the risk of being pedantic, we should really write separate generators T^a_L and T^a_R, and be consistent in the usage of each throughout. The matrix representatives of these generators are numerically equal as matrices, but as generators they are distinct objects as they carry different types of indices. ] In this convention, λ^ã = [ σ^ã 0; 0 0 ] where (here, and throughout) ã = 1,2,3, and σ^ã are the usual Pauli matrices. Therefore,is gauged by simply adding to the covariant derivative D_μ the term D_μ{χ,(χ^c)^†}⊃ -i g_2 W_μ^ã T^ã{χ,(χ^c)^†}, and demanding that under a forwardgauge transform parametrized by α^ã we have{χ , (χ^c)^†} → U_V {χ , (χ^c)^†} and W_μ → U_V W_μ U_V^† + i/g_2 U_V ∂_μ U_V^†, where U_V≡exp[ i α^ã T^ã]and W_μ ≡ W_μ^ã T^ã.Gaugingis marginally more complicated.Recall that [T^8,T^ã] = 0 for the SU(3) generators as defined above; this may lead one to conclude that T^8 is thegenerator because it commutes with thegenerators. However, the matrix representative of T^8 is T^8 = 1/2λ^8= 12√(3)[1 -0 -0;0 -1 -0;0 -0 -2 ]; since the 3-3 component of this representative is non-vanishing, T^8 will act non-trivially on the N component of χ. But this cannot then be the SMhypercharge generator since N is uncharged under U(1)_Y. This problem is easily remedied by noting that the true global symmetry of (<ref>) is U(3)_L× U(3)_R = × U(1)_V× U(1)_A at the classical level; while it is well known that the U(1)_A is anomalous <cit.>, the U(1)_V symmetry remains good at the quantum level. This additional U(1)_V has a generator equal to the identity on TC-flavor space: T^X≡1_3, which obviously commutes with all the T^a.We can thus form the hypercharge generator Y≡1/√(3) T^8+ 1muT^X = [ (+ 1/6 )1mu 1_ 2 × 20;0- 1/3 ], whereis chosen appropriately to the field on which Y acts. For the χ and (χ^c)^† fields, we need [χ]=[ (χ^c)^†]= 1/3 to obtain the hypercharge assignments for L^(c) and N^(c) in gauge_charges. We then gaugeby adding to the covariant derivative D the term D_μ{χ,(χ^c)^†}⊃ -i g_1 B_μ Y {χ,(χ^c)^†}, and demanding that under a forwardtransformation parametrized by α, we have {χ , (χ^c)^†} →exp[ i α Y ] {χ , (χ^c)^†} and B_μ → B_μ + 1/g_1∂_μα.We can also include mass terms for the χ and χ^c fields by including the explicit -breaking terms ⊃ - χ^c M χ + h.c. 
=-M χχ^c + h.c., where ⋯ is over the TC-flavor indices, and where M is a mass matrix in TC-flavor space which in the mass-eigenbasis must respect the residual global SU(2)× U(1) symmetry in order not to explicitly break the electroweak gauging: M = [ m_L 1_2× 20;0m_N ]. Note that while M explicitly breaks → SU(2)_V× U(1)_V,[ At least if m_L ≠ m_N; if m_L = m_N, it only breaks → SU(3)_V. ] if M is given the usual spurionic transformation M → U_R M U_L^†, then (<ref>) is a spurionic TC-flavor invariant. To summarize, the underlying constituent theory for the CH sector is specified by the Lagrangian terms ⊃ i χ̅^†( σ̅· D ) χ + i χ^c ( σ· D ) ( χ^c )^† - M χχ^c + h.c., where D_μ{χ , (χ^c)^†} = (∂_μ - i g_TC A_μ^TC-i g_2 W_μ^ã T^ã -i g_1 B_μ Y ) {χ , (χ^c)^†}. We return to the low-energy description of this sector by means of the Chiral Lagrangian in chiral_lag.§.§ Contact Interactions that lead to YukawasIn order for the low-energy Chiral Lagrangian describing the CH model to exhibit the correct Higgs Yukawa couplings to the SM quarks, we must write couplings of the technifermions to the SM quarks in the constituent theory. In this work we simply write down the required four-fermion operators coupling the SM fermions to the techniquark bilinear condensate, remaining agnostic about their underlying UV origin. Various mechanisms exist to generate such couplings, such as extended technicolor <cit.>, partial compositeness <cit.>, or bosonic technicolor <cit.>; exploring their detailed consequences in this context goes beyond the scope of this work.In order to write these couplings in a fashion which will allow spurionic TC-flavor symmetries to be made manifest, we arrange the SM field content into incomplete TC-flavor multiplets. There are of course multiple ways to do this.We assign U^c to an incomplete (1,3) of TC-flavor which we denote U^c_3_R, and we assign D^c to an incomplete (3̅,1) of TC-flavor which we denote D^c_3̅_L:U^c_3_R ≡[ 0; 0; U^c ] and D^c_3̅_L ≡[ 0; 0; D^c ]. The spurionictransformations of these incomplete multiplets are U^c_3_R→ U_R U^c_3_R and (D^c_3̅_L)^†→ U_L (D^c_3̅_L)^†. In order to obtain the correct hypercharge assignments for U^c and D^c under the gauging prescription developed above, we set [U^c_3_R] = - 1/3 and [(D^c_3̅_L)^†] = 0.In close analogy to the requirement in the SM to use the two opposite-hypercharge -fundamental fields H and H (following the notation of Schwartz:2014qft) to write the SM Yukawas, we will need to embed the quark doublets in incomplete multiplets in two different ways. If Q is in the fundamental 2 of , then the field Q≡ i σ^2 Q (in components, Q^i ≡ϵ^ij Q_j where ϵ is the anti-symmetric invariant symbol of SU(2) with ϵ^12 = +1) is in the conjugate[ Since the spinorial 2 of SU(2) is a pseudo-real representation <cit.>, it is unitarily equivalent to the conjugate 2̅ representation. There is thus no distinction between 2 and 2̅; nevertheless, we maintain the bar to match up with the notation for the 3 and 3̅ representations of SU(3) into which the fermions are embedded. ] anti-fundamental 2̅ of . We will embed Q in an incomplete (1,3) of TC-flavor which we denote Q_3_R, and embed Q in an incomplete (3̅,1) of TC-flavor which we denote Q_3̅_L: Q_3_R ≡[ Q; 0 ] and Q_3̅_L ≡[ Q; 0 ]. 
The spurionictransformations of these incomplete multiplets are thus Q_3_R→ U_R Q_3_R and (Q_3̅_L)^†→ U_L ( Q_3̅_L)^†; note also that the action of the gaugedsubgroup of TC-flavor automatically gives the correctaction on the quark doublets given these embeddings since we have placed the fundamental 2 ofin the fundamental 3 of , and the anti-fundamental<ref> 2̅ ofin the anti-fundamental 3̅ of . In order to obtain the correct hypercharge assignment for Q under the gauging prescription developed above, we set [Q_3_R] = 0 and [ (Q_3̅_L)^†] = -1/3.Armed with these embeddings of the SM quarks, and noting that χχ^c ∼ (3,3̅) under , there are only two independent four-fermion spurionic TC-flavor invariants with zero net Q_X charge which can be formed which couple χχ^c to SM quarks: D^c_3̅_L( χχ^c ) Q_3_R and Q_3̅_L( χχ^c ) U^c_3_R , along with their Hermitian conjugates. By construction, the spurionic TC-flavor invariance (and net zero Q_X charge) of these terms implies actual invariance under the gauged electroweak subgroup of the TC-flavor group.We thus add the following contact terms to the Lagrangian[ To be explicit, the full index structure here is⊃ + (Y_d)_p^q/Λ_y^2[D^c_3̅_L]^α,j̅,a,p[ χ]_j̅,m^β[ χ^c ]_β^î,m[ Q_3_R]_α,î,a,q + h.c.+ (Y_u)_p^q/Λ_y^2[ Q_3̅_L]_a,q^α,j̅[ χ]^β_j̅,m[ χ^c ]^î,m_β[ U^c_3_R]^a,p_α,î + h.c. , where α, β=1,2 are (1/2,0) spinorial Lorentz indices, a=1,2,3 is ancolor index, p,q=1,2,3 are SM generation indices, j̅=1,2,3 is anindex, ĵ=1,2,3 is anindex, and m=1,…,N is anindex. Per the conventions of Dreiner:2008tw, for all indices other than the spinorial Lorentz indices, a lowered position denotes a fundamental index and a raised position denotes an anti-fundamental index. Repeated indices are obviously summed. ] ⊃ + (Y_d)_p^q/Λ_y^2[D^c_3̅_L]^p ( χχ^c ) [ Q_3_R]_q + h.c. + (Y_u)_p^q/Λ_y^2[ Q_3̅_L]_q( χχ^c ) [ U^c_3_R]^p + h.c. , where Λ_y is a dimensionful scale that will be fixed later to obtain canonically normalized Yukawas in the Chiral Lagrangian, and Y_u,d are Yukawa matrices in SM-generation space, which we have indexed explicitly: p,q=1,2,3.While (<ref>) is useful in that it displays manifest spurionic TC-flavor invariance, an alternative form not expressed in terms of incomplete TC-flavor multiplets is more useful for computational use. To write this alternative form, we define projection operators [ P_k ]_î^j̅ ≡δ_î^3 δ^j̅_kand[ P̃^k ]_î^j̅≡δ_î^k δ^j̅_3, where δ is the Kronecker-δ symbol and where the raised j̅=1,2,3 is ananti-fundamental index, the lowered î=1,2,3 is anfundamental index, and the lowered (raised) k=1,2 is an (anti-)fundamentalindex. Then (<ref>) can be alternatively written as ⊃ + 1/Λ_y^2(D^c Y_d Q_j ) P̃^j ( χχ^c )+ h.c. - 1/Λ_y^2( U^c Y_u Q_i ) ϵ^ij P_j ( χχ^c )+ h.c., where ⋯ is a trace overindices, ϵ is as before the antisymmetric invariant symbol of SU(2), and we have suppressed the implied SM-generation indices in this form. 
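As a quick cross-check of the hypercharge bookkeeping entering these operators, the short numerical sketch below (our own; the variable names are not from the text, and we write X for the extra U(1)_V charge) evaluates Y = T^8/√3 + X 1_3 on a TC-flavor triplet for the X-charges quoted above, and confirms that the embedded SM fields carry their usual hypercharges; recall that for the daggered, anti-fundamental embeddings the physical field carries minus the eigenvalue found here.

import numpy as np

# Hypercharge generator Y = T^8 / sqrt(3) + X * 1_3 on a TC-flavor triplet, with X the
# extra U(1)_V charge quoted in the text for each (incomplete) multiplet.
T8 = np.diag([1.0, 1.0, -2.0]) / (2.0 * np.sqrt(3.0))

def Y_eigenvalues(X):
    return np.diag(T8 / np.sqrt(3.0) + X * np.eye(3))

print("chi,        X = +1/3 :", Y_eigenvalues(+1/3))  # (L, L, N) -> (+1/2, +1/2, 0)
print("U^c_{3_R},  X = -1/3 :", Y_eigenvalues(-1/3))  # 3rd slot: -2/3 = Y(U^c)
print("Q_{3_R},    X =  0   :", Y_eigenvalues(0.0))   # 1st/2nd slots: +1/6 for the quark doublet
# Daggered (anti-fundamental) embeddings: the physical field carries minus these values.
print("(D^c_{3bar_L})^dag, X =  0   :", Y_eigenvalues(0.0))   # 3rd slot: -1/3 -> Y(D^c) = +1/3
print("(Q_{3bar_L})^dag,   X = -1/3 :", Y_eigenvalues(-1/3))  # 1st/2nd: -1/6 -> embedded doublet has +1/6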
Although the SM quark fields are no longer in (incomplete) multiplets of TC-flavor, we can maintain a spurionic invariance of this expression underby assigning the spurionic transformation rule P → U_R P U_L^†, where P∈{ P_j , P̃^j }.In the low-energy Chiral Lagrangian, the terms in (<ref>) will give rise to, inter alia, the requisite quark Yukawa terms.§.§ Relaxion SectorThe relaxion sector of the theory consists of a real pseudoscalar field ϕ, the relaxion <cit.>, which we take to have dimension-5 effective axion couplings both to QCD and to thegauge group (see clockwork); additionally, we allow for the existence of an additional potential V_ϕ(ϕ) for the relaxion field: ⊃1/2 (∂_μϕ)^2 +(g_s)^2/16π^2[ ϕ/f - θ_qcd^0 ]G_μνG^μν +(g_tc)^2/16π^2ϕ/FG_tc μνG^μν_tc - V_ϕ(ϕ), where G and G_tc are, respectively, the matrix-valued gauge field strength tensors for theandgauge groups, G_(tc) μν≡1/2ϵ_μναβ G_(tc)^αβ are the respective dual field strength tensors, and g_s and g_tc are the respective gauge couplings. In (<ref>), θ_qcd^0 is a bare QCD θ-term; we do not write an analogous independent θ^0_tc angle, as it could be absorbed into an unobservable shift of ϕ and θ_qcd^0. Additionally, f is the usual QCD Peccei–Quinn (PQ) symmetry breaking scale, and F is a PQ symmetry breaking scale which, as we mentioned in cartoon, needs to be taken exponentially larger than f: F≫ f. Indeed, as we discussed in cartoon and as we will find in more detail in summary_numerical, F will be required to be many orders of magnitude larger than the Planck scale; see clockwork for further discussion. The additional potential V_ϕ(ϕ) is required to obtain the correct dynamical rolling of the relaxion field and will be discussed in more detail in relaxion_potential.§.§ Chiral RotationsBefore constructing the Chiral Lagrangian, we perform a U(1)_A chiral rotation of the χ,χ^c fields to rotate the –relaxion coupling into the mass matrix: χ → e^i ϕ / 6F χ χ^c → e^i ϕ / 6F χ^c . Following the method of Fujikawa <cit.> to include the anomalous <cit.> transformation of the measure of the functional integral under this transformation, we find that this rotation results in the following form for the Lagrangian: =L_sm,H=0 +1/2 (∂_μϕ)^2 - V_ϕ(ϕ) + (g_s)^2/16π^2[ ϕ/f - θ_qcd^0 ]G^μνG_μν + i χ^†σ̅^μD_μχ + i χ^cσ^μ D_μ (χ^c)^† -M ( χχ^c)e^i ϕ/3F + h.c. +1 /Λ_y^2( D^c Y_d Q_j ) P^j(χχ^c) e^i ϕ/3F + h.c. -1 /Λ_y^2( U^c Y_u Q_i ) ϵ^ij P_j( χχ^c )e^i ϕ/3F+ h.c. - ∂_μϕ/6F (J_A^0)^μ - ϕ/F[ (g_2)^216π^2N/3 W_μνW^μν + (g_1)^216π^2N/31/2B_μνB^μν], where W_μν are the matrix-valuedfield strength tensors, B_μν is thehypercharge field strength tensor, and we have also now included the non-Higgs part of the SM Lagrangian, which we denote _sm,H=0 (by which notation we mean the usual SM Lagrangian with the elementary Higgs doublet H set equal to zero); (J_A^0)^μ≡[χ^†σ̅^μχ - χ^c σ^μ (χ^c)^†] is the U(1)_A axial current; and the covariant derivative is still given by (<ref>).As a final step before we pass to the Chiral Lagrangian, we rotate to the mass-eigenstate basis for the SM quarks via the usual CKM manipulations <cit.>.Following the exposition of Schwartz:2014qft, and letting X∈{U,D} for the remainder in this paragraph (with X always to be read consistently as either U or D in every formula), it is always possible to write Y_x≡ P_x Q_x y_x Q_x^† where P_x and Q_x are unitary matrices in SM-generation space and y_x are diagonal matrices whose entries are the real positive square roots of the eigenvalues of the Hermitian matrix Y_xY_x^†. 
We define V_ckm≡ Q_u^† Q_d. The necessary chiral quark rotation to bring the fields to the mass-eigenstate basis is then given by X^c → X^c Q_x^† P_x^† and X → Q_x X, which shifts θ^0_qcd→θ^0_qcd -Y_uY_d≡θ_qcd. The final result on the Lagrangian is L=L_sm,H=0, no quarks + i U^†σ̅^μD̂_μ U + i D^†σ̅^μD̂_μ D + i (D^c) σ^μD̂_μ (D^c)^† + i (U^c) σ^μD̂_μ (U^c)^† + g_2/2 W_μ^3 ( U^†σ̅^μ U - D^†σ̅^μ D ) + g_2/√(2)( W_μ^+ U^†σ̅^μ V_ckm D + W_μ^- D^†σ̅^μ V_ckm^† U ) +1/2 (∂_μϕ)^2 - V_ϕ(ϕ) + (g_s)^2/16π^2[ ϕ/f - θ_qcd]G^μνG̃_μν + i χ^†σ̅^μD_μχ + i χ^cσ^μ D_μ (χ^c)^† -M ( χχ^c)e^i ϕ/3F + h.c. +1 /Λ_y^2( D^c y_D D ) P^2(χχ^c) e^i ϕ/3F + h.c. -1 /Λ_y^2( U^c y_U U )P_2 ( χχ^c )e^i ϕ/3F+ h.c. +1 /Λ_y^2( D^c y_D V_ckm^† U ) P^1 (χχ^c) e^i ϕ/3F + h.c. +1 /Λ_y^2( U^c y_U V_ckm D ) P_1 ( χχ^c )e^i ϕ/3F + h.c. - ∂_μϕ/6F (J_A^0)^μ -ϕ/F[ (g_2)^216π^2N/3 W_μνW^μν + (g_1)^216π^2N/31/2B_μνB^μν], where D̂_μ≡∂_μ - i g_s G_μ - i g_1 B_μ Y (we have explicitly extracted thegauge couplings to the quarks) with G_μ the matrix-valuedgauge fields, and B_μ and Y thehypercharge field and hypercharge operator respectively; W_μ^±≡( W_μ^1 ∓ i W_μ^2 )/√(2); and the notation `no quarks' on the SM part of the Lagrangian indicates that we have explicitly extracted and displayed all the quark-dependent terms. § CHIRAL LAGRANGIANTo proceed with the analysis of our model, (<ref>), we must now pass to the theory of the bound states of χ and χ^c fermions after thegroup confines. We therefore construct the Chiral Lagrangian (see, e.g., Georgi:1984wem) based on the global × U(1)_V (spurionic) TC-flavor symmetry exhibited by (<ref>). On confinement, × U(1)_V→ SU(3)_V× U(1)_V owing to the spontaneous emergence of a chiral condensate ⟨χχ^c ⟩∼ - F_π^21_3 (naïve dimensional analysis [NDA] estimate <cit.>).The Chiral Lagrangian is therefore the theory of the eight `technipions' ^a—the pseudo-Nambu–Goldstone bosons of the spontaneously broken global SU(3)_A symmetry. We assume that the excitation associated with the anomalous U(1)_A symmetry, η_tc', is massive enough to have been integrated out of the theory.§.§ Matrix-Valued Technipion Field UThe fundamental object in the construction of the Chiral Lagrangian is the matrix-valued field of the technipions, , which is assumed to transform in a (3,3̅) of theTC-flavor group:[ See SU3exponentiation for the general, closed-form expression for the matrixin terms of the ^a fields. ] ≡exp[ 2i/F_πΠ]whereΠ ≡^a T^a, with T^a the SU(3) generators, and where F_π is the dimensionful compositeness scale. We define η_tc≡^8, i H^+ ≡( ^4 - i ^5 )/√(2) and i H^0 ≡( ^6 - i ^7 )/√(2); then Π is explicitly given by Π = 1/2( [ 1/√(3)η_tc1_2 + Πi √(2) H;-i√(2) H^† -2/√(3)η_tc ]),where Π̃ = ( [^3 ^1 - i ^2;^1+i^2- ^3 ])and H≡[ H^+; H^0 ]. For reasons to become clear shortly, we will rewrite (<ref>) as Π̃ =V_ξ^†( [ ^0 √(2)^+; √(2)^- - ^0 ]) V_ξ and H≡1/√(2) V_ξ^†[ 0; h ], where V_ξ≡exp[ i ξ^ã(x) τ^ã]. Under SU(3)_V transformations,transforms as → V_3 V_3^†, which implies that Π→ V_3 Π V_3^†; in particular, since an SU(2) subgroup of SU(3)_V is gauged as , angauge transformation V acts with V_3≡[ V 0; 0 1 ] where V≡exp[ i α^ã(x) τ^ã]; alternatively, Π̃ → V Π̃ V^†, H→ V H, andη_tc →η_tc. We will work in a gauge where the ξ^ã in (<ref>) are gauged away [i.e., a local gauge transformation with α^ã = ξ^ã is made to (<ref>)]. That is, (<ref>)s]Udefn, (<ref>) and (<ref>) define the matrix-valued fieldin terms of its five physical degrees of freedom ^0,^±,η_tc, and h provided we simply replace V_ξ^(†)→1_2 in (<ref>). 
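It is also useful to record the electromagnetic charges of the entries of Π. Acting on the TC-flavor fundamental, Q_EM = T^3 + Y = diag(1,0,0) given the hypercharge assignment for χ above, and it acts on the adjoint-valued Π by commutation, so the (a,b) entry of Π carries charge Q_a - Q_b. The small sketch below (ours; the field labels are our own shorthand, following the block form of Π given above) simply tabulates these charges, confirming that the H^± and charged-technipion entries carry charge ±1 while the remaining fields are electromagnetically neutral.

import numpy as np

# EM charge generator on the TC-flavor fundamental: Q_EM = T^3 + Y = diag(1, 0, 0) for
# chi = (L, L, N); entry (a, b) of the adjoint-valued Pi then carries charge Q_a - Q_b.
Q = np.array([1.0, 0.0, 0.0])
labels = [["pi0 / eta", "pi+",       "H+"  ],
          ["pi-",       "pi0 / eta", "H0"  ],
          ["H-",        "(H0)*",     "eta" ]]

charges = np.subtract.outer(Q, Q)          # charges[a, b] = Q_a - Q_b
for a in range(3):
    for b in range(3):
        print(f"Pi[{a},{b}] ~ {labels[a][b]:10s} EM charge {charges[a, b]:+.0f}")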
After this gauge choice is made, it will turn out to be most convenient to also make the following SO(2) rotation in field space [ ;] ≡[ cosϑ sinϑ; - sinϑ cosϑ ][ ^0; η_tc ] withϑ = π/6 , and work instead with the five degrees of freedom ,,^±, and h.§.§ Chiral Lagrangian A careful analysis of the spurionic symmetries of (<ref>), and the anomalies in the various axial currents, yields the Chiral Lagrangian corresponding to (<ref>), at leading order in spurions and momenta:L=L_sm,H=0+1/2 (∂_μϕ)^2 - V_ϕ(ϕ) + (g_s)^2/16π^2[ ϕ/f - θ_qcd]G^μνG̃_μν +F_π^2/4 (D_μ )^† (D^μ)+ c_mF_π^2Me^i ϕ /3F+ h.c.- F_π/√(2)( D^c y_D D ) P^2 e^i ϕ /3F + h.c. + F_π/√(2)( U^c y_U U )P_2e^i ϕ /3F + h.c. - F_π/√(2)( D^c y_D V_ckm^† U ) P^1e^i ϕ /3F + h.c. - F_π/√(2)( U^c y_U V_ckm D ) P_1e^i ϕ /3F + h.c. - N/8π^2[[ (g_2)^22Π_ϕ[ 1_2 0; 0 0 ] W_μνW^μν + (g_1)^2 Π_ϕY^2B_μνB^μν; + 2g_1g_2 Π_ϕY T^b̃ B_μνW^b̃ μν ]] , where Π_ϕ≡ - i/2ln +( ϕ / 6F ) 1_3; D_μ≡∂_μ - i [ v_μ , ] with v_μ≡ g_1 B_μ Y + g_2 W_μ^ã T^ã; ≈ 4π F_π / √(N) is the cutoff scale for this effective description; c_m is a perturbatively incalculable 𝒪(1) constant; we have absorbed the quark kinetic and gauge-coupling terms which were explicitly displayed in (<ref>) back into L_sm,H=0; and we have demanded canonically normalized Yukawa terms, which, up to a perturbatively incalculable constant, suggests Λ_y^2 ∼√(2) F_π≈ (√(2N)/4π) ·^2. While this last relation would seem to indicate all flavor structures are generated at the same scale, we emphasize that this is not necessarily the case—different structures could be generated with a hierarchy of scales and/or Wilson coefficients, depending on the mechanism underlying flavor. §.§ YukawasIt is straightforward to show that⊃F_π/√(2)( U^c y_U U )P_2e^i ϕ /3F+ h.c.⊃ -1/√(2)( U^c y_U U ) hexp[ i ϕ/3F - i /√(3) F_π]( /F_π) + h.c. , and ⊃ - F_π/√(2)( D^c y_D D ) P^2 e^i ϕ /3F + h.c.⊃ -1/√(2)( D^c y_D D ) hexp[ i ϕ/3F - i /√(3) F_π]( /F_π) + h.c., where≡√( h^2 + ^2 ) and2mu x≡sin x/x , and where these results are correct to all orders in F_π for the uncharged fields, but we have ignored any (interaction) terms involving the electromagnetically charged (EM-charged) states ∼^+ ^-.The other types of couplings of thefield to the SM quarks are (minimally) three-point interactions (after the h gets a vev) involving the EM-charged technipions ^±: ⊃ - F_π/√(2)( D^c y_D V_ckm^† U )P^1e^i ϕ /3F+h.c.⊃- i/2( D^c y_D V_ckm^† U )h ^- /F_πexp[ i ϕ/3F - i /√(3) F_π] [ 1 + 𝒪(F_π^-1) ] +h.c. , ⊃ - F_π/√(2)( U^c y_U V_ckm D )P_1e^i ϕ /3F +h.c.⊃+ i/2( U^c y_U V_ckm D )h ^+ /F_πexp[ i ϕ/3F - i /√(3) F_π] [ 1 + 𝒪(F_π^-1) ]+h.c. , where we have kept only those terms with no additional charged fields.Eqs. (<ref>) and (<ref>) include, inter alia, the Yukawas that give rise to the SM quark masses; however, they also contain unwanted phases unless ⟨ϕ/(3F) - / (√(3) F_π) ⟩ = 0.We therefore perform a further chiral field redefinition on (every generation of) the SM quarks:{D ,D^c ,U ,U^c} →{D ,D^c ,U ,U^c}·exp[- i/2(ϕ/3F - /√(3) F_π) ] . This has a number of effects: (a) it removes the exponential factors in (<ref>)s]upYukawa and (<ref>), so that the Lagrangian contains the straightforward Yukawa terms⊃ -1/√(2)( U^c y_U U + D^c y_D D ) h ( /F_π) + h.c. 
; (b) it removes the explicit exponential factors in (<ref>)s]P1upper and (<ref>); (c) the QCD θ-angle shifts: θ_qcd→θ_qcd +(6)/(√(3) F_π)- i (2ϕ)/F; and (d) an additional term is added to the Lagrangian:⊃ - 1/2[ U^c σ^μ (U^c)^† - U^†σ̅^μ U + ( U^(c)↔ D^(c)) ] ∂_μ[ ϕ/3F - /√(3) F_π].§.§ Expanding the Chiral LagrangianIn order to analyze (<ref>) [as modified per the discussion in yukawas], it is necessary to write it out in terms of the physical degrees of freedom of : , , ^±, and h.By making use of (<ref>) [see SU3exponentiation], it is in principle possible to do this exactly in closed form.However, our primary interest here will be in those terms which have a bearing on the spectrum of the theory, and the effective potential for the neutral scalar fields. As such, we will not need all the higher-order interaction terms in their full generality.In addition to the Yukawa terms we already discussed in yukawas, the terms of interest to us are: (a) the kinetic and potential terms for the relaxion, and its coupling to QCD; (b) the kinetic terms for the states , , and h; (c) the kinetic terms for the states ^±, their kinetic mixing terms with W_μ^± owing to our gauge choice discussed in Umatrix, and the mass terms for the W^± and Z bosons; (d) the mass terms for the EM-charged technipions; and (e) the terms which give rise to the full tree-level potential for the EM-neutral scalars , , and h. We discuss each of these terms in turn. (a) After the chiral rotation in yukawas, the relaxion-dependent terms in the first line of (<ref>) are L ⊃1/2 (∂_μϕ)^2 - V_ϕ(ϕ) + (g_s)^2/16π^2[ ϕ/f( 1 + 2f/F) - 2√(3)/F_π - θ_qcd]G^μνG̃_μν. The term proportional to f/F≪1 can dropped. (b) The kinetic terms for , , and h are contained in the term F_π^24 (D_μ )^† (D^μ) in (<ref>). We are only interested in the two-derivative terms which are potentially quadratic in the fields after the EM-neutral scalars possibly obtain vevs; we can thus always neglect any terms which contain un-differentiated^± fields, but we need to keep terms to all orders in the un-differentiated ,, and h fields. It is straightforward to obtain these terms by sending ^±→ 0 in the definition forbefore directly exponentiating and inserting the result into the relevant terms in (<ref>). The terms which arise are ⊃1/2 (∂_μ)^2 + 1/2 (∂_μ h)^2 [ h^2/^2+^2/^2^2( /F_π)] + 1/2 (∂_μ)^2 [ ^2/^2+h^2/^2^2( /F_π)] + (∂_μ h)(∂^μ) h/^2[1 - ^2( /F_π)]. (c) Given our gauge choice, the ^± fields can possibly kinetically mix with the W_μ^±. The relevant kinetic terms for ^±, the kinetic mixing terms, and the gauge boson mass terms are also contained in the term F_π^24 (D_μ )^† (D^μ) in (<ref>). Similar arguments to those made at (b) apply about which terms need to be kept to find all possible contributions to the two-derivative, one-derivative–one-gauge-boson, or two-gauge-boson terms which are possibly quadratic in the fields after the EM-neutral scalars possibly obtain vevs. To extract these terms, we make use of the exact closed-form expression (<ref>), expanded out to quadratic order in the charged technipion fields. The relevant terms are ⊃∂_μ^+ ∂^μ^- ( [ 2F_π^2h^2 + (+ √(3))^2(^2-3^2)^2[ 1 - cos( √(3)F_π) cos( F_π) ]; - 2F_π^2 (+ 2√(3)) + 3^2 (^2-3^2)^2sin( √(3)F_π) ( F_π) ]) - i g_2(W_μ^+ ∂^μ^- -W_μ^- ∂^μ^+ ) ([ F_π^2 (√(3)+)^2-3^2[ 1 - cos( √(3)F_π) cos( F_π) ];- F_π (^2+√(3))^2-3^2sin( √(3)F_π) ( F_π) ]) + g_2^2 F_π^2/2[ 1 - cos(F_π) cos(√(3)F_π) - F_π( F_π) sin(√(3)F_π) ] W_μ^- W^μ+ + g_1^2+g_2^2 /8 h^2 ^2( F_π) Z^2. 
(d) The mass terms for the EM-charged technipions are obtained from the terms c_mF_π^2Me^i ϕ /3F+ h.c. in (<ref>). The relevant terms in the expansion are those proportional to ^+^-, and are again obtained using the exact closed-form expression (<ref>), expanded out to quadratic order in the charged technipion fields: ⊃ c_m ^+ ^- ×[[ - 4F_π^2 (m_L-m_N) h^2(^2-3^2)^2[cos( 2√(3)F_π + ϕ3F) - cos( F_π) cos( √(3)F_π - ϕ3F) ]; + 6√(3) F_π (m_L-m_N)h^2 ( ^2 - ^2 ) ^2 (^2 - 3^2)^2( F_π) sin( √(3)F_π - ϕ3F);+ 4 F_π m_L √(3)+^2-3^2sin( 2√(3)F_π + ϕ3F);+ 2 F_π√(3) (m_L+m_N)h^2 + 2m_L (^2 + √(3))^2(^2-3^2); ×cos( F_π) sin( √(3)F_π - ϕ3F);- 2(m_L+m_N) h^2 + 2m_L (√(3)+)(^2-3^2)( F_π) cos( √(3)F_π - ϕ3F) ]]equal masses ⟶ 4c_m m ^+ ^- ×[[ F_π√(3)+^2-3κ^2[ cos( F_π) sin( √(3)F_π - ϕ3F) + sin( 2√(3)F_π + ϕ3F) ];- ^2 + √(3)^2-3^2( F_π) cos( √(3)F_π - ϕ3F) ]], where we have also displayed the equal-mass m_L=m_N≡ m limit of this result, as it will be needed later. (e)Finally, the terms that give the (tree-level) contribution to the scalar potential for the EM-neutral technipions are also contained in the terms c_mF_π^2Me^i ϕ /3F+ h.c. in (<ref>). The relevant terms are those with no EM-charged technipions [i.e., we again send ^±→0 in the expression for , before directly exponentiating and inserting the result into the relevant terms in (<ref>)]: ⊃ + 2F_π^2 c_m (m_L+m_N) cos( F_π) cos( √(3)F_π - ϕ3F) + 2F_π^2 c_m m_L cos( 2√(3)F_π + ϕ3F) +2F_π^2 c_m (m_L-m_N) /F_π( F_π) sin( √(3)F_π - ϕ3F). § EFFECTIVE POTENTIAL AND ELECTROWEAK SYMMETRY BREAKINGEqs. (<ref>)–(<ref>) contain all the terms from the Chiral Lagrangian (<ref>) which will be relevant for our further analysis.The immediate next step is to understand the effective potential for the EM-neutral scalars in more detail, in order to understand which of the EM-neutral scalars , , and h obtain vevs as the relaxion field ϕ slow-rolls.[ Since the relaxion field ϕ is assumed to be slow-rolling down its potential until it stalls, we will always assume that the fields , , and h take vevs such that the instantaneous minimum—with ϕ held fixed—of the effective potential is obtained. ]§.§ Effective Potential After QCD quark confinement, and reading off the relevant terms from (<ref>)s]terms_a and (<ref>), we have the following tree-level contributions to the effective potential (recalling that ⊃ -V): V_eff., tree = V_ϕ(ϕ) + V_qcd(h,,,ϕ) - 2F_π^2 c_m (m_L+m_N) cos( /F_π) cos( /√(3)F_π - ϕ/3F) - 2F_π^2 c_m m_L cos( 2/√(3)F_π + ϕ/3F) -2F_π^2 c_m (m_L-m_N) /F_π( /F_π) sin( /√(3)F_π - ϕ/3F). §.§.§ QCD ContributionAlthough the detailed form of the QCD contribution to (<ref>) depends on the exact details of QCD confinement and the SM quark masses, the only properties that will be relevant are that the potential is (a) periodic, and (b) proportional to the light SM quark masses, which owing to the Yukawa couplings (<ref>) implies proportionality to | h (/F_π) |. We will thus take the approximate form, based on (<ref>)s]Yukawas and (<ref>), for the QCD contribution: V_qcd(h,,,ϕ) ≈ - ^3 |1.5mu h ( /F_π) | cos[ ϕ/f - 2√(3)/F_π - θ_qcd]. In this normalization, ^3 h ( /F_π)∼ m_π^2 f_π^2. For h ( /F_π) = v_sm≈ 246 GeV, we then have ≈ 8.5 MeV. Note also that the effective QCD θ-angle is given by θ_qcd^eff. 
= θ_qcd - ϕ/f + 2√(3)/F_π .§.§.§ Radiative CorrectionsImportant one-loop radiative corrections to the potential for the composite pNGB states arise from top quark loops owing to the large top Yukawa, y_t ≈ 1.[ As mentioned in footnote <ref>, we ignore the subdominant gauge loops, as they do not qualitatively alter the dynamics of our model. ] The impact of the top loops is in principle finite and calculable (e.g., on the lattice), but cannot be computed in the Chiral Lagrangian framework because the corrections are quadratically divergent, and are thus sensitive to physics at the cutoff scaleof the low-energy effective description (which we cannot perturbatively match to the [known] UV completion as the latter is strongly coupled at the matching scale). Nevertheless, we can estimate their size using NDA <cit.>, and add to the effective potential a contribution V_eff.⊃ - c_t N_c y_t^2/16π^2^2 h^2 ^2( /F_π), where c_t is an 𝒪(1) constant which is incalculable in perturbation theory, and N_c=3 is the number of QCD quark colors; note the dependence on y_t^2 h^2 ^2( / F_π), which is proportional to m_t^2 per (<ref>).The sign here is crucially important, but is also not calculable within the Chiral Lagrangian framework for reasons similar to those advanced above about the size of the top loop correction; we nevertheless assume that the sign is negative, as is obtained from a naïve perturbative loop computation (see, e.g., Galloway:2010bp for discussion of this point).§.§.§ Equal-Mass LimitFor the remainder of the body of this paper we will work in the equal-mass limit m_L = m_N; this case is most amenable to straightforward analysis, and yields all the desired properties. In unequal_masses, we revisit the more complicated case of unequal masses, m_L≠ m_N. The important conclusion from the analysis in unequal_masses is that the equal-mass limit is not in any way special from the point of view of its physical properties: much the same qualitative picture of the EWSB dynamics is obtained for m_L≠ m_N as for m_L=m_N, and we thus do not lose any qualitative features by making the simplifying equal-mass assumption.Combining (<ref>)s]Vefftree–(<ref>), and setting m_L = m_N≡ m, we obtain the following contributions to the one-loop effective potential: V_eff. ⊃ V_ϕ(ϕ) - ^3 | h ( /F_π) | cos[ ϕ/f - 2√(3)/F_π - θ_qcd] - 2F_π^2 c_m m [ 2 cos( /F_π) cos( /√(3)F_π - ϕ/3F) + cos( 2/√(3)F_π + ϕ/3F) ] - 2F_π^2c_m m ϵ_t h^2/F_π^2^2( /F_π), where we have definedϵ_t≡c_t/c_m N_c y_t^2/32π^2/m > 0.§.§ Electroweak-Symmetric PhaseIn the electroweak-symmetric (EW-symmetric) phase of the theory, it is straightforward to show that =0, =0, =0, and, assuming slow-roll of the relaxion field ϕ, ∂_t ϕ∝ - ∂_ϕ V|_===0 = - ∂_ϕ V_ϕ - 2F_π^2c_m m/Fsin( ϕ/3F). We will return to a discussion of the rolling of the relaxion in relaxion_potential; for now let us focus on the other properties of this phase, for a fixed value of the relaxion field ϕ. It is straightforward to see from (<ref>)s]terms_b and (<ref>) that , , ^±, and h have canonical kinetic terms, that there is no ^±–W_μ^± kinetic mixing, and that the W^± and Z bosons are massless (as is of course required for this phase). Moreover, ignoring in the squared-mass matrix[ Defined for the EM-neutral scalars as M^2_XY≡∂_X ∂_Y V_eff.|_===0 with X,Y∈{h, , , ϕ}; for the EM-charged scalars, one can simply read off the mass from (<ref>). 
] off-diagonal entries proportional to F_π / F (which ratio is exponentially small) that mix ϕ withand , that matrix is diagonal and the squared-masses of the scalars arem^2_ = m^2_ = m^2_^± = 4 c_m m cos( ϕ/3F), and m^2_h = 4 c_m m [ cos( ϕ/3F) - ϵ_t ]; since ϕ is still rolling in this phase, m^2_ϕ≡∂_ϕ^2 V|_===0 has no physical interpretation. In exact parallel with the `cartoon' model of cartoon, we see from (<ref>) that the EW-symmetric phase is thus stable so long as cos( ϕ / 3F ) > cos(/ 3F ) > 0, wherecos( /3F)≡ϵ_t .We emphasize the perhaps obvious point that the solution (<ref>) exists independent of the values of any of the other parameters in the theory; this will be important to bear in mind when we discuss the evolution of the potential with changing ϕ in EWSBsolutions.§.§ Broken PhaseIn the broken phase of the theory, we find that both h andobtain vevs, which are determined by the following relations:cos( / F_π) = 1/ϵ_tcos[ ϕ/3F - 1/2arctan( 2sin(ϕ/3F) [ cos(ϕ/3F) -ϵ_t]/cos(2ϕ/3F) + 2 ϵ_t cos(ϕ/3F) ) ],tan( 2/√(3)F_π) = 2sin(ϕ/3F) [ cos(ϕ/3F) -ϵ_t]/cos(2ϕ/3F) + 2 ϵ_t cos(ϕ/3F) , =0. Per (<ref>), since =0, h andhave canonical kinetic terms, and there is no h–1mu kinetic mixing. The fielddoes not however have a canonical kinetic term: ⊃1/2 (∂_μ )^2 ^2(/F_π); we will return to this point below. §.§.§ A Deeper Investigation of the EWSB MinimumPrior to any further examination of the properties of this phase (masses, etc.), it is worthwhile to examine the results (<ref>)s]broken_cos_h and (<ref>) in more detail in the vicinity of ϕ =, as this clarifies the physical situation tremendously. Suppose that ϕ =+ 3F ·δ, where |δ| ≪ 1.Expanding (<ref>)s]broken_cos_h and (<ref>) in powers of δ, we find cos( /F_π)= 1 - 3ϵ_t δ√(1-ϵ_t^2)/4ϵ_t^2-1 + 𝒪(δ^2)tan( 2 /√(3)F_π)= - 2δ1-ϵ_t^2/4ϵ_t^2-1 + 𝒪(δ^2) . Clearly, (<ref>) has a real solution foronly if δ/4ϵ_t^2-1 ≥ 0and 0<ϵ_t≤ 1, where we used that ϵ_t>0 [(<ref>)]. There are two regimes that satisfy these constraints:(a) δ > 0 and 1/2< ϵ_t≤1, and(b) δ < 0 and 0<ϵ_t<1/2. In either case (a) or (b), (<ref>)s]cos_h_delta_exp and (<ref>) have two approximate solutions in the vicinity of ==0 (cf. cartoon): /F_π = ±[ 6 δϵ_t √(1-ϵ_t^2)/4ϵ_t^2 - 1+ 𝒪(δ^2) ]^1/2 and/F_π = -√(3)δ1-ϵ_t^2/4ϵ_t^2-1 + 𝒪(δ^2). Note also that ∂_h^2 V_eff.|_===0 = - 4c_m mδ√(1-ϵ_t^2) + 𝒪(δ^2).Suppose then that 0<ϵ_t<1/2.As we noted above, the solution at ==0 also exists for any value of the parameters. If additionally δ<0, then both solutions (<ref>) exist [case (b)], for a total of three solutions forandin the vicinity of ==0 (and =0). These solutions merge as δ→ 0 from below. Once δ > 0, the solutions (<ref>) no longer exist, leaving only the solution ==0 in the vicinity of ==0. Moreover, since ∂_h^2 V_eff.|_===0 is positive for δ <0 and negative for δ >0, the solution ==0 is stable for δ<0 and unstable for δ >0. Further analysis shows that the solutions (<ref>) are unstable if 0<ϵ_t<1/2. Therefore, we find that for 0<ϵ_t<1/2, the model exhibits a so-called subcritical pitchfork bifurcation at δ =0 (see, e.g., Strogatz:2014ndc). As δ approaches zero from below, a stable solution exists at the origin in field space; two additional solutions—both unstable—exist nearby in field space. As δ gets nearer zero, the two unstable solutions approach the stable one, and they merge at δ =0 (i.e., ϕ =). Once δ >0, there are no stable solutions left in the vicinity of the original stable solution. 
Indeed, in this case, once δ >0, the nearest minimum of the potential occurs for |/F_π| ≈π and |/F_π| ∼𝒪(1). This is clearly the incorrect behavior for a reasonable EWSB transition.Consider then the other case, 1/2<ϵ_t≤1.For δ < 0, the solutions (<ref>) do not exist, and the only solution in the vicinity of ==0 is ==0 itself. If δ>0, then both solutions (<ref>) exist [case (a)], for a total of three solutions forandin the vicinity of ==0 (and =0). These solutions separate from each other as δ grows more positive. Moreover, the solution ==0 is stable for δ<0 and unstable for δ >0. Further analysis shows that the solutions (<ref>) are stable if 1/2<ϵ_t≤ 1. Therefore, we find that for 1/2<ϵ_t≤ 1, the model exhibits a so-called supercritical pitchfork bifurcation at δ =0. For δ<0, a stable solution exists at the origin in field space, and no other solutions exist nearby. Once δ>0, two new stable solutions appear in the vicinity of the origin in field space, and the solution at the origin becomes unstable. The system will relax to one or the other of these new stable solutions, which slowly separate from ==0 as δ becomes increasingly positive. This is the behavior we need, and closely mirrors the behavior of the `cartoon' model of cartoon; see cartoon_plots.An alternative analysis is also instructive.Consider V_eff. evaluated at =0 and withfixed at the solution (<ref>). Expanding V_eff. in powers of h for fixed δ, we find V_eff.|_ from (<ref>) =0/ c_mF_π^2 m ≈( -6 ϵ_t + 6 δ√(1-ϵ_t^2)) - 2 δ√(1-ϵ_t^2)( h^2/F_π^2)+4ϵ_t^2 - 1 / 6 ϵ_t ( h^4/F_π^4) + ⋯. Therefore, for 1/2<ϵ_t≤1, the quartic coupling is positive, with the h squared-mass parameter positive for δ<0 and negative for δ>0, exactly as required to obtain a slow separation of the EWSB minimum from the EW-symmetric minimum as δ increases through zero. For 0<ϵ_t<1/2, the h squared-mass parameter is still positive for δ<0 and negative for δ>0; however, the quartic coupling is negative in both cases. Thus, the moment the h squared-mass parameter runs negative as δ increases through zero, the h field rolls off to a large field value.[ Strictly speaking, the expansion for fourth-order in h does not allow one to make this latter conclusion because, e.g., the sixth-order term could stabilize a nearby minimum. Our conclusion here is nevertheless correct, and is based on evaluation of the full (unexpanded) potential. ] A parameter space restriction is thus required [cf. (<ref>)]: 1/2< ϵ_t ≤ 1⇔c_tN_cy_t^232π^2c_m ≤m < c_t N_c y_t^216π^2c_m. The technifermion masses may thus be no larger than a loop factor smaller than ; such a choice is, however, technically natural. Note also that this implies a very mild restriction on the value of : 0≤ ( / F) < π. §.§.§ Properties of the Broken PhaseAgain, we will return to a discussion of the rolling of the relaxion in relaxion_potential; for now let us focus again on the other properties of this phase for a fixed value of the relaxion field ϕ.Canonically normalizing thefield by sending → / (/F_π), and evaluating the broken-phase scalar squared-mass matrix M_XY^2≡. ∂^2 V_eff./∂ X ∂ Y|_ from (<ref>)from (<ref>) =0ϕ=+3Fδ with X,Y ∈{h, , , ϕ}, ignoring all terms suppressed by one of more powers of the (exponentially small) ratio F_π/F, and ignoring (small) QCD corrections everywhere except in the squared-mass of the relaxion field, we find that the squared-masses of the scalars[ The h andfields mix; the physical Higgs is mostly h; the `physical ' is the mostlystate. 
Note that if we did not ignore the terms ∼ F_π/F in the mass matrix, the ϕ would also mix with the h and , with a mixing angle ∼ F_π/F ≪ 1. This mixing of the CP-even and CP-odd scalars is allowed since the ϕ vev is non-zero, which breaks CP. ] are m_phys. Higgs^2= 2/3c_m m×[[2 cos( 2/√(3)F_π + ϕ/3F) + 4 cos( /F_π) cos( /√(3)F_π - ϕ/3F) - 3 ϵ_t cos( 2/F_π); - ([ [[ 2 cos( 2/√(3)F_π + ϕ/3F) - 2 cos( /F_π) cos( /√(3)F_π - ϕ/3F); + 3 ϵ_t cos( 2/F_π) ]]^2;+ 12 sin^2( /F_π) sin^2( /√(3)F_π - ϕ/3F) ])^1/2 ]] = ( 8c_m m√(1-ϵ_t^2)) δ + 𝒪(δ^2) , m_phys. ^2= 2/3c_m m×[[2 cos( 2/√(3)F_π + ϕ/3F) + 4 cos( /F_π) cos( /√(3)F_π - ϕ/3F) - 3 ϵ_t cos( 2/F_π); + ([ [[ 2 cos( 2/√(3)F_π + ϕ/3F) - 2 cos( /F_π) cos( /√(3)F_π - ϕ/3F); + 3 ϵ_t cos( 2/F_π) ]]^2;+ 12 sin^2( /F_π) sin^2( /√(3)F_π - ϕ/3F) ])^1/2 ]] = 4c_m mϵ_t[ 1 -4 δ(2ϵ_t^2-1)√(1-ϵ_t^2)/ϵ_t(4ϵ_t^2-1) + 𝒪(δ^2) ], m_^2= 4c_m mcos( /√(3) F_π - ϕ/3F) + ϵ_t [ (/F_π) - cos(/F_π) ]/(/F_π)= 4c_m mϵ_t(exactly), and m_ϕ^2 ≃[ ∂^2V_ϕ(ϕ)/∂ϕ^2+ ^3 F_π^2/f^2sin( /F_π) cos( ϕ/f- 2√(3)/F_π - θ_qcd) ]_ from (<ref>)from (<ref>) ϕ=+3Fδ, where in the first two results we have used (<ref>)s]broken_cos_h and (<ref>), and ϕ =+ 3F·1.5muδ, have expanded in powers of δ, and have kept only leading terms, as we expect that the QCD barriers will stall the relaxion in the vicinity of(i.e., at 0<δ≪ 1); see relaxion_potential. The expression for m_^2 at (<ref>) is exact owing to the relation cos( /√(3) F_π - ϕ/3F) =ϵ_t cos(/F_π) in the broken phase, which can easily be verified using (<ref>)s]broken_cos_h and (<ref>). Note also that m_ϕ^2 is only interpretable as the present-day squared-mass of the relaxion field once it has stopped rolling after the post-inflation slope-drop discussed in cartoonRelaxionV has occurred; see relaxion_potential. It is also straightforward to read off the W and Z-boson squared-masses from (<ref>) (note that we had to keep the 𝒪(δ^2) terms in the expansions (<ref>) in order to obtain the 𝒪(δ^2) terms here correctly): m_W^2= g_2^2/2 F_π^2 [ 1 - cos( /F_π) cos( √(3)/F_π) ] = 3/2 g_2^2 F_π^2 δ[ ϵ_t√(1-ϵ_t^2)/4ϵ_t^2-1 + 3/2δ2-3ϵ_t^2+2ϵ_t^4/(4ϵ_t^4-1)^2 + 𝒪(δ^2) ],m_Z^2= g_1^2+g_2^2/4 F_π^2 sin^2( /F_π) = 3/2 (g_1^2+g_2^2) F_π^2 δ[ ϵ_t√(1-ϵ_t^2)/4ϵ_t^2-1 + 3/2δ1-2ϵ_t^2+2ϵ_t^4/(4ϵ_t^4-1)^2 + 𝒪(δ^2) ], which imply a tree-level contribution to the T parameter ofα_e T≡1/m_W^2[ Π_W^+W^-(0) - c_w^2 Π_ZZ(0) ] = 1 - g_2^2/g_1^2+g_2^2m_Z^2/m_W^2= 1 - 1/2sin^2( /F_π)/1 - cos( /F_π) cos( √(3)/F_π)= 3/2δ√(1-ϵ_t^2)/ϵ_t(4ϵ_t^2-1) + 𝒪(δ^2) , where α_e is measured at the Z-pole, and c_w≡ g_2 / √(g_1^2+g_2^2) is the cosine of the weak mixing angle. Note that α_e T ∝ξ per (<ref>)s]xi and (<ref>) (cf. eq. (16) of Giudice:2007fh).Owing to (a) the fact that ^± do not have canonical kinetic terms, and (b) the kinetic mixing of the ^± with the W_μ^±, two manipulations are required before the ^± masses can be read off from (<ref>)s]terms_c and (<ref>): (1) we send W^±_μ→ W^±_μ± iα∂_μ^±, with α chosen to eliminate the kinetic mixing term in (<ref>): α = 2/g_2sin( /F_π) sin( √(3)/F_π) -√(3)[ 1 -cos( /F_π) cos( √(3)/F_π) ]/( ^2 - 3^2 ) [ 1 -cos( /F_π) cos( √(3)/F_π) ], which does not impact the W mass but does modify the ^± kinetic term (and of course induces couplings to the ^± for all fields that couple to the W_μ^±); and (2) we rescale the ^± fields to achieve a canonical kinetic term: ^±→^±×^2 - 3^2 /√(2) F_π√(1 -cos( /F_π) cos( √(3)/F_π) )/cos( √(3)/F_π)- cos( /F_π) . 
The squared-mass of the ^± can then be read off from the rescaled (<ref>) as m_^±^2= 2c_m m ^2-3^2/^2 1 -cos( /F_π) cos( √(3)/F_π) /[ cos( /F_π) - cos( √(3)/F_π) ]^2 ×[ [F_πsin( F_π) cos( √(3)F_π - ϕ3F); -√(3)F_π[ cos( F_π) sin( √(3)F_π - ϕ3F) + sin( 2√(3)F_π + ϕ3F)] ]] = 2c_m m ϵ_t ^2-3^2/F_π^2 1 -cos( /F_π) cos( √(3)/F_π) /[ cos( /F_π) - cos( √(3)/F_π) ]^2 ( F_π) cos( F_π) = 4c_m mϵ_t[ 1 -1/2δ(13ϵ_t^2-6)√(1-ϵ_t^2)/ϵ_t(4ϵ_t^2-1) + 𝒪(δ^2) ], where at (<ref>) we have used the exact relations cos( √(3)F_π - ϕ3F)=ϵ_tcos( F_π) and cos( F_π) sin( √(3)F_π - ϕ3F) + sin( 2√(3)F_π + ϕ3F) =0, which follow from (<ref>)s]broken_cos_h and (<ref>).At (<ref>), we have used (<ref>)s]broken_cos_h and (<ref>), and have expanded in powers of δ.§.§ SummaryIn summary, we have found that for the parameter range 1/2<ϵ_t≤ 1, the theory has a stable EW-symmetric vacuum solution ===0 while cos(ϕ/3F) > cos(/3F) ≡ϵ_t. This vacuum solution destabilizes if ϕ is larger than , and if ϕ=+3Fδ (with 0<δ≪ 1), we find a Higgs-vev ∝√(δ) F_π, a -vev ∝δ F_π, a light physical Higgs mass (m_h^2 ∝ m δ), light W and Z masses (m_W,Z^2 ∝ F_π^2 δ), and four heavy states (M^2 ∝ m)—two of these states are EM-neutral and two are charged. A tree-level T parameter α_e T ∝δ is generated. § RELAXION POTENTIALIn order to exploit the observations summarized in EWsummary, we desire to have the relaxion field initially in the range ϕ∈ [0,), and have the field slow-roll out to larger values of ϕ over time: ∂_t ϕ >0. In exactly the same fashion as discussed in cartoon, this will trigger dynamical EWSB as ϕ crosses , giving rise to increasingly large QCD barriers to the rolling, which will stall the relaxion shortly after it crosses , while 0<δ≪ 1, per the mechanism of Graham:2015cka.To this end, examine again (<ref>), which gives the gradient of the potential with respect to ϕ in the EW-symmetric phase. As the second term in (<ref>) is negative on ϕ∈ [0,), the first term must be made positive to obtain the correct rolling direction.Following Graham:2015cka and our discussion in cartoonRelaxionV, we add a linear term for the ϕ, which explicitly breaks the residual discrete ϕ shift symmetry.As in cartoon, we write this term as follows (the additional factor of 2 here compared to V_ϕ(ϕ) in cartoonRelaxionV is merely a convenient rescaling of γ): V_ϕ(ϕ)= -γ 2 F_π^2c_m m /F ϕ , where the free numerical prefactor γ = γ(σ) is again assumed to take a value γ_i ∼ 10^10 during inflation (see cartoonRelaxionV).This implies that, during inflation, [cf. (<ref>)] ∂_t ϕ∝ - ∂_ϕ V|_EW-symmetric = 2 F_π^2c_m m/F[ γ_i - sin( ϕ/3F) ]. Taking γ_i ≫ 1 certainly guarantees that ∂_tϕ >0 in the EW-symmetric phase.In the broken phase, we require the relaxion to stop rolling once the QCD barriers become sufficiently large.The stopping condition ∂_ϕ V_eff.= 0 is ^3/f h ( /F_π) sin[ ϕ/f - 2√(3)/F_π - θ_qcd] ≈2 F_π^2 c_m m/F[ γ + 2/3cos( /F_π) sin( /√(3)F_π - ϕ/3F) - 1/3sin( 2/√(3)F_π + ϕ/3F) ]. Here, during inflation γ = γ_i for the initial stalling of the relaxion, but γ→ 0 when the post-inflation slope-drop occurs and ϕ settles to its new minimum. Evaluating this at ϕ =+ 3Fδ in the broken phase as defined by (<ref>)s]broken_cos_h–(<ref>), and expanding in powers of δ everywhere except for the ϕ/f term in the argument of the sine term on the LHS of (<ref>), we find during inflation that the relaxion stalls when 2F_π^2 c_m m /F[ γ_i - √(1-ϵ_t^2)] ≈√(δ)√(6)F_π^3/f(1-ϵ_t^2)^1/4√(ϵ_t)/√(4ϵ_t^2-1)sin[ 3F/f(-2muarccos(ϵ_t)+ δ) - θ_qcd] + 𝒪(δ) . 
This cannot be solved for δ exactly in closed form, and no small-δ expansion of the argument of the sine term is possible owing to the large δ-prefactor proportional to F/f ≫ 1. However, we can make progress by assuming that the sine factor on the RHS of (<ref>) is equal to 1 to obtain the approximate solution for the relaxion stalling during inflation: δ≈2/3 f^2 F_π^2 c_m^2 m^2 ^2 / F^2 ^64ϵ_t^2-1/ϵ_t √(1-ϵ_t^2)[ γ_i - √(1-ϵ_t^2)]^2 ≈2/3 f^2 F_π^2 c_m^2 m^2 ^2 / F^2 ^64ϵ_t^2-1/ϵ_t √(1-ϵ_t^2)2mu γ_i^2, where in the latter approximate equality we have used γ_i ≫√(1-ϵ_t^2)∼𝒪(1). Self-consistency of the expansion effectively demands that, up to 𝒪(1) factors,F ≫γ_i fF_π c_m m/^3. Since we have that ^3 ∼^4 [cf. (<ref>)s]cartoonVQCD and (<ref>)], so that ^-3∼√(ξ) F_π^-4, we require here that F ≫√(ξ)γ_i fF_π^2 c_m m/^4; this should be compared to (<ref>), which indicates that in the cartoon model we had F ∼γ_i fF_π^2 c_m m/Λ^4: the parametrics examined for the cartoon model thus correctly imply ξ≪ 1.By the intermediate value theorem, the actual solution of (<ref>) must occur for a value of δ no more than Δδ = 2π/3f/F≪ 1 greater than the approximate solution (<ref>). Although it is not obvious parametrically, the shift Δδ turns out to be numerically small compared to δ. Eq. (<ref>) is thus a sufficient approximate solution to be used everywhere except when an expression depends on a (co)sine factor with an argument containing a contribution proportional to ϕ/f. The only other place that this occurs is in the relaxion squared-mass [note that terms proportional to (F_π/F)^2 ≪1 have been ignored here]: m_ϕ^2≈^3F_π/f^2sin( /F_π) cos[θ_qcd - ϕ/f + 2√(3)/F_π]_ from (<ref>)from (<ref>). However, for precisely the reason that this expression in sensitive to shifts of size Δδ∼ f/F (i.e., Δϕ∼ f), the relaxion mass will change after the post-inflation slope-drop as the relaxion rolls a distance of order |Δϕ| ∼ f to its new settling point (see cartoonRelaxionV); on the other hand, all the other estimates we have made will not be significantly impacted by this small change in ϕ due to the slope drop. Since the argument of the cosine factor in (<ref>) is just θ^eff._qcd [which from comparing (<ref>) with γ = γ_i and with γ→ 0, is easily seen to be a factor of γ_i∼10^10 smaller than its 𝒪(1) value at initial stalling, cf. (<ref>)], it follows that the post-slope-drop relaxion mass can be estimated by setting the cosine factor to 1, and using (<ref>)s]exp_soln and (<ref>) in the sin(/F_π) term: m_ϕ^2≈2 F_π^2 c_m m/ fF2mu γ_i ≈^3/f^2≈m_π^2 f_π^2/f^2 (post slope-drop),where we have used the stopping relation estimated from (<ref>) in the second step above, and the estimates ^3∼Λ^4 ∼ m_π^2 f_π^2 in the third step [see the discussions just below (<ref>)s]slopeMatchingCartoongamma1 and (<ref>)]. Note that the expression appearing on the RHS of the first approximate equality in (<ref>) is F/f ≫ 1 larger than the 𝒪(F_π^2/F^2) terms we neglected in (<ref>). We thus see that the relaxion mass is expected to obey the standard scaling relation of a generic QCD axion.§ SUMMARY AND NUMERICAL RESULTSIn this section, we present a summary of our analytical results, as well as selected numerical results.In order to present the analytical results in the cleanest fashion possible, we first exchange all appearances of m for ϵ_t using (<ref>), and we replace → 4π F_π / √(N). 
Next, keeping the terms at 𝒪(δ^2) in the expansion for ^2/F_π^2 which were not shown explicitly at (<ref>), we invert that expansion to obtain δ as a power series in ^2/F_π^2 correct to 𝒪(^4/F_π^4), and insert this inverted expansion into the various results from [s]effective_potential and <ref> that had previously been expanded in powers of δ. In this fashion, all results other than ^2/F_π^2 can be expressed as a power series in ^2/F_π^2, while ^2/F_π^2 is expressed as a power series in δ, which is estimated by (<ref>). In the broken phase, this procedure leaves us with the following results: δ ≈( c_t y_t^2 N_c/N)^2f^2 F_π^6/ F^2 ^6 γ_i^2 /ϵ_t^2 [4ϵ_t^2-1/6ϵ_t√(1-ϵ_t^2)] , ^2/F_π^2 =6ϵ_t√(1-ϵ_t^2)/4ϵ_t^2-1δ + 3(2ϵ_t^4-2ϵ_t^2+3)/(4ϵ_t^2-1)^2δ^2 + ⋯,/F_π = - √(1-ϵ_t^2)/2√(3)2mu ϵ_t ^2/F_π^2 + ⋯ , m/F_π ≡1/8π( c_t/c_m y_t^2 N_c/√(N)) 1/ϵ_t, m_t/F_π = y_t/√(2)/F_π[ 1 - 1/6^2/F_π^2 + ⋯],m_phys. Higgs^2 /F_π^2 = 2/3( c_ty_t^2 N_c/N) 4ϵ_t^2-1/ϵ_t^2^2 /F_π^2[ 1 - 2/3ϵ_t^2^2/F_π^2 + ⋯],m_phys. ^2/F_π^2≈ m_^±^2 /F_π^2≈ m_^2/F_π^2 = 2 (c_t y_t^2 N_c/N), m_W^2/F_π^2 = g_2^2/4^2/F_π^2[ 1 -1/3( 1- 3/4ϵ_t^2) ^2/F_π^2+ ⋯],m_Z^2/F_π^2 = g_1^2+g_2^2/4^2/F_π^2[ 1 -1/3^2/F_π^2+ ⋯],T≡α_e T= 1/4ϵ_t^2^2/F_π^2+ ⋯ , G_F = g_2^2/4√(2) m_W^2 = 1/√(2)^2[ 1 + 1/3( 1 - 3/4ϵ_t^2) ^2/F_π^2 + ⋯], m_ϕ^2≈^3/f^2≈m_π^2 f_π^2/f^2(post slope-drop). To investigate these results numerically, we fix c_m=c_t=1, N_c=3, = 8.5MeV, f=10^11 GeV, and γ_i = 10^10. Using as input the values G_F= 1.1663787(6)-5 GeV^-2 <cit.>, m_t = 173.21(51)(71) GeV <cit.>, and m_phys. Higgs = 125.09(24) GeV <cit.>, we scan over N and F_π, solving for y_t, , and ϵ_t using (<ref>)s]mtop, (<ref>), and (<ref>).We also fix α_e(m_Z) ≈ 1/127.950(17) <cit.> to be able to compute T using the values of ϵ_t andthus obtained.[ We do not require the values of g_1 and g_2 to discuss the relevant phenomenology. They could be obtained by, e.g., additionally fixing m_Z=91.1876(21) GeV <cit.>, or by performing a global EW fit with the additional tree-level shifts indicated above accounted for; however, as is well-known even in the SM, it would be necessary to include the one-loop SM corrections in order to obtain accurate values here. ] The results of this numerical analysis are shown in [s]numerics_pheno and <ref>; the former shows the phenomenologically interesting results for T and m_phys. ≈ m_^±≈ m_, while the latter shows the values of F, m, , and ϵ_t that are required at each point in parameter space. Also shown in numerics_pheno are the current and projected 95%-confidence one-parameter upper limits on the T parameter (taken at fixed S=U=0) for a variety of proposed collider configurations for the ILC, CEPC, and FCC-ee (these limits are taken from the presentation in Fedderke:2015txa of the limits examined in Fan:2014vta, and assume that the best-fit point for the global electroweak fit is at (S,T)=(0,0); cf. Baak:2014ora).A benchmark parameter point of interest is N=10 and F_π = 20 TeV, indicated by the black dots on [s]numerics_pheno and <ref>. At this point, and with c_m, c_t, N_c, , f, and γ_i fixed as above, we have = 80 TeV, F = 5.241 GeV, T = 1.3-2, m_phys. ≈ m_≈ m_^± = 15 TeV, m = 1.2 TeV, ϵ_t = 0.61, y_t = 0.99, and = 246 GeV.Additionally, we find θ_qcd^eff.≈π/2 at the initial stalling point. 
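For concreteness, the short self-contained Python sketch below (our own; it drops the 𝒪(^2/F_π^2) corrections in the relations above and uses explicit variable names of our choosing for the various scales) re-derives the benchmark numbers quoted in this section from those relations: ϵ_t ≈ 0.61, a cutoff of ≈ 80 TeV, m ≈ 1.2 TeV, heavy pNGB masses ≈ 15 TeV, T ≈ 1.3 × 10^-2, F ≈ 5 × 10^41 GeV, and, post slope-drop, m_ϕ≈ 120 μeV.

import math

# Benchmark of the text: N = 10, F_pi = 20 TeV, c_m = c_t = 1, N_c = 3, f = 1e11 GeV,
# gamma_i = 1e10, and a QCD barrier normalization of 8.5 MeV.  All O(vh^2/F_pi^2)
# corrections in the summary relations are dropped (they matter only at the 0.01% level).
N, Fpi     = 10, 20.0e3            # GeV
cm, ct, Nc = 1.0, 1.0, 3
f, gamma_i = 1.0e11, 1.0e10        # GeV, dimensionless
Lam_bar    = 8.5e-3                # GeV; our name for the 8.5 MeV barrier scale
GF, mt, mh = 1.1663787e-5, 173.21, 125.09   # GeV^-2, GeV, GeV
alpha_e    = 1.0 / 127.950

vh    = (math.sqrt(2.0) * GF) ** -0.5              # EW vev from G_F at leading order
yt    = math.sqrt(2.0) * mt / vh                   # top Yukawa from m_t
A     = (2.0 / 3.0) * ct * yt**2 * Nc / N
eps_t = 1.0 / math.sqrt(4.0 - mh**2 / (A * vh**2)) # invert the physical-Higgs-mass relation
Lam   = 4.0 * math.pi * Fpi / math.sqrt(N)         # cutoff of the effective description
m     = Fpi / (8.0 * math.pi) * (ct / cm) * yt**2 * Nc / math.sqrt(N) / eps_t
m_hvy = Fpi * math.sqrt(2.0 * ct * yt**2 * Nc / N) # the three heavy technipion states
T     = vh**2 / (4.0 * eps_t**2 * Fpi**2) / alpha_e
delta = (vh / Fpi)**2 * (4.0 * eps_t**2 - 1.0) / (6.0 * eps_t * math.sqrt(1.0 - eps_t**2))
F     = math.sqrt((ct * yt**2 * Nc / N)**2 * f**2 * Fpi**6 * gamma_i**2
                  * (4.0 * eps_t**2 - 1.0)
                  / (delta * Lam_bar**6 * eps_t**2 * 6.0 * eps_t * math.sqrt(1.0 - eps_t**2)))
m_phi = 0.135 * 0.092 / f * 1.0e9                  # ~ m_pi f_pi / f in eV (m_pi ~ 135 MeV, f_pi ~ 92 MeV)

print(f"eps_t = {eps_t:.2f},  cutoff = {Lam/1e3:.0f} TeV,  m = {m/1e3:.2f} TeV")
print(f"heavy pNGBs ~ {m_hvy/1e3:.1f} TeV,  T = {T:.2e},  delta = {delta:.2e}")
print(f"F ~ {F:.2e} GeV,  m_phi ~ {m_phi*1e6:.0f} micro-eV")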
Post slope-drop, we have m_ϕ≈ 120 μeV and |θ_qcd^eff.|= 7.9× 10^-11, which is small enough to evade the neutron electric dipole moment (EDM) constraint, |d_n| ∼ 3× 10^-16 |θ_qcd^eff.| e cm < 3× 10^-26 e cm <cit.>. In [s]numerics_pheno and <ref>, we also show the limits of the parameter region in which the self-consistency conditions discussed in cartoonSelfConsistency are satisfied. The essential content of the conditions for the cartoon model is captured by (<ref>) which, taking = 4π F_π / √(N), expresses a constraint on the upper limit of F_π of about 20 TeV. For the full model, essentially the same parametric estimate is obtained, with the exception of a weak ϵ_t-dependence: F_π ≲( √(3) N / 2γ_i c_t y_t^2N_c)^1/4 (2ϵ_t)^1/4( ^4 ^3/f)^1/6≈ (20 TeV) ×( N/10)^1/4×( ϵ_t/0.61)^1/4. For N=10, this translates to a lower bound ξ≳ 1.2× 10^-4.

The massive bound states here are on the order of 10 TeV, and are charged only under the electroweak gauge group; as such they would be extremely hard to see at the proposed SPPC and FCC-hh high-energy hadron colliders. The T parameter is, however, a highly relevant probe: although current experiments are not sensitive to values of T as small as those obtained at the benchmark point N=10 and F_π = 20 TeV, with improved Z-pole measurements and a top-threshold scan, both CEPC and FCC-ee would be able to exclude at 95% confidence not only this benchmark point, but almost the entirety of the model parameter space in which the relaxion consistency conditions are satisfied, and in which Landau poles in the couplings g_1,2 are not expected below the Planck scale.

§ OTHER CONSIDERATIONS

The cosmological relaxation mechanism generates a technically natural weak scale when the cutoff of the SM effective field theory is at much higher scales. In our context, we have seen that we are able to push the cutoff to scales of order 100 TeV. Despite the high scale of new physics, it is possible to imagine low-energy probes of the scenario, including precision electroweak, flavor, and CP tests, as well as signatures of heavy technibaryon composite dark matter. Similar considerations would follow from a tuned composite Higgs model (see, e.g., [s]Vecchi:2013iza,Barnard:2014tla); searches for the relaxion are therefore crucial to test the scenario, although connecting the low- and high-energy dynamics may be challenging. In this section we make a few remarks concerning these issues.

§.§ Flavor

In this work we have taken a bottom-up approach to flavor, writing only the minimal effective operators that generate the Yukawa couplings; none of our detailed model conclusions depend sensitively on the exact UV mechanism leading to these couplings. The relaxion mechanism allows us to push to higher scales than in typical quasi-natural composite Higgs models, and this implies that the scale of flavor dynamics may also be higher. This generally eases the severe constraints from anomalous flavor changing neutral currents (FCNCs) and potentially allows for simpler UV models of flavor (i.e., without necessarily requiring large anomalous dimensions of the composite operators, walking dynamics, etc. <cit.>). Nevertheless, it is not possible in our scenario to push the flavor scale arbitrarily high, due to the self-consistency conditions in the inflation sector. In particular, the upper limit on the cutoff in our model (∼ 10^2 TeV) is too low to be automatically safe from current flavor constraints on composite Higgs models; we thus expect that additional UV structure will be required in the flavor sector.
Thus it may still be possible to have experimental signatures of flavor and CP violation within the reach of current and future experiments.§.§ Relaxion PhenomenologyA chief prediction of this scenario is the existence of a QCD (rel-)axion, which can be tested through a variety of techniques, depending on its underlying couplings to the SM; see, e.g., Jaeckel:2015txa for a review of axion phenomenology. The classic probes of the relaxion–photon coupling include helioscopes, light-shining-through-walls experiments, and observations of a variety of astrophysical systems in which relaxions may be produced. Furthermore, since the effective QCD θ-angle is expected to be small but non-zero, it may be possible to probe the relaxion–gluon coupling with improved measurements of the static neutron EDM. The relaxion may also form some or all of the dark matter, and there are numerous proposals expected to make significant inroads in the axion dark matter parameter space; see, e.g., [s]Asztalos:2003px,Budker:2013hfa,Arvanitaki:2014dfa,Rybka:2014cya,Graham:2015ifn,Kahn:2016aff,Barbieri:2016vwg,Arvanitaki:2016fyj,Chung:2016ysi,TheMADMAXWorkingGroup:2016hpc,Stadnik:2013raa,Stadnik:2014tta,Stadnik:2015kia.§.§ Custodial ModelsThe coset studied in this work does not admit a custodial symmetry, leading generically to a large T parameter if the compositeness scale is below about 10 TeV (see numerics_pheno). As we have shown, the relaxion mechanism can allow a high compositeness scale, but this means it will be challenging to directly search for the additional composite pNGBs and resonances at colliders.[ See also, e.g., [s]Croon:2015wba,Banerjee:2017qod for some non-minimal composite Higgs scenarios in which the resonances are made heavier compared to the compositeness scale, also allowing evasion of direct search bounds with alleviated tuning. ] On the other hand, one can certainly consider cosets that are custodially symmetric. For such models, it is conceivable that the compositeness scale is as low as a few TeV, and that the relaxion addresses only a mild little hierarchy. This would potentially open the window for observation of new heavy composite particles at the LHC and future hadron colliders. §.§ Composite Dark Matter Another potentially interesting consequence of the general scenario is heavy composite dark matter in the form of a technibaryon (see, e.g., [s]Nussinov:1985xr,Chivukula:1989qb,Barr:1990ca,Bagnasco:1993st,Gudnason:2006yj,Antipin:2015xia,Kribs:2016cew). The technibaryon in our scenario is expected to have a mass m_B∼ N ∼ 1001.5mu–1000 TeV, which is, interestingly, in the correct range for a thermal relic cosmology. In general composite Higgs models, one must take care to ensure that the lightest technibaryon is a neutral state if a dark-matter interpretation is desired. Furthermore, if the lightest state carries hypercharge, the scenario may face strong constraints from direct detection experiments due to the tree-level Z-boson exchange. Possible phenomenological implications range from scattering in direct detection experiments, to indirect signals from decaying dark matter.§.§ Inflation sectorThe success of our scenario relies on an exponentially long period of low-scale inflation. 
As realized in the original relaxion paper <cit.>, the construction of an explicit model of inflation with these features that is consistent with cosmological observations from Planck <cit.> and other experiments presents a challenging task, and may very well bring with it new naturalness questions that would need to be addressed. Moreover, the viability of the slope-drop mechanism cannot be evaluated without an explicit construction. While these issues concerning the inflation sector go beyond the scope of our work, they clearly represent a critical open problem and we encourage further model building efforts addressing this sector. See, e.g., [s]Hardy:2015laa,Patil:2015oxa,DiChiara:2015euo,Hook:2016mqo,Higaki:2016cqb,Choi:2016kke,You:2017kah,Evans:2017bjs for further work on this issue. § CONCLUSIONIn this paper, we have examined how the cosmological relaxation mechanism of Graham:2015cka can be utilized to dynamically generate the little hierarchy in a composite Higgs model based on underlying strongtechnicolor dynamics with three Dirac fermion flavors, leading to the global chiral symmetry breaking × U(1)_V→ SU(3)_V× U(1)_V.The relaxion was given anomaly-like couplings to both the technicolor and QCD gauge groups, with the former giving rise to the requisite coupling of the relaxion to the pNGB Higgs in the low-energy Chiral Lagrangian description, and the latter giving rise to the back-reaction on the relaxion slope required to stall the relaxion rolling after QCD chiral symmetry breaking and confinement <cit.>. The hierarchy of axion decay constants f ≪ F required by our model was engineered using a clockwork mechanism <cit.>. We found that an additional potential term for the relaxion is required in this model to obtain the correct slow-roll direction for the relaxion field during the requisite exponentially long period of low-scale inflation. With the additional term in the potential, the technifermion masses—a source of explicit global chiral symmetry breaking—are scanned as the relaxion field rolls, which in turn results in the scanning of the term in the Higgs potential which opposes EWSB. Eventually, this scanning results in the importance of the dominant top-loop radiative corrections to the Higgs potential increasing relative to the contributions arising from the technifermion masses, leading to dynamical EWSB provided the technifermion masses are chosen to be roughly a loop factor below the cutoff scale of the composite theory. We utilized the post-inflation slope-drop mechanism of Graham:2015cka to obtain an acceptably small QCD θ-angle in this framework.We conclude that little hierarchies on the order of ξ≡⟨ h ⟩^2 / F_π^2 ∼𝒪(10^-4) can be generated by our model (i.e., F_π∼ 20 TeV, with ∼ 80 TeV for N=10) while remaining within the region of parameter space in which the relaxion model is self-consistent, and without running afoul of the QCD θ-angle constraint. 
Phenomenological signatures of this (custodial violating) model include an electroweak T parameter large enough that high-precision measurements at proposed e^+e^- Higgs factories could explore essentially the entire viable parameter space for the model at the 95%-confidence exclusion level; a set of electroweak-charged states with masses on the order of 10 TeV, which would be challenging to observe at next-generation hadron colliders owing to large backgrounds and low rates; observables related to the existence of a QCD-like axion; and—depending on how this part of the model is implemented in detail—possibly also additional strongly charged states associated with the clockwork mechanism. Additionally, within the general scenario of composite Higgs models with cutoffs on the order of 100 TeV (whether tuned, or arising from relaxation as we have considered here), there may be interesting signatures associated with flavor physics, or technibaryon dark matter.We would like to thank Andrea Tesi for many useful discussions, and collaboration at an early stage of this project.The work of B.B. is supported in part by the U.S. Department of Energy under grant No. DE-SC0015634, and in part by PITT PACC.The work of M.A.F. is supported in part by the Kavli Institute for Cosmological Physics at the University of Chicago through grant NSF PHY-1125897 and an endowment from the Kavli Foundation and its founder Fred Kavli. The work of L.-T.W. is supported by the U.S. Department of Energy under grant No. DE-SC0013642. We would also like to thank the Aspen Center for Physics for hospitality, where part of the work was completed. The Aspen Center for Physics is supported by the NSF under Grant No. PHYS-1066293. § SU(3) EXPONENTIATIONThe generic form of an exponentiated SU(3) matrixin terms of the pion fields can be expressed in closed form <cit.>. If we define H≡ 2^a T^a /√(^a ^a ) ,θ ≡√(^a^a)/F_π ,andφ ≡1/3[ arccos( 3√(3)/2 H ) - π/2], then ≡exp[ 2i/F_π^aT^a ] = exp[iθ H ] = ∑_k=0^2 [ H^2 + 2/√(3) H sin( φ + 2π k3) - 1/31_3 ( 1 + 2 cos[ 2 ( φ + 2π k3) ] ) ]×exp[ 2/√(3) i θsin( φ + 2π k /3) ] / 1 - 2 cos[ 2 ( φ + 2π k/3) ] . This expression is of limited direct utility owing to its complexity; it is however of great utility in providing a closed-form expression forthat can be expanded out to find certain relevant terms, as outlined in, e.g., effective_potential.§ UNEQUAL MASSES, ML =/= MNIn equal_mass, we specialized to the case m_L=m_N≡ m, as this simplified the presentation of the analysis of the effective potential in the main body of the paper. 
The aim of this appendix is to demonstrate that qualitatively the same EWSB picture is obtained for the case m_L ≠ m_N.In particular, we will demonstrate that over a non-negligible region of parameter space, the most important characteristics of the equal-mass case carry over to the unequal-mass case: (a) an initially stable EW-symmetric solution is destabilized as ϕ rolls through a critical value, (b) the destabilization is due to the squared-eigenmass corresponding to the h-field changing sign when evaluated at the EW-symmetric solution, while the other squared-eigenmasses remain positive there, and (c) ϕ = is a supercritical pitchfork bifurcation point for the system.As in equal_mass, the one-loop effective potential is again obtained by combining (<ref>)s]Vefftree–(<ref>), except we now define ϵ_t ≡3 c_t N_c |y_t|^2 32π^2 c_m ( 2m_L + m_N ), which reduces to the definition (<ref>) in the m_L=m_N≡ m limit; the top-loop contribution to the potential is then written as [cf. (<ref>)s]Vt and (<ref>)] V ⊃ - 2/3F_π^2c_m (2m_L+m_N) ϵ_t h^2/F_π^2^2( /F_π). We also define m_N ≡ z1mu m_L.In this appendix, we will ignore the V_ϕ(ϕ) and V_qcd contributions to the potential, and present an analysis of the EWSB dynamics analogous to that in [s]EWsym and <ref>, working in the limit ϕ =+ 3Fδ where |δ| ≪ 1; the value offor the unequal mass case will be defined at (<ref>) below.Evaluating the minimization conditions ∂_X V = 0 for X∈{ h,, }, we find that the EW-symmetric phase is given by=0,and = 1/√(3) [i.e., ⟨π^0_tc⟩=0, ⟨η_tc⟩≠ 0], withimplicitly defined as a function of ϕ by (for z≠1) sin(ϕ/3F)= ±sin(2/3F_π) + z sin(4/3F_π) /√( 1 + z^2 - 2z cos( 2/F_π) ) forcos(2/3F_π) ≶ z cos(4/3F_π). There are two regions of parameter space to consider.§.§ Case 1 The first case is cos(2/3F_π) < z cos(4/3F_π); this can only obtained in the vicinity ofif z>1.We will phrase the physical squared-eigenmasses in terms ofinstead of ϕ because inverting (<ref>) in closed form is not straightforward. In the EW-symmetric phase, the three squared-eigenmasses of the canonically normalized[ At =0 and ≠ 0, the h field has a non-canonical kinetic term ⊃1/2 (∂_μ h)^2 ^2(/ F_π ); cf. (<ref>). We must thus rescale h → h / (/ F_π ) to canonically normalize. The result for m_h^2 at (<ref>) is shown after this rescaling has been effected. ] h–– system are, as a function of , m_h^2= 2/3 c_mm_L[ 3(z^2-1)/√(z^2+1-2zcos(2F_π)) - 2(z+2) ϵ_t ], m_2^2= 4 c_mm_L zcos(2F_π)-1/√(z^2+1-2zcos(2F_π)) , m_3^2= 4/3 c_mm_L (2z+1)(z-1)+2zsin^2(F_π)/√(z^2+1-2zcos(2F_π)). It is easy to see that m_3^2 is always positive on z>1; m_h^2 will change sign at ϕ= [condition (a)] if 3(z-1)/2(z+2) <ϵ_t<3(z+1)/2(z+2), while demanding that m_2^2 is still positive at this point [condition (b)] implies thatϵ_t > 3√(z^2-1)/2(z+2). Here,is given by cos(/3F)= { 1 -4ϵ_t^2(z+2)^2/9(z^2-1)^2[ [sin{13arccos[ 12z( 1+z^2-9(z^2-1)^24ϵ_t^2(z+2)^2) ] }; + z sin{23arccos[ 12z( 1+z^2-9(z^2-1)^24ϵ_t^2(z+2)^2)] } ]]^2 }^1/2z≃1=ϵ_t - z-1/6ϵ_t +⋯ .It remains to satisfy condition (c): that the destabilization point at ϕ = is a supercritical pitchfork bifurcation. As in the equal-mass case, this can be done by demonstrating that, in addition to =0, two other solutions forexist for δ >0, and that these separate from =0 as δ increases in size. Because the solution with =0 is stable for ϕ < and unstable for ϕ >, the existence of two such additional extrema in the potential in the vicinity for =0 for ϕ > (i.e., δ >0) suffices to demonstrate the supercriticality of the pitchfork bifurcation. 
We need only study the vevs in the perturbative limit 0<δ≪ 1. We will expand in formal power series = ^EW-sym(δ) + ∑_j= 0δ^2j+1^(2j+1), = ^EW-sym(δ) + ∑_j= 0δ^2j+1^(2j+1), = √(δ)∑_j=0δ^j ^(j), where ^EW-sym(δ) and ^EW-sym(δ) are the vevs forandin theEW-symmetric phase, so that the coefficients ^(j), ^(2j+1), and ^(2j+1)parametrize the deviations from the EW-symmetric solution. In the EW-symmetric phase we know that we have ^EW-sym(δ) = (1/√(3)) ^EW-sym(δ). We expand ^EW-sym(δ)≡^crit. + δ·^EW-sym slope+ ⋯, where ^crit. is the solution to m_h^2=0 [see (<ref>)]: ^crit. = F_π/2arccos{1/2z[ 1+z^2-( 3(z^2-1)/2ϵ_t(z+2))^2 ] }, and ^EW-sym slope is obtained by inserting the expansion (<ref>) and the definition ϕ = +3Fδ into the EW-symmetric solution (<ref>), expanding in powers of δ and equating coefficients: ^EW-sym slope =9F_π(z^2-1)/3(z^2-1)+4(z+2)^2ϵ_t^2 .To find the values of the other expansion coefficients in (<ref>)s]case1omegaexp–(<ref>), we substitute all relevant expansions into the extremization conditions ∂_X V =0 for X∈{h, ,} and expand in powers of δ to find a formal power series whose individual coefficients we equate to zero to obtain a hierarchical system of equations which we can solve recursively to find ^(j), ^(2j+1), and ^(2j+1); the lowest-order solutions ^(0), ^(1), and ^(1) suffice to determine the nature of the bifurcation. The resulting expressions correctly reduce to the results (<ref>) in the z→ 1 equal-mass limit, but are too lengthy to display here explicitly. Nevertheless, analysis of these expressions allows us to conclude that the two additional solutions for ^(0) exist for 0<δ≪ 1, and that the pitchfork bifurcation is thus supercritical, if the following conditions are met [we additionally impose ϵ_t>0 by definition]: [ 0 < ϵ_t < 3√(z^2-1)/2(z+2)or 3z/2(z+2) <ϵ_t < 3(z+1)/2(z+2)]and3(z-1)/2(z+2) < ϵ_t < 3(z+1)/2(z+2) . The most stringent constraint for the region z>1 is thus3z/2(z+2) < ϵ_t < 3(z+1)/2(z+2) .§.§ Case 2 The second case is cos(2/3F_π) > z cos(4/3F_π). The relevant part of this region of parameter space occurs for z<1 when ϕ≈; although there is also some part of this region of parameter space at z>1 when ϕ≈, it turns out not to be interesting for our purposes, and we will not discuss it.We follow a similar analysis as for the previous case. In the EW-symmetric phase, the three squared-eigenmasses of the canonically normalized h–– system are, as a function of , m_h^2= 2/3 c_mm_L [ 3(1-z^2)/√(z^2+1-2zcos(2F_π)) - 2(z+2) ϵ_t ],m_2^2= 4/3 c_mm_L 1-2z^2+zcos(2F_π)/√(z^2+1-2zcos(2F_π)), and m_3^2= 4 c_mm_L (1-z)+2zsin^2(F_π)/√(z^2+1-2zcos(2F_π)). It is again easy to see that m_3^2 is always positive on 0<z<1; m_h^2 will change sign at ϕ= [condition (a)] if 3(1-z)/2(z+2) <ϵ_t<3(z+1)/2(z+2) , while demanding that m_2^2 is still positive at this point [condition (b)] implies thatϵ_t > √(3)√(1-z^2)/2(z+2) , although this latter constraint is only applicable on 1/2 < z < 1, where a solution for m_2^2=0 exists. The value ofis given by (<ref>) without change.We perform the same perturbative expansion and solution as for the previous case to find the requirements to satisfy condition (c), supercriticality of the pitchfork bifurcation. The value of ^crit. changes sign relative to the solution in (<ref>), while ^EW-sym slope is still given by (<ref>). 
The solutions for ^(0), ^(1), and ^(1) are again too lengthy to show here explicitly.Once again, the relevant conclusion that can be drawn from this procedure is that the two additional solutions for ^(0) exist for 0<δ≪1, and that the bifurcation is thus supercritical, if the following conditions are met [we additionally impose ϵ_t>0 by definition]: [ 0 < ϵ_t < 3√(1-z^2)/2(z+2)or 3z/2(z+2) <ϵ_t < 3(z+1)/2(z+2)]and3(1-z)/2(z+2) < ϵ_t < 3(z+1)/2(z+2) . The most stringent constraint for the region z<1 is thusmax{3(1-z)/2(z+2) , 3z/2(z+2)} < ϵ_t < 3(z+1)/2(z+2) . §.§ Summary In summary, the unequal-mass case exhibits qualitatively similar behavior to the equal-mass case analyzed in the main text when the following conditions are met: max{3|z-1|/2(z+2) , 3z/2(z+2)} < ϵ_t < 3(z+1)/2(z+2) . This region of parameter space is displayed in unequal_mass_case_region. Note that for z=1, the constraint correctly reduces to 1/2<ϵ_t<1; cf. (<ref>).tocsectionReferences JHEP | http://arxiv.org/abs/1705.09666v2 | {
"authors": [
"Brian Batell",
"Michael A. Fedderke",
"Lian-Tao Wang"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170526180006",
"title": "Relaxation of the Composite Higgs Little Hierarchy"
} |
Submitted to IEEE/ACM Transactions on Networking Huang et al.: Optimum Transmission Window for EPONs with Gated-limited Service Optimum Transmission Window for EPONs with Gated-Limited Service Huanhuan Huang, Tong Ye, Member, IEEE, Tony T. Lee, Fellow, IEEE, and Weisheng Hu, Member, IEEE This work was supported in part by the National Science Foundation of China under Grant 61571288, Grant 61671286, and Grant 61433009, and in part by the Open Research Fund of Key Laboratory of Optical Fiber Communications (Ministry of Education of China).The authors are with the State Key Laboratory of Advanced Optical Communication Systems and Networks, Shanghai Jiao Tong University, Shanghai 200240, China (email: [email protected]; [email protected]; [email protected]; [email protected]). May 24, 2017 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== This paper studies the Ethernet Passive Optical Network (EPON) with gated-limited service. The transmission window (TW) is limited in this system to guaranteeing a bounded delay experienced by disciplined users, and to constrain malicious users from monopolizing the transmission channel. Thus, selecting an appropriate TW size is critical to the performance of EPON with gated-limited service discipline. To investigate the impact of TW size on packet delay, we derive a generalized mean waiting time formula for M/G/1 queue with vacation times and gated-limited service discipline. A distinguished feature of this model is that there are two queues in the buffer of each optical network unit (ONU): one queue is inside the gate and the other one is outside the gate. Furthermore, based on the Chernoff bound of queue length, we provide a simple rule to determine an optimum TW size for gated-limited service EPONs. Analytic results reported in this paper are all verified by simulations.Ethernet Passive Optical Network (EPON), Gated-Limited Service, M/G/1. § INTRODUCTIONThe ever-growing Internet traffic generated by emerging services, including video on demand, remote e-learning, and online gaming, continuously exacerbates the last mile bottleneck problem in recent decades <cit.>. Ethernet Passive Optical Network (EPON) has been considered as an attractive solution to this problem due to its low cost, large capacity and ease of upgrade to higher bit rates <cit.>. It has been deployed widely in many access networks such as Fiber-To-The-Home (FTTH), Fiber-To-The-Building (FTTB) and Fiber-To-The-Curb (FTTC) <cit.>.A typical EPON is plotted in Fig. <ref>. An EPON is a point-to-multipoint network, where one optical line terminal (OLT) in the central office is connected to multiple optical network units (ONUs) located at the users' premises via an optical passive splitter. In the downstream direction, the OLT broadcasts the packets to all the ONUs, and each ONU only accepts the packets destined to it. 
In the upstream direction, the OLT schedules the ONUs to share the bandwidth in a time division multiplexing (TDM) manner. The OLT assigns transmission windows (TWs) to each ONU through sending GATE messages in a round-robin fashion. Upon receiving the GATE message, the ONU transmits upstream data in the allocated TW. The number of packets that the ONU can send during a TW is called the TW size in this paper. After data transmission, the ONU generates a REPORT message to inform the OLT of its buffer status <cit.>. The TWs of two successive ONUs are separated by a guard time to avoid data overlapping. The sizes of TWs that the OLT allocates to each ONU depend on the service discipline that the OLT adopts.The gated service discipline has been widely studied in previous works <cit.>. In the gated service, each ONU is authorized to transmit the amount of data that it requests in the REPORT <cit.>. Thus, the gated service may lead to the phenomenon called the capture effect <cit.>, when an ONU with heavy traffic monopolizes the upstream channel for a long time and transmits excessive amounts of data. The capture effect will impose a large delay to other ONUs and thus impair the quality of service (QoS) of other ONUs. With the gated-limited service, the EPON users have to sign a service level agreement (SLA) with the network operator to specify the upstream traffic rate, and the OLT typically sets a limit of the maximum TW size to guarantee the QoS of each ONU according to their signed SLAs.The selection of the maximum TW size is a critical choice in the gated-limited service EPON. On one hand, if the maximum TW size is set too small, the backlog of the ONUs cannot be cleaned up quickly and the upstream bandwidth is wasted by a large number of guard times and REPORT messages. In this case, the ONU will suffer from a large delay. On the other hand, if the maximum TW size is set too large, the capture effect cannot be suppressed effectively. An extreme case is that the gated-limited service discipline will change to the gated service discipline when the limit goes to infinity.In existing literature, only a few previous works have studied the selection of the maximum TW size via simulations. The impact of the maximum TW size on delay performance of an ONU is discussed in <cit.>, in which the author points out that the maximum TW size for each ONU can be fixed based on the SLA, but doesn't provide any concrete scheme for the selection of the maximum TW size. The aim of our paper is to develop a systematic method to select a proper maximum TW size for gated-limited service EPONs.The upstream transmission process of each ONU can be described by a vacation queuing system, in which each TW of the ONU is considered as a busy period while the time between two successive TWs of the ONU is treated as a vacation period. In general, the modeling of a vacation queuing system with limited service discipline is quite difficult <cit.>. In the gated-limited service EPON, the number of packets that an ONU can transmit in a TW is limited by the maximum TW size. Thus, before the transmission of an arrival packet, it may have to wait multiple vacations, which is typically difficult to analyze <cit.>. §.§ Previous Works Related to EPONs with Gated-Limited ServiceThe exhaustive type k-limited vacation queuing systems were studied in <cit.>, where the server takes a vacation when either a queue has been emptied or a predefined number of k customers have been served during the visit. 
In <cit.>, the distributions of queue length, waiting time and busy period were obtained by using the embedded Markov chain and a combination of the supplementary variables and sample biasing techniques. In <cit.>, the authors used matrix-analytic techniques to iteratively calculate the queue length distribution. In <cit.>, a polling system with two priority queues and k-limited service discipline was analyzed, where the high priority queue is served with queue length dependent service time while the low priority queue is served with constant service time. The high priority queue length distribution at departure instants were derived by the embedded Markov chain. However, these models cannot be directly applied to gated-limited service EPONs, in which the OLT only serves the packets that arrived before the last REPORT message of an ONU up to a predefined number, regardless if the buffer is empty or not.The gated type k-limited service vacation queuing systems were considered in <cit.>, where the server serves at most k customers that present at a queue upon visiting and then begins a vacation. A queuing model based on an embedded Markov chain was developed in <cit.> to derive the Laplace-Stieltjes transforms of waiting time and busy period distributions, but the computation is too complex to give a clear physical insight into the performance of the entire system. To resolve this problem, a simple geometric approach was proposed in <cit.> to obtain the mean waiting time, but this approach can only solve a special case when the user is allowed to transmit one packet in each busy period.In exiting literature, a few works were devoted to the modeling of gated-limited service EPONs. In <cit.>, the authors gave an approximate expression of mean waiting time for a gated-limited service (which is called limited service in <cit.>) EPON under the assumption that the maximum TW size in terms of time (instead of the number of packets) is quite large, which is actually similar to analysis of the gated service EPON. In <cit.>, an approximate mean delay of gated-limited service EPONs is derived by using a discrete Markov chain, which is invalid when traffic load is high.In summary, none of the previous works have obtained a useful formula of mean waiting time for general gated-limited service EPONs where the maximum TW size is finite and larger than one, and neither have they discussed how to select a proper maximum TW size for each ONU of gated-limited service EPONs. §.§ Our Approach and ContributionsIn this paper, we analyze the polling process of EPONs with gated-limited service discipline. Our goal is to develop an insightful model to describe the delay performance of gated-limited service EPONs, and to find a systematic method of selecting the maximum TW size for each ONU based on the SLA.First, we adopt the geometric approach described in <cit.> to derive the mean waiting time of an M/G/1 queue with vacations and gated-limited service. A key step is to calculate the mean number of whole vacations, excluding the residual vacation, experienced by an arrival before it receives service. The computation of this key parameter is based on an innovative approach that establishes the connection between the mean number of whole vacations and the first and second moments of the number of packets served in a busy period. 
A distinguished feature of this model is that there are two queues in the buffer of each ONU: one queue is inside the gate and the other one is outside the gate.Next, we apply the Chernoff bound of queue length to select the optimum TW size. According to the SLA, the delay performance of an ONU shouldn't be influenced by the TW size limit if its traffic rate does not exceed the subscribed rate. Thus, the criterion of selecting the optimum TW size is to choose the smallest integer that makes the probability of queue length exceeding the TW size limit negligible. That is, when an ONU operates in the subscripted region, its buffer can be emptied with a high probability at the end of every busy period. Otherwise, the ONU will suffer from a large delay when the input traffic rate exceeds the subscribed rate. Our specific contributions are summarized as follows: * We derive a generalized formula of mean waiting time for M/G/1 queue with vacation time and gated-limited service discipline, which includes the mean waiting times of two queues.* We provide a simple rule to determine a proper optimum TW size for ONUs of the gated-limited service EPON based on their SLAs, which is proved to be effective by simulations. The remainder of this paper is organized as follows. In Section II, we demonstrate the capture effect and provide an overview of the polling process between the OLT and ONUs of a gated-limited service EPON. In Section III, we derive the mean waiting time of the M/G/1 queue with vacation time and gated-limited service discipline, and apply the result to ONUs of the EPON. Section IV discusses the method of selecting the optimum TW size and the delay performance of gated-limited service EPONs under the selected TW size. Section V draws the conclusion.§ MOTIVATION AND OVERVIEW An EPON can be considered as a polling system where a single server visits a set of queues in a cyclic order. A service discipline is one of the three policies, exhaustive service, gated service, and limited service, that specify the criteria of the server when progressing to the next queue <cit.>. In a polling system with exhaustive service discipline where the server serves a queue until it becomes empty, the capture effect occurs when a heavily loaded user transmits excessive amounts of data and monopolizes the channel for a long time, such that other lightly loaded users suffer from prolonged waiting times.To alleviate this problem, current EPON systems adopt the gated service discipline where the OLT only transmits the packets that are requested by the ONU in the last REPORT message during a TW. However, the capture effect still persists since the user with heavy traffic can report a large number of packets to the OLT at the end of a TW to secure a large size TW in the next cycle, which will lengthen the delays that other ONUs have to endure. This problem can be completely solved by the gated-limited service discipline, which is able to cap the TW size assigned to each ONU by a predefined value.This point can be illustrated by using the example where two ONUs are connected to the OLT, and each of them is equipped with an infinite buffer. The service capacity of EPON is 1000 packets/ms. We assume that these two ONUs have signed the same SLAs with the network operator, meaning that they subscribe to the same upstream traffic rate (300 packets/ms) and have identical TW sizes, 4. 
Suppose ONU 1 is a disciplined user with a fixed input rate of 300 packets/ms according to the signed SLA, whereas ONU 2 is a malicious user whose traffic input rate is more than 300 packets/ms. Their mean waiting times versus the arrival rate of ONU 2 under the gated and gated-limited service disciplines are plotted in Fig. <ref> and <ref>, respectively.With the gated service discipline, despite that the constant loading of ONU 1 is smaller than that of ONU 2, as Fig. <ref> shows, the mean waiting time of ONU 1 is not only continuously increasing with the arrival rate of ONU 2, but also is uniformly larger than that of the malicious user ONU 2. Moreover, it is unbounded when the input traffic rate generated by ONU 2 reaches 700 packets/ms. In this case, 70 percent of bandwidth is monopolized by ONU 2.On the other hand, with the gated-limited service discipline, each ONU can transmit no more than 4 packets during each busy period in our example. Fig. <ref> shows that once the arrival rate of ONU 2 exceeds the subscribed rate 300 packets/ms, it immediately suffers from a larger mean waiting time than ONU 1 and soon becomes unstable when the arrival rate is larger than 400 packets/ms due to the limited TW size. The disciplined user ONU 1 enjoys a small mean waiting time all the time, and it is immune to the malicious behavior of ONU 2. This example clearly shows that the EPON with gated-limited service discipline can completely avoid the capture effect and provide a fair service to disciplined users while penalizing malicious users.In an EPON system with N ONUs that adopts the gated-limited service discipline, the packets waiting in the buffer of each ONU are divided into two groups by a fictitious gate. The number of packets inside the gate is bounded by the maximum TW size, denoted by M. An arrival packet first waits outside the gate and then enters the gate before it can be transmitted. As Fig. <ref> illustrates, the buffer status is represented by a two-tuple state (n,m), where n is the number of packets waiting outside the gate, and m is that waiting inside the gate. The number n increases by 1 upon a new arrival, and m decreases by 1 when a packet inside the gate begins to be transmitted by the ONU. Fig. <ref> plots the polling process of an EPON, where N=2 and M=3. A 64-byte GATE is employed by the OLT to notify an ONU about the start time and the length of each allocated TW. Upon receiving the GATE message, the ONU transmits all packets inside the gate during the TW. At the end of the TW, the ONU sends a 64-byte REPORT to the OLT, which reports the number of packets waiting outside the gate. According to the number n stated in the REPORT, the OLT decides the TW size for this ONU in the next cycle, which equals the smaller of M and n. Thus, the message REPORT offers the admission for packets waiting outside the gate to enter the gate.After the ONU 1 issued the first REPORT, as Fig. <ref> shows, the buffer state changes from (2,0) to (0,2), which means two packets entered the gate. Then the ONU 1 becomes idle while the OLT polls the next ONU. When the OLT finishes the polling of all other (N-1) ONUs, it sends a GATE message to ONU 1 again to repeat the process. To avoid data overlapping induced by the clock synchronization problem between the OLT and the ONUs <cit.>, two successive TWs are separated by a guard time. As Fig. 
<ref> depicts, the constant interval G includes the guard time and the transmission time of a REPORT.The purpose of limiting the TW size is twofold: to guarantee a bounded delay experienced by disciplined users, and to constrain malicious users from monopolizing the transmission channel. With the gated-limited service discipline, the TW size M limits the maximum number of packets that can be served in a busy period. The gated-limited service discipline cannot effectively constrain malicious users if the TW size M is too large, and it may introduce longer than expected delays for disciplined users if M is too small. Therefore, selecting an appropriate maximum TW size M is critical to the performance of EPON with gated-limited service discipline. In the following sections, we devise a queueing model to analyze the mean waiting time of each ONU under the gated-limited service discipline, and provide a rule to determine the proper TW size M of an EPON. The notations used throughout this paper are defined as follows for easy reference. N Number of ONUsM The maximum TW sizeG The guard time plus the transmission time of a REPORTλ_E^* Subscribed traffic rate for all the ONUsλ_E Arrival rate of all the ONUsρ_E Offered load to the EPONn_i Number of packets found waiting outside the gate of the buffer when the i-th packet arrivesm_i Number of packets found waiting inside the gate of the buffer when the i-th packet arrivesN_i Number of packets found waiting in the buffer when the i-th packet arrives, N_i=m_i+n_iR_i Residual time seen by the i-th packet when it arrives during a busy period or a vacation time in progressY_i Duration of all the whole vacation times experienced by the i-th packet before it gets serviceX_i Service time of the i-th packetW_i Waiting time of the i-th packet § MODELING OF EPONS WITH GATED-LIMITED SERVICEIn this section, we analyze the mean packet waiting time of an ONU in the EPON with gated-limited service discipline. As Fig. <ref> shows, an ONU is busy with packet transmission during the TW, followed by a vacation period with a duration that equals the sum of NG and the TWs of other (N-1) ONUs. Therefore, at the end of each busy period, a predominate number of packets that an ONU reports to the OLT attributes to the number of arrivals during the vacation period before it. Hence, there are dependencies among TW sizes of ONUs. However, in the analysis of multiple access systems, such as Aloha or CSMA, this kind of dependency is weak and can be neglected when N is large <cit.>. Thus, we can treat each ONU independently. We make the following assumptions in the modeling of EPONs with gated-limited service: A1. All ONUs in the EPON are statistically identical.A2. The number of ONUs N is large, such that the TWs of the ONUs can be considered as i.i.d. random variables. In practice, the number of ONUs is usually more than 16 in a typical EPON.A3. The packet arrival process of the EPON is Poisson, as is the arrival process of each ONU.A4. The packets are transmitted in a first-in-first-out (FIFO) manner, and the transmission times of the packets are i.i.d. random variables with a general distribution.A5. The propagation delay between the OLT and an ONU is very small in currently commercial EPONs, and can be ignored in the analysis. Under these assumptions, each ONU can be considered as an M/G/1 queue with vacations and gated-limited service. 
In Section III-A, we derive the mean waiting time of this queuing system, which provides the key to analyze EPONs with gated-limited service discipline in this paper. §.§ M/G/1 Queue with Vacations and Gated-Limited ServiceWe adopt the following notations in the analysis of the M/G/1 queue with vacations and gated-limited service: * the traffic arrival rate is λ,* the service times of the packets X_1, X_2,⋯ are i.i.d. random variables with the first moment X and the second moment X^2, and* the vacation times V_1, V_2,⋯ are i.i.d. random variables with the first moment V and the second moment V^2. Under the gated-limited service discipline, up to Mpackets waiting outside the gate will enter the gate at the end of each busy period, and they will be served in the next busy period. After each busy period, the server takes a vacation. When the vacation terminates, the server returns to serve the packets if the buffer inside the gate is not empty; otherwise, the server takes another vacation.A cycle starts at the end of a busy period, and consists of a vacation period followed by another busy period. As Fig. <ref> illustrates, the i-th packet A_i may arrive at the system during a busy period or a vacation period. The following definitions pertaining to busy periods will be adopted in the derivation of mean waiting time of the packets: B A busy period.K=k The number of packets served in a busy period, where k=0,1,2,⋯,M.B_k A busy period, during which k packets are served, where k=0,1,2,⋯,M (B_0 happens when a vacation finishes while the buffer inside the gate is empty).b_k The probability that a busy period is a B_k.P_k The probability that a packet is served in a B_k, where k=0,1,2,⋯,M.Δ_i The number of packets served ahead of the i-th packet A_i in the same busy period. We need the following two lemmas to facilitate the derivation of the mean waiting time of packets.myLemmaLemmaThe probability that the i-th packet A_i is served in a busy period B_k is given byP_k=kb_k/KSuppose there are θ_k busy periods B_k during a time interval [0,T], where k=0,1,2,⋯,M. The probability b_k that a busy period is a B_k is defined byb_k=lim_T→∞θ_k/∑_k=0^Mθ_kDuring the time interval [0,T], the number of packets that are served in all θ_kbusy periods B_k is kθ_k, and the total number of packets served in the interval [0,T] is ∑_k=1^Mkθ_k. It follows that the probability P_k that the i-th packet A_iis served in a busy period B_k can be obtained as followsP_k =lim_T→∞kθ_k/∑_k=1^Mkθ_k=lim_T→∞k×θ_k/∑_k=0^Mθ_k/∑_k=1^Mk×θ_k/∑_k=0^Mθ_k=kb_k/∑_k=1^Mkb_k=kb_k/K.myLemma1[myLemma]LemmaThe mean number of packets served ahead of the i-th packet A_i in the same busy period is given byE[Δ_i]=K^2-K/2K.Conditioning on the event that packet A_i is served in a busy period B_k, we haveE[Δ_i]=∑_k=1^ME[Δ_i|A_i is served in a B_k]P_k =∑_k=1^M0+1+⋯+(k-1)/kkb_k/K=∑_k=1^M(k^2-k)b_k/2K=K^2-K/2K.As Fig. <ref> shows, it may take the server several busy periods to clear all packets waiting in the buffer ahead of packet A_i. Since the server can only transmit up to M packets in each busy period, before the starting of service, the waiting time of packet A_i includes the following components: * The time to complete current service or current vacation. 
When the packet A_i arrives, the residual time, either residual service time or residual vacation time, seen by A_i is denoted by R_i.* The service times of all N_i packets found waiting in the buffer when A_i arrives.* Besides residual vacation time, the duration of the whole vacation times experienced by A_i before the starting of service is denoted by Y_i.It follows from the similar argument given in <cit.>, we haveW=R+N_QX+Y,andR=E[R_i]=λX^2/2+(1-ρ)V^2/2V,where N_Q=E[N_i]=λW is the mean queue length, and ρ=λX is the traffic load. The key to derive the mean waiting time (<ref>) is the third term Y=E[Y_i], which is given in the proof of the following theorem.myTheoremTheoremThe mean waiting time of an M/G/1 queue with vacations and gated-limited service discipline is given byW=λX^2/2+(1-ρ)V^2/2V+[1-(1+ρ)(K^2-K)/2MK-λV/M]V/1-ρ-λV/M.Suppose the system is in state (n_i,m_i) when packet A_i arrives, meaning that the number of packets waiting in the buffer are n_i outside and m_i inside the gate. After A_i arrives, all m_i packets inside the gate are sent out during the first busy period, at the end of which the first M of the n_i packets enters the gate. The packet A_i enters the gate at the end of the (⌊.n_i/M.⌋+1)-th busy period, and is sent out during the(⌊.n_i/M.⌋+2)-th busy period. That is, the number of whole vacations that A_i has to experience before the starting of service is⌊.n_i/M.⌋+1, where ⌊x⌋ is the largest integer smaller than x.For example, as Fig. <ref>(a) shows, the state of system is (n_i,m_i)=(4,2) upon the arrival of A_i and M is three, thus A_i has to wait for ⌊.n_i/M.⌋+1=⌊4/3⌋+1=2 whole vacation times in the buffer before it can be transmitted. It's the same as that in Fig. <ref>(b). Thus, we haveY=(1+E[ ⌊n_i/M⌋])V.In the (⌊.n_i/M.⌋+2)-th busy period, the number of packets transmitted ahead of A_i is given by Δ_i=n_i-⌊n_i/M⌋M. For example, as Fig. <ref>(a) shows, packet A_i is the second packet served in the third busy period. Since n_i=4 upon the arrival of A_i and M=3, it follows that Δ_i=n_i-⌊n_i/M⌋M=4-3=1. However, in Fig. <ref>(b), there is no packet transmitted before A_i in the third busy period since the system state is (n_i,m_i)=(3,3) when A_i arrives, and in this case Δ_i=n_i-⌊n_i/M⌋M=3-3=0. Therefore, by definition, we haveE[⌊n_i/M⌋]=E[n_i]-E[Δ_i]/M.Notice that the mean queue length N_Q is the sum of the mean number of packets waiting outside the gate n=E[n_i] and that waiting inside the gate m=E[m_i]. It follows thatn=E[n_i]=N_Q-m.Since the packet A_i is moved into the gate at the end of the (⌊.n_i/M.⌋+1)-th busy period and served in the (⌊.n_i/M.⌋+2)-th busy period, the waiting time of A_i inside the gate, denoted as W_in^i, includes the vacation time V between these two busy periods, and the total service time of Δ_i packets transmitted ahead of A_i in the (⌊.n_i/M.⌋+2)-th busy period. Thus, the mean waiting time of A_i inside the gate is given byW_in=E[W_in^i]=V+E[Δ_i]X.From Little's Law and Lemma 2, we obtain the following mean number of packets waiting inside the gate:m=λW_in=λV+ρ(K^2-K)/2K.The theorem is established by combining (<ref>)-(<ref>) and (<ref>)-(<ref>). Suppose the distribution of service time X is given. The evaluation of the mean waiting time (<ref>) requires the first two moments of the vacation time V and the number of packets K transmitted in a busy period. Intuitively, they are related to each other because the random variable K is dependent on the number of arrivals during the vacation time V. 
Focusing on the application of the above theorem to EPONs, we will discuss the relationship between the first two moments of V and K in the next Section 3-B. §.§ Mean Packet Waiting Time of EPONs with Gated-Limited Service DisciplineIn this subsection, we apply the result of Theorem 1 to calculate the mean packet waiting time of an ONU in the gated-limited service EPON, where the rate of the traffic input to the network is λ_E and to each ONU is λ=.λ_E/N.. We assume that the distribution of the packet transmission time X is given.1) Moments of Vacation Time V of An ONU: An ONU is busy with probability ρ=λX and idle with probability 1-ρ. The mean busy period of an ONU is given byB=E[B]=ρV/1-ρ.In an EPON with N ONUs, the vacation time of an ONU is equal to the TWs of other (N-1) ONUs plus NG. According to our assumption A2, the TWs are i.i.d. random variables. By definition, we haveV=(N-1)B+NG=(N-1)ρ_E/NV/1-ρ_E/N+NG,where ρ_E=λ_EX=Nρ is the offered load to the EPON. After some reconfigurations, the first moment of the vacation time V for an ONU is given byV=N-ρ_E/1-ρ_EG.Similarly, the second moment of the vacation time V for an ONU is defined as followsV^2=V^2+σ_V^2=V^2+(N-1)σ_B^2,where σ_V^2 and σ_B^2 are the variances of V and B, respectively. Recall that B_k is a busy period during which k packets are transmitted. It follows that B_k=∑_i=1^kX_i, in which X_1,X_2,⋯,X_k are i.i.d. random variables. Let X^*(θ) and B^*(θ) be the Laplace-Stieltjes transforms of the probability density function (PDF) of the service time X and the busy period B, respectively. They are related as follows:B^*(θ)=E[e^-θB]=E[E[e^-θB|B_k]]=∑_k=0^ME[e^-θB_k]b_k=∑_k=0^M(∏_i=1^kE[e^-θX_i])b_k=∑_k=0^M[X^*(θ)]^kb_k=F[X^*(θ)],where F(z)=∑_k=0^Mb_kz^k is the generating function of b_k. Therefore, the variance of the busy period σ_B^2 can be obtained byσ_B^2=B^*''(0)-[-B^*'(0)]^2=F^''(1)[X^*'(0)]^2+F^'(1)X^*''(0)-[F^'(1)X^*'(0)]^2=(K^2-K)X^2+KX^2-(KX)^2=X^2(K^2-K^2)+K(X^2-X^2).Substituting (<ref>) into (<ref>), we obtain the following expression of V^2:V^2=V^2+(N-1)[X^2(K^2-K)+K(X^2-X)].From (<ref>) and (<ref>), we know from (<ref>) that the mean packet waiting time of an ONU can now be determined by the first two moments K and K^2 of the number of packets transmitted in a busy period.2) Moments of Number of Packets K Transmitted in A Busy Period: The first moment of K can be easily derived from (<ref>) and (<ref>), and given as follows:K=B/X=λV/1-ρ=λ_E/NV/1-ρ_E/N=λ_EG/1-ρ_E.The derivation of the second moment K^2, however, has to resort to the discrete time Markov chain embedded in the epochs at the end of busy periods. As Fig. <ref> shows, the upstream transmission process of an ONU is a sequence of cycles. For example, at the end of cycle j-1 and the beginning of cycle j, the ONU reports its queue length, denoted as l_j, to the OLT. According to this report, the OLT determines the size of the TW in cycle j, denoted as k_j, as follows: k_j=M if l_j≥M, and k_j=l_j, if l_j<M. That is, the TW size k_j in cycle j is determined by the queue length l_j at the start of cycle j and given as followsk_j=l_j-(l_j-M)^+,where (l_j-M)^+≜max{l_j-M,0} is the number of reported packets that are not transmitted in cycle j.Let q_n=lim_j→∞Pr{l_j=n}. Recall that b_k is the probability that k packets are served in a busy period. 
According to (<ref>), we haveb_k={[q_k, k=0,1,⋯,M-1; 1-∑_k=0^M-1q_k, k=M ].Thus, the second moment of K can be obtained based on the distribution of the queue length q_n as followsK^2=∑_k=0^Mk^2b_k=∑_k=0^M-1k^2q_k+M^2(1-∑_k=0^M-1q_k).On the other hand, the queue length l_j+1at the start point of cycle j+1 is determined by the number of packets k_j transmitted during the busy period of cycle j and the number of arrivals a_j during cycle j. Thus, from (<ref>), the queue length at the start point of each cycle satisfies the following Lindley's equation:l_j+1=l_j-k_j+a_j=(l_j-M)^++a_j. Let h_n=lim_j→∞Pr{a_j=n} be the probability that there are n arrivals during a cycle time C. We immediately derive the following equilibrium equation from (<ref>):q_n=∑_i=0^M-1q_ih_n+∑_i=M^M+nq_ih_n+M-i,from which we obtain the generating function of queue length:Q(z)≜∑_n=0^∞q_nz^n=[∑_i=0^M-1q_i(z^M-z^i)]H(z)/z^M-H(z),where H(z)≜∑_n=0^∞h_nz^n is the generating function of h_n.According to our assumption A3 that the packet arrival process of each ONU is a Poisson process, the distribution h_n is completely determined by the cycle time distribution. Let c(t) be the PDF of the cycle time C. We haveH(z)=∑_n=0^∞[∫_0^∞(λt)^n/n!e^-λtc(t)dt]z^n=C^*[λ(1-z)]. A cycle consists of a vacation period and a busy period, which means that the cycle time is the sum of NG and the duration of the TWs of N ONUs, which are i.i.d. random variables according to our assumption A2. It follows that the distribution of cycle time is approximately a Gaussian distribution according to the central limit theorem <cit.>. The Laplace-Stieltjes transform of the cycle time distribution is given byC^*(θ)=exp[-μ_Cθ+1/2σ_C^2θ^2],where the mean cycle time can be obtained from (<ref>), and is given as follows:μ_C=V/1-ρ=(N-ρ_E)G/1-ρ_E/1-ρ_E/N=NG/1-ρ_E,while the variance of the cycle time C is determined by the variance of busy period (<ref>):σ_C^2=Nσ_B^2=N[X^2(K^2-K^2)+K(X^2-X^2)].We know that the second moment K^2 in (<ref>) is coupled to the queue length probability q_n, for n=0,1,⋯,M-1. For a given K^2, according to Rouche's theorem and Lagrange's theorem <cit.>, we can first solve q_n (n=0,1,⋯,M-1) from (<ref>)-(<ref>), and then update K^2 by substituting q_n into (<ref>). Repeatedly applying this iterative procedure, we can obtain the value of K^2 and then obtain the mean waiting time (<ref>) of an ONU by combining (<ref>), (<ref>), (<ref>), and K^2. The procedure that numerically calculates K^2 and the mean waiting time is given in APPENDIX A. In the next section, we seek a systematic rule to select the optimum TW size for each ONU of the EPON that satisfies practical operational requirements of EPONs with gated-limited service.§ OPTIMUM TRANSMISSION WINDOW SIZEAs we mentioned in Section I, EPON users usually have to sign a SLA with the network operator to specify the upstream traffic rate. Suppose all ONUs are statistically identical, and each ONU subscribes to a SLA with a maximum traffic rate .λ_E^*/N., where λ_E^* is the total subscribed traffic rate of all the ONUs, which is less than .1/X. . An ONU is a disciplined user if its input traffic rate is in the admissible region λ=λ_E/N∈[0,λ_E^*/N]. Otherwise, it is considered a malicious user. An EPON system is regular if all the ONUs are disciplined users. In this section, we first describe the methodology and procedure to select an optimum TW size M for a given traffic rate λ_E^*/N. 
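To collect the results of this section in a usable form, the following Python sketch (ours, and not the iterative procedure of APPENDIX A) evaluates the mean waiting time of one ONU from Theorem 1 together with the vacation- and busy-period moment relations derived above. Two points are our own reading or simplification and should be treated as such: the displayed formula of Theorem 1 is interpreted with the entire numerator over the factor (1-ρ-λV̄/M), which is what combining W = R + N_Q·X̄ + Y with the proof yields; and the system is closed with the closed-form approximation for E[K²] derived in the next section, which is accurate when the TW limit is rarely reached, whereas the APPENDIX A iteration would refine it.

```python
# Sketch: mean waiting time of one ONU under gated-limited service (units: ms, packets).
# E[K^2] is closed with the regular-case approximation of Section IV, not the
# iterative procedure of APPENDIX A.
def mean_waiting_time(N, G, X1, X2, lam_onu, M):
    """N ONUs, guard+REPORT time G, service moments X1 = E[X], X2 = E[X^2],
    per-ONU Poisson rate lam_onu, maximum TW size M."""
    lam_E = N * lam_onu                      # aggregate arrival rate
    rho_E = lam_E * X1                       # offered load of the EPON
    rho   = rho_E / N                        # offered load of one ONU
    V1    = (N - rho_E) * G / (1 - rho_E)    # E[V]
    K1    = lam_E * G / (1 - rho_E)          # E[K]
    # approximate E[K^2] (valid when Pr{l >= M} is small)
    K2 = ((K1**2 + lam_E**3 * G * (X2 - X1**2) / (N * (1 - rho_E)) + K1)
          / (1 - rho_E**2 / N))
    var_B = X1**2 * (K2 - K1**2) + K1 * (X2 - X1**2)   # variance of a busy period
    V2    = V1**2 + (N - 1) * var_B                    # E[V^2]
    # Theorem 1, read with the whole numerator over (1 - rho - lam*V1/M)
    num = (lam_onu * X2 / 2
           + (1 - rho) * V2 / (2 * V1)
           + V1 * (1 - (1 + rho) * (K2 - K1) / (2 * M * K1) - lam_onu * V1 / M))
    den = 1 - rho - lam_onu * V1 / M
    return num / den if den > 0 else float("inf")      # den <= 0: ONU saturated

# Example with the simulation parameters used later in the paper:
# N = 32, G = 1.512 us, deterministic 1 us service, 9.375 packets/ms per ONU, M = 3.
print(mean_waiting_time(N=32, G=1.512e-3, X1=1e-3, X2=1e-6, lam_onu=9.375, M=3))
```

For this example the sketch returns roughly 0.10 ms, essentially the gated-service value, as expected when the TW limit is generous enough to be reached only rarely; the vanishing of the denominator coincides with the stability limit discussed in Section IV.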
We then discuss the stability and delay performance of EPON system under this selectedTW size M.As we mentioned in Section 2, the purpose of limiting the TW size M is twofold: to guarantee that the mean delays experienced by disciplined users are bounded, and to penalize malicious users. With gated-limited service discipline, the TW size M limits the maximum number of packets that can be served in a busy period. Ideally, a proper TW size M ensures that all packets arrived at a disciplined ONU during a cycle time can be completely served in the next busy period. To achieve this goal, the probability that the queue length at the beginning of a cycle exceeds the limit M should be kept very small. Thus, the criterion for the selection of TW size M is given byPr{l≥M}=lim_j→∞Pr{l_j≥M}≤ε,for some positive ε≪1, when λ=λ_E/N∈[0,λ_E^*/N]. In the following, we show that an optimum TW size M that satisfies the criterion (<ref>) can be selected by using the Chernoff bound of queue length. §.§ Chernoff Bound of Queue LengthThe Chernoff bound of the tail distribution of queue length l at the beginning of a cycle is given as follows <cit.>:Pr{l≥μ_l+t}=Pr{z^l≥z^μ_l+t}≤E[z^l]/z^μ_l+t,for any z>1, where E[z^l]=Q(z) is the generating function of the queue length distribution and μ_l=E[l] is the mean queue length.Suppose all the ONUs are disciplined users with input traffic rate λ=λ_E/N∈[0,λ_E^*/N], and the TW size M satisfies the criterion (<ref>), then each time the queue length reported by an ONU should be typically smaller than M with a high probability 1-ε. It follows that equations (<ref>) and (<ref>) will respectively degenerate to the following approximate equations:l_j ≈k_j,l_j+1 ≈a_j,which implies that the following generating functions of l_j, k_j and a_j are approximately equal:Q(z)≈F(z)≈H(z).Thus, according to (<ref>)-(<ref>), we haveQ(z)≈H(z)= exp[-λμ_C(1-z)+1/2λ^2σ_C^2(1-z)^2] = exp{-λ_EG(1-z)/1-ρ_E+[X^2(K^2-K^2).. ..+K(X^2-X^2)]λ_E^2(1-z)^2/2N}.In this equation, the second moment of the number of packets served in each busy period can be obtained byK^2=F^''(1)+F^'(1)≈H^''(1)+H^'(1).Substituting (<ref>) into (<ref>), we haveK^2= (λ_EG/1-ρ_E)^2+ρ_E^2/N[K^2-(λ_EG/1-ρ_E)^2] +λ_E^3G/N(1-ρ_E)(X^2-X^2)+λ_EG/1-ρ_E,which yieldsK^2=(λ_EG/1-ρ_E)^2+λ_E^3G/N(1-ρ_E)(X^2-X^2)+λ_EG/1-ρ_E/1-ρ_E^2/N.Substituting (<ref>) into (<ref>), we obtain Q(z) in the regular case as follows:Q(z)≈exp[-λ_EG/1-ρ_E(1-z)+λ_E^3GX^2/2(1-ρ_E)(N-ρ_E^2)(1-z)^2].The mean and variance of queue length l are respectively given as follows:μ_l=Q^'(1)=λμ_C=λ_EG/1-ρ_E,andσ_l^2=Q^''(1)+Q^'(1)-[Q^'(1)]^2=λ^2σ_C^2+λμ_C =λ_E^3GX^2/(1-ρ_E)(N-ρ_E^2)+λ_EG/1-ρ_E.It follows from the first equation of generating function of queue length in (<ref>), the Chernoff bound (<ref>) is given byPr{l≥μ_l+t}≤exp[-(μ_l+t)logz+λμ_C(z-1)+1/2λ^2σ_C^2(z-1)^2],for any z>1. Substituting t=M-μ_l into the above Chernoff bound, the criterion (<ref>) can be fulfilled if M is the smallest integer that satisfies the following inequality:Pr{l≥M}≤inf_z>1{exp[-Mlogz+λμ_C(z-1)+1/2λ^2σ_C^2(z-1)^2]}≤ε,where λ∈[0,λ_E^*/N]. We discuss the procedure to find the optimum TW size M^* that satisfies (<ref>) in the next subsection. §.§ Optimum Transmission Window SizeSolving the optimum TW size M^* from (<ref>) involves a complicated transcendental equation, therefore it can only be solved numerically. To initialize the computation procedure, we provide a lower bound M_1 and an upper bound M_2 of M^* in the following theorem. 
myTheorem1[myTheorem]TheoremThe optimum TW size M^* that satisfies (<ref>) is bounded by⌈μ_l+λσ_C√(2α)⌉=M_1≤M^*≤M_2=⌈μ_l+α+√(α^2+2ασ_l^2)⌉where α=logε^-1 and λ=λ_E/N∈[0,λ_E^*/N].▪The proof of the above theorem is given in APPENDIX B. An accurate approximation of the optimum TW size M^* can be derived from the upper deviation inequality of normal random variables. We know that the cycle time C approaches a normal random variable 𝒩(μ_C,σ_C^2) when N is large. The relation (<ref>) indicates that the queue length at the beginning of each cycle is approximately equal to the number of arrivals during a cycle time C, or l∼λC. As expected, the mean queue length μ_l given by (<ref>) is the product of the arrival rate λ and the mean cycle time μ_C. It is also interesting to note that the variance of the queue length σ_l^2given by (<ref>) is the sum of λ^2σ_C^2 and the variance of a Poisson random variable with parameter μ_l. Thus, the queue length l can be approximated by a normal random variable 𝒩(μ_l,σ_l^2) that is discretized by a Poisson process with rate λ. That is, we adopt the following approximation of queue length distribution:q_n≅q_n^'=1/√(2π)σ_l∫_n-1/2^n+1/2e^-(x-μ_l)^2/2σ_l^2dx.As Fig. <ref> shows, the bigger the gap between these two distributions, the smaller the probability q_n, where q_n is obtained through the inverse transform of Q(z) in (<ref>). We use the same set of parameters, including the number of ONUs N=32, a guard time and a REPORT message transmission time G=1.512μs, the first and second moments of service time X=1μs,X^2=1μs^2, in all figures of this subsection. It is well-known that any normal random variable X∼𝒩(μ,σ^2) satisfies the following upper deviation inequality <cit.>:Pr{X≥μ+t}≤exp[-t^2/2σ^2],for t≥0. Since the distribution of queue length l is close to that of the normal random variable 𝒩(μ_l,σ_l^2), from the above inequality, the optimum TW size M^* can be estimated by the smallest integer M̂ that satisfies the following relation:Pr{l≥M̂}≤exp[-(M̂-μ_l)^2/2σ_l^2]≤ε,and it is explicitly given byM̂=⌈μ_l+σ_l√(2α)⌉.The following inequality immediately follows from (<ref>) and (<ref>):⌈μ_l+λσ_C√(2α)⌉≤M̂≤⌈μ_l+α+√(α^2+2ασ_l^2)⌉.That is, the approximation M̂ of the optimum TW window size M^* also lies between the two bounds M_1 and M_2.As we mentioned before, the optimum TW size M^* that satisfies the inequality (<ref>) can only be solved numerically from the following equation:f(t,z^*)=exp[-(μ_l+t)logz^*+λμ_C(z^*-1)+1/2λ^2σ_C^2(z^*-1)^2]=ε,where z^* is obtained from the proof of theorem 2 in APPENDIX B and given as follows:z^*=√((λμ_C-λ^2σ_C^2)^2+4(μ_l+t)λ^2σ_C^2)-(λμ_C-λ^2σ_C^2)/2λ^2σ_C^2. The following procedure is used to solve the optimum TW size M^* that satisfies the inequality (<ref>). * λ=λ_E^*/N,M=M̂,low=M_1,up=M_2;* t=M-μ_l, calculate z^* by (<ref>);* If f(t,z^*)>ε, low=M; else up=M;/* If f(t,z^*) is too large, we update the lower bound of searching region to decrease f(t,z^*), otherwise we update the upper bound. */* If ⌈low⌉<⌈up⌉, M=(low+up)/2, go to Step 2;* M^*=⌈low⌉=⌈up⌉, output M^*. In the practical operation of EPON, the parameter ε can be selected from the region [0.001,0.1], which implies that the buffered packets of an ONU can be emptied with a probability 1-ε between 0.9 to 0.999 at the end of every busy period. A too large ε causes a too small M that impairs the delay performance of disciplined users. On the other hand, a too small ε causes a too large M that cannot effectively suppress the capture effect.Fig. 
Fig. <ref> and <ref> respectively illustrate how the optimum TW size M^*, its lower bound M_1, its upper bound M_2, and its approximation M̂ vary with the tail bound ε and with the subscribed traffic rate λ_E^*/N of each ONU. In these figures, we find that both the approximate and the optimum TW sizes, M̂ and M^*, are always bounded between M_1 and M_2. Moreover, the approximation M̂ is uniformly smaller than M^*, which can be readily seen from the distributions illustrated in Fig. <ref>. The tail of the normal approximation q_n^' decays faster than that of q_n; thus, a smaller TW size is needed to achieve the same tail probability. In spite of that, as Fig. <ref> shows, the difference between M̂ and M^* is very small in the region ε∈[0.001,0.1] that is of interest in practice. Besides, the selection of the TW size is very sensitive to the traffic rate subscribed by a user: as illustrated in Fig. <ref>, the TW size increases greatly with the growth of λ_E^*/N. The tail distribution of the queue length, ∑_n=M^∞q_n, and that of the approximation, ∑_n=M^∞q_n^', are plotted in Fig. <ref>, where each ONU inputs traffic at the rate of 21.875 packets/ms. In the region of interest ε∈[0.001,0.1], the difference between the TW sizes selected by the tail distributions of q_n and q_n^' is quite small. Their gap only becomes large when ε is extremely small, such as in the area ε∈[10^-5,10^-4], which is far below the region of interest in the practical operation of EPONs. For a fixed ε=0.05, as Fig. <ref> shows, even though M̂<M^*, the probability Pr{l≥M̂} is still below ε, which means that the approximation M̂ also satisfies the criterion (<ref>). If the TW size is set equal to the lower bound M_1, the criterion Pr{l≥M}≤ε could be violated and packets may experience longer delays than expected. On the other hand, if the TW size is set equal to the upper bound M_2, the criterion can be easily satisfied because Pr{l≥M_2} is negligible in comparison with ε. However, the upper bound M_2 would be too large to be an effective constraint on malicious users. As a compromise, the approximation M̂ can serve as a practical TW size for EPONs. §.§ Stability and Delay Performance of EPON In this subsection, we study the delay performance of disciplined ONUs in a regular gated-limited service EPON with the TW size limit M given by (<ref>). The gated service discipline is a special case of the gated-limited service discipline with infinite TW size; thus, the mean waiting time under gated service is a lower bound on that under gated-limited service. The EPON system with gated service is stable if the offered load ρ of each ONU is less than 1/N, i.e., λ<1/(N·E[X]), which guarantees that input packets will be transmitted steadily and that their mean waiting time, or mean queue length, is bounded. However, a bounded mean queue length is not sufficient to guarantee that a regular EPON with gated-limited service is stable, due to the limitation of the TW size M. From the mean queue length formula (<ref>) of an ONU, the stability condition of the gated-limited service EPON is given by μ_l=λμ_C=λ·NG/(1-ρ_E)<M, where ρ_E=Nρ=Nλ E[X]. After some algebraic manipulation, a stable traffic rate λ should be bounded by λ̂, which is defined as follows: λ<λ̂=M/(N(M·E[X]+G)). It is evident that when M→∞, λ̂→1/(N·E[X]). Furthermore, the TW size limit M is selected based on criterion (<ref>), which guarantees a very small probability ε that the queue length will exceed the limit M; this is a much more stringent condition than the stability condition (<ref>). For the parameters used in the simulations below, this margin can be checked with the short numerical sketch that follows.
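As a quick illustration (ours, using the parameter values of this section and our own variable names), the following snippet evaluates λ̂=M/(N(M·E[X]+G)) for the two TW sizes selected below and verifies that the subscribed rate stays below λ̂, which in turn stays below the gated-service capacity 1/(N·E[X]):

# Stability margin check for the two scenarios considered below (rates in packets/us).
N, G, EX = 32, 1.512, 1.0
for M, subscribed in ((3, 9.375e-3), (9, 21.875e-3)):
    lam_hat = M / (N * (M * EX + G))           # boundary between overloaded and saturated regions
    lam_sat = 1 / (N * EX)                     # per-ONU capacity under gated service
    print(M, subscribed <= lam_hat < lam_sat)  # prints True for both M=3 and M=9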
Therefore, in a regular EPON with gated-limited service, disciplined ONUs with input traffic rate in the region λ∈[0,λ_E^*/N] must all be stable, which implies λ_E^*/N≤λ̂. According to the conditions described above, the performance of an ONU in a gated-limited service EPON can be characterized in the following three traffic regions: * Subscribed region λ∈[0,λ_E^*/N]. In the subscribed region, the QoS of each ONU in terms of mean delay is guaranteed by the SLA signed with the network operator. * Overloaded region λ∈(λ_E^*/N,λ̂). If an ONU inputs packets at a rate higher than its subscribed rate but still within the overloaded region, the mean delay is impaired by the limit of the maximum TW size M, but it is still bounded. This region provides an adjustment period for the ONU to decrease its input traffic rate when the user experiences a larger-than-expected delay. * Saturated region λ∈[λ̂,1/(N·E[X])). In the saturated region, the arrival rate is too high for the OLT to handle. The ONU is unstable, as the queue length outside the gate of the buffer grows without bound. In the subscribed region λ∈[0,λ_E^*/N] or in the overloaded region λ∈(λ_E^*/N,λ̂), the mean waiting time of an ONU can be calculated by the procedure described in APPENDIX A. In the saturated region λ∈[λ̂,1/(N·E[X])), the mean waiting time is unbounded. The analytical results for the mean waiting time in these traffic regions are verified by simulations. We consider a 1G EPON with N=32 statistically identical ONUs, and assume that they have signed the same SLAs. The parameters G=1.512μs, E[X]=1μs and E[X^2]=1μs^2 are the same as those used in Section 4-B. Thus, the service capacity of the EPON is 1000 packets/ms, evenly divided into 31.25 packets/ms for each ONU. Two scenarios are considered in our study, in which each user subscribes to a low traffic rate of 9.375 packets/ms and a high traffic rate of 21.875 packets/ms, respectively. According to formula (<ref>), for a fixed ε=0.05, we should set the maximum TW size M equal to 3 and 9, respectively, for the above two scenarios. Fig. <ref> and <ref> illustrate the variance of the busy periods of each ONU. It is evident that the analytical result given in APPENDIX A is consistent with the simulation result, which validates the accuracy of our analysis in Section 3. If we adopt the gated service discipline (i.e., M is infinite), Fig. 9 shows that the variance of busy periods monotonically increases with the arrival rate up to infinity. However, with the gated-limited service discipline (i.e., M is finite), the variance of busy periods approaches zero, because each ONU transmits a constant number M of packets in each busy period when the arrival rate is high. In the subscribed region, as shown in Fig. <ref> and <ref>, the disciplined users in the gated-limited service EPON experience the same mean waiting time as in the gated service EPON. This desirable property is due to the criterion Pr{l≥M}≤ε for selecting the TW size, which ensures that M is large enough to empty the buffered packets in almost every busy period. In the overloaded region, the ONU will suffer a larger mean delay than expected, which serves as a precautionary signal for the ONU to reduce its load back into the subscribed region. If the ONU continues to increase the input traffic rate into the saturated region, its mean delay tends to infinity and its service collapses, which prevents this malicious behavior from impacting the QoS of the other disciplined users. § CONCLUSION In this paper, we consider an EPON with gated-limited service discipline as a polling system.
Each ONU of an EPON is modeled as an M/G/1 queue with vacations and gated-limited service. A distinguished feature of this model is that there are two queues in the buffer of each ONU: one queue is inside the gate and the other one outside the gate. We extend the traditional geometric approach to derive the Pollaczek-Khinchine formula of mean waiting time. Moreover, the Chernoff bound method is applied to the selection of the optimum TW size. The criterion of selecting the TW size M is to guarantee that the delay performances experienced by disciplined users are bounded, and to constrain malicious users from monopolizing the transmission channel. For this purpose, we devise a simple rule to determine a proper optimum TW size for each ONU of the gated-limited service EPON based on their SLAs.§ ITERATIVE PROCEDURE OF CALCULATING MEAN WAITING TIME W AND VARIANCE OF BUSY PERIODS Σ_B^2As analyzed in Section 3, it is critical to obtain the second moment of the number of packets served in a busy period K^2 when calculating the mean waiting time and variance of busy periods. However, the value of K^2 and that of distribution q_n (n=0,1,⋯,M-1) depend on each other, and we can only solve them numerically.According to Rouche's theorem, the denominator of (<ref>) has M zeros inside and on |z|=1, one of them is z=1. Then by Lagrange's theorem <cit.>, the other (M-1) zeros inside |z|=1 are given byz_m=∑_n=1^∞.e^2πmni/M/n!d^n-1/dz^n-1[H(z)]^n/M|_z=0,for m=1,2,⋯,M-1. Since Q(z) is analytic in |z|≤1, the numerator of (<ref>) must also be zero at z=z_m. Therefore, q_n (n=0,1,⋯,M-1) satisfy the following (M-1) linear equations:∑_n=0^M-1q_n(z_m^M-z_m^n)=0, m=1,2,⋯,M-1.Another equation is given as follows by the condition Q(1)=1:∑_n=0^M-1q_n(M-n)=M-λμ_C=M-λ_EG/1-ρ_E.Thus, if we know the expression of H(z), we can solve q_n (n=0,1,⋯,M-1) by combining (<ref>)-(<ref>), then obtain K^2 based on (<ref>), which isK^2=∑_n=0^M-1n^2q_n+M^2(1-∑_n=0^M-1q_n). However, the expression of H(z) is instead dependent on K^2. Therefore, given a calculation accuracy δ, we can numerically solve K^2 through the following iteration procedure:* K^2=0;*Calculate H(z) by combining (<ref>), (<ref>)-(<ref>);*Solve z_m,m=1,2,⋯,M-1 by (<ref>);*Solve q_n,n=0,1,⋯,M-1 by combining (<ref>) and (<ref>);*If |∑_n=0^M-1n^2 q_n+M^2(1-∑_n=0^M-1q_n)-K^2|>δ, K^2=∑_n=0^M-1n^2q_n+M^2(1-∑_n=0^M-1q_n), go to Step 2;*Output K^2.Then, we can easily obtain the variance of busy periods for an ONU by substituting K^2 and (<ref>) into (<ref>), and the mean waiting time by combining (<ref>), (<ref>), (<ref>), (<ref>) and K^2.§ PROOF OF THEOREM 2Define the following function:f(t,z)=exp[-(μ_l+t)logz+λμ_C(z-1)+1/2λ^2σ_C^2(z-1)^2],where z>1 and t≥0. We know that the following inequality holds for all x≥0:-x≤-log(1+x)≤-(x-1/2x^2).Let x=z-1, and apply (<ref>) to (<ref>), then we have the following inequality:f_1(t,z)≤f(t,z)≤f_2(t,z),where the two functions f_1(t,z) and f_2(t,z) are defined as follows:f_1(t,z)=exp[-t(z-1)+1/2λ^2σ_C^2(z-1)^2],andf_2(t,z)=exp[-t(z-1)+1/2(σ_l^2+t)(z-1)^2]. 
Take the derivatives of (<ref>), (<ref>) and (<ref>), we obtain-μ_l+t/z^*+λμ_C+λ^2σ_C^2(z^*-1) =0, -t+λ^2σ_C^2(z_1-1) =0,and-t+(σ_l^2+t)(z_2-1)=0.It follows from (<ref>), the following inequalities should hold:inf_z>1f_1(t,z) =f_1(t,z_1)≤f_1(t,z^*)≤f(t,z^*),inf_z>1f(t,z) =f(t,z^*)≤f(t,z_2)≤f_2(t,z_2).Combining (<ref>) and (<ref>), the following expression can be obtained from z_1 and z_2 given by (<ref>) and (<ref>) respectively,f_1(t,z_1) =exp[-t^2/2λ^2σ_C^2]≤f(t,z^*)≤exp[-t^2/2(σ_l^2+t)]=f_2(t,z_2). Let t^*, t_1 and t_2 be the solutions that respectively satisfy the following three equations:exp[-t_1^2/2λ^2σ_C^2]=f(t^*,z^*)=exp[-t_2^2/2(σ_l^2+t_2)]=ε.Then, according to (<ref>), we haveexp[-t^*^2/2λ^2σ_C^2]≤f(t^*,z^*)=exp[-t_1^2/2λ^2σ_C^2]=exp[-t_2^2/2(σ_l^2+t_2)]≤exp[-t^*^2/2(σ_l^2+t^*)].Since those exponential functions in (<ref>) are monotonically decreasing with t, we havet_1≤t^*≤t_2.Substituting t=M-μ_l into (<ref>), we obtainM_1≤M^*≤M_2.Hence, the smallest integer M_1 that satisfies the following inequality is a lower bound of M^*:exp[-(M_1-μ_l)^2/2λ^2σ_C^2]≤ε,and it can be explicitly expressed as follows:M_1=⌈μ_l+λσ_C√(2logε^-1)⌉=⌈μ_l+λσ_C√(2α)⌉.Similarly, the smallest integer M_2 that satisfies the following inequality is an upper bound of M^*:exp[-(M_2-μ_l)^2/2(λ^2σ_C^2+M_2)]≤ε,and it can be given as follows:M_2 =⌈μ_l+logε^-1+√((logε^-1)^2+2logε^-1(μ_l+λ^2σ_C^2))⌉=⌈μ_l+α+√(α^2+2ασ_l^2)⌉.We obtain (<ref>) by combining (<ref>)-(<ref>). IEEEtran | http://arxiv.org/abs/1705.09433v1 | {
"authors": [
"Huanhuan Huang",
"Tong Ye",
"Tony T. Lee",
"Weisheng Hu"
],
"categories": [
"cs.NI",
"cs.PF"
],
"primary_category": "cs.NI",
"published": "20170526051655",
"title": "Optimum Transmission Window for EPONs with Gated-Limited Service"
} |
We show that in Grayson's model of higher algebraic K-theory using binary acyclic complexes, the complexes of length two suffice to generate the whole group.Moreover, we prove that the comparison map from Nenashev's model for K_1 to Grayson's model for K_1 is an isomorphism. It follows that algebraic K-theory of exact categories commutes with infinite products. § INTRODUCTION On a conceptual level, the algebraic K-theory functor is by now well understood in terms of a universal property, which encapsulates the known fundamental properties of Quillen's or Waldhausen's construction <cit.>.One of the more elusive properties of algebraic K-theory is its compatibility with infinite products. This question was studied by Carlsson <cit.> in connection to work of Carlsson–Pedersen on the split injectivity of the K-theoretic assembly map <cit.>, and permeates the literature adapting their “descent” argument to prove more general cases of the K-theoretic Novikov conjecture <cit.>. Carlsson's proof, while relying on the Additivity theorem, is for the most part concerned with simplicial techniques involving what he calls quasi-Kan complexes.The present article aims to provide a different perspective on the question. In <cit.>, Grayson showed that the higher algebraic K-theory of an exact category can be expressed in terms of binary acyclic complexes. See <ref> for a quick review.In <cit.> Nenashev gave a different presentation K^N_1() of K_1() whose generators are binary acyclic complexes of length two. Regarding a binary acyclic complex of length two as a class in K_0(Ω) defines a natural homomorphism Φ K_1^N() → K_0(Ω), see <cit.> and the beginning of <ref>.Unpublished work of Grayson shows that Φ is a surjection, cf. <cit.>. Building on Grayson's unpublished argument (see <ref>), we improve this to a bijectivity statement. The map Φ is an isomorphism. We use this to show the following theorem. For every family {_i}_i∈ I of exact categories, the natural map ^-∞(∏_i∈ I_i)→∏_i∈ I^-∞(_i) is a π_*-isomorphism.Since Grayson's results in <cit.> rely only on the fundamental properties of K-theory, our proof is not only elementary, but also exhibits <ref> as a consequence of the universal property of algebraic K-theory.Since the proof of <ref> is technical, we will begin by showing the following weaker statements. Here the proofs are considerably easier and they suffice to deduce <ref>.In <ref> we give a proof that the complexes of length four suffice to generate K_0(Ω). The canonical map K_0(Ω_[0,4])→ K_0(Ω) is a surjection.A closer inspection of the constructions involved in proving <ref> provides a candidate for a homomorphism K_0(Ω) → K_0(Ω_[0,4]). Admitting slightly larger complexes, we show that this homomorphism is well-defined and provides a right inverse to the canonical comparison map. For every n∈ the canonical map K_0(Ω_[0,7]^n) → K_0(Ω^n) ≅ K_n() admits a natural section.In <ref>, we use this right inverse to show <ref>. Finally, we give the proof of <ref> in <ref>.
§.§ AcknowledgmentsWe are indebted to Daniel Grayson for sharing with us his proof of surjectivity of Φ. We thank Robin Loose for helpful discussions. § BINARY COMPLEXES In this section, we give a quick review of Grayson's description of the higher algebraic K-groups <cit.>. In the followingwill always denote an exact category. Chain complexes inwill always be assumed to be bounded. Denote by C the category of (bounded) chain complexes in .A chain complex (P_*,d) inis called acyclic if each differential admits a factorization into an admissible epimorphism followed by an admissible monomorphism d_nP_n ↠ J_n-1↣ P_n-1 such that J_n ↣ P_n ↠ J_n-1 is a short exact sequence.Denote by C^q⊆ C the full subcategory of acyclic chain complexes. A binary acyclic complex (P_*, d, d') is a graded object P_* overtogether with two degree -1 maps d, d'P_* → P_* such that both (P_*,d) and (P_*,d') are acyclic chain complexes.The differentials d and d' are called the top and bottom differential. A morphism of binary acyclic complexes is a degree 0 map of underlying graded objects which is a chain map with respect to both differentials. The resulting category of binary acyclic complexes is denoted by B^q. There is a natural exact functor Δ C^q() → B^q() which duplicates the differential of a given acyclic chain complex.Fix n > 0. Since both C^q and B^q are exact categories, these constructions can be iterated. For any finite sequence = (W_1,…,W_n) in {B,C}, denote by ^q the category W_1^q… W_n^q. Ifis the constant sequence on the letter B, we also write (B^q)^n. Lettingvary over all possible choices defines a commutative n-cube of exact categories which induces a commutative n-cube of spectra upon taking algebraic K-theory. The spectrum (Ω^n) is defined to be the total homotopy cofiber of this cube.We rely on the following result about (Ω^n).The abelian groups K_n() and K_0(Ω^n) are naturally isomorphic. This theorem facilitates a completely algebraic description of higher K-theory <cit.>. For example, it implies that K_1() can be described as the Grothendieck group of the category of binary acyclic complexes B^q with the additional relation that a binary acyclic complex represents the trivial class if its top and bottom differential coincide. We use this description of K_1() extensively in <ref>.Throughout this article, we employ the following variations of this construction: Let J ⊆ be an interval, i.e. J = { z ∈| a ≤ z ≤ b } for some a,b ∈∪{±∞}. Then we denote by B^q_J and C^q_J the categories of (binary) acyclic complexes supported on J. Thus, any sequence of intervals = (J_1,…,J_n) ingives rise to an abelian group K_0(Ω_) := K_0(Ω_J_1…Ω_J_n). If ' = (J_1',…,J_n') is another such sequence satisfying J_k ⊆ J_k' for all k, we have a natural homomorphismi_,' K_0(Ω_) → K_0(Ω_').Note that Δ C^q_J→ B^q_J admits two natural splits ⊤ andwhich forget the bottom, respectively top, differential of a binary acyclic complex. Using one of these, we see that i_,' is naturally a retract of the homomorphismK_0(Ω_J_1B^q_J_2… B^q_J_n) → K_0(Ω_J_1'B^q_J_2'… B^q_J_n').Moreover, we observe that any permutation σ{1,…,n}{1,…,n} induces an isomorphismK_0(Ω_I_1…Ω_I_n) ≅ K_0(Ω_I_σ(1)…Ω_I_σ(n)). It is notationally convenient to work with -graded bounded chain complexes instead of -graded chain complexes. 
The following lemma justifies this convention.The natural map K_0(Ω_[0,∞)^n) → K_0(Ω^n) is an isomorphism for all n ≥ 1.We begin with the case n=1.The map K_0(Ω_[0,∞)) → K_0(Ω) is an isomorphism since the class group of a filtered union is isomorphic to the colimit of the class groups and shifting induces an isomorphism in K-theory.We will now prove the lemma by induction. Assume that it holds for n-1.The map K_0(Ω^n_[0,∞))→ K_0(ΩΩ^n-1_[0,∞)) is a retract of K_0(Ω_[0,∞)(B^q_[0,∞))^n-1)→ K_0(Ω(B^q_[0,∞))^n-1), which is an isomorphism by the induction beginning. Hence, K_0(Ω^n_[0,∞))→ K_0(ΩΩ^n-1_[0,∞)) is an isomorphism as well. Using that Ω and Ω_[0,∞) commute, it suffices to show thatK_0(Ω^n-1_[0,∞)Ω)→ K_0(Ω^n) is an isomorphism. This map is a retract of K_0(Ω^n-1_[0,∞)B^q)→ K_0(Ω^n-1B^q), which is an isomorphism by assumption. From now on, we write K_0(Ω) for K_0(Ω_[0,∞)). All chain complexes considered in the sequel will be assumed to be positive.In the remainder of this section, we record some important properties of K_0(Ω).Let = (P_*, d, d') be a binary acyclic complex and let i ∈.* The i-th shift [i] is defined to be the binary acyclic complex with underlying graded object P[i]_* = P_*-i and differentials P[i]_n = P_n-i[r, shift right, "d_n-i'"'][r, shift left, "d_n-i"][+10pt] P_n-i-1 = P[i]_n-1 * The i-th suspension Σ^i is defined to be the binary acyclic complex with underlying graded object Σ^iP_* = P_*-i and differentials Σ^iP_n = P_n-i[r, shift right, "(-1)^i d_n-i'"'][r, shift left, "(-1)^i d_n-i"][+20pt] P_n-i-1 = Σ^iP_n-1Our terminology is in disagreement with <cit.>, where the suspension is called a shift. As for ordinary chain complexes, we have the following lemma:Letbe a binary acyclic complex. Then [[i]] = [Σ^i] = (-1)^i[] ∈ K_0(Ω). The first equality holds since [1] ≅Σ. The second equality holds sinceand Σ fit into a short exact sequence with the cone of .A binary double complex is a bounded bigraded object (P_k,l)_k,l ∈ intogether with morphisms d^h_k,l P_k,l→ P_k-1,l,d^v_k,l P_k,l→ P_k,l-1 and d^',h_k,l P_k,l→ P_k-1,l,d^',v_k,l P_k,l→ P_k,l-1 such that (P_*,*, d^h, d^v) and (P_*,*, d^',h, d^',v) are double complexes in the sense that(P_*,l, d^h) and (P_*,l,d^',h) are chain complexes for all l, (P_k,*,d^v) and (P_k,*,d^',v) are chain complexes for all k,and d^hd^v =d^vd^h, respectively d^',hd^',v = d^',vd^',h.We call (P_*,*, d^h, d^v, d^',h, d^',v) a binary acyclic double complex if (P_*,l, d^h, d^',h) is a binary acyclic complex for all land (P_k,*, d^v, d^',v) is a binary acyclic complex for all k. Let (P_*,*, d^h, d^v, d^',h, d^',v) be a binary acyclic double complex. Forming the total complex of (P_*,*, d^h, d^v) and (P_*,*, d^',h, d^',v), using the usual sign trick, produces a binary acyclic complex . Filteringaccording to the horizontal (respectively vertical) filtration of the double complexes and applying <ref> immediately gives the following lemma.Let (P_*,*, d^h, d^v, d^',h, d^',v) be a binary acyclic double complex.Then we have ∑_l (-1)^l [P_*,l, d^h, d^',h] = ∑_k (-1)^k [P_k,*, d^v, d^',v] in K_0(Ω_). This relation is analogous to the relation used by Nenashev <cit.> to define K^N_1(), hence its name. Specifying a binary double complex involves a sizeable amount of data. 
In order to write down such complexes without occupying too much space, we will follow Nenashev's convention and depict binary double complexes by diagrams of the form ∙[r, shift right][r, shift left][d, shift right][d, shift left]∙[r, shift right][r, shift left][d, shift right][d, shift left]∙[d, shift right][d, shift left]∙[r, shift right][r, shift left][d, shift right][d, shift left]∙[r, shift right][r, shift left][d, shift right][d, shift left]∙[d, shift right][d, shift left]∙[r, shift right][r, shift left]∙[r, shift right][r, shift left]∙ where it is understood that the left vertical morphisms commute with the top horizontal morphisms (corresponding to d^h and d^v), and that the right vertical morphisms commute with the bottom horizontal morphisms (corresponding to d^',h and d^',v).Let J be an object inand denote by τ_J := [ 0 𝕀_J; 𝕀_J 0 ] J ⊕ J → J ⊕ J the automorphism which switches the two summands.Then the element [J⊕ J[r, shift left, "𝕀"][r, shift right, "τ_J"'] [-5pt] J⊕ J ] ∈ K_0(Ω_[0,2]) has order two.An application of <ref> to the binary acyclic double complex J ⊕ J[r, shift left, "τ_J"][r, shift right, "𝕀_J⊕ J"'][d, shift left, "τ_J"][d, shift right, "τ_J"'] J ⊕ J[d, shift left, "𝕀_J⊕ J"][d, shift right, "τ_J"']J ⊕ J[r, shift left, "τ_J"][r, shift right, "τ_J"'] J ⊕ J shows that [ J⊕ J[r, shift left, "𝕀_J⊕ J"][r, shift right, "τ_J"'] [-5pt] J⊕ J]=-[J⊕ J[r, shift left, "τ_J"][r, shift right, "𝕀_J⊕ J"'] [-5pt] J⊕ J ] in K_0(Ω_[0,2]); cf. also <cit.>.On the other hand, the isomorphismJ ⊕ J[r, shift left, "τ_J"][r, shift right, "𝕀_J⊕ J"'][d, "τ_J"'] J ⊕ J[d, "𝕀_J⊕ J"] J ⊕ J[r, shift left, "𝕀_J⊕ J"][r, shift right, "τ_J"'] J ⊕ Jof binary acyclic complexes implies[ J⊕ J[r, shift left, "𝕀_J⊕ J"][r, shift right, "τ_J"'] [-5pt] J⊕ J]=[J⊕ J[r, shift left, "τ_J"][r, shift right, "𝕀_J⊕ J"'] [-5pt] J⊕ J ]∈ K_0(Ω_[0,2]).§ SHORTENING BINARY COMPLEXES The goal of this section is to prove <ref> and <ref>. As before,denotes an exact category. The basic approach is the same as that of Harris <cit.> in showing that the canonical map from Bass' K_1 to K_0(Ω) is an isomorphism for split-exact categories. Our arguments rely on a description of equality of classes in K_0 of an exact category which is due to Heller <cit.>. We include a proof following <cit.> for the reader's convenience.Let J,K ∈.* We call J and K extension-equivalent if there are objects A, B ∈ such that there exist exact sequencesA[r, rightarrowtail] J[r, twoheadrightarrow] BandA[r, rightarrowtail] K[r, twoheadrightarrow] B.* We call J and K stably extension-equivalent if there exists an object S ∈ such that J ⊕ S and K ⊕ S are extension-equivalent. Despite its name, extension-equivalence need not be an equivalence relation. On the other hand, the following lemma shows that stable extension-equivalence is always an equivalence relation.Letbe an exact category and let J, J', K, K' ∈.Then [J] - [J'] = [K] - [K'] ∈ K_0() if and only if J ⊕ K' and K ⊕ J' are stably extension-equivalent.Define a relation on pairs of objects inby setting (J,J') ∼ (K,K') if and only if J ⊕ K' and K ⊕ J' are stably extension-equivalent.We claim that ∼ is an equivalence relation.Reflexivity and symmetry are obvious.To see transitivity, suppose that (J,J') ∼ (K,K') ∼ (L,L'), i.e. 
there exist A, B, C, D, S, T ∈ such that there are exact sequences A[r, rightarrowtail] J ⊕ K' ⊕ S[r, twoheadrightarrow] B and A[r, rightarrowtail] K ⊕ J' ⊕ S[r, twoheadrightarrow] B as well as C[r, rightarrowtail] K ⊕ L' ⊕ T[r, twoheadrightarrow] D and C[r, rightarrowtail] L ⊕ K' ⊕ T[r, twoheadrightarrow] D. Then the sequences formed by taking direct sums A ⊕ C[r, rightarrowtail] J ⊕ K' ⊕ S ⊕ K ⊕ L' ⊕ T[r, twoheadrightarrow] B ⊕ D,A ⊕ C[r, rightarrowtail] K ⊕ J' ⊕ S ⊕ L ⊕ K' ⊕ T[r, twoheadrightarrow] B ⊕ D are exact, too. Rewriting J ⊕ K' ⊕ S ⊕ K ⊕ L' ⊕ T ≅ J ⊕ L' ⊕ K ⊕ K' ⊕ S ⊕ T and K ⊕ J' ⊕ S ⊕ L ⊕ K' ⊕ T ≅ L ⊕ J' ⊕ K ⊕ K' ⊕ S ⊕ T proves transitivity, so ∼ is an equivalence relation.Denote by k() the set of equivalence classes in ob× ob with respect to ∼.We write [J,J'] for the class of (J,J') in k().Clearly, if (J,J') and (K,K') are pairs of objects such that J ≅ K and J' ≅ K', then [J,J'] = [K,K'].Hence, the direct sum operation ininduces the structure of a commutative monoid on k() via [J,J'] + [K,K'] := [J ⊕ K, J' ⊕ K']. It is easy to check that [J,J] = [0,0] for every object J ∈, so k() is an abelian group since [J,J'] + [J',J] = [J ⊕ J', J ⊕ J'] = 0. Let now J ↣ K ↠ L be an exact sequence in . Since both J[r, rightarrowtail] J ⊕ L[r, twoheadrightarrow] L. and J [r, rightarrowtail] K[r, twoheadrightarrow] L are exact, it follows that [J ⊕ L, 0] = [K,0].Hence, the map ob→ k(), J ↦ [J,0] induces a homomorphism ϕ K_0() → k().Note that ϕ sends the class [J] - [J'] ∈ K_0() to ϕ([J] - [J']) = [J,J'], so ϕ is an epimorphism.Moreover, it is immediate from the definition of ∼ that the kernel of ϕ is trivial.This proves that ϕ is an isomorphism, and the claim of the lemma follows. We can now prove <ref>. Let :=(P_*, d, d') be a binary acyclic complex supported on [0,m] for some m∈. Choose factorizations d_nP_n ↠ J_n-1↣ P_n-1 and d_n'P_n ↠ K_n-1↣ P_n-1 for all n. Since J_n and K_n both fit into an exact sequence with P_n-1,…, P_0, they represent the same class in K_0(). Therefore, there exist A_n,B_n,S_n∈ and exact sequences A_n[r, rightarrowtail] J_n ⊕ S_n[r, twoheadrightarrow] B_nandA_n[r, rightarrowtail] K_n ⊕ S_n[r, twoheadrightarrow] B_n. For n ≥ 3, let _n denote the binary acyclic complex [column sep = small]A_n[r, shift left][r, shift right] K_n ⊕ S_n ⊕ J_n[r, shift left][r, shift right] B_n⊕ P_n⊕ A_n-1[r, shift left][r, shift right] J_n-1⊕ K_n-1⊕ S_n-1[r, shift left][r, shift right] B_n-1 consisting of top differential [row sep= tiny] A_n[r, rightarrowtail] K_n ⊕ S_n[r, twoheadrightarrow][d, phantom, "⊕"] B_n[d, phantom, "⊕"]J_n[r, rightarrowtail] P_n[r, twoheadrightarrow, "d_n"][d, phantom, "⊕"] J_n-1[d, phantom, "⊕"]A_n-1[r, rightarrowtail] K_n-1⊕ S_n-1[r, twoheadrightarrow] B_n-1 and bottom differential [row sep= tiny] A_n[r, rightarrowtail] J_n⊕ S_n[r, twoheadrightarrow][d, phantom, "⊕"]B_n[d, phantom, "⊕"] K_n[r, rightarrowtail] P_n[r, twoheadrightarrow, "d_n'"][d, phantom, "⊕"]K_n-1[d, phantom, "⊕"] A_n-1[r, rightarrowtail] J_n-1⊕ S_n-1[r, twoheadrightarrow]B_n-1.Note that _n is zero for almost all n. 
Furthermore, let _2 denote the binary acyclic complexA_2[r, shift left][r, shift right] K_2⊕ S_2⊕ J_2[r, shift left][r, shift right] B_2⊕ P_2[r, shift left][r, shift right] P_1[r, shift left][r, shift right] P_0consisting of top differential[row sep= tiny]A_2[r, rightarrowtail] K_2⊕ S_2[r, twoheadrightarrow][d, phantom, "⊕"]B_2[d, phantom, "⊕"]J_2[r, rightarrowtail] P_2[r, "d_2"]P_1[r, twoheadrightarrow, "d_1"] P_0and bottom differential[row sep= tiny]A_2[r, rightarrowtail] J_2⊕ S_2[r, twoheadrightarrow][d, phantom, "⊕"]B_2[d, phantom, "⊕"]K_2[r, rightarrowtail] P_2[r, "d_2'"]P_1[r, twoheadrightarrow, "d_1'"] P_0. []=∑_n=2^∞(-1)^n[_n]∈ K_0(Ω_[0,m+3]) Let ' denote the binary acyclic complex …[r, shift left][r, shift right] P_4[r, shift left][r, shift right] P_3⊕ A_2[r, shift left][r, shift right] J_2⊕ K_2⊕ S_2[r, shift left][r, shift right] B_2 with top differential [row sep= tiny] …[r] P_4[r, "d_4"] P_3[r, twoheadrightarrow, "d_3"][d, phantom, "⊕"] J_2[d, phantom, "⊕"]A_2[r, rightarrowtail] K_2⊕ S_2[r, twoheadrightarrow]B_2 and bottom differential [row sep= tiny] …[r] P_4[r, "d'_4"] P_3[r, twoheadrightarrow, "d'_3"][d, phantom, "⊕"] K_2[d, phantom, "⊕"]A_2[r, rightarrowtail] J_2⊕ S_2[r, twoheadrightarrow]B_2 We will show that [ ]=[_2]-[']. The lemma then follows by iterating this procedure. Consider the following binary acyclic double complex. All differentials written as a single arrow are the identity on the summand appearing in domain and codomain and zero on all other summands. In particular, both differentials agree in this case. The remaining four non-trivial binary acyclic complexes are , ', _2 and a fourth one explained in the diagram. A_2[d][r] A_2[d, shift left][d, shift right][+5pt] …[r, shift left][r, shift right] P_4[r, shift left][r, shift right][d] P_3⊕ A_2[r, shift left][r, shift right][d] J_2⊕ K_2⊕ S_2[r, shift left][r, shift right][d, shift left][d, shift right] B_2[d]…[r, shift left][r, shift right] P_4[r, shift left][r, shift right] P_3[r, shift left][r, shift right] P_2⊕ B_2[r, shift left][r, shift right][d, shift left][d, shift right] P_1⊕ B_2[r, shift left][r, shift right][d] P_0[d]P_1[r, shift left, "(𝕀, d_1)"][r, shift right, "(𝕀, d_1')"'] [d, shift left][d, shift right] P_1⊕ P_0[r, shift left, "d_1-𝕀"][r, shift right, "d_1'-𝕀"'][d] P_0P_0 [r] P_0 Applying Nenashev's relation (<ref>) and omitting all summands which are obviously zero, we obtain -[ P_1[r, shift left, "(𝕀, d_1)"][r, shift right, "(𝕀, d_1')"']P_1 ⊕ P_0[r, shift left, "d_1-𝕀"][r, shift right, "d_1'-𝕀"'] P_0 ] + [] - ['[1]] = [_2]. We will show that the first summand is trivial. Assuming this, it follows from <ref> that [] + ['] = [_2] as claimed. In fact, triviality of the binary acyclic complex P_1[r, shift left, "(𝕀, d_1)"][r, shift right, "(𝕀, d_1')"'][+5pt] P_1⊕ P_0[r, shift left, "d_1-𝕀"][r, shift right, "d_1'-𝕀"'][+5pt] P_0 in K_0(Ω) follows directly from the existence of the following short exact sequence of binary acyclic complexes: [baseline=(current bounding box.south)] [+5pt] P_0[d, rightarrowtail][r, shift left, "-𝕀"][r, shift right, "-𝕀"'][+5pt] P_0[d, rightarrowtail] P_1[r, shift left, "(𝕀, d_1)"][r, shift right, "(𝕀, d_1')"'] [d, twoheadrightarrow] P_1⊕ P_0[r, shift left, "d_1-𝕀"][r, shift right, "d_1'-𝕀"'][d, twoheadrightarrow] P_0 P_1[r] P_1<ref> immediately implies <ref> since the complexes _n are supported on [0,4] for all n ≥ 2.Our next goal is to prove <ref>. The map K_0Ω→ K_0Ω_[0,7] given by []↦∑_n=2^∞(-1)^n[_n] is a well-defined homomorphism. 
Note that all J_n and K_n are unique up to isomorphism. We first show that ∑_n=2^∞(-1)^n[_n] is independent of the choices of A_n, B_n, S_n and the extensions A_n[r, rightarrowtail] J_n⊕ S_n[r, twoheadrightarrow]B_n and A_n[r, rightarrowtail] K_n⊕ S_n[r, twoheadrightarrow]B_n. Fix k > 2, and let A_k', B_k' and S_k' be different choices fitting into extensions as A_k, B_k and S_k. Denote by _k' and _k+1' the same binary acyclic complexes as _k and _k+1, except that the extensions involving A_k, B_k and S_k are replaced by those involving A_k', B_k' and S_k'. Note that _l is independent of the choice of A_k, B_k and S_k for l ≠ k, k+1. The binary acyclic complexes _k' ⊕ (_k+1[2]) and _k ⊕ (_k+1'[2]) have isomorphic underlying graded objects. We regard both as binary acyclic complexes A_k+1[r, shift left][r, shift right][-10pt] K_k+1⊕ S_k+1⊕ J_k+1[r, shift left][r, shift right][-10pt] B_k+1⊕ P_k+1⊕ A_k ⊕ A_k'[r, shift left][r, shift right] [-10pt]J_k ⊕ K_k ⊕ S_k ⊕ K_k ⊕ S_k' ⊕ J_k[r, shift left][r, shift right][-10pt] B_k ⊕ B_k' ⊕ P_k ⊕ A_k-1[r, shift left][r, shift right][-10pt] J_k-1⊕ K_k-1⊕ S_k-1[r, shift left][r, shift right][-10pt] B_k-1, Both the pair of chain complexes given by the top differentials and the pair of chain complexes given by the bottom differentials of _k' ⊕ (_k+1[2]) and _k ⊕ (_k+1'[2]) are isomorphic: The isomorphism for the top differentials has to flip the two copies of K_k, while the one for the bottom differentials has to flip the two copies of J_k. That is, there is the following binary acyclic double complex whose upper row is '_k ⊕ (_k+1[2]), whose lower row is _k ⊕ (_k+1'[2]). Here all unmarked downward arrows are the identity, and τ_K and τ_J denote the automorphisms switching the two copies of K_k and J_k, respectively. …[r, shift left][r, shift right][-15pt] B_k+1⊕ P_k+1⊕ A_k ⊕ A_k'[r, shift left][r, shift right][d][-15pt] J_k ⊕ K_k ⊕ S_k ⊕ K_k ⊕ S_k' ⊕ J_k[r, shift left][r, shift right][d, shift left, "τ_J"][d, shift right, "τ_K"'][-15pt] B_k ⊕ B_k' ⊕ P_k ⊕ A_k-1[d][r, shift left][r, shift right][-15pt] … …[r, shift left][r, shift right] B_k+1⊕ P_k+1⊕ A_k ⊕ A_k'[r, shift left][r, shift right] J_k ⊕ K_k ⊕ S_k ⊕ K_k ⊕ S_k' ⊕ J_k [r, shift left][r, shift right] B_k ⊕ B_k' ⊕ P_k ⊕ A_k-1[r, shift left][r, shift right]… Applying <ref>, the difference between the classes of _k⊕ ('_k+1[2]) and '_k⊕ (_k+1[2]) is therefore the same as [ K_k⊕ K_k[r, shift left, "τ_K"][r, shift right, "𝕀"'][-5pt] K_k⊕ K_k]+ [ J_k⊕ J_k[r, shift left, "𝕀"][r, shift right, "τ_J"'] [-5pt] J_k⊕ J_k ] in K_0(Ω_[0,7]). Since J_k and K_k represent the same class in K_0, we have [ K_k⊕ K_k[r, shift left, "τ_K"][r, shift right, "𝕀"'] [-5pt] K_k⊕ K_k]=[J_k⊕ J_k[r, shift left, "τ_J"][r, shift right, "𝕀"'] [-5pt]J_k⊕ J_k ]. Therefore,'_k⊕ (_k+1[2]) and _k⊕ ('_k+1[2]) represent the same class in K_0(Ω_[0,7]) by <ref>. In combination with <ref>, this shows [_k]-[_k+1]=['_k]-['_k+1]. An analogous argument works for k=2, so the class ∑_n=2^∞(-1)^n[_n] is independent of the choices we make. Next, we show that the map is independent of the choice of the representativeof []. First note that if both differentials of the double complexagree, then K_n and J_n agree and we can choose the same extension for both. In this case, both differentials for all _n agree, so [_n] = 0 for all n. It remains to see that for a short exact sequence '↣↠” we also get short exact sequences '_n↣_n↠”_n for all n ≥ 2. For every n, we have short exact sequences J'_n↣ J_n↠ J_n” and K'_n↣ K_n↠ K”_n. 
As above, the K_0-classes of K'_n and J'_n as well as those of K_n” and J_n” agree. By the Additivity theorem <cit.>, we have [ K'_n↣ K_n↠ K”_n ] = [ J'_n↣ J_n↠ J_n” ] ∈ K_0(), whereis the exact category of exact sequences in . Therefore, we find short exact sequences A'_n ↣ A_n↠ A_n”, B'_n ↣ B_n ↠ B_n” and S'_n ↣ S_n ↠ S”_n fitting into short exact sequences of short exact sequences: A_n'[r, rightarrowtail][d, rightarrowtail] J'_n⊕ S'_n[r, twoheadrightarrow][d, rightarrowtail] B_n'[d, rightarrowtail] A_n[r, rightarrowtail][d, twoheadrightarrow] J_n ⊕ S_n[r, twoheadrightarrow][d, twoheadrightarrow] B_n[d, twoheadrightarrow] A_n”[r, rightarrowtail] J_n”⊕ S”_n[r, twoheadrightarrow] B_n”and A_n'[r, rightarrowtail][d, rightarrowtail] K'_n⊕ S'_n[r, twoheadrightarrow][d, rightarrowtail] B_n'[d, rightarrowtail] A_n[r, rightarrowtail][d, twoheadrightarrow] K_n⊕ S_n [r, twoheadrightarrow][d, twoheadrightarrow] B_n[d, twoheadrightarrow] A_n”[r, rightarrowtail] K_n”⊕ S”_n[r, twoheadrightarrow] B_n” Note that the middle vertical exact sequences are direct sums of the given sequences. Using these extensions for the definition of '_n,_n and ”_n, we get the desired short exact sequence '_n↣_n↠”_n.<ref> and <ref> prove the case n=1. The case n > 1 follows by induction, compare <cit.>.The map K_0(Ω^n_[0,7])→ K_0(ΩΩ_[0,7]^n-1) is a retract of K_0(Ω_[0,7](B^q_[0,7])^n-1)→ K_0(Ω(B^q_[0,7])^n-1) which admits a natural section by the case n=1. Hence K_0(Ω^n_[0,7])→ K_0(ΩΩ_[0,7]^n-1) admits a natural section as well.Since Ω_[0,7] and Ω commute, it suffices to show that K_0(Ω_[0,7]^n-1Ω)→ K_0(Ω^n) admits a natural section. But this map is a retract of the map K_0(Ω_[0,7]^n-1B^q)→ K_0(Ω^n-1B^q), which admits a natural section by the induction assumption.§ ALGEBRAIC K-THEORY OF INFINITE PRODUCT CATEGORIES The results of <ref> allow us to show that the comparison map (∏_i∈ I_i) →∏_i∈ I(_i) of connective K-theory spectra is a π_*-isomorphism:For every family {_i}_i∈ I of exact categories and every n∈ the natural mapK_n(∏_i∈ I_i) →∏_i∈ I K_n(_i)is an isomorphism.Note that the natural map K_0(∏_i∈ I_i) →∏_i∈ I K_0(_i) is clearly surjective, and that injectivity is a consequence of <ref>. Recall that K_n() is naturally isomorphic to K_0(Ω^n).Consider the following diagram, where the vertical maps are the sections from <ref> followed by the canonical homomorphisms.K_0(Ω^n∏_i∈ I_i)[r][d]∏_i∈ I K_0(Ω^n_i)[d]K_0(Ω^n_[0,7]∏_i∈ I_i) [d][r, "≅"] ∏_i∈ I K_0(Ω^n_[0,7]_i) [d]K_0(Ω^n∏_i∈ I_i)[r]∏_i∈ I K_0(Ω^n_i) Since the natural functors C^q_[0,7](∏_i∈ I_i) →∏_i∈ I C^q_[0,7](_i) and B^q_[0,7](∏_i∈ I_i) →∏_i∈ I B^q_[0,7](_i) are isomorphisms,the middle horizontal map is an isomorphism.A diagram chase implies that the natural map K_n(∏_i∈ I_i) →∏_i∈ I K_n(_i) is an isomorphism. In the remainder of this section, we extend this statement to non-connective K-theory. Our model for the non-connective algebraic K-theory ^-∞ of an exact category is Schlichting's delooping <cit.>.The argument to extend <ref> to non-connective algebraic K-theory is based on a localization sequence of Schlichting <cit.>. 
To state it, we need to recall the following definition.Letbe an exact category, and let ⊆ be an extension closed full subcategory.* An admissible epimorphism N ↠ A with N ∈ and A ∈ is special if there exists an admissible monomorphism B ↣ N with B ∈ such that the composition B → A is an admissible epimorphism.* The inclusion ⊆ is called left s-filtering if the following holds:* The subcategoryis closed under admissible subobjects and admissible quotients in .* Every admissible epimorphism N ↠ A from an object N ∈ to an object A ∈ is special.* For every morphism fA → N with A ∈ and N ∈ there exists an object B ∈, a morphism f'A → B and an admissible monomorphism iB ↣ N such that if' = f. Let ⊆ be a left s-filtering subcategory. A weak isomorphism inis a morphism which can be written as the composition of admissible monomorphisms with cokernel inand admissible epimorphisms with kernel in . Let Σ denote the collection of weak isomorphisms in . The set Σ satisfies a calculus of left fractions <cit.>, so one can form the localization [Σ^-1]. The localization inherits an exact structure fromby declaring a sequence to be exact if it is isomorphic to the image of an exact sequence under the localization functor →[Σ^-1] <cit.>. The resulting exact category is denoted /. This quotient category has the universal property that functors on / correspond bijectively to functors onwhich vanish on<cit.>.Letbe an idempotent complete, left s-filtering subcategory of the exact category .Then the sequence →→/ of exact functors induces a homotopy fiber sequence of spectra ^-∞() →^-∞() →^-∞(/).Finally, recall the countable envelopeof an idempotent complete exact category<cit.> (and references therein). The concrete definition need not concern us here. It suffices to know thatis an exact category which containsas a left s-filtering subcategory, and that ^-∞() is contractible <cit.>; the latter claim holds becauseadmits countable coproducts. Moreover,depends functorially on . Denote bythe quotient category /. The categoryis called the suspension of . Write ^n for the n-fold suspension of . From <ref>, it follows directly that Ω^n^-∞(^n) is naturally equivalent to ^-∞(). In particular, we have K_-n() ≅ K_0((^n)) for all n > 0, where (-) denotes the idempotent completion functor.Let {_i }_i ∈ I be a family of exact categories.Since the natural map ^-∞() ^-∞(()) is an equivalence for every exact category and (∏_i ∈ I_i) ≅∏_i ∈ I(_i),we may assume that _i is idempotent complete for all i ∈ I.Consider the left s-filtering inclusion ∏_i ∈ I_i ⊆(∏_i ∈ I_i).The various projection functors ∏_i ∈ I_i →_j induce an exact functor (∏_i ∈ I_i) →∏_i ∈ I_i.Moreover, the inclusion ∏_i ∈ I_i ⊂∏_i ∈ I_i is left s-filtering since it is left s-filtering on each factor.Since _i is obtained from _i by a calculus of left fractions,we can identify ∏_i ∈ I_i / ∏_i ∈ I_i ≅∏_i ∈ I_i.Therefore, we have by <ref> a map of homotopy fiber sequences of spectra: ^-∞(∏_i ∈ I_i)[r][d, "𝕀"]^-∞((∏_i ∈ I_i))[r][d]^-∞((∏_i ∈ I_i))[d]^-∞(∏_i ∈ I_i)[r]^-∞(∏_i ∈ I_i)[r]^-∞(∏_i ∈ I_i) Since both (∏_i ∈ I_i) and ∏_i ∈ I_i admit countable coproducts, the K-theory of both vanishes and the middle vertical arrow is a π_*-isomorphism.Hence, the right vertical map is a π_*-isomorphism.By induction, it follows that the canonical map ^-∞(^n(∏_i ∈ I_i)) →^-∞(∏_i ∈ I^n_i) is a π_*-isomorphism for every family of idempotent complete exact categories.Let n > 0. 
We have the commutative diagram K_-n(∏_i ∈ I_i)[r, phantom, "≅"][d][-20pt] K_0((^n(∏_i ∈ I_i)))[r, "c"][d] K_0(∏_i ∈ I(^n_i))[dl]∏_i ∈ I K_-n(_i)[r, phantom, "≅"]∏_i ∈ I K_0((^n_i)) The map c is an isomorphism as we have just discussed.Since the diagonal map is an isomorphism by <ref>, the theorem follows. Note that the proof for negative K-groups only used that K_0 commutes with infinite products,which was a direct consequence of <ref>.§ THE RELATION TO NENASHEV'S K1 The abelian group K_0(Ω) is not the first algebraic description of K_1 of an exact category. Nenashev gave the following description of K_1().Define K_1^N() as the abelian group generated by binary acyclic complexes of length two =P_2 [r, rightarrowtail, shift right][r, rightarrowtail, shift left] P_1[r, twoheadrightarrow, shift right][r, twoheadrightarrow, shift left] P_0 subject to the following relations:* If the top and bottom differential of a binary acyclic complex coincide, that complex represents zero.* For any binary acyclic double complex (see <ref>) P_2'[r, rightarrowtail, shift right][r, rightarrowtail, shift left][d, rightarrowtail, shift right][d, rightarrowtail, shift left] P_1'[r, twoheadrightarrow, shift right][r, twoheadrightarrow, shift left][d, rightarrowtail, shift right][d, rightarrowtail, shift left] P_0[d, rightarrowtail, shift right][d, rightarrowtail, shift left] P_2[r, rightarrowtail, shift right][r, rightarrowtail, shift left][d, twoheadrightarrow, shift right][d, twoheadrightarrow, shift left] P_1[r, shift right][r, twoheadrightarrow, shift left][d, twoheadrightarrow, shift right][d, twoheadrightarrow, shift left] P_0[d, twoheadrightarrow, shift right][d, twoheadrightarrow, shift left] P_2”[r, rightarrowtail, shift right][r, rightarrowtail, shift left] P_1”[r, twoheadrightarrow, shift right][r, twoheadrightarrow, shift left] P_0”we have[_0] - [_1] + [_2] = ['] - [] + [”].The main result of <cit.> states that K_1^N() is isomorphic to K_1(). By <ref>, regarding a binary acyclic complex of length two as a class in K_0(Ω) defines a natural homomorphismΦ K_1^N() → K_0(Ω),as already remarked in the introduction. In this section, we prove <ref>.Before doing so, we give the following corollary. For all n≥ 1, the homomorphism K_0(Ω_[0,2]^n) → K_0(Ω^n) is a surjection and the homomorphism K_0(Ω_[0,4]^n) → K_0(Ω^n) admits a natural section. By <ref>, Φ is an isomorphism. Since K_0(Ω_[0,2]) → K_1^N() is a surjection, so is K_0(Ω_[0,2])→ K_0(Ω). By <ref>, Φ factors as Φ K_1^N() → K_0(Ω_[0,4]) → K_0(Ω).This exhibits K_0(Ω) as a natural retract of K_0(Ω_[0,4]).For n > 1, the claim follows as in the proof of <ref> by induction. Hence, <ref> also proves that the algebraic K-theory functor commutes with infinite products.In the remainder of this section, we give a proof of <ref>. 
As in the proof of <ref>, this will be accomplished by producing an explicit formula that expresses the class of an arbitrary binary acyclic complex in terms of binary acyclic complexes of length two.Before we start shortening binary acyclic complexes, we make a quick observation about K_1^N(), which we will need later in the argument.For any binary acyclic complex of length two, we have [P_2 [r, rightarrowtail, shift right, "d_2'"'][r, rightarrowtail, shift left, "d_2"] P_1[r, twoheadrightarrow, shift right, "d_1'"'][r, twoheadrightarrow, shift left, "d_1"] P_0 ] = - [P_2 [r, rightarrowtail, shift right, "d_2"'][r, rightarrowtail, shift left, "d_2'"] P_1[r, twoheadrightarrow, shift right, "d_1"'][r, twoheadrightarrow, shift left, "d_1'"] P_0]∈ K_1^N(). This follows directly from applying the defining relations of K_1^N to the binary acyclic double complex [column sep = large, baseline=(current bounding box.south)] P_2 ⊕ P_2 [r, rightarrowtail, shift right, "d_2' ⊕ d_2"'][r, rightarrowtail, shift left, "d_2 ⊕ d_2'"][d, rightarrowtail, shift right, "𝕀"'][d, rightarrowtail, shift left, "τ_P_2"] P_1⊕ P_1[r, twoheadrightarrow, shift right, "d_1' ⊕ d_1"'][r, twoheadrightarrow, shift left, "d_1 ⊕ d_1'"][d, rightarrowtail, shift right, "𝕀"'][d, rightarrowtail, shift left, "τ_P_1"] P_0 ⊕ P_0[d, rightarrowtail, shift right, "𝕀"'][d, rightarrowtail, shift left, "τ_P_0"]P_2 ⊕ P_2 [r, rightarrowtail, shift right, "d_2 ⊕ d_2'"'][r, rightarrowtail, shift left, "d_2 ⊕ d_2'"] P_1⊕ P_1[r, twoheadrightarrow, shift right, "d_1 ⊕ d_1'"'][r, twoheadrightarrow, shift left, "d_1 ⊕ d_1'"] P_0 ⊕ P_0Let :=(P_*, d, d') be a binary acyclic complex. In a first step we will not shortenbut produce a complexrepresenting the same class in K_0(Ω), which we will then be able to shorten.Choose factorizations d_2P_2 ↠ J ↣ P_1 and d_2'P_2 ↠ K ↣ P_1. Since J and K both are the kernel of an admissible epimorphism P_1 ↠ P_0, they represent the same class in K_0(). Therefore, there exist by <ref> A,B,S∈ and exact sequences A[r, rightarrowtail] J ⊕ S[r, twoheadrightarrow] B and A[r, rightarrowtail] K ⊕ S[r, twoheadrightarrow] B.Letdenote the binary acyclic complexA[r, shift left][r, shift right] K⊕ S⊕ J[r, shift left][r, shift right] B⊕ P_1[r, shift left][r, shift right] P_0consisting of top differential[row sep= tiny] A[r, rightarrowtail] K⊕ S[r, twoheadrightarrow][d, phantom, "⊕"]B[d, phantom, "⊕"] J[r, rightarrowtail] P_1[r, "d_1"]P_0and bottom differential[row sep= tiny] A[r, rightarrowtail] J⊕ S[r, twoheadrightarrow][d, phantom, "⊕"]B[d, phantom, "⊕"] K[r, rightarrowtail] P_1[r, "d_1'"]P_0.Let ' denote the binary acyclic complex…[r, shift left][r, shift right] P_3[r, shift left][r, shift right] P_2⊕ A[r, shift left][r, shift right] J⊕ K⊕ S[r, shift left][r, shift right] Bwith top differential[row sep= tiny] …[r] P_3[r, "d_3"] P_2[r, twoheadrightarrow, "d_2"][d, phantom, "⊕"] J[d, phantom, "⊕"]A[r, rightarrowtail] K⊕ S[r, twoheadrightarrow]Band bottom differential[row sep= tiny] …[r] P_3[r, "d'_3"] P_2[r, twoheadrightarrow, "d'_2"][d, phantom, "⊕"] K[d, phantom, "⊕"]A[r, rightarrowtail] J⊕ S[r, twoheadrightarrow]BFor an object M∈ we denote by Δ_M the binary acyclic complexM[r, shift left, "𝕀_M"][r, shift right, "𝕀_M"'] M.Note that [Δ_M]=0∈ K_0(Ω).Consider the following binary acyclic double complex. All differentials written as a single arrow are the identity on the summand appearing in domain and codomain and zero on all other summands. In particular, both differentials agree in this case. 
The remaining four non-trivial binary acyclic complexes are ⊕Δ_B, ' and .A[d][r] A[d, shift left][d, shift right] [+5pt] …[r, shift left][r, shift right] P_3[r, shift left][r, shift right][d] P_2⊕ A[r, shift left][r, shift right][d] J⊕ K⊕ S[r, shift left][r, shift right][d, shift left][d, shift right] B[d] …[r, shift left][r, shift right] P_3[r, shift left][r, shift right] P_2[r, shift left][r, shift right] P_1⊕ B[r, shift left][r, shift right][d, shift left][d, shift right] P_0⊕ B[d]P_0[r] P_0Applying Nenashev's relation (<ref>) and omitting all summands which are obviously zero, we obtain[] - ['] = [].Letdenote the binary acyclic complex…[r, shift right][r, shift left] P_3[r, shift left][r, shift right] P_2⊕ J⊕ K[r, shift left][r, shift right]J⊕ P_1⊕ K[r, shift left][r, shift right] P_0with top differential[row sep= tiny] …[r] P_3[r, "d_3"] P_2[r, twoheadrightarrow, "d_2"][d, phantom, "⊕"] J[d, phantom, "⊕"]J[r, rightarrowtail][d, phantom, "⊕"] P_1[r, "d_1"][d, phantom, "⊕"] P_0 K[r, "𝕀_K"] Kand bottom differential[row sep= tiny] …[r] P_3[r, "d'_3"] P_2[r, twoheadrightarrow, "d'_2"][d, phantom, "⊕"] K[d, phantom, "⊕"]K[r, rightarrowtail][d, phantom, "⊕"] P_1[r, "d'_1"][d, phantom, "⊕"] P_0 J[r, "𝕀_J"] JWe can build the following binary acyclic double complex involving ⊕Δ_B, '⊕Δ_J[1]⊕Δ_K[1] and Δ_J[1]⊕⊕Δ_K[1]—to recognize the last of these, identify J⊕ P_1⊕ K⊕ B ≅ J⊕ B⊕ P_1 ⊕ K. A[d][r] A[d, shift left][d, shift right] [+5pt] …[r, shift left][r, shift right] P_3[r, shift left][r, shift right][d] P_2⊕ A⊕ J⊕ K[r, shift left][r, shift right][d] J⊕ K⊕ S⊕ J⊕ K[r, shift left][r, shift right][d, shift left][d, shift right] B[d] …[r, shift left][r, shift right] P_3[r, shift left][r, shift right] P_2⊕ J⊕ K[r, shift left][r, shift right] J⊕ P_1⊕ K⊕ B[r, shift left][r, shift right][d, shift left][d, shift right] P_0⊕ B[d]P_0[r] P_0 Applying Nenashev's relation (<ref>) and omitting all summands which are obviously zero, we obtain[] - ['] = []and thus []=[].Letdenote the binary acyclic complex…[r, shift right][r, shift left] P_4[r, shift right][r, shift left] P_3⊕ K⊕ J[r, shift left][r, shift right] P_2⊕ P_1⊕ J⊕ K[r, shift left][r, shift right] J⊕ P_0⊕ Kwith top differential[row sep= tiny] …[r] P_4[r, "d_4"] P_3[r, "d_3"][d, phantom, "⊕"] P_2[r, twoheadrightarrow, "d_2"][d, phantom, "⊕"] J[d, phantom, "⊕"] K[r, rightarrowtail][d, phantom, "⊕"] P_1[r, "d_1'"][d, phantom, "⊕"] P_0[dd, phantom, "⊕"] J[r, "𝕀_J"] J[d, phantom, "⊕"] K[r, "𝕀_K"] Kand bottom differential[row sep= tiny] …[r] P_4[r, "d'_4"] P_3[r, "d'_3"][d, phantom, "⊕"] P_2[r, twoheadrightarrow, "d'_2"][d, phantom, "⊕"] K[d, phantom, "⊕"] J[r, rightarrowtail][d, phantom, "⊕"] P_1[r, "d_1"][d, phantom, "⊕"] P_0[dd, phantom, "⊕"] K[r, "𝕀_K"] K[d, phantom, "⊕"] J[r, "𝕀_J"] JLet us fix the following notation: If M is an object containing N as a direct summand, denote by e_N the obvious idempotent M → M whose image is N.Consider the following double complex involvingand [1]. Note that only the rows are acyclic. 
This suffices to see that the total complex shifted down by one represents the same class as []+[].…[r, shift left][r, shift right] P_4[d][r, shift left][r, shift right] P_3[d][r, shift left][r, shift right] P_2⊕ J⊕ K[d, "e_P_2"][r, shift left][r, shift right] J⊕ P_1⊕ K[d, shift right, "e_J"'][d, shift left, "e_K"][r, shift left][r, shift right] P_0 …[r, shift left][r, shift right] P_4[r, shift left][r, shift right] P_3⊕ K⊕ J[r, shift left][r, shift right] P_2⊕ P_1⊕ J⊕ K[r, shift left][r, shift right] J⊕ P_0⊕ KLetdenote the total complex shifted down by one. Thenis…[r, shift left][r, shift right] P_4⊕ P_3[r, shift left][r, shift right] [ P_3⊕ K⊕ J⊕;P_2⊕ J⊕ K ][r, shift left][r, shift right] [ P_2⊕ P_1⊕ J⊕; K⊕ J⊕ P_1⊕ K ][r, shift left][r, shift right]J⊕ P_0⊕ K⊕ P_0.Assume thatwas supported on [0,m], thenadmits a projection onto Δ_P_m[m-1]. The kernel of this projection admits a projection to Δ_P_m-1[m-2] and so on until we take the kernel of the projection to Δ_P_2[1]. The remaining acyclic binary complex ' isK⊕ J⊕ J⊕ K[r, shift left][r, shift right]P_1⊕ J⊕ K⊕ J⊕ P_1⊕ K[r, shift left][r, shift right]J⊕ P_0⊕ K⊕ P_0with top differential[row sep=tiny, column sep=large] P_1[rdd] K[ru] J J J[ru] K[rd] P_0 J[rd] J[ruu] K K[rd] P_1[r] P_0 Kand bottom differential[row sep= tiny] P_1[rdd] K[rd] J[r] J J[ruu] K P_0 J[r] J K K[r] P_1[r] P_0 K[ruu]It follows that []=[']-[]. Since ' is supported on [0,2] andhas length one shorter than , iterating this argument already shows that K_0(Ω_[0,2])→ K_0(Ω) is surjective. The idea to use the complexes , and ' is from the aforementioned, unpublished result of Grayson. He uses a different argument to compute []-[] which only shows that it is contained in the image of Φ.We now want to simplify '. Let _triv' denote the binary acyclic complex whose underlying graded object is that of ', but with both differentials equal to the top differential of '. Then the following diagram, where the upper row is ' and the second row is '_triv, commutes.K⊕ J⊕ J⊕ K[d, shift right, "𝕀"'][d, shift left, "τ_K⊕τ_J"][r, shift left][r, shift right] P_1⊕ J⊕ K⊕ J⊕ P_1⊕ K[d, shift right, "𝕀"'][d, shift left, "τ_P_1⊕τ_J⊕τ_K"][r, shift left][r, shift right]J⊕ P_0⊕ K⊕ P_0[d, shift right, "𝕀"'][d, shift left, "τ_P_0"] K⊕ J⊕ J⊕ K[r, shift left][r, shift right] P_1⊕ J⊕ K⊕ J⊕ P_1⊕ K[r, shift left][r, shift right]J⊕ P_0⊕ K⊕ P_0Both differentials of '_triv agree and thus it represents the trivial class. Since τ_K and τ_J are of order two, we conclude from <ref> that['] = [P_0⊕ P_0[r, shift left, "𝕀"][r, shift right, "τ_P_0"'] P_0⊕ P_0 ]-[ P_1⊕ P_1[r, shift left, "𝕀"][r, shift right, "τ_P_1"'] P_1⊕ P_1 ].Since J↣ P_1↠ P_0 is exact, this is the same as [ J⊕ J[r, shift left, "𝕀"][r, shift right, "τ_J"'] J⊕ J ]. This shows that[]=[ J⊕ J[r, shift left, "𝕀"][r, shift right, "τ_J"'] J⊕ J ]-[].We are now going to iterate this argument. Choose factorizations d_nP_n ↠ J_n-1↣ P_n-1 and d_n'P_n ↠ K_n-1↣ P_n-1 for all n ≥ 2 such that J_n ↣ P_n ↠ J_n-1 and K_n ↣ P_n ↠ K_n-1 are exact for all n. Set J_0 := P_0 and K_0 := P_0. 
For any natural number k, fix the following auxiliary notation:J_^k:= ⊕_n ≤ k,n odd J_n,J_^k:= ⊕_2 ≤ n ≤ k,n even J_n,J_,0^k:= ⊕_n ≤ k,n even J_n, K_^k:= ⊕_n ≤ k,n odd K_n,K_^k:= ⊕_2 ≤ n ≤ k,n even K_n,K_,0^k:= ⊕_n ≤ k,n even K_n,P_^k:= ⊕_n ≤ k,n odd P_n, P_^k:= ⊕_2 ≤ n ≤ k,n even P_n.First of all, we define for every natural number k a binary acyclic complex _k of the form…[r, shift left][r, shift right] P_k+3[r, shift left][r, shift right] [P_k+2⊕; ⊕_n=1^k(J_n⊕ K_n) ][r, shift left][r, shift right] [⊕_n=1^k+1 P_n⊕; ⊕_n=1^k (J_n ⊕ K_n) ][r, shift left][r, shift right] [ P_0 ⊕; ⊕_n=1^k (J_n ⊕ K_n) ]For even natural numbers k, we equip _k with the top differential[row sep=tiny] …[r] P_k+3[r, "d_k+3"] P_k+2[r, "d_k+2"][d, phantom, "⊕"] P_k+1[r, "d_k+1"][d, phantom, "⊕"] J_k[ddd, phantom, "⊕"] K_^k[r][d, phantom, "⊕"] K_^k[d, phantom, "⊕"]J_^k[r][ddd, phantom, "⊕"] J_^k[d, phantom, "⊕"] K_^k[r][d, phantom, "⊕"] K_^k[d, phantom, "⊕"]J_^k[r][d, phantom, "⊕"] J_^k[d, phantom, "⊕"] J_^k[r][d, phantom, "⊕"] P_^k[r][d, phantom, "⊕"] J_,0^k-1[d, phantom, "⊕"] K_^k[r] P_^k[r] K_^kand bottom differential[row sep=tiny] …[r] P_k+3[r, "d'_k+3"] P_k+2[r, "d'_k+2"][d, phantom, "⊕"] P_k+1[r, "d'_k+1"][d, phantom, "⊕"] K_k[ddd, phantom, "⊕"] J_^k[r][d, phantom, "⊕"] J_^k[d, phantom, "⊕"]K_^k[r][ddd, phantom, "⊕"] K_^k[d, phantom, "⊕"] J_^k[r][d, phantom, "⊕"] J_^k[d, phantom, "⊕"]K_^k[r][d, phantom, "⊕"] K_^k[d, phantom, "⊕"] K_^k[r][d, phantom, "⊕"] P_^k[r][d, phantom, "⊕"] K_,0^k-1[d, phantom, "⊕"] J_^k[r] P_^k[r] J_^kNote that _0 is precisely the complex .For odd natural numbers k, we equip _k with the top differential[row sep=tiny] …[r] P_k+3[r, "d_k+3"] P_k+2[r, "d_k+2"][d, phantom, "⊕"] P_k+1[r, "d_k+1"][d, phantom, "⊕"] J_k[ddd, phantom, "⊕"] J_^k[r][d, phantom, "⊕"] J_^k[d, phantom, "⊕"]K_^k[r][ddd, phantom, "⊕"] K_^k[d, phantom, "⊕"] J_^k[r][d, phantom, "⊕"] J_^k[d, phantom, "⊕"]K_^k[r][d, phantom, "⊕"] K_^k[d, phantom, "⊕"] K_^k[r][d, phantom, "⊕"] P_^k[r][d, phantom, "⊕"] K_,0^k[d, phantom, "⊕"] J_^k[r] P_^k[r] J_^k-1and bottom differential[row sep=tiny] …[r] P_k+3[r, "d'_k+3"] P_k+2[r, "d'_k+2"][d, phantom, "⊕"] P_k+1[r, "d'_k+1"][d, phantom, "⊕"] K_k[ddd, phantom, "⊕"] K_^k[r][d, phantom, "⊕"] K_^k[d, phantom, "⊕"]J_^k[r][ddd, phantom, "⊕"] J_^k[d, phantom, "⊕"] K_^k[r][d, phantom, "⊕"] K_^k[d, phantom, "⊕"]J_^k[r][d, phantom, "⊕"] J_^k[d, phantom, "⊕"] J_^k[r][d, phantom, "⊕"] P_^k[r][d, phantom, "⊕"] J_,0^k[d, phantom, "⊕"] K_^k[r] P_^k[r] K_^k-1Note that _1 is precisely the complexappearing in (<ref>). Moreover, if k is sufficiently large so that P_n ≅ 0 for all n > k, then _k+1 is obtained from _k by interchanging the top and bottom differential.For every k let _k denote the complex obtained from _k by the same procedure asis obtained from .Suppose now that k is odd. 
Substituting appropriately in (<ref>), we obtain the equation[_k] = [X⊕ X[r, shift left, "𝕀"][r, shift right, "τ_X"'] X⊕ X] - [_k] ∈ K_0(Ω),where X denotes the kernel of the first top differential of _k.By the definition of the binary acyclic complex _k, we may chooseX := J_k+1⊕⊕_n=1^k(J_n ⊕ K_n).As in the proof of <ref>, since J_n and K_n represent the same class in K_0 for all n, we have by <ref>[ (J_n⊕ K_n)⊕ (J_n⊕ K_n)[r, shift left, "𝕀"][r, shift right, "τ_J_n⊕ K_n"'] (J_n⊕ K_n)⊕(J_n⊕ K_n) ]=2·[ J_n⊕ J_n[r, shift left, "𝕀"][r, shift right, "τ_J_n"'] J_n⊕ J_n]=0.Therefore,[X⊕ X[r, shift left, "𝕀"][r, shift right, "τ_X"'] X⊕ X]=[J_k+1⊕ J_k+1[r, shift left, "𝕀"][r, shift right, "τ_J_k+1"'] J_k+1⊕ J_k+1 ].Similarly,Y := K_k+1⊕⊕_n=1^k(J_n ⊕ K_n)is the kernel of the first bottom differential of _k. Note that the complement of J_k+1 in X and the complement of K_k+1 in Y are the same; let Z denote that complement. Unwinding the definition of _k, we see that, up to automorphisms flipping the two copies of Z in the three lowest degrees of _k, _k coincides with the sum of _k+1 with some complexes in the image of the diagonal functor Δ. Since, by (<ref>), [Z⊕ Z[r, shift left, "𝕀"][r, shift right, "τ_Z"'] Z⊕ Z] = 0, we see that [_k] = [_k+1]. Hence,[_k] = [J_k+1⊕ J_k+1[r, shift left, "𝕀"][r, shift right, "τ_J_k+1"'] J_k+1⊕ J_k+1 ] - [_k+1].The argument for k even is completely analogous. Therefore, we have for every k ≥ 0 the equation[] = x(,k) := (-1)^k[_k] + ∑_n=1^k [ J_n⊕ J_n[r, shift left, "𝕀"][r, shift right, "τ_J_n"'] J_n⊕ J_n]. Define a map Ψ K_0(Ω)→ K_1^N() by the rule [] ↦ x(, k()), where k() is defined to be k() := min{ n ∈| P_n'≅ 0 for all n' > n }. We have to show that this is a well-defined homomorphism.By our definition of k(), the complex _k() has length two. Let ' ↣↠” be an exact sequence of binary acyclic complexes.Evidently, x(',k()) + x(”,k()) = x(,k()).Note that k('), k(”) ≤ k() and at least one of k(') and k(”) equals k().If k(') = k() = k(”), we already have Ψ([']) + Ψ([”]) = Ψ([]).Suppose k(') < k().Then _k() arises from _k(') by interchanging the role of top and bottom differential k() - k(') times.Since interchanging the top and bottom differential results only in a change of sign (<ref>) and J'_n ≅ 0 for n > k('),we have x(',k(')) = x(',k()).The case k(”) < k() is analogous. Suppose now thatlies in the image of the diagonal functor Δ C^q→ B^q.Then we may choose J_n = K_n for all n.In this case, the top and bottom differential of _k() are isomorphic.However, the two differentials do not agree on the nose but only after flipping all appearing K_n=J_n.Since each one of these appears three times in _k(), applying the Nenashev relation we see that [ _k() ] = ∑_n=1^k()[ J_n⊕ J_n[r, shift left, "𝕀"][r, shift right, "τ_J_n"'] J_n⊕ J_n]. Consequently, Ψ([]) = 0 by <ref>. 
This shows that Ψ is a well-defined homomorphism K_0(Ω) → K_1^N().Our previous discussion implies that Φ∘Ψ = 𝕀_K_0(Ω).What is left to do is to show that Ψ∘Φ = 𝕀_K_1^N().To do so, it suffices to establish <ref> in K_1^N() for all binary acyclic complexes of length two.Let=P_2 [r, rightarrowtail, shift right, "d_2'"'][r, rightarrowtail, shift left, "d_2"] P_1[r, twoheadrightarrow, shift right, "d_1'"'][r, twoheadrightarrow, shift left,"d_1"] P_0 be a binary acyclic complex of length two.Then _1 is the binary acyclic complex P_2 ⊕ P_2 [r, rightarrowtail, shift right][r, rightarrowtail, shift left] P_2 ⊕ P_2 ⊕ P_2 ⊕ P_1[r, twoheadrightarrow, shift right][r, twoheadrightarrow, shift left] P_2 ⊕ P_2 ⊕ P_0 with the following top and bottom differentials: [row sep=tiny] P_2[r] P_2 P_2[r] P_2 P_2[r] P_2 P_2[r, "d_2'"] P_1[r, "d_1'"] P_0[row sep=tiny] P_2[rdd] P_2 P_2[rdd, "d_2"] P_2[ru] P_2 P_2 P_2[ru] P_1[r, "d_1"] P_0 Consider the following binary acyclic double complex where the upper row is _1, the lower row iswith switched differentials plus Δ_P_2⊕Δ_P_2⊕Δ_P_2[1] and the middle vertical map τ_P_2 denotes the flip of the second and third summand.P_2 ⊕ P_2 [d, shift left, "τ_P_2"][d, shift right, "𝕀"'][r, rightarrowtail, shift right][r, rightarrowtail, shift left] P_2 ⊕ P_2 ⊕ P_2 ⊕ P_1[d, shift left, "τ_P_2"][d, shift right, "𝕀"'][r, twoheadrightarrow, shift right][r, twoheadrightarrow, shift left] P_2 ⊕ P_2 ⊕ P_0[d, shift left, "τ_P_2"][d, shift right, "𝕀"']P_2 ⊕ P_2 [r, rightarrowtail, shift right][r, rightarrowtail, shift left] P_2 ⊕ P_2 ⊕ P_2 ⊕ P_1[r, twoheadrightarrow, shift right][r, twoheadrightarrow, shift left] P_2 ⊕ P_2 ⊕ P_0Therefore, we obtain using <ref> and <ref>[]=[ P_2⊕ P_2[r, shift left, "𝕀"][r, shift right, "τ_P_2"'] P_2⊕ P_2 ]-[_1]. This finishes the proof.amsalpha | http://arxiv.org/abs/1705.09116v1 | {
"authors": [
"Daniel Kasprowski",
"Christoph Winges"
],
"categories": [
"math.KT",
"19D06 (Primary), 18E10 (Secondary)"
],
"primary_category": "math.KT",
"published": "20170525100217",
"title": "Shortening binary complexes and commutativity of $K$-theory with infinite products"
} |
Approximation of Ruin Probabilities via Erlangized Scale Mixtures
Oscar Peralta^1, Leonardo Rojas-Nandayapa^2, Wangyue Xie^3, Hui Yao^3
^1 Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark, [email protected]
^2 Mathematical Sciences, University of Liverpool, UK, [email protected]
^3 School of Mathematics and Physics, The University of Queensland, Australia, [email protected], [email protected]
=================================================================
In this paper, we extend an existing scheme for numerically calculating the probability of ruin of a classical Cramér–Lundberg reserve process having absolutely continuous but otherwise general claim size distributions. We employ a dense class of distributions that we denominate Erlangized scale mixtures (ESM); these correspond to nonnegative and absolutely continuous distributions which can be written as a Mellin–Stieltjes convolution Π⋆ G of a nonnegative distribution Π with an Erlang distribution G. A distinctive feature of this class is that it contains heavy-tailed distributions. We suggest a simple methodology for constructing a sequence of distributions of the form Π⋆ G to approximate the integrated tail distribution of the claim sizes. Then we adapt a recent result which delivers an explicit expression for the probability of ruin in the case that the claim size distribution is modelled as an Erlangized scale mixture. We provide simplified expressions for the approximation of the probability of ruin and construct explicit bounds for the error of approximation. We complement our results with a classical example where the claim sizes are heavy-tailed.
§ INTRODUCTION
In this paper we propose a new numerical scheme for the approximation of ruin probabilities in the classical compound Poisson risk model, also known as the Cramér–Lundberg risk model <cit.>. In such a risk model, the surplus process is modelled as a compound Poisson process with negative linear drift and a nonnegative jump distribution F, the latter corresponding to the claim size distribution. The ruin probability within infinite horizon and initial capital u, denoted ψ(u), is the probability that the supremum of the surplus process is larger than u. The Pollaczek–Khinchine formula provides the exact value of ψ(u), though it can be explicitly computed in very few cases. Such a formula is a functional of the integrated tail distribution of F. From here on, we will use ψ_F(u) instead of ψ(u) to denote this dependence. A useful fact is that the Pollaczek–Khinchine formula can be naturally extended in order to define ψ_G(u) even if G does not correspond to an integrated tail distribution. We do so throughout this manuscript.
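Although ψ(u) is computed analytically in what follows, it is useful to keep in mind the quantity being approximated. The sketch below (an illustration of ours, not part of the proposed scheme) estimates ψ(u) by crude Monte Carlo, evaluating the claim surplus process at claim epochs only, where its running supremum is attained; the unit premium rate anticipates the model of the next sections, while the Pareto claim sampler, the Poisson rate, the finite horizon and the sample size are placeholders, and truncating the horizon biases the estimate downwards.

```python
import numpy as np

rng = np.random.default_rng(1)

def ruin_prob_mc(u, lam, sample_claim, horizon=1_000.0, n_paths=5_000):
    """Crude finite-horizon Monte Carlo estimate of psi(u) for the compound
    Poisson claim surplus process S_t = sum_{k <= N_t} X_k - t (unit premium
    rate); ruin occurs if S exceeds u at some claim epoch."""
    ruined = 0
    for _ in range(n_paths):
        t, s = 0.0, 0.0
        while True:
            w = rng.exponential(1.0 / lam)      # exponential inter-arrival time
            t += w
            if t > horizon:
                break
            s += sample_claim() - w             # claims accumulated minus premiums
            if s > u:
                ruined += 1
                break
    return ruined / n_paths

# illustrative heavy-tailed claims: Pareto with unit mean, F(x) = 1 - (1 + x/(phi-1))^{-phi}
phi = 2.0
pareto_claim = lambda: (phi - 1.0) * ((1.0 - rng.random()) ** (-1.0 / phi) - 1.0)
print(ruin_prob_mc(u=10.0, lam=0.5, sample_claim=pareto_claim))
```

For heavy-tailed claims and small safety loadings this naive estimator becomes prohibitively slow and biased, which is precisely what motivates the analytic approximations developed below.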
The approach advocated in this paper is to approximate the integrated claim size distribution F by using the familyof phase-type scale mixture distributions <cit.>,but we also consider the more common approach of approximating the claim size distribution F.The family of phase-type scale mixture distributions is dense within the class of nonnegative distributions,and it is formed by distributions which can be expressed as a Mellin–Stieltjes convolution, denoted Π⋆ G, of an arbitrary nonnegative distribution Π and a phase-type distribution G <cit.>.The Mellin–Stieltjes convolutioncorresponds to the distribution of the product between two independent random variableshaving distributions Π and G respectively: Π⋆ G(u):=∫_0^∞ G(u/s)dΠ(s)=∫_0^∞Π(u/s)dG(s).In particular, if Π is a nonnegative discrete distribution and Π⋆ G is itselfthe integrated tail of a phase-type scale mixture distribution, then an explicit computable formula for the ruin probability ψ_Π⋆ G(u)of the Cramér–Lundberg process having integrated tail distribution Π⋆ G is given in <cit.>.Hence, it is plausible that if Π⋆ Gis close enough to the integrated tail distribution F of theclaim sizes, then we can use ψ_Π⋆ G(u) as an approximation for ψ_F(u), the ruin probability of a Cramér–Lundberg process having claim size distribution F. One of the key features of the class of phase-type scale mixtures is that if Π has unbounded support, then Π⋆ G is aheavy-tailed distribution <cit.>, thusconfirming the hypothesis that the class of phase-type scale mixtures is more appropriate for approximating tail-dependent quantities involving heavy-tailed distributions.In contrast, the class of classical phase-type distributions is light-tailed and approximations derived from this approach may be inaccurate in the tails <cit.>.Our contribution is to propose a systematic methodology to approximate any continuous integrated tail distributionF using aparticularsubclass of phase-type scale mixtures calledErlangized scale mixtures (ESM).The proposed approximation is particularly precise in the tails and the number of parameters remains controlled. Our construction requires a sequence{Π_m:m∈} of nonnegative discrete distributionshaving the property Π_m→F (often taken as a discretization ofthe target distribution over some countable subset of the support of F), and a sequence of Erlang distributions with equal shape and rate parameters, denoted G_m∼(ξ(m),ξ(m)). If the sequence ξ(m)∈ is increasing and unbounded, thenΠ_m⋆ G_m→F.Then we can adapt the results in<cit.> to compute ψ_Π_m⋆ G_m(u), and use this as an approximation ofthe ruin probability of interest. To assess the quality of ψ_Π_m⋆ G_m(u) as an approximation of ψ_F(u)we identify two sources of theoretical error. The first source of error comes from approximatingF via Π_m, so we refer to this asthe discretization error. The second source of error is due to the convolution with G_m, so this will be called the Erlangization error.The two errors are closely intertwined soit is difficult to make a precise assessment of the effect of each of them in the general approximation. Instead, we use the triangle inequality to separate these as follows|ψ_F(u)-ψ_Π_m⋆ G_m(u)|_Approximation error≤|ψ_F(u)-ψ_F⋆ G_m(u)|_Erlangization error +|ψ_F⋆ G_m(u)-ψ_Π_m⋆ G_m(u)|_Discretization error.Therefore, the error of approximating ψ_F(u) with ψ_Π_m⋆ G_m(u) can be bounded above with the aggregation of the Erlangization error and the discretization error. 
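To make the construction above concrete, the short sketch below (our own illustration, not the algorithm developed later in the paper) discretizes a target integrated tail onto a finite grid to obtain Π_m and evaluates the mixture cdf (Π_m⋆ G_m)(u)=∑_i π_i G_m(u/s_i) with G_m∼Erlang(ξ,ξ); the Pareto-type target, the geometric grid and the values of ξ are illustrative placeholders, and the mass beyond the last grid point is simply dropped.

```python
import numpy as np
from scipy.stats import gamma

def esm_cdf(u, s, pi, xi):
    """cdf of the Erlangized scale mixture Pi * G with G ~ Erlang(xi, xi):
       (Pi * G)(u) = sum_i pi_i * G(u / s_i)."""
    u = np.atleast_1d(u).astype(float)[:, None]
    return (pi * gamma.cdf(u / s, a=xi, scale=1.0 / xi)).sum(axis=1)

# target integrated tail: that of a Pareto claim distribution with phi = 2,
# i.e. F_bar(x) = 1 - (1 + x)^{-1}, which is regularly varying (heavy-tailed)
F_bar = lambda x: 1.0 - 1.0 / (1.0 + x)

# rough discretisation Pi_m: place the F_bar-mass of each grid cell on one point
w = np.exp(np.linspace(-3.0, 12.0, 1201))   # geometric grid (tail mass beyond it is dropped)
s = np.sqrt(w[:-1] * w[1:])                 # one support point per cell
pi = np.diff(F_bar(w))

x = np.array([0.5, 5.0, 50.0, 500.0])
for xi in (5, 20, 100):
    print(xi, np.abs(esm_cdf(x, s, pi, xi) - F_bar(x)))
```

Refining the grid reduces the discretization error, while increasing ξ reduces the Erlangization error; the two effects interact, which is why the error bounds below are developed for each source separately.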
In our developments below, we construct explicit tight bounds for each source of error. We remark that the general formula for ψ_Π⋆ G(u) in <cit.> is computational intensive and can be difficult or even infeasible to implement since it is given as aninfinite series with terms involving products of finite dimensional matrices.We show that for our particular model, ψ_Π⋆ G_m(u) can be simplified down to amanageable formula involving binomial coefficients instead of computationally expensive matrix operations. In practice, the infinite series can be computed only up to a fine number of terms, but as we will show, this numerical error can be controlled by selecting an appropriate distribution Π. Such a truncated approximation of ψ_Π⋆ G(u) will be denoted ψ_Π⋆ G(u). We provide explicit bounds for thenumerical error induced by truncating the infinite series. All things considered, we contribute to the existing literature for computing ruin probabilities for the classical Cramér–Lundberg model by proposing a new practical numerical scheme. Our method, coupled with the bounds for the error of approximation, provides an attractive alternative for computing ruin probabilities based on a simple, yet effective idea. The approach described above is a further extension to the use of phase-type distributions for approximating general claim size distributions <cit.>. Several attempts to approximate the probability of ruin for Cramér–Lundberg model have been made (see <cit.> and references therein). A recent and similar approach can be found in <cit.> which uses discretization and Erlangizations argument as its backbone.We emphasise here that we address the problem of finding the probability of ruin differently. Firstly, we propose to directly approximate the integrated tail distribution instead of the claim size distribution. This will yield far more accurate approximations of the probability of ruin.Secondly, since we investigate the discretization and the Erlangization part separately, we are able to provide tight error bounds for our approximation method. This will prove to be helpful in challenging examples such as the one presented here: the heavy-traffic Cramér–Lundberg model with Pareto distributed claims. Lastly, each approximation of ours is based on a mixture of Erlang distributions of fixed order, while the approach in <cit.> is based on a mixture of Erlang distributions of increasing order.By keeping the order of the Erlang distribution in the mixture fixed, we can smartly allocate more computational resources in the discretization part, yielding an overall better approximation.More importantly,we find the use of ESM more natural becauseincreasing the order of the Erlang distributions in the mixture translates in having different levels of accuracy of Erlangization at different points. The choice of having sharper Erlangization in the tail of the distribution than in the body seems arbitrary and is actually not useful tail-wise, given that the tail behavior of Π∗ G_m is the same for each ξ(m)≥ 1.The rest of the paper is organized as the follows. Section <ref> provides an overview of the main concepts and methods. In Section <ref>, we present the methodology for constructinga sequence of distributions of the form Π_m⋆ G_m approximating the integrated tail of a general claimsize distribution F. Based on the results of <cit.>, we introduce asimplified infinite series representation of the ruin probability ψ_Π_m⋆ G_m. 
In Section <ref>, we construct the bound for the error of approximation |ψ_F-ψ_Π⋆ G|.In Section <ref>, we provide a bound for the numerical error of approximation induced by truncating the infinite series representation of ψ_Π_m⋆ G_m. A numerical example illustrating the sharpness of our result is given in Section <ref>. Some conclusions are drawn inSection <ref>. § PRELIMINARIES In this section we provide a summary of basic concepts needed for this paper.In subsection <ref> we introduce the family of classicalphase-type (PH) distributions and their extensions to phase-type scale mixtures and infinite dimensional phase-type (IDPH) distributions. We will refer to the former class of distributions as classical in order to make a clear distinction from the two later classes of distributions.In section <ref> we introduce a systematic method for approximating nonnegative distributions within the class of phase-type scale mixtures; such a method will be called approximation via Erlangized scale mixtures.The resulting approximating distribution will be more tractable due to the special structure ofthe Erlang distribution. §.§ Phase-type scale mixturesA phase-type (PH) distributioncorrespondsto the distribution of the absorption time of a Markov jumpprocess {X_t}_t≥ 0 with a finite state space E={0,1,2,⋯,p}. The states {1,2,⋯,p}are transient while the state 0 is an absorbing state. Hence, phase-typedistributions are characterized by a p-dimensional row vectorβ=(β_1,⋯,β_p), corresponding to theinitial probabilities of each of the transient states of the Markov jump process, and an intensity matrix 𝐐=([ 0 0; λ Λ ]).The subintensity matrix Λ corresponds to the transition rates among the transient states while the column vector λ correspondsto the exit probabilities to the absorption state.Sinceλ=- Λ 𝐞,where e is a column vector with all elements to be 1,then the pair (β,Λ) completely characterizes the absorption distribution, the notation(β,Λ) is reserved for such a distribution. The density function, cumulative distribution function and expectation of(β,Λ) are given by the following closed-form expressions givenin terms of matrix exponentials:g(y)=β^Λ yλ, G(y)=1-β^Λ ye,∫_0^∞y G(y)=- βΛ^-1e. A particular example of PH distribution which is of interest in our later developments is that of an Erlang distribution.It is simple to deduce that the Erlang distributionwith parameters (λ,m) has a PH-representation given by the the m-dimensional vector β=(1,0,⋯,0)and the m× m dimensional matrix Λ=( [ -λλ; ⋱⋱ ; -λλ;-λ ]).We denote(λ,m). In this paper we will be particularly interested in the sequence of G_m∼(ξ(m),ξ(m)) distributions with ξ(m)→∞. These type of sequencesare associated to a methodologyoften known as Erlangization (approximation of a constant via Erlang random variables).Using Chebyshev inequality, it is simple to prove thatG_m(y)→_[1,∞)(y) weakly, whereis the indicator function. Next, we turn our attention to the class of phase-type scale mixture distributions <cit.>. In this paper, we introduce such a class via Mellin–Stieltjes convolution Π⋆ G(u):=∫_0^∞ G(u/s)dΠ(s)=∫_0^∞Π(u/s)dG(s), where G∼(β,Λ) and Π is a proper nonnegativedistribution. Mellin–Stieltjes convolutions can be interpreted in two equivalent ways.The most common one is to interpret the distributionΠ⋆ G as scaled mixture distribution; for instance, ∫ G(u/s)dΠ(s) can be seen as a mixture of the scaled distributions G_s(u)=G(u/s) with scaling distribution Π(s) (and vice versa). 
However, it is often more practical to see that Π⋆ G corresponds to the distributionof the product of two independent random variables having distributions Π and G.Furthermore, the integrated tail of Π⋆ G is given in the following proposition.Let Π and G be independentnonnegative distributions,then the integrated tail of Π⋆ G is given byΠ⋆ G =H_Π⋆G,where H_Π(s)=sΠ(s)/μ_Π is called the moment distribution of Π and G is the integrated tail of G. We use μ to denote the expecation.Since the Mellin–Stieltjes convoluton of Π and G can be seen as the distribution of two independent random variables having distribution Π and G, then μ_Π⋆ G=μ_Πμ_G. Observe thatΠ⋆ G(u) = 1/μ_Π·μ_G∫_0^u (1-Π⋆ G(t)) t= 1/μ_Π∫_0^u ∫_0^∞1-G(t/s)μ_GΠ(s)t = ∫_0^∞G (u/s)s Π(s)/μ_Π = ∫_0^∞G (u/s)H_Π(s)= H_Π⋆G(u).If G is a PH distribution G∼(β,Λ), then G∼(-βΛ^-1/μ_G,Λ) is also a PH distribution <cit.>. The following can be seen as a particular case of Proposition <ref> when G corresponds to the point mass at one probability measure, however, a self-contained proof is provided. Let H_F(s): =sF(s)/μ_F be the moment distribution of F and U∼(0,1). ThenF = H_F⋆ U.F(u)= 1/μ_F∫_0^u (1-F(t)) t = 1/μ_F∫_0^u ∫_0^∞_(t,∞)(s) F(s)t = 1/μ_F∫_0^∞{∫_0^u_[0,s)(t) t} F(s) = 1/μ_F∫_0^∞{u∧ s} F(s) = ∫_0^∞{(u/s)∧ 1}sF(s)/μ_F= H_F⋆ U(u),where the second equality follows from Tonelli's theorem and from the fact that for s,t≥ 0, _(t,∞)(s) = _[0, s)(t). In this paper we are particularly interested in the case where Π is a discrete distributionhaving support {s_i:i∈} with 0<s_1< s_2<… and vector of probabilities π=(π_1,π_2,⋯) such that π_∞=1, where _∞ is an infinite dimensional column vector with all elements to be 1. In such a case, the distribution of Π⋆ G can be written as(Π⋆ G)(u)=∑_i=1^∞G(u/s_i)π_i, u≥ 0.Since the scaled phase-type distributions G(u/s_i)∼(β,Λ/s_i)are PH distributions again, we choose to call Π⋆ G a phase-type scale mixture distribution. The class of phase-type scale mixtures was first introduced in <cit.>, though they restricted themselves to distributions Π supported over the natural numbers. One of the main features of the class of phase-type scale mixtures having a nonnegative discrete scaling distribution Π is that it forms a subclass of the so called infinite dimensional phase-type (IDPH) distributions; indeed, in such a case Π⋆ G can be interpreted as the distribution of absorption timeof a Markov jump process with one absorbing state and infinite number of transient states, having representation(α, T) where α=(π⊗β),the Kronecker product of π and β, and 𝐓=( [ Λ/s_1 0 0 ⋯; 0 Λ/s_2 0 ⋯; 0 0 Λ/s_3 ⋯; ⋮ ⋮ ⋮ ⋱ ]).Finally, if the underlying phase-type distribution G is Erlang, and Π is any nonnegative discrete distribution, then we say thatthe distribution Π⋆ G is an Erlangized scale mixture.We will discuss more properties of this distribution in later sections. All the classes of distributions defined above are particularly attractive for modelling purposes in part because they are dense in the nonnegative distributions (both the class of infinite dimensional phase-type distributions and the class of phase-type scale mixturestrivially inherit the dense property from classical phase-type distributions, while the proof that the class ofErlangized scale mixtures being dense is simple and given in the next subsection). 
The class of infinite dimensional phase-type distributions contains heavy-tailed distributions but it is mathematically intractable.The rest of the classes defined above remain dense, contain both light and heavy-tailed distributions and are more tractable from boththeoretical and computational perspectives. Here, we concentrate on a particular subclass of the phase-type scale mixtures defined in <cit.> by narrowing such a class to Erlangized scale mixtures having scaling distribution Π with general discrete support.§.§ Approximations via Erlangized scale mixturesNext we present a methodology for approximating an arbitrary nonnegative distribution Π within the class of Erlangized scale mixtures. The construction is simple and based on the following straightforward result. Let Π_m be a sequence of nonnegative discrete distributions such thatΠ_m→Π and G_m∼Erlang(ξ(m),ξ(m)). ThenΠ_m⋆ G_m⟶Π.Since the sequence G_m converges weakly to _[1,∞), then the result follows directly from an application of Slutsky's theorem <cit.>.For convenience, we refer to this method of approximation as approximation via Erlangized scale mixtures. Thesequence of discrete distributions Π_m can be seen as roughapproximations of the nonnegative distribution Π. Since G_m is anabsolutely continuous distribution with respect to the Lebesgue measure, then the Mellin–Stieltjes convolution has a smoothing effect over the rough approximating distributions Π_m. Indeed, Π_m⋆ G_m is an absolutely continuousdistribution with respect to the Lebesgue measure (see Figure <ref>).§ RUIN PROBABILITIESIn this section we introduce a method of approximation for the ruin probability in the Cramér–Lundberg risk model using Erlangized scale mixtures. Weapply the results of <cit.> to obtain expressions for the ruin probability in terms of infinite series involving operations with finite dimensional arrays, andexploit the simple structure of the Erlang distributionto obtain explicit formulas which will be free of matrix operations.For constructing approximations of the ruin probability we follow two alternative approaches. In the first approach we approximate directly the integrated tail distribution F via Erlangized scale mixtures and is the one that we advocate in this paper, we shall call it approximation A. Thisstraightforward approach delivers explicit formulas which are simple to write and implement; as we will see, the approximations obtained are very accurate.However, the approximationobtained by using this approach cannot be easily related to the probability of ruin of some reserve processes because we cannot identify an Erlangized scale mixture as the integrated tail of a phase-type scale mixture.Therefore, an approximating distribution for the claim sizes is notimmediately available in this setting.It is also required to have an explicit expression for the integrated tail distribution F. A second approach, which is named as approximation B, is also analysed wherethe claim size distribution is approximated with an Erlangized scale mixture. This is equivalent to approximating the integrated tail F with the integrated tail distribution of an Erlangized scale mixture distribution. As we will show later, such an integrated tail distribution is in the class of phase-type distributions so similar explicit formulas for the ruin probability are obtained. This approach can be considered more natural but the resulting expressions are more complex and the approximations are less accurate. 
The error of approximation is bigger as a result of the amplifying effect of integrating the tail probability of the approximating distribution. Its implementation is more involved and the computational times are much slower when compared to the results delivered using approximation A.We remark that approximation B is the more commonly used, like for instance in <cit.> and <cit.>.Thus we have included its analysis for comparison purposes.The remaining content of this section is organised as follows: in subsection <ref> we introduce some basic concepts of ruin probabilities in the classical Cramér–Lundberg risk model. The two approximations of the ruin probability via Erlangized scale mixtures are presented in subsection <ref>. §.§ Ruin probability in the Cramér–Lundberg risk modelWe consider the classical compound Poisson risk model <cit.>:R_t=u+t-∑_k=1^N_tX_k. Here u is the initial reserve of an insurance company, the premiums flow inat a rate 1 per unit time t, X_1, X_2,⋯ are i.i.d. claim sizes with common distribution F and mean μ_F, {N_t}_t≥ 0 is a Poisson process with rate γ,denoting the arrival of claims. So R_t is a risk model for the time evolutionof the reserve of the insurance company. We say that ruin occurs if and only if the reserve ever dropsbelow zero; we denote ψ_F(u):=inf{R_t<0:t>0}. For such a model, the well-known Pollaczek–Khinchine formula <cit.> implies that the ruin probability can be expressed in terms of convolutions:ψ_F(u)=(1-ρ)∑_n=1^∞ρ^nF^∗ n(u),where ρ=γμ_F<1 is the average claim amount per unit time, F^∗ n denotes the nth-fold convolution of F, :=1-F denotes the tail probability of F, andF is the integrated tail distribution, also known as the stationary excess distribution:F(u)=1μ_F∫_0^u (t) t. The calculation of ruin probability is conveniently approached via renewal theory. The ruin probability ψ_F(u) of the classical Cramér–Lundberg process can be written as the probability that a terminating renewalprocess reaches level u.In such a model, the distribution of the renewals is defective, and given by ρF(u). In particular, if the renewals follow a defective phase-type scale mixture distributionwith distribution ρΠ⋆ G with 0<ρ<1, then <cit.> derived thethe probability that the lifetime of the renewal is larger than u is given byψ_Π⋆ G(u)=ρα^(T+ρtα)ue_∞, where α=(π⊗β), T=( s× I_∞)^-1⊗Λ and t=- T e_∞. Here s=(s_1,s_2,⋯), I is an identity matrix and I_∞ is that of infinite dimension. The formula above is not of practical use becausethe vectors α,t andthe matrixT have infinite dimensions. However, using the special structure of T,they further refined the formula above and expressed ψ_Π⋆ G as an infinite series involving matrices and vectors of finite dimension which characterizethe underlying distributions Π and G.Next, we obtain the explicit formula for ψ_Π⋆ G(u) in terms of the parameters characterising the renewal distribution Π⋆ G (equivalently the integrated tail distribution). This is a slight generalization of the results given in <cit.> who implicitly assumed that Π⋆ G is the integrated tail of phase-type scale mixture distribution, so their results are given instead in terms of the parameterscharacterising the underlying claim size distribution. For simplicity of notation, we will write G_m∼Erlang(ξ,ξ) instead ofErlang(ξ(m),ξ(m)) for the rest of the paper. 
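Before turning to that explicit formula, note that the Pollaczek–Khinchine representation above already suggests a simple Monte Carlo baseline: ψ_F(u) is the probability that a geometric number of independent ladder heights, each distributed according to the integrated tail, exceeds u. The sketch below (only a sanity check of ours, not part of the proposed scheme) assumes the integrated tail can be sampled by inversion; the Pareto-type inverse and the values of ρ, u and the sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def pk_ruin_mc(u, rho, F_bar_inv, n_samples=200_000):
    """Monte Carlo version of the Pollaczek-Khinchine formula:
    psi_F(u) = P(Y_1 + ... + Y_N > u), with P(N = n) = (1 - rho) * rho^n and
    Y_i i.i.d. with cdf equal to the integrated tail, sampled by inversion."""
    N = rng.geometric(1.0 - rho, size=n_samples) - 1   # geometric on {0, 1, 2, ...}
    totals = np.zeros(n_samples)
    for i, k in enumerate(N):
        if k > 0:
            totals[i] = F_bar_inv(rng.random(k)).sum()
    return float(np.mean(totals > u))

# illustration: Pareto claims with phi = 2 give integrated tail 1 - (1 + x)^{-1},
# whose inverse is q / (1 - q)
print(pk_ruin_mc(u=20.0, rho=0.5, F_bar_inv=lambda q: q / (1.0 - q)))
```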
[<cit.>]Let 0<ρ<1,ψ_Π⋆ G_m(u)=∑_n=0^∞κ_n(θ u/s_1)^n^-θ u/s_1n!,where θ is the largest diagonal element of -Λ and κ_n =ρ, n=0, ρ[ s_1θ(∑_i=0^n-1κ_n-1-i∑_j=1^∞π_js_j B_ij)+∑_j=1^∞π_jC_nj], n>0,whereB_ij :=β(I+(s_jθ/s_1)^-1Λ)^iλ, C_nj :=β(I+(s_jθ/s_1)^-1Λ)^ne. Since θ is the largest diagonal element of -Λ and{s_i} is an increasing sequence, then θ/s_1 is the largest diagonal element of- T, then from Theorem 3.1 in <cit.>, we have ψ_Π⋆ G_m(u)=∑_n=0^∞κ_n(θ u/s_1)^n^-θ u/s_1n!,where κ_0=ρ(π⊗β)_∞=ρ∑_i=0^∞π_i=ρ, and κ_n=ρ[∑_i=0^n-1s_1θ(π⊗β)(I_∞+s_1θT)^itκ_n-1-i+(π⊗β)(I_∞+s_1θT)^ne_∞].It is not difficult to see that(π⊗β)(I_∞+s_1θT)^it=∑_j=1^∞π_jβ(I+s_1s_jθΛ)^i(-Λes_j)=∑_j=1^∞π_js_jB_ijand (π⊗β)(I_∞+s_1θT)^ne_∞= ∑_j=1^∞π_jβ(I+s_1s_jθΛ)^ne=∑_j=1^∞π_jC_nj,where B_ij and C_nj are defined as above.Proposition <ref> is to be interpreted as the probability that the lifetime of a defective renewal process exceeds level u.An interpretation in terms of the risk process is not always possible since we may not be able to identify a claim size distribution having integrated tail Π⋆ G_m. The result above can be seen as a (slight) generalization ofTheorem 3.1 of <cit.>. This can be seen from Proposition <ref> that shows that if the claim sizes are distributed according to an Erlangized scale mixture Π⋆ G_m,then its integrated tail of Π⋆ G_m remains in the family of phase-type scale mixtures. Using the results of Proposition <ref> and Remark <ref>, we recover the formula of <cit.>.ψ_H_Π⋆G(u)=∑_n=0^∞κ_n(θ u/s_1)^n^-θ u/s_1n!,where θ is the largest diagonal element of -Λ and κ_n =ρ, n=0, ρμ_Πμ_G[s_1θ(∑_i=0^n-1κ_n-1-i∑_j=1^∞π_jC_ij) +∑_j=1^∞π_js_jD_nj], n>0,whereC_ij :=β(I+(s_jθ/s_1)^-1Λ)^ie, D_nj :=β(-Λ)^-1(I+(s_jθ/s_1)^-1Λ)^ne.A drawback from the formulas given above is that the calculation of the quantities B_ij, C_ij and D_ij is computationally expensive since theseinvolve costly matrix operations. However, these expression can be simplified in our case because because the subintensity matrix Λ of an Erlang distribution can be written as a bidiagonal matrix, while the vectors denoting theinitial distribution β and the absorption rates λ are proportional to canonical vectors.Hence, the resulting expressions for the terms B_ij,C_ij and D_ij in Proposition <ref> and Proposition <ref> take relatively simple forms.These are given in the following Lemma.Suppose that G_m∼(ξ,ξ), then B_ij = [6.5cm]0, i< ξ-1, [6.5cm]ξ(i ξ-1)(1-s_1s_j)^i-ξ+1(s_1s_j)^ξ-1, i≥ξ-1, C_ij = [6.5cm]1, i≤ξ-1, [6.5cm]∑_k=0^ξ-1(i k)(1-s_1s_j)^i-k(s_1s_j)^k, i≥ξ-1, D_ij = [6.5cm]1-iξs_1s_j, i≤ξ, [6.5cm]∑_k=0^ξ-1ξ-kξ(i k)(1-s_1s_j)^i-k(s_1s_j)^k, i>ξ. Let (β,Λ) be the canonical parameters of the phase-type representation of an (ξ,ξ) distribution (see Section 2.1), so θ=ξ. Recall thatB_ij :=β(I+(s_jξ/s_1)^-1Λ)^iλ, C_ij :=β(I+(s_jξ/s_1)^-1Λ)^ie, D_ij :=β(-Λ)^-1(I+(s_jξ/s_1)^-1Λ)^ie.Observe that the matrix (I+(s_jξ/s_1)^-1Λ) is bidiagonal with all the elements in the diagonal being equal. In particular, the (k,ℓ)-th entry of the i-th power of such a matrix is given by (I+(s_jξ/s_1)^-1Λ)_kℓ^i=(i ℓ-k)(1-s_1s_j)^i-ℓ+k(s_1s_j)^ℓ-k 1≤ k≤ℓ≤ i+10 otherwise.Therefore, B_ij corresponds to the (1,ξ)-entry of the matrix (I+(s_jξ/s_1)^-1Λ) multiplied by ξ. 
C_ij corresponds to the sum of the elements of the first row of (I+(s_jξ/s_1)^-1Λ).For the last case, observe that Λ^-1=-λ^-1 U where U is an upper triangular matrix of ones.Therefore, D_ij corresponds to the sum of the elements of (I+(s_jξ/s_1)^-1Λ) and divided by ξ.D_ij is written as the sum of all the elements in the upper diagonals divided by ξ. §.§ Ruin probability for Erlangized scale mixturesIn this subsection we specialize in approximating the ruin probability ψ_F(u) usingErlangized scale mixtures.We assume that the target Cramér–Lundberg risk process has Poisson intensity γ and claimsize distribution F, so the average claim amount per unit of time is ρ=γμ_F.First, we approximate the integrated tail F with an Erlangized scale mixture Π⋆ G_m where Π is an approximating discrete distribution of F, that is, the approach of approximation A.The approximation for ψ_F(u) is given next:Let Π be anonnegative discrete distribution supported over {s_i:i∈},G_m∼(ξ,ξ) and ρ=γμ_F<1. The lifetime of a terminating renewal process having defective renewal distribution ρΠ⋆ G_mis given byψ_Π⋆ G_m(u)=∑_n=0^∞κ_n(ξ u/s_1)^n^-ξ u/s_1n!,whereκ_n=γμ_F, 0 ≤ n ≤ξ-1, γμ_F[ ∑_i=ξ-1^n-1κ_n-1-iℬ_i +𝒞_n], ξ≤ n,and ℬ_i =∑_j=1^∞π_j s_1s_j(ξ-1;i,s_1/s_j), 𝒞_n =∑_j=1^∞ π_j(ξ-1;n,s_1/s_j),where (·;n,p) and (·;n,p) denote the pdf and cdfrespectively of a binomial distribution with parameters n and p . The result follows by letting ρ=γμ_F, θ=ξ, λ=ξ,applying Proposition <ref>and Lemma <ref> givenin the previous subsection.We propose to use ψ_Π⋆ G_m as an approximation of ruin probability ψ_F. One of the most attractive features of the result above is that because of the simple structure of Erlangized scale mixture it is possible to rewrite the approximation of the ruin probability in simple terms which are free of matrix operations.In particular, thesimplified expressions for the values of κ_n given in terms of the binomial distribution are particularly convenient for computational purposes.As stressed before, for approximation A we sacrifice the interpretation of theapproximation ψ_Π⋆ G_mas the ruin probability of some Cramér–Lundberg reserve process since it is not possible to easily identify a distribution whose integrated tail corresponds to the Erlangized scale mixture distribution Π⋆ G_m.We also lose the interpretation of the value ρ as the average claim amount per unit of time (in the original risk process, the value of ρ is selected as the product of the expected value of an individual claim multiplied by the intensity of the Poisson process), butfor practical computations this is easily fixedby simply letting ρ=γμ_F where μ_F is the mean value of the original claim sizes.As mentioned before, a more common and somewhat natural approach isto approximate the claim size distributions via Erlangized scale mixtures, i.e. approximation B.The following theorem provides an expression for approximation B of the probability of ruin ψ_F withthe ruin probability of a reserve process having claim sizes Π⋆ G_m. This result could be useful for instance in a situation where the integrated tail is not available and it is difficult to compute. Note that we have modified the intensity of the Poisson process in order to match the average claim amount per unit of time ρ=γμ_F of the original process.Thisselection will help to demonstrate uniform convergence. Let Π be anonnegative discrete distribution supported over {s_i:i∈} andG_m∼(ξ,ξ). 
The probability of ruin in the Cramér–Lundberg model having intensity γμ_F/μ_Π and claim size distribution Π⋆ G is given byψ_H_Π⋆G_m(u)=∑_n=0^∞κ_n(ξ u/s_1)^n^-ξ u/s_1n!,whereκ_n=γμ_F, n=0,(γμ_F-1)(1+γμ_F s_1 μ_Πξ)^n+1,1 ≤ n ≤ξ, γμ_F s_1μ_Πξ∑_i=0^n-1κ_n-1-i𝒞_i +γμ_F μ_Π𝒟_n, ξ< n.and𝒞_i =∑_j=1^∞π_j(ξ-1;i,s_1/s_j), 𝒟_n =∑_j=1^∞ π_j s_j∑_k=0^ξ-1ξ-kξ(k;n,s_1/s_j). Let θ=ξ and λ=ξ. If 1≤ n ≤ξ, thenfrom Proposition <ref> and Lemma <ref>.we have thatκ_n = γμ_F s_1μ_Πξ∑_i=0^n-1κ_n-1-i + γμ_F μ_Π∑_j=1^∞π_js_j(1-n/ξs_1/s_j),= γμ_F s_1μ_Πξ∑_i=0^n-1κ_n-1-i +γμ_F- γμ_F μ_Πξ∑_j=1^∞s_jπ_jns_1s_j= γμ_F s_1μ_Πξ(∑_i=0^n-1κ_i-n)+γμ_F.Then by induction, we can get for 1≤ n ≤ξ,κ_n=(γμ_F-1)(1+ γμ_F s_1μ_Πξ)^n+1.The cases n=0 and ξ<n follow directly from applying Proposition <ref> and Lemma <ref>.§ ERROR BOUNDS FOR THE RUIN PROBABILITYIn this section we will assess the accuracy of the two proposed approximations for the ruin probability. We will do so by providing boundsfor the error of approximation. We identify two sources of error. The first source is due to the Mellin–Stieltjes convolution with the Erlang distribution; we will call this the Erlangization error. The second source of error is due to theapproximation of the integrated tail F (via Π in the first case, and via H_Π in the second case); we will refer to this as the discretization error. For the case of approximation Ain Theorem <ref>we can use the triangle inequality to bound the overall error with the aggregation of the two types of errors, that is|ψ_F(u)-ψ_Π⋆ G_m(u)|≤|ψ_F(u)-ψ_F⋆ G_m(u)|+ |ψ_F⋆ G_m(u)-ψ_Π⋆ G_m(u)|. Forapproximation B in Theorem <ref> we have an analogous bound|ψ_F(u)-ψ_H_Π⋆G_m(u)|≤|ψ_F(u)-ψ_H_F⋆G_m(u)|+ |ψ_H_F⋆G_m(u)-ψ_H_Π⋆G_m(u)|.We will rely on the Pollaczek–Khinchine formula (<ref>) for the construction of the bounds.Recall that the formula above is interpreted as the probability that a terminating renewal process having defective renewal probability ρF(·) will reach level u before terminating.In our two approximations of ψ_F, we have selected the value of ρ=γμ_F so we can write the errors of approximation in terms of the differences between the convolutions of the integrated tail exclusively.For instance, the error of Erlangization inapproximation A is given by|ψ_F(u)-ψ_F⋆ G_m(u)| =|∑_n=1^∞(1-ρ)ρ^n(F^∗ n(u)-F⋆ G_m^∗ n(u))|.Note that n=0 in the above series is equal to zero.For ourapproximation B, it is noted that setting the parameter ρ=γμ_F is equivalent to calculating the ruin probability for a risk process having integrated claim sizes distributed according toH_Π⋆G_m while the intensity of the Poisson process is changed to γμ_F/μ_Π. With such an adjustment, it is possible to write both the Erlangization and discretization errors in terms of differences of higher order convolutions as given above.We will divide this section in three parts. In subsection <ref> we refine an existing bound introduced in <cit.> for the error of approximation of the ruin probability. This refined result will be used in the construction of bounds for the error of discretization. In subsections <ref> and <ref> we provide bounds for the errors for each of the two approximations proposed.§.§ General bounds for the error of approximationThe following Theorem provides a refined bound for the error of approximation for the ruin probability provided by <cit.>. 
For any distributions with positive support F_1 and F_2 and fixed u>0, we have that|ψ_F_1(u)-ψ_F_2(u)| ≤sup_s<u{|F_1(s) - F_2(s)|}(1-ρ) ρ/(1-ρF_1(u))(1-ρF_2(u)).We claim that for any n≥ 1,sup_s<u{|F_1^*n(s) - F_2^*n(s)|≤sup_s<u{|F_1(s) - F_2(s)|}∑_i=0^n-1F_1^i(u) F_2^n-1-i(u).Let us prove it by induction. It is clearly valid for n=1. Let us assume that it is valid for some n≥ 1. Thensup_s<u{|F_1^*n+1(s) - F_2^*n+1(s)|}= sup_s<u{|F_1^*n+1(s) - F_1^*n*F_2(s) +F_1^*n*F_2(s) - F_2^*n+1(s)|}≤sup_s<u{|F_1^*n+1(s) - F_1^*n*F_2(s)|} +sup_s<u{| F_1^*n*F_2(s) - F_2^*n+1(s)|}.Clearly,sup_s<u{|F_1^*n+1(s) - F_1^*n*F_2(s)|} ≤sup_s<u{∫_0^s|F_1(r) - F_2(r)|F_1^*n(r)}≤sup_s<u{∫_0^ssup_l<u{|F_1(l) - F_2(l)|}F_1^*n(r)}= sup_l<u{|F_1(l) - F_2(l)|}sup_s<u{∫_0^sF_1^*n(r)}= sup_l<u{|F_1(l) - F_2(l)|}F_1^*n(u)≤sup_l<u{|F_1(l) - F_2(l)|}F_1^n(u).In the last step we have used that F^∗ n(u) corresponds to the probability of an event where the sum of n i.i.d. random variables is smaller equal than u while F^n(u) corresponds to the probability of the maximum of i.i.d. random variables issmaller equal than u; if the random variables are nonnegative then the probability of thesum is clearly smallerthan the probability of the maximum. Using the hypothesis induction we have thatsup_s<u{| F_1^*n*F_2(s) - F_2^*n+1(s)|} ≤sup_s<u{∫_0^s|F_1^*n(r) - F_2^*n(r)|F_2(r)}≤sup_s<u{∫_0^ssup_l<u{|F_1^*n(l) - F_2^*n(l)|}F_2(r)} = sup_l<u{|F_1^*n(l) - F_2^*n(l)|}sup_s<u{∫_0^sF_2(r)}≤(sup_s<u{|F_1(s) - F_2(s)|}∑_i=0^n-1F_1^i(u)F_2^n-1-i(u))F_2(u) = sup_s<u{|F_1(s) - F_2(s)|}∑_i=0^n-1F_1^i(u)F_2^n-i(u).Summing (<ref>) and (<ref>), we get thatsup_s<u{|F_1^*n+1(s) - F_2^*n+1(s)|}≤sup_s<u{|F_1(s) - F_2(s)|}∑_i=0^nF_1^i(u)F_2^n-i(u),so that formula (<ref>) is valid for all n≥ 1. Finally,|ψ_F_1(u)-ψ_F_2(u)| ≤∑_n=1^∞ (1-ρ)ρ^n |F_1^*n(u) - F_2^*n(u)|≤sup_s<u{|F_1(s) - F_2(s)|}(1-ρ)∑_n=1^∞ρ^n ∑_i=0^n-1F_1^i(u)F_2^n-1-i(u) = sup_s<u{|F_1(s) - F_2(s)|} (1-ρ)∑_i=0^∞∑_n=i+1^∞ρ^n F_1^i(u)F_2^n-1-i(u) = sup_s<u{|F_1(s) - F_2(s)|}(1-ρ)∑_i=0^∞∑_n=0^∞ρ^n+i+1F_1^i(u)F_2^n(u) = sup_s<u{|F_1(s) - F_2(s)|} (1-ρ) ρ∑_i=0^∞ρ^i F_1^i(u) ∑_n=0^∞ρ^nF_2^n(u) = sup_s<u{|F_1(s) - F_2(s)|} (1-ρ) ρ1/1-ρF_1(u)1/1-ρF_2(u) = sup_s<u{|F_1(s) - F_2(s)|}(1-ρ) ρ/(1-ρF_1(u))(1-ρF_2(u)). We remark that the bound given above is a refinement of the resultobtained in <cit.>: The construction of our bound is based on the inequality (<ref>) and given bysup_s<u{| F_1^*n*F_2(s) - F_2^*n+1(s)|}≤sup_s<u{|F_1(s) - F_2(s)|}∑_i=0^n-1F_1^i(u)F_2^n-i(u).The expression on the right hand side takes values in (0,1) for all values of n. In contrast, the quantity used in <cit.> to bound the expression in the left hand side is nF(u), which goes to infinity as n→∞. We remark however, that the final bound for the error termproposed there remains bounded. A comparison of the two bounds reveals that the one suggested aboveimproves <cit.>'s bound by a factor of(1-ρ)^2/(1-ρF_1(u))(1-ρF_2(u))≤ 1.§.§ Error bounds for ψ_Π⋆ G_mThis subsection is dedicated to the construction of the bounds forapproximation A suggested in Theorem <ref>. §.§.§ Bounds for the Erlangization error of ψ_Π⋆ GA bound for the Erlangization error is constructed throughout the following results.Let {𝒜_k:k∈} be an decreasing collection of closed intervals in ^+, so𝒜_k=[a_k,b_k] and 𝒜_k+1⊂𝒜_k. 
If 𝒜_0=[0,∞] and 𝒜_k↘{1} thensup_ℓ≤ u|F(ℓ)-F⋆ G_m(ℓ)| ≤∑_k=0^∞sup_ℓ<u(F_b_k(ℓ)-F_a_k(ℓ))(G_m(𝒜_k)-G_m(𝒜_k+1)),where G_m(𝒜_k):=G_m(b_k)-G_m(a_k).sup_ℓ≤ u|F(ℓ)-F⋆ G_m(ℓ)| ≤sup_ℓ<u|∑_k=0^∞[F(ℓ)∫_𝒜_k/𝒜_k+1 G_m(s)-∫_𝒜_k/𝒜_k+1F(ℓ/s) G_m(s)]|=∑_k=0^∞sup_ℓ<u|∫_𝒜_k/𝒜_k+1[F(ℓ)-F(ℓ/s)] G_m(s)|≤∑_k=0^∞sup_ℓ<u(F(ℓ/b_k)-F(ℓ/a_k))(G_m(𝒜_k)-G_m(𝒜_k+1))≤∑_k=0^∞sup_ℓ<u(F_b_k(ℓ)-F_a_k(ℓ))(G_m(𝒜_k)-G_m(𝒜_k+1)).An upper bound for the Erlangization error is given next. Let {𝒜_k:k∈} be a sequence as defined in Lemma <ref>. Then| ψ_F(u) - ψ_F⋆ G_m(u) |≤ρ/(1-ρF(u))∑_k=0^∞sup_ℓ<u(F_b_k(ℓ)-F_a_k(ℓ))(G_m(𝒜_k)-G_m(𝒜_k+1)).Moreover, if F is absolutely continuous with bounded density then ψ_F(u) →ψ_F⋆ G_m(u) uniformly as ξ(m)→∞.In our numerical experiments we found that it is enough to take a finite number K ofsets 𝒜_1,…,A_K to obtain a usable numerical bound. This is equivalent to take 𝒜_k={1} for all k≥ K in the Theorem above. The proof follows from Theorem <ref>, Lemma <ref> and the following observation1/1-ρF⋆ G_m(u)≤1/1-ρ, ∀ u>0.To prove uniform convergence we simply note that the expression above can be further bounded above by| ψ_F(u) - ψ_F⋆ G_m(u) |≤ρ/1-ρ∑_k=0^∞sup_ℓ>0(F_b_k(ℓ)-F_a_k(ℓ))(G_m(𝒜_k)-G_m(𝒜_k+1)).Notice that if F is an absolutely continuous distribution with a bounded density, then for any sequence of nonempty sets such that 𝒜_k↘{1}, it holds that for every ϵ>0 we can find k_0∈ such that sup_ℓ>0(F_b_k(ℓ)-F_a_k(ℓ))<ϵ (1-ρ)/2ρ for all k>k_0.Similarly, we can find ξ(m_0)∈ large enough such that 1-G_m(A_k+1)≤ϵ (1-ρ)/2ρ. Putting together this results we obtain that for all k≥ k_0 and m≥ m_0| ψ_F(u) - ψ_F⋆ G_m(u) |≤ρ/1-ρ[ sup_ℓ>0(F_b_k(ℓ)-F_a_k(ℓ))+(1-G_m(𝒜_k+1))] = ϵ.Hence, uniform convergence follows.§.§.§ Bounds for the discretization error of ψ_Π⋆ GNext, we address the construction of a bound for the discretization error: |ψ_F⋆ G_m(u)-ψ_Π⋆ G_m(u)|.The following Theorem makes use of our refinement of <cit.>'s bound for the construction of an upper bound for the discretization error.Letη:=sup_0≤ s≤ u{|F⋆G_m(s) - Π⋆G_m(s)|} then for all 0<δ<∞ it holds that|ψ_F⋆G_m(u)-ψ_Π⋆ G_m(u)|≤η(1-ρ) ρ/(1-ρ (F(u/δ)+ G_m(δ)))(1-ρ (Π(u/δ)+ G_m(δ))). The bound above decreases asΠ gets close to F; this is reflected in the value of η.The bound will become smaller as long as terms F(u/δ)+ G_m(δ) and Π(u/δ)+ G_m(δ) in the denominator become bigger.The value of δ minimizing this bound can be easily found numerically.The result follows from observing that F⋆ G_m(u)= ∫_0^δF(u/s) G_m(s)+∫_δ^∞F(u/s) G_m(s)≤F(u/δ) +G_m(δ).We just apply our refinement of <cit.>'s bound provided in Theorem <ref>. |ψ_F⋆ G_m(u)-ψ_Π⋆G_m(u)|≤η(1-ρ) ρ/(1-ρF⋆G_m(u))(1-ρΠ⋆G_m(u)).A lower bound for Π⋆ G_m(u) can be found in an analogous way. The last step in the construction of an upper bound for the discretization error is finding an upper bound for η=sup_0≤ s≤ u|F⋆G_m(s)-Π⋆G_m(s)|. We suggest a bound in the following Proposition.Let 0<δ<∞, thensup_0≤ s≤ u/δ|F⋆ G_m(s) - Π⋆ G_m(s)|≤η(δ),whereη(δ)=sup_u/δ≤ s<∞|F(s)-Π(s)|G_m(δ)+sup_0<s≤ u/δ|F(s)-Π(s)|G_m(δ). |F⋆G_m(u) - Π⋆G_m(u)| =|∫_0^∞F(u/s) G_m(s)-∫_0^∞Π(u/s) G_m(s)|≤∫_0^∞|F(u/s)-Π(u/s)| G_m(s)≤∫_0^δ|F(u/s)-Π(u/s)| G_m(s) +∫_δ^∞|F(u/s)-Π(u/s)| G_m(s)≤sup_u/δ≤ s<∞| F(s)-Π(s)| G_m(δ)+sup_0<s≤ u/δ|F(s)-Π(s)|G_m(δ). In practice, we would select a value of δ which minimizes the upper bound η(δ). 
Notice, that if the tail probability of F is well approximated by Π, then the error bound will in general decrease.This suggests thatΠshould provide a good approximation of F particularly in the tail in order to reduce effectively the error of approximation.§.§ Error bounds for ψ_H_Π⋆G_mNext we turn our attention toapproximation B of the ruin probability when the claim size distribution F is approximated via Erlangized scale mixtures. We remark that the bounds presented in this section are simple and sufficient to show uniform convergence. However, these bounds are too rough for practical purposes.A set of more refined bounds can be obtained but their construction and expressions are more complicated, so these have been relegated to the appendix. §.§.§ Bounds for the Erlangization error of ψ_H_Π⋆GThe following theorem provides a first bound for the Erlangization error ofthe approximation ψ_H_F⋆G_m.A tighter bound for the Erlangization error can be found in the Appendix. | ψ_H_F⋆ U(u) - ψ_H_F⋆G_m(u) | ≤2ρϵ_m/1-ρ(1-ϵ_m)≤ρ/1-ρ√(2/π m),where ϵ_m is defined as in Lemma <ref>. Since G_m(s)→_[1, ∞ )(s) for all s≠1so g_m(s):=/ sG_m(s) = 1 - G_m(s)→_[0, 1)(s),∀ s≠1.Let {X_n'} be a sequence of independentand identically H_F distributed random variables. Then, by Propositions <ref> - <ref>,| F^*n(u) - F⋆ G^*n(u)| =| (H_F⋆ U)^*n(u) - (H_F⋆G_m)^*n(u)|≤_^nℙ(s_1 X_1' + … + s_n X_n'≤ u ) | ∏_i=1^n _[0,1)(s_i)-∏_i=1^n g_m(s_i)|s_1 … s_n≤_^n| ∏_i=1^n _[0,1)(s_i)-∏_i=1^n g_m(s_i)|s_1 … s_n.That the last integral is bounded by 2(1-(1-ϵ_m)^n) follows from Corollary <ref> in the Appendix. Therefore, we have that| ψ_F(u) - ψ_F⋆ G_m(u) | ≤∑_n=1^∞ (1-ρ)ρ^n| F^*n(u) - (F⋆ G_m)^*n(u)| ≤∑_n=1^∞ (1-ρ)ρ^n 2(1-(1-ϵ_m)^n) = 1-1-ρ/1-ρ(1-ϵ_m) =2ρϵ_m/1-ρ(1-ϵ_m). Lemma <ref> provides an explicit bound for ϵ_m.The following result provides with an explicit expression useful for obtainingthe integrated distance between the survival function1-G_m and the density of a (0,1) distribution.That is∫_0^∞|(1-G_m(s))-_0,1(s)| s.ϵ_m=∫_0^1 G_m(s)ds =∫_1^∞(1-G_m(s))ds=^-ξξ^ξξ!≤ (2πξ)^-1/2. Firstly observe that μ_G_m=1, it follows that 1-G_m is the density of the integrated tail distribution G_m. Hence,∫_1^∞(1-G_m (s)) s=1- ∫_0^1(1-G_m (s))dd s = ∫_0^1 G_m (s) s =ϵ_m,and the second equality follows.For the third equality we have thatϵ_m =∫_0^1G_m(s) s =∫_0^1(1-∑_n=0^ξ-11/n!^-ξ s(ξ s)^n) s =1-∑_n=0^ξ-11/n!∫_0^1^-ξ s(ξ s)^n s=1-∑_n=0^ξ-11/n!(n!ξ^-1-^-ξ∑_k=0^nn!ξ^k-1k!) =^-ξ∑_n=0^ξ-1∑_k=0^nξ^k-1/k!=^-ξ∑_k=0^ξ-1(ξ-k)ξ^k-1/k! =^-ξ(∑_k=0^ξ-1ξ^k/k!-∑_k=1^ξ-2kξ^k/k!) =^-ξξ^ξξ!.Finally, an application of Stirling's formulaξ!>√(2π)ξ^ξ+1/2^-ξ yields ϵ_m<(2πξ)^-1/2. Note that the bound for the error provided above only depends onthe parameter of the Erlang distribution ξ and the average claim amount per unit of time ρ.This bound does not depend on the initial reserve u, nor the underlying claim size distribution F, so ψ_F⋆G_m converges uniformly to ψ_F. However, in practice this bound is too rough and not useful for practical purposes.In Theorem<ref> we provide a refinement of the bound above. The refined bound proposed in there no longer has a simple form but in return it is much sharper and more useful for practical purposes. §.§.§ Bounds for the discretization error of ψ_H_Π⋆GFinally, we address the construction of a bound for the discretization error. 
The next two results are analogous to the ones in subsection <ref> and presented without proof.Letη:=sup_0≤ s≤ u{|H_F⋆G_m(s) - H_Π⋆G_m(s)|} then|ψ_H_F⋆G_m(u)-ψ_H_Π⋆G_m(u)|≤η(1-ρ) ρ/(1-ρ H_F(u/δ) (1-G_m(δ)))(1-ρ H_Π(u/δ)(1-G_m(δ))). An upper bound for sup_0≤ s≤ u|H_F⋆G_m(s)-H_Π⋆G_m(s)|, is suggested in the next Proposition.For δ>1 we have thatsup_0≤ s≤ u|H_F⋆G_m(s) - H_Π⋆G_m(s)|≤η(δ),where η(δ):=sup_u/δ≤ s<∞|H_Π(s)-H_F(s)|G_m(δ)+sup_0<s≤ u/δ|H_F(s)-H_Π(s)|(1-G_m(δ)). The construction of the previous bounds depends on the availability of thedistance between moment distributions |H_F-H_Π|, but the later might not always be available. For such a casewe suggest a bound for such a quantity in Lemma <ref> for a specific type of approximating distributions Π. The bound presented in there depends on the cdf of the distribution H_F, the restricted expected value of the claim size distribution F and its approximation Π. § BOUNDS FOR THE NUMERICAL ERROR OF APPROXIMATIONThe probability of ruin of a reserve process as given in Theorems <ref> and <ref>is not computable in exact form since the expression is given in terms of various infinite series.In practice, we can compute enough terms and then truncate the series at a level where the error of truncation is smaller than some desired precision. Since all terms involved are positive, such anapproximation will provide an underestimate of the real ruin probability.In this section we compute error bounds for the approximation of the ruin probabilitiesoccurred by truncating those series.A close inspection of Theorems <ref> and <ref> reveals that there will exist two sources of error due to truncation. The ruin probability can be seen as the expected value of κ_N where N∼(ξ u/s_1), so thefirst errorof truncation is [κ_N|N≥ N_1],we call N_1 the level of truncation for the ruin probability. Since the values of κ_n are bounded above by 1, then it is possible to bound this error term with (N≥ N_1) and use Chernoff's bound <cit.> to obtain an explicit expression1-ζ(N_1;λ)=(N>N_1)≤^-λ(·λ)^N_1+1(N_1+1)^N_1+1.The second source of numerical error comes from truncating the infinite seriesinduced by the scaling distribution Π; that is, we need to truncate the series defining the terms ℬ_i, 𝒞_i and 𝒟_i. The following Lemma shows these truncated seriescan be bounded by quantities depending on the tail probability of Π and the level of truncation s_N_2, where N_2 is the level of truncation for the scaling. Let S∼Π and define ε_1=(S>s_N_2) and ε_2=[S;S>s_N_2]. Thenℬ_i-ℬ_i ≤ε_1,0 ≤ i , 𝒞_n-𝒞_n ≤ε_1, ξ ≤ n , 𝒟_n-𝒟_n ≤ε_2, ξ ≤ n ,where ℬ_i, 𝒞_i and 𝒟_i denote to the truncated series at N_2 terms. If 0≤ i< ξ-1 then ℬ_i=ℬ_i=0, otherwise if ξ-1≤ i ≤ N_2 then ℬ_i-ℬ_i=ξ/i+1∑_j=N_2+1^∞π_j(ξ;i+1,s_1/s_j) ≤∑_j=N_2+1^∞π_j=ε_1. Similarly, if n≥ξ then 𝒞_n-𝒞_n=∑_j=N_2+1^∞π_j(ξ-1;n,s_1/s_j) ≤∑_j=N_2+1^∞π_j=ε_1, while𝒟_n-𝒟_n=∑_j=N_2+1^∞π_j s_j∑_k=0^ξ-1ξ-k/ξ(k;n,s_1/s_j) ≤∑_j=N_2+1^∞π_js_j=ε_2.§.§ Truncation error for ψ_Π⋆ G_m We start by writing the expression for the ruin probability in Theorem <ref> (approximation A) as a truncated series ψ_Π⋆ G_m(u)=^-ξ u/s_1∑_n=0^N_1κ_n(ξ u)^ns_1^nn!,whereκ_n=γμ_F, 0≤ n ≤ξ-1, γμ_F[∑_i=ξ-1^n-1κ_n-1-iℬ_i + 𝒞_n], ξ≤ n ≤ N_1, withℬ_i = ξ/i+1∑_j=1^N_2π_j(ξ;i+1,s_1/s_j), 𝒞_n = ∑_j=1^N_2π_j (ξ-1;n,s_1/s_j). Let ε_1=(S>s_N_2).Thenψ_Π⋆ G_m(u)-ψ_Π⋆ G_m(u)≤ε_1 [γμ_F/1-γμ_F(ξ u/s_1) + 2/(1-γμ_F)^2^-(1-γμ_F)ξ u/s_1] +(1-ζ(N_1;ξ u/s_1)),where ζ(N_1;ξ u/s_1) denotes the cdf of a Poisson with parameter ξ u/s_1 and evaluated at N_1. 
Observe that ψ_Π⋆ G_m(u)-ψ_Π⋆ G_m(u)= ^-ξ u/s_1∑_n=0^N_1 (κ_n-κ_n)(ξ u)^ns_1^nn! +^-ξ u/s_1∑_n=N_1+1^∞κ_n(ξ u)^ns_1^nn!.Firstly we consider the second term in the right hand side of (<ref>). Using that κ_n≤1 we obtain that if N_1>ξ u/s_1-1, then^-ξ u/s_1∑_n=N_1+1^∞κ_n(ξ u)^ns_1^nn! ≤∑_n=N_1+1^∞^-ξ u/s_1(ξ u)^ns_1^nn!=(1-ζ(N_1;ξ u/s_1)).Next we look into the first term of equation (<ref>) and observe that κ_n-κ_n=0, 0 ≤ n ≤ξ-1, γμ_F [∑_i=ξ-1^n-1(κ_n-1-iℬ_i-κ_n-1-iℬ_i) + 𝒞_n-𝒞_n], ξ≤ n≤ N_1.Notice that if n≥ξ we can rewrite∑_i=ξ-1^n-1(κ_n-1-iℬ_i-κ_n-1-iℬ_i) = ∑_i=ξ-1^n-1( (κ_n-1-i-κ_n-1-i)ℬ_i+ κ_n-1-i(ℬ_i-ℬ_i) ).Since 0<κ_i≤κ_i≤ 1 for 0≤ i then we can use the first part of Lemma <ref> to obtain the following bound of the expression above∑_i=ξ-1^n-1(κ_i-κ_i)ℬ_i+(n-ξ+1)ε_1.Putting(<ref>) and the second part of Lemma <ref> together we arrive atκ_n-κ_n≤γμ_F[∑_i=ξ-1^n-1(κ_i-κ_i)ℬ_i+(n-ξ+1)ε_1+ε_1 ] ≤γμ_F[sup_ξ-1≤ i< n-1(κ_i-κ_i)∑_i=ξ^∞ℬ_i+(n-ξ+2)ε_1]≤γμ_F[sup_ξ-1≤ i< n-1(κ_n-1-κ_n-1)+(n-ξ+2)ε_1].Note that ∑_i=ξ^∞ℬ_i=1 follows from relating the formula of ℬ_ito the probability mass function of a negative binomial distribution (ξ,1-s_1/s_j). Using the hypothesis that γμ_F<1 and induction it is not difficult to prove thatκ_n-κ_n≤ε_1 ∑_i=2^n-ξ+2 i (γμ_F)^n-ξ+3-i≤ε_1 [γμ_F/1-γμ_Fn+2/(1-γμ_F)^2(γμ_F)^n ].Inserting the bound above into the first term of equation (<ref>) and assuming that ξ u/s_1>1 we arrive at ^-ξ u/s_1∑_n=0^N_1 (κ_n-κ_n)(ξ u)^ns_1^nn!≤ε_1 [γμ_F/1-γμ_F(ξ u/s_1) + 2/(1-γμ_F)^2^-(1-γμ_F)ξ u/s_1] . The term (1-ζ(N_1;ξ u/s_1)) can be bounded using Chernoff's bound1-ζ(N_1;ξ u/s_1)≤^-ξ u/s_1(^1·ξ· u/s_1)^N_1+1(N_1+1)^N_1+1. §.§ Truncation error for ψ_H_Π⋆G_mWe write the ruin probability in Theorem <ref> (approximation B) as a truncated series: ψ_H_Π⋆G_m(u)=∑_n=0^N_1κ_n(ξ u/s_1)^n^-ξ u/s_1n!,whereκ_n=γμ_F, n=0,(γμ_F-1)(1+γμ_F s_1 μ_Πξ)^n+1,1 ≤ n ≤ξ, γμ_F s_1μ_Πξ∑_i=0^n-1κ_n-1-i𝒞_i +γμ_Fμ_Π𝒟_n, ξ< n.with𝒞_i =∑_j=1^N_2π_j(ξ-1;i,s_1/s_j), 𝒟_n =∑_j=1^N_2 π_j s_j∑_k=0^ξ-1ξ-kξ(k;n,s_1/s_j).The result and its proof are similar to the previous case. Let S∼Π and defineε_2=[S;S>s_N_2]. Thenψ_H_Π⋆G_m(u)-ψ_H_Π⋆G_m(u) ≤ε_2^ξ uγμ_F/s_1μ_Π(γμ_Fμ_Π+1)^-ξ +ζ(N_1;ξ u/s_1). Building a bound for the numerical error of approximation B is more involved than for approximation A given in Theorem <ref>.The reason is that it is not simple to provide a tight bound for the ∑_i=1^∞𝒞_i as for ∑_i=1^∞ℬ_i.Notice that the bound is not as tight as in the case of Theorem <ref> and may not be of much practical use. This aspect highlights an additional advantage of our first estimator.Observe that κ_n-κ_n=0, 0≤ n≤ξ, γμ_F μ_Π[s_1ξ∑_i=ξ+1^n-1(κ_n-1-i𝒞_i-κ_n-1-i𝒞_i ) +𝒟_n-𝒟_n], ξ< n≤ N_1.The summation in(<ref>) can be rewritten as∑_i=ξ+1^n-1(κ_n-1-i𝒞_i-κ_n-1-i𝒞_i )=∑_i=ξ+1^n-1( (κ_n-1-i-κ_n-1-i)𝒞_i+ κ_n-1-i(𝒞_i-𝒞_i) ).Since 0<κ_i≤κ_i≤ 1 and 𝒞_i≤ 1for 0≤ i then we can use the second part of Lemma <ref> to obtain the following bound of the expression above∑_i=ξ+1^n-1(κ_i-κ_i)+(n-ξ-1)ε_2.Putting(<ref>) and the third part of Lemma <ref> together we arrive atκ_n-κ_n ≤γμ_Fμ_Π(s_1/ξ∑_i=ξ+1^n-1(κ_i-κ_i)+(n-ξ)ε_2)≤γμ_Fμ_Π(∑_i=ξ+1^n-1(κ_i-κ_i)+(n-ξ)ε_2).Induction yields thatγμ_Fμ_Π(∑_i=ξ+1^n-1(κ_i-κ_i)+(n-ξ)ε_2) = ε_2((γμ_Fμ_Π+1)^n-ξ-1)≤ε_2(γμ_Fμ_Π+1)^n-ξ.Hence we arrive at ^-ξ u/s_1∑_n=0^N_1 (κ_n-κ_n)(ξ u)^ns_1^nn!≤ε_2^ξ uγμ_F/s_1μ_Π(γμ_Fμ_Π+1)^-ξ. § NUMERICAL IMPLEMENTATIONS We briefly discuss some relevant aspects of the implementation of Theorems <ref> and <ref>. 
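As a starting point, the truncated series for approximation A derived in the previous section translates almost line by line into code. The sketch below is a minimal, unoptimized implementation of ours: the function name, the default cut-off N_1 and the toy two-atom scaling distribution are not from the paper. It takes the atoms and probabilities of the already truncated Π, the Erlang parameter ξ and ρ=γμ_F, and evaluates the Poisson-weighted series with the κ_n recursion written in terms of binomial probabilities.

```python
import numpy as np
from scipy.stats import binom, poisson

def ruin_esm_A(u, s, pi, xi, rho, N1=None):
    """Truncated series of approximation A:
       psi(u) ~ sum_n kappa_n * Poisson(xi*u/s_1){n},
    with kappa_n = rho for n < xi and, for n >= xi,
       kappa_n = rho * ( sum_{i=xi-1}^{n-1} kappa_{n-1-i} B_i + C_n )."""
    s = np.asarray(s, dtype=float)
    pi = np.asarray(pi, dtype=float)
    lam = xi * u / s[0]                               # Poisson parameter xi * u / s_1
    if N1 is None:
        N1 = int(lam + 12.0 * np.sqrt(lam) + 12)      # crude cut-off for the Poisson tail
    p = s[0] / s                                      # binomial success probabilities s_1 / s_j

    i_idx = np.arange(xi - 1, N1)                     # B_i vanishes for i < xi - 1
    B = np.array([np.dot(pi * p, binom.pmf(xi - 1, i, p)) for i in i_idx])
    C = np.array([np.dot(pi, binom.cdf(xi - 1, n, p)) for n in range(N1 + 1)])

    kappa = np.empty(N1 + 1)
    kappa[: min(xi, N1 + 1)] = rho
    for n in range(xi, N1 + 1):
        mask = i_idx <= n - 1
        kappa[n] = rho * (np.dot(kappa[n - 1 - i_idx[mask]], B[mask]) + C[n])

    return float(np.dot(kappa, poisson.pmf(np.arange(N1 + 1), lam)))

# toy illustration: a two-atom scaling distribution (not a discretisation of any claim law)
print(ruin_esm_A(u=5.0, s=[1.0, 4.0], pi=[0.7, 0.3], xi=20, rho=0.5))
```

The dropped tail mass of Π and the Poisson cut-off N_1 are precisely the quantities controlled by the truncation bounds above, so both can be monitored while running such a routine.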
Suppose we want to approximate a distribution F via Erlangized scale mixtures.The selection of the parameter ξ∈ of the Erlang distributionboils down to selecting a value ξ large enough so the bound provided in Theorem <ref> and Theorem <ref> is smaller thana preselected precision.It is however not recommended to select a value which is too large since this will require truncating at higher levels and thus resulting in a much slower algorithm(this will be further discussed below). The most critical aspect for an efficient implementation is the selection ofan appropriate approximating distribution Π. The selection can be made rather arbitrary but we suggestthe following general family of discrete distributions:Let W={w_i:i∈ℤ^+} and Ω_Π={s_i:i∈ℤ^+} be sets of strictly increasing nonnegative values such that w_0=s_0=inf{s:F(s)>0} and for all k∈ℕ it holds thats_k≤ w_k≤ s_k+1.Then we define the distribution Π as Π(s):=∑_k=0^∞ F(w_k)_[s_k,s_k+1)(s). The distribution Π is a discretized approximating distribution which is upcrossed by F in every interval (w_k-1,w_k). This type of approximation is rather general as we can consider general approximations by selecting s_k∈(w_k-1,w_k), approximations from below by setting s_k=w_k, approximations from above by setting s_k=w_k-1, or the middle point (see Figure <ref>).Heuristically, one might expect to reduce the error of approximation by selecting the middle point (this was our selection in our experimentations below). In practice we can justcompute a finite number of terms π_k, so we end up with an improper distribution.This represents a serious issue because truncating at lower levels affects the quality of the approximation in the tail regions.Computing a larger number of terms is not often an efficient alternative since the computational times become rapidly unfeasible.Thus, our ultimate goal will be to select among the partitions of certain fixed size (we restrict the partition size since we assume we have a limited computational budget),the one that minimizes the distance |F-Π|, in particular in the tails.In our numerical experimentations we found that an arithmetic progression required a prohibitively large number of terms to obtain sharp approximations in the tail.We obtained better results using geometric progressions as these can provide better approximations with a reduced number of terms. Moreover, since the sequence determining the probability massfunction converges faster to 0, then it is easier to compute enough terms sofor practical purposes it is equivalent to work with a proper distribution.The speed of the algorithm is heavily determined by the total number oftermsof the infinite series in Theorems <ref> and <ref> computed.Since the probability of interest can be seen as the expected value [κ_N] where N∼(ξ u/s_1),it is straightforward to see that the total number N_1 of terms needed to provide an accurate approximationis directly related to the value ξ u/s_1.Thus, large values of ξ and u combined with small values of s_1will require longer computational times. Since smaller values of ξ andlarger values of s_1 will typically result in increased errors of approximation, there will be a natural trade-off between speed and precision in the selection of these values. 
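One concrete reading of the discretisation just described, with the midpoint choice for the support points and a geometric grid, is sketched below (our own illustration; the target, the grid parameters and the number of atoms are placeholders, and the mass beyond the last atom is simply dropped, so Π is slightly improper). The resulting pair (s, π), together with ξ and ρ=γμ_F, is exactly the input consumed by the series in the previous sketch.

```python
import numpy as np

def discretise_geometric(F_bar, t0, K, n_atoms):
    """Discrete Pi supported on the geometric progression s_k = exp(t0 + k/K).
    Each atom receives the F_bar-mass of the cell around it, the first cell
    starting at 0; the mass beyond the last atom is dropped (improper Pi)."""
    s = np.exp(t0 + np.arange(n_atoms) / K)
    edges = np.concatenate(([0.0], np.sqrt(s[:-1] * s[1:]), [s[-1]]))
    pi = np.diff(F_bar(edges))
    return s, pi

# example target: integrated tail of a Pareto claim distribution with phi = 2
F_bar = lambda x: 1.0 - 1.0 / (1.0 + np.asarray(x, dtype=float))
s, pi = discretise_geometric(F_bar, t0=-3.0, K=100, n_atoms=1500)
print(s[0], s[-1], pi.sum())   # pi.sum() < 1: the dropped tail mass is 1 - F_bar(s[-1])
```

The choice of t0, K and the number of atoms reproduces the trade-off discussed above: a smaller s_1 and a finer grid sharpen the approximation but lengthen both the κ_n recursion and the outer Poisson series.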
In our numerical experiments below we have selected empirically these values with the help of the error bounds found in the previous sections.It is also worth noting thatthe calculation of the value κ_n for n>ξ in both Theorems<ref> and <ref> requires the evaluation ofthe binomial probability mass functions (·;i,·) for all i=ξ,…,N_1. While the computation of such probabilities is relatively simple, it is not particularly efficient to compute each term separately because the computational times become very slow as n goes to infinity. Due to the recursive nature of the coefficients κ_n one may incur in significant numerical errors if thethe binomial probabilities are not calculated at a high precision. For more details, see for instance <cit.> for recommended strategies that can be used to increase the speed and accuracy of the binomial probabilities.Finally, we remark thatthe speed of the implementation can be significantly improved by using parallel computing. In our implementations below we have broken the series into smaller pieces and we have sent this toan HPC (high performance computing) facility to run independent units of work. §.§ Numerical examples In this section, we show the accuracy of our approximation A through the followingPareto example. In such an example, the claim sizes are Pareto distributed, so theirintegrated tails are regularly varying.The exact values of the ruin probability are given in <cit.>,and are now considered a classical benchmark for comparison purposes.We have limited our numerical experiments to the Pareto with parameter 2 and net profit condition close to 0(ρ→1) as thisis one of the most challenging ruin probabilities we could find for which there are results available for comparison.[Pareto claim sizes] We consider a Cramér–Lundberg model with unit premium rate, and claim sizes distributed according to a Pareto distribution with a single parameter ϕ>1 with support on the positive real axis, mean 1and having the following cumulative distribution functionF(x) = 1 - (1+x/ϕ-1)^-ϕ,forx > 0and ϕ >1,(other parametrizations of the Pareto distribution are common as well). The integrated tail of the above distribution is regularly varying with parameter ϕ-1:F(x)= 1/μ_F∫_0^x F(t)d t = ∫_0^x (1+t/ϕ-1)^-ϕ d t = 1-(1+x/ϕ-1)^-(ϕ-1).The parameters of the risk model selected were ρ=0.95, ϕ=2. We implemented the approximation A in Theorem <ref> and its analysis is presented next. For comparisons purposes we also included the approximation B in Theorem <ref>, but overall we found that it is less accurate, much slower and more difficult to analyze since its bounds are not tight enough.First we analyzed the Erlangization error for approximation A.For this example, it is possible to compute the bound given byTheorem <ref> for values of ξ=100,500,1000. The bound appears to be tighter for smaller values of ρ while it gets loosen as long as the value of ρ→1.The bound alsoincreases as u→∞, so probabilities of ruin with large initial reserves will be more difficult to approximate.The bound appears to decrease proportionally in ξ but in practice, we didn't noticed significant changes in the numerical approximation of the probability of ruin for values of ξ larger than 100. Nevertheless, since larger values of ξ affect the speed of the algorithmwe settled with a value of ξ=100 which already gave good results overall. 
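Both distribution functions entering this example are simple enough to transcribe directly; the following Python sketch (parameter and function names are ours) implements F and its integrated tail exactly as written above.

    def pareto_cdf(x, phi=2.0):
        # F(x) = 1 - (1 + x/(phi - 1))**(-phi),  x > 0, phi > 1 (unit mean)
        return 1.0 - (1.0 + x / (phi - 1.0)) ** (-phi)

    def integrated_tail_cdf(x, phi=2.0):
        # bar F(x) = 1 - (1 + x/(phi - 1))**(-(phi - 1)),
        # regularly varying with index phi - 1
        return 1.0 - (1.0 + x / (phi - 1.0)) ** (-(phi - 1.0))

Consistently with the remark in the conclusion that approximation A approximates the integrated tail rather than the claim sizes, it is integrated_tail_cdf, not pareto_cdf, that would be fed to the discretisation sketch above when building Π for approximation A.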
Next we constructed a discrete approximating distribution Π by considering a discretized Pareto supported over the geometric progression {^t_0, ^t_1, ^t_2, ⋯}, where t_k=t_0+k/K. It is rather clear that a finer partition of the interval [0,∞) would yield a better approximation and this would be attained by letting the value of t_0→-∞ and K→∞.However, small values of s_1:=^t_0 affect severely the speed of the algorithm (see the discussion above) while in practice not much precision is gained by taking it too close to 0.A similar trade-off in speed and precision occurs by letting K→∞. For this example we have selected these values empirically withthe help of the bound in Theorem <ref> and Proposition <ref>. We settled with t_0=-3 and K=270 for all the examples. The results are in the first column in the Table <ref> below.Next we selected the truncation levels. In the case of N_1 we were able to select a natural number large enoughsuch that the truncation error was smaller than the floating point precision without increasingsignificantly the computational times.This selection implies that the third term in the bound for the truncation error given in Theorem <ref> is eliminated for practical purposes.As for N_2, we choose the smallest integer N_2 such that ε_1 < 9.5701 × 10^-14.The error bounds are presented in the last column of Table <ref>. Notice that the dominant term in Theorem <ref> is asymptotically linear in u.This pattern is also observed numerically as the error bound appears increasing linearly with respect to u, thus providing empirical evidence that suggests this bound is tight. The numerical results for the probabilities of ruin are now summarized in Table <ref>.The results show that the approximated ruin probabilities are remarkably close to the true value calculated using equation (20) of <cit.>. The numerical results above were produced with the same values of ξ, t_0 and K.We remark that as long as the value of u increases, then the numerical approximation appears to be less sharp, but this can be improved by increasing the value of K (this would make the partition finer) and to a lesser extent by reducing the value of t_0 (improving the approximation of the target distribution in a vicinity of 0).The approximation was less sensitive to increases in the value of ξ but makes it considerably slower.§ CONCLUSION <cit.> remarked that the family of phase-type scale mixtures could be used to provide sharp approximations of heavy-tailed claim size distributions.In our work, we addressed such a remark and provided a simple systematic methodology to approximate any nonnegative continuous distribution within such a family of distributions. We employed the results of <cit.> and provided simplified expressions for the probability of ruin in the classicalCramér–Lundberg risk model. In particular we opted to approximate the integrated tail distribution F rather than the claim sizes as suggested in <cit.>; we showed that such an alternative approach results in a more accurateand simplified approximation for the associated ruin probability.We further provided bounds for the error of approximation induced by approximating the integrated tail distributionas well as the error induced by the truncationof the infinite series.Finally, we illustrated the accuracy of our proposed method by computing the ruin probability of a Cramér-Lundberg reserve processwhere the claim sizes are heavy-tailed. 
Such an example is classical but often considered challenging due to the heavy-tailed nature of the claim size distributions and the value of the net profit condition.*Acknowledgements The authors thank Mogens Bladt for multiple discussions on the ideas which originated this paper and an anonymous referee who provided a detailed review that helped to improve the quality of this paper. OP is supported by the CONACYT PhD scholarship No. 410763 sponsored by the Mexican Government. LRN is supported by ARC grant DE130100819.WX is supported by IPRS/APA scholarship at The University of Queensland. HY is supported by APA scholarship at The University of Queensland.chicagoa§ APPENDIX:BOUNDS FOR ERRORS OF APPROXIMATIONIn the first subsection of this appendix we provide a refined bound for one of the approximationsproposed in the main section. In the second subsection of the appendix we provide an auxiliary result that will be useful for the numerical computation of one of the bounds proposed. §.§ Refinements for the Erlangization error of ψ_H_Π⋆G_m Through Theorem <ref> we provide a refinement of the bound proposed in Theorem <ref>. This refined bound is much tighter although more difficult to construct and implement. The following preliminary results are needed first.For any δ>0, define ϵ_m(δ) := ∫_0^δ G_m(s) s.Then ϵ_m(δ) = e^-ξδ(∑_k=0^ξ-1(ξδ)^k/k! - ∑_k=0^ξ-2δ(ξδ)^k/k!). Notice that ϵ_m(1) = ϵ_m=1-G_m(1) while ϵ_m(0)=0.Consider∫_0^δ e^-ξ s(ξ s)^n s = -e^-ξδ/ξ(ξδ)^n + n∫_0^δ e^-ξ s(ξ s)^n-1 s = n!/ξ -e^-ξδ/ξ(∑_i=0^nn!/i!(ξδ)^i ),so thatϵ_m(δ) = ∫_0^δ(1 - ∑_n=0^ξ-11/n!e^-ξ s(ξ s)^n) s = 1 - ∑_n=0^ξ -11/n!∫_0^δ e^-ξ s(ξ s)^n s = 1 - ∑_n=0^ξ -11/n!(n!/ξ -e^-ξδ/ξ(∑_i=0^nn!/i!(ξδ)^i )) = e^-ξδ/ξ∑_n=0^ξ-1∑_k=0^n (ξδ)^k/k! = e^-ξδ/ξ∑_k=0^ξ-1(ξ-k) (ξδ)^k/k! = e^-ξδ(∑_k=0^ξ-1(ξδ)^k/k! - ∑_k=0^ξ-2δ(ξδ)^k/k!).Let 0≤δ_1≤ 1 and 1≤δ_2≤∞.Define A_δ_1=[δ_1,1]^n, A^δ_1,δ_2=[δ_1,δ_2]^n∖[δ_1,1]^n.Then_A_δ_1| ∏_i=1^n _[0,1)(s_i) - ∏_i=1^n g_m(s_i)|s_1 … s_n = (1-δ_1)^n - (1-δ_1 - ϵ_m(1) + ϵ_m(δ_1))^n_A^δ_1, δ_2| ∏_i=1^n _[0,1)(s_i) - ∏_i=1^n g_m(s_i)|s_1 … s_n = (G_m(δ_2)-G_m(δ_1))^n - (G_m(1)-G_m(δ_1))^n. _A_δ_1| ∏_i=1^n _[0,1)(s_i) - ∏_i=1^n g_m(s_i)|s_1 … s_n = _A_δ_11- ∏_i=1^n g_m(s_i) s_1 … s_n =(1-δ_1)^n - (∫_δ_1^1 g_m(s) s)^n =(1-δ_1)^n - (∫_δ_1^1 1-G_m(s) s)^n =(1-δ_1)^n - (1-δ_1 - ϵ_m + ϵ_m(δ_1))^n. For the second equality, notice that_[δ_1, δ_2]^n∏_i=1^n g_m(s_i)s_1 … s_n= (∫_δ_1^δ_2g_m(s) s)^n = (G_m(δ_2)-G_m(δ_1))^n, so that _A^δ_1,δ_2| ∏_i=1^n _[0,1)(s_i) - ∏_i=1^n g_m(s_i)|s_1 … s_n= _A^δ_1,δ_2∏_i=1^n g_m(s_i) s_1 … s_n= _[δ_1,δ_2]^n∖ [δ_1,1]^n∏_i=1^n g_m(s_i)s_1 … s_n =(G_m(δ_2)-G_m(δ_1))^n - (G_m(1)-G_m(δ_1))^n.Fix δ_2∈(1,∞). Then there exists δ_1∈[0,1] such that_A_δ_1| ∏_i=1^n _[0,1)(s_i) - ∏_i=1^n g_m(s_i)|s_1 … s_n =_A^δ_1,δ_2| ∏_i=1^n _[0,1)(s_i) - ∏_i=1^n g_m(s_i)|s_1 … s_n,where A_δ_1= (δ_1, 1)^n and A^δ_1,δ_2 = (δ_1, δ_2)^n∖ (δ_1, 1)^n. Define the following functions with domain [0,1]:p(δ):= _A_δ| ∏_i=1^n _[0,1)(s_i) - ∏_i=1^n g_m(s_i)|s_1 … s_n, q_δ_2(δ):=_A^δ,δ_2| ∏_i=1^n _[0,1)(s_i) - ∏_i=1^n g_m(s_i)|s_1 … s_n.By Lemma <ref>, both functions are continuous, p is non-increasing and q_δ_2 is non-decreasing. The image of q_δ_2 is contained in [0,1-ϵ_m(1)] while the image of p is exactly [0,1-ϵ_m(1)]. All these mean that there exists a point δ_1∈[0,1] such that q_δ_2(δ_1) = p(δ_1), concluding the proof. The following Corollary follows immediately from Lemma <ref> by setting δ_1=0 and δ_2=∞.This Corollary is needed in the proof of Theorem <ref>. 
_^n| ∏_i=1^n _[0,1)(s_i) - ∏_i=1^n g_m(s_i)|s_1 … s_n = 2(1-(1-ϵ_m)^n).The following provides a simple bound between the difference of the n-th convolution of any distribution function F with density f evaluated at two different points. Let F be any continuous distribution function supported on [0,∞) with density function fand fix b>a>0. Then |F^*n(a)-F^*n(b)|≤Δ_a,b^F·F^n-1(b),where Δ_a,b^F := sup{F(b-a+c) - F(c):c∈(0,a)}.|F^*n(a)-F^*n(b) | = ∫_a^b f^*n(u) u= ∫_a^b∫_0^u f(u-s)f^*(n-1)(s) s u≤∫_a^b∫_0^b f(u-s)f^*(n-1)(s) s u = ∫_0^b f^*(n-1)(s) ∫_a^b f(u-s) usThen there exists a constant c∈(0,a) such that the previous expression is equal to{∫_a^b f(u-c) u}∫_0^b f^*(n-1)(s)s= {∫_a-c^b-c f(u) u}F^*n-1(b) ≤{∫_c^(b-a)+c f(u) u}F^n-1(b). The result follows from taking the supremum over c.Let {X_n'} be a sequence of i.i.d. random variables with common distirbution H_F. Fix δ_2∈(1,∞) and let δ_1∈(0,1) be as in Corollary <ref>. Then ∑_n=0^∞(1-ρ)ρ^n _A_δ_1∪ A^δ_1,δ_2ℙ(s_1 X_1' + … + s_n X_n'≤ u ) | ∏_i=1^n _[0,1)(s_i)-∏_i=1^n g_m(s_i)|s_1 … s_n = (1-ρ)ρΔ_u/δ_2, u/δ_1^H_F(ϵ_m(1) - ϵ_m(δ_1)/(1-ρ F(uδ_1))(1-ρ F(uδ_1)(1-ϵ_m(1) + ϵ_m(δ_1))).Clearly, for any (r_1,…, r_n)∈ A_δ_1 and (s_1,…, s_n)∈ A^δ_1, δ_2, we have thatℙ(r_1 X_1' + … + r_n X_n'≤ u ) ≤ H_F^*n(u/δ_1),ℙ(s_1 X_1' + … + s_n X_n'≤ u )≥ H_F^*n(u/δ_2),so that|_A_δ_1∪ A^δ_1,δ_2ℙ(s_1 X_1' + … + s_n X_n'≤ u ) ( ∏_i=1^n _[0,1)(s_i)-∏_i=1^n g_m(s_i))s_1 … s_n| = |_A_δ_1ℙ(s_1 X_1' + … + s_n X_n'≤ u ) | ∏_i=1^n _[0,1)(s_i)-∏_i=1^n g_m(s_i)|s_1 … s_n. - ._A^δ_1, δ_2ℙ(s_1 X_1' + … + s_n X_n'≤ u ) | ∏_i=1^n _[0,1)(s_i)-∏_i=1^n g_m(s_i)|s_1 … s_n|≤|H_F^*n(u/δ_1)_A_δ_1| ∏_i=1^n _[0,1)(s_i)-∏_i=1^n g_m(s_i)|s_1 … s_n. - .H_F^*n(u/δ_2)_A^δ_1, δ_2| ∏_i=1^n _[0,1)(s_i)-∏_i=1^n g_m(s_i)|s_1 … s_n| = (H_F^*n(u/δ_1) - H_F^*n(u/δ_2))_A_δ_1| ∏_i=1^n _[0,1)(s_i)-∏_i=1^n g_m(s_i)|s_1 … s_n = (H_F^*n(u/δ_1) - H_F^*n(u/δ_2))[(1-δ_1)^n - (1-δ_1 - ϵ_m(1) + ϵ_m(δ_1))^n].Using the previous results and Lemma <ref> we get that∑_n=0^∞(1-ρ)ρ^n _A_δ_1∪ A^δ_1,δ_2ℙ(s_1 X_1' + … + s_n X_n'≤ u ) | ∏_i=1^n _[0,1)(s_i)-∏_i=1^n g_m(s_i)|s_1 … s_n≤∑_n=0^∞(1-ρ)ρ^n {H_F^*n(u/δ_1) - H_F^*n(u/δ_2)}[(1-δ_1)^n - (1-δ_1 - ϵ_m(1) + ϵ_m(δ_1))^n]≤∑_n=1^∞(1-ρ)ρ^n{Δ_u/δ_2, u/δ_1^H_F H_F^*(n-1)(u/δ_1)}[(1-δ_1)^n - (1-δ_1 - ϵ_m(1) + ϵ_m(δ_1))^n]≤∑_n=1^∞(1-ρ)ρ^n{Δ_u/δ_2, u/δ_1^H_F H_F^(n-1)(u/δ_1)}[(1-δ_1)^n - (1-δ_1 - ϵ_m(1) + ϵ_m(δ_1))^n] =(1-ρ)ρΔ_u/δ_2, u/δ_1^H_F∑_n=0^∞ρ^n H_F^n(u/δ_1)[(1-δ_1)^n+1 - (1-δ_1 - ϵ_m(1) + ϵ_m(δ_1))^n+1] = (1-ρ)ρΔ_u/δ_2, u/δ_1^H_F( 1-δ_1/1-ρ H_F(u/δ_1)(1-δ_1) - 1-δ_1 - ϵ_m(1) + ϵ_m(δ_1)/1-ρ H_F(u/δ_1)(1-δ_1 - ϵ_m(1) + ϵ_m(δ_1))).For any fixed δ_2∈(1,∞) let δ_1 be as in Corollary <ref>. Then| ψ_H_F⋆ U(u) - ψ_H_F⋆G_m(u) |≤ (1-ρ)ρΔ_u/δ_2, u/δ_1^H_F𝒯_1+ 2ρϵ_m/1-ρ(1-ϵ_m) - (1-ρ)𝒯_2.where Δ_a,b := sup_0≤ s≤ a{H_F(s + (b-a)) - H_F(s)}, ϵ_m(δ)=∫_0^δ G_m(s) s and𝒯_1 :=1-δ_1/1-ρ H_F(u/δ_1)(1-δ_1) - 1-δ_1 - ϵ_m + ϵ_m(δ_1)/1-ρ H_F(u/δ_1)(1-δ_1 - ϵ_m + ϵ_m(δ_1)) 𝒯_2 :=1/1-(1-δ_1)ρ-2/1-(1-δ_1-ϵ_m+ϵ_m(δ_1))ρ+ 1/(G_m(δ_2)-1+ϵ_m)ρ. The construction of this particular bound requires the selection of two values δ_1 and δ_2 provided in Corollary <ref>. In general, it will not be possible to write down a closed-form expression for such values but in practice this can be easily determined numerically. 
Recall that an explicit expression for the term ϵ_m(δ) can be found in Lemma <ref>.Recall that| F^*n(u) - F⋆ G^*n(u)| =| (H_F⋆ U)^*n(u) - (H_F⋆G_m)^*n(u)|≤_^nℙ(s_1 X_1' + … + s_n X_n'≤ u ) | ∏_i=1^n _[0,1)(s_i)-∏_i=1^n g_m(s_i)|s_1 … s_n.Split the last integral in two parts: over [δ_1, δ_2]^n and over [0,∞)^n∖ [δ_1, δ_2]^n,bound the first one using Lemma <ref> and the second one usingLemma <ref> and Corollary <ref>. Then apply Pollaczeck–Khinchine formula and sum the geometric series. §.§ Bound for |H_F-H_Π| As stated in subsection <ref>, the result of Theorem <ref> depends on the availability of |H_F-H_Π|. In the following we state a bound for such a quantity in the casewhere an explicit expression for |H_F-H_Π| is not available or too difficult to compute.Let Π be defined as in Definition <ref> anddefineΔ_k H_F:=H_F(s_k)-H_F(s_k-1).Thensup_u≤ s<∞|H_F(s)-H_Π(s)|≤sup_K≤ k<∞Δ_k H_F+ |μ_Π-μ_F|·[S;S> u]/μ_Πμ_F+|[X;X> s_K]-[S;S> s_K]|/μ_F , sup_0<s≤ u|H_F(s)-H_Π(s)| ≤sup_0≤ k≤ KΔ_k H_F +|μ_Π-μ_F|·[S;S≤ u]/μ_Πμ_F+ |[S;S≤ u]-[X;X≤ u]|/μ_F. Moreover, if μ_F=μ_Π, thensup_u≤ s<∞|H_F(s)-H_Π(s)| ≤sup_K≤ k<∞Δ_k H_F+ |[X;X> s_K]-[S;S> s_K]|/μ_F,sup_0<s≤ u|H_F(s)-H_Π(s)|≤sup_0≤ k≤ KΔ_k H_F+ |[S;S≤ u]-[X;X≤ u]|/μ_F.Notice that the particular selection of Π implies that it is possible to select partitions for which μ_Π=μ_F.Also, recall that when ξ(m)→∞, then ϵ_m→0, so for ξ(m) sufficiently large, the bound decreasesas |[X;X> s_K]-[S;S> s_K]| becomes smaller.The last is achieved if the tail probability of H_Π getscloser to the tail probability of H_F.Since K∈ is such that s_K=u then |H_F⋆G_m(u) - H_Π⋆G_m(u)| =|∫_0^∞ H_F(u/s)dG(s)-∫_0^∞ H_Π(u/s)dG(s)|≤∫_0^∞|H_F(u/s)-H_Π(u/s)|dG(s)≤sup_u≤ s<∞|H_F(s)-H_Π(s)|∫_0^1dG(s)+sup_0<s≤ u|H_F(s)-H_Π(s)|∫_1^∞G(s)= sup_u≤ s<∞|H_Π(s)-H_F(s)|∫_0^1dG(s)+sup_0<s≤ u|H_F(s)-H_Π(s)|∫_1^∞ dG(s).Observe that for all 0< s<∞ there exist k such that t_k≤ s< t_k+1, so |H_Π(s)-H_F(s)| ≤max{|H_F(s_k)-H_Π(s_k)|, |H_F(s_k+1)-H_Π(s_k)|}.Using the previous identity we first constructing a bound for (<ref>).|H_Π(s_k)-H_F(s_k)| ≤|H_Π(s_k)-H_F(s_k+1)|+|H_F(s_k)-H_F(s_k+1)|≤|H_Π(s_k)-H_F(s_k+1)|+Δ_k H_F, where Δ_k H_F:=H_F(s_k+1)-H_F(s_k), andin consequence sup_u≤ s<∞|H_Π(s)-H_F(s)|≤sup_ K≤ k<∞|H_Π(s_k)-H_F(s_k+1)|+sup_ K≤ k<∞Δ_k H_F. Next observe that sup_K≤ k< ∞|H_Π(s_k)-H_F(s_k+1)| =sup_K≤ k< ∞|∑_i=k+1^∞s_iπ_i/μ_Π-∫_s_k+1^∞t dF(t)/μ_F|=sup_K≤ k< ∞|∑_i=k+1^∞∫_s_i^s_i+1(s_i/μ_Π-t/μ_F)dF(t)|≤1/μ_Πμ_F∑_i=K+1^∞∫_s_i^s_i+1 |s_iμ_F-t μ_Π|dF(t)≤1/μ_Πμ_F∑_i=K+1^∞∫_s_i^s_i+1(|s_iμ_F-s_iμ_Π| + |s_iμ_Π-t μ_Π|)dF(t)≤|μ_F-μ_Π|/μ_Πμ_F∑_i=K+1^∞s_i∫_s_i^s_i+1dF(t)+1/μ_F∑_i=K+1^∞∫_s_i^s_i+1 |s_i-t|dF(t) ≤|μ_Π-μ_F|[S;S> s_K]/μ_Πμ_F+|[X;X> s_K]-[S;S> s_K]|/μ_F. Therefore, sup_u≤ s<∞|H_Π(s)-H_F(s)|≤sup_K≤ k<∞Δ_k H_F+ |μ_Π-μ_F|·[S;S> u]/μ_Πμ_F+|[X;X> s_K]-[S;S> s_K]|/μ_F. Our construction for the bound for (<ref>) is analogous.Note that |H_F(s_k+1)-H_Π(s_k)| ≤|H_F(s_k+1)-H_F(s_k)|+|H_F(s_k)-H_Π(s_k)|≤Δ_k H_F+|H_F(s_k)-H_Π(s_k)|,so sup_0<s≤ u|H_F(s)-H_Π(s)|≤sup_0≤ k≤ KΔ_k H_F+sup_0≤ k≤ K|H_F(s_k)-H_Π(s_k)|, where s_0=inf{s:F(s)>0}.Next observe that sup_0≤ k≤ K|H_F(s_k)-H_Π(s_k)| =sup_0≤ k≤ K|∫_0^s_kt dF(t)/μ_F-∑_i=1^ks_iπ_i/μ_Π|=sup_0≤ k≤ K|∑_i=1^k∫_s_i-1^s_i(t /μ_F-s_i/μ_Π)dF(t)|≤1/μ_Πμ_F∑_i=1^K∫_s_i-1^s_i |tμ_Π-s_iμ_F|dF(t)≤1/μ_Πμ_F∑_i=1^K∫_s_i-1^s_i |tμ_Π-s_iμ_Π|+(|s_iμ_Π-s_iμ_F|)dF(t)≤1/μ_F∑_i=1^K∫_s_i-1^s_i |s_i-t|dF(t)+|μ_Π-μ_F|/μ_Πμ_F∑_i=1^Ks_i∫_s_i-1^s_idF(t)≤|[S;S≤ s_K]-[X;X≤ s_K]|/μ_F+|μ_Π-μ_F|·[S;S≤ s_K]/μ_Πμ_F. 
Therefore, sup_0≤ s≤ u/δ|H_F(s)-H_Π(s)|≤sup_0≤ k≤ KΔ_k H_F+ |[S;S≤ u]-[X;X≤ u]|/μ_F+|μ_Π-μ_F|·[S;S≤ u]/μ_Πμ_F. | http://arxiv.org/abs/1705.09405v1 | {
"authors": [
"Oscar Peralta",
"Leonardo Rojas-Nandayapa",
"Wangyue Xie",
"Hui Yao"
],
"categories": [
"math.PR"
],
"primary_category": "math.PR",
"published": "20170526012248",
"title": "Approximation of Ruin Probabilities via Erlangized Scale Mixtures"
} |
Scrutinizing R-parity violating interactions in light of R_K^(∗) data
Diganta Das, Chandan Hati, Girish Kumar, and Namit Mahajan
December 30, 2023
======================================================================
[email protected] Physics Division, Physical Research Laboratory, Navrangpura, Ahmedabad 380 009, India
[email protected] Physics Division, Physical Research Laboratory, Navrangpura, Ahmedabad 380 009, India
[email protected] Physics Division, Physical Research Laboratory, Navrangpura, Ahmedabad 380 009, India
[email protected] Physics Division, Physical Research Laboratory, Navrangpura, Ahmedabad 380 009, India
The LHCb has measured the ratios of B→ K^∗μ^+μ^- to B→ K^∗ e^+ e^- branching fractions in two dilepton invariant mass squared bins, which deviate from the Standard Model predictions by approximately 2.5σ. These new measurements strengthen the hint of lepton flavor universality breaking which was observed earlier in B→ Kℓ^+ℓ^- decays. In this work we explore the possibility of explaining these anomalies within the framework of R-parity violating interactions. In this framework, b→ sℓ^+ℓ^- transitions are generated through tree and one loop diagrams involving exchange of down-type right-handed squarks, up-type left-handed squarks and left-handed sneutrinos. We find that the tree level contributions are not enough to explain the anomalies, but at one loop, simultaneous explanation of the deviations in B→ K^∗ℓ^+ℓ^- and B→ Kℓ^+ℓ^- is feasible for a parameter space of the Yukawa couplings that is consistent with the bounds coming from B→ K^(∗)νν̅ and D^0→μ^+μ^- decays and B_s-B̅_s mixing.
§ INTRODUCTION Precision measurements of rare decays provide excellent probes for testing new physics (NP) beyond the Standard Model (SM) of particle physics. In the SM, flavor changing neutral current transitions b→ s ℓ^+ℓ^- arise at the one-loop level and are suppressed by the Glashow-Iliopoulos-Maiani mechanism. To this end we study the ratio of branching ratios of B→ K(K^∗)ℓℓ decays into di-muons over di-electrons R_K = Br(B → K μ^+μ^-)/Br(B → K e^+e^-) , R_K^* = Br(B → K^* μ^+μ^-)/Br(B → K^* e^+e^-) . In these ratios the hadronic uncertainties cancel and therefore these observables are sensitive to lepton flavor universality (LFU) violating NP <cit.>. In 2014 the LHCb Collaboration reported the measurement of R_K in the dilepton invariant mass squared bin q^2∈ [1,6] GeV^2 to be <cit.> R_K^LHCb=0.745^+0.090_-0.074± 0.036, corresponding to a 2.6 σ deviation from the SM prediction of R_K^SM=1.00± 0.01 <cit.>. Very recently, the LHCb Collaboration presented their first results for R_K^* <cit.> R_K^*[0.045, 1.1] = 0.66 ^+0.11_-0.07± 0.03 , R_K^* [1.1, 6] = 0.69 ^+0.11_-0.07± 0.05 , where the subscript indicates the dilepton invariant mass squared bin in GeV^2. These values correspond to 2.4σ and 2.5 σ deviations from the SM values R_K^*[0.045, 1.1]^SM∼ 0.93 and R_K^* [1.1, 6]^SM∼ 0.99, respectively.
Combination of these results shows significant deviation from the SM which strongly hints to LFU breaking NP.To address these anomalies we consider the low energy effective Hamiltonian for the b→ sℓℓ transitionℋ_ eff = -4G_F/√(2)α_e/4πV_tbV_ts^∗∑_i=9,10 (C_i 𝒪_i + C_i^'𝒪_i^' ),where the four-fermion operators are defined as 𝒪_9(10) = (s̅γ^μ P_L b)(ℓ̅γ_μ(γ_5)ℓ) , 𝒪_9(10)^' = (s̅γ^μ P_R b)(ℓ̅γ_μ(γ_5)ℓ),with P_L,R=(1∓γ_5)/2 as the chiral projectors. The model-independent approach to address these tensions is to modify the Wilson coefficients C_i=C_i^ SM+δ C_i, where C_9^ SM=-C_10^ SM∼ 4.2 for all the charged leptons. δ C_i and the Wilson coefficients of the chirality flipped operators C_9,10^' can appear in different NP extensions and can be lepton flavor dependent. To obtain R_K<1 and R_K^∗<1, as suggested by the data, one can consider NP contributions in the Wilson coefficients (δ C_i,C_i^') for electrons and the muons such that either the B→ K^∗μ^+μ^- rate is reduced or the B→ K^∗ e^+e^- is enhanced or both. However, data on B → K^(∗) e^+e^- seems to be consistent with the SM predictions. Therefore, we work in a scenario where the B→ K^(∗)μ^+μ^- rates are reduced by NP contributions while the B→ K^(∗)e^+e^- rates are SM like. Introducing lepton superscripts in the Wilson coefficients such solutions for (δ C_i, C_i^') look like δ C_i^μ =C_i^ SM-C_i^μ 0,C_i^'μ 0, δ C_i^e=C_i^ SM-C_i^e = 0,C_i^' e = 0.Following the announcement of the LHCb results <cit.>, several NP models have been considered in Refs. <cit.> to explain both R_K and R_K^∗ anomalies, where the above type solutions have also been considered. Note that the effective Hamiltonian (<ref>) in general comprises of (pseudo)scalar and tensor operators but they are unable to explain the LHCb data <cit.>.In this work we explore the possibility to explain R_K^(∗) anomalies in R-parity violating (RPV) interactions. RPV interactions have been studied previously inRefs. <cit.> to accommodate R_K data. In Ref. <cit.> the authors assume a scenario where a tree-level exchange of left-handed up type squarkgenerates enhanced b→ s e^+e^- rate to obtain R_K <1. But we note that this scenario is unable to produce both R_K <1 and R_K^∗ <1 simultaneously. On the other hand, in Ref. <cit.> the authors studied the possibility of explaining R_K anomaly within the context of RPV via one-loop contribution involving d̃_R. However, the authors inRef. <cit.> note that the severe constraints from B→ K^(∗)νν̅ make it difficult for a viable explanation of R_K in this scenario. We note that there are also left-handed up-type squarks ũ_L and sneutrinos ν̃_L in this model which can give additional one-loop contributions to b→ sℓ^+ℓ^- transition.We take into account their contributions, and in addition to revisiting the R_K anomaly, we show that one can find a parameter space for the Yukawa couplings that simultaneously explain the R_K^∗ anomalies. We find that this parameter space is compatible with the upper bounds on B→ K^(∗)νν̅ branching ratios. We also briefly discuss the latest experimental results on other rare B and D decays.The outline of the rest of the paper is as follows. In section <ref> we briefly discuss the R-parity violating interactions relevant for b→ s μ^+μ^-. In section <ref> we discuss the one-loop contributions to b→ s μ^+μ^- and the relevant constraints from latest experimental data for B_s-B̅_s mixing amplitude, and B̅→ K^(*)νν̅ and D^0→μ^+μ^- decays. In section <ref> we summarize our results and conclude. 
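As a quick numerical cross-check of the quoted tensions (a back-of-the-envelope sketch of ours, not part of the analysis that follows: the asymmetric errors are symmetrised using the larger statistical uncertainty and the systematic one is added in quadrature, which is an assumption), the pulls of the SM predictions from the three measurements, and the SM χ^2 of about 19 used later in the fit, can be reproduced as follows.

    import math

    # (measurement, statistical error, systematic error, SM prediction)
    data = {
        "R_K [1,6]":        (0.745, 0.090, 0.036, 1.00),
        "R_K* [0.045,1.1]": (0.66,  0.11,  0.03,  0.93),
        "R_K* [1.1,6]":     (0.69,  0.11,  0.05,  0.99),
    }

    chi2_sm = 0.0
    for name, (exp, stat, syst, sm) in data.items():
        sigma = math.hypot(stat, syst)      # errors added in quadrature
        pull = (sm - exp) / sigma
        chi2_sm += pull ** 2
        print(f"{name}: pull = {pull:.1f} sigma")
    print(f"chi2_SM = {chi2_sm:.1f}")       # pulls ~ 2.6, 2.4, 2.5 and chi2_SM ~ 19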
§ R-PARITY VIOLATING INTERACTIONS In MSSM, the relevant R-parity violating interactions are generated through the superpotential given by <cit.> W_RPV = μ_i L_i H_u + 1 2λ_ijkL_i L_j E^c_k + λ^'_ijkL_i Q_j D^c_k + 1 2λ”_ijkU^c_iD^c_jD^c_k .Here Q_j represents the SU(2)_L quark isodoublet superfield while U^c_i and D^c_j represent right-handed up type and down type quark isosinglet superfields respectively. L_i, E^c_i denote SU(2)_L lepton isodoublet and isosinglet superfields respectively. H_u is the up type Higgs superfield that gives masses to the up type quarks. The trilinear terms contain only dimensionless parameters, while the bilinear term contains dimensionful coupling. To ensure proton stability we will assume the λ” coupling to be zero.Since the processes of our interest involve both leptons and quarks, we will consider the λ' interaction term as the source of NP in this work. The interactions induced by this term at the tree and one-loop can contribute to b→ sll. The relevant interaction terms in the Lagrangian can be obtained by expanding the superpotential term involving λ' in terms of fermions and sfermions asℒ = λ^'_ijk ( ν̃^i_L d̅^k_R d^j_L + d̃^j_L d̅^k_Rν^i_L + d̃^k*_R ν̅^ci_L d^j_L- l̃^i_L d̅^k_R u^j_L - ũ^j_L d̅^k_R l^i_L - d̃^k*_R l̅^ci_L u^j_L ) ,where the sfermions are denoted by tildes, and “c” denotes charge conjugated fields.§ B→ Sℓ^+ℓ^- IN R-PARITY VIOLATING INTERACTIONS One can obtain a potential tree level contribution to b→ sℓ^+ℓ^- via the interaction terms given in Eq. (<ref>). Integrating out ũ_L, one obtains the following four fermion operator at the tree levelL_eff = -λ'_ijkλ^'*_i'jk' 2m^2_ũ^j_Lℓ̅^i'_L γ^μℓ^i_L d̅^k_R γ_μ d^k'_R ,where m_ũ^j_L is the mass of ũ^j_L. For k=2 and k'=3, theoperator (s̅_R γ_μ b_R)(ℓ̅_L γ^μℓ_L)contributes to b → sμ^+μ^-. Comparing Eq. (<ref>) with the b→ s ℓ^+ℓ^- effective Hamiltonian given in Eq. (<ref>) we find the Wilson coefficients C_9^'ℓ and C_10^'ℓ corresponding to the operators (s̅_R γ_μ b_R)(ℓ̅γ^μℓ) and (s̅_R γ_μ b_R)(ℓ̅γ^μγ_5 ℓ) respectively to beC_10^'ℓ=-C_9^'ℓ=λ_ℓ j2'λ_ℓ j3'^*/V_tbV_ts^*π/α_e√(2)/4m_ũ_L^j^2G_F; ℓ=e,μ.We observe that for i=i^' =2 the solution C_10^'μ=-C_9^'μ is not able to generate R_K<1 and R_K^∗<1 simultaneously. So it is not possible to explain both R_K^* and R_K^* with tree level contributions coming from R-parity violating interactions. Therefore, in the rest of the paper we do not consider this contribution.Next we will explore one-loop contributions to b→ sℓ^+ℓ^- to see if R_K and R_K^* anomalies can be simultaneously explained. The model-independent analysis <cit.>shows that for simultaneous explanation of R_K and R_K^* a negative value of C^ NP μ_LL is favored, where C^ NP μ_LL = δ C_9^μ-δ C^μ_10. If one allows only one k,i.ek^'=k in Eq. (<ref>) then there is no tree level contributions to b→ sμ^+μ^- but one-loop contributions are still possible due to the exchange of d̃_R, ũ_L and ν̃_L as can be seen from equation (<ref>). Representative one-loop diagrams contributing to b→ sμ^+μ^- are shown in Fig. <ref>. The contributions coming from these box diagrams in the limit M_W^2, m_t^2 << m_d̃_R^2give rise toC^ NP μ_LL=λ_23k^'λ^'∗_23k/8π_e(m_t/m_d̃^k_R)^2 - λ_i3k^'λ^'∗_i2kλ_2jk^'λ^'∗_2jk/32√(2)G_FV_tbV_ts^∗πα_e m^2_d̃^k_R - λ_i3k^'λ^'∗_i2kλ_2jk^'λ^'∗_2jk/32√(2)G_FV_tbV_ts^∗πα_elog(m_ũ^j_L^2/ m^2_ν̃^i_L)/m_ũ^j_L^2 - m^2_ν̃^i_L,where repeated indices i and j are summed over and we assume that only couplings with k=3 are the dominant ones. 
Note that the first term correspond to the contribution coming from the box diagrams with a W boson and d̃^k_R in the loop. The second and the third terms correspond to box diagrams with two d̃^k_R in the loop and its supersymmetric counterpart respectively. The first two contributions are similar to the ones in the leptoquark model discussed in Ref. <cit.>. We also note that the γ- and Z-penguin diagrams (including the supersymmetric counterparts) give vanishing contributions <cit.>. The last term which is the new contribution in our analysis was not included in Ref. <cit.> on account of the assumption that ũ, ν̃ are much heavier compared to d̃_R. In the absence of this contribution, the constraints from B̅→ K^(∗)νν̅ proves to be too severe to explain the b→ s μ^+μ^- induced R_K and R_K^* anomalies as noted in Ref. <cit.>. Interestingly, since the the third term in equation (<ref>) gives a negative contribution to C_LL^ NPμ we are able to find a parameter space for the Yukawa couplings that give R_K<1 and R_K^*<1 while being consistent with the latest upper bound on B̅→ K^(*)νν̅ branching ratios.Before we study the parameter space, we discuss the constraints coming from other rare processes such as B_s-B̅_s mixing. RPV interations can give rise to tree level contribution to B_s-B̅_s due to ν̃ exchange, but for specific choices of the indexes j and k. Since we assume k = k^', tree level contribution is absent and the leading contribution to B_s-B̅_s arises through one loop diagrams involving d̃-ν and ν̃-d in our scenario. In Eq. (<ref>) we see that C^ NPμ_ LL depends on the product of couplings λ_i3k^'λ^'∗_i2k which also contributes to B_s-B̅_s mixing amplitude which can in turn be used to constrain these set of couplings. We follow the prescription of the UTfit Collaboration <cit.> and define the ratio C_B_s e^2 i ϕ_B_s = ⟨ B_s |H^full_eff | B̅_s ⟩ / ⟨ B_s |H^SM_eff | B̅_s ⟩ which readsC_B_se^2iϕ_B_s = 1 + m_W^2/g^4 S_0(x_t)( 1/m^2_d̃^k_R + 1/m^2_ν̃^i_L) λ_i3k^'λ_i2k^'∗λ_i'3k^'λ_i'2k^'∗/( V_tb V^∗_ts)^2.The latest UTfit values of the B_s-B̅_s mixing parameters are C_B_s = 1.070±0.088 and ϕ_B_s=(0.054± 0.951)^∘<cit.>. To be conservative, we take the upper limit of C_B_s and find the constraint on λ^'_i3kλ^'∗_i2k to be |λ^'_i3kλ^'∗_i2k|≲ 0.067for m_d̃^k_R∼ 1 TeV and m_ν̃^i_L∼ 0.6TeV. Now these same set of couplings also contribute toprocesses B̅→ K(K^*) νν̅. The ratio R_B̅→ K(K^*) νν̅ =Γ_ RPV(B̅→ K(K^*) νν̅)/Γ_ SM(B̅→ K(K^*) νν̅) is given by <cit.> R_B̅→ K(K^*) νν̅= ∑_i=,e,μ,τ1 3| 1 +χ^RPV_ν_iν̅_i X_0(x_t) V_tbV^*_ts| ^2 + 1 3∑_i≠ i'|χ^RPV_ν_iν̅_i' X_0(x_t)V_tbV^*_ts| ^2 ,withχ^RPV_ν_i ν̅_i' = π s^2_W √(2) G_F α ( -λ^'_i3kλ^' *_i' 2 k 2 m^2_d̃^k_R),X_0(x_t) = x_t(2+x_t) 8(x_t - 1) + 3x_t(x_t-2) 8(x_t-1)^2ln x_t ,and x_t = m^2_t/m^2_W. TheRPV couplings which can modifythe rate of B→ K^(∗)νν appear in the following combinations,λ_33k^'λ^'∗_32k, λ_23k^'λ^'∗_22k, λ_23k^'λ^'∗_32k, and λ_33k^'λ^'∗_22k. The latest experimental data from Belle <cit.> gives R_B→ K(K^*) νν̅ <3.9 (2.7) at 90% confidence level. Assuming one set of the product of couplings (i=i^' or i i^') to be non-zero, the bounds on these couplings turn out to be0.038 ( m_d̃_R/1 TeV)^2≳( λ^'_23kλ^'∗_22k+λ^'_33kλ^'∗_32k) ≳ -0.079 ( m_d̃_R/1 TeV)^2 , if i=i^', and0.055 ( m_d̃_R/1 TeV)^2≳( λ^'_33kλ^'∗_22k+λ^'_23kλ^'∗_32k)≳ -0.055 ( m_d̃_R/1 TeV)^2 if i i^', The contribution from the box diagrams also depends on one additional set of couplings λ^'∗_2jkλ_2jk^' which is always positive in our case. 
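Before turning to the D^0→μ^+μ^- constraint, it may help to see how the b→ sνν̅ bound translates into numbers. The Python sketch below transcribes X_0(x_t) and R_{B̅→ K(K^*)νν̅} as we read Eqs. (<ref>)-(<ref>), with the flattened fractions restored as χ^RPV/(X_0(x_t)V_tbV_ts^*); all numerical inputs (G_F, α_e, s_W^2, m_t, m_W, V_tbV_ts^*) are reference values assumed by us and do not come from the text, so the resulting coupling ranges only roughly reproduce the bounds quoted above.

    import math

    # assumed reference inputs (GeV where dimensionful)
    GF, alpha_e, sw2 = 1.166e-5, 1.0 / 128.0, 0.231
    mt, mW, VtbVts = 173.0, 80.4, -0.0405

    def X0(x):
        # X_0(x) = x(2 + x)/(8(x - 1)) + 3x(x - 2)/(8(x - 1)^2) * ln(x)
        return (x * (2 + x) / (8 * (x - 1))
                + 3 * x * (x - 2) / (8 * (x - 1) ** 2) * math.log(x))

    def chi_rpv(lam_prod, m_sd):
        # chi^RPV = (pi s_W^2 / (sqrt(2) G_F alpha_e)) * (-lam'_{i3k} lam'*_{i'2k} / (2 m_sd^2))
        return math.pi * sw2 / (math.sqrt(2.0) * GF * alpha_e) * (-lam_prod / (2.0 * m_sd ** 2))

    def R_Kvv(lam_diag, m_sd):
        # one flavour-diagonal product (i = i') switched on; the other two flavours stay SM-like
        z = chi_rpv(lam_diag, m_sd) / (X0((mt / mW) ** 2) * VtbVts)
        return (abs(1.0 + z) ** 2 + 2.0) / 3.0

    for lam in (0.038, -0.079):
        print(lam, R_Kvv(lam, 1000.0))   # compare with the Belle limits 3.9 (K) and 2.7 (K*)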
For j=2, 3, the set of couplings λ^'∗_22kλ_22k^' and λ^'∗_23kλ_23k^'are constrained from the experimental upper bound on the branching ratio for D^0→μ^+μ^-, which is given by 6.2 × 10^-9 at 90% confidence level <cit.>. At the quark level, D^0→μ^+μ^- is mediated by the transition c→ uμ^+μ^-. The short-distance effective Lagrangian for c→ uμ^+μ^-in R-parity violating interactions is given byℒ_eff = 1 2 m^2_d̃^k_Rλ^'_2jkλ^' *_2j'k V_1j'V^*_2jμ_Lγ_μμ_L u̅_L γ^μ c_L . In terms of RPV couplings, the decay width for D^0→μ^+μ^- is given by <cit.> Γ(D^0→μ^+μ^-) = 1128 π|λ^'_2jkλ^' *_2j'k V_1j'V^*_2j m^2_d̃^k_R| ^2 f_D^2 m_D m^2_μ√(1-4 m^2_μ m^2_D) ,where the D^0 decay constant is f_D = 212(1) MeV <cit.>. Note that in the SM, the decay rate for D^0→μ^+μ^- is very tiny(<10^-10) and we neglect it. Taking only λ^'_22k to benon zero, the upper bound on D^0→μ^+μ^- branching ratio givesλ^'∗_22kλ_22k^'<0.3 (m_d̃_R/1 TeV)^2, and taking only λ^'∗_23kλ_23k^' to be non zero we get a very weak bound,λ^'∗_23kλ_23k^'<10^2(m_d̃_R/1 TeV)^2 which was also noted in Ref. <cit.>. In this work wefix the combination λ^'_i3kλ^'∗_i2k from the experimental data on R_B→ K(K^*) νν̅ and B_s-B̅_s mixing discussed above, and explore the parameter space in terms of the other two couplings while being compatible with the bounds coming from D^0→μ^+μ^-. To this end, we must mention that in this analysis we set the R-parity violating couplings associated with electron modes to be vanishing in view of the fact that the electron modes are consistent with the SM. We also set λ^'_i1k to be zero and therefore the constraints from the processes like K→πνμ̅ and B→πνν̅ will not affect our analysis. Constraints on RPV couplings can also be obtained from partonic channels like bb̅→τ^+τ^- and bb̅→μ^+μ^-. Using the ATLAS <cit.> data on di-tau final states, constraints on ( 3,2, 1/6) leptoquark model have been studied in <cit.>. We note that in our model bb̅→τ^+τ^- (μ^+μ^-) arise at tree level via exchange of ũ which shares the same gauge charges with a leptoquark weak doublet ( 3,2, 1/6).Following <cit.> we find that for m_ũ=1TeV the ũ^j_L d̅^k_R l^i_L couplings are allowed by the current ATLAS data.In fig. <ref> (left plot) we show the parameter space in λ^'_23k-λ_22k^' plane that is allowed by the current R_K and R_K^∗, [1.1-6.0] data. The constraints from B → K^(∗)νν and D^0 →μ^+μ^- are also taken into account.We have fixed the product of the couplings λ^'∗_33kλ_32k^'= 0.0568and have taken m_d̃= 1.1TeV,m_ν̃= 600 GeV, and m_ũ =1 TeV. The blue band corresponds to the regions which can explain R_K^∗, [1.1-6.0] data and the magenta bands correspond to the allowed regions from R_K data. The overlapping regions correspond to the regions allowed by both R_K and R_K^∗, [1.1-6.0] data. The gray shaded regions are allowed by the latest experimental data on R_B→ K(K^*) νν̅, while the light yellow region corresponds to values consistent within 1σ of UTfit values on B_s-B̅_s mixing parameters.We observe that by taking a heavier mass for d̃_R and while keeping m_ũ fixed, one can find a better parameter space allowed by the considered processes in our analysis.This is simply due to the fact that the contributions from the first two terms in the expression of C^ NP μ_LL in Eq. (<ref>) are suppressed for larger values of m_d̃.Then the third termin Eq. (<ref>) drives the main contribution to C^ NP μ_LL which is always negative in our case. This is demonstrated infig. 
<ref> (right plot) where weshow the parameter space in λ^'_23k-m_d̃ plane.Here we have again fixed the product of the couplings λ^'∗_33kλ_32k^'= 0.0568 and m_ũ= 1 TeV and m_ν̃= 600 GeV.The blue bands correspond to the allowed region by R_K^∗, [1.1-6.0] data and the magenta bands correspond to the allowed region by R_K data. The overlapping region show the values which can explain both R_K and R_K^∗, [1.1-6.0] data simultaneously. The gray (light yellow) shaded regions show the parameter space allowed by the latest experimental data onR_B→ K(K^*) νν̅ (B_s-B̅_s mixing).Note that for the above parameter space we find the range of R^ RPV_K^∗, [0.045-1.1] to be [0.82-0.87] which is close to the 1σ range of LHCb measurement (<ref>). Moreover, values of all the couplings are well below the naive perturbative unitarity limit √(4π). We consider this as a very good agreement.In order to present a more robust numerical analysis, we perform a χ^2-test by defining a χ^2 function as χ^2 = (R_K^ Exp-R_K^ Th)^2/(Δ R_K^ Exp)^2 + (R_K^∗ , [1.1-6.0]^ Exp-R_K^∗, [1.1-6.0]^ Th)^2/(Δ R_K^∗, [1.1-6.0]^ Exp)^2 + (R_K^∗, [0.045-1.1]^ Exp-R_K^∗, [0.045-1.1]^ Th)^2/(Δ R_K^∗, [0.045-1.1]^ Exp)^2,where R_K^ Exp, R_K^∗ , [1.1-6.0]^ Exp, and R_K^∗, [0.045-1.1]^ Exp refer to the central values of the experimental measurements of observables as given in (<ref>) and (<ref>). Δ R_i^ Exp denote the 1σ uncertainties in the experimental measurements of observables R_i^ Exp (with systematic and statistical errors added in the quadrature), while R_i^ Th are the theoretical predictions of the observable. In the SM, we find χ^2_ SM≃ 19. In the considered model, the observables R_K, K^∗ are functions of four new couplings ł_22k,23k,32k,33k^' and masses of d̃_R, ũ_L and ν̃_L. We minimize the χ^2 function subject to the conditions that the parameter space do not violate the data on the B_s-B̅_s mixing parameters, b → s νν and D →μμ processes as discussed earlier. We also take into account of the b→ c (u) ℓν_ℓ data that we will discuss in the next paragraph. Note that b→ c (u) ℓν_ℓ processes are very sensitive to the coupling ł_33k^'. Though a larger value of ł_33k^' is acceptable by b → s μμ data, it will produce large branching ratios for b→ c (u) ℓν_ℓ. We will comment more on this issue after discussing numerical analysis. During the minimization we keep the masses of d̃_R, ũ_L and ν̃_L fixed. We find that in our model, with the choices m_d̃ = 1.1 TeV, m_ũ = 1 TeV and m_ν̃ = 0.6 TeV, minimum χ^2 is χ^2 ≃ 2.65 which corresponds to the RPV couplings ł_22k^' = -0.05, ł_23k^' = 2.49, ł_32k^' =0.04, ł_33k^' =1.42.These values of the couplings yield C_LL^ NPμ = -1.14 and the corresponding values of the observables read R_K = 0.74,R_K^∗ , [1.1-6.0] = 0.73, and R_K^∗ , [0.045-1.1] = 0.84. This is a good consistency with the experimental data on R_K and R_K^∗ [1.1-6], while the value of R_K^∗ data in the low q^2 bin [0.045-1.1] lies just outside the 1σ window of experimental mean value. One important point to note is that, by choosing slightly higher mass for d̃_R or slightly lower mass for ũ_L improves the fit further and a smaller χ^2 value can be achieved. For example, by taking m_d̃_R =1.5 TeV and keeping other masses same as in the previous case, we find χ^2 = 2.44 which correspond to ł_22k^' =-0.06, ł_23k^' =2.61, ł_32k^' = 0.06, ł_33k^' = 1.40. 
The corresponding values for observables read R_K = 0.70, R_K^∗, [1.1-6.0] = 0.69, and R_K^∗, [0.045-1.1] = 0.83.As also noted in the previous paragraph, this happens because the first two terms in the expression of C_LLμ^ NP in Eq. (<ref>) are suppressed by m_d̃_R^2 while the last term (always negative in our case) is independent of m_d̃_R. In particular, the suppression of the first term (which is always positive in our case) helps in obtaining overall negative value of C_LLμ^ NP required to explain the anomalies. The higher value for m_d̃_R also relaxes severe constraints from B -B̅ andb → s νν (this is shown in the second plot in Fig <ref>). Wenow would like to comment on the impact of the above parameter space on the latest B→ D^(∗)ℓν̅ date.The R-parity violating couplings can also induce new physics contribution to the semileptonic decays induced by b→ c (u) lν whereB-factories <cit.> and LHCb <cit.> have measured related lepton flavor universality ratios R_D^(*)R_D^(∗)=Br(B̅→ D^(∗)τν̅)/Br(B̅→ D^(∗)ℓν̅); ℓ = e, μ.The world average of the measurements for R_D^* and R_D at present is R_D^∗ = 0.310 ± 0.015 ± 0.008 and R_D = 0.403 ± 0.040 ± 0.024<cit.>. When combined together these values differ from the SM predictions <cit.> by about 4 σ. We note that the RPV interactions given in Eq.(<ref>) allows for tree level contribution to b→ c (u) ℓν transitions via the exchange of down-type right handed squarks d̃^k_R.There exists a number of studies concerning the explanation of R_D^(*) experimental data within RPV scenario <cit.>. The minimal setup to explain these excesses is by invoking new physics in tau mode only and having muon and electron modes SM like. However, simultaneous explanation of LFU ratios R_K^(∗) and R_D^(∗) in RPV pose a challenge, as noted in Ref. <cit.>. In our scenario for some region of the parameter space above that is consistent with R_K and R_K^∗ we find that ratios R_D and R_D^* to be SM like. Following Ref. <cit.> to study LFUin semileptonic B-decays one can define ratios r(B→ D^(∗)τν) = R_D^(∗)/R_D^(∗)^ SM r(B→ D^(∗)τν) = 2 R_τ(c)/R_μ(c) + R_e(c),where R_ℓ(c) = BR(B → D^(∗)ℓν)/ BR(B → D^(∗)ℓν)_ SM (ℓ=e, μ, τ). Similarly, one can define a ratio r(B→τν) related to decay B→ℓν asr(B→τν) = 2 R_τ(u)/R_μ(u) + R_e(u),with R_ℓ(u) given by R_ℓ(c) =BR(B →ℓν)/ BR(B →ℓν)_ SM. In the SM both r(B→ D^(∗)τν)and r(B→τν) are 1. The current experimental data showingenhanced ratios for R_D and R_D^* with respect to the SM prefersr(B→ D^(∗)τν) to be about ∼ 1.25<cit.>.As a standard benchmark point, for a right handed sbottom of mass 1.1 TeV and taking previously obtained best fit point (ł_22k^'=-0.05, ł_23k^'=2.49, ł_32k^'=0.04, ł_33k^'=1.42) for the couplings, we find r(B→ D^(∗)τν) and r(B→τν) to be ∼ 1.04. The individual decay rates for B→ D^(*)τν, B→ D^(*)μν, B→μν, B→τν are also under control and are allowed to be enhanced at most by 10% with respect to the SM, which is acceptable given the uncertainties in both experimental data and the SM predictions for these decay modes. We note that one can accommodate the current experimental data for R_D and R_D^* by taking a somewhat larger value of coupling ł_33k^'. However, largerł_33k^' will also induce large enhancement in the decay rate ofB→τνwhich has not been seen in the experiments. Therefore a simultaneous explanation ofLFU ratios related to b→ s ℓ^+ ℓ^- and b→ c ℓν remains a challenge in our scenario.§ CONCLUSIONS The recent LHCb results on R_K and R_K^∗ hint to lepton flavor universality breaking NP. 
In this work we have explored the possibility of addressing these anomalies in the framework of R-parity violating interaction. In our scenario, where we assume that NP enter only in the couplings of muons to the (axial)vector operators while the couplings of the electron remain SM like, we find that the tree level contributions to b→ sμ^+μ^- transition are not able to simultaneously yield R_K<1 and R_K^∗<1.Beyond the tree level, one-loop contributions to b→ sμ^+μ^- are generated by the exchange of d̃_R, ũ_L and ν̃_L, which lead to a parameter space for the Yukawa couplings that can simultaneously accommodate R_K and R_K^∗,[1.1-6] data while there is a good agreement between R_K^∗,[0.045-1.1]^ RPV and the measured value of R_K^∗,[0.045-1.1] by the LHCb. The parameter space is also consistent with the constraints coming from B→ K^(∗)νν̅ and D^0→μ^+μ^- decays and B_s-B̅_s mixing. Acknowledgments—The authors would like to thank N.G. Deshpande and Xiao-Gang He for many valuable and helpful communications. We also thank Anjan Joshipura for useful discussions. 99 Hiller:2003js G. Hiller and F. Kruger,Phys. Rev. D69, 074020 (2004),[hep-ph/0310219].Aaij:2014ora R. Aaijet al. [LHCb Collaboration],Phys. Rev. Lett.113, 151601 (2014),[arXiv:1406.6482 [hep-ex]]; Bordone:2016gaqM. Bordone, G. Isidori and A. Pattori,Eur. Phys. J. C76, no. 8, 440 (2016),[arXiv:1605.07633 [hep-ph]]. Bobeth:2007dw C. Bobeth, G. Hiller and G. Piranishvili,JHEP0712, 040 (2007),[arXiv:0709.4174 [hep-ph]]. LHCb2017R. Aaijet al. [LHCb Collaboration],arXiv:1705.05802 [hep-ex]. Capdevila B. Capdevila, A. Crivellin, S. Descotes-Genon, J. Matias and J. Virto,arXiv:1704.05340 [hep-ph]. Altmannshofer W. Altmannshofer, P. Stangl and D. M. Straub,arXiv:1704.05435 [hep-ph].DAmico G. D'Amico, M. Nardecchia, P. Panci, F. Sannino, A. Strumia, R. Torre and A. Urbano,arXiv:1704.05438 [hep-ph].Hiller G. Hiller and I. Nisandzic,arXiv:1704.05444 [hep-ph].Geng L. S. Geng, B. Grinstein, S. Jäger, J. Martin Camalich, X. L. Ren and R. X. Shi,arXiv:1704.05446 [hep-ph].Ciuchini M. Ciuchini, A. M. Coutinho, M. Fedele, E. Franco, A. Paul, L. Silvestrini and M. Valli,arXiv:1704.05447 [hep-ph].Celis A. Celis, J. Fuentes-Martin, A. Vicente and J. Virto,arXiv:1704.05672 [hep-ph].Becirevic D. Bečirević and O. Sumensari,arXiv:1704.05835 [hep-ph].Cai Y. Cai, J. Gargalionis, M. A. Schmidt and R. R. Volkas,arXiv:1704.05849 [hep-ph].Kamenik J. F. Kamenik, Y. Soreq and J. Zupan,arXiv:1704.06005 [hep-ph].Sala F. Sala and D. M. Straub,arXiv:1704.06188 [hep-ph].DiChiara S. Di Chiara, A. Fowlie, S. Fraser, C. Marzo, L. Marzola, M. Raidal and C. Spethmann,arXiv:1704.06200 [hep-ph].Ghosh D. Ghosh,arXiv:1704.06240 [hep-ph].Alok1 A. K. Alok, D. Kumar, J. Kumar and R. Sharma,arXiv:1704.07347 [hep-ph].Alok2 A. K. Alok, B. Bhattacharya, A. Datta, D. Kumar, J. Kumar and D. London,arXiv:1704.07397 [hep-ph].Alonso R. Alonso, P. Cox, C. Han and T. T. Yanagida,arXiv:1704.08158 [hep-ph].Wang W. Wang and S. Zhao,arXiv:1704.08168 [hep-ph]. Greljo A. Greljo and D. Marzocca,arXiv:1704.09015 [hep-ph]. Bonilla C. Bonilla, T. Modak, R. Srivastava and J. W. F. Valle,arXiv:1705.00915 [hep-ph]. Feruglio F. Feruglio, P. Paradisi and A. Pattori,arXiv:1705.00929 [hep-ph].Ellis:2017nrp J. Ellis, M. Fairbairn and P. Tunney,arXiv:1705.03447 [hep-ph]. Crivellin:2017zlbA. Crivellin, D. Müller and T. Ota,arXiv:1703.09226 [hep-ph]. Bishara:2017pje F. Bishara, U. Haisch and P. F. Monni,arXiv:1705.03465 [hep-ph]. Alonso:2017uky R. Alonso, P. Cox, C. Han and T. T. 
Yanagida,arXiv:1705.03858 [hep-ph].Tang Y. Tang and Y. L. Wu,arXiv:1705.05643 [hep-ph].Datta:2017ezoA. Datta, J. Kumar, J. Liao and D. Marfatia,arXiv:1705.08423 [hep-ph].Hurth:2017hxgT. Hurth, F. Mahmoudi, D. Martinez Santos and S. Neshatpour,arXiv:1705.06274 [hep-ph].Hiller:2014yaa G. Hiller and M. Schmaltz,Phys. Rev. D90, 054014 (2014),[arXiv:1408.1627 [hep-ph]];Deshpand:2016cpwN. G. Deshpande and X. G. He,Eur. Phys. J. C77, no. 2, 134 (2017), [arXiv:1608.04817 [hep-ph]].Biswas:2014ggaS. Biswas, D. Chowdhury, S. Han and S. J. Lee,JHEP1502, 142 (2015), [arXiv:1409.0882 [hep-ph]].Bauer:2015knc M. Bauer and M. Neubert,Phys. Rev. Lett.116, no. 14, 141802 (2016),[arXiv:1511.01900 [hep-ph]]. Barbier:2004ez R. Barbieret al.,Phys. Rept.420, 1 (2005),[hep-ph/0406039].Lunghi:1999uk E. Lunghi, A. Masiero, I. Scimemi and L. Silvestrini,Nucl. Phys. B568, 120 (2000),[hep-ph/9906286]. Abada:2012cq A. Abada, D. Das, A. Vicente and C. Weiland,JHEP1209, 015 (2012),[arXiv:1206.6497 [hep-ph]]. Krauss:2013gya M. E. Krauss, W. Porod, F. Staub, A. Abada, A. Vicente and C. Weiland,Phys. Rev. D90, no. 1, 013008 (2014),[arXiv:1312.5318 [hep-ph]]. Abada:2014kba A. Abada, M. E. Krauss, W. Porod, F. Staub, A. Vicente and C. Weiland,JHEP1411, 048 (2014),[arXiv:1408.0138 [hep-ph]]. Das:2016vkrD. Das, C. Hati, G. Kumar and N. Mahajan,Phys. Rev. D94, 055034 (2016),[arXiv:1605.06313 [hep-ph]]. Bona:2007vi M. Bonaet al. [UTfit Collaboration],JHEP0803, 049 (2008),[arXiv:0707.0636 [hep-ph]], updates are available at UTfit webpage<http://utfit.org/UTfit/WebHome>. Grygier:2017tzo J. Grygieret al. [Belle Collaboration],arXiv:1702.03224 [hep-ex]. Olive:2016xmw C. Patrignaniet al. [Particle Data Group],Chin. Phys. C40, no. 10, 100001 (2016). Rosner:2015wva J. L. Rosner, S. Stone and R. S. Van de Water, [arXiv:1509.02220 [hep-ph]]. Aad:2015osaG. Aadet al. [ATLAS Collaboration],JHEP1507, 157 (2015), [arXiv:1502.07177 [hep-ex]]. Aaboud:2016creM. Aaboudet al. [ATLAS Collaboration],Eur. Phys. J. C76, no. 11, 585 (2016),[arXiv:1608.00890 [hep-ex]]. Faroughy:2016oscD. A. Faroughy, A. Greljo and J. F. Kamenik,Phys. Lett. B764, 126 (2017),[arXiv:1609.07138 [hep-ph]]. Lees:2012xjJ. P. Leeset al. [BaBar Collaboration],Phys. Rev. Lett.109, 101802 (2012),[arXiv:1205.5442 [hep-ex]]. Lees:2013uzdJ. P. Leeset al. [BaBar Collaboration],Phys. Rev. D88, no. 7, 072012 (2013),[arXiv:1303.0571 [hep-ex]]. Huschle:2015rgaM. Huschleet al. [Belle Collaboration],Phys. Rev. D92, no. 7, 072014 (2015),[arXiv:1507.03233 [hep-ex]]. Sato:2016svkY. Satoet al. [Belle Collaboration],Phys. Rev. D94, no. 7, 072007 (2016),[arXiv:1607.07923 [hep-ex]]. Abdesselam:2016xqtA. Abdesselamet al.,arXiv:1608.06391 [hep-ex]. Hirose:2016wfnS. Hiroseet al. [Belle Collaboration],arXiv:1612.00529 [hep-ex]. Aaij:2015yraR. Aaijet al. [LHCb Collaboration],Phys. Rev. Lett.115, no. 11, 111803 (2015) Erratum: [Phys. Rev. Lett.115, no. 15, 159901 (2015)],[arXiv:1506.08614 [hep-ex]]. Amhis:2016xyhY. Amhiset al.,arXiv:1612.07233 [hep-ex],updates are available at HFAG webpage<http://www.slac.stanford.edu/xorg/hfag/>Bernlochner:2017jkaF. U. Bernlochner, Z. Ligeti, M. Papucci and D. J. Robinson,arXiv:1703.05330 [hep-ph]. Fajfer:2012vxS. Fajfer, J. F. Kamenik and I. Nisandzic,Phys. Rev. D85, 094025 (2012),[arXiv:1203.2654 [hep-ph]]. Bigi:2016mdzD. Bigi and P. Gambino,Phys. Rev. D94, no. 9, 094008 (2016),[arXiv:1606.08030 [hep-ph]]. Lattice:2015rgaJ. A. Baileyet al. [MILC Collaboration],Phys. Rev. D92, no. 3, 034506 (2015),[arXiv:1503.07237 [hep-lat]]. Na:2015khaH. Naet al. 
[HPQCD Collaboration],Phys. Rev. D92, no. 5, 054510 (2015) Erratum: [Phys. Rev. D93, no. 11, 119906 (2016)], [arXiv:1505.03925 [hep-lat]]. Deshpande:2012rrN. G. Deshpande and A. Menon,JHEP1301, 025 (2013), [arXiv:1208.4134 [hep-ph]].Altmannshofer:2017poe W. Altmannshofer, P. S. B. Dev and A. Soni,arXiv:1704.06659 [hep-ph]. ] | http://arxiv.org/abs/1705.09188v2 | {
"authors": [
"Diganta Das",
"Chandan Hati",
"Girish Kumar",
"Namit Mahajan"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170525140847",
"title": "Scrutinizing $R$-parity violating interactions in light of $R_{K^{(\\ast)}}$ data"
} |
Tuning the topological states in metal-organic bilayers F. Crasto de Lima, Gerson J. Ferreira, and R. H. Miwa December 30, 2023 =========================================================
The shear-induced deformation of a capsule with a stiff nucleus, a model of eukaryotic cells, is studied numerically. The membranes of the cell and of its nucleus are modelled as thin and impermeable elastic materials obeying a Neo-Hookean constitutive law. The membranes are discretised by a Lagrangian mesh and their governing equations are solved in spectral space using spherical harmonics, while the fluid equations are solved on a staggered grid using a second-order finite-difference scheme. The fluid-structure coupling is obtained using an immersed boundary method. The numerical approach is presented and validated for the case of a single capsule in a shear flow. The variations induced by the presence of the nucleus on the cell deformation are investigated when varying the viscosity ratio between the inner and outer fluids, the membrane elasticity and its bending stiffness. The deformation of the eukaryotic cell is smaller than that of the prokaryotic one. The reduction in deformation increases for larger values of the capillary number. The eukaryotic cell remains thicker in its middle part compared to the prokaryotic one, thus making it less flexible to pass through narrow capillaries. For a viscosity ratio of 5, the deformation of the cell is smaller than in the case of uniform viscosity. In addition, for non-zero bending stiffness of the membrane, the deformation decreases and the shape is closer to an ellipsoid. Finally, we compare the results obtained modeling the nucleus as an inner stiffer membrane with those obtained using a rigid particle.
§ INTRODUCTION The deformation of a cell in shear flows is one of the fundamental mechanical problems in cell biology. A living cell is subjected to mechanical forces of various magnitude, direction and distribution throughout its life. The cell response to those forces reflects its biological function <cit.>. As an example, red blood cells (RBC) have a diameter of about 7 μ m in the undeformed state. Their ability to deform quite significantly allows them to pass through narrow capillaries having a diameter of 3 μ m <cit.>. This high deformability enables them to reach various parts of the human body and to distribute oxygen and nutrients to cells. RBC can however be affected by the protozoan Plasmodium falciparum, the parasite that causes malaria. These parasites change the red blood cell chemical and structural composition <cit.>, thus inducing a stiffening of the cell membrane <cit.>. Such variations in the mechanical properties of the cells affect the blood rheology, which may help to diagnose diseases. Many early experimental studies have addressed the interaction between tiny deformable particles and an external flow. Several interesting types of motion have been discovered such as tumbling and tank-treading in shear flow <cit.>, the zipper flow pattern <cit.> or parachute cell shapes <cit.>. More recent studies focused on cells that exhibit very large deformations at high shear rates, which can cause breaking <cit.>, just to mention a few examples. Most of these studies are of an experimental nature. Such investigations can however be quite expensive since they require dedicated facilities that are not easy to fabricate. In addition, experimentally measuring the exact deformation and stresses can be rather complicated.
Developing robust and reliable numerical platforms is thus of increasing importance in order to perform high-fidelity simulations alongside laboratory experiments. Many cells, including red blood cells, can be modeled as capsules. Capsules consist of a droplet enclosed by a thin membrane: the membrane area can vary while the enclosed volume is constant. Nowadays, several numerical studies on the deformation of a capsule in shear flow have been reported in the literature. At certain shear rates, the capsule reaches a steady shape while its membrane exhibits a rotation known as tank-treading motion <cit.>. This tank-treading motion disappears when the viscosity or shear rate of the external fluid becomes low enough and instead a flipping or tumbling motion similar to that of a rigid body appears <cit.>. Membranes can also undergo buckling or folding for high elastic moduli or at low and high shear rates in the absence of bending rigidity <cit.>. A solution to this problem is proposed by introducing a stress on the undeformed membrane, the so-called pre-stressed capsule <cit.>. As regards the motion of non-spherical capsules in shear flow, different types of motion occur when changing the fluid viscosity, the membrane elasticity, the geometry of the problem or the applied shear rate. In <cit.>, a phase diagram is presented for a biconcave-shaped capsule in which the transition from tank-treading to tumbling motion is identified when decreasing the shear rate. For eukaryotic cells, the overall mechanical properties of a cell are not only determined by its membrane but also by other cell organelles such as the cell nucleus <cit.>. Typically, the nucleus is stiffer than the surrounding cytoplasm, which results in lower deformation when subjected to external stimuli <cit.>. To model and predict the cell behavior, the mechanical properties of the nucleus need to be quantified. To this end, both experimental tests and numerical simulations have been carried out in <cit.>. The elastic modulus of the nucleus in round and spread cells was found to be around 5000 N/m^2, roughly ten times larger than for the cytoplasm. As a further example, the nucleus of bovine cells is nine times stiffer than the cytoplasm <cit.>, yet small deformations of the nucleus may occur when a cell is subjected to flow <cit.>. Though it can exhibit large deformation on a substrate when highly compressed, stretched or flattened <cit.>, the nucleus may be assumed to behave as a rigid particle for an intermediate range of the applied forces (external shear). The objective of the present research is to quantify the deformation of a cell with a nucleus under simple shear flow at low but finite Reynolds numbers (fluid inertia) using numerical simulations. For the present model, the cell is made of two thin (two-dimensional) elastic membranes as originally proposed by <cit.>: the outer membrane separating the cell from the ambient fluid and an inner membrane acting as the boundary of the nucleus. The neo-Hookean hyperelastic model is chosen for the strain energy of the membranes, while the inner and outer fluids are assumed to be Newtonian and can have different viscosities. The viscosity inside the nucleus is assumed to be the same as that of the surrounding cytoplasm. As the deformation of the nucleus is assumed to be negligible, the nucleus is numerically treated either with a second, stiffer inner membrane or, alternatively, as a rigid particle using a different numerical approach <cit.>. The presence of a nucleus and the effect of viscosity ratio and bending stiffness on the cell deformation are investigated to document the potential of the numerical method here developed. The results are expected to provide an interesting comparison between the mechanical properties of cells with and without a nucleus. This manuscript is organised as follows. In section §<ref>, the geometry of the problem and the governing equations for the membranes and flow dynamics are presented. Section §<ref> provides a brief introduction to the numerical methods used in order to simulate the problem. Validations of the present implementations are presented in §<ref> while section §<ref> reports the main results of the present study. Finally, conclusions and perspectives are given in section §<ref>.
§ PROBLEM STATEMENT Figure <ref> depicts the flow configuration considered and the coordinate system adopted. An initially spherical cell located at the geometrical center of a rectangular computational box is considered. The upper and lower walls of the domain move at opposite velocities in the streamwise direction while periodicity is assumed in the other two directions. As boundary conditions we therefore impose no-slip at the walls and periodicity in the streamwise and spanwise directions.
The presence of a nucleus and the effect of viscosity ratio and bending stiffness on the cell deformation are investigated to document the potential of the numerical method here developed. The results are expected to provide an interesting comparison between mechanical properties of cells with and without nucleus.This manuscript is organised as follows. In §<ref>, the geometry of the problem and the governing equations for the membranes and flow dynamics are presented. Section §<ref> provides a brief introduction to the numerical methods used in order to simulate the problem. Validations of the present implementations are presented in §<ref> while section §<ref> reports the main results of the present study. Finally, conclusions and perspectives are given in section §<ref>.§ PROBLEM STATEMENTFigure <ref> depicts the flow configuration considered and the coordinate system adopted. An initially spherical cell located at the geometrical center of a rectangular computation box is considered. The upper and lower walls of the domain move at opposite velocities in the streamwise direction while periodicity is assumed in the other two directions. As boundary conditions we therefore impose no-slip at the walls and periodicity in the steamwise and spanwise directions .§.§ Navier-Stokes equations The dynamics of the incompressible flow of a Newtonian fluid are governed by the Navier-Stokes equations,∇·u=0, ∂u/∂t+u·∇u=-∇ P+1/Re∇·[μ^∗(∇u+∇u^T)]+f,where u=(u, v, w)^T is the velocity vector, P the hydrodynamic pressure, and f indicates the fluid-solid interaction force. The Reynolds number Re is defined asRe=ργ̇R^2/μ_o.In the expression above, γ̇ is shear rate, R the reference cell radius, ρ the reference density (assumed here to be the same for the fluid inside and outside the cell) and μ_o the reference viscosity. In the present work, the reference viscosity is set to be the viscosity of the fluid outside the cell and the ratio between inner and outer viscosity is defined as λ=μ_i/μ_o, with μ_i the inner viscosity. In the expression above μ^∗=μ(x)/μ_o indicates the ratio of viscosity at each point to the reference viscosity. §.§ Membrane dynamics Cells are surrounded by a deformable membrane known as the plasma or cytoplasmic membrane. Along with a number of biological functions, its main purpose is to separate the interior of each cell from the external environment. It can moreover deform quite significantly, as in the case of red blood cells traveling through capillary vessels. In the present numerical study, such membrane is modelled using a hyper-elastic model.A point on the surface of the cell is expressed by usingthe curvilinear coordinates (ξ^1,ξ^2).To define the cell, two different coordinate bases are used, see figure <ref>. The first is a fixed cartesian base, ( e_1,e_2,e_3) corresponding to positionx(ξ^1,ξ^2).The second coordinate is a local covariant base ( a_1, a_2, a_3) which follows the local deformation of the membrane. The unit vectors of the local base area_1=∂x/∂θ, a_2=∂x/∂ϕ,a_3=a_1×a_2/|a_1×a_2|, where θ and ϕ are latitudinal and longitudinal angles on the cell surface. The co-variant and contra-variant metric tensors are defined asa_αβ=a_α·a_β,a^αβ=a^α·a^β,where α,β=1,2. The basis vectors and metric tensors in the undeformed (reference) state are hereafter denoted by capital letters (A^α, A^αβ). 
The invariants of the transformation I_1 and I_2 are defined asI_1=A^αβa_αβ-2, I_2=|A^αβ||a_αβ|-1.Equivalently, they can also be determined from the principal stretching ratios λ_1 and λ_2 asI_1 = λ_1^2 +λ_2^2 - 2, I_2 = λ_1^2 λ_2^2 - 1 = J_s^2 - 1. The ratio of the deformed to the undeformed surface area is definedby the Jacobian J_s=λ_1λ_2. The two dimensional Cauchy stress tensor, T, is computed from the strain energy function per unit area W_s(I_1,I_2) of the undeformed membrane as T=1/J_sF·∂W_s/∂e ·F^Twhere F= a_α⊗A^α and e=(F^T· F-I)/2 is the Green-Lagrange strain tensor. Equation (<ref>) can be further expressed component-wise asT^αβ=2/J_s∂W_s/∂I_1A^αβ+2J_s∂W_s/∂I_2a^αβ. In the rest of this work, the strain energy function W_s is modeled using the neo-Hookean (NH) law <cit.>. Using this model, the strain energy function is expressed asW^NH_s= 1/2 We(I_1-1+1/I_2+1),where We=ρ R^3 γ̇^2/G_s is the Weber number (or non-dimensional surface shear modulus). The local equilibrium relates the tensor T to the external elastic load q according to∇_s·T+q=0,with ∇_s· the surface divergence operator. In curvilinear coordinates, the load vector can be written as q=q^βa_β+ q^n n. The force balance in equation (<ref>) is further decomposed into tangential and normal components,∂T^αβ/∂ξ^α+Γ^α_αλT^λβ+Γ^β_αλT^αλ+q^β=0,β=1,2T^αβb_αβ+q^n=0where Γ_αλ^α and Γ^β_αλ are the Christoffel symbols.In some cases, due to noninfinitesimal membrane thickness or a preferred configuration of an interfacial molecular network, bending moments accompanied by transverse shear tensions play an important role on cell deformation <cit.>. Bending stiffness is incorporated into the model using a linear isotropic model for the bending moment <cit.>: M^α_β=-B(b^α_β-B^α_β),where B=G_b/ρ R^5 γ̇^2 is the non dimensional bending modulus, and b^α_β is the Gaussian curvature (B^α_β corresponds to that of the reference configuration). According to the local torque balance, including bending moments on the membrane, we obtain the transverse shear vector Q and in-plane stress tensor T,M^αβ_|α-Q^β=0, ε_αβ(T^αβ-b^α_γ M^γβ)=0,where ^, _|α ^, represents the covariant derivative and ε is the two-dimensional Levi-Civita tensor. The left hand side of equation (<ref>) identifies the antisymmetric part of the in-plane stress tensor, which is always zero as proved in <cit.>. Including the transverse shear stress Q, the local stress equilibrium, including bending finally gives∂T^αβ/∂ξ^α+Γ^α_αλT^λβ+Γ^β_αλT^αλ-b^β_α Q^α+q^β=0,β=1,2T^αβb_αβ-Q^α_|α+q^n=0.The non-dimensional numbers in equations (<ref>-<ref>) are obtained using the radius of the cell R as length scale, the shear rate 1/γ̇ as time scale, thus coupling membrane deformation and flow dynamics, ρ_oγ̇^2R^3 as reference surface shear modulus and ρ_oγ̇^2R^5 for the bending stiffness.TheWeber number can be re-written as We=Re · Ca where Ca=μ_o R γ̇/G_s is the Capillary number. In the present study, we shall considerdifferent stiffnesses of the cell membrane by varying the Capillary number and keeping the Reynolds number constant.§ NUMERICAL METHODS §.§ The Navier-Stokes solver The Navier-Stokes equations are discretized using a staggered uniform grid to prevent checkerboard numerical instability, while the time integration relies on the classical projection method <cit.>. 
This method is a three-step procedure: first, a non-solenoidal velocity field u^* is computed asu^* -u^n/Δ t =RHS( u, p),where RHS( u, p) is the right-hand side of the discretised Navier-Stokes equations and contains the fluid-structure interaction (FSI) forces. In a second step, the pressure field is obtained as the solution to the following Poisson equation∇^2 p = -1/Δ t∇· u^*.Finally, the corrected velocity field u^n+1 is obtained asu^n+1 =u^* + Δ t ∇ p,where ∇ p is the pressure gradient required for the velocity field u^n+1 to be divergence-free. Second-order central differences are used for the spatial discretization of the convective terms, while their temporal integration relies on the Adams-Bashforth explicit method.As we allow for viscosity contrast between the fluid inside and outside the cells, the classical Fast Fourier spectral method cannot be readily used to evaluate the diffusive term Du = ∇·( μ[ ∇ u + ∇ u^T ] ). Indeed, the viscosity field being a function of space, this operator cannot be reduced to a constant coefficient Laplace operator. However, Dodd & Ferrante <cit.> have recently introduced a splitting operator technique able to overcome this drawback. Though it has initially been derived for the pressure Poisson equation, this splitting approach can easily be extended to the Helmholtz equation resulting from an implicit (or semi-implicit) integration of the diffusive terms, ( I - Δ tD) u^n+1 =RHS^nwhere I is the identify matrix, and RHS^n the discretized right-hand side including the non-linear advection terms. Given the viscosity fieldμ^∗( x) =1 + μ'( x)where 1 is the constant part and μ'( x) the space-varying component, the diffusive term Du can be re-written asDu = 1/Re∇^2u_ D_1 u + 1/Re∇·( μ'( x) [ ∇ u + ∇ u^T ] )^ D_2 u. The constant coefficients operator D_1 can then be treated implicitly while the variable coefficients operator D_2 is treated explicitly. The resulting Helmholtz equation then reads ( I - Δ tD_1) u^n+1 =RHS^n + Δ tD_2 u^n . Since D_1 is now a constant coefficient Laplace operator, equation (<ref>) can be solved using a classical Helmholtz solver based on Fast Fourier transforms. Note that a similar Fast Fourier-based solver is used to solve the Poisson equation (<ref>) for the pressure.§.§ Membrane representation: Spherical harmonics The membrane shape has been modeled as linear piece-wise functions on triangular meshes by Pozrikidis <cit.>, Ramanujan & Pozrikidis <cit.> and Li & Sarkar <cit.> among others. The finite element method has also been employed by Walter et al. <cit.> for its generality and versatility. Another interesting method is the global spectral method. Fourier spectral interpolation and spherical harmonics have been used for two-dimensional <cit.> and three-dimensional simulations <cit.>. Here, we follow the approach of Zhao et al. <cit.> , previously implemented in <cit.>. This is briefly outlined below.The capsule surface is mapped onto the surface of the unit reference sphere S^2,using the angles in spherical coordinates (θ,ϕ) for the parametrization. The parameter space {(θ,ϕ)| 0≤θ≤π, 0≤ϕ≤2π} is discretized by a quadrilateral grid using Gauss-Legendre (GL) quadrature intervals in θ and uniform spacing in the ϕ direction. All other surface quantities are stored on the same mesh, i.e. the grid is co-located. The surface coordinates x(θ,ϕ) are expressed by a truncated series of spherical harmonic functions,x(θ,ϕ)=∑_n=0^N_SH-1∑_m=0^nP̅^m_n(cosθ)(a_nmcosmϕ+b_nmsinmϕ),yielding N^2_SH spherical harmonic modes. 
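As an illustration of this representation, a sketch of the backward (synthesis) step, reconstructing the surface coordinates from a given set of coefficients a_nm and b_nm, is reported below. The actual code performs both forward and backward transforms with SPHEREPACK; the SciPy-based evaluation of the normalized Legendre functions P̄^m_n (defined just below) and the sign factor compensating SciPy's Condon-Shortley convention are assumptions made to keep this example self-contained.

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def legendre_norm(n, m, x):
    """Normalized associated Legendre function used in the expansion.
    scipy.special.lpmv includes the Condon-Shortley phase (-1)^m, which the
    definition adopted here does not; hence the compensating sign (convention
    assumed, to be checked against the library actually used)."""
    norm = np.sqrt((2 * n + 1) * factorial(n - m) / (2.0 * factorial(n + m)))
    return (-1.0) ** m * norm * lpmv(m, n, x)

def surface_from_modes(a, b, theta, phi, n_sh):
    """x(theta, phi) = sum_{n<n_sh} sum_{m<=n} Pbar^m_n(cos theta)
                         * (a_nm cos(m phi) + b_nm sin(m phi)).
    a, b : coefficient arrays of shape (n_sh, n_sh, 3); theta, phi : 2-D grids."""
    x = np.zeros(theta.shape + (3,))
    ct = np.cos(theta)
    for n in range(n_sh):
        for m in range(n + 1):
            basis = legendre_norm(n, m, ct)[..., None]
            x += basis * (a[n, m] * np.cos(m * phi)[..., None]
                          + b[n, m] * np.sin(m * phi)[..., None])
    return x
```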
The corresponding normalized Legendre polynomials are given byP̅^m_n(x)=1/2^nn!√((2n+1)(n-m)!/2(n+m)!)(1-x^2)^m/2d^n+m/dx^n+m(x^2-1)^n. The SPHEREPACK library <cit.> is employed for the forward and backward transformations. To deal with aliasing errors arising due to the nonlinearities in the membrane equations (products, roots and inverse operations needed to calculate the geometric quantities), we use an approximate de-aliasing by performing the nonlinear operations on M_SH>N_SH points and filtering the result back to N_SH points. A detailed discussion on this issue is provided in Freund & Zhao <cit.>. For most of the results presented in the present study, 576 spherical harmonics with 24 modes are used to define the cell shape.Considereing different viscosity inside and outside the cell, a space and time dependent viscosity field is defined by an indicator function I( x, t) related to the membrane location,μ^∗( x)= (1 - I( x)) + I( x)λ. Here we followUnverdi and Tryggvason <cit.> for the definition of the indicator function as the solution to the following Poisson equation∇^2 I = ∇·Gwhere the Green's function G=∫δ(X-x) n ds, and n is the unit normal vector to the cell surface.Using the smooth Dirac delta function introduced below in the computation of G makes the indicator function smoother near the boundary <cit.>. Such indicator function is similar to the regularised Heaviside function used in the levelset framework.§.§ Immersed boundary method§.§.§ Immersed boundary method for deformable partices The immersed boundary method <cit.> is commonly used to solve fluid-solid interaction problems. In this method, two distinct sets of grid points are used: (i) an Eulerian mesh to solve fluid flow, see section <ref>, and (ii) a Lagrangian mesh for solving the particle motion, section <ref>. In the presentlow Reynolds number framework, we start from the original approach of Peskin <cit.>. At each time step, the fluid velocity defined on the Eulerian mesh is first interpolated onto the Lagrangian mesh,U_ib(X,t)=∫_Ωu(x,t)δ(X-x)dx,where x and X are the Eulerian and Lagrangian coordinates and δ is a smooth Dirac delta function, here that proposed in Ref. <cit.>. The elastic force per area q and surface normal vectors n are then computed from the membrane equations described above. As next step, the normal vectors are used to compute the indicator function I( x, t) on the Eulerian mesh. The force is then spread to Eulerian mesh and added to the momentum equations as f(x,t)=∫q(X,t)δ(x-X)ds. Thereafter, the positions of the Lagrangian points are updated according to X^n+1=X^n+∫^t_0 U_ib dt. Note that equation (<ref>) assumes an over-damped regime, i.e. the Lagrangian points go to their equilibrium position immediately after the FSI force is applied. Finally, the fluid flow is solved in the Eulerian framework as explained in section <ref>. A flowchart for computational procedure at each iteration is depicted in figure <ref> .The method described above is not particularly efficient when the Reynolds number increases since it requires very small time steps, whichincreases the computational time. At moderate and high Reynolds numbers, a modification of the method by Kim et al. <cit.>, is employed to be consistent with the assumption of inertialess membrane. 
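Before detailing this modification, the two transfer operations of the basic scheme, interpolation of the Eulerian velocity onto the Lagrangian points and spreading of the membrane forces with the same weights, can be sketched as follows. The uniform cell-centered layout, the periodic index wrapping and all names are simplifying assumptions of this example (on the actual staggered grid each velocity component is treated at its own locations); the regularized delta is the three-point function of Roma et al.

```python
import numpy as np

def delta_roma(r):
    """Three-point regularized delta of Roma, Peskin & Berger (1999), one
    direction, with r the distance measured in grid spacings."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    m1 = r <= 0.5
    m2 = (r > 0.5) & (r <= 1.5)
    out[m1] = (1.0 + np.sqrt(1.0 - 3.0 * r[m1] ** 2)) / 3.0
    out[m2] = (5.0 - 3.0 * r[m2] - np.sqrt(1.0 - 3.0 * (1.0 - r[m2]) ** 2)) / 6.0
    return out

def interpolate_velocity(u, X, h):
    """U_ib(X) = sum_x u(x) delta_h(X - x) h^3 over the 3x3x3 support.
    u : (nx, ny, nz, 3) cell-centered velocity on a uniform grid of spacing h;
    X : (nL, 3) Lagrangian points.  Periodic index wrapping for brevity."""
    ncell = np.array(u.shape[:3])
    U = np.zeros((X.shape[0], 3))
    for p in range(X.shape[0]):
        base = np.rint(X[p] / h - 0.5).astype(int)   # nearest cell-center index
        for di in range(-1, 2):
            for dj in range(-1, 2):
                for dk in range(-1, 2):
                    off = base + np.array([di, dj, dk])
                    xc = (off + 0.5) * h              # physical cell-center position
                    w = np.prod(delta_roma((X[p] - xc) / h))
                    U[p] += w * u[tuple(off % ncell)]
    return U

# Spreading uses the same weights in the opposite direction:
# f[tuple(off % ncell)] += w * q_p * dA_p / h**3  for each Lagrangian point p.
```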
In our approach, in addition to theLagrangian coordinates X, we introduce the additional immersed boundary points X_ib whose motion is governed by equation (<ref>).Since the total force exerted on each element on the membrane surface is equal to the difference between its acceleration and the acceleration of fluid element at the same location, the motion of the real Lagrangian points is governed by ρ_os∂^2X/∂t^2=ρ_os∂^2X_ib/∂t^2+F_e-F_FSI+F_A where ρ_os is the surface density of the base fluid. The two sets of Lagrangian points, X and X_ib, are connected to each other by a spring and damper, i.e. a fluid-solid interaction force F_FSI computed using the following feedback law F_FSI=-κ[(X_ib-X)+Δ t(U_ib-U)]. The final modified procedure is therefore as follows. At each time step, we first compute U_ib and the fluid-solid interaction force F_FSIfrom equation (<ref>). The indicator function I( x, t) is then computed to identify the interior of the cell and impose viscosity contrasts, and the momentum equation solved to obtain the flow velocity u. Finally, the positions of the Lagrangian points X are updated using equation (<ref>). This additional equation is made non-dimensional as above, in particular with ρ_oR for the surface density of the base fluid and ρ_oR^2γ̇^2 for the elastic and fluid-solid interaction forces per unit area. For completeness, we report the non-dimensional form of equation (<ref>), d^∗∂^2X/∂t^2=d^∗∂^2X_ib/∂t^2+F_e-F_FSI+F_A where d^∗=d/R is ratio between the membrane thickness and initial radius of the cell, assumed in the present study to be d^∗=0.01. In the above, F_A is the penalty force used to enforce volume conservation, calculated as in <cit.>: F_A=Δ p ·η (θ,ϕ) ·e_n,Δ p=1/β(1-V/V_0)+1/β∫_0^t(1-V/V_0)dt^'. Here Δ p represents the pressure generated by the volume change, η (θ,ϕ) is the surface area of each element and e_n the local unit normal vector. This force is also added to the elastic force q before spreading it to the Eulerian mesh according to equation (<ref>). §.§.§ Immersed boundary method for rigid particles In the present study, two different immersed boundary methods are considered to simulate the stiff nucleus. First, themethod described above is used with high surface shear modulus for the inner capsule. In the second approach we follow the implementation byBreugem <cit.>, which has been widely used in the framework of rigid particles, see e.g. <cit.>. In this method, a moving Lagrangian mesh is adopted to impose no-slip and no-penetration on the surface of a rigid object. The numerical procedure is as follows: first, the prediction velocity u^∗ is computed from the Navier-Stokes equations neglecting the fluid-solid interaction force. This fluid velocity is then interpolated onto the Lagrangian mesh (U^∗) and the fluid-solid interaction force computed, based on the difference between the fluid and the solid body velocity at each Lagrangian point,F_FSI=U_P-U^∗/Δ t. This force is spread to the Eulerian grid and the second prediction velocity u^∗∗ obtained by solving the Navier-Stokes equations with the fluid-solid interaction force. The divergence-free constraint is then imposed on the velocity field by solving the pressure Poisson equation and correcting the velocity field appropriately. Finally, the total force and torque on each particle is computed, and the translational and rotational velocities of the particle obtained by integrating the Newton-Euler equations. 
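A compact sketch of one direct-forcing substep for the rigid-nucleus variant is reported below. It reuses interpolation and spreading operators of the kind sketched above (passed in as functions), assumes the particle translational and angular velocities are known from the preceding Newton-Euler update, and omits the inner iterations and corrections of the complete scheme; all names are illustrative.

```python
import numpy as np

def direct_forcing_step(u_star, X, U_trans, Omega, Xc, h, dt, dV_l,
                        interpolate, spread):
    """One direct-forcing IBM substep for a rigid particle.
    u_star   : predicted Eulerian velocity (no fluid-solid force yet)
    X        : (nL, 3) Lagrangian points on the particle surface
    U_trans, Omega, Xc : particle translational/angular velocity and centroid
    dV_l     : (nL,) volume associated with each Lagrangian point
    interpolate, spread : delta-function transfer operators (see sketch above).
    """
    U_star = interpolate(u_star, X, h)              # Eulerian -> Lagrangian
    U_p = U_trans + np.cross(Omega, X - Xc)         # rigid-body velocity at X
    F_fsi = (U_p - U_star) / dt                     # force enforcing no-slip
    f_euler = spread(F_fsi * dV_l[:, None], X, h)   # Lagrangian -> Eulerian
    u_forced = u_star + dt * f_euler                # second prediction velocity
    return u_forced, F_fsi
```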
Readers are also referred to <cit.> for further details. §.§ Notes on the parallelisation and implementation The Eulerian mesh is decomposed using a 2D-pencil domain decomposition in the streamwise and spanwise directions. For that purpose, the library 2DECOMP & FFT <cit.> is used. The same library handles all of the transpose operations required for the Helmholtz and Poisson solvers based on the fast Fourier transforms. Regarding the parallelization of each particle/capsule, each processor can either be a master or a slave. A processor is labelled master if it contains most of the Lagrangian points describing the given particle, while those containing only some of these points are labelled as slaves. The rest of the processors do not have any role for the considered particle. Only the master processor has the information of the particle (e.g. Lagrangian points and their velocities) in its memory, though the slaves can access it for interpolation and spreading operations, which might require information from the neighbours. For the particle equations, the master is responsible for all the numerical procedures, e.g. the transformation using spherical harmonics. Such parallelization saves memory usage but requires communication between cores at each time step. In order to obtain accurate results, the density of Eulerian and Lagrangian grid points should be similar. In some cases, it is nonetheless necessary to have a very fine Eulerian mesh, thus requiring a very fine Lagrangian mesh. However, since the spherical harmonic calculations are costly, the overall computational time increases significantly. To make the code more efficient, two different sets of Lagrangian points are therefore considered: forcing points that are used for interpolation-spreading and the spherical-harmonic points that are used to define the shape of the cell and the elastic stresses. While the density of the forcing points has to be similar to that of the Eulerian points, fewer points are required for the spherical harmonics representation of the cell, especially in the case of stiffer membranes deforming less. At each time step, before computing the elastic forces, the spherical-harmonic points are obtained using spectral interpolation. These points are then used to compute the elastic forces and surface normal vectors. Once computed, the elastic forces and surface normal vectors are interpolated onto the finer mesh so that the elastic forces are spread on the Eulerian mesh. All spectral interpolations are done with the SPHEREPACK library <cit.>. § VALIDATIONS In order to validate the current implementation, two different cases are considered for a simple capsule: the deformation of a capsule in shear flow at low Reynolds number and the equilibrium position of a capsule in Poiseuille flow at finite Reynolds number. The domain is a box with the size 10×10×10 in units of cell radius, and the cell is located at its center. The Eulerian grid is 128^3 whereas for the Lagrangian mesh 24×48 points are chosen in the latitudinal and longitudinal directions, respectively. The non-dimensional bending stiffness G_B/ρ_o R^5γ̇^2 is zero unless otherwise mentioned. The dealiasing ratio is kept at M_SH/N_SH=2. A grid independence study has been carried out for Ca=0.6 by increasing the number of the Eulerian grid points and of spherical-harmonic grids by a factor of 1.5. In this case, the change in the deformation parameter is found to be less than 2%.
In addition, we also increased the box size by a factor of 1.5 and measured a difference in the deformation parameter below 0.3%. To check the time-step independence of the results, we decreased the time step by 50% and obtained a change in the deformation parameter of only 0.002%. Finally, the effect of the ratio between the forcing points and the number of spherical harmonics describing the membrane deformation is investigated using 1152 force points. When the ratio between the two is 4, the difference measured by the deformation parameter is less than 0.5% with respect to the case with ratio 1. Increasing the ratio from 1 to 2.25, the change is of about 0.1%. Here we used the more demanding ratio of 1 to be sure to capture all features of the cell shape for large deformations. §.§ Single cell in shear flow In this section, the deformation of a cell in a simple shear flow obtained with the present implementation is compared to the results by Pranay et al. <cit.> and Zhu et al. <cit.>. These authors have used boundary integral approaches to solve for the Stokes flow, so the Reynolds number is chosen here small enough, Re=0.1, to ensure low inertia of the flow. The cell deformation is quantified by the parameter D=|l_2-l_1|/(l_2+l_1), where l_1 and l_2 denote the major and minor semi-axes of the equivalent ellipsoid in the middle plane, respectively. The inertia tensor of the cell is used to obtain the equivalent ellipsoid as in <cit.>. As shown in figure <ref>, a good agreement is obtained with the different implementations. §.§ Capsule in channel flow at finite inertia The second validation case is provided by the equilibrium position of a cell in Poiseuille flow at finite Reynolds numbers. Indeed, after some transients, a single capsule reaches a steady state characterised by a constant wall-normal position, constant velocity and deformation. This equilibrium position is located between the wall and the channel centerline and depends on both the Reynolds and Capillary numbers <cit.>. In the present validation, the Capillary number is set to Ca = 0.174 and the Reynolds number is varied. Note that since the present bending model is different from that in the work of Kilimnik et al. <cit.>, we chose the value of the bending stiffness that best fits their data, B=0.02. The dependence of the wall-normal position and deformation of the capsule on the Reynolds number is reported in figures <ref>(a) and <ref>(b), respectively. As shown by these figures, a good agreement with the results in <cit.> is obtained. In figure <ref>(b), the difference appearing at Re=100 may be attributed to the model considered in <cit.>. These authors used a Hookean law with a lattice-spring model (LSM) where the membrane has a finite thickness, whereas in the present work we use the hyperelastic neo-Hookean model with an infinitely thin membrane. At small Reynolds numbers, when the deformation is relatively small, the last term on the right hand side of equation (<ref>) vanishes and our model practically reduces to a Hookean law: the two methods thus give very similar results. In the case of large deformations, the term introducing non-linearity in the constitutive model is no longer negligible and the results start to deviate from each other. § RESULTS §.§ Stiff nucleus As a starting point, the two-membrane model, based on the original IBM by Peskin <cit.>, is employed. In order to have a stiff nucleus, the capillary number of the nucleus is chosen to be 300 times smaller than that of the outer membrane.
The cell is subject to homogeneous shear as in the validation cases presented above, i.e. the same configuration and numerical parameters. Figure <ref> depicts the deformation parameter as a function of the Capillary number for viscosity ratios λ=1 and 5 in the absence of bending resistance, both for capsules with and without a stiff nucleus. Note that the deformation parameter is computed on the outer membrane, the inner one not being noticeably deformed. As shown in this figure, the presence of a nucleus reduces the deformation, and this reduction is larger the higher the Capillary number. The stiff nucleus reduces the outer membrane deformation since the minimum radius cannot be smaller than the radius of the nucleus. At higher Capillary numbers, the membrane would tend to deform more, thus making the effect of the nucleus more evident. It can be inferred from figure <ref> that the deformation is smaller for the cases with a more viscous fluid between the outer membrane and the nucleus, λ=5. In this case, viscous forces appear to work together with elastic forces to reduce the cell deformation. The transient evolution of the deformation parameter to reach the final state is demonstrated in figure <ref> for three different capillary numbers. The figure shows that larger capillary numbers require a longer time to reach the final steady state. As for the steady state, the deformation is larger for higher capillary numbers. The first two rows of figure <ref>(a) depict the steady shape of the cell for three different capillary numbers of the outer membrane and zero bending stiffness. Note that, in the first row, the cell considered has no nucleus. In the absence of a nucleus, the cell assumes an ellipsoidal shape, while it has a thicker middle part in the presence of the nucleus. Cells with a nucleus thus have a lower flexibility and may encounter more difficulties to pass through narrow vessels. The effect of bending stiffness on the deformation parameter is presented in figure <ref>(b). It can be observed that cells with bending stiffness deform less and the reduction measured by the deformation parameter increases with the Capillary number. This effect is documented by the shape of capsules with bending stiffness reported in the lowest panels of figure <ref>. Here, one can see that the deformation is reduced mainly on the edges of the capsule. Indeed, figure <ref> shows that the cell shape is closer to an ellipsoid when adding bending rigidity. The difference between the different cases shown in the figures demonstrates that the effect of the bending rigidity is not negligible in such conditions and should be accounted for to obtain more accurate predictions. For a number of microfluidic applications, it may be important to understand the effect of flow inertia on the deformation of the transported cells. We therefore report in figure <ref> the effect of increasing the Reynolds number on the deformation parameter. To prevent buckling, we considered a small bending stiffness in the simulations. It can be observed that when increasing the Reynolds number the steady-state deformation parameter first decreases for Re=1 and then increases (Re=5). The initial deformation rate is faster when increasing inertia. Note also some oscillations in the deformation for Re=5, as observed in previous studies. These can be attributed to the formation of a pair of vortices inside the cell, on the two sides of the nucleus, in the transient stage (see figure 9b).
Such vortices disappear at steady state but their formation and breakup result in oscillations of the cell membrane. §.§ Rigid nucleus As mentioned previously, two different models of the nucleus are considered. For the results in the present section, the nucleus is modeled as a rigid, non-deformable particle following a different implementation of the immersed boundary method, see above. Figure <ref> reports a comparison of the deformation for cells whose nucleus is modelled as a stiff membrane or as a rigid spherical particle, for viscosity ratio λ=1. The two methods produce similar results, although with a slightly larger deformation for cells with a rigid spherical nucleus. This fact can be related to the different nucleus rotation rates obtained with the two models. Indeed, for a rigid nucleus we assume no slip at the interface, whereas some slip is present at the surface of the nucleus if this is represented by an elastic membrane. Finally, we report that the computational time needed for the case of a capsule with a rigid nucleus is about 1.2 times that for a cell with a nucleus represented by a stiff membrane. Note that the simulations assuming a rigid nucleus have been performed by coupling together two different approaches, able to model deformable and rigid objects. This implementation opens the possibility of modeling new, more complicated structures, which will be investigated in the future. § CONCLUSIONS The deformation of a capsule containing a stiff nucleus in homogeneous shear flow is studied numerically using an immersed boundary method to account for the fluid-solid interaction. The neo-Hookean hyperelastic constitutive model is used to describe the cell membrane deformation while the fluid inside and outside each capsule is assumed to be Newtonian. The cell nucleus is modeled in two different ways, first as a second inner capsule with a significantly stiffer membrane and then as a spherical rigid particle using a different implementation of the immersed boundary method, most suited to solid objects <cit.>. In the immersed boundary method, a Lagrangian mesh is used to follow the deformation of the elastic membrane defining the capsule and an Eulerian mesh to solve the momentum equations. The shape of the membrane, its deformation and internal stresses are represented by means of spherical harmonics in order to have an accurate computation of the high-order derivatives of the membrane geometry. To save computational time in cases with very fine underlying Eulerian meshes, we have implemented the possibility of using a coarser Lagrangian mesh for the computation of the cell shape and stresses and a finer mesh for the communication of forces exchanged with the fluid. Spectral interpolation is employed to link the two Lagrangian representations of the geometry of the cell. Finally, to have the possibility to consider different viscosities inside and outside the cell, an indicator function is computed on the Eulerian mesh as the solution of a Poisson equation. The right hand side is obtained by spreading the normal vectors to the cell surface, known at the Lagrangian grid points, onto the Eulerian mesh. The viscosity is then taken to be a function of this indicator function. The accuracy of the code is validated against results pertaining to the deformation of capsules without a nucleus. In particular, we consider the inertialess flow of a capsule in homogeneous shear and transport in pressure-driven Poiseuille flow at moderate Reynolds numbers (finite inertia).
The behavior of cells with a stiff nucleus in homogeneous shear has then been investigated. The cell deformation parameter is reported for different Capillary numbers, two values of the viscosity ratio and with or without bending rigidity. We observe that the deformation is smaller for cells with a nucleus. Examining the shape of the cell, that with nucleus is thicker in the middle part making it less flexible to pass through narrow vessels. When also considering bending stiffness, we observe an even smaller deformation and the shape of the cell is more regular and closer to an ellipsoid. Finally, we have compared the results obtained by modeling the nucleus as a rigid particle, reporting small differences. We show also thatthe numerical approaches for rigid and deformable objects can coexist, which opens the possibility of modelling more complicated structures, e.g. small rigid cavities and obstacles in the flow <cit.>.The method presented here can be employed and extended in the future to study the behavior of cells in different and more complicated configurations, enabling us to extract qualitative and quantitative data about the maximum stress on the membrane. Possible extensions include the possibility to consider a vesicle <cit.>, i.e. an inextensible membrane, and a density contrast. Including cell-cell and cell-wall interactions in the numerical platform would allow us to study pair interactions of cells with nucleus in shear flow <cit.> and ultimately investigate dense suspensions of deformable objects <cit.>. § ACKNOWLEDGEMENTS This work was supported by the European Research Council Grant No. ERC-2013-CoG-616186, TRITOS and by the Swedish Research Council (VR). The authors acknowledge computer time provided by SNIC (Swedish National Infrastructure for Computing) and the support from the COST Action MP1305: Flowing matter. Dr. Lailai Zhu is thanked for the help with the implementation of the membrane module.plain 10adams1999spherepack John C Adams and Paul N Swarztrauber. Spherepack 3.0: A model development facility. Monthly Weather Review, 127(8):1872–1878, 1999.ardekani2016numerical Mehdi Niazi Ardekani, Pedro Costa, Wim-Paul Breugem, and Luca Brandt. Numerical study of the sedimentation of spheroidal particles. arXiv preprint arXiv:1602.05769, 2016.bannister2003ins Lawrence Bannister and Graham Mitchell. The ins, outs and roundabouts of malaria. Trends in parasitology, 19(5):209–213, 2003.breugem2012second Wim-Paul Breugem. A second-order accurate immersed boundary method for fully resolved simulations of particle-laden flows. Journal of Computational Physics, 231(13):4469–4498, 2012.caille1998assessment Nathalie Caille, Yanik Tardy, and Jean-Jacques Meister. Assessment of strain field in endothelial cells subjected to uniaxial deformation of their substrate. Annals of biomedical engineering, 26(3):409–416, 1998.caille2002contribution Nathalie Caille, Olivier Thoumine, Yanik Tardy, and Jean-Jacques Meister. Contribution of the nucleus to the mechanical properties of endothelial cells. Journal of biomechanics, 35(2):177–187, 2002.chang1993experimental Kuo-Shu Chang and William Lee Olbricht. Experimental studies of the deformation and breakup of a synthetic capsule in steady and unsteady simple shear flow. Journal of Fluid Mechanics, 250:609–633, 1993.chorin1968numerical Alexandre Joel Chorin. Numerical solution of the navier-stokes equations. Mathematics of computation, 22(104):745–762, 1968.cooke2001malaria Brian M Cooke, Narla Mohandas, and Ross L Coppel. 
The malaria-infected red blood cell: structural and functional changes. Advances in parasitology, 50:1–86, 2001.dodd2014fast Michael S Dodd and Antonino Ferrante. A fast pressure-correction method for incompressible two-fluid flows. Journal of Computational Physics, 273:416–434, 2014.fischer1977tank Th Fischer. Tank tread motion of red-cell membranes in viscometric flow-behavior of intracellular and extracellular markers (with film). Blood cells, 3(2):351–365, 1977.fischer1978red Thomas M Fischer, M Stohr-Lissen, and Holger Schmid-Schonbein. The red cell as a fluid droplet: tank tread-like motion of the human erythrocyte membrane in shear flow. Science, 202(4370):894–896, 1978.freund2010high JB Freund and H Zhao. A high-resolution fast boundary-integral method for multiple interacting blood cells. Computational Hydrodynamics of Capsules and Biological Cells, page 71, 2010.freund2007leukocyte Jonathan B Freund. Leukocyte margination in a model microvessel. Physics of Fluids (1994-present), 19(2):023301, 2007.gaehtgens1979motion P Gaehtgens, C Dührssen, and KH Albrecht. Motion, deformation, and interaction of blood cells and plasma during flow through narrow capillary tubes. Blood cells, 6(4):799–817, 1979.galbraith1998shear CG Galbraith, R Skalak, and S Chien. Shear stress induces spatial reorganization of the endothelial cell cytoskeleton. Cell motility and the cytoskeleton, 40(4):317–330, 1998.gao2011rheology Tong Gao, Howard H Hu, and Pedro Ponte Castañeda. Rheology of a suspension of elastic particles in a viscous shear flow. Journal of Fluid Mechanics, 687:209, 2011.gao2013dynamics Tong Gao, Howard H Hu, and Pedro Ponte Castañeda. Dynamics and rheology of elastic particles in an extensional flow. Journal of Fluid Mechanics, 715:573–596, 2013.goldsmith1972flow HL Goldsmith and Jean Marlow. Flow behaviour of erythrocytes. i. rotation and deformation in dilute suspensions. Proceedings of the Royal Society of London B: Biological Sciences, 182(1068):351–384, 1972.guilak1995compression Farshid Guilak. Compression-induced changes in the shape and volume of the chondrocyte nucleus. Journal of biomechanics, 28(12):1529–1541, 1995.guilak2000mechanical Farshid Guilak and Van C Mow. The mechanical environment of the chondrocyte: a biphasic finite element model of cell–matrix interactions in articular cartilage. Journal of biomechanics, 33(12):1663–1673, 2000.guo2016deformability Quan Guo, Simon P Duffy, Kerryn Matthews, Xiaoyan Deng, Aline T Santoso, Emel Islamzada, and Hongshen Ma. Deformability based sorting of red blood cells improves diagnostic sensitivity for malaria caused by plasmodium falciparum. Lab on a Chip, 16(4):645–654, 2016.huang2012three Wei-Xi Huang, Cheong Bong Chang, and Hyung Jin Sung. Three-dimensional simulation of elastic capsules in shear flow by the penalty immersed boundary method. Journal of Computational Physics, 231(8):3340–3364, 2012.ingber1990fibronectin Donald E Ingber. Fibronectin controls capillary endothelial cell growth by modulating cell shape. Proceedings of the National Academy of Sciences, 87(9):3579–3583, 1990.kan1999effects Heng-Chuan Kan, Wei Shyy, HS Udaykumar, Philippe Vigneron, and Roger Tran-Son-Tay. Effects of nucleus on leukocyte recovery. Annals of biomedical engineering, 27(5):648–655, 1999.kessler2008swinging S Kessler, R Finken, and U Seifert. Swinging and tumbling of elastic capsules in shear flow. Journal of Fluid Mechanics, 605:207–226, 2008.kilimnik2011inertial Alex Kilimnik, Wenbin Mao, and Alexander Alexeev. 
Inertial migration of deformable capsules in channel flow. Physics of Fluids (1994-present), 23(12):123302, 2011.kim2015inertial Boyoung Kim, Cheong Bong Chang, Sung Goon Park, and Hyung Jin Sung. Inertial migration of a 3d elastic capsule in a plane poiseuille flow. International Journal of Heat and Fluid Flow, 54:87–96, 2015.kruger2014interplay Timm Krüger, Badr Kaoui, and Jens Harting. Interplay of inertia and deformability on rheological properties of a suspension of capsules. Journal of Fluid Mechanics, 751:725–745, 2014.lac2005deformation Etienne Lac and Dominique Barthès-Biesel. Deformation of a capsule in simple shear flow: effect of membrane prestress. Physics of Fluids (1994-present), 17(7):072105, 2005.lashgari2014laminar Iman Lashgari, Francesco Picano, Wim-Paul Breugem, and Luca Brandt. Laminar, turbulent, and inertial shear-thickening regimes in channel flow of neutrally buoyant particle suspensions. Physical review letters, 113(25):254502, 2014.lashgari2016channel Iman Lashgari, Francesco Picano, Wim Paul Breugem, and Luca Brandt. Channel flow of rigid sphere suspensions: Particle dynamics in the inertial regime. International Journal of Multiphase Flow, 78:12–24, 2016.li20102decomp Ning Li and Sylvain Laizet. 2decomp & fft-a highly scalable 2d decomposition library and fft interface. In Cray User Group 2010 conference, pages 1–13, 2010.li2008front Xiaoyi Li and Kausik Sarkar. Front tracking simulation of deformation and buckling instability of a liquid capsule enclosed by an elastic membrane. Journal of Computational Physics, 227(10):4998–5018, 2008.lim2006mechanical CT Lim, EH Zhou, and ST Quek. Mechanical models for living cells–a review. Journal of biomechanics, 39(2):195–216, 2006.maniotis1997demonstration Andrew J Maniotis, Christopher S Chen, and Donald E Ingber. Demonstration of mechanical connections between integrins, cytoskeletal filaments, and nucleoplasm that stabilize nuclear structure. Proceedings of the National Academy of Sciences, 94(3):849–854, 1997.peskin2002immersed Charles S Peskin. The immersed boundary method. Acta numerica, 11:479–517, 2002.pozrikidis1995finite C Pozrikidis. Finite deformation of liquid capsules enclosed by elastic membranes in simple shear flow. Journal of Fluid Mechanics, 297:123–152, 1995.pozrikidis2001effect C Pozrikidis. Effect of membrane bending stiffness on the deformation of capsules in simple shear flow. Journal of Fluid Mechanics, 440:269–291, 2001.pozrikidis2010computational Constantine Pozrikidis. Computational hydrodynamics of capsules and biological cells. CRC Press, 2010. p. 89.pranay2010pair Pratik Pranay, Samartha G Anekal, Juan P Hernandez-Ortiz, and Michael D Graham. Pair collisions of fluid-filled elastic capsules in shear flow: Effects of membrane properties and polymer additives. Physics of Fluids (1994-present), 22(12):123103, 2010.ramanujan1998deformation S Ramanujan and C Pozrikidis. Deformation of liquid capsules enclosed by elastic membranes in simple shear flow: large deformations and the effect of fluid viscosities. Journal of Fluid Mechanics, 361:117–143, 1998.rodriguez2013review Marita L Rodriguez, Patrick J McGarry, and Nathan J Sniadecki. Review on cell mechanics: experimental and modeling approaches. Applied Mechanics Reviews, 65(6):060801, 2013.roma1999adaptive Alexandre M Roma, Charles S Peskin, and Marsha J Berger. An adaptive version of the immersed boundary method. 
Journal of computational physics, 153(2):509–534, 1999.rorai2015motion Cecilia Rorai, Antoine Touchard, Lailai Zhu, and Luca Brandt. Motion of an elastic capsule in a constricted microchannel. The European Physical Journal E, 38(5):1–13, 2015.schmid1969fluid Holger Schmid-Schönbein and Roe Wells. Fluid drop-like transition of erythrocytes under shear. Science, 165(3890):288–291, 1969.seol2016immersed Yunchang Seol, Wei-Fan Hu, Yongsam Kim, and Ming-Chih Lai. An immersed boundary method for simulating vesicle dynamics in three dimensions. Journal of Computational Physics, 322:125–141, 2016.skalak1969deformation R Skalak and PI Branemark. Deformation of red blood cells in capillaries. Science, 164(3880):717–719, 1969.skotheim2007red JM Skotheim and Timothy W Secomb. Red blood cells and other nonspherical capsules in shear flow: oscillatory dynamics and the tank-treading-to-tumbling transition. Physical review letters, 98(7):078301, 2007.swarztrauber2000generalized Paul N Swarztrauber and William F Spotz. Generalized discrete spherical harmonic transforms. Journal of Computational Physics, 159(2):213–230, 2000.uhlmann2005immersed Markus Uhlmann. An immersed boundary method with direct forcing for the simulation of particulate flows. Journal of Computational Physics, 209(2):448–476, 2005.unverdi1992front Salih Ozen Unverdi and Grétar Tryggvason. A front-tracking method for viscous, incompressible, multi-fluid flows. Journal of computational physics, 100(1):25–37, 1992.walter2001shear Anja Walter, Heinz Rehage, and Herbert Leonhard. Shear induced deformation of microcapsules: shape oscillations and membrane folding. Colloids and Surfaces A: Physicochemical and Engineering Aspects, 183:123–132, 2001.walter2010coupling J Walter, A-V Salsac, D Barthès-Biesel, and P Le Tallec. Coupling of finite element and boundary integral methods for a capsule in a stokes flow. International journal for numerical methods in engineering, 83(7):829–850, 2010.wu2013simulation Tenghu Wu and James J Feng. Simulation of malaria-infected red blood cells in microfluidic channels: Passage and blockage. Biomicrofluidics, 7(4):044115, 2013.zhang2015multiple Yao Zhang, Changjin Huang, Sangtae Kim, Mahdi Golkaram, Matthew WA Dixon, Leann Tilley, Ju Li, Sulin Zhang, and Subra Suresh. Multiple stiffening effects of nanoscale knobs on human red blood cells infected with plasmodium falciparum malaria parasite. Proceedings of the National Academy of Sciences, 112(19):6068–6073, 2015.zhao2010spectral Hong Zhao, Amir HG Isfahani, Luke N Olson, and Jonathan B Freund. A spectral boundary integral method for flowing blood cells. Journal of Computational Physics, 229(10):3726–3744, 2010.zhu2015motion Lailai Zhu and Luca Brandt. The motion of a deforming capsule through a corner. Journal of Fluid Mechanics, 770:374–397, 2015.zhu2015dynamics LaiLai Zhu, Jean Rabault, and Luca Brandt. The dynamics of a capsule in a wall-bounded oscillating shear flow. Physics of Fluids (1994-present), 27(7):071902, 2015.zhu2014microfluidic Lailai Zhu, Cecilia Rorai, Dhrubaditya Mitra, and Luca Brandt. A microfluidic device to sort capsules by deformability: a numerical study. Soft matter, 10(39):7705–7711, 2014. | http://arxiv.org/abs/1705.09338v1 | {
"authors": [
"Arash Alizad Banaei",
"Jean-Christophe Loiseau",
"Iman Lashgari",
"Luca Brandt"
],
"categories": [
"physics.flu-dyn",
"physics.comp-ph"
],
"primary_category": "physics.flu-dyn",
"published": "20170525193940",
"title": "Numerical simulations of elastic capsules with nucleus in shear flow"
} |
Instytut Fizyki imienia Mariana Smoluchowskiego, Uniwersytet Jagielloński, ulica Profesora Stanisława Łojasiewicza 11 PL-30-348 Kraków, Poland Instytut Fizyki imienia Mariana Smoluchowskiego, Uniwersytet Jagielloński, ulica Profesora Stanisława Łojasiewicza 11 PL-30-348 Kraków, Poland Mark Kac Complex Systems Research Center, Uniwersytet Jagielloński, ulica Profesora Stanisława Łojasiewicza 11 PL-30-348 Kraków, Poland 03.75.Lm, 03.75.Hh, 42.65.Tg The Schrödinger equation for a Bose gas with repulsive contact interactions in one-dimensional space may be solved analytically with the help of the Bethe ansatz if we impose periodic boundary conditions. It was shown that for such a system there exist many-body eigenstates directly corresponding to dark soliton solutions of the mean-field equation. The system is still integrable if one switches from the periodic boundary conditions to an infinite square well potential. The corresponding eigenstates were constructed by M. Gaudin. We analyze the weak interaction limit of Gaudin's solutions and identify the parametrization of eigenstates strictly connected with single and multiple dark solitons. Numerical simulations of measurements of particle positions reveal dark solitons in the weak interaction regime and their quantum nature in the presence of strong interactions. Quantum dark solitons in Bose gas confined in a hard wall box Krzysztof Sacha December 30, 2023 ============================================================= § INTRODUCTION Self-reinforcing solitary solutions of non-linear wave equations maintaining their shape during time evolution are called solitons and appear in various fascinating phenomena <cit.>. Solitons are particularly investigated in non-linear optics and ultra-cold atomic gases. In the latter case bosons may form a Bose-Einstein condensate (BEC) where all atoms occupy the same single-particle state and the many-body wave function factorizes into the product of identical single-particle states <cit.>. In the presence of inter-particle interactions the product state describes atoms living in an averaged potential coming from the milieu of other identical particles (mean-field description). The behaviour of the single-particle state is determined by the Gross-Pitaevskii equation (GPE) which has analytical bright and dark soliton solutions in one-dimensional (1D) space for attractively and repulsively interacting systems, respectively <cit.>. The GPE gives a very accurate description of the solitons realized experimentally so far <cit.>. Theoretical investigations of the quantum nature of dark solitons, i.e. properties which go beyond the standard Bogoliubov corrections <cit.>, employ the phase imprinting method <cit.>— starting with the system in the ground state one can carve a soliton by shining a short laser pulse on half of the atomic cloud. Sufficiently long time evolution reveals the quantum character of the dark soliton when the soliton position fluctuates on a length scale which can be much greater than the soliton width. Hence, the soliton location has to be considered as a quantum degree of freedom described by a probability density whose standard deviation increases with time <cit.>. It is not easy to observe the many-body effects experimentally because it requires a relatively small number of particles in the system in order to reduce atomic losses that are able to kill the quantum character of the solitons <cit.>.
In the field of ultra-cold atomic gases the experimental techniques evolve very rapidly giving an opportunity to investigate systems for which the many-body effects play a key role <cit.>.The Schrödinger equation for some quantum many-body systems possesses analytical solutions. Generally, it happens for systems in lower dimensions when one can use a brilliant concept of the Bethe ansatz <cit.>. It turns out that the Bethe ansatz approach is crucial in the analysis of bosonic and fermionic 1D systems interacting via point-like δ-potentials <cit.>. Non-relativistic ultra-cold bosons in 1D space with contact interactions are described by renowned Lieb-Liniger model <cit.>. One can expect dark soliton solutions in the presence of repulsive inter-particle interactions. In contrast to attractive interactions for which bright soliton solution corresponds to the lowest energy state, dark solitons have to be a reflection of excitations. Furthermore, if the system satisfies periodic boundary conditions all energy eigenstates are also eigenstates of the unitary operator that translates all particles by the same distance. Hence, they satisfy translation symmetry which is broken in the case of the mean-field soliton solutions. For those reasons the identification of dark soliton-like many-body eigenstates was particularly difficult. The conjecture and various evidences that the eigenstates belonging to the so-called type II excitation spectrum of the Lieb-Liniger model are strongly connected with the mean-field soliton solutions can be found in the literature <cit.>. Especially, it has been shown that dark soliton signatures are visible in the reduced single-particle density if the system is prepared in a proper superposition of the eigenstates coming from the second branch of the excitation spectrum <cit.>. Ultimately, it was demonstrated that dark soliton signatures emerge in the course of measurement of particle positions if the system is prepared initially in a type II eigenstate <cit.>.In the presence of hard-wall boundary conditions the system does not possess the translation symmetry but the question if there are many-body eigenstates directly corresponding to mean-field dark soliton solutions in this case is still open. It is clear that in the limit of infinitely weak repulsion, one can identify many-body eigenstates that reveal density notches present in plots of the single-particle probability densities. Genuine solitons may appear if the inter-particle interactions are turned on and solutions of the GPE reveal phase flips and density notches whose widths are much smaller than the size of the hard-wall box. The goal of the present work is to show that there are many-body eigenstates that correspond directly to the solitonic structures.We will show that the phase flips and signatures of the density notches can be observed for arbitrary strength of the repulsive particle interaction. Therefore, in order to simplify the nomenclature in the present manuscript this class of states we will be sometimes dubbeddark solitons even in the nearly non-interacting and strongly interacting cases.§ LIEB-LINIGER MODEL Ultra-cold system consisting of N bosons with repulsive contact interactions in the 1D space may be described by the following Lieb-Liniger Hamiltonian <cit.> H=∫_0^Ldx[∂_xψ̂^†∂_xψ̂+cψ̂^†ψ̂^†ψ̂ψ̂],where ψ̂ is the canonical Bose field operator and L is the system size. The units have been chosen so that 2m=ħ=1 where m is the particle mass. 
The dimensionless parameterγ=c/n,reflects the strength of the interactions. The coupling constant c>0 and n=N/L denotes the average density of particles. One deals with the weak interaction limit when γ≪1 and strongly interacting impenetrable bosons for γ≫ 1<cit.>.§.§ Gaudin's solution in the presence of hard walls The Hamiltonian (<ref>) with periodic boundary conditions has analytical solutions that can be found with thehelp of the Bethe ansatz <cit.>. The Lieb-Liniger eigenstates are parametrized by a collection {k} of N real (if c>0) numbers called quasimomenta <cit.>. It is clear that the integrability of the model can be easily broken by switching on a trapping potential. However, the model is still integrable in the presence of hard-wall boundary conditions. The solutions were constructed by M. Gaudin <cit.>. Assuming that 0≤ x_1 ≤ x_2 ≤…≤ x_N ≤ L we have to fulfil two conditionsΨ(x_1= 0, x_2, …, x_N) =Ψ(x_1, x_2, …, x_N=L) = 0. Firstly, using McGuire's optical analogy <cit.>, one constructs solutions in the semi-infinite 1D space x_i ≥ 0 vanishing at x_1=0. The solutions turn out to be superpositions of elementary wavesand take the following formΨ({x},{q}) =∑_σ∈𝒮_N∑_{ϵ}ϵ_1 ϵ_2 ···ϵ_Nexp[ i ∑_s=1^N q_σ(s) x_s] ×∏_i<j( 1-i c/q_i+q_j)( 1+i c/q_σ(i)-q_σ(j)),where ϵ_i = ± 1, q_i=ϵ_i|k_i| (with 0<|k_1|<… < |k_N|). The first sum has to be taken over all possible N-element combinations of ±1. Thesecond sum refers to all permutations σ from the permutation group 𝒮_N. Therefore, calculation of a single value of (<ref>) requires summation over 2^N N! elements <cit.>.Secondly, imposing the vanishing of the wave function at x_N=L one obtains elegant system of coupled equationsL k_i = π n_i +∑_j=1 j≠ i ^N[ arctanc/k_i - k_j +arctanc/k_i + k_j],called Gaudin's equations where integer numbers n_i are parameters that define an eigenstate. In the case of repulsive inter-particle interactions the system of Gaudin's equations (<ref>) has unique real solutions k_i<cit.>. In order to deal with admissible physically different solutions one considers only k_i=1,… , N>0 and the set of integers {n } where 1≤ n_1≤ n_2 ≤…≤ n_N<cit.>. By construction the set {k} determines the energy of the eigenstate (<ref>) <cit.>E_{k}=∑_j=1^Nk_j^2.We would like to stress that even if two or more n_i parameters are equal, the resulting solutions k_i of (5) are always distinct.§.§ Collective soliton-like excitationsFor the non-interacting system (c→ 0^+) the solutions (<ref>) reduce to superposition of products of sine functions (see Appendix) Ψ({x},{k})c→ 0^+∝∑_σ∈𝒮_N∏_s=1^nsin(k_σ(s) x_s),with k_j c→ 0^+⟶π n_j /L for n_j= 1, 2, 3, … (negative values are physically equivalent to non-negative ones). Obviously, they coincide with the well known solutions of the problem of non-interacting particles in an infinite square well potential. If we choose identical integer parameters n_i=j in the Gaudin's equation (<ref>), all elements of the set {k} approachπ j /L in the limit c→ 0^+ but they are always slightly different if c is not strictly equal to 0. In the non-interacting case of Bose gas confined in the 1D box of length L the solutions resembling solitons correspond to product states where all particles occupy the same excited eigenstate of the single-particle problem, Φ_sol^j-1({x}_N)∝∏_s=1^Nsin( π j x_s/L),where j=2,3,…. It is clear that the wave function (<ref>) reveals j-1 density notches and phase flips and thus resembles signatures of j-1 dark solitons. When j=1 it reproduces the ground state. 
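In practice, for a chosen set of integers {n_i}, the quasimomenta {k} can be obtained by iterating the Gaudin equations (<ref>) numerically. A minimal sketch of such a solver is given below; the damped fixed-point iteration and the tiny symmetry-breaking shift of the non-interacting initial guess are pragmatic choices of this example (not part of Gaudin's construction), and convergence for very large c or N is not guaranteed.

```python
import numpy as np

def gaudin_quasimomenta(n_int, c, L, tol=1e-12, max_iter=100000):
    """Solve L k_i = pi n_i + sum_{j != i} [arctan(c/(k_i-k_j)) + arctan(c/(k_i+k_j))]
    by damped fixed-point iteration for repulsive interactions (c > 0)."""
    n_int = np.asarray(n_int, dtype=float)
    k = np.pi * n_int / L                       # non-interacting starting guess
    if c == 0.0:
        return k
    k = k + 1e-6 * np.arange(1, len(k) + 1)     # break exact degeneracies of the guess
    for _ in range(max_iter):
        diff = k[:, None] - k[None, :]
        summ = k[:, None] + k[None, :]
        np.fill_diagonal(diff, np.inf)          # exclude the j = i terms
        np.fill_diagonal(summ, np.inf)
        rhs = np.pi * n_int + np.sum(np.arctan(c / diff) + np.arctan(c / summ), axis=1)
        k_new = 0.5 * k + 0.5 * rhs / L         # damping improves convergence
        if np.max(np.abs(k_new - k)) < tol:
            return k_new
        k = k_new
    raise RuntimeError("fixed-point iteration did not converge")

# e.g. the first collective excitation (all n_i = 2) for N = 6, L = 1:
# k = gaudin_quasimomenta([2] * 6, c=1.0, L=1.0);  E = np.sum(k ** 2)
```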
In the c→ 0^+ limit the eigenstates (<ref>) reduce to (<ref>) for the following collections of the integer numbers in (<ref>) 𝒯(j): n_1=n_2=… =n_N=j.We believe that in the presence of particle interactions there is a range of the system parameters where the eigenstates (4), with quasimomenta {k} parametrized by (9), not only resemble but reveal dark solitons unambiguously.Looking at the expression (<ref>) we notice that 𝒯(1+s) corresponds directly to s-fold collective excitation of the aforementioned non-interacting system. For convenience, we will use this terminology also in the presence of inter-particle repulsion. The excitation given by 𝒯(j) will be dubbed as the first collective excitation for j=2 and the higher (j-1)th collective excitation for j>2.§.§ Numerical method Properties of the many-body eigenstates Ψ({x},{q}) are hidden in the structure of terms in (<ref>) whose number dramatically grows with increasing number of particles N. Hence, despite the fact that one has the analytical solutions it is not easy to understand their features. In order to study the eigenstates of the many-body system we will simulate measurements of particle positions. In order to simulate results of the measurements one has to choose randomly positions of N particles according to the N-dimensional probability density. In practice it is often done sequentially: one by one particle at each step calculating conditional probability density for a choice of a next particle <cit.>. This procedure is very difficult in the present case.Instead, we follow an equivalent idea of usage of the Monte Carlo algorithm of Metropoliset al.<cit.>. As in Refs. <cit.> we perform Markovian walk in the configuration space. That is, by samplingN-dimensional probability distribution |Ψ(x_1,…,x_N,{q})|^2 we generate collections of sets 𝒳={x_1,…,x_N} of particles' positions. The procedure is based on step by step acceptance of sets 𝒳'with probabilityp= min(1,|Ψ(𝒳')|^2/|Ψ(𝒳)|^2), where 𝒳 is the previously accepted set of particles' positions.Although the method is very efficient, only a few body systems are numerically attainable. In the studies of the Lieb-Liniger model with periodic boundary conditions we were able to investigate 8-particle system <cit.>. There, number of terms in the Bethe ansatz solutions increases like N! with an increase of the total number of particles N. In the present casewe need to use the Gaudin's solution (<ref>) where the number of terms proliferates 2^N times faster than in the previously used Bethe ansatz solutions. Hence, in order to collect reliable statistics, we perform simulations for the system consisting of N=6 and 7 bosons only.§ THE ANALYSIS OF COLLECTIVELY EXCITED MANY-BODY EIGENSTATESThe ground and collectively excited states ofa Bose system are usually describedwith the help of the mean-field approximation if we deal with the weak interaction regime (γ≪ 1). The assumption that all particles occupy the same single-particle state leads to GPE <cit.>. Analytical solutions of GPE in the presence of the hard wall box potential exist and are given by Jacobi elliptic functions <cit.>. If L≫ξ, where ξ=1/√(cn) is the so-called healing length <cit.>, the analytical solution related to a dark soliton can be approximated by the following function <cit.>ψ(x)={[-√(n)tanh(x/ξ) for 0≤ x≪ L/2-ξ; √(n)tanh(x-L/2/ξ) for ξ≪ x≪ L-ξ; √(n)tanh(L-x/ξ)for L/2+ξ≪ x≤ L. ]. The healing length ξ describes a typical distance over which the condensate wave function forgets about, e.g.,boundaries of the system. 
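The approximate profile above is straightforward to evaluate numerically; a minimal sketch is given below. The matching of the three tanh pieces at L/4 and 3L/4 is merely a convenience of this example (any matching points far from both the walls and the notch are equivalent when L ≫ ξ).

```python
import numpy as np

def dark_soliton_profile(x, L, n, xi):
    """Piecewise tanh approximation of the mean-field dark soliton in a box,
    valid for L >> xi; the pieces agree up to exponentially small corrections
    where they are joined (chosen here at L/4 and 3L/4)."""
    x = np.asarray(x, dtype=float)
    return np.where(x < L / 4, -np.sqrt(n) * np.tanh(x / xi),
           np.where(x < 3 * L / 4, np.sqrt(n) * np.tanh((x - L / 2) / xi),
                    np.sqrt(n) * np.tanh((L - x) / xi)))

# density notch of width ~xi at L/2 and healing over ~xi at both hard walls, e.g.
# x = np.linspace(0.0, 1.0, 2001); rho = dark_soliton_profile(x, 1.0, 7.0, 0.05) ** 2
```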
Therefore, in a finite system with hard walls positioned at the boundaries we expect disturbance of the particle density close to the boundaries on the same length scale as the width of the dark soliton notch. When quantum many-body effects are taken into account, one may expect that the position of the dark soliton starts fluctuating in different realizations of the particle positions measurements. The fluctuations are the more significant the stronger interactions are present <cit.>. Hence, we expect that the averaged particle density ρ(x), i.e. averaged over many realizations of the measurement process, should reveal the shallower dark soliton notches the stronger repulsion is. §.§ First collective excitation The signatures of single dark soliton are expectedto be observed in the weak interaction limit whenone prepares the system in the eigenstate (<ref>) choosing the parameters in (<ref>) so that ∀_i=1,… N: n_i = 2.We have performed numerical simulations of the particles positions measurement for 6- and 7-body systems confined in a box of lengthL=1. Single realization of the detection process produces a sequence of N positions only. Therefore, in order to investigate averaged particle density, we have repeated the measurement procedure starting with the same many-body eigenstate many times. Collecting all results of the simulations in a histogram allows us to look at the average particle density. In panels (a) and (c) of Fig. <ref> we present histograms of the obtained particle positions in many realizations of the detection process for N=6 and 7 and for a wide range of the interaction strength (γ =0.01, 1, 100). Smearing of the density notch with increasing γ is clearly visible at the center of the box. It is caused by the fact that the soliton position varies randomly from realization to realization and the range of the fluctuations is the larger the stronger interactions are <cit.>.In a single realization of the measurement process one obtains a set of particle positions {x_1,…,x_N}. Let us choose from this set arbitrary N-1 positions and consider the eigenstate (<ref>),parametrized by the set of n_i that satisfy (<ref>) with j=2, as a single-particle wave function of the last remaining particle x_i=x,ψ(x)∝Ψ(x_1,…,x_i-1,x,x_i+1…,x_N).It turns out that the wave function ψ(x) reveals density notch and a phase flip at the center of the notch.We identify the position of the phase flip with the position of the dark soliton.The distributions of the phase flip positions in the case of different interaction strengths are depicted in Fig. <ref> for N=6 (b) and N=7 (d). Additionally, in Fig. <ref>(e), we present the quantity F≡ρ(L/2), which measures the degree of filling of the notch observed in the averaged particle densities ρ(x) versus γ parameter in the case of N=7. The degree of filling F saturates already for γ≈ 5. For strong interactions (γ = 100) one can also observe N+1 oscillations in the histograms, cf. panels (a) and (c) of Fig. <ref>. In the ground state case and for a small particle number one can expect N local maxima in the average particle density that correspond to average locations of N bosons with strong repulsive interactions — the density should be identical to the density of non-interacting fermions when γ→∞ (Tonks-Girardeau limit) <cit.>. In Fig. 
<ref> we see one local maximum more which seems to be related to the fact that if there is a phase flip between neighbouring particles, then their relative distance is modified and on average it results in an additional oscillation in the density profile. We expect that the oscillations can be visible for small N only and for large particle number they will become negligible, i.e. the averagedparticle density profile will be almost flat (except the edges of the system).The distributions of positions of the phase flip in the case of strong repulsion (γ = 100) also reveal oscillations [dotted histograms Fig. <ref>(b) and (d)]. We note that the positions of local maxima of such distributions roughly coincide with positions of local minima of corresponding averaged particle densities (dotted histograms in Fig. <ref>(a) and (b), respectively). Moreover, dealing with even (odd) number of particles we observe that in the presence of strong inter-particle interactions the distribution of the phase flip positions has the minimum (maximum) in the center of the box x=L/2. The presence of the oscillations and the correlations between positions of the maxima in the averaged particle densities and positions of the minima in the distributions of phase flip positions results from the fact that for large γ at a space point where there is greater probability to observe a particle, there must be smaller probability to find the phase flip. The significant quantum many-body effects appear for γ>0.1 but they do not destroy the signatures of solitons like the density notch and phase flip in single realizations of the detection process. Especially the phase flip can be clearly observed even in the very strong interaction regime (γ≫ 1). In Fig. <ref> we show probability density and phase of the wave function (<ref>) for the last 7th particle (provided 6 particles have been already detected) in the 7-particle system obtained in a single realization of the measurement for γ=1 and 100. One easily notices that every measurement leaves its mark on the wave function. In the Fig. <ref>(a) we observe 6 slight incisions in the profile of the probability density for γ=1. The stronger inter-particle repulsion is, the deeper incisions are observed. In the case of strong repulsion (γ=100) we note that the probability density is essentially nonzero only in regions away from the measured positions of particles. The reason is the strong repulsion does not allow for detection of two particles close to each other. Therefore, it is very difficult to establish where is the phase flip of the wave function (<ref>) by looking at the profile of the probability density onlyif γ≫ 1— compare Fig. <ref>(a) and Fig. <ref>(b).In the non-interacting case the many-body eigenstate that we analyse here reduces to a simple product state (<ref>) with j=2. Then, the parity symmetry, which is fulfilled by the Hamiltonian (<ref>), becomes apparent because such a state is also an eigenstate of the operator that transforms each x_i in the following way (x_i-L/2)→ -(x_i-L/2). The resulting average particle density vanishes at the centre of the box, cf. Fig. <ref>. Average particle densities related to γ=1 and γ=100 do not possess this property but it is not in contradiction with the parity symmetry. 
Indeed, the interactions between particles introduce coupling between states where all particles occupy anti-symmetric modes with states where some even number of particles occupy symmetric modes — superposition of such states is also an eigenstate of the parity operator.§.§ Higher collective excitations Increasing the number j in the eigenstate parametrization (<ref>) one expects to observe increasing number of notches that appear in averaged particle densities for weak and intermediate interaction strength. We performed numerical simulations of particles detection for N=7 in the system prepared initially in the eigenstates parametrized by n_i numbersthat fulfil (<ref>) with j=3 and j=4.The results confirm that there are j-1 notches in the average probability densities for γ=0.01 and 1, see Fig. <ref>.Analysis of single measurement realizations reveals j-1 phase flips in the wave function (<ref>) for the last particle in the system. The phase flips are key signatures of dark solitons for the intermediate interactions and indicates where the solitons are localized in single realizations. The phase flips are also present in the strong interaction regime.Similarly to the case of the first collective excitation, considered in the previous subsection, in the presence of intermediate and strong inter-particle repulsion the density notches are blurred because of fluctuations of positions of the phase flips. We also observe oscillations in the averaged particle densities for γ=100, see Fig. <ref>. One notices that the number of the oscillations is equal to N+j-1, i.e. the number of particles plus the number of phase flips.In the strong interaction case and for small N the average particle density related to the ground state would reveal N local maxima. The presence of a phase flip between a pair of particles modifies its relative distance. It turns out that the presence of j-1 phase flips between different neighbouring particles results in j-1 additional local maxima in the average particle density as compared to the ground state case.In the case of the eigenstateparametrized by n_i numbers that fulfill(<ref>) with j=3, the distribution of distances between positions of two phase flips is obviously concentrated at L/3≈ 0.33 in the non-interacting case, cf. Fig. <ref>. For weak and intermediate interactions it is still localized around L/3 but its width increases with γ. This can be explained by the observation that the wave function (<ref>) must drops at the edges on a length scale similar to half of the width of the density notches. If we now imagine that we merge the edges of the box, the shape of the modulus squared of the wave function (<ref>) resembles not 2 but 3 solitons on a ring. Thus the expected mean separation between solitons should be approximately equal to L/3 but due to fluctuations of the soliton positions the width of the distribution increases with γ.For γ→∞ we approach the Tonks-Girardeau regime where impenetrable bosons tend to localize at L/N distances one from each other. Then, the phase flips are expected to be located half the way between two neighboring particles. Thus, the distributions of distances between two phase flips may reveal peaks at integer multiple of L/N. In Fig. <ref> we show the distribution for γ=100 where the peaks structure emerges. Similar structures were also observed in the case of periodic boundary conditions <cit.>. § CONCLUSIONS We have consideredgas of bosons interacting via point-like δ-potential confined in a one-dimensional hard wall box. 
Eigenstates of the system were constructed analytically by M. Gaudin and are parametrized by a set of positive integers {n}. We show that Gaudin's solutions, in the limit of infinitely weak interactions, factorize into simple eigenstates of non-interacting particles in the box. In this case each number n_i corresponds simply to the quantum number of an excited eigenstate of a single particle in the square well potential. If all numbers n_i are equal to the same integer j>1, the average particle density for infinitely weak interactions resembles j-1 dark-soliton-like notches. Genuine dark solitons may appear only for non-vanishing inter-particle interactions. In this case the numbers {n} do not have a clear interpretation and should be treated just as parameters that define a many-body eigenstate uniquely. Nevertheless, the eigenstates parametrized by ∀_i n_i = j do correspond to j-1 dark solitons. In order to show this we have performed numerical simulations of measurements of particle positions. It turns out that the wave function before the measurement of the last particle always possesses dark soliton signatures. That is, there are j-1 phase flips of the wave function and j-1 probability density notches for weak and intermediate interactions. The small numbers of particles (N=6,7) we consider do not allow us to fulfil the conditions γ≪ 1 and ξ≪ L. Hence, the shape of the density notches does not reproduce a hyperbolic tangent function as in Eq. (<ref>). For strong interactions, the positions of the phase flips fluctuate strongly from one realization of the measurement process to another, which results in smearing of the notches in the average particle density. We have also investigated the relative distance between neighboring phase flips in the case of j=3. The results show that, depending on the interaction strength, there are specific distances at which the phase flips preferentially localize. § ACKNOWLEDGMENTS The authors express their sincere gratitude to D. Delande for fruitful discussions. Support of the National Science Centre, Poland via projects No. 2016/21/B/ST2/01095 (A.S.) and No. 2015/19/B/ST2/01028 (K.S.) is acknowledged. A.S. acknowledges support in the form of a special scholarship from the Marian Smoluchowski Scientific Consortium "Matter-Energy-Future", from the KNOW funding. This work was performed with the support of the EU via the Horizon 2020 FET project QUIC (nr. 641122). § APPENDIX Let us consider the Gaudin equations for a 2-particle system with identical numbers n_1 = n_2 in (<ref>). Subtracting the two equations one obtains L δ k = 2 arctan(c/δ k), where δ k = k_2 - k_1. Taking the tangent of both sides and using the series expansion tan(x) = x + O(x^3), one notices that in the limit c→ 0^+, (L/2)(δ k)^2 ≈ c ⟹ lim_c→ 0^+ c/δ k = 0. This last result also holds in the case of an N-particle system parametrized by s≤ N identical numbers n_j.
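Before moving on, note that this limiting behaviour is easy to verify numerically. The sketch below (Python, with L = 1 as elsewhere in the paper) solves the two-particle relation L δk = 2 arctan(c/δk) by bisection for a few small values of c and confirms that δk ≈ √(2c/L) while c/δk → 0.

```python
import numpy as np

L = 1.0

def solve_dk(c, lo=1e-12, hi=10.0, tol=1e-14):
    """Bisection for the root of  L*dk - 2*arctan(c/dk) = 0."""
    f = lambda dk: L * dk - 2.0 * np.arctan(c / dk)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

for c in [1e-2, 1e-4, 1e-6]:
    dk = solve_dk(c)
    print(f"c={c:.0e}:  dk={dk:.3e}  ~ sqrt(2c/L)={np.sqrt(2 * c / L):.3e},  c/dk={c / dk:.3e}")
```

As expected, c/δk shrinks like √c, so it indeed vanishes in the c→ 0^+ limit even though δk itself vanishes more slowly.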
It obviously coincides with the statement that physically relevant solutions may always be ordered such that 0<k_1<k_2<… <k_N<cit.>.Using the fact that δ k vanishes slower than c in the limit c→ 0^+ the equation (<ref>) reduces to lim_c→ 0^+Ψ({x},{q}) = ∑_σ∈𝒮_N,{ϵ}_Nϵ_1···ϵ_Nexp[ i ∑_s=1^N q_σ(s) x_s].Now we want to show the following identity ∑_σ∈𝒮_N∑_{ϵ}_Nϵ_1 ···ϵ_Nexp[ i ∑_s=1^Nϵ_σ(s) k_σ(s) x_s]= ∑_σ∈𝒮_N∏_s=1^N( e^i k_σ(s) x_s -e^-i k_σ(s) x_s).It is clear that it is enough to show the identity for a single arbitrary permutation σ,∑_ {ϵ}_Nϵ_1 ···ϵ_Nexp[ i ∑_s=1^Nϵ_σ(s) k_σ(s) x_s]= ∏_s=1^N( e^i k_σ(s) x_s -e^-i k_σ(s) x_s).In general, we can change positions of epsilons (or equivalently permute indices) in the following way ϵ_1ϵ_2···ϵ_N = ϵ_3ϵ_1···ϵ_N···ϵ_5 = ϵ_σ(1)ϵ_σ(2)···ϵ_σ(N). Now, the equality (<ref>) can be shown using the fact that ∑_ {ϵ}_Nϵ_1 ···ϵ_Nexp[ i ∑_s=1^Nϵ_σ(s) k_σ(s) x_s] = ( ∑_ϵ_σ(1)ϵ_σ(1)e^iϵ_σ(1) k_σ(1)x_1) ( ∑_ϵ_σ(2)ϵ_σ(2)e^iϵ_σ(2) k_σ(2)x_2)···( ∑_ϵ_σ(N)ϵ_σ(N)e^iϵ_σ(N) k_σ(N)x_N) , which ends the proof because every single term of the product, ∑_ϵ_σ(r)ϵ_σ(r)e^iϵ_σ(r) k_σ(r)x_r = e^i k_σ(r)x_r -e^-i k_σ(r)x_r =2isin(k_σ(r)x_r).Hence, in the considered limit, the equation (<ref>) can be rewritten in terms of sine functions lim_c→ 0^+Ψ({x},{k})=(2i )^N ∑_σ∈𝒮_N∏_s=1^nsin(k_σ(s) x_s ). 99KivsharOpticalSol Y. S. Kivshar and G. P. Agrawal,Optical Solitons, Academic Press, An imprint of Elsevier Science, San Diego, California, 2003.pethicksmith C. Pethick and H. Smith,Bose-Eistein condensation in dilute gases (Cambridge University Press, Cambridge, England, 2002).burger1999 S. Burger, K. Bongs, S. Dettmer, W. Ertmer, K. Sengstock, A. Sanpera, G. V. Shlyapnikov, and M. Lewenstein, Phys. Rev. Lett.83, 5198 (1999).denschlag2000 J. Denschlag, J. E. Simsarian, D. L. Feder, Charles W. Clark, L. A. Collins, J. Cubizolles, L. Deng, E. W. Hagley, K. Helmerson, W. P. Reinhardt, S. L. Rolston, B. I. Schneider, and W. D. Phillips, Science287, 97 (2000).strecker2002 K. E. Strecker, G. B. Partridge, A. G. Truscott, and R. G. Hulet, Nature417, 150 (2002).khaykovich2002 L. Khaykovich, F. Schreck, G. Ferrari, T. Bourdel, J. Cubizolles, L. D. Carr, Y. Castin, and C. Salomon, Science296, 1290 (2002).becker C. Becker, S. Stellmer, P. Soltan-Panahi, S. Dörscher, M. Baumert, E.-M. Richter, J. Kronjäger, K. Bongs, and K. Sengstock, Nature Physics4, 496 (2008).Stellmer2008 S. Stellmer, C. Becker, P. Soltan-Panahi, E.-M. Richter, S. Dörscher, M. Baumert, J. Kronjäger, K. Bongs, and K. Sengstock, Phys. Rev. Lett.101, 120406 (2008).Weller2008 A. Weller, J. P. Ronzheimer, C. Gross, J. Esteve, M. K. Oberthaler, D. J. Frantzeskakis, G. Theocharis, and P. G. Kevrekidis, Phys. Rev. Lett.101, 130401 (2008).Theocharis2010 G. Theocharis, A. Weller, J. P. Ronzheimer, C. Gross, M. K. Oberthaler, P. G. Kevrekidis, and D. J. Frantzeskakis, Phys. Rev. A81, 063604 (2010).dziarmaga2004 J. Dziarmaga, Phys. Rev A.70, 063616 (2004).Mishmash2009_1 R. V. Mishmash and L. D. Carr, Phys. Rev. Lett.103, 140403 (2009). Mishmash2009_2 R. V. Mishmash, I. Danshita, Charles W. Clark, and L. D. Carr, Phys. Rev. A80, 053612 (2009).Dziarmaga2010 J. Dziarmaga, P. Deuar, and K. Sacha, Phys. Rev. Lett.105, 018903 (2010).Mishmash2010 R. V. Mishmash and L. D. Carr, Phys. Rev. Lett.105, 018904 (2010).delande2014 D. Delande and K. Sacha, Phys. Rev. Lett.112, 040402 (2014).kronke15S. Krönke and P. Schmelcher, Phys. Rev. A91, 053614 (2015). Hans2015 G. C. Katsimiga, G. M. Koutentakis, S. I. Mistakidis, P. G. Kevrekidis, and P. 
Schmelcher, arXiv:1612.09151.Yefsah T. Yefsah, A. T. Sommer, M. J. H. Ku, L. W. Cheuk, W. Ji, W. S. Bakr, and M. Zwierlein, Nature499, 426 (2013).roberts2000 J. L. Roberts, N. R. Claussen, S.L. Cornish, and C. E. Wieman, Phys. Rev. Lett.85, 728 (2000).plodzien2012 M. Płodzień and K. Sacha, Phys. Rev. A86, 033617 (2012).Lai89 Y. Lai and H. A. Haus, Phys. Rev. A 40, 844 (1989).Lai89a Y. Lai and H. A. Haus, Phys. Rev. A 40, 854 (1989).castinleshouches Y. Castin, in Les Houches Session LXXII,Coherent atomic matter waves 1999, edited by R. Kaiser, C. Westbrook and F. David, (Springer-Verlag Berlin Heilderberg New York 2001).Weiss09 C. Weiss and Y. Castin, Phys. Rev. Lett.102, 010403 (2009).delande2013 D. Delande, K. Sacha, M. Płodzień, S. K. Avazbaev, and J. Zakrzewski, New J. Phys.15, 045021 (2013).corney97 J. F. Corney, P. D. Drummond, and A. Liebman, Opt. Commun.140, 211 (1997).corney01 J. F. Corney and P. D. Drummond, J. Opt. Soc. Am. B18, 153 (2001).martin2010b A. D. Martin and J. Ruostekoski, New J. Phys.12, 055018 (2010).Boisse2017 A. Boissé, G. Berthet, L. Fouché, G. Salomon, S. Aspect, S. Lepoutre, and T. Bourdel, arXiv:1701.00414.Bethe31 H. Bethe, Z. Physik71, 205 (1931).Korepin93 V. E. Korepin, N. M. Bogoliubov, and A. G. Izergin,Quantum Inverse Scattering Method and Correlation Functions (Cambridge University Press, Cambridge, 1993).Gaudin M. Gaudin,The Bethe wavefunction, Cambridge University Press, 2014.Oelkers2006 N. Oelkers, M. T. Batchelor, M. Bortz, and X. W. Guan, J. Phys. A: Math. Gen.39, 1073 (2006).Lieb63 E. H. Lieb and W. Liniger, Phys. Rev.130, 1605 (1963).Lieb63a E. H. Lieb, Phys. Rev.130, 1616 (1963). kulish76 P. P. Kulish, S. V. Manakov, and L. D. Faddeev, Theor. Math. Phys.28, 615 (1976).ishikawa80 M. Ishikawa and H. Takayama, J. Phys. Soc. Jpn.49, 1242 (1980).komineas02 S. Komineas and N. Papanicolaou, Phys. Rev. Lett.89, 070402 (2002). jackson02 A. D. Jackson and G. M. Kavoulakis, Phys. Rev. Lett.89, 070403 (2002).kanamoto08 R. Kanamoto, L. D. Carr, and M. Ueda, Phys. Rev. Lett.100, 060401 (2008).kanamoto10 R. Kanamoto, L. D. Carr, and M. Ueda, Phys. Rev. A81, 023625 (2010); Erratum Phys. Rev. A81, 049903(E) (2010).karpiuk12 T. Karpiuk, P. Deuar, P. Bienias, E. Witkowska, K. Pawłowski, M. Gajda, K. Rza̧żewski, and M. Brewczyk, Phys. Rev. Lett.109, 205302 (2012).karpiuk15 T. Karpiuk, T. Sowiński, M. Gajda, K. Rza̧żewski, and M. Brewczyk, Phys. Rev. A91, 013621 (2015).sato12 J. Sato, R. Kanamoto, E. Kaminishi, and T. Deguchi, Phys. Rev. Lett.108, 110401 (2012).sato12a J. Sato, R. Kanamoto, E. Kaminishi, and T. Deguchi, preprint arXiv:1204.3960.sato16 J. Sato, R. Kanamoto, E. Kaminishi, and T. Deguchi, New J. Phys.18, 075008 (2016).Gawryluk2017 K. Gawryluk, M. Brewczyk, and K. Rza̧żewski, Phys. Rev. A95, 043612 (2017).Syrwid2015 A. Syrwid and K. Sacha, Phys. Rev. A92, 032110 (2015).Syrwid2016 A. Syrwid, M. Brewczyk, M. Gajda, and K. Sacha, Phys. Rev.A 94, 023623 (2016).Gaudin71 M Gaudin, Phys. Rev. A4, 386 (1971).Batchelor05 M. T. Batchelor, X. W. Guan, N. Oelkers, C. Lee, J. Phys. A38, 7787-7806 (2005).Tomchenko17 M. Tomchenko, J. Phys. A: Math. Theor.50, 055203 (2017).McGuire J. B. McGuire, J. Math. Phys.5, 622 (1964). javanainen96 J. Javanainen and S. M. Yoo, Phys. Rev. Lett.76, 161 (1996).dziarmaga03 J. Dziarmaga, Z. P. Karkuszewski, and K. Sacha, J. Phys. B,36, 1217 (2003).dziarmaga06 J. Dziarmaga and K. Sacha, J. Phys. B,39, 57 (2006).Dagnino09 D. Dagnino, N. Barberán, and M. Lewenstein, Phys. Rev. A80, 053611 (2009).Kasevich20016 K. Sakmann and M. 
Kasevich, Nature Physics12, 451-454 (2016).Metropolis1953 M. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys.21, 1087 (1953). Gajda_PauliCrystal M. Gajda, J. Mostowski, T. Sowiński, and M. Załuska-Kotur, EPL115, 20012 (2016).Carr_HWSoliton L. D. Carr, Charles W. Clark, and W. P. Reinhardt, Phys. Rev.A 62, 063610(2000).girardeau1960 M. Girardeau, J. Math. Phys. (NY)1, 516 (1960).Paredes2004 B. Paredes, A. Widera1, V. Murg, O. Mandel, S. Fölling, I. Cirac, G. V. Shlyapnikov, T. W. Hänsch, and I. Bloch, Nature429, 277 (2004). | http://arxiv.org/abs/1705.09607v2 | {
"authors": [
"Andrzej Syrwid",
"Krzysztof Sacha"
],
"categories": [
"cond-mat.quant-gas",
"nlin.PS",
"quant-ph"
],
"primary_category": "cond-mat.quant-gas",
"published": "20170526150047",
"title": "Quantum dark solitons in Bose gas confined in a hard wall box"
} |
[email protected] Department of Physics, The Ohio State University at Newark, 1179 University Dr., Newark, OH [email protected] Department of Physics, The Ohio State University, Columbus, OH 43210 Light traveling through the vacuum interacts with virtual particles similarly to the way that light traveling through a dielectric interacts with ordinary matter. And just as the permittivity of a dielectric can be calculated, the permittivity ϵ_0 of the vacuum can be calculated, yielding an equation for the fine-structure constant α. The most important contributions to the value of α arise from interactions in the vacuum of photons with virtual, bound states of charged lepton-antilepton pairs. Considering only these contributions, the fully screened α≅ 1/(8^2√(3π/2)) ≅ 1/139. 77.22.Ch, 36.10.Dr Theoretical calculation of the fine-structure constant and the permittivity of the vacuum December 30, 2023 ========================================================================================== The fine-structure constant α, which has a value independent of the choice of units, is given by [SI units are used throughout.] α = e^2/[(4 πϵ_0)ħ c] = e^2/(2 ϵ_0 h c) = (e^2/2h)√(μ_0/ϵ_0), where c=1/√(μ_0ϵ_0) is used to obtain the final equality. In (<ref>) e, h, c, ϵ_0, and μ_0 are, respectively, the (screened) magnitude of the charge on an electron, Planck's constant, the speed of light in the vacuum, and the permittivity and permeability of the vacuum. Using techniques similar to those employed for calculating the permittivity of a dielectric, a formula for the permittivity ϵ_0 of the vacuum is derived by exploiting properties of virtual, lepton-antilepton pairs in the vacuum. The calculation is simplified–and the numerical accuracy is reduced–by including only the most significant interactions, those of photons interacting with virtual, bound states of charged lepton-antilepton pairs in the vacuum. The formula for ϵ_0 is easily converted into a formula for α, yielding the approximate theoretical value 1/α≅ 8^2√(3π/2) ≅ 139, which is to be compared with the experimental value 1/α = 137.036…. Solving for the permittivity of the vacuum, ϵ_0 ≅ 4^2√(6π) e^2/hc ≅ 8.98× 10^-12 C^2/ Nm^2. The experimental value is 8.85× 10^-12 C^2/ Nm^2. The possibility that the properties of the quantum vacuum determine, in the vacuum, the speed of light, the permittivity, and the permeability <cit.> dates back to the beginning of quantum mechanics. As early as 1936 the idea of treating the vacuum as a medium with electric and magnetic polarizability was discussed by Weisskopf and Pauli <cit.>. Twenty-one years later Dicke <cit.> wrote about the possibility that the vacuum could be considered as a dielectric medium. Einstein's 1920 Leiden lecture “Ether and the Theory of Relativity” <cit.> was a significant influence on Wilczek's 2008 book The Lightness of Being <cit.>, which met Einstein's challenge by expressing and encompassing the fundamental characteristics of space and time through the concept of the Grid, “the entity we perceive as empty space. Our deepest physical theories reveal it to be highly structured; indeed, it appears as the primary ingredient of reality.” From Wilczek's conceptualization it is expected that the Grid determines all physics.
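For orientation, the closed-form estimates quoted above are straightforward to evaluate numerically. The short check below (Python) uses CODATA values for e, h and c and the equivalent form ϵ_0 ≅ 4^2√(6π) e^2/(hc) derived later in the text; the constants are the only inputs.

```python
import numpy as np

# CODATA-style constants (SI), treated here simply as inputs for a quick check
e, h, c = 1.602176634e-19, 6.62607015e-34, 2.99792458e8

alpha_inv = 8**2 * np.sqrt(3 * np.pi / 2)           # claimed closed form for 1/alpha
eps0 = 4**2 * np.sqrt(6 * np.pi) * e**2 / (h * c)   # equivalent closed form for eps_0

print(f"1/alpha ~ {alpha_inv:.2f}   (measured: 137.036)")
print(f"eps_0   ~ {eps0:.3e} C^2 N^-1 m^-2   (measured: 8.854e-12)")
```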
In this paper a derivation of the value ofα is carried out based on characteristics of the Grid.A second foundation of this calculation is the Heisenberg uncertainty relation, Δ E Δ t ≥ħ/2, where Δ E and Δ t are the respective,simultaneous uncertainties in the energy and time.Here calculation of the permittivity ϵ_0 of the Grid–and of α–is begunby considering the classicalformula <cit.>for thepermittivity ϵ of a dielectric.To determine how the formula must be modified to describe virtual oscillators in the Grid instead of physical particles that oscillate, the derivation of the classical formula <cit.> is briefly reviewed. Consider a plane, sinusoidal, linearly-polarized electromagnetic wave traveling in the z-direction with the electric vector along the x-axis, the magnetic field along the y-axis, and both fields oscillating at an angular frequency ω.The origin of the reference frame is at the equilibrium position of the dipole charges. Thej^ th variety of oscillatorsis composed of charges q_j and -q_j that oscillate along the x-axis, affecting the progress of the light wave. Writing F_x=ma_x the following equation is obtained: -kx(t)-(h_r+h_c) dx(t)/ dt+q_jEcosω t =μ_jd^2x(t)/ dt^2 . On the left-hand side of the above equation -kx(t) is the elastic restoring force, -(h_r+h_c) dx(t)/ dt is the damping force resulting from radiation (h_r)and collisions (h_c),q_jEcosω t is the force on the j^ th oscillator resulting from the electric field Ecosω t of the light wave, and μ_j is the reduced mass of the j^ th oscillator.Defining the electric dipole moment p_j(t), the damping parameter τ and the classical resonant frequency ω_0^j of the oscillator, respectively, by p_j(t)=q_jx(t) , 1/τ= h_r+h_c/μ_j and(ω_0^j)^2=k/μ_j , (<ref>)can be rewritten as d^2p_j(t)/ dt^2 +1/τ dp_j(t)/ dt+(ω_0^j)^2p_j(t)= q_j^2/μ_jEcosω t .The solution of(<ref>) is of the form p_j(t)=p_j^(0)cos(ω t-ϕ) , where p_j^(0)=(q_j^2/μ_j)E/√([(ω_0^j)^2-ω^2]^2+ω^2/τ^2) , and tanϕ=ω/τ[(ω_0^j)^2-ω^2] .Typically in dielectrics the damping is small so the damping term ω^2/τ^2 can be neglected. Except when the resonant frequency ω_0^j and ω are almost the same, it follows from(<ref>) that the electric field and the dipole have almost the same phase when ω < ω_0^jand essentially opposite phases when ω > ω_0^j. Therefore, p_j^(0)≅(q_j^2/μ_j)E/(ω_0^j)^2-ω^2 .Using ϵ E=ϵ_0 E +P, wherethe permittivity of the dielectric is ϵ,the polarization density P=Σ_j N_j p_j^(0), andN_j is the number of oscillators per unit volume of the j^ th variety that are available to interact, it follows that <cit.> ϵ≅ϵ_0 +∑_j N_j (q_j^2/μ_j)/ (ω_0^j)^2 - ω^2 . The oscillators in the dielectric contribute to an increase in ϵ from the value ϵ_0 in the Grid. The quantum formula <cit.> for the propagation of a photon through a dielectric is identical to (<ref>) exceptthat ω_0^jnow is the frequency corresponding to the ground state instead of the classical resonant frequency.The above formula can be used for gases: complicating issues can arise when calculating the permittivity of liquids or solids.In (<ref>)the second term on the right-hand side is the increase in the permittivity fromϵ_0 to ϵas a result of photons interacting with oscillators in the dielectric and results entirely from polarization of the atoms, molecules or both in the dielectric. 
It then follows that the permittivity of the Grid must result entirely from polarization of the virtual atoms, molecules, or both in the Grid.Ifthe particles in the Grid were real instead of virtual, ϵ_0 ∼∑_j N_j (q_j^2/μ_j)/ (ω_0^j)^2 - ω^2 . For (<ref>) to become a defining equation for ϵ_0, the right-hand-side of the above formula must be rewritten so that it describes the interaction of photons with virtual oscillators in the Grid instead of oscillators consisting of ordinary matter.When virtual oscillators in the Grid disappear, they can't leave energy behind for any significant time because the principle of conservation of energy would be violated beyond that allowed by the uncertainty principle. The final term ω^2/τ^2 in the denominator of (<ref>)occurs because of damping. While neglecting damping is an approximation for physical particles that oscillate in a dielectric, it is exactly true for virtual oscillators in the Grid: from(<ref>) it follows that damping arises because (a) oscillators radiate energyand (b) oscillators loose energy in collisions with other oscillators. But virtual oscillators in the Grid can do neither. If they did, after they vanished they would leave behindenergy, violating the principle of conservation of energy. But this implies that the term multiplying 1/τ in (<ref>) vanishes sodp_j(t)/ dt=0 in the Grid.Consequently, p_j(t) is a constant, implyingd^2p_j(t)/ dt^2=0. In the derivation of (<ref>), taking the second derivative of p_j(t)yields the term in the denominator proportional to ω^2. Since the second derivative is zero, the term ω^2 does not appear. Thusfor (<ref>) todescribethe permittivity of the Grid, ω^2 must be set to zero: ϵ_0 =∑_j (N_j q_j^2/μ_j)/(ω_0^j)^2 . The above discussion is for a classical electric field. For a quantum field, a photon is absorbed by a virtual oscillator in the Grid. When the virtual oscillator vanishes into the Grid, a photon is emitted that has the same energy and momentum as the original photon.The three types ofvirtual oscillators considered here are virtual, atomic, bound states of a charged lepton and anti-lepton:positronium, muon-antimuon bound states and tau-antitau bound states. Initially attention is restricted to positronium.Conservation of angular momentum requires that positronium be created in the J=0 state, which is parapositronium (p-Ps), a singlet spin statethat must decay into an even number of photons. The Heisenberg uncertainty principle isΔ E_ p-Ps Δ t_ p-Ps=ħ/2. Denoting the mass of an electron (or positron) by m_e,Δ E_ p-Ps is the energy 2m_ec^2 for the production ofvirtual parapositronium [The binding energy of parapositronium, which is small in comparison with 2m_ec^2, is being neglected.]. Then (<ref>) yields the average time Δ t_ p-Ps that virtual parapositronium exists, Δ t_ p-Ps= ħ/4m_ec^2. During the time Δ t_ p-Ps, a beam of lighttravels a distance L_ p-Ps given by L_ p-Ps= cΔ t_ p-Ps= ħ/4m_ec.Here the major new physics,which follows from dimensional analysis and uses standard, cubic wave packets,is the ansatz thatthe number of virtual parapositronium atoms per unit volume is1/L_ p-Ps^3(=1.11× 10^39/ m^3), a result that can immediately be generalized to other virtual particles in the Grid. To minimize the violation of conservation of energy, when a virtual, particle-antiparticle pair is created from the Grid, the pair is assumed toappear on mass shell in its lowest energy state. 
Thus virtual parapositronium, positronium's ground state (n=1) for which J=0 <cit.>, is created in the Grid.During the time Δ t_ p-Ps that virtual parapositronium exists, light travels less than one-thousandth of the Bohr radius of parapositronium.Consequently, virtual parapositronium survives such a short time that when it interacts with a photon, the parapositronium would be expected to vanish back into the Grid before it could be elevated to an excited state.As in ordinary matter, the decay of a virtual parapositronium atom is assumed to be dominated by its interaction with the quantum electromagnetic field. Therefore, to calculate the probability that a photon interacts with a virtual parapositronium atom that annihilates electromagnetically, the electromagneticdecayrate Γ is calculated using the mechanism for the annihilation of ordinary matter, [Details of this calculation will be given elsewhere.] Γ = α^5 m_e c^2/ħ, which is twice the decay rate of parapositronium into two photons <cit.>.At equilibrium the average rate for which virtual parapositronium absorbs a photon equals the average rate for which virtual parapositronium annihilates and emits a photon. As a consequence, the average probability thatvirtual parapositronium absorbs a photon is Γ Δ t_ p-Ps.For virtual parapositronium the quantity N_j in (<ref>), denotedN_ p-Ps,is the number density of virtual parapositroniummultiplied by the average probability that virtual parapositronium will absorb an incoming photon:N_ p-Ps = 1/L_ p-Ps^3×Γ Δ t_ p-Ps=α^5/4 (4 m_e c/ħ )^3. Since positronium is a bound state of an electron and positron, the reduced mass μ_i in (<ref>) isμ_e = m_e/2.Thenon-relativistic, ground-state energy level for positronium is obtained from the n = 1 energy level of hydrogen by replacing the reduced mass of hydrogen with the reduced mass m_e/2: E_e = - (m_e/2)e^4 /2(4πϵ_0)^2 ħ^2 =-m_eα^2c^2/4 . The above formula is used<cit.> to calculate the natural angular frequency of positronium in its ground state: ω_0^j=ω_1^e=-E_e/ħ: 1/(ω_1^e)^2=(4ħ/m_eα^2c^2)^2 . Eq. (<ref>)then takes the form ϵ_0 =∑_j 8^3α e^2/ħ c. Note that the mass of the electron has cancelled from the expression for ϵ_0, implying thatvirtual, bound muon-antimuon and tau-antitau pairseach contribute the same amount to the value ofϵ_0 as virtual positronium. Thus, ϵ_0 = 38^3α e^2/ħ c. Multiplying both sides of (<ref>) by (4 πħ c)/e^2and using (<ref>) yields the desired result: 1/α≅ 8^2√(3π/2)≅ 138.93…. The experimental value is 1/α=137.036…. Using the second expression for α in (<ref>), substituting the expression into (<ref>), and solving for ϵ_0,ϵ_0 ≅ 4^2√(6π) e^2/hc≅ 8.98× 10^-12 C^2/ Nm^2. Alternatively, selecting the third expression for α in (<ref>) that includes the defined quantity μ_0, substituting the expression into (<ref>), and solving for ϵ_0, ϵ_0 ≅ 3(8^3) π μ_0e^4/h^2≅ 9.10× 10^-12 C^2/ Nm^2.Because the expression for ϵ_0 in (<ref>) is not exact,slightly different values for ϵ_0 are obtained when expressed in terms ofμ_0,e, and hinstead of c,e, and h.The experimentalvalue is ϵ_0=8.85× 10^-12 C^2/ Nm^2. 
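The numbers quoted in this paragraph can be reproduced directly: treating ϵ_0 = 3·8^3 α e^2/(ħ c) together with α = e^2/(4πϵ_0ħ c) as a self-consistency condition fixes both quantities, and the two alternative closed forms for ϵ_0 evaluate to the values given above. The sketch below (Python) performs this check; standard SI constants are the only inputs, with μ_0 taken at its defined value as assumed in the text.

```python
import numpy as np

e    = 1.602176634e-19      # C
h    = 6.62607015e-34       # J s
hbar = h / (2 * np.pi)
c    = 2.99792458e8         # m/s
mu0  = 4 * np.pi * 1e-7     # H/m (defined value, as assumed in the text)

# Self-consistency: eps0 = 3*8^3 * alpha * e^2/(hbar*c) with alpha = e^2/(4*pi*eps0*hbar*c)
# => eps0^2 = 3*8^3 * e^4 / (4*pi*hbar^2*c^2)
eps0_selfcons = np.sqrt(3 * 8**3 / (4 * np.pi)) * e**2 / (hbar * c)
alpha_inv     = 4 * np.pi * eps0_selfcons * hbar * c / e**2

eps0_from_c   = 4**2 * np.sqrt(6 * np.pi) * e**2 / (h * c)   # closed form using c, e, h
eps0_from_mu0 = 3 * 8**3 * np.pi * mu0 * e**4 / h**2         # closed form using mu0, e, h

print(f"1/alpha        ~ {alpha_inv:.2f}      (quoted: 138.93, measured: 137.036)")
print(f"eps0 (c,e,h)   ~ {eps0_from_c:.3e}  (quoted: 8.98e-12)")
print(f"eps0 (mu0,e,h) ~ {eps0_from_mu0:.3e}  (quoted: 9.10e-12)")
```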
Usingc=1/√(μ_0ϵ_0) and the value for ϵ_0 in (<ref>), c=2.98× 10^8m/s.Virtual quark-antiquark pairs contribute little to the value of α: for the heavy quarks Q=c, b,ort, there are no QQ̅ states that decay directly into two photons.While there are QQ̅ states that decay into a lepton-antilepton pair that, in turn, decays into two photons, the contributions to ϵ_0 from these decays are suppressed because of the intermediate lepton-antilepton pair. For the light quarks q=u, d,ors, the π^0, η, and η^'are J=0 combinations of qq̅ bound states that decay into two photons.If the the π^0, η, and η^' resonate, the frequencies would be many orders of magnitude larger than those of bound, lepton-antilepton pairs, suppressing their contributions to 1/α. On the other hand, in comparison to their masses, the decay rates of these mesons are orders of magnitude larger than those of bound, lepton-antilepton pairs. The net result is that the light quarks' contributionsalso would not significantly affect the value of α to the accuracy obtained here. The fact that the above value for 1/α isabout 1.4%too large might be explainedif some bound, lepton-antilepton pairs convert into another formthat contributes less to the the right-hand side of (<ref>).Specifically, if some virtual parapositronium atoms combine to form virtual diparapositronium molecules that have an electric quadrupole moment but not a dipole moment, then theright-hand side of (<ref>) and thus 1/α would decrease. Also, higher-order corrections in α could be significant. Note in particular that diparapositronium moleculesrespond to (quadrupole) gravitational radiation, inextricably linking electromagnetism and gravitational radiation. This subject is currently under investigation.In the early universe when the temperature was sufficiently high that it was difficult for virtual, lepton-antilepton pairs to bind into virtual, lepton-antilepton atoms,the number density ofvirtual, lepton-antilepton atoms in the Grid would have been much less than today. From (<ref>) it then follows that ϵ_0 would also have been much smaller.Since c=1/√(ϵ_0 μ_0),the decrease in the value of ϵ_0 would tend to make the speed of light in the early universe much larger. To make a definitive statement about the speed of light in the early universe, however, an analysis of μ_0 is required. | http://arxiv.org/abs/1705.11068v1 | {
"authors": [
"G. B. Mainland",
"Bernard Mulligan"
],
"categories": [
"physics.gen-ph",
"hep-th"
],
"primary_category": "physics.gen-ph",
"published": "20170526164954",
"title": "Theoretical calculation of the fine-structure constant and the permittivity of the vacuum"
} |
1 Institute for Experimental Nuclear Physics (IEKP), KIT, Karlsruhe, Germany 2 CERN, Meyrin, Switzerland The Beam Condition Monitoring Leakage (BCML) system is a beam monitoring device in the CMS experiment at the LHC consisting of 32 poly-crystalline (pCVD) diamond sensors. The BCML sensors, located in rings around the beam, are exposed to high particle rates originating from the colliding beams. These particles cause lattice defects, which act as traps for the ionized charge carriers, leading to a reduced charge collection efficiency (CCE). The radiation-induced CCE degradation was, however, much more severe than expected from low-rate laboratory measurements. Measurements and simulations presented in this paper show that this discrepancy is related to the rate of incident particles. At high particle rates the trapping rate of the ionization is strongly increased compared to the detrapping rate, leading to an increased build-up of space charge. This space charge locally reduces the internal electric field, increasing the trapping rate and hence reducing the CCE even further. In order to connect these macroscopic measurements with the microscopic defects acting as traps for the ionization charge, the TCAD simulation program was used. It makes it possible to introduce the defects as effective donor and acceptor levels and to calculate the electric field from Transient Current Technique (TCT) signals and the CCE as a function of the effective trap properties, such as density, energy level and trapping cross section. After each irradiation step these properties were fitted to the data on the electric field from the TCT signals and CCE. Two effective acceptor and donor levels were needed to fit the data after each step. It turned out that the energy levels and cross sections could be kept constant and the trap density was proportional to the cumulative fluence of the irradiation steps. The highly non-linear, rate-dependent diamond polarization and the resulting signal loss can be simulated using this effective defect model and are in agreement with the measurement results. Description of radiation damage in diamond sensors using an effective defect model Florian Kassel, Moritz Guthoff, Anne Dabrowski, Wim de Boer Submitted 25 May 2017. Accepted 26 September 2017. § INTRODUCTION The CMS Beam Condition Monitor Leakage (BCML) system at the LHC is a beam monitoring device based on 32 poly-crystalline (pCVD) diamond sensors. The BCML sensors measure the ionization current created by beam losses leaking outside the beam pipe, e.g. by scattering on the residual gas or beam collimators. In case of very intense beam loss events, which could potentially damage the CMS detector, the BCML system triggers the LHC beam abort leading to a beam dump to protect the CMS detector. Although diamond sensors were expected to be radiation hard, the charge collection efficiency (CCE) dropped much faster <cit.> in this high particle rate environment in comparison to low particle rate laboratory measurements <cit.> and simulations <cit.>, see Fig. <ref>. Here the charge collection distance (CCD) of the BCML sensors with an average thickness of d=400 μm is plotted as a function of particle fluence. The CCD defines the average drift length of the charge carriers before being trapped.
After a total fluence of Φ = 16× 10^14 p_24GeV/cm^2 the CCD of the pCVD diamonds used at the CMS detector dropped about twice as fast compared to the expectations based on laboratory measurements. This discrepancy in CCD between the real application in a particle detector and laboratory experiments can be explained by the rate dependent polarization <cit.> of the diamond detector, as was deduced from detailed laboratory measurements and simulations. The rate dependent diamond polarization describes the asymmetrical build-up of space charge in the diamond bulk, which leads to a locally reduced electric field configuration. The charge carrier recombination is increased in this low field region resulting in a reduced CCE of the diamond detector. At high particle rates the trapping rate of the ionization is even more increased compared to the de-trapping rate leading to an increased build-up of space charge. The increased amount of space charge causes an even stronger local reduction in the internal electric field and hence reduces the CCE further. The study presented in the following is an update to the work presented in <cit.>.§ EFFECTIVE DEFECT MODEL DESCRIBING THE RADIATION INDUCED SIGNAL DEGRADATION OF DIAMOND DETECTORSWithin the scope of this publication an effective defect model will be introduced, capable of explaining the radiation induced signal degradation of diamond detectors. This effective defect model was found by optimizing simulations to experimental Transient-Current-Technique (TCT) <cit.> and CCE measurement results of an irradiation campaign with high-quality single-crystal CVD (sCVD) diamonds detectors. The basic properties of this effective defect model like energy level and charge carrier cross sections for electrons and holes are based on <cit.>.§.§ Diamond irradiation campaignA dedicated irradiation campaign was carried out to gain quantitative understanding of the diamond polarization affecting the charge collection efficiency of diamond detectors. The diamond samples were irradiated stepwise up to a maximum fluence of Φ = 30.1× 10^13n_1 MeV,eq./cm^2. After each irradiation step the electrical properties of the diamond sensors were studied using the TCT method for an indirect electric field measurement and by measuring the charge collection efficiency. The diamond polarization, crucial in understanding the irradiation damage, is modifying the internal electrical field, which can be measured using TCT. The CCE measurements were used to study the reduced detector efficiency due to these electrical field modifications. In order to measure the build-up of polarization in the diamond for a given radiation damage the following measurement procedure is used for the TCT and CCE measurements: * The diamond is exposed during the entire measurement to a constant ionization rate by a ^90Sr source creating electron-hole pairs in the entire diamond bulk filling up the traps. In the steady-state the trapping and detrapping rates are in equilibrium. According to the simulations discussed below about 55 % of the effective deep traps are filled. * In order to remove any residual field and set the diamond into a unpolarized state, the sensor is exposed to the ^90Sr source for a duration of 20 minutes without bias voltage applied. A homogeneous trap filling in the diamond bulk, and hence an unpolarized diamond state, is reached. * The bias voltage is ramped up fast (t_ramp≤ 10 s) and the measurement is started immediately (t=0 s). 
* The diamond starts to polarize as soon as bias voltage is applied. The measurement is performed over an extended period of time (t>3000 s) until the diamond is fully polarized and the measurement results are stable. Four new single crystalline diamonds of highest quality 'electronic grade' corresponding to small nitrogen and boron impurities ([N] < 5 ppB and [B] < 1 ppB), produced by Element6 <cit.> were used to investigate the radiation induced signal degradation. The diamond samples were irradiated stepwise with either 23 MeV protons or with neutron particles with an energy distribution up to 10 MeV <cit.>. More detailed information to the diamond samples, the TCT and CCE measurement setup can be found in <cit.>. TCT measurement results The TCT measurements of the different irradiated diamond samples are shown in Fig. <ref> for the hole carrier drift. The expected rectangular TCT pulse shape, indicating a constant electrical field, is measured for the un-irradiated diamond sensor and remains stable as function of exposure time to the ^90Sr source. This rectangular TCT pulse shape is measured as well for the irradiated diamond samples immediately after the TCT measurement has been started (t=25 s) even though radiation damage leads to an increased amount of defects trapping free charge carriers. In this initial moment the diamond is in a pumped state and the defects are homogeneously ionized leading to a neutral effective space charge. Applying of bias voltage changes however the charge carrier distribution resulting in an inhomogeneous trapping. Trapped charge carriers, acting as space charge, are modifying the electric field configuration and leading to a modified TCT pulse shape. Increased radiation damage leads to an even stronger TCT pulse modification as function of the ionization duration. Furthermore, a faster transition to the final stable TCT pulse and hence to the final electrical field configuration is measured with respect to increased radiation damage. These TCT measurements demonstrate the increased build-up of space charge which leads to a strongly modified electric field distribution caused by radiation damage. The TCT measurement results for the electron drift are in agreement with the hole drift measurements and are therefore not explicitly discussed here. A direct comparison of the electron and hole drift can be found in <cit.>. CCE measurement results The charge collection efficiency of the diamond sensors used in the irradiation campaign was measured regularly at the CCE setup at DESY in Zeuthen <cit.> using minimum ionizing particles (MIP). The CCE measurement results are discussed in the following as function of measurement time, during which the diamond is exposed to an ionization source (^90Sr). The influence of the ionization source to the CCE measurement result is shown in Fig. <ref> for different radiation damages. The un-irradiated diamond sample is not influenced by the ionization source and remains constant at a CCE of 100%. The radiation damaged diamond sensors are however affected by the ionization source. A steep decrease of the initial measured CCE is observed. The time constant of this reduction matches the time constant of the TCT pulse modification. Hence the modified electric field distribution directly affects the CCE and demonstrates the importance of the build-up of space charge in diamond sensors in order to understand the effects of radiation damage. 
This direct correlation of the TCT pulse modification and the measured CCE value is discussed in more detail in <cit.>.The stabilized CCD measurements are plotted in Fig. <ref> as function of fluence for an electrical field of E = 1.0 V/μm. The CCD as function of radiation damage can be described by the following equation:1/CCD(Φ) = 1/CCD_0 + k ×Φ,with CCD_0 as initial CCD, Φ as particle fluence in p_24GeV eq./cm^2 and k as radiation constant. It was found that the radiation constant k describes both, the behavior of sCVD as well as pCVD diamond sensors <cit.>. Irradiation studies done by the RD42 collaboration with pCVD diamond sensors determined a radiation constant of k = 6.5× 10^-19 cm^2/μm <cit.> for irradiation with 24 GeV protons. The conversion of the radiation damage created by n_1 MeV to p_24 GeV is caluclated using NIEL <cit.>, which is increased by a factor of 3.59.Based on the measurements done within this paper a radiation constant of k = (8.2 ± 0.5)× 10^-19 cm^2/μm was found which is in reasonable agreement with the RD42 value. §.§ TCAD simulation of radiation damageIn order to create a defect model for diamond sensors and gain quantitative understanding of the radiation induced signal degradation, the diamond sensor was modeled with the software Silvaco TCAD <cit.>. Besides the electrical diamond properties, like e.g. band gap or mobility parameters, radiation induced lattice defects can be taken into account by introducing effective deep traps acting as recombination centers. The properties of these defects, like energy levels, capture cross section for electrons and holes, were found by optimizing the simulation of TCT and MIP pulses to match the experimental data and are listed in Table <ref>, further information can be found in <cit.>. The simulated MIP pulses were used to calculate the charge collection efficiency of the diamond sensor. The TCAD simulation included furthermore a detailed implementation of the geometrical properties of the TCT and CCE measurement setups, affecting e.g. the energy deposition of the ionization sources in the diamond sensors. Optimization of the effective defect densities to the TCT and CCE measurements Within the scope of the irradiation campaign TCT and CCE measurements for 12 different irradiation steps were obtained. The TCT and CCE simulation were fitted to each TCT and CCE measurement and a uniquely optimized trap configuration was found with respect to the particular fluence. The traps were optimized by adjusting the trap density of the effective recombination centers (ρ_eRC1 and ρ_eRC2), since the trap properties like energy level or cross section should not be affected by the fluence. In Fig. <ref> the simulated and measured TCT pulses of the hole charge carrier drift are shown for different fluences. The TCT simulation were optimized to the measurement results at the lowest measurable bias voltage, where the polarization is affecting the electric field most. The build-up of the internal polarization field is so quick for higher fluences that a reliable TCT measurementat low bias voltages was not possible. The TCT measurements were therefore done at increased bias voltages.The simulated MIP signal is shown in Fig. <ref>,b as function of the ionization time (T_exp) for two different fluences. The MIP signal was integrated to calculate the charge collection efficiency of the simulation result. The MIP signal for the sensor with the lowest fluence is simulated for an electrical field of E = 0.18 V/μm. 
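As a brief aside before returning to the simulated pulse shapes, the fluence scaling quoted earlier in this section, 1/CCD(Φ) = 1/CCD_0 + k·Φ, can be put into a few lines of code. The initial value CCD_0 below is a placeholder chosen purely for illustration (the paper reads it off the measurements), while the two radiation constants are the RD42 value and the value measured in this work; a 1 MeV neutron fluence would first be scaled by the NIEL factor 3.59 quoted above.

```python
import numpy as np

def ccd(phi, ccd0_um, k):
    """1/CCD(Phi) = 1/CCD_0 + k*Phi, with CCD in um, k in cm^2/um, Phi in cm^-2."""
    return 1.0 / (1.0 / ccd0_um + k * phi)

phi = np.array([1, 5, 10, 16]) * 1e14   # 24 GeV proton-equivalent fluences [cm^-2]
ccd0 = 250.0                            # placeholder initial CCD in um (assumption, not from the paper)

for label, k in [("RD42,       k = 6.5e-19 cm^2/um", 6.5e-19),
                 ("this paper, k = 8.2e-19 cm^2/um", 8.2e-19)]:
    print(label, "->", np.round(ccd(phi, ccd0, k), 1), "um")
```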
The build-up of space charge leads to a two peak structure in the MIP pulse shape, which results in an overall reduced charge collection efficiency. The MIP particle for the highly irradiated diamond sample is simulated at an increased electrical field of E = 0.72 V/μm. The build-up of space charge is as well influencing the signal shape of the MIP particle resulting in a reduced charge collection efficiency. Based on the simulated MIP signals, the calculated charge collection efficiency is shown in Figs. <ref>,d as function of exposure time to the ^90Sr source. The simulated charge collection efficiency is in agreement with the experimental measured CCE values.§.§ Effective defect model as function of radiation damageThe optimization of the effective defect model to the experimental measurement results shown in Fig. <ref> and <ref> are representative for the fitting of the defect model for each of the 12 different irradiation steps. The optimized trap densities for both effective recombination centers eRC1 and eRC2 for each irradiation step are plotted as function of the radiation damage caused by the particle fluence of Φ (p_24 GeV eq./cm^2) in Fig. <ref>. The error to the absolute radiation damage in the x-dimension is due to the limited accuracy of the irradiation facilities. The accuracy of the proton irradiation is ± 20% and the accuracy of the neutron irradiation is ± 30%. The linear regression of the trap densities with respect to the total radiation damage gives:ρ_eRC1 = Φ· 0.0252 (cm^-1)+ 9.40× 10^11 (cm^-3), ρ_eRC2 = Φ· 0.0215 (cm^-1) + 6.67× 10^11 (cm^-3),with Φ as (p_24 GeV eq./cm^2) particle fluence. A possibility to verify this effective trap model is the simulation of the radiation induced signal degradation in terms of charge collection efficiency as function of radiation damage, see Fig. <ref>. Based on this degradation the radiation constant can be calculated using Eq. <ref> and be compared to the radiation constant found by the experimental measurements. The CCE of the diamond sensor is simulated for a typical particle rate environment created by a ^90Sr and at an electrical field of E = 1 V/μm.The simulated charge collection distances matches the experimental measurement results for the laboratory rate environment. Based on the simulation a radiation constant of k_sim. = (8.9 ± 1.1) × 10^-19 cm^2μm^-1 is calculated. The simulated radiation constant is in agreement with the experimental measurement result of k_meas. = 8.2× 10^-19 cm^2μm^-1. This is however not surprising since the effective defect model was optimized to these particular measurements.The effective defect model is now used to simulate increased particle rate environments. In laboratory measurements the particle rate created by a ^90Sr source typically creates a particle rate of f_^90Sr = 0.15 GHz/cm^3 <cit.>. The particle rates at the CMS detector are significantly higher and depend on the exact detector location. For the BCML detector location a MIP particle rate of f_BCML1≈ 10 GHz/cm^3 is estimated based on FLUKA <cit.> simulations. The simulated charge collection distance for the increased particle rate is indicated in Fig. <ref> in blue dashed lines. The increased particle rate environment leads to a strongly reduced charged collection distance. After a radiation damage, corresponding to a particle fluence of Φ = 10 × 10^14 p_24GeV/cm^2, the charge collection distance simulated for the particle rate environment at the BCML location is reduced by 53% compared to the ^90Sr particle rate environment. 
Based on these simulation results a three times increased radiation constant of k_sim. = 27.7× 10^-19 cm^2μm^-1 is calculated. This radiation constant can be directly compared to the radiation constant measured with the BCML pCVD diamond sensors at the CMS detector (indicated in solid blue), since the radiation constant is independent of the diamond material (sCVD or pCVD) used to measure the radiation induced signal degradation <cit.>. The different diamond material is only reflected in a different initial charge collection distance value, compare sec. <ref>.The calculated radiation constant based on the BCML detector degradation of k_meas. = 56.0× 10^-19 cm^2μm^-1 is still by a factor of two higher than the simulation result, see Table <ref>. This discrepancy can be caused e.g. by an underestimation of the radiation damage, which is based on a FLUKA simulation. Furthermore, the BCML sensors are exposed to a mixed particle field that could contribute to non linear degradation effects of the sensors.A radiation damage of Φ = 1.5 × 10^15 p_24GeV/cm^2 results in a signal degradation of 42% and 70% for the ^90Sr and BCML particle rate environment, respectively. The corresponding electrical field configurations of the diamond sensors operated at an electrical field of E = 1 V/μm are shown in Fig. <ref>. The expected electric field modification due to an almost linear build-up of space charge (Fig. <ref>) in the ^90Sr particle rate environment leads to a local minimum in the electric field. This results in a slightly increased recombination rate (Fig. <ref>) leading to the reduced CCE.The electrical field is significantly modified at the increased particle rate environment of the BCML detector location, see Fig. <ref> in red. The highly increased build-up of space charge leads to a suppression of the electrical field in about ∼ 230 μm, that is about half of the sensor thickness. In this region the charge carrier recombination is strongly increased and explains the poor charge collection efficiency of ∼ 30%.§ SIMULATION OF RADIATION DAMAGE FOR DIFFERENT ELECTRICAL FIELDSIn this section the effective defect model is used to analyze the charge collection distance as function of the electric field at which the sensor is operated. In Fig. <ref> the radiation induced signal degradation is simulated for three different electric fields of E = 1.00 V/μm, E = 0.36 V/μm and E = 0.18 V/μm. Based on the simulation results the radiation constant is calculated and listed in Table <ref>. Comparison the radiation constants demonstrates the importance of preferably high operational electric fields. A three times reduced electric field from 1 to 0.36 V/μm leads to an increased radiation constant by a factor of ∼ 4.7. Furthermore, reducing the electric field by a factor of 5 to an electric field of E = 0.18 V/μm leads to a ∼ 11.3 times increased radiation constant. Hence, the diamond polarization leads to a non-linear increase in the radiation induced signal degradation as function of the electric field at which the diamond is operated.A more detailed understanding of the diamond polarization leading to the severe signal degradation is obtained by analyzing the corresponding electric field and space charge inside the diamond sensors for a particular fluence, see Fig. <ref>. Although the total amount of space charge is reduced for the lower electrical field configurations, the impact to the overall electric field inside the diamond sensor is strongly increased. 
This leads to the suppression of the electric field in more than half of the detector thickness leading to a strongly increased charge carrier recombination, see Fig. <ref>.§ CONCLUSIONThe radiation induced signal degradation of diamond detectors can be described using the effective defect model presented in this paper. This defect model was found by optimizing TCT and CCE simulations to the experimental measurement results of different irradiated diamond sensors. The simulation and measurement results underline the crucial role of the polarization effect to understand the radiation induced signal degradation of diamond detectors. The build-up of space charge leads to a locally reduced electric field at which the increased charge carrier recombination leads to a reduced CCE. Using the effective defect model to extrapolate the reduction of the electric field by the polarizing space charge inside the sensor to the high rate environment of the CMS detector explains the poor performance of the diamond sensors in this harsh environment of the LHC.The effective defect model showed furthermore the importance of high bias voltages in order to minimize the radiation induced signal degradation. At reduced bias voltages the diamond polarization leads to a non-linear increase in the radiation induced signal degradation. Hence one should try to increase the high voltage breakthrough voltage, so one could operate with an electric field from the bias voltage well above the electric field from the space charge. Alternatively, the switching of the bias voltage with a few Hz, so that space charge would switch direction as well and could not build-up inhomogeneously could avoid the diamond polarization.This work has been sponsored by the Wolfgang Gentner Programme of the Federal Ministry of Education and Research and been supported by the H2020 project AIDA-2020, GA no. 654168 (http://aida2020.web.cern.ch/). [1]Guthoff2013168M. Guthoff et al., Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 730, 168 - 173 (2013). Guthoff2015M. Guthoff et al., PoS: Proceedings - 3rd International Conference on Technology and Instrumentation in Particle Physics (TIPP 2014) 281, (2014). Guthoff2014M. Guthoff, Radiation damage to the diamond based Beam Condition Monitor of the CMS Detector at the LHC, PhD thesis IEKP-KA/2014-01 (2014).RD42RD42 Coll. (W. Adam et al.), Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 565, 278 - 283 (2006). deBoer2007W. de Boer et al., Physica Status Solidi (a) 204, 3004-3010 (2007). Guthoff2013M. Guthoff et al., Nuclear Instruments and Methods in Physics Research A 735, 223-228 (2014).Kassel2016F. Kassel et al., physica status solidi (a) 213-10, 2641-2649 (2016). Rebai2016M. Rebai et al., Diamond and Related Material 61, 1-6 (2016).Valentin2015A. Valentin et al., Physica Status Solidi (a) 212, 2636-2640 (2015). RD42_2008RD42 Coll. (W. Trischuk et al.), arXiv preprint arXiv:0810.3429 (2008). Kassel2017F. Kassel, The rate dependent radiation induced signal degradation of diamond detectors, PhD thesis, Karlsruhe Institute of Technology (2017). Isberg2002J. Isberg et al., Science 297, 1670 (2002) Pernegger2005H. Pernegger et al., Journal of Applied Physics 97, 073704 (2005) Element6ElementSix: Synthetic diamond producer, www.e6.com.NeutronFacilityA. Kolšek et al., Nuclear Engineering and Design 283, 155 - 161 (2015).Grah2009C. 
Grah et al., IEEE Transactions on Nuclear Science 56-2, 462-467 (2009). [Guthoff2014223] M. Guthoff et al., Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 735, 223-228 (2014). [Silvaco] Silvaco TCAD - Atlas: semiconductor device simulator; www.silvaco.com. [FLUKA1] A. Ferrari et al., 'FLUKA: a multi-particle transport code', CERN 2005-10, SLAC-R-773 (2005). [FLUKA2] T. T. Boehlen et al., 'The FLUKA Code: Developments and Challenges for High Energy and Medical Applications', Nuclear Data Sheets 120, 211-214 (2014). | http://arxiv.org/abs/1705.09324v1 | {
"authors": [
"Florian Kassel",
"Moritz Guthoff",
"Anne Dabrowski",
"Wim de Boer"
],
"categories": [
"physics.ins-det"
],
"primary_category": "physics.ins-det",
"published": "20170525184632",
"title": "Description of radiation damage in diamond sensors using an effective defect model"
} |
Helping News Editors Write Better Headlines: A Recommender to Improve the Keyword Contents & Shareability of News Headlines ==================================================================== We present a software tool that employs state-of-the-art natural language processing (NLP) and machine learning techniques to help newspaper editors compose effective headlines for online publication. The system identifies the most salient keywords in a news article and ranks them based on both their overall popularity and their direct relevance to the article. The system also uses a supervised regression model to identify headlines that are likely to be widely shared on social media. The user interface is designed to simplify and speed up the editor's decision process on the composition of the headline. As such, the tool provides an efficient way to combine the benefits of automated predictors of engagement and search-engine optimization (SEO) with human judgments of overall headline quality. § INTRODUCTION The headline is an extremely important component of every news article that performs multiple functions: summarizing the story, attracting attention, and signaling the voice and style of the newspaper <cit.>. In the online realm, headlines are expected to serve several new functions; for instance, to convey the article's contents in different online contexts or to optimize the article for search engine queries (i.e., SEO). Indeed, arguably, the headline is now more important than ever, as it becomes the only visible part of the article in microblog posts, social media feeds and listings on news-aggregation sites. These multiple requirements on the news headline have complicated the composition task facing news editors, as they attempt to ensure that each headline is crafted as perfectly as possible. Prior NLP work in the area of news headlines has mostly focused on the task of automatic headline generation, cast as “very short summary generation” in the DUC tasks of the early 2000s; tasks that produced much of the research on the topic. The best-performing system in the 2004 DUC task worked by parsing the first sentence of the article and pruning it to the desired length <cit.>, an approach that works by leveraging human intelligence: journalists generally compose news articles in the “inverted pyramid” style, which places the most important information in the lead paragraph <cit.>. Other headline generation systems generally work by first using some metric to identify terms within the document that are likely to appear in the headline, and then constructing a headline containing these terms <cit.>. This latter approach has much in common with the task of keyword selection for SEO, which first caught the attention of major newspapers at least ten years ago <cit.>, and continues to be a much-discussed issue today <cit.>. While even long-established, traditional news publications have begun to move away from classical forms of headlines towards more direct, keyword-laden headlines, many copy editors would still prefer to write clever, witty headlines <cit.>, and readers of the news seem to value creativity in headlines over clarity or informativeness <cit.>. Therefore, one of the key considerations in the design of our system was to balance the mechanical act of filling a headline with informative, relevant keywords against the creative act of writing headlines that appeal to human interests and emotions. We expect that the most interesting and emotional stories are likely to be more popular with readers than the “average” story.
Analysis of reader behavior has shown that there is no correlation between how much an article is shared on social media and how much of the article is read by an average user <cit.>; a fact that could be taken as evidence supporting the widely-held view that people share articles online that they have not fully read themselves <cit.>. In this case, the headline—which people presumably read even if they don't read the full text—may be an important factor in determining the “shareability” of a news article; an idea that is another key motivation behind the design of our system.The tool presented here is designed to facilitate the decision-making process facing a news editor in composing a headline. The software employs state-of-the-art NLP and machine learning techniques to make its recommendations, but it is not designed to automatically generate headlines or to make decisions about a headline's goodness on its own. In the sections below, we present the design and behavior of the tool before discussing the internal workings of the system. We conclude with an assessment of the current state of the project, including some preliminary evaluation results and a discussion of areas for improvement. § DESIGN AND BEHAVIOR From a user-interface perspective, the software has two modes of operation: input mode and analysis mode. The input mode (illustrated in Figure <ref>) facilitates the entry of a news article and its corresponding headline and sub-headline, which may either be entered manually or selected from a feed of recent articles. In practice, this feed would be integrated into the newspaper's workflow so that an editor could review all new articles with the software prior to publication.After the editor-user selects an article, the system switches to the analysis mode, showing the results of the automated analysis (illustrated in Figure <ref>). This mode is designed to allow the user to quickly assess the strengths and weaknesses of the headline and decide whether any changes should be made to improve it. The five most highly-ranked keywords from the article are listed on the right side of the screen sorted by weight, a metric combining the keyword's frequency in the article and its SEO score, which respectively capture the keyword's local relevance to the article itself as well as its global prominence among news stories in general. (See section <ref> below for details on these measures.)The keywords are color-coded to distinguish keywords which already appear in the headline (green) from those which do not appear in the headline (red), and size-coded according to their weight. Thus, any large, red keywords are those which an editor should consider adding to the headline. In the example in Figure <ref>, the top three recommended keywords are already present in the headline; the two remaining recommendations, “Irish Republic” and “GPO”, are both sensible suggestions for the article. In addition to the keyword recommendations, the system scores each headline for its “shareability” on two social media platforms: Twitter and Facebook; if the shareability score on either platform exceeds a threshold value, then an alert is displayed to the user. In the example in Figure <ref>, the article has exceeded the Facebook threshold but not the Twitter threshold, so only one of the two alerts is displayed. The newspaper's editor in charge of social media can use this information when deciding which stories should be posted and promoted on social media sites. 
The threshold is set to a relatively conservative value, so that most articles will not produce alerts, and only the most promising headlines will come to the editor's attention. Ultimately, it is up to the editor to decide what action, if any, to take based on the information presented by the software. The editor has the leeway to add keywords to the headline in creative ways that fit the style of the story and the news organization, and she can also flexibly deal with any errors that may be produced by the keyword recommendation system, rather than blindly following its advice. § IMPLEMENTATION The system consists of three major components: a user-interface front-end, a text-analysis back-end, and a web server that mediates communication between the two. The user interface is implemented with HTML and JavaScript and accessed via a web browser; its behavior is described and illustrated in the previous section. The web server is implemented in Python (based on the Flask framework), allowing easy integration with the text-analysis back-end, which is also mainly implemented in Python. We use the sklearn module for regression and Stanford's CoreNLP Java suite for NLP <cit.>. The entire system is deployed on a web server and accessed by the client's web browser. The back-end consists of two components, keyword analysis and shareability analysis, which operate independently of one another and are discussed in detail below. §.§ Keyword Analysis The role of keyword analysis is to identify terms in the article body that are good candidates for inclusion in the headline. We believe that headlines containing informative and popular keywords can be both more appealing to readers and more prominent in users' search results and on news aggregator websites. Processing of an input article begins with tokenization and named-entity recognition using CoreNLP, which identifies all entities (e.g. people, organizations, locations) in the article. Next, any known keywords appearing in the text are identified using a database of 90k keywords and their frequencies from Irish news articles in recent years, which we populated with data provided to us by two other Irish news-related projects <cit.>. This process results in a list of entities, which may be unique to the given article, and a list of keywords, which are known to have been encountered in previous news articles. These keywords and named entities are linked using a simple, rule-based approach that resolves pairs like “Enda Kenny” and “(Mr.) Kenny”, yielding a single list of resolved keywords, along with a list of all positions in the text where each keyword appears. Our keyword ranking system aims to capture the intuition that salient keywords should ideally be both locally prominent (i.e. appearing frequently in the given news article) and globally popular (i.e. appearing frequently in articles other than the current one). Thus, we calculate the weight w of each keyword k in the document d as the weighted sum of its local weight w_local and its global weight w_global: w(k,d) = λ w_local(k,d) + (1-λ) w_global(k). The local weight is calculated as the normalized within-document frequency of the keyword, so that the most frequent keyword in the document gets a w_local of 1.
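As a concrete illustration of this weighting scheme, the following minimal Python sketch combines the local and global evidence described here and detailed in the next paragraph (log-scaled global frequency, manually tuned λ, and the named-entity boost mentioned in the footnote). The normalization of the global frequency and the flat entity boost are simplifying assumptions for illustration, not the exact production code.

```python
import math

def keyword_weight(local_count, max_local_count, global_freq, max_global_freq,
                   is_named_entity=False, lam=0.6, entity_boost=0.2):
    """Combine local and global keyword evidence; lam=0.6 and the 0.2 entity
    boost follow the manually tuned values reported in the footnote."""
    w_local = local_count / max_local_count            # most frequent keyword in the article -> 1.0
    # log transform compensates for the heavy-tailed distribution of database frequencies;
    # keywords absent from the database get w_global = 0
    w_global = (math.log(1.0 + global_freq) / math.log(1.0 + max_global_freq)
                if global_freq > 0 else 0.0)
    w = lam * w_local + (1.0 - lam) * w_global
    if is_named_entity:                                # boost entities detected in the article
        w += entity_boost
    return min(w, 1.0)

# e.g. a keyword appearing 4 times (the article's top keyword appears 8 times),
# with a database frequency of 1200 against a database maximum of 90000
print(round(keyword_weight(4, 8, 1200, 90000, is_named_entity=True), 3))
```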
The global weight is calculated in a similar way, using the across-document frequencies from the keyword database and applying a nonlinear (log) transformation to compensate for the highly skewed distribution of these frequencies (note that it is possible for a keyword to have a zero global weight if it does not appear in our database; this is common for named entities in the article which have not been mentioned in the news before). The relative contributions of the local and global weights are balanced with the λ parameter. This formula was chosen as the simplest method (a linear combination) of combining the two factors. It is similar to a tf-idf score in that it combines both term frequency and document frequency, but it is critically different in that it rewards, rather than penalizes, terms that occur in many documents. This is desirable because we believe that terms which may be very common (e.g. the names of well-known politicians or celebrities) can be good headline terms, and also because our method of selecting terms (via a closed set of keywords and automatic named entity detection) generally avoids selecting words which may be high-frequency but low-quality (like stopwords). We manually set the value of λ to achieve rankings which we subjectively deemed to be suitable.[We found that a value of 0.6 (i.e. slightly favoring local frequency over global frequency) worked well for our data, but this value changed depending on which keyword list we used. Ultimately, we combined both keyword lists, which introduced a large number of noisy terms. To suppress these noisy terms, we added an additional term to boost the score of keywords which were identified as named entities in the article (up to 0.2 of the overall weight).] This manual parameter setting allowed us to deploy our system quickly with acceptable performance, but a better option would be to learn these parameters automatically. To do so would require a dataset containing news articles, their headlines, and either some measure of the quality of the headline or an assurance that the headlines in the data set are “good”, in order to guarantee that the parameters are set based on “good” headline examples. This type of data was not available to us when the system was under development. This method ultimately assigns a weight to each keyword between 0 and 1.0, which determines its ranking in the analysis output (Figure <ref>). In the user interface, the weight is displayed in a table alongside the keyword's “frequency” and “SEO Score”, which we consider to be more user-friendly than w_local and w_global themselves (the frequency is exactly the number of times the term appears in the article, and the SEO score is just w_global scaled to the familiar scale of 0 to 100). §.§ Shareability Analysis The role of shareability analysis is to identify headlines that are likely to be shared on social media. With the rise of social media as dissemination channels for the news, headlines now need to be both informative and “shareable”; that is, the headline somehow needs to attract people to post, share, and engage with the article on social media, in order to reach a large online audience. According to the Reuters Institute Digital News Report <cit.>, Facebook and Twitter generate 54% of the visits to online news sites, suggesting that direct visits to the home pages of news providers are being supplanted by social-media-mediated access.
However, Facebook and Twitter are known to have quite different audiences and engage users in different ways <cit.>. Users on Twitter generally actively search for news, and their consumption varies across news categories <cit.>, whereas on Facebook, news tends simply to be encountered through sharing amongst friends. Therefore, in our system, we model the two social networks separately. Using the Twitter streaming API we collected over 700k tweets and retweets posted by each of 200 media outlets and journalist accounts for two time periods in 2013 and 2014, for a duration of 71 and 50 days, respectively. From the collected tweets we extracted all the URLs and used the Facebook and Twitter APIs to collect the number of times each URL was shared on Facebook or posted on Twitter. Because these posts were made by journalists, the links in the tweets are mainly to news articles, from which we extracted headlines. This step yielded a data set of 55k headlines with corresponding counts of social shares for each one. We used a regression analysis to estimate the relationship between features of the headlines and the target variable, the number of shares. Each headline in our collection is represented as a vector consisting of eight features covering three main aspects of the headline's content: the sentiment polarity (as computed by the TextBlob Python package), the presence of named entities, and the length in words. The complete list of features is presented in Table <ref>. We used Regularized Linear Regression (RLR), Random Forest (RF) and Gradient Boosting Trees (GBT) as our regression methods and used the Mean Squared Error (MSE) metric to assess their performance. We split our headline set into 44k (80%) for training and the remaining 11k (20%) for testing. We trained two different regression models, one for Facebook and one for Twitter. RF and GBT performed better than the RLR models. Between the RF and GBT models, GBT performed slightly better than RF, although no significant difference was observed. On the basis of these results we use GBT as our method for regression. GBT have been shown to outperform other models in classification and regression tasks and have been used successfully for audience engagement prediction <cit.>. We observe that the models for Twitter and Facebook behave differently: comparing the values of the MSE for both models, predictions for shareable headlines on Facebook present an MSE of 41.8, while for Twitter the error is slightly smaller, 37.6. Once the GBT models are trained, we store them and incorporate them into the system's pipeline. Every inputted headline receives two shareability scores, one for each social media site; however, in order to avoid triggering too many notifications to the journalist or news editor, the system only shows a result if the score is equal to or larger than a manually defined threshold of 3.7 and 1.7 for Facebook and Twitter, respectively, which correspond to the median number of shares (on each platform) received by the headlines in our collection. § EVALUATION The tool was developed in collaboration with The Irish Times, and several professional editors have tested its usability. The feedback from these sessions has been positive and has informed several design features. In particular, the color-coding and font-size features of the interface have been noted for their usability.
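For reference, the shareability model described in the previous section can be sketched in a few lines of Python. The snippet below is a simplified illustration, not the production pipeline: it computes only three of the eight headline features (sentiment polarity via TextBlob, a crude named-entity proxy instead of the CoreNLP-based detection, and length in words) and fits a Gradient Boosting regressor with default hyperparameters.

```python
from textblob import TextBlob
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

def headline_features(headline):
    words = headline.split()
    sentiment = TextBlob(headline).sentiment.polarity          # in [-1, 1]
    has_entity = int(any(w[:1].isupper() for w in words[1:]))  # rough stand-in for NER
    return [sentiment, has_entity, len(words)]

def train_shareability_model(headlines, share_counts):
    # headlines: list of strings; share_counts: list of share counts per headline
    X = [headline_features(h) for h in headlines]
    X_tr, X_te, y_tr, y_te = train_test_split(X, share_counts,
                                              test_size=0.2, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    print("held-out MSE:", mean_squared_error(y_te, model.predict(X_te)))
    return model
```

In the deployed system one such model would be trained per platform (Facebook and Twitter), and a headline would trigger an alert only when its predicted score exceeds the platform-specific threshold.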
On the basis of this success, we are now looking at integration into editors' daily workflow, to allow more usability data to be gathered. Current tests of the system have identified some potential areas for improvement. The keyword system commonly fails to recognize when pairs of equivalent but non-identical keywords have the same referent; for example Taoiseach and Enda Kenny, or GPO and General Post Office. While editors easily recognize this duplication, this error affects frequency counts, which in turn affect keyword rankings. This type of co-reference resolution is an open question in NLP research, with typical solutions relying on a rule-based or gazetteer-based approach to fix commonly-occurring cases. The system could also be improved by moving from a static keyword database to a dynamic, real-time database. We were fortunate to be able to bootstrap our system with the keyword sources discussed in section <ref>; however, neither of these sources was created with this specific use-case in mind, and the static nature of these lists means that the keyword database will become outdated over time. Updating the keyword frequency counts on a rolling basis is an easy first step, but a more sophisticated approach is probably required, where new entities are added to the database over time, and more recent articles are given a greater weight than older articles. Because our system already identifies named entities in news articles, these entities could be added as new keywords in our database as they are encountered. We are also evaluating the impact of using the tool on SEO, based on determining whether it improves article rankings in news aggregators and search engines. While the lack of click-through from Google News has led some to question its effectiveness at driving traffic to news sites <cit.>, for The Irish Times' website, Google News is a major source of referrals. An analysis of 30k Irish Times articles (from 1/10/15 to 31/3/2016) has shown that articles listed on the Google News (Irish edition) front pages received significantly (p<0.01) more page views than unlisted articles, with Google News-listed articles receiving almost twice as many views (n=11,125, μ=1665.5 views per article) as unlisted articles (n=19,339, μ=892.4 views per article). Google News' ranking algorithm is not publicly known, so the exact factors leading to this correlation are opaque; however, for practical purposes, if our keyword recommender leads to greater visibility on Google News, then we know it should increase readership. Finally, the quality of our keyword recommendations can, in part, be assessed by noting whether the system's top-recommended keywords are already present in the original headline written for the article, as this shows that the system corresponds to human judgments (n.b., the keyword analysis only uses the article body, not the headline, as input). We processed a sample of roughly 3,000 Irish Times headlines with our system, and found that a majority of these (64%) contained two or more of the top five keywords recommended by our system (in either the headline or the sub-headline), and a large majority (88%) contained at least one of the recommended keywords.
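The headline/keyword agreement reported above can be computed with a simple script; the sketch below uses plain substring matching, which is a simplification of the matching actually performed by the system.

```python
def keyword_coverage(articles):
    """articles: iterable of (headline, subheadline, top5_keywords) tuples.
    Returns the fraction of articles whose headline or sub-headline contains
    at least one, and at least two, of the top-five recommended keywords."""
    at_least_one = at_least_two = total = 0
    for headline, subheadline, top5 in articles:
        text = (headline + " " + subheadline).lower()
        hits = sum(1 for kw in top5 if kw.lower() in text)
        total += 1
        at_least_one += hits >= 1
        at_least_two += hits >= 2
    return at_least_one / total, at_least_two / total
```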
§ CONCLUSION In this paper, we have presented a system for recommending keywords for inclusion in newspaper headlines and for identifying headlines with high potential shareability on social media. The system identifies plausible keywords that are both relevant to the given news article and popular overall in past news articles, in an effort to maximize both the reader interest and the SEO aspect of the headline. In addition, the system identifies headlines that are likely to receive above-average engagement on social media, allowing editors to effectively target their social media strategy. We believe that this tool can be a helpful component in modern, online-oriented newsrooms. § ACKNOWLEDGMENTS The authors would like to thank The Irish Times for their funding and help on this project. This work is supported by Science Foundation Ireland through the Insight Centre for Data Analytics under grant number SFI/12/RC/2289. | http://arxiv.org/abs/1705.09656v1 | {
"authors": [
"Terrence Szymanski",
"Claudia Orellana-Rodriguez",
"Mark T. Keane"
],
"categories": [
"cs.CL",
"cs.HC",
"cs.IR"
],
"primary_category": "cs.CL",
"published": "20170526174058",
"title": "Helping News Editors Write Better Headlines: A Recommender to Improve the Keyword Contents & Shareability of News Headlines"
} |
Center for Brain and Cognition. Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08018 Barcelona, Spain Department of Physics, Lancaster University, Lancaster LA1 4YB, United Kingdom Centre de Recerca Matemàtica, Campus de Bellaterra, Edifici C, 08193 Bellaterra, Barcelona, Spain. Center for Brain and Cognition. Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08018 Barcelona, Spain Recurrently coupled networks of inhibitory neurons robustly generate oscillations in the gamma band. Nonetheless, the corresponding Wilson-Cowan-type firing rate equation for such an inhibitory population does not generate such oscillations without an explicit time delay. We show that this discrepancy is due to a voltage-dependent spike-synchronization mechanism inherent in networks of spiking neurons which is not captured by standard firing rate equations. Here we investigate an exact low-dimensional description for a network of heterogeneous canonical Class 1 inhibitory neurons which includes the sub-threshold dynamics crucial for generating synchronous states. In the limit of slow synaptic kinetics the spike-synchrony mechanism is suppressed and the standard Wilson-Cowan equations are formally recovered, as long as external inputs are also slow. However, even in this limit synchronous spiking can be elicited by inputs which fluctuate on a time-scale of the membrane time-constant of the neurons. Our mean-field equations therefore represent an extension of the standard Wilson-Cowan equations in which spike synchrony is also correctly described. Firing rate equations require a spike synchrony mechanism to correctly describe fast oscillations in inhibitory networks Ernest Montbrió December 30, 2023 ========================================================================================================================= § INTRODUCTION Since the seminal work of Wilson and Cowan <cit.>, population models of neuronal activity have become a standard tool of analysis in computational neuroscience. Rather than focus on the microscopic dynamics of neurons, these models describe the collective properties of large numbers of neurons, typically in terms of the mean firing rate of a neuronal ensemble. In general, such population models, often called firing rate equations, cannot be exactly derived from the equations of a network of spiking neurons, but are obtained using heuristic mean-field arguments, see e.g. <cit.>. Despite their heuristic nature, heuristic firing rate equations (which we call H-FRE) often show remarkable qualitative agreement with the dynamics in equivalent networks of spiking neurons <cit.>, and constitute an extremely useful modeling tool, see e.g. <cit.>. Nonetheless, this agreement can break down once a significant fraction of the neurons in the population fires spikes synchronously, see e.g. <cit.>. Such synchronous firing may come about due to external drive, but also occurs to some degree during spontaneously generated network states. As a case in point, here we focus on partially synchronized states in networks of heterogeneous inhibitory neurons.
Inhibitory networks are able to generate robust macroscopic oscillations due to the interplay of external excitatory inputs with the inhibitory mean field produced by the population itself. Fast synaptic processing coupled with subthreshold integration of inputs introduces an effective delay in the negative feedback, facilitating the emergence of what is often called Inter-Neuronal Gamma (ING) oscillations <cit.>. Modeling studies with networks of spiking neurons demonstrate that, in heterogeneous inhibitory networks, large fractions of neurons become frequency-entrained during these oscillatory episodes, and that the oscillations persist for weak levels of heterogeneity <cit.>. Traditional H-FRE (also referred to as Wilson-Cowan equations) fail to describe such fast oscillations. To overcome this limitation, explicit fixed time delays have been considered in H-FRE as a heuristic proxy for the combined effects of synaptic and subthreshold integration <cit.>. Here we show that fast oscillations in inhibitory networks are correctly described by a recently derived set of exact macroscopic equations for quadratic integrate-and-fire neurons (which we call QIF-FRE) that explicitly take into account subthreshold integration <cit.>. Specifically, the QIF-FRE reveal how oscillations arise via a voltage-dependent spike synchronization mechanism, missing in H-FRE, as long as the recurrent synaptic kinetics are sufficiently fast. In the limit of slow recurrent synaptic kinetics intrinsically generated oscillations are suppressed, and the QIF-FRE reduce to an equation formally identical to the Wilson-Cowan equation for an inhibitory population. However, even in this limit, fast fluctuations in external inputs can drive transient spike synchrony in the network, and the slow synaptic approximation of the QIF-FRE breaks down. This suggests that, in general, a correct macroscopic description of spiking networks requires keeping track of the mean subthreshold voltage along with the mean firing rate. Additionally, the QIF-FRE describe the disappearance of oscillations for sufficiently strong heterogeneity, which is robustly observed in simulations of spiking networks. Finally, we also show that the phase diagrams of oscillatory states found in the QIF-FRE qualitatively match those observed in simulations of populations of more biophysically inspired Wang-Buzsáki neurons <cit.>. This shows that not only are the QIF-FRE an exact mean-field description of networks of heterogeneous QIF neurons, but that they also provide a qualitatively accurate description of dynamical states in networks of spiking neurons more generally, including states with significant spike synchrony. § RESULTS Recurrent networks of spiking neurons with inhibitory interactions readily generate fast oscillations. Figure <ref> shows an illustration of such oscillations in a network of globally coupled Wang-Buzsáki (WB) neurons <cit.>. Panels (a,c) show the results of a numerical simulation of the network for fast synapses (time constant τ_d = 5 ms, short compared to the membrane time constant of the neuron model, τ_m = 10 ms). Although the neurons have different intrinsic frequencies due to a distribution in external input currents, the raster plot reveals that fast inhibitory coupling produces the frequency entrainment of a large fraction of the neurons in the ensemble. This collective synchronization is reflected at the macroscopic scale as an oscillation with the frequency of the synchronous cluster of neurons <cit.>.
Indeed, panel (a) shows the time series of both the mean synaptic activation variable S and the mean firing rate R, which display ING oscillations. Panels (b,d) of Fig. <ref> show the disappearance of the synchronous state when the synaptic kinetics are slow (τ_d = 50 ms). §.§ A heuristic firing rate equation A heuristic firing rate description of the spiking network simulated in Fig. <ref> takes the form <cit.> τ_m Ṙ = -R + Φ(-Jτ_m S + Θ), τ_d Ṡ = -S + R, where R represents the mean firing rate in the population, S is the synaptic activation, and the time constants τ_m and τ_d are the neuronal and synaptic time constants, respectively <cit.>. The input-output function Φ, also known as the f-I curve, is a nonlinear function, the form of which depends on the details of the neuronal model and on network parameters. Finally, J ≥ 0 is the synaptic strength and Θ is the mean external input current compared to threshold. In contrast with the network model, the H-FRE Eqs. (<ref>) cannot generate sustained oscillations. In fact, a linear stability analysis of steady state solutions in Eqs. (<ref>) shows that the resulting eigenvalues are λ = -α (1 ± √(1-β)), where the parameter α = (τ_m+τ_d)/(2τ_mτ_d) is always positive. Additionally, the parameter β = [4τ_mτ_d(1+Jτ_mΦ')]/(τ_m+τ_d)^2 is also positive, since the f-I curve Φ(x) is an increasing function, and its derivative evaluated at the steady state is then Φ' > 0. Therefore the real part of the eigenvalue λ is always negative and hence steady states are always stable, although damped oscillations are possible, e.g. for strong enough coupling J. Introducing an explicit fixed time delay in Eqs. (<ref>) can lead to the generation of oscillations with a period on the order of about twice the delay <cit.>. Nonetheless, inhibitory networks of spiking neurons robustly show oscillations even in the absence of explicit delays, as seen in Fig. <ref>. This suggests that there is an additional mechanism in the network dynamics, key for driving oscillatory behavior, which H-FRE do not capture.
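The stability claim above is easy to verify numerically. The short script below is a sketch with illustrative parameter values and a generic sigmoidal f-I curve (any increasing Φ would do, it is not the specific curve introduced later in the paper); it locates the fixed point of the H-FRE, evaluates α and β, and confirms that the eigenvalue λ = -α(1 ± √(1-β)) has a negative real part, i.e. at most damped oscillations.

```python
import math, cmath
from scipy.optimize import brentq

tau_m, tau_d = 10.0, 5.0       # ms, as in the network simulation above
J, Theta = 20.0, 4.0           # illustrative values

def Phi(x):                    # generic increasing f-I curve (placeholder)
    return 0.1 / (1.0 + math.exp(-x))

# fixed point: R* = Phi(-J*tau_m*R* + Theta)
R_star = brentq(lambda R: R - Phi(-J * tau_m * R + Theta), 0.0, 1.0)

eps = 1e-6
Phi_prime = (Phi(-J * tau_m * R_star + Theta + eps)
             - Phi(-J * tau_m * R_star + Theta - eps)) / (2 * eps)
alpha = (tau_m + tau_d) / (2 * tau_m * tau_d)
beta = 4 * tau_m * tau_d * (1 + J * tau_m * Phi_prime) / (tau_m + tau_d) ** 2
lam = -alpha * (1 + cmath.sqrt(1 - beta))   # one branch of the eigenvalue pair
print("R* =", R_star, " Re(lambda) =", lam.real)   # Re(lambda) < 0: always stable
```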
Simulations in the corresponding network model reveal that this increase is due to the synchronous spiking of a subset of neurons <cit.>. This increase in firing rate leads to a hyperpolarization of the mean voltage through the quadratic term in R in Eq. (<ref>b). This term describes the effect of the neuronal reset.This decrease in voltage in turn drives down the mean firing rate, and the process can repeat.Therefore the interplay between mean firing rateand mean voltage in Eqs. (<ref>) can generate oscillatory behavior; this behaviorcorresponds to transient bouts of spikesynchrony in the spiking network model. It is precisely the latency inherent in this interplay which provides the effectivedelay, which when coupled with synaptic kinetics, generates self-sustained fast oscillations.In fact, in thelimit of instantaneous synapses (τ_d→ 0), Eqs. (<ref>) robustly display damped oscillationsdue to the spike generation and reset mechanism described in the preceding paragraph <cit.>.Contrast thiswith the dynamics in Eqs.(<ref>) in the same limit: the resulting H-FRE is one dimensional and hencecannot oscillate.On the face of things the Eqs. (<ref>) appear to have an utterly distinctfunctional form from the traditional Wilson-Cowan Eqs.(<ref>). In particular, the f-I curve in H-FRE traditionally exhibits an expansive nonlinearity at low rates and a linearization or saturation at high rates, e.g. a sigmoid. There is no suchfunction visible in the QIF-FRE which have only quadratic nonlinearities.However, thisseeming inconsistency is simply due to the explicit dependence of the steady-state f-I curve on the subthresholdvoltage in Eqs. (<ref>).In fact, the steady-state f-I curve in the QIF-FRE is “typical” in the qualitative sense described above.Specifically, solving for the steady state value of thefiring rate in Eqs. (<ref>) yieldsR_*= Φ( - J τ_m R_*+Θ), where Φ (I) = 1/√(2)πτ_m√(I+√(I^2+Δ^2)). The f-I curve Eq. (<ref>) is shown in Fig. <ref> for several values of theparameter Δ, which measures the degree of heterogeneity in the network. Therefore, the steady-state firing rate in Eqs. (<ref>), which corresponds exactlyto that in a network of heterogeneous QIF neurons, could easily be captured in a heuristicmodel such as Eqs. (<ref>) by taking the function Φ to have the form as inEq. (<ref>).On the other hand, the non-steady behavior inEqs. (<ref>), and hence in spiking networks as well, can be quite different fromthat in the heuristic Eqs. (<ref>). §.§.§ Fast oscillations in the QIF-FREWe have seen that decreasing the time constant of synaptic decay τ_d in a network of inhibitory spiking neuronslead to sustained fast oscillations, while such a transition to oscillations is not found in the heuristic rate equations, in whichthe synaptic dynamics are taken into account Eqs. (<ref>). The exact QIF-FRE, on the other hand, do generate oscillations in this regime. Figure <ref> shows a comparison of the firing rate R and synaptic variable Sfrom simulations of the QIF-FRE Eqs.(<ref>), with that of theH-FRE Eqs. (<ref>), for two different values of thesynaptic time constants.Additionally, we also performed simulations of a network of N=5× 10 ^4QIF neurons. The mean firing rate of the network is shown in red, and perfectly agrees with the firing rate of the low dimensional QIF-FRE (solid black lines). The function Φ in Eqs. (<ref>) is chosen to be that from Eq. 
(<ref>), which is why the firing rate from both models converges to the same steady state value when oscillations are not present (panels (b,d)for τ_d = 50 ms).We will study the transition to fast oscillationsin Eqs.(<ref>) in greater details in the following sections. §.§ Linear stability analysis of the QIF-FRE We can investigate the emergence of sustained oscillations in Eqs. (<ref>)by considering small amplitude perturbations of the steady-state solution. If we take R = R_*+δ Re^λ t,V = V_*+δ Ve^λ t and S = S_*+δ Se^λ t,where δ R, δ V, δ S ≪ 1,then the sign of the real part of the eigenvalue λ determines whether the perturbation grows or not. Plugging this ansatz into Eqs. (<ref>) yields three coupled linearequations which are solvable if the followingcharacteristic equation also has a solution -2 Jτ_m R_* = (1+ τ_dλ ) [(2πτ_m R_*)^2 + (τ_mλ+ Δ/πτ_mR_*)^2 ]. The left hand side ofEq. (<ref>) is always negative and, for τ_d=0, this implies thatthe solutions λ are necessarily complex.Hence, for instantaneous synapses, the fixed point of the QIF-FREis always of focus type, reflecting transient episodesof spike synchrony in the neuronal ensemble <cit.>.In contrast, setting τ_d=0 in the H-FRE,the system becomes first orderand oscillations are not possible. This is the critical difference between thetwo firing rate models.In fact, and in contrast with the eigenvaluesEq. (<ref>) corresponding to the growth rate of smallperturbations in the H-FRE, here oscillatory instabilities may occur for nonvanishing values of τ_d. Figure <ref> shows the Hopf boundaries obtained from Eq. (<ref>), as a function of the normalized synaptic strength j= J/√(Θ) and the ratioof the synaptic andneuronal time constants τ =√(Θ)τ_d/τ_m,and for different values of the ratio δ = Δ/Θ —see Materials and Methods, Eqs.(<ref>-<ref>).The shaded regions correspond to parameter values where the QIF-FRE display oscillatory solutions.§.§.§ Identical neurons In the simplest case of identical neurons we find the boundaries of oscillatoryinstabilities explicitly. Indeed, substitutingλ=ν+iω in Eq. (<ref>) we find that, near criticality (|ν|≪ 1),the real part of the eigenvalue isν≈JτR_*/1+ (2 πτ_d R_*)^2 . Thus, the fixed point (<ref>) is unstable for J τ>0, and changes itsstability for either J=0, or τ=0. In particular, given a non-zerosynaptic time constant there is an oscillatoryinstability as the sign of the synaptic coupling J changes from positive to negative. Therefore oscillations occur only for inhibitory coupling <cit.>.The frequency along this Hopf bifurcation line isdetermined entirely by the intrinsic firing rate of individual cells ω_c=2π R_*. On the other hand, in the limit of fast synaptic kinetics,i.e. τ_d=0 in Eq. (<ref>), we find another Hopf bifurcation withω_c=1/τ_m√(2τ_mR_*(J+2π^2τ_mR_*)).Thisreflects the fact that oscillations cannot be induced if the synaptic interactions are instantaneous.The left panel of Figure <ref> shows the phase diagram with the Hopfboundaries depicted in red, reflecting that oscillations are found for all valuesof inhibitory coupling and for non-instantaneous synaptic kinetics. §.§.§ Heterogeneous neurons Once heterogeneity is added to the network the region of sustained oscillatory behaviorshrinks, see Fig.<ref>, center and right. 
The red closed curves correspond to the Hopf bifurcations, which have been obtained inparametric form from the characteristic equation (<ref>), see Materials and Methods.Note that for small levels of δ, oscillations are present in aclosed region of the phase diagram, and disappear for large enough values of τ (thesynaptic time constant relative to the neuronal time constant). Further increases in δ gradually reduce the region of oscillationsuntil it fully disappears at the critical valueδ_c=( Δ/Θ)_c=1/5√(5-2√(5)) =0.1453 …, which has been obtained analytically from the characteristic Eq. (<ref>), seeMaterials and Methods.This result is consistent with numerical studies investigating oscillations inheterogeneous inhibitory networks which indicate thatgamma oscillations are fragile against the presence of quenched heterogeneity <cit.>.In the following, we compare the phase diagrams of Fig. <ref>with numerical results using heterogeneous ensembles of Wang-Buzsáki neurons withfirst order synapses.Instead of using the population mean firing rate or mean synaptic activation, in Fig. <ref> we computed the amplitude of the population meanmembrane potential. This variable is less affected by finite-size fluctuations and hence the regions of oscillations are more easily distinguishable. The results are summarized in Fig. <ref> for different values of δ and have been obtained by systematically increasing the coupling strength k for fixed values of τ_d. The resulting phase diagrams qualitatively agreewith those shown in Fig. <ref> .As predicted by the QIF-FRE, oscillations are found in a closed region in the (τ_d,k)parameter space, and disappear for large enough values of δ.Here, the critical value of δ=σ/I̅ is about 6%, smallerthan the critical valuegiven by Eq. (<ref>). This is due to the steepf-I curve of the WB model, which implies a larger dispersionin the firing rates of the neurons even for small heterogeneities in theinput currents. Additionally, for small τ_d (fast synaptic kinetics) and strong coupling k,we observed small regions where the oscillations coexist with the asynchronousstate —not shown.Numerical simulations indicate that this bistability is not present in the QIF-FRE. For strong coupling, andcoexisting with the asynchronous state, we also observed various clustering states, already reported in the original paper of Wang & Buzsáki <cit.>. Clusteringin inhibitory networks has also been observed in populations of conductance-based neuronswith spike adaptation <cit.> or time delays <cit.>.The fact that such states do not emerge in the model Eqs. (<ref>) may bedue to the purely sinusoidal shape of the phase resettingcurve of the QIF model <cit.>.§.§ Firing Rate Equations in the limit of slow synapsesWe have seen that the oscillations which emerge in inhibitory networksfor sufficiently fast synaptic kinetics arecorrectly described by the firing rate equations Eqs. (<ref>),but not by the heuristic Eqs. (<ref>).The reason for this is that the oscillationscrucially depend on the interaction between the population firing rate and the subthreshold membrane potential during spike initiation and reset;this interaction manifests itself at the network level through spike synchrony.Therefore, if one could suppress the spike synchrony mechanism, and hence thedependence on the subthreshold membrane potential, in Eqs. (<ref>), the resultingequations ought to bear resemblance to Eqs. (<ref>).In fact, as we saw in Fig. 
<ref>,the two firing rate models become more similarwhen the synaptic kinetics become slower.Nextweshow that the two models become identical, formally, in the limit of slow synaptic kinetics. To show this, we assume the synaptic time constant is slow, namelyτ_d = τ̅_d/ϵ where 0 < ϵ≪ 1, and rescale timeas t̅ = ϵ t. In this limit we are tracking the slow synaptic dynamics inwhile the neuronal dynamics are stationary to leading order,i.e. R_* = Φ (-Jτ_mS +Θ ). Therefore, the dynamics reduce to the first order equation τ_d Ṡ = -S+ Φ(-Jτ_m S+Θ). Notably, this shows that the QIF-FRE Eqs. (<ref>),and the H-FRE (<ref>), do actually have the same dynamics in the limit of slow synapses. In other words, Eq. (<ref>) is formally equivalentto the Wilson-Cowan equations for a single inhibitory population, and thisestablishes a mathematical link between the QIF-FRE and Heuristic firing rate descriptions.Additionally, considering slow second order synaptic kinetics (not shown), allows one to reduce the QIF-FRE with either alpha or double exponential synapsesto a second-order system that formally corresponds tothe so-called neural mass models largely used for modeling EEG data,see e.g. <cit.>. §.§.§ External inputs and breakdown of the slow-synaptic limit Eq. (<ref>) It is important to note that, in the derivation of Eq. (<ref>)we considered external inputs Θ to be constant. Then, if synapses are slow, the neuronal variables (R in the case of Eqs. (<ref>) and R and V in the case ofEqs. (<ref>)) decay rapidly to their fixed point values. However even in the limit of slow synapses, such reduction can break down if externalinputs are time-varying Θ=Θ(t), with a time-scale which itself is notsufficiently slow. To demonstrate this, in Fig. <ref>, we compared the dynamics of theQIF-FRE and H-FRE with the approximation Eq. (<ref>), for periodic stimuli of variousperiods —panels (g-i)—, and always considering slow synapses, τ_d=100 ms.As expected, the models show good agreement for very slow external inputs—see panels (a,d)—, but this discrepancy is strongly magnified for fast external inputs Specifically, for fast inputs —see panels (c,f)—, thedynamics of the S and R variables of the QIF-FRE are clearly different form thoseof the other models.This illustrates that even in the limit of slow synapses, the response of spikingnetworks to arbitrary time-varying inputs will always generate some degree of spikesynchrony. § DISCUSSION Firing rate models, describing the average activity of large neuronal ensembles are broadly used in computational neuroscience.However, these modelsfail todescribe inhibition-based rhythms, typically observed in networks of inhibitory neurons with synaptic kinetics <cit.>. To overcome this limitation, some authors heuristically included explicit delays in traditional FRE, and found qualitative agreement with the oscillatory dynamics observed in simulations of spiking neurons with both synaptic kinetics and fixed time delays <cit.>. Nonetheless it remains unclear why traditional H-FRE with first order synaptic kinetics do not generateinhibition-based oscillations. Here we have investigated a novel class of FRE which can be rigorously derived from populations of spiking (QIF) neurons <cit.>. Networks of globally coupledQIF neurons with fast inhibitory synapses readily generate fast self-sustained oscillations.The corresponding exact FRE, which we call the QIF-FRE, therefore also generates oscillations. 
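As a numerical companion to the slow-synapse limit discussed above: when synapses are slow and inputs stationary, both the QIF-FRE and the H-FRE relax to the same fixed point, namely the self-consistent solution of R_* = Φ(-Jτ_m R_* + Θ) with the f-I curve Φ(I) introduced earlier. The sketch below solves this self-consistency condition for illustrative parameter values (J, Θ and Δ are assumptions) and also reports the corresponding mean voltage V_* = -Δ/(2πτ_m R_*) given in the Materials and Methods.

```python
import math
from scipy.optimize import brentq

tau_m, J, Theta, Delta = 10.0, 15.0, 4.0, 0.3   # illustrative values (time in ms)

def Phi(I):
    # steady-state f-I curve of the heterogeneous QIF population, as given in the Results
    return math.sqrt(I + math.sqrt(I * I + Delta * Delta)) / (math.sqrt(2.0) * math.pi * tau_m)

R_star = brentq(lambda R: R - Phi(-J * tau_m * R + Theta), 0.0, 1.0)
V_star = -Delta / (2.0 * math.pi * tau_m * R_star)
print("R* = %.4f spikes/ms (%.1f Hz),  V* = %.3f" % (R_star, 1e3 * R_star, V_star))
```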
The benefit of havinga simple macroscopic description for the network dynamics is its amenability to analysis. In particular, thenonlinearities in Eqs.(<ref>), which arise due to the spike initiation and reset mechanism in the QIF model, conspire to generate damped oscillations which reflect transient spike synchrony in the network.This oscillatory mode can be driven by sufficiently fast recurrent inhibitory synaptic activation, leading to sustained oscillations.This suggests that any mean-field description of network activity which neglects subthreshold integrationwill notproperly capture spike-synchrony-dependent dynamical behaviors, including fast inhibitory oscillations. The QIF-FRE have also allowed us to generate a phase diagram for oscillatory behavior in a network ofQIF neurons withease via a standard linear stability analysis, see Fig.<ref>. This phase diagram agrees qualitatively with that of an equivalent network of Wang-Buzsáki neurons, suggesting that the QIF-FRE not only provide an exact description of QIF networks, but also a qualitatively accurate description of macroscopic behaviors in networks of Class I neurons in general. In particular, theQIF-FRE capture the fragility of oscillations to quenched variability in the network,a feature that seems to be particularly pronounced for Class 1 neuronal models compared to neuralmodels with so-called Class 2 excitability <cit.>.Finally we have shown that the QIF-FRE and the heuristic H-FRE are formally equivalent in the limit of slow synapses. In this limit the neuronal dynamics is slaved to the synaptic activation and well-described by Eq. (<ref>), as long asexternal inputs are stationary. In fact, in the absence of quenched heterogeneity (Δ = 0), the approximationfor slow synapses Eq. (<ref>) is also fully equivalent to the reduction for slowsynapses in networks of Class 1 neurons derived by Ermentrout in <cit.>. This further indicates that the QIF-FRE are likely valid for networks ofClass 1 neurons in general.However, we also show that in the more biologically plausible scenario of time-varying external drive some degree of neuronal synchronization is generically observed, as in Fig. (<ref>), and the slow-synapse reduction Eq. (<ref>) is not valid. The results presented here are obtained undertwo important assumptions that need to be taken into account when comparing our workto the existing literature on fast oscillations in inhibitory networks.(i) A derivation of an exact firing rate model for a spiking neuron network isonly possible for ensembles of QIF neurons, which are the canonical model forClass 1 excitability <cit.>.Although many relevant computational studies on fast inhibitoryoscillations also consider Class 1 neurons <cit.>,experimental evidence indicates that fast spiking interneurons arehighly heterogeneous in their minimal firing rates in responseto steady currents, and that a significant fraction of them areClass 2 <cit.> —but see also <cit.>. (ii) Our derivation of the QIF-FRE is valid for networks of globally coupled QIF neurons, with Lorentzian-distributed currents.In this system inhibition-based oscillations are only possible when the majority of theneurons are self-sustained oscillators, i.e. for Θ>0 inEq. (<ref>), and are due to thefrequency locking of a fraction of the oscillators in the population <cit.> —as it can be seen in the raster plot of Fig. <ref>(c).In this state, the frequency of the cluster of synchronized oscillators coincides withthe frequency of the mean field. 
The value of the frequency itself is determinedthrough an interplay between single-cell resonance and network effects. Specifically, thesynchronized neurons have intrinsic spiking frequencies near that of the mean-field oscillationand hence are driven resonantly. This collective synchronization differs from the so-called sparse synchronizationobserved in inhibitory networks of identical Class 1 neurons under theinfluence of noise <cit.>. In the sparsely synchronized stateneurons fire stochastically at very low rates,while the population firing rate displays thefast oscillations as the ones reported here. Oscillatory phenomena arising from single-cell resonances, and which reflectspike synchrony at the population level, are ubiquitous innetworks of spiking neurons.Mean-field theory for noise-driven networks leading to a Fokker-Planck formalism, allows for a linear analysis of the response of the network to weak stimuliwhen the network is in an asynchronous state <cit.>. Resonances can appear in the linear response when firing rates are sufficiently high or noise strength sufficiently low. Recent work has sought to extend H-FRE in this regime byextracting the complex eigenvalue corresponding to the resonance and using it to construct the linear operator of a complex-valued differential equation, the real part of which is thefiring rate <cit.>. Other work has developed an expression for the response of spiking networks to external drive, which often generates resonance-related damped oscillations, through an eigenfunction expansion of the Fokker-Planck equation <cit.>.Our approach is similar in spirit to such studies in that we also work with a low dimensional description of the network response.In contrast to previous work our equations are an exact description of the macroscopic behavior, although they are only validfor networks of heterogeneous QIF neurons. Nonetheless, the QIF-FRE are simple enough to allow for anintuitive understanding of the origin of fast oscillations in inhibitory networks,and in particular, of why these oscillations are not properly captured by H-FRE.§ ACKNOWLEDGMENTSF.D. and E.M acknowledge supportby the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 642563. A.R. acknowledges a project grant from the Spanish ministry of Economics and Competitiveness, Grant No. BFU2012-33413. A.R. has been partially funded by the CERCA progam of the Generalitat de Catalunya. E.M. acknowledges the projects grantsfrom the Spanish ministry of Economics and Competitiveness, Grants No. PSI2016-75688-P andNo. PCIN-2015-127. § MATERIALS AND METHODS §.§ Populations of inhibitory Quadratic Integrate and Fire neurons We model fast-spiking interneurons, the dynamics of which are well-described by theHodgkin-Huxley equations with only standard spiking currents.Many models of inhibitory neurons are Class 1 excitable <cit.>, including for example theWang-Buszáki (WB) <cit.>,and the Morris-Lecar models <cit.>.Class 1 models are characterized by the presence of a saddle-node bifurcation on an invariant circle at the transition from quiescence to spiking.One consequence of this bifurcation structure is the fact the spiking frequency can be arbitrarily low near threshold. 
Additionally, near threshold the spiking dynamics are dominated by the time spent in the vicinity of the saddle-node itself, allowing for a formal reduction in dimensionality from the full neuron model to a reduced normal form equation for a saddle-node bifurcation <cit.>.This normal form,which is valid for any Class 1 model near threshold, is known as the quadratic integrate-and-fire model (QIF).Using a change of variables, the QIF model can be transformed to aphase model, called Theta-Neuron model <cit.>, which has an strictly positivePhase Resetting Curve (PRC).Neuron models with strictly positive PRC are called Type 1 neurons,indicating that perturbations always produce anadvance (and not a delay) of their phase. In general, Class 1 neurons have aType 1 PRC <cit.>, but see <cit.>. In a network of QIF neurons, the neuronal membrane potentials are {Ṽ_i}_i=1,…,N, which obey thefollowing ordinary differential equations <cit.>: C dṼ_i/dt=g_L (Ṽ_i-V_t)(Ṽ_i-V_r)/(V_t-V_r) +I_0,i where C is the cell capacitance, g_L is the leak conductance and I_0,i areexternal currents. Additionally,V_r and V_t represent the resting potential and threshold of the neuron, respectively.Using the change of variables Ṽ'_i=Ṽ_i -(V_t+V_r)/2,and then rescaling the shifted voltages as V_i = Ṽ'_i/ (V_t-V_r),the QIF model (<ref>) reduces toτ_m V̇_i= V_i^2 + I_i where τ_m=C/g_L is the membrane time constant,I_i=I_0,i/(g_L(V_t-V_r))-1/4 and the overdot represents derivation withrespect to time t. Note that in the model (<ref>) thevoltage variables V_i and the inputs I_i do not have dimensions.Thereafter we work with QIF model its simplest form Eq. (<ref>). We assume that the inputs are I_i= η_i - J τ_m S, where J is the inhibitory synaptic strength, and S is thesynaptic gating variable. Finally, the currentsη_i are constants taken from some prescribed distribution that herewe consider it is a Lorentzian of half-width Δ, centered at Θg(η)= 1/πΔ/(η-Θ)^2 +Δ^2. In numerical simulations the currents wereselected deterministically to represent the Lorentzian distribution as: η_i=Θ+Δtan(π/2 (2i-N-1)/(N+1)), for i=1,…,N. In the absence of synaptic input, the QIF model Eqs.(<ref>,<ref>) exhibits two possible dynamical regimes, depending on the sign of η_i. If η_i<0, the neuron is excitable, and an initial conditionV_i(0)<√(-η_i), asymptotically approaches the resting potential -√(-η_i). For initial conditions above theexcitability threshold, V_i(0)>√(-η_i), the membrane potential grows without bound.In this case, once the neuron reaches a certainthreshold value V_θ≫ 1, it is reset to a new value -V_θ after arefractory period 2τ_m/V_θ (in numerical simulations, we choose V_θ = 100).On the other hand, if η_j>0, the neuronbehaves as an oscillator and, if V_θ→∞, it fires regularly with a period T=πτ_m/√(η_i). The instantaneous population mean firing rate is R= lim_τ_s → 01/N1/τ_s∑_j=1^N∑_k∫_t-τ_s^tdt' δ (t'-t_j^k), where t_j^k is the time of the kth spike of jth neuron, and δ (t)is the Dirac delta function. Finally, the dynamics of thesynaptic variable obeys the first order ordinary differential equation τ_d Ṡ =-S+ R.For the numerical implementation of Eqs. (<ref>,<ref>), we set τ_s=10^-2τ_m. To obtain a smoother time series, the firing rate plotted in Fig. <ref> was computed according to Eq. (<ref>) with τ_s=3 · 10^-2τ_m. §.§ Firing Rate Equations for populations of Quadratic Integrateand Fire neuronsRecently Luke et al. 
derived the exact macroscopic equations for a pulse-coupled ensemble ofTheta-Neurons <cit.>, and this has motivated a considerable number of recent papers <cit.>. This work applies the so-called Ott-Antonsen theory <cit.> to obtain a low-dimensional description of the networkin terms of the complex Kuramoto order parameter.Nevertheless, it is is not obvious how these macroscopic descriptionsrelate to traditional H-FRE.As we already mentioned, the Theta-neuron model exactlytransforms to the Quadratic Integrate andFire (QIF) model by a nonlinear change of variables <cit.>.This transformation establishes a map betweenthe phase variable θ_i ∈(-π,π] of a Theta neuron i,and the membrane potential variable V_i ∈ (-∞,+∞) of the QIF model Eq. (<ref>).Recently it was shown that, under some circumstances,a change of variables also exists at the population level <cit.>.In this case, the complex Kuramoto order parameter transforms into a novel order parameter,composed of two macroscopic variables:The population-mean membrane potential V, andthe population-mean firing rate R. As a consequence of that,the Ott-Antonsen theory becomes a unique method for deriving exact firing rate equations for ensembles of heterogeneous spiking neurons —see also <cit.>for recent alternative approaches.Thus far, the FRE for QIF neurons (QIF-FRE) have beensuccessfully applied to investigate the collective dynamics ofpopulations of QIF neurons with instantaneous <cit.>, time delayed <cit.> and excitatory synapseswith fast synaptic kinetics <cit.>.However, to date the QIF-FRE have not been used toexplore the dynamics of populations of inhibitory neurons with synaptic kinetics —but see <cit.> for a numerical investigation using the low-dimensionalKuramoto order parameter description.The method for deriving the QIF-FRE corresponding to a population of QIFneurons Eq. (<ref>) is exact in thethermodynamic limit N→∞, and, under the assumption that neurons are all-to-all coupled.Additionally, if the parameters η_i in Eq. (<ref>)(which in the thermodynamic limit become a continuous variable)are assumed to be distributed according to the Lorentzian distribution Eq. (<ref>), the resulting QIF-FRE become two dimensional.For instantaneous synapses,the macroscopic dynamics of the population of QIF neurons (<ref>)is exactly described by the system of QIF-FRE <cit.>τ_mṘ = Δ/πτ_m + 2RV, τ_m V̇ = V^2 -(πτ_mR)^2- J τ_m R +Θ ,where R is the mean firing rate and V the mean membrane potential in the network. With exponentially decaying synaptic kinetics the QIF-FRE Eqs. (<ref>) become Eqs.(<ref>). In our study, we consider Θ>0, so that the majority of the neurons areoscillatory —see Eq.(<ref>).§.§.§ Fixed pointsThe fixed points of the QIF-FRE (<ref>) are obtainedimposing Ṙ=V̇=Ṡ =0. Substituting this intoEqs. (<ref>),we obtain the fixed point equation V^*=-Δ/(2 πτ_m R^*), the firing rate given by Eq. (<ref>) and S_*=R_*. Note that for homogeneous populations, Δ=0, the f-I curve Eq. (<ref>) reduces toΦ(I)=1/π√(|I|_+), which displays a clear threshold at I=0(Here, |I|_+=I if I≥0, and vanishes for I<0.) This function coincides with the squashing function foundby Ermentrout for homogeneous networks of Class 1 neurons <cit.>.As expected, for heterogeneous networks,the well-defined threshold of Φ(I) for Δ=0 is lost and the transfer function becomes increasingly smoother. §.§.§ Nondimensionalized QIF-FRE The QIF-FRE (<ref>) have five parameters. 
§.§.§ Nondimensionalized QIF-FRE

The QIF-FRE (<ref>) have five parameters. It is possible to non-dimensionalize the equations so that the system can be written solely in terms of 3 parameters. Throughout, we adopt the following notation: we use capital letters to refer to the original variables and parameters of the QIF-FRE, and lower case letters for non-dimensional quantities. A possible non-dimensionalization, valid for Θ>0, is
ṙ = δ/π + 2rv,
v̇ = v^2 - π^2 r^2 - j s + 1,
τ ṡ = -s + r,
where the overdot here means differentiation with respect to the non-dimensional time t̃= (√(Θ)/τ_m) t. The other variables are defined as r=(τ_m/√(Θ)) R, v = V/√(Θ), s=(τ_m/√(Θ)) S. The new coupling parameter is defined as j=J/√(Θ), and the parameter δ= Δ/Θ describes the effect of the Lorentzian heterogeneity (<ref>) on the collective dynamics of the FRE (<ref>). Though the Lorentzian distribution does not have finite moments, for the sake of comparison of our results with those of studies investigating the dynamics of heterogeneous networks of inhibitory neurons, e.g. <cit.>, the quantity δ can be compared to the coefficient of variation, which measures the ratio of the standard deviation to the mean of a probability density function. Finally, the non-dimensional time constant τ= √(Θ)τ_d/τ_m measures the ratio of the synaptic time constant to the most-likely period of the neurons (times π), T̅=πτ_m/√(Θ).

In numerical simulations we will use the original QIF-FRE (<ref>), with Θ=4 and τ_m=10 ms. Thus T̅=10π/2 = 5π ≈ 15.71 ms, so that the most likely value of the neurons' intrinsic frequency is f̅≈ 63.66 Hz. However, our results are expressed in a more compact form in terms of the quantities j, δ, τ, and we will use them in some of our calculations and figures.

§.§ Parametric formula for the Hopf boundaries

To investigate the existence of oscillatory instabilities we use Eq. (<ref>) written in terms of the non-dimensional variables and parameters defined previously, which is
-2 j r_* = (1+ λ̃τ) [(2π r_*)^2 + (λ̃+ δ/(π r_*))^2 ].
Imposing the condition of marginal stability λ̃=iω̃ in Eq. (<ref>) gives the system of equations
0 = 2 j r_* + 4π^2 r_*^2 + 4 v_*^2 - (1 - 4 v_* τ) ω̃^2
0 = ω̃ (4 v_* - 4π^2 r_*^2 τ - 4 v_*^2 τ + τω̃^2)
where the fixed points are obtained from Eq. (<ref>) by solving 0=v_*^2-π^2 r_*^2-j r_*+1, with v_*=-δ/(2π r_*). Eq. (<ref>) gives the critical frequency
ω̃=(2/τ)√((πτ r_*)^2 + τ v_* (τ v_* -1)).
The Hopf boundaries can be plotted in parametric form by solving Eq. (<ref>) for j, and substituting j and ω̃ into Eq. (<ref>). Then solving Eq. (<ref>) for τ gives the Hopf bifurcation boundaries
τ^±(r_*) = [π^2 r_*^2 - 1 + 7 v_*^2 ± √((π^2 r_*^2 -1)^2 - (14 + 50π^2 r_*^2) v_*^2 - 15 v_*^4)] / [16 v_* (π^2 r_*^2 + v_*^2)].
Using the parametric formula
(j(r_*), τ^±(r_*)) = ( v_*^2/r_* + 1/r_* - π^2 r_* , τ^±(r_*)),
we can plot the Hopf boundaries for particular values of the parameter δ, as r_* is changed. Figure <ref> shows these curves in red, for δ=0.05 and δ=0.075. They define a closed region in parameter space (shaded region) where oscillations are observed.
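These boundaries are straightforward to trace numerically; the sketch below simply transcribes the parametric expressions above (so it inherits their conventions), sweeping the fixed-point rate r_* for a given δ.

import numpy as np

def hopf_boundaries(delta, n=2000):
    r = np.linspace(1e-3, 1.0 / np.pi, n)        # fixed-point rate r_* of the rescaled FRE
    v = -delta / (2.0 * np.pi * r)               # v_* = -delta/(2 pi r_*)
    j = v**2 / r + 1.0 / r - np.pi**2 * r        # from the fixed-point condition
    disc = (np.pi**2 * r**2 - 1.0)**2 \
         - (14.0 + 50.0 * np.pi**2 * r**2) * v**2 - 15.0 * v**4
    ok = disc >= 0.0                             # the curves exist only where the root is real
    num = np.pi**2 * r**2 - 1.0 + 7.0 * v**2
    den = 16.0 * v * (np.pi**2 * r**2 + v**2)
    root = np.sqrt(disc[ok])
    tau_minus = (num[ok] - root) / den[ok]
    tau_plus = (num[ok] + root) / den[ok]
    return j[ok], tau_minus, tau_plus

j, tau_lo, tau_hi = hopf_boundaries(0.05)        # delta = 0.05, as for one of the curves shown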
§.§.§ Calculation of the critical value δ_c, Eq. (<ref>)

The functions τ^± meet at two points, where the argument of the square root in Eq. (<ref>) is zero. This gives four different roots for δ, and only one of them is real and positive:
δ^*(r_*) = (2π r_*/√(15)) √( 8√(1 + 5π^2 r_*^2 + 10π^4 r_*^4) - 7 - 25π^2 r_*^2).
This function has two positive zeros, one at r_*min=0 and one at r_*max=1/π, corresponding, respectively, to the minimal (j →∞) and maximal (j=0) values of the firing rate for identical neurons (δ=0). Between these two points the function attains a maximum at r_*=r_*c, with
r_*c = 1/(√(2√(5)) π) = 0.1505…
The function δ^*(r_*) evaluated at its local maximum r_*=r_*c gives Eq. (<ref>).

§.§ Populations of Wang-Buzsáki neurons

We perform numerical simulations using the Wang-Buzsáki (WB) neuron model <cit.>, and compare them with our results using networks of QIF neurons. The onset of oscillatory behavior in the WB model is via a saddle-node on the invariant circle (SNIC) bifurcation. Therefore, populations of WB neurons near this bifurcation are expected to be well described by the theta-neuron/QIF model, the canonical model for Class 1 neural excitability <cit.>.

We numerically simulated a network of N all-to-all coupled WB neurons, where the dynamics of each neuron is described by the time evolution of its membrane potential <cit.>
C_m V̇_i = -I_Na,i - I_K,i - I_L,i - I_syn + I_app,i + I_0.
The cell capacitance is C_m=1 μF/cm^2. The inputs I_app (in μA/cm^2) are distributed according to a Lorentzian distribution with half-width σ and center I̅. In numerical simulations these currents were selected deterministically to represent the Lorentzian distribution as I_app,i=I̅+σ tan(π/2 (2i-N-1)/(N+1)), for i=1,…,N. The constant input I_0=0.1601 μA/cm^2 sets the neuron at the SNIC bifurcation when I_app=0. The leak current is I_L,i=g_L(V_i-E_L), with g_L=0.1 mS/cm^2, so that the passive time constant is τ_m=C_m/g_L=10 ms. The sodium current is I_Na,i=g_Na m_∞^3 h (V_i-E_Na), where g_Na=35 mS/cm^2, E_Na=55 mV, and m_∞= α_m/(α_m+β_m) with α_m(V_i)=-0.1(V_i+35)/(exp(-0.1(V_i+35))-1), β_m(V_i)=4 exp(-(V_i+60)/18). The inactivation variable h obeys the differential equation ḣ=ϕ(α_h(1-h)-β_h h), with ϕ=5, α_h(V_i)=0.07 exp(-(V_i+58)/20) and β_h(V_i)=1/(exp(-0.1(V_i+28))+1). The potassium current follows I_K,i=g_K n^4(V_i-E_K), with g_K=9 mS/cm^2, E_K=-90 mV. The activation variable n obeys ṅ=ϕ(α_n(1-n)-β_n n), where α_n(V_i)=-0.01(V_i+34)/(exp(-0.1(V_i+34))-1) and β_n(V_i)=0.125 exp(-(V_i+44)/80). The synaptic current is I_syn=k C_m S, where the synaptic activation variable S obeys the first-order kinetics Eq. (<ref>) and k is the coupling strength (expressed in mV). The factor C_m ensures that the effect of an incoming spike on the neuron is independent of its passive time constant. The neuron is defined to emit a spike when its membrane potential crosses 0 mV. The population firing rate is then computed according to Eq. (<ref>), with τ_s=10^-2 ms.

In numerical simulations we considered N=1000 all-to-all coupled WB neurons, using the Euler method with time step dt=0.001 ms. In Fig. <ref>, the membrane potentials were initially randomly distributed according to a Lorentzian function with half-width 5 mV and center -62 mV. Close to the bifurcation point, this is equivalent to uniformly distributing the phases of the corresponding Theta-Neurons in [-π,π] <cit.>. The parameters were chosen as I̅=0.5 μA/cm^2, σ=0.01 μA/cm^2 and k=6 mV. The population firing rate was smoothed by setting τ_s=2 ms in Eq. (<ref>). In Fig. <ref>, we systematically varied the coupling strength and the synaptic decay time constant to determine the range of parameters displaying oscillatory behavior. For each fixed value of τ_d we varied the coupling strength k; we performed two series of simulations, for increasing and decreasing coupling strength. In Fig. <ref> we only show results for increasing k.
All quantities were measured after a transient of 1000 ms.To obtain the amplitude of the oscillations of the mean membrane potential,we computed the maximal amplitude V̅_max-V̅_minover time windows of 200 ms for 1000 ms,and then averaged over the five windows.10 WC72 Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972;12(1):1–24.ET10 Ermentrout GB, Terman DH. Mathematical foundations of neuroscience. vol. 64. Springer; 2010.GKN+14 Gerstner W, Kistler WM, Naud R, Paninski L. Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge University Press; 2014.DA01 Dayan P, Abbott LF. Theoretical neuroscience. Cambridge, MA: MIT Press; 2001.Cow14 Cowan J. A personal account of the development of the field theory of large-scale brain activity from 1945 onward. In: Neural fields. Springer; 2014. p. 47–96.CGP14 Coombes S, beim Graben P, Potthast R. Tutorial on neural field theory. In: Neural fields. Springer; 2014. p. 1–43.LRN+00 Latham P, Richmond B, Nelson P, Nirenberg S. Intrinsic dynamics in neuronal networks. I. Theory. Journal of Neurophysiology. 2000;83(2):808–827.SHS03 Shriki O, Hansel D, Sompolinsky H. Rate models for conductance-based cortical neuronal networks. Neural Comput. 2003;15(8):1809–1841.RBH05 Roxin A, Brunel N, Hansel D. Role of delays in shaping spatiotemporal dynamics of neuronal activity in large networks. Phys Rev Lett. 2005;94(23):238103.RM11 Roxin A, Montbrió E. How effective delays shape oscillatory dynamics in neuronal networks. Physica D. 2011;240(3):323–345.WC73 Wilson HR, Cowan JD. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik. 1973;13(2):55–80. doi:10.1007/BF00288786.Ama74 Amari Si. A method of statistical neurodynamics. Kybernetik. 1974;14(4):201–215. doi:10.1007/BF00274806.Nun74 Nunez PL. The brain wave equation: a model for the EEG. Mathematical Biosciences. 1974;21(3):279 – 297. doi:http://dx.doi.org/10.1016/0025-5564(74)90020-0.EC79 Ermentrout GB, Cowan JD. A mathematical theory of visual hallucination patterns. Biological Cybernetics. 1979;34(3):137–150. doi:10.1007/BF00336965.BLS95 Ben-Yishai R, Bar-Or RL, Sompolinsky H. Theory of orientation tuning in visual cortex. Proc Nat Acad Sci. 1995;92(9):3844–3848.PBS+96 Pinto DJ, Brumberg JC, Simons DJ, Ermentrout GB, Traub R. A quantitative population model of whisker barrels: Re-examining the Wilson-Cowan equations. Journal of Computational Neuroscience. 1996;3(3):247–264. doi:10.1007/BF00161134.HS98 Hansel D, Sompolinsky H. Modeling Feature Selectivity in Local Cortical Circuits. In: Koch C, Segev I, editors. Methods in Neuronal Modelling: From Ions to Networks. Cambridge: MIT Press; 1998. p. 499–567.TPM98 Tsodyks M MH Pawelzik K. Neural networks with dynamic synapses. Neural Comput. 1998;10:821.Wil99 Wilson HR. Spikes, decisions, and actions: the dynamical foundations of neurosciences. 1999;.TSO+00 Tabak J, Senn W, O’Donovan MJ, Rinzel J. Modeling of spontaneous activity in developing spinal cord using activity-dependent depression in an excitatory network. J Neurosci. 2000;20:3041–3056.BCG+01 Bressloff PC, Cowan JD, Golubitsky M, Thomas PJ, Wiener MC. Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Philosophical Transactions of the Royal Society of London B: Biological Sciences. 2001;356(1407):299–330. doi:10.1098/rstb.2000.0769.LTG+02 Laing CR, Troy WC, Gutkin B, Ermentrout GB. 
Multiple bumps in a neuronal model of working memory. SIAM Journal on Applied Mathematics. 2002;63(1):62–97.HT06 Holcman D, Tsodyks M. The emergence of up and down states in cortical networks. PLoS Comput Biol. 2006;2(3):e23.MRR07 Moreno-Bote R, Rinzel J, Rubin N. Noise-induced alternations in an attractor network model of perceptual bistability. J Neurophysiol. 2007;98(3):1125–1139.MBT08 Mongillo G, Barak O, Tsodyks M. Synaptic theory of working memory. Science. 2008;319(5869):1543–1546.TWC+11 Touboul J, Wendling F, Chauvel P, Faugeras O. Neural mass activity, bifurcations, and epilepsy. Neural computation. 2011;23(12):3232–3286.MR13 Martí D, Rinzel J. Dynamics of feature categorization. Neural computation. 2013;25(1):1–45.TDD14 Ton R, Deco G, Daffertshofer A. Structure-function discrepancy: inhomogeneity and delays in synchronized neural networks. PLOS Comput Biol. 2014;10(7):e1003736.SOA13 Schaffer ES, Ostojic S, Abbott L. A Complex-Valued Firing-Rate Model That Approximates the Dynamics of Spiking Networks. PLoS Comput Biol. 2013;9(10):e1003301.WB96 Wang XJ, Buzsáki G. Gamma oscillation by synaptic inhibition in a hippocampal interneuronal network model. The journal of Neuroscience. 1996;16(20):6402–6413.WTJ95 Whittington MA, Traub RD, Jefferys JG. Synchronized oscillations in interneuron networks driven by metabotropic glutamate receptor activation. Nature. 1995;373:612–615.WCR+98 White JA, Chow CC, Rit J, Soto-Treviño C, Kopell N. Synchronization and oscillatory dynamics in heterogeneous, mutually inhibited neurons. Journal of computational neuroscience. 1998;5(1):5–16.WTK+00 Whittington MA, Traub RD, Kopell N, Ermentrout B, Buhl EH. Inhibition-based rhythms: experimental and mathematical observations on network dynamics. Int Journal of Psychophysiol. 2000;38(3):315 – 336. doi:http://dx.doi.org/10.1016/S0167-8760(00)00173-2.TJ00 Tiesinga P, José JV. Robust gamma oscillations in networks of inhibitory hippocampal interneurons. Network: Computation in Neural Systems. 2000;11(1):1–23.BH06 Brunel N, Hansel D. How noise affects the synchronization properties of recurrent networks of inhibitory neurons. Neural Comput. 2006;18(5):1066–1110.BH08 Brunel N, Hakim V. Sparsely synchronized neuronal oscillations. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2008;18(1):015113.BVJ07 Bartos M, Vida I, Jonas P. Synaptic mechanisms of synchronized gamma oscillations in inhibitory interneuron networks. Nature reviews neuroscience. 2007;8(1):45–56.Wan10 Wang XJ. Neurophysiological and computational principles of cortical rhythms in cognition. Physiological reviews. 2010;90(3):1195–1268.KFR17 Keeley S, Fenton AA, Rinzel J. Modeling fast and slow gamma oscillations with interneurons of different subtype. Journal of Neurophysiology. 2017;117(3):950–965. doi:10.1152/jn.00490.2016.MPR15 Montbrió E, Pazó D, Roxin A. Macroscopic Description for Networks of Spiking Neurons. Phys Rev X. 2015;5:021028. doi:10.1103/PhysRevX.5.021028.Win67 Winfree AT. Biological rhythms and the behavior of populations of coupled oscillators. J Theor Biol. 1967;16:15–42.Kur84 Kuramoto Y. Chemical Oscillations, Waves, and Turbulence. Berlin: Springer-Verlag; 1984.LB11 Ledoux E, Brunel N. Dynamics of networks of excitatory and inhibitory neurons in response to time-dependent inputs. Frontiers Comp Neurosci. 2011;5:25.VAE94 Van Vreeswijk C, Abbott LF, Bard Ermentrout G. When inhibition not excitation synchronizes neural firing. Journal of Computational Neuroscience. 1994;1(4):313–321. 
doi:10.1007/BF00961879.Erm96 Ermentrout B. Type I membranes, phase resetting curves, and synchrony. Neural Comp. 1996;8:979–1001.HMM95 Hansel D, Mato G, Meunier C. Synchrony in excitatory neural networks. Neural Comput. 1995;7:307–337.KE11 Kilpatrick ZP, Ermentrout B. Sparse Gamma Rhythms Arising through Clustering in Adapting Neuronal Networks. PLoS Comput Biol. 2011;7(11):e1002281. doi:10.1371/journal.pcbi.1002281.EPG98 Ernst U, Pawelzik K, Geisel T. Delay-induced multistable synchronization of biological oscillators. Physical review E. 1998;57(2):2150.Oku93 Okuda K. Variety and generality of clustering in globally coupled oscillators. Physica D: Nonlinear Phenomena. 1993;63(3-4):424–436.HMM93 Hansel D, Mato G, Meunier C. Clustering and slow switching in globally coupled phase oscillators. Phys Rev E. 1993;48:3470–3477. doi:10.1103/PhysRevE.48.3470.KK01 Kori H, Kuramoto Y. Slow switching in globally coupled oscillators: robustness and occurrence through delayed coupling. Phys Rev E. 2001;63:046214. doi:10.1103/PhysRevE.63.046214.Kor03 Kori H. Slow switching and broken cluster state in a population of neuronal oscillators. Int J Mod Phys B. 2003;17:4238–4241. doi:10.1142/S0217979203022246.PR15 Politi A, Rosenblum M. Equivalence of phase-oscillator and integrate-and-fire models. Phys Rev E. 2015;91:042916. doi:10.1103/PhysRevE.91.042916.CPR16 Clusella P, Politi A, Rosenblum M. A minimal model of self-consistent partial synchrony. New J Phys. 2016;18(9):093037.Fre75 Freeman WJ. Mass action in the nervous system. Academic Press, New York; 1975.JR95 Jansen BH, Rit VG. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biological Cybernetics. 1995;73(4):357–366. doi:10.1007/BF00199471.RRW97 Robinson PA, Rennie CJ, Wright JJ. Propagation and stability of waves of electrical activity in the cerebral cortex. Phys Rev E. 1997;56:826–840. doi:10.1103/PhysRevE.56.826.ACN16 Ashwin P, Coombes S, Nicks R. Mathematical Frameworks for Oscillatory Network Dynamics in Neuroscience. The Journal of Mathematical Neuroscience. 2016;6(1):1–92. doi:10.1186/s13408-015-0033-6.TMW+15 Tikidji-Hamburyan RA, Martínez JJ, White JA, Canavier CC. Resonant Interneurons Can Increase Robustness of Gamma Oscillations. Journal of Neuroscience. 2015;35(47):15682–15695. doi:10.1523/JNEUROSCI.2601-15.2015.Erm94 Ermentrout B. Reduction of conductance-based models with slow synapses to neural nets. Neural Comput. 1994;6(4):679–695.Izh07 Izhikevich EM. Dynamical Systems in Neuroscience. Cambridge, Massachusetts: The MIT Press; 2007.BH99 Brunel N, Hakim V. Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput. 1999;11(7):1621–1671.BW03 Brunel N, Wang XJ. What determines the frequency of fast network oscillations with irregular neural discharges? I. Synaptic dynamics and excitation-inhibition balance. Journal of neurophysiology. 2003;90(1):415–430.HM03 Hansel D, Mato G. Asynchronous states and the emergence of synchrony in large networks of interacting excitatory and inhibitory neurons. Neural Computation. 2003;15(1):1–56.GDS+07 Golomb D, Donner K, Shacham L, Shlosberg D, Amitai Y, Hansel D. Mechanisms of firing patterns in fast-spiking cortical interneurons. PLoS Computational Biology. 2007;3(8):e156.THR04 Tateno T, Harsch A, Robinson HPC. Threshold Firing Frequency–Current Relationships of Neurons in Rat Somatosensory Cortex: Type 1 and Type 2 Dynamics. Journal of Neurophysiology. 2004;92(4):2283–2294. 
doi:10.1152/jn.00109.2004.TR07 Tateno T, Robinson HPC. Phase Resetting Curves and Oscillatory Stability in Interneurons of Rat Somatosensory Cortex. Biophys J. 2007;92(2):683–695. doi:10.1529/biophysj.106.088021.MLP+07 Mancilla JG, Lewis TJ, Pinto DJ, Rinzel J, Connors BW. Synchronization of Electrically Coupled Pairs of Inhibitory Interneurons in Neocortex. Journal of Neuroscience. 2007;27(8):2058–2073. doi:10.1523/JNEUROSCI.2715-06.2007.CRT+06 La Camera G, Rauch A, Thurbon D, Lüscher HR, Senn W, Fusi S. Multiple Time Scales of Temporal Response in Pyramidal and Fast Spiking Cortical Neurons. Journal of Neurophysiology. 2006;96(6):3448–3464. doi:10.1152/jn.00453.2006.OB11 Ostojic S, Brunel N. From spiking neuron models to linear-nonlinear models. PLoS Comput Biol. 2011;7(1):e1001056.MD02 Mattia M, Del Giudice P. Population dynamics of interacting spiking neurons. Phys Rev E. 2002;66:051917. doi:10.1103/PhysRevE.66.051917.RE89 Rinzel J, Ermentrout B. Analysis of neural excitability and oscillations. In: Koch C, Segev I, editors. Methods in Neuronal Modelling: From Ions to Networks. Cambridge: MIT Press; 1989. p. 135–171.ML81 Morris C, Lecar H. Voltage oscillations in the barnacle giant muscle fiber. Biophysical journal. 1981;35(1):193–213.EK86 Ermentrout B, Kopell N. Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J Appl Math. 1986;46:233–253.ABC11 Achuthan S, Butera RJ, Canavier CC. Synaptic and intrinsic determinants of the phase resetting curve for weak coupling. Journal of Computational Neuroscience. 2011;30(2):373–390. doi:10.1007/s10827-010-0264-1.EGO12 Ermentrout GB, Glass L, Oldeman BE. The Shape of Phase-Resetting Curves in Oscillators with a Saddle Node on an Invariant Circle Bifurcation. Neural Computation. 2012;24(12):3111–3125. doi:10.1162/NECO_a_00370.LBS13 Luke TB, Barreto E, So P. Complete classification of the macroscopic behavior of a heterogeneous network of theta neurons. Neural Comput. 2013;25(12):3207–3234.SLB14 So P, Luke TB, Barreto E. Networks of theta neurons with time-varying excitability: Macroscopic chaos, multistability, and final-state uncertainty. Physica D. 2014;267(0):16–26. doi:http://dx.doi.org/10.1016/j.physd.2013.04.009.Lai14 Laing CR. Derivation of a neural field model from a network of theta neurons. Phys Rev E. 2014;90:010901. doi:10.1103/PhysRevE.90.010901.Lai15 Laing CR. Exact Neural Fields Incorporating Gap Junctions. SIAM Journal on Applied Dynamical Systems. 2015;14(4):1899–1929.Lai16i Laing CR. Travelling waves in arrays of delay-coupled phase oscillators. Chaos. 2016;26(9). doi:http://dx.doi.org/10.1063/1.4953663.Lai16ii Laing CR. Bumps in Small-World Networks. Frontiers in Computational Neuroscience. 2016;10:53. doi:10.3389/fncom.2016.00053.CB16 Coombes S, Byrne Á. Next generation neural mass models. in Lecture Notes in Nonlinear Dynamics in Computational Neuroscience: from Physics and Biology to ICT Springer (In Press).RM16 Roulet J, Mindlin GB. Average activity of excitatory and inhibitory neural populations. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2016;26(9):093104. doi:10.1063/1.4962326.OS16 O'Keeffe KP, Strogatz SH. Dynamics of a population of oscillatory and excitable elements. Phys Rev E. 2016;93:062203. doi:10.1103/PhysRevE.93.062203.PD16 Pietras B, Daffertshofer A. Ott-Antonsen attractiveness for parameter-dependent oscillatory systems. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2016;26(10):103101. 
doi:10.1063/1.4963371.ERA+17 Esnaola-Acebes JM, Roxin A,Avitabile D, Montbrió E. Synchrony-induced modes of oscillation of a neural field model. Phys Rev E. 2017;96:052407. doi:10.1103/PhysRevE.96.052407.CHC+17 Chandra S, Hathcock D, Crain K, Antonsen TM, Girvan M, Ott E. Modeling the network dynamics of pulse-coupled neurons. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2017;27(3):033102. doi:10.1063/1.4977514.OA08 Ott E, Antonsen TM. Low dimensional behavior of large systems of globally coupled oscillators. Chaos. 2008;18(3):037113. doi:10.1063/1.2930766.OA09 Ott E, Antonsen TM. Long time evolution of phase oscillator systems. Chaos. 2009;19(2):023117. doi:10.1063/1.3136851.OHA11 Ott E, Hunt BR, Antonsen TM. Comment on “Long time evolution of phase oscillators systems”. Chaos. 2011;21:025112.Mat16 Mattia M. Low-dimensional firing rate dynamics of spiking neuron networks. arXiv preprint arXiv:160908855. 2016;.ALB+17 Augustin M, Ladenbauer J, Baumann F, Obermayer K. Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: comparison and implementation. PLOS Computational Biology. 2017;13(6). doi:10.1371/journal.pcbi.1005545.SDG17 Schwalger T, Deger M, Gerstner W. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. PLOS Computational Biology. 2017;13(4):1–63. doi:10.1371/journal.pcbi.1005507.PM16 Pazó D, Montbrió E. From Quasiperiodic Partial Synchronization to Collective Chaos in Populations of Inhibitory Neurons with Delay. Phys Rev Lett. 2016;116:238101. doi:10.1103/PhysRevLett.116.238101.RP16 Ratas I, Pyragas K. Macroscopic self-oscillations and aging transition in a network of synaptically coupled quadratic integrate-and-fire neurons. Phys Rev E. 2016;94:032215. doi:10.1103/PhysRevE.94.032215. | http://arxiv.org/abs/1705.09205v3 | {
"authors": [
"Federico Devalle",
"Alex Roxin",
"Ernest Montbrió"
],
"categories": [
"q-bio.NC",
"nlin.AO"
],
"primary_category": "q-bio.NC",
"published": "20170525144831",
"title": "Firing rate equations require a spike synchrony mechanism to correctly describe fast oscillations in inhibitory networks"
} |
[pages=1-last]main.pdf | http://arxiv.org/abs/1705.09525v1 | {
"authors": [
"Claudio Antares Mezzina",
"Emilio Tuosto"
],
"categories": [
"cs.LO",
"cs.FL"
],
"primary_category": "cs.LO",
"published": "20170526104922",
"title": "Choreographies for Automatic Recovery"
} |
Term Models of Horn Clauses over Rational Pavelka Predicate Logic

Vicent Costa [A,B], Pilar Dellunde [A,B,C]
[A] Universitat Autònoma de Barcelona
[B] Artificial Intelligence Research Institute (IIIA-CSIC), Campus UAB, 08193 Bellaterra, Catalonia
[C] Barcelona Graduate School of Mathematics
[email protected], [email protected]

30 July 2017

This paper is a contribution to the study of the universal Horn fragment of predicate fuzzy logics, focusing on the proof of the existence of free models of theories of Horn clauses over Rational Pavelka predicate logic. We define the notion of a term structure associated to every consistent theory T over Rational Pavelka predicate logic and we prove that the term models of T are free on the class of all models of T. Finally, it is shown that if T is a set of Horn clauses, the term structure associated to T is a model of T.

Keywords: Horn clause, term model, free model, Rational Pavelka predicate logic.

§ INTRODUCTION

Free models and Horn clauses play a relevant role in classical logic and logic programming. On the one hand, free models, which appeared first in category theory (see for instance <cit.>), are crucial in universal algebra and, thereby, in model theory. In the context of logic programming, free structures, introduced in <cit.> and also named initial (as for instance in <cit.>), are important because they allow a procedural interpretation of programs, and admitting free structures makes negation as failure reasonable (see for instance <cit.>). In the context of abstract data types, Tarlecki <cit.> characterizes abstract algebraic institutions which admit free constructions. On the other hand, the importance of Horn clauses in classical logic was detailed in <cit.>, while it is well-known that Horn clauses are used both as a specification and as a programming language in Prolog, the most common language in logic programming.

In the context of fuzzy logics, several definitions of Horn clause have been proposed in the literature, but there is not a canonical one yet. An extensive and important body of work in predicate fuzzy logics has been done by Bělohlávek and Vychodil (see <cit.>). Although the work of these authors also adopts a Pavelka style, it differs from our approach: we do not restrict Horn clauses to fuzzy equalities and we work in the general semantics of <cit.>. Another approach is shown in <cit.>, where Dubois and Prade discuss different possibilities of defining fuzzy rules and show how these different semantics can be captured in the framework of fuzzy set theory and possibility theory. We find also that, in the context of fuzzy logic programming, there is a rich battery of proposals of Horn clauses, which differ depending on the programming approach chosen. Some references here are <cit.>.

With the goal of developing a systematic study of the universal Horn fragment of predicate fuzzy logics from a model-theoretic point of view, we took in <cit.> the syntactical definition of Horn clause of classical logic. Starting from this general and basic definition, we studied the existence of free models of theories of Horn clauses in MTL∀. As a generalisation of a group-theoretic construction, Mal'tsev showed in classical logic that any theory of Horn clauses has a free model. In the present paper, a definition of Horn clause in RPL∀ using evaluated formulas is introduced.
Consequently, we prove the existence of free models of theories of RPL∀-Horn clauses, showing in RPL∀ an analogous result to Mal'tsev's. The advantage of using these RPL∀-Horn clauses instead of the ones of <cit.> lies in the fact that the former are better suited to the context of fuzzy logic programming. For instance, from a syntactical point of view, basic RPL∀-Horn clauses are a particular case of the clauses used in <cit.>.

The paper is organized as follows. Section 2 contains the preliminaries on RPL∀. In Section 3 we introduce the definition of a term structure associated to a consistent theory and prove that when this structure is a model of the associated theory, the term structure is free on the class of all models of the theory. In Section 4 we define the notion of RPL∀-Horn clause and it is shown that whenever the associated theory is a set of RPL∀-Horn clauses, the term structure is a model of this theory.

§ PRELIMINARIES

In this section we introduce the basic notions and results of RPL∀, the first-order extension of Rational Pavelka Logic. For an extensive presentation of RPL∀ see <cit.> and <cit.>.

Rational Pavelka Predicate Logic <cit.>. Rational Pavelka Predicate Logic RPL∀ is the expansion of Ł∀ obtained by adding a truth constant r for each rational number r in [0,1] and by adding the axioms RPL1 and RPL2. The following is an axiomatic system for RPL∀:

(Ł1) φ→(ψ→φ)
(Ł2) (φ→ψ)→((ψ→ξ)→(φ→ξ))
(Ł3) (¬φ→¬ψ)→(ψ→φ)
(Ł4) ((φ→ψ)→ψ)→((ψ→φ)→φ)
(RPL1) (r→ s)↔ (r⇒_Ł s)
(RPL2) (r & s)↔ (r *_Ł s)
(∀1) (∀ x)φ(x)→φ(t), where the term t is substitutable for x in φ.
(∀2) (∀ x)(ξ→φ)→(ξ→(∀ x)φ(x)), where x is not free in ξ.

In RPL1 and RPL2, the right-hand sides denote the truth constants for r⇒_Ł s = min{1, 1-r+s} and r *_Ł s = max{0, r+s-1}, the Łukasiewicz implication and conjunction of the rationals r and s (these are the usual book-keeping axioms for the truth constants). The rules are Modus Ponens and Generalization, that is, from φ infer (∀ x)φ. A theory Φ is a set of sentences. We denote by Φ⊢_RPL∀φ the fact that φ is provable in RPL∀ from the set of formulas Φ. From now on, when it is clear from the context, we will write ⊢ to refer to ⊢_RPL∀. We say that a theory Φ is consistent if Φ⊬0. An evaluated formula (φ,r) in a language of RPL∀ is a formula of the form r→φ, where r∈[0,1] is a rational number and φ is a formula without truth constants apart from 0 and 1. We say that an evaluated formula (φ,r) is atomic whenever φ is atomic.

Now we introduce the semantics of the predicate languages. Let [0,1]_RPL be the standard RPL-algebra <cit.>. A structure for a predicate language 𝒫 of the logic RPL∀ has the form ⟨[0,1]_RPL, M⟩, where M=⟨ M, (P_M)_P∈ Pred, (F_M)_F∈ Func⟩, M is a non-empty domain; for each n-ary predicate symbol P∈ Pred, P_𝐌 is an n-ary fuzzy relation on M, i.e., a function M^n→[0,1]_RPL (identified with an element of [0,1]_RPL if n=0); and for each n-ary function symbol F∈ Func, F_𝐌 is a function M^n→ M (identified with an element of M if n=0).

An M-evaluation of the object variables is a mapping v which assigns an element from M to each object variable. Let v be an M-evaluation, x a variable, and a∈ M. Then by v[x↦ a] we denote the M-evaluation such that v[x↦ a](x)=a and v[x↦ a](y)=v(y) for each object variable y different from x.
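Although the development that follows is purely proof-theoretic and model-theoretic, the algebraic content of these definitions is easy to make concrete. The following minimal Python sketch (an illustration only, not part of the formal development) spells out the Łukasiewicz operations of the standard RPL-algebra and the reading of an evaluated formula (φ,r) as r→φ, before the recursive truth-value clauses given below.

def luk_conj(x, y):            # Lukasiewicz conjunction, x *_L y
    return max(0.0, x + y - 1.0)

def luk_impl(x, y):            # Lukasiewicz implication, x =>_L y
    return min(1.0, 1.0 - x + y)

def evaluated_formula(value_of_phi, r):
    """Truth value of the evaluated formula (phi, r), i.e. of r -> phi."""
    return luk_impl(r, value_of_phi)

# (phi, r) takes value 1 exactly when the value of phi is at least r:
assert evaluated_formula(0.8, 0.5) == 1.0
assert evaluated_formula(0.3, 0.5) == 0.8        # = 1 - 0.5 + 0.3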
We define the values of terms and the truth values of formulas in the structure ⟨[0,1]_RPL, M⟩for an evaluation v recursively as follows: given F∈ Func, P∈ Pred and c a connective of RPL:* ||x||^[0,1]_RPL_𝐌,v=v(x)* ||F(t_1,…,t_n)||^[0,1]_RPL_𝐌,v=F_𝐌(||t_1||^[0,1]_RPL_𝐌,v,…,||t_n||^[0,1]_RPL_𝐌,v)* ||P(t_1,…,t_n)||^[0,1]_RPL_𝐌,v=P_𝐌(||t_1||^[0,1]_RPL_𝐌,v,…,||t_n||^[0,1]_RPL_𝐌,v)* ||c(φ_1,…,φ_n)||^[0,1]_RPL_𝐌,v=c_[0,1]_RPL(||φ_1||^[0,1]_RPL_𝐌,v,…,||φ_n||^[0,1]_RPL_𝐌,v)* ||(∀ x)φ||^[0,1]_RPL_𝐌,v=inf{||φ||^[0,1]_RPL_𝐌,v[x→ a]| a∈ M}* ||(∃ x)φ||^[0,1]_RPL_𝐌,v=sup{||φ||^[0,1]_RPL_𝐌,v[x→ a]| a∈ M}. Observe that, since the universe of the standard RPL-algebra is the interval of real numbers [0,1], which is complete, all the infima and suprema in the definition of the semantics of the quantifiers exist.For every formulaφ, possibly with variables, we write ||φ||^[0,1]_RPL_M=inf{||φ||^[0,1]_RPL_M,v| for every M -evaluationv},we say that ⟨[0,1]_RPL, M⟩ is a model of a sentence φ if ||φ||^[0,1]_RPL_M=1; and that ⟨[0,1]_RPL, M⟩ is a model of a theory Φ if||φ||^[0,1]_RPL_M=1 for every φ∈Φ.In particular, given a structure ⟨[0,1]_RPL, M⟩ and two formulas φ and ψ: ||φ&ψ||^[0,1]_RPL_M=max{||φ||^[0,1]_RPL_M+||ψ||^[0,1]_RPL_M-1,0} ||φ→ψ||^[0,1]_RPL_M=min{1-||φ||^[0,1]_RPL_M+||ψ||^[0,1]_RPL_M,1}.Let ⟨[0,1]_RPL,𝐌⟩ and ⟨[0,1]_RPL,𝐍⟩ be structures, and g be a mapping from M to N. We say that g is a homomorphism from ⟨[0,1]_RPL,𝐌⟩ to ⟨[0,1]_RPL,𝐍⟩ if for every n-ary function symbol F, any n-ary predicate symbol P and d_1,…,d_n∈ M, (1) g(F_𝐌(d_1,…,d_n))=F_𝐍(g(d_1),…,g(d_n)), and (2) P_𝐌(d_1,…,d_n)=1⇒ P_𝐍(g(d_1),…,g(d_n))=1. Throughout the paper we assume that all our languages have a binary predicate symbol ≈ and we extend the axiomatic system of RPL∀ in <cit.> with the following axioms of similarity and congruence.<cit.> S1. (∀ x)x≈ xS2. (∀ x)(∀ y)(x≈ y→ y≈ x)S3. (∀ x)(∀ y)(∀ z)(x≈ y & y≈ z→ x≈ z)C1.For each n-ary function symbolF, (∀ x_1)…(∀ x_n)(∀ y_1)…(∀ y_n)(x_1≈ y_1&…& x_n≈ y_n→ F(x_1,…,x_n)≈ F(y_1,…,y_n))C2.For each n-ary predicate symbolP, (∀ x_1)…(∀ x_n)(∀ y_1)…(∀ y_n)(x_1≈ y_1&…& x_n≈ y_n→ (P(x_1, …, x_n)↔ P(y_1,…, y_n))Let Φ be a theory over RPL∀, φ a formula in a language of RPL∀ and r∈[0,1] a rational number. (i) The truth degree of φ over Φ is ||φ||_Φ=inf{ ||φ||^[0,1]_RPL_M|⟨[0,1]_RPL,M⟩ is a model of Φ}.(ii) The provability degree of φ over Φ is |φ|_Φ=sup{r|Φ⊢r→φ}. Pavelka-style completeness <cit.>Let Φ be a theory over RPL∀ and φ a formula in a language of RPL∀. Then, |φ|_Φ=||φ||_Φ. § TERM STRUCTURESIn this section we introduce the notion of term structure associated to a consistent theory Φ over RPL∀, and prove that whenever the term structure is a model of Φ, the structure is free on the class of models of Φ. Term structures have been used extensively in classical logic, for instance, to prove the satisfiability of a set of consistent sentences (see for example <cit.>).Let Φ be a consistent theory, we define a binary relation on the set of terms, denoted by ∼, in the following way: For every terms t_1,t_2, t_1∼ t_2 if and only if |t_1≈ t_2|_Φ=1. Using Axioms ∀1, S1, S2 and S3, it can be proven that ∼ is an equivalence relation. Next lemma, which states that the equivalence relation ∼ is compatible with the symbols of the language, is proved using Axioms ∀1, C1, C2 and<cit.>. 
For any consistent theory Φ, the following holds: If t_i∼ t'_i for every 1≤ i≤ n, then(i) For any n-ary function symbol F, F(t_1,…,t_n)∼ F(t'_1,…,t'_n).(ii) For any n-ary predicate symbol P and rational number r∈[0,1],|(r→ P(t_1,...,t_n))↔ (r→ P(t'_1,...,t'_n))|_Φ=1From now on, for any term t we denote by t the ∼-class of t. Term StructureLet Φ be a consistent theory. We define the following structure ⟨[0,1]_RPL,𝐓^Φ⟩, where T^Φ is the set of all equivalence classes of the relation ∼ and * For any n-ary function symbol F and terms t_1,…,t_n, F_𝐓^Φ(t_1,…,t_n)=F(t_1,…,t_n)* For any n-ary predicate symbol P and terms t_1,…,t_n,P_𝐓^Φ(t_1,…,t_n)=|P(t_1,…,t_n)|_Φ We call ⟨[0,1]_RPL,𝐓^Φ⟩ the term structure associated to Φ.Notice that for 0-ary functions, that is, for individual constants, c_𝐓^Φ=c. Given a consistent theory Φ, let e^Φ be the following 𝐓^Φ-evaluation: e^Φ(x)=x for every variable x. We call e^Φ the canonical evaluation of ⟨[0,1]_RPL,𝐓^Φ⟩. Let Φ be a consistent theory, the following holds:(i) For any term t, ||t||^[0,1]_RPL_𝐓^Φ,e^Φ=t. (ii) For any atomic formula φ, ||φ||^[0,1]_RPL_𝐓^Φ,e^Φ=1 if and only if |φ|_Φ=1.(iii) For any evaluated atomic formula (φ,s), ||(φ,s)||^[0,1]_RPL_𝐓^Φ,e^Φ=1 if and only if |(φ,s)|_Φ=1. The proofs of (i) and (ii) are straightforward. Regarding (iii), let (φ,s)=(P(t_1…,t_n),s), we have: [ ||(P(t_1…,t_n),s)||^[0,1]_RPL_𝐓^Φ,e^Φ=1 iff; s≤||P(t_1…,t_n)||^[0,1]_RPL_𝐓^Φ,e^Φ iff;s≤ P_T^Φ(t_1…,t_n) iff; s≤ |P(t_1…,t_n)|_Φ iff|s→ P(t_1,…, t_n)|_Φ=1. ]The last equivalence is proved from <cit.>. Since the simplest well-formed formulas are atomic formulas, Lemma <ref> (ii) can be read as saying that term structures are minimal with respect to atomic formulas. By Theorem <ref>, |φ|_Φ=||φ||_Φ and, by Lemma <ref> (ii), the term structure ⟨[0,1]_RPL,𝐓^Φ⟩ only assigns the truth value 1 to those atomic formulas that have 1 as their truth value in every model ⟨[0,1]_RPL,M⟩ of Φ. By a similar argument, Lemma <ref> (iii) states that the term structure ⟨[0,1]_RPL,𝐓^Φ⟩ is minimal with respect to evaluated atomic formulas.From an algebraic point of view, the minimality of the term structure is revealed by the fact that the structure is free. The following theorem proves that in case that the term structure associated to a theory is a model of that theory, the term structure is free.Working in predicate fuzzy logics (and, in particular, in RPL∀) allows to define the term structure associated to a theory using similarities instead of crisp identities. This leads us to a notion of free structure restricted to the class of reduced models of that theory. Remember that reduced structures are those whose Leibniz congruence is the identity. By <cit.>, a structure ⟨[0,1]_RPL,𝐌⟩ is reduced iff it has the equality property (EQP) (that is, for any d,e∈ M, || d≈ e||^[0,1]_RPL_𝐌=1 iff d=e). Observe that, by using Definitions <ref> and <ref> and the fact that ∼ is an equivalence relation, it can be proven that ⟨[0,1]_RPL,𝐓^Φ⟩ is a reduced structure. Let Φ be a consistent theory such that ⟨[0,1]_RPL,𝐓^Φ⟩ is a model of Φ. Then ⟨[0,1]_RPL,𝐓^Φ⟩ is free on the class of all the reduced models ⟨[0,1]_RPL,𝐍⟩ of Φ. That is, for every reduced model of Φ ⟨[0,1]_RPL,𝐍⟩ and every 𝐍-evaluation v, there is a unique homomorphism g from ⟨[0,1]_RPL,𝐓^Φ⟩ to ⟨[0,1]_RPL,𝐍⟩ such that for every variable x, g(x)=v(x).Let ⟨[0,1]_RPL,𝐍⟩ be a reduced model of Φ and v an N-evaluation.We define g by: g(t)=|| t ||^[0,1]_RPL_𝐍,v for every term t. We show that g is the claimed homomorphism. 
Let us first check that g is well-defined. Let t_1,t_2 be terms with t_1=t_2, i.e., t_1∼ t_2, that is, |t_1≈ t_2|_Φ=1. From Theorem <ref> we have ||t_1≈ t_2||_Φ=1. Since ||Φ||^[0,1]_RPL_𝐍=1, it follows that ||t_1≈ t_2||^[0,1]_RPL_𝐍=1 and, in particular, ||t_1≈ t_2||^[0,1]_RPL_𝐍,v=1. From this and the fact that ⟨[0,1]_RPL,𝐍⟩ is reduced, we deduce, by <cit.>, that ||t_1||^[0,1]_RPL_𝐍,v=||t_2||^[0,1]_RPL_𝐍,v, i.e., g(t_1)=g(t_2).The task is now to see that g satisfies the conditions (1) and (2) of Definiton <ref>. For any 0-function symbol c, c_𝐓^Φ=c=c_N by Definition <ref>. Let t_1,…,t_n∈ T^Φ and F be an n-ary function symbol, F_𝐓^Φ(t_1,…,t_n)=F(t_1,…,t_n) by Definition <ref>. Then, by the definition of g,g(F_𝐓^Φ(t_1,…,t_n))=g(F(t_1,…,t_n))=F_N(|| t_1||^[0,1]_RPL_𝐍,v,… ,|| t_n ||^[0,1]_RPL_𝐍,v)=F_N(g(t_1),…,g(t_n)).Let P be an n-ary predicate symbol such that P_𝐓^Φ(t_1,…,t_n)=1. By Definition <ref> and Theorem <ref>, 1=P_𝐓^Φ(t_1,…,t_n)=|P(t_1,…,t_n)|_Φ=||P(t_1,…,t_n)||_Φ. Consequently, ||P(t_1,…,t_n)||^[0,1]_RPL_𝐍=1, because ||Φ||^[0,1]_RPL_𝐍=1. Thus || P(t_1,…,t_n)||^[0,1]_RPL_𝐍,v=1. Therefore P_𝐍(|| t_1||^[0,1]_RPL_𝐍,v,… ,|| t_n ||^[0,1]_RPL_𝐍,v)=1, that is, P_𝐍(g(t_1),…,g(t_n))=1.Finally, since the set {x| x is a variable} generates the universe T^Φ of the term structure associated to Φ, g is the unique homomorphism such that for every variable x, g(x)=v(x).Observe that in languages in which the similarity symbol is interpreted by the crisp identity, by using an analogous argument to the one in Theorem <ref>, we obtain that the term structure is free in the class of all the models ⟨[0,1]_RPL,𝐌⟩ of the theory and not only in the class of the reduced ones.§ RPL∀-HORN CLAUSES In the previous section we have seen that if the term structure associated to a theory Φ is a model of Φ, then the structure is free in the class of all models of Φ. In this section, we show in Theorem <ref> that whenever Φ is a theory of RPL∀-Horn clauses, ⟨[0,1]_RPL,T^Φ⟩ is a model of Φ. Theorem <ref> gains in interest if we realize that it proves (using Theorem <ref>) the existence of free models of theories of RPL∀-Horn clauses. Let us first define the notion of RPL∀-Horn clauses. In predicate classical logic, a basic Horn formula is a formula of the form α_1∧…∧α_n→β, where n is a natural number and α_1,…,α_n,β are atomic formulas. Notice that there is not a unique way to extend this definition in fuzzy logics, where we have different conjunctions and implications. In this section we present one way to define Horn clauses over RPL∀ extending the classical definition. Basic RPL∀-Horn Formula A basic RPL∀-Horn formula is a formula of the form (α_1,r_1)&…&(α_n,r_n)→(β,s)where (α_1,r_1)…,(α_n,r_n),(β,s) are evaluated atomic formulas and n is a natural number. Observe that n can be 0. In that case the basic RPL∀-Horn formula is an evaluated atomic formula.Quantifier-free RPL∀-Horn FormulaA quantifier-free RPL∀-Horn formula is a formula of the form ϕ_1&…&ϕ_m, where m is a natural number and ϕ_i is a basic RPL∀-Horn formula for every 1≤ i≤ m.RPL∀-Horn Clause A RPL∀-Horn clause is a formula of the form Qγ, where Q is a (possibly empty) string of universal quantifiers (∀ x) and γ is a quantifier-free RPL∀-Horn formula. Let 𝒫 be a predicate language with a unary predicate symbol P, a binary predicate symbol R and a an individual constant. 
The following formulas are examples of RPL∀-Horn clauses:(1) (P(a),0.5), (2) (P(a),0.6)&(R(a,x),0.3), (3) (P(a),0.5)→(R(a,a),0.1), (4) (P(a),0.6)&(R(a,x),0.3)→(P(x),0.8), (5) (∀ x)((P(x),0.6)&(R(a,x),0.3)), (6) (∀ x)((P(x),0.6)&(R(a,x),0.3)→(P(a),0.9)).Observe that, in general, RPL∀-Horn clauses are not evaluated, only the atomic RPL∀-Horn clausesare evaluated formulas.A weak version of RPL∀-Horn clauses can be introduced by substituting each strong conjunction & appearing in the formula by the weak conjunction ∧. Although in this paper we do not present this weak version, all the results we prove are also true for weak RPL∀-Horn clauses. In classical logic, the set of all Horn clauses is recursively defined, because the formula (∀ x)(φ∧ψ) is logically equivalent to (∀ x)φ∧(∀ x)ψ. In RPL∀ these two formulas are also logically equivalent, so the set of the weak version of fuzzy RPL∀-Horn clauses is recursively definable. However, this is not the case for fuzzy RPL∀-Horn clauses. Indeed, let P and R be unary predicate symbols, consider the structure ⟨[0,1]_RPL,M⟩ such thatM={a,b}, P_M(a)=R_M(b)=0.4 and P_M(b)=R_M(a)=0.7. Then, ||(∀ x)((P(x),1)&(R(x),1))||^[0,1]_RPL_M=0.1, but ||(∀ x)((P(x),1))&(∀ x)((R(x),1))||^[0,1]_RPL_M=0. We now see that for any consistent theory of RPL∀-Horn clauses Φ, the term structure associated to Φ is a model of Φ. To show that, we need the following lemmas and the notion of rank of a formula. Our definition of rankis a variant of the notion of syntactic degree of a formula of <cit.>). Let φ be a formula, the rank of φ, denoted by rk(φ) is defined by:* rk(φ)=0 if φ is atomic;* rk(φ)=rk((∃ x)φ)=rk((∀ x)φ)=rk(φ)+1;* rk(φ∘ψ)=rk(φ)+rk(ψ) for every binary propositional connective ∘. Note that since the set of RPL∀-Horn clauses is not recursively definable, induction on the complexity of the clause cannot be applied. Hence it is applied on the rank of the clauses. Such induction can be used to prove next lemma.Let φ be an RPL∀-Horn clause where x_1,…,x_m are pairwise distinct free variables. Then, for every terms t_1,…,t_m, the substitutionφ (t_1,…,t_m/x_1,…,x_m)is an RPL∀-Horn clause.For any consistent theory Φ and any evaluated atomic formula (φ,s), ||(φ,s)||^[0,1]_RPL_T^Φ=||(φ,s)||_Φ.It is enough to show that for any rational number t∈[0,1], ||(φ,s)||^[0,1]_RPL_T^Φ≥ tiff||(φ,s)||_Φ≥ t. Let t∈[0,1] be a rational number, we have:||(φ,s)||^[0,1]_RPL_T^Φ≥ tiff||t→(s→φ)||^[0,1]_RPL_T^Φ=1iff ||t&s→φ||^[0,1]_RPL_T^Φ=1iff||φ||^[0,1]_RPL_T^Φ≥ t*_Łs iff ||φ||^[0,1]_RPL_M≥ t*_Łs for every model ⟨[0,1]_RPL,M⟩ of Φ iff for any model ⟨[0,1]_RPL,M⟩ of Φ, ||t→(s→φ)||^[0,1]_RPL_M=1.The second and latter equivalence are proved by using <cit.>. The latter expression is equivalent to ||(φ,s)||^[0,1]_RPL_M≥ t for every model ⟨[0,1]_RPL,M⟩ of Φ, i.e., ||(φ,s)||_Φ≥ t.For any consistent theory Φ and any evaluated atomic sentences (φ_1,s_1),…,(φ_n,s_n),||(φ_1,s_1)&…&(φ_n,s_n)||^[0,1]_RPL_T^Φ≤||(φ_1,s_1)&…&(φ_n,s_n)||_Φ. By Lemma <ref>, it is clear for n=1. For the sake of clarity, we present the proof for the case n=2, but the argument is analogous for the cases with n> 2. First, by Lemma <ref> we have:||(φ_1,s_1)&(φ_2,s_2)||^[0,1]_RPL_T^Φ=||(φ_1,s_1)||^[0,1]_RPL_T^Φ*_Ł||(φ_2,s_2)||^[0,1]_RPL_T^Φ= ||(φ_1,s_1)||_Φ*_Ł||(φ_2,s_2)||_Φ. 
Since for any model ⟨[0,1]_RPL,M⟩ of Φ, ||(φ_1,s_1)||_Φ≤||(φ_1,s_1)||^[0,1]_RPL_M and ||(φ_2,s_2)||_Φ≤||(φ_2,s_2)||^[0,1]_RPL_M, we have that for any model ⟨[0,1]_RPL,M⟩ of Φ,||(φ_1,s_1)||_Φ*_Ł||(φ_2,s_2)||_Φ≤ ||(φ_1,s_1)||^[0,1]_RPL_M*_Ł||(φ_2,s_2)||^[0,1]_RPL_M= ||(φ_1,s_1)&(φ_2,s_2)||^[0,1]_RPL_M.Therefore, since ||(φ_1,s_1)&(φ_2,s_2)||_Φ is the infimum, we have ||(φ_1,s_1)||_Φ*_Ł||(φ_2,s_2)||_Φ≤ ||(φ_1,s_1)&(φ_2,s_2)||_Φ.Consequently, ||(φ_1,s_1)&(φ_2,s_2)||^[0,1]_RPL_T^Φ≤||(φ_1,s_1)&(φ_2,s_2)||_Φ.Let Φ be a consistent theory. For every RPL∀-Horn clause φ without free variables, If|φ|_Φ=1, then||φ||^[0,1]_RPL_T^Φ=1. Let φ be an RPL∀-Horn clause without free variables. We proceed by induction on rk(φ). rk(φ)=0. We can distinguish three subcases: 1) If φ=(ψ,s) is an evaluated atomic formula, the statement holds by Lemma <ref> (iii). 2) Let φ=(ψ_1,s_1)&…&(ψ_n,s_n)→(ψ,s) be a basic RPL∀-Horn formula, where (ψ_1,s_1),…,(ψ_n,s_n),(ψ,s) are evaluated atomic formulas. By hypothesis and Theorem <ref> we have:1=|(ψ_1,s_1)&…&(ψ_n,s_n)→(ψ,s)|_Φ=||(ψ_1,s_1)&…&(ψ_n,s_n)→(ψ,s)||_Φ. Therefore, ||(ψ_1,s_1)&…&(ψ_n,s_n)||_Φ≤ ||(ψ,s)||_Φ.By Lemmas <ref> and <ref>, || (ψ,s)||^[0,1]_RPL_T^Φ=||(ψ,s)||_Φ and||(ψ_1,s_1)&…&(ψ_n,s_n)||^[0,1]_RPL_T^Φ≤ ||(ψ_1,s_1)&…&(ψ_n,s_n)||_Φ. Therefore ||(ψ_1,s_1)&…&(ψ_n,s_n)||^[0,1]_RPL_T^Φ≤ || (ψ,s)||^[0,1]_RPL_T^Φ. That is,||(ψ_1,s_1)&…&(ψ_n,s_n)→(ψ,s)||^[0,1]_RPL_T^Φ=1.3) If φ=ϕ_1&…&ϕ_m is a conjunction of basic RPL∀-Horn formulas,||ϕ_1&…&ϕ_m||^[0,1]_RPL_𝐓^Φ=1 iff||ϕ_i||^[0,1]_RPL_𝐓^Φ=1 for every 1 ≤ i ≤ m.From 1) and 2), |ϕ_i|_Φ=1 for every 1≤ i≤ m and thus |ϕ_1&…&ϕ_m|_Φ=1. rk(φ)=n+1. Let φ=(∀ x)ψ be such that ψ is an RPL∀-Horn clause of rank n. Assume inductively that for any RPL∀-Horn clause without free variables ξ of rank less or equal than n and such that |ξ|_Φ=1, ||ξ||^[0,1]_RPL_𝐓^Φ=1. By assumption and Axiom ∀ 1,Φ⊢(∀ x)ψ→ψ(t/x) for every term t.From Axiom Ł2, sup{r|Φ⊢r→φ}=1 implies that sup{r|Φ⊢r→ψ(t/x)}=1 for any term t. That is, |ψ(t/x)|_Φ=1 for every term t.Since ψ has rank n and is an RPL∀-Horn clause by Lemma <ref>, we can apply the inductive hypothesis and conclude that ||ψ(t/x)||^[0,1]_RPL_𝐓^Φ=1 for any term t. So, by Lemma <ref> (i), ||ψ(x)||^[0,1]_RPL_𝐓^Φ,v[x↦t]=1 for every element t of the domain, and thus we get ||(∀ x)ψ||^[0,1]_RPL_𝐓^Φ=1. § CONCLUSIONS AND FUTURE WORK The present paper is another step towards a systematic study of theories of Horn clauses over predicate fuzzy logics from a model-theoretic point of view, a study that we started in <cit.> and which is still in progress. In particular, here wehave proved the existence of free models of theories of Horn clauses in RPL∀. Future work will be devoted to study the broad approach taken in <cit.> to fuzzy logics with enriched languages. We shall see if RPL∀-Horn clauses introduced here can be generalized to that logics with enriched languages. Later, since one of our next goals is to solve the open problem (formulated by Cintula and Hájek in <cit.>) about the characterization of theories of fuzzy Horn clauses in terms of quasivarieties, we will analyze quasivarieties and try to define them in the context of fuzzy logics using recent results on products over fuzzy logics like <cit.>.Herbrand structures have been important in model theory and in the foundations of logic programming. Therefore, as a continuation of the present work, we would like to characterize the free Herbrand model in the class of the Herbrand models of theories of RPL∀-Horn clauses without equality. 
Finally, we will focus on a generalization of Herbrand structure, fully named models, in order to show that two types of minimality for these models (specifically free models and A-generic models) are equivalent. § ACKNOWLEDGMENTS We would like to thank the referees for their useful comments. This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 689176 (SYSMICS project). Pilar Dellunde is also partially supported by the project RASO TIN2015-71799-C2-1-P (MINECO/FEDER) and the grant 2014SGR-118 from the Generalitat de Catalunya.00 BaWe98 F. Barr, C. Wells, Category Theory for Computing Science, 2nd. edition, Prentice Hall International (UK), 1995. BeVic06 R. Bělohlávek, V. Vychodil, Fuzzy Horn logic I, Arch. Math. Log., 45 (2006) 3–51. BeVic06b R. Bělohlávek, V. Vychodil, Fuzzy Horn logic II, Arch. Math. Log., 45 (2006) 149–177. BeVy05 R. Bělohlávek, V. Vychodil, Fuzzy Equational Logic, Studies in Fuzziness and Soft Computing, vol. 186, Springer, 2005, pp. 1–266. Be03 R. Bělohlávek, Birkhoff variety theorem and fuzzy logic,Arch. Math. Log. 42 (8) (2003) 781–790.Be02 R. Bělohlávek, Fuzzy equational logic,Arch. Math. Log. 41 (1) (2002) 83–90. Cao04 T.H. Cao, A note on the model-theoretic semantics of fuzzy logic programming for dealing with inconsistency, Fuzzy Sets Syst. 144 (2004) 93–104. CiHaNo11 P. Cintula, P. Hájek, C. Noguera (Eds.), Handbook of Mathematical Fuzzy Logic, Studies in Logic, Mathematical Logic and Foundations, vol. 38, College Publications, London, 2011 (in 2 volumes). CiHa10P. Cintula, P. Hájek, Triangular norm based predicate fuzzy logics, Fuzzy Sets Syst. 161 (2010) 311–346.CoDe16 V. Costa, P. Dellunde, On the existence of Free Models in Fuzzy Universal Horn Classes, Journal of Applied Logic, http://dx.doi.org/10.1016/j.jal.2016.11.002, in press. DeGaNo14 P. Dellunde, A. García-Cerdaña, C. Noguera, Lwenheim-Skolem theorems for non-classical first-order algebraizable logics, Log. J. IGPL (2016), http://dx.doi.org/10.1093/jigpal/jzw009, in press. De12 P. Dellunde, Preserving mappings in predicate fuzzy logics, J. Log. Comput. 22 (6) (2012) 1367–1389.DuPra96 D. Dubois, H. Prade, What are fuzzy rules and how to use them, Fuzzy Sets Syst. 84 (1996) 169–185. EbiFlu94 H.D. Ebbinghaus, J. Flum, W. Thomas, Mathematical Logic, 2nd edition, Springer, 1994.Ebra01 R. Ebrahim, Fuzzy logic programming, Fuzzy Sets Syst. 117 (2001) 215–230. GoThWaWr75 J.A. Goguen, J.W. Thatcher, E.G. Wagner, J.B. Wright, Abstract data types as initial algebras and the correctness of data representations, in: IEEE Computer Soc. (Ed.), Proceedings of the Conference on Computer Graphics, Pattern Recognition and Data Structures, New York (United States), 1975, pp. 89–93. Ha98 P. Hájek, Metamathematics of Fuzzy Logic, Trends Log. Stud. Log. Libr., vol. 4, Kluwer Academic Publishers, 1998.Hod93 W. Hodges, Logical features of Horn logic, in: M. Gabbay, C.J. Hogger, J.A. Robinson, J. Siekmann (Eds.), Handbook of Logic in Artificial Intelligence and Logic Programming: Logical Foundations, vol. 1, Clarendon Press, 1993, pp. 449–503.Mak87 J.A. Makowsky, Why Horn formulas matter in computer science: initial structures and generic examples, J. Comput. Syst. Sci. 34 (1987) 266–292. Tar85 A. Tarlecki, On the existence of free models in abstract algebraic institutions, Theor. Comput. Sci. 37(1985) 269–304. Voj01 P. Vojtš, Fuzzy logic programming, Fuzzy Sets Syst. 124 (2001) 361–370.Vy15 V. Vychodil. 
Pseudovarieties of algebras with fuzzy equalities, Fuzzy Sets Syst. 260 (2015) 110–120. | http://arxiv.org/abs/1705.09572v1 | {
"authors": [
"Vicent Costa",
"Pilar Dellunde"
],
"categories": [
"math.LO"
],
"primary_category": "math.LO",
"published": "20170526130737",
"title": "Term Models of Horn Clauses over Rational Pavelka Predicate Logic"
} |
Coverage and Spectral Efficiency of Indoor mmWave Networks with Ceiling-Mounted Access PointsFadhil Firyaguna, Jacek Kibiłda, Carlo Galiotto, Nicola Marchetti CONNECT Centre, Trinity College Dublin, Ireland {firyaguf, kibildj, galiotc, nicola.marchetti}@tcd.ieAccepted 2017 June 22. Received 2017 June 21; in original form 2017 April 21 =================================================================================================================================================================================== Abstract: Vasculature is known to be of key biological significance, especially in the study of cancer. As such, considerable effort has been focused on the automated measurement and analysis of vasculature in medical and pre-clinical images. In tumors in particular, the vascular networks may be extremely irregular and the appearance of the individual vessels may not conform to classical descriptions of vascular appearance. Typically, vessels are extracted by either a segmentation and thinning pipeline, or by direct tracking. Neither of these methods are well suited to microscopy images of tumor vasculature. In order to address this we propose a method to directly extract a medial representation of the vessels using Convolutional Neural Networks. We then show that these two-dimensional centerlines can be meaningfully extended into 3D in anisotropic and complex microscopy images using the recently popularized Convolutional Long Short-Term Memory units (ConvLSTM). We demonstrate the effectiveness of this hybrid convolutional-recurrent architecture over both 2D and 3D convolutional comparators.§ INTRODUCTIONAdvances in microscopy have led to an ever increasing ability to see into the minute detail of the tumor micro-environment. Using in vivo fluorescence microscopy, it is possible to observe the development of capillary level vascularization of the tumor. This development of neovasculature via angiogenesis has been identified as one of the key processes in the development of a tumor <cit.>. However, the closely packed and complex organization of this neovasculature makes automatic analysis extremely challenging. This is compounded by the fact that vascular networks are inherently 3-dimensional (3D) structures. Using multi-photon microscopy, it is possible to build up a 3D reconstruction of the vasculature. However, often, this leads to highly anisotropic voxel sizes and a drastically reduced ability to resolve detail in the z dimension, compared to in the in-plane (x-y) dimensions. Ultimately our aim is to be able to quantitatively track changes in the vasculature both temporally and between different tumors and different therapeutic conditions. In order to do this we require automated approaches for extracting information from the images. Typically, vascular imaging relies on the use of a contrast agent, which enhances the appearance of vasculature in the image. However, tumor vasculature is extremely leaky and irregularly perfused making this an unreliable approach. In addition to this, an important downstream application of the proposed method is to quantify levels of perfusion within the tumor. This requires a vessel extraction method that operates independently of levels of perfusion. However, imaging vasculature via the endothelium rather than the lumen means that our vasculature will follow a non-standard appearance model of a hollow tube rather than an enhanced cylinder. This renders a number of established vessel segmentation algorithms unusable for data of this kind. 
The current state of the art in semantic image segmentation is set by methods utilizing advances in deep learning, specifically deep Convolutional Neural Networks (CNNs). CNNs have proved extremely effective at a wide range of vision tasks due to their ability to build complex hierarchical representations of an image. By replacing the dense connectivity of earlier neural networks with kernel convolutions, the number of model parameters is drastically reduced by enforcing a strong spatial prior. Repeated application of convolution and down-sampling allows the network to consider a large field-of-view, while only considering very local interactions at any given layer <cit.>. Specific to image segmentation is the subclass of methods known as `fully convolutional' networks, which output an image of equal size to their input, transformed into some output domain <cit.>. These methods allow the reuse of many calculations by applying the same spatial priors used for feature extraction to the labeling. Although CNNs have traditionally been applied to extremely large public databases, such as that of the ImageNet competition <cit.>, in the field of Medical Imaging there has been great interest in attempting to apply these same techniques to more bespoke applications with much smaller datasets <cit.>.

Although CNNs have been extended to consider 3D interactions, we believe that this may be sub-optimal for applications such as ours, where the image does not represent an isotropic sampling of the domain. Since both the number of parameters and the computational complexity increase exponentially with the effective dimension of the convolutions, we restrict ourselves to convolution only in the information-rich x-y plane. However, there is important contextual information contained in neighboring planes that we must exploit in order to produce a satisfactory extraction of the vasculature. In order to tap into this information without overly burdening ourselves with model complexity, we will use a variant of the Long Short-Term Memory (LSTM) recurrent neural network units <cit.>. The recently popularized Convolutional Long Short-Term Memory (ConvLSTM) units <cit.> provide a way to model sequences of images. These networks modify the LSTM model of recurrent neural networks, replacing their inner product operations on feature vectors with kernel convolution operations on images. In this work we will apply these ConvLSTM units to tie together the detection results from each plane to create a meaningful 3D structure for the vasculature present in the image volume.
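To fix ideas, a single ConvLSTM cell can be written down in a few lines. The sketch below is a generic PyTorch-style illustration with placeholder layer sizes; it is not the specific architecture or implementation used in this work.

import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: the LSTM gates are computed with 2D convolutions
    over the x-y plane, so the recurrence can run along the z direction."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # one convolution produces the input/forget/output gates and the candidate state
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state                                    # hidden and cell state, (B, hid, H, W)
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, (h, c)

# Tie per-slice CNN features together along z (one direction shown; a bidirectional
# variant runs a second pass from the last slice back to the first).
cell = ConvLSTMCell(in_ch=16, hid_ch=16)
features = torch.randn(8, 5, 16, 64, 64)                # (batch, z-slices, channels, H, W)
h = c = torch.zeros(8, 16, 64, 64)
outputs = []
for z in range(features.shape[1]):
    out, (h, c) = cell(features[:, z], (h, c))
    outputs.append(out)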
This cannot be achieved by just applying a CNN to each slice in our image. Instead, we propose to link together a shared CNN network applied to each image slice using ConvLSTM layers. This provides a mechanism for the sparse information to be shared between the feature extraction CNN networks on neighboring slices, even when no information about the skeleton may be present directly for that slice of the image. In addition to this we will analyze the role played by different loss functions and also the importance of applying our recurrent layers in a bidirectional sense, as has been explored in other applications such as video analysis <cit.>.
§ RELATED WORK
A number of authors have attempted to apply Deep Learning methods to segmentation of vasculature. However, the vast majority of effort in this area is applied to segmentation of vasculature in optical imaging of the retinal fundus <cit.>. Prentašic et al. have applied deep learning methods to coherence microscopy images of micro-vasculature <cit.> but they consider only 2D images in which the vasculature already has an enhanced appearance. Teikari et al. have applied CNNs to the task of segmentation in 3D fluorescence microscopy images of micro-vasculature, similar to ours <cit.>. However, they consider contrast enhanced vasculature, and only concern themselves with the segmentation of the vasculature, rather than extracting a medial representation. 3D CNNs have been applied successfully to brain lesion segmentation by Kamnitsas et al. <cit.> with their Deep Medic architecture. 3D CNNs have also been explored in the V-Net architecture of Milletari et al. <cit.> for prostate segmentation in MRI and the 3D extension of U-Net <cit.>. ConvLSTMs were originally proposed by Shi et al. <cit.> for weather forecasting. They have recently been applied to a similar domain to ours in 3D microscopy by Chen et al. <cit.>. In this work they combine a stacked kU-net with a deep ConvLSTM architecture. However, they use this as a means to refine the segmentation, rather than fundamentally altering the output as we do. In addition to the convolutional LSTM, others have suggested the use of orthogonal LSTM units to traverse 3D image volumes, such as in the PyraMiD-LSTM work by Stollenga et al. <cit.>. Other convolutional-recurrent hybrids have also been explored in medical imaging, such as the work by Poudel et al. <cit.>, who apply recurrence to the inner layers of a U-Net inspired architecture, for cardiac MRI segmentation. The majority of classical vessel segmentation and extraction techniques rely on the tubular appearance of enhanced vasculature within an image. By considering the image pixel intensity as a hyper-surface in (N+1)-dimensional space, we can consider a bright tubular structure as a ridge in this surface. This can be quantified by computing a local Hessian matrix for the surface, ℋ_ij = ∂^2 I / (∂ x_i ∂ x_j). Using considerations of the eigen-system of ℋ it is possible to construct many vessel-targeted segmentation methods. A thorough review of these methods may be found in the review papers by Lesage <cit.> and by Kirbas and Quek <cit.>. However, as our vasculature does not present a regular tubular geometry or appearance as in most applications, we do not consider this to be a traditional vessel segmentation task. Skeletonization is usually applied as a post-processing step to binary segmentation images. The most common approaches consist of some variation of homotopic thinning <cit.>.
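To make the classical segment-then-thin baseline concrete, the following is a minimal Python sketch (not the authors' implementation) using scikit-image's 3D thinning; the helper name, the component-size threshold and the morphological clean-up step are illustrative assumptions.

import numpy as np
from skimage.morphology import skeletonize_3d, binary_closing
from scipy import ndimage as ndi

def segmentation_to_skeleton(binary_volume, min_component_voxels=50):
    """Classical baseline: clean a binary vessel mask, then thin it to a 1-voxel skeleton."""
    # Remove small disconnected specks that would otherwise create spurious branches.
    labels, n = ndi.label(binary_volume)
    sizes = ndi.sum(binary_volume, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_component_voxels))
    # Close small holes in the (hollow-looking) vessel walls before thinning.
    cleaned = binary_closing(keep)
    # Homotopy-preserving 3D thinning as implemented in scikit-image.
    return skeletonize_3d(cleaned).astype(bool)

In practice the anisotropic voxel size and the pruning of short spurious branches are the parts of this pipeline that need the most care, which is precisely where tightly packed tumor vasculature causes problems.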
A skeletal representation jointly describes the topology and the structure of our branching system of vasculature. This is a crucial stage in the process of quantitatively analyzing the vasculature as it allows us to attach rich, extracted features to some minimal representation of the vasculature.A detailed review of the current state-of-the-art in 3D skeletonization algorithms has been provided by Tagliasacchi et al. <cit.>. As they identify in their review, the skeleton of a volumetric structure is not a uniquely defined representation, and many such skeletons of a single volume may be equally valid. It is our view that, in light of this, it is beneficial to pass the decision making about what should be included in the vascular skeleton to the most `intelligent' part of our pipeline, namely the neural network.In this work we will demonstrate the ability of hybrid convolutional-recurrent architectures to approach direct 'skeletonization', rather than relying heavily on thinning or tracking algorithms. We believe that by learning to extract centerlines directly in the machine learning task, we remove the degree of subjectivity which is involved in the skeletonization and pruning pipelines that are popular in many vessel extraction methods. § MATERIALS In this section we will outline the data used for the experiments in this paper. We perform experiments on both real, pre-clinical data as well as synthetic data.Tumor Microscopy: For our experiments on real data we have delineated vasculature from images acquired using high resolution fluorescence multi-photon microscopy. This approach achieves a theoretical lateral resolution of 0.4 and an axial resolution of 1.3. Voxels are sized 5 in the z direction and 0.83 × 0.83 in the x-y plane. Imaging was performed using an abdominal window chamber model in mice. This allows for intra-vital imaging of the tumors <cit.>. The abdominal window chamber was surgically implanted in transgenic mice on C57Bl/6 background that had expression of red fluorescent protein tdTomato in the endothelial cells only. The murine colon adenocarcinoma MC38 tumors with expression of green fluorescent protein (GFP) in the cytoplasm were induced by injecting 5 of dense cell suspension in a 50/50 mixture of saline and matrigel (Corning, NY, USA). The images of tumors were acquired 9 -– 14 days after tumor induction with Zeiss LSM 880 microscope (Carl Zeiss AG), connected to a Mai-Tai tunable laser (Newport Spectra Physics). We used an excitation wavelength of 940 nm and the emitted light was collected with Gallium Arsenide Phosphide (GaAsP) detectors through a 524 – 546 nm bandpass filter for GFP and a 562.5 – 587.5 nm bandpass filter for tdTomato and with a multi-alkali PMT detector through a 670 – 760 bandpass filter for Qtracker® 705. A 20x water immersion objective with NA of 1.0 was used to acquire a Zstacks-TileScan with dimensions of 512 × 512 pixels in x and y, and approximately 70 planes in z, with a z step of 5. Each tumor is covered by approximately 100-200 tiles, depending on the size. All animal studies were performed in accordance with the Animals Scientific Procedures Act of 1986 (UK) and Committee on the Ethics of Animal Experiments of the University of Oxford. The advantage of using both a labeled blood-pool based agent (Qtracker®705) and transgenic mouse model with fluorescently labeled endothelium is that it allows us to assess the functional behaviour of the tumor vasculature. 
Skeletons are derived by first producing a manual segmentation of the vasculature on each slice. This is then followed by a centerline extraction of this manual segmentation in 3D, using the NeuTube software package <cit.>. Extracted skeletons are then manually pruned and refined using the NeuTube software package. The training dataset consists of 25 manually segmented tiles of size 512 × 512 × 70, which are then subdivided, with overlap, to form a larger training set of smaller image stacks. We hold back 10% of our training data for validation and hyper-parameter tuning. The tiles were taken from 4 different tumors.
Synthetic data: In order to test our method against a known ground truth, we have also generated a synthetic dataset that presents the same issues as our real data. The synthetic data consists of a number of hollow, tubular structures that are tightly packed and represented with anisotropic voxel sizes, comparable to the real data. The vessels are generated by first iteratively growing the centerlines throughout the volume. Then the segmentation mask is produced by dilating this skeleton to some randomly selected radius (chosen to reflect the range of sizes visible in the real data). We then generate two distance maps d_1 and d_2 that represent the distance from the nearest foreground and nearest background, respectively. The endothelium is then generated from these distance maps: E = exp(-d_1 / σ_1) exp(-d_2 / σ_2), where σ_1 and σ_2 are tuned to give a qualitative appearance similar to our data. Due to the anisotropy, many intersections between vessels are visible and the algorithm must reconstruct the 3D branching structure. We add Gaussian noise, whose variance is tuned to the background noise in the real images. We also add `Salt and Pepper' and Poisson noise to simulate the detector noise present in fluorescence microscopy. Slice `jitter' is also added to simulate the slight slice misalignment present in the real data, due to mouse breathing. Examples of this data are shown in Figure <ref>. For the synthetic experiments we train on 20 image volumes, with size 512 × 512 × 40, generated in this manner. Each volume is subdivided into tiles of size 128 × 128 × 16. Testing was performed on 20 separate volumes, generated identically to the training volumes.
§ METHODS
Here we will give a description of the network architectures used, as well as details on training and the loss functions considered.
§.§ Architectures
CNN: For the CNN portion of the network we use a U-Net style architecture <cit.> with 2 convolutional layers at each pooling level. Each convolutional layer is followed by a Batch Normalization <cit.> layer and then a (Leaky) Rectified Linear Unit (LReLU), which is defined as follows: LReLU(x; α) = α x if x < 0, and x otherwise. The Leaky ReLU modification attempts to avoid the `dying ReLU' problem by maintaining a small slope in the negative portion whilst retaining the piecewise constant gradient, which provides the efficiency of ReLU activations. We use this as our highly background-dominated target data can lead to many `dead' ReLU units, which we are unable to recover from during training. The full architecture is described in Figure <ref>. A sequence of down-sampling units (U1) provides a broader and broader view of the image. In this coarse-grained view, a sequence of U0 units extracts rich features. These are then up-sampled by a sequence of U2 units, whilst concatenating these new features with the features from the corresponding scaled U1 unit.
The final feature map is reduced using a 1×1 convolution before being passed through a sigmoidal activation: σ(x) = 1 / (1 + e^-x). This produces a feature map of the same spatial dimensions as the input image. All convolutions are performed using 3 × 3 kernels. Max pooling is performed over a 2 × 2 region and up-sampling is performed with stride 2. Weights are initialized from a uniform distribution with bounds as suggested by Glorot et al. <cit.>.
ConvLSTM: The Convolutional LSTM modifies the traditional LSTM architecture <cit.> by replacing the inner product operations with kernel convolutions, therefore allowing us to efficiently operate on images, as in a CNN. The full state equations for this unit can be found in the work by Shi et al. <cit.>. We apply each ConvLSTM bidirectionally, concatenating the outputs. Each ConvLSTM layer uses 20 units. Using the design principles of Network-in-Network <cit.> to reduce dimensionality, we follow each bidirectional stack by a 1×1 convolution. For these experiments we explore this unit applied in both a `deep' and a `shallow' configuration.
§.§.§ `Deep' configuration: For this architecture we arrange the ConvLSTM units (Figure <ref>) in a U-shaped configuration, as in the CNN network; a modified U-Net configuration has also been employed by Chen et al. <cit.>. We stack two units, followed by a max pooling layer, followed by a further two units, followed by an up-sampling layer, followed by a final two units. The output from this is then passed through a 1×1 convolution to compress the features before a sigmoidal activation to provide the final output.
§.§.§ `Shallow' configuration: For this configuration we use just a single ConvLSTM unit, as shown in Figure <ref>.
Combination: In order to combine these networks, we apply a shared copy of the CNN network to each slice of the 3D image volume. This produces a 2D skeleton detection on each slice. This new volume is then processed in parallel by the stacked ConvLSTM layers to produce the new, context-aware 3D skeleton detection.
Training: By design, it is possible that this network may be trained in an end-to-end fashion. However, depending on the size of the component architectures, it may be necessary to separate the training for the two components of the network. Our training set consists of a number of manually annotated image volumes where all visible vasculature has been segmented. In order to train this end-to-end, we train against our extracted skeletons. In order to train the architecture in a split fashion, we suggest performing a skeletonization in 2D for each slice of the manual segmentations to train the CNN layers. The outputs from the trained CNN can then be used as input volumes for training the ConvLSTM layers, with the 3D thinned segmentations being the targets. Here we will test results training against a number of different loss functions. A binary cross-entropy loss, given by: ℒ_bce(x,y) = -∑_i [ y_i log(x_i) + (1-y_i) log(1-x_i) ]. A Dice coefficient based loss: ℒ_d(x,y) = 1 - (2 x · y + δ) / (|x| + |y| + δ). An advantage of using a Dice coefficient based loss is that it implicitly focuses on the foreground class. This is advantageous for applications such as this where there exists a severe class imbalance between foreground (vessel centerline) and background. Here δ is a small positive value to keep the loss well behaved in the case that |x| = |y| = 0. For these experiments we set δ = 1. Another way to address this imbalance without discarding data is to use a class weighted loss function.
Where foreground and background observations have their importance weighted according to the ratio of class imbalance. However, this has the effect of relatively under-weighting the background regions in the immediate vicinity of the thin foreground regions. We therefore propose a pixel-wise loss weighting, similar to Ronneberger et al. <cit.>: ℒ_pbce(x,y) = -∑_iW_i y_i log(x_i) + W_i (1-y_i) log(1-x_i). W = (1-β) y ∗ g_σ + β, where g_σ represents a Gaussian kernel with standard deviation σ and ∗ represents the kernel convolution operator. Here β represents the ratio of class imbalance. In doing this we incorporate class imbalance while spreading its influence to the local vicinity of the thin foreground skeleton. In these experiments we trained our networks using the Adam optimizer <cit.> with a learning rate of 10^-4. All training was performed using NVIDIA Tesla K40 12GB GPUs.Skeletonization:A final stage in any pipeline for the analysis of vasculature may include a skeletonization process, to reduce any kind of detected representation to a topological skeletal representation for further analysis. Uncertainty and variability in our training set can result in skeletonization results that are thicker than a single pixel wide skeleton, we therefore account for this by performing a skeletonization as a post-processing. In these experiments we consider a simple homotopic thinning algorithm <cit.>, followed by a pruning phase to remove artifactual branches. A key advantage of our method is that although we still apply a thinning algorithm to finalize our results, the majority of `skeletonization' has already been performed by the neural network. § RESULTS Here we will present the results of experiments both on synthetic data as well as real life microscopy imaging data. We will first briefly outline the architectures used for comparison. U-2D:A 2D U-Net architecture as described in Section <ref> (CNN). In order to apply this to volumetric images, we apply the network to each slice in the z-stack independently and concatenate the results. Trainable weights: 200,000. U-2D+CLSTM (S):A shared 2D U-Net architecture is combined with a single (shallow) ConvLSTM unit with 32 filters as described in Section <ref>. Trainable weights: 270,000.U-2D+CLSTM (D):A shared 2D U-Net architecture is combined with a `U-shaped' (deep) ConvLSTM architecture as described in Section <ref>, with 32 filters per unit. Trainable weights: 390,000.U-3D:A 3D U-Net architecture, similar to that described in Section <ref> (CNN) but with all 2D convolution and pooling operations replaced with their 3D equivalent. Trainable weights: 580,000.CLSTM:A `U-shaped' (deep) ConvLSTM architecture as described in Section <ref> (ConvLSTM). Trainable weights: 190,000.We would like to explore the importance of the rich feature extraction of the CNN architectures (U-2D, U-3D) as well as the impact of the recursive architectures (CLSTM), both with and without a CNN component.For each experiment we tune an optimal threshold on validation data for each architecture which is used to binarize the output from each network before skeletonization and analysis.§.§ Results on synthetic data §.§.§ Experiment 1In order to provide a baseline for later comparisons we first compare the ability of our proposed network to a number of comparison networks to segment our synthetic data. We measure this using the Dice overlap, a commonly used segmentation evaluation metric, described in Equation <ref>. 
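As a reference for how these objectives are computed, here is a minimal NumPy sketch of the Dice-coefficient-based loss and the pixel-weighted cross-entropy from the Methods section above; it is an illustrative re-implementation of the stated formulas, not the authors' training code, and the default Gaussian width and the clipping epsilon are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def dice_loss(pred, target, delta=1.0):
    """L_d = 1 - (2 x·y + delta) / (|x| + |y| + delta)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + delta) / (np.sum(pred) + np.sum(target) + delta)

def weighted_bce_loss(pred, target, beta, sigma=2.0, eps=1e-7):
    """Pixel-weighted binary cross-entropy with W = (1 - beta) * (y * g_sigma) + beta."""
    w = (1.0 - beta) * gaussian_filter(target.astype(float), sigma) + beta
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.sum(w * (target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred)))

The Dice overlap used for evaluation in the experiments corresponds to one minus this Dice loss with the smoothing term dropped, evaluated on the thresholded network output against the ground-truth mask.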
We compare a 2D U-Net, our U-Net + ConvLSTM, a ConvLSTM and a 3D U-Net network. All networks are trained to optimize a Dice coefficient based loss, given in Equation <ref>. For evaluation purposes we use the standard Dice coefficient, with β = 0. Results can be seen in Table <ref>. These networks are trained against the full segmentation masks. §.§.§ Experiment 2Here we compare the ability of a number of networks to recreate the correct skeletal structure of the vessels in our synthetic data. In order to evaluate this we train all of our networks to recreate the 3D skeletons of the synthetic data, as can be seen in Figure <ref>. We train these against the weighted binary cross-entropy loss described in Equation <ref> in order to account for the severe class imbalance. These networks are trained against the ground truth 3D skeletons. In order to evaluate skeleton distance we first threshold the output from each network and perform any necessary additional thinning until we have a single pixel-wide skeletal representation <cit.>. In order to compute the distance between skeletons we use the Modified Hausdorff Distance, proposed by Dubuisson et al. <cit.>. We refer to this as `Skeleton Error' in the results section and can be interpreted as the average shortest distance between any point on the ground truth skeleton to some point on the target skeleton and vice versa. Here it is given in units of m. In addition to this we compute the `coverage' of the ground truth, the fraction of ground truth skeleton that is within some radius of the nearest bit of skeleton on the target to be compared. In this case we use a radius of 20m, as this is comparable in size to the radius of vasculature present in these images. The results of this experiment are shown in Table <ref>§.§.§ Experiment 3Here we evaluate the importance of using a bidirectional ConvLSTM unit, rather than simply running in one direction. A single ConvLSTM unit would only be able to look through the image stack in a single direction, e.g. top-to-bottom or bottom-to-top. By applying ConvLSTM units in both directions and concatenating the outputs we gain the ability to consider changes occurring in both directions. The results of this experiment are shown in Table <ref> §.§.§ Experiment 4Finally, we compare the effects of different loss functions for training to reproduce vessel skeletons. The loss functions we compare are the binary cross-entropy, the weighted binary cross-entropy and the Dice coefficient based loss. The results of this analysis can be seen in Table <ref>.§.§ Results on real microscopy data We also test the effectiveness of these architectures on five extremely challenging real image volumes, where tightly packed vasculature makes delineation via segmentation and thinning impossible. We again compare the the Hausdorff average distance between skeletons. These images were taken from a different tumor to those used to construct the training set. All architectures were trained using the weighted binary cross-entropy loss function described in Equation <ref>. These networks are all trained against the ground truth 3D skeletons. A qualitative demonstration of the benefits of training against the skeleton rather than the segmentation can be seen in Figure <ref>. We see that if the network is trained on the segmentation alone, as in the left pane, many distinct vessels are combined into a single region of segmented vasculature. 
When we train against the skeletons themselves, as in the right pane, considerably more detail is visible.§ DISCUSSION In this paper we have shown that by combining CNN networks with a Convolutional LSTM network we are able to extract truly 3D vascular structures from complex microscopy images while avoiding the complexity of having to perform convolutions in 3D. We have also shown that using skeletal centre-lines as training targets rather than segmentations helps to distinguish between tightly packed structures. In addition to showing this to be an effective method on our task of extracting vessel skeletons, we have explored the role played by various loss functions tailored to this task. In our experiments we found that although the 3D CNN achieves comparable results in terms of segmentation overlap, the U-Net + ConvLSTM architecture shows a clear improvement in terms of ability to recreate skeletal representations of the vasculature. A qualitative comparison of the difference in results between the U-Net2D and the U-Net2D + ConvLSTM architectures can be seen in Figure <ref>. In the results from the 2D network we notice that a large number of spurious connections have formed between non-connected parts of the vasculature. We interpret this as being due to the networks inability to separate structures in 3-dimensions. The 2D network performs adequately in regions of low density (for example along the top edge of the image) however begins to fail in the regions of high density in the center. However, our approach performs well in all of these regions as we are able to localize and separate responses in 3D. In our comparison of the loss functions, given in Table <ref>, we observe that the use of a weighted binary cross-entropy loss produced the best results. While the Dice derived loss function performed well, we found that it was not capable of learning a meaningful model for these extremely sparse skeleton structures. In addition to these results it is useful to examine to relative model sizes of the architectures being compared. In the case of the synthetic Experiment 2, we see that the U-Net2D was completely incapable of learning this 3D representation, this is to be expected as contextual 3D information is required. However, the addition of a single ConvLSTM layer, resulting in an increase of about 30% in the number of trainable weights makes this representation possible. In contrast, the natural extension of the U-Net2D to the U-Net3D requires at least an approximate tripling in the number of parameters, as 3×3 kernels are extended to 3×3×3 volumetric kernels. Hybrid convolutional/recurrent architectures of this kind represent a sort of hybrid between 2D and 3D image processing. This is fitting for data of this kind as it also represents a hybrid between 2D and 3D imaging, where images acquired predominantly in 2D are concatenated to form image volumes. Interestingly we noted a slight improvement in performance of the shallow variant of the U-Net + ConvLSTM architecture. We believe this is most likely due to the small training set size, but we believe that this is an important observation as networks that are able to perform with smaller training sets are of great interest to researchers in biomedical applications, where large public datasets will not usually exist for bespoke applications. The results presented here have been achieved using a training set that may reasonably be acquired by a single researcher. 
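To make the model-size comparison above concrete, here is a minimal tf.keras sketch of the shallow U-2D+CLSTM combination: a shared per-slice 2D network applied with TimeDistributed, tied together by a single bidirectional ConvLSTM2D whose concatenated outputs are compressed by a 1×1 convolution. The layer sizes and the simplified (non-U-Net) slice network are illustrative assumptions, not the authors' exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models

def slice_cnn(channels=1):
    """Stand-in for the shared per-slice 2D network (the paper uses a full 2D U-Net here)."""
    inp = layers.Input(shape=(None, None, channels))
    x = layers.Conv2D(16, 3, padding="same")(inp)
    x = layers.BatchNormalization()(x)
    x = layers.LeakyReLU(0.1)(x)
    x = layers.Conv2D(16, 3, padding="same")(x)
    x = layers.LeakyReLU(0.1)(x)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return models.Model(inp, out)

def shallow_hybrid(depth=None, channels=1):
    """Shared 2D CNN on every slice, linked across slices by one bidirectional ConvLSTM2D."""
    vol = layers.Input(shape=(depth, None, None, channels))  # (z, y, x, c)
    per_slice = layers.TimeDistributed(slice_cnn(channels))(vol)
    context = layers.Bidirectional(
        layers.ConvLSTM2D(20, 3, padding="same", return_sequences=True),
        merge_mode="concat")(per_slice)
    skel = layers.TimeDistributed(layers.Conv2D(1, 1, activation="sigmoid"))(context)
    return models.Model(vol, skel)

The design point this sketch illustrates is the one made above: the recurrent link across the z dimension adds only a modest number of parameters on top of the 2D feature extractor, in contrast to replacing every 3 × 3 kernel by a 3 × 3 × 3 one.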
We believe that the pipeline we have demonstrated here represents a principled approach to information sharing that devotes the majority of the computational power of the network to the primary source of information (convolutions in-plane) and forces efficiency in the use of trainable parameters. We envision this approach also being useful in other anisotropic imaging modalities where 3D structures are extracted from a concatenation of higher resolution in-plane images, e.g. cardiac MRI, lung CT or other microscopy applications.
§ ACKNOWLEDGEMENTS
The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement No 625631. This work was also supported by Cancer Research UK (CR-UK) grant numbers C5255/A18085 and C5255/A15935, through the CRUK Oxford Centre and by CRUK/EPSRC Oxford Cancer Imaging Centre (grant number C5255/A16466). RB acknowledges funding from the EPSRC Systems Biology Doctoral Training Centre, Oxford (EP/G03706X/1).
| http://arxiv.org/abs/1705.09597v1 | {
"authors": [
"Russell Bates",
"Benjamin Irving",
"Bostjan Markelc",
"Jakob Kaeppler",
"Ruth Muschel",
"Vicente Grau",
"Julia A. Schnabel"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20170526143029",
"title": "Extracting 3D Vascular Structures from Microscopy Images using Convolutional Recurrent Networks"
} |
Proper Functors and Fixed Points for Finite Behaviour
Stefan Milius, Lehrstuhl für Informatik 8 (Theoretische Informatik), Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany, [email protected]. Supported by the Deutsche Forschungsgemeinschaft (DFG) under project MI 717/5-1.
The rational fixed point of a set functor is well-known to capture the behaviour of finite coalgebras. In this paper we consider functors on algebraic categories. For them the rational fixed point may no longer be fully abstract, i.e. a subcoalgebra of the final coalgebra. Inspired by Ésik and Maletti's notion of a proper semiring, we introduce the notion of a proper functor. We show that for proper functors the rational fixed point is determined as the colimit of all coalgebras with a free finitely generated algebra as carrier and it is a subcoalgebra of the final coalgebra. Moreover, we prove that a functor is proper if and only if that colimit is a subcoalgebra of the final coalgebra. These results serve as technical tools for soundness and completeness proofs for coalgebraic regular expression calculi, e.g. for weighted automata.
§ INTRODUCTION
Coalgebras allow us to model many types of systems within a uniform and conceptually clear mathematical framework <cit.>. One of the key features of this framework is final semantics; the final coalgebra provides a fully abstract domain of system behaviour (i.e. it identifies precisely the behaviourally equivalent states). For example, the standard coalgebraic modelling of deterministic automata (without restricting to finite state sets) yields the set of formal languages as final coalgebra. Restricting to finite automata, one obtains precisely the regular languages <cit.>. It is well-known that this correspondence can be generalized to locally finitely presentable (lfp) categories <cit.>, where finitely presentable objects play the role of finite sets. For a finitary functor F (modelling a coalgebraic system type) one then obtains the rational fixed point ρ F, which provides final semantics to all coalgebras with a finitely presentable carrier <cit.>. Moreover, the rational fixed point is fully abstract, i.e. ρ F is a subcoalgebra of the final one ν F, whenever the classes of finitely presentable and finitely generated objects agree in the base category and F preserves non-empty monomorphisms <cit.>. While the latter assumption on F is very mild, the former one on the base category is more restrictive. However, it is still true for many categories used in the construction of coalgebraic system models (e.g. sets, posets, graphs, vector spaces, commutative monoids, nominal sets and positively convex algebras). Hence, in these cases the rational fixed point ρ F is the canonical domain of regular behaviour, i.e. the behaviour of `finite' systems of type F. In this paper we will consider rational fixed points in algebraic categories (a.k.a. finitary varieties), i.e. categories of algebras specified by a signature of operation symbols with finite arity and a set of equations (equivalently, these are precisely the Eilenberg-Moore categories for finitary monads on sets).
Being the target of generalized determinization <cit.>, these categories provide a paradigmatic setting for coalgebraic modelling beyond sets. For example, non-deterministic automata, weighted or probabilistic automata <cit.>, or context-free grammars <cit.> are coalgebraically modelled over the categories of join-semilattices, modules for a semiring, positively convex algebras, and idempotent semirings, respectively. In algebraic categories one would like that the rational fixed point, in addition to being fully abstract, is determined already by those coalgebras carried by free finitely generated algebras, i.e. precisely those coalgebras arising by generalized determinization. In particular, this feature is used in completeness proofs for generalized regular expressions calculi <cit.>; there one proves that the quotient of syntactic expressions modulo axioms of the calculus is (isomorphic to) the rational fixed point by establishing its universal property as a final object for that quotient. A key feature of the settings in loc. cit. is that it suffices to verify the finality only w.r.t. coalgebras with a free finitely generated carrier.The purpose of the present paper is to provide sufficient conditions on the algebraic base category and coalgebraic type functor that ensure that the rational fixed point is fully abstract and that such finality proofs are sound. To this end we form a coalgebra that serves as the semantics domain of all behaviours of target coalgebras of generalized determinization (modulo bisimilarity on the level of these coalgebra). More precisely, let T: → be a finitary monad on sets and F: ^T →^T be a finitary endofunctor preserving surjective T-algebra morphisms (note that the last assumption always holds if F is lifted from some endofunctor on ). Now form the colimit ϕ F of the inclusion functor of the full subcategory F formed by all F-coalgebras of the form TX → FTX, where X is a finite set. Urbat <cit.> has shown that ϕ F is a fixed point of F. We first first provide a characterization of ϕ F that uniquely determines it up to isomorphism: based on Adámek et al.'s notion of a Bloom algebra <cit.>, we introduce the new notion of an ffg-Bloom algebra, and we prove that, considered as an algebra for F, ϕ F is the initial ffg-Bloom algebra (Theorem <ref>).Then we turn to the full abstractness of the rational fixed point ρ F and the soundness of the above mentioned finality proofs. Inspired by Ésik and Maletti's notion of a proper semiring (which is in fact a notion concerning weighted automata), we introduce proper functors (Definition <ref>), and we prove that for a proper functor on an algebraic category the rational fixed point is determined by the coalgebras with a free finitely generated carrier. More precisely, if F is proper, then the rational fixed point ρ F is (isomorphic to) initial Bloom algebra ϕ F. Moreover, we show that a functor F is proper if and only if ϕ F is a subcoalgebra of the final coalgebra ν F (Theorem <ref>). As a consequence we also obtain the desired result that for a proper functor F the finality property of ρ F can be established by only verifying that property for all coalgebras from F (Corollary <ref>).In addition, we provide more easily established sufficient conditions on ^T and F that ensure properness: F is proper if finitely generated algebras of ^T are closed under kernel pairs and F maps kernel pairs to weak pullbacks in . 
For a lifting F this holds whenever the lifted functor on sets preserves weak pullbacks; in fact, in this case the above conditions were shown to entail Corollary <ref> in previous work <cit.>. However, the type functor (on the category of commutative monoids) of weighted automata with weights drawn from the semiring of natural numbers provides an example of a proper functor for which the above condition on ^T fails. Another recent related work concerns the so-called locally finite fixed point θ F <cit.>; this provides a fully abstract behavioural domain whenever F is a finitary endofunctor on an lfp category preserving non-empty monomorphisms. In loc. cit. it was shown that θ F captures a number of instances that cannot be captured by the rational fixed point, e.g. context-free languages <cit.>, constructively algebraic formal power-series <cit.>, Courcelle's algebraic trees <cit.> and the behaviour of stack machines <cit.>. However, as far as we know, θ F is not amenable to the simplified finality check mentioned above unless F is proper. Putting everything together, in an algebraic category we obtain the following picture of fixed points of F (where ↠ denotes quotient coalgebras and ↪ a subcoalgebra): ϕ F ↠ ρ F ↠ θ F ↪ ν F. We exhibit an example where all four fixed points are different. However, if F is proper and preserves monomorphisms, then ϕ F, ρ F and θ F are isomorphic and fully abstract, i.e. they collapse to a subcoalgebra of the final one: ϕ F ≅ ρ F ≅ θ F ↪ ν F. At this point, note that Urbat's above-mentioned recent work <cit.> also provides a framework which covers the four fixed points above as four instances of one theory. This provides, for example, a uniform proof of the fact that they are fixed points and their universal properties (in the case of ρ F, θ F and ϕ F). However, Urbat's paper does not study the relationship between the four fixed points. The rest of the paper is structured as follows: in Section <ref> we collect some technical preliminaries and recall the rational and locally finite fixed points in more detail. Section <ref> introduces the new fixed point ϕ F and establishes the picture in (<ref>). Next, Section <ref> provides the characterization of ϕ F as the initial ffg-Bloom algebra for F. Section <ref> introduces proper functors and presents our main results, while in Section <ref> we present the proof of Theorem <ref>. Finally, Section <ref> concludes the paper. This paper is a reworked full version of the conference paper <cit.>. We have included detailed proofs, and in addition, we have added the new results in Section <ref>. Acknowledgments I would like to thank Jiří Adámek, Henning Urbat and Joost Winter for helpful discussions. I am also grateful to the anonymous reviewers whose constructive comments have helped to improve the presentation of this paper.
§ PRELIMINARIES
In this section we recall a few preliminaries needed for the subsequent development. We assume that readers are familiar with basic concepts of category theory. We denote the coproduct of two objects X and Y of a category by X+Y, with injections X → X+Y and Y → X+Y. Recall that a strong epimorphism in a category is an epimorphism e: A ↠ B that has the unique diagonal property w.r.t. any monomorphism. More precisely, whenever the outside of the square formed by e: A ↠ B, f: A → C, g: B → D and a monomorphism m: C ↣ D commutes (i.e. g · e = m · f), then there exists a unique diagonal morphism d: B → C with d · e = f and m · d = g.
Similarly, a jointly epimorphic family e_i: A_i → B, i ∈ I, is strong if it has the following similar unique diagonal property: for every monomorphism m: CD and morphisms g: B → D and f_i: A_i → C, i ∈ I, such that m · f_i = g · e_i holds for all i ∈ I, there exists a unique d: C → D such that m · d = g and d · e_i = f_i for all i ∈ I.On several occasions we will make use of the following fact. Let D: → be a diagram with a colimit cocone _d: Dd → C. Then the colimit injections _d form a strongly epimorphic family. First, it is easy to see that the _d form a jointly epimorphic family. To see that it is strong, suppose we have a monomorphism m: MN and morphisms g: C → N and f_d: Dd → M for every object d insuch that m · f_d = g ·_d. Then the f_d: Dd → M form a cocone of D. Indeed, for every morphism h: d → d' ofwe havem · f_d'· Dh = g ·_d'· Dh = g ·_d = m · f_d,which implies that f_d'· Dh = f_d since m is a monomorphism. Therefore there exists a unique i: C → M such that f_d = i ·_d for every d in . It follows that also m · i = g since this equation holds when extended by every _d; then use that the _d form an epimorphic family. §.§ Algebras and Coalgebras* monad with unit η and multiplication μ; Kleisli-triple (T,η, (-))* algebras for a monad, Eilenberg-Moore category ^T* just write the carrier* coalgebras for a functor F; homomorphisms; final coalgebra (ν F,t) and behavioural equivalence ∼; notation c: C →ν F for the unique coalgebra morphism. * leading example are weighted automata; so recall semirings, weighted automata, weighted languages, Noetherian semirings and examplesWe also assume that readers are familiar with algebras and coalgebras for an endofunctor. Given an endofunctor F on some categorywe write (ν F,t) for the final F-coalgebra (if it exists). Recall, that the final F-coalgebra exists under mild assumptions onand F, e.g. wheneveris locally presentable and F an accessible functor (see <cit.>). For any coalgebra c: C → FC we will write c: C →ν F for the unique coalgebra morphism. We writeFfor the category of F-coalgebras and their morphisms. Recall that all colimits in F are formed on the level of , i.e. the canonical forgetful functor F → creates all colimits (see e.g. <cit.>).Ifis a concrete category, i.e. equipped with a faithful functor ·: →, one defines behavioural equivalence as the following relation ∼: given two F-coalgebras (X,c) and (Y,d) then x ∼ y holds for x ∈ X and y∈ Y if there is another F-coalgebra (Z,e) and F-coalgebra morphisms f: X → Z and g: Y → Z with f(x) =g(y). The base categoriesof interest in this paper are the algebraic categories, i.e. categories of Eilenberg-Moore algebras (or T-algebras, for short) for a finitary monad T on . Recall that, for a monad T onwith unit η and multiplication μ, a T-algebra is a pair (A,α) where α: TA → A, called the algebra structure, is a map such that the diagram below commutes:A [r]^-η_A@=[rd] TA [d]^αTTA [l]_-μ_A[d]^Tα A TA [l]^-αMorphisms of T-algebras are just the usual morphisms of functor algebra, i.e. a T-algebra morphism h: (A,α) → (B,β) is a map h: A → B such that the square below commutes:TA [r]^-α[d]_ThA [d]^h TB [r]_-βBThe category of T-algebras and their morphisms is denoted by ^T as usual. Equivalently, those categories are precisely the finitary varieties, i.e. category of Σ-algebras for a signature Σ, whose operation symbols have finite arity, satisfying a set of equations (e.g. 
the categories of monoids, groups, vector spaces, or join-semilattices).We will frequently make use of the fact that (TX, μ_X) is the free T-algebra on the set X (of generators). This means that for every T-algebra (A, α) and every map f: X → A there exists a unique extension of f to a T-algebra morphisms, i.e., there exists a unique T-algebra morphism f such that f ·η_X = f:X [r]^η_X[rd]_f TX [d]^ fTTX [l]_-μ_X[d]^T f A TA [l]^-αMoreover, it is easy to verify that f = μ_A · Tf holds.A free T-algebra (TX, μ_X) where X is a finite set is called free finitely generated.In the following we will often drop algebra structures when we discuss a T-algebra (A,α) and simply speak of the algebra A. * The leading example in this paper are weighted automata considered as coalgebras. Let (, +,· , 0, 1) be a semiring, i.e. (, +, 0) is a commutative monoid, (, ·, 1) a monoid and the usual distributive laws hold: r ø 0 = 0 = 0 ø r, rø (s + t) = r ø s + r ø t and (r + s)ø t = r ø t + s ø t. We just writeto denote a semiring. As base categorywe consider the categoryof -semimodules; recall that a (left) -semimodule is a commutative monoid (M, +, 0) together with an action × M → M, written as juxtaposition sm for r ∈ and m ∈ M, such that for every r,s ∈ and every m, n ∈ M the following laws hold:[(r+s)m = rm + sm0m = 01m = m;r(m+n) = rm + rnr0 = 0 r(sm) = (r ø s) m ]An -semimodule morphism is a monoid homomorphism h M_1 → M_2 such that h(rm) = rh(m) for each r ∈ and m ∈ M_1.An -weighted automaton over the fixed input alphabet Σ is a triple (i, (M^a)_a ∈Σ, o), where i and o are a row and a column vector in ^n, respectively, of input and output weights, respectively, and M_a is an n× n-matrix over , for some natural number n. This number n is the number of states of the weighted automaton and the matrices M^a represent -weighted transitions; in fact, M^a_i,j is the weight of the a-transition from state i to state j (with a weight of 0 meaning that there is no a-transition). Every weighted automaton accepts a formal power series (or weighted language) L: Σ^* → defined in the following way: L(w) = i · M^w · o where M^w is the obvious inductive extension of a ↦ M^a to words in Σ^*: M^ is the identity matrix and M^av = M^a · M^v for every a ∈Σ and v ∈Σ^*.Now consider the functor FX = × X^Σ on . Clearly, a weighted automaton (without its initial vector) on n states is equivalently an F-coalgebra on ^n; in fact, to give a coalgebra structure ^n →× (^n)^Σ amount to specifying two -semimodule morphisms o: ^n → (equivalently, a column vector over in ) and t: ^n → (^n)^Σ (equivalently, an Σ-indexed family of -semimodule morphisms on ^n each of which can be represented by an n× n-matrix).The final F-coalgebra is carried by the set ^Σ^* of all weighted languages over Σ with the obvious (coordinatewise) -semimodule structure and with the F-coalgebra structure given by ⟨ o, t⟩: ^Σ^*→× (^Σ^*)^Σ with o(L) = L() and t(L)(a) = λ w. L(aw); it is straightforward to verify that o and t are -semimodule morphisms and form a final coalgebra. Moreover, for every F-coalgebra on ^n the unique coalgebra morphism ^n →^Σ^* assigns to every element i of ^n (perceived as the row input vector of the weighted automaton associated to the given coalgebra) the weighted language accepted by that automaton.* An important special case of -weighted automata are ordinary non-deterministic automata. One takes = {0,1} the Boolean semiring for which the category of -semimodules is (isomorphic to) the category of join-semilattices. 
Then FX = {0,1}× X^Σ is the coalgebraic type functor of deterministic automata with input alphabet Σ, and there is a bijective correspondence between an F-coalgebra on a free join-semilattice and non-deterministic automata. In fact in one direction one restricts X →{0,1}× ( X)^Σ to the set X of generators, and in the other direction one performs the well-known subset construction. The final coalgebra is carried by the set of all formal languages on Σ in this case. * Another special case is whereis a field. In this case, -semimodules are precisely the vector spaces over the field . Moreover, since every field is freely generated by its basis, it follows that the -weighted automata are precisely those F-coalgebras whose carrier is a finite dimensional vector space over .We will now recall a few properties of algebraic categories ^T, where T is a finitary set monad, needed for our proofs.*Recall that every strong epimorphism e in ^T is regular, i.e. e is the coequalizer of some pair of T-algebra morphisms. It follows that the classes of strong and regular epimorphisms coincide, and these are precisely the surjective T-algebra morphisms. Similarly, jointly strongly epimorphic families of morphisms are precisely the jointly surjective families. Finally, monomorphisms in ^T are precisely the injective T-algebra morphisms since the canonical forgetful functor ^T → creates all limits (and pullbacks in particular). *Every free T-algebra TX is (regular) projective, i.e. given any surjective T-algebra morphism q: AB then for every T-algebra morphism h: TX → B there exists a T-algebra morphism g: TX → A such that q · g = h:A @->>[d]^q TX @–>[ru]^-g [r]_h B . *Furthermore, note that every finitely presentable T-algebra A is a regular (= strong) quotient of a free T-algebra TX with a finite set X of generators. Indeed, A is presented by finitely many generators and relations. So by taking X as a finite set of generators of A, the unique extension of the embedding XA yields a surjective T-algebra morphism TXA. §.§ The Rational Fixed Point* finitely presentable and finitely generated objects* lfp categories; examples* finitary functors; examples As we mentioned in the introduction the canonical domain of behaviour of `finite' coalgebras is the rational fixed point of an endofunctor. Its theory can be developed for every finitary endofunctor on a locally finitely presentable category. We will now recall the necessary background material.A filtered colimit is the colimit of a diagram → whereis a filtered category (i.e. every finite subcategory _0 has a cocone in ), and a directed colimit is a colimit whose diagram schemeis a directed poset. A functor is called finitary if it preserves filtered (equivalently directed) colimits. An object C is called finitely presentable (fp) if the hom-functor (C, -) preserves filtered (equivalently directed) colimits, and finitely generated (fg) if (C, -) preserves directed colimits of monos (i.e. colimits of directed diagrams D: → where all connecting morphisms Df are monic in ). Clearly, every fp object is fg, but the converse fails in general. In addition, fg objects are closed under strong epis (quotients), which fails for fp objects in general.A cocomplete categoryis called locally finitely presentable (lfp) if there is a set of finitely presentable objects insuch that every object ofis a filtered colimit of objects from that set. We refer to <cit.> for further details. 
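Spelled out (as a standard reformulation, included here for convenience), finite presentability of an object C says that for every filtered diagram D the canonical comparison map

colim_i 𝒞(C, D_i) → 𝒞(C, colim_i D_i)

is a bijection; equivalently, every morphism from C into a filtered colimit factors through some stage D_i, essentially uniquely. Finite generation asks for the same property only for directed diagrams of monomorphisms, which is why every fp-object is fg but not conversely.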
Examples of lfp categories are the categories of sets, posets and graphs, with finitely presentable objects precisely the finite sets, posets, and graphs, respectively.The category of vector spaces over the field k is lfp with finite-dimensional spaces being the fp-objects. Every algebraic category is lfp. The finitely generated objects are precisely the finitely generated algebras (in the sense of general algebra), and finitely presentable objects are precisely those algebras specified by finitely many generators and finitely many relations. Finitary functors abound. We just mention a few examples of finitary functors on .Constant functors and the identity functor are, of course, finitary. For every finitary signature Σ, i.e. Σ = (Σ_n)_n < ω is a sequence of sets with Σ_n containing operation symbols of the finite arity n, the associated polynomial functor given byF_Σ X = ∐_n< ωΣ_n × X^nis finitary. The finite power-set functor given by X = { Y | Y ⊆ X, Y finite} is finitary (while the full power-set functor is not) and so is the bag functor mapping a set X to the set of finite multisets on X. The class of finitary functors enjoys good closure properties: it is closed under composition, finite products, arbitrary coproducts, and, in fact, arbitrary colimits. As we have mentioned already, the finitary monads (i.e. whose functor part is finitary) onare precisely those monads whose Eilenberg-Moore category ^T is (isomorphic to) a finitary variety of algebras.For the rest of this section we assume that F denotes a finitary endofunctor on the lfp category .Recall that an lfp category, besides being cocomplete, is complete and has (strong epi, mono)-factorizations of morphisms <cit.>, i.e. every morphism f: X → Y can be decomposed as f = m · e where e: XI is a strong epi and m: IY a mono. One should think of I as the image of X in Y under f. The rational fixed point is a fully abstract model of behaviour for all F-coalgebras whose carrier is an fp-object. We now recall its construction <cit.>.Denote by F the full subcategory of all F-coalgebras on fp carriers, and let (ρ F, r) be the colimit of the inclusion functor of F into F:(ρ F, r) = ( FF) with the colimit injections a: A →ρ F for every coalgebra a: A → FA in F.We call (ρ F, r) the rational fixed point of F; indeed, it is a fixed point: The coalgebra structure r: ρ F → F(ρ F) is an isomorphism. The rational fixed point can be characterized by a universal property both as a coalgebra and as an algebra for F: as a coalgebra ρ F is the final locally finitely presentable coalgebra <cit.>, and as an algebra it is the initial iterative algebra <cit.>. We will not recall the latter notion as it is not needed for the technical development in this paper.Locally finitely presentable (locally fp, for short) coalgebras for F can be characterized as precisely those F-coalgebra obtained as a filtered colimit of a diagram of coalgebras from F: An F-coalgebra is locally fp if and only if it is a colimit of some filtered diagram → FF. For = an F-coalgebra (X,c) is locally fp iff it is locally finite, i.e. every element of X is contained in a finite subcoalgebra. Analogously, forthe category of vector spaces over the field k an F-coalgebra (X,c) is locally fp iff it is locally finite dimensional, i.e. every element of X is contained in a finite dimensional subcoalgebra. Of course, there is a unique coalgebra morphism ρ F →ν F. Moreover, in many cases ρ F is fully abstract for locally fp coalgebras, i.e. 
besides being the final locally fp coalgebra the above coalgebra morphism is monic; more precisely, if the classes of fp- and fg-objects coincide and F preserves non-empty monos, then ρ F is fully abstract (cf. Theorem <ref> below). The assumption that the two object classes coincide is often true: *In the category of sets, posets, and graphs, fg-objects are fp and those are precisely the finite sets, posets, and graphs, respectively. *A locally finite variety is a variety of algebras, where every free algebra on a finite set of generators is finite. It follows that fp- and fg-objects coincide and are precisely the finite algebras. Concrete examples are the categories of Boolean algebras, distributive lattices and join-semilattices.*In the category of -semimodules for a semiringthe fp- and fg-objects need not coincide in general. However, if the semiringis Noetherian in the sense of Ésik and Maletti <cit.>, i.e. every subsemimodule of a finitely generated -semimodule is itself finitely generated, then fg- and fp-semimodules coincide. Examples of Noetherian semirings are: every finite semiring, every field, every principal ideal domain such as the ring of integers and therefore every finitely generated commutative ring by Hilbert's Basis Theorem. The tropical semiring (∪{∞}, min, +, ∞, 0) is not Noetherian <cit.>. The usual semiring of natural numbers is also not Noetherian: the -semimodule × is finitely generated but its subsemimodule generated by the infinite set {(n,n+1) | n ≥ 1} is not. However, -semimodules are precisely the commutative monoids, and for them fg- and fp-objects coincide (this is known as Redei's theorem <cit.>; see Freyd <cit.> for a very short proof).*The categoryof positively convex algebras <cit.> is the Eilenberg-Moore category for the monad 𝒟 of finitely supported subprobability distributions on sets. This monad maps a set X to 𝒟 X = {d: X → [0,1] |d is finite and ∑_x ∈ X d(x) ≤ 1},where d = {x ∈ X | d(x) ≠ 0}, and a function f: X → Y to 𝒟 f: 𝒟 X →𝒟 Y with𝒟 f(d) = λ y. ∑_f x = y d(x).More concretely, a positively convex algebra is a set X equipped with finite convex sum operations: for every n and p_1, …, p_n ∈ [0,1] with ∑_i = 1^n p_i ≤ 1 we have an n-ary operation assigning to x_1, …, x_n ∈ X an element _i=1^n p_i x_i subject to the following axioms:* _i=1^n p_i^kx_i = x_k whenever p_k^k = 1 and p_i^k = 0 for i ≠ k, and* _i=1^n p_i (_j=1^k q_i,j x_j) = _j=1^k (∑_i = 1^n p_iq_i,j) x_j.For n = 1 we write the convex sum operation for p ∈ [0,1] simply as px. The morphisms ofare maps preserving finite convex sums in the obvious sense. The point of mentioning this example at length is thatis used for the coalgebraic modelling of the trace semantics of probabilistic systems (see e.g. <cit.>), and recently, it was established by Sokolova and Woracek <cit.> that in , the classes of fp- and fg-objects coincide. We shall come back to this example in Section <ref> when we introduce and discuss proper functors.We list a number of examples of rational fixed points for cases where they do form subcoalgebras of the final coalgebra.*For the functor FX = {0,1}× X^A onthe finite coalgebras are deterministic automata, and the rational fixed point is carried by the set of regular languages on the alphabet A.*For every finitary signature Σ, the final coalgebra for the associated polynomial functor F_Σ (see Example <ref>) is carried by the set of all (finite and infinite) Σ-trees, i.e. rooted and ordered trees where each node with n-children is labelled by an n-ary operation symbol. 
The rational fixed point is the subcoalgebra given by rational (or regular <cit.>) Σ-trees, i.e. those Σ-trees that have only finitely many different subtrees (up to isomorphism) – this characterization is due to Ginali <cit.>. For example, for the signature Σ with a binary operation symbol * and a constant c the following infinite Σ-tree (here written as an infinite term) is rational:c * (c * (c* ⋯ )));in fact, its only subtrees are the whole tree and the single node tree labelled by c.*For the functor FX = × X onthe final coalgebra is carried by the set ^ω of real streams, and the rational fixed point is carried by its subset of eventually periodic streams (or lassos). Considered as a functor on the category of vector spaces over , the final coalgebra ν F remains the same, but the rational fixed point ρ F consists of all rational streams <cit.>.*For the functor FX = × X^A on the categoryof -semimodules for the semiringwe already mentioned that ν F = ^A^* consists of all formal power-series. Whenever the classes of fg- and fp-semimodules coincide, e.g. for every Noetherian semiringor the semiring of natural numbers, then ρ F is formed by the recognizable formal power-series; from the Kleene-Schützenberger theorem <cit.> (see also <cit.>) it follows that these are, equivalently, the rational formal power-series. *On the category of presheaves ^ℱ, where ℱ is the category of all finite sets and maps between them, consider the functor FX = V + X × X + δ(X), where V:ℱ is the embedding and δ (X)(n) =X(n+1). This is a paradigmatic example of a functor arising from a binding signature for which initial semantics was studied by Fiore et al. <cit.>.The final coalgebra ν F is carried by the presheaf of all λ-trees modulo α-equivalence: ν F(n) is the set of (finite and infinite) λ-trees in n free variables (note that such a tree may have infinitely many bound variables). And ρ F is carried by the rational λ-trees, where an α-equivalence class is called rational if it contains at least one λ-tree which has (up to isomorphism) only finitely many different subtrees (see <cit.> for details). Rational λ-trees also appear as the rational fixed point of a very similar functor on the category of nominal sets <cit.>. An analogous characterization can be given for every functor on nominal sets arising from a binding signature <cit.>. As we mentioned previously, whether fg- and fp-objects coincide is currently unknown in some base categories used in the coalgebraic modelling of systems, for example, in idempotent semirings (used in the treatment of context-free grammars <cit.>)Do we have a counterexample or not? Check with Joost Winter., in algebras for the stack monad (used for modelling configurations of stack machines <cit.>); or it even fails, for example in the category of finitary monads on sets (used in the categorical study of algebraic trees <cit.>) or in Eilenberg-Moore categories for a monad in general (the target categories of generalized determinization <cit.>).As a remedy, in recent joint work with Pattinson and Wißmann <cit.>, we have introduced the locally finite fixed point which provides a fully abstract model of finitely generated behaviour. Its construction is very similar to that of the rational fixed point but based on fg- in lieu of fp-objects. In more detail, one considers the full subcategory F of all F-coalgebras carried by an fg-object and takes the colimit of its inclusion functor:(θ F, ℓ) = ( FF).Suppose that the finitary functor F: → preserves non-empty monos. 
Then (θ F, ℓ) is a fixed point for F, and it is a subcoalgebra of ν F.*Note that for an arbitrary (not necessarily concrete) lfp categorythe notion of a non-empty monomorphisms needs explanation: a monomorphism m: XY is said to be empty if its domain X is a strict initial object of , where recall that the initial object 0 ofis strict provided that every morphism A → 0 is an isomorphism.In particular, if the initial object ofis not strict, then all monomorphisms are non-empty.*For a functor F: → preserving non-empty monos the category F of all F-coalgebras inherits the (strong epi, mono)-factorization system from(see Remark <ref>) in the following sense: every coalgebra morphism f: (X,c) → (Y,d) can be factorized into coalgebra morphisms e and m carried by a strong epi and a mono in , respectively. In fact, one (strong epi, mono)-factorizes f = m · e inand obtains a unique coalgebra structure on the `image' I such that e and m are coalgebra morphisms:X [r]^-c @->>[d]_e FX [d]^-FeI @–>[r] @ >->[d]_m FI @ >->[d]^FmY [r]_-d FYIndeed, if m is a non-empty mono, we know that Fm is monic by assumption and we use the unique diagonal property. Otherwise, m is an empty mono, which implies that e: XI is an isomorphism since I is a strict initial object. Then Fe · c · e^-1 is the desired coalgebra structure on I.Furthermore, like its brother, the rational fixed point, θ F is characterized by a universal property both as a coalgebra and as an algebra: it is the final locally finitely generated coalgebra and the initial fg-iterative algebra <cit.>. Under additional assumptions, which all hold in every algebraic category, we have a close relation between ρ F and θ F; in fact, the following is a consequence of <cit.>:Suppose thatis an lfp category such that every fp-object is a strong quotient of a strong epi projective fp-object, and let F: → be finitary and preserving non-empty monos. Then θ F is the image of ρ F in the final coalgebra.More precisely, taking the (strong epi, mono)-factorization of the unique F-coalgebra morphism ρ F →ν F yields θ F, i.e. for F preserving monos on an algebraic category we have the following picture:ρ F θ F ν F.A sufficient condition under which ρ F and θ F coincide is the following (cf. <cit.>):Suppose that in addition to the assumption in Theorem <ref> the classes of fg- and fp-objects coincide in . Then ρ F ≅θ F, i.e. the left-hand morphism above is an isomorphism. In the introduction we briefly mentioned a number of interesting instances of θ F that are not (known to be) instances of the rational fixed point; see <cit.> for details.A concrete example, where ρ F is not a subcoalgebra of ν F (and hence not isomorphic to θ F) was given in <cit.>. We present a new, simpler example based on similar ideas:*Letbe the category of algebras for the signature Σ with two unary operation symbols u and v. The natural numberswith the successor function as both operations u^ and v^ form an object of . We consider the functor FX = × X on . Coalgebras for F are automata carried by an algebra A inequipped with two Σ-algebra morphisms: an output morphism A → and a next state morphism A → A. 
The final coalgebra is carried bythe set ^ω of streams of natural numbers with the coordinatewise algebra operations and with the coalgebra structure given by the usual head and tail functions.Note that the free Σ-algebra on a set X of generators is TX ≅{u,v}^* × X; we denote its elements by w(x) for w ∈{u,v}^* and x ∈ X.The operations are given by prefixing words by the letters u and v, respectively: s^TX: w(x) ↦ sw(x) for s = u or v. Now one considers the F-coalgebra a: A → FA, where A = T{x} is free Σ-algebra on one generator x and a is determined by a(x) = (0, u(x)). Recall our notation a: A →ν F for the unique coalgebra morphism. Clearly, a(x) is the stream (0,1,2,3,⋯) of all natural numbers, and since a is a Σ-algebra morphism we havea (u(x)) =a (v(x)) = (1,2,3,4,⋯).Since A is (free) finitely generated, it is of course, finitely presentable as well. Thus, (A,a) is a coalgebra in F.However, we shall now prove that the (unique) F-coalgebra morphism a: A →ρ F maps u(x) and v(x) to two distinct elements of ρ F. We prove this by contradiction. So suppose that a(u(x)) =a(v(x)). By the construction of ρ F as a filtered colimit (see Notation <ref>) we know that there exists a coalgebra b: B → FB in F and an F-coalgebra morphism h: A → B withh(u(x)) = h(v(x)).Since B is a finitely presented Σ-algebra it is the quotient inof a free algebra A' via some surjective Σ-algebra morphism q: A'B, say. Next observe, that there is a coalgebra structure a': A' → FA' such that q is an F-coalgebra morphism from (A', a') to (B,b): for Fq is a surjective Σ-algebra morphism and so we obtain q' by using projectivity of A' w.r.t. b · q: A' → FB (cf. Remark <ref>(<ref>)):A' @–>[r]^a'@->>[d]_q FA'@->>[d]^FqB [r]_-b FB Now choose a term t_x in A' with q(t_x) = h(x). Using that q and h are Σ-algebra morphisms we see that q(u(t_x)) = q(v(t_x)) as follows:q(u(t_x))= u^B(q(t_x)) = u^B(h(x)) = h(u(x)) = v^B(h(x)) = v^B(q(t_x)) = q(v(t_x)).Since h is an F-coalgebra morphism, we obtain from (<ref>) that h merges the right-hand components of a(u(x)) and a(v(x)), in symbols: h(uu(x)) = h(vu(x)). It follows that q satisfies q(uu(t_x)) = q(vu(t_x)) using a similar argument as in (<ref>) above.Continuing to use that h and q are F-coalgebra morphisms, we obtain the following infinite list of elements (terms) of A' that are merged by q (we write these pairs as equations):q(u^n+1(t_x)) = q(vu^n(t_x)) for n ∈. We need to prove that there exists no finite set of relations E ⊆ A' × A' generating the above congruence on A' given by q: A'B. So suppose the contrary, and let A_0' be the Σ-subalgebra of A' generated by {t_x}, i.e. A_0' ≅{u,v}^* ×{t_x}. Since q(t_x) = h(x) and q and h are both coalgebra morphisms we know that a' =b · q and b · h =a and thereforea' (t_x) =b(q(t_x)) =b (h(x)) =a(x) = (0,1,2,3, ⋯).Since a' is a Σ-algebra morphism it follows that for a word w ∈{u,v}^* of length n we havea'(w(t_x)) = (n, n+1, n+2, n+3, ⋯).Thus, when w, w' ∈{u,v}^* are of different length, then the pair(w(t_x), w'(t_x)) cannot be in the congruence generated by E; otherwise we would have q(w(t_x)) = q(w'(t_x)) which implies a'(w(t_x)) =a'(w'(t_x)) contradicting (<ref>).Now let ℓ be the maximum length of words from {u,v}^* occurring in any pair contained in the finite set E. Then the pair (u^ℓ+2(t_x), vu^ℓ+1(t_x)) obtained from the ℓ+1-st equation in (<ref>) is not in the congruence generated by E; for if any pair of terms of height greater then ℓ are related by that congruence, these two terms must have the same head symbol. 
Thus we arrive at a contradiction as desired. However, one can prove that the (unique) F-coalgebra morphisma: A →ρ F satisfies a(u(x)) ≠ a(v(x)), see the full paper for details <cit.>.*In this example we also have that θ F and ν F do not coincide. To see this we use that θ F is the union of images of all c: TX →ν F where (TX,c) ranges over those F-coalgebras whose carrier TX is free finitely generated (i.e. TX ≅{u,v}^* × X for some finite set X) <cit.>. Hence, each such algebra TX is countable, and there exist only countably many of them, up to isomorphism. Furthermore, note that on every free finitely generated algebra TX there exist only countably many coalgebra structures c: TX → FTX, since FTX = × TX is countable and c, being a Σ-algebra morphism, is determined by its action on the finitely many generators. Thus, θ F is countable because it is the above union of countably many countable coalgebras. However, ν F being carried by the set ^ω of all streams overis uncountable.Note that being a Σ-algebra morphism any coalgebra structure a: TX → FTX is determined by its action on the generators. And from the form of any TX we know that for any x ∈ X there exist k,n_i ∈, w_i ∈{u,v}^* and x_i ∈ X, i = 1, … k, such that x = x_0 anda(x_i) = (n_i, w_i(x_i+1)) for i = 0, …, k-1anda(x_k) = (n_k, w_k(x_j)) for some j ∈{0, …, k}.Now let m_i = |w_i|, i = 1, …, k, be the lengths of words. Then it follows that a (x_0) = (n_0, m_0 + n_1, m_0 + m_1 + n_2,⋯, m_0 + ⋯ + m_k-1 + n_k, m_0 + ⋯ m_k + n_j,⋯).Let m be the maximum of all n_i and m_i. Then it is clear that the n-th entry of a(x_0) can be at most (n+1) · m. It follows that for any w ∈{u,v}^* the n-th entry of a(w(x)) is bounded above by (n+1)· m + |w|. Thus,the entries of every stream in θ F grow at most linearly. However, there are streams in ν F for which this is not the case, e.g. the stream (1,2,4,8, ⋯) of powers of 2. Hence θ F does not coincide with ν F. § A FIXED POINT BASED ON COALGEBRAS CARRIED BY FREE ALGEBRAS In this section we study coalgebras for a functor F on an algebraic category ^T whose carrier is a free finitely generated algebra. These coalgebras are of interest because they are precisely those coalgebras arising as the results of the generalized determinization <cit.>.We shall see that their colimit yields yet another fixed point of F (besides the rational fixed point and the locally finite one). Moreover, in the next section we show that this fixed point is characterized by a universal property as an algebra.The purpose of this section is to study the situation where the rational fixed point for a functor F on an algebraic category ^T coincides with the locally finite one, and moreover, both can be constructed just from those coalgebras whose carrier is a free finitely generated coalgebra. The latter coalgebras are precisely those coalgebras arising as the results of the generalized determinization <cit.>.Throughout the rest of the paper we assume thatis an algebraic category, i.e. is (equivalent to) the Eilenberg-Moore category ^T for a finitary monad T on . Furthermore, we assume that F: → is a finitary endofunctor preserving surjective T-algebra morphisms. [inline]Do I need that F preserves non-empty monos? *Note that we do not assume here that F preserves non-empty monomorphisms (cf. Theorems <ref> and <ref>) as this assumption is not needed for our main result Theorem <ref>. 
However, we will make this assumption at the end, in order to obtain the picture in (<ref>) (see Corollary <ref>).*The most common instance of a functor F on an algebraic categoryis a lifting of an endofunctor F_0: →, i.e. we have a commutative square ^T [r]^-F [d]_U^T [d]^U [r]_-F_0where U: → is the forgetful functor. Recall that monomorphisms in ^T are precisely the injective T-algebra morphisms (see Remark <ref>(<ref>)). Hence, a lifting F preserves all non-empty monos since the lifted set functor F_0 does so. Similarly, F preserves surjective T-algebra morphisms since F_0 preserves surjections (which are split epis in ). Finally, F is finitary whenever F_0 is so because filtered colimits in ^T are created by U. *It is well known that liftings F: ^T →^T are in bijective correspondence with distributive laws of the monad T over the functor F_0, i.e. natural transformations λ: TF_0 → F_0T satisfying two obvious axioms w.r.t. the unit and multiplication of T (see e.g. Johnstone <cit.>):F_0 [r]^-F_0η[rd]_-η F_0TF_0 [d]^λ F_0TTTF_0 [r]^-Tλ[d]_μ F_0TF_0T [r]^-λ TF_0TT [d]^F_0 μTF_0 [rr]_-λ F_0TMoreover, coalgebras for the lifting F are precisely the λ-bialgebras, i.e. sets X equipped with an Eilenberg-Moore algebra structure α: TX → X and a coalgebra structure c: X → F_0X subject to the following commutativity conditionTX [d]_α[r]^-TcTF_0X [r]^-λ_X F_0TX [d]^F_0 αX [rr]_-c F_0Xwhich states that c is a T-algebra morphism from (X,α) to F(X,α). *Let F_0: → have a lifting to ^T. Generalized determinization <cit.> is the process of turning a given coalgebra c: X → F_0TX ininto the coalgebra c: TX → FTX in ^T. For example, for the functor F_0X = {0,1}× X^Σ onand the finite power-set monad T =, F_0T-coalgebras are precisely non-deterministic automata and generalized determinization is the construction of a deterministic automaton by the well-known subset construction. The unique F-coalgebra morphism ( c) assigns to each state x ∈ X the language accepted by x in the given non-deterministic automaton (whereas the final semantics for F_0T onprovides a kind of process semantics taking the non-deterministic branching into account).Thus studying the behaviour of F-coalgebras whose carrier is a free finitely generated T-algebra TX is precisely the study of a coalgebraic language semantics of finite F_0T-coalgebras.We denote by F the full subcategory of F given by all coalgebras c: TX → FTX whose carrier is a free finitely generated T-algebra, i.e. where X is a finite set X.The colimit of the inclusion functor of F into the category of all F-coalgebras is denoted by(ϕ F, ζ ) = ( FF) with the colimit injections _c: TX →ϕ F for every c: TX → FTX in F.The coalgebra ϕ F is an lfp coalgebra. Indeed, as we have just seen ϕ F is the colimit of a filtered diagram of F-coalgebras with a finitely presentable carrier; indeed, each TX with X finite is finitely presentable, and finitely presentable objects are closed under coequalizers. Since every free finitely generated algebra TX is clearly fp (being presented by the finite set X of generators and no relations), F is a full subcategory of F. Therefore, the universal property of the colimit ϕ F induces a coalgebra morphism denoted by h: ϕ F →ρ F. Furthermore we write m: ϕ F →ν F for the unique F-coalgebra morphisms into the final coalgebra, respectively.We shall show in Proposition <ref> that h is a strong epimorphism. Thus, whenever F preserves non-empty monos, we have the picture (<ref>) from the introduction. 
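To make the preceding remarks concrete, here is a small Python sketch (ours, not taken from the paper; all function and variable names are illustrative) of generalized determinization in the special case F_0X = {0,1} × X^Σ with T the finite power-set monad: a nondeterministic automaton c: X → F_0TX is turned into a coalgebra on the free finitely generated algebra TX (the classical subset construction), and the accepted words of bounded length are read off as a finite approximation of its language semantics.

# A minimal sketch (ours, not from the paper): generalized determinization for
# F0(X) = {0,1} x X^Sigma with T = finite power set, i.e. the classical subset
# construction.  States of the determinized coalgebra are frozensets of states.
from itertools import product

SIGMA = ("a", "b")
ACCEPT = {"x": 0, "y": 1}                              # output map X -> {0,1}
STEP = {("x", "a"): {"x", "y"}, ("x", "b"): set(),      # nondeterministic transitions
        ("y", "a"): {"y"}, ("y", "b"): {"x"}}

def determinize(accept, step):
    """Lift c: X -> F0(TX) to the coalgebra on TX = P_f(X), the free T-algebra on X."""
    def out(subset):                    # a subset accepts iff one of its members does
        return 1 if any(accept[s] for s in subset) else 0
    def delta(subset, letter):          # union of the successor sets of the members
        return frozenset().union(*(step[(s, letter)] for s in subset))
    return out, delta

def accepted_words(start, out, delta, sigma, max_len):
    """Finite approximation of the behaviour (accepted language) of a determinized state."""
    words = []
    for n in range(max_len + 1):
        for w in product(sigma, repeat=n):
            state = start
            for letter in w:
                state = delta(state, letter)
            if out(state):
                words.append("".join(w))
    return words

out, delta = determinize(ACCEPT, STEP)
print(accepted_words(frozenset({"x"}), out, delta, SIGMA, max_len=3))

The only point of the sketch is to exhibit the shape of the data: the determinized structure lives on the free finitely generated algebra TX, i.e. it is precisely the kind of coalgebra collected in the colimit defining ϕ F.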
We will also use that the colimit ϕ F is a sifted colimit. *Recall that a small categoryis called sifted <cit.> if finite products commute with colimits overin . More precisely,is sifted iff given any diagram D: ×→, whereis a finite discrete category, the canonical map_d ∈(∏_j ∈ D(d,j))→∏_j ∈ (_d ∈ D(d,j))is an isomorphism. A sifted colimit is a colimit of a diagram with a sifted diagram scheme.*It is well-known that the forgetful functor ^T → preserves and reflects sifted colimits; this follows from <cit.>. *Further recall <cit.> that every small categorywith finite coproducts is sifted. Thus, from Lemma <ref> below it follows that =F is sifted, and therefore ϕ F is a sifted colimit. The category F is closed under finite coproducts in F. The empty map 0 → FT0 extends uniquely to a T-algebra morphism T0 → FT0, i.e. an F-coalgebra, and this coalgebra is the initial object of F. Given coalgebras c: TX → FTX and d: TY → FTY one uses that T(X+Y) together with the injections T: TX → T(X+Y) and T: TY → T(X+Y) form a coproduct in ^T. This implies that forming the coproduct of (TX, c) and (TY, d) in F we obtain an F-coalgebra on T(X+Y), and this is an object of F since X+Y is finite. If F preserves sifted colimits, then ϕ F is a fixed point of F, i.e. ζ: ϕ F → F(ϕ F) is an isomorphism. Recall that every finitary endofunctor onpreserves sifted colimits (this follows from <cit.>). Thus, so does every lifting F:^T →^T of a finitary endofunctor on , using Remark <ref>(2). In general, finitary functors need not preserve sifted colimits <cit.>. One might now expect that ϕ F is characterized as a coalgebra by a universal property similar to finality properties that characterize ρ F and θ F. However, Urbat <cit.> shows that this is not the case. In fact, he provides the following example of a coalgebra c: TX → FTX where _c: TX →ϕ F is not the only F-coalgebra morphism: *Letbe the category of algebras for the signature with one unary operation symbol u (and no equations), and let F= be the identity functor on . Let A be the free (term) algebra on one generator x, and let B be the free algebra on one generator y (i.e. both A and B are isomorphic to ). We equip A and B with the F-coalgebra structures a = 𝕀: A → A and b: B → B given by b(y) = u(y). Then the mapping t ↦ u(t) clearly is an F-coalgebra morphism from B to itself, i.e. a morphism in F. Therefore we have _b (y) = _b(u(y)). Now define a morphism g: A →ϕ F inby g(x) = _b(y). Then g is an F-coalgebra morphism sinceg ø a(x) = g(x) = _b(y) = _b(u(y)) = _b(b(y)) = ζ(_b(y)) = ζ(g(x)),where ζ: ϕ F → F(ϕ F) is the coalgebra structure on ϕ F. We prove the following property: for every morphism f in F from α: TX → TX to β: TY → TY, any t ∈ TX reaches finitely many states iff f(t) does so, more precisely:{α^n(t) | n ∈} is finite{β^n(f(t)) | n ∈} is finite. Indeed, if t reaches finitely many states, then the f(α^n(t)), for n ∈, form a finite set, and β^n(f(t)), n ∈ is the same set since f is a coalgebra morphism. Conversely, suppose that t reaches infinitely many states. Since f is a morphism in , we know that if α^n(t) is u^k(x) for some x ∈ X then f(α^n(t)) = β^n(f(t)) must be u^l(y) with l ≥ k for some y ∈ Y. Thus, f(t) must also reach infinitely many states. We can now conclude that g, _a: A →ϕ F are different coalgebra morphisms. Indeed, _a(x) reaches only itself since x does so, but g(x) = _b(y) reaches infinitely many states since y does so.
Thus, g(x) ≠_a(x). It follows that |ϕ F| ≥ 2, while ρ F = θ F = ν F = 1; to see the latter equation use that 𝕀: 1 → 1 is a coalgebra in F since 1 is the object ofpresented by one generator z and one relation z = u(z).Define g: A →ϕ F by g(x) = _b(y). Then one can show that g is an F-coalgebra morphism different from the F-coalgebra morphism _a: A →ϕ F. *Using similar ideas as in the previous point one can show that, for the categoryand FX = × X from Example <ref>, ϕ F and ρ F do not coincide. , see the full paper <cit.>. Consequently, in this example, none of the arrows in (<ref>) is an isomorphism. In order to see that ϕ F and ρ F do not coincide, consider the two coalgebras a: A → FA and b:B → FB with A = T{x} and B = T{y} and with the coalgebra structure given by a(x) = (0,u(x)) and b(y) = (0,v(y)). These coalgebras both lie in F. Consider also the coalgebra p: P → FP where P is presented by one generator z and one relation u(z) = v(z), i.e. P = T{z}/∼, where ∼ is the smallest congruence with u(z) ∼ v(z). Hence, w(z) ∼ w'(z) for w,w' ∈{u,v}^* iff w and w' have the same length. The coalgebra structure is defined by p([w(x)]) = (0, [uw(x)]). The coalgebra (P,p) lies in F. Now f: A → P and g: B → P determined by f(x) = z = g(y) are easily seen to be F-coalgebra morphisms, and therefore a =p · f and b =p · g. Therefore a(x) =p (f(x)) =p(z) =p (g(y)) =b (y). However, we will prove that _a (x) ≠_b (x).For any (TX, c) in F and t ∈ TX, we say thatt-reachable states are u-bounded if there exists a natural number k such that, for any state s = w(x) reachable from t via the next state function,the number |w|_u of u's in w is at most k. Now we prove for any morphism f: (TX, c) → (TY, d) in F and any t ∈ TX the following claim:t-reachable states are u-bounded iff f(t)-reachable states are u-bounded.Indeed, a state s = w(x) is reachable from t iff f(s) = wf(x) is reachable from f(t). Then the 'only if' direction is clear: if t-reachable states are not u-bounded, then neither are f(t)-reachable states.For the 'if' direction suppose t-reachable states are u-bounded by k. Then f(t)-reachable states are bounded by k + max{|f(x)|_u | x ∈ X}. In this section we are going to investigate when the first three fixed points in (<ref>) collaps to one, i.e. ϕ F ≅ρ F ≅θ F. As a consequence, it follows that finality of a given locally fp coalgebra for F can be established by checking the universal property only for the coalgebras in F(Corollary <ref>). Coming back to the discussion of properties that ϕ F does have, the following proposition shows that ρ F is always a strong quotient of ϕ F. Recall fromNotation <ref> the canonical coalgebra morphism h from ϕ F to ρ F: The morphism h: ϕ F ρ Fis a strong epimorphism in .The following proof is set theoretic and makes explicit use of the fact thatis algebraic over , i.e. we use that strong epimorphisms inare precisely surjective T-algebra morphisms. In the appendix we provide a purely category theoretic proof, which is somewhat longer, however. That proof shows that the above result holds for more general base categories than sets.We first prove the following fact:every coalgebra in F is a regular quotient of some coalgebra in F.Indeed, given any a: A → FA in F we know that its carrier is a regular quotient of some free T-algebra TX with X finite, via q: TXA, say (see Remark <ref>.<ref>). 
Since F preserves regular epis (= surjections) we can use projectivity of TX (see Remark <ref>.<ref>) to obtain a coalgebra structure c on TX making q an F-coalgebra morphism:TX@–>[r]^-c@->>[d]_q FTX@->>[d]^FqA [r]_-a FAThis implies that we have c =a · q. Now let p ∈ρ F. Since ρ F is the colimit of all coalgebras in F, we know from Lemma <ref> that there exists some coalgebra a: A → FA in F and r ∈ A such that a (r) = p. By the above fact, we have (TX, c) in F and the surjective coalgebra morphism q: TXA. Hence there exists some s ∈ TX with q(s) = r. By the finality of ρ F we have the commuting square below:TX @->>[r]^-q [d]__cA [d]^ a ϕ F @->>[r]_-hρ FThus we have p =a (q(s)) = h(_c(s)), which shows that h is surjective as desired. If F preserves non-empty monomorphisms, then we obtain the situation displayed in (<ref>):ϕ F ρ F θ F ν F.Indeed, this follows from Proposition <ref> and Theorem <ref>. § A UNIVERSAL PROPERTY OF PHI F We have seen in Example <ref>(1) that ϕ F, unlike ρ F and θ F, does not enjoy a finality property as a coalgebra. In this section we will prove that, as an algebra for F, ϕ F is characterized by a universal property. This property then determines ϕ F uniquely up to isomorphism. To this end we make the In addition to Assumptions <ref> we assume in this section that F preserves sifted colimits (cf. Remark <ref>). By Theorem <ref>, we know that ϕ F is then a fixed point of F so that by inverting its coalgebra structure we may regard it as the F-algebra ζ^-1: F(ϕ F) →ϕ F. We have already mentioned that both ρ F and θ F are characterized by universal properties as F-algebras: they are the initial iterative and initial fg-iterative algebras, respectively. However, those properties entail that there exists a unique F-coalgebra morphism from every coalgebra in F to ρ F, and from every coalgebra in F to θ F, respectively. That means that simply adjusting the definition of the notion of iterative algebra does not yield the desired universal property of ϕ F, again due to Example <ref>(1).The key to establishing a universal property of ϕ F is to consider algebras which admit canonical (rather than unique) coalgebra-to-algebra homomorphisms. The following notion is inspired by the Bloom algebras introduced by Adámek et al. <cit.>. An ffg-Bloom algebra for the functor F is a triple (A, a, †) where a: FA → A is an F-algebra and † is an operationTXFTX, X finite/TXAsubject to the following axioms:*solution: c is a coalgebra-to-algebra morphism, i.e. the diagram below commutes:TX [r]^- c[d]_cA FTX[r]_-F cFA[u]_a*functoriality: for every coalgebra morphism m: (TX, c) → (TY,d) in F we have c =d · m:TX [r]^c [d]_m FTX [d]^FmTY [r]_-d FY @R-1.5pc TX [rd]^- c[dd]_mA TY [ru]_- dA morphism of ffg-Bloom algebras from (A,a,†) to (B,b,) is an F-algebra morphism preserving solutions, i.e. an F-algebra morphism h: (A,a) → (B,b) such that for every c: TX → FTX in F we havec^ = (TXAB). The algebra ζ^-1: F(ϕ F) →ϕ F together with the operationgiven by the colimit injections, i.e. c^ = _c: TX →ϕ F for every c: TX → FTX in F, clearly is an ffg-Bloom algebra. Indeed, the solution axiom holds since _c is a coalgebra morphisms from (TX, c) to (ϕ F, ζ) and functoriality holds since the _c form a compatible cocone of the diagram D:FF. The above Bloom algebra on ϕ F is the initial ffg-Bloom algebra. It remains to prove the universal property. Let (A, a, †) be any ffg-Bloom algebra. Then the morphisms c: TX → A, for c: TX → FTX ranging over F, form a compatible cocone on the diagram D by functoriality. 
Therefore we have a unique morphism h: ϕ F → A such that the triangles below commuteTX [d]__c[rd]^- c ϕ F [r]_-h A for every c: TX → FTX in F.In order to see that h is an F-algebra morphism consider the diagram below:[r]^-c [d]__c [d]^F_c ϕ F @<2pt>[r]^-ζ[d]_h F(ϕ F) @<2pt>[l]^-ζ^-1[d]^FhA @<- `l[u] `[uu]^ c [uu] FA[l]^-a @<- `r[u] `[uu]_Fc [uu]Its outside commutes, for every c: TX → FTX in F, by the solution axiom for A, and the left-hand and right-hand parts by the definition of h. The upper square commutes by the solution axiom for ϕ F. Therefore, for every c: TX → FTX in F we haveh ·_c = a · Fh ·ζ·_c.Use that the colimit injections _c form an epimorphic family to conclude that h is an F-algebra morphism, i.e. h ·ζ^-1 = a · Fh. This proves existence of a morphism of ffg-Bloom algebras from ϕ F to A. For the uniqueness suppose that g: ϕ F → A is any morphism of ffg-Bloom algebras. Theng ·_c = g · c^ =cholds for every c: TX → FTX in F. Thus, g=h by the universal property of the colimit ϕ F.The following result provides a simple alternative characterization of the category of ffg-Bloom algebras for F without mentioning† and its axioms. This result is similar to <cit.> for ordinary Bloom algebras. Here F denotes the category of all F-algebras. The category of ffg-Bloom algebras is isomorphic to the slice category (ϕ F, ζ^-1)/ F. (1) Given an ffg-Bloom algebra (A, a, †), initiality of ϕ F provides an F-algebra morphism h: ϕ F → A, i.e. an object of the slice category. Moreover, this object assignment clearly gives rise to a functor using the initiality of ϕ F. (2) In the reverse direction, suppose we are given any F-algebra (A,a) and F-algebra morphism h: (ϕ F, ζ^-1) → (A,a). Then we define for every c: TX → FTX in F, c = (TX ϕ F A).Then using diagram (<ref>) we see that c satisfies the solution axiom: indeed, the outside of the diagram commutes since all its inner parts do. Moreover, functoriality of † follows from that of : given any m: (TX,c) → (TY,d) in F we haved · m = h ·_d · m = h ·_c =c.Furthermore, given a morphism in the slice category, i.e. we have h: (ϕ F, ζ^-1) → (A,a), g: (ϕ F, ζ^-1) → (B,b) and m: (A,a) → (B,b) such that m · h = g, we see that m is a morphism of ffg-Bloom algebras from (A,a,†) to (B,b,), where c^: TX → B is defined as g ·_c: indeed, m is an F-algebra morphism and we havem · c = m · h ·_c = g ·_c = c^.That this gives a functor from the slice category to the category of ffg-Bloom algebras is again straightforward. (3) We have defined two identity-on-morphisms functors and it remains to show that they are mutually inverse on objects.From ffg-Bloom algebras to the slice category and back we form for the given ffg-Bloom algebra (A,a,†) the ffg-Bloom algebra (A,a,) where c^ = h ·_c for the unique morphism h: ϕ F → A of ffg-Bloom algebras. Hence, since h preserves solutions we thus have c^ = h ·_c =c for every c: TX → FTX in F. From the slice category to ffg-Bloom algebras and back we take for a given F-algebra morphism h: (ϕ F, ζ^-1) → (A,a) the Bloom algebra (A,a,†) with c = h ·_c, which shows that h is a morphism of ffg-Bloom algebras. Thus, going back to the slice category we get back to h.§ PROPER FUNCTORS AND FULL ABSTRACTNESS OF PHI F In this section we are going to investigate when the three left-hand fixed points in (<ref>) collapse to one, i.e. ϕ F ≅ρ F ≅θ F. We introduce proper functors and show that a functor is proper if and only if ϕ F is fully abstract, i.e. a subcoalgebra of the final one. 
This also entails that the rational fixed point ρ F is fully abstract and at the same time it is determined by the coalgebras with free finitely generated carrier. More precisely, the finality of a given locally fp coalgebra for F can be established by checking the universal property only for the coalgebras in F (Corollary <ref>). Here we continue to work under Assumptions <ref>. *Recall that a zig-zag in a categoryis a diagram of the formZ_0 [rd]_f_0 Z_2 [ld]^f_1[rd]_f_2 ⋯[ld]^f_3 [rd]_f_n-2 Z_n[ld]^f_n-1 Z_1Z_3⋯ Z_n-1Z_0Z_1Z_2 Z_3 ⋯ Z_n-1 Z_n.For = ^T, we say that the zig-zag relatesz_0 ∈ Z_0 and z_n ∈ Z_n if there exist z_i ∈ Z_i, i = 1, …, n-1 such that f_i(z_i) = z_i+1 for i even and f_i(z_i+1) = z_i for i odd.* Ésik and Maletti <cit.> introduced the notion of a proper semiring in order to obtain the decidability of the (language) equivalence of weighted automata. A semiringis called proper provided that for every two -weighted automata A and B whose initial states x and y, respectively, accept the same weighted language there exists a zig-zag A = M_0 [rd]M_2 [ld] [rd]⋯[ld][rd]M_n = B [ld]M_1M_3⋯M_n-1A = M_0 → M_1M_2 → M_3 ⋯→ M_n-1 M_n = Bof simulations that relatesx and y. Recall here that a simulation from a weighted automaton (i, (M^a)_aa ∈ A, o) with n states to another one (j, (N^a)_a ∈ A, p) with m states is an -semimodule morphism represented by an n × m matrix H oversuch that i · H = j, o · H = p and M_a · H = H · N_a. Ésik and Maletti show that every Noetherian semiring is proper as well as the semiringof natural numbers, which is not Noetherian. However, the tropical semiring (∪{∞}, min, +, ∞, 0) is not proper.Recall from Example <ref> that -weighted automata with input alphabet Σ are equivalently coalgebras with carrier ^n, where n ≥ 1 is the number of states, for the functor FX = × X^Σ on the category . Note that the ^n are precisely the free finitely generated -semimodules, whence -weighted automata are precisely the coalgebras in F, which explains why we are interested in collecting precisely their behaviour in the form of the fixed point ϕ F. Moreover, since simulations of -weighted automata are clearly in one to one correspondence with F-coalgebra morphisms, one easily generalizes the notion of a proper semiring as follows. Recall that η_X: X → TX denotes the unit of the monad T. We call the functor F: →proper whenever for every pair of coalgebras c: TX → FTX and d: TY → FTY in F and every x ∈ X and y ∈ Y such that η_X(x) ∼η_Y(y) are behaviourally equivalent there exists a zig-zag in F relating η_X(x) and η_Y(y).A semiringis proper iff the functor FX = × X^Σ onis proper for every input alphabet Σ. We know that Noetherian semirings are proper (cf. Example <ref>.3), and the semiringof natural numbers is proper. Recently, Sokolova and Woracek <cit.> have shown that the non-negative rationals _+ and non-negative reals _+ form proper semirings.Constant functors are always proper. Indeed, suppose that F is the constant functor on some algebra A. Then we have ν F = A, and for any F-coalgebra B its coalgebra structure c: B → FB = A is also the unique F-coalgebra morphism from B to ν F = A. Now given any c: TX → FTX = A and d: TY → FTY = A and x ∈ TX, y ∈ TY as in Definition <ref>. Then η_X(x) ∼η_Y(y) is equivalent to c(η_X(x)) = d(η_Y(y)). Let a be this element of A, and extend x: 1 → X, y: 1 → Y and a: 1 → A to T-algebra morphisms Tx: T1 → TX, Ty: T1 → TY anda: T1 → A = FT1 (the latter yielding an F-coalgebra). 
Then TX @=[rd]T1[ld]_Tx[rd]^Ty TY @=[ld]TXTYis the required zig-zag in F relating η_X(x) and η_Y(y).Sokolova and Woracek <cit.> have recently proved that the functor FX = [0,1] × X^Σ on the categoryof positively convex algebras (see Example <ref>.4) is proper. In addition, its subfunctor F̂ given byF̂ X = { (o, f) ∈ [0,1] × X^Σ|∀ s ∈Σ: ∃ p_s∈ [0,1], x_s ∈ X:o + ∑_s∈Σ p_s ≤ 1, f(s) = p_sx_s}is proper. The latter functor was used as coalgebraic type functor for the axiomatization of probabilistic systems in <cit.>. In fact, the completeness proof of the expression calculus in loc. cit. makes use of our Corollary <ref> below. In general, it seems to be non-trivial to establish that a given functor is proper (even for the identity functor this may fail; in the light of Theorem <ref> below this follows from Example <ref>(1)). However, we will provide in Proposition <ref> sufficient conditions onand F the entail properness using our main result: The functor F is proper iff the coalgebra ϕ F is a subcoalgebra of ν F. The latter condition states that the unique coalgebra morphism m: ϕ F →ν F is a monomorphism in . We present the proof of this theorem in Section <ref>. Here we continue with a discussion of the consequences of this result. If F is proper, then ϕ F is the rational fixed point of F. Let u: ρ F →ν F be the unique F-coalgebra morphism. Then we have a commutative triangle of F-coalgebra morphisms due to finality of ν F:ϕ F @->>[r]^-h ρ F [r]^-u ν F. @<-> `u[l] `[ll]_-m [ll] m = (ϕ F hρ F u→ν F). Since F is proper, m is a monomorphism in , hence so is h. Since h is also a strong epimorphism by Proposition <ref>, it is an isomorphism. Thus, ϕ F ≅ρ F is the rational fixed point of F.Suppose that F preserves non-empty monomorphisms. Then the functor F is proper iff ϕ F ≅ρ F ≅θ F ν F. If the three fixed points are isomorphic, then F is proper by Theorem <ref>.Conversely, since F preserves non-empty monomorphisms, we have the situation displayed in (<ref>) (see Corollary <ref>). Now if F is proper we know from Corollary <ref> that ϕ F ≅ρ F. Thus, ρ F is a subcoalgebra of ν F, i.e. the composition of the last two morphisms in (<ref>) is a monomorphism. Thus, so is ρ F θ F. Since this is also a strong epimorphism, we conclude that ρ F ≅θ F. Note that this result also entails full abstractness of ϕ F ≅ρ F.A key result for establishing soundness and completeness of coalgebraic regular expression calculi is the following corollary (cf. <cit.> and its applications in Sections 4 and 5 of loc. cit.).Suppose that F is proper. Then an F-coalgebra (R, r) is a final locally fp coalgebra if and only if (R,r) is locally fp and for every coalgebra (TX, c) in F there exists a unique F-coalgebra morphism from TX to R.The implication “⇒” clearly holdsFor “⇐” it suffices to prove that for every a: A → FA in F there exists a unique F-coalgebra morphism from A to R. In fact, it then follows that R is the final locally fp coalgebra. To see this write an arbitrary locally fp coalgebra A as a filtered colimit of a diagram D: → FF with colimit injections h_d: Dd → A (d an object in ). Then the unique F-coalgebra morphisms u_d: Dd → R form a compatible cocone, and so one obtains a unique u: A → R such that u · h_d = u_d holds for every object d of . It is now straightforward to prove that u is a unique F-coalgebra morphism from A to R.Now let a: A → FA be a coalgebra in F. For every (TX,c) in F denote by c^: TX → R the unique F-coalgebra morphism that exists by assumption. 
These morphisms c^ form a compatible cocone of the diagram FF. Thus, we obtain a unique F-coalgebra morphism m': ϕ F ≅ρ F → R such that the following diagram commutes for every c: TX → FTX in F:TX [d]__c[rd]_(.4) c[rrd]^c^ ϕ F @=[r]_-≅ ρ F [r]_m'RTherefore we have an F-coalgebra morphism h = (A ρ FR).To prove it is unique, assume that g: A → R is any F-coalgebra morphism. As in the proof of Proposition <ref>, we know that A is the quotient of some TX in F via q: TXA, say. Then we have m' · a · q = g · q because there is only one F-coalgebra morphism from TX to R by hypothesis. It follows that h = m' · a = g since q is epimorphic.The next result provides sufficient conditions for properness of F. It can be seen as a category-theoretic generalization of Ésik's and Maletti's result <cit.> that Noetherian semirings are proper. Suppose that finitely generated algebras inare closed under kernel pairs and that F maps kernel pairs to weak pullbacks in . Then F is proper.From Proposition <ref> and Corollary <ref> we know that ϕ F ≅ρ F. Furthermore, since F maps kernel pairs to weak pullbacks inwe see that F preserves monomorphisms; indeed, m: AB is a mono iniff and only if its kernel pair is 𝕀_A,𝕀_A. Thus F𝕀_A, F𝕀_A form a weak pullback in , which is in fact a pullback, whence Fm is monomorphic.By Lemma <ref>, it follows that finitely generated objects are finitely presentable. Therefore, by Proposition <ref>, ρ F and thus ϕ F is a subcoalgebra of ν F, whence F is proper by Theorem <ref>.First, since F maps kernel pairs to weak pullbacks inwe see that F preserves monomorphisms; indeed, m: AB is a mono iniff and only if its kernel pair is 𝕀_A,𝕀_A. Thus F𝕀_A, F𝕀_A form a weak pullback in , which is in fact a pullback, whence Fm is monomorphic.Now let (TX, c) and (TY, d) be in F, x ∈ X and y ∈ Y such that c(η_X(x)) =d(η_Y(y)). It is our task to construct a zig-zag relating η_X(x) and η_Y(y).Form Z = X + Y and let e: TZ → FTZ be the coproduct of the coalgebras (TX, c) and (TY,d) in F (see Lemma <ref>). Take the factorization of e: TZ →ν F into a strong epi q: TZA followed by a monomorphism m: A ν F. Since F preserves non-empty monos, we obtain a unique coalgebra structure a: A → FA such that q and m are coalgebra morphisms (see Remark <ref>(<ref>)). Now take the kernel pair f,g: K ∥ TZ of q. Since TZ and its quotient A are finitely generated T-algebras, so is K because finitely generated T-algebras are closed under taking kernel pairs by assumption. Now F maps the kernel pair f,g to a weak pullback Ff, Fg of Fq along itself in . Thus, we have a map k K → FK such that the diagram below commutes:K @<-.5ex>[d]_f @<.5ex>[d]^g @–>[r]^-k FK @<.5ex>[d]^Fg@<-.5ex>[d]_FfTZ [d]_q [r]^-e FTZ [d]^FqA [r]_-a FANotice that we do not claim that k is a T-algebra morphism. However, since K is a finitely generated T-algebra, it is the quotient of some free finitely generated T-algebra TR via p TRK, say. Now we choose some splitting s K → TR of p in , i. e., s is a map such that p ø s = 𝕀. Next we extend the map r_0 = Fs ø k ø p øη_R to a T-algebra morphism r TR → FTR; it follows that the outside of the diagram below commutes:R [d]_η_R[rd]^r_0TR @–>[r]^-r [d]_p FTR [d]^FpK [r]_-k FK(Notice that to obtain r we cannot simply use projectivity of TR since k is not necessarily a T-algebra homomorphism.)We do not claim that this makes p a coalgebra morphism (i. e., we do not claim the lower square in (<ref>) commutes). 
However, fø p and g ø p are coalgebra morphisms from (TR, r) to (TZ, e); in fact, to see thate ø (f ø p) =F(f ø p) ø rit suffices that this equation of T-algebra morphisms holds when both sides are precomposed with η_R. To this end we compute[e ø f ø p øη_R = Ff ø k ø p øη_Rsee (<ref>),; = Ff ø Fp ø r_0 outside of (<ref>),; =Ff ø Fp ø r øη_Rdefinition of d. ]Similarly, g ø p is a coalgebra morphism.Now consider the following zig-zag in F (recall that the algebra TZ is the coproduct of TX and TY with coproduct injections T and T):TX [rd]_-TTR [ld]_-f · p[rd]^-g · pTY [ld]_-T TZTZWe now show that this zig-zag relates η_X(x) and η_Y(y). Let x' = T(η_X(x)) and y' = T(η_Y(y)). Then we havee (x') =e · T(η_X(x)) =c (η_X(x)) =d (η_Y(y)) =e · T(η_Y(y)) =e (y').Hence, since e = m · q and m is monomorphic, we obtain q(x') = q(y'). Thus, there exists some k ∈ K such that f(k) = x' and g(k) = y' by the universal property of the kernel pair. Finally, since p: TRK is surjective we obtain some z ∈ TR such that p(z) = k whence f· p(z) = x' and g · p(z) = y'. This completes the proof. *Note that closure of finitely generated algebras under kernel pairs can equivalently be stated in general algebra terms as follows: every congruence R of a finitely generated algebra A is finitely generated as a subalgebra RA × A (observe that this is not equivalent to stating that R is a finitely generated congruence). *For a lifting F of a set functor F_0, the condition that F maps kernel pairs to weak pullbacks inholds whenever F_0 preserves weak pullbacks. Hence, all the functors on algebraic categories mentioned in Example <ref> satisfy this assumption. *For the special case of a lifting, a variant of the argument in the proof of Proposition <ref> was used in <cit.> in order to prove that every coalgebra in F is a coequalizer of a parallel pair of morphisms in F. This has inspired Winter <cit.> who uses a very similar argument to prove that, for a distributive law λ, λ-bisimulations (see Bartels <cit.>) are sound and complete for λ-bialgebras (see Remark <ref>). It turns out that, for a lifting F, Proposition <ref> is a consequence of Winter's result, or, in other words, our result can be understood as a slight generalization of Winter's one. *The first condition in Proposition <ref> is not necessary for properness of F. In fact, it fails in the category of semimodules for , viz. the category of commutative monoids: in fact, consider the finitely generated commutative monoid × and its submonoid infinitely generated by { (n, n+1) | n ∈},which is easily seen not be finitely generated. However, as we mentioned in Example <ref>, FX = × X^Σ is proper on the category of commutative monoids. *In Example <ref>(<ref>) we mentioned that, in the categoryof positively convex algebras, fg- and fp-objects coincide. However, fg-objects are not closed under kernel pairs. In fact, the interval [0,1] is the free positively convex algebra on two generators, but {(0,0), (1,1)}∪ (0,1) × (0,1) is a congruence on [0,1] that is not an fg-object (i.e. a polytope) <cit.>.Thus, properness of the functors in Example <ref> does not follow from Proposition <ref>. § PROOF OF THEOREM <REF> In this section we will present the proof of our main result Theorem <ref>. We start with two technical lemmas. Recall <cit.> that every free T-algebra TX is perfectly presentable, i.e. the hom-functor ^T(TX, -) preserves sifted colimits (cf. Remark <ref>). 
It follows that for every sifted diagram D: →^T and every T-algebra morphism h: TX → D there exists some d ∈ and h': TX → Dd such that Dd [d]^_dTX @–>[ru]^-h'[r]_-h D.For every finite set X and map f: X →ϕ F there exists an object (TY,d) in F and a map g: X → Y such thatthe triangle below commutes: X[d]^f[lld]_-gY [r]_-η_YTY [r]_-_d ϕ F f = (XYTY ϕ F).We begin by extending f to a T-algebra morphism h =f: TX →ϕ F. By Remark <ref>, there exists some c: TZ → FTZ in F and a T-algebra morphism h': TX → TZ such that h = _c · h'. Let f' = h' ·η_X, let Y = X+Z and consider the T-algebra morphism [f',η_Z]: TY → TZ. This is a split epimorphism in ^T; we have T: TZ → TY with[f',η_Z]· T = η_Z = 𝕀_TZ,where the last equation follows from the uniqueness property of η_Z (see Section <ref>) by the laws of (-). We therefore get a coalgebra structured = (TYTZ FTZFTY)such that [f',η_Z] is an F-coalgebra morphism from (TY, d) to (TZ,c). Since Y is a finite set, (TY, d) is an F-coalgebra in F, and hence _c ·[f',η_Z] = _d. Thus we see that g = : X → Y is the desired morphism due to the commutative diagram below:@C+1pcX`l[llld]_g= [llld] [ld]^f'[d]^fY [r]_-η_Y@/^1pc/[rr]^-[f',η_Z]TY [r]_-[f',η_Z]TZ [r]_-_c ϕ F @<- `d[l] `[ll]^-_d [ll] Recall that a colimit of a diagram D: → is computed as follows:D = (∐_d ∈ Dd)/∼,where ∼ is the least equivalence on the coproduct (i.e. the disjoint union) of all Dd with x ∼ Df(x) for every f:d → d' inand every x ∈ Dd. In other words, for every pair of objects c, d ofand x ∈ Dc, y ∈ Dd we have x ∼ y iff there is a zig-zag inwhose D-imageDc = Dz_0 [rd]_Df_0 Dz_2 [ld]^Df_1[rd]_Df_2⋯[ld]^Df_3[rd]_Df_n-2 z_n = Dd[ld]^Df_n-1 Dz_1Dz_3⋯Dz_n-1relates x and y (cf. Remark <ref>). Let (TX, c) and (TY, d) be coalgebras in F,x ∈ TX, and y ∈ TY. Then the following are equivalent:* _c(x) = _d(y) ∈ϕ F, and*there is a zig-zag in F relating x and y.By Remark <ref>(<ref>), ϕ F is a sifted colimit. Hence, the forgetful functor F →^T → preserves this colimit. Thus the colimit ϕ F is formed as recalled in Remark <ref>:ϕ F ≅(∐_c TX_c)/∼,where c: TX_c → FTX_c ranges over the objects of F. Therefore, we have the desired equivalence.“⇒” Suppose that for m: ϕ F →ν F we have x, y ∈ϕ F with m(x) = m(y). We apply Lemma <ref> to 1 ϕ F and 1 ϕ F, 1 ϕ F and 1 ϕ F,respectively, to obtain two objects c:TX → FTX and d: TY → FTY in F with x' ∈ X and y' ∈ Y such that _c(η_X(x')) = x and _d(η_Y(y')) = y. By the uniqueness of coalgebra morphisms into ν F we have c = m ·_candd = m ·_d.Thus we compute:c(η_X(x')) = m ·_c·η_X(x') = m(x)=m(y) = m ·_d ·η_Y(y') =d(η_Y(y')).Since F is proper by assumption, we obtain a zig-zag in F relating η_X(x') and η_Y(y'). By Lemma <ref>, these two elements are merged by the colimit injections, and we have x = _c(η_X(x')) = _d(η_Y(y') = y. We conclude that m is monomorphic.“⇐” Suppose that m: ϕ F ν F is a monomorphism. Let c: TX → FTX and d: TY → FTY be objects of F, and let x ∈ X and y ∈ Y be such that c (η_X(x)) =d(η_Y(y)). Using (<ref>) and the fact that m is monomorphic we get _c(η_X(x)) = _d(η_Y(y)). By Lemma <ref>, we thus obtain a zig-zag in F relating η_X(x) and η_Y(y). This proves that F is proper. § CONCLUSIONS AND FURTHER WORK*TODO: list open problems from p. 6 of the notes!*the discussion on completeness proofs on p. 9f in the notes, i.e. that completeness proof in <cit.> extends to = *proper functors on convex sets Inspired by Ésik and Maletti's notion of a proper semiring, we have introduced the notion of a proper functor. 
We have shown that, for a proper endofunctor F on an algebraic category preserving regular epis and monos, the rational fixed point ρ F is fully abstract and moreover determined by those coalgebras with a free finitely generated carrier (i.e. the target coalgebras of generalized determinization).Our main result also shows that properness is necessary for this kind of full abstractness. For categories in which fg-objects are closed under kernel pairs we saw that when F maps kernel pairs to weak pullbacks in , then it is proper. This provides a number of examples of proper functors. However, in several categories of interest the condition on kernel pairs fails, e.g. in -semimodules (commutative monoids) and positively convex algebras. There can still be proper functors, e.g. FX = × X^Σ on the former and FX = [0,1] × X^Σ on the latter. But establishing properness of a functor without using Proposition <ref> seems non-trivial, and we leave the task of finding more examples of proper functors for further work.One immediate consequence of our results is that the soundness and completeness proof for the expression calculi for weighted automata <cit.> extends from Noetherian to proper semirings. In fact, Ésik and Kuich <cit.> already provide sound and complete axiomatizations of weighted language equivalence for (certain subclasses of) proper semiringsby showing that -rational weighted languages form certain free algebras. In the future, when additional proper functors are known, it will be interesting to study regular expression calculi for their coalgebras and use the technical machinery developed in the present paper for soundness and completeness proofs.Another task for future work is to study the new fixed point ϕ F in its own right. Here we have already proven that ϕ F is characterized uniquely (up to isomorphism) as the initial ffg-Bloom algebra. In the future, it might be interesting to investigate free (rather than initial) ffg-Bloom algebras. Moreover, related to ordinary Bloom algebras <cit.> there is the notion of an Elgot algebra <cit.>. It is known that for every object Y of an lfp category, the parametric rational fixed point ρ (F(-) + Y) yields a free Elgot algebra on Y. In addition, the category of algebras for the ensuing monad is isomorphic to the category of Elgot algebras for F. In <cit.>, the new notion of an ffg-Elgot algebra for F is introduced, and it is shown that for free finitely generated algebras Y the parametric fixed point ϕ(F(-) + Y) forms a free ffg-Elgot algebra for F on Y, and furthermore the category of ffg-Elgot algebras for F is monadic over our algebraic base category . It remains an open question whether ffg-Elgot algebras (or ffg-Bloom algebras) are monadic over . abbrvplainurl§ APPENDIX: CATEGORY THEORETIC PROOF OF PROPOSITION <REF> Note first that for every c: TX → FTX in F we clearly havec = (TX ϕ F ρ F)by the finality of ρ F. Recall that for strong epis the same cancellation law as for epis holds: if e · e' is a strong epi, then so is e; a similar law holds for strongly epimorphic families. Hence, we are done if we show that the c where c: TX → FTX ranges over F forms a jointly strongly epimorphic family, too. This is done by using that the a, where a: A → FA ranges over F, form a strongly epimorphic family (to see this use Lemma <ref> once again). The key observation is as follows: given any a: A → FA in F we know that its carrier is a regular quotient of some free T-algebra TX with X finite, via q: TXA, say. 
Since F preserves regular epis (= surjections) we can use projectivity of TX (see Remark <ref>(<ref>)) to obtain a coalgebra structure c on TX making q an F-coalgebra morphism: TX@–>[r]^-c@->>[d]_q FTX@->>[d]^FqA [r]_-a FA This implies that we have c =a · q. Now suppose that we have two parallel morphisms f, g such that for every c: TX → FTX in F we have f · c = g · c. Then for every a: A → FA in F we obtainf · a · q = f · c = g · c = g · a · q,which implies that f · a = g · a since q is epimorphic. Hence f = g since the a form a jointly epimorphic family. This proves that the c form a jointly epimorphic family. To see that they form a strongly jointly epimorphic family, assume that we are given a monomorphism m: MN and morphisms g: ρ F → N and f_c: TX → M for every c: TX → FTX in F such that m · f_c= g · c. We extend the family (f_c) to one indexed by all a: A → FA in F as follows. We have that any such (A,a) is a quotient coalgebra of some (TX, c) via q: TXA, which is the coequalizer of some parallel pair k_1, k_2: K → TX in . Thus we havem · f_c · k_1 = g · c · k_1 = g · a · q · k_1 = g · a · q · k_2 = g · c · k_2 = m · f_c · k_2, which implies that f_c · k_1 = f_c · k_2 since m is monomorphic. Therefore we obtain a unique f_a: A → M such that f_a · q = f_c using the universal property of the coequalizer q. Hence we can compute m · f_a · q = m · f_c = g · c = g · a · q,which implies m · f_a = g · a since q is epimorphic. Now we use that the a are jointly strongly epimorphic (cf. Lemma <ref>) to obtain a unique morphism d: ρ F → M with d · a = f_a and m · d = g for all a: A → FA in F. In particular, d is the desired diagonal fill-in since F is a full subcategory of F. As for the uniqueness of the fill-in d we still need to check that any d with d · c = f_c for all c: TX → FTX in F and m · d = g also fulfils d · a = f_a for every a: A → FA in F. Indeed, this follows fromd · a · q = d · c = f_c = f_a · qusing that q is epimorphic. | http://arxiv.org/abs/1705.09198v4 | {
"authors": [
"Stefan Milius"
],
"categories": [
"cs.LO"
],
"primary_category": "cs.LO",
"published": "20170525143133",
"title": "Proper Functors and Fixed Points for Finite Behaviour"
} |
New stationary solutions of the cubic nonlinear Schrödinger equations for Bose-Einstein condensates
Qutaibeh D. Katatbeh and Dimitris M. Christodoulou
We have previously formulated a simple criterion for deducing the intervals of oscillations in the solutions of second-order linear homogeneous differential equations. In this work, we extend analytically the same criterion to the cubic nonlinear Schrödinger equations that describe Bose-Einstein condensates. With this criterion guiding the search for solutions, we classify all types of solutions and we find new stationary solutions in the free-particle cases that were not noticed previously because of limited coverage in the adopted boundary conditions. The new solutions are produced by the nonlinear terms of the differential equations and they continue to exist when various external potentials are also incorporated. Surprisingly, these solutions appear when the nonlinearities are small.
=====================
§ INTRODUCTION The ordinary second-order linear homogeneous differential equations of mathematical physics y” + b(x) y' + c(x) y = 0, can all be transformed to the canonical form u” + q(x) u = 0, where the primes denote derivatives with respect to the independent variable x, q = -1/4(b^2 + 2b' - 4c), and y(x) = u(x)exp(-1/2∫b(x) dx) <cit.>. The canonical form (<ref>) condenses the coefficients of eq. (<ref>) into q(x) and `oscillation theory' focuses on this coefficient in order to derive the oscillatory properties of the solutions of eq. (<ref>) (see the reviews in <cit.> and references therein). In recent work <cit.>, we showed that the canonical form is degenerate in the sense that different equations of the form (<ref>) can be transformed to the same canonical form. This is evident from eq. (<ref>) in which q(x) is the result of combining two unrelated functions b(x) and c(x). Furthermore, the derivative b'(x) in eq. (<ref>) sometimes acts as damping (when b' > 0) and other times enhances oscillations in the solutions (when b' < 0). We worked around these ambiguities by transforming eq. (<ref>) to a form with constant damping (eq. (<ref>) with b = constant), and then we transformed again to a new canonical form in which the constant term b acted unambiguously as damping opposing oscillatory tendencies in the solutions, just as it does in the well understood case of the damped harmonic oscillator (eq. (<ref>) with b, c = constant). This procedure was very successful in deducing the precise intervals of oscillations in the solutions of the general form (<ref>). In the last step of the procedure, a generalized Euler transformation of the independent variable x was used <cit.>: x = c_1 + c_2exp(kt), where c_1, c_2, and k are arbitrary constants, and a criterion for the intervals of oscillations in the solutions was established: q(x) > 1/4(x - c_1)^2. Only the constant c_1 appears in the criterion and corresponds to a `horizontal shift' of the independent variable x in eq. (<ref>). For equations with singularities at the origin, c_1 can be set to zero and then the criterion (<ref>) reduces to the simple form q(x) > 1/4x^2. In this case, we can also choose c_2=1 and k=1 in eq.
(<ref>)and then the change of the independent variable x takes the form of theclassical Euler transformationx = exp(t),for which the investigation of the interval t∈ (-∞,+∞) in the transformed equation corresponds to searching for oscillatory solutions in the intervalx∈ (0,+∞) of the original equation (<ref>).In this work, we extend the applicability of the criterion (<ref>) to cubic nonlinear Schrödinger (CNLS) equationsof the 1+1 type, the 2+1 type, and the 3+1 type <cit.>.In this notation, the +1 signifies the time (t) dimension whereas the first digit signifies the N spatial dimensions. These equations, sometimes referred to as the Gross-Pitaevskii equations <cit.>, take the formiħ∂Ψ/∂ t = ( -ħ^2/2m∇⃗^2 + V(x⃗) + g|Ψ|^2 )Ψ ,where m is the mass of the particle described by the time-dependent wavefunction Ψ(x⃗, t), x⃗ is the vector of the spatial coordinates in N=1, 2, or 3 dimensions, V is the scalar potential, ħ = h/2π is the reduced Planck constant, and g is the amplitude of the nonlinearity. Eq. (<ref>) models repulsive (g>0) and attractive (g<0) Bose-Einstein condensates (BECs) with a variety of confining potentials V(x⃗) that serve as spatial traps of solitary waves in many applications of current interest.Mallory & Van Gorder <cit.> have given a detailed list of current applications of BECs, as well as the solutions for both bright and dark solitons obtained by using a proper set of boundary conditions. The advantage of using the criterion (<ref>) to predict the conditions for oscillatory spatial solutions is that the results do not rely on any adopted boundary conditions. In the first step of the procedure, we search for trivial solutions in the equations. These are necessary (but not sufficient) in order to provide a baseline for oscillations. The CNLS equations have three trivial solutions each of which can serve as a baseline for different types of oscillatory BECs.In the next step, the criterion (<ref>) distinguishes the oscillatory solutions from other `unstable' (i.e., nonoscillatory) solutions <cit.> based on the adopted boundary conditions in various applications. It turns out that all the coordinate types of the CNLS equations can be investigated simultaneously because the dominant nonlinear terms have the same structures in all the attractive and all the repulsive cases, respectively. All of these cases can be covered by the same criterion for oscillations as follows: The inertial term of the cylindrical 2+1 type is y'/x and it implies that the differential equations contain no damping of the oscillations <cit.>. Then, the criterion (<ref>) for oscillatory solutions reduces to the simple inequalityc(x) > 0 ,where c(x) represents the coefficient of the y-term in equations that can be cast in the form (<ref>). On the other hand, the inertial terms of the 1+1 and the 3+1 types are 0y' and 2y'/x, respectively, and in both of these cases the criterion (<ref>) reduces to c(x) > 1/(4x^2) <cit.>. The term 1/(4x^2) represents the low-level inertial damping that is present in the cartesian and the spherical forms of the CNLS equations, but this term becomes negligible for x>>1, in which case the 1+1/3+1 criterion quickly approaches asymptotically the inequality (<ref>) of the 2+1 case.In what follows, we demonstrate the analytic procedure presented in <cit.> for Bose-Einstein free solitons. This is the first time that nonlinear differential equations with more than one trivial solution have been investigated. 
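As a quick consistency check (ours, not part of the original text), the reductions of the oscillation criterion stated in the preceding paragraph follow directly from the definition of q(x) given in the Introduction with b(x) = (N-1)/x and the criterion q(x) > 1/4x^2 (i.e. the case c_1 = 0):

% Short verification (ours) of the quoted reductions for the three coordinate types:
\[
  q(x) = -\tfrac{1}{4}\!\left(b^{2} + 2b' - 4c\right), \qquad b(x) = \frac{N-1}{x}.
\]
\[
  N=1:\ b=0 \;\Rightarrow\; q=c; \qquad
  N=2:\ b=\tfrac{1}{x} \;\Rightarrow\; b^{2}+2b' = \tfrac{1}{x^{2}}-\tfrac{2}{x^{2}} = -\tfrac{1}{x^{2}}
        \;\Rightarrow\; q = c+\tfrac{1}{4x^{2}}; \qquad
  N=3:\ b=\tfrac{2}{x} \;\Rightarrow\; b^{2}+2b'=0 \;\Rightarrow\; q=c.
\]
\[
  \text{Hence } q>\tfrac{1}{4x^{2}} \text{ becomes } c>\tfrac{1}{4x^{2}}\ (N=1,3)
  \quad\text{and}\quad c>0\ (N=2).
\]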
(In <cit.>, the nonlinear polytropic Lane-Emden equations that possess only one trivial solution were analyzed.) The repulsive and the attractive CNLS equations are analyzed in Sec. <ref> and Sec. <ref>, respectively. In both cases, we classify the various types of solutions expected to exist depending on the adopted boundary conditions; and we verify the results by high-accuracy numerical integrations <cit.> of the 2+1 type for which the criterion (<ref>) is exact.[We note that additional numerical calculations for the 1+1 and 3+1 cases with and without external potentials (not shown here) also verify the same nonlinear properties of the solutions.] Finally, in Sec. <ref>, we discuss our results. § REPULSIVE CNLS EQUATIONS Following the extensive study in <cit.> (their eqs. (1) to (14)), we adopt the dimensionless CNLS equationψ” + N-1/xψ' + (1 - 2|ψ|^2)ψ = 0 , (N=1,2,3),to describe the spatial part of the stationary wavefunction ψ(x) in the repulsive free-particle case.The time-dependent part of the wavefunction in eq. (<ref>) is assumed to have the form exp(-iħ t/2m), wherem is the particle mass and ħ = h/2π is the reduced Planck constant. Here we do not limit the nonlinearity to a small value, so our calculations can accomodate arbitrarily strong nonlinear properties in the solutions. In eq. (<ref>), the magnitude of the nonlinear amplitude g>0 is effectively set by the choice of the boundary value ψ(0) since ψ(0)∝ 1/√(g).Using eqs. (<ref>) and (<ref>), we obtain the criterion for oscillatory solutions |ψ|^2 < 1/2 .This inequality predicts oscillations for as long as the wavefunction remains between ψ_± = ± 1/√(2). These two values are also trivial solutions of eq. (<ref>), in addition to the better known trivial solution ψ = 0. Since these trivial solutions do not satisfy the criterion (<ref>), then oscillations can only occur about ψ = 0. In this simple way, we can classify the various solutions of eq. (<ref>) as follows: (a) For boundary conditions of the form |ψ(0)| < ψ_+, the solutions will be oscillatory about ψ = 0. (b) For |ψ(0)| = ψ_+, the solutions will be constant. (c) For |ψ(0)| > ψ_+, the solutions will be repelled by the nearest nonzero trivial solution and they will diverge rapidly. Such divergent solutions were noted by Mallory & Van Gorder <cit.> for the case of a constant potential. We see here that they are not produced by the potential, instead they have their origin in the nonlinearity of eq. (<ref>).The three types of solutions are illustrated numerically in Fig. <ref> for the following boundary conditions: ψ'(0)=0 and ψ(0)=0.7, 1/√(2), and 0.71. Clearly, only the trivialsolution ψ = 0 attracts nearby oscillatory solutions while the nonzero trivial solutions repel all solutions.Divergent solutions appear for any choice of the boundary condition|ψ(0)| > 1/√(2). This condition implies that the nonlinearity in eq. (<ref>) takes relatively small values.This is because the chosen value for ψ(0) scales as 1/√(g), where g is the nonlinear amplitude of |ψ|^2 in the normalization given by Mallory & Van Gorder <cit.> for eq. (<ref>). This result is counter to intuition as it indicates that divergent solutions appear only for small perturbations (of order g|ψ|^2) in the Schrödinger equation, while strong perturbations with 0 < |ψ(0)| < 1/√(2) always lead to well-behaved oscillatory solutions about ψ = 0 (as the red curve in Fig. <ref>).§ ATTRACTIVE CNLS EQUATIONS Following <cit.> again (their eqs. 
(1) to (14)), we adopt the dimensionless CNLS equationψ” + N-1/xψ' + (2|ψ|^2 - 1)ψ = 0 , (N=1,2,3),to describe the spatial part of the stationary wavefunction ψ(x) in the attractive free-particle case.The time-dependent part of the wavefunction in eq. (<ref>) is assumed to have the form exp(+iħ t/2m), where again m is the particle mass and ħ = h/2π is the reduced Planck constant. In eq. (<ref>), the magnitude |g| of the nonlinear amplitude g<0 is effectively set by the choice of the boundary value ψ(0) since ψ(0)∝ 1/√(|g|).Using eqs. (<ref>) and (<ref>), we obtain the criterion for oscillatory solutions |ψ|^2 > 1/2 .This inequality predicts oscillations for as long as the wavefunction manages to repeatedly cross outside the interval (ψ_-, ψ_+) where again ψ_± = ± 1/√(2). These two values are also trivial solutions of eq. (<ref>), in addition to ψ = 0. In this case, these two trivial solutions fail marginally tosatisfy the criterion (<ref>) whereas ψ = 0 fails completely and it will repel all solutions. It would then appear that oscillatory solutions can occur only around ψ = ψ_± for those wavefunctions that manage to satisfy the inequality |ψ(x)|>ψ_+ at some radii x.There exists however one case where the oscillations will develop about one or the other nonzero trivial solution after a rather complicated behavior that involves also the repulsive trivial solution ψ = 0. It turns out that the attractive problem is not the exact inverse of the repulsive problem analyzed in Sec. <ref> because of the absence of divergent solutions and the existence of new solutions in the case of boundary conditions with |ψ(0)|>>1. Obeying the criterion (<ref>), such solutions will oscillate about ψ = ψ_+ or ψ = ψ_-, but in the process they can overshoot the repulsive trivial solution ψ = 0 and intersect it several times. Their behavior will be determined by the choice of ψ(0) and by the fact that ψ = 0 works to repel all solutions; so the new solutions are forbidden from decaying asymptotically on to ψ=0.We can now classify the various solutions of eq. (<ref>)as follows: (a) For |ψ(0)| = ψ_+, the solutions will be constant. (b) For boundary conditions of the form |ψ(0)|≈ 1, the solutions will be oscillatory about ψ = ψ_+ or ψ = ψ_-. (c) In cases where |ψ(0)| < ψ_+, the solutions will be repelled by ψ = 0 and in the process they will have to cross one of the nonzero trivial solutions, thereby becoming oscillatory according to eq. (<ref>). (d) For |ψ(0)| >> ψ_+, the solutions will still oscillate about one or the other nonzero trivial solution but in a more complicated fashion. Such solutions cannot be discovered by examining the density |ψ(x)|^2 of the BEC because then the complexities of the underlying radial (x) solution are lost and the repulsion of the ψ=0 trivial solution is no longer visible. As in the repulsive case of Sec. <ref>,the normal and the exotic features of the bright solutions are caused by the nonlinearity of eq. (<ref>) and they are present in cases where an external potential is introduced.The first three types of solutions are illustrated numerically in Fig. <ref> for the following boundary conditions: ψ'(0)=0 and ψ(0)=1/√(2), 0.75, and 0.65. The solutions of cases (b) and (c) in the classification above are clearly attracted by the ψ=ψ_+ trivial solution and are forced to oscillate about it. The fourth type of (exotic) solutions are illustrated numerically in Fig. <ref> for ψ'(0)=0 and ψ(0)=3 and 5 along with the normal oscillatory solution for ψ(0)=1. 
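A minimal numerical sketch of the integrations just described is given below for the attractive 2+1 case (N=2). It is only an illustration and not the computation behind the figures: the starting abscissa x_0=10^-6 (used to step over the coordinate singularity of the ψ'/x term, consistent with ψ'(0)=0), the integration range and the tolerances are assumptions of the sketch.

import numpy as np
from scipy.integrate import solve_ivp

# Attractive CNLS radial equation: psi'' + ((N-1)/x) psi' + (2 psi^2 - 1) psi = 0.
def rhs(x, y, N=2):
    psi, dpsi = y
    return [dpsi, -(N - 1) / x * dpsi - (2.0 * psi**2 - 1.0) * psi]

def integrate(psi0, x_max=60.0, x0=1e-6, N=2):
    # Start at a small x0 > 0; since psi'(0) = 0, psi(x0) ~ psi(0) to leading order
    # (an approximation made only for this illustration).
    return solve_ivp(rhs, (x0, x_max), [psi0, 0.0], args=(N,),
                     rtol=1e-10, atol=1e-12, dense_output=True)

if __name__ == "__main__":
    # Boundary values quoted in the text above: constant, oscillatory and exotic cases.
    for psi0 in (1.0 / np.sqrt(2.0), 0.75, 0.65, 3.0):
        sol = integrate(psi0)
        x = np.linspace(1e-6, 60.0, 2000)
        psi = sol.sol(x)[0]
        print(f"psi(0) = {psi0:5.3f}:  psi stays within [{psi.min():+.3f}, {psi.max():+.3f}] on (0, 60]")

Replacing the sign of the cubic term, (2 psi^2 - 1) -> (1 - 2 psi^2), reproduces in the same way the repulsive cases of Sec. <ref>, including the divergence for |ψ(0)| > 1/√(2).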
The exotic solutions cross the repulsive trivial solution ψ=0 several times but they are forbidden from settling on to it, so after a few cycles they are attracted to one of the nonzero trivial solutions. We have no way of telling which trivial solution they will be attracted to, this choice depends on the amplitude of the decaying oscillations at internediate values of x. But the transition occurs always at an inflection point that develops on the ψ=0 line (Fig. <ref>).When the inflection occurs, the wavelength becomes a lot longer (by factors of order ∼2) and this change in wavelength distinguishes these solitons from all other bright solitons. For this reason, such solitons may be easily identifiable in experiments creating actual BECs.The exotic solutions appear for |ψ(0)| >> 1. Unlike in the repulsive case of Sec. <ref> where the corresponding solutions diverge rapidly, the attractive solutions are always oscillatory. The condition that |ψ(0)| >> 1 implies that thenonlinearity in eq. (<ref>) takes very small values. This is because the chosen large values for ψ(0) scale as 1/√(|g|), where g is the nonlinear amplitude of |ψ|^2 in the normalization given by Mallory & Van Gorder <cit.> for eq. (<ref>). This is a surprising result as it indicates that exotic bright solitons appear only for small perturbations (of order g|ψ|^2) in the Schrödinger equation, while strong perturbations with 0 < |ψ(0)| < 1/√(2) always lead to well-behaved oscillatory solutions about ψ = ψ_+ or ψ = ψ_- (as the red curve in Fig. <ref>).§ DISCUSSION We have presented an analysis and a classification of the oscillatory properties of the solutions of the cubic nonlinear Schrödinger equation <cit.>. The analysis makes use of a procedure that was originally described in <cit.> for second-order linear homogeneous differential equations. It turns out that the same procedure is also valid for nonlinear homogeneous equations, provided that they possess at least one trivial solutionthat may serve as a baseline for oscillations. This requirement is oversatisfied by the CNLS equations in one, two, and three spatial dimensions as they possess 3 different trivial solutions. The presence of so many trivial solutions is the driver for all the oscillatory properties seen in the solutions of the boundary-value problem in both the repulsive and the attractive case (Figs. <ref> and <ref>, respectively);and the solutions in these two cases differ only because the trivial solutionsinterchange their roles from repelling to attracting the nontrivial solutions and vice versa. We carried out our analysis simultaneously for all CNLS equations because the applicable criteria for oscillatory behavior reduce asymptotically to the simple inequality (<ref>) that effectively requires a positive coefficient in front of the non-derivative ψ-terms in eqs. (<ref>) and (<ref>). We have confirmed numerically that the oscillation criterion derived in the linear case <cit.> carries over to the CNLS equations as well (Figs. <ref> and <ref>). This is the direct result of the behavior of the inertial terms in the nonlinear 1+1, 2+1, and 3+1 BEC cases (0, ψ'/x, and 2ψ'/x, respectively).We have also found evidence for asymmetric behavior between the repulsive and the attractive CNLS free-particle solutions beyond of the known difference in the velocities of the two types of solitons <cit.>. 
The attractive stationary case supports a newphysical oscillatory solution that appears when a boundary conditionwith |ψ(0)| >> 1 is used for the bright wavefunction (Fig. <ref>).In the corresponding repulsive stationary case,the dark wavefunctions are all diverging steeply and they do not appear to be of physical interest <cit.>. The oscillatory features and the divergent behavior discussed in this work are the result of the nonlinear terms in the free-particle CNLS equations; and they remain intact in cases where various external potentials are used <cit.> to model traps for various BECs. In both cases, the boundary condition that|ψ(0)| >> 1 implies that the nonlinearities in the equations are very small, so these solitons appear only for small perturbations (of order g|ψ|^2, where |g|<< 1) in the Schrödinger equation. This is a surprising result. The bright (exotic) oscillatory stationary solutions of Fig. <ref>may actually be identifiable in real BECs because they exhibit a strong elongation in their radial wavelength (by factors of order ∼2; Sec. <ref>) at intermediate radii.Taken all together, our results lead to another interesting conclusion: the presence or the absence of trivial solutions in differential equations of the second order is an important qualifier of the properties of the solutions of the physical Cauchy problem; thus, they should not be ignored, as their name signifies. Lately, we have come to call them intrinsic solutions <cit.> because when the differential equations admit such solutions, they do so with no regard to any boundary or initial conditions that may be imposed externally by the Cauchy problem. § AUTHORS' CONTRIBUTIONS Both authors were involved in the preparation of the manuscript. Both authors have read and approved the final manuscript. §.§ AcknowledgmentsDuring this research project, DMC was supported by the University of Massachusetts Lowell whereas QDK was on a sabbatical visit and was fully supported by the Jordan University of Science and Technology. 10[]abr72M. Abramowitz and I. A. Stegun (eds.), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (Dover, New York, 1972). []whi20E. T. Whittakerand G. N. Watson, A Course of Modern Analysis, 3rd edn. (Cambridge Univ. Press, Cambridge, 1920), Sec. 10.2. []har64P. Hartman, Ordinary Differential Equations (Wiley, New York, 1964), Sec. XI.1. []agr02R. P. Agarwal, S. R. Grace, and D. O'Regan, Oscillation Theory for Second Order Linear, Half-Linear, Superlinear and Sublinear Dynamic Equations (Kluwer, Dordrecht, 2002). []won68J. S. W. Wong, Funkcialaj. Ekvacioj, 11, 207, (1968). []chr16aD. M. Christodoulou, J. Graham-Eagle and Q. D. Katatbeh, Adv. Differ. Equ., 2016:48, (2016), doi:10.1186/s13662-016-0774-x. []kat16Q. D. Katatbeh, D. M. Christodoulou and J. Graham-Eagle, Adv. Differ. Equ., 2016:47, (2016), doi:10.1186/s13662-016-0777-7. []mal13K. Mallory and R. A. Van Gorder, Phys. Rev. E 88, 013205, (2013), doi:10.1103/PhysRevE.88.013205. []mal14aK. Mallory and R. A. Van Gorder, Phys. Rev. E, 89, 013204, (2014), doi:10.1103/PhysRevE.89.013204. []mal14bK. Mallory and Van R. A. Gorder, Phys. Rev. E, 90, 023201, (2014), doi:10.1103/PhysRevE.90.023201. []mal15K. Mallory and R. A. Van Gorder, Phys. Rev. E, 92, 013201, (2015), doi:10.1103/PhysRevE.90.023201. []lif80E. M. Lifshitz and L. P. Pitaevskii, Statistical Physics Part 2: Landau and Lifshitz Course of Theoretical Physics (Butterworth-Heinemann, London, 1980). []chr16bD. M. Christodoulou, Q. D. Katatbeh and J. 
Graham-Eagle,J. of Inequalities and Applications, 2016:147, (2016), doi:10.1186/s13660-016-1086-0. []sha97L. F. Shampine and M. W. Reichelt, SIAM Journal on Scientific Computing, 18, 1, (1997). []sha99L. F. Shampine, M. W. Reichelt and J. A. Kierzenka, SIAM Review, 41, 538, (1999). []pet08C. J. Pethick and H. Smith, Bose-Einstein Condensation in Dilute Gases, 2nd edn. (Cambridge Univ. Press, Cambridge, 2008), Sec. 7.6.Department of Mathematics and Statistics, Jordan University of Science and Technology, Irbid, Jordan 22110 [email protected] Department of Mathematical Sciences, University of Massachusetts Lowell, Lowell, MA 01854, USA [email protected] | http://arxiv.org/abs/1705.09721v1 | {
"authors": [
"Qutaibeh D. Katatbeh",
"Dimitris M. Christodoulou"
],
"categories": [
"math-ph",
"cond-mat.quant-gas",
"math.MP"
],
"primary_category": "math-ph",
"published": "20170526210714",
"title": "New stationary solutions of the cubic nonlinear Schrödinger equations for Bose-Einstein condensates"
} |
cop]Matthias Christandl [email protected] cop]Asger Kjærulff Jensen [email protected] [cop]Department of Mathematical Sciences, University of Copenhagen,Universitetsparken 5, 2100 Copenhagen Ø, Denmark ams]Jeroen Zuiddam [email protected] [ams]Centrum Wiskunde & Informatica, Science Park 123, 1098 XG Amsterdam, Netherlands The tensor rank of a tensor t is the smallest number r such that t can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an ℓ-tensor. The tensor product of s and t is a (k+ℓ)-tensor. Tensor rank is sub-multiplicative under the tensor product.We revisit the connection between restrictions and degenerations.A result of our study is that tensor rank is not in general multiplicative under the tensor product. This answers a question of Draisma and Saptharishi. Specifically, if a tensor t has border rank strictly smaller than its rank, then the tensor rank of t is not multiplicative under taking a sufficiently hight tensor product power.The “tensor Kronecker product” from algebraic complexity theory is related to our tensor product but different, namely it multiplies two k-tensors to get a k-tensor. Nonmultiplicativity of the tensor Kronecker product has been known since the work of Strassen.It remains an open question whether border rank and asymptotic rank are multiplicative under the tensor product. Interestingly, lower bounds on border rank obtained from generalised flattenings (including Young flattenings) multiply under the tensor product.tensor rank border rank degeneration Young flattening algebraic complexity theory quantum information theory [2010] 15A69§ INTRODUCTIONLet U_i, V_i be finite-dimensional vector spaces over a field . Let be a k-tensor in U_1 ⊗⋯⊗ U_k. The tensor rank of t is the smallest number r such that can be written as a sum of r simple tensors u_1 ⊗⋯⊗ u_k in U_1 ⊗⋯⊗ U_k, and is denoted by(). Lettingbe the complex numbers , the border rank of is the smallest number r such thatis a limit point (in the Euclidean topology) of a sequence of tensors in U_1 ⊗⋯⊗ U_k of rank at most r, and is denoted by ().Let ∈ U_1 ⊗⋯⊗ U_k and ∈ V_1 ⊗⋯⊗ V_ℓ be a k-tensor and an ℓ-tensor respectively. Define the tensor product ofandas the (k+ℓ)-tensor⊗ ∈U_1 ⊗⋯⊗ U_k ⊗ V_1 ⊗⋯⊗ V_ℓ.If k=ℓ, then define the tensor Kroneckerproduct ofandas the k-tensor∈(U_1 ⊗ V_1) ⊗⋯⊗ (U_k ⊗ V_k)obtained from ⊗ by grouping U_i and V_i together for each i. In algebraic complexity theory, the tensor Kronecker product is usually just denoted by `⊗'. Using the tensor Kronecker product one defines the asymptotic rank ofas the limit lim_n→∞(^ n)^1/n.(This limit exists and equals the infimum inf_n(^ n)^1/n,see for example Lemma 1.1 in <cit.>.)Asymptotic rank is denoted by ().This paper is about the relationship between tensor rankand the tensor product. It follows from the definition that rankis sub-multiplicative under the tensor product.Let , be any tensors. Then, (⊗) ≤() ().The result of this paper is that the above inequality can be strict. Tensor rank is not in general multiplicative under tensor product. Specifically, if a tensor t has border rank strictly smaller than its tensor rank, then the tensor rank of t is not multiplicative under a taking a sufficiently high tensor power.The theorem answers a question posed in the lecture notes of Jan Draisma <cit.> and a question of Ramprasad Saptharishi (personal communication, related to an earlier version of the survey <cit.>). 
The theorem was stated as a fact in <cit.>, refering to <cit.> for the proof; however,<cit.> studies only the tensor Kronecker product ⊠. It has been known since the work of Strassen that tensor rank is not multiplicative under the tensor Kronecker product ⊠, see <ref>.We construct three instances of this phenomenon (<ref>, <ref> and <ref>)to prove the theorem.Explicitly, one of our examples is the following strict inequality (<ref>).Let b_1, b_2 be the standard basis of ^2. Define the 3-tensor W_3 as b_2 ⊗ b_1 ⊗ b_1 +b_1 ⊗ b_2 ⊗ b_1 +b_1 ⊗ b_1 ⊗ b_2 ∈ (^2)^⊗ 3 . Then we have the strict inequality (W_3^⊗ 2) ≤ 8 < 9 = (W_3)^2. In <ref> we will prove that <ref> is essentially minimal over the complex numbers, in the sense that if s ∈⊗^2 ⊗^2 and t∈^2 ⊗^n ⊗^m, then one has (st) = (s ⊗ t) = (s) (t). This we prove using the theory of canonical forms of matrix pencils and a formula for their tensor rank.Our general approach is to study approximate decompositions (or border rank decompositions) of tensors. It turns out that a border rank decomposition of a tensor t can be transformed into a tensor rank decomposition of tensor powers of t with a penalty that depends on the so-called error degree of the approximation. More precisely, the notion of border rank (t) has a more precise variant ^e(t) that allows only approximations with error degree at most e (see <ref> for definitions). This variant goes back to <cit.> and <cit.>. We prove in <ref>(<ref>) that (^⊗ n) ≤ (ne + 1) ^e()^n,which we use to construct nonmultiplicativity examples. In particular, we see that as soon as ^e(s)<(s), the quantity (s)^n grows faster than the right-hand side of (<ref>) and thus leads to nonmultiplicativity examples for large enough n.It follows from the definitions that also border rank and asymptotic rank are submultiplicative under the tensor product: (⊗) ≤() (), and(⊗) ≤() (). We leave it as an open question whether these inequalities can be strict.In <ref> we will see that lower bounds on border rank obtained from generalised flattenings (including Young flattenings) are in fact multiplicative under the tensor product.It follows from () ≤(⊗) that tensor rank, border rank and asymptotic rank are submultiplicative under the tensor Kronecker product: () ≤() (), () ≤() (), and () ≤() (). Ifand are 2-tensors (matrices), then tensor rank, border rank and asymptotic rank are equal and multiplicative under the tensor Kronecker product. However, for k≥ 3, it is well-known that each of the three inequalitiescan be strict, see the following example.Consider thefollowing tensors(1.1cm [vertex/.style = circle, fill, black, minimum width = 1.mm, inner sep=0pt] [coordinate] (0,0)coordinate(A) ++( 2*1*60+30:0.8cm) coordinate(B) ++( 2*2*60+30:0.8cm) coordinate(C); [line width=0.2mm](A) node [vertex] – (B) node [vertex] (C) node [vertex] (A);)= ∑_i∈{1,2} b_i⊗ b_i⊗ 1∈ ^2⊗^2 ⊗,(1.1cm [vertex/.style = circle, fill, black, minimum width = 1.mm, inner sep=0pt] [coordinate] (0,0)coordinate(A) ++( 2*1*60+30:0.8cm) coordinate(B) ++( 2*2*60+30:0.8cm) coordinate(C); [line width=0.2mm](A) node [vertex] (B) node [vertex] –(C) node [vertex] (A);) = ∑_i∈{1,2} b_i⊗1⊗ b_i ∈ ^2 ⊗⊗^2,(1.1cm [vertex/.style = circle, fill, black, minimum width = 1.mm, inner sep=0pt] [coordinate] (0,0)coordinate(A) ++( 2*1*60+30:0.8cm) coordinate(B) ++( 2*2*60+30:0.8cm) coordinate(C); [line width=0.2mm](A) node [vertex] (B) node [vertex] (C) node [vertex] –(A);) = ∑_i∈{1,2} 1⊗ b_i⊗ b_i ∈ ⊗^2 ⊗^2.(This graphical notation is borrowed from <cit.>.) 
Each tensor has rank, border rank and asymptotic rank equal to 2, since they are essentially identity matrices. However the tensor Kronecker product is the 2× 2 matrix multiplication tensor⟨ 2,2,2 ⟩ = (1.1cm [vertex/.style = circle, fill, black, minimum width = 1.mm, inner sep=0pt] [coordinate] (0,0)coordinate(A) ++( 2*1*60+30:0.8cm) coordinate(B) ++( 2*2*60+30:0.8cm) coordinate(C); [line width=0.2mm](A) node [vertex] – (B) node [vertex] –(C) node [vertex] –(A);) = ∑_i,j,k∈{1,2}(b_i⊗ b_j)⊗(b_j⊗ b_k)⊗ (b_k ⊗ b_i)whose tensor rank and border rank is at most 7 <cit.> and whose asymptotic rank is thus at most 7, which is strictly less that 2^3=8. (The tensor rank of ⟨ 2,2,2⟩ equals 7 over any field <cit.> and the border rank of ⟨ 2,2,2 ⟩ equals 7 over the complex numbers<cit.>. Both statements are in fact true for any tensor with the same support as ⟨ 2,2,2 ⟩ <cit.>.)§ DEGENERATION AND RESTRICTION We revisit the theory of degenerations and restrictions of tensors and how to transform degenerations into restrictions. Our non-multiplicativity results rely on these ideas. Let ∈ U_1⊗⋯⊗ U_k and ∈ V_1 ⊗⋯⊗ V_k be k-tensors. We sayrestricts to , written ≥, if there are linear maps A_i : U_i → V_i such that (A_1 ⊗⋯⊗ A_k)=. Let d,e∈. We saydegenerates to with approximation degree d and error degree e, written _d^e, if there are linear maps A_i(ε) : U_i → V_i depending polynomially on ε such that (A_1(ε) ⊗⋯⊗ A_k(ε))= ε^d+ ε^d+1_1 + ⋯ + ε^d+e_e for some tensors _1, …, _e.Naturally, ^e means ∃ d^e_d, and _d means ∃ e^e_d, andmeans ∃ d ∃ e^e_d. (We note that our notation t_d s corresponds to t_d+1s in <cit.>.) Clearly, degeneration is multiplicative in the following sense. Let _1,_2,_1,_2 be tensors. If _1 ^e_1_d_1_1 and _2 ^e_2_d_2_2, then _1 ⊗_2 ^e_1 + e_2_d_1 + d_2_1 ⊗_2 and _1 _2 ^e_1 + e_2_d_1 + d_2_1 _2. The error degree e is upper bounded by the approximation degree d in the following way. Let , be k-tensors. If _d, then _d^kd - d. Suppose (A_1() ⊗⋯⊗ A_k()) t = ^d s + ^d+1 s_1 + ⋯ + ^d+e s_e. For every i let B_i() be the matrix obtained from A_i() by truncating each entry in A_i() to degree at most d. Then (B_1() ⊗⋯⊗ B_k()) t = ^d s + ^d+1 u_1 + ⋯ + ^kd u_kd for some k-tensors u_1, …, u_kd. For any r∈, let b_1, … b_r denote the standard basis of ^r. Let r, k∈ and let_r(k) ∑_i=1^r (b_i)^⊗ k ∈(^r)^⊗ kbe the rank-r order-k unit tensor. Let ∈ V_1 ⊗⋯⊗ V_k. The tensor rank of is the smallest number r such that _r(k)≥, and is denoted by (). This definition of tensor rank is easily seen to be equivalent to the definition given in the introduction. The border rank ofis the smallest number r such that _r(k), and is denoted by (). Note that this definition works over any field . Whenequals , this definition of border rank is equivalent to the definition given in the introduction <cit.>. Define^e_d()min{r ∈|_r(k) _d^e } _d()min{r ∈|_r(k) _d } ^e()min{r ∈|_r(k) ^e }.(Our notation _d() corresponds to _d+1() in <cit.>.) Error degree in the context of border rank was already studied in <cit.> and <cit.>. The following propositions follow directly from <ref> and <ref>. ^e_1 + e_2_d_1 + d_2(_1 ⊗_2) ≤^e_1_d_1(_1) ^e_2_d_2(_2).Letbe a k-tensor. Then _d() = ^kd - d_d(). The following theorem is our main technical result on which the rest of the paper rests. We note that for the tensor Kronecker product the statement is well-known in the context of algebraic complexity theory <cit.>. 
Let , be k-tensors.If ^e and ||≥ e+2, then we have t _e+1(k) ≥.By assumption there are matrices A_i(ε) with entries polynomial in ε such that(A_1(ε) ⊗⋯⊗ A_k(ε))= ε^d+ ε^d+1_1 + ⋯ + ε^d+e_efor some tensors _1, …, _e. Multiply both sidesby ε^-d and call the right-hand side q(ε),( ε^-d A_1(ε) ⊗⋯⊗ A_k(ε))= + ε_1 + ⋯ + ε^e_eq(ε).Let α_0, …, α_e be distinct nonzero elements of the ground field(by assumption our ground field is large enough to do this). View q(ε) as a polynomial in ε.Write q(ε) as follows (Lagrange interpolation):q(ε) = ∑_j=0^e q(α_j) ∏_0 ≤ m ≤ e:m≠ jε - α_m/α_j - α_m.We now see how to write q(0) as a linear combination of the q(α_j), namelyq(0) = ∑_j=0^e q(α_j) ∏_0 ≤ m ≤ e:m≠ jα_m/α_m - α_j,that is,q(0) = ∑_j=0^eβ_jq(α_j) withβ_j ∏_0 ≤ m ≤ e:m≠ jα_m/α_m - α_j.Now we want to writeas a restriction of _e+1(k). Define thelinear maps B_1 ∑_j=0^eβ_jα_j^-dA_1(α_j) ⊗ b_j^* and B_i ∑_j=0^eβ_j A_i(α_j) ⊗ b_j^* for i ∈{2,…, k}. Then t_e+1(k) ≥ s because(B_1 ⊗⋯⊗ B_k) (_e+1(k))= ∑_j=0^e β_j (α_j^-dA_1(α_j) ⊗⋯⊗ A_k(α_j)) = ∑_j=0^e β_jq(α_j) = q(0) = .This finishes the proof.In the statement of <ref> we assume that [0] is large enough. For small fields one can do the following. For k,d∈, let [0..d] denote the set {0,1,2,…, d} and define the k-tensorχ_d(k) ∑_a ∈ [0..d]^k: a_1 + ⋯ + a_k = d b_a_1⊗⋯⊗ b_a_k∈ (^d+1)^⊗ k.Let , be k-tensors. It is not hard to show that, if _d, then tχ_d(k) ≥ s. By definition of χ_d(k) we have (χ_d(k)) ≤k+d-1k-1.We may thus conclude that t ⊠_k+d-1k-1(k) ≥ s. We collect several almost immediate corollaries.Let _i, _i be k_i-tensors for i∈ [n]. Assumeis large enough.*If ∀ i_i ^e_i_i, then (_1 ⊗⋯⊗_n) _∑_i e_i + 1(∑_i k_i) ≥_1 ⊗⋯⊗_n.*If ∀ i_i _d_i_i, then (_1 ⊗⋯⊗_n) _∑_i (k_i -1)d_i + 1(∑_i k_i) ≥_1 ⊗⋯⊗_n.To prove the first statement, apply <ref> to obtain the degeneration _1 ⊗⋯⊗_n ^∑_i e_i_1 ⊗⋯⊗_n. <ref> yields the result. To prove the second statement, <ref> gives t_i ^k_i d_i - d_i s_i. By <ref>, t_1 ⊗⋯⊗ t_n ^∑_i k_i d_i - d_is_1 ⊗⋯⊗ s_n. <ref> proves the statement. Letbe a k-tensor. Assumeis large enough.*(^⊗ n) ≤ (ne + 1) ^e()^n. *(^⊗ n) ≤ ((k - 1)nd + 1) _d()^n.This follows from <ref>. Letbe a k-tensor. * lim_n→∞(^⊗ n)^1/n≤().* lim_n→∞(^⊗ n)^1/n = lim_n→∞(^⊗ n)^1/n.* If () < (), then for some n ∈, (^⊗ n) < ()^n.§ TENSOR RANK IS NOT MULTIPLICATIVE UNDER THE TENSOR PRODUCTBecause of <ref>, in order to find nonmultiplicativity examples, it is enough to find a tensor t for which ^e(t) < (t). We will give three families of examples of nonmultiplicativity. For k≥ 3, define the k-tensorW_k ∑_i ∈{1,2}^k: (i) = (k-1,1) b_i_1⊗⋯⊗ b_i_k ∈(^2)^⊗ k,where (i) = (k-1,1) means that i is a permutation of (1,1,…, 1, 2). Let || be large enough. Let k≥ 3. For n large enough, we have a strict inequality (W_k^⊗ n) < (W_k)^n. For example, (W_3^⊗ 7) < (W_3)^7 and (W_8^⊗ 2) < (W_8)^2. The rank of W_k equals k. This can be shown with the substitution method as explained in for example <cit.>. However, ^k-1(W_k) ≤ 2, namely( [ 1 1; ε 0 ]⊗⋯⊗[ 1 1; ε 0 ]⊗[1 -1;ε0 ]) _2(k) = ε W_k + ε^2 (⋯) + ⋯ + ε^k(b_2 ⊗⋯⊗ b_2).Applying <ref>(<ref>) to this degeneration gives (W_k^⊗ n) ≤ (n(k-1) + 1)2^n. Therefore, for n large enough, (W_k^⊗ n) ≤ 2^n(n(k-1)+1) < k^n = (W_k)^n.In fact, if () ≠ 2 and √(2)∈, then we can directly show a strict inequality for n=2 and k=3 as follows. (W_3^⊗ 2) ≤ 8 < 9 = (W_3)^2 if ≠ 2 and √(2)∈. As mentioned in the proof of <ref>, (W_3) = 3. If c∈∖{0} such that √(c)∈, then (W_3 + cb_2 ⊗ b_2 ⊗ b_2) ≤ 2. 
Namely,W_3 + cb_2 ⊗ b_2 ⊗ b_2 = 1/2√(c)( (b_1 + √(c)b_2)^⊗ 3 - (b_1 - √(c)b_2)^⊗ 3).(Overthis also follows from the fact that the Cayley hyperdeterminant evaluated at W_3 + cb_2 ⊗ b_2 ⊗ b_2 is a nonzero constant times c. One may also see this by noting that the image of W_3 + cb_2 ⊗ b_2 ⊗ b_2 under the moment map lies outside the image of the moment polytope associated to the orbit_2 ×_2 ×_2 · W <cit.>.) We expand W_3⊗ W_3 asW_3 ⊗ W_3 = (W_3 + b_2 ⊗ b_2 ⊗ b_2)^⊗ 2 - (W_3 + 12 b_2 ⊗ b_2 ⊗ b_2)⊗b_2 ⊗ b_2 ⊗ b_2- b_2 ⊗ b_2 ⊗ b_2 ⊗(W_3 + 12 b_2 ⊗ b_2 ⊗ b_2).By the above, we know that the rank of W_3 + b_2 ⊗ b_2 ⊗ b_2 and the rank of W_3 + 12 b_2 ⊗ b_2 ⊗ b_2 are at most 2. Therefore, the rank of W_3⊗ W_3 is at most 2^2 + 2 + 2 = 8.Let S_k be the symmetric group of order k. Clearly the tensor W_3 ⊗ W_3 is invariant under the action of the subgroup S_3 × S_3 ⊆ S_6 and under the action of the permutation (14)(25)(36) ∈ S_6 that swaps the two copies of W_3. Remarkably, the decomposition of W_3 ⊗ W_3 given in the proof of <ref> also has this symmetry, in the sense that the above actions leave the set of simple terms appearing in the decomposition invariant. The decomposition is said to be partially symmetric. In fact, each term is itself invariant under S_3 × S_3.It is stated in <cit.> that (W_3 W_3) = 7, which implies that (W_3⊗ W_3) equals 7 or 8. We obtained numerical evidence pointing to 8. After the first version of our manuscript appeared on the arXiv, Chen and Friedland delivered a proof that (W_3⊗ W_3) ≥ 8 <cit.>.For the third power, it is known that (W_3W_3W_3) = 16 <cit.>. A similar construction as in the proof of <ref> gives (W_3⊗ W_3 ⊗ W_3) ≤ 21.This upper bound is improved to 20 in <cit.>. In <ref>, we took the nth power of a tensor in (^2)^⊗ k with n large enough depending on k.In our next example, we take the square of a tensor in (^d)^⊗ k with d≥ 8.For k≥ 3 and q≥ 1, define the tensor_q^k ∑_i=2^q+1 b_i ⊗ b_i ⊗ b_1 ⊗ b_1^⊗ k-3 + b_1 ⊗ b_i ⊗ b_i ⊗ b_1^⊗ k-3 ∈ (^q+1)^⊗ k.This tensor is named after Strassen, who used _q^3 to derive the upper bound ω≤ 2.48 on the exponent of matrix multiplication <cit.>. Assume thatis large enough. For q ≥ 7 and any k≥ 3, we have a strict inequality ((_q^k)^⊗ 2) < (_q^k)^2. The rank of _q^k equals 2q, again by the substitution method. We have ^1(_q^k)≤ q+1, see the proof of Proposition 31 in <cit.>. Applying <ref>(<ref>) to this degeneration gives ((_q^k)^⊗ n) ≤ (n+1)(q+1)^n. Therefore, for q≥ 7 and n = 2, we have the strict inequality ((_q^k)^⊗ 2) ≤ 3(q+1)^2 < (2q)^2 = (_q^k)^2. Our third example uses matrix multiplication tensors. Let n_1, n_2, n_3∈. Define the 3-tensor⟨ n_1,n_2,n_3 ⟩∑_i ∈ [n_1]× [n_2]× [n_3] (b_i_1⊗ b_i_2)⊗ (b_i_2⊗ b_i_3) ⊗ (b_i_3⊗ b_i_1) ∈ (^n_1⊗^n_2) ⊗ (^n_2⊗^n_3) ⊗ (^n_3⊗^n_1). Assume thatis large enough. For n≥78, we have a strict inequality (⟨ 2,2,4 ⟩^⊗ n) < (⟨ 2,2,4 ⟩)^n. The rank of ⟨ 2,2,4⟩ equals 14 over any field <cit.>. On the other hand, ^4(⟨ 2,2,4⟩) ≤ 13 over any field <cit.>.Thus, whenis large enough <ref>(<ref>) implies, for n≥ 78, the strict inequality (⟨ 2,2,4 ⟩^⊗ n) ≤ 13^n (4n+1) < 14^n = (⟨ 2,2,4⟩)^n. In the language of graph tensors <cit.>, <ref> says that tensor rank is not multiplicative under taking disjoint unions of graphs. § GENERALISED FLATTENINGS ARE MULTIPLICATIVE In the previous section we have seen that tensor rank can be strictly submultiplicative under the tensor product. We do not know whether the same is true for border rank. 
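Before turning to that question, the two identities used in the proof of Proposition <ref> are easy to check numerically; the following sketch (an illustration only, not part of any proof) verifies them in floating point arithmetic for a few values of c.

import numpy as np

b1, b2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def outer(*vs):
    # Tensor product of a list of vectors, as a numpy array.
    t = vs[0]
    for v in vs[1:]:
        t = np.tensordot(t, v, axes=0)
    return t

W3 = outer(b2, b1, b1) + outer(b1, b2, b1) + outer(b1, b1, b2)
b2cube = outer(b2, b2, b2)

# rank(W3 + c * b2^{(x)3}) <= 2: a difference of two cubes.
for c in (1.0, 0.5, 2.0):
    s = np.sqrt(c)
    two_terms = (outer(b1 + s * b2, b1 + s * b2, b1 + s * b2)
                 - outer(b1 - s * b2, b1 - s * b2, b1 - s * b2)) / (2.0 * s)
    assert np.allclose(W3 + c * b2cube, two_terms)

# The expansion of W3 (x) W3 into three pieces of rank <= 4, <= 2 and <= 2.
lhs = np.tensordot(W3, W3, axes=0)
rhs = (np.tensordot(W3 + b2cube, W3 + b2cube, axes=0)
       - np.tensordot(W3 + 0.5 * b2cube, b2cube, axes=0)
       - np.tensordot(b2cube, W3 + 0.5 * b2cube, axes=0))
assert np.allclose(lhs, rhs)
print("identities verified; hence R(W3 (x) W3) <= 2*2 + 2 + 2 = 8")

Whether border rank itself behaves multiplicatively under the tensor product is the question left open above.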
In fact, in this section we observe that lower bounds on border rank obtained from generalised flattenings are multiplicative. In this section we focus on 3-tensors for notational convenience. The ideas directly extend to k-tensors for any k.Let t be a tensor in V_1 ⊗ V_2 ⊗ V_3. We can transform t into a matrix by grouping the tensor legs into two groupsV_1 ⊗ V_2 ⊗ V_3→ V_1 ⊗ (V_2 ⊗ V_3)v_1 ⊗ v_2 ⊗ v_3↦ v_1 ⊗ (v_2 ⊗ v_3).(There are three ways to do this for a 3-tensor.) This is called flattening. The rank of a flattening of t is a lower bound for the border rank of t. (Rank and border rank are equal for matrices.) We now define generalised flattenings. Let t be a tensor in V_1 ⊗ V_2 ⊗ V_3. Instead of a basic flattening V_1 ⊗ V_2 ⊗ V_3 → V_1 ⊗ (V_2 ⊗ V_3), we choose vector spaces V'_1 and V'_2 and apply some linear map F: V_1 ⊗ V_2 ⊗ V_3 → V'_1 ⊗ V'_2 to t. To obtain a border rank lower bound using F we have to compensate for the fact that F possibly increases the border rank of a simple tensor. The following lemma describes the resulting lower bound. Let t∈ V_1 ⊗ V_2 ⊗ V_3 be a tensor.LetF: V_1 ⊗ V_2 ⊗ V_3 → V'_1 ⊗ V'_2be a linear map. The border rank of t is at least(t) ≥(F(t))/max(F(v_1 ⊗ v_2 ⊗ v_3)),where the maximum is over all simple tensors v_1⊗ v_2 ⊗ v_3 in V_1⊗ V_2 ⊗ V_3. Suppose (t) = r. Then there is a sequence of tensors t_i converging to t with (t_i) ≤ r for each i. Each t_i thus has a decomposition into simple tensors t_i = ∑_j=1^r t_i,j. Since F(t_i) → F(t), there exists an i_0 such that for all i≥ i_0 we have (F(t_i)) ≥(F(t)). Moreover, we have the inequalities (F(t_i)) ≤∑_j=1^r (F(t_i,j)) ≤ r ·max_s(F(s)), where the maximum is over all simple tensors s. We conclude that (t) ≥(F(t))/max_s(F(s)). Note that the right hand side of (<ref>) might not be an integer. The lower bound in (<ref>) is multiplicative under the tensor product in the following sense. Let ∈ V_1 ⊗ V_2 ⊗ V_3 and ∈ W_1 ⊗ W_2 ⊗ W_3 be tensors. Let F_1: V_1 ⊗ V_2 ⊗ V_3 → V'_1 ⊗ V'_2 and F_2: W_1 ⊗ W_2 ⊗ W_3 → W'_1 ⊗ W'_2 be linear maps.The border rank of ⊗∈ V_1 ⊗ V_2 ⊗ V_3 ⊗ W_1 ⊗ W_2 ⊗ W_3 is at least(⊗) ≥(F_1())/max(F_1(v_1 ⊗ v_2 ⊗ v_3))(F_2())/max(F_2(w_1 ⊗ w_2 ⊗ w_3))where the maximisations are over simple tensors in V_1 ⊗ V_2 ⊗ V_3 and in W_1 ⊗ W_2 ⊗ W_3 respectively. Combine F_1 and F_2 into a single linear map F : V_1 ⊗ V_2 ⊗ V_3 ⊗ W_1 ⊗ W_2 ⊗ W_3 → (V'_1 ⊗ W'_1) ⊗ (V'_2 ⊗ W'_2).One then follows the proof of <ref> and uses the fact that matrix rank is multiplicative under the tensor Kronecker product. Young flattenings <cit.> are a special case of generalised flattenings. For completeness, we finish with a concise description of Young flattenings and the corresponding multiplicativity statement. We work over the complex numbers . Let S_λ V be an irreducible _V-module of type λ. Consider the space V ⊗ S_λ V as a _V-module under the diagonal action. The Pieri rule says that we have a _V-decompositionV ⊗ S_λ V ≅⊕_μ S_μ V,where the direct sum is over partitions μ of length at most V obtained from λ by adding a box in the Young diagram of λ. This decomposition yields _V-equivariant embeddings S_μ V ↪ V⊗ S_λ V, called Pieri inclusions or partial polarization maps. These maps are unique up to scaling. Such a Pieri inclusion corresponds to a _V-equivariant map ϕ_μ,λ:V^* → S_μ V^* ⊗ S_λ V. Every element ϕ_μ, λ(v) is called a Pieri map. 
The Young flattening F_μ, λ on V_1 ⊗ V_2^* ⊗ V_3 is obtained by first applying the map ϕ_μ, λ to one tensor leg,V_1 ⊗ V_2^* ⊗ V_3 → V_1 ⊗ S_μ V_2^* ⊗ S_λ V_2 ⊗ V_3,and then flattening into a matrix,V_1 ⊗ S_μ V_2^* ⊗ S_λ V_2 ⊗ V_3 → (V_1 ⊗ S_μ V_2^*) ⊗ (S_λ V_2 ⊗ V_3).Note that for any simple tensor v_1⊗ v_2 ⊗ v_3, the rank of F_μ,λ(v_1 ⊗ v_2 ⊗ v_3) equals the rank of ϕ_μ, λ(v_2). <ref> thus specialises as follows. Let ∈ V_1 ⊗ V_2 ⊗ V_3 and ∈ W_1 ⊗ W_2 ⊗ W_3.Let λ, μ and ν, κ be pairs of partitions as above.The border rank of ⊗∈ V_1 ⊗ V_2 ⊗ V_3 ⊗ W_1 ⊗ W_2 ⊗ W_3 is at least(⊗) ≥(F_μ, λ())/max(ϕ_μ, λ(v_2))(F_ν, κ())/max(ϕ_ν, κ(w_2))where the maximisations are over v_2 ∈V_2 and w_2 ∈ W_2 respectively. We refer to <cit.> for an overview of the applications of Young flattenings.§ MULTIPLICATIVITY FOR COMPLEX MATRIX PENCILS AND 2-TENSORSIn this section all vector spaces are over the complex numbers.The goal of this section is to prove the following proposition.Let ∈⊗^d ⊗^d and ∈^2⊗^n⊗^m. ThenR() = R(⊗) =R()R(). <ref> shows that <ref> is essentially minimal over the complex numbers.Namely, any example of non-multiplicativity of tensor rank under ⊗ must either be with a 5-tensor in (^d⊗^d)⊗(^d_1⊗^d_2⊗^d_3) with d_1,d_2,d_3≥ 3, d≥ 2 or in a tensor space of order 6 or more.Moreover, one can show using <ref> and the well-known classification of the _2^× 3-orbits in ^2 ⊗^2 ⊗^2 that if s,t ∈^2 ⊗^2 ⊗^2 and (s ⊗ t) < (s)(t), then s and t are both isomorphic to the tensor W_3.The elements of ^2⊗^n⊗^m are often called matrix pencils. The tensor rank of matrix pencils is completely understood, in the sense that every matrix pencil is equivalent under local isomorphisms to a pencil in canonical form,for which the rank is given by a simple formula.This formula will allow us to give a short proof of <ref>.We begin with introducing the canonical form for matrix pencils.For a proofwe refer to <cit.>.Recall that the standard basis elements of ^n are denoted by b_1, …, b_n. Given t_i∈ U ⊗ V_i ⊗ W_i, define _U(t_1,…,t_n) as the image of ⊕_i=1^n t_i under the natural inclusion ⊕_i(U⊗ V_i⊗ W_i)→ U⊗(⊕_i V_i )⊗(⊕_i W_i). For ∈ define the tensor L_∈^2 ⊗^⊗^+1 byL_b_1 ⊗∑_i=1^ b_i ⊗ b_i+b_2 ⊗∑_i=1^ b_i ⊗ b_i+1= b_1⊗[ 1 0; 1 0; ⋱ ⋮; 1 0 ] + b_2⊗[ 0 1; 0 1; ⋮ ⋱; 0 1 ]and for η∈ define the tensor N_η∈^2 ⊗^η+1⊗^η byN_ηb_1 ⊗∑_i=1^η b_i ⊗ b_i+b_2 ⊗∑_i=1^η b_i+1⊗ b_i= b_1⊗[ 1; 1; ⋱; 1; 0 0 ⋯ 0 ] + b_2⊗[ 0 0 ⋯ 0; 1; 1; ⋱; 1 ].Let t ∈^2⊗^n⊗^m. There exist invertible linear maps A∈_2, B∈_n and C∈_m and natural numbers _1,…,_p,η_1,…,η_q∈ and an ℓ×ℓ Jordan matrix F such that, with M=b_1⊗ I_ℓ + b_2⊗ F, we have(A⊗ B⊗ C) t = _^2(0,L__1,…,L__p,N_η_1,…,N_η_q,M),where the 0 stands for some 0-tensor of appropriate dimensions.The right-hand side of (<ref>) is called the canonical form of t.Next we give a formula for the tensor rank of matrix pencils in canonical form (<ref>). <ref> is due to Grigoriev <cit.>, JáJá <cit.> and Teichert <cit.>, see also <cit.> or <cit.>.Let F be a Jordan matrix with eigenvalues λ_1, λ_2, …, λ_p. Let d(λ_i) be the number of Jordan blocks in F of size at least two with eigenvalue λ_i. Define m(F) max_i d(λ_i).Let t=_^2(0,L__1,…,L__p,N_η_1,…,N_η_q, b_1 ⊗ I_ℓ + b_2⊗ F)be a tensor in canonical form as in (<ref>). The tensor rank of t equals(t)=∑_i=1^p (_i +1) +∑_i=1^q (η_i+1)+ ℓ + m(F). Let W_3 = b_2 ⊗ b_1 ⊗ b_1 +b_1 ⊗ b_2 ⊗ b_1 +b_1 ⊗ b_1 ⊗ b_2 ∈ (^2)^⊗ 3 as in <ref>. The canonical form of W_3 isW_3 ≅ b_1 ⊗( 1 00 1) + b_2 ⊗( 0 10 0).so in the notation of <ref> we have p = q = 0 and F = [ 0 1; 0 0 ]. 
We can thus apply <ref> with ℓ = 2 and m(F) = 1 to get (W_3) = 2+1 = 3. We are now ready to give the short proof of <ref>. Let s ∈⊗^d ⊗^d, t ∈^2 ⊗^n ⊗^m. We may assume that s = 1 ⊗∑_i=1^r b_i ⊗ b_i with r = (s).By <ref> we may assume that t is in canonical form,t=_^2(0, L__1,…,L__p,N_η_1,…,N_η_q,M). The tensor Kronecker productt s is isomorphic to t≅_^2(t, …, t_r).By an appropriate local basis transformation we put this in canonical formt≅_^2(L__1^⊕ r,…,L__p^⊕ r,N_η_1^⊕ r,…,N_η_q^⊕ r,M^⊕ r),which by <ref> has rank r ·(t) = (s)(t).Proposition <ref> is also true over the finite field _q when q≥ n,m. To see this one may use the formula from <cit.> for the rank of pencils over finite fields, which for q≥ n,m is as follows:(t)=∑_i=1^p (_i +1) +∑_i=1^q (η_i+1)+ ℓ + δ(B).Here B is the regular part of the pencil t and δ(B) is the number of invariant divisors of B that do not decompose into a product of unassociated linear factors. (We refer to <cit.> for definitions.) The invariant divisors of (B,…,B) are just the invariant divisors of B counted for each copy of B and so Proposition <ref> follows. We note that part of the results in this section have been independently obtained in Section 2 of <cit.>. Acknowledgements We thank Jonathan Skowera for discussion, Fulvio Gesmundo for suggestions regarding Section 4, and Nick Vannieuwenhoven for discussion regarding the literature. We acknowledge financial support from the European Research Council (ERC Grant Agreement no. 337603), the Danish Council for Independent Research (Sapere Aude), and VILLUM FONDEN via the QMATH Centre of Excellence (Grant no. 10059). JZ is supported by NWO (617.023.116) and the QuSoft Research Center for Quantum Software. elsarticle-num | http://arxiv.org/abs/1705.09379v4 | {
"authors": [
"Matthias Christandl",
"Asger Kjærulff Jensen",
"Jeroen Zuiddam"
],
"categories": [
"math.AC",
"cs.CC",
"quant-ph",
"15A69"
],
"primary_category": "math.AC",
"published": "20170525215517",
"title": "Tensor rank is not multiplicative under the tensor product"
} |
Dirichlet-to-Neumann or Poincaré-Steklov operator on fractals described by d-sets KEVIN ARFI[[email protected]],ANNA ROZANOVA-PIERRAT[Laboratoire Mathématiques et Informatique Pour la Complexité et les Systèmes, Centrale Supélec, Université Paris-Saclay, Grande Voie des Vignes, Châtenay-Malabry, France,[email protected]] December 30, 2023 ======================================================================================================================================================================================================================================================================================= In the framework of the Laplacian transport, described by a Robin boundary value problem in an exterior domain in ^n, we generalize the definition of the Poincaré-Steklov operator to d-set boundaries, n-2< d<n, and give its spectral properties to compare tothe spectra of the interior domain and also of a truncated domain, considered as an approximation of the exterior case.The well-posedness of the Robin boundary value problems for the truncated and exterior domains is given in the general framework of n-sets. The results are obtained thanks to a generalization of the continuity and compactness properties of the trace and extension operators in Sobolev, Lebesgue and Besov spaces, in particular, bya generalization of theclassical Rellich-Kondrachov Theorem of compact embeddings for n and d-sets. § INTRODUCTIONLaplacian transports to and across irregular and fractal interfaces are ubiquitous in nature and industry: properties of rough electrodes in electrochemistry, heterogeneous catalysis, steady-state transfer across biological membranes (see <cit.> and references therein). To model it there is a usual interest to consider truncated domains as an approximation of the exterior unbounded domain case. Let Ω_0 and Ω_1 be two bounded domains in ^nwith disjoint boundaries Ω_0∩Ω_1=∅, denoted by Γ and S respectively, such that Ω_0⊂Ω_1.Thus, in this paper, we consider two types of domains constructed on Ω_0:* the unbounded exterior domain to Ω_0, denoted by Ω=^n∖Ω_0; * a bounded, truncated by a boundary S, truncateddomainΩ_S=(^n∖Ω_0)∩Ω_1. Let us notice that Γ∪ S=Ω_S (for the unbounded case S=∅ and Ω=Γ), see Fig. <ref>. As Ω_0 is bounded, its boundary Γ is supposed compact. The phenomenon of Laplacian transport to Γ can be described by the following boundary value problem:-Δ u = 0,x∈Ω_S or Ω,λ u + ∂_ν u = ψΓ,u = 0S, where ∂_ν u denotes the normal derivative of u, in some appropriate sense,λ∈ [0, ∞) is the resistivity of the boundary and ψ∈ L_2(Γ). For S=∅ we impose Dirichlet boundary conditions at infinity. The case of a truncated domain Ω_S corresponds to an approximation of the exterior problemin the sense of Theorem <ref>. When Ω is regular (C^∞ or at least Lipschitz), it is well-known <cit.> how to define the trace of u∈ H^1(Ω) and the normal derivative∂_ν u on Γ. The properties of the Poincaré-Steklovor the Dirichlet-to-Neumann operator, defined at manifolds with C^∞-boundaries are also well-known <cit.>. In the aim to generalize the Poincaré-Steklov operator to d-sets with n-2< d <n (the case n-1<d<n contains the self-similar fractals), we firstly study the most general context (see Section <ref>), when the problem (<ref>) is well-defined and its bounded variant (physically corresponding to a source at finite distance) can be viewed as an approximation of the unbounded case (corresponding to a source at infinity). 
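Before stating the functional framework, it may help to see problem (<ref>) in the simplest non-fractal situation. The sketch below compares the radially symmetric solution of the Robin problem outside a ball of radius R_0 in ^3 with its truncations by spheres of growing radius R_1; the ball geometry, the constant source ψ≡1, the value of λ and the sign convention ∂_ν u=-∂_r u on Γ (the normal of the transport domain pointing into Ω_0) are assumptions of this illustration only.

import numpy as np

R0, lam = 1.0, 2.0            # ball radius and Robin parameter lambda (arbitrary values)

def truncated(R1):
    # Radial harmonic ansatz u = A/r + B; solve for (A, B) from
    #   Robin on Gamma:    lam*(A/R0 + B) + A/R0**2 = 1     (here d_nu u = A/R0**2)
    #   Dirichlet on S:    A/R1 + B = 0
    M = np.array([[lam / R0 + 1.0 / R0**2, lam],
                  [1.0 / R1,               1.0]])
    A, B = np.linalg.solve(M, [1.0, 0.0])
    return A, B

# Exterior problem: decay at infinity forces B = 0, and the Robin condition gives A.
A_ext = 1.0 / (lam / R0 + 1.0 / R0**2)

for R1 in (2.0, 5.0, 20.0, 100.0):
    A, B = truncated(R1)
    print(f"R1 = {R1:6.1f}:  u|_Gamma = {A / R0 + B:.6f},  total flux = {4 * np.pi * A:.6f}")
print(f"exterior   :  u|_Gamma = {A_ext / R0:.6f},  total flux = {4 * np.pi * A_ext:.6f}")

The printed boundary values and fluxes converge, as R_1 grows, to those of the exterior problem; it is this kind of approximation that is made precise, for admissible domains with d-set boundaries, in Theorem <ref>.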
The main extension and trace theorems, recently obtained in the framework of d-sets theory, are presented and discussed in Section <ref>. They allow us to generalize the knownproperties of the trace and extension operators on the (,δ)-domains <cit.> (see Theorem <ref>) to a more general class of n-sets, calledadmissible domains (see Definition <ref>), and update for admissible domains the classical Rellich-Kondrachov theorem (see Theorems <ref> and <ref> for d-sets). Actually, we state that the compactness of a Sobolev embedding to a Sobolev space does not depend on the boundness of the domain, but it is crucial for the embeddings in the Lebesgue spaces. Hence, a trace operator H^1(Ω)→ L_2(Ω) mapping the functions definedon a domain Ω to their values on the boundary Ω (or on any part D of Ω, H^1(Ω)→ L_2(D)) is compact if and only if the boundary Ω (or the part D) is compact.After a short survey in Section <ref> of known results on the spectral properties of the Poincaré-Steklov operator for a bounded domain, we introduce the Poincaré-Steklov operator A on a compact d-set boundary Γ of an admissible bounded domain Ω_0.SinceΓ (see Fig. <ref>) can be viewed not only as the boundary of Ω_0, but also as the boundary of the exterior domain Ω and of its truncated domain Ω_S, we also introduce the Poincaré-Steklov operator A on Γ for the exterior and trucated cases and relate their spectral properties (see Section <ref>). In all cases, the Poincaré-Steklov operator A can be defined as a positive self-adjoint operator on L_2(Γ), and A has a discrete spectrum if and only if the boundary Γ is compact. The two dimensional case differs from the case of ^n with n≥ 3 by the functional reason (see Subsection <ref>) and gives different properties of the point spectrum of A (see Theorem <ref>). In particular, in the exterior case A for n=2 and n≥ 3 has different domains of definition (see Proposition <ref> in Section <ref>).Specially, for the case of a d-set Γ (see Theorems <ref>, <ref> and <ref>), we justify the method, developed in <cit.>, true for smooth boundaries, to find the total flux Φ across the interface Γ using the spectral decomposition of 1_Γ (belonging to the domain of A by Proposition <ref>) on the basis of eigenfunctions of the Dirichlet-to-Neumann operator (V_k)_k∈ in L_2(Γ) and its eigenvalues (μ_k)_k∈:Φ∝∑_k μ_k (1_Γ , V_k)_L_2(Γ)^2/1 + μ_k/λ. § CONTINUITY AND COMPACTNESS OF THE EXTENSION AND TRACE OPERATORS ON D-SETSBefore to proceed to the generalization results, let us define the main notions andexplain the functional context of d-sets. For instance, for the well-posedness result of problem (<ref>) on “the most general” domains Ω in ^n, we need to be able to say that for this Ωthe extension operator E: H^1(Ω)→ H^1(^n) is continuous and the trace operator (to be defined, see Definition <ref>) Tr: H^1(Ω)→Im(Tr(H^1(Ω)))⊂ L_2(Ω) is continuous and surjective.Therefore, let us introduce the existing results about traces and extension domains in the framework of Sobolev spaces.(W_p^k-extension domains) A domain Ω⊂^n is called a W_p^k-extension domain (k∈^*) if there exists a bounded linear extension operator E: W_p^k(Ω) → W^k_p(^n). 
This means that for all u∈ W_p^k(Ω) there exists a v=Eu∈W^k_p(^n) with v|_Ω=u and it holds v_W^k_p(^n)≤ Cu_W_p^k(Ω)with a constantC>0.The classical results of Calderon-Stein <cit.> say that every Lipschitz domain Ω is an extension domain for W_p^k(Ω) with 1≤ p≤∞, k∈^*.This result was generalized by Jones <cit.> in the framework of (,δ)-domains:((,δ)-domain <cit.>) An open connected subset Ω of ^n is an (,δ)-domain, > 0, 0 < δ≤∞, if whenever x, y ∈Ω and |x - y| < δ, there is a rectifiable arc γ⊂Ω with length ℓ(γ) joining x to y and satisfying * ℓ(γ)≤|x-y|/ and * d(z,Ω)≥ |x-z||y-z|/|x-y| for z∈γ. This kind of domains are also called locally uniform domains <cit.>.Actually, bounded locally uniform domains, orbounded (,δ)-domains, are equivalent (see <cit.> point 3.4) to the uniform domains, firstly defined by Martio and Sarvas in <cit.>, for which there are no more restriction |x-y|<δ(see Definition <ref>).Thanks to Jones <cit.>, it is known that any (,δ)-domain in ^n is a W_p^k-extension domain for all 1≤ p≤∞ and k∈^*. Moreover, for a bounded finitely connected domain Ω⊂^2, Jones <cit.> proved thatΩ is a W_p^k-extension domain (1≤ p≤∞ and k∈^*) if and only if Ω is an (,∞)-domain for some >0, if and only if the boundary Ω consists of finite number of points and quasi-circles. However, it is no more true for n≥3, i.e. there are W_p^1-extension domains which are not locally uniform <cit.> (in addition, an (,δ)-domain in ^n with n≥ 3 is not necessary a quasi-sphere).To discuss general properties oflocally uniform domains, let us introduce Ahlfors d-regular sets or d-sets:(Ahlfors d-regular set or d-set <cit.>)Let F be a Borel subset of ^nand m_d be the d-dimensional Hausdorff measure,0<d≤ n.The set F is called a d-set, if there existpositive constants c_1, c_2>0, c_1r^d≤ m_d(F∩ B_r(x))≤ c_2 r^d,for ∀ x∈ F,0<r≤ 1, where B_r(x)⊂^n denotes the Euclidean ball centered at x and of radius r. Henceforth, the boundary Γ is a d-set endowed with the d-dimensional Hausdorff measure, and L_p(Γ) is defined with respect to this measure as well.From <cit.>, it is known that* All (,δ)-domains in ^n are n-sets (d-set with d=n): ∃ c>0∀ x∈Ω,∀ r∈]0,δ[∩]0,1] μ(B_r(x)∩Ω)≥ Cμ(B_r(x))=cr^n, where μ(A) denotes the Lebesgue measure of a set A. This property is also called the measure density condition <cit.>. Let us notice that an n-setΩ cannot be “thin” close to its boundary Ω. * If Ω isan (,δ)-domain and Ω is a d-set (d<n) then Ω=Ω∪Ω is an n-set.In particular, a Lipschitz domain Ω of ^n is an (,δ)-domain and also an n-set <cit.>. But not every n-set is an (,δ)-domain: adding an in-going cusp to an (,δ)-domain we obtain an n-set which is not an (,δ)-domain anymore.Self-similar fractals (e.g., von Koch's snowflake domain) are examples of (,∞)-domains with the d-set boundary <cit.>, d>n-1. From <cit.> p.39, it is also known that all closed d-sets with d>n-1 preserve Markov's local inequality:(Markov's local inequality) A closed subset V in ^n preserves Markov's local inequality if for every fixedk∈^*, there exists a constant c=c(V,n,k) > 0, such that max_V∩B_r(x) |∇ P | ≤c/rmax_V∩B_r(x)|P|for all polynomials P ∈𝒫_k and all closed balls B_r(x), x ∈ V and 0 < r ≤ 1.For instance, self-similar sets that are notsubsets of any (n-1)-dimensional subspace of ^n, the closure of a domain Ω with Lipschitz boundary and also ^n itself preserve Markov's local inequality (see Refs. 
<cit.>).The geometrical characterization of sets preserving Markov's local inequality was initially given in <cit.> (see Theorem 1.3) and can be simply interpreted as sets which are not too flat anywhere. It can beillustrated by the following theorem of Wingren <cit.>:A closed subset V in ^n preserves Markov's local inequality if and only if there exists a constant c>0 such that for every ball B_r(x) centered in x∈ V and with the radius 0 < r ≤ 1, there are n + 1 affinely independent points y_i ∈ V∩ B_r(x), i=1,…,n+1, such that the n-dimensional ball inscribed in the convex hull of y_1, y_2, …, y_n+1, has radius not less than c r.Smooth manifolds in ^n of dimension less than n are examples of “flat” sets not preserving Markov's local inequality.The interest to work with d-sets boundaries preservingMarkov's inequality (thus 0<d<n), related in <cit.> with Sobolev-Gagliardo-Nirenberg inequality, is to ensure the regular extensions W^k_p(Ω)→ W^k_p(^n) with k≥ 2 (actually the condition applies the continuity of the extension C^∞(Ω)→ C^∞(^n)). For the extensions of minimal regularity k=1(see in addition the Definition of Besov space Def. 3.2 in <cit.>with the help of the normalized local best approximation in the class of polynomials P_k-1 of the degree equal to k-1) Markov's inequality is trivially satisfied.Recently, Hajłasz, Koskela and Tuominen <cit.> have proved that every W_p^k-extension domain in ^n for 1≤ p <∞ and k≥ 1, k∈ is an n-set. In addition, they proved that any n-set, for which W_p^k(Ω)=C_p^k(Ω) (with norms' equivalence), is a W_p^k-extension domain for 1<p<∞ (see <cit.> also for the results for p=1 and p=∞). By C_p^k(Ω) is denoted the space of the fractional sharp maximal functions:For a set Ω⊂^n of positive Lebesgue measure,C_p^k(Ω)={f∈ L_p(Ω)|f_k,Ω^♯(x)=sup_r>0 r^-kinf_P∈𝒫^k-11/μ(B_r(x))∫_B_r(x)∩Ω|f-P|∈ L^p(Ω)}with the norm f_C_p^k(Ω)=f_L_p(Ω)+f_k,Ω^♯_L_p(Ω). From <cit.> and <cit.> we directly have Let Ω be a bounded finitely connected domain in ^2 and 1<p<∞, k∈^*. The domain Ω is a 2-set with W_p^k(Ω)=C_p^k(Ω) (with norms' equivalence) if and only if Ω is an (,δ)-domain and its boundary Ω consists of a finite number of points and quasi-circles.The question about W^k_p-extension domains is equivalent to the question of the continuity of the trace operatorTr: W^k_p(^n) → W^k_p(Ω). Thus, let usgeneralize the notion of the trace: For an arbitrary open set Ω of ^n, the trace operator Tr is defined <cit.> for u∈ L_1^loc(Ω) byTr u(x)=lim_r→ 01/μ(Ω∩ B_r(x))∫_Ω∩ B_r(x)u(y)dy. The trace operator Tr is considered for all x∈Ω for which the limit exists. Using this trace definition it holds the trace theorem on closed d-sets <cit.> Ch.VII and <cit.> Proposition 4: Let F be a closed d-set preserving Markov’slocal inequality.Thenif 0 < d < n, 1 < p <∞, and β = k - (n - d)/p > 0, then the trace operator Tr: W^k_p(^n)→ B^p,p_β(F) is bounded linear surjection with a bounded right inverse E: B^p,p_β(F)→ W^k_p(^n).The definition of the Besov spaceB^p,p_β(F) on a closed d-set F can be found, for instance, in Ref. <cit.> p.135 and Ref. <cit.>. 
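Before introducing the class of domains used in the sequel, the d-set scaling condition c_1 r^d≤ m_d(F∩ B_r(x))≤ c_2 r^d can be illustrated numerically on the prototypical example mentioned above, the von Koch curve in ^2, whose Hausdorff dimension is d = log 4/log 3 ≈ 1.26. The sketch below is only a crude illustration: it replaces the d-dimensional Hausdorff measure of F∩ B_r(x) by the number of construction points of the curve falling in B_r(x) and reads the exponent off a log-log fit; the depth, the radii and the base point are arbitrary choices.

import numpy as np

def koch_points(depth):
    # Endpoints of the 4**depth segments of the von Koch curve built on [0, 1].
    pts = np.array([[0.0, 0.0], [1.0, 0.0]])
    rot = np.array([[0.5, -np.sqrt(3) / 2], [np.sqrt(3) / 2, 0.5]])   # rotation by +60 degrees
    for _ in range(depth):
        new = [pts[0]]
        for p, q in zip(pts[:-1], pts[1:]):
            v = (q - p) / 3.0
            a, b = p + v, p + 2.0 * v
            new += [a, a + rot @ v, b, q]
        pts = np.array(new)
    return pts

pts = koch_points(7)                      # points spread evenly w.r.t. the natural self-similar measure
x0 = pts[len(pts) // 2]                   # a point of the curve
radii = np.logspace(-2.0, -0.5, 10)
counts = [int((np.linalg.norm(pts - x0, axis=1) < r).sum()) for r in radii]
slope = np.polyfit(np.log(radii), np.log(counts), 1)[0]
print(f"fitted exponent: {slope:.2f}   (log 4 / log 3 = {np.log(4) / np.log(3):.2f})")

For such a boundary the trace theorem above applies with n = 2 and d = log 4/log 3, so that β = k - (2 - d)/p is already positive for k = 1.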
Hence, we introduce the notion of admissible domains: (Admissible domain)A domain Ω⊂^nis called admissible if it is an n-set, such that for 1<p<∞ and k∈^* W_p^k(Ω)=C_p^k(Ω)as sets with equivalent norms (hence, Ω is a W_p^k-extension domain), with a closed d-set boundary Ω, 0<d<n, preserving local Markov's inequality.Therefore, we summarize useful in the what follows results (see <cit.>) for the trace and the extension operators (see <cit.> for more general results for the case p>n): Let 1<p<∞, k∈^* be fixed. Let Ω be an admissible domain in ^n.Then, for β=k-(n-d)/p>0, the following trace operators (see Definition <ref>)* Tr: W_p^k(^n)→ B^p,p_β(Ω)⊂ L_p(Ω), * Tr_Ω:W_p^k(^n)→ W_p^k(Ω), * Tr_Ω:W_p^k(Ω)→ B^p,p_β(Ω)are linear continuous and surjective with linear boundedright inverse, i.e. extension, operators E: B^p,p_β(Ω)→ W_p^k(^n), E_Ω: W_p^k(Ω)→ W_p^k(^n) and E_Ω: B^p,p_β(Ω)→ W_p^k(Ω).Proof.It is a corollary of results given in Refs. <cit.>. Indeed, if Ω is admissible, then by Theorem <ref>, the trace operator Tr: W_p^k(^n)→ B^p,p_β(Ω)⊂ L_p(Ω) is linear continuous and surjective with linear boundedright inverse E: B^p,p_β(Ω)→ W_p^k(^n) (point 1). On the other hand, by <cit.>, Ω is a W_p^k-extension domain and Tr_Ω:W_p^k(^n)→ W_p^k(Ω) and E_Ω: W_p^k(Ω)→ W_p^k(^n) are linear continuous (point 2). Hence, the embeddings B^p,p_β(Ω) → W_p^k(^n)→ W_p^k(Ω) and W_p^k(Ω)→ W_p^k(^n)→ B^p,p_β(Ω) are linear continuous (point 3). Note that for d=n-1, one has β=1/2 and B_1/2^2,2(Ω)=H^1/2(Ω) as usual in the case of the classical results <cit.> for Lipschitz boundaries Ω. In addition, for u, v∈ H^1(Ω) with Δ u∈ L_2(Ω), the Green formula still holds in the framework of dual Besov spaces on a closed d-set boundary of Ω (see <cit.> Theorem 4.15 for the von Koch case in ^2):(Green formula) Let Ω be an admissible domain in ^n (n≥ 2) with a d-set boundary Ω such that n-2<d<n. Then for all u, v∈ H^1(Ω) with Δ u∈ L_2(Ω) it holds the Green formula∫_Ω vΔ u=⟨ u/ν, Trv⟩ _((B^2,2_β(Ω))', B^2,2_β(Ω))-∫_Ω∇ v ∇ u ,where β=1-(n-d)/2>0 and the dual Besov space (B^2,2_β(Ω))'=B^2,2_-β(Ω) is introduced in <cit.>. Equivalently, for an admissible domain Ω the normal derivative of u∈ H^1(Ω) with Δ u∈ L_2(Ω) on the d-set boundary Ω with n-2<d<n is defined by Eq. (<ref>) as a linear and continuous functional on B^2,2_β(Ω).Proof. The statement follows, thanks to Theorem <ref>, from the surjective property of the continuous trace operator Tr_Ω:H^1(Ω)→ B^2,2_β(Ω) (see <cit.> Theorem 4.15).For a Lipschitz domain (d=n-1 and thus B_1/2^2,2(Ω)=H^1/2(Ω)), we find the usual Green formula <cit.>∫_Ω vΔ u=⟨ u/ν, Trv⟩ _((H^1/2(Ω))', H^1/2(Ω))-∫_Ω∇ v ∇ u .We also state the compact embedding of H^1 in L_2 for admissible truncated domains: (Admissible truncated domain)A domain Ω_S⊂^n (n≥ 2) is called admissible truncated domainof anexterior and admissible,according to Definition <ref>, domain Ω with a compact d_Γ-set boundary Γ, if it is truncated by an admissible bounded domain Ω_1 with a d_S-set boundary S,Γ∩ S=∅ (see Fig. <ref>). LetΩ_S be an admissible truncated domain with n-2< d_S<n. Thenthe Sobolev space H^1(Ω_S) is compactly embedded in L_2(Ω_S):H^1(Ω_S) ⊂⊂ L_2(Ω_S). Proof.Actually, in the case of a truncated domain, it is natural to impose n-1≤ d_Γ<n and n-1≤ d_S<n, but formally the condition β=1-(n-d)/2>0 (see Theorem <ref> and Proposition <ref>) only imposes the restrictionn-2< d.If Ω is an admissible domain (exterior or not), by Theorem <ref>, there exists linear bounded operator E_Ω: H^1(Ω)→ H^1(^n). 
Now, let in addition Ω be an exterior domain. Let us prove that for the admissible truncated domain Ω_S the extension operator E_Ω_S→Ω:H^1(Ω_S) → H^1(Ω) isa linear bounded operator. It follows from the fact that it is possible to extend Ω_1 to ^n (there exists a linear bounded operator E_Ω_1: H^1(Ω_1)→ H^1(^n)) and that the properties of the extension are local, i.e. depend on the properties of the boundary S=Ω_1, which has no intersection with Γ=Ω. For instance, if S∈ C^1 (and thus d_S=n-1), then we can use the standard "reflection method" (as for instance in <cit.> Proposition 4.4.2). More precisely, we have to use a finite open covering (ω_i)_i of S such that for all i ω_i ∩Ω_0 = ∅. The compactness of S and the fact that S∩Γ=∅ ensure that such a covering exists. In the case of a d-set boundary we use the Whitney extension method and Theorem <ref>. Hence, using Theorem <ref>, there exists a linear bounded operator A: H^1(Ω)→ H^1(Ω_1) as a composition of the extension operator E_Ω: H^1(Ω)→ H^1(^n) and the trace operator Tr_Ω_1:H^1(^n)→ H^1(Ω_1) (A=Tr_Ω_1∘ E_Ω). Let us define a parallelepiped Π in the such way thatΩ_1 ⊊Π,Π={x=(x_1,…,x_n)| 0<x_i<d_i (d_i∈)}.Consequently, the operator B=E_Ω_1→Π∘A∘ E_Ω_S→Ω :H^1(Ω_S) → H^1(Π) is a linear bounded operator as a composition of linear bounded operators.Let us prove H^1(Ω_S)⊂⊂ L_2(Ω_S). We followthe proof of the compact embedding of H^1 to L_2, given in <cit.> in the case of a regular boundary.Indeed, let (u_m)_m∈ be a bounded sequence in H^1(Ω_S). Thanks to the boundness of the operator B, for all m∈ weextend u_m from Ω_S to the parallelepiped Π, containing Ω_S. Thus, for all m∈ the extensions Bu_m=ũ_m satisfyũ_m∈ H^1(Π),ũ_m|_Ω_S=u_m,u_m_H^1(Ω_S)≤ũ_m_H^1(Π),and, in addition, there exists a constant C(Ω,Π)=B>0, independent on u_m, such thatũ_m_H^1(Π)≤ C(Ω,Π) u_m_H^1(Ω_S).Hence, the sequence (ũ_m)_m∈ is also abounded sequence in H^1(Π). Since the embedding H^1(Π) to L_2(Π) is continuous, the sequence (ũ_m)_m∈ is also bounded in L_2(Π). Let in additionΠ=⊔_i=1^N^nΠ_i,where Π_i=⊗_k=1^n [a_i,a_i+d_k/N](a_i∈).Thanks to <cit.> p. 283, inΠ there holds the following inequality for all u∈ H^1(Π):∫_Π u^2 dx≤∑_i=1^N^n1/|Π_i|(∫_Π_iudx)^2+n/2∫_Π∑_k=1^n(d_k/N)^2( u/ x_k)^2dx.On the other hand,L_2(Π) is aHilbert space, thus weak^* topology on it is equal to the weak topology. Moreover, as L_2 is separable,all closed bounded sets in L_2(Π) are weakly sequentially compact (or compact in the weak topology since here the weak topology is metrizable).To simplify the notations,wesimply write u_m for ũ_m ∈ L_2(Π). Consequently, the sequence (u_m)_m∈ is weakly sequentially compact in L_2(Π) and we have∃ (u_m_k)_k∈⊂ (u_m)_m∈ : ∃ u∈ L_2(Π)u_m_k⇀ ufork→+∞.Here u is an element of L_2(Π), not necessarily in H^1(Π).As (L_2(Π))^*=L_2(Π), by the Riesz representation theorem, u_m_k⇀ u ∈ L_2(Π) ⇔∀ v ∈ L_2(Π) ∫_Π(u_m_k-u)vdx→ 0.Since (u_m_k)_k∈ is a Cauchy sequence in the weak topology on L_2(Π), then, in particular choosing v=1_Π, it holds∫_Π(u_m_k-u_m_j)dx→ 0 fork,j→ +∞.Thus, using Eq. (<ref>), for two members of the sub-sequence (u_m_k)_k∈ with sufficiently large ranks p and q, we haveu_p-u_q^2_L_2(Ω_S)≤u_p-u_q^2_L_2(Π) ≤∑_i=1^N^n1/|Π_i|(∫_Π_i(u_p-u_q)dx)^2+n/2N^2∑_k=1^nd_k^2 u_p/ x_k- u_q/ x_k^2_L_2(Π)</2+/2=.Here we have chosen N such thatn/2N^2∑_k=1^nd_k^2 u_p/ x_k- u_q/ x_k^2_L_2(Π)</2.Consequently, (u_m_k)_k∈ is a Cauchy sequence in L_2(Ω_S), and thus converges strongly in L_2(Ω_S).To have a compact embeddingit is importantthat the domain Ω be an W^k_p-extension domain. 
The boundness or unboudness of Ω is not important to have W_p^k(Ω)⊂⊂ W_p^ℓ(Ω) with k>ℓ≥1 (1<p<∞). But the boundness of Ω is important for thecompact embedding in L_q(Ω). As a direct corollary we have the following generalization of the classical Rellich-Kondrachov theorem (see for instance Adams <cit.> p.144 Theorem 6.2):(Compact Sobolev embeddings for n-sets)Let Ω⊂^n be an n-set with W_p^k(Ω)=C_p^k(Ω), 1<p<∞, k,ℓ∈^*. Then there holdthe following compactembeddings: * W_p^k+ℓ(Ω)⊂⊂ W_q^ℓ(Ω), * W_p^k(Ω)⊂⊂ L_q^loc(Ω),or W_p^k(Ω)⊂⊂ L_q(Ω) if Ω is bounded, with q∈[1,+∞[ if kp=n, q∈ [1,+∞] if kp>n, and with q∈[1, pn/n-kp[ if kp<n.Proof. Let us denote byB_r(x) a non trivial ball for the Euclidean metric in ^n (its boundary is infinitely smooth, and thus, it is a W^k_p-extension domain for all1<p<∞ andk∈^*).By <cit.> (see also Theorem <ref>),the extension E: W_p^k+ℓ(Ω)→ W_p^k+ℓ(^n) and the trace Tr_B_r:W_p^k+ℓ(^n)→ W_p^k+ℓ(B_r(x)) are continuous. In addition, bythe classical Rellich-Kondrachov theorem on the ball B_r(x), the embedding K: W_p^k+ℓ(B_r(x))→ W_q^ℓ(B_r(x)), for the mentioned values of k, p, n and ℓ, is compact.Hence, for ℓ≥ 1,thanks again to <cit.>, E_1: W_q^ℓ(B_r(x)) → W_q^ℓ(Ω) is continuous, as the composition of continuous operatorsE_2: W_q^ℓ(B_r(x)) → W_q^ℓ(^n) and Tr_Ω:W_q^ℓ(^n) → W_q^ℓ(Ω). Finally, the embedding W_p^k+ℓ(Ω)⊂ W_q^ℓ(Ω) for ℓ≥ 1 is compact, by the composition of the continuous andcompact operators: E_1∘ K ∘Tr_B_r∘ E: W_p^k+ℓ(Ω)→ W_q^ℓ(Ω). When ℓ=0, instead of Sobolev embedding E_1, we need to have the continuous embedding of Lebesgue spacesL_q(B_r(x)) → L_q(Ω), which holds if and only if Ω is bounded. If Ω is not bounded, for all measurable compact sets K⊂Ω, the embedding L_q(B_r(x))→ L_q(K) is continuous. This finishes the proof.In the same way, we generalizethe classical Rellich-Kondrachov theorem for fractals: (Compact Besov embeddings for d-sets)Let F⊂^n be a closed d-set preserving Markov's local inequality, 0<d<n, 1<p<∞ and β=k+ℓ-n-d/p>0 for k,ℓ∈^*. Then, for the same q as in Theorem <ref>, the following continuous embeddingsare compact* B^p,p_β(F)⊂⊂ B^q,q_α(F) forℓ≥ 1 and α=ℓ-n-d/q>0;* if F is bounded in ^n, B^p,p_β(F)⊂⊂ L_q(F), otherwise B^p,p_β(F)⊂⊂ L_q^loc(F) for ℓ≥ 0. Proof. Indeed, thanks to Theorem <ref>, the extension E_F:B^p,p_β(F) → W_p^k+ℓ(^n) is continuous. Hence, by Calderon <cit.>, a non trivial ball is W_p^k+ℓ-extension domain: Tr_B_r (see the proof of Theorem <ref>) is continuous. Thus, the classical Rellich-Kondrachov theorem on the ball B_r(x) gives the compactness of K: W_p^k+ℓ(B_R)→ W_q^ℓ(B_r). Since, for ℓ≥ 1, E_2:W_q^ℓ(B_r)→ W_q^ℓ(^n) is continuous and, by Theorem <ref>,Tr_F:W_q^ℓ(^n)→ B^q,q_α(F) is continuous too, we conclude that the operatorTr_F∘ K ∘Tr_B_r∘ E_F:B^p,p_β(F) → B^q,q_α(F) is compact. For j=0, we have W_q^0=L_q, and hence, if F⊂ B_r(x), the operatorL_q(B_r(x)) → L_q(F) is a linear continuous measure-restriction operator on a d-set (see <cit.> for the d-measures). If F is not bounded in ^n,for all bounded d-measurable subsets K of F, the embedding L_q(B_r(x))→ L_q(K) is continuous. In particular, the compactness of the trace operator implies the following equivalence of the norms on W_p^k(Ω):Let Ω be an admissible domain in ^n with a compact boundary Ω and 1<p<∞, k∈^*, β=k-n-d/p>0. Then* W_p^k(Ω)⊂⊂ L_p^loc(Ω);* Tr: W_p^k(Ω)→ L_p(Ω) is compact;* u_W_p^k(Ω) is equivalent to u_Tr=(∑_|l|=1^k ∫_Ω |D^l u|^p +∫_Ω |Tru|^p_d )^1/p. Proof. Point 1 follows from Theorem <ref> and holds independently on values of kp and n. 
The trace operator Tr: W_p^k(Ω)→ L_p(Ω) fromPoint 2 is compact as a composition of the compact, by Theorem <ref>, operator K:B^p,p_β(Ω)→ L_p(Ω) with the continuous operator Tr_Ω: W_p^k(Ω)→ B^p,p_β(Ω).Let us show that Points 1 and 2 imply the equivalence of the norms in Point 3.We generalize the proof of Lemma 2.2 in Ref. <cit.>.Since the trace Tr: W_p^k(Ω)→ L_p(Ω) is continuous, then there exists a constant C>0 such that for all u∈ W_p^k(Ω) ∫_Ω |Tru|^p_d≤∑_|l|=1^k ∫_Ω |D^l u|^p +∫_Ω |Tru|^p_d≤ Cu_W_p^k(Ω)^p. Let us prove that there a constant c>0 such that for all u∈ W_p^k(Ω) ∫_Ω |u|^p≤ c(∑_|l|=1^k ∫_Ω |D^l u|^p +∫_Ω |Tru|^p_d)= cu_Tr^p. Suppose the converse. Then for all m∈^* there exists a u_m∈ W_p^k(Ω) such that u_m^p_Tr<1/m∫_Ω |u_m|^p. As in Ref. <cit.>, without loss of generality we assume thatfor all m∈^*u_m_L_p(Ω)=1. Then the sequence (u_m)_m∈ is bounded in W_p^k(Ω): for all m∈^* u_m^p_W_p^k(Ω)≤ 2. As W_p^k(Ω) is a reflexive Banach space, each bounded sequence inW_p^k(Ω) contains a weakly convergent subsequence. Hence, there exists u∈W_p^k(Ω) such that u_m_i⇀ u in W_p^k(Ω) for m_i → +∞. By the compact embedding of W_p^k(Ω) in L_p(Ω) (Point 1), the subsequence (u_m_i)_i∈ converges strongly towards u in L_p(Ω). Consequently, u_L_p(Ω)=1 and ∑_|l|=1^k ∫_Ω |D^l u|^p≤lim inf_i→ +∞∑_|l|=1^k ∫_Ω |D^l u_m_i|^p≤lim inf_i→ +∞1/n=0. Therefore, u is constant (since Ω is connected) with u_L_p(Ω)=1. From Eq. (<ref>) we have ∫_Ω |Tru_m_i|^p_d≤u_m_i^p_Tr<1/m_i∫_Ω |u_m_i|^p. Since the trace operator Tr: W_p^k(Ω)→ L_p(Ω) is compact (Point 2), it holds Tru^p_L_p(Ω)=lim_i→ +∞Tru_m_i^p_L_p(Ω)=0, which implies that u=0. This is a contradiction with u_L_p(Ω)=1. Hence, there exists a constant c̃>0 such thatu_W_p^k(Ω)≤c̃u_Tr. Going back to the Laplace transport problem onexterior and truncated domains, we especially need the following theorem, thus formulated for H^1: (Compactness of the trace)Let Ω be an admissible domain of ^n with a compact d-set boundary Γ, n-2< d_Γ<n. If Ω is an exterior domain, let Ω_S be its admissible truncated domain with n-2< d_S<n. Then for all these domains, i.e. for D=Ω or D=Ω_S,there exist linear trace operatorsTr_Γ:H^1(D) → L_2(Γ) and, ifS∅, Tr_S:H^1(Ω_S) → L_2(S), which are compact. Moreover, Im(Tr_Γ(H^1(D)))=B^2,2_β_Γ(Γ) for β_Γ=1-n-d_Γ/2>0 and Im(Tr_S(H^1(Ω_S)))=B^2,2_β_S(S) for β_S=1-n-d_S/2>0. Proof.It is a direct corollary of Proposition <ref>.§ WELL-POSEDNESSOF ROBIN BOUNDARY PROBLEM FOR THE LAPLACE EQUATION §.§ Well-posedness on a truncated domainLet us start by a well-posedness ofproblem (<ref>) for an admissible truncated domain Ω_S introduced in Section <ref>.Therefore, Ω_S is a bounded domain with a compact d_Γ-set boundary Γ, n-2< d_Γ<n (n≥ 2), on which is imposedthe Robin boundary condition for λ>0 and ψ∈ L_2(Γ), and ad_S-set boundary S, n-2< d_S<n, on which is imposedthe homogeneous Dirichlet boundary condition. Let us denoteH̃^1(Ω_S) := { u ∈ H^1(Ω_S) : Tr_S u = 0 }. Note that, thanks to Theorem <ref>, the continuity of the map Tr_S ensuresthat H̃^1(Ω_S) is a Hilbert space. Therefore, thanks to Proposition <ref>, as H^1(Ω_S)⊂⊂ L_2(Ω_S), following for instance the proof of Evans <cit.> (see section 5.8.1 Theorem 1), we obtain(Poincaré inequality)Let Ω_S⊂^n be an admissible truncated domain, introduced in Theorem <ref>, with n≥ 2.For allv ∈H̃^1(Ω_S) there exists C > 0, depending only on Ω_S and n, such thatv _L_2(Ω_S)≤ C ∇ v _L_2(Ω_S). 
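This Poincaré inequality can be probed numerically on a model truncated domain. The short script below is an illustrative sketch of our own (the square-annulus geometry Ω_S = (0,3)^2 ∖ [1,2]^2, the grid size and the test functions are arbitrary choices, not taken from the text): it evaluates the Rayleigh-type quotient ‖v‖_L_2(Ω_S)/‖∇ v‖_L_2(Ω_S) for smooth functions vanishing on the outer boundary S but not on the inner boundary Γ.

```python
import numpy as np

# Model truncated domain Omega_S = (0,3)^2 \ [1,2]^2:
# S = outer boundary of the big square (homogeneous Dirichlet side),
# Gamma = boundary of the removed square (no condition imposed there).
m = 601
x = np.linspace(0.0, 3.0, m)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
omega_s = ~((X > 1.0) & (X < 2.0) & (Y > 1.0) & (Y < 2.0))   # mask of Omega_S

# Smooth test functions vanishing on S (x = 0,3 or y = 0,3) but not on Gamma.
tests = {
    "polynomial bubble": X * (3 - X) * Y * (3 - Y),
    "sine bubble": np.sin(np.pi * X / 3) * np.sin(np.pi * Y / 3),
    "skewed sine": np.sin(np.pi * X / 3) * np.sin(np.pi * Y / 3) * (1 + X * Y),
}

for name, v in tests.items():
    vx, vy = np.gradient(v, h, h)                 # finite-difference gradient
    l2_norm = np.sqrt(np.sum(v[omega_s] ** 2) * h * h)
    grad_norm = np.sqrt(np.sum(vx[omega_s] ** 2 + vy[omega_s] ** 2) * h * h)
    print(f"{name:18s}  ||v||_2 / ||grad v||_2 = {l2_norm / grad_norm:.3f}")
```

For these particular test functions the quotients come out well below 1; the proposition guarantees a uniform bound C(Ω_S, n) over all of H̃^1(Ω_S).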
Therefore the semi-norm ·_H̃^1(Ω_S), defined by v_H̃^1(Ω_S)=∇ v _L_2(Ω_S), is a norm, which is equivalent to ·_H^1(Ω_S) on H̃^1(Ω_S).Let us denote ⟨ v ⟩=1/Vol(Ω_S)∫_Ω_S v. Since Ω_S is a bounded W^1_p-extension domain, Theorem <ref>ensures W^1_p(Ω_S)⊂⊂ L_p(Ω_S) for all 1< p< ∞. Thus thePoincaré inequality can be generalized with the same proof to W^1_p(Ω_S) for all 1< p< ∞: ∀ v ∈ W^1_p(Ω_S)∃ C=C(Ω_S,p,n)>0: v-⟨ v ⟩_L^p(Ω_S)≤ C ∇ v _L^p(Ω_S).Consequently, using these properties of H̃^1(Ω_S), we have the well-posedness of problem (<ref>): Let Ω_S⊂^n be an admissible truncated domain, introduced in Theorem <ref>, with n≥ 2. For all ψ∈ L_2(Γ) and λ≥ 0, there exists a unique weak solution u∈H̃^1(Ω_S) of problem (<ref>) such that∀ v ∈H̃^1(Ω_S) ∫_Ω_S∇ u ∇ v+ λ∫_ΓTr_Γ u Tr_Γ v d m_d_Γ = ∫_ΓψTr_Γ v d m_d_Γ. Therefore, for all λ∈ [0,∞[ and ψ∈ L_2(Γ) the operatorB_λ(S): ψ∈ L_2(Γ)↦ u∈H̃^1(Ω_S)with u, the solution of the variational problem (<ref>), has the following properties* B_λ(S) is a linear compact operator; * B_λ(S) is positive: if ψ≥ 0 from L_2(Γ), thenfor all λ∈ [0, ∞[ B_λψ=u≥ 0; * B_λ(S) is monotone: if 0≤λ_1<λ_2, then for all ψ≥ 0 from L_2(Γ) it holds B_λ_2(S) ψ=u_λ_2≤ u_λ_1= B_λ_1(S) ψ; * If λ∈ [0, ∞[ then 0≤ B_λ(S) 1_Γ≤1/λ1_Ω_S. Proof. It's a straightforward application of the Lax-Milgram theorem. The continuity of the two forms is ensured by the continuity of the trace operator Tr_Γ (see Theorem <ref>). The coercivity of the symmetric bilinear form is ensured by the Poincaré inequality (see Proposition <ref>). To prove the properties of the operator B_λ(S) it is sufficient to replace W^D(Ω) byH̃^1(Ω_S) in the proof of Theorem <ref>.§.§ Functional spaces for the exterior problem To be able to prove the well-posedness of problem (<ref>) on an exterior domain with Dirichlet boundary conditions at infinity, weextend the notion of (H̃^1,·_H̃^1) to the unbounded domains. If Ω is an exterior domain of a bounded domain Ω_0, i.e. Ω=^n∖Ω_0, the usual Poincaré inequality does not hold anymore and, hence, we don't have Proposition <ref>. For this purpose, we use <cit.> and define for Ω=^n∖Ω_0, satisfying the conditions of Theorem <ref>,W(Ω) := { u ∈ H^1_loc(Ω), ∫_Ω |∇ u|^2 < ∞}. Let us fix a r_0>0 in the waythat there exists x∈^n such that Ω_0⊂ B_r_0(x)={y∈^n||x-y|<r_0}, and for all r≥ r_0 define Ω_r=Ω∩ B_r(x). Thanks to Remark <ref>, locally we always have the Poincaré inequality: ∀ u∈ W(Ω) u-⟨ u ⟩_L_2^loc(Ω)≤ C_loc∇ u_L_2^loc(Ω)≤ C_loc∇ u_L_2(Ω)<∞,which implies that for all r≥ r_0 the trace u|_Ω_r∈ H^1(Ω_r) (see Proposition <ref> and Theorem <ref>). Therefore, as in <cit.>, it is still possible to consider (but we don't need it)W(Ω) = { u : Ω→ | u ∀ r > r_0u|_Ω_r∈ H^1(Ω_r)∫_Ω |∇ u|^2 < ∞}.Thanks to G. Lu and B. Ou (see <cit.> Theorem 1.1 with p=2), we haveLet u ∈ W(^n) with n≥ 3. Then there exists the following limit:(u)_∞ = lim_R →∞1/| B_R |∫_B_R u.Moreover, there exists a constant c > 0, depending only on the dimension n, but not on u, such that:u - (u)_∞_L_2n/n - 2(^n)≤ c ∇ u _L_2(^n).In <cit.> Section 5, G. Lu and B. Ou extend this result to exterior domains with a Lipschitz boundary. Their proof is based on the existence of a continuous extension operator. Therefore, thanks to Theorem <ref>, we generalizeTheorem 5.2 and Theorem 5.3 of G. Lu and B. Ou and take p=2, according to our case. 
Let n≥ 3 and Ω be an admissible exterior domainwith a compact d-set boundary Γ (n-2< d<n).There exists c := c(n, Ω) > 0 so that for all u ∈ W(Ω) there exists (u)_∞∈ such that (∫_Ω | u - (u)_∞|^2n/n - 2)^n - 2/2n≤ c(n, Ω)∇ u _L_2(Ω).Moreover, it holds * The space W(Ω) is a Hilbert space, corresponding the inner product (u, v) := ∫_Ω∇ u . ∇ v + (u)_∞(v)_∞. The associated norm is denoted by u_W(Ω). * The following norms are equivalent to ·_W(Ω):u_Γ,Ω=(∇ u^2_L_2(Ω) + Tr u^2_L_2(Γ) )^1/2,u_A,Ω=( ∇ u^2_L_2(Ω) + u^2_L_2(A) )^1/2, where A ⊂Ω is a bounded measurable set with Vol(A)=∫_A 1 >0. * There exists a continuous extension operator E : W(Ω) → W(^d). * The trace operator Tr : W(Ω) → L_2(Γ) is compact. Proof.Thanks to Theorem <ref>, we update Theorem 5.2 and 5.3 <cit.> to obtaininequality (<ref>).Let us notice the importance of the Sobolev embedding H^1(^n)⊂ L_2n/n - 2(^n) which holds for n≥ 3, but which is false for n=2. The real number (u)_∞ inthe inequality (<ref>) is merely the 'average' of an extension of u to ^n, as defined in Theorem <ref>.Point 1, stating the completeness of W(Ω), follows from Ref. <cit.> by updating the proof of Theorem 2.1. The equivalence of norms in Point 2 follows from the proof of Proposition <ref> using Theorems <ref> and <ref> (see also Proposition 2.5 <cit.>). To prove Point 3,we notice that, thanks to Point 2, the extension operator E is continuous if and only if the domain Ω is such that the extension E_Ω: H^1(Ω)→ H^1(^n) is a linear continuous operator. This is true in our case, since the domain Ω satisfies the conditions of Theorem <ref>.In addition, the continuity of E_Ω ensures that, independently on the geometric properties of the truncated boundary S (S∩Γ=∅), for all (bounded) truncated domains Ω_S the extension operator E_0: H^1(Ω_S)→ H^1(Ω_S∪Ω_0) is continuous. Indeed, if E_Ω: H^1(Ω)→ H^1(^n) is continuous, then H^1_loc(Ω)→ H^1(^n) is also continuous and hence,we can consider only functions with a support on Ω_S and extend them to Ω_S∪Ω_0=^n∩Ω_1 to obtain the continuity of E_0. To prove Point 4, we write Tr : W(Ω) → L_2(Γ) as a composition of two traces operators:Tr= Tr_Γ∘ Tr_W→ H^1,Tr_W→ H^1: W(Ω)→ H^1(Ω_S),Tr_Γ : H^1(Ω_S)→ L_2(Γ).As Tr_W→ H^1 is continuous, i.e.u_H^1(Ω_S)^2≤ C(∇ u_L_2(Ω)^2+ u_L_2(Ω_S)^2),and, since Ω is an admissible domain with a compact boundary Γ, by Proposition <ref>,Tr_Γ is compact, we deduce the compactness of Tr : W(Ω) → L_2(Γ).To have an analogy in the unbounded case with H̃^1 for a truncated domain, let us introduce, as in <cit.>, the space W^D(Ω), defined by the closure of the space{ u|_Ω :u ∈𝒟(^n), n≥ 3}with respect to the norm u ↦ (∫_Ω |∇ u|^2)^1/2. Therefore, forthe inner product (u, v)_W^D(Ω) = ∫_Ω∇ u . ∇ v ,the space (W^D(Ω), (., .)_W^D(Ω)) is a Hilbert space (see a discussion about it on p. 8 of Ref. <cit.>).It turns out that W^D(Ω) is the space of all u ∈ W(Ω) with average zero: Let Ω be a unbounded (actually, exterior) domain in ^n with n≥ 3. The space W^D(Ω) has co-dimension 1 in W(Ω). MoreoverW^D(Ω) = W(Ω) ∩ L_2n/n - 2(Ω) = {u ∈ W(Ω) : (u)_∞ = 0}. Proof.See <cit.> Proposition 2.6 and references therein. Note that, as n≥ 3, H^1(Ω) ⊂ W(Ω) ∩ L_2n/n - 2(Ω)= W^D(Ω), which is false for n=2. §.§ Well-posedness of the exterior problem and its approximation Given ψ∈ L_2(Γ) and λ≥ 0, we consider the Dirichlet problem on the exterior domain Ω with Robin boundary conditions on the boundary Γ in ^n, n≥ 3: -Δ u=0x ∈Ω,λTr u + ∂_ν u= ψ x ∈Γ.At infinity we consider Dirichlet boundary conditions. In <cit.> W. 
Arendt and A.F.M ter Elst also considered Neumann boundary conditions at infinity. Those results apply as well in our setting, but we chose to focus on the Dirichlet boundary conditions at infinity in order not to clutter the presentation. It is worth emphasizing that in the following we only consider weak formulations that we describe below. Since (see Subsection <ref>) H^1(Ω)⊂ W^D(Ω)⊂ W(Ω) by their definitions,we need to update the definition of the normal derivative, given by Eq. (<ref>) in Section <ref>, to be able to work with elements of W(Ω).Let u ∈ W(Ω) and Δ u ∈ L_2(Ω).We say that u has a normal derivative ψ on Γ, denoted by ∂_ν u = ψ, if ψ∈ L_2(Γ) and for all v ∈𝒟(^n)∫_Ω (Δ u)v + ∫_Ω∇ u ·∇ v = ∫_Γψ Tr v_d.Definition <ref> defines a weak notion of normal derivative of a function in W(Ω) in the distributional sense, if it exists. If it exists, it is unique. In addition, thanks to the definition of the space W^D(Ω), functions v∈𝒟(^n), considered on Ω, are dense in W^D(Ω). Thus, by the density argument, Eq. (<ref>) holds for all v ∈ W^D(Ω) (see <cit.> p. 321). Next we define the associated variational formulation for the exterior problem (<ref>): Let ψ∈ L_2(Γ) and λ≥ 0, we say that u ∈ W^D(Ω) is a weak solution to the Robin problem with Dirichlet boundary conditions atinfinity if∀ v ∈W^D(Ω) ∫_Ω∇ u ∇ v+ λ∫_ΓTr u Tr v_d= ∫_ΓψTr v _d.The variational formulation (<ref>) is well-posed: Let Ω be an admissible exterior domain with a compact d-set boundary Γ (n-2< d<n, n≥ 3).For all λ∈ [0, ∞[ and for all ψ∈ L_2(Γ) there exists a unique weak solution u∈W^D(Ω) to the Robin problem with Dirichlet boundary conditions at infinity in the sense of Definition <ref>. Moreover, if the operator B_λ is defined byB_λ: ψ∈ L_2(Γ)↦ u∈W^D(Ω)with u, the solution of Eq. (<ref>), then it satisfies the same properties as the operator B_λ(S) introduced in Theorem <ref> for the truncated domains (see points 1–4):B_λ is a linear compact, positive and monotone operator with 0≤λ B_λ1_Γ≤1_Ω for all λ∈ [0, ∞[. Proof. Thanks to Theorem <ref>,the trace operator Tr is continuous from W^D(Ω) to L_2(Γ). Thenthe well-posedness of Eq. (<ref>) and the continuity of B_λ follow from the application of the Lax-Milgram theorem in the Hilbert space W^D(Ω). To prove the compactness of B_λ, we followRef. <cit.> Proposition 3.9.Indeed, let λ∈ [0, ∞[ and (ψ_k)_k∈ be a bounded sequence in L_2(Γ). Then there exists ψ∈ L_2(Γ) such that, up to a sub-sequence, ψ_k L_2(Γ)⇀ψ for k→ +∞. For all k ∈ we set u_k = B_λψ_k and u = B_λψ.From the continuity of B_λ it follows that u_k W^D(Ω)⇀ u for k→ +∞. Therefore, Tr u_k L_2(Γ)→Tr u for k→ +∞, since the trace operator Tr iscompact from W^D(Ω) to L_2(Γ) (see Theorem <ref> point 4).Let k ∈, choosing v = u_k in Eq. (<ref>), we obtainu_k _W^D(Ω)^2=∫_Ω |∇ u_k|^2 = ∫_ΓψTr u_k_d - λ∫_Γ |Tr u_k|^2_d.Consequently, using Eq. (<ref>) with v = u, we havelim_k →∞∫_Ω |∇ u_k|^2 = ∫_ΓψTr u _d- λ∫_Γ |Tr u|^2_d = ∫_Ω |∇ u|^2= u _W^D(Ω)^2. Hence, u_k _W^D(Ω)→ u _W^D(Ω) for k→ +∞, andconsequently, B_λ is compact. The positive and the monotone property of B_λ follow respectively from Ref. <cit.> Proposition 3.5 and Proposition 3.7 a). The equality 0≤λ B_λ1_Γ≤1_Ω follows from Ref. <cit.> Proposition 3.6 and Corollary 3.8 b).Now, let us show that the truncated problem, studied in Subsection <ref>, independently of the form of the boundary S, is an approximation of the exterior problem in ^n with n≥ 3. We denote by Ω_S the exterior domain Ω, truncated by the boundary S. 
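Before setting up the functional framework for this comparison, a closed-form radially symmetric example (our own illustration, using only the unit-ball geometry, constant data ψ, and the convention that the outward normal of Ω on Γ points towards the origin) previews the announced approximation for Ω the exterior of the unit ball in ℝ^3: the radial harmonic functions are u(r) = a + b/r, and imposing either the Dirichlet condition at infinity or u = 0 on the truncating sphere S_R = {r = R} gives explicit solutions of the Robin problem.

```python
import numpy as np

# Exterior of the unit ball in R^3: Omega = {r > 1}, Gamma = {r = 1}.
# Radial harmonic functions are u(r) = a + b/r; on Gamma the outward normal
# of Omega points towards the origin, so d_nu u|_{r=1} = -u'(1) = b.

def u_exterior(r, psi, lam):
    # Dirichlet condition at infinity ((u)_inf = 0) forces a = 0; the Robin
    # condition lam*u(1) + d_nu u(1) = psi then gives b = psi / (1 + lam).
    return psi / ((1.0 + lam) * r)

def u_truncated(r, psi, lam, R):
    # u(R) = 0 forces a = -b/R; the Robin condition gives
    # b = psi / (1 + lam*(1 - 1/R)).
    b = psi / (1.0 + lam * (1.0 - 1.0 / R))
    return b * (1.0 / r - 1.0 / R)

psi, lam, r = 1.0, 1.0, 1.5
for R in (2.0, 4.0, 8.0, 16.0, 1000.0):
    print(f"R = {R:7.1f}   u_R({r}) = {u_truncated(r, psi, lam, R):.6f}")
print(f"exterior    u({r})  = {u_exterior(r, psi, lam):.6f}")
```

The truncated values increase monotonically with R and converge to the exterior value, which is exactly the behaviour established in general below (monotonicity of B_λ(S_r) and convergence of B_λ(S_m) to B_λ).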
In this framework, we also truncate <cit.> the space W^D(Ω), introducing a subspace

W^D_S(Ω) := { u ∈ W^D(Ω) : u|_ℝ^n ∖Ω_S = 0 },

which is closed and, thus, is a Hilbert space for the inner product (·,·)_W^D(Ω). Since

H^1_0(Ω_1) = { u|_Ω_1 : u ∈ H^1(ℝ^n), u|_ℝ^n ∖Ω_1 = 0},

we notice that the map Ψ: u ∈ W^D_S(Ω) ↦ u|_Ω_S∈H̃^1(Ω_S) is a bi-continuous bijection. Consequently, problem (<ref>) is also well-posed in W^D_S(Ω) with the same properties described in Theorem <ref>.

In what follows, we will also suppose that the boundary S is far enough from the boundary Γ. Precisely, we suppose that Ω_0⊂ B_r_0 is a domain (always satisfying the conditions of Theorem <ref>) included in a ball of radius r_0>0 (which exists since Ω_0 is bounded), and that Ω_S_r with r≥ r_0 is such that (ℝ^n∖Ω_0)∩ B_r ⊂Ω_S_r with B_r∩ S_r=∅. As r→ +∞ the boundaries S_r (for each r≥ r_0 the domains Ω_S_r satisfy the conditions of Theorem <ref>) move farther and farther from Γ, and in the limit r→ +∞ the domains Ω_S_r fill out Ω. Let us make precise the properties of the solutions u∈ W^D_S(Ω) of the truncated problem, in order to compare them with the solutions on the exterior domain:

Let Ω_0, Ω and Ω_S (or Ω_S_r for all r≥ r_0>0) satisfy the conditions of Theorem <ref> in ℝ^n with n≥ 3. Let B_λ(S): ψ∈ L_2(Γ)↦ u∈ W^D_S(Ω) be the operator for the truncated problem and B_λ: ψ∈ L_2(Γ)↦ u∈ W^D(Ω) be the operator for the exterior problem. Then for all λ∈ [0,∞[ and ψ∈ L_2(Γ), if ψ≥ 0 in L_2(Γ) and r_2≥ r_1≥ r_0, then

0≤ u_S_r_1=B_λ(S_r_1)ψ≤ u_S_r_2= B_λ(S_r_2)ψ≤ B_λψ=u.

Proof. The proof is analogous to those of Propositions 3.5 and 3.6 in Ref. <cit.> (see also <cit.> Proposition 4.4).

We can now state the approximation result, ensuring that a solution in any admissible truncated domain, even with a fractal boundary, which is sufficiently far from Γ, is an approximation of the solution of the exterior problem:

Let λ∈ [0, ∞[, ψ∈ L_2(Γ) and let (S_m)_m∈ℕ be a fixed sequence of boundaries of truncated domains Ω_S_m in ℝ^n (n≥ 3), satisfying for all m∈ℕ the conditions of Theorem <ref> and such that (Ω_S_m∪Ω_0)⊃ B_m ⊃Ω_0. Let u_S_m=B_λ(S_m)ψ and u= B_λψ. Then for all ε>0 there exists m_0(ε)>0, independent of the chosen sequence of boundaries (S_m), such that

∀ m≥ m_0 ‖u_S_m-u‖_W^D(Ω)<ε.

Equivalently, for all such sequences (S_m)_m∈ℕ, it holds that

‖B_λ(S_m) - B_λ‖_ℒ(L_2(Γ), W^D(Ω))→ 0 as m→ +∞.

Proof. It is a straightforward generalization, using our previous results, of Theorem 4.3 <cit.>.

§ SPECTRAL PROPERTIES OF THE POINCARÉ-STEKLOV OPERATOR DEFINED BY THE INTERIOR AND BY THE EXTERIOR PROBLEMS

The Poincaré-Steklov operator, also named the Dirichlet-to-Neumann operator, was originally introduced by V.A. Steklov and is usually defined as the map A: u|_Γ↦ ∂u/∂ν|_Γ for a solution u of the elliptic Dirichlet problem: -Δ u=0 in a domain Ω and u|_Γ=f (∂Ω=Γ). It is well-known that if Ω is a bounded domain with C^∞-regular boundary (a smooth manifold with boundary), then the operator A: C^∞(Γ)→ C^∞(Γ) is an elliptic self-adjoint pseudo-differential operator of the first order (see <cit.> Sections 11 and 12 of Chapter 7) with a discrete spectrum 0=λ_0<λ_1≤λ_2≤…, with λ_k→ +∞ as k→ +∞.
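For the unit disk in ℝ^2 this spectrum can be written down explicitly; the following standard computation (our own summary, consistent with the explicit interior and exterior ball spectra of Grebenkov cited in the proof of the main theorem below) also anticipates the equality of interior and exterior spectra proved later. The harmonic extension of e^{ikθ} to the disk is r^|k| e^{ikθ}, hence

A^int e^{ikθ} = ∂_r (r^|k| e^{ikθ})|_{r=1} = |k| e^{ikθ}, k∈ℤ,

so the interior eigenvalues are 0 = λ_0 < λ_1 = λ_2 = 1 < λ_3 = λ_4 = 2 < …. For the exterior of the disk the bounded harmonic extensions are r^{-|k|} e^{ikθ} (the constant function for k=0, giving again the eigenvalue 0), and since the outward normal of the exterior domain on {r=1} is -∂_r, the same eigenvalues |k| are obtained.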
If A is considered as an operator from H^1(Γ) to L_2(Γ), then its eigenfunctions form a basis in L_2(Γ). For any Lipschitz boundary Γ of a bounded domain Ω, the Dirichlet-to-Neumann operator A: H^1/2(Γ)→ H^-1/2(Γ) is well-defined and is a linear continuous self-adjoint operator. Thanks to <cit.>, we also know that the Dirichlet-to-Neumann operator A has a compact resolvent, and hence a discrete spectrum, as long as the trace operator Tr: H^1(Ω) → L_2(Γ) is compact (see also <cit.> and <cit.> for an abstract definition of elliptic operators on d-sets). Thus, thanks to Theorem <ref>, the compact-resolvent property also holds for an admissible n-set Ω with a compact d-set boundary Γ. We will discuss it in detail in the next section. From <cit.>, we also have that Ker A ≠ {0}, since 0 is an eigenvalue of the Neumann eigenvalue problem for the Laplacian.

For the Weyl asymptotic formulas for the distribution of the eigenvalues of the Dirichlet-to-Neumann operator, there are results for bounded smooth compact Riemannian manifolds with C^∞ boundaries <cit.>, for polygons <cit.>, for a more general class of plane domains <cit.>, and also for a bounded domain with a fractal boundary <cit.>. With the aim of relating these spectral results, obtained for the Dirichlet-to-Neumann operator of a bounded domain, to the case of the exterior domain, we prove the following theorem:

Let Ω_0⊂ℝ^n (n≥ 2) be an admissible bounded domain with a compact boundary Γ such that its complement in ℝ^n, Ω=ℝ^n∖Ω_0, is also an admissible domain with the same boundary Γ, satisfying the conditions of Theorem <ref>. Then the Dirichlet-to-Neumann operators A^int: L_2(Γ) → L_2(Γ), associated with the Laplacian on Ω_0, and A^ext: L_2(Γ) → L_2(Γ), associated with the Laplacian on Ω, are self-adjoint positive operators with compact resolvents and discrete positive spectra. Let us denote the sets of all eigenvalues of A^int and A^ext respectively by σ^int and σ^ext, which are subsets of ℝ^+. If Ω_S is an admissible truncated domain associated with Ω_0, then for all n≥2 the corresponding Dirichlet-to-Neumann operator A(S): L_2(Γ) → L_2(Γ) is a self-adjoint positive operator with a compact resolvent and a discrete spectrum. The point spectrum, i.e. the set of all eigenvalues of A(S), is strictly positive: σ_S⊂ℝ^+_* (in other words, A(S) is injective with a compact inverse operator A^-1(S)). In addition, let μ_k(r)∈σ_S(r), where σ_S(r)⊂ℝ^+_* is the point spectrum of the Dirichlet-to-Neumann operator A(S_r), associated with an admissible truncated domain Ω_S_r such that (Ω_S_r∪Ω_0)⊃ B_r ⊃Ω_0. For n=2, if μ_0(r)=min_k∈ℕ μ_k(r), then, independently of the form of S_r,

μ_0(r)→ 0 for r→ +∞.

For n≥ 3,

‖A^-1(S_r)-A^-1‖_ℒ(L_2(Γ))→ 0 for r→ +∞, independently of the form of S_r.

Moreover, every non-zero eigenvalue of A^int is also an eigenvalue of A^ext and conversely. Hence the eigenfunctions of A^int and A^ext form the same basis in L_2(Γ). More precisely, it holds that
* For n=2, σ^int=σ^ext⊂ℝ^+ and 0∈σ^ext.
* For n≥ 3, σ^int={0}∪σ^ext with σ^ext⊂ ]0,+∞[, i.e. the Dirichlet-to-Neumann operator of the exterior problem, as well as that of the truncated problem, is an injective operator with a compact inverse.

To prove Theorem <ref>, we need to define the Dirichlet-to-Neumann operator on a d-set Γ in L_2(Γ). Hence, we first do this in Section <ref> and then give the proof in Section <ref>.

§ POINCARÉ-STEKLOV OPERATOR ON A D-SET

§.§ For a bounded domain

Let Ω_0 be a bounded admissible domain with a d-set boundary Γ (n-2< d<n, n≥ 2).
Knowing the well-posedness results for the Dirichlet problem (Theorem 7 <cit.>) and the definition of the normal derivative by the Green formula (<ref>), thanks to <cit.>, we notice that the general setting of <cit.> p. 5904 for Lipschitz domains (see also <cit.> Theorem 4.10) still holds in thecase of a d-set boundary, by replacing H^1/2(Γ) by B^2,2_β(Γ) with β=1-n-d/2>0 and H^-1/2(Γ) by B^2,2_-β(Γ). Precisely, we have that for all λ∈ the Dirichlet problem-Δ u=λ u,u|_Γ=ϕ is solvable if ϕ∈ B^2,2_β(Γ)satisfies⟨_νψ,ϕ⟩_B^2,2_-β(Γ)× B^2,2_β(Γ)=0for all solutions ψ∈ H^1(Ω_0) of the corresponding homogeneous problem-Δψ=λψ, ψ|_Γ=0.We are especially interesting in the case λ=0. Thus, we directly conclude thatproblem (<ref>) has only the trivial solution ψ=0 (λ=0 is not an eigenvalue of the Dirichlet Laplacian), and consequently the Poincaré-Steklovoperator A:B^2,2_β(Γ)→ B^2,2_-β(Γ) mapping u|_Γ to _ν u|_Γ is well-defined on B^2,2_β(Γ).On the other hand, as it was done in <cit.> for bounded domains with (n-1)-dimensional boundaries, it is also possible to consider A as operator from L_2(Γ) to L_2(Γ), if we consider the trace map Tr: H^1(Ω_0)→ L_2(Γ) (note that B^2,2_β(Γ)⊂ L_2(Γ)) and update the definition of the normal derivative by analogy with Definition <ref>: Let u ∈ H^1(Ω_0) and Δ u ∈ L_2(Ω_0). If there exists ψ∈ L_2(Γ) such that for all v ∈ H^1(Ω_0) it holds Eq. (<ref>), then ψ is calledaL_2-normal derivative of u, denoted by ∂_ν u = ψ. Definition <ref> restricts the normal derivative of u, which is naturally in B^2,2_-β(Γ), to a consideration of only the normal derivative from its dense subspace. Thus, the L_2-normal derivative can does not exist, but if it exits, it is unique.Therefore, to define the Dirichlet-to-Neumann operator on L_2(Γ), we use the following Theorem from <cit.> (see Theorem 3.4)Let D(a) be a real vector space and let a : D(a) × D(a) → be bilinear symmetric such that a(u, u) ≥ 0 for all u ∈ D(a). Let H be a real Hilbert space and let T : D(a) → H be linear operator with dense image. Then there exists a positive and self-adjoint operator A on H such that for all ϕ, ψ∈ H, one has ϕ∈ D(A) and Aϕ = ψ if and only if there exists a sequence (u_k)_k ∈ in D(a) such that:* lim_k, m →∞ a(u_k - u_m,u_k - u_m) = 0,* lim_k →∞ T(u_k) = ϕ in H,* for all v ∈ D(a)lim_k →∞ a(u_k, v) = (ψ, T(v))_H.The operator A is called the operator associated with (a, T).Consequently we state Let Ω_0 be a bounded admissible domain with a compact d-set boundary Γ (n-2< d<n, n≥ 2). Then forβ=1-n-d/2>0 the Poincaré-Steklov operatorA:B^2,2_β(Γ)→ B^2,2_-β(Γ)mapping u|_Γ to _ν u|_Γ is linear boundedself-adjoint operator with Ker A{0}.In addition, the Poincaré-Steklov operator A, consideredfrom L_2(Γ) to L_2(Γ), is self-adjoint positive operator with a compact resolvent.Therefore, there exists a discrete spectrum of A with eigenvalues0=μ_0<μ_1≤μ_2≤…, with μ_k→ +∞k→ +∞and the corresponding eigenfunctions form an orthonormal basis in L_2(Γ). Proof. We have already noticed that the domain of A is exactly B^2,2_β(Γ). As 0 is an eigenvalue of the Neumann Laplacian, Ker A{0}. From the following Green formula for u, v∈ H^1(Ω_0) with Δ u, Δ v∈ L_2(Ω_0)∫_Ω_0Δ uv -∫_Ω_0 uΔ v=⟨ u/ν, Tr v ⟩_B^2,2_-β(Γ),B^2,2_β(Γ)- ⟨Tr u,v/ν⟩_B^2,2_β(Γ),B^2,2_-β(Γ),we directly find that for all u, v∈ B^2,2_β(Γ)⟨ A u,v ⟩_B^2,2_-β(Γ),B^2,2_β(Γ)= ⟨u, Av⟩_B^2,2_β(Γ),B^2,2_-β(Γ), i.e. the operator A is self-adjoint and closed. 
SinceB^2,2_-β(Γ) is a Banach space, by the closed graph Theorem, A is continuous.To define A as an operator on L_2(Γ) we use <cit.>. As Ω_0 is such that the trace operator Tr is compact from H^1(Ω_0) to L_2(Γ), then the embedding of its image Tr(H^1(Ω_0))=B^2,2_β(Γ) into L_2(Γ) is compact. Now, as it was noticed in <cit.>, the space {v|_Γ : v ∈𝒟( ^n)} is dense in C(Γ) by the Stone-Weierstrass theorem for the uniform normand, therefore, it is also dense in L_2(Γ), since we endowed Γ with the d-dimensional Hausdorff measure which is Borel regular. Hence, B^2,2_β(Γ) is dense in L_2(Γ).It allows us to apply Theorem 2.2 and follow Section 4.4 of Ref. <cit.>.Using the results of Section <ref>, we follow the proof of Wallin <cit.>, Theorem 3, to obtain that for all bounded admissible domains KerTr=H_0^1(Ω_0). Thanks to Lemma 2.2 <cit.>,H^1(Ω_0)=H^1_0(Ω_0)⊕ HwithH={u∈ H^1(Ω_0)| Δ u=0weakly}.Hence, Tr(H)=B^2,2_β(Γ) and Tr:H→ B^2,2_β(Γ) is a linear bijection.Therefore, the bilinear map a_0: B^2,2_β(Γ) × B^2,2_β(Γ) →, given by a_0(ϕ,ψ)=∫_Ω_0∇ u ∇ v foru,v ∈ H Tru=ϕ,Trv=ψ,is symmetric, continuous and elliptic <cit.> (see Proposition 3.3, based on the compactness of the embedding H⊂⊂ L_2(Ω_0) (as H is a closed subspace of H^1(Ω) and H^1(Ω)⊂⊂ L_2(Ω_0), this implies H⊂⊂ L_2(Ω_0))and on the injective property of the trace from H to L_2(Γ)):∃ω≥ 0 such that∀ u∈ Ha_0(Tru,Tru)+ω∫_Γ |u|^2_d≥1/2u^2_H^1(Ω_0).If the operator N: L_2(Γ)→ L_2(Γ) is the operatorassociated with a_0, then it is the Dirichlet-to-Neumann operator A on L_2(Γ), i.e. Aϕ=_ν u in L_2(Γ) with 𝒟(A)={ϕ∈ L_2(Γ)| ∃ u∈ H^1(Ω_0)such thatTru=ϕ, Δ u=0and ∃_ν u ∈ L_2(Γ)}. Moreover, we havethatfor all ϕ∈ L_2(Γ), ϕ∈ D(A)and there existsan element ψ=Aϕ ofL_2(Γ)⟺ ∃ u∈ H^1(Ω_0) such that Tru=ϕ and ∀ v∈ H^1(Ω_0)∫_Ω_0∇ u∇ v=∫_ΓψTrv_d.On the other hand, we also can directly use Theorem 3.3 in Ref. <cit.>, by applying Theorem <ref>.Let now D(a)=H^1(Ω_0)∩ C(Ω_0), which is dense in H^1(Ω_0) (see the discussion of Ref. <cit.>). Then Tr(D(a)) is dense in L_2(Γ).Therefore, taking in Theorem <ref>a(u,v)=∫_Ω_0∇ u∇ v: D(a)× D(a)→, H=L_2(Γ)andT=Tr: D(a)→ L_2(Γ),as Tr is compact, we conclude that the operator associated to (a,Tr) is the Dirichlet-to-Neumann operator A, positive and self-adjoint in L_2(Γ) (see the proof of Theorem 3.3 in Ref. <cit.>).Since the compactness of the trace implies that A has a compact resolvent,it is sufficient to apply the Hilbert-Schmidt Theorem to finish the proof.§.§ For an exterior and truncated domains In this subsection we generalize <cit.> and introduce the Dirichlet-to-Neumann operator A on L_2(Γ) with respect to the exterior domain Ω⊂^n and A(S) with respect to a truncated domain for n≥ 2 in the framework of d-sets.(Dirichlet-to-Neumann operator for an exterior domain n≥ 3)Let Ω⊂^n, n≥ 3, be anadmissible exterior domain, satisfying the conditions of Theorem <ref>. The operator A: L_2(Γ)→ L_2(Γ),associated with the bilinear form a^D : W^D(Ω) × W^D(Ω) → given bya^D(u, v) = ∫_Ω∇ u ∇ v=⟨ u,v⟩_W^D(Ω),and the trace operator Tr: W^D(Ω)→ L_2(Γ), is called the Dirichlet-to-Neumann operator with the Dirichlet boundary condition at infinity. Theorem <ref> does not require to D(a) the completeness, i.e. a(·,·) can be equivalent to a semi-norm on D(a), what is the case of W^D(Ω) with a(u,u)=∫_Ω |∇ u|^2 for n=2. Therefore, it allows us to define the Dirichlet-to-Neumann operator A of the exterior problem in ^2, which can be understood as the limit case for r→ +∞ of the problem for a truncated domainwell-posed in H̃^1(Ω_S_r). 
In the case of W^D(Ω) in ^n with n≥ 3, we have that D(a)=W^D(Ω) is the Hilbert space corresponding to the inner product a(·,·).Let us notice that the trace on the boundary Γ satisfiesTr(𝒟(^n))⊂Tr(W^D(Ω))⊂ L_2(Γ)and, since Tr(𝒟(^n)) is dense in L_2(Γ), Tr(W^D(Ω)) is dense in L_2(Γ).In addition, a^D is Tr-elliptic thanks to Point 2 of Theorem <ref>,i.e. there exists α∈ and δ>0 such that∀ u∈ W^D(Ω)∫_Ω |∇ u|^2+α∫_Γ |Tru|^2_d≥δ∫_Ω |∇ u|^2.Thus, for n≥ 3 we can also apply Theorem 2.2 and follow Section 4.4 of Ref. <cit.>.For the two-dimensional case, we define A associated to the bilinear form a_0 from Eq. (<ref>), initially given for the interior case: (Dirichlet-to-Neumann operator for an exterior domain n=2)Let Ω⊂^2 be an admissible exterior domain, satisfying the conditions of Theorem <ref>. The operator A: L_2(Γ)→ L_2(Γ),associated with the bilinear form a_0, defined in Eq. (<ref>), is the Dirichlet-to-Neumann operator with the Dirichlet boundary condition at infinity in the sense that for all ϕ∈ L_2(Γ),ϕ∈ D(A) and there existsan element ψ=Aϕ∈L_2(Γ) ⟺ ∃ u∈ H^1(Ω)such that Tru=ϕ and ∀ v∈ H^1(Ω)∫_Ω∇ u∇ v=∫_ΓψTrv_d. Therefore, the properties of A are the same as for the bounded domain case in Theorem <ref>: the Poincaré-Steklov operator A is self-adjoint positive operator with a compact resolvent, anda discrete spectrum containing positive eigenvalues0=μ_0<μ_1≤μ_2≤…, with μ_k→ +∞fork→ +∞.The corresponding eigenfunctions form an orthonormal basis in L_2(Γ). Proof.We usethat H^1(^2)=H^1_0(^2) and that the compactness of the embedding H={u∈ H^1(Ω)| Δ u=0 weakly}⊂ L_2(Ω) and the injective property of the trace from H to L_2(Γ) still hold for the exterior case. In addition 0 is not an eigenvalue of the Dirichlet Laplacian on Ω. Thus we can follow the proof of Lemma 3.2 and Proposition 3.3in <cit.>, given for a Lipschitz bounded domain. The spectral properties of A are deduced from the analogous properties proved in Theorem <ref>.The following proposition legitimates Definition <ref> in the framework of Theorem <ref> for n≥ 3: Let Ω⊂^n, n≥ 3, be an admissible exterior domain, satisfying the conditions of Theorem <ref>, andlet ϕ, ψ∈ L_2(Γ). Then ϕ∈ D(A) and Aϕ = ψ if and only if there exists a function u ∈ W^D(Ω) such thatTr u = ϕ, Δ u = 0 weakly and ∂_ν u = ψ in the sense of Definition <ref>.Proof.Let ϕ, ψ∈ L_2(Γ) such that ϕ∈ D(A) and Aϕ = ψ. Then, according to Theorem <ref>, there exists a sequence (u_k)_k ∈ in W^D(Ω) such that * lim_k, m →∞∫_Ω |∇ (u_k - u_k)|^2 = 0,* lim_k →∞Tr u_k = ϕ,* lim_k →∞∫_Ω∇ u_k ∇ v = ∫_ΓψTrv_d for all v ∈ W^D(Ω).Form Item 1 it followsthat (u_k)_k ∈ is a Cauchy sequence in W^D(Ω). Therefore, bythe completeness ofW^D(Ω) (thanks to n≥ 3),there exists u ∈ W^D(Ω) such that u_k → u in W^D(Ω). Moreover, since Tr: W^D(Ω)→ L_2(Γ)is continuous by Point 4 of Theorem <ref>, Tr u = ϕ, according to Item 2. From Item 3 we deduce that for all v ∈ W^D(Ω)∫_Ω∇ u ∇ v = ∫_ΓψTrv_d, and hence, in particular for all v ∈𝒟(Ω). Therefore Δ u = 0. This with Eq. (<ref>) yields that u has a normal derivative in L_2(Γ) and ∂_ν u = ψ.Conversely, let ϕ, ψ∈ L_2(Γ)be such that there exists a function u ∈ W^D(Ω), so thatTr u = ϕ, Δ u = 0, ∂_ν u = ψ. According to the definition of normal derivatives (see Definition <ref> and Remark <ref>), sinceΔ u = 0, we have for all v ∈ W^D(Ω):∫_Ω∇ u ∇ v = ∫_ΓψTrv_d. 
Therefore, for n≥ 3 we can applyTheorem <ref> to the sequence, defined by u_k = u for all k ∈, and the result follows.Let us notice that the Dirichlet-to-Neumann operator A(S) for a domain, truncated by a d_S-set S (n-1≤ d_S<n), can be defined absolutely in the same way as the operator A for the exterior domains if we replace W^D(Ω) by H̃^1(Ω_S) or, equivalently, by W^D_S(Ω).Consequently, forexterior and truncated domains we have Let Ω be an admissible exterior domain in ^n with n≥ 3and Ω_S be an admissible truncated domain in ^n with n≥ 2 satisfying conditions of Theorem <ref> and λ∈ [0, ∞[. Then the Dirichlet-to-Neumann operator with the Dirichlet boundary condition at infinity A: L_2(Γ)→ L_2(Γ) (see Definition <ref>)and the Dirichlet-to-Neumann operator A(S) of the truncated domain are positive self-adjoint operatorswith a compact resolvent∀λ∈[0,+∞[ (λ I + A)^-1= Tr_Γ∘ B_λ,(λ I + A(S))^-1= Tr_Γ∘ B_λ(S)where B_λ:ψ∈ L_2(Γ)↦ u∈W^D(Ω) with u, the solution of Eq. (<ref>), and B_λ(S):ψ∈ L_2(Γ)↦ u∈W^D_S(Ω) with u, the solution of Eq. (<ref>) are defined in Theorem <ref> and Theorem <ref> respectively. Moreover, Ker A= Ker A(S)= {0} and for n≥ 3, independently on a d-set S_r ((^n∖Ω_0)∩ B_r ⊂Ω_S_r with B_r∩ S_r=∅),∀λ∈[0,+∞[ (λ I+A(S_r))^-1-(λ I+A)^-1_ℒ(L_2(Γ))→ 0asr→ +∞. Therefore, the spectra of A (n≥ 3) and A(S) (n≥ 2) are discrete with all eigenvalues (μ_k)_k∈ (precisely, (μ_k(A))_k∈ of A and (μ_k(A(S)))_k∈ of A(S)) strictly positive 0<μ_0<μ_1≤μ_2≤…, with μ_k→ +∞for k→ +∞,and the corresponding eigenfunctions formorthonormal basis of L_2(Γ). Proof.The compactness of the resolvents (λ I + A)^-1 and (λ I + A(S))^-1 directly follows from the compactness properties of the operators Tr_Γ, B_λ, B_λ(S). Using the previous results and the Hilbert-Schmidt Theorem for self-adjoint compact operators on a Hilbert space, we finish the proof. § PROOF OF THEOREM <REF> AND FINAL REMARKS Now, we can prove Theorem <ref>: Proof. Actually, Theorems <ref>, <ref> and <ref> implies that the operators A^int, A^ext and A(S_r) have compacts resolvents and discrete positive spectra. As previously, by σ^int, σ^ext and σ_S(r) are denotedthe sets of all eigenvalues of A^int, A^ext and A(S_r) respectively. With these notations, for all n≥ 2 the point 0∉σ_S(r) (by Theorem <ref>), but 0∈σ^int (by Theorem <ref>). Thanks to Theorem <ref>, for n=2 the point 0∈σ^ext,and,thanks to Theorem <ref>, for n≥ 3 the point 0∉σ^ext.The approximation result for the resolvents of the exterior and truncated domains in Theorems <ref> (for λ=0)gives Eqs. (<ref>).Thus, we need to prove that all non-zero eigenvalues of A^intare also eigenvalues of A^ext and converse.Grebenkov <cit.> (pp. 129-132 and 134) have shown it by theexplicit calculus of the interior and exterior spectra of the Dirichlet-to-Neumann operatorsfor a ball.If Γ is regular, it is sufficient to apply a conform map to project Γ to a sphere and, hence, to obtain the same result (for the conformal map technics see <cit.> the proof of Theorem 1.4, but also <cit.> and <cit.>).For the general case of a d-set Γ, it is more natural to usegiven in the previous Section definitions of the Dirichlet-to-Neumann operators.Let n≥ 3. If μ>0 is an eigenvalue of A^ext, corresponding to an eigenfunction ϕ∈ L_2(Γ), then, according to Proposition <ref>,ϕ∈ D(A) andAϕ =μϕ if and only if ∃ u ∈ W^D(Ω)such that Tr u = ϕ,Δ u = 0 and ∂_ν u = μϕ, i.e. ∀ v∈𝒟(^n) ∫_Ω∇ u∇ v=∫_ΓμϕTrv_d. 
The trace on v on Γ can be also considered for a function v∈ H^1(Ω_0), and, by the same way, ϕ∈ L_2(Γ) can be also interpreted as the trace of w∈ H^1(Ω_0). Thus,μϕ∈ L_2(Γ) is a normal derivative of w∈ H^1(Ω_0) if and only if∀ v∈ H^1(Ω_0) ∫_Ω_0∇ w∇ v=∫_ΓμϕTrv_dand Δ w=0weakly in Ω_0.Thus, by the definition of A^int, ϕ∈ D(A^int) and μ∈σ^int.More precisely, we use the facts thatTr_ext(W^D(Ω))=Tr_int(H^1(Ω_0))=B^2,2_β(Γ),and thus, the extensionsE_ext: ϕ∈ B^2,2_β(Γ) ↦ u∈ W^D(Ω) andE_int: ϕ∈ B^2,2_β(Γ) ↦ w∈ H^1(Ω_0)are linear bounded operators. Consequently, μ>0 is an eigenvalue of a Dirichlet-to-Neumann operator with an eigenfunction ψ∈ L_2(Γ) if and only ifμϕ is a normal derivative on Γ of u∈ W^D(Ω) or of w∈ H^1(Ω_0), if and only if Tr_ext u=ϕ with Δ u=0 weakly on Ω and Tr_intw=ϕ with Δ w=0 weakly on Ω_0, by the uniqueness of the trace and of the normal derivative on Γ.Hence, if μ 0,μ∈σ^int⟺μ∈σ^ext. For n=2, A^int and A^ext are defined in the same way (by (a_0(·,·),Tr)), and hence, as in the case n≥ 3, the statement σ^int=σ^ext is also a direct corollary of the definitions of the Dirichlet-to-Neumann operators with the continuous extension operators and surjective trace operators mapping to their images. Now, let us prove Eq. (<ref>). Formula (<ref>) was explicitly proved by Grebenkov <cit.> for an annulus p.130. See also <cit.>. Therefore, it also holds, by a conformal mapping, for domains with regular boundaries. Let us prove it in the general case. Indeed, since for n=2 we have 0∈σ^int=σ^ext, and0∉σ_S(r).Moreover, since H̃^1(Ω_S_r) ⊂ H^1(Ω), the functions u_r∈H̃^1(Ω_S_r) can be considered as elements of H^1(Ω), if outside of S_r we put themequal to zero. Thus, if μ(r)>0 is an eigenvalue of A(S)(r) in Ω_S_r, corresponding to an eigenfunction ϕ∈ L_2(Γ), thenfor u_r∈ H^1(Ω),the solution of the Dirichlet Laplacian on Ω_S_r, and u∈ H^1(Ω), the solution of the Laplacian with Dirichlet boundary conditions at the infinity (see Remark <ref> and Theorem <ref>), we have∀ v ∈ H^1(^n)∫_Γμ(r) ϕ_r Trv_d=∫_Ω_S_r∇ u_r∇ v →∫_Ω∇ u∇ v =∫_ΓμϕTrv_d forr→ +∞.This means that one of the eigenvalues inthe spectrum σ_S(r) necessarilyconverges towards zero.Let us also notice that for the convergence of the series (<ref>) on the truncated or the exterior domain, we need to have 1_Γ∈𝒟(A). For a Lipschitz boundary Γ it was proven in Proposition 5.7 of Ref. <cit.>. In this framework we state more generallyLet Ω be an admissibleexterior domain of ^n (n≥ 3) with a compact d-set boundary Γ, n-2< d_Γ<n and let Ω_S be its admissible truncated domain with n-2< d_S<n. Then∀ψ∈ L_2(Γ)∃ϕ=Aψ∈ L_2(Γ),which also holds for the admissible truncated domains of ^2.If Ω is an admissible exterior domain in ^2 or an admissible domain, bounded by the boundary Γ (n≥ 2), then∀ψ∈ B^2,2_d/2(Γ)∃ϕ=Aψ∈ L_2(Γ). Proof.Eq. (<ref>)is a corollary of the fact that the operator A: L_2(Γ)→ L_2(Γ), considered on Ω (for n≥ 3) and Ω_S(for n≥ 2) respectively, is invertible with a compact inverse operator A^-1 (since λ=0 is a regular point by Theorem <ref>). For instance, for the exterior case with n≥ 3, 1_Γ∈ L_2(Γ), thus, for λ=0, B_01_Γ∈ W^D(Ω), by the well-posedness of the Robin Laplacian exterior problem, and hence, Tr(B_01_Γ)=A^-11_Γ∈ L_2(Γ).If Ω is an exterior domain in ^2 or a bounded domain (see the interior case in Subsection <ref>), then for all u∈ H^1(Ω), such that Δ u=0 weakly, there exists unique Tr u∈ B^2,2_β(Γ)⊂ L_2(Γ) with β=d/2 (see Theorem <ref>), thus for all ψ∈ B^2,2_β(Γ) there exists ϕ=Aψ∈ L_2(Γ), as it is stated in Eq. (<ref>). 
Consequently, as 1_Γ∈ B^2,2_β(Γ), we have 1_Γ∈𝒟(A).

§ ACKNOWLEDGMENTS

We would like to thank Claude Bardos for useful discussions about the subject and Denis Grebenkov for pointing out the physical meaning of the problem.

§ REFERENCES

[ADAMS-1975] R. A. Adams, Sobolev Spaces, Academic Press, New York, 1975.
[ALLAIRE-2012] G. Allaire, Analyse numérique et optimisation, École Polytechnique, 2012.
[ARENDT-2012-1] W. Arendt and A. F. M. ter Elst, Sectorial forms and degenerate differential operators, J. Operator Theory, 67 (2012), pp. 33–72.
[ARENDT-2007] W. Arendt and R. Mazzeo, Spectral properties of the Dirichlet-to-Neumann operator on Lipschitz domains, Ulmer Seminare, 12 (2007), pp. 28–38.
[ARENDT-2012] W. Arendt and R. Mazzeo, Friedlander's eigenvalue inequalities and the Dirichlet-to-Neumann semigroup, Communications on Pure and Applied Analysis, 11 (2012), pp. 2201–2212.
[ARENDT-2011] W. Arendt and A. ter Elst, The Dirichlet-to-Neumann operator on rough domains, Journal of Differential Equations, 251 (2011), pp. 2100–2124.
[ARENDT-2015] W. Arendt and A. F. M. ter Elst, The Dirichlet-to-Neumann operator on exterior domains, Potential Analysis, 43 (2015), pp. 313–340.
[BANJAI-2007] L. Banjai, Eigenfrequencies of fractal drums, J. of Comp. and Appl. Math., 198 (2007), pp. 1–18.
[BEHRNDT-2015] J. Behrndt and A. ter Elst, Dirichlet-to-Neumann maps on bounded Lipschitz domains, Journal of Differential Equations, 259 (2015), pp. 5903–5926.
[BODIN-2005] M. Bodin, Characterisations of function spaces on fractals, PhD thesis, 2005.
[BOS-1995] L. P. Bos and P. D. Milman, Sobolev-Gagliardo-Nirenberg and Markov type inequalities on subanalytic domains, Geometric and Functional Analysis, 5 (1995), pp. 853–923.
[CALDERON-1961] A.-P. Calderon, Lebesgue spaces of differentiable functions and distributions, Proc. Symp. Pure Math., 4 (1961), pp. 33–49.
[CAPITANELLI-2007] R. Capitanelli, Mixed Dirichlet-Robin problems in irregular domains, Comm. to SIMAI Congress, 2 (2007).
[CAPITANELLI-2010] R. Capitanelli, Asymptotics for mixed Dirichlet-Robin problems in irregular domains, Journal of Mathematical Analysis and Applications, 362 (2010), pp. 450–459.
[EVANS-1994] L. C. Evans, Partial Differential Equations, Graduate Studies in Mathematics, 1994.
[FILOCHE-2008] M. Filoche and D. S. Grebenkov, The toposcopy, a new tool to probe the geometry of an irregular interface by measuring its transfer impedance, Europhys. Lett., 81 (2008), p. 40008.
[GIROUARD-2015] A. Girouard, R. S. Laugesen, and B. A. Siudeja, Steklov eigenvalues and quasiconformal maps of simply connected planar domains, Archive for Rational Mechanics and Analysis, 219 (2015), pp. 903–936.
[GIROUARD-2014] A. Girouard, L. Parnovski, I. Polterovich, and D. A. Sher, The Steklov spectrum of surfaces: asymptotics and invariants, Mathematical Proceedings of the Cambridge Philosophical Society, 157 (2014), pp. 379–389.
[ARXIV-GIROUARD-2014] A. Girouard and I. Polterovich, Spectral geometry of the Steklov problem, preprint, arXiv:1411.6567.
[GREBENKOV-2004] D. S. Grebenkov, Transport Laplacien aux interfaces irrégulières : étude théorique, numérique et expérimentale, PhD thesis, 2004.
[GREBENKOV-2006] D. S. Grebenkov, M. Filoche, and B. Sapoval, Mathematical basis for a general theory of Laplacian transport towards irregular interfaces, Phys. Rev. E, 73 (2006), p. 021103.
[GREBENKOV-2007] D. S. Grebenkov, M. Filoche, and B. Sapoval, A simplified analytical model for Laplacian transfer across deterministic prefractal interfaces, Fractals, 15 (2007), pp. 27–39.
[HAJLASZ-2008] P. Hajłasz, P. Koskela, and H. Tuominen, Sobolev embeddings, extensions and measure density condition, Journal of Functional Analysis, 254 (2008), pp. 1217–1234.
[HERRON-1991] D. A. Herron and P. Koskela, Uniform, Sobolev extension and quasiconformal circle domains, J. Anal. Math., 57 (1991), pp. 172–202.
[ARXIV-IHNATSYEVA-2011] L. Ihnatsyeva and A. V. Vähäkangas, Characterization of traces of smooth functions on Ahlfors regular sets, preprint, 2011.
[JONES-1981] P. W. Jones, Quasiconformal mappings and extendability of functions in Sobolev spaces, Acta Mathematica, 147 (1981), pp. 71–88.
[JONSSON-1984-1] A. Jonsson, P. Sjögren, and H. Wallin, Hardy and Lipschitz spaces on subsets of ℝ^n, Studia Math., 80 (1984), pp. 141–166.
[JONSSON-1984] A. Jonsson and H. Wallin, Function Spaces on Subsets of ℝ^n, Math. Reports 2, Part 1, Harwood Acad. Publ., London, 1984.
[JONSSON-1995] A. Jonsson and H. Wallin, The dual of Besov spaces on fractals, Studia Mathematica, 112 (1995), pp. 285–300.
[JONSSON-1997] A. Jonsson and H. Wallin, Boundary value problems and Brownian motion on fractals, Chaos, Solitons & Fractals, 8 (1997), pp. 191–205.
[LANCIA-2002] M. R. Lancia, A transmission problem with a fractal interface, Zeitschrift für Analysis und ihre Anwendungen, 21 (2002), pp. 113–133.
[LIONS-1972] J. Lions and E. Magenes, Non-Homogeneous Boundary Value Problems and Applications, Vol. 1, Springer-Verlag, Berlin, 1972.
[LU-2005] G. Lu and B. Ou, A Poincaré inequality on ℝ^n and its application to potential fluid flows in space, Comm. Appl. Nonlinear Anal., 12 (2005), pp. 1–24.
[MARSCHALL-1987] J. Marschall, The trace of Sobolev-Slobodeckij spaces on Lipschitz domains, Manuscripta Math., 58 (1987), pp. 47–65.
[MARTIN-1989] M. Martin and M. Putinar, Lectures on Hyponormal Operators, Vol. 39, Birkhäuser, Basel, 1989.
[MARTIO-1979] O. Martio and J. Sarvas, Injectivity theorems in plane and space, Annales Academiae Scientiarum Fennicae Series A I Mathematica, 4 (1979), pp. 383–401.
[MASLENNIKOVA-1997] V. N. Maslennikova, Partial Differential Equations, (in Russian), Peoples' Friendship University of Russia, Moscow, 1997.
[MCLEAN-2000] W. McLean, Strongly Elliptic Systems and Boundary Integral Equations, Cambridge University Press, 2000.
[PINASCO-2005] J. P. Pinasco and J. D. Rossi, Asymptotics of the spectral function for the Steklov problem in a family of sets with fractal boundaries, Appl. Maths. E-Notes, 5 (2005), pp. 138–146.
[SHVARTSMAN-2010] P. Shvartsman, On the boundary values of Sobolev W^1_p-functions, Adv. in Maths., 225 (2010), pp. 2162–2221.
[STEIN-1970] E. M. Stein, Singular Integrals and Differentiability Properties of Functions, Princeton University Press, 1970.
[TAYLOR-1996] M. Taylor, Partial Differential Equations II, Appl. Math. Sci., Vol. 116, Springer-Verlag, New York, 1996.
[TRIEBEL-1997] H. Triebel, Fractals and Spectra. Related to Fourier Analysis and Function Spaces, Birkhäuser, 1997.
[WALLIN-1991] H. Wallin, The trace to the boundary of Sobolev spaces on a snowflake, Manuscripta Math., 73 (1991), pp. 117–125.
[WINGREN-1988] P. Wingren, Lipschitz spaces and interpolating polynomials on subsets of Euclidean space, in: Function Spaces and Applications, Springer Science + Business Media, 1988, pp. 424–435.
"authors": [
"Kevin Arfi",
"Anna Rozanova-Pierrat"
],
"categories": [
"math.FA",
"math.AP"
],
"primary_category": "math.FA",
"published": "20170526104549",
"title": "Dirichlet-to-Neumann or Poincaré-Steklov operator on fractals described by d -sets"
} |
GSplit LBI: Taming the Procedural Bias in Neuroimaging for Disease Prediction

Xinwei Sun^1, Lingjing Hu^2, Yuan Yao^3, Yizhou Wang^4

^1 School of Mathematical Science, Peking University, Beijing, 100871, China
^2 Yanjing Medical College, Capital Medical University, Beijing, 101300, China
^3 Hong Kong University of Science and Technology and Peking University, China
^4 National Engineering Laboratory for Video Technology, Key Laboratory of Machine Perception, School of EECS, Peking University, Beijing, 100871, China

In voxel-based neuroimage analysis, lesion features have been the main focus in disease prediction due to their interpretability with respect to the related diseases. However, we observe that there exists another type of features introduced during the preprocessing steps, which we call "Procedural Bias". Moreover, such bias can be leveraged to improve classification accuracy. Nevertheless, most existing models suffer either from under-fitting, by not considering the procedural bias, or from poor interpretability, by not differentiating such bias from lesion features. In this paper, a novel dual-task algorithm named GSplit LBI is proposed to resolve this problem. By introducing an augmented variable enforced to have structural sparsity through a variable splitting term, the estimators for prediction and for selecting lesion features can be optimized separately and mutually monitored by each other following an iterative scheme. Empirical experiments are conducted on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The advantage of the proposed model is verified by improved stability of the selected lesion features and better classification results.

§ INTRODUCTION

Usually, the first step of voxel-based neuroimage analysis is to preprocess the T_1-weighted image, e.g. segmentation and registration of grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF). However, some systematic biases, due to scanner differences, different populations, etc., can be introduced in this pipeline <cit.>. Part of them can be helpful for the discrimination of subjects from normal controls (NC), but may not be directly related to the disease. For example, in structural Magnetic Resonance Imaging (sMRI) images of subjects with Alzheimer's Disease (AD), after spatial normalization during the simultaneous registration of GM, WM and CSF, the GM voxels surrounding the lateral ventricle and subarachnoid space etc. may be mistakenly enlarged, caused by the enlargement of the CSF space at those locations <cit.> compared to the normal template, as shown in Fig. <ref>. Although these voxels/features are highly correlated with the disease, they cannot be regarded as lesion features in an interpretable model. In this paper we refer to them as "Procedural Bias", which should be identified but has been neglected in the literature. We observe that it can be harnessed in voxel-based image analysis to improve the prediction of disease.

Together with the procedural bias, the lesion features are vital for the prediction and lesion-region analysis tasks, which are commonly addressed by two types of regularization models. Specifically, one kind of model, such as general losses with an l_2 penalty, elastic net <cit.> and graphnet <cit.>, selects strongly correlated features to minimize classification error.
However, such models don't differentiate features either introduced by disease or procedural bias and may also introduce redundant features. Hence, the interpretability of such models are poor and the models are prone to over-fit. The other kind of models with sparsity enforcement such as TV-L_1(Combination of Total Variation <cit.> and L_1) and particularly n^2 GFL <cit.> enforce strong prior of disease on the parameters of the models introduced in order to capture the lesion features. Although such features are disease-relevant and the selection is stable, the models ignore the inevitable procedural bias, hence, they are losing some prediction power.To incorporate both tasks of prediction and selection of lesion features, we propose an iterative dual-task algorithm namely Generalized Split LBI(GSplit LBI) which can have better model selection consistency than generalized lasso <cit.>. Specifically, by the introduction of variable splitting term inspired by Split LBI <cit.>, two estimators are introduced and split apart. One estimator is for prediction and the other is for selecting lesion features, both of which can be pursued separately with a gap control. Following an iterative scheme, they will be mutually monitored by each other: the estimator for selecting lesion features is gradually monitored to pursue stable lesion features; on the other hand, the estimator for prediction is also monitored to exploit both the procedural bias and lesion features to improve prediction. To show the validity of the proposed method, we successfully apply our model to voxel-based sMRI analysis for AD, which is challenging and attracts increasing attention. § METHOD§.§ GSplit LBI AlgorithmOur dataset consists of N samples {x_i,y_i}_1^N where x_i∈ℝ^p collects the i^th neuroimaging data with p voxels and y_i = {± 1} indicates the disease status (-1 for Alzheimer's disease in this paper).X ∈ℝ^N × p and y ∈ℝ^p are concatenations of {x_i}_i and {y_i}_i. Consider a general linear model to predict the disease status (with the intercept parameter β_0∈ℝ),log P( y_i=1| x_i) - log P(y_i=-1|x_i) =x_i^T β_pre + β_0. A desired estimator β_pre∈ℝ^p should not only fit the data by maximizing the log-likelihood in logistic regression, but also satisfy the following types of structural sparsity: (1) the number of voxels involved in the disease prediction is small, so β_pre is sparse; (2) the voxel activities should be geometrically clustered or 3D-smooth, suggesting a TV-type sparsity on D_G β_pre where D_G is a graph difference operator[Here D_G:ℝ^V →ℝ^E denotes a graph difference operator on G=(V,E), where V is the node set of voxels, E is the edge set of voxel pairs in neighbour (e.g. 3-by-3-by-3), such that D_G(β)(i,j):=β(i)-β(j).]; (3) the degenerate GM voxels in AD are captured by nonnegative component in β_pre. However, the existing procedural bias may violate these a priori sparsity properties, esp. the third one, yet increase the prediction power. To overcome this issue, we adopt a variable splitting idea in <cit.> by introducing an auxiliary variable γ∈ℝ^|V|+|E| to achieve these sparsity requirements separately, while controlling the gap from Dβ_pre with penalty S_ρ(β_pre,γ) := ‖ Dβ_pre - γ‖_2^2 := ‖β_pre - γ_V‖_2^2+ ‖ρ D_Gβ_pre - γ_G‖_2^2 with γ = [ [ γ_V^T γ_G^T ]]^T and D = [ [ I ρ D_G^T ]]^T. Here ρ controls the trade-off between different types of sparsity. Our purpose is thus of two-folds: (1) use β_pre for prediction; (2) enforce sparsity on γ. Such a dual-task scheme can be illustrated by Fig. 
<ref>.To implement it, we generalize the Split Linearized Bregman Iteration (Split LBI) algorithm in <cit.> to our setting with generalized linear models (GLM) and the three types of structural sparsity above, hence called Generalized Split LBI (or GSplit LBI). Algorithm <ref> describes the procedure with a new loss:ℓ(β_0,β_pre,γ;{x_i,y_i}_1^N,ν) := ℓ(β_0,β_pre;{x_i,y_i}_1^N) + 1/2ν S_ρ(β_pre,γ),where ℓ(β_pre;{x_i,y_i}_1^N) is the negative log-likelihood function for GLM and ν>0 tunes the strength of gap control. The algorithm returns a sequence of estimates as a regularization path, {β_0^k, β_pre^k,γ^k,β_les^k}_k≥ 0. In particular, γ^k shows a variety of sparsity levels and β_pre^k is generically dense with different prediction powers. The projection of β_pre^k onto the subspace with the same support of γ^k gives estimate β_les^k, satisfying those a priori sparsity properties (sparse, 3D-smooth, nonnegative) and hence being regarded as the interpretable lesion features for AD. The remainder of this projection is heavily influenced by procedural bias; in this paper the non-zero elements in β_pre^k which are negative (-1 denotes disease label) with comparably large magnitude are identified as procedural bias, while others with tiny values can be treated as nuisance or weak features. In summary, β_les only selects lesion features; while β_pre also captures additional procedural bias. Hence, such two kinds of features can be differentiated, as illustrated in Fig. <ref>. §.§ Setting the Parameters A stopping time at t^k (line 10) is the regularization parameter, which can be determined via cross-validation to minimize the prediction error <cit.>. Parameter ρ is a tradeoff between geometric clustering and voxel sparsity. Parameter κ, α is damping factor and step size, which should satisfy κα≤ν / κ(1 + νΛ_H + Λ_D^2) to ensure the stability of iterations. Here Λ_(·) denotes the largest singular value of a matrix and H denotes the Hessian matrix of ℓ(β_0,β_pre;{x_i,y_i}_1^N).Parameter ν balances the prediction task and sparsity enforcement in feature selection. In this paper, it is task-dependent, as shown in Fig. <ref>. For prediction of disease, β_pre with appropriately larger value of ν may increase the prediction power by harnessing both lesion features and procedural bias. For lesion features analysis, β_les with a small value of ν is helpful to enhance stability of feature selection. For details please refer to supplementary information. § EXPERIMENTAL RESULTSWe apply our model to AD/NC classification (namely ADNC) and MCI (Mild Cognitive Impairment)/NC (namely MCINC) classification, which are two fundamental challenges in diagnosis of AD. The data are obtained from ADNI[http://adni.loni.ucla.edu] database, which is split into 1.5T and 3.0T (namely 15 and 30) MRI scan magnetic field strength datasets. The 15 dataset contains 64 AD, 208 MCI and 90 NC; while the 30 dataset contains 66 AD and 110 NC. DARTEL VBM pipeline <cit.> is then implemented to preprocess the data. Finally, the input features consist of 2,527 8×8×8 mm^3 size voxels with average values in GM population template greater than 0.1. Experiments are designed on 15ADNC, 30ADNC and 15MCINC tasks.§.§ Prediction and Path Analysis 10-fold cross-validation is adopted for classification evaluation. Under exactly the same experimental setup, comparison is made between GSplit LBI and other classifiers: SVM, MLDA (univariate model via t-test + LDA) <cit.>, Graphnet <cit.>, Lasso <cit.>, Elastic Net, TV+L_1 and n^2GFL. 
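For concreteness, the dual-task iteration against which these baselines are compared can be sketched in a few lines of NumPy. The sketch below is an illustrative re-implementation, not the authors' code or the exact Algorithm referenced above: it assumes a plain gradient step on (β_0, β_pre) for the logistic term, a linearized Bregman (soft-thresholding) step on the augmented variable as in the Split LBI literature, a simple support restriction as a stand-in for the projection defining β_les, and a heuristic echo of the quoted step-size condition; all function names are ours.

# Illustrative sketch of a Split-LBI-style iteration for the loss defined above:
# logistic negative log-likelihood plus the variable-splitting penalty ||D beta - gamma||^2 / (2 nu).
import numpy as np

def soft_threshold(z, thresh=1.0):
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def gsplit_lbi_sketch(X, y, D, nu=1.0, kappa=10.0, n_steps=500):
    """Return the path of (beta_pre, gamma, beta_les).

    X : (N, p) design, y : (N,) labels in {-1, +1}, D : (m, p) with D = [I; rho*D_G].
    The step size mimics the stability condition alpha = nu / (kappa (1 + nu*Lam_X^2 + Lam_D^2)).
    """
    N, p = X.shape
    m = D.shape[0]
    lam_X = np.linalg.norm(X, 2)          # largest singular value (heuristic scale)
    lam_D = np.linalg.norm(D, 2)
    alpha = nu / (kappa * (1.0 + nu * lam_X**2 + lam_D**2))

    beta, beta0 = np.zeros(p), 0.0        # beta_pre and intercept
    z, gamma = np.zeros(m), np.zeros(m)   # Bregman variable and sparse estimator
    path = []
    for _ in range(n_steps):
        # gradient of the logistic negative log-likelihood w.r.t. (beta0, beta_pre)
        margin = y * (X @ beta + beta0)
        w = -y / (1.0 + np.exp(margin))
        g_beta = X.T @ w / N + D.T @ (D @ beta - gamma) / nu
        g_beta0 = w.mean()
        # gradient descent on (beta0, beta_pre); linearized Bregman step on gamma
        beta -= kappa * alpha * g_beta
        beta0 -= kappa * alpha * g_beta0
        z -= alpha * (gamma - D @ beta) / nu
        gamma = kappa * soft_threshold(z)
        # beta_les: restrict beta_pre to the support selected by gamma_V (first p rows of D = I)
        beta_les = np.where(gamma[:p] != 0, beta, 0.0)
        path.append((beta.copy(), gamma.copy(), beta_les))
    return path

The returned path plays the role of the regularization path {β_pre^k, γ^k, β_les^k}: early stopping along it corresponds to choosing the parameter t^k discussed in the next subsection.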
For each model, optimal parameters are determined by grid-search. For GSplit LBI, ρ is chosen from {1,2,...,10}, κ is set to 10; α = ν / κ(1 + νΛ_X^2 + Λ_D^2)[For logit model, α < ν / κ(1 + νΛ_H^2 + νΛ_X^2) since Λ_X > Λ_H.]; specifically, ν is set to 0.2 (corresponding to ν↛ 0 in Fig. <ref>)[In this experiment, comparable prediction result will be given for ν∈ (0.1,10).] . The regularization coefficient λ is ranged in {0,0.05, 0.1,...,0.95,1,10,10^2} for lasso[0 corresponds to logistic regression model.] and 2^{-20,-19,...,0,...,20} for SVM. For other models, parameters are optimized from λ:{0.05,0.1,...,0.95,1,10,10^2} and ρ:{0.5,1,..,10}(in addition, the mixture parameter α: {0,0.05,...,0.95} for Elastic Net).The best accuracy in the path of GSplit LBI and counterpart are reported. Table <ref> shows that β_pre of our model outperforms that of others in all cases.Note that although our accuracies may not be superior to models with multi-modality data<cit.>, they are the state-of-the-art results for only sMRI modality.The process of feature selection combined with prediction accuracy can be analyzed together along the path. The result of 30ADNC is used as an illustration in Fig. <ref>. We can see that β_pre (blue curve) outperforms β_les (red curve) in the whole path for additional procedural bias captured by β_pre. Specifically, at β_pre's highest accuracy (t_5), there is a more than 8% increase in prediction accuracy by β_pre. Early stopping regularization at t_5 is desired, as β_pre converges to β_les in prediction accuracy with overfitting when t grows. Recall that positive (negative) features represent degenerate (enlarged) voxels. In each fold of β_pre at t_5, the commonly selected voxels among top 150 negative (enlargement) voxels are identified as procedural bias shown in Fig. <ref>, where most of these GM voxels are enlarged and located near lateral ventricle or subarachnoid space etc., possibly due to enlargement of CSF space in those locations that are different from the lesion features. §.§ Lesion Features Analysis To quantitatively evaluate the stability of selected lesion features, multi-set Dice Coefficient(mDC)[In <cit.>, mDC := 10 | ∩_k=1^10 S(k) | /∑_k=1^10 | S(k) | where S(k) denotes the support set of β_les in k-th fold.] <cit.> is applied as a measurement.The 30ADNC task is again applied as an example, the mDC is computed for β_les which achieves highest accuracy by 10-fold cross-validation. As shown from Table <ref>, when ν = 0.0002(corresponding to ν→ 0 in Fig. <ref>), the β_les of our model can obtain more stable lesion feature selection results than other models with comparable prediction power. Besides, the average number of selected features (line 3 in Table <ref>) are also recorded. Note that although elastic net is of slightly higher accuracy than β_les, it selects much more features than necessary.For the meaningfulness of selected lesion features, they are shown in Fig. <ref> (a)-(c), located in hippocampus, parahippocampal gyrus and medial temporal lobe etc., which are believed to be early damaged regions for AD patients.To further investigate the locus of lesion features, we conduct a coarse-to-fine experiment. Specifically, we project the selected overlapped voxels of 8 × 8 × 8 mm^3 size (shown in Fig. <ref> (c)) onto MRI image with more finer scale voxels, i.e. in size of 2 × 2 × 2 mm^3. Totally 4,895 voxels are served as input features after projection. Again, the GSplit LBI is implemented using 10-fold cross-validation. 
The prediction accuracy of β_pre is 90.34% and on average 446.6 voxels are selected by β_les. As desired, these voxels belong to parts of lesion regions, such as those located in hippocampal tail, as shown in Fig. <ref> (d). § CONCLUSIONSIn this paper, a novel iterative dual task algorithm is proposed to incorporate both disease prediction and lesion feature selection in neuroimage analysis. With variable splitting term, the estimators for prediction and selecting lesion features can be separately pursued and mutually monitored under a gap control. The gap here is dominated by procedural bias, some specific features crucial for prediction yet ignored in a priori disease knowledge. With experimental studies conducted on 15ADNC, 30ADNC and 15MCINC tasks, we have shown that the leverage of procedural bias can lead to significant improvements in both prediction and model interpretability. In future works, we shall extend our model to other neuroimaging applications including multi-modality data.Acknowledgements. This work was supported in part by 973-2015CB351800, 2015CB85600, 2012CB825501, NSFC-61625201, 61370004, 11421110001 and Scientific Research Common Program of Beijing Municipal Commission of Education (No. KM201610025013).splncs03Supplementary Information§ NOTATIONFor matrix A, A_J represents the submatrix of A indexed by J. A^† denotes the Moore-Penrose pseudoinverse of A.Suppose A ∈ R^n× n, ‖ A ‖_Σ := trace(A) =∑_i=1^n A_i,i. Besides, β̃ and β are used to represent β_les and β_pre respectively in what follows.§ MODEL SELECTION CONSISTENCY Consider recovery from generalized linear model(GLM) of β^⋆∈ R^p which satisfies structural sparsity after linearly transformed by D ∈ R^m × p:P(y | x,β^⋆)∝ exp(x^Tβ^⋆· y - ψ(x^Tβ^⋆)/d(σ) )s.t.γ^⋆ = Dβ^⋆ (S := supp(γ^⋆), s = |S|, s<<m)where ψ: R → R is link function and d(σ) is known parameter related to the variance of distribution. Under linear model with ψ(t) = t^2 and d(σ) = σ^2 in <ref>, our model GSplit LBI degenerates to Split LBI <cit.>. Recently, it's proved in <cit.> that the Split LBI may achieve model selection consistency under weaker conditions than generalized lasso <cit.> if ν is large enough. We claim that this property can also be shared by logit model. To understand why Gsplit LBI can achieve better model selection consistency, note that the variable splitting term projects solution vector β into higher dimensional space (β,γ) with β fitting data and γ being structural sparse. This will make it easier for the subspace of γ_S^c to decorrelate with the subspace of (β,γ_S), especially when ν increases, which sheds light on better performance of Split LBI to recover true signal set S. What's more important, the property may also be shared by logit model when y = {± 1}, d(σ) = 1 and ψ(t) = log(1 + exp(t)). 
Concretely speaking, we use θ_S^c,(β,S)(ν) to denote the angle between the subspace of γ_S^c and that of (β,γ_S), defined as

θ_S^c,(β,S)(ν) := arccos( ‖ P_A_(β,S) A_S^c‖_F / ‖ A_S^c‖_F ) = arccos( √( ‖ H_S^c,(β,S) H^†_(β,S),(β,S) H_(β,S),S^c‖_Σ / ‖ H_S^c,S^c‖_Σ ) ),

where A := [ A_(β,S), A_S^c ] and H := ∇^2_β,γ l(β,γ) = A^T A = [ H_(β,S),(β,S), H_(β,S),S^c ; H_S^c,(β,S), H_S^c,S^c ].

For the linear model, A can be written explicitly, with

A_(β,S) = [ X, 0_n × s ; -D_S/√(ν), I_(S,S)/√(ν) ; -D_S^c/√(ν), 0_(p-s) × s ],   A_S^c = [ 0_n × (p-s) ; 0_s × (p-s) ; I_(S^c,S^c)/√(ν) ].

There is no explicit definition of A for the logit model; however, θ_S^c,(β,S)(ν) can still be computed through the Hessian matrix H in equation <ref>.

We claim that θ_S^c,(β,S)(ν) will increase as ν becomes larger under some conditions. See theorem <ref> for details.

Under the linear model and the logit model, lim_ν→ +∞θ_S^c,(β,S)(ν) = 90^∘ if and only if Im(D_S^c^T) ⊆ Im(X^T).

In <cit.>, it has been proved that a necessary condition for sign-consistency is IRR(ν) < 1. For uniqueness of the model, we also assume that ker(X) ∩ ker(D_S^c) ⊆ ker(D_S). Combined with Im(D_S^c^T) ⊆ Im(X^T), i.e. ker(X) ⊆ ker(D_S^c), we have that ker(X) ⊆ ker(D_S), which is the necessary and sufficient condition for lim_ν→∞ IRR(ν) = 0 to hold. Hence, this is another way to understand why GSplit LBI can achieve better model selection consistency.

We first prove the case of the linear model. Denote A := ν X^⋆X + D^TD, where X∈ R^n × p and X^⋆ = X/n. Note that

H_(β,S),(β,S) = QLQ^T,   H_S^c,(β,S) = [ D_S^c/ν, 0 ],

where

Q = [ I_p, 0 ; -D_S A^†, I_s ],   L = [ A/ν, 0 ; 0, (I_s - D_S A^† D_S^T)/ν ].

Then we have

H_S^c,(β,S) H^†_(β,S),(β,S) H_(β,S),S^c = H_S^c,(β,S) Q L^† Q^T H_(β,S),S^c = (1/ν) D_S^c A^† D_S^c^T.

Substituting equation <ref> into the second equation of <ref>, and noting that ‖ H_S^c,S^c‖_Σ = (m-s)/ν, we have

cos^2(θ_S^c,(β,S)(ν)) = (1/ν)‖ D_S^c A^† D_S^c^T‖_Σ / ‖ H_S^c,S^c‖_Σ = ‖ D_S^c A^† D_S^c^T‖_Σ / (m - s).

Denote by e_i∈ R^m-s the vector whose i-th element is 1 and whose other elements are 0. Then equation <ref> is equivalent to

cos^2(θ_S^c,(β,S)(ν)) (m-s) = Σ_i=1^m-s d_i^T A^† d_i,

where d_i := D_S^c^T e_i. Suppose the compact singular value decomposition of X/√(n) is UΛ V^T, and let (V, Ṽ) be an orthogonal square matrix. Suppose the compact singular value decomposition of DṼ is U_1Λ_1V_1^T. If Im(D_S^c^T) ⊆ Im(X^T), then there exist f_i such that d_i = Vf_i, hence

d_i^T (ν X^⋆X + D^TD)^† d_i = d_i^T [ V, Ṽ ] ( [ V, Ṽ ]^T (ν X^⋆X + D^TD) [ V, Ṽ ] )^† [ V, Ṽ ]^T d_i = f_i^T (νΛ^2 + V^TD^TDV)^-1 f_i → 0,   as ν→∞.

Combined with equation <ref>, it is then easy to obtain that cos^2(θ_S^c,(β,S)(ν)) → 0 as ν→ +∞. On the contrary, if there exists a such that D_S^c^T a ∉ Im(X^T), then there exists i^⋆ such that d_i^⋆ ∉ Im(X^T). This means that for d_i^⋆ there exist f_1,i^⋆ and f_2,i^⋆ ≠ 0 such that d_i^⋆ = Vf_1,i^⋆ + Ṽf_2,i^⋆. Then we have

d_i^⋆^T (ν X^⋆X + D^TD)^† d_i^⋆ ≥ f_2,i^⋆^T (Ṽ^TD^TDṼ)^† f_2,i^⋆ = f_2,i^⋆^T V_1Λ_1^-2V_1^T f_2,i^⋆ > 0,

since f_2,i^⋆^T V_1Λ_1^2 V_1^T f_2,i^⋆ = f_2,i^⋆^T Ṽ^TD^TDṼ f_2,i^⋆ ≥ f_2,i^⋆^T Ṽ^T d_i^⋆ d_i^⋆^T Ṽ f_2,i^⋆ = (f_2,i^⋆^T f_2,i^⋆)^2 > 0.

From equation <ref>, we then obtain

cos^2(θ_S^c,(β,S)(ν)) (m-s) ≥ d_i^⋆^T A^† d_i^⋆ ≠ 0,

which means that cos^2(θ_S^c,(β,S)(ν)) → 0, i.e. θ_S^c,(β,S)(ν) → 90^∘, does not hold when ν→ +∞. The proof is then completed for the linear model. Under the logit model, the definition of A is modified to A := ν X^⋆ W({x_i,β}_i=1^n) X + D^TD, where W({x_i,β}_i=1^n) is a diagonal matrix whose diagonal elements equal exp(x_i^Tβ)/(1 + exp(x_i^Tβ))^2; the rest of the proof is almost the same as in the linear case. A simulation experiment is conducted to illustrate this idea.
In more detail, n = 100 and p = 80, D = I and X ∈ R^n × p and X_i,j∼ N(0,1). β^⋆_i =2 for 1 ≤ i ≤ 4, β^⋆_i =-2 for 5 ≤ i ≤ 8 and 0 otherwise, y is generated by both linear model y = Xβ^⋆ + ϵ with ϵ∼ N(0,1) and logit model given X and β^⋆. We simulated for 100 times and average θ_S^c,(β,S)(ν) is then computed, which is shown in the left image in figure <ref>. We can see that θ_S^c,(β,S)(ν) increases when ν becomes larger, as illustrated in right image in figure <ref>, and converges to 90^∘ when ν→ +∞. The average AUC and estimation of β^⋆ of Gsplit LBI with different ν compared with those of genlasso are also computed. Table <ref> shows better AUC with the increase of ν before ν = 100. As we can see from the algorithm in the paper that β̃ is the projection of β onto the support set of γ. Hence it is equivalent to say that better model selection of β̃ can be achieved as ν increases.However, the excessively large value of ν will lower the signal-to-noise ratio, which is also crucial for model selection consistency and prediction estimation. It's shown in <cit.> that ν determines the trade-off between model selection consistency and estimation of β^⋆. Also, the irrepresentable condition(IRR) can be satisfied as long as ν is large enough. If ν continuously increase, it will deteriorate the estimation of β^⋆, prediction estimation and even AUC. In our experiment the same phenomena can be observed, i.e. the estimation of β̃ and β get worse if ν increases from 10 and 100, respectively; when ν = 100, AUC even decreases. § RELATIONSHIP BETWEEN Β AND Β̃The estimate β̃, as a projection of β onto the subspace of γ, can select features that satisfy structural sparsity. Following the Linearized Bregman Iteration <cit.>, β and β̃ will be more similar on features selected by β̃. In more detail, note that when t = 0, β̃(t) = 0 and β(t) is the graph laplacian regularizer with penalty factor 1/2ν. As t progresses, the gap between β(t) and β̃(t) will decrease in terms of ‖β(t) - β̃(t) ‖_2 for every ν, as shown in figure <ref>.Since ‖β(t) - β̃(t) ‖_2→ 0 as t → +∞ and β̃ is sparse, it follows that β will approximate to β̃ on those selected features. In addition to these selected features, before convergence to β̃, β can capture other features to better fit data(minimize training error), especially for those ones that significantly correlated with data. § CHOICE OF Ν The choice of ν is task-dependent. For stable feature selection,ν with rather "small" value is suggested. It's noted that β - β̃→ 0 as ν→ 0^+, which is reflected by l_2 norm and regularized solution path shown in figure <ref>, <ref>. In this case, the estimator β̃ will be constrained in comparably lower dimension space, therefore it may fit data with more stability, notwithstanding β have no ability to select other features. For prediction estimation, the appropriately large value of ν is preferred. On one hand, when ν is appropriately large, the ability of selecting features with better model selection consistency can be achieved and β will share closer values on these selected features as t progress, as shown in figure <ref>. On the other hand, β may increase the ability of fitting data by having other features being non-zeros as long as ν is not too small. 
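The role of ν discussed above can also be checked numerically in this simulation setting (n = 100, p = 80, D = I, the first eight entries of β^⋆ equal to ±2). The snippet below is an illustrative computation of the angle θ_S^c,(β,S)(ν) for the linear model, built directly from the block matrices A_(β,S) and A_S^c defined earlier; it is not the authors' code, and the helper names are ours. Since X has full column rank here, the condition Im(D_S^c^T) ⊆ Im(X^T) holds and the printed angle should approach 90° as ν grows.

import numpy as np

def angle_deg(X, D, S, nu):
    """cos(theta) = ||P_{A_(beta,S)} A_{S^c}||_F / ||A_{S^c}||_F for the linear model."""
    n, p = X.shape
    m = D.shape[0]
    s = len(S)
    Sc = [j for j in range(m) if j not in S]
    D_S, D_Sc = D[S, :], D[Sc, :]
    # block matrices from the definitions above (columns: beta in R^p and gamma_S in R^s)
    A_bS = np.block([
        [X,                   np.zeros((n, s))],
        [-D_S / np.sqrt(nu),  np.eye(s) / np.sqrt(nu)],
        [-D_Sc / np.sqrt(nu), np.zeros((m - s, s))],
    ])
    A_Sc = np.vstack([
        np.zeros((n, m - s)),
        np.zeros((s, m - s)),
        np.eye(m - s) / np.sqrt(nu),
    ])
    # orthogonal projection onto the column space of A_bS via a reduced QR factorization
    Q, _ = np.linalg.qr(A_bS)
    cos_theta = np.linalg.norm(Q.T @ A_Sc) / np.linalg.norm(A_Sc)
    return np.degrees(np.arccos(np.clip(cos_theta, 0.0, 1.0)))

rng = np.random.default_rng(0)
n, p = 100, 80
X = rng.standard_normal((n, p))
D = np.eye(p)                  # D = I as in the simulation
S = list(range(8))             # true support (beta*_1..8 = +/-2)
for nu in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(nu, angle_deg(X, D, S, nu))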
In fact, it is shown in table <ref> that comparable results can be given as long as ν belongs to a reasonable range of values(0.1-10 in this case).§ IDS OF ADNI SUBJECT USED IN OUR EXPERIMENTSc|c|c||c|c|c||c|c|cSubject ID ClassSubject ID ClassSubject ID Class 123S0094 9655 15AD027S040814964 15MCI 072S03151255915NC123S0088 9788 15AD137S048115044 15MCI 137S0301 1258415NC098S0149 10146 15AD 027S0417 15148 15MCI 002S02951372215NC032S0147 10404 15AD 053S0507 15315 15MCI 037S0327 1380215NC123S0162 10962 15AD 094S0531 1543115MCI027S04031414615NC128S0216 11101 15AD033S0567 1545915MCI 137S0459 14178 15NC128S0167 11203 15AD 127S0394 15510 15MCI 002S0413 1443715NC005S0221 11604 15AD 033S0514 15605 15MCI 068S04731448315NC014S0328 12327 15AD 033S0513 1562215MCI 116S036014623 15NC007S0316 12616 15AD 130S0460 15711 15MCI 133S0488 14838 15NC021S0343 12979 15AD 098S0542 1584815MCI 133S04931484815NC014S0356 13004 15AD 007S0414 15875 15MCI 014S052015299 15NC032S0400 13525 15AD 031S056815885 15MCI 014 S05191532315NC116S0370 14122 15AD 037S05011591615MCI 116 S0382 15347 15NC127S0431 15497 15AD 037S0552 15970 15MCI 128S0500 15366 15NC031S0554 15994 15AD 130S0423 1619615MCI010S0419 15415 15NC128S0517 16150 15AD 014S0557 16304 15MCI131S0436 15674 15NC116S0487 16377 15AD 033S051116314 15MCI128S0522 1582115NC002S0619 16392 15AD 130S044916351 15MCI 033S0516 15860 15NC131S0497 16666 15AD 027S046116467 15MCI 002S0559 15948 15NC021S0642 17632 15AD 128S0608 1650315MCI 014S0548 16024 15NC 033S0739 19175 15AD 128S0611 16766 15MCI128S05451609015NC100S0743 19585 15AD 053S0621 16864 15MCI 031S0618 16598 15NC033S0724 19772 15AD 037S0566 1688615MCI 010S0420 1707815NC128S0740 19990 15AD 037S05391701815MCI 126 S0506 17184 15NC021S0753 20169 15AD 137S044317030 15MCI 005S06101730315NC137S0796 23112 15AD 005S05461705615MCI 006S0484 17377 15NC029S0836 23231 15AD 137S0631 17109 15MCI 014S05581740015NC 100S0747 23581 15AD 027S06441715715MCI 021S064717668 15NC127S0754 23787 15AD 133S0629 17596 15MCI 137 S0686 17813 15NC012S0803 24863 15AD 021S062617687 15MCI 032S0677 1782015NC033S0889 25026 15AD 098S06671770215MCI 002S0685 18211 15NC126S0891 25172 15AD 052S06711784915MCI 094S07111858915NC005S0929 25645 15AD 014S0563 17876 15MCI 127S068418896 15NC006S0547 25816 15AD 007S06981836315MCI 033S07341915515NC002S0955 26170 15AD 133S0638 18672 15MCI 033S0741 1925815NC130S0956 27032 15AD 033S0723 19014 15MCI 094S06921956715NC053S1044 27782 15AD 032S071819035 15MCI 009 S0751 20013 15NC133S1055 29381 15AD 126S07081908915MCI 116S06482037015NC100S1062 29579 15AD 128S0715 19225 15MCI 129S077820543 15NC029S1056 30618 15AD 033S072519404 15MCI 029S0824 2321315NC 029S0999 31239 15AD 137S06691941915MCI 116S06572335015NC006S0653 31252 15AD 116S06491951615MCI 006S0731 23468 15NC014S1095 31576 15AD 130S05051970115MCI 029S08452424915NC 094S1090 31678 15AD 137S07221970715MCI 009S08622512815NC021S1109 31784 15AD 126S0709 1975415MCI 098S0896 25255 15NC024S1171 35190 15AD 128S0770 1990715MCI 033S0923 25427 15NC133S1170 35211 15AD 014S0658 20003 15MCI130S0886 25455 15NC031S1209 36178 15AD 137S0668 20202 15MCI 006S0498 2579015NC130S1201 36269 15AD 137S0800 20500 15MCI 052S09512664215NC027S1081 37145 15AD 002S07822051915MCI 130S0969 26688 15NC 126S1221 37339 15AD 130S0783 2079415MCI 021S09842705615NC 029S1184 37350 15AD 116S075223097 15MCI 024S09852760715NC027S1254 37859 15AD068S080223389 15MCI 024S10632811115NC130S1290 38395 15AD 133S0792 2344415MCI 033S1098 30304 15NC033S1285 38593 15AD 006S067523644 15MCI 010S04723048115NC033S1283 38617 15AD 031S082123658 15MCI 
137S0972 3170215NC033S1308 40114 15AD 133S07712387615MCI 033S10863205415NC024S1307 41527 15AD 133S0727 23939 15MCI 130S12003628115NC007S1339 42344 15AD027S08352413815MCI116S1232 3784815NC 130S1337 42930 15AD 031S0830 2428115MCI 027S01201093315NC127S1382 45060 15AD 029S087824533 15MCI068S01271113315NC094S1397 51790 15AD 136S06952458515MCI 068S021011235 15NC094S1402 54220 15AD 031S0867 2496215MCI 136S01861133515NC136S0299 1518130AD 033S0906 25053 15MCI 009S0842 24339 15NC 136S0426 1617230AD 033S0922 2509215MCI 029S084324406 15NC018S0335 1656030AD 012S093225150 15MCI 032S1169 34067 15NC 136S0300 1671930AD 137S0825 25272 15MCI 018S00559136 15NC 018S0633 1909330AD 116S083425467 15MCI 100S00158390 30NC 012S0689 1921030AD094S09212549815MCI136S01961423630NC126S0606 2048730AD 136S08732555915MCI 136S0086 1471230NC131S0691 2068130AD 100S0930 25618 15MCI 018S03691511030NC 005S0814 2473430AD 133 S09122600015MCI 131S04411595930NC 002S0816 2540530AD 032S0978 26407 15MCI 032S04791665230NC127S0844 2923030AD 100S08922644315MCI018S0425 1716830NC002S1018 3383230AD 052S0952 26661 15MCI126S0405 1717730NC 031S4024 22887930AD 053S09192673915MCI 005S0553 17619 30NC016S4009 24094630AD 068S0872 27450 15MCI 126S0605 17639 30NC094S4089 24271930AD094S101528005 15MCI 005S0602 1961530NC 006S4153 24851730AD 133S1031 28152 15MCI012S1009 28962 30NC003S4136 25017330AD 127S09252816515MCI012S12123740330NC 003S4152 25376030AD 137S0994 28269 15MCI 007S1206 3776130NC098S4215 25584330AD 009S10302851415MCI068S1191 3837030NC098S4201 25617830AD 100S0995 28877 15MCI007S12223848230NC 006S4192 25859430AD027S10452894715MCI 094S124141449 30NC019S4252 25894730AD136S0874 2914015MCI 002S12614179930NC024S4280 26133230AD 127S10322917715MCI002S128041806 30NC094S4282 26185530AD 126S0865 29243 15MCI052S1251 43812 30NC029S4307 26759530AD031S10662938815MCI100S128645761 30NC016S4353 26793730AD052S09892952515MCI094S126746457 30NC109S4378 27066930AD137S097329650 15MCI131S13014932830NC126S4494 28160530AD012S1033 2996415MCI098S400322460330NC127S4500 28351530AD033S11163031715MCI 098S4018 228788 30NC007S4568 28747230AD 029S10733035915MCI 031S4021 229148 30NC 006S4546 28799430AD 029S10383039515MCI012S402623853230NC 130S4589 29121930AD052S105430580 15MCI 098S405023861530NC 016S4591 29243330AD 037S1078 30960 15MCI016S409724355630NC016S4583 29420930AD 010S0422 31015 15MCI016S4952337793 30NC 014S461529433430AD012S09173172515MCI016S4121246002 30NC 130S4641 29596130AD 006S11303179915MCI006S4150249403 30NC 130S4660 30003430AD 126S1077 3185015MCI127S4148 250137 30NC 019S4549 30033530AD037S058832151 15MCI 003S4119 250894 30NC126S4686 30081830AD052S1168 3234915MCI 127S4198 254320 30NC005S4707 30466330AD010S0904 3249715MCI002S421325458230NC 021S4718 30474930AD 002S1155 33393 15MCI031S4218 255978 30NC 018S4733 30606930AD 029S08713371715MCI 002S4225257270 30NC130S4730 30638430AD127S1140 33761 15MCI 002S426225965330NC137S4756 30711830AD029S09143377515MCI941S410025978130NC 027S4801 31403430AD100S11543425815MCI 002S4264259796 30NC027S4802 31719530AD 094S11883461915MCI021S4276 260047 30NC006S4867 32201230AD 012S1165 35052 15MCI 029S4290 260425 30NC016S4887 32564930AD 133S09133517115MCI098S4275 26145930NC 007S4911 32819630AD012S1175 3534215MCI094S4234261531 30NC 021S4924 33125730AD 126S1187 36364 15MCI 018S425726207630NC 137S4756 33293030AD 009S1199 3637315MCI136S4269 264215 30NC 127S4940 33551230AD 029S121537129 15MCI029S4279 26598030NC 027S4938 33692630AD116S089037182 15MCI021S433526617430NC 027S4962 33855830AD100S122637251 15MCI130S4343 26621730NC130S4982 
34178730AD005S12243728415MCI 018S434926662530NC130S4984 34227430AD 037S12253736415MCI129S436926740530NC 130S4971 34233830AD 029S12183737315MCI130S4352 26771130NC127S4992 34269730AD 027S1213 3739315MCI129S4371 26846230NC 019S5012 34391630AD127S12103831915MCI018S4313 26893030NC019S5019 34566330AD 116S124338462 15MCI019S4367269273 30NC002S5018 34624230AD 033S130938837 15MCI007S4387 26992930NC 127S5028 34669630AD 027S1277 39715 15MCI036S4389 27046230NC130S4997 34741030AD 129S12464023715MCI003S4350 270999 30NC 005S5038 35143230AD 129S12044039815MCI129S4422 272184 30NC 127S5056 35320330AD033S1284 4088115MCI 018S4399272231 30NC127S5058 35463630AD 033S127940902 15MCI018S4400 273504 30NC 007S0128 1000715MCI029S1318 41062 15MCI021S4421273564 30NC010S0161 1007715MCI116S127141321 15MCI029S4383 27399330NC021S0141 1017315MCI094S13304149115MCI 003S4441 277108 30NC 127S011210419 15MCI 121S1322 42188 15MCI136S4433 278511 30NC 128S01351043115MCI 094S13144269415MCI 006S4449279470 30NC128S01381043815MCI 052 S13524287615MCI031S4474280369 30NC098S01601046615MCI123S1300 4321415MCI007S4488281560 30NC123S01081073815MCI 121S13504412215MCI 006S4485 28188230NC037S015010773 15MCI072S1211 44137 15MCI010S434528200530NC027S0116 10783 15MCI116S1315 44143 15MCI031S4496 282638 30NC128S018810897 15MCI 052S13464451515MCI 098S4506 282934 30NC014S0169 10987 15MCI 027S138744748 15MCI 094S4459 28344530NC021S01781099315MCI 024S1393 4488715MCI094S4460 28357330NC 128S020511011 15MCI132S098745815 15MCI010S4442283915 30NC128S020011012 15MCI 029S138447455 15MCI 007S451628442430NC037S018211121 15MCI 072S138049799 15MCI029S4385 28558930NC137S015811127 15MCI 094S1398 53551 15MCI 094S450328622230NC128S02251117915MCI024S1400 5373915MCI 073S4559 286553 30NC 136S01071122715MCI 094S1417 60175 15MCI 021S4558 287527 30NC032S02141128015MCI 127S141961670 15MCI109S4499 288999 30NC005S022211299 15MCI 137S1414 6447215MCI100S4469 28956430NC027S017911348 15MCI127S14276935515MCI100S451128965330NC021S023111430 15MCI 037S1421 70885 15MCI 012S4545 290413 30NC007S0249 1154415MCI 137S142672082 15MCI 053S457829081430NC098S0269 11615 15MCI 007S0041817715MCI127S4604 29152330NC130S0289 1185015MCI123S0050 8648 15MCI 007S4620 293938 30NC 021S0273 1194215MCI100S0006 879315MCI127S4645295590 30NC007S0293 1198215MCI 007S0101 960215MCI002S4270260581 30NC031S0294 1206515MCI 123S01061012615NC013S4579296776 30NC021S0276 1209215MCI100S0035812015NC 013S458029685930NC128S0227 1211915MCI100S00478899 15NC012S4642 296878 30NC027S0256 12250 15MCI010S00679093 15NC 012S4643297693 30NC 130S0285 1242415MCI 018S0043 9324 15NC029S4585 298523 30NC 098S0288 12654 15MCI 100S0069941715NC013S4616 300089 30NC 007S0344 1269715MCI 032S0095968015NC029S465230088630NC 021S0332 1286215MCI 123S00729752 15NC 137S4632301677 30NC128S0258 1308515MCI 007S00701002715NC094S464930292630NC027S030713281 15MCI 131S012310043 15NC016S4638 30588230NC123S0390 1331515MCI 027S011811370 15NC 013S473130817830NC031S0351 13783 15MCI 098S017211398 15NC 136S4726 308396 30NC021S0424 1390915MCI130S023211567 15NC016S4688310327 30NC 053S0389 13938 15MCI 005S02231164515NC 019S4835 315857 30NC 094S04341396415MCI123S01131171415NC127S4843316771 30NC 068S0401 14161 15MCI128S02301180615NC003S483931941430NC 131S04091424015MCI137S028312028 15NC 003S484031942730NC 116S03611429615MCI128S0245 12242 15NC003S487232137630NC 132S033914367 15MCI128S0272 1231315NC 003S490032572930NC037S03771440515MCI 128S022912459 15NC 016S4951337692 30NC027S0485 1492815MCI 021S0337 12466 15NC 130S01029709 15MCI 098S0171 1081815NC | 
http://arxiv.org/abs/1705.09249v2 | {
"authors": [
"Xinwei Sun",
"Lingjing Hu",
"Yuan Yao",
"Yizhou Wang"
],
"categories": [
"stat.AP",
"q-bio.NC"
],
"primary_category": "stat.AP",
"published": "20170525162514",
"title": "GSplit LBI: Taming the Procedural Bias in Neuroimaging for Disease Prediction"
} |
Counting one sided simple closed geodesics on Fuchsian thrice punctured projective planes

Michael Magee

We prove that there is a true asymptotic formula for the number of one sided simple closed curves of length ≤ L on any Fuchsian real projective plane with three points removed. The exponent of growth is independent of the hyperbolic structure, and it is noninteger, in contrast to counting results of Mirzakhani for simple closed curves on orientable Fuchsian surfaces.

M. Magee was supported in part by NSF award DMS-1701357.

§ INTRODUCTION

Let Σ:=P^2(ℝ)∖{3 points}, the three times punctured real projective plane. It is the fixed topological surface of interest in this paper. Any hyperbolic structure J of finite area on Σ gives a metric of curvature -1 and hence a way to measure the length of curves. For fixed J, any isotopy class of nonperipheral simple closed curve [γ] on Σ has a unique geodesic representative, and we call the length of this geodesic with respect to J simply the length of [γ].

It is known by work of Mirzakhani <cit.> that for a fixed finite area hyperbolic structure J on an orientable surface S, the number n_J(L) of isotopy classes of simple closed curves of length ≤ L has an asymptotic formula:

n_J(L)=cL^d+o(L^d)

where c=c(J)>0 and d=d(S)>0 is the integer dimension of the space of compactly supported measured laminations on S.

In the case of the once punctured torus, a stronger form of Theorem <ref> was obtained previously by McShane and Rivin <cit.>. An isotopy class of simple closed curve in Σ is said to be one sided if cutting along this curve creates only one boundary component, or in other words, a thickening of this curve is homeomorphic to a Möbius band. The point of the current paper is to establish an asymptotic formula for n_J^(1)(L), the number of isotopy classes of one sided simple closed curves of length ≤ L with respect to a given hyperbolic structure J on Σ.

There is a noninteger parameter β>0 such that for any finite area hyperbolic structure J on P^2(ℝ)∖{3 points},

n_J^(1)(L)=cL^β+o(L^β) for some c=c(J)>0.

The parameter β appeared for the first time in the work of Baragar <cit.> in connection with the affine varieties

V_n,a: x_1^2+x_2^2+…+x_n^2=ax_1x_2… x_n.

These varieties have a rich automorphism group that contains an embedded copy of 𝒢:=C_2^*n, the free product of a cyclic group of size 2 with itself n times. Baragar proved that for o∈ V(ℤ) the following limit exists and is independent of o:

lim_R→∞ log|𝒢.o∩ B_ℓ^∞(R)| / loglog R = β(n)>0.

The variety V_4,1 was connected to the Teichmüller space of Σ by Huang and Norbury in <cit.>. The value β of Theorem <ref> is therefore β:=β(4), which Baragar estimated to be in the range

2.430<β(4)<2.477.

Using Baragar's result, Huang and Norbury proved in <cit.>, for an arbitrary hyperbolic structure J on Σ, that[This statement corrects the statement in <cit.>.]

lim_L→∞ log n_J^(1)(L) / log L = β.

A true asymptotic count for the integer points V_n,a(ℤ) was obtained[In fact the paper <cit.> treats slightly more general varieties than V_n,a.] by Gamburd, Magee and Ronan in <cit.>. Let o∈ V_n,a(ℤ) and β=β(n) as for Baragar <cit.>. There is c(o)>0 such that

|𝒢.o∩ B_ℓ^∞(R)|=c(log R)^β+o((log R)^β).

This is a strengthening of Baragar's result analogous to the main Theorem <ref>.
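To get a feel for these counts, the orbit appearing in the result of Gamburd, Magee and Ronan can be explored numerically. The sketch below (an illustration only, not code from any of the cited papers) starts from the integer point (2,2,2,2) on the n=4, a=1 variety and repeatedly applies the Vieta-type moves x_j ↦ x_1x_2x_3x_4/x_j − x_j recalled in the next section, counting orbit points inside the box B_ℓ^∞(R). Because moving at the largest coordinate decreases it, the breadth-first search below never has to leave the box to reach a point inside it.

import math
from collections import deque

def orbit_count(R, start=(2, 2, 2, 2)):
    """Number of points in the move-orbit of `start` with all coordinates <= R."""
    seen = {start}
    queue = deque([start])
    while queue:
        x = queue.popleft()
        for j in range(4):
            prod_others = 1
            for i in range(4):
                if i != j:
                    prod_others *= x[i]
            y = list(x)
            y[j] = prod_others - x[j]   # the move m_j: flip to the other root of the quadratic in x_j
            y = tuple(y)
            if max(y) <= R and y not in seen:
                seen.add(y)
                queue.append(y)
    return len(seen)

# Counts grow like c*(log R)^beta with beta = beta(4); the crude exponent estimate
# printed below converges slowly and only gives a rough numerical handle on beta.
for exponent in (4, 8, 16, 32):
    R = 10**exponent
    n = orbit_count(R)
    print(R, n, math.log(n) / math.log(math.log(R)))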
It is worth noting that the type of arguments used by Huang and Norbury in <cit.> would not be enough to establish Theorem <ref>, even using Theorem <ref> as input. In the sequel we show how to combine and refine the arguments of <cit.> and <cit.> to prove Theorem <ref>.We also point out the recent preprint of Gendulphe <cit.> who has begun a systematic investigation into the issues of growth rates of simple geodesics on general non-orientable surfaces.§ ORBITS ON TEICHMÜLLER SPACE The curve complex of Σ is the simplicial complex whose vertices are isotopy classes of one sided simple closed curves, and a collection of k+1 curves span a k-simplex if they pairwise intersect once. We write Z for this complex that was introduced by Huang and Norbury in <cit.>, and its 1-skeleton was studied earlier by Scharlemann in <cit.>. It is a pure complex of dimension 3, that is, all maximal simplices are 3 dimensional. Throughout the paper we use the notation Z^k for the k-simplices of Z. The collection of all finite area hyperbolic structures on Σ is called the Teichmüller space of Σ and denoted by (Σ). It has a natural real analytic structure.Let V be the affine subvariety of ^4 cut out by the equationx_1^2+x_2^2+x_3^2+x_4^2=x_1x_2x_3x_4.It was proven by Hu, Tan and Zhang in <cit.> that the automorphism group of the complex variety V is given by Λ⋊(N⋊ S_4)where * N is the group of transformations that change the sign of an even number of variables.* S_4 is the symmetric group on 4 letters that acts by permuting the coordinates of ^4.* Λ is a nonlinear group generated by Markoff moves, e.g.m_1(x_1,x_2,x_3,x_4)=(x_2x_3x_4-x_1,x_2,x_3,x_4)replaces x_1 by the other root of the quadratic obtained by fixing x_2,x_3,x_4 in (<ref>). Similarly there are moves m_2,m_3,m_4 that flip the roots in the other coordinates, and m_1,m_2,m_3,m_4 generate a subgroup Λ≅ C_2*C_2*C_2*C_2of (V) where the m_i correspond to the generators of the C_2 factors.Since the abstract group C_2^*4 acts in different ways in the sequel, we let:=C_2^*4.We obtain an action ofon V(_+) by the identification (<ref>).Huang and Norbury in <cit.> prove that V(_+) can be identified with (Σ) by the following map. Let Δ=(α_1,α_2,α_3,α_4) be an ordering of a 3-simplex of Z. Let ℓ_α_j(J) be the length of the geodesic representative of α_j in the metric of J. Define a map Θ_Δ(J):=(x_α_1(J),x_α_2(J),x_α_3(J),x_α_4(J))where x_α_i(J):=√(2sinh(1/2ℓ_α_i(J))).Building on work of Penner <cit.>, Huang and Norbury showFor any ordering of the curves Δ=(α_1,α_2,α_3,α_4) in a 3-simplex of Z, Θ_Δ:(Σ)→ V(_+) is a real analytic diffeomorphism. Let Z_^3 denote tuples (α_1,α_2,α_3,α_4) such that {α_1,α_2,α_3,α_4} is a 3-simplex of Z. It is more symmetric to consider instead of Theorem <ref>, the pairing⟨∙,∙⟩:T(Σ)× Z_^3 → V(_+), ⟨ J,Δ⟩:=Θ_Δ(J).Huang and Norbury note for fixed Δ=(α,β,γ,δ)∈ Z_^3 there is a unique way to `flip' each of α,β,γ,δ to another one sided simple closed curve, say α' in the case of α being flipped, so that e.g. Δ'=(α',β,γ,δ) is in Z_^3, i.e. α' intersects each of β,γ,δ once. This yields an action ofon Z_^3 where the generator of the first C_2 factor always acts by flipping the first curve and so on. Recall also the action ofon V(_+). The pairing ⟨∙,∙⟩ is equivariant for the action ofon the second factor:⟨ J,g.(α_1,α_2,α_3,α_4)⟩=g.⟨ J,(α_1,α_2,α_3,α_4)⟩, g∈.§ DYNAMICS OF THE MARKOFF MOVES Our approach to counting relies on establishing the following properties for points x∈ V(_+) in various contexts. 
A The largest entry of x appears in exactly one coordinate. B If x_j is the largest coordinate of x then the largest entry of m_j(x) is smaller than x_j, that is, (m_j(x))_i<x_j for all i. C If x_j is not the unique largest coordinate of x then it becomes the largest after the move m_j, that is, (m_j(x))_j>(m_j(x))_i for all i≠ j. We will have use for the following theorem due to Hurwitz <cit.>, building on work of Markoff <cit.>.If x∈ V(_+)-(2,2,2,2) then Properties A, B and C hold for x. Hurwitz showed the corresponding result for the point (1,1,1,1)∈ V'(_+) where V' is defined byV': x_1^2+x_2^2+x_3^2+x_4^2=4x_1x_2x_3x_4.It is easy to check that the map V'(_+)→ V(_+), x↦2x is a bijection.Every x in V(_+) has every entry x_j≥2 and is obtained by a unique series of nonrepeating m_j from (2,2,2,2).The following observation will be used several times in the remainder of the section.For any o∈ V(_+), the coordinates of .o form a discrete set. For fixed Δ∈ Z_^3 let J be such that ⟨ J,Δ⟩=o. Then .o=⟨ J,.Δ⟩ and the coordinates are all obtained as √(2sinh(1/2ℓ)) where ℓ is the length of some one sided simple closed curve in Σ w.r.t. J. Since these values of ℓ are discrete in _+ and sinh^1/2 has bounded below derivative in _+ we are done.Lemma <ref> has the following fundamental consequence that makes our counting arguments work.For every point o∈ V(_+) there is some ϵ=ϵ(o)>0 such that for all x∈.o we havex_ix_j≥2+ϵ,1≤ i<j≤4.Let x∈.o and without loss of generality suppose x_1≤ x_2≤ x_3≤ x_4. Since from (<ref>)x_1x_2x_3x_4-x_3^2-x_4^2=x_1^2+x_2^2>0we obtain x_1x_2x_3x_4>x_3^2+x_4^2≥2x_3x_4implying that x_1x_2>2. Since x_1 and x_2 are related by (<ref>) to lengths of simple closed curves with respect to a hyperbolic structure J=J(o), they take on discrete values in .o that are bounded away from 0 and so the possible values of x_1x_2 with 2<x_1x_2<3 are discrete.We also need the following theorem that establishes Theorem <ref> for an arbitrary orbit of , outside a compact set depending on the orbit.For given o∈ V(_+), there is a compact S_4-invariant set K=K(.o)⊂ V(_+) such that Properties A, B and C hold for x∈.o-K. Call a move that takes place at a non-(uniquely largest) entry of x outgoing. The set .o-K is preserved under outgoing moves. Fix o throughout the proof. Lemma <ref> tells us that for some ϵ>0, x_ix_j≥2+ϵ for all x∈.o and 1≤ i<j≤4. Let K_0:={(x_1,x_2,x_3,x_4)∈ V(_+) :x_∞≤10 }. We will choose K such that K_0⊂ K. A. Take x∈.o. We'll prove something stronger than property A for suitable choice of K, and use this later in the proof. Suppose for simplicity x_1≤ x_2≤ x_3≤ x_4. Write x_4=x_3+δ and assume δ<δ_0 where δ_0<1 is small enough to ensure(1+δ x_3^-1)x_1x_2-1-(1+δ x_3^-1)^2>ϵ/2given x_3≥9 (which we know to be the case since x∉ K_0). We will enlarge K_0 so that this is a contradiction. From (<ref>)x_1^2+x_2^2+x_3^2(1+(1+δ x_3^-1)^2-(1+δ x_3^-1)x_1x_2)=0,so x_3^2(3+(1+δ x_3^-1)^2-(1+δ x_3^-1)x_1x_2)>0 and hencex_1x_2≤3+(1+δ x_3^-1)^2/(1+δ x_3^-1)<5given x∉ K_0 (so x_3≥9) and the assumption δ<1. On the other hand (<ref>) and (<ref>) now imply that if η>0 is a lower bound for all coordinates of .o thenx_3^2≤2/ϵ(x_1^2+x_2^2)≤4/ϵx_2^2≤4/η^2ϵx_1^2x_2^2≤100/η^2ϵ,where the last inequality is from (<ref>).Now let K_1={x∈ V(_+) : x_∞≤10/η√(ϵ)+1}∪ K_0.We proved there is δ=δ(o) such that for x∈.o-K_1, there is an entry of x that is ≥δ more than all the other entries.B. Take x∈.o-K_1 with x_1≤ x_2≤ x_3<x_4. We follow the method of Cassels <cit.>. 
Consider the quadratic polynomialf(T)=T^2-x_1x_2x_3T+x_1^2+x_2^2+x_3^2.Then f has roots at x_4 and x'_4 where x'_4 is the last entry of m_4(x). Property B holds at x unless x_3<x_4≤ x_4', in which case f(x_3)>0 givingx_3^2(4-x_1x_2)≥ x_3^2(2-x_1x_2)+x_1^2+x_2^2>0.Therefore x_1x_2<4. By discreteness of the coordinates of .o this means there are finitely many possibilities for x_1 and x_2. Now x'_4≥ x_4 directly impliesx_1x_2x_3≥2x_4so 2x_4^2≤ x_1x_2x_3x_4=x_1^2+x_2^2+x_3^2+x_4^2 and so(x_4+x_3)(x_4-x_3)≤ x_1^2+x_2^2≤ Mfor some M depending on the finitely many possible values for x_1,x_2. Since we know x_4-x_3≥δ we obtainx_4+x_3≤M/δso x_3≤ x_4≤ Mδ^-1. Let K_2:={x∈ V(_+) : x_∞≤ Mδ^-1}∪ K_1. This establishes B for x∈.o-K_2.C. Take x∈.o with x_1≤ x_2≤ x_3≤ x_4. Then for 1≤ j≤3, (m_j(x))_j=x_1x_2x_3x_4/x_j-x_j≥ x_1x_2x_4-x_3=x_4(x_1x_2-x_3/x_4)≥ x_4(1+ϵ)>x_4.by Lemma <ref>. This establishes C.We established A, B for x∈.o-K with K=K_2, and C for any x∈.o. It is clear from the previous that .o-K is stable under outgoing moves.§ THE TOPOLOGY OF THE CURVE COMPLEXOur first goal in this section is to prove the following topological theorem. Let G be the graph whose vertices are 3-simplices {α,β,γ,δ} of Z with an edge between two vertices if they share a dimension 2 face. G is a 4-regular tree.This theorem is stated without proof in <cit.>, and then used throughout the rest of the paper <cit.>. We have been careful here only to use results from <cit.> that are deduced independently from Theorem <ref>, to avoid circularity.We prove Theorem <ref> in two steps, using the following theorem of Scharlemann:The 1-skeleton of Z is the 1-skeleton of the complex obtained by repeated stellar subdivision of the dimension 2 faces of a tetrahedron.Z is connected.Recall that the clique complex of a graph H has the same vertex set as H and a k-simplex for each clique (complete subgraph) of H of size k+1. Note that Z is the clique complex of its 1-skeleton.The link of a vertex or edge in Z is contractible. In particular all links of simplices of codimension >1 in Z are connected. Let Y be the 1 dimensional subcomplex of Theorem <ref>. Let Δ^(2) be the 2-skeleton of a standard 3-simplex Δ.Since Z is the clique complex of Y it is possible to characterize links of simplices in Z purely in terms of cliques in Y. Precisely, the link of a simplex s in Z is the collection of all cliques in Y that are disjoint from s but that together with s form a clique.We view Y as a graph drawn on |Δ^(2)|. For every vertex y of Y there are 3 other vertices A(y),B(y),C(y) of Y such that * y,A,B and C are a clique in Y.* Every vertex adjacent to y in Y is contained in one of the triangles T_1(y),T_2(y) or T_3(y) in |Δ^(2)| with vertices (A,B,y), (B,C,y) or (A,C,y) respectively. In the case y is a vertex of Δ, these triangles are faces of |Δ|. Otherwise they are all contained in the same face of |Δ| that contains y.* More precisely, every vertex of Y adjacent to y, and all edges between these vertices, are generated by repeated stellar subdivision of the triangles T_1,T_2 and T_3 together with the edges and vertices of the T_i.These observations mean that all links of vertices of Z look the same and can be calculated by drawing the same picture. Similarly all links of edges can be calculated in the same way.Figure <ref> shows the link of the central vertex y, truncating after 2 iterations of the stellar subdivision. The red edges are incident with y. The blue edges are edges not incident with y but whose vertices are adjacent to y. 
Cliques in the link of y in Z are cliques relative to blue edges. We observe that since the drawing of this part of Y is planar, the blue cliques of size 3 other than {A,B,C} bound nonoverlapping regions, so we can identify the geometric realization of the link of y in Z with the closure of the shaded triangles here, together with an extra triangle with vertices A,B,C. This geometric realization is visibly a topological disc. The effect of iterating stellar subdivision is that the shaded region encroaches inwards, but its homotopy type doesn't change.Similarly the link of the edge between y and C is approximated in Figure <ref>. Green edges emanate from C and red emanate from y. A blue edge has both vertices adjacent to both y and C (i.e. having incident red and green edges). The closure of the blue edges hence approximates the link of {y,C} in Z and is homeomorphic to a line segment. Iterating stellar subdivision extends the segment on both signs and as before, the homotopy type doesn't change. Since Z is connected we obtain the following consequence of Lemma <ref> (cf. Hatcher <cit.>). The basic idea is to use Lemma <ref> to inductively deform any path in Z away from codimension >1 simplices.G is connected. G is acyclic. Suppose G has a cycle, so that there is a series of nonrepeating flips that map a vertex Δ_0={α,β,γ,δ} to itself. Pick the ordering Δ=(α,β,γ,δ) of this vertex. By Theorem <ref> there is a hyperbolic structure J on Σ so that ⟨ J,Δ⟩=(2,2,2,2). The flips of Δ_0 yield a unique nonrepeating series of flips of Δ that in turn yield a unique nonrepeating series of Markoff-Hurwitz moves m_i preserving (2,2,2,2). By Corollary <ref> the series of flips has to be empty.These results (Corollary <ref> and Lemma <ref>) conclude the proof of Theorem <ref> since we established G is an acyclic connected graph that we also know to be 4-valent.In the rest of this section we prove that smaller pieces of G are connected and acyclic. Specifically, for any simplex Δ∈ Z we may form G_Δ, the subgraph of G induced by vertices containing Δ. For example, if Δ is a 2-simplex then G_Δ has two vertices and an edge representing a flip between them. If Δ is a 3-simplex then G_Δ has only one vertex, Δ. More generally,For all Δ⊂ Z, G_Δ is a tree. Since G is acyclic it suffices to prove G_Δ is connected. We give the proof that G_δ is connected in the case δ is a vertex of Z, the case Δ is an edge is similar and we have already discussed the other cases.Suppose δ is a vertex of Δ,Δ'∈ Z^3. We aim to connect Δ to Δ' by flips that don't touch δ. Order Δ and Δ' so that δ is the final element of each. Let J be the hyperbolic structure provided by Theorem <ref> such that ⟨ J,Δ⟩=(2,2,2,2). Since Δ'=(β_1,β_2,β_3,δ), ⟨ J,Δ'⟩=(x_1,x_2,x_3,2) for some x_1,x_2,x_3∈. The infinite descent (Theorem <ref>) for V(_+) now yields a series of flips that never modifies δ, starts at Δ' and ends at some Δ”=(γ_1,γ_2,γ_3,δ)∈ Z_^3 with ⟨ J,Δ”⟩=(2,2,2,2). Also note that by combining Theorem <ref> and Theorem <ref>, there is a unique Δ_0∈ Z^3 such that ⟨ J,Δ_0⟩=(2,2,2,2) for any ordering of Δ_0. Therefore up to reordering, Δ”=Δ_0=Δ as required.There is a nice corollary of Proposition <ref> that may be of independent interest.The curve complex Z has the homotopy type of a point. The collection {G_Δ:Δ-dimensional} is a cover of G by subcomplexes. The nerve of this cover can be identified with Z, and each finite nonempty intersection of the covering complexes is G_Δ for Δ a simplex of Z, and hence is contractible by Proposition <ref>. 
Therefore the Nerve Theorem <cit.> applies to give the result. § PROOF OF THEOREM <REF> Let Γ denote the mapping class group of Σ. Mapping classes in Γ may permute the punctures of Σ. The group Γ acts simplicially on Z in the obvious way.Recall that for each 3-dimensional simplex Δ={α,β,γ,δ}∈ Z, there is a unique flip of α that produces a new simplex {α',β,γ,δ}. Further to this, Huang and Norbury <cit.> construct a corresponding unique mapping class γ_Δ^1∈Γ that maps {α,β,γ,δ} to {α',β,γ,δ}, similarly γ_Δ^2 performs a flip at β and so on. The mapping class elements γ_Δ^i can be extended to a cocycle for the group action ofon Z_^3. In other words, for every Δ∈ Z_^3 and g∈ there is a mapping class group element γ(g,Δ) such that γ(g,Δ)Δ=gΔ. For example, if g_1 is the generator of the first factor ofthen γ(g,Δ)=γ_Δ^1.For any given Δ, if Δ̃∈ Z_^3 is an ordering of Δ then the map→ Z^3g↦γ(g,Δ̃).Δis a bijection.The map g↦γ(g,Δ̃)Δ yields a graph homomorphism from the Cayley graph of , a 4-regular tree, to G. Recall that G is also a 4-regular tree by Theorem <ref>. The homomorphism is locally injective. Therefore g↦γ(g,Δ̃)Δ is a bijection.The next proposition allows us to pass from counting over G to counting over simple closed curves (our goal), up to finite subsets at either side of the passage. Let J∈(Σ) and for arbitrary fixed Δ_0∈ Z_^3 let o:=⟨ J,Δ_0⟩. Let K be a compact S_4-invariant subset of V(_+) containing the set K(.o) from Theorem <ref>. Since K is S_4-invariant, the condition ⟨ J,Δ⟩∉ K is independent of the ordering of Δ∈ Z^3, and so well defined. The map Φ:{ Δ ∈ Z^3 : ⟨ J,Δ⟩∉ K }→ Z^0, Φ:{α_1,α_2,α_3,α_4} ↦{α_i} : ℓ_α_i(J)=max_1≤ j≤4ℓ_α_j(J) is a well defined injection whose image is all but finitely many elements of Z^0. That Φ is well defined is immediate from Theorem <ref>, Property A. Suppose δ∈ Z^0 is the longest curve in each of Δ,Δ' with respect to J, with ⟨ J,Δ⟩,⟨ J,Δ'⟩∉ K. By Proposition <ref> there is a series of flips taking Δ to Δ' and never modifying δ. By Property C of Theorem <ref>, the first flip creates a curve longer than δ w.r.t. J. This continues, since .o-K is stable under outgoing moves, and it is therefore impossible to reach Δ'≠Δ since δ is the largest curve of Δ', but not of any intermediate simplex of the sequence that was generated. This establishes injectivity of Φ.As for the final statement that the image of (<ref>) misses only finitely many curves, let δ∈ Z^0. We aim to find (α',β',γ,δ) for which δ is the longest curve with respect to J. Say that δ is bad if ⟨ J,Δ⟩∈ K for some Δ containing δ. Otherwise say δ is good. Since K is compact, and the set of lengths of one sided simple closed curves in J is discrete, there are only finitely many bad δ. We will prove all good δ are in the image of Φ. For good δ, begin with any Δ∈ Z_^3 such that ⟨ J,Δ⟩∉ K and δ is last in Δ. If δ is the longest curve of Δ with respect to J then we are done. Otherwise let (x_1,x_2,x_3,x_4)=⟨ J,Δ⟩. Using Property B of Theorem <ref>, apply moves at the largest entries of (x_1,x_2,x_3,x_4) (which do not correspond to δ) until δ becomes the longest curve. The resulting (y_1,y_2,y_3,y_4)=⟨ J,Δ'⟩ cannot be in K, so we are done since δ=Φ(Δ'). We have put all the pieces in place to use the methods of Gamburd, Magee and Ronan <cit.> to prove Theorem <ref>. We now give an overview of the method of <cit.> and explain how what we have already proved extends the method to the current setting.Step 1. (loc. cit.) begins with a compact set K such that for x∈.o-K, properties A, B, and C hold. 
Here, we take K to be the set provided by Theorem <ref>. It is then deduced from A, B, and C that the number of distinct entries of x∈.o-K cannot decrease during an outgoing move. There is a further regularization of K in <cit.>, by adding to K a large ball B_ℓ^∞(R) if necessary, in order to assume that if for example x_1≤ x_2≤ x_3≤ x_4 with (x_1,x_2,x_3,x_4)∈.o-K thenx_3≥1/2x_4^1/3, 3log(1-2x_4^-1/3)-3log2/log x_4≥-1/2,and x_4≥10. These inequalities play a role in technical estimates throughout the proof, in particular, the proof of <cit.>. It is possible to increase K to ensure these hold (and the corresponding inequalities for other ordering of the coordinates of x) for the same reasons as in (loc. cit.). Also, without loss of generality, o∈ K.Step 2. Recall the quantity n_J^(1)(L) from our main Theorem <ref>. Fix Δ_0∈ Z_^3 and let o:=⟨ J,Δ_0⟩. Let K be the enlarged compact set from Step 1.Putting Propositions <ref> and <ref> (for the the current K) together gives usn_J^(1)(L) :=∑_α∈ Z^01{ℓ_α(J)≤ L} (Proposition <ref>)=∑_Δ: ⟨ J,Δ⟩∉ K1{max⟨ J,Δ̃⟩≤√(2sinh(1/2L))} +O_J(1) (Proposition <ref>)=∑_g∈: .o∉̸K1{max g.o≤√(2sinh(1/2L))} +O_J(1), where for Δ∈ Z^3 we wrote Δ̃ for an arbitrary lift of Δ to Z_^3. Since √(2sinh(1/2L))=√(e^L/2-e^-L/2)=e^L/4(1+O(e^-L))the required asymptotic formula for (<ref>) as L→∞ will follow from an estimate of the form∑_g∈: .o∉̸K1{max g.o≤ e^L}=c(o)L^β+o(L^β). Note as in <cit.> that the set .o-K breaks up into a finite union .o-K=∪_i=1^NØ_iwhere each Ø_i is the orbit of a point o_i∈.o-K under outgoing moves. The points o_i are each one move outside of K. The fact there are finitely many o_i requires the discreteness of .o and the compactness of K. Each Ø_i has the formØ_i={m_j_M… m_j_3m_j_2m_j_1o_i : M≥0, j_i≠ j_i+1,j_1≠ j_0(i)}where j_0(i) is such that m_j_0(i)o_i∈ K, or in other words, m_j_0(i) is not outgoing on o_i. It can be deduced from A, B, C and preceding remarks that each orbit Ø_i can be identified with a subset _i⊂ via a bijectiong∈_i↦ g.o∈Ø_i.Moreover the _i are disjoint. Therefore∑_g∈: .o∉̸K1{max g.o≤ e^L}=∑_i=1^N∑_g∈_i1{max g.o≤ e^L} =∑_i=1^N∑_x∈Ø_i1{max x≤ e^L}. This reduces the count for n_J^(1)(L) to a count for each of a finite number of orbits under outgoing moves in a region where A, B and C hold.Step 3. The methods of <cit.> now take over, with one important thing to point out. A version of Lemma <ref> is crucially used during the proof of <cit.>. In that instance <cit.> can make a better bound than we have[Since in <cit.> we were concerned with integer points, this meant after ruling out certain special cases, it allowed us to take ϵ=1 in Lemma <ref>.], but what is really important is the existence of the uniform ϵ>0 in Lemma <ref>. This establishes a weaker, but qualitatively the same, version of <cit.> that plays the same role in the proof. The rest of the arguments of <cit.> go through without change to establishFor each 1≤ i≤ N there is a constant c(Ø_i)>0 such that∑_x∈Ø_i1{max x≤ e^L}=c(Ø_i)L^β+o(L^β). Using Theorem <ref> in (<ref>) completes the proof of Theorem <ref>.alpha 0BARAGAR2 Arthur Baragar. Asymptotic growth of Markoff-Hurwitz numbers. Compositio Math., 94(1):1–18, 1994.BARAGAR1 Arthur Baragar. Integral solutions of Markoff-Hurwitz equations. J. Number Theory, 49(1):27–44, 1994.BARAGAR3 Arthur Baragar. The exponent for the Markoff-Hurwitz equations. Pacific J. Math., 182(1):1–21, 1998.BROWNK. S. Brown.Cohomology of groups, volume 87 of Graduate Texts in Mathematics. Springer-Verlag, New York-Berlin, 1982.CASSELS J. W. S. 
Cassels. An introduction to Diophantine approximation. Cambridge Tracts in Mathematics and Mathematical Physics, No. 45. Cambridge University Press, New York, 1957.GMR A. Gamburd, M. Magee, and R. Ronan. An asymptotic formula for integer points on Markoff-Hurwitz surfaces. arXiv:1603.06267v2, Sept. 2017. GENDULPHE M. Gendulphe. What's wrong with the growth of simple closed geodesics on nonorientable hyperbolic surfaces. arXiv:1706.08798v1, June. 2017. H Allen Hatcher. On triangulations of surfaces. Topology Appl., 40(2):189–194, 1991.HPZ H. Hu, S. Peow Tan, and Y. Zhang. Polynomial automorphisms of ^n preserving the Markoff-Hurwitz polynomial. arXiv://1501.06955, January 2015.HN Yi Huang and Paul Norbury. Simple geodesics and Markoff quads. Geom. Dedicata, 186:113–148, 2017.HURWITZ A. Hurwitz. Über eine Aufgabe der unbestimmten Analysis. Archiv. Math. Phys., 3:185–196, 1907. Also: Mathematisch Werke, Vol. 2, Chapt. LXX (1933 and 1962), 410–421.MARKOFF A. Markoff. Sur les formes quadratiques binaires indéfinies. Math. Ann., 17(3):379–399, 1880.MIRZSIMPLE Maryam Mirzakhani. Growth of the number of simple closed geodesics on hyperbolic surfaces. Ann. of Math. (2), 168(1):97–125, 2008.MR G. McShane and I. Rivin. A norm on homology of surfaces and counting simple geodesics. Internat. Math. Res. Notices, (2):61–69 (electronic), 1995.Penner R. C. Penner. The decorated Teichmüller space of punctured surfaces. Comm. Math. Phys., 113(2):299–339, 1987.SCH Martin Scharlemann. The complex of curves on nonorientable surfaces. J. London Math. Soc. (2), 25(1):171–184, 1982. Michael Magee,Department of Mathematical Sciences,Durham University,Lower Mountjoy, DH1 3LE Durham,United Kingdom [email protected] | http://arxiv.org/abs/1705.09377v2 | {
"authors": [
"Michael Magee"
],
"categories": [
"math.GT",
"math.DG",
"math.NT",
"57M50, 11J06"
],
"primary_category": "math.GT",
"published": "20170525215155",
"title": "Counting one sided simple closed geodesics on Fuchsian thrice punctured projective planes"
} |
Internal Structure of Giant and Icy Planets: Importance of Heavy Elements and Mixing

Ravit Helled and Tristan Guillot

Ravit Helled, Institute for Computational Sciences, University of Zurich, Winterthurerstr. 190, CH 8057 Zurich, Switzerland. [email protected]
Tristan Guillot, Observatoire de la Côte d'Azur, Bd de l'Observatoire, CS 34229, 06304 Nice Cedex 4, France. [email protected]

In this chapter we summarize current knowledge of the internal structure of giant planets. We concentrate on the importance of heavy elements and their role in determining the planetary composition and internal structure, in planet formation, and during the planets' long-term evolution. We briefly discuss how internal structure models are derived, present the possible structures of the outer planets in the Solar System, and summarise giant planet formation and evolution. Finally, we introduce giant exoplanets and discuss how they can be used to better understand giant planets as a class of planetary objects.

§ INTRODUCTION

Characterisation of the outer planets in the Solar System has been one of the major objectives in planetary science for decades. Throughout the years significant progress has been made, both in theory and in observations. We now have a much better understanding of the behaviour of hydrogen and other elements at high pressures and temperatures, and of the physical processes that govern the planetary structure. The various spacecraft that have visited (and are currently visiting) the outer planets in the Solar System, Jupiter, Saturn, Uranus, and Neptune, provide us with constraints on the gravitational fields, rotation periods, and atmospheric compositions of the planets that can be used by structure models. In parallel, the discovery of giant planets around other stars (giant exoplanets) provides an opportunity to study the diversity in giant planet composition, which can be used to better understand giant planet formation. Despite the great progress in planetary modelling in the last few decades, there are still several open questions regarding the nature of Jupiter, Saturn, Uranus and Neptune.

Many review chapters have been written recently on giant planet interiors (e.g., Fortney & Nettelmann, 2010; Guillot & Gautier, 2014; Baraffe et al., 2014; Militzer et al., 2016) and this chapter aims to be somewhat complementary to those. Our chapter is organised as follows. First, we discuss the interiors of the Solar System's gas giant planets (Jupiter and Saturn) and icy planets (Uranus and Neptune). Second, we discuss the standard formation mechanism of giant planets and how it is linked to their composition. Finally, we provide an outlook on the compositions of giant exoplanets.

§ GIANT PLANET STRUCTURE

§.§ Making an interior model
The temperature gradient ∇_T depends on the process by which the internal heat is transported.The last equations is the only equation that is time (t) dependent and is used for modelling the planetary evolution. u is the internal energy, q is an energy source that is typically assumed to be zero for planets, and L is the intrinsic luminosity.In order to account for rotation, the hydrostatic equation (Eq. 2) includes additional terms which depend on ω, the spin rate, M the total mass of the planet, R the total radius and ϕ_ω is a function of the radius, internal density and spin rate(see Guillot 2005). For a non-spinning planet, ϕ_ω=ω=0. For a spinning planet, this equation is valid in the limit of a barotropic fluid and a solid-body rotation. The radius is then considered as a mean volumetric radius. In that case, we can obtain constraints on the internal density distribution by measuring the departure of the planet's gravity field from sphericity. These are expressed in the form of the gravitational moments, even functions of the radius, r, and the colatitude, θ (see e.g., Guillot 2005, Hubbard 2013):J_2ℓ=-1/Ma^2ℓ∫ r'^2ℓP_2ℓ(cosθ')ρ(r',θ')d^3r'where a is the equatorial radius, and P_n is the nth-order Legendre polynomial.Interior models are constructed to fit the mass (essentially J_0) and as many of the J_2ℓ's as have been measured.Although each higher order J gives additional information on ρ(r). The density distribution correspond to a hydrostatic configuration when the contribution of dynamical effects (e.g., winds) on the gravitational moments are not included.Unfortunately, there is no unique solution for the internal structure of a planet. The inferred structure depends on the model assumptions and the equations of state (EoSs) used by the modeller. The main uncertainties in structure models are linked to the following assumptions/setups: (i) number of layers (ii) the composition and distribution of heavy elements (iii) heat transport mechanism, and (iv) rotation period and the dynamical contribution of winds (e.g., differential rotation).Since the gas giant planets (Jupiter and Saturn) consist of mainly hydrogen and helium,their modelling relies on the EoS of hydrogen, helium, and their mixture.The major uncertainty concerning the EoS of hydrogen is in the region of 0.5-10 Mbar, where hydrogen undergoes a transition from a molecular phase to a metallic phase. The EoS of helium in the relevant pressure region is simpler since helium ionization requires larger pressures and a phase transition is not expected to occur. The difficulty with calculating the EoS of helium, however, is due to the separation of helium droplets from the hydrogen-helium mixture (e.g,Fortney & Hubbard, 2003; Stevenson & Salpeter, 1977a,b). The EoS for the heavier elements (metals, rocks, ices) have generally received somewhat less attention than those for hydrogen and helium.Despite the difficulty, there have been substantial advances in high-pressure experiments and ab initio calculations of EoSs of hyrogen and helium and ofof heavier materials, as well as on the miscibility properties, for water, ammonia, rock, and iron. Detailed description on EoSs and interior modeling can be found in Saumon & Guillot (2004), Baraffe et al. (2014), Fortney & Nettelmann (2010), Militzer et al. (2016), Miguel et al. (2016), Fortney et al. (2016). 
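To make the role of the structure equations above concrete, the following is a minimal, illustrative Python sketch (not part of the original chapter) that integrates only the mass-conservation and hydrostatic equations for a non-rotating planet, closing the system with a simple n = 1 polytropic relation P = K ρ^2 instead of a realistic hydrogen-helium equation of state. The value of K, the central density, the seed mass, and the crude Euler stepping are all assumptions chosen for illustration; real interior models solve the full set of equations with detailed EoSs and fit the measured mass and gravitational moments.

```python
import numpy as np

G = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
K = 2.1e5       # illustrative polytropic constant for P = K*rho^2 [SI units]

def integrate_structure(rho_c, dm=1e22):
    """Integrate dr/dm and dP/dm outward for an n=1 polytrope (Euler steps)."""
    m = 1e24                                         # small constant-density seed
    rho = rho_c                                      # avoids the r = 0 singularity
    r = (3.0 * m / (4.0 * np.pi * rho)) ** (1.0 / 3.0)
    P = K * rho ** 2                                 # central pressure from the EoS
    while P > 1e5:                                   # stop near the ~1 bar level
        r += dm / (4.0 * np.pi * r ** 2 * rho)       # mass conservation
        P -= dm * G * m / (4.0 * np.pi * r ** 4)     # hydrostatic equilibrium
        m += dm
        rho = np.sqrt(max(P, 0.0) / K)               # polytropic EoS
    return m, r

mass, radius = integrate_structure(rho_c=4.0e3)      # assumed central density [kg/m^3]
print(f"total mass  = {mass / 1.898e27:.2f} M_Jup")
print(f"mean radius = {radius / 7.15e7:.2f} R_Jup")
```

A well-known property illustrated by this toy case is that for an n = 1 polytrope the radius is nearly independent of the total mass, which is part of the reason why Jupiter-mass and Saturn-mass planets have comparable radii.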
§.§ Jupiter and SaturnTypically, the interiors of Jupiter and Saturn are modelled assuming the existence of a distinct heavy-element core which is surrounded by an envelope divided into an inner helium and heavy element rich layer and an outer envelope which is helium poor and less enriched with heavy elements(e.g., Guillot, 1999, Saumon & Guillot, 2004; Nettelmann et al., 2008; 2012). The existence of a core is linked to the traditional (and somewhat outdated) view of planet formation in the core accretion scenario, as we discuss below, and thedivision of the envelope into two is based on the idea that at high pressures not only does hydrogen change from the molecular to the metallic phase, but also to the immiscibility of helium in hydrogen (Stevenson & SalPeter 1977a,b, Fortney & Hubbard, 2003).Recent calculations of the phase diagram of a hydrogen-helium mixture confirm the immiscibility of helium in hydrogen (e.g., Lorenzen et al.,2009; 2011, Morales et al., 2009; 2013). Figure 1 shows the phase diagram for the hydrogen-helium mixture for a helium mole concentration of 8% (see Guillot & Gautier 2014 for details).Indeed, the atmospheres of both Jupiter and Saturn are observed to be depleted in helium compared to a proto-solar ratio (von Zahn et al. 1998, Conrath & Gautier 2000), and helium rain is the most common (although not the only) explanation for Saturn's high thermal emission (see Fortney & Nettelmann, 2010 and references therein). The location in which helium rain occurs and its timescale are important to determine the distribution of helium and heavy elements in the interiors of Jupiter and Saturn (e.g., Stevenson & Salpeter 1977a,b). For Jupiter, standard 3-layer models typically infer a core mass smaller than ∼10 M_⊕ (Earth mass). The global enrichment in heavy elements is uncertain, and the total heavy element mass is estimated to be between 10 and 40 M_⊕ (Saumon & Guillot, 2004; Nettelmann et al., 2015). Alternative models with a different EoS for hydrogen (Militzer et al., 2008; Hubbard & Militzer, 2016) imply the existence of a relatively massive core (∼16 M_⊕) and only a very small enrichment (if at all) in heavy elements in the gaseous envelope.Recently, Miguel et al. (2016) investigated the sensitivity of the derived internal structure of Jupiter to the estimates of its gravitational moments and the accuracy of the used EoSs. They suggest that the differences in the inferred structureare linked to differences in the internal energy and entropy calculation. This in return leads to differences in the thermal profiles and therefore to different estimations in the core and heavy-element masses. Overall, it seems that preferable solutions are ones with cores (∼10 M_⊕) and a discontinuity of the heavy-element enrichment in the envelope, with the inner helium-rich envelope consistsof a more heavy elements than the outer, helium-poor envelope.Recently, new estimates for Jupiter's gravitational field were determined by the Juno spacecraft (Bolton et al., 2017).Interior models of Jupiter that fit the data suggest that another feasible solution for Jupiter is the existence of a diluted core (Wahl et al., 2017). 
In this case, Jupiter's core is no longer viewed as a pure heavy-element central region with a density discontinuity at the core-envelope-boundary, but as a diluted core which is more extended region, and can also consist of lighter elements.This model resembles the primordial structure derived by formation models (see below),providing a potential link between giant planet formation models, and structure models of the planets at present day. The internal structure of Saturn is also uncertain - although its derived structure is less sensitive to the hydrogen EoS (e.g., Saumon & Guillot, 2004), it is dependent on the hydrogen-helium phase diagram which is not fully constrained. Additional complication arises from the uncertainty in Saturn's rotation period and shape (e.g., Fortney et al., 2017). Overall, structure models suggest that Saturn is more enriched in heavy elements compared to Jupiter, also having a larger core.The total heavy-element mass in Saturn is estimated to be ∼16 - 30 M_⊕ with a core mass between zero and 20 M_⊕ (e.g., Saumon & Guillot, 2004; Nettelmann et al., 2012; Helled & Guillot, 2013). §.§ Uranus and NeptuneUranus and Neptune are the outermost planets in the Solar System.Unlike Jupiter and Saturn, their gaseous envelopes are relatively small fractions of their total masses. The available constraints on interior models of Uranus and Neptune are limited. The gravitational harmonics of these planets are known only up to fourth degree (J_2, J_4), and the planetary shapes and rotation periods are not well determined (e.g., Helled et al. 2010). Although Uranus and Neptune have similar masses and radii, they appear to be quite different internally.The measured low heat flux of Uranus implies that either it has lost its heat or there is a mechanism that reduces the efficient of cooling.In addition, Uranus radius is larger than Neptune's but its mass is smaller. This means that Neptune is denser than Uranus by 30%. The origin of this dichotomy is unknown, and could be a result ofgiant impacts that affected the internal structure of these planets (e.g., Podolak & Helled, 2012). Three main approaches have been used for modelling Uranus and Neptune. The first assumes that the planets consist of three layers: a core made of “rocks” (silicates, iron), an “icy” shell (H_2O, CH_4, etc.), and a gaseous envelope (composed of molecular hydrogen and helium with some heavier components).This approach uses physical EoSs of the assumed materials to derive a density profile that best fits the measured gravitational coefficients, similarly to the standard models of Jupiter and Saturn (e.g., Nettelmann et al., 2013).A second approach makes no a priori assumptions regarding planetary structure and composition. The radial density profiles of Uranus and Neptune that fit their measured gravitational fields are derived using Monte Carlo searches (e.g., Marley et al. 1995, Podolak et al. 2000). A third one uses a continuous radial density and pressure profiles that fit the mass, radius, and gravitational moments of Uranus and Neptune, and then use this density profile to investigate the possible composition of the planets by using theoretical EoSs (e.g., Helled et al, 2011).While there are variations in the derived composition of Uranus and Neptune using the different approaches several results seem to be robust: all models find that the outer envelopes of the planets are highly enriched with heavy elements, and that the heavy element concentration increases towards the planetary centre. 
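As a purely illustrative complement to the modelling approaches just described, the short Python sketch below evaluates the mass budget of a toy three-layer, Uranus-like planet built from constant-density shells. The layer densities and boundary radii are invented for illustration and are not taken from any published model; actual studies derive the density in each layer from physical equations of state and additionally require the profile to reproduce the measured gravitational coefficients.

```python
import numpy as np

M_EARTH = 5.972e24    # kg
R_PLANET = 2.54e7     # roughly Uranus-sized radius [m], for illustration only

# illustrative three-layer profile: (name, outer radius fraction, density [kg/m^3])
layers = [
    ("rock core",     0.20, 8000.0),
    ("ice shell",     0.75, 2500.0),
    ("H-He envelope", 1.00,  350.0),
]

def layer_masses(layers, R):
    """Mass of each constant-density shell of a spherically layered planet."""
    masses, r_in = {}, 0.0
    for name, x_out, rho in layers:
        r_out = x_out * R
        masses[name] = 4.0 / 3.0 * np.pi * (r_out ** 3 - r_in ** 3) * rho
        r_in = r_out
    return masses

m = layer_masses(layers, R_PLANET)
total = sum(m.values())
print(f"total mass = {total / M_EARTH:.1f} M_Earth")
for name, mass in m.items():
    print(f"  {name:13s}: {mass / M_EARTH:5.1f} M_Earth ({100 * mass / total:4.1f}%)")
```

With these assumed numbers the toy planet comes out near 15 Earth masses with only a few Earth masses of hydrogen and helium, which is qualitatively the kind of bulk budget discussed for Uranus and Neptune below, but the specific layer values here carry no physical weight.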
Recent 3-layer models suggest that Uranus and Neptune contain a minimum of ∼2 M_⊕and about 3 M_⊕ of hydrogen and helium, respectively.When considering that the planetary interior has distinct layers of different composition (3-layer model), the ice-to-rock ration is found to be high in both planets. The inferred global ice-to-rock-ratio is estimated to be between 19 and 36 in Uranus, while Neptune has a wide range of solutions from 3.6 to 14.Random models of Uranus and Neptune suggest both planets consists of small cores and enriched outer envelopes, and that both planets require a density jump at a radius of about 0.6 to 0.7 of the total radius to fit the gravity data (see Marley et al. 1995, Podolak et al. 2000).On the other hand, the empirical models of Helled et al. (2011) suggest that both planets can have a gradual structure in which there is a gradual increase of the heavier material toward the centre. They also found that the innermost regions of both Uranus and Neptune cannot be fit to the empirical density distribution with pure ice/rock, but by ∼82% of SiO_2 and ∼ 90% of H_2O by mass for both Uranus and Neptune. The overall metallicity of the planets was found to be 0.75-0.92 and 0.76-0.9 for Uranus and Neptune, respectively.In addition, they emphasise the fact that the planetary interiors could be depleted in ices, and still fit the measured gravitational field, suggesting that these planets are not necessarily "icy".Figure 2 shows the density profiles of the outer planets in the Solar System for standard 3-layer models. §.§ Non-Adiabatic Interiors We now realise that in some cases, and perhaps in most cases, a fully adiabatic model for the giant planets is too simplistic. The fact that Uranus has a much smaller internal luminosity than Neptune has long ago been attributed to the presence of a molecular weight gradients in the deep interior (Podolak, Hubbard & Stevenson 1991). The inhibition of convection in the presence of helium rain has also been shown to be a likely possibility (Stevenson & Salpeter 1977b). Recently, f-mode oscillations of Saturn were discovered through the observation of its rings by the Cassini spacecraft. The analysis of the splitting of these oscillation modes led Fuller (2014) to propose that Saturn's deep interior must be stably stratified: This is at present the only way to explain the unexpected splittings, through interactions between f-modes propagating in the convective envelope and g-modes propagating in the stable region of the deep interior.A non-adiabatic structure may arise because of a primordial compositional gradient due to the formation process itself, because of the erosion of a central core or because of immiscibility effects (for example of helium in metallic hydrogen). Composition gradients can inhibit convection and affect the heat transport in giant planets. If they are weak and the luminosity is large, they will be overwhelmed by overturning convection which will then ensure a rapid mixing and homogeneization. Otherwise, they can either lead to layered convection, a less efficient type of convection, or inhibit convection and lead to heat transport by conduction or radiation. When in the presence of a homogenous composition, the convection criterion is given by the Schwarzschild criterion, ∇_ad>∇, where ∇≡ dln T /dln P, and ∇_ad is the adiabatic gradient. In case of an inhomogeneous environment, one has to take into account the effect of the composition gradient on the stability criterion. 
Considering a mixture of elements with mass fractions (X_1,X_2,...,X_n), the composition gradient is given by (e.g., Vazan et al., 2015):∇_X≡∂ln T(p,ρ,X)/∂ X_j·dX_j/dln P. In this case the convection criterion is given by the Ledoux criterion, ∇-∇_ad-∇_X<0.Layered convection is convective mixing that can occur in regions that are stable according to the Ledoux criterion, but unstable according to the Swcharzschild criterion, if the entropy and chemical stratifications have opposing contributions to the dynamical stability. In that case, diffusive convection can take place (e.g, Rosenblum et al., 2011; Wood et al., 2013; Mirouh, et al., 2012), leading to slow mixing and a more efficient heat transfer. Layered-convection can occur in two forms: fingering convection or double-diffusive convection. In the first, the entropy is stably stratified (∇ - ∇_ad < 0), but the composition gradient is unstably stratified (∇_X < 0); while in the second, oscillatory double-diffusive convection (ODD), entropy is unstably stratified(∇ - ∇_ad > 0), but chemical composition is stably stratified (∇_X > 0); it is related to semi-convection, but can occur even when the opacity is independent of composition.A pioneering study on double diffusive convection in planetary interiors was presented by Leconte & Chabrier (2012; 2013) where Jupiter's and Saturn's interiors were modeled assuming the presence of double-diffusive convection caused by a heavy-element gradient in their gaseous envelopes.These models investigated the effect on the internal heat transport efficiency and the internal structure.In this scenario, the planetary interiors can be much hotter and the planets can accommodate larger amounts (a few tens of M_⊕) of heavy elements. However, these models can be considered as extreme because they assumed a compositional gradient to be present throughout the envelope. Evolution models matching Jupiter's present constraints show that is almost impossible to avoid overturning convection homogeneizing a large fraction of the envelope (Vazan et al. 2016). This therefore strongly limits the extent and consequences of layered convection. At the same time, the results of Vazan et al. (2016) show that both Jupiter and Saturn can be non-adiabatic and still fit observations.Evolution models with layered convection in the helium-rain region of Jupiter have recently been calculated (Nettelmann et al. 2015, Mankovich et al. 2016). In these models, the molecular envelope cools over time, but the deep interior can actually heat up because of the loss of specific entropy due to helium settling (see also Stevenson & Salpeter 1977b). While it is not yet clear that layered convection does really occur in the helium mixing region (it depends on the thermodynamic behavior of the hydrogen-helium mixture in the presence of a phase separation), these models show that non-adiabaticity is certainly an important aspect of the evolution of cool gaseous planets such as Jupiter and Saturn. Recently, Nettelmann et al. (2016) modelled Uranus accounting for a boundary layer and modelled the planetary evolution. 
They find that the existence of such a boundary layer can explain the the low luminosity of the planet.The thermal boundary leads to a hotter interior, suggesting that the deep interior could have a large fraction of rocks.Investigations of non-adiabatic structure of the outer planets are still ongoing, and we expect major progress in this direction in the upcoming years.Overall, each of theouter planet in the Solar System can be modelled by using a more standard layered interior or alternatively, by a structure in which there isgradual change in composition.Figure 3 shows sketches of the possible internal structures of the outer planets in the Solar System accounting for these two possibilities.§ GIANT PLANET FORMATION AND HEAVY ELEMENTS DISTRIBUTION§.§ The core accretion modelThe standard model for giant planet formation is known as core accretion (see Helled et al., 2014 for a review).In this model, the formation of a gaseous planet begins with the buildup of a heavy-element core due to the growth and accretion of solids which can be in the form of planetesimals(e.g., Pollack et al., 1996; Alibert et al., 2005) or pebbles (e.g., Lambrechts et al., 2014, Levison et al. 2016) and continues with gas accretion.The planetary formation history can be divided into three main phases. The first phase, Phase-1, is dominated by core heavy-element accretion. A small core accretesplanetesimals/pebbles until it hasobtained mostof the heavy-element mass M_Z within its gravitational reach.The gas mass, M_gas (H-He), also grows, but it remains only a very small fraction of M_Z. During the second phase, Phase-2, a gaseous envelope is being accreted slowly, Ṁ_̇Żdecreases considerably, and Ṁ_̇ġȧṡ increases slowly until itexceeds the heavy-element accretion rate. As the envelope's massincreases, the expansion of the zone of gravitational influence allows further accretion of planetesimals. Phase-3 correspond to the runaway gas accretion phase.WhenM_Z∼ M_gas, known as crossover, the gas accretion rate increasesconsiderably, nearly at free fall.The heavy-element accretion rate during this phase is poorly known but is typically assumed to be small (Helled & Lunine, 2014).The gas accretion is terminated by either disk dissipation or gap opening, and the planet gains its final mass (assuming no mass loss or late accretion occur).§.§ Core Growth and MixingThe envelope of the forming giant planets is typically considered to consist mainly of hydrogen and helium. If the accreted heavy-elements reach the center (core) without depositing mass in the envelope, the planetary envelope has a sub-stellar composition due to the depletion in heavies, and in this case M_env∼ M_gas.However, if planetesimals/pebbles suffer a strong mass ablation as they path through the gaseous envelope they can lead to a substantial enrichment with heavy elements, typically resulting in a metal-rich proto-atmosphere.In this case, the core mass M_core and the heavy element mass M_Z can differ. The determination of the planetesimal mass ablation depends on the characteristics of the accreted heavy elements such as their composition, size, and mechanical strength. 
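The distinction between the core mass and the total accreted heavy-element mass that results from planetesimal ablation can be illustrated with a deliberately schematic bookkeeping exercise (a sketch, not a formation model): each small increment of accreted solids is split between the core and the envelope according to an assumed ablated fraction that increases sharply once the core has grown beyond an assumed threshold of a couple of Earth masses, in the spirit of the formation results cited next. All numerical values below are arbitrary illustrative choices.

```python
def accrete_solids(total_accreted=20.0, step=0.01):
    """Split accreted solids (in Earth masses) between core and envelope.

    Below an assumed core mass of 2 M_Earth most solid material reaches the
    core; above it, most mass is assumed to be ablated in the envelope.
    """
    m_core, m_env_Z, accreted = 0.0, 0.0, 0.0
    while accreted < total_accreted:
        f_ablated = 0.1 if m_core < 2.0 else 0.9   # assumed ablated fraction
        m_env_Z += f_ablated * step                # heavy elements left in the envelope
        m_core += (1.0 - f_ablated) * step         # heavy elements reaching the core
        accreted += step
    return m_core, m_env_Z

m_core, m_env_Z = accrete_solids()
m_Z = m_core + m_env_Z
print(f"M_Z (total heavy elements)      = {m_Z:.1f} M_Earth")
print(f"M_core                          = {m_core:.1f} M_Earth")
print(f"heavy elements in the envelope  = {m_env_Z:.1f} M_Earth")
```

Even in this crude sketch most of the accreted heavy elements end up in the envelope rather than in the core, which is the qualitative point developed in the formation calculations discussed below.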
Formation models typically find that when M_core∼ 2M_⊕ the solids tend to remain in the atmosphere(Iaroslavitz & Podolak, 2007; Lozovsky et al., 2017).The enrichment of the planetary envelope (envelope pollution) has a strong influence on the planetary growth;it canstrongly reduce the critical mass of the planet for triggering rapid gas accretion, i.e., in reaching Phase-3 (e.g., Hori & Ikoma, 2011; Venturini et al., 2016).It is not clear at this point whether the last phase of accretion, in which most of the planetary mass is gained, is that of heavy-element poor gas or whether heavy elements manage to be accreted very efficiently. We view the former as more likely, because tidal barriers from a forming giant planet repel pebbles and planetesimals more efficiently than gas in protoplanetary disks (Tanaka & Ida 1999, Paardekooper & Mellema 2004).In that case, upward mixing is required to explain the fact that Jupiter, Saturn, Uranus and Neptune are all enriched in heavy elements compared to the Sun (e.g., Guillot & Gautier 2014). This upward mixing of heavy elements, if convection is present, is energetically possible (Guillot et al. 2004). It requires these heavy elements to be miscible in the envelope, which appears to be the case (Wilson & Militzer, 2010; 2012).An open question which remains is the efficiency at which this mixing (or core erosion) proceeds: this depends both on the initial state (formation mechanism), on the availability of overturning convection and on the efficiency of layered convection where it is present. Recently, the heavy-element distribution and core mass in proto-Jupiter at different stages during its formation was investigated (Lozovsky et al., 2017).The accreted planetesimals were followed as they entered the planetary envelope, and their distribution within the protoplanet accounting for settling (due to saturation) and convective mixing was determined.It was clearly shown that there is an important difference between theheavy material mass M_Z and core mass M_core, because most of the accreted heavy elements remain in the planetary envelope, and the core mass can be significantly smaller than the total heavy-element mass.This is demonstrated in Fig. 4 where we show M_Z (red curve) vs. M_core (purple curve).Although convective mixing can mix the heavy elements in the outer envelope, the innermost regions which have a steep enough composition gradient, can remain stable against convection. These inner regions can also consist of hydrogen and helium and could be viewed as diluted cores. The diluted cores are of the order of 20 M_⊕ in mass, but with lower density than that of a pure-Z core due to the existence of H+He. The left panel of Fig. 5 shows the calculated distribution of heavy-elements in proto-Jupiter based on formation models.§.§ Mixing During Long-Term EvolutionIf the distribution of heavy elements is not homogenous due to the formation process, as suggested by Lozovsky et al. (2017), it can affect the planetary thermal evolution. In addition, the primordial internal structure might change during the several 10^9 of evolution.Convective mixing of a gradual distribution of heavy elements in Jupiter's interior was investigated by Vazan et al. (2016). The primordial internal structure is somewhat similar to the one found by formation models.The right panel of Fig. 
5 shows the preliminary (dotted blue) and final (after 4.5×10^9 years, dashed red) heavy-element distributions in Jupiter accounting for mixing using a state-of-the-art planet evolution code (e.g., Vazan et al., 2016). In this model, Jupiter consists of ∼ 40 M_⊕ of heavies.It is found that the innermost regions (∼ 15% of the mass) are stable against convection, and therefore, act as a bottleneck in terms of heat transport, while the outer envelope is convective throughout the entire evolution.The increasing temperature gradient between the innermost non-adiabatic and non-convective region and the outer convective region leads to a small penetration inward during the evolution,leading to a moderate heavy-element enrichment in the outer envelope as time progresses (see red curve in Fig. 4b). The innermost region which is highly enriched in heavy element has a lower entropy and is stable against (large-scale) convection during the entire evolution.As a result, the temperatures in the inner regions of the planet remain high while the outer ones can cool efficiently. § EXOPLANETS§.§ Radii and bulk compositions Since the mid 90s, we know that gaseous planets exist in other planetary systems, which provides us with the opportunity to study giant planets more generally.Giant exoplanets are a complementary group to the Solar-System's outer planets - while the measurements are typically limited to mass and radius determination which provides their mean density - their large number provide us withgood statistics in terms of planetary bulk composition and the physical mechanisms governing the planetary evolution.Most of the giant exoplanets with well-known masses and radii are “hot Jupiters”, i.e., short period (P< 10days) Jupiter-mass planets (0.5-10M_ Jup). A significant fraction of these objects ∼ 50% are more inflated than predicted by standard evolution models of irradiated planets, which implies that another physical mechanism either slows the planets' cooling and contraction, or leads to an extra dissipation of energy in their interior (e.g., Guillot & Showman, 2002; Laughlin et al., 2011). The bulk composition of exoplanets can be inferred by assuming a common mechanism inflating hot Jupiters (e.g., Guillot et al. 2006, Burrows et al. 2007) or by selecting only modestly irradiated giant exoplanets (Thorngren et al. 2016). The amount of heavy elements that they contain is inversely related to their size. Because giant planets are compressible, both the irradiation that they receive and their progressive contraction must be taken into account.Three main results can be extracted from these studies: 1. While giant exoplanets above the mass of Saturn are generally mostly made of hydrogen and helium, some of them require surprisingly large masses of heavy elements (up to hundreds of M_⊕, e.g., Moutou et al., 2013) or large ratios heavy elements to gas (as in the case of HD149026b which contains about 70 M_⊕ of heavies for a total mass of 120 M_⊕ – see Ikoma et al. 2006). 2. The ratio of the mass of heavy elements to the total planetary mass is negatively correlated with planetary mass (Thorngren et al. 2016), in agreement with formation of these planets by core accretion. 3. The mass of heavy elements in hot Jupiters appears to be correlated with the metallicity of the parent star (e.g., Guillot et al., 2006, Burrows et al. 2007). However, this correlation is not statistically significant in the sample of weakly irradiated planets (Thorngren et al. 
2016).Generally, it appears that at least some of the planets were able to efficiently collect the solids present in the disks.This is something that is not clearly explained by formation models, and is linked to the (unknown) efficiency of solid accretion during Phase-3 and/or to late accretion of solids. Understanding this subject should improve significantly with PLATO thanks to the precise characterisation of a large number of transiting giant exoplanets, including those at large distances from their parent stars.§.§ Massive cores or enriched envelopes? Studies of the evolution of giant exoplanets typically assume the planets are made of a dense core and a solar-composition hydrogen-helium envelope. However, as discussed above, this is no more than a convenient simplification and this assumption is not justified. In fact, it is likely that, as for the giant planets in our Solar System, a significant fraction of the heavy elements are in the envelope rather than in a central core. Whether the heavies are mixed in the envelope or not has two effects: An enriched envelope has a larger molecular weight and shrinks more effectively than when heavy elements are embedded in a central core (e.g., Baraffe et al., 2008). This also means a larger opacity which, for hot Jupiters which are cooling through a thick radiative zone (Guillot et al., 1996), implies a less efficient cooling and contraction (Guillot, 2005; Vazan et al., 2013). When the enrichment is moderate (say, a few times solar) whether the heavy elements are embedded in the core or distributed in the envelope seems to have limited consequences. However, for larger enrichments, it can lead to an overestimate of the heavy-element mass required to fit the planetary radius (Baraffe et al. 2008, Vazan et al., 2013).As for Jupiter and Saturn, the presence of potentially large amount of heavy elements can also lead to double-diffusive convection in the envelope. It has been proposed that this may account for the anomalously large size of some hot Jupiters (Chabrier & Baraffe, 2007). This is unlikely however, for the same reasons as discussed previously, namely that overturning convection develops easily and should limit the extent of the double-diffusive region. In addition, the effect of the increased mass of heavy elements essentially compensates the effect of the delayed contraction on the planetary radius caused by compositional inhomogeneity (Kurokawa & Inutsuka, 2015). The extent of whether the envelopes and atmospheres are significantly enriched or not is thus important both to better constrain the bulk compositions of the planets, but also to provide constraints to planet formation models. The ability to characterise giant planet atmospheres, and in particular determine their enrichment in heavy elements, would enable us to link interior and atmospheric compositions. This is crucial to understand the interior structure and formation of these planets. Currently, the possible determinations of chemical abundances are to be taken with extreme caution, both because of data quality and uncertainties on the presence of clouds (e.g., Deming & Seager, 2017), but this situation should change in the near-future, in particular with JWST.§ CONCLUSIONSCharacterising the planets in the outer Solar System is an ongoing challenge.Each planet has its special features and open research questions that are associated with its special nature.For Jupiter, we still try to get a better determination of its core mass and overall enrichment. 
Also for Saturn, we still need to better constrain its composition and structure, but with a focus on the role of helium rain and its cooling rate.The internal structures of Uranus and Neptune should be better determined, the source for the different structures and cooling rates of the planets still has to be resolved. In addition, understanding the connection between giant planet formation, evolution, and structure is still incomplete and is highly desirable.Ongoing and future space missions provide more constraints for structure models, and at the same time introduce new challenges and directions for exploration for modelers. Several of the open questions have the potential to be solved in the fairly near future, in particular, in the following subjects:(1) Improvements in EoSs calculations and experiments.This will allow us to understand the behaviour of materials at high pressures and temperatures, and to discriminate among various EoSs. (2) Significant improvements of the measurement of Jupiter's and Saturn's gravitational fields by the Juno (Bolton et al., 2017) and Cassini (Spilker, 2012) missions.With accurate measurements of the gravitational fields, and of the water abundance in the case of Jupiter, we will be able to reduce the parameter space of possible internal structures. (3) The potential of sending a probe into Saturn's atmosphere and measuring the abundance of noble gases would allow us to understand enrichment mechanisms in giant planets, and their origins. In the longer run, a mission dedicated for the ice giants (Uranus and/or Neptune) would bring new views of these icy planets. (4)Additional and more accurate measurements of giant and intermediate-mass exoplanets.An overview of the variation in atmospheric composition of giant exoplanets and its connection to the host star's properties, and accurate determination of the planetary mean density will allow us to understand the nature of giant and icy planets in a boarder manner.Clearly, we still have not solved all the mysteries related to gaseous planets, and much work is required. However, we expect new observations, exciting discoveries, and theoretical developments that will lead to a leap in understanding the origin, evolution, and interiors of this class of planetary objects.§ REFERENCESAlibert, Y., Mordasini, C., Benz, W. & Winisdoer, C. (2005). Models of giant planet formation with migration and disc evolution. A&A, 434, 343. Baraffe, I., Chabrier, G., & Barman, T. (2008). Structure and evolution of super-Earth to super-Jupiter exoplanets: I. heavy element enrichment in the interior. A&A, 482, 315.Baraffe, I., Chabrier, G., Fortney, J. & Sotin, C. (2014). Planetary Internal Structures. Protostars and Planets VI, Henrik Beuther, Ralf S. Klessen, Cornelis P. Dullemond, and Thomas Henning (eds.), University of Arizona Press, Tucson, 914 pp., 763.Bolton et al. (2017). Jupiter's interior and deep atmosphere: the first close polar pass with the Juno spacecraft. Science, submitted.Burrows, A., Hubeny, I., Budaj, J. & Hubbard, W. B. (2007). Possible solutions to the radius anomalies of transiting giant planets. ApJ, 661, 502.Chabrier, G. & Baraffe, I. (2007). Heat transport in giant (exo)planets: A new perspective. ApJL, 661, L81. Conrath, D. & Gautier, D. (2000). Saturn Helium Abundance: A Reanalysis of Voyager Measurements. Icarus, 144, 124. Deming, D. & Seager, S. (2017). Illusion and Reality in the Atmospheres of Exoplanets. JGR Planets, 122, 53.Folkner, W. M. et al., 2017. 
Jupiter gravity field estimated from the first two Juno orbits. GRL, under review. Fortney, J. J. & Hubbard, W. B. (2003). Phase separation in giant planets: inhomogeneous evolution of Saturn. Icarus, 164, 228.Fortney, J. J. & Nettelmann, N. (2010). The interior structure, composition, and evolution of giant planets. Space Sci. Rev. 152, 423. Fortney, J. J., Helled, R., Nettelmann, N., Stevenson, D. J., Marley, M. S., Hubbard, W. B. & Iess, L. (2016). Invited review for the forthcoming volume "Saturn in the 21st Century." eprint arXiv:1609.06324 Fuller, J. (2014). Saturn ring seismology: Evidence for stable stratification in the deepinterior of Saturn, Icarus, 242, 283.Guillot, T., Burrows, A., Hubbard, W. B., Lunine, J. I. & Saumon, D. (1996). Giant planets at small orbital distances. ApJL, 459, L35.Guillot, T. (1999). A comparison of the interiors of Jupiter and Saturn. Icarus, 47. Guillot, T. (2005). The interiors of giant planets: Models and outstanding questions. Annual Review of Earth and Planetary Sciences, 33.Guillot, T. & Showman, A. P. (2002) Evolution of "51 pegasus b-like" planets. A&A, 385, 156.Guillot, T., Santos, N. C., Pont, F., Iro, N., Melo, C. & Ribas, I. (2006). A correlation between the heavy element content of transiting extrasolar planets and the metallicity of their parent stars. A&A, 453,L21.Guillot, T. & Gautier, D. (2014).Treatise on Geophysics (Eds. T. Spohn, G. Schubert). Treatise on Geophysics, 2nd edition.Helled, R. & Lunine, J. (2014). Measuring jupiter's water abundance by juno: the link between interior and formation models. MNRAS,441, 2273.Helled, R., Anderson, J. D., Podolak, M. & Schubert G. (2011). Interior models of Uranus and Neptune. ApJ, 726,15. Helled, R., Anderson,J. D. & Schubert G. (2010). Uranus and Neptune: Shape and rotation. Icarus, 210, 446. Helled, R. & Guillot, T. (2013). Interior models of Saturn: Including the uncertainties in shape and rotation. ApJ, 767, 113.Hori, Y. & Ikoma, M. (2011). Gas giant formation with small cores triggered by envelope pollution by icy planetesimals. MNRAS, 416, 419. Hubbard, W. B. & Horedt, G. P. (1983). Computation of Jupiter interior models from gravitational inversion theory. Icarus, 54, 456.Hubbard, W. B. & Militzer, B. (2016). A preliminary Jupiter model. ApJ, 820, 80.Iaroslavitz, E. & Podolak, M. (2007). Atmospheric mass deposition by captured planetesimals. Icarus, 187, 600-.Lambrechts, M.; Johansen, A. (2014). Forming the cores of giant planets from the radial pebble flux in protoplanetary discs. A&A, 572, id.A107, 12 pp.Kurokawa, H. & Inutsuka, S. (2015). On the Radius Anomaly of Hot Jupiters: Reexamination of the Possibility and Impact of Layered Convection. ApJ, 815, 78.Laughlin, G., Crismani, M. & Adams, F. C. (2011). On the anomalous radii of the transiting extrasolar planets. ApJL, 729, L7.Leconte, J. & Chabrier, G. (2012). A new vision on giant planet interiors: the impact of double diffusive convection. A&A, 540, A20Leconte, J. & Chabrier, G. (2013). Layered convection as the origin of Saturns luminosity anomaly. Nature Geoscience, 6, 347.Levison, H. F., Kretke, K. A. & Duncan, M. J. (2016). Growing the gas-giant planets by the gradual accumulation of pebbles. Nature, 524, 322.Lorenzen, W., Holst, B. & Redmer, R. (2009). Demixing of Hydrogen and Helium at Megabar Pressures. PRL, 102(11), 115701.Lorenzen, W., Holst, B. & Redmer, R. (2011). Metallization in hydrogen-helium mixtures. Phys. Rev. B, 84(23), 235109.Loubeyre,P., Letoullec, R. & Pinceaux, J. P. (1991). 
A new determination of the binary phase diagram of H_2-He mixtures at 296 K. Journal of Physics: Condensed Matter, 3, 3183.Lozovsky, M., Helled, R., Rosenberg, E. D. & Bodenheimer, P. (2017). Jupiters Formation and Its Primordial Internal Structure. ApJ, 836, article id. 227, 16 pp.Mankovich, C., Fortney, J. J. & Moore, K. L. (2016). Bayesian Evolution Models for Jupiter with Helium Rain and Double-diffusive Convection. ApJ, 832, article id. 113, 13 pp. Marley,M. S., Gómez P. & Podolak, M. (1995). Monte Carlo interior models for Uranus and Neptune. GJR, 100,23349.Miguel, Y., Guillot, T. & Fayon, L. (2016). Jupiter internal structure: the effect of different equations of state. A&A, 596, id.A114, 12 pp. Militzer, B., Hubbard, W. B., Vorberger, J., Tamblyn, I. & Bonev, S. A. (2008). A massive core in Jupiter predicted from first-principles simulations. ApJL, 688, L45.Militzer, B., Soubiran, F., Wahl, S. M., Hubbard, W. (2016). Understanding Jupiter's interior. JGR: Planets, 121, 1552-.Mirouh, G. M., Garaud, P., Stellmach, S., Traxler, A. L., & Wood, T. S. (2012). ApJ, 750, 61.Mizuno, H. (1980). Formation of the giant planets. Progress of Theoretical Physics, 64, 544.Morales, M. A., Hamel, S., Caspersen, K. &Schwegler,E. (2013). Hydrogen-helium demixing from first principles: From diamond anvil cells to planetary interiors. Phys. Rev. B, 87, 174105.Morales, M. A.,Schwegler, E., Ceperley, D. et al. (2009). Phase separation in hydrogen-helium mixtures at Mbar pressures. PNAS, 106, 1324.Nettelmann, N., Fortney, J. J., Moore, K. & Mankovich, C. (2014). An exploration of double diffusive convection in jupiter as a result of hydrogen-helium phase separation. MNRAS, 447, 3422.Nettelmann, N., Helled, R., Fortney, J. J., and Redmer, R. (2012). New indication for a dichotomy in the interior structure of uranus and neptune from the application of modi ed shape and rotation data. Planet. Space Sci., special edition, 77, 143.Nettelmann, N., Holst, B., Kietzmann, A., French, M., Redmer, R., and Blaschke, D. (2008). Ab initio equation of state data for hydrogen, helium, and water and the internal structure of Jupiter. ApJ, 683, 1217.Nettelmann, N., Becker, A., Holst, B. & Redmer, R. (2012). Jupiter models with improved ab initio hydrogen equation of state (H-REOS.2). ApJ, 750, 52.Nettelmann, N., Püstow, R. & Redmer, R. (2013). Saturn layered structure and homogeneous evolution models with different EOSs. Icarus 225, 548.Paardekooper, S. J. & Mellema, G. (2004). Planets opening dust gaps in gas disks. A&A, 425, L9.Podolak, M., Hubbard, W. B. & Stevenson D. J. (1991). Model of Uranus interior and magnetic field. In: Uranus, 2961. Tucson, AZ: University of Arizona PressPodolak, M., Weizman, A. & Marley M. S. (1995). Comparative models of Uranus and Neptune. PSS, 43, 1517.Podolak, M., Podolak J. I. & Marley M. S. (2000). Further investigations of random models of Uranus and Neptune. PSS, 48, 143.Podolak, M. & Helled, R. (2012). What Do We Really Know about Uranus and Neptune? ApJL, 759, Issue 2, article id. L32, 7 pp.Pollack, J. B., Hubickyj, O., Bodenheimer, P., Lissauer, J. J., Podolak, M. & Greenzweig, Y. (1996). Formation of the Giant Planets by Concurrent Accretion of Solids and Gas. Icarus, 124, 62.Püstow, R., Nettelmann, N., Lorenzen, W., & Redmer, R. (2016). H/He demixing and the cooling behavior of Saturn. Icarus, 267, 323.Rosenblum, E., Garaud, P., Traxler, A. & Stellmach, S. (2011). 
Erratum: "Turbulent Mixing and Layer Formation in Double-diffusive Convection: Three-dimensional Numerical Simulations and Theory". ApJ, 742, 132Saumon, D. & Guillot, T. (2004). Shock compression of deuterium and the interiors of Jupiter and Saturn. ApJ, 609, 1170.Schouten,J. A.,de Kuijper,A. & Michels,J. P. J. (1991). Critical line of He-H_2 up to 2500 K and the influence of attraction on fluid-fluid separation.Phys. Rev. B, 44, 6630.Spilker, L. J. (2012). Cassini: Science highlights from the equinox and solstice missions. In: Lunar and Planetary Institute Science Conference Abstracts, 43, p. 1358.Stevenson, D. J. & Salpeter E. E. (1977a). The dynamics and helium distribution in hydrogen-helium fluid planets. ApJS, 35, 239.Stevenson, D. J. & Salpeter, E. E. (1977b). The phase diagram and transport properties for hydrogen-helium fluid planets. ApJS, 35, 221.Tanaka, H. & Ida, S. (1999). Growth of a Migrating Protoplanet. Icarus, 139, 350Thorngren, D. P., Fortney, J. J., Murray-Clay, R. A. & Lopez, E. D. (2016). The Mass-Metallicity Relation for Giant Planets. ApJ, 831, article id. 64, 14 pp.Vazan, A., Helled, R., Kovetz, A. & Podolak, M. (2015). Convection and Mixing in Giant Planet Evolution. ApJ, 803, 32.Vazan, A., Helled, R., Podolak, M. & Kovetz, A. (2016).The Evolution and Internal Structure of Jupiter and Saturn with Compositional Gradients. ApJ, 829, 118. Venturini, J., Alibert, Y., & Benz, W. (2016). A&A, 596, id.A90, 14 pp.von Zahn, U., Hunten, D. M. & Lehmacher, G.. (1998). Helium in Jupiters atmosphere: Results from the Galileo probe helium interferometer experiment. JGR, 103, 22815.Wahl, Sean M et al.. (2017). GRL, submitted. Wilson, H. F. & Militzer, B. (2010). Sequestration of noble gases in giant planet interiors. PRL, 104,121101.Wilson, H. F. & Militzer, B. (2012). Solubility of water ice in metallic hydrogen: Consequences for core erosion in gas giant planets. ApJ, 745, 54.Wood, T. S., Garaud, P. & Stellmach, S. (2013). A new model for mixing by double-diffusive convection (semi-convection). II. The transport of heat and composition through layers. ApJ, 768, 157. | http://arxiv.org/abs/1705.09320v2 | {
"authors": [
"Ravit Helled",
"Tristan Guillot"
],
"categories": [
"astro-ph.EP"
],
"primary_category": "astro-ph.EP",
"published": "20170525183328",
"title": "Internal Structure of Giant and Icy Planets: Importance of Heavy Elements and Mixing"
} |
[email protected]@na.infn.it^1 Dip. di Fisica "E. Pancini", Università di Napoli Federico II, I-80126 Napoli, Italy ^2 INFN-Sezione di Napoli, I-80126 Napoli, Italy^3 Institut de Physique Nucléaire, CNRS-IN2P3, Univ. Paris-Sud, Université Paris-Saclay, 91406 Orsay Cedex, France^4 INFN - Sezione di Catania, Via S. Sofia, I-95125 Catania, Italy ^5 Universidad Nacional Autónoma de México, A.P. 20-364, Cd.Mx, D.F. 01000 México ^6 INFN - Laboratori Nazionali del Sud, Via S. Sofia, I-95125 Catania, Italy ^7 Dip. di Fisica e Astronomia, Università di Catania, Via S. Sofia, I-95125 Catania, Italy ^8 Facoltà di Ingegneria ed Architettura, Università Kore, I-94100 Enna, Italy The decay path of the Hoyle state in ^12C (E_x=7.654MeV) has been studied with the ^14N(d,α_2)^12C(7.654) reaction induced at 10.5MeV. High resolution invariant mass spectroscopy techniques have allowed to unambiguously disentangle direct and sequential decays of the state passing through the ground state of ^8Be. Thanks to the almost total absence of background and the attained resolution, a fully sequential decay contribution to the width of the state has been observed. The direct decay width is negligible, with an upper limit of 0.043% (95% C.L.). The precision of this result is about a factor 5 higher than previous studies. This has significant implications on nuclear structure, as it provides constraints to 3-α cluster model calculations, where higher precision limits are needed.High precision probe of the fully sequential decay width of the Hoyle state in ^12C A. Tumino8,6 December 30, 2023 ===================================================================================Exploring the structure of ^12C is extremely fascinating, since it is strongly linked to the existence of α clusters in atomic nuclei and to the interplay between nuclear structure and astrophysics. Furthermore, ^12C is one of the major constituents of living beings and ourselves. Our present knowledge traces the origin of ^12C to the so called 3α process in stellar nucleosynthesis environments. The 3α process, which occours in the He-burning stage of stellar nucleosynthesis, proceeds via the initial fusion of two α particles followed by the fusion with a third one <cit.> and the subsequent radiative de-excitation of the so formed excited carbon-12 nucleus, ^12C^*. The short lifetime of the ^8Be unbound nucleus (of the order of 10^-16s), formed in the intermediate stage, acts as a bottle-neck for the whole process. Consequently, the observed abundance of carbon in the universe cannot be explained by considering a non-resonant two-step process. This fact led Fred Hoyle, in 1953, to the formulation of his hypothesis <cit.>: the second step of the 3α process, α + ^8Be→ ^12C+ γ, has to proceed through a resonant J^π=0^+ state in ^12C, close to the α+^8Be emission threshold. The existence of such a state was then soon confirmed <cit.> at an excitation energy of 7.654MeV. This state was then named as the Hoyle state of ^12C <cit.>.The decay properties of this state strongly affect the creation of carbon and heavier elements in helium burning <cit.>, as well as the evolution itself of stars <cit.>. At typical stellar temperatures of T≈10^8-10^9K, this reaction proceeds exclusively via sequential process consisting of the α+α s-wave fusion to the ground state of ^8Be, followed by the s-wave radiative capture of a third α to the Hoyle state. 
However, in astrophysical scenarios that burn helium at lower temperatures, like for instance helium-accreting white dwarfs or neutron stars with small accretion rate, another decay mode of the Hoyle state completely dominates the reaction rate: the non-resonant, or direct, α decay <cit.>, where the two αs bypass the formation of ^8Be via the 92keV resonance. Recent theoretical calculations show that, at temperatures below 0.07GK, the reaction rate of the direct process is largely enhanced with respect to the one calculated by assuming only the sequential scenario <cit.>; as an example, for temperatures around 0.02GK such enhancement is predicted to be 7-20 orders of magnitude <cit.>.In nuclear structure, the Hoyle state is crucial to understand clustering in nuclei <cit.>. Theoretical calculations show different hypothesis regarding its spatial configuration. Recent ab-initio calculations describe it as a gas-like diluted state <cit.>, where the constituent α clusters are only weakly interacting. The possible appearance of Bose-Einstein condensates of α particles have been also proposed <cit.>, as well as molecular-like structures with three α's forming a linear chain, an obtuse triangle or a bent-arm configuration <cit.>. Between several observables, some of these models are able to predict the sequential-to-direct decay branching ratio (B.R.) of the Hoyle state <cit.>. The accurate knowledge of the experimental value of such branching ratio has therefore the capital importance to serve as a benchmark of theoretical models attempting to describe α clustering in ^12C. Recently, a quite large number of experiments has been carried out to probe the structure and decay properties of the Hoyle state in ^12C. The most commonly adopted strategy is to explore how the Hoyle state decays via 3α emission, i.e. what is direct decay rate relative to the sequential one. An upper limit to the direct decay branch was firstly given by Freer et al. in 1994 <cit.>. In their work they suggested that the B.R. of the Hoyle state decay bypassing the ^8Be ground state was lower than 4%, i.e. (Γ_α-Γ_α_0)/Γ_α<0.04. Here Γ_α indicates the global α decay width and Γ_α_0 is the partial width of the α emission leading to the ground state of ^8Be. More recently, Raduta et al. <cit.> reported a result in strong contradiction with the previous one, finding a rather high value (17%± 5%) of direct B.R. Such contrasting results stimulated a series of new experiments aimed at determining the actual value of the direct decay B.R. of the Hoyle state. A new upper limit of 0.5% (95% C.L.) was obtained by Kirsebom et al. by using the kinematic fitting method <cit.>. Two more recent experiments by Rana et al. <cit.> and Morelli et al. <cit.> suggested non-zero values of direct decay B.R., respectively of (Γ_α-Γ_α_0)/Γ_α=0.91%±0.14% and 1.1%±0.4%. Finally, thanks to a high statistics experiment, Itoh et al. <cit.> determined an improved upper limit of the direct B.R. of 0.2% (95% C.L.). It is important to underline that, as discussed in <cit.>, the use of strip detectors introduces the presence of a non-vanishing background, that reduces the sensitivity to the direct decay B.R. signal. Taking into account the importance to fully understand α clustering effects in the nuclear structure of ^12C, it is mandatory to improve our knowledge of the direct-to-sequential decay B.R. of the Hoyle state, since theoretical estimations of this quantity are given at the 0.1% level, i.e. 
well below the most recent upper limit reported in the literature <cit.>.In this letter we report on the result of a new high precision experiment specifically designed to isolate, if any, 3α direct decays of the Hoyle state in ^12C. For the first time we succeeded to have almost zero background, which is a requirement in order to unambiguously disentangle sequential and direct decays. To populate ^12C nuclei in the Hoyle state we used the ^14N(d,α)^12C nuclear reaction. A 10.5MeV deuteron beam was provided by the 15MV tandem accelerator of the INFN-LNS (Catania, Italy). As a detection apparatus we used the combination of a ΔE-E telescope and a high granularity hodoscope detector. The adopted experimental method is the invariant mass analysis of 3α disintegrations of the Hoyle state. We completely reconstruct the kinematics of the reaction by simultaneously detecting the four α particles emitted in the final state, namely the α ejectile, used to tag the excitation of the ^12C residue at its Hoyle state (E^*=7.654MeV), and the three α particles fed by the Hoyle state decay. The hodoscope detector was specifically designed to ensure the detection of the three α particles coming from the Hoyle state decay with the highest possible efficiency and to avoid the artificial introduction of background. It is constituted by 8×8 independent silicon pads (1cm^2, 300μm thick), and it is placed in such a way that its center is aligned with the axis of the ^12C(7.654) three α emission cone, when the corresponding α tagging ejectile is detected by the ΔE-E telescope. The ^12C excitation energy spectrum, reconstructed from the measurement of kinetic energy and emission direction of the particles detected in the ΔE-E telescope, is shown on Fig. <ref> by the blue line. Only particles stopping in the first detection stage are selected, allowing to strongly reduce contaminations from (d,d) and (d,p) reactions on the target constituents. Details on this technique can be found e.g. in Ref. <cit.>. The excitation energy spectrum reduces to the filled one if we select events with 4-particles in coincidence, i.e. by selecting events where 3 particles are detected in coincidence by the hodoscope. This spectrum exhibits a pronounced peak at E_x = 7.654MeV, corresponding to the energy position of the Hoyle state, while background as well as other peaks are strongly suppressed, demonstrating the good sensitivity of our detection system to the α decays of the Hoyle state and a very low background level. For the subsequent analysis, events are selected by gating on the Hoyle peak and on the corresponding four-particles total energy spectrum, which unambiguously identifies the reaction channel of interest. In Fig. <ref> we report (full dots) the ^12C excitation energy spectrum obtained by an invariant mass analysis of ternary coincidences inside the hodoscope, assuming that they are α particles. The red dashed line is the result of a complete Monte Carlo simulation of the effect of the detection system on the reconstruction of the three α particles resulting from the in-flight decay of the Hoyle state. To produce this result we consider four α particles fully reconstructed events from ^14N(d,α_2)^12C(7.654) reaction simulated data. In our simulation we have taken into account both the profile of the beam on the target and the angular distribution of the emitted α ejectile, as reported in <cit.> at the same incident energy. 
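The event-by-event reconstruction described above can be sketched as follows: given the measured kinetic energies and emission directions of the three α particles, the invariant mass of the 3α system is computed and converted into a ^12C excitation energy. The Python code below is a simplified, self-contained illustration (it ignores energy-loss corrections, detector response and calibration, which the actual analysis and simulation handle); the event listed at the end uses made-up kinematics, not real data.

```python
import numpy as np

M_ALPHA = 3727.379   # alpha-particle mass [MeV/c^2]
Q_3ALPHA = 7.275     # 12C -> 3 alpha breakup Q-value [MeV]

def excitation_energy(alphas):
    """12C excitation energy from the invariant mass of three detected alphas.

    `alphas` is a list of (kinetic energy [MeV], direction) pairs in the lab
    frame, with `direction` a 3-vector that need not be normalised.
    """
    E_tot, p_tot = 0.0, np.zeros(3)
    for T, direction in alphas:
        E = T + M_ALPHA                            # total energy
        p = np.sqrt(E ** 2 - M_ALPHA ** 2)         # momentum magnitude
        u = np.asarray(direction, float)
        p_tot += p * u / np.linalg.norm(u)
        E_tot += E
    m_inv = np.sqrt(E_tot ** 2 - p_tot @ p_tot)    # invariant mass of the 3-alpha system
    return m_inv - 3 * M_ALPHA + Q_3ALPHA          # excitation above the 12C ground state

# arbitrary illustrative kinematics (MeV, unit-less directions) - not a measured event
event = [(1.20, (0.10, 0.02, 1.0)), (1.05, (-0.08, 0.05, 1.0)), (0.95, (0.00, -0.06, 1.0))]
print(f"E_x = {excitation_energy(event):.3f} MeV")
```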
The geometry of the detectors and their energy resolution are also taken into account in the simulation. The result of the simulation is in excellent agreement with the experimental data, confirming the unambiguous reconstruction of this physical process. The invariant mass of the Hoyle state is determined with a resolution of about 47keV (FWHM), while the center of the distribution is in agreement with the position of the Hoyle state within an indetermination smaller than 1keV. Four-α fully detected events are thus selected by means of a further cut on the peak of Fig. <ref>. In such a way we obtain a number of about 28000 decay events of Hoyle state, an amount well higher than any other previous investigation. The background level, due to spurious coincidences, is extremely low thanks to the stringent constraints on the data, the sensitivity of the apparatus to the physical process and the unambiguous particle tracks identification achieved by the use of an hodoscope. It can be evaluated by inspecting the right and left sides of the spectrum; it amounts to about 0.036% of the total integral of the peak. Details about the three-α decay mechanisms of the Hoyle state can be studied by using the symmetric Dalitz plot <cit.>. This technique is particularly suited to geometrically visualize the decay pattern into three equal mass particles. Cartesian coordinates to construct the Dalitz plot can be obtained as follows:x=√(3)(ε_j-ε_k) y=2ε_i-ε_j - ε_kwhere ϵ_i,j,k=E_i,j,k/(E_i+Ej+E_k) are the kinetic energies of each particle, in the reference frame where the emitting source is at rest, normalized to the total energy of the decay. E_i,j,k are selected so that E_i≥ E_j≥ E_k and, consequently, ε_i≥ε_j≥ε_k. In Fig. <ref> we show the Dalitz plot obtained from the experimental data selected with the above discussed procedure (a) compared with the analogous plot constructed with simulated 100% sequential decay (SD) data (b) and the 100% DDΦ data (c). Simulated data have been obtained with the same prescription used to construct Fig. <ref>. In this Dalitz plot representation, a sequential decay (SD) mechanism would populate a uniform horizontal narrow band, while a spread of events along the whole plot region would be observed in the case of DDΦ. The plots of Fig. <ref>(b) and (c) are particularly useful to characterize the expected distortion introduced by the experimental apparatus on the analysis to discriminate the decay mechanism. In particular, two significant conclusions can be extracted from these plots. First, the effect of the detection device on the three α reconstruction results only in a broadening of the SD band, without introducing a significant background contamination in the region outside the band. This result demonstrates that we are able to distinguish between the two mechanisms with an exceptionally low background level. In previous investigations <cit.>, the Dalitz plot constructed with simulated sequential decays shows the presence of data points outside the above mentioned horizontal band, thus containing ambiguities and leading to a reduced sensitivity on direct decay contributions. This difficulties arise from the misassignment of particle tracks inside the strip detectors used in their experiment, as the authors of Ref. <cit.> state. Our experiment is free from such problems thanks to the use of an hodoscope of independent detectors free of pixel assignment ambiguities. 
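A minimal sketch of how the symmetric Dalitz coordinates defined above, together with the largest normalised energy ε_i used in the next step of the analysis, can be computed from three α energies in the emitter rest frame is given below. The example energies are illustrative values chosen to approximate a sequential decay through the ^8Be ground state, not measured data.

```python
import numpy as np

def dalitz_point(E1, E2, E3):
    """Symmetric Dalitz coordinates (x, y) and the largest normalised energy eps_i.

    E1, E2, E3 are the alpha kinetic energies in the rest frame of the decaying
    12C, in any common unit; internally they are ordered so that eps_i >= eps_j >= eps_k.
    """
    e_i, e_j, e_k = sorted((E1, E2, E3), reverse=True)
    total = e_i + e_j + e_k
    e_i, e_j, e_k = e_i / total, e_j / total, e_k / total
    x = np.sqrt(3.0) * (e_j - e_k)
    y = 2.0 * e_i - e_j - e_k
    return x, y, e_i

# illustrative energies [MeV] roughly mimicking a sequential decay via 8Be(g.s.)
x, y, eps_i = dalitz_point(0.192, 0.094, 0.093)
print(f"x = {x:.3f}, y = {y:.3f}, eps_i = {eps_i:.3f}")
```

For such a configuration the point falls on the narrow horizontal band expected for sequential decays, with ε_i close to 0.5, whereas a direct three-body decay would spread events over the allowed region of the plot.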
A second, very important, conclusion can be deduced by comparing the behaviour of the experimental Dalitz plot of Fig. <ref>(a) with the simulated ones. An excellent agreement with the simulated SD horizontal band is clearly seen, while only a few counts populate the region outside the SD band. A more quantitative analysis can be achieved by inspecting the ε_i distribution, i.e. the distribution of the largest of the normalized energies ε_i,j,k <cit.>. The ε_i distribution is shown by the green points of Fig. <ref>. These values are expected to lie, in the case of a DDΦ decay, between 0.33 (when the particles share an equal amount of the decay energy) and 0.67 (when one α is emitted in the direction opposite to the other two). In contrast, a value of about 0.506 is expected for an SD mechanism. In order to estimate the B.R. of direct decays contributing to the width of the Hoyle state, we have compared the experimental data with the result of a Monte Carlo simulation assuming 100% SD (red dashed line in Fig. <ref>). From an analysis of this spectrum, it is possible to identify an extremely small number of counts not reproduced by the SD simulation. They correspond to background events falling into the selection of Fig. <ref> (the total estimated background level is about 0.036%, as previously discussed) and, possibly, to a DD signal. Starting from the observed experimental data, we can determine the lower and upper limits of the DD B.R. by assuming that both the DD and background counts follow Poisson statistics <cit.>. In this evaluation, we follow Feldman and Cousins' approach to the analysis of small signals described in Ref. <cit.>, and we carefully take into account the different expected detection efficiencies for DD and SD decays, as determined with Monte Carlo simulations. The lower limit is found to be compatible with zero. Therefore we quote an upper limit on the B.R. of the direct three-α decay of 0.043% (95% C.L.). This value is about a factor 5 lower than that of the state-of-the-art experiment <cit.>. To summarize, we have studied the α decay of the Hoyle state (7.654, 0^+) in ^12C by simultaneously detecting the four α particles emitted from the reaction ^14N(d,α_2)^12C(7.654) at an incident energy of 10.5MeV. To quantitatively estimate the possible contribution of non-resonant (direct) decays bypassing the ground state of ^8Be, we inspect the distribution of the highest normalized energy in the 3α decay, ε_i. A complete Monte Carlo simulation, assuming exclusively the sequential decay pattern, fully reproduces the experimental data. The possible presence of any direct decay is found to be statistically insignificant, and an upper limit of 0.043% (95% C.L.) on the corresponding branching ratio is estimated. This finding is in agreement with the previous results by Freer et al. <cit.>, Kirsebom et al. <cit.> and Itoh et al. <cit.>, introducing an improvement of about a factor 5 with respect to the previous most statistically significant work <cit.>. These results provide important information about the α cluster structure of the ^12C Hoyle state and have to be carefully taken into account in theoretical models attempting to reproduce the outgoing α particles and the structure of the Hoyle state. They also have a very significant astrophysical impact.
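As a concrete illustration of the statistical procedure quoted above, the sketch below implements the standard Feldman-Cousins construction for a Poisson signal with known mean background and converts the resulting signal upper limit into a branching-ratio limit. The counts and efficiencies in the usage example are placeholders chosen for illustration; they are not the inputs behind the published 0.043% limit.

```python
import numpy as np
from scipy.stats import poisson

def fc_upper_limit(n_obs, b, cl=0.95, mu_max=20.0, n_mu=2001, n_max=200):
    """Feldman-Cousins upper limit on a Poisson signal mu with known background b."""
    ns = np.arange(n_max)
    upper = 0.0
    for mu in np.linspace(0.0, mu_max, n_mu):
        p = poisson.pmf(ns, mu + b)                            # P(n | mu, b)
        r = p / poisson.pmf(ns, np.maximum(ns - b, 0.0) + b)   # likelihood-ratio ordering
        accepted, cov = set(), 0.0
        for n in np.argsort(-r):                               # fill the acceptance region
            accepted.add(int(n))
            cov += p[n]
            if cov >= cl:
                break
        if n_obs in accepted:                                  # mu is not excluded
            upper = mu
    return upper

# Illustrative numbers only (NOT the values used in the analysis above):
mu_up = fc_upper_limit(n_obs=3, b=2.5)        # 95% C.L. upper limit on direct-decay counts
n_sd, eff_sd, eff_dd = 28000, 0.30, 0.25      # detected SD events and hypothetical efficiencies
br_up = (mu_up / eff_dd) / (n_sd / eff_sd)    # approximate upper limit on the DD branching ratio
```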
Indeed, the further reduction of the upper limit of direct decay implies that calculations of the triple-α stellar reaction rate at temperatures lower than 10^8K have to be correspondingly revised <cit.>.We gratefully acknowledge all the services (accelerator, target, vacuum lines, mechanics, electronics) of INFN Laboratori Nazionali del Sud (Catania, Italy) for their collective efforts to perform, in the best possible way, the present experiment. We thank the Servizio Elettronica e Rivelatori of the INFN-Sezione di Napoli for the support in the development and production of the hodoscope detector.10 opik E.J. Opik, Proc. R. Irish Acad. A 54,49(1951). salpeter E.E. Salpeter, Phys. Rev. 88,547(1952). hoyle_first F. Hoyle et al., Phys. Rev. 92,1095c(1953). hoyle F. Hoyle, Astrophys. J. Suppl. Ser. 1,121(1954). cook C.W. Cook, W.A. Fowler, C.C. Lauritsen, and T. Lauritzen, Phys. Rev. 107,508(1957). freer_hoyle_review M. Freer and H.O.U. Fynbo, Prog. Part. Nucl. Phys. 78,1(2014). ogata Kazuyuki Ogata, Masataka Kan, and Masayasu Kamimura, Prog. Theor. Phys. 122,1055(2009). Herwig Falk Herwig, Sam M. Austin, and John C. Lattanzio, Phys. Rev. C 73, 025802(2006). Tur:2009zb Clarisse Tur, Alexander Heger, and Sam M. Austin, Astrophys. J. 718, 357(2010). nguyen N.B. Nguyen, F.M. Nunes and I.J. Thompson, Phys. Rev. C 87, 054615(2013). nomoto K. Nomoto, F.-K. Thielemann, and S. Miyaji, Astron. Astrophys. 149, 239(1985). langanke K. Langanke, M. Wiescher, and F.K. Thielemann, Z. Physik A - Atomic Nuclei 324,147(1986). nacre C. Angulo et al., Nucl. Phys. A 656,3(1999). nguyenprl N.B. Nguyen, F.M. Nunes, I.J. Thompson and E.F. Brown, Phys. Rev. Lett. 109, 141101(2012). garrido E. Garrido, R.de Diego, D.V. Fedorov and A.S. Jensen, Eur. Phys. J. A 47, 102 (2011). yabana K. Yabana and Y. Funaki, Phys. Rev. C 85, 055803(2012). vonoertzen W. von Oertzen, Zeit. Phys. A 357,355(1997). uegaki E. Uegaki, S. Okabe, Y. Abe, and H. Tanaka, Prog. Theor. Phys. 57, 1262(1977). kamimura M. Kamimura, Nucl. Phys. A 351,456(1981). funaki Y. Funaki, A. Tohsaki, H. Horiuchi, P. Schuck and G. Röpke, Eur. Phys. J. A 28, 259 (2006). funaki1 Y. Funaki, H. Horiuchi, W. von Oertzen, G. Ropke, P. Schuck, A. Tohsaki and T. Yamada, Phys. Rev. C 80, 064326 (2009). tohsaki A. Tohsaki, H. Horiuchi, P. Schuck, and G. Röpke, Phys. Rev. Lett. 87,192501(2001). chernykh M. Chernykh, H. Feldmeier, T. Neff, P. von Neumann-Cosel, and A. Richter, Phys. Rev. Lett. 98,032501(2007). morinaga H. Morinaga, Phys. Rev. 101,254(1956). epelbaum1 E. Epelbaum et al., Phys. Rev. Lett. 106,192501(2011). ishikawa S. Ishikawa, Phys. Rev. C90,061604(2014). freer_hoyle1994 M. Freer et al., Phys. Rev. C 49,R1751(1994). raduta Ad.R. Raduta et al., Phys. Lett. B 705,65(2011). kirsebom O.S. Kirsebom et al., Phys. Rev. Lett. 108,202501(2012). rana T.K. Rana et al., Phys. Rev. C 88,021601(R)(2013). morelli_hoyle L. Morelli et al., J. Phys. G43,045110(2016). itoh M. Itoh et al., Phys. Rev. Lett. 113,102501(2014). freer_models M. Freer, H. Horiuchi, Y. Kanada-En’yo, D. Lee and Ulf-G. Meißner, arXiv:1705.06192v1. koenig W. Koenig et al., Il Nuov. Cim. 39,9(1977). curry J.R. Curry, W.R. Coker, and P.J. Riley, Phys. Rev. 185,1416(1969). dalitz R.H. Dalitz, Philos. Mag. 44,1068(1953). statistics R.J. Barlow, Statistics (J. Wiley & Sons, Chichester (UK), 1989). feldman_cousins G.J. Feldman and R.D. Cousins, Phys. Rev. D 57, 3873(1998). | http://arxiv.org/abs/1705.09196v2 | {
"authors": [
"D. Dell'Aquila",
"I. Lombardo",
"G. Verde",
"M. Vigilante",
"L. Acosta",
"C. Agodi",
"F. Cappuzzello",
"D. Carbone",
"M. Cavallaro",
"S. Cherubini",
"A. Cvetinovic",
"G. D'Agata",
"L. Francalanza",
"G. L. Guardo",
"M. Gulino",
"I. Indelicato",
"M. La Cognata",
"L. Lamia",
"A. Ordine",
"R. G. Pizzone",
"S. M. R. Puglia",
"G. G. Rapisarda",
"S. Romano",
"G. Santagati",
"R. Spartà",
"G. Spadaccini",
"C. Spitaleri",
"A. Tumino"
],
"categories": [
"nucl-ex"
],
"primary_category": "nucl-ex",
"published": "20170525142634",
"title": "High precision probe of the fully sequential decay width of the Hoyle state in $^{12}$C"
} |
| http://arxiv.org/abs/1705.09603v1 | {
"authors": [
"Anna Sinelnikova",
"Antti J. Niemi",
"Johan Nilsson",
"Maksim Ulybyshev"
],
"categories": [
"cond-mat.soft",
"cond-mat.stat-mech",
"physics.bio-ph",
"q-bio.BM"
],
"primary_category": "cond-mat.soft",
"published": "20170526145301",
"title": "Multiple scales and phases in discrete chains with application to folded proteins"
} |
Results are presented for the time evolution of fermions initially in a non-zero temperature normal phase, following the switch on of anattractive interaction.The dynamics are studied in the disordered phase close to the critical point, where the superfluid fluctuationsare large. The analysis is conductedwithin a two-particle irreducible, large N approximation.The system is considered from the perspective of critical quenches where it isshown that the fluctuations follow universal model A dynamics. A signature of this universality is found in a singular correction to thefermion lifetime, given by a scaling form t^(3-d)/2S_d(ε^2 t), where d is the spatial dimension, t is the timesince the quench, and ε is thefermion energy. The singular behavior of the spectral density is interpreted as arising due to incoherent Andreev reflectionsoff superfluid fluctuations. 74.40.Gh; 05.30.Fk; 78.47.-pDepartment of Physics, New York University, 726 Broadway, New York, NY, 10003, USA Time-resolved spectral density of interacting fermions following a quench to a superconducting critical point Aditi Mitra December 30, 2023 =============================================================================================================§ INTRODUCTION Experiments involving pump-probe spectroscopy of solid-state systems <cit.>as well as dynamics of cold-atomic gases <cit.> have opened up an entirely new temporal regime for probing correlated systems. A strong disturbance at an initial time, either by a pump beam of a few femto-second duration, or by explicit changes to the lattice and interaction parametersfor atoms confined in an optical lattice, places the system in a highly nonequilibrium state. Following this, the dynamics can be probed with pico-second resolution for solid state systems, and milli-second resolution for cold atoms. At these short time-scales, the system is far from thermal equilibrium, and has memory of the initial pulse.For generic systems, with only a finite number of conservation laws such asenergy and particle number, relaxation to thermal equilibrium is expected to befast <cit.>. However the relaxation may still possess rich dynamics when tuned near a critical point. As in equilibrium, observables will have a singular dependence on the detuningfrom the critical point.Such singular behavior in quench dynamics has already been identified for bosonic systems coupled to a bath <cit.>, and also for isolated bosonicsystems <cit.>. In this paper, we complement this study with that for an isolated fermionic system.We consider a gas of fermions in a lattice without disorder at finite temperature, where the fermions have an attractive interaction.This can be realized as a gas of cold atoms in an optical lattice. In equilibrium, there is a phase transition separating superfluid and normal (non-superfluid)phases.We study this phase transition as a quench process. In particular we study the dynamics when the system is initially in the normal phase,but where the interaction is suddenly increased so the system isclose to the phase transition.Our results may be understood as follows. The distance from the critical point is captured by the superconducting fluctuationD(q,t)≡⟨Δ^†(q,t)Δ(q,t)⟩, where Δ^†(q,0) creates a Cooper pair at momentum q. 
In the initial system, deep in the normal phase, D(q) is small and non-singularas a function of q.At the finite temperature equilibrium critical point, we expect D(q,∞) ∼ q^-2 as q→ 0.When a large interaction is turned on in the normal state,the system must interpolate between these two limits as a function of time. Since in a diffusive system information can only becommunicated a distance∝√(t) in a time t after the quench, we should expect a scale ∼ t^-1/2 to function as the cutoff on thedivergence of D(q,t). Therefore the long wavelength fluctuations should show strong dependence on t. We propose to detect these fluctuations through their effect on the spectral properties of the fermions, in the vein of fluctuation superconductivity.If we imagine a fixed background of superfluid fluctuations, the fermions can be thought of as Andreev reflecting off of this background.As the system is disordered the Andreev reflection is incoherent, and no gap is opened in the fermionic spectrum.However, the Andreev reflection is an energy conserving process for fermions at the chemical potential. This then contributes a decay channelfor fermions at the Fermi surface which we can estimate by Fermi's golden rule as ∫ d^d-1q D(q,t) (where the d-1 dimensional integralcorresponds to the Fermi surface of a d dimensional system).In d=2,3 this integral is singular as q→0, t→∞, and we obtain a singular correction which goes like √(t) in d=2and log t in d=3. This behavior is summarized in Fig. <ref>.Our analysis is in a regime complementary to studies such as <cit.>where the initial state was already superconducting to begin with, and the dynamics of the superconducting order-parameter under an interaction quench was studied within a mean field approximation. It is also complementary to studiessuch as <cit.> where the initial state was in the normal phase, and the external perturbation puts the system deep in the ordered phase where the dynamicswere again studied in mean field. In contrast, we study the behavior when the system is always in the disordered phase and the mean field behavior is trivial.To controllably go beyond mean-field calculation we perform a 1/N expansion, where N is an additional orbital degree of freedom for the fermions.The paper is organized follows. In Section <ref> we present the model, outline the approximations, and introduce an auxiliary or Hubbard-Stratonovich field that represents the Cooper pair fluctuations or Cooperons. In Section <ref> the equations of motions are analyzed assuming the Cooperons are non-interacting, an assumption valid at short times.Further the effect of the fluctuations on the fermion lifetime is calculated.In Section <ref>, the longer time behavior is considered. This is done by mapping the dynamics to model-A <cit.>,and solving the self-consistent equation for the self-energy. We conclude in section <ref>. The mapping to model A dynamics isoutlined in Appendices <ref> and <ref>, where App. <ref> includes a perturbative estimate of the parameters of Model A. The implication of model A on Cooperon dynamics is relegated to App. <ref>. § MODELWe study a quench where the initial Hamiltonian is that of free fermions,H_i=∑_k,σ=↑,↓,τ=1… Nϵ_k c_kστ^†c_kστ.Above k is the momentum, σ=↑,↓ denotes the spin, and τ is an orbital quantum number that takes N values. We consider the initial state to be the ground state of H_i at non-zero temperature T, and chemical potential μ. The time-evolution from t>0 is in the presence of a weak pairing interaction u. 
We write the quartic pairing interaction in terms of pair operators Δ_q such that, H_f = H_i + u/N∑_qΔ^†_qΔ_q, Δ_q = ∑_kτ c_k,↑,τc_-k+q,↓,τ; Δ^†_q = ∑_k,τ c^†_-k+q,↓,τc_k↑τ^†. The Hamiltonian above assumes a contact interaction, so that only fermions with opposite spin quantum numbers scatter off of each other. In the superfluid phase ⟨Δ_q⟩≠ 0. In this paper, since we are always in the normal phase, ⟨Δ_q⟩=0. In the strict N→∞ limit, the fermion number at each momentum k is conserved. Thus the system is fully integrable, and fails to thermalize. We do not work in this limit and consider finite 1/N corrections to the behavior. In particular this allows us to study the back-reaction of the fluctuating Cooper pairs on the fermionic gas. We use a two-particle irreducible (2PI) formalism <cit.> to obtain the equations of motion. The main ingredient is the sum of 2PI diagrams Γ'[G], which is a functional of the fermion Green's functions G, G_R(1,2) = -iθ(t_1-t_2)⟨{c(1), c^†(2)}⟩, G_K(1,2) = -i⟨ [c(1), c^†(2)]⟩. We here and generally suppress the spin and orbital indices, use numbers to indicate spacetime coordinates, and do not define the advanced function since it is the Hermitian conjugate of the retarded part: G_A(1,2) = G_R^*(2,1). The Green's function and interaction vertex correspond to the diagrams in Fig. <ref>, while at O(1/N), the Keldysh functional Γ' is the set of fermion loops shown in Fig. <ref>. Note that the effect of the 1/N expansion is to select the Cooper interaction channel, which is the most singular channel near the critical point. The Green's function is determined by the Dyson equation G_R^-1 = g_R^-1-Σ_R[G], where g^-1 is the non-interacting Green's function and Σ_R is the retarded self-energy, determined self-consistently by the saddle point equation Σ_R[G] ≡δΓ'/δ G_A. At O(1/N), Σ_R is (see Fig. <ref>) Σ_R(1,2) = i/N[D_K(1,2)G_A(2,1) + D_R(1,2)G_K(2,1)], where D is the Cooperon, Fig. <ref>(b,c), defined by D_R^-1 ≡ u^-1-Π_R and D_K ≡ D_R∘Π_K ∘ D_A. Here Π is the Cooper bubble, Fig. <ref>(d,e), or equivalently the expectation, iΠ^K(q,t,t') = ⟨{Δ_q(t),Δ^†_q(t')}⟩, iΠ^R(q,t,t') = θ(t-t')⟨[Δ_q(t),Δ^†_q(t')]⟩, evaluated to 𝒪(1/N). The Cooperon can be understood as the correlator of an auxiliary or Hubbard-Stratonovich field ϕ conjugate to Δ, used to decouple the fermionic quartic interaction in the Cooper channel, as outlined in Appendix <ref>. In this language D is defined by, D_R(1,2) = -iθ(t-t')⟨[ϕ(1),ϕ^*(2)]⟩, D_K(1,2) = -i⟨{ϕ(1),ϕ^*(2)}⟩. We emphasize that Eqs. (<ref>a-g) constitute a highly non-trivial set of coupled equations, as D, G, and Σ are defined self-consistently in terms of each other. In the rest of this paper we solve these equations in the nonequilibrium system following the quench. In Sec. <ref> we do this in a short time approximation, which gives the essential qualitative behavior. In Sec. <ref> we remove the short time limit and consider the general behavior. § PERTURBATIVE REGIME We begin by evaluating Eqs. (<ref>) in the spirit of a short time approximation. We do this by replacing G with its initial, noninteracting value g. The functions D and Π may then be straightforwardly obtained in terms of the dispersion ϵ_k and the initial occupation. At finite temperature T, and low frequency (ω/T≪ 1) (see Appendix <ref>), these can be estimated as Π_R(v_F|q| ≪ T,ω≪ T) = ν(a -ibω/T + a c^2 v_F^2q^2/T^2), iΠ_K(v_F|q| ≪ T,ω≪ T) = 4bν, ⇒ iΠ_K(q=0,t,t') ∼δ(t-t'), where v_F is the Fermi velocity, ν the density of states, and a, b and c are system-dependent dimensionless constants. Then Eq.
(<ref>) reduces to [∂_t + γ_q]D_R(q,t,t') = -Zδ(t-t'), γ_q = T(l^2 q^2+r), where we have l = c v_F/T and Z∼ T/ν, and r is the distance from the critical point in units of T. D_R(q,t,t') = -Zθ(t-t') e^-(t-t')γ_q. The D are overdamped as a consequence of the fermionic bath. When r >0 (r<0) D_R decays (grows) with time, indicating that the system is in the disordered phase (unstable to the ordered phase). The critical point r=0 separates the two regimes. While D_R,A are time translation invariant within the current approximation, iD_K explicitly breaks time translation invariance. From Eqs. (<ref>), (<ref>), and (<ref>) it follows that iD_K(q,t,t') = (ZT/2γ_q)[e^-γ_q|t-t'| - e^-γ_q(t+t')]. As expected of a non-equilibrium system, this violates the fluctuation dissipation theorem. We may quantify this by introducing a function F_K^0(x) = (1-e^-2x)/(2x), in terms of which Eq. (<ref>) may be written as iD_K(q,t,t') = -T[ D_R(q,t,t') t' F_K^0(γ_q t') + t F_K^0(γ_q t) D_A(q,t,t') ]. The violation of the FDT (at the initial temperature T) is given by the fact that 2xF^0_K(x) - 1 ≠ 0. Therefore this quantity measures the extent to which the Cooper pair fluctuations are out of equilibrium with the fermions. At the critical point r=0, iD_K(q,t,t) for v_Fq ≪ T can be written in the scaling form iD_K(q,t,t) = Z T t F_K^0(T l^2q^2t); v_Fq ≪ T. This is a consequence of the fact that at the critical point the only length scale greater than l is the one generated from t, l√(T t). §.§ Fermion lifetime We now show that the growing fluctuations may be detected through the spectral properties of the fermions, in particular the lifetime. There is some subtlety with defining the lifetime, as the system is not translationally invariant and so the response functions may not be decomposed in frequency space. However, we may take advantage of the fact that the rate of change of D_K is γ_q, while the typical energy of a thermal fermion is T≫γ_q. Therefore it is reasonable to interpret the Wigner transform of the self energy, Σ_R^WT(k,ω; t) ≡∫ dτ e^iτωΣ_R(k,t+τ/2, t-τ/2), as being the self energy at ω near time t, and the quantity 1/τ(k,t) ≡Im[Σ_R^WT(k,ω=ε_k;t)] as the fermion lifetime. In any case, the correct observable will be determined by the particular experimental protocol. We now proceed to evaluate τ^-1 within the present approximation. The self energy is determined from equation (<ref>), which in momentum space takes the form Σ_R(k;t_1,t_2) = i∫d^d q/(2π)^d[ G_K(-k+q;t_2,t_1) D_R(q;t_1,t_2) + G_A(-k+q;t_2,t_1) D_K(q;t_1,t_2) ]. We may now make several simplifications. First, as D_K/D_R ∼ T/γ_q, we may neglect the first term. Second, as the evaluation of D_K(q;t_1,t_2) has shown that it varies on the scale of γ_q, which is much less than T, it is sufficient to replace D_K(q,t_1,t_2) with the equal time quantity D_K(q;t,t), t = (t_1+t_2)/2, evaluated at the average value of the two time coordinates. Third, continuing within the perturbative approximation, G_R may be replaced with its non-interacting value. Therefore we obtain the equation Σ_R(k; t_1,t_2) = iθ(t_1-t_2) ∫d^d q/(2π)^d e^iε_q-k(t_1-t_2) iD_K(q,t,t). Finally, we Fourier transform with respect to the time difference t_1-t_2, Σ_R^WT(k,ω;t) = -∫d^d q/(2π)^d iD_K(q,t,t)/(ω + ε_q-k + iδ). Setting ω = ε_k, and using Eq. (<ref>), we obtain Σ^R(k,ω = ε_k,t) = Z Tt ∫d^d q/(2π)^d F_K^0(γ_q t)/(ε_k + ε_q-k + iδ) ≈ Z Tt ∫d^d q/(2π)^d F_K^0(γ_q t)/(2ε_k + q⃗·v⃗_k + iδ). Focusing on the region where q l<1, we may set γ_q = T l^2 q^2.
Going over to spherical coordinates this may be written as,Σ^R(k,ω = ε_k,t )=Z T t∫_0^l^-1q^d-1d q /(2π)^d[ ∫ dn̂ F^0_K(tTl^2q^2) / q v⃗_k·n̂ + 2ε_k +iδ],where ∫ dn̂ indicates the integral over the d-dimensional sphere.Now introducing new coordinates y = q l √(Tt), produces Σ^R(k,ω = ε_k,t )=(Tt)^3-d/2Z c/Tl^d×∫_0^√(Tt)y^d-1 d y/(2π)^d∫ dn̂ F^0_K(y^2) / y cosθ' + 2cε_k√(t/T) +iδ=(Tt)^3-d/2Z c/Tl^dS^0_d(2 c ε_k√(t/T)), S^0_d(x)≡∫_0^√(Tt)y^d-1 d y/(2π)^d∫ dn̂ F^0_K(y^2) / y cosθ' +x +iδ,where θ' is the angle between v⃗_k and n̂. Therefore the lifetime is given byτ^-1(k,t) = (Tt)^3-d/2Z c/T l^dIm S^0_d(2 c ε_k√(t/T)). We now evaluate this function, beginning in d=2. Recalling that F^0_K(y)→ y^-1 as y→∞ and F^0_K(0) = 1, we see that in d =2 the integralis convergent and so we may neglect the condition that q ≪ l^-1. Therefore S^0_2(x) is not sensitive to how theintegral is cutoff at q ≈ l^-1, and is plotted in Fig. <ref>. The asymptotics may be extracted from the asymptotics of F^0_K,Im S^0_2(x)∼const; x≪ 1 ∼1/x; x ≫ 1. For completeness we give the asymptotic forms of ReS_2^0 which take the form. Re S_2^0(x)∝ x; x≪ 1 = 1/2xlog(x/√(T t)); x≫ 1.As a result, for d=2,τ^-1(k,t )∼√(t);tε^2_k/T≪ 1 ∼1/ε_k;tε^2_k/T ≫ 1 .The full curve is plotted in Fig. <ref>.In d = 3 the integral is logarithmically divergent. The upper limit of the integral over y is √(t T) whereas the divergence as y → 0 is cutoff by the greater of x and one. Thus,S^0_3(x) ∼log[ √(Tt)/ max(1,x)], andIm S^0_3(x)∼log(T t);x≪ 1 ∼log(T t/x^2); x ≫ 1. For completeness we give asymptotic limits of ReS_3^0 which takes the form.Re S_3^0(x) ∼ x;x ≪ 1 ∼constx ≫ 1.Therefore for d=3, we have the dependence,τ^-1(k,t )∼log(T t);tε^2_k/T≪ 1 ∼ - log(ε_k/T);tε^2_k/T≫ 1.The full behavior of the function S^0_3 is plotted in Fig. <ref>. Note that as this function is logarithmically dependent on the cutoff of the integral, a change in the precise form of the cutoff enforcing q ℓ≪ 1 will shift the final result by an additive constant. This ambiguity would be fixed by comparing the asymptotic behavior of τ(ε), ε→∞ with a microscopic calculation of the lifetime at high energies. The self-energy at ε = 0 diverges as t→∞. Therefore at sufficiently long time the assumption that G may be substituted with it's non-interacting value is not valid. In the next section we lift this assumption.§ NON-PERTURBATIVE REGIME We note that the 2PI formalism is not a short-time expansion, and that the Eqns. (<ref>) are valid at all times. There are two shortcomings that must be remedied. First, we have seen that the D is a slow function of time, and that Σ depends essentially on the equal time value D_K(t,t). The estimation in Sec. <ref>essentially estimates D_K(t,t) by linear response. However as D increases with time, it will eventually grow large enough to violate the linear response assumption.We remedy this by employing the theory of critical quenches in this section.Second, we have that G and Σ must be self-consistent. We therefore solve the self-consistent version of Eq. (<ref>).The results of this analysis are shown in Fig. <ref>. §.§ Propagator for interacting Cooperons The equation of motion for D given in Eq. (<ref>), is equivalent to linear response. This is because the non-linear behavior comes from thedependence of G on Σ, which in turns depends on D. Therefore once the perturbative approximation for G fails it becomes necessary to considerthe non-linear evolution of D. 
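The scaling functions introduced above reduce to low-dimensional quadratures that are easy to evaluate. The sketch below (Python) is a rough numerical check of the quoted asymptotics, not a reproduction of the figures: the pole is smoothed by a small finite iδ, the radial cutoff y_max stands for √(Tt), and swapping the kernel F_K^0 for the interacting model-A kernel F_K (introduced in the next subsection, with θ = ε/4) gives the corresponding S_d.

```python
import numpy as np

def F_K0(x):
    x = np.asarray(x, float)
    return np.where(x < 1e-8, 1.0, (1.0 - np.exp(-2.0 * x)) / (2.0 * x))

def F_K(x, theta=0.25):
    # crude quadrature of the model-A kernel int_0^1 dy exp(-x y) (1 - y)^(-2 theta)
    y = np.linspace(0.0, 1.0 - 1e-6, 2000)
    return np.trapz(np.exp(-np.outer(np.atleast_1d(x), y)) * (1.0 - y)**(-2.0 * theta), y, axis=1)

def S_d(x, d, y_max=60.0, kernel=F_K0, delta=1e-2, n_y=2000, n_a=400):
    """Quadrature for the scaling function: radial integral up to y_max ~ sqrt(T t)."""
    y = np.linspace(1e-4, y_max, n_y)
    if d == 2:
        th = np.linspace(0.0, 2.0 * np.pi, n_a)
        ang = np.trapz(1.0 / (y[:, None] * np.cos(th) + x + 1j * delta), th, axis=1)
        meas = y / (2.0 * np.pi)**2
    else:  # d = 3
        c = np.linspace(-1.0, 1.0, n_a)
        ang = 2.0 * np.pi * np.trapz(1.0 / (y[:, None] * c + x + 1j * delta), c, axis=1)
        meas = y**2 / (2.0 * np.pi)**3
    return np.trapz(meas * kernel(y**2) * ang, y)

# Im S_d: ~const at small x and ~1/x at large x for d=2;
# grows ~log(y_max), i.e. ~log(T t), at small x for d=3.
```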
Since we are only concerned with the long wavelength behavior of D in the vicinity of the dynamical critical point, we do not need to solve for the full behavior of D. Instead we need only to identify the appropriate dynamical universality class. As the fluctuations of the bosonic field Δ are not conserved and are overdamped by the fermionic bath, the system belongs to the dynamical model-A transition <cit.>. The standard manipulation mapping the original Hamiltonian to this model are relegatedto Appendices <ref>, <ref>.We quote the results for the transient dynamics of thermal aging in model Aalready discussed elsewhere <cit.>, and re-derived in Appendix <ref>. In d=2 there is no true dynamical critical point, as in equilibrium. Therefore our results are only valid in the perturbative regime in d=2.The behavior at intermediate to long time is expected to be described by a crossover to Kosterlitz-Thouless physics, where theamplitude of the Cooper fluctuations saturate but there are long range phase fluctuations. However, such a calculation is beyond the scope of this paper.In d=3, the dynamics is characterized by three exponents z,η,θ. Of these z,η are already familiar in equilibrium ϕ^4 theory and are the dynamical critical exponent and the scaling dimension of ϕ respectively. The exponent θ is a non-equilibrium exponentknown as the initial slip exponent <cit.>. It is responsible for non-trivial aging dynamics, and is interpreted as the scaling dimension of a source field applied at short times after the quench. This is because such a source field will induce an initial order-parameter M_0=⟨ϕ_c⟩, to grow with time at short times after the quenchas <cit.> M_0 ∼ t^θ even though the quench is still within the disordered phase. At long times, eventually the order-parameter will decay to zero.For the present calculation, the short time behavior is not directly relevant as we are interested in the regime when |t-t'| ≪ t.However the short-time exponent still affects the qualitative behavior of D_K.The values of the Model A exponents z, η and θ may be calculated using standard methods such as epsilon-expansions or large-N, where N now controls the components of the bosonic field. We adopt the latter approach, and emphasize that this component Nis not the same as the fermion orbital index used to justify the form of Γ'.The derivation of the exponent using large-N for the bosonic theory is equivalent to a Hartree-Fock approximation for model A (see App. <ref>) giving z=2, η=0and θ = ϵ/4, ϵ=4-d. Other approximations will change the precise value of the exponent, but not the overall scaling form.The results for the Cooperon dynamics from model A (see App. <ref>) are as follows. When t' becomes comparable to t, we expect,iD_K(q,t,t')= T t' e^-q^2(t-t')(t/t')^θF_K(2q^2t');q l ≪ 1,F_K(x) ≡∫_0^1 dy e^-xy(1-y)^-2θ.In particular for equal times,i D_K(q,t,t)∝ t F_K(2 q^2t). The above form for the boson density iD_K(q,t,t) is the same as that in Sec. <ref> if one replacesthe scaling function F_K^0 by F_K. The function F_K has the asymptotic limits F_K(x)= (1-2θ)^-1; x = 0∼ x^-1 + 2θ x^-2;x≫ 1.Note the leading asymptotic behavior of F_K(x) as x→∞ is the same as that of F_K^0(x).We define a scaling function S_d, d>2 as the analogue of Eq. (<ref>) for interacting bosons,S_d(x) ≡∫_0^√(Tt)d y y^d-1/(2π)^d∫ dn̂ F_K(y^2) / y cosθ' +x +iδ.This replacement makes only a smallquantitative change to the final result in d=3. As the leading behavior at F_K is unchanged,the derivation of the asymptotics Sec. 
<ref> may be followed precisely, leading toImS_3(x)∼const;x≪ 1∼log( Tt/x^2)+ const;x ≫ 1.The only difference in the asymptotic behavior between ImS_3^0 and ImS_3 might be in the constants. However τ^-1only depends on ImS_3( β) where β is a material dependent parameter, seeEq. (<ref>).Further as ImS_3 has a logarithmic dependence on the cutoff, changing the (material-dependent) cutoffshifts ImS_3(x) → ImS_3(x) + γ. Thus the constants in Eq. (<ref>) may be absorbed into constant β, γ. Once these are fixed, ImS_3(x) and ImS_3^0(x) both crossover smoothly between the same asymptotics and therefore it is reasonable that they are qualitatively similar, see Fig. <ref>. As the final result does not appear significantly sensitive to the critical exponents we will not attempt to estimate the value of these exponents more accurately. §.§ Self-consistent solutionAlthough the intermediate regime is only controlled for the case of d=3, we include the self-consistent equation in both d=2,3for completeness. Having considered the behavior of D we may now directly solve the self consistent equation for the self-energy, Eq. (<ref>). In Wigner coordinates, it is given by,Σ^R(k,ω,t) = i∫d^d q/(2π)^dD_K(q,t,t)/ω + ϵ_k+q + Σ^R(k+q,ω,t).Expanding the denominator in q to first order, andassuming that the variation in Σ^R with q is negligible, givesΣ^R(k,ω,t) = i∫d^dq/(2π)^dD_K(q,t,t)/ω + ϵ_k + Σ^R(k,ω,t) + v⃗_k·q⃗. Now the scaling for D_K is i D_K(q,t,t) =Z T t F_K(2v^2 q^2 t/T).To take advantage of this we rescale units by definingy≡ v |q| √(2 t/ T)z_0≡(ω + ε_k + iδ)√(t/T)z≡(ω + ε_k + Σ^R(k,ω,t))√(t/T) α ≡Z/T^2 l^d.Givingz = z_0 + α (T t)^4-d/2∫_0^y_md y y^d-1/(2π)^d∫ dn̂F_K(y^2)/yk̂·n̂ + z.The integral over n̂ is an integral over unit vectors in ℝ^d. The condition that v|q| ≪ T is imposed by cutting off the integral at y_m ∼√(t T).So, it goes to infinity as t →∞. This gives a self-consistent equation for z.The function F_K(w) goes to a constant asw→ 0, decays like 1/w as w→∞. We introduce the function S_d depending on dimension d so that we can write,z = z_0 + α (Tt)^4-d/2S_d(z).Since we need to solve the integral self-consistently we must understand how S_d(z) behaves for z in the upper half of the complex plane. This is greatly simplified since S_d is analytic as a function of z in the upper half complex plane, as the only singularity can come from the poley k̂·n̂= -z. The estimate of S_d depends on the dimensions. §.§.§d=3 In d=3, Eq. (<ref>) leads to S_3(z)= 1/4π^2∫_0^y_m dy y^2 F_K(y^2) ∫_-1^1dcosθ'/ycosθ' + z= 1/4π^2∫_0^y_m dy y F_K(y^2)log(z+y/z-y). Recalling the position of the branch cut as z→ iδ we obtain log(iδ + y/iδ - y) = π i,so that as |z| → 0 we get the leading behavior as y_m→∞,S_3(z)=i/4π∫_0^y_m dy y F_K(y^2)∼i/4πlog(y_m/ζ),where ζ is an order one constant. The integral does not converge as y_m goes to ∞. Therefore, S_3 does not depend only on the variable z but also on y_m and therefore exactly how theintegral is cutoff at q≈ q_m.In particular by shifting y_m to a new value y'_m changes S_3 → S_3 + ilog( y_m/y'_m)/4π. Therefore the imaginary partof S_3 is ambiguous up to an overall additive constant. To understand the large z behavior, we split the integral into the regions y ≪ζ and y≫ζ, where ζ is some constant of order one. The small y limit is∫_0^ζ dy y F_K(y^2) log(z+y/z-y)∼∫_0^ζ dy y F_K(y^2)[ 1 + 2y/z +⋯]∼const. 
And the large y limit is∫_ζ^y_m dy y F_K(y^2)log(z+y/z-y) ≈∫_ζ^y_mdy/ylog(z+y/z-y)=π ilogy_m/ζ + ∫_ζ^y_mdy/y[log(z+y/z-y) - π i]≈π ilogy_m/ζ + ∫_ζ^∞dy/y[log(z+y/z-y) - π i]≈π i logy_m/ζ + ∫_ζ/z^∞du/u[log(1+u/1-u) - π i] As z →∞ this diverges logarithmically around u = 0, therefore the integral is approximately -π ilog(ζ/z). Collecting the results we have that, ImS_3(z) = i/4πlog(y_m/ζ) +⋯;z→ 0 = i/4πlog(y_m/z);z→∞The results are summarized in Fig. <ref>. Note the the substitution of F_K for F_K^0, makes minimal difference in the calculation of S_3,see Fig. <ref>. Returning to the self consistent equationz = z_0 + α (Tt)^4-d/2 S_3(z)If we assume that z ≈ z_0, we obtainz = z_0 + α (T t)^4-d/2 S_3(z_0)Plugging this back into the self consistent equationz= z_0 + α (T t)^4-d/2 S_3[ z_0 + α (Tt)^4-d/2 S_3(z_0) ]≈ z_0 + α (T t)^4-d/2 S_3(z_0) (1+ α (T t)^4-d/2 S'_3(z_0)).This implies the condition for validity of the perturbative solution is 1≫α (T t)^1/2S'_3(z_0) ∼α (T t)^1/2/z_0 Substituting z_0 = 2ε√(t/T), we see this condition is equivalent to,ε≫α T.Therefore the short time dynamics is sufficient to explain the behavior of the tails of the distribution, which is reasonable as these saturate at short times. Let us look for the self-consistent solution at z_0 = 0 and α (T t)^1/2≫ 1. Assuming z≫ 1 we get z = α/4π (T t)^1/2 i log(y_m/z).Bearing in mind the y_m ∝√(t) we see that the above has a solution with z ∝ t^1/2.Therefore the Σ_R(0,0,t) saturates at a constant at long times, given by the equationΣ^R(0,0,∞)/T = -i α/4πlogT/Σ_R(0,0,∞). To summarize, for ε_k/T ≫α, the behavior is the same as in Sec. <ref>, with saturation at logε_k / T. For smaller energies the logarithmic growth given earlier saturates at T t ∼α^-2.The general behavior is shown in Fig. <ref>. The approximate behavior of τ^-1 at ε_k =0 is shown inFig. <ref>.Unfortunately calculating S_3(z) over the upper half plane and solving Eq. (<ref>) is numerically intensive. Instead we approximateS_3(z)∼i/4πlogy_m/ζ + i z,which renders Eq. (<ref>) analytically tractable. As this approximation has the same asymptotic limits as S_3 it should be sufficient for reproducing the qualitative shape of τ^-1. §.§.§d=2 We now analyze the self consistent equation in d=2.S_2(z)= ∫_0^∞ dy y F_K(y^2)/(2π)^2∫_0^2πdθ'/ycosθ' + z= 1/2π∫_0^∞ dy y F_K(y^2)/√(z^2- y^2)We estimate this integral as follows. First as z → 0+ iδ this goes to i for some order one constant.Note the sign is determined by the branch cut and should be consistent with causality. On the other hand if z≫ 1we split the integral at some ζ of order one:∫_0^ζ dy y F_K(y^2)/√(z^2- y^2) ≈1/z∫_0^ζ dy y F_K(y^2)[1 + y^2/2z^2 + ⋯]∝1/z,and the other half∫_ζ^∞ dy y F_K(y^2)/√(z^2- y^2) ≈∫_ζ^∞dy/y√(z^2- y^2)= log (i ζ)/z-log(z +√(z^2-ζ^2))/z. As |z|→∞ this is ∼log(2z/ζ)/z, which dominates the small y contribution, and it's effect is to renormalize theorder one cutoff ζ. So we may summarize the behavior asS_2(z) = i·const;z→ 0 = -1/2π zlog(z/i ζ);z→∞ The real and imaginary parts of S_2^0 which is similar to S_2, are plotted in Fig. <ref>. We deal with the self consistent equation essentially as in d=3. The perturbative condition holds at large z_0,|α T t S'_2(z_0)|≪ 1 .For z_0 = 0 this condition is always violated at the time scale 1/α. However, if we take z_0 = 2ε√(t/T)≫ 1, then using the asymptotics we estimate that α T t S'_2(z_0)∼α T t log(z_0)/z_0^2 = α T t log(2ε√(t/T))/4ε^2 t/T. 
The perturbative condition is only violated at an exponentially long time t ∝exp( ε^2/(α T^2) ).We now seek a self-consistent solution when α T t ≫ 1, but z_0 is small.We use the large z asymptotics.z ≈ z_0 - 2πα T t/zlog(z).Solving the quadratic equation treating log(z) as a constant we get,z= z_0/2(1 + √(1-8πα t /z_0^2log z)).The choice of branch comes from matching the behavior as α→ 0. In the regime of interest where t≫ 1,we can to good accuracy simply replace the log z on the RHS with log (πα T t). Taking the t ≫ 1 limit we obtainz = [-πα t Tlog(- πα t T)]^1/2 . We see that z≫ 1 so theassumption of large z is self-consistent. Translating back to the self energy via Σ_R = z√(T/t), we obtainΣ_R ∼ T √(αlog(α T t)).The self energy apparently grows without bound at z_0 = 0 albeit extremely slowly. We interpret this unbounded growth as a symptom of thenon-existence of the true critical point in d=2 and therefore the impossibility of a self-consistent treatment in this regime. § CONCLUSIONSIn this paper we have analyzed the superfluid quench, wherein an attractive interaction is suddenly turned on in a normal fluid of fermions.This interaction enhances superfluid fluctuations. There are two regimes: a disordered phase at weak interaction strength where the fluctuations saturateat a finite value; and the ordered phase, for strong interaction strength, where the fluctuations grow exponentially,leading eventually to spontaneous symmetry breaking. Between these two regimes is a dynamical critical point, where the fluctuations grow but order is not formed. We find that as with the usual equilibriumcritical points, there is a notion of universality associated with this dynamical critical point. That is, once a small number of constants are fixed,the complete behavior of the superfluid fluctuations is determined by a function of the wavelength and time, with no further free parameters.The necessary parameters are the r and ℓ given in Eq. (<ref>).Moreover, we find a signature of this universality in the lifetime of the fermions. The mechanism is essentially that the fermions near the Fermi energyscatter resonantly off of superfluid fluctuations. Thus the growing superfluid fluctuations lead to a singular feature in the fermion lifetimeas a function of energy. We show that this singular feature inherits the universality of the dynamical critical point. In particular after fixing the Fermivelocity v_F and normalized scattering rate α, the energy and time dependence of the lifetime is completely determined.The present work may be extended in several directions. One is the full development of the kinetic equation governing the fermion dynamics,to be published elsewhere. It would also be of interest to repeat this analysis for a disordered system to allow for comparison withpump-probe experiments.Lastly, extending this treatment to include other fermion symmetry breaking channels, such as magnetic orders,or charge-density waves, would be fruitful.We note that the perturbative calculation in d=3 gives a logarithmic correction ∼log t which grows large with t.This suggests that a dynamical RG conductedaround the critical dimension d=3 may be a fruitful alternative way to approach this problem.In this paper we have consider the fermions to initially be at finite temperature before the quench.A natural problem would be to consider the quench starting with fermions at zero temperature. 
This problem is more delicate for at least two reasons.Firstly the superfluid phase transition always occurs at finite temperature, therefore to approach the critical regime one would have to considerthe temperature that is dynamically generated by the self-heating of the fermions.Secondly before the temperature is generated, the fermions are controlled by quantum fluctuations, leading tocomplex prethermal dynamics <cit.>. These difficulties aside, the problem appears deserving of future study. Acknowledgements: This work was supported by the US National Science Foundation Grant NSF-DMR 1607059. § THE D PROPAGATOR OR COOPERON AS CORRELATORS OF HUBBARD-STRATONOVICH FIELDSThe final post-quench Hamiltonian is,H_f = H_i + u/N∑_q Δ^†_qΔ_q.We will highlight the meaning of D in an imaginary time formalism as the generalization to real time Keldysh formalism is conceptually straightforward.We may decouple the quartic interaction via a complex field ϕ_q for each momentum mode q,∏_qe^-u/NΔ^†_qΔ_q= ∫[ϕ_q,ϕ^*_q] × e^-N/u|ϕ_q|^2 + ϕ_qΔ^†_q+ϕ_q^*Δ_q.In this picture, the action is quadratic in the fermionic fields.After integrating out the fermions one may write the partition function Z as,Z=∫[ϕ,ϕ^*] e^-N/u∫ dx |ϕ|^2 + Trln[g^-1-[ 0 ϕ; ϕ^* 0 ]],where g^-1 is the non-interacting fermionic Green's function in 2× 2 Nambu space. On expanding the Trln, one obtains an action for the ϕ fields. Since the system is assumed to be in the normal phase, only even powers of the ϕ field enter the action.Thus, we obtain,Z = ∫[ϕ,ϕ^*]e^-S(ϕ̂); ϕ̂= [ 0 ϕ; ϕ^* 0 ],whereS= N/u∫ dx |ϕ(x)|^2 - 1/2 Tr[gϕ̂gϕ̂]-1/4 Tr[gϕ̂gϕ̂gϕ̂gϕ̂].The Gaussian approximation involves keeping only quadratic terms in the ϕ fields. The coefficient of ϕ^2 in the second term in the action is recognized as the polarization bubble Π≡ gg. The equation of motion at Gaussian order is,[1/u- Π]D = 1 ,where trace over the fermions gives an additional factor of N. In the next sub-section we show that Eq. (<ref>) is equivalent to a classical Langevin equation for the Hubbard-Stratonivich fields when the fermions are at non-zero temperatures.§ PROPERTIES OF THE Π AND RELATIONSHIP TO MODEL-AThe fermionic distribution function before the quench is, n_σ(k)=1/(e^ξ_k/T+1),ξ_k=ϵ_k-μ. We measure all energies relative to the chemical potential. The Keldysh component of the polarization bubble is found to be,iΠ^K(q,t,t')=∑_k e^-i(ξ_k↑+ξ_-k+q↓)(t-t') ×[n_σ(k)n_-σ(-k+q) + (1-n_σ(k))(1-n_-σ(-k+q)) ],while the retarded component isiΠ^R(q,t,t') =θ(t-t')∑_k e^-i(ξ_k↑+ξ_-k+q↓)(t-t') ×[-n_σ(k)- n_-σ(-k+q)+1].Since the Π are time-translation invariant in this approximation, it is helpful to write them in frequency space,Π^R(q,ω)=-1/2∑_k tanh[ξ_k/2T] + tanh[ξ_k-q/2T]/ω - ξ_k-ξ_k-q+iδ, Π^K(q,ω)=2iπ∑_k (n[ξ_k/T]n[ξ_-k+q/T]+ (1-n[ξ_k/T])(1-n[ξ_-k+q/T]) ) δ(ω - ξ_k-ξ_k-q).Now we use the fact that 1-2n(x) = tanh(x/2) and using that (a) (b) +1 = (a+b)((a)+(b)), one may show that fluctuation dissipation theorem (FDT) is obeyed,Π_K(q,ω) =(ω/2T)[Π_R(q,ω)-Π_A(q.ω)].It should be emphasized that this FDT is simply inherited from the properties of the initial state. In a better approximation,the FDT will cease to hold as the system goes through the process of thermalization.We are interested in the dynamics of the soft Cooperon mode, which evolves on a timescale much larger than T^-1. Therefore we expand Π^R(q,ω)inω/T, q^2/T. 
The constant term,Π^R(0,0)= ∑_ktanh[ξ_k/2T]/2ξ_k -iδ≈1/2νlog E_F /T,is the usual Cooper logarithm, where ν is the density of states and E_F is some bandwidth or Fermi energy. The coefficient of ω/T is∂/∂ωΠ^R(0,0) = ∑_ktanh[ ξ_k/2T] /(2ξ_k - iδ)^2= π i ∑_k tanh[ ξ_k/2T] δ'(2ξ_k)= i νπ/2 Twhich is purely imaginary in the absence of particle hole asymmetry. For the coefficient of q^2/T, we expand the dispersion as ϵ_k-q = ϵ_k - q⃗·v⃗_k and obtain,∂^2/∂ q^2Π^R(0,0) = ∑_k tanh”[ξ_k/2T] (v⃗_k/(2T))^2 /2ξ_k-iδ= ν/8 T^2⟨ v^2_k⟩_FS∫_-∞^∞ dx tanh” x/x,where ⟨·⟩ _FS is the average over the Fermi surface and the integral evaluates to the constant28ζ(3)/π^2 ≈ 3.41.With this expansion for Π_R, the FDT gives that Π_K is given by iΠ_K(ω, q = 0)∼ 2νπand therefore in real time iΠ_K(t,t') = νδ(t-t'),as discussed in the text. The fact that Π_K is well approximated by a delta function is entirely a consequence of the fact that we are interestedin timescales much longer than T^-1, because the Cooperon dynamics are governed by much longer timescales at the critical point.Thus in summary, the above behavior for Π together with how it affects the equation of motion of D (Eq. (<ref>)) show that the Cooperon obeys model-A dynamics close to the critical point.§ INTERACTING COOPERONS IN THE HARTREE-FOCK APPROXIMATION For the sake of completeness we outline how the results for interacting bosons used in the main text were obtained. We employ a Hartree-Fock approach, although the same scaling forms can be obtained with an ϵ-expansion <cit.>. The Hartree-Fock approximation for the bosons is justified as the N→∞ limit of a bosonic model where N denotes the number of components of the boson field. This N should not be confused with the orbital index of the fermions used in the main text. The Hartree-Fock equations of motion are,∂_t D_R(k,t,t') + [k^2 + r_ eff(t)]D_R(k,t,t')=-δ(t-t'),⇒ D_R(k,t,t') = -θ(t-t')e^-k^2(t-t')e^-∫_t'^t dt_1r_ eff(t_1),where the mass obeys the equation of motionr_ eff(t) = r + u ∫d^dq/(2π)^diD_K(q,t,t), D_K = D_R ∘Π_K ∘ D_A.The overdamped dynamics of D_R is entirely due to the underlying finite temperature Fermi sea which gives u^-1-Π^R= r +iω.If we employ the Gaussian expression for D_K(q,t,t) →T/q^2+r[1-e^-2(q^2+r)t], we find thatr_ eff(t)-r_c →∫ q^d-1dq 1/q^2e^-2 q^2 t∝1/t^d/2-1.The above shows that scaling emerges only if we set d=4 in the above Gaussian result, showing that the upper critical dimension of the theory is d=4. Thus with the ansatz,r_ eff(t) =-a/t,we obtain,D_R(q,t,t')= -e^-q^2(t-t')(t/t')^a,For D_K we have,iD_K(q,t,t'; t>t')= 2T ∫_0^t'dt_1e^-q^2(t-t_1)-q^2(t'-t_1)(t/t_1)^a (t'/t_1)^a, = 2T e^-q^2(t+t')(tt')^a∫_0^t'dt_1 e^2 q^2t_1 t_1^-2a.For q^2 t' ≪ 1, we obtain aging behavior,iD_K(q,t, t';q^2 t' ≪ 1)= c e^-q^2 tt^a (t')^1-a, For equal times we may write,i D_K(q,t,t)= 2Te^-2 q^2 tt^2a∫_0^tdt_1 e^2 q^2t_1t_1^-2a,= T/q^2F(2 q^2t),F(x) = e^-xx^2a∫_0^x dy' e^y'y'^-2a,= x ∫_0^1 dy e^-xy(1-y)^-2a.Note that F(x=0)=0 and F(x=∞)=1.In order to solve for a, we use thatr_ eff(t) = r_ eff(∞)+ u ∫_q[iD_K(q,t,t)-iD_K(q,∞,∞)].At criticality r_ eff(∞)=0 and iD_K(q,∞,∞)= T/q^2. Using this,r_ eff(t) = uA_d ∫_0^Λ d q q^d-11/q^2[F(2 q^2 t)-1].where A_d= the surface area of a d-dimensional unit sphere.The above may be recast as-a/t=uA_d/t^-1+d/2∫_0^2Λ^2 t dx x^-2 + d/2[F(x)-1].Thus we may write, defining ϵ=4-d,a= - u A_dt^ϵ/2{∫_0^∞dx x^-ϵ/2[F(x)-1] -∫_2Λ^2 t^∞dx x^-ϵ/2[F(x)-1]}.The first integral above increases in time as t^ϵ/2 unless∫_0^∞dx x^-ϵ/2[F(x)-1]= 0.Notice that in Eq. 
(<ref>), to avoid infra-red singularity, 2a < 1. Then, F(x, a<1/2) = e^-x x (-x)^(2a-1) [Γ(1-2a) - Γ(1-2a,-x)]. Since F(0)=0, we require ϵ/2 < 1, i.e. d>2, to make the integral ∫ dx x^-ϵ/2 infra-red convergent. Substituting Eq. (<ref>) in Eq. (<ref>), we obtain (for ϵ<2) ∫_0^∞ dx x^-ϵ/2 [F(x)-1] = -Γ(1-2a)Γ(ϵ/2)Γ(1-ϵ/2)/Γ(-2a+ϵ/2) = 0 ⇒ a=ϵ/4. Thus we have derived the quoted scaling forms, and also the initial slip exponent a=θ=ϵ/4. It is also useful to note that for t>t' but general q^2t, q^2t', one obtains from Eq. (<ref>), iD_K(q,t,t') = (T/q^2) e^-q^2(t-t') (t/t')^θ F(2q^2t'). Defining F_K(x) = F(x)/x, iD_K(q,t,t') = T t' e^-q^2(t-t') (t/t')^θ F_K(2q^2t').

Fausti11 D. Fausti, R. I. Tobey, N. Dean, S. Kaiser, A. Dienst, M. C. Hoffmann, S. Pyon, T. Takayama, H. Takagi, and A. Cavalleri, Science 331, 189 (2011).
Smallwood12 C. L. Smallwood, J. P. Hinton, C. Jozwiak, W. Zhang, J. D. Koralek, H. Eisaki, D.-H. Lee, J. Orenstein, and A. Lanzara, Science 336, 1137 (2012).
Smallwood14 C. L. Smallwood, W. Zhang, T. L. Miller, C. Jozwiak, H. Eisaki, D.-H. Lee, and A. Lanzara, Phys. Rev. B 89, 115126 (2014).
Beck13 M. Beck, I. Rousseau, M. Klammer, P. Leiderer, M. Mittendorff, S. Winnerl, M. Helm, G. N. Gol'tsman, and J. Demsar, Phys. Rev. Lett. 110, 267003 (2013).
Mitrano15 M. Mitrano, A. Cantaluppi, D. Nicoletti, S. Kaiser, A. Perucchi, S. Lupi, P. D. Pietro, D. Pontiroli, M. Riccó, S. R. Clark, D. Jaksch, and A. Cavalleri, Nature 530, 461 (2016).
Regal04 C. A. Regal, M. Greiner, and D. S. Jin, Phys. Rev. Lett. 92, 040403 (2004).
Zwierlein04 M. W. Zwierlein, C. A. Stan, C. H. Schunck, S. M. F. Raupach, A. J. Kerman, and W. Ketterle, Phys. Rev. Lett. 92, 120403 (2004).
Bloch08 I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008).
Bloch12 M. Endres, T. Fukuhara, D. Pekker, M. Cheneau, P. Schaub, C. Gross, E. Demler, S. Kuhr, and I. Bloch, Nature 487, 454 (2012).
Navon16 N. Navon, A. L. Gaunt, R. P. Smith, and Z. Hadzibabic, Nature 539, 72 (2016).
Juchem04 S. Juchem, W. Cassing, and C. Greiner, Phys. Rev. D 69, 025006 (2004).
Werner09 M. Eckstein, M. Kollar, and P. Werner, Phys. Rev. Lett. 103, 056403 (2009).
Santos10 L. F. Santos and M. Rigol, Phys. Rev. E 81, 036206 (2010).
Tavora14 M. Tavora, A. Rosch, and A. Mitra, Phys. Rev. Lett. 113, 010601 (2014).
Janssen1988 H. Janssen, B. Schaub, and B. Schmittmann, Z. Phys. B 73, 539 (1989).
Huse89 D. A. Huse, Phys. Rev. B 40, 304 (1989).
Gambassi05 P. Calabrese and A. Gambassi, J. Phys. A: Math. Gen. 38, R133 (2005).
Gagel2014 P. Gagel, P. P. Orth, and J. Schmalian, Phys. Rev. Lett. 113, 220401 (2014).
Gagel15 P. Gagel, P. P. Orth, and J. Schmalian, Phys. Rev. B 92, 115121 (2015).
Sondhi2013 A. Chandran, A. Nanduri, S. S. Gubser, and S. L. Sondhi, Phys. Rev. B 88, 024306 (2013).
Chiocchetta2015 A. Chiocchetta, M. Tavora, A. Gambassi, and A. Mitra, Phys. Rev. B 91, 220302 (2015).
Maraga2015 A. Maraga, A. Chiocchetta, A. Mitra, and A. Gambassi, Phys. Rev. E 92, 042151 (2015).
Oberthaler15 E. Nicklas, M. Karl, M. Höfer, A. Johnson, W. Muessel, H. Strobel, J. Tomkovič, T. Gasenzer, and M. K. Oberthaler, Phys. Rev. Lett. 115, 245301 (2015).
MitGam16 A. Chiocchetta, M. Tavora, A. Gambassi, and A. Mitra, Phys. Rev. B 94, 134311 (2016).
Lemonik16 Y. Lemonik and A. Mitra, Phys. Rev. B 94, 024306 (2016).
Gasenzer17 M. Karl, H. Cakir, J. C. Halimeh, M. K. Oberthaler, M. Kastner, and T. Gasenzer, Phys. Rev. E 96, 022110 (2017).
Marino17 A. Chiocchetta, A. Gambassi, S. Diehl, and J. Marino, Phys. Rev. Lett. 118, 135701 (2017).
Yuzbashyan15 E. A. Yuzbashyan, M. Dzero, V. Gurarie, and M. S. Foster, Phys. Rev. A 91, 033628 (2015).
Foster15 Y. Liao and M. S. Foster, Phys. Rev. A 92, 053620 (2015).
Foster14 M. S. Foster, V. Gurarie, M. Dzero, and E. A. Yuzbashyan, Phys. Rev. Lett. 113, 076403 (2014).
Sentef16 M. A. Sentef, A. F. Kemper, A. Georges, and C. Kollath, Phys. Rev. B 93, 144506 (2016).
Knap16 M. Knap, M. Babadi, G. Refael, I. Martin, and E. Demler, Phys. Rev. B 94, 214504 (2016).
Dehghani17 H. Dehghani and A. Mitra, arXiv:1703.01621 (unpublished).
Kennes17 D. M. Kennes, E. Y. Wilner, D. R. Reichman, and A. J. Millis, Nature Physics, doi:10.1038/nphys4024 (2017).
HH77 P. C. Hohenberg and B. I. Halperin, Rev. Mod. Phys. 49, 435 (1977).
CornwallJackiwTomboulis J. M. Cornwall, R. Jackiw, and E. Tomboulis, Phys. Rev. D 10, 2428 (1974).
BergesRev J. Berges, AIP Conf. Proc. 739, 3 (2004).
Knap15 M. Babadi, E. Demler, and M. Knap, Phys. Rev. X 5, 041005 (2015). | http://arxiv.org/abs/1705.09200v2 | {
"authors": [
"Yonah Lemonik",
"Aditi Mitra"
],
"categories": [
"cond-mat.supr-con"
],
"primary_category": "cond-mat.supr-con",
"published": "20170525143248",
"title": "Time-resolved spectral density of interacting fermions following a quench to a superconducting critical point"
} |
[email protected] School of Physics , Dalian University of Technology, Dalian, 116024, P.R. China [][email protected] Department of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China Collaborative Innovation Center of Quantum Matter, Beijing 100871, China Center for High Energy Physics, Peking University, Beijing 100871, China We investigate the chiral symmetry and its spontaneous breaking at finite temperature and in an external magnetic field with four-fermion interactions of different channels. Quantum and thermal fluctuations are included within the functional renormalization group approach, and properties of the set of flow equations for different couplings, such as its fixed points, are discussed. It is found that external parameters, e.g. the temperature and the external magnetic field and so on, do not change the structure of the renormalization group flows for the couplings. The flow strength is found to be significantly dependent on the route and direction in the plane of couplings of different channels. Therefore, the critical temperature for the chiral phase transition shows a pronounced dependence on the direction as well. Given fixed initial ultraviolet couplings, the critical temperature increases with the increasing magnetic field, viz., the magnetic catalysis is observed with initial couplings fixed. 11.30.Rd,05.10.Cc,11.10.Wx,12.38.MhFour-fermion interactions and the chiral symmetry breaking in an external magnetic fieldYu-xin Liu December 30, 2023 =========================================================================================§ INTRODUCTION Recent studies on QCD and strongly interacting matter in extremely strong external magnetic fields have attracted lots of attentions, which are motivated, on one hand, by experimental observations of azimuthal charged-particle correlations in heavy-ion collisions at the Relativistic Heavy-Ion Collider (RHIC) and the LHC <cit.>.This phenomenon can be interpreted, although still being under discussion and recently challenged by relevant measurements in p-Pb collisions by the CMS Collaboration at LHC <cit.>, in the theoretical framework of the chiral magnetic effect (CME) <cit.>, whereof positive electric charges are separated from negative ones along the direction of the magnetic field produced in event-by-event noncentral heavy-ion collisions, due to the imbalanced chirality caused by a possible local violation of the parity symmetry. For more details about the CME and its recent progress, see e.g. <cit.> and references therein. On the other hand, how an extremely strong external magnetic field affects the spontaneous chiral symmetry breaking and the QCD chiral phase transition, catalyzing the symmetry breaking or inhibiting, is still in debate. Unlike many effective models which predict that, when the magnetic field strength B is enhanced, the (pseudo)-critical temperature T_c for the chiral phase transition or crossover increases as well <cit.>, lattice QCD simulations found that T_c decreases with increasing B <cit.>, which is because sea quarks coupled with the magnetic field increase the Polyakov loop, thus reduce the chiral condensate effectively <cit.>. In the same time, based on other approaches, such as the Dyson-Schwinger equations, the magnetic catalysis is found for asymptotically large B, while for intermediate B, the inverse magnetic catalysis is observed, due to gluon screening effects and the strong coupling decreasing <cit.>. 
This phenomenon is also called as the delayed magnetic catalysis, similarly found in a functional renormalization group (FRG) calculation as well <cit.>, in which the running of the four-fermion coupling is driven by not only the four-fermion but also the quark-gluon interactions. Besides gluonic effects as mentioned above (see e.g. <cit.> for more relevant discussions), the inverse catalysis can also be reproduced by including e.g., neutral meson effects <cit.>, quark antiscreening <cit.>or some other scenario <cit.>.Fukushima and Pawlowski have investigated the chiral symmetry breaking in an external magnetic field by studying the running four-fermion couplings within the FRG approach <cit.>, see e.g. <cit.> for more details about the FRG and its recent progresses in QCD. It is clearly shown there that how the magnetic field changes the pattern of the renormalization group (RG) flow for the four-fermion coupling and how the dimensional reduction takes place, in comparison to case without B. Note also that only four-fermion interactions of scalar and pseudo-scalar channels, more specifically σ-π channels, are included in analyses of <cit.>. Although scalar and pseudo-scalar channels play the most important role in the chiral symmetry breaking, they are not complete in the Dirac space, and are significantly influenced by other channels through quantum fluctuations, as we will show in what follows. Furthermore, four-fermion interactions with different channels are also indispensable to the dynamical hadronization technique <cit.>, through which hadronic degrees of freedom emerge naturally as collective modes of quark-gluon dynamics in the low energy regime, and this technique has been successfully applied in recent rebosonized QCD computations <cit.>. In this work we will perform the RG analyses for four-fermion interactions of different channels, investigate their mutual impacts on each other through quantum evolution, and study the influence on the chiral symmetry breaking and chiral phase transition in an external magnetic field.The paper is organized as follows: In Sec. <ref> we investigate the four-fermion interactions of different channels within the FRG framework, and obtain a set of flow equations for the couplings. Then in Sec. <ref>the structure of the flow equations is analyzed in detail. Flow diagram and relevant fixed points at vacuum are presented, and for the cases of finite temperature and an external magnetic field, numerical results are provided. A summary and conclusion can be found in Sec. <ref>.§RENORMALIZATION GROUP FLOWS FOR THE FOUR-FERMION COUPLINGS We employ the following scale-dependent effective action for the two-flavor Nambu–Jona-Lasinio (NJL) model:Γ_k= ∫_x{Z_q,kq̅γ_μ∂_μq + 1/2λ_-,k[(q̅γ_μq)^2-(q̅iγ_μγ_5q)^2]+ 1/2λ_+,k[(q̅γ_μq)^2+(q̅iγ_μγ_5q)^2]} ,with ∫_x=∫_0^1/Td x_0 ∫ d^3 x and the quark fields q, q̅. Z_q,k is the quark wave function renormalization; λ_-,k and λ_+,k are the four-fermion couplings for the V-A and V+A channels respectively, with V and A denoting vector and axial vector, where we have employed the notations in <cit.>. They are all scale-dependent with subscript k. Obviously, the four-fermion interactions in eq:action are U(N_f)_V× U(N_f)_A symmetric with flavor number N_f=2, and we don't take the U(1)_A breaking into account throughout this work. 
The V+A channel is related to the scalar and pseudo-scalar ones through the Fierz transformations, which yield(q̅γ_μq)^2+(q̅iγ_μγ_5q)^2 = -4/N_c{[(q̅ T^0q)^2-(q̅ T^aγ_5q)^2]+[(q̅ T^aq)^2-(q̅ T^0γ_5q)^2]}-8{[(q̅ t^αT^0q)^2-(q̅ t^αT^aγ_5q)^2]+[(q̅ t^αT^aq)^2-(q̅ t^αT^0γ_5q)^2]} ,with T^0=1/√(2N_f)1_N_f× N_f and the SU(N_f) generators T^a in flavor space, and the SU(N_c) generators t^α in color one. Those in the first set of square brackets on the r.h.s of eq:fierzVpA are the σ-π channels, which are commonly used in the NJL model. Note that, besides the σ-π channels, contributions also come from scalar triplets and the pseudo-scalar singlet, and even nontrivial colored channels with the SU(N_c) generators t^α. Similarly, the V-A channel, upon the implementation of the transformation, can be rewritten as (q̅γ_μq)^2-(q̅iγ_μγ_5q)^2 = 2/N_c{[(q̅ T^0γ_μq)^2-(q̅ T^aiγ_μγ_5q)^2]+[(q̅ T^aγ_μq)^2-(q̅ T^0iγ_μγ_5q)^2]}+4{[(q̅ t^αT^0γ_μq)^2-(q̅ t^αT^aiγ_μγ_5q)^2]+[(q̅ t^αT^aγ_μq)^2-(q̅ t^αT^0iγ_μγ_5q)^2]} .The V-A channel, however, is invariant under the Fierz transformation, as only the Dirac structure is concerned. Therefore, it is orthogonal to the (pseudo)-scalar channels, while the V+A channel is not. And since we are interested in the chiral symmetry and its breaking, does that mean the V-A channel is not important and can be neglected? Obviously, it is not correct. We will show in what follows that quantum fluctuations in the V-A channel affect those in the V+A channel significantly, and thus the chiral symmetry breaking as well.As the renormalization group (RG) scale k in eq:action evolves from an ultraviolet (UV) cutoff scale k=Λ down to the infrared k=0, quantum fluctuations with wavelength ≳ 1/Λ are successively included in the effective action, through the Wetterich equation <cit.> as follows∂_tΓ_k =1/2STr{∂_t R_k(Γ_k^(2)+R_k)^-1}=1/2STr{∂̃_tln(Γ_k^(2)+R_k)} ,with t=ln (k/Λ), where we have adopted the formalism in Ref.<cit.>; Γ_k^(2)+R_k, which is usually called as the fluctuation matrix, includes a regulator R_k and the second derivative of the effective action with respect to all fields, i.e.,(Γ_k^(2))_ij:=δ/δΦ_iΓ_kδ/δΦ_j ,with the super field Φ=(q,q̅) for the NJL model. The super trace in eq:WetterichEq runs over momenta, fields and all other internal indices, and provides an additional minus sign for the fermionic part. Since in our case we only have fermionic quark fields, the minus sign is always there. The partial differentiation ∂̃_t with a tilde in eq:WetterichEqacts only on the regulator.The fluctuation matrix can be rewritten asΓ_k^(2)+R_k=𝒫+ℱ ,where 𝒫 is the matrix of inverse propagators with regulators, and ℱ is the leftover part which includes the field dependence. Substituting eq:flucmatrdecom into eq:WetterichEq and expanding ln(𝒫+ℱ) in order of ℱ/𝒫, one arrives at∂_tΓ_k= 1/2STr{∂̃_tln(𝒫+ℱ)}=1/2STr∂̃_tln𝒫+1/2STr∂̃_t(1/𝒫ℱ)-1/4STr∂̃_t(1/𝒫ℱ)^2+⋯ ,from which we could obtain the flow equations for all the coupling in the effective action at appropriate expanding orders.In this work we employ the 3d optimized regulator for the quark, i.e.,R_k(q) =Z_q,kiq⃗·γ⃗ r_F(q⃗^2/k^2) ,withr_F(x) =(1/√(x)-1)Θ(1-x).Then through Eqs. 
(<ref>) (<ref>) (<ref>) one has1/𝒫 =[ 0S_k(q); -S_k(q)^T 0 ] ,withS_k(q)=1/Z_q,ki[q_0γ_0+(1+r_F)q⃗·γ⃗] ,and ℱ =[ F_k^qqF_k^qq̅;F_k^q̅q F_k^q̅q̅ ] ,withF_k^qq= -(λ_+,k+λ_-,k)(q̅γ_μ)^T(q̅γ_μ)-(λ_+,k-λ_-,k)(q̅ iγ_μγ_5)^T(q̅ iγ_μγ_5) ,F_k^q̅q̅= -(λ_+,k+λ_-,k)(γ_μ q)(γ_μ q)^T-(λ_+,k-λ_-,k)(iγ_μγ_5 q)(iγ_μγ_5 q)^T ,F_k^q̅ q= (λ_+,k+λ_-,k)[γ_μ (q̅γ_μ q)+γ_μ q q̅γ_μ]+(λ_+,k-λ_-,k)[(iγ_μγ_5)(q̅ iγ_μγ_5 q)+iγ_μγ_5q q̅ iγ_μγ_5] ,F_k^q q̅= -(F_k^q̅ q)^T . So far, we have all the elements to construct the flow equations for the four-fermion couplings. To begin,we would like to mention that in this work the momentum dependence of the four-fermion couplings are neglected for simplicity, and the external momenta are assumed to be vanishing. It is found in <cit.> that the momentum dependence only has a minor quantitative effect. Inserting the expression of ℱ/𝒫 into eq:WEqexpand and only considering the second-order term, one finds that there are several different classes of diagrams contributing to the flow equations, which are shown in fig:fl4p. It is quite apparent that, among the four diagrams in fig:fl4p, only diagram (a) has a closed loop, which just corresponds to the usually called Hartree term, and is the leading-order term in the expansion of 1/(N_cN_f). In comparison to (a), there is a small leak on the right vertex in diagram (b) [It does not matter whether the leak appears on the right or left, because they can be related with each other through manipulations of symmetry, in the case that all external momenta are vanishing.], and diagrams (c) and (d) have two leaks. Note that (c) and (d) are different, since their two connected fermionic lines are anti-parallel and parallel, respectively.Inserting the regulator in eq:regulator into the diagrams in fig:fl4p, one can perform straightforward calculations for diagram (a). For others one needs additional Fierz transformations twice to obtain expressions, which have the same four-fermion interactions as eq:action. Then, the flow equations for the four-fermion couplings are readily obtained as∂_t λ_+ =-[2(N_cN_f+1)λ_+λ_-+3λ_+^2]l_2(k,T,μ)/k, ∂_t λ_- =-[(N_cN_f-1)λ_-^2+N_cN_fλ_+^2]l_2(k,T,μ)/k .We have verified that these equations are similar with those obtained in <cit.> for a fermionic model without color degrees of freedom. The threshold function is given byl_2(k,T,μ):=2ℱ_2(k,T,μ)/N_f2N_f∫d^3q/(2π)^3Θ(1-q⃗^2/k^2) ,withℱ_2(k,T,μ)= 1/4[1-n_f(k,T,μ)-n_f(k,T,-μ)]-k/4T[n_f(k,T,μ)+n_f(k,T,-μ)-n_f^2(k,T,μ)-n_f^2(k,T,-μ)] ,and the fermionic distribution functionn_f(k,T,μ)= 1/e^(k-μ)/T+1 . To be proceeded, the flow equations can be easily extended to the case of a finite external magnetic field. Assuming a spatially homogeneous, temporally independentmagnetic field B aligning along the z axis, due to the quantization of the transverse momenta into Landau levels, one hasq⃗^2=q_z^2+2| q_f eB| n ,with the electric charge q_f e. And the 3-momentum integral in eq:l2 is modified to 2N_f∫d^3q/(2π)^3Θ(1-q⃗^2/k^2) ⟶ 1/2π^2∑_f=u,d| q_f eB|∑_n=0^N_k,fα_n√(k^2-2| q_f eB| n) ,with α_0=1 for the lowest-order Landau level and α_n>0=2; N_k,f is given byN_k,f=θ_g(k^2/2| q_f eB|) ,with θ_g(n+x)=n for 0<x<1 and integer n.Therefore, if there is a finite magnetic field, flow equations in (<ref>) and (<ref>) are not changed, but with the threshold function replaced withl_2(k,T,μ, eB)= ℱ_2(k,T,μ)/π^2 N_f∑_f=u,d| q_f eB|×∑_n=0^N_k,fα_n√(k^2-2| q_f eB| n) , § NUMERICAL RESULTS It is interesting to note that, from the flow equations for the four-fermion couplings in Eqs. 
(<ref>) (<ref>), all the external influences, such as the temperature, chemical potential, and the magnetic field are implemented only through the threshold function l_2. Thus the external parameters do not change the structure of the flow equations. For the case of vacuum, l_2(k)=k^3/(6π^2), it is more convenient to introduce the dimensionless couplings, i.e.,λ_+^*=k^2λ_+andλ_-^*=k^2λ_- ,whose flow equations are readily obtained as∂_t λ_+^*=-β_+and∂_t λ_-^*=-β_- ,withβ_+= 1/6π^2[2(N_cN_f+1)λ_+^*λ_-^*+3(λ_+^*)^2]-2λ_+^* , β_-= 1/6π^2[(N_cN_f-1)(λ_-^*)^2+N_cN_f(λ_+^*)^2]-2λ_-^* .The vector field (β_+, β_-), flowing toward the infrared (IR) in the plane of λ_+^* and λ_-^*, is plotted in fig:flowfield. Apparently, there are four fixed points in the flow diagram, which are labelled by red stars. Of these points, the one at the origin, i.e. point O with λ_+^*=λ_-^*=0, is the IR fixed point, while the other three belong to the kind of UV ones, and their coordinates (λ_+^*, λ_-^*) are given byA= (12(3+N_cN_f)π^2/9+5N_cN_f+2N_c^2N_f^2, 12 N_cN_fπ^2/9+5N_cN_f+2N_c^2N_f^2),B= (0, 12 π^2/N_cN_f-1),C= (-12 π^2/2N_cN_f-1, 12 π^2/2N_cN_f-1).Drawing a straight line across both the IR fixed point O and any one of the UV points, say point A, one obtains a specific line of renormalization flow OA. A system, initially located on the line, won't flow away from it forever. This is quite obvious for line OB from Eqs. (<ref>) (<ref>) with λ_+=0. It can be easily checked that this also holds for lines OA and OC. For OA, performing the following rotation:[ λ_+^'; λ_-^' ] =[cosθsinθ; -sinθcosθ ][ λ_+; λ_- ] ,withcosθ= N_cN_f+3/√((N_cN_f)^2+(N_cN_f+3)^2), sinθ= N_cN_f/√((N_cN_f)^2+(N_cN_f+3)^2).one arrives at∂_t λ_+^'= -[(2N_c^2N_f^2+5N_cN_f+9)λ_+^'^2+2(N_cN_f+3)×λ_+^'λ_-^'-3N_cN_fλ_-^'^2]l_2/k×1/√((N_cN_f)^2+(N_cN_f+3)^2), ∂_t λ_-^'= -λ_-^'[-4N_cN_fλ_+^'+(2N_c^2N_f^2+2N_cN_f-3)λ_-^']×l_2/k1/√((N_cN_f)^2+(N_cN_f+3)^2) .line OA just corresponds to the solutions in the equations above with λ_-^'=0, and the two equations are reduced to a single one as follows∂_t λ_+^'= -cl_2/kλ_+^'^2 ,withc= (2N_c^2N_f^2+5N_cN_f+9)/√((N_cN_f)^2+(N_cN_f+3)^2)≃ 10.3 .In the same way, this property also applies to line OC. In summary, the two entangled flow equations for the V-A and V+A couplings are decomposed into a simple single flow equation, when the system is located on a straight line which connects the IR and UV fixed points. Properties of the flow equation on these lines and its solutions, at vacuum and at finite T or/and B, have been studied in detail in <cit.>, and all results there can be directly applied to the reduced flow equation in this work. Taking eq:lambppri for example, only when the coupling strength λ_Λ+^' at the initial UV evolution scale Λ at vacuum fulfills Λ^2λ_Λ+^' ≥12π^2/c ,one has the spontaneous chiral symmetry breaking. As shown in fig:flowfield, line OA is divided into two parts by point A, and the condition in eq:chiralbreaking just corresponds to the case that, at k=Λ, the system is located at point A or on the higher part of line OA, which flows away from the IR fixed point O. Apparently, systems on the other sector of line OA all flow toward the IR fixed point O and there are no spontaneous chiral symmetry breaking. Similar analysis is also applicable to lines OB and OC. It is straightforward to extend our analysis to the whole plane of λ_+^* and λ_-^*. 
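The fixed-point structure above is straightforward to cross-check numerically. The short Python sketch below (an illustrative check, not part of the derivation) evaluates β_+ and β_- at the points O, A, B and C for N_f=2 and an assumed N_c=3 (not stated explicitly in this excerpt, but consistent with c ≃ 10.3), confirming that all four are zeros of the vacuum flow, and reproduces the constant c on the line OA.

import numpy as np

NC, NF = 3, 2                 # N_c = 3 is an assumption; N_f = 2 as in the model above
NCF = NC * NF

def beta(lp, lm):
    # beta functions of the dimensionless couplings lambda_+^*, lambda_-^* (vacuum flow)
    bp = (2.0 * (NCF + 1) * lp * lm + 3.0 * lp**2) / (6.0 * np.pi**2) - 2.0 * lp
    bm = ((NCF - 1) * lm**2 + NCF * lp**2) / (6.0 * np.pi**2) - 2.0 * lm
    return np.array([bp, bm])

den = 9.0 + 5.0 * NCF + 2.0 * NCF**2
fixed_points = {
    "O": (0.0, 0.0),
    "A": (12.0 * (3.0 + NCF) * np.pi**2 / den, 12.0 * NCF * np.pi**2 / den),
    "B": (0.0, 12.0 * np.pi**2 / (NCF - 1.0)),
    "C": (-12.0 * np.pi**2 / (2.0 * NCF - 1.0), 12.0 * np.pi**2 / (2.0 * NCF - 1.0)),
}
for name, (lp, lm) in fixed_points.items():
    print(name, "|beta| =", np.linalg.norm(beta(lp, lm)))   # all vanish to machine precision

c = (2.0 * NCF**2 + 5.0 * NCF + 9.0) / np.sqrt(NCF**2 + (NCF + 3.0)**2)
print("c on line OA:", c)                                   # ~ 10.26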
We have depicted schematically two gray lines crossing the three UV fixed points in fig:flowfield, separating the plane into two regions, of which the above one is that of spontaneous chiral symmetry breaking, while that below has no chiral symmetry breaking. One should note that the strength of flow is dependent on the route where it goes. As for the three specific lines OA, OB and OC, they have a different coefficient c in their respective flow equations as in eq:lambppri; furthermore, the dependence of the flow strength on routes can also be found in fig:flowmod, where the magnitude of the vector (β_+, β_-) in Eqs. (<ref>) (<ref>), i.e., |β|=√(β_+^2+β_-^2) is shown in the plane of λ_+^* and λ_-^*. Here we have used ln(1+|β|) instead of |β| directly for the convenience of plotting. A pronounced dependence of the flow strength on the route and direction is observed in this figure.Moreover, the spontaneously broken chiral symmetry at vacuum can be restored at finite T, and the corresponding critical temperature is known from <cit.> asT_c=(Λ^2/π^2-12/λ_Λ+^'c)^1/2 ,with line OA for instance. Although one could not obtain a set of equations similar with Eqs. (<ref>) through (<ref>) at finite T, it can be still regarded that the three UV fixed points in fig:flowfield move effectively along the three straight line respectively, all in directions away from the IR fixed point O.When the external magnetic field B is nonzero and T=0, It is found in <cit.> that there is always a chiral symmetry breaking in the lowest-Landau level approximation, no matter how small the coupling is, because of the dimensional reduction. This is the magnetic catalysis of the chiral symmetry breaking. This conclusion for a fermionic system with one coupling is still valid in our case, and the three UV fixed points in fig:flowfield move toward, and coincide at the origin O point, when B≠ 0 and T=0 in the lowest-Landau level approximation. In the same way, when B≠ 0, T≠0 and beyond the lowest-Landau level approximation, the three UV fixed points A, B and C are not at the origin point O any more, but they all move toward O with the increase of the magnetic field, indicating that the region of chiral symmetry breaking above the gray line in fig:flowfield is enlarged, which is also an appearance of the magnetic catalysis. Given a set of four-fermion couplings at the initial UV evolution scale Λ, i.e., λ_Λ+ and λ_Λ-, the flow equations in (<ref>) and (<ref>) can be solved numerically. In order to investigate the dependence of the flow on the route and direction, we parametrize λ_Λ+ and λ_Λ- as followλ_Λ+=λ_Λcosθ_Λ,λ_Λ-=λ_Λsinθ_Λ .In fig:Tc-theta we show the dependence of T_c, the critical temperature for the chiral phase transition, on the direction in the plane as depicted in fig:flowfield, which is represented by the angle θ_Λ defined above. Here we choose λ_ΛΛ^2=25 which guarantees thatthere is always a chiral symmetry breaking with θ_Λ∈ [0 ,π], as shown apparently in fig:flowfield. Furthermore, calculated results for several different values of the magnetic field strength are compared, and the chemical potential is assumed to be vanishing. One can easily find that curves in fig:Tc-theta have a quite obvious feature that the shape of these curves looks like a deformed letter “M”. The two directions relevantto the two peaks in fig:Tc-theta are almost collinear to lines OA and OC in fig:flowfield, respectively. 
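The θ_Λ dependence of T_c can in principle be reproduced from the ingredients given above alone: integrate the dimensionful flow equations for λ_+ and λ_- downward in k with the thermal threshold function l_2, and bisect in T for the highest temperature at which the couplings still diverge before k→0. The Python sketch below does this in a deliberately crude way (Euler stepping, divergence of the couplings as the criterion for chiral symmetry breaking). It works in units of the UV cutoff Λ, whose physical value is not specified in this excerpt, sets μ=0, assumes u- and d-quark charges 2/3 and 1/3 with N_c=3, and is meant to illustrate the procedure rather than to reproduce the figure quantitatively.

import numpy as np

NC, NF = 3, 2
NCF = NC * NF
QF = (2.0 / 3.0, 1.0 / 3.0)          # |q_f| of u and d quarks (assumption)

def F2(k, T):
    # thermal factor F_2(k, T, mu = 0)
    x = min(k / T, 60.0)
    nf = 1.0 / (np.exp(x) + 1.0)
    return 0.25 * (1.0 - 2.0 * nf) - 0.5 * (k / T) * nf * (1.0 - nf)

def l2(k, T, eB):
    # threshold function; eB = 0 gives 2 F_2 k^3/(3 pi^2), i.e. k^3/(6 pi^2) in vacuum
    if eB == 0.0:
        return 2.0 * F2(k, T) * k**3 / (3.0 * np.pi**2)
    s = 0.0
    for qf in QF:                    # Landau-level sum for each flavor
        qB = qf * eB
        for n in range(int(k**2 / (2.0 * qB)) + 1):
            s += (1.0 if n == 0 else 2.0) * qB * np.sqrt(max(k**2 - 2.0 * qB * n, 0.0))
    return F2(k, T) / (np.pi**2 * NF) * s

def broken(T, theta, lam=25.0, eB=0.0, kmin=1e-3, nstep=4000, cap=1e6):
    # Euler integration of d(lambda)/dk = (d_t lambda)/k from k = Lambda = 1 down to kmin
    lp, lm = lam * np.cos(theta), lam * np.sin(theta)
    ks = np.linspace(1.0, kmin, nstep)
    for i in range(nstep - 1):
        k, dk = ks[i], ks[i + 1] - ks[i]
        w = l2(k, T, eB) / k**2
        dlp = -(2.0 * (NCF + 1) * lp * lm + 3.0 * lp**2) * w
        dlm = -((NCF - 1) * lm**2 + NCF * lp**2) * w
        lp, lm = lp + dlp * dk, lm + dlm * dk
        if max(abs(lp), abs(lm)) > cap:
            return True              # couplings diverge: chiral symmetry broken at this T
    return False

def Tc(theta, eB=0.0, lo=1e-3, hi=0.5):
    # bisection in T (units of Lambda); assumes T_c < hi
    if not broken(lo, theta, eB=eB):
        return 0.0
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if broken(mid, theta, eB=eB) else (lo, mid)
    return 0.5 * (lo + hi)

for th in (0.0, 0.25 * np.pi, 0.5 * np.pi, 0.75 * np.pi):
    print("theta/pi = %.2f   Tc/Lambda = %.3f" % (th / np.pi, Tc(th)))

As noted above, the peaks of T_c(θ_Λ) occur for initial conditions nearly collinear with lines OA and OC.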
This is reasonable and can be anticipated, because one can observe in fig:flowmod that the flow strength obtains its maximum in these two directions. Comparing OA and OC, we can find that OA is the direction, which breaks the chiral symmetry most effectively. In contrast, values of the critical temperature are relatively small in the directions with θ_Λ=0, π/2 and π. Moreover, the dependence of the critical temperature on the magnetic field strength is also investigated in fig:Tc-theta, and we find that T_c increases with the increasing eB, which is a feature of the magnetic catalysis of the spontaneous chiral symmetry breaking. §SUMMARY AND CONCLUSIONS In this work we have studied the chiral symmetry and its spontaneous breaking at finite temperature and in an extremely strong external magnetic field, in a model with four-fermion interactions of V+A and V-A channels, with V and A denoting vector and axial vector respectively. The V+A channel is related to the usually employed σ-π one through the Fierz transformation, while V-A channel is orthogonal to V+A. Quantum and thermal fluctuations are encoded within the functional renormalization group approach, and a set of flow equations for the couplings of V+A and V-A channels, i.e., Eqs. (<ref>) and (<ref>) is obtained.There are one infrared and three ultraviolet fixed points in the flow diagram in terms of dimensionless couplings at vacuum. Three straight flow lines connect the three UV fixed points with the IR one, respectively. Inclusion of external parameters, such as the temperature and the magnetic field and so on, only moves the three UV fixed points on their respective flow lines, while the directions of the three straight flow lines are not changed. This is because external parameters enter into flow equations (<ref>) and (<ref>) only through the threshold function l_2, and the ratio between Eq. (<ref>) and (<ref>) is independent of l_2. In another word, external parameters do not modify the structure of the flow equations. Furthermore, we have found that the flow strength is significantly dependent on the route and direction in the plane of couplings of different channels, which results in that the critical temperature T_c for the chiral phase transition has a pronounced dependence on the direction as shown in fig:Tc-theta. In the specific model used in this work, OA in fig:flowfield is found to be the direction, which breaks the chiral symmetry most effectively.We also find that the magnetic catalysis effect, i.e., the critical temperature increases with the increasing magnetic field. Note, however, that this conclusion is based on the implicit assumption that all the initial UV couplings are fixed when the magnetic field strength is increased. This is obviously not true, once other dynamical degrees of freedom, such as the gluonic fields, are included, as done in <cit.>. Considering the quite obvious dependence of the flow strength on the direction and route, it is very interesting to extend our work to include the gluonic fluctuations, which will give more information about the magnetic catalysis or inverse magnetic catalysis. Furthermore, we noticed interestedly that J. Braun et al. have found that the inclusion of Fierz-complete four-fermion interactions has a significant impact on the QCD phase structure <cit.>, when we are very close to the accomplishment of this work.The work was supported by the National Natural Science Foundation of China under Contracts No. 
11435001; the National Key Basic Research Program of China under Contract Nos. G2013CB834400 and 2015CB856900; the Fundamental Research Funds for the Central Universities under Contract No. DUT16RC(3)093. | http://arxiv.org/abs/1705.09841v1 | {
"authors": [
"Wei-jie Fu",
"Yu-xin Liu"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170527164345",
"title": "Four-fermion interactions and the chiral symmetry breaking in an external magnetic field"
} |
Institut für Physik, Universität Rostock, Albert-Einstein-Strasse 23, D-18059 Rostock, GermanyDepartment of Physics, Kyoto University, Kitshirakawa-Oiwake-cho, Sakyo-ku, Kyoto 606-8502, JapanThe basic theoretical foundation for the modelling of phonon-assisted absorptionspectra in direct bandgap semiconductors, introduced by Elliott 60 years ago<cit.> using second order perturbation theory, results ina square root shaped dependency close to the absorption edge.A careful analysis of the experiments <cit.> reveals that for theyellow S excitons in Cu_2O the lineshape does not follow that square rootdependence. The reexamination of the theory shows that the basic assumptions ofconstant matrix elements and constant energy denominators is invalid forsemiconductors with dominant exciton effects like Cu_2O, where the phonon-assistedabsorption proceeds via intermediate exciton states. The overlap between these andthe final exciton states strongly determines the dependence of the absorption onthe photon energy. To describe the experimental observed line shape of the indirectabsorption of the yellow S exciton states we find it necessary to assume a momentumdependent deformation potential for the optical phonons. 71.35.Cc, 78.40.Fy, 63.20.kk, 71.35.-yThe phonon-assisted absorption of excitons in Cu_2O Nobuko Naka December 30, 2023 =================================================== § INTRODUCTION The research focus in semiconductor physics has changed in recent decades from generic bulk semiconductors in favor for physical phenomena in more fancy systems, like lower dimensional structures. However, the discovery of yellow excitons with principal quantum numbers up to n=25<cit.>has renewed the interest of Cu_2O as it facilitates a novel branch of researchin semiconductor physics<cit.>.These highly excited states, generally referred to as Rydberg excitons, exhibitsimilar properties already observed in atom physics but in a much more experimentalistfriendly framework (effects such as Rydberg blockade are identifiable at liquid helium temperatures and the Stark effect manifests at rather low electric field strengths <cit.>). Furthermore, they additionally show new characteristics due to the unique setting within the semiconductor <cit.>.The cubic symmetry of the system leads e.g. to anisotropic band dispersions, fine-structure splitting, or the breaking of antiunitary symmetries in magnetic fields <cit.>. Since the optical absorption bands of these Rydberg states sit on top of the phonon assisted absorption into the yellow exciton ground state, exhibiting a strong Fano- type interaction <cit.>, a thorough understanding of the phonon-assisted absorption processes is of uttermost importance for properties of the Rydberg excitons. The standard textbook approach to describe the shape of the phonon-assisted exciton absorption close to the band gap is based on second-order perturbation theory and goes back to Elliott <cit.>. It can be visualized as a direct optical excitation into a dipole allowed virtual intermediate state and the subsequent relaxation to the final state through the emission of a phonon. Then by assuming the sum over the matrix elements and energy dominators to be constant, one can derive the well-known square root dependence of the absorption coefficient <cit.>. In this paper, we will critically examine these assumptions and show that in case of semiconductors with strong exciton effects, like Cu_2O, they are invalid mainly due to two reasons. 
First, the intermediate states are not pure band states, but also higher lying exciton states. Second, the assumptions that the deformation potential, which is used to describe the phonon interaction cannot be taken as a constant, but must be allowed to depend on the phonon wave vector Q. Our theoretical analysis is strongly substantiated by experimental results, which indeed show not the expected square root behavior but the absorption coefficient rises more strongly at higher photon energies. A line shape fit of the absortion then allows to determine precise values for the deformation potential and its Q-dependence. Since the green excitons are coming from the same valence band states as the yellow excitons, their absorption processes are closely connected. Therefore, we are able to describe the complete absorption band of the yellow and green series without additional parameters and obtain excellent agreement with experiment.Our results also have practical interest for the use of Cu_2O in solar cells, as the absorption coefficient determines the cell efficiency. In a recent paper <cit.>, a detailed analysis of the whole absorption of Cu_2O up to the blue and violet exciton states was performed, but the authors used the simple square root dependence of the absorption coefficient and introduced ad hoc values for the deformation potentials for the green excitons to obtain a fit to the experimental spectrum, making their analysis invalid.The paper is organized as follows: In the first section, we discuss the symmetry properties of Cu_2O relevant to the phonon-assisted absorption process. In the second paragraph the theoretical analysis is presented, while the next section discusses the experimental procedures. Then we discuss how to obtain the deformation potential from the fit of the theoretical expressions to the experimental results. In the last section, we extend the analysis to the green exciton states and discuss the results.§ SYMMETRIES IN CUPROUS OXIDE To comprehend the composition of effects contributing to the excitonic absorption spectra ofcuprous oxide, we require some basic knowledge of its band structure.The highest valence band stems from Cu 3d orbital with symmetryΓ_3^+ ⊕ Γ_5^+ at the Γ point. Under the crystal field theysplit into the upper Γ_5^+ and lower Γ_3^+ bands. The Γ_5^+bands splits via spin-orbit interaction further into a nondegenerate upper Γ_7^+band and lower, twofold degenerateΓ_8^+ bands with a splitting ofΔ_so = 131 meV at zone center. The lowest conduction band originatesfrom the Cu 4s orbital, hence possesses a Γ_1^+ symmetry and becomes a Γ_6^+band under consideration of spin-orbit interaction. It is followed by a Γ_3^- band(Γ_8^- respectively, when including spin), that stems from the Cu 4p orbital, whichis known from band structure calculations <cit.>. There are also higher located conduction bands with Γ_4^- symmetry that are formed bythe Cu 4p orbital.For our purposes, we are only interested in the two lowest conduction bands(Γ_6^+ ⊕Γ_8^-) and two highest valence bands(Γ_7^+⊕Γ_8^+), since they form the four known exciton series ofcuprous oxide: the yellow (Γ_6^+⊗Γ_7^+), green(Γ_6^+⊗Γ_8^+), blue (Γ_8^-⊗Γ_7^+) and violet (Γ_8^-⊗Γ_8^+). They are visualised in Fig. <ref>. Symmetries play an important role as they limit the possibilities for transition betweenthe different bands. 
The symmetry of any respective exciton state is given byΓ_exc = Γ_env⊗Γ_c⊗Γ_vTo enter any excitonic state directly, the exciton symmetry Γ_exc requires to coincide with the symmetry of the respective transition operator. In the O_h group the dipole operator p possesses the symmetry Γ_4^-, the operator(e·p)(k·r) yields the symmetryΓ_3^+⊕Γ_4^+⊕Γ_5^+,which corresponds to the electric quadrupole (Γ_3^+⊕Γ_5^+) and the magneticdipole transitions (Γ_4^+). Regarding the excitonic envelope Γ_env, states with an S-like character bear Γ_1^+ symmetry, while P-like states show Γ_4^- symmetry. The yellow S excitons then split further into Γ_1^+ ⊗ (Γ_6^+ ⊗Γ_7^+) = Γ_2^+⊕Γ_5^+via exchange interaction. While the orthoexcitons (Γ_5^+) are at least quadrupoleactive, the paraexcitons (Γ_2^+) are inexcitable with light. The same relation holds true for the green S-excitons, which split as Γ_1^+ ⊗ (Γ_6^+ ⊗Γ_8^+) = Γ_3^+⊕Γ_4^+⊕Γ_5^+. On the other hand the dipole transition to the higher located blue and violet S-states is possible as they decompose to Γ_1^+ ⊗ (Γ_8^- ⊗Γ_7^+) =Γ_3^-⊕Γ_4^-⊕Γ_5^- for blue and Γ_1^+ ⊗ (Γ_8^- ⊗Γ_8^+) =Γ_1^-⊕Γ_2^-⊕Γ_3^-⊕2Γ_4^-⊕2Γ_5^- for violet. For P-type excitons on the other hand, both the yellow Γ_4^- ⊗ (Γ_6^+ ⊗Γ_7^+) =Γ_2^-⊕Γ_3^-⊕Γ_4^-⊕2Γ_5^- and the greenΓ_4^- ⊗ (Γ_6^+ ⊗Γ_8^+) =Γ_1^-⊕Γ_2^-⊕2Γ_3^-⊕3Γ_4^-⊕3Γ_5^- series are dipole active. The beforementioned highly excited states up ton=25 <cit.> are the yellow series P excitons.As we will see in the upcoming part, most of the absorption background superpositioningwith the exciton resonances arises from the yellow S excitons. The prevalent contributionin the spectra, however, comes from the phonon-assisted absorption process. As mentioned,the dipole excitation from either the blue and violet S-excitons ispossible. Treating the transition into the Γ_8^- band (or its respective excitonstates) as a virtual state with a successive absorption or emission of a phonon, providedit has the correct symmetry, we are able to access the yellow (and green) S exciton states.Beyond the Γ_8^- conduction band, the next closest band that fulfills the necessarysymmetry and parity restrictions to allowfor a dipole transition into S-states would be a Γ_4^- valence band at around ∼-5 eV<cit.>. As we will see in the next paragraph, the strength ofabsorption contribution depends on the energy difference between the incoming light andthe virtual state. Approximating the photon energy being around the yellow gap energy theratio between the two dipole allowed states is|Δ E_8^-,6^+/Δ E_4^-,7^+|^2 ≃ 0.01, which however,might be compensated by a very strong electron-phonon interaction (as will be the case withthe Γ_4^- phonon).Cuprous oxide features 6 atoms in the primitive unit cell, hence there are 18 phonon branches<cit.>. The symmetry of the final exciton state must be contained inthe direct product of the Γ_4^- and the corresponding phonon symmetry. Utilisingmultiplication tables of the O_h group <cit.> it is easy to show thata Γ_5^+ exciton can couple to all odd parity phonons, while a Γ_4^+ statecouples to all odd parity phonons exept the Γ_2^- mode. 
Excitons with symmetryΓ_2^+ and Γ_3^+ only couple to Γ_5^-and Γ_4^-, Γ_5^- phonons, respectively.Additionally, to enable a transition, phonon symmetry must also coincide with thepredetermined transition symmetries of the bands, in the case of the second lowest conductionband Γ_8^- ⊗Γ_6^+ = Γ_3^- ⊕Γ_4^- ⊕Γ_5^-.From luminescence spectroscopy we know that the Γ_3^- optical phonon with an energyof ħω_3- = 13.6meV at zone center is the dominant phonon branch.The contributions of all other phonons should be much weaker, except probably the Γ_4^-LO phonon at ħω_4- = 82.1meV. Note that the coupling mechanism ofall phonon modes is via the optical deformation potential, since Fröhlich interaction can only give rise to intraband transitions due to the orthonormality of the Bloch functions [The scalar potential of the LO interaction does not modify the Bloch functions, hence can be extracted from the transition matrix elements.]. The even parity Γ_5^+ mode can in principle also contribute to the absorption by awave vector dependent deformation potential, which has odd parity <cit.>.§ THEORETICAL TREATMENTStarting with the transition from excitonic vacuum to an exciton state μ with μcontaining the set of quantum numbers (n,ℓ,m) of the final state, the transitionprobability can be derived by second order perturbation theory asP_0,μ(k,ω) = 2π/ħ∑_Q,λ|∑_ν⟨Ψ_μ,Q+k|h_λ,Q |Ψ_ν,k⟩⟨Ψ_ν,k | h_ph |Ψ_0⟩/E_ν (k) - ħω|^2 × δ[E_μ(Q+k) ∓ħω_λ,Q -ħω],for the absorption or emission of a phonon, respectively. The two transition elements consistof the electron-radiation interaction h_ph and the phonon interaction hamiltoniansh_λ,Q, with λ,Q denoting the associated phonon type and itsmomentum. E_i(k) represents the energy dispersion of state i, which is usuallyexpressed in terms of the effective mass approximation ħ^2 k^2/2 M_i, with M_ibeing the respective excitonic mass. We start by assuming the electron-radiation interaction in electric dipole approximationh_ph = e/m_0A·p since excitations over higher orderprocesses (i.e. quadrupole excitation etc.) are negligibly small. As will be seen later, themain contribution to the phonon-assisted absorption stems from the 1S excitons of the yellowand green series, so we will restrict ourselves to final states with ℓ = 0. Dipole transitionsto states of the yellow and green series with S symmetry are forbidden, however this is not thecase for the subsequent S series' of blue and violet excitons. The blue exciton series can beassociated with the yellow series, since they share the same Γ_7^+ valence band, while theviolet series share the Γ_8^+ valence bands with the green series. An excitation ofℓ=1 blue/violet excitons is inhibited by the negative parity of the P envelope, and higherorder angular momenta are considered negligible. Our virtual states therefore consist solely ofblue/violet S exciton states (depending on the final state being yellow or green respectively).The experimental spectra are taken at a crystal temperature of around 2 K, where theoccupation number of optical phonons converges to zero. Hence, we limit our examination to phononemission. Furthermore, we consider the photon momentum k to be negligibly small. Thetransition probability for the yellow excitons (as seen in Fig. <ref>) then takes the form P_0,nS^(y)(ω)= ∑_λP̅_n,y^λ(ω)P̅_n,y^λ(ω) = 2πe^2/ħ m_0^2∑_Q×|∑_n'⟨Ψ_nS,Q^(y)|h_λ,Q |Ψ_n'S,0^(b)⟩⟨Ψ_n'S,0^(b) | A·p |Ψ_0⟩/E_n'S^(b) (0) - ħω|^2×δ[E_nS^(y)(Q) + ħω_λ,Q -ħω] .The transition for the green series is equivalent to Eqs. 
(<ref>), (<ref>) by switching yellow to green (y→g) and blue to violet (b→v). For the sake of clarity though, we restrict the calculation to the yellow series. The phonon interaction matrix element in Eq. (<ref>) can be rewritten in terms of the Bloch functions ψ_n,k of the associated bands as⟨Ψ_nS,Q^(y)|h_λ,𝐐 |Ψ_n'S,0^(b)⟩ = ∑_q,q'_nS,q^(y) _n'S,q'^(b) ⟨ψ_6c,Q/2+q|h_3,Q |ψ_8c,q'⟩ ×⟨ψ_7v,Q/2-q |ψ_7v,-q'⟩ = ∑_q_nS,q^(y) _n'S,q-Q/2^(b) ⟨ψ_6c,Q/2+q|h_3,Q |ψ_8c,q-Q/2⟩ .We are expanding the Bloch functions of the matrix elements in Eq. (<ref>) around q=0 and express them via a deformation potential D_λ;ij⟨ψ_6c,Q/2|h_λ,Q |ψ_8c,Q/2⟩ = D_λ;68 (Q) √(ħ/2Ω ρ ω_λ) ,with Ω being the crystal volume, and ρ the density of Cu_2O. Consequently, the remainder of the sum reads as ∑_q_nS,q^(y) _n'S,q-Q/2^(b). We assume _nS,q^(i) to be hydrogen like envelope functions in momentum space. The sum can be evaluated by either simply inserting the momentum hydrogen wave functions <cit.>, or treating the expression as a convolution and integrate their product in position space. These convolution functions between different excitonic envelopes are know in the theory of phonon scattering as overlap functions <cit.>. In our case of S type envelopes the spherical harmonics only introduce a factor of 1/(4π), which in both cases leaves us with a single integral which can be evaluated analytically. For the latter approach we would get𝒮_n,n'^(y,b) (Q) = ∑_q_nS,q^(y) _n'S,q-Q/2^(b) = 2/Q∫_0^∞dr rR_nS^(y)(r)R_n'S^(b)(r) sinQ r/2 , with R_nS^(i) being the modified radial hydrogen wave functions. For the dominant transition over 1S states we get 𝒮_1,1^(y,b) (Q)= 2^7 β^3/2(1+β)/(4 (1+β)^2 + a_y^2 β^2Q^2 )^2 ,where β = a_b/a_y and a_y, a_b are the excitonic Bohr radii of the yellow and blue series. The electron dipole interaction matrix element corresponds to the textbook solution⟨Ψ_n'S,0^(b) | A·p |Ψ_0⟩ = A_0 _n'S^(b)(r=0) p_78 ,with the dipole transition element between Bloch states p_78 = ⟨ u_8c,q| e·p|u_7v,q⟩, which is considered to not vary significantly over q. _n',S^(b)(r) is the hydrogen like S envelope function in position space. For r=0 it abides to _n',S^(b)(r=0) = (π (a_b n')^3)^-1/2.Inserting the Eq. (<ref>) to (<ref>) into Eq. (<ref>) we arrive atP̅_n,y^λ(ω) = e^2 A_0^2/Ω m_0 ρ ω_λ a_b^3| p_78|^2/m_0×∑_Q|D_λ;68 (Q)|^2 |∑_n'𝒮_n,n'^(y,b) (Q)/n'^3/2 (E_n'S^(b)-ħω)|^2 ×δ[E_nS^(y)(Q) + ħω_λ,Q -ħω] .The absorption coefficient is defined asα_n,y^λ(ω) = 2ħ/ε_0 n_R c ω A_0^2P̅_n,y^λ(ω),with n_R being the refractive index around the excitation energy. To simplify the calculation, we assume that both the deformation potential and the 1S exciton dispersion have spherical symmetry, any deviation can in principle be treated, but would make the following integration more complex. In addition, the phonons of interest are optical phonons with only marginally varying energy dispersions <cit.>, hence their energies ω_λ will be considered constant. Under these circumstances the sum in Eq. (<ref>) can be evaluated. Thus we obtainα_n,y^λ(ω) = e^2/π^2 ħ ρ ε_0 n_R ca_b^3 M_y/m_0 q_n^λ/ω ω_λ×|p_78|^2/m_0|D_λ;68 (q_n^λ) |^2 ×| ∑_n'𝒮_n,n'^(y,b) (q_n^λ)/ n'^3/2 (E_n'S^(b)-ħω)|^2Θ(q_n^λ),withq_n^λ (ω) = √(2 M_y/ħ^2)√(ħω -ħω_λ -E_nS^(y)) .The square root behaviour of the textbook solution is still recognisable and is embedded in q_n^λ (ω), though Eq. (<ref>) shows additional photon energy dependencies, such as the convolution of wave functions and the momentum dependent deformation potential. 
Furthermore, it is not depending on the Bohr radius of the final state, the yellow exciton, but on that of the blue exciton. Therefore, the dependence on the quantum number n of the yellow states is solely in the overlap factors 𝒮_n,n'^(y,b).§ EXPERIMENTAL DATATwo samples were prepared to obtain absorption spectra in different photon energy ranges[Ref. <cit.>, copyright 2005 The Japan Society of Applied Physics.]. For the observation of yellow exciton series, a thin slab of natural crystal Cu_2O was cut, mechanically polished, and then surface treated by NH_4OH. The thickness was measured as 160 μm by a caliper. For the observation of green exciton series, a thinner sample was grown by the melt-growth method as described in <cit.>. Cu_2O powder was sandwiched between two MgO plates of 500 μm thickness, and then heated up to 1523 K above the melting point of Cu_2O. A wedge-shaped Cu_2O film was formed between the substrates after cooling down to room temperature. The film thickness was measured by a stylus profiler after removal of the top substrate. At the point of measurement film thickness is 10 μ m.Absorption spectra were taken with samples at 2 K, immersed in superfluid helium in a cryostat. The white light from a halogen lamp, transmitted through the sample, was measured by a Peltier-cooled CCD camera (Wright Instruments) equipped at the back of a 25 cm monochromator with a 1200 g/mm grating blazed at 500 nm (JASCO CT-25T).Estimation of the reflectivity of light at the sample and substrate surfaces was difficult. In calculating the absorption coefficientsα=-ln[I_t/b I_0]/d,we adjusted the magnitudes of the reference light (by the factor of b) so that the transmission at 620 nm wavelength becomes unity.[The change in refractive index from 2.95 at the band edge to about 3.15 <cit.> at the green excitons would give a correction of 2%, which is of order of the experimental error.] Here, I_t and I_0 represent light intensities with and without a sample in the optical path, d is sample thickness.§ DEFORMATION POTENTIAL AND YELLOW 1S PHONON TRANSITIONTaking a look back at Eq. (<ref>), while providing us with an analytical solution for the absorption of the phonon background, it still harbors two uncertainties:The strength and momentum dependency of the deformation potential D_λ;68. Fortunately the phonon-assisted absorption into the yellow 1S exciton via the Γ_3^- LO phonon features a wide and distinctive spectral shape. We will utilize this property to do a fit of Eq. (<ref>) unto the spectral data. In consideration of which blue S states actually contribute to this absorption line, one can easily calculate the ratios:𝒮_1,2^(y,b)/2^3/2 𝒮_1,1^(y,b)≲ 12% ,𝒮_1,3^(y,b)/3^3/2 𝒮_1,1^(y,b)≲3.7% .Since already the blue 3S contribution is small, the inner sum in Eq. (<ref>) over the intermediate states is run up to n'=3,which should be sufficient to safely neglect the contribution from higher blue states. The deformation potential can be expanded with respect to the square of phonon momentum Q toD_λ,68(Q) = D_λ,68^(0) +D_λ,68^(2) Q^2+ … ≃ D_λ,68^(0)(1+ D̅_λ,68^(2) Q^2).While usually the deformation potential is assumed to be constant, we will show that in case of the Γ_3^- phonon this is not the case. In this paper we will consider the zeroth and first order of Eq. (<ref>). Higher order terms only increase the number of variables to fit and do not improve the result noticeably. 
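As a consistency check on the ingredients entering this fit (not part of the original analysis), the closed form quoted above for the overlap 𝒮_{1,1}(Q) can be compared with a direct numerical quadrature of the overlap integral, taking for R_1S the ordinary hydrogen-like radial function R(r)=2a^{-3/2}e^{-r/a}. In the Python sketch below the radii are arbitrary test values, not the ones entering the actual fit (those follow from the binding energies listed in the tables).

import numpy as np
from scipy.integrate import quad

def R1S(r, a):
    # hydrogen-like 1S radial function, normalized so that int r^2 R^2 dr = 1
    return 2.0 * a**-1.5 * np.exp(-r / a)

def S11_quad(Q, a_y, a_b):
    # S_{1,1}(Q) = (2/Q) * int_0^inf dr  r R_1S^(y)(r) R_1S^(b)(r) sin(Q r / 2)
    f = lambda r: r * R1S(r, a_y) * R1S(r, a_b) * np.sin(0.5 * Q * r)
    val, _ = quad(f, 0.0, 60.0, limit=200)      # integrand is negligible beyond ~10 nm
    return 2.0 * val / Q

def S11_closed(Q, a_y, a_b):
    # closed form quoted in the text, with beta = a_b / a_y
    b = a_b / a_y
    return 2.0**7 * b**1.5 * (1.0 + b) / (4.0 * (1.0 + b)**2 + (a_y * b * Q)**2)**2

a_y, a_b = 1.1, 1.5                             # nm, arbitrary test radii
for Q in (0.5, 2.0, 5.0):                       # nm^-1
    print(Q, S11_quad(Q, a_y, a_b), S11_closed(Q, a_y, a_b))   # the two columns agree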
For comparison we also fit the standard approach derived by Elliott <cit.> of the formα_E^λ (ω) ∝∑_n=11/n^3√(ħω -ħω_λ -E_nS^(y)) . The result can be seen in Fig. <ref>. While the assumption of a constant deformation potential α_(0)^(3-) fails to represent the experimental curve completely, the approach of Elliott α_E^(3-) fits well for a short energy range close to absorption edge, however deviates with increasing energy. The fit with the momentum dependent deformation potential α_(2)^(3-) reproduces the spectra neatly up to the start of the overlaying Γ_4^- phonon absorption edge, which can be seen as a shoulder around 2.116 eV at the upper right corner of Fig. <ref> b). The Γ_4^- phonon transition requires either Γ_4^- conduction or Γ_2^-⊕Γ_3^-⊕Γ_4^-⊕Γ_5^- valence bands[The consideration of the “spinless” symmetries is sufficient here. The spin-including symmetries open up additional transitory channels, however those require a change in the spin configuration, which cannot be inflicted by phonons.]. While bands with these symmetries exist <cit.>, we have to keep in mind, that they are located energetically quite far away from the yellow band gap E_g and we are not cognisant of any excitonic properties. Thus, the approach of Eq. (<ref>) is not really suited for their treatment, and since the Γ_4^- phonon contribution is small enough, we purposely approximate it in the fashion of Eq. (<ref>). The result of the fit is given in appendix <ref>.We stress, that all other phonons contribute with negligible strength to the yellow absorption band. The inclusion of the Γ_4^- phonon transition allows us to describe the phonon-assisted absorption into the 1S yellow exciton very accurately up to the P transitions.The used parameters can be found in table <ref>. The Bohr radius of the blue excitons is not explicitly known, thus for a systematic treatment they were calculated from the binding energies via a_b = Rya_B /(Ry_bε_0), with Ry and a_B being the (hydrogen) Rydberg energy and Bohr radius, and ε_0 = 7.5 <cit.>. The dipole transition element can obtained from experiments as well as k·p theory (see appendix <ref>). The resulting fit parameters for the deformation potential of the Γ_3^- phonon-assisted absorption areD_3;68^(0) = 25.45eV/nm ,D̅_3;68^(2) = 0.168nm^2 .The value for the static deformation potential D_3;68^(0) is in well accordance to previous estimations <cit.>. § THE SPECTRUM BEYOND YELLOW 1S§.§ Extrapolating the previous resultThe practical aspect of the fitted parameters (<ref>) and (<ref>) isthe fact that they are applicable for all Γ_3^- (and Γ_4^-) phonon-assistedtransitions into the yellow S series, i.e. we additionally receive the absorptionstrengths for all n≥ 2 states.However, the contributions from states with n>2 and into the yellow continuum, whichwould start at 2.186 eV, is negligible and will not be taken into account.Another and perhaps more interesting proposition stems from the relation between transitionelements of the yellow and green series. From group theoretical symmetry considerationsit can be shown thatα_g^Γ_3^-/α_y^Γ_3^-= 2/1 .The derivation is shown in appendix <ref>. Therefore we also possess allnecessary information to describe the Γ_3^- phonon-assisted transitions into thegreen S series. 
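For illustration, the photon-energy dependence implied by the absorption formula with this fitted, momentum-dependent deformation potential can be sketched in a few lines of Python. Only the dominant n'=1 (blue 1S) intermediate state is kept and all overall prefactors are dropped, so only the shape relative to the plain square-root edge is meaningful. The fitted D^(0) and D̄^(2) and the quoted M_y=2.61 m_0 and ħω_3^-=13.6 meV are taken from the text; the exciton energies and Bohr radii below are placeholders (assumed, not taken from this excerpt) and should be replaced by the tabulated values.

import numpy as np

# fitted deformation potential (quoted above)
D0, D2 = 25.45, 0.168                 # eV/nm and nm^2

# quoted in the text
M_y   = 2.61                          # yellow 1S mass in units of m_0
hw_ph = 13.6e-3                       # eV, Gamma_3^- phonon energy

# placeholders -- replace by the values of the tables
E_1Sy = 2.033                         # eV, yellow 1S orthoexciton energy (assumed)
E_1Sb = 2.57                          # eV, blue 1S energy (assumed)
a_y, a_b = 1.1, 1.2                   # nm, exciton Bohr radii (assumed)

hbar2_2m0 = 3.81e-2                   # hbar^2 / (2 m_0) in eV nm^2

def q1(hw):
    # phonon momentum fixed by energy conservation
    return np.sqrt(np.maximum(hw - hw_ph - E_1Sy, 0.0) * M_y / hbar2_2m0)

def S11(Q):
    b = a_b / a_y
    return 2.0**7 * b**1.5 * (1.0 + b) / (4.0 * (1.0 + b)**2 + (a_y * b * Q)**2)**2

def lineshape(hw):
    # n' = 1 term of the absorption formula, overall constants dropped
    q = q1(hw)
    return q / hw * (D0 * (1.0 + D2 * q**2))**2 * (S11(q) / (E_1Sb - hw))**2

def elliott(hw):
    # plain square-root edge for comparison
    return np.sqrt(np.maximum(hw - hw_ph - E_1Sy, 0.0))

for hw in np.linspace(2.05, 2.16, 12):
    print("%.3f eV   shape %.3e   sqrt %.3e" % (hw, lineshape(hw), elliott(hw)))

With D̄^(2)>0 the computed shape rises faster than the bare square root at higher photon energies, which is precisely the deviation the momentum-dependent fit is designed to capture.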
At zone center, symmetry considerations predict that the green series issupposed to consist of three distinct states Γ_3^+, Γ_4^+ and Γ_5^+.However, only the Γ_5^+ orthoexciton is accessible.Beyond that, we consider the absorption into the yellow P exciton states phenomenologicallyto achieve a well-rounded depiction of the absorption spectrum. The excitonic resonancesare accessed via a forbidden dipole transition<cit.>, with the oscillatorstrength varying with principal quantum number as (n^2-1)/n^5. The lineshape of the Pexcitons can be described via asymmetric Lorentzians as derived by Toyozawa <cit.>and the successive transition into the yellow continuum is given by the Sommerfeld enhanceddirect forbidden absorption <cit.>.Recently, we have shownthat the continuum absorption is shifted by an energy Δ_c intothe P states due to plasma screening of charged residual impurities. In addition, the continuum absorption develops an Urbach tail behaviorexp((ħω-E_g)/E_U) with E_U being the Urbach parameter <cit.>. These contributions are conglomerated intoα_P, for details, see appendix <ref>.With the the fitted parameters in (<ref>) and (<ref>) we will now attempt todepict the absorption spectrum beyond the yellow band gap E_g. The total absorptioncoefficient only requires only the absorption channels that significantly participate and istherefore composed ofα_tot = α_1,y^Γ_3^- +α_2,y^Γ_3^-+α_1,g^Γ_3^-+α_1,y^Γ_4^-+α_1,g^Γ_4^-+α_P .The remaining parameters needed to evaluate α_tot are listed in table <ref>. The green and violet Bohr radii are obtained in the same fashion, as it was done for the blue exciton. The yellow 2S exciton mass stems from the sum of effective electron and hole mass. Due to the almost equal binding energy of the yellow 1S paraexciton and the green 1S orthoexciton, and since they experience the same screening, it is expected that both masses are about equal. The result is shown in Fig. <ref>. We recognise that the absorption spectrum is mainly constructed out of the Γ_3^- phonon-assisted transition of the yellow and green 1S exciton state, as well as the excitonic resonances and theabsorption into the continuum at the band edge. While at an energy range around 2.2 eVthe summed up absorption coefficient α_tot appears to slightly overestimate theabsorption, beyond 2.22 eV the experimentally measured spectrum shows a steady increase in the absorption slope that is not reproduced in our theory. §.§ DiscussionWe will first discuss the overshot of the theoretical resultsover the experimental data at 2.2 eV.The biggest issue in extrapolating the result of the yellow 1S exciton to thegreen 1S state is the fact that two of the essential exciton parameters are not explicitly known. The excitonic Bohr radii used throughout this work are all extracted from the respective binding energies. They come into play via the overlap functions Eq. (<ref>). The other parameters that determine the strength of the absorption are the exciton translational masses according to Eq. (<ref>).Note that a change of 10% in the masses would alter the absorption by 15%. While for the 1S yellow state the mass has been determined experimentally to be M_1Sy=2.61m_0<cit.>, for the 2S yellow and the green excitonseries it is not known. However, for excitons with large principal quantum numbers, which are composed of valence and conduction band states near the zone center, the approximation M_X =m_c + m_v holds true, where m_c and m_v are the effective masses at k=0. 
Considering the effective mass of the conduction band m_6 = 0.985 m_0, and that of the Γ_7^+ hole of m_7=0.575, both from time-resolved cyclotron resonance <cit.>, we obtain for the 2S yellow exciton a mass of 1.56 m_0.For the green 1S state, whose wave function extends due to its large binding energy far into the Brillouin zone, the influence of the non-parabolicity of the Γ_8^+ valence bands on Bohr radius and effective mass are expected to be similar to that for the yellow 1S exciton. Using the heuristical relation between binding energy and translational mass as for the yellow state, we obtain a mass of M_1Sg=2.61 m_0. The Bohr radius is given in table <ref>. Of course, all these quantities are only first order approximations, and require extensive future work to improve. Theoretically, the Bohr radii and translational masses can be obtained from the solution of the K-dependent effective mass equations, where K is the center-of-mass wave vector, which will be the topic of a forthcoming paper. Experimentally, the masses can be obtained from resonance Raman studies involving the green P and the green 1S states, similar to that reported by Yu et al. in the 1970's <cit.> for the yellow exciton states. Here especially Raman processes involving acoustical phonons are of interest, since their Raman shift depends on their momentum, thus giving directly the dispersion of the green 1S states. These experiments would also clarify a possible contribution of a Γ_5^+ phonon to the absorption. The most interesting finding of Fig. <ref> is the steady increase of absorption in the regionaround 2.22 eV. This cannot be explained by simple modifications within this framework, i.e.the choice of parameters or of the wave function used. Theoretically, it could be explained by adding additional absorption channels, e.g. by introducingadditional phonon interactions or exciton resonances, but it can easily be shown that this is not the case here. The only phonon resonance with a suitable energy to compensate the missing absorptionwould be the Γ_5^+ phonon at 63.8 meV. However, the possibility for a scattering intothe green states involving the Γ_5^+ phonon can be ruled out, since the phonon possesses thewrong parity and such a process must also occur in comparable strength for the yellow 1S state butcould not be identified in the analysis. The existence of a potential second green exciton series, that could be associated with the Γ_8^+ light hole band dispersion, was considered, but since the Γ_8^+ valence bands are heavily coupled <cit.> no such additional exciton series should exist.Both ideas also seem implausible, since an additional phonon-assisted absorption edge would rise upabruptly in the spectrum, while the increase of the slope appears to be continuous. Currently, the most logical explanation is a dependence of the excitonic parameters on exciton momentum K.If the exciton mass is expected to steadily increase with exciton momentum, a smooth increase inphonon-assisted absorption with photon energy, as it is seen in the experiment, should be observed(cf. Eq. (<ref>)). The same happens, if the green exciton Bohr radius increases, as the overlapfunctions (cf. Eq. <ref>)would also increase. A solution of this problem might come from theaforementioned study of the K dependent effective mass equations. Finally, we will discuss the topic of mixing between yellow and green exciton states, which was fo<und in a recent study of the even excitons in Cu_2O <cit.>. 
According to this work, the lowest orthoexciton resonance (our 1S yellow state) is a mixture of yellow and green states with 7.2% green contribution. Taking this into account in our analysis would require the redetermination of the Deformation potential D_3;68 by refitting the Γ_3^- absorption band in the low energy part of the spectrum (cf. Fig. <ref> b), but we expect only a small correction of the order of some percent due to the low green admixture. Much more pronounced should be the influence of mixing onto the absorption of the yellow “2s” and the green “1S” state. The former has more than 10% of green contributions, which would enhance its absorption strength considerably. In contrast, the green 1S state should have only a contribution of green states of about 40%, which, taking literally, should result in only half the absorption strength (cf. solid green line in Fig. <ref>). Both effects are clearly not consistent with the experimental data. However, a rigorous analysis would require the full wavefunctions of these exciton states, which is an interesting task for future work.§ CONCLUSION In an effort to determine the phonon-assisted absorption background around the band edge of Cu_2Owe evaluated the established second order perturbation treatment of Elliott. However, the resultingsquare root behaviour of this textbook solution is not sufficient to reliably reproduce experimentallymeasured spectra for energies much higher than the absorption edge. We reassessed the approach andremoved three distinct approximations that are not necessarily justifiable. In a semiconductor withstrong excitonic features, like Cu_2O, instead of treating intermediate states as pure band stateswe need to consider the corresponding exciton eigenstates for the virtual transition. Additionally,the momentum dependence of the optical phonons deformation potential was taken into account as wellas the excitation energy dependent denominator. The resulting improved expression of the absorptioncoefficient [Eq. (<ref>)] is able to effectively model the Γ_3^- phonon-assisted transitioninto the yellow 1S exciton state, the strongest and most distinct phonon-assisted transition. Beyondthat, we modelled the Γ_4^- phonon transition into the yellow 1S state and extrapolated ourresults from the yellow to the green series excitons. This yields a profound description of thephonon-assisted absorption background up to the yellow band gap. Beyond that, a sudden increase inthe experimental absorption spectra is found, that could not be explained with our current theoretical treatment. The possibility of momentum dependent exciton parameters is discussed as a potentialorigin.We gratefully acknowledgesupport by the Collaborative Research Centre SFB 652/3 'Strong correlations inthe radiation field' funded by the Deutsche Forschungsgemeinschaft.§ FITTING RESULTS FOR THE MINOR ABSORPTION CONTRIBUTIONS The Γ_4^- phonon transition:As previously mentioned, the Γ_4^- phonon scattering is fitted with the square root solution of Eq. (<ref>), as it couples to a multitude of higher (lower) located conduction (valence) bands, of which we cannot distinguish the individual transitions. Since its absolute contribution to the spectrum is marginal, we are content with only considering the n=1 state. Utilising Eq. (<ref>) we getα_E^Γ_4^-(ω) = C_4 q_1^Γ_4^-(ω),with the corresponding fit parameterC_4 = 6.56 × 10^-7 . 
The yellow P-absorption: The P-absorption is divided into three separate partsα_P = α_Pcont + α_Urbach + ∑_n=2^4 α_nP .The continuum is given by<cit.>α_Pcont(ω)= C_yP(ħω-Ẽ_g)^3/2/ħω γ e^γ/sinhγ(1+γ^2/π^2),withγ = √(π^2Ry_y/ħω-Ẽ_g) ,and the yellow Rydberg energy<cit.> Ry_y= 87 meV.The renormalized band gap Ẽ_g = E_g+Δ_c reflects the band gap shift due to plasma screening. Roughly, Δ_c can be estimated from the energy of the highest visible P exciton line (n_max=4) as-87 meV/n_max^2. It depends on thesample properties and thus is different for the thick and thin sample.The fit yieldsC_yP = 9.82 × 10^-02(√(eV) μ m)^-1 .The Urbach tail is given byα_Urbach(ω) = C_Uexp(ħω-Ẽ_g/E_U)θ(Ẽ_g-ħω),with C_U = 7.34 × 10^-03 μ m^-1 and E_U = 9.8meV. The exciton resonances are described by asymmetric Lorentzians<cit.>α_nP(ω) = C_nPΓ_nP/2 + 2ξ_nħ(ω-ω_n) /(Γ_nP/2)^2 + ħ^2(ω-ω_n)^2 .The values used are:n 2P 3P 4Pħω_n<cit.> (eV) 2.1472 2.1612 2.16604C_nP (10^-5eV/μ m) 1.587 0.793 0.2645Γ_nP (meV) 3.86 1.93 1.29ξ_nP (10^-3) -4.32 -4.32 -4.32 § DIPOLE TRANSITION ELEMENT P_78 The dipole transition element of the blue exciton is related to the oscillator strength byf_b/Ω_uc = 2/ħω|p_78|^2/m_0 | _1S^(b)(0)|^2 = 2/π a_b^3 ħω|p_78|^2/m_0 ,with Ω_uc=a_L^3 being the volume of the uni cell, a_L=0.45 nm is the lattice constant, and a_b is given in table <ref>. In <cit.> the oscillator strength was determined to be f_b = 1.2× 10^-2. This yields for the dipole transition element|p_78|^2/m_0 = 2.726eV ;however this approach is reliant on the blue excitons Bohr radius, which is not well known. Therefore, we employ a second derivation to double check the result. The dipole transition element for the blue exciton series is related to the transition matrix element of the Γ_5^+ valence and Γ_3^- conduction band basis states by <cit.>p_78 = -√(2/3) ⟨ε_3^+ | p |γ_2^- ⟩ .The transition matrix element of Eq. (<ref>) appears in the Suzuki-Hensel Hamiltonian <cit.> in the coefficientG= 2/m_0∑_ℓ = Γ_3^-|⟨ε_3 |p_z | γ_2^-,ℓ⟩|^2/E_5v -E_ℓ ,coupling all Γ_3^- bands to the respective Γ_5^+ band. The coupling coefficients are directly connected to the three dimensionless parameters A_i (i=1,2,3) of the Hamiltonian. Albeit the dipole coupling to a Γ_5^+ band is possible via four different symmetries Γ_4^-⊗Γ_5^+ = Γ_2^-⊕Γ_3^-⊕Γ_4^-⊕Γ_5^-, and thus there should exist four separate coupling coefficients, band structure calculations of Cu_2O show <cit.>, that no Γ_2^- band is located in the near vicinity of the Γ_5^+ valence band. Therefore when the coupling coefficient for Γ_2^- is neglected, the system of equations is solvable. The dimensionless parameters A_i are known from band structure fits <cit.> and result in a value of G = -2.973.As there is also only one Γ_3^- band in the vicinity of the Γ_5^+ band, the coupling coefficient G can be associated with the matrix element in Eq. (<ref>), hence|p_78|^2/m_0 = 2.662eV .Both results are in fairly good agreement. For the estimation of the static deformation potential, we use the result of Eq. (<ref>).§ PHONON-ASSISTED TRANSITION STRENGTH OF THE YELLOW AND GREEN SERIES We are denoting the band to band dipole transition matrix element between Γ_5v^+ and Γ_3c^- bands of Eq. (<ref>) asp_35 = ⟨ε_3^+ | p|γ_2^- ⟩ .We now calculate the relative strength of the dipole transition strength of the blue and violet transition, respectively. We are only interested in the Γ_4^- states, as they are the only ones accessible via the dipole operator p. 
The composition of these states is known from the coupling coefficients of the O_h group <cit.>. The resulting dipole transition matrix elements read as⟨ 0 | p| Z⟩_b= -√(2/3) p_35 , ⟨ 0 | p| Z⟩_v,1 = -√(6/5)p_35 , ⟨ 0 | p| Z⟩_v,2 = -√(2/15)p_35 ,For the phonon-assisted transition, we additionally need to consider the transition strength of the phonon process.The transition probability has the formP_0,μ∝∑_λ| ∑_ν ⟨Ψ_μ | h_λ | Ψ_ν⟩⟨Ψ_ν | p | Ψ_0⟩ |^2.As we are primarily interested in the transition that is facilitated by the Γ_3^- phonon, we restrict the sum over λ to the constituents of this respective phonon branch. The Γ_3^- phonon can theoretically scatter into Γ_4^-⊗Γ_3^- =Γ_4^+⊕Γ_5^+ states. For the yellow series only the Γ_5^+ ortho-exciton states contribute, the green series exhibits Γ_5^+ ortho- as well as Γ_4^+ para-exciton states. However, since the Γ_3^- phonon transition cannot inflict a change to the spin-configuration of the intermediate state, the scattering into Γ_4^+ states is not occurring. This can also readily be seen when the coupling strengths of the transitions are evaluated, where the Γ_4^+ participating states cancel each other out. The Γ_3^- phonon transition operator h_3[The momentum subscript Q is dropped here, since it carries no relevance in these considerations.] has two constituents, η_3_1 and η_3_2. Their coupling between the Γ_8^- and Γ_6^+ conduction band can be expressed viaη_3_1[ |8c,-3/2⟩; |8c,-1/2⟩; |8c,+1/2⟩; |8c,+3/2⟩ ] = D̃_3;68/√(2)[0;|6c,-1/2⟩; -|6c,+1/2⟩;0 ] , η_3_2[ |8c,-3/2⟩; |8c,-1/2⟩; |8c,+1/2⟩; |8c,+3/2⟩ ] = D̃_3;68/√(2)[ - |6c,+1/2⟩; 0; 0; |6c,-1/2⟩ ] ,with D̃_λ,ij = ħ D_λ,ij/√(2Ωρ E_λ). The transformed intermediate Γ_4^- states receive the structure of their Γ_5^+ counterparts, and utilising the orthonormality of the exciton states then eliminates coupling to most of the states. The phonon transition elements then read as follows_y⟨ XY|η_3_1| Z⟩_b= 0,_y⟨ XY|η_3_2| Z⟩_b= -D̃_3;68/√(2) , _g⟨ XY|η_3_1| Z⟩_v_1 = 0,_g⟨ XY|η_3_2| Z⟩_v_1 = - 3/2D̃_3;68/√(5) , _g⟨ XY|η_3_1| Z⟩_v_2 = 0,_g⟨ XY|η_3_2| Z⟩_v_2 = -1/2D̃_3;68/√(2) ,In this case, the choice of our intermediate states spares us from a separate evaluation of the η_3_1 component. The transition probability for the phonon assisted transition into the yellow series is then given byP_0,y∝|_y⟨ XY|η_3_2| Z⟩_b_b⟨ Z| p | 0⟩ |^2 = 1/3D̃^2_3;68 p^2_35 ,the transition probability into the green series results inP_0,g∝|∑_i=1^2_g⟨ XY|η_3_2| Z⟩_v_i_v_i⟨ Z| p | 0⟩ |^2 = 2/3D̃^2_3;68 p^2_35 .From this we concur, that the ratio between yellow and green Γ_3^- phonon-assisted absorption has to beα_g^Γ_3^-:α_y^Γ_3^- =2 : 1.apsrev4-1 | http://arxiv.org/abs/1705.09521v2 | {
"authors": [
"Florian Schöne",
"Heinrich Stolz",
"Nobuko Naka"
],
"categories": [
"cond-mat.mes-hall",
"quant-ph"
],
"primary_category": "cond-mat.mes-hall",
"published": "20170526104305",
"title": "The phonon assisted absorption of excitons in Cu$_2$O"
} |
Global hard thresholding algorithms for joint sparse image representation and denoising Reza BorhaniJeremy WattAggelos Katsaggelos =======================================================================================Sparse coding of images is traditionally done by cutting them into small patches and representing each patch individually over some dictionary given a pre-determined number of nonzero coefficients to use for each patch. In lack of a way to effectively distribute a total number (or global budget) of nonzero coefficients across all patches, current sparse recovery algorithms distribute the global budget equally across all patches despite the wide range of differences in structural complexity among them. In this work we propose a new framework for joint sparse representation and recovery of all image patches simultaneously. We also present two novel global hard thresholding algorithms, based on the notion of variable splitting, for solving the joint sparse model. Experimentation using both synthetic and real data shows effectiveness of the proposed framework for sparse image representation and denoising tasks. Additionally, time complexity analysis of the proposed algorithms indicate high scalability of both algorithms, making them favorable to use on large megapixel images. § INTRODUCTION In recent years a large number of algorithms have been developed for approximately solving the NP-hard sparse representation problem 𝐱 ‖𝐃𝐱-𝐲‖_2^2‖𝐱‖_0≤ s, where 𝐲 is a signal of dimension N×1, 𝐃 is an N× L dictionary, 𝐱 is an L×1 coefficient vector, and ‖·‖_0 denotes the ℓ_0 norm that counts the number of nonzero entries in a vector (or matrix). These approaches can be roughly divided into three categories. First, greedy pursuit approaches such as the popular Orthogonal Matching Pursuit (OMP) algorithm <cit.> which sequentially adds new atoms (or dictionary elements) to a signal representation in a greedy fashion until the entire budget of s nonzero coefficients is used. Second, convex relaxation approaches like the Fast Iterative Shrinkage Thresholding Algorithm (FISTA) <cit.> wherein the ℓ_0 norm of the coefficient vector 𝐱 is appropriately weighted, brought up to the objective function, and replaced with an ℓ_1 norm. This convex relaxation of the sparse representation problem in (<ref>) is then solved via accelerated proximal gradient. Lastly, hard thresholding approaches such as the Accelerated Iterative Hard Thresholding (AIHT) algorithm <cit.> which approximately solves (<ref>) by projected gradient descent onto the nonconvex set of s-sparse vectors given by {𝐱∈ℝ^L | ‖𝐱‖_0≤ s}. Greedy pursuit and convex relaxation approaches have received significant attention from researchers in signal and image processing (see e.g., <cit.>). However, several recent works have suggested that hard thresholding routines not only have strong recovery guarantees, but in practice can outperform greedy pursuit or convex relaxation approaches, particularly in compressive sensing <cit.> applications, both in terms of efficacy and computation time <cit.>. 
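To fix notation for the hard thresholding family surveyed above, a minimal Python sketch of its two ingredients is given below: the projection onto the set of s-sparse vectors and a single projected-gradient (AIHT-type) step, both of which are formalized in the equations of the next paragraphs. The step length alpha is assumed to be supplied by the caller; this is an illustrative sketch, not the implementation used in the cited packages.

import numpy as np

def hard_threshold(y, s):
    # Projection onto the s-sparse set: keep the s largest-magnitude entries, zero the rest.
    z = np.zeros_like(y)
    keep = np.argpartition(np.abs(y), y.size - s)[-s:]
    z[keep] = y[keep]
    return z

def projected_gradient_step(x, D, y, s, alpha):
    # One hard-thresholded gradient step for the patch-wise problem min ||Dx - y||^2, ||x||_0 <= s.
    return hard_threshold(x - alpha * D.T @ (D @ x - y), s)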
Current hard thresholding routines that are proposed to solve the general constrained optimization problem 𝐱f(𝐱) 𝐱∈𝒯,are largely based on the projected gradient method, wherein at the k^th iteration a gradient descent step is taken in the objective function f and is then adjusted via an appropriate operator as𝐱^k=P_𝒯(𝐱^k-1-α_k∇ f(𝐱^k-1)).Here the gradient step 𝐱^k-1-α_k∇ f(𝐱^k-1), where α_k is an appropriately chosen step-length, is transformed by the projection operator P_𝒯(·) so that each step in the procedure remains in the problem's given constraint set 𝒯. Denoting by ℋ_s(·) the projection onto the s-sparse set, we haveℋ_s(𝐲)=‖𝐱‖ _0≤ s ‖𝐱-𝐲‖_2^2where ℋ_s(𝐲) is a vector of equal length to 𝐲 wherein we keep only the top s (in magnitude) entries of 𝐲, and set the remaining elements to zero. With this notation, popular hard thresholding approaches <cit.> for solving (<ref>) take projected gradient steps of the form𝐱^k=ℋ_s(𝐱^k-1-α_k𝐃^T(𝐃𝐱^k-1-𝐲)).Since the s-sparse set is nonconvex one might not expect projected gradient to converge at all, let alone to a sufficiently low objective value. However projected gradient in this instance is in fact provably convergent when the dictionary 𝐃 satisfies various forms of a Restricted Isometry Property (RIP) <cit.>, i.e., if 𝐃 satisfies(1-δ_s)‖𝐱‖ _2^2≤‖𝐃𝐱‖ _2^2≤(1+δ_s)‖𝐱‖ _2^2for all s-sparse vectors 𝐱 and for some δ_s∈(0,1). Such a matrix is used almost exclusively in compressive sensing applications. Analogous projected gradient methods have been successfully applied to the low-rank matrix completion problem <cit.>, where hard thresholding is performed on singular values as opposed to entries of a matrix itself, and has also been shown to be theoretically and practically superior to standard convex relaxation approaches which invoke the rank-convexifying surrogate, the nuclear norm <cit.>, when RIP conditions hold for the problem. It must be noted that while these algorithms have mathematically guaranteed convergence for RIP-based problems, it is unclear how well they contend on the plethora of other instances of (<ref>) where the matrix 𝐃 does not necessarily hold an RIP (e.g., image denoising and deblurring <cit.>, super-resolution <cit.>, sparse coding <cit.>).§.§ Joint sparse representation modelIn many image processing applications large or even moderate sized images are cut into small image patches (or blocks), and then one wants to sparsely represent a large number of patches {𝐲_p} _p=1^P together, given a global budget S for the total number of nonzero coefficients to use. This ideally requires the user to decide on the individual per patch budget s_p for each of the P patches in a way to ensure that p∑s_p≤ S. Because this global budget allocation problem seems difficult to solve, in practice a fixed s=⌊S/P⌋ is typically chosen for all patches, even though this choice results in a suboptimal distribution of the global budget considering the wide range of differences in structural complexity across the patches. This is particularly the case with natural images wherein patches vary extremely in terms of texture, structure, and frequency content. We illustrate this observation through a simple example in Figure <ref> where two 8×8 patches taken from an image are sparsely represented over the 64×64 Discrete Cosine Transform (DCT) dictionary. 
One of the patches (patch B in Figure <ref>) is rather flat and can be represented quite well using only one atom from the DCT dictionary while the other more structurally complex patch (patch A) requires at least 7 atoms in order to be represented equally well in terms of reconstruction error. Notice that the naive way of distributing a total of 8 nonzero coefficients equally across both patches (4 atoms per patch) would adversely affect the representation of the more complex patch with no tangible improvement in the representation of the flatter one.This observation motivates introduction of the joint sparse representation problem, where local patch-wise budgets can be determined automatically via solving 𝐗 ‖𝐃𝐗-𝐘‖_F^2‖𝐗‖_0≤ S,where we have concatenated all signals {𝐲_p} _p=1^P into an N× P matrix 𝐘, 𝐗 is the corresponding coefficient matrix of size L× P, and ‖·‖_F denotes the Frobenius norm. If this problem could be solved efficiently, the issue of how to distribute the budget S across all P patches would be taken care of automatically, alleviating the painstaking per patch budget tuning required when applying (<ref>) to each individual patch. Note that one could concatenate all columns in the matrix 𝐗 into a single vector and then use any of the patch-wise algorithms designed for solving (<ref>). This solution however is not practically feasible due to the potentially large size of 𝐗. §.§ Proposed approaches The hard thresholding approaches described in this work for solving (<ref>) are based on the notion of variable splitting as well as two classic approaches to constrained numerical optimization. More specifically, in this work we present two scalable hard thresholding approaches for approximately solving the joint sparse representation problem in (<ref>). The first approach, based on variable splitting and the Quadratic Penalty Method (QPM) <cit.>, is a provably convergent method while the latter employs a heuristic form of the Alternating Direction Method of Multipliers (ADMM) framework <cit.>. While ADMM is often applied to convex optimization problems (where it is provably convergent), our experiments add to the growing body of work showing that ADMM can be a highly effective empirical heuristic method for nonconvex optimization problems.To illustrate what can be achieved by solving the joint model in (<ref>), we show in Figure <ref> the result of applying our first global hard thresholding algorithm to sparsely represent a megapixel image. Specifically, we sparsely represent a gray-scale image of size 1024×1024 over a 64×100 overcomplete Discrete Cosine Transform (DCT) dictionary using a fixed global budget of S=20× P, where P is the number of non-overlapping 8×8 patches which constitute the original image. We also keep count of the number of atoms used in reconstructing each patch, that is the count ‖𝐱_p‖ _0 of nonzero coefficients in the final representation 𝐃𝐱_p for each patch 𝐲_p, and form a heatmap of the same size as the original image in order to provide initial visual verification that our algorithm properly distributes the global budget. In the heatmap the brighter the patch color the more atoms are assigned in reconstructing it. 
As can be seen in the middle panel of Figure <ref>, our algorithm appears to properly allocate fractions of the budget to high frequency portions of the image.The remainder of this work is organized as follows: in the next Section we derive both Global Hard Thresholding (GHT) algorithms, referred to as GHT-QPM and GHT-ADMM hereafter, followed by a complete time complexity analysis of each algorithm. Then in Section <ref> we discuss the experimental results of applying both algorithms to sparse image representation and denoising tasks. Finally we conclude this paper in Section <ref> with reflections and thoughts on future work.§ GLOBAL HARD THRESHOLDING In this work we introduce two new hard thresholding algorithms that are effectively applied to the joint sparse representation problem in (<ref>). Both methods are based on variable-splitting, as opposed to the projected gradient technique, and unlike the methods discussed in Section <ref> do not rely on any kind of RIP condition on the dictionary matrix 𝐃.§.§ GHT-QPMThe first method we introduce is based on variable splitting and the Quadratic Penalty Method (QPM). By splitting the optimization variable we may equivalently rewrite the joint sparse representation problem from equation (<ref>) as 𝐗,𝐙 ‖𝐃𝐗-𝐘‖_F^2‖𝐙‖_0≤ S𝐗=𝐙.Using QPM we may relax this version of the problem by bringing the equality constraint to the objective in weighted and squared norm as 𝐗,𝐙 ‖𝐃𝐗-𝐘‖_F^2+ρ‖𝐗-𝐙‖_F^2‖𝐙‖_0≤ S,where ρ>0 controls how well the equality constraint holds. A simple alternating minimization approach can then be applied to solving this relaxed form of the joint problem. Specifically, at the k^th step we solve for the following two closed form update steps first by minimizing the objective of (<ref>) with respect to 𝐗 with 𝐙 fixed at its previous value 𝐙^k-1, as 𝐗^k=𝐗‖𝐃𝐗-𝐘‖_F^2+ρ‖𝐗-𝐙^k-1‖_F^2,which can be written in closed form as the solution to the linear system (𝐃^T𝐃+ρ𝐈)𝐗=𝐃^T𝐘+ρ𝐙^k-1,and can be solved for in closed form as 𝐗^k=(𝐃^T𝐃+ρ𝐈)^-1(𝐃^T𝐘+ρ𝐙^k-1).However we note that in practice such a linear system is almost never solved by actually inverting the matrix 𝐃^T𝐃+ρ𝐈, since solving the linear system directly via numerical linear algebra methods is significantly more efficient. Moreover, since in our case this matrix remains unchanged throughout the iterations significant additional computation savings can be achieved by catching a Cholesky factorization of 𝐃^T𝐃+ρ𝐈. We discuss this further in Subsection <ref>.Next, minimizing the objective of (<ref>) with respect to 𝐙 gives the projection problem 𝐙^k=‖𝐙‖ _0≤ S‖𝐙-𝐗^k‖_F^2,to which the solution is a hard thresholded version of 𝐗^k given explicitly as 𝐙^k=ℋ_S(𝐗^k). Taking both updates together, the complete version of GHT-QPM is given in Algorithm <ref>.§.§ GHT-ADMMWe also introduce a second method for approximately solving the joint problem, which is a heuristic form of the popular Alternating Direction Method of Multipliers (ADMM). While developed close to a half a century ago, ADMM and other Lagrange multiplier methods in general have seen an explosion of recent interest in the machine learning and signal processing communities <cit.>. While classically ADMM has been provably mathematically convergent for only convex problems, recent work has also proven convergence of the method for particular families of nonconvex problems (see e.g., <cit.>). There has also been extensive successful use of ADMM as a heuristic method for highly nonconvex problems <cit.>. 
It is in this spirit that we have applied ADMM to our nonconvex problem and, like these works, find it to provide excellent results empirically (see Section <ref>).To achieve an ADMM algorithm for the joint problem we rewrite it by again introducing a surrogate variable 𝐙 as𝐗,𝐙 ‖𝐃𝐗-𝐘‖_F^2‖𝐙‖_0≤ S𝐗=𝐙.We then form the Augmented Lagrangian associated with this problem, given by[ ℒ(𝐗,𝐙,Λ,ρ)=‖𝐃𝐗-𝐘‖_F^2;+ρ‖𝐗-𝐙‖_F^2+⟨Λ, 𝐗-𝐙⟩ ]where Λ is the dual variable, ⟨·,·⟩ returns the inner-product of its input matrices, and 𝐙 is constrained such that ‖𝐙‖_0≤ S. With ADMM we repeatedly take a single Gauss-Seidel sweep across the primal variables, minimizing ℒ independently over 𝐗 and 𝐙 respectively, followed by a single dual ascent step in Λ. This gives the closed form updates for the two primal variables as 𝐗^k=(𝐃^T𝐃+ρ𝐈)^-1(𝐃^T𝐘+ρ𝐙^k-1-Λ^k-1) [ 𝐙^k=ℋ_S[𝐗^k+1/ρΛ^k-1] ]Again the linear system in the 𝐗 update is solved effectively via catched Cholesky factorization, and ℋ_S(·) is the hard thresholding operator. The associated dual ascent update step is then given by Λ^k=Λ^k-1+ρ(𝐗^k-𝐙^k).For convenience we summarize the ADMM heuristic used in this paper in Algorithm <ref>. §.§ Time complexity analysisIn this Section we derive time complexities of both proposed algorithms. In what follows we assume that i) L=2N, that is the dictionary is two times overcomplete, and ii) the number of signals P greatly dominates every other influencing parameter.As can be seen in Algorithm <ref>, each iteration of GHT-QPM includes i) solving a linear system of equations to update 𝐗^k, and ii) hard thresholding the solution to update 𝐙^k. In our implementation of GHT-QPM we pre-compute 𝐃^T𝐘 as well as the Cholesky factorization of the matrix 𝐃^T𝐃+ρ𝐈 outside the loop and as a result, updating 𝐗^k can be done more cheaply via forward/backward substitutions inside the loop. Assuming 𝐃∈ℝ^N× 2N and 𝐘∈ℝ^N× P, construction of matrices 𝐃^T𝐘 and 𝐃^T𝐃+ρ𝐈 require 4N^2P and 4N^3+2N operations, respectively. In our analysis we do not account for matrix (re)assignment operations that can be dealt with memory pre-allocation. Additionally, whenever possible we can take advantage of the symmetry of the matrices involved, as is for example the case when computing 𝐃^T𝐃+ρ𝐈. Finally, considering 8/3N^3 operations required for Cholesky factorization of 𝐃^T𝐃+ρ𝐈, the outside-the-loop cost of GHT-QPM adds up to 4N^2P+20/3N^3+2N, that is 𝒪(N^2P).Now to compute the per iteration cost of GHT-QPM, the cost of hard thresholding operation must be added to the 8N^2P operations needed for forward and backward substitutions, as well as the 4NP operations required for computing 𝐃^T𝐘+ρ𝐙^k. Luckily, we are only interested in finding the S largest (in magnitude) elements of 𝐗^k, where S is typically much smaller than 2NP - the total number of elements in 𝐗^k. A number of efficient algorithms have been proposed to find the S largest (or smallest) elements in an array, that run in linear time <cit.>. In particular Hoare's selection algorithm <cit.>, also known as quickselect, runs in 𝒪(NP). Combined together, the per iteration cost of GHT-QPM adds up to 8N^2P+4NP+𝒪(NP), that is again 𝒪(N^2P). 
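To make the bookkeeping of the cost analysis above concrete, the following Python sketch implements the GHT-QPM iteration with the cached Cholesky factorization and a global top-S selection. NumPy's introselect-based argpartition stands in for the quickselect routine mentioned in the text, and the fixed iteration count and default rho = 0.1 are illustrative choices rather than the exact stopping rule used in the experiments.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def hard_threshold_global(X, S):
    # Keep the S largest-magnitude entries of the whole coefficient matrix X, zero the rest.
    Z = np.zeros_like(X)
    flat = np.abs(X).ravel()
    idx = np.argpartition(flat, flat.size - S)[-S:]    # linear-time selection of the top S
    Z.ravel()[idx] = X.ravel()[idx]
    return Z

def ght_qpm(D, Y, S, rho=0.1, iters=100):
    # Pre-computations outside the loop: D^T Y and a Cholesky factor of D^T D + rho I.
    chol = cho_factor(D.T @ D + rho * np.eye(D.shape[1]))
    DtY = D.T @ Y
    Z = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(iters):
        X = cho_solve(chol, DtY + rho * Z)             # X-update: forward/backward substitutions
        Z = hard_threshold_global(X, S)                # Z-update: projection onto the S-sparse set
    return Z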
The time complexity analysis of GHT-ADMM is essentially similar to that of GHT-QPM, with a few additional steps: subtraction of Λ^k-1 from 𝐃^T𝐘+ρ𝐙^k in updating 𝐗^k which takes 2NP operations, addition of 1/ρΛ^k-1 to 𝐗^k in updating 𝐙^k which requires 4NP more operations, and finally the dual variable update Λ^k←Λ^k-1+ρ(𝐗^k-𝐙^k) which adds 6NP operations to the per iteration cost of GHT-ADMM. Despite these additional computations, solving the two linear systems remains the most expensive step, and hence the time complexity of GHT-ADMM is 𝒪(N^2P), akin to that of GHT-QPM. § EXPERIMENTSIn this Section we present the results of applying our proposed global hard thresholding algorithms to several sparse representation and recovery problems. For both GHT-QPM and GHT-ADMM and for all synthetic and real experiments we kept ρ fixed at ρ=0.1, however we found that both algorithms are fairly robust to the choice of this parameter. We also initialized both 𝐙 and Λ as zero matrices. As a stopping condition, we ran both algorithms until subsequent differences of the RMSE value √(‖𝐃𝐙^k-𝐘‖ _F^2/P), where P is again the total number of patches, was less than 10^-5. In all experiments we compare our approach with popular approaches that work on the patch level: the Orthogonal Matching Pursuit (OMP) algorithm as implemented in the SparseLab package <cit.>, the Accelerated Iterative Hard Thresholding (AIHT) algorithm <cit.>, and the Compressive Sampling Matching Pursuit (CoSaMP) algorithm <cit.>. All experiments were run in MATLAB R2012b on a machine with a 3.40 GHz Intel Core i7 processor and 16 GB of RAM.§.§ Experiment on synthetic dataWe begin with a simple synthetic experiment where we create an overcomplete matrix 𝐃 of size 100×200 whose entries are generated from a Gaussian distribution (with zero mean and standard deviation of 0.1). We then generate P=100 s-sparse signals 𝐱_p for p=1...100 each consisting of s nonzero entries taking on the values ±1 uniformly. We then set 𝐲_p=𝐃𝐱_p for all p and either solve 100 instances of the local problem according to the model in (<ref>) using a patch-wise competitor algorithm, or by our methods using the joint model in (<ref>) where the global budget S is set to 100s. This procedure is repeated for each value of s in the range of s=5...30 and with 5 different dictionaries generated as described above. Finally the average support mismatch ratio, reconstruction error, and computation time are reported and displayed in Figure <ref>. Of these three criteria, the first two measure how close the recovered solution is to the true solution while the last one captures each algorithm's runtime and how it varies by changing the sparsity level s. More specifically, support mismatch ratio measures the distance between the support [The support of a matrix is defined as the set of indices with nonzero values. ] of the true solution 𝐗 denoted by 𝒜 and that of the recovered solution 𝐙^K denoted by ℬ, and is given by <cit.> mismatch ratio=max{|𝒜|, |ℬ|} -|𝒜∩ℬ|/max{|𝒜|, |ℬ|}.Here |𝒜|=|ℬ|=100s, and a mismatch ratio of zero indicates perfect recovery of the true support. Reconstruction error (or RMSE) on the other hand measures how close the recovered representation 𝐃𝐙^k is to 𝐘. As can be seen in Figure <ref>, both GHT-QPM and GHT-ADMM are competitive with the best of the patch-wise algorithms in all three of these categories. 
Interestingly, while both global hard thresholding algorithms match the leading algorithm (CoSaMP) in terms of support mismatch ratio and reconstruction error, they also match the algorithm with best computation time (OMP). Therefore these experiments seem to indicate that global hard thresholding provides the best of both worlds: algorithms with high accuracy and low computation time. It is also worth noting that unlike the competitors, the computation time of both proposed algorithms do not increase as the per patch sparsity level s increases, as expected from the time complexity analysis in Subsection <ref> of Section <ref>. Finally, note that in this experiment we did not take full advantage of the power of the joint model in (<ref>) over the patch-wise model in (<ref>) since all the synthetic patches were created using the same number s of dictionary atoms, and this number was given to all patch-wise algorithms. However the assumption that all patches taken from natural images could be well represented using the same number of atoms does not typically hold, and as we will see next the experiments on real data show that our global methods significantly outperform patch-wise algorithms in terms of reconstruction error.§.§ Sparse representation of megapixel images Here we perform a series of experiments on sparse representation of large megapixel images, using those images displayed in Figure <ref>. In the first set of experiments we compare GHT-QPM and GHT-ADMM to OMP, AIHT, and CoSaMP in terms of their ability to sparsely represent these images. This experiment illustrates the surprising efficacy and scalability of our global hard thresholding approaches. Decomposing each image into P non-overlapping 8×8 patches, this data is columnized into 64×1 vectors and concatenated into a 64× P matrix referred to as 𝐘. We then learn sparse representations for these image patches over a 64×100 overcomplete DCT dictionary. Specifically, we run each patch-wise algorithm using an average per patch budget of s=S/P nonzero coefficients for s in a range of 5 to 30 in increments of 1. We then run both global algorithms on the entire data set 𝐘 using the global budget S.Figure <ref> displays the results of these experiments on the megapixel images from Figure <ref>, including final root mean squared errors and runtimes of the associated algorithms. In all instances both of our global methods significantly outperform the various patch-wise methods in terms of RMSE over the entire budget range. For example, with a patch-wise budget of S/P=10 the RMSE of our algorithms range between 78% and 350% lower than the nearest competitor. This major difference in reconstruction error lends credence to the claim that GHT algorithms effectively distribute the global budget across all patches of the megapixel image. While not overly surprising given the wealth of work on ADMM heuristic algorithms (see Subsection <ref> of Section <ref>), it is interesting to note that GHT-ADMM outperforms GHT-QPM (as well as all other competitors) in terms of RMSE. Moreover, the total computation time of our algorithms remain fairly stable across the range of budgets tested, while competitors' runtimes can increase quite steeply as the nonzero budget is increased.To compare visual quality of the images reconstructed by different algorithms, we show in Figure <ref> the results obtained by OMP (the best competitor) and GHT-QPM (the inferior of our two proposed algorithms) using a total budget of S=5P on one of test images. 
The close-up comparison between the two methods clearly shows the visual advantage gained by solving the joint model. §.§ Runtime and convergence on a million patches So far in both synthetic and real sparse representation experiments we have kept the number of patches P fixed and only varied the patch-wise budget s. It is also of practical interest to explore how the runtimes of patch-wise and global algorithms are affected when the number of patches increase while keeping a fixed patch-wise budget. To conduct this experiment we collect a set of P random 8×8 patches taken from a collection of natural images <cit.>. For each P∈{ 2^10, 2^11, …,2^20} we then run all the algorithms keeping a fixed per patch budget of s=S/P=10 and plot the computation times in Figure <ref>. As expected the runtime of both our algorithms are linear in P (note that the scale on the x-axis is logarithmic). This Figure confirms that both GHT algorithms are highly scalable. As can be seen GHT-ADMM runs slightly slower than GHT-QPM which is consistent with our time complexity analysis.Finally, we choose three global sparsity budgets such that an average of s=5, 10, and 15 atoms would be used per individual patch and run GHT-QPM and GHT-ADMM to resolve 10^6 randomly selected patches. In Figure <ref> we plot for each iteration k=1...100 the RMSE value. Note three observations: firstly, both algorithms have decreasing values of ‖𝐃𝐙^k-𝐘‖ empirically[Denoting by f(𝐗,𝐙)=‖𝐃𝐗-𝐘‖ _F^2+ρ‖𝐗-𝐙‖ _F^2 as well as 𝐗^k=𝐗argminf(𝐗,𝐙^k-1) for some 𝐙^k-1 and 𝐙^k=‖𝐙‖ _0≤ Sargminf(𝐗^k,𝐙), then it follows that f(𝐗^k,𝐙^k)≤ f(𝐗^k,𝐙^k-1)≤ f(𝐗^k-1,𝐙^k-1). Hence the QPM approach produces iterates {𝐗^k,𝐙^k} that are non-increasing in the objective.], secondly that GHT-ADMM gives lower reconstruction errors compared to GHT-QPM across all budget levels, and third that within as few as just 10 iterations both algorithms have converged. §.§ Natural image denoisingIn this next set of experiments, we add Gaussian noise to the images in Figure <ref> and test the efficacy of our proposed methods for noise removal. More specifically, for a range of noise levels σ=5, 10, 20, 30, and 40 we add zero mean Gaussian noise with standard deviation σ to these images, which are then as before decomposed into P non-overlapping 8×8 patches, each of which is columnized, and a 64× P matrix 𝐘 is formed. Both of our global algorithms, GHT-QPM and GHT-ADMM, are then given this entire matrix to denoise along with a global budget of S=10P nonzero coefficients. The competing algorithms are given the analogous patch-wise budgets of S/P=10 nonzero coefficients, and process the image on the patch level. Noise removal results in terms of PSNR (in dB) are tabulated in Tables <ref> through <ref>. For each noise level, PSNR of the best algorithm is boxed. Here we can see that both of our global methods significantly outperform patch-wise algorithms, particularly at low to moderate noise levels. For example, at a noise level of σ=10 our algorithms greatly outperform the nearest competitor on the images tested by 2.7 to 4.9 dB, and at σ=15 both global algorithms produce results at 1 to 2.6 dB lower than the nearest competitor. Finally, in Figure <ref> we show an example of a megapixel image from Figure <ref> to which this amount of noise has been added, and the result of applying GHTA-ADMM (PSNR=26.91 dB) as well as OMP (PSNR=24.12 dB) to denoising the image. 
As can be seen, the resulting denoised image using GHTA-ADMM is also visually superior, recovering high frequency portions of the image with greater accuracy than OMP.§ CONCLUSIONSIn this work we have described two hard thresholding algorithms for approximately solving the joint sparse representation problem in (<ref>). Both are penalty method approaches based on the notion of variable splitting, with the former being an instance of the Quadratic Penalty Method. While the latter, a heuristic adaptation of the popular ADMM framework, is not provably convergent it nonetheless consistently outperforms most of the popular algorithms for sparse representation and recovery in our extensive experiments on synthetic and natural image data. These experiments show that our algorithms distribute a global budget of nonzero coefficients much more effectively than naive patch-wise methods that use fixed local budgets. Additionally, both proposed algorithms are highly scalable making them attractive for researchers working on sparse recovery problems in signal and image processing. While we have presented experimental results for sparse image representation and denoising, the approaches discussed in this paper for solving (<ref>) can be applied to a number of further image processing tasks such as image inpainting, deblurring, and super-resolution. § ACKNOWLEDGEMENTS This work is supported in part by the GK-12 Reach For the Stars program through the National Science Foundation grant DGE-0948017, and the Department of Energy grant DE-NA0000457. elsarticle-num | http://arxiv.org/abs/1705.09816v1 | {
"authors": [
"Reza Borhani",
"Jeremy Watt",
"Aggelos Katsaggelos"
],
"categories": [
"cs.CV",
"cs.LG"
],
"primary_category": "cs.CV",
"published": "20170527124024",
"title": "Global hard thresholding algorithms for joint sparse image representation and denoising"
} |
[][email protected] Faculty of Physics, University of Athens, GR-15784 Athens, Greece[][email protected] Faculty of Physics, University of Athens, GR-15784 Athens, Greece[][email protected] Faculty of Physics, University of Athens, GR-15784 Athens, Greece[][email protected] Faculty of Physics, University of Athens, GR-15784 Athens, Greece Considering the 3d Ising universality class of the QCD critical endpoint we use a universal effective action for the description of the baryon-number density fluctuations around the critical region. Calculating the baryon-number multiplicity moments and determining their scaling with system's size we show that the critical region is very narrow in the direction of the baryon chemical potential μ and wide in the temperature direction T for T > T_c. In this context, published experimental results on local proton density-fluctuation measurements obtained by intermittency analysis in transverse momentum spacein NA49 central A+A collisions at √(s_NN)=17.2 GeV (A=C,Si,Pb), restrict significantly the location (μ_c,T_c) of the QCD critical endpoint. The main constraint is provided by the freeze-out chemical potential of the Si+Si system, which shows non-conventional baryon density fluctuations, restricting (μ_c,T_c) within a narrow domain, 119 MeV≤ T_c ≤ 162 MeV, 252 MeV≤μ_c ≤ 258 MeV, of the phase diagram. Locating the QCD critical endpoint through finite-size scaling C. E. Tsagkarakis December 30, 2023 ==============================================================The search for the QCD critical endpoint (CEP), remnant of the chiral symmetry breaking, at finite baryon density and high temperature, is the main task in contemporary relativistic ion collision experiments <cit.>. Fluctuation analysis with global <cit.> and local measures <cit.> is the basic tool to achieve this goal. Up to now, indication of such non-conventional fluctuations, which can be related to the CEP, has been observed in the freeze-out state of Si+Si central collisions at NA49 SPS experiment with beam energy √(s_NN)=17.2 GeV <cit.>. However, the strong background and the poor statistics in the corresponding data set, did not allow for convincing statements concerning the existence and the location of the CEP. Similarly, in RHIC BES-I program, a non-monotonic behaviour of κσ^2 (kurtosis times the variance) for net-proton distribution, compatible with theoretical proposals <cit.>, was observed <cit.> but a conclusive evidence for the location of the critical point is still pending, so its experimental hunt continues. From the theoretical side the efforts are focused on Lattice QCD calculations at finite chemical potential in order to obtain the QCD phase diagram from first principles and predict the location of the CEP. Unfortunately, the until now obtained Lattice results depend strongly on the method used to handle the well known sign problem and they do not converge to a well defined critical chemical potential value <cit.>. Therefore a first principle prediction of the QCD CEP location, the holly grail of the physics of strongly interacting matter in our times, is still missing. In the present Letter we will make an effort to estimate the QCD CEP location employing an appropriate effective action for the thermodynamic description of the baryonic fluid around the critical region. To this end, we will assume that CEP belongs to the 3d Ising universality class, a hypothesis which is strongly supported by several theoretical works <cit.>. 
In this context, a universal effective action, found on the basis of a Monte-Carlo simulation of the 3d Ising system in an external field <cit.>, is an appropriate tool for the formulation of the QCD critical properties. Introducing a dimensionless scalar field ϕ=β_c^3 n_b (order parameter) with β_c=1/k_B T_c and n_b the baryon-number density, the effective action is written as follows:S_eff = ∫_V d^3 𝐱̂[ 1/2|∇̂ϕ|^2 + U(ϕ) - ĥϕ] ; T ≥ T_c U(ϕ) = 1/2m̂^2 ϕ^2 + m̂ g_4 ϕ^4 + g_6 ϕ^6In Eq. (<ref>) the variables with a "hat" are dimensionless: x̂_i=x_i β_c^-1, m̂=β_c m (m=ξ^-1, ξ being the correlation length), ĥ=(μ-μ_c) β_c (ordering field) and g_4=0.97 ± 0.02, g_6=2.05 ± 0.15 are universal dimensionless couplings <cit.>. The partition function, on the basis of Eq. (<ref>) is written schematically:𝒵=∫[ 𝒟ϕ] exp(-S_eff)and considering the ensemble of constant ϕ-configurations (∇̂ϕ =0) we obtain a grand-canonical expansion:𝒵=∑_N=0^Mζ^N exp[-1/2m̂^2 N^2/M-g_4 m̂N^4/M^3 - g_6 N^6/M^5]where ζ=exp(μ-μ_c/k_B T_c), M=V/β_c^3, m̂ = β_c ξ^-1=[T-T_c/T_c]^ν and ν≈ 2/3 for the 3d Ising universality class <cit.>. Our aim is to calculate thebaryon-number distribution moments:⟨ N^k ⟩=1/𝒵∑_N=0^M N^k ζ^N exp[-1/2m̂^2 N^2/M -g_4 m̂N^4/M^3 - g_6 N^6/M^5]with k=1,2,.. and explore their scaling behaviour with the system's size M around the critical region. At the critical point ζ_c=1 (μ=μ_c), m̂_c=0 (T=T_c) these moments obey the scaling law:⟨ N^k ⟩∼ M^k q ; q=d_F/d, k=1,2,..where d is the embedding dimension of the considered system and d_F the fractal dimension related to the critical fluctuations of the baryon density <cit.>.Our strategy to determine the location of the QCD CEP is the following. First we will estimate the size of the critical region based on the scaling behaviour of ⟨ N ⟩ with the system's size M. Then, using the published NA49 results on proton intermittency analysis in central A+A collisions at √(s_NN)=17.2 GeV<cit.> we will constrain the location of the CEP. For the second step it is crucial that the critical exponent q in Eq. (<ref>) is directly related to the intermittency index ϕ_2 measured in the proton intermittency analysis. Let us start with the estimation of the critical region. We determine the dependence of ⟨ N ⟩ on M for different values of ζ and m̂. Exactly at the critical point (ζ_c,m̂_c)=(1,0) the critical exponent q attains the value 5/6 for the 3d-Ising universality class (δ=5) extracted from the representation (<ref>). A direct calculation of ⟨ N ⟩ as a function of M from the partition function in Eq. (<ref>) shows that, departing slightly from the critical point leads to a behaviour ⟨ N ⟩∼ M^q̃ for M ≫ 1 with q̃≠ q. Outside the critical region q̃=1. Varying m̂ and ζ wemay also enter to the ϕ^4-dominance region when the last term in the effective action (<ref>) becomes suppressed with respect to the other two effective potential terms. In that case q̃=3/4 (mean field universality class, δ=3) and the information of the 3d-Ising critical exponent q is again lost. Thus, we consider as critical region of the QCD CEP the domain in the (ζ,m̂)-plane for which ⟨ N ⟩∼ M^q̃ ; 3/4 < q̃ < 1holds. Since the critical region depends in general on the size M, a supplementary constraint that the correlation length is greater than the linear size of the system, compatible with finite-size scaling theory <cit.>, is certainly needed. 
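The finite sums above are straightforward to evaluate numerically; the Python sketch below computes the mean baryon number from the grand-canonical expansion and extracts the effective exponent q̃ from the log-log slope of ⟨N⟩ versus M. The M grid and the (ζ, m̂) values are illustrative choices of this sketch, while g_4 and g_6 are the universal couplings quoted above.

import numpy as np

g4, g6 = 0.97, 2.05                      # universal dimensionless couplings

def mean_N(zeta, m_hat, M):
    # <N> from the finite grand-canonical sum, evaluated via shifted log-weights for stability.
    N = np.arange(M + 1, dtype=float)
    logw = (N * np.log(zeta) - 0.5 * m_hat**2 * N**2 / M
            - g4 * m_hat * N**4 / M**3 - g6 * N**6 / M**5)
    w = np.exp(logw - logw.max())
    return np.sum(N * w) / np.sum(w)

def q_tilde(zeta, m_hat, M_values):
    # Effective exponent from the scaling <N> ~ M^q_tilde over the chosen range of M.
    lnM = np.log(M_values)
    lnN = np.log([mean_N(zeta, m_hat, M) for M in M_values])
    return np.polyfit(lnM, lnN, 1)[0]

M_vals = np.arange(20, 701, 20, dtype=float)
print(q_tilde(1.0, 0.0, M_vals))         # at (zeta, m_hat) = (1, 0) the slope approaches q = 5/6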
To keep contact with M-values realistic for the size of the fireball produced in relativistic ion collisions we explore the validity of the scaling law (<ref>) for 20 < M < 700. This estimated range of M-values contains all sizes of nuclei ranging from Be (R_Be≈ 2.6 fm) to Pb (R_Pb≈ 7.5 fm), assuming T_c ≈ 150 MeV. In Fig. 1a the red shaded area denotes the critical region, i.e. the domain in (ζ,m̂)-plane for which the above constraints hold. We observe that the critical region is quite extended in the upper m̂-direction but it is very narrow in the ζ-direction. This is a crucial property restricting the location of the CEP. The blue line in Fig. 1a is the location of the (ζ,m̂) pairs which lead to a scaling of ⟨ N ⟩ with q̃=0.96. This is the mean value of the intermittency index ϕ_2 found in the SPS NA49-data analysis of the proton-density fluctuations in transverse momentum space forSi+Si central collisions at √(s_NN)=17.2 GeV. It can be shown that the intermittency index ϕ_2 in transverse momentum space is equal to the exponent q̃ in 3d configuration space <cit.>. Thus, the Si+Si freeze-out state should lie on this blue line. Taking the freeze-out data of the fireball created in central collisions of Si+Si at √(s_NN)=17.2 GeV to be (μ,T)=(260,162) MeV, as reported in <cit.>, the condition of lying on the blue line provides a relation between μ_c and T_c. This relation, determining all the possible freeze-out values for the QCD CEP compatible with our analysis and with the intermittency results in <cit.>, is presented graphically in Fig. 1b. The freeze-out temperature of Si+Si sets an upper bound on the critical temperature T_c < 162 MeV. A lower bound on T_c is provided by the requirement that the correlation length ξ is greater than the linear system size for the occurrence of critical fluctuations as stated above. For the smallest considered system, Be, with a radius of ≈ 2.6 fm, we obtain the upper bound T-T_c/T_c < 0.36 which covers also the case of Si and leads (Fig. 1b) to the lower bound on the critical temperature T_c > 119 MeV. Thus, in the plot of Fig. 1b we show only the allowed range 119 MeV < T_c < 162 MeV. We observe that the corresponding critical chemical potential domain is very narrow: 252 MeV < μ_c < 258 MeV. In fact, using the available Lattice-QCD estimates of the critical temperature T_c ≈ 146 MeV <cit.> and T_c ≈ 153 MeV <cit.> as the boarders of a critical zone (red shaded domain in Fig. 1b) we obtain a very narrow range for the critical chemical potential 256 MeV < μ_c < 257 MeV. For completeness we add in Fig. 1a the freeze-out states for the central collisions of the other two systems, Pb+Pb and C+C, considered in the NA49 experiment (at √(s_NN)=17.2 GeV). We clearly observe that these systems lie outside the critical (shaded) region although their freeze-out chemical-potential values do not differ so much. The reason is the narrowness of the critical region in the chemical potential direction which possesses a linear size of 5 MeV for T=T_c and T_c ≈ 150 MeV. To illustrate in more detail how our strategy leading to the critical region in Fig. 1a works in practice, we plot in Fig. 2 in double logarithmic scale, the mean value ⟨ N ⟩ versus M for various values of m̂ and ζ. The plot focuses on the relevant region 310 ≤ M ≤ 700. To facilitate the comparison we have scaled all moments to a common value at M=310 using as referencethe black line (m̂=0 and ζ=1). In Fig. 
2a we show ⟨ N ⟩ for m̂=0, 0.07, 0.15 and ζ=1.We observe that although m̂ increases by a factor of two or equivalently the reduced temperature t=T-T_c/T_c=m̂^3/2 by a factor of three (compare blue and red lines in Fig. 2a), the corresponding slope change is relatively small. This slow change of the exponent q̃ with increasing m̂ explains why the critical region in the t (or m̂)-direction is wide.On the other hand, assuming m̂=0 and varying ζ across the real axis, we observe that the critical scaling goes over to the conventional behaviour ⟨ N ⟩∼ M quite rapidly. This behaviour is in accordance with the plot of the critical region in Fig. 1a and it is clearly demonstrated in Fig. 2b where we plot in double logarithmic scale ⟨ N ⟩ versus M for three different ζ-values (1, 1.01, 1.02) using m̂=0 (T=T_c). Notice that, in Fig. 2a, the slope decreases as we depart from the critical (black) line, going over to the mean field behaviour, while in Fig. 2b it increases, going over to the conventional behaviour mentioned above. Furthermore, it is worth also to mention that the value of ⟨ N ⟩, obtained through Eq. (<ref>), is a prediction for the total mean baryon-number multiplicity in the critical freeze-out state, since there are no free parameters in the calculation. This information is shown by the black line in Fig. 2a (or Fig. 2b) which gives the mean baryon number as a function of the system's size M.Finally, it is important to demonstrate that the effective action (<ref>) is fully consistent with the critical properties of the 3d-Ising universality class. Therefore, we extend our finite-size analysis to the treatment of baryon-number susceptibility. From fluctuation-dissipation theorem we have:χ=1/V⟨ (δ N)^2 ⟩ ; ⟨ (δ N)^2 ⟩ = ζ∂/∂ζ( ζ∂log𝒵/∂ζ)At the critical chemical potential (ζ_c=1) we expect a peak of χ at T=T_c (m̂=0) for large but finite M obeying the finite-size scaling relationχ(T_c) ∼ V^γ/ν d∼ (k_B T_c)^3 M^2 q -1with 2 q -1 =γ/ν d (=2/3 for the 3d-Ising universality class), as well as a power-law dependence on temperatureχ(T) ∼ (T-T_c)^-γ ; M →∞close to the CEP (with critical exponent γ≈ 4/3 for the infinite system). In Fig. 3, plotting χ(T)/M^2 q - 1, calculated numerically through the partition function (<ref>), for various values of M, we clearly show the validity of both scaling laws in Eqs. (<ref>,<ref>). Thus the effective action (<ref>) captures correctly the 3d-Ising critical behaviour, supporting strongly the validity of the finite-size scaling analysis in the present Letter.Concluding remarks are now in order. Based on the effective action proposed in <cit.> for the 3d-Ising system we have demonstrated that the critical region of the QCD CEP is wide along the temperature direction and very narrow along the chemical potential axis. Using published results on intermittency analysis of proton density fluctuations in SPS NA49 experiment <cit.> and Lattice QCD estimates of the critical temperature it is possible to give a prediction for the location of the QCD CEP (μ_c,T_c) ≈ (256,150) MeV, as shown in Fig. 1b. From the analysis, above, one may draw conclusions about the most promising, crucial measurements in the experiments, currently in progress, at CERN and BNL (SPS-NA61, RHIC-BES). It is suggestive from the constraints of the critical region in Fig. 
1 that 2d intermittency of net-proton density in transverse momentum space (in the central rapidity region) combined with chemical freeze-out measurements may capture the systems, for different energies (√(s_NN)) and size of nuclei (A), which freeze out very close to the critical point. To this end we consider two classes of experiments with heavy (I) and medium or small size (II) nuclei:I. Pb+Pb, Au+Au: The crucial energy range for these processes in the experiments at CERN (Pb+Pb) and BNL (Au+Au), compatible with the requirements of the critical region (Fig. 1) is* Pb+Pb at 12.3 GeV < √(s_NN) < 17.2 GeV (SPS-NA61) corresponding to lab-energies 80 AGeV < E_lab < 158 AGeV and* Au+Au at 14.5 GeV < √(s_NN) < 19.6 GeV (RHIC-BES) II. Be+Be, Ar+Sc, Xe+La: The crucial measurements in these processes, regarding 2d intermittency, are in progress at the experiment SPS-NA61, at the highest SPS energy √(s_NN)=17.2 GeV. To complete the picture, however, a detailed study of chemical freeze-out in these collisions is also needed.99NA61 M. Gazdzicki (for the NA61 Collaboration), J. Phys. G: Nucl. Part. Phys. 36, 064039 (2009); N. Antoniou et al. (NA61/SHINE Collaboration), CERN-SPSC-2006-034.RHICM. M. Aggarwal etal. (STARCollaboration), arXiv: 1007.2613;STAR Note 0598:BES-II whitepaper: http://drupal.star.bnl.gov/STAR/starnotes/public/sn0598.Stephanov2005 M. A. Stephanov, Int. J. Mod. Phys. A 20, 4387 (2005); M. A. Stephanov, J. Phys. G: Nucl. Part. Phys. 38, 124147 (2011); Luo2017 X. Luo and N. Xu, arXiv:1701.02105 [nucl-ex].Antoniou2006 N. G. Antoniou, F. K. Diakonos, A. S. Kapoyannis and K. S. Kousouris, Phys. Rev. Lett. 97, 032002 (2006).Anticic2015 T. Anticic et al., Eur. Phys. J. C 75, 587 (2015).Stephanov2011M. A. Stephanov, Phys. Rev. Lett. 107, 052301 (2011).Luo2016 X. Luo, Nucl. Phys. A 956, 75 (2016).Katz2004 Z. Fodor and S. Katz, J. High Energy Phys. 04, 050 (2004).Gupta2008 R. Gavai and S. Gupta, Phys. Rev. D 78, 114503 (2008).Gavin1994 S. Gavin, A. Gocksch and R. D. Pisarski, Phys. Rev. D 49, 3079 (1994).Stephanov1998 M. Stephanov, K. Rajagopal and E. Shuryak, Phys. Rev. Lett. 81, 4816 (1998).Halasz1998 M. A. Halasz, A. D. Jackson, R. E. Shrock, M. A. Stephanov andJ. J. M. Verbaarschot, Phys. Rev. D 58, 096007 (1998).Berges1999 J. Berges and K. Rajagopal, Nucl. Phys. B 538, 215 (1999).Karsch2001 F. Karsch, E. Laermann and Ch. Schmidt, Phys. Lett. B 520, 41 (2001). Tsypin1994 M. M. Tsypin, Phys. Rev. Lett. 73, 2015 (1994).Pelissetto2002 A. Pelissetto and E. Vicari, Phys. Rep. 368, 549 (2002).Antoniou2017 N. G. Antoniou, F. K. Diakonos, X. Maintas and C. E. Tsagkarakis, arXiv:1705.00262 [hep-th], Phys. Rev. E in print.Stinchcombe1988 R. B. Stinchcombe, Order and Chaos in Nonlinear Physical Systems, Plenum Press, New York, 1988.Ortmanns1996 H. Meyer-Ortmanns, Rev. Mod. Phys. 68, 473 (1996).Antoniou2016 N. G. Antoniou, N. Davis and F. K. Diakonos, Phys. Rev. C 93, 014908 (2016). Becattini2006 F. Becattini, J. Manninen and M. Gazdzicki, Phys. Rev. C 73, 044905 (2006). | http://arxiv.org/abs/1705.09124v1 | {
"authors": [
"N. G. Antoniou",
"F. K. Diakonos",
"X. N. Maintas",
"C. E. Tsagkarakis"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170525105731",
"title": "Locating the QCD critical endpoint through finite-size scaling"
} |
=-0.6in =-0.80in =-0.3in =0.00in=210mm =165mm =0.1in [email protected] Key Laboratory of Mathematics Mechanization, Institute of Systems Science, AMSS, Chinese Academy of Sciences, Beijing 100190, China School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China Chaos 27 (2017) 053105. =15pt We report localized nonlinear modes of the self-focusing and defocusing nonlocal nonlinear Schrödinger equation with the generalized -symmetric Scarf-II, Rosen-Morse, and periodic potentials. Parameter regions are presented for broken and unbroken -symmetric phases of linear bounded states, and the linear stability of the obtained solitons. Moreover, we numerically explore the dynamical behaviors of solitons and find stable solitons for some given parameters. In recent years, linear and nonlinear partial differential equations havingsymmetry structures draw intense interests, because many significant and interesting properties in -symmetric systems are absent in conventional Hermitian ones. Particularly, stable solitons can be generated in local nonlinear wave equations with -symmetric external potentials. Recently, the nonlocal nonlinear Schrödinger (NNLS) equation has been presented and shown to be completely integrable, and it admits new properties that differ from its local version (the conventional nonlinear Schrödinger equation). Since the NNLS equation has a -symmetric self-induced potential, it is straightforward to consider the NNLS equation in the presence of -symmetric external potentials. However, to the best of our knowledge, there is no report on soliton solutions and their stability in this generalized nonlocal model. In this paper, we derive some exact localized nonlinear modes of the NNLS equation with three kinds of generalized -symmetric potentials, and conclude graphically the relationship between model parameters and broken and unbroken -symmetric phases of the generalized -symmetric potentials. Moreover, we study linear stability and dynamical behaviors of the obtained solitons under the scope of the above-mentioned parameters. Solitons and their stability in the nonlocal nonlinear Schrödinger equation with -symmetric potentials Zhenya Yan December 30, 2023 =========================================================================================================15pt § INTRODUCTION The nonlinear Schrödinger (NLS) equation is a fundamental model in many fields of nonlinear science <cit.> (alias the Gross-Pitaevskii equation in Bose-Einstein condensates <cit.>). The NLS equation with different kinds of real-valued external potentials, such as harmonic, periodic and double-well potentials, have been intensely studied, including exact soliton solutions (see, e.g., <cit.> and references therein) and numerical soliton solutions (see, e.g., <cit.> and references therein).Motivated by the famous work of Bender and Boettcher <cit.> and other related works <cit.> that the complex -symmetric potentials were introduced into the usual Hermitian Hamiltonians, Musslimani and his collaborators <cit.> introduced the complex -symmetric potentials (e.g., complex -symmetric Scarf-II and periodic potentials) into the usual NLS equation such that the stable nonlinear modes were found. The imaginary parts of the complex -symmetric potentials have an effect of gain-and-loss on nonlinear modes, and may lead to stable nonlinear modes since the gain-and-loss distributions with thesymmetry can always be balanced. 
Here the linear parity operator 𝒫 and antilinear time-reversal operator 𝒯 are defined as 𝒫: x→ -x, p→ -p, and 𝒯: x→ x,p→ -p,i→ -i defined <cit.>. After that, the NLS equation with distinct types of complex -symmetric potentials have been studied to yield stable localized nonlinear modes <cit.>. Also, the stable localized nonlinear modes are found for some generalized NLS equations in the presence of -symmetric potentials such as the NLS equation with the momentum term <cit.>, the third-order NLS equation <cit.>, the derivative NLS equation <cit.>, and etc. (see the recent review in <cit.>).Recently, a new integrable nonlocal nonlinear Schrödinger (NNLS) equation introduced from the AKNS hierarchy is of the form <cit.>i ∂/∂ tψ(x,t) = -∂^2/∂ x^2ψ(x,t) +g ψ^2(x,t)ψ^*(-x,t),where ψ(x,t) is a complex field, ψ(-x,t) a nonlocal field, g a non-zero real constant, and the star stands for the complex conjugate. The NNLS equation is shown to possess a Lax pair and infinite numbers of conservation laws <cit.>. Moreover, the higher-order rational solitons and dynamics of Eq. (<ref>) with the defocusing case (g=1) have been found by means of the generalized Darboux transformation methods and numerical methods <cit.>. Eq. (<ref>) becomes the usual NLS equation if ψ(x,t) is an even function about space. More recently, two families of two-parameter and multi-component extensions of Eq. (<ref>) were also found <cit.>.To the best of our knowledge, localized nonlinear modes of the NNLS equation with any complex -symmetric potential and their stability have not been investigated yet. The rest of this paper is organized as follow. In section II, we present a general theory for the self-focusing and defocusing NNLS equation with -symmetric potentials, linear broken and unbroken -symmetric phases and the linear stability for the localized nonlinear modes. In section III, we investigate in sequence the generalized 𝒫𝒯-symmetric Scarf-II, Rosen-Morse, and Rosen-Morse-II potentials. Unbroken and broken -symmetric phases, localized nonlinear modes, and their linear stability and dynamical behaviors are discussed in details. § -SYMMETRIC NONLOCAL NONLINEAR MODEL AND GENERAL THEORY We aim to investigate the NNLS equation with 𝒫𝒯-symmetric potentials i ∂/∂ tψ(x,t)=-∂^2/∂ x^2ψ(x,t) +[V(x)+i W(x)]ψ(x,t)+g ψ^2(x,t)ψ^*(-x,t),where ψ(x,t) is a complex function of real variables x and t, ψ^*(-x,t) stands for the complex conjugate of the nonlocal field ψ(-x,t), the constant g describes two-body `self-focusing' (g=-1) or `defocusing' (g=1) interactions. The 𝒫𝒯-symmetric potential is required that V(x) is an even function, i.e., V(x)=V(-x), and W(x) is an odd function, i.e., W(x)=-W(-x). Eq. (<ref>) without -symmetric potentials reduces to the NNLS equation introduced by Ablowitz and Musslimani, i.e., Eq. (<ref>). Particularly, if the obtained solutions of Eq. (<ref>) have even parity symmetry for space, i.e., ψ(-x,t)=ψ(x,t), they also solve the conventional NLS equation with the same -symmetric potentials. On the other hand, if the obtained solutions of Eq. (<ref>) have no even parity symmetry for space, i.e., ψ(-x,t)≠ψ(x,t), they are not identical to the solutions of the conventional NLS equation. In general, Eq. (<ref>) with non-zero potentials (i.e., V(x)+i W(x)≢0) is not completely integrable. 
Let Q(t)=∫^+∞_-∞ψ(x,t)ψ^*(-x, t)dx and P(t)=∫^+∞_-∞|ψ(x,t)|^2dx, named “quasi-power" and “power" respectively in the context of -symmetric optics <cit.>, and it is easy to show that dQ(t)/dt=0, thus Q(t) is a conserved quantity, and that dP(t)/dt=∫^+∞_-∞|ψ(x,t)|^2{2W(x)+gIm[ψ(x,t)ψ^*(-x,t)-ψ(-x,t)ψ^*(x,t)]}dx, thus Q(t) may not be conserved. We focus on stationary solutions of the 𝒫𝒯-NNLS equation in the form ψ(x,t)=ϕ(x)^-iμ t, where μ is the propagation constant in optics or real chemical potential in BEC, and complex nonlinear eigenmode ϕ(x) satisfies the stationary -NNLS equation μ ϕ(x) = -d^2/dx^2ϕ(x)+ [V(x)+i W(x)]ϕ(x) +g ϕ^2(x)ϕ^*(-x),subject to the boundary conditions ϕ(x)→0 as x→±∞. The linear problem of Eq. (<ref>) is written as H Φ(x)=λ Φ(x), where the Hamiltonian H=-∂^2_x +V(x)+i W(x) is a linear Schrödinger operator with complex -symmetric potential, and Φ(x) is the eigenfunction corresponding to eigenvalue λ. Usually, H is parameterized by tuning parameter(s) in specific V(x) and W(x). In the interested parametric space, it is called the parametric region ofunbroken symmetry (orunbroken -symmetric phase), if all of the eigenvalues of the Hamiltonian are real in corresponding parametric region; otherwise the parametric region ofbroken symmetry (or broken -symmetric phase).When W(x)≠0, the non-zero solution ϕ(x) should be complex, and thus can be written as ϕ(x)=ϕ̂(x) ^iφ(x),where the amplitude ϕ̂(x) is real and strictly positive, the real function φ(x) denotes the phase. We substitute Eq. (<ref>) into Eq. (<ref>) and yield the relations between the amplitude and the phase [φ_x(x)ϕ̂^2(x)]_x/ϕ̂^2(x) = W(x)+g ϕ̂(x)ϕ̂(-x)sin[θ(x)],and ϕ̂_xx(x)/ϕ̂(x)=μ+V(x)+g ϕ̂(x)ϕ̂(-x)cos[θ(x)],where θ(x)=φ(x)-φ(-x), which differs from the local NLS cases <cit.>.For given -symmetric potential V(x)+i W(x), one can find the exact solutions by solving Eqs. (<ref>) and (<ref>), or numerical solutions via applicable numerical methods in principle. Further, one can study the linear stability of the obtained localized modes by considering the perturbed solution of 𝒫𝒯-NNLS equation (<ref>) in the formψ(x,t) = {ϕ(x) + ε[F(x)^-iδ t+G^*(-x)^iδ^*t]}^-iμ t,where ε≪1, F(x) and G(x) are the perturbation eigenfunctions. Via the substitution of Eq. (<ref>) into Eq. (<ref>) and the linearization with respect to ε, the linear eigenvalue problem for the perturbation eigenfunctions is given by[[L(x)g ϕ^2(x); -g ϕ^*2(-x)- L(x) ]] [[ F(x); G(x) ]] =δ[[ F(x); G(x) ]],where L(x)=-∂^2_x+V(x)+i W(x)+2 g ϕ(x)ϕ^*(-x)-μ is 𝒫𝒯-symmetric, i.e., L(x)= L^*(-x). It is the routine that the -symmetric nonlinear modes are linearly stable if all eigenvalues δ of this problem are real, otherwise they are linearly unstable. In the following, several interesting and physically relevant 𝒫𝒯-symmetric potentials are introduced in Eq. (<ref>) and the properties of corresponding nonlinear modes are to be discussed. § NONLINEAR MODES WITH -SYMMETRIC POTENTIALS§.§ Generalized Scarf-II potential We first consider the generalized 𝒫𝒯-symmetric complex Scarf-II potential V_1(x)+i W_1(x), with the components [[ V_1(x); W_1(x) ]] =-[[ (w_1^2+2) ^2(x); 3w_1 (x)tanh(x) ]] -σ_1(x)[[ cos[θ_1(x)]; sin[θ_1(x)] ]],where σ_1(x)=gρ_1^2^2(x),θ_1(x)=2w_1tan^-1[sinh(x)],and w_1, ρ_1 are real-valued constants.The linear eigenvalue problem for the -symmetric Scarf-II potential (<ref>) related to Eq. (<ref>) as H_1 Φ(x)=λ Φ(x), H_1=-∂_x^2 + V_1(x)+ i W_1(x),where λ and Φ(x) are the eigenvalue and eigenfunction, respectively, and Φ(x)=0 as x→±∞. 
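In practice the phase diagram of this linear problem can be mapped by diagonalizing a finite-difference discretization of H_1 and inspecting the imaginary parts of the discrete (bound-state) eigenvalues. The Python sketch below illustrates this; the domain size, grid resolution and the simple criterion used to separate bound states from the discretized continuum (Re λ below the asymptotic potential value 0) are assumptions of the sketch, not the procedure used to produce the figures.

import numpy as np

def scarf2_spectrum(w1, rho1, g, L=15.0, n=1200):
    # Finite-difference matrix for H_1 = -d^2/dx^2 + V_1(x) + i W_1(x) with Dirichlet boundaries.
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    sech, tanh = 1.0 / np.cosh(x), np.tanh(x)
    theta = 2.0 * w1 * np.arctan(np.sinh(x))
    V = -(w1**2 + 2.0) * sech**2 - g * rho1**2 * sech**2 * np.cos(theta)
    W = -3.0 * w1 * sech * tanh - g * rho1**2 * sech**2 * np.sin(theta)
    off = np.full(n - 1, -1.0 / dx**2)
    H = np.diag(2.0 / dx**2 + V + 1j * W) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvals(H)

lam = scarf2_spectrum(w1=1.0, rho1=1.0, g=-1.0)
bound = lam[lam.real < -1e-3]            # crude bound-state selection below the continuum edge
print(np.max(np.abs(bound.imag)) if bound.size else "no bound states")
# an imaginary part at the discretization level signals an unbroken PT phase for this (w1, rho1)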
Notice that the generalized Scarf-II potential (<ref>) reduces to the conventional 𝒫𝒯-symmetric complex Scarf-II potential when ρ_1=0, whose linear problem can be shown to admit an entirely real spectrum for any w_1, since 3|w_1|≤ 9/4+w_1^2 always holds <cit.>. Here we consider the linear problem (<ref>) associated with the generalized Scarf-II 𝒫𝒯-symmetric potential (<ref>) at ρ_1≠0. We fix the self-focusing (g=-1) or defocusing (g=1) nonlinearities and investigate the parametric regions (w_1,ρ_1) of broken and unbroken 𝒫𝒯 symmetry (see Fig. <ref>). The generalized Scarf-II 𝒫𝒯-symmetric potential (<ref>) is a combination of hyperbolic and periodic functions, and the periodic parts, which are parameterized by w_1, result in alternate patterns of broken and unbroken 𝒫𝒯-symmetric phases with respect to w_1, which differs from the conventional Scarf-II 𝒫𝒯-symmetric potential. In the self-focusing case (g=-1), the real part of the 𝒫𝒯-symmetric potential becomes shallower when ρ_1 increases, which makes it harder for bound states to come into being, and that results in the bottom-right region with no bound states (see Fig. <ref>(a)). However, the defocusing case (g=1) is the opposite, and therefore bound states exist when ρ_1⩾0, at least in the region we have searched, as shown in Fig. <ref>(b). It should be noted that Figs. <ref>(a) and (b) are symmetric with respect to the horizontal axis w_1=0 because of the odd symmetry of w_1 in the linear eigenvalue problem. For the given 𝒫𝒯-symmetric potential (<ref>), we can find exact bright soliton solutions of the 𝒫𝒯-NNLS equation (<ref>), ϕ_1(x)=ρ_1 sech(x) e^iw_1tan^-1[sinh(x)], with μ=-1. It follows from the solution (<ref>) that we have S_1(x)=i/2(ϕϕ^*_x-ϕ_xϕ^*)=ρ_1^2w_1 sech^3(x). Following the idea in 𝒫𝒯-symmetric classical optics <cit.>, we know that S_1(x) is everywhere positive in the 𝒫𝒯 cell for positive w_1, and the power always flows in one direction, i.e., from the gain toward the loss domain. We can see in Figs. <ref>(c)-(f) that, for the exact soliton solutions (<ref>), the parametric region of linearly stable solitons is tiny in the whole parametric space. We next investigate the dynamical stability of the nonlinear modes (<ref>) for both self-focusing and defocusing cases by numerical simulations of the wave propagation without or with an initial random perturbation of order 2%. For the self-focusing case (g=-1), Fig. <ref> illustrates the profiles of the potentials and exact initial soliton states, and numerical simulations of the wave propagation. Interestingly enough, the stability of the solitons is sensitive to the shape of the real potential, such as single-well or double-well, and thus in the following we focus on the dependence of soliton stability on the shape of the potentials. When w_1=ρ_1=1, corresponding to a linearly stable case, the parameters lie in a 𝒫𝒯-unbroken region for the linear operator H_1 [cf. Eq. (<ref>)] in the parametric space. The corresponding nonlinear mode is stable and an evident oscillatory (breather-like) behavior can be observed (see Fig. <ref>(c)). If we choose w_1=1.2, ρ_1=2, corresponding to a linearly unstable case, the linear operator H_1 is 𝒫𝒯-unbroken again, and the double-well potential V_1(x) has two completely separated wells (see Fig. <ref>(d)). In this case the nonlinear mode diverges at around t=100 without initial noise (see Fig. <ref>(f)).
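The propagation experiments just described (evolution of the exact soliton with or without a 2% random perturbation) can be mimicked with a standard first-order split-step Fourier scheme; the nonlocal term ψ^2(x,t)ψ^*(-x,t) only requires reversing the field on a symmetric periodic grid. The sketch below is illustrative rather than a reproduction of the authors' simulations: the grid, time step, integration time and the frozen-coefficient treatment of the nonlinear step over each dt are assumptions.

```python
import numpy as np

def propagate_nnls(psi0, U, g, L, T, dt):
    """First-order split-step Fourier integration of
    i psi_t = -psi_xx + U(x) psi + g psi(x,t)^2 psi*(-x,t) on a periodic grid."""
    N = psi0.size
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * L / N)   # angular wavenumbers
    kinetic = np.exp(-1j * k**2 * dt)                     # exact kinetic factor per step
    psi = psi0.astype(complex)
    for _ in range(int(round(T / dt))):
        psi_flip = np.roll(psi[::-1], 1)                  # psi(-x) on the grid x_j = -L + j*dx
        eff = U + g * psi * np.conj(psi_flip)             # potential + nonlocal nonlinearity
        psi = np.fft.ifft(kinetic * np.fft.fft(psi * np.exp(-1j * eff * dt)))
    return psi

if __name__ == "__main__":
    L, N, w1, rho1, g = 20.0, 1024, 1.0, 1.0, -1.0        # illustrative parameters
    x = -L + 2.0 * L * np.arange(N) / N
    sech = 1.0 / np.cosh(x)
    th, sig = 2 * w1 * np.arctan(np.sinh(x)), g * rho1**2 * sech**2
    U = (-(w1**2 + 2) * sech**2 - sig * np.cos(th)) \
        + 1j * (-3 * w1 * sech * np.tanh(x) - sig * np.sin(th))
    soliton = rho1 * sech * np.exp(1j * w1 * np.arctan(np.sinh(x)))
    rng = np.random.default_rng(0)
    psi0 = soliton * (1.0 + 0.02 * rng.standard_normal(N))   # ~2% random perturbation
    psi_T = propagate_nnls(psi0, U, g, L, T=20.0, dt=1e-3)
    print("peak amplitude at t=20:", float(np.abs(psi_T).max()))
```

Monitoring the peak amplitude (or the quasi-power Q) over longer times is a simple way to distinguish the breather-like stable evolution from the divergent case discussed above.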
§.§ Generalized Rosen-Morse potential well In this subsection, we study the generalized -symmetric Rosen-Morse potential [[ V_2(x); W_2(x) ]] =-[[ 2 ^2(x); 2w_2tanh(x) ]] -σ_2(x)[[ cos[θ_2(x)]; sin[θ_2(x)] ]],with σ_2(x)=gρ_2^2^2(x),θ_2(x)=2w_2x,and w_2, ρ_2 being real-valued constants. When ρ_2=0, the -symmetric potential V_2(x)+i W_2(x) becomes the conventional -symmetric Rosen-Morse potential <cit.>.The linear eigenvalue problem for the -symmetric Rosen-Morse potential (<ref>) related to Eq. (<ref>) is H_2 Φ(x)=λ Φ(x), H_2=-∂_x^2+ V_2(x)+ i W_2(x),where λ and Φ(x) are the eigenvalue and eigenfunction, respectively, and Φ(x)→0 as x→±∞. The parametric regions for (w_2,ρ_2) of broken and unbrokensymmetry are shown in Figs. <ref> (a) and (b).For the given -symmetric Rosen-Morse potential (<ref>), we can find soliton solutions of the 𝒫𝒯-NNLS equation (<ref>) in the form ϕ_2(x)=ρ_2(x)^i w_2 x with μ=w_2^2-1, and S_2(x)=i (ϕϕ^*_x-ϕ_xϕ^*)/2=ρ_2^2w_2^2(x). S_2(x) is everywhere positive in thecell for positive w_2, and the power always flows in one direction, i.e., from the gain toward the loss domain. We can conclude from Figs. <ref>(c) and (d) that, linear stable exact solitons can only be found when w_2 is very close to zero.Dynamical stability of the soliton solutions in Eq. (<ref>) is checked via numerical simulations for wave propagations without or with initial random perturbation of order 2%, and all the cases here have unbrokensymmetry for the linear operator H_2 [cf. Eq. (<ref>)]. If we choose w_2=0.01, ρ_2=1, we have a stable propagation (see Fig. <ref>(c)) with 2% initial random noise for the self-focusing case g=-1, but an unstable one (see Fig. <ref>(f)) without initial noise for the defocusing case g=1. Real and imaginary parts of the generalized -symmetric Rosen-Morse potentials for the cases above are also shown in Fig. <ref>. §.§ Generalized Rosen-Morse-II (periodic) potential We next consider another kind of -symmetric potential which could be realized in optical lattice, the generalized Rosen-Morse-II potential: [[ V_3(x); W_3(x) ]] =-[[ w_3^2cos^2(x);3w_3sin(x) ]] -σ_3(x)[[ cos[θ_3(x)]; sin[θ_3(x)] ]],with σ_3(x)=gρ_3^2cos^2(x),θ_3(x)=2w_3sin(x)and w_3, ρ_3 being real-valued constants (see Figs. <ref>(a) and (b)).For the given periodic potential (<ref>), we can also find the periodic-wave solution of the 𝒫𝒯-NNLS equation (<ref>) in the form ϕ_3(x)=ρ_3cos(x)^i w_3sin(x) with μ=1. Notice that under the chosen periodic potentials (<ref>) even though ϕ̂(x)=ρ_3cos(x) is not positive everywhere, Eq. (<ref>) is still a solution of the 𝒫𝒯-NNLS equation (<ref>). It follows from the solution (<ref>) that we haveS_3(x)=i/2(ϕϕ^*_x-ϕ_xϕ^*)=ρ_3^2w_3cos^3(x). It should be noted that for the positive w_3, S_3(x) is no longer positive everywhere in thecell and the power does not always flows from the gain toward the loss domain.By numerical methods, we obtain stationary soliton solutions of Eq. (<ref>) with the -symmetric periodic potential (<ref>) for the self-focusing (g=-1) case. Figs. <ref>(b) and (e) display the real and imaginary parts of the numerical bright solition solutions for the different parameters. Results for the numerical propagation are shown in Figs. <ref>(c) and (f), with w_3=0.1, ρ_3=0.5, μ=0.8 and w_3=0.3, ρ_3=1, μ=0.4, respectively. 
We find that the two numerical bright soliton solutions are both unstable in this case.§ CONCLUSIONS In conclusion, we have found localized nonlinear modes of the nonlocal nonlinear Schrödinger equation in the presence of generalized 𝒫𝒯-symmetric Scarf-II, Rosen-Morse, and Rosen-Morse-II potentials. We have investigated the parametric regions for the broken and unbroken -symmetric phases. Moreover, we have studied the linear stability and dynamical stability of the obtained soliton solutions under the scope of the above-mentioned parameters. It should be note that all the obtained exact solutions above are exceptional ones, and one may find more generic localized solutions for the same potentials in a numerical form. The idea used in this paper can also be extended to other nonlocal nonlinear wave equations with -symmetric potentials. The authors thank the referees for their valuable comments and suggestions. This work was partially supported by the NSFC under Grant Nos. 11571346 and 61621003, and the Youth Innovation Promotion Association CAS. op Y. S. Kivshar and G. P. Agrawal,Optical solitons: from fibers to photonic crystals (Academic Press, New York, 2003); B. A. Malomed, D. Mihalache, F. Wise, and L. Torner, J. Opt. B: Quantum Semiclassical Opt.7, 53R (2005).ocean M. J. Ablowitz,Nonlinear Dispersive Waves (Cambridge University Press, Cambridge, 2011); A. Osborne,Nonlinear Ocean Waves and the Inverse Scattering Transform (Academic, Boston, 2010).bec L. Pitaevskii and S. Stringari, Bose-Einstein condensation (Oxford University Press, Oxford, 2003); R. Carretero-González, D. J. Frantzeskakis, and P. G. Kevrekidis, Nonlinearity21, R139 (2008); F. Dalfovo, S. Giorgini, L. P. Pitaevskii, and S. Stringari, Rev. Mod. Phys.71, 463 (1999).hm V. N. Serkin, A. Hasegawa, and T. L. Belyaeva, Phys. Rev. Lett.98, 074102 (2007); J. Belmonte-Beitia, V.M. Perez-Garcia, V. Vekslerchik, and P. J. Torres, Phys. Rev. Lett.98, 064102 (2007); J. Belmonte-Beitia, V. M. Pérez-García, V. Vekslerchik, and V. V. Konotop, Phys. Rev. Lett.100, 164102 (2008).yan10 H. Friedrich, G. Jacoby, and C. G. Meister, Phys. Rev. A65, 032902 (2002); Yu. V. Bludov,Z. Yan,V. V. Konotop, Phys. Rev. A81, 063610 (2010); Z. Yan and V. V. Konotop, Phys. Rev. E80, 036607 (2009); Z. Yan and D. M. Jiang, Phys. Rev. E85, 056608 (2012).RealPotential1 V. A. Brazhnyi and V. V. Konotop, Mod. Phys. Lett. B18, 627 (2004).RealPotential2 Y. V. Kartashov, B. A. Malomed, and L. Torner, Rev. Mod. Phys.83, 247 (2011).RealPotential3 E. A. Ostrovskaya and Y. S. Kivshar, Phys. Rev. Lett.90, 160407 (2003); N. K. Efremidis,et al.., Phys. Rev. Lett.91, 213906 (2003); N. K. Efremidis and D. N. Christodoulides, Phys. Rev. A67, 063608 (2003); B. B. Baizakov, B. A. Malomed, and M. Salerno, Europhys. Lett.,63, 642 (2003); D. E. Pelinovsky, A. A. Sukhorukov, and Y. S. Kivshar, Phys. Rev. E70, 036618 (2004); J. Yang, and Z. Chen, Phys. Rev. E73, 026609 (2006).Bender98 C. M. Bender and S. Boettcher, Phys. Rev. Lett.80, 5243 (1998).ms M. Znojil, J. Phys. A: Math. Gen.33, L61 (2000).hyper A. Khare andB. P. Mandal, Phys. Lett. A272, 53 (2000).sh Z. Ahmed, Phys. Lett. A282, 343 (2001).PP D. E. Pelinovsky, P. G. Keverekidis, and D. J. Frantzeskakis, Eur. Phys. Lett.101, 11002 (2013).PTR J. G. Muga,et al., Phys. Rep.395, 357 (2004).Muss Z. H. Musslimani,et al., Phys. Rev. Lett.100, 030402 (2008); Z. H. Musslimani,et al., J. Phys. A 41, 244019 (2008).Bender2 C. M. Bender, D. C. Brody, and H. F. Jones, Am. J. Phys.71, 1095 (2003); C. M. Bender, Rep. Prog. 
Phys.70, 947 (2007).Mussg Z. Yan,et al., arXiv:1009.4023, 2010; F. K. Abdullaev,et al., Phys. Rev. A83, 041805(R) (2011); N. Moiseyev,ibid 83, 052125 (2011); S. Nixon,et al.,ibid.85, 023822 (2012); M. Kreibich,et al.,Phys. Rev. A87, 051601(R) (2013); Y. Lumer,et al., Phys. Rev. Lett.111, 263901 (2013); Z. Yan,Phil. Trans. R. Soc. A371, 20120059 (2013); C. P. Jisha,et al., Phys. Rev. A89, 013812 (2014); Z. Yan,et al., Phys. Rev. E92, 022913 (2015); Z. Yan,et al., Phys. Rev. A92, 023821 (2015).yanchaos15 Z. Yan, Y. Chen, and Z. Wen, Chaos 26, 083109 (2016).yansr Y. Chen and Z. Yan, Sci. Rep.6, 23478 (2016).yanpre17 Y. Chen and Z. Yan, Phys. Rev. E95, 012205 (2017).vvk16 V. V. Konotop, J. Yang, and D. A. Zezyulin, Rev. Mod. Phys.88, 035002 (2016).nnls M. J. Ablowitz and Z. H. Musslimani, Phys. Rev. Lett.110, 064105 (2013). wen2016 X. Wen, Z. Yan, and Y. Yang, Chaos26, 063123 (2016).yz G. Zhang, Z. Yan, Y. Chen, Appl. Math. Lett.69, 113 (2017).yanaml Z. Yan, Appl. Math. Lett.47, 61 (2015); ibid 62, 101 (2016).Levai00 G. Lévai and M. Znojil, J. Phys. A33, 7165 (2000). | http://arxiv.org/abs/1705.09401v1 | {
"authors": [
"Zichao Wen",
"Zhenya Yan"
],
"categories": [
"nlin.PS",
"math-ph",
"math.MP",
"nlin.SI",
"quant-ph"
],
"primary_category": "nlin.PS",
"published": "20170526002047",
"title": "Solitons and their stability in the nonlocal nonlinear Schroedinger equation with PT-symmetric potentials"
} |
The border support rank of two-by-twomatrix multiplication is sevenMarkus Bläser, Matthias Christandl and Jeroen ZuiddamWe show that the border support rank of the tensor corresponding to two-by-two matrix multiplication is seven over the complex numbers. We do this by constructing two polynomials that vanish on all complex tensors with format four-by-four-by-four and border rank at most six, but that do not vanish simultaneously on any tensor with the same support as the two-by-two matrix multiplication tensor. This extends the work of Hauenstein, Ikenmeyer, and Landsberg. We also give two proofs that the support rank of the two-by-two matrix multiplication tensor is seven over any field: one proof using a result of De Groote saying that the decomposition of this tensor is unique up to sandwiching, and another proof via the substitution method. These results answer a question asked by Cohn and Umans. Studying the border support rank of the matrix multiplication tensor is relevant for the design of matrix multiplication algorithms, because upper bounds on the border support rank of the matrix multiplication tensor lead to upper bounds on the computational complexity of matrix multiplication, via a construction of Cohn and Umans. Moreover, support rank has applications in quantum communication complexity.§ INTRODUCTION Multiplication of two n× n matrices over a fieldis an -bilinear map ^n× n×^n× n→^n× n called the matrix multiplication map. The matrix multiplication map corresponds naturally to the following structure tensor. Let [n] be the set {1,2,…, n} and let {e_ij : i,j∈ [n]} be the standard basis for the vector space ^n× n of n× n matrices. Define the structure tensor of the matrix multiplication map as⟨ n,n,n ⟩∑_i,j,k∈ [n] e_ij⊗ e_jk⊗ e_ki ∈ ^n× n⊗^n× n⊗^n× n. (Technically, this is the structure tensor of the trilinear map that computes the trace of a product of three matrices.) Let V_1, V_2, and V_3 be vector spaces. The tensor rank of a tensor t∈ V_1⊗ V_2⊗ V_3 is the smallest number r such that t can be written as a sum of r simple tensors v_1⊗ v_2⊗ v_3 ∈ V_1 ⊗ V_2 ⊗ V_3. The computational complexity of matrix multiplication is tightly related to the tensor rank of the tensor ⟨ n,n,n⟩ (see e.g. <cit.>). Strassen showed that the tensor rank of ⟨ 2,2,2 ⟩ is at most seven over any field <cit.>; Hopcroft and Kerr <cit.> showed that the tensor rank is at least seven over the finite field _2, and Winograd <cit.> showed that the tensor rank is at least seven over any field. Over an algebraically closed field, the border rank of a tensor t∈ V_1⊗ V_2⊗ V_3 is the smallest number r such that t is in the Zariski closure of all tensors of rank at most r in V_1 ⊗ V_2 ⊗ V_3.Landsberg proved that the border rank of ⟨ 2,2,2 ⟩ is seven over the fieldof complex numbers <cit.>, and a different proof for this based on highest-weight vectors was later given by Hauenstein, Ikenmeyer and Landsberg <cit.>.We extend the above results. Let t∈ V_1⊗ V_2 ⊗ V_3 be a tensor in a fixed basis, a hypermatrix. The support of t is the set of coordinates where t has a nonzero coefficient. The support rank of t is the minimal rank of a tensor with the same support as t. This has also been called s-rank <cit.>, nondeterministic rank <cit.>, zero-one rank <cit.> and minimum rank of a nonzero pattern <cit.> in the literature. The border support rank of t is the minimal border rank of a tensor with the same support as t. We prove the following. 
The support rank of ⟨ 2,2,2 ⟩ is seven over any field .The border support rank of ⟨ 2,2,2 ⟩ is seven over . <ref> and <ref> answer a question of Cohn and Umans <cit.>, that was also posed as an open problem during the Algorithms and Complexity in Algebraic Geometry programme at the Simons Institute <cit.>. We note that, in general, computing the tensor rank or support rank of a tensor is a computationally hard task. Namely, given a 3-tensor t and a natural number r, deciding whether the tensor rank of t is at most r is NP-complete over any finite field <cit.> and NP-hard over any integral domain <cit.>. Moreover, given a 2-tensor (that is, a matrix) A and a natural number r, deciding whether the support rank of A is at most r is NP-hard over the real numbers <cit.>.Previously, it was known that the border support rank of the matrix multiplication tensor ⟨ n,n,n ⟩ is at least 2n^2-n <cit.>, so in particular that the border support rank of ⟨ 2,2,2 ⟩ is at least six. This result was obtained using Young flattenings. Studying (border) support rank is interesting for two reasons. The first reason comes from algebraic complexity theory. As mentioned above, the tensor rank of the matrix multiplication tensor is tightly related to the computational complexity of matrix multiplication. It turns out that asymptotically, the border support rank of matrix multiplication gives an upper bound on the tensor rank of matrix multiplication, as follows.The exponent of matrix multiplication ω is defined as the smallest number β such that for any ε>0 the tensor rank of ⟨ n,n,n ⟩ is in (n^β+ε). The number ω is between 2 and 2.3728639 <cit.> and it is a major open problem in algebraic complexity theory to decide whether ω equals 2. One can define an analogous quantity ω_s for the support rank of ⟨ n,n,n ⟩. One can show with Strassen's laser method that ω≤ (3ω_s - 2)/2 <cit.>. To show that ω = 2, it therefore suffices to show that ω_s = 2. Cohn and Umans aim to obtain upper bounds on ω_s by realizing the algebra of n× n matrices inside some cleverly chosen group algebra.The second reason, which was our original motivation, comes from quantum communication complexity. Let f: X× Y × Z →{0,1} be a function on a product of finite sets X, Y and Z. Alice, Bob and Charlie have to compute f in the following sense. Alice receives an x∈ X, Bob receives a y∈ Y and Charlie receives a z∈ Z. Moreover, the players share a so-called Greenberger-Horne-Zeilinger (GHZ) state of rank r, which is described by the tensor _r = ∑_i=1^r e_i ⊗ e_i ⊗ e_i ∈ (^r)^⊗ 3. The players apply local quantum operations. After this, each player has to output a bit such that if f(x,y,z) = 1, then with some nonzero probability all players output 1 and if f(x,y,z) = 0, then with probability zero all players output 1. The complexity of such a protocol is the logarithm of the rank r of the GHZ-state used, and the minimum complexity of all quantum protocols for f is the nondeterministic communication complexity of f. This number equals the logarithm of the support rank of the tensor with support given by f, that is ∑_x,y,z f(x,y,z)e_x⊗ e_y ⊗ e_z <cit.>. Similarly, the logarithm of the border support rank of the tensor with support given by f equals the approximate nondeterministic communication complexity of f. Since tensor rank and border rank are natural measures of entanglement, our result may also be of interest to the quantum information theory community.Notation. 
For any tensor t, we will denote tensor rank by (t), border rank by (t), support rank by (t) and border support rank by (t). Paper outline. This paper is structured as follows. In <ref> we give two proofs for <ref>. In <ref> we give a short introduction to border rank lower bounds by highest-weight vectors and then apply this theory to prove <ref>.§ SUPPORT RANKWe will give two proofs for <ref>. Both proofs use the following lemma that reduces the 8-parameter minimization problem at hand to a 1-parameter minimization problem. Let be a field.Let e_11 = ([ 1 0; 0 0 ]), e_12=([ 0 1; 0 0 ]), e_21=([ 0 0; 1 0 ]), e_22=([ 0 0; 0 1 ]) be the standard basis of the space of 2× 2 matrices ^2× 2 over . Let e_1,e_2,e_3,e_4 be the standard basis of ^4. We naturally identify ^2× 2 with ^4 by e_11↦ e_1, e_12↦ e_2, e_21↦ e_3, e_22↦ e_4. Let _4()^× 3 act on the tensor space ^2× 2⊗^2× 2⊗^2× 2 accordingly. Let t∈^2×2⊗^2×2⊗^2×2 be a tensor with the same support as the matrix multiplication tensor ⟨ 2,2,2⟩. There is a tensor s in the _4()^× 3-orbit of t, with the same support as t, such that all nonzero entries of s are 1 except possibly for the coefficient of e_11⊗ e_11⊗ e_11. Identify the tensor ⟨ 2,2,2⟩ = ∑_i,j,k∈ [2] e_ij⊗ e_jk⊗ e_kℓ with the tensore_111 + e_123 + e_231 + e_243 + e_312 + e_324 + e_432 + e_444∈^4 ⊗^4 ⊗^4,where e_ijk = e_i ⊗ e_j ⊗ e_k. We can view this tensor as as a 4× 4× 4 cube filled with elements 0 and 1 from . Let t be a tensor in ^4 ⊗^4 ⊗^4 withthe same support as ⟨ 2,2,2 ⟩, so, in 1-slices,t= [ a 0 0 0; 0 0 b 0; 0 0 0 0; 0 0 0 0 ][ 0 0 0 0; 0 0 0 0; c 0 0 0; 0 0 d 0 ][ 0 e 0 0; 0 0 0 f; 0 0 0 0; 0 0 0 0 ][ 0 0 0 0; 0 0 0 0; 0 g 0 0; 0 0 0 h ] where a,b,c,d,e,f,g,h are nonzero elements in . Here we index the 1-slices by the first tensor leg, the rows of the slices by the second tensor leg and columns of the slices by the third tensor leg. Scaling the 1-slices of t according to (1/b, 1/d, 1/f, 1/h), that is, applying (1/b, 1/d, 1/f, 1/h) ⊗𝕀_4⊗𝕀_4 to t, yields a tensor of the form t'= [ a'000;0010;0000;0000 ][0000;0000; c'000;0010 ][0 e'00;0001;0000;0000 ][0000;0000;0 g'00;0001 ] Scaling the rows of t' according to (1/e', 1, 1/g', 1), that is, applying 𝕀_4 ⊗(1/e', 1, 1/g', 1) ⊗𝕀_4 to t', yields a tensor of the form t” = [ a”000;0010;0000;0000 ][0000;0000; c”000;0010 ][ 0 1 0 0; 0 0 0 1; 0 0 0 0; 0 0 0 0 ][ 0 0 0 0; 0 0 0 0; 0 1 0 0; 0 0 0 1 ]Finally, scaling the columns of t” according to (1/c”,1,1,1), that is, applying 𝕀_4 ⊗𝕀_4 ⊗(1/c”,1,1,1) to t”, yields a tensor of the required form. Our first proof of <ref> uses a corollary of a result of De Groote on the uniqueness of the decomposition of ⟨ 2,2,2⟩ into simple tensors. Let v v_1 ⊗ v_2 ⊗ v_3∈^2× 2⊗^2× 2⊗^2× 2 be an element of an arbitrary optimal decomposition of ⟨ 2,2,2 ⟩ into simple tensors oversuch that the rank of each v_i as an element of ^2 ⊗^2 is one. Then there exist invertible matrices A,B,C∈_2() such that v = (A^-1e_11B) ⊗ (B^-1e_11C) ⊗ (C^-1e_11A), whereA, B and C act by matrix multiplication from the left and right on ^2× 2.For any number q ∈, define the perturbed matrix multiplication tensor ⟨ 2,2,2 ⟩_⟨ 2,2,2⟩ + (-1)e_11⊗ e_11⊗ e_11; this is the tensor obtained from ⟨ 2,2,2⟩ by replacing the coefficient of e_11⊗ e_11⊗ e_11 by . We now give our first proof of <ref> using the above uniqueness statement. As was already observed by De Groote, <ref>gives the upper bound (⟨ 2,2,2 ⟩_0) ≤ 6 and thus (⟨ 2,2,2⟩_) ≤ 7 for all ∈. We claim that (⟨ 2,2,2⟩_) ≥ 7 for all nonzero ∈. Supposeis a number insuch that (⟨ 2,2,2⟩_) = 6. 
Let ⟨ 2,2,2⟩_ = ∑_i=1^6 u_i ⊗ v_i ⊗ w_i be a decomposition into simple tensors. Then⟨ 2,2,2 ⟩ = ⟨ 2,2,2⟩_ + (1-)e_11⊗ e_11⊗ e_11 = ∑_i=1^6 u_i ⊗v_i ⊗ w_i + (1-)e_11⊗ e_11⊗ e_11is an optimal decomposition of ⟨ 2,2,2⟩ into simple tensors. Therefore, by <ref>,there exist A,B,C in _2() such thatA^-1 e_11 B⊗B^-1 e_11 C⊗C^-1 e_11 A = (1-)e_11⊗ e_11⊗ e_11. Let f_1, f_2 be the standard basis of ^2. Then by taking appropriate transposes the previous equation is equivalent toA^-1 f_1 ⊗ B^T f_1⊗B^-1 f_1 ⊗ C^T f_1⊗ C^-1 f_1 ⊗A^T f_1= (1-)f_1^⊗ 6,which implies that A^T, B^T, C^T each have eigenvector f_1. Let α, β, γ be the respective eigenvalues. Then A^-1, B^-1, C^-1 have eigenvalues α^-1, β^-1, γ^-1. This yields the equation α^-1 β β^-1 γ α γ^-1 = 1-. We conclude that = 0.By <ref> we can conclude that (⟨ 2,2,2 ⟩) = 7. Our second proof of <ref> uses a method called the substitution method. Let x_ij, y_ij, z_ij (i,j∈ [2]) be variables. Let X,Y,Z be the corresponding 2× 2 variable matrices. For ∈, define the functionf_(X,Y,Z) ∑_i,j,k∈[2] x_ij y_jk z_ki + (-1)x_22 y_22 z_22.The tensor rank of ⟨ 2,2,2 ⟩_ is equal to the smallest number r such that f_(X,Y,Z) can be written as a sum ∑_ρ=1^r u_ρ(X) v_ρ(Y) w_ρ(Z), where u_ρ is a linear form in the x_ij, similarly for v_ρ and w_ρ.Suppose that the function f_(X,Y,Z) has rank r in the sense that it has a decomposition into a sum of r products of three linear forms as described above. If U = (u_ij)_ij∈[2] is any upper triangular matrix, then f_(X,YU^-1,UZ) has rank at mostr and by direct computationf_(X,YU^-1,UZ) = f_(X,Y,Z) + u_12/u_11 (-1)x_22y_21z_22.There exists an upper triangular matrix U such that the function g_(X,Y,Z)f_(X,YU^-1, UZ) has a decompositiong_(X,Y,Z) = ∑_ρ=1^r u_ρ(X) v_ρ(Y) w_ρ(Z)in which w_r(Z) is of the form z_21 + a_12 z_12 + a_22 z_22 for some a_12,a_22∈. Apply the substitution z_21↦w̃(Z)-a_12 z_12 - a_22 z_22 to (<ref>) to see that ∑_j∈ [2](x_1jy_j1z_11 + x_2jy_j1z_12 + x_1jy_j2 w̃(Z))+ x_21 y_12z_22 + x_22 y_22z_22 + u_12/u_11 (-1)x_22y_21z_22= ∑_ρ=1^r-1 u_ρ(X) v_ρ(Y) w_ρ([z_11z_12; w̃(Z)z_22 ]).We can test that y_22 occurs in the obtained decomposition of (<ref>) by setting x_22, z_22 to 1 and y_21 to 0 and the other x_ij, z_ij to 0.We can test that y_12 occurs in the obtained decomposition of (<ref>) by setting x_21, z_22 to 1 and the other x_ij, z_ij to 0. Say y_22 occurs in v_r-1 and y_12 occurs in v_r-2. Then, there is a substitution y_12↦ṽ_12(Y), y_22↦ṽ_22(Y),which, applied to (<ref>) yields∑_j∈ [2]( x_1jy_j1z_11 + x_2jy_j1z_12 + x_1j ṽ_j2(Y) w̃(Z))+ x_21 ṽ_12(Y)z_22 + x_22 ṽ_22(Y)z_22 + u_12/u_11 (-1)x_12y_21z_22= ∑_ρ=1^r-3 u_ρ(X) v_ρ([y_11 ṽ_12(Y);y_21 ṽ_22(Y) ]) w_ρ([z_11z_12; w̃(Z)z_22 ]).To clean up, setting z_22↦ 0 in (<ref>) shows that∑_j∈ [2] x_1jy_j1z_11 + x_2jy_j1z_12 + x_1j ṽ_j2(Y) w̃(Z)= ∑_ρ=1^r-3 u_ρ(X) v_ρ([y_11 ṽ_12(Y);y_21 ṽ_22(Y) ]) w_ρ([z_11z_12; w̃(Z) 0 ]).We can test that x_21occurs in the obtained decomposition (<ref>) by setting y_11, z_12 to1 and the other x_ij, z_ij to 0. Similarly, we can test that x_22 occurs in the obtained decomposition (<ref>) by setting y_21, z_12 to 1 and the other x_ij, z_ij to 0. Say x_21 occurs in u_r-3 and x_22 occurs in u_r-4. 
We apply a substitution x_21↦ũ_21(X), x_22↦ũ_22(X) to see that∑_j∈ [2] x_1jy_j1z_11 + ũ_2j(X)y_j1z_12 + x_1j ṽ_j2(Y) w̃(Z)= ∑_ρ=1^r-5 u_ρ([x_11x_12; ũ_21(X) ũ_22(X) ]) v_ρ([y_11 ṽ_12(Y);y_21 ṽ_22(Y) ]) w_ρ([z_11z_12; w̃(Z) 0 ]).Apply the substitution z_12↦ 0 to (<ref>) to get∑_j∈ [2] x_1jy_j1z_11= ∑_ρ=1^r-5 u_ρ([x_11x_12; ũ_21(X) ũ_22(X) ]) v_ρ([y_11 ṽ_12(Y);y_21 ṽ_22(Y) ]) w_ρ([z_11 0; w̃(Z) 0 ]).which clearly has rank 2. Therefore, r≥ 7. By <ref> we are done. § BORDER SUPPORT RANK In this section all vector spaces are complex vector spaces. We will review a method that was introduced in <cit.> to study equations for border rank and that was later used in <cit.> to give a proof that (⟨ 2,2,2⟩) ≥ 7. Then, we will use this method to show that the border support rank of ⟨ 2,2,2⟩ equals seven. Our Python code is included as an ancillary file with the arXiv submission. View the space ⊗^3 ^n as an affine variety, and let [⊗^3^n] be its coordinate ring. Define σ_r⊆⊗^3^n as the subset of tensors with border rank at most r,σ_r {s ∈⊗^3 ^n : (s) ≤ r}.This is called the rth secant variety of the Segre variety of ^n×^n ×^n. The set σ_r isZariski closed in ⊗^3 ^n by definition of border rank.In other words, if we let I(σ_r) ⊆[⊗^3 ^n] be the ideal of polynomials on ⊗^3 ^n that vanish identically on σ_r, then Z(I(σ_r)) = σ_r.§.§ Lower bounds by polynomialsBy definition, if (t) > r then there exists a polynomial in I(σ_r) that does not vanish on t. The following standard proposition says that we may in fact assume that this polynomial is homogeneous. Let t∈⊗^3 ^n. If (t) > r, then there exists a homogeneous polynomial f in I(σ_r) such that f(t)≠ 0. We give a proof for the convenience of the reader. If f(t)=0 for all f∈ I(σ_r), then t∈ Z(I(σ_r)) = σ_r, which is a contradiction. Let f be a polynomial in I(σ_r) such that f(t)≠0. Let f = ∑_d f_d be the decomposition of f into homogeneous parts. There is a d such that f_d(t)≠ 0.Let v∈σ_r. For any α∈, define g(α)f(α v). This is a polynomial in α. We have g(α) = ∑_d α^d f_d(v). Since σ_r is closed under scaling and f(v) = 0, we have g(α) = 0 for any α∈, so g is the zero polynomial. Therefore, each coefficient f_d(v) is 0. This argument holds for any v∈σ_r, so f_d∈ I(σ_r) for each d. The polynomial ring [⊗^3^n] decomposes into a direct sum of homogeneous parts [⊗^3 ^n]_d and, by the above argument, the vanishing ideal I(σ_r) decomposes accordingly as I(σ_r) = ⊕_d I(σ_r)_d with I(σ_r)_d ⊆[⊗^3 ^n]_d.The space ⊗^3 ^n has a natural action of G_n^× 3 and σ_r is a G-submodule. Thus [⊗^3 ^n]_d ≅^d (⊗^3 (^n)^*) has a natural action of G and I(σ_r)_d is a G-submodule. We will use the well-known theory of highest-weight vectors to exploit this symmetry. The theory of highest-weight vectors holds in a much more general setting than we need here. We refer to <cit.> and <cit.> for the general theory, and focus on a description of the theory for the group _n^× 3.Let W be a finite-dimensional G-module. Choose a basis so that G becomes the group of triples of invertible matrices. Let T⊆ G be the subgroup of triples of diagonal matrices. For t = ((a_1,…,a_n), (b_1,…,b_n), (c_1,…,c_n)) ∈ Tand z = (u,v,w)∈ (^n)^3 define t^z ∏_i=1^n a_i^u_i b_i^v_i c_i^w_i. As a T-module, W decomposes into weight spaces,W = ⊕_z∈(^n)^3 W_z where W_z = {w∈ W : t· w = t^z w ∀ t∈ T}.The vectors in W_z are said to have weight z. Let U⊆ G be the subgroup of triples of unipotent matrices, that is, upper triangular matrices with ones on the diagonal. 
A nonzero vector v ∈ W_z is a highest-weight vector if u· v = v for all u∈ U. A finite-dimensional (rational) representation W of _n^× 3 is irreducible if and only if it has a unique highest-weight vector v, up to multiplication by a scalar, that is, [W]^U = _ v. If W is irreducible and v is a highest-weight vector, then one has W = _(G v). Moreover, two irreducible representations are isomorphic if and only if their highest-weight vectors have the same weight.We call a sequence of n nonincreasing integers a generalized partition.It turns out that the weight of a highest-weight vector is a triple of generalized partitions. For any triple of generalized partitions λ, we will denote an abstract realisation of the G-module with highest-weight λ by V_λ. For any finite-dimensional G-module W, the highest-weight vectors in W of weight λ form a vector space, which we denote by [W_λ]^U.For a generalized partition λ, define the dual partition λ^* as the generalized partition obtained from λ by negating every entry and reversing the order. Then V_λ^* = V_λ^*, the dual module. We note that the polynomial irreducible representations are precisely the ones that are isomorphic to V_λ with λ a partition.Recall that σ_r is the variety of tensor in ⊗^3 ^n of border rank at most r. Consider the isotypic decomposition of W^d (⊗^3 (^n)^*) and I(σ_r)_d under the action of _n^× 3,W =⊕_λ⊢dW_λ^*=⊕_λ⊢dk(λ) V_λ^*,I(σ_r)_d =⊕_λ⊢d I(σ_r)_λ^*=⊕_λ⊢dm(λ) V_λ^*,where λ runs over all triples of partitions of d with at most n parts, and k(λ)V_λ^* denotes an isotypic component consisting of a direct sum of k(λ) copies of the irreducible G-representation V_λ^*, similarly for m(λ)V_λ^*. Note that, although the direct sums run over triples of partitions λ, the representations W and I(σ_r) are not polynomial since we take duals. The number k(λ) is exactly the dimension of the highest-weight vector space [W_λ^*]^U, and the number m(λ) is the dimension of the highest-weight vector space [I(σ_r)_λ^*]^U. The following proposition extends <ref> by saying that we may assume that the polynomial we are looking for is a highest-weight vector, if we replace t by a random point in its G-orbit. Let t∈⊗^3^n. If (t) > r, then there exists a highest-weight vector f∈ I(σ_r) and a group element g∈ G such that f(gt) ≠ 0.We provide the proof for the convenience of the reader. By <ref>, there exists a homogeneous polynomial f∈ I(σ_r) such that we have f(t)≠ 0. By highest-weight theory, the polynomial f can be written as a sum ∑_λ, i g_λ, i f_λ, i, where f_λ, i is a highest-weight vector of type λ in I(σ_r) and g_λ,i∈ G. Since f(t)≠ 0, there exists a λ and an i so that f_λ,i(g_λ,i^-1t) ≠ 0.§.§ Highest-weight vector methodThe following method was first proposed in <cit.> to study equations for border rank and was later used in <cit.> to give a proof that (⟨ 2,2,2⟩) ≥ 7. Let t∈⊗^3 ^n be a tensor for which we want to show (t) > r.* Choose a degree d ∈. Let W be the space ^d (⊗^3 (^n)^*). Choose a partition triple λ⊢ d such that the highest-weight vector space [W_λ^*]^U is nonzero.* Construct a basis b_1, …, b_k for [W_λ^*]^U.* Find a linear combination f of the basis elements b_1, …, b_k that vanishes on all tensors of border rank at most r, that is, f ∈ [I(σ_r)_λ^*]^U where σ_r is the variety of tensors with border rank at most r.* Show that f does not vanish on gt for some g∈ G.The above method is guaranteed to work by <ref>. Before applying the method, we will consider each step in more detail. Step 1. Kronecker coefficient. 
The dimension of the space of U-invariants [(^d( ⊗^3(^n)^*))_λ^*]^U is the so-called Kronecker coefficient k(λ). We pick a partition triple λ such that the number k k(λ) is nonzero. Algorithms for computing Kronecker coefficients have been implemented in for example Schur <cit.>, Sage <cit.> and the Python package Kronecker <cit.>.Step 2. Las Vegas construction of basis. For any natural number ℓ≤ n, let ϕ_ℓ e_1^* ∧⋯∧ e_ℓ^* be the Slater determinant living in ∧^ℓ (^n)^*. For any partition μ⊢ d with at most n parts, we let ϕ_μ denote the tensor ϕ_ν_1⊗⋯⊗ϕ_ν_μ_1 living in ⊗^d (^n)^*, where ν denotes the transpose of μ. Let λ=(λ^(1),λ^(2), λ^(3)) be a triple of partitions of d. We define ϕ_λϕ_λ^(1)⊗ϕ_λ^(2)⊗ϕ_λ^(3). This tensor lives in ⊗^3 ⊗^d (^n)^*, but we view it as a tensor in ⊗^d ⊗^3 (^n)^* via the canonical reordering. Let P_d be the canonical symmetrizer ⊗^d ⊗^3 (^n)^* →^d (⊗^3 (^n)^*) acting from the right. The group S_d^× 3 has a natural right action on ⊗^3 ⊗^d (^n)^* and via the reordering also on ⊗^d ⊗^3 (^n)^*. Let λ be a triple of partitions of d. The tensors {ϕ_λπ P_d : π∈ S_d^× 3} span the vector space [(^d (⊗^3(^n)^*))_λ^*]^U, see <cit.>. We construct a basis of [(^d (⊗^3(^n)^*))_λ^*]^U as follows. Randomly pick k permutation pairs τ_1, …, τ_k ∈ S_d^× 2. Let e∈ S_d be the identity permutation. Let π_i = (e, τ_i^(1), τ_i^(2)) and let b_i ϕ_λπ_i P_d. Pick k random tensors w_1,…, w_k in ⊗^3^n and evaluate every b_i in every w_j, giving a k-by-k evaluation matrix M. If M has full rank, then (b_1,…, b_k) is the desired basis.Before going to the next step we discuss how to efficiently implement the evaluation of a polynomial represented by a pair of permutations, as was already described in <cit.>. Let f = ϕ_λπ P_d and let t be the tensor ∑_i=1^r t^1_i⊗ t^2_i ⊗ t^3_i in ⊗^3 ^n. The evaluation of the polynomial f at t is equal to the contractionϕ_λπ P_dt^⊗ d = ϕ_λπt^⊗ d= ∑_j ∈ [r]^dϕ_λπ(t^1_j_1⊗ t^2_j_1⊗ t^3_j_1) ⊗⋯⊗(t^1_j_d⊗ t^2_j_d⊗ t^3_j_d)= ∑_j ∈ [r]^dϕ_λ^(1) (t^1_j_1⊗⋯⊗ t^1_j_d) ·ϕ_λ^(2) τ^(1) (t^2_j_1⊗⋯⊗ t^2_j_d) ·ϕ_λ^(3) τ^(2) (t^3_j_1⊗⋯⊗ t^3_j_d).Note that the last expression is a sum of a product of determinants. Let us study the first factor of a summand. Let ν denote the transpose of λ^(1). We haveϕ_λ^(1) (t^1_j_1⊗⋯⊗ t^1_j_d)= (ϕ_ν_1⊗⋯⊗ϕ_ν_μ_1) (t^1_j_1⊗⋯⊗ t^1_j_d)=_ν_1( t^1_j_1, …, t^1_j_ν_1) _ν_2( t^1_j_ν_1, …, t^1_j_ν_1+ν_2)⋯_ν_μ_1( t^1_j_d-ν_μ_1, …, t^1_j_d),where _m(v_1,…,v_m) denotes top m-by-m minor of the matrix with columns v_1, …, v_m. Suppose that, in our evaluation of ∑_j, we have chosen values for j_1,…, j_ν_1 and suppose _ν_1( t^1_j_1, …, t^1_j_ν_1) is 0. Then whatever choices we make for j_ν_1+1,…, j_d, the summand at hand will be zero. Recognizing this situation early is crucial.Step 3. Construction of a vector in I(σ_r). Pick k random tensors t_1, …, t_k of rank r. Evaluate each basis element b_i in each random tensor t_j. If the resulting matrix (b_i(t_j))_i,j∈ [k] has a nontrivial kernel, then we find a candidate highest-weight vector f in I(σ_r). We can verify the correctness of the candidate by evaluating f at a symbolic tensor of rank r. This evaluation should be zero. The way we do this symbolic evaluation is by working in ⊗^3 ^6 and using the straightening algorithm, see e.g. the SchurFunctors package in Macaulay2 <cit.>. We used multi-prolongation to split up the computation in order to save memory. We refer to <cit.> for a discussion of multi-prolongation. Step 4. Evaluating at gt. Evaluate f at gt for a random g∈ G. 
(In our case, it turns out that taking g to be the identity is good enough.) §.§ The matrix multiplication tensorWe will now prove that (⟨ 2,2,2 ⟩) = 7. The upper bound follows from <ref>, so it remains to prove the lower bound. Let σ_6 be the variety of tensors in ⊗^3 ^4 of border rank at most 6. We will apply the method described above to the tensor ⟨ 2,2,2⟩_, see <ref>.Let d = 20 and let λ be the partition triple (5,5,5,5)^3. The Kronecker coefficient k(λ) equals 4. Let W^20 (⊗^3 (^4)^*) and denote by W_λ^* the isotypic component of type λ^*. Writing permutations in the one-line notation, the following pairs of permutations define a basis (b_1,b_2,b_3,b_4) for the highest-weight vector space [W_λ^*]^U:π_1 = ( [5, 14, 8, 2, 12, 0, 1, 15, 6, 11, 18, 13, 4, 3, 9, 17, 7, 10, 16, 19],[14, 5, 9, 0, 6, 13, 16, 15, 4, 11, 3, 10, 12, 8, 2, 17, 7, 19, 18, 1]), π_2 = ( [11, 18, 2, 12, 10, 5, 1, 17, 19, 9, 3, 4, 7, 6, 13, 0, 14, 16, 15, 8],[19, 1, 2, 7, 8, 3, 13, 6, 17, 10, 18, 12, 15, 4, 5, 11, 16, 0, 14, 9]), π_3 = ( [2, 16, 17, 1, 4, 0, 7, 5, 10, 14, 11, 6, 18, 15, 9, 12, 19, 13, 3, 8],[15, 9, 0, 11, 19, 16, 18, 7, 2, 13, 5, 6, 17, 14, 8, 1, 12, 4, 10, 3]), π_4 = ( [9, 12, 14, 2, 6, 19, 18, 3, 15, 0, 1, 5, 11, 17, 7, 16, 8, 4, 13, 10],[14, 4, 18, 3, 11, 16, 15, 12, 5, 0, 17, 2, 10, 9, 13, 19, 7, 6, 1, 8]). The polynomial f_20 = 11832 g_1 + 233074 g_2 + 34117 g_3 - 32732 g_4 is the only linear combination of the basis elements that is in I(σ_6), up to scaling. We verified that f_20 is indeed in I(σ_6) with the straightening algorithm. Evaluating f_20 on ⟨ 2,2,2 ⟩_ yieldsf_20(⟨ 2,2,2 ⟩_) = -730140480 ( + 1) ^2. Let d = 19 and let λ be the partition triple (5,5,5,4)^3. The Kronecker coefficient k(λ) equals 31. Let W^19 (⊗^3 (^4)^*) and denote by W_λ^* the isotypic component of type λ^*. 
The following pairs of permutations define a basis (b_1,…,b_31) for the highest-weight vector space [W_λ^*]^U: π_1 = ( [4, 8, 13, 3, 1, 12, 5, 11, 9, 15, 2, 7, 0, 17, 14, 6, 10, 18, 16], [2, 18, 5, 7, 9, 13, 0, 12, 1, 15, 10, 8, 4, 11, 16, 3, 17, 6, 14]), [0] π_2 = ( [12, 15, 11, 7, 2, 6, 8, 17, 9, 1, 16, 13, 4, 0, 3, 10, 18, 14, 5], [11, 9, 14, 0, 15, 13, 16, 3, 6, 8, 17, 7, 10, 5, 18, 2, 12, 1, 4]), [0] π_3 = ( [14, 1, 2, 15, 6, 3, 7, 13, 4, 18, 8, 9, 12, 10, 16, 5, 17, 0, 11], [7, 18, 2, 10, 4, 12, 0, 9, 15, 6, 5, 13, 1, 17, 14, 16, 8, 3, 11]), [0] π_4 = ( [4, 1, 0, 12, 7, 13, 9, 16, 6, 8, 18, 15, 17, 11, 14, 2, 10, 3, 5], [5, 13, 17, 14, 3, 4, 6, 11, 8, 18, 1, 15, 2, 0, 9, 16, 7, 10, 12]), [0] π_5 = ( [11, 14, 5, 0, 15, 8, 2, 17, 1, 13, 4, 9, 16, 6, 7, 10, 18, 3, 12], [8, 18, 4, 14, 6, 16, 10, 2, 11, 9, 5, 0, 13, 12, 1, 7, 3, 17, 15]), [0] π_6 = ( [10, 5, 18, 8, 15, 2, 16, 1, 0, 13, 3, 4, 7, 14, 11, 6, 12, 17, 9], [0, 8, 12, 2, 3, 9, 11, 13, 5, 1, 14, 7, 4, 16, 17, 18, 15, 10, 6]), [0] π_7 = ( [12, 1, 11, 16, 13, 7, 2, 17, 10, 15, 3, 0, 5, 4, 14, 6, 9, 8, 18], [8, 1, 4, 2, 12, 14, 18, 15, 7, 9, 0, 11, 3, 10, 6, 17, 13, 5, 16]), [0] π_8 = ( [17, 18, 6, 11, 4, 2, 1, 9, 15, 16, 5, 8, 10, 0, 12, 13, 3, 14, 7], [14, 1, 18, 6, 10, 15, 3, 5, 11, 16, 12, 9, 13, 7, 0, 17, 8, 4, 2]), [0] π_9 = ( [8, 2, 10, 3, 6, 4, 11, 18, 13, 0, 5, 1, 15, 17, 12, 16, 14, 7, 9], [2, 5, 13, 16, 1, 10, 3, 14, 4, 17, 18, 12, 0, 11, 9, 6, 7, 8, 15]), [0] π_10 = ( [13, 17, 15, 1, 12, 0, 9, 10, 6, 18, 7, 16, 14, 5, 2, 4, 11, 8, 3], [6, 12, 11, 10, 2, 14, 13, 0, 9, 15, 16, 17, 5, 8, 3, 7, 1, 18, 4]), [0] π_11 = ( [14, 5, 4, 1, 16, 8, 3, 7, 10, 13, 18, 6, 2, 17, 11, 9, 15, 12, 0], [5, 9, 10, 1, 2, 4, 14, 18, 8, 11, 7, 6, 15, 17, 16, 3, 0, 13, 12]), [0] π_12 = ( [1, 5, 4, 13, 15, 2, 17, 16, 8, 10, 11, 6, 7, 3, 12, 14, 9, 0, 18], [9, 5, 7, 8, 6, 11, 18, 3, 10, 4, 14, 17, 13, 0, 12, 15, 16, 1, 2]), [0] π_13 = ( [16, 13, 4, 3, 5, 2, 1, 15, 18, 6, 12, 0, 14, 8, 17, 7, 10, 11, 9], [2, 7, 8, 18, 16, 4, 6, 14, 0, 15, 9, 5, 1, 12, 10, 13, 17, 11, 3]), [0] π_14 = ( [5, 12, 0, 9, 3, 7, 17, 2, 6, 14, 11, 8, 15, 4, 1, 10, 13, 18, 16], [5, 15, 18, 8, 17, 11, 9, 4, 13, 1, 16, 2, 0, 14, 7, 10, 12, 3, 6]), [0] π_15 = ( [12, 6, 9, 14, 18, 5, 17, 2, 1, 4, 3, 11, 0, 10, 15, 7, 16, 13, 8], [9, 1, 16, 18, 14, 5, 6, 0, 10, 13, 3, 7, 15, 4, 11, 17, 12, 2, 8]), [0] π_16 = ( [1, 18, 4, 8, 5, 3, 0, 16, 6, 10, 11, 2, 17, 7, 9, 12, 14, 13, 15], [8, 2, 15, 12, 18, 6, 0, 11, 13, 5, 9, 4, 16, 7, 10, 17, 14, 1, 3]), [0] π_17 = ( [18, 8, 16, 6, 5, 7, 2, 13, 0, 4, 12, 11, 14, 15, 3, 17, 1, 10, 9], [12, 9, 14, 2, 18, 5, 0, 13, 4, 16, 8, 7, 1, 10, 6, 3, 17, 11, 15]), [0] π_18 = ( [7, 5, 16, 15, 1, 0, 8, 11, 14, 17, 12, 6, 9, 3, 10, 18, 13, 4, 2], [8, 9, 0, 4, 2, 3, 5, 13, 18, 12, 6, 1, 16, 11, 17, 10, 14, 7, 15]), [0] π_19 = ( [2, 17, 0, 14, 15, 8, 1, 9, 12, 5, 10, 3, 7, 11, 4, 16, 6, 13, 18], [13, 3, 0, 15, 7, 17, 18, 10, 6, 16, 1, 8, 9, 14, 12, 4, 5, 2, 11]), [0] π_20 = ( [0, 16, 9, 3, 15, 1, 4, 14, 7, 2, 18, 10, 12, 11, 17, 8, 6, 5, 13], [3, 2, 13, 11, 8, 1, 5, 4, 0, 16, 7, 17, 6, 12, 14, 9, 18, 15, 10]), [0] π_21 = ( [17, 3, 5, 14, 0, 16, 2, 8, 1, 11, 7, 18, 12, 6, 9, 15, 4, 13, 10], [7, 2, 17, 8, 0, 13, 6, 1, 4, 5, 18, 9, 15, 10, 16, 11, 3, 14, 12]),[0]π_22 = ( [5, 4, 1, 14, 16, 3, 9, 17, 12, 8, 2, 6, 11, 7, 18, 15, 13, 0, 10], [6, 14, 8, 7, 9, 18, 3, 12, 15, 2, 0, 1, 13, 5, 10, 16, 4, 11, 17]), [0] π_23 = ( [17, 4, 10, 13, 14, 1, 6, 8, 5, 15, 9, 2, 0, 11, 18, 7, 3, 12, 16], [6, 3, 11, 12, 15, 17, 10, 2, 8, 5, 1, 0, 14, 7, 9, 18, 13, 4, 16]), [0] π_24 
= ( [3, 9, 0, 15, 14, 7, 1, 16, 2, 8, 11, 4, 17, 12, 10, 6, 18, 13, 5], [10, 11, 3, 2, 1, 9, 14, 13, 18, 16, 0, 4, 15, 8, 5, 12, 6, 7, 17]), [0] π_25 = ( [12, 2, 8, 6, 16, 1, 15, 9, 11, 14, 10, 3, 5, 17, 0, 13, 18, 4, 7], [8, 2, 14, 1, 6, 17, 16, 3, 7, 9, 11, 12, 18, 0, 5, 13, 15, 10, 4]), [0] π_26 = ( [2, 16, 14, 6, 9, 0, 11, 12, 3, 15, 1, 18, 17, 7, 4, 8, 13, 5, 10], [10, 15, 13, 12, 17, 0, 16, 7, 4, 11, 1, 2, 6, 14, 8, 5, 9, 3, 18]), [0] π_27 = ( [10, 7, 6, 0, 12, 11, 16, 13, 1, 3, 17, 14, 8, 18, 4, 2, 9, 5, 15], [3, 17, 11, 12, 6, 5, 2, 13, 18, 14, 9, 1, 7, 16, 4, 8, 10, 15, 0]), [0] π_28 = ( [16, 6, 8, 4, 7, 5, 9, 1, 0, 2, 14, 13, 17, 10, 18, 15, 11, 3, 12], [6, 11, 1, 12, 2, 8, 5, 9, 3, 16, 15, 18, 4, 7, 14, 0, 10, 17, 13]), [0] π_29 = ( [8, 13, 7, 0, 17, 4, 2, 15, 16, 1, 18, 3, 5, 11, 12, 10, 6, 14, 9], [4, 13, 1, 10, 18, 12, 2, 5, 17, 7, 6, 15, 8, 9, 0, 11, 16, 14, 3]), [0] π_30 = ( [1, 6, 12, 0, 3, 10, 9, 13, 17, 4, 7, 8, 18, 14, 2, 5, 15, 16, 11], [16, 6, 10, 11, 15, 8, 17, 13, 14, 4, 5, 1, 3, 12, 2, 7, 0, 18, 9]), [0] π_31 = ( [5, 10, 11, 8, 17, 16, 2, 15, 12, 14, 0, 18, 3, 1, 7, 9, 6, 4, 13], [10, 15, 4, 12, 18, 3, 16, 6, 0, 13, 11, 7, 1, 8, 9, 2, 14, 17, 5]). Let c_1=289082199568614200505625810989998081122378290025627334[0]c_2=41448548699164679707399349100915823812613974963005402[0]c_3=211649838021887426162677078824519293749517217920047823[0]c_4=-118150576713220917823141541211872001702845422153137763[0]c_5=-71972591371289085208000082313759547126396087856917092[0]c_6=-148042611712972282129069557835544665097810271759437007[0]c_7=-20671385701071233448917086723379921457752823704368686[0]c_8=-41700697565765737458921317121977791710351222967960389[0]c_9=89818454969459149830510070194701368406615458716738371[0]c_10=-33389561951163547125931836395846743479037338582546746[0]c_11=-55953034618025281839233784369005651793756337420914611[0]c_12=99436050816695444459576518293215696786461418941439932[0]c_13=-30608800079918651823012662681016076665421200200986429[0]c_14=62322369796163233078186315204176712499710334162812978[0]c_15=71531123200873494604907676681446086219352685074695096[0]c_16=11103950876950753893392891180499777390516447716768874[0]c_17=-18170416924354926777786745151805158474424942420073625[0]c_18=56636600557844043196391811853778001287738236566321291[0]c_19=-49475697236538461568207568070821224602714314684182556[0]c_20=-58897567946922439319826816178640661508235201647724834[0]c_21=-29789369352552042959878217935401203848547004115080562[0]c_22=42553086095082787553533988614363448520647296308373860[0]c_23=-10584947869810207513601472123471095674362492708851758[0]c_24=-155536179226293398590182659612811187764949236460651258[0]c_25=-15163630056597008306009257387099740416829146255166469[0]c_26=152468055855066906135282920200590542819196123610118125[0]c_27=-170101205621738870358375711649013594303036219144235962[0]c_28=-36619800006361115328892590783407206736313224654320560[0]c_29=63636824324804825079032794300460871506246849887804488[0]c_30=-114422655018015193150391631424350000645293977961135740[0]c_31=99270978701207213884119395668714341424298017907910144and define f_19 = c_1 g_1 +⋯ + c_31g_31. This is the only linear combination that is in I(σ_6), up to scaling. We verified that f_19 is in I(σ_6) by straightening. 
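As an aside on how a vanishing linear combination of this kind is found numerically (Step 3 above), the sketch below illustrates the two generic ingredients: sampling random 4×4×4 complex tensors of rank at most six as sums of six simple tensors, and extracting the null space of the evaluation matrix via a singular value decomposition. It is not the ancillary Python code of the paper, and the entries of `candidates` are hypothetical placeholder polynomials standing in for evaluations of the basis elements b_1,…,b_k constructed in Step 2.

```python
import numpy as np

def random_rank_r_tensor(n=4, r=6, rng=None):
    """A random complex n x n x n tensor of rank (at most) r: a sum of r simple tensors."""
    rng = np.random.default_rng() if rng is None else rng
    T = np.zeros((n, n, n), dtype=complex)
    for _ in range(r):
        u, v, w = (rng.standard_normal(n) + 1j * rng.standard_normal(n) for _ in range(3))
        T += np.einsum('i,j,k->ijk', u, v, w)
    return T

def kernel_of_evaluations(candidates, n=4, r=6, trials=None, seed=0):
    """Evaluate each candidate polynomial at random rank-r tensors and return rows
    spanning the numerical null space of the evaluation matrix: coefficient vectors
    of linear combinations that vanish on all sampled tensors."""
    rng = np.random.default_rng(seed)
    k = len(candidates)
    trials = 2 * k if trials is None else trials
    M = np.array([[f(random_rank_r_tensor(n, r, rng)) for f in candidates]
                  for _ in range(trials)])            # trials x k evaluation matrix
    _, s, Vh = np.linalg.svd(M)
    tol = max(M.shape) * np.finfo(float).eps * (s[0] if s.size else 1.0)
    return Vh[int(np.sum(s > tol)):].conj()

if __name__ == "__main__":
    # Hypothetical stand-ins for the basis polynomials b_i; the real candidates are the
    # symmetrized tensors phi_lambda pi P_d of Step 2.  For these placeholders the
    # kernel is expected to be empty.
    candidates = [lambda T: np.einsum('ijk,ijk->', T, T),
                  lambda T: np.einsum('iik->', T)]
    print(kernel_of_evaluations(candidates))
```

A candidate found this way is only a numerical suggestion; as the paper stresses, membership in I(σ_6) is then verified symbolically (here, by straightening).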
Evaluating f_19 at ⟨ 2,2,2 ⟩_q yields 69332245782016022615247261570208505413020193878724712262 q(3q + 2). We have thus found two highest-weight vectors f_19 ∈ [I(σ_6)_(5,5,5,4)^3 *]^U ⊆^19 (⊗^3 (^4)^*) and f_20 ∈ [I(σ_6)_(5,5,5,5)^3 *]^U ⊆^20 (⊗^3 (^4)^*) such that f_19(⟨2,2,2⟩_q) = α q(3q+2) and f_20(⟨2,2,2⟩_q) = β q^2(q+1), where α and β are nonzero constants. The only simultaneous root of these polynomials occurs at q=0. This means that for any nonzero q, the point ⟨ 2,2,2 ⟩_q is not contained in σ_6. From <ref> we conclude that the border rank of any tensor with the same support as ⟨ 2,2,2 ⟩ is at least seven, which proves the theorem. The lower bound (⟨ 2,2,2⟩) ≥ 7 in <cit.> was also obtained by showing that the highest-weight vector space [I(σ_6)_(5,5,5,5)^3 *]^U is nonzero, and the evaluation of a nonzero element v∈ [I(σ_6)_(5,5,5,5)^3 *]^U at ⟨ 2,2,2⟩ is nonzero. Acknowledgements. The authors are grateful to Christian Ikenmeyer for helpful discussions. MC acknowledges financial support from the European Research Council (ERC Grant Agreement no. 337603), the Danish Council for Independent Research (Sapere Aude), and VILLUM FONDEN via the QMATH Centre of Excellence (Grant no. 10059). JZ is supported by NWO through the research programme 617.023.116. The computations in this work were carried out on the Dutch national e-infrastructure with the support of SURF Cooperative. Matthias Christandl, QMATH, Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark. Email: [email protected]. Markus Bläser, Computer Science, Saarland University, Saarland Informatics Campus E1.3, 66123 Saarbrücken, Germany. Email: [email protected]. Jeroen Zuiddam, QuSoft, CWI Amsterdam and University of Amsterdam, Science Park 123, 1098 XG Amsterdam, Netherlands. Email: [email protected]. | http://arxiv.org/abs/1705.09652v1 | {
"authors": [
"Markus Bläser",
"Matthias Christandl",
"Jeroen Zuiddam"
],
"categories": [
"cs.CC",
"math.RT",
"quant-ph",
"68Q17, 15A69",
"G.1.3"
],
"primary_category": "cs.CC",
"published": "20170526172203",
"title": "The border support rank of two-by-two matrix multiplication is seven"
} |
1]Linh Nghiem1,2]Cornelis J. Potgieter Nghiem & Potgieter (2017)[1]Department of Statistical Science, Southern Methodist University, Texas, USA[2]Department of Statistics, University of JohannesburgSouth AfricaLinh Nghiem. [email protected] Hall 136B, 3225 Daniel Avenue, Dallas, TX 75206[Abstract]It is important to properly correct for measurement error when estimating density functions associated with biomedical variables. These estimators that adjust for measurement error are broadly referred to as density deconvolution estimators. While most methods in the literature assume the distribution of the measurement error to be fully known, a recently proposed method based on the empirical phase function (EPF) can deal with the situation when the measurement error distribution is unknown. The EPF density estimator has only been considered in the context of additive and homoscedastic measurement error; however, the measurement error of many biomedical variables is heteroscedastic in nature. In this paper, we developed a phase function approach for density deconvolution when the measurement error has unknown distribution and is heteroscedastic. A weighted empirical phase function (WEPF) is proposed where the weights are used to adjust for heteroscedasticity of measurement error. The asymptotic properties of the WEPF estimator are evaluated. Simulation results show that the weighting can result in large decreases in mean integrated squared error (MISE) when estimating the phase function. The estimation of the weights from replicate observations is also discussed. Finally, the construction of a deconvolution density estimator using the WEPF is compared to an existing deconvolution estimator that adjusts for heteroscedasticity, but assumes the measurement error distribution to be fully known. The WEPF estimator proves to be competitive, especially when considering that it relies on minimal assumption of the distribution of measurement error.Density Estimation in the Presence of Heteroscedastic Measurement Error of Unknown Type using Phase Function Deconvolution [ December 30, 2023 ========================================================================================================================== § INTRODUCTIONMany biomedical variables cannot be measured with great accuracy, leading to observations contaminated by measurement error. Examples of such variables have been suggested in numerous epidemiological and clinical settings, including the measurement of blood pressure, radiation exposure, and dietary patterns. <cit.> The sources of measurement error range from the instruments used to measure the variables of interest to the inadequacy of short-term measurements for long-term variables; as such, the observed measurements have larger variance than the true underlying quantity of interest. The presence of measurement error can have a substantive impact on statistical inference. For example, not correcting for measurement error can result in biased parameter estimates, and loss of power in detecting relationships among variables. <cit.> Appropriate corrections need to be implemented when performing any data analysis with measurement error present to avoid making erroneous inferences.A common problem of interest is to estimate the density of a variable when it is measured with additive measurement error. <cit.> This problem is often referred to as density deconvolution. 
When the noise-to-signal ratio is large, implementing a correction becomes crucial as the density of the observed data can deviate substantially from the true density of interest. Let f_X(x) denote the density function of a random variable X, and assume that it is of interest to estimate f_X(x) when X is not directly observable. Specifically, we are only able to observe contaminated versions of X, say W=X+U, where U represents measurement error. Thus, we are interested in estimating the density function of X based on an observed sample W_1, W_2, ..., W_n with W_i = X_i + U_i,i = 1, …,n. Here, the X_i are an iid sample from a distribution with density f_X, with U_i representing the measurement error of the i^th observation. The U_i are assumed both mutually independent and independent of the X_i. The nonparametric density deconvolution problem when first considered assumed that the distribution of the measurement error was fully known. <cit.>^,<cit.> The development that followed in the literature mostly considered the case of known measurement error, and generally treated the measurement error as homoscedastic. <cit.>^,<cit.>^, <cit.>^, <cit.>^, <cit.> The case of heteroscedastic measurement error was considered by Fan <cit.> and Delaigle & Meister. <cit.> The problem of the measurement error having an unknown distribution was considered by Diggle & Hall <cit.> and Neumann &Hössjer, <cit.> who assume that samples of error data are available, and by Delaigle et al.<cit.> who use replicate data to estimate the entire characteristic function of the measurement error. McIntyre & Stefanski <cit.> considered the heteroscedastic case with replicate observations. Their work assumed the measurement errors all follow a normal distribution with unknown variances only. The phase function deconvolution approach developed by Delaigle & Hall <cit.> is groundbreaking in that they estimate the density function f_X with both the measurement error distribution and variance unknown, and without the need for replicate data. Their method is based on minimal assumptions: The measurement error terms U_i are only assumed to be mutually independent andindependent of the X_i and to have a strictly positive characteristic function. However, Delaigle & Hall <cit.> only considered the case where the U_i are homoscedastic, while heteroscedastic data is a reality often encountered in practice. In fact, the variance of measurement error often increases with the true underlying value.<cit.>In this paper, we develop the phase function approach for density deconvolution when the measurement error has unknown distribution and is heteroscedastic. The model considered in this paper assumes the observed data are of the form W_i = X_i + σ_i ε_i where the X_i are an iid sample from f_X, the measurement error terms ε_i are independent and each ε_i has a positive characteristic function and satisfies E(ε_i)=0 and Var(ε_i)=1. The σ_i are non-negative constants and represent measurement error heteroscedasticity. Specifically, Var(W_i)=σ_X^2 + σ_i^2 where σ_X^2 denotes the variance of X. Additionally, it is assumed that the random variable X is asymmetric. This assumption is fundamental to the identifiability of the phase function of X, which forms the basis of estimation. 
A more detailed discussion of the model assumptions is presented in Section <ref>, see also Delaigle & Hall <cit.>.Note that the heteroscedasticity of the measurement error will require either that the constants σ_i be known, or that there are replicate data so that the σ_i can be estimated from the data. To illustrate the use of this estimator in a biomedical setting, a real-data example is included in Section 4. This example uses data from the Framingham Heart Study, which collected several variables related to coronary heart disease for study subset of n=1615 patients. For each patient, two measurements of long-term systolic blood pressure (SBP) were collected at each of two examinations. The distribution of true long-term SBP is estimated using the empirical phase function (EPF) and weighted empirical phase function (WEPF) density deconvolution estimator. These estimators are compared to a naive density estimator that makes no correction for measurement error, as well as the estimator of Delaigle & Meister <cit.> assuming the measurement error follows a Laplace distribution.The remainder of the paper is organized as follows. Section 2 discusses the model assumptions,considers estimation of the phase function and introduces a weighted empirical phase function (WEPF) which adjusts for heteroscedasticity in the data. A small simulation study compares two different weighting schemes. Section 3 shows how the WEPF can be inverted to estimate the density function f_X and presents an approximation of the asymptotic mean integrated squared error for selecting the bandwidth. The WEPF deconvolution estimator is compared to that of Delaigle & Meister, <cit.> who treat the heteroscedastic case with known measurement error distribution. Section 4 illustrates the method using data from the Framingham Heart Study and Section 5 contains some concluding remarks.§ PHASE FUNCTION ESTIMATION§.§ Model and Main Assumptions The model considered in the paper assumes the observed data are of the form W_i = X_i + σ_i ε_i where the X_i are an an iid sample from f_X, the measurement error terms ε_i are mutually independent and independent from X_i, and that each ε_i has a strictly positive characteristic function. Note that the model does not require that the ε_i have the same type of distribution, but only that each ε_i has a characteristic function satisfying the above requirement. The assumption of a strictly positive characteristic function is equivalent to ε_i being symmetric about zero with support on the entire real line. Many commonly used continuous distributions, including the Gaussian, Laplace, and Student's t distributions, satisfy this assumption. In general, the only symmetric distributions excluded are those defined on bounded intervals (such as the uniform). For convenience, it is assumed that Var[ε_i]=1, so that the constant σ_i^2 represents the heteroscedastic measurement error variance of the i^th observation. Specifically, Var(W_i)=σ_X^2 + σ_i^2 where σ_X^2 denotes the variance of X. The density function f_X is assumed to be asymmetric. More specifically, it is assumed that the random variable X does not have a symmetric component. This means that there is no symmetric random variable S for which X can be decomposed as X=X_0+S for arbitrary random variable X_0. This asymmetry is crucial to the ability to estimate the true density function of X. 
As discussed in Delaigle & Hall,<cit.> if one were to assumed that the density function f_X were sampled from a random universe of distributions, then the assumption of indecomposability is satisfied with probability 1. Practically, the indecomposability assumption is not unreasonable as data are rarely observed from a perfectly symmetric distribution. There is a special type of distribution for X that cannot be recovered by this method, namely when X is itself a convolution (sum) of a skew distribution and a symmetric distribution. The result from Delaigle & Hall indicates that this need not be a concern for the general practitioner implementing this method.While the exposition in this paper assumes that the measurement error components are independent, the methodology could be generalized to a setting where Cov[ε_i,ε_j]=σ_ij≠ 0 for some pairs i≠ j. This would not affect the proposed estimator directly, but would have consequences for how the bandwidth is chosen. The latter question is beyond the scope of the present paper. §.§ The Weighted Empirical Phase Function (WEPF) The phase function of a random variable X, denoted ρ_X(t), is defined as the characteristic function of X standardized by its norm,ρ_X (t) = ϕ_X(t)|ϕ_X(t)|with ϕ_Z(t) the characteristic function of a random variable Z and | z | = (z z̅)^1/2 denoting the norm function with z̅ the complex conjugate of z. Let W=X+σε with ε having characteristic function ϕ_ε(t) ≥ 0 for all t. It is easy to verify that the random variables W and X have the same phase function, ρ_W(t) = ρ_X(t). Delaigle & Hall <cit.> use this relation and an empirical estimate of ϕ_W(t) in equation (<ref>) to estimate the phase function, see their paper for details on implementation.In the case of heteroscedastic errors, we propose to use a weighted empirical phase function (WEPF) to adjust for heteroscedasticity. Define functionϕ̂_W(t|q) = ∑_j=1^n q_j exp (itW_j)where q={q_1,…,q_n} denotes a set of non-negative constants that sum to 1. Function (<ref>) is a weighted empirical characteristic function and noting random variable W_i = X_i + σ_i ε_i has characteristic function ϕ_W_i(t) = ϕ_X(t) ϕ_ε_i(σ_i t), i = 1,…,n, it follows thatE[ϕ̂_W(t|q)] = ϕ_X(t) ∑_j=1^n q_j ϕ_ε_j (σ_j t).The WEPF is defined asρ̂_W(t|q) = ϕ̂_W(t|q)/|ϕ̂_W(t|q) | = ∑_j q_j exp (itW_j)/{∑_j∑_k q_j q_k exp [it(W_j-W_k)]}^1/2 .For q_eq={1/n,…,1/n}, ρ̂_W(t|q_eq) essentially reduces to the phase function proposed by Delaigle & Hall.<cit.> Use of weights choice q_eq will be referred to as the empirical phase function (EPF) estimator. Other choices of weights can serve as an adjustment for heteroscedasticity – observations with large measurement error variance can be down-weighted to have smaller contribution to the phase function estimate.The asymptotic properties of the WEPF are given in the Theorem <ref> below. Assume that max_j q_j = 𝒪(n^-1) and that each measurement error component ε_j has a strictly positive characteristic function. It then follows that the WEPF as defined in (<ref>) is a consistent estimator of the phase function of W, and hence of the phase function of X. Also, the asymptotic variance of the WEPF is given by AVar[ρ̂_W(t|q)-ρ_W(t)] = 1/2|ϕ _X( t) | ^2ψ _ε(t|q) ∑_k=1^nq_k^2[ 1-|ϕ _X( t) | ^2ϕ _ε _k^2( σ _kt) +ϕ _ε _k^2(σ _kt)]-Re{ϕ _X^2( t) ϕ _X( -2t) }/2|ϕ _X( t) | ^4ψ _ε(t|q) ∑_k=1^nq_k^2ϕ _ε _k(2σ _kt) whereψ _ε(t|q) = [∑_kq_kϕ _ε _k(σ _kt)]^2. The proof of Theorem <ref> can be found in the Supplementary Material. 
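To make the estimator concrete, the following sketch (not part of the original software; the t-grid and the simulated example data are assumptions chosen to echo the simulation settings below) evaluates the weighted empirical characteristic function ϕ̂_W(t|q) and the WEPF ρ̂_W(t|q) on a grid of t-values, and reports the cut-off t^*, the smallest t>0 with |ϕ̂_W(t|q)| < n^-1/4, which is the truncation rule used in the simulation study.

```python
import numpy as np

def wepf(w, t_grid, q=None):
    """Weighted empirical characteristic function and phase function.

    w      : observed contaminated data W_1,...,W_n
    t_grid : points at which to evaluate
    q      : nonnegative weights (default: equal weights, i.e. the EPF)
    """
    w = np.asarray(w, dtype=float)
    n = w.size
    q = np.full(n, 1.0 / n) if q is None else np.asarray(q, dtype=float) / np.sum(q)
    phi = np.exp(1j * np.outer(t_grid, w)) @ q     # phi_hat_W(t|q) = sum_j q_j exp(i t W_j)
    rho = phi / np.abs(phi)                        # WEPF: phi / |phi|
    below = np.abs(phi) < n ** (-0.25)             # truncation rule |phi| < n^(-1/4)
    t_star = t_grid[below][0] if below.any() else t_grid[-1]
    return phi, rho, t_star

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 500
    x = rng.chisquare(3, n) / np.sqrt(6.0)         # skewed "true" variable, unit variance
    sigma = np.where(np.arange(n) < n // 2, 0.025, 0.975) ** 0.5   # heteroscedastic sd's
    w_obs = x + sigma * rng.standard_normal(n)     # W_i = X_i + sigma_i * eps_i
    q_opt = 1.0 / (1.0 + sigma**2)                 # mean-optimal weights, up to scaling
    t_grid = np.linspace(0.0, 4.0, 201)
    _, rho_eq, t1 = wepf(w_obs, t_grid)            # EPF (equal weights)
    _, rho_opt, t2 = wepf(w_obs, t_grid, q=q_opt)  # WEPF with mean-optimal weights
    print("t* (EPF) =", t1, "  t* (WEPF_opt) =", t2)
```

The same routine serves for both estimators; only the weight vector q changes.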
Equation (<ref>) shows that the asymptotic variance of ρ̂_W(t|q) depends on ϕ_ε_j(t) j=1,…,n, the characteristic functions of the measurement error components. While one would ideally like to choose weights q that minimize said asymptotic variance, this is unrealistic as the method proposed in this paper makes no parametric assumptions about the measurement error, meaning the ϕ_ε_j are unknown. A much simpler weighting scheme is proposed here, relying only on knowledge of the measurement error variances.Note that E(W_i) = E(X) = μ. As such, for weights q, the estimator μ̂_q = ∑_j=1^n q_j W_j is an unbiased estimator of μ. The weightsq_i^* = σ^-2_W_i[∑_j=1^nσ^-2_W_j]^-1 = (σ_X^2+σ_i^2)^-1[∑_j=1^n(σ_X^2+σ_j^2)^-1]^-1result in a minimum variance estimator of μ. This does have a connection to the phase function, as ρ'_X(0) = μ; see the supplemental material of Delaigle & Hall <cit.> for the connection between the phase function and the odd moments of the underlying distribution. Let q_opt={q_1^*,…,q_n^*} denote the vector of mean-optimal weights and let WEPF_opt denote the weighted empirical phase function estimator calculated using the mean-optimal weights. Both the performance of the EPF and the WEPF_opt will be considered for estimating the phase function and density function. §.§ Estimating the Variance Components In practice, it is often the case that neither the measurement error variances σ_1^2,…,σ_n^2 nor σ_X^2 is known. These quantities can be easily estimated from replicate observations. This section describes how to estimate the variance components for a heteroscedastic measurement error variance model. In a setting where the underlying measurement error variance structure is unknown, the procedure outlined in this section can be used to estimate the mean-optimal weights in (<ref>) used for estimating the WEPF.Consider replicate observations, W_ij=X_i+τ_ie _ij, j=1,…,n_i, i=1,…,n with min_i n_i ≥ 2, E(e_ij)=0, Var(e_ij)=1, and τ_i^2 representing heteroscedastic measurement error variance at the observation level. Note that W_ij-W_ij^'=τ_i( e _ij-e_ij') and thus E[ ( W_ij-W_ij^') ^2] =2τ_i^2 for j≠ j'. Define grand meanW̅ = 1n∑_i=1^n[ 1n_i∑_j=1^n_i W_ij] = 1n∑_i=1^nX_i+1n∑_i=1^n[τ_in_i∑_j=1^n_ie_ij]and note that E(W̅) = μ and(W̅) = σ_X^2n + 1n^2∑_i=1^nτ_i^2n_i.It can also be shown thatE[ ( W_ij-W̅) ^2]= σ _X^2+τ_i^2 + 𝒪(n^-1).Subsequently, the variance components can be estimated byτ̂_i^2=1/n_i( n_i-1) ∑_j=1^n_i-1∑_j^ '=j+1^n_i( W_ij-W_ij^') ^2,i=1,…,n,and, motivated by (<ref>),σ̂_X^2 = 1N∑_i=1^n∑_j=1^n_i (W_ij-W̅)^2 - 1n∑_i=1^nτ̂_i^2with N=∑_i n_i. The analysis then proceeds by defining individual-level averages W_i=(n_i^-1)∑_j=1^n_iW_ij and noting that W_i = X_i + σ_i ε_i where σ_i = τ_i/√(n_i) and ε_i has a distribution with a positive characteristic function whenever the same is true for all elements of the set {e_i1,…,e_in_i}. The estimate of σ_i is given by σ̂_i = τ̂_i / √(n_i). §.§ Simulation StudyA small simulation study was conducted to compare the performance of the EPF and WEPF_opt estimators. The true X_i data were sampled from the following three distributions: (1) X ∼χ^2_3/√(6) (Scaled χ^2_3), (2) X ∼(0.5 N(1,1) + 0.5 χ^2 (5) )/√(9.5) (Mixture 1), and (3) X ∼(0.5 N(5,0.6^2) + 0.5 N(2.5,1))/√(2.2425) (Mixture 2). The first two distributions are right-skewed while the third distribution is bimodal. All three distributions were scaled to have unit variance. The phase functions of these distributions are shown in Figure 1 of the Supplemental Material. 
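Before turning to the simulation results, the following short sketch (again our own code, not the authors') shows how the replicate-based variance components and mean-optimal weights of the preceding subsection can be computed; `W_reps` is assumed to be a list holding the n_i >= 2 replicates of each observation.

```python
import numpy as np

def variance_components(W_reps):
    # W_reps: list of 1-d arrays, W_reps[i] holding the n_i >= 2 replicates of observation i.
    W_reps = [np.asarray(w, dtype=float) for w in W_reps]
    n = len(W_reps)
    n_i = np.array([w.size for w in W_reps])
    N = n_i.sum()
    W_bar = np.mean([w.mean() for w in W_reps])        # grand mean of the subject means

    # tau_i^2 from the averaged squared pairwise differences within each observation
    tau2 = np.empty(n)
    for i, w in enumerate(W_reps):
        d = w[:, None] - w[None, :]
        tau2[i] = np.sum(np.triu(d, k=1) ** 2) / (n_i[i] * (n_i[i] - 1))

    # sigma_X^2 estimate (could be truncated at zero in small samples)
    sigma2_X = sum(((w - W_bar) ** 2).sum() for w in W_reps) / N - tau2.mean()
    sigma2_i = tau2 / n_i                              # error variance of the averages W_i
    inv = 1.0 / (sigma2_X + sigma2_i)
    q_opt = inv / inv.sum()                            # mean-optimal weights
    W_avg = np.array([w.mean() for w in W_reps])       # individual-level averages
    return W_avg, q_opt, sigma2_X, sigma2_i
```

The returned averages and weights are exactly the inputs required by the WEPF_opt used in the simulations described next.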
The measurement error terms ε_ij=τ_i e_ij were sampled from a normal distribution with mean 0 and variance structure τ_i^2 = Jσ_i^2 with σ_i^2 = 0.025 σ^2_X, i=1,…, n/2 and σ_i^2 = 0.975 σ^2_X, i= n/2+1,…, n. For each candidate distribution of X, a total of N=1000 samples W_ij=X_i + τ_ie_ij, i=1,…,n and j=1,…,J were generated for sample sizesn = 250, 500,and1000. Scenarios with no replicates (J=1) and also with replicates (J=2 and 3) were considered in the simulation. Under the scenario with no replication, the measurement error variance was treated as known. In settings with J=2 and 3 replicates, the measurement error variances were estimated from the replicate data using the procedure outlined in Section <ref>. The choice of observation-level measurement error variance τ_i^2 = Jσ_i^2 results in the combined replicate values W_i = J^-1∑_j W_ij having measurement error variance σ_i^2. This was done to make the simulation results with and without replicates easily comparable. For each simulated dataset, the mean-optimal weight vector q_opt was calculated (or estimated in the case of replicate data) using equation (<ref>). The WEPF_opt estimator was then calculated using these weights. Additionally, the EPF estimator was calculated using equal weights for all observations. As the quality of the empirical characteristic function decreases with increasing t, the suggestion ofDelaigle & Hall <cit.> was followed and the estimated phase functions were only computed on the interval [-t^*, t^*], where t^* is the smallest t>0 such that |ϕ̂_W(t|q)| < n^-1/4. The EPF and WEPF are compared using (estimated) mean integrated squared error (MISE) ratios, MISE_eq/MISE_opt, where MISE_eq and MISE_opt denote the MISEs of the EPF and WEPF_opt estimators respectively. The results are summarized in Table <ref>.In Table <ref>, an MISE ratio greater than 1 indicates better performance of the WEPF_opt estimator compared to the EPF estimator. The table also reports estimated standard errors for the MISE ratios. The standard errors were estimated using the following jackknife procedure. For the j^th simulated sample, let (ISE_eq,j,ISE_opt,j) denote the integrated squared error for the EPF and the WEPF_opt respectively,j = 1,…,N. Let R_(-j) denote the MISE ratio calculated after deleting the j^th ISE pair. Then, the jackknife standard error for the MISE ratio is given bySE_jack = √(1N∑_j=1^N(R_(-j)-R̅)^2)where R̅ = N^-1∑_j=1^N R_(-j).Inspection of Table <ref> shows that the WEPF_opt performs better than the EPF for the measurement error configuration considered. When the measurement error variances are known, the gain from using WEPF_opt can be substantial. Specifically, the MISE of WEPF_opt is seen to between 6.5% and 30% lower than the MISE of the EPF for the distributions considered. When there are J=2 and J=3 replicates per observation, the WEPF_opt performs slightly better than the EPF for the scaled χ^2_3 distribution, while their performance is nearly identical for Mixtures 1 and 2. In this setting, the use of the suggested weighting scheme never results in poorer performance of the WEPF_opt estimator compared to the EPF estimator.Next, the effect of different underlying measurement error variance structures on the MISE ratio of the EPF and WEPF_opt was examined. The sample size was fixed at n=1000 and the three different measurement error variance structures considered are outlined in Table <ref>. The ratios MSE_eq / MSE_opt based on 1000 simulated datasets are reported in Table <ref>. 
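The MISE ratio and its jackknife standard error from the displayed formula can be computed directly from the per-sample ISE pairs; the sketch below (our own code) assumes the integrated squared errors of the two estimators have already been stored in two arrays of length N.

```python
import numpy as np

def mise_ratio_with_jackknife(ise_eq, ise_opt):
    # MISE ratio MISE_eq / MISE_opt and its jackknife standard error.
    ise_eq = np.asarray(ise_eq, dtype=float)
    ise_opt = np.asarray(ise_opt, dtype=float)
    N = ise_eq.size
    ratio = ise_eq.mean() / ise_opt.mean()
    # Leave-one-out ratios R_(-j), deleting the j-th ISE pair.
    R_loo = np.array([np.delete(ise_eq, j).mean() / np.delete(ise_opt, j).mean()
                      for j in range(N)])
    se_jack = np.sqrt(np.mean((R_loo - R_loo.mean()) ** 2))
    return ratio, se_jack
```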
Again, jackknife estimates of standard error are also reported.Inspection of Table <ref> illustrates the effect of different heterogeneity patterns of measurement error variances on the performance of the EPF and WEPF_opt estimators. When the measurement error variances are known (J=1), the WEPF_opt has a lower MISE than the EPF in all the considered configurations, with the heterogeneity pattern only affecting the size of the improvement. In the case of J=2 replicates per observation, there were four instances in Case 2 and Case 3 of measurement error variances where the EPF performed better than the WEPF_opt. This occurrence was likely because the estimated weights for WEPF_opt were calculated from estimated variance components based on only a small number of replicates. When the number of replicates increases from J=2 to J=3, measurement error variances are estimated with higher accuracy, so the MISE ratio increase in general. Note that, although using WEPF_opt can sometimes lead to a worse performance, the loss tends to be small (at most 8% as seen in the Case 2 measurement error variance setting when X follows a Scaled-χ_3^2 with 2 replicates); however, using WEPF_opt can still result in large gains (as much as 15% in the Case 1 measurement error variance setting when X follows a Scaled-χ_3^2 with 3 replicates).In general, the simulation study shows that weighting to adjust for heteroscedasticity in estimating the phase function never results in a much poorer estimator, but sometimes leads to a large gain in efficiency. The loss/gain depends on how accurate measurement error variances were estimated as evidenced by the improvement in going from J=2 to J=3 replicates. In the next section, this is explored in the context of density deconvolution. § DENSITY ESTIMATION§.§ Constructing an Estimator of f_X The outline here is a brief overview of how the method ofDelaigle & Hall <cit.> can be implemented using the WEPF to estimate the density function f_X. Let ϕ̂_W(t|q) and ρ̂_W(t|q) denote the weighted empirical characteristic function and corresponding WEPF respectively. Let w(t) denote a non-negative weight function. Also let x_j, j=1,…,m denote a set of arbitrary values with respective probability masses p_j.Delaigle & Hall suggest a two-stage estimation method for f_X. First, one finds a characteristic function of the form ψ(t|𝐱,𝐩)=∑_j p_j exp (itx_j) that has phase function close to the WEPF. Since this characteristic function corresponds to a discrete distribution with probability mass p_j at the point x_j for j=1,…,m, the second stage of estimation involves smoothing ψ(t|𝐱,𝐩) before applying an inverse Fourier transformation to obtain the estimated density f̂_X(x).Delaigle & Hall suggest sampling the x_j uniformly on the interval [min W_i, max W_i] with m = 5√(n). The goal is then to find the set {p_j}_j=1^m that minimizes T(p) = ∫_-∞^∞|ρ̂_W(t|q) - ψ(t|𝐱,𝐩)|ψ(t|𝐱,𝐩)||^2 w(t) dtunder the constraint of also minimizing the variance of the corresponding discrete distribution,v(𝐩) = ∑_j=1^m p_j x_j^2 - (∑_j=1^m p_j x_j)^2. This non-convex optimization problem of finding the solution {p̂_j}_j=1^m can be solved using MATLAB. Details are given in Delaigle & Hall. <cit.> The present implementation differs only in that the estimated phase function is weighted to adjust for heteroscedasticity. 
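As a rough illustration of this fitting step, one simple numerical strategy is to minimise a penalised objective T(p) + λ v(p) over the probability simplex using a generic constrained optimiser. The sketch below reflects our own simplifications (the penalty parameter λ and the SLSQP solver are our choices) and is not the exact routine of Delaigle & Hall.

```python
import numpy as np
from scipy.optimize import minimize

def fit_masses(t_grid, rho_hat, x, w, lam=1.0):
    # Probability masses p on the support x whose phase function matches rho_hat.
    # t_grid, rho_hat: grid of t values and the (W)EPF evaluated on that grid
    # x: candidate support points; w: non-negative integration weights on t_grid
    # lam: penalty weight on the variance v(p) of the discrete distribution
    E = np.exp(1j * np.outer(t_grid, x))               # E[j, l] = exp(i t_j x_l)

    def objective(p):
        psi = E @ p
        phase = psi / np.abs(psi)
        T = np.sum(np.abs(rho_hat - phase) ** 2 * w)   # discretised version of T(p)
        v = np.sum(p * x ** 2) - np.sum(p * x) ** 2    # variance of the discrete law
        return T + lam * v

    m = x.size
    cons = ({'type': 'eq', 'fun': lambda p: p.sum() - 1.0},)
    res = minimize(objective, np.full(m, 1.0 / m), method='SLSQP',
                   bounds=[(0.0, 1.0)] * m, constraints=cons)
    return res.x   # the problem is non-convex, so multiple starting values help
```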
Beyond using a different estimator of the phase function, the optimization problem remains unchanged.Now, let ψ(t|𝐱,p̂) = ∑_j p̂_j exp(itx_j) be the characteristic function with the p̂_js the probability masses estimated by minimizing (<ref>). The deconvolution density estimator based on the WEPF is thenf̂_X( x) =1/2π∫exp( -itx)ϕ̃( t) K^ft( ht) dt where ϕ̃(t) = ψ(t|𝐱,p̂),fort ≤ t^* r(t),fort > t^*with t^* being the smallest t>0 such that |ϕ̂_W(t|q)| < n^-1/4. Here, K^ft(t) denotes the Fourier transform of a deconvolution kernel function and r(t) denotes a ridging function. The ridging function ensures that the estimator is well-behaved outside the range [-t^*,t^*]. The proposed choice of ridging function isr(t) = ϕ̂_W(t|q)/ϕ̂_L(t), with ϕ̂_L(t) the characteristic function of a Laplace distribution with variance equal to an estimator of σ_L^2=∑_j q_j σ_j^2, the weighted sum of the measurement error variances. In application here, the common choice K^ft(t) = (1-t^2)^3 for |t|≤ 1 is used. The weight function is chosen to be w(t)=ω(t)|ϕ̂_W(t|𝐪)ψ(t|𝐱,𝐩)|^2 with ω(t)=0.75(1-t^2) for |t|≤ 1 (the Epanechnikov kernel) rescaled to the interval [-t^*,t^*]. This choice of weight function avoids numerical difficulties that can arise when dividing by very small numbers. §.§ Bandwidth SelectionThe proposed phase function deconvolution estimator that accounts for heteroscedasticity in (<ref>) is an approximation of the estimatorf̃( x) =1/2π∫exp(-itx)K^ft( ht ) ϕ̂_W(t|q) /∑_j q_jϕ _ε _j( σ _jt) dtwith ϕ̂_W(t|q) defined in (<ref>). Note that (<ref>) is an estimator that one could compute if the measurement error distribution were known, but that it is different from the heteroscedastic estimator proposed byDelaigle & Meister. <cit.> Taking expectation of the integrated squared error (ISE) of (<ref>), ISE =∫ [ f̃( x) -f_X( x) ] ^2dx, gives mean integrated squared error (MISE)MISE = 1/2π∫|ϕ _X( t) | ^2 [ K^ft( ht) -1] ^2dt+1/2π∫[ K^ft( ht) ]^2 ∑_j q_j^2/ [∑_j q_jϕ _ε _j(σ _jt)] ^2dt-1/2π∫|ϕ _X( t) | ^2[ K^ft( ht) ]^2 ∑_jq_j^2ϕ _ε _j^2( σ _jt) /[ ∑_j q_jϕ _ε _j( σ _jt) ] ^2dt.An argument similar to that ofDelaigle & Meister <cit.> when evaluating the asymptotic MISE (AMISE) of their heteroscedastic estimator, one can show that the last term of (<ref>) is negligible, givingAMISE=1/2π∫|ϕ _X( t) | ^2[ K^ft(ht) -1] ^2dt+1/ 2π∫[K^ft(ht)]^2 ∑_jq_j^2 /[ ∑_j q_jϕ _ε _j( σ _jt) ] ^2dtIn the present application, both ϕ _X( t) and ϕ _ε _j( t), j=1,…,n are unknown. However, note that |ϕ _X( t) | ^2=ϕ _X( t) ϕ _X( -t) is the characteristic function of the random variable X-X^', where X, X^' are iid f_X. Regardless of the shape of f_X, the random variable X-X^' is symmetric about 0 and has variance 2σ _X^2. This suggests replacing |ϕ _X( t) | ^2 with the characteristic function of a symmetric distribution with mean 0 and variance 2σ̂_X^2. Appropriate choices might be the normal distribution, i.e. substituting exp( - σ̂_X^2t^2) for |ϕ _X( t) | ^2, or the Laplace distribution, i.e. substituting ( 1+ σ̂_X^2t^2) ^-1. Additionally, one can use appropriate approximations for ϕ _ε _j( σ _jt). For example, the Laplace choice is a reasonable one. 
<cit.><cit.> One can therefore substitute (1+0.5σ̂_j^2t^2)^-1 for ϕ_ε_j(σ_j t). This Normal-Laplace substitution gives the approximate AMISE function Â(h) = 1/2π∫exp(-σ̂_X^2t^2)[K^ft(ht)-1]^2 dt + 1/2π∫[K^ft(ht)]^2 ∑_j q_j^2/[∑_j q_j(1+0.5σ̂_j^2t^2)^-1]^2 dt and the value of h that minimizes the above function can then be used to evaluate the density deconvolution estimator in equation (<ref>). §.§ Simulation Study Simulation studies were conducted to evaluate the performance of the equally weighted and mean-optimally weighted phase function deconvolution density estimators. These correspond to the use of the EPF and WEPF_opt as the phase function estimate before performing the deconvolution operation as described in Section <ref>. Additionally, as it is already established in the literature, the Delaigle & Meister estimator <cit.> for heteroscedastic data was also calculated. The three candidate distributions for X as described in Section <ref> were considered. Both normal and Laplace distributions were considered for the measurement error, each in conjunction with the three measurement error variance models outlined in Table <ref>. In all cases the sample size was taken to be n=500. Due to the computational cost of evaluating the phase function deconvolution estimators, a total of 500 samples were generated for each combination of X-distribution and variance model. For the phase function estimators, the approximate AMISE bandwidth minimizing (<ref>) was computed. The bandwidth of the Delaigle-Meister estimator was a two-stage plug-in bandwidth as suggested in their paper. For all three deconvolution estimators, the integrated squared error (ISE) was computed for each sample. Table <ref> presents the simulation results corresponding to the setting where the measurement error variances are assumed known, and Table <ref> presents the simulation results corresponding to the case with J=2 replicates per observation, where the variance components are estimated as outlined in Section <ref>. The simulation with replicate observations contains results for the Delaigle-Meister estimator both using the estimated variances (D&M_VarE) and treating the variances as known (D&M_VarK). Note that the simulations with replicate observations use the individual-level average data W_i = (W_i1+W_i2)/2 to compute the deconvolution estimators and are therefore not directly comparable to the simulation without replication and with measurement error variances assumed known. Due to the presence of outliers in the ISE calculations, the median as well as the first and third quartiles of 10 × ISE are reported. Inspection of Table <ref> reveals that the Delaigle-Meister (D&M) estimator tends to have the smallest median ISE, although there are a few instances in which the phase function estimators outperform the D&M estimator, notably for Mixture 2 and Laplace measurement error. It is also clear that calculating the mean-optimal weights is very advantageous in this setting, with the mean-optimally weighted estimator having smaller median ISE than the equally weighted estimator in all but one instance. Overall, one can conclude that the WEPF estimator performs very well and compares favorably to the D&M estimator, the latter requiring knowledge of the measurement error distribution to be useful in practice. Inspection of the simulation results in Table <ref> is very insightful. Note that the measurement error variances here are estimated based on only J=2 replicates for each observation.
As such, one might not expect good performance. However, the two phase function estimators perform very favorably when compared to the D&M estimator with known measurement error variances. The mean-optimally weighted estimator generally performs better than the equally weighted estimator in terms of median ISE, although there are two exceptions. It is interesting that weights estimated based on only two replicates give such good performance. Also revealing is that the WEPF estimator performs significantly better than the D&M estimator with estimated variances, with the median ISE of the mean-optimally weighted estimator often reflecting more than a 50% reduction in median ISE when compared to the D&M counterpart. Figures <ref> and <ref> show plots of the density estimators corresponding to the first, second, and third quartiles (Q_1, Q_2, and Q_3) of ISE for each of the methods EPF, WEPF_opt, and D&M, for X having the scaled χ^2_3 and Mixture 1 distributions. In all three instances, the estimators were calculated with estimated measurement error variances based on J=2 replicates per observation. Observation-level measurement error was taken to be Case 1 of Table <ref>. Both normal and Laplace distributions were considered for the measurement error. The sample size was fixed at n=500. The figures also show the true density curve for comparison. Although all three estimators considered are able to capture the shape of the true density, the D&M estimator with estimated variances does the worst among the three: for X having a scaled χ^2_3 distribution, it places much more density on the negative half-line than the EPF and WEPF_opt and tends to underestimate the modal height. Both the EPF and the WEPF_opt perform well for the scaled χ_3^2 distribution, with the WEPF_opt seemingly capturing the shape around the mode a little better than the EPF. When evaluating Figure <ref>, which shows the same plots for X having the Mixture 1 distribution, the general observations are very similar. The EPF and WEPF_opt have visually similar performance, while the D&M estimator underestimates the density around the mode. The Supplementary Material also contains a set of plots corresponding to X having the Mixture 2 distribution. Similar observations apply there. Additional simulation results are presented in the Supplemental Material. There, the EPF, WEPF and D&M estimators are compared under the assumption that one can find an optimal bandwidth (a bandwidth minimizing ISE) for any observed sample. When no replicate data is available and the measurement error variances are assumed known, the D&M estimator has the best performance, and the WEPF outperforms the EPF in all but one case considered. However, once the measurement error variance needs to be estimated (for both J=2 and J=3 replicates per case), the WEPF estimator tends to have the best performance, with the D&M estimator faring worse than the EPF estimator. Finally, a simulation with plug-in bandwidth and J=3 replicates is also presented. Here, the EPF and WEPF both outperform the D&M estimator. § ANALYSIS OF FRAMINGHAM DATA In this section, the EPF and WEPF_opt density deconvolution estimators are illustrated using a classical dataset in the deconvolution literature, a subset of the Framingham Heart Study. The data consists of several variables related to coronary heart disease for n=1615 patients. For each patient, two measurements of long-term systolic blood pressure (SBP) were collected at each of two examinations.
As per Carroll et al., <cit.> let M_ij be the average of the two measurements at exam j for j=1,2, and let W_ij = log(M_ij-50). The W_ij are assumed to be related to true long-term SBP, X_i, according to W_ij = Y_i + σ_i ε_ij with Y_i = log(X_i-50). Density deconvolution is therefore used to estimate the density on the Y-scale, f̂_Y(y), after which it follows that f̂_X(x)=(x-50)^-1f̂_Y[log(x-50)], x>50. For the SBP data, the EPF and WEPF_opt were estimated, the latter with mean-optimal weights q_opt using variance components estimated as described in Section <ref>. For both the EPF and WEPF_opt, deconvolution bandwidths were estimated using (<ref>). These two estimators are shown in Figure <ref>, together with the Delaigle & Meister (2008) estimator using the same estimated variances and Laplace measurement error. (The D&M estimator was also calculated for normal measurement error and was nearly identical.) A naive kernel estimator of the data using a normal reference bandwidth is also shown for comparative purposes. Other bandwidth selection approaches for the naive kernel estimator were also considered with very similar results. The naive kernel estimator is much flatter around the mode and fatter in the tails. This is expected, as the kernel estimator makes no correction for the measurement error present in the data. Furthermore, it can be seen that the WEPF_opt and EPF deconvolution density estimators are similar. The two density estimators based on phase functions suggest that the distribution of X may be multi-modal, while the D&M estimator is unimodal and positively skewed. § CONCLUSIONS This paper presents a method for phase function density deconvolution with heteroscedastic measurement error of unknown type and builds on the work of Delaigle & Hall <cit.> who considered the homoscedastic case. Two estimators are proposed, one using equally weighted observations and the other using mean-optimal weights to adjust for heteroscedasticity of the measurement error. A method based on approximating the AMISE is proposed for bandwidth selection in both instances. In the simulation settings considered, the WEPF_opt estimator generally performed better than the EPF estimator, although there were instances where their performance was comparable. The simulation results suggest that mean-optimal weighting of observations will not have a detrimental effect on estimating the density function, and big gains are sometimes possible. The practitioner cautious about estimating weights from a small number of replicates could always opt for a hybrid type of estimator, calculating WEPF_hybrid using weights q_hybrid = αq_opt + (1-α)/n, where α indicates their degree of confidence in using the estimated weights. The performance of this hybrid estimator is a future avenue of research. In the setting where the measurement error variances are known, the method of Delaigle & Meister <cit.> will outperform both phase function estimators, although the latter are still competitive in this setting. Also recall that the Delaigle & Meister estimator requires knowledge of the measurement error distribution, an assumption not made by the EPF and WEPF estimators. When there are only 2 replicates per individual from which to estimate the measurement error variances, the phase function methods performed substantially better than the Delaigle & Meister estimator.
This suggests that the phase function methods have some inherent robustness against deviation of the variance estimates from the true values, and that the phase function density estimators can generally achieve the same performance as the Delaigle & Meister estimator while making far weaker assumptions about the measurement error. § SUPPLEMENTARY MATERIAL In the supplementary material, the asymptotic properties of the weighted empirical phase function (WEPF) and the mean integrated squared error (MISE) of the phase function deconvolution density estimator are derived. Furthermore, plots of the phase functions corresponding to the three distributions used in the simulation studies (Section <ref>) are shown. In addition, as a complement to the simulations in Section <ref>, plots of the density estimators corresponding to the first, second, and third quartiles (Q_1, Q_2, and Q_3) of ISE are shown for the EPF, WEPF_opt, and D&M estimators when X has a bimodal mixture distribution (called Mixture 2 in the paper). Finally, simulation results are provided to compare density estimators under an optimal bandwidth setting and also when there are J=3 replicates per observation. 10 carroll2006measurement Carroll Raymond J, Ruppert David, Stefanski Leonard A, Crainiceanu Ciprian M. Measurement error in nonlinear models: a modern perspective. CRC press; 2006. stirnemann2012density Stirnemann JJ, Comte Fabienne, Samson Adeline. Density estimation of a biomedical variable subject to measurement error using an auxiliary set of replicate observations. Statistics in medicine. 2012;31(30):4154–4163. carroll1988optimal Carroll Raymond J, Hall Peter. Optimal rates of convergence for deconvolving a density. Journal of the American Statistical Association. 1988;83(404):1184–1186. stefanski1990deconvolving Stefanski Leonard A, Carroll Raymond J. Deconvolving kernel density estimators. Statistics. 1990;21(2):169–184. fan1991asymptotic Fan Jianqing. Asymptotic normality for deconvolution kernel density estimators. Sankhyā: The Indian Journal of Statistics, Series A. 1991;:97–110. fan1991optimal Fan Jianqing. On the optimal rates of convergence for nonparametric deconvolution problems. The Annals of Statistics. 1991;:1257–1272. fan1993nonparametric Fan Jianqing, Truong Young K. Nonparametric regression with errors in variables. The Annals of Statistics. 1993;:1900–1925. hall2005discrete Hall Peter, Qiu Peihua. Discrete-transform approach to deconvolution problems. Biometrika. 2005;:135–148. lee2010direct Lee Mihee, Shen Haipeng, Burch Christina, Marron JS. Direct deconvolution density estimation of a mixture distribution motivated by mutation effects distribution. Journal of Nonparametric Statistics. 2010;22(1):1–22. fan1992deconvolution Fan Jianqing. Deconvolution with supersmooth distributions. Canadian Journal of Statistics. 1992;20(2):155–169. delaigle2008density Delaigle Aurore, Meister Alexander. Density estimation with heteroscedastic error. Bernoulli. 2008;:562–579. diggle1993fourier Diggle Peter J, Hall Peter. A Fourier approach to nonparametric deconvolution of a density estimate. Journal of the Royal Statistical Society. Series B (Methodological). 1993;:523–531. neumann1997effect Neumann Michael H, Hössjer O. On the effect of estimating the error density in nonparametric deconvolution. Journal of Nonparametric Statistics. 1997;7(4):307–330. delaigle2008deconvolution Delaigle Aurore, Hall Peter, Meister Alexander. On deconvolution with repeated measurements. The Annals of Statistics. 2008;:665–685.
mcintyre2011density McIntyre Julie, Stefanski Leonard A. Density estimation with replicate heteroscedastic measurements.Annals of the Institute of Statistical Mathematics. 2011;63(1):81–99. delaigle2016methodology Delaigle Aurore, Hall Peter. Methodology for non-parametric deconvolution when the error distribution is unknown.Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2016;78(1):231–252. guo2011regression Guo Ying, Little Roderick J. Regression analysis with covariates that have heteroscedastic measurement error.Statistics in medicine. 2011;30(18):2278–2294. meister2006density Meister Alexander. Density estimation with normal measurement error with unknown variance.Statistica Sinica. 2006;:195–211. delaigle2008alternative Delaigle Aurore. An alternative view of the deconvolution problem.Statistica Sinica. 2008;:1025–1045. | http://arxiv.org/abs/1705.09846v3 | {
"authors": [
"Linh Nghiem",
"Cornelis J. Potgieter"
],
"categories": [
"stat.ME"
],
"primary_category": "stat.ME",
"published": "20170527172940",
"title": "Phase Function Density Deconvolution with Heteroscedastic Measurement Error of Unknown Type"
} |
Quantum dark solitons in Bose gas confined in a hard wall box Krzysztof Sacha December 30, 2023 ============================================================= Motivated by problems in search and detection we present a solution to a Combinatorial Multi-Armed Bandit (CMAB) problem with both heavy-tailed reward distributions and a new class of feedback, filtered semibandit feedback. In a CMAB problem an agent pulls a combination of arms from a set {1,...,k} in each round, generating random outcomes from probability distributions associated with these arms and receiving an overall reward. Under semibandit feedback it is assumed that the random outcomes generated are all observed. Filtered semibandit feedback allows the outcomes that are observed to be sampled from a second distribution conditioned on the initial random outcomes. This feedback mechanism is valuable as it allows CMAB methods to be applied to sequential search and detection problems where combinatorial actions are made, but the true rewards (number of objects of interest appearing in the round) are not observed, rather a filtered reward (the number of objects the searcher successfully finds, which must by definition be less than the number that appear). We present an upper confidence bound type algorithm, Robust-F-CUCB, and associated regret bound of order 𝒪(ln(n)) to balance exploration and exploitation in the face of both filtering of reward and heavy tailed reward distributions. § INTRODUCTIONIn this paper we present a solution to Combinatorial Multi-Armed Bandit (CMAB) problem with both heavy-tailed reward distributions and filtered semi-bandit feedback. This work is motivated by an application in search and detection (afforded a more detailed description in Section <ref>), where an agent sequentially selects combinations of cells to search, aiming to detect some objects in each cell. The number of objects in a given cell in a given round is randomly drawn from a Poisson distribution. The generalisation of previous work on CMAB problems to allow heavy tailed reward distributions is required to accommodate Poisson distributed counts of objects of interest. However due to the imperfect nature of search, this Poisson observation will not necessarily be observed as some events may go undetected. Moreover, the larger the area chosen for search, the less efficient the search will be.The filtered semibandit feedback allows for the observed outcomes to be a second random outcome, drawn from a distribution conditioned on the true or initial outcome. This new feedback class allows us to model the imperfect detection of objects of interest. In a CMAB problem with semibandit feedback an agent is faced with k bandit arms representing basic actions and may select some subset of these arms to play at each time step. Each arm i ∈{1,...,k} has an associated probability distribution ν_i with finite mean μ_i, both unknown to the agent. Playing a combination of arms S ⊆{1,..,k} reveals random outcomes sampled independently from distributions ν_i : i ∈ S and grants the agent a reward R(S) which is a function of the random outcomes observed. The agent's goal is to maximise her cumulative reward (or equivalently minimise her cumulative regret) through time.The work in this paper extends the CMAB framework of <cit.> in two directions. Firstly, to one where the underlying probability distributions ν_i are only restricted to have a bounded moment of order 1+ϵ, for ϵ∈ (0,1]. 
Secondly, we introduce filtered semibandit feedback where observed outcomes associated with an arm i need not be drawn from ν_i but can befiltered observations drawn from a related filtering distribution ν̃_i whose parameters depend on the true outcome X_i,t drawn from ν_i and the combination of arms S_t selected in a given round. The reward received is a function of the filtered observations. As in other versions of the CMAB problem the agent's goal is to maximise her cumulative reward through time.In addition to introducing this expanded view of the existing CMAB framework, we propose a general class of upper confidence bound algorithms for CMAB problems with filtered semibandit feedback, which achieve 𝒪(ln n) regret subject to the identification of a suitable mean estimator for the specific problem. We include illustrative examples of specific algorithms within this general class for particular distributional families and filtering mechanisms. So far as we are aware, the notion of observing filtered rewards in a CMAB problem is a new one and no previous work on algorithms for use under filtered semibandit feedback exists. For CMAB problems with semibandit feedback, the majority of work has focussed on a (relatively) simple CMAB framework where (at most) m < k arms are selected in each round and the overall reward observed is simply the sum of the outcomes generated from these m arms. This configuration is sometimes called Learning with Linear rewards or a Multiple play Bandit and has been considerd by authors including <cit.>, <cit.>, and <cit.>. <cit.> was the first paper to consider a more general class of reward functions, allowing all functions that satisfy certain smoothness assumptions. It is upon this work that our research is based. <cit.> permits an even less restrictive class of reward functions while <cit.> considers a variant of the usual semibandit feedback where arms not selected in a particular round may still be triggered with a certain probability. We have not attempted to incorporate the two latter innovations in to our work, principally because it was not relevant to our motivating application.Most Multi-Armed Bandit (MAB) research deals with compact or sub-Gaussian reward distributions, however there are several notable exceptions. In particular <cit.> present Robust-UCB algorithms suitable for heavy tailed (non sub-Gaussian) reward distributions with a bounded 1+ϵ moment. Extending these algorithms to Robust-CUCB algorithms will be one of our contributions in this work. The Bayes-UCB method of <cit.> and KL-UCB method of <cit.> have recently been improved in <cit.> to versions with provable regret bounds of optimal order in MAB problems with exponential family rewards. However, as these algorithms are based upon quantile-type UCB indices, rather than UCB indices which take the form of a mean estimate plus an inflation term, the existing analysis from <cit.> cannot be so easily exploited and we do not consider a combinatorial extension of these methods in this work.The rest of the paper is organised as follows. In Section <ref> we outline the aforementioned motivating application and justify its link to the CMAB problem. Section <ref> defines more rigorously our generalisation of the CMAB to include filtering of reward and heavy-tailed reward distributions. Section <ref> introduces our main Robust-F-CUCB algorithm for this generalised CMAB problem along with a performance guarantee in the form of a bound on expected regret. 
We conclude by revisiting the motivating example in light of our theoretical work and providing a short discussion.§ MOTIVATING EXAMPLE - LEARNING IN SEARCHOur inspiration to study these CMAB problems with filtered semibandit feedback comes from a real world problem in search and detection. In this section we describe this motivating problem and its link to Combinatorial bandits.§.§ Problem Specification This research is motivated by the problem of searching for objects over a large area <cit.>. The main assumption is that the target objects appear in the search area according to a nonhomogeneous spatial Poisson process. Repeated searches are conducted over this search area which is split into a finite number of cells. At each time t=1,2,...,n objects appear according to the Poisson process and the agent selects a subset of the cells to search over (with objects disappearing whether detected or not at time t+1). However, the more cells the agent opts to search, the less effective her search can be in any one cell. The key operational question is: How should the cells be searched in order to maximize the expected number of detections over a finite time horizon? We assume that to aid in answering this question, the probability of detecting an object that has appeared in a particular cell given the set of cells the agent opts to search is known. If the intensity function of the Poisson process were known, the analyst could formulate a mixed integer linear optimization problem to find the optimal subset of cells to patrol. The challenge for the agent is to come up with a patrolling scheme that judiciously balances exploration and exploitation. Specifically, in this example, a patrolling scheme should take the form of a choice of cells to search in rounds t=1,2,...,n, where choices may be made after observing the detections fromprevious rounds. §.§ Link to Combinatorial Multi-Armed Bandits Clearly this problem in search with an unknown intensity function is a sequential decision problem, where the action space in each round is formed of different combinations of cells that the agent may patrol. A CMAB problem is therefore an appropriate model. In each round the agent will choose a set of cells in which to search, so that cells are viewed as bandit arms. Due to the spatio-temporal Poisson process model we specify, the number of objects appearing in a given cell i over a fixed time window will be Poisson distributed with parameter μ_i, independently of the number of objects in other cells. However, an added complication comes from the fact that not all objects which appear are detected. Under the choice of combination of arms S_t at time t, each object is detected with a certain probability γ_i,S_t - assumed constant within a round - such that the number of objects observed given the number of objects appearing is Binomially distributed. i.e. if X_i,t∼ Pois(μ_i) is the number of objects appearing in cell i during round t, then conditional on X_i,t and S_t, the number of objects detected Y_i,t will have a Binomial distribution such that Y_i,t|X_i,t,S_t ∼ Bin(X_i,t,γ_i,S_t). A consequence of X_i,t being Poisson is that the marginal distribution of Y_i,t|S_t will follow a Pois(γ_i,S_tμ_i) distribution. So while there is a clear link between the search problem and the CMAB problem in terms of sequential decision making with a combinatorially structured action space, the original CMAB model of <cit.> does not apply directly to the search problem. 
In the search problem, draws from the underlying reward distribution are not observed. Further, due to the varying detection probabilities from round to round (as different combinations of arms are played) the distribution from which rewards are observed does not remain constant either. Additionally, Poisson rewards have heavier tails than can be accommodated within the framework considered by <cit.>. This motivates us to develop an extended CMAB framework allowing for a broader range of underlying reward distributions and a feedback mechanism where the observed rewards are a filtered version of the true outcomes from the underlying distributions. With such a model design, algorithms to approach the search problem can be developed. § FRAMEWORKIn a CMAB problem, an agent is faced with k arms each associated with some unknown, underlying probability distribution ν_i with expectation μ_i. At each time step t=1,2,... the agent selects a combination of arms S_t from a set of possible combinations 𝒮⊆𝒫({1,...,k}) where 𝒫({1,...,k}) denotes the power set of the set of arms. When a combination of arms is selected in a round, we say that all the arms within that combination have been played in the round. Letting T_i,t=∑_j=1^tI{i ∈ S_j} denote the number of times an arm is played in the first t rounds, we introduce the filtered semi bandit feedback framework as follows.When a combination of arms S_t is selected in round t, a random outcome X_i,T_i,t is generated (independently) from underlying distribution ν_i for each i ∈ S_t. However, these outcomes remain unobserved. Instead, for each i ∈ S_t, a filtered observation Y_i,T_i,t is drawn from a filtering distribution ν̃_i,T_i,t=ν̃_i(X_i,T_i,t,S_t) conditioned on the random outcome from the underlying distribution and the combination of arms played. These filtered observations are seen by the agent. Let 𝐗_S_t and 𝐘_S_t respectively denote the vectors of true outcomes and filtered observations in round t where combination of arms S_t is selected. In addition to observing 𝐘_S_t, playing the combination of arms S_t grants the agent a reward R(𝐘_S_t) which is a function of the filtered observations (and thus is a random variable). The expectation of the reward obtained by playing combination of arms S_t, with respect to a particular vector of underlying means μ, is denoted r_μ(S_t)=E(R(𝐘_S_t)|S_t). The function r_μ: 𝒮→R is referred to as a reward function.One example of a filtering model is the binomial filtering of discrete non-negative integer data, as seen in the search example of Section <ref>. In such a model, Y_i,T_i,t|X_i,T_i,t,S_t follows a Bin(X_i,T_i,t,γ_i,S_t) distribution where γ_i,S_t is a success probability dependent on the combination of arms played.Filtered semibandit feedback can be contrasted with bandit, semibandit and full information feedbacks where there is no filtering, or in our terms where the filtering distributions are such that 𝐘_S_t=𝐗_S_t with probability 1 (so true outcomes and filtered observations can be treated as the same thing). In bandit feedback, only R(𝐗_S_t) is observed. In semibandit feedback R(𝐗_S_t) and 𝐗_S_t are observed. In full information feedback R(𝐗_S_t), 𝐗_S_t, and a draw from ν_i for i ∉ S_t are observed. Our model applies filtering to the semibandit feedback case. We do not consider filtered variants of bandit or full information feedback. 
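To make the feedback mechanism concrete, one round of the binomially filtered Poisson model can be simulated as follows; the rates `mu` and the detection-probability rule `gamma_of` are hypothetical stand-ins for the unknown quantities of the search problem, and the code is an illustrative sketch rather than part of the formal framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def play_round(S, mu, gamma_of):
    # One round of filtered semibandit feedback for the search problem.
    # S: tuple of selected cells; mu: true Poisson rates (unknown to the agent);
    # gamma_of(i, S): detection probability gamma_{i,S} in (0, 1].
    X = {i: rng.poisson(mu[i]) for i in S}                   # true outcomes, never observed
    Y = {i: rng.binomial(X[i], gamma_of(i, S)) for i in S}   # filtered observations
    return Y, sum(Y.values())                                # observations and detection count

# Hypothetical example: detection degrades as more cells are searched simultaneously.
mu = np.array([2.0, 0.5, 1.2, 3.0])
gamma_of = lambda i, S: max(0.1, 1.0 - 0.2 * (len(S) - 1))
Y, reward = play_round((0, 2, 3), mu, gamma_of)
```

Marginally, each filtered observation produced in this way is Poisson with mean γ_{i,S}μ_i, consistent with the discussion above.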
We note that the classical stochastic Multi-Armed Bandit (MAB) problem is a special case of the CMAB problem with (non-filtered) bandit or semibandit feedback, where 𝒮={{1},...,{k}} and the reward observed is simply equal to the observation X_i,T_i,t drawn from the arm i selected.A CMAB problem with filtered semibandit feedback is therefore defined by a set of underlying probability distributions ν=(ν_1,...,ν_k) with means μ=(μ_1,...,μ_k), a set of possible combinations 𝒮, a reward function r_μ(·), and a set of filtering distributions ν̃=(ν̃_1,...,ν̃_k) with variable parameters. To aid in the analysis in this paper, we make assumptions on the expected reward r_μ(S) as in <cit.>.Assumption 1 - Monotonicity: The expected reward of playing any combination of arms S ∈𝒮 is monotonically nondecreasing with respect to the expectation vector, i.e. if for all i ∈{1,...,k}, μ_i ≤μ_i', we have r_μ(S) ≤ r_μ'(S) for all S ∈𝒮.Assumption 2 - Bounded Smoothness: There exists a strictly increasing function f(·) called a bounded smoothness function, such that for any two expectation vectors μ and μ' with max_i ∈ S|μ_i - μ_i'|≤Λ we have |r_μ(S)-r_μ'(S)|≤ f(Λ). With these assumptions in place we will be able to construct bounds on the performance of UCB-type algorithms for CMAB problems with filtered semibandit feedback. A CMAB algorithm will, in a round t, consider the rewards observed in previous rounds and select a combination of arms S_t to be played. Its objective is to maximise cumulative expected reward over n rounds, E(∑_t=1^n r_μ(S_t)) - where the expectation is taken with respect to the actions selected by the algorithm.We investigate the performance of UCB type algorithms for the CMAB problems. Typically, UCB algorithms make decisions based on indices formed by adding an inflation term to a data-driven estimator of the underlying mean μ_i. Successful algorithms are obtained by selecting the inflation term appropriately to match the convergence rate of the mean estimator thereby encouraging an appropriate balance of exploration and exploitation. In simple CMAB or MAB problems with bounded or sub-Gaussian reward distributions, an empirical mean has convergence of a suitable rate to yield UCB algorithms with 𝒪(ln n) bounded regret. However with non-sub-Gaussian (or heavy tailed) reward distributions the empirical mean lacks this same rate of convergence. As in <cit.>, we turn to more robust mean estimators to find the correct convergence rate. A further challenge is that our mean estimators must be based on observations from filtered distributions but converge to the mean of the underlying distributions. We seek estimators μ̂(Y_i,1,...,Y_i,n) of μ_i based on filtered observations Y_i,1,...,Y_i,n which satisfy the following assumption for the relevant distributions in the particular CMAB problems we consider.Assumption 3 - Concentration of Mean Estimator: The mean estimator μ̂_i,n=μ̂(Y_i,1,...,Y_i,n) is such that for positive parameter ϵ∈ (0,1], positive values c,v, and independent random variables Y_i,1,...,Y_i,n drawn from filtering distributions ν̃_i,1,...,ν̃_i,n we have for all n≥ 1 andδ∈ (0,1)P(μ̂_i,n≥μ_i + v^1/1+ϵ(cln(1/δ)/n)^ϵ/1+ϵ)≤δ P(μ_i ≥μ̂_i,n + v^1/1+ϵ(cln(1/δ)/n)^ϵ/1+ϵ)≤δ. § ROBUST-F-CUCBFor the stochastic CMAB with filtered semibandit feedback as introduced in Section <ref>, we propose the Robust-F-CUCB algorithm, described in Algorithm 1. The Robust-F-CUCB algorithm is both a generalisation and combination of the Robust-UCB algorithm of <cit.> and CUCB algorithm of <cit.>. 
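A compact sketch of the index policy just introduced is given here for orientation; `oracle`, `mean_estimator` and `radius` are placeholders for the combinatorial maximisation over 𝒮, the robust mean estimator, and the confidence width of Assumption 3 respectively, so this is an illustrative outline rather than the paper's exact Algorithm 1.

```python
import numpy as np

def robust_f_cucb(k, n_rounds, env, oracle, init_sets, mean_estimator, radius):
    # env(S, t): play combination S in round t, return {arm: filtered observation}
    # oracle(ucb): feasible combination maximising the optimistic expected reward
    # init_sets: for each arm, a feasible combination containing that arm
    # mean_estimator(obs): estimate of the underlying mean from filtered observations
    # radius(T_i, t): confidence width matching Assumption 3,
    #                 e.g. v**(1/(1+eps)) * (c*np.log(t) / T_i)**(eps/(1+eps))
    history = [[] for _ in range(k)]
    for t, S in enumerate(init_sets, start=1):               # initialisation rounds
        for i, y in env(S, t).items():
            history[i].append(y)
    for t in range(len(init_sets) + 1, n_rounds + 1):        # main loop
        ucb = np.array([mean_estimator(history[i]) + radius(len(history[i]), t)
                        for i in range(k)])
        S = oracle(ucb)
        for i, y in env(S, t).items():
            history[i].append(y)
    return history
```

In the search application, `env` could be the `play_round` function sketched earlier, `mean_estimator` one of the robust estimators introduced later in this section, and `oracle` the optimisation over feasible sets of cells.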
This section proceeds as follows: We begin by introducing the necessary language and notation to express performance guarantees in the form of regret bounds. We then present our Robust-F-CUCB algorithm for a general mean estimator satisfying Assumption 3, before considering versions of the algorithm tailored to more specific reward and filtering distributions. With each version of the algorithm we also present a regret bound. §.§ Regret NotationThe regret of a CMAB algorithm in n rounds with respect to an expectation vector μ can be writtenReg_n,μ = n·opt_μ - E(∑_t=1^n r_μ(S_t)).where opt_μ=max_S ∈𝒮r_μ(S) denotes the highest attainable expected reward in a single round of the CMAB problem with respect to a expectation vector μ, and the expectation in (<ref>) is taken with respect to the actions selected by the algorithm. The aim to maximise expected reward is equivalent to minimising regret. The quality of an algorithm is usually measured by determining an analytical bound on Reg_n,μ, the order of which is the principal consideration. Algorithms with bounds of 𝒪(ln n) are said to be of optimal order in CMAB and MAB problems. In line with Chen et al. we defineΔ_max =opt_μ-min_S ∈𝒮{r_μ(S)|r_μ(S) ≠opt_μ} Δ_min =opt_μ-max_S ∈𝒮{r_μ(S)|r_μ(S) ≠opt_μ}The quantity Δ_max is then the difference in expected reward between an optimal combination of arms and the worst possible combination of arms, while Δ_min is the difference in expected reward between an optimal combination of arms and the closest to optimal suboptimal combination of arms. These quantities will be important in defining bounds on expected regret. §.§ General Algorithm Statement and Regret BoundWe first describe the Robust-F-CUCB algorithm for a general mean estimator satisfying Assumption 3, before considering more specific results later in this section. Like the CUCB algorithm, our Robust-F-CUCB algorithm consists firstly of an initialisation stage where a combination of arms containing each arm is played, to initialise mean estimates and T_i,t counters. Thereafter, in each round, upper confidence bounds (UCBs) are calculated for each arm and these UCBs are passed to a combinatorial optimisation to identify the best combination of arms to play from our optimistic perspective. The approach is presented in full detail in Algorithm 1.The following theorem provides our performance bound for the Robust-F-CUCB algorithm.Theorem 1:Let ϵ∈ (0,1] and let μ̂_i,n be a mean estimator. Suppose that the underlying distributions ν_1,...,ν_k and filtering distributions ν̃_1,...,ν̃_k are such that the mean estimator satisfies Assumption 3 for all i=1,...,k. Then the regret of the Robust-F-CUCB policy satisfiesReg_n,μ≤(3cv^1/ϵ(2/f^-1(Δ_min))^1+ϵ/ϵln n + π^2/3 +1 )· k ·Δ_max.Proof: Since the mean estimators are assumed to satisfy Assumption 3, the proof is an adaptation of those given by <cit.> and <cit.> and is given in Appendix <ref>.A particular instance of the Robust-F-CUCB algorithm can be defined by a specific mean estimator and particular values of ϵ, c, v. In the remainder of this section, we consider particular cases of the CMAB problem and particular Robust-F-CUCB algorithms which are appropriate to these problems. §.§ Semibandit Feedback with Heavy Tails Firstly, we consider a CMAB problem without filtering - i.e. where the filtering distributions are such that Y_i,t=X_i,t for all i and t and we simply have semibandit feedback. 
In this situation we are still considering a model not previously studied in the literature as we have permitted a more general class of reward distribution whose support is not solely contained within [0,1]. For this problem class we propose the Robust-F-CUCB algorithm with truncated empirical mean - a direct extension of the Robust-UCB policy with truncated empirical mean specified by <cit.>. The truncated empirical mean, given some parameters u>0, and ϵ∈ (0,1], and based on observations X_1,...,X_n is defined asμ̂_i,n^Trunc=1/n∑_t=1^n X_t I{|X_t|≤(ut/ln(t))^1/1+ϵ}. The following Proposition provides a bound on the regret of the Robust-F-CUCB algorithm where the underlying distributions ν have suitably bounded 1+ϵ moments. The Robust-F-CUCB algorithm with truncated empirical mean works for this problem class because in <cit.> the truncated empirical mean has already been shown to satisfy Assumption 3.Proposition 2:Let ϵ∈ (0,1] and u > 0. Let the reward distributions ν_1,...,ν_k satisfyE_X ∼ν_i|X_i|^1+ϵ ≤u ∀i ∈{1,...,k}.Then the regret of the Robust-F-CUCB algorithm used with the truncated empirical mean estimator defined in (<ref>) satisfiesReg_n,μ≤(12(4u)^1/ϵ(2/f^-1(Δ_min))^1+ϵ/ϵln n + π^2/3 + 1)· k ·Δ_max. Proof: <cit.> shows that Assumption 3 holds with c=4 and v=4u. The main result then follows from Theorem 1.§.§ Binomially filtered Poisson rewardsWe now wish to consider a CMAB problem with filtering. As we mentioned in Section 3, one possible filtering framework is the Binomial filtering of count data, i.e. where if in round t X_i,T_i,t is a draw from ν_i and S_t is the combination of arms selected then the filtering distribution for arm i, ν̃_i(X_i,T_i,t,S_t) is Bin(X_i,T_i,t,γ_i,S_t). We also mentioned that if ν_i follows a Poisson distribution with parameter μ_i then the marginal distribution of the filtered observation Y_i,T_i,t|S_t will be Poisson with parameter γ_i,S_tμ_i. We consider this example, with the additional assumption that for some γ_min>0 we have γ_i,S>γ_min for all i and S ∈𝒮.To define a Robust-F-CUCB algorithm that satisfies the logarithmic order regret bound for this problem, we must have a mean estimator which satisfies Assumption 3 for the filtering distributions specified above. Consider the following filtered truncated empirical mean estimator, and the associated Lemma, demonstrating that a version of Assumption 3 holds for this estimator when applied to Poisson reward distributions. Lemma 3: Let δ∈ (0,1) and μ_max>0, and define u_max=μ_max^2 + μ_max. Consider a series of filtered Poisson observations Y_i,1,...,Y_i,n with means γ_i,1μ_i,...,γ_i,nμ_i where γ_i,t∈ (γ_min,1] for t=1,...,n and γ_min > 0. Consider the filtered truncated empirical mean estimator,μ̂_i,n^TruncF = 1/n∑_t=1^n Y_i,t/γ_i,tI{Y_i,t≤γ_i,t√(u_maxt/lnδ^-1)}.If μ_i ≤μ_max thenP(μ̂_i,n^TruncF≥μ_i + (2/γ_min + √(2/γ_min) +1/3)√(u_maxlnδ^-1/n))≤δ,P(μ_i ≥μ̂_i,n^TruncF + (2/γ_min + √(2/γ_min) +1/3)√(u_maxlnδ^-1/n))≤δ. We present a proof of Lemma 3 in Appendix <ref>. Proposition 4 below specifies the regret bound which holds if the filtered truncated empirical mean estimator is used to define a Robust-F-CUCB algorithm and this algorithm is applied to the CMAB problem with the filtering structure defined above. Proposition 4: Let ϵ=1 and μ_max>0. Let the reward distributions ν_1,...,ν_k be Poisson satisfying μ_i ≤μ_max for i=1,...,k. Let the filtering distributions ν̃_1,...,ν̃_k be Binomial as described above. 
Then the regret of the Robust-F-CUCB algorithm used with the filtered truncated empirical mean estimator defined in (<ref>) satisfies Reg_n,μ≤(12(μ_max^2+μ_max)(2/γ_min + √(2/γ_min) +1/3)^2/(f^-1(Δ_min))^2 ln n + π^2/3 + 1)· k ·Δ_max. Proof: Lemma 3 shows that Assumption 3 holds with ϵ=1, c=u_max and v=(2/γ_min + √(2/γ_min) +1/3)^2. The main result then follows from Theorem 1. § DISCUSSION Within this paper we have presented a generalisation (in two senses) of the Combinatorial Multi-Armed Bandit framework, by considering unbounded reward distributions and filtered semibandit feedback. Our Robust-F-CUCB algorithm, presented in a general form, can be shown to have an associated logarithmic order bound on regret, and we have specified this bound for particular CMAB problem instances. In particular, we have shown that in a filtering-free context the truncated mean estimator can be used to provide an algorithm for a CMAB problem with heavy tails with a logarithmic order bound on regret. We developed a generalisation of the truncated mean estimator to deal with binomially filtered Poisson data and showed that for this class of data it has the required concentration properties - a result which could of course be applied in the study of other problems, not just bandits. We can apply the Robust-F-CUCB algorithm with the filtered truncated empirical mean discussed in Section <ref> in the sequential search problem as long as we have knowledge of some upper bound μ_max such that the average rate of the underlying Poisson process in each cell is below this upper bound in each round. As the reward function in this problem is the expected number of detected events, r_μ(S_t)=∑_i ∈ S_tγ_i,S_tμ_i, and γ_min≤γ_i,S_t≤ 1 for all i and S_t ∈𝒮, Assumption 1 (monotonicity) holds and Assumption 2 (bounded smoothness) holds for a bounded smoothness function f(Λ)=kΛ. Thus we can bound the regret of the Robust-F-CUCB as 𝒪(k^3 ln n). We note also that we could readily extend this work to a more complex application where there are multiple agents searching collaboratively. If multiple searchers were each to select a combination of cells to search (in such a way that the combinations do not overlap), one could still identify a best combination in the full information problem by formulating and solving an Integer Linear Program. In the sequential decision variant of the problem, the searchers would still observe filtered rewards from the multiple combinations of cells in each round, and an Upper Confidence Bound algorithm could still be applied to balance exploration and exploitation. The key difference with this more complex application is that the mapping between combinations of arms and interpretable allocations of searchers would not be one-to-one. The added combinatorial aspect of the problem means that, while a combination of arms would be interpretable as the union of the sets of cells picked by the different searchers, different sets of sets could lead to the same overall combination of arms. Therefore, crucially, a combination of arms may have multiple different sets of filtering distributions associated with it, and the most appropriate way to play a given combination of arms may depend on more than the arm indices alone.
So, to approach this more complex version of the problem, a definition of the set of possible combinations 𝒮 that includes labellings of the partitions within combinations of cells S ∈𝒮 is required. We note that it would be possible to improve the leading order coefficients of all our regret bounds by applying the more sophisticated analysis used in the proof of Theorem 1 of <cit.> to our Robust-F-CUCB framework. Said analysis would improve on our presented analysis by noting the discrepancy between defining sufficiency of sampling with respect to Δ_min but bounding regret with respect to Δ_max. The more sophisticated analysis would remain usable in our more complex framework, as any intricacies due to filtering are truly captured within the concentration inequality based step, which yields the π^2/3 kΔ_max term, a step which the more sophisticated analysis does not alter. We have refrained from making these improvements in this work, as they do not affect the order of the bound and the omission permits an easier explanation of our key results. Furthermore, although we did not directly consider the more general (α, β)-approximation regret considered by Chen et al. (which allows for the CMAB algorithm to be a randomised algorithm with a small failure probability), the results presented in our paper can be trivially generalised to incorporate this by reintroducing the α and β parameters which we have effectively fixed to equal 1. § ACKNOWLEDGEMENTS We gratefully acknowledge the support of the EPSRC funded EP/L015692/1 STOR-i Centre for Doctoral Training. § PROOF OF THEOREM 1 Proof of Theorem 1: For each arm, maintain T_i,t as a count of the number of times arm i has been played in the first t rounds. We also maintain a second set of counters {N_i}_i=1^k, one associated with each arm. These counters, which collectively count the number of suboptimal plays, are updated as follows. Firstly, after the k initialisation rounds set N_i,k=1 for all i ∈{1,...,k}. Thereafter, in each round t > k, let S_t be the combination of arms played in round t and let i = arg min_j ∈ S_t N_j,t; if the minimiser is non-unique then we choose randomly from the minimising set. If r_μ(S_t) ≠ opt_μ then we increment N_i, i.e. set N_i,t=N_i,t-1+1. The key results of these updating rules are that ∑_i=1^k N_i,t provides an upper bound on the number of suboptimal plays in t rounds and T_i,t≥ N_i,t for all i and t. Define l_t = 3cv^1/ϵ(2/f^-1(Δ_min))^1+ϵ/ϵln(t). We consider a round t in which a combination S_t with r_μ(S_t) ≠opt_μ is selected and the counter N_i of some arm i ∈ S_t is updated. We have ∑_i=1^k N_i,n - k = ∑_t=k+1^n I{r_μ(S_t) ≠opt_μ} ⇒∑_i=1^k N_i,n - k·(l_n+1) = ∑_t=k+1^n I{r_μ(S_t) ≠opt_μ}-kl_n ≤∑_t=k+1^n ∑_i=1^k I{r_μ(S_t) ≠opt_μ, N_i,t > N_i,t-1, N_i,t-1 > l_n}≤∑_t=k+1^n ∑_i=1^k I{r_μ(S_t) ≠opt_μ, N_i,t > N_i,t-1, N_i,t-1 > l_t} = ∑_t=k+1^n I{r_μ(S_t) ≠opt_μ, N_i,t-1 > l_t ∀ i ∈ S_t}≤∑_t=k+1^n I{r_μ(S_t) ≠opt_μ, T_i,t-1 > l_t ∀ i ∈ S_t} Here, the initial equations come from the updating rules for the counters. The first inequality holds because there are at most kl_n occasions where the specified conditions do not hold - i.e. once each counter has been updated l_n times, none of the counters will be < l_n. The second inequality is true because l_t ≤ l_n for t≤ n, and equation (<ref>) holds because of our rule that we always update only one of the smallest counters in the selected combination of arms. The final inequality follows from N_i,t≤ T_i,t.
We wish to show P(r_μ(S_t) ≠opt_μ, T_i,t-1>l_t ∀ i ∈ S_t) ≤ 2kt^-2, so that the summation in (<ref>) converges. As a consequence of Assumption 3, for any arm i=1,..,k we have:P(|μ̂_i,T_i,t-1-μ_i| ≥ v^1/1+ϵ(cln t^3/T_i,t-1)^ϵ/1+ϵ) =∑_s=1^t-1P({|μ̂_i,s-μ_i| ≥ v^1/1+ϵ(cln t^3/s)^ϵ/1+ϵ,T_i,t-1=s}) ≤∑_s=1^t-1P(|μ̂_i,s-μ_i| ≥ v^1/1+ϵ(cln t^3/s)^ϵ/1+ϵ) ≤ t· 2t^-3≤ 2t^-2. Define a random variable Λ_i,t = v^1/1+ϵ(cln t^3/T_i,t-1)^ϵ/1+ϵ and event E_t ={|μ̂_i,T_i,t-1 - μ_i| ≤Λ_i,t, ∀ i=1,..,k}. It is clear, by a union bound on Eq. (<ref>) that P( E_t) ≤ 2kt^-2. In the loop phase of the Robust-F-CUCB algorithm we have μ̅_i,t - μ̂_i,T_i,t-1 = Λ_i,t. Thus, E_t implies μ̅_i,t≥μ_i for all i. Let Λ=v^1/1+ϵ(cln t^3/l_t)^ϵ/1+ϵ (not a random variable) and define Λ_t = max{Λ_i,t|i ∈ S_t} (which is a random variable). The following results can then be written:E_t ⇒ |μ̅_i,t-μ_i| ≤ 2 Λ_t ∀ i ∈ S_t {r_μ(S_t) ≠opt_μ , T_i,t-1 > l_t ∀ i ∈ S_t }⇒Λ >Λ_twhich follow from the definitions of the various Λ terms.We can then present the following derivation, true if {E_t,r_μ(S_t) ≠opt_μ, T_i,t-1 > l_t ∀ i ∈ S_t} holds:r_μ(S_t) + f(2Λ)> r_μ(S_t)+f(2Λ_t),≥ r_μ̅_t(S_t) = opt_μ̅_t, ≥ r_μ̅_t(S^*_μ), ≥ r_μ(S^*_μ) = opt_μ,where S^*_μ is an combination of arms with optimal expected reward with respect to the true mean vector. The first inequality follows from the monotonicity of the bounded smoothness function specified in Assumption 2 and Eq. (<ref>). The second is a result of the bounded smoothness property of Assumption 2 and Eq. (<ref>). The third inequality follows from the definition of opt_μ̅_t and the fourth from the monotonicity of r_μ(S) assumed in Assumption 1 and the result that E_t ⇒μ̅_t ≥μ. In summary, this derivation says that if {E_t,r_μ(S_t) ≠opt_μ, ∀ i ∈ S_t, T_i,t-1 > l_t} holds thenr_μ(S_t)+f(2Λ) > opt_μ.The definitions of l_t and Λ mean that f(2Λ)= Δ_min, and we can rewrite Eq. (<ref>) asr_μ(S_t) + Δ_min > opt_μ.This however, is a direct contradiction of the definition of Δ_min and the assumption that r_μ(S_t) ≠opt_μ. This means that P({E_t,r_μ(S_t) ≠opt_μ,T_i,t-1 > l_t ∀ i ∈ S_t})=0 and thusP({r_μ(S_t) ≠opt_μ,T_i,t-1 > l_t ∀i ∈S_t}) ≤P(E_t) ≤2kt^-2as derived previously. From (<ref>) we can thus write:E(∑_i=1^k N_i,n)≤ k(l_n+1) + ∑_t=k+1^n P(r_μ(S_t) ≠opt_μ,T_i,t-1 > l_t ∀ i ∈ S_t) ≤ k(l_n+1) + ∑_t=1^n 2k/t^2≤2^1+ϵ/ϵ· c · k · v^1/ϵ·ln n^3/(f^-1(Δ_min))^1+ϵ/ϵ+ (π^2/3 + 1) · k. Since the expected reward from playing a suboptimal combination of arms is at most Δ_max from opt_μ we can trivially reach the required result by assuming the suboptimal rounds are all as far from optimality as they could be. □§ PROOF OF LEMMA 3Proof of Lemma 3: Theproof will show (<ref>) to be true, and then proving (<ref>) is just a simple modification of the same steps. Define B_t = √(u_max t/lnδ^-1). We haveμ_i - μ̂_i,n^TruncF = 1/n∑_t=1^n (μ_i - Y_i,t/γ_i,tI{Y_i,t≤γ_i,tB_t}) = 1/n∑_t=1^n (E(Y_i,t/γ_i,t) - E(Y_i,t/γ_i,tI{Y_i,t≤γ_i,tB_t}))+ 1/n∑_t=1^n (E(Y_i,t/γ_i,tI{Y_i,t≤γ_i,tB_t}) - Y_i,t/γ_i,tI{Y_i,t≤γ_i,tB_t}) = 1/n∑_t=1^n E(Y_i,t/γ_i,tI{Y_i,t > γ_i,tB_t}) + 1/n∑_t=1^n Z_twhere Z_t = E(Y_i,t/γ_i,tI{Y_i,t≤γ_i,tB_t}) - Y_i,t/γ_i,tI{Y_i,t≤γ_i,tB_t}. We bound the first sum in (<ref>) by noting thatE(Y_i,tI{Y_i,t > γ_i,tB_t}) ≤E(Y_i,t^2/γ_i,tB_t) ≤γ_i,tu_max/γ_i,tB_t = u_max/B_t,since I{Y_i,t > γ_i,tB_t}≤Y_i,t/γ_i,tB_t and E(Y_i,t^2)≤γ_i,tu_max because Y_i,t∼ Pois(γ_i,tμ_i). 
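For completeness, the last step can be spelled out. Assuming, as the constant in the corollary above suggests, that u_max is chosen with u_max ≥ μ_max^2 + μ_max, the Poisson second moment gives E(Y_i,t^2) = γ_i,tμ_i + γ_i,t^2 μ_i^2 ≤ γ_i,t(μ_i + μ_i^2) ≤ γ_i,t(μ_max + μ_max^2) ≤ γ_i,t u_max, where the first inequality uses γ_i,t ≤ 1 and the second uses μ_i ≤ μ_max.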
To bound the second sum in (<ref>), we will use Bernstein's inequality for bounded random variables:Bernstein's Inequality: Let X_1,X_2,...,X_n be independent bounded random variables such that E(X_i)=0 and |X_i|≤ς with probability 1 and let σ^2 = 1/n∑_i=1^n Var(X_i) then for any a>0 we haveP(1/n∑_i=1^n X_i ≥ a) ≤exp{-na^2/2σ^2 + 2ς a/3}.The Z_t have zero mean, bounded support (|Z_t| ≤ B_t ≤ B_n), and bounded variancesVar(Z_t) = Var(Y_i,t/γ_i,t1_{Y_i,t ≤γ_i,tB_t}) ≤1/γ_i,t^2E(Y_i,t^2 1_{Y_i,t ≤γ_i,tB_t}) ≤u_max/γ_i,tfor t=1,...,n. Thus we have a bounded σ^2 = 1/n∑_t=1^n Var(Z_t) ≤u_max/γ_min also. Therefore applying Bernstein's inequality for bounded random variables, with our upper bounds on σ^2 and ς, haveP(1/n∑_t=1^n Z_t > a) ≤exp{-na^2/2u_max/γ_min+2B_na/3 }.Plugging ina = √(2u_maxlnδ^-1/γ_min n) + B_n/3nlnδ^-1we see that P(1/n∑_t=1^n Z_t > a)< δ and therefore P(1/n∑_t=1^n Z_t ≤ a)≥ 1-δ. With these results we can place the following bound on (<ref>) that holds with at least probability 1-δ:1/n∑_t=1^n E(Y_i,t/γ_i,t1_{Y_i,t > γ_i,tB_t}) + 1/n∑_t=1^n Z_t ≤1/n∑_t=1^n u_max/γ_i,t B_t +√(2u_maxlnδ^-1/γ_min n) + B_n/3nlnδ^-1= 1/n∑_t=1^n 1/γ_i,t√(u_maxlnδ^-1/t) + √(2/γ_min)√(u_maxlnδ^-1/n) + 1/3√(u_maxlnδ^-1/n)≤(1/γ_min√(n)∑_t=1^n 1/√(t) + √(2/γ_min) + 1/3)√(u_maxlnδ^-1/n)≤(2/γ_min + √(2/γ_min) +1/3)√(u_maxlnδ^-1/n). □This proves thatP(μ_i ≤μ̂_i,n^TruncF + (2/γ_min + √(2/γ_min) +1/3)√(u_max lnδ^-1/n)) ≥1-δ. | http://arxiv.org/abs/1705.09605v1 | {
"authors": [
"James A. Grant",
"David S. Leslie",
"Kevin Glazebrook",
"Roberto Szechtman"
],
"categories": [
"cs.LG",
"stat.ML"
],
"primary_category": "cs.LG",
"published": "20170526145346",
"title": "Combinatorial Multi-Armed Bandits with Filtered Feedback"
} |
Department of physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USAInstitute of Physics, Chinese Academy of Sciences, Beijing 100190, ChinaDepartment of physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USAWe prove a theorem on the ground state degeneracy in quantum spin systems on two-dimensional lattices: if a half-integer spin is located at a center of symmetry where the point group symmetry is 𝔻_2,4,6, there must be a ground state degeneracy. The presence of suchdegeneracy in the thermodynamic limit indicates either a broken-symmetry state or a unconventional state of matter. Compared to the Lieb-Schultz-Mattis theorem, our criterion for ground state degeneracy is based onthe spin at each center of symmetry, instead of the total spin per unit cell. Therefore, our result is even applicable to certain systems with an even number of half-integer spins per unit cell. Ground state degeneracy in quantum spin systems protected by crystal symmetries Liang Fu December 30, 2023 ===============================================================================For quantum many-body systems with an odd number of spin-1/2 per unit cell, the Lieb-Schultz-Mattis (LSM) theorem and its generalization to higher dimensions <cit.> guarantee a ground-state degeneracy protected by the translation symmetry and the spin-rotation symmetry. Such a ground-state degeneracy rules out the possibility of a featureless paramagnetic phase, and indicates either a broken-symmetry state or a unconventional state of matter, such as a quantum spin liquid <cit.> with topological order<cit.>. Recently, LSM-type theorems have been developed for systems with both time-reversal symmetry and (magnetic) space-group symmetry <cit.>. In this work, we present a new theorem on ground state degeneracy in quantum spin systems, which solely relies on crystal symmetries, and specifically, the point groups. Our theorem states that quantum spin systems in two-dimensional (2D) lattices where a half-integer spin is located at a center of symmetry with the point group 𝔻_n for n=2,4 or 6 [For simplicity, in this paper we consider a strictly 2D lattice with a wallpaper group symmetry, and do not distinguish the symmetry groups 𝔻_n and C_nv. In general, our theorem applies to both cases.], must have a ground state degeneracy. Several remarks are in order: 1. Here and throughout this paper, a half-integer (integer) spin on a given siterefers to the spin degrees of freedom arising from an odd (even) number of electrons localized at the site. Importantly, our theorem does not rely on the presence of full spin-rotation symmetry, hence is applicable to systems with spin-orbit coupling. 2. In contrast with the original LSM theorem and its recent generalizations, our theorem does not involve any internal symmetry such as time-reversal. 3. Our theorem is applicable to a number of systems with an even number of half-integer spins in the unit cell. For concreteness, we first derive this theorem for a rectangular lattice with wallpaper group p2mm, before presenting the generalization to other 2D lattices. Finally we discuss possible applications of our theorem to real materials and its possible generalizations. Some mathematical details of the proof of the theorem is provided in the Supplemental Material. Rectangular lattice. Consider quantum spin systems on a rectangular lattice with the 2D wallpaper group G=p2mm. 
Any Hamiltonian satisfying the symmetry G, with or without spin-orbit coupling, must be invariant under the the action of any crystal symmetry operation R ∈ G onthe lattice and on the spins jointly. This action is represented by a unitary transformation on many-body basis states:R: ∏_j | s_j ⟩→∏_j U_j(R) | s_j'=Rj⟩where |s_j⟩ denotes the spin state on site j, R maps site j to j'=Rj, and U_j(R) represents the action of R on the spin state on site j. For translationally invariant systems, the operators U_j(R) on different sites connected by primitive lattice vectors are identical. Let us now consider a subgroup of G that leaves the center of the unit cell (denoted by a) invariant (or the point group at a), denoted by G_a. G_a = 𝔻_2=ℤ_2×ℤ_2 is a group generated by the C_2 rotation and a mirror reflection. Since the Hamiltonian considered here is invariant under G_a, every energy eigenstate must belong to a certain representation of G_a. Recall that a half-integer spin and an integer spin transform as projective and linear representations of the 𝔻_2 point group, respectively.As an example, consider a spin-1/2 located at site a. The two generators of 𝔻_2, the C_2 rotation and mirror reflection R_x, are represented by U(C_2)=iσ_z and U(R_x)=iσ_x acting on the 2D Hilbert space of a spin-1/2, respectively. These two operators anticommute, U(C_2)U(R_x)=-U(R_x)U(C_2), which differs from the multiplication rule for group elements in 𝔻_2: C_2R_x=R_xC_2. This “twisted” relation implies that the 2D Hilbert space of spin-1/2 forms a projective representation of the 𝔻_2 group (see Sec. I of the SM for a brief review of projective representations). It is important to note that states in the Hilbert space must either all form linear representations or all form projective representations of the same class, because excitations that connect the ground state to excited states all carry linear representations of the symmetry group.Now consider a quantum spin system on a rectangular lattice with open boundary condition, which maps onto itself under the point group G_a. Such a lattice is translational invariant apart from the boundary. We ask whether the many-body Hilbert spaceH of the system—the direct product of the spin Hilbert space at every site—forms a linear or projective representation of G_a. These two cases are denoted by a ℤ_2 index ν_a =+1 or -1 respectively. To answer this question, we first note that sites can be grouped into “orbits”: each orbit consists of those sites that map onto each other under the symmetry operations of G_a. For example, any site not on either of the two mirror axes passing through a belongs to an orbit of four sites, with one in each quadrant. Any site on a mirror axis, other than a, belongs to an orbit of two sites that are related by two-fold rotation. With open boundary condition, a is the only fixed point under G_a, hence forms an orbit of its own, { a}. Since all orbits except { a } contain an even number of sites, the many-body Hilbert space of all spins other than the one at a, forms a linear representation of G_a. Therefore, we conclude that the Hilbert space of the entire spin system forms a projective (linear) representation of G_a=𝔻_2, if and only if the spin at the center a is half-integer (integer), respectively. This result can be expressed byν_a=(-1)^2S_a,where S_a, the spin at a, is either a half-integer or integer. 
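The projective nature of this representation is easy to verify explicitly. The short Python check below uses exactly the matrices U(C_2)=iσ_z and U(R_x)=iσ_x quoted above; it is only a numerical illustration of the anticommutation relation (in the abstract group 𝔻_2 the two generators commute, so anticommuting representatives signal a nontrivial projective class), not part of the argument itself.

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

U_C2 = 1j * sigma_z   # two-fold rotation acting on a spin-1/2
U_Rx = 1j * sigma_x   # mirror reflection acting on a spin-1/2

# In D_2 the generators commute, C2 Rx = Rx C2, but on a spin-1/2 the
# representatives anticommute: U(C2) U(Rx) = -U(Rx) U(C2).
assert np.allclose(U_C2 @ U_Rx, -U_Rx @ U_C2)

# The half-integer spin also represents the group identity C2^2 = e by -1,
# another hallmark of a projective (double-valued) representation.
assert np.allclose(U_C2 @ U_C2, -np.eye(2))
```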
When H forms aprojective representation, any Hamiltonian invariant under the point group G_a must have degenerate ground states, simply because projective representations necessarily have dimensions greater than 1. This mathematical fact can be intuitively understood from the non-commutativeness of the algebra U(C_2)U(R_x)=-U(R_x)U(C_2), which can only be realized by matrices of sizes greater than 1 <cit.>. The ground state degeneracy here is thus protected by the point group G_a=𝔻_2, which is a subgroup of the wallpaper group G=p2mm. The above result (<ref>) can be applied to to lattices of increasing sizes, which are invariant under G_a. If the spin at site a is half-integer, the point group symmetry G_a guarantees that the ground state degeneracy persists in the thermodynamic limit, with open boundary condition. We now discuss the implications of this thermodynamic degeneracy for the ground state of the system. First, if the systemhas a unique ground state on a torus, the degeneracy shown above must come from boundary degrees of freedom, implying that the ground state of the system is an SPT state protected by the point group G_a. Interestingly, if realized, such an SPT state of half-integer spins cannot belong to the known classification which assumes physical degrees of freedom form linear representation of G_a. The opposite possibility is that the system also has thermodynamically degenerate ground states on the boundary-less torus. This bulk degeneracy may imply that the ground state issymmetry breaking or topologically ordered. If the system is topologically ordered, the degeneracy shown above for open boundary condition can be the result of fractional point-group symmetry quantum numbers <cit.> of anyons, implying a symmetry-enriched topological state.We have thus ruled out completely featureless ground states for quantum spin systems with half-integer spins on symmetry centers, while leaving alive the possibility of SPT states, symmetry-breaking, and symmetry-enriched topological order. We now further rule out the possibility of an SPT state for systems which have an odd number of half-integer spins on symmetry centers in each unit cell. This is achieved by putting the system on a torus with an odd number of unit cells <cit.>(as shown in Sec.II of the SM, such a torus can always be constructed compatible with the wallpaper groups). In this setup, the whole system has in total an odd number of half-integer spins on symmetry centers, which together form a projective representation of the 𝔻_2 symmetry group. Therefore, the ground state degeneracy remains on the torus.In this case, our result is still in some aspects stronger than the LSM Theorem and its previous generalizations, because it requires only the crystal symmetry, and does not need time-reversal and spin-rotation symmetries.This argument can be readily generalized to other centers of symmetry. Each center of symmetry a in the 2D lattice has an associated point group G_a, which always has the structure of 𝔻_2 for the wallpaper group p2mm. Therefore, for each a, the ℤ_2 quantum number ν_a computed from Eq. (<ref>) determines whether the entire system transforms projectively under G_a. Furthermore, if two centers of symmetry a and b are related by crystal symmetries, they must host the same quantum number ν_a=ν_b. Therefore, in p2mm, there are only four independent quantum numbers, as there are four inequivalent centers of symmetry. 
Any one of them being -1 implies that the ground state must have a degeneracy protected by the wallpaper-group symmetry. Other wallpaper groups.We now generalize our result to other 2D wallpaper groups. Similar to the example of p2mm, we consider a center of symmetry a and the associated point group G_a. Using the same argument as in the previous example, one can show that the system has a G_a-protected ground state degeneracy, if the degrees of freedom at site a transforms projectively under G_a. There are eight different types of point groups in 2D: the cyclic groups ℂ_n and the dihedral groups 𝔻_n, where n=2,3,4,6. Among the eight possible point groups, only three dihedral groups, 𝔻_2, 𝔻_4 and 𝔻_6, have nontrivial projective representations . They both have a ℤ_2 classification of projective representations: one class of linear representation and one class of nontrivial projective representation, realized by an integer spin and a half-integer spin, respectively. Therefore, for each center of symmetry a with G_a=𝔻_2,4,6, the quantum number ν_a defined in Eq. (<ref>) reflects whether the entire system transforms projectively under G_a. (These point groups all contain a two-fold rotation, which implies that all sites except the center of symmetry form orbits of even sizes.) The number of independent quantum numbers is equal to the number of inequivalent centers of symmetry with G_a=𝔻_2,4,6; any ν_a=-1 implies that the ground state must have a degeneracy protected by the wallpaper-group symmetry.Among the 17 2D wallpaper groups, five of them has centers of symmetry for which a quantum number ν_a can be defined. We summarize the position of such centers of symmetry and the number of independent ν_a quantum numbers in Table <ref>. We notice that our no-go theorem does not apply to spin-1/2 models on the honeycomb lattice, because the lattice sites are centers of 𝔻_3 point-group symmetry. Such a center of symmetry cannot be used in our no-go theorem, because 𝔻_3 does not have nontrivial projective representations. This is consistent with the recent construction of a unique symmetric ground state of a honeycomb lattice with spin-1/2 at each lattice site <cit.>.Similar to the original LSM theorem <cit.>, this symmetry-protected ground-state degeneracy can be understood as the surface ground-state degeneracy of a 3D symmetry-protected topological (SPT) state <cit.>. The half-integer-spin degree of freedom that transforms projectively under the point-group symmetry 𝔻_2,4,6 can be realized as the edge state of a one-dimensional (1D) Haldane spin chain <cit.>, which is a 1D SPT state protected by the point-group symmetry symmetry made of objects that transform linearly <cit.>. Therefore, the 2D lattice, containing half-integer spins on the centers of symmetry, can be realized as the surface of a 3D system made of a 2D lattice of 1D Haldane chains, which is a 3D SPT state protected by the wallpaper-group symmetry <cit.>. The surface of such an SPT state must have a symmetry-protected ground-state degeneracy, which is the degeneracy we derived above. Outlooks. Comparing to the LSM Theorem and its recent generalizations, our theorem does not rely on the translation symmetries, and applies to systems with an even number of spin-1/2s per unit cell. In particular, we consider a spin-1/2 model on a checkerboard lattice. Comparing to a square lattice, a checkerboard lattice symmetry allows different spin-spin interactions on the two types of plaquettes, as shown in Fig. <ref>. 
Examples of such models include the checkerboard J_1-J_2 Heisenberg model <cit.>, which is the effective spin model for the so-called planar-pyrochlore quasi-2D materials <cit.>. The checkerboard lattices has the square-lattice p4mm symmetry, with a unit cell containing two lattice sites (see Fig. <ref>). Therefore the LSM theorem does not apply. In contrast, our no-go theorem still applies, because the spin-1/2s are located on the 𝔻_2 center of the lattice (marked as green rhombuses in the corresponding row of Table <ref>). Consequently, our no-go theorem guarantees a symmetry-protected ground-state degeneracy, indicating that the planar-pyrochlore systems are a promising place to look for topological quantum spin liquids.When no-go theorems guarantees a ground-state degeneracy, one possibility is that the ground state is a gapped quantum spin liquid state with an intrinsic topological order, which protects a topological ground-state degeneracy, although all local excitations are gapped. In this case, the no-go theorems often put additional constraints on the possible symmetry-fractionalization patterns realized in such spin liquids. Our no-go theorem can also be extended to provide such constraints: for example, having a projective representation of the 𝔻_2 group at the center of symmetry will determine whether an anyon also carries a projective representation of the 𝔻_2 group <cit.>. We leave an extensive study of this extension to a future work. It will be interesting to generalize our result to 3D spin systems, which we also leave to a future work. We gratefully acknowledge Michael Hermele for very helpful comments on our manuscript. The work at MIT was supported by DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award de-sc0010526.Note added. After completing our manucript we were informed of a related work <cit.>. [pages=1]sgsupp [pages=2]sgsupp [pages=3]sgsupp [pages=4]sgsupp | http://arxiv.org/abs/1705.09190v2 | {
"authors": [
"Yang Qi",
"Chen Fang",
"Liang Fu"
],
"categories": [
"cond-mat.str-el"
],
"primary_category": "cond-mat.str-el",
"published": "20170525141651",
"title": "Ground state degeneracy in quantum spin systems protected by crystal symmetries"
} |
[email protected] and Aerospace Engineering, Rutgers University, Piscataway, NJ 08854Mechanical and Aerospace Engineering, Rutgers University, Piscataway, NJ 08854Department of Mechanical Engineering and Applied Research Laboratories, The University of Texas at Austin, Austin, Texas 78712 Department of Mechanical Engineering and Applied Research Laboratories, The University of Texas at Austin, Austin, Texas 78712 Department of Mechanical Engineering and Applied Research Laboratories, The University of Texas at Austin, Austin, Texas 78712We report an inhomogeneous acoustic metamaterial lens based on spatial variation of refractive index for broadband focusing of underwater sound. The index gradient follows a modified hyperbolic secant profile designed to reduce aberration and suppress side lobes. The gradient index (GRIN) lens is comprised of transversely isotropic hexagonal microstructures with tunable quasi-static bulk modulus and mass density. In addition, the unit cells are impedance-matched to water and have in-plane shear modulusnegligible compared to the effective bulk modulus. The flat GRIN lens is fabricated by cutting hexagonal centimeter scale hollow microstructures in aluminum plates, which are then stacked and sealed from the exterior water. Broadband focusing effects are observed within the homogenization regime of the lattice in both finite element (FEM) simulations and underwater measurements (20-40 kHz). This design approach has potential applications in medical ultrasound imaging and underwater acoustic communications. 43.20.+g, 43.20.Dk, 43.20.El, 43.58Ls Broadband focusing of underwater sound using a transparent pentamode lens Preston S. Wilson December 30, 2023 ========================================================================= § INTRODUCTIONThe quality of focused sound through a conventional Fresnel lens is usually limited by spherical/cylindrical aberration. Recent advances in acoustic metasurface design made it possible to manipulate the transmitted wavefront in an arbitrary way by achieving phase delay using space coiling structures. <cit.> The aberration of the focused sound can be reduced by tuning the phase of the transmitted wave through simple ray tracing. However, this diffraction based design approach usually suffers from unbalanced impedance <cit.> which is crucial to achieve destructive interference for canceling out side lobes. Therefore, this design approach requires more sophisticated modeling. <cit.> Many efforts have been made to achieve extraordinary transmission, <cit.> but the underlying physics is to tune the structure to achieve certain phase gradient of the transmitted wave at a particular frequency which limits the bandwidth of operation. Another disadvantage of the metasurface design is that the device only works at the steady state. <cit.> In other words, it can not focus a pulse to a single focal spot. Apart from the aforementioned disadvantages, the space coiling structure is not applicable for underwater devices because of the low contrast between bulk modulus of common materials and water. Both the fluid phase and the solid phase are connected to the background fluid, the existence of the Biot fast and slow compressional waves <cit.> might cause strong aberration and induce more side lobes, while the shear mode will cause undesired scattering. Thus, we need to employee an alternative design method to overcome these issues.The hyperbolic secant index profile has been widely used in GRIN lens designs. 
<cit.> <cit.> showed that the frequency independent analytical ray trajectories intersect at the same point, and demonstrated that it can be used in phononic crystal design to focus sound inside the device without aberration. <cit.> adopted this approach in sonic crystal design, and experimentally demonstrated the broadband focusing effect beyond the lens with low aberration. Many other designs used the same index profile to focus airborne sound <cit.> and underwater sound. <cit.> Most of the designs are based on variation of the filling fraction to achieve different refractive indices which usually cause significant impedance mismatch. Although transmission is not a big concern in many applications, it is determinant in the focusing capability of the GRIN lens. The focal distance is derived from ray tracing which is a transient solution. Nevertheless, the steady state focusing properties of the lens can be altered due to impedance mismatch between the lens and background medium. One exception is that <cit.> modified the index distribution to reduce aberration and achieved high transmission by using hollow aluminum shells in a water matrix. However, the idea of adjusting the filling fraction introduces anisotropy and limits the range of effective properties which restrict the focal spot to be far from the lens.In this paper, we utilize a two-dimensional (2D) version of the pentamode material (PM) <cit.> to achieve a wide range of refractive indices, and introduce a new modification of the index profile for further aberration reduction. The advantage of PMs is that they can be designed to match the acoustic impedance to water and minimize the shear modulus which is undesired in acoustic designs, thus are very promising in underwater applications. For instance, <cit.> tuned the effective acoustic properties to water and experimentally demonstrated negative refraction at the second compressional mode. The structure is versatile such that it can be designed to achieve strong anisotropy, <cit.> therefore is also a good choice for acoustic cloaking. <cit.> In our design, the unit cells are transversely isotropic with index varying along the incidence plane. The modification of the index profile is done by using a one-dimensional coordinate transformation, the aberration reduction can be clearly observed from ray trajectories. The unit cells of the GRIN lens are designed using a static homogenization technique based on FEM <cit.> according to the modified index profile with a range from 0.5 to 1. Moreover, all the unit cells are impedance matched to water which is the key to obtain optimal focusing effect. The GRIN lens is fabricated by cutting centimeter scale hollow microstructures on aluminum plates using waterjet, then stacking and sealing them together. The interior of the compact solid matrix lens is filled with air, only the exterior faces are connected to water. The acoustic waves in the exterior water background are fully coupled to the structural waves inside the lens so that the lens is backscattering free and is capable of focusing sound as predicted. The GRIN lens is experimentally demonstrated to be capable of focusing underwater sound with high efficiency from 25 kHz to 40 kHz. The present design has potential applications in ultrasound imaging and underwater sensing where the water environment is important. The successful demonstration of our GRIN lens also shed light on the realization of pentamode acoustic cloak. 
<cit.> § DESIGN OF GRADIENT INDEX §.§ Focal distance The rectangular outline of the 2D flat GRIN lens is designed as depicted in Fig. <ref> withindex profile symmetric with respect to the x-axis (y=0). Assuming that the refractive index n is a function only of y, the trajectories of a normally incident wave can be derived by solving a ray equation for y = y(x) based on the fact that the component of slowness along the interface between each layer is constant: 0 n (y(x)) /√(1+y^'2(x) ) = n (y_0) where y_0 = y(0) is the incident positionon the y-axis at the left side of the lens, x=0.The focal distance from the right-hand boundary of the GRIN lens at x=t isd=y_t √(1/n^2 (y_t) - n^2 (y_0) -1) .§.§ Hyperbolic secant and quadratic profiles We first consider a hyperbolic secant index profile n(y):n(y)=n_0 (α y),where n_0and α are constants.This profile, alsoknown as a Mikaelian lens, <cit.>was originally proposed byMikaelian <cit.>for both rectangular and cylindrical coordinates, and is often usedto design for low aberration. <cit.> The ray trajectory is y(x)=1/αsinh^-1[sinh(α y_0)cos(α x)]. Alternatively, consider the quadratic index profile <cit.>n(y)=n_0√(1 - (α y)^2 ),for which the rays are2.2 y(x) = y_0 √(2) sin( π/4 - n_0αx/n(y_0)) .<cit.> noted that theabove two profiles have opposite aberration tendencies,and proposed amixed combination which shows reduced aberration.However, in our design we are interested in a wider range in index, from unity to about 0.5 (unlike Ref. Martin2015 for which the minimum is 1/1.3 = 0.77).This requires α y_0 to exceed unity, which rules out the use of the quadratic profile. It is notable that the purpose of using a wider range of index is to fully exploit the bulk space of the GRIN lens to achieve near field focusing capability.§.§ Reduced aberrationprofileHere we usea modified hyperbolic secant profile by stretching the y-coordinate, as follows: n(y)=n_0( g(α y)) where g(z)=z / (1+β_1z^2 + β_2 z^4).The objective is to make d of Eq. (<ref>) independent of y_0 as far as possible. For smallα y_0 we have from both Eqs. (<ref>) and (<ref>) that y(x) ≈y_0 cosα x, and hence for all three profiles d → d_0 ≡1/n_0αα t as α y_0 → 0 .Note that d_0 is independent of y_0, as expected. This is the value of the focal distance that the modified profile (<ref>) attempts to achieve for all values ofy_0 in the device by selectingsuitable values of the non-dimensional parameters β_1 and β_2.Numerical experimentation led to the choice β_1=-0.0679 and β_2=-0.002. As a demonstration of aberration reduction, we plot the ray trajectories with and without the stretch in the y-direction are shown in Fig. <ref> for comparison. It is clear that the modified secant profile is capable of focusing a normally incident plane wave with minimal aberration. § DESIGN OF UNIT CELLS The flat GRIN lens is designed using six types of unit cells corresponding to the discrete values selected from the modified hyperbolic index profile. Figure <ref> shows the spatial distribution of refractive indices of the lens. The unit cell structure is the regular hexagonal lattice which has in-plane isotropy at the quasi-static regime. <cit.> Using Voigt notation, the 2D pentamode elasticity requires C_11C_22≈ C_12^2 and C_66≈ 0 to minimize the shear modulus. With these requirements satisfied, the main goal is to tune the effective C_11 and mass density at the homogenization limit to achieve the required refractive index and match the impedance to water simultaneously. 
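Before specifying the unit-cell materials, the aberration reduction claimed for the modified profile can be examined numerically with a short ray-tracing sketch. It integrates the ray equation for a normally incident ray, y'' = n(y) n'(y)/n^2(y_0) (obtained by differentiating the invariant n(y)/√(1+y'^2) = n(y_0)), through the lens and then evaluates the focal distance d given above, read as d = y_t [1/(n^2(y_t) - n^2(y_0)) - 1]^{1/2}, for several entry heights y_0 and for both profiles, so that the spread of d across y_0 can be compared. Only β_1 = -0.0679 and β_2 = -0.002 are taken from the text; n_0, α, the half-aperture and the thickness t are illustrative assumptions (chosen so that the plain secant profile spans roughly the quoted index range 0.5-1 over a 40 cm aperture, with a 13.7 cm thick lens), not the values of the fabricated design.

```python
import numpy as np

# Illustrative parameters only: beta1, beta2 are the values quoted in the
# text; n0, alpha, the half-aperture and the thickness t are assumptions.
n0, beta1, beta2 = 1.0, -0.0679, -0.002
half_aperture = 0.20                       # m
alpha = np.arccosh(2.0) / half_aperture    # so that sech(alpha*y_max) = 0.5
t = 0.137                                  # thickness along the x axis, m

def n_sech(y):                 # plain hyperbolic secant profile
    return n0 / np.cosh(alpha * y)

def n_mod(y):                  # modified (stretched) profile
    z = alpha * y
    return n0 / np.cosh(z / (1.0 + beta1 * z**2 + beta2 * z**4))

def focal_distance(profile, y0, steps=20000):
    """Integrate y'' = n(y) n'(y) / n(y0)^2 (normal incidence, y(0)=y0,
    y'(0)=0) through the lens, then return the distance from the exit
    face at which the refracted ray crosses the axis."""
    dn = lambda y, h=1e-6: (profile(y + h) - profile(y - h)) / (2.0 * h)
    dx = t / steps
    y, yp = y0, 0.0
    for _ in range(steps):     # midpoint (RK2) integration
        a1 = profile(y) * dn(y) / profile(y0)**2
        ym, ypm = y + 0.5 * dx * yp, yp + 0.5 * dx * a1
        a2 = profile(ym) * dn(ym) / profile(y0)**2
        y, yp = y + dx * ypm, yp + dx * a2
    # d = y_t * sqrt(1/(n(y_t)^2 - n(y0)^2) - 1): Snell refraction into water
    return y * np.sqrt(1.0 / (profile(y)**2 - profile(y0)**2) - 1.0)

for y0 in (0.04, 0.08, 0.12, 0.16, 0.20):
    print(f"y0 = {y0:4.2f} m   d_sech = {focal_distance(n_sech, y0):.3f} m"
          f"   d_mod = {focal_distance(n_mod, y0):.3f} m")
```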
The material properties of water are taken as bulk modulus κ_0=2.25 GPa and density ρ_0=1000 kg/m^3. The material of the lens slab is aluminum with Young's modulus E=70 GPa, density ρ=2700 kg/m^3 and Poisson's ratio ν=0.33. The geometric parameters of each unit cell, as shown in Fig. <ref>, are predicted using foam mechanics <cit.> and iterated using a homogenization technique based on FEM. <cit.> The geometric parameters of the six types of unit cells are listed in Table <ref>. Note that big value of the radius r at the joints increases the effective shear modulus, but r=0.420 mm is the limit of the machining method we are using. The GRIN lens is comprised of the six types of unit cells, the minimum cutoff frequency is limited by the unit cell with thinnest plates, i.e. n_eff=1, therefore it is essential to examine its band structure. The band diagram as shown in Fig. <ref> is calculated using Bloch-Floquet analysis in COMSOL. The directional band gap along the incident direction occurs near 40 kHz, this sets the upper limit of the lens. The lens is designed following an index gradient, therefore the low frequency focusing capability is limited due to the high frequency approximation nature of the ray theory. Although bending modes exist at low frequency range, they do not cause much scattering due to sufficient shear modulus which prevents the structure from flexure. <cit.> We expect the lens to be capable of focusing underwater sound over a broadband from 10 kHz to 40 kHz.§ SIMULATION RESULTS The lens is formed by combining all the designed unit cells together following the reduced aberration profile. The length of the lens is 40 cm, and the width is 13.7 cm. The material of the lens is aluminum as we described in the previous section. The GRIN is permeated with air and immersed in water so that only structural wave is allowed in the lens. Full wave simulations were done to demonstrate the broadband focusing effect using COMSOL Multiphysics. Figure <ref> shows the intensity magnitude normalized to the maximum value at the focal point from 15 to 40 kHz.A Gaussian beam is normally incident from the left side, and the focal point lies on the right side of the lens. It is clear that the lens works over a broad range of frequency. In the focal plane, the high intensity focusing region moves towards the lens as the frequency increases. This is not surprising as we explain as follows. The low frequency focusing capability is limited due to the high frequency approximation nature of the index gradient, while the high frequency is limited because the longitudinal mode becomes dispersive as shown in Fig. <ref>, i.e. the effective speed is reduced. The best operation frequency of the lens is found to be near 20 kHz where the longitudinal mode is non-dispersive. The cutoff frequency is near 40 kHz as predicted in the band diagram. The as-designed lens has minimized side lobes comparing to conventional diffractive lens. Diffractive acoustic lenses are usually designed by tuning the impedance of each channel to achieve certain phase delay. However, the transmitted amplitudes are different so that it is hard to cancel out the side lobes caused by aperture diffraction. The main advantage of the GRIN lens is that it redirects the ray paths inside the lens, and reduces the diffraction aperture to a minimal size at the exiting face of the lens. 
Figure <ref> shows the normalized intensity magnitude across the focal point along the lens face.The width of the intensity profile at half of its maximum is only 0.47λ at 35 kHz. The focal distance at this frequency is about 5 cm. It is also clear that the intensity magnitudes of the side lobes are all below 1/10 of the maximum value so that our GRIN lens is nearly side lobe free. As we mentioned in Sec. <ref>, the as-designed pentamode GRIN lens is impedance matched to water so that it is acoustically transparent (back-scattering free) to a normally incident plane wave. This feature should result in a very high gain at the focal plane. Figure <ref> shows the simulated sound pressure level (SPL) gain at 33.5 kHz over the focal plane. This plot is generated by subtracting the simulated SPL without the lens from the SPL with the lens for normally incident plane wave beams. It is remarkable that the maximum gain at 33.5 kHz is as high as 11.06 dB which is hard to achieve for a diffractive lens, especially for a 2D device. The advantage of the pentamode GRIN is that it can achieve high gain and minimal side lobes at the same time, however, minimizing the side lobes for a diffractive lens is usually at the cost of introducing high impedance mismatch.Unlike the diffractive metasurfaces, which only work at the steady state, the pentamode GRIN lens is also capable of focusing a plane wave pulse. Figure <ref> shows the simulated pressure variations at each time frame. The acoustic pressure in all the six plots are normalized to the maximum at t=0.36 ms.Two cycles of a plane wave pulse are incident from the left side at the central frequency of 30 kHz. The wave moves towards the lens and then transmits through the lens as shown in each time frame. The wave focuses on the right side of the lens and starts to spread out when t=0.36 ms. It is also easy to see from the third plot, i.e. t=0.24 ms, that the reflection from the water-lens interface is almost negligible.§ EXPERIMENTS§.§ Experimental apparatusThe GRIN lens pictured in Fig. <ref> was fabricated using an abrasive water jet cutting twelve pieces 1.5 cm-thick aluminum plates. The dimensions of the plates were measured and compared to the specified dimensions in Table <ref>. The maximum discrepancy was 0.5 mm from the desired dimension with an average difference of 0.2 mm. These deviations were noted as a source of possible error in the experimental data. The as-tested lens is constructed by assembling twelve fabricated plates so that the inside could be air-tight. Rubber gaskets were cut out of neoprene sheets to provide a 1 cm rubber border around the perimeter of each lens piece and the outer edge of the top and bottom of each piece was lined with a layer of electrical tape and double sided tape to hold the gaskets in place. The layers were then placed on top of one another alternating with rubber gaskets. Two blocks of aluminum measuring 40.0 cm by 15.25 cm, and 2 cm thick were placed on the top and bottom of the stacked pieces and were compressed together using nuts and washers with four steel rods. The compression of the gaskets provided a means of overcoming the surface irregularities on the perimeters of each piece to prevent leakage.All the experimental measurements were done in a rectangular indoor tank approximately 4.5 m in depth with a capacity of 459 m^3 surrounded by cement walls with a sand covered floor. The tank is filled with fresh water and the temperature is assumed to be of negligible variance between tests. 
An aluminum and steel structure was constructed to secure the lens and source separated by 1 cm at a centerline depth of 68.5 cm. The structure was attached to a hydraulically actuated cylinder which held the components at a consistent desired depth for the duration of testing. An exponential chirp at 1 ms in duration with a frequency range of 10 kHz to 70 kHz was used as the excitation signal and the signal was repeated every 100 ms. An automated scanning process as shown in Fig. <ref> was used to acquire hydrophone amplitude measurements.Three stepper motors controlled by MATLAB via an Arduino Uno moved a rod with a RESON TC4013 Hydrophone attached to the end through a rectangular area in front of the GRIN lens. The scan area was collinear with center-line plane of the source and GRIN lens at a depth of 685 mm. Figure <ref> shows the experimental apparatus, including the support structure, GRIN Lens, and the planar hydrophone scanner.The area was 31.0 cm parallel to the lens face by 20.0 cm perpendicular to the lens face. The step size was set to 5 mm which resulted in 2,583 data points. As the hydrophone moved to each location, a pause of 2 seconds was initiated by the MATLAB program to negate rod dynamics due to the swaying caused by the scanner motion in the water. Voltage outputs were acquired from the oscilloscope and stored in an excel spreadsheet labeled for its exact location in the scan area. After each point had voltage data, the scanning program terminated after approximated 4.5 hours of run time. This process was completed with both the lens and the source, and another case with just the source. This would allow the effects due to the inclusion of the lens to be quantified by comparing the amplitude changes between the source only case and the source-lens case.To begin simulation verification, a source capable of generating constant amplitude acoustic waves was constructed and tested. The source is 29.5 cm in width, 22.9 cm in height, and 6.4 cm in depth. The planarity was verified by submerging the source at a depth of 68.5 cm measured from centerline and measuring pressure amplitude using an omni-directional hydrophone. The test signal was prescribed to be a sinusoidal pulse at a frequency of 35 kHz and amplitude of 2 Volts peak-to-peak for 15 cycles continuously repeating every 100 ms.The Hilbert transform was taken of the hydrophone measurement and the mean amplitude of the Hilbert transform was calculated for the steady state region of the signal. The transmit voltage response (TVR) of a transducer is the amount of sound pressure produced per volt applied and is calculated using TVR = 20log_10(V_outR_meas/V_inR_ref) - RVS_cal, where V_out is the output voltage from the hydrophone, V_in is the voltage applied to the transducer, R_meas is the separation distance between the transducer and the hydrophone, R_ref is the reference distance set to 1 m, and RVS_cal is receive sensitivity of the calibrated hydrophone taken from the hydrophone documentation. The R_meas distance was set to 9.5 cm, V_in was 2 Vpp, and RVS_cal was 211 dB/μPa. The planarity amplitude test results are shown inFig. <ref>.The amplitude measurements show that there is relatively consistent planarity across the aperture of the source face. However, as the boundaries of the source are reached, the amplitude reduces by approximately 7 dB. Even though the amplitude decreases, the source operates effectively enough to be used to verify the GRIN lens simulations. 
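Expressed in code, the TVR evaluation described above reduces to a few lines. The sketch below assumes a recorded steady-state hydrophone trace v_out (a hypothetical array of voltages, not data from the measurement files) and simply applies the quoted formula with the quoted values of V_in, R_meas, R_ref and RVS_cal; the numerical value and sign convention of the receive sensitivity should of course be taken from the hydrophone datasheet.

```python
import numpy as np
from scipy.signal import hilbert

def transmit_voltage_response(v_out, v_in=2.0, r_meas=0.095, r_ref=1.0,
                              rvs_cal=211.0):
    """TVR in dB from the steady-state portion of a hydrophone voltage
    trace: envelope via the Hilbert transform, mean amplitude, then the
    TVR formula quoted above."""
    envelope = np.abs(hilbert(np.asarray(v_out, dtype=float)))
    v_amp = envelope.mean()
    return 20.0 * np.log10(v_amp * r_meas / (v_in * r_ref)) - rvs_cal
```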
It should be noted that source planarity may be a cause for a reduction in amplitude shown in the GRIN lens experiment because the width of the lens extends outside the borders of the source width.§.§ Data Processing For both the source-only case and the source-lens case, the cross-correlation between the input signal and the voltage output from the hydrophone was determined. A Hann window was applied to the cross-correlation over the direct path form the source. This removed any reflections from the water surface of the tank or diffraction from the source interaction with the edges of the lens from contaminating the results. An example of this process is shown in Fig. <ref>.The Fourier transform of the cross-correlation for both cases was then found. The gain was then calculated by means of Eq. <ref>,G = 20log_10(X_lenswin/X_sourcewin) where G is the gain at a particular scan point and frequency, X_lenswin is the windowed cross-correlation from the source-lens case, and X_sourcewin is the windowed cross-correlation from the source-only case. §.§ Measurement resultsAs outlined in Sec. <ref>, the gain was measured by finding the amplitude difference between the source-only and the source-lens cases. The measurements at frequencies from 20 to 45 kHz are shown in Fig. <ref>.The amplitude scale represents the gain at each hydrophone location in decibels. The general shape of the beam pattern shows a clear focusing tendency of the lens, especially in the 30-40 kHz range.The data shows evidence of a focused beam pattern forming at 20 kHz with approximately -5 dB of gain at the focus. As the frequency increases, the beam becomes narrower and the gain increases to peak levels at 30 and 35 kHz. There is also evidence a stop band is approached as the frequency approaches 45 kHz. Figure <ref> shows the beam pattern of the normalized intensity through the focus for 35 kHz. Significant side lobe amplitude reduction is evident, and the beam width is 0.44λ with the speed of sound in fresh water assumed to be 1480 m/s. The maximum gain through the frequency range was determined to be at 33.5 kHz as shown in Fig. <ref>.To better quantify the data, a cross section of the amplitude data was extracted from upper plot in Fig. <ref> for a constant distance from the lens through the peak gain of focus. The maximum gain was observed to be 4.0 dB and the beam pattern was found to have 12 dB of sidelobe amplitude reduction compared to the focus as shown in the lower plot in Fig. <ref>.The as-designed and as-tested lenses both work over a broad range of frequency. Figures <ref> and <ref> both show that the focal point moves toward the lens with the increase of frequency as predicted from the band diagram. It is also clear that the side lobe suppression ability of the GRIN lens in both simulation and experiment agree to a remarkable degree as can be seen from Figs. <ref> and <ref>, where the magnitude of the intensity of the side lobes are all lower than 1/10 of the maximum magnitude at the focal point. It is noted that the power magnification at the focal point have certain differences between simulations and experiments. These discrepancies are mainly due to the fabrication of the lens as we explain in the following section.§.§ Sources of Error and DiscussionPotential error in the experiment was noted as data was taken. First, the source itself had acceptable planarity, but as shown in Fig. <ref>, there is amplitude reduction at the edges of the source. 
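The processing chain just described (windowed cross-correlation, Fourier transform, ratio in dB) can be sketched as follows; the traces x (excitation), y_src (source only) and y_lens (source plus lens), as well as the sample window [i0, i1) isolating the direct arrival, are placeholders rather than quantities taken from the measurement files.

```python
import numpy as np
from scipy.signal import correlate
from scipy.signal.windows import hann

def windowed_spectrum(x, y, i0, i1):
    """Cross-correlate the excitation x with a measured trace y, apply a
    Hann window over the direct-path arrival (samples i0:i1 of the full
    cross-correlation) and return the magnitude spectrum."""
    c = correlate(y, x, mode="full")
    w = np.zeros_like(c)
    w[i0:i1] = hann(i1 - i0)
    return np.abs(np.fft.rfft(c * w))

def gain_db(x, y_src, y_lens, i0, i1):
    """Gain versus frequency bin at one scan point, lens case over
    source-only case, as in the expression for G above."""
    return 20.0 * np.log10(windowed_spectrum(x, y_lens, i0, i1) /
                           windowed_spectrum(x, y_src, i0, i1))
```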
This results in the outside portions of the lens to have less contribution to the focusing beam pattern than was assumed in the simulation. The lens pieces themselves have a machining tolerance that also affects the mass and stiffness properties of the architecture. With an effectively random distribution of tolerances throughout the assembled lens, the altered effective index distribution may cause some variability in the focal distance. During the scanning process, the hydrophone rod moved from location to location to acquire data. In order to protect the scanning components, the scanner could not be submerged underwater, but the depth of the lens and source were desired to be at the greatest depth possible to eliminate contamination by reflections from the water surface. However, this resulted in the hydrophone rod to have a length longer than the depth of the lens with a single attachment point at its extreme. As the location changed, the resistance of the water caused the lens to sway momentarily during the beginning of each measurement potentially affecting the results.The lens construction also includes the rubber gaskets between each piece. Some excess rubber was necessary to extend over the perimeters of each lens piece to ensure a watertight seal. However, this excess rubber results in an impedance mismatch between the lens face and the surrounding water. This causes a reflection of wave energy at both the front and back faces of the lens and inevitably causes a reduction of energy that should reach the focus. The surface impedance mismatch induced by the alternating layers causes a lower gain than expected. Moreover, the impedance mismatch could cause focal distance shift even though the index distribution still follows the modified profile as we described in the introduction.These sources of error support the observed differences between the simulation and experiment with the most noticeable being the lower gain obtained via the experiment. There is a 5 dB deficit from the simulations and can be attributed to the excess rubber causing and impedance mismatch with high confidence. § CONCLUSIONIn conclusion, we have designed and fabricated a pentamode GRIN lens based on a modified secant index profile. We have experimentally demonstrated itsbroadband focusing effect for underwater sound. The unit cells are tuned to beimpedance-matchedto water so that the GRIN lens is capable of focusing sound with minimized aberration. Moreover, the physics behind the GRIN lens makes it possible to focus sound at both steady state and transient domain. The mismatch of the focal distance in simulation and experiments is due to theaccuracy of the waterjet machining process and the assembly method which altered the refractive index. This issue couldbe successfully resolved by using more advanced fabrication methods such as wire EDM or 3D metal printing. The design method can also be easily extended to the design of anisotropic metamaterials such as directional screens and acoustic cloaks.§ ACKNOWLEDGMENTS This work was supported by ONR through MURI Grant No. N00014-13-1-0631.31 urlstyle[Li et al.(2012)]Li2012a Y. Li, B. Liang, X. Tao, X.-F. Zhu, X.-Y. Zou, and J.-C. Cheng. Acoustic focusing by coiling up space. Appl. Phys. Lett., 1010 (23):0 233508, 2012. 10.1063/1.4769984.[Xie et al.(2014)]Xie2014 Y. Xie, W. Wang, H. Chen, A. Konneker, B.-I. Popa, and S. A. Cummer. Wavefront modulation and subwavelength diffractive acoustics with an acoustic metasurface. Nat. Commun., 5:0 5553, 2014. 
10.1038/ncomms6553.[Li et al.(2014)]Li2014 Y. Li, G. Yu, B. Liang, X. Zou, G. Li, S. Cheng, and J. Cheng. Three-dimensional ultrathin planar lenses by acoustic metamaterials. Sci. Rep., 4:0 6830, 2014. 10.1038/srep06830.[Wang et al.(2014)]Wang2014 W. Wang, Y. Xie, A. Konneker, B.-I. Popa, and S. A. Cummer. Design and demonstration of broadband thin planar diffractive acoustic lenses. Appl. Phys. Lett., 1050 (10):0 101904, 2014. 10.1063/1.4895619.[Li et al.(2015)]Li2015 Y. Li, X. Jiang, B. Liang, J. Cheng, and L. Zhang. Metascreen-based acoustic passive phased array. Phys. Rev. Applied, 40 (2), 2015. 10.1103/physrevapplied.4.024003.[Estakhri et al.(2016)]Estakhri2016 N. M. Estakhri and A. Alù. Wave-front transformation with gradient metasurfaces. Phys. Rev. X, 60 (4), 2016. 10.1103/physrevx.6.041008.[Li et al.(2016)]Li2016 Yong Li, Shuibao Qi, and M Badreddine Assouar. Theory of metascreen-based acoustic passive phased array. New Journal of Physics, 180 (4):0 043024, 2016. 10.1088/1367-2630/18/4/043024.[Molerón et al.(2014)]Moleron2014 M. Molerón, M. Serra-Garcia, and C. Daraio. Acoustic fresnel lenses with extraordinary transmission. Appl. Phys. Lett., 1050 (11):0 114109, 2014. 10.1063/1.4896276.[Tang et al.(2015)]Tang2015 K. Tang, C. Qiu, J. Lu, M. Ke, and Z. Liu. Focusing and directional beaming effects of airborne sound through a planar lens with zigzag slits. J. Appl. Phys., 1170 (2):0 024503, 2015. 10.1063/1.4905910.[Biot(1956)]Biot1956 M. A. Biot. Theory of propagation of elastic waves in a fluid-saturated porous solid. i. low-frequency range. J. Acoust. Soc. Am., 280 (2):0 168, 1956. 10.1121/1.1908239.[Biot(1962)]Biot1962 M. A. Biot. Mechanics of deformation and acoustic propagation in porous media. J. of Appl. Phys., 330 (4):0 1482, 1962. 10.1063/1.1728759.[Gomez-Reino et al.(2002)]GRINOPTICS C. Gomez-Reino, M. V. Perez, and C. Bao Gradient-Index Optics: Fundamentals and Applications. Springer, New York, 2002.[Lin et al.(2009)]Lin09 S.-C. S. Lin, T. J. Huang, J.-H. Sun, and T.-T. Wu. Gradient-index phononic crystals. Phys. Rev. B, 79:0 094302, 2009. 10.1103/PhysRevB.79.094302.[Climente et al.(2010)]Climente2010 A. Climente, D. Torrent, and J. Sanchez-Dehesa. Sound focusing by gradient index sonic lenses. Appl. Phys. Lett., 970 (10):0 104103, 2010. 10.1063/1.3488349.[Zigoneanu et al.(2011)]Zigoneanu2011 L. Zigoneanu, B.-I. Popa, and S. A. Cummer. Design and measurements of a broadband two-dimensional acoustic lens. Phys. Rev. B, 840 (2), 2011. 10.1103/physrevb.84.024305.[Romero-García et al.(2013)]Romero-Garcia2013 V. Romero-García, A. Cebrecos, R. Picó, V. J. Sánchez-Morcillo, L. M. Garcia-Raffi, and J. V. Sánchez-Pérez. Wave focusing using symmetry matching in axisymmetric acoustic gradient index lenses. Appl. Phys. Lett., 1030 (26):0 264106, 2013. 10.1063/1.4860535.[Park et al.(2016)]Park2016 C. M. Park, C. H. Kim, H. T. Park, and S. H. Lee. Acoustic gradient-index lens using orifice-type metamaterial unit cells. Appl. Phys. Lett., 1080 (12):0 124101, 2016. 10.1063/1.4944333.[Martin et al.(2010)]Martin10 T. P. Martin, M. Nicholas, G. J. Orris, L.-W. Cai, and D. Torrent. Sonic gradient index lens for aqueous applications. Appl. Phys. Lett., 97:0 113503, 2010. 10.1063/1.3489373[Martin et al.(2015)]Martin2015 T. P. Martin, C. J. Naify, E. A. Skerritt, C. N. Layman, M. Nicholas, D. C. Calvo, G. J. Orris, D. Torrent, and J. Sanchez-Dehesa. Transparent gradient-index lens for underwater sound based on phase advance. Phys. Rev. Appl., 40 (3), 2015. 
10.1103/physrevapplied.4.034003.[Milton et al.(1995)]Milton95 G. W. Milton and A. V. Cherkaev. Which elasticity tensors are realizable? J. Eng. Mat. Tech., 1170 (4):0 483–493, 1995. 10.1115/1.2804743.[Norris et al.(2011)]Norris11mw A. N. Norris and A. J. Nagy. Metal Water: A metamaterial for acoustic cloaking. In Proceedings of Phononics 2011, Santa Fe, NM, USA, May 29-June 2, pages 112–113, Paper Phononics–2011–0037, 2011.[Hladky-Hennion et al.(2013)]Hladky-Hennion13 A.-C. Hladky-Hennion, J. O. Vasseur, G. Haw, C. Croënne, L. Haumesser, and A. N. Norris. Negative refraction of acoustic waves using a foam-like metallic structure. Appl. Phys. Lett., 1020 (14):0 144103, 2013. 10.1063/1.4801642.[Layman et al.(2013)]LaymanOrris2012 C. N. Layman, C. J. Naify, T. P. Martin, D. C. Calvo, and G. J. Orris. Highly-anisotropic elements for acoustic pentamode applications. Phys. Rev. Lett., 111:0 024302–024306, 2013. 10.1103/PhysRevLett.111.024302.[Norris(2008)]Norris08b A. N. Norris. Acoustic cloaking theory. Proc. R. Soc. A, 464:0 2411–2434, 2008. 10.1098/rspa.2008.0076.[Chen et al.(2015)]Chen2015 Y. Chen, X. Liu, and G. Hu. Latticed pentamode acoustic cloak. Sci. Rep., 5:0 15745, 2015. 10.1038/srep15745.[Hassani et al.(1998)]Hassani98I B. Hassani and E. Hinton. A review of homogenization and topology optimization I-homogenization theory for media with periodic structure. Comp. Struct., 690 (6):0 707–717, 1998. 10.1016/S0045-7949(98)00131-X.[Mikaelian et al.(1980)]Mikaelian1980 A. L. Mikaelian and A. M. Prokhorov. Self-focusing media with variable index of refraction. Progress in Optics, pages 279–345, 1980. 10.1016/s0079-6638(08)70241-5.[Mikaelian(1951)]Mikaelian1951 A. L. Mikaelian. Application of stratified medium for waves focusing. Doklady Akademii Nauk SSSR, 81:0 569–571, 1951.[Norris(2014)]Norris2014 A. N. Norris. Mechanics of elastic networks. Proc. R. Soc. A, 4700 (2172):0 20140522, 2014. 10.1098/rspa.2014.0522.[Kim et al.(2001)]KimHassani H. S. Kim and S. T. S. Al-Hassani. A morphological elastic model of general hexagonal columnar structures. Int. J. Mech. Sc., 430 (4):0 1027–1060, 2001. 10.1016/S0020-7403(00)00038-2.[Cai et al.(2016)]Cai2016 X. Cai, L. Wang, Z. Zhao, A. Zhao, X. Zhang, Tao Wu, and H. Chen. The mechanical and acoustic properties of two-dimensional pentamode metamaterials with different structural parameters. Appl. Phys. Lett., 1090 (13):0 131904, 2016. 10.1063/1.4963818. | http://arxiv.org/abs/1705.10274v1 | {
"authors": [
"Xiaoshi Su",
"Andrew N. Norris",
"Colby W. Cushing",
"Michael R. Haberman",
"Preston S. Wilson"
],
"categories": [
"physics.class-ph"
],
"primary_category": "physics.class-ph",
"published": "20170526130230",
"title": "Broadband focusing of underwater sound using a transparent pentamode lens"
} |
UWThPh-2017-11 Renormalization and radiative corrections to masses in a general Yukawa modelM. FoxE-mail: [email protected] , footnote1 W. GrimusE-mail: [email protected] M. LöschnerE-mail: [email protected] University of Vienna, Faculty of Physics Boltzmanngasse 5, A–1090 Vienna, AustriaOctober 2, 2017 ====================================================================================================================================================================================================================================================== We consider a model with arbitrary numbers of Majorana fermion fields and real scalar fields φ_a, general Yukawa couplings anda Z_4 symmetry that forbids linear and trilinear termsin the scalar potential. Moreover, fermions become massive only afterspontaneous symmetry breaking of the Z_4 symmetryby vacuum expectation values (VEVs) of the φ_a. Introducing the shifted fields h_a whose VEVs vanish,renormalization of the parameters of the unbroken theory suffices to make the theory finite. However, in this way, beyond tree level it is necessary to perform finite shifts of the tree-level VEVs, induced by the finite parts of the tadpole diagrams, in order to ensure vanishing one-point functions of the h_a.Moreover, adapting the renormalization scheme to a situation with many scalarsand VEVs, we consider the physical fermion and scalar masses as derivedquantities, i.e. as functions of the coupling constants and VEVs.Consequently, the masses have to be computed order by order in aperturbative expansion. In this scheme we compute the selfenergies offermions and bosons and show how to obtain the respective one-loopcontributions to the tree-level masses. Furthermore, we discuss the modification of our results in the case of Dirac fermions and investigate, by way of an example, the effects of a flavour symmetry group. § INTRODUCTIONThanks to the results of the neutrino oscillation experiments—see forinstance <cit.>—it is nowfirmly established that at least two light neutrinos have a nonzero mass and that there is a non-trivial lepton mixing matrix or PMNS matrix in analogy to the quark mixing matrix or CKM matrix.The surprisingly large mixing angles inthe PMNS matrix have given a boost to model building with spontaneously broken flavour symmetries—for recent reviews see <cit.>.Many interesting results have been discovered, however, no favoured scenario has emerged yet.Moreover, predictions of neutrino mass and mixing models refer frequently to tree-level computations. It would thus be desirable to check the stability of such predictions under radiative corrections. In the case of renormalizable models one has a clear-cut and consistent method to remove ultraviolet(UV) divergences and to compute such corrections.However, there is the complication that the envisaged models always have a host of scalars and often complicated spontaneous symmetry breaking (SSB) of the flavour group. This makes it impossible to replace all Yukawa couplings by ratios of masses over vacuum expectation values (VEVs),as done for instance in the renormalization of the Standard Model.Of course, one could replace part of the Yukawa coupling constants by masses, but this would make the renormalization procedure highly asymmetric. 
In this paper we suggest to make such models finite by renormalization of the parameters of the unbroken model and to perform finite VEV shifts at the loop level in order to guarantee vanishingscalar one-point functions of the shiftedscalar fields <cit.>.Additionally, we introduce finite field strength renormalization forobtaining on-shell selfenergies.In this way, all fermion and scalar massesare derived quantities and functions of the parameters of the model. In the usual approach to renormalization of theories with SSB andmixing <cit.> one has counterterms for masses,quark and lepton mixing matrices—see forinstance <cit.>—and tadpoles—see forinstance <cit.>.[There are othertreatments of tadpoles adapted to the theory where they occur,see for instance reference <cit.> for the MSSM and <cit.> where the issue of gaugeinvariance is discussed.] We stress that in our approach there are no such counterterms because we use an alternative approach tailored to the situation with a proliferation of scalars and VEVs.In order to present the renormalization scheme in a clear and compact way, we consider a toy modelwhich has* an arbitrary number of Majorana or Dirac fermions,* an arbitrary number of neutral scalars,* a Z_4 (Z_2) symmetry which forbidsMajorana (Dirac) fermion masses beforeSSB[This is motivated by the Standard Modelwhere—before SSB—fermion massesas well as linear and trilinear terms in the scalar potentialare absent due to the gauge symmetry.]and * general Yukawa interactions.We put particular emphasis on the treatment of tadpoles. Since radiativecorrections in this model are already finite due torenormalization with the counterterms of theunbroken theory, also the sum of all tadpole contributions, i.e.the loop contributions and those induced by the counterterms of the unbrokentheory, is finite.However, tadpoles introduce finite VEV shifts which have to be takeninto account for instance in the selfenergies. Eventually, the finiteVEV shifts also contribute to the radiative corrections of the tree-levelmasses.[After SSB, these shifts have to be taken into account everywhere in the Lagrangian where VEVs appear in order to obtain a consistent set of counterterms.] We also focus on Majorana fermions, having in mind that neutrinos automaticallyobtain Majorana nature through the seesaw mechanism <cit.>.An attempt at a renormalizationscheme—with one fermion and one scalar field—along the lines discussed herehas already been made in <cit.>; however,the treatment of the VEV in this paper cannot be generalized to the case of more than one scalar field. The paper is organized as follows. In section <ref> we introducethe Lagrangian, define the counterterms and discuss SSB.Section <ref> is devoted to the explanation of ourrenormalization scheme, while in section <ref> we explicitly computethe selfenergies of fermions and scalars at one-loop order. We present anexample of a flavour symmetry in section <ref> and study how thesymmetry teams up with the general renormalization scheme. In section <ref> we describe the changes when one has Dirac fermionsinstead of Majorana fermions. Finally, in section <ref>we present the conclusions.Some details which are helpful for reading the papercan be found in the three appendices. § TOY MODEL SETUPIn this section, we give the specifics of the investigated model anddiscuss the generation of masses via SSB. We focus on Majorana fermions. Throughout this paper we always use the sum convention, if not otherwise stated. 
§.§ Bare and renormalized LagrangianThe bare Lagrangian is given by ℒ_B= i χ̅^_iLγ^μ∂_μχ^_iL + 1/2( ∂_μφ^_a )( ∂^μφ^_a ) + ( 1/2 (Y^_a)_ij χ^_iL^T C^-1χ^_jLφ^_a + )- 1/2 ( μ_ B^2 )_abφ^_a φ^_b - 1/4λ^_abcd φ^_a φ^_b φ^_c φ^_d. The charge-conjugation matrix C acts only on the Dirac indices. We assume n_χ chiralMajorana fermion fields χ^_iL andn_φ real scalar fields φ^_a.This Lagrangian exhibits the Z_4 symmetry𝒮: χ^_L→ iχ^_L, φ^→ -φ^,withχ^_L =( [χ^_1L;⋮; χ^_n_χ L ]), φ^ =( [ φ^_1;⋮; φ^_n_φ ]).Note that( Y^_a )^T = Y^_a ∀ a = 1,…,n_χ,( μ_ B^2 )_ab = ( μ_ B^2 )_baand λ^_abcd is symmetric in all indices.[One can showthat the number of independent elements of λ^_abcd is(n_φ + 34 ).] We define the renormalized fields by χ^_L = Z_χ^(1/2) χ_L, φ^ = Z^(1/2)_φ φ,where χ_L and φ are the vectors of the renormalized fermion and scalar fields, respectively. The quantity Z_χ^(1/2) isa general complex n_χ× n_χ matrix, whileZ^(1/2)_φ is a real but otherwise generaln_φ× n_φ matrix. Since we use dimensional regularization with dimensiond = 4 - ε,we also introduce an arbitrary mass parameterwhich rendersthe renormalized Yukawa and quartic coupling constants dimensionless.We split the bare Lagrangian into ℒ_B = ℒ + δℒ,where the renormalized Lagrangian is given byℒ = i χ̅_iLγ^μ∂_μχ_iL + 1/2( ∂_μφ_a )( ∂^μφ_a ) + ( 1/2 ^ε/2(Y_a)_ij χ_iL^T C^-1χ_jLφ_a + )- 1/2μ^2_abφ_a φ_b - 1/4^ελ_abcd φ_a φ_b φ_c φ_d andδℒ =i δ^(χ)_ijχ̅_iLγ^μ∂_μχ_jL + 1/2δ^(φ)_ab( ∂_μφ_a )( ∂^μφ_b ) + ( 1/2 ^ε/2(δ Y_a)_ij χ_iL^T C^-1χ_jLφ_a + )- 1/2δμ^2_abφ_a φ_b - 1/4^εδλ_abcd φ_a φ_b φ_c φ_d contains the counterterms.In δℒ, the counterterms corresponding to theparameters in ℒ are given by ^ε/2δ Y_a= ( Z_χ^(1/2))^T Y^_bZ_χ^(1/2)( Z^(1/2)_φ)_ba - ^ε/2 Y_a,^εδλ_abcd = λ^_a'b'c'd'( Z^(1/2)_φ)_a'a( Z^(1/2)_φ)_b'b( Z^(1/2)_φ)_c'c( Z^(1/2)_φ)_d'd- ^ελ_abcd,δμ^2= ( Z^(1/2)_φ)^T μ^2_ BZ^(1/2)_φ - μ^2. Note that, whenever possible, we use matrix notation,as done in equations (<ref>) and (<ref>). Moreover we have definedδ^(χ) =( Z_χ^(1/2))^† Z_χ^(1/2) - , δ^(φ) =( Z_φ^(1/2))^T Z_φ^(1/2) - .The renormalized parameters have the same symmetry propertiesas the unrenormalized ones, i.e. Y_a^T = Y_a ∀ a = 1,…,n_χ, μ^2_ab = μ^2_baand λ_abcd is symmetric in all indices. The same applies to thecorresponding counterterms. §.§ Spontaneous symmetry breakingWe introduce the shift φ_a = ^-ε/2v̅_a + h_a v̅_a = v_a + Δ v_a.For convenience we have split the shift into v_a and Δ v_a; below we will identify the v_a with thetree-level VEVs of the scalar fields φ_a, while the Δ v_a indicate furtherfinite shifts effected by loop corrections. Throughout our calculations, the symbol δ signifiesUV divergent counterterms, while with the symbol Δwe denote finite shifts. A one-loop discussionof Δ v_a will be presented insection <ref>. The shift leads to the scalar potential, including counterterms,V + δ V - V_0= ^-ε/2( t_a + Δ t_a + δμ^2_abv̅_b +δλ_abcd v̅_b v̅_c v̅_d )h_a + 1/2( ( M^2_0 )_ab +( Δ M^2_0 )_ab + δμ^2_ab +3 δλ_abcd v̅_c v̅_d ) h_a h_b + ^ε/2( λ_abcd + δλ_abcd) v̅_dh_a h_b h_c+ 1/4^ε( λ_abcd + δλ_abcd) h_a h_b h_c h_d, with V as in equation (<ref>),t_a = μ^2_ab v_b + λ_abcd v_b v_c v_d, Δ t_a = μ^2_abv̅_b + λ_abcdv̅_b v̅_c v̅_d- t_a,V_0 being the constant term, ( M^2_0 )_ab≡μ^2_ab + 3 λ_abcd v_c v_d ( Δ M^2_0 )_ab≡μ^2_ab + 3 λ_abcdv̅_c v̅_d - ( M^2_0 )_ab.The quantities Δ t_a and ( Δ M^2_0 )_abwill become useful when we go beyond the tree level becausethey will be induced by the shifts Δ v_a. 
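To make the bookkeeping of the tree-level quantities defined above concrete, the following minimal Python sketch assembles the tadpole vector t_a, the scalar mass matrix (M_0^2)_{ab} and the fermion mass matrix m_0 = v_a Y_a from a given set of couplings and candidate VEVs. All numerical inputs (dimensions, mu^2, lambda_{abcd}, Y_a, v_a) are arbitrary placeholders chosen only for illustration and are not tied to any parameter point discussed in the text.

```python
import itertools
import numpy as np

# Arbitrary illustration inputs (NOT parameters advocated in the text):
n_phi, n_chi = 2, 2
rng = np.random.default_rng(0)

mu2 = np.array([[-1.0,  0.1],
                [ 0.1, -0.8]])                    # symmetric (mu^2)_{ab}

lam = rng.normal(scale=0.05, size=(n_phi,)*4)     # quartic coupling lambda_{abcd},
lam = sum(np.transpose(lam, p)                    # symmetrized in all four indices
          for p in itertools.permutations(range(4))) / 24.0

Y = rng.normal(size=(n_phi, n_chi, n_chi)) + 1j*rng.normal(size=(n_phi, n_chi, n_chi))
Y = 0.5*(Y + np.transpose(Y, (0, 2, 1)))          # (Y_a)^T = Y_a (Majorana structure)

v = np.array([1.2, 0.3])                          # candidate VEVs v_a

# tadpole vector       t_a         = (mu^2)_{ab} v_b + lambda_{abcd} v_b v_c v_d
t = mu2 @ v + np.einsum('abcd,b,c,d->a', lam, v, v, v)

# scalar mass matrix   (M_0^2)_{ab} = (mu^2)_{ab} + 3 lambda_{abcd} v_c v_d
M0sq = mu2 + 3.0*np.einsum('abcd,c,d->ab', lam, v, v)

# fermion mass matrix  m_0 = sum_a v_a Y_a   (complex symmetric)
m0 = np.einsum('a,aij->ij', v, Y)

print("tadpoles t_a          :", np.round(t, 4))
print("scalar mass^2 spectrum:", np.round(np.linalg.eigvalsh(M0sq), 4))
print("fermion masses        :", np.round(np.linalg.svd(m0, compute_uv=False), 4))
# The fermion masses are the singular values of m_0, the scalar masses the
# eigenvalues of the real symmetric M_0^2.
```

At a stationary point of the potential the printed tadpoles vanish, which is the stationarity condition discussed in the text; for a generic choice of the v_a they do not.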
We will drop V_0 in the rest of the paper since it does not alter the dynamics of the theory.From now on we choose the v_a as the tree-level vacuum expectationvalues (VEVs) of the scalars, i.e. as the values of the φ_a at the minimum of V(φ).Taking the derivative of the scalar potential V, we obtain∂ V/∂φ_a =μ^2_abφ_b + ^ελ_abcdφ_b φ_c φ_d.Therefore, the conditions that the v_a (a=1,…,n_φ)correspond to a stationary point of V are given by t_a = 0a=1,…,n_φ.SSB occurs if the minimumφ_1 = v_1, …, φ_n_φ = v_n_φ of V is non-trivial, i.e.different from v_1 = ⋯ = v_n_φ = 0.In any case, whether there is SSB or not, M^2_0 of equation (<ref>)is the tree level mass matrix of the scalars.The mass matrix of the fermions is given bym_0 = ∑_a=1^n_φ v_a Y_a.The subscript 0 in m_0 and M^2_0 indicates tree levelmass matrices. The tree-level mass matrices and fermions and scalars are diagonalized by U_0^T m_0 U_0= m̂_0 ≡( m_01, …, m_0 n_χ), W_0^T M^2_0 W_0= M̂^2_0 ≡( M^2_01, …, M^2_0 n_φ), where U_0 is unitary <cit.> and W_0 is orthogonal.The diagonalization matrices U_0 and W_0allow us to introduce mass eigenfields χ̂_jL and ĥ_a via χ_iL = ( U_0 )_ijχ̂_jL h_a = (W_0)_abĥ_b,respectively. Rewriting the Lagrangian in terms of the mass eigenfields amounts to the replacements δ^(χ) →δ̂^(χ) =U_0^†δ^(χ) U_0, Y_a→Ŷ_a = (U_0^TY_b U_0 ) (W_0)_ba, δ^(φ) →δ̂^(φ) = W_0^T δ^(φ) W_0, v_a→v̂_a = (W_0)_ba v_b, t_a→t̂_a = (W_0)_ba t_b, μ^2→μ̂^2 = W_0^T μ^2 W_0,λ_abcd →λ̂_abcd = λ_a'b'c'd'(W_0)_a'a (W_0)_b'b (W_0)_c'c (W_0)_d'd, such that the form of the Lagrangian is preserved. Therefore, without loss of generality we assume that we arein the mass bases of fermions and scalars, when we perform theone-loop computation of the selfenergies. Note that v̂̅̂_a and Δv̂_a are defined analogously tov̂_a.In the mass basis it is useful to rewrite the Yukawa interaction asℒ_Y = - 1/2χ̅̂̅( Ŷ_a γ_L + Ŷ_a^* γ_R ) χ̂( ^ε/2ĥ_a + v̂̅̂_a )withγ_L =- γ_5/2, γ_R =+ γ_5/2, χ̂= ( [ χ̂_1;⋮; χ̂_n_χ ]) χ̂_i = χ̂_iL +( χ̂_iL)^c,where the superscript c indicates charge conjugation.§ RENORMALIZATION General outline: Our objective is to describe the general renormalization procedure and to work out a prescription for the computationof the one-loop contribution to the physical fermion and scalar masses. For this purpose we have to compute the selfenergies. Clearly,the manner in which the selfenergies—and thus the quantitieswe aim at—depend on the parameters of our toy model isrenormalization-scheme-dependent. It is, therefore, expedient to clearlyexpound the scheme we want to use and how we plan to reach our goal. We proceed in three steps: * renormalization for the determination ofδŶ_a, δλ̂_abcd,δμ̂^2_ab, δ̂^(χ) andδ̂^(φ).*Finite shifts Δv̂_a such that the scalar one-pointfunctions of the ĥ_a are zero.These two steps allow us to compute renormalizedone-loop selfenergies Σ(p) and Π(p^2) for fermions and scalars, respectively.* Finite field strength renormalization in order to switch from theselfenergies Σ(p) and Π(p^2)to on-shell selfenergies[Note that here the term on-shellrefers to field strength renormalization only.We have no mass counterterms, because in our approach masses are derived quantities and, therefore, functions of the parameters of the model—see the discussion at the end of this section.] Σ(p) and Π(p^2). Several remarks are in order to concretize this outline.renormalization, i.e. 
subtraction of terms proportional to the constant c_∞ = 2/ε - γ + ln(4π),where γ is the Euler–Mascheroni constant, is realized in the following way:* δλ̂_abcd is determined from the quartic scalar coupling,* δŶ_a is obtained from the Yukawa vertex,* δμ̂^2_ab removes c_∞ from the p^2-independent part of the scalar selfenergy,* δ̂^(χ) and δ̂^(φ) are determined from themomentum-dependent parts of the respective selfenergies. With the prescriptions (a)–(d) above, all correlation functions andall physical quantities computed in our toy model must be finite. This applies in particular to the selfenergies. Fermion selfenergy: Let us first consider the renormalized fermion selfenergy Σ(p), defined via the inverse propagator matrixS^-1(p) = p - m̂_0 - Σ(p),where Σ(p) has the chiral structure Σ(p) = p( Σ^(A)_L(p^2) γ_L + Σ^(A)_R(p^2) γ_R ) +Σ^(B)_L(p^2) γ_L + Σ^(B)_R(p^2) γ_R.For the relationships between Σ^(A)_L and Σ^(A)_R andbetween Σ^(B)_L and Σ^(B)_R in the case of Dirac andMajorana fermions we referthe reader to appendix <ref>. At one-loop order, Σ(p) has the terms Σ(p)= Σ^1-loop(p)- p[ δ̂^(χ)γ_L +( δ̂^(χ))^* γ_R ]+ v̂_a [ δŶ_a γ_L +(δŶ_a)^* γ_R ]+ Δv̂_a [ Ŷ_a γ_L + Ŷ_a^* γ_R ],where Σ^1-loop corresponds to the diagram offigure <ref>. Since δŶ_a is already determined by the Yukawa vertex, the corresponding term in Σ(p) must automatically makeΣ^(B)_L,R in equation (<ref>) finite.As for Σ^(A)_L,R in Σ(p), we note that these matricesare hermitian—see also appendix <ref>, therefore, the counterterms with the hermitian matrix δ̂^(χ)suffice for finiteness. The last term in equation (<ref>) is induced by the finite VEV shifts. Scalar selfenergy: Now we address the inverse scalar propagator matrixΔ^-1(p^2) = p^2 - M̂^2_0 - Π(p^2).The scalar selfenergy Π(p^2) has the structureΠ_ab(p^2) = Π^1-loop_ab(p^2) - δ̂^(φ)_ab p^2 +δμ̂^2_ab + 3 δλ̂_abcdv̂_c v̂_d +6 λ̂_abcdv̂_c Δv̂_dat one-loop order.With an argument analogous to the fermionic case we find that the symmetricmatrix δ̂^(φ) suffices for making the derivative of Π(p^2) finite. According to our renormalization prescription, δλ̂_abcdv̂_c v̂_dis already fixed, but we have δμ̂^2_ab at our disposal to cancel the infinity in the p^2-independent term in Π(p^2). The last term in the scalar selfenergy, equation (<ref>),stems from the finite mass correctionsΔ M^2_0—see equation (<ref>)—expressedin terms of the finite VEV shiftsinduced by tadpole contributions.Another commonly used approach for the renormalization of μ̂^2,e.g. in <cit.>, is to express its diagonal entriesvia the tadpole parameters t̂_a as ofequation (<ref>),resulting in renormalization conditions more closely related tophysical observables. However, there are simply not enough tadpole parameters available to replaceall parameters in the n_φ× n_φ symmetricmatrix μ̂^2 and we havetwo main reasons for dismissing this choice in our case. One is that expressing μ̂^2_aa in terms of the tadpole parametersinvolves the inverses of the VEVs v̂_a.In the general case, some of these can be zero,leading to ill-defined expressions for δμ̂^2_aa.The other one is that the diagonal and off-diagonal entries ofμ̂^2 can be treated on an equal footing in our approach,leading to a more compact description. One-point function: These shifts derive from the linear term in the scalar potential. 
For simplicity we stick tothe lowest non-trivial order, where it is given by ^-ε/2( t̂_a + Δt̂_a +δμ̂^2_abv̂_b +δλ̂_abcd v̂_b v̂_c v̂_d ) ĥ_a.Diagrammatically, the one-point function pertaining to ĥ_ahas the contributions[We stress again that we do not introducetadpole counterterms.] tadpoles-sum-general(40,40) i1o1phantomi1,v1,o1dasheso1,v1phantom,leftv1,i1,v1decor.shape=circle,decor.size=3v1 + (40,40) i1o1phantomi1,v1,o1dasheso1,v1phantom,leftv1,i1,v1decor.shape=circle,decor.filled=shadedv1 + (40,40) i1o1phantomi1,v1,o1dasheso1,v1phantom,leftv1,i1,v1decor.shape=crossv1= ^-ε/2i/-M^2_0a× (-i)( t̂_a + T_a + Δt̂_a +δμ̂^2_abv̂_b +δλ̂_abcd v̂_b v̂_c v̂_d ) = 0,where i/(-M^2_oa) is the external scalar propagator at zero momentum. The requirement that the one-point function is zero is identical withthe requirement that the VEV of ĥ_a is zero. The first diagram in equation (<ref>) represents the scalar tree-levelone-point function corresponding to t̂_a,which vanishes identically due toequation (<ref>); we have included it only for illustrative purposes. The second diagram, which represents the one-looptadpole contributions, corresponds to T_a.The third diagram represents thesum of Δt̂_a and the two counterterm contributions.We can decompose T_a intoan infinite and a finite part, i.e.T_a = ( T_∞)_a + ( T_fin)_a.Since with the imposition of conditions (a)–(d) the theory becomes finite, in equation (<ref>)we necessarily have δμ̂^2_abv̂_b +δλ̂_abcd v̂_b v̂_c v̂_d +( T_∞)_a = 0.An explicit check of this relation is presented in section <ref>. Moreover, we translate the finite tadpole contributions( T_fin)_a to shifts of the VEVs Δv̂_b,similar to the approach of <cit.>.At one-loop order this is effected byΔt̂_a = μ̂^2_abΔv̂_b + 3λ̂_abcdv̂_c v̂_d Δv̂_b =( M̂^2_0 )_abΔv̂_b,where we have used equation (<ref>). Therefore, equation (<ref>)leads to the finite shiftΔv̂_a =-( M̂^2_0 )^-1_ab( T_fin)_b.Note that these finite shifts eventually contribute to thefinite mass corrections because they contribute to the two-point functions of the fermions and scalars—seeequations (<ref>) and (<ref>), respectively. Further clarifications concerning the VEV shifts Δ v_aare found in appendix <ref>.Pole masses and finite field strength renormalization: It remains to perform a finite field strength renormalization in order totransform the one-loop selfenergies Σ(p) and Π(p^2)to on-shell selfenergies Σ(p) and Π(p^2),respectively. Immediately the question arises why wecannot use the Z^(1/2)_χ and Z^(1/2)_φ defined insection <ref> for this purpose. Note thatwe have incorporated these matrices intoδ Y_a and δλ_abcd at the respective interaction vertices.Therefore, in δℒ the field strength renormalization matricesZ^(1/2)_χ and Z^(1/2)_φ occur solely in the hermitian matrixδ^(χ) and the symmetric matrix δ^(φ), respectively. Obviously, the latter matrices have fewer parameters than the original ones and it is impossible to perform on-shell renormalization with δ^(χ) for more than one fermion field and withδ^(φ) for more than one scalar field. What happens if we do not incorporate Z^(1/2)_χ and Z^(1/2)_φ intothe Yukawa and quartic couplings, respectively? Let us consider the Yukawainteraction for definiteness and denote by δ̌Y_a the Yukawacounterterm where Z^(1/2)_χ is not incorporated. 
Obviously,the relation between δ Y_a and δ̌Y_a is given byδ Y_a = ( Z_χ^(1/2))^T ( Y_b + δ̌Y_b ) Z_χ^(1/2)( Z^(1/2)_φ)_ba - Y_a.Actually, the quantity that is determined by the Yukawa vertex renormalizationis δ Y_a and not δ̌Y_a.Moreover, since we generate mass terms by SSB,the fermion mass term is induced by the shift of equation (<ref>)and has the form1/2χ_L^T C^-1δ Y_a v̅_a χ_L +1/2χ_L^T C^-1 Y_a v̅_a χ_L +Thus it is clearly the same δ Y_a that occurs in boththe mass term and the vertex renormalization.Consequently, with the counterterms of theunbroken theory we always end up with δ Y_a and δ^(χ)as independent quantities and wecan in general not perform on-shell renormalization. Therefore, we need, in addition to Z^(1/2)_χ and Z^(1/2)_φ, finite field strength renormalization matrices∘Z^-6pt(1/2)_χ and∘Z^-6pt(1/2)_h for fermions and bosons, respectively, inserted into the broken Lagrangian, in order to perform on-shellrenormalization. In this way, the √(Z)-factors of the external lines inthe LSZ formalism are exactly one <cit.>. We denote the one-loop contributions to ∘Z^-6pt(1/2)_χ and∘Z^-6pt(1/2)_h by 1/2∘z_χ and1/2∘z_h, respectively.For the details of the computation and the results forthese quantities we refer the reader to appendix <ref>. Here we only state the masses <cit.>m_i=m_0i + m_0i( Σ^(A)_L )_ii(m^2_0i) + ( Σ^(B)_L )_ii(m^2_0i), M^2_a=M^2_0a + Π_aa(M^2_0a)at one-loop order.There is no summation in these two formulas over equal indices. Eventually we remark that one could decompose ∘z_χ intoa hermitian and an antihermitian matrix. One could be tempted to conceive the antihermitian part as a correction to the tree-level diagonalizationmatrix U_0. However, we think that in our simple model such a decompositionhas no physical meaning; in essence, we have no PMNS mixing matrix at disposalwhere it could become physical. Of course, a similar remark applies to ∘z_h—seealso <cit.> for a recent discussion in the context ofthe two-Higgs-doublet model.§ RENORMALIZATION AT THE ONE-LOOP LEVELIn this section we concretize, at the one-loop level,the renormalization procedure introduced in the previous section. For the relevant integrals needed for these computationssee appendix <ref>.§.§ One-loop results for selfenergies and tadpoles Here we display the results for the one-loop contributionsΣ^1-loop(p) andΠ^1-loop_ab(p^2)to the fermion and scalar selfenergies, respectively, and also for the one-looptadpole expression T_a. Fermion selfenergy:The only direct one-loop contribution to the fermionic self-energyis given by the diagram of figure <ref>.Then, the definitions Δ_a,k = x M^2_0a + (1-x) m^2_0k - x(1-x) p^2, D_a,k = ∫_0^1 dxx lnΔ_a,k/^2,E_a,k = ∫_0^1 dx lnΔ_a,k/^2and D̂_a = ( D_a,1, …, D_a,n_χ), Ê_a = ( E_a,1, …, E_a,n_χ)allow us to write the one-loop contribution to the fermionic selfenergy as Σ^1-loop = 1/16 π^2{pγ_L [ -1/2 c_∞Ŷ_a^* Ŷ_a+ Ŷ_a^* D̂_a Ŷ_a ] . + pγ_R [ -1/2 c_∞Ŷ_a Ŷ_a^*+ Ŷ_a D̂_a Ŷ_a^* ] + γ_L [ -c_∞Ŷ_a m̂_0 Ŷ_a +Ŷ_a m̂_0 Ê_a Ŷ_a ] . + γ_R [ -c_∞Ŷ_a^* m̂_0 Ŷ_a^* +Ŷ_a^* m̂_0 Ê_a Ŷ_a^* ] }. Scalar selfenergy:In the following, the superscripts (a), (b), (c)refer to the Feynman diagrams of figure <ref>.Thus the selfenergy has the contributionsΠ^1-loop_ab(p^2) =Π^(a)_ab(p^2) + Π^(b)_ab(p^2) + Π^(c)_ab(p^2).We define Δ_ij = x m_0i^2 + (1-x) m_0j^2 - x(1-x) p^2 Δ̃_rs = x M_0r^2 + (1-x) M_0s^2 - x(1-x) p^2.With these definitions we obtain Π^(a)_ab(p^2)= 1/16 π^2{ c_∞ [ Ŷ_a m̂_0Ŷ_b m̂_0 +Ŷ_a^* m̂_0 Ŷ_b^* m̂_0 +2 Ŷ_a Ŷ_b^* m̂_0^2+2 Ŷ_a^* Ŷ_b m̂_0^2 ]. 
-1/2 c_∞ [ Ŷ_a Ŷ_b^* +Ŷ_a^* Ŷ_b ] p^2 +[ ( Ŷ_a Ŷ_b^* +Ŷ_a^* Ŷ_b ) ( m̂_0^2 - 1/6p^2 )] - ∫_0^1 dx[ ( (Ŷ_a)_ij (Ŷ_b)_ji^* +(Ŷ_a)_ij^* (Ŷ_b)_ji) ( 2 Δ_ij - x(1-x) p^2 ) .. . + (Ŷ_a)_ij m_0j (Ŷ_b)_ji m_0i +(Ŷ_a)_ij^* m_0j (Ŷ_b)_ji^* m_0i]lnΔ_ij/^2},Π^(b)_ab(p^2)= -18/16 π^2 λ̂_acrsv̂_c λ̂_bdrsv̂_d ( c_∞ - ∫_0^1 dx lnΔ̃_rs/^2), Π^(c)_ab(p^2)=-3/16 π^2 λ̂_abrrM^2_0r( c_∞ + 1 -lnM^2_0r/^2). For the following discussion, it is useful to introduce a separate notationfor the divergent p^2-independent parts of Π^1-loop_ab: ( Π^(a)_∞)_ab = 1/16 π^2 c_∞ [ Ŷ_a m̂_0 Ŷ_b m̂_0 +Ŷ_a^* m̂_0 Ŷ_b^* m̂_0 +2 Ŷ_a Ŷ_b^* m̂_0^2+2 Ŷ_a^* Ŷ_b m̂_0^2 ],( Π^(b)_∞)_ab =-18/16 π^2 c_∞ λ̂_acrsv̂_c λ̂_bdrsv̂_d,( Π^(c)_∞)_ab =-3/16 π^2 c_∞ λ̂_abrr M^2_0r.Tadpoles: There are two one-loop tadpole contributions to the scalar one-point function, namelytadpoles-graphs(40,40) i1o1phantomi1,v1,o1dasheso1,v1plain,leftv1,i1,v1 + (40,40) i1o1phantomi1,v1,o1dasheso1,v1dashes,leftv1,i1,v1 = ^-ε/2i/-M_0a^2× (-i)( T^(χ)_a + T^(h)_a ).We find the following result for tadpole terms:T^(χ)_a= 1/16 π^2 [ ( Ŷ_a m̂^3_0 + Ŷ_a^*m̂^3_0)( c_∞ + 1 - lnm̂^2_0/^2) ], T^(h)_a=-3/16 π^2λ̂_abrrv̂_b M^2_0r( c_∞ + 1 - lnM^2_0r/^2).We denote the divergences in the tadpole expressions by ( T^(χ)_∞)_a and ( T^(h)_∞)_a. §.§ Determination of the countertermsCounterterms of Yukawa and quartic scalar couplings: Using MS renormalization, it is straightforward tocompute these counterterms. For the Yukawa couplings we obtain δŶ_a =1/16 π^2 c_∞Ŷ_b Ŷ_a^* Ŷ_b.The δλ̂_abcd can be split intoδλ̂_abcd =δλ̂^(χ)_abcd + δλ̂^(φ)_abcd,generated by fermions and scalars, respectively, in the loop. The first case yields δλ̂^(χ)_abcd =-1/3×1/16 π^2 c_∞Tr[Ŷ_a Ŷ_b^* Ŷ_c Ŷ_d^* +Ŷ_a Ŷ_c^* Ŷ_d Ŷ_b^* +Ŷ_a Ŷ_d^* Ŷ_b Ŷ_c^* .. +Ŷ_a^* Ŷ_b Ŷ_c^* Ŷ_d +Ŷ_a^* Ŷ_c Ŷ_d^* Ŷ_b +Ŷ_a^* Ŷ_d Ŷ_b^* Ŷ_c ].In this formula we have taken into account that the Yukawa coupling matricesare symmetric. The scalar contribution isδλ̂^(φ)_abcd =3/16 π^2 c_∞( λ̂_abrsλ̂_rscd+λ̂_adrsλ̂_rsbc +λ̂_acrsλ̂_rsbd).Counterterms pertaining to field strength renormalization: Cancellation of the divergence in equation (<ref>)determines δ̂^(χ) as δ̂^(χ) =-1/2×1/16 π^2 c_∞Ŷ_a^* Ŷ_a.Considering the scalar selfenergy, we find that only diagram (a) offigure <ref> has a divergence proportional to p^2. Therefore, we obtain from equation (<ref>) δ̂^(φ)_ab = -1/2×1/16π^2 c_∞[Ŷ_a Ŷ_b^* + Ŷ_a^* Ŷ_b].Counterterm pertaining to μ̂^2_ab:The counterterm δμ̂^2_ab has to be determined by the cancellations of the divergences of equations (<ref>) and (<ref>). Thus we demand 0= δμ̂^2_ab +3 δλ̂^(φ)_abcdv̂_c v̂_d + ( Π^(b)_∞)_ab + ( Π^(c)_∞)_ab= δμ̂^2_ab + 3/16 π^2 c_∞[ 3 λ̂_abrsλ̂_rscdv̂_c v̂_d -λ̂_abrr M^2_0r]= δμ̂^2_ab + 3/16 π^2 c_∞[λ̂_abrs( μ̂^2_rs +3 λ̂_rscdv̂_c v̂_d - μ̂^2_rs) -λ̂_abrr M^2_0r]= δμ̂^2_ab -3/16 π^2 c_∞ λ̂_abrsμ̂^2_rs.Therefore, δμ̂^2_ab is fixed as δμ̂^2_ab =3/16 π^2 c_∞ λ̂_abrsμ̂^2_rs. §.§ Cancellation of divergencesHaving fixed all available counterterms, the remaining UV divergences inthe selfenergies and tadpoles have to drop out. This is whatwe want to show in this subsection.Fermion selfenergy:With δŶ_a of equation (<ref>) and v̂_a δŶ_a = 1/16 π^2 c_∞Ŷ_b m̂_0 Ŷ_b,we find that Σ is finite without any mass renormalization,as it has to be.Scalar selfenergy:We have already treated the divergences (<ref>)and (<ref>), but there is still the divergence of equation (<ref>). 
However, it is easy to see that its cancellation in the selfenergy (<ref>) is simply effected by ( Π^(a)_∞)_ab +3 δλ̂^(χ)_abcdv̂_c v̂_d = 0.Tadpoles:It remains to verify equation (<ref>). First we consider the resultof the fermionic tadpole in equation (<ref>).Contracting the counterterm δλ̂^(χ)_abcdof equation (<ref>) with the VEVs and adding to it ( T^(χ)_∞)_a yields( T^(χ)_∞)_a +δλ̂^(χ)_abcdv̂_b v̂_c v̂_d =( T^(χ)_∞)_a -1/16 π^2 c_∞ [ ( Ŷ_a m̂_0^3 + Ŷ_a^* m̂_0^3 )]= 0.Similarly,using equations (<ref>) and (<ref>), the scalar tadpole contribution of equation (<ref>)is found to be finite via( T^(h)_∞)_a + δμ̂^2_abv̂_b +δλ̂^(φ)_abcdv̂_b v̂_c v̂_d= ( T^(h)_∞)_a + 3/16π^2 c_∞(λ̂_abrsμ̂^2_rsv̂_b +3λ̂_abrsλ̂_rscdv̂_b v̂_c v̂_d )= ( T^(h)_∞)_a + 3/16π^2 c_∞λ̂_abrrv̂_b M^2_0r =0. §.§ Counterterms and UV divergences in a general basis The results for the selfenergies and counterterms shownin the previous sections are given in the mass bases. However,for a check of the cancellation of divergencesit might be advantageous to havethe divergences in a general basis. Such expressions can be obtainedby using the parameter transformations (<ref>).As an example, let us do this transformation in the case of δ̂^(χ) of equation (<ref>),where one has to applyδ^(χ) = U_0 δ̂^(χ) U_0^†= -1/2×1/16 π^2 c_∞ U_0 Ŷ_a^* Ŷ_a U_0^†= -1/2×1/16 π^2 c_∞ U_0( U_0^† Y_b^* U_0^* (W_0)_ba)( U_0^T Y_c U_0 (W_0)_ca) U_0^†= -1/2×1/16 π^2 c_∞ Y_a^* Y_a.In the case of the divergence in Σ^(B)_L—seeequation (<ref>), we have to use the slightly different transformationU_0^* Ŷ_a m̂_0 Ŷ_a U_0^† = Y_a m_0^* Y_aThis explains that we have to be careful when a fermion mass term occurs because in generalv_a Y_a^* = m_0^* ≠ v_a Y_a = m_0.This complication only arises in ( T^(χ)_∞)_a = 1/16 π^2 c_∞ [ Y_a m_0^*m_0m_0^* +Y_a^*m_0m_0^*m_0]and( Π^(a)_∞)_ab= 1/16 π^2 c_∞ [ Y_a m_0^* Y_b m_0^* + Y_a^* m_0 Y_b^* m_0 +2 Y_a Y_b^* m_0 m_0^*+ 2 Y_a^* Y_b m_0^* m_0 ].The divergences(T^(h)_∞)_a,( Π^(b)_∞)_ab and( Π^(c)_∞)_abare obtained in a general basis by simply removing the hats from allquantities and the same is true for all counterterms. § AN EXAMPLE OF A FLAVOUR SYMMETRYMotivated by flavour models of the lepton sector <cit.>, we will now consider a Lagrangian with a simple flavour symmetry andstudy how renormalization is affected in this case. §.§ Symmetry group and Lagrangian We assume the same number of Majorana and scalar fields, i.e.n_χ = n_φ≡ n.In addition, we require n ≥ 2. Instead of the Z_4 symmetry of equation (<ref>), which acts at the same time on all fields, we will now postulate a Z_4 symmetry for every index a = 1, …, n:ℤ_4_a : χ^_aL→ i χ^_aL, φ^_a→ -φ^_a, χ^_bL→χ^_bL, φ^_b→φ^_b ∀b ≠ a.This has the consequence that scalar fields with the same index occur in pairs in the scalar potential. Note that now it is reasonable to usethe same indices for both fermions and scalars.In addition, we assume that the Lagrangian is invariant under simultaneous permutations of fermion and scalar fields. Therefore, group-theoretically the symmetry group of the Lagrangian can be conceived asG_n = ℤ_4^n ⋊ S_n. With this flavour group, the bare Lagrangian has the form ℒ_B = ∑_a=1^n [ iχ_aL^∂χ_aL^ +1/2∂_μφ_a^∂^μφ_a^ +1/2 y^χ_aL^ C^-1χ_aL^φ^_a +H.c.] 
- V_B,where the bare scalar potential can be written asV_B = 1/2 μ^2∑_a=1^n ( φ_a^)^2 + 1/4 λ( ∑_a=1^n ( φ_a^)^2 )^2 +1/4λ'∑_a,b = 1^n φ_a^^2φ_b^^2(1-δ_ab), where δ_ab is the Kronecker delta.§.§ Relation to the general model Due to the symmetry group G_n, we only have one Yukawacoupling constant and two quartic couplings. In order to use the generalone-loop results, we have to establish the relation betweenthe general model of section (<ref>) and the present example. For simplicity we now drop the superscript (B) and keep in mind that the following list applies not only to the renormalized couplingconstants but also to the counterterms and the bare coupling constants: Y_a_bc = y δ_abδ_ac∀ a,( μ^2 )_ab = μ^2 δ_ab,λ_aaaa = λ∀ a andλ_aabb = 1/3( λ + λ' ) ∀ a ≠ b. Note that now we just have one mass parameter μ^2. Moreover, quartic couplings λ_abbb with a ≠ b and those with three or four different indices are zero.Without loss of generality we assume y > 0.In addition, we have to consider equation (<ref>), which now readsδ^(χ)_ab = δ^(χ)δ_ab, δ^(φ)_ab = δ^(φ)δ_ab,because due to the symmetry group G_n only one field strengthrenormalization constant is allowed for each type of fields.The results of section <ref>, found for the general Yukawa model,can directly be used for the present case by applying equation (<ref>). In this way we obtainthe counterterms δ y= 1/16π^2 c_∞ y^3, δλ^(χ) =-2/16π^2c_∞ y^4, δλ^(φ) = 1/16π^2c_∞[ 9λ^2 + ( n-1 ) λ+λ'^2], ( δλ +δλ' )^(χ) =0, ( δλ +δλ' )^(φ) = 1/16π^2c_∞[ 6λ(λ +λ') +(n+2)(λ+λ')^2], δμ^2 = μ^2/16π^2c_∞[3 λ + (n-1) (λ+λ') ], where the superscripts (χ) and (φ) indicate fermions and scalarsin the loop, respectively, in analogy to the notation insection <ref>. Field strength renormalization yields δ^(χ) = -1/2×1/16π^2 c_∞ y^2 δ^(φ) = -1/16π^2 c_∞ y^2. §.§ Spontaneous symmetry breaking In order to have SSB we assume μ^2 < 0. For the vacuum expectation values we introduce the notationv^2 = ∑_a=1^n v_a^2.Obviously, for the scalar potential to be bounded from below we must haveλ > 0, but λ' can be positive or negative. Case λ' > 0: Here, the minimum of the scalar potential is achieved when only one VEV isnonzero. Without loss of generality we assumev_1 = v,v_2 = ⋯ = v_n = 0 ⇒ v^2 = -μ^2/λ.The symmetry breaking can be formulated asG_nG_n-1,where G_n-1 is the residual symmetry group. This residual symmetry isreflected in the mass spectrumM^2_01 = 2 λ v^2, M^2_02 = ⋯ M^2_0n = λ' v^2,m_01 = yv, m_02 = ⋯ = m_0n = 0. Since the mass matrices of both fermions and scalars are diagonal attree level, it is straightforward to compute the one-loop correctionsto equation (<ref>).It easy to see that at one-loop order the VEV shifts fulfillΔ v_2 = ⋯ = Δ v_n = 0, onlyΔ v_1 will in general be nonzero. It is also obvious thatthe nonzero masses in equation (<ref>) receive one-loop corrections.However, m_2 = ⋯ = m_n is still valid because the unbroken symmetrygroup G_n-1 forbids such masses. Case λ' < 0: For negative λ', the condition |λ'| < n/n-1λis necessary for the scalar potential to be bounded from below.In this case the minimum is given byv_1^2 = ⋯ = v_n^2 = v^2/n⇒ v^2= -μ^2/λ + n-1/nλ'.In principle, the VEVs v_a could have different signs. However, since arbitrary sign changes of the scalar fields are part of G_n, we can assume v_a > 0∀ a without loss of generality. 
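As a rough numerical cross-check of the two vacuum patterns just described, the sketch below minimizes the G_n-symmetric tree-level potential for one positive and one negative value of lambda' and prints the resulting VEV configuration. The parameter values are arbitrary illustration choices obeying mu^2 < 0, lambda > 0 and, for lambda' < 0, the boundedness condition |lambda'| < n/(n-1) lambda; the sketch is not meant to represent any particular physical parameter point.

```python
import numpy as np
from scipy.optimize import minimize

def V(phi, mu2, lam, lamp):
    """Tree-level G_n-symmetric potential:
       V = 1/2 mu^2 sum_a phi_a^2 + 1/4 lam (sum_a phi_a^2)^2
           + 1/4 lamp sum_{a != b} phi_a^2 phi_b^2."""
    s2 = np.sum(phi**2)
    off_diag = s2**2 - np.sum(phi**4)            # sum_{a != b} phi_a^2 phi_b^2
    return 0.5*mu2*s2 + 0.25*lam*s2**2 + 0.25*lamp*off_diag

n = 3
mu2, lam = -1.0, 0.5                             # mu^2 < 0, lam > 0 (illustration)

for lamp in (+0.3, -0.3):
    best = None
    for seed in range(20):                       # several starts against local minima
        x0 = np.random.default_rng(seed).normal(size=n)
        res = minimize(V, x0, args=(mu2, lam, lamp))
        if best is None or res.fun < best.fun:
            best = res
    v = np.abs(best.x)                           # signs are unphysical (Z_4^n freedom)
    print(f"lambda' = {lamp:+.2f}:  |v_a| = {np.round(v, 3)},  v^2 = {np.sum(v**2):.3f}")

# Expected (from the text):  lambda' > 0  ->  a single nonzero VEV, v^2 = -mu^2/lam;
#                            lambda' < 0  ->  all VEVs equal,
#                                             v^2 = -mu^2/(lam + (n-1)/n * lamp).
```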
Therefore, we have the symmetry breakingG_nS_n,where the permutation group is given by its“natural permutation representation” corresponding to n × npermutation matrices.This representation decays into the trivialone-dimensional and a (n-1)-dimensional irreducible representation. Defining n vectors w_a (a = 1,…,n) such thatw_1 = 1/√(n)( [ 1; 1; ⋮; 1 ]) w_a · w_b = δ_ab ∀ a,b,then w_1 is invariant under all permutation matrices and belongs, therefore, to the trivial irreducible representation, while the vectors w_2, … w_nspan the space pertaining to the (n-1)-dimensional one. This is borne out by the tree-level masses. The scalars have the mass matrixM^2_0 = A 1 + B w_1 w_1^TA = -2 λ' v^2/n,B = 2 (λ + λ') v^2.Hence, the diagonalization matrix is given by W_0 = ( w_1, …, w_n )and we findM^2_01 = A + B, M^2_02 = ⋯ = M^2_0n = A.However, the fermion masses are all equal at tree level:m_01 = ⋯ = m_0n =yv/√(n). At one-loop order, the scalar masses of equation (<ref>) will receive radiative corrections, but—due to the unbroken symmetry groupS_n—the relation M^2_2 = ⋯ = M^2_n will still hold.One might expect that the total degeneracy of the fermion masses, as expressed in equation (<ref>), will be lifted because of radiative corrections such that m_1 is different from the rest. However, as we will demonstrate now, this is not the case. First we discuss the contribution from the finite one-loopVEVs shifts to the fermion masses.Since the fermion mass matrix is diagonal, we haveŶ_a = Y_b ( W_0 )_ba = y( ( W_0 )_1a, …, ( W_0 )_na).In particular, Ŷ_1 = y/√(n) 1 Ŷ_a = 0 a = 2, …, ndue to w_1 of equation (<ref>). Therefore, it follows from equation (<ref>) that T^(χ)_a = 0 a = 2, …, n.Moreover, from equations (<ref>) and (<ref>) we findv̂_1 = v, v̂_2 = ⋯ = v̂_n = 0.With this the tadpole expression T^(h)_a of equation (<ref>)has the structureT^(h)_a = λ̂_abrrv̂_b X_r =λ̂_a1rr v X_r.According to equation (<ref>), this expression can only be nonzero for a=1. Therefore, T^(h)_a = 0 a = 2, …, nas well andΔv̂_a Ŷ_a = Δv̂_1 Ŷ_1 ∝1. This proves that the finite VEV shifts cannot remove the total fermion massdegeneracy.Next we consider Σ^1-loop of equation (<ref>). We note that both D̂_a and Ê_a are proportional to the unit matrix because of equation (<ref>). In addition, because of equation (<ref>),D̂_2 = … = D̂_n Ê_2 = … = Ê_n.Thus we can write D̂_a = f_a 1 with f_2 = ⋯ = f_n,but f_1 ≠ f_2 in general. There are the analogous relations for the Ê_a. Considering now the b-th entry of the(diagonal) finite parts of Σ^1-loop and taking into account that the Yukawa coupling matrices are given by equation (<ref>),we have the generic sum ∑_a=1^n ( W_0 )_ba f_a ( W_0 )_ba =( W_0 )_b1( f_1 - f_2 ) ( W_0 )_b1 +∑_a=1^n ( W_0 )_ba f_2 ( W_0 )_ba =1/n( f_1 - f_2 ) + f_2.(Note that there is no summation over the index b in this equation.) This result does not depend on b and, therefore, Σ^1-loop isproportional to the unit matrix. Consequently, the fermion mass degeneracy cannot be lifted by one-loop contributions, as stated above. §.§ Soft symmetry breaking It is possible to lift any mass degeneracies by explicit breaking of G_n. The model remains renormalizable, if we have soft breaking, for instance,by terms of dimension two.This is done by admitting in equation (<ref>) a general mass matrix μ^2_ab, whereas theYukawa and quartic couplings arestill restricted by G_n. This breaks the symmetry group G_n down toG(ℤ_4)_diagwith (ℤ_4)_diag: χ^_aL→ i χ^_aL, φ^_a→ -φ^_a ∀ a,i.e. 
this ℤ_4 acts simultaneously on all fields and agrees with equation (<ref>).In this way, the scalar mass spectrum will be completely non-degenerate already at tree level, but also the fermion mass spectrum because a general matrix μ^2_ab will induce general VEVs v_a.It is easy to understand why this modified model remains renormalizable; allowing for a general matrixμ^2_ab, we also allow for a general counterterm matrix δμ^2_ab and we can cancel the divergences related to the scalar mass terms as handled by equation (<ref>).It is natural that soft symmetry breaking is small. We can easily incorporate this by taking one large mass parameter μ^2 and setting μ^2_ab = μ^2 δ_ab + σ_absuch that∑_a=1^n σ_aa = 0 and|σ_ab| ≪μ^2 ∀a,b. In this case the previously degenerate masses will now become slightly different and we can produce quasi-degenerate mass spectra.§ DIRAC FERMIONSSo far, we have put the focus on Majorana fermions. We have done so becausein the long run we are interested in studying radiative correctionsin neutrino mass models which typically feature the seesaw mechanism and, therefore, neutrinos of Majorana nature. However, it is straightforward to switch from Majorana toDirac fermions. How this is done will be explained in this section—seealso <cit.>. Lagrangian, diagonalization of Dirac mass matrices, andrenormalization: In the Dirac setup, we can in general have n_χ_L chiralfields χ^_iL and n_χ_R independent chiral fields χ^_iR,while the scalar sector remains the same as in the Majorana case.Then, the bare Lagrangian is given by ℒ_B = i χ̅^_iLγ^μ∂_μχ^_iL + i χ̅^_iRγ^μ∂_μχ^_iR + 1/2( ∂_μφ^_a )( ∂^μφ^_a ) - ((Y^_a)_ij χ̅^_iRχ^_jLφ^_a + )- 1/2 ( μ_ B^2 )_abφ^_a φ^_b - 1/4λ^_abcd φ^_a φ^_b φ^_c φ^_d, where the Y^_a now are n_φ general complexn_χ_R× n_χ_L matrices. In principle, n_χ_L could be different from n_χ_R, in which case one has |n_χ_L - n_χ_R| massless Weyl fermions. However, for simplicity we assume n_χ_L = n_χ_R≡ n_χ in the following.A possible modification of the transformation of the fermions inequation (<ref>) is the Z_2 symmetry𝒮': χ^_L → -χ^_L, χ^_R →χ^_R, φ^→ -φ^,in order to forbid fermion tree-level mass terms and linear andtrilinear terms in the scalar potential.The renormalization of the fermionic fields now becomesχ^_L = Z_χ_L^(1/2) χ_L, χ^_R = Z_χ_R^(1/2) χ_R,involving two independent general complex matricesZ_χ_L^(1/2) and Z_χ_R^(1/2). Inserting this into equation (<ref>) yields a renormalizedLagrangian with Yukawa coupling matrices Y_aand counterterms similar to the Majorana case. The main changes lie in the definition of the Yukawa counterterm^ε/2δ Y_a = ( Z_χ_R^(1/2))^† Y^_bZ_χ_L^(1/2)( Z^(1/2)_φ)_ba - ^ε/2 Y_a,and the need for the definition of two independent hermitian matricesδ^(χ_L) =( Z_χ_L^(1/2))^† Z_χ_L^(1/2) - , δ^(χ_R) =( Z_χ_R^(1/2))^† Z_χ_R^(1/2) - . Via SSB we obtain the tree-level Dirac mass matrix m_0 = ∑_a=1^n_φ v_a Y_a.This mass matrix is bi-diagonalized with two unitary matricesU_L0 and U_R0:U_R0^† m_0 U_L0 = m̂_0 ≡( m_01, …, m_0 n_χ).Due to the left and right diagonalization matrices, there are now leftand right chiral mass eigenfields χ̂_L = U_L0^†χ_L, χ̂_R = U_R0^†χ_R.Moreover, equations (<ref>) and (<ref>)are modified to δ^(χ_L)→δ̂^(χ_L) =U_L0^†δ^(χ_L) U_L0, δ^(χ_R)→δ̂^(χ_R) =U_R0^†δ^(χ_R) U_R0, Y_a→Ŷ_a = (U_R0^†Y_b U_L0 ) (W_0)_ba, respectively.In analogy to equation (<ref>), we define Dirac mass eigenfields χ̂_i = χ̂_iL + χ̂_iRand the corresponding vector of eigenfields χ̂. 
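Numerically, the bi-diagonalization of the Dirac mass matrix in the equation above is simply a singular value decomposition, while the Majorana case of section 2 requires a Takagi factorization of a complex symmetric matrix. The short sketch below illustrates both with random placeholder matrices; the Takagi routine uses a real embedding and assumes all singular values are strictly positive (the generic situation), so it should be read as an illustration rather than a general-purpose implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# --- Dirac case: general complex m0, bi-diagonalized by two unitaries (an SVD) ---
m0_dirac = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
uR, masses, vh = np.linalg.svd(m0_dirac)         # m0 = uR @ diag(masses) @ vh
uL = vh.conj().T
print("Dirac masses:", np.round(masses, 4))
print("U_R^dag m0 U_L diagonal?", np.allclose(uR.conj().T @ m0_dirac @ uL, np.diag(masses)))

# --- Majorana case: complex symmetric m0, Takagi factorization U0^T m0 U0 = diag ---
def takagi(m):
    """Takagi factorization of a complex symmetric m: returns (U0, s) with U0
    unitary, s >= 0 and U0.T @ m @ U0 = diag(s).  Built from the real symmetric
    embedding [[Re m, Im m], [Im m, -Re m]]; valid when all singular values are
    strictly positive (generic case)."""
    nn = m.shape[0]
    T = np.block([[m.real, m.imag], [m.imag, -m.real]])
    w, V = np.linalg.eigh(T)                     # spectrum comes in +/- s_k pairs
    idx = np.argsort(w)[::-1][:nn]               # keep the positive branch
    s = w[idx]
    u = V[:nn, idx] + 1j*V[nn:, idx]             # columns obey  m u_k^* = s_k u_k
    return u.conj(), s                           # U0 = u^*  =>  U0^T m U0 = diag(s)

m0_maj = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
m0_maj = 0.5*(m0_maj + m0_maj.T)                 # enforce m0^T = m0
U0, s = takagi(m0_maj)
print("Majorana masses:", np.round(s, 4))
print("U0^T m0 U0 diagonal?", np.allclose(U0.T @ m0_maj @ U0, np.diag(s)))
print("U0 unitary?", np.allclose(U0.conj().T @ U0, np.eye(n)))
```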
In terms of mass eigenfields, the Yukawa interaction readsℒ_Y = - χ̅̂̅( Ŷ_a γ_L + Ŷ^†_aγ_R ) χ̂( ^ε/2ĥ_a + v̂̅̂_a ).Formally, Dirac and Majorana Yukawa terms look the same <cit.>. Note that the only difference of this ℒ_Y to that ofequation (<ref>) is the factor 1/2 which we do not introducein the Dirac case. It will become clear inthe last paragraph of this section why we prefer this definition. Fermion selfenergy: With the above definitions, the renormalization programme ofsection <ref> goes through with only minor modifications. The renormalized fermion selfenergy for Dirac fermions is given byΣ(p)= Σ^1-loop(p)- p[ δ̂^(χ_L)γ_L +δ̂^(χ_R)γ_R ]+ v̂_a [ δŶ_a γ_L +(δŶ_a)^†γ_R ] +Δv̂_a [ Ŷ_a γ_L +Ŷ_a^†γ_R ].Eventually, the one-loop Dirac masses readm_i=m_0i + 1/2 m_0i[( Σ^(A)_L )_ii (m_0i^2) +( Σ^(A)_R )_ii (m_0i^2) ] + ( Σ^(B)_L )_ii(m^2_0i).Note that( Σ^(B)_L )_ii =( Σ^(B)_R )_iibecause of the symmetry relation (<ref>). Computation of amplitudes:There are two changes when we switch fromMajorana to Dirac fermions <cit.>:* Ŷ_a^* →Ŷ_a^† and * a factor of two for every closed Dirac fermion loop compared to the corresponding Majorana fermion loop.As discussed above, the first change simply comes from the fact that forDirac neutrinos the Yukawa coupling matrices are not symmetric.The reason for the factor of two is the following.In the Majorana case we have defined the Yukawa Lagrangian with afactor 1/2—see equation (<ref>). If a Majorana fermion line in a Feynman diagram is not closed, then allfactors of 1/2 are cancelled because, whenever a fermion lineis connected to a vertex, there are two possible Wick contractions; however, in a closed loop one factor 1/2 is left over because, when closing the loop, there is only one contraction. In the Dirac case, we have omitted the factor 1/2 in theYukawa Lagrangian (<ref>) because, when we connect a Dirac fermionline to a vertex, there is exactly one Wick contraction.Therefore, when a closed fermion loop occurs, there is a factorof two for Dirac fermions relative to Majorana fermions. Finally, whenever we have made a simplification in a trace by exploitingY_a^T = Y_a in the Majorana case, as done in equations (<ref>)and (<ref>), we have to revoke it in the Dirac case.Consequently, in the Dirac case, Π^(a)(p^2) is given by Π^(a)_ab(p^2)= 2/16 π^2{ c_∞ [ Ŷ_a m̂_0Ŷ_b m̂_0 +Ŷ_a^†m̂_0 Ŷ_b^†m̂_0 +Ŷ_a Ŷ_b^†m̂_0^2+Ŷ_a^†Ŷ_b m̂_0^2 . .. +Ŷ_a m̂_0^2 Ŷ_b^†+Ŷ_a^†m̂_0^2 Ŷ_b]-1/2 c_∞ [ Ŷ_a Ŷ_b^† +Ŷ_a^†Ŷ_b ] p^2. +1/2 [ ( Ŷ_a Ŷ_b^† + Ŷ_b^†Ŷ_a + Ŷ_a^†Ŷ_b + Ŷ_b Ŷ_a^†) ( m̂_0^2 - 1/6p^2 ) ] - ⋯}.The dots refer to the integral in equation (<ref>)where merely Y_a^* has to be substitutedby Y_a^†. From equation (<ref>), ( Π^(a)_∞)_ab can be read off. Equation (<ref>) is modified to δλ̂^(χ)_abcd = -1/3×1/16 π^2 c_∞Tr[Ŷ_a Ŷ_b^†Ŷ_c Ŷ_d^† +⋯ +Ŷ_a^†Ŷ_b Ŷ_c^†Ŷ_d +⋯],where the dots indicate the five non-trivial permutations of the indices b,c,d. No complications arise inequations (<ref>), (<ref>), (<ref>) and (<ref>); for Dirac fermionsone simply has to multiply the right-hand side by a factor of two and replace complex conjugation by hermitian conjugation. 
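Before turning to the conclusions, it may help to see how the finite one-loop mass shifts of the previous sections could be assembled in practice. The sketch below evaluates the Feynman-parameter integrals D_{a,k} and E_{a,k} of section 4 numerically and inserts them into the Majorana pole-mass formula; in the Dirac case one would use 1/2 m_{0i}[(Sigma^A_L)_{ii} + (Sigma^A_R)_{ii}] instead of m_{0i}(Sigma^A_L)_{ii}, as quoted above. The inputs for m_0, M_0 and the Yukawa matrices are independent placeholder numbers (in a complete calculation they all derive from one set of Lagrangian parameters and VEVs), the renormalization scale is called nu, and the finite VEV-shift contribution Y_a Delta v_a to Sigma^B is omitted for brevity.

```python
import numpy as np
from scipy.integrate import quad

# Placeholder mass-basis inputs (illustration only; see the caveat above).
m0  = np.array([0.5, 1.0])                       # tree-level fermion masses m_{0k}
M0  = np.array([2.0, 2.5])                       # tree-level scalar masses  M_{0a}
nu  = 1.0                                        # renormalization scale
rng = np.random.default_rng(3)
Yhat = rng.normal(scale=0.3, size=(2, 2, 2)) + 1j*rng.normal(scale=0.3, size=(2, 2, 2))
Yhat = 0.5*(Yhat + np.transpose(Yhat, (0, 2, 1)))  # symmetric Majorana Yukawas

def Delta(a, k, x, p2):
    return x*M0[a]**2 + (1.0 - x)*m0[k]**2 - x*(1.0 - x)*p2

def D_int(a, k, p2):
    """D_{a,k}(p^2) = int_0^1 dx  x ln(Delta_{a,k}/nu^2)."""
    return quad(lambda x: x*np.log(Delta(a, k, x, p2)/nu**2), 0.0, 1.0)[0]

def E_int(a, k, p2):
    """E_{a,k}(p^2) = int_0^1 dx    ln(Delta_{a,k}/nu^2)."""
    return quad(lambda x: np.log(Delta(a, k, x, p2)/nu**2), 0.0, 1.0)[0]

def sigma_finite(p2):
    """Finite parts of Sigma^(A)_L and Sigma^(B)_L after the MS-bar subtractions
    of the text (the c_infinity pieces are removed by the counterterms; the
    tadpole-induced piece Yhat_a * Delta v_a is NOT included here)."""
    n = len(m0)
    SA = np.zeros((n, n), dtype=complex)
    SB = np.zeros((n, n), dtype=complex)
    for a in range(len(M0)):
        D = np.diag([D_int(a, k, p2) for k in range(n)])
        E = np.diag([E_int(a, k, p2) for k in range(n)])
        SA += Yhat[a].conj() @ D @ Yhat[a]               # Yhat_a^* D_a Yhat_a
        SB += Yhat[a] @ np.diag(m0) @ E @ Yhat[a]        # Yhat_a m0 E_a Yhat_a
    return SA/(16*np.pi**2), SB/(16*np.pi**2)

# One-loop Majorana masses:  m_i = m_{0i} + m_{0i}(Sigma^A_L)_{ii} + (Sigma^B_L)_{ii}
# at p^2 = m_{0i}^2; the imaginary part of (Sigma^B_L)_{ii} is absorbed by the
# finite field-strength renormalization discussed in the appendix of the text.
for i in range(len(m0)):
    SA, SB = sigma_finite(m0[i]**2)
    m1 = m0[i] + (m0[i]*SA[i, i] + SB[i, i]).real
    print(f"m_{i+1}: tree = {m0[i]:.3f},  one-loop = {m1:.3f}")
# For the chosen numbers Delta_{a,k} stays positive on x in [0,1], so the
# logarithms are real (no absorptive parts at these p^2).
```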
§ CONCLUSIONSIn this paper we have presented a versatile and simple renormalization procedure which is adapted to models which have SSB and a multitude of scalars.This renormalization programme takes seriouslythe nature of masses as functions of the parameters ofthe underlying model; therefore, physical masses have an expansionin perturbation theory just like any other observable.We have exemplified our renormalization procedure by discussing a general Yukawa model with an arbitrary number of fermion fieldsof Majorana or Dirac nature and an arbitrary number of real scalar fields; moreover, this toy modelhas the feature that tree-level fermion masses are generated by SSB of a cyclic group.In particular, we have explicitly computed the fermionic and scalarselfenergies and studied radiative corrections at the one-loop level to tree-level masses.The main idea discussed in this paper is to split renormalization into a step in which UV divergent parts are cancelled byrenormalization of the parameters of the unbroken theory and a subsequent step in which finite corrections are performed to make the scalar one-point functions vanish and to obtain one-loop pole masses. We have presented the details of the cancellation of UV divergences andelucidated the role of tadpole diagrams in our renormalization procedureand their contributions to the masses. We have also applied our findings to a showcase model furnished with anon-Abelian flavour symmetry group.A typical example where the renormalization procedure put forward in this paper can be applied is the leptonsector of the multi-Higgs-doublet Standard Model withan arbitrary number of right-handed neutrino singletsand flavour symmetries; this comprises the seesaw mechanism as well aslight sterile neutrinos. A derivation ofgeneral formulae which permit to compute radiative corrections to tree-levelpredictions of masses and mixing angles in this rather general class offlavour models is in preparation.§ ACKNOWLEDGMENTSM.L. is supported by the Austrian Science Fund (FWF), Project No. P28085-N27 and in part by the FWF Doctoral Program No. W1252-N27Particles and Interactions.The authors thank H. Eberl, G. Ecker, M. Mühlleitner and H. Neufeldfor stimulating discussions. M.L. also thanks D. Lechner and C. Lepenikfor further helpful discussions.§ SELFENERGIES AND ON-SHELL RENORMALIZATION Since for fermions the general relations( Σ^(A)_L )^† = Σ^(A)_L, ( Σ^(A)_R )^† = Σ^(A)_R, ( Σ^(B)_L )^† = Σ^(B)_Rare valid,[Strictly speaking these relations hold only for the dispersive part of the selfenergy.] we see that for the finiteness of Σ^(A)_L and Σ^(A)_R the counterterm with the hermitian δ^(χ) suffices. In addition, we remark that in the case of Majorana fermions the further conditions <cit.>( Σ^(A)_L )^T = Σ^(A)_R, ( Σ^(B)_L )^T = Σ^(B)_L, ( Σ^(B)_R )^T = Σ^(B)_Rhold. This is a general condition, but can also be seen explicitly inour one-loop result. In order to switch from the renormalized Majorana selfenergyΣ(p) and the bosonic selfenergy Π(p^2)to the on-shell selfenergies Σ(p) and Π(p^2),respectively, we must allow for finite field strengthrenormalization matrices. 
Denoting these by ∘Z^-6pt(1/2)_χ =+1/2∘z_χ∘Z^-6pt(1/2)_h =+1/2∘z_h,we have at one-loop orderΣ(p)= Σ(p) - 1/2 p[ ( ( ∘z_χ)^† +∘z_χ) γ_L +( ( ∘z_χ)^† +∘z_χ)^* γ_R ] + 1/2[ ((∘z_χ)^T m̂_0 +m̂_0 ∘z_χ) γ_L +( (∘z_χ)^T m̂_0 +m̂_0 ∘z_χ)^* γ_R ],Π_ab(p^2)= Π_ab(p^2) -1/2[ ( ∘z_h)^T +∘z_h]_ab p^2 + 1/2[ ( ∘z_h)^T M̂^2_0 + M̂^2_0 ∘z_h]_ab.It is important to note that we have no freedom for mass renormalizationbecause in our scheme the masses are computed in terms of the renormalizedparameters of the model.Due to the Majorana nature of the fermions under consideration, the relation∘z_χ≡( ∘z_L )_ij =( ∘z_R )_ij^*holds for left and right-chiral fields.In Σ(p) this fact has been taken into account. Using the second relation in equation (<ref>) andthe first relation in equation (<ref>), the on-shell conditions lead for i ≠ j to <cit.>1/2(∘z_χ)_ij =-1/m_0i^2-m_0j^2[ m_0j^2 ( Σ^(A)_L )_ij +m_0i m_0j( Σ^(A)_L )_ji +m_0j( Σ^(B)_L )_ji^* + m_0i( Σ^(B)_L )_ij]_p^2 = m_0j^2.Furthermore, for i = j we obtainRe(∘z_χ)_ii =(Σ^(A)_L)_ii(m_0i^2) + 2m_0i^2 . / p^2( Σ^(A)_L)_ii(p^2) )|_p^2=m_0i^2 +2m_0i. / p^2 ( Σ^(B)_L)_ii(p^2) )|_p^2=m_0i^2andm_0i Im(∘z_χ)_ii =-Im (Σ^(B)_L)_ii(m_0i^2).It is characteristic of Majorana fermions that there is no phase freedom in the determination of the field strength renormalization matrix, i.e. not only the real part but also the imaginary part of(z_χ)_ii is fixed.Finally, in the scalar scalar case we are lead to a ≠ b:1/2( ∘z_h)_ab =-Π_ab(M_0b^2)/M_0a^2 - M_0b^2,a = b:( ∘z_h)_aa =. Π_aa(p^2)/ p^2|_p^2 =M_0a^2for on-shell renormalization.§ FINITE TADPOLE CONTRIBUTIONS Throughout this appendix the discussion refers to the one-loop order. In the fermionic as well as the scalar selfenergy,tadpole diagrams contribute indirectly via thefinite shift (<ref>), even though in both casesthe condition t_a = 0 of equation (<ref>)and the requirement that the scalar one-point function is zero—seeequation (<ref>)—procure the vanishing of the sum of tadpole diagrams and the termΔt̂_a + δμ̂^2_abv̂_b +δλ̂_abcd v̂_b v̂_c v̂_d.Diagrammatically, this can be written as fermion-two-point-tadpoles(50,40) i1b1,b2plainb1,v2,b2phantomi1,v1,v2dashesv2,v1plain,leftv1,i1,v1 +(50,40) i1b1,b2plainb1,v2,b2phantomi1,v1,v2dashesv2,v1dashes,leftv1,i1,v1 + (50,40) i1b1,b2plainb1,v2,b2phantomi1,v1,v2dashesv2,v1phantom,leftv1,i1,v1decor.shape=crossv1 = 0andscalar-two-point-tadpoles(50,40) i1b1,b2dashesb1,v2,b2phantomi1,v1,v2dashesv2,v1plain,leftv1,i1,v1 +(50,40) i1b1,b2dashesb1,v2,b2phantomi1,v1,v2dashesv2,v1dashes,leftv1,i1,v1 + (50,40) i1b1,b2dashesb1,v2,b2phantomi1,v1,v2dashesv2,v1phantom,leftv1,i1,v1decor.shape=crossv1 = 0,where the cross symbolizes the contribution of equation (<ref>). Still, the finite parts of the tadpole diagrams generate, via the finiteVEV shifts Δ v_a, the mass shiftsΔm̂_0 = Ŷ_a Δv̂_afor the fermions—see equation (<ref>)—and(ΔM̂_0^2)_ab = 6 λ̂_abcdv̂_c Δv̂_dfor the real scalars—see equation (<ref>).These add to the counterterms of thefermionic and scalar two-point functions. In terms of diagrams, this can be symbolized as counterterm-fermion(50,40) ioplaini,v,odecor.shape=crossv =-i( δŶ_̂âv̂_a + Δm̂_0 )for the fermions and counterterm-scalar(50,40) iodashesi,v,odecor.shape=crossv =-i ( δμ̂^2_ab +3 δλ̂_abcdv̂_c v̂_d + (ΔM̂^2_0)_ab)for the scalars. § INTEGRALS^ε∫d^d k/(2π)^d 1/ k^2 - Δ + iϵ = i/16π^2 Δ( c_∞ + 1 -lnΔ/^2),^ε∫d^d k/(2π)^d 1/ ( k^2 - Δ + iϵ )^2 = i/16π^2( c_∞ - lnΔ/^2),^ε∫d^d k/(2π)^d k^2/ ( k^2 - Δ + iϵ )^2 = i/16π^2 Δ( 2c_∞ + 1 - 2lnΔ/^2). 99rpp C. Patrignani et al. 
(Particle Data Group), The Review of Particle Physics (2016), Chin. Phys. C 40 (2016) 100001.nuosc-exp Y. Fukuda et al. [Super-Kamiokande Collaboration], Evidence for oscillation of atmospheric neutrinos, Phys. Rev. Lett.81 (1998) 1562 [hep-ex/9807003]; Q. R. Ahmad et al. [SNO Collaboration], Measurement of the rate of ν_e+d → p+p+e^- interactions produced by ^8B solar neutrinos at the Sudbury Neutrino Observatory,Phys. Rev. Lett.87 (2001) 071301 [nucl-ex/0106015]; B. Aharmim et al. [SNO Collaboration], Combined analysis of all three phases of solar neutrino data from the Sudbury Neutrino Observatory,Phys. Rev. C 88 (2013) 025501 [arXiv:1109.0763 [nucl-ex]].reviews S. F. King, Unified models of neutrinos, flavour and CP violation, Prog. Part. Nucl. Phys.94 (2017) 217 [arXiv:1701.04413 [hep-ph]]; F. Feruglio, Aspects of leptonic flavour mixing, Talk given at Neutrino 2016 (London, 4-9 July 2016) andNow 2016 (Otranto, 4-11 September 2016), arXiv:1611.09237 [hep-ph]. weinberg S. Weinberg, Perturbative calculations of symmetry breaking, Phys. Rev. D 7 (1973) 2887.Aoki:1982 K. I. Aoki, Z. Hioki, M. Konuma, R. Kawabe and T. Muta, Electroweak theory. Framework of on-shell renormalization and study of higher order effects, Prog. Theor. Phys. Suppl.73 (1982) 1.Denner:1990A. Denner and T. Sack,Renormalization of the quark mixing matrix,Nucl. Phys. B 347 (1990) 203.Kniehl:1996B. A. Kniehl and A. Pilaftsis,Mixing renormalization in Majorana neutrino theories,Nucl. Phys. B 474 (1996) 286[hep-ph/9601390].Fleischer:1980ub J. Fleischer and F. Jegerlehner, Radiative corrections to Higgs decays in theextended Weinberg-Salam Model, Phys. Rev. D 23 (1981) 2001.Denner:2016etu A. Denner, L. Jenniches, J. N. Lang and C. Sturm, Gauge-independent MS renormalization in the 2HDM, JHEP 1609 (2016) 115 [arXiv:1607.07352 [hep-ph]].Pierce:1993 D. Pierce and A. Papadopoulos, Radiative corrections to the Higgs-boson decay rate(HZZ) in theminimal supersymmetric model, Phys. Rev. D 47 (1993) 222 [hep-ph/9206257].Sperling:2013eva M. Sperling, D. Stöckinger and A. Voigt, Renormalization of vacuum expectation values inspontaneously broken gauge theories, JHEP 1307 (2013) 132 [arXiv:1305.1548 [hep-ph]].Krause:2016oke M. Krause, R. Lorenz, M. Mühlleitner, R. Santos and H. Ziesche, Gauge-independent renormalization of the 2-Higgs-doublet model, JHEP 1609 (2016) 143[arXiv:1605.04853 [hep-ph]].seesaw P. Minkowski, μ→ e γ at a rate of one out of 10^9 muon decays?, Phys. Lett. 67B (1977) 421; T. Yanagida, Horizontal gauge symmetry and masses of neutrinos, in Proceedings of the workshop on unified theory and baryon number in the universe (Tsukuba, Japan, 1979), O. Sawata and A. Sugamoto eds., KEK report 79-18, Tsukuba, 1979; S.L. Glashow, The future of elementary particle physics, in Quarks and leptons, proceedings of the advanced study institute (Cargèse, Corsica, 1979), M. Lévy et al. eds., Plenum, New York, 1980; M. Gell-Mann, P. Ramond and R. Slansky, Complex spinors and unified theories, in Supergravity, D.Z. Freedman and F. van Nieuwenhuizen eds., North Holland, Amsterdam, 1979; R.N. Mohapatra and G. Senjanović, Neutrino mass and spontaneous parity violation, Phys. Rev. Lett. 44 (1980) 912.Grimus:2014zwa W. Grimus, P. O. Ludl and L. Nogués, Mass renormalization in a toy model with spontaneously broken symmetry, arXiv:1406.7795 [hep-ph]. SchurI. Schur, Ein Satz Ueber Quadratische Formen Mit Komplexen Koeffizienten, Am. J. Math. 67 (1945) 472. kiyoura S. Kiyoura, M. M. Nojiri, D. M. Pierce and Y. 
Yamada, Radiative corrections to a supersymmetric relation: A new approach, Phys. Rev. D 58 (1998) 075002 [hep-ph/9803210].Grimus:2016hmw W. Grimus and M. Löschner, Revisiting on-shell renormalization conditions in theorieswith flavor mixing, Int. J. Mod. Phys. A 31 (2016) 1630038; Erratum: ibid. A 32 (2017) 1792001 [arXiv:1606.06191 [hep-ph]].altenkamp L. Altenkamp, S. Dittmaier and H. Rzehak, Renormalization schemes for the two-Higgs-doublet model and applications to h → WW/ZZ → 4 fermions, arXiv:1704.02645 [hep-ph].Denner:1992me A. Denner, H. Eck, O. Hahn and J. Küblbeck, Compact Feynman rules for Majorana fermions, Phys. Lett. B 291 (1992) 278; A. Denner, H. Eck, O. Hahn and J. Küblbeck, Feynman rules for fermion number violating interactions, Nucl. Phys. B 387 (1992) 467.lavoura W. Grimus and L. Lavoura, One-loop corrections to the seesaw mechanism in the multi-Higgs-doublet Standard Model,Phys. Lett. B 546 (2002) 86 [hep-ph/0207229].grimus W. Grimus and L. Lavoura, Soft lepton-flavor violation in a multi-Higgs-doublet seesaw model, Phys. Rev. D 66 (2002) 014016 [hep-ph/0204070]. | http://arxiv.org/abs/1705.09589v2 | {
"authors": [
"M. Fox",
"W. Grimus",
"M. Löschner"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170526141123",
"title": "Renormalization and radiative corrections to masses in a general Yukawa model"
} |
[email protected] Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC Institutos de Investigación de Paterna, Aptdo. 22085, 46071 Valencia, Spain [email protected] Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC Institutos de Investigación de Paterna, Aptdo. 22085, 46071 Valencia, Spain Instituto de Física, Universidade de São Paulo, C.P. 66318, 05389-970 São Paulo, SP, Brazil [email protected] Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC Institutos de Investigación de Paterna, Aptdo. 22085, 46071 Valencia, Spain Using the Fixed Center Approximation to Faddeev equations we have investigated the DKK and DKK̅ three-body systems, considering that the DK dynamically generates, through its I=0 component, the D^*_s0(2317) molecule. According to our findings, for DKK̅ interaction we have found an evidence of a state I(J^P)=1/2(0^-) just above the D^*_s0(2317)K̅ threshold and around the Df_0(980) thresholds, with mass about 2833 - 2858 MeV, made mostly of Df_0(980). On the other hand, no evidence related to a state from the DKK interaction is found. The state found could be seen in the ππ D invariant mass. Study of the DKK and DK K̅ systems E. Oset December 30, 2023 ==================================§ INTRODUCTION The study of three-body systems is one of the starting points in the study of nuclei and nuclear dynamics. The traditional Quantum Mechanical approach to this problem is based on the Faddeev equations <cit.> and the main application was done for three nucleons systems. The simplicity of the Faddeev equations is deceiving since in practice itsevaluation is very involved and one approximation or another is done to solve them. One popular choice is the use of separable potentials to construct the two-body scattering amplitudes via the Alt-Grassberger-Sandhas (AGS) form of the Faddeev equations <cit.>. Incorporation of chiral symmetry into the scheme has lead to interesting developments <cit.>. Another way to tackle these three-body systems is using a variational method <cit.>. Gradually, other systems involving not only nucleons or hyperons but mesons were tackled. The interaction of K^-d at threshold was thoroughly investigated using Faddeev equations <cit.>, or approximations to it, basically the Fixed Center Approximation (FCA) <cit.>. The investigation of a possible state of K^- pp nature has also received much attention <cit.> and, according to the calculations done in Ref. <cit.>, the recent J-PARC experiment <cit.> has found support for this state.Another step in this direction was the investigation of systems with two mesons and one baryon. Surprisingly it was found in Refs. <cit.> that with such systems one could obtain the low energy baryon states of J^P=1/2^+. Work in this direction with different methods was also done in Ref. <cit.> for the K̅K̅ N system and in Ref. <cit.> for the K K̅ N system. In this latter case a bound system developed, giving rise to a N^* state around 1920 MeV, mostly made of a N a_0(980), which was also predicted in Ref. <cit.>.Systems of three mesons also followed, and in Ref. <cit.> the ϕ K K̅ system was studied and shown to reproduce the properties of the ϕ(2170). Similarly, in Ref. <cit.> the K K K̅ system is studied and a bound cluster found is associated to the K(1460). Another similar system, the π K K̅is studied in Ref. <cit.> and the state found is associated to the π(1300). The η K K̅ and η' K K̅ systems are also studied in Refs. 
<cit.> and they are revised in Ref. <cit.> with the full Faddeev equations and more solid results.An important result was found in Refs. <cit.>. In the Faddeev equations one uses input from the two-body amplitudes of the different components and the off-shell part of the amplitudes appears in the calculations. This off-shell part is unphysical and observables cannot depend upon it. The finding in those works was that the use of chiral Lagrangians provides three-body contact terms that cancel the off-shell two-body contributions. In other calculations empirical three-body forces are introduced which might have some genuine part, but an important part of it will serve the purpose of effectively cancelling these unphysical off-shell contributions. Rather than putting these terms empirically, and fitting them to some data, the message of those works is that to make predictions it is safer to use as input only on-shell two-body amplitudes, without extra three-body terms, and an example of it is given in Ref. <cit.>.Extension to the charm sector was also done. The DNN system, analogous to the K̅ NN system is studied in Ref. <cit.>, and the NDK, K̅ DN and NDD̅ molecules are studied in Ref. <cit.>. In the hidden charm sector a resonance is found for the J/ψ K K̅ system which is associated to the Y(4260) in Ref. <cit.>. Closer to our work is the one of Ref. <cit.> where the DKK̅ is studied using QCD sum rules and Faddeev equations and in both methods a state coupling strongly to Df_0(980) is found. We will study this system with a different method, and in addition the DKK system.The former review of work done shows a constant feature, which is that systems that add K K̅ to another particle turn out to generate states in which the K K̅ clusters around the f_0(980) or the a_0(980). TheDKK system benefits from the DK attraction that forms the D_s0^*(2317) according to works using chiral Lagrangians and unitary approach <cit.>. It is also supported by analysis of lattice QCD data <cit.>. However, the K K interaction is repulsive and the system might not bind. On the other hand, the DK K̅ system has repulsion for D K̅ in I=1, and attraction for I=0, and the DK interaction is attractive, as it is also the K K̅. Altogether this latter system could have more chances to bind than the DKK system, but a detailed calculation is called for to find the answer, and this is the purpose of the present work.The starting point of our approach is to use the FCA with a preexisting molecule, which is the D_s0^*(2317), formed from the DK interaction. On top of that, another K (or K̅) is introduced which is allowed to undergo multiple scattering with the D and K components of the molecule. The resulting thing, as we shall see, is that in the DKK system we do not see a signal of a three-body bound state, but in the DK K̅ system we find a peak which we interpret as the K K̅ fusing to produce the f_0(980) which gets then bound to the D meson, and a narrow peak appears at an energy below the D f_0(980) threshold. Such state could be seen in the ππ D invariant mass. § FORMALISMThe Fixed Center Approximation (FCA) to Faddeev equations is useful when a light hadron H_3 interacts with a cluster H composed of two other hadrons H_1 and H_2, H[H_1 H_2], which are heavier than the first one, i.e. M_(H[H_1 H_2])>M_H_3. This cluster comes out from the two-body interaction between the hadrons H_1 and H_2 that can be described using a chiral unitary approach in coupled channels. 
Hence, the Faddeev equations in this approximation have as an input the two-body t matrices for the different pairs of mesons which form the system and, in this way the generated bound states and resonances are encoded. In our case, we have H_1=D and H_2=K while H_3=K̅ if we consider the DKK̅ interaction or H_3=K for the DKK system. Both three-body interactions involve the D^*_s0(2317) and f_0(980)/a_0(980) molecules that, according to Refs. <cit.> are dynamically generated through DK and KK̅ interactions, respectively, taking into account their associated coupled channels. Therefore, we shall have the following channels contributing to the three-body interaction systems we are concerned: (1) K^-[D^+K^0], (2) K^-[D^0K^+], (3) K̅^0[D^0K^0], (4) [D^+K^0]K^-, (5) [D^0K^+]K^- and (6) [D^0K^0]K̅^0 for the DKK̅ interaction and (1) K^+[D^+K^0], (2) K^+[D^0K^+], (3) K^0[D^+K^+], (4) [D^+K^0]K^+, (5) [D^0K^+]K^+ and (6) [D^+K^+]K^0 for DKK system. Note that the states (1), (2) and (3) are the same as (4), (5) and (6), respectively. Their distinction is to signify that the interaction in the FCA formalism occurs with the particle outside the cluster, which is represented by the brackets [ . . .], and the particle of the cluster next to it. This allows for a compact formulation that describes all the charge exchange steps and distinguishes the interaction with the right or left component of the cluster <cit.>. These channels will contribute to the T_DKK̅ and T_DKK three-body scattering matrices and, if those interactions generate bound states or resonances, they will manifest as a pole in the solutions of the Faddeev equations. In what follows we shall discuss how to construct these three-body scattering matrices and its solution for both, the DKK̅ and DKK systems.§.§ DKK̅ and DKK̅ three-body systemsIn order to write the contributions to Faddeev equations of all the channels mentioned previously, we shall adopt the following procedure to construct the relevant amplitudes: for each channel the anti-kaon (kaon) meson to the left side in (1), (2) and (3) interacts with the hadron to its right side. Similarly, for the (4), (5) and (6) the K or K̅ to the right interacts with the particle to its left. In doing so, we can distinguish the order of the anti-kaon (kaon) and two other mesons with which the anti-kaon (kaon) interacts first and last. This procedure is similar to that used in Ref. <cit.> to study the K̅NN interaction. For instance, in the DKK̅ system, the channel (1) K^-[D^+K^0] in the initial state, means that the K^- interacts with the D^+ meson to its right. The channel (4) [D^+ K^0] K^- indicates that the K^- interacts with the K^0 to its left. This procedure allows us to divide the multiple anti-kaon (kaon) scattering process in such a way that the formulation of the multiple scattering becomes easier.In order to illustrate the structure of the multiple scattering in the fixed center approximation we define the partition functions T^ FCA_ij, which contain all possible intermediate multiple steps, where the first index refers to the initial K̅[DK], (1), (2) and (3) or [DK]K̅ (4), (5) and (6) states and the second index to the final state. If we consider the K^-[D^+K^0]→ K^-[D^+K^0] amplitude denoted by T^ FCA_11, which is diagramatically represented in Fig. 
<ref>, it provides the following expression <cit.> T^ FCA_11(s)=t_1+t_1 G_0 T^ FCA_41+t_2 G_0 T^ FCA_61, which tells us that the transition between the K^-[D^+K^0] to itself is given in terms of a single and double scattering, coupled to the amplitudes T^ FCA_ij related to the other channels. As a result, the three-body problem is given in terms of the T^ FCA_ij partitions, where the i,j indices run from 1 to 6 and stand for the initial and final channels, respectively, and as we will discuss later, can be displayed in a matrix form.In Eq. (<ref>), s is the Mandelstam variable that is equal to the squared of the three-body energy system, while t_1 and t_2 are, respectively, the D^+K^-→ D^+K^- and D^+K^-→ D^0K̅^0 two-body scattering amplitudes studied in Ref. <cit.>, in which the authors have applied the chiral unitary approach in coupled channels to investigate the DK̅ and DK two-body interaction. G_0 is the kaon propagator <cit.> between the particles of the cluster, which is evaluated through the equation below G_0(s)=1/2M_D^*_s0∫d^3 q/(2π)^3F_R( q)/(q^0)^2-ω^2_K( q)+iϵ, with ω^2_K( q)≡ q^2+m^2_K and q^0 is the energy carried by kaon meson in the cluster rest frame where F( q) is calculated, which corresponds to the following expression q^0(s)=s-m^2_K-M^2_D^*_s0/2 M_D^*_s0. In this work, we are using the isospin symmetric masses such that m_D and m_K are the D and K mesons average masses, respectively, while M_D^*_s0 is the D^*_s0 molecule mass. This molecule dynamics does not come into play explicitly in our formalism. The information on the molecule is encoded in the function F_R( q) appearing in Eq. (<ref>), the form factor, which is related to the cluster wave function by a Fourier transform, as discussed in Refs. <cit.>. According to these works, for the form factor to be used consistently, the theory that generates the bound states and resonances (clusters), the chiral unitary approach, which is developed for scattering amplitudes, has to be extended to wave functions. This was done in those references for s-wave bound states, s-wave resonant states as well as in states with arbitrary angular momentum <cit.>. In our work we need the form factor expression only for s-wave bound states, which is given by <cit.> F_R( q)=1/N∫_| p|,| p- q| < Λd^3 p1/M_D^*_s0-ω_D( p)-ω_K( p) 1/M_D^*_s0-ω_D( p- q)-ω_K( p- q), where ω_D( p)≡√( p^2+m^2_D) and the normalization factor N is N=∫_| p|<Λd^3 p ( 1/M_D^*_s0-ω_D( p) -ω_K( p))^2. The upper integration limit Λ has the same value of the cut-off used to regularize the loop DK, adjusted in order to get the D^*_s0(2317) molecule from the DK interaction.Analogously to T^ FCA_11 expressed in Eq. (<ref>), we can calculate all the relevant multiple scattering amplitudes, the partitions T^ FCA_ij, using similar diagrams like the one in Fig. <ref>. As a result, they can be written as T^ FCA_ij(s)=V^ FCA_ij(s)+∑_l=1^6Ṽ^ FCA_il(s) G_0(s) T^ FCA_lj(s),where V_ij and Ṽ_il are the elements of the matrices belowV^ FCA =( [ t_1 0 t_2 0 0 0; 0 t_3 0 0 0 0; t_2 0 t_4 0 0 0; 0 0 0 t_5 0 0; 0 0 0 0 t_6 t_7; 0 0 0 0 t_7 t_8; ] ) , Ṽ^ FCA =( [ 0 0 0 t_1 0 t_2; 0 0 0 0 t_3 0; 0 0 0 t_2 0 t_4; t_5 0 0 0 0 0; 0 t_6 t_7 0 0 0; 0 t_7 t_8 0 0 0; ] ) .Therefore, according to Eq. (<ref>), in our case we can solve the three-body problem in terms of the multiple scattering amplitudes given by partitions T^ FCA_ij, which contain only the DK̅ and KK̅ two-body amplitudes. Thus, for the DKK̅ system the solution of the scattering equation, Eq. 
(<ref>), will beT^ FCA_ij(s)=∑_l=1^6[ 1- Ṽ^ FCA(s) G_0(s) ]^-1_il V_lj^ FCA(s). Analogously, for the DKK system, we will have the same solution as in Eq. (<ref>). However, in this case, the Ṽ^ FCA and V^ FCA matrices, in terms of the DK and KK two-body amplitudes, are now given by V^ FCA =( [ t̅_100000;0 t̅_2 t̅_3000;0 t̅_3 t̅_4000;000 t̅_50 t̅_5;0000 t̅_60;000 t̅_50 t̅_5;] ) , Ṽ^ FCA =( [000 t̅_100;0000 t̅_2 t̅_3;0000 t̅_3 t̅_4; t̅_50 t̅_5000;0 t̅_60000; t̅_50 t̅_5000;] ) . The elements of the matrices in Eqs. (<ref>) and (<ref>), i.e. t_1, t_2, . . . ,t_8 and t̅_1, . . ., t̅_6 related to the three-body interaction DKK̅ and DKK systems are the two-body scattering matrices elements, respectively, given by [ t_1=t_D^+K^-→ D^+K^- ; t_4=t_D^0K̅^0→ D^0K̅^0 ;;t_2=t_D^+K^-→ D^0K̅^0 ; t_5=t_K^0K^-→ K^0K^- ;; t_3=t_D^0K^-→ D^0K^- ; t_6=t_K^+K^-→ K^+K^- ;;][ t_7=t_K^+K^-→ K^0K̅^0 ;; t_8=t_K^0K̅^0→ K^0K̅^0,; ]and[ t̅_1=t_D^+K^+→ D^+K^+ ; t̅_4=t_D^+K^0→ D^+K^0 ;; t̅_2=t_D^0K^+→ D^0K^+ ; t̅_5=t_K^+K^0→ K^+K^0 ;; t̅_3=t_D^0K^+→ D^+K^0 ; t̅_6=t_K^+K^+→ K^+K^+ ,; ] which we shall discuss in the next subsection.It is important to mention that, in this work, we are using the Mandl and Shaw normalization, which has different weight factors for the particle fields. In order to use these factors in a consistent manner in our problem, we should take into account how they appear in the single-scattering and double-scattering as well as in the full amplitude. The detailed calculation on how to do this can be found in Refs. <cit.>. According to these works, this is done multiplying the two-body amplitudes by the factor M_c/M_1(2), where M_c is the cluster mass while M_1(2) is associated with the mass of the hadrons H_1 and H_2. In our case, we have M_c/M_D for the two-body amplitudes related to the DK̅(DK) and M_c/M_K for the one related to the KK̅(KK) appearing in Eqs. (<ref>) and (<ref>).Once we solve the Faddeev equations for the systems we are concerned, we have to write this solution in such a way that it represents the amplitude of a K̅(K) meson interacting with the D^*_s0 molecule, which is the DK cluster written into an I=0 combination. Taking into account that |DK(I=0) ⟩ = (1/√(2)) | D^+K^0+D^0K^+ ⟩ (recall (D^+,-D^0) is the isospin doublet), and summing the cases where the odd K̅ (K) interacts first to the left (right) of the cluster, and finishes interacting at the left (right) we obtain the following combination for both DKK̅ and DKK system, T_X-D^*_s0 = 1/2( T^ FCA_11 +T^ FCA_12+T^ FCA_14+T^ FCA_15 +T^ FCA_21+T^ FCA_22+T^ FCA_24 +T^ FCA_25+T^ FCA_41 +T^ FCA_42+ T^ FCA_44 +T^ FCA_45+ T^ FCA_51+T^ FCA_52 +T^ FCA_54+T^ FCA_55), where X denotes a K̅ in the DKK̅ case and a K meson for DKK interaction. §.§ Two-body amplitudesIn order to solve the Faddeev equations using the FCA for the systems we are concerned, we need to know the two-body scattering amplitudes appearing in Eqs. (<ref>) and (<ref>). They were studied in Refs. <cit.>. These amplitudes are calculated using the chiral unitary approach (for a review see <cit.>). In this model, the transition amplitudes between the different pairs of mesons are extracted from Lagrangians based on symmetries as chiral and heavy quark symmetries. 
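For orientation, the matrix solution just written can be assembled numerically in a few lines once the two-body amplitudes t_1,…,t_8, the propagator G_0(s) between the cluster constituents, and the normalization weights M_c/M_D and M_c/M_K discussed above are supplied; how those two-body amplitudes are obtained is summarized in the remainder of this subsection. The sketch below is only illustrative: the function name and the dictionary interface are our own choices, and the amplitudes are treated as plain complex numbers already evaluated at the appropriate two-body energies.

import numpy as np

def fca_amplitude_DKKbar(t, G0, mc_over_mD, mc_over_mK):
    # Weighted two-body input: the D Kbar amplitudes t1..t4 carry the factor M_c/M_D,
    # while the K Kbar / K K amplitudes t5..t8 carry M_c/M_K, as explained above.
    t1, t2, t3, t4 = (mc_over_mD * t[k] for k in ("t1", "t2", "t3", "t4"))
    t5, t6, t7, t8 = (mc_over_mK * t[k] for k in ("t5", "t6", "t7", "t8"))
    V = np.array([[t1, 0, t2, 0, 0, 0],
                  [0, t3, 0, 0, 0, 0],
                  [t2, 0, t4, 0, 0, 0],
                  [0, 0, 0, t5, 0, 0],
                  [0, 0, 0, 0, t6, t7],
                  [0, 0, 0, 0, t7, t8]], dtype=complex)
    Vtilde = np.array([[0, 0, 0, t1, 0, t2],
                       [0, 0, 0, 0, t3, 0],
                       [0, 0, 0, t2, 0, t4],
                       [t5, 0, 0, 0, 0, 0],
                       [0, t6, t7, 0, 0, 0],
                       [0, t7, t8, 0, 0, 0]], dtype=complex)
    # FCA solution: T = [1 - Vtilde * G0]^{-1} V
    T = np.linalg.solve(np.eye(6) - Vtilde * G0, V)
    # Kbar scattering on the I=0 DK cluster: average of the partitions with
    # channel indices {1, 2, 4, 5} (0-based {0, 1, 3, 4}) in the initial and final state.
    idx = [0, 1, 3, 4]
    return 0.5 * sum(T[i, j] for i in idx for j in idx)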
Then, they are unitarized using them as the kernels of the Bethe-Salpeter equation, which in its on-shell factorization form is given by t=(1-v G)^-1 v, where G is the two meson loop function and its expression in dimensional regularization method is G(s_i) = 1/16π^2{α_i(μ)+ logm^2_1/μ^2+m_2^2-m^2_1+s_i/2s_ilog m^2_2/m^2_1+p/√(s_i)[log(s_i-m^2_2+M^2_1+2p√(s_i))-log (-s_i+m^2_2-m^2_1+2p√(s_i))+log(s_i+m^2_2-m^2_1+2p√(s_i)) - log(-s_i-m^2_2+M^2_1+2p√(s_i)) ]}, with m_1 and m_2 standing for the i-channel meson masses in the loop and p is the three-momentum in the two meson center-of-mass energy, √(s_i). In Eq. (<ref>) μ is a scale fixed a priori and the subtraction constant α(μ) is a free parameter. In Ref. <cit.>, μ is considered to be equal to 1500 MeV for the DK̅ system, corresponding to α_DK̅=-1.15. On the other hand, since the amount of DK content in D^*_s0(2317) is about 70% <cit.>, we consider just one channel, with α_DK=-0.925, adjusted to provide the D^*_s0(2317) peak, corresponding to a cut-off value equal to 650 MeV. This value also has to be used as the upper limit in the integrals given by Eqs. (<ref>) and (<ref>). For the f_0(980)/a_0(980) we consider the same channels as Refs. <cit.> where a cut-off equal to 600 MeV was used to regularize the loops, given byG(s_l)=∫d^3 q/(2π)^3ω_1( q) + ω_2( q)/2ω_1( q)ω_2( q)1/(P^0)^2-[ω_1( q) + ω_2( q)]^2+iϵ,where (P^0)^2 = s_l, the two-body center-of-mass energy squared. The index l stands for the following channels: 1) π^+π^-, 2) π^0π^0, 3) K^+K^-, 4) K^0K̅^0, 5) ηη and 6) πη. In each channel ω_1(2)( q) = √( q^2 + m_1(2)^2), where m_1(2) is the mass of the mesons inside the loop.In order to get the scattering amplitude for the KK interaction, we follow Ref. <cit.>. First, we have to find the kernel v to be used in Eq. (<ref>). This kernel is the lowest order amplitude describing the KK interaction and it is calculated using the chiral Lagrangian ℒ_2=1/12 f_π^2⟨ (∂_μΦ Φ - Φ ∂_μΦ)^2+MΦ^4⟩, where ⟨ . . . ⟩ means the trace in the flavour space of the SU(3) matrices appearing in Φ and M while f_π is the pion decay constant. The matrices Φ and M are given by Φ =( [π^0/√(2)+η_8/√(6)π^+K^+;π^- -π^0/√(2)+η_8/√(6)K^0;K^- K̅^0-2 η_8/√(6);] );M= ( [ m^2_π 0 0; 0 m^2_π 0; 0 0 2 m^2_K-m^2_π; ]) , where in M we have taken the isospin limit (m_u=m_d). Hence, from Eqs. (<ref>) and (<ref>) we can calculate the tree level amplitudes for K^+K^0 and K^+K^+, which after projection in s-wave read asv_K^+K^0 , K^+K^0=1/2 f^2_π( s_KK - 2 m^2_K); v_K^+K^+ , K^+K^+ =1/f^2_π( s_KK - 2 m^2_K), where s_KK is the Mandelstam variable s in the KK center-of-mass frame. From these equations one finds that v^I=0_KK=0 (and t^I=0_KK=0) and taking the unitary normalization appropriate for identical particles |K^+K^+,I=1⟩=|K^+K^+⟩/√(2), we find v^I=1_KK=1/2v_K^+K^+ , K^+K^+. The t matrix will be t^I=1_KK=(1-v^I=1_KK G_KK)^-1 v^I=1_KK, and then t^I=1_KK has to be multiplied by two to restore the good normalization. Therefore, using these expressions we obtain the KK scattering amplitudes t̅_5 and t̅_6 present in Eq. (<ref>) (t̅_6=t^I=1_KK, t̅_5=1/2t^I=1_KK, with t^I=1_KK with the good normalization), where we have used a cut-off of 600 MeV to regularize the KK loops, the same that was used in the KK̅ and coupled channels system. After these considerations we are able to determine all the two-body amplitudes in Eqs. (<ref>) and (<ref>).It is worth mentioning that the arguments of the partitions T^ FCA_ij(s) and the t_i(s_i) two-body amplitudes are different. 
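As a concrete illustration of this on-shell unitarization, the single-channel I=1 KK amplitude described above can be sketched numerically as follows. The momentum grid and the small imaginary part added to the denominator are numerical conveniences of ours (a careful treatment of the unitarity cut would be needed for quantitative work); the constants m_K = 495 MeV, f_π = 93 MeV and the 600 MeV cutoff are the values quoted in the text.

import numpy as np

M_K, F_PI, CUTOFF = 495.0, 93.0, 600.0   # MeV, values quoted in the text

def g_loop_KK(s, n_grid=4000, eps=1.0e3):
    # Two-kaon loop function with a sharp three-momentum cutoff (equal masses m_1 = m_2 = m_K):
    # G(s) = int d^3q/(2 pi)^3 (w1+w2)/(2 w1 w2) / (s - (w1+w2)^2 + i eps).
    q = np.linspace(1.0e-3, CUTOFF, n_grid)
    w = np.sqrt(q**2 + M_K**2)
    integrand = q**2 / (2.0 * np.pi**2) * (2.0 * w) / (2.0 * w**2) / (s - (2.0 * w)**2 + 1j * eps)
    return np.sum(integrand) * (q[1] - q[0])

def t_KK_I1(s):
    # Lowest-order I=1 kernel in the unitary normalization, v = (s - 2 m_K^2)/(2 f_pi^2),
    # unitarized as t = (1 - v G)^{-1} v and multiplied by two to restore the good normalization.
    v = 0.5 * (s - 2.0 * M_K**2) / F_PI**2
    return 2.0 * v / (1.0 - v * g_loop_KK(s))

# e.g. t_KK_I1((2.0 * M_K + 10.0)**2) evaluates the amplitude 10 MeV above the KK threshold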
While the former is written into the three-body center-of-mass energy √(s), the latter is given in the two-body one. In order to write the √(s_i)'s in terms of √(s), we are going to use the same transformations used in Ref. <cit.>, which are s_DK(DK̅)=m^2_K+m^2_D+1/2M^2_D^*_s0 (s-m^2_K-M^2_D^*_s0) (M^2_D^*_s0+m^2_D-m^2_K), where the subscript DK(DK̅) stands for the two-body channels associated with the energy in the center-of-mass of DK(DK̅). Analogously, for the energy in the KK(KK̅) center-of-mass, we have s_KK(KK̅)=2 m^2_K+1/2M^2_D^*_s0 (s-m^2_K-M^2_D^*_s0) (M^2_D^*_s0+m^2_K-m^2_D). In this work, we are going to call this set of transformations Prescription I.In order to estimate the uncertainties in our calculations, we will also use another set of transformations, which we are going call Prescription II, given by s_DK(DK̅)=( √(s)/M_D^*_s0+m_K)^2(m_K+m_D M_D^*_s0/m_D+m_K)^2 - P^2_2, and s_KK(KK̅)=( √(s)/M_D^*_s0+m_K)^2(m_K+m_K M_D^*_s0/m_D+m_K)^2 - P^2_1, where P_1 and P_2 stand for the momenta of the D and K mesons in the cluster, which we take equal and such that the kinetic energy in the DK cluster is of the order of the binding energy, hence P^2_1= P^2_2= 2μ̃B_D^*_s0=2μ̃ (m_D+m_K-M_D^*_s0), with μ̃ the reduced mass of DK. This prescription is based on another one discussed in Refs. <cit.>, which shares the binding energy among the three-particles proportionally to their respective masses.§ RESULTS In all our calculations we use m_K = 495 MeV, m_D = 1865 MeV, m_D_s0^*(2317) = 2317 MeV, m_π = 138 MeV, m_η = 548 MeV and f_π = 93 MeV. In Fig. <ref> we plot the energies in the center-of-mass of each of the two-body systems as a function of the energy of the center-of-mass of the three-body system, according to Eqs. (<ref>), (<ref>), (<ref>) and (<ref>). Both prescriptions map the energy range around 2812 MeV, the threshold of D_s0^*(2317) K (or D_s0^*(2317) K̅), to an energy range around each of the thresholds of the two-body interactions, i. e. the K K system (or K K̅) interact in the energy range around 990 MeV in their center-of-mass, which corresponds to 2 m_K, and the D K (or D K̅) interact in the energy range around 2360 MeV, which corresponds to m_K + m_D.The main uncertainty in our calculation is the difference between these two ways of mapping the total energy into the center-of-mass of each two-body system. This feature was also found in other works using FCA, for instance in Ref. <cit.>. §.§ The D K K̅ systemIn Fig. <ref> we show the result of the total Faddeev amplitude squared from Eq. (<ref>) using Prescription I. We can see a strong peak around 2833 MeV, which could be interpreted as a D [f_0(980)/a_0(980)] bound state, since it is below the D [f_0(980)/a_0(980)] threshold of 2855 MeV. On the other hand, using Prescription II we observe a peak around 2858 MeV, as can seen in Fig. <ref>, and now could be interpreted as a D [f_0(980)/a_0(980)] resonance since it is above its threshold.In order to investigate if this strong peak in the D K K̅ system comes mostly from K K̅ merging into a_0(980) or f_0(980), we have separated the K K̅ amplitudes (that enter in the Faddeev equations) in isospin basis and selected only one contribution at a time. In Fig. <ref> we show the results where the I=0 component of K K̅ was removed, therefore there is no f_0(980) contribution. In this figure we can see clearly the shape of the a_0(980) in the three-body amplitude, that peaks around 2842 MeV in Prescription I (and 2886 MeV in Prescription II), which according to Fig. 
<ref>, correspond to 990 MeV in the K K̅ center-of-mass, exactly where the a_0(980) peak results from the I=1 K K̅ two-body amplitude. Notice that when we removed the I=0 isospin component from the K K̅ amplitude the strength of the peaks in |T_DDK̅|^2 have decreased more than two orders of magnitude in both prescriptions, pointing out that the f_0(980) is indeed the most important contribution coming from K K̅. It is interesting to recall that the same conclusion was obtained in <cit.>, where no apparent signal for D a_0(980) was found. Furthermore, the small cusps seen in both prescriptions at 2812 MeV in Fig. <ref> correspond to the D^*_s0(2317)K̅ threshold. In Table <ref> we compile the results of both prescriptions.The results for the D K K̅ system point out to the formation of a three-body state: the D [f_0(980)/a_0(980)], in which the D f_0(980) is the strongest contribution in both prescriptions. Specifically, in Prescription I the Df_0(980) state would be bound by about 20 MeV, while in Prescription II it would correspond to a resonance. This latter result would be similar to the findings of Ref. <cit.> where a peak is seen at higher energy, forming a D f_0(980) resonant state at 2890 MeV.As mentioned previously, the difference between the results of prescriptions I and II should be interpreted as the main uncertainty in our approach, but what emerges from both pictures is that a Df_0(980) state is formed, slightly bound or unbound. We would like to note that the theoretical uncertainty of the present method is of the order of 25 MeV. To put this number in a proper context we can recall that the uncertainty in the QCD sum rules method in Ref. <cit.> is far larger, with a mass given by m_Df_0=(2926 ± 237) MeV (the uncertainty for the mass in the Faddeev method of Ref. <cit.> is not given). §.§ The D K K system In Fig. <ref> we show the D K K total amplitude squared from Eq. (<ref>) using prescriptions I and II. We can see that in both prescriptions the amplitude decreases around 2812 MeV which corresponds to the D_s0^*(2317) K threshold, and both have a maximum below this threshold, while Prescription II also develops a broad structure above threshold, but no clear peak which could indicate that a bound state or a resonance is found.As a physical interpretation we could say that, even though the D K interaction is attractive and responsible for the strong binding that generates the D_s0^*(2317), the repulsion between K K seems to be of the same magnitude and prevents the D K K system to form a bound state.One might be tempted to associate the peak below threshold to a physical state, but this is not the case. Indeed, one should note that the strength of |T_DKK|^2 in Fig. <ref> is about three orders of magnitude smaller than for |T_DKK̅|^2 in Fig. <ref>, which simply indicates that no special hadron structure has been formed in this case.§ CONCLUSIONSIn this work, we have used the FCA to Faddeev equations in order to look for bound states or resonances generated from DKK̅ and DKK three-body interactions. The cluster DK in the I=0 component is the well known D^*_s0(2317) bound state studied by means of the chiral unitary approach. From the DKK̅ interaction we found a I(J^P)=1/2(0^-) state with mass about 2833-2858 MeV, where the uncertainties were estimated taking into account two different prescriptions to obtain √(s_DK) and √(s_KK) from the total energy of the system √(s). Our findings corroborated those of Ref. 
<cit.>, where the authors studied the DKK̅ interaction using two different nonperturbative calculation tools, the QCD sum rules and the Faddeev equations without FCA. They found a state around 2890 MeV which is above the Df_0(980) threshold. As we have pointed out before, this state could be seen in the π πD invariant mass distribution. Therefore, as in Ref. <cit.>, we also suggest the search for such a state in future experiments. On the other hand, for the DKK system we found an enhancement effect, but with a very small strength compared to the DKK̅ system and should not be related to a physical bound state. In this case, the repulsion between K K seems to be of the same magnitude as the attraction on the DK interaction, preventing the formation of the three-body molecular state. § ACKNOWLEDGMENTS V. R. Debastiani wishes to acknowledge the support from the Programa Santiago Grisolia of Generalitat Valenciana (Exp. GRISOLIA/2015/005). J. M. Dias would like to thank the Brazilian funding agency FAPESP for the financial support. This work is also partly supported by the Spanish Ministerio de Economia y Competitividad and European FEDER funds under the contract number FIS2014-57026-REDT, FIS2014-51948-C2-1-P, and FIS2014-51948-C2-2-P, and the Generalitat Valenciana in the program Prometeo II-2014/068. plain 99 faddeev L. D. Faddeev, Sov. Phys. JETP 12, 1014 (1961) [Zh. Eksp. Teor. Fiz. 39, 1459 (1960)].Alt:1967fx E. O. Alt, P. Grassberger and W. Sandhas,Nucl. Phys. B 2, 167 (1967).Epelbaum:2005pn E. Epelbaum,Prog. Part. Nucl. Phys.57, 654 (2006),[nucl-th/0509032].Hiyama:2003cu E. Hiyama, Y. Kino and M. Kamimura,Prog. Part. Nucl. Phys.51, 223 (2003). Dote:2008hw A. Dote, T. Hyodo and W. Weise,Phys. Rev. C 79, 014003 (2009),[arXiv:0806.4917 [nucl-th]].Kanada-Enyo:2008wsu Y. Kanada-En'yo and D. Jido,Phys. Rev. C 78, 025212 (2008),[arXiv:0804.3124 [nucl-th]].Toker:1981zh G. Toker, A. Gal and J. M. Eisenberg,Nucl. Phys. A 362, 405 (1981). Torres:1986mr M. Torres, R. H. Dalitz and A. Deloff,Phys. Lett. B 174, 213 (1986). Kamalov:2000iy S. S. Kamalov, E. Oset and A. Ramos,Nucl. Phys. A 690, 494 (2001),[nucl-th/0010054].Dote:2008in A. Dote, T. Hyodo and W. Weise,Nucl. Phys. A 804, 197 (2008),[arXiv:0802.0238 [nucl-th]].Shevchenko:2006xy N. V. Shevchenko, A. Gal and J. Mares,Phys. Rev. Lett.98, 082301 (2007),[nucl-th/0610022].Ikeda:2007nz Y. Ikeda and T. Sato,Phys. Rev. C 76, 035203 (2007),[arXiv:0704.1978 [nucl-th]].Bayar:2011qj M. Bayar, J. Yamagata-Sekihara and E. Oset,Phys. Rev. C 84, 015209 (2011),[arXiv:1102.2854 [hep-ph]].Bayar:2012hn M. Bayar and E. Oset,Phys. Rev. C 88, no. 4, 044003 (2013),[arXiv:1207.1661 [hep-ph]].Uchino:2011jt T. Uchino, T. Hyodo and M. Oka,Nucl. Phys. A 868-869, 53 (2011),[arXiv:1106.0095 [nucl-th]].bicudo P. Bicudo,Phys. Rev. D 76, 031502 (2007)[hep-ph/0701008].sekiramos T. Sekihara, E. Oset and A. Ramos,PTEP 2016, no. 12, 123D03 (2016),[arXiv:1607.02058 [hep-ph]].jparc Y. Sada et al. [J-PARC E15 Collaboration],PTEP 2016, no. 5, 051D01 (2016),[arXiv:1601.06876 [nucl-ex]].alber1 A. Martinez Torres, K. P. Khemchandani and E. Oset,Phys. Rev. C 77, 042203 (2008),[arXiv:0706.2330 [nucl-th]].alber2 K. P. Khemchandani, A. Martinez Torres and E. Oset,Eur. Phys. J. A 37, 233 (2008),[arXiv:0804.4670 [nucl-th]].alber3 A. Martinez Torres, K. P. Khemchandani and E. Oset,Phys. Rev. C 79, 065207 (2009),[arXiv:0812.2235 [nucl-th]].kbarkbarn Y. Kanada-En'yo and D. Jido,Phys. Rev. C 78, 025212 (2008),[arXiv:0804.3124 [nucl-th]].kkbarn A. Martinez Torres and D. Jido,Phys. Rev. 
C 82, 038202 (2010),[arXiv:1008.0457 [nucl-th]].albermeson A. Martinez Torres, K. P. Khemchandani, L. S. Geng, M. Napsuciale and E. Oset,Phys. Rev. D 78, 074031 (2008),[arXiv:0801.3635 [nucl-th]].kkkbar A. Martinez Torres, D. Jido and Y. Kanada-En'yo,Phys. Rev. C 83, 065205 (2011),[arXiv:1102.1505 [nucl-th]].albereta A. Martinez Torres, K. P. Khemchandani, D. Jido and A. Hosaka,Phys. Rev. D 84, 074027 (2011),[arXiv:1106.6101 [nucl-th]].xiaoliang W. Liang, C. W. Xiao and E. Oset,Phys. Rev. D 88, no. 11, 114024 (2013),[arXiv:1309.7310 [hep-ph]].ollereta M. Albaladejo, J. A. Oller and L. Roca,Phys. Rev. D 82, 094019 (2010),[arXiv:1011.1434 [hep-ph]].albernew A. Martinez Torres and K. P. Khemchandani,Phys. Rev. D 94, 076007 (2016),[arXiv:1607.02102 [hep-ph]]. japocola M. Bayar, C. W. Xiao, T. Hyodo, A. Dote, M. Oka and E. Oset,Phys. Rev. C 86, 044004 (2012),[arXiv:1205.2275 [hep-ph]].Xiao:2011rc C. W. Xiao, M. Bayar and E. Oset,Phys. Rev. D 84, 034037 (2011),[arXiv:1106.0459 [hep-ph]].alberdani A. Martinez Torres, K. P. Khemchandani, D. Gamermann and E. Oset,Phys. Rev. D 80, 094012 (2009),[arXiv:0906.5333 [nucl-th]].albermari A. Martinez Torres, K. P. Khemchandani, M. Nielsen and F. S. Navarra,Phys. Rev. D 87, no. 3, 034025 (2013),[arXiv:1209.5992 [hep-ph]].kolo E. E. Kolomeitsev and M. F. M. Lutz,Phys. Lett. B 582, 39 (2004),[hep-ph/0307133].hofmann J. Hofmann and M. F. M. Lutz,Nucl. Phys. A 733, 142 (2004), [hep-ph/0308263].chiang F. K. Guo, P. N. Shen, H. C. Chiang, R. G. Ping and B. S. Zou,Phys. Lett. B 641, 278 (2006),[hep-ph/0603072].danids D. Gamermann, E. Oset, D. Strottman and M. J. Vicente Vacas,Phys. Rev. D 76, 074016 (2007), [hep-ph/0612179].hanhart1 F. K. Guo, C. Hanhart, S. Krewald and U. G. Meißner,Phys. Lett. B 666, 251 (2008),[arXiv:0806.3374 [hep-ph]].hanhart2 F. K. Guo, C. Hanhart and U. G. Meißner,Eur. Phys. J. A 40, 171 (2009),[arXiv:0901.1597 [hep-ph]].yamafix J. Yamagata-Sekihara, L. Roca and E. Oset,Phys. Rev. D 82, 094017 (2010), Erratum: [Phys. Rev. D 85, 119905 (2012)],[arXiv:1010.0525 [hep-ph]].acetiwf F. Aceti and E. Oset,Phys. Rev. D 86, 014012 (2012),[arXiv:1202.4607 [hep-ph]].sasa A. Martinez Torres, E. Oset, S. Prelovsek and A. Ramos,JHEP 1505, 153 (2015),[arXiv:1412.1706 [hep-lat]].ollerJ. A. Oller and E. Oset,Nucl. Phys. A 620, 438 (1997), Erratum: [Nucl. Phys. A 652, 407 (1999)],[hep-ph/9702314].bayarren M. Bayar, X. L. Ren and E. Oset,Eur. Phys. J. A 51, no. 5, 61 (2015),[arXiv:1501.02962 [hep-ph]].Roca:2010tf L. Roca and E. Oset,Phys. Rev. D 82, 054013 (2010),[arXiv:1005.0283 [hep-ph]].YamagataSekihara:2010pj J. Yamagata-Sekihara, J. Nieves and E. Oset,Phys. Rev. D 83, 014003 (2011),[arXiv:1007.3923 [hep-ph]]. Oller:2000ma J. A. Oller, E. Oset and A. Ramos,Prog. Part. Nucl. Phys.45, 157 (2000),[hep-ph/0002193].Liu:2012zya L. Liu, K. Orginos, F. K. Guo, C. Hanhart and U. G. Meißner,Phys. Rev. D 87, no. 1, 014508 (2013),[arXiv:1208.4535 [hep-lat]].xiedai J. J. Xie, L. R. Dai and E. Oset,Phys. Lett. B 742, 363 (2015),[arXiv:1409.0401 [hep-ph]].Dias:2016gou J. M. Dias, F. S. Navarra, M. Nielsen and E. Oset,Phys. Rev. D 94, no. 9, 096002 (2016),[arXiv:1601.04635 [hep-ph]].pedro M. Bayar, P. Fernandez-Soler, Z. F. Sun and E. Oset,Eur. Phys. J. A 52, no. 4, 106 (2016),[arXiv:1510.06570 [hep-ph]]. | http://arxiv.org/abs/1705.09257v1 | {
"authors": [
"V. R. Debastiani",
"J. M. Dias",
"E. Oset"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170525164826",
"title": "Study of the $DKK$ and $DK\\bar{K}$ systems"
} |
Maximizing Indoor Wireless Coverage Using UAVs Equipped with Directional Antennas Hazim Shakhatreh and Abdallah Khreishah Hazim Shakhatreh and Abdallah Khreishah are with the Department of Electrical and Computer Engineering, New Jersey Institute of Technology (email: {hms35,abdallah}@njit.edu) Received: date / Accepted: date ==================================================================================================================================================================================================================================== Unmanned aerial vehicles (UAVs) can be used to provide wireless coverage during emergency cases where each UAV serves as an aerial wireless base station when the cellular network goes down. They can also be used to supplement the ground base station in order to provide better coverage and higher data rates for the users. In this paper, we aim to maximize the indoor wireless coverage using UAVs equipped with directional antennas. We study the case that the UAVs are using one channel, thus in order to maximize the total indoor wireless coverage, we avoid any overlapping in their coverage volumes. We present two methods to place the UAVs; providing wireless coverage from one building side and from two building sides. In the first method, we utilize circle packing theory to determine the 3-D locations of the UAVs in a way that the total coverage area is maximized. In the second method, we place the UAVs in front of two building sides and efficiently arrange the UAVs in alternating upside-down arrangements. We show that the upside-down arrangements problem can be transformed from 3D to 2D and based on that we present an efficient algorithm to solve the problem. Our results show that the upside-down arrangements of UAVs, can improve the maximum total coverage by 100% compared to providing wireless coverage from one building side. Unmanned aerial vehicles, coverage, circle packing theory. § INTRODUCTION Cells on wheels (COW), are used to provide expanded wireless coverage for short-term demands, when cellular coverage is either minimal, never present or compromised by the disaster <cit.>. UAVs can also be used to provide wireless coverage during emergency cases and special events (such as concerts, indoor sporting events, etc.), when the cellular network service is not available or it is unable to serve users <cit.>. Compared to the COW, the advantage of using UAV-based aerial base stations is their ability to quickly and easily move <cit.>. The main disadvantage of using UAVs as aerial base stations is their energy capacity, the UAVs need to return periodically to a charging station for recharging, due to their limited battery capacity. In <cit.>, the authors integrate the recharging requirements into the coverage problem and examine the minimum number of required UAVs for enabling continuous coverage under that setting. Directional antennas are used to improve the received signal at their associated users, and also reduce interference since other- aerial base stations are targeting/serving other users in other directions <cit.>. The authors in <cit.> study the optimal deployment of UAVs equipped with directional antennas, using circle packing theory. The 3D locations of the UAVs are determined in a way that the total coverage area is maximized. 
In <cit.>, the authors investigate the problem by characterizing the coverage area for a target outage probability, they show that for the case of Rician fading there exists a unique optimum height that maximizes the coverage area. In <cit.>, the authors propose a heuristic algorithm to find the positions of aerial base stations in an area with different user densities, the goal is to find the minimum number of UAVs and their 3D placement so that all the users are served. However, it is assumed that all users are outdoor and the location of each user represented by an outdoor 2D point. In <cit.>, the authors use multiple UAVs to design efficient UAV relay networks to support military operations. They describe the tradeoff between connectivity among the UAVs and maximizing the covered area. However, they use the UAVs as wireless relays and do not take into account their mutual interference in downlink channels. In <cit.>, the authors propose a computational method for positioning aerial base stations with the goal of minimizing their number, while fully providing the required bandwidth over the disaster area. It is assumed that overlapping aerial base stations coverage areas are allowed and they use the Inter-Cell Interference Coordination (ICIC) methods to schedule radio resources to avoid inter-cell interference. The authors in <cit.> use a single UAV equipped with omnidirectional antenna to provide wireless coverage for indoor users inside a high-rise building, where the objective is to find the 3D location of a UAV that minimizes the total transmit power required to cover the entire high-rise building. In <cit.>, the authors use UAVs equipped with omnidirectional antennas to minimize the number of UAVs required to cover the indoor users. We summarize our main contributions as follows: * In order to maximize the indoor wireless coverage, we present two methods to place the UAVs, providing wireless coverage from one building side and from two building sides. In this paper, we study the case that the UAVs are using one channel, thus we avoid any overlapping in their coverage volumes (to avoid interference). In the first method, we utilize circle packing theory to determine the 3-D locations of the UAVs in a way that the total coverage area is maximized. In the second method, we place the UAVs in front of two building sides and efficiently arrange the UAVs in alternating upside-down arrangements. * We show that the upside-down arrangements problem can be transformed from 3D to 2D and based on that we present an efficient algorithm to solve the problem. * We demonstrate through simulation results that the upside-down arrangements of UAVs, can improve the maximum total coverage by 100% compared to providing wireless coverage from one building side. The rest of this paper is organized as follows. In Section II, we describe the system model. In Section III, we show the appropriate placement of UAVs that maximizes the total indoor wireless coverage. Finally, we present our numerical results in Section IV and make concluding remarks in Section V. § SYSTEM MODEL §.§ System Settings Consider a 3D building, as shown in Figure <ref>, where N UAVs must be deployed to maximize wireless coverage to indoor users located within the building. The dimensions of the high-rise building, in the shape of a rectangular prism, be [0,x_b] × [0,y_b] × [0,z_b]. 
Let (x_k, y_k, z_k) denote the 3D location of UAV k∈ N, and let (X_i, Y_i, Z_i) denote the location of user i.Also, let d_out,i be the distance between the UAV and indoor user i, and let d_in,i be the distance between the building wall and indoor user i. Each UAV uses a directional antenna to provide wireless coverage where the antenna half power beamwidth is θ_B. The authors in <cit.> use an outdoor directional antenna to provide wireless coverage for indoor users. They show that the highest RSRP (Reference Signal Received Power) and throughput values are measured along the main beam direction, thus the radiation pattern of a directional antenna is a cone and the indoor volume covered by a UAV is a truncated cone, as shown in Figure <ref>. Here, r_i is the radius of the circle that is located at yz-rectangular side ((0,0,0), (0,0,z_b) , (0,y_b,z_b), (0,y_b,0))), r_j is the radius of the circle that is located at yz-rectangular side ((x_b,0,0), (x_b,0,z_b) , (x_b,y_b,z_b), (x_b,y_b,0)) and x_b is the horizontal width of the building. The volume of a truncated cone is given by: V=1/3π x_b (r^2_i+r^2_j+r_ir_j) §.§ UAV Power Consumption In <cit.>, the authors show that significant power gains are attainable for indoor users even in rich indoor scattering conditions, if the indoor users use directional antennas. Now, consider a transmission between k-th UAV located at (x_k, y_k, z_k) and i-th indoor user located at (X_i, Y_i, Z_i). The received signal power at i-th indoor user location can be given by: P_r,ik(dB)=P_t+G_t+G_r-L_i where P_r,ik is the received signal power, P_t is the transmit power of UAV, G_t is the antenna gain of the UAV. It can be approximated by G_t≈29000/θ_B^2 with θ_B in degrees <cit.> and G_r is the antenna gain of indoor user i, which is given by <cit.>: G_r(dB)=G_r,dir+G_r,omni-GRF where G_r,dir and G_r,omni are free-space antenna gains of a directive and an omnidirectional antenna respectively and GRF is the decrease in gain advantage of a directive over an omnidirectional antenna, due to the presence of clutter. Also, L_i is the path loss which for the Outdoor-Indoor communication is: L_i=L_F+L_B+L_I= (wlog_10d_3D,i+wlog_10f_Ghz+g_1) + (g_2+g_3(1-cosθ_i)^2)+(g_4d_2D,i) where L_F is the free space path loss, L_B is the building penetration loss, and L_I is the indoor loss. In the path loss model, we also have w=20, g_1=32.4, g_2=14, g_3=15, g_4=0.5 <cit.> and f_Ghz is the carrier frequency. §.§ Placement of UAVs Choosing the appropriate placement of UAVs will be a critical issue when we aim to maximize the indoor wireless coverage. In this paper, we assume that we can place the UAVs in front of building sides A, B and above the building C as shown Figure <ref>. We also assume that the UAVs are using one channel. In this section, we demonstrate why avoiding the overlapping between UAV's coverage volumes will strengthen the total indoor wireless coverage. §.§.§ Overlapping between UAV's coverage volumes is allowed Now, when we place two UAVs in front of building sides A as shown in Figure <ref> (the UAVs have different z-coordinates and same x- and y- coordinates), the indoor users located in G_1's and G_2's locations will have high SINR. On the other hand, the indoor users located in G_3's location will have low SINR. This is because the dependency of SINR on the location of indoor user. 
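The SINR comparisons in this and the following placement examples rest on the link budget of Section II-B; a minimal sketch of that computation, with illustrative function names of our own, is given below. The incidence angle θ_i enters only through the (1-cosθ_i)^2 building-penetration term, the directional gain 29000/θ_B^2 is converted to dB before being added, and the 14.4 dB user antenna gain and 2 GHz carrier used in the example call are the values quoted later in the simulation section.

import numpy as np

W, G1, G2, G3, G4 = 20.0, 32.4, 14.0, 15.0, 0.5   # path-loss constants from Section II-B

def path_loss_dB(d3d_m, d2d_m, theta_i_rad, f_GHz):
    # L_i = L_F + L_B + L_I: free-space, building-penetration and indoor terms.
    L_F = W * np.log10(d3d_m) + W * np.log10(f_GHz) + G1
    L_B = G2 + G3 * (1.0 - np.cos(theta_i_rad))**2
    L_I = G4 * d2d_m
    return L_F + L_B + L_I

def received_power_dBm(P_t_dBm, theta_B_deg, G_r_dB, d3d_m, d2d_m, theta_i_rad, f_GHz=2.0):
    # P_r = P_t + G_t + G_r - L_i, with the UAV directional-antenna gain G_t ~ 29000 / theta_B^2.
    G_t_dB = 10.0 * np.log10(29000.0 / theta_B_deg**2)
    return P_t_dBm + G_t_dB + G_r_dB - path_loss_dB(d3d_m, d2d_m, theta_i_rad, f_GHz)

# e.g. received_power_dBm(P_t_dBm=20.0, theta_B_deg=60.0, G_r_dB=14.4,
#                         d3d_m=60.0, d2d_m=20.0, theta_i_rad=np.radians(30.0))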
Similarly, when we place two UAVs in front of two building sides A and B as shown in Figure <ref> (the UAVs have different x-coordinates and same y- and z- coordinates), the indoor users located in G_1's and G_2's locations will have high SINR. On the other hand, the indoor users located in G_3's location will have low SINR. In Figure <ref> (the UAVs have same y-coordinates and same x- and z- coordinates), when we place one UAV in front of building side A and one UAV above the building C, the indoor users located in G_1's and G_2's locations will have high SINR. On the other hand, the indoor users located in G_3's location will have low SINR. From the previous examples, we can conclude that allowing the UAVs coverage volumes to overlap will result in that some users are not satisfied. In the next section, we place the UAVs in a way that maximizes the total coverage, and avoids any overlapping in their coverage volumes. §.§.§ Overlapping between UAV's coverage volumes is not allowed In Figure <ref>, we avoid the overlapping between UAV's coverage volumes by using UAVs with small antenna half power beamwidths θ_B. Actually, this is impractical way to cover the building, due to the high number of UAVs required to cover the building. In Figure <ref>, we place the UAVs in front of two building sides and efficiently arrange the UAVs in alternating upside-down arrangements. We can notice that this method will maximize the indoor wireless coverage where the uncovered holes are minimized and the overlapping between UAV's coverage volumes is not allowed. § MAXIMIZING INDOOR WIRELESS COVERAGE In this section, the UAVs are assumed to be symmetric having the same transmit power, the same horizontal location x_k, the same channel and the same antenna half power beamwidth θ_B. We show two methods to place the UAVs in a way that tries to maximize the total coverage, and avoids any overlapping in their coverage volumes. §.§ Providing Wireless Coverage from one building side In this method, we place all UAVs in front of one building side (side A, side B or side C). The objective is to determine the three-dimensional location of each UAV k∈ N in a way that the total covered volume is maximized. Now, consider that we place the UAVs in front of building side A, then the projection of UAV's coverage on the building side B is a circle as shown in Figure <ref>. Our problem can be formulated as: max |N|⋆1/3⋆π⋆ x_b⋆ (r^2_i+r^2_j+r_ir_j) subject to √((y_k-y_q)^2+(z_k-z_q)^2)≥ 2r_j, k≠ q∈ N z_b-(z_k+r_j)≥ 0, k∈ N (z_k-r_j)≥ 0, k∈ N y_b-(y_k+r_j)≥ 0, k∈ N (y_k-r_j)≥ 0, k∈ N The objective is to maximize the indoor wireless coverage (covered volume). Constraint set (1) guarantees that truncated cones cannot overlap each other. Constraint sets (2-5) ensure that UAV k should not cover outside the 3D building, see Figure <ref>. We model this problem by utilizing the well-known problem, circle packing problem. In this problem, N circles should be packed inside a given surface such that the packing density is maximized and no overlapping occurs <cit.>, note that the surface in our problem is a rectangle. The authors of <cit.> tackle this problem by solving a number of decision problems. The decision problem is:Given N circles of radius r_j and a rectangle of dimension d_1× d_2, whether is it possible to locate all the circles into the rectangle or not. In <cit.>, the authors introduce a nonlinear model for this problem. 
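The discussion of how each individual decision problem is solved continues below; before that, the bookkeeping around it can be sketched as follows. The first routine checks constraints (1)-(5) for a candidate placement of the projected circles, the second runs an Algorithm-1-style loop that increments N until packing fails, and the third evaluates the truncated-cone objective. The argument solve_decision_problem is a placeholder of ours for the global nonconvex solver of the cited work, not an implementation of it.

import itertools
import numpy as np

def packing_feasible(centers, r_j, y_b, z_b, tol=1e-9):
    # Constraints (2)-(5): every projected circle of radius r_j stays inside the y_b x z_b face;
    # constraint (1): any two circle centers are at least 2 r_j apart (no overlapping cones).
    for (y, z) in centers:
        if not (r_j - tol <= y <= y_b - r_j + tol and r_j - tol <= z <= z_b - r_j + tol):
            return False
    for c1, c2 in itertools.combinations(centers, 2):
        if np.hypot(c1[0] - c2[0], c1[1] - c2[1]) < 2.0 * r_j - tol:
            return False
    return True

def max_packed_uavs(solve_decision_problem, r_j, y_b, z_b):
    # Increment N until the decision problem "pack N circles of radius r_j in a
    # y_b x z_b rectangle" becomes infeasible; return the last feasible packing.
    n, centers = 0, []
    while True:
        candidate = solve_decision_problem(n + 1, r_j, y_b, z_b)
        if candidate is None or not packing_feasible(candidate, r_j, y_b, z_b):
            return n, centers
        n, centers = n + 1, candidate

def covered_volume(n_uavs, r_i, r_j, x_b):
    # Objective value: N times the volume of one truncated cone.
    return n_uavs * np.pi * x_b * (r_i**2 + r_j**2 + r_i * r_j) / 3.0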
Finding the answer for the decision problem will depend on finding the global minimizer of a nonconvex and nonlinear optimization problem. In each decision problem, they investigate the feasibility of packing N identical circles. If this is feasible, N is incremented by one and the decision problem is solved again. The algorithm will stop when the decision problem yields an infeasible packing <cit.>. The pseudo code of the algorithm is shown in Algorithm 1. In the next section, we utilize the two building sides to maximize the indoor wireless coverage. This will allow us to extend the indoor wireless coverage compared with providing wireless coverage from one building side, because the holes induced by the cones of the UAVs in one side can be filled by the cones induced by the UAVs in the other side without causing overlap among the two sets of cones. §.§ Providing Wireless Coverage from two building sides In this method, we place the UAVs in front of two building sides (side A and side B) and efficiently arrange the UAVs in alternating upside-down arrangements (see Figures <ref> and <ref>). In Theorem 1, we find the horizontal location of the UAV x_UAV that guarantees the upside-down arrangements of the truncated cones. In Theorem 2, we prove that if the truncated cones do not intersect in 3D, then the circles do not intersect in building sides (A and B), and vice versa. In Theorem 3, we prove that if we maximize the percentage of covered area of building sides (A and B), then we maximize the percentage of covered volume of building, and vice versa. These theorems help us to transform the geometric problem from 3D to 2D and present an efficient algorithm that maximizes the indoor wireless coverage. The horizontal location of the UAV x_UAV that guarantees the upside-down arrangements of the truncated cones will be equal to 0.7071x_b regardless of the antenna half power beamwidth angle θ_B. The radius of the smaller circular face r_i is given by: r_i=r_j x_UAVx_b+x_UAV Now, we divide the building sides A and B to square cells (as shown in Figures <ref> and <ref>), the large circle in Figure <ref> and the small circle in Figure <ref> will represent the projections of UAV's coverage on building sides A and B when the UAV is placed in front of building side B. Similarly, the four small circles quarters in Figure <ref> and the four large circles quarters in Figure <ref> will represent the projections of UAVs coverage on building sides A and B when the UAV is placed in front of building side A. From Figures <ref> and <ref>, the diagonal of the square cell is: D=2r_j+2r_i where r_j is the radius of the larger circular face and r_i is the radius of the smaller circular face. By applying the pythagorean’s theorem, we get: 4r^2_j+4r^2_j=(2r_j+2r_i)^2 ⟹ √(8)r_j=2r_j+2r_i ⟹ r_i=√(8)-22r_j=γ r_j From equations (1) and (2), we get: x_UAVx_b+x_UAV=√(8)-22 ⟹ 2x_UAV=x_b(√(8)-2)+x_UAV(√(8)-2) ⟹x_UAV=x_b(√(8)-2)(4-√(8))=0.7071x_b Thus, to guarantee the upside-down arrangements of the truncated cones, we must place the UAVs at horizontal distance equals to 0.7071x_b. Theorems 2 and 3 help us to transform the geometric problem from 3D to 2D and present an efficient algorithm that maximizes the indoor wireless coverage. The truncated cones do not intersect in 3D iff The circles do not intersect in building sides (A and B). First, we prove that if the truncated cones do not intersect in 3D, then the circles do not intersect in building sides (A and B). 
Assume that we have a set of truncated cones G={1,2,...,N} and they do not intersect in 3D space. Each truncated cone n ∈ G can be represented by a number of 2D circles {c_1n, c_2n,..., c_|h|n}, where |h| is the height of the truncated cone, c_1n is the smaller circular face and c_|h|n is the larger circular face. It is obvious that if the |G| truncated cones do not intersect in 3D space then the smaller and larger circular faces do not intersect in building sides (A and B). Second, we prove that if the circles do not intersect in building sides (A and B), then the truncated cones do not intersect in 3D. Assume that four circles (with large radius r_j) not intersect in building side A (see Figure <ref>), then the circles (with small radius r_i) in building side B will appear as shown Figure <ref>. Now, we need to do two steps: 1) Connect the lines between these points (A_|h| with A_1, B_|h| with B_1, C_|h| with C_1 and D_|h| with D_1 ). 2) Draw circles that pass through four points A_k, B_k, C_k and D_k where k∈ h. After these two steps, the circles that have been drawn in step two will represent a truncated cone that his circular bases do not intersect with the four circles in building sides (A and B). Also, the truncated cones do not intersect in 3D space. We maximize the percentage of covered area of building sides (A and B) iff We maximize the percentage of covered volume of building First, we divide the building sides A and B to square cells (as shown in Figures <ref> and <ref>). The percentage of covered volume is given by: V=⌊(y_b∗ z_b)4r^2_j⌋∗ 2 ∗ (π/3∗ x_b∗ (r^2_i+r_ir_j+r^2_j))(x_b∗ y_b∗ z_b) Where: ⌊(y_b∗ z_b)4r^2_j⌋: the number of square cells in the building side. 2: the number of truncated cones in the square cell (see Figures <ref> and <ref>).π/3∗ x_b∗ (r^2_i+r_ir_j+r^2_j): the volume of truncated cone.(x_b∗ y_b∗ z_b): the volume of the building. Now, from equations (2) and (3), we get: V=⌊(y_b∗ z_b)4r^2_j⌋∗ (2π/3)∗ (γ^2+γ+1)r^2_j(y_b∗ z_b) = K_1⌊(y_b∗ z_b)4r^2_j⌋ r^2_j Where: K_1=(2π/3)(γ^2+γ+1)(y_b∗ z_b) The percentage of covered area of building sides (A and B) is given by: W= ⌊(y_b∗ z_b)4r^2_j⌋∗ (π r^2_i+π r^2_j)(y_b∗ z_b) + ⌊(y_b∗ z_b)4r^2_j⌋∗ (π r^2_i+π r^2_j)(y_b∗ z_b)=⌊(y_b∗ z_b)4r^2_j⌋∗ 2π( r^2_i+ r^2_j)(y_b∗ z_b) Now, from equations (2) and (5), we get: W=⌊(y_b∗ z_b)4r^2_j⌋∗ 2π(γ^2+1)r^2_j(y_b∗ z_b)=K_2⌊(y_b∗ z_b)4r^2_j⌋ r^2_j Where: K_2=(2π)(γ^2+1)(y_b∗ z_b) To prove that maximizing the percentage of covered volume of building is equivalent to maximizing the percentage of covered area of building sides (A and B). From equations (4) and (6), maximizing V=K_1⌊(y_b∗ z_b)4r^2_j⌋ r^2_j is equivalent to maximizing K_2⌊(y_b∗ z_b)4r^2_j⌋ r^2_j where K_1 and K_2 are constants. To prove that maximizing the percentage of covered area of building sides (A and B) is equivalent to maximizing the percentage of covered volume of building. From equations (4) and (6), maximizing W=K_2⌊(y_b∗ z_b)4r^2_j⌋ r^2_j is equivalent to maximizing K_1⌊(y_b∗ z_b)4r^2_j⌋ r^2_j where K_1 and K_2 are constants. In Algorithm 2, we maximize the covered volume by placing the UAVs in alternating upside-down arrangements. First, we find the horizontal distance between the building and the UAVs x_UAV=0.7071x_b (see Theorem 1) that guarantees the alternating upside-down arrangements. Then, we divide the building sides A and B to square cells and place one UAV in front of the square cell. In steps (8-16), we find the 3D locations of UAVs that cover the building from side B. 
On the other hand, steps (17-25) find the 3D locations of UAVs that cover the building from side A. Finally, the algorithm will output total number of UAVs and the total covered volume. § SIMULATION RESULTSLet the dimensions of the building, in the shape of a rectangular prism, be [0, x_b=30]×[0, y_b=40]×[0, zb=60]. We use three methods to cover the building using UAVs. In the first method, we place all UAVs in front of one building side (A or B) (FOBS). In the second method, we place all UAVs above the building (C) (ABS). In the third method, we arrange the UAVs in alternating upside-down arrangements (AUDA). For the first and second methods, we utilize the circle packing in a rectangle approach <cit.> to maximize the covered volume. For the third method, we apply Algorithm 2 to maximize the covered volume. In Figure <ref>, we find the maximum total coverage for different antenna half power beamwidth angles θ_B. As can be seen from the simulation results, the maximum total coverage is less than half for the FOBS and ABS methods, this is because providing wireless coverage from one building side will only maximize the covered area of the building side. On the other hand, we improve the maximum total coverage by applying the AUDA, this is because AUDA will allow us to use a higher number of UAVs to provide wireless coverage compared with providing wireless coverage from one building side, as shown in Figure <ref>.In order to provide full wireless coverage for the building, we use UAVs with different channels to cover the holes in the building. In Figure <ref>, we find the total number of UAVs required to provide full coverage. As can be seen from the figure, FOBS and ABS need high number of UAVs to guarantee full wireless coverage for the building, due to the irregular shapes of the holes in the building. Here, we can easily specify the number of UAVs required to cover each hole in the building, due to the small projections of the holes in the building side. On the other hand, AUDA needs fewer number of UAVs to provide full wireless coverage, due to the small-regular shapes of the uncovered spaces inside the building. Here, we need only one UAV to cover each hole. In Figure <ref>, we find the total transmit power consumed by UAVs when the building is fully covered. Here, we assume that the threshold SNR equals 25dB, the noise power equals -120dBm, the frequency of the channel is 2GHz and the antenna gain of each indoor user is 14.4 dB <cit.>. As can be seen from the figure, the total transmit power in all methods is very small, due to the high gain of the directional antennas. Also, we can notice that the total power consumed in FOBS and ABS is higher than that of AUDA. This is because the number of UAVs required to fully cover the building in AUDA is fewer than that for FOBS and ABS. § CONCLUSIONChoosing the appropriate placement of UAVs will be a critical issue when we aim to maximize the indoor wireless coverage. In this paper, we study the case that the UAVs are using one channel, thus in order to maximize the total indoor wireless coverage, we avoid any overlapping in their coverage volumes. We present two methods to place the UAVs; providing wireless coverage from one building side and from two building sides. In the first method, we utilize circle packing theory to determine the 3-D locations of the UAVs in a way that the total coverage area is maximized. 
In the second method, we place the UAVs in front of two building sides and efficiently arrange the UAVs in alternating upside-down arrangements. We show that the upside-down arrangements problem can be transformed from 3D to 2D and based on that we present an efficient algorithm to solve the problem. Our results show that the upside-down arrangements, can improve the maximum total coverage by 100% compared to providing wireless coverage from one building side. § ACKNOWLEDGMENTThis work was supported in part by the NSF under Grant CNS-1647170. IEEEtran | http://arxiv.org/abs/1705.09772v1 | {
"authors": [
"Hazim Shakhatreh",
"Abdallah Khreishah"
],
"categories": [
"cs.IT",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20170527061755",
"title": "Maximizing Indoor Wireless Coverage Using UAVs Equipped with Directional Antennas"
} |
Symmetries and scaling in generalised coupled conservedKardar-Parisi-Zhang equations Abhik Basu December 30, 2023 ====================================================================================== Let (M,g) be a compact, smooth,Riemannian manifold and {ϕ_h }an L^2-normalized sequence of Laplace eigenfunctions with defect measure μ. Let H be a smooth hypersurface with unit exterior normal ν. Our main resultsaysthatwhen μ is not concentrated conormally to H, the eigenfunction restrictions to H satisfy∫_H ϕ_h dσ_H = o(1) and∫_H h D_νϕ_h dσ_H = o(1),h → 0^+.§ INTRODUCTION On a compact Riemannian manifold (M,g), with no boundary, consider a sequence of Laplace eigenfunctions {ϕ_h}, -h^2 Δ_g ϕ_h=ϕ_h,normalized so that ϕ_h_L^2(M)=1. The goal of this article is to study the average oscillatory behavior of ϕ_h when restricted to a hypersurface H ⊂ M.Namely, the goal is to find a condition on the pair ({ϕ_h}, H) so that∫_H ϕ_h dσ_H=o(1),as h → 0^+, where σ_H denotes the hypersurface measure on H induced by the Riemannian structure. It is important to point out that one cannot always expect to observe this oscillatory decay. For instance, on the round sphere, zonal harmonics of even degree integrate to a constant along the equator. Also, for any closed geodesic inside the square flat torus. there is a sequence of eigenfunctions thatintegrate to a non-zero constant.Integrals of the form (<ref>) have been studied for quite some time, going back to the work of Good <cit.> and Hejhal <cit.> that treated the case where H is a periodic geodesic inside a compact hyperbolic manifold. These authors proved that in such a case,∫_H ϕ_h dσ_H=O(1) as h→ 0^+. Zelditch <cit.> generalized this to the case where H is any hypersurface inside a compact manifold, showing that for any hypersurface H,∫_Hϕ_hdσ_H=O(1). In addition, it follows from <cit.> that for a density one subsequence of eigenvalues {h_j}_j, one has lim_j→∞∫_H ϕ_h_jdσ_H=0. Moreover, one can actually get an explicit polynomial bound of the form O(h^1/2 - 0) for the rate of decay of expectations for the density-one subsequence (see <cit.>). However, the latter estimate is not satisfied for all eigenfunctions and it is not clear which sequence ofeigenfunctions must be removed for the estimate to hold.There are several articles that address this issue by restricting to special cases ofRiemannian surfaces (M,g) andspecial curves H ⊂ M.Working on surfaces of strictly negative curvature, and choosing H to be a geodesic,Chen-Sogge <cit.> proved ∫_H ϕ_h dσ_H =o(1).Subsequently, Sogge-Xi-Zhang <cit.> obtained a O((log h)^-1/2) bound on the rate of decay under a relaxed curvature condition. Recently, working on surfaces of non-positive curvature Wyman <cit.> obtained (<ref>) when assuming curvature conditions on H. Finally, we remark that on average, one expects ∫_H ϕ_h dσ_H≍ h^1/2 (see <cit.>). In this article we focus on establishing (<ref>) given explicit conditions on the sequence of eigenfunctions {ϕ_h}. We do not impose any geometric conditions on (M,g), nor do we assume it is a surface. Furthermore, we do not restrict our attention to geodesic curves and allow H to be any hypersurface in M.Instead, we prove that (<ref>) holds provided that the sequence {ϕ_h} does not asymptotically concentrate in the conormal direction N^*H to H. 
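To see why some non-concentration hypothesis of this kind is needed, it is worth making the flat-torus example mentioned above explicit. The short computation below is a standard illustration written out for convenience and is not taken from the paper; the torus is normalized as ℝ^2/(2πℤ)^2 for concreteness.

\[
M=\mathbb{R}^2/(2\pi\mathbb{Z})^2,\quad H=\{x_2=0\},\quad h_k=1/k,\qquad
\phi_{h_k}(x)=\tfrac{1}{2\pi}\,e^{ikx_2},\quad -h_k^2\Delta\phi_{h_k}=\phi_{h_k},\quad \|\phi_{h_k}\|_{L^2}=1,
\]
\[
\int_H \phi_{h_k}\, d\sigma_H=\tfrac{1}{2\pi}\int_0^{2\pi}dx_1=1\quad\text{for every }k,
\qquad
\mu=(2\pi)^{-2}\,dx\otimes\delta_{\xi=(0,1)} .
\]

The defect measure is carried entirely by directions conormal to H, so μ_H(N^*H)>0, the measure is not conormally diffuse with respect to H, and no decay of the averages can occur.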
One example where this non-concentration condition does hold is the case of quantum ergodic sequences of eigenfunctions and any hypersurface H. §.§ Statements of the results Let H⊂ M be a closed smooth hypersurface, and write S^*_HM ⊂ S^*M for the space of unit covectors with foot-points in H, and S^*H for the set of unit covectors tangent to H. We fix t_0>0 small enough and define a measure μ_H on S^*_HM ⊂ S^*M by μ_H(A) :=1/2t_0 μ( ⋃_|s|≤ t_0G^s(A)), where G^t:S^*M→ S^*M denotes the geodesic flow. Remark <ref> shows that if A ⊂ S^*_HM is so that A⊂ S_H^*M \ S^*H, then μ_H(A) is independent of the choice of t_0, and it is natural to replace the fixed t_0 with lim_t_0→ 0. We say that μ is conormally diffuse with respect to H if μ_H(N^*H)=0. If U⊂ H is open, we say that μ is conormally diffuse with respect to H over U if μ_H(N^*H∩ S^*_UM)=0. As an example, this condition is satisfied when {ϕ_h} is a quantum ergodic (QE) sequence and μ = μ_L, the Liouville measure on S^*M. Note that the QE condition is much stronger than the assumption in Definition <ref>. In Section <ref> we give examples of hypersurfaces and sequences of eigenfunctions for which the defect measure is conormally diffuse but is not absolutely continuous with respect to the Liouville measure. Our main result is the following. Let H ⊂ M be a closed hypersurface. Let {ϕ_h} be a sequence of eigenfunctions associated to a defect measure μ that is conormally diffuse with respect to H. Then, ∫_H ϕ_h dσ_H=o(1), and ∫_H h ∂_νϕ_h dσ_H=o(1), as h→ 0^+. The proof of Theorem <ref> actually shows that ∫_H ϕ_h χ dσ_H = o(1) for any χ∈ C^∞(H). We note also that the methods of this paper give another, independent proof of (<ref>). As we have already pointed out, the Liouville measure μ = μ_L is conormally diffuse. Consequently, the following result is a corollary of Theorem <ref>: Let H ⊂ M be a closed hypersurface and {ϕ_h} be any QE sequence of eigenfunctions. Then, ∫_H ϕ_h dσ_H=o(1) and ∫_H h ∂_νϕ_h dσ_H=o(1). By Lindenstrauss' celebrated result <cit.>, Hecke eigenfunctions on compact, arithmetic hyperbolic surfaces are all QE (i.e., they are quantum uniquely ergodic (QUE)). Together with Theorem <ref> this yields Let (ℍ/Γ,g) be a compact, arithmetic surface and H ⊂ M be a closed, C^∞ curve. Then, for all Hecke eigenfunctions {ϕ_h}, ∫_H ϕ_h dσ_H=o(1) and ∫_H h ∂_νϕ_h dσ_H=o(1). One can localize the results in Theorems <ref>-<ref>. In the following, we write dσ_H for the measure on H induced by the Riemannian structure. Let (M,g) be a smooth, closed Riemannian manifold and H ⊂ M be a closed hypersurface with A ⊂ H a subset with piecewise C^∞ boundary, and suppose U⊂ H is open with A⊂ U. Let {ϕ_h} be a sequence of eigenfunctions with defect measure μ conormally diffuse with respect to H over U. Then, ∫_A ϕ_h dσ_H=o(1), and ∫_A h ∂_νϕ_h dσ_H=o(1), as h→ 0^+. We note that as a corollary of Theorem <ref>, the results in Theorems <ref> and <ref> for QE eigenfunctions extend to all smooth curve segments A. Acknowledgements. The authors would like to thank the anonymous referee for many helpful comments. J.G. is grateful to the National Science Foundation for support under the Mathematical Sciences Postdoctoral Research Fellowship DMS-1502661. The research of J.T. was partially supported by NSERC Discovery Grant # OGP0170280 and an FRQNT Team Grant. J.T.
was also supported by the French National Research Agency project Gerasic-ANR- 13-BS01-0007-0.§ DECOMPOSITION OF DEFECT MEASURES§.§ Invariant Measures near transverse submanifoldsLet N be a smooth manifold, 𝒱 be a vector field on N and write φ^𝒱_t:N → N for the flow map generated by 𝒱 at time t. Let Σ⊂ N be a smooth manifold transverse to 𝒱. Then for ϵ>0 small enough, the map ι:(-2ϵ,2ϵ)×Σ→ Nι(t,q)=φ^𝒱_t(q)is a diffeomorphism onto its image and we may use (-2ϵ,2ϵ)×Σ as coordinates on N near Σ. Suppose that μ is a finite Borel measure on N and that 𝒱μ=0 i.e. (φ_t^𝒱)_*μ=μ. Then, for a Borel set A⊂ [-,)×Σ,ι^*μ(A)=dt dμ_Σ (A)where dμ _Σ is a finite Borel measure on Σ.As above, we choose coordinates (t,q) so that ι^*𝒱=∂_t. Then, for all F∈ C_c^∞ (-2ϵ,2ϵ)×Σ, ∫∂_t Fdμ=0.Now, fix χ∈ C_c^∞((-2ϵ,2)) with with ∫χ dt=1. Let f∈ C_c^∞((-2ϵ,2)×Σ) and define f̅(q):=∫ f(t,q)dt.Then f(t,q)-χ(t)f̅(q)=∂_t F with F(t,q):=∫_-∞^t f(s,q)-χ(s)f̅(q)ds∈ C_c^∞((-2,2)×Σ).Therefore, for all f∈ C_c^∞((-2,2)× N) and χ∈ C_c^∞((-2,2)) with ∫χ dt=1,∫ f(t,q)dμ(t,q)=∫χ(t)f̅(q)dμ(t,q) =∭ f(s,q)dsχ(t)dμ(t,q).Now, let B⊂Σ be Borel and I⊂ (-2,) Borel and f_n(t,q)↑ 1_I(t)1_B(q). Then by the dominated convergence theorem,μ(I× B)=∬ |I|1_B(q)χ(t)dμ(t,q).Next, let χ_n↑ (2)^-11_[-,] with ∫χ_n ≡ 1. Then we obtainμ(I× B)=|I|/2μ([-,]× B).So, letting μ_Σ(B):=(2)^-1μ([-,]× B), we have that for rectangles I× B, μ(I× B)=dtdμ_Σ(I× B). But then, since these sets generate the Borel sigma algebra, the proof of the lemma is complete.Throughout this proof, we slightly abuse notation by identifying ι^*μ with μ.For B⊂Σ Borel, definedμ_Σ(B):=1/2 μ( [-,)× B).We will show that μ=dtdμ_Σ(B).To do this, it is enough to show that for B⊂Σ Borel and I⊂ [-,), an intervalμ(I× B)= |I|dμ_Σ(B)Notice that with𝒜:={ I× B:I⊂ [-,) an interval, B⊂Σ Borel},the sigma algebra generated by 𝒜 is the Borel sigma algebra on [-,)×Σ. Therefore, once we show (<ref>), we have μ(A)=dtdμ_Σ(A), A∈𝒜and hence since μ is σ-finite (indeed finite) and 𝒜 generates the Borel sets, this proves μ=dtdμ_Σ.We now proceed to prove (<ref>). By invariance of μ under φ^𝒱_t, μ( [a,b)× B)=μ([a+t,b+t)× B)for all -2ϵ-a<t<2ϵ-b. So, given k_0, n ∈ℕ with 1≤ k_0≤ n<∞, we have1/nμ([-,)× B) =1/n∑_k=1^n μ([-+2(k-1)n,-+2 kn)× B) =μ([-+2(k_0-1)n,-+2 k_0n)× B). Next, suppose I⊂ [-,) is an interval with endpoints a,b ∈ [-,], a≤ b,and fix δ>0. Then let k_1, k_1 ∈ℕ satisfy1≤ k_1≤ k_2≤ n and be so that a-δ≤ - +2(k_1-1)n ≤ a≤ b≤-+2 k_2n ≤ b+δ.Then, since μ([a,b] × B) ≤μ ([-+2(k_1-1)n, -+2 k_2n] × B) = ∑_j=k_1^k_2μ ([-+2(j-1)n, -+2 jn] × B), μ( I× B)≤2(k_2-k_1+1)/n1/2 μ([-,)× B)≤ (b-a+2δ)dμ_Σ(B).Sending δ→ 0 provesμ(I× B)≤ |I|dμ_Σ(B).Therefore, if a=b, μ(I× B)=0 and we may assume a<b. Fix δ>0 so that b-a>2δ>0 and choosea ≤(-1+2(k_1-1)n)≤ a+δ≤ b-δ≤(-1+2k_2n)≤ b.Then,μ( I× B)≥2(k_2-k_1+1)/n1/2 μ([-,)× B)≥ (b-a-2δ)dμ_Σ( B)and sending δ→ 0 proves (<ref>) and hence finishes the proof of the lemma.§.§ Fermi coordinatesThroughout the remainder of the article we will work in the case that H⊂ M is a smooth, orientable, separating hypersurface. That is,M∖ H has two connected components. We then recover Theorem <ref> for general H after proving Theorem <ref> for such hypersurfaces. We then divide a given hypersurface into finitely many (possibly overlapping) subsets of separating orientable hypersurfaces and apply Theorem <ref> to each. Let H ⊂ M be a closed smooth hypersurfaceand let U_H be a Fermi collar neighborhood of H.In Fermi coordinatesU_H= {(x',x_n): x' ∈ Hand x_n ∈ (-c, c)}for some c>0,and H={(x',0) : x' ∈ H}. 
Since H is a closed, separating hypersurface, it divides M into two connected components Ω_H and M \Ω_H.In the Fermi coordinates system, the point (x',x_n) is identified with the point exp_x'(x_n ν_n) ∈ U_H where ν_n is the unit normal vector to Ω_H with base point atx' ∈ H.< g r a p h i c s >The Fermi coordinates on U_H induce coordinates (x',x_n, ξ', ξ_n) on S_^*M={(x,ξ)∈ S^*M: x ∈} with (ξ',ξ_n) ∈ S^*_(x',x_n)M.In these coordinates, ξ' is cotangent to H while ξ_n is conormal to H. Note that in the Fermi coordinate system we have |(ξ', ξ_n)|^2_g(x', x_n)= ξ_n^2 + R(x',x_n,ξ'),where R satisfies that R(x',0, ξ')= |ξ'|^2_g_H(x') for all (x', ξ') ∈ T^*H and g_H is the Riemannian metric induced on H by g.§.§ Transversals for defect measures We now apply Lemma <ref> to the special case of defect measures, using the fact that they are invariant under the geodesic flow.In what follows we write |ξ'|_x':= |ξ'|_g_H(x'), where g_H is the Riemannian metricon H induced by g. Let_H(δ):={(x,ξ)∈ S^*_HM:|ξ'|_x'^2≥ 1-δ^2},and define the set of non-glancing directionsΣ_δ:= S^*_HM∖_H(δ).Suppose μ is a defect measure associated to a sequence of Laplace eigenfunctions. Then, for all δ>0 there exists >0 small enough so thatι^*μ =dtdμ_Σ_δon(-,)×Σ_δwhere ι:(-,)×Σ_δ→⋃_|s|<G^s(Σ_δ), ι(t,q)= G^t(q),is a diffeomorphism and dμ_Σ_δ is a finite Borel measure on Σ_δ.In what follows we use Lemma <ref> with N=S^*M, 𝒱=H_p the Hamiltonian flow for p=|ξ|_g, and φ_t^𝒱=G^t the geodesic flow. Note that since μ is a defect measure for a sequence of Laplace eigenfunctions, it is invariant under the geodesic flow G^t. Then, forq∈Σ_δ,|H_px_n(q)|>c δ>0 and hence Σ_δ is transverse to G^t. Therefore, there exists >0 so thatι:(-2,2)×Σ_δ→ S^*M, with ι(t,q)=G^t(q), is a coordinate map.For each A⊂ S^*_HM with A⊂ S^*_HM∖ S^*H, there exists δ_0>0 so that dμ_Σ_δ(A)=lim_t→ 01/2tμ(⋃_|s|≤ t G^s(A))for all 0<δ≤δ_0. Indeed, since A is compact, there exists δ_0=δ_0(A)>0 so that A⊂Σ_δ_0. Then, by Lemma <ref>, there exists =(A)>0 so that if |t| ≤, then μ(⋃_|s|≤ t G^s(A))=2tdμ_Σ_δ(A).In particular, we conclude that the quotient 1/2tμ(⋃_|s|≤ t G^s(A)) is independent of t as long as |t|≤. We also need the following description of μ.Suppose μ is a defect measure associated to a sequence of Laplace eigenfunctions, and let δ>0. Then, in the notation of Lemma <ref>, there exists _0>0 small enough so that μ =|ξ_n|^-1 dμ_Σ_δ(x',ξ',ξ_n) dx_n, for (x',x_n,ξ',ξ_n)∈ι((-_0,_0)×Σ_δ). Notice that |H_px_n|>γ on Σ_γ=S^*_HM∖𝒢(γ). Therefore, there exists c_0,c_1>0 so that {(x',x_n, ξ', ξ_n): |x_n|≤ c_0γ,|ξ'|_x'^2≤ 1-c_0^-1γ^2}⊂⋃_|t|≤ c_1γG^t(Σ_γ).By Lemma <ref>,ι^*μ =dμ_Σ_δ(x',ξ',ξ_n) dton (-,)×Σ_δ.Then, forq∈Σ_δ|∂_t x_n(ι(0,q))|=|H_px_n(ι(0,q))|=|ξ_n(ι(0,q))|/|ξ|_g> δ and hence for _0>0 small enough and q∈Σ_δ, t∈ (-_0,_0),|∂_t x_n(ι(t,q))|=|H_px_n(ι(t,q))|=|ξ_n(ι(t,q))|/|ξ|_g> δ/2.Therefore, dt= f(x',x_n,ξ',ξ_n)dx_n where f(x',x_n,ξ',ξ_n)=|H_px_n(ι^-1(x_n,(x',ξ',ξ_n)))|^-1=|ξ|_g/|ξ_n|=|ξ_n|^-1 where in the last equality, we use that |ξ|_g=1.In particular, μ=|ξ_n|^-1dμ_Σ_δ(x',ξ',ξ_n)dx_n.Before proceeding to the proof of Theorem <ref> we note that Lemma <ref> implies that for all δ>0, μ( S^*_H M∖_H(δ))=0.Notice that the measure |ξ_n|^-1dμ_Σ_δ(x',ξ',ξ_n)=1/√(1-|ξ'|_x'^2)dμ_Σ_δ(x',ξ',ξ_n) is hypersurface measure on S^*_HM∖𝒢(δ) induced by μ where we take ∂_x_n to be the normal vector field to S^*_HM. 
For example, if μ_L is Liouville measure, then, parametrizing S^*_HM∖𝒢(δ) by (x',ξ')d(μ_L)_Σ_δ=c 1_{S^*_HM∖𝒢(δ)}(x',ξ',ξ_n) dx'dξ'for some c>0.§ PROOF OF THEOREM <REF>Consider the cut-off function χ_α∈ C^∞(, [0,1]) with χ_α(t) =0 |t| ≥α1 |t| ≤α/2,with|χ_α'(t)| ≤ 3/α for all t ∈. For δ>0 consider the symbol β_δ(x',ξ')=χ_δ(|ξ'|_x') ∈ S^0(T^*H) where we continue to write |ξ'|_x':= |ξ'|_g_H(x').We refer the reader to the Appendix where the semiclassical notation used in this section is introduced.The operator Op_h(β_δ) ∈Ψ^0(H) microlocalizes near the conormal direction in T^*H which is identified with ξ'=0 via the orthogonal projection.The first step towards the proof of Theorem <ref> is to reduce the problem to study averages over H of the functions ϕ_h and h∂_νϕ_h when microlocalized near the conormal direction. For any δ >0 andu ∈ L^2(H),∫_H udσ_H=∫_H Op_h(β_δ) udσ_H+O_δ(h^∞)u_L^2(H).We wish to show that ⟨ (1-Op_h(β_δ)) u, 1 ⟩_L^2(H)=⟨u,(1-Op_h(β_δ))^* 1 ⟩_L^2(H)=O_δ(h^∞). To prove this,we simply note that in local coordinates(1-Op_h(β_δ) )^* 1 (x)= 1/(2π h)^n-1∬ e^i/h⟨ x-x',ξ' ⟩a_δ(x,ξ';h)(1-χ_2δ)(|ξ'|_x) dξ' dx',for some symbol a_δ∈ S^0. The phase function Φ(x', ξ';x) =⟨ x-x',ξ' ⟩ has critical pointsin (x', ξ') given by (x', ξ')=(x,0).By repeated integration by parts with respect to the operatorL:=1/|x-x'|^2 + |ξ'|^2( ∑_j=1^n ξ_j'hD_x'_j +∑_j=1^n (x'_j-x_j) hD_ξ'_j),using thatL (e^iΦ/h) = e^i Φ/h, one gets(1-Op_h(β_δ) )^* 1 (x)= 1/(2π h)^n-1∬ e^i (x-x')ξ'/ha_δ(x,ξ') (1-χ_δ)(|ξ'|_x) χ_1(|x-x'|)dξ' dx' + O_δ(h^∞) = O_δ(h^∞),uniformly in x ∈ H. The last line follows by repeated integrations by parts with respect to L using the fact that (1-χ_δ)^(k)(0)=0 for all k≥ 0.§.§ Proof of Theorem <ref>We wish to show that for any >0 there exists h_0()>0 so that |∫_H ϕ_h dσ_H| ≤and| ∫_Hh ∂_νϕ_h dσ_H | ≤,for allh ≤ h_0.In view ofLemma <ref>, we can microlocalize the problem to the conormal direction; that is, the claim in (<ref>)follows provided we prove that given >0 there exist δ()>0 and h_0()>0 so that|∫_HOp_h(β_δ) ϕ_h dσ_H| ≤and| ∫_HOp_h(β_δ) h ∂_νϕ_h dσ_H | ≤,for allh ≤ h_0().To prove (<ref>), by Cauchy-Schwarz, it clearly suffices to establish the stronger bounds Op_h(β_δ) ϕ_h_L^2(H)≤andOp_h(β_δ)h ∂_νϕ_h_L^2(H)≤, for allh ≤ h_0() and δ() >0 sufficiently small.From now on, we fix >0.Using Green's formula <cit.>, it is straightforward to check that for any operator A:C^∞(M)→ C^∞(M)one has the Rellich Identityi/h∫_Ω_H [-h^2 Δ_g, A] ϕ_hϕ_hdv_g= ∫_H A ϕ_h hD_νϕ_hdσ_H + ∫_H h D_ν (A ϕ_h)ϕ_hdσ_H,where D_ν = 1/i∂_ν, with ν being the unit outward vector normal to Ω_H.Let δ>0 and α>0 be two real valued parameters to be specified later and consider the operatorA_δ, α(h):=Op_h(β_δ^2) ∘ Op_h(χ_α(x_n)) ∘ hD_ν,where β_δ is defined in (<ref>).We note that when we write Op_h (β_δ^2) above, we are actually considering the operator Op_h(β_δ^2)⊗_x_n. That is, for u∈ C^∞(M),[Op_h(β_δ^2)u](x',y_n)=[Op_h(β_δ^2)u|_x_n=y_n](x'). The operator A_δ, α(h) is the semiclassical normal derivative operatorh-microlocalized to a neighbourhood of the conormal direction to H over the collar neighbourhood U_H.We note that∫_H A_δ, α(h) ϕ_h hD_νϕ_hdσ_H= ⟨ Op_h(β_δ^2)hD_νϕ_h , hD_νϕ_h⟩_L^2(H),since χ_α(x_n)=1 for x_n ∈ [-α/2,α/2].Without loss of generality, we may assume that Ω_H ∩ U_H ={(x', x_n): x'∈ Handx_n<0}.With this choice, D_ν=D_x_n. We next recall that γ_H (h^2 D_ν^2 ϕ_h) = (I + h^2 Δ_g_H) γ_H( ϕ_h) + h a_1 γ_H ( ϕ_h) + h a_2 γ_H(h D_νϕ_h) ,where γ_H:M → H is the restriction map to H, and a_1, a_2 ∈ C^∞(H). 
Since χ_α'(0)=0 it follows from the restriction upper bounds ϕ_h_H=O(h^-1/4) <cit.> and h D_νϕ_h _H = O(1) <cit.>that⟨ (hD_ν)^2 ϕ_h,ϕ_h ⟩_L^2(H) - ⟨ (1+h^2Δ_g_H)ϕ_h, ϕ_h ⟩_L^2(H)= O_L^2(√(h)). Consequently, ∫_H h D_ν (A_δ, α(h) ϕ_h)ϕ_hdσ_H=⟨ h D_νOp_h(β_δ^2)χ_α(x_n)h D_νϕ_h, ϕ_h⟩_L^2(H)=⟨Op_h(β_δ^2) ( h D_ν)^2ϕ_h, ϕ_h⟩_L^2(H)= ⟨ Op_h(β_δ^2)(1+h^2Δ_g_H) ϕ_h, ϕ_h⟩_L^2(H) + O(h^1/2). Substitution of(<ref>) and (<ref>) in (<ref>) givesi/h∫_Ω_H [-h^2 Δ_g, A_δ, α(h)] ϕ_hϕ_hdv_g== ⟨ Op_h(β_δ^2)hD_νϕ_h , hD_νϕ_h⟩_H+⟨ Op_h(β_δ^2)(1+h^2Δ_g_H) ϕ_h, ϕ_h⟩_H + O(h^1/2).Next, we observe that Op_h(β_δ)hD_νϕ_h^2_H= ⟨ Op_h(β_δ^2)hD_νϕ_h , hD_νϕ_h⟩_H+O(h) since hD_νϕ_h_H=O(1)<cit.>. On the other hand, for (x',ξ') ∈ β_δ we have |ξ'|^2_x≤δ^2 and so,β_δ^2 · (1- |ξ'|^2_x) - β_δ^2 · (1-2δ^2)= β_δ^2 (2 δ^2 - |ξ'|^2_x)≥β_δ^2 δ^2≥ 0. Therefore, combining the sharp Garding inequality with the boundϕ_h_H=O(h^-1/4) gives(1-2δ^2) Op_h(β_δ)ϕ_h_H^2= ⟨Op_h(β_δ^2(1-2δ^2) ) ϕ_h, ϕ_h ⟩_H + O(h^1/2) ≤⟨Op_h(β_δ^2 · (1-|ξ'|^2_x) ) ϕ_h, ϕ_h ⟩_H + O(h^1/2) =⟨Op_h(β_δ^2) (1+h^2Δ_g_H ) ϕ_h, ϕ_h ⟩_H + O(h^1/2). Substitution of(<ref>) and (<ref>) into(<ref>) givesOp_h(β_δ)hD_νϕ_h_H^2 +(1-2δ^2) Op_h(β_δ)ϕ_h_H^2≤ i/h∫_Ω_H [-h^2 Δ_g, A_δ, α(h)] ϕ_hϕ_hdv_g + O(h^1/2). The claim in (<ref>) follows atonce from (<ref>) provided we show that for any >0 there exist δ, α>0 and h_0>0(all possibly depending on ) such that |⟨i/h [-h^2 Δ_g, A_δ, α(h)] ϕ_h, ϕ_h⟩_L^2(Ω_H)| ≤^2 ∀ h ≤ h_0()To prove (<ref>) we note that ⟨i/h [-h^2 Δ_g, A_δ, α(h)] ϕ_h, ϕ_h⟩_L^2(Ω_H)==⟨ Op_h( {σ(-h^2Δ_g), σ(A_δ, α(h))}) ϕ_h, ϕ_h⟩_L^2(Ω_H)+O(h),where σ(A_δ, α(h))(x,ξ)= β_δ^2(x',ξ')χ_α(x_n) ξ_n, andaccording to (<ref>),the Poisson bracket {|(ξ', ξ_n)|^2_x , σ(A_δ, α(h))} = 2χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2 +χ_α(x_n)q_δ(x',x_n,ξ', ξ_n)where, q_δ(x,ξ):=ξ_n∂_ξ' R ·∂_x'β_δ^2- ξ_n∂_x' R ·∂_ξ'β_δ^2-∂_x_n R ·β_δ^2 . We now estimate each term in the RHS of (<ref>) separately.Let {ϕ_h } be an L^2-normalized eigenfunction sequence with defect measure μ. Then, (i) | ⟨ Op_h( χ_α(x_n) q_δ ) ϕ_h, ϕ_h⟩_L^2(Ω_H) |≤ R_α,δ+o(1),where R_α,δ :=q_δ_L^∞·μ({(x',x_n,ξ', ξ_n) ∈ S_U_H^*M:|x_n|≤α, |ξ'| < δ})^ 1/2.In addition,(ii) ⟨ Op_h( 2χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2) ϕ_h, ϕ_h⟩_L^2(Ω_H)=∫_S^*_Ω_H M 2χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2dμ+o(1). In both (i) and (ii),o(1) denotes a term that vanishes as h → 0^+. We postpone the proof of Lemma <ref> until the end of this section. Assuming this result for the moment, we now conclude the proof of the theorem.FromLemma <ref>and (<ref>), it follows that⟨i/h [-h^2 Δ_g, A_δ, α(h)] ϕ_h, ϕ_h⟩_L^2(Ω_H) =∫_S^*_Ω_HM 2χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2dμ+R_α,δ + o(1).Since μ is a Radon measure, and hence monotone, lim_α→ 0 R_α,δ = q_δ_L^∞·μ( { (x',0,ξ) ∈ S_H^*M; |ξ'| < δ} )^1/2.Thus, using Lemma <ref> (or more precisely (<ref>)) gives lim_α→ 0 R_α,δ =0.Moreover, since the LHS of (<ref>) is independent of α, we are free to take the α→ 0 limit of both sides. In view of (<ref>) and (<ref>), it follows that after taking h → 0^+ and then α→ 0^+, lim sup_h → 0( Op_h(β_δ)hD_νϕ_h_H^2 +(1-2δ^2) Op_h(β_δ)ϕ_h_H^2 )≤≤lim sup_α→ 0lim sup_h → 0^+i/h∫_Ω_H [-h^2 Δ_g, A_δ, α(h)] ϕ_hϕ_hdv_g = lim sup_α→ 0^+∫_S^*_Ω_HM 2χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2dμ. The last line in (<ref>) follows from (<ref>). To analyze the RHS of (<ref>),fixγ>0 small. 
By Lemma <ref>there exists _γ>0 and a measure μ_Σ_γ on Σ_γ={(x,ξ)∈ S^*_HM:|ξ'|_x'^2≤ 1-γ^2} so thatμ(x,ξ) =f(x',x_n,ξ',ξ_n) dμ_Σ_γ(x',ξ',ξ_n) dx_n, (x,ξ)∈⋃_|t|≤_γ G^t(Σ_γ).By Remark <ref> we may assume that we work withα, δsmall enough so that (χ_α' ·β_δ^2 ) ⊂⋃_|t|≤_γ G^t(Σ_γ). Sincesupp(χ_α' ) ⊂ (-α, 0), by the Fubini theorem we have∫_S^*_Ω_HM 2χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2dμ= =∫_-c^02χ_α'(x_n)( ∫_S_H^*Mβ_δ^2(x',ξ')ξ_n^2|ξ_n|^-1 dμ_Σ_γ (x', ξ',ξ_n) ) dx_n =∫_S_H^*M∫_-c^02χ_α'(x_n)β_δ^2(x',ξ')|ξ_n|dx_ndμ_Σ_γ (x', ξ',ξ_n).Sending α→ 0 gives lim_α→ 0^+∫_S^*_Ω_HM 2χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2dμ=∫_S_H^*M 2β_δ^2(x',ξ') |ξ_n| dμ_Σ_γ (x', ξ',ξ_n).Sending δ→ 0 and using that β_δ≡ 1 on N^*H, |β_δ|≤ C we obtainlim_δ→ 0lim_α→ 0^+∫_S^*_Ω_HM 2χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2dμ=∫_N^*H 2 dμ_Σ_γ(x', ξ',ξ_n)=2 μ_Σ_γ(N^*H).Since μ is conormally diffuse, we have by Remark <ref> that μ_Σ_γ(N^*H) =0 and so (<ref>) follows from (<ref>) and (<ref>).§.§ Proof of Lemma <ref>First, we use the standard fact that {ϕ_h} are microsupported on S^*M <cit.> to h-microlocally cut them off near S^*M.More precisely, forr>0 small, consider the annular shellA(r):= {(x,ξ) ∈ T^*M: 1-r < |ξ|_g(x)< 1+r }. Letχ̃∈ C_c^∞(T^*M) be a cutoff function equal to1 on A(r) and zero on T^*M ∖ A(2r). Then, <cit.>ϕ_h-Op_h(χ̃) ϕ_h _L^2(M)=O(h^∞). Proof of (i):Since ϕ_h_L^2(M)=1, by Cauchy-Schwarz,|⟨ Op_h( χ_α(x_n) q_δ )ϕ_h, ϕ_h⟩_L^2(Ω_H)|^2 ≤ Op_h( χ_α(x_n) q_δ ) ϕ_h ^2_L^2(M)=⟨ [ Op_h( χ_α(x_n) q_δ )]^* [ Op_h( χ_α(x_n) q_δ )] ϕ_h, ϕ_h ⟩_L^2(M)=⟨ [ Op_h( χ_α(x_n) q_δ )]^* [ Op_h( χ_α(x_n) q_δ )] ϕ_h,Op_h(χ̃) ϕ_h ⟩_L^2(M)+ O(h^∞)=⟨ Op_h( χ̃·χ_α^2(x_n) · |q_δ|^2 ) ϕ_h, ϕ_h ⟩_L^2(M)+ O(h)=∫_S^*Mχ̃·χ_α^2(x_n) · |q_δ|^2 dμ+ o(1)≤ q_δ^2_L^∞·μ({(x',x_n,ξ', ξ_n) ∈ S_U_H^*M:|x_n|≤α, |ξ'| < δ}) +o(1),where the penultimate identity follows from the fact that μ is the defect measure associated to {ϕ_h} andthe symbol χ̃·χ_α^2(x_n) · |q_δ|^2∈ C_c^∞ (T^* U_H). Proof of (ii): Let ρ∈ C_c^∞() be a smooth cut-off function with ρ(x_n)=0 for x_n≥ 0 and ρ(x_n)=1 for x_n ≤ -α/2. Then,since Ω_H ∩ U_H is identified with the set of points on which x_n<0, and (χ_α')⊂ (-∞, -α/2] ∪ [α/2, +∞), we haveρ(x_n) χ_α'(x_n)=0on Ω_H^c, χ_α'(x_n)on Ω_H. Note that since χ_α'(x_n)=0 for x_n ∈ [-α/2, α/2],we may regard ρχ_α' as a smooth function defined on all of M. We then have that⟨ Op_h( 2χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2) ϕ_h, ϕ_h⟩_L^2(Ω_H)==⟨ Op_h( 2 ρ(x_n) χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2) ϕ_h, ϕ_h⟩_L^2(M).Microlocalizing the eigenfunctions near S^*M by using the cut-off χ̃ we obtain⟨ Op_h(2χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2) ϕ_h, ϕ_h⟩_L^2(Ω_H)==⟨ Op_h( χ̃ρ(x_n) 2χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2) ϕ_h, ϕ_h⟩_L^2(M) + O(h).Using that μ is the defect measure associated to {ϕ_h}, and that the symbol χ̃β_δ^2ξ_n^2 ∈ C_c^∞(T^*M), we obtain⟨ Op_h( χ̃ρ(x_n) 2χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2) ϕ_h, ϕ_h⟩_L^2(M)= =∫_S^*M 2ρ(x_n)χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2dμ+o(1) =∫_S^*_Ω_HM 2χ_α'(x_n) β_δ^2(x',ξ')ξ_n^2dμ+o(1),as claimed. By replacing the test operator A_δ,α(h) with Ã_δ, α(h):=Op_h(β_δ^2(x',ξ')) ∘f(x') ∘ Op_h(χ_α(x_n)) ∘ hD_ν, where f ∈ C^∞(H) and carrying out the same argument as in the proof of Theorem <ref>, it is easy to see that under the assumption μ_H( π^-1( f)∩ N^*H)=0,∫_H fϕ_h dσ_H = o(1) and∫_H fh D_νϕ_h dσ_H = o(1).§ PROOF OF THEOREM <REF>To prove Theorem <ref> we need the following result. Suppose A⊂ H has piecewise smooth boundary. Then for all ϵ>0(1-Op_h(β_δ))^* 1_A_L^2(H)=O_ϵ(h^1/2-ϵ).To prove this result we first introduce a cut-off function χ_h so that(1-χ_h)1_Ais smooth and close to 1_A. 
Let χ_h ∈ C_c^∞(H)satisfy i) χ_h ≡ 1 on {x∈ H:d(x,∂ A)≤ h^1-ϵ}.ii)χ⊂{x∈ H:d(x,∂ A)≤ 2h^1-}. iii)|∂_x^αχ |≤ C_α h^|α|(1-). Then, (1-χ_h)1_A satisfies the same bound as in (iii), and hence integrating by parts as in Lemma <ref> i.e. withL:= 1/|x-x'|^2 + |ξ'|^2( ∑_j=1^n ξ_j'hD_x'_j +∑_j=1^n (x'_j-x_j) hD_ξ'_j),gives[(1-Op_h(β_δ))^*(1-χ_h)1_A](x)=1/(2π h)^n-1∬ e^i/h⟨ x-x',ξ'⟩ (1-β_δ(x',ξ'))(1-χ_h(x'))1_A(x')dx'dξ'= 1/(2π h)^n-1∬ e^i/h⟨ x-x',ξ'⟩(L^*)^N[(1-β_δ(x',ξ'))(1-χ_h(x'))1_A(x')]dx'dξ'=O_N(h^1-n+N(1-)).In particular,(1-Op_h(β_δ))^*(1-χ_h)1_A_L^∞=O_ϵ(h^∞).On the other handχ_h1_A_L^2(H)=O(h^1-ϵ/2).Combining (<ref>) and (<ref>) together with L^2 boundedness of Op_h(β_δ) proves the lemma.§.§ Proof of Theorem <ref> Let A ⊂ H be an open subset with piecewise C^∞ boundary and indicator function χ_A. Suppose that U⊂ H is open with A⊂ U. Then since C^∞(H) is dense in L^2(H), for any >0, we can find f ∈ C^∞(H)f - 1_A_L^2(H)≤, f⊂ U.Now, | ∫_H 1_A ϕ_h dσ_H| ≤≤| ∫_H 1_AOp_h(β_δ) ϕ_h dσ_H | + | ⟨(1-Op_h(β_δ)) ϕ_h ,1_A⟩_H |≤| ∫_H (1_A- f) Op_h(β_δ) ϕ_h dσ_H |+ | ∫_H f Op_h(β_δ) ϕ_h dσ_H |+ | ⟨ϕ_h, (1-Op_h(β_δ))^*1_A ⟩_H |≤| ∫_H (1_A- f) Op_h(β_δ) ϕ_h dσ_H | + o(1).The last line follows by applying Lemma <ref>, the universal upper bound ϕ_h_L^2(H)≤ Ch^-1/4 <cit.> and Cauchy-Schwarz to the third term, and by applying Remark <ref> to the second term.Now, since β_δ is supported away from S^*H:={(x',ξ')∈ T^*H:|ξ'|_x'=1},we have that Op_h(β_δ)ϕ_h_L^2(H)≤ C <cit.> and hence applying Cauchy–Schwarz to (<ref>)|∫_H 1_Aϕ_hdσ_H|≤ C +o(1).Since >0 was arbitrary, the theorem follows.It is clear from the proof of Theorem <ref> that one can decrease the regularity assumption on ∂ A and only assume that ∂ A has Minkowski box dimension <n-3/2 where n= M. However, we do not pursue this here.§ EXAMPLES§.§ Non vanishing averages on the torusLet 𝕋^2 be the 2-dimensional square flat torus. We identify 𝕋^2 with {(x_1,x_2): (x_1,x_2) ∈ [0,1)× [0, 1)}. Consider the sequence of normalized eigenfunctionsϕ_h(x_1,x_2)= e^i/h x_1.Consider the curve H ⊂𝕋^2 defined as H={(x_1,x_2): x_1=0}. Then, since ϕ_h|_H ≡ 1, we have ∫_H ϕ_h dσ_H =1,h^-1∈ 2π^+.We claim that in this case the measure μ associated to {ϕ_h} is not conormally diffuse with respect to H. Actually, we next prove that μ(x_1, x_2, ξ_1, ξ_2)=δ_(1,0)(ξ_1, ξ_2) · dx_1 dx_2,(x,ξ) ∈ S^* 𝕋^2.Given (<ref>), it follows that μ_H= δ_(1,0)(ξ_1, ξ_2),(x,ξ) ∈ S^* 𝕋^2.In particular, μ_H(N^*H)=1,so the measure μ is not conormally diffuse with respectto H.To see that (<ref>) holds, fix any a ∈ C^∞_c(T^* 𝕋^2). Then, ⟨ Op_h(a) ϕ_h,ϕ_h⟩ = 1/(2π h)^n∫_𝕋^2∫_𝕋^2∫_^2 a(x,ξ) e^i/hψ(x,y,ξ) dξ dy dxfor the phase functionψ(x,y,ξ):= ⟨ x-y, ξ⟩ + y_1-x_1.We next do Stationary Phase in (y, ξ). The critical points for the phase are (y,ξ)=(x,(1,0)). Also,Hess_(y, ξ)ψ=[0 -1; -10 ].It follows that ⟨ Op_h(a) ϕ_h,ϕ_h⟩ =∫_𝕋^2a(x, (1,0))dx =∫_S^* 𝕋^2 a(x,ξ)δ_(1,0)(ξ) dx,as claimed. §.§ Defect measures that are not LiouvilleAs we already pointed out in the Introduction, the assumptions on μ for being conormally diffuse are much weaker than asking μ to be absolutely continuous with respect to the Liouville measure on S^*M. In these examples we build a defect measure μ that is not absolutely continuous with respect to the Liouville measure but still satisfies the hypothesis ofTheorem <ref> for a suitable choice of curve H. §.§.§ Toral EigenfunctionsLet 𝕋^2 be the 2-dimensional square flat torus. We identify 𝕋^2 with{(x_1,x_2): (x_1,x_2) ∈ [0,1)× [0, 1)}. 
Consider the sequence of eigenfunctionsϕ_h(x_1,x_2)=e^i/hx_1,h^-1∈ 2π. As shown in Section <ref>, the associated defect measure isμ(x_1, x_2, ξ_1, ξ_2)= δ_(1,0)(ξ_1, ξ_2)dx_1 dx_2. Next, consider the curve H ⊂𝕋^2 defined as H={(x_1,x_2): x_2=0}. SinceN^*H={(x_1, x_2, ξ_1, ξ_2) ∈ S^* 𝕋^2: ξ_1=0}, we have for δ >0 sufficiently small,μ_H( N^*H )=0.Theorem <ref> therefore implies thatlim_h→ 0^+∫_H φ_h dσ = lim_h→ 0^+∫_0^1 e^i x_1/h dx_1 = 0. Of course, in this case the much stronger result ∫_0^1 e^i x_1/h dx_1= 0 holds for all h^-1∈ 2π. §.§.§ Gaussian BeamsConsider the two dimensional sphere S^2 equipped with the round metric, and use coordinates(θ,ω)↦ (cosθcosω,sinθcosω,sinω)∈ S^2,with [0,2π)× [-π/2,π/2]. For each of the frequencies h^-1=√(l(l+1)) with ℓ∈ℕ we associate the Gaussian beamϕ_h(θ,ω)= 1/2^ll!(2l+1/4π (2 l)!)^1/2e^-i l θ (cosω)^l .It is normalized so thatϕ_h_L^2(S^2)=1, (-h^2Δ_S^2-1)ϕ_h=0. Then, let χ∈ C_c^∞(-1,1) with χ≡ 1 on [-1/2,1/2] and defineu_h(θ,ω)= 1/2^ll!(2l+1/4π (2 l)!)^1/2e^-ilθχ(ω) e^-lω^2/2.Observe that u_h-ϕ_h=o_L^2(1),so for the purposes of computing the defect measure, we may compute with u_h. Using this, by an elementary stationary phase argument, (see e.g. <cit.>) the defect measure associated to ϕ_h is μ=1/2πδ_{ω=0,ξ=-1,ζ=0}dθwhere ξ is dual to θ and ζ is dual to ω. Let H={(θ, ω):ω=0} be the equator. In particular, N^*H={(θ, ω, ξ, ζ) ∈ S^*S^2: ω=0,ξ=0,ζ=± 1}. Then, μ_H(N^*H)=μ({ω∈(-t_0,t_0),ξ=0,ζ=±1})=0and Theorem <ref> implies∫_Hϕ_h (θ,0)dθ=o(1).§ APPENDIX ON SEMICLASSICAL NOTATION We next review the notation used for semiclassical operators and symbols and some of the basic properties. First, recall that for a compact manifold M of dimension n, we writeS^m(T^*M):={a(·;h) ∈ C^∞(T^*M): |∂_x^α∂_ξ^β a(x,ξ;h)|≤ C_αβ(1+|ξ|)^m-|β|}.We write Ψ^m(M) for the semiclassical pseudodifferential operators of order m on M and Op_h:S^m(T^*M)→Ψ^m(M)for a quantization procedure with Op_h(1)=+O_𝒟'→ C^∞(h^∞) and for u supported in a coordinate patch, φ∈ C_c^∞(M) with φ≡ 1 on u we haveOp_h(a)u(x)=1/(2π h)^n∬ e^i/h⟨ x-y,ξ⟩φ(x)a(x,ξ)u(y)dξ dy +O_𝒟'→ C^∞(h^∞)u. Then there exists a principal symbol mapσ:Ψ^m(M)→ S^m(T^*M)/hS^m-1(T^*M) so thatOp_h∘σ (A)=A+O_Ψ^m-1(h), A∈Ψ^m,σ∘ Op_h=π : S^m→ S^m/hS^m-1, where π is the natural projection map. Moreover, for A∈Ψ^m_1, B∈Ψ^m_2,* σ(AB)=σ(A)σ(B)∈ S^m_1+m_2/hS^m_1+m_2-1,* σ([A,B])=h/i{σ(A),σ(B)}∈ hS^m_1+m_2-1/h^2S^m_1+m_2-2, where {·,·} denotes the poisson bracket. For more details on the semiclassical calculus see e.g. <cit.> <cit.>. Finally, we recall the for any {u(h)}_0<h<h_0⊂ L^2(M) a bounded family of functions, we may extract a subsequence h_k→ 0 so that for a∈ C_c^∞(T^*M), ⟨ Op_h(a)u_h_k,u_h_k⟩_L^2(M)h_k→ 0→∫ a(x,ξ)dμfor a positive Radon measure μ. We call μ a defect measure for u_h_k. For p∈ S^m(T^*M) real valued, if u(h) solvesOp_h(p)u=o(h),u(h)_L^2=1,then for any defect measure μ associated to u(h), μ⊂{p(x,ξ)=0},exp(tH_p)_*μ =μwhere H_p denotes the Hamiltonian vector field associated to p. See e.g. <cit.> for more details.amsalpha | http://arxiv.org/abs/1705.09595v2 | {
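To complement the two torus examples above (this sketch is an addition, not part of the paper), one can also estimate μ_H(N^*H) directly from the definition μ_H(A) = 1/2t_0 μ(⋃_{|s|≤ t_0} G^s(A)), taking μ = δ_{(1,0)}(ξ) dx_1 dx_2 and the flat-torus geodesic flow G^s(x,ξ) = (x+sξ, ξ). The Monte-Carlo sketch below is Python/NumPy; the sample size, tube half-width t_0, and random seed are arbitrary choices. It recovers μ_H(N^*H) ≈ 1 for H = {x_1 = 0} (the non-vanishing-average example, not conormally diffuse) and μ_H(N^*H) = 0 for H = {x_2 = 0} (the toral example just treated, conormally diffuse), matching the conclusions above.

```python
import numpy as np

# Defect measure of phi_h = exp(i*x1/h): mu = delta_{xi=(1,0)}(xi) dx1 dx2 on S*T^2.
rng = np.random.default_rng(0)
n_samples, t0 = 2_000_000, 1e-3
x = rng.random((n_samples, 2))                         # base point, uniform on the torus
xi = np.tile([1.0, 0.0], (n_samples, 1))               # momentum, the delta part of mu

def mu_H_of_conormal(coord):
    """Estimate mu_H(N^*H) for the curve H = {x_coord = 0} via a flow tube of half-width t0."""
    # A sample lies in the tube  U_{|s|<=t0} G^s(N^*H)  iff its momentum is conormal to H
    # (i.e. +/- e_coord) and its coord-th coordinate is within t0 of H, modulo 1.
    conormal = np.isclose(np.abs(xi[:, coord]), 1.0) & np.isclose(xi[:, 1 - coord], 0.0)
    near_H = np.minimum(x[:, coord], 1.0 - x[:, coord]) <= t0
    return np.mean(conormal & near_H) / (2.0 * t0)     # (1/2t0) * mu(tube)

print("H = {x1 = 0}:  mu_H(N^*H) ~", round(mu_H_of_conormal(0), 2))   # ~ 1  (not diffuse)
print("H = {x2 = 0}:  mu_H(N^*H) ~", round(mu_H_of_conormal(1), 2))   # = 0  (conormally diffuse)
```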
"authors": [
"Yaiza Canzani",
"Jeffrey Galkowski",
"John A. Toth"
],
"categories": [
"math.AP",
"math.SP"
],
"primary_category": "math.AP",
"published": "20170526142732",
"title": "Averages of eigenfunctions over hypersurfaces"
} |
Approximation of Ruin Probabilities via Erlangized Scale Mixtures [=================================================================Adversarial learning has been embedded into deep networks to learn disentangled and transferable representations for domain adaptation. Existing adversarial domain adaptation methods may not effectively align different domains of multimodal distributions native in classification problems. In this paper, we present conditional adversarial domain adaptation, a principled framework that conditions the adversarial adaptation models on discriminative information conveyed in the classifier predictions. Conditional domain adversarial networks (CDANs) are designed with two novel conditioning strategies: multilinear conditioning that captures the cross-covariance between feature representations and classifier predictions to improve the discriminability, and entropy conditioning that controls the uncertainty of classifier predictions to guarantee the transferability. With theoretical guarantees and a few lines of codes, the approach has exceeded state-of-the-art results on five datasets. § INTRODUCTION Deep networks have significantly improved the state-of-the-arts for diverse machine learning problems and applications. When trained on large-scale datasets, deep networks learn representations which are generically useful across a variety of tasks <cit.>. However, deep networks can be weak at generalizing learned knowledge to new datasets or environments. Even a subtle change from the training domain can cause deep networks to make spurious predictions on the target domain <cit.>. While in many real applications, there is the need to transfer a deep network from a source domain where sufficient training data is available to a target domain where only unlabeled data is available, such a transfer learning paradigm is hindered by the shift in data distributions across domains <cit.>. Learning a model that reduces the dataset shift between training and testing distributions is known as domain adaptation <cit.>. Previous domain adaptation methods in the shallow regime either bridge the source and target by learning invariant feature representations or estimating instance importances using labeled source data and unlabeled target data <cit.>. Recent advances of deep domain adaptation methods leverage deep networks to learn transferable representations by embedding adaptation modules in deep architectures, simultaneously disentangling the explanatory factors of variations behind data and matching feature distributions across domains <cit.>.Adversarial domain adaptation <cit.> integrates adversarial learning and domain adaptation in a two-player game similarly to Generative Adversarial Networks (GANs) <cit.>. A domain discriminator is learned by minimizing the classification error of distinguishing the source from the target domains, while a deep classification model learns transferable representations that are indistinguishable by the domain discriminator. On par with these feature-level approaches, generative pixel-level adaptation models perform distribution alignment in raw pixel space, by translating source data to the style of a target domain using Image to Image translation techniques <cit.>. 
Another line of works align distributions of features and classes separately using different domain discriminators <cit.>.Despite their general efficacy for various tasks ranging from classification <cit.> to segmentation <cit.>, these adversarial domain adaptation methods may still be constrained by two bottlenecks. First, when data distributions embody complex multimodal structures, adversarial adaptation methods may fail to capture such multimodal structures for a discriminative alignment of distributions without mode mismatch. Such a risk comes from the equilibrium challenge of adversarial learning in that even if the discriminator is fully confused, we have no guarantee that two distributions are sufficiently similar <cit.>. Note that this risk cannot be tackled by aligning distributions of features and classes via separate domain discriminators as <cit.>, since the multimodal structures can only be captured sufficiently by the cross-covariance dependency between the features and classes <cit.>. Second, it is risky to condition the domain discriminator on the discriminative information when it is uncertain.In this paper, we tackle the two aforementioned challenges by formalizing a conditional adversarial domain adaptation framework. Recent advances in the Conditional Generative Adversarial Networks (CGANs) <cit.> disclose that the distributions of real and generated images can be made similar by conditioning the generator and discriminator on discriminative information. Motivated by the conditioning insight, this paper presents Conditional Domain Adversarial Networks (CDANs) to exploit discriminative information conveyed in the classifier predictions to assist adversarial adaptation. The key to the CDAN models is a novel conditional domain discriminator conditioned on the cross-covariance of domain-specific feature representations and classifier predictions. We further condition the domain discriminator on the uncertainty of classifier predictions, prioritizing the discriminator on easy-to-transfer examples. The overall system can be solved in linear-time through back-propagation. Based on the domain adaptation theory <cit.>, we give a theoretical guarantee on the generalization error bound. Experiments show that our models exceed state-of-the-art results on five benchmark datasets.§ RELATED WORK Domain adaptation <cit.> generalizes a learner across different domains of different distributions, by either matching the marginal distributions <cit.> or the conditional distributions <cit.>. It finds wide applications in computer vision <cit.> and natural language processing <cit.>. Besides the aforementioned shallow architectures, recent studies reveal that deep networks learn more transferable representations that disentangle the explanatory factors of variations behind data <cit.> and manifest invariant factors underlying different populations <cit.>. As deep representations can only reduce, but not remove, the cross-domain distribution discrepancy <cit.>, recent research on deep domain adaptation further embeds adaptation modules in deep networks using two main technologies for distribution matching: moment matching <cit.> and adversarial training <cit.>.Pioneered by the Generative Adversarial Networks (GANs) <cit.>, the adversarial learning has been successfully explored for generative modeling. GANs constitute two networks in a two-player game: a generator that captures data distribution and a discriminator that distinguishes between generated samples and real data. 
The networks are trained in a minimax paradigm such that the generator is learned to fool the discriminator while the discriminator struggles to be not fooled. Several difficulties of GANs have been addressed, e.g. improved training <cit.> and mode collapse <cit.>, but others still remain, e.g. failure in matching two distributions <cit.>. Towards adversarial learning for domain adaptation, unconditional ones have been leveraged while conditional ones remain under explored.Sharing some spirit of the conditional GANs <cit.>, another line of works match the features and classes using separate domain discriminators. Hoffman et al. <cit.> performs global domain alignment by learning features to deceive the domain discriminator, and category specific adaptation by minimizing a constrained multiple instance loss. In particular, the adversarial module for feature representation is not conditioned on the class-adaptation module with class information. Chen et al. <cit.> performs class-wise alignment over the classifier layer; i.e., multiple domain discriminators take as inputs only the softmax probabilities of source classifier, rather than conditioned on the class information. Tsai et al. <cit.> imposes two independent domain discriminators on the feature and class layers. These methods do not explore the dependency between the features and classes in a unified conditional domain discriminator, which is important to capture the multimodal structures underlying data distributions.This paper extends the conditional adversarial mechanism to enable discriminative and transferable domain adaptation, by defining the domain discriminator on the features while conditioning it on the class information. Two novel conditioning strategies are designed to capture the cross-covariance dependency between the feature representations and class predictions while controlling the uncertainty of classifier predictions. This is different from aligning the features and classes separately <cit.>.§ CONDITIONAL ADVERSARIAL DOMAIN ADAPTATION In unsupervised domain adaptation, we are given a source domain 𝒟_s = {(𝐱_i^s, y^s_i)}_i=1^n_s of n_s labeled examples and a target domain D_t = {x_j^t} _j = 1^n_t of n_t unlabeled examples. The source domain and target domain are sampled from joint distributions P(𝐱^s, 𝐲^s) and Q(𝐱^t, 𝐲^t) respectively, and the i.i.d. assumption is violated as PQ. The goal of this paper is to design a deep network G: x↦ y which formally reduces the shifts in the data distributions across domains, such that the target risk ϵ_t( G ) = 𝔼 _( 𝐱^t, y^t) ∼ Q[ G ( 𝐱^t)y^t] can be bounded by the source risk ϵ_s( G ) = 𝔼 _( 𝐱^s, y^s) ∼ P[ G ( 𝐱^s)y^s] plus the distribution discrepancy disc(P,Q) quantified by a novel conditional domain discriminator.Adversarial learning, the key idea to enabling Generative Adversarial Networks (GANs) <cit.>, has been successfully explored to minimize the cross-domain discrepancy <cit.>. Denote by 𝐟 = F( 𝐱) the feature representation and by 𝐠 = G( 𝐱) the classifier prediction generated from the deep network G. Domain adversarial neural network (DANN) <cit.> is a two-player game: the first player is the domain discriminator D trained to distinguish the source domain from the target domain and the second player is the feature representation F trained simultaneously to confuse the domain discriminator D. 
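To make this two-player game concrete before the conditional version is introduced below, here is a schematic PyTorch-style sketch of one DANN-like training step. It is an illustration added to this dump, not the authors' code: `feat_net`, `clf_head`, `disc`, the optimizers, and the batches `xs, ys, xt` are placeholders, and the feature update uses the common inverted-label trick in place of a gradient-reversal layer.

```python
import torch
import torch.nn.functional as nnF

def dann_step(feat_net, clf_head, disc, opt_fg, opt_d, xs, ys, xt, lam=1.0):
    """One schematic update: disc tells source from target; features try to fool it."""
    def bce(logits, target_value):
        return nnF.binary_cross_entropy_with_logits(logits, torch.full_like(logits, target_value))

    fs, ft = feat_net(xs), feat_net(xt)                  # f = F(x) for both domains

    # Player 1: domain discriminator D (source labeled 1, target labeled 0).
    opt_d.zero_grad()
    d_loss = bce(disc(fs.detach()), 1.0) + bce(disc(ft.detach()), 0.0)
    d_loss.backward()
    opt_d.step()

    # Player 2: feature extractor F and classifier G.  The source risk is minimized,
    # while the domain labels are inverted so that F is pushed to confuse D.
    opt_fg.zero_grad()
    cls_loss = nnF.cross_entropy(clf_head(fs), ys)
    confusion = bce(disc(fs), 0.0) + bce(disc(ft), 1.0)
    (cls_loss + lam * confusion).backward()
    opt_fg.step()
    return cls_loss.item(), d_loss.item()
```

A gradient-reversal layer folds the two updates into a single backward pass; the alternating, inverted-label form above is only meant to expose the minimax structure.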
The error function of the domain discriminator corresponds well to the discrepancy between feature distributions P( f) and Q( f) <cit.>, a key to bound the target risk in thedomain adaptation theory <cit.>. §.§ Conditional Discriminator We further improve existing adversarial domain adaptation methods <cit.> in two directions. First, when the joint distributions of feature and class, i.e. P( x^s, y^s) and Q( x^t, y^t), are non-identical across domains, adapting only the feature representation f may be insufficient. Due to a quantitative study <cit.>, deep representations eventually transition from general to specific along deep networks, with transferability decreased remarkably in the domain-specific feature layer f and classifier layer g.Second, when the feature distribution is multimodal, which is a real scenario due to the nature of multi-class classification, adapting only the feature representation may be challenging for adversarial networks. Recent work <cit.> reveals the high risk of failure in matching only a fraction of components underlying different distributions with the adversarial networks. Even if the discriminator is fully confused, we have no theoretical guarantee that two different distributions are identical <cit.>.This paper tackles the two aforementioned challenges by formalizing a conditional adversarial domain adaptation framework. Recent advances in Conditional Generative Adversarial Networks (CGANs) <cit.> discover that different distributions can be matched better by conditioning the generator and discriminator on relevant information, such as associated labels and affiliated modality. Conditional GANs <cit.> generate globally coherent images from datasets with high variability and multimodal distributions. Motivated by conditional GANs, we observe that in adversarial domain adaptation, the classifier prediction g conveys discriminative information potentially revealing the multimodal structures, which can be conditioned on when adapting feature representation f. By conditioning, domain variances in both feature representation f and classifier prediction g can be modeled simultaneously.We formulate Conditional Domain Adversarial Network (CDAN) as a minimax optimization problem with two competitive error terms: (a) ℰ(G) on the source classifier G, which is minimized to guarantee lower source risk; (b) ℰ(D,G) on the source classifier G and the domain discriminator Dacross the source and target domains, which is minimized over D but maximized over 𝐟 = F(𝐱) and 𝐠 = G(𝐱):ℰ(G) = 𝔼_( 𝐱_i^s,𝐲_i^s )∼𝒟_sL( G( 𝐱_i^s),𝐲_i^s), ℰ(D,G) =- 𝔼_𝐱_i^s∼𝒟_slog[ D( 𝐟_i^s,𝐠_i^s)] - 𝔼_𝐱_j^t∼𝒟_tlog[1- D( 𝐟_j^t,𝐠_j^t)],where L(·, ·) is the cross-entropy loss, and h = ( f,g) is the joint variable of feature representation f and classifier prediction g. The minimax game of conditional domain adversarial network (CDAN) ismin_Gℰ(G) - λℰ(D,G) min_Dℰ(D,G),where λ is a hyper-parameter between the two objectives to tradeoff source risk and domain adversary.We condition domain discriminator D on the classifier prediction g through joint variable h = ( f,g). This conditional domain discriminator can potentially tackle the two aforementioned challenges of adversarial domain adaptation. A simple conditioning of D is D( f⊕ g), where we concatenate the feature representation and classifier prediction in vector f⊕ g and feed it to conditional domain discriminator D. This conditioning strategy is widely adopted by existing conditional GANs <cit.>. 
However, with the concatenation strategy, f and g are independent on each other, thus failing to fully capture multiplicative interactions between feature representation and classifier prediction, which are crucial to domain adaptation. As a result, the multimodal information conveyed in classifier prediction cannot be fully exploited to match the multimodal distributions of complex domains <cit.>. §.§ Multilinear Conditioning The multilinear map is defined as the outer product of multiple random vectors. And the multilinear map of infinite-dimensional nonlinear feature maps has been successfully applied to embed joint distribution or conditional distribution into reproducing kernel Hilbert spaces <cit.>. Given two random vectors 𝐱 and 𝐲, the joint distribution P(𝐱,𝐲) can be modeled by the cross-covariance 𝔼_𝐱𝐲[ϕ(𝐱) ⊗ϕ(𝐲)], where ϕ is a feature map induced by some reproducing kernel. Such kernel embeddings enable manipulation of the multiplicative interactions across multiple random variables.Besides the theoretical benefit of the multilinear map 𝐱⊗𝐲 over the concatenation 𝐱⊕𝐲 <cit.>, we further explain its superiority intuitively. Assume linear map ϕ (𝐱) = 𝐱 and one-hot label vector 𝐲 in C classes. As can be verified, mean map 𝔼_𝐱𝐲[𝐱⊕𝐲] = 𝔼_𝐱[𝐱] ⊕𝔼_𝐲[𝐲] computes the means of 𝐱 and 𝐲 independently. In contrast, mean map 𝔼_𝐱𝐲[ 𝐱⊗𝐲] = 𝔼_𝐱[ 𝐱|y = 1] ⊕…⊕𝔼_𝐱[ 𝐱|y = C] computes the means of each of the C class-conditional distributions P(𝐱|y). Superior than concatenation, the multilinear map 𝐱⊗𝐲 can fully capture the multimodal structures behind complex data distributions.Taking the advantage of multilinear map, in this paper, we condition D on g with the multilinear mapT_ ⊗( 𝐟,𝐠) = 𝐟⊗𝐠,where T_⊗ is a multilinear map and D( f, g) = D( f⊗ g). As such, the conditional domain discriminator successfully models the multimodal information and joint distributions of f and g. Also, the multi-linearity can accommodate random vectors f and g with different cardinalities and magnitudes. A disadvantage of the multilinear map is dimension explosion. Denoting by d_f and d_g the dimensions of vectors f and g respectively, the dimension of multilinear map 𝐟⊗𝐠 is d_f × d_g, often too high-dimensional to be embedded into deep networks without causing parameter explosion. This paper addresses the dimension explosion by randomized methods <cit.>. Note that multilinear map holds⟨T_ ⊗( 𝐟,𝐠),T_ ⊗( 𝐟',𝐠')⟩= ⟨𝐟⊗𝐠,𝐟'⊗𝐠'⟩ = ⟨𝐟,𝐟'⟩⟨𝐠,𝐠'⟩≈⟨T_ ⊙( 𝐟,𝐠),T_ ⊙( 𝐟',𝐠')⟩,where T_ ⊙( 𝐟,𝐠) is the explicit randomized multilinear map of dimension d ≪ d_f × d_g. We defineT_ ⊙( 𝐟,𝐠) = 1/√(d)( 𝐑_𝐟𝐟) ⊙( 𝐑_𝐠𝐠),where ⊙ is element-wise product, R_f and R_g are random matrices sampled only once and fixed in training, and each element R_ij follows a symmetric distribution with univariance, i.e. 𝔼[ R_ij] = 0,𝔼[ R_ij^2] = 1.Applicable distributions include Gaussian distribution and Uniform distribution. As the inner-product on T_ ⊗ can be accurately approximated by the inner-product on T_ ⊙, we can directly adopt T_⊙ ( f,g) for computation efficiency. We guarantee such approximation quality by a theorem. The expectation and variance of using T_ ⊙( 𝐟,𝐠) (<ref>) to approximate T_ ⊗( 𝐟,𝐠) (<ref>) satisfy 𝔼[ ⟨T_ ⊙( 𝐟,𝐠),T_ ⊙( 𝐟',𝐠')⟩] = ⟨𝐟,𝐟'⟩⟨𝐠,𝐠'⟩,var[ ⟨T_ ⊙( 𝐟,𝐠),T_ ⊙( 𝐟',𝐠')⟩] = ∑_i = 1^d β( 𝐑_i^𝐟,𝐟)β( 𝐑_i^𝐠,𝐠)+ C, where β( 𝐑_i^𝐟,𝐟) = 1/d∑_j = 1^d_f[ f_j^2f'_j^2𝔼[ ( R_ij^f)^4 ] + C' ] and similarly for β( 𝐑_i^𝐠,𝐠), C, C' are constants. The proof is given in the supplemental material. 
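Deferring to the supplemental material for the proof, the expectation identity itself is easy to sanity-check numerically. The sketch below (NumPy, added for illustration; the dimensions, seed, and number of Monte-Carlo draws are arbitrary) implements T_⊗ as a flattened outer product and T_⊙ exactly as the randomized map defined above, then averages ⟨T_⊙(f,g), T_⊙(f',g')⟩ over freshly drawn Gaussian matrices — whereas in CDAN the matrices R_f, R_g are sampled once and frozen — and compares the average with the exact value ⟨f,f'⟩⟨g,g'⟩.

```python
import numpy as np

rng = np.random.default_rng(1)
d_f, d_g, d = 32, 10, 1024                    # feature dim, class dim, randomized dim

f = rng.standard_normal(d_f); f2 = f + 0.1 * rng.standard_normal(d_f)
g = rng.standard_normal(d_g); g2 = g + 0.1 * rng.standard_normal(d_g)

def T_outer(f, g):                            # multilinear map: f (outer) g, flattened
    return np.outer(f, g).ravel()

def T_rand(f, g, Rf, Rg):                     # randomized map: (1/sqrt(d)) (Rf f) * (Rg g)
    return (Rf @ f) * (Rg @ g) / np.sqrt(d)

exact = (f @ f2) * (g @ g2)                   # <f,f'><g,g'> = <T_outer(f,g), T_outer(f',g')>
assert np.isclose(T_outer(f, g) @ T_outer(f2, g2), exact)

draws = []
for _ in range(2000):                         # fresh (Rf, Rg) each draw, only to exhibit the expectation
    Rf, Rg = rng.standard_normal((d, d_f)), rng.standard_normal((d, d_g))
    draws.append(T_rand(f, g, Rf, Rg) @ T_rand(f2, g2, Rf, Rg))

print("exact  <f,f'><g,g'>          :", round(float(exact), 2))
print("mean of randomized estimates :", round(float(np.mean(draws)), 2))
```

With d = 1024 the fluctuation of a single draw is already small relative to the exact value, which is the practical point of replacing the d_f × d_g-dimensional T_⊗ by T_⊙.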
This verifies that T_⊙ is an unbiased estimate of T_⊗ in terms of inner product, with estimation variance depending only on the fourth-order moments 𝔼[ ( R_ij^f )^4 ] and 𝔼[ ( R_ij^g )^4 ], which are constants for many symmetric distributions with univariance, including Gaussian distribution and Uniform distribution. The bound reveals that wen can further minimize the approximation error by normalizing the features.For simplicity we define the conditioning strategy used by the conditional domain discriminator D asT ( 𝐡) =T_ ⊗( 𝐟,𝐠)if d_f×d_g⩽ 4096T_ ⊙( 𝐟,𝐠)otherwise,where 4096 is the largest number of units in typical deep networks (e.g. AlexNet), and if dimension of the multilinear map T_⊗ is larger than 4096, then we will choose randomized multilinear map T_⊙. §.§ Conditional Domain Adversarial Network We enable conditional adversarial domain adaptation over domain-specific feature representation f and classifier prediction g. We jointly minimize (<ref>) w.r.t. source classifier G and feature extractor F, minimize (<ref>) w.r.t. domain discriminator D, and maximize (<ref>) w.r.t. feature extractor F and source classifier G. This yields the minimax problem of Conditional Domain Adversarial Network (CDAN):min_G 𝔼_( 𝐱_i^s,𝐲_i^s )∼𝒟_sL( G( 𝐱_i^s),𝐲_i^s)+ λ( 𝔼_𝐱_i^s∼𝒟_slog[ D( T ( 𝐡_i^s))] + 𝔼_𝐱_j^t∼𝒟_tlog[1 - D( T ( 𝐡_j^t))])max_D 𝔼_𝐱_i^s∼𝒟_slog[ D( T ( 𝐡_i^s))] + 𝔼_𝐱_j^t∼𝒟_tlog[1 - D( T ( 𝐡_j^t))],where λ is a hyper-parameter between source classifier and conditional domain discriminator, and note that h = ( f,g) is the joint variable of domain-specific feature representation f and classifier prediction g for adversarial adaptation. As a rule of thumb, we can safely set f as the last feature layer representation and g as the classifier layer prediction. In cases where lower-layer features are not transferable as in pixel-level adaptation tasks <cit.>, we can change f to lower-layer representations. Entropy ConditioningThe minimax problem for the conditional domain discriminator (<ref>) imposes equal importance for different examples, while hard-to-transfer examples with uncertain predictions may deteriorate the conditional adversarial adaptation procedure. Towards safe transfer, we quantify the uncertainty of classifier predictions by the entropy criterion H( 𝐠) =- ∑_c = 1^C g_clogg_c, where C is the number of classes and g_c is the probability of predicting an example to class c. We prioritize the discriminator on those easy-to-transfer examples with certain predictions by reweighting each training example of the conditional domain discriminator by an entropy-aware weight w(H( 𝐠)) = 1+e^-H(𝐠). The entropy conditioning variant of CDAN (CDAN+E) for improved transferability is formulated asmin_G 𝔼_(𝐱_i^s,𝐲_i^s)∼𝒟_sL( G( 𝐱_i^s),𝐲_i^s)+ λ( 𝔼_𝐱_i^s∼𝒟_sw( H( 𝐠_i^s))log[ D( T( 𝐡_i^s))] + 𝔼_𝐱_j^t∼𝒟_tw( H( 𝐠_j^t))log[ 1 - D( T( 𝐡_j^t))])max_D 𝔼_𝐱_i^s∼𝒟_sw( H( 𝐠_i^s))log[ D( T( 𝐡_i^s))] + 𝔼_𝐱_j^t∼𝒟_tw( H( 𝐠_j^t))log[ 1 - D( T( 𝐡_j^t))].The domain discriminator empowers the entropy minimization principle <cit.> and encourages certain predictions, enabling CDAN+E to further perform semi-supervised learning on unlabeled target data.§.§ Generalization Error Analysis We give an analysis of the CDAN method taking similar formalism of the domain adaptation theory <cit.>. We first consider the source and target domains over the fixed representation space 𝐟 = F(𝐱), and a family of source classifiers G in hypothesis space ℋ <cit.>. Denote by ϵ _P( G ) = 𝔼_(𝐟,𝐲)∼ P[G( 𝐟) 𝐲] the risk of a hypothesis G∈ℋ w.r.t. 
distribution P, and ϵ _P( G_1,G_2) = 𝔼_(𝐟,𝐲)∼ P[ G_1( 𝐟)G_2( 𝐟)] the disagreement between hypotheses G_1, G_2 ∈ℋ. Let G^ *= min_G ϵ _P( G ) + ϵ _Q( G ) be the ideal hypothesis that explicitly embodies the notion of adaptability. The probabilistic bound <cit.> of the target risk ϵ_Q(G) of hypothesis G is given by the source risk ϵ_P(G) plus the distribution discrepancyϵ _Q( G ) ⩽ϵ _P( G ) + [ϵ _P( G^ * ) + ϵ _Q( G^ * )] + | ϵ _P( G,G^ * ) - ϵ _Q( G,G^ * )|.The goal of domain adaptation is to reduce the distribution discrepancy | ϵ _P( G,G^ * ) - ϵ _Q( G,G^ * )|. By definition, ϵ _P( G,G^ * ) = 𝔼_(𝐟,𝐲)∼ P[ G( 𝐟) G^ * ( 𝐟)] = 𝔼_(𝐟,𝐠)∼P_G[ 𝐠G^ * ( 𝐟)] = ϵ _P_G( G^ * ), and similarly, ϵ _Q( G,G^ * ) = ϵ _Q_G( G^ * ). Note that, P_G = ( 𝐟,G( 𝐟))_𝐟∼P(𝐟) and Q_G = ( 𝐟,G( 𝐟))_𝐟∼Q(𝐟) are the proxies of the joint distributions P(𝐟,𝐲) and Q(𝐟,𝐲), respectively <cit.>. Based on the proxies, | ϵ _P( G,G^ * ) - ϵ _Q( G,G^ * )| = | ϵ _P_G( G^ * ) - ϵ _Q_G( G^ * )|. Define a (loss) difference hypothesis space Δ≜{δ= | 𝐠 - G^ * ( 𝐟)| : G^ * ∈ℋ} over the joint variable (𝐟,𝐠), where δ : (𝐟,𝐠) ↦{0, 1} outputs the loss of G^∗∈ℋ. Based on the above difference hypothesis space Δ, we define the Δ-distance asd_Δ( P_G,Q_G)≜sup_δ∈Δ| 𝔼_(𝐟,𝐠)∼P_G[ δ( 𝐟,𝐠)0] - 𝔼_(𝐟,𝐠)∼Q_G[ δ( 𝐟,𝐠)0]| = sup_G^ * ∈ℋ| 𝔼_(𝐟,𝐠)∼P_G[ | 𝐠 - G^ * ( 𝐟)|0] - 𝔼_(𝐟,𝐠)∼Q_G[ | 𝐠 - G^ * ( 𝐟)|0]|⩾| 𝔼_(𝐟,𝐠)∼P_G[ 𝐠G^ * ( 𝐟)] - 𝔼_(𝐟,𝐠)∼Q_G[ 𝐠G^ * ( 𝐟)]| = | ϵ_P_G( G^ * ) - ϵ_Q_G( G^ * )|.Hence, the domain discrepancy | ϵ _P( G,G^ * ) - ϵ _Q( G,G^ * )| can be upper-bounded by the Δ-distance.Since the difference hypothesis space Δ is a continuous function class, assume the family of domain discriminators ℋ_D is rich enough to contain Δ, Δ⊂ℋ_D. Such an assumption is not unrealistic as we have the freedom to choose ℋ_D, for example, a multilayer perceptrons that can fit any functions. Given these assumptions, we show that training domain discriminator D is related to d_Δ( P_G,Q_G):d_Δ( P_G,Q_G)⩽sup_D ∈ℋ_D| 𝔼_(𝐟,𝐠)∼P_G[ D( 𝐟,𝐠)0] - 𝔼_(𝐟,𝐠)∼Q_G[ D( 𝐟,𝐠)0]| ⩽sup_D ∈ℋ_D| 𝔼_(𝐟,𝐠)∼P_G[ D( 𝐟,𝐠) = 1] + 𝔼_(𝐟,𝐠)∼Q_G[ D( 𝐟,𝐠) = 0]|.This supremum is achieved in the process of training the optimal discriminator D in CDAN, giving an upper bound of d_Δ( P_G,Q_G). Simultaneously, we learn representation 𝐟 to minimize d_Δ( P_G,Q_G), yielding better approximation of ϵ_Q(G) by ϵ_P(G) to bound the target risk in the minimax paradigm.§ EXPERIMENTS We evaluate the proposed conditional domain adversarial networks with many state-of-the-art transfer learning and deep learning methods. Codes will be available at <http://github.com/thuml/CDAN>. §.§ Setup Office-31 <cit.> is the most widely used dataset for visual domain adaptation, with 4,652 images and 31 categories collected from three distinct domains: Amazon (A), Webcam (W) and DSLR (D). We evaluate all methods on six transfer tasks A → W, D → W, W → D, A → D, D → A, and W → A.ImageCLEF-DA[<http://imageclef.org/2014/adaptation>] is a dataset organized by selecting the 12 common classes shared by three public datasets (domains): Caltech-256 (C), ImageNet ILSVRC 2012 (I), and Pascal VOC 2012 (P).We permute all three domains and build six transfer tasks: I → P, P → I, I → C, C → I, C → P, P → C.Office-Home <cit.> is a better organized but more difficult dataset than Office-31, which consists of 15,500 images in 65 object classes in office and home settings, forming four extremely dissimilar domains: Artistic images (Ar), Clip Art (Cl), Product images (Pr), and Real-World images (Rw). 
Digits We investigate three digits datasets: MNIST, USPS, and Street View House Numbers (SVHN). We adopt the evaluation protocol of CyCADA <cit.> with three transfer tasks: USPS to MNIST (U → M), MNIST to USPS (M → U), and SVHN to MNIST (S → M). We train our model using the training sets: MNIST (60,000), USPS (7,291), standard SVHN train (73,257). Evaluation is reported on the standard test sets: MNIST (10,000), USPS (2,007) (the numbers of images are in parentheses). VisDA-2017[<http://ai.bu.edu/visda-2017/>] is a challenging simulation-to-real dataset, with two very distinct domains: Synthetic, renderings of 3D models from different angles and with different lightning conditions; Real, natural images. It contains over 280K images across 12 classes in the training, validation and test domains.We compare Conditional Domain Adversarial Network (CDAN) with state-of-art domain adaptation methods: Deep Adaptation Network (DAN) <cit.>, Residual Transfer Network (RTN) <cit.>, Domain Adversarial Neural Network (DANN) <cit.>, Adversarial Discriminative Domain Adaptation (ADDA) <cit.>, Joint Adaptation Network (JAN) <cit.>, Unsupervised Image-to-Image Translation (UNIT) <cit.>, Generate to Adapt (GTA) <cit.>, Cycle-Consistent Adversarial Domain Adaptation (CyCADA) <cit.>.We follow the standard protocols for unsupervised domain adaptation <cit.>. We use all labeled source examples and all unlabeled target examples and compare the average classification accuracy based on three random experiments. We conduct importance-weighted cross-validation (IWCV) <cit.> to select hyper-parameters for all methods.As CDAN performs stably under different parameters, we fix λ = 1 for all experiments. For MMD-based methods (TCA, DAN, RTN, and JAN), we use Gaussian kernel with bandwidth set to median pairwise distances on training data <cit.>.We adopt AlexNet <cit.> and ResNet-50 <cit.> as base networks and all methods differ only in their discriminators. We implement AlexNet-based methods in Caffe and ResNet-based methods in PyTorch. We fine-tune from ImageNet pre-trained models <cit.>, except the digit datasets that we train our models from scratch. We train the new layers and classifier layer through back-propagation, where the classifier is trained from scratch with learning rate 10 times that of the lower layers.We adopt mini-batch SGD with momentum of 0.9 and the learning rate annealing strategy as <cit.>: the learning rate is adjusted by η _p = η _0( 1 + α p)^-β, where p is the training progress changing from 0 to 1, and η_0 = 0.01, α=10, β=0.75 are optimized by the importance-weighted cross-validation <cit.>.We adopt a progressive training strategy for the discriminator, increasing λ from 0 to 1 by multiplying to 1 - exp(- δ p)/1 + exp(- δ p), δ = 10.§.§ Results The results on Office-31 based on AlexNet and ResNet are reported in Table <ref>, with results of baselines directly reported from the original papers if protocol is the same. The CDAN models significantly outperform all comparison methods on most transfer tasks, where CDAN+E is a top-performing variant and CDAN performs slightly worse. It is desirable that CDAN promotes the classification accuracies substantially on hard transfer tasks, e.g. A → W and A → D, where the source and target domains are substantially different <cit.>. Note that, CDAN+E even outperforms generative pixel-level domain adaptation method GTA, which has a very complex design in both architecture and objectives.The results on the ImageCLEF-DA dataset are reported in Table <ref>. 
The CDAN models outperform the comparison methods on most transfer tasks, but with smaller rooms of improvement. This is reasonable since the three domains in ImageCLEF-DA are of equal size and balanced in each category, and are visually more similar than Office-31, making the former dataset easier for domain adaptation. The results on Office-Home are reported in Table <ref>. The CDAN models substantially outperform the comparison methods on most transfer tasks, and with larger rooms of improvement. An interpretation is that the four domains in Office-Home are with more categories, are visually more dissimilar with each other, and are difficult in each domain with much lower in-domain classification accuracy <cit.>.Since domain alignment is category agnostic in previous work, it is possible that the aligned domains are not classification friendly in the presence of large number of categories.It is desirable that CDAN models yield larger boosts on such difficult domain adaptation tasks, which highlights the power of adversarial domain adaptation by exploiting complex multimodal structures in classifier predictions.Strong results are also achieved on the digits datasets and synthetic to real datasets as reported in Table <ref>. Note that the generative pixel-level adaptation methods UNIT, CyCADA, and GTA are specifically tailored to the digits and synthetic to real adaptation tasks. This explains why the previous feature-level adaptation method JAN performs fairly weakly. To our knowledge, CDAN+E is the only approach that works reasonably well on all five datasets, and remains a simple discriminative model. §.§ Analysis Ablation StudyWe examine the sampling strategies of the random matrices in Equation (<ref>). We testify CDAN+E (w/ gaussian sampling) and CDAN+E (w/ uniform sampling) with their random matrices sampled only once from Gaussian and Uniform distributions, respectively. Table <ref> shows that CDAN+E (w/o random sampling) performs best while CDAN+E (w/ uniform sampling) performs the best across the randomized variants. Table <ref>∼<ref> shows that CDAN+E outperforms CDAN, proving that entropy conditioning can prioritize easy-to-transfer examples and encourage certain predictions.Conditioning StrategiesBesides multilinear conditioning, we investigate DANN-[f,g] with the domain discriminator imposed on the concatenation of f and g, DANN-f and DANN-g with the domain discriminator plugged in feature layer f and classifier layer g.Figure <ref> shows accuracies on A → W and A → D based on ResNet-50. The concatenation strategy is not successful, as it cannot capture the cross-covariance between features and classes, which are crucial to domain adaptation <cit.>. Figure <ref> shows that the entropy weight e^-H(𝐠) corresponds well with the prediction correctness: entropy weight ≈ 1 if the prediction is correct, and much smaller than 1 when prediction is incorrect (uncertain). This reveals the power of the entropy conditioning to guarantee example transferability.Distribution DiscrepancyThe 𝒜-distance is a measure for distribution discrepancy <cit.>, defined as dist_ A = 2( 1 - 2ϵ), where ϵ is the test error of a classifier trained to discriminate the source from target. Figure <ref> shows dist_ A on tasks A → W, W → D with features of ResNet, DANN, and CDAN. We observe that dist_ A on CDAN features is smaller than dist_ A on both ResNet and DANN features, implying that CDAN features can reduce the domain gap more effectively. 
As domains W and D are similar, dist_ A of task W → D is smaller than that of A → W, implying higher accuracies.ConvergenceWe testify the convergence of ResNet, DANN, and CDANs, with the test errors on task A → W shown in Figure <ref>. CDAN enjoys faster convergence than DANN, while CDAN (M) converges faster than CDAN (RM). Note that CDAN (M) constitutes high-dimensional multilinear map, which is slightly more costly than CDAN (RM), while CDAN (RM) has similar cost as DANN. Visualization We visualize by t-SNE <cit.> in Figures <ref>–<ref> the representations of task A → W (31 classes) by ResNet, DANN, CDAN-f, and CDAN-fg.The source and target are not aligned well with ResNet, better aligned with DANN but categories are not discriminated well. They are aligned better and categories are discriminated better by CDAN-f, while CDAN-fg is evidently better than CDAN-f. This shows the benefit of conditioning adversarial adaptation on discriminative predictions. § CONCLUSION This paper presented conditional domain adversarial networks (CDANs), novel approaches to domain adaptation with multimodal distributions. Unlike previous adversarial adaptation methods that solely match the feature representation across domains which is prone to under-matching, the proposed approach further conditions the adversarial domain adaptation on discriminative information to enable alignment of multimodal distributions. Experiments validated the efficacy of the proposed approach.§ ACKNOWLEDGMENTS We thank Yuchen Zhang at Tsinghua University for insightful discussions. This work was supported by the National Key R&D Program of China (2016YFB1000701), the Natural Science Foundation of China (61772299, 71690231, 61502265) and the DARPA Program on Lifelong Learning Machines. ieee | http://arxiv.org/abs/1705.10667v4 | {
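To tie the training-time ingredients above together in one place (an illustrative recap added to this dump, not the released implementation the paper points to at github.com/thuml/CDAN), the snippet below codes the entropy-aware weight w(H(g)) = 1 + e^{-H(g)} from the entropy-conditioning paragraph, the progressive ramp (1 - e^{-δp})/(1 + e^{-δp}) with δ = 10 used for λ, and the learning-rate annealing η_p = η_0(1 + αp)^{-β} with η_0 = 0.01, α = 10, β = 0.75 from the experimental setup.

```python
import numpy as np

def entropy_weight(g, eps=1e-12):
    """w(H(g)) = 1 + exp(-H(g)) for a batch of softmax predictions g with shape [n, C]."""
    H = -np.sum(g * np.log(g + eps), axis=1)
    return 1.0 + np.exp(-H)

def lambda_ramp(p, delta=10.0):
    """Progressive weight of the adversarial term as training progress p goes from 0 to 1."""
    return (1.0 - np.exp(-delta * p)) / (1.0 + np.exp(-delta * p))

def lr_anneal(p, eta0=0.01, alpha=10.0, beta=0.75):
    """Learning-rate schedule eta_p = eta_0 * (1 + alpha*p)^(-beta)."""
    return eta0 * (1.0 + alpha * p) ** (-beta)

# A confident prediction is weighted close to 2, while a maximally uncertain one (uniform
# over C classes) only gets 1 + 1/C; this is how the conditional discriminator is steered
# toward easy-to-transfer examples.
print(entropy_weight(np.array([[0.97, 0.01, 0.01, 0.01]])))   # ~ [1.85]
print(entropy_weight(np.array([[0.25, 0.25, 0.25, 0.25]])))   # = [1.25]
print([round(lambda_ramp(p), 3) for p in (0.0, 0.1, 0.5, 1.0)])
print([round(lr_anneal(p), 4) for p in (0.0, 0.5, 1.0)])
```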
"authors": [
"Mingsheng Long",
"Zhangjie Cao",
"Jianmin Wang",
"Michael I. Jordan"
],
"categories": [
"cs.LG"
],
"primary_category": "cs.LG",
"published": "20170526005036",
"title": "Conditional Adversarial Domain Adaptation"
} |
Sensor Selection Cost Optimization for Tracking Structurally Cyclic Systems: a P-Order Solution M. Doostmohammadian ab^∗^∗Corresponding author. Email: [email protected], [email protected] , H. Zarrabic, and H. R. Rabieea aICT Innovation Center for Advanced Information and Communication Technology, School of Computer Engineering, Sharif University of Technology; bMechanical Engineering Department, Semnan University; cIran Telecommunication Research Center (ITRC) December 30, 2023 ========================================================================================================================================================================================================================================================================================================================================================================================================== Measurements and sensing implementations impose certain costs in sensor networks. Sensor selection cost optimization is the problem of minimizing the sensing cost of monitoring a physical (or cyber-physical) system. Consider a given set of sensors tracking the states of a dynamical system for estimation purposes. For each sensor assume different costs to measure different (realizable) states. The idea is to assign sensors to measure states such that the global cost is minimized. The number and selection of sensor measurements need to ensure observability in order to track the dynamic state of the system with bounded estimation error. The main question we address is how to select the state measurements to minimize the cost while satisfying the observability conditions. Relaxing the observability condition for structurally cyclic systems, the main contribution is to propose a graph-theoretic approach that solves the problem in polynomial time. Note that polynomial-time algorithms are suitable for large-scale systems, as their running time is upper-bounded by a polynomial expression in the size of the input. We frame the problem as a linear sum assignment with solution complexity of 𝒪(m^3). State-Space Models, Linear Systems, State Estimation, Observability, Convex Programming, Sensor Selection § INTRODUCTION Sensors and sensing devices are widespread in everyday use and are involved in many aspects of human life. Nowadays, sensors have advanced beyond the physical world and are even being introduced in online social networks. The emerging notion of the IoT and the so-called Trillion Sensors roadmap further motivates sensor and actuator implementation in many physical systems and cyber networks. A few examples are: in ecosystems and environmental monitoring <cit.>, security and vulnerability of social networks <cit.>, eHealth and epidemic monitoring <cit.>, Dynamic Line Rating (DLR) in smart power grids <cit.>, etc. In these large-scale applications the cost of sensing is a challenge. The cost may represent energy consumption, the economic cost of sensors, and even the additive disturbance due to, for example, long-distance communication in wireless sensor networks. The rapidly growing size of the IoT and of sensor networks motivates minimal-cost sensor-placement solutions for practical applications. There exist different approaches toward sensor selection optimization. In <cit.>, authors study sensor selection for noise reduction. This work introduces the combinatorial problem of selecting k out of m sensors to optimize the volume of the probabilistic confidence ellipsoid containing the measurement error by adopting a convex relaxation.
Authors in <cit.> consider minimum sensor coverage for dynamic social inference. Their idea is to find the minimum sensor collection that ensures generic social observability. The authors show the relaxation lies in the set-covering category and is generally NP-hard[Note that the NP-hard problems are believed to have no solution with time complexity upper-bounded by a polynomial function of the input parameters.] to solve. In <cit.> the source localization problem under observability constraints is addressed. The authors aim to find the minimal possible observers to exactly locate the source of infection/diffusion in a network. They state that this problem is NP-hard and propose approximations to solve the problem. In another line of research, distributed optimization is discussed in <cit.>, where dynamic feedback algorithms robust to disturbance are proposed to minimize a certain cost function over a sensor network. Optimal sensor coverage with application to facility allocation is studied in <cit.>. The authors propose a distributed deployment protocol as a locally optimal solution in order to assign resources to a group of mobile autonomous sensors under certain duty-to-capability constraints. Minimal actuator placement ensuring controllability is discussed in <cit.>. Authors provide P-order approximations to a generally NP-hard problem by considering control energy constraints. In <cit.>, the minimal number of observers for distributed inference in social networks is discussed. Similarly, minimizing the number of actuators for structural controllability following specific rank constraints is addressed in <cit.>. This paper studies minimum sensor placement cost for tracking structurally cyclic dynamical systems (see Fig.<ref>). In general, state measurements are costly and these costs may change for different sensor selections. This is due to various factors, e.g. sensor range and calibration, measurement accuracy, embedding/installation cost, and even environmental conditions. Therefore, any collection of state-sensor pairs may impose specific sensing costs. The main constraint, however, is that not all collections of sensor measurements provide an observable estimation. Observability determines whether the system outputs convey sufficient information over time to infer the internal states of the dynamic system. Without observability, no stable estimation can be achieved and the tracking error covariance grows unbounded. In the conventional sense, observability requires algebraic tests, e.g. the observability Gramian formulation <cit.> or the Popov-Belevitch-Hautus test <cit.>. These methods are computationally inefficient, especially in large-scale systems. In contrast, this paper adopts a structural approach towards observability. The methodology is irrespective of the numerical values of the system parameters: the structure is fixed while the system parameters may vary in time <cit.>. Indeed, this approach makes our solution practically feasible for Linear Structure-Invariant (LSI) systems. Such systems arise, for example, in the linearization of nonlinear dynamics, where the linearized Jacobian is LSI while the values depend on the linearization point. As is known, the observability and controllability of the Jacobian linearization is sufficient for observability and controllability of the original nonlinear dynamics <cit.>. Another structural property of the system is the system rank. In particular, for full-rank systems the adjacency graph of the system (system digraph) is structurally cyclic, i.e.
there exists a disjoint union of cycles covering all state nodes in the system digraph. An example is dynamic system digraphs including self-loops, which implies that for every system state the first derivative is a function of that same state (referred to as self-damped systems in <cit.>). An example arises in systems representing ecological interactions <cit.> among species, where intrinsic self-dampening ensures eco-stability around the equilibrium state. In <cit.>, authors extended the case to networks of coupled dth-order intrinsic dynamics. In such a case, the self-dynamics impose a cyclic subgraph in the large-scale structure of the system digraph. In addition, in the distributed estimation and sensor network literature it is typically assumed that the system matrix is invertible <cit.> and therefore the system is full rank. In <cit.> authors study the stability and observability of cyclic interconnected systems resulting from biochemical reactions, while in <cit.> authors analyze the self-damped epidemic equations integrated in social and human networks. These examples motivate the study of structurally cyclic systems in this paper. Contributions: Towards sensor selection cost optimization, this paper, first, considers LSI dynamic systems. In such systems the parameters vary in time while the system structure is unchanged, as in the Jacobian linearization of nonlinear dynamic systems. This is also the case, for example, in social systems with invariant social interactions, and in power systems with a fixed system structure but time-varying parameters due to dynamic loading. Second, the observability constraint for cost optimization is framed as a selection problem from a necessary set of states. We relax the observability constraint by defining the equivalent set of states necessary for observability. This is a novel approach towards cost optimization for estimation purposes. Third, the optimization is characterized as a Linear Sum Assignment Problem (LSAP), whose solution is of polynomial complexity. In this direction, the NP-hard observability optimization problem, as reviewed in the beginning of this section, is relaxed to a P-order problem for the case of structurally cyclic systems. Note that, for general systems, this problem is NP-hard to solve (see <cit.> for example). The relaxation in this paper comes from introducing the concept of cost for states measured by sensors and the concept of structural observability. Further, the definition of the cost is introduced as a general mathematical concept with possible interpretations for a variety of applications. Particularly, note that this P-order formulation, while not ideal, is practical, and this work finds application in monitoring large-scale systems such as social systems <cit.>, eco-systems <cit.>, and even epidemic monitoring <cit.>. To the best of our knowledge, no general P-order solution is proposed in the literature for this problem. Assumptions: The following assumptions hold in this paper: (i) The system is globally observable to the group of sensors. (ii) The number of sensors is at least equal to the number of crucial states necessary for observability, and at least one state is accessible/measurable by each sensor. Without the first assumption no estimation scheme works, and there is no solution for the optimal observability problem. In the second assumption, any sensor with no access to a necessary state observation is not a player in the optimization game. Other assumptions are discussed in the body of the paper. The outline of the paper is as follows.
In Section <ref>, relevant structural system properties and algorithms are reviewed. Section <ref> states the graph-theoretic approach towards observability. Section <ref> provides the novel formulation for the cost optimization problem. Section <ref> reviews the so-called assignment problem as the solution. Section <ref> states some remarks on the results, motivation, and application of this optimal sensor selection scheme. Section <ref> illustrates the results by two academic examples. Finally, Section <ref> concludes the paper. Notation: We provide a table of notation in Table <ref> to explain the terminologies and symbols in the paper. § STRUCTURED SYSTEM THEORY Consider the state of a linear system, x, evolving as[The underline notation represents a vector variable.]: ẋ = Ax + ν and in discrete time as: x(k+1) = Ax(k) + ν(k) where x∈ℝ^n is the vector of system states, and ν∼𝒩(0,V) is independent identically distributed (iid) system noise. Consider a group of sensors, indexed by y_i, i=1,…,m, each taking a noise-corrupted state measurement as: y_i = H_ix + η_i or in discrete time as: y_i(k) = H_ix(k) + η_i(k) where H_i is a row vector, y_i ∈ℝ is the sensor measurement, and η_i ∼𝒩(0,Q_i) is the zero-mean measurement noise at sensor i. Let 𝒜∼{0,1}^n × n represent the structured matrix, i.e. the zero-nonzero pattern of the matrix A. A nonzero element implies a system parameter that may change in time, and the zeros are the fixed zeros of the system. Similarly, ℋ∼{0,1}^m × n represents the structure of the measurement matrix H. A nonzero entry in each row of ℋ represents the index of the measured state by the corresponding sensor. This zero-nonzero structure can be represented as a directed graph 𝒢_sys∼ (𝒳∪𝒴,ℰ) (known as the system digraph). Here, 𝒳 is the set of state nodes {x_1,…,x_n}, each representing a state, and 𝒴 is the output set {y_1,…,y_m} representing the set of sensor measurements. The nonzero entry 𝒜_ij is modeled by an edge x_j →x_i. The set ℰ = (𝒳×𝒳) ∪(𝒳×𝒴) is the edge set. Edges in ℰ_xx = 𝒳×𝒳 represent the dynamic interactions of states in 𝒢_A∼ (𝒳,ℰ_xx), and edges ℰ_xy = (𝒳×𝒴) in 𝒢_xy = (𝒳∪𝒴,ℰ_xy) represent the flow of state measurement information into sensors. It is clear that 𝒢_sys = 𝒢_A∪𝒢_xy. Define a path as a chain of non-repeated edge-connected nodes, and write x_i →𝒴 for a path from state x_i ending in a sensor node in 𝒴. Define a cycle as a path starting and ending at the same node. Similar to the structured matrices and the associated digraphs for linear systems, one can define a digraph for the Jacobian linearization of nonlinear systems, also referred to as the inference diagram <cit.>. In the nonlinear case the structure of the system digraph is related to the zero-nonzero structure of the Jacobian matrix 𝒥. For the system of equations ẋ = f(x, ν), if 𝒥_ij = ∂ f_i/∂ x_j is not a fixed zero, draw a link x_j →x_i in the system digraph. This implies that x_i is a function of x_j and state x_j can be inferred by measuring x_i over time. Following this scenario for all pairs of states and connecting the inference links, the system digraph 𝒢_A is constructed. It should be mentioned that we assume the nonlinear function f is globally Lipschitz, and therefore the system of equations has a unique solution and the Jacobian matrix is defined at all operating points. The properties of the system digraph and its zero-nonzero structure are closely tied with the generic system properties.
Such properties are almost independent of the values of the physical system parameters. It is known that if these specific properties of the system hold for one choice of numerical values of the free parameters, they hold for almost all choices of system parameters, where these system parameters are enclosed in the nonzero entries of the system matrix. Therefore, the zero-nonzero structure of the system and the associated system digraph ensure sufficient information on such generic properties. In general, efficient structural algorithms are known to check these properties, while the numerical approach might be NP-hard to solve <cit.>. Examples of generic properties are the structural rank (𝒮-rank) and structural observability and controllability <cit.>. §.§ Structurally Cyclic Systems The following definition defines structurally cyclic systems: A system is structurally cyclic if and only if its associated system digraph includes a disjoint family of cycles spanning all nodes <cit.>. There exist many real-world systems which are structurally cyclic. As mentioned in Section <ref>, any complex network/system governed by coupled dth-order differential equations and randomly weighted system parameters is structurally cyclic (see <cit.> for more information). Such structures may arise in biochemical reaction networks <cit.>, epidemic spread in networks <cit.>, ecosystems <cit.>, and even in social networks where each agent has intrinsic self-dampening dynamics represented as a self-loop in the social digraph (see the example in <cit.>). There are efficient methods to check if the graph is cyclic and includes a disjoint cycle family, namely matching algorithms. A matching of size m, denoted by ℳ_m, is a subset of nonadjacent edges in ℰ_xx spanning m nodes in 𝒳. Define nonadjacent directed edges as two edges not sharing an end node. Define a maximum matching as the matching of maximum size in 𝒢_A, where the size of the matching, m, is defined by the number of nodes covered. A perfect matching is a matching covering all nodes in the graph, i.e. ℳ_n where n = |𝒳|. A system of n state nodes is structurally cyclic if and only if its digraph includes a perfect matching ℳ_n. The detailed proof is given in <cit.>. The size of the maximum matching is known to be related to the structural rank of the system matrix, defined as follows: For a structured matrix 𝒜, define its structural rank (𝒮-rank) as the maximum rank of the matrix A over all values of the nonzero parameters. Note that the 𝒮-rank of 𝒜 equals the maximum size of a disjoint cycle family, where the size represents the number of nodes covered by the cycle family <cit.>. For structurally cyclic systems the maximum size of the cycle family equals n, the number of system states. This implies that for structurally cyclic systems 𝒮(𝒜) = n. This result can be extended to nonlinear systems as 𝒮(𝒥) = n at almost all system operating points. As an example, consider a graph having a random-weighted self-cycle at every state node. In this example, every self-cycle is a matching edge, so the graph contains a perfect matching and therefore is cyclic. On the other hand, these self-loops imply that every diagonal element in the associated structured matrix, 𝒜, is nonzero. Having random values at the diagonal entries and the other nonzero parameters, the determinant is (almost) always nonzero and the system is structurally full rank [This can be checked simply by MATLAB, considering random entries as nonzero parameters of the matrix. The probability of having zero determinant is zero.].
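The following is a small illustrative Python sketch (not code from the paper; it assumes the networkx package is available, and the function name is ours) of this check: given the zero-nonzero pattern 𝒜, a perfect matching in the associated bipartite row-column graph certifies 𝒮-rank(𝒜) = n, i.e. a structurally cyclic system.

import networkx as nx
from networkx.algorithms import bipartite

def is_structurally_cyclic(A_pattern):
    """A_pattern: n x n 0/1 list of lists (zero-nonzero structure of A)."""
    n = len(A_pattern)
    B = nx.Graph()
    rows = [("r", i) for i in range(n)]   # row nodes (equations dx_i/dt)
    cols = [("c", j) for j in range(n)]   # column nodes (states x_j)
    B.add_nodes_from(rows, bipartite=0)
    B.add_nodes_from(cols, bipartite=1)
    for i in range(n):
        for j in range(n):
            if A_pattern[i][j]:           # a free (nonzero) parameter A_ij
                B.add_edge(("r", i), ("c", j))
    # Hopcroft-Karp maximum matching; its size equals the structural rank.
    matching = bipartite.maximum_matching(B, top_nodes=rows)
    s_rank = len(matching) // 2           # the matching dict stores both directions
    return s_rank == n                    # perfect matching <=> structurally cyclic

# Example: self-loops on every state (nonzero diagonal) give a perfect matching.
A = [[1, 1, 0], [0, 1, 0], [1, 0, 1]]
print(is_structurally_cyclic(A))          # True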
The same holds for networks of intrinsic dth-order self-dynamics (instead of first-order self-loops), as addressed in <cit.>. In the coupled dynamic equations a dth-order dynamic represents a cyclic component in the system digraph. The interconnection of these individual dynamics constructs the digraph of the large-scale physical system. Assuming time-varying system parameters, the system remains structurally full rank. § GRAPH THEORETIC OBSERVABILITY Observability plays a key role in estimation and filtering. Given a set of system measurements, observability quantifies the information inferred from these measurements to estimate the global state of the system. This is irrespective of the type of filtering and holds for any estimation process by a group of sensors/estimators. Despite the algebraic nature of this concept, this paper adopts a graph-theoretic approach towards observability. This approach is referred to as structural observability and deals with system digraphs rather than the algebraic Gramian-based method. The main theorem on structural observability is recalled here. A system digraph is structurally observable if the following two conditions are satisfied: * Every state is connected to a sensor via a directed path of states, i.e. x_i →𝒴, i ∈{1,…,n}. * There is a sub-graph of disjoint cycles and output-connected paths that spans all state nodes. The original proof of the theorem is available in the work by Lin <cit.> for the dual problem of structural controllability, and a more detailed proof is available in <cit.>. The proof for structural observability is given in <cit.>. The conditions in Theorem <ref> are closely related to certain properties of digraphs. The second condition holds for structurally cyclic systems, since all states are included in a disjoint family of cycles. The first condition can be checked by finding Strongly Connected Components (SCCs) in the system digraph <cit.>. Recall that an SCC includes all states mutually reachable via directed paths. Therefore, the output-connectivity of any state in an SCC implies the output-connectivity of all states in that SCC, and consequently this satisfies the first condition in Theorem <ref>. By measuring one state in an SCC, all states in that SCC are reachable (and observable), i.e. x_i ∈ SCC_l and x_i →𝒴 implies SCC_l →𝒴. This further inspires the concept of equivalent measurement sets for observability, stated in the following lemma. States sharing an SCC are equivalent in terms of observability. This directly follows from the definition. Since for every two states x_i and x_j we have x_j → x_i, then following Theorem <ref>, having x_i →𝒴 implies that x_j →𝒴. See more details in the previous work by the first author <cit.>. Observationally equivalent states provide a set of options for monitoring and estimation. This is of significant importance in reliability analysis of sensor networks. These equivalent options are practical in recovering the loss of observability in case of sensor/observer failure <cit.>. In order to explore the states necessary for observability, we partition all SCCs in terms of their reachability by states in other SCCs. * Parent SCC: an SCC with no outgoing edge to states in other SCCs, i.e. for all x_i ∈ SCC_l there is no x_j ∉ SCC_l such that x_i → x_j. * Child SCC: a non-parent SCC (an SCC having outgoing edges to other SCCs), i.e. there exist x_i ∈ SCC_l and x_j ∉ SCC_l such that x_i → x_j. Parent SCCs do not share any state node. The above lemma is generally true for all SCCs. The proof is clear; if two components share a state node they in fact make a larger component.
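As a concrete illustration (a sketch under the assumption that the networkx package is available; it is not code from the paper), the parent/child classification can be obtained from the condensation of the state digraph: parent SCCs are exactly the components with no outgoing edges in the condensation.

import networkx as nx

def parent_child_sccs(edges, n):
    """edges: list of (j, i) pairs for x_j -> x_i; n: number of states."""
    G = nx.DiGraph()
    G.add_nodes_from(range(n))
    G.add_edges_from(edges)
    C = nx.condensation(G)                     # DAG of SCCs (Tarjan-based, O(V+E))
    parents, children = [], []
    for scc_id in C.nodes:
        members = sorted(C.nodes[scc_id]["members"])
        # A parent SCC has no outgoing edge to states in other SCCs.
        (parents if C.out_degree(scc_id) == 0 else children).append(members)
    return parents, children

# Toy digraph with self-loops on every state (structurally cyclic).
edges = [(i, i) for i in range(5)] + [(3, 0), (4, 2)]   # SCCs {3},{4} feed other SCCs
parents, children = parent_child_sccs(edges, 5)
print(parents)   # e.g. [[0], [1], [2]] -> one sensor per parent SCC is necessary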
Following the first condition in Theorem <ref>, the given definitions inspire the notion of a necessary set of equivalent states for observability. At least one measurement from every parent SCC is necessary for observability. This is because the child SCCs are connected to parent SCCs via a direct edge or a directed path. Therefore, 𝒴-connectivity of the parent SCCs implies 𝒴-connectivity of the child SCCs. In other words, x_i ∈ SCC_l, x_i → x_j and x_j ∈ SCC_k →𝒴 implies SCC_l →𝒴. See the detailed proof in the previous work by the first author <cit.>. This further implies that the number of necessary sensors for observability equals the number of parent SCCs in structurally cyclic systems. In such a scenario, it is required to assign a sensor to each parent SCC in order to satisfy the observability condition. For more details on SCC classification and equivalent sets for observability refer to <cit.>. § COST OPTIMIZATION FORMULATION In sensor-based applications every state measurement imposes a certain cost. The cost may be due to, for example, maintenance and embedding expenses for sensor placement, energy consumption by sensors, sensor range and calibration, and even environmental conditions such as humidity and temperature. In this section, we provide a novel formulation of the minimal-cost sensor selection problem accounting for different sensing costs to measure different states. Contrary to <cit.>, the final formulation in this section has a polynomial-order solution, as discussed in Section <ref>. Assume a group of sensors and a cost c_ij for every sensor y_i, i ∈{1,…,m}, measuring state x_j, j ∈{1,…,n}. Given the cost matrix c, the sensor selection cost optimization problem is to minimize the sensing cost for tracking the global state of the dynamical system (<ref>) (or the discrete-time system (<ref>)). Monitoring the global state requires observability conditions, leading to the following formulation: min_ℋ ∑_i=1^m∑_j=1^n (c_ijℋ_ij) s.t. (A,H)-observability, ℋ_ij∈{0,1}, where A and H are, respectively, the system and measurement matrix, and ℋ represents the 0-1 structure of H. In this problem formulation, ℋ is the 0-1 pattern of H, i.e. a nonzero element ℋ_ij represents the measurement of state x_j by sensor y_i. First, following the discussions in Section <ref>, the observability condition is relaxed to structural observability: min_ℋ ∑_i=1^m∑_j=1^n (c_ijℋ_ij) s.t. (𝒜,ℋ)-observability, ℋ_ij∈{0,1}. Notice that in this formulation (𝒜,ℋ)-observability implies the structural observability of the pair (A,H). Primarily assume that the number of sensors equals the number of necessary measurements for structural observability. In the control and estimation literature <cit.>, this is addressed as finding the minimal number of sensors/actuators. This consideration is in order to minimize the cost. Notice that extra sensors impose extra sensing cost, or they take no measurements and play no role in estimation. Therefore, following the assumptions in Section <ref>, the number of sensors, at first, is considered to be equal to the number of necessary measurements for observability (i.e. the number of parent SCCs). This gives the following reformulation of the original problem. Considering the minimum number of sensors for observability, the sensor selection cost optimization problem is in the following form: min_ℋ ∑_i=1^m∑_j=1^n (c_ijℋ_ij) s.t.
(𝒜,ℋ)-observability, ∑_i=1^mℋ_ij≤ 1, ∑_j=1^nℋ_ij = 1, ℋ_ij∈{0,1}. The added conditions do not change the problem. The constraint ∑_i=1^mℋ_ij≤ 1 implies that every state is measured by at most one sensor, and ∑_j=1^nℋ_ij = 1 implies that every sensor is responsible for taking a state measurement. Notice that, in case of having, say, N sensors, more than the necessary number of sensors for observability, the latter condition changes to ∑_j=1^nℋ_ij≤ 1 to account for the fact that some sensors are not assigned, i.e. they take no (necessary) measurement. Next, we relax the observability condition following the results of Section <ref> for structurally cyclic systems. Revisiting the fact that parent SCCs are separate components from Lemma <ref>, the problem can be stated as assigning a group of sensors to a group of parent SCCs. For this formulation, a new cost matrix 𝒞_m × m is developed. Denote by 𝒞_ij the cost of assigning a parent set, SCC_j, to sensor y_i. Define this cost as the minimum sensing cost of the states in parent SCC_j: 𝒞_ij= min{c_il}, x_l ∈SCC_j, i,j ∈{1,…,m}. This formulation transforms the matrix c_m × n to the matrix 𝒞_m × m. This transfers the sensor-state cost matrix to a lower-dimension cost matrix of sensors and parent SCCs. Further, introduce a new variable 𝒵∼{0,1}^m × m as a structured matrix capturing the assignment of sensors to parent SCCs. Entry 𝒵_ij implies that the sensor indexed i takes a state measurement of the SCC indexed j, and consequently SCC_j →y_i. Recalling that sensing all parent SCCs guarantees observability (see Lemma <ref>), the problem formulation can be modified accordingly in a new setup as follows. For structurally cyclic systems, having a set of m sensors to be assigned to m parent SCCs, the sensor selection cost optimization is given by: min_𝒵 ∑_i=1^m∑_j=1^m (𝒞_ij𝒵_ij) s.t. ∑_j=1^m𝒵_ij = 1, ∑_i=1^m𝒵_ij = 1, 𝒵_ij∈{0,1}. In this formulation, the new constraint ∑_i=1^m𝒵_ij = 1 is set to guarantee sensing of all parent SCCs, as a necessary condition for observability. The formulation in (<ref>) is well-known in combinatorial programming and optimization. It is referred to as the Linear Sum Assignment Problem (LSAP) <cit.>. It is noteworthy that the three statements in this section represent the same problem, and the differences stem from mathematical relaxations and the observability consideration. The above formulation is a one-to-one assignment of sensors and parent SCCs. By changing the first constraint to ∑_j=1^m𝒵_ij≥ 1 we allow more than one parent SCC to be assigned to each sensor. This is the generalization of the primary assumption of assigning only one state to each sensor. For the second constraint in (<ref>), considering ∑_i=1^m𝒵_ij≥ 1 implies that more than one sensor may be assigned to a parent SCC. This adds redundancy in sensor selection and consequently increases the cost, and thus should be avoided. On the other hand, ∑_i=1^m𝒵_ij≤ 1 violates the necessary condition for observability, as some of the SCCs may not be assigned and tracked by sensors.
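To make the reduction concrete, the following Python sketch (illustrative only, not taken from the paper; function and variable names are ours) builds the m × m cost matrix 𝒞 of the above formulation from the m × n sensor-state cost matrix c and the list of parent SCCs, taking the minimum sensing cost inside each parent component.

import numpy as np

def reduce_cost_matrix(c, parent_sccs):
    """c: (m, n) sensor-state costs; parent_sccs: list of m lists of state indices.

    Returns the (m, m) matrix C with C[i, j] = min_{x_l in SCC_j} c[i, l].
    """
    m = c.shape[0]
    assert len(parent_sccs) == m, "one-to-one case: #sensors == #parent SCCs"
    C = np.empty((m, m))
    for j, scc in enumerate(parent_sccs):
        C[:, j] = c[:, scc].min(axis=1)   # cheapest state of SCC_j for every sensor
    return C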
§ LINEAR SUM ASSIGNMENT PROBLEM (LSAP) The novel formulation of the sensor selection problem proposed in Problem Formulation <ref> is known to be a classical optimization problem referred to as the assignment problem. The assignment problem is widely studied, as many problems, e.g. in the network flow theory literature, are reduced to it. The problem deals with matching two sets of elements in order to optimize an objective function. The Linear Sum Assignment Problem (LSAP) is the classical problem of assigning m tasks to m agents (or matching m grooms with m brides, m machines/companies to m jobs, etc.) such that the matching cost is optimized <cit.>. The LSAP is mathematically similar to the weighted matching problem in bipartite graphs. This problem is also called one-to-one assignment, as compared to the one-to-many assignment problem in which one agent is potentially assigned to more than one task. There have been many solutions to this problem, from the original non-polynomial solution to later polynomial-time primal-dual solutions, including the well-known Hungarian method. The Hungarian algorithm, proposed by Kuhn <cit.> and later improved by Munkres, is of complexity order 𝒪(m^4), with m the number of tasks/agents. The algorithm was later improved by <cit.> to the complexity order of 𝒪(m^3). The algorithm is given in Algorithm <ref>. Other than these original solutions, recently new linear programming methods to solve the classical one-to-one LSAP and variations of this original setting have been discussed. To name a few, a distributed assignment algorithm based on a game-theoretic approach is proposed in <cit.>. Sensors/agents are assigned to tasks relying only on local information of the cost matrix. The complexity of the algorithm is 𝒪(m^3) in the worst-case scenario. In <cit.> a new algorithm is proposed whose average complexity matches Edmonds' Hungarian method at large scale. All these solutions can be applied to solve Problem Formulation <ref> and its variants, for example even when the sensing costs are changing. However, in terms of performance the Edmonds' Hungarian algorithm <cit.> is more practical and is used in programming software like MATLAB. The algorithm by <cit.> is practical in a distributed setting, while the algorithm by <cit.> is as practical as <cit.> only in large-scale applications. Note that the focus of this paper is on the polynomial complexity of such algorithms, which makes them practical in large-scale applications. Therefore, although other non-polynomial solutions to the LSAP may exist, they are not of interest in large-scale sensor selection optimization. Note that in the LSAP the cost matrix has to be a complete m by m matrix. However, in practical applications some states may not be measured by some sensors (not realizable by some sensors). This may, for example, be caused by a mismatch between the range/calibration of the sensor and what is required for the state measurement. In the sensor-state cost matrix, c, this simply implies that some entries are not defined. For these unmeasurable states the cost is infinite; in application, a large enough cost (pseudo-cost c̃_ij) can be given. By introducing c̃_ij and having a complete cost matrix, the LSAP problem can be solved using any one of the polynomial methods mentioned in this section. Notice that if the optimal cost from the LSAP in (<ref>) is greater than the pseudo-cost c̃_ij, the sensor selection has no feasible solution. A possible explanation is that at least one parent SCC is not realizable by any sensor, implying the assignment of a pseudo-cost by the LSAP. In case a feasible solution exists, no non-realizable state is assigned and the LSAP gives the optimal feasible solution in polynomial time.
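As a minimal sketch of this step (not the authors' implementation), one can solve the resulting LSAP with an off-the-shelf Hungarian-type solver, using a large pseudo-cost for non-realizable sensor/parent-SCC pairs and checking feasibility afterwards; availability of scipy's linear_sum_assignment is assumed, and the function name is ours.

import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_sensors(C, realizable, pseudo_cost=None):
    """C: (m, m) sensor/parent-SCC costs; realizable: boolean mask of the same shape."""
    m = C.shape[0]
    if pseudo_cost is None:
        pseudo_cost = C[realizable].max() * m * m + 1.0   # larger than any feasible total cost
    C_full = np.where(realizable, C, pseudo_cost)
    rows, cols = linear_sum_assignment(C_full)            # Hungarian-type solver, O(m^3)
    total = C_full[rows, cols].sum()
    feasible = bool(realizable[rows, cols].all())         # no pseudo-cost pair was used
    return list(zip(rows.tolist(), cols.tolist())), total, feasible

The returned pairs give, for each sensor, the parent SCC it should monitor; the feasibility flag is False exactly when some parent SCC could only be covered through a pseudo-cost entry.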
§ REMARKS This section provides some remarks to further illustrate the results, motivation, and application of the polynomial-order sensor selection solution proposed in this work. The main motivation of this paper is to find a polynomial-order solution to optimize the sensor selection problem for cyclic systems. Notice that in general, as mentioned in the introduction and literature review, the problem is NP-hard to solve; see for example <cit.> and references therein. However, we showed that if the system is cyclic there exists a polynomial-order solution for sensor selection optimization. Note that this is significant in large-scale system monitoring, as polynomial-order algorithms are practical in large-scale applications because their running time is upper-bounded by a polynomial expression in the size of the input to the algorithm. Examples of such large-scale cyclic systems are given in the introduction. It should be mentioned that LSI dynamics are practically used in the state estimation and complex network literature, see <cit.> and references therein. The motivation behind structured system theory is that this approach holds for systems with time-varying parameters while the system structure is fixed. This is significant in system theory, as in many applications the system non-zero parameters change in time while the zero-nonzero pattern of the system matrix is time-invariant. Indeed, this structural analysis deals with system properties (including observability and system rank) that do not depend on the numerical values of the parameters but only on the underlying structure (zeros and non-zeros) of the system <cit.>. It is known that if a structural property holds for one admissible choice of non-zero elements/parameters it is true for almost all choices of non-zero elements/parameters and, therefore, is called a generic property <cit.>. Another motivation is the linearization of nonlinear systems, where the nonlinear model is linearized over a continuum of operating points, see <cit.>. In this case the structure of the Jacobian matrix is fixed while the matrix elements change based on the linearization point, therefore implying the LSI system model. In this direction, the observability/controllability of the LSI model implies the observability/controllability of the nonlinear model <cit.>, and therefore the results of the LSI approach lead to conclusions on the nonlinear model. Based on the mentioned features of the LSI model in Remark <ref>, structural observability almost always implies algebraic observability; therefore the LSI relaxation in Problem Formulation 2 almost always holds. Further, for structurally cyclic systems the problem can be exactly framed as an LSAP, and therefore the relaxation in Problem Formulation 3 and the polynomial-order solution are exact for cyclic systems. Note that the SCC decomposition is unique <cit.>, and therefore the cost matrix C and the formulation in (<ref>) are uniquely defined. While this work focuses on sensor selection and observability, the results can be easily extended to the dual problem of controllability and particularly input/actuator selection. In this case the problem is to choose among the possible inputs to direct/control the dynamical system to reach the desired state with optimal cost. Note that the only mathematical difference is that the constraint in Problem Formulations 1 and 2 changes to (A,H)-controllability and the same graph-theoretic relaxation holds.
Because of duality, the problem changes to assigning the inputs/actuators optimally to the child SCCs, resulting in the same formulation as in Problem Formulation 3, where the solution is given by the Hungarian algorithm. In fact, in the context of control of networked systems, this problem is also known as the so-called leader selection. In this problem, using the LSI model, the idea is to determine the control leaders in a structured multi-agent system. In this case the cost, for example, may represent the energy consumption by agents. For more information on this subject we refer interested readers to <cit.>. § ILLUSTRATIVE EXAMPLES This section provides academic examples to illustrate the results of the previous sections. Example 1: Consider a dynamical system with the associated digraph given in Fig.<ref>. Every node represents a state of the system and every edge represents the dynamic interaction of two states. For example, an edge from x_3 to x_1 and the self-loop on x_1 imply ẋ_1 = a_13x_3 + a_11x_1. Assuming a nonlinear dynamic system, the same link represents a possible nonlinear interaction function ẋ_1 = f_1(x_3,x_1), where the Jacobian linearization is of the form ẋ_1 = ∂ f_1/∂ x_3x_3 + ∂ f_1/∂ x_1x_1.[Notice that having a self-cycle at every node implies that the diagonal entries of the Jacobian matrix, ∂ f_i/∂ x_i, are non-zero and the Jacobian is structurally full rank.] Such terminology holds for all state nodes and edges in the system digraph and relates the system digraph to the differential equation governing the dynamic phenomena. A group of sensors is needed to track the 15 states of this dynamic system. If a state is measurable by a sensor, the measurement is associated with a cost c_ij. However, not every state is measurable by every sensor. In this example, we assume the pseudo-cost c̃_ij = max(c_ij)*m*n. This prevents the assignment algorithm from assigning these non-realizable sensor-state pairs. Among the realizable states, some states are not necessary to be measured. This is determined based on the SCC classification discussed in Section <ref>. In the particular example of Fig. <ref>, the inner component and the outer components have no outgoing edges to other SCCs; therefore, {x_1,x_2,x_3}, {x_9,x_10}, {x_11,x_12,x_13}, and {x_14,x_15} are parent SCCs. The other components, {x_4,x_5,x_6} and {x_7,x_8}, are child SCCs. For observability, each parent SCC needs to be tracked by at least one sensor. The selection of which state to measure in each parent SCC is cost-based; in every parent SCC, each sensor measures the state with minimum cost. This puts the problem in the form given in Fig.<ref> and in the form of (<ref>). Then, the Hungarian method in Section <ref> is applied to solve this LSAP. For the numerical simulation, in this system graph example we consider uniformly random costs c_ij in the range (0,10). The number of sensors equals m=4, which is the number of parent SCCs. The non-realizable states are defined randomly with probability 50%, i.e. almost half of the states are not measurable by sensors [We should mention that this is only for the sake of simulation to check the algorithm. In real applications, if the measurable states are not observable, no sensor selection optimization algorithm can provide an observable estimation of the system; therefore in real applications it is usual to assume that at least one observable solution exists for the problem, otherwise no sensor selection and estimation scheme works.]. In the assignment algorithm, the pseudo-cost of a non-realizable state is set to c̃_ij=max{c_ij}*m*n, which is certainly more than ∑_i=1^m∑_j=1^n c_ij.
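A compact Python sketch of this kind of numerical experiment is given below (illustrative only; the seed and costs are arbitrary, the reduced 4 × 4 cost matrix is generated directly rather than from the digraph of the figure, and scipy is assumed to be available). It also performs the naive enumeration of all one-to-one assignments purely as a correctness check of the Hungarian solution.

import numpy as np
from itertools import permutations
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
m = 4                                         # number of parent SCCs = number of sensors
C = rng.uniform(0.0, 10.0, size=(m, m))       # per-parent-SCC sensing costs (already reduced)
realizable = rng.random((m, m)) < 0.5         # roughly half of the pairs are non-realizable
C_full = np.where(realizable, C, C.max() * m * m)   # pseudo-cost for non-realizable pairs

# Hungarian solution (polynomial time).
r, c = linear_sum_assignment(C_full)
opt = C_full[r, c].sum()

# Naive O(m!) enumeration of all one-to-one assignments, for verification only.
best = min(sum(C_full[i, p[i]] for i in range(m)) for p in permutations(range(m)))
assert np.isclose(opt, best)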
Fig. <ref> shows the cost of all realizable observable and non-observable assignments. In this figure, for the sake of clarity the indexes are sorted in ascending and descending cost order, respectively, for the observable and non-observable assignments. Among the realizable state-sensor pairs, if the selected sensors do not measure one state in each parent SCC, the sensor selection is not observable (violating Assumption (i) in Section <ref>). Among the observable selections, the optimal sensor selection has the minimum cost of 10.88, which matches the output of the LSAP using the Hungarian method. Note that the naive solution in Fig. <ref> has complexity O(m!), and is only provided for clarification and for checking the results of the LSAP solution. Example 2: In order to show the advantage of using the proposed approach over the existing methods we provide another example. This example, shown in Fig.<ref>, is similar to the example given in <cit.>, where the graph represents a social dynamic system. In such social digraphs each node represents an individual and the links represent social interaction and opinion dynamics among the individuals <cit.>. According to <cit.>, the social system is generally modeled as an LSI system where the social interactions (as the structure) are fixed while the social influence of individuals on each other changes in time. This is a good example stating the motivation behind considering the LSI model in this work. We intentionally presented this example to compare our results with <cit.>. As <cit.> claims, for such an example there is no polynomial-order solution to the problem, while here we present a sensor selection algorithm with polynomial complexity of O(m^3). Similar to Example 1, a group of sensors (referred to as information gatherers in social systems <cit.>) is required to monitor the 20 states of the social system. Notice that, having a cycle family covering all states, the system is structurally full rank. The measurement of each state by each sensor is associated with a cost c_ij, and if a state is not measurable the cost is assigned as c̃_ij=max{c_ij}*m*n. Applying the DFS algorithm one can find the SCCs and the parent/child classification in O(n^2), as shown in Fig.<ref>. Then, the assignment problem in Fig.<ref> and in the form of (<ref>) can be solved using the Hungarian method in O(m^3). Again we consider uniformly random costs c_ij in the range (0,10) for the numerical simulation. Since there are 5 parent SCCs in this social digraph we need m=5 social sensors. Among the sensor-state pairs, the non-realizable states are defined randomly with probability 30%, with pseudo-cost c̃_ij=max{c_ij}*m*n. In Fig. <ref> the costs of all realizable observable and non-observable assignments are shown, where among the observable cases the optimal sensor selection has the minimum cost of 7.048. As expected, this value matches the output of the proposed solution, i.e. the Hungarian algorithm for the LSAP in equation (<ref>). § CONCLUSION There exist many efficient algorithms to check the matching properties and structural rank of the system digraph, namely the Hopcroft-Karp algorithm <cit.> or the Dulmage-Mendelsohn decomposition <cit.> of O(n^2.5) complexity.
Moreover, the SCC decomposition and the partial order of SCCs are efficiently computed in running time O(n^2) by using the DFS algorithm <cit.>, or the Kosaraju-Sharir algorithm <cit.>. As mentioned earlier in Section <ref>, the LSAP solution is of complexity O(m^3). This gives the total complexity of O(n^2.5+m^3) to solve the sensor coverage cost optimization problem. In dense graphs the nodes typically outnumber the parent SCCs; assuming m ≪ n, the complexity of the algorithm is reduced to O(n^2.5). In case it is known that the system is structurally cyclic, e.g. for self-damped systems, the running time of the solution is O(n^2). In practical applications using MATLAB, the sprank function checks the structural rank of the system (i.e. the size of the maximum matching in the system digraph). The system is structurally cyclic if it equals n, the size of the system matrix. To find the partial order of SCCs, the straightforward way (but not as efficient) is to use the dmperm function. This function takes the system matrix A and returns the permutation vectors that transfer it to upper block triangular form and the boundary vectors for SCC classification. An implementation of Munkres's variant of the Hungarian algorithm solves the assignment problem; it takes the cost matrix and the cost of unassigned states/sensors as input, and returns the indexes of assigned and unassigned states/sensors as output. As a final comment, recall that we consider sensor cost optimization only for structurally cyclic systems. For systems which are not structurally cyclic, besides parent SCCs, another type of observationally equivalent set emerges, known as a contraction <cit.>. The number of contractions equals the number of unmatched nodes in the system digraph, which in turn equals the system rank deficiency. Contractions and parent SCCs determine the number of necessary states for system observability. The key point is that, unlike parent SCCs, which are separate sets, contractions may share state nodes with each other and with parent SCCs <cit.>. This implies that the problem cannot in general be reformulated as an LSAP, and Problem Formulation 3 is only valid and exact for structurally cyclic systems. In general systems, particularly in structurally rank-deficient systems, a combination of assignment and greedy algorithms may need to be applied, which is the direction of future research. § ACKNOWLEDGEMENT The first author would like to thank Professor Usman Khan from Tufts University for his helpful suggestions and feedback on this paper. [Arcak & Sontag(2006)]arcak2006diagonal Arcak, M., & Sontag, E. D. (2006). Diagonal stability of a class of cyclic systems and its connection with the secant criterion. Automatica, 42(9), 1531-1537. [Battistelli et al.(2012)]battistelli_cdc Battistelli, G., Chisci, L., Mugnai, G., Farina, A., & Graziano, A. (2012, December). Consensus-based algorithms for distributed filtering. In 51st IEEE Conference on Decision and Control (pp. 794-799). [Bay (1999)]bay Bay, J. (1999). Fundamentals of linear state space systems. McGraw-Hill. [Bertsekas (1981)]bertsekas1981assign Bertsekas, D. P. (1981). A new algorithm for the assignment problem. Mathematical Programming, 21(1), 152-171. [Clark et al. (2014)]clark2014leader Clark, A., Bushnell, L., & Poovendran, R. (2014). A supermodular optimization framework for leader selection under link noise in linear multi-agent systems. IEEE Transactions on Automatic Control, 59(2), 283-296. [Chapman & Mesbahi (2013)]acc13_mesbahi Chapman, A., & Mesbahi, M. (2013, June).
On strong structural controllability of networked systems: a constrained matching approach. In American Control Conference (pp. 6126-6131).[Commault& Dion(2015)]commault2015single Commault, C., & Dion, J. M. (2015). The single-input Minimal Controllability Problem for structured systems. Systems & Control Letters, 80, 50-55.[Cormen et al. (2001)]algorithm Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2001). Introduction to algorithms. Cambridge: MIT press.[Davison & Wang (1973)]davison1973LSI Davison, E., & Wang, S. (1973). Properties of linear time-invariant multivariable systems subject to arbitrary output and state feedback. IEEE Transactions on Automatic Control, 18(1), 24-32.[Davison & Wang (1974)]davison1974LSI Davison, E. J., & Wang, S. H. (1974). Properties and calculation of transmission zeros of linear multivariable systems. Automatica, 10(6), 643-658.[Dion et al.(2003)]woude:03 Dion, J. M., Commault, C., & Van Der Woude, J. (2003). Generic properties and control of linear structured systems: a survey. Automatica, 39(7), 1125-1144.[Doostmohammadian & Khan (2011)]asilomar11 Doostmohammadian, M., & Khan, U. A. (2011, November). Communication strategies to ensure generic networked observability in multi-agent systems. In45th Asilomar Conference on Signals, Systems and Computers(pp. 1865-1868). [Doostmohammadian & Khan (2014a)]jstsp14 Doostmohammadian, M., & Khan, U. A. (2014). Graph-theoretic distributed inference in social networks. IEEE Journal of Selected Topics in Signal Processing, 8(4), 613-623. [Doostmohammadian & Khan (2014b)]asilomar14 Doostmohammadian, M., & Khan, U. A. (2014, November). Vulnerability of CPS inference to DoS attacks. In 48th Asilomar Conference on Signals, Systems and Computers (pp. 2015-2018).[Dulmage & Mendelsohn (1958)]dulmage58 Dulmage, A. L., & Mendelsohn, N. S. (1958). Coverings of bipartite graphs. Canadian Journal of Mathematics, 10(4), 516-534.[Edmonds & Karp (1972)]edmondsHungarian Edmonds, J., & Karp, R. M. (1972). Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM , 19(2), 248-264.[Fitch & Leonard (2013)]fitch2013leader Fitch, K., Leonard, N. E. (2013, December). Information centrality and optimal leader selection in noisy networks. InIEEE 52nd Annual Conference on Decision and Control , (pp. 7510-7515).[Friedkin (2006)]FriedkinSocial Friedkin, N. E. (2006). A structural theory of social influence. Cambridge University Press.[Harary (1962)]harary Harary, F. (1962). The determinant of the adjacency matrix of a graph. SIAM Review, 4(3), 202-210.[Hautus (1969)]hautus Hautus, M. (1969). Controllability and observability conditions of linear autonomous systems. Nederlandse Akademie van Wetenschappen. Serie A(72), 443-448.[Hopcroft & Karp (1973)]hopcraft Hopcroft, J. E., & Karp, R. M. (1973). An n^5/2 algorithm for maximum matchings in bipartite graphs. SIAM Journal on computing, 2(4), 225-231.[Hopcroft (1983)]algorithm2 Hopcroft, J. E. (1983). Data structures and algorithms. Boston, MA, Addison-Wesley. [Hug et al.(2015)]kar2015consensus+grid Hug, G., Kar, S., & Wu, C. (2015). Consensus+ innovations approach for distributed multiagent coordination in a microgrid. IEEE Transactions on Smart Grid, 6(4), 1893-1903.[Ilic et al.(2010)]usman_smc:08 Ilic, M. D., Xie, L., Khan, U. A., & Moura, J. M. (2010). Modeling of future cyber–physical energy systems for distributed sensing and control. 
IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 40(4), 825-838.[Ji & Egerstedt (2007)]egerstedt2007LSI Ji, M., & Egerstedt, M. (2007, December). Observability and estimation in distributed sensor networks. In 46th IEEE Conference on Decision and Control, (pp. 4221-4226).[Kuhn (1955)]kuhnHungarian Kuhn, H. W. (1955). The Hungarian method for the assignment problem. Naval research logistics quarterly, 2(1‐2), 83-97.[Lin (1974)]linLin, C. T. (1974). Structural controllability. IEEE Transactions on Automatic Control, 19(3), 201-208.[Lin et al.(2014)]lin2014leaderLin, F., Fardad, M.,Jovanovic, M. R. (2014). Algorithms for leader selection in stochastically forced consensus networks. IEEE Transactions on Automatic Control, 59(7), 1789-1802.[Lin et al.(2011)]lin2011leader Lin, F., Fardad, M., Jovanovic, M. R. (2011, December). Algorithms for leader selection in large dynamical networks: Noise-corrupted leaders. In 50th IEEE Conference on Decision and Control and European Control Conference, (pp. 2932-2937).[Liu et al.(2011)]Liu-nature Liu, Y. Y., Slotine, J. J., & Barabasi, A. L. (2011). Controllability of complex networks. Nature, 473(7346), 167-173.[Liu et al.(2013)]liu-pnas Liu, Y. Y., Slotine, J. J., & Barabási, A. L. (2013). Observability of complex systems. Proceedings of the National Academy of Sciences, 110(7), 2460-2465.[Joshi & Boyd (2009)]boyd2009sensor Joshi, S., & Boyd, S. (2009). Sensor selection via convex optimization. IEEE Transactions on Signal Processing, 57(2), 451-462. [May(1972)]may1972ecology May, R. M. (1972). Will a large complex system be stable?. Nature, 238, 413-414.[May(1973)]may2001book May, R. M. (1973). Stability and complexity in model ecosystems. Princeton University Press.[Murota (2000)]murota Murota, K. (2000). Matrices and matroids for systems analysis. Springer.[Nowzari et al.(2016)]nowzari2016epidemic Nowzari, C., Preciado, V. M., & Pappas, G. J. (2016). Analysis and control of epidemics: A survey of spreading processes on complex networks. IEEE Control Systems, 36(1), 26-46.[Pentico (2007)]assignmentSurvey Pentico, D. W. (2007). Assignment problems: A golden anniversary survey. European Journal of Operational Research, 176(2), 774-793.[Pequito et al.(2014)]pequito_gsip Pequito, S., Kar, S.,& Aguiar, A. P. (2014, December). Minimum number of information gatherers to ensure full observability of a dynamic social network: A structural systems approach. In IEEE Global Conference on Signal and Information Processing (GlobalSIP), (pp. 750-753). [Reinschke (1988)]rein_book Reinschke, K. J. (1988). Multivariable control, a graph theoretic approach. Berlin: Springer.[Sayyaadi & Moarref(2011)]MiadCons Sayyaadi, H., & Moarref, M. (2011). A distributed algorithm for proportional task allocation in networks of mobile agents. IEEE Transactions on Automatic Control, 56(2), 405-410.[Slotine & Li (1991)]nonlin Slotine, J. J. E., & Li, W. (1991). Applied nonlinear control. Englewood Cliffs, NJ: prentice-Hall.[Tzoumas et al.(2015)]jad2015minimal Tzoumas, V., Rahimian, M. A., Pappas, G. J., & Jadbabaie, A. (2015, July). Minimal actuator placement with optimal control constraints. In American Control Conference (pp. 2081-2086). [Van der Woude (1999)]woude-rank Van der Woude, J. W. (1991). A graph-theoretic characterization for the rank of the transfer matrix of a structured system. Mathematics of Control, Signals and Systems, 4(1), 33-40.[Wang& Elia(2010)]wang2010control Wang, J., & Elia, N. (2010, September). 
Control approach to distributed optimization. In 48th Annual Allerton Conference on Communication, Control, and Computing , 2010(pp. 557-561). [Zavlanos et al.(2008)]zavlanos2008distributed Zavlanos, M. M., Spesivtsev, L., & Pappas, G. J. (2008, December). A distributed auction algorithm for the assignment problem. In 47th IEEE Conference on Decision and Control.(pp. 1212-1217). [Zejnilovic et al.(2013)]sinopoli2013network_obsrv Zejnilovic, S., Gomes, J., & Sinopoli, B. (2013, October). Network observability and localization of the source of diffusion based on a subset of nodes. In 51st Annual Allerton Conference on Communication, Control, and Computing.(pp. 847-852).[Zhao et al.(2015)]slotine2015intrinsic Zhao, C., Wang, W. X., Liu, Y. Y., & Slotine, J. J. (2015). Intrinsic dynamics induce global symmetry in network controllability. Scientific reports, 5. | http://arxiv.org/abs/1705.09454v1 | {
"authors": [
"Mohammadreza Doostmohammadian",
"Houman Zarrabi",
"Hamid R. Rabiee"
],
"categories": [
"cs.SY"
],
"primary_category": "cs.SY",
"published": "20170526065345",
"title": "Sensor Selection Cost Optimization for Tracking Structurally Cyclic Systems: a P-Order Solution"
} |
The hard Pomeron impact on the high-energy elastic scattering of nucleons A.A. GodizovE-mail: [email protected] A.A. Logunov Institute for High Energy Physics, NRC “Kurchatov Institute”, 142281 Protvino, Russia ======================================================================================================================================================== The role of the hard Pomeron (HP) exchanges in the high-energy diffractive interaction of nucleons is explored. It is demonstrated that the HP subdominance at available energies and low transferred momenta is due to the extremely low slope of its Regge trajectory. § INTRODUCTION In the framework of Regge phenomenology <cit.>, the observed growth of the pp total and elastic cross-sections at collision energies higher than 20 GeV <cit.> is explained in terms of the soft Pomeron exchanges <cit.>, where the soft Pomeron (SP) is a supercritical Reggeon with the intercept of its Regge trajectory α_ SP(0) ≈ 1.1. By full analogy, the available data on the proton unpolarized structure function F^p_2(x,Q^2) <cit.> at high values of the incoming photon virtuality Q^2 and low values of the Bjorken scaling variable x can be described in terms of another Pomeron (called “hard”) with the intercept α_ HP(0) = 1.32± 0.03 <cit.> or even higher (see, for instance, <cit.>, <cit.>, or <cit.>). In spite of the fact that α_ HP(0)>α_ SP(0), the HP impact on the nucleon-nucleon diffractive scattering seems to be insignificant <cit.>. An easy way to explain this elusiveness of the HP in high-energy soft interactions is just to presume the suppression of its coupling to hadrons in the nonperturbative regime, which automatically leads to the SP dominance in the diffractive interaction in the absence of a hard scale. However, such a physical pattern seems somewhat exotic, since both Pomerons are apparently composed of gluon matter.[The leading meson Regge trajectories of the Quark Model have intercepts considerably below unity. A detailed discussion of the gluon nature of supercritical Reggeons can be found in the classical papers <cit.>.] Hence, no evident argument exists why their couplings to the proton at low transferred momenta should differ greatly, by an order of magnitude, while the difference between their intercepts is large enough to expect the HP dominance or, at least, significance at the LHC energies. Below we address the problem of the HP contribution to the pp high-energy elastic scattering to provide a more natural interpretation of the HP “invisibility” in the soft interaction of hadrons than the above-mentioned presumption about the coupling suppression. § THE HP REGGE TRAJECTORY First of all, let us pay attention to the behavior of the HP Regge trajectory in the asymptotic region t→ -∞. Presumably, the HP is the leading Reggeon of the BFKL series <cit.>: α^(n_r)_ BFKL(t) = 1+(12 ln 2/π)α_s(√(-t))[1-α_s^2/3(√(-t))(7ζ(3)/(2 ln 2))^1/3((3/4+n_r)/(11-(2/3) N_f))^2/3+...], where α_s(μ) is the QCD running coupling, N_f is the number of quark flavors, and n_r is the radial quantum number. If t=-M_Z^2=-(91.2 GeV)^2, α_s(M_Z) = 0.118, n_r=0, and N_f= 5 or 6, then we obtain α_ HP(-M_Z^2)=α^(0)_ BFKL(-M_Z^2)≈ 1.28. Note that the second term in the brackets on the right-hand side of (<ref>) is ∼ 0.1 for the chosen values of the parameters.
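A quick numerical cross-check of these two statements (an illustrative Python sketch, not taken from the paper; it simply plugs the quoted values α_s(M_Z)=0.118, n_r=0, N_f=5 into the expression above as we read it) is:

import math

alpha_s, n_r, N_f = 0.118, 0, 5
lead = 12.0 * math.log(2.0) / math.pi * alpha_s                  # leading BFKL term
corr = alpha_s ** (2.0 / 3.0) \
    * (7.0 * 1.2020569 / (2.0 * math.log(2.0))) ** (1.0 / 3.0) \
    * ((0.75 + n_r) / (11.0 - 2.0 * N_f / 3.0)) ** (2.0 / 3.0)   # relative correction, zeta(3) ~ 1.202
alpha_HP = 1.0 + lead * (1.0 - corr)
print(round(corr, 2), round(alpha_HP, 2))   # correction ~ 0.1, intercept ~ 1.28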
Thus, the estimation (<ref>) is quite justified. Comparing the values of α_ HP(t) at t=0 and t=-M_Z^2, as well as the quantities α'_ HP(-M_Z^2)≈ 2· 10^-6 GeV^-2 and (α_ HP(0)-α_ HP(-M_Z^2))/M_Z^2≈ 5· 10^-6 GeV^-2, one might come to the conclusion that both the functions α_ HP(t) and α'_ HP(t) evolve very slowly in the interval -M_Z^2<t<0. Moreover, even if α'_ HP(t) is essentially nonlinear in the considered range and α'_ HP(0) is, say, 100 times higher than α'_ HP(-M_Z^2), it is quite reasonable to consider α_ HP(t)≈α_ HP(0) at -3 GeV^2<t<0. Such a weak t-dependence is a very important feature of the HP Regge trajectory, which allows unambiguous conclusions to be drawn on the basis of the further analysis. § THE HP EXCHANGE CONTRIBUTION INTO THE EIKONAL The soft-Pomeron-exchange eikonal approximation has the following structure <cit.>: dσ/dt = |T(s,t)|^2/(16π s^2), T(s,t) = 4π s∫_0^∞db^2 J_0(b√(-t)) (e^2iδ(s,b)-1)/(2i), δ(s,b) = 1/(16π s)∫_0^∞d(-t) J_0(b√(-t)) δ_ SP(s,t) = 1/(16π s)∫_0^∞d(-t) J_0(b√(-t)) g^2_ SP(t)(i+ tan(π(α_ SP(t)-1)/2))πα'_ SP(t)(s/2s_0)^α_ SP(t), where s and t are the Mandelstam variables, b is the impact parameter, s_0 = 1 GeV^2, α_ SP(t) is the Regge trajectory of the soft Pomeron, and g_ SP(t) is the SP coupling to the proton. At t<0, α_ SP(t) and g_ SP(t) can be approximated by the simple test functions α_ SP(t) = 1+(α_ SP(0)-1)/(1-t/τ_a), g_ SP(t)=g_ SP(0)/(1-a_gt)^2, where the free parameters take on the values presented in Table <ref>. Inclusion of the HP exchanges into consideration requires the replacement δ_ SP(s,t)→δ_ SP(s,t)+δ_ HP(s,t), where δ_ HP(s,t)=(i+ tan(π(α_ HP(0)-1)/2))β_ HP(t)(s/2s_0)^α_ HP(0). Choosing α_ HP(0)=1.32 and β_ HP(t)=β_ HP(0) e^b t, where β_ HP(0)=0.08 and b=1.5 GeV^-2, we come to the pattern presented in Fig. <ref>. The description quality is satisfactory: for example, Δχ^2≈ 12 over 19 points of the data set <cit.> and Δχ^2≈ 215 over 205 points of the data set <cit.>. The description of other data considered in <cit.> remains satisfactory as well. As we see, the account of the HP exchanges improves the description of dσ/dt at √(s)= 7 TeV without any refitting of the SP parameters. Regarding the available data at lower energies, though, the HP impact can be ignored. § DISCUSSION AND CONCLUSIONS The HP subdominance at accessible energies is, certainly, determined by the smallness of its Regge residue: β_ HP(t) = g^2_ HP(t) πα'_ HP(t). Assuming that 4<α'_ HP(0)/α'_ HP(-M_Z^2)<100, we obtain g_ HP(0)∼ g_ SP(0), which is quite natural in view of the presumed glueball nature of both Pomerons. The smallness of β_ HP(t) at low negative t is, thus, related to the extremely weak t-behavior of α_ HP(t). The low t-slope of α_ HP(t) may take place in the region t>0 as well. It would imply the existence of a series of ultraheavy resonances lying on the HP Regge trajectory. Due to their spin properties, such an ultraheaviness (tens or hundreds of GeV), accompanied by a strong enough coupling to light hadrons, inevitably results in an ultrashort lifetime of the HP resonance states. The conception of a heavy Pomeron is, certainly, not new. It was proposed by V.N. Gribov more than 40 years ago <cit.>. The only difference between Gribov's heavy Pomeron and the BFKL HP is in their intercept values. Above, we neglected the impact of the subleading (daughter) Pomerons corresponding to nonzero values of n_r in the series (<ref>). The reason is that the leading Pomeron intercept is separated from the subleading ones by a significant gap <cit.>.
A similar pattern takes place for other known series of Reggeonsin asymptotically free field theories <cit.>. Moreover, as the Regge trajectories are expected to be Herglotz functions <cit.>, so thecontributions of the subleading BFKL Pomerons at nonzero n_r and low negative t are suppressed in the factors α^(n_r)'_ BFKL(t) (as compared toα^(0)'_ BFKL(t)) in addition to the suppression in the values of α^(n_r)_ BFKL(t). The much higher slope of α_ SP(t) points to thefact that the soft Pomeron is not a Reggeon from the BFKL series.In view of the aforesaid, we come to the main conclusion: * The conception of the hard Pomeron as the leading Reggeon of the BFKL series is quite consistent with the available data on the high-energy pp elastic scattering.Its “invisibility” at the collision energies lower than 2 TeV is related not to the smallness of its coupling to proton (which is of the same order as the soft Pomeron'sone) but to its extremely weak t-evolution in the scattering region. In its turn, such a weak t-behavior seems to be related to a possible ultraheaviness of theresonances corresponding to this Reggeon. Hence, the characteristics “light” and “heavy”, regarding these two Pomerons, seem to be more natural than “soft” and“hard” (though, it is just a matter of conventions in terminology). In the very end, it should be pointed out that the TOTEM data at √(s)= 7 TeV only do not allow to confirm or discriminate the used phenomenological estimationof the HP intercept. The problem of absorption was swept under the carpet in <cit.>, though the relative contribution of the absorptive corrections may benon-vanishing in the kinematic range considered in <cit.>. Therefore, the true value of α_ HP(0) may be a bit higher and, so, the estimationα_ HP(0) = 1.32± 0.03should be treated just as the lower bound for this quantity. For example, the variantα_ HP(0)=1.44<cit.> and β_ HP(t)=β_ HP(0) e^b t, where β_ HP(0)=0.01 and b=1.5 GeV^-2, also yields a satisfactory description of theTOTEM data (see Fig. <ref>). The data at √(s)= 13 TeV are needed for more or less reliable determination of α_ HP(0).In any case, the account of the HP exchanges extends the applicability range of the Regge-eikonal approximation (<ref>),(<ref>) for the elastic scatteringof nucleons at ultrahigh energies. The satisfactory reproduction of available data by the updated model demonstrates the incorrectness of the claim <cit.>that absorptive models do not provide a good description of the LHC data in the deep-elastic scattering region.§ ACKNOWLEDGMENTSThe author thanks V.A. Petrov and R.A. Ryutin for discussions.99collins P.D.B. Collins, An Introduction to Regge Theory & High Energy Physics. Cambridge University Press 1977pdg Particle Data Group, http://pdg.lbl.gov/2016/hadronic-xsections/hadron.htmldonnachie A. Donnachie and P.V. Landshoff, Phys. Lett. B 296 (1992) 227donnachie2 A. Donnachie and P.V. Landshoff, Phys. Lett. B 727 (2013) 500struc www.desy.de/h1zeus/combined_resultsThe H1 and ZEUS Collaborations, JHEP 1001 (2010) 109godizov A.A. Godizov, Nucl. Phys. A 927 (2014) 36donnachie3 A. Donnachie and P.V. Landshoff, arXiv: 0803.0686 [hep-ph]selyugin O.V. Selyugin, Nucl. Phys. A 903 (2013) 54fazio S. Fazio, R. Fiore, L. Jenkovszky, and A. Salii, Phys. Rev. D 90 (2014) 016007 low F.E. Low, Phys. Rev. D 12 (1975) 163S. Nussinov, Phys. Rev. D 14 (1976) 246bfkl E.A. Kuraev, L.N. Lipatov, and V.S. Fadin, Sov. Phys. JETP 44 (1976) 443 Ya.Ya. Balitsky and L.N. Lipatov, Sov. J. Nucl. Phys. 28 (1978) 822kirschner R. 
Kirschner and L.N. Lipatov, Z. Phys. C 45 (1990) 477
godizov2 A.A. Godizov, Eur. Phys. J. C 75 (2015) 224
ua4 UA4 Collaboration (D. Bernard et al.), Phys. Lett. B 171 (1986) 142
totatl The TOTEM Collaboration, Europhys. Lett. 101 (2013) 21002; ATLAS Collaboration, Nucl. Phys. B 889 (2014) 486
gribov V.N. Gribov, Nucl. Phys. B 106 (1976) 189
heckathorn D. Heckathorn, Phys. Rev. D 18 (1978) 1286
lovelace C. Lovelace, Nucl. Phys. B 95 (1975) 12
godizov3 A.A. Godizov, Phys. Rev. D 81 (2010) 065009
troshin S.M. Troshin and N.E. Tyurin, Mod. Phys. Lett. A 31 (2016) 1650079; S.M. Troshin and N.E. Tyurin, arXiv: 1704.00443 [hep-ph] | http://arxiv.org/abs/1705.09126v1 | {
"authors": [
"A. A. Godizov"
],
"categories": [
"hep-ph",
"hep-ex"
],
"primary_category": "hep-ph",
"published": "20170525111638",
"title": "The hard Pomeron impact on the high-energy elastic scattering of nucleons"
} |
Institute of Mathematics, Czech Academy of Sciences. Žitná 25, 110 00, Praha, Czech Republic. The Institute of Mathematics of the Czech Academy of Sciences is supported by RVO:67985840. dolezal|[email protected] Jan Hladký was supported by the Alexander von Humboldt Foundation. Research of Martin Doležal was supported by the GAČR project GA16-07378S. We prove that the accumulation points of a sequence of graphs G_1,G_2,G_3,… with respect to the cut-distance are exactly the weak^* limit points of subsequences of the adjacency matrices (when all possible orders of the vertices are considered) that minimize the entropy over all weak^* limit points of the corresponding subsequence. In fact, the entropy can be replaced by any map W↦∬ f(W(x,y)), where f is a continuous and strictly concave function.As a corollary, we obtain a new proof of compactness of the cut-distance topology. Cut-norm and entropy minimization over weak^* limits Jan Hladký====================================================§ INTRODUCTIONThe theory of limits of dense graphs was developed in <cit.> and has revolutionized graph theory since then. The key objects of the theory are so-called graphons. More precisely, a graphon is a symmetric Lebesgue measurable function from I^2 to [0,1] where I=[0,1] is the unit interval (equipped by the Lebesgue measure λ). In the heart of the theory is then the following statement.Suppose that G_1,G_2,G_3,… is a sequence of graphs. Then there exists a subsequence G_k_1,G_k_2,G_k_3,… and a graphon W:I^2→ [0,1] such that G_k_1,G_k_2,G_k_3,… converges to W.Roughly speaking, to obtain the graphon W one looks at the adjacency matrices of the graphs (G_k_n)_n from distance. One possible way an analyst might attempt to make this statement formal could be to take W as a weak^* limit[See the Appendix for basic information about the weak^*topology.] of adjacency matrices of the graphs (G_k_n)_n represented as functions from I^2 to {0,1}. Such a version of Theorem <ref> would be just an instance of the Banach–Alaoglu Theorem. However, the weak^* topology turns out to be too coarse to provide the favorable properties that are available in the contemporary theory of graph limits.[A primal example of such a favorableproperty is the continuity of subgraph densities.] A good toy example is the sequence of the complete balanced bipartite graphs (K_n,n)_n=1^∞. When considering adjacency matrices of these graphs with vertices grouped into the two parts of the bipartite graphs, the corresponding weak^* limit is a 2× 2-chessboard function with values 0 and 1, which we denote by W_bipartite. This turns out to be a desirable limit. On the other hand, one could consider adjacency matrices ordered differently. Ordering the vertices randomly, we get the constant W_const≡1/2 as the weak^* limit (almost surely). We see that it is undesirable to get W_const as the limit object as the only information carried by such an object is that the overall edge densities of the graphs along the sequence converge to 1/2.So, instead of the weak^* topology one considers the so-called cut-norm topology, and this is also the topology to which “converges to W” inTheorem <ref> refers.The cut-norm ·_□ is a certain uniformization of the weak^* topology. Indeed, recall that given symmetric measurable functions Γ:I^2→[0,1] and Γ_1,Γ_2,Γ_3,…:I^2→[0,1], the two convergence notions compare as follows. 
Γ_n w^*⟶Γ ⟺sup_B⊂ I{lim sup_n |∫_x∈ B∫_y∈ BΓ_n(x,y)-Γ(x,y)|}=0 , Γ_n ·_□⟶Γ ⟺lim sup_n{sup_B⊂ I|∫_x∈ B∫_y∈ BΓ_n(x,y)-Γ(x,y)|}=0 .We shall state the formal version of Theorem <ref> in a somewhat bigger generality for graphons. If Γ,Γ':I^2→[0,1] are two graphons then we say that they are versions of each other if they differ only by some measure-preserving transformation of I (see Section <ref> for a precise definition).Then the formal statement of Theorem <ref> reads as follows.Suppose that Γ_1,Γ_2,Γ_3,…:I^2→ [0,1] is a sequence of graphons. Then there exists a sequence k_1<k_2<k_3<⋯ of natural numbers, versions Γ'_k_1,Γ'_k_2,Γ'_k_3,… of Γ_k_1,Γ_k_2,Γ_k_3,…, and a graphon W:I^2→ [0,1] such that the sequence Γ'_k_1,Γ'_k_2,Γ'_k_3,…converges to W in the cut-norm. Prior to our work, there were three approaches to proving Theorem <ref>. One, taken in <cit.> and in <cit.>, uses (variants of) the regularity lemma to group parts of I according to the structure of Γ_n. This way, one approximates the graphons by step-functions, and the limit graphon W is a limit of these step-functions.[A very general compactness result was given by Regts and Schrijver, <cit.>. This result in particular subsumes the compactness of thegraphon space. Even when specialized to the space of graphons, there are differences of debatable significance between the proofs.] A second approach, taken in <cit.>, relies on ultraproduct techniques. This later approach is extremely technical, and was developed for the (more difficult) theory of limits of hypergraphs, where for some time the regularity approach was not available.[A regularity approach to hypergraph limits was later found by Zhao, <cit.>.] The third proof follows from the Aldous–Hoover theorem for exchangeable arrays (<cit.>). While the Aldous–Hoover theorem substantially precedes the theory of graph limits, the connection was realized substantially later by Diaconis and Janson, <cit.> and independently by Austin <cit.>.We present a fourth proof of Theorem <ref>. Our proof provides for the first time a characterization of the cut-norm convergence in terms of the weak^* convergence. Namely, fixing any continuous and strictly concave function f:[0,1]→ℝ, we prove that there is a subsequence Γ_k_1,Γ_k_2,Γ_k_3,… such that the map W↦∬ f(W(x,y)) attains its minimum on the space of all weak^* accumulation points of versions of graphons Γ_k_1,Γ_k_2,Γ_k_3,…, and that any such minimizer is an accumulation point of the sequence Γ_1,Γ_2,Γ_3,… in the cut-distance. This result is consistent with our toy example above. Indeed, for any strictly concave function f we have ∬ f(W_bipartite(x,y))<∬ f(W_const(x,y)) by Jensen's inequality. Jensen's inequality underlies the general proof of our result.This application of Jensen's inequality is in a sense analogous to the proof of the index-pumping lemma in proofs of the regularity lemma. We try to indicate this important link in Section <ref>. §.§ Statement of the main resultsLet f:[0,1]→ℝ be an arbitrary continuous and strictly concave function.[The additional assumption of the continuity of the concave function f:[0,1]→ℝ which we work with in this paper only means that f is continuous (from the appropriate sides) at 0 and at 1, so this is not a big extra restriction.] Given a graphon Γ:I^2→[0,1], we write _f(Γ):=∫_x∈ I∫_y∈ If(Γ(x,y)). 
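To make the toy example above concrete, the following minimal sketch (our own grid discretization; the grid size and the choice of the binary entropy for f are illustrative assumptions) evaluates _f on W_bipartite and on W_const and displays the strict gap predicted by Jensen's inequality.

```python
# A minimal sketch evaluating INT_f on the two toy graphons from the introduction,
# with f the binary entropy; the n x n grid discretization is an assumption of ours.
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)   # convention 0*log(0) = 0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def int_f(W, f):
    """Approximate INT_f(W), the average of f(W(x,y)) over the unit square."""
    return float(np.mean(f(W)))

n = 1000
x = (np.arange(n) + 0.5) / n
# W_bipartite: value 1 iff x and y fall in different halves of [0,1]; W_const == 1/2.
W_bipartite = ((x[:, None] < 0.5) != (x[None, :] < 0.5)).astype(float)
W_const = np.full((n, n), 0.5)

print(int_f(W_bipartite, binary_entropy))  # ~0: the {0,1}-valued limit has zero entropy
print(int_f(W_const, binary_entropy))      # = 1: the constant 1/2 has maximal entropy
```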
When f is the binary entropy, the integration _f(W) appears also in the work on large deviations in random graphs, <cit.> (which does not relate to the current work otherwise), and is called the entropy of the graphon W.[As was pointed out to us by Svante Janson, Aldous (<cit.>) worked with this quantity already in the 1980's in the context of exchangeability.]For a sequence Γ_1,Γ_2,Γ_3,…:I^2→[0,1] of graphons, we denote by (Γ_1,Γ_2,Γ_3,…) the set of all functions W:I^2→ [0,1] for which there exist versions Γ'_1,Γ'_2,Γ'_3,… of Γ_1,Γ_2,Γ_3,… such that W is a weak^* accumulation point of the sequence Γ'_1,Γ'_2,Γ'_3,…. We also denote by (Γ_1,Γ_2,Γ_3,…) the set of all functions W:I^2→ [0,1] for which there exist versions Γ'_1,Γ'_2,Γ'_3,… of Γ_1,Γ_2,Γ_3,… such that W is a weak^* limit of the sequence Γ'_1,Γ'_2,Γ'_3,…. We have (Γ_1,Γ_2,Γ_3,…)⊂(Γ_1,Γ_2,Γ_3,…). Note that (Γ_1,Γ_2,Γ_3,…) can be empty but (Γ_1,Γ_2,Γ_3,…) cannot be empty by the sequential Banach–Alaoglu Theorem (see the Appendix for more details). Also, note that such weak^* accumulation points (and thus also limits) are necessarily symmetric, Lebesgue measurable, [0,1]-valued, and thus graphons.Our main result states that, given a sequence of graphons Γ_1,Γ_2,Γ_3,…, there is a subsequence Γ_k_1,Γ_k_2,Γ_k_3,… such that the minimum of _f(·) over the set(Γ_k_1,Γ_k_2,Γ_k_3,…) is attained, and the graphon attaining this minimum is an accumulation point of the sequence Γ_1,Γ_2,Γ_3,… in the cut-distance. Suppose that f:[0,1]→ℝ is an arbitrary continuous and strictly concave function.Suppose that Γ_1,Γ_2,Γ_3,…:I^2→[0,1] is a sequence of graphons.*Suppose that W∈(Γ_1,Γ_2,Γ_3,…) is not an accumulation point ofthe sequence Γ_1,Γ_2,Γ_3,… in the cut-norm. Then there exists W∈(Γ_1,Γ_2,Γ_3,…) such that _f(W)<_f(W).*There exist a subsequence Γ_k_1,Γ_k_2,Γ_k_3,… and a graphon W_min∈(Γ_k_1,Γ_k_2,Γ_k_3,…) such that_f(W_min)=inf{_f(W) W∈(Γ_k_1,Γ_k_2,Γ_k_3,…)} .Clearly, Theorem <ref> implies Theorem <ref>.The proof of Theorem <ref> is given in Sections <ref> and <ref>.To complete the “characterization of the cut-norm convergence in terms of the weak^* convergence” advertised above, we prove that weak^* limit points that do not minimize _f(·) cannot be limit points in the cut-norm.Suppose that f:[0,1]→ℝ is an arbitrary continuous and strictly concave function.Suppose that Γ_1,Γ_2,Γ_3,…:I^2→[0,1] is a sequence of graphons. If W∈(Γ_1,Γ_2,Γ_3,…) is a cut-norm limit of versions of Γ_1,Γ_2,Γ_3,… then W is a minimizer of _f(·) over the space (Γ_1,Γ_2,Γ_3,…).In Section <ref> we show that Proposition <ref> is an easy consequence of a result of Borgs, Chayes, and Lovász <cit.> on uniqueness of graph limits. In addition, we give a self-contained proof. § NOTATION AND TOOLSFor every function W:I^2→ℝ, we define the cut-norm of W byW_□=sup_A|∫_A∫_AW(x,y)| ,where A ranges over all measurable subsets of I. Another slightly different formula is also often used in the literature where one replaces the right-hand side of (<ref>) by sup_A,B|∫_A∫_BW(x,y)| where two sets A and B range over all measurable subsets of I. 
However, it is easy to see that for every symmetric function W, we havesup_A,B|∫_A∫_BW(x,y)|≥sup_A|∫_A∫_AW(x,y)|≥1/2sup_A,B|∫_A∫_BW(x,y)|,and so the notion of convergence of sequences of graphons (which are symmetric) in the cut-norm is irrelevant to the choice between these two formulas.We say that a graphon Γ I^2→ [0,1] is a step-graphon with steps I_1,I_2,…,I_k⊂ I if the sets I_1,I_2,…,I_k are pairwise disjoint, I_1∪ I_2∪…∪ I_k=I and W_|I_i× I_j is constant (up to a null set) for every i,j=1,2,…,k.We say that a measurable function γ:I→ I is an almost-bijection if there exist conull sets J_1,J_2⊂ I such that γ_|J_1 is a bijection from J_1 onto J_2. When we talk about the inverse of such a function γ then we mean (γ_|J_1)^-1 but we denote it only by γ^-1. Note that this inverse γ^-1 is not unique but that does not cause any problems as any two inverses of γ differ only on a null set.If Γ,Γ':I^2→[0,1] are two graphons then we say that Γ' is a version of Γ if there exists a measure preserving almost-bijection γ:I→ I such that Γ'(x,y)=Γ(γ^-1(x),γ^-1(y)) for almost every (x,y)∈ I^2.Related to versions, we recall that the cut-distance and L^1-distance between two graphons W_1,W_2 are defined as δ_□(W_1,W_2)=infU_1-W_2_□ and δ_1(W_1,W_2)=infU_1-W_2_1 where U_1 ranges over all versions of W_1.By an ordered partition of I, we mean a partition of I with a fixed order of the sets from the partition. For an ordered partition 𝒥 of I into finitely many sets C_1,C_2,…,C_k, we define mappings α_𝒥,1,α_𝒥,2,…,α_𝒥,k:I→ I, and a mapping γ_𝒥:I→ I byα_𝒥,1(x) = ∫_0^x 1_C_1(y) (̣y) , α_𝒥,2(x) =α_𝒥,1(1)+∫_0^x 1_C_2(y) (̣y) , ⋮ α_𝒥,k(x) =α_𝒥,1(1)+α_𝒥,2(1)+…+α_𝒥,k-1(1)+∫_0^x 1_C_k(y) (̣y) , γ_𝒥(x) = α_𝒥,i(x)if x∈ C_i, i=1,2,…,k .Informally, γ_𝒥 is defined in such a way that it maps the set C_1 to the left side of the interval I, the set C_2 next to it, and so on. Finally, the set C_k is mapped to the right side of the interval I. Clearly, γ_𝒥 is a measure preserving almost-bijection.For a graphon W:I^2→[0,1] and an ordered partition 𝒥 of I into finitely many sets, we denote by 𝒥W the version of W defined by 𝒥W(x,y)=W(γ_𝒥^-1(x),γ_𝒥^-1(y)) for every (x,y)∈ I^2. §.§ Lebesgue pointsThe Lebesgue density theorem asserts that given an integrable function f:ℝ^n→ℝ, almost every point x∈ℝ^n is a Lebesgue point of f, meaning that the value of f(x) equals to the limit of the averages of f on neighborhoods of x of diminishing sizes. There is some freedom in choosing the particular shapes of these neighborhoods. Below, we give a definition of Lebesgue points tailored to our purposes. Since we shall work with graphons, we state this definition for the domain I^2.Suppose that W:I^2→ℝ is an integrable function. We say that (x,y)∈ I^2 is a Lebesgue point of W if for every η>0 there exists δ_0>0 such that whenever [p_1,p_2]⊂ I and [q_1,q_2]⊂ I are intervals such that the length of the intervals is smaller or equal to δ_0, such that the ratio of the lengths of these intervals is at least 12 and at most 2, and such that [p_1,p_2] contains x and [q_1,q_2] contains y then|W(x,y)-1/(p_2-p_1)(q_2-q_1)∫_p_1^p_2∫_q_1^q_2W(w,z) (̣w) (̣z)|<η . We can now state the Lebesgue density theorem. Suppose that W:I^2→ℝ is an integrable function. Then almost every point of I^2 is a Lebesgue point of W.§.§ Stepping The next definition introduces graphons derived by an averaging of a given graphon W on a given partition of I. Here, we denote by λ^⊕ 2 the two-dimensional Lebesgue measure on I^2. Suppose that W:I^2→ [0,1] is a graphon. 
For a partition ℐ of the unit interval into finitely many sets of positive measure, I=I_1⊔ I_2⊔…⊔ I_k, we define a stepping W^ℐ which is defined on each rectangle I_i× I_j to be the constant 1/λ^⊕ 2(I_i× I_j)∫_I_i∫_I_jW(x,y). The next lemma shows that we can replace any graphon W by its stepping (on some partition of I) without changing the value of _f(W) too much. Let f:[0,1]→ℝ be an arbitrary continuous and strictly concave function, and let 𝒥 be an arbitrary partition of I into finitely many intervals of positive measure. Suppose that W I^2→[0,1] is a graphon, and let ε>0. Then there exists a partition ℐ of I into finitely many intervals of positive measure such that ℐ is a refinement of 𝒥 and such that |_f(W)-_f(W^ℐ)|<ε. As f is continuous, there is η>0 such that |f(x)-f(y)|< 12ε whenever x,y∈ [0,1] are such that |x-y|<η. Also, as W is an integrable function, almost every point (x,y)∈ I^2 is a Lebesgue point of W. This implies that for a.e. (x,y)∈ I^2 there is a natural number n such that whenever [p_1,p_2]⊂ I and [q_1,q_2]⊂ I are intervals of lengths smaller or equal to 2n such that the ratio of the lengths is at least 12 and at most 2, and such that [p_1,p_2] contains x and [q_1,q_2] contains y then inequality (<ref>) holds. For every such (x,y), we denote by n(x,y) the smallest n with this property. For every natural number n, we also put D_n={(x,y)∈ I^2 n(x,y)>n} . Then it is easy to check that the sets D_1⊇ D_2⊇ D_3⊇… are measurable and λ^⊕ 2(⋂_n=1^∞ D_n)=0. So, after denoting C:=max_x∈ [0,1]|f(x)|, we can find a natural number n_0 large enough such that λ^⊕ 2(D_n_0)<1/4Cε , and such that 1n_0 is smaller than the length of all intervals from the partition 𝒥. Now let ℐ be an arbitrary refinement of the partition 𝒥 into finitely many intervals I_1,I_2,…,I_k, such that the length of each of these intervals is at least 1n_0 and at most 2n_0. For each i,j=1,2,…,k, we denote C_i,j=1/λ^⊕ 2(I_i× I_j)∫_I_i∫_I_jW(x,y). Inequality (<ref>) then tells us that |W(x,y)-C_i,j|<ηfor every (x,y)∈ (I_i× I_j)∖ D_n_0, i,j=1,2,…,k , and so |f(W(x,y))-f(C_i,j)|< 12εfor every (x,y)∈ (I_i× I_j)∖ D_n_0, i,j=1,2,…,k . So we have |_f(W)-_f(W^ℐ)| ≤ ∬_D_n_0|f(W(x,y))-f(W^ℐ(x,y))|+∑_i,j=1^k ∬_(I_i× I_j)∖ D_n_0|f(W(x,y))-f(W^ℐ(x,y))| (<ref>)≤ 2C·λ^⊕ 2(D_n_0)+1/2ε∑_i,j=1^kλ^⊕ 2((I_j× I_j)∖ D_n_0) (<ref>)< 1/2ε+1/2ε=ε , as we wanted.The next lemma says that if a graphon is a weak^* limit point then so is any graphon derived by an averaging of the original one on a given partition of I into intervals. Suppose that Γ_1,Γ_2,Γ_3,…:I^2→[0,1] is a sequence of graphons. Suppose that W∈(Γ_1,Γ_2,Γ_3,…) and that we have a partition ℐ of I into finitely many intervals of positive measure. Then W^ℐ∈(Γ_1,Γ_2,Γ_3,…). Moreover, whenever Γ'_1,Γ'_2,Γ'_3,… are versions of Γ_1,Γ_2,Γ_3,… which converge to W in the weak^* topology then the versions Γ”_1,Γ”_2,Γ”_3,… of Γ_1,Γ_2,Γ_3,… weak^* converging to W^ℐ can be chosen in such a way that for every natural number j and for every intervals K,L∈ℐ it holds ∫_K∫_LΓ_j'(x,y)=∫_K∫_LΓ_j”(x,y) . The proof of Lemma <ref> follows a relatively standard probabilistic argument. Suppose for simplicity that Γ_1,Γ_2,Γ_3,… weak^* converges to W. Then, for each n, we consider a version Γ'_n of Γ_n which is obtained by splitting each intervalA∈ℐ into n subsets of the same measure and then permuting these subsets of A at random. It can then be shown that Γ'_1,Γ'_2,Γ'_3,… converge to W^ℐ almost surely. 
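The following short sketch (ours; a uniform grid discretization and the helper name `stepping` are illustrative assumptions) renders the averaging W↦ W^ℐ from the definition above and spot-checks that it preserves the integral over every rectangle I_i× I_j, which is the property recorded in the “moreover” part of the preceding lemma.

```python
# A small sketch of the stepping operator: average W over each rectangle I_i x I_j
# of a partition of [0,1] into intervals; the grid discretization is ours.
import numpy as np

def stepping(W, cut_points):
    """W: n x n array sampling a graphon on a uniform grid; cut_points: increasing
    numbers in (0,1) splitting [0,1] into intervals. Returns W^I on the same grid."""
    n = W.shape[0]
    idx = [0] + [int(round(c * n)) for c in cut_points] + [n]
    out = np.empty_like(W, dtype=float)
    for a in range(len(idx) - 1):
        for b in range(len(idx) - 1):
            block = W[idx[a]:idx[a + 1], idx[b]:idx[b + 1]]
            out[idx[a]:idx[a + 1], idx[b]:idx[b + 1]] = block.mean()
    return out

# Spot check: the stepping has the same integral as W over each block I_i x I_j.
rng = np.random.default_rng(1)
A = rng.random((300, 300))
W = (A + A.T) / 2                              # a symmetric [0,1]-valued sample
S = stepping(W, [0.25, 0.5, 0.75])
print(np.isclose(W[:75, 75:150].mean(), S[:75, 75:150].mean()))   # True
```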
The next two definitions are needed to make precise the notion of randomly permuting parts of the graphon within a given partition. Given a set A⊂ I of positive measure and a number s∈ℕ, we can consider a partition A= A ^s_1⊔ A ^s_2⊔…⊔ A ^s_s, where each set A ^s_i has measure λ(A)/s and for each 1≤ i<j≤ s, the set A ^s_i is entirely to the left of A ^s_j. These conditions define the partition A= A ^s_1⊔ A ^s_2⊔…⊔ A ^s_s uniquely, up to null sets. For each i,j∈ [s] there is a natural, uniquely defined (up to null sets), measure preserving almost-bijection χ^A,s_i,j: A^s_i→ A^s_j which preserves the order on the real line. Suppose that Γ:I^2→ [0,1] is a graphon. For a partition ℐ of I into finitely many sets of positive measure, I=I_1⊔ I_2⊔…⊔ I_k, and for s∈ℕ, we define a discrete distribution 𝕎(Γ,ℐ,s) on graphons using the following procedure. We take π_1,…,π_k:[s]→ [s] independent uniformly random permutations. After these are fixed, we define a sample W∼𝕎(Γ,ℐ,s) by W(x,y)=Γ(χ^I_i,s_p,π_i(p)(x),χ^I_j,s_q,π_j(q)(y)) . This defines the sample W:I^2→ [0,1] uniquely up to null sets, and thus defines the whole distribution 𝕎(Γ,ℐ,s). Observe that 𝕎(Γ,ℐ,s) is supported on (some) versions of Γ. We call the sets I_j^s_q stripes. By considering suitable versions of the graphons Γ_n, we can without loss of generality assume that the sequence Γ_1,Γ_2,Γ_3,… itself converges to W in the weak^* topology. For each n∈ℕ, let us sample U_n∼𝕎(Γ_n,ℐ,n). We claim that the sequence U_1,U_2,U_3,… converges to W^ℐ in the weak^* topology almost surely. As each U_n is a version of Γ_n, this will prove the lemma. So, let us now turn to proving the claim. Let i,j∈[k] be arbitrary. Further, let 0≤ p_1<p_2≤ 1 and 0≤ r_1<r_2≤ 1 be arbitrary rational numbers such that the rectangle [p_1,p_2]×[r_1,r_2] is contained (modulo a null set) in I_i× I_j. Having fixed i,j,p_1,p_2,r_1,r_2, let us write c for the value of W^ℐ on I_i× I_j. For each n∈ℕ, let E_n be the event that |∬_[p_1,p_2]×[r_1,r_2]U_n(̣λ^⊕2)-c(p_2-p_1)(r_2-r_1)|>√(1/n)+ 4n . Let us now bound the probability that E_n occurs. To this end, let Y_n be the value of ∬_[p_1,p_2]×[r_1,r_2]U_n(̣λ^⊕2). We clearly have [Y_n]=c(p_2-p_1)(r_2-r_1)± 4n (the error ± 4n comes from those products of pairs of stripes that intersect both [p_1,p_2]×[r_1,r_2] and its complement). Therefore, if E_n occurs then |Y_n-[Y_n]|>√(1/n). Suppose that we want to compute Y_n. From the k random permutations π_1,π_2,…,π_k:[n]→[n] used in Definition <ref> to define U_n, we only need to know the permutations π_i and π_j. To generate these, we toss in i.i.d. points i_1,i_2,…,i_n,j_1,j_2,…,j_n into the unit interval I; the Euclidean order of the points i_1,i_2,…,i_n naturally defines π_i and similarly the points j_1,j_2,…,j_n naturally define π_j.[The exception being when some of the points i_1,i_2,…,i_n or of the points j_1,j_2,…,j_n coincide, in which case the order of these points does not determine a permutation. This event however happens almost never.] So, we can view Y_n as a random variable on the probability space I^2n. Observe that if 𝔰=(i_1,i_2,…,i_n,j_1,j_2,…,j_n) and 𝔰'=(i'_1,i'_2,…,i'_n,j'_1,j'_2,…,j'_n) are two elements of I^2n that differ in only one coordinate, then |Y_n(𝔰)-Y_n(𝔰')|≤2/n. Thus the Method of Bounded Differences (see <cit.>) tells us that [E_n]≤[|Y_n-[Y_n]|>√(1/n)]≤ 2exp(-2(√(1/n))^2/2n·(2/n)^2)=2exp(-√(n)/4) . Because the sequence (2exp(-√(n)/4))_n=1^∞ is summable, the Borel–Cantelli lemma allows to conclude that only finitely many events E_n occur, almost surely. 
Thus, almost surely, for any weak^* accumulation point U of the sequence U_1,U_2,U_3,…, we have ∬_[p_1,p_2]×[r_1,r_2]U(̣λ^⊕2)=c(p_2-p_1)(r_2-r_1) . By applying the union bound, we obtain that (<ref>) holds for all (countably many) choices of i,j,p_1,p_2,r_1,r_2, almost surely. Since the elements of ℐ are intervals, the above system of rectangles [p_1,p_2]×[r_1,r_2] generates the Borel σ-algebra on I^2. Consequently, we obtain that U≡ W^ℐ, almost surely. The “moreover” part obviously follows from the proof.§.§ Jensen's inequality and steppings Recall that one of the possible formulations of Jensen's inequality says that if (Ω,λ) is a measurable space with λ(Ω)>0, g:Ω→ℝ is a measurable function and f:ℝ→ℝ is a concave function thenf(1/λ(Ω)∫_Ω g(x))≥1/λ(Ω)∫_Ω f(g(x)) .We use this formulation of Jensen's inequality to prove the following simple lemma. Let f[0,1]→ℝ be a continuous and strictly concave function. Let Γ I^2→ [0,1] be a step-graphon with steps I_1,I_2,…, I_k, and let W I^2→ [0,1] be another graphon such that ∫_I_i× I_jW=∫_I_i× I_jΓ for every i,j=1,2,…,k. Then _f(W)≤_f(Γ). It clearly suffices to show that for every i,j=1,2,…,k it holds ∫_I_i∫_I_jf(W(x,y))≤∫_I_i∫_I_jf(Γ(x,y)) . So let us fixi,j, and let C_i,j be the constant for which Γ_|I_i× I_j=C_i,j almost everywhere. Then we have ∫_I_i∫_I_jf(W(x,y)) (<ref>)≤λ^⊕ 2(I_i× I_j)· f(1/λ^⊕ 2(I_i× I_j)∫_I_i∫_I_jW(x,y))=λ^⊕ 2(I_i× I_j)· f(C_i,j)=∫_I_i∫_I_jf(Γ(x,y)) , as we wanted. § SUMMARIES OF PROOFSIn this section, we give an overview of the proof of Theorem <ref><ref> in Section <ref>. Then, we explain in Section <ref> that this proof can be viewed as an infinitesimal counterpart to the index-pumping lemma. Last, in Section <ref> we give a detailed outline of Theorem <ref><ref>.§.§ Overview of proof ofTheorem <ref><ref> Suppose for simplicity that the sequence Γ_1,Γ_2,Γ_3,… converges to W in the weak^* topology. The key step to the proof of Theorem <ref><ref> is Lemma <ref>. There we prove that whenever we fix a sequence (B_n)_n=1^∞ of measurable subsets of I and define a new version Γ'_n of Γ_n (for every n) by “shifting the set B_n to the left side of the interval I”, then any weak^* accumulation point W of the sequence Γ'_1,Γ'_2,Γ'_3,… satisfies _f(W)≤_f(W). As this result relies on Jensen's inequality, we actually get _f(W)<_f(W) when we choose the sets B_n carefully. “Carefully” means that each of the integrals ∫_B_n∫_B_nΓ_n(x,y) differs from the integral ∫_B_n∫_B_nW(x,y) at least by some given ε>0. But observe that if the graphon W is not a cut-norm accumulation point of the sequence Γ_1,Γ_2,Γ_3,… then it is always possible to choose the sets B_n. §.§ Connection between the proof ofTheorem <ref><ref> and proofs of regularity lemmas Graphons could be regarded as “the ultimate regularization”. Thus, it is instructive to see how our proof relates to the usual proofs of regularity lemmas (of which the weak regularity lemma of Frieze and Kannan <cit.> is the most relevant). Recall that in these proofs of regularity lemmas one keeps refining a partition of a graph until the partition is regular. Let us give details. Let f:[0,1]→ℝ be an arbitrary continuous and strictly concave function. Suppose that G is an n-vertex graph, and let 𝒫=(P_i)_i=1^k be a partition of V(G) into sets. Then for each i,j∈[k], we define d_ij:=∑_u∈ P_i, v∈ P_j1_uv∈ E(G)/|P_i|·|P_j| (with the convention 0/0=0). If i≠ j then d_ij corresponds to the bipartite density of the pair G[P_i,P_j], and otherwise this corresponds to the density of the graph G[P_i]. 
Then we write_f(G;𝒫):=∑_i=1^k∑_j=1^k |P_i|· |P_j|/n^2· f(d_ij) .Note that we can express _f(G;𝒫) as _f(W_G;𝒫), where W_G;𝒫 is a graphon representation of densities of G according to the partition 𝒫. The index-pumping lemma, which we state here in the setting of the weak regularity lemma, asserts that non-regular partitions canbe refined while controlling the index. Let us recall that a partition 𝒫=(P_i)_i=1^k of V(G) is weak ϵ-regular if for each B⊂ V(G) we havee(G[B])=1/2∑_i=1^k∑_j=1^k d_i,j|B∩ P_i|· |B∩ P_j|±ϵ n^2 . Suppose that 𝒞 is a partition of a graph G. If B⊂ V(G) is a witness that 𝒞 is not ϵ-regular, then splitting each cell C∈𝒞 into C∩ B and C∖ B yields a partition 𝒟 for which_x↦ -x^2(G;𝒟)<_x↦ -x^2(G;𝒞)-ϵ^2/4 . For completeness, let us recall the the proof of the weak regularity lemma, which states that for each ϵ>0, each graph has a weak ϵ-regular partition with at most 2^⌈4/ϵ^2⌉ parts. One starts with a singleton partition. At any stage, if the current partition is not weak ϵ-regular, then Lemma <ref> allows to decrease the index by at least ϵ^2/4 while doubling the number of cells in the partition. Since _x↦ -x^2(G;· )∈[-1,0], we must terminate in at most ⌈4/ϵ^2⌉ steps.Let us now draw the analogy between Lemma <ref> and Theorem <ref><ref> and its proof. Firstly, note that Theorem <ref> allows other functions than x↦ -x^2 used in Lemma <ref>. This is however not a serious restriction. Indeed, replacing the so-called “defect form ofthe Cauchy–Schwarz inequality” in the usual proof of Lemma <ref> by Jensen's inequality, we could obtain a statement for general continuous strictly concave functions. In fact, this has already been used in <cit.>, where a different choice of a concave function was necessary. So let us now move to the main analogy. Let us consider W as in Theorem <ref><ref>. As in Section <ref>, let us assume that W is the weak* limit of Γ_1,Γ_2,Γ_3…. Suppose that W is not an accumulation point of Γ_1,Γ_2,Γ_3,… with respect to the cut-norm, and let B_1,B_2,B_3,…⊂ I be witnesses for this. Now, the “shifting B_n to the left” described in Section <ref> can be viewed as splitting each interval J⊂ I into J∩ B_n and J∖ B_n (we think of J as being very small, thus representing an “infinitesimally small cluster”), just as in Lemma <ref>.We pose a conjecture which goes in this direction in Section <ref>.§.§ Overview of proof of Theorem <ref><ref>Let us begin with the most straightforward attempt for a proof. For now, let us work with the simplifying assumption that all accumulation points are actually limits. As we shall see later, this simplifying assumption is a major cheat for which an extra patch will be needed. Then, letm:=inf{_f(W) W∈(Γ_1,Γ_2,Γ_3,…)} .For each k∈ℕ, let us fix a sequence Γ_1^k,Γ_2^k,Γ_3^k,… of versions of Γ_1,Γ_2,Γ_3,… which converges in the weak^* topology to a graphon W_k with _f(W_k)<m+1/k. Now, we might diagonalize and hope that any weak^* accumulation point (whose existence is guaranteed by the Banach–Alaoglu Theorem) W^* of the sequence Γ_1^1,Γ_2^2,Γ_3^3,… satisfies _f(W^*)≤ m. The reason for this hope being vain is the discontinuity of _f(·) with respect to the weak^* topology. As an example, let us take a situation when each W_k is a 2(k+2)× 2(k+2)-chessboard {0,1}-valued function, with the last two rows and columns having value 1/2 (see Figure <ref>). In other words, most of each graphon W_k corresponds to a complete balanced bipartite graphon, to which an additional artificial subdivision to each of its parts to k subparts was introduced. 
These subparts were interlaced one after another, except that the vertices of the last subpart of each part were mixed together. (These graphons were clearly chosen nonoptimally in the sense that the mixing of the last two parts is undesired. We chose these graphons in this example here to have richer features to study.) All the graphons W_k have small values of _f(·). On the other hand, the weak^* limit of the sequence is the graphon W_const≡1/2 whose value _f(·) is bigger. There is a lesson to learn from this example. While for larger k, the versions in the sequence Γ_1^k,Γ_2^k,Γ_3^k,… will be aligned on I in a more optimal way locally, the global structure may get undesirably more convoluted as k→∞. To remedy this, we consider a sequence of version of Γ_1,Γ_2,Γ_3,… in which the structure of measure-preserving transformation on a rough level is inherited from measure preserving transformations leading to W_1. Within each step corresponding to the step-graphon W_1, the structure of the measure-preserving transformation is inherited from measure preserving transformations leading to W_2, and so on. An example of this procedure is given in Figure <ref>. It can be shown that any weak^* accumulation point W^* of these reordered graphons has the property that _f(W^*)≤lim sup_n _fW_n, as was needed. Let us now explain why the assumption that all sequences converge weak^* leaves a substantial gap in the proof. Recall that the information how the partition 𝒥^k of U_k interacts with the measure preserving almost-bijections on graphons 𝔰_k⊂ (Γ_1,Γ_2,Γ_3,…) that converge to W_k gives us crucial directions as how to reorder and refine the subsequence of graphons 𝔰_k+1 that converges to W_k+1. Let us again stress that while the existence of the subsequences 𝔰_j is guaranteed by weak^* compactness, we have no control on their properties. So, it can be that 𝔰_k is disjoint from 𝔰_k+1. In other words, we do not get the needed information how to reorder and refine the graphons in 𝔰_k+1. To remedy this problem, we prove a lemma (Lemma <ref>) which says that for every sequence Γ_1,Γ_2,Γ_3,… of graphons there exists a subsequence Γ_k_1,Γ_k_2,Γ_k_3,… such thatinf{_f(W) W∈(Γ_k_1,Γ_k_2,Γ_k_3,…)}=inf{_f(W) W∈(Γ_k_1,Γ_k_2,Γ_k_3,…)} .Applying this lemma first, the arguments above become sound for the subsequence Γ_k_1,Γ_k_2,Γ_k_3,….§ PROOF OF THEOREM <REF><REF> The following key lemma (or its subsequent corollary) is used in both proofs of Theorem <ref><ref> and Theorem <ref><ref>. Suppose that f:[0,1]→ℝ is an arbitrary continuous and strictly concave function. Suppose that Γ_1,Γ_2,Γ_3,…:I^2→[0,1] is a sequence of graphons which converges to a graphon W:I^2→[0,1] in the weak^* topology. Suppose that B_1,B_2,B_3,… is an arbitrary sequence of subsets of I.For each n, let 𝒥_n be the ordered partition of I into two sets B_n and I∖ B_n (in this order). Then every graphon W that is a weak* accumulation point of the sequence 𝒥_1Γ_1,𝒥_2Γ_2,𝒥_3Γ_3,… satisfies _f(W)≤_f(W).Moreover, suppose that for the sequence n_1<n_2<n_3<… for which 𝒥_n_1Γ_n_1,𝒥_n_2Γ_n_2,𝒥_n_3Γ_n_3,… weak* converges to W, we have that 1_B_n_1, 1_B_n_2, 1_B_n_3,… converges to a function ψ I→[0,1] in the weak^* topology. Let θ I→ I be defined by θ(x)=∫_0^xψ(y) (̣y). 
If we have λ^⊗ 2({(x,y)∈ I^2 : ψ(x)>0, ψ(y)>0, W(x,y)≠W(θ(x),θ(y))})>0then _f(W)<_f(W).By passing to a subsequence, we may assume that the sequence 𝒥_1Γ_1,𝒥_2Γ_2,𝒥_3Γ_3,… is convergent to W in the weak^* topology, and that the sequence 1_B_1, 1_B_2, 1_B_3,… converges in the weak^* topology to ψ I→[0,1]. We define ξ I→ I by ξ(x)=θ(1)+∫_0^x(1-ψ(y)) (̣y). For every two intervals [p_1,p_2],[q_1,q_2]⊂ Iwe have ∫_p_1^p_2∫_q_1^q_2W(x,y) =∫_p_1^p_2∫_q_1^q_2W(θ(x),θ(y))ψ(x)ψ(y)+∫_p_1^p_2∫_q_1^q_2W(θ(x),ξ(y))ψ(x)(1-ψ(y))+∫_p_1^p_2∫_q_1^q_2W(ξ(x),θ(y))(1-ψ(x))ψ(y)+∫_p_1^p_2∫_q_1^q_2W(ξ(x),ξ(y))(1-ψ(x))(1-ψ(y)) . By using the fact that Γ_nw^*→W together with the identity ab+a(1-b)+(1-a)b+(1-a)(1-b)=1 we get that ∫_p_1^p_2∫_q_1^q_2W(x,y) =lim_n→∞∫_p_1^p_2∫_q_1^q_2Γ_n(x,y)= lim_n→∞∫_p_1^p_2∫_q_1^q_2Γ_n(x,y) 1_B_n(x) 1_B_n(y)+lim_n→∞∫_p_1^p_2∫_q_1^q_2Γ_n(x,y) 1_B_n(x)(1- 1_B_n(y))+lim_n→∞∫_p_1^p_2∫_q_1^q_2Γ_n(x,y)(1- 1_B_n(x)) 1_B_n(y)+lim_n→∞∫_p_1^p_2∫_q_1^q_2Γ_n(x,y)(1- 1_B_n(x))(1- 1_B_n(y)) . Next we rewrite the integral following the first limit on the right-hand side of (<ref>). To this end, we use the notation from (<ref>) together with the obvious differentiation formula (α_𝒥_n,1)'(x)= 1_B_n(x)for a.e. x∈ I (and also, we use the fact that α_𝒥_n,1|B_n is an almost-bijection from B_n onto the interval [0,∫_0^1 1_B_n(y)], and so it makes sense to talk about its inverse). We have ∫_p_1^p_2∫_q_1^q_2Γ_n(x,y) 1_B_n(x) 1_B_n(y) integration by substitution= ∫_α_𝒥_n,1(p_1)^α_𝒥_n,1(p_2)∫_α_𝒥_n,1(q_1)^α_𝒥_n,1(q_2)Γ_n(α_𝒥_n,1^-1(x),α_𝒥_n,1^-1(y)) γ_𝒥_n(x)=α_𝒥_n,1(x) for every x∈ B_n= ∫_α_𝒥_n,1(p_1)^α_𝒥_n,1(p_2)∫_α_𝒥_n,1(q_1)^α_𝒥_n,1(q_2)Γ_n(γ_𝒥_n^-1(x),γ_𝒥_n^-1(y)) = ∫_α_𝒥_n,1(p_1)^α_𝒥_n,1(p_2)∫_α_𝒥_n,1(q_1)^α_𝒥_n,1(q_2)𝒥_nΓ_n(x,y) . Therefore, we have |∫_p_1^p_2∫_q_1^q_2Γ_n(x,y) 1_B_n(x) 1_B_n(y)-∫_θ(p_1)^θ(p_2)∫_θ(q_1)^θ(q_2)𝒥_nΓ_n(x,y)| (<ref>)= |∫_α_𝒥_n,1(p_1)^α_𝒥_n,1(p_2)∫_α_𝒥_n,1(q_1)^α_𝒥_n,1(q_2)𝒥_nΓ_n(x,y)-∫_θ(p_1)^θ(p_2)∫_θ(q_1)^θ(q_2)𝒥_nΓ_n(x,y)| ≤ |α_𝒥_n,1(p_1)-θ(p_1)|+|α_𝒥_n,1(p_2)-θ(p_2)|+|α_𝒥_n,1(q_1)-θ(q_1)|+|α_𝒥_n,1(q_2)-θ(q_2)| . The fact that 1_B_nw^*→ψ immediately implies that α_𝒥_n,1(x)→θ(x) for every x∈ I, and so we conclude that the right-hand side, and thus also the left-hand side, of (<ref>), tends to 0. Therefore (note that the following limits exist as Γ_nw^*→W) lim_n→∞∫_p_1^p_2∫_q_1^q_2Γ_n(x,y) 1_B_n(x) 1_B_n(y) = lim_n→∞∫_θ(p_1)^θ(p_2)∫_θ(q_1)^θ(q_2)𝒥_nΓ_n(x,y) 𝒥_nΓ_nw^*→W =∫_θ(p_1)^θ(p_2)∫_θ(q_1)^θ(q_2)W(x,y) integration by substitution =∫_p_1^p_2∫_q_1^q_2W(θ(x),θ(y))ψ(x)ψ(y) . In a very analogous way as we derived (<ref>), one can verify that lim_n→∞∫_p_1^p_2∫_q_1^q_2Γ_n(x,y) 1_B_n(x)(1- 1_B_n(y)) =∫_p_1^p_2∫_q_1^q_2W(θ(x),ξ(y))ψ(x)(1-ψ(y)) , lim_n→∞∫_p_1^p_2∫_q_1^q_2Γ_n(x,y)(1- 1_B_n(x)) 1_B_n(y) =∫_p_1^p_2∫_q_1^q_2W(ξ(x),θ(y))(1-ψ(x))ψ(y) , lim_n→∞∫_p_1^p_2∫_q_1^q_2Γ_n(x,y)(1- 1_B_n(x))(1- 1_B_n(y)) =∫_p_1^p_2∫_q_1^q_2W(ξ(x),ξ(y))(1-ψ(x))(1-ψ(y)) . By putting (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) together, we get (<ref>). 
Since the sets of the form [p_1,p_2]× [q_1,q_2] generate the Borel σ-algebra on I^2, we conclude from Claim <ref> that for almost every (x,y)∈ I^2 we have thatW(x,y)= W(θ(x),θ(y))ψ(x)ψ(y)+W(θ(x),ξ(y))ψ(x)(1-ψ(y))+W(ξ(x),θ(y))(1-ψ(x))ψ(y)+W(ξ(x),ξ(y))(1-ψ(x))(1-ψ(y)) .Note that the right-hand side of (<ref>) is a convex combination of the four termsW(θ(x),θ(y)) ,W(θ(x),ξ(y)) ,W(ξ(x),θ(y)) ,W(ξ(x),ξ(y)) .Therefore we have_f(W) =∫_0^1∫_0^1f(W(x,y)) f is concave (<ref>)≥∫_0^1∫_0^1f(W(θ(x),θ(y)))ψ(x)ψ(y)+∫_0^1∫_0^1f(W(θ(x),ξ(y)))ψ(x)(1-ψ(y))+∫_0^1∫_0^1f(W(ξ(x),θ(y)))(1-ψ(x))ψ(y)+∫_0^1∫_0^1f(W(ξ(x),ξ(y)))(1-ψ(x))(1-ψ(y)) integration by substitution =∫_0^θ(1)∫_0^θ(1)f(W(x,y))+∫_0^θ(1)∫_θ(1)^1f(W(x,y))+∫_θ(1)^1∫_0^θ(1)f(W(x,y))+∫_θ(1)^1∫_θ(1)^1f(W(x,y))=∫_0^1∫_0^1f(W(x,y))=_f(W) . To prove the “moreover” part, suppose that we have (<ref>). Then the convex combination (<ref>) is not trivial on a set of positive measure. This is all we need as then we have a sharp inequality in (<ref>) because f is strictly concave. We do not use the next corollary right now but we will need it in Section <ref>. Suppose that f:[0,1]→ℝ is an arbitrary continuous and strictly concave function. Suppose that Γ_1,Γ_2,Γ_3,…:I^2→[0,1] is a sequence of graphons which converges to a graphon W:I^2→[0,1] in the weak^* topology. Suppose that ℓ is a fixed natural number and that for every n, 𝒥_n is an ordered partition of I into ℓ sets B_1^n,B_2^n,…,B_ℓ^n. Then for every graphon W that is a weak* accumulation point of the graphons 𝒥_1Γ_1,𝒥_2Γ_2,𝒥_3Γ_3,… we have _f(W)≤_f(W).For every natural number n and every i∈{1,…,ℓ}, we denote by 𝒥_n^i the ordered partition of I consisting of the sets B_ℓ-i+1^n,B_ℓ-i+2^n,…,B_ℓ^n and I∖⋃_j=ℓ-i+1^ℓ B_j^n (in this order). Consider these ℓ+1 sequences of graphons:𝒮_0 Γ_1,Γ_2,Γ_3,… 𝒮_1 𝒥_1^1Γ_1,𝒥_2^1Γ_2,𝒥_3^1Γ_3,… 𝒮_2 𝒥_1^2Γ_1,𝒥_2^2Γ_2,𝒥_3^2Γ_3,… ⋮ 𝒮_ℓ 𝒥_1^ℓΓ_1,𝒥_2^ℓΓ_2,𝒥_3^ℓΓ_3,… ,so that the sequence 𝒮_ℓ is precisely 𝒥_1Γ_1,𝒥_2Γ_2,𝒥_3Γ_3,…. Let us fix W∈(𝒮_ℓ). By passing to a subsequence, we may assume that the sequence 𝒮_i converges to some graphon W_i in the weak^* topology for every i=1,2,…,ℓ-1. It remains to apply Lemma <ref> ℓ-times in a row. First, we apply it on the sequence 𝒮_0 of graphons and on the sequence B_ℓ^1,B_ℓ^2,B_ℓ^3,… of subsets of I to conclude that _f(W_1)≤_f(W). Next, we apply it on the sequence 𝒮_1 of graphons and on the sequence B_ℓ-1^1,B_ℓ-1^2,B_ℓ-1^3,… of subsets of I to conclude that _f(W_2)≤_f(W_1)≤_f(W). In the last step, we apply it on the sequence 𝒮_ℓ-1 of graphons and on the sequence B_1^1,B_1^2,B_1^3,… of subsets of I to conclude that _f(W)≤_f(W_ℓ-1)≤…≤_f(W_1)≤_f(W). Now we can prove Theorem <ref><ref>.By passing to a subsequence, we may assume that the sequence Γ_1,Γ_2,Γ_3,… converges to W in the weak^* topology. As W is not an accumulation point of the sequenceΓ_1,Γ_2,Γ_3,… in the cut-norm, there is ε>0 and a natural number n_0 such that Γ_n-W_□≥ε for every n≥ n_0. By passing to a subsequence, we may suppose that Γ_n-W_□≥ε for every natural number n. By the definition of the cut-norm, there is a sequence B_1,B_2,B_3,… of subsets of I such that for every natural number n we have |∫_x∈ B_n∫_y∈ B_n(Γ_n(x,y)-W(x,y))|≥ε. This means that either∫_x∈ B_n∫_y∈ B_nΓ_n(x,y) ≥∫_x∈ B_n∫_y∈ B_nW(x,y)+εor ∫_x∈ B_n∫_y∈ B_nΓ_n(x,y) ≤∫_x∈ B_n∫_y∈ B_nW(x,y)-ε .By passing to a subsequence, we may assume that only one of these two cases occurs. We stick to the case when (<ref>) holds for every natural number n (the other case is analogous). 
By passing to a subsequence once again, we may assume that the sequence 1_B_1, 1_B_2, 1_B_3,… converges in the weak^* topology to some ψ I→[0,1]. For every natural number n, let 𝒥_n be the ordered partition of I into two sets B_n and I∖ B_n (in this order). This allows us to define α_𝒥_n,1,α_𝒥_n,2,γ_𝒥_n I→ I as in (<ref>), and versions 𝒥_1Γ_1,𝒥_2Γ_2,𝒥_3Γ_3,… of Γ_1,Γ_2,Γ_3,…. We pass to a subsequence again to assure that the sequence 𝒥_1Γ_1,𝒥_2Γ_2,𝒥_3Γ_3,… is convergent in the weak^* topology, and we denote the weak^* limit by W. Now Lemma <ref> tells us that _f(W)≤_f(W), and that to prove that this inequality is sharp we only need to verify (<ref>). So to complete the proof, it suffices to prove the following claim. We have∫_0^1∫_0^1W(θ(x),θ(y))ψ(x)ψ(y)≥∫_0^1∫_0^1W(x,y)ψ(x)ψ(y)+ 12ε .We have∫_0^1∫_0^1W(θ(x),θ(y))ψ(x)ψ(y) = ∫_0^θ(1)∫_0^θ(1)W(x,y) 𝒥_nΓ_nw^*→W= lim_n→∞∫_0^θ(1)∫_0^θ(1)𝒥_nΓ_n(x,y) for large enough n, as α_𝒥_n,1(1)→θ(1)≥ lim sup_n→∞∫_0^α_𝒥_n,1(1)∫_0^α_𝒥_n,1(1)𝒥_nΓ_n(x,y)- 12ε integration by substitution= lim sup_n→∞∫_0^1∫_0^1𝒥_nΓ_n(α_𝒥_n,1(x),α_𝒥_n,1(y)) 1_B_n(x) 1_B_n(y)- 12ε γ_𝒥_n(x)=α_𝒥_n,1(x) for every x∈ B_n= lim sup_n→∞∫_B_n∫_B_n𝒥_nΓ_n(γ_𝒥_n(x),γ_𝒥_n(y))- 12ε= lim sup_n→∞∫_B_n∫_B_nΓ_n(x,y)- 12ε (<ref>)≥ lim sup_n→∞∫_B_n∫_B_nW(x,y)+ 12ε 1_B_nw^*→ψ= ∫_0^1∫_0^1W(x,y)ψ(x)ψ(y)+ 12ε . The initial step when we “shift the sets B_n to the left” crucially relies on the Euclidean order on I. This order is needless for the theory of graphons, i.e., graphons can be defined on a square of an arbitrary atomless separable probability space Ω. A linear order on Ω can be always introduced additionally, as Ω is measure-isomorphic to I. So, while our results work in full generality for an arbitrary Ω, we wonder if our argument can be modified so that the proof would naturally work without assuming a linear structure of the underlying probability space.§ PROOF OF THEOREM <REF><REF>The bulk of the proof is given after proving the following key lemma. For every sequence Γ_1,Γ_2,Γ_3,…:I^2→[0,1] of graphons there exists a subsequence Γ_k_1,Γ_k_2,Γ_k_3,… such thatinf{_f(W) W∈(Γ_k_1,Γ_k_2,Γ_k_3,…)}=inf{_f(W) W∈(Γ_k_1,Γ_k_2,Γ_k_3,…)} . We start by finding countably many subsequences 𝒮_1,𝒮_2,𝒮_3,… of the sequence Γ_1,Γ_2,Γ_3,… such that for every natural number n we have: (i) 𝒮_n+1 is a subsequence of 𝒮_n, and(ii) there exists W_n+1∈(𝒮_n+1) such that_f(W_n+1)<inf{_f(W) W∈(𝒮_n)}+ 1n . This is done by induction. In the first step, we just define the sequence 𝒮_1 to be the original sequence Γ_1,Γ_2,Γ_3,…. Next suppose that we have already defined the subsequence 𝒮_n for some natural number n. Then there is a graphon W_n+1∈(𝒮_n) such that_f(W_n+1)<inf{_f(W) W∈(𝒮_n)}+ 1n .Now we find a subsequence 𝒮_n+1 of 𝒮_n such that some versions of the graphons from 𝒮_n+1 converge to W_n+1 in the weak^* topology. This finishes the construction.Now we use the diagonal method to define, for every natural number n, the graphon Γ_k_n to be the nth element of the sequence 𝒮_n. Then we have for every n thatinf{_f(W) W∈(Γ_k_1,Γ_k_2,Γ_k_3,…)} Γ_k_n,Γ_k_n+1,Γ_k_n+2,… is a subsequence of 𝒮_n≥ inf{_f(W) W∈(𝒮_n)} (<ref>)> _f(W_n+1)- 1n W_n+1∈(𝒮_n+1)⊂(Γ_k_1,Γ_k_2,Γ_k_3,…)≥ inf{_f(W) W∈(Γ_k_1,Γ_k_2,Γ_k_3,…)}- 1n ,and soinf{_f(W) W∈(Γ_k_1,Γ_k_2,Γ_k_3,…)}≥inf{_f(W) W∈(Γ_k_1,Γ_k_2,Γ_k_3,…)} .The other inequality is trivial. 
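Before turning to the proof of Theorem <ref><ref>, here is a toy rendering (ours; the grid discretization and the helper `shift_left` are illustrative assumptions, not part of the argument) of the reordering 𝒥Γ for the ordered partition 𝒥=(B,I∖ B), that is, of “shifting B to the left”, applied to the interlaced chessboards discussed in the overview of the proof above.

```python
# Shifting a set B to the left end of [0,1] is, on a grid, a simultaneous row/column
# permutation; the example below echoes the interlaced chessboards discussed earlier.
import numpy as np

def shift_left(W, mask):
    """W: n x n array sampling a graphon; mask: boolean array marking the grid cells
    of B. Returns the sample of the version corresponding to the partition (B, I\\B)."""
    order = np.concatenate([np.flatnonzero(mask), np.flatnonzero(~mask)])
    return W[np.ix_(order, order)]

# The interlaced chessboard W_k has rectangle integrals close to those of the constant
# 1/2 once k is large, yet shifting one colour class to the left turns it into the 2x2
# bipartite chessboard, whose INT_f is strictly smaller for any strictly concave f.
n, k = 200, 25
x = (np.arange(n) + 0.5) / n
odd = (np.floor(2 * k * x).astype(int) % 2).astype(bool)
W_k = (odd[:, None] != odd[None, :]).astype(float)
half = np.arange(n) >= n // 2
W_bip = (half[:, None] != half[None, :]).astype(float)
print(np.abs(shift_left(W_k, ~odd) - W_bip).max())   # 0.0: the reordered version is W_bip
```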
We can now give the proof of Theorem <ref><ref>.By using Lemma <ref> and by passing to a subsequence, we may assume thatinf{_f(W) W∈(Γ_1,Γ_2,Γ_3,…)}=inf{_f(W) W∈(Γ_1,Γ_2,Γ_3,…)} .We construct the desired subsequence Γ_k_1,Γ_k_2,Γ_k_3,… by the following construction.In the first step, we find a graphon W_1∈(Γ_1,Γ_2,Γ_3,…) such that_f(W_1)<inf{_f(W) W∈(Γ_1,Γ_2,Γ_3,…)}+1 .By Lemma <ref>, there is a partition 𝒥_1 of I into finitely many intervals of positive measure such that |_f(W_1)-_f(W_1^𝒥_1)|<1. Then we clearly have_f(W_1^𝒥_1)<inf{_f(W) W∈(Γ_1,Γ_2,Γ_3,…)}+2 .By Lemma <ref>, the graphon W_1^𝒥_1 is also an element of the set (Γ_1,Γ_2,Γ_3,…), and so there is a sequence Γ_1^1,Γ_2^1,Γ_3^1,… of versions of Γ_1,Γ_2,Γ_3,… that converges to W_1:=W_1^𝒥_1 in the weak^* topology. We define Γ_k_1:=Γ_1, and we also define a sequence q_1^1,q_2^1,q_3^1,… to be the increasing sequence of all natural numbers.Now fix a natural number n and suppose that we have already defined a finite subsequence Γ_k_1,Γ_k_2,…,Γ_k_n of Γ_1,Γ_2,Γ_3,…. Suppose also that for every 1≤ i≤ n, we have already constructed (i) a step-graphon W_i with steps given by some partition 𝒥_i of I into finitely many intervals of positive measure such that 𝒥_i is a refinement of 𝒥_i-1 (if i>1) and such that_f(W_i)<inf{_f(W) W∈(Γ_1,Γ_2,Γ_3,…)}+ 2i ,and(ii) an increasing sequence q_1^i,q_2^i,q_3^i,… of natural numbers which is a subsequence of q_1^i-1,q_2^i-1,q_3^i-1,… (if i>1), together with a sequence Γ_q_1^i^i,Γ_q_2^i^i,Γ_q_3^i^i,… of versions of Γ_q_1^i,Γ_q_2^i,Γ_q_3^i,… which converges to W_i in the weak^* topology and such that (if i>1) for every natural number j and for every intervals K,L∈𝒥_i-1 it holds that∫_K∫_LΓ_q_j^i^i(x,y)=∫_K∫_LΓ_q_j^i^i-1(x,y) . Then we find a graphon W_n+1∈(Γ_1,Γ_2,Γ_3,…) such that_f(W_n+1)<inf{_f(W) W∈(Γ_1,Γ_2,Γ_3,…)}+1n+1 .Find a sequence Γ_q_1^n^n+1,Γ_q_2^n^n+1,Γ_q_3^n^n+1,… of versions of Γ_q_1^n,Γ_q_2^n,Γ_q_3^n,… which converges to W_n+1 in the weak^* topology. For every natural number j, let ϕ_j:I→ I be the measure-preserving almost bijection satisfying Γ_q_j^n^n+1(x,y)=Γ_q_j^n^n(ϕ_j^-1(x),ϕ_j^-1(y)) for a.e. (x,y)∈ I^2 (such an almost-bijection exists as both Γ_q_j^n^n+1 and Γ_q_j^n^n are versions of the same graphon Γ_q_j^n). Let us fix some order of the sets from the partition 𝒥_n. For every j, let ℐ_j be the ordered partition of I consisting of the sets ϕ_j(K), K∈𝒥_n, with the order given by the order of the sets from 𝒥_n. Let r_1,r_2,r_3,… be a subsequence of q_1^n,q_2^n,q_3^n,… such that for every K∈𝒥_n, the sequence 1_ϕ_1(K),1_ϕ_2(K),1_ϕ_3(K),… is convergent in the weak^* topology. Find an accumulation point W_n+1 of the sequence ℐ_1Γ_r_1^n+1,ℐ_2Γ_r_2^n+1,ℐ_3Γ_r_3^n+1,… (in the weak^* topology). By Corollary <ref>, we have_f(W_n+1)≤_f(W_n+1)<inf{_f(W) W∈(Γ_1,Γ_2,Γ_3,…)}+1n+1 .Let s_1,s_2,s_3,… be a subsequence of r_1,r_2,r_3,… such that the sequence ℐ_1Γ_s_1^n+1,ℐ_2Γ_s_2^n+1,ℐ_3Γ_s_3^n+1,… converges to W_n+1 in the weak^* topology. Note that for every natural number j and for every intervals K,L∈𝒥_n, it holds that∫_K∫_Lℐ_jΓ_s_j^n+1(x,y)=∫_ϕ_J(K)∫_ϕ_j(L)Γ_s_j^n+1(x,y) =∫_ϕ_J(K)∫_ϕ_j(L)Γ_s_j^n(ϕ_j^-1(x),ϕ_j^-1(y))=∫_K∫_LΓ_s_j^n(x,y) .By Lemma <ref>, there is a partition 𝒥_n+1 of I into finitely many intervals of positive measure such that 𝒥_n+1 is a refinement of 𝒥_n and such that |_f(W_n+1)-_f(W_n+1^𝒥_n+1)|< 1n+1. 
Then we clearly have_f(W_n+1^𝒥_n+1)<inf{_f(W) W∈(Γ_1,Γ_2,Γ_3,…)}+ 2n+1 .By Lemma <ref>, the graphon W_n+1:=W_n+1^𝒥_n+1 is a limit (in the weak^* topology) of the sequence of some versionsΓ_s_1^n+1,Γ_s_2^n+1,Γ_s_3^n+1,… of the graphons ℐ_1Γ_s_1^n+1,ℐ_2Γ_s_2^n+1,ℐ_3Γ_s_3^n+1,…. By the “moreover” part of Lemma <ref>, we may further assume that for every natural number j and for every intervals P,Q∈𝒥_n+1, we have∫_P∫_Qℐ_jΓ_s_j^n+1(x,y)=∫_P∫_QΓ_s_j^n+1(x,y) ,which, together with (<ref>), easily implies that for every natural number j and for every intervals K,L∈𝒥_n it holds∫_K∫_LΓ_s_j^n+1(x,y)=∫_K∫_LΓ_s_j^n(x,y) .We define Γ_k_n+1:=Γ_s_n+1^n+1, and we also define the sequence q_1^n+1,q_2^n+1,q_3^n+1,… to be the sequence s_1,s_2,s_3,…. This completes the construction of the sequence Γ_k_1,Γ_k_2,Γ_k_3,….Now let W_min be an arbitrary accumulation point (in the weak^* topology) of the sequence Γ_k_1,Γ_k_2,Γ_k_3,…, so that in particular W_min∈(Γ_1,Γ_2,Γ_3,…). It suffices to show that it holds for every n that _f(W_min)≤_f(W_n) as then we clearly have by our choice of the graphons W_1,W_2,W_3,… that_f(W_min)=min{_f(W) W∈(Γ_1,Γ_2,Γ_3,…)} .But for every three natural numbers n<m and j and for every intervals K,L∈𝒥_n it holds by (ii) that∫_K∫_LΓ_q_j^m^m(x,y)=∫_K∫_LΓ_q_j^m^n(x,y) ,and so (as Γ_q_j^n^nw^*→W_n as j→∞ for every n)∫_K∫_LW_m(x,y)=∫_K∫_LW_n(x,y) .It follows that for every n it holds∫_K∫_LW_min(x,y)=∫_K∫_LW_n(x,y) .The rest follows by Lemma <ref>.§ PROOF OF PROPOSITION <REF>As promised, we give two proofs of Proposition <ref>. The first one is somewhat quicker, but uses a theorem of Borgs, Chayes, and Lovász <cit.> about uniqueness of graph limits. More precisely, the theorem states that if U':I^2→[0,1] and U”:I^2→[0,1] are two cut-norm limits of versions Γ_1',Γ_2',Γ_3',… and Γ_1”,Γ_2”,Γ_3”,… of a graphon sequence Γ_1,Γ_2,Γ_3,…, then there exists a graphon U^*:I^2→[0,1] that is a cut-norm limit of versions of Γ_1,Γ_2,Γ_3,…, and measure preserving transformations ψ',ψ”:I→ I such that for almost every (x,y)∈ I^2, U'(x,y)=U^*(ψ'(x),ψ'(y)) and U”(x,y)=U^*(ψ”(x),ψ”(y)). Since then, the result was proven in several different ways, see <cit.>. Also, let us note that while all known proofs of the Borgs–Chayes–Lovász theorem are complicated, none uses the compactness of the space of graphons or the Regularity lemma. So, using this result as a blackbox, we still obtain a self-contained characterization of cut-norm limits in terms of weak^* limits.So, suppose that W:I^2→[0,1] is a limit of versions of Γ_1,Γ_2,Γ_3,… in the cut-norm.By Theorem <ref> and by passing to a subsequence, we may assume that there exists a minimizer W':I^2→[0,1] of _f(·) over (Γ_1,Γ_2,Γ_3,…) which is a limit of versions of Γ_1,Γ_2,Γ_3,… in the cut-norm. Therefore, the Borgs–Chayes–Lovász theorem tells us that there exists a graphon W^*:I^2→ [0,1] and measure preserving maps ψ,ψ':I→ I such that W(x,y)=W^*(ψ(x),ψ(y)) and W'(x,y)=W^*(ψ'(x),ψ'(y)) for almost every (x,y)∈ I^2. Since ψ and ψ' are measure preserving, we get _f(W)=_f(W^*) and _f(W')=_f(W^*). This finishes the proof.Let us now give a self-contained proof of Proposition <ref>.By Theorem <ref> and by passing to a subsequence, we may assume that there exists a minimizer W':I^2→[0,1] of _f(·) over (Γ_1,Γ_2,Γ_3,…) which is a limit of versions Γ'_1,Γ'_2,Γ'_3,… of Γ_1,Γ_2,Γ_3,… in the cut-norm. Suppose that W is a graphon with _f(W)>_f(W'). This in particular means that there exists δ>0 so that W'-U_1>δfor any version U of W. 
We claim that there are no versions of Γ_1,Γ_2,Γ_3,… that converge to W in the cut-norm. Indeed, suppose that such versions Γ^*_1,Γ^*_2,Γ^*_3,… exist. Observe that δ_1(Γ'_n,Γ^*_n)=0 for each n (in fact, the infimum in the definition of δ_1 is attained). Now,<cit.>[Let us stress that <cit.> does not rely on the Borgs–Chayes–Lovász theorem, and has a self-contained, one-page proof.] tells us that 0=lim inf_n 0=lim inf_n δ_1(Γ'_n,Γ^*_n) ≥δ_1(W',W) ,which is a contradiction to (<ref>). § CONCLUDING REMARKS§.§ Specific concave and convex functionsPerhaps the most natural choice of continuous concave function is the binary entropy H.An equivalent characterization to our main result is that the limit graphons are the weak^* limits that maximize _g for a strictly convex function g. The most interesting instance of this version of the statement is that the limit graphons are weak^* limits maximizing the L^2-norm. Note that the L^2-norm is an infinitesimal counterpart to the notion of the “index” commonly used in proving the regularity lemma. §.§ Regularity lemmas as a corollaryWhile the cut-distance is most tightly linked to the weak regularity lemma of Frieze and Kannan <cit.>, a short reduction given in <cit.> shows that Theorem <ref> implies also Szemerédi's regularity lemma <cit.>, and its “superstrong” form, <cit.>. So, it is possible to obtain these regularity lemmas using the approach from this paper.[With a notable drawback that we do not obtain any quantitative bounds.]The most remarkable difference of the current approach is that it does not use iterative index-pumping, as we explained in Section <ref>. That is, in our proof one refinement is sufficient for the argument. Such a shortcut is available only in the limit setting, it seems. §.§ A conjecture about finite graphsLet f:[0,1]→ℝ be an arbitrary continuous and strictly concave function. Suppose that G is an n-vertex graph, and let 𝒫=(P_i)_i=1^k be a partition of V(G) into non-empty sets. Recall the notion of _f(G;𝒫) and of densities d_ij defined in Section <ref>. We believe that a partition that minimizes _f(G;·), when we range over all partitions 𝒬 of G with a given (but large) number of parts, provides a good approximation of G in the sense of the weak regularity lemma. To formulate this conjecture, let us say that a partition 𝒬 is an _f-minimizing partition with k parts if 𝒬 has k parts and for any partition 𝒫 of V(G) with k parts we have _f(G;𝒬)≤_f(G;𝒫). Suppose that f:[0,1]→ℝ is a continuous and strictly concave function, and that ϵ>0 is given. Then there exist numbers M,n_0 so that the following holds for each graph G of order at least n_0. If 𝒬 is an _f-minimizing partition of V(G) with M parts then 𝒬 is also weak ϵ-regular.This is a finite counterpart of our main result. Indeed, the space of weak* limits of graphons in Theorem <ref> corresponds to an averaging over infinitesimally small sets, while in Conjecture <ref> we range only over partitions with M parts. Of course, the much finer partitions considered in Theorem <ref> provide an “ϵ=0 error”. If true, Conjecture <ref> would provide a more direct link between regularity and index-like parameters than the index-pumping lemma.While we were not able to prove Conjecture <ref>, let us present here a quick proof of a somewhat weaker statement. Suppose that f:[0,1]→ℝ is a continuous and strictly concave function, and that ϵ>0 is given. Then there exist a finite set X⊂ℕ so that for each graph G there exists M∈ X with the following property. 
If 𝒬 is an _f-minimizing partition of V(G) with M parts then 𝒬 is also weak ϵ-regular.Actually to prove Proposition <ref>, one just needs to go through the proof of the weak regularity lemma. For simplicity, let us assume that h:x↦ -x^2 is the negative of the usual “index” used in the proof of the weak regularity lemma. Let us take X:={1,2,4,…,2^⌈4/ϵ^2⌉} . Suppose for a contradiction that for each i∈ X, there is an _f-minimizing partition 𝒞_i with i parts which is weak ϵ-irregular. Then Lemma <ref> assert that there exists a partition 𝒫_i+1 with 2i parts such that _h(G;𝒫_i+1)< _h(G;𝒞_i)-ϵ^2/4. In particular, we have0≥_h(G;𝒞_1) > _h(G;𝒫_2)+ϵ^2/4≥_h(G;𝒞_2)+ϵ^2/4> _h(G;𝒫_3)+2·ϵ^2/4≥_h(G;𝒞_3)+2·ϵ^2/4>…> _h(G;𝒫_i+1)+i·ϵ^2/4≥_h(G;𝒞_i+1)+i·ϵ^2/4>…> _h(G;𝒫_⌈4/ϵ^2⌉+1)+⌈4/ϵ^2⌉·ϵ^2/4 .This is a contradiction to the fact that _h(·;·)≥ -1.One could consider even a “stability version” of Conjecture <ref>. That is, it may be that if _f(G;𝒬) is close to the minimum of _f(G;𝒫) over partitions 𝒫 with M parts, then 𝒬 is weak ϵ-regular. For example, repeating the proof of Proposition <ref> for a set X={1,2,4,…,2^⌈8/ϵ^2⌉}, we get there exists M∈ X so that any partition 𝒬 with M parts for which_x↦ -x^2(G;𝒬)≤ϵ^2/8+min{_x↦ -x^2(G;𝒫): }is weak ϵ-regular.Also, Conjecture <ref> could be asked for other versions of the regularity lemma. §.§ Attaining the infimum in Theorem <ref><ref>Theorem <ref><ref> states that there exist a subsequence of graphons Γ_k_1,Γ_k_2,Γ_k_3,… such that the infimum of _f(· ) over the set (Γ_k_1,Γ_k_2,Γ_k_3,…) is attained. Recently, Jon Noel showed us that passing to a subsequence is really needed. That is, taking f to be the binary entropy function, he constructed a sequence of graphons Γ_1,Γ_2,Γ_3,… such that inf{_f(Γ):Γ∈(Γ_1,Γ_2,Γ_3,…)}=0 but there exists no Γ∈(Γ_1,Γ_2,Γ_3,…) with _f(Γ)=0. To this end, take (W_ℓ)_ℓ=1^∞ to be rescaled adjacency matrices of a sequence of quasirandom graphs with edge density say 0.5, but replacing in each adjacency matrix one diagonal element (now represented by a square S_ℓ of size 1/ℓ×1/ℓ) by value say 0.7. Let (Γ_n)_n=1^∞ be asequence in which each graphon W_ℓ occurs infinitely many times.Firstly, we claim that inf{_f(Γ):Γ∈(Γ_1,Γ_2,Γ_3,…)}=0. To see this, take ℓ large. Taking a subsequence Γ_k_1,Γ_k_2,Γ_k_3,… which consists only of copies of W_ℓ, we see that W_ℓ∈(Γ_1,Γ_2,Γ_3,…). Now, _f(W_ℓ)=∫_x∫_y f(W_ℓ(x,y))≤1/ℓ^2, since the integrand is zero everywhere except S_ℓ.Secondly, we claim that there is no graphon in (Γ_1,Γ_2,Γ_3,…) with zero entropy. Indeed, let us consider a weak* limit W of an arbitrary sequence of versions of W_ℓ_1,W_ℓ_2,W_ℓ_3,…. There are two cases. If the sequence ℓ_1,ℓ_2,ℓ_3,… is unbounded then quasirandomness of the graphons implies that W≡1/2. The other case is when one index ℓ repeats infinitely many times. In that case, due to the value of 0.7 on S_ℓ, the graphon W cannot be {0,1}-valued, as the next lemma shows.Suppose that Λ is an arbitrary probability measure space with a probability measure λ, and α>0. Suppose that (A_s)_s=1^∞ is a sequence of functions, A_s:Λ→[0,1], which converges weak* to a function A. Suppose further that λ(R_s)≥α for each s∈ℕ, where R_s:={x∈Λ: A_s=0.7}. Then A is not {0,1}-valued. Suppose for a contradiction that A is {0,1}-valued. Let X_0=A^-1(0) and X_1=A^-1(1). Then for each s∈ℕ, we have λ(R_s∩ X_0)≥α/2 or λ(R_s∩ X_1)≥α/2. Let us consider the case that the set I_0 of indices s for which the former inequality occurs is infinite; the other case being analogous. 
For each s∈ I_0 we have

∫_X_0 A_s=∫_X_0∩ R_s A_s+∫_X_0∖ R_s A_s≥ 0.7·λ(X_0∩ R_s)+0·λ(X_0∖ R_s)≥ 0.7·α/2 .

On the other hand, ∫_X_0A=0. So the set X_0 witnesses that the functions (A_s)_s∈ I_0 do not weak* converge to A, a contradiction.

In either of the two cases above, W has positive entropy.

§.§ Hypergraphs

The theory of limits of dense hypergraphs of a fixed uniformity was worked out in <cit.> (using ultraproduct techniques) and in <cit.> (using hypergraph regularity lemma techniques), and is substantially more involved. It seems that the current approach may generalize to the hypergraph setting. This is currently work in progress.

§.§ Role of weak* limits for other combinatorial structures

In this paper, we have shown how to use weak* limits for sequences of graphs to obtain cut-distance limits. In the previous section we indicated that a similar approach may lead to a construction of limits of hypergraphs of fixed uniformity. Of course, one can ask which other limit concepts can be approached by considering weak* limits as an intermediate step. Let us point out that limits of permutations (permutons) are particularly simple in this sense: limits (in the “cut-distance” sense) of permutations arise simply by taking weak limits (here, it is weak rather than weak* convergence, but the difference is not important) of certain objects associated directly to permutations. That is, no counterpart to our entropy minimization step is necessary, and every weak limit already has the desired combinatorial properties. See <cit.>. These are, to the best of our knowledge, the only combinatorial structures for which weak/weak* convergence has been used.

§.§ Minimization with respect to different concave functions

Suppose that f and g are two different strictly concave functions. Then for two graphons Γ_1 and Γ_2, we can have, for example, _f(Γ_1)<_f(Γ_2) but _g(Γ_1)>_g(Γ_2). As a (perhaps somewhat surprising) by-product of our main results, no such inconsistency can occur when searching for global minima over the space of weak* limits. That is, a graphon achieves the minimum of _f on the space of weak* limits if and only if it achieves the minimum of _g. We do not know of a more direct proof of this fact.

§.§ Recent developments

After this paper was made available on arXiv in May 2017, the relation between the cut distance and the weak* topology was studied in more detail in <cit.> and <cit.>. The main novel feature in <cit.> is an abstract approach which makes it possible to identify convergent subsequences and cut distance limits without minimizing any parameter over the space of weak* limits. The two main theorems in <cit.> that were inspired by the present paper are the following:

Suppose that Γ_1,Γ_2,Γ_3,…:I^2→[0,1] is a sequence of graphons. Then there exists a subsequence Γ_k_1,Γ_k_2,Γ_k_3,… such that

(Γ_k_1,Γ_k_2,Γ_k_3,…)=(Γ_k_1,Γ_k_2,Γ_k_3,…) .

Suppose that Γ_1,Γ_2,Γ_3,…:I^2→[0,1] is a sequence of graphons. Then this sequence is cut-distance convergent if and only if

(Γ_1,Γ_2,Γ_3,…)=(Γ_1,Γ_2,Γ_3,…) .

In particular, note that Theorem <ref> substantially generalizes Lemma <ref>. Actually, investigating possible generalizations of Lemma <ref> was the starting point for <cit.>.

Besides this abstract approach, some further graphon parameters that can replace _f(·) in Theorem <ref> are found in <cit.>. These include, for example, the negative of the density of any even cycle, -t(C_2ℓ,·).
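To make the last example concrete, let us recall the standard formula for the density of the 4-cycle in a graphon Γ (this is classical background recalled for the reader's convenience, not a result of the papers cited above):

t(C_4,Γ)=∫_I^4Γ(x_1,x_2)Γ(x_2,x_3)Γ(x_3,x_4)Γ(x_4,x_1) dx_1 dx_2 dx_3 dx_4=∑_iλ_i^4 ,

where λ_1,λ_2,… are the eigenvalues of the Hilbert–Schmidt operator associated with Γ; the analogous identity t(C_2ℓ,Γ)=∑_iλ_i^2ℓ holds for every even cycle. In particular, t(C_2ℓ,Γ)^{1/(2ℓ)} is a Schatten-type norm of this operator, which, at least heuristically, indicates why maximizing t(C_2ℓ,·) plays a role similar to the L^2-norm maximization mentioned at the beginning of these concluding remarks.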
On the other hand, in another very recent paper, Král', Martins, Pach, and Wrochna <cit.> identify a large class of (bipartite) graphs H for which -t(H,·) fails to identify cut distance limits. The problem of characterizing the graphs H with this property is related to Sidorenko's conjecture and to norming graphs, a notion motivated by a question of Lovász and first studied in <cit.>.

Also, the machinery introduced in <cit.> gives a short proof of a version of Theorem <ref> which even allows one to drop the continuity requirement on f.

§ ACKNOWLEDGEMENTS

This work was done while Jan Hladký was enjoying the lively atmosphere of the Institute for Geometry at TU Dresden, hosted there by Andreas Thom.

We thank Dan Král and Oleg Pikhurko for encouraging conversations on the subject, Jon Noel for comments on an earlier version of the manuscript, and Svante Janson and Guus Regts for bringing several important references to our attention. We also thank Jon Noel for his contribution included in Section <ref>.

Finally, we thank two anonymous referees for their comments, and, in particular, for pointing out a gap in the proof of Lemma <ref>.

§ THE WEAK^* TOPOLOGY

Suppose that X is a Banach space and denote by X^* its dual. Then the weak^* topology on X^* is the coarsest topology on X^* such that all mappings of the form X^*∋ x^*↦ x^*(x), x∈ X, are continuous. Recall that if the space X is separable then, by the sequential Banach–Alaoglu Theorem (see e.g. <cit.>), the unit ball of X^* is sequentially compact in the weak^* topology. This means that every bounded sequence of elements of the dual space X^* contains a weak^*-convergent subsequence.

In this paper, we are interested in the case when X is the Banach space L^1(Ω) of all integrable functions on some probability space Ω. (Depending on our needs, the probability space Ω will be chosen to be either the unit interval I equipped with the one-dimensional Lebesgue measure or the unit square I^2 equipped with the two-dimensional Lebesgue measure.) The space L^1(Ω) is equipped with the norm ‖f‖_1=∫_Ω|f(x)|, f∈ L^1(Ω). In this setting, the dual X^*=(L^1(Ω))^* is isometric to the space L^∞(Ω) of all essentially bounded measurable functions on Ω, equipped with the norm ‖g‖_∞=ess sup_x∈Ω|g(x)|. The duality between L^1(Ω) and L^∞(Ω) is given by the formula ⟨ g,f⟩=∫_Ωf(x)g(x) for g∈ L^∞(Ω) and f∈ L^1(Ω). This means that a sequence g_1,g_2,g_3,… of elements of L^∞(Ω) converges to g∈ L^∞(Ω) in the weak^* topology if and only if lim_n→∞∫_Ωf(x)g_n(x)=∫_Ωf(x)g(x) for every f∈ L^1(Ω).

Now consider the Banach space X=L^1(I^2) of all integrable functions defined on the unit square I^2 (which is equipped with the two-dimensional Lebesgue measure). Standard arguments show that, on norm-bounded subsets of the dual space L^∞(I^2) (which is all we need here), the weak^* topology can be equivalently generated by mappings of the form L^∞(I^2)∋ g↦∫_A∫_Bg(x,y), where A,B are measurable subsets of I. That is, the weak^* topology is already generated by the characteristic functions of measurable rectangles (instead of requiring all integrable functions on I^2). If we restrict this topology to the space of all graphons W:I^2→[0,1] defined on I^2, then it is easy to see that the restricted topology is already generated by the mappings of the form W↦∫_A∫_AW(x,y), where A is a measurable subset of I (this is because each graphon is symmetric by definition). This is the topology we refer to when we talk about convergence of graphons in the weak^* topology.
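For the reader's convenience, let us spell out the symmetry (polarization) argument behind the last “easy to see” claim. For a symmetric W and disjoint measurable sets A,B⊆ I we have

∫_A∪ B∫_A∪ BW=∫_A∫_AW+∫_B∫_BW+2∫_A∫_BW ,

and hence ∫_A∫_BW=1/2(∫_A∪ B∫_A∪ BW-∫_A∫_AW-∫_B∫_BW); for general (not necessarily disjoint) sets A and B one first decomposes them into the disjoint pieces A∖ B, B∖ A and A∩ B. Thus the diagonal functionals W↦∫_A∫_AW determine, by finite linear combinations, all the rectangle functionals W↦∫_A∫_BW, and therefore they generate the same topology on the space of graphons.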
So this means that a sequence W_1,W_2,W_3,… of graphons defined on I^2 converges in the weak^* topology to a graphon W defined on I^2 if and only if lim_n→∞∫_A∫_AW_n(x,y)=∫_A∫_AW(x,y) for every measurable subset A of I. Note that the space of all graphons defined on I^2 is a weak^* closed subset of the unit ball of L^∞(I^2), and so it is sequentially compact by the sequential Banach–Alaoglu Theorem (as the space L^1(I^2) is separable).
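As an illustrative aside (this is not part of the original argument, and all names and parameters below are our own choices), the rectangle criterion above can be tested numerically on step graphons. The following Python sketch discretizes graphons as n×n symmetric matrices and imitates Jon Noel's construction from Section <ref>: W_ℓ is the adjacency matrix of a random (hence, with high probability, quasirandom) graph of density 1/2 in which one diagonal block of relative size 1/ℓ is overwritten by the value 0.7. For a fixed step set A, the quantity ∫_A∫_AW_ℓ should approach (1/2)·λ(A)^2, the value for the constant graphon 1/2, as ℓ grows.

# Illustrative sketch only: graphons are discretized as n-by-n symmetric
# matrices (step graphons with n equal steps), and the rectangle functional
# int_A int_A W is evaluated for a step set A.
import numpy as np

rng = np.random.default_rng(seed=1)

def noel_step_graphon(n: int, ell: int) -> np.ndarray:
    """Symmetric 0/1 matrix of density ~1/2 with a 0.7-block of side n//ell."""
    upper = np.triu(rng.integers(0, 2, size=(n, n)), k=1).astype(float)
    W = upper + upper.T                      # symmetric, zero diagonal
    m = max(n // ell, 1)
    W[:m, :m] = 0.7                          # the square S_ell of measure (1/ell)^2
    return W

def square_integral(W: np.ndarray, A_mask: np.ndarray) -> float:
    """Approximates int_A int_A W(x,y) dx dy; each matrix cell has measure 1/n^2."""
    n = W.shape[0]
    idx = np.flatnonzero(A_mask)
    return float(W[np.ix_(idx, idx)].sum()) / n**2

n = 2000
A_mask = rng.random(n) < 0.3                 # a step set A with lambda(A) ~ 0.3
target = 0.5 * A_mask.mean()**2              # value for the constant graphon 1/2

for ell in (2, 10, 100):
    W_ell = noel_step_graphon(n, ell)
    print(f"ell = {ell:3d}:  int_A int_A W_ell = {square_integral(W_ell, A_mask):.5f}"
          f"   vs  (1/2)*lambda(A)^2 = {target:.5f}")

Of course, this is only a finite caricature: genuine weak^* convergence requires the approximation to hold for every measurable set A simultaneously, which no finite computation can certify.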
"authors": [
"Martin Dolezal",
"Jan Hladky"
],
"categories": [
"math.CO",
"math.FA"
],
"primary_category": "math.CO",
"published": "20170525131517",
"title": "Cut-norm and entropy minimization over weak* limits"
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.