Dataset schema:
  entry_id          string (length 33)
  published         string (length 14)
  title             string (length 17 to 188)
  authors           sequence
  primary_category  string (length 5 to 18)
  categories        sequence
  text              string (length 2 to 629k)
http://arxiv.org/abs/2307.04942v1
20230711001445
Benchmarking Algorithms for Federated Domain Generalization
[ "Ruqi Bai", "Saurabh Bagchi", "David I. Inouye" ]
cs.LG
[ "cs.LG" ]
While prior domain generalization (DG) benchmarks consider train-test dataset heterogeneity, we evaluate Federated DG, which introduces federated learning (FL) specific challenges. Additionally, we explore domain-based heterogeneity in clients' local datasets—a realistic Federated DG scenario. Prior Federated DG evaluations are limited in terms of the number or heterogeneity of clients and dataset diversity. To address this gap, we propose a Federated DG benchmark methodology that enables control of the number and heterogeneity of clients and provides metrics for dataset difficulty. We then apply our methodology to evaluate 13 Federated DG methods, which include centralized DG methods adapted to the FL context, FL methods that handle client heterogeneity, and methods designed specifically for Federated DG. Our results suggest that, despite some progress, there remain significant performance gaps in Federated DG, particularly when evaluating with a large number of clients, high client heterogeneity, or more realistic datasets. Please check our extendable benchmark code here: https://github.com/inouye-lab/FedDG_Benchmark.

§ INTRODUCTION

Domain generalization (DG) <cit.> formalizes a special case of train-test heterogeneity in which the training algorithm has access to data from multiple source domains, but the ultimate goal is to perform well on data from an unseen test domain—i.e., a type of out-of-distribution generalization instead of the standard in-distribution generalization. While most prior DG work focuses on centralized algorithms, another natural context is federated learning (FL) <cit.>, a distributed machine learning context that assumes each client or device owns a local dataset. These local datasets could exhibit heterogeneity, which we call client heterogeneity (e.g., class imbalance between clients). Although train-test heterogeneity (in DG) and client heterogeneity (in FL) are independent concepts, both can be naturally defined in terms of domain datasets. For example, suppose a network of hospitals aims to use FL to train a model to predict a disease from medical images. Because the equipment differs across hospitals, it is natural to assume that each hospital contains data from different domains or environments (or possibly a mixture of domains if it is a large hospital)—this is a case of domain-based client heterogeneity. Yet the trained model should be robust to changes in equipment within a hospital or to deployment in a new hospital that joins the network—both are cases of domain-based train-test heterogeneity. The interaction between these two types of heterogeneity produces new algorithmic and theoretical challenges, yet it may also produce new insights and capabilities. Solutions to Federated DG with both types of heterogeneity could increase the robustness and usefulness of FL approaches because the assumptions align more naturally with real-world scenarios than assuming the datasets are i.i.d. This could enable training on partial datasets, increase the robustness of models to benign spatial and temporal shifts, and reduce the need for retraining. In the centralized regime, various approaches have been proposed for DG, including feature selection, feature augmentation, etc.
Most of these methods are not applicable in the FL regime, which poses unique challenges. In the FL regime, client heterogeneity has long been considered a statistical challenge since FedAvg <cit.>, which experimentally shows that FedAvg effectively mitigates some client heterogeneity. There are many other extensions of the FedAvg framework that tackle heterogeneity among clients in FL, for example using variance reduction methods <cit.>. An alternative setup in FL, known as the personalized setting, aims to learn personalized models for different clients to tackle heterogeneity, for example <cit.>. However, none of these works consider model robustness under domain shift between training and testing data. Recently, a few works in the FL regime tackling DG <cit.> have been proposed; however, their evaluations are limited in the following senses: 1) The evaluation datasets are limited in the number and diversity of domains. 2) The evaluations are restricted to the case where the number of clients equals the number of domains, which may be an unrealistic assumption (e.g., a hospital that has multiple imaging centers or a device that is used in multiple locations). The case where the number of clients is massive is of both theoretical and practical interest. 3) None of the works consider the effect of the number of communication rounds. We provide an overview of the tasks in <ref>, considering both the heterogeneity between training and testing datasets (standard vs. domain generalization) and among clients (domain client heterogeneity). While some studies have addressed the standard supervised learning task, there is a need for a fair evaluation to understand the behavior of domain generalization algorithms in the FL context under these new challenges.

There are several benchmark datasets available for evaluating domain generalization (DG) methods in the centralized setting. These benchmarks, such as DomainBed <cit.> and WILDS <cit.>, provide multiple datasets that are suitable for assessing the performance of DG algorithms. However, they do not explicitly consider the unique challenges that arise in the federated learning (FL) setting. On the other hand, there are also benchmarks specifically designed for FL. For instance, the LEAF benchmark <cit.> provides a standardized framework for evaluating FL algorithms. It includes several datasets from various domains and allows researchers to assess the performance of their algorithms in a realistic FL scenario. Another benchmark for FL is PFLBench <cit.>, which focuses on evaluating personalized FL methods. PFLBench provides 12 datasets covering various applications. Though these FL-based benchmarks consider statistical heterogeneity, they fail to consider the DG task adequately. Moreover, the level of statistical heterogeneity present in these datasets is insufficient for proper DG evaluation. In summary, DG benchmarks do not consider FL challenges, and FL benchmarks do not consider DG challenges.

Major contributions: We develop a benchmark methodology for evaluating Federated DG with various client heterogeneity contexts and diverse datasets, and we evaluate representative Federated DG approaches with this methodology. 1) We propose the first Federated DG benchmark methodology, including four important dimensions of the experimental setting (see <ref>).
2) We propose a standardized definition of domain-based client heterogeneity that is unique to the FL context and interpolates between domain homogeneity and domain separation (see <ref>) while controlling class imbalance. In particular, we develop a novel method to split any dataset with domain labels across any number of clients (see <ref>). 3) We compare three broad approaches to Federated DG: centralized DG methods naïvely adapted to the FL setting, FL methods developed for client heterogeneity (e.g., class imbalance), and recent methods specifically designed for Federated DG. Our results indicate that there still exist significant gaps and open research directions in Federated DG. 4) We will release an extendable open-source library for evaluating Federated DG methods.

Notation. Let [A] := {1,2,⋯, A} denote the set of integers from 1 to A. Let d ∈ [D] denote the d-th domain out of D total training domains, and similarly let c ∈ [C] denote the c-th client out of C total clients. Let 𝒟⊆ [D] denote a subset of domain indices. Let ℒ(θ; p) denote a generic objective with model parameters θ given a distribution p, which is approximated via samples from p. Let p_d and p_c denote the distribution of the d-th domain and the c-th client, respectively. Let 𝒮 denote a set of samples.

§ APPROACHES TO FEDERATED DOMAIN GENERALIZATION

We first briefly review the DG problem, extend it to the Federated DG problem, and explain domain-based client heterogeneity. Then, we discuss three categories of Federated DG approaches.

§.§ Problem Background and Setup

Domain generalization (train-test heterogeneity). Unlike standard ML, which assumes the train and test data are independent and identically distributed (i.i.d.), the ultimate goal of DG is to minimize the average or worst-case loss over the test domain distributions when only samples from the train domain distributions are given. Formally, given a set of train domain distributions { p_d : d ∈ 𝒟_train }, minimize the average or worst-case loss over the test domain distributions, i.e., min_θ 1/|𝒟_test| ∑_{d ∈ 𝒟_test} ℒ(θ; p_d) or min_θ max_{d ∈ 𝒟_test} ℒ(θ; p_d), where ℒ(θ; p_d) = 𝔼_{(x,y) ∼ p_d}[ℓ(x,y; θ)] and ℓ is a per-sample loss function such as the squared or cross-entropy loss. The key challenge in DG is that the train and test domain distributions are disjoint, i.e., 𝒟_train ∩ 𝒟_test = ∅, and thus the method must be able to generalize beyond the train domain distributions to perform well on the test domain distributions. The naïve approach is to simply ignore the domains and perform empirical risk minimization (ERM) on all the training data—which is actually a challenging baseline to outperform in practice <cit.>.

Federated DG. Federated DG adds a layer of complexity because now the domain distribution samples are not centrally located. Instead, each FL client can update their local model based only on their local client distribution p_c and then pass their local parameters θ_c to a central server, which will aggregate the local models and broadcast the model to all clients.
The FL problem can be abstracted as follows: ∀ c ∈ [C], the c-th client locally optimizes the objective ℒ(θ; p_c) initialized at θ_init = θ_global to obtain θ_c (locally optimize given the local distribution p_c), and the server computes θ_global by aggregating the client model parameters θ_1, θ_2, ⋯, θ_C (aggregate client model parameters on the server). Here, the client distributions may be homogeneous (i.e., ∀ (c,c'), p_c = p_c') or heterogeneous (i.e., ∃ c≠ c', p_c≠ p_c'), and the most common aggregator is simply a (weighted) average of the client parameters, which corresponds to FedAvg <cit.>.

Domain-based client heterogeneity. While client heterogeneity (i.e., ∃ c≠ c', p_c≠ p_c') is often expressed as label imbalance, i.e., p_c(y) ≠ p_c'(y), we make a domain-based client heterogeneity assumption that each client distribution is a (different) mixture of train domain distributions, i.e., p_c(x,y) = ∑_{d ∈ [D]} w_{c,d} p_d(x,y), where w_{c,d} is the weight of the d-th domain for the c-th client. At one extreme, FL with i.i.d. data would be equivalent to the mixture proportions being the same across all clients, i.e., ∀ c, c', d, w_{c,d} = w_{c',d}, which we call the homogeneous setting. At the other extreme, if the number of clients and the number of domains are equal (D=C), the domain weights could be disjoint between clients such that each client “owns” one or more domains, i.e., for any domain d∈[D], w_{c,d} > 0 ⇔ w_{c',d} = 0, ∀ c' ≠ c, which we call domain separation (in <ref>, we extend domain separation to the case D≠ C). Finally, we use λ to denote an interpolation parameter between these two extremes: the homogeneous case is λ = 1 and the domain separation case is λ=0. We give details of an explicit algorithm for splitting datasets for evaluation in <ref>, which covers the two extremes and the interpolation cases λ∈(0,1). <ref> summarizes the train-test heterogeneity in DG and the domain-based client heterogeneity from the FL context, where we focus on Federated DG.

Overview of Federated DG methods. In this benchmark study, we explore three categories of Federated DG methods: DG methods originally designed for the centralized setting, FL methods specifically tailored to handle client heterogeneity, and methods specifically designed for Federated DG. To provide a comprehensive evaluation, we assess the performance of several representative methods from each of these categories and compare to vanilla FedAvg <cit.> with an ERM loss, where any heterogeneity is simply ignored. These methods are selected based on their prominence and potential effectiveness for Federated DG. To ensure a diverse range of evaluation scenarios, we conduct experiments on various datasets and under various heterogeneity settings. Our results provide insights into their relative performance and suitability for different Federated DG contexts. This benchmark study aims to offer a broad overview of the current popular methods, enabling researchers and practitioners to make informed decisions when tackling Federated DG.

§.§ Centralized DG methods adapted to the FL setting

The first natural choice is directly migrating DG methods from the centralized regime. To adapt these methods, we simply run the centralized DG method at each client locally on its own local dataset (see <ref> for how the local datasets are created) and then compute an average of model parameters at each communication round (discussed below).
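To make this local-training-plus-averaging loop concrete, the following is a minimal sketch, assuming a toy least-squares objective and NumPy parameter vectors; the helper names (local_update, fedavg_round) and the hyperparameters are illustrative assumptions, not the benchmark's actual implementation.

```python
# Minimal sketch of the FedAvg-style abstraction above: each client locally
# optimizes L(theta; p_c) starting from the global parameters, and the server
# aggregates via a (weighted) average of the client parameters.
import numpy as np

def local_update(theta_global, client_data, lr=0.1, epochs=1):
    """One client's local optimization (here: linear least squares via gradient steps)."""
    theta = theta_global.copy()
    X, y = client_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ theta - y) / len(y)   # gradient of the squared loss
        theta -= lr * grad
    return theta

def fedavg_round(theta_global, clients):
    """Aggregate client parameters, weighted by local sample counts."""
    thetas, weights = [], []
    for client_data in clients:
        thetas.append(local_update(theta_global, client_data))
        weights.append(len(client_data[1]))
    weights = np.array(weights, dtype=float) / sum(weights)
    return sum(w * t for w, t in zip(weights, thetas))

# Toy usage: 3 clients, each holding samples from a (possibly different) domain mixture.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(3)]
theta = np.zeros(5)
for _ in range(10):                      # communication rounds
    theta = fedavg_round(theta, clients)
```

Centralized DG methods adapted to FL simply replace the local squared-loss update with the DG method's own local objective, while the server-side averaging step stays the same.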
This approach is straightforward for the homogeneous (λ=1) and heterogeneous (λ=0.1) settings, where each client has data from all training domains—albeit quite imbalanced for λ=0.1. This can be seen as biased updates at each client based on biased local data. In the domain separation case λ=0 (i.e., when every client has only one primary domain, i.e., ∀ c, |𝒟_c| = 1), this simple approach cannot be applied because centralized DG methods require data from at least two domains. In fact, these centralized DG methods degenerate to vanilla FedAvg if there is only one domain per client. Extending these methods to the case where all clients have only one domain without violating the FL constraints is an interesting direction for future work. A predominant and effective centralized DG approach is representation learning, including domain-invariant representation learning, which learns domain-invariant features via either kernel methods <cit.> or invariant risk minimization <cit.>, and domain adversarial neural networks <cit.> and <cit.>, which explicitly try to align the feature representation distributions. Besides invariant representations, there are other general learning strategies that promote generalization ability, such as gradient operations <cit.>, which try to learn generalized representations by directly operating on gradients. Other approaches include distributionally robust optimization <cit.>, which learns under the worst-case distribution scenario over training domains, and meta-learning <cit.>, which is based on the learning-to-learn mechanism and learns general knowledge by constructing meta-learning tasks to simulate domain shift. We selected IRM <cit.>, Fish <cit.>, MMD <cit.>, Mixup <cit.>, DeepCoral <cit.>, and GroupDRO <cit.> from this category for their representative mechanisms.

§.§ FL methods tackling client heterogeneity

Another line of research in FL aims to guarantee convergence even under client heterogeneity, but these FL-based methods still assume that the train and test distributions do not shift (i.e., they do not explicitly tackle the train-test heterogeneity of the domain generalization task). The empirical observation of the statistical challenge in federated learning when local data is non-IID was first made by <cit.>. Several subsequent works have analyzed client heterogeneity by assuming bounded gradients <cit.> or bounded gradient dissimilarity <cit.>, and additionally assuming bounded Hessian dissimilarity <cit.>. From this category, we selected FedProx <cit.>, which addresses statistical heterogeneity by adding a proximal term to the local subproblem, constraining local updates to be closer to the initial global model, and Scaffold <cit.>, which utilizes variance reduction to account for client heterogeneity.

§.§ FL methods designed for Federated DG

Limited research has focused explicitly on solving Federated DG by design. FedDG <cit.> introduced a specific FL paradigm for medical image classification, which involves sharing the amplitude spectrum of images among local clients, violating the privacy protocol. Another approach, FedADG <cit.>, utilizes a generative adversarial network (GAN) framework in FL, where each client contains four models: a featurizer, a classifier, a generator, and a discriminator. FedADG first trains the featurizer and classifier using the empirical loss and then trains the generator and discriminator using the GAN approach. However, this method requires training four models simultaneously and tuning numerous hyperparameters, making convergence challenging.
A novel aggregation method called Federated Gradient Masking Averaging (FedGMA) <cit.> aims to enhance generalization across clients and the global model. FedGMA prioritizes gradient components aligned with the dominant direction across clients while assigning less importance to inconsistent components. FedSR <cit.> proposes a simple algorithm that uses two locally-computable regularizers for domain generalization. Given the limited literature on solving domain generalization (DG) in the federated learning (FL) setting, we selected all the aforementioned algorithms.

§ BENCHMARK METHODOLOGY

In this study, we aim to conduct a comprehensive evaluation of the Federated DG task by considering four distinct dimensions of the problem setup. We evaluate a total of 13 methods, encompassing three different types of approaches.

§.§ Four evaluation dimensions

(1) Domain-based client heterogeneity and (2) number of clients. Previous studies on Federated DG have often focused on domain-separation client heterogeneity where the number of clients equals the number of training domains, i.e., C=D. However, this excludes evaluation of the homogeneous and partially heterogeneous settings and restricts the number of clients to the number of training domains D. In particular, many pseudo-realistic datasets, such as those in DomainBed <cit.>, consist of only D ≤ 5 training domains, which limits the evaluation of methods under scenarios with a large number of clients. Conversely, when using realistic domain datasets that have many domains, such as those in WILDS <cit.>, most current methods perform poorly under this extremely challenging setting. By introducing the domain split method discussed in <ref>, we can explore various levels of client heterogeneity and relax the assumption that C=D, so that we can leverage both pseudo-realistic and realistic datasets and evaluate methods at an appropriate difficulty level.

(3) Dataset difficulty and (4) dataset type. While most Federated DG work focuses on standard image-based datasets, we evaluate methods across a broad range of dataset difficulty (ranging from easy pseudo-realistic datasets to very challenging realistic datasets) and two types of datasets (3 image datasets and 2 text datasets). This ensures that we can fully understand the performance of each method across a wide range of scenarios. Further details can be found in Section <ref>.

§.§ Domain-based client heterogeneity by splitting DG datasets

For evaluation dimensions (1) and (2) above, we need control over the amount of client heterogeneity, ranging from homogeneous to domain separation, and we need control over the number of clients, which may be smaller or larger than the number of domains (i.e., D ≠ C). We propose a way to split any DG dataset into any number of clients that allows explicit control of the amount of heterogeneity through the λ hyperparameter while attempting to balance the number of samples per client. An illustration of our domain splitting procedure can be found in <ref>. In Algorithm <ref>, we provide a concrete algorithm for splitting samples from multiple domains across an arbitrary number of clients C while controlling the amount of domain-based client heterogeneity via λ, ranging from homogeneous clients to domain separation. Our algorithm has two main steps. In Step 1, we assign “primary” domain indices 𝒟_c ⊆ [D] to each client c∈[C] depending on C and D.
If C ≤ D, the domains are sorted in descending order according to sample size and are iteratively assigned to the client c^* that currently has the smallest number of training samples (denoted by ∑_{d'∈𝒟_c^*} n_d'). In this case, the algorithm ensures that no client shares domains with the others but otherwise attempts to balance the total number of training samples between clients. If C>D, we first assign the domains one by one to the first D clients. Then, starting from client c=D+1, we iteratively assign the currently on-average largest domain d^* to c, accounting for the fact that the domain may already be shared by different clients—notationally, “on average” is represented by dividing by ∑_{c'} 1[d ∈𝒟_{c'}]. In this case, some clients may share one domain, but no client holds two domains simultaneously, while otherwise attempting to balance the number of samples across clients as much as possible. In Step 2, we define the sample counts for each client and domain, denoted n_{c,d}(λ), based on the balancing parameter λ∈ [0,1]: n_{c,d}(λ) = λ n_d/C + (1-λ) 1[d ∈𝒟_c]/∑_{c'=1}^C 1[d ∈𝒟_{c'}] n_d, where rounding to integers is carefully handled when the counts are not perfectly divisible and 1[·] is the indicator function. This is simply a convex combination between homogeneous clients (λ=1) and domain separation (λ=0); a short code sketch of this computation appears below. Given the number of samples per client per domain, we simply sample without replacement from the corresponding domain datasets and build up the client datasets.

§.§ Dataset Selection and Dataset Difficulty Metrics

To ensure a comprehensive evaluation of current methods across different difficulty levels, we have curated a range of datasets with varying complexities. We define two dataset metrics to measure dataset difficulty with respect to the DG task and with respect to the FL context. For DG difficulty, we compute R_DG, the ratio of the ERM performance without and with samples from the test domain (i.e., the latter is able to “cheat” by seeing part of the test domain samples during training). For FL difficulty, we attempt to isolate the FL effect by computing R_FL(λ), the ratio of ERM-based FedAvg under client heterogeneity λ over centralized ERM on in-domain test samples. These dataset difficulty metrics can be formalized as follows: R_DG ≜ Perf(𝒮_DG-train, 𝒮'_DG-test) / Perf(𝒮_DG-train ∪ 𝒮''_DG-test, 𝒮'_DG-test) and R_FL(λ) ≜ Perf_FL(𝒮_DG-train, 𝒮_IN-test; λ) / Perf(𝒮_DG-train, 𝒮_IN-test), where Perf denotes the performance of ERM using the first argument for training and the second for testing, Perf_FL is the analogous performance of ERM-based FedAvg with client heterogeneity parameter λ, 𝒮_DG-train denotes samples from the train domains 𝒟_train, 𝒮_DG-test denotes samples from the test domains 𝒟_test, and 𝒮'_DG-test and 𝒮''_DG-test are 20% and 80% splits, respectively, of 𝒮_DG-test. For R_FL(λ), we use 𝒮_IN-test (test samples from the train domains) instead of 𝒮_DG-test to isolate the FL effect from the DG effect. We choose five datasets in our benchmark: FEMNIST from <cit.>, PACS from <cit.>, and IWildCam, CivilComments, and Py150 from <cit.>. We summarize the statistics and difficulty metrics in <ref> and provide the rationale for selecting these datasets in the Appendix.

§ BENCHMARK EXPERIMENTAL RESULTS

In this section, we report the performance of 13 representative methods from three lines of research on 5 different datasets, where the FEMNIST results are provided in the appendix given the simplicity of its DG task (i.e., R_DG≈ 1). For each dataset, we fix the total computation and communication rounds across methods for a fair comparison.
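As a concrete reference for the client-splitting procedure described above (Step 1 assigns primary domains 𝒟_c; Step 2 interpolates per-client sample counts with λ), here is a minimal sketch of the Step-2 computation. It is an illustrative reading of the n_{c,d}(λ) equation, not the released benchmark code; the function name, input format, and the largest-remainder rounding rule are assumptions.

```python
# Illustrative sketch of the lambda-interpolated split (Step 2 above).
# n: per-domain sample counts; primary: list of primary-domain sets D_c (Step 1);
# rounding uses flooring plus a largest-remainder rule to preserve per-domain totals.
from math import floor

def split_counts(n, primary, C, lam):
    """Return counts[c][d] = number of samples of domain d assigned to client c.
    lam = 1.0 gives the homogeneous split; lam = 0.0 gives domain separation."""
    D = len(n)
    share = [sum(1 for c in range(C) if d in primary[c]) for d in range(D)]
    counts = [[0.0] * D for _ in range(C)]
    for c in range(C):
        for d in range(D):
            owned = 1.0 if d in primary[c] else 0.0
            counts[c][d] = lam * n[d] / C + (1 - lam) * owned * n[d] / max(share[d], 1)
    # round down, then hand the leftover samples to the largest fractional parts
    out = [[floor(counts[c][d]) for d in range(D)] for c in range(C)]
    for d in range(D):
        leftover = n[d] - sum(out[c][d] for c in range(C))
        order = sorted(range(C), key=lambda c: counts[c][d] - out[c][d], reverse=True)
        for c in order[:leftover]:
            out[c][d] += 1
    return out

# Example: 2 domains split across 3 clients, halfway between the two extremes.
print(split_counts([100, 60], [{0}, {1}, {0}], C=3, lam=0.5))
```

At λ=1 every client receives n_d/C samples of each domain, while at λ=0 each domain's samples are divided only among the clients that own it as a primary domain.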
We then select models using the held-out-domain validation set: after training, we choose the model according to early stopping at the communication round that achieves the best held-out-domain performance, and finally we evaluate performance on the test domain in <ref> and <ref>. See the Appendix for detailed hyperparameter choices. We also include results using in-domain validation early stopping in the Appendix, but the trends are similar. We make the following remarks on the main results from <ref> and <ref>.

FedAvg with an ERM objective is a strong baseline. Simple FedAvg with an ERM objective is a strong baseline that is challenging to beat across datasets (except for CivilComments), similar to the centralized case reported in DomainBed <cit.> and WILDS <cit.>. We recommend always including FedAvg as a baseline in all future evaluations.

Most centralized DG methods degrade in the FL setting. For image datasets, the DG methods adapted to the FL setting (except for GroupDRO) show significant degradation in performance compared to their centralized counterparts, as can be seen when comparing the C=1 column to the C > 1 columns in <ref>. Further, degradation can be seen in PACS when moving from the homogeneous client setting (λ=1) to the heterogeneous client setting (λ = 0.1).

FL methods tackling client heterogeneity perform surprisingly well on the Federated DG task. FedProx and Scaffold, which were designed for client heterogeneity but not specifically for the DG task, perform quite well in the Federated DG setting and even perform best on IWildCam and Py150. Thus, we suggest including these methods in future Federated DG evaluations and suggest that future work could focus on combining the strengths of these methods and DG methods.

Performance on real-world data significantly degrades as λ decreases. This can be seen from IWildCam and Py150. While it is challenging and expensive to run models on IWildCam and Py150, they show the largest differences between methods and demonstrate the real-world challenge of Federated DG. We suggest including IWildCam and Py150 in most future DG evaluations given their unique nature across datasets.

A diversity of evaluation datasets is important for holistically evaluating new methods for Federated DG. This conclusion is inspired by the following two observations. 1) When comparing FL methods, we notice opposite trends between PACS and all other datasets (IWildCam, CivilComments, Py150) when increasing client heterogeneity based on λ. For PACS, the FL methods (except for FedADG and FedSR) surprisingly seem to improve when increasing client heterogeneity (i.e., λ→0). However, for the other datasets, the expected trend holds: increasing client heterogeneity produces worse performance. From this, we caution that using PACS alone might be a misleading or at least incomplete evaluation of Federated DG methods. 2) Centralized DG methods adapted to FL perform best on CivilComments. This unexpected result suggests that DG methods from the centralized regime may be better able to accommodate subpopulation shift, which is the special kind of shift exhibited in CivilComments. Both of these observations emphasize that a diversity of evaluation datasets is important for holistically evaluating new methods for Federated DG tasks.

Additional DG challenges from FL.
For further understanding, we explore some additional questions on the PACS dataset because it is the most common dataset and the most computationally feasible for training many different models. Specifically, we explore how the number of clients, the amount of communication (i.e., the number of server aggregations) in the federated setup, and client heterogeneity affect the performance of various methods. The figures and detailed analysis are provided in the Appendix, but we highlight two remarks here.

The number of clients C strongly affects the overall performance. The DG performance drops from 90% to 50% or even 10% when varying C from 1 to 200. We strongly recommend that future Federated DG evaluations consider a larger number of clients, such as 50 to 100, rather than only a very small number of clients. This indicates that there exist significant unresolved and under-explored challenges in Federated DG when dealing with a large number of clients. In particular, FedADG and FedSR seem to be sensitive to the number of clients because they perform poorly when C ≥ 10, while in the original papers they were only evaluated with a few clients (C=3); we reproduce those good results with such C in the Appendix.

The number of communication rounds does not monotonically affect DG performance. We observe an interesting implicit regularization phenomenon in the FL context: different methods achieve their best performance when the number of communication rounds is relatively low. In particular, for PACS, the best performance occurs at around 10 communication rounds (while fixing the total amount of computation). This is surprising because, in contrast, for in-domain tasks FL typically benefits from more communication <cit.>. Further theoretical investigation of the dependence of DG accuracy on the number of communication rounds, and its possible relation to implicit regularization via early stopping, is an interesting area for future work.

§ CONCLUSION AND DISCUSSION

We first build a systematic benchmark methodology for evaluating Federated DG that includes novel methodologies for splitting the data across clients and evaluating dataset difficulty. We then evaluate 13 representative methods from three relevant lines of research. Our evaluation shows that Federated DG is still unsolved and that significant gaps remain between centralized DG performance and Federated DG performance. Therefore, Federated DG is ripe for additional research. Based on our evaluation and observations, here are some recommendations and suggestions for future work in Federated DG.

Recommendations for future evaluations of Federated DG.

* Stronger Baselines - FedAvg should always be included because it is a strong baseline (<Ref>). FL methods designed for client heterogeneity (though not necessarily DG) should also be included given their strong performance (<Ref>).

* Realistic Datasets - Federated DG methods should be evaluated on more realistic datasets. In particular, FEMNIST and PACS behave quite differently with respect to client heterogeneity than the more realistic WILDS datasets (<Ref>). Thus, we recommend including both IWildCam and Py150 in future evaluations of Federated DG. CivilComments may be useful for evaluating realistic subpopulation shift (<Ref>).

* Large Number of Heterogeneous Clients - Evaluations should include scenarios with a large number of clients with varying degrees of heterogeneity. A large number of clients poses unique challenges for most methods (<Ref>).
Additionally, the heterogeneity of clients is more realistic and plays a significant role in evaluation (<Ref>).

Suggestions for future work in Federated DG.

* Handle Domain Separation Case - The domain separation scenario (λ=0) limits the exchange of information between domains, i.e., a client may only have data from a single domain. Centralized approaches cannot be adapted to this setting, and current methods still struggle under this realistic client heterogeneity setting.

* Increase Convergence Rate - In most benchmark experiments (except for PACS, which is an easy dataset), we have observed slow convergence, particularly on challenging real-world datasets such as IWildCam and Py150. Moreover, we have noticed even slower convergence when client heterogeneity is high (i.e., when λ is small). In addition, we have found that federated methods designed to improve convergence actually performed quite well even though they were not designed for the DG task (<Ref>). Therefore, it is crucial for future methods to improve the convergence rate of Federated DG approaches.

* Investigate the Effect of Communication Frequency - Because increased communication frequency may actually hurt DG performance (<Ref>), further investigation into the regularization effect of early stopping and infrequent communication in FedAvg may yield important insights.

* Understand the Effect of the Number of Clients - Given that the performance of some methods degrades very quickly when increasing the number of clients beyond a few (<Ref>), the community could benefit from an improved theoretical and empirical understanding of how the number of clients affects Federated DG in terms of convergence, computational requirements, sample complexity, and DG performance.

We hope this work provides a better foundation for future work in Federated DG and accelerates research progress.

§ ACKNOWLEDGEMENTS

This work was supported by the Army Research Lab under Contract No. W911NF-2020-221. R.B. and D.I. also acknowledge support from NSF (IIS-2212097) and ONR (N00014-23-C-1016). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsor(s).

Appendix

§ REPRODUCIBILITY STATEMENT

Code to reproduce the results is available at the following link: https://github.com/inouye-lab/FedDG_Benchmark. We include detailed documentation on using our code to reproduce the results throughout the paper. We also provide documentation on adding a new algorithm's DG evaluation in the FL context.

§ DATASETS AND DIFFICULTY METRIC

§.§ Dataset Introduction

In this section, we introduce the datasets used in our experiments and the split method we used to build heterogeneous training and testing datasets as well as the heterogeneous local training datasets among clients in FL.

FEMNIST. This is an FL prototyping image dataset of handwritten digits and characters in which each user's writing forms a natural domain; it is widely used for evaluating client heterogeneity in FL. Though it contains many training domains, it lacks significant distribution shift across domains (R_DG = 1) and is considered easy compared to the other datasets.

PACS. This is an image dataset for domain generalization. It consists of four domains, namely Photo (1,670 images), Art Painting (2,048 images), Cartoon (2,344 images), and Sketch (3,929 images).
The task is to learn classification of a set of objects across totally different renditions. R_DG=0.960 makes it an easy dataset for domain generalization in our setting. Note that choosing a different domain as the test domain might give a different R_DG.

IWildCam. This is a real-world image classification dataset based on wild animal camera traps around the world, where each camera represents a domain. It contains 243 training domains, 32 validation domains, and 48 test domains. Because rare species are usually of most interest, we use the macro F1 score as the evaluation metric instead of standard accuracy, as recommended by the original dataset's reference <cit.>. R_DG=0.449 makes it a very challenging dataset for domain generalization.

CivilComments. This is a real-world binary classification text dataset formed from comments written by people from different demographic groups, containing 8 demographic groups. The goal is to judge whether a comment is malicious or not.

Py150. This is a real-world code-completion dataset that is challenging given its massive number of domains (8,421), where each domain is formed according to the repository author. The goal is to predict the next token given the context of previous tokens. We evaluate models by the accuracy on class and method tokens.

§.§ Dataset Split Setup

For each dataset, we first split the data into 5 categories, namely the training dataset, in-domain validation dataset, in-domain test dataset, held-out validation dataset, and test domain dataset. For FEMNIST and PACS, we use Cartoon and Sketch as the training domains, Art Painting as the held-out domain, and Photo as the test domain. For the training domains, we split 10% and 10% of the total training domain data into the in-domain validation dataset and in-domain test dataset, respectively. For IWildCam, CivilComments, and Py150, we directly apply the official WILDS splits.

§ BENCHMARK EXPERIMENTAL SETTING

§.§ Model Structure

In this benchmark, for the image-based datasets, we use ResNet-50 <cit.>. For the CivilComments and Py150 datasets, we choose DistilBERT <cit.> and CodeGPT <cit.>, respectively, as recommended by WILDS.

§.§ Model Selection

We conduct held-out-domain model selection with 4 runs for each method. The oracle model selection evaluates the model based on the performance on the held-out validation domain. The results are reported based on the best run.

§.§ Early Stopping

We conduct early stopping using the held-out validation dataset in our evaluation. For each dataset and method, we first run a fixed number of communication rounds and then select the model parameters that achieve the best performance on the validation set. We report results using the held-out validation set in the main paper and report the results using the in-domain validation set in <ref>.

§.§ Hyperparameters

In this section, we present the hyperparameters selected for the evaluation. We opted to communicate once per epoch for all experiments. For PACS, we run 50 communication rounds in total. For IWildCam, we run 50 communication rounds. For CivilComments, we run 10 communication rounds. For Py150, we run 10 communication rounds, and for FEMNIST, we run 40 communication rounds. Please refer to <ref> for the other hyperparameters.

§ ADDITIONAL FL-SPECIFIC CHALLENGES FOR DOMAIN GENERALIZATION

As mentioned in <ref>, we also include a deeper exploration of the effect of the number of clients and communication frequency, which are unique to the FL regime.
i) Massive number of clients: In this experiment, we explore the performance of different algorithms when the number of clients C increases on PACS. We fix the number of communication rounds to 50 and the number of local epochs to 1 (synchronizing the models every epoch). <ref> plots the held-out DG test accuracy versus the number of clients for different levels of data heterogeneity. The following comments are in order. 1) Given the communication budget, current domain generalization methods all degrade substantially for C≥ 10, while the performance of ERM and FedDG remains relatively unchanged as the number of clients increases. FedADG and FedSR are sensitive to the number of clients, and both fail for C≥ 20. 2) Even in the simplest homogeneous setting λ=1, where each local client has i.i.d. training data, the current domain generalization methods IRM, Fish, Mixup, MMD, Coral, and GroupDRO perform poorly when the number of clients is large; this means that new methods are needed for DG in the FL context when data are stored among a massive number of clients.

ii) Communication constraint: To show the effect of communication rounds on convergence, we plot test accuracy versus communication rounds in Appendix <ref>. We fix the number of clients to C=100 on PACS and decrease the number of communication rounds (together with increasing the number of local epochs). That is, if the regime restricts the communication budget, then we increase the local computation E so as to keep the total computation the same. Therefore, the influence of communication on performance is compared fairly between algorithms because the total number of data passes is fixed. We observe that the total number of communication rounds does not monotonically affect DG performance. As the total number of communication rounds decreases, most methods' performance first increases and then decreases. This might be an interesting implicit regularization phenomenon in the FL context. Outside of the DG task, more frequent communication usually leads to faster convergence. The relationship between DG performance and communication requires further exploration.

§ SUPPLEMENTARY RESULTS

In the main paper, we provide experimental results using held-out validation early stopping. Here we report the results using the in-domain validation set. We also report the convergence curve for each method on each dataset for reference. We observe that in most cases the held-out validation gives us a better model. Thus, we recommend using the held-out validation set to perform early stopping.

More results on the PACS and IWildCam datasets. Here we report both the convergence curves of the DG accuracy of each method on PACS <ref> and IWildCam <ref> and the DG accuracy on PACS and IWildCam <ref> using in-domain validation. From the PACS convergence curves <ref>, we observe that with lower λ, it is harder for the models to converge. FedAvg and FedDG converge faster than all other methods, while FedADG and FedSR do not converge. From the IWildCam convergence curves <ref>, we observe that all methods struggle to converge. This is because the main challenge here comes from R_FL. We thus observe that federated methods dealing with client heterogeneity achieve the best performance, especially when λ is low. It is worth noting that this does not mean DG in FL is solved; none of the methods even achieves the centralized ERM performance.

More results on the CivilComments and Py150 datasets. Here we report the DG accuracy on CivilComments and Py150 <ref> using in-domain validation.
We observe that most of the methods perform well on CivilComments and Py150. This could be attributed to the use of pretrained models. Additionally, the utilization of pretrained models may explain the relatively high value of R_DG, as NLP pretrained models have already been exposed to various domains.

Results on the FEMNIST dataset. As mentioned in the main paper, we include the FEMNIST dataset here as a reference. We observe from <ref> that λ does not influence the final DG accuracy, and neither in-domain validation nor held-out-domain validation affects the final DG accuracy. This indicates a lack of statistical heterogeneity across the different domains. We also observe that changing λ does not significantly affect convergence. Most methods do not converge to their centralized counterpart's performance, which is due to the challenge coming from the large number of clients, where R_FL=0.980<1.

§ GAP TABLE

We list the gap table in <ref>, summarizing the performance gap of current DG algorithms w.r.t. FedAvg-ERM in the FL context; in particular, positive means a method outperforms FedAvg-ERM, and negative means it is worse than FedAvg-ERM. It can be seen that on the simple dataset, the best DG methods migrated from the centralized setting outperform FedAvg-ERM. In the domain separation case, no centralized DG algorithm can be adapted, and Federated DG methods perform comparably well in this setting. However, they fail on harder datasets. In the hardest setting, the federated methods dealing with data heterogeneity currently perform best. It is worth noting that while federated learning methods that address client heterogeneity perform better than other methods, they still fall short of the performance of centralized empirical risk minimization (ERM). This highlights the need for future research and development of DG methods in the FL regime.

§ TRAINING TIME, COMMUNICATION ROUNDS AND LOCAL COMPUTATION

In this section, we provide the training time per communication round in terms of wall-clock training time. Notice that for a fixed dataset, most algorithms have training times similar to FedAvg-ERM, whereas FedDG and FedADG are significantly more expensive.
http://arxiv.org/abs/2307.04760v1
20230710175817
Learning Spatial Features from Audio-Visual Correspondence in Egocentric Videos
[ "Sagnik Majumder", "Ziad Al-Halah", "Kristen Grauman" ]
cs.CV
[ "cs.CV", "cs.SD", "eess.AS" ]
We propose a self-supervised method for learning representations based on spatial audio-visual correspondences in egocentric videos. In particular, our method leverages a masked auto-encoding framework to synthesize masked binaural audio through the synergy of audio and vision, thereby learning useful spatial relationships between the two modalities. We use our pretrained features to tackle two downstream video tasks requiring spatial understanding in social scenarios: active speaker detection and spatial audio denoising. We show through extensive experiments that our features are generic enough to improve over multiple state-of-the-art baselines on two public challenging egocentric video datasets, EgoCom and EasyCom. Project: <http://vision.cs.utexas.edu/projects/ego_av_corr>.

§ INTRODUCTION

Egocentric videos provide a first-person view of how we perceive and interact with our surroundings in our daily lives, and they are pushing a new frontier in multi-modal learning research <cit.>. A key aspect of ego-video is that it can provide a rich stream of first-person spatial (multi-channel) audio alongside the visual frames, when the audio is captured with multiple microphones <cit.>. The coupling of such visual and spatial audio provides strong spatial information about the sound sources (where the sound sources are, whether they are in motion) in the context of the surrounding physical space (how big or small the room is, whether there is a large wall nearby), as well as the camera wearer's attention in the scene, implicit in how they move their head. Such spatial cues are especially important for social settings of multiple people talking to each other, where it is valuable to be able to focus on the voice(s) of interest from among various competing sounds and to understand where people are directing their attention and speech activity, for better comprehension and communication. In this way, future AR applications in conversational settings could allow a hearing-impaired person to determine who is speaking in order to redirect their attention, or enhance the received audio to make it more intelligible for any listener. We argue that this creates the need for human-centric spatially-grounded understanding of audio-visual events—to learn representations from video that capture audio-visual events in the context of the persistent physical space of the environment and the human speakers in it. Such representations are useful for answering questions like “who is speaking right now?" and “what would the voices sound like without the audio noise?". Whereas the former requires inferring the source location for a voice in the scene, the latter requires understanding how the perceived audio is a function of the source locations, the listener, and the surrounding environment. Despite its significance, the problem of human-centric spatially-grounded understanding of audio-visual events is underexplored. Current audio-visual representation learning models exclusively tackle exocentric (third-person) video <cit.>, which lacks the AR relevance and sidesteps challenges inherent to ego-video arising from the camera wearer's head motion and relatively limited field of view. Limited prior work has explored self-supervised objectives using multi-channel audio and video <cit.>, but outside of the egocentric and social contexts.
We propose to learn audio-visual representations via spatial correspondence between an egocentric video and its multi-channel audio. In particular, we design a novel pretext task where the goal is to inpaint binaural (two-channel) audio using both video and audio. Given an egocentric video clip with binaural audio, we mask segments of it and train a model based on a new form of masked autoencoding (MAE) <cit.> to predict the missing segments on the basis of the video and the unmasked segments in the audio. See Fig. <ref> (top). Additionally, we introduce a spatial audio masking strategy that combines random masking of discrete audio segments in the two channels with masking a full channel. This, in essence, helps combine the benefits of two tasks: synthesis of novel binaural audio segments, and binauralization of a full monaural waveform. While the binauralization task is more challenging and enables learning stronger spatial correspondences between vision and audio, random masking of segments leads to better learning stability in cases where binauralization using only vision is intractable. Once trained, our model's encoder provides a spatial audio-visual feature that can be used to address multiple downstream tasks using multiple backbones and egocentric video datasets. Motivated by the AR applications discussed above, we validate our feature learning method on two downstream social egocentric tasks that require strong audio-visual spatial reasoning: 1) active speaker detection: predicting which person in the field of view of an egocentric video is speaking, and 2) spatial audio denoising: separating audio noise (any sounds from non-speakers) from the input audio. See Figure <ref> (bottom). We test the generality of our method by evaluating on two egocentric video datasets, EgoCom <cit.> and EasyCom <cit.>. On both, our method significantly outperforms multiple state-of-the-art task-specific and audio-visual spatial feature learning models.

§ RELATED WORK

Audio-visual self-supervised pretraining. Past work <cit.> extensively studies the synergy of vision and audio for learning representations through self-supervision. They explore using both modalities to construct pretext tasks based on synthesis <cit.>, alignment <cit.>, and masked auto-encoding (MAE) <cit.>, with downstream tasks focused on audio-visual event classification and retrieval. However, none of these methods are designed to extract spatial cues from video and multi-channel audio, nor do they analyze the social egocentric setting. On the contrary, we tackle the challenging problem of self-supervised learning of spatial audio-visual features from egocentric videos. Further, different from the existing MAE-style models <cit.>, we propose a specialized masking strategy that better learns spatial audio-visual cues.

Audio-visual spatial correspondence learning. Learning the spatial alignment between video and audio is important for self-supervision <cit.>, spatial audio generation <cit.>, audio-visual embodied learning <cit.>, and 3D scene mapping <cit.>. However, these methods are either restricted to exocentric settings <cit.>, or else tackle egocentric settings <cit.> in simulated 3D environments that lack realism and diversity, both in terms of the audio-visual content of the videos and the continuous camera motion due to the camera-wearer's physical movements. On the contrary, we learn an audio-visual representation from real-world egocentric video.
More closely related to our work are Telling Left from Right <cit.> and 2.5D Visual Sounds <cit.>, both of which learn spatial audio-visual features for improving source separation and localization, albeit for exocentric data only. The former predicts whether the left and right binaural channels are swapped, which provides only coarse spatial information about the scene; the latter learns to “lift" the mono input to binaural audio, which can be underconstrained from the single-channel audio and video alone. We design a novel pretext task using audio-visual inpainting of multi-channel audio, which is both fine-grained (requiring the model to capture subtleties about the arrangement of speakers in the environment) and, through our novel masking strategy, exposes better multi-modal constraints for stable training. Our results show our model's advantages over both prior methods <cit.>.

Active speaker detection. Active speaker detection (ASD) entails predicting the active speaker(s) from among all detected faces in a video, and can be seen as a special case of generic 2D sound localization <cit.>. While early ASD methods rely on lip movements and facial gestures <cit.>, recent methods employ ensemble networks <cit.> or 3D CNNs <cit.>, relation context modules <cit.>, attention <cit.>, or graph neural networks <cit.>. Multi-channel audio improves ASD in <cit.>, but does so requiring privileged information of speaker pose for training. Unlike all these methods, our goal is to learn spatial audio-visual features purely from in-the-wild egocentric videos through self-supervision—features generic enough to benefit multiple ASD models, as we demonstrate for both TalkNet <cit.> and SPELL <cit.>.

Spatial audio denoising. Audio denoising, which requires separating a target sound from noise, has traditionally been studied with single-channel (non-spatial) audio, both in the audio-only setting <cit.> and in audio-visual settings <cit.>. Using spatial audio captured with multiple microphones <cit.> naturally makes the task simpler. Different from the above, we learn task-agnostic audio-visual spatial features. That is, our contribution is the feature learning idea (which benefits both denoising and ASD), rather than a novel denoising approach.

§ LEARNING SPATIAL FEATURES FROM EGOCENTRIC AUDIO-VISUAL CORRESPONDENCE

The spatial sound perceived in an egocentric setting is shaped by the environment in which it is emitted and the sound source location relative to the camera-wearer. Based on this knowledge, we hypothesize that trying to solve the pretext task of audio-visual inpainting of binaural audio—synthesis of missing segments in a spatial audio clip by extracting information about the scene and the source location from the coupling of vision and audio—can lead to learning useful audio-visual spatial correspondences. To validate our hypothesis, we propose a novel feature-learning task for egocentric videos: learning spatial features from audio-visual correspondence through binaural audio inpainting. Formally, we consider an egocentric video clip C = (V, A), where V and A refer to the visual and binaural audio streams, respectively. The visual clip V comprises T frames, such that V = {V_1, …, V_T}. We generate a set of visual tokens V̂ by splitting V into P tubelets, such that V̂ = {V̂_1, …, V̂_P}, where V̂_k denotes the k^th tubelet consisting of a contiguous sequence of non-overlapping 16 × 16 dimensional patches spanning all T frames.
We represent the binaural audio A as Mel-spectrograms <cit.>, such that A = {A^L, A^R}, where A^L and A^R are the spectrograms for the left and right channels, respectively. We create a set of audio tokens Â by splitting A into Q non-overlapping patches of size 2 × 16, such that Â = {Â_1, …, Â_Q }. Next, we mask a portion of the audio tokens in Â and obtain complementary subsets of masked and unmasked tokens, Â^M and Â^U, respectively, where Â^M = {Ä_1, …, Ä_S}, Â^U = {A̅_1, …, A̅_Q-S}, and S is the number of masked tokens. Given {V̂, Â^M, Â^U}, we aim to learn a self-supervised model ℱ comprising an encoder ℰ and decoder 𝒟, such that ℱ = 𝒟∘ℰ and ℱ(V̂, Â^U) = Ã^M, where Ã^M is an estimate of the masked audio tokens in Â^M. By training on this pretext task, our encoder ℰ can learn rich audio-visual spatial correspondences that can be leveraged for multiple downstream tasks that require the synergy of vision and spatial audio, as we show in our results.

§ APPROACH

To solve our pretext task of binaural audio inpainting in egocentric videos, we propose an approach based on the masked autoencoding framework <cit.>, which has been shown to learn meaningful semantic features from audio-visual data <cit.>. Our model ℱ has 2 main components (see Fig. <ref>): 1) an audio-visual (AV) spatial correspondence encoder, ℰ, and 2) an audio-visual decoder for binaural audio inpainting, 𝒟. The encoder ℰ (Sec. <ref>) learns an implicit representation of the spatial relationships between the visual and unmasked binaural audio tokens, while the decoder 𝒟 (Sec. <ref>) uses this implicit representation to synthesize the masked audio tokens. We also devise a simple yet novel masking protocol (Sec. <ref>) specifically for our inpainting task, which mixes masking random audio tokens with masking a full audio channel, and helps the model learn stronger audio-visual spatial associations, which facilitate multiple downstream tasks. We train ℱ with a training objective that aims to minimize the prediction error on the masked audio tokens. Next, we describe our model architecture, training objective, audio masking protocol, and downstream tasks.

§.§ Audio-visual spatial correspondence encoder

The audio-visual spatial correspondence encoder ℰ (Fig. <ref> left) extracts features from the visual and unmasked audio tokens {V̂, Â^U}. It begins by embedding the visual and audio tokens using separate transformer encoders <cit.> to individually capture the spatio-temporal features in the two modalities. Next, it uses a shared transformer encoder <cit.> to jointly encode the audio and visual features, and produces a multi-modal representation suitable for binaural audio inpainting.

Audio and visual encoders. We first encode the visual tokens V̂ using a linear layer to generate visual features v, such that v = {v_1, …, v_P}. We encode the audio tokens Â^U with another linear layer to produce audio features a, such that a = {a_1, …, a_Q-S}, where S is the number of masked tokens out of a total of Q audio tokens (cf. Sec. <ref>). For each visual feature v_j, we add a sinusoidal positional embedding p^V_j <cit.> to it, where p^V_j captures cues about the 3D position of the j^th tubelet in the visual clip V. For an audio feature a_i, we add a sinusoidal positional embedding p^A_i and a learnable channel embedding c ∈{c_L, c_R} to it to convey information about the 2D location of the i^th unmasked audio token in the spectrogram and also the audio channel to which it belongs.
Next, we feed the transformed visual and audio features to separate transformer encoders, ℰ^V and ℰ^A, respectively, and obtain visual features e^V = {e^V_1, …, e^V_P } and audio features e^A = {e^A_1, …, e^A_Q-S}. Shared audio-visual encoder. Given the visual features e^V and audio features e^A, we concatenate them into e^AV, such that e^AV = { e^V_1, …, e^V_P, e^A_1, …, e^A_Q-S}, and re-add the sinusoidal positional embeddings p^V and p^A to the features of the respective modalities in e^AV. Furthermore, we add the channel embeddings c to the audio features, and a learnable modality embeddings m ∈{m_A, m_V} to all features in e^AV to help the model distinguish between the visual and audio modalities. Next, a shared audio-visual transformer ℰ^AV encoder takes e^AV as input and outputs audio-visual features f^AV, which implicitly holds spatio-temporal information required for accurate inpainting of audio. §.§ Audio-visual decoder for binaural audio inpainting Our audio-visual decoder 𝒟 takes f^AV as input and attempts to synthesize the masked binaural audio tokens by leveraging spatio-temporal cues in f^AV. It first projects f^AV to a lower-dimensional feature set g^AV. It then appends a learnable embedding for the masked audio tokens to g^AV and passes it through a shared audio-visual transformer decoder <cit.>. Next, it feeds the audio feature outputs of the shared decoder to another transformer decoder and uses its outputs to predict an estimate of the masked binaural audio tokens. The decoders are light-weight compared to the encoders, ensuring that the encoders are primarily responsible for driving the inpainting task and producing good audio-visual features for strong downstream performance. We next describe each component of 𝒟 in detail. Shared audio-visual decoder. We first create a lower-dimensional projection g^AV of the audio-visual encodings f^AV by passing it through a linear layer, and append a learnable embedding ϕ corresponding to each of the S masked audio tokens to g^AV. Next, we add the positional embeddings p^V and p^A, the audio channel embeddings c, and the modality embeddings m to g^AV, and feed it to a shallow transformer decoder 𝒟^AV that outputs an audio-visual feature set h^AV. We then take take the audio features h^A from h^AV and pass them to the audio decoder for further processing. Audio decoder. The audio decoder 𝒟^A re-adds the positional embeddings p^A and channel embeddings c to g^A, and feeds it to a transformer decoder, which outputs audio features d^A. Prediction of masked audio tokens. Finally, we take the subset d^A_M of all audio features d^A, which correspond to the masked audio tokens Â^M, upsample them by passing through a linear layer, and reshape them to obtain an estimate Ã^M of the masked tokens Â^M, such that Ã^M = {ã_1, …, ã_S }. §.§ Model training We train our model to minimize the error in prediction of the masked audio tokens. In particular, we compute the mean-squared error ℒ averaged over all masked audio tokens, such that ℒ = 1/S∑_i=1… S ||ä_i - ã_i ||^2_2. §.§ Audio masking for inpainting We design an audio masking protocol that is customized to help our model better extract spatial audio-visual cues during self-supervised pretraining. In particular, we mix the strategy of randomly masking a full audio channel with that of randomly masking audio tokens in the ratio r% : (100-r%) during training, where r represents the relative frequency with which we randomly drop an audio channel. 
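The mixing of the two masking strategies and the masked-token objective above can be sketched as follows (a minimal illustration, not the authors' code; token indices refer to the flattened two-channel token sequence from the previous sketch, the strategy is drawn once per training batch, and the random-token mask ratio is an assumption):

```python
# Sketch of the r% : (100-r%) masking protocol and the masked-token MSE loss.
import torch
import torch.nn.functional as F

def sample_audio_mask(num_tokens: int, r: float = 0.2, token_ratio: float = 0.8):
    """Return indices of masked tokens. With probability r, mask one full
    channel (first or second half of the tokens); otherwise mask random tokens."""
    half = num_tokens // 2
    if torch.rand(()) < r:                        # channel masking
        start = 0 if torch.rand(()) < 0.5 else half
        return torch.arange(start, start + half)
    perm = torch.randperm(num_tokens)             # random token masking
    return perm[: int(token_ratio * num_tokens)]

def inpainting_loss(pred_masked: torch.Tensor, target_masked: torch.Tensor):
    """Mean-squared error averaged over the S masked audio tokens only."""
    return F.mse_loss(pred_masked, target_masked)

masked_idx = sample_audio_mask(num_tokens=784, r=0.2)
```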
On the one hand, token masking could lead to tokens from the same location in the two audio channels being present among the unmasked tokens, thereby providing additional spatial cues to the model and resulting in a simpler optimization objective for the inpainting task. On the other hand, channel masking forces the model to solve a more challenging binauralization task solely on the basis of vision, which could help it learn even stronger spatial features. Towards achieving high performance on the downstream tasks, we aim to strike a fine balance between these two strategies. In our setup, we choose in favor of a particular strategy at the level of a training batch, and set the value of r using validation on the downstream tasks. When finetuning on downstream tasks, we randomly mask a channel. §.§ Downstream tasks requiring spatial audio-visual understanding We explore two downstream tasks with our pretrained features: active speaker detection and spatial audio denoising. Active speaker detection (ASD) involves matching an audio clip with an appropriate face track from the corresponding video clip. While current state-of-the-art methods <cit.> rely on semantic similarities between monaural audio and vision to solve this task, leveraging spatial audio can additionally reveal the sound source location in the video. As we will see, however, our learned representation improves this task even compared to simpler ways to use the binaural input. In spatial audio denoising, also studied with spatial audio-visual pretraining in <cit.>, the goal is to separate the target audio from distractors. In particular, we aim to remove the audio from sources extraneous to the conversation (off-video sounds from other parts of the scene). § EXPERIMENTS Datasets. We evaluate our model on two challenging egocentric video datasets that contain binaural audio: 1) EgoCom <cit.>, and 2) EasyCom <cit.>, detailed in Supp. While both datasets contain egocentric videos captured by people having conversations, EgoCom is more unconstrained than EasyCom. Whereas EasyCom primarily shows participants sitting around a table and talking, EgoCom has videos of participants moving around a room, turning their face and body, standing up, etc. These datasets test the robustness of our method in diverse scenarios of varying difficulty. Model architecture and training The uni-modal encoders, ℰ^A and ℰ^V, have 8 layers, while the audio-visual encoder ℰ^AV has 6 layers. All encoders have 12 attention heads and use 768-dimensional hidden embeddings. The audio-visual decoder 𝒟^AV and audio-only decoder 𝒟^A have 1 and 3 layers, respectively. Both decoders have 6 attention heads and use 384-dimensional hidden embeddings. To pretrain our model, we set the relative frequency of dropping an audio channel in our masking protocol for training to r=20 %. We train our model for 200 epochs using the AdamW <cit.> optimizer with a weight decay of 10^-5, and a learning rate scheduler that reaches a peak learning rate of 2 × 10^-4 over 10 warmup epochs, and then decays it through half-cycle cosine annealing <cit.>. For data agumentation, we perform random flipping of video clips and audio channels along their width. During ASD training, we finetune the pretrained features with a lower learning rate than the rest of the model. See Supp. for further details on datasets, architecture, and training. §.§ Active speaker detection First we evaluate our model on active speaker detection (ASD). Backbone models. 
We consider two state-of-the-art ASD models as the backbones for leveraging our pretrained representations: 1) TalkNet <cit.>, and 2) SPELL <cit.>. TalkNet is an attention-based model that first encodes the face track and the audio clip using temporal encoders into feature sequences of the same length as the input clip. Next, it performs self- and cross-attention on the feature sequences to capture intra- and inter-modal semantic and temporal patterns. Finally, it fuses the two feature streams frame by frame, and uses a binary classifier to predict if the face in the track is active or not. SPELL first extracts audio-visual features for each face in a clip using a two-stream ResNet <cit.> encoder. It then treats these features as nodes in a graph and uses a graph neural network to learn both long- and short-term bidirectional semantic relationships. Finally, it does binary classification of every graph node to predict if its associated face is active or not Pretrained feature fusion. To fuse our pretrained features with the ASD backbones, we first use a single-layer transformer decoder <cit.>. The decoder takes the feature outputs of our shared transformer encoder ℰ^AV as the keys and values, a sinusoidal embedding sequence as queries, where each embedding denotes an index of a frame in the clip, and outputs an audio-visual feature sequence of the same length as the clip. Each output feature acts as a spatially aggregated representation of the features for the individual tokens from the corresponding frame, and implicitly holds rich information about the audio source location in the scene. Finally, we append these features to the cross-attention outputs in TalkNet, or the two-stream audio-visual encoder outputs in SPELL, on a per-frame basis. In essence, while the original audio-visual encoders leverage semantic correlations between vision and audio, our features can provide strong complementary spatial cues for better performance. Baselines. For both TalkNet and SPELL, we compare against multiple baselines comprising both the unmodified backbone and improved versions of it, in addition to some naive methods: * All-active: a naive model that predicts that all visible speaker are always active * All-inactive: a naive model that predicts that all visible faces are always inactive * Random: a naive model that emits a random ASD confidence score for every visible speaker * Backbone w/o audio: a vision-only version of the backbone with no audio input * Backbone: the originally-proposed backbone that processes only faces and monaural audio * Backbone-binaural: an improvement over the backbone, where we use binaural audio instead of monaural, alongside positional encodings for the faces, indicative of their relative position and depth, for better matching the face to the audio * Backbone-binaural w/ scene video: a further improvement over the backbone, where we additionally provide the scene images (uncropped video frames) to the backbone-binaural model * Backbone w/ TLR <cit.> features: we fuse the SOTA Telling Left from Right (TLR) <cit.>, which learns audio-visual spatial correspondences by predicting the spatial alignment between vision and binaural audio. * Backbone w/ 2.5D-VS <cit.> features: we fuse features from the SOTA audio-visual binauralization model, 2.5D Visual Sounds (2.5D-VS) <cit.>. For both TLR <cit.> and 2.5D-VS <cit.>, we use a feature fusion method like ours to fuse their pretrained features with the backbone. We use the standard mean average precision (mAP) metric. Results. 
Table <ref> (top) reports our ASD results on both val and test splits. The three naive baselines achieve very low ASD performance on both EgoCom <cit.> and EasyCom <cit.>, emphasizing the difficulty of the task. For both TalkNet <cit.> and SPELL <cit.>, the unchanged backbone model generally performs better than the model without audio, showing that both vision and audio are required. Upgrading from monaural to binaural audio further boosts performance, as the model can now leverage both spatial and semantic information. Additionally, using scene features lets the backbone explicitly match the scene area around the inferred source location with the face, and further improves ASD, especially for EgoCom, where the background scene changes more often. TLR <cit.> and 2.5D-VS <cit.> improve the original models on EasyCom, but fare worse on the more challenging EgoCom, demonstrating the limitations of their pretrained features. Furthermore, 2.5D-VS outperforms TLR, emphasizing that fine-grained spatial correspondences are necessary. Our model substantially outperforms all baselines for both models (TalkNet and SPELL) on both datasets. This shows that our method helps learn stronger spatial features for ASD, which are both backbone- and dataset-agnostic. Moreover, our improvement over the baselines that use alternate pretrained features indicates that merely predicting spatial alignment (TLR) or doing audio-visual binauralization (2.5D-VS) is not enough for ASD, especially on the more challenging EgoCom dataset. Model analysis. Table <ref> (bottom) shows an ablation of our pretraining method. Upon training for ASD from scratch, we see a sharp drop in performance[SPELL requires storing pretrained features in the graph nodes, therefore not allowing training from scratch], showing that our advantage is not solely due to our model design, but also to our self-supervised pretraining stage. §.§ Spatial audio denoising Next, we evaluate spatial audio denoising. To instantiate this task, we mix the binaural audio of a target clip with the downscaled binaural audio from another randomly chosen clip, where the downscaling factor depends on the desired noise level, and attempt to extract the target from the mixture. We evaluate three noise levels, expressed using the signal-to-noise ratio (SNR): 1) 0 dB, 2) 2.5 dB, and 3) 5 dB. The different noise levels test our model's robustness to varying levels of task difficulty—the lower the SNR value, the higher the noise, and consequently, the higher the difficulty. For this task, we evaluate on EgoCom only. We find that for EasyCom, mixing in audio from a different clip as noise usually leads to spatially overlapping sound sources, since the dataset is recorded in a fixed setting (people sitting around a table); this renders the denoising task on EasyCom intractable for all models. Backbone model. We adopt the commonly used U-Net <cit.> model for audio-visual source separation <cit.> as the backbone, which produces a binaural ratio mask for the target audio (see Supp. for details). We multiply the predicted ratio mask with the mixed magnitude spectrogram to get the predicted magnitude spectrogram, then convert it to a waveform using the inverse short-time Fourier transform with the mixed audio phase. Pretrained feature fusion. To use our features for denoising, we reshape the visual features f^V and unmasked audio features f^A, produced by our audio-visual encoder ℰ^AV, to form multi-channel 2D maps, where the features align with their corresponding tokens following the raster order.
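To make the reshaping concrete, a minimal sketch is shown below (not the authors' code); the 15 × 22 visual grid is derived from the EgoCom token count given in the supplementary material, and the feature width is an assumption.

```python
# Sketch of reshaping token features (in raster order) into 2D feature maps.
import torch

def tokens_to_map(feats: torch.Tensor, grid_h: int, grid_w: int) -> torch.Tensor:
    """feats: (N, D) token features in raster order -> (D, grid_h, grid_w) map."""
    N, D = feats.shape
    assert N == grid_h * grid_w
    return feats.reshape(grid_h, grid_w, D).permute(2, 0, 1)

f_V = torch.randn(330, 768)              # visual token features from the encoder
vis_map = tokens_to_map(f_V, 15, 22)     # -> (768, 15, 22)
```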
Next, we pass the feature maps to separate convolutional layers, concatenate the outputs channel-wise, and use them to replace the visual features at the U-Net <cit.> bottleneck. Our fusion strategy helps the U-Net leverage fine-grained spatial cues at the level of audio patches and video tubelets. Baselines. We compare against the following baselines and existing methods: * U-Net w/o vision: an audio-only blind denoising model * U-Net: the original backbone without any alterations * U-Net w/ ImageNet features: pretrains the visual encoder on ImageNet <cit.> * U-Net w/ TLR <cit.> features: fuses the features from TLR <cit.> with the feature outputs of the audio encoder through channel-wise concatenation. * U-Net w/ 2.5D-VS <cit.> features: fuses the pretrained features from 2.5D-VS <cit.> similarly. Evaluation metric. For evaluating our denoising quality, we use standard metrics: 1) STFT distance, a spectrogoram-level measure of the denoising error, expressed using base 10^-3 and 2) SI-SDRi: the improvement in SI-SDR <cit.>, a scale-invariant estimate of the level distortion in the audio, over using the mixed audio as the prediction. Results. Table <ref> (top) shows spatial audio denoising results on the more challenging EgoCom dataset. The unmodified U-Net backbone performs better than the version that lacks vision, establishing that similar to ASD, vision is crucial for better denoising. Using pretrained features of TLR <cit.> or 2.5D-VS <cit.> further improves the performance, showing that learning spatial audio-visual features aids denoising. Our method outperforms all baselines (p ≤ 0.05) across both metrics for all noise levels. While the improvement over the baselines that do not use self-supervised pretraining emphasizes the utility of learning spatial audio-visual relationships through self-supervision, the performance boost over TLR and 2.5D-VS underlines the strengths of our self-supervised method design—which are consistently realized for both ASD and denoising. Further, our improvement margins over the baselines are larger for higher noise levels (0 and 2.5 dB), indicating our features play a bigger role in the more difficult denoising settings. Model analysis. In Table <ref> (bottom), we ablate our pretraining method. Similar to ASD, training our model from scratch on the denoising task leads to a decline in the performance. This disentangles the impact of our pretext task design from the model architecture and shows that our pretraining stage helps the backbone with learning better audio-visual features, leading to superior denoising quality. §.§ Qualitative analysis. In Fig. <ref>, we analyze the visual attention maps of our shared audio-visual encoder ℰ^AV. Observe that the regions of high attention are usually centered around the active speakers (see center-left and center-right) and other sound sources (e.g., a loudspeaker for generating background noise in the examples at the center and top-left), or around objects that determine how the sound spatializes in the scene (e.g., large tables, cupboards). Interestingly, our model also attends to multiple people if they are speaking at the same time (see top-left), thereby facilitating the detection of multiple active speakers. § CONCLUSION We introduced a novel self-supervised approach for learning audio-visual representations in egocentric videos via spatial correspondence between the video and its binaural audio. 
The spatial representations are learned via binaural audio inpainting, which involves masking segments or full channels of the binaural audio and predicting the masked parts on the basis of the video and unmasked audio context. Through extensive evaluation, we show that our learned features are strong and generic enough to improve over multiple backbone methods and for multiple downstream tasks, including active speaker detection and source separation. In future work, we plan to explore alternate pretraining strategies involving spatial audio synthesis and leverage more large-scale conversational video datasets for learning stronger features. § SUPPLEMENTARY MATERIAL In this supplementary material, we provide additional details about: * Video (with audio) for qualitative illustration of our pretext task and qualitative evaluation of our model predictions on the downstream tasks (Sec. <ref>). * Evaluation of the impact of the channel masking frequency r (from Sec. <ref> in main) in our audio masking protocol on the downstream task performance (Sec. <ref>) * Evaluation of the impact of our model parameter initialization on the downstream performance (Sec. <ref>) * Additional dataset details (Sec. <ref>), as mentioned in Sec. <ref> in main * Additional model architecture and hyperparameter details for both self-supervised pretraining and downstream training (Sec. <ref>), as referenced in Sec. <ref> and <ref> in main §.§ Supplementary video The supplementary video provides a qualitative illustration of our pretraining task for learning spatial features from audio-visual correspondence in egocentric videos. Moreover, we provide video samples from both the EgoCom <cit.> and EasyCom <cit.> datasets to illustrate the unique challenges posed by egocentric videos. Additionally, we demonstrate our model's prediction quality for both active speaker detection and spatial audio denoising, and analyze common failure modes of our model on both tasks. Please see the video at <http://vision.cs.utexas.edu/projects/ego_av_corr> and use headphones to hear the binaural audio correctly. §.§ Channel masking frequency r Here, we analyze the effect of the channel masking frequency r in our audio masking protocol (Sec. <ref> in main) on the downstream task performance. Table <ref> reports the active speaker detection (ASD) results on the more challenging EgoCom <cit.> dataset, and Table <ref> reports the denoising results for different noise levels. We notice that the performance on both ASD and denoising, especially at the higher noise levels, declines upon increasing or decreasing the value of r from our choice of 20 % based on the downstream validation performance (Sec. <ref> in main), which helps our model achieve a fine balance between the two complementary strategies of masking a complete channel and randomly masking audio tokens. Whereas randomly masking a channel of the binaural audio entails solving the more under-constrained and consequently more complex binauralization task, thereby helping our model learn stronger spatial associations between vision and audio, randomly masking audio tokens helps with improving training stability. §.§ Model parameter initialization To evaluate the effect of random parameter initialization on our model, we train our model on both tasks and datasets with 3 different random seeds.
Across all runs, our standard errors are less than 0.01 on all metrics, showing that our model is robust to different random parameter initializations, and the improvements in performance are significantly larger than these small variations from randomness. §.§ Dataset details As discussed in main (Sec. <ref> in main), we use two public datasets containing egocentric videos with binaural audio, EgoCom <cit.> and EasyCom <cit.>, for our experiments. For EgoCom, we follow the authors and split the data into train/val/test comprising 30.3/2.4/5.8 hours of data. For EasyCom, we randomly generate train/val/test splits with 4.5/0.38/0.39 hours of data, such that there is no overlap in conversation participants between any two splits. Next, we extract 1-second-long clips from both datasets, where the video and binaural audio are sampled at 5 frames per second (fps) and 16 kHz, respectively. The frame resolution is 240 × 352 for EgoCom, and 198 × 352 for EasyCom. Furthermore, we choose audio channels 5 and 6 (corresponding to the in-ear microphones) as our binaural audio channels for EasyCom. §.§ Model architecture and training details In addition to the details provided in main Sec. <ref>, we provide here extra model architecture and training details for both pretraining and finetuning on downstream tasks, for reproducibility. We perform all training using 8 NVIDIA Tesla V100 SXM2 GPUs. We will release all code and data. §.§.§ Pretraining We described our model architecture and pretraining details in Sec. <ref> in main. Here, we provide additional details about our model's input preparation, architecture, parameter initialization, and training. Input preparation. We sample the video clips at their original resolution, normalize them using the per-color means and standard deviations computed on ImageNet <cit.>, and generate a total of 330 and 286 visual tokens for EgoCom and EasyCom, respectively, by splitting the clips into non-overlapping tubelets containing a sequence of 5 patches, where each patch is 16 × 16 in size (Sec. <ref> in main). We represent the binaural audio as two-channel Kaldi-compliant <cit.> spectrograms with 98 temporal windows and 128 Mel-frequency bins, which we compute from the binaural audio normalized to [-1, 1], using a window length of 25 ms and a hop length of 10 ms. We normalize the spectrograms by computing the mean and standard deviation of the Mel-spectrograms generated from all audio clips in each dataset. We next generate 392 audio tokens per spectrogram channel by splitting it into non-overlapping patches of size 2 × 16. Architecture. All hidden layers in each transformer block <cit.> emit features that are four times as long as the embedding size for the block. We always use LayerNorm <cit.> after every output of a transformer block unless it is a direct input to another transformer block. Parameter initialization. We use Xavier <cit.> uniform initialization for all network parameters. For the LayerNorm <cit.> layers, we initialize their weights to 1 and biases to 0. We use a truncated normal distribution with a standard deviation of 0.02 and a sampling range of [-2, 2] to initialize the learnable modality and channel embedding tokens, and initialize the mask tokens from a normal distribution with a standard deviation of 0.02. Training. We set the batch size to 104 during pretraining. §.§.§ Active speaker detection In Sec. <ref> in main, we outlined our feature fusion method for active speaker detection (ASD).
Here, we provide additional architectural details for feature fusion, and also describe our finetuning process. Pretrained feature fusion. Figs. <ref> and <ref> show our feature fusion methods for the TalkNet <cit.> and SPELL <cit.> ASD backbones, respectively. The single-layer transformer decoder (Sec. <ref> in main), which we use for fusing our pretrained features with the backbones (Sec. <ref> in main), generates 128- and 512-dimensional embeddings for TalkNet and SPELL, respectively. Since SPELL does not train any audio-visual features when training its graph neural network (GNN), we first pretrain the transformer decoder for SPELL by using it with the TalkNet backbone. Towards that goal, we feed the decoder features to a single linear layer that maps the 512-dimensional features to 128-dimensional features, and is followed by GELU <cit.> activations and LayerNorm <cit.>, before fusing the 128-dimensional features with the TalkNet backbone. After pretraining, we append the 512-dimensional outputs of the decoder to the outputs of the two-stream audio-visual encoder (Sec. <ref> in main) for training the GNN in SPELL. Training. For TalkNet, we train for 25 epochs using the Adam <cit.> optimizer with an initial learning rate (LR) of 10^-4 for the backbone and 10^-5 for the pretrained components, both of which we decay using a step LR scheduler by a factor of 0.95 after every epoch. We set the batch size to 400. For SPELL, we first train the two-stream audio-visual encoder for feature extraction for 100 epochs using the cross-entropy loss and Adam <cit.> with an initial learning rate of 5 × 10^-4, which we decay by 0.1 after every 40 epochs. We set the batch size to 320. For training the GNN of SPELL, we train for 70 epochs using a batch size of 320 again and a learning rate of 10^-3, while setting all other hyperparameters per the original paper. §.§.§ Spatial audio denoising Backbone architecture. Following <cit.>, our U-Net backbone for spatial audio denoising (Sec. <ref> in main) is an audio-visual model comprising an audio encoder, a visual encoder, and a decoder for predicting an estimate of the target audio. The audio encoder takes the log magnitude spectrogram of the mixed binaural audio as input, and uses a stack of 5 convolutional (conv.) layers to produce a multi-channel 2D audio feature map. Each conv. layer has a kernel size of 4, padding of 1, and stride of 2, and is followed by leaky ReLU <cit.> activations with a slope of 0.2 for negative inputs, and batch normalization <cit.>. The conv. layers have 64, 128, 256, 512 and 512 output channels, respectively. The visual encoder has a ResNet-18 <cit.> architecture that outputs a multi-channel 2D visual feature map without feeding it to the average pooling or any subsequent layers. We push the ResNet outputs through another conv. layer to match its height and width with the audio features. The conv. layer has a kernel size of (1, 4), a padding of (0, 0) for EgoCom <cit.> and (1, 0) for EasyCom <cit.>, and 128 output channels. Further, we remove the last feature column from the output of the conv. layer for all channels for EasyCom. We concatenate the per-frame features along the channel dimension and generate the visual features. We then concatenate the visual features with the audio features channel-wise, and feed the concatenated features to the audio decoder, which predicts an estimate of the ratio mask <cit.> for the target audio magnitude spectrogram. The audio decoder first uses a stack of 5 transpose convolutional (conv.)
layers, which are connected to the corresponding encoder layers through skip connections. The transpose conv. layers have a kernel size of 4, stride of 2, and a padding of (1, 1), except for the last layer, which has a padding of (2, 1). The transpose conv. layers have 1152, 1024, 512, 256 and 128 output channels, respectively. Next, the audio decoder feeds the output of the transpose conv. layers to a conv. layer with 2 input and output channels, and a kernel size of (2, 1) to emit the predicted ratio mask. Input preparation. To transform the audio waveforms into magnitude spectrograms, we first normalize them to [-1, 1] and then compute the short-time Fourier transform with a window length of 128, hop length of 64, and 512 frequency bins. Pretrained feature fusion. Fig. <ref> shows our feature fusion method for spatial audio denoising. We reshape the visual features from the outputs of our audio-visual encoder ℰ^AV to form multi-channel 2D visual feature maps (Sec. <ref> in main), such that the 2D raster order of the features matches that of the tubelets in the video clip, and feed the reshaped features to a convolutional (conv.) layer with a kernel size of (3, 4), stride of (2, 3), padding of (1, 2) and (2, 2) for EgoCom <cit.> and EasyCom <cit.>, respectively, and 128 input and 784 output channels. We similarly reshape the audio features, and feed them to another conv. layer with a kernel size of (1, 7), padding of 0, stride of (1, 6), and 128 input and 256 output channels. Both conv. layers are followed by leaky ReLU activations with a slope of 0.2 for the negative values, and batch normalization. Next, we concatenate the visual and audio features along the channel dimension, and further concatenate them with the audio encoder outputs channel-wise (Sec. <ref> in main). Training. We train for 200 epochs using the Adam <cit.> optimizer with a learning rate (LR) of 5 × 10^-4. We set the batch size to 80.
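A minimal sketch of the denoising backbone's audio encoder stack described in this section is shown below (not the authors' code); the layer hyperparameters follow the text, while the example input spectrogram size is illustrative:

```python
# Sketch of the U-Net audio encoder: five stride-2 convolutions with leaky ReLU
# (slope 0.2) and batch norm, applied to the 2-channel log magnitude spectrogram
# of the mixed binaural audio.
import torch
import torch.nn as nn

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
        nn.LeakyReLU(0.2),
        nn.BatchNorm2d(c_out),
    )

channels = [2, 64, 128, 256, 512, 512]
audio_encoder = nn.Sequential(
    *[conv_block(c_in, c_out) for c_in, c_out in zip(channels[:-1], channels[1:])]
)

# Toy input: (batch, 2 channels, freq bins, time frames); sizes are illustrative.
feat = audio_encoder(torch.randn(1, 2, 256, 64))   # -> (1, 512, 8, 2)
```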
http://arxiv.org/abs/2307.04011v1
20230708164347
Robust Learning-Based Incipient Slip Detection using the PapillArray Optical Tactile Sensor for Improved Robotic Gripping
[ "Qiang Wang", "Pablo Martinez Ulloa", "Robert Burke", "David Cordova Bulens", "Stephen J. Redmond" ]
cs.RO
[ "cs.RO", "cs.LG" ]
The ability to detect slip, particularly incipient slip, enables robotic systems to take corrective measures to prevent a grasped object from being dropped. Therefore, slip detection can enhance the overall security of robotic gripping. However, accurately detecting incipient slip remains a significant challenge. In this paper, we propose a novel learning-based approach to detect incipient slip using the PapillArray (Contactile, Australia) tactile sensor. The resulting model is highly effective in identifying patterns associated with incipient slip, achieving a detection success rate of 95.6% when tested with an offline dataset. Furthermore, we introduce several data augmentation methods to enhance the robustness of our model. When transferring the trained model to a robotic gripping environment distinct from where the training data was collected, our model maintained robust performance, with a success rate of 96.8%, providing timely feedback for stabilizing several practical gripping tasks. Our project website: <https://sites.google.com/view/incipient-slip-detection>. § INTRODUCTION §.§ Background Autonomous robots have yet to achieve human-like dexterity when performing gripping tasks, mainly due to a lack of satisfactory tactile perception and processing abilities. Studies have shown that even humans struggle with simple gripping tasks in the absence of tactile sensation <cit.>. The palm of the human hand contains ∼17,000 mechanoreceptors, i.e., specialized nerve endings that respond to mechanical stimuli such as deformation, pressure, and displacement <cit.>. These receptors play a crucial role in sensing and relaying tactile information to the nervous system <cit.>, allowing humans to adjust their grip in real time to account for slipperiness and other factors. Building on these insights, researchers have designed tactile sensors replicating part of the human hand's sensing capabilities and explored slip detection techniques using these sensors to enhance robotic manipulation performance <cit.>. §.§.§ Types of slip The two main types of slip are gross slip and incipient slip. Gross slip refers to the occurrence of slip across the entire contact surface, where the relative motion between the gripper or tactile sensor and the gripped object is typically observable at a macro level <cit.>. On the other hand, incipient slip refers to the initial stage of slip, when parts of the contact surface slip while others remain stuck <cit.>. For example, when an object is held by elastic fingertips, and an external force is applied to the object in a direction tangential to the contact surface, some parts of the fingertips will stretch while others will compress, causing incipient slip at the periphery of the contact surface while the central part remains stuck. As the applied force increases, the slip will eventually spread across the entire contact surface, leading to gross slip. Throughout the incipient slip phase, there may not be any observable relative motion between the object and the finger. §.§.§ Slip detection and challenges Previous studies have proposed techniques to detect gross slip and apply corrective measures when the slip is detected to prevent objects from dropping out of the grasp <cit.>. Detecting gross slip may not always be a wise strategy, as it occurs when the entire contact has already started slipping.
On the other hand, detecting incipient slip can provide an early warning of an impending and more dangerous gross slip, allowing corrective measures to be applied earlier, and increasing the likelihood of maintaining a safe grip. However, detecting incipient slip is not trivial because it requires the contact interface of the sensor to possess adequate elasticity, enabling one part to undergo sufficient and detectable deformation, resulting in slip, while the other part remains stuck. Furthermore, validating incipient slip can be challenging since it is not generally associated with macro-level relative movement between the sensor/finger and the object. To verify the occurrence of incipient slip, researchers commonly utilize a camera to monitor the contact surface; by examining the camera images, they can visually confirm the presence of incipient slip events <cit.>. However, this method of relying on cameras may not be feasible in real-world situations, such as when gripping everyday objects. §.§ Our contribution Our study presents a new technique for detecting incipient slip using the PapillArray (Contactile, Australia) tactile sensor. This sensor features a square array of nine elastic silicone pillars with varying unloaded heights, promoting different normal forces on the pillars when pressed against a surface. This design enhances the likelihood of inducing incipient slip on shorter pillars when a tangential force is applied. We utilized deep neural networks (NN) to develop our incipient slip detection algorithm, where we made novel use of the data gathered in a previous study <cit.> to construct the dataset for training and evaluating the NN. The primary objective of the NN was to classify inputs into two distinct categories: incipient slip and other, functioning as a binary classifier; other refers to all others states that are not incipient slip, such as gross slip or being stationary. Furthermore, the tactile data at hand is presented in the form of a uniformly-sampled time series. Therefore, to effectively capture the serial nature of the data, we utilize a recurrent neural network (RNN) <cit.>. The inclusion of historical data in a NN model has the potential to enhance its performance in real-time prediction tasks, as it enables the capture of temporal patterns and dependencies, leading to more robust and accurate forecasts <cit.>. We also propose several data augmentation methods designed to enhance the performance and robustness of our trained model, making it robust to environmental confounders. § RELATED WORK Similar to the approach we will take in this paper, the approach proposed in <cit.> treats slip detection as a classification task; the authors employed a support vector machine <cit.> to detect slip using the velocity of embedded pins on the inner surface of a TacTip camera-based tactile sensor <cit.>. Labels of the training data are assigned manually based on the alignment of pin velocities. In a more recent study <cit.>, the authors modified the TacTip sensor used in <cit.> by introducing raised fingerprint-like ridges, decreasing skin thickness, and increasing pin spacing to reduce mechanical coupling between ridges and to create the traction differential and facilitating the shear displacement required for the occurrence of incipient slip. This is similar to the behavior seen on the human finger pad when sheared against an object, thus allowing the sensor to experience incipient slip. 
They used an external camera to monitor the contact in real-time for data labeling, and then employed a convolutional neural network <cit.> as a binary classifier to detect incipient slip. The GelSight technology is another camera-based tactile sensing system that uses an elastic body to establish a contact with an object, with the built-in camera recording the resulting deformation to obtain tactile data <cit.>. An approach was introduced in <cit.> for detecting incipient slip using the GelSight sensor. This method determines the degree of incipient slip by analyzing the inhomogeneity of the displacement field, which is quantified in terms of entropy. More recently, a more advanced version of the GelSight technology, called GelSlim, was proposed in <cit.>; it employed the deviation of the deformation field from a 2D planar rigid displacement field to determine slip. Compared to camera-based tactile sensors, the distributed optical sensor used in our work, the PapillArray, is less complex in terms of instrumentation<cit.>. It offers several advantages over other sensor designs, including size, temporal resolution, and compliance. A heuristic algorithm that employs the PapillArray tactile sensor to detect incipient slip is proposed in <cit.>. The approach is based on the observation that incipient slip happens when some sensor pillars stop deflecting at the same rate as the contacted object is moving in the sensor's frame of reference. Precisely, this approach detects slip by evaluating the tangential velocity drop with respect to a reference pillar, which is the pillar under the highest normal force (usually the center). In the case of rotational movements, with the center of rotation at the center pillar, the algorithm cannot detect any slip since no movement can be detected in the center pillar. This heuristic approach is further improved in <cit.> to account for rotational slips, detecting the deceleration of each pillar by comparing it to its own recent maximum velocity, and then it checks if other pillars are still in motion to confirm that the deceleration indicates an incipient slip. However, these methods may not be applicable when dealing with deformable or non-planar surfaces, or when only a subset of the pillars makes contact with the object. In such cases, establishing a dependable reference pillar to represent the object's movement in <cit.> becomes challenging; in <cit.>, it is difficult to determine whether the deceleration of pillars is caused by slip or by the shape of the object's surface. In our work, we are motivated to take a learning-based approach in developing a dedicated incipient slip detection algorithm, where we propose domain adaptation techniques to enhance the robustness of our trained model, enabling it to effectively detect incipient slip for more realistic objects and contacts, overcoming the challenges outlined above. § MATERIALS AND METHODS §.§ Hardware §.§.§ Contactile sensor Our study employed the commercial PapillArray sensor from Contactile[<https://contactile.com/>], depicted in Fig. <ref>, which is based on the concept described in <cit.>. The sensor outputs the real-time x-y-z force data experienced by each pillar at a high sampling rate of 1,000 Hz. Our training data was collected using the Dev Kit v1, while for the online evaluation of our trained model, we used the Dev Kit v2. Dev Kit v2 and Dev Kit v1 differ in size and the pillar Shore hardness. §.§.§ Robotic gripping rig Fig. <ref> displays the rig used in our study for the gripping task. 
The rig features a specialized two-finger gripper (RG2, OnRobot, Germany) with a blue adapter fixed to one of its fingers. This adapter serves to couple the Contactile PapillArray Dev Kit v2 sensor to the gripper finger. A white 3D-printed cuboid is used to extend the other finger, matching the length of the finger equipped with the sensor. Moreover, a couple of ArUco markers are attached to this extended cuboid to track the gripper's pose. We replaced the original motor of the RG2 gripper with a stepper motor (MX-28, Dynamixel, US) to achieve high-frequency interruptible control of the gripper. The modified gripper was mounted on a six-axis robot arm (UR5e, Universal Robots, Denmark). §.§ Data preparation §.§.§ Collect slip data and annotate slip events for individual pillars Our training dataset is sourced from <cit.>. In brief, the training data was acquired using a six-degree-of-freedom hexapod robot (H-820, Physik Instrumente, Germany) with the Contactile PapillArray Dev Kit v1 sensor mounted on the top. A transparent acrylic plate is fixed above the sensor on a T-slot frame and a video camera (Logitech Streamcam, Logitech, Switzerland) is positioned above the acrylic plate to capture videos of the contact between the sensor and the plate. During the data collection, the hexapod pushes the sensor vertically against the acrylic plate and then moves it laterally to induce a slip. The horizontal movement could be a translation, a rotation, or a combination of both. A total of 200 data sequences were collected, covering a range of compression levels, hexapod movement velocities, and movement directions. The recorded videos are processed using the Matlab Computer Vision Toolbox (MathWorks, USA) to track the pillar tip position. The tangential pillar tip velocity is then used to label the slip state (gross slip or not gross slip) of individual pillars. §.§.§ Collect control data When the sensor is compressed against a flat surface and moved laterally, the tangential velocity measured by each pillar will increase at first, as the sensor starts deforming, before reaching a peak velocity and subsequently decreasing its speed when the pillar stops deforming (Fig. <ref>). If a pillar stops deforming because it is undergoing incipient slip, at least one other pillar will still be deforming laterally; this is observed by an asynchronous decrease of the tangential velocity of the nine pillars (Fig. <ref> - Slip). However, if the object stops moving before any slip occurs, the tangential velocity magnitude of the nine pillars decreases almost simultaneously (Fig. <ref> - Stop). Since stop events display temporal features similar to slip events, we collected an additional dataset specifically focusing on stop events, consisting of a total of 28 data sequences. We label the data points in these sequences as other. By incorporating this dataset, the NN is less likely to misclassify between incipient slip and other, thereby improving the accuracy and reliability of the NN. The data collection process was similar to that of the slip events, except that the hexapod's movement was abruptly halted before any slip occurred. Further details on this process can be found in <cit.>.
§.§.§ Annotate the incipient slip Based on the definition of incipient slip provided in Section <ref>, we annotate the incipient slip in the dataset as follows: we consider that incipient slip has occurred when at least one pillar slips with respect to the contact surface, while at least one other pillar remains stationary with respect to the contact surface. In other words, we start annotating incipient slip from the moment the first slip occurs on any pillar, and this interval continues until the time when all nine pillars have slipped. The slip label of each pillar is obtained as described in Section <ref>. It should be noted that when annotating incipient slip in the rotational data, we only consider the outer eight pillars. This is because the rotational movement is centered around the central pillar, which never slips by our definition (remains in the same location on the contact area), for our data set. §.§.§ Refine data sequence The sensor output exhibits variance due to noise and sporadically produces glitches that deviate significantly from the mean value, displaying sudden extreme highs or lows. To address these issues, we apply a median filter with a window size of 21 samples on the raw sensor signal, which is sampled at 1,000 Hz. We divided the raw data sequence into non-overlapping windows, with each window containing 40 samples. This division reduced the data rate to 25 Hz. This was done for practical limitations in the hardware and software of our system. More precisely, the maximum refresh rate of our gripper servo is ∼62 Hz, and the computation rate of our classifier is ∼40 Hz. Moreover, it is worth noting that reliable gripping does not necessarily require a high sampling frequency. Indeed, humans have a reaction time of approximately 80-120 ms (equivalent to 8.3-12.5 Hz) <cit.>, enabling us to perform most everyday gripping tasks effectively. Finally, we only consider the x-y forces on the pillars as input in NN training, while excluding the z force. During the data collection process, when the hexapod moves tangentially to induce slip, it remains stationary in the z direction. As a result, we assume that the z force does not play a significant role in detecting incipient slip in our case. It should be acknowledged that in real-world scenarios, the normal force can provide valuable information for humans to detect slip, and it is likely to vary appreciably for different gripping objectives. Therefore, another reason for excluding the z force is to prevent the NN from incorrectly learning that the z force remains relatively stable during slip events, as occurs in our data set. §.§ Training data augmentation §.§.§ Data augmentation by rotational symmetry During the data collection process, the sensor is placed at the origin of the world coordinate frame. Its horizontal surface is parallel to the x-y plane of the world frame of reference, and the side edges align with the x-y axis directions. Hence we use a rotation transformation to augment the data; intuitively, it can be understood as rotating the initial position of the sensor around the z axis by a random angle. For each data point in a sequence, we perform the following mathematical calculations: [ F_x'; F_y' ] = [ cos(θ) -sin(θ); sin(θ) cos(θ) ]·[ F_x; F_y ], θ∈[0,2π), where F_x and F_y represent the force values along the original x-y axis, and F_x' and F_y' are the augmented force values after virtual rotation of the sensor by a randomly sampled angle, θ, from a uniform distribution of [0, 2π). 
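Two small utilities from this section are sketched below (not the authors' code; the array layouts are assumptions): the rotational-symmetry augmentation defined by the rotation above, and the incipient-slip interval implied by the per-pillar gross-slip labels.

```python
# Sketches of the rotation augmentation and the incipient-slip annotation rule.
import numpy as np

def rotate_tangential_forces(F_xy, theta=None):
    """F_xy: (T, 9, 2) tangential (x-y) forces for the 9 pillars.
    Rotate about the z axis by theta sampled uniformly from [0, 2*pi)."""
    if theta is None:
        theta = np.random.uniform(0.0, 2.0 * np.pi)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return F_xy @ R.T

def incipient_slip_interval(pillar_slip):
    """pillar_slip: (T, N) boolean per-pillar gross-slip labels
    (N = 9, or the outer 8 pillars for rotational sequences, as noted above).
    Returns [start, end): from the first pillar slip until all pillars have slipped."""
    has_slipped = np.maximum.accumulate(pillar_slip, axis=0)
    any_slip = has_slipped.any(axis=1)
    all_slip = has_slipped.all(axis=1)
    if not any_slip.any():
        return None
    start = int(np.argmax(any_slip))
    end = int(np.argmax(all_slip)) if all_slip.any() else pillar_slip.shape[0]
    return start, end
```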
§.§.§ Advanced data augmentation for domain adaptation The data used in our study was collected under idealized conditions, where a hexapod robot was used to compress the sensor against a flat surface and move laterally in a controled manner. In this setup, the force was nearly perpendicular or parallel to the contact surface and the movement speed is nearly constant. However, in real-world robotic gripping, the conditions are expected to be quite different from this idealized setup, and the performance of the model trained on such data is expected to be poor. We identify several issues that may arise when transferring the model trained on idealized data to real-world gripping scenarios, and we propose a range of advanced data augmentation methods to address these issues in the following paragraphs. These methods are designed to generate synthetic data that mimics the real-world variability of the gripping: * Issue: The slipping velocity in real-world robotic gripping is not constant, as it is influenced by various factors such as gravity, friction, and the shape of the object being gripped. However, during the data collection process, the hexapod induces slip at a constant velocity. Remedy: We employ random sampling to sample a percentage of data points from the raw data sequence, thereby generating a new data sequence. And we maintain the frequency of the new sequence at the same rate as the raw sequence (1,000 Hz). This approach can simulate velocity variations to mimic real-world gripping scenarios, as it changes the magnitude differences of some temporally adjacent data points while keeping the time interval unchanged. * Issue: In some gripping scenarios, a portion of the sensor pillars may not be in contact with the object. For instance, this can occur when employing sensors to grip an object with a rounded surface or when gripping an object smaller than the sensor's contact area. Remedy: To simulate an unloaded pillar, we substitute a number of pillar data sequences with zero sequences. Noise is then added to make the generated sequence resemble a realistic sensor signal. The noise is derived from a normal distribution with a mean of 0.0 N and a standard deviation of 0.001 N. * Issue: Unlike with the hexapod, the force generated by the gripper may not be perfectly perpendicular to the x-y plane of the sensor frame of reference, and the force leading to slip may not be perfectly in this plane. This can occur when the gripped object is not flat or the mechanical linkage of the gripper flexes when applying force to the object. Remedy: First, we sampled nine individual pillar sequences from raw sensor sequences with different sensor compression levels and hexapod movement types, and then combined them to form a new sensor sequence. Secondly, we scaled (scale factor ranging from 0.2 to 2.0) the magnitude of values for a number of pillar sequences. Lastly, we randomly permuted the position (by pillar index) of a nine-pillar sequence. Employing these techniques can encourage the NN capture a broader and more comprehensive pattern of incipient slip (see Section <ref>), rather than only learning the limited pattern introduced by the hexapod. §.§ Neural networks The key decision making component of our incipient slip detection approach is a binary classifier. Initially, we trained a NN capable of estimating the probability of incipient slip for each time point in a sequence. Next, we set a threshold to convert the continuous probability into a binary output. 
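A minimal sketch of one such per-time-step classifier is given below (not the authors' implementation); it uses the GRU-based architecture detailed in the next subsection, represents each 25 Hz step by the flattened x-y forces of the nine pillars over its 40-sample window (an assumption), and thresholds the output probability at 50%. The ensemble described next averages the probabilities of several such models before thresholding.

```python
# Sketch of a single per-time-step incipient-slip classifier with a GRU cell.
import torch
import torch.nn as nn

class IncipientSlipClassifier(nn.Module):
    def __init__(self, in_dim=40 * 9 * 2, enc_dim=128, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, enc_dim), nn.ReLU())
        self.gru = nn.GRU(enc_dim, hidden, num_layers=1, batch_first=True)
        self.estimator = nn.Linear(hidden, 2)    # logits: [other, incipient slip]

    def forward(self, x):                        # x: (batch, steps, in_dim)
        h, _ = self.gru(self.encoder(x))
        return torch.softmax(self.estimator(h), dim=-1)[..., 1]   # P(incipient slip)

model = IncipientSlipClassifier()
probs = model(torch.randn(1, 50, 40 * 9 * 2))    # per-step probabilities
detections = probs > 0.5                         # threshold P_th = 50%
```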
To enhance the accuracy of the classifier, we used an ensemble technique that trains multiple independent classifiers concurrently and aggregates their output probabilities to produce the final decision (shown in Fig. <ref>). §.§.§ Architecture Fig. <ref> illustrates the process of inputting a data sequence into the NN and obtaining the corresponding slip classification. The modified data sequence, as explained in Section <ref>, is input into an encoder. Subsequently, the encoder output is passed to a specific type of RNN called a gated recurrent unit (GRU) <cit.>. In our approach, we utilize a single layer of GRU for each propagation step, and we refer to it as a GRU cell. The hidden output from the GRU cell is generated as a combination of the current input and historical information. Moreover, an estimator is included that takes the hidden layer output from the GRU cell and converts it into a probability estimation. The ground truth label of each window is determined by the label of the last sample in the window. §.§.§ Training The ensemble model consists of Z (Z=5 in our case) independently trained classifier models. During each training iteration of each classifier model, a subset comprising a proportion λ of the sequences (λ=40% in our case) is randomly sampled with replacement from the entire training set and used for NN training. The final layer of the estimator utilizes a two-class softmax activation function, with its outputs interpreted as probabilities for the occurrence of incipient slip and other. Our chosen loss function is binary cross-entropy. §.§.§ Decision making We aggregate the output probability from each classifier model in the ensemble to convert the continuous probability to a binary prediction: f:=1[∑_z=1^Z M_z(x=[F_(n-1)T+1,···,F_nT])/Z > P_th], where 1[·] is an indicator function, M_z denotes the z^th classifier model in the ensemble, x denotes the input vector, and P_th denotes the probability threshold, which is 50% in our work. Z denotes the number of classifiers in the ensemble model. § EXPERIMENTS AND RESULTS We first demonstrate our method's high success rate in detecting incipient slip, in both offline and online scenarios. Then, we illustrate the practical benefits of our approach by showcasing its ability to stabilize an insecure robotic grasp in a number of practical gripping tasks. §.§ Offline evaluation The entire dataset is randomly split into two subsets: a training set (∼80% of the entire dataset, comprising 160 data sequences of slip events and 23 data sequences of stop events) for model training, and a test set (∼20% of the entire dataset, consisting of 40 data sequences of slip events and 5 data sequences of stop events) for model evaluation. Both subsets are expanded through the symmetry-based augmentation method described in Section <ref>, resulting in a five-fold increase in the size of the training set and test set. Fig. <ref> displays two examples comparing the incipient slip detection results over slip and stop events. As observed, the algorithm's confidence in labeling incipient slip increases rapidly as incipient slip starts and decreases as it progresses toward gross slip. In comparison, the probability in the stop case fluctuates slightly but remains well below the threshold. We define an incipient slip detection as successful if it occurs within a 0.3-second window preceding the true labeled time point of incipient slip (to accommodate the error of the ground truth) and prior to the occurrence of the gross slip.
For the stop event, a successful estimation is defined as a classification of the entire sequence as other. Fig. <ref> presents the confusion matrix, displaying the final classification results over the entire test set; our algorithm achieves an overall success rate of ∼95.6%. The results also demonstrate its effectiveness in differentiating between the slip and stop events; this indicates that our algorithm is not simply detecting the changes in the force and yank of the pillars, as mentioned earlier in Section <ref>. Our algorithm can effectively detect incipient slip in its early stages. In Fig. <ref>, we present the latency between the moment of incipient slip detected by the algorithm and the ground truth onset of incipient slip. It is evident that, on average, incipient slip can be detected within 10 ms of its initiation. §.§ Online evaluation In the online evaluation stage, we utilized the full data set for training the final deployed model. Again, to increase the amount of training data, we applied both symmetry-based (see Section <ref>) and advanced data augmentation (see Section <ref>) techniques, resulting in a five-fold increase in data amount (1140 data sequences). The online evaluation was performed on six everyday objects, depicted in Fig. <ref>. We include objects of varying surface materials, curvatures, and hardness to ensure a broad range of conditions are represented in our results. §.§.§ Validating incipient slip detections We cannot easily validate incipient slip occurrences for everyday objects as we cannot independently monitor individual pillar contacts. Hence, we choose to perform the online evaluation based on the following well-founded assumptions. An incipient slip detection is considered successful if it occurs at any time-point between the time when the robot's movement begins (T_m) and the time when gross slip occurs (T_g); the criterion for determining the occurrence of gross slip has been arbitrarily defined as the occurrence of relative translational movement greater than 2 mm or relative rotational movement exceeding 2^∘ between the object and the robot's frame of reference. To induce a slip, the gripper first grips the object with a constant force. Then the robot moves the gripper downwards towards a rigid and stationary table surface, eliciting the slip between the sensor attached to the gripper tip and the object. In each trial, the gripping force is selected from a range of 8 N to 30 N. The robot movement can be either translational, rotational or a combination of translational and rotational. The velocity (v) and acceleration (a) of the robot movement have three different levels: low (v = 4 mm.s^-1, a = 10 mm.s^-2), medium (v = 10 mm.s^-1, a = 50 mm.s^-2), and high (v = 40 mm.s^-1, a = 100 mm.s^-2). All robot movements were performed using the built-in movel function of the UR script. The tool center position and orientation are obtained using the built-in getl function of the UR robot. This function employs forward kinematics calculations based on the read joint angles. In accordance with the offline evaluation, control trials are also conducted here for each combination of v, a, and movement type. The purpose is to validate that the identified behavior is indeed incipient slip, rather than an event with a similar pattern, such as the stop event mentioned above. The control data involves lifting the robot arm while maintaining a secure grip using a pre-determined grip force that is sufficient to prevent any slippage.
As a result, when lifting an object, the pillars in contact undergo downward deformation due to the force of gravity; subsequently, once the object is securely held by the gripper and remains relatively motionless, these pillars will remain stationary. Here, for ease of explanation, we will also refer to this event as stop, and we label the sequence as other. To ensure a fair experiment, we add extra weight to lightweight objects to enhance their downward motion when being lifted, aiming to make the pattern of the output data sequence more similar to that of a slip event. In total, our experiment consisted of 216 trials, including 162 slip-event sequences (6 objects × 3 movements × 3 forces × 3 velocity/acceleration combinations) and 54 stop-event sequences (6 objects × 3 movements × 1 force × 3 velocity/acceleration combinations). Fig. <ref> illustrates the final validation results. Fig. <ref> shows a confusion matrix, highlighting the high success rate (∼96.8%) of our method in detecting incipient slip and its ability to differentiate between slip and stop events. Fig. <ref> demonstrates that our algorithm can detect incipient slip almost immediately upon the initiation of the movement that induces slip, with detection occurring at a normalized displacement D_norm of 0.2 - 0.4 (refer to the caption for the definition of D_norm). These results provide comprehensive validation of the effectiveness of our approach in detecting incipient slip in real-world gripping tasks. §.§.§ Ablation study This study aims to showcase the effectiveness of our advanced augmentation method in bridging the domain gap between the idealized data collected with the hexapod and the more realistic data encountered with the robotic gripper. To accomplish this, we employed the model training approach described in Section <ref>. However, instead of splitting the data into separate train and test sets, we trained the model using the entire dataset here, given the different objective. Subsequently, we conducted online gripping experiments, as described in Section <ref>, using this trained model. Our findings, as illustrated in Fig. <ref>, indicate that the model trained without our advanced augmentation method exhibits a notably high false positive rate in the subsequent online gripping task when compared to the results shown in Fig. <ref>, where the model was trained using our advanced augmentation method. In other words, the model trained without our advanced augmentation is unable to effectively distinguish the patterns of slip and stop events. As a result, it incorrectly detects incipient slip in many stop events. §.§ Grasp stabilization after incipient slip detection This experiment aims to show the benefit of using our incipient slip detection method in practical gripping tasks. It involves lifting the robot arm while gripping the object with a pre-determined small force to ensure that slip occurs. We applied our incipient slip detection method and adjusted the grip when incipient slip was first detected to prevent the object from slipping further. In this experiment, we simulate two common scenarios that can trigger slips. The first involves gripping an object at its center of gravity with insufficient force and lifting it, causing a translational slip between the gripper and the object. The second involves gripping an object away from its center of gravity and lifting it, where rotational slip is likely to occur. 
We implemented a simple grip force adaptation that responds to incipient slip detection as follows: if incipient slip is detected, the robot immediately stops, and the gripper applies a pre-determined secure force to the object. The objects used in the experiment are the same as those shown in Fig. <ref>. The experiment was conducted 36 times (6 objects × 2 scenarios (translation or rotation) × 3 repetitions). We fix ArUco markers on the objects and the gripper and use Python OpenCV to track the positions and orientations of all markers. We report the results in Table <ref>, which demonstrate the quick and effective detection of incipient slip by our algorithm. On average, our algorithm detects incipient slip and prevents the object from slipping by the time the relative translation between the object and the gripper reaches 2.5 mm and the relative rotation reaches 1.9^∘. Our algorithm showcases its ability to facilitate timely corrective action, preventing object falls; a demonstration video can be seen at our project website given in the abstract. § DISCUSSION Our developed algorithm enables the NN to effectively learn the incipient slip pattern from offline data and demonstrates high accuracy on both offline and online test sets. Furthermore, our algorithm enhances the security of robotic gripping. Compared to previous related works <cit.>, our algorithm offers several advantages. Firstly, our incipient slip detection algorithm incorporates a data-driven learning-based approach, minimizing the need for extensive human involvement in investigating the complex patterns of incipient slip. Secondly, the improved robustness of our algorithm enables the NN to effectively adapt to diverse domains with various types of PapillArray sensors and robotic gripping systems, despite being trained solely on data lacking heterogeneity. Therefore, our algorithm is more practical and possesses greater potential for maximizing the utilization of valuable tactile data in real-world scenarios. Thirdly, our algorithm has the ability to distinguish between incipient slip and a closely related tactile pattern that we refer to as a stop event. Notably, previous related work <cit.> has not adequately considered or addressed the stop event; however, our investigation has revealed the importance of including stop events when developing incipient slip detection algorithms due to their similar patterns but entirely different consequences. There are limitations to our work that need consideration. Firstly, the incipient slip detection could be improved by transitioning from a binary signal to a continuous warning signal. For instance, if incipient slip is detected in a small portion of the contact surface, the remaining contact area may still provide sufficient friction to prevent significant slippage. In such cases, the warning level of incipient slip is low and corrective actions may not be necessary. Conversely, if a significant portion of the contact surface exhibits incipient slip, the warning level should escalate and it becomes important to take appropriate corrective actions. Moreover, our current choice of force adaptation method for reacting to incipient slip falls short when compared to the state-of-the-art gripping control work <cit.>. However, it is important to note that force adjustment is not the primary focus of our research in this paper, which focuses on improving incipient slip detection. 
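The simple reactive strategy used in this work can be summarized in a short Python sketch. This is a minimal illustration: the robot, gripper and detector objects and their methods are hypothetical placeholders of ours and do not correspond to the authors' software or to any specific driver API.

def stabilize_on_incipient_slip(robot, gripper, detector, secure_force_n=30.0):
    # Poll the detector once per control cycle; on the first positive detection,
    # halt the commanded motion and raise the grip force to a secure level.
    while robot.is_moving():
        window = detector.latest_window()      # most recent force/yank window
        if detector.predict(window) == 1:      # 1 = incipient slip, 0 = other
            robot.stop()                       # stop the arm immediately
            gripper.set_force(secure_force_n)  # apply the pre-determined secure force
            return True                        # corrective action was taken
    return False                               # motion finished without detection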
In future work, we will develop a more sophisticated force adaptation technique that incorporates our incipient slip detection method. § CONCLUSION In conclusion, this paper presents an incipient slip detection method that employs deep learning and several data augmentation techniques to improve the robustness of the trained NN. Our method is highly effective and reaches state-of-the-art performance; it enables a single pre-trained NN model to be applied across various domains and tasks. In addition, our method has the potential to be extended to other approaches that use compliant tactile sensors. To train the NN parameters, we use stochastic gradient descent with a momentum of 0.95 and a learning rate of 10^-3, with a batch size of 512. We also incorporate a weight decay of 10^-3 using L_2 regularization during training. The encoder NN consists of one hidden layer with 1024 units, and the output dimension is 128. The GRU cell has a hidden layer dimension of 128. The predictor network comprises two hidden layers with 256 and 128 units, respectively. To all hidden layers, we apply rectified non-linearity <cit.> and batch normalization <cit.>. We implement our NN using PyTorch (Version 1.12.1, Meta, USA). All our experiments are conducted on a PC with an Intel 7-10875H CPU and an NVIDIA 2060 GPU. During the online evaluation stage, we utilise ROS <cit.> to facilitate communication between various components in our system.
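For reference, the network and optimizer settings listed above translate into roughly the following PyTorch sketch. The layer widths and optimizer hyper-parameters follow the text; the input dimensionality, the sequence handling, and all names are placeholder assumptions of ours, not the authors' released implementation.

import torch
import torch.nn as nn

class SlipClassifier(nn.Module):
    # Encoder (one hidden layer of 1024 units, output 128) -> GRU cell (hidden 128)
    # -> estimator (hidden layers of 256 and 128 units, two-class output).
    def __init__(self, input_dim=54):            # input_dim is a placeholder
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 1024), nn.BatchNorm1d(1024), nn.ReLU(),
            nn.Linear(1024, 128))
        self.gru = nn.GRUCell(128, 128)
        self.estimator = nn.Sequential(
            nn.Linear(128, 256), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Linear(256, 128), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Linear(128, 2))

    def forward(self, seq):                      # seq: (batch, time, input_dim)
        h = seq.new_zeros(seq.size(0), 128)
        for t in range(seq.size(1)):
            h = self.gru(self.encoder(seq[:, t]), h)
        return self.estimator(h)                 # logits; softmax is applied when probabilities are needed

model = SlipClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            momentum=0.95, weight_decay=1e-3)   # L2 regularization
loss_fn = nn.CrossEntropyLoss()                  # cross-entropy over the two classes, batch size 512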
http://arxiv.org/abs/2307.07262v1
20230714103504
MorphPiece : Moving away from Statistical Language Representation
[ "Haris Jabbar" ]
cs.CL
[ "cs.CL" ]
AudioInceptionNeXt: TCL AI LAB Submission to EPIC-SOUND Audio-Based-Interaction-Recognition Challenge 2023 Kin Wai Lau, Yasar Abbas Ur Rehman, Yuyang Xie, Lan Ma TCL AI Lab {stevenlau, yasar, yuyang.xie, rubyma} @tcl.com August 12, 2023 ======================================================================================================================================= Tokenization is a critical part of modern NLP pipelines. However, contemporary tokenizers for Large Language Models, are based on statistical analysis of text corpora, without much consideration to the linguistic features. We propose a linguistically motivated tokenization scheme, MorphPiece, which is based partly on morphological segmentation of the underlying text. A GPT-style causal language model trained on this tokenizer (called MorphGPT) shows superior convergence compared to the same architecture trained on a standard BPE tokenizer. Specifically we get Language Modeling performance comparable to a 6 times larger model. Additionally, we evaluate MorphGPT on a variety of NLP tasks in supervised and unsupervised settings and find superior performance across the board, compared to GPT-2 model. § INTRODUCTION One significant aspect of modern Large Language Models (LLMs) is their massive size in terms of memory footprint and training resources. For instance, GPT-2 <cit.>, a well-known language model, took the equivalent of 9.2 days on 512 V-100 GPUs for training <cit.>. Its elder cousin, GPT-3, needed the equivalent of 14.8 days on 10,000 V-100 GPUs <cit.>. However, such infrastructure requirements are beyond the financial means of most researchers, and training these models has a substantial CO_2 footprint <cit.>. Moreover, inference on larger models is also slower and more expensive. Therefore, any technique that can reduce these requirements would make LLMs more affordable, ubiquitous and eco-friendly. In this paper, we demonstrate that a tokenization method that incorporates linguistic knowledge can help in this direction. Most contemporary tokenizers use statistical information from text corpora to build vocabularies. We propose to move away from this purely statistical nature of tokenization schemes and inject language specific inductive bias at the tokenization stage. We propose to achieve that by introducing a deterministic morphological segmentation stage and combine it with statistical BPE algorithm. The input text is first tokenized with morphological segmentation and then passed through a BPE algorithm. We also introduce a reverse tokenizer that combines the tokens from these two sources to output sentences. Modern NLP pipelines involve segmenting text into discrete units which are represented with learnable high dimensional vectors. This segmentation, called tokenization, forms the basis of most transformer (and many pre-transformer e.g LSTM, RNN <cit.> based architectures. Many tokenization algorithms have been explored over the past few years, ranging from characters to words and an intermediate form that is called sub-word tokenization. The most commonly used tokenizers such as BPE <cit.>, WordPiece <cit.>, Unigram <cit.> etc) follow the subword tokenization paradigm, which relies on the statistical properties of the corpus to construct the tokenization scheme and ignore the linguistic knowledge embedded in the language. It has been shown <cit.> that a morphologically informed vocabularies lead to better generalization capabilities of language models. 
In this work, we build on that insight and propose a tokenization approach that relies partly on the morphological construction of words to break them down into sub-words. The intuition is that sub-words constructed in this way will be more natural than a split based on statistical properties, and hence might lead to more efficient models. For instance, "paratrooper" would be segmented as (’para#’, ’troop’, ’#er’) in our tokenizer, which aligns more closely with the linguistic parts of the word compared to the BPE and Wordpiece tokenizers that split it into (’par’, ’atro’, ’oper’) and ('para', '##tro', '##oper'), respectively. To validate our approach, we train a GPT-like architecture with our proposed tokenizer and compare it to a pretrained GPT-2 model that uses BPE tokenization. The results demonstrate that our tokenizer leads to superior convergence and improved performance across a wide range of NLP tasks. We call our tokenizer MorphPiece[There is a similarly named R library (morphemepiece): https://github.com/macmillancontentscience/morphemepiece] and Table <ref> gives some examples that highlight the manner in which MorphPiece splits words compared to BPE and Wordpiece. A few aspects are apparent here: * MorphPiece segmentation splits up the words into linguistically aligned affixes which have a semantic meaning. This is not the case with statistical tokenizers. * MorphPiece modifies the spellings of some words, without which this alignment would not be possible (e.g. batting is tokenized as ['bat','ing'], instead of ['batt','ing']). * Such splitting into affixes opens up potential analyses of suffixes and prefixes that are not possible with statistical tokenizers. For example, negation prefixes like 'de', 'un' and 'dis' are clearly segmented from the stem, which is not the case with BPE/Wordpiece. Going forward, we first give an overview of related work in Section <ref> and then present MorphPiece in Section <ref> with details of how to construct the tokenizer. In Section <ref> we carry out a few statistical comparisons of MorphPiece with the WordPiece and BPE tokenizers. Then we present a GPT-like model trained on this tokenizer and discuss at length the results under various evaluation metrics in Section <ref>. This is followed by a detokenization algorithm (Section <ref>) which combines the tokens into sentences. Finally, we conclude with a few insights and the way forward in Section <ref>. Our primary contributions are as follows: * We propose a linguistically motivated tokenizer that results in a more efficient language model, with superior performance across a wide variety of NLP tasks, compared to models trained on BPE. * We pre-train a GPT-like architecture on this tokenizer. * We also devise an algorithm for de-tokenization of tokens into sentences. * We will open-source the code and various checkpoints of the model trained on MorphPiece, upon publication of the paper. § RELATED WORK There is an ample body of research on building morphological tokenizers using supervised, unsupervised, or manual curation methods. Morfessor <cit.> and its variants <cit.> are the most well known. In the SIGMORPHON 2022 Shared Task for Morpheme Segmentation <cit.>, there were 13 submissions for morpheme segmentation at the word and sentence level. This challenge itself built on the Morpho-Challenge series <cit.>. The use of morphological segmentation for tokenization has been explored extensively in the context of Neural Machine Translation, with mixed results <cit.>. 
However, the use of morphological analysis in language modeling, especially in transformer-based architectures, is rather limited. <cit.> compare BPE and Unigram tokenization for morphological alignment and find that Unigram is more aligned to morphological splits and leads to better or similar performance in downstream tasks. Similarly, <cit.> showed that a morphologically informed vocabulary improves the performance of LLMs. Subsequently, <cit.> proposed a statistical tokenization improvement method (FLOTA) that tries to align the tokenization with morphological segmentation and showed that this improves performance on a specific task. Our work is different from theirs in several important ways. First, they use the statistically built vocabularies of BERT/BPE/Unigram. Second, they apply their method only during the fine-tuning stage. Third, they don't have separate morphological and statistical modes of tokenization. Fourth, they evaluate on only one task. Finally, our model outperforms FLOTA by a huge margin. § MORPHPIECE In this section we present MorphPiece, an English-language tokenization scheme that combines Byte Pair Encoding (BPE) with morpheme-based segmentation for a more linguistically aligned tokenization mechanism. The tokenization scheme is shown in Figure <ref>. First, the text is normalized and pre-tokenized as per the standard BPE tokenization <cit.>. In the case of English, these pretokens are obtained by a regex-based splitting of sentences. These pretokens are then passed through a look-up table of words (called MorphTable) to see if a morpheme-based segmentation is available. If a segmentation is found, the pretoken is replaced with the corresponding morphemes; if not, the tokens are split according to the BPE tokenization scheme with a custom-trained vocabulary. §.§ MorphTable MorphTable is a simple dictionary with English words as keys and their respective morphological segmentations as values. To construct MorphTable, we use MorphyNet <cit.>, which is a database of derivational and inflectional morphology for 15 languages. We construct a look-up table of 346,340 English words that have been segmented into morphemes from the database. Table <ref> shows the frequency count of these segmentations. The extremely high morpheme counts come from chemical compounds (e.g., dichlorodiphenyltrichloroethane). For the purpose of our tokenizer, we created a vocabulary from the set of unique affixes and stems in MorphTable after dropping the entries with fewer than 5 occurrences. This trimmed-down version had 18,304 tokens and reduced the table size to 134,943 entries. §.§ MorphPiece Vocabulary The MorphPiece vocabulary has two sources. The first is the MorphTable described above. All the affixes and stems from this table are added to the vocabulary. The second component is the trainable BPE vocabulary. In the spirit of fair comparison, we aimed for the same vocabulary size as that of GPT-2, i.e., 50,257 tokens. Accounting for the vocabulary from MorphTable (18,304 tokens), we trained a BPE tokenizer to build a vocabulary of 32,000 tokens. We used OpenWebText <cit.> as the training corpus. Before training this tokenizer, we removed all words that had a segmentation available in the MorphTable, the idea being that since those words will be processed by the MorphTable and not by the BPE algorithm, the BPE tokenizer should be trained only on the text that it will actually tokenize. 
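The lookup-then-fallback flow described at the beginning of this section can be sketched in a few lines of Python. The dictionary entry and the bpe object (anything exposing an encode method) are illustrative stand-ins of ours, not the actual MorphTable or the trained BPE model.

def morphpiece_tokenize(pretokens, morph_table, bpe):
    # pretokens: list of words produced by the BPE-style pre-tokenization.
    # morph_table: dict such as {'paratrooper': ['para#', 'troop', '#er']}.
    # bpe: fallback tokenizer with an `encode` method returning subword strings.
    tokens = []
    for word in pretokens:
        if word in morph_table:
            tokens.extend(morph_table[word])   # morphological segmentation
        else:
            tokens.extend(bpe.encode(word))    # statistical (BPE) fallback
    return tokens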
After merging the two vocabularies and accounting for a few common tokens, we have final vocabulary size of 50,006. § STATISTICAL ANALYSES OF MORPHPIECE In this part, we compare the proposed MorphPiece tokenizer with BPE and WordPiece on various tokenization statistics. Specifically, we evaluate the three tokenizers across fertility <cit.> and coverage. Fertility is defined as the average number of subwords that a tokenizer splits a word into. So the tokenization ('para', '##tro', '##oper') of the word 'paratrooper' has a fertility of 3. When averaged over a large corpus, fertility is a measure of how aggressively a tokenizer splits the words. Coverage, on the other hand, tells us which part (the MorphTable or the integral BPE tokenizer) of MorphPiece handled a particular pretoken. We evaluate coverage across various token lengths and fertility across various sentence lengths. Combined together they indicate how different (or similar) is MorphPiece, compared to WordPiece or BPE, at word and sentence level. For both evaluations, we use GPT2-Output-Dataset, released by OpenAI <cit.>, which has 250,000 English sentences. §.§ Fertility To measure fertility, we tokenize the dataset with the three tokenization schemes and additionally with a whitespace splitter. We use whitespace tokenization as a proxy for number of words in a sentence. Subsequently we plot the average number of tokens produced by the three tokenizers for various sentence lengths and the result is shown in Table <ref>. We can see that while BPE and WordPiece have similar sentence lengths, MorphPiece produces about 17% longer sentences. To reconfirm that the trend is not influenced by the dataset statistics, we did the same analysis for first million sentences (Appendix <ref>) of bookcorpus dataset. §.§ MorphTable Coverage MorphTable was constructed from MorphyNet <cit.> which is a crowd-sourced collection of morpheme segmentations. In this section we evaluate the coverage of MorphTable by analyzing the words in a corpus that are tokenized by MorphTable versus those tokenized by BPE. We tokenize bookcorpus <cit.> by MorphPiece and list the most frequent words that are not tokenized by MorphTable. Table <ref> shows the 10 most common words in this category. As can be seen, none of the words have a morphological segmentation; thus indicating that the MorphTable has a good coverage of words that have morphological segmentation. (Please refer to Appendix <ref> for the list of top 50 tokens) Another way to analyze coverage is across various word lengths. MorphPiece has essentially two internal tokenization schemes: the MorphTable and the internal-BPE. Within internal-BPE, there are again two modes of tokenizations: pretokens that are available as complete tokens within the internal BPE vocabulary and the pretokens that are split further ('BPESplit' in Figure <ref>). We want to compare the number of tokens that are split by these three mechanisms across various token lengths. From the figure, we can see that as the token length increases, the proportion of tokens found in BPE vocabulary decreases. This is consistent with BPE algorithm. Moreover, MorphPiece splits words from 4 to about 20 characters. Smaller and larger tokens are handled by the BPE tokenization. § EVALUATION ON A LANGUAGE MODEL A concrete test of any new tokenization scheme is from the performance of a language model trained on that scheme, on various NLP tasks. 
Towards that end, we train a GPT-2 architecture with MorphPiece and compare it with a GPT-2 model pretrained with BPE. We call our model MorphGPT. It is pertinent to note that other than the tokenization scheme, MorphGPT has no architectural difference from GPT-2 (Base). §.§ Evaluation setup GPT-2 was trained on custom built corpus called WebText. Since that corpus is not available publicly, we used its open source clone, called the OpenWebText <cit.>. Additionally, we used HuggingFace's implementation of GPT-2 <cit.> with Pytorch-Lightning <cit.> as the training framework on Nvidia A-100 GPUs. To establish a baseline, we pretrained GPT-2 architecture twice, once each on BPE and on MorphPiece tokenizer for 55,000 steps with exactly the same hyper-parameters. As can be seen in Figure <ref>, MorphGPT-50k shows a clear advantage over GPT-2Base50k. Please refer to Section <ref> for some possible explanation into this performance gain. Additionally, we evaluated both models on various Language Modeling tasks and found that MorphGPT-50k outperforms GPT-2Base50k by huge margins (Table <ref>). Having confirmed performance gains using MorphPiece on language modeling task, we continue training the MorphGPT-Base50k model for a total of 200k steps using the same hyper-parameters and compare its performance with the GPT-2 (Base), available on HuggingFace hub as 'gpt2' checkpoint. For training hyperparameters, we use batch size of 512 and one-cycle learning rate scheduler <cit.> with maximum learning rate of 1e^-3. We used warmup of 2000 steps, and cosine decay to final learning rate of 1e^-5. For the optimizer, we used Adam <cit.> with betas 0.9 and 0.995 and eps of 1e^-8. § EVALUATIONS We evaluate our model on a number of NLP tasks as described in following sections. For the tasks more closely related to language modeling, we compare MorphGPT at checkpoints of 50k, 100k, 150k and 200k iterations with fully trained GPT-2 (Base/Large) models. With one training step, the model sees about 0.5 million tokens. Here, we see MorphGPT perform comparable to a 6 times larger GPT-2 (Large) model (Table <ref>). For other NLP tasks, we use MorphGPT at 200k iterations and find that it outperforms comparable GPT-2 (Base) model; usually, with a wide margin. This reconfirms the finding from <cit.> that compared to a statistical tokenizer, a morphologically inspired tokenizer produces better word representations. We evaluate MorphPiece on a wide variety of tasks. Specifically we conduct evaluations on language modeling tasks (perplexities on various datasets and LAMBADA); supervised learning tasks (on GLUE benchmark); unsupervised learning (Information Retrieval, Paraphrase Identification, Re-ranking) and zero shot prompt-based evaluations on GLUE. In the first three categories, MorphPiece shows much superior performance across the board. In the last category, it shows performance comparable to GPT-2. Finally, we compare MorphGPT to a similarly themed tokenization scheme called FLOTA <cit.> and find that our method performs extremely well in this comparison as well. §.§ Language Modeling We evaluate MorphGPT and GPT-2 (Base/Large) on Penn Tree Bank <cit.>, OpenAI-250k <cit.> and LAMBADA datasets <cit.>. As can be seen in Table <ref>, MorphGPT models show much better perplexity numbers over fully trained GPT-2 models, despite being trained for a fraction of iterations. 
In particular, even with only 50k steps, MorphGPT achieves better perplexity than GPT-2 (Base) across all three datasets; and reaches performance of GPT-2 (Large) with 200k steps. LAMBADA In the LAMBADA dataset <cit.>, the task is to predict the last word of a paragraph and it is designed in a way that local context is not enough and one requires the whole paragraph to predict the correct answer. This task is known to be particularly hard for models to do well <cit.>. MorphGPT surpasses GPT-2 accuracy by almost 10% with only 50k steps and almost reaches the accuracy of six times larger GPT-2 Large model (Table <ref>). §.§ GLUE Benchmark GLUE <cit.> is a standard NLU benchmark. We finetuned both GPT-2 and MorphGPT on the tasks included in this benchmark and the results are shown in Table <ref>. It can be seen that, with the exception of SST, in all the tasks where MorphGPT is better than GPT-2, the difference is quite big. On the contrary, the tasks where GPT-2 is better, the difference is much smaller and could be attributed to inherent noise in evaluations. §.§ Sequence Embedding To test the performance of MorphGPT with unsupervised training, we evaluate it on four different tasks involving sequence embeddings from various domains and tasks. We used the tasks, datasets and code from <cit.> for these evaluations. Re-Ranking We evaluate this task on datasets from two different domains. The first domain is a collection of technical posts from the AskUbuntu <cit.>; where the models are required to re-rank 20 candidate questions according to similarity with a given post. The second dataset is subset of a benchmark about scientific papers <cit.>. Following <cit.>, we use the subsets of Cite, Co-Cite, Co-Read, and Co-Review. For all these tasks, the models are required to identify and rank up to 5 relevant papers from a list of 30 candidate papers. Information Retrieval For this task, we use CQADupStack <cit.>, where the models are required to retrieve duplicate questions from a collection of forum posts across 12 domains in Stack Exchange. Paraphrase Identification We evaluate TwitterPara from <cit.>, which consists of two sub-datasets. The task involves determining if a pair of tweets are paraphrase of each other, against manually annotated gold labels. We evaluate all these tasks at sentence level. To construct sentence embedding, we take the average across tokens, of the last hidden state of MorphGPT and GPT-2, before softmax. Aggregated results are shown in Table <ref>. It can be seen that MorphGPT performs better than GPT-2 across all tasks; often, with considerable performance improvement. For more detailed results in sub-domains of respective datasets and tasks, please see Appendix <ref>. §.§ Zero Shot Evaluations Here we use LM Evaluation Harness <cit.>, to evaluate MorphGPT and GPT-2 on GLUE tasks with default prompts. We use no in-context learning and only evaluate the tasks in zero-shot settings. The results (Table <ref>) show that MorphGPT performs comparable to GPT-2. It is pertinent to mention here that prompt-based evaluations are susceptible to high variance <cit.> depending on the wording of prompts. §.§ FLOTA Finally, we present a comparison baseline. Few Longest Token Approximation (FLOTA) <cit.> is a tokenization improvement method which uses the vocabulary of standard BPE tokenizer but tries to preserve the morphological structure of words during tokenization. It achieves that by finding a segmentation that recursively finds the largest segment of a word and splits on that. 
So, for example, the word 'undesirable' would be split as ('und', 'es', 'irable') by BPE, but with FLOTA it will be split as ('un','desirable'), which is closer to the exact morphological segmentation ('un','desire','able') used by MorphPiece. The authors show that the FLOTA scheme preserves morphological structure to a large extent and that such a mechanism improves upon the vanilla GPT-2 model. MorphPiece differs from FLOTA in several important ways (please see Section <ref> for details); however, since this technique is closest to our work, we examine it in detail. FLOTA was evaluated on a classification task over a custom dataset consisting of titles from the computer science, maths and physics domains of ArXiv. A small (2000 samples) and a large (20,000 samples) dataset were constructed for each of the three areas. The models were finetuned for 20 epochs and evaluated on the dev/test splits. The results (Table <ref>) show a marked improvement over the FLOTA technique. While GPT-2+FLOTA shows an improvement of 5% on the dev set (7% on the test set) over vanilla GPT-2, MorphGPT shows improvements of 27% on the dev set (54% on the test set). Additionally, the authors of FLOTA injected noise during evaluation to test the robustness of their scheme (Table <ref>). Here also, MorphGPT shows marked improvements over vanilla GPT-2 (40% on ArXiv-Large and 77% on ArXiv-Small). § DETOKENIZATION We define detokenization as the process of combining individual tokens, produced by a model trained with MorphPiece (e.g., MorphGPT), to form a sentence. While detokenization is straightforward for BPE and other statistical tokenizers, this is not the case for MorphPiece. This is primarily due to the fact that in MorphPiece, tokens come from one of two different sources: the MorphTable or the internal BPE. During detokenization, we need not only to ascertain which token comes from which source, but also how to combine the morphemes back into English words. We give details of both steps separately in the sections below. §.§ Classification of Tokens In the first stage, we use the surface forms to classify all tokens as either 'morph' or 'bpe', signifying the source they come from. Additionally, we annotate the 'morph' tokens as either prefix, suffix, stem or hash (for compound words). MorphPiece tokens have four different surface forms. (a) The prefixes and suffixes have a '#' sign at the end or beginning of the token, respectively. (b) The compound words are separated by a '#' token. (c) The tokens split by BPE that carry a leading space have a 'Ġ' symbol. (d) The BPE splits and the stems from MorphTable have no special symbol in them. Classification of tokens with surface forms of the first three types is straightforward. For the tokens that have no special symbol, we have a heuristically driven algorithm that marks them as either 'morph/stem' or 'bpe'. §.§ Reverse MorphTable Once all tokens are classified as above, the 'bpe' tokens are combined following the standard BPE algorithm, which essentially involves concatenating them and applying byte-pair decoding. However, for the tokens marked 'morph', the procedure is more involved. First, we need to find the tokens that are morpheme constituents of the same word (i.e., find word boundaries) and then use a reverse MorphTable to find those words. Finding word boundaries is further complicated by various cases such as compound words, multiple affixes, etc. 
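Before the word-boundary heuristic is described next, the surface-form classification and the reverse look-up can be sketched as follows. The marker conventions mirror our reading of rules (a)-(d) above, and the example reverse-table entry is illustrative rather than taken from the released vocabulary.

def classify_token(tok):
    # Surface-form rules (a)-(d): '#' at the end marks a prefix, '#' at the
    # beginning marks a suffix, a lone '#' separates compound words, and a
    # leading 'Ġ' marks a BPE token that carries a space.
    if tok == '#':
        return 'hash'
    if tok.endswith('#'):
        return 'prefix'
    if tok.startswith('#'):
        return 'suffix'
    if tok.startswith('Ġ'):
        return 'bpe'
    return 'stem_or_bpe'        # resolved by the heuristic described in the text

def word_from_morphemes(morphemes, reverse_table):
    # e.g. reverse_table[('para#', 'troop', '#er')] == 'paratrooper'
    return reverse_table[tuple(morphemes)]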
To combine various cases of these surface forms, we have developed a heuristic algorithm (Figure <ref>) that gives us word continuation and word boundaries between different tokens. This algorithm defines sequence of surface forms that would form a valid segmentation for a word, by looking at consecutive token labels from Section <ref>. Once the word boundaries are found, the reverse-MorphTable is then used to convert this segmentation to an English word. §.§ Illustrative Example Let's assume, a model trained on MorphPiece outputs the tokens shown in Figure <ref>. In the first step, the tokens will get classified as : ['bpe', 'bpe', 'prefix', 'stem', 'suffix', 'stem', 'suffix'] with additional label of 'morph' on all tokens except 'bpe'. Since merging of tokens labelled 'bpe' is straightforward, we focus on those marked 'morph'. Now we follow the arrows from Figure <ref>; with the solid black lines showing word continuation and red dashed lines showing word boundaries. From here we get word boundaries as : ['in#','vestigate','ing'] and ['diligent','#ly']. Finally we look up the words in a reverse-MorphTable to get 'investigating' and 'diligently'. § DISCUSSION We believe that the performance improvement from using MorphPiece comes from the relative ease to perform well on the language modeling task of predicting the next token. This is because the tokens in MorphPiece have less 'noise' compared to BPE. From Table <ref>, it can be seen that MorphPiece tokens have (a) more meaningful segmentation in the form of morphemes, and (b) less spelling idiosyncrasies e.g batting is split as ('bat','ing') instead of ('bat','ting') or ('batt','ing'), both of which have tokens that are not aligned with the actual words 'bat' and 'ing'. On the other hand, a model trained on BPE has to tackle with both problems; which makes it relatively difficult to perform well on language modeling task. A related aspect is that of representational efficiency. Contemporary tokenizers use sub-word tokenization to achieve a balance between representational power and model size. MorphPiece can be seen as a mechanism in the same direction, but using linguistic properties instead of statistical information. § CONCLUSION AND WAY FORWARD We have presented a linguistically motivated tokenization scheme that is more efficient in training large language models and outperforms models trained on BPE on a wide variety of tasks. We hope that this new paradigm of using linguistic inductive bias will lay the foundations of a new generation of tokenization schemes and models, that move away from purely statistical language representation. § ACKNOWLEDGEMENTS The author would like to extend sincerest gratitude to Prof Nafise Sadat Moosavi for her comments and input to the manuscript. Additionally, this work was supported in part by ERC-Grant 740516: NonSequeToR and a BMBF grant. The author also acknowledges the compute resources provided by Leibniz-RechenZentrum (LRZ), Garching and Lichtenberg HochLeistungsRechenzentrum (HLR), Darmstadt for training the models and running the experiments. acl_natbib § FERTILITY FOR BOOKCORPUS Fertility analysis for first million sentences from BookCorpus dataset <cit.>. (Table <ref>) § COVERAGE OF MORPHPIECE ON BOOKCORPUS Table <ref> shows the top 50 most frequent tokens in bookcorpus, that are not split by MorphPiece. § DETAILED EVALUATIONS Table <ref> shows detailed results comparing MorphGPT with GPT-2/FLOTA across all six datasets in both dev and test splits. 
Table <ref> shows that MorphGPT outperforms GPT-2 across the two subtasks and two evaluation metrics of the TwitterPara dataset. Table <ref> shows the superior performance of MorphGPT on the AskUbuntu dataset across all four metrics. Table <ref> gives a detailed evaluation on the CQADupStack dataset across all 12 genres in the dataset. Table <ref> gives a detailed evaluation on the SciDocs dataset across the four subtasks, with MorphGPT outperforming GPT-2 across all subtasks and both distance metrics.
http://arxiv.org/abs/2307.06063v1
20230712102736
Bending instabilities of m=1 mode in disc galaxies: interplay between dark matter halo and vertical pressure
[ "Sagar S. Goyary", "Kanak Saha", "H. Shanjit Singh", "Suchira Sarkar" ]
astro-ph.GA
[ "astro-ph.GA" ]
firstpage–lastpage Operational Support Estimator Networks Mete Ahishali, Mehmet Yamac, Serkan Kiranyaz, Moncef Gabbouj Mete Ahishali, Mehmet Yamac, and Moncef Gabbouj are with the Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland (email: [email protected]). Serkan Kiranyaz is with the Department of Electrical Engineering, Qatar University, Doha, Qatar (email: [email protected]). August 12, 2023 ============================================================================================================================================================================================================================================================================================================================================================================================== A self-gravitating, differentially rotating galactic disc under vertical hydrostatic equilibrium is supported by the vertical pressure gradient force against the gravitational collapse. Such discs are known to support various bending modes e.g., warps, corrugation, or scalloping (typically, higher order bending modes) of which m=1 bending modes (warps) are the most prevalent ones in galactic discs. Here, we present a detailed theoretical analysis of the bending instability in realistic models of disc galaxies in which an exponential stellar disc is under vertical equilibrium and residing in a cold rigid dark matter halo. A quadratic eigenvalue equation describing the bending modes is formulated and solved for the complete eigen spectrum for a set of model disc galaxies by varying their physical properties such as disc scale-height, and dark matter halo mass. It is shown that the vertical pressure gradient force can excite unstable bending modes in such a disc as well as large scale discrete modes. Further, it is shown that the unstable eigen-modes in a thinner disc grow faster than those in a thicker disc. The bending instabilities are found to be suppressed in discs dominated by massive dark matter halo. We estimate the growth timescales and corresponding wavelength of the m=1 unstable bending modes in Milky Way like galaxies and discuss its implication. instabilities - galaxies: kinematics and dynamics - galaxies: structure - methods: analytical § INTRODUCTION Large-scale bending waves or the classic integral-sign warps (m=1, where m is the azimuthal wave number) are commonly seen in nearby disc galaxies including our Milky Way <cit.>. More than 50% of local disc galaxies show warping of their disc mid-plane when viewed edge-on <cit.>. Considering the fact that the detection of warps is a tedious task in low-inclination discs - most disc galaxies may be warped - simply speaking galactic discs refuse to stay flat <cit.>. Although gaseous discs exhibit large amplitude bending waves, typically ∼ 1 kpc size <cit.>, they are of small amplitude in the stellar discs; typically not more than a few hundred parsecs <cit.> - adding woes to the detection of stellar bending waves further in external galaxies. However, there is a sudden rejuvenation in this age-old topic with the availability of RAVE (Radial Velocity Experiment) survey covering more than a few kpc distances from the solar neighborhood in the Milky Way <cit.>; 6D phase space data combining recent surveys like GAIA, LAMOST, SEGUE <cit.> to decode the dynamical fossil signature of bending modes in our Galaxy <cit.>. 
In other words, bending waves are present both in stars and gas - implying that they might be long-lived and rather easier to be generated in their host discs <cit.>. Various mechanisms are proposed to generate bending waves in galaxies. For example, tidal interaction with a companion galaxy or satellite has been suggested to generate large-scale bending waves in the Galaxy and in external galaxies <cit.>. A number of studies focused on the infalling gas from the intergalactic medium, cosmic infall <cit.> as well as misaligned disc and dark matter halo <cit.> to generate large scale bending of the galactic discs. Although large-scale bending waves can be generated through these processes, their persistence remains a perennial problem. Substantial efforts have been invested over the last few decades to generate long-lived bending waves via internal instabilities <cit.> including counter-rotation <cit.> and dynamical friction due to the dark matter halo or its substructure <cit.> as the external effects might not always be present. The readers are referred to <cit.> for a detailed account on disc warping. <cit.> in their pioneering work, found no unstable bending modes in a family of rotating, self-gravitating razor-thin cold discs and concluded that discrete bending modes can not account for the observed gentle warping of a disc. However, this analysis needs to be carried out in a more realistic model of warm disc galaxies that includes additionally a dark matter halo. A detailed study of the linear normal mode analysis carried out by <cit.> on a thin stellar slab with uniform density distribution along the radial axis concludes that when the ratio of vertical to radial velocity dispersion of stars σ_z/σ_R ≤ 0.293, the slab becomes bending unstable. This has been confirmed later on by various authors <cit.> through analytical studies as well as N-body simulations. <cit.>, in particular, carried out a detailed analysis of bending instability in a more realistic model of discs - a family of self-gravitating Kuzmin-Toomre discs with finite thickness and found that they exhibit both discrete and exponentially growing modes which disappear as the disk thickness increases. He further demonstrated using N-body simulations that one of the disk models (KT/5) supports long-lived discrete axisymmetric (m=0) bending modes. Discrete normal modes of oscillation of a galactic disc are also shown to exist in a number of studies but for specific galactic systems <cit.>. In other words, long-lived bending oscillations pose a considerable challenge to dynamicists; the readers are referred to <cit.> for an insightful discussion in this context. It is in this spirit, we revisit this fascinating problem of galactic dynamics and extend the analysis to realistic models of disc galaxies considering an exponential density distribution along the radial direction and sech^2 or exponential along the vertical direction <cit.> and a logarithmic dark matter halo potential that gives rise to a flat rotation curve <cit.>. The rest of the paper presents a detailed analytic study of the m=1 bending instability, the nature of the eigen spectrum, and how the growth rate of bending instability depends on various properties of the galaxy including that of the dark matter halo. This paper is organized as follows: In Section <ref>, we give a description of the dynamical model of the disc bending and the details of the galaxy and dark matter halo density profiles. 
In Section <ref>, the methods used for solving the numerical solution of models are presented with input parameters to solve the quadratic eigenvalue problem and find the nature of eigenmodes. The results of the numerical analysis of eigen modes in the absence and presence of vertical pressure are discussed in Section <ref> and <ref> respectively. In Section <ref>, we estimate and discuss the growth time of bending mode and wavelength using the WKB dispersion relation. In Section <ref>, we discuss our results and present the conclusions. § FORMULATION OF THE PROBLEM §.§ Model of the galactic disc and dark matter halo We use galactocentric cylindrical coordinate system (R, ϕ, z). We consider the density distribution of the galactic disc exponential out to truncation radius R_t and is zero beyond a radius R_o (<R), and between these two radii, the density tapers smoothly with cos^2 function along the radial direction introduced by <cit.> and Gaussian in the vertical direction, as given by ρ(R,z) = ρ__0,0 e^-R/R_de^-z^2/z__0^2,   if   R≤R_t = ρ__0,0e^-R/R_d e^-z^2/z__0^2 cos^2( π/2 R-R_t/R_o-R_t ),     if   R_t≤R≤R_o = 0,      if   R≥R_o Here ρ__0,0 is the central, mid-plane (z=0) density value of the galactic disc, given by ρ__0,0= M_d/2π^3/2 R_d^2 z__0. M_d is the mass of the disc, R_d denotes radial scale length and z_0 denotes the vertical scale height. The potential corresponding to the above density distribution is calculated in the following. Φ_disc(R,z) = - 2 π^3/2 G ρ_0 R_d^2 z_0∫^∞_0 dk J_0(kR) I(R,k) × I(z,k), where I(R,k) and I(z,k) are given by I(R,k) = ∫^R_t_0 R'dR' J_0(kR') e^-R'/R_d + ∫^R_o_R_t R'dR' J_0(kR') e^-R'/R_dcos^2(π/2R'-R_t/R_o-R_t), I(z,k) = 1/2exp( k^2z_0^2/4-kz) erfc( kz_0/2-z/z_0) +1/2exp( k^2z_0^2/4+kz) erfc( kz_0/2+z/z_0). At the mid-plane of the galactic disc z=0, I(k)= exp( k^2z_0^2/4) erfc( kz_0/2). Here J_0(kR) is the cylindrical Bessel function of the first kind of order zero. The circular velocity due to the disc, V_c,d, at any radius R, and at the disc mid-plane is obtained by using the following relation <cit.> V^2_c,d = R ∂Φ_disc/∂ R|_z=0. We consider the density distribution of the dark matter halo of the form <cit.> ρ_h(R,z)=V_0^2/4π Gq^2(2q^2+1)R_c^2+R^2+(2-q^-2)z^2/(R_c^2+R^2+z^2q^-2)^2, where R_c is the core radius and q is the halo flattening parameter, i.e. the axis ratio of the equipotential surfaces. V_0= √(GM_h/R_c) is the flat circular speed at large R. M_h denotes dark matter halo mass. The corresponding gravitational potential of the dark matter halo is given by: Φ_halo(R,z)= V^2_0/2ln(R^2+R_c^2+z^2/q^2). The circular velocity V_c,h(R) at radius R on the equatorial plane of the halo potential is given by <cit.> V_c,h(R) = V_0R/√(R^2+R_c^2). This profile yields an asymptotically flat rotation curve as observed in local spiral galaxies through the 21cm HI observation <cit.>. The net circular velocity is obtained by adding the contribution of the disc and the halo in quadrature as V_c, total=(V^2_c,d+V^2_c,h)^1/2. We show the circular velocities for the disc and the disc plus halo cases, corresponding to different values of z_0 and the halo to disc mass ratio (M_h/M_d), in Fig.<ref>. §.§ Derivation of the dynamical equation We consider an axisymmetric galactic disc of density profile given by equation (<ref>), rotating in the equatorial plane (z=0) of the dark matter halo with an angular speed Ω(R) about the symmetry axis of the halo (R=0). The halo is considered to be rigid or non-responsive throughout the analysis. 
Then the dynamical equation of the bending of the galactic disc under the small-angle approximation (i.e., considering the vertical displacement with respect to the unperturbed plane z=0 to be small) can be written as: (∂/∂ t + Ω(R)∂/∂ϕ)^2 Z = -∇_zΦ_self - ∇_zΦ_halo -1/ρ_d∇_zP, where Z denotes the small bending above the disc mid-plane; ∇_z is the vertical gradient; the first term on the RHS is the force due to self-gravity, the second term is the vertical restoring force due to the dark matter halo, and the third term is the vertical pressure force due to the non-zero vertical velocity dispersion. The system ignores any diffusion of matter along the radial direction, such as might be caused by epicyclic motions in the stellar disc. The dynamical model of the bending of the disc adopted here is similar to that of <cit.> and <cit.>, with the additional vertical pressure gradient term introduced here. The vertical force F_d=-∇_zΦ_self due to the disc self-gravity is given by F_d = -G∫_0^∞ ∫_-∞^∞∫_0^2πR'ρ(R',z') ×[Z_m(R,ϕ,t)-Z_m(R',ϕ',t')]dϕ'dz'dR'/[R^2+R'^2-2RR'cos(ϕ-ϕ')+(z-z')^2]^3/2. Here m represents the m^th-order bending mode. The vertical restoring force near the disc plane due to the dark matter halo is given by F^m_h = -ν_h^2(R)Z_m(R,ϕ,t), where ν_h is the vertical frequency due to the dark matter halo alone in the unperturbed disc plane. When the vertical frequency is greater than the orbital frequency, i.e. ν^2(R)>Ω^2(R), the disc oscillates in the vertical direction more rapidly than it orbits about the galactic center <cit.>. §.§.§ Calculation of the vertical velocity dispersion Assuming a barotropic stellar fluid, the pressure P acting along the vertical direction, arising from the non-zero vertical velocity dispersion in the disc, is given by P = σ^2_z(R,z) ρ(R,z) = σ^2_z(R,z) ρ_0,0 ρ(R) e^-z^2/z_0^2. In the above equation σ_z(R,z) denotes the vertical velocity dispersion. Now the Poisson equation for an axisymmetric, thin galactic disc is given by dK_z/dz = - 4 π G ρ(R,z), where K_z denotes the vertical force per unit mass. Substituting ρ(R,z) into the above equation and integrating from z=0 to z, we obtain K_z = -2 π^3/2 G ρ_0,0 z_0 ρ(R) erf(z/z_0). Now, the condition for vertical hydrostatic equilibrium <cit.> for the disc is given by ∂/∂ z(ρ(R,z) σ^2_z(R,z)) = ρ(R,z) K_z. We obtained an analytical expression for σ_z by integrating the above equation from z to ∞, with the boundary condition ρ(z)=0 as z → ∞ <cit.>. This gives us σ^2_z(R,z) = -1/ρ(R,z)∫_z^∞ρ(R,z') K_z' dz' = 2 π^3/2 G ρ_0,0 ρ(R) z_0 e^z^2/z_0^2∫_z^∞ e^-z'^2/z_0^2 erf(z'/z_0) dz'. At the mid-plane of the disc, z=0, σ^2_z(R,0) = 2 π^3/2 G ρ_0,0 z_0 ρ(R) ∫_0^∞ e^-z'^2/z_0^2 erf(z'/z_0) dz'. Using the following integral relation <cit.>: ∫_0^∞ e^-b^2x^2 erf(ax) dx = √(π)/(2b) - (1/(b√(π))) tan^-1(b/a), and substituting a=b=1/z_0, we get ∫_0^∞ e^-z'^2/z_0^2 erf(z'/z_0) dz' = (√(π)/4) z_0. Substituting equation (<ref>) into equation (<ref>) we get the final expression for σ^2_z(R,0) as σ^2_z(R,0) = π^2/2 G ρ_0,0 z_0^2 ρ(R). Equation (<ref>) provides the vertical velocity dispersion of the disc at z=0 as a function of disc radius. The right panel of Fig. <ref> shows the plot of the vertical velocity dispersion for two different discs of vertical scale heights z_0 = 0.1 and z_0 = 0.2 at a radial distance of 2R_d. We see that at any radial distance, the disc with z_0 = 0.2 has a higher σ_z value than the disc with z_0 = 0.1. The velocity dispersion drops sharply between the truncation radius R_t = 5 and R_o = 6. 
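As a quick numerical sanity check on the closed form above (our own sketch, with an arbitrary test value of z_0 and G = ρ_0,0 = ρ(R) = 1), a few lines of SciPy confirm that the quadrature reproduces (√π/4) z_0 and hence σ_z^2(R,0) = (π^2/2) G ρ_0,0 z_0^2 ρ(R).

import numpy as np
from scipy import integrate, special

z0 = 0.2                                              # disc scale height (test value)
integrand = lambda z: np.exp(-(z / z0) ** 2) * special.erf(z / z0)
numeric, _ = integrate.quad(integrand, 0.0, np.inf)
print(numeric, np.sqrt(np.pi) * z0 / 4.0)             # quadrature vs. closed form
# Mid-plane dispersion with G = rho_00 = rho(R) = 1:
print(2.0 * np.pi**1.5 * z0 * numeric, np.pi**2 / 2.0 * z0**2)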
In the left panel of Fig.<ref> we show the vertical density distribution of the discs with scale heights z_0=0.1 and 0.2 at radial distance 2R_d. §.§.§ The quadratic eigenvalue problem (QEP) When equations (<ref>) and (<ref>) are substituted into equation (<ref>) we can get the solution of dynamical equation (<ref> ) in the form of Fourier terms as- Z_m(R, ϕ, t) = ℝ{h(R) e^j(ω_m t-mϕ)}. Here m represents the m^th bending mode of the disc, which represents an m-armed warp precessing at eigen frequency ω = ω_R ± j ω_I (i.e., the whole pattern revolves with period 2π/ω). h(R) describes the shape of bending modes in the self-gravitating disc. When the imaginary part of the eigen frequency, i.e. ω_I<0 the eigen mode becomes unstable and grows exponentially with a growth rate of 1/|ω_I|. On the other hand, when ω_I>0 the eigen mode is called damping mode. The eigen mode is stable when ω_I=0. Finally we substitute the expression of Z_m(R,ϕ,t) from equation (<ref>) into equation (<ref>) and obtain the following equation (∂/∂t + Ω(R)∂/∂ϕ)^2Z_m = -G∫_-∞^∞ ∫_0^∞∫_0^2πR'ρ(R')ρ(z') ×[Z_m(R,ϕ,t)-Z_m(R',ϕ',t)] dϕ'dR'dz'/[R^2+R'^2-2RR'cos(ϕ-ϕ')+(z-z')^2]^3/2 -ν_h^2(R)Z_m(R,ϕ,t)+2σ_z^2(R)/z_0^2 Z_m(R,ϕ,t). The triple integral on the right side of the above equation can be written as [(ω_m-mΩ (R))^2-ν^2_h] h(R) =Gh(R)∫_0^∞ R' ρ(R')dR' K_0(R,R') -G∫_0^∞ R' ρ(R')dR' K_m(R,R') h(R'). Note that the angular speed Ω(R) in the above equation has a contribution from the perturbed disc (Ω_d) as well as the dark matter halo (Ω_h), i.e, Ω^2(R) = Ω_d^2(R)+Ω_h^2(R). In equation (<ref>), K_0(R,R') and K_m(R,R') are respectively given by K_0(R,R') = ∫_-∞^∞ ρ(z') B_0(R,R') dz', K_m (R,R') = ∫_-∞^∞ ρ(z') B_m(R,R') dz', where B_0(R,R') and B_m(R,R') are given by B_0(R,R') = ∫_0^2π dψ/[R^2+R'^2+(z-z')^2-2RR'cos(ψ)]^3/2, B_m(R,R') = ∫_0^2π cos(mψ)dψ/[R^2+R'^2+(z-z')^2-2RR'cos(ψ)]^3/2, with ψ=ϕ-ϕ'. Now we put, ν^2_d = G∫_0^∞ρ(R')K_0(R,R')R'dR', in equation (<ref>), where ν_d denotes the vertical frequency of the perturbed disc, and we rearrange the equation in the following compact form [ (ω^2_m-2mΩ(R_i)ω_m) ] h_m(R_i) = ∑_j=1^NS_ij h_m(R_j), where S_ij = U_ij + δ_ij( ν^2_h(R_i) +ν^2_d(R_i)-Ω^2(R_i) - 2σ^2_z(R_i)/z_0^2), U_ij = Δ R G ρ(R_j) K_m(R_i,R_j) R_j. Recasting equation (<ref>) into a matrix-eigenvalue problem on a uniform grid with N radial points we get a quadratic eigenvalue equation, for the m^th order mode, in the form similar to <cit.>: (ω_m^2I+ω_mD+S)h_m=0, where I is an N× N identity matrix and D_ij=-2mΩ(R_i)δ_ij . Here h_m is the eigenvector corresponding to the eigenvalue ω_m. The eigenfrequency ω_m gives the oscillation frequency of the m^th bending mode with the shape specified by h_m. The matrices; I, D_ij, and S_ij are commonly known as the mass matrix, damping matrix, and stiffness matrix, and are N× N real square matrices. Equation (<ref>) represents a class of nonlinear eigenvalue problem and its solution describes the global behaviour of m=0,1,2,3..., etc. modes in the self-gravitating disc. § NUMERICAL METHOD AND INPUT PARAMETERS §.§ Numerical method for solving the equations To make numerical calculations convenient, we consider a system of units where G = M_d = R_d = 1. Throughout the analysis, all the frequencies and lengths are normalized in the unit of √(GM_d/R_d^3) and R_d respectively. We treat (z-z') in the integral equation of B_0(R,R') and B_m(R,R') as a softening parameter to make integrals regular at R=R'. 
It is similar to the case of the softening parameters used in <cit.> to make the integral regular at R=R', which avoids the divergence of the numerical integration. The idea of softened gravity was first introduced by <cit.> and is used extensively in numerical studies of disc dynamics. They replaced the 1/R Keplerian potential by 1/(R^2+b^2)^1/2 (where b is the softening length) to avoid numerical singularities at R = 0. We keep the softening parameter equal to the inter-ring spacing, which gives us a satisfactory result for the mode shape. Equation (<ref>) can be solved by recasting it into a matrix-eigenvalue problem in the compact notation of equation (<ref>) for the m^th order bending mode. In the present paper, we focus only on the m = 1 bending mode representing warps in disc galaxies. To find the solution of the QEP (<ref>), we treat the galactic disc as having a finite radius with a system of N uniformly spaced concentric rings. In our analysis, the number of equally spaced concentric rings of the disc is taken as the matrix dimension. In a previous study <cit.>, it was shown that the value of the ground-state discrete eigen frequency is not much affected by matrix dimensions above N=150. In the present paper, we restrict the number of differentially rotating concentric rings of the disc, i.e. the matrix dimension, to N=200. The standard way to solve the QEP (<ref>) for the m = 1 mode is by reducing it to a generalized eigenvalue problem (GEP) of the form Ax = ω Bx by linearizing it into a 2N-dimensional eigensystem (i.e., twice the matrix dimension N). We solve the linearized QEP numerically by using the standard technique for diagonalization. For further details about solving the QEP, readers are referred to <cit.>. The Python module `scipy.linalg' is used to solve the QEP and to find all the eigen values and eigen functions. On solving the N-dimensional QEP (<ref>), we obtain 2N eigenvalues. The nature of the eigen modes obtained from the QEP (<ref>) for Model-I, Model-II, and Model-III, in the absence and in the presence of the vertical pressure gradient force, is discussed in Section <ref>. §.§ Input Parameters In order to explore the bending instabilities in the disc, we calculate the eigen modes considering three sets of models that differ in the dark matter halo mass. At the same time, we consider two different vertical scale heights of the disc for each of the three models. In Table <ref> we give the disc and dark matter halo input parameters for the three sets of models required for the numerical calculations in this paper. These input parameters of the disc and dark matter halo are arbitrary, chosen in order to first investigate the nature of the eigen spectra and eigen modes. Later, we apply our analysis to three realistic Milky Way-like galaxy models given in Table <ref> from the recent literature: Milky Way_1 <cit.>, Milky Way_2 <cit.>, and Milky Way_3 <cit.>. For the dark matter halo, we use the parameters of <cit.> and explore the growth of bending mode instabilities and their wavelength in Section <ref>. § NUMERICAL RESULT AND ANALYSIS §.§ Absence of vertical pressure force In order to analyze the m = 1 eigen modes in the absence of the vertical pressure gradient, we assume that the disc is razor-thin. We solve the QEP (<ref>) for the Model-I parameters considering only the radial surface density profile of the disc given in equation (<ref>) and obtain the eigen values and corresponding eigen vectors. In the left panel of Fig. <ref>, we show the complete eigen spectrum for Model-I. 
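The linearization just described can be written down generically in a few lines of NumPy/SciPy. This is a schematic sketch in which D and S stand for the damping and stiffness matrices assembled from the model; it is not a reproduction of the authors' code.

import numpy as np
from scipy import linalg

def solve_bending_qep(D, S):
    # Linearize (w^2 I + w D + S) h = 0 into A y = w B y with y = [h, w h],
    # giving a 2N x 2N generalized eigenvalue problem.
    N = S.shape[0]
    I = np.eye(N)
    Z = np.zeros((N, N))
    A = np.block([[Z, I],
                  [-S, -D]])
    B = np.block([[I, Z],
                  [Z, I]])
    w, y = linalg.eig(A, B)      # 2N eigen frequencies and eigenvectors
    return w, y[:N, :]           # the h-part of each eigenvector gives the mode shape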
From the eigen spectrum, we see that all the eigen modes are real i.e., all eigenmodes are stable. The eigen modes are stabilized by the gravitational restoring force due to the self-gravitating disc and dark matter halo (see the WKB dispersion in a later section). The eigen spectrum comprises two continuous branches of eigen modes with eigen frequencies (Ω-ν) and (Ω+ν), with a gap of length 2ν between them. The right continuum is known as the fast mode (Ω+ν), and the left continuum (Ω-ν) is known as the slow mode <cit.>. Except for a discrete mode, the two continua contain all of the eigenmodes. The gap between these two continua is known as the principal gap (P-gap). The fast modes have positive eigen frequencies and are prograde in the disc. The slow modes may withstand the differential shear in the disc for a longer period of time than the fast modes. The zoom-in inset subplot of the left panel shown in Fig.<ref> demonstrates the existence of a distinct eigen mode lying in the P-gap isolated from the continuum area. The mode corresponding to this point is a stable and stationary long-lived eigen mode and is immune to damping mechanisms such as wave-particle interaction known as Landau damping. This eigen mode is neither growing nor decaying and describes the global behavior of the integral shaped bending mode of the self-gravitating disc. This eigen mode behaves like a stable independent harmonic oscillator, vibrating at a frequency ω_R = -0.04 with the lowest number of nodes (zero nodes in this case) along the radial axis. In the middle panel, we show the shape of the eigen vector corresponding to the discrete eigen mode. The face on map of the vertical displacement Z(R,ϕ,t) = h(R) cos(ω t-ϕ) (real part of the equation (<ref>)) at t = τ/8 is shown in the right panel of Fig.<ref> that replicates the large-scale, S-shaped warp in the disc. Here τ = 2π/ω is the characteristic time scale of ground state eigen frequency of global mode in the P-gap. By ground state, we mean the mode having the lowest real part of the eigen frequency ω in the spectrum. The concentric rings are incremented along the radial axis from the center of the galactic disc with an amount of 1R_d disc scale length. The color bar right next to the plot indicates the amount of displacement normal to the galactic plane in the unit of R_d. In Fig.<ref>, we show the typical behavior of eigen functions of a few selected eigen modes from the two continuum - these are akin to the singular van Kampen modes of oscillations in the disc <cit.>. Although an isolated self-gravitating disc does not support a discrete bending mode m=1 until and unless the disc is truncated at the edge <cit.>, a smoothly tapered disc embedded in the oblate dark matter halo can support a discrete bending mode having the characteristics of large-scale warp in the disc. For such a system, the bending mode does not appear to be sensitive to details of the edge of the disc <cit.>. This result of finding a global discrete mode in the P-gap again confirms the fact that there exists at least one stable discrete normal mode <cit.> describing large-scale warp in the disc that exhibits stationary flapping oscillation in the N-ring system. The fact that the disc supports only stable eigen modes can be confirmed from the WKB dispersion relation of a small amplitude bending wave in the presence of self-gravitating force of the disc and dark matter halo force alone <cit.>. 
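In practice, the discrete mode can be separated from the two continua by comparing the (nearly) real eigen frequencies with the edges of the (Ω-ν) and (Ω+ν) branches sampled on the radial grid; the helper below is a rough sketch of such a selection, and the way the continuum edges are estimated here is our own assumption rather than the procedure actually used.

```python
import numpy as np

def modes_in_principal_gap(omega, Omega, nu, tol=1e-8):
    """Return (nearly) real eigenvalues lying in the P-gap between the slow
    (Omega - nu) and fast (Omega + nu) continua, sampled on the radial grid."""
    slow_edge = np.max(Omega - nu)     # assumed upper edge of the slow continuum
    fast_edge = np.min(Omega + nu)     # assumed lower edge of the fast continuum
    w_real = omega.real[np.abs(omega.imag) < tol]
    return w_real[(w_real > slow_edge) & (w_real < fast_edge)]
```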
§.§ Effect of halo core radius on discrete eigen mode Here, we explore the effect of the halo core radius (R_c) on the discrete eigen mode. The halo core radius governs how the halo mass is distributed within the disc and, ultimately, the frequency of vertical oscillation due to the halo. For a larger halo core radius, the circular velocity rises more slowly and a considerable fraction of the disc resides inside the core. We solve the QEP (<ref>) for three configurations of the halo with core radii R_c = 1 R_d, 3 R_d, and 5 R_d and obtain the corresponding eigen spectra. In the top panel of Fig.<ref>, we show the complete eigen spectrum for these three cases. The inset zoom-in plots show the discrete eigen modes for each halo core radius. We find no discrete eigen mode in the P-gap for R_c=1; however, for core radii 3 and 5, a discrete eigen mode in the P-gap, distinct from the two continua, is clearly evident. The corresponding eigen frequencies for core radii R_c = 3 and 5 are ω_R = -0.04 and ω_R = -0.01 respectively. The value of the discrete eigen frequency decreases with increasing halo core radius. At very small core radii, the discrete modes merge into the (Ω-ν) continuum; as a result, no discrete eigen modes are found in the eigen spectrum for halo core radius R_c=1. The shape of the discrete mode is highly sensitive to the halo core radius, and increasing the core radius drastically changes the discrete mode shape. In the lower panel of Fig.<ref>, we show the mode shape for the two halo core radii R_c= 3 and 5. We obtain a perfectly S-shaped bending for core radius 3. As the core radius increases, the warp of the bending mode becomes increasingly curved at the edge of the disc while the inner region becomes flat. At a large halo core radius, the discrete mode is thus unable to retain the S-shaped bending. In a nutshell, the halo core radius appears to play a fundamental role in supporting the large-scale S-shaped discrete mode in the disc. § EIGEN MODES IN THE PRESENCE OF VERTICAL PRESSURE In this section, we investigate the properties of the eigen modes in more realistic models of disc galaxies by including the vertical pressure gradient force, calculated self-consistently as shown in section <ref>. The radial profile of the vertical velocity dispersion σ_z (R) is shown in Fig.<ref>. We explore all three models described in Table.<ref>. The rotation curves of all three models, for each halo-to-disc mass ratio and disc scale height, are reasonably flat and are shown in Fig.<ref>. Each of these models is primarily characterized by its halo-to-disc mass ratio (M_h/M_d). In the following, we describe our findings for each model. §.§ Model-I In the top left panel of Fig.<ref>, we show the eigen spectrum of the disc for the vertical disc scale heights z_0= 0.2 (top panel) and z_0= 0.1 (bottom panel). The inset plots highlight the existence of discrete stable eigen modes in the P-gap for discs with both vertical scale heights. The complex eigenvalues are distributed in a wedge-like fashion on the Argand diagram. In either case, there exist both stable and unstable eigen modes lying in the P-gap, with a slightly smaller number of unstable modes for z_0=0.2 than for z_0 =0.1 (see Table.<ref>). Some of these unstable eigen modes are shown in the top right panel of Fig. <ref>. The stable discrete modes have a shape as shown in Fig. <ref> and correspond to low-frequency oscillations <cit.>.
The top panel of Fig.<ref> displays the face-on map of a few discrete eigen modes found in the P-gap of Model-I. The panels are labeled with the value of eigen mode frequencies ω. The extreme left panels show the slowest discrete mode in the P-gap. The extreme right panels show the unstable modes present in the P-gap. The color bar shows the scale of displacement Z(R,ϕ,t) along the vertical z- axis in the unit of R_d. The lowest frequency mode for model-I with z_0 = 0.2 and z_0 = 0.1 have the time-period T_low =10.87 Gyr and 7.53 Gyr respectively, considering fiducial disc model parameters (see Sec.<ref> for a further discussion). The unit of time-scale for the fiducial disc is √(R_d^3/GM_d)=15.58 Myr (see Table <ref>). As the lowest frequency mode has a larger time period than the disc rotation period such modes are retrograde in the disc. The eigen modes with negative imaginary parts are of particular interest because they grow exponentially with time influencing overall disc structure. The unstable modes in the eigen spectrum arise due to the finite vertical velocity dispersion of the disc (as can be seen from the WKB relation given below). The eigenmodes with positive imaginary parts are damped and are absorbed by the disc particles in the form of random kinetic energy - which in turn would heat the disc and might affect the further instability of the disc through feedback <cit.>. None of these unstable modes resemble those in the continuum similar to the singular van Kampen modes (see Fig. <ref>). We notice that in the case of z_0=0.1, the number of eigenmodes in the P-gap is higher as compared to the disc with z_0=0.2. Apart from the discrete stable modes, the vertical pressure excites a few unstable in the P-gap which are comparatively higher in number for z_0=0.1 (see Table.<ref> and Fig.<ref>). Coming to the unstable growing mode we obtain 5 unstable modes for the z_0=0.2 case, whereas the disc with z_0=0.1 has 13 growing modes (see Table.<ref>). As the disc thickens, the unstable modes start disappearing (see Fig.<ref>) confirming previous findings by <cit.>. We estimate the wavenumber and wavelength of these unstable modes using Fourier transform <cit.> and discuss this aspect in greater detail in the light of WKB relation in Sec. <ref>. §.§ Model-II The Model-II is similar to Model-I but with a dark matter halo mass M_h =10 M_d, in other words, the disc is more dominated by the dark halo compared to Model-I. The eigen spectrum of Model-II is shown in the middle left panel of Fig.<ref> for both the vertical scale heights z_0 = 0.1, 0.2. The face-on maps of Z(R,ϕ,t) for a few selected discrete stable modes are shown in the middle panel of Fig.<ref>. These discrete modes have fewer radial nodes as compared to the eigen modes in the two continua. The typical behavior of discrete mode is similar to that of Model-I (see Fig.<ref>). Similar to Model-I, the lowest frequency discrete mode has a significantly larger time period (see Table. <ref>) than the disc rotation period that avoids resonance in the disc. Although the trends of the eigen spectrum are similar to Model-I, the number of discrete stable modes in the P-gap and unstable modes are different. For the scale height z_0 =0.2, only 2 unstable modes are obtained whereas, there are 7 unstable modes in the case of z_0 =0.1. It is worth noting that the unstable mode in z_0=0.1 in the P-gap as well as outside the gap is more than that of z_0=0.2 following the same trends as the previous model. 
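The mode counts and fastest growth time scales quoted in this and the following subsections can be extracted from the eigen spectrum with a small helper of the following form (a sketch; the tolerance for deciding whether ω_I vanishes and the function name are illustrative assumptions):

```python
import numpy as np

def classify_modes(omega, tol=1e-10):
    """Split eigenvalues into growing (Im w < 0), damped (Im w > 0) and stable modes."""
    growing = omega[omega.imag < -tol]
    damped  = omega[omega.imag >  tol]
    stable  = omega[np.abs(omega.imag) <= tol]
    # growth time scale of the fastest-growing mode, T = 1/|Im w| (code units)
    T_fast = 1.0 / np.abs(growing.imag).max() if growing.size else np.inf
    return len(growing), len(damped), len(stable), T_fast
```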
The number of unstable modes both in the P-gap and outside the gap is reduced in this model (see Table.<ref> and Fig.<ref>). Since the combination of disc self-gravity and the enhanced restoring force due to the increased dark matter halo mass has a stabilizing effect, it is natural to expect a smaller number of unstable modes inside and outside the P-gap. In other words, these results reaffirm that disc instabilities are, in general, suppressed by a massive dark matter halo <cit.>, but how the unstable modes disappear remains a question. Note that this is true only when the halo is non-responsive; a live halo might instead lead to the excitation of a bending mode <cit.> in the disc outskirts or to bar growth <cit.> in the central region of the galaxy. §.§ Model-III The dark matter halo mass (M_h =15 M_d) in Model-III is even higher than in Models I and II. The eigen spectrum for this model is shown in the bottom left panel of Fig.<ref>. The subplots of the eigen spectrum are labeled by the different values of the disc scale height z_0. We obtain trends in the eigen spectra similar to those observed in Models I and II but with different numbers of eigen modes. For z_0=0.2, only one unstable mode is obtained outside the P-gap: being more massive, the dark matter halo is able to suppress the unstable modes even further. The disc with z_0=0.1, on the other hand, still has 6 unstable modes: 4 outside the P-gap and 2 inside the P-gap. The typical shape of the unstable modes is shown in the bottom right panel of Fig.<ref>. The number of stable oscillating discrete eigen modes is found to decrease for z_0 = 0.1 in this model as compared to the previous two models. The face-on maps of a few selected discrete modes are shown in the bottom panel of Fig.<ref>. From the overall analysis, we see that for a given disc scale height, low-mass halos support a larger number of unstable modes (both inside and outside the P-gap) compared to their massive counterparts (see Fig. <ref>). Thinner discs support more unstable modes than thicker discs. For example, for z_0=0.1, the disc still supports unstable modes for halo masses as high as M_h/M_d =20. For the z_0=0.2 case, although unstable modes cease to exist inside the P-gap, a few of them still persist outside the P-gap. It is interesting to note that as we increase the halo mass, the unstable modes in the disc first disappear from the P-gap and later from outside the P-gap. Based on several numerical works <cit.>, it is known that a stellar disc is stabilized by a massive dark matter halo, but the process by which discs are stabilized has remained obscure. Our simplified analysis, based on analytical work, sheds some light in this regard. §.§ Effect of dark matter halo core radius on instabilities For the sake of completeness, we calculate and present the eigen spectrum for the Model-I input parameters with disc scale height z_0=0.2 for three different dark matter halo core radii. The primary goal is to investigate how the number of unstable modes in the eigen spectrum changes with increasing halo core radius. In Fig.<ref>, we show the three eigen spectra for the halo core radii 1R_d, 3R_d, and 5R_d. We obtain no unstable modes for R_c=1, whereas we obtain 5 and 7 unstable modes for R_c = 3 and 5 respectively. Note that when the disc thickness is reduced to z_0=0.1, we obtain a few unstable modes in the eigen spectrum even for core radius R_c=1. The number of unstable modes in the eigen spectrum is found to increase with increasing halo core radius.
In other words, the disc is susceptible to the m=1 bending instability when embedded within a dark matter halo with a larger core radius. As the halo core radius increases, the dark matter halo becomes less concentrated in the inner region of the disc, which in turn weakens the gravitational restoring force on the disc due to the halo. As a consequence, the disc is unable to counterbalance the destabilizing force due to the vertical pressure. We have verified our numerical results for Models II and III and found trends similar to those in Model-I. In the bottom panel of Fig.<ref>, we show the typical nature of the discrete and unstable mode shapes in subplots (a) and (b) for the two halo core radii R_c=3 and 5 respectively. § DISPERSION RELATION AND MODE WAVELENGTH In this section, we explore the stability of the bending modes using the WKB dispersion relation. The WKB dispersion relation of bending waves depends only on the local properties of the differentially rotating disc. For the m=1 mode, the local dispersion relation, assuming the thin-disc approximation, is given by <cit.>: [ω - Ω (R)]^2 =ν^2 (R) +2 π G Σ(R,0) |k|-σ_z^2 (R) k^2, where Σ(R,0) and k are the surface density profile and the wave number respectively. The self-gravity of the disc stabilizes the bending modes while the vertical velocity dispersion σ_z acts as a destabilizing factor. Note that the contribution from the vertical restoring force (first term on the RHS of the above equation) is independent of the scale of the perturbation. In the absence of self-gravity and vertical pressure, an m=1 bending wave would propagate like a plane wave with free precession frequencies ω = Ω -ν and ω = Ω +ν (the two possible solutions of equation (<ref>)). Since all the terms on the right-hand side are real, the disc is stable against local perturbations if ω^'^2≥ 0 and unstable otherwise; here ω^'=ω - Ω. Solving dω^'^2/dk = 0 provides the critical value of the wave number, k_c, below which all eigen modes are unstable. Applying this condition to equation (<ref>) and substituting the radial disc surface density Σ(R,0) and σ_z^2, we obtain the following expression for the critical wave number: k_c = π G Σ/σ_z^2sign(k) = 2/√(π) z_0sign(k). In other words, the critical wavenumber is entirely determined by the scale height of the disc under this approximation. For k > k_c (short wavelengths, λ<λ_c), ω^'^2 > 0 and the solutions are oscillatory; for k < k_c (long wavelengths, λ>λ_c), the solutions are exponentially growing. The corresponding critical wavelength for unstable modes is λ_c = 2π/k_c. To calculate the wave number (k) and wavelength (λ) of each unstable mode, we perform a discrete Fourier transform (DFT) on Z(R) in the spatial domain and obtain the one-sided spatial coefficients H(k) <cit.> as H(k) = ∑_r=0^N-1 w(r) Z(R_r) e^-2π i t_k r/N, where r= 0,…,N - 1. Here, w(r) is a Gaussian window function with a standard deviation of N/2^5/2. The window function w(r) is introduced to alleviate spectrum leakage from high frequencies <cit.>. The discrete wave number is given by k = t_k/NΔ, where Δ is the sampling interval and t_k = 0,…,N/2, such that the Nyquist critical wave number, corresponding to t_k = N/2, is 1/2Δ. The power spectrum as a function of wavenumber k is then obtained as P(k) =1/W | H(k) |^2, where W = N∑_r=0^N-1 w(r) denotes the window function normalization. The wave number k and the corresponding wavelength λ = 2π/k of each unstable mode are obtained at the maximum of the power, P(k)_max.
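A compact numerical sketch of the windowed-DFT estimate described above is given below; the centring of the Gaussian window and the use of the angular wavenumber 2π t_k/(NΔ), chosen here for consistency with λ = 2π/k, are assumptions of the illustration.

```python
import numpy as np

def dominant_wavelength(Z, delta):
    """Estimate the dominant wavenumber and wavelength of a mode shape Z(R_r)
    sampled on a uniform grid of N points with spacing delta."""
    N = len(Z)
    r = np.arange(N)
    sigma = N / 2**2.5                                   # std of the Gaussian window, N/2^(5/2)
    w = np.exp(-0.5 * ((r - 0.5 * (N - 1)) / sigma)**2)  # Gaussian window (assumed centred)
    H = np.fft.rfft(w * Z)                               # one-sided spatial coefficients
    W = N * w.sum()                                      # window normalisation
    P = np.abs(H)**2 / W                                 # power spectrum
    t_k = np.arange(1, len(H))                           # skip the t_k = 0 term
    k = 2.0 * np.pi * t_k / (N * delta)                  # angular wavenumber (assumption)
    k_max = k[np.argmax(P[1:])]
    return k_max, 2.0 * np.pi / k_max                    # lambda = 2*pi/k
```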
§.§ Dependence of growth time scale (𝒯) on disc thickness and dark matter halo mass As stated previously, a self-gravitating disc supported by a pressure gradient force possesses unstable eigen modes. The unstable modes have both positive and negative imaginary parts of eigen frequency (ω). The eigen modes with positive imaginary parts are damped and are of no interest in the present paper. Here, we are mainly interested in the negative imaginary part (ω_I) of the eigenvalues; note both are the same in magnitude since eigen values appear in complex conjugate pair. The modulus of ω_I refers to the growth rate while its reciprocal measures the growth time scale (𝒯 = 1/|ω_I|) of the unstable modes. The mode with the shortest growth time scale is the one that grows fastest among all the unstable modes. In this paper, the shortest growth time scale is denoted by 𝒯_fast. In Table.<ref>, we have given the growth time-scale 𝒯_fast and wavelength of the fastest unstable modes of Model-I, Model-II, and Model-III of the disc having different scale heights z_0 =0.1, and z_0=0.2. All the fastest growing modes have the longest wavelengths among all the unstable modes. These wavelengths are found to be significantly longer than the critical wavelength (λ_c) for both the disc scale height values as expected from the WKB dispersion relation. These unstable modes are the ones with the smallest number of radial nodes. It is also interesting to note that the wavelength of the fastest mode increases with the increasing disc thickness. For a thin disc (z_0 =0.1), we obtain the wavelength comparable to the size of the disc (∼ 6R_d). However, for the larger thickness disc (e.g. see Table. <ref> for z_0=0.2) the wavelength is significantly longer than the size of the disc. We next calculate the growth time scale of the fastest mode. In all three models, we see that the values of 𝒯_fast increase when the values of the disc scale heights z_0 increase (see Table.<ref>). For Model-I, we get 𝒯_fast= 0.51, and 𝒯_fast= 0.89 for z_0=0.1 and z_0=0.2 respectively. For Model-II, the values of 𝒯_fast having z_0=0.2 and 0.1 are 1.07 and 0.53 respectively. For z_0 = 0.2, the Model-III has only one unstable mode with a growth time scale of 𝒯_fast =1.29, while for z_0 = 0.1, we get the value of 𝒯_fast = 0.56. The general trend is that the unstable mode in a thicker disc grows slowly whereas it grows faster in a thin disc. In thinner discs, the existence of non-zero vertical velocity dispersion allows the disc to buckle and grow more quickly surpassing the combined effect of restoring forces due to the self-gravitating disc and dark matter halo. In other words, we can conclude that thin discs are more unstable than thick discs in the light of vertical pressure gradient force. Further, we explore the effect of dark matter halo mass on the growth time scale of unstable modes. From the numerical values of 𝒯_fast (see Table.<ref>) for the three models, we observe the following general trends of unstable modes in the disc. (1) Growth time scale of the unstable mode is larger in the presence of a more massive dark matter halo resulting in a more stable disc. (2) In the presence of low mass dark matter halo, unstable modes in the disc grow faster. In the left panel of Fig.<ref>, we show a plot of time scales (𝒯) in Gyr with respect to dimensionless wavenumbers k/k_c of unstable modes of three models using fiducial disc mass M_d = 3 ×10^10 M_⊙ and scale radius R_d=3.2 kpc. 
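The conversion between code units and the physical time scales quoted in this section can be reproduced with a few lines of astropy, using the fiducial values given in the text (the variable names are, of course, illustrative):

```python
import numpy as np
from astropy import units as u
from astropy.constants import G

M_d = 3e10 * u.M_sun          # fiducial disc mass
R_d = 3.2 * u.kpc             # fiducial disc scale length
t_unit = np.sqrt(R_d**3 / (G * M_d)).to(u.Myr)   # code unit of time, ~15.6 Myr

# a dimensionless growth time scale T (= 1/|Im w|) converts to physical units as
# T_phys = T * t_unit; a dimensionless frequency w converts as w / t_unit
print(t_unit)
```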
Different symbols represent the unstable modes of the three models, with their different wavenumbers and growth time scales. All the unstable modes are found below the critical wave number k_c, i.e. k/k_c<1. In the right panel of Fig.<ref>, we show that the unstable mode with the smallest wavenumber (longest wavelength) has the shortest growth time scale in Myr. In Fig.<ref>, we show the vertical displacement of the fastest growing unstable eigen modes along with their growth time scales. §.§ Bending instability in Milky Way like galaxies For a better understanding of the bending instability in realistic galaxies, we apply our theoretical model to Milky Way like galaxies. Each of these models is characterized by a set of parameters taken from the recent literature; see Table.<ref> for details. We adopted these parameters for the Galaxy and dark matter halo because they are the best-fitting parameters given by <cit.>. We estimate the growth time scale and corresponding wavelength of the fastest unstable mode for each of these models. In the left panel of Fig.<ref>, we show the dimensionless wavelength λ/λ_c on a log scale and the growth time scale of the unstable modes that arise in the three models of the Milky Way. We obtain a few unstable modes for all the Galaxy models. The critical wavelengths of the three models Milky Way_1, Milky Way_2, and Milky Way_3 are 1.67 kpc, 2.22 kpc, and 1.78 kpc respectively. All the unstable modes have wavelengths above the critical wavelength, i.e. λ/λ_c>1. We estimate the wavelengths of the unstable modes, in particular of the fastest-growing modes. The wavelengths of the fastest growing modes are found to be λ = 12.4 kpc, 11.5 kpc, and 13.4 kpc for Milky Way_1, Milky Way_2, and Milky Way_3 respectively. The mode shapes of these unstable modes are shown in the right panel of Fig.<ref>. It is interesting to note that the wavelengths of the fastest-growing modes are comparable to the radius of the discs. Further, the estimated growth time scales 𝒯_fast of the fastest modes are found to be 5.2 Myr, 5.5 Myr, and 6.7 Myr respectively. § DISCUSSION AND CONCLUSIONS The bending instability of a rotating self-gravitating galactic disc has attracted some of the best minds in galaxy dynamics and continues to be a subject of profound interest amongst dynamicists. A recent surge in activity is due to the availability of high-quality data on stellar motion (6D phase space) in the Milky Way from the Gaia satellite <cit.>. Gaia has made it possible to identify and investigate bending modes in minute detail in the Galaxy <cit.>. Bending waves have also been characterized recently not only in our Galaxy but in a number of external galaxies, via the line-of-sight kinematics of ionized gas (Hα) and neutral hydrogen gas <cit.> as well as via corrugated dust patterns <cit.>. These observations are a clear motivation to revisit this age-old problem. However, we do not attempt to explain any of these observations of bending modes in this work, but rather attempt to provide a more detailed picture of the bending instability from an analytic point of view. In this work, we reaffirm that in a razor-thin disc, in the absence of the vertical pressure gradient force, all eigen modes in the spectrum have real eigenvalues <cit.>. The stable eigen modes generally belong to two main continuum branches: the slow-mode continuum with negative eigen frequencies (ω< 0) and the fast-mode continuum with positive eigen frequencies (ω> 0).
In addition to the two continua, a razor-thin disc might support a discrete, stable large-scale eigen mode. Such a discrete mode has been a cornerstone of numerical work in several previous investigations <cit.>. A discrete bending mode would behave like an oscillating pattern in the disc, neither damping nor growing. If found, such a discrete mode could be a viable explanation, as envisaged previously, for the observed warps in many disc galaxies <cit.>. However, as shown by the previous numerical works of Hunter, Toomre, and Sellwood, a discrete mode existed only when the disc was sharply truncated. Although theoretical work brought the discrete-mode picture alive <cit.>, those studies considered a simplified system, and an extensive numerical analysis of realistic models of galaxies was naturally needed. For a smoothly truncated exponential disc, the existence of a discrete m=1 bending mode was shown numerically in <cit.>. In the current work, we perform an extensive search for the discrete bending mode in more realistic models of disc galaxies by including dark matter and vertical pressure in the dynamical equation. Our current analysis is based on Binney's logarithmic dark matter halo potential, which produces a flat rotation curve. It is interesting to see that the number of unstable modes in the P-gap decreases as the halo mass increases with respect to the disc. In the absence of disc self-gravity, it is the halo that supplies the restoring force against the destabilizing pressure force. In that spirit, it would be useful to know how the bending instability arises in the presence of a dark matter halo with a different density profile, such as the Navarro-Frenk-White (NFW) profile <cit.>, which is common in cosmological simulations of structure formation <cit.>. Since for the NFW halo the density falls as r^-3, the local restoring force due to the halo would be lower than for the logarithmic halo (where ρ∝ r^-2) in the outer parts, and if the disc self-gravity is not significant, an NFW halo might promote a stronger bending instability than the logarithmic halo. We plan to explore this in detail in a future paper. Our models of disc galaxies have a single disc component. Real galaxies, e.g., our Milky Way and several external galaxies, have both a thin and a thick disc <cit.>. It would be insightful to know how the thick disc component would affect the bending instability. Finally, our current analysis is restricted to the m=1 bending mode only. Higher order bending modes or corrugations (seen in stars, dust, or gas) are also known in several galaxies <cit.>. Future exploration of higher order bending modes in gravitationally coupled discs of stars and gas would help in obtaining a complete picture of bending instabilities in a realistic disc galaxy. We draw the following main conclusions based on this work: * In a smoothly truncated, exponential razor-thin disc, we reaffirm that all the m=1 bending modes are stable. Such a disc might support a stable discrete mode in the P-gap, describing classic integral-sign warps with a purely oscillatory nature. The general properties of such a discrete mode are sensitive to the halo core radius. In particular, the mode cannot maintain its S-shaped nature inside a halo with a large core radius and ceases to exist in the P-gap at very small core radii. * In a realistic galaxy model, the vertical pressure of the disc excites unstable modes in the P-gap as well as outside the P-gap, as expected from the WKB relation.
Such a disc also supports discrete, stable, long-lived modes in the P-gap. We show that increasing the halo mass first stabilizes the unstable modes in the P-gap and then the modes outside the P-gap, resulting overall in a smaller number of unstable modes at very high halo mass. On the other hand, increasing the halo core radius causes more unstable modes to arise in the eigen spectrum. In other words, the overall instabilities of the disc are largely governed by the dark matter halo. * Our numerical analyses show that in a thin disc the vertical pressure excites a greater number of unstable modes, both outside the P-gap and inside the gap, than in its thicker counterpart. * Using the WKB dispersion relation and the discrete Fourier transform (DFT), we show that all the unstable modes have wavelengths above the critical wavelength. Our analyses reveal that the unstable modes with the longest wavelengths grow the fastest, i.e. have the shortest growth time scale. The growth time scale is found to be affected by the halo mass and the disc thickness. In a low-mass halo, the unstable modes grow faster than in a massive halo. For a fixed halo mass, the modes grow faster in a thinner disc. * For Milky Way-like galaxies, the wavelength of the fastest growing mode is found to lie approximately within the range λ≃ 11 - 13 kpc, comparable to the radius of the stellar disc, and the growth time scale lies within the range 5 - 7 Myr. § ACKNOWLEDGEMENTS The authors acknowledge Rajiv Gandhi University, Arunachal Pradesh, and the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, for providing local hospitality and computational facilities to carry out this research work. The Python packages Numpy, Scipy, and Astropy were used in the numerical calculations and analysis. Further, Sagar S. Goyary acknowledges the UGC-CSIR (Govt. of India) for a Senior Research Fellowship providing financial support during the period of the present work. § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the corresponding author.
http://arxiv.org/abs/2307.04568v1
20230710140046
Global synchronization on time-varying higher-order structures
[ "Md Sayeed Anwar", "Dibakar Ghosh", "Timoteo Carletti" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "math-ph", "math.DS", "math.MP", "nlin.AO", "nlin.CD", "nlin.PS" ]
Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 B. T. Road, Kolkata 700108, India Department of Mathematics and Namur Institute for Complex Systems, naXys, University of Namur, 2 rue Grafé, Namur B5000, Belgium Synchronization has received a lot of attention from the scientific community for systems evolving on static networks or higher-order structures, such as hypergraphs and simplicial complexes. In many relevant real world applications, the latter are not static but evolve in time; in this paper we thus discuss the impact of the time-varying nature of higher-order structures on the emergence of global synchronization. To achieve this goal we extend the master stability formalism to account, in a general way, for the additional contributions arising from the time evolution of the higher-order structure supporting the dynamical systems. The theory is successfully challenged against two illustrative examples, the Stuart-Landau nonlinear oscillator and the Lorenz chaotic oscillator. § INTRODUCTION In the realm of complex systems, synchronization refers to the intriguing ability of coupled nonlinear oscillators to self-organize and exhibit a collective unison behavior without the need for a central controller <cit.>. This phenomenon, observed in a wide range of human-made and natural systems <cit.>, continues to inspire scientists seeking to unravel its underlying mechanisms. To study synchronization, network science has proved to be a powerful and effective framework. Here, the interconnected nonlinear oscillators are represented as nodes, while their interactions are depicted as links <cit.>. However, the classical static network representation has its limitations in modeling many empirical systems, such as social networks <cit.> and brain networks <cit.>, where the connections among individual basic units are flexible enough that they should be considered to evolve through time. Therefore, the framework of networks has been generalized so as to include time-varying networks <cit.>, whose connections vary with time. The results presented in this framework support the claim that synchronization is enhanced by the dynamics of the supporting medium <cit.>. Another intrinsic limitation of networks is their ability to model only pairwise interactions. To go beyond this issue, scholars have brought to the fore the relevance of higher-order structures, which surpass the traditional network setting that models the interactions between individual basic units only through pairwise links <cit.>. By considering the simultaneous interactions of many agents, higher-order structures, namely hypergraphs <cit.> and simplicial complexes <cit.>, offer a more comprehensive understanding of complex systems. These higher-order structures have been proven to produce novel features in various dynamical processes, including consensus <cit.>, random walks <cit.>, pattern formation <cit.>, synchronization <cit.>, social contagion and epidemics <cit.>. Nevertheless, the suggested framework is not sufficiently general to describe systems with many-body interactions that vary with time. As an example, group interactions in social systems have a time-varying nature, as the interactions among groups of individuals are not always active but rather change throughout time <cit.>.
Some early works have begun to investigate the time-varying aspect of many-body interactions in various dynamical processes. For instance, time-varying group interactions have been demonstrated to influence the convergence period of consensus dynamics <cit.> and to predict the onset of endemic state in epidemic spreading <cit.>. The present work is motivated by these recent research directions, and it aims to take one step further by considering the impact of time-varying higher-order structures in the synchronization of nonlinear oscillators. In this context, a preliminary effort has been reported in <cit.>, that investigates synchronization in time-varying simplicial complexes, limited only to fast switching <cit.> among distinct static simplicial configurations, implying that the time scale of the simplicial evolution is exceedingly fast compared to that of the underlying dynamical system. In contrast, in the present work, we allow the higher-order structures to evolve freely with time, thus removing any limitations on the imposed time evolution of the higher-order structure. We present the results in the framework of hypergraphs, but they hold true also for simplicial complexes. Under such broad circumstances, we develop a theory to determine the conditions ensuring the stability of a globally synchronized state that generalizes the Master Stability Equation <cit.> to a setting where the time evolution of underlying higher-order structures is explicitly considered. The generalized framework we discuss here assumes that the coupling functions cancel out when the dynamics of individual oscillators are identical, which is a necessary condition that must be met for the extended system to have a synchronous solution and it has been frequently used in the literature across various domains. The developed theory reveals that the consideration of temporality in group interactions can induce synchronization more easily than static group interactions, tested on higher-order structures of coupled Stuart Landau oscillators and paradigmatic Lorenz systems. § THE MODEL To start with, let us consider a m-dimensional dynamical system whose time evolution is described by the following ordinary differential equation dx⃗/dt = f⃗(x⃗) , where x⃗∈ℝ^m denotes the state vector and f⃗:ℝ^m→ℝ^m some smooth nonlinear function; let us assume moreover that system (<ref>) exhibits an oscillatory behavior, being the latter periodic or irregular; we are thus considering the framework of generic nonlinear oscillators. Let us now consider n identical copies of system (<ref>) coupled by a symmetric higher-order structure; namely, we allow the nonlinear oscillators to interact in couples, as well as in triplets, quadruplets, and so on, up to interactions among D+1 units. We can thus describe the time evolution of the state vector of the i-th unit by ẋ⃗̇_i = f⃗(x⃗_⃗i⃗) + ∑_d=1^D q_d∑_j_1,…,j_d=1^n A_ij_1… j_d^(d)(t)g⃗^(d)(x⃗_i,x⃗_j_1,…,x⃗_j_d) , where for d=1,…,D, q_d>0 denotes the coupling strength, g⃗^(d):ℝ^(d+1)m→ℝ^m the nonlinear coupling function and 𝐀^(d)(t) the tensor encoding which units are interacting together. More precisely A^(d)_ij_1… j_d(t)=1 if the units i,j_1,… ,j_d do interact at time t, observe indeed that such tensor depends on time, namely the intensity of the coupling as well which units are coupled, do change in time. Finally, we assume the time-varying interaction to be symmetric, namely if A^(d)_ij_1… j_d(t)=1, then A^(d)_π(ij_1… j_d)(t)=1 for any permutation π of the indexes i,j_1,… , j_d. 
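As a concrete, if naive, illustration of the coupled system above for D=2, the right-hand side can be written as follows; the dense double loop, the time-frozen adjacency tensors and the function names are assumptions made for the sketch.

```python
import numpy as np

def coupled_rhs(X, f, g1, g2, A1, A2, q1, q2):
    """Right-hand side of n coupled m-dimensional units with pairwise (A1)
    and three-body (A2) interactions, for adjacency tensors frozen at the
    current time; X has shape (n, m)."""
    n = X.shape[0]
    dX = np.array([f(x) for x in X])
    for i in range(n):
        for j in range(n):
            if A1[i, j]:
                dX[i] += q1 * g1(X[i], X[j])
            for k in range(n):
                if A2[i, j, k]:
                    dX[i] += q2 * g2(X[i], X[j], X[k])
    return dX
```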
Let us emphasize that we consider the number of nodes to be fixed, only the interactions change in time; one could relax this assumption by considering to have a sufficiently large reservoir of nodes, from which the core of the system can recruit new nodes or deposit unused nodes. Let us fix a periodic reference solution, s⃗(t), of system (<ref>). We are interested in determining the conditions under which the orbit (s⃗(t),…,s⃗(t))^⊤ is a solution of the coupled system (<ref>), and moreover it is stable, namely the n units globally synchronize and behave at unison. A necessary condition is that the coupling functions vanish once evaluated on such orbit, i.e., g⃗^(d)(s⃗,…,s⃗)=0, for d=1,…, D. This assumption is known in the literature as non-invasive condition. For the sake of pedagogy, we will hereby consider a particular case of non-invasive couplings and we will refer the interested reader to Appendix <ref> for a general discussion. We are thus assuming the coupling functions g⃗^(d) to be diffusive-like, namely for each d there exists a function h⃗^(d):ℝ^dm→ℝ^m such that g⃗^(d)(x⃗_i,x⃗_j_1,…,x⃗_j_d)=h⃗^(d)(x⃗_j_1,…,x⃗_j_d)-h⃗^(d)(x⃗_i,…,x⃗_i) . In this way we can straightforwardly ensure that the coupling term in Eq. (<ref>) vanishes once evaluated on the orbit (s⃗(t),…,s⃗(t))^⊤, allowing thus to conclude that the latter is also a solution of the coupled system. To study the stability of the reference solution, let us now perturb the synchronous solution (s⃗(t),…,s⃗(t))^⊤ with a spatially inhomogeneous term, meaning that ∀ i∈{1,…,n} we define x⃗_i=s⃗+δx⃗_i. Substituting the latter into Eq. (<ref>) and expanding up to the first order, we obtain δẋ⃗̇_i = ∂f⃗/∂x⃗_i|_s⃗δx⃗_i+∑_d=1^D q_d∑_j_1,…,j_d=1^n B_ij_1… j_d(t) ∑_ℓ=1^d∂h⃗^(d)/∂x⃗_j_ℓ|_(s⃗,…,s⃗)δx⃗_j_ℓ , where B_ij_1(t) = A_ij_1^(1)(t)- k^(1)_i(t)δ_ij_1 , B_ij_1j_2(t) = A_ij_1j_2^(2)(t)-2k_i^(2)(t)δ_ij_1j_2 , … B_ij_1j_2… j_D(t) = A_ij_1j_2… j_D^(D)(t)-D!k_i^(D)(t)δ_ij_1j_2… j_D , being δ_ij_1j_2… j_D the generalized multi-indexes Kronecker-δ, and the (time-varying) d-degree of node i is given by k_i^(d)(t)=1/d!∑_j_1,..,j_d=1^n A_ij_1… j_d^(d)(t) , which represents the number of hyperedges of order d incident to node i at time t. Observe that if 𝐀^(d) is weighted, then k_i^(d)(t) counts both the number and the weight, it is thus the generalization of the strength of a node. Let us now define k_ij^(d)(t)=1/(d-1)!∑_j_1,...,j_d-1^n A_ijj_1… j_d-1^(d)(t) , namely the number of hyperedges of order d containing both nodes i and j at time t. Again, once 𝐀^(d) is weighted, then k_ij^(d)(t) generalizes the link strength. Let us observe that because of the invariance of 𝐀^(d) under index permutation, we can conclude that k_ij^(d)(t)=k_ji^(d)(t). Finally, we define the generalized time-varying higher-order Laplacian matrix for the interaction of order d as L_ij^(d)(t)= -d!k_i^(d)(t) if i=j (d-1)!k_ij^(d)(t) if i≠ j . Observe that such a matrix is symmetric because of the assumption of the tensors 𝐀^(d). Let us also notice the difference in sign with respect to other notations used in the literature. We can then rewrite Eq. 
(<ref>) as follows δẋ⃗̇_i = ∂f⃗/∂x⃗_i|_s⃗δx⃗_i+∑_d=1^D q_d[∑_j_1=1^n ∂h⃗^(d)/∂x⃗_j_1|_(s⃗,…,s⃗)δx⃗_j_1∑_j_2,…,j_d=1^n B_ij_1… j_d(t) +…+ ∑_j_d=1^n ∂h⃗^(d)/∂x⃗_j_d|_(s⃗,…,s⃗)δx⃗_j_d∑_j_1,…,j_d-1=1^n B_ij_1… j_d(t)] = ∂f⃗/∂x⃗_i|_s⃗δx⃗_i+∑_d=1^D q_d∑_j=1^n L^(d)_ij(t)[∂h⃗^(d)/∂x⃗_j_1 +…+ ∂h⃗^(d)/∂x⃗_j_d]_(s⃗,…,s⃗)δx⃗_j , where we used the fact the ∂h⃗^(d)/∂x⃗_j_1 +…+ ∂h⃗^(d)/∂x⃗_j_d is independent from the indexes being the latter just place holders to identify the variable with respect to the derivative has to be done. Finally, by defining 𝐉_f := ∂f⃗/∂x⃗_i|_s⃗(t) and 𝐉_h^(d) := ∑_ℓ=1^d ∂h⃗^(d)/∂x⃗_j_ℓ|_(s⃗(t),…,s⃗(t))∀ d∈{1,…,D} , we can rewrite Eq. (<ref>) in compact form δẋ⃗̇_i = 𝐉_fδx⃗_i+∑_d=1^D q_d∑_j=1^n L^(d)_ij(t)𝐉_h^(d)δx⃗_j . This is a non-autonomous linear differential equation determining the stability of the perturbation δx⃗_i, for instance, by computing the largest Lyapunov exponent. To make some analytical progress in the study of Eq. (<ref>), we will consider two main directions: the functions h⃗^(d) satisfy the condition of natural coupling (see Section <ref>) or the higher-order structures exhibit regular topologies (see Section <ref>). The aim of each assumption is to disentangle the dependence of the nonlinear coupling functions from the higher-order Laplace matrices and thus achieve a better understanding of the problem under study. §.§ Natural coupling Let us assume the functions h⃗^(d) to satisfy the condition of natural coupling, namely h⃗^(D)(x⃗,…,x⃗)=…=h⃗^(2)(x⃗,x⃗)=h⃗^(1)(x⃗) , that implies 𝐉_h^(1)=𝐉_h^(2)=…=𝐉_h^(D) and it allows to eventually rewrite Eq. (<ref>) as follows δẋ⃗̇_i = 𝐉_fδx⃗_i+∑_j=1^n M_ij(t)𝐉_h^(1)δx⃗_j , where M_ij(t) := ∑_d=1^D q_d L^(d)_ij(t) ∀ i,j=1,… n . Let us observe that the matrix 𝐌(t) is a Laplace matrix; it is non-positive definite (as each one of the 𝐋^(d)(t) matrices does for any d=1,…, D and any t>0, and q_d>0), it admits μ^(1)=0 as eigenvalue associated to the eigenvector ϕ^(1)=(1,…,1)^⊤ and it is symmetric. So there exists an orthonormal time-varying eigenbasis, ϕ^(α)(t), α=1,…,n, for 𝐌(t) with associated eigenvalues μ^(α)≤ 0. Let us define <cit.> the n× n time dependent matrix 𝐜(t) that quantifies the projections of the time derivatives of the eigenvectors onto the independent eigendirections, namely d ϕ⃗^(α)(t)/dt=∑_βc_αβ(t)ϕ⃗^(β)(t) ∀α=1,…, n . By recalling the orthonormality condition (ϕ⃗^(α)(t))^⊤·ϕ⃗^(β)(t)=δ_αβ , we can straightforwardly conclude that 𝐜 is a real skew-symmetric matrix with a null first row and first column, i.e., c_αβ+c_βα=0 and c_1α=0. To make one step further, we consider Eq. (<ref>), and we project it onto the eigendirections, namely we introduce δx⃗_i=∑_αδx̂⃗̂_αϕ^(α)_i and recalling the definition of 𝐜 we obtain dδx̂⃗̂_β/dt = ∑_α c_βα(t)δx̂⃗̂_α+[𝐉_f+ μ^(β)(t)𝐉_h^(1)]δx̂⃗̂_β . Let us observe that the latter formula and the following analysis differ from the one presented in <cit.> where the perturbation is assumed to align onto a single mode, a hypothesis that ultimately translates in the stationary of the Laplace eigenvectors that is 𝐜=0. The same assumption is also at the root of the results by <cit.>; indeed, commuting time-varying networks implies to deal with a constant eigenbasis. In conclusion, Eq. (<ref>) returns the more general description for the projection of the linearized dynamics on a generic time-varying Laplace eigenbasis, and thus allowing us to draw general conclusions without unnecessary simplifying assumptions. §.§ Regular topologies An alternative approach to study Eq. 
(<ref>) is to assume regular topologies <cit.>, namely hypergraphs such that 𝐋^(d)(t) = α_d 𝐋^(1)(t), for d=1,…,D, with α_1=1 and α_d∈ℝ_+. Indeed, we can use this assumption to obtain from Eq. (<ref>) δẋ⃗̇_i = 𝐉_fδx⃗_i+∑_j=1^n L^(1)_ij(t)𝐉_ĥδx⃗_j , where 𝐉_ĥ := ∑_d=1^D q_d α_d 𝐉_h^(d) , which results in a sort of weighted nonlinear coupling term. We can now make use of the existence of a time-varying orthonormal basis of 𝐋^(1)(t), namely ψ^(α)(t), α=2,…,n, associated with eigenvalues Λ^(α) <0, together with ψ^(1)(t)=(1,…,1)^⊤ and Λ^(1)=0, to project δx⃗_i onto the n eigendirections, δx⃗_i=∑_αδx̃⃗̃_αψ^(α)_i. Because the latter vary in time, we need to define a second n× n time dependent matrix 𝐛(t) given by d ψ⃗^(α)(t)/dt=∑_βb_αβ(t)ψ⃗^(β)(t) ∀α=1,…, n , which is again real, skew-symmetric, with a null first row and first column, i.e., b_αβ+b_βα=0 and b_1α=0, because of the orthonormality condition of the eigenvectors. By projecting Eq. (<ref>) onto ψ^(α)(t), we get dδx̃⃗̃_β/dt = ∑_α b_βα(t)δx̃⃗̃_α+[𝐉_f+ Λ^(β)(t)𝐉_ĥ]δx̃⃗̃_β . Let us conclude by observing that the latter equation has the same structure as (<ref>). Those equations constitute the generalization of the Master Stability Equation to the case of time-varying higher-order structures. The time-varying signature of the topology is captured by the matrices 𝐜(t) or 𝐛(t) and the eigenvalues μ^(α)(t) or Λ^(α)(t), while the dynamics (resp. the coupling) is encoded in the Jacobian 𝐉_f (resp. 𝐉_h^(1) or 𝐉_ĥ). It is important to notice that, as the eigenvalues μ^(1)=0, Λ^(1)=0 and the skew-symmetric matrices 𝐜(t), 𝐛(t) have null first row and column, in analogy with the MSF approaches developed for static networks <cit.> and higher-order structures <cit.>, also in the case of time-varying higher-order structures we can decouple the Master Stability Equation into two components. One component describes the motion along the synchronous manifold, while the other represents the evolution of the modes transverse to the synchronous manifold. The Maximum Lyapunov Exponent (MLE) associated with the transverse modes measures the exponential growth rate of a tiny perturbation in the transverse subspace. It serves as an enhanced form of the Master Stability Function (MSF) and provides valuable insight into the stability of the reference orbit. For the synchronous orbit to be stable, the MLE associated with all transverse modes must be negative. Moreover, the MSF approaches applied to static networks and higher-order structures can be simplified by examining the evolution of the perturbation along each independent eigendirection associated with distinct eigenvalues of the Laplacian matrix. Let us observe that this is not possible in the present case, because the matrices 𝐜(t) and 𝐛(t) mix the different modes and introduce a complex interdependence among them, making it challenging to disentangle their individual contributions. For this reason, one has to address the problem numerically <cit.>. To demonstrate the theory introduced above and emphasize the outcomes arising from the modified Master Stability Equations (<ref>) and (<ref>), we present two key examples in the following sections: we use the Stuart-Landau limit cycle oscillator and the chaotic Lorenz system as prototype dynamical systems anchored to each individual node. To simplify the calculations, we assume that the hypergraph consists of only three nodes, three links and one triangle (face), whose weights change in time.
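Numerically, evaluating this generalized MSF amounts to estimating the largest Lyapunov exponent of the linear transverse system evolved along the reference orbit; a Benettin-type sketch is given below, where `ref_rhs', `jac' (which must supply the full transverse Jacobian, including the Λ^(β)𝐉_ĥ blocks and the 𝐛-matrix mixing terms) and the integration parameters are placeholders to be provided by the user.

```python
import numpy as np
from scipy.integrate import solve_ivp

def transverse_mle(ref_rhs, jac, s0, v0, t_max=2000.0, dt=0.1, t_skip=200.0):
    """Largest Lyapunov exponent of d(delta)/dt = J(t, s(t)) delta along the
    reference orbit s(t); negative values signal a stable synchronous state."""
    m = len(s0)

    def full_rhs(t, y):
        s, v = y[:m], y[m:]
        return np.concatenate([ref_rhs(t, s), jac(t, s) @ v])

    y = np.concatenate([s0, v0 / np.linalg.norm(v0)])
    lyap, t = 0.0, 0.0
    while t < t_max:
        sol = solve_ivp(full_rhs, (t, t + dt), y, rtol=1e-8, atol=1e-10)
        y = sol.y[:, -1]
        norm = np.linalg.norm(y[m:])
        if t >= t_skip:
            lyap += np.log(norm)       # accumulate the expansion after the transient
        y[m:] /= norm                  # renormalise the perturbation
        t += dt
    return lyap / (t_max - t_skip)
```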
Additionally, the eigenvector projection matrices 𝐜(t) and 𝐛(t) do not vary in time; this assumption results from a suitable choice of the Laplace eigenbasis, as explained later in Appendix <ref>. Finally, to simplify the analysis we also assume the Laplace eigenvalues to be constant in time. Let us stress that despite such assumptions, the proposed framework is very general and can be applied to any time-varying hypergraph. § SYNCHRONIZATION OF STUART-LANDAU OSCILLATORS COUPLED VIA TIME-VARYING HIGHER-ORDER NETWORKS The aim of this section is to present an application of the theory introduced above. We decided to use the Stuart-Landau (SL) model as a prototype example for two reasons: first, it provides the normal form for a generic system close to a supercritical Hopf bifurcation; second, because of its structure, the Jacobian of the reaction part becomes constant once evaluated on the reference orbit, and this simplifies the presentation of the results. An SL oscillator can be described by a complex amplitude w that evolves in time according to ẇ=σ w-β |w|^2w, where σ=σ_ℜ+iσ_ℑ and β=β_ℜ+iβ_ℑ are complex model parameters. The system admits a limit cycle solution w_LC(t)=√(σ_ℜ/β_ℜ)e^iω t, where ω=σ_ℑ-β_ℑσ_ℜ/β_ℜ, that is stable provided σ_ℜ>0 and β_ℜ>0, conditions that we hereby assume. To proceed in the analysis, we couple together n identical SL oscillators, each described by a complex amplitude w_j, with j=1,...,n, anchored to the nodes of a time-varying hypergraph as prescribed in the previous section, namely dw_j/dt= σ w_j-β w_j|w_j|^2 + ∑_d=1^D q_d∑_j_1,…,j_d=1^n A_jj_1… j_d^(d)(t)g⃗^(d)(w_j,w_j_1,…,w_j_d) . For the sake of simplicity, we restrict our analysis to pairwise and three-body interactions, namely D=2 in Eq. (<ref>). We hereby present and discuss SL synchronization under the diffusive-like coupling hypothesis and by using two different assumptions: regular topology and natural coupling. The case of non-invasive coupling is presented in Appendix <ref>. §.§ Diffusive-like and regular topology Let us thus assume the existence of two functions h^(1)(w) and h^(2)(w_1,w_2) such that g^(1) and g^(2) satisfy the diffusive-like assumption, namely [ g^(1)(w_j,w_j_1) = h^(1)(w_j_1)-h^(1)(w_j) and; ; g^(2)(w_j,w_j_1,w_j_2) = h^(2)(w_j_1,w_j_2)-h^(2)(w_j,w_j) . ] For the sake of definiteness, let us fix h^(1)(w)=w and h^(2)(w_1,w_2)=w_1w_2 ; let us observe that the latter functions do not satisfy the condition for natural coupling, indeed h^(1)(w)=w≠ w^2=h^(2)(w,w). Let us assume we deal with a regular topology, namely 𝐋^(2)=α_2𝐋^(1). Hence, following Eq. (<ref>) we can define 𝐉_ĥ = q_1 𝐉_h^(1)+q_2 α_2 𝐉_h^(2). Let us perturb the limit cycle solution w_LC(t)=√(σ_ℜ/β_ℜ)e^iω t by defining w_j=W_LC(1+ρ_j)e^iθ_j, where ρ_j and θ_j are real and small functions for all j. A straightforward computation allows us to write the time evolution of ρ_j and θ_j as ddt(ρ_j θ_j) = ( -2σ_ℜ 0; -2β_ℑσ_ℜ/β_ℜ 0 )(ρ_j θ_j)+∑_ℓ L_jℓ^(1)[( q_1,ℜ -q_1,ℑ; q_1,ℑ q_1,ℜ)+ 2α_2 √(σ_ℜ/β_ℜ)(cos (ω t) -sin (ω t); sin (ω t) cos (ω t) )( q_2,ℜ -q_2,ℑ; q_2,ℑ q_2,ℜ)](ρ_ℓ θ_ℓ) , where ω =σ_ℑ-β_ℑσ_ℜ/β_ℜ is the frequency of the limit cycle solution. By exploiting the eigenvectors ψ^(α)(t) and eigenvalues Λ^(α)(t) of 𝐋^(1)(t) to project the perturbations ρ_j and θ_j we obtain: ddt(ρ_β θ_β) = ∑_α b_βα(ρ_α θ_α)+{( -2σ_ℜ 0; -2β_ℑσ_ℜ/β_ℜ 0 ) +Λ^(β)[( q_1,ℜ -q_1,ℑ; q_1,ℑ q_1,ℜ) + 2α_2 √(σ_ℜ/β_ℜ)(cos (ω t) -sin (ω t); sin (ω t) cos (ω t) )( q_2,ℜ -q_2,ℑ; q_2,ℑ q_2,ℜ)]}(ρ_β θ_β) , where the matrix 𝐛 has been defined in Eq. (<ref>).
For the sake of definiteness and to focus on the impact of the time-varying topology, we hereby consider a simple higher-order network structure composed of n=3 nodes, three links and one triangle. Moreover, the eigenvalues are assumed to be constant and the time-derivative of the associated eigenvectors projected on the eigenbasis to return a constant matrix 𝐛, for a given Ω≥ 0 𝐛=[ 0 0 0; 0 0 Ω; 0 -Ω 0 ] . One can show (see Appendix <ref> and <cit.>) that those assumptions on the hypergraph correspond to two eigenvectors rotating in a plane orthogonal to the constant eigenvector ψ^(1)∼ (1,…,1)^⊤ with frequency Ω>0. The case Ω=0 corresponds thus to a static higher-order network structure. Under those assumptions, Eq. (<ref>) determines a time periodic linear system whose stability can be determined by using Floquet theory. In order to illustrate our results, we let q_1, and q_2, to freely vary in the range [-5,5], while keeping fixed to generic values the remaining parameters, and we compute the Floquet eigenvalue with the largest real part, corresponding thus to the Master Stability Function (MSF) of Eq. (<ref>), as a function of q_1, and q_2,. The corresponding results are shown in Fig. <ref> for Ω=0 (panel (a)) and Ω = 2 (panel (b)). By a direct inspection, one can clearly conclude that the parameters region associated with a negative MSF (black region), i.e., to the stability of the SL limit cycle and thus to global synchronization, is larger for Ω >0 than for Ω=0. To study the combined effect of both coupling strengths q_1 and q_2, we set q_1=ϵ_1q_1,0 and q_2=ϵ_2q_2,0, and we compute the MSF as a function of ϵ_1 and ϵ_2, having fixed without loss of generality q_1,0=0.1-0.5i and q_2,0=0.1-0.5i. The corresponding results are presented in Fig. <ref> for static (Ω=0, panel (a)) and time-varying (Ω=2, panel (b)) higher-order structure. We can again conclude that the region of parameters corresponding to global synchronization (black region) is larger in the case of time-varying hypergraph than in the static case. Our last analysis concerns the relation between the frequency Ω and the size of the coupling parameters ϵ_1, ϵ_2, still assuming q_1=ϵ_1q_1,0 and q_2=ϵ_2q_2,0, on the onset of synchronization. In Fig. <ref> we report the MSF in the plane (Ω,ϵ_1) for a fixed value of ϵ_2 (panel (a)), and in the plane (Ω,ϵ_2) for a fixed value of ϵ_1 (panel (b)). Let us observe that the synchronization can be easier achieved the smaller the value ϵ_j, j=1,2, for which the MSF is negative, having fixed Ω. Let us thus define ϵ̂_1(Ω)=min{ϵ >0 : MSF(ϵ,ϵ_2,Ω)<0}, for fixed ϵ_2, and similarly ϵ̂_2(Ω). The results of Fig. <ref> clearly show that ϵ̂_1(Ω)<ϵ̂_1(0)∼ 3.5 and ϵ̂_2(Ω)<ϵ̂_2(0)∼ 4.2 and thus support our claim that time-varying structures allow to achieve synchronization easier. To support our analysis, we performed numerical simulations of the SL defined on the simple 3 nodes time-varying hypergraph. We selected (ϵ_1,ϵ_2)=(2.5,0.5) and the remaining parameters values as in Fig. <ref>. By observing the latter figure, we conclude that for the chosen parameters, the MSF is positive if Ω=0 and negative if Ω=2, hence the SL should globally synchronize on the time-varying hypergraph while it would not achieve this state in the static case. Results of Fig. <ref> confirm these conclusions; indeed, we can observe that (real part of) the complex state variable is in phase for all i in the case Ω=2 (right panel), while this is not clearly the case for Ω=0 (left panel). 
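The Floquet computation behind these MSF maps can be sketched as follows: one integrates the fundamental matrix of the projected, time-periodic transverse system over one period T=2π/ω and takes the largest real part of the resulting Floquet exponents (the generic interface below is an assumption; A(t) must assemble the 𝐛-matrix and Λ^(β)-dependent blocks of the projected equation).

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_msf(A_of_t, period, dim):
    """Largest Floquet exponent (real part) of dx/dt = A(t) x with A(t + period) = A(t)."""
    def rhs(t, y):
        Phi = y.reshape(dim, dim)          # fundamental matrix, flattened row-wise
        return (A_of_t(t) @ Phi).ravel()

    sol = solve_ivp(rhs, (0.0, period), np.eye(dim).ravel(), rtol=1e-9, atol=1e-12)
    monodromy = sol.y[:, -1].reshape(dim, dim)
    eigs = np.linalg.eigvals(monodromy).astype(complex)
    exponents = np.log(eigs) / period      # Floquet exponents
    return exponents.real.max()
```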
§.§ Diffusive-like and natural coupling The aim of this section is to replace the condition of regular topology with the condition of natural coupling, and thus to consider again a diffusive-like coupling. Let us now consider two functions h^(1)(w) and h^(2)(w_1,w_2) satisfying the natural coupling assumption, namely h^(1)(w)=h^(2)(w,w) . For the sake of definiteness, let us fix h^(1)(w)=w^3 and h^(2)(w_1,w_2)=w_1(w_2)^2 . Consider again a perturbation of the limit cycle solution w_LC(t)=√(σ_ℜ/β_ℜ)e^iω t defined by w_j=W_LC(1+ρ_j)e^iθ_j, where ρ_j and θ_j are real and small functions for all j. A straightforward computation allows us to write the time evolution of ρ_j and θ_j as ddt(ρ_j θ_j) = ( -2σ_ℜ 0; -2β_ℑσ_ℜ/β_ℜ 0 )(ρ_j θ_j) +3σ_ℜ/β_ℜ∑_ℓ M_jℓ(cos (2ω t) -sin (2ω t); sin (2ω t) cos (2ω t) )(ρ_ℓ θ_ℓ) , where ω =σ_ℑ-β_ℑσ_ℜ/β_ℜ is the frequency of the limit cycle solution and 𝐌 is the matrix q_1 𝐋^(1)(t)+q_2 𝐋^(2)(t) (see Eq. (<ref>)). Let us observe that in this case the coupling parameters q_1 and q_2 should be real numbers if we want to deal with real Laplace matrices, a hypothesis that we hereby assume to hold true. By invoking the eigenvectors ϕ^(α)(t) and eigenvalues μ^(α)(t) of 𝐌(t), and the matrix 𝐜 (see Eq. (<ref>)), we can project the perturbations ρ_j and θ_j onto the eigenbasis and thus rewrite the time variation of the perturbation as follows ddt(ρ_β θ_β) = ∑_α c_βα(ρ_α θ_α)+[ ( -2σ_ℜ 0; -2β_ℑσ_ℜ/β_ℜ 0 ) +3σ_ℜ/β_ℜμ^(β)(cos (2ω t) -sin (2ω t); sin (2ω t) cos (2ω t) )](ρ_β θ_β) . Let us assume again that we deal with a hypergraph made of 3 nodes and consider a time-independent matrix 𝐜=[ 0 0 0; 0 0 Ω; 0 -Ω 0 ] , for some Ω≥ 0. The eigenvalue μ^(1)=0 of 𝐌 determines the dynamics parallel to the synchronous manifold. On the other hand, the equations obtained for μ^(2) and μ^(3) give the dynamics of the modes transverse to the synchronization manifold. Hence the MSF can be obtained by solving the latter equations, and it provides the conditions for a globally stable synchronous solution to exist. In Fig. <ref>, we show the level sets of the MSF as a function of the eigenvalues μ^(2) and μ^(3) while keeping the remaining parameters in Eq. (<ref>) fixed at generic nominal values. In panel (a) we consider a static hypergraph, i.e., Ω=0, while in panel (b) a time-varying hypergraph, i.e., Ω=2; negative values of the MSF are reported in black and thus correspond to a global synchronous state, while positive values of the MSF are shown in yellow. One can clearly appreciate that in the case of the time-varying hypergraph the MSF is negative for a much larger set of eigenvalues μ^(2) and μ^(3), and thus the SL system can synchronize more easily. § SYNCHRONIZATION OF LORENZ SYSTEMS NONLINEARLY COUPLED VIA TIME-VARYING HIGHER-ORDER NETWORKS The aim of this section is to show that our results hold true beyond the example of the dynamical system considered above, i.e., the Stuart-Landau oscillator. We thus present an application to the synchronization of chaotic systems on a time-varying higher-order network. For the sake of definiteness, we use the paradigmatic chaotic Lorenz model for the evolution of the individual nonlinear oscillators.
We consider again the scenario of regular topology with the toy model hypergraph structure composed of n=3 nodes described previously, the whole system can thus be described by ẋ_i =a_1(y_i-x_i)+ϵ_2∑_j=1^N∑_k=1^NA^(2)_ijk(x_j^2x_k-x_i^3) ẏ_i =x_i(a_3-z_i)-y_i+ϵ_1∑_j=1^NA^(1)_ij(y_j-y_i) ż_i =x_iy_i-a_2z_i , where the system parameters are kept fixed at a_1=10, a_2=8/3, a_3=28 for which individual nodes exhibits chaotic trajectory. The pairwise and higher-order structures are related to each other by 𝐋^(2)=α_2𝐋^(1). We assume the eigenvalues of the Laplacian 𝐋^(1) to be constant and the matrix 𝐛 to be given by 𝐛=[ 0 0 0; 0 0 Ω; 0 -Ω 0 ] for some Ω≥ 0. Let us thus select as reference solution s⃗(t) a chaotic orbit of the isolated Lorenz model and consider as done previously the time evolution of a perturbation about such trajectory. Computations similar to those reported above, allow to obtain a linear non-autonomous system ruling the evolution of the perturbation, whose stability can be numerically inferred by computing the largest Lyapunov exponent, i.e., the MSF. We first considered the impact of the coupling strength, ϵ_1 and ϵ_2 on synchronization; results are reported in Fig. <ref> where we present the level sets of the MSF as a function of the above parameters by using a color code: black dots refer to negative MSF while yellow dots to positive MSF. The panel (a), refers to a static hypergraph, i.e., Ω=0, while the panel (b) to a time-varying one, i.e., Ω=3, one can thus appreciate that the latter setting allows a negative MSF for a larger range of parameters ϵ_1 and ϵ_2 and hence we can conclude that time-varying hypergraph enhance synchronization also in the case of chaotic oscillators. We conclude this analysis by studying again the relation between the frequency Ω and the size of the coupling parameters ϵ_1, ϵ_2 on the onset of synchronization. In Fig. <ref> we show the MSF in the plane (Ω,ϵ_1) for a fixed value of ϵ_2=0.01 (panel (a)), and in the plane (Ω,ϵ_2) for a fixed value of ϵ_1=0.2 (panel (b)). By using again ϵ̂_1(Ω)=min{ϵ >0: MSF(ϵ,ϵ_2,Ω)<0}, for fixed ϵ_2, and similarly ϵ̂_2(Ω), we can conclude that ϵ̂_1(Ω)<ϵ̂_1(0)∼ 1.4 and ϵ̂_2(Ω)<ϵ̂_2(0)∼ 0.04 and thus supporting again our claim that time-varying structures allow to achieve synchronization easier. § CONCLUSIONS To sum up we have here introduced and studied a generalized framework for the emergence of global synchronization on time-varying higher-order networks and developed a theory for its stability without imposing strong restrictions on the functional time evolution of the higher-order structure. We have demonstrated that the latter can be examined by extending the Master Stability Function technique to the novel framework for specific cases based either on the inter-node coupling scheme or the topology of the higher-order structure. Our findings reveal that the behavior of the higher-order network is represented by a matrix that changes over time and possesses skew symmetry. This matrix is derived from the time-dependent evolution of the eigenvectors of the higher-order Laplacian. Additionally, the eigenvalues associated with these eigenvectors can also vary over time and have an impact on shaping the evolution of the introduced disturbance. 
We have validated the proposed theory on time-varying hypergraphs of coupled Stuart-Landau oscillators and chaotic Lorenz systems, and the results obtained indicate that incorporating temporal aspects into group interactions can facilitate synchronization in higher-order networks compared to static ones. The framework and concepts presented in this study create opportunities for future research on the impact of temporality in systems where time-varying group interactions have been observed but not yet thoroughly explored due to the absence of a suitable mathematical setting. Importantly, the fact that our theory does not require any restrictions on the time evolution of the underline structure could offer the possibility to apply it for a diverse range of applications other than synchronization. apsrev4-1 § NON-INVASIVE COUPLINGS Here we will discuss the results corresponding to a slightly more general hypothesis for g⃗^(d), namely to be non-invasive, i.e., g⃗^(d)(s⃗,…,s⃗)=0 ∀ d=1,…,D , whose goal is again to guarantee that the coupling term in Eq. (<ref>) vanishes once evaluated on the orbit (s⃗(t),…,s⃗(t))^⊤. Indeed by using again x⃗_i=s⃗+δx⃗_i and expanding Eq. (<ref>) up to the first order we get δẋ⃗̇_i = 𝐉_fδx⃗_i+∑_d=1^D q_d∑_j_1,…,j_d=1^n B_ij_1… j_d(t) [ ∂g⃗^(d)/∂x⃗_i|_(s⃗,…,s⃗)δx⃗_i+∂g⃗^(d)/∂x⃗_j_1|_(s⃗,…,s⃗)δx⃗_j_1+ …+ ∂g⃗^(d)/∂x⃗_j_d|_(s⃗,…,s⃗)δx⃗_j_d] ; from Eq. (<ref>) we can obtain ∂g⃗^(d)/∂x⃗_i|_(s⃗,…,s⃗)+∂g⃗^(d)/∂x⃗_j_1|_(s⃗,…,s⃗)+ …+ ∂g⃗^(d)/∂x⃗_j_d|_(s⃗,…,s⃗)=0 , and thus rewrite (<ref>) as follows δẋ⃗̇_i = 𝐉_fδx⃗_i+∑_d=1^D q_d∑_j_1,…,j_d=1^n B_ij_1… j_d(t) [ ∂g⃗^(d)/∂x⃗_j_1|_(s⃗,…,s⃗)(δx⃗_j_1-δx⃗_i)+ …+ ∂g⃗^(d)/∂x⃗_j_d|_(s⃗,…,s⃗)(δx⃗_j_d-δx⃗_i)] . Recalling the definition of k^(d)_ij given in Eq. (<ref>) we get δẋ⃗̇_i = 𝐉_fδx⃗_i+∑_d=1^D q_d (d-1)![∑_j_1=1^n k^(d)_ij_1(t) ∂g⃗^(d)/∂x⃗_j_1|_(s⃗,…,s⃗)(δx⃗_j_1-δx⃗_i)+ …+ ∑_j_l=1^n k^(d)_ij_d(t) ∂g⃗^(d)/∂x⃗_j_d|_(s⃗,…,s⃗)(δx⃗_j_d-δx⃗_i)] . By using the definition of the higher-order Laplace matrix (<ref>) we eventually obtain δẋ⃗̇_i = 𝐉_fδx⃗_i-∑_d=1^D q_d∑_j=1^n L^(d)_ij(t) [∂g⃗^(d)/∂x⃗_j_1|_(s⃗,…,s⃗)+ …+ ∂g⃗^(d)/∂x⃗_j_d|_(s⃗,…,s⃗)]δx⃗_j . Let us consider now a particular case of non-invasive function, we assume thus there exists a function φ⃗:ℝ^m→ℝ^m, such that φ⃗(0)=0 and define g^(d)(x⃗_i,x⃗_j_1,…,x⃗_j_d)=∑_ℓ=1^dφ⃗(x⃗_i-x⃗_j_ℓ) , then ∂g⃗^(d)/∂x⃗_j_ℓ = -𝐉_φ(0) , where 𝐉_φ(0) is the Jacobian of the function φ⃗ evaluated at 0. In conclusion (<ref>) rewrites as follows δẋ⃗̇_i = 𝐉_fδx⃗_i-∑_d=1^D q_d∑_j=1^n L^(d)_ij(t) (-d)𝐉_φ(0) δx⃗_j=𝐉_fδx⃗_i+∑_j=1^n G_ij(t)𝐉_φ(0) δx⃗_j , where 𝐆(t)=∑_d=1^Dd q_d𝐋^(d)(t) can be considered as an effective time-varying simplicial complex or hypergraph. Let us now observe that the effective matrix 𝐆(t) is a Laplace matrix; it is non-positive definite (as each one of the 𝐋^(d)(t) does for any d=1,…, D and any t>0), it admits μ^(1)=0 as eigenvalue associated to the eigenvector ϕ^(1)=(1,…,1)^⊤ and it is symmetric. So there exist a orthonormal time-varying eigenbasis, ϕ^(α)(t), α=1,…,n, for 𝐆(t) with associated eigenvalues μ^(α)≤ 0. Similar to before, we define the n× n time dependent matrix 𝐜(t) that quantifies the projections of the time derivatives of the eigenvectors onto the independent eigendirections, namely d ϕ⃗^(α)/dt(t)=∑_βc_αβ(t)ϕ⃗^(β)(t) ∀α=1,…, n . By recalling the orthonormality condition (ϕ⃗^(α)(t))^⊤·ϕ⃗^(β)(t)=δ_αβ we can again straightforwardly conclude that 𝐜 is a real skew-symmetric matrix with a null first row and first column, i.e., c_αβ+c_βα=0 and c_1α=0. 
Thereafter, we consider Eq. (<ref>), and we project it onto the eigendirections, namely we introduce δx⃗_i=∑_αδx̂⃗̂_αϕ^(α)_i and recalling the definition of 𝐜 we obtain dδx̂⃗̂_β/dt = ∑_α c_βα(t)δx̂⃗̂_α+[𝐉_f+ μ^(β)(t)𝐉_φ(0)]δx̂⃗̂_β . This is the required Master Stability Equation, solving which for the calculation of maximum Lyapunov exponents provide the condition for stability of the synchronous solution. §.§ Synchronization of Stuart-Landau oscillators with non-invasive coupling assumption To validate the above results we again consider the SL oscillator with a particular case of non-invasive coupling function, namely we assume to exist a real function φ such that φ(0)=0, φ^'(0)≠ 0 and [ g^(1)(w_1,w_2)=φ(w_1-w_2) , and; g^(2)(w_1,w_2,w_3)=φ(w_1-w_2)+φ(w_1-w_3) . ] By reasoning as before, we get [ ddt(ρ_j θ_j) = ( -2σ_ 0 -2β_σ_/β_ 0 )(ρ_j θ_j)+ φ^'(0)∑_ℓ(q_1 L^(1)_jℓ +q_2 L^(2)_jℓ) ( 1 0 0 -1 )(ρ_l θ_l) . ] By using again the eigenvectors ϕ^(α)(t), eigenvalues μ^(α)(t) of 𝐆(t) and the matrix 𝐜 (see Eq. (<ref>)), we can rewrite the previous formula as [ ddt(ρ_β θ_β) = ∑_α c_βα(ρ_α θ_α)+[( -2σ_ 0 -2β_σ_/β_ 0 ) + φ^'(0)μ^(β)( 1 0 0 -1 )](ρ_β θ_β). ] Figure <ref> represent the result for the non-invasive coupling assumption. Here, we consider the non-invasive function so that φ^'(0)=1 and the skew-symmetric projection matrix 𝐜 is considered constant throughout the analysis as earlier. Here we show the level sets of the MSF as a function of the eigenvalues μ^(2) and μ^(3) while keeping the remaining parameters in Eq. (<ref>) fixed at generic nominal values. In panel (a), we consider a static hypergraph, i.e., Ω=0, while in the (b) panel, a time-varying hypergraph, i.e., Ω=2, negative values of MSF are reported in black, and they correspond thus to a global synchronous state, positive values of MSF are shown in yellow; one can clearly appreciate that in the case of the time-varying hypergraph, the MSF is negative for a much larger set of eigenvalues μ^(2) and μ^(3) and thus the SL system can achieve synchronization more easily. § STRUCTURE OF THE SMALL HYPERGRAPH The goal of this section is to provide more details about the construction of the simple time-varying hypergraph used as support for the numerical simulations in the main text. To start with we need to obtain the time-evolution of eigenvectors ψ⃗^(α)(t), which follows the equation [ dψ⃗^(α)dt=∑_αb_βαψ⃗^(α) , ] where the matrix 𝐛 has been given in Eq. (<ref>). The eigenvector associated with the least eigenvalue Λ^(1)=0 is constant and is given by ψ⃗^(1)=1/√(3)(1,1,1)^⊤. The other two eigenvectors are obtained by solving the previous equation and are represented as ψ⃗^(2)(t)=v⃗_1cos(Ω t)+v⃗_2sin(Ω t) and ψ⃗^(3)(t)=-v⃗_1sin(Ω t)+v⃗_2cos(Ω t), where v⃗_1, v⃗_2 are the unknown vectors that should be determined using the constraints to have orthonormal eigenbasis for every t. Following a few steps of calculation, we can obtain the other two eigenvectors as follows [ ψ⃗^(2)(t)=1√(6)[ 1; -2; 1 ]cos(Ω t)+1√(2)[ -1; 0; 1 ]sin(Ω t) ,; ψ⃗^(3)(t)=-1√(6)[ 1; -2; 1 ]sin(Ω t)+1√(2)[ -1; 0; 1 ]cos(Ω t). ] Now recalling our assumption about constant eigenvalues and using the relation 𝐋^(1)_ij(t)=∑_αΛ^(α)ψ⃗^(α)_i(t)ψ⃗^(α)_j(t), we can obtain the entries of the pairwise Laplace matrix as [ L^(1)_ij(t)=Λ^(2)ψ⃗^(2)_i(t)ψ⃗^(2)_j(t)+Λ^(3)ψ⃗^(3)_i(t)ψ⃗^(3)_j(t), ] where we use the fact that Λ^(1)=0 for all time t. 
Finally by using the relation between pairwise adjacency and Laplace matrices L^(1)_ij(t)=A^(1)_ij(t), for i j, we obtain the temporal evolution of the links as [ A^(1)_12(t)=12-13cos(π/3+2Ω t),; ; A^(1)_13 (t)= 12+13cos(2Ω t),; ; A^(1)_23(t)=12-13cos(π/3-2Ω t), ] where we have used the fact that the non-zero eigenvalues are given by Λ^(2)=-1 and Λ^(3)=-2. Again from the regular structure of the hypergraph, we have 𝐋^(2)(t)=α_2𝐋^(1)(t), for all t. Therefore, following the relation (<ref>), entries of the 2nd-order Laplacian 𝐋^(2) can be represented as, [ L^(2)_ij(t)=α_2[Λ^(2)ψ⃗^(2)_i(t)ψ⃗^(2)_j(t)+Λ^(3)ψ⃗^(3)_i(t)ψ⃗^(3)_j(t)]. ] Now, the definition of higher-order Laplacian implies that, L^(2)_ij(t)=∑_kA^(2)_ijk(t), i j. Hence, using the above relation and Eq. (<ref>), we can obtain the temporal evolution of the 3-hyperedge as [ A^(2)_123(t)=1-23cos(π/3+2Ω t), ] where we have again used the fact that the non-zero eigenvalues are Λ^(2)=-1, and Λ^(3)=-2, and the value of the parameter α_2 has been set α_2=2. Due to the assumption of undirected hypergraph, we also trivially have, A^(2)_123(t)=A^(2)_π(123)(t), where π(123) indicates any permutation of (123). Fig. <ref> portrays the temporal evolution of the links and 3-hyperedge weights. To better understand the evolution of the hypergraph, we provide the graphical evolution of the hypergraph in the accompanying Supplementary Movie, together with the time evolution of the weights of the links A^(1)_ij(t) and of the hyperedge A^(2)_123(t).
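As a consistency check of the construction above, the following minimal Python sketch (illustrative only) rebuilds L^(1)(t) from the rotating eigenvectors ψ^(2)(t) and ψ^(3)(t) with Λ^(2)=-1 and Λ^(3)=-2, and verifies that its off-diagonal entries reproduce the closed-form link weight A^(1)_12(t)=1/2-(1/3)cos(π/3+2Ω t) and, for α_2=2, the hyperedge weight A^(2)_123(t)=1-(2/3)cos(π/3+2Ω t); the value of Ω is arbitrary.

import numpy as np

Omega = 2.0                                   # rotation frequency (arbitrary)
Lam2, Lam3 = -1.0, -2.0                       # non-zero Laplacian eigenvalues
u = np.array([1.0, -2.0, 1.0]) / np.sqrt(6)   # fixed vectors v_1, v_2 of Eq. (<ref>)
v = np.array([-1.0, 0.0, 1.0]) / np.sqrt(2)

def laplacian1(t):
    """Pairwise Laplacian L1(t) rebuilt from its rotating eigenvectors."""
    psi2 = u * np.cos(Omega * t) + v * np.sin(Omega * t)
    psi3 = -u * np.sin(Omega * t) + v * np.cos(Omega * t)
    return Lam2 * np.outer(psi2, psi2) + Lam3 * np.outer(psi3, psi3)

for t in np.linspace(0.0, np.pi / Omega, 7):
    L1 = laplacian1(t)
    # Laplacian property: zero row sums (constant eigenvector psi^(1))
    assert np.allclose(L1.sum(axis=1), 0.0)
    # closed-form link weight A12(t), and hyperedge weight A123(t) for alpha_2 = 2
    assert np.isclose(L1[0, 1], 0.5 - np.cos(np.pi / 3 + 2 * Omega * t) / 3)
    assert np.isclose(2.0 * L1[0, 1], 1.0 - 2 * np.cos(np.pi / 3 + 2 * Omega * t) / 3)
print("time-varying link and hyperedge weights reproduce the closed forms")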
http://arxiv.org/abs/2307.04418v1
20230710084854
Quantum error correction beyond the toric code: dynamical systems meet encoding
[ "Garima Rajpoot", "Komal Kumari", "Sudhir Ranjan Jain" ]
quant-ph
[ "quant-ph" ]
[ * Received / Accepted ======================== We construct surface codes corresponding to genus greater than one in the context of quantum error correction. The architecture is inspired by the topology of invariant integral surfaces of certain non-integrable classical billiards. Corresponding to the fundamental domains of rhombus and square torus billiard, surface codes of genus two and five are presented here. There is significant improvement in encoding rates and code distance, in addition to immunity against noise. § FROM GEOMETRY TO ENCODING Geometrical representations of algebraic and arithmetic relations <cit.>, and, algebraic representations of geometrical patterns <cit.> are both fascinating themes. In their turns, they have led to a deep understanding in physics and mathematics <cit.>. A one-to-one correspondence between Lie groups and reflection groups whose fundamental regions are simplexes in Euclidean space has been beautifully illustrated in <cit.>. These fundamental regions generate tori for “unit shapes" like a square, equilateral triangle, right isosceles triangle, or a hemi-equilateral triangle <cit.>. Here we bring out an application of geometry of regular polytopes <cit.> to encoding theory in the context of quantum information. The dynamical systems which are most relevant to the present theme are planar billiards wherein a particle moves freely inside a two-dimensional enclosure, reflecting from the boundary in accordance to the Snell's law of reflection. According to the Liouville-Arnol'd theorem <cit.>, for a system with f degrees of freedom, if there are f functionally independent invariants which are in involution, the (invariant) surface on which the trajectory of the system resides is topologically equivalent to an f-torus. Another condition stipulated for the applicability of the Liouville-Arnol'd theorem is that the vector fields in phase space must be smooth everywhere. The integrability of such systems is a fragile property, so much so that even if the vector fields become singular at points of measure zero, the system loses integrability <cit.>. Perhaps the simplest example is when the shape of the enclosure is a square or a rectangle, explained later in some detail, where the invariant surface is a torus. However, an interesting situation arises by deforming the square to a rhombus with an acute angle π /n. The vector fields in phase space become singular at a set of points of measure zero. Corresponding invariant surface is topologically equivalent to a sphere with few handles, the number of handles is related to n. In this work, instead of a lattice of spins, we employ the lattice constructed by stacking fundamental domains in a plane. On this lattice, we show how to place qubits and set up a stabilizer code. Somewhat unrelated but of great significance, a connection between billiard and computation was first realized by Fredkin and Toffoli <cit.>. Although it gave us the Toffoli gate, the connection between topology of invariant surfaces in billiards and surface codes was not relevant for them and has been brought out recently <cit.>. § GENUS-2 CODE Computation requires scalability of logical qubits on planar chips. One way to achieve this is to use “unit shapes" which can fill the plane on successive reflections to encode the information on a surface. 
Our aim is to make use of the fundamental domains of certain geometrical structures such as squares and rhombi, which upon successive reflections, fill the whole plane while maintaining the planarity of the surface. This suitable arrangement allows one to make changes anywhere else in the circuit by only locally changing parameters, inadvertently leading to scalability. For example, if we consider a square tile, upon successive reflections about its sides, four copies form a unit of tessellation - the fundamental domain, identifying the pairs of parallel edges gives a torus, which is characterized by a topological invariant, the genus being equal to one. Thus, the surface code corresponds to tori, and hence makes the well-known “toric code" <cit.>. The fundamental domain of a π/3-rhombus is another such structure, genus equal to two, that can be tessellated on the whole surface. Here, we use this to design a new code on a surface of genus two. §.§ “Tessellation" with Lg-rhombus We introduce a new surface code using the fundamental domain equivalent to a genus two surface (Fig. <ref>), constructed by stitching six copies of π/3-rhombus. Upon identification of edges as shown in Fig. <ref>, it creates a “double-torus" <cit.>, which is equivalent to a sphere with two handles. This can be tessellated over the whole plane as shown in Fig. <ref>. Hence, encryption on this surface is termed as “Genus two code" or “Double-toric code". As per Kitaev's idea, whereby increasing the genus will give a higher encryption, the double-toric code helped achieve a significantly higher encryption rate as compared to the surface code. §.§ Encoding on a plane Let us start with a unit structure of the genus two code - constructed by using n=6 data qubits (represented by circles) and m=4 ancilla qubits (represented by squares), shown in Fig. <ref>. The bold and dashed lines represent the control-X and control-Z operations, respectively from the ancilla qubit to the data qubits. Stabilizers are the operators which belong to the Pauli group and preserve the logical state, i.e. if the logical state is |Ψ⟩_L, then P_i|Ψ⟩_L=(+1)|Ψ⟩_L. The set of stabilisers for this code structure is P={X_1X_2X_3X_4, X_3X_4X_5X_6, Z_1Z_3Z_5, Z_2Z_4Z_6}. These four elements of the stabilizer set are the generators of the stabilizer group 𝒮. For this encoded logical qubit, the logical state |0⟩_L is <cit.>: |0⟩_L =1/𝒩∏_P_i∈⟨P⟩(I^⊗n+P_i)|0^⊗n⟩ =1/𝒩(I^⊗6+X_1X_2X_3X_4)(I^⊗6+X_3X_4X_5X_6)(I^⊗6+Z_1Z_3Z_5)(I^⊗6+Z_2Z_4Z_6)|0^⊗6⟩ =1/𝒩(|000000⟩+|001111⟩+|111100⟩+|110011⟩), where 𝒩 is the normalization factor. The circuit for this encryption is shown in Fig. <ref> (b). All the stabilizers commute with each other ([P_i,P_j]=0 ∀ i,j). To construct logical state |1⟩_L, we have to look for analogous Pauli sigma pairs of logical operators {X_i,Z_i}, that (i) commute with each of the stabilizers P_j ([X_i,P_j]=0=[Z_i,P_j] ∀ i,j) and (ii) pairwise anti-commute with each other ({X_i,Z_i}=0 and [X_i,Z_j]=0 ∀ i≠ j). To find the logical operators, first we have to identify the edges to specify the boundaries. The filling of plane using π/3-rhombus, forms periodically arranged branch-cuts, which help identify the boundaries. On these boundaries, the control-X (bold lines) and control-Z (dashed lines) are arranged alternately. We define a path, between the boundaries, by connecting a data qubit vertex of a rhombus to another data qubit vertex of a corresponding copy with respect to the fundamental domain of the rhombus. 
Two sets of six paths are found which form the logical X operator (X̅) and logical Z operator(Z̅). Thus we found two pairs of logical operators, which satisfy the above conditions {X̅_1=X_1X_3, Z̅_1=Z_1Z_4Z_6} and {X̅_2=X_4X_6, Z̅_2=Z_2Z_4Z_5}. The minimum weight of error E=E_a^† E_b violating the Knill-Laflamme conditions <cit.> was found to be 2. Thus it is a [[6,2,2]] code. The encoding rate, or the ratio of the number of logical qubits to the number to data qubits for this code structure is 1/3. To increase the code distance and the encoding rate of the double-toric code, we can stack a unit of this code (Fig. <ref>) vertically as well as horizontally. Reflecting the unit in equal number of vertical and horizontal directions, arranges the unit structures in equal number of rows and columns. To construct the code with p^2 number of unit structures, the number of rows and columns will be p, the the number of required data qubits is n=2p(2p+1), number of required ancilla qubits is m=2p(p+1), number of logical qubit is k=2p^2 and the code distance is d=⌊p+2/2⌋+1, where ⌊·⌋ is the floor function. So the general form of the code is [[2p(2p+1), 2p^2,⌊p+2/2⌋+1]]. The encoding rate of this code is k/n=p/(2p+1). For p→∞, the encoding rate is 1/2. §.§ Comparison of code distance in toric and genus-2 codes In the [[5,1,2]] code shown in Fig. <ref>, the code distance is 2. Let us try to make a logical operator of weight 3. The paths D1-A1-D3-A4-D5 and D2-A3-D3-A2-D4 provide such a pair of logical operator ⟨X̅=X_2X_3X_4, Z̅=Z_1Z_3Z_5⟩. Both the operators commute with all the stabilizers of the [[5,1,2]] code and anticommute with each other. In this way we achieved a pair of logical operators of weight 3 and so the code distance could be 3 making it a [[5,1,3]] code instead. But for the states corresponding to these operators, the minimum weight of error for which Knill-Laflamme conditions do not hold is d=2, indicating that this has to be a distance 2 code, hence the code is [[5,1,2]]. This is well-expected. It is important to note that we could have found all logical operators of weight 2, while maintaining the code distance two - {X_1X_3, Z_1Z_2} and {X_4X_6, Z_5Z_6}. In this case also, the minimum weight of errors for which the Knill-Laflamme conditions do not hold is two. So we could have chosen either set of logical operators. But it is our aim to maximize the code distance using the reflection property of the structure. This makes the [[2p(2p+1),2p^2,⌊p+2/2⌋+1]] code more suitable for achieving higher encryption rates and distances than a [[2p(2p+1),2p^2,2]] code. Consider now another unit stacked vertically on the single unit as shown in Fig. <ref>. Here, the number of physical qubits is n=10, while the number of ancilla qubits is m=7. The stabilizers for this code are, P={X_1X_2X_3X_4, X_3X_4X_5X_6X_7X_8, X_7X_8X_9X_10, Z_1Z_3Z_5, Z_2Z_4Z_6, Z_5Z_7Z_9, Z_6Z_8Z_10}. Following the arguments presented above for identifying paths between boundaries, we obtain X̅ and Z̅; the complete set of logical operators commuting with the stabilizers and anti-commuting pairwise is thus (i) {X̅_1=X_2X_6X_8, Z̅_1=Z_1Z_4Z_8Z_9}, (ii) {X̅_2=X_2X_6X_10, Z̅_2=Z_5Z_7Z_10}, (iii) {X̅_3=X_4X_6X_8, Z̅_3=Z_2Z_3Z_6}. The Knill-Laflamme conditions are violated for a weight of error three, giving the code distance three. However, we can again find logical operators of weight two - {X_1X_3,Z_1Z_2}, {X_3X_5X_7,Z_5Z_6} and {X_7X_9,Z_9Z_10}. 
This should give a distance of two which is also verified using the Knill-Laflamme conditions. Since both the cases are valid, we choose to use the one in which the distance is maximum without violating the stabilizer algebra. § GENUS-5 CODE The motivation to this code stems from another dynamical system, the square torus billiard where the integrable dynamics of a square billiard is interrupted by a square shaped scatterer <cit.>. Following the association discussed above for genus 2, we construct a code with this dynamical system in mind. §.§ Square torus billiard The free motion of a point particle in a square torus billiard (STB) is shown in Figure <ref>. According to the theorem by Zemlyakov and Katok <cit.>, this system is non-integrable albeit non-chaotic with zero Lyapunov exponent. The invariant integral surface is topologically equivalent to a sphere with five handles, as shown in <cit.>. The entire trajectory of the free particle in the STB can be folded in four copies using which we can construct the invariant surface (constant energy). This is explained in Figure <ref>. In statistical mechanics, this model is related to Ehrenfest gas where a beam of particles moving freely in a plane gets scattered by square-shaped scatterers (also called wind-tree model <cit.>). A new finite-time exponent was introduced to describe these systems <cit.> as the long-time average vanishes due to rather pathological behaviour of these systems. We shall now employ these features to our advantage in quantum encoding. §.§ Encoding We start with the fundamental domain of an equivalent genus five surfaces, Fig. <ref>, obtained by tessellating a square with a square-shaped scatterer inside it four times and placing the data and the ancilla qubits alternatively on the vertex of external squares as well as on the vertex of scatterers. The data qubits are represented as D (in the circles) and the ancilla qubits are represented as A (in the squares). As in earlier sections, the bold (dashed) lines represent the control-X(Z) operations from the ancilla qubits to the data qubits. The set of stabilizers is P={X_1X_2X_3X_6X_7, X_3X_4X_5X_12X_13, X_1X_6X_8, X_2X_7X_9, X_3X_10X_12, X_3X_11X_13, Z_1Z_3Z_4Z_8Z_10, Z_2Z_3Z_5Z_9Z_11, Z_3Z_6Z_8, Z_3Z_7Z_9, Z_4Z_10Z_12, Z_5Z_11Z_13}. The logical state |0⟩_L is: |0⟩_L= 1/𝒩∏_P_i∈⟨P⟩(I^⊗n+P_i)|0^⊗n⟩ = 1/𝒩(I^⊗13+X_1X_2X_3X_6X_7)(I^⊗13+X_3X_4X_5X_12X_13)(I^⊗13+X_1X_6X_8)(I^⊗13+X_2X_7X_9) (I^⊗13+X_3X_10X_12)(I^⊗13+X_3X_11X_13)(I^⊗13+Z_1Z_3Z_4Z_8Z_10)(I^⊗13+Z_2Z_3Z_5Z_9Z_11) (I^⊗13+Z_3Z_6Z_8)(I^⊗13+Z_3Z_7Z_9)(I^⊗13+Z_4Z_10Z_12)(I^⊗13+Z_5Z_11Z_13)|0^⊗13⟩. We next look for pairs of logical operators that commute with stabilizers and anti-commute pairwise. For this, we have to specify the boundaries. The filling of the plane using the fundamental domain of the equivalent genus five surfaces, forms periodically arranged branch cuts (edges EF and GH in Fig.<ref>), which are considered as the boundaries. Thus we define a path by connecting the data qubit vertex of one scatterer to the data qubit vertex of the corresponding copy with respect to the fundamental domain. The directed paths for the logical X operator are: X_6X_8X_10X_12, X_6X_8X_4X_12, X_7X_9X_11X_13, and X_7X_9X_5X_13. The directed paths for the logical Z operator are: Z_8Z_6Z_7Z_9, Z_8Z_6Z_2Z_9, Z_8Z_1Z_7Z_9, Z_8Z_1Z_2Z_9, and Z_10Z_12Z_13Z_11. From these paths, we found a pair of logical operators {X=X_6X_8X_4X_12, Z=Z_8Z_1Z_7Z_9}. 
The minimum weight of the error E=E_a^† E_b which violates the Knill-Laflamme conditions is 3, thereby yielding a [[13,1,3]] code. To increase the distance of the code, we can stack the unit structure of the code (Fig. <ref>) vertically as shown in Fig. <ref>. The number of required data qubits is n=24 and the number of required ancillary qubits is m=23. The set of stabilizers is P={X_1X_2X_3X_4X_7, X_1X_3X_5, X_2X_4X_6, X_7X_8X_10, X_7X_9X_11, X_7X_10X_11X_12X_13X_14X_15X_18, X_12X_14X_16, X_13X_15X_17, X_18X_19X_21, X_18X_20X_22, X_18X_21X_22X_23X_24, Z_3Z_5Z_7, Z_4Z_6Z_7, Z_1Z_5Z_7Z_8Z_12, Z_2Z_6Z_7Z_9Z_13, Z_8Z_10Z_12, Z_9Z_11Z_13, Z_14Z_16Z_18, Z_15Z_17Z_18, Z_12Z_16Z_18Z_19Z_23, Z_13Z_17Z_18Z_20Z_24, Z_19Z_21Z_23, Z_20Z_22Z_24}. The pair of logical operators is {X=X_8X_12X_16X_14, Z=Z_8Z_10Z_15Z_17}. The minimum weight that violates the Knill-Laflamme conditions for this code is 4. Hence it is a [[24,1,4]] code. Thus, the distance of the code can be increased by stacking fundamental domains on the plane. §.§ Effect of noise Any logical qubit should be robust against dephasing due to external noise. Recently, it has been shown <cit.> that certain observables formed by the code-space population and the logical operators in the code space help determine the dynamical behaviour of logical qubits. We incorporate a time-dependent external fluctuating magnetic field in the z-direction, which acts on the qubits globally, thus leading to global dephasing. To estimate the effect, consider the logical |1⟩_L: |1⟩_L= X|0⟩_L. Let an initial logical quantum state be written as |ψ⟩_L=cosθ/2|0⟩_L+e^ιϕsinθ/2|1⟩_L, where θ and ϕ are real parameters (θ≤π and 0≤ϕ≤ 2π). The evolution of |ψ⟩_L gives the logical Bloch sphere coordinates, X_L, Y_L and Z_L. Assuming that the global dephasing process is driven by a single fluctuating variable B(t) along the z-direction acting on all data qubits, the Hamiltonian representing the effect of noise may be written as H_G(t) =1/2B(t)∑_i=1^13σ_z_i. In the case of local dephasing, the Hamiltonian reads H_L(t)=1/2∑_i=1^13B_i(t) σ_z_i. The randomly fluctuating variable B(t) obeys the Gaussian distribution P(B), which implies that <cit.>: ⟨exp (±ι∫_0^tB(t^')dt^')⟩ =exp[-1/2⟨(∫_0^tB(t^') dt^')^2⟩]= e^-γ t/2, assuming stationarity of the auto-correlation function of the delta-correlated noise, with γ=⟨[B(0)]^2⟩. Following <cit.>, we analyze the effect of noise on the N-qubit system by grouping the physical states by their magnetization, defined as the difference between the number of spins in the state |0⟩, denoted by n^', and the remaining in state |1⟩, N-n^'. The magnetization is m^'=2n^'-N. The logical state |0⟩_L is written as |0⟩_L=∑_m^'∑_l=1^N_m^'b_l^m^'|b⟩_l^m^'. Dephasing noise changes the state |ψ⟩_L to another state |ψ^'⟩, where |ψ^'⟩=exp[-ι∫_0^tH_L,G(t^')dt^']|ψ⟩_L. The density matrix corresponding to the logical qubit is ρ^'=∫|ψ^'⟩⟨ψ^'| P(B)dB. The Bloch coordinates ℛ≡{R_X, R_Y, R_Z} in the new state are obtained by evaluating the expectation values of the logical operators in the evolved state, given by ⟨ℛ⟩=Tr[ρ^'ℒ̅], where ℒ̅≡{X̅, Y̅, Z̅} represents the logical Bloch vectors in the initial state |ψ⟩. For the single unit structure (Fig.
<ref>), in the presence of global dephasing noise, the logical Bloch coordinates turn out to be ⟨R_X⟩ = 1/32 e^-(2γ t+ιϕ)(1+e^-γ t)^4(1+e^2ιϕ)sinθ, ⟨R_Y⟩ = ι/32 e^-(2γ t+ιϕ)(1+e^-γ t)^4(-1+e^2ιϕ)sinθ, and ⟨R_Z⟩ = cosθ. In the absence of noise, i.e., γ=0, the Bloch sphere coordinates in the new state |ψ'⟩ are ⟨R_X⟩=sinθcosϕ, ⟨R_Y⟩=sinθsinϕ, and ⟨R_Z⟩=cosθ, the same as in the initial state |ψ⟩_L. Even in the presence of noise, ⟨R_Z⟩ remains unaffected. Thus the code is significantly robust against dephasing noise.

§ CONCLUDING REMARKS The basic idea underlying surface codes for error detection and correction is to be able to arrange the data and ancillary qubits in a way that X and Z errors can be corrected by making stabilizer measurements through the ancillae. For a scalable architecture, planar structures are desirable. This brings us to the question of tessellation of the plane. While in Kitaev's construction the two-dimensional Ising model is considered and the lattice shape can be arbitrary, it should be noted that this freedom holds only under periodic boundary conditions, in which case the unit shapes could be a square, an equilateral triangle, etc. Here we take the essence of Kitaev's construction, use the correspondence between Lie and reflection groups together with ideas from well-known billiards, and present a novel way to realize architectures of higher genus. The encoding rate, the number of logical qubits per physical qubit, surpasses the value for all surface codes hitherto known. We believe that these results pave the way to a new direction of research in the field of quantum error correction. The codes presented here are not related to tessellations of hyperbolic surfaces. We have constructed the fundamental domain using replicas of the billiard considered. We then stack the domains, thus taking care of all the symmetries of the system. It is at this point that we endow each vertex with a qubit or ancilla. This enables us to write the stabilizers and construct logical operators. This construction respects the commutation and anticommutation relations expected of a consistent and complete definition of a code. The spectrum of the Hamiltonian built from the generators has been studied. The degeneracy of the ground state increases with the number of qubits. For instance, for the genus-two codes [[n, k, d]], the degeneracy of the ground state is 2^k. The code is not topological. However, the ground state of the codes has a high degeneracy, which is useful for encoding. The code distance increases with the size of the code. The main advantage, however, is that the codes have much higher encoding rates. For genus-two codes of large size, the encoding rate tends to one-half. For the genus-five codes, the code distance increases with size whereas the encoding rate does not. Future investigations along these lines would be useful. In classical dynamical systems, tori as invariant surfaces are synonymous with integrability. The surfaces of higher genus correspond to non-integrability, but not chaos, even when the dynamics is nonlinear. Nonlinearity of the dynamics leads to the appearance of special points in the phase space, which have been shown to play an important role in the control of quantum jumps for error correction <cit.>. In quantum computing technology, almost all paradigms are related in an important way to aspects of nonlinearity, be it the nonlinearity of the Josephson junction, the creation of EPR photon pairs from a nonlinear crystal, and so on.
Nonlinear resonances in coupled nonlinear quantum circuits with Josephson junctions have been shown to provide criteria for the protection of qubits <cit.>. Ideas from nonlinear science are expected to contribute to the development of quantum information theory and technology.

Acknowledgements: The authors thank the Referee for their critique of our work. They also thank Rhine Samajdar, Princeton University, for several helpful and stimulating discussions.

Data Availability Statement: No data are associated with the manuscript.

[kvant] S. Tabachnikov (Ed.), Kvant Selecta: Algebra and Analysis, I and II (Universities Press (India) Limited, 2002).
[weissman] M. H. Weissman, An Illustrated Theory of Numbers (American Mathematical Society, 2017).
[aop2014] R. Samajdar and S. R. Jain, Ann. Phys. 351, 1 (2014).
[aop2016] N. Manjunath, R. Samajdar, and S. R. Jain, Ann. Phys. 372, 68 (2016).
[rmp2017] S. R. Jain and R. Samajdar, Rev. Mod. Phys. 89, 045005 (2017).
[nakahara] M. Nakahara, Geometry, Topology, and Physics (Taylor and Francis, London, 2003).
[coxeter] H. S. M. Coxeter, Regular Polytopes (Dover, New York, 1973).
[toffoli] E. Fredkin and T. Toffoli, Int. J. Theor. Phys. 21, 219 (1982).
[krj] K. Kumari, G. Rajpoot, and S. R. Jain, A genus-two surface code, arXiv:2211.12695 [quant-ph].
[weyl1926nachtrag] H. Weyl, Math. Z. 24, 789 (1926).
[cartan1927geometrie] É. Cartan, Ann. Mat. Pura Appl. 4, 209 (1927).
[arnold] V. I. Arnol'd, Mathematical Methods of Classical Mechanics (Springer, Heidelberg, 1978).
[jain1992] S. R. Jain and H. D. Parab, J. Phys. A 25, 6669 (1992).
[Kitaev] A. Kitaev, Ann. Phys. 303, 2 (2003).
[eckhardt1984analytically] B. Eckhardt, J. Ford, and F. Vivaldi, Physica D 13, 339 (1984).
[zemlyakov] A. Zemlyakov and A. B. Katok, Math. Notes 18, 760 (1976).
[richens1981pseudointegrable] P. J. Richens and M. V. Berry, Physica D 2, 495 (1981).
[Gottesman] D. Gottesman, Stabilizer Codes and Quantum Error Correction, Ph.D. thesis (California Institute of Technology, 1997).
[aa] V. I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics (W. A. Benjamin, Amsterdam, 1970).
[bob] J. R. Dorfman, An Introduction to Chaos in Nonequilibrium Statistical Mechanics (Cambridge Univ. Press, Cambridge, 1999).
[manan] M. Jain, Student J. Phys. 5, 55 (2013).
[mcj] S. Moudgalya, S. Chandra, and S. R. Jain, Ann. Phys. 361, 82 (2015).
[pal] A. K. Pal, P. Schindler, A. Erhard, Á. Rivas, M. A. Martin-Delgado, R. Blatt, T. Monz, and M. P. Müller, Quantum 6, 632 (2022).
[krjj] K. Kumari, G. Rajpoot, S. Joshi, and S. R. Jain, Ann. Phys. 450, 169222 (2023).
[ssj] R. K. Saini, R. Sehgal, and S. R. Jain, Eur. Phys. J. Plus 137, 356 (2022).

We start with the fundamental domain of the genus-five surface, obtained by reflecting a square with a square-shaped scatterer inside it four times, and place the data and ancilla qubits alternately on the vertices of the external squares as well as on the vertices of the square-shaped scatterers. The data qubits are represented as D (in circles) and the ancilla qubits as A (in squares). The bold (dashed) lines represent the control-X (Z) operations from the ancilla qubits to the data qubits. The set of stabilizers is P={X_1X_2X_3X_6X_7, X_3X_4X_5X_12X_13, X_1X_6X_8, X_2X_7X_9, X_4X_10X_12, X_5X_11X_13, Z_1Z_3Z_4Z_8Z_10, Z_2Z_3Z_5Z_9Z_11, Z_3Z_6Z_8, Z_3Z_7Z_9, Z_3Z_10Z_12, Z_3Z_11Z_13}.
The logical state |0⟩_L is: |0⟩_L= 1/𝒩∏_P_i∈⟨ P⟩(I^⊗ n+P_i)|0^⊗ n⟩ = 1/𝒩(I^⊗ 13+X_1X_2X_3X_6X_7)(I^⊗ 13+X_3X_4X_5X_12X_13)(I^⊗ 13+X_1X_6X_8)(I^⊗ 13+X_2X_7X_9)(I^⊗ 13+X_4X_10X_12)(I^⊗ 13+X_5X_11X_13)(I^⊗ 13+Z_1Z_3Z_4Z_8Z_10)(I^⊗ 13+Z_2Z_3Z_5Z_9Z_11)(I^⊗ 13+Z_3Z_6Z_8)(I^⊗ 13+Z_3Z_7Z_9)(I^⊗ 13+Z_3Z_10Z_12)(I^⊗ 13+Z_3Z_11Z_13)|0^⊗ 13⟩. We next look for pairs of logical operators that commute with the stabilizers and anti-commute pairwise. For this, we have to specify the boundaries. The filling of the plane using the fundamental domain of the genus-five surface forms periodically arranged branch cuts (edges EF and GH in Fig. <ref>), which are taken as the boundaries. Thus we define a path by connecting the data qubit vertex of one square scatterer to the data qubit vertex of the corresponding copy with respect to the fundamental domain. The directed paths for the logical Z operator are: Z_8Z_6Z_7Z_9, Z_8Z_6Z_2Z_9, Z_8Z_1Z_7Z_9, Z_8Z_1Z_2Z_9, Z_10Z_12Z_13Z_11, Z_10Z_12Z_5Z_11, Z_10Z_4Z_13Z_11, and Z_10Z_4Z_5Z_11; all of these operators commute with all the stabilizers. The directed paths for the logical X operator are: X_6X_8X_10X_12, X_6X_3X_12, X_6X_3X_11, X_6X_3X_13, X_6X_3X_7, X_6X_3X_9X_11X_13, X_6X_8X_3X_11X_13, X_6X_8X_3X_9X_7, X_7X_9X_11X_13, X_7X_3X_13, X_7X_3X_11, X_7X_3X_10, X_7X_3X_8X_10X_12, and X_7X_9X_3X_10X_12. Among these, only the two operators X_6X_8X_10X_12 and X_7X_9X_11X_13 commute with all the stabilizers. Thus we found a pair of logical operators {X=X_6X_8X_10X_12, Z=Z_8Z_1Z_7Z_9}. The minimum weight of the error E=E_a^†E_b which violates the Knill-Laflamme conditions is 3, so this is a [[13,1,3]] code. Let S be the set of generators of the stabilizer group. Then, for an n-qubit code encoding k logical qubits, we can define an (n-k)-bit binary number, the error syndrome function f_M, for the code. Let f_M:𝒢→ℤ_2 be such that f_M_i(E)=0 if [M_i,E]=0 and f_M_i(E)=1 if {M_i,E}=0, with f_M(E)=f_M_1(E)f_M_2(E)…f_M_n-k(E). If all the values of f_M are different, the code is nondegenerate. For the single unit of the double-toric [[6,2,2]] code, the stabilizer generators are M={X_1 X_2 X_3 X_4, X_3 X_4 X_5 X_6, Z_1 Z_3 Z_5, Z_2 Z_4 Z_6}. The function f_M(E) for the error set E={X_1,X_2,…, X_6,Z_1,Z_2,…,Z_6} is shown in Table <ref>. Here, f_M is a four-bit binary function which is not different for every error in E, thus making it a degenerate code. By contrast, the [[13,1,3]] surface code is a nondegenerate code, where f_M is a twelve-bit binary number which is different for each error in the error set E.
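The commutation checks and the syndrome table entering this degeneracy argument are easy to reproduce numerically. The following Python sketch is illustrative only and uses the standard binary symplectic representation of Pauli strings (not any code from the paper): it verifies that the four generators of the single [[6,2,2]] double-toric unit commute pairwise and tabulates f_M for all single-qubit X and Z errors; the repeated syndromes (e.g. for X_1 and X_3) reproduce the degeneracy noted above.

import numpy as np
from itertools import product

N = 6  # data qubits of the single double-toric unit

def pauli(xs=(), zs=()):
    """Binary symplectic vector (x|z) of an N-qubit Pauli string."""
    p = np.zeros(2 * N, dtype=int)
    for q in xs:
        p[q - 1] = 1
    for q in zs:
        p[N + q - 1] = 1
    return p

def commute(a, b):
    """True if two Pauli strings commute (symplectic product vanishes mod 2)."""
    return (a[:N] @ b[N:] + a[N:] @ b[:N]) % 2 == 0

generators = [pauli(xs=(1, 2, 3, 4)), pauli(xs=(3, 4, 5, 6)),
              pauli(zs=(1, 3, 5)), pauli(zs=(2, 4, 6))]
assert all(commute(a, b) for a, b in product(generators, repeat=2))

errors = {f"X{q}": pauli(xs=(q,)) for q in range(1, N + 1)}
errors.update({f"Z{q}": pauli(zs=(q,)) for q in range(1, N + 1)})
syndromes = {name: tuple(0 if commute(M, E) else 1 for M in generators)
             for name, E in errors.items()}
for name, s in syndromes.items():
    print(name, s)
# repeated syndromes (e.g. X1 and X3) make the [[6,2,2]] code degenerate
print("degenerate:", len(set(syndromes.values())) < len(syndromes))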
http://arxiv.org/abs/2307.04669v1
20230710161213
Reversal of the skyrmion topological deflection across ferrimagnetic angular momentum compensation
[ "L. Berges", "R. Weil", "A. Mougin", "J. Sampaio" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.mes-hall" ]
Reversal of the skyrmion topological deflection across ferrimagnetic angular momentum compensation

L. Berges, R. Weil, A. Mougin, and J. Sampaio
Université Paris-Saclay, CNRS, Laboratoire de Physique des Solides, 91405 Orsay, France
August 12, 2023

Due to their non-trivial topology, skyrmions describe deflected trajectories, which hinders their straight propagation in nanotracks and can lead to their annihilation at the track edges. This deflection is caused by a gyrotropic force proportional to the topological charge and the angular momentum density of the host film. In this article we present clear evidence of the reversal of the topological deflection angle of skyrmions with the sign of the angular momentum density. We measured the skyrmion trajectories across the angular momentum compensation temperature in GdCo thin films, a rare-earth/transition-metal ferrimagnetic alloy. The sample composition was used to engineer the skyrmion stability below and above this compensation temperature. A refined comparison of their dynamical properties evidenced a reversal of the skyrmion deflection angle with the total angular momentum density. This reversal is a clear demonstration of the possibility of tuning the skyrmion deflection angle in ferrimagnetic materials and paves the way for deflection-free skyrmion devices.

The discovery of efficient driving of chiral magnetic textures by current-induced spin-orbit torques <cit.> has opened the possibility of energy-efficient and high-performance spintronic devices <cit.>, with applications in digital <cit.> or neuromorphic <cit.> computation, ultra-dense data storage <cit.>, and signal processing <cit.>. Chiral textures are stable in magnetic thin films with a significant Dzyaloshinskii-Moriya interaction (DMI), typically induced with an adjacent heavy-metal layer (e.g. Pt/Co). Additionally, the heavy-metal layer, through the spin Hall effect, converts an applied charge current into a spin current that drives the magnetic textures by spin-orbit torque (SOT). Very promising mobilities of chiral magnetic domain walls (DWs) have been observed <cit.>, with nonetheless a saturating mobility at large current densities <cit.>. Another archetypal chiral magnetic texture is the skyrmion, a small (down to a few tens of nm) radially symmetric whirling texture. Although highly mobile <cit.>, skyrmions have a non-trivial topology that induces a transverse deflection of their trajectory, a phenomenon known as the gyrotropic deflection or skyrmion Hall effect <cit.>. This reduces the velocity in the forward direction and can lead to the annihilation of the skyrmion at the edges of the hosting magnetic track, and is thus highly undesired. The gyrotropic deflection can be mitigated in magnetic systems with anti-parallel lattices <cit.>, such as antiferromagnets or ferrimagnets, where the overall angular momentum density of the double skyrmion can be suppressed. In particular, ferrimagnetic alloys of the rare-earth/transition-metal (RETM) family, where the RE and TM moments are antiferromagnetically coupled <cit.>, are a promising example. In a previous work by our team, it was shown that skyrmions in GdCo thin films attained the high-mobility linear regime beyond pinning, and that their velocity and deflection followed the predictions of the Thiele model <cit.>.
However, there is still only little experimental evidence of the advantages of these systems <cit.>, especially regarding the control of the gyrotropic deflection. In RETMs, The balance between the moments of different nature can be changed with alloy composition or temperature which leads to two points of interest for skyrmions. At the first one, the magnetic compensation temperature , the magnetization of the two sub-lattices are equal, the total magnetization (M_s = M_ TM - M_ RE) vanishes, and the size of the skyrmions is minimal due to the absence of dipolar fields <cit.>. As RE and TM have different gyromagnetic ratios (γ_ RE and γ_ TM), the total angular momentum density (L_s = M_ TM/γ_ TM - M_ RE/γ_ RE) will vanish at a different temperature, the angular compensation temperature . Both and depend on composition. The reduction and reversal of the total angular momentum, which is the root cause of magnetic precession, leads to interesting dynamical properties near , such as e.g. the reversal of the deflection angle of chiral domain wall fingers <cit.> or the precessionless motion of magnetic domains walls <cit.>. However, the reversal of the skyrmion gyrotropic deflection at has not yet been demonstrated. In this letter, we measure the velocity and deflection angle of skyrmions driven by spin-orbit torques in two Pt/GdCo/Ta films of different composition, above and below their . We show the dependence of the deflection with angular moment density, and in particular its reversal by changing sample composition or temperature. A quantitative analysis with a rigid texture model based on the Thiele equation is used to characterize the role of the material parameters on the skyrmion dynamics. The skyrmion dynamics were measured in two samples. Sample 1 is composed of a film of (Si/SiOx(100))/ Ta(1)/ Pt(5)/ Gd_0.32Co_0.68(5)/ Ta(3) and sample 2 of (Si/SiOx(300))/ Ta(3)/ Pt(5)/ Gd_0.3Co_0.7(8)/ Ta(5)/ Pt(1) (thicknesses in nm) as presented in the insets in Fig. <ref>a. The samples were patterned into 10 μm- or 20 μm-wide tracks in order to apply current pulses (Fig.<ref>b). The magnetization as a function of temperature was measured by SQUID magnetometry on unpatterned samples and is presented in Fig. <ref>(a). Sample 1 presents a around 360 K whereas sample 2 presents a around 200 K. Therefore, at room temperature, sample 1 is RE-dominated whereas sample 2 is TM-dominated, where RE or TM domination refers to which sublattice has the higher magnetic moment and therefore aligns with an external magnetic field. It is useful to use the effective ferromagnet model of ferrimagnets <cit.>, which assumes a signed magnetization and angular momentum density that are positive, by convention, when TM-dominated: M_s=|M_ Co|-M_ Gd| and L_s=|L_ Co|-L_ Gd|. The exact determination of the is not straightforward. It was therefore deduced for both samples, using the mean field model described in ref. <cit.>. The calculated L_S(T) are shown by the dashed lines in Fig. <ref>(a), and yield = 416 K for sample 1 and = 260 K for sample 2. These results are consistent with the empirical law described in ref. <cit.> which gives for GdCo between 40 to 60 K above the . The magnetic textures are observed in each sample as a function of temperature by magneto-optical-Kerr-effect (MOKE) microscopy. A typical differential MOKE image is presented in Fig. <ref>(c). Skyrmions are observed in the temperature ranges indicated by the color bands in Fig. <ref>(a). 
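The bookkeeping behind the extraction of the angular compensation point can be summarized in a few lines: the signed angular momentum density L_s(T)=M_Co/γ_Co − M_Gd/γ_Gd changes sign at the angular compensation temperature, somewhat above the magnetic compensation because the Co and Gd gyromagnetic ratios differ. The Python sketch below illustrates this with placeholder sublattice magnetization curves and typical g-factors (g_Co≈2.2, g_Gd≈2.0, assumed here); in practice M_Co(T) and M_Gd(T) would come from the mean-field model fitted to the SQUID data of Fig. <ref>(a).

import numpy as np

mu_B, hbar = 9.274e-24, 1.055e-34
g_Co, g_Gd = 2.2, 2.0                       # typical g-factors (assumed values)
gamma_Co, gamma_Gd = g_Co * mu_B / hbar, g_Gd * mu_B / hbar

# Placeholder sublattice magnetizations (A/m) versus temperature (K); in
# practice these would come from the mean-field model fitted to the SQUID data.
T = np.linspace(100.0, 450.0, 701)
M_Co = 5.0e5 * (1.0 - (T / 600.0) ** 1.5)   # illustrative Co branch
M_Gd = 8.0e5 * (1.0 - (T / 600.0) ** 0.6)   # illustrative Gd branch (faster decay)

M_s = M_Co - M_Gd                           # signed effective magnetization
L_s = M_Co / gamma_Co - M_Gd / gamma_Gd     # signed angular momentum density

T_M = T[np.argmin(np.abs(M_s))]             # magnetic compensation temperature
T_A = T[np.argmin(np.abs(L_s))]             # angular compensation temperature
# with g_Co > g_Gd the angular compensation lies above the magnetic one
print(f"T_M ~ {T_M:.0f} K, T_A ~ {T_A:.0f} K")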
In these ranges, starting from a saturated state and lowering the applied external magnetic field, skyrmions with a core of opposing magnetization will naturally nucleate at small enough field (-30 to 0 mT for an initial saturation at large negative magnetic field). Skyrmions can also be nucleated by applying electrical pulses. A typical phase diagram (versus temperature and field) of these samples is presented in a previous work <cit.>. In the studied temperature range, sample 1 only presents one skyrmion stability range around 290 K, whereas sample 2 presents two skyrmion stability ranges, one around 90 K and a second around 350 K. In sample 1, the skyrmion stability range is below (and ), where the film L_S<0, and so these are dubbed RE-dominated skyrmions. In sample 2, the skyrmions at 90 K are RE-dominated as well, while the skyrmions at 350 K are TM-dominated (above and with therefore L_S>0). Note that in the MOKE images, the signal is proportional to the Co sublattice, independently of the temperature <cit.>. Thus, skyrmions with a core Co moment pointing along the same direction will appear with the same color (black for -z with our experimental conditions), whether they are RE- or TM-dominated (Fig. <ref>c). Once skyrmions are nucleated, electrical pulses of 3 to 10 ns are applied and MOKE images are acquired in order to study the skyrmions dynamics. The skyrmion motion is tracked over several pulses using a partially-automated process described in ref. <cit.>, and their velocity and deflection are calculated considering the pulse duration and the traveled distance. Typical images of skyrmions displacements are shown in Fig. <ref>, in the case of sample 2 at low temperature and L_s<0 (a) and high temperature and L_s>0 (b). The average skyrmion diameter was similar for the three studied cases, 0.86±0.28 μm. An example of the observed skyrmion dynamics in sample 1 is presented in Fig. <ref>(c) with a superposition of successive MOKE images where the skyrmion color refers to the MOKE image number. The skyrmion deflection () and velocity (v) versus applied current density (j) are presented in Fig. <ref>(a,b) for the three cases: RE-dominated skyrmions in sample 1, and RE- and TM-dominated skyrmions in sample 2. Videos of successive displacements in both samples are shown in S.I. In the three cases, the velocity shows a clear depinning transition above a current threshold (different for each case), and then follows a linear regime. The mobility in the linear regime (i.e. Δ v/Δ j) is much higher in sample 2 than in sample 1. In sample 2, the mobility of TM-dominated skyrmions is slightly higher than RE-dominated skyrmions. These differences in mobility will be discussed later. The linear regime extends up to 190 m/s in sample 1 and to 450 m/s in sample 2. At highest j, skyrmions are nucleated by the pulse, which hinders the tracking analysis and thus limits the maximum j that can be examined. In the linear regime, the deflection angle is approximately constant with the current density, and its absolute value is about 40^∘ for the three cases. The deflection angle is clearly reversed between the TM- and RE-dominated skyrmions: it is positive for TM-dominated skyrmions (in sample 2) and negative for RE-dominated skyrmions (in both samples). The deflection also reverses with core polarity, i.e. with the Co moment pointing along +z (which appear as white skyrmions in the MOKE images; see SI). 
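The conversion from tracked positions to the velocity and deflection values plotted in Fig. <ref> amounts to simple vector bookkeeping. The Python sketch below is illustrative only (the positions, pulse length and current direction are made-up inputs, not data from the experiment): for each skyrmion it computes the speed from the displacement over one pulse and the deflection angle as the angle between the displacement and the current direction.

import numpy as np

def speed_and_deflection(r_before, r_after, pulse_s, j_dir=(1.0, 0.0)):
    """Per-skyrmion speed (m/s) and deflection angle (deg) of the displacement
    with respect to the current direction j_dir, for a single pulse."""
    d = np.asarray(r_after) - np.asarray(r_before)   # displacements (m)
    j_hat = np.asarray(j_dir) / np.linalg.norm(j_dir)
    speed = np.linalg.norm(d, axis=1) / pulse_s
    along = d @ j_hat
    across = d @ np.array([-j_hat[1], j_hat[0]])     # +90 deg from j_hat
    return speed, np.degrees(np.arctan2(across, along))

# Made-up positions (m) before/after one 10 ns pulse, for two tracked skyrmions
r0 = np.array([[0.0, 0.0], [2.0e-6, 1.0e-6]])
r1 = np.array([[2.3e-6, -1.9e-6], [4.2e-6, -0.8e-6]])
v, theta = speed_and_deflection(r0, r1, pulse_s=10e-9)
print("v (m/s):", v.round(0), "deflection (deg):", theta.round(1))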
The in the pining regime is measured to be larger than in the flow regime in sample 1, whereas it is lower in sample 2. This is perhaps a bias induced by the different nucleation protocol used in these measurements. For sample 1, skyrmions were only nucleated by current pulses, mostly near one of the edges, whereas for sample 2 they were first nucleated homogeneously by magnetic field. As skyrmions can be annihilated at the edges, only the skyrmions that deviate towards the center are accounted for, which biases the measurement of the mean . The skyrmion dynamics in the linear regime can be quantitatively analyzed using a rigid-texture formalism based on the Thiele equation <cit.>. It expresses the equilibrium of all forces applied on the magnetic texture that reads in our case as: F⃗_G + F⃗_ SOT + α Dv⃗ = 0⃗, where F⃗_ SOT is the SOT force, F⃗_G the gyrotropic force and α D is, in general, a tensor describing the dissipation. This formalism can be applied to skyrmions in double-lattice systems as presented in refs. <cit.>. These forces are depicted in Fig. <ref> c), on a black dot representing a skyrmion in the case of L_s<0. The norm of the skyrmion velocity |v| and its deflection can be deduced to be: |v|=v_0/√(1+ρ^2) =arctan(ρ) In the limit of skyrmions larger than the domain wall width parameter Δ, the parameters v_0 and ρ are: v_0 ≈ -πΔ/2 L_αħ j θ_ SHE/ 2 e t ρ ≈ Δ/2π RL_S/L_α n where ħ is the Planck constant, e the fundamental charge, t the magnetic film thickness, θ_ SHE is the effective SHE angle in the Pt layer, L_α=α_Co|L_s^Co|+α_Gd|L_s^Gd| the energy dissipation rate, n= p_ Co 4π = ± 4π the topological charge of the skyrmion, R its radius, and p_ Co=±1 is the orientation along z of the core Co moment. Because L_α is always positive, the sign of the deflection is given by the sign of the product of L_s (positive for T>) and p_Co. This sign is presented in Table <ref> as a function of temperature for p_Co =-1, which is the case shown here (black skyrmions). The parameters needed for the model were measured on both samples (see Table <ref>). M_s(T) were measured by SQUID magnetometry (Fig. <ref>a), and L_s(T) was deduced from a mean-field model as described in ref. <cit.>. The dissipation rate L_α (7.4 and 4.3× 10^-7 kgm^-1s^-1 for sample 1 at 290 K and 2 at 350 K, respectively) is calculated using the gyromagnetic ratio γ and the effective Gilbert dissipation parameter α. The L_α at 90 K in sample 2 (5.2× 10^-7 kgm^-1s^-1) was estimated using the calculated sub-lattice angular momenta from the mean-field model and assuming constant sub-lattice Gilbert damping parameters. The domain wall width parameter Δ is calculated from K_u, the exchange stiffness A, and M_S. The skyrmion diameter was taken from the average diameter observed in the images, which is very similar for the three studied cases. These measured parameters allowed to fully constrain the model and obtain curves for the velocity and deflection angle with no fitting parameters, shown by the dashed lines in Fig. <ref>, which reproduce accurately the experimental data. The sign of the deflection angle observed in the experiments agrees with Eq (<ref>b) taking into account the L_S of the film (L_S<0 for RE-dominated skyrmions and L_S>0 for TM-dominated skyrmions). The skyrmion mobility, given by the slope of the linear model shown in Fig. <ref>(b), is much higher in sample 2 than in sample 1 (1.80 at 350 K vs 0.6 m·s^-1/GA·m^-2, respectively). 
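As a concrete illustration of these rigid-texture expressions, and before the origin of this mobility difference is discussed, the short Python sketch below evaluates the speed and deflection for one set of parameters. The grouping of factors follows the expressions for v_0 and ρ as printed above; the values of Δ and L_S are placeholders (they are not quoted numerically in the text), while t, R, θ_SHE and L_α are loosely inspired by sample 2 at 350 K, so the output should be read as an order-of-magnitude illustration only.

import numpy as np

hbar, e = 1.055e-34, 1.602e-19

def thiele_speed_deflection(j, Delta, R, t, theta_SHE, L_alpha, L_S, p_Co=-1):
    """Rigid-texture speed |v| (m/s) and deflection (deg), following the
    expressions for v_0 and rho as printed in the text."""
    n = 4.0 * np.pi * p_Co                    # topological charge n = p_Co * 4*pi
    v0 = -(np.pi * Delta / 2.0) * hbar * j * theta_SHE / (2.0 * e * t * L_alpha)
    rho = (Delta / (2.0 * np.pi * R)) * (L_S / L_alpha) * n
    return np.abs(v0) / np.sqrt(1.0 + rho ** 2), np.degrees(np.arctan(rho))

# Illustrative inputs loosely inspired by sample 2 at 350 K; Delta and L_S are
# placeholders (not quoted numerically in the text).
speed, deflection = thiele_speed_deflection(
    j=3e11, Delta=20e-9, R=0.43e-6, t=8e-9,
    theta_SHE=0.09, L_alpha=4.3e-7, L_S=2e-7)
print(f"|v| ~ {speed:.0f} m/s, deflection ~ {deflection:.0f} deg")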
This difference in mobility cannot be ascribed to a difference in skyrmion diameter (see eq. <ref>), as the two conditions present very similar average sizes (0.85 ± 0.28 μm and 0.86 ± 0.28 μm, respectively). This large difference has multiple origins. First, the L_α of sample 2 is lower (L_α (sample 1; 290K)/L_α (sample 2; 350 K)≈ 1.5). The second major cause is the difference of the film stacks, in particular the thickness of the Ta capping layer. The measured θ_ SHE is more than 2 times higher in sample 2 (0.09) than in sample 1 (0.04). This can be expected to be due a better passivation of the Ta layer in sample 2 which can therefore contribute more to the SOT than the thinner (3 nm) Ta cap of the sample 1 which is probably fully oxidized. Finally, comparing the skyrmion velocity curves for the two conditions in sample 2 (at 90 and 350 K), it can be seen that both the depinning current and the mobility in the linear regime are significantly different. The depinning current is higher at 90 K, which can be attributed by the thermal nature of the depinning process <cit.>. The difference in mobility is not due to a difference in skyrmion diameter (which again is very similar in all three studied conditions). It can be expected that several magnetic parameters vary between 90 and 350 K, but the experimental mobility can be understood by considering only the variation of L_α (L_α (90  K)/L_α (350 K)≈ 1.2, assuming with L_α (90  K) calculated assuming constant sublattice Gilbert damping parameters). This result and the Thiele model suggest that L_α is a more pertinent parameter than α to characterize the role of dissipation in the skyrmion mobility. Interestingly, L_α can more be more easily optimized than α to increase mobility, by increasing the sample temperature (as was the case here) or by decreasing the material's Curie temperature (all other parameters remaining equal). A recent work <cit.> on skyrmions measured at relatively high temperature also seems to point toward such an effect which seems to be an interesting path to increase skyrmion mobility. In conclusion, we observed the propagation of skyrmions in the flow regime, i.e., beyond the effects of pinning in two GdCo samples, below and above the angular compensation temperature. The observed mobilities were very large, with a velocity up to 450 m/s. The skyrmion dynamics was studied in three cases, two in RE-dominated films and one in a TM-dominated film. The deflection angle was constant with driving current and its sign was opposite between RE- and TM-dominated cases, both when comparing two samples of different composition and when comparing two temperatures (above and below ) in the same sample. This confirms the modulation of deflection angle with L_S. These experiments demonstrate the effects of the angular momentum density L_S of the host material on the deflection of skyrmions. They show that can be reversed in GdCo ferrimagnetic thin films across their angular compensations, either by changing the alloy stoichiometry or simply its temperature. In particular, the reversal of sign of across compensation strongly supports that should be zero at angular moment compensation. The engineering of magnetic parameters that was done to produce the two presented skyrmion-hosting samples could be repeated rather straightforwardly to engineer a film with stable skyrmions at with no deflection. The authors thank Stanislas Rohart for fruitful discussions, and André Thiaville for the study of the sample properties by BLS. 
This work was supported by a public grant overseen by the French National Research Agency (ANR) as part of the “Investissements d’Avenir” program (Labex NanoSaclay, reference: ANR-10-LABX-0035, project SPICY). Magnetometry and Anomalous Hall effect measurements were performed at the LPS Physical Measurements Platform. § SUPPLEMENTARY INFORMATION Videos of successive MOKE images showing the skyrmion motion can be found in .... for the three temperature regions discussed in the text. Motion of skyrmions of opposite polarity (i.e., p_ Co=+1; white in the MOKE images) is also shown for sample 1. § DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request. * 38 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Moore et al.(2008)Moore, Miron, Gaudin, Serret, Auffret, Rodmacq, Schuhl, Pizzini, Vogel, and Bonfim]Moore2008 author author T. A. Moore, author I. M. Miron, author G. Gaudin, author G. Serret, author S. Auffret, author B. Rodmacq, author A. Schuhl, author S. Pizzini, author J. Vogel, and author M. Bonfim, title title High domain wall velocities induced by current in ultrathin Pt/Co/AlOx wires with perpendicular magnetic anisotropy, 10.1063/1.3062855 journal journal Applied Physics Letters volume 93, pages 262504 (year 2008)NoStop [Thiaville et al.(2012)Thiaville, Rohart, Jué, Cros, and Fert]Thiaville2012a author author A. Thiaville, author S. Rohart, author É. Jué, author V. Cros, and author A. Fert, title title Dynamics of Dzyaloshinskii domain walls in ultrathin magnetic films, 10.1209/0295-5075/100/57002 journal journal EPL (Europhysics Letters) volume 100, pages 57002 (year 2012)NoStop [Manchon et al.(2019)Manchon,  ŽŽelezný, Miron, Jungwirth, Sinova, Thiaville, Garello, and Gambardella]Manchon2019 author author A. Manchon, author J.  ŽŽelezný, author I. M. Miron, author T. Jungwirth, author J. Sinova, author A. Thiaville, author K. Garello, and author P. Gambardella, title title Current-induced spin-orbit torques in ferromagnetic and antiferromagnetic systems, 10.1103/RevModPhys.91.035004 journal journal Rev. Mod. Phys. volume 91, pages 035004 (year 2019)NoStop [Fert, Reyren, and Cros(2017)]Fert2017b author author A. Fert, author N. Reyren, and author V. Cros, title title Magnetic skyrmions: advances in physics and potential applications, 10.1038/natrevmats.2017.31 journal journal Nature Reviews Materials volume 2, pages 17031 (year 2017)NoStop [Sampaio et al.(2013)Sampaio, Cros, Rohart, Thiaville, and Fert]Sampaio2013 author author J. Sampaio, author V. Cros, author S. Rohart, author A. Thiaville, and author A. Fert, title title Nucleation, stability and current-induced motion of isolated magnetic skyrmions in nanostructures, 10.1038/nnano.2013.210 journal journal Nature Nanotechnology volume 8, pages 839–844 (year 2013)NoStop [Zhang et al.(2015)Zhang, Baker, Komineas, and Hesjedal]Zhang2015a author author S. Zhang, author A. A. Baker, author S. Komineas, and author T. Hesjedal, title title Topological computation based on direct magnetic logic communication, 10.1038/srep15773 journal journal Scientific Reports volume 5, pages 15773 (year 2015)NoStop [Huang et al.(2017)Huang, Kang, Zhang, Zhou, and Zhao]Huang2017 author author Y. Huang, author W. Kang, author X. Zhang, author Y. Zhou, and author W. 
http://arxiv.org/abs/2307.05422v2
20230711163943
Differential Analysis of Triggers and Benign Features for Black-Box DNN Backdoor Detection
[ "Hao Fu", "Prashanth Krishnamurthy", "Siddharth Garg", "Farshad Khorrami" ]
cs.CR
[ "cs.CR", "cs.LG" ]
Differential Analysis of Triggers and Benign Features for Black-Box DNN Backdoor Detection Hao Fu, Prashanth Krishnamurthy, Member, IEEE, Siddharth Garg, Member, IEEE, Farshad Khorrami, Senior Member, IEEE Department of Electrical and Computer Engineering, New York University, Brooklyn, NY, 11201, USA. E-mail: {hf881, prashanth.krishnamurthy, sg175, khorrami}@nyu.edu. This work was supported in part by the Army Research Office under grant number W911NF-21-1-0155 and by the New York University Abu Dhabi (NYUAD) Center for Artificial Intelligence and Robotics, funded by Tamkeen under the NYUAD Research Institute Award CG010. Code is available at <https://github.com/fu1001hao/Five-Metrics-Detector.git>. August 12, 2023 This paper proposes a data-efficient detection method for deep neural networks against backdoor attacks under a black-box scenario. The proposed approach is motivated by the intuition that features corresponding to triggers have a higher influence in determining the backdoored network output than any other benign features. To quantitatively measure the effects of triggers and benign features on determining the backdoored network output, we introduce five metrics. To calculate the five-metric values for a given input, we first generate several synthetic samples by injecting the input's partial contents into clean validation samples. Then, the five metrics are computed by using the output labels of the corresponding synthetic samples. One contribution of this work is the use of a tiny clean validation dataset. Having the computed five metrics, five novelty detectors are trained from the validation dataset. A meta novelty detector fuses the output of the five trained novelty detectors to generate a meta confidence score. During online testing, our method determines whether online samples are poisoned by assessing their meta confidence scores output by the meta novelty detector. We show the efficacy of our methodology on a broad range of backdoor attacks, including ablation studies and comparisons to existing approaches. Our methodology is promising since the proposed five metrics quantify the inherent differences between clean and poisoned samples. Additionally, our detection method can be incrementally improved by appending more metrics that may be proposed to address future advanced attacks. Neural Network Backdoors, Black-Box Detection, Small Validation Dataset, Hand-Crafted Features § INTRODUCTION Deep neural networks (DNNs) should be secure and reliable since they are utilized in many applications <cit.>. Therefore, studying the security problems of DNNs is an important research topic <cit.>.
This paper considers defending against backdoor attacks in neural networks for classification tasks under a black-box scenario, in which only the network output is accessible and other information (e.g., model weights and intermediate layer outputs) is not available. Backdoor attacks may appear in models trained by a third party. In backdoor attacks, the attacker injects triggers into the network during the training phase. During the testing phase, the backdoored neural network outputs the attacker-chosen labels whenever the corresponding triggers appear. This paper proposes a novel defense against backdoor attacks based on a differential analysis of the behaviors of backdoor triggers and benign features used by the network for classification. Although there is substantial literature on backdoor attacks and defenses, effective detection methods are scarce. Detecting backdoors is challenging due to the asymmetric-information advantage available to the adversary (i.e., the attacker has complete control of the trigger, whereas the defender has little information about the trigger). Among the existing defenses, many make assumptions about the trigger, such as assuming that the trigger is small or non-adaptive. However, the assumptions may not be valid in real-world situations because a clever attacker can design any trigger. Another set of existing literature assumes that the defender has access to a contaminated dataset that contains trigger information. In real-world cases, accessing a contaminated dataset may not be feasible for the defender. Some studies do not have assumptions on triggers or a need for contaminated datasets. However, they require a large amount of clean data. Collecting a large amount of clean data may not always be affordable to the defender. Many works focus on detecting if a neural network is backdoored and should be abandoned. However, this paper is interested in designing a detection algorithm with limited clean validation samples to reject poisoned inputs (i.e., the inputs with triggers) so that the backdoored model can still be used without causing considerable loss. Therefore, we propose a detection method that has no assumptions on triggers, does not require the availability of a contaminated dataset, and only requires a tiny clean validation dataset. Our method is inspired by the definition of the backdoor attack: the attacker controls the neural network output by overriding the original logic of the network when he/she presents triggers. This overriding behavior of the neural network will be exposed only for poisoned inputs, whereas for clean inputs, the neural network behaves normally. Therefore, we claim that the trigger has a higher influence than the benign features in determining the backdoored network output. Based on this difference, we propose five metrics (i.e., robustness r, weakness w, sensitivity s, inverse sensitivity is, and noise invariance Inv) such that a function exists to separate the clean and poisoned inputs regarding their five-metric values. To calculate the five-metric values for a given input, our detection algorithm generates a few synthetic images by injecting the input's partial contents into samples from a tiny validation dataset and utilizes the output labels corresponding to the synthetic images. Having the computed five metrics, five novelty detectors are trained from the tiny validation dataset. 
The trained novelty detectors will output high (resp., low) confidence scores for clean (resp., poisoned) inputs because a clean (resp., poisoned) input's metric values will be similar to (resp., different from) the clean validation samples' metric values (i.e., the training data for the novelty detectors) with a high probability. Thereafter, a meta novelty detector fuses the confidence scores by the five trained novelty detectors. During online testing, our method determines if a new input is poisoned and should be rejected via assessing its meta confidence score output by the meta novelty detector. Besides the backdoor study, the proposed five metrics also contribute to solving two important problems for hand-crafted features-based anomaly detection <cit.>: 1) designing an effective descriptor, and 2) deciding suitable features for specific anomaly detection situations <cit.>. Our approach is novel in that it contributes effective hand-crafted features for backdoor detection (which can be regarded as an anomaly detection) using the proposed five novel metrics and achieves high accuracy under a black-box scenario with scarce available clean samples, whereas existing hand-crafted features-based anomaly detection methods are not designed for backdoor detection purposes and hence are not as effective as ours. Overall, the contributions of this paper include: * We utilize the conceptual ways that triggers can be injected into a backdoored network in order to achieve an unsupervised approach for backdoor detection; * We propose five metrics to measure the behavior of the network for different scenarios; * We propose a data-efficient online detection algorithm using the five-metric values as inputs to detect poisoned inputs for neural networks under a black-box scenario; * We evaluate and compare the efficacy of our detection approach with other methods on various backdoor attacks. § RELATED WORK Backdoor attack was first proposed by <cit.> and <cit.>. Several types of backdooring triggers have been studied including triggers with semantic real-world meaning <cit.>, hidden invisible triggers <cit.>, smooth triggers <cit.>, and reflection triggers <cit.>. Backdoor attacks have been devised in several contexts including federated learning <cit.>, transfer learning <cit.>, graph networks <cit.>, text-classification <cit.>, and out-sourced cloud environments <cit.>. Several scenarios/variants of backdoor attacks have been considered including all-label attacks <cit.>, clean label attacks <cit.>, and defense-aware attacks <cit.>. The backdooring mechanism has also been applied for benign/beneficial purposes, such as watermarking for patent protection <cit.>. Backdoor attack defenses can be classified into several groups regarding their assumptions or proposed methods. The reverse-engineering-based approaches <cit.> attempt to solve an optimization problem under certain restrictive assumptions; hence, those methods are effective only in a small portion of cases. <cit.> proposed a gradient-free technology to reverse-engineer the trigger with limited data. Clustering-based approaches <cit.> assume a contaminated training dataset is available, whose acquisition might be expensive. Novelty-detection-based approaches <cit.> require enough clean validation samples for training the complex novelty detector models, especially for the neural-network-based novelty detectors. The retraining-based approaches <cit.> also need a reasonable amount of clean data to achieve high performance. 
If the available clean data is not sufficient, their performance degrades dramatically. <cit.> used online data to improve detection accuracy. However, the method becomes ineffective when online data is limited. Some works tested if a network has a backdoor <cit.> and should be abandoned. Fine-pruning <cit.> and STRIP <cit.> have their assumptions and limitations that require further improvements. Some works modify the original problem and show the behavior of the backdoored networks in their settings, such as noise response analysis <cit.> and generation of universal litmus patterns <cit.>. <cit.> studied backdoor attacks in the frequency domain. § PROBLEM FORMULATION §.§ Background and Assumptions The difference between a backdoored network and a benign network in classification is shown in Table <ref>: a backdoored network f^* outputs ground truth label l for a clean input z and a wrong label l^* (called attacker-chosen label) for a poisoned input z^* with a high probability, as shown in the last column in the table. However, poisoning an input adds negligible influence on the output of a benign model f, as shown in the middle column in the table. We assume that only the network output is available to our approach. The network's other information (e.g., gradients, weights, and hidden layer outputs) is not available. This black-box assumption makes our detection approach realistic since such internal access into the network may be unavailable due to proprietary/security considerations in real-world cases. Additionally, this black-box setting is widely used in the literature of neural network backdoor studies <cit.>. We assume that there is a small set of clean data {x_i}_i=1^n (e.g., with size n≤ 30) to confirm the performance of f on clean data. We assume that only the backdoor attack appears in our problem. Other types of attacks are out of the scope of this paper. §.§ Problem Formulation Given a black-box network f and a tiny validation dataset {x_i}_i=1^n, we want to find a detection algorithm g(·;{x_i}_i=1^n,f) such that g(z^*;{x_i}_i=1^n,f) = 1 with a high probability for poisoned inputs z^* (if f is backdoored) and g(z;{x_i}_i=1^n,f)=0 with a high probability for clean inputs z, i.e., ℙ(g(z;{x_i}_i=1^n,f)= 0 | z is clean) ≥ 1-ϵ_1, ℙ(g(z^*;{x_i}_i=1^n,f)= 1 | z^* is poisoned) ≥ 1-ϵ_2 where ϵ_1 and ϵ_2 are two small positive numbers. §.§ Important Concepts Classification Accuracy (CA) is the ratio of the number of clean inputs for which the network outputs ground-truth labels to the total number of clean inputs. Both backdoored networks and benign networks should have high CA. Attack Success Rate (ASR) is the ratio of the number of poisoned inputs for which the network outputs attacker-chosen labels to the total number of poisoned inputs. ASR should be high for backdoored networks and low for benign networks. True-Positive Rate (TPR) is the ratio of the number of poisoned inputs detected by the detection algorithm to the total number of poisoned inputs. An accurate detection algorithm should have high TPR. False-Positive Rate (FPR) is the ratio of the number of clean inputs misidentified as poisoned by the detection algorithm to the total number of clean inputs. An accurate detection algorithm should have low FPR. Receiver Operating Characteristic Curve (ROC) is a graph that shows the detection algorithm's performance at all thresholds. Its two parameters are TPR and FPR. Area Under the ROC Curve (AUROC) is the entire two-dimensional area underneath the entire ROC curve. 
An accurate detection algorithm should have AUROC close to 1. Area Under the Precision and Recall (AUPR) is similar to AUROC but with precision[Precision = TruePositives/(TruePositives+FalsePositives).] and recall[Recall = TruePositives/(TruePositives+FalseNegatives).] as its two parameters. AUPR is useful when the testing dataset is imbalanced. Higher AUPR implies a better performance of the approach. Novelty Detector is a one-class detector that learns the training data distribution and detects if an incoming new sample belongs to this distribution or not. In this paper, clean validation data and clean online data belong to the same distribution, whereas clean validation data and poisoned online data belong to different distributions. § METHODOLOGY §.§ Intuition – Rethinking the Pattern-Based Triggers Consider the naive trigger with functionality shown in Fig. <ref>: the trigger is one pixel with a fixed value located at the lower-right corner of the image. Any image attached with this trigger makes the backdoored network f^* output the attacker-chosen label l^* (i.e., 0). This shows that the trigger pattern has a higher influence than other benign features in deciding the network output. We measure this influence with the following steps: 1) given an image, we copy its partial content (i.e., the dashed area in Fig. <ref>) and paste the content into different clean validation samples in the exact corresponding location to generate synthetic images. 2) We feed these synthetic images into the network and observe the outputs. If the image is poisoned and the pasted content contains the trigger, then all the output labels should be l^* (i.e., the left “Apply-Get” in Fig. <ref>). If the image is clean, it is less likely that all the output labels are the same (i.e., the right “Apply-Get” in Fig. <ref>). Therefore, the consistency among the network output labels for synthetic images can be used to measure this influence. Based on motivations analogous to the above discussion, we propose five metrics to quantitatively measure the effect of regions of a given image. Using these five metrics as a five-metric set, we will train a classifier that will enable testing of the given image for the presence of triggers. §.§ The Five Metrics The five proposed metrics are robustness r, weakness w, sensitivity s, inverse sensitivity is, and noise invariance Inv. Defining them will use the following notations: * z represents an input image to be evaluated for backdoor presence. * {x_i}_i=1^n represents n clean validation samples[ We require z and {x_i}_i=1^n to belong to the same domain. For instance, they can be both MNIST-like images for the MNIST dataset]. * U_(·) represents the partial content of the image (·). For example, U_z represents the partial content of image z. * paste(·, *) pastes (·) into (*) in the exact corresponding location and returns the synthetic image. For example, paste(U_z, x_i) pastes U_z into x_i in the exact corresponding location and returns the synthetic image. * 1 represents the indicator function[With A being a set, 1_A(x)=1 if x∈ A, and 1_A(x)=0, otherwise.]. * f^* represents the neural network (possibly backdoored). * ϵ∼𝒩(0,δ) represents the normal noise tensor. Robustness r quantifies the likelihood that U_z overrides the prediction of the backdoored network f^* on x_i: r = 1/n∑_i=1^n 1{f^*(z) = f^*(paste(U_z, x_i))}. As one example scenario, if z is poisoned and U_z does not include the benign features of z but includes the trigger, r will be high. 
Inversely, if z is clean and U_z does not contain the benign features, r will be low. Weakness w quantifies the likelihood that U_z fails to make the backdoored network f^* change its prediction on x_i: w = 1/n∑_i=1^n 1{ f^*(x_i) = f^*(paste(U_z, x_i))}. As one example scenario, if z is poisoned and U_z does not include the benign features of z but includes the trigger, w will be low. Inversely, if z is clean and U_z does not include the benign features, w will be high. Sensitivity s quantifies the likelihood that z still contains high influence features after U_x_i is pasted: s = 1/n∑_i=1^n 1{f^*(z) = f^*(paste(U_x_i, z))}. As one example case, if z is poisoned and U_x_i contains the benign features of x_i but paste(U_x_i, z) still contains the trigger, s will be high. Inversely, if z is clean and U_x_i contains the benign features of x_i, s will be low. Inverse Sensitivity is quantifies the likelihood that z does not contain high influence features after U_x_i is pasted: is = 1/n∑_i=1^n 1{ f^*(x_i) = f^*(paste(U_x_i, z)) }. As one example scenario, if z is poisoned and U_x_i contains the benign features of x_i but paste(U_x_i, z) still contains the trigger, is will be low. Inversely, if z is clean and U_x_i contains the benign features of x_i, is will be high. Thus, the metrics help distinguish between clean and poisoned samples. The following observations are made: * Each metric is expected to contribute in different ways to detect various triggers, although it is possible that multiple metrics capture the same trigger in some cases. Fusing all metrics further enhances the true positive rates and detection of the triggers. For example, one can easily design counterexamples for poisoned samples to evade detection using r or s. However, w and is help complement r and s to detect those counterexamples. * Having some reasonable regions U is the key step to distinguishing the clean and poisoned samples. Therefore, we use several regions. * These four metrics consider insertions of regions of a given image into corresponding regions of the validation set images (or vice versa), which are expected to be most relevant for pattern-based triggers (i.e., triggers contained in some regions in the image). However, non-pattern-based triggers exist (i.e., triggers that are based on inserting subtle variations throughout the image) and are usually entangled with benign features, which cannot be separated from benign features by any region. Consequently, the performance of these four metrics can be very low on certain non-pattern-based triggers. Therefore, the fifth metric Inv is needed. Noise Invariance Inv quantifies the robustness of the features in z against noise perturbation: Inv = 1/n∑_i=1^n 1{ f^*(z) = f^*(z+ϵ_i)}. If z is poisoned and ϵ_i does not break the function of the benign features of z but breaks the function of the trigger, Inv will be low. Inversely, if z is clean and ϵ_i does not break the function of the benign features, Inv will be high. Thus, clean and poisoned samples are distinguishable. Similarly, finding the ideal noise ϵ is the key to distinguishing clean and poisoned samples. Therefore, we utilize a pool of noise distributions (i.e., different δ). Fig. <ref> is made with noise ϵ sampled from 𝒩(0, δ) with different δ shown in the X-axis. The noise pattern is global with the same shape as the input image (i.e., a noise perturbation ϵ∼𝒩(0, δ) is added into each pixel of the image). From the figure, the clean and poisoned samples are empirically distinguishable. 
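To make the five definitions above concrete, the following minimal Python sketch computes (r, w, s, is, Inv) for a single input, a single centered region, and a single noise level. It is only an illustration of the definitions, not the authors' released code: the black-box classifier `predict`, the `central_region` and `paste` helpers, the assumed [0, 1] pixel range used for clipping, and the default `ratio` and `delta` values are all assumptions made here (the paper sweeps pools of region sizes and noise variances, introduced in the next subsection).

```python
import numpy as np

def central_region(img, ratio):
    # Coordinates of a centered crop whose side lengths are `ratio` times the image sides.
    h, w = img.shape[:2]
    dh, dw = int(h * ratio), int(w * ratio)
    top, left = (h - dh) // 2, (w - dw) // 2
    return top, left, top + dh, left + dw

def paste(src, dst, box):
    # Copy the content of `src` inside `box` into the same location of a copy of `dst`.
    t, l, b, r = box
    out = dst.copy()
    out[t:b, l:r] = src[t:b, l:r]
    return out

def five_metrics(z, x_val, predict, ratio=0.5, delta=0.05, rng=None):
    """Compute (r, w, s, is, Inv) for one region size and one noise level.

    `predict` maps a single image to a label (black-box access only);
    `x_val` is a list of clean validation images with the same shape as `z`.
    """
    rng = np.random.default_rng(rng)
    box = central_region(z, ratio)
    fz = predict(z)
    r = w = s = inv_s = inv = 0.0          # `inv_s` stands for the paper's "is" (reserved word in Python)
    for x in x_val:
        fx = predict(x)
        z_into_x = predict(paste(z, x, box))      # U_z pasted into x_i
        x_into_z = predict(paste(x, z, box))      # U_{x_i} pasted into z
        noisy = predict(np.clip(z + rng.normal(0.0, delta, size=z.shape), 0.0, 1.0))
        r     += (z_into_x == fz)    # robustness
        w     += (z_into_x == fx)    # weakness
        s     += (x_into_z == fz)    # sensitivity
        inv_s += (x_into_z == fx)    # inverse sensitivity
        inv   += (noisy == fz)       # noise invariance
    return np.array([r, w, s, inv_s, inv]) / len(x_val)
```

Sweeping `ratio` over the pool of region sizes and `delta` over the pool of noise variances (both described in the next subsection) turns each metric into a 16-dimensional feature vector per input.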
We also empirically observed that although Inv is designed for non-pattern-based triggers, it also works well on some pattern-based triggers. The explanation is that some generated noise perturbations break the trigger pattern's function but do not break the benign features' influence. Overall, Inv helps in distinguishing clean and poisoned samples, especially for non-pattern-based triggers and sometimes for pattern-based triggers. §.§ The Pool of Feature Extraction Regions and Noise Variance Separating different triggers and benign features could require multiple regions. Therefore, one should use a pool of regions to separate the triggers and benign features. Similarly, as shown in Fig. <ref>, a pool of δ can better capture the poisoned samples. This paper uses a pool of 16 central regions and a pool of 16 noise variances. Specifically, the aspect ratio[The aspect ratio is the ratio of the height of the central region to the height of the original image.] of the central regions to the image is 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4,0.45, 0.5,0.55, 0.6,0.65, 0.7,0.75, 0.8, and 0.9, respectively. And the noise variances are 0.001, 0.003, 0.005, 0.007, 0.01, 0.03, 0.05, 0.07, 0.1, 0.3, 0.5, 0.7, 1, 1.5, 2, and 3, respectively. Alg. <ref> shows the extraction process used in this paper. We use the aspect ratio to calculate the coordinates of the extraction regions and then copy the content of the regions (i.e., U_z and U_x_i) to calculate the five-metric values. Since the pool has 16 elements, each metric value will be a 16-dimensional vector. Note that the pools can be expanded by adding more regions and noise. For example, one can use and add extra regions with different locations and shapes. The paper later shows the performance of our approach using central regions and additional corner regions. Similarly, one can use and add extra noise perturbations from a wide range of distributions. §.§ Novelty Detection Process The novelty detection process is shown in Fig. <ref>. For any input z, we first calculate the five-metric values using the validation data, the pool of central regions, and the noise variances via MetricCal in Alg. <ref>. The metric values are then sent into the corresponding novelty detectors, each of which will output a confidence score of the input not being a novelty. A meta novelty detector takes all five confidence scores to output a final confidence score. By setting a user-defined threshold, poisoned inputs can be detected. We expect each metric novelty detector to detect triggers in different ways. The reason to use the meta novelty detector is that it fuses the confidence scores nonlinearly and is more accurate than a simple linear combination of the confidence scores. In this paper, Local Outlier Factor (LOF) <cit.> from scikit-learn <cit.> with default parameter settings is used as the novelty detector. The meta novelty detector is also a LOF with default parameter settings. However, our approach is applicable to any type of anomaly detector and does not necessarily require nearest-neighbor detectors. Indeed, we empirically observed that our approach is also accurate with one-class SVM <cit.> as the novelty detector. The number of LOF's parameters that need to be learned is small. Therefore, over-fitting is not likely to happen even if the available clean validation dataset is tiny. In contrast, novelty detectors with a large parameter size (e.g., neural-network-based detectors) are likely to face over-fitting with small training datasets. 
In this paper, the clean validation dataset size is 30. Therefore, the neural-network-based novelty detectors may be over-fitted and have low accuracy. Training the LOF is simple: after the training data is prepared, one calls model.fit(training data) for training. We used default values set by scikit-learn for all the training hyper-parameters. §.§ The Detection Algorithm Alg. <ref> describes how to train the novelty detectors and use them to detect poisoned inputs. It first calculates the five-metric values for the clean validation samples {x_i}_i=1^n using MetricCal function. It then trains the five novelty detectors with the calculated metric values. The algorithm next feeds the calculated metric values to the trained novelty detectors to acquire the corresponding confidence scores. Finally, a meta novelty detector is trained with confidence scores. During online testing, the algorithm first calculates the metric values for a given input and then acquires the confidence scores by feeding the metric values into the first set of novelty detectors. It next feeds the scores into the meta novelty detector to get a meta score. If the meta score is lower than a user-defined threshold, the algorithm will consider z poisoned. Otherwise, the input z will be considered clean. § EXPERIMENTAL RESULTS §.§ Datasets, Triggers, and Compared Methods Clean Datasets and Network Architecture: Our method is evaluated on various datasets, including MNIST <cit.>, GTSRB <cit.>, CIFAR-10 <cit.>, YouTube Face <cit.>, and a subset of ImageNet <cit.>. The number of classes and the number of clean samples in the training and testing datasets for each class are shown in Table <ref>. For MNIST, the model was from <cit.>. For GTSRB, the models were from <cit.> and Pre-activation Resnet-18 <cit.>. For CIFAR-10, the models were Network in Network <cit.> and Pre-activation Resnet-18. For YouTube Face, the network was from <cit.>. For ImageNet, Resnet-18 was used. Since our method addresses a black-box scenario, any backdoored models (even models other than neural networks, such as SVM <cit.> or random forest <cit.>) can be addressed. Triggers and Their Impacts: The triggers for each dataset are shown in Fig. <ref>. In MNIST dataset, we considered all label attack (AAA) <cit.>, clean label attack (CLA) <cit.>, and blended (Ble.) <cit.>. Three additional triggers were created: the 4 corner trigger (4C) is four pixels at each corner; the 2 piece trigger (2P) is 2 pixels at the image's center and upper-left corner; the centered trigger (Cen.) is one pixel located in the image center. The desired impact of the triggers is to misguide the backdoored network to output the attacker-chosen label l^* given in Table <ref>, where “+1” means l^*= (l+1) mod 10. In CIFAR-10 dataset, we considered combination attack (TCA) <cit.> and Wanet (Wa.C) <cit.> with the impact shown in Table <ref>. In GTSRB dataset, we considered a white box (Whi.) <cit.>, a moving trigger (Mov.) <cit.>, feature space attack (FSA) <cit.>, and Wanet (Wa.G) <cit.>. In ImageNet dataset, we used an invisible (Invs.) trigger <cit.>. In YouTube Face dataset, we used sunglasses (Sun.) <cit.>, lipstick (Lip.), and eyebrow (Eye.) <cit.> as triggers. All the impacts (i.e., l^*) can be found in Table <ref>. This paper later discusses the trigger patterns in more detail. 
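Before turning to the experimental setup, the two-stage detection pipeline described above can be summarized in a short sketch. It builds on the hypothetical `five_metrics` helper from the earlier sketch and is an illustration under stated assumptions rather than the released implementation: the pool values are taken from the text, while the leave-one-out handling of validation samples, the `metric_vectors` wrapper, and the use of `novelty=True` (required by scikit-learn's LocalOutlierFactor to score previously unseen inputs) are choices made here for concreteness.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

RATIOS = [0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45,
          0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.9]
SIGMAS = [0.001, 0.003, 0.005, 0.007, 0.01, 0.03, 0.05, 0.07,
          0.1, 0.3, 0.5, 0.7, 1.0, 1.5, 2.0, 3.0]
METRICS = ["r", "w", "s", "is", "inv"]

def metric_vectors(z, x_val, predict):
    # One 16-dimensional vector per metric. The region metrics ignore delta and Inv
    # ignores ratio, so the two pools can be swept in parallel for brevity.
    per_pool = np.stack([five_metrics(z, x_val, predict, ratio=k, delta=d)
                         for k, d in zip(RATIOS, SIGMAS)])        # shape (16, 5)
    return {m: per_pool[:, j] for j, m in enumerate(METRICS)}

def fit_detectors(x_val, predict):
    # Offline stage: fit five metric-level LOFs, then a meta LOF on their scores.
    feats = []
    for i, x in enumerate(x_val):
        others = [v for j, v in enumerate(x_val) if j != i]       # leave-one-out (a simplification)
        feats.append(metric_vectors(x, others, predict))
    metric_lofs, val_scores = {}, []
    for m in METRICS:
        X = np.stack([f[m] for f in feats])
        lof = LocalOutlierFactor(novelty=True).fit(X)             # novelty=True to score new inputs
        metric_lofs[m] = lof
        val_scores.append(lof.score_samples(X))
    meta = LocalOutlierFactor(novelty=True).fit(np.stack(val_scores, axis=1))
    return metric_lofs, meta

def is_poisoned(z, x_val, predict, metric_lofs, meta, thres):
    # Online stage: flag z as poisoned if its meta score falls below the threshold.
    f = metric_vectors(z, x_val, predict)
    scores = np.array([metric_lofs[m].score_samples(f[m][None, :])[0] for m in METRICS])
    return meta.score_samples(scores[None, :])[0] < thres
```

In scikit-learn, higher `score_samples` values indicate more typical points, so clean inputs are expected to score above the threshold and poisoned inputs below it, consistent with the decision rule stated above.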
Details of Training Datasets: We randomly injected the corresponding triggers into 15% of the training clean samples for each class to create the training poisoned samples and changed the ground-truth label to the attacker-chosen label shown in Table <ref>. Table <ref> shows the number of clean and poisoned samples per class in the training dataset. CIFAR-10, YouTube Face, and Sub-Imagenet are balanced datasets. Therefore, the numbers in Table <ref> are the numbers of samples per class in these three datasets. MNIST and GTSRB are not strictly balanced, but the numbers of samples per class for these two datasets are very close to the numbers shown in Table <ref>. One can acquire more information on their distributions through the corresponding references. We followed the standard training process with CrossEntropy-based loss function and Adam optimizer to make the backdoored networks (badnet) have the classification accuracy (CA) and attack success rate (ASR) shown in Table <ref> “Setup”. Details of Validation and Testing Datasets: Our work considers two small validation datasets with sizes n=30 and 100 for all cases as shown in Table <ref>. The first clean validation datasets were used for training all the novelty detectors. In this work, we randomly select clean samples to form the validation datasets so that the clean validation datasets have the same underlying data distributions as the clean testing datasets. We do not require the clean validation samples to be 100% correctly classified by the model. The number of samples per class for MNIST and YouTube Face is shown in the first row of Fig. <ref>. CIFAR-10 and MNIST have similar numbers of clean validation samples per class because they both have 10 classes, whereas the numbers of clean validation samples per class for GTSRB and sub-ImageNet are similar to YouTube Face because they all have a large number of classes. The second validation datasets were utilized to determine a proper threshold for our approach and have the same underlying distributions as the clean testing datasets as well. The second row in Fig. <ref> shows the number of samples per class in the second clean validation datasets of MNIST and YouTube Face. Table <ref> also shows the testing datasets that are used to evaluate our approach and other compared methods. The ratio of poisoned samples to clean samples is 1 (i.e., the testing datasets are balanced). Specifically, we generated the poisoned testing samples by injecting triggers into their corresponding clean versions. It is worth noticing that methods evaluated on imbalanced binary classification datasets may have high inference accuracy but poor run-time performance. Compared Methods: We selected one reverse-engineering-based approach (i.e., Neural Cleanse <cit.>), one out-of-distribution detection (i.e., Mahalanobis-distance-based novelty detection (MD) <cit.>), one retraining-based approach (i.e., Kwon's method <cit.>), and STRIP <cit.>. Neural Cleanse directly modifies the parameters of the original backdoored network to cap the maximum neuron values for the reverse-engineered triggers. Kwon's method trains a new clean network on some relabeled poisoned samples. STRIP utilizes an entropy-based confidence score and “blend” technique for backdoor detection. MD is a feature-based anomaly detector with a Gaussian-based confidence score. MSP <cit.> and GEM <cit.> are two additional feature-based anomaly detectors whose performance was found to be close to MD. 
Therefore, for brevity, we only present the results with MD. The compared methods are representative in their types of defenses. Fitting the Novelty Detectors: We used LOFs for the metric and meta novelty detectors. However, other types of novelty detectors are also allowed, such as one-class SVM. The training process is shown in Alg. <ref>. Only the clean validation datasets with size n=30 are available. Neither poisoned samples nor information about triggers were used. All the training hyper-parameters were set to default values provided by scikit-learn <cit.>. §.§ Ablation Study for the Five Metrics The ablation study shows the efficacy of each metric detector, the reasoning behind the validation dataset size and central regions, and improvement by incrementally adding metrics. The results are shown in Table <ref> and Figs. <ref>-<ref>. How Each Metric Works: To show the efficacy of the introduced five metrics given by (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), we have plotted the five metrics values of clean and poisoned samples with respect to different aspect ratios (small ratios represent small central regions) and noise variances (small variances represent small noise perturbations) for several backdoor attack cases. The results are shown in Fig. <ref>. The first row shows the cases where clean and poisoned samples are distinguishable with respect to each metric value. Based on this observation, it is important to use several central regions with different sizes together to increase the detection probability of poisoned samples. For example, in the case AAA, clean and poisoned samples have similar r and is values for small central regions. However, they have distinguishable r and is values for large central regions. As for the case Cen., clean and poisoned samples are distinguishable in terms of w for small and medium central regions. Clean and poisoned samples in CLA can be distinguished in terms of s using medium central regions. Clean and poisoned samples in Wa.G can be distinguished by the metric Inv using small and medium noise perturbations. The second row in Fig. <ref> shows the cases where clean and poisoned samples are not distinguishable with respect to each metric value for all central regions and noise variances. Therefore, it is important to use all the five metrics together to increase the detection probability of poisoned samples. Details are given in the following ablation study. Contribution of Each Metric: We calculated the five-metric values of the poisoned samples in the testing data for each backdoor attack case and input them into the five novelty detectors. After receiving the confidence scores, we calculated the TPR of each novelty detector using the default threshold value provided by scikit-learn (i.e., 0). Fig. <ref>(a) shows the TPR for each backdoor attack case. The “Overall” column is the final TPR of or operation of the five novelty detectors. It is seen that each metric contributes to detecting poisoned inputs for some cases. Combining the five metrics can achieve a high TPR of over 90% for most cases except Wanet CIFAR-10 (Wa.C) whose TPR is 89.4%. Fig. <ref>(b) shows the number of cases in which each metric contributes to detecting poisoned samples over the total 15 cases. 𝒩_r has a TPR of more than 20% in over 80% the cases. 𝒩_w helps increase the “Overall” TPR for some cases. For example, without using 𝒩_w, the TPR reduces by 3% in both “Cen.” and “Lip.” cases. 
We also observed that 𝒩_w improved its own performance in some triggers if additional regions are utilized as shown in Table <ref> “𝒩_w”. For more than half the cases, 𝒩_s and 𝒩_is have TPR of over 50%. Besides the non-pattern-based triggers, 𝒩_Inv is also effective on some pattern-based triggers since the generated noise breaks the impact of some pattern-based triggers, leading to an abrupt change in Inv and the detection by 𝒩_Inv. The FPR of using the or operation in “Overall” column is less than 30% for most cases. However, our meta novelty detector can reach a better trade-off between TPR and FPR than a simple or operation of the five novelty detectors. Performance of the Meta Novelty Detector: Choosing different threshold values thres will lead to different TPR and FPR. We, therefore, draw the ROC curves for the meta novelty detector shown in Fig. <ref>(c) with the AUROC shown in Table <ref>. The meta novelty detector can reach an AUROC over 0.9 for all the triggers. Therefore, for a proper threshold value, the meta novelty detector will have a high TPR and a low FPR. Although the testing datasets are balanced with an equal number of clean and poisoned samples, we still show the AUPR to investigate the performance of our approach in finding the poisoned samples. From Table <ref>, our approach also has high AUPRs. Compared with the baseline method STRIP, our approach shows consistently high performance in all the cases, whereas STRIP performs poorly in several cases, such as AAA, Wa.C, and Wa.G. We also applied one-class SVM (OC-SVM) as the metric detectors and the meta detector and show the result in Table <ref>. Based on the result, our approach is applicable to any type of anomaly detector and does not necessarily require LOFs. Regions, Sizes, and Incremental Improvements: We evaluated our approach by using additional regions. Specifically, the coordinates of the additional regions are m_1 = (1-k)L, n_1 = (1-k)W, m_2 = L, and n_2= W, where k is the aspect ratio from the same ratio pool in Alg. <ref>. The results are shown in Table <ref> “Overall”. The AUROC of using extra regions is close to using only the central regions. Therefore, to minimize the computation cost, we choose to use only the central regions. The first plot in Fig. <ref> shows the performance of our approach with different validation sizes. Using more validation samples does not significantly improve our approach but adds memory and computation complexity. Therefore, we consider n=30 to be optimal. The second plot in Fig. <ref> shows the improvement of our approach by incrementally adding metrics. The AUROC is over 0.9 by using only two metrics (w and r) in eight cases. With three metrics (w, r, and is), the AUROC is over 0.9 in twelve cases. Therefore, our approach requires metrics less than five to be accurate in some cases. Our approach in the remaining three cases requires all five metrics to achieve over 0.9 AUROC (using only r, w, s, and is, the AUROC of our approach is 0.835 for Invs., 0.508 for Wa.C, and 0.746 for Wa.G). To further highlight the contribution of each metric, we have evaluated our approach on all the cases by using only four metrics. The Sun. case mostly highlights the contribution of each metric. Therefore, this paper mainly discusses the Sun. case for brevity. The AUROC is 0.571 for using only w, s, is, Inv, 0.937 for using only r, s, is, Inv, 0.895 for using only r, w, is, Inv, 0.905 for using only r, w, s, Inv, and 0.906 for using only r, w, s, is. 
The AUROC of using all the five metrics is 0.986. According to these numbers, the metric r contributes the most to detecting poisoned samples in the Sun. case. However, the other four metrics also greatly fine-tune the performance of our approach by increasing the AUROC by roughly 5% ∼ 8%. Therefore, all five metrics are needed to maximally capture poisoned samples. §.§ Adaptive Search for Threshold Recall that our approach requires an appropriate threshold value to reach a high TPR and a low FPR. To find such a threshold, we propose an adaptive-search solution. After the meta novelty detector is trained, one collects all the meta scores of the clean validation data (i.e., 𝒩_meta.score({Score_l}_l=1^n)). One finds the mean μ and standard deviation σ of these meta scores. The threshold value can be set to thres = μ - h*σ, where h is a coefficient. With the second validation dataset available as shown in Table <ref>, the user can vary h and observe the FPR by testing the meta novelty detector on this validation dataset. Then, the user can choose the desired h according to the corresponding FPR. If the second validation dataset is not available, the user can set h to be any reasonable number. With this adaptive search, our detection algorithm reaches an FPR of less than 5% and a TPR of more than 90% for most cases, as shown in Table <ref>. The CA and ASR of the backdoored network before and after applying our method are also shown in Table <ref>. It is seen that our approach reduces the ASR by identifying and discarding potential poisoned inputs while maintaining a reasonable CA. Even though our approach has a relatively low TPR (88.78%) for the Lipstick trigger, the corresponding ASR is low (2.85%). This is because there exist null poisoned samples that fail to make the backdoored network output the attacker-chosen label. Since our approach is based on detecting differences in the behaviors of the network when presented with triggers vs. benign features, the null poisoned samples may be considered clean by our approach and bypass the detection. However, failing to detect null poisoned samples does not increase the ASR. Note that the adaptive-search method can find better thresholds if a larger second validation dataset is used. For example, in Sun., the ASR can be further reduced to 0.9% with the CA being 95%. §.§ Data-Efficiency and Comparisons We selected several types of triggers and compared our approach with other methods. We trained the baseline models with the two tiny validation datasets (i.e., a total of 130 samples) for a fair comparison. The hyper-parameters for training the compared methods were set based on the original papers and the code provided by the authors. We set the thresholds for STRIP and MD so that 5% of the clean validation samples are considered poisoned. The results are summarized in Table <ref>. Naive Triggers: The naive triggers (i.e., CLA and Whi.) are simple patterns associated with an attacker-chosen label. Our method reduces the ASR to a low value (at most 0.05%) while maintaining a reasonable CA. The other methods either do not work or cannot achieve comparable CA or ASR. Functional Complex Triggers: According to the l^* in Table <ref>, the attack AAA depends on both the trigger pattern and the image's benign features. Additionally, AAA is a one-to-all attack with attacker-chosen label l^* = l+1 mod 10. The trigger of Mov. is randomly attached to the image.
The trigger for TCA is the combination of two different shapes, and the network will output the attacker-chosen label only when both shapes exist. If only one shape appears in the image, the network behaves normally. The compared methods cannot both detect poisoned inputs and maintain high CA. Real-World Meaning Triggers: The attacker uses some real-world objects as the triggers (i.e., Sun. and Lip.), and the trigger size can be large. Neural Cleanse cannot reverse-engineer large-sized triggers and thus has low accuracy. This is verified in Table <ref>. STRIP is still valid in this case, but our method shows a higher CA. Filter Trigger and Invisible Sample-Specific Trigger: The trigger for FSA is a Gotham filter. Inputs that pass through this filter become poisoned. This trigger essentially changes the entire input image. However, our method shows its efficacy while all other methods fail. Trigger Invs. is an invisible sample-specific trigger. The attacker extracts content information and generates a hidden pattern for each image (the last picture in the “ImageNet” row in Fig. <ref>). By injecting the hidden pattern, the poisoned input looks identical to the clean input to human eyes (the middle picture in the “ImageNet” row), and the network will output the attacker-chosen label. Our method can detect this advanced attack as well. Information Comparison: Table <ref> shows the information needed for each method to detect or defend against backdoor attacks. For reverse-engineering-based approaches such as Neural Cleanse, they need access to the network parameters. However, it may not always be possible since such information could be proprietary. Feature-based statistical detection tools, such as MD, require some hidden layer outputs of the network. Compared to reverse-engineering-based approaches, they require less information about the network. Nevertheless, they may not be viable if the network is proprietary. The retraining-based approaches, such as Kwon's, need the architecture of the network. However, to achieve high performance, they need a large amount of clean data, which may not be possible. Lastly, STRIP is a statistical detection tool that requires only the logits layer (i.e., the layer before the softmax function) output. However, if a neural network is entirely black-box, STRIP is also inapplicable. In contrast, our method can operate in completely black-box scenarios (black-box-efficient) and with smaller amounts of clean validation data (data-efficient). Reasons for Low Accuracy on the Compared Methods: The most important reason for their low accuracy is that the available clean validation dataset is small. Neural Cleanse and Kwon's method need to fine-tune a neural network model. The two tiny validation datasets are not enough to fine-tune neural network models to have high accuracy. MD and STRIP do not use neural networks. However, MD trains a novelty detector with hidden layer outputs of the validation samples, which are high-dimensional vectors (e.g., a 100-dimensional vector). The small number of validation samples and the high dimension of the hidden layer outputs make the trained novelty detector have low accuracy. STRIP uses the “blend” function to create synthetic images. However, there are many triggers whose functionality can be broken by the “blend” function. Therefore, STRIP becomes ineffective on those triggers. For example, STRIP is not accurate for one-to-all cases (i.e., AAA). 
Our method utilizes multiple regions to extract image contents so as to avoid breaking the triggers' functionality. §.§ Benign Models and Multiple Triggers Although our method does not detect whether the model is backdoored and instead detects potentially poisoned inputs, other works (e.g., Neural Cleanse) can be used to check whether a model is backdoored. If the model is indeed backdoored, our method still applies to find poisoned inputs during online operation while using the model. If the model is clean but misidentified as backdoored, our method retains a reasonable CA value. For example, we trained a benign model on the MNIST dataset, which has 98.95% CA. After applying our method, the CA becomes 95.86%, which is still reasonable. Therefore, it is safe to use our method even when there are no backdoor attacks. We also considered the multi-trigger-single-target attack (MTSTA) and the multi-trigger-multi-target attack (MTMTA). In MTSTA, there are three triggers associated with a single attacker-chosen label. In MTMTA, the three triggers are associated with three different attacker-chosen labels. As shown in Table <ref>, our method works for both cases for all triggers. However, STRIP fails to detect some triggers. § ADAPTIVE ATTACKS AND FUTURE WORKS The proposed metrics help in understanding the behavior of backdoored networks. To bypass our detection, a backdoor attack needs to satisfy several conditions. Its trigger should be non-pattern-based, since our approach uses four metrics to detect pattern-based triggers and attains high accuracy on them. Additionally, the robustness of the trigger against noise perturbation should be close to that of benign features, so that the metric values Inv for clean and poisoned samples are similar. The attacker can attempt to design an adaptive attack by using the five metrics in the training loss function. While we have not so far been able to devise a straightforward way to construct such an adaptive backdoor attack, it appears that the Wanet-on-CIFAR-10 case provides some clue in this direction, since it yielded a relatively low CA and high ASR compared to other attacks (although we did reasonably well in this case as well). From the ablation studies, it appears that the metric Inv is dominant for detecting non-pattern-based triggers, although the other metrics have some contribution. One potential direction for future work is to add new metrics for non-pattern-based triggers. For example, denoising techniques may also contribute to detecting non-pattern-based triggers. Another potential direction is to build new deep novelty detectors based on the existing ones to capture poisoned samples more accurately. Deep novelty detectors have shown promising performance for many applications. However, several factors limit the use of existing deep novelty detectors for detecting poisoned samples under the considered scenario. One factor is that training deep novelty detectors requires a sufficiently large amount of data. Deep networks, such as LeNet, have thousands of parameters in even a single layer. However, the available clean samples considered in this work are scarce (i.e., n≤ 30). Therefore, overfitting is likely to happen when using existing deep novelty detectors for backdoor detection with limited data. Another factor is that the accuracy of existing deep novelty detectors still requires improvement. Therefore, building new deep novelty detectors that require less data and are more accurate for backdoor detection can be fruitful.
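As a practical aside on the adaptive threshold search described earlier (thres = μ - h*σ), the procedure reduces to a one-dimensional sweep over the coefficient h against clean validation scores. The following is only a schematic sketch under our own assumptions — it presumes that lower meta scores indicate more anomalous (likely poisoned) inputs, and the function and variable names are hypothetical rather than taken from our implementation:

import numpy as np

def adaptive_threshold(scores_val1, scores_val2=None, target_fpr=0.05,
                       h_grid=np.arange(0.0, 5.0, 0.1)):
    # scores_val1: meta scores of the first clean validation set,
    #              used to estimate the mean and standard deviation.
    # scores_val2: meta scores of the second clean validation set,
    #              used to measure the FPR for each candidate h (optional).
    # Inputs whose meta score falls below the returned threshold are
    # flagged as potentially poisoned.
    mu, sigma = scores_val1.mean(), scores_val1.std()
    if scores_val2 is None:
        # No second validation set: fall back to a fixed, reasonable h.
        return mu - 2.0 * sigma
    thres = mu - h_grid[-1] * sigma      # most conservative fallback
    for h in h_grid:                     # sweep h from small to large
        candidate = mu - h * sigma
        # FPR = fraction of clean validation samples flagged as poisoned.
        fpr = float(np.mean(scores_val2 < candidate))
        if fpr <= target_fpr:
            thres = candidate
            break
    return thres

The sweep stops at the smallest h whose measured FPR on the second validation set falls below the chosen tolerance; when no second validation set is available, any reasonable fixed h can be used, as noted earlier.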
§ CONCLUSION The behavioral differences between triggers and benign features are illustrated and utilized to detect poisoned inputs to backdoored networks. Five metrics are proposed to measure the behavior of the network for a given input. A novelty detection process is proposed to detect poisoned inputs by taking as input the five metric values. The method is black-box-efficient and data-efficient. The ablation study for the five metrics, the efficacy of our approach, and the comparison with other methods are shown on various types of backdoor attacks. Potential adaptive attacks and prospective works are also discussed. Hao Fu is a Ph.D. candidate in the Department of Electrical and Computer Engineering at New York University, Tandon School of Engineering, Brooklyn, NY, USA. In 2019, he received his M.S. degree in the same department. He received his B.S. in Physics from the University of Science and Technology of China, Hefei, China, in 2017. In Fall 2018, he joined the Control/Robotics Research Laboratory (CRRL). His research interests include backdoor attacks and security in cyber-physical systems. Prashanth Krishnamurthy received the B.Tech. degree in electrical engineering from Indian Institute of Technology Madras, Chennai, in 1999, and M.S. and Ph.D. degrees in electrical engineering from Polytechnic University (now NYU) in 2002 and 2006, respectively. He is a Research Scientist and Adjunct Faculty with the Department of Electrical and Computer Engineering at NYU Tandon School of Engineering. He has co-authored over 150 journal and conference papers and a book. His research interests include autonomous vehicles and robotic systems, multi-agent systems, sensor data fusion, robust and adaptive nonlinear control, resilient control, machine learning, real-time embedded systems, cyber-physical systems and cyber-security, decentralized and large-scale systems, and real-time software implementations. Siddharth Garg received the B.Tech. degree in electrical engineering from the Indian Institute of Technology Madras, Chennai, India, and the Ph.D. degree in electrical and computer engineering from Carnegie Mellon University, Pittsburgh, PA, in 2009. He is currently an Associate Professor at New York University, New York, where he joined as an Assistant Professor in 2014. Prior to this, he was an Assistant Professor with the University of Waterloo, Waterloo, ON, Canada, from 2010 to 2014. His current research interests include computer engineering, and more particularly secure, reliable, and energy-efficient computing. He was a recipient of the NSF CAREER Award in 2015, and of paper awards at the IEEE Symposium on Security and Privacy 2016, the USENIX Security Symposium in 2013, the Semiconductor Research Consortium TECHCON in 2010, and the International Symposium on Quality in Electronic Design in 2009. He was listed in Popular Science magazine's annual list of "Brilliant 10" researchers. He serves on the technical program committees of several top conferences in the areas of computer engineering and computer hardware and has served as a reviewer for several IEEE and ACM journals and conferences. Farshad Khorrami received the bachelor's degrees in mathematics and in electrical engineering from The Ohio State University in 1982 and 1984, respectively, and the master's degree in mathematics and the Ph.D. degree in electrical engineering from The Ohio State University, in 1984 and 1988, respectively.
He is currently a Professor with the Electrical and Computer Engineering Department, NYU, which he joined as an Assistant Professor in September 1988. He has developed and directed the Control/Robotics Research Laboratory at Polytechnic University (now NYU) and is Co-Director of the Center for AI and Robotics. His research has been supported by DARPA, ARO, NSF, ONR, DOE, AFRL, NASA, and several corporations. He has published more than 320 refereed journal articles and conference papers in these areas. His book on "Modeling and Adaptive Nonlinear Control of Electric Motors" was published by Springer Verlag in 2003. He also holds 14 U.S. patents on novel smart micropositioners and actuators, control systems, cyber security, and wireless sensors and actuators. His research interests include adaptive and nonlinear controls, robotics, unmanned vehicles (fixed-wing and rotary-wing aircraft as well as underwater vehicles and surface ships), machine learning, resilient control and cyber security for cyber-physical systems, large-scale systems, decentralized control, and real-time embedded instrumentation and control. He has served as the general chair and as a conference organizing committee member for several international conferences.
http://arxiv.org/abs/2307.04724v1
20230710173408
The individual abundance distributions of disc stars across birth radii in GALAH
[ "Kaile Wang", "Andreia Carrillo", "Melissa K. Ness", "Tobias Buck" ]
astro-ph.GA
[ "astro-ph.GA" ]
Individual abundances in the Milky Way disc record stellar birth properties (e.g. age, birth radius R_birth) and capture the diversity of the star-forming environments over time. Assuming an analytical relationship between ([Fe/H], [α/Fe]) and R_birth, we examine the distributions of individual abundances [X/Fe] of elements C, O, Mg, Si, Ca (α), Al (odd-z), Mn (iron-peak), Y, and Ba (neutron-capture) for stars in the Milky Way. We want to understand how these elements might differentiate environments across the disc. We assign tracks of R_birth in the [α/Fe] vs. [Fe/H] plane as informed by expectations from simulations for ∼ 59,000 GALAH stars in the solar neighborhood (R∼7-9 kpc) which also have inferred ages. Our formalism for R_birth shows that older stars (∼10 Gyrs) have a distribution with smaller mean values (i.e., R̅_birth∼5±0.8 kpc) compared to younger stars (∼6 Gyrs; R̅_birth∼10±1.5 kpc), for a given [Fe/H], consistent with inside-out growth. The α-, odd-z, and iron-peak element abundances decrease as a function of R_birth, whereas the neutron-capture abundances increase. The R_birth-[Fe/H] gradient we measure is steeper than the present-day gradient (-0.067 dex/kpc vs -0.058 dex/kpc), which we also find to be true for the R_birth-[X/Fe] gradients. These results (i) showcase the feasibility of relating the birth radius of stars to their element abundances, (ii) show that the abundance gradients across R_birth are steeper than those over current radius, and (iii) offer an observational comparison to expectations on element abundance distributions from hydrodynamical simulations. Galaxy: abundances – Galaxy: disc – Galaxy: evolution § INTRODUCTION Recovering the birth conditions of stars is one of the main goals of Galactic archaeology. However, stars deviate from their birth orbits, such that their guiding-center radius can change over their lifetime, without leaving any signature of this change. These orbital excursions are due to processes such as the interaction with the spiral structure as well as external perturbations from infalling satellites (e.g. ). Although we cannot directly probe the initial orbital properties of disc stars at birth, they exhibit atmospheric abundances that - to first order - reflect the abundance distribution of the gas from which the stars were born, with exceptions (e.g. ). We can therefore assume that most element abundances of stars, in particular within narrow regions of evolutionary state, are time-invariant. With stellar death, elements created within the stars and during explosive nucleosynthesis are returned to the interstellar medium. This enriches the environment where newer stars are formed, in a cyclic process. The element abundances for a given star are therefore a record of the nucleosynthetic history of the star-forming environment, at that particular time and place.
The time invariance of element abundances and their effective barcode of a star's birth environment has been foundational to the idea of chemical tagging, via which individual molecular cloud stellar birth sites in the disc might be reconstructed using abundances alone <cit.>. However, the current data appear to demonstrate that this goal is prohibited by the low dimensionality of what appears to be a very correlated abundance space <cit.>. A more feasible goal with current spectroscopic data is the inference of the time and overall radius at which stars formed in the disc. Different types of stars and production mechanisms produce elements across the periodic table with different yields, at different rates, and at different points in time (see ). Additionally, it is widely accepted that galaxies, like the Milky Way, formed inside-out, with star formation starting in the deepest part of the potential and proceeding outwards (e.g. ). Combining nucleosynthesis timescales with the inside-out formation of the Milky Way, the element abundances of the stars encode the temporal enrichment of the Galaxy and reveal stars' birth properties in terms of age and spatial location. We are now able to have a clearer picture of this as the field of Galactic archaeology has greatly expanded due to large multi-object stellar surveys, such as the Apache Point Observatory Galactic Evolution Experiment (APOGEE; ), the Galactic Archaeology with HERMES (GALAH; ), the Gaia-European Southern Observatory (ESO) survey (Gaia-ESO; ), and the Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST; ). These surveys enable the detailed study of the element abundances for >10^5 stars in the Galaxy. In addition to element abundances, another fundamental and time-invariant property of stars is their age. Age tells us when during the evolution of the Galaxy a star was formed. In fact, numerous studies have explored the relationship between stellar age and element abundances, or age-[X/Fe] relations (). These studies have shown that just by knowing a star's age and metallicity, [Fe/H], the element abundance, [X/Fe], can be predicted to a precision of 0.02 dex for many elements. Indeed, a star's age does prove to be a key link to understanding the nucleosynthetic history and evolution of the Galaxy. The only missing link now is where in the Galaxy a star was born. If we know a star's individual abundances, age, and birth site, we can begin to unravel the formation of the Milky Way disc with utmost detail. To this end, the element abundances of stars should prove very useful. Earlier works have demonstrated the feasibility of inferring the birth radius, R_birth. For example, <cit.> presented a largely model-independent approach for estimating R_birth for Milky Way disc stars, using [Fe/H] and age estimates from the local HARPS sample <cit.>. The assumptions relied on are (1) the interstellar medium (ISM) is well mixed at a given radius, (2) there exists a negative radial metallicity gradient in the ISM for most of the disc lifetime, (3) stars younger than 1 Gyr are expected to have experienced little migration, and (4) the Milky Way formed inside-out. Utilizing the R_birth derived in their work, they find that the ISM radial metallicity gradient in the Milky Way disc flattens with time. As noted in this study, processes like radial migration can blur R_birth signatures. With this in mind, <cit.> developed a model to derive R_birth by quantifying the radial migration in the Milky Way disc, using the ages and [Fe/H] of low-α disc stars.
In this work, it was assumed that (i) the metallicity of the ISM has negligible variations azimuthally, (ii) the Milky Way had a relatively quiescent life for the past 8 Gyr, and (iii) radial orbit migration is the only mechanism responsible for the scatter in age–metallicity at a given radius. Their model reproduced the observed data well and further found that the radial orbit migration efficiency in the Milky Way is strong. Recently, <cit.> proposed an empirical method to derive birth radii from age and metallicity measurements, with the assumptions that gas is well mixed in Galactic azimuth, that the Milky Way formed inside-out, and that there is a well-defined linear relation between metallicity and birth radius. Such in-depth studies to derive R_birth have been shown to be successful, with the help of various physically motivated assumptions and modeling. It is therefore worth asking if R_birth could be similarly derived with different assumptions, and specifically with a model that does not directly use present-day radius measurements. In addition, as detailed element abundances have been shown to have a direct link to ages, it is interesting to explore how detailed element abundances can potentially trace stars back to their birth sites. Fortunately, correlations between the birth radius and stellar properties have also been shown in cosmological hydrodynamical simulations, allowing methods of recovering the birth radius of stars to be explored. For example, <cit.> examined the reliability of inferring birth radii from the assumed linear relationship between the ISM metallicity and radius, using four zoom-in cosmological hydrodynamic simulations from the NIHAO-UHD project (). They found that precise stellar birth radii can be obtained for stars with age < 10 Gyr, as the stellar disc starts to form and the linear correlation between the ISM metallicity and radius increases. Also with the simulations from the NIHAO-UHD project, <cit.> showed the direct correlation between element abundances (specifically, [O/Fe] and [Fe/H]) and the birth location of stars. In this work, we want to recover the birth radius of stars simply based on their [Fe/H] and [α/Fe] abundances, as shown in simulation works (e.g. ). Instead of performing complex Galactic chemical evolution modeling, we assign each of the stars a birth radius based on their [Fe/H] and [α/Fe] abundances and examine the validity of this birth radius assignment. To do this, we explore the individual abundance distributions [X/Fe] across birth radii with the disc stars in GALAH DR3 data <cit.>. In Section <ref>, we describe the observational data we used in this study. In Section <ref>, we discuss our birth radius assignment and the simulation work by which we are motivated. In Section <ref>, we present our age-birth radius relation in two thin metallicity bins, and in Section <ref> we show the distribution of individual element abundances [X/Fe] across birth radii. The results presented in these two sections validate our birth radius assignment based on element abundances. Lastly, we summarize and discuss the results in Section <ref>. § OBSERVATIONAL DATA We take advantage of the Galactic Archaeology with HERMES (GALAH) survey data release 3 (DR3, ), which measures up to 30 element abundance ratios for elements in different groups: α, light/odd-z, iron-peak, and neutron-capture. The GALAH survey uses the HERMES instrument, a high-resolution (R ∼ 28,000) four-channel fibre-fed spectrograph (covering 4713–4903 Å, 5648–5873 Å, 6478–6737 Å, and 7585–7887 Å) on the Anglo-Australian Telescope <cit.>.
The catalogue contains 588,571 stars, with the stellar parameters determined using a modified version of the spectrum synthesis code Spectroscopy Made Easy (SME: ; ) and 1D MARCS model atmospheres. After the stellar parameters were estimated and fixed, one abundance was fitted at a time for the different lines/elements in the GALAH wavelength range <cit.>. In this work, we aim to study the distribution of individual abundances [X/Fe], which we take to be X = α, C, O, Mg, Al, Si, Ca, Mn, Y, and Ba, spanning the different groups of elements. In addition to the main catalogue, we also use the GALAH DR3 value-added catalogue that contains stellar ages, Galactic kinematics, and dynamics. The stellar ages were determined by the Bayesian Stellar Parameter Estimation code (BSTEP), an isochrone-based scheme that provides a Bayesian estimate of intrinsic stellar parameters from observed parameters by making use of stellar isochrones, adopting a flat prior on age and metallicity <cit.>. The Galactic dynamical information was calculated using galpy <cit.>. In the calculations, the best-fitting axisymmetric potential by <cit.> was used with a Solar radius of 8.21 kpc. We assemble a parent sample of qualified GALAH DR3 disc stars according to the following criteria: * flag_sp=0, flag_fe_h=0, flag_X_fe=0 * -2 < [Fe/H], [X/Fe] < 0.5, -1 < log g < 6 * 3500 K < T_eff < 6250 K, SNR = snr_c3_iraf > 40 * 7 < R < 9 kpc, and |z| < 2 kpc where X = α, C, O, Mg, Al, Si, Ca, Mn, Y, and Ba. We set the cut in element abundance to avoid extreme values. The flag_sp, flag_fe_h, and flag_X_fe are set to select stars with reliable stellar parameters and element abundance determinations. In addition, we limit the T_eff range such that the abundances are not affected by systematic temperature trends. This selection produces agreement between the T_eff values from GALAH+ DR3 and those from angular diameter-based measurements (e.g. ) for Gaia benchmark stars <cit.>. We show in the Appendix in Figure <ref> that there are only small slopes between the element abundances [X/Fe] and T_eff. These may be real or systematics inherited from stellar models. We employ a signal-to-noise ratio cut of SNR > 40 for the red band (CCD 3) to ensure good-quality spectra, as well as cuts in Galactocentric radius (R) and height from the disc plane (z) to select for disc stars. This results in a sample of 59,124 stars, and the stellar parameters are shown in Figure <ref>. The stars span a range of 0.004 to 13 Gyrs in age, with a median age = 5.7 Gyrs. The 16th and 84th percentiles of age are 3.7 Gyrs and 8.6 Gyrs respectively. Figure <ref> shows the density plots of the parent sample on the [Fe/H]-[X/Fe] plane, for elements C, O, Mg, Al, Si, Ca, Mn, Y, and Ba. § METHOD We aim to determine, given [Fe/H] and [α/Fe] abundances, the distribution of [X/Fe] across different birth radii (R_birth), under an assumed relation between [Fe/H]-[α/Fe] and R_birth. Cosmological simulations (e.g. ) demonstrate clear birth radius tracks on the [O/Fe] vs. [Fe/H] abundance plane. Figure <ref> is a reproduction of Figure 3 in <cit.> for the galaxy g2.79e12, showing the [O/Fe] vs. [Fe/H] plane at solar radius (7<R<9 kpc). The zoom-in simulation of g2.79e12 analyzed in <cit.> is taken from the Numerical Investigation of a Hundred Astronomical Objects (NIHAO) simulation suite of cosmological hydrodynamical simulations of Milky Way mass galaxies (). The total virial mass, total stellar mass, and the disc scale length of g2.79e12 are 3.13×10^12M_⊙, 15.9×10^10M_⊙, and 5.57 kpc.
Figure <ref> panels are colored by (a) birth radius, (b) age, (c) birth radius dispersion, and (d) stellar mass. In Figure <ref> panel (a), stars with high [O/Fe] (>0.3) are seen to mostly originate from the inner Galaxy, while stars with low [O/Fe] (< 0.2) are distributed across a wider range of birth radii, where larger birth radii are offset to lower metallicity. Panel (b) shows clear horizontal age gradients with older ages associated with higher [O/Fe]. In panel (c), there is high birth radius dispersion around [Fe/H] = -1.0, [O/Fe] = 0.3, as well as in the lower-right region of the [O/Fe] vs. [Fe/H] plane towards high metallicity. The stellar mass is also higher in the lower-right region, as shown in panel (d). Motivated by the results from the <cit.> simulations, specifically the birth radius-element abundance trends, we lay down seven tracks (2, 4, 6, 8, 10, 12, and 14 kpc), as shown in Figure <ref> panel (a), in the [α/Fe] vs. [Fe/H] plane from GALAH data. These tracks can be described by the following equation: R_birth = -40×([α/Fe] + 0.80×exp(0.4×[Fe/H]) - 0.81) + 8 (in kpc), which was obtained by fitting birth radius tracks similar to Figure <ref> panel (a). We further assign every star in our sample a birth radius according to this equation, with known [Fe/H] and [α/Fe]. The number of stars in bins between each track is as follows: 3804 (2-4 kpc), 9328 (4-6 kpc), 15720 (6-8 kpc), 18628 (8-10 kpc), 9565 (10-12 kpc), 1306 (12-14 kpc). Stars with assigned R_birth < 0 kpc are removed (170 stars, or 0.29% of all qualified disc stars). Instead of using the oxygen abundance [O/Fe], we choose to use the alpha element abundance [α/Fe] because (1) it is better measured, as the mean uncertainty in [α/Fe] is smaller than that of [O/Fe], and (2) in the simulations performed by <cit.>, [O/Fe] is intended more as a tracer of alpha-elements than of the specific element O. The absolute values of each radial track are not calibrated to match the Milky Way, but the range is consistent with the birth radius range used in other studies, e.g. . We adopt this form of the relation between [Fe/H]-[α/Fe] and birth radius and examine the overall effect of birth radius variations on the element abundance distributions, and on the birth radius at fixed age, if such a relation holds in the Milky Way. The birth radius increases as [α/Fe] decreases (from top to bottom), and the y-axis spacing between two neighboring curves is around 0.05 dex. The distribution of the parent sample stars in the [α/Fe] vs. [Fe/H] plane is also shown in Figure <ref> panel (a), colored by age, with the median ([Fe/H], [α/Fe]) shown as a black circle. We lay down the tracks such that the middle track goes over the median ([Fe/H], [α/Fe]) point because most of the stars are located near the origin, i.e. ([Fe/H], [α/Fe]) = (0, 0) (see density plot in Figure <ref> panel (b)). Furthermore, the birth radius for the majority of stars roughly follows a similar distribution as their current Galactocentric radii <cit.>, which is around 8 kpc. Additionally, the distribution of stellar ages exhibits a decreasing trend going towards lower [α/Fe] and higher [Fe/H], as shown in Figure <ref> panel (a). As shown in Figure <ref> panel (b), the stellar population density is very non-uniformly distributed in the [Fe/H]-[α/Fe] plane. We wish to carry out an analysis of how the age and individual abundance distributions of stars change with birth radius, given our model of R_birth assigned in the [Fe/H]-[α/Fe] plane. Therefore, the varying density distribution of stars in this plane is not information we wish to propagate.
To eliminate the impact of the uneven density of stars in the - plane for this analysis, we use a grid of evenly spaced representative populations in [α/Fe] vs. [Fe/H]. Along the x and y axis, the grid spacing is 0.051 and 0.019, respectively. Bins with N < 20 stars are removed, mostly on the edges, because we want our binned data to be representative of the neighboring star population on the abundance plane. The remaining sample of 231 binned data points, including 57,858 stars, is summarised in Figure <ref> panel (c) colored by mean birth radius. We use these binned data points, which give us an even sampling across the -plane in , for further analysis. § BIRTH RADIUS DISTRIBUTIONS WITH AGE AND METALLICITY We explore how the birth radius distribution of stars in the - plane as shown in Figure 4 (c) changes as a function of age and metallicity. In Figure <ref>, we show the birth radius distribution for a high metallicity (-0.25<[Fe/H]<0, top panel) and low metallicity (-0.5<[Fe/H]<-0.25, bottom panel) sample. Within the same metallicity bin, the sample is broken down into three stellar ages bins. These are shown separately with different colors in the sub-panels of Figure <ref>, with the lightest to darkest color for the youngest to oldest stars, respectively. The mean birth radius values for the three age bins lie at 10.1 kpc (4-6 Gyr bin), 8.2 kpc (6-8 Gyr bin), and 5.0 kpc (8-10 Gyr bin) for the high metallicity sample, and at 10.7 kpc (6-8 Gyr bin), 8.1 kpc (8-10 Gyr bin), and 5.2 kpc (10-12 Gyr bin) for the low metallicity sample. For both the high and low metallicity samples, the birth radius distribution for older stars generally peaks at a smaller birth radius compared to younger stars, exhibiting an inside-out formation trend similar to other studies (e.g. ). Furthermore, the width of the birth radius distributions also has a correlation with age, in which the width decreases with increasing age. The median absolute deviations (MAD) of the three high metallicity age bins are 1.4 kpc (4-6 Gyr bin), 1.2 kpc (6-8 Gyr bin), and 0.8 kpc (8-10 Gyr bin), and the values for the low metallicity sample are 1.5 kpc (6-8 Gyr bin), 1.2 kpc (8-10 Gyr bin), and 0.8 kpc (10-12 Gyr bin). Here we choose MAD to describe the dispersion because our sample distribution is non-Gaussian, and MAD is less sensitive to extreme values. Under this assumed model between birth radius and the - plane, this is consistent with an inside-out formation of the Milky Way; the older stars are more concentrated in the inner Galaxy. The younger stars on the other hand show mean distributions at larger radii and with wider distributions across Galactic radii. Interestingly, we do not see any obvious age- trends when examining the data across all [Fe/H], i.e. without looking at different metallicity bins. This signal is erased, as the mean age distribution is a function of [Fe/H], so this age gradient, which is consistent with the idea of `inside-out' formation, is only seen when looking at the distribution of stellar ages in small ranges of [Fe/H] in our sample. In the pre-binned data (shown in Figure 3, panel (b), there is a clear density peak in the distribution in the - plane; this non-uniform density would presumably enable signatures in age and radius, which are correlated with this plane (i.e. ), without metallicity binning, as the majority of stars are at one particular metallicity already. The overall age gradient seen when examining all stars in the Milky Way (e.g. 
) is similarly presumably sensitive to the underlying density distribution of stars as a function of metallicity. This is an example of the Yule-Simpson paradox, a phenomenon in which a trend appears in several groups of data, but disappears or reverses when the groups of data are combined. Examples of Yule-Simpson's paradox in Galactic archaeology can be found in <cit.>. Additionally, samples with different metallicity are dominated by stars of different ages. As shown in <cit.> Figure 2, the distribution of current radius R at low metallicity ([Fe/H] = -0.75) is dominated by 7-10 Gyr old stars, while the 1-3 Gyr old star population becomes the majority at high metallicity ([Fe/H] = 0). This change in age dominance with [Fe/H] also appears in Figure <ref>. For the high metallicity sample, we are able to make a bin for 4-6 Gyrs stars but not for 10-12 Gyrs due to having too few old stars in the sample, and this is the opposite in the low metallicity sample. Therefore, we have to make bins according to [Fe/H] to account for the differing dominant age populations. In addition, this allows us to see inside-out growth in the level of chemical enrichment for mono-age populations. Comparing the distributions of the two 6-8 Gyr age bins (colored light pink) in both high and low metallicity samples, we find that the low metallicity sample peaks at a larger . A similar trend also exists in the distributions for age = 8-10 Gyr stars (colored red). By selecting narrow metallicity bins, we show that the inside-out formation holds for different metallicities. We summarise the birth radius-age relation, as shown in the top-left panel of Figure <ref>. Overall, as the birth radius increases, the stellar age decreases. Similarly, as birth radius increases, the mean metallicity, [Fe/H], decreases, as shown in the bottom-left panel of Figure <ref>. The top-right panel of Figure <ref> shows the age dispersion as a function of birth radius. We see that small birth radii correspond to the highest age dispersions. Similarly, in the bottom-right panel of Figure <ref>, we see that the [Fe/H] dispersion is highest at the smallest radii. § INDIVIDUAL ABUNDANCE DISTRIBUTIONS AT DIFFERENT BIRTH RADII We investigate the abundance distributions for the elements C, O, Mg, Al, Si, Ca, Mn, Fe, Y, and Ba, spanning the α, odd-z, iron-peak, and neutron-capture groups of elements, at different birth radii. These [X/Fe] distributions are shown in Figure <ref>. The number of data points from Figure 4 (c) in each of the birth radius bins is 84 (2-6 kpc), 84 (6-10 kpc), and 53 (10-14 kpc). We find a bimodal distribution towards small birth radius bins. High precision observational measurements of - in the solar neighborhood show a bimodality termed the `low' and `high' alpha discs (e.g. ). Across a wider Galactic radius range these change in their density contribution; the high-alpha sequence is concentrated to the inner Galaxy and the low-alpha sequence extends to the outer Galaxy (e.g. ). The sampling we use for our analysis is evenly spaced across the full - plane as shown in Figure <ref> (c). However, when we examine the individual abundance distributions, a bimodality appears in a number of individual elements at the smallest birth radii. This is presumably due to the contribution from both the high and low alpha discs at fixed birth radius in the inner Galaxy. In effect, this is a strong prediction of our model, that the disc is bimodal in elements at small birth radius. 
Furthermore, most of the elements show that the [X/Fe] distribution changes from wide (2-6 kpc) to narrow (10-14 kpc) as the birth radius increases. Metallicity, [Fe/H]: The metallicity distribution at a small birth radius has a higher mean value. A decreasing mean metallicity gradient is observed with present-day guiding radius from the Milky Way center to the outer region (e.g. ). This is inherited from a birth gradient in the gas metallicity (e.g. ) but has presumably been weakened by radial migration over time (e.g. ). Carbon: Carbon is mainly produced in massive stars, followed by low-mass AGB stars <cit.>. Therefore, carbon distributions should be similar to that of α-elements, as the majority of the α-elements are produced in massive stars. The age-abundance relation for carbon in other observational works (e.g.) shows a positive gradient, indicating that [C/Fe] is larger for older stars. In this study, the carbon abundance [C/Fe] has little relation with value. We see a weak and opposite trend where there is a slight shift in peak position (i.e. larger bins have greater peak [C/Fe]). However, Carbon changes over the evolution of the star due to dredge-up, so perhaps this is representative of the impact of the intrinsic evolution of the element rather than extrinsic (ISM). Oxygen, Magnesium, Silicon, and Calcium (α-elements): For the α-elements Mg, Si, and Ca, the distributions peak at a smaller mean [X/Fe] as the birth radius increases. The α-elements are mainly produced through Type II Supernovae and their relative ISM contribution is diluted by the increasing supernovae Ia iron-peak pollution. Therefore, we expect the abundance of α-elements, as a function of iron, to be lower in younger stars. We note that the oxygen abundance [O/Fe] shows the smallest evolution across different birth radii. The distribution is wider at smaller birth radii, and each of the distributions overlaps significantly. We see little variation in [O/Fe] with , which contradicts the progression found in other works (e.g. ). Manganese (iron-peak): The iron-peak element Mn has a higher mean [Mn/Fe] value toward smaller birth radii. The iron-peak elements like Mn are generally synthesized in Type Ia supernovae and also in collapse supernovae. At the center of the Milky Way, younger stars are formed from more enriched gas compared to the outskirts of the Galaxy. As [Mn/Fe] increases with [Fe/H] (e.g. ), [Mn/Fe] is expected to be higher in the Galactic center compared to that in the outskirts. In the age-abundance trends of Mn examined by <cit.> and <cit.>, we see that both studies reveal a relatively flat but still positive age-abundance slope. In general, our result agrees with those from the previous studies. Aluminum (Odd-z): The odd-z element Al also has a higher mean abundance at smaller birth radii. Based on the prediction from the chemical evolution model of the Milky Way, [Al/Fe] decreases with time for stars with age 12 Gyrs and younger ( Figure 2). Since the majority of our sample stars are younger than 12 Gyrs, we expect our sample to behave similarly (i.e. decreasing [Al/Fe] with time). Moreover, <cit.> examined the age-abundance relation of stars at solar metallicity and discovered a positive relation wherein [Al/Fe] increases with increasing age. Such a trend is also seen by <cit.> in their analysis of the Sun-like stars in the solar neighborhood. 
Thus, [Al/Fe] is expected to increase with decreasing birth radii, as predicted by both the chemical evolution model and the age-[Al/Fe] relation, and as shown in our results. Interestingly, the 2-6 kpc birth radius bin does not follow the general trend of increasing dispersion with smaller birth radii. However, this is because, for all the binned data points with Al abundance available, the ones that fall in the 2-6 kpc birth radius bin do not span a wide range in [Fe/H], and thus the dispersion of the bin is smaller. Barium & Yttrium (Neutron-capture): The two neutron-capture elements, Ba and Y, though centered on different values, have similar abundance distributions for stars at different birth radii; that is, the distribution peaks at a larger [X/Fe] value as birth radius increases. They exhibit the opposite trend to the aforementioned elements C, O, Mg, Al, Si, Ca, and Mn. This trend is consistent with the age-abundance relation for the neutron-capture elements from the literature (e.g. ). According to the negative age-[Ba/Fe] relation <cit.> as well as the age-birth radius relation (Figure <ref> top-left panel), the older population was born at smaller mean birth radii with a lower [Ba/Fe] value. It is reassuring that Ba and Y have abundance distributions that behave similarly with birth radius, as both are considered s-process elements. Furthermore, we calculate and tabulate the R_birth-[X/Fe] gradients for the low-α stars. In Figure <ref>, we present the [X/Fe] vs. R_birth plots for the low-α stars in GALAH DR3, with the black lines representing the best-fit gradients, colored by log density. The vertical error bars reflect the MAD of [X/Fe] in small bins with bin width = 2 kpc. The gradient results are summarized in Table <ref> column 3. The reason we focus on the low-α population is that they exhibit the strongest change in element abundances across radius, whereas for the high-α stars there is no obvious abundance trend associated with radius (e.g. ). Adopting the <cit.> cuts for low-α stars ([Mg/Fe]>0.12-0.13[Fe/H] if [Fe/H]<0; [Mg/Fe]>0.12 if [Fe/H]>0), the number of low-α stars in our sample is ∼56,000. The inner-most bin (i.e. R_birth < 5 kpc) seems to be an outlier to the general trend (referring to Figure <ref>). Therefore, to justify a linear fit and gradient metric, we excluded the inner-most data points in our gradient calculations. The gradients are calculated over a range of 5-13 kpc. The largest abundance gradient with R_birth is seen in [Fe/H], at -0.067 dex/kpc, followed by the individual element [O/Fe] with an [X/Fe]-R_birth slope of 0.029±0.0002 dex/kpc. We emphasize that in the GALAH sample we use, the present-day radius is limited to the solar neighborhood, with a mean present-day radius of 8.14±0.35 kpc. However, as stars migrate from birth, this survey still gives us access to stars born all over the disc, as parameterized in our model of R_birth (from 2-14 kpc). In APOGEE, the survey spans a present-day Galactocentric radius of 0.01-20 kpc, so we can directly compare and contrast our results for birth radius to the present-day radius with APOGEE. For example, Table <ref> column 1 shows the abundance gradients for APOGEE DR16 low-α disc stars (i.e. [α/M]<0.12, |z|<1) with current radius in the range of 5-13 kpc, obtained from <cit.> Figure 7. We show the seven elements [X/Fe] (where X = C, O, Mg, Al, Si, Ca, and Mn) in APOGEE DR17 <cit.> that are in common with the elements used in this study. We also calculate gradients for the element abundances [X/Fe] independently, using ∼ 63,000 APOGEE DR17 low-α stars.
We adopt similar cuts as <cit.> (i.e. 4800 K<  <5800 K, <3.6, [α/M]<0.12, and |z|<1). The APOGEE gradients are summarized in Table <ref> columns 1 and 2. In column 4, since GALAH covers a narrow range in current radius compared to APOGEE, the present-day radius-abundance gradients for GALAH low-α stars around the solar neighborhood only (7<R<9 kpc) are shown. We discuss these gradient comparisons in more detail in Section <ref> below. § DISCUSSION In this work, we explore the element abundance distributions of stars as a function of birth radius which we inferred from the [Fe/H]-[α/Fe] plane alone, as motivated by cosmological simulations. We now discuss the validity of our assigned  tracks and the implications of our  estimates on the star formation history of the Galaxy. We test two other models for assigning the birth radius. We lay down horizontal and vertical tracks, on the vs. plane. From these alternate tracks, we produce [X/Fe] distributions of these stars with different , similar to Figure <ref>. In the horizontal assignment, we see that the mean [Mn/Fe] and [Fe/H] values increase with increasing . This contradicts the observed [Fe/H] gradient (i.e. higher [Fe/H] at the center) of the Galaxy due to inside-out formation and therefore its longer history of star formation. In addition, there is no obvious trend in the dispersion across different bins for C, O, Al, Mn, Y, and Ba. As for the vertical assignment, the mean abundances of all four α-elements, O, Mg, Si, and Ca, increase with , which does not agree with what is observed with the present-day guiding radius. Observations show that as radius increases the low-α populations dominate and in the inner Galaxy the high-α population has the highest density (e.g. ). Therefore, the alternative models we propose result in [X/Fe] distributions that are inconsistent with that of observations of present-day guiding radius. However, in general the assignments motivated by the NIHAO-UHD simulations give rise to trends in the individual abundances [X/Fe] that are consistent with observations of element abundance distributions with present-day guiding radius. We have the expectation that the element abundance gradients and dispersions as a function of birth radius will be higher amplitude than that of the present-day guiding radius due to the impact of radial migration. Therefore, this gives us a better insight into the element abundance distributions at stellar birth place and time in the Milky Way disc. Due to radial migration <cit.>, we expect gradients in [X/Fe]- to be weakened over time. Therefore, abundance gradients across should be steeper than present-day gradients. This is indeed what we find for most elements. Using the APOGEE DR16 data, <cit.> report negative present-day gradients across radius in the low-α disc (i.e. [α/M]<0.12, |z|<1) for [Fe/H], as well as as the individual elements [X/Fe] where X = C, Al, Mn. For the elements X = O, Mg, Si, Ca they report positive gradients with Galactic radius. These gradients are summarised in Table <ref> column 1. In column 2 of this table, we report the present-day abundance gradients we calculate with APOGEE. We find good agreement with the <cit.> analysis with the exception of a few elements. We note that the [Mg/Fe] and [Al/Fe] present-day abundance gradients are opposite in sign compared to <cit.> gradients. However, the gradients for these two elements are very shallow. 
Some differences are not unexpected as we use the ASPCAP abundances from APOGEE, whereas the <cit.> paper uses a data-driven approach to report the calibrated abundances on which those gradients are based. Similarly, we report the present-day gradients in GALAH (column 4) for the low-alpha stars (adopting the cuts described above). Note that the GALAH present-day gradients are over a restricted radius range, compared to APOGEE. Again there are some differences, and the GALAH gradients are shallower than the APOGEE gradients. The present-day element abundance gradients with radius in columns 1, 2, and 4 of Table 1 serve as a comparison to our calculated birth radius gradients (column 3). We find that the GALAH birth radius gradients are steeper than both the GALAH present-day local gradient (column 4) and the APOGEE present-day gradients (with wider present-day radius range; columns 1 & 2). The magnitude of the change in gradients varies between elements. We can therefore infer from our comparisons between columns 1 and 3 that gradients between elements and radius flatten over time. The element [Fe/H] shows the steepest gradient of -0.067 dex/kpc across birth radius. This flattens by on the order of 13 percent, to -0.058 dex/kpc, from birth to present-day radius, well in agreement with recent theoretical predictions <cit.>. The elements [X/Fe] where X = O, C, Mn, and Al all have the next steepest gradients, from -0.021 dex/kpc to 0.029 dex/kpc with birth radius. These flatten by between ≈ 0.02-0.03 dex/kpc such that the present-day gradients for these elements vary between ≈ -0.014 and 0.002 dex/kpc. We also note that some of the gradients change sign between birth and present-day radius (i.e. C, Mg, Ca, and Y). A similar flattening of the [X/H] radial gradients over time is also observed in <cit.>, in which an empirical approach from <cit.> was used to derive R_birth estimates for APOGEE DR17 red giant stars based on their age and [Fe/H]. The individual abundances of stars as a function of birth radius record the star-forming environment at that location and time in the disc. A recent study by <cit.> employed chemical evolution modeling <cit.> to use ages and individual abundances of GALAH stars to infer environmental parameters (i.e. high-mass slope of the IMF (α_IMF), number of SN Ia exploding per solar mass over 15 Gyr (log_10(SNIa))). Their analysis assumed a link between birth radius and small bins in [Fe/H]-[Mg/Fe]-[Ba/Fe]-age for the chemical evolution model, as representative of the interstellar medium conditions at different birth radii. They subsequently examined the model parameter gradients across present-day radius. They found that the abundances give rise to a gradient in the high-mass end of the disc's initial mass function. They report that this is more top-heavy towards the inner disc, and more bottom-heavy in the outer disc. Using our birth radius assignment, it would be possible to directly infer the environmental parameters as a function of birth radius and compare the conditions at different birth places and times in the star-forming disc directly. § CONCLUSION This work examines the distribution of individual abundances [X/Fe] of elements C, O, Mg, Al, Si, Ca, Mn, Y, and Ba for disc stars at different birth radii. To do this, we assumed seven birth radius tracks across the [α/Fe] vs. [Fe/H] plane of ∼ 59,000 GALAH DR3 disc stars and assigned each star a birth radius. This formalism is based on the NIHAO-UHD simulations <cit.> (see Figures <ref> and <ref>).
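For concreteness, the track assignment used throughout this work amounts to evaluating a single analytic expression per star and binning the result; the following is only a minimal sketch of that step (not the analysis pipeline used for the figures), with hypothetical variable names and the functional form quoted in the Method section:

import numpy as np

def assign_birth_radius(fe_h, alpha_fe):
    # R_birth (kpc) from [Fe/H] and [alpha/Fe]; larger [alpha/Fe]
    # (older, more alpha-enhanced stars) maps to smaller R_birth.
    fe_h = np.asarray(fe_h, dtype=float)
    alpha_fe = np.asarray(alpha_fe, dtype=float)
    return -40.0 * (alpha_fe + 0.80 * np.exp(0.4 * fe_h) - 0.81) + 8.0

# Stars near the median ([Fe/H], [alpha/Fe]) = (0, 0) land at ~8 kpc.
r_birth = assign_birth_radius([0.0, -0.5, 0.2], [0.0, 0.25, -0.05])
keep = r_birth > 0.0                  # discard stars with assigned R_birth < 0 kpc
edges = np.arange(2.0, 16.0, 2.0)     # 2-kpc-wide birth-radius bins
bin_index = np.digitize(r_birth[keep], edges)

Any recalibration of the tracks would only change the constants in this expression, leaving the rest of the analysis unchanged.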
We emphasize that our adopted model of birth radius is not calibrated to quantitatively map a location in the [Fe/H]-[α/Fe] plane to a birth radius. Rather, this serves as a tool to trace the element abundance and age distribution of stars across the disc from their origin. Via this approach, we can map variations in time of birth and in individual channels of enrichment to differences in the star-forming environment over time and radius. Below we summarize our main results: * The R_birth distribution as a function of age supports an inside-out growth for the Milky Way disc (Figure <ref>). There is a larger mean value in R_birth for the younger population (i.e., ∼10 kpc) compared to the older population (i.e., ∼4 kpc). This result is consistent with a number of earlier studies (e.g. ). * The R_birth distribution dispersions change with age as well, i.e., the median absolute deviation changes from 0.8 kpc to 1.5 kpc going from older to younger stellar populations, as the Milky Way disc grows with time and therefore has star formation over a larger region. * There is a clear progression in the median [X/Fe] trend with R_birth: Mg, Si, Ca, Mn, and Al all decrease while C, O, Y, and Ba all increase with increasing R_birth. * For the low-α population, the abundance gradients are steeper in birth radius compared to present-day radius. The [Fe/H]-R_birth gradient measures -0.067±0.0002 dex/kpc compared to the [Fe/H] present-day gradient of -0.058 dex/kpc. The [O/Fe] abundance is the next strongest indicator of R_birth; it exhibits the steepest [X/Fe]-R_birth slope of the [X/Fe] measurements (see Table <ref>) and is 0.029±0.0002 dex/kpc in R_birth and 0.002 dex/kpc in present-day radius. We tested two other birth radius assignments based on stars' locations in the [α/Fe] vs. [Fe/H] plane, but neither returns physically plausible [X/Fe] distributions across radius. Furthermore, our model adopted from the simulation gives sensible results that are aligned with expectations. For example, because of radial migration, we expect the gradients across birth radius to be steeper than the present-day gradients, which we find. Therefore, the adopted model for birth radius appears physically plausible and presumably gives insight into the relative distribution of individual abundances across the disc as it formed. Our model uses no direct information about the present-day radius and is therefore also a useful comparison to models that do assume a relationship with the present-day radius. In summary, aided by R_birth tracks inspired by a cosmological hydrodynamical simulation of a Milky Way-like galaxy and assumptions constrained to the [α/Fe] vs. [Fe/H] plane, we are able to recover the inside-out growth of the Milky Way disc and the spatial evolution in its chemical abundance distributions. This work serves as a proof of concept of the legitimacy of this modeling approach, and in the future it can be applied to additional large spectroscopic survey data. This includes data that cover a larger area of the disc, such as SDSS-V Milky Way Mapper. In addition, chemical evolution modeling would add another dimension in investigating the validity of these R_birth assignments (e.g. ). Nonetheless, this work shows that assigning birth radii to stars in the Milky Way and studying the element abundance distributions over time and birth place is very promising. This is demonstrative of the utility of using ensembles of individual abundances to trace the formation of the Milky Way disc. § ACKNOWLEDGEMENTS AC acknowledges support from the Science and Technology Facilities Council (STFC) [grant number ST/T000244/1] and the Leverhulme Trust.
TB's contribution to this project was made possible by funding from the Carl Zeiss Stiftung. § DATA AVAILABILITY The GALAH DR3 data used in this article are available at https://www.galah-survey.org/dr3/the_catalogues. The APOGEE DR17 data used in this article are available at https://www.sdss4.org/dr17. Simulation data from the NIHAO-UHD project are available at https://tobias-buck.de/#sim_data. Other data used in this article can be made available upon reasonable request to the corresponding authors. § ELEMENT ABUNDANCE VS. EFFECTIVE TEMPERATURE We include here the element abundance [X/Fe] vs. effective temperature plot to show that there is no trend associated with [X/Fe] and T_eff; see Figure <ref>.
http://arxiv.org/abs/2307.03866v1
20230708000448
Ultrathin films of black phosphorus as suitable platforms for unambiguous observation of the orbital Hall effect
[ "Tarik P. Cysne", "Marcio Costa", "Marco Buongiorno Nardelli", "R. B. Muniz", "Tatiana G. Rappoport" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
[email protected] Instituto de Física, Universidade Federal Fluminense, 24210-346 Niterói RJ, Brazil Instituto de Física, Universidade Federal Fluminense, 24210-346 Niterói RJ, Brazil Department of Physics and Department of Chemistry, University of North Texas, Denton TX, USA Instituto de Física, Universidade Federal Fluminense, 24210-346 Niterói RJ, Brazil [email protected] Centro de Física das Universidades do Minho e do Porto (CF-UM-UP) e Departamento de Física, Universidade do Minho, P-4710-057 Braga, Portugal Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21941-972 Rio de Janeiro RJ, Brazil Phosphorene, a monolayer of black phosphorus, is a two-dimensional material that lacks a multivalley structure in the Brillouin zone and has negligible spin-orbit coupling. This makes it a promising candidate for investigating the orbital Hall effect independently of the valley or spin Hall effects. To model phosphorene, we utilized a DFT-derived tight-binding Hamiltonian, which is constructed with the pseudo atomic orbital projection method. For that purpose, we use the paoflow code with a newly implemented internal basis that provides a fairly good description of the phosphorene conduction bands. By employing linear response theory, we show that phosphorene exhibits a sizable orbital Hall effect with strong anisotropy in the orbital Hall conductivity for the out-of-plane orbital angular momentum component. The magnitude and sign of the conductivity depend upon the in-plane direction of the applied electric field. These distinctive features enable the observation of the orbital Hall effect in this material unambiguously. The effects of strain and of a perpendicularly applied electric field on the phosphorene orbital-Hall response are also explored. We show that a supplementary electric field applied perpendicular to the phosphorene layer in its conductive regime gives rise to an induced in-plane orbital magnetization. § INTRODUCTION The phenomenon known as the orbital Hall effect (OHE) is characterized by the emergence of an orbital angular momentum (OAM) current that flows transversely to the direction of an applied electric field. In contrast to the spin Hall effect (SHE), the OHE does not require the presence of spin-orbit interaction to occur. Although the OHE was predicted nearly two decades ago <cit.>, the prospect of using it to generate OAM currents in certain materials has only recently sparked great interest in the solid-state physics community <cit.>. OAM currents can be produced in a wide range of materials, and their intensities can exceed those of spin currents. Furthermore, they can be injected into adjacent elements to exert torque on magnetic units, expanding their possible applications in orbitronics <cit.>. As a matter of fact, light metals with weak spin-orbit coupling are being explored as a means of generating orbital currents in three-dimensional metals <cit.>. Recently, orbital torques have been realized in light-metal/ferromagnet heterostructures, providing indirect but robust experimental evidence of the OHE <cit.>.
The OHE has also been investigated in two-dimensional (2D) materials that, in some cases, may host an orbital Hall (OH) insulating phase, characterized by a finite OH conductivity plateau located within the insulating band gap <cit.>. Recent studies have shed light on the fascinating properties of the OH insulating phase in these materials, such as its connection with higher order topological phases <cit.> and the encoding of non-trivial topology associated with OAM in an orbital Chern number <cit.>. The difficulty in discerning the OHE from other angular-momentum transport phenomena has hindered its unequivocal direct observation. For example, in some cases the spin accumulation produced by the spin Hall effect may be hard to distinguish from its orbital angular momentum counterpart. The valley Hall effect (VHE) induced by a longitudinally applied electric field that occur in non-centrosymmetric lattices with multi-valley structure in the Brillouin zone involves the transverse flow of valley currents that may also carry magnetic moment <cit.>, which can be hard to dissociate from the intra-atomic orbital Hall contribution. Multi-orbital 2D materials possess natural symmetry constraints that lead to various types of orbital hybridization, which can maximize the OHE <cit.>. However, to single out the OHE unequivocally it is crucial to identify materials with weak spin-orbit coupling that display no significant spin Hall effect (SHE), nor VHE or magnetoelectric effects that may mask the OHE. In this article, we suggest that phosphorene is a very suitable material for direct observing the OHE in 2D materials. It is a centrosymmetric semiconductor with a sizeable direct band gap at the Γ point of the 2D BZ <cit.> that does not host VHE. In the absence of reasonably strong electric fields, applied perpendicularly to the layer, it behaves as an ordinary insulator and shows no spin Hall effect (SHE) within its band gap <cit.>. The spin-orbit interaction in phosphorene is extremely weak <cit.> and consequently it also displays negligible SHE in the metallic regime in comparison with the OHE, as we shall see later. Symmetry prevents the appearance of the magneto-electric effect in phosphorene, even in the presence of strain <cit.>. Here, we have performed density functional theory (DFT) calculations combined with linear-response theory to analyze the OH response in phosphorene. Our calculations show that phosphorene exhibit sizeable anisotropic OH conductivities that change sign for in-plane electric fields applied along the armchair (x̂) and zigzag (ŷ) cartesian directions depicted in Fig. <ref>. These features hold in the presence of moderate in-plane strain and perpendicularly applied electrical fields along ẑ. We also show that the perpendicular electric field allows the occurrence of a current-induced orbital magnetization in the plane of phosphorene. § DFT DERIVED HAMILTONIAN Phosphorene is a two-dimensional material composed of a single layer of phosphorus atoms arranged in a distorted honeycomb lattice structure (figure <ref>(a)), similar to graphene. However, unlike graphene, the lattice structure of phosphorene is puckered, with a non coplanar configuration as illustrated in Figure <ref>(b). Our DFT calculations <cit.> were carried out with the plane-wave-based code Quantum Espresso <cit.> to compute the band structure and eigenstates of phosphorene. 
The generalized gradient approximation (GGA) <cit.> was used to treat the exchange and correlation potential, while fully relativistic projected augmented wave (PAW) potentials <cit.> were employed to describe the ionic cores. To ensure accurate results, we set the wavefunctions cutoff energy to 44 Ryd and the charge density cutoff energy is ten times larger. Our self-consistent calculations (SCF) were executed with a linear density of k-points of 12.0/Å^-1 in the 2D Brillouin zone, and a minimum of 15 Å of vacuum is taken to avoid spurious interactions. We included a static electrical field (along the z direction) using a full SCF calculation via the modern theory of polarization <cit.> Figure <ref>(c), shows the band structure of phosphorene displaying its direct bandgap at the Γ point. Phosphorene's puckered crystalline structure is highly anisotropic, as evidenced by its energy spectrum near Γ, which presents a parabolic dispersion along the Γ-Y direction and a linear behavior along Γ-X. Furthermore, the puckering of the lattice has a notable impact on the mechanical and electronic characteristics of phosphorene. It renders the material more susceptible to strain, as deformation can significantly alter its bandgap and electronic transport properties <cit.>. To perform linear response calculations, we utilized the pseudo atomic orbital projection method <cit.> implemented in the paoflow code <cit.>. This approach involves constructing an effective tight-binding Hamiltonian, with no adjustable parameters, from the DFT calculations. In general, we project the plane-wave Kohn-Sham orbitals onto the compact subspace spanned by the pseudo atomic orbitals (PAO), which are naturally included in the PAW potentials. The vast majority of cases can be accurately described by this approach with an excellent agreement between the DFT and paoflow band-structure. Nevertheless, occasionally the PAO basis fails to reproduce the conduction bands, especially when the unoccupied bands have a relatively strong character of an orbital that is not included in the PAO base, as in the case of phosphorene. Its conduction, and to a minor degree the valence bands, are highly hybridized with d-orbitals <cit.>. Since the pseudo potential used in the calculation (P.rel-pbe-n-kjpaw_psl.1.0.0.UPF) is generated only with s and p orbitals, this original approach fails. To circumvent this problem, we used the recently implemented paoflow internal basis, which is constructed by solving the atomic DFT problem for an all electron configuration up to desired orbital. Once the atomic wavefunction is obtained the DFT plane-wave wavefunctions are projected as described in ref. <cit.>. Figure <ref>(c) shows the effective tight-binding and the DFT band-structure calculations superimposed. This approach significantly reduces the computational cost of performing large k-space numerical integration. We have previously used this method to investigate distinct characteristics of different systems, such as: spin dynamics <cit.>, as well as transport <cit.> and topological properties <cit.>. The orbital Hall conductivity calculations were performed with a reciprocal space sampling that is ten times larger than the one used in our DFT-SCF calculations. § OHE CALCULATIONS Within linear response theory, the current density of angular momentum with polarization η, flowing along the μ direction (𝒥^X_η_μ), can be generically expressed in terms of the angular momentum conductivity tensor by 𝒥^X_ η_μ=∑_νσ^X_η_μ,νℰ_ν. 
Here, ℰ_ν symbolizes the ν-component of the applied electric field; η, μ and ν label the Cartesian components x,y,z. X_η represents the η-component of either the orbital angular momentum operator (ℓ̂_η) or the spin operator (ŝ_η), depending on the nature of the induced angular momentum that drifts. The conductivity tensor is given by σ^X_η_μ,ν=e/(2π)^2∑_n∫_BZ d^2 k f_n kΩ_μ,ν , n^X_η ( k), where, the orbital (spin) Berry curvature Ω_μ,ν , n^X_η ( k)= 2ħ∑_m≠ nIm[ ⟨ u_n, k|j_μ, k^X_η|u_m, k⟩⟨ u_m, k|v_ν, k|u_n, k⟩/(E_n, k-E_m, k+i0^+)^2]. The ν-component of the velocity operator may be obtained by v_ν, k=ħ^-1∂ℋ ( k)/∂ k_ν, where ℋ ( k) represents the Hamiltonian in reciprocal space, and k stand for the wave vector. Here, |u_n, k⟩ is the periodic part of the Bloch eigenstate of ℋ ( k), associated with band energy E_n, k and f_n k symbolizes the Fermi-Dirac distribution function. The orbital (spin) angular momentum current operator that flows along the μ-direction with orbital (spin) polarization in the η-direction, is defined by j_μ, k^X_η=(X_ηv_μ, k+v_μ, kX_η)/2, where X_η=ℓ̂_η (ŝ_η). § RESULTS AND DISCUSSION Figure <ref>(d) shows the orbital Hall conductivities σ^L_z_xy and σ^L_z_yx, calculated as functions of Fermi energy, for in-plane electric fields applied along the ŷ and x̂ directions, respectively. Both conductivities present a plateau inside the energy-band gap. Phosphorene has been proposed to be a higher-order topological insulator <cit.>, a type of topological state that was recently connected to the orbital Hall insulating phase <cit.>. We note that σ^L_z_xy is markedly different from σ^L_z_yx inside and close to the energy-band gap, where they have opposite signs. This reflects the high anisotropy of the phosphorene lattice structure. The crystalline symmetry of phosphorene also ensures that in-plane electric fields can only induce transverse currents of angular momentum polarized along ẑ. This holds for both orbital and spin angular momentum currents, because they are subjected to essentially the same crystalline symmetry constraints <cit.>. In a crystal with a given space group, the spin and orbital Berry curvatures must be invariant under all symmetry operations of the group. This means that if a given symmetry operation, such as rotation, mirror reflection, or spatial inversion, changes the sign of the spin or orbital Berry curvature, then the corresponding component of the spin or orbital Hall conductivity is forbidden by symmetry. The presence or absence of symmetries in the crystal structure can dictate which components of the Hall conductivity are allowed or forbidden (see Appendix A). The change of sign in the phosphorene OH-conductivity may be experimentally verified by observing the induced orbital magnetic moment accumulations on the boundaries of phosphorene samples, similar to SHE experiments <cit.>. The small spin-orbit coupling and the topological triviality of phosphorene, with respect to ℤ_2, make the SHE orders of magnitude smaller than the OHE [see Fig. <ref> (e)]. In addition, the electronic spectrum of phosphorene has no multivalley structure in the 2D Brillouin zone and hence does not host VHE. Thus, phosphorene offers an ideal platform for unambiguous observation of the OHE. It is noteworthy that the OHE increases with the number of layers <cit.> and so, thin films of black phosphorus may be employed to enhance the OH signal in such experiments. 
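As a rough guide to how such conductivities are produced numerically, the sketch below illustrates the k-space evaluation of the orbital Hall conductivity defined by the Kubo-like expressions above for a generic tight-binding model. This is a minimal NumPy illustration and not the paoflow implementation: the function model(kx, ky), which returns H(k), the velocity operators v_x and v_y, and the orbital angular momentum matrix L_z, is a hypothetical placeholder; physical prefactors (e, ħ, unit-cell area) are omitted; a zero-temperature occupation is assumed; and the k-grid covers a square Brillouin zone in reduced units.

```python
import numpy as np

def occupied_orbital_berry_curvature(H, v_mu, v_nu, Lz, e_fermi, eta=1e-6):
    """Sum over occupied bands of the orbital Berry curvature Omega^{L_z}_{mu,nu,n}(k)."""
    E, U = np.linalg.eigh(H)                 # band energies E_n(k) and eigenvectors |u_n(k)>
    j_mu = 0.5 * (Lz @ v_mu + v_mu @ Lz)     # orbital current operator j_mu^{L_z}
    J = U.conj().T @ j_mu @ U                # matrix elements <u_n| j_mu |u_m>
    V = U.conj().T @ v_nu @ U                # matrix elements <u_n| v_nu |u_m>
    num = J * V.T                            # <u_n|j_mu|u_m><u_m|v_nu|u_n>
    np.fill_diagonal(num, 0.0)               # exclude the m = n term
    den = (E[:, None] - E[None, :] + 1j * eta) ** 2
    np.fill_diagonal(den, 1.0)               # harmless placeholder on the diagonal
    omega_n = 2.0 * np.imag((num / den).sum(axis=1))
    return omega_n[E < e_fermi].sum()        # T = 0 Fermi-Dirac occupation

def sigma_Lz_xy(model, e_fermi, nk=300):
    """Brillouin-zone average of the curvature (prefactor e/(2*pi)^2 and hbar omitted)."""
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    total = 0.0
    for kx in ks:
        for ky in ks:
            H, vx, vy, Lz = model(kx, ky)    # hypothetical tight-binding model
            total += occupied_orbital_berry_curvature(H, vx, vy, Lz, e_fermi)
    return total / nk**2
```

In our actual calculations the same quantity is evaluated from the projected PAO Hamiltonian on the much denser k-mesh mentioned above.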
However, one must keep in mind that the band gap decreases monotonically with the increase in the number of layers, saturating at approximately 0.3 eV for sufficiently large film thicknesses <cit.>. In general, the transport properties of 2D materials are influenced by the substrate, which may cause strain and/or alter the features of the sample's surface in contact with it. In some cases it is necessary to encapsulate the film to prevent its deterioration from oxidation and also to be able to control its density of carriers with gate voltages. Therefore, it is worth investigating how strain and the presence of an auxiliary perpendicular electric field would affect the orbital transport properties of phosphorene. §.§ Effects of Strain Figure <ref> illustrates the effects of uniaxial strain (both compressive and tensile) along the x̂ direction on the OH conductivity components σ^L_z_xy and σ^L_z_yx. With such moderate uniform strains the point group (D_2h) of phosphorene is preserved and hence only the L_z component of the OHC remains non-null. Strain clearly affects the OH conductivity of phosphorene. It modifies the electronic states around the band gap <cit.> and may alter their orbital features and the orbital transport in general. Interestingly, the height of the σ^L_z_yx plateau remains unchanged under strain along the x direction, which does not happen for σ^L_z_xy. On the other hand, the length of the OHC plateaux decreases (increases) under compressive (tensile) strain, which is expected because the energy band-gap size follows the same trend <cit.>, as illustrated in the inset of Fig. <ref>(a) for σ^L_z_yx. §.§ Effect of Perpendicular Electric Field §.§.§ Orbital Hall Conductivity We shall now examine how the OHC of phosphorene is affected by an electric field E⃗_⊥ = E_⊥ẑ, applied perpendicularly to its layer. The presence of E⃗_⊥ reduces the phosphorene point group from D_ 2h to C_ 2v; both groups belong to the same Laue class mmm. Since the Laue class determines the general form of the OHC tensor <cit.>, only the L_z component of the OHC remains non-null in the presence of E⃗_⊥ (see Appendix A). Figure <ref> shows σ^L_z_yx and σ^L_z_xy, calculated as functions of energy, for different values of E_⊥. We note that the OHC is much more affected by E_⊥ in some energy ranges outside the band gap than within it. We recall that ultrathin films of black phosphorus can switch to a topological insulating phase for sufficiently high values of E_⊥, as discussed in <cit.>. However, for phosphorene, this phase transition requires values of E_⊥≫ 0.6 V/m, which is higher than the ones considered in Fig. <ref>. §.§.§ Orbital Magnetoelectric Effect The noncentrosymmetric and polar C_ 2v point group allows the occurrence of an orbital magnetoelectric effect mediated by Fermi-surface conducting states <cit.>. The perpendicular electric field E⃗_⊥ distorts phosphorene's charge distribution, giving rise to a finite polarization P⃗=P_zẑ perpendicular to its layer <cit.>. The driving field in the phosphorene plane exerts a torque on the electric dipoles, thereby inducing a net orbital magnetization M⃗^L∝P⃗×ℰ⃗ <cit.>. One may calculate M⃗^L utilizing a scheme similar to the one described in Secs. <ref> and <ref>. Since time-reversal symmetry is preserved, there are no interband contributions to the orbital magnetoelectric effect in phosphorene.
Thus, to first order in the in-plane driving field and for finite values of E_⊥, the current-induced orbital magnetization per unit-cell area of phosphorene is given by m_L_η=∑_να_ηνℰ_ν <cit.>, where α_ην= eμ_B/2Γ∑_n∫_ BZ d^2 k/(2π)^2 ∂ f_n, k/∂ E ×⟨u_n, k| v_ν, k|u_n, k⟩⟨u_n, k|ℓ̂_η|u_n, k⟩ represents the matrix elements of the magnetoelectric tensor. Here, μ_B is the Bohr magneton and Γ is the energy scale associated with the electronic relaxation time τ=ħ/2Γ. In our calculations we have used Γ=1.6 meV, which corresponds to τ≈ 200 fs <cit.>. Fig. <ref> shows α_xy and α_yx calculated as functions of energy for different values of E_⊥. As expected, the OME clearly vanishes within the band-gap energy range. However, in the conductive regime, it can reach sizeable values for both m_L_y and m_L_x, in response to electric fields applied along the x̂ and ŷ directions, respectively. This in-plane induced orbital magnetization adds up to the orbital angular momenta accumulated at the sample's edges due to the OHE, transforming its original antiferromagnetic-like disposition into a non-collinear orbital magnetic arrangement. In some energy ranges the OME varies appreciably with E_⊥, which may be used to control the OME intensity. In order to roughly estimate the order of magnitude of the in-plane OME, we consider an electric field with intensity ℰ_x=10^5 V/m and a carrier density that leads to α_yx=-2× 10^2 μ_B/(V·nm). In this case, the induced orbital magnetization is m_L_y≈ -0.3 × 10^-2μ_B/A_ u.c., where A_ u.c.=0.152 nm^2 represents the phosphorene unit-cell area. This has the same order of magnitude as the Edelstein effect estimated for Bi/Ag(111) in Ref. <cit.>, which assumes a larger value of τ. § FINAL REMARKS AND CONCLUSIONS To summarize, we argue that thin films of black phosphorus may provide suitable 2D platforms for direct observation of the orbital Hall effect. To this end, we combine linear response theory with density functional theory calculations to investigate the orbital conductivity of phosphorene and explore how it is affected by uniform strain and perpendicular electric fields. We show that phosphorene displays a fairly large OHC, with perpendicular orbital polarization, which is orders of magnitude larger than the SHC. This OHC is also highly anisotropic with respect to the direction of the in-plane applied electric field, and may even switch sign when the driving field direction is changed. Inside the energy band gap, it exhibits an orbital Hall insulating plateau that is robust under moderate uniform strain and perpendicular electric fields. The latter breaks spatial inversion symmetry and may lead to the appearance of an in-plane orbital magnetization induced by an in-plane electric current. This effect alters the antisymmetric profile of the orbital magnetic moment induced by the orbital Hall effect in the conducting phase. Our numerical calculations are complemented by symmetry analysis. We acknowledge CNPq/Brazil, CAPES/Brazil, FAPERJ/Brazil, INCT Nanocarbono and INCT Materials Informatics for financial support. TGR acknowledges funding from FCT-Portugal through Grant No. CEECIND/07471/2022. She thankfully acknowledges the computer resources at MareNostrum and the technical support provided by Barcelona Supercomputing Center (FI-2020-2-0033). MC acknowledges CNPq (Grant No. 317320/2021-1) and FAPERJ/Brazil (Grant No. E26/200.240/2023). We also thank Profs. A. Fazzio and P. Venezuela for fruitful discussions.
§ SYMMETRY CONSTRAINT ON ORBITAL HALL CONDUCTIVITY The crystal symmetry operations of phosphorene are: E, τ𝒞_2x, 𝒞_2y, τ𝒞_2z, 𝒫, τℳ_x, ℳ_y and τℳ_z <cit.>. Here E represents the identity operation, 𝒞_2μ is a 180^o rotation around the μ-axis, ℳ_μ denotes a reflection through a mirror plane that is perpendicular to the μ-axis, and 𝒫 symbolizes the spatial-inversion operation; τ𝒪 designates the action of 𝒪 followed by a half-unity-cell translation τ⃗=(a_x/2,a_y/2), where a_x and a_y represent the moduli of the unit cell lattice vectors. This set of symmetries is isomorphic to the point group D_ 2h, which correspond to Laue group mmm <cit.>. Consequently, for a phosphorene layer in the xy plane only the L_z component of the OHE is allowed <cit.>. It is possible derive the constraints to the OH conductivity tensor imposed by each symmetry operation of phosphorene. They are summarized in the table <ref>. In the presence of E⃗_⊥ all symmetry operations that interchange z and -z are excluded, leaving just τ𝒞_2z, τℳ_x, ℳ_y, and E, which are identified with an asterisk in table <ref>. In this case, the point group is reduced from D_ 2h to C_ 2v. However, since C_ 2v and D_ 2h belong to the same Laue class (mmm), only the L_z component of the OHC can be non zero when phosphorene is subjected to E⃗_⊥<cit.>. In order to obtain the constraints on the OHC components presented in Table <ref> we consider the action of τ𝒪 on the Bloch eigenstates ψ_n,k(r) associated with the eigenvalue E_n,k, namely τ𝒪ψ_n,k(r)= exp(-iτ⃗·k)ψ_n,𝒪k(r) <cit.>. Since the Hamiltonian is invariant under τ𝒪, E_n, k=E_n,𝒪 k. Let us examine, for example, Ω^L_η_yx,n( k). Inserting the identity (τ𝒪)^†(τ𝒪)=1 into the orbital-weighted Berry curvature and using the above relations we obtain Ω^L_η_yx,n( k) = 2ħ∑_m≠ nIm[ ⟨ u_n, k| (τ𝒪)^† (τ𝒪) j_y, k^L_η (τ𝒪)^† (τ𝒪)|u_m, k⟩⟨ u_m, k|(τ𝒪)^† (τ𝒪) v_x, k(τ𝒪)^† (τ𝒪)|u_n, k⟩/(E_n, k-E_m, k+i0^+)^2]. The restrictions on the conductivity tensor depend on how the Cartesian components of the velocity and angular momentum operators transform under the group symmetry operations. This information is contained in its character table, which shows that they only acquire a sign s_𝒪,Â=± 1 <cit.> under such operations, as table <ref> illustrates. Therefore, Ω^L_η_yx,n( k) = 2ħ∑_m≠ nIm[ ⟨ u_n,𝒪 k| s_𝒪,v̂_y s_𝒪,L̂_η j_y,𝒪 k^L_η|u_m, 𝒪 k⟩⟨ u_m, 𝒪 k| s_𝒪,v̂_x v_x,𝒪 k|u_n,𝒪 k⟩/(E_n,𝒪 k-E_m,𝒪 k+i0^+)^2] = s_𝒪,v̂_x s_𝒪,v̂_y s_𝒪,L̂_ηΩ^L_η_yx,n(𝒪 k). The same expression holds for Ω^L_η_xy,n( k). Since ∫ d^2 k=∫ d^2(𝒪 k), it follows from Eq. (<ref>) that 𝒪: σ^L_η_ OH= s̅^η_ OH(𝒪) σ^L_η_ OH, where s̅^η_ OH(𝒪)=s_𝒪, v_x× s_𝒪, v_y× s_𝒪, L_η. If s̅^η_ OH(𝒪)=+1 the symmetry 𝒪 does not impose a constraint to the OH conductivity. However, if s̅^η_ OH(𝒪)=-1, σ^L_η_yx= 0. apsrev 76 natexlab#1#1bibnamefont#1#1bibfnamefont#1#1citenamefont#1#1url<#>1urlprefixURL [Bernevig et al.(2005)Bernevig, Hughes, and Zhang]Bernevig-Hughes-Zhang-PhysRevLett.95.066601 authorB. A. Bernevig, authorT. L. Hughes, and authorS.-C. Zhang, journalPhys. Rev. Lett. volume95, pages066601 (year2005), <https://link.aps.org/doi/10.1103/PhysRevLett.95.066601>. [Phong et al.(2019)Phong, Addison, Ahn, Min, Agarwal, and Mele]Mele-PhysRevLett.123.236403 authorV. o. T. Phong, authorZ. Addison, authorS. Ahn, authorH. Min, authorR. Agarwal, and authorE. J. Mele, journalPhys. Rev. Lett. volume123, pages236403 (year2019), <https://link.aps.org/doi/10.1103/PhysRevLett.123.236403>. [Salemi and Oppeneer(2022a)]Oppeneer-PhysRevMaterials.6.095001 authorL. Salemi and authorP. 
M. Oppeneer, journalPhys. Rev. Mater. volume6, pages095001 (year2022a), <https://link.aps.org/doi/10.1103/PhysRevMaterials.6.095001>. [Salemi and Oppeneer(2022b)]Salemi-PhysRevB.106.024410 authorL. Salemi and authorP. M. Oppeneer, journalPhys. Rev. B volume106, pages024410 (year2022b), <https://link.aps.org/doi/10.1103/PhysRevB.106.024410>. [Bose et al.(2023)Bose, Kammerbauer, Gupta, Go, Mokrousov, Jakob, and Kläui]OH-Torque-PhysRevB.107.134423 authorA. Bose, authorF. Kammerbauer, authorR. Gupta, authorD. Go, authorY. Mokrousov, authorG. Jakob, and authorM. Kläui, journalPhys. Rev. B volume107, pages134423 (year2023), <https://link.aps.org/doi/10.1103/PhysRevB.107.134423>. [Go et al.(2023)Go, An, Lee, and Kim]go2023intrinsic authorG. Go, authorD. An, authorH.-W. Lee, and authorS. K. Kim, titleIntrinsic magnon orbital hall effect in honeycomb antiferromagnets (year2023), 2303.11687. [Zeer et al.(2022)Zeer, Go, Carbone, Saunderson, Redies, Kläui, Ghabboun, Wulfhekel, Blügel, and Mokrousov]Mokrousov-PhysRevMaterials.6.074004 authorM. Zeer, authorD. Go, authorJ. P. Carbone, authorT. G. Saunderson, authorM. Redies, authorM. Kläui, authorJ. Ghabboun, authorW. Wulfhekel, authorS. Blügel, and authorY. Mokrousov, journalPhys. Rev. Mater. volume6, pages074004 (year2022), <https://link.aps.org/doi/10.1103/PhysRevMaterials.6.074004>. [Han et al.(2022)Han, Lee, and Kim]HW-Lee-PhysRevLett.128.176601 authorS. Han, authorH.-W. Lee, and authorK.-W. Kim, journalPhys. Rev. Lett. volume128, pages176601 (year2022), <https://link.aps.org/doi/10.1103/PhysRevLett.128.176601>. [Fonseca et al.(2023)Fonseca, Pereira, and Barbosa]fonseca2023orbital authorD. B. Fonseca, authorL. L. A. Pereira, and authorA. L. R. Barbosa, titleOrbital hall effect in mesoscopic devices (year2023), 2305.01640. [Sala and Gambardella(2022)]Gambardella-PhysRevResearch.4.033037 authorG. Sala and authorP. Gambardella, journalPhys. Rev. Res. volume4, pages033037 (year2022), <https://link.aps.org/doi/10.1103/PhysRevResearch.4.033037>. [Busch et al.(2023)Busch, Mertig, and Göbel]busch2023orbital authorO. Busch, authorI. Mertig, and authorB. Göbel, titleOrbital hall effect and orbital edge states caused by s electrons (year2023), 2306.17295. [Go et al.(2018)Go, Jo, Kim, and Lee]Go-Hyun-Woo-PhysRevLett.121.086602 authorD. Go, authorD. Jo, authorC. Kim, and authorH.-W. Lee, journalPhys. Rev. Lett. volume121, pages086602 (year2018), <https://link.aps.org/doi/10.1103/PhysRevLett.121.086602>. [Go et al.(2021)Go, Jo, Lee, Kläui, and Mokrousov]Go_EPL-Review authorD. Go, authorD. Jo, authorH.-W. Lee, authorM. Kläui, and authorY. Mokrousov, journalEurophysics Letters volume135, pages37001 (year2021), <https://dx.doi.org/10.1209/0295-5075/ac2653>. [Choi et al.(2021)Choi, Jo, Ko, Go, Kim, Park, Kim, Min, Choi, and Lee]Go-Experiment-https://doi.org/10.48550/arxiv.2109.14847 authorY.-G. Choi, authorD. Jo, authorK.-H. Ko, authorD. Go, authorK.-H. Kim, authorH. G. Park, authorC. Kim, authorB.-C. Min, authorG.-M. Choi, and authorH.-W. Lee, titleObservation of the orbital hall effect in a light metal ti (year2021), <https://arxiv.org/abs/2109.14847>. [Zheng et al.(2020)Zheng, Guo, Jo, Go, Wang, Chen, Yin, Wang, Yu, He et al.]Zheng-OrbTorque-PhysRevResearch.2.013127 authorZ. C. Zheng, authorQ. X. Guo, authorD. Jo, authorD. Go, authorL. H. Wang, authorH. C. Chen, authorW. Yin, authorX. M. Wang, authorG. H. Yu, authorW. He, et al., journalPhys. Rev. Res. volume2, pages013127 (year2020), <https://link.aps.org/doi/10.1103/PhysRevResearch.2.013127>. 
[Lee et al.(2021a)Lee, Kang, Go, Kim, Kang, Lee, Lee, Kang, Lee, Mokrousov et al.]Lee-OrbTorque-10.1038/s42005-021-00737-7 authorS. Lee, authorM.-G. Kang, authorD. Go, authorD. Kim, authorJ.-H. Kang, authorT. Lee, authorG.-H. Lee, authorJ. Kang, authorN. J. Lee, authorY. Mokrousov, et al., journalCommunications Physics volume4 (year2021a), <https://doi.org/10.1038/s42005-021-00737-7>. [Lee et al.(2021b)Lee, Go, Park, Jeong, Ko, Yun, Jo, Lee, Go, Oh et al.]Lee2021 authorD. Lee, authorD. Go, authorH.-J. Park, authorW. Jeong, authorH.-W. Ko, authorD. Yun, authorD. Jo, authorS. Lee, authorG. Go, authorJ. H. Oh, et al., journalNature Communications volume12 (year2021b), <https://doi.org/10.1038/s41467-021-26650-9>. [Canonico et al.(2020a)Canonico, Cysne, Molina-Sanchez, Muniz, and Rappoport]Canonico-PhysRevB.101.161409 authorL. M. Canonico, authorT. P. Cysne, authorA. Molina-Sanchez, authorR. B. Muniz, and authorT. G. Rappoport, journalPhys. Rev. B volume101, pages161409 (year2020a), <https://link.aps.org/doi/10.1103/PhysRevB.101.161409>. [Canonico et al.(2020b)Canonico, Cysne, Rappoport, and Muniz]Canonico-PhysRevB.101.075429 authorL. M. Canonico, authorT. P. Cysne, authorT. G. Rappoport, and authorR. B. Muniz, journalPhys. Rev. B volume101, pages075429 (year2020b), <https://link.aps.org/doi/10.1103/PhysRevB.101.075429>. [Costa et al.(2023)Costa, Focassio, Canonico, Cysne, Schleder, Muniz, Fazzio, and Rappoport]Costa2023 authorM. Costa, authorB. Focassio, authorL. M. Canonico, authorT. P. Cysne, authorG. R. Schleder, authorR. B. Muniz, authorA. Fazzio, and authorT. G. Rappoport, journalPhys. Rev. Lett. volume130, pages116204 (year2023), <https://link.aps.org/doi/10.1103/PhysRevLett.130.116204>. [Cysne et al.(2021a)Cysne, Costa, Canonico, Nardelli, Muniz, and Rappoport]Cysne-PhysRevLett.126.056601 authorT. P. Cysne, authorM. Costa, authorL. M. Canonico, authorM. B. Nardelli, authorR. B. Muniz, and authorT. G. Rappoport, journalPhys. Rev. Lett. volume126, pages056601 (year2021a), <https://link.aps.org/doi/10.1103/PhysRevLett.126.056601>. [Cysne et al.(2022)Cysne, Bhowal, Vignale, and Rappoport]Cysne-PhysRevB.105.195421 authorT. P. Cysne, authorS. Bhowal, authorG. Vignale, and authorT. G. Rappoport, journalPhys. Rev. B volume105, pages195421 (year2022), <https://link.aps.org/doi/10.1103/PhysRevB.105.195421>. [Bhowal and Vignale(2021)]Bhowal-PhysRevB.103.195309 authorS. Bhowal and authorG. Vignale, journalPhys. Rev. B volume103, pages195309 (year2021), <https://link.aps.org/doi/10.1103/PhysRevB.103.195309>. [Salvador-Sánchez et al.(2022)Salvador-Sánchez, Canonico, Pérez-Rodríguez, Cysne, Baba, Clericò, Vila, Vaquero, Delgado-Notario, Caridad et al.]Salvador-Sanchez-https://doi.org/10.48550/arxiv.2206.04565 authorJ. Salvador-Sánchez, authorL. M. Canonico, authorA. Pérez-Rodríguez, authorT. P. Cysne, authorY. Baba, authorV. Clericò, authorM. Vila, authorD. Vaquero, authorJ. A. Delgado-Notario, authorJ. M. Caridad, et al., titleGeneration and control of non-local chiral currents in graphene superlattices by orbital hall effect (year2022), <https://arxiv.org/abs/2206.04565>. [Li and Appelbaum(2014)]Symmetry-Phosphorene-PhysRevB.90.115439 authorP. Li and authorI. Appelbaum, journalPhys. Rev. B volume90, pages115439 (year2014), <https://link.aps.org/doi/10.1103/PhysRevB.90.115439>. [Rodin et al.(2014)Rodin, Carvalho, and Castro Neto]Phosphorne-SpectraStrain-PhysRevLett.112.176801 authorA. S. Rodin, authorA. Carvalho, and authorA. H. Castro Neto, journalPhys. Rev. Lett. 
volume112, pages176801 (year2014), <https://link.aps.org/doi/10.1103/PhysRevLett.112.176801>. [Taghizadeh Sisakht et al.(2016)Taghizadeh Sisakht, Fazileh, Zare, Zarenia, and Peeters]Phosphorene-Spectra-strain-PhysRevB.94.085417 authorE. Taghizadeh Sisakht, authorF. Fazileh, authorM. H. Zare, authorM. Zarenia, and authorF. M. Peeters, journalPhys. Rev. B volume94, pages085417 (year2016), <https://link.aps.org/doi/10.1103/PhysRevB.94.085417>. [Rudenko and Katsnelson(2014)]Phosphorene-ModelRudenko-PhysRevB.89.201408 authorA. N. Rudenko and authorM. I. Katsnelson, journalPhys. Rev. B volume89, pages201408 (year2014), <https://link.aps.org/doi/10.1103/PhysRevB.89.201408>. [Faria Junior et al.(2019)Faria Junior, Kurpas, Gmitra, and Fabian]Paulo-Fabian-PhysRevB.100.115203 authorP. E. Faria Junior, authorM. Kurpas, authorM. Gmitra, and authorJ. Fabian, journalPhys. Rev. B volume100, pages115203 (year2019), <https://link.aps.org/doi/10.1103/PhysRevB.100.115203>. [Liu et al.(2015)Liu, Zhang, Abdalla, Fazzio, and Zunger]ElectricField-Fazzio-Zunger-doi:10.1021/nl5043769 authorQ. Liu, authorX. Zhang, authorL. B. Abdalla, authorA. Fazzio, and authorA. Zunger, journalNano Letters volume15, pages1222 (year2015), notepMID: 25607525, https://doi.org/10.1021/nl5043769, <https://doi.org/10.1021/nl5043769>. [Popovi ćć et al.(2015)Popovi ćć, Kurdestany, and Satpathy]Popovic2015 authorZ. S. Popovi ćć, authorJ. M. Kurdestany, and authorS. Satpathy, journalPhys. Rev. B volume92, pages035135 (year2015), <https://link.aps.org/doi/10.1103/PhysRevB.92.035135>. [Avsar et al.(2017)Avsar, Tan, Kurpas, Gmitra, Watanabe, Taniguchi, Fabian, and Özyilmaz]Avsar2017 authorA. Avsar, authorJ. Y. Tan, authorM. Kurpas, authorM. Gmitra, authorK. Watanabe, authorT. Taniguchi, authorJ. Fabian, and authorB. Özyilmaz, journalNature Physics volume13, pages888 (year2017), <https://doi.org/10.1038/nphys4141>. [Hu et al.(2016)Hu, Wu, Zeng, Deng, and Kan]Hu-SymmetriesPolarization-doi:10.1021/acs.nanolett.6b04630 authorT. Hu, authorH. Wu, authorH. Zeng, authorK. Deng, and authorE. Kan, journalNano Letters volume16, pages8015 (year2016), notepMID: 27960526, https://doi.org/10.1021/acs.nanolett.6b04630, <https://doi.org/10.1021/acs.nanolett.6b04630>. [Hohenberg and Kohn(1964)]DFT1 authorP. Hohenberg and authorW. Kohn, journalPhys. Rev. volume136, pagesB864 (year1964), <https://link.aps.org/doi/10.1103/PhysRev.136.B864>. [Kohn and Sham(1965)]DFT2 authorW. Kohn and authorL. J. Sham, journalPhys. Rev. volume140, pagesA1133 (year1965), <https://link.aps.org/doi/10.1103/PhysRev.140.A1133>. [Giannozzi et al.(2017)Giannozzi, Andreussi, Brumme, Bunau, Buongiorno Nardelli, Calandra, Car, Cavazzoni, Ceresoli, Cococcioni et al.]QE-2017 authorP. Giannozzi, authorO. Andreussi, authorT. Brumme, authorO. Bunau, authorM. Buongiorno Nardelli, authorM. Calandra, authorR. Car, authorC. Cavazzoni, authorD. Ceresoli, authorM. Cococcioni, et al., journalJournal of Physics: Condensed Matter volume29, pages465901 (year2017), <http://stacks.iop.org/0953-8984/29/i=46/a=465901>. [Perdew et al.(1996)Perdew, Burke, and Ernzerhof]PBE authorJ. P. Perdew, authorK. Burke, and authorM. Ernzerhof, journalPhys. Rev. Lett. volume77, pages3865 (year1996), <https://link.aps.org/doi/10.1103/PhysRevLett.77.3865>. [Kresse and Joubert(1999)]PAW authorG. Kresse and authorD. Joubert, journalPhys. Rev. B volume59, pages1758 (year1999), <https://link.aps.org/doi/10.1103/PhysRevB.59.1758>. [Dal Corso(2014)]pslibrary authorA. 
Dal Corso, journalComputational Materials Science volume95, pages337 (year2014), ISSN issn0927-0256, <https://www.sciencedirect.com/science/article/pii/S0927025614005187>. [Brumme et al.(2015)Brumme, Calandra, and Mauri]Efield authorT. Brumme, authorM. Calandra, and authorF. Mauri, journalPhys. Rev. B volume91, pages155436 (year2015), <https://link.aps.org/doi/10.1103/PhysRevB.91.155436>. [Agapito et al.(2013)Agapito, Ferretti, Calzolari, Curtarolo, and Buongiorno Nardelli]PAO1 authorL. A. Agapito, authorA. Ferretti, authorA. Calzolari, authorS. Curtarolo, and authorM. Buongiorno Nardelli, journalPhys. Rev. B volume88, pages165127 (year2013), <https://link.aps.org/doi/10.1103/PhysRevB.88.165127>. [Agapito et al.(2015)Agapito, Curtarolo, and Buongiorno Nardelli]PAO2 authorL. A. Agapito, authorS. Curtarolo, and authorM. Buongiorno Nardelli, journalPhys. Rev. X volume5, pages011006 (year2015), <https://link.aps.org/doi/10.1103/PhysRevX.5.011006>. [Agapito et al.(2016a)Agapito, Fornari, Ceresoli, Ferretti, Curtarolo, and Buongiorno Nardelli]PAO3 authorL. A. Agapito, authorM. Fornari, authorD. Ceresoli, authorA. Ferretti, authorS. Curtarolo, and authorM. Buongiorno Nardelli, journalPhys. Rev. B volume93, pages125137 (year2016a), <https://link.aps.org/doi/10.1103/PhysRevB.93.125137>. [Agapito et al.(2016b)Agapito, Ismail-Beigi, Curtarolo, Fornari, and Buongiorno Nardelli]PAO4 authorL. A. Agapito, authorS. Ismail-Beigi, authorS. Curtarolo, authorM. Fornari, and authorM. Buongiorno Nardelli, journalPhys. Rev. B volume93, pages035104 (year2016b), <https://link.aps.org/doi/10.1103/PhysRevB.93.035104>. [Buongiorno Nardelli et al.(2018)Buongiorno Nardelli, Cerasoli, Costa, Curtarolo, Gennaro, Fornari, Liyanage, Supka, and Wang]PAO5 authorM. Buongiorno Nardelli, authorF. T. Cerasoli, authorM. Costa, authorS. Curtarolo, authorR. D. Gennaro, authorM. Fornari, authorL. Liyanage, authorA. R. Supka, and authorH. Wang, journalComputational Materials Science volume143, pages462 (year2018), ISSN issn0927-0256, <http://www.sciencedirect.com/science/article/pii/S0927025617306651>. [Cerasoli et al.(2021)Cerasoli, Supka, Jayaraj, Costa, Siloi, Sławińska, Curtarolo, Fornari, Ceresoli, and Buongiorno Nardelli]PAO6 authorF. T. Cerasoli, authorA. R. Supka, authorA. Jayaraj, authorM. Costa, authorI. Siloi, authorJ. Sławińska, authorS. Curtarolo, authorM. Fornari, authorD. Ceresoli, and authorM. Buongiorno Nardelli, journalComputational Materials Science volume200, pages110828 (year2021), ISSN issn0927-0256, <https://www.sciencedirect.com/science/article/pii/S0927025621005486>. [Menezes and Capaz(2018)]MENEZES2018411 authorM. G. Menezes and authorR. B. Capaz, journalComputational Materials Science volume143, pages411 (year2018), ISSN issn0927-0256, <https://www.sciencedirect.com/science/article/pii/S0927025617306705>. [Costa et al.(2018a)Costa, Nardelli, Fazzio, and Costa]adatoms authorM. Costa, authorM. B. Nardelli, authorA. Fazzio, and authorA. T. Costa, titleLong range dynamical coupling between magnetic adatoms mediated by a 2d topological insulator (year2018a), <https://arxiv.org/abs/1808.00347>. [Costa et al.(2020)Costa, Peres, Fernández-Rossier, and Costa]fegete authorM. Costa, authorN. M. R. Peres, authorJ. Fernández-Rossier, and authorA. T. Costa, journalPhys. Rev. B volume102, pages014450 (year2020), <https://link.aps.org/doi/10.1103/PhysRevB.102.014450>. [Costa et al.(2021)Costa, Schleder, Acosta, Padilha, Cerasoli, Nardelli, and Fazzio]hoti authorM. Costa, authorG. R. Schleder, authorC. M. Acosta, authorA. 
C. M. Padilha, authorF. Cerasoli, authorM. B. Nardelli, and authorA. Fazzio, journalnpj Computational Materials volume7, pages49 (year2021), <https://doi.org/10.1038/s41524-021-00518-4>. [Heath et al.(2020)Heath, Costa, Buongiorno-Nardelli, and Kuroda]cri3-graphene authorJ. J. Heath, authorM. Costa, authorM. Buongiorno-Nardelli, and authorM. A. Kuroda, journalPhys. Rev. B volume101, pages195439 (year2020), <https://link.aps.org/doi/10.1103/PhysRevB.101.195439>. [Costa et al.(2019)Costa, Schleder, Buongiorno Nardelli, Lewenkopf, and Fazzio]Costa2019 authorM. Costa, authorG. R. Schleder, authorM. Buongiorno Nardelli, authorC. Lewenkopf, and authorA. Fazzio, journalNano Letters volume19, pages8941 (year2019), <https://doi.org/10.1021/acs.nanolett.9b03881>. [Costa et al.(2018b)Costa, Costa, Freitas, Schmidt, Buongiorno Nardelli, and Fazzio]Costa2018 authorM. Costa, authorA. T. Costa, authorW. A. Freitas, authorT. M. Schmidt, authorM. Buongiorno Nardelli, and authorA. Fazzio, journalACS Omega volume3, pages15900 (year2018b), <https://doi.org/10.1021/acsomega.8b01836>. [Hitomi et al.(2021)Hitomi, Kawakami, and Koshino]HOTI-Phosphorene-PhysRevB.104.125302 authorM. Hitomi, authorT. Kawakami, and authorM. Koshino, journalPhys. Rev. B volume104, pages125302 (year2021), <https://link.aps.org/doi/10.1103/PhysRevB.104.125302>. [Ezawa(2018)]HOTI-Phosphorene-PhysRevB.98.045125 authorM. Ezawa, journalPhys. Rev. B volume98, pages045125 (year2018), <https://link.aps.org/doi/10.1103/PhysRevB.98.045125>. [Lee et al.(2022)Lee, Choi, and Lee]H-Woo-Symmetry_PhysRevB.105.035142 authorH. Lee, authorB. Choi, and authorH.-W. Lee, journalPhys. Rev. B volume105, pages035142 (year2022), <https://link.aps.org/doi/10.1103/PhysRevB.105.035142>. [Jungwirth et al.(2012)Jungwirth, Wunderlich, and Olejník]SHE-Devices-Jungwirth2012 authorT. Jungwirth, authorJ. Wunderlich, and authorK. Olejník, journalNature Materials volume11, pages382 (year2012), <https://doi.org/10.1038/nmat3279>. [Marui et al.(2023)Marui, Kawaguchi, Sumi, Awano, Nakamura, and Hayashi]marui2023spin authorY. Marui, authorM. Kawaguchi, authorS. Sumi, authorH. Awano, authorK. Nakamura, and authorM. Hayashi, titleSpin and orbital hall currents detected via current induced magneto-optical kerr effect in v and pt (year2023), 2306.09585. [Kumar and Kumar(2023)]kumar2023ultrafast authorS. Kumar and authorS. Kumar, titleUltrafast thz probing of nonlocal orbital current in transverse multilayer metallic heterostructures (year2023), 2306.17027. [Lyalin et al.(2023)Lyalin, Alikhah, Berritta, Oppeneer, and Kawakami]lyalin2023magnetooptical authorI. Lyalin, authorS. Alikhah, authorM. Berritta, authorP. M. Oppeneer, and authorR. K. Kawakami, titleMagneto-optical detection of the orbital hall effect in chromium (year2023), 2306.10673. [Cysne et al.(2023)Cysne, Guimarães, Canonico, Costa, Rappoport, and Muniz]Cysne-PhysRevB.107.115402 authorT. P. Cysne, authorF. S. M. Guimarães, authorL. M. Canonico, authorM. Costa, authorT. G. Rappoport, and authorR. B. Muniz, journalPhys. Rev. B volume107, pages115402 (year2023), <https://link.aps.org/doi/10.1103/PhysRevB.107.115402>. [Peng et al.(2014)Peng, Wei, and Copple]Peng-Wei-Copple-PhysRevB.90.085402 authorX. Peng, authorQ. Wei, and authorA. Copple, journalPhys. Rev. B volume90, pages085402 (year2014), <https://link.aps.org/doi/10.1103/PhysRevB.90.085402>. [Midtvedt et al.(2016)Midtvedt, Lewenkopf, and Croy]Lewenkopf-Midtvedt2016 authorD. Midtvedt, authorC. H. Lewenkopf, and authorA. 
Croy, journal2D Materials volume3, pages011005 (year2016), <https://doi.org/10.1088/2053-1583/3/1/011005>. [Seemann et al.(2015)Seemann, Ködderitzsch, Wimmer, and Ebert]Ebert-Symmetry-tensor authorM. Seemann, authorD. Ködderitzsch, authorS. Wimmer, and authorH. Ebert, journalPhys. Rev. B volume92, pages155138 (year2015), <https://link.aps.org/doi/10.1103/PhysRevB.92.155138>. [Roy et al.(2022)Roy, Guimarães, and Sławi ńńska]Marcosguimaraes authorA. Roy, authorM. H. D. Guimarães, and authorJ. Sławi ńńska, journalPhys. Rev. Mater. volume6, pages045004 (year2022), <https://link.aps.org/doi/10.1103/PhysRevMaterials.6.045004>. [Furukawa et al.(2021)Furukawa, Watanabe, Ogasawara, Kobayashi, and Itou]C2v-Magnetoelectric-PhysRevResearch.3.023111 authorT. Furukawa, authorY. Watanabe, authorN. Ogasawara, authorK. Kobayashi, and authorT. Itou, journalPhys. Rev. Res. volume3, pages023111 (year2021), <https://link.aps.org/doi/10.1103/PhysRevResearch.3.023111>. [Cysne et al.(2021b)Cysne, Guimarães, Canonico, Rappoport, and Muniz]Cysne-OMEpxpy-PhysRevB.104.165403 authorT. P. Cysne, authorF. S. M. Guimarães, authorL. M. Canonico, authorT. G. Rappoport, and authorR. B. Muniz, journalPhys. Rev. B volume104, pages165403 (year2021b), <https://link.aps.org/doi/10.1103/PhysRevB.104.165403>. [Shinada and Peters(2023)]Koki-Peters-PhysRevB.107.214109 authorK. Shinada and authorR. Peters, journalPhys. Rev. B volume107, pages214109 (year2023), <https://link.aps.org/doi/10.1103/PhysRevB.107.214109>. [Shinada et al.(2023)Shinada, Kofuji, and Peters]Koki-Peters-PhysRevB.107.094106 authorK. Shinada, authorA. Kofuji, and authorR. Peters, journalPhys. Rev. B volume107, pages094106 (year2023), <https://link.aps.org/doi/10.1103/PhysRevB.107.094106>. [Hayami et al.(2018)Hayami, Yatsushiro, Yanagi, and Kusunose]Hayami-PhysRevB.98.165110 authorS. Hayami, authorM. Yatsushiro, authorY. Yanagi, and authorH. Kusunose, journalPhys. Rev. B volume98, pages165110 (year2018), <https://link.aps.org/doi/10.1103/PhysRevB.98.165110>. [Hayami et al.(2016)Hayami, Kusunose, and Motome]Hayami-JPCM-2016 authorS. Hayami, authorH. Kusunose, and authorY. Motome, journalJournal of Physics: Condensed Matter volume28, pages395601 (year2016), <https://doi.org/10.1088/0953-8984/28/39/395601>. [Salemi et al.(2019)Salemi, Berritta, Nandy, and Oppeneer]Salemi-Oppeneer-2019 authorL. Salemi, authorM. Berritta, authorA. K. Nandy, and authorP. M. Oppeneer, journalNature Communications volume10, pages5381 (year2019), <https://doi.org/10.1038/s41467-019-13367-z>. [Yoda et al.(2018)Yoda, Yokoyama, and Murakami]Yoda2018-OME authorT. Yoda, authorT. Yokoyama, and authorS. Murakami, journalNano Letters volume18, pages916 (year2018), <https://doi.org/10.1021/acs.nanolett.7b04300>. [He et al.(2020)He, Goldhaber-Gordon, and Law]He2020-OME authorW.-Y. He, authorD. Goldhaber-Gordon, and authorK. T. Law, journalNature Communications volume11, pages1650 (year2020), <https://doi.org/10.1038/s41467-020-15473-9>. [Johansson et al.(2018)Johansson, Henk, and Mertig]IngridMerting-PhysRevB.97.085417 authorA. Johansson, authorJ. Henk, and authorI. Mertig, journalPhys. Rev. B volume97, pages085417 (year2018), <https://link.aps.org/doi/10.1103/PhysRevB.97.085417>. [Dresselhaus et al.(2007)Dresselhaus, Dresselhaus, and Jorio]dresselhaus2007group authorM. Dresselhaus, authorG. Dresselhaus, and authorA. 
Jorio, titleGroup Theory: Application to the Physics of Condensed Matter (publisherSpringer Berlin Heidelberg, year2007), ISBN isbn9783540328971, <https://books.google.com.br/books?id=sKaH8vrfmnQC>.
http://arxiv.org/abs/2307.04328v1
20230710034732
Where to Drop Sensors from Aerial Robots to Monitor a Surface-Level Phenomenon?
[ "Chak Lam Shek", "Guangyao Shi", "Ahmad Bilal Asghar", "Pratap Tokekar" ]
cs.RO
[ "cs.RO", "cs.DM" ]
Where to Drop Sensors from Aerial Robots to Monitor a Surface-Level Phenomenon? This work is supported in part by National Science Foundation Grant No. 1943368. ^* indicates equal contribution and authors are listed alphabetically Chak Lam Shek^*, Guangyao Shi^*, Ahmad Bilal Asghar, and Pratap Tokekar University of Maryland, College Park, MD 20742 USA [cshek1, gyshi, abasghar, tokekar]@umd.edu August 12, 2023 ======================================================================================================================================================================================================================================== empty empty We consider the problem of routing a team of energy-constrained Unmanned Aerial Vehicles (UAVs) to drop unmovable sensors for monitoring a task area in the presence of stochastic wind disturbances. In prior work on mobile sensor routing problems, sensors and their carrier are one integrated platform, and sensors are assumed to be able to take measurements at exactly desired locations. By contrast, airdropping the sensors onto the ground can introduce stochasticity in the landing locations of the sensors. We focus on addressing this stochasticity in sensor locations from the path planning perspective. Specifically, we formulate the problem (Multi-UAV Sensor Drop) as a variant of the Submodular Team Orienteering Problem with one additional constraint on the number of sensors on each UAV. The objective is to maximize the Mutual Information between the phenomenon at Points of Interest (PoIs) and the measurements that sensors will take at stochastic locations. We show that such an objective is computationally expensive to evaluate. To tackle this challenge, we propose a surrogate objective with a closed-form expression based on the expected mean and expected covariance of the Gaussian Process. We propose a heuristic algorithm to solve the optimization problem with the surrogate objective. The formulation and the algorithms are validated through extensive simulations. § INTRODUCTION Multi-robot systems have been widely used in scientific information gathering including exploring the ocean <cit.>, tracking algal blooms <cit.>, and monitoring soil <cit.>. The planning problem on this topic is usually named Informative Path Planning (IPP), in which the research focus is on how to design planning algorithms to coordinate multiple robots to collect as much useful information as possible given the limited onboard resources (e.g., sensing and battery). In some cases, the robotic platform and the sensors for scientific monitoring are integrated systems and are treated as mobile sensors as a whole <cit.>. In other cases, the robotic platforms are treated as carriers of sensors <cit.>, and they are separable. The research efforts for such cases are mainly devoted to finding collaborative route strategies for these mobile platforms to serve the sensors to finish the sampling tasks. Our research is also along this line and we are interested in how to airdrop sensors to an area of interest with a team of Unmanned Aerial Vehicles (UAVs). Specifically, we consider the problem of airdropping multiple sensors to the ground with a team of budget-constrained UAVs to reduce the uncertainty of Points of Interest (PoIs) as shown in Fig. <ref>. If the UAVs can precisely drop the sensors to the desired locations, such a problem is closely related to the classic Team Orienteering Problem (TOP) <cit.>. 
However, due to wind disturbances, when we release a sensor from the UAV, its landing location, i.e., the sampling location, is stochastic. This is the main difference from the existing research on mobile robotic sensors, in which authors usually assume that robots can take samples at precisely the desired location. Such a difference requires rethinking the underlying optimization for planning. To this end, we propose a new variant of the TOP for airdropping sensors with UAVs, in which the stochasticity of the sensor landing position is explicitly considered. However, the resulting optimization objective is computationally expensive to evaluate. To address this challenge, we resort to a Gaussian approximation approach <cit.> to obtain a surrogate objective with a closed-form expression. With this surrogate objective, we show that the problem can be solved in polynomial time and near optimally. In summary, the main contributions of this paper are: * We propose a variant of the Submodular Team Orienteering Problem to model the sensor dropping problem with aerial robots. * We propose a computationally efficient surrogate objective function for the proposed problem and a heuristic algorithm to solve it. * We demonstrate the effectiveness of our formulation and algorithm through simulations. The rest of the paper is organized as follows. We first give a brief overview of the related work in Section <ref>. Then, we explain the problem setup and formulation in Section <ref>. We introduce the technical approach in Section <ref> and validate the formulation and the proposed framework in Section <ref>. § RELATED WORK In this section, we present the work most closely related to ours. We first discuss the related work on airdropping sensors, followed by stationary sensor placement and mobile sensor planning, and finally estimating stationary fields with Gaussian Processes. §.§ Airdropping sensors Dropping resources from an aerial vehicle has long been of interest, particularly for military and search-and-rescue operations. For example, in military resupply missions, aircraft are required to accurately deliver supplies to the target areas, taking into account geological factors and weather conditions. Extensive research has been conducted on low-level optimization of the release trajectory to achieve high precision in airdrop operations <cit.>. In this work, we focus on the complementary high-level planning of where to drop the sensors from multiple UAVs to monitor a surface-level phenomenon. We abstract away the low-level trajectory control by assuming that, for any given airdrop trajectory planner, the associated uncertainty of the landing position of the sensor is known. Specifically, we focus on route-level planning for multiple UAVs to deploy multiple sensors over the area of interest for environmental monitoring applications. Our work is closely related to that of Gerlach et al. <cit.>. They formulate the problem of dropping multiple payloads to multiple targets as a Traveling Salesperson Problem (TSP). However, there are two key differences between their work and ours. First, our objective is to reduce the uncertainty at Points of Interest (PoIs) by dropping sensors, and we use an information-theoretic metric. In contrast, the objective in <cit.> is to minimize the risk encountered by the soldiers. Second, our problem involves multiple energy-constrained UAVs, which cannot be modeled as a TSP or its variants.
§.§ Sensor Placement and Mobile Sensor Planning The sensor placement problem aims to maximize the information gain or sensing quality by strategically selecting sensor deployment locations. The typical approach is to model the phenomenon as a Gaussian Process <cit.> and use information-theoretic measures for placing the sensors. The foundational work was done by Krause et al. <cit.>, who showed that partial monotonicity and submodularity allow a greedy placement to achieve a constant-factor approximation guarantee. This work was later extended to mobile sensor planning (also termed informative path planning). Binney et al. <cit.> introduced the additional constraint of identifying a feasible path that connects the selected sensing locations. One approach to finding such paths is to convert the problem into an orienteering instance with submodular rewards. In <cit.>, this problem is solved by constructing an additive approximation of the coverage objective to find a UAV path for image acquisition. A recursive greedy algorithm <cit.> is used in <cit.> to solve the submodular orienteering problem for informative path planning. This approach provides guarantees for the submodular objective but runs in quasi-polynomial time, limiting its use for large problem instances. In the context of a multi-robot setting, the orienteering problem can be solved iteratively, where the single-robot performance guarantee can be extended to the multi-robot scenario <cit.>. Our work closely aligns with this body of work on informative path planning, with one key difference. Because we are airdropping sensors, the exact sensing location depends on the wind field and is not known, unlike in existing work. We show how to deal with this additional source of uncertainty. §.§ GP with Uncertain Inputs We use Gaussian Processes <cit.> to model the spatial function that is to be estimated by the sensors. Since we do not know the exact locations at which the sensors will fall before planning the UAV paths, the input to the GP regression is uncertain. It is shown in <cit.> that the predictive distribution for Gaussian processes with uncertain inputs may not be Gaussian in general. Various approaches have been used to deal with input uncertainty in GPs. In the Bayesian approach, the distribution with uncertain input locations can be obtained by integrating over the uncertainty of the locations <cit.>. However, these integrals are analytically intractable in general. A Taylor expansion about the uncertain locations is used in <cit.> to present an approximate method that requires the derivative of the mean of f. The Gaussian approximation method <cit.> assumes that the posterior distribution is Gaussian and finds its expected mean and expected covariance by integrating over the uncertainty of the locations. For certain kernel functions, these covariances can be computed analytically. We employ the Gaussian approximation method in this paper to handle the random sensor locations. § PROBLEM STATEMENT Consider a weighted graph G = (V, E), where the vertex set V represents locations that can be visited by a team of m UAVs. The weight w(u,v) of an edge (u, v) ∈ E represents the time taken or energy spent by the UAVs to travel from vertex u to vertex v. Let (x_v, y_v, z_v) represent the coordinates of vertex v. Each vertex corresponds to a location where one of the UAVs can drop a sensor onto the ground below to observe the spatial field.
The sensor's landing position on the surface, denoted by q_v, can vary depending on the wind conditions at the drop location v and the height of the drop location z_v. We assume that q_v follows a normal distribution, specifically q_v ∼𝒩(q̅_v, Σ_v), and that q̅_v and Σ_v are known for each v∈ V. Each UAV i∈[m] has a given number of sensors k_i and a limited amount of time (or energy) T_i to visit some locations in V and to drop the sensors from those locations. The path of UAV i must start and end at its designated depot location r_i∈ V. The purpose of dropping sensors is to observe the value of a spatial function f at specific points of interest (PoIs) U on the ground. Each sensor obtains a measurement of the underlying field with additive Gaussian noise. Since we may have fewer sensors than PoIs, and due to the stochastic nature of the sensor drop, we will need to estimate the value of f at the PoIs. Consequently, there will be inherent uncertainty associated with these estimates. Gaussian Processes associate a random variable with each PoI in U, and the joint distribution over U can be used to quantify the information gained by the sensors dropped by the UAVs. Given paths P = {P_1,…, P_m} for the UAVs, let S(P) = {S_1,…,S_m} represent the corresponding sensor drop locations, and let Q(P) be the random variable representing the sensor locations, i.e., for every drop location v∈ S, the sensor location q_v ∈ Q. Also, let the length of the path ℓ(P_i) denote the total time taken by UAV i to visit all the locations in P_i. Let η be the time required to drop a sensor. Therefore, the total time of a path P_i is given as C(P_i)=ℓ(P_i) + |S_i|η. Let ℱ_U represent the random variable associated with the PoIs U and let ℱ_Q represent the random variable associated with sensor readings at locations in Q. Then Pr(ℱ_U|ℱ_Q(P)=f_Q) is the prediction at U given sensor readings at locations in Q(P). To simplify notation, we will use S and Q going forward, without explicitly indicating their dependence on the UAV paths P. We focus on the offline planning problem <cit.> where the plan must be decided before dropping any sensor. The mutual information – as a function of the UAVs' paths – between the random variables ℱ_U and ℱ_Q is defined as MI(P) = H(ℱ_U) - H(ℱ_U|ℱ_Q), where H(𝒳) represents the entropy of random variable 𝒳. We now formally define the multi-UAV sensor drop problem. [Multi UAV Sensor Drop] Given the points of interest U, sensor drop locations in G=(V, E) along with the mean q̅_v and covariance Σ_v of the sensor's landing location associated with each v ∈ V, k_i sensors and budget T_i for each UAV i∈[m], find a path P_i rooted at the depot r_i along with drop locations S_i for each UAV i∈[m] to maximize the mutual information, i.e., max_P_1,…,P_m  MI(P) = H(ℱ_U) - H(ℱ_U|ℱ_Q) s.t.   C(P_i) ≤ T_i,  ∀ i∈ [m],   |S_i| ≤ k_i,  ∀ i∈ [m]. Note that given the drop locations S, the sensor locations in Q are random. If the locations in Q are deterministic, i.e., the sensors fall at exactly the desired locations, and if the points of interest U are the same as the vertices in V, we get the traditional informative path planning problem <cit.>. Since the locations in Q are themselves random variables, evaluating the probability distribution Pr(ℱ_U|ℱ_Q) and its entropy is challenging. In the next section, we discuss how we address this challenge and present the planning algorithm. § TECHNICAL APPROACH In this section, we discuss how to evaluate the objective function given in Problem <ref>.
We then propose the planning algorithm to solve the problem. §.§ Gaussian Process with Stochastic Drop Locations In order to evaluate the objective function (<ref>), we need to calculate the entropy of the random variable (ℱ_U|ℱ_Q). If the sensor locations in Q were deterministic, this random variable would be a multivariate Gaussian, and its covariance matrix could be used to determine the entropy. However, our data is of the form {q_i, f(q_i)+ϵ_i}_i=1^a, where a = ∑_j |S_j| and q_i∼𝒩(q̅_i, Σ_i). Then, since the locations of the sensors are independent of each other, the probability distribution Pr(ℱ_U|ℱ_Q) is given by integrating the distribution for fixed locations over the random sensor locations, i.e., Pr(ℱ_U|ℱ_Q) = ∫⋯∫ Pr(ℱ_U|ℱ_Q,{q_1,…, q_a}) ∏_i=1^a Pr(q_i) dq_1⋯ dq_a. This distribution is not Gaussian and there is generally no closed-form expression for this integral <cit.>. Existing literature on Gaussian Processes with input uncertainty <cit.> resorts to approximations in order to solve this integral. A Monte Carlo approach that draws samples of q from the uncertain location distributions is considered in <cit.>. A Taylor expansion about q̅ is used in <cit.> to present an approximate method that requires the derivative of the mean of f. The Gaussian approximation method <cit.> assumes that the posterior distribution is Gaussian and finds its expected mean and expected covariance by integrating over the uncertainty of the locations q. For the squared exponential covariance, the expected covariance for normally distributed sensor locations can be computed analytically using the following expression <cit.>: Σ_QQ(i,j) = σ^2 exp( -1/2 (q̅_i - q̅_j)^⊤ (W+Σ_i +Σ_j)^-1 (q̅_i - q̅_j)) / | I+W^-1(Σ_i+Σ_j)(1-δ_ij) |^1/2. Here q̅_i and Σ_i are the mean and covariance of the normally distributed sensor location q_i in Q, and W is a diagonal matrix whose diagonal elements are the characteristic length scales of the respective input variables. We use the Gaussian approximation method in this paper because it does not require sampling and is computationally tractable, with a simple analytical expression for the covariance matrix. Moreover, since we are planning paths for the UAVs offline, before getting any sensor readings, we can use this method to find the mutual information by just using the expected covariance as discussed below. Since the Gaussian approximation method assumes that the distribution of ℱ_U|ℱ_Q is Gaussian, and because ℱ_U and ℱ_Q are jointly Gaussian, the mutual information is given by MI = H(ℱ_U) - H(ℱ_U|ℱ_Q) = H(ℱ_U) + H(ℱ_Q) - H(ℱ_U,ℱ_Q) = 1/2 log( det(Σ_UU) det(Σ_QQ)/det(Σ̅) ), where Σ̅ = [Σ_UU Σ_UQ; Σ_QU Σ_QQ]. We can use the expression (<ref>) to evaluate Σ_UQ(i,j) by replacing q̅_i with the known location of the i^th point of interest in U and Σ_i by the null matrix. The objective function (<ref>) and the surrogate objective defined in Equation (<ref>) are submodular and monotonically non-decreasing set functions in S; we sketch the argument for the mutual-information objective below.
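Before turning to that argument, the following sketch (our own illustration, not the authors' released code) shows how the expected covariance above and the resulting surrogate mutual information can be evaluated for a set of Gaussian-distributed sensor locations and fixed PoIs. The RBF hyperparameters collected in W, the additive measurement-noise term on the sensor block, and the two-dimensional ground coordinates are assumptions made for concreteness.

```python
import numpy as np

def expected_rbf(mu_i, S_i, mu_j, S_j, W, sigma_f=1.0, same=False):
    """Expected squared-exponential covariance for q_i ~ N(mu_i, S_i), q_j ~ N(mu_j, S_j)."""
    d = mu_i - mu_j
    delta = 1.0 if same else 0.0          # Kronecker delta_ij
    num = sigma_f**2 * np.exp(-0.5 * d @ np.linalg.solve(W + S_i + S_j, d))
    den = np.sqrt(np.linalg.det(np.eye(len(W)) + np.linalg.solve(W, S_i + S_j) * (1.0 - delta)))
    return num / den

def surrogate_mi(pois, means, covs, W, noise_var=0.1):
    """Surrogate MI = 0.5 * log(det(S_UU) det(S_QQ) / det(joint covariance))."""
    a, u = len(means), len(pois)
    Z = np.zeros_like(W)                  # null covariance for the deterministic PoIs
    S_qq = np.array([[expected_rbf(means[i], covs[i], means[j], covs[j], W, same=(i == j))
                      for j in range(a)] for i in range(a)]) + noise_var * np.eye(a)
    S_uu = np.array([[expected_rbf(pois[i], Z, pois[j], Z, W, same=(i == j))
                      for j in range(u)] for i in range(u)])
    S_uq = np.array([[expected_rbf(pois[i], Z, means[j], covs[j], W)
                      for j in range(a)] for i in range(u)])
    joint = np.block([[S_uu, S_uq], [S_uq.T, S_qq]])
    return 0.5 * (np.linalg.slogdet(S_uu)[1] + np.linalg.slogdet(S_qq)[1]
                  - np.linalg.slogdet(joint)[1])

# toy usage: two drop points with different landing uncertainty, two PoIs
W = np.diag([25.0, 25.0])                                  # squared length scales
means = [np.array([1.0, 1.0]), np.array([9.0, 1.0])]
covs = [4.0 * np.eye(2), 9.0 * np.eye(2)]
pois = [np.array([0.0, 0.0]), np.array([10.0, 0.0])]
print(surrogate_mi(pois, means, covs, W))
```

In the planner described in the next subsection, this surrogate serves as the set function whose marginal gains guide the greedy construction of the UAV paths.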
To see that the mutual-information objective is submodular, consider two sets of sensor measurement variables X ⊆ X' and an additional sensor z ∉ X'. Writing I(ℱ_U; X) = H(ℱ_U) + H(X) - H(ℱ_U ∪ X) and I(ℱ_U; X') = H(ℱ_U) + H(X') - H(ℱ_U ∪ X'), the marginal gains of adding z are E_X = H(X ∪ z) - H(ℱ_U ∪ X ∪ z) - H(X) + H(ℱ_U ∪ X) and E_X' = H(X' ∪ z) - H(ℱ_U ∪ X' ∪ z) - H(X') + H(ℱ_U ∪ X'). Their difference can be grouped as E_X - E_X' = [ H(X ∪ z) - H(X) - H(X' ∪ z) + H(X') ] - [ H(ℱ_U ∪ X ∪ z) - H(ℱ_U ∪ X) - H(ℱ_U ∪ X' ∪ z) + H(ℱ_U ∪ X') ] ≥ 0, so the marginal gain of a new sensor does not increase as the set of already selected sensors grows. The MI objective is also monotone because conditioning never increases entropy, H(ℱ_U | X) ≤ H(ℱ_U); in other words, an additional sensor provides extra information, which always helps. §.§ Planner The submodularity and monotonicity of the surrogate objective function allow us to formulate Problem <ref> as a submodular TOP. However, there is one additional constraint in Problem <ref> that is not present in the standard submodular TOP, namely the number of sensors k_i that each robot is able to deploy. We address this problem using the following observation. In a complete graph with N≥ k_i vertices for all i, there always exists an optimal solution in which robot i's path consists of no more than k_i vertices, excluding the starting vertex. The proof follows by contradiction. Suppose there is an instance where no optimal solution has at most k_i vertices along robot i's path. The robot is allowed to deploy at most k_i sensors. Therefore, there must be one or more vertices along the robot's path at which no sensor is dropped. Since the graph is a complete metric graph, we can "shortcut" such vertices without increasing the cost of the path. Therefore, we can recover a solution whose path consists of at most k_i vertices. This is a contradiction, proving the original claim. With this insight, we present our algorithm (Algorithm <ref>) to solve Problem <ref>. We first take the metric completion of the input graph. Recall that for a weighted graph G(V, E), each edge (u,v) ∈ E is associated with a cost w(u,v). In the preprocessing step, we generate a complete graph G^'=(V, E^') using G, where the edge cost w^'(u,v) is defined as the length of the shortest path between u and v in G. Then, we sequentially call a subroutine, Generalized Cost-Benefit (GCB), to compute a path for each robot. Compared to the original GCB algorithm <cit.>, in Algorithm <ref> we add one extra control condition in the while loop to account for the constraint, Eq. (<ref>), on the number of available sensors, using Lemma <ref>. The constraints imposed on the paths of the UAVs, which limit them to at most k_i vertices and a maximum length of T_i for UAV i, can be regarded as a partition matroid constraint. It has been shown in <cit.> that an α-approximate greedy step for submodular maximization over a matroid yields an approximation ratio of 1/(α+1). Hence, given an α-approximation algorithm to solve the submodular orienteering problem for a single UAV, Algorithm <ref> results in a 1/(α+1) approximation ratio for maximizing Objective (<ref>) for multiple UAVs. When the paths of all the UAVs are constrained to be of at most length T and k vertices, we get a uniform matroid, resulting in a 1-1/e^α approximation ratio. A quasi-polynomial time recursive greedy algorithm to solve the single-vehicle orienteering problem with submodular rewards is given in <cit.>, resulting in a logarithmic approximation factor α.
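A minimal sketch of the sequential structure of Algorithm <ref> is given below; it is our own illustration rather than the released implementation. The single-UAV subroutine is abstracted as a greedy cost-benefit loop, and the callbacks gain (the surrogate objective as a set function), path_cost (shortest-path travel time on the metric completion), and drop_time, as well as all function names, are assumptions made for illustration.

```python
def gcb_single_uav(depot, candidates, budget, max_sensors, gain, path_cost, drop_time):
    """Greedy cost-benefit construction of one UAV path (sketch of the GCB-style subroutine)."""
    path, drops = [depot], []
    while len(drops) < max_sensors:                  # extra condition on the sensor count
        best, best_ratio = None, 0.0
        for v in candidates:
            if v in drops:
                continue
            cost = path_cost(path + [v, depot]) + (len(drops) + 1) * drop_time
            if cost > budget:                        # respect the time/energy budget T_i
                continue
            marginal = gain(drops + [v]) - gain(drops)
            extra = cost - (path_cost(path + [depot]) + len(drops) * drop_time)
            ratio = marginal / max(extra, 1e-9)      # benefit per unit of additional cost
            if ratio > best_ratio:
                best, best_ratio = v, ratio
        if best is None:
            break
        path.append(best)
        drops.append(best)
    return path + [depot], drops

def sequential_planner(depots, budgets, sensor_counts, candidates, gain, path_cost, drop_time):
    """Plan one UAV at a time, conditioning each UAV's gain on the drops of earlier UAVs."""
    all_drops, plans = [], []
    for depot, T, k in zip(depots, budgets, sensor_counts):
        conditioned_gain = lambda S: gain(all_drops + S)
        path, drops = gcb_single_uav(depot, candidates, T, k,
                                     conditioned_gain, path_cost, drop_time)
        plans.append(path)
        all_drops += drops
    return plans, all_drops
```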
In this paper we use Generalized Cost Benefit (GCB) algorithm to solve the single UAV problem as it has better runtime than the recursive greedy algorithm <cit.>. § EVALUATION In this section, we evaluate the performance of our algorithm through a series of numerical experiments. We first explain the setup for the simulation. Then, we will show one qualitative example to illustrate the difference between the proposed approach and the baseline. Next, we will quantitatively evaluate the performance of the proposed approaches w.r.t. the uncertainty reduction of PoIs. Moreover, we will show the running time of the proposed algorithm w.r.t. the number of robots. §.§ Experimental Setup The flying object model used in this study is based on the work described in <cit.>. This model captures the motion of the sensors, considering the gravity, the sensors' surface area, and the speed of the wind. The sensor mass is set to 10kg. The surface coefficient is 1 and the vertical height is 500m. We begin by defining the map, ground truth, and wind field, as shown in Fig. <ref>. The map provides labels for all the potential dropping points and PoIs. The ground truth is generated by combining multiple Gaussian functions. Data points sampled from the ground truth are used to learn the kernel function, where we employ the RBF function. The wind field indicates the speed at specific locations on the map. By combining the sensor motion model with the wind field, we can estimate the landing position of the sensors. Using a given kernel, the Algorithm <ref> is applied to search for a set of sensor dropping locations which is an approximate solution to the main problem. The final sensor locations are determined by sampling from the flying object model with uncertainty. Once the sensor locations are obtained, we can measure the environmental values and compute the posterior of PoIs based on these measurements. §.§ An qualitative example In the following, we present a comparison between a baseline approach and our proposed method using the defined settings. The experiment focuses on a scenario with two UAVs, where each UAV is equipped with four sensors. The UAVs are allocated a distance budget of 870 units to drop all the sensors along their respective paths. §.§.§ Baseline In the baseline case (Fig <ref>), the UAVs tend to drop a higher number of sensors in areas with a higher concentration of PoIs. The objective is to ensure that each sensor can cover one or more PoIs. However, due to the uncertainty introduced by the wind, the sensors tend to cluster in smaller regions. As a result, the four sensors located around coordinates (0,100) are only capable of accurately estimating two PoIs' value, while the remaining PoIs are not sufficiently covered. This can be observed in Fig <ref>, where the two PoIs in the lower right corner exhibit a significantly higher error of estimation. §.§.§ Our Approach Our approach, on the other hand, considers the impact of wind uncertainty and prefers to drop sensors in a wider area. As shown in Fig. <ref>, the wind blows the sensors to a broader coverage area, allowing them to reach and cover more PoIs. This broader coverage results in a significant reduction in the error of PoI estimation compared to the baseline case. Additionally, it is worth noting that the areas where the sensors are dropped but do not have high concentration of PoIs exhibit high error rates. 
This demonstrates the effectiveness of our approach in adapting to the wind uncertainty and achieving better coverage of the target area. §.§ Comparisons with Baselines In this section, we compare the MSE of three different approaches across three different scenarios. The MSE is computed as the sum of the square of the difference between the posterior of the PoIs and the ground truth values of the PoIs. In the first two scenarios, we assume that the wind speed is uniform and the variance of landing location is the same for all dropping nodes. In the first scenario, the final location of a sensor follows a Gaussian distribution with a variance of 900. Two UAVs are deployed, with each carrying 4 sensors. In the second scenario, the final location of a sensor follows a Gaussian distribution with a variance of 820. Two UAVs are deployed, with each carrying 3 sensors. In both of these scenarios, our approach demonstrates approximately a 10% improvement in MSE compared to the baseline approach. The random selection approach, on the other hand, results in an MSE of 1. The third scenario introduces non-uniform uncertainty w.r.t. the drop point location, where the variance is a function of the non-uniform wind speed. Once again, our approach consistently outperforms the baseline approach, achieving a 12% improvement in MSE. These results highlight the effectiveness of our approach in mitigating the impact of uncertainty in different scenarios and achieving more accurate sensor placements. §.§ Running Time Lastly, we demonstrate the scalability of our approach. In comparison to the baseline approach, our approach may have a slightly longer running time in each scenario. However, both approaches grow polynomially in run time with the number of sensors per UAV. To further evaluate the computational performance, we also simulated a brute-force approach. The brute-force approach generates all possible combinations of sensor dropping points within the budget constraint and selects the set with the highest objective value. The runtime of the brute-force approach grew exponentially, taking hours to days to complete due to the factorial computation of all possible combinations. This stark contrast highlights the effectiveness and efficiency of our approach in finding nearly optimal solutions for sensor placement in a timely manner. § CONCLUSION This paper studies the problem of routing a team of UAVs to drop sensors to reduce the uncertainty of PoIs. The problem is formulated as a variant of TOP. To reduce the computational cost in the evaluation of the objective, we propose one surrogate objective with closed-form expression based on Gaussian approximation. A heuristic algorithm (SGA) is proposed to solve the relaxed problem with the surrogate objective. The formulation and the algorithm are validated in numerical simulation. IEEEtran
http://arxiv.org/abs/2307.04537v1
20230710130246
Q-YOLOP: Quantization-aware You Only Look Once for Panoptic Driving Perception
[ "Chi-Chih Chang", "Wei-Cheng Lin", "Pei-Shuo Wang", "Sheng-Feng Yu", "Yu-Chen Lu", "Kuan-Cheng Lin", "Kai-Chiang Wu" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Q-YOLOP: Quantization-aware You Only Look Once for Panoptic Driving Perception
Chi-Chih Chang^1, Wei-Cheng Lin^1, Pei-Shuo Wang^1, Sheng-Feng Yu^1,2, Yu-Chen Lu^1,2, Kuan-Cheng Lin^1 and Kai-Chiang Wu^1   ^1 National Yang Ming Chiao Tung University   ^2 Macronix International Co., Ltd.   August 12, 2023
===========================================================================================================================================================================================================================
In this work, we present an efficient and quantization-aware panoptic driving perception model (Q-YOLOP) for object detection, drivable area segmentation, and lane line segmentation, in the context of autonomous driving. Our model employs the Efficient Layer Aggregation Network (ELAN) as its backbone and task-specific heads for each task. We employ a four-stage training process that includes pretraining on the BDD100K dataset, finetuning on both the BDD100K and iVS datasets, and quantization-aware training (QAT) on BDD100K. During the training process, we use powerful data augmentation techniques, such as random perspective and mosaic, and train the model on a combination of the BDD100K and iVS datasets. Both strategies enhance the model's generalization capabilities. The proposed model achieves state-of-the-art performance with an [email protected] of 0.622 for object detection and an mIoU of 0.612 for segmentation, while maintaining low computational and memory requirements. Object detection, semantic segmentation, quantization-aware training, autonomous driving § INTRODUCTION Panoptic perception systems are critical components of autonomous cars, enabling them to perceive and understand their environment comprehensively. These systems solve multiple vision tasks simultaneously, including object detection, lane line segmentation, and drivable area segmentation, and generate a rich understanding of the road scene. In order to solve the multi-task problem for panoptic driving perception, we develop a low-power, multi-task model tailored for traffic scenarios, addressing the challenges of object detection and semantic segmentation. The aim is to create efficient algorithms capable of accurately recognizing objects and segmenting both lane lines and drivable areas while maintaining minimal computational cost, rendering them ideal for deployment in resource-constrained environments such as mobile devices, IoT devices, and embedded systems. To achieve low power consumption, we adopt a neural network architecture optimized for energy efficiency. The development process involves reducing the size and complexity of the models used for object detection and segmentation, as well as quantizing the model to minimize energy consumption. Our panoptic driving perception system reaches 93.46 FPS on an NVIDIA V100 and 3.68 FPS on a MediaTek Dimensity 9200 Series platform. Meanwhile, it attains 0.622 mAP and 0.612 mIoU on the object detection and segmentation tasks of the competition iVS dataset. § METHOD Our model, derived from YOLOPv2 <cit.> and YOLOv7 <cit.>, is specifically designed to address both object detection and segmentation tasks. It comprises five main components: the backbone, the neck, the detection head, the drivable area segmentation head, and the lane line segmentation head. The backbone is an Efficient Layer Aggregation Network (ELAN) <cit.>, optimized for rapid and efficient feature extraction.
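To make the five-component layout concrete before detailing each part, the sketch below wires a backbone, a neck, and the three task heads into one forward pass (PyTorch). The module arguments are placeholders: in the actual model they would be the ELAN backbone, the SPP neck, and the RepConv/segmentation heads described in the following paragraphs.

```python
import torch
import torch.nn as nn

class QYOLOPLike(nn.Module):
    """Skeleton of a panoptic driving model: one backbone, one neck, three task heads."""
    def __init__(self, backbone, neck, det_head, da_head, ll_head):
        super().__init__()
        self.backbone = backbone   # e.g. an ELAN-style feature extractor
        self.neck = neck           # e.g. an SPP block pooling multi-scale features
        self.det_head = det_head   # anchor-based detection head (RepConv-style)
        self.da_head = da_head     # drivable-area segmentation head
        self.ll_head = ll_head     # lane-line segmentation head

    def forward(self, x):
        feats = self.neck(self.backbone(x))
        det = self.det_head(feats)   # box/objectness/class predictions
        da = self.da_head(feats)     # per-pixel drivable-area logits
        ll = self.ll_head(feats)     # per-pixel lane-line logits
        return det, da, ll
```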
The neck of our model is a Spatial Pyramid Pooling (SPP) network <cit.>, which facilitates the handling of objects with varying scales and sizes by pooling features at multiple resolutions. This enhancement improves the accuracy and robustness of object detection. The detection head is based on RepConv <cit.>, an innovative neural network architecture that merges the efficiency of mobile networks with the accuracy of more complex models. Subsequently, a non-maximum suppression is applied to the output of object detection process to generate the final predictions. Consequently, our model is capable of accurately detecting objects in images while managing computation and memory requirements. Furthermore, in addition to object detection, our neural network also encompasses task-specific heads for drivable area segmentation and lane line segmentation. These dedicated heads possess distinct network structures that are optimized for their respective tasks. As drivable area segmentation and lane line segmentation generate separate predictions, we allow the result of lane line segmentation to overlap with the result of drivable area segmentation. In summary, our model is engineered to optimize efficiency and accuracy while also addressing the challenges associated with multi-task. Its unique combination of components and specialized task heads make it ideal for real-world applications such as autonomous driving and object recognition in resource-constrained environments. A visual representation of our model architecture is presented in Figure <ref>. §.§ Loss Function As we modify the head of YOLOPv2 <cit.> to support multi-label prediction, we introduce the loss function derived from HybridNets <cit.> to enhance the performance of our approach. The loss function for objection detection task consists of three components, L_det = α_1 L_class + α_2 L_obj + α_3 L_box Specifically, for L_det, focal loss is used in both L_class and L_obj. The classification loss, L_class, is responsible for penalizing classification errors, while L_obj is used for predicting object confidence. Both terms are implemented by focal loss <cit.>. The term L_box represents the similarity between the predicted results and ground truth by considering the overlap rate, aspect ratio, and scale. We implement L_box using the smooth L1 loss function. The coefficient α_1, α_2, and α_3 are hyperparameters used to balance the detection losses. The objective for lane line segmentation task combines three components, L_seg_ll = β_1 L_Tversky + β_2 L_Focal + β_3 L_Jaccard The first term Tversky loss <cit.>, L_Tversky, is used to address the issue of data imbalance and achieve much better trade-off between precision and recall, and the second term L_Focal aims to minimize the classification error between pixels and focuses on hard labels. The third term, L_Jaccard, is utilized to measure the similarity between prediction and ground-truth segmentation masks. The coefficient β_1, β_2 and β_3 are hyperparameters used to balance losses. On the other hand, the objective for drivable area segmentation task only combines two components: L_seg_da = γ_1 L_Tversky + γ_2 L_Focal The coefficient γ_1 and γ_2 are hyperparameters used to balance the losses. 
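A minimal sketch of the objectives above is given below (PyTorch); the focal, Tversky, and Jaccard terms are simplified binary-mask versions written for illustration (the Tversky trade-off constants are our own defaults), with the coefficient tuples playing the role of α, β, and γ. The overall combination with the δ weights, introduced next, is assembled analogously.

```python
import torch

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Binary focal loss on predicted probabilities p and 0/1 targets y."""
    p = p.clamp(eps, 1 - eps)
    return -(y * (1 - p) ** gamma * p.log() + (1 - y) * p ** gamma * (1 - p).log()).mean()

def tversky_loss(p, y, alpha=0.7, beta=0.3, eps=1e-7):
    """Soft Tversky loss; alpha/beta weight false negatives vs. false positives."""
    tp, fp, fn = (p * y).sum(), (p * (1 - y)).sum(), ((1 - p) * y).sum()
    return 1 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def jaccard_loss(p, y, eps=1e-7):
    """Soft IoU (Jaccard) loss."""
    inter, union = (p * y).sum(), (p + y - p * y).sum()
    return 1 - (inter + eps) / (union + eps)

def detection_loss(l_class, l_obj, l_box, a=(0.5, 1.0, 0.05)):
    return a[0] * l_class + a[1] * l_obj + a[2] * l_box

def lane_line_loss(p, y, b=(1.0, 1.0, 1.0)):
    return b[0] * tversky_loss(p, y) + b[1] * focal_loss(p, y) + b[2] * jaccard_loss(p, y)

def drivable_area_loss(p, y, g=(0.2, 0.2)):
    return g[0] * tversky_loss(p, y) + g[1] * focal_loss(p, y)
```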
The overall objective, L_all, for our final model combines the object detection loss L_det and the segmentation loss L_seg to learn both tasks at the same time: L_all = δ_1 L_det + δ_2 L_seg_da + δ_3 L_seg_ll The coefficient δ_1, δ_2 and δ_3 are hyperparameters used to balance the detection loss and segmentation losses. §.§ Quantization Quantization-Aware Training (QAT) is a technique aimed at making neural networks more amenable to quantization. During QAT, we introduce the quantization error during training by sequentially applying quantize and dequantize operations. This enables the network to learn more robust representations that can be efficiently quantized during inference. We employ the Straight-Through Estimator (STE) <cit.> algorithm for QAT, which offers a simple and efficient approach. With STE, we round the weights and activations to the nearest quantization level during forward propagation, while utilizing the gradients of the unquantized values during backward propagation. In this manner, the network can backpropagate the gradients through the quantization operation, which is not differentiable in its original form. By simulating the quantization error during training, we can ensure that the network learns robust features that are less sensitive to quantization. § IMPLEMENTATION DETAIL §.§ Data Preparation As the organizers of the contest provided only a portion of the BDD100K <cit.> dataset, we opted to use the complete BDD100K dataset to augment the training data. In previous works that used the BDD100K dataset for semantic segmentation, the focus was typically on segmenting only the drivable areas and lane lines. There were no attempts to further classify the drivable areas or lane lines into multiple categories. However, our semantic segmentation task involves categorizing images into six classes: background, main lane, alternative lane, single line, double line, and dashed line. This is different from previous works, which only segmented images into two classes: line and lane. Therefore, we re-generate the six classes of segmentation labels for the BDD100K dataset. For the object detection task, the objective is to detect four types of objects: pedestrian, vehicle, scooter, and bicycle. In the case of scooters and bicycles, both the rider and the respective vehicle are included within the bounding box. However, the BDD100K dataset labels riders, scooters, and bicycles as distinct entities, as depicted in the following figure. To comply with the task requirements, we employ the Hungarian algorithm <cit.> to pair riders with their corresponding scooters or bicycles and label them within the same bounding box. §.§ Training Process In our experiments, the training process consists of several stages: 1) initial pretraining on the BDD100K <cit.> dataset, then 2) pretraining on the BDD100K with mosaic augmentation <cit.>, 3) finetuning on both BDD100K and iVS datasets, 4) quantization-aware training (QAT) on the integrated iVS and BDD100K datasets. Initially, we train our model on the BDD100K dataset without mosaic for 300 epochs, then turning on mosaic augmentation for 150 epochs. Subsequently, we jointly train the model on both the BDD100K and iVS datasets for an additional 150 epochs. Finally, we apply QAT <cit.> for an extra 20 epochs for quantization. Data Augmentation Techniques. To enhance the model's generalization capabilities, we apply several data augmentation techniques during the training process. 
These techniques include normalization, random perspective transformation, HSV color space augmentation, horizontal flipping, and mosaic. By simulating variations that may occur in real-world scenarios, these techniques improve the model's ability to adapt to new data. The mosaic technique turns on in the second and third stages, and it is turned off for the last 10 epochs of third stage. In detail, all images is normalized with mean (0.485, 0.456, 0.406) and std (0.229, 0.224, 0.225), random perspective transforming with scale factor 0.25, and translation factor 0.1. For HSV color space augmentation, the factor of Hue augmentation is 0.015, the factor of Saturation augmentation is 0.7, and the factor of Value augmentation is 0.4. Weight Initialization. The weight of the backbone and detection head of our model is initialized from YOLOv7 <cit.> pretrained weight, while the other parameters are all random initialized. Implementation Details. We resize all images to 384 × 640 of both BDD100K <cit.> and iVS datasets. The Adam optimizer is used for optimization. Different batch sizes are used for different stages, with 32 during first and second pretraining, 32 during finetuning, and 16 during quantization-aware training (QAT). The default anchor sizes are set as (12,16), (19,36), (40,28), (36,75), (76,55), (72,146), (142,110), (192,243), and (459,401). The learning rate scheduler employed is cosine annealing with a warm-up phase, and the initial learning rates are set to 1e-2 during first pretraining, 5e-3 during second pretraining, 5e-4 during finetuning, and 5e-5 during QAT. The minimum learning rates are set to 1e-5 during first pretraining, 5e-6 during second pretraining, 5e-7 during finetuning, and 5e-8 during QAT. The warm-up phase is set to 5 epochs during pretraining and 0 epochs during finetuning and QAT. The values of the coefficients for the losses are reported as follows: α_1 = 0.5, α_2 = 1.0, α_3 = 0.05, β_1 = 1.0, β_2 = 1.0, β_3 = 1.0, δ_1 = 1.0, δ_2 = 1.0, γ_1 = 0.2, γ_2 = 0.2, and γ_3 = 0.2. These coefficients are used in the computation of the loss function, which is a crucial component of our proposed method. §.§ Inference Process The inference process involves pre-processing the input images, which includes resizing from 1080 × 1920 to 384 × 640. Following this, images are normalized with mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). The post-processing steps for the detection and segmentation parts are carried out. In the detection part, the intersection over union (IoU) threshold of non-maximum suppression (NMS) is set to 0.25, and the confidence threshold is set to 0.05. In the segmentation part, the results from the two segmentation heads are merged, and the output is upsampled from 384 × 640 to 1080 × 1920. § EXPERIMENTAL RESULTS §.§ Environment Setup We conducted our experiments using 8 Nvidia V100 GPUs for training. PyTorch 1.10 <cit.> and TensorFlow 2.8.0 <cit.> were used to implement our models and training pipeline, while OpenCV 4.6.0 <cit.> was used for image pre-processing. Our model architecture was based on the publicly available PyTorch implementations of YOLOP <cit.> and YOLOv7 <cit.>. To migrate the model from PyTorch to TensorFlow, we first translated the PyTorch model into ONNX[https://onnx.ai/] format, and then used the onnx2tflite[https://github.com/MPolaris/onnx2tflite] toolkit to convert ONNX into TensorFlow (.h5) and TFLite model (.tflite). 
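The PyTorch to ONNX step of this conversion chain uses the standard exporter, roughly as sketched below; the input shape matches the 384 × 640 training resolution, and the file and output names are illustrative. The subsequent ONNX to TensorFlow/TFLite step is performed by the external onnx2tflite toolkit and is not shown here.

```python
import torch

def export_onnx(model, path="qyolop.onnx"):
    """Export the trained model to ONNX for downstream TFLite conversion."""
    model.eval()
    dummy = torch.zeros(1, 3, 384, 640)          # NCHW input at the deployed resolution
    torch.onnx.export(model, dummy, path,
                      opset_version=12,
                      input_names=["image"],
                      output_names=["det", "drivable", "lane"])
```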
§.§ Main Results We present the performance of our model on the final testing dataset provided by the contest organizer at different training stages. Initially, we trained the model only on the BDD100K <cit.> dataset. However, due to the variation in the data distribution between BDD100K and the target task, the model may not be able to generalize well on the target task. To address this issue, we added the iVS dataset to the training process and performed mix data finetuning (i.e. the third stage). This approach enabled the model to adapt itself to better fit the target task, as the iVS dataset provided additional data with a similar data distribution to the target task. By training on this diverse dataset, the model was able to learn more effectively from the data and improve its performance on the target task. The performance of our proposed model is evaluated through various training stages. In the pretraining without mosaic stage, as depicted in Table <ref>, the model is trained on BDD100K dataset, which effectively boosts the performance of all. Based on YOLOv4 <cit.>, we integrate mosaic technology in our model training. However, in the pretraining stage with mosaic shown in Table <ref>, we notice a decrease in performance across all tasks. The implementation of the mosaic technique does not yield improved performance, which could potentially be attributed to its training exclusively on the BDD100K dataset. As a result, the model may be more suited to the BDD100K dataset, leading to a slight decline in performance when applied to the iVS dataset. Nevertheless, further finetuning on the iVS dataset enables the model to achieve enhanced performance. In the third stage, the model is finetuned using a mix of the BDD100K and iVS datasets with mosaic augmentation, which resulted in a significant improvement in object detection and lane line segmentation performance. Additionally, in the last 10 epochs, the mosaic augmentation was turned off to allow the model to recover its adaptability to normal images. §.§ Testing Results in the Competition Table <ref> shows the testing results of public dataset in the competition provided by the contest organizer. Our approach is effective for both object detection and segmentation tasks, achieving 0.495 mAP and 0.401 mIoU on pretraining with mosaic stage. Finetuning the model on the mix dataset improved the performance to 0.540 mAP and 0.615 mIoU, demonstrating the importance of the mix dataset in overcoming domain shift. Applying QAT to the finetuned model not only maintained the model's performance but also improved the detection task, which achieved 0.622 mAP and 0.612 mIoU. The testing results of private dataset in the competition provided by the contest organizer is shown in Table <ref>. Our approach achieves state-of-the-art performance in both object detection and segmentation tasks, with 0.421 mAP and 0.612 mIoU. Moreover, Table <ref> shows that our quantization strategy effectively reduced the model size by 4 times and improved inference speed by 3 times. These results demonstrate the effectiveness of our quantization strategy not only in improving model performance but also in reducing computational cost and memory footprint, which is important for real-world deployment of deep learning models. §.§ Quantization Strategy The performance of the quantized network using different quantization paradigms is presented in Table <ref>. 
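Before comparing the paradigms, we recall how the QAT numbers are produced: during training, weights and activations pass through a fake-quantization step whose gradient is the straight-through estimator described earlier. The sketch below is a generic 8-bit version written for illustration, not the exact quantizer configuration used in the experiments.

```python
import torch

class FakeQuantSTE(torch.autograd.Function):
    """Uniform fake quantization with a straight-through gradient."""
    @staticmethod
    def forward(ctx, x, scale, n_bits=8):
        qmin, qmax = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
        q = torch.clamp(torch.round(x / scale), qmin, qmax)
        return q * scale                          # quantize, then dequantize

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None, None               # STE: pass the gradient through unchanged

def fake_quant(x, scale=0.05, n_bits=8):
    return FakeQuantSTE.apply(x, scale, n_bits)
```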
We first observe that Post-Training Quantization led to a significant performance drop in the segmentation tasks, with only 0.285 and 0.248 mIoU achieved for drivable area and lane line segmentation, respectively. However, this performance drop can be mitigated by adopting a Quantization-Aware Training (QAT) strategy. Our experimental results demonstrate the effectiveness of QAT in mitigating the performance drop caused by quantization. Specifically, the quantized network achieved an 0.569 mAP for object detection and 0.852 mIoU for drivable area segmentation and 0.402 mIoU for lane line segmentation. These findings demonstrate the effectiveness of the QAT strategy in boosting the performance of quantized network, as compared to the Post-Training Quantization strategy. § CONCLUSION In this work, we have successfully implemented a light-weighted object detection and segmentation model. To improve its efficiency, we explored the effectiveness of two techniques: quantization-aware training and mix data finetuning (i.e. the third stage). Through extensive experimentation, we have demonstrated the effectiveness of these techniques in improving the accuracy and efficiency of our model. Our final model has achieved competitive results on the target dataset, demonstrating its potential for real-world applications. IEEEbib
http://arxiv.org/abs/2307.05854v1
20230712001815
On the Characterization of Quantum Flip Stars with Quantum Network Tomography
[ "Matheus Guedes de Andrade", "Jake Navas", "Inès Montaño", "Don Towsley" ]
quant-ph
[ "quant-ph", "cs.NI" ]
Quantum-Enhanced Metrology for Molecular Symmetry Violation using Decoherence-Free Subspaces Nicholas R. Hutzler August 12, 2023 ============================================================================================= The experimental realization of quantum information systems will be difficult due to how sensitive quantum information is to noise. Overcoming this sensitivity is central to designing quantum networks capable of transmitting quantum information reliably over large distances. Moreover, the ability to characterize communication noise in quantum networks is crucial in developing network protocols capable of overcoming the effects of noise in quantum networks. In this context, quantum network tomography refers to the characterization of channel noise in a quantum network through end-to-end measurements. In this work, we propose network tomography protocols for quantum star networks formed by quantum channels characterized by a single, non-trivial Pauli operator. Our results further the end-to-end characterization of quantum bit-flip star networks by introducing tomography protocols where state distribution and measurements are designed separately. We build upon previously proposed quantum network tomography protocols, as well as provide novel methods for the unique characterization of bit-flip probabilities in stars. We introduce a theoretical benchmark based on the Quantum Fisher Information matrix to compare the efficiency of quantum network protocols. We apply our techniques to the protocols proposed, and provide an initial analysis on the potential benefits of entanglement for Quantum Network Tomography. Furthermore, we simulate the proposed protocols using NetSquid to assess the convergence properties of the estimators obtained for particular parameter regimes. Our findings show that the efficiency of protocols depend on parameter values and motivate the search for adaptive quantum network tomography protocols. § INTRODUCTION Quantum networks are a critical component of the next quantum revolution. The interconnection of quantum processing systems with channels that provide quantum communication are key for the scalability of quantum computers <cit.>, and enable applications such as quantum key distribution <cit.>, quantum secrete sharing <cit.> and distributed quantum sensing <cit.>. Despite recent experimental demonstrations of entanglement distribution in quantum networks with fiber <cit.> and free-space communications <cit.>, the fragility of quantum information in the face of noise remains as the major barrier to the physical realization of scalable, useful quantum networks. This barrier is inherent to the complexity of quantum communication systems, which must integrate diverse quantum and classical hardware. In particular, hardware imperfections introduce unavoidable noise in the quantum information exchanged among network nodes during communication. A quantum network node must be capable of initializing, storing, and processing quantum information, either using memory in the form of matter qubits <cit.> or by storing photons in delay lines <cit.>. The inefficiencies in memory devices introduce noise in the form of decoherence and loss <cit.>, as well as through gate imperfections which can introduce diverse processing noise. Moreover, photons are the fundamental transmission media for quantum information and performing transduction is key in networks with matter-based memories that are not optically active <cit.>. 
In addition to transduction, frequency conversion is necessary since different light frequencies are optimal for different applications <cit.>. For instance, optimal frequencies for processing can differ from the usual telecom band that reduces photon loss in fibers <cit.>. Due to unavoidable imperfections in implementation, transduction and frequency conversion methods are themselves sources of noise. Finally, the propagation of photons in fiber and free space incurs losses and phase errors, which can corrupt the information encoded in photons <cit.>. The noise introduced in the different layers of communication hardware accumulates as multiple nodes are used for communication. Whether it be in one-way network architectures, where quantum information is directly transmitted across quantum channels interconnecting the nodes, or in two-way architectures, where the channels are used to generate and propagate entangled states for teleportation, the greater the number of nodes required to establish communication, the higher the noise introduced in the information transmitted. Therefore, the development of quantum error correction codes and decoders <cit.>, as well as purification protocols <cit.> capable of improving upon the negative effects of noise in quantum communication is instrumental to further the physical implementation of useful quantum networks. Furthermore, the design of noise-aware applications is fundamental in Noisy Intermediate Scale Quantum (NISQ) hardware, since it is not possible to rely on complex quantum error correction protocols to achieve fault-tolerant quantum operations. In particular, quantum circuit compilation routines can be optimized based on gate and memory noise models to obtain a gain in performance <cit.>. The demand for quantum error correction and purification protocols, as well as the design of noise-aware applications renders network noise characterization as a central topic in the development of quantum networks. Error decoding in quantum error correction protocols benefits vastly from the characterization of errors processes. In addition, noise-aware applications need to use either static or dynamic error models to optimize behavior and increase efficiency <cit.>. In this context, Quantum Network Tomography (QNT) has been previously introduced to address the end-to-end characterization of network links <cit.>. It connects classical network tomography <cit.> with quantum parameter estimation <cit.> to devise efficient characterization methods for link noise in quantum communication through end-to-end measurements. End-to-end characterization is based on the assumption that quantum network infrastructure cannot be directly accessed to estimate link parameters. Thus, network users must obtain channel estimates by measuring quantum states that were distributed through the network. End-to-end estimation is considerably harder than point-to-point link estimation, i.e., independent estimation of network links, due to the fact that quantum channels cannot be probed directly. Instead, they act as hidden, unobservable processes for which statistics must be obtained by observing the systemic behavior of the network. QNT differs from Quantum Process Tomography (QPT) in a meaningful way. QNT considers an end-to-end network estimation problem, where the parameters to be estimated cannot be directly measured, while QPT aims to estimate black-box processes. 
For simplicity, the QNT formulation focused on in this article, assumes that network links represent quantum channels, i.e Completely Positive Trace Preserving (CPTP) maps, with a known parametric form. It is considered that each operator in the Kraus decomposition of a channel is of a given parametric form, and the goal is to estimate the parameters to characterize all the links. Such assumption is absent in the general QPT formulation which has the goal of estimating each component of transfer matrices characterizing CPTP maps. QPT can be used to characterize networks by introducing additional assumptions in its general formulation, although we do not address methods of this form in this article. §.§ Contributions Providing efficient solutions for the characterization of arbitrary quantum networks is extremely challenging. Nonetheless, the initial work in QNT <cit.> studied the particular case of quantum star networks with links representing probabilistic one-qubit Pauli channels, described by a single Pauli operator and one parameter, e.g quantum bit-flip star networks with different flip probabilities in general. Two methods have been previously proposed to obtain estimators for all of the network parameters: a method that uniquely identifies the parameter vector characterizing the network with the aid of global measurements, and a method that relies on local measurements to produce two estimates for the parameter vector. In this work, we provide additional results for the characterization of star networks and improve on the analysis of the methods proposed in <cit.>. Our contributions are four-folded: * We provide a new general description of QNT protocols in which state distribution and measurements are separately defined. This definition enables methods where the same state distribution protocol is used to generate different estimators based on distinct measurements performed at the end-nodes. In addition, it enables the construction of tomography protocols that combine multiple, distinct state distribution circuits and measurements to uniquely identify network parameters. * We provide novel QNT protocols for the unique characterization of bit-flip stars. The protocols use both global and local quantum measurements at the network nodes to estimate parameters. Moreover, the QNT protocols proposed generalize to stars with either Z or Y flip channels through a change of basis in the operators used at the network nodes. * We analyze the QNT protocols proposed in this article, and compare their estimation efficiencies. Our analysis is centered on the numerical evaluation of the Quantum Fisher Information Matrix (QFIM) containing link parameters. Our findings show that estimation efficiency of the protocols proposed depend on the values of the parameters to be estimated. This dependency in parameter value is similar to the findings reported in <cit.> for the estimation of single-qubit depolarizing channels. * We simulate the designed QNT protocols in four-node star networks using the discrete-event quantum network simulator NetSquid <cit.>. We use simulations to numerically analyze the convergence rate of the estimators in which our QNT protocols are based on. Our findings from simulation are in accordance with the results obtained for the QFIM, and show how the different QNT protocols behave in a particular parameter regime. 
The remainder of this article is organized as follows: in Section <ref>, we provide the necessary background knowledge to discuss our contributions; we describe the state distribution and measurement protocols used to devise QNT protocols in Section <ref>; in Section <ref>, we present the tomography protocols for the unique characterization of network links; our simulation results and numerical analysis are reported in Section <ref>; finally, we present concluding remarks in Section <ref>. § BACKGROUND Quantum networks are represented as graphs, with nodes representing arbitrary quantum processors and links representing quantum channels that enable the nodes to exchange quantum information. This work focuses on quantum star networks with links characterized by one-qubit probabilistic quantum channels ℰ_e of the form ℰ_e(ρ) = θ_e ρ + (1 - θ_e) σρσ, where θ_e ∈ [0, 1], ρ: ℍ^2→ℍ^2 is the two-qubit density operator, and σ = X. The assumption of bit-flip channels is considered for simplicity, and all the tomography protocols described in this work generalize to the case where σ = Y or σ = Z under a basis transformation of all the operations performed and states used. A quantum (n + 1)-node star network is formed by the interconnection of n end-nodes through an intermediate node, as depicted in Fig.<ref>. We represent the nodes of the star as v_j for j ∈{0, …, n}, and label the intermediate node as v_n. The link (v_j, v_n) represents a quantum channel ℰ_j following (<ref>). §.§ System model The nodes of the network are assumed capable of initializing qubits in the computational basis state |0⟩ and of performing arbitrary quantum circuits to process them. The end-nodes communicate quantum information by preparing qubits in arbitrary states and transmitting them through the intermediate node, one qubit at a time. The channels act on the qubits transmitted through the intermediate node and corrupt the states with bit-flip noise. Furthermore, we assume that noise introduced in link (v_j, v_n) by channel ℰ_j is symmetric, such that the probability of a bit-flip occurring in a transmission from node v_j to v_n is the same as that of a flip occurring in the opposite direction. Therefore, a star network is characterized by n bit-flip probabilities that specify (<ref>) for all channels. We consider QNT protocols for the star network using an intermediate node for quantum state distribution. Such a protocol consists of a set 𝒞 = {C_1, C_2, …, C_S} of state distribution circuits, ϖ = {Π_1,…,Π_S} of Positive Operator-Valued Measures (POVM), and ℳ = {m_1,…, m_S} of number of measurement copies to be performed. In particular, the state distribution circuit C_i is performed m_i times to generate m_i copies of a distributed state ρ_i(θ) in the end-nodes, each of which is measured with POVMs Π_i. We consider all POVMs to be projective measurements in multiple qubits. Therefore, the M = ∑_im_i measurement outcomes form a data set 𝒟 of M binary strings that can be used to perform estimation. It is important to emphasize that the intermediate node cannot perform quantum measurements to contribute to 𝒟 directly, otherwise the problem reduces to the case where each channel is independently estimated. Note that circuits in 𝒞 represent distributed circuits, and channels are used to transmit qubits when necessary. When discussing state distribution protocols in the star, we refer to the end-node that starts the process as the root of the protocol and to the remaining end-nodes as leaves. 
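As a concrete reference for this channel model, the snippet below applies the single-qubit bit-flip map to a density matrix, either directly or on one qubit of an n-qubit state (NumPy; written by us for illustration, with θ the no-flip probability as above).

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def bit_flip(rho, theta):
    """E(rho) = theta * rho + (1 - theta) * X rho X for a single qubit."""
    return theta * rho + (1 - theta) * X @ rho @ X

def bit_flip_on_qubit(rho, theta, k, n):
    """Apply the bit-flip channel to qubit k of an n-qubit density matrix rho."""
    ops = [I2] * n
    ops[k] = X
    Xk = ops[0]
    for m in ops[1:]:
        Xk = np.kron(Xk, m)
    return theta * rho + (1 - theta) * Xk @ rho @ Xk

# example: |0><0| sent through a channel with flip probability 1 - theta = 0.2
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
print(np.real(np.diag(bit_flip(rho0, 0.8))))     # -> [0.8, 0.2]
```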
Moreover, we refer to quantum state distribution circuits as quantum state distribution algorithms interchangeably in the remainder of this work. §.§ States in the GHZ basis Some of the tomography protocols proposed in this article are based on the distribution of mixed states diagonal in the Greenberger–Horne–Zeilinger (GHZ) basis. The n-qubit GHZ basis generalizes the Bell basis to n qubits, and has 2^n states that, when written in the computational basis, assume the form |Φ_s⟩ = |0s_1:⟩ + (-1)^s_0|1s_1:⟩/√(2) where s ∈{0,1}^n is an n-bit string, s is the bit-wise negation of s, s_0 is the first bit of s and s_1:∈{0, 1}^n - 1 is the string obtained by removing s_0 from s. For example, when s = 0101, s_0 = 0, s_1: = 101 and s = 1010. We denote the projector onto |ϕ_s⟩ as Φ_s = ϕ_s. Specifying the GHZ basis in the Z basis is helpful when Z basis measurement statistics are to be extracted from such states. Similarly, expressing the GHZ basis in the X basis of n qubits will prove helpful in the description of tomography protocols. As a matter of fact, a complete description of the states in the general case is not necessary and we consider the rule that specifies what states of the X basis have non-zero components when |ϕ_s⟩ is projected in that basis. Thus, let |x^+⟩ denote a state in the X basis of n-qubits, such that x_j^+ = +, if x_j = 0, -, if x_j = 1, e.g |0^+1^+0^+⟩ = |+-+⟩. Using the bit string representation, the inner product ⟨x^+|ϕ_s⟩ provides the component of |ϕ_s⟩ in the X basis as ⟨x^+|ϕ_s⟩ = ⟨x^+|0s_1:⟩ + (-1)^s_0⟨x^+|1s_1:⟩/√(2). Through algebraic manipulations, it is possible to show that ⟨x^+|ϕ_s⟩ = (-1)^x · 0s_1: + (-1)^s_0 + x · 1s_1:/√(2^n + 1), where · denotes the inner product between two binary strings. The inner product allows to compute the probability of measuring state |ϕ_s⟩ in the X basis and obtaining state |x^+⟩ as ⟨x^+|ϕ_s⟩^2 = (s_0 + 1̃· x) 2/2^n - 1, where 1̃ = 1 … 1 denotes an n-bit string with all bits equal one. This last result implies that the probability will be non-zero only when the parity of x is s_0, i.e if the number of - labels is even when s_0 = 0 and odd when s_1 = 1. The binary parity of a string s will appear in the definition of different estimators for the tomography problem. Therefore, let β: {0, 1}^n→{0, 1} denote the function β(s) = (∑_i = 0^n - 1 s_i)2. §.§ Quantum Parameter Estimation The quantum parameter estimation problem captures the estimation of a parameter vector θ∈ℝ^n from a θ-dependent mixed state with density matrix ρ(θ). The goal is to describe an estimator θ̂∈ℝ^n for θ based on measurement statistics obtained from ρ(θ). A set of POVMs {Π_j} is applied to ρ to generate a data set 𝒟 of observations that allow one to obtain statistics to estimate θ. In this work, we focus on quantum parameter estimation problems with Q-qubit mixed states having density matrices of the form ρ(θ) = ∑_k = 0^2^Q - 1λ_k(θ) Λ_k, where λ_k: ℝ^n→ [0, 1] and Λ_k: ℍ^2^Q→ℍ^2^Q denote the k-th eigenvalue of ρ(θ) and the projector onto its correspondent eigenvector, respectively. Thus, the only dependence of ρ in θ comes from its eigenvalues, i.e., ρ depends on θ and its eigenvectors do not. Such states are the focus in this work since they arise from the action of quantum channels of the form in (<ref>). Note that Q is arbitrary in this case, although our methods are based on states with Q = n - 1 and Q = n. The Quantum Fisher Information Matrix (QFIM) is a fundamental tool in the analysis of quantum estimation problems <cit.>. 
For a given state ρ(θ), the entries of the QFIM have the form ℱ_ij^ρ = 1/2[ρ{L_i, L_j}], where ∂ρ(θ)/∂θ_j = 1/2{ρ(θ), L_j} for all j, and {A, B}=AB + BA denotes the anti-commutator of operators A and B. The matrix L_j is known as the Symmetric Logarithm Derivative (SLD) operator of parameter θ_j and its diagonal basis is the optimal basis to measure ρ in order to extract statistics to estimate θ_j. The element ℱ_jj^ρ is known as the Quantum Fisher Information (QFI) of ρ with respect to θ_j, which gives how much information a measurement from ρ contains about θ_j. Since the SLDs specify the optimal measurements basis for each individual parameter, an optimal measurement for all parameters exists if and only if all SLDs commute with each other. Moreover, the invertibility of the QFIM specifies whether or not the entire parameter vector can be jointly estimated. In particular, estimators for θ derived from measurement statistics of ρ are underdetermined if ℱ^ρ is singular. When ρ(θ) is full-rank and has the form in (<ref>), the QFIM has entries ℱ_ij^ρ = ∑_k1/λ_k∂λ_k/∂θ_i∂λ_k/∂θ_j. The QFIM establishes the quantum Cramèr-Rao bound (QCRB), which is described as follows. Any estimator θ̂ constructed from measurement statistics of ρ has a covariance matrix Σ_θ̂: ℝ^n→ℝ^n which holds the bound Σ_θ̂≥ (ℱ^ρ)^-1. An estimator is said to be efficient if its covariance matrix satisfies the QCRB with equality. § STATE DISTRIBUTION AND MEASUREMENT PROTOCOLS The goal of quantum network tomography is to estimate link parameters through end-to-end measurements. As any estimation process, quantum network tomography requires the encoding of parameters in quantum states, which can be measured to obtain parameter statistics. Therefore, there are three key steps that guide the analysis of quantum network tomography. It is necessary to design 1) state distribution protocols capable of generating the required parametrized states, 2) measurement protocols (POVMs) to be performed at the nodes, and 3) estimators taking measurements as inputs. Previous work defines solutions for the tomography problem where these three steps are done in unison <cit.>. This work provides a more general description of network tomography protocols, considering different state distribution and measurements protocols as building blocks. In this section, we explore these building blocks and devise state distribution protocols and measurements for QNT. Without loss of generality, the analysis presented considers the case of bit-flip channels. Nonetheless, the methods are easily generalized to any other channel of the form in (<ref>), i.e., a pure Z or pure Y channel, through a change of basis. §.§ States and measurement probabilities in parametric forms It is convenient to describe states in parametric forms for the analysis presented in this section. Let ρ(θ) be an n-qubit density matrix that depends on parameter θ. The states of interest are of the parametric form shown in (<ref>), where the eigenvalues of the density matrix depend on θ and the eigenvectors do not. Such states can be represented by describing a probability function p_ρ: {0, 1}^n→ [0, 1] that maps the binary label of an eigenvector to its corresponding eigenvalue. In particular, p_ρ(s, θ) = λ_k(θ), where s is the binary representation of the integer k. More precisely, Λ_k is itself a label representing a vector in a given basis Λ over the Hilbert space of n qubits. 
Thus, every index k ∈ℤ^+ can be understood as an n-bit string s ∈{0, 1}^n uniquely specifying a vector in the basis. Once basis Λ is specified together with an ordering for its states, a density matrix diagonal in Λ can be represented by the parametric function p_ρ(s, θ). This characterization is useful for describing the probability distribution of projective measurement outcomes of any state in an arbitrary basis. Applying the Born rule, the probability distribution of measurement outcomes of a state ρ in basis B={b_i} has the form p^B_ρ(s, θ) = ∑_s' ∈{0,1}^n p(s', θ) ⟨b_i|Λ_k⟩^2, where s and s' are the binary representations of integers i and k, respectively. Parametric forms of state eigenvalues yield the description of parametric forms for their measurement probabilities in arbitrary bases. This description is interesting as it provides a path for parameter estimation based on measurements of a state in different bases. Throughout the remainder of this article, we use the notation p_𝒞^B(s, θ) to denote the probability of measuring label s ∈{0, 1}^n from a B-basis projective measurement performed in a state ρ(θ) distributed by a quantum circuit 𝒞. Moreover, p_𝒞(s, θ) denotes measurement probabilities in the eigenbasis of ρ(θ) distributed by 𝒞. It is helpful to define the probability distribution function α: {0,1}^n× [0,1]^n→ [0,1] in the form α(s, θ) = ∏_j = 0^n - 1s_jθ_j + s_j(1 - θ_j), which represents a joint probability distribution of n independent binary random variables. In particular, θ_j and (1 - θ_j) are the probabilities of observing the j-th bit of s as s_j = 0 and s_j = 1, respectively. The form of α provides simple estimators for θ. In particular, let S ∈{0, 1}^n denote an n-bit random variable, with S_j ∈{0, 1} denoting the its j-th random bit. It is possible to write an estimator for θ_j in the form θ̂_j = P̂r̂[S_j = 0], where P̂r̂[S_j = 0] is any estimator for the probability that S_j = 0. §.§ Encoding network parameters in quantum states Remote state distribution is used in this work to generate quantum states of interest. Distribution of a quantum state from the root to the leaves can be described in terms of a distributed circuit implemented by the nodes of the network. The nodes initialize quantum registers at the beginning of the distribution process. The links are used to communicate quantum information, which manifests either as the direct transmission of qubits (one-way architecture), or as the generation of Bell pairs between two nodes (two-way architecture). Local quantum operations are performed at the nodes, progressively transforming the joint initialized state into the desired output. By using network links for communication, the final distributed states depend on channel parameters and allow for parameter inference through measurements. §.§.§ Previous state distribution protocols In previous work, a general state distribution algorithm for network tomography was defined under the restriction of a single channel use for each distributed state <cit.>. The algorithm generated two distribution circuits for the solution of tomography problems in star networks, which are of interest to this work. For the sake of completeness, we explicitly present the two distribution circuits defined in <cit.>. For both circuits, consider v_0 to be the node selected to initiate state distribution, i.e., the root. The first distribution circuit, which is referred to as the Multicast (M) circuit in this work, distributes Z-diagonal states. 
Node v_0 prepares a qubit in the pure-state |0⟩ and sends it to node v_n. Node v_n receives a qubit in a mixed state given the action of channel ℰ_0. Then, a multi-target CNOT gate is performed in v_n, using the received mixed state as control and n - 2 newly initialized qubits in state |0⟩ as targets. Each of the outputs is transmitted to a leaf of the star. The eigenvalues of the (n-1)-qubit density matrix describing the state of the qubits in the end-nodes have the parametric form p_M(s, θ) = θ_0 α(s, θ_1:) + (1 - θ_0) α(s, θ_1:), where s ∈{0,1}^n - 1, θ_1:∈ℝ^n - 1 is the vector [θ_1, …, θ_n - 1], and α is given in (<ref>). We expand p_M for a four-node star in Table <ref> and show the Multicast distribution circuit for such a star in Fig.<ref>. The second algorithm distributes GHZ-diagonal states and is similar to the first with modifications in the operations performed in v_0 and v_n. We refer to this circuit as the Independent Encoding (IE) circuit. Node v_0 starts the process by creating two qubits in the pure Bell state |Φ_00⟩. The one-qubit gate XHX is applied to one of the qubits, while the other is sent to v_n through channel ℰ_0. The qubit received in v_n is operated on with the one-qubit gate ZHZ, followed by a mutli-target CNOT gate similar to the one used for the M algorithm. The outputs of these gates are sent to the leaves of the network. The GHZ-diagonal n-qubit state in the leaves of the network has eigenvalues of the form p_IE(s,θ) = α(s, θ), which are fully expanded in Table <ref> for a four-node star. The distribution circuit for this particular case is depicted in Fig.<ref>. Note that the probability function in (<ref>) has the form of a joint probability distribution of independent binary random variables, and the probabilities shown in the third column of Table <ref> can be interpreted as follows: the label of each state is a three-qubit measurement outcome; for the i-th qubit, the probabilities that the measured bit is one and zero are θ_i and (1 - θ_i), respectively; finally, the joint measurement probability is the product of the individual probabilities, which comes from a global GHZ measurement at the end-nodes. The examples shown in Tables <ref> and <ref> demonstrate the differences in parameter encoding obtained by distinct distribution circuits. Such differences manifest themselves in the findings reported in <cit.>, where estimators based on the M circuit were not able to uniquely determine parameters, while the ones based on the IE circuit were. These findings are revisited in Section <ref>, where novel estimators combining different states and measurements are introduced. §.§.§ New states for parameter encoding We now present two new state distribution algorithms for tomography. The first algorithm is denoted as the Root Independent (RI) algorithm, which is based on the general distribution protocol defined in <cit.> when applied to star networks. The root node starts by initializing a qubit in the pure state |+⟩ and transmits it to the intermediate node. The channel connecting the root to the intermediate node does not change the state of this qubit, since |+⟩ is an eigenvector of X. Therefore, the intermediate node receives the pure state |+⟩, which undergoes the action of a Hadamard gate and a generalized (n - 1)-qubit CNOT gate. Once the operations are finished, each qubit is sent to a leaf of the star. 
The RI distribution algorithm yields a Z-basis diagonal, (n-1)-qubit state of the form p_RI(s, θ) = α(s_1:,θ_1:), which is similar to (<ref>), i.e joint distribution of independent variables, although with no dependency on θ_0. This property motivates the name of the algorithm, since θ_0 is the parameter defining channel ℰ_0 that interconnects the root to the intermediate node. The probability distribution in (<ref>) is exemplified in the fourth column of Table <ref> for a four-node star. The second algorithm is denoted as the Back-and-Forth (BF) distribution circuit. In this protocol, the root transmits a qubit to node v_n initialized in the pure-state |0⟩. The intermediate node performs a GHZ generation circuit, applying a Hadarmard gate to the qubit it received and using it as the control of a multi-target CNOT gate. The output of the circuit is the GHZ-diagonal state ρ(θ_0) = θ_0Φ_00…0 + (1 - θ_0) Φ_10…0. If a bit-flip occurs on the initial channel, the qubit received in the intermediate node is in state |1⟩. After the Hadarmard gate, the control used in the CNOT gate is in state |-⟩ and the output GHZ state will have a negative relative phase, i.e the state Φ_10…0. When a bit-flip does not occur, there is no relative phase in the output GHZ state, i.e the state Φ_00…0. The qubit used for control is then sent back through the initial channel, and each remaining qubit is sent to a particular leaf of the network through its respective link. The eigenvalues of the GHZ-diagonal state in the end-nodes of the network have the form p_BF(s, θ) = α(s_0, θ_0) p_M(s, θ), where p_M(s, θ) is given in (<ref>). The BF distribution circuit for a four-node star is shown in Fig.<ref>, and (<ref>) is fully expanded in the fourth column of Table <ref> for this case. §.§ Measurement protocols State distribution protocols are the first ingredient of quantum tomography protocols, since they provide quantum states that depend on channel parameters. In order to assess the information contained in such states, it is necessary to perform quantum measurements. In the above cases, these measurements refer to POVMs performed in the end nodes of the star. Following the discussion presented in Section <ref>, the optimal POVMs to extract statistics for estimation are projective measurements in the diagonal bases of the distributed states, which are the bases that diagonalize the corresponding SLD opertors. States ρ_IE and ρ_BF are diagonal in the GHZ basis, and the corresponding optimal measurements are global GHZ-basis projective measurements in the end-nodes. Such measurements are significantly more challenging than local ones, as they require distributed entanglement to be performed. Therefore, it is of interest to consider alternative non-optimal local measurements to obtain statistics for estimation when such states are considered. We now describe 1) optimal measurements for the states presented and 2) local measurement strategies for ρ_IE and ρ_BF. §.§.§ Optimal measurements The optimal measurement bases for the states are shown in Table <ref>. Since the states distributed by the circuits described in the previous section were expressed in their respective diagonal bases, the probability distribution for optimal measurements were already specified by the functions p_M,p_IE, p_RI, and p_BF. §.§.§ Local measurements The first strategy for both IE and BF is to measure the qubits in the end-nodes of the star in the Z basis. 
Computing the measurement probability distributions for the states when all qubits are locally measured in the Z basis is straightforward, since the parametric form of their eigenvalues was expressed by writing the GHZ basis in the Z basis following (<ref>). The measurement probability distribution for the state distributed by the IE protocol is obtained from (<ref>) and has the form p_IE^Z(s,θ) = 1/2 p_RI(s, θ), with p_RI given in (<ref>). This measurement distribution implies that statistics obtained from local Z measurements in ρ_IE(θ) do not depend on θ_0. Similarly, the Z-basis measurement probability for the BF state is derived from (<ref>) and has the form p_BF^Z(s, θ) = 1/2p_M(s,θ), where p_M is given in (<ref>). It is of interest to point out that the dependency of p_BF^Z in θ is qualitatively the same as that of p_M. Hence, any estimator based on p_M, such as the one specified in <cit.>, can use estimates for p_BF^Z to estimate θ. The second strategy is to locally measure the qubits at the end-nodes in the X basis. The probabilities in (<ref>) can be used with (<ref>) to describe the measurement probabilities for both IE and BF states. Interestingly, the two states have equal measurement probabilities in the X basis. Let s ∈{0, 1} denote the label of the measurement outcome in the X basis. The measurement probability function has the form p_IE^X(s, θ) = p_BF^X(s, θ) = 1/4α(β(s), θ_0), which depends exclusively on θ_0, where β(s) is the parity function given in (<ref>). This dependency shows that θ_0 can be directly estimated from X-basis measurement outcomes of either state and, thus, that the first channel can be characterized from such measurements. § QUANTUM NETWORK TOMOGRAPHY PROTOCOLS The state distribution and measurement protocols defined in Section <ref> are the necessary ingredients to characterize bit-flip probabilities in star networks. The protocols are combined in this section to construct complete tomography protocols. We now present multiple protocols that use different combinations of state distribution and measurements to provide estimators for all n-channel parameters in an arbitrary (n+1)-node star with different efficiencies. Moreover, the estimators presented are evaluated in Section <ref> both analytically, through their respective QFIMs, and numerically, with the aid of simulation. We categorize the tomography protocols presented in this section based on the number of distinct state distribution protocols and measurements used to obtain a unique description of the entire parameter vector. In particular, an estimator that requires a number S of state distribution circuits and P of measurement protocols is an (S · P)-step protocol. §.§ Quantum network tomography and parameter estimation In order to discuss the quantum network tomography protocols proposed in this section, it is instrumental to formally address how they are framed in the theory of quantum estimation. Complete network tomography protocols require, in the general case, multiple copies of distinct states to be distributed and measured with various protocols. In quantum estimation problems, a parameter-dependent state ρ(θ) is measured in order to estimate θ. We now discuss how the notion of multiple estimation steps fits into this perspective. Throughout this discussion, we use ρ^* to denote the joint state of multiple copies of possibly different, multi-qubit quantum states that are the input to the quantum estimation problem. The dependency of ρ^* with θ is omitted for simplicity. 
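The protocol comparisons developed below rely on evaluating the QFIM. As noted in the Background section, when the eigenvectors do not depend on θ and the state is full rank, the QFIM reduces to ℱ_ij = ∑_k (1/λ_k) ∂λ_k/∂θ_i ∂λ_k/∂θ_j, which is straightforward to evaluate numerically. A small sketch using central finite differences (illustrative code, not from the paper):

```python
import numpy as np

def qfim_from_eigenvalues(lam, theta, h=1e-5):
    """lam(theta) returns the eigenvalue vector; F_ij = sum_k (1/l_k) dl_k/dth_i dl_k/dth_j."""
    theta = np.asarray(theta, dtype=float)
    n = len(theta)
    l0 = np.asarray(lam(theta))
    grads = np.zeros((n, len(l0)))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        grads[i] = (np.asarray(lam(theta + e)) - np.asarray(lam(theta - e))) / (2 * h)
    return np.einsum("ik,jk,k->ij", grads, grads, 1.0 / l0)

# example: single bit-flip channel acting on |0>, eigenvalues (theta, 1 - theta)
F = qfim_from_eigenvalues(lambda t: np.array([t[0], 1 - t[0]]), [0.8])
# F is approximately 1 / (theta * (1 - theta)) = 6.25
```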
We start with the analysis of one-step protocols, described by a single Q-qubit distribution circuit that generates a state ρ(θ), and a single measurement protocol executed at the end-nodes. Let m be the number of copies of ρ(θ) to be distributed to the end-nodes to obtain parameter-dependent statistics. The goal is to construct an estimator θ̂∈ℝ^n for θ∈ℝ^n using a state ρ^* of the form ρ^* = ρ(θ)^⊗ m, where the superscript ⊗ m denotes the tensor product of ρ(θ) with itself m times. Note that each copy of ρ(θ) is a Q-qubit state spread across end-nodes. Consider that the projective measurement chosen for this one-step tomography protocol is determined by projectors {Π_1, …, Π_2^Q}, with Π_i = π_i. The dataset of observations to obtain statistics for the construction of θ̂ has the form {s_1,…, s_m}, where each s_i∈{0, 1}^Q is the Q-bit classical label denoting the i-th measurement outcome. It follows directly from the properties of the QFIM of separable states <cit.> that the QFIM of ρ^* has the form ℱ_ρ* = m ℱ_ρ(θ). Note that the measurement protocols described in Section <ref> measure each copy of ρ(θ) independently. Therefore, no entanglement is used across distinct copies of ρ(θ). Understanding the additional power provided by the use of entanglement across different copies to obtain estimators is a promising research direction for future work. The one-step case is crucial as it serves as the basis for the discussion of the general S · P-step estimation case introduced earlier. In particular, the combination of a distributed state and a measurement protocol produces a state. Such a state is directly obtained by applying a CPTP map that corresponds to a measurement operation of the distributed state in the specified basis. Hence, consider a tomography protocol that uses one circuit 𝒞 to distribute a Q-qubit state and two measurement strategies ϖ_1 = {Π_1^1,…, Π_2^Q^1} and ϖ_2 = {Π_1^2,…, Π_2^Q^2} to uniquely determine the parameters. We can represent the two scenarios by considering states of the form ρ_j(θ) = ∑_q = 0^2^Q - 1Π_q^jρ(θ) Π_q^j = ∑_q = 0^2^Q - 1⟨π_q^j|ρ(θ)|π_q^j⟩Π_q^j j ∈{1,2}, and for which the QFIM can be computed. In this case, the QFIM is the Classical Fisher Information Matrix (CFIM) obtained using the probability distribution of measurements in the ϖ_j basis. Since projective measurements are considered, the QFIM is directly obtained from the scalar values ⟨π_q^j|ρ(θ)|π_q^j⟩, which are the probabilities in (<ref>). Mappings following (<ref>) are helpful when multiple copies are considered. Suppose that m_1 and m_2 copies of ρ(θ) are measured in the ϖ_1 and ϖ_2 bases, respectively. The state ρ^* that characterizes the combined distributed copies has the form ρ^* = ρ_1(θ)^⊗ m_1⊗ρ_2(θ)^⊗ m_2, and the dataset 𝒟 = {{s_1^1,…, s_m_1^1}, {s_1^2,…, s_m_2^2}} of measurement observations has a total of m_1 + m_2 entries with a sub-component for each state. In this case, the QFIM of ρ^* assumes the form ℱ_ρ^* = m_1 ℱ_ρ_1 + m_2 ℱ_ρ_2, where the dependency in θ is omitted for clarity. This analysis is extended to the general case as follows. Suppose that a given estimation protocol uses a set of S distributed states {ρ_1(θ), …, ρ_S(θ)}, and a set of P_i projective measurements {ϖ_1^i,…, ϖ_P_i^i} for each state ρ_i(θ). By mapping each state-measurement pair to a state, we can represent the set of distributed states as ϱ = {ρ_11(θ), ρ_12(θ), …, ρ_SP_S(θ)}, which contains S^* = ∑_iP_i states. 
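The additive structure in (<ref>) lends itself to a quick numerical sanity check. The sketch below is not tied to the specific states of this work: it computes the classical Fisher information matrix of a generic parametric outcome distribution by finite differences and combines two measurement strategies additively over copies; the stand-in distribution p_toy and the copy counts are illustrative assumptions only.

```python
import itertools
import numpy as np

def cfim(prob_fn, theta, n_bits, eps=1e-6):
    """Classical Fisher information matrix of an outcome distribution
    prob_fn(s, theta): F_jk = sum_s (d_j p)(d_k p)/p, with derivatives
    taken by central finite differences."""
    theta = np.asarray(theta, dtype=float)
    n = len(theta)
    outcomes = list(itertools.product([0, 1], repeat=n_bits))
    probs = np.array([prob_fn(s, theta) for s in outcomes])
    grads = np.zeros((n, len(outcomes)))
    for j in range(n):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += eps
        tm[j] -= eps
        grads[j] = [(prob_fn(s, tp) - prob_fn(s, tm)) / (2 * eps)
                    for s in outcomes]
    F = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            F[j, k] = np.sum(grads[j] * grads[k] / np.maximum(probs, 1e-12))
    return F

# Stand-in outcome distribution (NOT the paper's p_M, p_IE, p_RI or p_BF):
# independent bits with p(s, theta) = prod_j theta_j^(1-s_j) (1-theta_j)^s_j.
def p_toy(s, theta):
    return np.prod([t if b == 0 else 1.0 - t for b, t in zip(s, theta)])

theta = [0.58, 0.58, 0.58]
F_1 = cfim(p_toy, theta, n_bits=3)     # first measurement strategy
F_2 = cfim(p_toy, theta, n_bits=3)     # placeholder second strategy
m1, m2 = 3, 3                          # copies measured per strategy
F_total = m1 * F_1 + m2 * F_2          # additivity over independent copies
print("QCRB lower bound on total variance:",
      np.trace(np.linalg.inv(F_total)))
```

In practice, p_toy would be replaced by the measurement distribution of the corresponding state and basis, e.g. p_M, p_IE, p_RI, or p_BF.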
In order to simplify the analysis, we change the double index in (<ref>) to a single index ranging from 1 to S^* and consider that the set of distributed states has the form ϱ = {ρ_1(θ), …, ρ_S^*(θ)}. Furthermore, let m_i denote the number of copies of ρ_i(θ) that are measured for estimation. The density matrix representing the combination of copies is given by ρ^* = ⊗_i = 1^S^*ρ_i(θ)^⊗ m_i, and its QFIM has the form ℱ_ρ^* = ∑_i = 1^S^* m_i ℱ_ρ_i. Describing the structure of the QFIM in the general case provides a direct way to compare different quantum tomography protocols. Moreover, the separability of ρ^* facilitates the analytical description of the QFIM. Its complexity reduces to that of computing the QFIM of each ρ_i in ϱ and performing a weighted sum of the results. We highlight that any QNT method can be understood using a similar analysis. §.§ Estimators We now present the estimators that will be combined to define the complete QNT protocols. The estimators are written with respect to probability outcomes based on the parametric representation of states. These probabilities are themselves estimated based on datasets 𝒟 of measurement outcomes. Thus, let S^ρ, B denote a random variable representing the measurement outcome of a single copy of state ρ in the basis B. A dataset 𝒟_ρ^B, m= {s_1^ρ, B,… , s_m^ρ,B} formed by the outcomes of B-basis measurements performed in m copies of ρ is the realization of S_ρ^B m times. If the B is omitted in the superscript, the symbols refer to the case of measurements performed in the diagonal basis of density operators. §.§.§ M-state based estimators The probability distribution p_M(s, θ) in (<ref>) was used in <cit.> to provide two vector estimates for θ, that are based on two different estimates for θ_0. We focus on the relationship between θ̂_̂ĵ and θ̂_0 that has the form θ̂_j = P̂r̂[S_j-1 = 1] - θ̂_0/1 - 2θ̂_0, for j > 0. Given an estimate θ̂_0 ≠ 1/2, θ̂_j for j > 0 can be estimated by taking P̂r̂[S_j-1 = 1] = 1 - 1/m∑_i = 1^ms_i(j-1)^M, where the j - 1 indices appear because ρ_M is an n - 1 qubit state. §.§.§ IE-state based estimators The IE state is special in the sense that the probability distributions of measurements in the GHZ, X, and Z basis are all represented by the joint distribution of independent binary random variables. Using (<ref>), (<ref>), and (<ref>) together yields θ̂_j = 1 - 1/m∑_i = 1^ms_ij^IE, j ∈{0,…, n-1}, θ̂_j = 1 - 1/m∑_i = 1^ms_i0^IE, Z⊕ s_ij^IE, Z, j ∈{1,…, n-1}, θ̂_0 = 1 - 1/m∑_i = 1^mβ(s_ij^IE, X), where ⊕ denote the addition module-two operator. §.§.§ RI estimators The RI estimators follow (<ref>), since the probability distribution in (<ref>) is also a joint probability of independent variables that does not depend on θ_0, and ρ_RI is diagonal in the Z basis. §.§.§ BF estimators From (<ref>), estimators for θ̂_0 using X-basis measurements for ρ_BF have the form in (<ref>). When Z basis measurements are performed, the probability distribution of outcomes reduces to p_M with a normalization coefficient of 1 / 2. Thus, using an estimate for θ̂_0, an estimator for θ̂_j can be obtained from (<ref>) by substituting P̂r̂[S_j] with 1 - 1/m∑_i = 1^ms_i0^BF, Z⊕ s_ij^BF, Z, for j ∈{1,…, n-1}. For GHZ measurements, the estimators still follow (<ref>) and (<ref>), although the probabilities are computed from 𝒟_BF^m = {s_i^BF}. §.§ Tomography protocols We combine the estimators to obtain multiple protocols that completely characterize bit-flip stars. §.§.§ One-step protocols We discuss two one-step protocols. 
The first protocol was introduced in <cit.>, and uses the IE distribution circuit with GHZ measurements yielding estimators following (<ref>). In particular, the IE state is distributed m times and each GHZ measurement yields an observation containing information about all parameters. The second protocol uses the BF distribution algorithm m times with GHZ measurements, obtaining θ̂_0 from (<ref>), and using (<ref>) for each j ∈{1,…, n - 1} to obtain θ̂_j. Note that equations of the form (<ref>) have a singularity when θ̂_0 = 1 / 2 and estimators based on such equations are not well-defined when θ_0 = 1 / 2. §.§.§ Two-step protocols Both IE and BF states lead to two-step protocols using measurements in the Z and X basis. Thus, in each case, the end-nodes distribute m copies of the state, and m / 2 Z- and X-basis measurements are performed. In both cases, θ̂_0 is obtained from (<ref>), while θ̂_j for j ∈{1, …, n - 1} is obtained from (<ref>) and (<ref>) for the IE and BF states, respectively. We combine the RI and M distribution circuits into the following two-step protocol. First, node v_0 is used as the root of the RI circuit to distribute m / 2 copies of ρ_RI, which are measured in the Z basis. The copies provide estimators for θ̂_j for j ∈{1, …, n - 1} of the form in (<ref>). Secondly, the M circuit is used with v_0 as the root, and (<ref>) is used once for each θ̂_j to obtain n - 1 initial estimates θ̂^1_0,…, θ̂^n - 1_0 for θ_0. The final estimate returned for θ_0 assumes the form θ̂_0 = ∑_i = 1^n - 1θ̂^i_0 / (n - 1). §.§.§ n-step protocol The RI circuit leads to the following n-step protocol. Let m be the number of states distributed. Each end-node v_j in the star is used as the root for the RI circuit m / n times. When v_j is the root, the state distributed does not depend on θ_j. Hence, (n - 1) m / n states among the total m distributed depend on θ_j. The estimator in (<ref>) can be combined for each v_k ≠ v_j to obtain an estimator of the form θ̂_j = 1 - n/m(n - 1)∑_s ∈𝒟_j s_j, where 𝒟_j denotes the combined dataset of the (n - 1) m / n RI circuits performed with all end-nodes v_k ≠ v_j as root, and s_j denotes the measurement bit obtained in v_j once a sample is locally measured in the Z basis. § EVALUATION The six protocols presented in Section <ref> depict the diverse space of solutions for the tomography of quantum bit-flip networks. In this section, we numerically evaluate and compare the performance of all six protocols discussed. We start with a numerical analysis of the QFIM inverse for each tomography protocol. The QCRB, (<ref>), implies that the trace of the QFIM's inverse is a lower bound on the sum of the variances of estimators. In particular, [ℱ^-1] lower bounds the sum of the variances of entries θ̂_j of any estimator θ̂ obtained from measurements of ρ. The states distributed for tomography have QFIMs following (<ref>) and, as discussed in Section <ref>, the combined copies of states and measurements are captured by QFIMs following (<ref>). Let θ^*∈ [0, 1] denote a fixed probability value. For each QNT protocol 𝒫, we compute [ ℱ^*-1] when m states are distributed to obtain one estimate θ̂ of the entire parameter vector θ, for a four-node star with uniform bit flip probability, i.e., θ_j = θ^* for j ∈{0, 1, 2}. An n-step protocol requires n states to be distributed to obtain a single estimate θ̂. Therefore, we use m = 6 for every protocol, since six is the least common multiple of the number of states required by the protocols in a four-node star. 
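Before turning to the numerical results, the estimator formulas of the previous subsection can also be exercised in a toy Monte Carlo. The sketch below assumes the outcome-bit convention implied by the estimator θ̂_j = 1 - (1/m)∑_i s_ij, namely Pr[s_j = 1] = 1 - θ_j; it is a simplified stand-in for illustration, not the simulator-based evaluation reported next.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ie_outcomes(theta, m, rng):
    """Toy sampler for GHZ-basis outcomes of the IE state, under the
    assumed labeling convention Pr[s_j = 1] = 1 - theta_j."""
    theta = np.asarray(theta)
    return (rng.random((m, len(theta))) < (1.0 - theta)).astype(int)

def ie_estimator(samples):
    """theta_hat_j = 1 - (1/m) * sum_i s_ij, applied column-wise."""
    return 1.0 - samples.mean(axis=0)

theta_true = np.array([0.58, 0.58, 0.58])   # four-node star, theta* = 0.58
for m in (36, 360, 3600):
    est = ie_estimator(simulate_ie_outcomes(theta_true, m, rng))
    print(m, np.linalg.norm(est - theta_true))
```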
Our results are reported in Figure <ref>. Interestingly, the curves highlight that the relative behavior of the inverse trace changes based on θ^*. In particular, BF- and M-based protocols exhibit lower QCRBs when θ^* is far from 1/2, and large bounds when it is close, while RI- and IE-based protocols show a smoother relationship with θ^*. Furthermore, the one-step BF protocol yields the lowest QCRB when θ^* is either close to zero or one, while the one-step IE protocol has the lowest bound for most of the parameter regime. The curves also highlight that the advantages provided by entanglement in estimation vary according to the parameter values, which has been previously reported in <cit.> for the point-to-point estimation of depolarizing channels. To further our analysis, we simulate the QNT protocols using NetSquid <cit.>. We study four-node networks with θ^* = 0.58, and compute the norm ‖θ̂ - θ‖ to analyze the convergence behavior of the estimators with the number of distributed states used for estimation. Figure <ref> shows results for the case when the number of states used by each protocol is varied from 36 to 2024, averaged over five trials. As expected, two distinct groupings appear, with the BF- and M-based protocols reporting a significantly higher variance than the rest of the protocols. This grouping agrees with our theoretical expectations given the QCRBs shown in Figure <ref> for θ^* = 0.58. The combined evaluation of the QCRBs and the convergence behavior of estimators is a first step towards the rigorous benchmarking of QNT protocols. Our findings provide evidence that the performance of QNT methods can depend on the values of the parameters to be estimated, offering initial insights into the design of optimal QNT protocols. In particular, this paves the way for adaptive QNT methods that dynamically modify the estimation strategy based on current parameter estimates in order to exploit the distinct efficiencies of the protocols across parameter values. § CONCLUSION The results reported in this article further the end-to-end characterization of bit-flip quantum stars. We reviewed the methods proposed in <cit.> and provided novel QNT protocols that utilize multiple state distribution protocols and measurements. The proposed protocols uniquely characterize bit-flip probabilities in quantum star networks by exploiting both local and global network measurements, achieving varying estimation efficiency. Moreover, the QFIM analysis presented in this article is general and provides new insights into the design of QNT protocols. The numerical evaluation of the trace of the QFIM's inverse sheds light on entanglement advantages for QNT. In particular, our findings show that the proposed QNT protocols which do not rely on global measurements exhibit performance comparable to the ones that use pre-shared entanglement to perform measurements at the end-nodes. Thus, determining the conditions under which entanglement yields optimal QNT methods and provides significant quantum advantage is a fundamental research question that we identify as future work. Furthermore, the results presented in this article are stepping stones toward the goal of devising QNT protocols for the characterization of quantum star networks formed by arbitrary Pauli channels. Uniquely determining probabilities for this general case is considerably harder than the bit-flip scenario considered in this work.
Nonetheless, the state distribution schemes proposed serve as inspiration for the inquiry into efficient parameter encoding schemes for the characterization of generic Pauli noise. We identify the analysis of QNT protocols under the assumption of imperfect network hardware as a key direction for future work. The protocols developed in this article assume that nodes have access to perfect quantum operations and memories, disregarding the effects of operation noise in the end-to-end estimation of link parameters. Understanding the limits of end-to-end network characterization in the face of noise is fundamental for the development of useful QNT protocols, as it can help guide the design of quantum network management tools and inform protocol designers and network managers. Acknowledgments—This research was supported in part by the NSF grant CNS-1955744, NSF-ERC Center for Quantum Networks grant EEC-1941583, and MURI ARO Grant W911NF2110325.
http://arxiv.org/abs/2307.07481v1
20230714170504
BSM physics using photon-photon fusion processes in UPC in Pb+Pb collisions with the ATLAS detector
[ "Klaudia Maj" ]
hep-ex
[ "hep-ex" ]
BSM physics using photon-photon fusion processes in UPC in Pb+Pb collisions with the ATLAS detector Klaudia Maj (on behalf of the ATLAS Collaboration) AGH University of Kraków al. Mickiewicza 30, 30-059 Kraków, Poland e-mail: [email protected] Relativistic heavy-ion beams at the LHC are accompanied by a large flux of equivalent photons, leading to multiple photon-induced processes. This proceeding presents searches for physics beyond the Standard Model enabled by photon-photon processes in both di-tau and diphoton final states. The tau-pair production measurements can constrain the tau lepton's anomalous magnetic dipole moment (g-2), and a recent ATLAS measurement using muonic decays of tau leptons in association with electrons and tracks provides one of the most stringent limits available to date. Similarly, light-by-light scattering proceeds via loop diagrams, which can contain particles not yet directly observed. Thus, high statistics measurements of light-by-light scattering provide a precise and unique opportunity to investigate extensions of the Standard Model, such as the presence of axion-like particles. DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects, Michigan State University, USA, 27-31 March 2023 § INTRODUCTION In recent years, photon-induced processes in heavy-ion (HI) collisions have emerged as a promising path for studying beyond the Standard Model (BSM) physics. The ATLAS experiment <cit.> at the Large Hadron Collider (LHC) dedicates part of its annual operational time to HI physics, including ultra-peripheral collisions (UPC). UPC are a unique category of HI collisions, which occur when the distance separating the interacting nuclei exceeds the sum of their radii. The large electromagnetic fields generated by relativistic ions can be considered as fluxes of photons, as described in the Equivalent Photon Approximation formalism <cit.>. Photon-photon interactions occur in both proton-proton, pp, and HI collisions. However, in the latter, the cross-sections for a specific process experience a significant increase due to the Z^2 scaling of the photon fluxes (Z being the atomic number). Furthermore, HI collisions exhibit minimal hadronic pile-up, allowing for the identification of exclusive events and triggering on low-p_T particles. These exceptional characteristics make UPC an excellent tool for studying rare processes and searching for BSM phenomena. In this report, two results of the ATLAS experiment <cit.> are discussed: the observation of the γγ→τ^+τ^- process with the measurement of the anomalous magnetic moment of the τ-lepton <cit.>, and the measurement of light-by-light (LbyL, γγ→γγ) scattering with the search for axion-like particles (ALP) <cit.>. Both measurements utilize data from UPC Pb+Pb collisions and are potentially sensitive to BSM effects. § EXCLUSIVE Τ^+Τ^- PRODUCTION AND CONSTRAINTS ON A_Τ ATLAS reports the exclusive observation of the γγ→τ^+τ^- process <cit.> using data from 2018 Pb+Pb collisions at √(s_NN)= 5.02 TeV with an integrated luminosity of 1.44 nb^-1. The measurement of the exclusive production of τ-leptons is used to set new constraints on the anomalous magnetic moment of the τ-lepton, a_τ. The theoretical SM prediction is a_τ^SM = 0.001 177 21 (5) <cit.>, which is remarkably smaller than the currently available experimental bounds.
Various BSM theories, such as lepton compositeness, supersymmetry, and TeV-scale leptoquarks, predict modifications to the Standard Model (SM) value of a_τ. The most stringent limits on a_τ are currently provided by the DELPHI experiment: -0.052< a_τ < 0.013 (95% CL) <cit.>. The identification techniques commonly employed in ATLAS cannot be used for signal τ-leptons due to their very low transverse momentum (p_T) values. Instead, events considered in the analysis are required to contain one muon from a τ-lepton decay, and an electron or charged-particle track(s) from the other τ-lepton decay. Three distinct signal regions (SR) are defined: muon and electron (μ e-SR), muon and one track (μ1T-SR), and muon and three tracks (μ3T-SR). Candidate events are selected with a single muon trigger requiring muon p_T above 4 GeV. To ensure the exclusivity of the selected events, a veto on forward neutron activity in the Zero Degree Calorimeter is imposed. Muons selected for the analysis are required to have p_T> 4 GeV and |η|< 2.4, selected electrons have p_T> 4 GeV and |η|< 2.47, and selected tracks should have p_T> 100 MeV and |η|< 2.5. Events containing additional low-p_T tracks are rejected. Since different background processes contribute to each signal category, further requirements are introduced in the μ1T-SR (muon and track system p_T> 1 GeV) and the μ3T-SR (mass of the three-track system below 1.7 GeV). The main background contributions arise from exclusive dimuon production with final-state radiation (FSR) and from diffractive photonuclear interactions. The γγ→μμ background is constrained with a dimuon control region, 2μ-CR. It requires exactly two opposite-charge muons with invariant mass above 11 GeV to suppress quarkonia backgrounds and no additional tracks separated from the muons. After applying the event selection, a total of 656 data events were observed in the three signal regions in which the analysis was performed. The fitted muon p_T distributions for the μ1T-SR and 2μ-CR are shown in Figure <ref>. A very good data-to-prediction agreement is seen for the best-fit value of a_τ. The γγ→τ^+τ^- process was observed with a significance exceeding 5 standard deviations, and a signal strength of μ_ττ = 1.03_-0.05^+0.06 assuming the SM value of a_τ. To measure a_τ, a fit to the muon p_T distribution is performed in the three SRs with a_τ being the parameter of interest. A control region with events from the γγ→μ^+μ^- process is also used in the fit to constrain the initial-photon fluxes. Figure <ref> compares the ATLAS measurement of the anomalous magnetic moment of the τ-lepton with previous results obtained at the LEP experiments. The precision of this measurement is similar to the most precise single-experiment measurement by the DELPHI Collaboration <cit.>. § LIGHT-BY-LIGHT SCATTERING AND SEARCH FOR AXION-LIKE PARTICLES γγ→γγ scattering is a rare phenomenon allowed by quantum electrodynamics (QED) at lowest order via a quantum loop of virtual charged fermions or W^± bosons. LbyL production can be altered by various BSM phenomena: new particles entering the loop, Born-Infeld extensions of QED, space-time non-commutativity in QED, extra spatial dimensions, etc. Furthermore, the diphoton mass spectrum obtained from the LbyL process can be explored to search for potential neutral axion-like particles (ALP), which may contribute to the distribution as a narrow diphoton resonance <cit.>.
LbyL scattering was also measured by ATLAS in UPC Pb+Pb collisions at √(s_NN) = 5.02 TeV using a combined 2015+2018 data sample with an integrated luminosity of 2.2 nb^-1 <cit.>. The signature of interest is the exclusive production of two photons, each with transverse energy E_T^γ > 2.5 GeV, pseudorapidity |η^γ| < 2.4 and diphoton invariant mass m_γγ > 5 GeV with transverse momentum p_T^γγ < 1 GeV. Any extra activity in the detector is vetoed, in particular no reconstructed tracks originating from the nominal interaction point with p_T> 100 MeV are accepted. The final state photons are aligned in the azimuthal angle ϕ. Back-to-back topology is studied using diphoton acoplanarity defined as A_ϕ = 1 - |Δϕ|/π . Event candidates are expected to have A_ϕ< 0.01. The main background contribution originates from exclusive production of the electron–positron pairs (γγ→ e^+ e^-). In the measurement, the γγ→ e^+ e^- background is suppressed with the requirement of no tracks and pixel-tracks reconstructed in the Inner Detector. A remaining dielectron contribution is evaluated using a data-driven method. The second significant background source is gluon-induced central exclusive production (CEP) of photon pairs. The CEP background is evaluated using a dedicated control region in data (A_ϕ> 0.01) and then extrapolated to the LbyL signal region. ATLAS established the observation of LbyL process with 97 events observed in data, while the signal and background expectations are 45 events and 27 ± 5 events, respectively. The integrated cross-section measured in the fiducial phase space, defined by the requirements reflecting the event selection, is σ_fid = 120 ± 17(stat.)±13(syst.)±4(lumi) nb.The presented value can be compared with two theoretical predictions considered to be 78 ± 8 nb from the SuperChic v3.0 MC generator <cit.> and 80 ± 8 nb from <cit.>. In addition to the integrated fiducial cross-section, ATLAS measured γγ→γγ differential cross-sections involving four kinematic variables of the final-state photons. In general, a good agreement between the measurement and SM predictions is found. ALP may be produced in the photon–photon fusion, γγ→ a →γγ, followed by the decay to the diphoton pair, where a denotes the ALP field. Thus, a diphoton invariant mass distribution, m_γγ, presented in Figure <ref>, can be interpreted for ALP searches. The ALP production would result in a resonance peak with diphoton mass equal to the mass of a. The diphoton mass distribution was examined for a mass range between 6 and 100 GeV. No significant excess of events over expected background was found in the analysis. The 95% CL limit was derived for ALP production cross-section and ALP coupling to photons 1/Λ_a as a function of ALP mass. A summary of exclusion limits from different experiments together with the new ATLAS constraints is shown in Figure <ref>. The new ATLAS analysis places the strongest limits on the ALP production in the intermediate mass region to date. § SUMMARY The report highlighted the significance of UPC in exploring rare SM processes and searching for BSM phenomena. The γγ→τ^+τ^- process has been observed in Pb+Pb UPC by the ATLAS experiment, surpassing a 5σ significance. The signal strength is consistent with the Standard Model expectations. The new constraints on the a_τ have been set, and are competitive to the best limits obtained during the LEP era. With the upcoming Run-3 data, an improvement in precision is anticipated. 
Additionally, the ATLAS experiment has established the presence of γγ→γγ scattering, with the results consistent with the Standard Model prediction. The measured invariant mass of the diphoton system was used to set new exclusion limits on axion-like particles. This measurement provides the strongest constraints on ALP production in the mass region of 6–100 GeV to date. § ACKNOWLEDGMENTS This work was partly supported by the program "Excellence initiative – research university" for the AGH University of Kraków, by the National Science Centre of Poland under grant number UMO-2021/40/C/ST2/00187 and by PL-GRID infrastructure. Copyright [2023] CERN for the benefit of the ATLAS Collaboration. CC-BY-4.0 license.
http://arxiv.org/abs/2307.06172v1
20230712135635
State dependence of tunneling processes and nuclear fusion
[ "Roberto Onofrio", "Carlo Presilla" ]
quant-ph
[ "quant-ph", "nucl-ex", "nucl-th" ]
Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755, USA Dipartimento di Matematica, Sapienza Università di Roma, Piazzale Aldo Moro 2, Roma 00185, Italy Istituto Nazionale di Fisica Nucleare, Sezione di Roma 1, Roma 00185, Italy We discuss the sensitivity of tunneling processes to the initial preparation of the quantum state. We compare the case of Gaussian wave packets of different positional variances using a generalized Woods-Saxon potential for which analytical expressions of the tunneling coefficients are available. Using realistic parameters for barrier potentials, we find that the usual plane wave approximation underestimates fusion reactivities by an order of magnitude in a range of temperatures of practical relevance for controlled energy production. State dependence of tunneling processes and nuclear fusion Carlo Presilla August 12, 2023 ========================================================== Tunneling processes are of crucial relevance to a broad range of physical systems, including semiconductors <cit.> and heterostructures <cit.>, α-radioactivity and nuclear fusion in stars <cit.>, the early Universe <cit.>, and nuclear fusion processes in the laboratory <cit.>. Apart from an early contribution <cit.>, tunneling probabilities have usually been evaluated by considering incoming plane waves. However, in realistic settings such as the ones mentioned above, the particles undergoing tunneling cannot in general be fully described by plane waves, either because the particles are confined in space, or because in a many-body setting they undergo scattering with other particles, thereby limiting the coherence length of the plane wave <cit.>. Moreover, there are discrepancies between theoretical expectations and data from fusion experiments <cit.>, and therefore it may be important to scrutinize all the underlying theoretical assumptions. It is thus worthwhile to discuss the robustness of tunneling coefficients and fusion reactivities with respect to the choice of more general initial states, for instance by considering the representative set of Gaussian wave packets. The use of generalized Gaussian wave packets has already been pioneered by Dodonov and collaborators <cit.>, with results confirming that the predictions for tunneling rates may differ by even orders of magnitude from those arising from the Wentzel-Kramers-Brillouin (WKB) approximation usually employed for fusion reactivities. These studies, in particular <cit.>, have focused on analytical expressions valid under specific conditions, not necessarily encompassing the entire parameter space. The main goal of the present paper is to extend the above results by evaluating the tunneling coefficient for arbitrary values of the position and momentum spreading. A key ingredient of our analysis is the discussion of a potential admitting exact solutions for the tunneling coefficient in the entire energy range. This allows us to pinpoint differences arising from the sole structure of the incoming Gaussian wave packets, excluding other sources of differences such as those due to the use of approximations in the calculation techniques, for instance the WKB method. Additionally, we provide more intuitive arguments for the behavior of the fusion reactivity in both the cases of very narrow and very broad positional variances.
Finally, we also caution about using approximations like the WKB or the Hill-Wheeler ones, since estimates of tunneling coefficients may differ various orders of magnitude from their exact evaluation. We focus the attention on the Generalized Woods-Saxon (GWS) potential energy for a one-dimensional system first introduced in <cit.> (see also <cit.> for a simpler treatment) V(x) = -V_0/1+e^a(|x|-L) + W_0 e^a(|x|-L)/(1+e^a(|x|-L))^2 , where both V_0 and W_0 determine the peak values of the potential energy, and L, a, as in the usual Woods-Saxon potential, determine, respectively, the size of the effective well around the origin and its spatial spread. For a convenient choice of these four parameters, the GWS potential represents a symmetric well with value in the origin equal to -V_0/(1+exp(-aL))+ W_0 exp(-aL)/(1+exp(-aL))^2, and -V_0/2+W_0/4 at |x|=L. At large distances |x| ≫ L the potential energy decreases exponentially to zero as V(x) ≃ (W_0-V_0) exp(-ax), i.e., within a range λ≃ 1/a. This means that a semiqualitative difference from potential energies of interest for instance in nuclear fusion is that the barrier experienced by the nucleons, if schematized with this potential, does not have the long range as expected for Coulomb interactions, though in a realistic plasma the latter are screened on the Debye length. We choose the set of parameters as described in the caption of Fig. <ref>, resulting in well depth, barrier height and width of the well comparable to the ones of light nuclei. Using this potential and the related solutions in terms of tunneling coefficients T(E) evaluated for plane waves at energy E, we have considered more general cases of wave localized in both space and momentum. The most practical case, though not exhaustive of all possibilities, is a Gaussian wavepacket. Let us consider an initial Gaussian wavepacket with positional variance ξ^2, wave vector K and mean energy ħ^2 K^2/(2m): ψ(x,0)= (2/πξ^2)^1/4 e^-(x-x_0)^2/ξ^2+i K x. The corresponding wavefunction in wave vector space k is φ(k) = 1/√(2π)∫_-∞^+∞ψ(x,0) e^-i k x dx = 1/(2 π)^1/4√(ξ) e^-ξ^2 (k-K)^2/4 e^i(K-k)x. Then the probability density for a given wave vector k is a Gaussian function of k P(k,K) = |φ(k)|^2 = ξ/√(2 π) e^-ξ^2 (k-K)^2/2, where we have introduced the positional spreading ξ as the square root of the positional variance. The probability density P(k,K) represents a function of k peaked, for a symmetric distribution, at the average wave vector K. We now assume that the wave vector K belongs to a statistical distribution determined by a one-dimensional Maxwell-Boltzmann distribution with inverse temperature β, namely, w(K,β)_MB= (β/πħ^2/2m)^1/2 e^-βħ^2 K^2/(2m). with m=m_a m_b/(m_a+m_b) the reduced mass of the two interacting nuclei a and b. Particularly relevant is the reactivity defined as ⟨σ(E) v(E) ⟩_MB where ⟨ ... ⟩_MB denotes the statistical average, in our case over the Maxwell-Boltzmann distribution (<ref>), σ is the cross-section of a generic process, and v is the particle velocity. In the case of fusion, the cross-section is defined as σ(E)= π/k^2 T(E) = πħ^2/√(2 m^3)1/√(E) T(E), with the energy E=ħ^2 k^2/2m. When referred to the wave vector decomposition, the fusion reactivity is written as: ⟨σ v ⟩_MB = πħ^2/√(2 m^3)∫_-∞^+∞ dk ∫_-∞^+∞ dK (ħ^2 k^2/2m)^-1/2 × T(ħ^2 k^2/2m) P(k,K) w(K,β)_MB. 
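Before turning to the analytic reduction of this double integral, it is straightforward to evaluate it numerically. The sketch below works in reduced units with ħ = m = 1, restricts the integration to k, K > 0 for simplicity, and uses a smooth placeholder for the transmission coefficient T(E), since the exact GWS expression of the cited reference is not reproduced here; the GWS parameter values are likewise illustrative rather than those used for the figures.

```python
import numpy as np

hbar = m = 1.0   # reduced units for illustration only

def V_gws(x, V0=40.0, W0=60.0, a=4.0, L=2.0):
    """Generalized Woods-Saxon potential; placeholder parameter values."""
    e = np.exp(a * (np.abs(x) - L))
    return -V0 / (1.0 + e) + W0 * e / (1.0 + e) ** 2

xg = np.linspace(0.0, 10.0, 20001)
VB = V_gws(xg).max()                     # barrier height of this toy potential

def T_placeholder(E):
    """Stand-in transmission coefficient (smooth step at the barrier top);
    the exact GWS result should be used in practice."""
    return 1.0 / (1.0 + np.exp(-(E - VB) / (0.2 * VB)))

def reactivity(beta, xi, kmax=20.0, n=1500):
    """Brute-force evaluation of the <sigma v> double integral."""
    k = np.linspace(1e-3, kmax, n)
    K = np.linspace(1e-3, kmax, n)
    dk = k[1] - k[0]
    E = (hbar * k) ** 2 / (2.0 * m)
    P = (xi / np.sqrt(2.0 * np.pi)) * np.exp(-(xi ** 2) * (k[:, None] - K[None, :]) ** 2 / 2.0)
    w = np.sqrt(beta * hbar ** 2 / (2.0 * np.pi * m)) * np.exp(-beta * hbar ** 2 * K ** 2 / (2.0 * m))
    sigma_v = np.pi * hbar / (m * k) * T_placeholder(E)   # sigma(E) * v(E)
    return np.sum(sigma_v[:, None] * P * w[None, :]) * dk * dk

for xi in (0.2, 1.0, 5.0):
    print(f"xi = {xi}: <sigma v> ~ {reactivity(beta=1.0, xi=xi):.3e}")
```

The same routine can be scanned in ξ and β to explore the trends discussed in the following paragraphs.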
The integral over K can be evaluated analytically in the case of Gaussian wavepackets, yielding the rather compact formula ⟨σ v ⟩_MB= √(π)/2ħ/m∫_-∞^+∞ dk 1/k T(ħ^2 k^2/2m) ξ_eff e^-ξ_eff^2k^2/2, where we have introduced an effective positional spreading ξ_eff, depending on the inverse temperature, such that 1/ξ_eff^2 = 1/ξ^2+m/βħ^2. For states approximating a plane wave ξ^2 ≫βħ^2/m, therefore ξ_eff^2 ≃βħ^2/m, i.e., ξ_eff becomes the thermal De Broglie wavelength. In the opposite limit of states highly localized in position, ξ^2 ≪βħ^2/m, we have ξ_eff≃ξ. This shows that even assuming an initial quantum state with positional variance of quantum nature, at temperature large enough the relevant lengthscale below which quantum coherence of the wavepacket is maintained no longer depends on the initial preparation. Analogous conclusions have been already obtained in <cit.>. This can also be interpreted, in the case of a gas at given temperature and density, as corresponding to the mean free path in between two thermal collisions between two particles. The tunneling coefficient versus the average energy of the wave packet E is depicted in Fig. <ref> for various values of the width of the Gaussian wave packet ξ. The dependence of the tunneling coefficient on E is, quite predictably, mild when the value of E is comparable or higher than the barrier height. Instead its dependence at lower energies strongly depends on ξ, with the case of plane waves (in the limit of ξ→ +∞) underestimating the transmission coefficient by even five orders of magnitude at the lowest reported energies, with respect to the case of a Gaussian wave packet with size ξ smaller than the size of the effective well. The case of small positional variance should correspond, for a state of minimal quantum uncertainty, to a broad distribution of possible momenta, including some corresponding to kinetic energies comparable or higher than the barrier height. Notice the presence of resonant tunneling in the case of plane waves and spatially delocalized Gaussian wave packets, which is instead washed out in the integration when considering Gaussian wave packets of smaller width in position, and therefore broader in momentum/wave vector space. In Fig. <ref> we present the reactivity corresponding to a Maxwell-Boltzmann distribution versus temperature for different values of the positional spreading ξ. Reflecting the results presented in Fig. <ref>, the high temperature behavior is the same for the various cases, while at low temperature the same pattern appears, with the highest reactivity occurring for the Gaussian wavepacket of smallest value. Notice a further curve (dashed) which is evaluated for a temperature-dependent positional spreading as discussed in <cit.>. This curve is relevant for at least two reasons. First, without any active control of the positional variance of the wavepacket, this is what we expect by considering a gas of reagents with a Maxwell-Boltzmann distribution. Secondly, in the temperature range between 10 keV and 100 keV, of interest for controlled thermonuclear fusion, we estimate a boost of the reactivities if compared to the ones achieved by considering plane waves. This is more easily noticeable in the inset, where we report the ratio between the dashed curve of Fig. <ref> and the curve corresponding to the prediction of plane waves, curve f, always versus the temperature. 
In the above-mentioned range the ratio is about 1.5, followed by a mild increase to almost 2, before becoming smaller than unity at even higher temperatures. The peak value of the ratio depends on the masses involved, as shown in the comparison of two nuclei with a mass of 2 a.m.u. (reduced mass of 1 a.m.u.) and 12 a.m.u. (reduced mass of 6 a.m.u.). While the latter example has been chosen with the case of carbon in mind, which is quite relevant in astrophysics, it should be kept in mind that the same GWS potential is used in both cases in order to isolate the sole dependence on the mass; this is unrealistic for carbon, especially in regard to its actually larger well width. We emphasize these considerations further from a complementary standpoint by plotting the reactivity as a function of the positional variance ξ for values of temperature relevant to fusion processes of light nuclei, β^-1=10, 20, 50, 100 keV, as depicted in Fig. <ref>. This plot allows one to better appreciate that there is an optimal value of ξ maximizing the reactivity at a given temperature. Indeed, in the case of ξ→ 0 there will be increasing components of the wave packet at large k. These components saturate the transmission coefficient to its maximum value, and strongly suppress the cross-section due to the dependence of the latter upon 1/k^2, with the reactivity then scaling overall as the inverse of the wave vector. The above results have been tested for various choices of the parameters of the potential (it is worth pointing out that, by construction, the GWS potential has a cusp at the origin), with outcomes qualitatively similar to the specific case considered in this paper. We expect robustness also in the case of a potential which is the sum of a flat potential at distances smaller than the average radius of the nuclei and a Coulomb potential. The outcome should also hold in the more realistic three-dimensional setting, when including effects due to the angular momentum term and a spherically symmetric electric field inside the nucleus assuming uniform electric charge density. However, more extensive analyses will be necessary to determine the quantitative gain in using optimized Gaussian wave packets under these more realistic situations, which are not amenable to analytic solutions. Finally, we discuss the dependence of the tunneling coefficient upon the adopted calculation scheme, under the hypothesis of plane waves for the tunneling particles. This allows us to contrast the widespread WKB approximation, the Hill-Wheeler approximation, and the exact evaluation of the tunneling coefficient. As is noticeable in Fig. <ref>, both approximations provide unreliable results with respect to the exact case in a regime of crucial importance for controlled nuclear fusion, i.e., at energies well below the barrier energy. The discrepancy between the Hill-Wheeler approximation, a further simplification of the WKB method, and the WKB expectation is understandable due to the modeling of the barrier as an inverted parabola, and results in overestimating the tunneling coefficient by about one order of magnitude with respect to the latter. More surprising is the fact that WKB provides tunneling coefficients higher by two orders of magnitude at E=10 keV, and three orders of magnitude at E=1 keV, with respect to the analytical result. This introduces a further element of uncertainty in the estimation of fusion cross-sections if they are not evaluated from exact solutions or precision numerical evaluations.
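For completeness, the two approximations contrasted here can be written down explicitly. The sketch below uses the textbook WKB transmission, T ≈ exp(-2∫κ dx) with κ = √(2m(V-E))/ħ over the classically forbidden region, and the Hill-Wheeler formula for an inverted parabola fitted at the barrier top; the GWS parameters and units (ħ = m = 1) are illustrative assumptions, not the values used for the figure.

```python
import numpy as np

hbar = m = 1.0   # reduced units, for illustration only

def V_gws(x, V0=40.0, W0=60.0, a=4.0, L=2.0):
    e = np.exp(a * (np.abs(x) - L))
    return -V0 / (1.0 + e) + W0 * e / (1.0 + e) ** 2

x = np.linspace(0.0, 10.0, 20001)
dx = x[1] - x[0]
V = V_gws(x)
iB = np.argmax(V)
VB = V[iB]                                  # barrier height
d2V = np.gradient(np.gradient(V, x), x)     # numerical curvature
omega = np.sqrt(abs(d2V[iB]) / m)           # inverted-parabola frequency

def T_wkb(E):
    """WKB estimate: exp(-2 * integral of kappa over the forbidden region)."""
    kappa = np.sqrt(np.maximum(2.0 * m * (V - E), 0.0)) / hbar
    return np.exp(-2.0 * np.sum(kappa) * dx)

def T_hill_wheeler(E):
    """Hill-Wheeler transmission for an inverted parabola at the barrier top."""
    return 1.0 / (1.0 + np.exp(2.0 * np.pi * (VB - E) / (hbar * omega)))

for E in (0.1 * VB, 0.3 * VB, 0.6 * VB):
    print(f"E/VB={E/VB:.1f}  T_WKB={T_wkb(E):.3e}  T_HW={T_hill_wheeler(E):.3e}")
```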
It should also be remarked that this discrepancy may be quite sensitive to the specific form of the potential energy. In our specific case the absence of a substantial tail for the GWS potential at large distances, instead characteristic of the Coulomb case, could affect the discrepancy among the various cases, creating less sensitivity to the details of the potential at the base of the barrier <cit.>. In conclusion, we have investigated the sensitivity of tunneling processes to the preparation of Gaussian wave packets – and contrasted to the usually assumed case of plane waves – in the case of an analytically solvable potential, and we have evidenced a relevant sensitivity of the resulting reactivities for fusion processes. It is unclear how to engineer Gaussian states of well-defined, targeted, positional variance. Nevertheless, we have shown that Gaussian states weighted with Maxwell-Boltzmann energy distributions may result in a temperature-dependent positional variance, providing a natural way to enhance fusion reactivities. This is a further stimulus to design thermonuclear fusion prototypes in which emphasis is put in maximizing the plasma temperature with more moderate plasma density, an important point for achieving deuterium-deuterium fusion, with well-known advantages with respect to the currently experimentally investigated deuterium-tritium fusion. 99 Esaki L. Esaki, New phenomenon in narrow Germanium p-n junctions, Phys. Rev. 109, 603 (1958). Vasko F. T. Vasko and A. V. Kuznetsov, Tunneling in Heterostructures. In: Electronic States and Optical Transitions in Semiconductor Heterostructures, Graduate Texts in Contemporary Physics. Springer, New York, NY (1999). Gamow G. Gamow, Zur Quantentheorie der Atomkernes, Z. Physik 51, 204 (1928). Gurney R. W. Gurney and E. U. Condon, Quantum mechanics and Radiactive Disintegration, Nature 122, 439 (1928); Phys. Rev. 33, 127 (1929). Adelberger E. G. Adelberger, et al., Solar fusion cross-sections, Rev. Mod. Phys. 70, 1265 (1998). Atkatz D. Atkatz and H. Pagels, Origin of the Universe as a quantum tunneling event, Phys. Rev. D 25, 2065 (1982). Balantekin1998 A. B. Balantekin and N. Takigawa, Quantum tunneling in nuclear fusion, Rev. Mod. Phys. 70, 77 (1998). Vanderbosch1992 R. Vanderbosch, Angular momentum distributions in subbarrier fursion reactions, Annu. Rev. Sci. 42, 447 (1992). Bekerman1988 M. Beckerman, Sub-barrier fusion of two nuclei, Rep. Prog. Phys. 51, 1047 (1988). Hagino2022 K. Hagino, Sub-barrier fusion reactions, Contribution to the Handbook of Nuclear Physics, I. Tanihata, H. Toki and T. Kajino, eds. (Springer, 2022) [arXiv:2201.08061]. MacColl L. A. MacColl, Note on the transmission and reflection of wave packets by potential barriers, Phys. Rev. 40, 621 (1932). Kadomtsev1997 B. B. Kadomtsev and M. B. Kadomtsev, Wavefunctions of gas atoms, Phys. Lett. A 225, 303 (1997). Vaz L. C. Vaz, J. M. Alexander, and G. R. Satchler, Fusion barriers, empirical and theoretical: Evidence for dynamics deformation in subbarrier fusion, Phys. Rep. 69, 373 (1981). Dodonov1996 V. V. Dodonov, A. B. Klimov, and V. I. Man'ko, Low energy wave packet tunneling from a parabolic potential well through a high potential barrier, Phys. Lett. A 22, 41 (1996). Dodonov2014a A. V. Dodonov, V. V. Dodonov, Tunneling of slow quantum packets through the high Coulomb barrier, Phys. Lett. A 378, 1071 (2014). Dodonov2014b V. V. Dodonov and A. V. Dodonov, Transmission of correlated Gaussian packets through a delta-potential, J. 
Russian Laser Research 35, 39 (2014). Andreatta2004 M. A. Andreatta and V. V. Dodonov, Tunneling of narrow Gaussian packets through delta potentials, J. Phys. A: Math. Theor. Gen. 37, 2423 (2004). GWS B. C. Lutfuğlu, F. Akdeniz, and O. Bayrak, Scattering, bound, and quasi-bound states of the generalized symmetric Woods-Saxon potential, J. Math. Phys. 57, 032103 (2016). Sever A. Arda, O. Aydoğdu, and R. Sever, Scattering of the Woods-Saxon potential in the Schroedinger equation J. Phys. A: Math. Theor. 43, 425204 (2010). Chenu A. Chenu and M. Combescot, Many-body formalism for thermally excited wave packets: A way to connect the quantum regime to the classical regime, Phys. Rev. A 95, 062124 (2017). Alterman S. Alterman, J. Choi, R. Durst, S. M. Fleming, and W. K. Wootters, The Boltzmann distribution and the quantum-classical correspondence, J. Phys. A: Math Theor. 51, 345301 (2018). HillWheeler D. L. Hill and J. A. Wheeler, Nuclear constitution and the interpretation of fission phenomena, Phys. Rev. 89, 1102 (1953) Eltschka C. Eltschka, H. Friedrich, M. J. Moritz, and J. Trost, Tunneling near the base of the barrier, Phys. Rev. A 58, 856 (1998). Lee2022 I. Lee, A. Diaz-Torres. Coherence dynamics in low-energy nuclear fusion, arXiv:2201.02232v2 [nucl-th] 6 Feb 2022. Toubiana2017 A. J. Toubiana, L. F. Canto, and M. S. Hussein, Improved WKB approximation for quantum tunneling: Application to heavy-ion fusion, Eur. Phys. J. A 53:34 (2017). Onofrio2018 R. Onofrio, Concepts for a deuterium-deuterium fusion reactor, JETP 127, 883 (2018).
http://arxiv.org/abs/2307.04065v1
20230709000559
Large-scale global optimization of ultra-high dimensional non-convex landscapes based on generative neural networks
[ "Jiaqi Jiang", "Jonathan A. Fan" ]
cs.LG
[ "cs.LG", "math.OC" ]
We present a non-convex optimization metaheuristic, based on the training of a deep generative network, which enables effective searching within continuous, ultra-high dimensional landscapes. During network training, populations of sampled local gradients are utilized within a customized loss function to evolve the network output distribution function towards one peaked at high performing optima. The deep network architecture is tailored to support progressive growth over the course of training, which allows the algorithm to manage the curse of dimensionality characteristic of high dimensional landscapes. We apply our concept to a range of standard optimization problems with dimensions as high as one thousand and show that our method performs better with fewer functional evaluations compared to state-of-the-art algorithm benchmarks. We also discuss the role of deep network over-parameterization, loss function engineering, and proper network architecture selection in optimization, and why the required batch size of sampled local gradients is independent of problem dimension. These concepts form the foundation for a new class of algorithms that utilize customizable and expressive deep generative networks to solve non-convex optimization problems. § INTRODUCTION High dimensional, non-convex optimization problems are pervasive in many scientific and engineering domains, including computational materials science <cit.>, electromagnetics <cit.>, circuit design <cit.>, process engineering <cit.>, and systems biology <cit.>. These problems are known to be very difficult to solve because they are NP-hard, and algorithms aiming to definitively search for the global optimum, such as branch and bound methods, cannot practically scale to high dimensional systems. As such, various algorithm heuristics have been developed, ranging from evolutionary metaheuristics to Bayesian optimization <cit.>, which use judicious sampling of the landscape to identify high performing optima. In all cases, it remains challenging to apply these algorithms to ultra-high dimensional spaces with dimensions of hundreds to thousands due to the curse of dimensionality. The explosion of interest and research in deep neural networks over the last decade has presented new opportunities in optimization, as the process of training a deep network involves solving a high dimensional optimization problem. To this end, gradient-based optimization metaheuristics termed global topology optimization networks (GLOnets) <cit.> were recently proposed that use the training of a deep generative network to perform non-convex optimization. The concept applies to optimization problems where 𝐱 is a d-dimensional variable and the goal is to maximize the smoothly varying, non-convex objective function f(𝐱). To run the metaheuristic, the generative network is first initialized so that it outputs a distribution of 𝐱 values that spans the full optimization landscape. Over the course of network training, this distribution is sampled, f(𝐱) and local gradients are computed for these sampled points, and these values are incorporated into a customized loss function and backpropagated to evolve and narrow the distribution around high performing optima. Initial demonstrations indicate that GLOnets can perform better than standard gradient-based optimizers and global search heuristics for various non-convex optimization problems.
However, it is unable to extend to high dimensional problems in its current form, and the lack of interpretability of this black box algorithm has made it difficult to understand if and how it can adapt to more general problems, including high dimensional ones. In this Article, we introduce the progressive growing GLOnet (PG-GLOnet), in which optimization within an ultra-high dimensional non-convex landscape is mediated through the training of a progressive growing deep generative network. Our tailoring of the network architecture for this optimization task serves to incorporate knowledge and assumptions about the optimization landscape into the metaheuristic, which is a requirement for tractably navigating ultra-high dimensional landscapes. We also explain how the algorithm works to smoothen the design landscape, how evaluation of the loss function serves as a gradient estimation calculation, and why the number of required functional evaluations is independent of problem dimension. With standard benchmarking test functions, we show that our concept performs better than state-of-the-art algorithms with fewer functional evaluations for one thousand dimensional problems. We anticipate that the customization of network architectures within the GLOnets framework will seed new connections between deep learning and optimization. § PROGRESSIVE GROWING GLONETS ALGORITHM AND BENCHMARKING The PG-GLOnet concept builds on the foundation of the original GLOnet algorithm, which we briefly review here. The optimization problem to be solved with GLOnets can be written in the following form: max_𝐱 f(𝐱) where f(𝐱) is a non-convex, continuous objective function with feasible gradients. With GLOnets, this optimization problem is indirectly solved through the training of a general neural network (Figure <ref>a), where the input is a d-dimensional random variable 𝐳 with a standard normal distribution and the output is a distribution of 𝐱's. The generator therefore serves to map 𝐳 onto 𝐱 = G(𝐳; ϕ) with a distribution P(𝐱; ϕ), where ϕ denotes the trainable neural network parameters. The optimization objective for the generator is defined as: L = max_ϕ𝔼_𝐱∼ P(𝐱; ϕ)exp[ f(𝐱)/T] The distribution that maximizes this expected value is a delta function centered at the global optimum, and as such, an ideally trained generator will produce a narrow distribution centered at the global optimum, thereby solving the original optimization problem. The use of the exponential function and the hyperparameter T in the optimization objective further enhances the valuation of the global optimum, and more generally of high performing optima, in the design space. Generator training is consistent with conventional deep learning training methods: gradients of the objective function with respect to network parameters, ∇_ϕ𝔼f, are calculated through backpropagation, and they are used to iteratively optimize ϕ using standard gradient-based methods. In practice, the objective function is approximated by a batch of M samples. P(𝐱; ϕ), on the other hand, is typically implicit and cannot be directly sampled. To circumvent this issue, we draw M samples {𝐳^(m)}_m=1^M from the standard normal distribution, transform them to {𝐱^(m)}_m=1^M, and then approximate L and its gradient ∇_ϕ L with respect to network parameters ϕ: L ≈1/M∑_m=1^Mexp[ f(𝐱^(m))/T] ∇_ϕ L ≈1/M∑_m=1^M1/Texp[ f(𝐱^(m))/T] ∇_𝐱f · D_ϕ𝐱^(m) ∇_𝐱f = [∂ f/∂ x_1, ∂ f/∂ x_2, …, ∂ f/∂ x_d] are the gradients of f(𝐱) and D_ϕ𝐱 = ∂ (x_1, x_2, …)/∂(ϕ_1, ϕ_2, ...) is the Jacobian matrix.
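A minimal training-loop sketch of these equations is given below in PyTorch. Here f is an analytic test function, so autograd supplies ∇_x f directly; when f comes from an external simulator, the sampled gradients would instead be folded in through the estimator for ∇_ϕ L above. The network width, temperature, and learning rate are illustrative choices, not the settings used in the paper.

```python
import math
import torch

d, M, T = 10, 20, 1.0                      # dimension, batch size, temperature

# simple fully connected generator (an FC-GLOnet-style stand-in)
G = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, d),
)
opt = torch.optim.Adam(G.parameters(), lr=1e-3)

def f(x, rho=3.0):
    # negative modified Rastrigin, so that larger is better (maximization)
    return -(rho * d + (x ** 2 - rho * torch.cos(2 * math.pi * x)).sum(dim=-1))

for step in range(200):
    z = torch.randn(M, d)                  # z ~ N(0, I)
    x = G(z)                               # samples x^(m) = G(z^(m))
    loss = -torch.exp(f(x) / T).mean()     # minimize -E[exp(f/T)]
    opt.zero_grad()
    loss.backward()                        # autograd provides grad_x f here;
    opt.step()                             # a simulator would supply it instead
```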
Evaluation of f(𝐱) is usually performed by a numerical simulator and the gradient of f(𝐱) can be calculated explicitly or by auto-differentiation for analytic expressions, or by the adjoint variables method (AVM). In the initial conception of GLOnet, which we term FC-GLOnet, the generative network was a fully connected deep network and was capable of effectively addressing optimization problems with a modest number of dimensions. However, it was found to be ineffective at optimizing within very high dimensional landscapes due to the curse of dimensionality, which makes a direct search for the global optimum within a full, high dimensional landscape an intractable proposition. We therefore propose the PG-GLOnet, which utilizes a generative network that outputs a distribution that gradually grows from a coarse, low dimensional space to a fine, high dimensional space. By tailoring the network architecture in this way, we regularize the optimization process to take place over differing degrees of optimization landscape smoothing, enabling our search process to be computationally efficient and tractable. The PG-GLOnet generator architecture is shown in Figure <ref>b. The progressive growth concept is inspired by progressively growing GANs <cit.> that have been developed in the computer vision community to process images with increasing spatial resolution during network training. The input to the network is a D-dimensional random vector 𝐱^0, and its dimension is much smaller than that of 𝐱. With L growing blocks, the network simultaneously transforms and increases the dimensionality of the input vector, and its output is a 2^L D dimensional vector 𝐱^L that matches the dimensionality of 𝐱. In each growing block, the input vector dimension is doubled in two ways, by direct upsampling and by a linear transform. The resulting outputs are combined together and further transformed using a non-linear activation function: 𝐱^out_2d × 1 = q((1-α) [ 𝐱^in_d × 1; 𝐱^in_d × 1 ] +α A_2d × d·𝐱^in_d × 1) A_2d × d are trainable parameters in the linear transformation branch, q(·) is a non-linear activation function, and α is a hyperparameter that is manually tuned over the course of optimization. Initially, α's for all of the growing blocks in the network are set to 0, such that the vector outputted by each block has the same effective dimensionality as its input vector. The network output 𝐱^L therefore has an effective dimensionality that matches the dimensionality of the input 𝐱^0. As α is increased for a particular growing block, its output vector becomes dominated by its linear transformation branch, as opposed to its upsampling branch, and it has an effective dimensionality that exceeds and eventually doubles that of the growing block input vector. The effective dimensionality of 𝐱^L therefore arises from the aggregation of effective dimensionality increases from all growing blocks. To control the effective dimensionality of 𝐱^L over the course of PG-GLOnet training, α is manually changed from 0 to 1 sequentially from the left to right blocks (bottom of Figure <ref>b). At the end of PG-GLOnet training, α is 1 for all growing blocks and the effective dimensionality of 𝐱^L matches that of 𝐱. To evaluate the efficacy of PG-GLOnet in solving high dimensional non-convex optimization problems, we perform a series of benchmark numerical experiments where we optimize a set of standard test functions with PG-GLOnet and other established algorithms. 
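Before discussing the benchmarks, the growing block defined above can be made concrete. In the sketch below, the choice of non-linear activation q(·) and the decision to build the upsampling branch by repeating (rather than interleaving) entries are implementation assumptions, since neither is fixed by the description above.

```python
import torch

class GrowingBlock(torch.nn.Module):
    """One PG-GLOnet growing block:
    x_out = q((1 - alpha) * [x; x] + alpha * A x)."""
    def __init__(self, d_in):
        super().__init__()
        self.linear = torch.nn.Linear(d_in, 2 * d_in, bias=False)  # A
        self.act = torch.nn.LeakyReLU(0.2)                         # q(.), assumed
        self.alpha = 0.0                                           # scheduled manually

    def forward(self, x):
        up = torch.cat([x, x], dim=-1)        # direct upsampling branch
        return self.act((1 - self.alpha) * up + self.alpha * self.linear(x))

# D = 8 input growing through L = 7 blocks to 2^L * D = 1024 dimensions
blocks = torch.nn.ModuleList([GrowingBlock(8 * 2 ** i) for i in range(7)])
x = torch.randn(16, 8)
for b in blocks:
    x = b(x)
print(x.shape)                                # torch.Size([16, 1024])
```

Scheduling each block's alpha from 0 to 1, left to right during training, reproduces the gradual increase in effective output dimensionality described above.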
In the first set of experiments, we consider a testing function that can be tuned from a convex to non-convex function and compare PG-GLOnet with ADAM, a well known momentum-based gradient descent algorithm that is typically more effective than gradient descent. ADAM is a local optimization algorithm and performs well on convex objective functions but can get trapped within local optima for non-convex functions. Our test function is a modified Rastrigin function defined as follows: f(𝐱; ρ) = ρ d + ∑_i=1^d [x_i^2 - ρcos(2π x_i)] ρ is a hyperparameter that specifies the amplitude of the sinusoidal modulation within the function. When ρ =0, f(𝐱; ρ) = ∑_i=1^d x_i^2 and is a convex function. As ρ increases, more local optima emerge and these optima become separated by larger magnitude barriers. We first consider the computational cost required by ADAM and PG-GLOnet to find the global optimum of a two dimensional modified Rastrigin function as a function of ρ. For ADAM, we run 10000 optimizations for 200 iterations with random starting points, and for PG-GLOnet, we run the algorithm 10 times with a batch size of 20 for 200 total iterations. In both cases, the algorithms terminate early when they output results within 10^-3 of the global optimum, and computational cost is quantified as the average number of function evaluations required to find the global optimum. The results are summarized in Figure <ref>a and indicate that for convex or nearly convex optimization landscapes, ADAM is more efficient at finding the global optimum. This efficiency arises because ADAM is a specially tailored local optimizer that is well suited for these types of problems, while PG-GLOnet always requires relatively large batch sizes and more iterations to converge. As ρ increases, orders-of-magnitude more ADAM evaluations are required to search for the global optimum due to trapping within local optima in the design landscape. The computational cost for PG-GLOnet, on the other hand, does not increase nearly as rapidly due to its ability to navigate non-convex landscapes and is ten times more efficient than ADAM for ρ greater than 3. We also perform benchmarks between ADAM and PG-GLOnet for a ten dimensional problem. Due to the inability for ADAM to converge to the global optimum in non-convex, high dimensional landscapes, we perform this benchmark differently and compare the best optimal value found by ADAM and PG-GLOnet given the same amount of computational resources. Here, we run ADAM for 200 iterations with 20 random starting points and PG-GLOnet for 200 iterations with a batch size of 20. We run these benchmark experiments ten times and average the best values from each experiment, and the results are reported in Figure <ref>b. We find that the PG-GLOnet is able to consistently find solutions at or near the global optimum for all values of ρ, but the local optimizer gets progressively worse as ρ increases. In our next set of benchmark experiments, we compare PG-GLOnet with the covariance matrix adaptation evolution strategy (CMA-ES), which is an established evolutionary algorithm used to perform population-based global searching of an optimization landscape. Compared to ADAM, it is more suitable for performing non-convex optimization. We consider two standard non-convex testing functions with lots of local optima, the Rastrigin and Schwefel functions (defined in the Appendix). 
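A rough, self-contained stand-in for the ADAM-with-restarts baseline just described is sketched below; the search domain, learning rate, and restart count are illustrative choices rather than the exact settings used for the figures.

```python
import math
import torch

def modified_rastrigin(x, rho):
    return rho * x.shape[-1] + (x ** 2 - rho * torch.cos(2 * math.pi * x)).sum(-1)

def adam_restart_cost(rho, d=2, n_starts=50, iters=200, tol=1e-3, box=5.12):
    """Average number of function evaluations ADAM needs to reach the
    global minimum (0 at the origin) with random restarts."""
    evals, successes = 0, 0
    for _ in range(n_starts):
        x = ((torch.rand(d) * 2 - 1) * box).requires_grad_(True)
        opt = torch.optim.Adam([x], lr=0.1)
        for _ in range(iters):
            loss = modified_rastrigin(x, rho)
            evals += 1
            if loss.item() < tol:
                successes += 1
                break
            opt.zero_grad()
            loss.backward()
            opt.step()
    return evals / max(successes, 1)      # large if few/no starts succeed

for rho in (0.0, 1.0, 3.0):
    print(rho, adam_restart_cost(rho))
```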
Plots in Figures <ref>c and <ref>d show the average number of function evaluations required to find the global optimum as a function of problem dimension d. The computational cost of CMA-ES increases exponentially as the problem dimension becomes larger, indicating the intractability of applying this algorithm to ultra-high dimensional problems. For the Schwefel function, we limited our CMA-ES benchmarking experiments to a problem dimension of 20 due to this scaling trend. PG-GLOnet, on the other hand, has a relatively small computational cost that is not sensitive to the dimension. In fact, the same neural network architecture and batch size are used for all problems. A more detailed discussion of the origins of this decoupling between problem dimension and batch size is provided in the Discussion section. Finally, we benchmark PG-GLOnet with state-of-the-art algorithms on testing functions proposed by the CEC'2013 Special Session and Competition on Large-Scale Global Optimization (LSGO) <cit.>. We consider the six non-convex benchmark functions from the competition, which involve variations and combinations of the Rastrigin and Ackley functions and are defined in the Appendix. These benchmark functions were designed to incorporate a number of challenging features for optimization, including: * High dimensions. The design space of an optimization problem grows exponentially as the dimension of design variables increases. These benchmark functions utilize one thousand dimensional landscapes. * Functions with non-separable subcomponents. The whole design variable is decomposed into several subcomponents and dimensions within each subcomponent are strongly coupled together. * Imbalance in the contribution of subcomponents. The contribution of a subcomponent is magnified or dampened by a coefficient. * Non-linear transformations to the base functions. Three transformations are applied to break the symmetry and introduce some irregularity into the landscape: (1) ill-conditioning, (2) irregularities, and (3) symmetry breaking. To globally search these landscapes for the global optimum, we perform a two-step optimization procedure. First, we run PG-GLOnet for each benchmark function for 200 iterations and a batch size of 100, from which our generative network outputs a narrow distribution of 𝐱's in promising regions of the optimization landscape. We then sample this distribution 100 times and perform local gradient descent on each of these design variables for an additional 200 iterations. The best function values found by PG-GLOnet plus local gradient descent are reported in Table <ref>, together with results produced from FC-GLOnet plus local gradient descent, local conjugate gradient descent, and two state-of-the-art non-convex optimization algorithms that were the best performing algorithms in the most recent LSGO contest: CC-RDG3, which is a divide-and-conquer method <cit.>, and DGSC, which is a differential group method utilizing spectral clustering <cit.>. We observe that PG-GLOnet with local gradient descent refinement is able to significantly outperform the other algorithms for the majority of test functions. In addition, the total computational cost of the two-step optimization procedure is only 4× 10^4 function evaluations, while CC-RDG3 and DGSC require 3× 10^6 function evaluations. § DISCUSSION We discuss the origins of the efficiency and efficacy of PG-GLOnet in solving ultra-high dimensional non-convex optimization problems.
First, we examine how the generic GLOnet algorithm operates and why it is able to effectively utilize a gradient-based strategy to solve non-convex optimization problems. Second, we examine the role of the progressive growing generative network architecture in PG-GLOnet in solving ultra-high dimensional problems. By understanding the relationship between network architecture and optimization procedure, we elucidate built-in assumptions used by PG-GLOnet in its search for the global optimum. With the generic GLOnet algorithm, the original optimization problem cited in Equation 1 is reframed as a related problem (Equation 2) that addresses a transformed, smoothened optimization landscape. The key concepts that produce this landscape transformation and enable effective gradient-based optimization are outlined in Figure <ref>a and are: 1) distribution optimization, where the original problem involving the optimization of 𝐱 is transformed to a problem involving the optimization of parameters within a simple distribution P(𝐱); 2) exponential transformation, where the objective function is exponentially weighted; 3) over-parametrization, where the distribution P(𝐱) is now parameterized by a neural network with hundreds to thousands of weights; and 4) gradient estimation, where gradients that specify the evolution of the continuous distribution P(𝐱) are accurately computed through discrete samplings of 𝐳. Distribution optimization. With the concept of distribution optimization, the original problem of searching for an optimal 𝐱 is recast as a population-based search in which parameters within a distribution function are optimized, thereby enabling a search for the global optimum in a smoother and higher dimensional optimization landscape. This concept is shared by other population-based optimization algorithms, such as CMA-ES. To visualize the concept, we consider a non-convex one-dimensional function f(𝐱) plotted as a blue line in the leftmost figure in Figure <ref>a. The objective is to maximize f(𝐱), and the function contains multiple local maxima separated by deep valleys. It is easy for optimization algorithms, particularly gradient-based algorithms, to get trapped in the local optima. For example, if gradient descent optimization is used and is initialized at the yellow dot position, the algorithm will converge to the local optimum delineated by the red dot. With this approach, multiple independent gradient descent optimizations with random starting points are needed to increase the possibility of finding the global optimum. For these problems, gradient-free optimization heuristics are often employed, which can reduce the chances of trapping within suboptimal maxima but which introduce a more stochastic nature to the search process. However, if we consider the optimization of a distribution function that interacts with the global optimization landscape, local information at different parts of the landscape can be aggregated and collectively utilized to evolve this distribution in a manner that reduces issues of trapping within suboptimal maxima. Formally, we transform the optimization variable 𝐱 to parameters within the distribution P(𝐱), and the globally optimal distribution is one that is narrowly peaked around the global optimum. Distribution functions can be explicitly parameterized in many ways. 
As a simple illustrative example that builds on our discussion of the one-dimensional f(𝐱), we consider the one-dimensional Gaussian distribution denoted as P(𝐱; μ, σ), shown as the red curve in the leftmost figure in Figure <ref>a. μ and σ refer to the mean and standard deviation, respectively. With a Gaussian distribution function, the objective function is transformed into the expected value of f(𝐱) as a function of (μ, σ): 𝔼_𝐱∼ P(𝐱; μ, σ) f(𝐱). As this new optimization landscape is a function of two distribution parameters, μ and σ, it is two dimensional. We can directly visualize this new landscape by evaluating ∫ f(𝐱) P(𝐱;μ, σ) d𝐱 for all values of (μ, σ), and the result is summarized in the second figure from the left in Figure <ref>a. The horizontal line section at the bottom of the contour plot, where σ equals zero, is the original one-dimensional f(𝐱) with multiple optima. As σ increases to finite values above zero, the landscape becomes smoother. Mathematically, horizontal line sections for finite σ are calculated by convolving f(𝐱) with the Gaussian function, producing a Gaussian blur that leads to smoothening. This smoothened landscape facilitates gradient-based optimization of (μ, σ) when the distribution is initialized to large σ values, and the final optimized distributions converge to the original f(𝐱) space at the bottom of the plot. However, while this two-dimensional landscape is smoother than the original f(𝐱), there remain multiple distribution parameter initializations for which the gradient-based optimizer converges to suboptimal maxima. Exponential transformation. To further smoothen the optimization landscape and enhance the presence of the global optimum, we perform an exponential transformation of the objective function. Mathematically, the objective function for the distribution optimization problem becomes: 𝔼_𝐱∼ P(𝐱; μ, σ)exp[ f(𝐱)/T]. The temperature term T modulates the impact of the global optimum on the optimization landscape such that low T produces strong landscape modulation by the global optimum. For our one-dimensional f(𝐱) example, the exponentially transformed landscape is plotted in the third figure from the left in Figure <ref>a and shows that the local optima have faded out, such that gradient-based optimization within this landscape is more likely to converge to the global optimum. The choice of T depends on the scale of f(𝐱). Consider an f(𝐱) that is linearly normalized to span (0, 1). Such normalization can typically be achieved based on prior knowledge about the upper and lower bounds of f(𝐱). If we want to amplify f(𝐱) for f(𝐱) > f_d and suppress f(𝐱) for f(𝐱) < f_d, where f_d is a division point between 0 and 1, the temperature is chosen to be T = f_d / log(1 + f_d). For example, if f_d is chosen to be the inverse golden ratio (approximately 0.618), then the temperature is roughly T = 1.3. In practice, the selection of f_d is problem-specific, and T can be treated as a hyperparameter that can be manually tuned around 1 for tailoring to a particular problem. Over-parameterization. To further enhance the ability of GLOnet to efficiently and reliably converge to the global optimum, we next consider the concept of over-parameterization in which the distribution P(𝐱) is now a neural network parameterized by weights ϕ. The objective function then becomes: 𝔼_𝐱∼ P(𝐱; ϕ)exp[ f(𝐱)/T].
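Before moving on, a quick numerical check of the temperature heuristic T = f_d / log(1 + f_d) introduced above (our own sketch, not code from the paper):

```python
import math

f_d = (math.sqrt(5) - 1) / 2          # inverse golden ratio, ~0.618, as the division point
T = f_d / math.log(1 + f_d)           # T = f_d / log(1 + f_d)
print(round(T, 2))                     # -> 1.28, i.e. roughly the T = 1.3 quoted above
```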
Our use of a neural network is inspired by the fact that deep network training involves the solving of an extremely high dimensional non-convex optimization problem, that the convergence of the neural network is typically insensitive to initialization, and that good neural network parameters can be found using backpropagation. The underlying mathematical principles outlining why gradient descent is so effective for deep network training have been revealed to some extent by computer scientists in recent years. <cit.> First, the parameter space of deep networks is a high-dimensional manifold, such that most local optima are equivalently good and the probability of converging to a bad optimum during training decreases quickly with network size. Second, these equivalently high performing local optima originate from neural network over-parameterization, which builds in redundancy in the optimization landscape that speeds up and stabilizes the gradient-based optimization process. To understand how this applies to GLOnet, we revisit our one-dimensional f(𝐱) landscape in which local optima are separated by deep barriers. When the optimization landscape is transformed using P(𝐱,ϕ), it frames the optimization problem in a very high dimensional landscape, as the dimensionality of ϕ is much higher than 𝐱. Solutions to the optimization problem therefore reside in a high-dimensional manifold, such that many different ϕ's serve as high performing local optima. Additionally, local optima in f(𝐱) are no longer separated by deep barriers but are instead connected by pathways with low to no barriers in our transformed high dimensional landscape, mitigating trapping within these local optima during gradient-based optimization. The high dimensional landscape representing the transformed f(𝐱) is visualized as a two-dimensional projection in the rightmost plot in Figure <ref>a. The global optimum is now a connected band in the optimization landscape, as opposed to a single point in f(𝐱), and there are fewer energy barriers preventing gradients from converging to the global optimum, enabling gradient descent optimization to be more robust and faster. We note that neural network depth and expressivity play a large role in determining the practical impact of over-parameterization on optimization, and as a demonstration, we compare the performance of GLOnets based on linear and deep non-linear networks in the Appendix. Gradient estimation. A critical feature to maximizing the performance of GLOnet is ensuring that gradients used to evolve P(𝐱), which are approximated using a finite batch of samples, are sufficiently accurate. There are two methods for gradient estimation that can be used for GLOnets. The first is to use a score function gradient estimator, which utilizes the evaluated derivatives of the probability distribution P(𝐱; ϕ) and f(𝐱). This method for estimation requires explicit evaluation of derivatives to P(𝐱; ϕ) but only an implicit evaluation of ∇_𝐱f. The second is to use a pathwise gradient estimator, which relies on knowing the explicit derivatives of f(𝐱) but for which the probability distribution P(𝐱; ϕ) can be implicit. Empirically, we find for GLOnet that the pathwise gradient estimator more consistently produces smaller gradient error compared with the score function gradient estimator, and we therefore implement the pathwise gradient estimator in Equation <ref>. 
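A minimal sketch of one GLOnet update using the pathwise (reparameterization) estimator is given below, assuming PyTorch, a differentiable objective f, and maximization of 𝔼 exp[f(𝐱)/T] as in the discussion above; the function and variable names are ours, not those of the released GLOnet code.

```python
import torch

def glonet_step(generator, optimizer, f, batch_size=100, noise_dim=32, T=1.3):
    z = torch.randn(batch_size, noise_dim)     # latent samples z
    x = generator(z)                            # x ~ P(x; phi), defined implicitly by the network
    loss = -torch.exp(f(x) / T).mean()          # minimize the negative of E[exp(f(x)/T)]
    optimizer.zero_grad()
    loss.backward()                             # pathwise gradients flow through x = G_phi(z)
    optimizer.step()
    return loss.item()
```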
<cit.> The pathwise gradient estimator is based on the principle of Monte Carlo estimation, such that the estimation error decreases with the inverse square root of batch size. Importantly, this estimation error is independent of dimension. As a result, GLOnet and specifically PG-GLOnet are able to operate for batch sizes that are independent of problem dimension, as demonstrated in Figures 2c and 2d. This scaling of problem dimension without a required scaling in the number of functional evaluations allows PG-GLOnet to readily scale and address the 1000-dimensional problems in Table 1 with modest computational resources. Progressive growth. Direct searching within a high dimensional, non-convex landscape is an intractable problem. In the case of FC-GLOnet, which utilizes all of the features above, including distribution optimization and over-parameterization, the algorithm is still not effective in directly searching high dimensional landscapes (Table 1). With PG-GLOnet, the progressive growing architecture regularizes the optimization procedure to search first within a relatively coarse, low dimensional representation of the optimization landscape, followed by relatively local searching within increasingly higher dimensional landscape representations. This hierarchical increase of landscape dimensionality directly corresponds to the serial toggling of α within the series of growing blocks in the generator. As such, the optimization landscape is evolved over the course of PG-GLOnet training in a manner that maintains the tractability of the optimization problem. To further visualize the relationship between generative network architecture and optimization search procedure, we consider a non-convex two-dimensional landscape shown in Figure <ref>b. The generative network contains a single growing block, and the toggling of α from zero to one modulates the effective dimensionality of the generator output from one to two. Initially, α is zero and the vector outputted by the generator has the same effective dimensionality as its input vector and is one. The optimization landscape being searched is therefore a diagonal line within the two-dimensional landscape (Figure <ref>b, left-most plot), and with optimal solutions near the center of the line, the outputted generator distribution (red coloring in plot) narrows towards this region. As α is increased, the generator output vector becomes dominated by its linear transformation branch, as opposed to its upsampling branch, and it has an effective dimensionality that increases and eventually doubles. In our PG-GLOnet visualization, this increase in effective dimensionality corresponds to a broadening of the optimization landscape being searched, and the outputted generator distribution widens relative to the diagonal line. Upon the completion of network growth, the PG-GLOnet distribution converges to the global optimum. The success of PG-GLOnet is therefore predicated on the ability for the outputted distribution of the generative network to be narrowed down to smaller but more promising regions of a coarse optimization landscape, prior to increasing the landscape dimensionality and adding more degrees of freedom to the problem. This concept therefore works particularly well for problems where optima within a low dimensional analogue of the optimization landscape help to inform of the presence and position of optima within the high dimensional landscape. 
This regularization of the optimization procedure also indicates that for problems where optima within coarse variants of the optimization landscape do not inform the position of the global optimum, PG-GLOnet will not work well. In summary, we present a general global optimization metaheuristic based on progressively growing deep generative neural networks, termed PG-GLOnet. Unlike other population-based algorithms, PG-GLOnet uses gradient-based optimization to evolve an expressive, complex distribution in the optimization landscape to one centered around promising optima. This complex distribution, parameterized using the deep network framework, utilizes loss function engineering and over-parameterization to facilitate effective gradient-based searching. PG-GLOnet is particularly well suited to address ultra-high dimensional problems because the required batch size is independent of problem dimension and the progressively growing network architecture facilitates a hierarchical search process within a landscape with progressively growing effective dimensionality. This use of a hierarchical search strategy also delineates the types of problems and landscapes that are suited for PG-GLOnet optimization. We anticipate that further research in the tailoring of application-specific generative network architectures to particular optimization landscapes will enable the GLOnet platform to extend and adapt to an even wider range of non-convex, high dimensional optimization problems.
http://arxiv.org/abs/2307.06345v1
20230712170218
Cornerstone: Octree Construction Algorithms for Scalable Particle Simulations
[ "Sebastian Keller", "Aurélien Cavelan", "Rubén Cabezon", "Lucio Mayer", "Florina M. Ciorba" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.CO", "astro-ph.GA", "cs.DS", "J.2" ]
This paper presents an octree construction method, called Cornerstone, that facilitates global domain decomposition and interactions between particles in mesh-free numerical simulations. Our method is based on algorithms developed for 3D computer graphics, which we extend to distributed high performance computing (HPC) systems. Cornerstone yields global and locally essential octrees and is able to operate on all levels of tree hierarchies in parallel. The resulting octrees are suitable for supporting the computation of various kinds of short and long range interactions in N-body methods, such as Barnes-Hut and the Fast Multipole Method (FMM). While we provide a CPU implementation, Cornerstone may run entirely on GPUs. This results in significantly faster tree construction compared to execution on CPUs and serves as a powerful building block for the design of simulation codes that move beyond an offloading approach, where only numerically intensive tasks are dispatched to GPUs. With data residing exclusively in GPU memory, Cornerstone eliminates data movements between CPUs and GPUs. As an example, we employ Cornerstone to generate locally essential octrees for a Barnes-Hut treecode running on almost the full LUMI-G system with up to 8 trillion particles. § INTRODUCTION Octrees serve a variety of purposes in particle-based simulations. With applications in Astrophysics and Cosmology <cit.>, Smoothed Particle Hydrodynamics (SPH) <cit.>, and Molecular Dynamics <cit.>, they are typically employed to decompose domains, identify halo particle regions, solve the neighbor-search problem, calculate global magnitudes efficiently <cit.>, and support the analysis of simulation data <cit.>. If long-range interactions are involved, as is the case in electrostatics or gravity, octrees also form the basis for algorithms like Barnes-Hut <cit.> and the Fast Multipole Method (FMM) <cit.>. With ever increasing compute power ratios between GPUs and CPUs, the need to also perform tree construction and traversal on the GPU is increasing, despite the usually lower numerical intensity compared to the particle force calculations. With seemingly sequential dependencies between tree levels and memory layout concerns, efficient octree construction and traversal on GPUs remain a difficult problem. Nevertheless, there has been a lot of progress in recent years in the field of 3D computer graphics where octrees form the first step in the construction of bounding volume hierarchies for collision detection. Several octree construction algorithms <cit.> operate in parallel on all levels of the tree hierarchy and store data compactly in linear buffers. As a first step, these algorithms rely on the computation of sorted Morton codes <cit.> for the 3D objects present in the scene, which are then treated as the leaves of a binary radix tree. Hence, if we were to apply these algorithms directly to the particles in a numerical simulation, the resulting binary radix tree would resolve every particle in a separate leaf node.
When calculating physical particle-particle interactions, however, we have the option of reducing the number of tree nodes and allowing multiple particles per leaf node. With increasing number of particles per leaf, the overhead due to processing tree nodes decreases while the overhead due to decreasing search efficiency for neighbor particles increases. These two opposite trends lead to an optimum balance that must be determined empirically by varying the maximum number of particles permitted per leaf node; this is commonly referred to as N_crit <cit.>. In this work, we present a new octree construction method that combines the favorable property of operating on all levels of the tree in parallel on GPUs with the option of choosing N_crit > 1. Starting from the aforementioned methods developed for 3D computer graphics, we achieve this by aggregating the space filling curve (SFC) <cit.> keys of particles into a histogram before generating the internal tree nodes. The placement and size of the histogram bins are themselves defined by SFC keys and are subject to certain constraints that cause each bin to correspond to a leaf node of an octree. All bins together cover the entire SFC, and thus each SFC key in the histogram delimits the start and end of an octree leaf node, which is why we call this histogram the cornerstone array. Our method works for the Morton Z-curve as well as a 3D version of the Hilbert curve <cit.>. We employ the latter in our simulations due to its superior locality-preserving properties compared to the Morton Z-curve. We then apply the cornerstone array to the computation of locally essential trees (LET) <cit.> in the context of distributed numerical particle simulations. A novel feature of our LET implementation is that the determination of LET branches to be exchanged with remote subdomains is carried out on GPUs, as is the subsequent communication. Finally, as a proof of concept, we present a Barnes-Hut treecode implementation that traverses the LET generated by our method. It is performance-portable between AMD and NVIDIA GPUs and able to scale to trillions of particles. In the following, Sec. <ref> introduces relevant properties of SFCs and their connections to octrees. Sec. <ref> describes our iterative method for generating octree leaf nodes with particle counts bounded by N_crit, while Sec. <ref> illustrates how the internal part of the octree can be computed. The extension to distributed octrees is discussed in Secs. <ref> and <ref> before we discuss results and future work in Sec. <ref>. § RELATED WORK In the context of N-body simulations, Burtscher and Pingali <cit.>, shortly followed by Bédorf et al. <cit.>, were the first to generate octrees on GPUs as the basis for Barnes-Hut treecodes. In both approaches, the octree was constructed level by level on a single GPU. Bédorf et al. later extended their method <cit.> to distributed HPC systems with GPU accelerators and combined SFC-based domain decomposition with the LET method, handling the non-local parts of generating the LET on the CPU. With the aim of improving the speed of bounding volume hierarchy generation for applications in 3D computer graphics, Karras <cit.> was the first to present an algorithm for binary radix tree construction that generated all levels of the tree in parallel and used it as a building block to generate octrees as well. Apetrei <cit.> further refined the algorithm by replacing binary searches with atomic operations.
A shared trait between both approaches is that the resulting binary radix trees or octrees resolve each 3D object in a separate leaf cell. § SPACE FILLING CURVES AND OCTREES Space filling curves (SFCs) are continuous functions that map the unit interval into an n-dimensional hypercube. In conjunction with three-dimensional particle simulations, we are interested in discretized versions of SFCs that map the key space [0 … 2^3L) onto a grid of 3D integer coordinates [0 … 2^L) × [0 … 2^L) × [0 … 2^L). In 3D, the number of bits required to store an SFC key or a point on the grid equals 3L. Since current computer architectures have instructions for either 32- or 64-bit integers, reasonable choices for L are 10 or 21, or – depending on accuracy requirements – multiples thereof. The utility of certain SFCs for numerical simulations stems from their relation to octrees. If we express a key k of the Morton Z-curve with 3L bits as a sequence of L octal digits, k = k_1 k_2 … k_L, Warren and Salmon found <cit.> that if the first octal digit l_1 of another key l matches k_1, then k and l decode into 3D coordinates that lie in the same octant of the root octree node. And by induction: if the first i octal digits of keys k and l match, then there exists an octree node at the i-th division level that contains the decoded 3D coordinates of both keys. Equivalently, an octree node at the i-th division level contains the 3D coordinates that encode into the key range [k … k+8^L-i) for some unique k with k mod 8^L-i = 0. Consequently, the number of octal digits L in the SFC key is equal to the octree depth that the key is able to resolve. More generally, this correspondence between octal digits of the key and nodes of an octree applies to any type of SFC that traverses a cube octant by octant, with the Hilbert curve as a further example. The encoding and decoding of 3D grid points into Hilbert keys is computationally expensive compared to the simpler Morton Z-curve, but in contrast to the latter, any continuous segment of the curve is mapped to a compact 3D volume, while an interval of Morton keys may correspond to disconnected 3D volumes. In distributed simulations, the smaller surfaces of subdomains defined as segments of the Hilbert curve require less communication, outweighing the higher computational cost of key encoding compared to the Morton Z-curve <cit.>. § LEAF NODES OF BALANCED OCTREES As described in the previous section, arrays of space filling curve (SFC) keys can be used to represent the leaves of arbitrary octrees. Let 𝐊 be an array of SFC keys of length n_l+1 subject to the following constraints: 𝐊_0 = 0, 𝐊_n_l = 8^L, 𝐊_i < 𝐊_j, i < j, and 𝐊_i+1 - 𝐊_i = δ_i = 8^l, l ∈ℕ_0, l ≤ L. Then, 𝐊, which we refer to as the cornerstone array, uniquely defines an octree with n_l leaves and a maximum depth of L. In this notation and ordering, the i-th leaf covers the key range [𝐊_i, 𝐊_i+1) and its division level is L - log_8δ_i. Given an ensemble of real-valued 3D particle coordinates, we want to calculate 𝐊, such that the leaf particle counts remain at or below a predefined threshold, i.e. 𝐍_i ≤ N_crit, while the particle counts of internal nodes exceed N_crit. If these conditions are satisfied, we say that 𝐊 is balanced, which is a requirement for efficient particle neighbor searches. Note that since here the term balanced refers to a property of the leaf nodes (the number of contained particles) rather than to a property of the tree itself, its depth can vary locally.
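The constraints on 𝐊 can be checked in a few lines; the following NumPy sketch is our own illustration and is not part of the Cornerstone library API.

```python
import numpy as np

def is_cornerstone(K, L=10):
    """Check K_0 = 0, K_{n_l} = 8^L, strictly increasing, and all differences equal to 8^l."""
    K = np.asarray(K, dtype=np.int64)
    delta = np.diff(K)
    power_of_8 = (delta > 0) & (np.log2(delta.clip(min=1)) % 3 == 0)
    return bool(K[0] == 0 and K[-1] == 8**L and np.all(power_of_8))

assert is_cornerstone([0, 8**10])              # root node only
assert is_cornerstone(np.arange(9) * 8**9)     # root subdivided into its 8 octants
```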
The variable N_crit is a performance tuning parameter in this context whose optimal value has to be determined empirically. Commonly, N_crit is chosen in the range 16 to 64 <cit.>. Let 𝐏 be the sorted array of particle SFC keys obtained by encoding the particle coordinates into SFC keys followed by a radix-sort. We define two operations: f: (𝐏, 𝐊) ↦𝐍' (keys histogram) g: (𝐊, 𝐍) ↦𝐊' (rebalancing) The f operation counts the number of particle keys in each leaf node, or in other words, 𝐍 is the histogram of particle SFC keys 𝐏 for the n_l bins defined by consecutive keys in 𝐊. With 𝐏 sorted, the particle count of the i-th bin delimited by 𝐊_i and 𝐊_i+1 can be determined by locating 𝐊_i and 𝐊_i+1 in 𝐏 with binary searches. The particle count of the bin is then given by the difference of the two delimiter positions. The g operation rebalances a given cornerstone array based on particle counts where each leaf node can either remain unchanged, be subdivided or be merged. The merging of leaf nodes is achieved by replacing a group of 8 sibling nodes with their common parent node. We further subdivide g = g_3 ∘ g_2 ∘ g_1 into three steps: g_1: (𝐊, 𝐍) ↦𝐎, (rebalancing decision) g_2: 𝐎 ↦𝐎' (exclusive prefix sum) g_3: (𝐊, 𝐎') ↦𝐊' (node rebalancing) First, g_1 computes an auxiliary array 𝐎 that encodes the rebalance operation that is to be performed on each leaf node, which can be either removal, subdivision, or no change of the leaf node. These sub-operations are encoded as follows: * 𝐎_i = 8 if 𝐍_i > N_crit, indicating that leaf 𝐊_i is to be subdivided. * 𝐎_i = 0 if * 𝐊_i is not the first octant. * 𝐊_i and its other 7 sibling octants are next to each other, i.e. when 𝐊_i-j + 8 δ_i = 𝐊_i-j+8, with j = 𝐊_i mod 8δ_i/δ_i ∈ [0 … 7]. This is only possible if all 8 octants are leaves. * The combined particle count of these 8 octants is smaller than N_crit. If all three conditions are fulfilled 𝐊_i will not be transferred to 𝐊', resulting in the replacement of a group of 8 sibling nodes with their parent node. * 𝐎_i = 1 otherwise, leaving the leaf node unchanged. Subsequently, 𝐎' is obtained through g_2 by performing an exclusive prefix sum on 𝐎. By construction, 𝐎'_i is equal to the index of the old node 𝐊_i in the rebalanced node array 𝐊', and 𝐎'_n_l is equal to the total number of rebalanced leaf nodes in 𝐊'. The rebalance step g_3 therefore constructs 𝐊' as follows: 𝐊'_𝐎'_i = 𝐊_i if 𝐎_i = 1, 𝐊'_𝐎'_i + j = 𝐊_i + j ·δ_i / 8, j = [0…7] if 𝐎_i = 8. In Eq. (<ref>), the SFC key at 𝐊_i is copied to its new location 𝐎'_i in the rebalanced array 𝐊' while in Eq. (<ref>), 𝐊_i is subdivided by inserting into 𝐊' the 8 SFC keys of its child nodes that divide the original key range [𝐊_i …𝐊_i+1) into 8 sub-ranges of equal length. Fig. <ref> shows an example where one of the octants of the root node is further subdivided by inserting the SFC keys of its children into the cornerstone array. By applying f and g in alternating fashion, we can construct the leaf nodes of a balanced octree, i.e. one whose leaf node particle counts remain below or equal to N_crit, while the particle counts of internal nodes are greater than N_crit. Starting with the root node, 𝐊 = { 0, 8^L }, alternating application of f and g then yields the balanced cornerstone array 𝐊'. We say that convergence is reached when 𝐎_i = 1, ∀ i, i.e. when g_1 reports that no leaf nodes need to be merged or subdivided anymore, which will take at most L repeated invocations of f and g. 
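To illustrate the f and g operations, the following simplified NumPy sketch performs one histogram pass and one rebalancing pass; node merging is omitted for brevity, and this is our own illustration rather than the parallel C++/CUDA implementation used in Cornerstone.

```python
import numpy as np

def node_counts(P_sorted, K):                         # operation f: particle-key histogram
    idx = np.searchsorted(P_sorted, K)                 # one binary search per cornerstone key
    return np.diff(idx)

def rebalance(K, N, n_crit):                           # operation g = g_3 o g_2 o g_1 (no merging)
    delta = np.diff(K)                                 # node key ranges, each a power of 8
    ops = np.ones(len(N), dtype=np.int64)              # g_1: 1 = keep node unchanged
    ops[(N > n_crit) & (delta > 1)] = 8                #      8 = subdivide node
    offsets = np.concatenate(([0], np.cumsum(ops)))    # g_2: exclusive prefix sum
    K_new = np.empty(offsets[-1] + 1, dtype=K.dtype)   # g_3: scatter old and new keys
    K_new[-1] = K[-1]
    for i in range(len(N)):
        j = np.arange(ops[i])
        K_new[offsets[i] + j] = K[i] + j * (delta[i] // 8)
    return K_new, bool(np.all(ops == 1))               # converged when no node changed

# Starting from the root {0, 8^L}, alternating node_counts and rebalance for at most L
# passes yields a balanced cornerstone array for the sorted particle keys P.
```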
When computing a cornerstone array by starting from just the root node, our method thus offers no advantage over a conventional top-down level-sequential octree construction approach. The utility of our method arises when octrees are built repeatedly for multiple steps with particle positions only changing by a small amount between each step. This applies, for example, to particle simulations that are performed for several time-steps. In this situation, we can keep a cornerstone array balanced by performing a single histogram count and rebalance update on the cornerstone array of the previous step. Implicitly, 𝐊 contains the size and location of internal octree nodes as well, but traversal and the storage of additional node properties are not possible in a straightforward manner. We describe the explicit construction of internal nodes in Sec. <ref>. In terms of computational building blocks, the construction of cornerstone arrays relies on radix sort for the particle SFC keys and on prefix sums for rebalancing, which are available from parallel libraries for both CPU and GPU architectures, such as the Thrust library <cit.>. The remaining operations (SFC key generation from 3D particle coordinates, f, g_1, and g_3) consist of purely independent operations on array elements. Since there is no need for synchronization or communication, these operations are simple to implement in parallel with directive-based (e.g., OpenMP, OpenACC) as well as accelerator-specific programming paradigms (e.g., CUDA, HIP). § FULLY LINKED OCTREES Although cornerstone arrays implicitly contain the information of the full octree, they are not traversable because parent-child node relationships are not stored. To efficiently generate the missing internal nodes, we will adapt the algorithms by Karras <cit.> and Apetrei <cit.>. Both originate in the computer graphics community and may be used for 3D collision detection via the generation of bounding volume hierarchies, or for ray tracing. Either approach relies on the computation of sorted SFC keys for the objects present in a 3D scene as a first step. Subsequently, the SFC keys of the objects are viewed as the leaves of a binary radix tree whose internal nodes can be generated in parallel. A sequence of N sorted object SFC keys leads to a hierarchy of N-1 internal binary radix nodes. Each binary radix node is associated with a key that corresponds to the bit-string of the longest common prefix of all the keys that it covers, i.e. the root node has a key of length 0, and its two children have keys of length 1 consisting of either a 0 or a 1-bit. While the ordering of the N-1 internal binary radix nodes differs between the two approaches of Karras and Apetrei, either method defines a specific node layout that allows the determination of the node bit-string keys as well as parent-child relationships of the N-1 internal binary radix nodes in a data-parallel fashion. The resulting binary tree is compact in the sense that it does not contain any empty nodes or nodes with only a single child, which is a favorable property when N_crit = 1. As Karras mentions, it can be converted to an octree by extracting the binary nodes whose key lengths are divisible by 3. In practice, this conversion may be implemented in the same spirit as Eqs. (<ref>) - (<ref>): First, for each binary node, a value of 0 or 1 is stored in the 𝐎 array depending on whether the node key length is divisible by 3 or not.
After a prefix sum of 𝐎, the total number of octree nodes is then known and the relevant binary nodes may be extracted in parallel. Since the cornerstone arrays contain SFC keys as well, we may provide them instead of the particle SFC keys as inputs to the binary radix tree construction. This allows us to take advantage of the fully parallel construction of internal nodes as well as adjusting N_crit to our needs (constructing the binary radix tree from the particle SFC keys would yield a tree with N_crit = 1). If we wanted to preserve the compactness property, we could now eliminate any empty buckets from 𝐊 and construct the internal part of the octree as described above. But as we are interested in the case N_crit > 1, we are going to explore a second option where we do not eliminate any empty buckets from 𝐊, which is equivalent to mandating that each internal octree node have exactly 8 children. In this case, we can derive an analytical mapping between the keys in 𝐊 and the internal octree nodes that they implicitly contain, which allows us to bypass the construction of an intermediate binary radix tree. The resulting octree will contain a small fraction of empty nodes, depending on the value of N_crit and the distribution of particles, but for N_crit = 16 - 64 we have found that the increase in construction speed outweighs the penalty of a slightly larger tree, even for highly clustered distributions. The data format for the complete octree consists of the following arrays of integer types: * 𝐍𝐊[n] in [0, 2^3L), the node SFC keys * 𝐂𝐎[n] in [1, n), the connectivity array * 𝐋𝐎[L + 2] in [1, n), the tree level offsets where n = n_i + n_l is the total number of tree nodes, with n_i as the number of internal nodes and n_l as the number of leaves. Since the number of children per node is either 0 or 8, n_i = (n_l - 1) / 7. In brackets for each array, we first specify its length, followed by the range of values that its elements may assume. The first array contains the SFC keys of each node in the placeholder bit format defined in Ref. <cit.>, which allows encoding the SFC range covered by the node with a single key. As the SFC keys can be decoded into 3D coordinates, 𝐍𝐊 stores the 3D grid coordinates of each node. The connectivity of the tree is contained in 𝐂𝐎, which stores the index of the first child. The range of child indices of node i is therefore given by [𝐂𝐎[i], 𝐂𝐎[i] + 8). If 𝐂𝐎[i] = 0, index i corresponds to a leaf. Lastly, the i-th element of 𝐋𝐎 contains the index of the first node at the i-th octree level, where L is the maximum octree depth that the keys contained in 𝐍𝐊 are able to encode. Since we include the root node at level zero, L+1 entries are required to store the offset of each level plus an additional element to mark the end of the last level. In Fig. <ref> we illustrate this format with an example. The gray circles correspond to the nodes and contain the SFC keys from 𝐍𝐊, each of which starts with a 1-digit (the placeholder bit) followed by l octal digits, where l is the tree level of the node. The numbers to the left of the circles indicate the node indices and the numbers to the right show the values of the 𝐂𝐎 array. For this example, the tree level offsets are 𝐋𝐎 = { 0, 1, 9, 17, …, 17 }. To generate octrees in this format, we perform the following steps: * Copy 𝐊 to 𝐍𝐊[n_i,n] and convert to the Warren-Salmon placeholder-bit format * Generate internal node keys 𝐍𝐊[0, n_i]: For elements of 𝐊 in parallel compute the common key prefix length d of neighboring elements with a clz instruction, where clz counts the number of leading zero bits.
If mod(d, 3) = 0, 𝐊_i is the key of an internal node at level d/3. The corresponding placeholder-bit key is stored at 𝐍𝐊_j with j = (i - δ(𝐊_i, d)) / 7, where δ is defined below. This step can be combined with step <ref>. * Radix sort 𝐍𝐊 to yield a breadth-first node ordering. * Determine 𝐋𝐎 with a binary-search over 𝐍𝐊. * Locate the first child of internal nodes with a binary-search, using 𝐋𝐎 to narrow the search range and store in 𝐂𝐎. The δ function that maps indices of leaf nodes to indices of internal nodes is given by δ(𝐊_i, d) = ∑_l=1^d/3+1μ_k_l, μ_0, …, μ_7 = { 0, -1, -2, -3, 3, 2, 1, 0 }, where 𝐊_i = k_1 k_2 … k_L, such that k_l corresponds to the l-th octal digit of 𝐊_i, counting from the most significant digit. The resulting ordering of internal octree nodes before the sorting step is equal to what one would get by first constructing a binary radix tree on top of 𝐊 according to Karras, followed by enumeration and extraction of the octree nodes with a prefix sum. Indeed, we deduced Eq. (<ref>) from the layout of the binary radix tree by induction under the assumption that each internal node has 8 children. Additional octree node properties, such as particle counts, expansion centers, multipole moments, etc. can be stored in separate arrays of length n and may be constructed in subsequent steps when required. We choose a breadth-first ordering of the octree nodes to ensure that the child nodes of each cell are always next to each other and to allow for efficient traversal in Barnes-Hut treecodes <cit.>. Finally, to provide an overview of the relative costs of the steps involved in the tree construction, we list in Table <ref> timings for 1, 32 and 512 million particles resulting in octrees with 55800, 1.78 million and 28.9 million leaf nodes, respectively. § DISTRIBUTED OCTREES AND DOMAIN DECOMPOSITION In the case where particles and their sorted SFC keys 𝐏 are distributed across multiple compute nodes, we have to extend the algorithm in Sec. <ref> with a communication scheme. With a small modification, we can identically replicate across compute nodes the octree that arises from the globally unified set of particles. To that end, after applying f, we perform an element-wise global sum of the histogram counts array 𝐍: f_glob = f_r ∘ f, where f_r simply computes an element-wise global sum of 𝐍. The additional reduction step f_r can be implemented with a single call to MPI_Allreduce, supplying 𝐍 as argument. Initializing 𝐊 to the root node on all compute nodes, followed by an alternating application of f_glob and g until converged, then yields the balanced cornerstone array, replicated on all compute nodes, for the distributed set of SFC keys 𝐏. Due to the global collective communication pattern of f_r, this method is not suitable for generating octrees with a high resolution. However, it is fast enough for generating a global octree with sufficient resolution for decomposing the global domain by partitioning the space filling curve. The global domain can then be partitioned into n_ranks subdomains by grouping 𝐊 into n_ranks bins of consecutive leaves such that each bin contains approximately N_tot/n_ranks particles. Each bin covers a unique SFC key range that defines a compact volume of fractal-like shape. Our domain decomposition based on a coarse globally replicated octree requires collective communication that scales as 𝒪(N_tot log N_tot) and presents an alternative to other methods, such as sampling of the SFC <cit.>, which scales as 𝒪(N_tot^2) <cit.>.
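A minimal mpi4py sketch of the distributed histogram f_glob = f_r ∘ f described above is shown below; it is our own illustration (the Cornerstone library itself implements this step in C++ with MPI).

```python
import numpy as np
from mpi4py import MPI

def global_node_counts(P_local_sorted, K, comm=MPI.COMM_WORLD):
    """Element-wise global sum of the local leaf-node histograms (operations f and f_r)."""
    local = np.diff(np.searchsorted(P_local_sorted, K)).astype(np.int64)
    total = np.empty_like(local)
    comm.Allreduce(local, total, op=MPI.SUM)      # corresponds to a single MPI_Allreduce
    return total                                   # replicated identically on every rank
```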
§ LOCALLY ESSENTIAL OCTREES §.§ Definition Let us assume that we have decomposed the global domain into n_ranks subdomains as described in Sec. <ref>. In order for the domain decomposition to balance the subdomain particle counts roughly within p% of N_tot/n_ranks, where N_tot is the total global number of particles, only about 100 / p leaf nodes are needed per subdomain. Consequently, the global tree will have a leaf node count resolution of N_crit = N_tot· p / (100 · n_ranks), which is far too coarse for efficiently calculating physical forces for particles within a subdomain. What would be the requirements of a LET that facilitates the calculation of forces within just one subdomain, containing only the minimum number of nodes required for that task? In order to address this question, we need to introduce some physical context. Two important types of forces covering a wide range of physical applications are * Short-range forces with a cutoff. Examples: Smoothed Particle Hydrodynamics (SPH), van der Waals dispersion in molecular dynamics * Long-range forces arising from 1/r potentials. Examples: gravity and electrostatics For the calculation of short-range forces, the octree has to have a high resolution (small N_crit) inside a given subdomain F for accurate neighbor particle searches. Additionally, a high resolution is also required close to the exterior surface for the discovery of halo particles. Concerning long-range forces, we assume that the algorithm used to calculate them is either the Barnes-Hut treecode or FMM. Inside and close to the exterior surface of F, the tree resolution requirements are identical to the short-range case. Outside F, however, a geometrical criterion, commonly called the Multipole Acceptance Criterion (MAC), applies. For a given source tree node and target point or node, the MAC describes whether the application of the multipole approximation of the force due to the source on the target is acceptable or not. What all MACs have in common is that the largest acceptable size of a source node correlates with the opening angle, i.e. the ratio between source node size and distance to the target. One of the simplest possible MAC choices is the minimum distance MAC, which is fulfilled if r_min > l / θ, where r_min is the minimum distance between source and target, l is the source node edge length and θ is an accuracy parameter. The smaller the value for θ becomes, the smaller the error of the resulting force. More sophisticated acceptance criteria that depend on additional node properties exist and are compatible with our definition of an LET that follows. Let F be a subdomain covering a contiguous range of the space filling curve. An octree, represented by the cornerstone array 𝐊^foc, is locally essential in F or focused on F if * Leaves in F have a particle count ≤ N_crit while the counts of internal nodes in F exceed N_crit, i.e. they are balanced per the definition in Sec. <ref>. * Leaves outside F either pass the chosen MAC with respect to any point in F or their particle count is smaller than N_crit. The motivation for this condition is to make sure that when the Barnes-Hut treecode or FMM is applied to particles in F, if tree traversal encounters a leaf node outside F, either the multipole approximation is valid or it contains only a small number of particles that can be transferred at a small cost. Consequently, refinement of nodes at the exterior surface of F stops once particle counts drop below N_crit, as illustrated in Fig. <ref>.
* Internal nodes outside F fail the chosen MAC for at least one point or node in F and their particle count is > N_crit. This ensures that the local tree depth is the minimum required to satisfy condition <ref>. If we assume that particles within F reside on a particular compute node that additionally holds a copy of the halo particles close to the surface, the definition of the LET ensures that we will be able to compute long-range forces for particles in F, which depend on global information, without the need for access to particle data beyond F and its surface. In this regard, it is important to note that the LET still covers the entire global domain. §.§ Construction As was the case for local or globally replicated octrees, we represent a LET with a cornerstone array. The general iterative construction scheme that yields an octree that satisfies the LET definition also remains largely the same: for each leaf node, the decision to either subdivide, keep, or merge it is encoded in the same fashion, i.e. functions g_2 and g_3 apply without modification. What is different is that apart from the node counts array 𝐍^foc, the rebalancing operations to be performed now depend on a second array 𝐌 containing MAC evaluations and that the determination of 𝐍^foc requires a combination of collective and point-to-point communication. The MAC evaluations stored in array 𝐌 are performed by a function denoted f^LE_2, which traverses the octree defined by 𝐊^foc. The result of f^LE_2 is a boolean value for each octree node outside F that describes whether there exists a leaf in F for which the MAC fails. If one chooses the minimum distance MAC, the evaluation only depends on the geometrical size and location of the tree nodes and thus does not require communication. More adaptive MAC variants may rely on additional tree node properties, however, such as multipole expansion centers, which have to be computed prior to evaluating the MACs and will require communication. As the computation of 𝐌 involves tree traversal, f^LE_2 constructs the fully linked octree that matches the leaf nodes 𝐊^foc according to the procedure described in Sec. <ref>. An update cycle that refines the LET from 𝐊^foc to 𝐊'^foc consists of the following steps: f^LE_1: (𝐏, 𝐊^foc, 𝐊^glob, 𝐍^glob) ↦𝐍' (particle counts) f^LE_2: 𝐊^foc↦𝐌' (MAC evaluations) g^LE: (𝐊^foc, 𝐍^foc, 𝐌) ↦𝐊'^foc, (rebalancing) where 𝐊^glob is the cornerstone array of the globally replicated octree used for domain decomposition. We again distinguish between functions labelled f, which compute node properties such as 𝐍 and 𝐌 without modifying the octree, and functions labelled g, which rebalance the octree based on said properties. A further distinction between the two groups is that the computation of node properties requires communication, while rebalancing consists of local operations only. The function g^LE decomposes into the steps g^LE = g_3 ∘ g_2 ∘ g^LE_1 with g^LE_1: (𝐊^foc, 𝐍^foc, 𝐌) ↦𝐎, (rebalancing decision) and g_2 and g_3 as defined in Sec. <ref>. For nodes inside F, g^LE_1 is identical to g_1, i.e. only particle counts are taken into account to compute the corresponding elements of 𝐎. Outside F, the result of g^LE_1 depends on the particle counts as well as the MAC evaluations: a node is only split if the particle count exceeds N_crit and the MAC evaluation failed. Conversely, a node is merged if its parent node either has a particle count smaller than N_crit or passed the MAC evaluation. The most demanding function to implement is f^LE_1.
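For illustration, a simplified sketch of the minimum-distance MAC and the per-leaf decision made by g^LE_1 is given below; node merging and the exact treatment of the focus boundary are omitted, and the helper names are ours, not the Cornerstone API.

```python
import numpy as np

def min_distance_mac(source_center, source_edge, target_lo, target_hi, theta):
    """True if r_min > l / theta for a cubic source node and an axis-aligned target box."""
    half = 0.5 * source_edge
    gap = np.maximum(0.0, np.maximum(target_lo - (source_center + half),
                                     (source_center - half) - target_hi))
    return np.linalg.norm(gap) > source_edge / theta

def let_leaf_op(count, mac_passed, inside_focus, n_crit):
    """Rebalancing decision for one leaf: 8 = subdivide, 1 = keep (merging omitted)."""
    if inside_focus:
        return 8 if count > n_crit else 1              # same rule as g_1 inside F
    return 8 if (count > n_crit and not mac_passed) else 1
```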
The function f^LE_1 calculates node properties of the LET that can be computed by an upward pass. This applies to properties where the leaves depend on the contained particles and internal nodes on their children. Examples are given by node particle counts 𝐍, multipole expansion centers, i.e. the center of mass for gravity, and multipole moments. In the following, we assume that the domain decomposition was carried out based on 𝐊^glob, assigning to each MPI rank a subdomain F along with the particles contained therein. The aim of f^LE_1 is now to compute an array 𝐐 of node properties arranged in the same layout as the nodes of the fully linked octree that can be constructed on top of 𝐊^foc. Our strategy is to first construct the elements of 𝐐 that correspond to the leaf nodes, as this allows each rank to obtain the internal part through an upward pass without any further communication. The elements of 𝐐 that correspond to leaves in 𝐊^foc fall into three different categories, which are illustrated in Fig. <ref>: * inside F between keys k_2 and k_3. * outside F and exceeding the resolution of the global octree between keys k_1 to k_2 and k_3 to k_4. * outside F and contained in 𝐊^glob to the left of k_1 and to the right of k_4. While properties of leaves inside F can be directly computed based on local particle data, the second category involves point-to-point communication with nearby subdomains, and the third category involves collective communication followed by an upward pass of the global octree. Before any communication can take place, each rank must first build the fully linked octree on top of 𝐊^foc, compute the elements of 𝐐 tied to leaves in F, and perform an upward pass. This yields a partially filled property array 𝐐 containing valid data for any node contained in F. Additionally, each rank determines a list of peer ranks that contain category 2 tree nodes in their subdomains by traversing the global octree. Point-to-point communication and collective communication may then proceed in parallel. The former is performed in two steps: in a first round of point-to-point messages, each rank sends a request to its peers that defines a series of octree nodes. As only leaf nodes are requested, the messages contain sub-sequences of 𝐊^foc. On the receiving side, the transmitted octree nodes may correspond to either leaf or internal nodes of the LET. An answer is then assembled by locating each requested octree node in the LET of the receiver and extracting the corresponding elements of 𝐐. The point-to-point exchange concludes with a second round of messages where the peer ranks send the required properties back to the requester. In order to obtain the remaining properties of nodes that fall into the third category, we construct an auxiliary array of properties 𝐐^glob whose node layout matches the fully linked octree constructed from 𝐊^glob. Each rank has the required data to populate the leaf nodes of 𝐐^glob that fall into its subdomain. We can then perform an MPI_Allgatherv operation on the 𝐐^glob array to replicate the leaf node quantities on all ranks. A subsequent upward pass of 𝐐^glob yields the internal nodes up to the global root node. Finally, each rank can complete the leaves of 𝐐 by extracting the missing nodes from 𝐐^glob. Since some internal nodes in the LET may exist outside F and also not appear in the global octree, a second upsweep of 𝐐 is needed to complete its construction. The final properties may then serve as the basis for particle force computations and as criteria to rebalance 𝐊^foc in Eq.
(<ref>). We implement all stages of LET generation on GPUs. In addition to the operations discussed in Sec. <ref>, this involves an upsweep in f^LE_1 and the evaluation of MACs in f^LE_2, for which we employ fully linked octrees constructed on top of 𝐊^glob and 𝐊^foc. Once the application of Eqs. (<ref>) to (<ref>) no longer results in any changes to the LET, an analogous locally essential quadtree might look as shown in Fig. <ref>. In general, the resolution will be highest in the focus area F highlighted in red and decrease when moving away from F. But since the local particle density is also taken into account, the tree structure does not necessarily follow a uniform geometrical pattern. In our construction scheme, both 𝐊^glob and 𝐊^foc are stored in cornerstone format, which is also used exclusively as the format for communication. Fully linked octrees are constructed on-demand on top of 𝐊^glob and 𝐊^foc. They are not preserved between time-steps. The collective communication that we perform on 𝐐^glob scales as 𝒪(n_ranks log n_ranks) if the number of particles per MPI rank is kept constant, which is worse than the theoretical optimum of 𝒪(log n_ranks). Due to the dynamically changing fractal sub-domain geometries that result from dividing the SFC into segments, a strictly hierarchical 𝒪(log n_ranks) communication scheme that does not resort to collective communication for the highest levels of the LET, where nodes transcend the sub-domain boundaries, is very difficult to implement. This fact was also pointed out in Ref. <cit.>, and, to the best of our knowledge, remains an unsolved problem. § RESULTS To assess the impact of the LET on the required size of the global octree, we create a spherical particle distribution with density inversely proportional to the radius and distribute it on varying numbers of compute nodes, each running one MPI rank assigned one subdomain constructed according to the method described in Sec. <ref>. The radius of the sphere is kept constant and each compute node holds a constant count of 64 million particles. With increasing number of compute nodes and total number of particles, the density of the sphere increases while the volumes of the subdomains decrease. For system sizes between 64 million and 141 billion particles, we construct LETs with N_crit = 64 and compare the resulting number of tree leaves against the leaf counts of 𝐊^glob (i.e. the global octree used for domain decomposition) and against a hypothetical octree with a uniform resolution of N_crit = 64 everywhere. In Fig. <ref>, we observe that LET growth is consistent with the expected 𝒪(log n_ranks) behavior. By construction, 𝐊^glob grows linearly and will eventually exceed the size of 𝐊^foc, but it is roughly three orders of magnitude smaller compared to the octree with high uniform resolution that would be required in the absence of a LET. In order to compare the cost of communication and computation, we implement a highly optimized version of the Barnes-Hut treecode and compare the time required for the tree traversal with the time it takes to compute the multipole moments for the LET by applying f^LE_1. We used the treecode available at <https://github.com/exafmm/bonsai> as a starting point, a modified version of the original Bonsai code <cit.> with breadth-first traversal for GPUs.
We further modified this code by changing the octree structure to our LET, replacing PTX inline assembly instructions from the NVIDIA Kepler microarchitecture with portable warp-level instructions for compatibility with AMD GPUs, and by adding further optimizations. In Fig. <ref>, we test the performance and scalability of our Barnes-Hut implementation on the LUMI-G system that features compute nodes with 4 AMD Instinct MI250X GPU cards. Each card contains two chips that appear as two separate GPUs and as a consequence, we ran our tests with 8 MPI ranks per compute node. Our choice of cartesian multipoles with quadrupole corrections and θ = 0.5 results in three significant digits in the computed particle accelerations. As described in Sec.<ref>, the LET multipole computation consists of two local upsweeps with a communication phase in between. After the first local upsweep is complete, it is possible to launch the tree traversal starting from the top LET nodes spanning the local sub-domain in order to achieve overlap with the multipole communication phase. Once the traversal of the local nodes completes, the remaining LET parts can be traversed by providing the LET nodes to the left and right of the local sub-domain on the SFC as starting points. For obtaining the results shown in Fig. <ref>, we did not overlap multipole construction with tree traversal and instead imposed a synchronization barrier in between to properly separate the timings for the two operations recorded as maxima across ranks. To estimate achieved performance in terms of Flops, we collected interaction statistics on one rank, counting the average number of particle-particle (P2P) and multipole-particle (M2P) interactions accrued during traversal per particle. In accordance with Ref. <cit.>, counting P2P and M2P interactions as 23 and 65 flops, respectively, we estimate the performance per rank and MI250X half card at 6.7 Tflops in double precision, reaching a peak performance of 110 Pflops for the largest run with 8 trillion particles on 2052 nodes or 16416 GPU half cards, the maximum we were able to use. Note that the jump in communication time for the exchange of multipole moments when moving from 2 to 8 trillion particles is not due to an increase in collective communication. As shown in Fig. <ref>, the number of nodes in the global octree is in fact identical between the last two data points. The difference in communication time is caused by the larger number of LET nodes in the largest run. This is simply a consequence of the average number of particles per leaf node in the LET fluctuating between ≈ N_crit/8 and N_crit (64), explaining the shorter traversal time on 16416 GPUs as well. While the average number of particles per leaf was around 30 on 1 to 4096 GPUs, the average dropped to 12 for the system with 8 trillion particles, resulting in higher traversal efficiency by reducing the number of P2P interactions per particle. § SUMMARY AND FUTURE WORK We have shown that octrees can be efficiently constructed on GPUs with the presented algorithms. The resulting octrees and LETs are suited to supporting the computation of short and long-ranged forces acting on particles and can be constructed at scale on distributed HPC systems. Relative to a competitive Barnes-Hut treecode implementation, LET construction overhead remains small enough to allow for nearly complete overlap with the tree traversal phase on almost the full LUMI-G HPC system. 
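The performance estimate described above amounts to simple bookkeeping over the measured interaction statistics. The short sketch below reproduces that arithmetic; it is illustrative only, and the interaction counts and traversal time in the example are placeholders rather than the averages measured on LUMI-G, which are not listed here.

```python
def estimated_flops_per_rank(n_particles, p2p_per_particle, m2p_per_particle,
                             traversal_seconds, flops_p2p=23, flops_m2p=65):
    """Estimate the sustained flop rate of one rank's Barnes-Hut traversal from
    the average per-particle interaction counts, counting 23 flops per P2P and
    65 flops per M2P interaction as in the text."""
    total_flops = n_particles * (p2p_per_particle * flops_p2p
                                 + m2p_per_particle * flops_m2p)
    return total_flops / traversal_seconds

# Illustrative numbers only: 64 million particles per rank with hypothetical
# interaction counts and traversal time, not the statistics measured on LUMI-G.
rate = estimated_flops_per_rank(64_000_000, 2500, 1800, 4.0)
print(f"{rate / 1e12:.2f} Tflops per rank")
```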
Our future goal is to employ Cornerstone for an interesting scientific application, for example by combining our N-body solver with SPH. §.§ Software availability The algorithms described in this work have been implemented in C++, CUDA and HIP, and are available as open-source software. Octrees, LETs, neighbor searching and domain decomposition are bundled into the Cornerstone library available at <https://github.com/sekelle/cornerstone-octree>. It does not implement any physics, such as Barnes-Hut or FMM for gravity. Cornerstone is integrated into a full-fledged application framework called SPH-EXA, available at <https://github.com/unibas-dmi-hpc/SPH-EXA>. SPH-EXA can perform I/O, generate initial conditions and offers SPH and gravity (described in Sec. <ref>) as composable components for physical simulations. Efforts are underway to also make our Barnes-Hut implementation available as a stand-alone package. This work was supported by the Swiss Platform for Advanced Scientific Computing (PASC) project SPH-EXA (funding periods 2017-2021 and 2021-2024). We acknowledge the support of the LUMI-G pilot program hosted by CSC, in particular Emmanuel Ory, Pekka Manninen and Fredrik Robertsén, and the Swiss National Supercomputing Center (CSCS) for providing access to the Piz Daint supercomputer.
http://arxiv.org/abs/2307.04114v1
20230709080743
FILM: How can Few-Shot Image Classification Benefit from Pre-Trained Language Models?
[ "Zihao Jiang", "Yunkai Dang", "Dong Pang", "Huishuai Zhang", "Weiran Huang" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CL", "cs.CV", "cs.MM" ]
FILM: How can Few-Shot Image Classification Benefit from Pre-Trained Language Models? Few-shot learning aims to train models that can be generalized to novel classes with only a few samples. Recently, a line of works has been proposed to enhance few-shot learning with accessible semantic information from class names. However, these works focus on improving existing modules such as visual prototypes and feature extractors of the standard few-shot learning framework. This limits the full potential use of semantic information. In this paper, we propose a novel few-shot learning framework that uses pre-trained language models based on contrastive learning. To address the challenge of alignment between visual features and textual embeddings obtained from a text-based pre-trained language model, we carefully design the textual branch of our framework and introduce a metric module to generalize the cosine similarity. For better transferability, we let the metric module adapt to different few-shot tasks and adopt MAML to train the model via bi-level optimization. Moreover, we conduct extensive experiments on multiple benchmarks to demonstrate the effectiveness of our method. § INTRODUCTION Deep neural networks <cit.> have achieved remarkable success in many fields. However, training deep neural networks requires a large amount of labeled data, which can be expensive and time-consuming to obtain. For instance, in medical imaging, obtaining labeled data requires expert radiologists to annotate images. This limits the application of deep learning models in real-world scenarios. In contrast, humans possess the ability to recognize and classify objects of unseen categories with only a few examples. This highlights the potential value of few-shot learning <cit.>, where models are trained on base classes and can be generalized well to novel classes with a limited number of samples. Previous works mainly focus on image classification tasks, and most of them adopt the meta-learning paradigm <cit.>. Recent works consider leveraging additional information from other modalities such as text to enhance the performance of few-shot learning. In particular, some methods <cit.> adopt static word embedding models (e.g., GloVe <cit.>) to extract textual representations of class names and use them to adjust visual prototypes or classifiers. With the appearance of general language models such as BERT <cit.> and GPT <cit.>, another line of works <cit.> adopts public pre-trained language models (PLMs) to extract more comprehensive semantic information from class names. However, these works still focus on improving existing modules of the standard few-shot learning framework (e.g., visual prototypes and feature extractors), which confines the full utilization of powerful PLMs in few-shot learning. Inspired by the success of vision-language models <cit.> trained by contrastive learning, we explore the idea of aligning visual features and textual embeddings for few-shot image classification in this paper, where textual embeddings are extracted by a public PLM from class names following the setting of <cit.>. However, there are two main factors making this alignment challenging.
Firstly, unlike vision-language models that have sufficient pairs of image and textual descriptions available for model training, we only have the class name of each image instead of a rich description. Secondly, in contrast to vision-language models where both visual and textual encoders are learnable to align embeddings, our textual encoder inherits from a public PLM trained on uni-modal text data. This leads to totally different structures of textual embedding spaces and thus makes the alignment between visual and textual features difficult. For instance, if we directly align visual features and textual embeddings, the probability[Here probabilities mean the elements output by the softmax function.] of a sample image being assigned to its true label is extremely low (see blue bars in Figure <ref>). This indicates that the visual feature of an image struggles to approach the corresponding text embedding of its true label. In this paper, we propose a novel framework (Figure <ref>) to boost few-shot learning by means of public PLMs. To bridge the gap between visual and textual modalities, we carefully design a textual branch of our framework and introduce a metric module to measure the similarity between visual and textual embeddings. The textual branch first incorporates class labels into our hand-crafted prompt template containing a [MASK] token and then inputs the filled sentence to a PLM. The PLM transforms the input sentence into a hidden vector sequence and the final textual embedding is extracted from the vector corresponding to the [MASK] token. Meanwhile, the visual feature is obtained by a standard visual encoder. After that, we compute the similarities between visual features and textual embeddings through the proposed metric module, and send them into the contrastive loss. For better transferability on novel classes, we let the metric module adapt to different few-shot tasks and adopt Model-Agnostic Meta-Learning (MAML) <cit.> to train the model via bi-level optimization. Moreover, we conduct extensive experiments on multiple benchmarks to demonstrate that the proposed method significantly outperforms the state-of-the-art few-shot learning methods based on PLMs. The main contributions of this paper can be summarized as follows. * We propose a novel few-shot learning framework that leverages semantic information extracted by a pre-trained language model based on contrastive learning. * We carefully design a textual branch of the framework and introduce a metric module to generalize the similarity measure. * The metric module is designed to be adaptive to different few-shot tasks for better transferability, and MAML is adopted to train the model via bi-level optimization. * We conduct extensive experiments on multiple benchmarks with different domains to demonstrate the effectiveness of our method. § RELATED WORK Few-shot Learning. In general, few-shot learning methods are mainly divided into two categories: metric-based methods and optimization-based methods. Metric-based methods aim to map samples into an appropriate embedding space on the basis of certain distance metrics. Most previous methods use task-agnostic distance metrics, e.g., cosine similarity distance <cit.>, Euclidean distance <cit.>, CNN relation module <cit.>, and Earth Mover’s Distance <cit.>. Additionally, several methods <cit.> involve learning task-specific distance metrics, which can be adjusted for different tasks.
Optimization-based methods <cit.> aim at learning optimal initial model parameters on base classes and quickly fine-tuning them on novel classes with a few support examples. Our paper generalizes the similarity measure by the proposed metric module, and uses MAML <cit.> to train the model. Few-shot Learning with Semantic Information. Recent works on few-shot learning start to utilize semantic information from class labels to enhance few-shot learning. AM3 <cit.> proposes an adaptive modality mixture mechanism to model prototype representation as a combination of visual features and language semantic features. KTN <cit.> learns classifiers by fusing visual information and knowledge information acquired from a knowledge graph and word embeddings with a semantic-visual mapping network based on Graph Convolutional Network <cit.>. VS-Alignment <cit.> introduces a contrastive alignment between visual and semantic features as an additional objective. Semantic Prompt <cit.> considers semantic information as prompts to tune the ViT <cit.> feature extractor. All these methods leverage semantic features as auxiliary information to adjust visual prototypes, classifiers, or feature extractors. In contrast, we propose a new few-shot learning framework to directly align visual and textual embeddings via contrastive learning. Contrastive Learning. Contrastive learning is a popular method in self-supervised representation learning. It learns representations by pulling positive samples close and driving negative samples away from them in the latent embedding space with a contrastive loss. A set of previous works have shown the excellent performance of contrastive learning in computer vision <cit.> and natural language processing <cit.> tasks. Furthermore, recent works <cit.> apply contrastive learning to multi-modal settings by aligning image-text pairs in the embedding space. Our work introduces contrastive learning to few-shot learning, and proposes a learnable metric module to make aligning visual features and textual embeddings possible. § PROBLEM DEFINITION Few-shot learning involves two disjoint class sets: a set of base classes 𝒞_base and a set of novel classes 𝒞_novel. Sufficient labeled samples are provided for each base class, while abundant unlabeled samples and only a few labeled samples are provided for each novel class. Few-shot learning aims at classifying unlabeled samples from novel classes through training on all the given labeled samples. Previous works usually formulate the few-shot learning problem as N-way K-shot classification, which denotes a classification task among N classes with K labeled samples available for each class. In addition, given a fixed pre-trained language model, we use bimodal contrastive learning to leverage the semantic information extracted by it. Concretely, for each embedded sample image z and N embedded class labels {t_1,t_2,…,t_N} in an N-way K-shot classification task, contrastive learning adjusts the embedding space through the following widely-used contrastive loss <cit.> (using cosine similarity as an example): ℒ = -log exp(z · t_+/τ) / ∑^N_i=1 exp(z · t_i/τ), where t_+ is the embedded true label of the sample image and τ is a temperature hyper-parameter. The meta-learning paradigm <cit.> is commonly used to solve the few-shot learning problem, which trains and evaluates the model with the episodic mechanism. The standard meta-learning paradigm contains two stages: meta-training and meta-testing.
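As a concrete reference for the problem setup just defined, the sketch below shows how a single N-way K-shot M-query episode might be assembled from a pool of labeled examples. It is a generic illustration rather than the authors' data pipeline; the function and variable names are invented.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, m_query=16, rng=random):
    """Assemble one N-way K-shot M-query episode from a list of (image, label)
    pairs: draw N classes, then K support and M query images per class."""
    by_class = defaultdict(list)
    for image, label in dataset:
        by_class[label].append(image)
    episode_classes = rng.sample(sorted(by_class), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(episode_classes):
        picked = rng.sample(by_class[cls], k_shot + m_query)
        support += [(x, episode_label) for x in picked[:k_shot]]
        query += [(x, episode_label) for x in picked[k_shot:]]
    # the sampled class names are kept so they can be fed to the textual branch
    return episode_classes, support, query
```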
In each episode of the meta-training stage, an N-way K-shot M-query classification task 𝒯=(𝒮,𝒬) is constructed with samples from the base classes. We first randomly select N classes from 𝒞_base as 𝒞_𝒯. For each class, we randomly sample K support images and M query images. Then we form the support set 𝒮={(x_i,y_i)|y_i∈𝒞_𝒯,i=1,2,…,N× K} and the query set 𝒬={(x_i,y_i)|y_i∈𝒞_𝒯,i=1,2,…,N× M} with the support images and the query images respectively, where x_i is the i-th sample image and y_i is the class label of x_i. To learn an appropriate embedding space, bi-level optimization is performed on 𝒮 and 𝒬 respectively, utilizing a contrastive loss. In each episode of the meta-testing stage, a classification task is built on the novel classes in a similar way. The support set is formed with a few labeled samples, while the query set is sampled from the unlabeled samples. After adapting to the novel classes by minimizing the contrastive loss on the support set, the model is used to predict class labels for the sample images in the query set. § METHOD We introduce our method of Few-shot Image classification with pre-trained Language Models (FILM) in this section. The overall framework is illustrated in Figure <ref>, which consists of three modules: a textual branch, a visual branch, and a metric module. For each episode, the textual branch extracts textual embeddings from class labels, while the visual branch extracts visual embeddings from support and query images. Moreover, the metric module computes the similarity score matrix between textual and visual embeddings from these two branches. In addition, we utilize a training strategy based on the MAML algorithm to train the model via bi-level optimization. §.§ Textual Branch In this section, we explain how we design the textual branch to get textual embeddings from class labels. The textual branch comprises a text-based pre-trained language model (PLM) and a language model head. During meta-training and meta-testing, the PLM is frozen while the language model head is tuned for the downstream classification tasks. In our study, we mainly use the masked language model as the PLM. Notice that PLMs mainly take sentences rather than single words or phrases as input during the pre-training stage. Therefore, to bridge the gap between the pre-training and downstream tasks, for each class label y_i, we insert it into a hand-crafted prompt template and get y_i^prompt as the input of the PLM. The token sequence of y_i^prompt is first converted to a token embedding sequence through a token vocabulary. The input embedding sequence is calculated by summing the corresponding token embeddings and positional embeddings. Then the PLM transforms the input embeddings into a sequence of hidden vectors. Two straightforward ways to get the textual embedding from the output hidden vector sequence are respectively: (1) taking the average vector of the output vector sequence as the textual embedding; (2) taking the hidden vector of the [CLS] token as the textual embedding. To make textual embeddings more relevant to the visual descriptive information of the corresponding categories, we design a prompt template with one [MASK] token as y_i^prompt = [CLS] The appearance of y_i is [MASK] . [SEP] and extract the textual embedding by sending the hidden vector of the [MASK] token to the language model head.
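A minimal sketch of such a textual branch is given below, assuming a RoBERTa-style masked language model accessed through the HuggingFace transformers API. The class and its names are illustrative rather than the authors' released code; RoBERTa's boundary tokens and its <mask> token play the roles of [CLS], [SEP] and [MASK], and the tokenizer adds the boundary tokens automatically.

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TextualBranch(nn.Module):
    """Fill each class label into the prompt, run the frozen masked language
    model, and project the hidden vector at the mask position with a small
    trainable language model head."""
    def __init__(self, plm_name="roberta-base", out_dim=640):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(plm_name)
        self.plm = AutoModel.from_pretrained(plm_name)
        for p in self.plm.parameters():          # the PLM stays frozen
            p.requires_grad = False
        self.head = nn.Linear(self.plm.config.hidden_size, out_dim)

    def forward(self, class_names):
        mask = self.tokenizer.mask_token         # "<mask>" for RoBERTa
        prompts = [f"The appearance of {name} is {mask} ." for name in class_names]
        batch = self.tokenizer(prompts, return_tensors="pt", padding=True)
        hidden = self.plm(**batch).last_hidden_state             # (N, L, H)
        at_mask = batch["input_ids"] == self.tokenizer.mask_token_id
        return self.head(hidden[at_mask])                         # (N, out_dim)
```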
In this way, the extraction of textual embeddings is treated as a masked language modeling task, which makes downstream classification tasks more consistent with the pre-training of the PLM. The comparison among different designs of textual branches will be shown in Table <ref> later. §.§ Metric Module Inspired by vision-language models trained by contrastive learning, we explore aligning visual and textual modalities for few-shot image classification. However, directly aligning visual features and textual embeddings extracted by text-based PLM with cosine similarity has a poor effect in few-shot setting. The blue bars in Figure <ref> show that the probability of a sample image being assigned to its true label is extremely low if we directly align the visual and textual embeddings. In this paper, we introduce a metric module to generalize the similarity measure between visual features and textual embeddings. Moreover, we let the metric module adapt to different few-shot tasks for better transferability on novel classes. Specifically, we define f_θ_I as the image encoder with learnable parameters θ_I to transform each sample image x_i into a feature map z_i = f_θ_I(x_i). Textual branch f_θ_T with learnable parameters θ_T is used to extract the textual embedding t_y_i = f_θ_T(y_i) from each class label y_i. We generalize the similarity measure between visual embeddings z and textual embeddings t as a learnable function M(z, t) called metric module, whose parameters are denoted as θ_M. For example, the metric module could be a bilinear function M(z, t)=z^⊤θ_Mt (degenerating to the cosine similarity if θ_M is the identity matrix) or a neural network, e.g., M(z, t)=MLP_θ_M([z,t]). During meta-testing, we first fine-tune the task-specific parameters θ_M on the support set 𝒮. Then we use the similarity score matrix computed by the metric module as a reference to infer labels for sample images in the query set 𝒬. As is shown in Figure <ref>, the correct classification probabilities of our method are significantly higher than that of direct alignment, which means that our metric module can effectively align the visual features and textual embeddings. §.§ Loss Function We formulate the learning objective as a contrastive loss (Eq (<ref>)), which pulls together images and corresponding class labels while pushing away unmatched pairs in the embedding space. Moreover, we aim to train a model to maximize the similarity between visual features and textual embeddings for matching (image, text) pairs while reducing the similarity for non-matching pairs. Specifically, for a classification task 𝒯=(𝒮,𝒬), we calculate the contrastive loss on the support set 𝒮 and the query set 𝒬 respectively. On the support set, the contrastive loss ℒ_𝒮 is computed with all the support samples, which has a formulation as: ℒ_𝒮 = -1/|𝒮|∑_x_i∈𝒮logexp( M(z_i, t_y_i) /τ )/∑_c∈𝒞_𝒯exp(M(z_i, t_c)/τ ), where z_i is the visual embedding of the i^th support image x_i, t_y_i is the textual embedding of the true label y_i corresponding to x_i, t_c is the textual embedding of the class label c, and M(·, ·) is the similarity measure. On the query set, the contrastive loss ℒ_𝒬 has almost the same formulation as ℒ_𝒮, except it is computed with all the query samples of 𝒬. §.§ Training Strategy In this work, we incorporate the Model-Agnostic Meta-Learning (MAML) <cit.> algorithm to train the model via bi-level optimization as our training strategy. 
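Before the training strategy is spelled out, the bilinear form of the metric module and the support-set loss defined above can be sketched as follows. This is an illustrative sketch under the stated definitions, not the authors' implementation; the 640-dimensional embedding size and the inner-loop temperature value follow the implementation details reported later in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearMetric(nn.Module):
    """Bilinear form of the metric module, M(z, t) = z^T A t. With A
    initialized to the identity and unit-normalized embeddings this starts out
    as cosine similarity; A is the part that is adapted to each task."""
    def __init__(self, dim=640):
        super().__init__()
        self.A = nn.Parameter(torch.eye(dim))

    def forward(self, z, t):
        z = F.normalize(z, dim=-1)     # (B, D) visual embeddings
        t = F.normalize(t, dim=-1)     # (N, D) textual embeddings
        return z @ self.A @ t.t()      # (B, N) similarity score matrix

def support_loss(metric, z_support, t_classes, y_support, tau=0.2):
    """Contrastive loss over the support set: cross-entropy of the
    temperature-scaled metric scores against the true class indices, which is
    the averaged log-softmax form of the equation above."""
    return F.cross_entropy(metric(z_support, t_classes) / tau, y_support)
```

The query-set loss takes the same form with the query embeddings and labels substituted in.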
Our training strategy aims to learn a good model initialization (through the outer-loop optimization), which can be quickly adapted to novel tasks given a few examples (through the inner-loop optimization). The whole algorithm for our training strategy is outlined in Algorithm <ref>. First, we randomly initialize the parameters of the image encoder θ_I, the language model head θ_T, and the metric module θ_M. For each task instance 𝒯_j from the distribution p(𝒯), we divide 𝒯_j into a support set 𝒮_j and a query set 𝒬_j. To make the metric module task-specific, we create a copy of θ_M as the adapted parameters θ_M^'. In the inner loop, we adapt the model to the current task 𝒯_j by updating θ_M^' with a number of gradient descent steps on the support set while keeping θ_I, θ_T and θ_M fixed. In the outer loop, θ_M^' are utilized to evaluate the performance of the adapted model on the query set. Specifically, we compute the loss on the query set with θ_I, θ_T, θ_M^' and perform gradient descent with respect to all the model parameters θ = {θ_I, θ_T, θ_M}. The optimization objective of the meta-training stage is to learn a good initialization across tasks. For example, when using one gradient update in the inner loop, the optimization objective can be formulated as follows: min_θ ∑_𝒯_j ∼ p(𝒯) ℒ_𝒬_j(θ_I, θ_T, θ_M - α∇_θ_M ℒ_𝒮_j(θ_I, θ_T, θ_M)), where ℒ_𝒮_j and ℒ_𝒬_j denote the loss functions that evaluate the performance on the support and query sets respectively, and α is the learning rate of the inner loop. § EXPERIMENTS §.§ Setup Datasets. We experiment on three general object recognition datasets, i.e., miniImageNet, tieredImageNet and CIFAR-FS, and one fine-grained categorization image classification dataset, i.e., CUB-200-2011. The miniImageNet dataset was proposed in <cit.> as a benchmark for few-shot image classification tasks. It contains a subset of 100 classes in the ImageNet <cit.> dataset, where 64 classes are used for training, 16 classes for validation, and 20 classes for testing. The tieredImageNet dataset <cit.>, which is also derived from the ImageNet <cit.> dataset, contains 351 classes for training, 97 classes for validation, and 160 classes for testing. The CIFAR-FS dataset is built upon the CIFAR-100 <cit.> dataset. Following the recent work of <cit.>, we use the same training/validation/testing splits consisting of 64/16/20 classes respectively. CUB-200-2011 (CUB) <cit.> is a dataset for fine-grained bird species classification tasks consisting of 100/50/50 classes for training/validation/testing splits respectively. We also evaluate the domain transferability of our method by training on the miniImageNet dataset and then testing on the CUB dataset. Architecture. For the visual branch, following previous works <cit.>, we use ResNet-12 as our image encoder, which consists of four residual blocks. Each block contains three 3×3 convolutional layers and a 2×2 max-pooling layer. Similar to <cit.>, we adopt Dropblock as the regularizer and set the number of filters to (64, 160, 320, 640). We apply a global average pooling layer after the last residual block. The backbone network takes images with a spatial size of 84×84 as input and outputs 640-dim support and query visual embeddings. To extract comprehensive semantic information from class names, we adopt RoBERTa-base <cit.> as our text-based pre-trained language model, which is trained on large-scale corpora and available for public use.
The language model head is a linear layer, which transforms 768-dim hidden vectors into 640-dim textual embeddings. In addition, we use the bilinear form of our metric module. Implementation Details. Following <cit.>, we first pre-train the image encoder for 200 epochs on the miniImageNet, CIFAR-FS and CUB datasets, and 100 epochs on the tieredImageNet dataset. Then we adopt the episodic training procedure under 5-way 1-shot and 5-shot settings. In each episode, 16 unlabeled query images per class are used for the meta-training and meta-testing phases. We use the SGD optimizer with a momentum of 0.9 and a weight decay of 5e-4. The outer-loop learning rate is initialized as 1e-3 on the miniImageNet, CIFAR-FS, and CUB datasets and 1e-4 on the tieredImageNet dataset. The inner-loop learning rate is initialized as 0.5 on all four datasets. The number of inner-loop update steps is set to 25. Our model is meta-trained for 80 epochs on all datasets. The hyper-parameter τ is set to 1 for the 1-shot setting and 0.2 for the 5-shot setting in the inner loop, and 0.1 in the outer loop. To ensure the stability of the evaluation results, we test 1,000 episodes and report the average performance with 95% confidence intervals. We conduct experiments with an NVIDIA GeForce RTX 4090 GPU. §.§ Comparison with State-of-the-Art General Object Recognition and Fine-Grained Categorization. For fair comparisons, we compare with other methods using the same backbone or similar methods in both 5-way 1-shot and 5-way 5-shot settings on the miniImageNet, tieredImageNet, CIFAR-FS and CUB datasets. As is shown in Table <ref>, our method is superior to existing methods and achieves the best performance. Compared with previous methods that leverage semantic information from class names, such as KTN <cit.>, AM3 <cit.>, TRAML <cit.> and Vs-Alignment <cit.>, our method improves 1-shot accuracy by 2.42% and 5-shot accuracy by 4.41% on miniImageNet. Furthermore, our method outperforms AM3 <cit.> by 3.88% and 4.41% at 1-shot and 5-shot settings on tieredImageNet respectively. According to Table <ref>, our method outperforms MetaOptNet <cit.> by 4.99% and 3.06% at 1-shot and 5-shot settings respectively on the CIFAR-FS dataset. In addition, on the CUB dataset, our method surpasses all the competitors, including RE-Net <cit.>, which previously achieved the best result. One observation worth highlighting is that our method not only outperforms traditional methods based on meta-learning but is also superior to methods using textual information on four benchmark datasets. These results validate the effectiveness of our proposed few-shot learning framework, which can leverage semantic information well in few-shot image classification tasks. Evaluation on Cross Domain and Larger Shots. To evaluate the cross-domain transferability of different few-shot learning methods, we train them on the source domain miniImageNet dataset and test them on the target domain CUB dataset. This setting is challenging due to the domain gap between the training and testing datasets. The results are reported in Table <ref>, showing that our method has competitive performance and obtains consistent improvements in the cross-domain setting. This indicates the transferability of our method in a situation where the meta-testing tasks are entirely different from the meta-training tasks. Furthermore, we evaluate the performance when the number of shots increases (e.g., 10-shot, 30-shot, and 50-shot) in Table <ref>.
This shows that our method would be more effective when there are more (image, text) pairs available for novel classes. These comparisons demonstrate that our method has more robust transferability, which means it can work well in cross-domain and larger-shot scenarios. §.§ Ablation Study In this subsection, we empirically show the effectiveness of each component. To investigate the effects of our designed textual branch, we try to use different extraction methods and prompt templates. Moreover, we conduct extensive ablation studies to verify the effectiveness in the absence of the metric module and visualize our method on the miniImageNet and tieredImageNet datasets. Analysis of the Textual Branch. To evaluate the effect of our textual branch, we test different extraction methods (i.e., “Avg”, “[CLS]”, and “[MASK]”) and prompt templates in our framework under the 5-way 1-shot setting on miniImageNet. As shown in Table <ref>, our “[MASK]” extraction method with the “[CLS] The appearance of y_i is [MASK] . [SEP]” prompt template outperforms the “[CLS]” extraction method by 5.39% and the “Avg” extraction method by 3.94%. Our proposed hand-crafted prompt template treats the extraction of textual embeddings as a masked language modeling task, which makes the textual embeddings more relevant to the visual description of object categories. The results demonstrate that the carefully designed textual branch is effective for aligning visual and textual embeddings for downstream few-shot classification tasks. Analysis of the Metric Module. As is shown in Table <ref>, we design a new model without using the support set to update the parameters in the inner-loop optimization and directly compute the similarity score matrix between the query visual embeddings and textual embeddings with cosine similarity in the outer loop. The results show a significant decrease in performance on four widely-used few-shot image classification datasets, demonstrating the importance of the task-specific metric module. By leveraging the metric module to generalize the cosine similarity, our model can adaptively measure the similarity between visual features and textual embeddings for different few-shot tasks. Visualization. To qualitatively evaluate our method, we apply t-SNE <cit.> to visualize the results, which represent the visual features of five categories. We randomly sample 300 examples for each class in the 5-way 5-shot setting on the miniImageNet and tieredImageNet datasets. As shown in Figure <ref>, the t-SNE visualization results indicate that our method can learn more compact and separate clusters, which means that the learned representations are more discriminative. § CONCLUSION In this paper, we propose a novel few-shot learning framework with a text-based pre-trained language model to boost few-shot learning. Furthermore, we introduce a task-specific metric module to enable the alignment between visual features and textual embeddings. Extensive experiments on miniImageNet, tieredImageNet and CIFAR-FS demonstrate the effectiveness of our method. Supplementary Materials § ADDITIONAL EXPERIMENTS Influence of Inner-Loop Temperature. To study the influence of the inner-loop temperature hyper-parameter, we conduct experiments on four widely-used few-shot datasets with different inner-loop temperature values in our method. The remaining settings are consistent with Section <ref>. Table <ref> shows the results in the 5-way 5-shot setting. We find that 0.2 is an appropriate inner-loop temperature value for this setting on all these four datasets.
Effect of the Number of Inner-Loop Update Steps. To find a suitable number of inner-loop update steps, we keep the experimental setup in Section <ref> and update the model 10, 15, 20, 25 and 30 steps in the inner loop respectively. Table <ref> shows the results in 5-way 5-shot setting on miniImageNet and tieredImageNet. Following the results, we set the number of inner-loop update steps to 25 in our experiments. Visualization of Grad-CAM. In Figure <ref>, we visualize the gradient-weighted class activation mapping from the pre-trained model and our method under a ResNet-12 feature extractor. It is observed that our method makes the model pay more attention to the discriminative part of the target object than the pre-trained model. For example, we find that for dog samples, the pre-trained model pays more attention to the body and background parts while our model focuses on the head part.
http://arxiv.org/abs/2307.05599v1
20230710194737
AlephZero and Mathematical Experience
[ "Simon DeDeo" ]
math.HO
[ "math.HO" ]
AlephZero and Mathematical Experience. Simon DeDeo, Department of Social & Decision Sciences, Carnegie Mellon University, Pittsburgh PA 15123 USA & the Santa Fe Institute, Santa Fe NM 87501 USA. [email protected]. Contribution to a special issue of the Bulletin of the American Mathematical Society, “Will machines change mathematics?”. I thank Cris Moore, David Kinney, and John Bova for helpful discussions. This work was supported in part by the Survival and Flourishing Fund. August 12, 2023. This essay explores the impact of automated proof construction on three key areas of mathematical cognition: on how we judge the role one piece of mathematics plays in another, on how we make mistakes in reasoning about mathematical objects, and on how we understand what our theorems are truly about. It concludes by speculating on a new form of mathematical experience that these methods could make possible: “glitching”, a game-like search for uncanny consequences of our definitions. § INTRODUCTION The advent of proof assistants such as Lean and Coq, combined with progress in Large Language Models and self-play systems such as AlphaZero, raises the question of what happens, to the practice of mathematics, when they are combined. In a recent paper that formed the basis of a 2022 Fields Institute Symposium <cit.>, Akshay Venkatesh even asks us to imagine an “AlephZero”, trained not on the rules of Go, but on those of mathematical deduction, and that gains, in turn, human or post-human capacities, and is integrated into the mathematical community. This essay draws on basic ideas in cognitive science to predict three consequences of the contemporary turn to automated methods. First, it predicts a shift in mathematical judgement, as automation eliminates or blurs out experiences of impasse, crucial to anchoring judgements of value. Second, a shift in how we grasp mathematical objects, as automated systems prevent us from believing, even temporarily, false things about them. Third, a shift in how we relate to mathematical ideas, as our ability to create truths outpaces our ability to know what they might be about. To bring these changes into relief, I will talk in terms of loss: to the extent that we rely on automation in certain ways, we will no longer have certain kinds of mathematical experiences associated with impasse, error, and aboutness. Some of these experiences will be fewer in number, or lower in resolution—vaguer, briefer, less precise. Parts of my discussion might, as a consequence, remind readers of a dystopian future by the writer Ted Chiang <cit.>, where humans are outcompeted by oracular “metahumans”, and cease seeking knowledge altogether. It can be useful to imagine such futures, because they can serve as intuition pumps <cit.>. However, the nature of human curiosity suggests that now is not the time to expect them. Impasse, error, and aboutness will not disappear, and mathematicians at the cutting edge of automated methods (e.g., Ref. <cit.>) provide vivid accounts of all three. Those same accounts, however, also emphasise the ways in which their experiences are fundamentally different from what has come before. In the final section, “Glitching, Clipping, and Logical Exploits”, I speculate on the future evolution of this process.
§ VALUE AND IMPASSE Venkatesh's account of mathematical value emphasizes the importance of a conjecture being central: a conjecture acquires value when it is “linked with many other questions of (prior) importance” <cit.>; Jeremy Avigad <cit.> uses similar language. Mathematicians seem to follow a heuristic familiar to both the sciences and day-to-day reasoning: just as we often value, even over-value, explanations that link together an apparent diversity of prior observations <cit.>, mathematicians value conjectures that link together prior questions. On the surface, such a criterion is both intuitive and clear. Consider the Langlands Program, a celebrated example of value-through-linkage in modern mathematics. Even I, a non-mathematician, have heard of Langlands—for example in Michael Harris' admirable Mathematics without Apologies <cit.>. What I hear makes it sound like something that ought to be valued very highly indeed. If pressed, however, on what the Langlands Program actually is, I would say that it is an attempt to prove a set of theorems that link together high-value but hard-to-prove facts in number theory (on the one hand) and geometry (on the other). There are definitely curves involved, and integers, but what characterizes the theorems that make it into the Program and why the particular correspondences they govern, rather than others, are the object of such deep fascination, remains mysterious to me. My knowledge of the value of the Langlands Program is partial. This is not (just) because I don't know the theorems implicated in its correspondences, but also because my acquaintance with how this or that correspondence plays out when trying to prove things is at (at best) second-hand. My beliefs about the Program's value-relevant properties (“centrality”) come to me through the testimony of people who have tried to prove things when a Langlands correspondence is relevant, and not through the mathematical experience of trying to do so myself. Even if my beliefs about that value are correct, in other words, there is something suspect about my holding them without, at least silently, adding, “according to those who know”. Intuitively, the situation is analogous to the phenomenon of testimony in aesthetic matters <cit.>: for me to wax enthusiastic about the “deep importance” of Langlands, and the “true centrality” of its aims, is akin to someone praising the prose style of a book he has never read or the transporting delights of a cathedral he has never visited. If we transpose Ref. <cit.>'s account of aesthetic testimony to the judgement of mathematical value, we might say that Harris' book can transmit the correct beliefs about the value of the Langlands Program, but not the necessary understanding of those values. This can only come from the experience of working with Langlands itself. We need not restrict ourselves to something as exalted as Langlands. Similar concerns apply to any proof or conjecture, and the judgements of the depth to which it connects to the rest of mathematics. The validity of a judgement that Theorem A's involvement in Theorem B is important, depends, in the final analysis, on someone trying to prove Theorem B, and experiencing, directly, both the difficulty of the impasses that A resolves, and the ways in which A resolves them.[In the usual process, these experiences are shared socially, where they become testimony—not just to outsiders, but also to other mathematicians without the time or technical training to experience the process directly. 
There is nothing intrinsically wrong with this: just as in the case of aesthetic judgement, it is not always wrong to rely on testimony, for example, in the awarding of grants and prizes. Ref. <cit.> argues for an asymmetry in the aesthetic case: testimony can establish that something ought to be found beautiful upon acquaintance, but not that it is, indeed, so.] It seems difficult to eliminate the importance, for mathematical judgement, of working something through. In the sciences, by contrast, I can discover the key relationships between propositions simply by varying their likelihoods of being true and seeing how one affects another <cit.>: no matter how complex the underlying theory for why, say, X affects Y, I can determine the importance of a binary variable Z by turning it on and off (or, in probabilistic theories, by making it more or less likely to be on). No similar analysis works in the mathematical case, because I don't know how mathematics (or even logic) works in a world where (say) there were only a finite number of primes. One can imagine varying a mathematician's confidence in the validity of a step in a proof—this is what my colleague Scott Viteri and I do in Ref. <cit.>—but what this leads to is an (approximate) cognitive account of mathematical experience, not a paraconsistent theory of mathematics itself. If we see mathematics as a computer does—as a formal, “timeless” deductive system—we can say at best that, in mathematics, everything depends on everything: every (proven) theorem depends on every other simply because, if it were false, mathematics itself would be inconsistent. Automated methods seem to pose a challenge to how these judgements get made because testimony about value no longer necessarily “bottoms out” in human experience. The very goal of these tools is to take some part of the reasoning process out of human experience, and into a more reliable, mechanical realm.[The transposition is—we hope—truth preserving, i.e., we are just as justified, if not more justified, than before in believing in the truth of the result. The argument here concerns testimony about value, not (as in Ref. <cit.>) judgements about truth.] These methods might tell us that an efficient proof of Theorem B has, in its syntax tree, a crucial lemma of Theorem A. By their very nature, however, these trees are extraordinarily complicated, and simply knowing which theorems are cited is not decisive: Theorem B will depend upon many other theorems, including a host of more or less trivial things, and it may also depend on deep things but only in what a human would consider a trivial fashion. (This is the best-case scenario: a truly efficient proof of Theorem B might, in fact, make the role of Theorem A's lemma harder to notice, distributing it in fragments across the entire tree.) In response to this challenge, we might conduct a post-hoc analysis of the syntax tree of the proof of Theorem B and show how some graph-theoretic property shows that subtrees, associated with Theorem A, are especially “central”; my colleagues in network science, for example, might use “betweenness centrality” <cit.>. What we gain, however, is not the relevant experience of value, but only knowledge about a hypothesised proxy for value, an operationalization. Such a proxy may correlate with value judgements we already believe from experience (“the role of Theorem A is graph-theoretically similar, at p<0.01, to the celebrated role of...”). 
Proxies, however, are not the thing itself: at best, they are another form of testimony, one that lacks even the backing of experience. Deployed widely enough, the reliance on such proxies—even if they correlated perfectly with ideal judgement—would lead to a strange scenario: a kind of zombie mathematics, where mathematicians celebrate a theorem not for how it untangles and reorders their reasoning, or the reasoning of their colleagues, but because it has a high centrality score. Such a dystopian fantasy is unlikely, of course, to actually occur. We can expect automated methods to hide, behind mechanical search, certain experiences of impasse and resolution. But mathematicians, like all humans, have an insatiable drive for experiences and we should not expect the new experiences to be any less characterized by this dialectic. We should, however, expect them to transform; we return to how they might in Section <ref>. § MODELS AND ERRORS As alluded to in Section <ref>, mathematical errors are particularly hard to understand in the formalist picture. If mathematics is solely a matter of logical deduction from axioms, then errors are nothing more than “illogical thoughts”—on a par with claims about “square triangles” or “the third even prime”, forms of nonsense that are not truly thoughts about anything at all <cit.>. Such errors may be of interest to psychologists and cognitive scientists, perhaps, as physical causes of belief, but have no relevance for the subject of mathematics itself beyond the bare fact of their ungrammaticality. Putting “AlephZero” systems to the side for a moment, automated proof assistants, such as Coq and Lean, are only too happy to accommodate this point of view. Assuming that the underlying code implements the type system correctly, a proof assistant will never allow a mathematician to introduce a falsehood into the text. Because formalism excludes error as a meaningful component of mathematics, it is particularly interesting to see how mathematicians themselves make sense of what happens when they make errors. Some accounts are purely psychological, in a trivial sense; the eight errors listed by Ref. <cit.>, for example, point to standard human failures such as hubris and the Dunning-Kruger effect. In these cases, error is no more relevant to mathematical experience than sleep is: we have limits on our ability to be reasonable, and these limits can stop us doing mathematics. Other accounts are more cognitive; they see error as something that emerges organically from the process of mathematics itself. Ref. <cit.>, for example, describes the difference between “local” and “global” errors in a proof, and how the former have mathematical properties that make them easier to fix; statistical study of proofs <cit.> suggests that their logical structure is indeed modular in ways that help make sense of Tao's local-global distinction. In discussions with mathematicians, one learns that they have a variety of heuristics they use to identify errors in their reasoning, including particular signatures and traces that error leaves on downstream steps. One sign of an error, as I learned during the 2022 Symposium, is that subsequent results become “too easy” to obtain, in a kind of limited principle of explosion that temporarily levels the hierarchy of value. 
The extraordinary difficulty of mathematics, one imagines, makes these kinds of accounts all but inevitable: if mathematics only happened when one was reasoning correctly, then the majority of mathematicians would be spending the majority of their time speaking nonsense, no mathematics at all, and, as Lear says to Cordelia, “nothing can come of nothing”. If, by contrast, we believe we can experience errors of logic as having mathematical meaning—if we are willing to say, for example, that one can assert falsehoods while making progress—we must also grant that some essential aspect of the activity goes beyond the formalist picture. In as much as automated methods help us be wrong less often, they foreclose an aspect of the mathematical experience. For many mathematicians, of course, the prospect of spending less time being wrong is delightful: one wants the maximum amount of truth per unit time. Foreclose away! Is anything truly lost along with the experience of error? From the cognitive point of view: very possibly, yes. This is because mathematicians, just as much as any other human, are expected to rely in part on the construction of mental models <cit.>; in particular, reduced and partial mental models of the mathematical objects in play. In day-to-day life mental models are tuned by interaction with the world, through a process of learning and feedback; to develop our mental models, we use them to generate predictions about the world that we then compare to reality <cit.>. If our model fails to predict correctly, we update it. This update is far from instantaneous: sustained model failures help direct our attention—we attend more closely to aspects of our experience that our models failed to predict <cit.>, and prediction failures seem to be not only a core feature of low-level cognition <cit.> but useful guides to high-level decision-making processes such as scientific exploration <cit.>. The mathematical parallel is, most naturally, the use of mental models to be wrong about mathematics; without the possibility of being wrong, the mental model can not change and the process of attention is diffused. This leads to a second dystopian fantasy, one where automated methods lead to a world in which our mental models become increasingly vague and low-resolution. Such a loss would be more than simply epistemic. It is not just that mathematicians will have more impoverished representations of what they are doing, but also that they would be deprived of particular experiences. Mathematicians may have a horror of error, but wandering into error is an unavoidable consequence of taking deliberate risks with our mental models—an experience inseparable from the act of exploring the world with curiosity. Just as in the previous section, I suggest that this is unlikely to happen: mathematicians are simply too curious about their objects not to want to explore and reason about them in ways that allow them to be wrong. The most obvious way to continue doing that is in trying—and, naturally, sometimes failing—to predict what an automated theorem prover will do given an input. This is only a partial compensation, however, because “cyborg” errors last only as long as it takes to phrase and type the first step of the erroneous intuition. Automated systems such as Lean can defer a subgoal in a proof with a keyword such as sorry, but they are not (yet) able to humor their human counterparts by suggesting that a subgoal has been achieved when the assumption is false. 
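For readers who have not used such a system, the keyword in question looks like this in Lean 4; the statement is a toy, not drawn from any particular development.

```lean
-- A toy Lean 4 statement. `sorry` lets the file compile with this subgoal
-- deferred (and flagged with a warning), but the checker will not pretend
-- that a false claim has been established.
theorem work_in_progress (n : Nat) : n * 1 = n := by
  sorry
```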
§ DEFINITIONS AND ABOUTNESS In his talk at the 2022 Fields Institute Symposium about the Liquid Tensor Experiment <cit.> and its challenge to formalize, in Lean, a key theorem of Peter Scholze's work in Condensed Mathematics, Johan Commelin draws attention to an illuminating risk of formalization: the risk that, through malice or accident, one's definitions may make a proof trivial. A casual observer of the automated proof can look at the Lean statement of the main theorem, and see how it parallels, more or less, Peter Scholze's original human text. There are references to profinite sets, p-Banach spaces, and so forth, in the expected places. In the end, however, when we glance back and forth between Scholze's LaTeX statement and Lean's monospaced font, ...as Magritte told us “Ceci n'est pas une pipe”. We could have done something very evil[Commelin's reference to “evil” may seem extreme, but it is not entirely out of bounds; consider, for example, the recent (Spring 2023) scandal of the “Space Zoom” feature on Samsung phones: unknown to the public, the internal code was able to recognize when a user was taking a picture of the moon, and filled in details of the image that would have been impossible for the detector itself to have captured given atmospheric conditions.] or we could have done something very stupid—what if we had just defined X groups to be zero to begin with, because that's what the main statement is about. We need to prove that for all i some X group is zero, well, if we just define it to be zero then we're done ... even if we're not evil we could have done something stupid that would have completely trivialized the proof. [Johan Commelin, “Abstract Formalities”, 2022 Fields Institute Symposium, minute 59] The deeper concern, as Commelin articulates it, is that automated methods often place us in a position where we have thousands of definitions, and even in the presence of abstraction boundaries, documentation, and test cases, one is, in the final analysis, thrown back on abductive reasoning to figure out what, exactly, the proof is about. We might say, in turn, that we have two forms of aboutness in play. On the one hand, we have the way in which Scholze's proof is an attempt to reason about ideas[I follow Scholze's use of the word “idea”; see <https://bit.ly/scholze_harris>.] that Scholze had in mind, ideas created through a (collaborative, iterative) process of forming intuitions, making definitions that answered to them, and attempting to prove things. On the other hand, we have the way in which a set of code fragments, labelled “definitions” and input into Lean's axiomatic system, are mutually constrained by the laws of type theory to output some sets of code fragments downstream, and not others. Commelin is in the business of mediating between these forms: poetically speaking, asking if the “shape” of the code fragments that the Lean proof enables (the representation of the pipe) matches the “shape” of Scholze's ideas (the pipe itself). The need to square competing forms of aboutness is reminiscent of the dialogues in Lakatos' Proofs and Refutations <cit.>, which provide an extended example of how students, challenged to prove something, go back and forth exchanging claims and counterexamples, gradually realizing the limits of their intuitions—that what they think they are proving things about is not quite what they think it is—and correcting the terms of the proof as they go.
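Commelin's worry can be made concrete in a toy Lean 4 fragment; the names here are invented and have nothing to do with the actual Liquid Tensor Experiment formalization. A degenerate definition makes the "main theorem" about it true by reflexivity, and the type-checker raises no objection; only the kind of back-and-forth just described can catch the mismatch between such a definition and the idea it was meant to capture.

```lean
-- Invented names, nothing to do with the actual formalization: a trivialized
-- definition makes the "main theorem" about it true by reflexivity, and
-- nothing in the kernel objects.
def ExtGroup (i : Nat) : Nat := 0        -- the "evil or stupid" definition

theorem main_statement (i : Nat) : ExtGroup i = 0 := rfl
```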
Catarina Dutilh Novaes <cit.> formalizes this idea as a game between “prover” and “skeptic”; the end point—or the asymptote—of this dialectic is the justified belief that the proof (in the end) truly is about the ideas in question (in the end). The machine version of this, however, seems distinct: it is unclear what it would mean to achieve intersubjective agreement with a machine. It is, certainly, possible for two human mathematicians to come to agreement about the “aboutness” of a Lean proof, but this is a distinctly different task—more analogous to two scientists attempting, through experiments, to determine the most efficient account of the causal structure of a black box, and perhaps the “stochastic mathematical systems” of Ref. <cit.>. Commelin's reference to the need for abductive reasoning about a Lean proof may point in this direction, but it is important to distinguish this from the students in Lakatos's dialogues. The Lakatos students are engaged in dialectical logic, not empirical modeling: they argue with, rather than about, each other. It is this “arguing with” experience, and the errors and vagueness it involves, that automated methods seem to exclude. § ℵ(0) IN THE LOOP: GLITCHING, CLIPPING, AND LOGICAL EXPLOITS Much of what we know about the impact of automated methods comes from proof verification systems. These somewhat constraining tools, however, are already being combined with systems like OpenAI's “Codex” code-completion technology, which can propose next steps, respond dynamically to criticism and coaching, and which maintains hidden states analogous to mental models of the task at hand. Mathematicians who become familiar with the syntax of these tools may soon be engaged in quite extensive forms of co-construction, where the machine not only “hammers”—fills in small gaps—but proposes new definitions, attempts to prove lemmas of its own devising, and even enlists the human partner in new goals that may be only partially transparent. One analogy to this experience, sometimes made by those on the cutting edge of this process, is to the iterative feedback found in a video game. If the dialectic of traditional mathematics is somewhat like the human-on-human play of Dungeons and Dragons—with the prover serving as “Dungeon Master”, and the skeptics attempting to play within, or subvert, the boundaries of the prover's vision—the new era heralded by projects such as the Liquid Tensor Experiment might be thought of as a massively multiplayer online role-playing game (MMORPG), with an engine behind the screen that responds, in sometimes counterintuitive ways, to the community's inputs. Once moves in a proof system are allowed to produce side-effects (i.e., once we move beyond the functional programming paradigm of a system such as Lean to more general Codex-like systems), the analogy to video games is tighter than it might at first appear. As long as they do not crash, violate memory boundaries, or fall into an infinite loop, proof assistants and video games are just ordinary computer programs, complex systems of inter-related states and state-to-state transition rules. Even within the functional programming paradigm, game-like experiences seem to be common for those who engage at sufficient depth. Video games are inherently pleasurable creations, and the 21st Century may well be defined by this novel form <cit.>.
Unlike other forms of art, they give agency a central role <cit.>, which parallels key aspects of mathematics: the experience of mathematics is an agentic one, one of dialectic, choice-making, puzzle-solving, backtracking, and error—far removed from the passive contemplation of timeless structures urged by early versions of formalism. Even if automated systems take on a video game quality, however, we should not expect mathematicians to be content with “playing” an automated proof system the way an ordinary person plays, say, Super Mario Bros. In “Mathematics and the Formal Turn”, for example, Jeremy Avigad notes how some Lean users talk in terms of “golfing a proof”, a practice analogous to “code bumming” described in early histories of the computer revolution <cit.>—and reminiscent of the practice today of the “speed run”, where someone attempts (say) to finish all the levels of Super Mario Bros in an absolutely minimal amount of time. One phenomenon not yet explored (to my knowledge) is the attempt to discover, and make use of, glitches. Glitches are a particularly uncanny source of interest; they emerge out of how ordinary video games attempt to simulate a physical reality for the user to explore. Misjudgements in the designers' vision sometimes mean that particular objects or locations, despite following the prescriptions of the “physics engine” perfectly, have exceptional properties that violate our mental models of the intended reality and produce novel game logics. In one game, for example, the chain on a swing set in a playground—a minor background object intended mostly as scenery—serves as a fount of energy that can launch a player into the sky; in another, a player can walk through walls (“clipping”) if he holds a plate (or other dining utensil) in front of him as he moves. Glitches arise at the intersection of the technological and the human <cit.>. They are semantic errors (i.e., they reveal that the physics engine in question is not actually mirroring the world the player expects), but not syntactic ones (i.e., the code has, indeed, compiled correctly, and there is no crash, buffer overflow, or subversion). Following the discussion of Section <ref>, it seems likely that such glitches should appear in automated systems even when—indeed, precisely because—the type-checker is working as expected. They occupy a space between the intuitively true and the logically false—not true antinomies or logical errors, but uncanny experiences of what ought not to be. Video game glitches are difficult to find by examination of the code. They tend to be discovered by communities engaged in obsessive forms of play well beyond ordinary use <cit.>, and similar levels of obsessiveness may be needed for the mathematical case. Glitches might also be found by automated means: for example, fuzzing <cit.>, a technique for finding security vulnerabilities in code by trying trillions of randomly-chosen, but syntactically valid, inputs. Fuzzing could be applied to a system like Lean and checked against basic human intuitions to seek out the “security vulnerabilities” in our axioms. The day when we discover unexpected, uncanny glitches in the definitions we have to hand, or the ones we will co-create with our machines, may be near. Bertrand Russell once wrote, partly in jest, that “mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true” <cit.>.
With the advent of AlephZero it seems likely that, while we will continue to not-know these things, we will come to not-know them in new and unexpected ways.
http://arxiv.org/abs/2307.07214v1
20230714081536
Complementary Frequency-Varying Awareness Network for Open-Set Fine-Grained Image Recognition
[ "Jiayin Sun", "Hong Wang", "Qiulei Dong" ]
cs.CV
[ "cs.CV" ]
Complementary Frequency-Varying Awareness Network for Open-Set Fine-Grained Image Recognition Jiayin Sun, Hong Wang and Qiulei Dong The corresponding author is Qiulei Dong. Jiayin Sun and Qiulei Dong are with the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China, and the Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing 100190, China (e-mail: [email protected]; [email protected]). Hong Wang is with the College of Life Science, University of Chinese Academy of Sciences, Beijing 100049, China (email: [email protected]) ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Open-set image recognition is a challenging topic in computer vision. Most of the existing works in literature focus on learning more discriminative features from the input images, however, they are usually insensitive to the high- or low-frequency components in features, resulting in a decreasing performance on fine-grained image recognition. To address this problem, we propose a Complementary Frequency-varying Awareness Network that could better capture both high-frequency and low-frequency information, called CFAN. The proposed CFAN consists of three sequential modules: (i) a feature extraction module is introduced for learning preliminary features from the input images; (ii) a frequency-varying filtering module is designed to separate out both high- and low-frequency components from the preliminary features in the frequency domain via a frequency-adjustable filter; (iii) a complementary temporal aggregation module is designed for aggregating the high- and low-frequency components via two Long Short-Term Memory networks into discriminative features. Based on CFAN, we further propose an open-set fine-grained image recognition method, called CFAN-OSFGR, which learns image features via CFAN and classifies them via a linear classifier. Experimental results on 3 fine-grained datasets and 2 coarse-grained datasets demonstrate that CFAN-OSFGR performs significantly better than 9 state-of-the-art methods in most cases. § INTRODUCTION Open-set image recognition (OSR) has received more and more attention recently, which aims to both classify known-class images and identify unknown-class images. Existing OSR methods <cit.> could be roughly divided into two categories: CNN (Convolutional Neural Network)-based methods and transformer-based methods. Both the two categories of OSR methods have demonstrated their effectiveness to some extent on coarse-grained datasets. However, as shown in <cit.>, their performance would decrease in open-set fine-grained image recognition task (OSFGR), which is a sub-task of OSR where the differences among object classes become subtle. 
Thus in this paper, we focus on OSFGR. As indicated in <cit.>, many existing CNN/transformer-based closed-set classification methods could not effectively capture either high-frequency or low-frequency information from coarse-grained images: Specifically, CNN-based OSR methods are generally insensitive to low-frequency components, while transformer-based OSR methods are generally insensitive to high-frequency components. Furthermore, as done for classifying closed-set coarse-grained images in <cit.>, we evaluate a typical CNN (ResNet50) and a typical transformer (SwinB) on high-frequency images (called HFI) and low-frequency images (called LFI) generated from a fine-grained dataset Aircraft <cit.> under the same open-set setting as <cit.>, and the corresponding AUROC and OSCR (the two metrics are defined in Sec. <ref>) under three difficulty modes (i.e., Easy/Medium/Hard) are reported in Table <ref>. As seen from this table, ResNet50 performs better on HFI than on LFI while SwinB performs better on LFI than HFI, demonstrating that they have to be confronted with the same problem in OSFGR as that in closed-set classification. Naturally, the following question is raised: “How to capture both high-frequency and low-frequency information from fine-grained images more effectively for open-set recognition?" To address this question, we propose a Complementary Frequency-varying Awareness Network (called CFAN) for better capturing both high- and low-frequency information. CFAN consists of three sequential modules: a feature extraction module, a frequency-varying filtering module, and a complementary temporal aggregation module. The feature extraction module, which could be an arbitrary feature extractor in literature (e.g., ResNet <cit.>, and SwinB <cit.>), is firstly used to extract the preliminary features from the input fine-grained images. Then, the frequency-varying filtering module is designed to decompose the extracted preliminary features into both high- and low-frequency components in the frequency domain via an explored frequency-adjustable filter. Finally, the complementary temporal aggregation module is explored to aggregate the high- and low-frequency feature components learnt from the frequency-varying filtering module via two LSTMs (Long Short-Term Memory Networks) into discriminative features, inspired by the ability of LSTMs for modeling time-series data in many other visual tasks <cit.>. Furthermore, we explore a CFAN-based method to handle the OSFGR task, called CFAN-OSFGR, where the proposed CFAN is used to extract fine-grained image features and then a linear classifier is employed to classify these features. The main contributions of this paper are summarized as: (1) We explore the frequency-varying filtering module which could separate out high- and low-frequency components from an image feature via a designed frequency-adjustable filter. The designed filter could flexibly switch from a high-pass (also low-pass) filter to a full-pass filter in the frequency domain. Moreover, we explore the complementary temporal aggregation module, which could effectively aggregate multi-band high- and low-frequency feature components. (2) We propose the CFAN, which integrates the two explored modules for learning features that could better capture the high- and low-frequency information from the fine-grained images. The proposed CFAN could be used as a stronger feature extractor in both the coarse-grained and fine-grained image recognition tasks. 
(3) We propose the CFAN-OSFGR method for handling the OSFGR task by integrating the CFAN with a linear classifier, whose priority to 9 state-of-the-art methods have been demonstrated in Sec. <ref> § RELATED WORKS Here, we briefly review the CNN-based and transformer-based OSR/OSFGR methods in literature, and some typical works that enhance features in the frequency domain. §.§ OSR/OSFGR Methods CNN-based Methods. Most existing OSR/OSFGR methods are CNN-based methods, most of which are evaluated on coarse-grained datasets. Zhang et al. <cit.> adopted a resflow-net <cit.> to model the known-class likelihood scores and used the latent features for classification. Kong and Ramanan <cit.> adversarially trained a VGG <cit.> feature extractor against a discriminator for distinguishing known-class samples from outliers. Chen et al. <cit.> mined unknown-class features in the extra-class space of each known class via a ResNet <cit.>, then trained the model with known-class samples and these features adversarially for encouraging the known-class feature space to be more compact. Yang et al. <cit.> modeled the feature distribution of each known class as a Gaussian mixture for learning more discriminative features via a ResNet by prototype learning. Cao et al. <cit.> used a VGG-based GMVAE <cit.> for modeling such distributions. Besides, a few OSFGR methods <cit.> have been proposed, aiming to learn more discriminative features. Dai et al. <cit.> analyzed the characteristics of different classification scores and chose the class activation mapping values outputted from a VGG for preserving fine-grained information. Vaze et al. <cit.> proposed to take full advantage of multiple training strategies for improving the discriminability of known-class features extracted from a ResNet backbone. Transformer-based Methods. Inspired by the success of vision transformers in closed-set image recognition tasks <cit.>, a few transformer-based methods <cit.> have been proposed. Sun et al. <cit.> introduced multiple mixtures of exponential power distributions into a transformer-based autoencoder for modeling the distributions of known-class features. Azizmalayeri and Rohban <cit.> integrated various data augmentation strategies for learning more generalized feature representations via a transformer. §.§ Frequency Based Feature Enhancement Recently, some frequency-based feature enhancement works have been proposed in other visual task <cit.>. Rao et al. <cit.> proposed a global filter network for learning long-term spatial dependencies in an image, they used learnable filters at different layers and encouraged each filter to pass frequency component at an appropriate band. Liu et al. <cit.> proposed a global spectral filter memory network, aiming to learn long-term spatial dependencies between different video frames, and they used the traditional Gaussian filter for frequency filtering. Qin et al. <cit.> proposed a multi-spectral channel attention, which used the conventional global average pooling operation instead of frequency filters for feature decomposition in the frequency domain. Different from these methods, CFAN-OSFGR uses an adjustable frequency filter which can obtain a set of high- and low-frequency components at various frequency bands by adjusting the adjustable vectors, aiming to make use of more abundant frequency information. 
§ COMPLEMENTARY FREQUENCY-VARYING AWARENESS NETWORK FOR OSFGR §.§ Complementary Frequency-Varying Awareness Network Here, we propose the Complementary Frequency-varying Awareness Network (CFAN), consisting of a feature extraction module, a frequency-varying filtering module, and a complementary temporal aggregation module. Firstly, we introduce the whole architecture of CFAN and the feature extraction module. Then, we describe the other two modules in the proposed CFAN in detail. 3.1.1 Architecture and Feature Extraction Module As seen from Fig. <ref>, CFAN takes object images as its inputs, and aims to output discriminative features. It contains three sequential modules, a feature extraction module, a frequency-varying filtering (FVF) module, and a complementary temporal aggregation (CTA) module. The feature extraction module is firstly used to learn preliminary features from the input images. Once the preliminary features have been learnt from the feature extraction module, the FVF module is used for converting the preliminary features into time-series features that cover various high- and low-frequency bands. Finally, the CTA module is used for aggregating the high- and low-frequency components into discriminative features. It has to be pointed out that many feature extractors in literature (e.g., VGG <cit.>, ResNet <cit.>, SwinB <cit.>, etc.) could be straightforwardly used as the feature extraction module. Here, we simply use ResNet50 <cit.> and SwinB <cit.> (as shown in the left-most yellow box in Fig. <ref>) as the feature extraction module, respectively. The FVF module and the CTA module would be described in detail in the following subsections. 3.1.2 Frequency-Varying Filtering Module The FVF module is designed for separating out both high- and low-frequency components from the preliminary features in the frequency domain via an explored frequency-adjustable filter. As shown in the red box in Fig. <ref>, this module takes the preliminary feature extracted from each input image by the feature extraction module as its input, and outputs two time series of feature components at various high and low frequencies respectively. Frequency-Adjustable Filter. In order to extract various bands of high- and low-frequency information flexibly, we design the frequency-adjustable filter, which is a weighted combination of a sequence of N_t (here we set N_t =20) band-pass template filters {𝐓_i }_i=1^N_t as shown in Fig. <ref>. This filter has a high-pass form F_h and a low-pass form F_l as: F_h = ∑_i=1^N_tf^h_i 𝐓_i , F_l = ∑_i=1^N_tf^l_i 𝐓_i where {f^h_i }_i=1^N_t (also {f^l_i }_i=1^N_t) is a set of weighting coefficients. Here, we use the exponential power (EP) function values to assign these coefficients, considering that the EP function is a member of the function family whose function shape can be flexibly adjusted as indicated in <cit.>. According to the parametric form suggested in <cit.>, the complete form of the EP function is formulated as: EP(m) = 12 σp^1/pΓ ( 1+1/p ) e^-| m - μ |^p/pσ^p where m is the independent variable, Γ(·) is the Gamma function, and {μ, σ, p} (σ>0, p>0) is a group of parameters that control the position, scale, and shape of the EP function respectively. 
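As a concrete reference for this weighting scheme, here is a small NumPy sketch of the EP function exactly as written above (purely illustrative; this is not the authors' implementation, and the function name is ours). In the paper, values of this curve supply the weighting coefficients for the N_t template filters.

```python
import numpy as np
from math import gamma

def exponential_power(m, mu=0.0, sigma=1.0, p=2.0):
    """EP(m) = exp(-|m - mu|^p / (p * sigma^p)) / (2 * sigma * p**(1/p) * Gamma(1 + 1/p)).

    The shape parameter p is the knob that matters here: a small p gives a steep,
    concentrated curve, while a large p flattens it out.
    """
    m = np.asarray(m, dtype=float)
    coeff = 1.0 / (2.0 * sigma * p ** (1.0 / p) * gamma(1.0 + 1.0 / p))
    return coeff * np.exp(-np.abs(m - mu) ** p / (p * sigma ** p))

# Example: compare a steep curve (small p) with a nearly flat one (large p).
grid = np.linspace(0.0, 20.0, 41)
steep = exponential_power(grid, mu=0.0, sigma=20.0, p=0.2)
flat = exponential_power(grid, mu=0.0, sigma=20.0, p=20.0)
```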
It is noted that the shape parameter p plays a leading role in controlling the shape of the EP function, hence, we use p as an adjustable vector to switch the shape of the EP function from steep to gentle, which switches the frequency filter whose coefficients are assigned by the EP function values from a high- or low-pass filter to a full-pass filter. In order to limit the function values within (0,1), we discard the coefficient part before the exponential power part of this formula. It is noted that the EP function is axial-symmetrical to the axis m = μ, hence, the sequence of weighting coefficients {f^h_i }_i=1^N_t for the high-pass filter F_h and the sequence of weighting coefficients {f^l_i }_i=1^N_t for the low-pass filter F_l can be obtained by simply setting the axis of symmetry at m = 0 and m = N_t ·I (where I is a vector with all values being 1) respectively and taking the EP function values corresponding to the evenly-spaced independent variable values which are sampled over the interval (0, N_t ·I) (here we sample m=0.5 ·I, 1.5 ·I, ..., (N_t - 0.5) ·I) as the weighting coefficients for the template filters. Thus, the i-th elements (i ∈{ 1,2,..., N_t }) f^h_i and f^l_i in the two sequences {f^h_i }_i=1^N_t and {f^l_i }_i=1^N_t can be formulated as: f^h_i = e^-| m- 0|^p_h/p_h σ^p_h , f^l_i = e^-| m- N_t ·I|^p_l/p_l σ^p_l s.t. m= (i-0.5) ·I, σ = N_t ·I where the values in the two adjustable vectors p_h and p_l are separately adjusted varying in [p_min,p_max], where p_min and p_max are two preset constants. The high-pass filter F_h (or the low-pass filter F_l) inclines to high-pass filtering (or low-pass filtering) when the values in p_h (or p_l) get closer to p_min, and inclines to full-pass filtering when the values get closer to p_max. Frequency Filtering Process. Here, we describe the filtering process in the FVF module. Firstly, we conduct the Fast Fourier Transform (FFT) at each channel on the preliminary feature map X with N_c channels: Z = ℱ(X), and centralize the complex spectrum Z. Next, a time series of high-frequency components {Z_h^t }_t=1^N_f and a time series of low-frequency components {Z_l^t}_t=1^N_f can be obtained from Z by a time series of high-pass filters {F_h^t}_t=1^N_f and a time series of low-pass filters {F_l^t}_t=1^N_f by utilizing the designed frequency-adjustable filter, whose t-th elements (t ∈{ 1,2,...,N_f }) can be formulated as: Z_h^t = Z⊙F_h^t , Z_l^t = Z⊙F_l^t where `⊙' represents the element-wise product operator; N_f is the length of the time series. {F_h^t}_t=1^N_f and {F_l^t}_t=1^N_f are obtained by two time series of adjustable vectors {p_h^t }_t=1^N_f and {p_l^t }_t=1^N_f whose elements are evenly-spaced sampled over the intervals [ p_h^1, p_max·I ] and [ p_l^1, p_max·I ] respectively. Then, the two time series of feature components are decentralized and transformed by the Inverse Fast Fourier Transform (IFFT): {X_h^t = ℱ^-1(Z_h^t) }_t=1^N_f, {X_l^t = ℱ^-1(Z_l^t) }_t=1^N_f. Thus, we obtain two time series of feature components {X_h^t }_t=1^N_f and {X_l^t }_t=1^N_f at various high- and low-frequency bands respectively. 3.1.3 Complementary Temporal Aggregation Module The complementary temporal aggregation module is designed to aggregate the time series of high-frequency components and the time series of low-frequency components obtained from the FVF module, and output a discriminative feature, as shown in the green box in Fig. <ref>. 
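Before describing the aggregation in detail, the filtering pipeline of the FVF module can be summarized in a short PyTorch sketch. The tensor shapes and the externally supplied filter banks are our assumptions, not the authors' released code; its two outputs are exactly the time series that the CTA module consumes.

```python
import torch

def fvf_filter(x, high_filters, low_filters):
    """Frequency-varying filtering (illustrative sketch only).

    x:             preliminary feature map, shape (B, C, H, W)
    high_filters:  N_f high-pass filters F_h^t, shape (N_f, H, W)
    low_filters:   N_f low-pass filters  F_l^t, shape (N_f, H, W)
    Returns two time series of feature components, each of shape (N_f, B, C, H, W).
    """
    z = torch.fft.fft2(x)                         # channel-wise 2-D FFT
    z = torch.fft.fftshift(z, dim=(-2, -1))       # centralize the spectrum
    highs, lows = [], []
    for t in range(high_filters.shape[0]):
        zh = z * high_filters[t]                  # element-wise product with F_h^t
        zl = z * low_filters[t]                   # element-wise product with F_l^t
        zh = torch.fft.ifftshift(zh, dim=(-2, -1))
        zl = torch.fft.ifftshift(zl, dim=(-2, -1))
        highs.append(torch.fft.ifft2(zh).real)    # decentralize and apply the IFFT
        lows.append(torch.fft.ifft2(zl).real)
    return torch.stack(highs), torch.stack(lows)
```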
Considering that the LSTMs have shown their priority in time-series data modeling, we use LSTMs to model the temporal dependence of the discriminative feature on the multi-band components in this module. Specifically, we use two LSTMs for handling the high-frequency time series {X_h^t }_t=1^N_f and the low-frequency time series {X_l^t }_t=1^N_f respectively, each of which provides complementary frequency information for the other one. At the t-th moment (t ∈{1,2,...,N_f }), the Hidden and Cell states are updated by: H_h^t = LSTM_h(H_h^t-1, C_h^t-1, X_h^t) , H_l^t = LSTM_l(H_l^t-1, C_l^t-1, X_l^t) where H_h^t and H_l^t represent the Hidden states, C_h^t and C_l^t represent the Cell states at the t-th moment; X_h^t and X_l^t represent the t-th element of the time series {X_h^t }_t=1^N_f and {X_l^t }_t=1^N_f, respectively. Finally, a discriminative feature X' can be obtained by concatenating the two updated Hidden states at the N_f-th moment along the channel dimension: X' = [H_h^N_f; H_l^N_f]. §.§ CFAN-OSFGR Here, we introduce the CFAN-OSFGR method for handling the OSFGR task. The CFAN is firstly integrated with a LayerNorm layer for normalization, an average pooling layer for dimension reduction, and a linear classifier for recognition, as shown in Fig. <ref>. Then, the training strategy and the inference strategy are described as follows. Training. The model is trained with a cross-entropy classification loss: ℒ_cls = - 1/N_b∑_k=1^N_blog p_k where N_b represents the batch size, and p_k represents the probability of the k-th image in the current batch corresponding to the ground-truth class. Inference. The score outputted from the classifier of the proposed model is used for inference, which is defined as: s = max_c{ y_c } , c ∈{ 1,2,...,C } where C is the number of the known classes, y_c indicates the c-th (c ∈{ 1,2,...,C }) element of the logit vector outputted from the classifier corresponding to the c-th class. Besides, a threshold θ, which is chosen to make 90% validation images be correctly recognized as known classes, is used for classifying known-class images and identifying unknown-class images by comparing with the score s: prediction= { argmax_c ∈{ 1,2,...,C } y_c , if s ≥θ unknown classes , if s < θ. § EXPERIMENTS §.§ Datasets and Metrics Datasets. The proposed CFAN-OSFGR method is evaluated on 3 fine-grained datasets (including Aircraft <cit.>, CUB <cit.>, and Stanford-Cars <cit.>) and 2 coarse-grained datasets which are relatively difficult in the OSR task (including CIFAR+10/+50 <cit.> and TinyImageNet <cit.>) under two dataset settings: (1) Standard-Dataset Setting. Under this setting, the known-class and unknown-class images are from the same dataset. Aircraft <cit.> contains 100-class aircraft images with attributes, 50 classes of which are selected as the known classes. The rest classes are further divided into three modes: `Easy', `Medium', and `Hard' according to their attribute similarity to the known classes, and we follow the 20/17/13 splitting manner for splitting the unknown classes as done in <cit.>. CUB <cit.> contains 200-class bird images with attributes, 100 classes of which are selected as the known classes, and we follow the 32/34/34 splitting manner for splitting the unknown classes as done in <cit.>. Stanford-Cars <cit.> contains 196-class car images, the first 98 classes of which are selected as the known classes while the rest 98 classes are used as the unknown classes. 
In CIFAR+10/+50, 10 classes in CIFAR10 <cit.> are used as the known classes, while 10 or 50 non-overlapping classes in CIFAR100 <cit.> are used as the unknown classes. TinyImageNet <cit.> contains 200-class natural images, 20 classes of which are used as the known classes, while the rest 180 classes are used as the unknown classes. (2) Cross-Dataset Setting. This setting is configured by using the 50 split known classes in Aircraft as the known classes while using all of the 200- and 196-class testing images in CUB and Aircraft as the unknown-class images, respectively. Metrics. The following evaluation metrics are used under the above dataset settings: (1) Standard-Dataset Setting. On the coarse-grained datasets, we use two metrics for evaluation as done in <cit.>: (i) AUROC which measures the open-set detection performance by regarding the OSFGR task as a binary classification task (i.e., classification between known classes and unknown classes), and (ii) ACC (i.e., the top-1 accuracy that is widely used in the closed-set classification task) which measures the closed-set classification performance. On the fine-grained datasets, in addition to AUROC and ACC, we also use OSCR (i.e., the open-set classification rate <cit.>), which is a threshold-independent metric that simultaneously measures the open-set detection performance and the closed-set classification performance, as done in <cit.>. (2) Cross-Dataset Setting. As done in <cit.>, we use macro-F1 score which is a threshold-dependent metric that measures the open-set classification performance by taking the unknown classes as the (C+1)-th class. §.§ Implementation Details The images from Aircraft and CUB are resized to 448 × 448 as done in <cit.>, while those from Stanford-Cars are resized to 224 × 224. The ResNet50 and the `base' version of Swin Transformer (i.e., SwinB <cit.>) are used as the feature extractor respectively in the feature extraction module. We use an SGD optimizer with the learning rate of 3 × 10^-4 and the weight decay of 1 × 10^-4, as well as an AdamW optimizer <cit.> with the learning rate of 5 × 10^-5 and the weight decay of 0.01 for optimizing the CNN part and the transformer part, respectively. N_t, N_f, N_b and N_c are set to 20, 4, 32, and 128, respectively; p_min is set to 0.2 for avoiding filter values being too small, and p_max is set to 20 for avoiding the numerical overflow. The originally extracted features are 4-times upsampled for obtaining the preliminary features, and the channels are simultaneously 8-times pruned for decreasing the model complexity. At the inference stage, the values in p_h^1 and p_l^1 are simply set to 1, since the model has been trained to be robust to the variance of these values, as shown in our additional experiments in the supplementary material. §.§ Evaluation 4.3.1 Evaluation on the Fine-Grained Datasets Evaluation Under the Standard-Dataset Setting. Considering that only a few works (CAMV <cit.> and Cross-Entropy+ <cit.>) are specially designed for handling the OSFGR task, we also compare the proposed CFAN-OSFGR method with 7 state-of-the-art OSR methods (OpenHybrid <cit.>, OpenGAN <cit.>, ARPL <cit.>, GCPL <cit.>, GMVAE-OSR <cit.>, MoEP-AE-OSR <cit.> and Trans-AUG <cit.>). In addition, considering the SwinB-based methods are few, we evaluate one relatively better OSFGR method (Cross-Entropy+) and two relatively better OSR methods (OpenHybrid and ARPL) by replacing their original backbones with SwinB for further comparison. 
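To make the open-set protocol above concrete (the max-logit score s, a threshold θ accepting 90% of validation known-class images, and known-vs-unknown AUROC), here is a hedged sketch; the function names are ours and scikit-learn is assumed only for convenience, so this should not be read as the authors' evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def openset_scores(logits: np.ndarray) -> np.ndarray:
    """s = max_c y_c, the maximum logit over the C known classes."""
    return logits.max(axis=1)

def pick_threshold(val_known_logits: np.ndarray, accept_rate: float = 0.9) -> float:
    """theta such that `accept_rate` of validation known-class images satisfy s >= theta."""
    s = openset_scores(val_known_logits)
    return float(np.quantile(s, 1.0 - accept_rate))

def evaluate(known_logits, unknown_logits, theta):
    s_known = openset_scores(known_logits)
    s_unknown = openset_scores(unknown_logits)
    # AUROC: treat known-vs-unknown detection as a binary classification problem.
    y_true = np.concatenate([np.ones_like(s_known), np.zeros_like(s_unknown)])
    y_score = np.concatenate([s_known, s_unknown])
    auroc = roc_auc_score(y_true, y_score)
    # Threshold rule: argmax over known classes if s >= theta, otherwise "unknown" (-1).
    pred_known = np.where(s_known >= theta, known_logits.argmax(axis=1), -1)
    return auroc, pred_known
```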
Tables <ref>, <ref>, and <ref> report the evaluation results under the standard-dataset setting on Aircraft, CUB and Stanford-Cars, respectively. Table <ref> reports the computational resources of these methods on Aircraft. Two points can be seen from these tables: (1) Transformer-based models achieve better results than CNN-based models in most cases, except in a few cases (e.g. on Aircraft) where CNN-based models perform slightly better than transformer-based models. The main reason is that the multiple self-attention operations in transformers boost the model discriminability. Besides, transformer-based models are generally slower than CNN-based models, because the calculations and parameters of fully-connected layers in transformers are generally larger than those of convolutional layers in CNNs. (2) CFAN-OSFGR outperforms all the comparative methods with the same CNN or SwinB backbone in most cases. We need to point out that CFAN-OSFGR is relatively slower than other methods with the same backbone, mainly due to the feature transformation in the frequency domain as well as the temporal feature aggregation. Furthermore, we conduct an experiment for evaluating the OSFGR performance of different models with similar calculations and parameters, whose results show that CFAN-OSFGR still outperforms other models with similar overheads, the details can be found in the supplementary material. Evaluation Under the Cross-Dataset Setting. The above results have demonstrated the effectiveness of the proposed CFAN-OSFGR method in cases where known-class images and unknown-class images are from the same dataset. Here, we also evaluate the model performance under the cross-dataset setting where the unknown-class images are from outlier datasets. Specifically, the model is trained with Aircraft, and tested with images from CUB and Stanford-Cars as the unknown-class images respectively, whose evaluation results are reported in Table <ref>. As seen from this table, CFAN-OSFGR outperforms all the other methods significantly in most cases, demonstrating its cross-dataset generalization ability. 4.3.2 Evaluation on the Coarse-Grained Datasets The above results have demonstrated the effectiveness of the proposed CFAN-OSFGR method in handling open-set fine-grained images. Here, we further conduct an experiment for evaluating the effectiveness of CFAN-OSFGR in dealing with open-set coarse-grained images. Table <ref> reports the evaluation results on the two coarse-grained datasets (CIFAR+10/+50 and TinyImageNet). As seen from this table, CFAN-OSFGR still achieves the best results in most cases or achieves the 2nd place. These results further demonstrate that in an open-set scenario, CFAN-OSFGR can not only effectively recognize fine-grained images, but also recognize coarse-grained images accurately. 4.3.3 Evaluation on HFI/LFI We also evaluate the OSFGR performance of CFAN-OSFGR on HFI/LFI from Aircraft, whose results are reported in Table <ref>. As seen from this table, CFAN-OSFGR boosts the model performance on both HFI and LFI, indicating that CFAN-OSFGR has better captured both high- and low-frequency information than both ResNet50 and SwinB. §.§ Ablation Studies Here, we conduct extensive ablation studies for evaluating the effectiveness of CFAN-OSFGR more comprehensively. The following experiments are all implemented on Aircraft <cit.> under the standard-dataset setting, and are implemented on SwinB-based CFAN-OSFGR. 
The main results are reported and analyzed in the following subsections, and some additional results can be found in the supplementary material due to the limitation of space. 4.4.1 Ablation Study on Modules Firstly, we conduct an ablation study for analyzing the effect of the last two modules (i.e., the frequency-varying filtering (FVF) module and the complementary temporal aggregation (CTA) module) in the proposed CFAN. The corresponding results are reported in Table <ref>. As seen from this table, the model performance improves with the two modules added to the model one by one, indicating that either of the two modules plays an important role in CFAN. 4.4.2 Analysis of the FVF Module Influence of Randomizing the Values in the Initial Adjustable Vectors in Training. Here, we analyze the influence of randomizing the values in p_h^1 and p_l^1 in the FVF module at the training stage, which converts the high- and low-frequency time series from static time series to dynamic ones at each iteration. Specifically, we train two additional models based on the proposed CFAN-OSFGR method where the values in p_h^1 and p_l^1 are fixed to 1 and 10 respectively, and the corresponding results are shown in Fig. <ref> (a). As shown in Fig. <ref> (a), the model trained with randomization performs better than the model trained without randomization, demonstrating that the dynamic time series are more effective than the static ones. 4.4.3 Analysis of the CTA Module Influence of Aggregating High- and Low-Frequency Components. Here, we analyze the influence of aggregating the two time series of components (i.e., the time series of high-frequency components and the time series of low-frequency components) in the CTA module. Specifically, we train two additional models based on CFAN-OSFGR where the discriminative feature is obtained only from the high-frequency components by LSTM_h or the low-frequency components by LSTM_l, whose results are shown in Fig. <ref> (b). As shown in Fig. <ref> (b), either the high- or the low-frequency component boosts the model performance to some extent but not significantly, and aggregating both time series provides significant performance improvements in most cases, especially under the AUROC and OSCR metrics. These results demonstrate the effectiveness of aggregating both high- and low-frequency information in boosting the generalization ability of the model. Influence of Temporal Aggregation. Moreover, we analyze the influence of temporal aggregation in the CTA module. Specifically, we compare the proposed temporal aggregation with other two aggregation strategies: (i) concatenating the 2 · N_f input feature components in the two time series (denoted as `All-Input-Concat'), and (ii) concatenating the 2 · N_f Hidden states at all moments (denoted as `All-Output-Concat'), whose results are shown in Fig. <ref> (c). As shown in Fig. <ref> (c), `All-Input-Concat' slightly improves the backbone model performance since it also aggregates high- and low-frequency information, but is inferior to either `All-Output-Concat' or CFAN. Besides, the results of `All-Output-Concat' are slightly lower than those of CFAN since aggregating the Hidden states at earlier moments weakens the effect of temporal modeling. All these results demonstrate the effectiveness of the temporal aggregation strategy, which also provides an enlightening insight into modeling the temporal dependence of an image feature for boosting the feature discriminability. 
In addition, we also analyze the influence of p_h^1 and p_l^1 on evaluation, the influence of the number of high/low-pass filters N_f in the FVF module, and the influence of the number of template filters N_t in the FVF module, which can be found in the supplementary material. § CONCLUSION In this paper, we propose the complementary frequency-varying awareness network, CFAN, which learns discriminative features that could effectively capture and make use of both high- and low-frequency feature information from fine-grained images via three sequential modules: the feature extraction module, the frequency-varying filtering module where the frequency-adjustable filter is explored, and the complementary temporal aggregation module. Furthermore, the CFAN-OSFGR method is introduced for handling the open-set fine-grained image recognition task based on the proposed CFAN. Extensive experimental results demonstrate the effectiveness of CFAN-OSFGR.
http://arxiv.org/abs/2307.05252v1
20230711133527
MAP- and MLE-Based Teaching
[ "Hans Ulrich Simon", "Jan Arne Telle" ]
cs.LG
[ "cs.LG", "stat.ML" ]
[ [ August 12, 2023 =================== Imagine a learner L who tries to infer a hidden concept from a collection of observations. Building on the work <cit.> of Ferri et al., we assume the learner to be parameterized by priors P(c) and by c-conditional likelihoods P(z|c) where c ranges over all concepts in a given class C and z ranges over all observations in an observation set Z. L is called a MAP-learner (resp. an MLE-learner) if it thinks of a collection S of observations as a random sample and returns the concept with the maximum a-posteriori probability (resp. the concept which maximizes the c-conditional likelihood of S). Depending on whether L assumes that S is obtained from ordered or unordered sampling resp. from sampling with or without replacement, we can distinguish four different sampling modes. Given a target concept c^* ∈ C, a teacher for a MAP-learner L aims at finding a smallest collection of observations that causes L to return c^*. This approach leads in a natural manner to various notions of a MAP- or MLE-teaching dimension of a concept class C. Our main results are as follows. First, we show that this teaching model has some desirable monotonicity properties. Second we clarify how the four sampling modes are related to each other. As for the (important!) special case, where concepts are subsets of a domain and observations are 0,1-labeled examples, we obtain some additional results. First of all, we characterize the MAP- and MLE-teaching dimension associated with an optimally parameterized MAP-learner graph-theoretically. From this central result, some other ones are easy to derive. It is shown, for instance, that the MLE-teaching dimension is either equal to the MAP-teaching dimension or exceeds the latter by 1. It is shown furthermore that these dimensions can be bounded from above by the so-called antichain number, the VC-dimension and related combinatorial parameters. Moreover they can be computed in polynomial time. § INTRODUCTION In formal models of machine learning we have a concept class C of possible concepts/hy­po­the­ses, an unknown target concept c^* ∈ C and training data given by correctly labeled random examples. In formal models of machine teaching a collection T(c^*) of labeled examples is instead carefully chosen by a teacher T in a way that the learner can reconstruct the target concept c^* from T(c^*). In recent years, the field of machine teaching has seen various applications in fields like explainable AI <cit.>, trustworthy AI <cit.> and pedagogy <cit.>. Various models of machine teaching have been proposed, e.g. the classical teaching model <cit.>, the optimal teacher model <cit.>, recursive teaching <cit.>, preference-based teaching <cit.>, or no-clash teaching <cit.>. These models differ mainly in the restrictions that they impose on the learner and the teacher in order to avoid unfair collusion or cheating. The common goal is to keep the size of the largest teaching set, max_c ∈ C|T(c)|, as small as possible. There are also other variants using probabilities, from Muggleton <cit.> where examples are sampled based on likelihoods for a target concept, to Shafto et al. <cit.> who calls this pedagogical sampling and leads into Bayesian Teaching <cit.>, to the Bayesian learners of Zhu <cit.> with a proper teacher selecting examples. In this paper we continue this line of research and consider the probabilistic model that had been described in the abstract. This model is inspired by and an extension of the model that was introduced in <cit.>. 
As already observed in <cit.>, the condition for collusion-avoidance from <cit.> may here be violated, i.e., the learner may first reconstruct a concept c_1 from some given observations but, after having received additional observations, switch to another concept c_2 even if the new observations have given additional support to c_1. As the authors of <cit.>, we would like to argue that this should not be considered as collusion or cheating as long as the parameters assigned to the learner reflect some factual information about the world. As already outlined in the abstract, we will distinguish between four distinct sampling modes: ordered sampling with replacement ((O,R)-mode), unordered sampling with replacement ((O,R)-mode), ordered sampling without replacement ((O,R)-mode) and unordered sampling without replacement ((O,R)-mode). The smallest number d such that every c^* ∈ C can be taught to a given MAP-learner L by a collection of at most d observations is denoted by _L^α,β(C) where (α,β) ∈{O,O}×{R,R} indicates the underlying sampling mode. Then ^α,β(C) = min_L _L^α,β(C) is the corresponding parameter with an optimally parameterized learner L. The analogous notation is used for MLE-learners. Our main results are as follows: * The MAP-teaching model has two desirable and quite intuitive monotonicity properties. Loosely speaking, adding new observations (making Z larger) leads to smaller while adding new concepts (making C larger) leads to larger . See Section <ref> for details. * The sampling modes (O,R) and (O,R) are equivalent. The sampling modes (O,R), (O,R) and (O,R) are pairwise incomparable (i.e., which one leads to smaller values of _L(C) depends on the choice of C and L). Note that incomparability of the modes (α,β) and (α',β') does not rule out the possibility that ^α,β(C) ≤^α',β'(C) for each concept class C. See Section <ref> for details. * As for the (important!) special case, where concepts are subsets of a domain and observations are 0,1-labeled examples, we obtain some additional results, the first of which is the central one: * For a (properly defined) bipartite graph G(C)^α,β associated with C and (α,β) ≠ (O,R), one gets[(G) denotes the saturating matching number of a bipartite graph G (formally defined in Section <ref>)] ^α,β(C) = (G(C)^α,β) . If we replace G(C)^α,β by a slightly modified graph, we obtain the corresponding result for at the place of .[Some bounds on numbers in terms of numbers are already found in <cit.>, but no results that hold with equality (as in (<ref>)) are proven there.] Fig. <ref> visualizes this result. See Sections <ref> and <ref> for details. * The MLE-teaching dimension is either equal to the MAP-teaching dimension or exceeds the latter by 1. See Section <ref> for details. * The MAP- and the MLE-teaching dimension can be bounded from above by the so-called antichain number, the VC-dimension and related combinatorial parameters. See Section <ref> for details. * Moreover the MAP- and the MLE-teaching dimension can be computed in polynomial time from a natural encoding of the underlying concept class. See Section <ref> for details. § DEFINITIONS AND NOTATIONS We first fix some general notation. Afterwards, in Sections <ref>, <ref>, and <ref>, the MAP- and MLE-based teaching model is introduced, step-by-step. Mappings. The restriction of a mapping f:A B to a subset A' A will be denoted by f_ A'. Suppose that B is a set that is equipped with a size function which associates a size |b| with each b ∈ B. 
Then the order of a mapping f: A B is defined as the size of the largest element in the image of f, i.e., the order of f equals max_a ∈ A|f(a)|. Graphs and Matchings. For a graph G = (V,E) and a set U V, we denote by Γ(U) the set of vertices which are adjacent to at least one vertex in U. If G = (V_1,V_2,E) is the bipartite graph with vertex sets V_1 and V_2 and with edge set E V_1 × V_2, then U V_1 implies (of course) that Γ(U) V_2. A matching M in a bipartite graph G = (V_1,V_2,E) can be viewed as a (partially defined and injective) function M: V_1 V_2 with the property that (v,M(v)) ∈ E for each v having an M-partner. If V_1 is saturated by M, i.e., every vertex in V_1 has an M-partner, then this function is fully defined. VC-Dimension <cit.>. Let C be a family of subsets of some ground set X. For c ∈ C and x ∈ X, we also write c(x)=1 if x ∈ c and c(x)=0 if x ∉ c. We say that S X is shattered by C if, for every b:S {0,1}, there is some c ∈ C that coincides with b on S. The VC-dimension of C is defined as ∞ if there exist arbitrarily large shattered sets, and it is defined as the size of a largest shattered set otherwise. §.§ Concept Classes Let C be a finite set of size at least 2, let Z be another non-empty finite set and let be a relation on C × Z. We refer to C as a concept class and to Z as a set of observations. If c z, then we say that the concept c is consistent with the observation z. We say that c is consistent with a set (resp. multiset) A of observations, which is written as c A, if c is consistent with every z ∈ A. The notation c z⃗ with z⃗ = (z_1,…,z_n) ∈ Z^n is understood analogously. For each c ∈ C, we define Z_c = {z ∈ Z: c z} . [Positive Examples as Observations] Let Z = X be a set of examples and let C be a family of subsets of X. Let the consistency relation be given by ∀ c ∈ C , x ∈ X: c x x ∈ c . Note that Z_c = c in this setting, i.e., concepts are identified with the sets of observations they are consistent with. [Labeled Examples as Observations] Let Z = X ×{0,1} be a set of labeled examples and let C be a family of subsets of X. Let the consistency relation be given by ∀ c ∈ C , (x,b) ∈ Z: c (x,b) (b=1 ∧ x ∈ c) ∨ (b=0 ∧ x ∉ c) . Note that Z_c = {(x,1): x ∈ c}∪{(x,0): x ∉ c} in this setting. It follows that |Z_c| = |X| for all c ∈ C. We will occasionally identify a set c X with the corresponding 0,1-valued function so that c(x)=1 for x ∈ c and c(x) = 0 for x ∈ X c. The equivalence in (<ref>) can then be written in the form c (x,b) b = c(x). [Labeled Examples and Probabilistic Concepts] Let Z = X ×{0,1} be again a set of labeled examples and let C be a family of functions from X to [0,1]. Let the consistency relation be given by ∀ c ∈ C, x ∈ X: c (x,1) c(x)>0 c (x,0) c(x)<1 . Intuitively we should think of c(x) as the probability that c assigns label 1 to instance x. If all concepts c ∈ C were 0,1-valued, we would again be in the setting of Example <ref>. Note that within Examples <ref> and <ref>, we have that ∀ c,c' ∈ C: c ≠ c' Z_c ≠ Z_c' so that each concept c ∈ C is uniquely determined by the full set Z_c of observations that c is consistent with. Of course this will not necessarily be the case if the concepts are probabilistic as in Example <ref>. §.§ Variants of Sampling As formalized in the definitions below, we distinguish between ordered and unordered sampling and we may have sampling with or without replacement. Let Q = (q(z))_z ∈ Z be a collection of probability parameters, i.e., q(z) ≥ 0 and ∑_z ∈ Zq(z) = 1. For n ≥ 0, we define n-fold (ordered resp. 
unordered) Q-sampling with replacement as the following random procedure: * Choose z_1,…,z_n independently at random according to Q. * In case of ordered sampling, return the sequence (z_1,…,z_n) whereas, in case of unordered sampling, return the multiset {z_1,…,z_n}.[If n=0, then the empty sequence resp. the empty multiset is returned,] Let z⃗ = (z_1,…,z_n) ∈ Z^n be a sequence that contains k distinct elements, say z'_1,…,z'_k, and let n_i denote the number of occurrences of z'_i in z⃗. Let A_z⃗ Z be the corresponding multiset. The probability that z⃗ (resp. A_z⃗) is obtained from n-fold ordered (resp. unordered) Q-sampling with replacement is henceforth denoted by P^O,R(z⃗|Q) (resp. by P^O,R(A_z⃗|Q)). With these notations, the following holds: P^O,R(z⃗|Q) = ∏_i=1^n q(z_i) = ∏_i=1^k q(z'_i)^n_i P^O,R(A_z⃗|Q) = n!/n_1! … n_k!·∏_i=1^kq(z'_i)^n_i . Let Q = (q(z))_z ∈ Z be a collection of probability parameters. Let N^+(Q) be the number of z ∈ Z such that q(z)>0. For 0 ≤ n ≤ N^+(Q), we define n-fold (ordered resp. unordered) Q-sampling without replacement as the following random procedure: * Choose z_1 at random according to Q. * For i=2,…,n do the following: Choose z_i ∈ Z {z_1,…,z_i-1} at random where, for each z ∈ Z {z_1,…,z_i-1}, the probability for z_i=z equals q(z)/1-(q(z_1) +…+ q(z_i-1)).[Note that the probability parameters for z ∈ Z {z_1,…,z_i-1} are the same as before up to normalization.] * In case of ordered sampling, return the sequence (z_1,…,z_n) whereas, in case of unordered sampling, return the set {z_1,…,z_n}. Let z⃗ = (z_1,…,z_n) ∈ Z^n be a repetition-free sequence and let A_z⃗ Z be the corresponding set. For a permutation σ of 1,…,n, we define z⃗_σ = (z_σ(1),…,z_σ(n)). The probability that z⃗ (resp. A_z⃗) is obtained from n-fold ordered (resp. unordered) Q-sampling without replacement is henceforth denoted by P^O,R(z⃗|Q) (resp. by P^O,R(A_z⃗|Q)). With these notations, the following holds: P^O,R(z⃗|Q) = ∏_i=1^nq(z_i)/1-(q(z_1) +…+ q(z_i-1)) P^O,R(A_z⃗|Q) = ∑_σ P^O,R(z⃗_σ|Q) , where σ ranges over all permutations of 1,…,n. We introduce the following notation: * ^O,R = Z^* denotes the set of sequences over Z (including the empty sequence). * ^O,R denotes the set of multisets over Z (including the empty multiset). * ^O,R denotes the set of repetition-free sequences over Z (including the empty sequence). * ^O,R = 2^Z denotes the powerset of Z. The pairs (α,β) ∈{O,O}×{R,R} are called sampling modes. We use the symbol not only to denote the empty set but also to denote the empty multiset or the empty sequence. If A is a finite set or multiset, then |A| denotes its size where, in case of a multiset, the multiple occurrences of elements are taken into account. The length of a finite sequence z⃗ is denoted by |z⃗|. Suppose that Q = (q(z))_z ∈ Z is collection of probability parameters. Then, for each sampling mode (α,β), we have that P^α,β( | Q) = 1. Moreover, if all parameters q(z) with z ∈ Z are strictly positive, then P^O,R(Z | Q) = 1. We close this section with a more or less obvious result whose proof will be given for sake of completeness. Let z_1,…,z_n be a sequence with pairwise distinct elements from Z. Let p_1 > p_2 > … p_n be a strictly decreasing sequence of strictly positive parameters such that ∑_i=1^np_i ≤ 1. For each permutation σ of [n], consider the parameter collection Q_σ = (q_σ(z_i))_i=1,…,n given by q_σ(z_i) = p_σ(i). Then the identity permutation is the unique maximizer of P^O,R(z_1,…,z_n | Q_σ). 
According to (<ref>), we have P^O,R(z_1,…,z_k | Q_σ) = ∏_i=1^nq_σ(z_i)/1-(q_σ(z_1) +…+ q_σ(z_i-1)) = ∏_i=1^np_σ(i)/1-(p_σ(1) +…+ p_σ(i-1)) = ∏_i=1^np_i/∏_i=1^n(1-(p_σ(1) +…+ p_σ(i-1)) The product in the numerator is the same for all permutations σ. The following assertions are equivalent: * σ^* is the identity permutation. * The sequence p_σ^*(1),…,p_σ^*(n) is strictly decreasing. * For each permutation σ≠σ^* and each i ∈ [n], we have that p_σ^*(1) +…+ p_σ^*(i-1)≥ p_σ(1) +…+ p_σ(i-1) and, for at least one i ∈ [n], this inequality is strict. * The permutation σ^* is the unique maximizer of P^O,R(z_1,…,z_k | Q_σ). The remark now is immediate from the equivalence of the first and the fourth statement. §.§ MAP- and MLE-based Teaching An MLE-learner will always choose a hypothesis from a class C that maximizes the likelihood of a given set of observations. MAP-learners are a bit more general because they additionally bring into play priors (P(c))_c ∈ C. The notion of likelihood depends on how the observations are randomly sampled. We proceed with the formal definition of MAP- and MLE-learners and their teachers: A MAP-Learner L for C is given by (and henceforth identified with) parameters P(z|c) ≥ 0 and P(c) > 0 for z ∈ Z and c ∈ C such that ∑_c ∈ CP(c) = 1 ∀ c ∈ C: ∑_z∈ Z P(z|c) = 1 . The parameters P(c) are referred to as priors. The parameters P(z|c), referred to as c-conditional likelihoods, must satisfy the following validity condition: c z P(z|c) = 0 . Set Z_c^+(L) := {z ∈ Z: P(z|c) > 0} and N^+(C,L) = min_c ∈ C|Z_c^+(L)|.[Because of the validity condition, Z_c^+(L) is a subset of Z_c = {z ∈ Z: c z}.] L can be in four different sampling modes (depending on the assumed kind of sampling). These modes determine the form of L's input and the choice of its output as will be detailed below. (O,R)-mode: For every n ≥ 0 and every sequence a⃗∈ Z^n, we denote by P^O,R(a⃗|c) the probability that a⃗ is obtained from n-fold ordered P(·|c)-sampling with replacement. Given a sequence a⃗∈^O,R, L returns the concept !max_c ∈ C[P(c) · P^O,R(a⃗|c)] if it exists, and a question mark otherwise.[The operator !max_c ∈ Cf(c) returns the unique maximizer c^* ∈ C of f(c) provided that it exists.] (O,R)-mode: For every n ≥ 0 and and every multiset A Z of size n, we denote by P^O,R(A|c) the probability that A is obtained from n-fold unordered P(·|c)-sampling with replacement. Given a multiset A ∈^O,R, L returns the concept !max_c ∈ C[P(c) · P^O,R(A|c)] if it exists, and a question mark otherwise. (O,R)-mode: For every 0 ≤ n ≤ N^+(C,L), and every repetition-free sequence a⃗∈ Z^n, we denote by P^O,R(a⃗|c)) the probability that a⃗ is obtained from n-fold ordered P(·|c)-sampling without replacement. Given a repetition-free sequence a⃗∈^O,R with |a⃗| ≤ N^+(C,L), L returns the concept !max_c ∈ C[P(c) · P^O,R(a⃗|c)] if it exists, and a question mark otherwise. If |a⃗| > N^+(C,L), then also a question mark is returned. (O,R)-mode: For every 0 ≤ n ≤ N^+(C,L), and every set A Z of size n, we denote by P^O,R(A|c) the probability that A is obtained from n-fold unordered P(·|c)-sampling without replacement. Given a set A ∈^O,R with |A| ≤ N^+(C,L), L returns the concept !max_c ∈ C[P(c) · P^O,R(A|c) ] if it exists, and a question mark otherwise. If |A| > N^+(C,L), then also a question mark is returned. An MLE-learner is a MAP-learner with uniform priors (so that the factor P(c) in the above arg!max-expressions can be dropped). Suppose that L is a MAP-learner for C that is in sampling mode (α,β) ∈{O,O}×{R,R}. 
A (successful) teacher for L is a mapping T which assigns to each concept c_0 ∈ C an input I = T(c_0) for L such that L(I) = c_0. In other words: * I ∈^α,β and, if β = R, then |I| ≤ N^+(C,L). * c_0 = !max_c ∈ C[P(c) · P^α,β(I|c)]. A couple of observations are in place here. Suppose that L is a MAP-learner for C which is in sampling mode (α,β) ∈{O,O}×{R,R}. Suppose that T is a teacher for L. Then the following holds for all c,c' ∈ C: L(T(c)) = c , P^α,β(|c) = 1 , P^α,β(T(c)|c) > 0 , c T(c) (c ≠ c' T(c) ≠ T(c')) . Moreover, if L is an MLE-learner and T is a teacher for L, then T(c) ≠. L(T(c)) = c is an immediate consequence of Definitions <ref> and <ref>. It now follows that, if T(c) = T(c'), then c = L(T(c)) = L(T(c')) = c'. In other words, c ≠ c' implies that T(c) ≠ T(c'). 0-fold sampling conditioned to c yields regardless of how c is chosen. It follows that P^α,β(|c) = 1. Assume now for contradiction that P^α,β(T(c')|c') = 0. But then c' cannot be the unique maximizer of P^α,β(T(c')|c) in C. This is in contradiction with L(T(c')) = c'. Assume for contradiction that T(c) contains an observation z ∈ Z such that c z. It follows that P^α,β(T(c)|c) = 0, which is in contradiction with P^α,β(T(c)|c) > 0. Thus c T(c). Finally, suppose that the priors are uniform, i.e., P(c) = 1/|C| for every c ∈ C. Assume for contradiction that T(c_0) = for some c_0 ∈ C. For every c ∈ C, we have P(c) · P^α,β(|c) = P(c) = 1/|C|. Hence c_0 cannot be unique maximizer of P(c) · P^α,β(|c) in C. This is in contradiction with L(T(c_0)) = c_0. Here is the definition of the parameter that is in the focus of our interest: Suppose that L is a MAP-learner for C who is in sampling mode (α,β). The MAP-teaching dimension of C given L and (α,β), denoted as ^α,β_L(C), is defined as the smallest number d such that there exists a teacher of order d for L, respectively as ∞ if there does not exist a teacher for L. The MAP-teaching dimension of C with respect to sampling mode (α,β) is then given by ^α,β(C) := min_L ^α,β_L(C) , where L ranges over all MAP-learners for C. Similarly, the MLE-teaching dimension of C with respect to sampling mode (α,β) is given by ^α,β(C) := min_L ^α,β_L(C) with L ranging over all MLE-learners for C. The parameter ^α,β(C) equals the number of observations needed to teach an optimally parameterized learner. It represents an information-theoretic barrier that cannot be brocken regardless of how the learner is parameterized. Of course, this parameter will generally be smaller than the parameter _L^α,β(C) associated with a “naturally parameterized” learner. We close this section by mentioning the inequality ^α,β(C) ≤^α,β(C) , which (for trivial reasons) holds for each choice of C and (α,β). § BASIC RESULTS ON THE MAP-BASED TEACHING MODEL In <cit.>, the authors used a more restrictive condition at the place of the validity condition. However, as we will see in Section <ref>, in the context of MAP-learners and their teachers, both conditions lead essentially to the same results. In Section <ref>, we discuss two natural monotonicity properties and thereafter, in Section <ref>, we note the equivalence of (O,R)- and the (O,R)-mode and prove the pairwise incomparability of the modes (O,R), (O,R) and (O,R). §.§ Validity and Strong Validity We will refer to c z P(z|c) = 0 as the strong validity condition for the parameters (P(z|c))_z ∈ Z,c ∈ C. This is the condition that the authors of <cit.> had imposed on the c-conditional likelihoods associated with a MAP-learner. 
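As an aside, the definitions of the previous section are small enough to prototype directly, which can help when checking examples by hand. The following Python sketch is ours, purely illustrative, and only feasible for tiny classes; it spells out two of the sampling-mode likelihoods, the MAP-learner's unique-argmax rule, and a brute-force computation of the teaching dimension for the unordered-sampling-without-replacement mode.

```python
from itertools import combinations, permutations
from math import prod

def p_ordered_with_replacement(seq, q):
    """P^{O,R}(z_vec | Q) for ordered sampling with replacement."""
    return prod(q[z] for z in seq)

def p_ordered_without_replacement(seq, q):
    """Ordered sampling without replacement; assumes len(seq) <= N^+(Q), so the denominator stays positive."""
    p, used = 1.0, 0.0
    for z in seq:
        p *= q[z] / (1.0 - used)
        used += q[z]
    return p

def p_unordered_without_replacement(A, q):
    """Unordered sampling without replacement: sum of the ordered probabilities over all orderings of A."""
    return sum(p_ordered_without_replacement(perm, q) for perm in permutations(A))

def map_learner(obs, priors, likelihood, mode_prob):
    """Return the unique maximizer of P(c) * P^{alpha,beta}(obs | c), or None (the question mark)."""
    scores = {c: priors[c] * mode_prob(obs, likelihood[c]) for c in priors}
    best = max(scores.values())
    winners = [c for c, v in scores.items() if v == best]   # exact float ties count as '?'
    return winners[0] if len(winners) == 1 else None

def map_td_unordered_without_replacement(concepts, Z, priors, likelihood):
    """Brute-force MAP-TD for this mode: for every target c0, search for a smallest
    observation set A (of size at most N^+(C, L)) with L(A) = c0."""
    n_plus = min(sum(1 for z in Z if likelihood[c][z] > 0) for c in concepts)
    worst = 0
    for c0 in concepts:
        size = None
        for k in range(n_plus + 1):
            if any(map_learner(A, priors, likelihood, p_unordered_without_replacement) == c0
                   for A in combinations(Z, k)):
                size = k
                break
        if size is None:
            return float("inf")      # no teacher exists for this particular learner
        worst = max(worst, size)
    return worst
```

On a toy class one can then compare different parameterizations L by minimizing this quantity over choices of priors and conditional likelihoods, which is what the optimally parameterized dimension of Definition above quantifies.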
We will see that each L satisfying the validity condition has a “close relative” L_ that satisfies the strong validity condition. Here comes the definition of L_: Let L be given by parameters P(c) and P(z|c) with c ∈ C and z ∈ Z such that the validity condition is satisfied but the strong validity condition is not. We say that L_ (with 0 < ≤ 1/2) is the -shift of L if L_ is given by the parameters P(c) and P_(z|c) where P_(z|c) = {[ (1-) · P(z|c) ; /|Z_c Z_c^+(L)| ; 0 ]. . For convenience, we set P_(z|c) = P(z|c) if already L satisfies the strong validity condition. Note that L_ satisfies the strong validity condition because P_(z|c) = 0 iff z ∉Z_c and Z_c = {z ∈ Z: c z}. A learner and its -shift are related as follows: Let L be a MAP-learner for C whose parameters satisfy the validity condition. Then the following holds for each (α,β) ∈{O,O}×{R,R} and all sufficiently small >0: each teacher for L in sampling mode (α,β) is also a teacher for L_ in sampling mode (α,β). Suppose that L and L_ are both in sampling mode (α,β). Consider a teacher T for L. We claim that the following holds: ∀ c_0,c ∈ C: lim_ 0P_^α,β(T(c_0)|c) = P^α,β(T(c_0)|c) . This would imply that, for every c_0 ∈ C and sufficiently small , we have c_0 = !max_c ∈ CP^α,β(T(c_0)|c) = !max_c ∈ CP_^α,β(T(c_0)|c) , which, in turn, implies that T is a teacher for L_. We still have to verify (<ref>). This can be done by means of a simple continuity argument. Note first that ∀ c ∈ C, z ∈ Z: lim_ 0P_(z|c) = P(z|c) . Since P_^α,R(T(c_0)|c) is a polynomial (and hence a continuous function) in the variables P_(z|c) with z ∈ T(c_0), we may conclude that (<ref>) is true in case of β = R. Suppose now that (α,β) = (O,R) and T(c_0) = (z_1,…,z_n), which implies that n ≤ N^+(C,L) and z_1,…,z_n ∈ Z_c^+(L). The function P_^O,R(T(c_0)|c) = ∏_i=1^nP_(z_i|c)/1-(P_(z_1|c) +…+ P_(z_i-1|c)) is a rational function in the variables P_(z_i|c) for i=1,…,n. Hence we can apply the continuity argument again but, in addition, we must rule out that the denominator, 1-(P_(z_1|c) +…+ P_(z_i-1|c)), converges to 0 when approaches 0. This, however, can be ruled out as follows: * Set ρ := 1/2·min_c ∈ C , z ∈ Z_c^+(L)P(z|c) and note that 0 < ρ≤min_c ∈ C , z ∈ Z_c^+(L)P_(z|c). The latter inequality holds because of P_(z|c) = (1-) · P(z|c) and ≤ 1/2. * Because of n ≤ N^+(C,L), the set {z_1,…,z_n-1} cannot contain all elements of Z_c^+(L). * Therefore 1-(P_(z_1|c) +…+ P_(z_i-1|c) ≥ρ for all i = 1,…,n and the limit for 0 cannot be equal to 0. We may therefore conclude that (<ref>) is true in case of (α,β) = (O,R). The proof in case of (α,β) = (O,R) is similar. With the notation from Definition <ref>, we have ^α,β_L(C) = ^α,β_L_(C) for all sufficiently small > 0. §.§ Monotonicity Properties It is clear, intuitively, that adding concepts without adding observations should make the teaching problem harder. Conversely, adding observations without adding concepts should make the teaching problem easier. In this section, we formalize these statements and prove them. All results in this section are formulated in terms of . But the corresponding results with at the place of hold es well. We say that (C',Z',') is an extension of (C,Z,) if C C', Z Z' and, for all c ∈ C and z ∈ Z, we have that c z if and only if c ' z. So far, we used a notation (e.g. ^α,β(C) instead of ^α,β(C,Z,)) which made a dependence on (C,Z,) explicit for C only (because the corresponding Z and the corresponding relation were clear from context). 
In this section, there is some danger of confusion and, consequently, we use a notation which makes the dependence on the whole triple (C,Z,) more explicit. Let (C',Z',') be an extension of (C,Z,) with Z'=Z. Let L be a MAP-learner for (C',Z,') with parameters P(c')>0 and P(z|c') for c' ∈ C' and z ∈ Z. Set P(C) = ∑_c ∈ CP(c). The MAP-learner with parameters P(c)/P(C) and P(z|c) for c ∈ C and z ∈ Z, denoted by L_ C, is called the restriction of L to subclass C. The parameters of a MAP-learner L for (C',Z,') must satisfy the validity condition. Clearly the parameters of L_ C satisfy the validity condition too. Moreover, for each c ∈ C, we have that Z_c^+(L_ C) = Z_c^+(L). These observations can be used for showing the following result: With the assumptions and notation as in Definition <ref>, the following holds for each sampling mode (α,β): _L_ C^α,β(C,Z,) ≤ _L^α,β(C',Z,') . Let T:C' ^α,β be a teacher for L and let T_ C denote its restriction to subclass C. Clearly the order of T_ C is upper-bounded by the order of T. It suffices to show that T_ C is a teacher for L_ C. To this end, we have to show the following: (a) If β = R then, for all c ∈ C, we have that |T_ C(c)| ≤ N^+(C,L_ C). (b) For all c_0 ∈ C, c ∈ C {c_0}, we have that P(c) · P^α,β(T_ C(c_0)|c) < P(c_0) · P^α,β(T_ C(c_0)|c_0). Of course, since T is teacher for L, we know that the following hold: (a') If β = R then, for all c' ∈ C', we have that |T(c')| ≤ N^+(C',L). (b') For all c'_0 ∈ C', c' ∈ C' {c'_0}, we have that P(c') · P^α,β(T(c'_0)|c') < P(c'_0) · P^α,β(T(c'_0)|c'_0). The following calculation verifies (a) under the assumption that β = R: |T_ C(c)| = |T(c)| ≤ N^+(C',L) = min_c' ∈ C'|Z_c'^+(L)| ≤ min_c ∈ C|Z_c^+(L)| = min_c ∈ C|Z_c^+(L_ C)| = N^+(C,L_ C) . Suppose that c_0 ∈ C and c ∈ C {c_0}. Then (b) can be verified as follows: P(c) · P^α,β(T_ C(c_0)|c) = P(c) · P^α,β(T(c_0)|c) ≤ P(c_0) · P^α,β(T(c_0)|c_0) = P(c_0) · P^α,β(T_ C(c_0)|c_0) . Here the first and the last equation hold because c_0 ∈ C and therefore T_ C(c_0) = T(c_0). If (C',Z',') is an extension of (C,Z,) with Z = Z', then ^α,β(C,Z,) ≤ ^α,β(C',Z,') . Let (C',Z',') be an extension of (C,Z,) with C'=C. Let L be a MAP-learner for (C,Z,) with parameters P(c) and P(z|c) for c ∈ C and z ∈ Z. The MAP-learner with parameters P_ Z'(c) = P(c) and P_ Z'(z'|c) = {[ P(z'|c) ; 0 ]. , denoted by L_ Z', is called the extension of L to superset Z'. The parameters of a MAP-learner L for (C,Z,) must satisfy the validity condition. It is easy to check that, therefore, the parameters of L_ Z' satisfy the validity condition too. Moreover, for each c ∈ C, we have that {z' ∈ Z': P_ Z'(z'|c) > 0} = {z ∈ Z: P(z|c) > 0} = Z_c^+(L)} , which implies that N^+(C,L_ Z') = N^+(C,L). These observations can be used for showing the following result: With the assumptions and the notation as in Definition <ref>, the following holds for each sampling mode (α,β): _L^α,β(C,Z,) ≥ _L Z'^α,β(C,Z',') . Let T:C ^α,β be a teacher for L. It is sufficient to show that T is also a teacher for L_ Z' (albeit a teacher for L_ Z' who does not make use of observations in Z' Z). To this end, we have to show the following: (a) If β = R then, for all c ∈ C, we have that |T(c)| ≤ N^+(C,L_ Z'). (b) For all c_0 ∈ C, c ∈ C {c_0}, we have that P(c) · P_ Z'^α,β(T(c_0)|c) < P(c_0) · P_ Z'^α,β(T(c_0)|c_0). Assertion (a), assuming β = R, is obtained by |T(c)| ≤ N^+(C,L) = N^+(C,L_ Z') , where the first inequality holds because T is a teacher for L. Suppose that c_0 ∈ C and c ∈ C {c_0}. 
Assertion (b) is obtained by P(c) · P_ Z'^α,β(T(c_0)|c) = P(c) · P^α,β(T(c_0|c) < P(c_0) · P^α,β(T(c_0)|c_0) = P(c_0) · P_ Z'^α,β(T(c_0)|c_0) , where the first and the last equation holds because T(c_0) Z so that the likelihoods of observations in Z' Z do not come into play. The inequality in the middle holds because T is a teacher for L. If (C',Z',') is an extension of (C,Z,) with C=C', then ^α,β(C,Z,) ≥ ^α,β(C,Z',') . §.§ A Comparison of the Sampling Modes We say that the sampling mode (α,β) dominates the sampling mode (α',β') if, for every concept class C and every MAP-learner L for C, we have that _L^α,β(C) ≤_L^α',β'(C). We say they are equivalent if they mutually dominate each other, i.e., if _L^α,β(C) = _L^α',β'(C) holds for every choice of C and L. We say, they are incomparable if none of them dominates the other one. We start with an easy observation: The sampling modes (O,R) and (O,R) are equivalent. Consider a concept class C and a MAP-learner L for C. Let a⃗∈ Z^n be a sequence of k distinct elements with multiplicities n_1,…,n_k, respectively. Denote by A the corresponding multiset. An inspection of (<ref>) shows that the following holds for each c ∈ C: P^O,R(A|c) = n!/n_1! … n_k!· P^O,R(a⃗|c) . Let a⃗'⃗ be a sequence obtained from a⃗ by a permutation of the components. Since a⃗'⃗ also consists of k distinct elements with multiplicities n_1,…,n_k, respectively, equation (<ref>) also holds with a⃗'⃗ at the place of a⃗. It therefore easily follows that a teacher T for L, with L being in sampling mode (O,R), can be converted into a teacher T' of the same order for L with L being in sampling mode (O,R), and vice versa: * Suppose that T is given. If T(c) = a⃗, then define T'(c) = A where A is the multiset induced by a⃗. * Suppose that T' is given. If T'(c) = A then define T(A) = a⃗ where a⃗ is an (arbitrarily chosen) sequence containing the same elements as A with the same multiplicities. It follows from this discussion that ^O,R_L(C) = ^O,R_L(C), which concludes the proof. ^O,R(C) = ^O,R(C) and ^O,R(C) = ^O,R(C). We now turn our attention to the incomparability results: The sampling modes (O,R), (O,R) and (O,R) are pairwise incomparable. In order to prove the theorem, we will consider triples (C,Z,) with C = {c_1,c_2,c_3}, Z = {z_1,z_2,z_3} and c_i z_j for all 1 ≤ i,j ≤ 3. An important role will be played by concepts of the form c^±Δ with parameters given by P(z_1|c^±Δ) = p+Δ , P(z_2|c^±Δ) = p-Δ P(z_3|c^±Δ) = 1-2p . The following Facts 1–4, which pave the way for the proof of Theorem <ref>, can be proven by using the derivation rules of analysis. For sake of completeness, these proofs are given in the appendix. Fact 1: Suppose that 0 ≤ |Δ| < p < 1/2. Let c^±Δ be the concept given by (<ref>). Then P^O,R(z_1,z_2)|c^±Δ) and P^O,R(z_1,z_2|c^±Δ) are both strictly decreasing when |Δ| is increased, which implies that Δ = 0 is the unique maximizer. Fact 2: Suppose that 0 ≤ |Δ| < p < 1/2. Let c^±Δ be the concept given by (<ref>). Then P^O,R(z_1,z_2 | c^±Δ) - P^O,R(z_1,z_2 | c^±0) {[ = 0 ; > 0 ; < 0 ]. . Fact 3: Suppose that 0 ≤Δ < p < 1/2. Let c^±Δ be the concept given by (<ref>). Then P^O,R(z_1,z_1,z_2 | c^±Δ) - P^O,R(z_1,z_1,z_2 | c^±0) {[ = 0 ; > 0 ; < 0 ]. . Fact 4: Suppose that 0 < p < 1/2 and 1 ≤ t < 1-p/p. Let c^(t) be the concept given by c^(t)(z_1) = pt , c^(t)(z_2) = p/t c^(t)(z_3) = 1 - pt - p/t . Then P^O,R(z_1,z_2|c^(t)) is strictly increasing with t. A couple of more intuitive remarks are in place here. 
Fact 1 tells us that, in sampling modes (O,R) and (O,R), a concept explains observations z_1,z_2 the better (in the maximum likelihood sense), the more evenly it splits the available probability mass 2p among them. We will refer to an application of Fact 1 as applying the “even-split argument”. In sampling mode (O,R), however, the even split does not maximize the likelihood of these observations. The likelihood of z_1,z_2 becomes larger if the probability assigned to z_1 is slightly larger than the probability assigned to z_2. See (<ref>). A similar remark applies to the sampling mode (O,R) and the sequence z_1,z_1,z_2. See (<ref>). Fact 4 is concerned with sampling mode (O,R) and a multiplicative decomposition of p^2 into pt (the probability assigned to z_1) and p/t (the probability assigned to z_2) with t ≥ 1. According to Fact 4, the likelihood of {z_1,z_2} becomes larger when the scaling factor t≥1 is increased. Note that this is not in contradiction with the even-split argument, because pt + p/t is itself strictly increasing with t so that the even-split argument does not apply. We would furthermore like to note that the c-conditional likelihood of a (multi-)set or sequence of observations becomes larger if one of the relevant c-conditional likelihood parameters is increased while the others are fixed. We refer to this way of arguing as applying the “monotonicity argument”. Theorem <ref> is a direct consequence of the following three lemmas. Consider the triple (C,Z,) with C = {c_1,c_2,c_3}, Z = {z_1,z_2,z_3} and c_i z_j for all 1 ≤ i,j ≤ 3. Let L be an MLE-learner for C with parameters given by [ P(z|c) c_1 c_2 c_3; z_1 p+Δ_1 p+Δ_2 p; z_2 p-Δ_1 p-Δ_2 p; z_3 1-2p 1-2p 1-2p; ] , where 0 < Δ_1 < p^2/1-p < Δ_2 = 1/2(√(5)-1)p < p ≤ 0.4.[The constraint p ≤ 0.4 has the effect that p/1-p < 1/2(√(5)-1).] Then ^O,R_L(C) = 3 , ^O,R_L(C) = 2 ^O,R_L(C) = ∞ . It is obvious that, in any mode of sampling, the concept c_2 can be taught by observation z_1 and the concept c_3 can be taught by observation z_2. An inspection of (<ref>) and (<ref>) reveals that P_L^O,R(z_1,z_2|c_1) > P_L^O,R(z_1,z_2|c_3) > P_L^O,R(z_1,z_2|c_2) , P_L^O,R(z_1,z_1,z_2|c_1) > P_L^O,R(z_1,z_1,z_2|c_2) = P_L^O,R(z_1,z_1,z_2|c_3) . It follows that c_1 can be taught in (O,R)-mode (resp. in (O,R)-mode) by the sequence z_1,z_2 (resp. by the sequence z_1,z_1,z_2). We will argue now that there are no shorter sequences for teaching c_1 and that, in (O,R)-mode, c_1 cannot be taught at all. An application of the monotonicity argument yields that c_1 cannot be taught by a single observation (regardless of the sampling mode). The same remark holds for 2 observations except, possibly, for observations z_1,z_2. But, by the even-split argument, it is the concept c_3 that assigns the highest probability to the sequence (z_1,z_2) ∈^O,R resp. to the set {z_1,z_2}∈^O,R. Thus (O,R) is the only sampling mode in which c_1 can be taught by 2 observations. It follows that, in (O,R)-mode, c_1 cannot be taught at all.[Here we make use of the fact that, if Z_c = Z for each c ∈ C, then P^O,R(Z|c) = 1 for each c ∈ C. Note that this rules out the possibility of having teaching sets of size 3 = |Z|.] We may conclude from this discussion that the identities in (<ref>) are valid, Lemma <ref> implies that (O,R) does not dominate (O,R) and (O,R) does not dominate any of the other sampling modes. The next result leads to some more no-domination results: Consider the triple (C,Z,) with C = {c_1,c_2,c_3}, Z = {z_1,z_2,z_3} and c_i z_j for all 1 ≤ i,j ≤ 3. 
Let L be an MLE-learner for C with the parameters P(z|c) given by [ P(z|c) c_1 c_2 c_3; z_1 p p+Δ p-Δ; z_2 p p-Δ p+Δ; z_3 1-2p 1-2p 1-2p; ] , where 0 < Δ < p^2/1-p < p < 1/2. Then ^O,R_L(C) = ^O,R_L(C) = 2 ^O,R_L(C) = ∞ . Clearly the concept c_2 can be taught by observation z_1 and the concept c_3 can be taught by observation z_2 in any mode of sampling. The concept c_1 cannot be taught by a single observation. But it can be taught by the sequence (z_1,z_2) in (O,R)-mode and by the set {z_1,z_2} in (O,R)-mode (application of the even-split argument). We finally discuss teachability of c_1 in (O,R)-mode. An application of the monotonicity argument yields that c_1 cannot be taught in (O,R)-mode by two observations except, possibly, by the observations (z_1,z_2) or (z_2,z_1) in ^O,R. But an inspection of (<ref>) reveals that it is the concept c_2 (resp. c_3) that assigns the highest probability to (z_1,z_2) (resp. to (z_2,z_1)). It follows that, in (O,R)-mode, the concept c_1 cannot be taught at all. We may conclude from this discussion that the identities in (<ref>) are valid. Lemma <ref> implies that (O,R) does not dominate any of the other sampling modes. The next result implies (O,R) does not dominate (O,R). Consider the triple (C,Z,) with C = {c_1,c_2,c_3}, Z = {z_1,z_2,z_3} and c_i z_j for all 1 ≤ i,j ≤ 3. Let L be an MLE-learner for C with parameters P(z|c) given by [ P(z|c) c_1 c_2 c_3; z_1 sp p sp+; z_2 p/s p p/s-; z_3 1-sp-p/s 1-2p 1-sp-p/s; ] , where 0 < p <1/2 and 1 < s ≤1-p/p. Then ^O,R_L(C) = 2 < ^O,R_L(C) , provided that >0 is sufficiently small. Clearly, the concept c_2 can be taught by observation z_2 and c_3 can be taught by observation z_1 in any mode of sampling. It is obvious that c_1 cannot be taught by a single observation (regardless of the sampling mode). In (O,R)-mode, the concept c_1 cannot be taught by sequences of length 2 because c_1 is for none of them the unique maximizer: * P^O,R_L(z_1,z_2|c_1) = p^2 =P^O,R_L(z_1,z_2|c_2). * P^O,R_L(z_1,z_3|c_1) < P^O,R_L(z_1,z_3|c_3) and P^O,R_L(z_2,z_3|c_1) < P^O,R_L(z_2,z_3|c_2).[These are two applications of the monotonicity argument. Note that s + 1/s > 2 for all s>1.] However, in (O,R)-mode, the concept c_1 can be taught by the set {z_1,z_2}: * Concept c_1 distributes the probability mass sp + p/s (slightly) more evenly on z_1 and z_2 than the concept c_3. By the even-split argument, we obtain P^O,R({z_1,z_2}|c_1) > P^O,R({z_1,z_2}|c_3). * Recall from Fact 4 that c^(t), with t ≥ 1, denotes the concept which assigns probability pt to z_1, probability p/t to z_2 and the remaining probability mass to z_3. Note that c_1 = c^(s) and c_2 = c^(1). According to Fact 4, the function P^O,R(z_1,z_2 | c^(t)) is strictly increasing with t. Hence P^O,R({z_1,z_2}|c_1) > P^O,R({z_1,z_2}|c_2). The identities in (<ref>) are immediate from this discussion. Putting the above three lemmas together, we obtain Theorem <ref>. § MAP-BASED TEACHING AND SATURATING MATCHINGS Suppose that C is a concept class with observation set Z and consistency relation . The bipartite graph G(C) = (C,Z,E) with E = {(c,z) ∈ C × Z: c z} is called the consistency graph (associated with C). Let ^α,β with (α,β) ∈{O,O}×{R,R} be the notation that was introduced in Section <ref>. The bipartite graph G(C)^α,β = (C,^α,β,E^α,β) with E^α,β = {(c,ζ) ∈ C ×^α,β: c ζ} is called the extended consistency graph (associated with C). The graph resulting from G(C)^α,β by the removal of the vertex from the second vertex class ^α,β will be denoted by G(C)^α,β_≠. 
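For a small triple (C,Z,⊑), the consistency graph and its extension can be written out explicitly. The following Python sketch is an editorial illustration only (the three-concept class and all identifiers are ours, not taken from the paper); it lists the edges of G(C) and, for the mode in which inputs are sets of distinct observations, the edges of the extended graph up to a prescribed set size.

```python
from itertools import combinations

# A toy triple (C, Z, consistency relation): consistent[c] is the set of z with c ⊑ z.
Z = ["z1", "z2", "z3"]
consistent = {
    "c1": {"z1", "z2"},
    "c2": {"z2", "z3"},
    "c3": {"z1", "z2", "z3"},
}

# Consistency graph G(C): one edge per consistent (concept, observation) pair.
consistency_edges = [(c, z) for c, zs in consistent.items() for z in zs]

# Extended consistency graph for set-valued inputs (unordered, without replacement):
# a concept c is joined to a set A of observations iff c is consistent with every z in A.
def extended_edges(max_size, drop_empty_set=True):
    edges = []
    for k in range(1 if drop_empty_set else 0, max_size + 1):
        for A in combinations(Z, k):
            for c, zs in consistent.items():
                if set(A) <= zs:
                    edges.append((c, frozenset(A)))
    return edges

print(consistency_edges)
print(extended_edges(2))   # second vertex class up to size 2; drop_empty_set mimics removing the empty set
```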
We denote by (G(C)^α,β) the smallest possible order of a C-saturating matching in G(C)^α,β. Analogously, (G(C)^α,β_≠) denotes the smallest possible order of a C-saturating matching in G(C)^α,β_≠. For ease of later reference, we make the following observation: Suppose that T:C ^α,β is a mapping which satisfies ∀ c,c' ∈ C: (c T(c)) ∧ (c ≠ c' T(c) ≠ T(c')) . Then T is of order at least (G(C)^α,β). Moreover, if T satisfies (<ref>) and is not in the image of T, then T is of order at least (G(C)^α,β_≠). If T satisfies (<ref>), then T represents a C-saturating matching in G(C)^α,β. If additionally is not in the image of T, then T represents a C-saturating matching in G(C)^α,β_≠. Here is the main result of this section: For each sampling mode (α,β), we have ^α,β(C) ≥(G(C)^α,β) ^α,β(C) ≥(G(C)^α,β_≠) . Moreover, for (α,β) = (O,R), this holds with equality. Let L be a MAP-learner for C and let (α,β) denote its sampling mode. Let T be a teacher for L. Recall from (<ref>) that T satisfies (<ref>). Moreover, if L is an MLE-learner for C, then T(c) ≠ for all c ∈ C. Now an application of Remark <ref> yields (<ref>). We move on and prove that ^O,R(C) ≤(G(C)_≠^O,R). Suppose that M is a C-saturating matching in G(C)^O,R_≠ that is of order (G(C)^O,R_≠). For each c ∈ C and z ∈ Z, let n(z,c) denote the number of occurrences of z in the multiset M(c) and let n(c) = |M(c)|. Consider a learner L with uniform priors (= MLE-learner) and the parameters P(z|c) = n(z,c)/n(c). Note that these parameters satisfy the validity condition. It suffices to show that M represents a teacher for L, i.e., we have to show that ∀ c^* ∈ C: c^* = !max_c ∈ C P^O,R(M(c^*)|c) . To this end, we pick a concept c from C {c^*}, and proceed by case analysis: Case 1: M(c^*) and M(c) contain the same elements of Z (albeit with different multiplicities)[The multiplicities cannot be the same because M: C ^O,R is a matching.]. Denote these elements by z_1,…,z_k. Let n := n(c^*), n_i = n(z_i,c^*). Then p_i := n_i/n is the relative frequency of z_i in M(c^*). Let q_i denote the relative frequency of z_i in M(c), which implies that q⃗≠p⃗. It follows that P^O,R(M(c^*)|c^*) = n!/n_1! … n_k!·∏_i=1^kp_i^n_i P^O,R(M(c^*)|c) = n!/n_1! … n_k!·∏_i=1^kq_i^n_i . A straightforward calculation shows that P^O,R(M(c^*)|c^*) > P^O,R(M(c^*)|c) iff ∑_i=1^k p_i log(p_i/q_i) > 0 . The left-hand side is the Kullback-Leibler divergence (= KLD) between p⃗ and q⃗. Since the KLD is non-negative and 0 only if q⃗ = p⃗, the condition (<ref>) is satisfied. Case 2: M(c^*) contains an element that is not contained in M(c). Then the c-conditional likelihood of M(c^*) equals 0. Case 3: All elements in M(c^*) are contained in M(c), but M(c) contains an element that is not contained in M(c^*). Then the c-conditional likelihood of M(c^*) can be expressed as (E_1) ·(E_2 | E_1) for the following two events: E_1: n(c^*)-fold c-sampling yields only elements from M(c^*). E_2: n(c^*)-fold c-sampling yields M(c^*). Since M(c) contains an element that is not contained in M(c^*), we have (E_1) < 1. It follows from the analysis of Case 1 that (E_2 | E_1) is upper-bounded by the c^*-conditional likelihood of M(c^*). We may conclude from the above discussion that c^* = !max_c ∈ C P^O,R(M(c^*)|c). Thus M can be seen as a teacher for L. It follows that ^O,R(C) ≤(G(C)^O,R_). The inequality ^O,R(C) ≤(G(C)^O,R) can be obtained in a similar fashion. We start with a C-saturating matching M in G(C)^O,R that is of order (G(C)^O,R). If M does not assign to any concept, we can proceed as before. 
Otherwise, if M(c_0) = for some c_0 ∈ C, we still use a similar reasoning but with a slight modification of the parameter collection of the learner L: * The priors are given by setting P(c_0) = 1+/|C| and by letting the remaining |C|-1 concepts evenly share the remaining probability mass (still almost uniform priors). * The parameters P(z|c) are chosen as before. We can again view the matching M as a teacher for L. Since P^O,R(|c) = 1 for all c ∈ C, we obtain !max_c ∈ C(P(c) · P^O,R(|c)) = !max_c ∈ CP(c) = c_0 . For the remaining concepts, the reasoning is as before provided that >0 s sufficiently small: this is an easy continuity argument which exploits that the priors converge to the uniform distribution on C if approaches 0. Clearly (G(C)^O,R) ≤ min{(G(C)^O,R) , (G(C)^O,R)} ≤ max{(G(C)^O,R) , (G(C)^O,R)}≤(G(C)^O,R) and (G(C)_≠^O,R) ≤ min{(G(C)_≠^O,R) , (G(C)_≠^O,R)} ≤ max{(G(C)_≠^O,R) , (G(C)_≠^O,R)}≤(G(C)_≠^O,R) . Combining this with Theorem <ref> and with Corollary <ref>, we immediately obtain the following result: * ^O,R(C) = (G(C)^O,R) ≤(G(C)^O,R) ≤^O,R(C). * ^O,R(C) = (G(C)_≠^O,R) ≤(G(C)_≠^O,R) ≤^O,R(C). Hence we get ^O,R(C) ≤^O,R(C) and ^O,R(C) ≤^O,R(C) despite of the fact that (O,R) does not dominate (O,R). § ON CONCEPTS TAUGHT BY LABELED EXAMPLES In this section, we will restrict ourselves to triples (C,Z,) of the form as described in Example <ref>, i.e., C is a family of subsets of a domain X, Z = X ×{0,1} and is given by (<ref>). We will see that, for each triple (C,Z,) of this special form and for each sampling mode (α,β) except (O,R), we have ^α,β(C) = (G(C)^α,β). For (α,β) = (O,R), this is already known from Theorem <ref>. For the other sampling modes, (O,R) and (O,R), it will be shown in Section <ref>, Since the modes (O,R) and (O,R) are equivalent, we see that, for triples of the special form, the MAP-teaching dimensions of C are fully determined by the saturating matching numbers associated with G(C). In Section <ref> we explore how MAP- and MLE-learners are related. For a given collection of conditional likelihoods, it can make much of a difference whether we commit ourselves to uniform priors or not. However, in the case of optimally parameterized learners, the freedom for choosing a non-uniform prior is of minor importance only: it turns out that the MLE-teaching dimension exceeds the MAP-teaching dimension at most by 1. In Section <ref>, we will see that the ^O,R(C) is upper bounded by the so-called antichain number of C, by the VC-dimension of C and by the no-clash teaching dimension of C. These upper bounds are then, all the more, valid for all parameters ^α,β(C) (no matter how he sampling mode (α,β)) is chosen). In Section <ref>, we will show that the saturating matching numbers associated with G(C) (and hence the MAP-teaching dimensions of C) can be computed in polytime. §.§ Saturating Matching Number Revisited We start with the two main results of this section. Suppose that (C,Z,) is of the form as described in Example <ref>. Then ^O,R(C) = (G(C)^O,R) and ^O,R(C) = (G(C)_≠^O,R). The ≥-direction of the claimed equalities is covered by Theorem <ref>. We have to show the ≤-direction. We may restrict ourselves to proving ^O,R(C) ≤(G(C)_≠^O,R) because the proof for ^O,R(C) ≤(G(C)^O,R) is quite similar and uses the same kind of arguments that we had used in the final part of the proof of Theorem <ref>. Set m = |X|, d^+ = (G(C)^O,R) and let M: C ^O,R{} be a C-saturating matching in G(C)^O,R of order d^+. For every c ∈ C, we set d(c) = |M(c)|. Note that 1 ≤ d(c) ≤ d^+. 
If d^+ = m, then we are done because ^O,R(C) cannot exceed m. We may assume therefore that d^+ ≤ m-1. Let 0 < ≤1/2 be a small real number (where the meaning of “small” will become clear from what follows). For each c ∈ C, we set U_0(c) := {(x,b)∈ Z: c(x) ≠ b} U_1(c) := {(x,b)∈ Z: c(x) = b ∧ (x,b) ∉ M(c)} and U(c) = U_0(c) ∪ U_1(c). Note that, for each c ∈ C, the set Z partitions into M(c), U_0(c) and U_1(c). For each c ∈ C and each (x,b) ∈ Z, we set P((x,b) | c) = {[ 1-/d(c) ; /m-d(c) ; 0 ]. . Let L be the MLE-learner given by (<ref>). We aim at showing that the matching M:C ^O,R{} can be seen as a teacher for L. To this end, it suffices to show that the condition ∀ c ≠ c_0 ∈ C: P^O,R(M(c_0)|c_0) > P^O,R(M(c_0)|c) is satisfied provided that is sufficiently small. We briefly note that |M(c)| + |U_1(c)| = m ≥ d^+ and ≤ 1/2, and proceed with two claims which will help us to verify (<ref>). Claim 1: Call a subset of Z c-rare if it contains a (low probability) element from U(c) while missing a (high probability) element from M(c). Suppose that d ≤ d^+. Then the probability that d-fold P(·|c)-sampling without replacement leads to a c-rare sample is smaller than d divided by 1-/d(c) and, therefore, smaller than 2dd(c). Proof of Claim 1: The total P(·|c) probability mass of U(c) is whereas any element of M(c) has a P(·,c)-probability of 1-/d(c). For k=1,…,d, let E_k be the event that, within trial k, a point from U(c) is sampled although at least one point from M(c) has not been sampled before. It suffices to upper-bound the probability of E_1 ∨…∨ E_d. The probability of E_k is obviously smaller than divided by 1-/d(c) and therefore smaller than d(c)/1-≤ 2d(c). An application of the union bound yields an additional factor d. Claim 2: Suppose that d ≤ d(c). Then a sample of size d which contains an element from U_1(c) is c-rare (because it necessarily must miss an element from M(c)). Setting c = c_0 and d = d(c_0), we infer from the above claims that P^O,R(M(c_0)|c_0) > 1-2d(c_0)^2. Consider now an arbitrary, but fixed, concept c_1 ∈ C{c_0}. Then M(c_1) ≠ M(c_0). We proceed by case analysis: Case 1: Neither M(c_0) ⊂ M(c_1) nor M(c_1) ⊂ M(c_0). Then M(c_0) is a c_1-rare sample. Hence P^O,R(M(c_0)|c_1) < 2d(c_0)d(c_1). Case 2: M(c_0) ⊂ M(c_1). We apply a symmetry argument. Every sample containing d(c_0) elements of M(c_1) has the same chance for being obtained from d(c_0)-fold P(·|c_1)-sampling without replacement. Hence P^O,R(M(c_0)|c_1) ≤d(c_1)d(c_0)^-1≤1/d(c_1)≤1/2 , where the last two inequalities follow from 1 ≤ d(c_0) ≤ d(c_1)-1. Case 3: M(c_1) ⊂ M(c_0). We may assume that M(c_0) M(c_1) ∪ U_1(c_1) because, otherwise, we obtain directly P^O,R(M(c_0)|c_1) = 0. We apply again a symmetry argument. Every sample containing M(c_1) and d(c_0) - d(c_1) elements of U_1(c_1) has the same chance for being obtained from d(c_0)-fold P(·|c_1)-sampling without replacement. Hence P^O,R(M(c_0)|c_1) ≤m-d(c_1)d(c_0)-d(c_1)^-1 . The latter expression is upper-bounded by 1/2 because 1 ≤ d(c_0)-d(c_1) < m-d(c_1), d(c_1) ≤ d(c_0)-1 ≤ m-2 and, therefore, m - d(c_1) ≥ 2. It becomes obvious from this discussion that condition (<ref>) is satisfied provided that is sufficiently small. Suppose that (C,Z,) is of the form as described in Example <ref>. Then ^O,R(C) = (G(C)^O,R) and ^O,R(C) = (G(C)_≠^O,R). The ≥-direction of the claimed equalities is covered by Theorem <ref>. We have to show the ≤-direction. 
We may restrict ourselves to proving ^O,R(C) ≤(G(C)_≠^O,R) because the proof for ^O,R(C) ≤(G(C)^O,R) is quite similar and uses the same kind of arguments that we had used in the final part of the proof of Theorem <ref>. Set m = |X|, d^+ = (G(C)_≠^O,R) and let M: C ^O,R{} be a C-saturating matching in G(C)_≠^O,R of order d^+. If d^+ = m, then we are done because ^O,R(C) cannot exceed m. We may assume therefore that d^+ ≤ m-1. For every c ∈ C, we set d(c) = |M(c)|. Note that 1 ≤ d(c) ≤ d^+. We fix for each concept c ∈ C a sequence z_1^c,…,z_m^c consisting of all elements of Z_c subject to the constraint that z_1^c,…,z_d(c) ^c = M(c), i.e., this sequence must start with M(c). In the sequel, we will specify the parameter set of an MLE-learner of C. We do this in two stages. In Stage 1, we make a preliminary definition which already achieves that each c^* ∈ C is a (not necessarily unique) maximizer of P^O,R(M(c^*|c)). In Stage 2, we make some infinitesimal changes of the parameter set (by bringing a small parameter >0 into play) so that, after these changes have taken place, each c^* ∈ C will be a unique maximizer of P^O,R(M(c^*|c)). This would imply that M can be viewed as a teacher for L, which would complete the proof. Details follow. We enter Stage 1 of the parameter construction. Let L be the MLE-learner whose parameters are given by P(z|c) = {[ 2^-i ; 2^-d(c)/m-d(c) ; 0 ]. . In other words, given c, L assigns probability mass 2^-i to the i-the element of the sequence M(c) and distributes the remaining probability mass, 2^-d(c), evenly on the elements of Z_c M(c). Note that the c-conditional likelihood of an element in M(c) is at least 2^-d(c) while the probability of an element in Z_c M(c) equals 2^-d(c)/m-d(c)≤ 2^-d(c) with equality only if d(c) = m-1. It is easy to determine the c-conditional likelihood of M(c): P^O,R(M(c) | c) = ∏_i=1^d(c)2^-i/∏_i=1^d(c)-12^-i = 2^-d(c) . The middle term contains in the numerator the product of the c-conditional likelihoods of z_1^c,…,z_d(c)^c, respectively. In the denominator, it contains the product of the corresponding normalization factors: if z_1^c,…,z_j^c haven been sampled within the first j trials, then the remaining probability mass equals 1-∑_i=1^j2^-i = 2^-j. Let us now fix an arbitrary target concept c^* ∈ C and see how the c^*-conditional likelihood of M(c^*) relates to the c-conditional likelihood of M(c^*) for some other concept c ∈ C {c^*}. We aim at showing that P^O,R(M(c^*) | c) ≤ P^O,R(M(c^*) | c^*). We may assume that c M(c^*) because, otherwise, we would obtain P^O,R̅(M(c^*)) = 0, and we were done. For sake of simplicity, we set d := d(c^*) and z_i := z_i^c^* for i=1,…,d. Let us briefly discuss the case that M(c) and M(c^*) are equal as sets. Then there exists a permutation σ such that M(c) = z_σ(1),…,z_σ(d). Since M is a matching, σ cannot be the identity permutation. It follows that P^O,R(M(c^*)|c^*) > P^O,R(M(c^*)|c) because (P(z_i|c^*))_i=1,…,d = (2^-i)_i=1,…,d is a strictly decreasing sequence while (P(z_i|c))_i=1,…,d (as a non-identity permutation of (2^-i)_i=1,…,d) is not.[Compare with Remark <ref>.] From now, we assume that M(c) and M(c^*) are different even when viewed as sets. Let j be the number of z ∈ Z occurring in M(c) and in M(c^*). We can make the pessimistic assumption that the sequences M(c) starts with z_1,…,z_j because this will lead to the largest conceivable value of P^O,R(M(c^*) | c).[This brings the j largest c-conditional likelihoods into play and puts them in the most effective position.] 
The remaining observations z_j+1,…,z_d(c) must then be members of Z_c M(c). Remember that for each z ∈ Z_c M(c) we have that P(z|c) = 2^-d(c)/m-d(c). The term P^O,R(M(c^*) | c) can be expressed as a product of two terms. The first one (resp. second one) is the contribution of the first j trials (resp. the last d-j trials). Since M(c) starts with z_1,…,z_j, the first term is simply T_1 := 2^-j. The second term has the following form T_2 := ( 2^-j/m-j)^d-j/ 2^-j( 2^-j-2^-j/m-j) ( 2^-j-22^-j/m-j) …( 2^-j-(d-j-1)2^-j/m-j) . As usual, the numerator contains the product of the c-conditional (here: uniform) likelihoods while the denominator contains the product of the corresponding normalization factors. T_2 looks terrifying at first glance, but luckily there is a lot of cancellation and T_2 can be rewritten as follows: T_2 = 1/(m-j)^d-j( 1-1/m-j) ( 1-2/m-j) …( 1-d-j-1/m-j) = 1/(m-j)(m-j-1)(m-j-2) … (m-d+1) . Remember that d = d(c^*) ≤ m-1. It follows that m-d+1 ≥ 2 and therefore T_2 ≤ 2^-(d-j) P^O,R(M(c^*|c) = T_1 · T_2 ≤ 2^-d with equality only if either j=d or d = m-1 and j = m-2. Note that j=d if and only if the sequence M(c) starts with the sequence M(c^*) = z_1,…,z_d. We enter now Stage 2 of the parameter construction, in which we make some infinitesimal changes of the parameters that we have used so far. In order to distinguish the new parameter collection from the old one, the new parameters are denoted by P_(z|c). They are defined as follows: P_(z|c) = {[ 2^-i ; 2^-i+ ; 2^-d(c)-/m-d(c) ; 0 ]. . The main difference to the old parameter collection is the “extra-bonus” that c assigns to the last element z_d(c)^c of the sequence M(c). Now the total probability mass assigned to z_1^c,…,z_d(c)^c is by the amount of greater than before, so that only probability mass 2^-d(c)- is left for Z_c M(c). Again, this probability mass is shared evenly among the elements of Z_c M(c). Here comes the central observation: Claim: If > 0 is sufficiently small, then the following implications are valid: P^O,R(M(c^*)|c^*) > P^O,R(M(c^*)|c) P_^O,R(M(c^*)|c^*) > P_^O,R(M(c^*)|c) , P^O,R(M(c^*)|c^*) = P^O,R(M(c^*)|c) P_^O,R(M(c^*)|c^*) > P_^O,R(M(c^*)|c) . Proof of the Claim: The first implication is based on a simple continuity argument. The second implication can be verified as follows. Remember from the discussion in Stage 1 that P^O,R(M(c^*)|c^*) = P^O,R(M(c^*)|c) can occur only if either M(c) starts with M(c^*) = z_1,…,z_d or if d = m-1 and j = m-2. In the former case, the effect of P_(z_d|c^*) = P(z_d|c^*) + and P_(z_d|c) = P(z_d|c) will be that P_^O,R(M(c^*)|c^*) > P^O,R(M(c^*)|c^*) = P^O,R(M(c^*)|c) = P_^O,R(M(c^*)|c) , as desired. In the latter case, we have M(c^*) = z_1,…,z_m-1 and either M(c) = z_1,…,z_m-2 or M(c) = z_1,…,z_m-2,z_m. In the latter case, we again end up at (<ref>). Suppose therefore that M(c^*) = z_1,…,z_m-1 and M(c) = z_1,…,z_m-2. Here the situation is less clear, because the -bonus will affect not only the c^*-conditional likelihood of M(c^*) but also the c-conditional likelihood. We therefore compute both quantities and compare them afterwards. Clearly P_^O,R(M(c^*)|c^*) = 2^-(m-1) +. The term P_^O,R(M(c^*) | c) can be expressed as a product of two terms, The first one (resp. second one) is the contribution of the first m-2 trials (resp. the last trial). Since M(c) = z_1,…,z_m-2, the first term clearly equals 2^-(m-2) +. Note that 2^-(m-2)- is the probability mass remaining for, and evenly shared by, z_m-1 and z_m. 
The second term equals therefore P_(z_m-1|c)/2^-(m-2)- = (2^-(m-2)-) / 2/2^-(m-2)- = 1/2 . It follows that P_^O,R(M(c^*) | c) = 1/2·(2^-(m-2) + ) = 2^-(m-1) + /2 , which is less than P_^O,R(M(c^*)|c^*) = 2^-(m-1) +. This completes the proof of the claim. The above discussions show that we can view M a teacher for the learner L with parameter collection (P_(z|c))_z ∈ Z,c ∈ C. This completes the proof of the theorem. Combining Theorems <ref> and <ref> with what we already know about saturating matching numbers, we obtain the following result: Suppose that (C,Z,) is of the form as described in Example <ref> and (α,β) ≠ (O,R). Then ^α,β(C) = (G(C)^α,β) ^α,β(C) = (G(C)_≠^α,β) . Moreover ^O,R(C) ≥ max{^O,R(C) , ^O,R(C)} , ^O,R(C) ≥ max{^O,R(C) , ^O,R(C)} . The first assertion of the corollary implies the correctness of the results which are visualized in Fig. <ref>. The following two results provide some supplementary information: Let (α,β) and (α',β') be two different sampling modes. There exists a concept class C such that (G(C)^α',β') ≠(G(C)^α,β). We present the proof for (α,β) = (O,R) and (α',β') = (O,R).[The proof for the other choices of (α,β) and (α',β') is similar.] Let X = {x_1,…,x_m}, let Z = X ×{0,1}, let C_m be the powerset of X and let be given by (<ref>). Let _2 (resp. '_2) be the set of all A ∈^O,R (resp. A ∈^O,R) such that |A| ≤ 2. A simple counting argument shows that |'_2| < |_2|. Consider the bipartite graph G with vertex sets C_m and _2 and with an edge (c,A) if and only if c A. Each vertex in _2 has degree at least D := 2^m-2 whereas each vertex in C_m has degree d := 1+2m+1/2(m-1)m. Suppose that m is sufficiently large such that d ≤ D. Fix an arbitrary subset S of _2. It follows that |Γ(S)| ≥D/d· |S| ≥ |S| so that G satisfies Hall's condition. It follows that G admits a _2-saturating matching, say M. Let C be the set of concepts in C_m having an M-partner. By construction: (G(C)^O,R) = 2. For cardinality reasons, namely |C| = |M| = |_2| > |'_2|, we have (G(C)^O,R) > 2. Theorem <ref> implies that the parameters with different colors in Fig. <ref> can generally have different values. §.§ MAP- versus MLE-Learners Suppose that L is an MLE-learner for C. Let L' be a MAP-learner that differs from L only by having non-uniform priors, i.e., the conditional likelihoods are the same. The following example demonstrates that the gap between _L^α,β(C) and _L'^α,β(C) can become arbitrarily large.[This example uses a concept class, namely singletons plus empty set, which is often used to demonstrate that the classical teaching model from <cit.> may assign an inappropriately high teaching dimension to a trivial concept class.] Let X = {x_1,…,x_m}, Z = X ×{0,1}, C = {{x_1},…,{x_m}}∪{} and let be given by (<ref>). Consider the MLE-learner L be given by the parameters P((x_i,c(x_i)) | c) = 1/m for each c ∈ C and i=1,…,m. We assume for simplicity that the sampling mode (α,β) of L equals (O,R), but the following reasoning (mutatis mutandis) applies to any other sampling mode as well. Clearly, for each k ∈ [m], the concept {x_k} can be taught by the single observation (x_k,1). However can only be taught by the full set A_0 := {(x_i,0): i=1,…,m} of observations that is consistent with: as long as some (x_k,0) is missing in a set A ⊂ A_0, we have that P(A|) = P(A|{x_k}) so that is not the unique maximizer of P(A|c). We may conclude from this discussion that _L^α,β(C) = m. Let L' be a MAP-learner that differs from L only by having for a higher prior than for the other concepts in C. 
Then the concept {x_k} can still be taught by the single observation (x_k,1). But now also the concept ∈ C can be taught in a trivial fashion by ∈ 2^Z. We may conclude that _L'^α,β(C) = 1. In contrast to Example <ref>, the next result shows that, in case of optimally parameterized learners, the advantage of MAP-learners over MLE-learners is all but dramatic: Suppose that (C,Z,) is of the form as described in Example <ref> and (α,β) ≠ (O,R). Then ^α,β(C) ≤^α,β(C) ≤ 1+^α,β(C) . Moreover, there exist concept classes C' and C” such that ^α,β(C') = ^α,β(C') ^α,β(C”) = 1+^α,β(C”) . Clearly ^α,β(C) ≤^α,β(C). In order to obtain (<ref>), it suffices therefore to show that ^α,β(C) ≤ 1 + ^α,β(C), or equivalently, that (G(C)_≠^α,β) ≤ 1 + (G(C)^α,β). We present the proof for (α,β) = (O,R).[The proof for the other choices of (α,β) is similar.] For sake of brevity, set m := |X|, G = G(C)^O,R and d := (G). Since (G_≠) ≤ m, we may assume that d ≤ m-2. Let M:C 2^Z be a C-saturating matching of order d in G. If M does not assign to any concept in C, then (G_≠) ≤ d. Otherwise, if M(c_0) = for some c_0 ∈ C, then we may arbitrarily pick a set A ⊂ X of size d+1 and replace the M-partner of c_0 by the set B = {(a,c_0(a)): a ∈ A}. The resulting matching now witnesses that (G_≠) ≤ d+1. We still have to specify concept classes C' and C” which satisfy (<ref>). As for C', there are plenty of choices, e.g., C' = {{x_i}: i = 1,…,m} satisfies ^α,β(C') = ^α,β(C') = 1 . In order to specify an appropriate class C”, we assume again that (α,β) = (O,R) and proceed as follows. Let X = {x_1,…,x_m}, let Z = X ×{0,1}, let C_m be the powerset of X_m and let be given by (<ref>). Let _≤ d (resp. '_≤ d) be the set of subsets (resp. non-empty subsets) of Z of size at most d. Consider the bipartite graph G with vertex sets C_m and _≤ d an edge (c,A) if and only if c A. If m is sufficiently large (while d is kept fixed), G admits a _≤ d-saturating matching, say M. Let C” be the set of concepts in C_m having an M-partner. By construction: (G(C”)^O,R) = d. For cardinality reasons, namely |C”| = |M| = |_≤ d| > |_≤ d| - 1 = |'_≤ d|, we have (G(C”)_≠^O,R) > d, which implies that (G(C”)_≠^O,R) = d+1. §.§ Parameters Bounding MLE-TD from Above Since can never be smaller than , it follows that ^O,R(C) is the largest among the parameters occurring in Corollary <ref>. Hence upper bounds on ^O,R(C) are, all the more, upper bounds on the other parameters. For this reason, we confine ourselves to MLE-learners and to sampling mode (O,R) in what follows. In order to simplify notation, we will write * 2^Z instead of ^O,R, * (C) instead of ^O,R(C), * G^+(C) instead of G(C)^O,R_≠. Among the parameters that bound (C) from above are the antichain number of C, the VC-dimension of C and the so-called no-clash teaching dimension of C. We begin with the definition of the antichain number: T:C 2^Z is called an antichain mapping for C if the following holds: * Each concept c ∈ C is consistent with T(c). * The sets (T(c))_c ∈ C form an antichain, i.e., ∀ c_1 ≠ c_2 ∈ C: T(c_1) T(c_2) ∧ T(c_2) T(c_1) . The smallest possible order of an antichain mapping for C is called the antichain number of C and denoted by (C). It is well-known that the antichain number is upper-bounded by the VC-dimension: Suppose that the concept class C is a family of subsets of a finite domain X. Then (C) ≤(C). 
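For small classes in the labeled-example setting, both quantities can be computed by brute force, which makes the stated inequality easy to check on examples. The sketch below is ours and purely illustrative (the four-concept class over three domain points is arbitrary); it enumerates shattered sets for the VC dimension and searches over antichain mappings of increasing order.

```python
from itertools import chain, combinations, product

X = ["x1", "x2", "x3"]
# A small concept class: each concept is the set of domain points it labels 1.
C = [frozenset(), frozenset({"x1"}), frozenset({"x2"}), frozenset({"x1", "x2"})]

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

def vc_dimension(C, X):
    best = 0
    for S in powerset(X):
        patterns = {frozenset(c & set(S)) for c in C}
        if len(patterns) == 2 ** len(S):        # S is shattered by C
            best = max(best, len(S))
    return best

def labeled_samples(c, X, max_size):
    # All sets of labeled examples (x, c(x)) of size at most max_size (all consistent with c).
    pts = [(x, 1 if x in c else 0) for x in X]
    return [frozenset(A) for A in powerset(pts) if len(A) <= max_size]

def antichain_number(C, X):
    for d in range(1, len(X) + 1):
        for choice in product(*(labeled_samples(c, X, d) for c in C)):
            if all(not (A <= B or B <= A) for A, B in combinations(choice, 2)):
                return d                        # an antichain mapping of order d exists
    return None

print("VCdim =", vc_dimension(C, X))
print("ATCN  =", antichain_number(C, X))
```

On this toy class the sketch reports an antichain number of 1 and a VC dimension of 2, consistent with the bound just stated.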
We proceed with the definition of the teaching dimension in the so-called no-clash model of teaching: A mapping T: C 2^Z is called clash-free on C if it satisfies the following: * Each c ∈ C is consistent with T(c). * If c_1 ≠ c_2 ∈ C, then c_1 is inconsistent with T(c_2) or c_2 is inconsistent with T(c_1).[The situation that c_1 is consistent with T(c_2) and c_2 is consistent with T(c_1) would be called a clash of c_1 and c_2. This explains why the mapping T is called clash-free.] The no-clash teaching dimension of C, denoted as (C), is the smallest possible order of a mapping T:C 2^Z that is clash-free on C. Suppose that (C,Z,) is of the form as described in Example <ref>. Then (C) ≤(C) and (C) ≤(C). Because (C) = (G^+(C)), it suffices to show that (G^+(C)) is upper-bounded by (C) and (C). An antichain mapping T:C 2^Z clearly satisfies (<ref>) and does not have in its image. Thus, an application of Remark <ref> yields (C) ≥(G^+(C)). A clash-free mapping T:C 2^Z must be of order at least 1. There can be at most one concept c in C such that T(c) =. Suppose that T(c)=. Consider an arbitrary, but fixed, concept c' ∈ C{c}. Since c' is consistent with (the empty sample) T(c) and T is clash-free, the concept c must be inconsistent with T(c'). Let us redefine T(c) as a singleton set {(x,b)} such that b = c(x). This modification of T is still clash-free and leaves the order of T unchanged. Moreover, after this modification, T satisfies  (<ref>) and does not have in its image. Now another application of Remark <ref> yields (C) ≥(G^+(C)). The inequality (C) ≤(C) had been proven already in <cit.>. The proof given there does not make use of saturating matching numbers and is more complicated. Because (C) ≤(C), we immediately obtain the following result: Suppose that (C,Z,) is of the form as described in Example <ref>. Then (C) ≤(C). §.§ Computational Considerations We will show in the course of this section that (G^+(C)) (and related quantities) can be computed in time poly(|C|,|X|) from a given (finite) concept class C 2^X. The central observation will be that, in order to find a C-saturating matching of minimum order in G^+(C), we do not need to compute the (possibly exponentially large) bipartite graph G^+(C). All pieces of information about G^+(C) that we need in the course of the algorithm can be efficiently extracted from the much smaller bipartite graph G(C). We start with a lemma that is particularly interesting when we have a bipartite graph whose first vertex set, V_1, is much smaller than its second vertex set, V_2: Let G = (V_1,V_2,E) with E V_1 × V_2 be a bipartite graph. Let be an oracle that, upon request (v,k) with v ∈ V_1 and k ∈ [|V_1|], returns min{_G(v),k} distinct neighbors of v.[The oracle can be implemented efficiently if, for instance, G is represented by the adjacency lists for the vertices in V_1 and there is direct access to each of these lists.] Then there is an oracle algorithm A^ which computes a maximum matching in G and has a time bound that is polynomial in |V_1|. For sake of brevity, we set n = |V_1|. Let V'_1 V_1 be the set of vertices in V_1 with less than n neighbors, and let V”_1 = V_1 V'_1 be the set of remaining vertices in V_1, i.e., the vertices with at least n neighbors. The algorithm A^ proceeds as follows: * For each v ∈ V_1, it sends the request (v,n) to and receives a list of all neighbors if v ∈ V'_1, resp. a list of n distinct neighbors if v ∈ V”_1. * Now A^ computes a maximum matching M' in the subgraph G' of G that is induced by V'_1 and Γ(V'_1). 
* A^ augments M' to a V_1-saturating matching in a greedy fashion: for each v ∈ V”_1, it inspects the list of n distinct neighbors of v and matches v with the first neighbor which had not been matched before. Note that G' has at most n(n-1) vertices. Moreover, among n neighbors of a vertex v ∈ V”_1, there must be at least 1 neighbor which is not already matched with another vertex in V_1. It easily follows that A^ returns a maximum matching in poly(|V_1|) time. With a bipartite graph G = (V_1,V_2,E), we associate the bipartite graph G^+ = (V_1,2^V_2{},E^+) E^+ = {(v,B) ∈ V_1 × 2^V_2{}: {v}× B E} . In other words: the pair (v,B) with v ∈ V_1 and ⊂ B V_2 is an edge in E^+ iff, for every v' ∈ B, the pair (v,v') is an edge in E. Given a bipartite graph G = (V_1,V_2,E), a V_1-saturating matching of minimum order in G^+ (resp. an error message if a V_1-saturating matching does not exist) can be computed in polynomial time: We consider first the problem of computing a V_1-saturating matching of minimum order in G^+. Let us fix some notation. For ℓ=1,…,|V_2|, let G^(ℓ) = (V_1,V_2^(ℓ),E^(ℓ)) be the bipartite graph given by V_2^(ℓ) = {B V_2: 1 ≤ |B| ≤ℓ}) E_2^(ℓ) = {(v,B) ∈ V_1 × V_2^(ℓ): {v}× B E} . In other words, G^(ℓ) is the subgraph of G^+ induced by V_1 and V_2^(ℓ). Given G, ℓ∈ [|V_2|], k ∈ [|V_1|] and v ∈ V_1, it is easy to compute a list of min{(v),k} distinct neighbors of v in G^(ℓ). It follows from Lemma <ref> that, given G and ℓ∈ [|V_2|], we can compute in poly(|V_1|,|V_2|) steps a maximum matching M_ℓ in G^(ℓ). Let ℓ^+ be the minimum ℓ such that M_ℓ is of size |V_1|, respectively ℓ^+ = 1+|V_2| if none of the M_ℓ saturates V_1. If ℓ^+ ≤ |V_2|, then M_ℓ^+ is the desired V_1-saturating matching of minimum order in G^+. If ℓ^+ = |V_2|+1, we may report error because G^+ does not admit a V_1-saturating matching. Suppose that (C,Z,) is of the form as described in Example <ref>. Then the following objects can be computed in polynomial time: * the bipartite consistency graph G(C) with vertex sets C and Z * the (identical) parameters (G^+(C)) and (C) * a C-saturating matching M in G^+(C) of order (G^+(C)) * parameters representing an MLE-learner L for C and a teacher T for L who is of order (C) Given C, the set Z and the bipartite graph G(C) can clearly be computed in polynomial time. We may now apply Theorem <ref> to the bipartite graph G = G(C). Then G^+ in Theorem <ref> equals G^+(C), Hence the algorithm sketched in the proof of Theorem <ref> can be used for finding a C-saturating matching M in G^+(C) of minimum order (which is order (G^+(C))). As a byproduct, the parameter (G^+(C)) is now known as well. As for the specification of an appropriate MLE-learner L, we may use the parameter setting that is found in the proof of Theorem <ref>. As also shown in that proof, M (already known to be computable from C in polynomial time) represents a teacher of order (C) for L. This completes the proof of the corollary. It is straightforward to extend Corollary <ref> from sampling mode (O,R) to other sampling modes, and from to . The main point is to adjust the definition of G^+ in (<ref>) so that G(C)^+ becomes identical to G(C)_≠^α,β resp. to G(C)^α,β. We omit the details. Open Problems and Future Work. What are “natural parameterizations” of MAP- or MLE-learners? Does MAP-based teaching of naturally parameterized learners force the teacher to present observations/examples which illustrate the underlying target concept in an intuitively appealing way? 
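As a concrete companion to the computational considerations above, the following sketch (ours; the small labeled-example class and all identifiers are invented for illustration) computes the smallest order of a C-saturating matching in G^+(C) — and hence, by Corollary <ref>, the MLE-teaching dimension in the unordered, without-replacement mode — by running a standard augmenting-path matching on the subgraphs whose observation sets have size at most ℓ, for increasing ℓ. Unlike the polynomial-time argument in the proof, the sketch simply enumerates the consistent sets, which is fine for classes of this size.

```python
from itertools import combinations

# A small concept class in the labeled-example setting: each concept is the set of
# domain points it labels 1; an observation is a pair (x, label).
X = ["x1", "x2", "x3"]
concepts = {
    "c0": frozenset(),
    "c1": frozenset({"x1"}),
    "c2": frozenset({"x2"}),
    "c3": frozenset({"x1", "x2"}),
}

def consistent_sets(c, max_size):
    """Nonempty sets of labeled examples consistent with c, of size at most max_size."""
    pts = [(x, 1 if x in c else 0) for x in X]
    return [frozenset(A) for k in range(1, max_size + 1) for A in combinations(pts, k)]

def has_saturating_matching(adj):
    """Kuhn's augmenting-path algorithm; True iff every concept can be matched."""
    match = {}                                   # right vertex (sample set) -> concept
    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False
    return all(augment(c, set()) for c in adj)

def min_matching_order(concepts):
    """Smallest ell such that the subgraph with sample sets of size <= ell (empty set removed)
    admits a C-saturating matching."""
    for ell in range(1, len(X) + 1):
        adj = {name: consistent_sets(spec, ell) for name, spec in concepts.items()}
        if has_saturating_matching(adj):
            return ell
    return None

print("minimum order of a C-saturating matching:", min_matching_order(concepts))
```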
§ PROOF OF FACTS 1–4 Fact 1: Suppose that 0 ≤ |Δ| < p < 1/2. Let c^±Δ be the concept given by (<ref>). Then P^O,R(z_1,z_2)|c^±Δ) and P^O,R(z_1,z_2|c^±Δ) are both strictly decreasing when |Δ| is increased. The assertion is obvious for P^O,R(z_1,z_2)|c^±Δ) = (p+Δ)(p-Δ) = p^2-Δ^2. Consider now the function h(Δ) := P^O,R(z_1,z_2|c^±Δ) = (p+Δ)(p-Δ)/1-p-Δ + (p-Δ)(p+Δ)/1-p+Δ = 2(1-p)(p^2-Δ^2)/(1-p)^2-Δ^2 , where the last equation can be obtained by a straightforward calculation. Another straightforward, but tedious, calculation shows that h'(Δ) = - 4(1-p)(1-2p)Δ/((1-p)^2-Δ^2)^2. Hence the function h(Δ) is strictly increasing for Δ < 0 and strictly decreasing for Δ > 0. It is therefore strictly decreasing when |Δ| is increased. Fact 2: Suppose that 0 ≤Δ < p < 1/2. Let c^±Δ be the concept given by (<ref>). Then P^O,R(z_1,z_2 | c^±Δ) - P^O,R(z_1,z_2 | c^±0) {[ = 0 ; > 0 ; < 0 ]. . We set h(Δ) := P^O,R(z_1,z_2|c^±Δ) = (p+Δ)(p-Δ)/1-p-Δ = p^2-Δ^2/1-p-Δ and observe that P^O,R(z_1,z_2 | c^±Δ) - P^O,R(z_1,z_2 | c^±0) = h(Δ) - h(0) = (1-p)(p^2-Δ^2) - (1-p-Δ)p^2/(1-p-Δ)(1-p) = Δ (p^2-(1-p)Δ)/(1-p-Δ)(1-p) . The denominator of the latter expression is strictly positive. Moreover Δ (p^2-(1-p)Δ) {[ = 0 ; > 0 ; < 0 ]. , which accomplishes the proof of Fact 2. Fact 3: Suppose that 0 ≤Δ < p < 1/2. Let c^±Δ be the concept given by (<ref>). Then P^O,R(z_1,z_1,z_2 | c^±Δ) - P^O,R(z_1,z_1,z_2 | c^±0) {[ = 0 ; > 0 ; < 0 ]. . Let 0 < δ < 1 be given by Δ = δ p and note that P^O,R(z_1,z_1,z_2|c^±δ p) = (p+δ p)^2 · (p-δ p) = (1+δ)^2 · (1-δ) · p^3 = (1+δ-δ^2-δ^3) · p^3 . It follows that P^O,R(z_1,z_1,z_2|c^±δ p) - P^O,R(z_1,z_1,z_2|c^±0) = δ· (1-δ-δ^2) · p^3 . Furthermore δ· (1-δ-δ^2) {[ = 0 ; > 0 ; < 0 ]. . We may conclude from this discussion that (<ref>) is valid. Fact 4: Suppose that 0 < p < 1/2 and 1 ≤ t < 1-p/p. Let c^(t) be the concept given by (<ref>). Then P^O,R(z_1,z_2|c^(t)) is strictly increasing with t. Set h(t) := P^O,R(z_1,z_2|c^(t))/p^2 = 1/1-pt + 1/1-p/t = 1/1-pt + t/t-p = (t-p) + (1-pt)s/(1-pt)(t-p) = 2t-pt^2-p/(p^2+1)t-pt^2-p . It suffices to show that h(t) is strictly increasing with t. To this end, we compute the first derivative: h'(t) = (2-2pt) · ((p^2+1)t-pt^2-p) - (2t-pt^2-p)(p^2+1-2pt)/(1-pt)^2 · (t-p)^2 . The denominator is strictly positive. After an application of the distributive law and some cancellation, the numerator has the form f(t) := p(1-p^2)(t^2-1) . Hence the numerator equals 0 for t=1 and is strictly positive for t>1. It follows that h(t) with t ≥ 1 is strictly increasing.
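For readers who prefer a numerical cross-check, the monotonicity and sign claims in Facts 1–4 can also be verified directly on a grid. The snippet below is ours (any 0 < p < 1/2 may be substituted); the sampling modes are spelled out in words in the comments, and the expressions are exactly the ones derived in the proofs above.

```python
import numpy as np

p = 0.3   # any 0 < p < 1/2 works for these spot checks

# Fact 1: both two-observation likelihoods decrease as |Delta| grows.
deltas = np.linspace(0.0, 0.95 * p, 200)
with_repl = (p + deltas) * (p - deltas)                       # ordered, with replacement
no_repl_unordered = ((p + deltas) * (p - deltas) / (1 - p - deltas)
                     + (p - deltas) * (p + deltas) / (1 - p + deltas))  # unordered, without replacement
assert np.all(np.diff(with_repl) < 0) and np.all(np.diff(no_repl_unordered) < 0)

# Fact 2: ordered sampling without replacement of (z1, z2);
# the difference h(Delta) - h(0) changes sign at Delta = p^2 / (1 - p).
h = lambda d: (p + d) * (p - d) / (1 - p - d)
crossing = p ** 2 / (1 - p)
assert h(0.5 * crossing) > h(0.0) and h(1.5 * crossing) < h(0.0)
assert abs(h(crossing) - h(0.0)) < 1e-9

# Fact 3: ordered sampling with replacement of (z1, z1, z2);
# sign change of (p+Delta)^2 (p-Delta) - p^3 at Delta = (sqrt(5)-1)/2 * p.
g = lambda d: (p + d) ** 2 * (p - d) - p ** 3
golden = 0.5 * (np.sqrt(5.0) - 1.0) * p
assert g(0.5 * golden) > 0 and g(min(1.1 * golden, 0.99 * p)) < 0
assert abs(g(golden)) < 1e-9

# Fact 4: unordered sampling without replacement of {z1, z2} under c^(t);
# the likelihood is strictly increasing in the scaling factor t.
ts = np.linspace(1.0, 0.99 * (1 - p) / p, 200)
fact4 = p ** 2 * (1.0 / (1.0 - p * ts) + 1.0 / (1.0 - p / ts))
assert np.all(np.diff(fact4) > 0)

print("All four facts confirmed numerically for p =", p)
```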
http://arxiv.org/abs/2307.04063v1
20230708235625
Symmetry energy and neutron star properties constrained by chiral effective field theory calculations
[ "Yeunhwan Lim", "Achim Schwenk" ]
nucl-th
[ "nucl-th", "astro-ph.HE", "nucl-ex" ]
[E-mail: ][email protected] Department of Physics, Yonsei University, Seoul 03722, South Korea [E-mail: ][email protected] Technische Universität Darmstadt, Department of Physics, 64289 Darmstadt, Germany ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung GmbH, 64291 Darmstadt, Germany Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany We investigate the nuclear symmetry energy and neutron star properties using a Bayesian analysis based on constraints from different chiral effective field theory calculations using new energy density functionals that allow for large variations at high densities. Constraints at high densities are included from observations of GW170817 and NICER. In particular, we show that both NICER analyses lead to very similar posterior results for the symmetry energy and neutron star properties when folded into our equation of state framework. Using the posteriors, we provide results for the symmetry energy and the slope parameter, as well as for the proton fraction, the speed of sound, and the central density in neutron stars. Moreover, we explore correlations of neutron star radii with the pressure and the speed of sound in neutron stars. Our 95% credibility ranges for the symmetry energy S_v, the slope parameter L, and the radius of a 1.4 M_⊙ neutron star R_1.4 are S_v=(30.6-33.9) MeV, L=(43.7-70.0) MeV, and R_1.4=(11.6-13.2) km. Our analysis for the proton fraction shows that larger and/or heavier neutron stars are more likely to cool rapidly via the direct Urca process. Within our equation of state framework, a maximum mass of neutron stars M_max > 2.1 M_⊙ indicates that the speed of sound needs to exceed the conformal limit. Symmetry energy and neutron star properties constrained by chiral effective field theory calculations Yeunhwan Lim Achim Schwenk ====================================================================================================== § INTRODUCTION Understanding dense matter is a central challenge in nuclear physics and astrophysics. In nature, dense matter exists in the core of neutron stars under extreme neutron-rich conditions. The properties of neutron-rich matter around nuclear densities are described by the nuclear symmetry energy and its density dependence. While there have been impressive constraints from nuclear theory, nuclear experiments, and astrophysics (see, e.g., Refs. <cit.>), more precise determinations of the symmetry energy and its slope parameter L at saturation density, n_0 = 0.16 fm^-3, are still an open problem. From the theoretical side, the symmetry energy is best constrained by controlled calculations of the equation of state (EOS) of neutron matter based on chiral effective field theory (EFT) interactions <cit.>. This yields values for the symmetry energy S_v at saturation density and the L parameter in the range of S_v = (30-35) MeV and L = (35-70) MeV. However, describing the EOS at all densities in neutron stars requires extensions beyond the reach of chiral EFT calculations. To this end, different extensions, such as piecewise polytropes <cit.>, speed-of-sound based parametrizations <cit.>, nonparametric Gaussian processes <cit.>, or nuclear energy-density functionals (EDFs) have been used (see, e.g., Ref. <cit.>). Recently, new EDFs for the nuclear EOS have been introduced by Huth et al. <cit.>, which have the advantage of providing high-density extrapolations that are consistent with causality and with a maximum of the speed of sound.
These functionals allow for EOS calculations for the broad ranges of conditions reached in core-collapse supernovae and neutron star mergers. In this work, we use these new EDF EOSs to constrain the symmetry energy and neutron star properties based on a prior informed by chiral EFT calculations of neutron matter. From the astrophysics side, the strongest constraint on the nuclear EOS comes from the observation of heavy two-solar-mass neutron stars <cit.>. Moreover, the heaviest well-measured neutron star, PSR J0740+6620, was recently also observed by NICER to provide constraints on its radius <cit.>. In addition, NICER observed the mass and radius of a typical-mass neutron star, PSR J0030+0451 <cit.>. The NICER analyses for both neutron stars by Riley et al. <cit.> and by Miller et al. <cit.> give different mass-radius posteriors, but agree within their uncertainties. The differences in the posteriors are reduced by including realistic assumptions for the EOS, and in this work we explicitly show that in our EDF EOS ensembles the results from both NICER analyses are very similar. In addition to the NICER constraints, we include in our Bayesian inference the tidal deformability information from GW170817 inferred by LIGO/Virgo <cit.>. Using the chiral EFT-informed priors with the astrophysical posteriors, we provide results for the symmetry energy and neutron star properties. This paper is organized as follows. In Sec. <ref> we introduce our EOS framework using the new EDFs from Huth et al. <cit.>. These are fit to a range of chiral EFT calculations of neutron matter. Building on this EOS prior, we include constraints at high densities from observations of GW170817 and NICER using a Bayesian analysis. In Sec. <ref>, we investigate the posterior distributions for the symmetry energy and the slope parameter, as well as for the proton fraction, the speed of sound, and the central density in neutron stars. Moreover, we explore correlations of neutron star radii with the pressure and the speed of sound in neutron stars. Finally, we summarize our results and conclude in Sec. <ref>. § EQUATION OF STATE FRAMEWORK The EOS describes the energy density and pressure of matter for given baryon density, composition, and temperature. Since we focus on cold neutron stars, we consider zero temperature. For a given EOS, the mass and radius of neutron stars follow by solving the Tolman-Oppenheimer-Volkoff (TOV) equations <cit.>. Our starting point will be the EOS of homogeneous matter, which we constrain by empirical ranges of the properties of symmetric nuclear matter around saturation density and by neutron matter calculations. Based on each EOS, we then consistently calculate the structure of the neutron star crust. Since neutron stars are extremely neutron rich with proton fractions ∼ 5%, the most important constraints for the EOS come from neutron matter calculations. In this work, we focus on neutron matter calculations based on chiral EFT interactions, which have the advantage that chiral EFT predicts consistent many-body interactions and enables systematic uncertainty estimates based on the EFT expansion <cit.>. Neutron matter has been calculated based on chiral two- and three-nucleon interactions using many-body perturbation theory (MBPT) <cit.>, quantum Monte Carlo (QMC) methods <cit.>, self-consistent Green's function (SCGF) methods <cit.>, and coupled cluster (CC) theory <cit.>.
These calculations are able to include all interactions up to next-to-next-to-next-to-leading order (N^3LO) <cit.> and include uncertainty estimates from the EFT truncation <cit.>. §.§ Energy density functionals To extend the EOS to high density we use nonrelativistic EDFs, which depend on the baryon number density n and proton fraction x of uniform matter. The baryonic energy density ε(n,x) is expressed as ε(n,x) = τ_n(n,x)/(2m_N) + τ_p(n,x)/(2m_N) + (1-2x)^2 f_n(n) + [1-(1-2x)^2] f_s(n) , where τ_n/(2m_N) and τ_p/(2m_N) are the neutron and proton kinetic energy densities, with nucleon mass m_N. It was shown that the dependence on isospin asymmetry is to a very good approximation quadratic <cit.>, with the dominant non-quadratic contributions stemming from the kinetic densities, so that Eq. (<ref>) provides a very good approximation for asymmetric nuclear matter. The functionals f_n(n) and f_s(n) can be chosen to satisfy the constraints from neutron matter calculations and symmetric nuclear matter properties, respectively. For the interaction density functionals, we take the form introduced recently by Huth et al. <cit.> f_n(n) = ∑_j=0^3 a_j n^(2+j/3) / (d_j + n^((j+1)/3)) , f_s(n) = ∑_j=0^3 b_j n^(2+j/3) / (d_j + n^((j+1)/3)) , where a_j, b_j are fit parameters and d_j = d fm^(-1-j) with parameter d=3 <cit.>. This corresponds to an expansion of the interaction energy density in powers of the Fermi momentum k_F ∼ n^(1/3), and the denominator ensures that the interaction part becomes proportional to n^(5/3) at higher densities. Note that without the denominator, the interaction part generally causes the speed of sound to exceed the speed of light beyond some baryon density. For a detailed discussion of these new functionals and the parameter choices, see Ref. <cit.>. §.§ Constraints from neutron matter calculations based on chiral effective field theory For neutron matter constraints we use the MBPT calculations from Ref. <cit.> based on different chiral NN+3N Hamiltonians, including the Hebeler+ interactions <cit.>, the NNLOsim potentials <cit.>, as well as the N^3LO 450 MeV and 500 MeV uncertainty bands <cit.> (using the NN EMN interactions <cit.>). The different neutron matter results and their uncertainties are given by the individual lines shown in Fig. <ref>. We use the individual lines to fit the a_j of the EDF for neutron matter, f_n(n) in Eq. (<ref>), based on the k_F expansion and d=3. The b_j of the corresponding symmetric matter part, f_s(n), are determined from empirical properties. We fit to the binding energy E/A(n_0)=-16 MeV at saturation density n_0 = 0.16 fm^-3, the incompressibility K=235 MeV, with K = 9 n^2 ∂^2(E/A)/∂n^2 |_(n_0,x=1/2), and the skewness Q = -300 MeV, with Q = 27 n^3 ∂^3(E/A)/∂n^3 |_(n_0,x=1/2). These values are extracted from Skyrme EDFs and constraints for nuclear matter properties <cit.>, see also Ref. <cit.>. Since neutron star properties are not very sensitive to symmetric nuclear matter, we do not vary all nuclear matter properties, but only explore the most uncertain value of Q in the following, see Sec. <ref>. The uncertainties in our EDF EOSs are reflected in the covariance matrix of x⃗=(a⃗, b⃗) defined as C_jk = (1/∑_i w_i) ∑_i w_i (x_j^i -⟨ x_j ⟩)(x_k^i -⟨ x_k ⟩) , where x_j^i is the set of fit parameters (a_j, b_j) for the i-th individual EOS, ⟨ x_j ⟩ represents the average of x_j, and w_i is the weight for each EOS. Since we do not vary the symmetric nuclear matter properties, in this work C_jk is a 4 × 4 matrix for the a_j from the neutron matter EOSs only.
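As an illustration of how such an EDF is evaluated in practice, the following minimal sketch implements the kinetic and interaction terms above in Python; the free Fermi-gas form is used for the kinetic densities, and the coefficients a_j, b_j shown are placeholders for illustration, not the published fit values of Huth et al.

```python
import numpy as np

HBARC = 197.327   # MeV fm
M_N = 939.0       # average nucleon mass in MeV

def kinetic_energy_density(n, x):
    """Free Fermi-gas kinetic energy density tau_q/(2 m_N), summed over q = n, p (MeV fm^-3)."""
    def tau(n_q):                      # tau_q = (3/5) (3 pi^2 n_q)^(2/3) n_q  in fm^-5
        return 0.6 * (3.0 * np.pi**2 * n_q) ** (2.0 / 3.0) * n_q
    return HBARC**2 / (2.0 * M_N) * (tau((1.0 - x) * n) + tau(x * n))

def f_interaction(n, coeffs, d=3.0):
    """Interaction functional sum_j c_j n^(2+j/3) / (d_j + n^((j+1)/3)); with n in fm^-3, d_j = d numerically."""
    total = 0.0
    for j, c_j in enumerate(coeffs):
        total = total + c_j * n ** (2.0 + j / 3.0) / (d + n ** ((j + 1.0) / 3.0))
    return total

def energy_density(n, x, a, b, d=3.0):
    """Baryonic energy density eps(n, x) with quadratic isospin dependence."""
    delta2 = (1.0 - 2.0 * x) ** 2
    return (kinetic_energy_density(n, x)
            + delta2 * f_interaction(n, a, d)
            + (1.0 - delta2) * f_interaction(n, b, d))

# Placeholder parameters (NOT the published fit values), just to exercise the code:
a_fit = [-45.0, 120.0, -80.0, 20.0]
b_fit = [-90.0, 200.0, -150.0, 40.0]
print(energy_density(0.16, 0.0, a_fit, b_fit))   # pure neutron matter at saturation density
```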
In the initial set given by the 17 neutron matter EOSs, the weights are w_i=1, but when we implement Bayes statistics and inferences, w_i < 1. With the average ⟨ x_j ⟩ and the covariance matrix C_jk, a multivariate normal distribution can be used to generate an EOS ensemble based on our EDF EOSs. We note that the statistical uncertainties from this EOS ensemble have of course a prior sensitivity to the initial set of individual EOSs. The resulting EDF EOS ensemble based on the multivariate normal distribution is shown in Fig. <ref> with the 95% credibility region in comparison to the individual EOSs based on MBPT calculations of neutron matter. The ensemble is based on 100,000 EOSs generated using the EDF, Eqs. (<ref>) and (<ref>), from the average ⟨ x_j ⟩ and the covariance matrix C_jk based on the individual neutron matter MBPT EOSs. The agreement between the band and the individual lines in Fig. <ref> indicates that the EDF EOS ensemble employed in this work can generalize chiral EFT results within their uncertainties. Moreover, the compare the EDF EOS ensemble to the unitary gas constraint <cit.> and observe in Fig. <ref> that this is nicely fulfilled by our EOSs. §.§ Bayesian modelling We incorporate the astrophysics constraints on the EOS by applying Bayes theorem, from which the posterior distribution results from the combination of the prior and likelihood, P(a⃗| D) = P(D|a⃗)P(a⃗)/∫ da⃗ P(D|a⃗)P(a⃗) . Here, P(a⃗) represents the EOS prior given by the EDF parameter space obtained from the neutron matter calculations and symmetric nuclear matter properties, D stands for the astrophysical data so that the P(D|a⃗) is the likelihood or conditional probability to obtain D for a given EDF with parameter set a⃗. In our study, we include the astrophysical observations of GW170817 and NICER to constrain the EOS at higher densities. For the NICER mass-radius constraints for PSR J0030+0451 and PSR J0740+6620 we consider separately either the Amsterdam analysis of Riley et al. <cit.> or the Illinois/Maryland analysis of Miller et al. <cit.>. The heaviest neutron star mass of 2.08 ± 0.07, <cit.> is thus directly implemented through the NICER M-R information of PSR J0740+6620. Folding in the NICER constraints based on our prior leads to the likelihood for the EDF parameters <cit.> P(NICER|a⃗) = ∫ dM dR P(M,R) ×δ(M-M(a⃗)) δ(R-R(a⃗)) , where M(a⃗) and R(a⃗) is the M-R relation for a given EDF EOS with parameter set a⃗ and P(M,R) is the M-R posterior distribution for each of the two NICER sources. The integral is carried out be discretizing the M- space, summing over all bins which are passed by the M(a⃗)-R(a⃗) relation and weighting those bins with the NICER posterior for each of the sources successively. In addition to NICER, we use the tidal deformability information from GW170817 inferred by LIGO/Virgo <cit.>, P(LIGO|a⃗) = ∫ dM_1 dΛ_1 dM_2 dΛ_2 P(M_1,Λ_1,M_2,Λ_2) ×δ(M_1-M_1(a⃗)) δ(Λ_1-Λ_1(a⃗)) ×δ(M_2-M_2(a⃗)) δ(Λ_2-Λ_2(a⃗)) , where P(M_1,Λ_1,M_2,Λ_2) is the posterior distribution from LIGO/Virgo. We assume that the NICER and GW170817 analyses are independent each other so that combining both constraints, the likelihood is given by P(D|a⃗) = P(NICER|a⃗) P(LIGO|a⃗) . Multiplying the combined likelihood with the prior P(a⃗) and a normalization constant considering the integral in the denominator, we obtain the posterior distribution P(a⃗| D) for a given EDF EOS with parameter set a⃗. 
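To make these two numerical ingredients concrete, the sketch below draws an EOS parameter ensemble from the multivariate normal distribution and evaluates a discretized stand-in for the NICER likelihood by summing a binned M-R posterior over the bins crossed by a mass-radius curve. The mean vector, covariance matrix, grid edges, and posterior array are placeholders for illustration, not the values used in this work.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder mean and covariance of the neutron-matter parameters a_j
# (in the analysis these come from the fits to the 17 individual MBPT EOSs).
mean_a = np.array([-40.0, 110.0, -75.0, 18.0])
cov_a = np.diag([4.0, 9.0, 9.0, 1.0])

def sample_ensemble(mean, cov, size=100_000):
    """Draw EDF parameter sets a_j from the multivariate normal distribution."""
    return rng.multivariate_normal(mean, cov, size=size)

def nicer_weight(m_curve, r_curve, m_edges, r_edges, posterior_grid):
    """Discretized likelihood: sum the binned M-R posterior over bins crossed by the M(a)-R(a) curve."""
    mi = np.digitize(m_curve, m_edges) - 1
    ri = np.digitize(r_curve, r_edges) - 1
    inside = (mi >= 0) & (mi < posterior_grid.shape[0]) & (ri >= 0) & (ri < posterior_grid.shape[1])
    crossed_bins = set(zip(mi[inside].tolist(), ri[inside].tolist()))  # each bin counted once
    return sum(posterior_grid[i, j] for i, j in crossed_bins)

samples = sample_ensemble(mean_a, cov_a, size=1000)
print(samples.shape)   # (1000, 4): one EDF parameter set a_j per row
```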
§ RESULTS Next we present our results for the properties of neutron stars and the symmetry energy based on the EOS framework developed in the previous section. This combines the information from neutron matter based on chiral EFT interactions with empirical properties of symmetric nuclear matter, as well as astrophysical constraints from GW170817 and NICER, using a family of EDFs for nucleonic matter. Since matter in neutron stars is very neutron-rich, we have focused more on the propagation of the theoretical uncertainties in our knowledge of neutron matter. An advantage of our EOS framework is that we use the same EDF to construct the crust and core EOS for neutron stars. In the following, we present our results for the neutron star mass and radius, the proton fraction, the speed of sound, and the central density in neutron stars. We also provide results for the symmetry energy and the slope parameter and explore correlations of neutron star radii with the pressure and the speed of sound in neutron stars. §.§ Mass-radius relation The mass and radius of neutron stars are obtained by solving the TOV equations for nonrotating stars. Figure <ref> shows the 95% credibility regions for the mass M and radius R generated from the multivariate normal distribution for the EDF EOSs based on an ensemble of ∼ 10^5 EOSs. The top panel shows the prior distribution for the k_F expansion using different values of d=1,3,5,7, and d=∞. The middle and lower panels show the posterior distribution including astrophysics information from GW170817 and the NICER analysis of Riley et al. <cit.> or the NICER analysis of Miller et al. <cit.>, respectively. Our results show that the posterior distributions obtained from the two different NICER analyses are very similar once the nuclear physics information is encoded in the EOS framework. Regarding the different EDF choices, we find that the d=3 distribution is similar to the cases of d=5, 7, and d=∞. However, large d, and in particular d=∞, allows for the speed of sound to become acausal, c_s^2 > 1 (in units with the speed of light c=1), as the density increases, which is not the case in either neutron or symmetric matter for d=3 by construction. In addition, as d=1 makes the interaction energy density rapidly behave like n^(5/3), the EOS becomes soft at rather low densities compared to the larger d values. As a result, the 95% credibility regions for mass and radius only extend slightly above 2 M_⊙. Therefore, in the following, we will show results only for the EDF EOSs with d=3. Before doing so, we also list the radius ranges of typical 1.4 M_⊙ and 2 M_⊙ neutron stars to show the rather minor sensitivity to the choice of d (see Table <ref>). In Table <ref> we give the prior and posterior ranges for the radius R_1.4 of a 1.4 M_⊙ neutron star at 95% (± 2σ) and 68% (± 1σ) credibility as well as the most likely radius for the EDF EOS ensembles with the k_F expansion and different d values. For d=3, the 95% credibility prior range is R_1.4 = (9.87-13.19) km. Including the astrophysics information from GW170817 and the NICER analysis of Riley et al. <cit.> gives a 95% credibility posterior range of R_1.4=(11.57-13.17) km, while the Miller et al. <cit.> analysis gives R_1.4=(11.65-13.23) km, or a combined range of R_1.4=(11.6-13.2) km. Both NICER analyses thus give very similar posterior ranges, with the result based on Miller et al. shifted to slightly larger radii.
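The mass-radius curves underlying these ranges follow from integrating the TOV equations for each EOS in the ensemble. A minimal sketch of such an integration (in geometrized units, G = c = 1) is given below; eps_of_p stands for any EOS expressed as energy density versus pressure (both in km^-2), and the seed radius, tolerances, and stopping criterion are illustrative choices rather than those used in the analysis.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tov_rhs(r, y, eps_of_p):
    """TOV equations dP/dr and dm/dr in units G = c = 1, lengths in km."""
    p, m = y
    eps = eps_of_p(p)
    dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return [dpdr, dmdr]

def mass_radius(eps_of_p, p_central, r_max=30.0):
    """Integrate outward from a small seed radius until the pressure has dropped to ~0."""
    stop = lambda r, y, *args: y[0] - 1e-12   # surface: pressure vanishes
    stop.terminal, stop.direction = True, -1
    sol = solve_ivp(tov_rhs, (1e-6, r_max), [p_central, 0.0],
                    args=(eps_of_p,), events=stop, rtol=1e-8, atol=1e-12)
    r_surface = sol.t_events[0][0]            # assumes the surface was reached before r_max
    m_total_km = sol.y_events[0][0][1]        # gravitational mass in km
    return r_surface, m_total_km / 1.4766     # 1 M_sun corresponds to about 1.4766 km

# Example usage with a toy EOS (purely illustrative, not a realistic nuclear EOS):
# eps_of_p = lambda p: (p / 100.0) ** 0.5     # eps(p) in km^-2
# print(mass_radius(eps_of_p, 1e-3))
```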
Overall, the radius range decreases by over 50%, from 3.3 km for the prior to 1.6 km for the combined posterior, mainly by disfavoring the smaller radii in the prior range. Moreover, in the prior distribution for d=3, 72% of the EOSs have a maximum mass of neutron stars greater than 2.0 M_⊙, while for the posterior distribution, 97% (98%) of the EOSs have a maximum mass above 2.0 M_⊙ using the NICER analysis of Riley et al. <cit.> (Miller et al. <cit.>). In Fig. <ref>, we show the color-coded prior and posterior distributions for the case of d=3. In both posterior distributions, the most probable radii for neutron stars between 1.0 and 1.8 M_⊙ vary only within 0.3 km. Moreover, the mass and radius distribution for M>2.0 M_⊙ is very similar between the prior and the two posteriors, because the astrophysics information mainly removes EOSs that give a low maximum mass and small radii. Table <ref> gives the prior and posterior ranges for the radius R_2.0 of a 2.0 M_⊙ neutron star for the EDF EOSs with d=3. The prior distribution shows a wider radius range because it does not include information on a massive neutron star. Again the two posterior ranges for R_2.0 are very similar and merely shifted by less than 100 m. In the case of d=3, the maximum mass of neutron stars among the ∼ 10^5 EOS ensemble reaches up to 2.23 M_⊙, while it can go up to 2.32 M_⊙ for d=∞. §.§ Symmetry energy and L parameter We can also extract the symmetry energy S_v and the slope parameter L from our calculations. This is shown in Fig. <ref> for the individual MBPT calculations for the different chiral NN+3N Hamiltonians from Ref. <cit.> as points, where the dashed (solid) line connects the 500 (450) MeV cutoff N^3LO results. As discussed, our EDF EOS ensembles are built from all the different chiral NN+3N results. The resulting 95% prior and posterior distributions are shown for the EDF EOS ensemble with the k_F expansion and d=3. We find that the prior range for S_v and L is narrowed to larger values with the astrophysics constraints included. For both NICER analyses the posteriors are again very similar. The 95% distributions can be parametrized by the mean values and the covariance matrix. For the prior distribution these are given by (mean values in MeV and covariance matrix in MeV^2): ⟨ S_v, L ⟩ = (31.96, 51.70) , Σ_S_v,L = [ 0.79 6.73; 6.73 75.11 ] , while the posterior distributions for the astrophysical inferences are given for the Riley et al. <cit.> and Miller et al. <cit.> analyses, respectively, by ⟨ S_v, L ⟩ = (32.23, 56.33) , Σ_S_v,L^Riley = [ 0.66 4.56; 4.56 40.02 ] , and ⟨ S_v, L ⟩ = (32.31, 57.31) , Σ_S_v,L^Miller = [ 0.64 4.43; 4.43 40.43 ] . We observe that the astrophysics constraints move the posterior distributions to larger S_v and L values within the prior range. Moreover, all MBPT calculations for the different chiral NN+3N Hamiltonians are still largely within the posterior range, but some of them only marginally. This indicates that astrophysics prefers EOSs on the stiffer part of the neutron matter EOS band based on chiral EFT. This is consistent with the EOS findings in Ref. <cit.>. In Fig. <ref> we also show the GP-B results at N^3LO from Ref. <cit.>. Since the GP-B contours are based on the same N^3LO 500 (450) MeV results <cit.> included in our analysis, we can trace the difference between the GP-B contours and the N^3LO points to the evaluation of S_v and L for the correlated range of 95% of the calculated saturation density, whereas our distributions are at a fixed reference saturation density n_0=0.16 fm^-3.
Since the L parameter scales linearly with the density, this mainly affects the L value, while the range of symmetry energies is broadened due to the additional uncertainty in the calculated saturation density. Finally, we compare our 95% posterior distributions in Fig. <ref> with the recent results from Essick et al. <cit.>, which are, however, 90% contours. These are based on a different set of chiral NN+3N calculations and astrophysics constraints through a more general Gaussian process extension to high densities. Nevertheless, both contours (at the same reference saturation density n_0) are remarkably consistent. §.§ Proton fraction The ground state of neutron star matter is obtained by solving the condition for beta equilibrium, μ_n = μ_p + μ_e , where the neutron, proton, and electron chemical potentials μ_n, μ_p, and μ_e are given by μ_n = ∂ε/∂n_n, μ_p = ∂ε/∂n_p, μ_e = ∂ε/∂n_e , with total energy density ε. Since the core is composed of uniform nuclear matter, Eq. (<ref>) is straightforward to solve for a given EDF. For the crust EOS, where matter exists in inhomogeneous form, we employ the liquid drop model (LDM) <cit.> using the same EDF to construct the EOSs of the inner and outer crust. In the inner crust, the total energy density including the electron contribution is given by <cit.> ε = u n_i f_i + σ(x_i) u d/r_N + 2π (e x_i n_i r_N)^2 u f_d(u) + (1-u) n_no f_no + ε_e , where u is the volume fraction of the nucleus in the Wigner-Seitz cell, n_i is the baryon number density of the heavy nucleus, n_no is the density of unbound neutrons, x_i is the proton fraction in the heavy nucleus, and f_i = f(n_i,x_i) and f_no = f(n_no,x_no=0) are the energies per baryon of the heavy nucleus and of the unbound neutrons, respectively. σ(x_i) is the surface tension at zero temperature as a function of the proton fraction in heavy nuclei, r_N the radius of the heavy nucleus, e the electric charge, d the dimension of the nuclear pasta phase, f_d(u) the Coulomb shape function corresponding to the nuclear pasta phase, and ε_e is the electron energy density. We use the surface tension from <cit.> σ(x_i) = σ_0 (2^(α+1) + q) / (x_i^(-α) + q + (1-x_i)^(-α)) , where σ_0, α, and q are parameters fit to calculations of the surface tension. In this work, we use σ_0 = 1.14 MeV fm^-2, α=3.4, and q=30, but note that the crust properties depend only weakly on the surface tension parameters, and also the impact of the crust on the investigated neutron star properties is minor. Based on the virial theorem, the Coulomb energy is approximately twice the nuclear surface energy. Thus, we can combine the surface and Coulomb energies into a single energy contribution, which leads to a simpler equation for the energy density <cit.> ε = u n_i f_i + (243π/5 e^2 x_i^2 n_i^2 σ^2(x_i))^(1/3) 𝒟(u) + (1-u) n_no f_no + ε_e , where 𝒟(u) is a continuous dimension function introduced in Ref. <cit.>. For total baryon density n and proton fraction Y_p, and thus electron density n_e = Y_p n, the quantities u, n_i, x_i, and n_no are found by minimizing the total energy density, Eq. (<ref>), using the Lagrange multiplier method with the constraints of baryon density and charge neutrality, n = u n_i + (1-u) n_no and n_e = u n_i x_i . For the outer crust EOS, which is defined as the region without unbound neutrons, the outside neutron density n_no is neglected. Using the LDM construction, the transitions from the outer to the inner crust and to the outer core are thus smooth, since the same EDF is employed to construct the entire neutron star EOS.
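For the uniform core, the beta-equilibrium condition μ_n = μ_p + μ_e can be solved numerically in a few lines. The sketch below reuses the energy_density function (and its placeholder parameters) from the EDF sketch above, treats electrons as an ultrarelativistic Fermi gas with n_e = x n, and assumes nucleonic npe matter without muons; it is an illustration, not the implementation used in this work.

```python
import numpy as np
from scipy.optimize import brentq

HBARC = 197.327  # MeV fm

def proton_fraction(n, a, b, h=1e-5):
    """Solve mu_n - mu_p = mu_e at baryon density n (fm^-3) for the proton fraction x."""
    def residual(x):
        # mu_n - mu_p = -(1/n) * d eps / d x at fixed n (central finite difference),
        # using the baryonic energy_density(n, x, a, b) defined in the earlier EDF sketch.
        mu_hat = -(energy_density(n, x + h, a, b) - energy_density(n, x - h, a, b)) / (2.0 * h * n)
        mu_e = HBARC * (3.0 * np.pi**2 * x * n) ** (1.0 / 3.0)   # ultrarelativistic electrons
        return mu_hat - mu_e
    return brentq(residual, 1e-4, 0.45)   # assumes a sign change within this bracket

# Example with the placeholder EDF parameters from the earlier sketch:
# print(proton_fraction(0.32, a_fit, b_fit))   # proton fraction at 2 n_0
```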
Figure <ref> shows the average proton fraction at the central density ⟨ Y_p^c ⟩ based on the EDF EOS ensemble for the k_F expansion and d=3, as well as the variance over the average, σ_Y_p^c / ⟨ Y_p^c ⟩. The average proton fraction is dominated by the core, but includes the details of the crust calculation discussed above. We note that in Fig. <ref> (and in Figs. <ref> and <ref>), the mass and radius domain is restricted to the region where the probability relative to the maximum probability satisfies P(M,R)/P_max ≥ 10^-2 (as in Fig. <ref>). As expected, the proton fraction increases as the mass increases, and for a given mass, it increases with radius as the EOS becomes stiffer. Our EOS ensemble assumes for the proton fraction that matter is nucleonic, which may not be valid for massive stars. However, for typical 1.4 M_⊙ neutron stars, this may not be such a large extrapolation. In addition, we plot in Fig. <ref> the threshold Y_p = 1/9 for the direct Urca process, which leads to fast-cooling neutron stars <cit.>. We find that typical neutron stars around 1.4 M_⊙ do not exceed this threshold for radii around 12 km; the threshold is exceeded only in our largest-radius configurations. However, based on our results, we expect that massive neutron stars with M>2.1 M_⊙ would cool via the direct Urca process. Figure <ref> shows the total proton fraction Y_p^tot of the maximum mass star versus the maximum mass. The total proton fraction increases along a band as the maximum mass increases, due to the stiffer EOS. Figure <ref> shows results for four different Q values of symmetric nuclear matter, keeping in mind that negative Q values are favored by nuclear masses, ab initio calculations, and astrophysics <cit.>. With increasing Q, the total proton fraction for a given mass decreases and also the maximum mass increases, as larger Q stiffens the EOS. Naturally, the sensitivity to Q is much less pronounced for typical neutron stars. Figure <ref> shows the proton fraction at the central density Y_p^c versus the radius of a 1.4 M_⊙ star, which exhibits a tight correlation and is only very weakly dependent on Q. Larger radii thus correspond to a larger proton fraction. Again we see that stars with radii around 12 km, as expected based on most recent EOS astrophysical inferences <cit.>, do not cool via the direct Urca process. However, for larger radii, R_1.4 > 12.6 km (for Q=-300 MeV), even typical neutron stars would be fast coolers. §.§ Central density and speed of sound Next, we study the posterior distribution for the central density and the speed of sound in neutron stars. Figure <ref> shows the average central density in units of saturation density, ⟨ n_c/n_0 ⟩, and its variance over the average, σ_n_c / ⟨ n_c ⟩. The average central density increases with increasing mass, while it decreases as the radius increases for a given mass of neutron star. This results from stiffer EOSs leading to larger radii. In our EDF EOSs, the maximum central density reaches up to ≈ 7 n_0, which is reached for softer EOSs in the most massive neutron stars with smaller radii. Figure <ref> shows the speed of sound squared, c_s^2 = ∂P/∂ε, at the central densities in neutron stars. In our EDFs, the speed of sound increases but remains causal and decreases at high density <cit.>. As we see from Fig. <ref>, the speed of sound increases as the mass increases, so in neutron stars most matter is on the part of the EOS that has an increasing c_s^2 in our ensemble of EOSs.
In Fig. <ref>, the red dashed line represents c_s^2=1/3, which shows that even typical 1.4 M_⊙ stars exceed the conformal limit, except when they have radii larger than 13 km (see also the middle panel of Fig. <ref>). Moreover, information on the radii of massive stars with M ≳ 2.0 M_⊙ would inform us about c_s^2 at the central density (see also Fig. <ref>). This could be realized with an improved NICER radius measurement <cit.> of the 2.08 ± 0.07 M_⊙ pulsar PSR J0740+6620 <cit.>. §.§ Correlations Finally, we study the correlation of neutron star radii with the pressure and the speed of sound. In Ref. <cit.> it was suggested that the radius of a 1.4 M_⊙ neutron star would follow the empirical relation R_1.4 ∼ p_2n_0^(1/4), where p_2n_0 is the pressure at twice saturation density. In the top panel of Fig. <ref> we show that this correlation is indeed fulfilled in our EDF EOS ensemble within a band. For the radius in km and the pressure in MeV fm^-3, we find R_1.4 = 0.731 + 5.312 P_2n_0^(1/4) for the mean line of the correlation shown in Fig. <ref>, with a correlation coefficient r_xy = 0.980. While the details of this correlation depend on the EOS model, this indicates that astrophysical observations of neutron star radii provide constraints for the pressure at twice saturation density. The middle panel of Fig. <ref> shows the distribution of R_1.4 versus the speed of sound at the central density of neutron stars. Most of the distribution follows a linear trend, but the correlation coefficient r_xy=-0.870 is weaker in this case. We also observe that c_s^2 at the central density exceeds the conformal limit c_s^2 = 1/3 in our EDF EOS ensemble for R_1.4 smaller than 12.8 km. The correlation is even weaker at lower densities when comparing R_1.4 with the L parameter in the bottom panel of Fig. <ref>, which is proportional to the pressure of pure neutron matter at saturation density. This is as expected, because the central density of a 1.4 M_⊙ neutron star is ∼ 3 n_0. Nevertheless, there is a general trend that R_1.4 increases as L increases. Figure <ref> shows the correlation of the radius of a 2.0 M_⊙ neutron star with the speed of sound at the central density. The strong correlation indicates that the radius measurement of massive neutron stars provides constraints for the speed of sound in dense nuclear matter. For the radius in km, we find R_2.0 = 16.493 - 7.846 c_s^2, with a correlation coefficient r_xy=-0.995. Moreover, we find within our EDF EOS ensemble that the speed of sound at the central density of 2.0 M_⊙ stars is always greater than the conformal limit. Figure <ref> shows the mass and radius prior when we impose the conformal limit for the speed of sound. The top panel shows the case where the speed of sound continues to increase up to 1/3 and maintains the conformal limit for all higher densities. The bottom panel is for the case where the speed of sound jumps to 1/3 at n=2n_0 and remains at the conformal limit for all higher densities. In both scenarios, the speed of sound is not larger than the conformal limit at any density. From Fig. <ref>, the prior probability to support 2.0 M_⊙ stars is around 10% or less, which is similar to the findings of Ref. <cit.>. Thus, the conformal limit can be consistent with 2.0 M_⊙ stars, but most of the support of our EDF EOS ensemble exceeds the conformal limit for massive neutron stars.
However, when we take the maximum mass limit as the central value for PSR J0740+6620 of 2.08 M_⊙, the speed of sound needs to exceed 1/3 in our ensemble, as the maximum mass does not reach up to 2.08 M_⊙ in our modelling in either case in Fig. <ref>. § SUMMARY AND CONCLUSION We have explored EOS ensembles using new EDFs from Ref. <cit.> that allow for large variations at high densities. The EDF EOS ensembles were constrained by empirical properties of symmetric nuclear matter and by MBPT calculations of neutron matter based on different chiral NN+3N Hamiltonians. Starting from this prior, constraints at high densities were included from observations of GW170817 and NICER, where the heavy neutron star mass constraint is incorporated through PSR J0740+6620. All our results show that both the Riley et al. <cit.> and Miller et al. <cit.> NICER analyses lead to very similar posterior constraints for the symmetry energy and neutron star properties when folded into our EOS framework. Based on our EDF EOS ensembles, we have studied the symmetry energy and the L parameter, as well as the proton fraction, the speed of sound, and the central density in neutron stars. Our 95% posterior credibility ranges for the symmetry energy S_v, the L parameter, and the radius of a 1.4 M_⊙ neutron star R_1.4 are S_v=(30.6-33.9) MeV, L=(43.7-70.0) MeV, and R_1.4=(11.6-13.2) km. Moreover, we have shown that larger and/or heavier neutron stars have a larger proton fraction and are thus more likely to cool rapidly via the direct Urca process. As can be seen from our results for S_v and L, present astrophysics constraints prefer larger pressures within the prior ranges. To this end, we have also explored correlations of neutron star radii with the pressure and the speed of sound. The radius of 1.4 M_⊙ stars was found to correlate well with the pressure at twice saturation density, and R_2.0 was shown to correlate tightly with the speed of sound at the central density. Therefore, precise measurements of R_1.4 provide key information for density regimes at the limits of chiral EFT calculations, and radii of massive neutron stars will help to constrain the behavior of the speed of sound in dense matter. Finally, by constructing EOS ensembles with the conformal limit imposed on the speed of sound, we found that a maximum mass of neutron stars M_max>2.1 M_⊙ indicates that the speed of sound needs to exceed the conformal limit. We thank Sabrina Huth for fruitful discussions. This work was supported by the Max Planck Society, the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 101020842) and by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIT) (No. 2021R1A2C2094378).
J. M. Lattimer and Y. Lim, Astrophys. J. 771, 51 (2013).
C. Drischler, J. W. Holt, and C. Wellenhofer, Annu. Rev. Nucl. Part. Sci. 71, 403 (2021).
S. Huth, C. Wellenhofer, and A. Schwenk, Phys. Rev. C 103, 025803 (2021).
R. Essick, P. Landry, A. Schwenk, and I. Tews, Phys. Rev. C 104, 065804 (2021).
K. Hebeler and A. Schwenk, Phys. Rev. C 82, 014314 (2010).
I. Tews, T. Krüger, K. Hebeler, and A. Schwenk, Phys. Rev. Lett. 110, 032504 (2013).
A. Carbone, A. Polls, and A. Rios, Phys. Rev. C 88, 044302 (2013).
G. Hagen, T. Papenbrock, A. Ekström, K. Wendt, G. Baardsen, S. Gandolfi, M. Hjorth-Jensen, and C. J. Horowitz, Phys. Rev. C 89, 014319 (2014).
J. E. Lynn, I. Tews, J. Carlson, S. Gandolfi, A. Gezerlis, K. E. Schmidt, and A. Schwenk, Phys. Rev. Lett. 116, 062501 (2016).
J. W. Holt and N. Kaiser, Phys. Rev. C 95, 034326 (2017).
C. Drischler, K. Hebeler, and A. Schwenk, Phys. Rev. Lett. 122, 042501 (2019).
W. G. Jiang, A. Ekström, C. Forssén, G. Hagen, G. R. Jansen, and T. Papenbrock, Phys. Rev. C 102, 054301 (2020).
J. Keller, K. Hebeler, and A. Schwenk, Phys. Rev. Lett. 130, 072701 (2023).
K. Hebeler, J. M. Lattimer, C. J. Pethick, and A. Schwenk, Astrophys. J. 773, 11 (2013).
I. Tews, J. Carlson, S. Gandolfi, and S. Reddy, Astrophys. J. 860, 149 (2018).
S. K. Greif, G. Raaijmakers, K. Hebeler, A. Schwenk, and A. L. Watts, Mon. Not. Roy. Astron. Soc. 485, 5363 (2019).
P. Landry and R. Essick, Phys. Rev. D 99, 084049 (2019).
Y. Lim and J. W. Holt, Phys. Rev. Lett. 121, 062701 (2018).
P. Demorest, T. Pennucci, S. Ransom, M. Roberts, and J. Hessels, Nature 467, 1081 (2010).
J. Antoniadis, P. C. C. Freire, N. Wex, T. M. Tauris, R. S. Lynch, M. H. van Kerkwijk, M. Kramer, C. Bassa, V. S. Dhillon, T. Driebe, et al., Science 340, 1233232 (2013).
E. Fonseca, H. T. Cromartie, T. T. Pennucci, P. S. Ray, A. Y. Kirichenko, S. M. Ransom, P. B. Demorest, I. H. Stairs, Z. Arzoumanian, L. Guillemot, et al., Astrophys. J. Lett. 915, L12 (2021).
T. E. Riley, A. L. Watts, P. S. Ray, S. Bogdanov, S. Guillot, S. M. Morsink, A. V. Bilous, Z. Arzoumanian, D. Choudhury, J. S. Deneva, et al., Astrophys. J. Lett. 918, L27 (2021).
M. C. Miller, F. K. Lamb, A. J. Dittmann, S. Bogdanov, Z. Arzoumanian, K. C. Gendreau, S. Guillot, W. C. G. Ho, J. M. Lattimer, M. Loewenstein, et al., Astrophys. J. Lett. 918, L28 (2021).
T. E. Riley, A. L. Watts, S. Bogdanov, P. S. Ray, R. M. Ludlam, S. Guillot, Z. Arzoumanian, C. L. Baker, A. V. Bilous, D. Chakrabarty, et al., Astrophys. J. Lett. 887, L21 (2019).
M. C. Miller, F. K. Lamb, A. J. Dittmann, S. Bogdanov, Z. Arzoumanian, K. C. Gendreau, S. Guillot, A. K. Harding, W. C. G. Ho, J. M. Lattimer, et al., Astrophys. J. Lett. 887, L24 (2019).
B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. X 9, 011001 (2019).
R. C. Tolman, Phys. Rev. 55, 364 (1939).
J. R. Oppenheimer and G. M. Volkoff, Phys. Rev. 55, 374 (1939).
K. Hebeler, J. D. Holt, J. Menéndez, and A. Schwenk, Annu. Rev. Nucl. Part. Sci. 65, 457 (2015).
A. Gezerlis, I. Tews, E. Epelbaum, S. Gandolfi, K. Hebeler, A. Nogga, and A. Schwenk, Phys. Rev. Lett. 111, 032501 (2013).
C. Drischler, R. J. Furnstahl, J. A. Melendez, and D. R. Phillips, Phys. Rev. Lett. 125, 202702 (2020).
C. Drischler, K. Hebeler, and A. Schwenk, Phys. Rev. C 93, 054314 (2016).
R. Somasundaram, C. Drischler, I. Tews, and J. Margueron, Phys. Rev. C 103, 045803 (2021).
I. Tews, J. M. Lattimer, A. Ohnishi, and E. E. Kolomeitsev, Astrophys. J. 848, 105 (2017).
K. Hebeler, S. K. Bogner, R. J. Furnstahl, A. Nogga, and A. Schwenk, Phys. Rev. C 83, 031301(R) (2011).
B. D. Carlsson, A. Ekström, C. Forssén, D. F. Strömberg, G. R. Jansen, O. Lilja, M. Lindby, B. A. Mattsson, and K. A. Wendt, Phys. Rev. X 6, 011019 (2016).
D. R. Entem, R. Machleidt, and Y. Nosyk, Phys. Rev. C 96, 024004 (2017).
M. Dutra, J. S. Sa Martins, A. Delfino, J. R. Stone, and P. D. Stevenson, Phys. Rev. C 85, 035201 (2012).
Y. Lim and J. W. Holt, Eur. Phys. J. A 55, 209 (2019).
Y. Lim, A. Bhattacharya, J. W. Holt, and D. Pati, Phys. Rev. C 104, L032802 (2021).
Y. Lim and J. W. Holt, Galaxies 10, 99 (2022).
G. Raaijmakers, S. K. Greif, K. Hebeler, T. Hinderer, S. Nissanke, A. Schwenk, T. E. Riley, A. L. Watts, J. M. Lattimer, and W. C. G. Ho, Astrophys. J. Lett. 918, L29 (2021).
Y. Lim and J. W. Holt, Phys. Rev. C 95, 065805 (2017).
D. G. Ravenhall, C. J. Pethick, and J. M. Lattimer, Nucl. Phys. A 407, 571 (1983).
J. M. Lattimer and F. D. Swesty, Nucl. Phys. A 535, 331 (1991).
J. M. Lattimer, C. J. Pethick, M. Prakash, and P. Haensel, Phys. Rev. Lett. 66, 2701 (1991).
C. D. Capano, I. Tews, S. M. Brown, B. Margalit, S. De, S. Kumar, D. A. Brown, B. Krishnan, and S. Reddy, Nature Astronomy 4, 625 (2020).
M. Al-Mamun, A. W. Steiner, J. Nättilä, J. Lange, R. O'Shaughnessy, I. Tews, S. Gandolfi, C. Heinke, and S. Han, Phys. Rev. Lett. 126, 061101 (2021).
R. Essick, I. Tews, P. Landry, and A. Schwenk, Phys. Rev. Lett. 127, 192701 (2021).
S. Huth, P. T. H. Pang, I. Tews, T. Dietrich, A. Le Févre, A. Schwenk, W. Trautmann, K. Agarwal, M. Bulla, M. W. Coughlin, and C. Van Den Broeck, Nature 606, 276 (2022).
E. Annala, T. Gorda, E. Katerini, A. Kurkela, J. Nättilä, V. Paschalidis, and A. Vuorinen, Phys. Rev. X 12, 011058 (2022).
S. Altiparmak, C. Ecker, and L. Rezzolla, Astrophys. J. Lett. 939, L34 (2022).
T. Gorda, O. Komoltsev, and A. Kurkela, arXiv:2204.11877 (2022).
J. M. Lattimer and M. Prakash, Phys. Rept. 442, 109 (2007).
P. Bedaque and A. W. Steiner, Phys. Rev. Lett. 114, 031103 (2015).
http://arxiv.org/abs/2307.05096v1
20230711081058
The smarty4covid dataset and knowledge base: a framework enabling interpretable analysis of audio signals
[ "Konstantia Zarkogianni", "Edmund Dervakos", "George Filandrianos", "Theofanis Ganitidis", "Vasiliki Gkatzou", "Aikaterini Sakagianni", "Raghu Raghavendra", "C. L. Max Nikias", "Giorgos Stamou", "Konstantina S. Nikita" ]
cs.SD
[ "cs.SD", "eess.AS" ]
The smarty4covid dataset and knowledge base: a framework enabling interpretable analysis of audio signals ========================================================================================================== § BACKGROUND & SUMMARY The COVID-19 pandemic induced innovation in many technological sectors, leading to the development of a variety of means to combat the global outbreak, such as vaccines, bio-sensors facilitating diagnosis at the point of care, 3D-printed ventilators and a wealth of mobile applications. More specifically, leveraging the latest trends in mobile health technologies, several applications have been implemented to fight COVID-19 with the aim of creating awareness, collecting suitable data for health survey and surveillance, reducing person-to-person contacts, offering telemedicine services, tracking COVID-19 contacts, supporting healthcare professionals in decision making, facilitating communication and collaboration among healthcare providers and serving as a means towards coordinating emergency response and transport <cit.>. Further to the above, Artificial Intelligence (AI) and Machine Learning (ML) have played an important role in the response to the COVID-19 related challenges through accelerating research and treatment, while offering remarkable solutions for diagnosis based on several kinds of biomedical data such as X-rays, Computed Tomography (CT) scans, electrocardiograms and audio recordings <cit.>. Furthermore, ML has demonstrated promising performance in epidemiological modelling based on social and weather data <cit.>. A prompt diagnosis of newly infected cases is of particular importance. However, RT-PCR tests and CT scans suffer from certain limitations such as variable sensitivity and increased turnaround time, while requiring highly trained staff, approved laboratories and expensive equipment. Antigen tests constitute an alternative; nevertheless, they demonstrate poor sensitivity <cit.>. An m-health approach able to support affordable, fast, sustainable and effective testing, facilitating multiple repetitions to track progression, could contribute to containing the spread and suppressing resurgence <cit.>. Within this context, the idea of harnessing the power of AI coupled with mobile technologies to implement an easy-to-use and widely accessible COVID-19 detection method has motivated the application of signal analysis and AI on audio recordings of cough, voice and breath towards the detection of innovative COVID-19 related bio-markers <cit.>. In the recent literature, most approaches for predicting the COVID-19 risk from audio recordings rely on deep learning models, which typically require large amounts of data to be trained. Therefore, the development of curated COVID-19 audio datasets is crucial for achieving accuracy and reliability <cit.>. Several studies have collected audio recordings from citizens following a crowd-sourcing approach through the use of a web interface. The first attempt in this direction was initiated within the frame of the COVID-19 Sounds project <cit.>. The COVID-19 Sounds database consists of 53,449 audio samples, each including 3 to 5 deep breaths through the mouth, 3 voluntary coughs and 3 voice repetitions of a predefined short sentence. Coswara is another crowd-sourced database consisting of various kinds of sounds such as breaths (shallow and deep), voluntary coughs (heavy and shallow), sustained vowel phonation (/ey/ as in made, /i/ as in beet, /u:/ as in cool), and number counting from one to twenty (normal and fast-paced) <cit.>.
Coughvid is also considered to be one of the largest crowd-sourced databases, yet it includes only cough sounds <cit.>. To date, the latest version of Coughvid has been publicly released with 27,550 cough recordings. As illustrated in Figure <ref>, the number of obtained audio samples (where each audio sample includes all the considered types of audio recordings) ranges from 2,030 to 53,449, while the prevalence of COVID-19 cases is relatively low, especially in the Coughvid and the COVID-19 Sounds datasets. All these available databases include various demographics, symptoms, and co-morbidities in order to provide further information towards detecting COVID-19. A common pitfall of crowd-sourced data is that it contains audio recordings unrelated to the desired content of the database, as well as audio recordings characterized by low quality and increased noise. This highlights the need to apply methods for data curation. Within the frame of the Coughvid and the COVID-19 Sounds projects, computational models have been developed to detect the specific segments in the audio signal that contain the considered audio recording. More specifically, the YAMNet pre-trained audio classification network has been used to filter out noisy, silent, low-quality, and inconsistent recordings <cit.> in the dataset. The model has been evaluated on a small subset (3,067 audio recordings) that has been manually annotated, and has achieved an accuracy of up to 88%. In the case of the Coughvid dataset, a small number (215) of audio recordings have been selected and manually annotated as cough or non-cough sounds. This small dataset has been used to develop an eXtreme Gradient Boosting classifier towards discriminating cough from non-cough audio recordings, taking as input 68 audio features in the domains of (i) Mel Frequency, (ii) Time, and (iii) Frequency. Following a 10-fold cross validation framework, the COUGHVID model has achieved sensitivity and c-statistic equal to 78.2% and 96.4%, respectively <cit.>. The Coswara dataset has been entirely manually annotated. The development of robust machine learning models able to detect COVID-19 is particularly challenging due to the heterogeneity of the available datasets, the low number of cases (positive for COVID-19) versus controls (negative for COVID-19), and deficiencies related to COVID-19 variants and factors that strongly affect the infection, for example the vaccination status against COVID-19. On top of this, there are biases in the available datasets that need to be thoroughly investigated, while there is an increased risk of model over-fitting, especially when complex modelling strategies are applied <cit.>. The realistic performance of audio-based digital testing for COVID-19 has been explored through artificially creating biases in the development dataset, for example introducing gender bias into the data by selecting a high percentage of cases as males, and evaluating their impact on the model's efficacy <cit.>. Another research challenge is the development of a knowledge representation of the available data/information that enables data consolidation and reasoning. The latter is particularly important in order to ensure transparency and gain end-users' trust through providing explanations of the estimated risk.
From this perspective, the deployment of smart interfaces that present end users with human-understandable interpretations and explanations of their estimated COVID-19 probability can greatly support informed decision making while enhancing human supervision towards the realization of a human-centered AI approach. The development of responsible AI models requires data that is richly annotated with metadata, expert labels, and semantic information. This additional information can be used as high-level features for training explainable AI models, since these features are more understandable for humans than, for example, the audio signals or spectrograms that usually form the input space of deep learning models. Furthermore, this additional information can be utilized for post hoc explainability and analysis of black-box classifiers, which is particularly useful since opaque deep learning models are usually applied towards detecting COVID-19 from audio recordings <cit.>. The smarty4covid project aspires to create an intelligent multimodal framework for COVID-19 risk assessment and monitoring based on Explainable Deep Learning. Following the necessary approvals from the National Technical University's Ethics Committee of Research, a responsive web-based application (www.smarty4covid.org) has been implemented and publicly released as a means of data collection. The smarty4covid dataset contains in total 18,265 audio recordings of cough, breath (regular, deep) and voice, corresponding to 4,673 users (Greek and Cypriot citizens). It also includes other self-reported information related to demographics, symptoms, underlying conditions, smoking status, vital signs, COVID-19 vaccination status, hospitalization, emotional state, working conditions and COVID-19 status (e.g. positive, negative, not tested). The entire dataset has been cleaned of erroneous and noisy samples, and a subset of the dataset (1,475 samples) has been labeled by medical experts. Furthermore, all available information has been encoded into an innovative Web Ontology Language (OWL) knowledge base that also contains a rudimentary hierarchy of concepts. The medically related concepts in the OWL knowledge base are provided in the form of ids from SNOMED-CT <cit.>. The curated crowd-sourced smarty4covid dataset is publicly released, yet all audio recordings of voices, which are considered personal data according to the GDPR regulation, are excluded (Figure <ref>). The smarty4covid OWL knowledge base is also made available in order to enable data consolidation from multiple databases. The smarty4covid OWL knowledge base offers an interpretable framework of high expressiveness, which can be employed to explain complex machine learning models through identifying semantic queries over the knowledge that mimic the model <cit.>. The smarty4covid dataset has been utilized towards the development of models able to: (i) classify segments of audio signals as "cough", "breath", "voice", and "other", and (ii) detect inhalation and exhalation segments in breathing recordings, which can be used for extracting clinically related features such as respiratory rate (RR), inhalation to exhalation ratio (I/E ratio), and fractional inspiration time (FIT). The smarty4covid OWL knowledge base has been validated as a means of generating counterfactual explanations and discovering potential biases in the available datasets. § METHODS The overall approach towards the development of the smarty4covid database is depicted in Figure <ref>.
It includes a crowd-sourcing data collection strategy followed by a two-step data curation method involving data cleaning and labeling. A multi-modal dataset was collected, including audio recordings and tabular data. The curated dataset was exploited for extracting breathing-related features, creating publicly available data records, and developing the smarty4covid OWL knowledge base that enables data selection and reasoning. §.§ Crowd-sourcing Data Collection The smarty4covid crowd-sourcing data collection was approved by the National Technical University's Ethics Committee of Research and complied with all relevant ethical regulations. A responsive and user-friendly web-based application (www.smarty4covid.org) was implemented, targeting Greek and Cypriot citizens older than 18 years. The smarty4covid questionnaire consisted of several sections accompanied by instructions for users to perform audio recordings of voice, breath and cough and to provide information regarding demographics, COVID-19 vaccination status, medical history, vital signs as measured by means of an oximeter and a blood pressure monitor, COVID-19 symptoms, smoking habits, hospitalization, emotional state and working conditions. Four types of audio recordings were considered: (i) three voice recordings where the user was required to read a specific sentence, (ii) five deep breaths, (iii) 30 s of regular breathing close to the microphone of the device and (iv) three voluntary coughs. Following an effective media plan, more than 10,000 individuals provided demographic information and underlying medical conditions to the smarty4covid application, yet almost half of them (4,679) gave the necessary permissions to perform the audio recordings. The web-based application was released in January 2022 during the spread of the omicron wave in Greece, resulting in a high COVID-19 prevalence (17.3% of users tested positive for COVID-19). §.§ Data Curation Part of the crowd-sourced dataset was invalid due to erroneous audio recording submissions by the users and the presence of distortions and high background noise. The data cleaning process was performed by means of a crowd-sourcing campaign utilizing the Label Studio[https://labelstud.io/] open-source data labeling tool. AI engineers who volunteered to annotate the audio signals signed a Non-Disclosure Agreement (NDA) and were granted the necessary access permissions. A user-friendly environment was implemented enabling the annotators to listen to the audio signals and answer questions regarding their validity (yes/no) and their quality (Good, Acceptable, Poor) in terms of background noise and distortion. In order to evaluate the quality of the annotations, a set of randomly selected audio files (1,389) was considered more than once, and up to 5 times, in the annotation procedure. A high level of consistency (92.5%) among the annotators was observed, indicating that there was no need to have multiple annotators for each audio recording. The smarty4covid crowd-sourced dataset was enriched with labels provided by healthcare professionals (e.g. pulmonologists, anesthesiologists, internists) who volunteered to characterize the collected audio recordings in terms of audible abnormalities and to provide personalized recommendations regarding the need for medical advice. To this end, four crowd-sourcing campaigns were initiated utilizing Label Studio. Three campaigns focused on the audio recordings (breath, voice, cough).
As depicted in Figure <ref>, the healthcare professionals were asked to assess the presence of audible abnormalities by selecting one or more options from the available labels. In the fourth campaign, the healthcare professionals were exposed to all available multimodal information about the user, excluding vital signs (e.g. oxygen saturation, beats per minute (BPM), diastolic/systolic pressure) that would lead them to a biased assessment, in order to estimate the risk of health deterioration and suggest a next course of action: a) Seek medical advice, b) Repeat the Smarty4Covid test in 24 hours, and c) In case you notice changes in your health status, repeat the Smarty4Covid test. They were also asked to define a level of confidence (from 1 to 10) in their assessment. §.§ Breathing Feature Extraction Respiration is a complex physiological process, involving both voluntary and involuntary processes, as well as underlying reflexes. A breathing pattern is the upshot of a fine coordination between peripheral chemoreceptors, the central nervous system's organizing structures, lung mechanoreceptors and parenchyma, musculoskeletal components, intrinsic metabolic rate, emotional state, and many other factors. The breathing pattern adopted at any given moment is assumed to be that which produces adequate alveolar ventilation at the lowest possible energy cost, given the current mechanical status of the system and the organism's metabolic needs. Any disruption of these pillars of respiratory homeostasis will be reflected in a change of the respiratory pattern, shifting this balance to the most energetically favorable state for the prevailing conditions <cit.>. A viral infection can be one such disrupting factor <cit.>. Some quantitative indicators commonly used to describe a breathing pattern and its readjustments are the RR, respiratory phases and volumes, partial pressures of gases, blood gas analysis and others <cit.>. Most of the studies associated with COVID-19 crowd-sourced databases of breathing audio recordings explore features generated through signal processing or deep learning. The smarty4covid dataset advances the current state of the art by including clinically relevant and informative respiratory indicators extracted from regular breathing recordings, such as the RR, I/E ratio, and FIT. RR is the number of breaths per minute, which is normally 16-20 breaths/min. It can be affected by both external and internal factors such as temperature, endogenous acid-base balance, metabolic state, diseases, injuries, toxicity, etc. The I/E ratio is the ratio between the inspiratory time (T_i) and the expiratory time (T_e) and can be indicative of a flow disturbance in the respiratory tract <cit.>. Normal breathing usually presents a 1:2 or 1:3 I/E ratio at rest <cit.>, while airway obstruction may lead to prolonged expiration or inspiration, resulting in an abnormal I/E ratio. FIT, also termed the inspiratory "duty cycle" of the respiratory system, is the ratio between T_i and the duration of a total respiratory cycle (T_tot) <cit.>. It provides a rough measure of airway obstruction and stress on the respiratory muscles. Table <ref> summarizes the description and the normal ranges of the aforementioned respiratory indicators. A two-step approach was developed in order to extract T_i and T_e from the crowd-sourced breathing audio signals: (i) localization of the segments of the audio signal that contain breathing, and (ii) detection of the exhaling and inhaling parts.
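Once the inhalation and exhalation intervals of a recording are available (the output of the two-step approach outlined above and detailed below), these three indicators reduce to simple arithmetic over the segment durations. The following is a minimal sketch with illustrative segment times and variable names; it is not the released implementation.

```python
import numpy as np

def respiratory_indicators(inhalations, exhalations, record_duration_s):
    """Compute RR, I/E ratio, and FIT from lists of (start, end) times in seconds."""
    t_i = np.array([end - start for start, end in inhalations])   # inspiratory times T_i
    t_e = np.array([end - start for start, end in exhalations])   # expiratory times T_e
    n_cycles = min(len(t_i), len(t_e))                            # complete breathing cycles
    rr = 60.0 * n_cycles / record_duration_s                      # breaths per minute
    mean_ti, mean_te = t_i[:n_cycles].mean(), t_e[:n_cycles].mean()
    ie_ratio = mean_ti / mean_te                                  # T_i / T_e
    fit = mean_ti / (mean_ti + mean_te)                           # T_i / T_tot
    return {"RR": rr, "I/E": ie_ratio, "FIT": fit}

# Example: a 30 s regular-breathing recording with hypothetical segment times
inh = [(0.5, 1.6), (4.3, 5.5), (8.2, 9.3)]
exh = [(1.6, 3.9), (5.5, 7.8), (9.3, 11.6)]
print(respiratory_indicators(inh, exh, 30.0))
```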
In the first step, an AI-based model, described in the "Technical Validation" Section, was applied. The obtained breathing segments were split into non-silent intervals. The second step was particularly challenging since either the inhalation part, which is characterized by low mean amplitude, was not appropriately captured due to the hardware of the recording device, or the short distance between the sound source and the microphone during the exhalation phase resulted in distortion of the waveform. To address this challenge, an unsupervised method was developed with the aim of identifying similar parts of a single breathing audio signal that could in turn be considered as either inhalation or exhalation. This method presents several advantages over the state of the art <cit.>: it does not require a dataset of human-labeled data for training, and it does not need to rely on the prior knowledge that inhalation follows exhalation and vice versa. Furthermore, the application of the unsupervised method to a single breathing audio signal adds robustness against distortion and background noise, since all inhalation/exhalation parts of the same breathing recording are subject to the same level of distortion and background noise. The unsupervised method featured a clustering algorithm based on affinity propagation <cit.> at the frequency level. To this end, the mel-spectrogram (MFCC-128) of the audio signal was obtained and transformed into a vector of 128 frequencies, each one corresponding to the summation of the respective frequency over time. The obtained clusters were labeled as "inhalation", "exhalation" or "other" by applying a heuristic approach. More specifically, for each cluster, a mean amplitude was calculated by averaging the mean amplitudes over all the members of the cluster. Next, the clusters were sorted from largest to smallest mean amplitude. The top-listed cluster was considered as exhalation, while the second cluster (if it existed) was considered as inhalation. The remaining clusters were labeled as "other". For validation purposes, the inhalation and exhalation parts of thirty-three audio recordings of regular breathing were manually annotated in order to enable the calculation of the corresponding respiratory indicators. The proposed unsupervised method achieved a Root Mean Square Error (RMSE) of 1.85, 0.14, and 0.08 for the RR, FIT, and I/E ratio, respectively. These RMSE values are considered low, taking into consideration the normal ranges of each respiratory indicator (<ref>). § DATA RECORDS Part of the smarty4covid crowd-sourced dataset (4,303 submissions) was organized into data records in order to be made publicly available. The data records are deposited in the Zenodo Repository (DOI: 10.5281/zenodo.7760170). As depicted in Figure <ref>, each directory contains the submissions of a specific user. The user directory is named after the user’s id, which is generated according to the UUID V4 protocol. Apart from the submissions, a json file (“demographics_underlying_conditions.json”) with information regarding demographics (e.g. BMI, age group, gender) and potential underlying conditions (Table <ref>) is also included. Each submission corresponds to a separate sub-directory that is named after the unique submission id and contains: * valid audio recordings of cough (“audio.cough.mp3”), deep breathing (“audio.breath_deep.mp3”) and regular breathing (“audio.breath_regular.mp3”).
Each audio recording has a sampling rate of 48 kHz and a bitrate of 64 kb/s. * a json file (“main_questionnaire.json”) with information related to the COVID-19 test (result, type, and date), COVID-19 vaccination status, COVID-19 related symptoms, vital signs and more (Table <ref>). * a json file (“breathing_features.json”) with the extracted respiratory indicators and the manual annotations of the breathing phases (inhalation, exhalation) on the breathing audio signal (Table <ref>). * four json files (“experts.breath.json”, “experts.cough.json”, “experts.medical_advice.json”, “experts.speech.json”) including the input/labels (characterization, advice) from the healthcare professionals (Tables <ref> - <ref>). §.§ Knowledge Base A web-ontology language (OWL) knowledge base [https://www.w3.org/OWL/] was developed, motivated by the need for data consolidation across different relevant databases (e.g. Coughvid, COVID-19 Sounds, Coswara) and for the application of complex queries for the detection of users with specific characteristics. All available information resulting from the crowd-sourcing, data cleaning and data labeling procedures was also released in the form of the smarty4covid OWL knowledge base. In general, using a vocabulary 𝒱=⟨𝖢𝖭,𝖱𝖭,𝖨𝖭⟩, where 𝖢𝖭,𝖱𝖭,𝖨𝖭 are mutually disjoint sets of concept names, role names and individual names respectively, a knowledge base (𝒦=⟨𝒜,𝒯⟩) can be built by creating the Assertional Database (ABox - 𝒜) and the Terminology Database (TBox - 𝒯). The ABox includes assertions of the form C(a), r(a,b), where C∈𝖢𝖭, r∈𝖱𝖭, and a,b∈𝖨𝖭. The TBox is a set of terminological axioms of the form C⊑D, where C,D∈𝖢𝖭, and r⊑s, where r,s∈𝖱𝖭. Based on these axioms, the hierarchies of concepts and roles can be defined in the TBox. In the smarty4covid OWL knowledge base, the set of individual names (𝖨𝖭) contains a unique name for each participant, questionnaire, audio file, healthcare professional that participated in the labeling procedure, and for the corresponding characterizations of the audio records. 𝖨𝖭 also includes unique names for each declared symptom, COVID-19 test and preexisting condition, which are linked to the corresponding questionnaire (symptoms, COVID-19 tests) and participant (underlying conditions), respectively. These individuals are linked through appropriately defined roles. The role names 𝖱𝖭 and their defined hierarchy are depicted in Figure <ref>. Each role is associated with a domain and a range indicative of the types of individuals that can be linked through this role. In particular, the role 𝗁𝖺𝗌𝖢𝗁𝖺𝗋𝖺𝖼𝗍𝖾𝗋𝗂𝗓𝖺𝗍𝗂𝗈𝗇 links audio files to characterizations as labelled by the healthcare professionals, and 𝖼𝗁𝖺𝗋𝖺𝖼𝗍𝖾𝗋𝗂𝗓𝖾𝖽𝖡𝗒 links characterizations to instances of the healthcare professionals. The role 𝗁𝖺𝗌𝖠𝗎𝖽𝗂𝗈 and its children link questionnaires to audio files. The roles 𝗁𝖺𝗌𝖢𝗈𝗏𝗂𝖽𝖳𝖾𝗌𝗍 and 𝗁𝖺𝗌𝖲𝗒𝗆𝗉𝗍𝗈𝗆 link questionnaires to instances of COVID-19 tests, self-reported symptoms, and vaccination status, respectively. The role 𝗁𝖺𝗌𝖯𝗋𝖾𝖾𝗑𝗂𝗌𝗍𝗂𝗇𝗀𝖢𝗈𝗇𝖽𝗂𝗍𝗂𝗈𝗇 links participants to preexisting conditions, while 𝗁𝖺𝗌𝖴𝗌𝖾𝗋𝖨𝗇𝗌𝗍𝖺𝗇𝖼𝖾 links participants to their submitted questionnaires. The set of concept names 𝖢𝖭 involves concepts that describe instances of audio, COVID-19 tests, preexisting conditions, symptoms, users and questionnaires. For audio-related concepts, the hierarchy is shown in Figure <ref>. Specifically, there is a concept for each type of audio recording (e.g. regular breathing, deep breathing, voice, cough), and concepts regarding the audio quality.
Audio instances can additionally be linked, via the 𝗁𝖺𝗌𝖢𝗁𝖺𝗋𝖺𝖼𝗍𝖾𝗋𝗂𝗓𝖺𝗍𝗂𝗈𝗇 role, to audible abnormalities, for which the hierarchy of concepts is shown in Figure <ref>. Similarly, all preexisting conditions that appear in the questionnaire are organized as concepts in a hierarchy, as shown in Figure <ref>, and all symptoms are part of the symptom hierarchy, shown in Figure <ref>. Furthermore, the 𝖴𝗌𝖾𝗋 concept subsumes concepts related to the different age and gender of the participants, as shown in Figure <ref>, while the 𝖴𝗌𝖾𝗋𝖨𝗇𝗌𝗍𝖺𝗇𝖼𝖾 concept, which corresponds to a specific questionnaire submitted by a user, also subsumes a hierarchy based on the different possible answers in the questionnaire, shown in Figure <ref>. Finally, the concepts related to COVID-19 tests, shown in Figure <ref>, are used to define the type of test and its outcome. The described hierarchies of concepts and roles are provided in OWL format in the file [smarty-ontology.owl]. Using this terminology, all information presented in the dataset is asserted in the form of triples, provided in the file [smarty-triples.nt]. An example of a smarty4covid user is depicted in Figure <ref>. This user, who is female (20-30 years old) and has asthma, has submitted a questionnaire declaring a positive PCR test and a headache, while being a smoker. Her audio recording of cough has been labeled by medical professionals as featuring audible choking. § TECHNICAL VALIDATION §.§ Inferences from statistical analysis The representativeness of the smarty4covid dataset was explored in terms of demographics, symptoms, vaccination status, COVID-19 prevalence and level of anxiety. The distribution of gender, age and COVID-19 test results is depicted in Figure <ref>. A higher percentage (61.0%) of male versus female users was observed, yet a wide range of ages was present. Most users were between 30 and 59 years old, which is the age range characterized by increased familiarity with mobile devices. A high percentage of submissions (79.5%) included COVID-19 test results from various COVID-19 test types (e.g. PCR, Rapid Antigen, Rapid Antigen self-test), as depicted in Figure <ref>. Figure <ref> illustrates the presence of underlying medical conditions associated with the progression of COVID-19 in the smarty4covid dataset. More than 1 out of 4 users (27%) reported at least one underlying medical condition, with hypertension being the most commonly reported condition (Figure <ref>). The distribution of the underlying medical conditions was similar to the one published by Eurostat <cit.> for the general population in Greece. Regarding the COVID-19 related symptoms, more than half of the users reported at least one symptom. Figure <ref> depicts the frequency of each symptom versus vaccination status (not vaccinated, fully vaccinated and booster dose). It can be inferred that users with a booster dose presented fewer symptoms than those who were not vaccinated. Figure <ref> illustrates the percentages of COVID-19 positive and negative users for each vaccination status. It can be seen that the COVID-19 prevalence is lower within the booster-vaccinated population. The smarty4covid dataset also included vital signs (e.g. oxygen saturation, beats per minute (BPM), diastolic/systolic pressure) as measured by means of relevant devices, as well as the self-reported COVID-19 related anxiety level. A box plot of oxygen saturation for different age groups (Figure <ref>) shows a reduction in oxygen saturation with increasing age.
Figure <ref> depicts vaccination status versus anxiety level. Higher levels of anxiety were associated with a higher percentage of users vaccinated with a booster dose. §.§ Training AI models for classification of audio types An AI-based model for classifying audio segments into cough, voice and breathing was developed utilizing the smarty4covid dataset in order to: (i) validate the quality of the smarty4covid dataset towards training an AI model with generalization capabilities, (ii) support the automated cleaning of crowd-sourced audio recordings, and (iii) be integrated into relevant crowd-sourcing platforms for detecting whether a submitted audio recording is valid and, if needed, prompting the user to repeat the audio recording. §.§.§ Architecture The classifier was based on the combined use of 2D Convolutional Neural Networks (CNNs) that received as input the Mel spectrograms of audio segments of a specific duration (d) and output the probabilities of detecting cough, breath, and voice. The frequency axis of the Mel spectrograms had a size equal to 128, while the size of the time axis (d) was a hyperparameter tuned through a grid search from 128 to 1024, corresponding to approximately 1 to 10 s of audio, respectively. Each CNN consisted of b stacked blocks containing l convolutional layers followed by a 2x2 max pooling layer and a dropout layer with the dropout probability set to its default value of 0.5. The convolutional layers of each block featured k 3x3 ReLU-activated kernels and applied padding such that the output of each layer had the same dimensions as its input. Finally, the output of the final convolutional layer was flattened and fed to a fully connected layer with 3 softmax-activated neurons. A grid search was performed to determine optimal values of the hyperparameters l (from 1 to 3), k (from 64 to 128), and b (from 3 to log2(d)). This architecture was inspired by the winning entry [https://ieee-dataport.org/analysis/ntuautn-ieee-covid-19-sensor-informatics-challenge] of the COVID-19 sensor informatics challenge hackathon [https://healthcaresummit.ieee.org/data-hackathon/]. It is relatively lightweight, with approximately 300k trainable parameters depending on the value of the hyperparameter d, and shallow, with at most seven convolutional layers, which makes it less prone to overfitting and speeds up the training procedure when compared to larger neural networks. Another advantage of this architecture lies in combining CNNs featuring different time sizes d of the considered segments, resulting in a multi-scale modelling approach. §.§.§ Training procedure Prior to the training process, the audio signals were normalized, the leading and trailing silence were removed, and the Mel spectrograms were extracted. Each training instance was obtained by randomly selecting a labeled (cough, voice, breathing) segment of width d. The CNNs were trained by minimizing the categorical cross-entropy loss using the Adam algorithm <cit.>. During inference, a sliding window of length d and step 1 was used to extract all (overlapping) segments of the audio signal, which were then fed to the trained CNN that estimated the probabilities of detecting cough, voice and breathing. Following this approach, the classification of an entire audio signal was also feasible, by combining (e.g. averaging) the estimated probabilities over all extracted segments.
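The text does not state which deep learning framework was used; the sketch below is an illustrative Keras reconstruction of the described architecture (b blocks of l 3x3 ReLU convolutions with k kernels, 2x2 max pooling and dropout 0.5, followed by a flattened 3-way softmax head), not the authors' exact implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_audio_type_classifier(d=128, b=3, l=2, k=64):
    """Illustrative reconstruction of the described CNN.

    Input: Mel spectrogram segment of shape (128 frequencies, d frames, 1 channel).
    Output: softmax probabilities over (cough, breath, voice).
    """
    inputs = layers.Input(shape=(128, d, 1))
    x = inputs
    for _ in range(b):                                  # b stacked blocks
        for _ in range(l):                              # l conv layers per block
            x = layers.Conv2D(k, (3, 3), padding="same", activation="relu")(x)
        x = layers.MaxPooling2D((2, 2))(x)              # 2x2 max pooling
        x = layers.Dropout(0.5)(x)                      # default dropout rate
    x = layers.Flatten()(x)
    outputs = layers.Dense(3, activation="softmax")(x)  # cough / breath / voice
    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_audio_type_classifier()
model.summary()
```

At inference time, overlapping windows of length d can be scored with such a model and their output probabilities averaged to classify an entire recording, as described above.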
§.§.§ Results and external evaluation Aiming at exploring the impact of the width (d) on the model's performance, a low (128) and a high (1024) value were applied, resulting in two classifiers operating at a short (1 s) and a long (10 s) time scale, respectively. In order to evaluate the generalization capabilities of the classifiers, the COSWARA dataset served as the external validation dataset, since it includes all three types of the considered audio recordings. Table <ref> presents the confusion matrix of the obtained results. The long time scale classifier achieved slightly better discrimination performance than the short time scale classifier (accuracy = 95.3% vs 94%, c-statistic = 0.995 vs 0.992, macro F1 score = 0.953 vs 0.941). Leveraging the proposed architecture's flexibility, a multi-scale classifier was developed as an ensemble of the short and long time scale classifiers by applying a soft combination scheme (e.g. averaging) on the primary output probabilities. The obtained confusion matrix (Table <ref>) indicated that the multiscale classifier had the highest sensitivity in detecting cough and breathing and the lowest in detecting voice, yet the difference among the classifiers' performances was small. In order to demonstrate the multiscale classifier's effectiveness, its performance was comparatively assessed against that of the COUGHVID classifier <cit.>, which is based on a pretrained XGBoost model and scaler, applied to the COSWARA dataset. Table <ref> presents the confusion matrix when applying a probability decision threshold of 0.8, which is the optimal threshold according to the creators of the COUGHVID model. The superiority of the multiscale classifier over the COUGHVID model was demonstrated through the evaluation metrics of accuracy (95.4% versus 83.6%), c-statistic (0.995 versus 0.888) and macro F1 score (0.954 versus 0.81). §.§ Conceptual edits on the smarty4covid OWL knowledge to produce counterfactual explanations Taking into consideration the increased demand for transparent AI, a framework that leverages the high expressiveness of the smarty4covid OWL knowledge base is proposed for identifying potential biases in COVID-19 classification models and the datasets used for their development. The framework utilizes counterfactual explanations, which can provide meaningful information by identifying the most influential factors affecting the model's output. As depicted in Figure <ref>, it includes two different datasets: (i) the development dataset, which is used to train an AI-based classifier, and (ii) the explanation dataset, which is used to test the trained AI-based COVID-19 classifier. The trained AI-based COVID-19 classifier is applied to the explanation dataset, and the estimated classifications are fed into the smarty4covid OWL knowledge base, replacing the actual classifications (COVID-19, non-COVID). The resulting modified smarty4covid OWL knowledge base is subjected to conceptual edits, which apply alterations to the concepts in order to identify the minimal changes that result in switching the estimated classification to a desired class. A thorough description of utilizing conceptual edits as counterfactual explanations is presented in <cit.>. Figure <ref> illustrates two examples of identifying the minimal conceptual edits required for a COVID-19 positive user to become negative. The global counterfactual explanations are obtained by aggregating the minimal concept edits over all users.
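A minimal sketch of this final aggregation step is shown below; representing a concept edit as a (source, target) pair of strings is a simplifying assumption, whereas the actual framework operates on OWL concepts of the smarty4covid knowledge base.

```python
from collections import Counter

# Hypothetical per-user minimal concept edits that flip the prediction
# from "COVID-19 positive" to "negative" (illustrative values only).
per_user_edits = [
    [("Male", "Female")],
    [("Male", "Female"), ("Smoker", "NonSmoker")],
    [("AgeGroup60plus", "AgeGroup20to30")],
    [("Male", "Female")],
]

# Global counterfactual explanation: how often each edit is needed.
global_explanation = Counter(edit for edits in per_user_edits for edit in edits)

for (source, target), count in global_explanation.most_common():
    print(f"{source} -> {target}: required for {count} users")
```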
In order to validate the aforementioned framework, a COVID-19 classifier was developed and potential biases were explored, using the Coswara dataset as the development dataset and the smarty4covid dataset as the explanation dataset. The COVID-19 classifier was based on ensembles of CNNs that received as input segments of the cough audio signal's mel spectrogram. The obtained global explanations are presented in Figure <ref>. Gender was identified as the most critical factor for switching from COVID-19 positive to negative. This bias needed to be explored further, in order to determine whether it originated from the Coswara dataset or from the COVID-19 classifier. The application of some basic statistics on the Coswara dataset revealed that the COVID-19 prevalence in the male population was higher than in the female population. The same applied to the age of the Coswara users, as depicted in Figure <ref>. § USAGE NOTES A purpose-built triplestore database (e.g. GraphDB [https://graphdb.ontotext.com]) is required in order to utilize the OWL knowledge base files, while an ontology editor (e.g. Protégé [http://protege.stanford.edu]) is needed to modify the underlying ontology. The smarty4covid OWL ontology can also be loaded as a Python object using the owlready2 [https://owlready2.readthedocs.io] and rdflib [https://rdflib.readthedocs.io] packages. § CODE AVAILABILITY The audio classifier and the algorithm for extracting breathing features are available in a public repository [https://github.com/kinezodin/smarty4covid]. Furthermore, the repository includes the weights of the CNNs used by the classifier and a script for generating triples from the available data for the purpose of customizing the smarty4covid OWL knowledge base. § ACKNOWLEDGEMENTS This research was funded by the Hellenic Foundation for Research and Innovation (H.F.R.I.) within the framework of the H.F.R.I. Science & Society “Interventions to address the economic and social consequences of the COVID-19 pandemic” call. Grant number: 05020. § AUTHOR CONTRIBUTIONS STATEMENT K.Z: study conception, design, and implementation, draft manuscript preparation, interpretation of results, funding acquisition; E.D, G.F, T.G: study implementation, data curation, data analysis and interpretation of results; V.G, A.S: data labeling; R.R, M.N: review and editing, interpretation of results; G.S: conceptualization, interpretation of results; K.N: study conception, interpretation of results, funding acquisition, supervision. All authors reviewed the manuscript. § COMPETING INTERESTS The authors declare no competing interests. § FIGURES & TABLES

Table: Main questionnaire json file description (Part 1/3: COVID-19 related information)
Field name | Description | Type | Values
participantid | Participant's identification number | String | UUID
submissionid | Questionnaire's identification number | String | UUID
covid_status | Tested for COVID-19 | String | "positive": Positive, "negative": Negative, "no": Not tested
pcr_test | Tested with PCR | Bool |
rapid_test | Tested with a Rapid Antigen test | Bool |
self_test | Tested with a Rapid Antigen self test | Bool |
test_last_3_days | Tested in the last 3 days | Bool |
last_negative_test_date | Date of the last negative test | String | "yyyy-mm-dd"
first_positive_test_date | Date of the first positive test | String | "yyyy-mm-dd"
vaccination_status | COVID-19 vaccination status | String | "no": No, "partially": One of two shots, "fully": Fully, "booster1": Fully and booster dose, "booster2": Fully and two booster doses
latest_vaccination_date | Date of the last vaccination dose | String | "yyyy-mm-dd"
hospitalization | Whether the user was hospitalised for COVID-19 | String | "0": No, "1": I am currently hospitalized, "2": Yes, discharged a week ago, "3": Yes, discharged more than a month ago
exposure_to_someone_with_covid | Whether the user was exposed to a confirmed COVID-19 case | String | "No" / "Maybe" / "Yes"
travelled_abroad | Whether the user has travelled abroad in the last 14 days | String | "0": No, "1": Yes
submission_timestamp | Timestamp when the submission was received | String |

Table: Main questionnaire json file description (Part 2/3: Symptoms and vital signs)
Field name | Group | Description | Type | Values
sore_throat | Symptoms | Sore Throat | Bool |
dry_cough | Symptoms | Dry Cough | Bool |
wet_cough | Symptoms | Productive Cough | Bool |
sputum | Symptoms | Sputum | Bool |
runny_nose | Symptoms | Nasal congestion | Bool |
breath_discomfort | Symptoms | Dyspnea | Bool |
has_fever | Symptoms | Fever | Bool |
tremble | Symptoms | Chills | Bool |
fatigue | Symptoms | Fatigue | Bool |
headache | Symptoms | Headache | Bool |
dizziness | Symptoms | Dizziness/confusion | Bool |
myalgias_arthralgias | Symptoms | Myalgias, arthralgias | Bool |
taste_smell_loss | Symptoms | Loss of taste/smell | Bool |
diarrhea_upset_stomach | Symptoms | Stomach upset/Diarrhea | Bool |
sneezing | Symptoms | Sneezing | Bool |
dry_throat | Symptoms | Dry Throat | Bool |
oxymeter | Vital Signs | Oximetry test | Bool |
oxygenSaturation | Vital Signs | Oxygen Saturation | Int | [60, 99]
bpm | Vital Signs | Beats per minute (BPM) | Int | [30, 250]
blood_pressure_meter | Vital Signs | Blood pressure test | Bool |
systolic_pressure | Vital Signs | Systolic Pressure | Int | [30, 260]
diastolic_pressure | Vital Signs | Diastolic Pressure | Int | [30, 260]
breath_holding | Vital Signs | Seconds of breath holding | Int | [0, ∞)
leave_bed | Difficulty to | Leave Bed | Bool |
leave_home | Difficulty to | Leave Home | Bool |
prepare_meal | Difficulty to | Prepare Meal | Bool |
concentrate | Difficulty to | Concentrate | Bool |
self_care | Difficulty to | Self Care | Bool |
other_difficulty | Difficulty to | Everyday activities | Bool |

Table: Main questionnaire json file description (Part 3/3: Smoking habits, anxiety level, and working status)
Field name | Group | Description | Type | Values
smoking | Smoking habits | Smoking status | String | "nev": Never smoked, "ex": Ex-smoker, "yes": Smoker
years_of_quitting_smoking | Smoking habits | Years of quitting smoking | Int | [0, ∞)
years_of_smoking | Smoking habits | Years of smoking | Int | [0, ∞)
no_cigarettes | Smoking habits | Number of cigarettes per day | String | "1u": less than 1, "10u": 1-10, "20u": 11-20, "20o": more than 20
vaping | Smoking habits | Vaping | String | "0": No, "1": Yes
anxiety | | Level of anxiety about the pandemic | String | "0": None, "1": Low, "2": Moderate, "3": High, "4": Very High
working | | Working Status | String | "home": Working from home, "hospital": Working in hospital, "store": Working in an essential goods store (pharmacy, supermarket), "social": Working in a service with increased contact with the general public, "no": Not working
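To make the record layout summarized in the tables above concrete, the following minimal sketch reads one submission's questionnaire and breathing features and loads its regular-breathing recording. The directory names are placeholders for the UUID-named folders in the Zenodo data records, and the audio loading assumes librosa with an MP3-capable backend (e.g. ffmpeg) is installed; this is an illustration, not part of the released code.

```python
import json
from pathlib import Path

import librosa  # assumes an MP3-capable backend (e.g. ffmpeg) is available

# Placeholder paths: replace with an actual participant id / submission id
submission_dir = Path("smarty4covid") / "<participant-uuid>" / "<submission-uuid>"

# Tabular records described in the tables above
questionnaire = json.loads((submission_dir / "main_questionnaire.json").read_text())
breathing = json.loads((submission_dir / "breathing_features.json").read_text())

print("COVID-19 status:", questionnaire.get("covid_status"))
print("Vaccination status:", questionnaire.get("vaccination_status"))
print("Extracted respiratory indicators:", breathing)

# Regular breathing recording (48 kHz, 64 kb/s MP3)
audio, sr = librosa.load(str(submission_dir / "audio.breath_regular.mp3"), sr=None)
print(f"Loaded {len(audio) / sr:.1f} s of regular breathing audio at {sr} Hz")
```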
http://arxiv.org/abs/2307.07617v1
20230714202512
Generalized Finite Difference Method on unknown manifolds
[ "Shixiao W. Jiang", "Rongji Li", "Qile Yan", "John Harlim" ]
math.NA
[ "math.NA", "cs.NA" ]
http://arxiv.org/abs/2307.04340v2
20230710044840
Crystal Structure Generation with Autoregressive Large Language Modeling
[ "Luis M. Antunes", "Keith T. Butler", "Ricardo Grau-Crespo" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
Crystal Structure Generation with Autoregressive Large Language Modeling Luis M. Antunes, Keith T. Butler, Ricardo Grau-Crespo ================================================================================ The generation of plausible crystal structures is often an important step in the computational prediction of crystal structures from composition. Here, we introduce a methodology for crystal structure generation involving autoregressive large language modeling of the Crystallographic Information File (CIF) format. Our model, CrystaLLM, is trained on a comprehensive dataset of millions of CIF files, and is capable of reliably generating correct CIF syntax and plausible crystal structures for many classes of inorganic compounds. Moreover, we provide general and open access to the model by deploying it as a web application, available to anyone over the internet. Our results indicate that the model promises to be a reliable and efficient tool for both crystallography and materials informatics. § INTRODUCTION The in silico search for new materials often involves the exploration of a space of compositions in a chemical system, and the investigation of various predicted structural phases in that space (see <cit.> and <cit.> for examples). To predict the structures of unknown materials, a Crystal Structure Prediction (CSP) approach is often employed, which attempts to derive the ground state crystal structure for a given chemical composition under specific physical conditions. CSP approaches are relatively computationally expensive, typically involving ab initio techniques. They often begin with the generation of candidate structures. Examples are the AIRSS <cit.> and USPEX <cit.> approaches. Initializing the search space with sensible structures increases the likelihood of success, and decreases the amount of computation required. It is therefore expected that effective Crystal Structure Generation (CSG) tools would help accelerate the prediction of structures using CSP methods. Increasingly, techniques from Machine Learning (ML) and data science are being used to solve problems in materials science. <cit.> In particular, generative modelling approaches based on autoencoder architectures and generative adversarial networks (GANs) <cit.> have been used to generate crystal structures. <cit.> Indeed, generative modelling has become commonplace, an outcome catalyzed by astounding advancements in the computational generation of images, audio and natural language over the last several years. <cit.> The Large Language Model (LLM), backed by the Transformer architecture <cit.>, is the approach behind state-of-the-art performance on natural language processing tasks. This approach begins with a generative pre-training step, which is autoregressive in nature, involving the unsupervised task of predicting the next token given a sequence of preceding tokens. <cit.> When such models are scaled to billions of parameters, their effectiveness becomes quite remarkable, as tools such as ChatGPT <cit.> demonstrate. The LLM approach has recently been used in the context of materials science. <cit.> However, these attempts have focused either on training and tuning the model for natural language tasks and utilizing it in natural language generation scenarios involving chemical subject matter, or on training the model on a corpus of expanded chemical compositions for the purpose of generating unseen compositions.
An alternate perspective, which we present here, is to train the model on textual representations of inorganic crystal structures, such as the Crystallographic Information File (CIF) format, rather than on corpora of natural language, or chemical compositions alone. The motivation for this perspective originates from two conjectures: The first states that a sequence of symbols (i.e. tokens) is an appropriate representation modality for many predictive tasks (including those involving chemical structure). The idea of representing any domain with a sequence of tokens may at first seem counter-intuitive. However, consider that even images can be represented this way, and be subject to the autoregressive language modelling of pixels <cit.>. This challenges the notion that domain-specific representations, such as graphs for chemical structure, are necessary for superior performance. The second conjecture states that LLMs learn more than simply “surface statistics” and the conditional probability distribution of tokens. Indeed, autoregressive pre-training involving next-token prediction may result in learning an effective world model: an internalized causal model of the processes generating the target phenomena. A model which simply learns spurious correlations in the data is less desirable, as it may have greater difficulty in generalizing beyond the training distribution. Recent studies have demonstrated that LLMs trained on sequences of board game play (e.g. Chess and Othello) do indeed track the state of the board, and probes of the internal activations of the model reveal the existence of representations of various abstract concepts specific to the domain. <cit.> We therefore asked whether a model trained to predict the 3-dimensional coordinates of atoms, digit-by-digit, could learn the chemistry implicit in crystal structures, and generate unseen structures, borrowing from its model of the world of atoms. As such, we herein describe the CrystaLLM model, a tool for CSG trained on an extensive corpus of CIF files representing the structures of millions of inorganic solid-state materials. Unlike small molecule organic compounds, the generative modelling of inorganic crystals presents unique challenges: the structures are complex and periodic, are not readily described by simple graphs, and are imbued with different forms of symmetry. Moreover, they can be constructed from more than 100 different elements. Even so, the model is capable of reliably generating correct CIF syntax and physically plausible crystal structures for many classes of inorganic compounds. § METHODS The following terminology is used in the remainder of the document: A formula, or reduced composition, refers to the empirical formula, or formula unit, which is the simplest, whole-number ratio of atoms in the compound. An example of a formula is Ba2MnCr. A cell composition is a chemical formula referring to the total number of atoms of each type in the unit cell of a crystal. It represents the chemical formula of the compound as it would appear in the crystal structure, which might contain multiple formula units. An example of a cell composition is Ba6Mn3Cr3. §.§ Dataset The dataset was assembled by obtaining structures from the Materials Project <cit.>, the OQMD <cit.>, and NOMAD <cit.>, which were originally optimized using density functional theory (DFT) simulations. In total, approximately 3.6 million structures were obtained. 
This dataset consists of compounds containing anywhere from 1 to 10 elements, with most consisting of 3 or 4 elements. The elements up to and including atomic number 94 are present, with the exception of polonium, astatine, radon, francium, and radium. The dataset contains roughly 800,000 unique formulas, and 1.2 million unique cell compositions. When paired with space groups, there are 2.3 million unique cell composition-space group pairs. To choose between duplicate structures containing the same cell composition and space group, the structure with the lowest volume per formula unit was selected. The 2.3 million structures in this dataset were converted to CIF files using the pymatgen library <cit.>, and were used for training. The CIF files were created with the pymatgen option for symmetry finding tolerance set to 0.1 Å. All floating point numbers in the files were rounded to 4 decimal places. The dataset was split randomly into train, validation, and test sets, such that the training set consisted of about 2.2 million CIF files, the validation set 35,000 CIF files, and the test set 10,000 CIF files. §.§ Tokenization The dataset of CIF files was tokenized prior to training. The vocabulary consisted of CIF tags, space group symbols, element symbols, numeric digits, and various punctuation symbols, for a total of 371 symbols. After tokenization, the training set consisted of 768 million tokens. §.§ Generative Pre-training The generative pre-training step requires a vocabulary, 𝒱, and an ordered list of tokens 𝒰 = (u_1, ..., u_n), with u_i ∈𝒱. We want to maximize the following likelihood: ℒ(θ; 𝒰) = ∑_i log P(u_i | u_i-c, ..., u_i-1;θ) where c is the size of a context window, P is the conditional probability distribution to be modelled, and θ the parameters of a neural network. We therefore minimize 𝒥(θ; 𝒰)=-ℒ, using stochastic gradient descent to adjust the parameters. We use a multi-layer Transformer decoder <cit.> for the neural network, as described in <cit.>. Our model consists of 25 million parameters, with 8 layers, 8 attention heads, and an embedding size of 512. We decay the learning rate from 10^-3 to 10^-4 over the course of training, and use a batch size of 32. §.§ Evaluation To evaluate the generative capabilities of the model, we define two scenarios where the model is tasked with generating the compounds of the held-out test set. The first scenario, which we name the Cell Composition-only scenario, involves prompting the model with each cell composition in the test set, and having it generate up to a maximum of 3000 tokens. The model is prompted with only the first line of a CIF file, which consists of the data block header, containing the cell composition of the structure specified in the rest of the file. The second scenario, which we name the Cell Composition+Space Group scenario, is similar to the first, except that the model is prompted with both the cell composition and space group, for each entry in the test set. Moreover, we perform the generation 3 separate times for each entry. To assess how well the model performed in the first scenario, we check if a generated CIF file is consistent in terms of space group, if it is consistent in terms of the atom site multiplicity, and if the generated bond lengths are reasonable. To check if the generated structure is consistent with the printed space group, we use the class of the pymatgen library, which uses the spglib library <cit.>. 
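A rough sketch of this space-group consistency check is shown below; it relies on pymatgen's symmetry analysis (which wraps spglib) and mirrors the 0.1 Å symmetry-finding tolerance mentioned earlier, but it should be read as an illustration rather than the authors' exact validation code.

```python
from pymatgen.core import Structure
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer

def space_group_is_consistent(cif_path: str, symprec: float = 0.1) -> bool:
    """Check that the symmetry detected from the generated coordinates
    matches the space group symbol printed in the generated CIF file."""
    structure = Structure.from_file(cif_path)

    # Space group symbol as written in the generated CIF
    printed_symbol = None
    with open(cif_path) as f:
        for line in f:
            if line.strip().startswith("_symmetry_space_group_name_H-M"):
                printed_symbol = line.split(None, 1)[1].strip().strip("'\"")
                break

    # Space group detected from the atomic coordinates (spglib under the hood)
    detected_symbol = SpacegroupAnalyzer(structure, symprec=symprec).get_space_group_symbol()
    return printed_symbol is not None and printed_symbol == detected_symbol
```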
To check if bond lengths are reasonable, we first use a Voronoi-based nearest-neighbour algorithm in pymatgen to define which atoms are bonded together; then, we establish expected bond lengths based on the electronegativity difference between the bonded atoms, and their ionic or covalent radii. We classify a structure as having reasonable bond lengths if all the detected bond lengths are within 30% of the corresponding expected bond lengths. The goal of the second evaluation scenario is to establish how often the model can recover the unseen structures of the test set, when prompted with a cell composition and space group. To determine whether a generated structure matches the structure in the test set, we use the pymatgen class, which performs a structural similarity assessment of two crystals. We use a fractional length tolerance of 0.2, a site tolerance of 0.3 Å, and an angle tolerance of 5 degrees, which are the default values in pymatgen. Both structures are reduced to primitive cells before matching, and are scaled to equivalent volume. §.§ DFT Calculations For the pyrochlore case study, a small number of DFT calculations were performed using VASP, following as closely as possible the settings used in the OQMD project (where most of the pyrochlore structures seen in training were taken from). For example, the recommended PAW potential was used for each element: Zr_sv for zirconium, Hf_pv for hafnium, Lu_3 for lutetium, Pr_3 for praseodymium, Ce_3 for cerium (for the remaining elements, the name of the PAW potential simply matched the element's symbol). The Perdew-Burke- Ernzerhof (PBE) exchange-correlation functional <cit.>, in the generalized-gradient approximation, was used in all calculations. Hubbard (PBE+U) corrections were applied for transition metal elements with unfilled d levels (U_eff=3.8 eV for Mn and 3.1 eV for V). Although the cell parameters reported here correspond to the conventional cubic cell with 8 formula units, the DFT calculations were performed using the primitive cell with two formula units, and sampling of the reciprocal space corresponding to that primitive cell was performed using a 7x7x7 grid, as done for all pyrochlore calculations in the OQMD project. § RESULTS §.§ Assessment of Generation Quality To assess the quality of the model's generated structures, we considered two scenarios, as discussed in section <ref>. The Cell Composition-only scenario involves prompting the model with the first line of the test set CIF file only (which specifies the cell composition), whereas the Cell Composition+Space Group scenario involves prompting the model from the first line of the test set CIF file to the line specifying the space group (inclusive). The fraction of generated structures that are consistent in terms of space group, atom site multiplicity, and have reasonable bond lengths are presented in Table <ref>. The generated CIF files of the Cell Composition+Space Group scenario were compared to the corresponding CIF files of the test set using a structure matching algorithm (as discussed in section <ref>). The fraction of matching structures is presented in Table <ref>. The Reduced Unseen column represents the results for formulas that were not seen in training with any Z. We further examined how closely the generated cell parameters resembled the actual cell parameters, for the cases where there was a structural match. 
We took the first matching structure for samples that had at least one generated structure matching the test set structure, and measured the R^2 and mean absolute error (MAE) for the true versus generated cell lengths, the true versus generated (i.e. printed) volume, and the implied (from cell parameters) versus generated volume. The results are presented in Table <ref> and Figure <ref>. §.§ Generalizing to Unseen Scenarios To further examine the model's ability to generalize to unseen scenarios, we prompted the model with various formulas, and examined its output. The results are presented in Figure <ref>. An example of the model generalizing to a formula that had been seen in training, but with different space groups, is presented in Figure <ref>a. The formula, Ba2MnCr, was in the held-out test set, with the R3̅m space group. That combination of formula and space group had not been seen in training. The model generated a structure matching the one in the test set on the first attempt, when the space group was provided. The model also demonstrated the ability to generate plausible structures for formulas not seen in training with any Z. An example is the quaternary compound CsCuTePt. This compound was not in the training set, but was in the held-out test set (with Z=4). The model generated a structure matching the one in the test set, in the F4̅3m space group, on the third attempt when the space group was provided. The generated structure is presented in Figure <ref>b. Finally, in Figure <ref>c is the generated structure of YbMn6Sn6 <cit.>, an example of the model generalizing to structural motifs with atoms not seen in training. This formula was not seen in training for any Z, and was not in the held-out test set. However, ZrMn6Sn6 was seen in training, in the P6/mmm space group. The model generated a structure in the same space group on the first attempt, without the space group being provided. The generated structure matched the ZrMn6Sn6 structure, with Yb substituted for Zr, and with cell parameters and atomic coordinates adjusted accordingly. This demonstrates the model performing a structure prediction by analogy procedure, as commonly used by materials scientists for discovery <cit.>, despite never having been provided with the procedure to do this. §.§ Generating Known Structural Classes The CrystaLLM model was trained on an extensive collection of the various structural classes known to inorganic chemistry. We thus investigated its ability to generate unseen members of these classes. We focused on classes of binary, ternary and quaternary compounds. §.§.§ Rutiles Rutiles are a class of binary compounds that adopt a tetragonal unit cell, in the P4_2/mnm space group (Z=2), as is seen in TiO2, from which this class of materials adopts its name. The general formula for rutile oxides is MO2, where M is a metallic species in the +4 oxidation state. Rutile fluorides are also known, where the metal is in the +2 oxidation state. The model's training dataset consisted of essentially all of the rutiles one might expect to be able to find in nature. Therefore, to test the model's ability to generate unseen rutiles, we requested the generation of theoretically possible, but unlikely compounds, such as AuO2. With gold in a highly unlikely +4 oxidation state, AuO2 is not expected to be formed under most conditions. However, the model was able to imagine what the structure of such a compound might be (when the space group is provided). 
While TiO2 has cell parameters a=4.594Å, c=2.959Å, the generated rutile gold variant has a=4.838Å c=3.429Å, reflecting the increased volume occupied by the larger gold atoms (Figure <ref>a). §.§.§ Spinels The spinels are a group of ternary compounds with the general formula AB2X4, where A is a cation in the +2 oxidation state, B is a cation in the +3 oxidation state, and X, normally a chalcogen, is an anion. Spinels form cubic close-packed structures, with eight tetrahedral, and four octahedral sites, normally in the Fd3̅m space group. To explore the model's ability to generate unseen spinels, we selected two samarium spinels: Sm2BO4, which was present in the held out test set, and the thiospinel Sm2BS4, which was absent from both the training and test sets. The model was able to generate the expected spinel structures for both compounds when the cell composition and space group were provided (Figures <ref>b and <ref>c). During training, the model encountered a number of different oxy-, thio-, and selenospinels, and this likely contributed to its ability to generate these two compounds. §.§.§ Elpasolites The elpasolites are quaternary compounds with the general formula ABC2X6. The A and C species are typically alkali metal cations in the +1 oxidation state, B is usually a transition metal cation in the +3 oxidation state, and X is a halogen anion. The elpasolites are often referred to as “double perovskites”, since their structures are related to perovskites by the doubling of their unit cell dimensions, and the replacement of the M^2+ cation with alternating M^+ and M^3+ cations. Elpasolites crystallize in the Fm3̅m space group, and are the most common quaternary crystal system reported in the Inorganic Crystal Structure Database (ICSD) <cit.>. We wondered if the CrystaLLM model could generate elpasolites not seen during training. We selected two elpasolites from the held-out test, that were not seen in training: the fluoride KRb2TiF6 and the iodide K2AgMoI6. The model was able to generate the correct elpasolite structure when the cell composition and space group was provided (Figures <ref>d and <ref>e). §.§.§ Pyrochlores The general formula for the pyrochlores is A2B2O7, where A, a trivalent cation, and B, a tetravalent cation, are either rare-earths or transition metals (other oxidation states, e.g. combining monovalent and pentavalent cations, are also possible, but we focus here on the trivalent/tetravalent pyrochlores). Pyrochlores crystallize in the Fd3̅m space group (Z=8). There are many combinations of A and B that are possible for this structure, by using lanthanide ions, actinide ions, and Y(III) for the A species, and various transition metal ions, as well as Ti(IV), Zr(IV), and Hf(IV) for the B species. We investigated whether CrystaLLM could generate valid pyrochlore structures for any unseen combinations, and whether it could estimate reasonable cell parameters in line with the trends observed for the pyrochlore series, as the cell parameters are expected to be correlated with the ionic radii of the A and B cations. We created a space of pyrochlores consisting of 144 compounds by producing different combinations of A and B species. Of these, 54 were seen in training. We selected 10 compounds from among the 90 not seen in training, and attempted 3 generations with the model, for each. The cell composition and space group were included in the prompt. All generations resulted in valid pyrochlore structures (Table <ref>). 
We subsequently performed DFT relaxation calculations on the first generated structure for each of the 10 compounds. One case, Ce2V2O7, was problematic and was excluded from further analysis. This result isn't very surprising, since both Ce and V are pathological elements in DFT settings. The DFT-derived value of the cell parameter for each of the 10 compounds is plotted against the mean generated value in Figure <ref>. A good agreement exists between the DFT-derived and generated cell lengths, with an R^2 of 0.62 and MAE of 0.08 Å being exhibited. §.§ Problematic Cases While the model seems capable of generating structures for many different classes of inorganic crystals, it does nonetheless have difficulty in certain cases. All of the cases appear to involve systems that are rare, and under-represented in the training dataset. For example, the model was generally unable to generate a structure for Mg7Pt4Ge4, the structure of which was reported recently to exist in the P6_3mc space group (Z=2). <cit.> In this case, there were only 38 examples of 7:4:4 systems in the training dataset, none contained Mg or Pt, and none were in the P6_3mc space group. The current version of the model also seems to struggle with generating phosphates, sulfates, carbonates, and organic-inorganic hybrid structures. Examples include carbonate hydroxide minerals, such as Co2CO3(OH)2 <cit.> and Cu2CO3(OH)2 (malachite). While present in the dataset, they belong to a group of analogous structures for which there are only a handful of examples. While the model can generate Ca5(PO4)3(OH) (hydroxyapatite), it generally fails to generate a valid structure for Mn4(PO4)3. A common theme is the appearance of multiple oxyanions, which can give rise to more complex arrangements of atoms, for which the model may not have seen enough examples. In contrast, the model can generate compounds of the perovskite class reliably. However, over 5,000 examples of the ABX3 (X=O,F) system in the Pm3̅m space group were seen in training. Future versions of the model will consider strategies for addressing these occurrences of class imbalance. §.§ The CrystaLLM.com Web Application To allow for general and open access to the CrystaLLM model, we make it available through a web application, available at https://crystallm.com/https://crystallm.com. The user of the application is presented with a text field requiring a formula to be entered. Optionally, they may provide the number of formula units (Z) and the desired space group (Figure <ref>). Once they press the button, a request is sent to a GPU server which has the model in memory. The request is converted into a prompt, and the generated contents are returned to the user. If no Z is provided, we scan through Z values of 1, 2, 3, 4, 6, and 8, and return the first valid structure generated by the model. We validate the generated structure using the same procedure described in the Methods section, checking that the generated structure is consistent in terms of the printed space group, and other elements of the CIF file. If no valid structure can be found, the user is presented with an informative error message, including the option to view the generated content. Requests typically take several seconds to process, but can take longer if no Z is provided and the model has trouble finding an appropriate Z value. Generated structures are displayed in a web browser-based 3D structure viewer provided by the Crystal Toolkit framework, upon which the front-end of the web application is built. 
<cit.> By making the model easily accessible, we hope to contribute a potentially useful tool to the materials structure research community. We also hope to receive feedback from users that may help improve future versions of the model. § DISCUSSION & CONCLUSION Here, we have shown that LLMs of the CIF format are able to generate inorganic crystal structures for a variety of known classes. Indeed, the model is able to produce valid and sensible arrangements of atoms in 3-dimensional space by generating xyz coordinates digit-by-digit. The model also seems to have captured the relationship between space group symbols and the symmetries inherent in the structures it generates. We chose to build a language model of the CIF format (instead of a simplified format, for example, which might include a minimal vocabulary) for several reasons. First, the CIF format is not particularly verbose. The model learns the grammatical structure of the format fairly quickly. We can thus avoid having to devise an intermediate format that requires inter-conversion between more common formats, which could also be error prone. Second, we believe that having the model learn to generate the more redundant parts of the CIF format, such as the cell volume, and Z, which are inferable from prior inputs, helps the model to perform better overall. While the model can generate sensible structures, this does not by itself make it suitable, as is, for CSP. Just as natural language LLMs, such as GPT-3 and -4, are not suitable chatbots without further fine-tuning, the CrystaLLM model will also need to be fine-tuned for more advanced tasks. Fine-tuning involves an additional and separate training step, where the model's parameters are adjusted in the context of a different task. This may also involve altering the model's output layer, such as to make it suitable for a regression task, for example. Models can be fine-tuned using a variety of techniques, but supervised learning and reinforcement learning <cit.> are most common. One might use reinforcement learning, for example, when a task is not clearly defined as a supervised learning problem. When fine-tuning natural language LLMs for chatbot applications, it is common to use Reinforcement Learning from Human Feedback (RLHF). <cit.> With RLHF, the idea is to gather data from human annotators to be used to train a reward model, which scores generated text according to its desirableness. The reward model is then used as part of a reinforcement learning-based tuning of the LLM. In CSP, one would like to produce ground-state structures (for some given physical conditions). One could thus imagine an analogous procedure where CrystaLLM is fine-tuned for the goal of generating low-energy structures, via feedback from an external evaluator of the generated structure's energy. We call this Reinforcement Learning from Thermodynamic Feedback (RLTF). This procedure would also require a reward model, and such a model should ideally provide a timely estimate of a structure's energy. This excludes time-consuming approaches such as DFT. A viable approach could make use of a separate machine learning-based model of formation energy, such as one based on ALIGNN. <cit.> Indeed, neural network potentials have been used to accelerate the prediction of crystal structures. <cit.> There are several limitations with the current approach. First, none of the structures of the dataset have site-occupancy disorder (fractional site occupancies). 
Therefore, CrystaLLM cannot generate disordered structures, and may not successfully generate structures for combinations of cell composition and space group that imply a disordered structure. An example is K2NaTiOF5, which is reported to be an elpasolite, in the Fm3̅m space group (Z=4), with F and O species sharing the same crystal site <cit.>. Another limitation is that the CIF files of the dataset were not all created using the same level of theory. The training set is derived from a combination of DFT sources using different settings, functionals, etc., which may make it difficult for the model, in some instances, to learn a consistent relationship between cell composition and detailed structure. <cit.> Nevertheless, we believe that CrystaLLM will be a useful tool for CSG and materials informatics. We plan to explore fine-tuning the model for physical property prediction tasks, such as the prediction of lattice thermal conductivity, where experimental data is relatively scarce. <cit.> The architecture of the model allows it to be fine-tuned for either composition-based or structure-based prediction tasks. This implies that CrystaLLM may be the basis for a general-purpose materials informatics model, which can be used for generative tasks, and fine-tuned for property prediction tasks that require either composition or structure. If the model is able to transfer what it has learned about the world of atoms to these various predictive problems, it may prove to be a quite flexible tool relevant to many aspects of materials chemistry. § NOTE During development of the CrystaLLM model, we became aware of a pre-print by Flam-Shepherd and Aspuru-Guzik that describes the use of autoregressive large language modelling for molecular and crystal structure generation. <cit.> While the fundamental idea of generating the coordinates of atomic systems token-by-token is the same, our work differs in the following ways: 1, we focus exclusively on the generation of the crystal structures of inorganic materials; 2, we train the model directly on CIF files and CIF syntax, with a vocabulary consisting of CIF tags and space group symbols, in addition to atomic symbols and numeric digits; 3, we use a much larger and custom dataset consisting of millions of CIF files for training the model; 4, our model is symmetry-aware, and supports the generation of structures in specified space groups and for specific numbers of formula units. In summary, we develop a model specifically for the purposes of material structure generation, which produces syntactically valid and physically sensible CIF files as an output. § DATA AVAILABILITY The structures used in the experiments described in this work were obtained from the Materials Project (https://materialsproject.org/https://materialsproject.org/), the OQMD (https://oqmd.org/https://oqmd.org/), and NOMAD (https://nomad-lab.eu/https://nomad-lab.eu/). All structures were made available by those sources under the Creative Commons Attribution 4.0 License. <cit.> § ACKNOWLEDGEMENTS This work was partially supported by computational resource donations from Amazon Web Services through the AWS Activate program, obtained with assistance from the Communitech Hub. For the DFT calculations, we used the Young supercomputer facility via the UK Materials and Molecular Modelling Hub, which is partially funded by EPSRC (EP/T022213/1, EP/W032260/1). § AUTHOR CONTRIBUTIONS L.M.A. conceived the project, performed the experiments, and drafted the manuscript. L.M.A. and R.G.-C. 
designed the experiments. R.G.-C. carried out the DFT calculations for the pyrochlore case study. R.G.-C. and K.T.B. supervised and guided the project. All authors reviewed, edited and approved the manuscript.
http://arxiv.org/abs/2307.05962v1
20230712071001
Radial boundary elements method and removing singularity of integrals by an optimal selection of boundary source points
[ "Hossein Hosseinzadeh", "Zeinab Sedaghatjoo" ]
math.NA
[ "math.NA", "cs.NA" ]
http://arxiv.org/abs/2307.07361v1
20230714140755
Gloss Attention for Gloss-free Sign Language Translation
[ "Aoxiong Yin", "Tianyun Zhong", "Li Tang", "Weike Jin", "Tao Jin", "Zhou Zhao" ]
cs.CV
[ "cs.CV", "cs.CL" ]
Gloss Attention for Gloss-free Sign Language Translation Aoxiong Yin1 Both authors contributed equally to this research. , Tianyun Zhong1 * , Li Tang1 , Weike Jin1 , Tao Jin1 , Zhou Zhao1Corresponding author. 1Zhejiang University {yinaoxiong,zhongtianyun,tanglzju,weikejin,jint_zju,zhaozhou}@zju.edu.cn August 12, 2023 ============================================================================================================================================================================================================================================================= Most sign language translation (SLT) methods to date require the use of gloss annotations to provide additional supervision information, however, the acquisition of gloss is not easy. To solve this problem, we first perform an analysis of existing models to confirm how gloss annotations make SLT easier. We find that it can provide two aspects of information for the model, 1) it can help the model implicitly learn the location of semantic boundaries in continuous sign language videos, 2) it can help the model understand the sign language video globally. We then propose gloss attention, which enables the model to keep its attention within video segments that have the same semantics locally, just as gloss helps existing models do. Furthermore, we transfer the knowledge of sentence-to-sentence similarity from the natural language model to our gloss attention SLT network (GASLT) to help it understand sign language videos at the sentence level. Experimental results on multiple large-scale sign language datasets show that our proposed GASLT model significantly outperforms existing methods. Our code is provided in <https://github.com/YinAoXiong/GASLT>. § INTRODUCTION Sign languages are the primary means of communication for an estimated 466 million deaf and hard-of-hearing people worldwide<cit.>. Sign language translation (SLT), a socially important technology, aims to convert sign language videos into natural language sentences, making it easier for deaf and hard-of-hearing people to communicate with hearing people. However, the grammatical differences between sign language and natural language <cit.> and the unclear semantic boundaries in sign language videos make it difficult to establish a mapping relationship between these two kinds of sequences. Existing SLT methods can be divided into three categories, 1) two-stage gloss-supervised methods, 2) end-to-end gloss-supervised methods, and 3) end-to-end gloss-free methods. The first two approaches rely on gloss annotations, chronologically labeled sign language words, to assist the model in learning alignment and semantic information. However, the acquisition of gloss is expensive and cumbersome, as its labeling takes a lot of time for sign language experts to complete <cit.>. Therefore, more and more researchers have recently started to turn their attention to the end-to-end gloss-free approach <cit.>. It learns directly to translate sign language videos into natural language sentences without the assistance of glosses, which makes the approach more general while making it possible to utilize a broader range of sign language resources. The gloss attention SLT network (GASLT) proposed in this paper is a gloss-free SLT method, which improves the performance of the model and removes the dependence of the model on gloss supervision by injecting inductive bias into the model and transferring knowledge from a powerful natural language model. 
A sign language video corresponding to a natural language sentence usually consists of many video clips with complete independent semantics, corresponding one-to-one with gloss annotations in the semantic and temporal order. Gloss can provide two aspects of information for the model. On the one hand, it can implicitly help the model learn the location of semantic boundaries in continuous sign language videos. On the other hand, it can help the model understand the sign language video globally. In this paper, the GASLT model we designed obtain information on these two aspects from other channels to achieve the effect of replacing gloss. First, we observe that the semantics of sign language videos are temporally localized, which means that adjacent frames have a high probability of belonging to the same semantic unit. The visualization results in Figure <ref> and the quantitative analysis results in Table <ref> support this view. Inspired by this, we design a new dynamic attention mechanism called gloss attention to inject inductive bias <cit.> into the model so that it tends to pay attention to the content in the local same semantic unit rather than others. Specifically, we first limit the number of frames that each frame can pay attention to, and set its initial attention frame to frames around it so that the model can be biased to focus on locally closer frames. However, the attention mechanism designed in this way is static and not flexible enough to handle the information at the semantic boundary well. We then calculate an offset for each attention position according to the input query so that the position of the model's attention can be dynamically adjusted on the original basis. It can be seen that, as shown in Figure <ref>, our model can still focus on the really important places like Figure <ref> after losing the assistance of gloss. In contrast, as shown in Figure <ref>, the original method fails to converge to the correct position after losing the supervision signal provided by the gloss. Second, to enable the model to understand the semantics of sign language videos at the sentence level and disambiguate local sign language segments, we transfer knowledge from language models trained with rich natural language resources to our model. Considering that there is a one-to-one semantic correspondence between natural language sentences and sign language videos. We can indirectly obtain the similarity relationships between sign language videos by inputting natural language sentences into language models such as sentence bert <cit.>. Using this similarity knowledge, we can enable the model to understand the semantics of sign language videos as a whole, which can partially replace the second aspect of the information provided by gloss. Experimental results on three datasets RWTH-PHOENIX-WEATHER-2014T (PHOENIX14T)<cit.>, CSL-Daily <cit.> and SP-10 <cit.> show that the translation performance of the GASLT model exceeds the existing state of the art methods, which proves the effectiveness of our proposed method. We also conduct quantitative analysis and ablation experiments to verify the accuracy of our proposed ideas and the effectiveness of our model approach. To summarize, the contributions of this work are as follows: * We analyze the role of gloss annotations in sign language translation. * We design a novel attention mechanism and knowledge transfer method to replace the role of gloss in sign language translation partially. 
* Extensive experiments on three datasets show the effectiveness of our proposed method. A broad range of new baseline results can guide future research in this field. § RELATED WORK Sign Language Recognition. Early sign language recognition (SLR) was performed as isolated SLR, which aimed to recognize a single gesture from a cropped video clip <cit.>. Researchers then turned their interest to continuous SLR <cit.>, because this is the way signers actually use sign language. Sign Language Translation. The goal of SLT is to convert a sign language video into a corresponding natural language sentence <cit.>. Most existing methods use an encoder-decoder architecture to deal with this sequence-to-sequence learning problem. Due to the success of the Transformer network in many fields <cit.>, Camgöz et al. <cit.> apply it to SLT and design a joint training method to use the information provided by gloss annotations to reduce the learning difficulty. Zhou et al. <cit.> propose a data augmentation method based on sign language back-translation to increase the SLT data available for learning. It first generates gloss text from natural language text and then uses an estimated gloss to sign bank to generate the corresponding sign sequence. Yin et al. <cit.> propose a simultaneous SLT method based on the wait-k strategy <cit.>, and they used gloss to assist the model in finding semantic boundaries in sign language videos. Besides, some works improve the performance of SLT by considering multiple cues in sign language expressions <cit.>. Gloss-free Sign Language Translation. Gloss-free SLT aims to train the visual feature extractor and translation model without relying on gloss annotations. <cit.> first explores the use of hierarchical structures to learn better video representations to reduce reliance on gloss. Orbay et al. <cit.> utilize adversarial, multi-task and transfer learning to search for semi-supervised tokenization methods to reduce dependence on gloss annotations. <cit.> proposes a new Transformer layer to train the translation model without relying on gloss. However, the pre-trained visual feature extractor used by <cit.> comes from <cit.>, which uses the gloss annotation in the dataset during training. The gloss-related information is already implicit in the extracted visual representations, so <cit.> does not belong to the gloss-free SLT method. Sentence Embedding. Sentence embeddings aim to represent the overall meaning of sentences using dense vectors in a high-dimensional space <cit.>. Some early works use the linear combination of word embedding vectors in sentences to obtain sentence representations <cit.>. Subsequently, the emergence of large-scale self-supervised pre-trained language models such as BERT <cit.> significantly improves the effectiveness of natural language representation. However, since BERT is not optimized for sentence embedding during pre-training, it does not perform well in sentence-level tasks such as text matching. The fact that BERT needs to input two sentences at the same time to calculate the similarity also makes the computational complexity high. Sentence-BERT proposed by Reimers et al. <cit.> adopts the architecture of the Siamese network to solve this problem. Since natural language has far more resources than sign language, in our work, we transfer knowledge from natural language models to sign language translation models. 
This enables our model to understand sign language at the sentence level by learning the similarity between different sign language sentences. § ANALYZING THE ROLE OF GLOSS IN SLT In this section, we analyze and validate the ideas proposed in Section <ref>: that gloss makes the attention map diagonal, and that gloss helps the model understand the relationships between sign language videos at the sentence level. Quantitative Analysis of Diagonality. First, inspired by <cit.>, we use the cumulative attention diagonality (CAD) metric to quantitatively analyze the degree of diagonalization of attention maps in the gloss-supervised and gloss-free settings. As shown in Table <ref>, the degree of diagonalization of the attention map with gloss supervision is always higher than that of the attention map under the gloss-free setting. This suggests that the attention map in the gloss-supervised setting is more diagonal, which is also what we observe in the qualitative visualizations, as shown in Figure <ref>. Sign Language Sentence Embedding. We take the mean of the encoder output as the sign language sentence embedding and then use cosine similarity to measure the similarity of two sentences. We use the similarity between natural language sentences computed by Sentence-BERT as the approximate ground truth. We evaluate whether gloss helps the model understand sign language videos at the sentence level by computing the average similarity difference (ASD), that is, the average absolute difference between the pairwise similarities computed from sign language sentence embeddings and those computed from natural language sentence embeddings. The calculation formula is as follows: ASD=1/(n^2-n)∑_i=1^n∑_j=1^n | Ŝ[i,j] - S[i,j] | where S[i,j] represents the similarity between natural language sentence embeddings, Ŝ[i,j] represents the similarity between sign language sentence embeddings, and n represents the number of sentences. As shown in Table <ref>, the ASD metric under gloss supervision is significantly lower than that under the gloss-free setting. This shows that gloss annotations do help the model understand sign language videos at the sentence level. § METHODOLOGY SLT is often considered a sequence-to-sequence learning problem <cit.>. Given a sign video X' = (x'_1,x'_2,...,x'_T) with T frames, SLT can be formulated as learning the conditional probability p(Y'|X') of generating a spoken language sentence Y' = (y'_1,y'_2,...,y'_M) with M words. We model translation from X' to Y' with a Transformer architecture <cit.>. Our main contribution focuses on the encoder part, so we omit details about the decoder; the interested reader can refer to the original paper. In this section, we first describe our gloss attention mechanism. Then we introduce how to transfer knowledge from natural language models to enhance the model's ability to capture global information in sign language videos. §.§ Embedding for Video and Text Similar to general sequence-to-sequence learning tasks, we first embed the input video and natural language text. For the input video features, we follow a similar scheme as in <cit.>. We simply use a linear layer to convert them to the dimension of the encoder, and then attach a relu <cit.> activation function after batch normalization (BN) <cit.> to get the embedded feature x_t∈ℝ^D. For text embedding, we first use BPEmb <cit.>, a BPE <cit.> sub-word segmentation model learned on the Wikipedia dataset with the SentencePiece <cit.> tool, to segment text into sub-words. 
BPE is a frequency-based sub-word division algorithm. Dividing long words into sub-words allows for generalized phonetic variants or compound words, which is also helpful for low-frequency word learning and alleviating out-of-vocabulary problems. We then use the pre-trained sub-word embeddings in BPEmb as the initialization of the embedding layer, and convert the word vectors into text representations y_m ∈ℝ^D using a method similar to the visual feature embedding. We formulate these operations as: x_t = relu(BN(W_1x'_t+b_1)) + f_pos(t) y_m = relu(BN(W_2Emb(y'_m)+b_2)) + f_pos(m) Similar to other tasks, the position of a sign gesture in the whole video sequence is essential for understanding sign language. Inspired by <cit.>, we inject positional information into the input features using the positional encoding f_pos(· ). §.§ Gloss Attention After the operations in the previous section, we now have a set of tokens that form the input to a series of transformer encoder layers which, as in the sign language transformer <cit.>, consist of Layer Norm (LN) operations <cit.>, multi-head self-attention (MHSA) <cit.>, residual connections <cit.>, and a feed-forward network (MLP): z = MHSA(LN(x)) + x x = MLP(LN(z)) + z Next, we discuss the difference between our proposed gloss attention and self-attention, and how this inductive bias partially replaces the function of gloss. For clarity, we use a single head in the attention operation as a demonstration in this section and ignore the layer norm operation. The self-attention operation first generates a set of vectors q_t, k_t, v_t ∈ℝ^D for each input sign language video feature x_t. These vectors are computed as linear projections of the input x_t, that is, q_t = W_qx_t, k_t = W_kx_t, and v_t = W_vx_t, for projection matrices W_i ∈ℝ^D × D. Then the attention result at each position is as follows: z_t = ∑_i=1^T v_i ·exp⟨ q_t, k_i⟩/∑_j=1^T exp⟨ q_t, k_j⟩ In this way, the attention score is calculated by dot products between each query q_t and all keys k_i, and the scores are then normalized by softmax. The final result is a weighted average of all the values v_i using the calculated scores as weights. Here, for simplicity, we ignore the scaling factor √(D) in the original paper and assume that all queries and keys have already been divided by √(D). There are two problems with this calculation. One is that its computational complexity is quadratic, as shown in Equation <ref>. The other, more important, problem is that its attention has difficulty converging to the correct positions after losing the supervision of the gloss annotation, as shown in Figure <ref>. The root cause of this problem is that each query has to calculate an attention score with all keys. This approach can be very effective and flexible when strong supervision information is provided, but the model loses focus when supervision information is missing. In order to solve the above problems, we propose gloss attention, an attention mechanism designed according to the characteristics of sign language itself and our observation of the experimental results of existing models. We observe that gloss-level semantics are temporally localized, that is, adjacent video frames are more likely to share the same semantics because they are likely to lie in the same gloss-corresponding video segment. Specifically, we first initialize N attention positions P = (p_1,p_2,...,p_N) for each query, where p_1 = t-⌈ N/2⌉, p_N = t+N-⌈ N/2⌉, and consecutive positions are spaced one frame apart. 
Then, in order to better deal with the semantic boundary problem, we calculate N offsets from the input query to dynamically adjust the attention positions: O = W_oq_t; P̂ = (P+O)%T where W_o ∈ℝ^N× D, P̂ denotes the adjusted attention positions, and we take the remainder modulo T to ensure that the attention positions do not cross the bounds. The adjusted attention positions become floating-point numbers due to the addition of the offset O, while the indices of keys and values in the array are integers. For this reason, we use linear interpolation to get the keys K_t = (k_t^1,k_t^2,...,k_t^N ) and values V_t = (v_t^1,v_t^2,...,v_t^N ) that are finally used for the calculation: b_i = ⌊p̂_i⌋, u_i = b_i+1 k_t^i = (u_i-p̂_i )· k_b_i + (p̂_i-b_i)· k_u_i v_t^i = (u_i-p̂_i )· v_b_i + (p̂_i-b_i)· v_u_i Finally, the attention calculation at each position is as follows: z_t = ∑_i=1^N v_t^i·exp⟨ q_t, k_t^i⟩/∑_j=1^N exp⟨ q_t, k_t^j⟩ Compared with the original self-attention, the computational complexity of gloss attention is 𝒪(NT); since N is a constant and in general N ≪ T, the complexity is linear in the video length, i.e., 𝒪(T). In addition, as shown in Figure <ref>, the visualization results show that the gloss attention we designed can achieve effects similar to those obtained with gloss supervision. The experimental results in Section <ref> also demonstrate the effectiveness of our proposed method. A flowchart of the full gloss attention operation is shown in tensor form in Figure <ref>. §.§ Knowledge Transfer Another important role of gloss is to help the model understand the entire sign language video from a global perspective. Its absence reduces the model's ability to capture global information. Fortunately, however, we have language models learned on rich corpora of natural language, and they have been shown to work well on numerous downstream tasks. Since there is a one-to-one semantic relationship between a sign language video and its annotated natural language text, we can transfer knowledge from the language model to our model. Specifically, we first use Sentence-BERT <cit.> to calculate the cosine similarity S ∈ℝ^D_t× D_t between all natural language sentences offline, where D_t is the size of the training set. Then we aggregate all the video features output by the encoder to obtain an embedding vector e ∈ℝ^D representing the entire sign language video. There are various ways to obtain the embedding vector, and we analyze the impact of the different choices in Section <ref>. Finally, we achieve knowledge transfer by minimizing the mean squared error between the cosine similarities of video embeddings and the cosine similarities of the corresponding natural language sentences: ℒ_kt =(e_i· e_j/(‖e_i‖‖e_j‖) - S[i,j] )^2 In this way, the model at least learns which sign language videos are semantically similar and which are different. § EXPERIMENTS §.§ Experiment Setup and Implementation Details Datasets. We evaluate the GASLT model on the RWTH-PHOENIX-WEATHER-2014T (PHOENIX14T) <cit.>, CSL-Daily <cit.> and SP-10 <cit.> datasets. We mainly conduct ablation studies and experimental analysis on the PHOENIX14T dataset. PHOENIX14T contains weather forecast sign language videos collected from the German public television station PHOENIX, together with the corresponding gloss annotations and natural language text annotations for these videos. CSL-Daily is a recently released large-scale Chinese sign language dataset, which mainly contains sign language videos related to daily life, such as travel, shopping, and medical care. 
SP-10 is a multilingual sign language dataset that contains sign language videos in 10 languages. For all datasets we follow the official partitioning protocol. Evaluation Metrics. Similar to previous papers, we evaluate the translation performance of our model using BLEU <cit.> and ROUGE-L <cit.> scores, two of the most commonly used metrics in machine translation. BLEU-n represents the weighted average translation precision up to n-grams. Generally, we use uniform weights, that is, the weights from 1-grams to n-grams are all 1/n. ROUGE-L uses the longest common subsequence between predicted and reference texts to calculate the F1 score. We use the script officially provided by the Microsoft COCO caption task <cit.> to calculate the ROUGE-L score, which sets β = 1.2 in the F1 score [<https://github.com/tylin/coco-caption>]. Implementation and Optimization. We use the pytorch <cit.> framework to implement our GASLT model based on the open source code of <cit.> and <cit.>. Our model is based on the Transformer architecture, the number of hidden units in the model, the number of heads, and the layers of encoder and decoder are set to 512, 8, 2, 2, respectively. The parameter N in gloss attention is set to 7. We also use dropout with 0.5 and 0.5 drop rates on encoder and decoder layers to mitigate overfitting. For a fair comparison, we uniformly use the pre-trained I3D model in TSPNet <cit.> to extract visual features. For models other than TSPNet, we only use visual features extracted with a sliding window of eight and stride of two. We adopt Xavier initialization <cit.> to initialize our network. we use label smoothed <cit.> crossentropy loss to optimize the SLT task, where the smoothing parameter ε is set to 0.4. We set the batch size to 32 when training the model. We use the Adam <cit.> optimizer with an initial learning rate of 5×10^-4 (β_1=0.9, β_2=0.998, ϵ =10^-8), and the weight decays to 10^-3. We use similar plateau learning rate scheduling as in <cit.>, except we adjust the patience and decrease factor to 9 and 0.5, respectively. The weights of translation cross-entropy loss and knowledge transfer loss ℒ_kt are both set to one. All experiments use the same random seed. §.§ Comparisons with the State-of-the-art Competing Methods. We compare our GASLT model with three gloss-free SLT methods. 1) Conv2d-RNN <cit.> is the first proposed gloss-free SLT model, which uses a GRU-based <cit.> encoder-decoder architecture for sequence modeling. 2) Tokenization-SLT <cit.> achieves the state-of-the-art on the ROUGE score of PHOENIX14T dataset, which utilizes adversarial, multi-task, and transfer learning to search for semi-supervised tokenization methods to reduce dependence on gloss annotations. 3) Joint-SLT <cit.> is the first sign language translation model based on the Transformer architecture, which jointly learns the tasks of sign language recognition and sign language translation. 4) TSPNet <cit.> achieves the state-of-the-art on the BLEU score of PHOENIX14T dataset, which enhances translation performance by learning hierarchical features in sign language. Quantitative Comparison. We report the BLEU scores and ROUGE scores of our GASLT model and comparison models on the PHOENIX14T dataset in Table <ref>. For Joint-SLT we reproduce and report its results in the gloss-free setting; for other models, we use the data reported in the original paper. 
As shown in Table <ref>, the translation performance of our model significantly outperforms the original two state-of-the-art gloss-free SLT models, Tokenization-SLT and TSPNet-Joint, the blue4 score is improved from 13.41 to 15.74 (17.37%), and the ROUGE-L score is improved from 36.28 to 39.86 (9.86%). As shown in Table <ref>, we further evaluate our proposed GASLT model on two other public datasets, and we can see that our method outperforms existing methods on both datasets. Benefiting from the injection of prior information about semantic temporal locality in our proposed gloss attention mechanism and its flexible attention span, our GASLT model can keep attention in the right place. Coupled with the help of knowledge transfer, the GASLT model significantly narrows the gap between gloss-free SLT and gloss-supervised SLT methods compared to previous gloss-free SLT methods. Qualitative Comparison. We present 3 example translation results generated by our GASLT model and TSPNet model in Table <ref> for qualitative analysis. In the first example, our model produces a very accurate translation result, while TSPNet gets the date wrong. In the second example, our model ensures that the semantics of the sentence has not changed by using the synonym of "warnungen" (warnings) such as "unwetterwarnungen" (severe weather warnings), while TSPNet has a translation error and cannot correctly express the meaning of the sign language video. In the last example, it can be seen that although our generated results differ in word order from the ground truth, they express similar meanings. However, existing evaluation metrics can only make relatively mechanical comparisons, making it difficult to capture these differences. We provide the full translation results generated by our proposed model in the supplementary material. §.§ Ablation studies In this section, we introduce the results of our ablation experiments on the PHOENIX14T dataset, and analyze the effectiveness of our proposed method through the experimental results. In addition, we also study the impact of different component choices and different parameter settings on the model performance. To facilitate the expression, in the table in this section, we use R to represent ROUGE-L, B1→B4 to represent BLEU1→BLEU4. The Effectiveness of Gloss Attention. As shown in Table <ref>, we test the model's performance with self-attention, local-attention, and gloss-attention, respectively, on the PHOENIX14T dataset, where local-attention and gloss-attention use the same window size. We can see that local-attention performs better than self-attention, while gloss-attention achieves better performance than both. This shows that the attention mechanism of gloss-attention, which introduces inductive bias without losing flexibility, is more suitable for gloss-free sign language translation. The Effectiveness of Knowledge Transfer. As shown in Table <ref>, we add our proposed knowledge transfer method to various attention mechanisms, and we can see that it has an improved effect on all attention mechanisms. This demonstrates the effectiveness of our proposed knowledge transfer method. Gloss Attention. We then explore the effect of the number N of initialized attention positions in gloss attention on model performance. As shown in Table <ref>, without using gloss, the BLEU-4 score of the model increases first and then decreases with the increase of N, and reaches the best performance when N=5. 
This demonstrates that too few attention positions will limit the expressive ability of the model, while too large N may introduce interference information. After all, when N=T, the calculation method of gloss attention will be no different from the original self-attention. In addition, the translation performance of the model is the best when N=7 (due to the introduction of linear interpolation, the actual field of view of the model at this time is 14), which is also close to the statistics of 15 video frames per gloss in the PHOENIX14T dataset. Sign Language Sentence Embedding. Then we compare the impact of different sign language sentence embedding vector generation methods on the model performance. The experimental results are shown in Table <ref>. In the table, CLS-vector indicates that a special CLS token is used to aggregate global information as the sentence embedding. Ave demonstrates that the average of all the vectors output by the encoder is used as the sentence embedding. Max means to take the maximum value of each dimension for all the vectors output by the encoder as the sentence embedding. Gloss attention embedding means that only gloss attention is used in the encoder. Self-attention embedding means that a layer using self-attention is added at the end of the encoder. It can be seen that the sentence embedding generated by the method of CLS-vector does not perform well in the model performance. In addition, we can find that the Ave method performs better in translation performance than the Max method. The model achieves the best performance when using the Ave. gloss-attention embedding method, which demonstrates that thanks to the superposition of receptive fields and the flexible attention mechanism, the model can capture global information well even when only gloss attention is used. Weight of Knowledge Transfer Loss. Finally, we analyze the effect of setting different weights for the knowledge transfer loss on the model performance. As shown in Figure <ref>, we can find that the model's performance tends to decrease as the weight of the knowledge transfer loss increases. This may be because the similarity relationship between sentences obtained from Sentence Bert is not so accurate, and too high weight will cause the model to overfit the similarity relationship and decrease translation performance. § CONCLUSION In this paper, we analyze the role of gloss annotations in the SLT task. Then we propose a new attention mechanism, gloss attention, which can partially replace the function of gloss. The gloss attention, which is designed according to the temporal locality principle of sign language semantics, enables the model to keep the attention within the video segments corresponding to the same semantics, just as the supervision signal provided by the gloss is still there. In addition, we design a new knowledge transfer method to help the model better capture global sign language semantics. In the appendix, we discuss the limitations of our work. § ACKNOWLEDGMENTS This work was supported by National Natural Science Foundation of China under Grant No. 62222211, No.61836002 and No.62072397. ieee_fullname
http://arxiv.org/abs/2307.05558v1
20230709160341
From Estimation to Sampling for Bayesian Linear Regression with Spike-and-Slab Prior
[ "Qijia Jiang" ]
stat.CO
[ "stat.CO", "stat.ME", "stat.ML" ]
From Estimation to Sampling for Bayesian Linear Regression with Spike-and-Slab Prior Qijia Jiang ================================================================================================ We consider Bayesian linear regression with a sparsity-inducing prior and design efficient sampling algorithms leveraging posterior contraction properties. A quasi-likelihood with a Gaussian spike-and-slab prior (which is favorable both statistically and computationally) is investigated, and two algorithms based on Gibbs sampling and Stochastic Localization are analyzed, both under the same (quite natural) statistical assumptions that also enable valid inference on the sparse planted signal. The benefit of the Stochastic Localization sampler is particularly prominent for data matrices that are not well-designed. Gibbs Sampler, Spike-and-Slab Sparse Linear Regression, Stochastic Localization, Posterior Contraction of Frequentist Bayesian procedure 65C60, 68W40, 62C10 § INTRODUCTION In this work we study posterior sampling arising from high-dimensional Bayesian variable selection – our focus is on sampling from the full posterior for uncertainty quantification purposes, as opposed to computing aspects of it (e.g., point estimators). Given a design matrix X∈ℝ^n× p and response y∈ℝ^n, the linear regression model with spike-and-slab prior has posterior π(β|y,X)∝ℒ(y|X,β)ℙ_prior(β)∝exp(-1/(2σ^2)‖y-Xβ‖_2^2) ⊗_i=1^p ((1-z)G_0(β_i)+zG_1(β_i)) for some z∈ (0,1), where G_0 has density more concentrated around 0 than G_1. What makes the Bayesian methodology attractive is that it comes with credible sets instead of a single summary statistic; however, we emphasize that we will study Bayesian guarantees in a frequentist framework in this paper, where we assume there is a planted (and fixed) k-sparse signal β^* from which the data is generated, i.e., y=Xβ^*+ϵ for ϵ∼𝒩(0,σ^2 I). This prior can be viewed as regularized least squares / penalized likelihood if one draws a parallel to the frequentist perspective, where Lasso (ℓ_1 penalty) corresponds to the posterior mode of an i.i.d. Laplace(λ) prior with density λ/2exp(-λ|β|): β̂_Lasso←min_β ‖y-Xβ‖_2^2+λ∑_i=1^p |β_i| . Lasso, however, isn't fully Bayesian in the sense that credible intervals built from the posterior distribution do not provide valid coverage guarantees <cit.> for β^*. Therefore good performance of the posterior mode doesn't automatically translate into good performance of the full posterior. This is, in some sense, not surprising, since it has to balance the tasks of selection and prediction (i.e., shrinkage and bias). The spike-and-slab prior, on the other hand, by explicitly introducing two scales/groups, is better at dealing with this tension. Indeed, favorable statistical properties can be established for the posterior for inference on the unknown sparse β^* – in what follows, we will design sampling procedures under statistical assumptions on the model and will be mostly concerned with the scaling with p when it comes to computational methods. We note that for the purpose of recovering the sparse β^*, the classical Bernstein–von Mises (BvM) theorem says that data will eventually wash out the influence of the prior choice; however, a mismatch between the prior and the truth will be reflected in a slow posterior contraction rate of π_n(β |y^n)→δ_β^*, i.e., a loss of statistical efficiency. 
Another way to see this manifested is through the variational inequality -log𝔼_prior (β)ℒ_y,X(β)=min_ρ≪ℙ_prior(β){-𝔼_ρ [logℒ_y,X(β)]+KL(ρ || ℙ_prior (β))} and the minimizer ρ^* is precisely (<ref>) when ℒ_y,X(β) is the likelihood function, therefore the posterior will concentrate on maximizers of the likelihood in presence of the evidence from data, while staying faithful to the prior knowledge one may have. §.§ Related Literature Statistical properties of (<ref>) have been studied by <cit.> with different choices for G_0,G_1,z,σ. On a closely related prior, computational-statistical guarantees given by <cit.> highlight that sharp concentration of the high-dimensional posterior distribution (i.e., π_n(z^*|y)≳ 1-p^-1 with probability at least 1-p^-c assuming smallest non-zero element of β^* ≳σ^2log p/n) need not lead to polynomial mixing of MCMC algorithm. Unless one restricts the size of the state space the prior is supported on 1{z_0≤ u}, the authors show that the gradient-free Metropolis-Hastings algorithm (also known as Add-Delete-Swap in this context) can have mixing time scaling exponentially with p. However, this upper bound u depends on quantities unknown in practice. Gibbs sampler is widely used for spike-and-slab models, and its convergence is analyzed in <cit.> with numerical speedup investigated in <cit.>. Various approximate schemes exist, where in <cit.> mean-field variational inference ideas are used (i.e., reduce model search space from 2^p to p assuming coordinates are independent) to show posterior contraction but since the objective to be optimized is non-convex, guarantee for convergence to global optima is hard to establish (in fact it was empirically observed that the result can be sensitive to initialization). Some previous attempts also focus on designing efficient algorithms for computing point estimators such as posterior modes using e.g., EM algorithms for priors with continuous support <cit.>. The philosophy we adopt for sampling from the non-log-concave spike-and-slab posterior (<ref>) is close in spirit to (1)<cit.>, where posterior converges to a normal limit as both the sample size n and parameter dimension p grow to infinity at appropriate rate (reminiscent of Bernstein-von-Mises theorem which states the posterior approach a Gaussian centered at MLE with Fisher information covariance under appropriate assumptions), and show polynomial time mixing in p – an assumption on the starting point for the algorithm that falls in the approximate support of the posterior, i.e., where CLT applies, is also imposed; (2) A line of investigation on Bayesian nonlinear inverse problem <cit.> also crucially hinges on warm start into the locally convex region where most of the posterior mass concentrates for polynomial-time convergence of the MCMC algorithm they design. On the other hand, standard off-the-shelf gradient-based HMC, MALA samplers typically struggle for potentials deviating significantly from log-concavity beyond functional inequalities – one could check that the Log-Sobolev constant (therefore mixing time) scales exponentially with the separation between the peaks, in addition to already expensive gradient calculation, without the possible help of parallel tempering/replica exchange that avoids being trapped in separated modes. In fact, these are not surprising in light of the asymptotic posterior shape characterization in <cit.> where they are shown to be well-approximated by random (i.e., data-dependent) mixture of Gaussians. 
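To make the setup concrete before fixing notation, the following is a minimal synthetic sketch of the model and of the (unnormalized) log-posterior (<ref>) with Gaussian spike/slab components. All sizes and the particular choices of q, τ_0, τ_1 below are illustrative assumptions (picked to be compatible with the scalings assumed later in the analysis), not prescriptions from this paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, p, k, sigma = 100, 400, 5, 1.0            # illustrative sizes (assumptions, not from the paper)
X = rng.standard_normal((n, p))
beta_star = np.zeros(p)
beta_star[:k] = 3 * sigma * np.sqrt(np.log(p) / n)   # planted signal above the detection threshold
y = X @ beta_star + sigma * rng.standard_normal(n)

# Gaussian spike/slab scales and prior weight, roughly matching the scalings assumed later on
q, tau0, tau1 = 1.0 / p**2, sigma / np.sqrt(n), sigma * p / np.sqrt(n)

def log_posterior_unnorm(beta):
    """Unnormalized log pi(beta | y, X) under the Gaussian spike-and-slab mixture prior."""
    loglik = -0.5 * np.sum((y - X @ beta) ** 2) / sigma**2
    logprior = np.sum(np.logaddexp(np.log(1 - q) + norm.logpdf(beta, scale=tau0),
                                   np.log(q) + norm.logpdf(beta, scale=tau1)))
    return loglik + logprior
```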
§.§ Notation & Outline (In)equalities with ≲, ≳, ≍ hold up to absolute constants. For two models z,z' ∈{0,1}^p, z⊂ z' means that the active components of z are a subset of those of z', and ‖z‖_0 counts the number of non-zero/active elements. We write j∉ z to indicate z_j=0. Total-variation distance is defined as ‖μ-ν‖_TV=sup_A∈ℬ |μ(A)-ν(A)|∈ [0,1], and the Wasserstein-2 distance is defined as W_2(μ,ν) = inf_x∼μ, y∼ν𝔼[‖x-y‖^2]^1/2, which satisfies the triangle inequality. Moreover, we use o_n(1) to denote a quantity tending to 0 as n →∞, and O_p(a) for the usual stochastic boundedness. Both X_n →_p X and p-lim_n→∞ X_n = X denote convergence in probability. In what follows, <Ref> studies the Gibbs sampler and <Ref> the Stochastic Localization sampler, both under warm start and posterior contraction assumptions. These statistical assumptions are justified in <Ref> for the particular quasi-likelihood posterior with continuous spike-and-slab prior that we focus on in this work. § (SCALABLE) GIBBS SAMPLER In this section, we (1) give the Gibbs updates and an efficient implementation for point-mass-like spike-and-slab priors, along with their random design analogue for a Gaussian design matrix; (2) provide a mixing guarantee from a warm start. We also highlight the bottleneck of Gibbs-based samplers for this class of posteriors. §.§ Point-mass-like Spike-and-Slab A popular approach to conducting Bayesian variable selection in the regime p ≫ n is through a hierarchical model: for the linear model y=Xβ+ϵ with ϵ∼𝒩(0,σ^2 I_n), where σ^2 is the noise variance (an inverse-Gamma distribution on σ^2 is sometimes considered, but we will assume it is known here), and the sparsity prior z_j∼Bern(q) with β_j| z_j ∼ z_j𝒩(0,τ_1^2)+(1-z_j)δ_0(β_j) for all j∈[p], the joint posterior is π(β,z| y)∝𝒩(y;Xβ,σ^2 I_n)∏_j=1^p (q𝒩(β_j;0,τ_1^2))^z_j·((1-q)δ_0(β_j))^1-z_j . The Gibbs update, which relies on the availability of conditional probabilities, becomes π (β| z,y) ∝𝒩(y;Xβ,σ^2)∏_j=1^p (δ_0(β_j))^1-z_j·(𝒩(β_j;0,τ_1^2))^z_j ∝exp(-1/2σ^2(β^⊤ X^⊤Xβ-2β^⊤ X^⊤ y)-β^⊤ D(z_j/2τ_1^2)β)∏_j=1^p (δ_0(β_j))^1-z_j ∼𝒩(β̅;Σ^-1X̅^⊤ y,σ^2Σ^-1) ∏_j=1^p (δ_0(β_j))^1-z_j for Σ(z) = X̅^⊤X̅+2σ^2 D(z_j/2τ_1^2), where X̅ denotes the n×‖z‖_0 sub-matrix with z_j=1, β̅ the subvector of active coordinates, and D(·) a ‖z‖_0×‖z‖_0 diagonal matrix with the indicated components. In other words, π (β| z,y)∼𝒩(β̅; (X̅^⊤X̅+σ^2/τ_1^2I)^-1X̅^⊤ y, σ^2(X̅^⊤X̅+σ^2/τ_1^2I)^-1)⊗∏_j=1^p (δ_0(β_j))^1-z_j where δ_0(β_j) denotes the Dirac delta, i.e., β_j=0 if z_j=0. The conditional distribution for z is π(z|β,y) ∝∏_j=1^p (q𝒩(β_j;0,τ_1^2))^z_j·((1-q)δ_0(β_j))^1-z_j ∼∏_j=1^p Bern(z_j;q𝒩(β_j;0,τ_1^2)/(1-q)δ_0(β_j)+q𝒩(β_j;0,τ_1^2)) which implies z_j=0 if β_j=0 and z_j=1 if β_j≠ 0. It might be tempting to conclude that this is computationally favorable, as (<ref>) involves inversion of a lower-dimensional matrix, as opposed to a continuous prior of the form β_j| z_j ∼ z_j𝒩(0,τ_1^2)+(1-z_j)𝒩(0,τ_0^2), which necessarily requires a p× p matrix inversion. However, the updates (<ref>)-(<ref>) in fact lead to a non-convergent / reducible Markov chain, i.e., the chain gets stuck whenever it generates β_j=0, although statistically the posterior on β contracts at the near minimax-optimal rate for the recovery of β^*. 
For example, for a related prior where β_1,…,β_p are i.i.d. from (1-r)δ_0+r·Laplace, with a Beta(1,p^u) hyper-prior on r for some u>1, the important work of <cit.> showed, under a k_n-sparse compatibility assumption on the design matrix in the high-dimensional setting p>n, that uniformly over k_n-sparse signals, sup_‖β^*‖_0≤ k_n𝔼_β^*[π_n(β: ‖β-β^*‖_1 ≳ k_n√(log p)/‖X‖ | y^n)] → 0 . Note this is a remarkably strong statement about the complete posterior π(·|y), which is a random measure over β for any fixed β^*, and not just aspects of it such as the posterior mode / mean, e.g., sup_‖β^*‖_0≤ k_n𝔼_β^*[ ‖∫βπ(β | y^n) dβ -β^*‖^2 ]≲ 2k_nlog(p/k_n) , which the Lasso estimator β̂_Lasso also verifies with an appropriate choice of λ. Above k_n→∞ is permitted as n→∞. For this reason, computational strategies involving exact-sparsity inducing priors resort to Add-Delete-Swap or shotgun stochastic search <cit.>, which integrate out the regression coefficients from the posterior (i.e., design samplers based on ℙ(z|y) over {0,1}^p), but fall short of solving both the variable selection (z) and parameter estimation (β) problems simultaneously. On the other hand, Gibbs can handle spike-and-slab priors with continuous support effortlessly, which do not have this trans-dimensionality problem, but the inversion of a p× p matrix renders the sampling procedure expensive. The quasi-likelihood approach below, which is a variant of the classical formulation (<ref>), provides a middle ground that balances desirable statistical performance and computational convenience, as we will elaborate. The sparsified likelihood <cit.>, which has posterior (with τ_1 ≫τ_0) π(β,z| y)∝𝒩(y;X_zβ_z,σ^2 I_n)∏_j=1^p (q𝒩(β_j;0,τ_1^2))^z_j·((1-q)𝒩(β_j;0,τ_0^2))^1-z_j, targets a different posterior than Skinny Gibbs <cit.>, but can also be sampled using Gibbs with a reduced-dimensional matrix inversion operation at each iteration. For the posterior with quasi-likelihood, we alternate between π (β| z,y) ∝exp(-1/2σ^2(β̅^⊤X̅^⊤X̅β̅-2β̅^⊤X̅^⊤ y)-β̅^⊤ D(1/2τ_1^2)β̅)∏_j=1^p (𝒩(β_j;0,τ_0^2))^1-z_j ∼𝒩(β_z;Σ^-1X̅^⊤ y,σ^2Σ^-1) ∏_j=1^p (𝒩(β_j;0,τ_0^2))^1-z_j where Σ(z) = X̅^⊤X̅+2σ^2 D(z_j/2τ_1^2), and for each j∈[p] sequentially π(z_j|β,y,z_-j) ∝∏_j=1^p (q𝒩(β_j;0,τ_1^2))^z_j·((1-q)𝒩(β_j;0,τ_0^2))^1-z_j𝒩(y;X_zβ_z,σ^2) ∝ ((1-q)𝒩(β_j;0,τ_0^2))^1-z_j· q^z_j×𝒩(β_z;Σ^-1X̅^⊤ y,σ^2Σ^-1) ∼Bern(z_j;q𝒩(β_z;Σ^-1X̅^⊤ y,σ^2Σ^-1)/(1-q)𝒩(β_j;0,τ_0^2)+q𝒩(β_z;Σ^-1X̅^⊤ y,σ^2Σ^-1)) which, although still Bernoulli, is no longer independent across coordinates, and the update for z_j depends on more than just β_j. In (<ref>), the normal distribution in the numerator involves setting z_j=1 and the rest as the conditioned z_\ j at the current iteration. Another way to write the update for z_j conditional on the rest is Q_j:=π(z_j=1|β,y,z_-j)/π(z_j=0|β,y,z_-j)=q/1-qτ_0/τ_1× exp[-(β_z-(X̅^⊤X̅+σ^2/τ_1^2I)^-1X̅^⊤ y)^⊤1/2σ^2(X̅^⊤X̅+σ^2/τ_1^2I)(β_z-(X̅^⊤X̅+σ^2/τ_1^2I)^-1X̅^⊤ y)]/exp(-β_j^2/2τ_0^2) ∝q/1-qτ_0/τ_1exp[-1/2σ^2β_z^⊤(X̅^⊤X̅+σ^2/τ_1^2I)β_z+1/σ^2y^⊤X̅β_z]/exp(-β_j^2/2τ_0^2) ∝q/1-qτ_0/τ_1exp(-β_j^2(1/2τ_1^2+(X^⊤ X)_jj/2σ^2))/exp(-β_j^2/2τ_0^2)exp(-1/σ^2β_j X_j^⊤X̅_\ jβ_z,\ j+1/σ^2β_j X_j^⊤ y) =: Π_j ·exp(-β_j^2(X^⊤ X)_jj/2σ^2) where X̅_\ j denotes the submatrix corresponding to the components k of z_\ j such that z_k=1. Note that Q_j doesn't depend on z_j. 
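A minimal sketch of one sweep of these updates — the blocked draw of β | z, y followed by sequential draws of z_j | β, y, z_{-j} via the odds Q_j — is given below. This is only an illustrative implementation under the Gaussian spike/slab parameterization above, not the authors' reference code; the systematic-scan order shown here differs from the lazy blocked variant analyzed later.

```python
import numpy as np

def gibbs_sweep(beta, z, X, y, sigma, q, tau0, tau1, rng):
    """One sweep for the sparsified-likelihood posterior: a blocked draw of
    beta | z, y, then sequential single-site draws of z_j | beta, y, z_{-j}."""
    n, p = X.shape
    act = np.flatnonzero(z)
    # beta | z, y: Gaussian on the active block, independent N(0, tau0^2) spikes elsewhere
    beta = rng.normal(0.0, tau0, size=p)
    if act.size:
        Xa = X[:, act]
        Sigma = Xa.T @ Xa + (sigma**2 / tau1**2) * np.eye(act.size)
        mean = np.linalg.solve(Sigma, Xa.T @ y)
        L = np.linalg.cholesky(Sigma)
        beta[act] = mean + sigma * np.linalg.solve(L.T, rng.standard_normal(act.size))
    # z_j | beta, y, z_{-j}, sequentially; resid tracks y - X_z beta_z for the current z
    resid = y - X[:, act] @ beta[act] if act.size else y.copy()
    for j in range(p):
        r_j = resid + X[:, j] * beta[j] if z[j] else resid   # residual with coordinate j excluded
        log_odds = (np.log(q / (1 - q)) + np.log(tau0 / tau1)
                    + 0.5 * beta[j]**2 * (1.0 / tau0**2 - 1.0 / tau1**2)
                    + beta[j] * (X[:, j] @ r_j) / sigma**2
                    - beta[j]**2 * (X[:, j] @ X[:, j]) / (2 * sigma**2))
        z[j] = int(rng.random() < 1.0 / (1.0 + np.exp(-np.clip(log_odds, -700, 700))))
        resid = r_j - X[:, j] * beta[j] if z[j] else r_j
    return beta, z
```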
This is slightly different from Skinny Gibbs update, which approximate the covariance matrix (ignoring cross-correlation between active X̅ and inactive X̅_c components) [ X̅^⊤X̅+σ^2/τ_1^2I X̅^⊤X̅_c; X̅^⊤_cX̅ X̅^⊤_c X̅_c+σ^2/τ_0^2I ] with [ X̅^⊤X̅+σ^2/τ_1^2I 0; 0 Diag(X̅^⊤_c X̅_c)+σ^2/τ_0^2I ] therefore the update for β_j for which z_j=0, although independent across coordinates, would involve Diag(X̅^⊤_c X̅_c) for the inactive components (but the update for the active components are the same as (<ref>)), and the update for z_j in this case can be shown to be Π_j (c.f. (3.12) in <cit.>). However Skinny Gibbs posterior <cit.> still enjoys strong model selection consistency property π(z=z^*|y )→ 1 asymptotically, as p_n>n both grow at a proportional ratio. One might also consider a Hogwild asynchronous style update with all z_j drawn in parallel, using the latest z in the shared memory with possible overwriting, although it seems hard to characterize the error introduced by this approximate MCMC scheme. If all the updates use the z from the previous iteration, it amounts to assuming that the z_j's are independent. We'd like to mention that a Metropolized-Gibbs strategy with an accept/reject implementation for the {z_j}_j=1^p update on (<ref>) was proposed in <cit.>, but we find the algorithm above somewhat more natural. Such Gibbs update based on sparsified-likelihood can also be generalized to spike/slab distributions that admit representation as a scale-mixture of normals: for example in the case when G_0/G_1 is Laplace, one could write for λ>0 λ/2e^-λ|β|=∫_0^∞1/√(2π s)e^-β^2/2sλ^2/2e^-λ^2 s/2 ds, which is equivalent to having β|s ∼𝒩(0, √(s)), s∼Laplace(λ^2), and one can alternate between updating β,z,s; the conditional distribution of s will be an inverse-Gamma in this case. §.§.§ Practical Matters The update given in <ref> requires drawing samples from a multivariate Gaussian with covariance matrix that involves inversion of a z_0 ×z_0 matrix (since the posterior is concentrated on sparse z's as we will show in <Ref>, one can expect z_0 ≪ p). This is the more expensive step among (<ref>)-(<ref>). Building on the work of <cit.>, data augmentation and pre-computation can be used to improve the (<ref>) step as follows, which cost 𝒪(max{n^2z_0,n^3}) since forming the matrix takes 𝒪(n^2z_0) and inverting takes 𝒪(n^3). If the number of variables switching states between consecutive iterations is small (i.e., z_t-z_t-1_0 small, either due to sparse z/posterior concentration from <ref> or stable Markov chain), a few more ideas can be used for speeding up <ref>: * Use the previous M_t∈ℝ^n× n as preconditioner and solve the linear system using conjugate gradient, which only involves matrix-vector product * Instead of computing M_t^-1 from scratch at every step, perform Sherman-Morrison on the previous matrix M_t-1, since only a few columns are added/deleted Per-iteration cost aside, due to the curse of dimensionality, blocked updates can also help with mixing as illustrated by the following example. From <ref> we know π(z_j=1|β,y,z_-j)/π(z_j=0|β,y,z_-j)∝ q/1-qτ_0/τ_1exp(-1/2(1/τ_1^2-1/τ_0^2)β_j^2)exp(-1/σ^2β_j X_j^⊤X̅_\ jβ_z,\ j+1/σ^2β_j X_j^⊤ y-β_j^2(X^⊤ X)_jj/2σ^2) . Suppose half of the mass is concentrated on e_1 and the rest half evenly distributed among the remaining 2^p-1 models. 
We start with e_1+e_p (therefore 1 false positive and no false negatives). For a choice of τ_1 > τ_0, let us take the first term qτ_0/((1-q)τ_1) = o(1); since it is independent of β, the update reduces to π(z_1=1|β,y,z_-1)/π(z_1=0|β,y,z_-1)∼exp(-1/2(1/τ_1^2-1/τ_0^2)β_1^2+n/2σ^2β_1^2) and for all other j≠ 1, π(z_j=1|β,y,z_-j)/π(z_j=0|β,y,z_-j)∼exp(-1/2(1/τ_1^2-1/τ_0^2)β_j^2-1/σ^2β_j X_j^⊤X̅_\ jβ_z,\ j+1/σ^2β_j X_j^⊤ X_1β_1^*-nβ_j^2/2σ^2) using y=X_1β_1^*+σϵ and assuming (X^⊤ X)_jj=n is normalized. Additionally, we assume X_1 is orthogonal to all other columns. Under this assumption we have β_1 ∼β_1^* and β_2,…,β_p∼ 0 after the first β update (recall it amounts to regressing on the active components and setting the inactive ones to ∼ 0). Therefore even though z_1 will stay 1 and hence active with high probability, the rest of the z_2,…, z_p will have almost equal probability of staying 0 or 1. The situation will likely repeat since β_2,…,β_p∼ 0 will remain. What we can conclude from this example is that the Gibbs sampler will witness (exponentially) long streaks of updates over the 2^p-1 null models, followed by occupying the true model e_1 for an equally long period of time, and will be very slow to move in between these two scenarios, since using <ref> π(z_j=1|β,y,z_-j)/π(z_j=0|β,y,z_-j) ∼q/1-qτ_0/τ_1exp[-1/2σ^2β_1^⊤(X_1^⊤ X_1+σ^2/τ_1^2I)β_1+1/σ^2β_1^*⊤ X_1^⊤ X_1β_1]/exp(-β_j^2/2τ_0^2) ∼exp[-1/2σ^2β_1^⊤(X_1^⊤ X_1+σ^2/τ_1^2I)β_1+1/σ^2β_1^⊤ X_1^⊤ X_1β_1] becomes very small for j≠ 1 when we have identified the true model e_1, which means z_j will stay 0 (i.e., inactive) with high probability. On the other hand, blocked updates that do not adopt a coordinate-by-coordinate strategy will switch between the two half of the time. We also point out that while the updates for the Gibbs sampler are simple to implement, its mixing time is not immune to multi-modality. Consider the case when X_1 and X_2 are strongly correlated and the posterior puts half of the mass on the model consisting of these two variables only, and the other half evenly on the remaining 2^p-2-1+2^p-2=2^p-1-1 models (note that due to the correlation, either both X_1 and X_2 are included or neither is, assuming the remaining X_\{1,2} are almost orthogonal to them). Such collinearity in the data shows up as coherence of X (defined in (<ref>) below) in the mixing time analysis of the Gibbs sampler. If one initializes with either z_1=z_2=1 or z_1=z_2 = 0, a similar argument as above shows that the Gibbs update will be very slow moving in between these two cases (even though both make up a non-negligible portion of the posterior, 3/4 vs. 1/4) – this is essentially because they form two separated peaks in the z-space. §.§.§ Gibbs Mixing Guarantee for Posterior (<ref>) We will loosely follow the approach taken in <cit.>, which assumes that we can initialize from a model z with no false negatives and at most t false positives. The analysis is based on spectral gaps tailored to (finite) mixtures of log-concave measures and allows one to restrict the study of spectral gaps to sets where most of the probability mass resides. Define for some s≥ 0, δ>0 ℰ_s := {π(z∈{0,1}^p: z^*⊂ z, ‖z‖_0≤‖z^*‖_0+s| y)≥ 1-4/p^δ/2(s+1)}∩{π(z^*| y)≥ 1/2} ∩{max_z: z^*⊂ z, ‖z‖_0≤‖z^*‖_0+s max_j∈[p], j∉ z |⟨ (I_n+τ_1^2/σ^2 X_zX_z^⊤)^-1 X_j, ϵ⟩| ≤σ√(2(s+1)nlog(p))} which is a high probability event over the randomness of the noise ϵ only (X and β^* are assumed to be fixed and to satisfy certain conditions given below). 
Moreover, the design matrix X has coherence, for some integer k≥ 1, 𝒞(k) := max_z: ‖z‖_0≤ k max_j≠ i, j ∉ z |X_j^⊤(I_n+τ_1^2/σ^2 X_zX_z^⊤)^-1X_i| ≥ 0 and restricted eigenvalue (which entails that X^⊤ X is strongly convex in certain directions) ω(k) := min_z: ‖z‖_0≤ k min_‖v‖_2=1{v^⊤ X_1-z^⊤(I_n+τ_1^2/σ^2 X_zX_z^⊤)^-1X_1-z v: v∈ℝ^p-‖z‖_0, ‖v‖_0≤ k}≥ 0 . In general, smaller 𝒞(k) and bigger ω(k) indicate a better design; these quantities in some sense capture the correlation between active and inactive components. The result of <cit.> suggests that posterior concentration such as (<ref>) alone isn't enough for efficient sampling if one allows arbitrary initialization, but these are the bare minimum, and we will justify the posterior concentration property for the posterior (<ref>) (i.e., the first two conditions in ℰ_s) in <Ref>. We additionally assume a β-min condition for the true signal, i.e., |β^*_z^*,j|≳σ√(log(p)/n) for all active coordinates j (so the signal is above the detection threshold) and ‖β^*_1-z^*‖_2=0, which is unavoidable if an initialization with no false negatives / contraction towards the true support is desired. Initializing from the support of Lasso can be a viable choice for warm-start. Even in the frequentist setup, it is popular to consider model selection with Lasso first, followed by regressing on the selected subset with an (appropriately chosen) coordinate-weighted ℓ_1-penalty (∝ 1/|β̂_init,j|) à la Adaptive Lasso <cit.>. Another possibility is to do a preliminary MCMC run on the posterior π(z|y) first and hopefully identify the high-probability models. The last condition in ℰ_s holds with high probability, (<ref>) is satisfied with 𝒞(k)≲ k^2log(p), and (<ref>) is bounded away from 0 for k∼ n/log(p) when, e.g., the design matrix has entries X_ij∼𝒩(0,1) and n≳ klog(p). Moreover, with the above scaling of 𝒞(k), ω(k) and (<ref>), Lasso has false positives bounded above by 𝒪(k), i.e., the sparsity level of β^*, and no false negatives with high probability. This is a modification of Lemmas 8 and 9 of <cit.>, so we will be brief. Since ϵ∼𝒩(0,σ^2 I), (<ref>) simply follows by observing that for X_ij∼𝒩(0,1), max_z: z^*⊂ z, ‖z‖_0≤‖z^*‖_0+s max_j∈[p],z_j=0 ‖(I_n+τ_1^2/σ^2 X_zX_z^⊤)^-1 X_j‖≤max_j ‖X_j‖≲√(n) and the Gaussian deviation inequality. For the conditions (<ref>) and (<ref>), it is known that when n≳ klog(p), for a Gaussian random matrix ℙ(X∈ℋ) ≳ 1-1/p, where ℋ := { X∈ℝ^n× p: ‖X_j‖_2≍√(n) ∀ j∈[p], max_j≠ i|⟨ X_j,X_i⟩ |≲√(nlog (p)), min_‖v‖_0≤ k, ‖v‖_2=1 v^⊤ (X^⊤ X)v ≳ n} therefore we condition on the event ℋ for the rest of the argument. Now Woodbury's identity and Cauchy–Schwarz together with ℋ give for j≠ i, |X_j^⊤(I_n+τ_1^2/σ^2 X_zX_z^⊤)^-1X_i|=|X_j^⊤ X_i-X_j^⊤ X_z(σ^2/τ_1^2I+X_z^⊤ X_z)^-1X_z^⊤ X_i| ≤ |X_j^⊤ X_i|+√(X_j^⊤ X_z(σ^2/τ_1^2I+X_z^⊤ X_z)^-1X_z^⊤ X_j)√(X_i^⊤ X_z(σ^2/τ_1^2I+X_z^⊤ X_z)^-1X_z^⊤ X_i) ≲√(nlog (p))+1/n‖X_z^⊤ X_j‖‖X_z^⊤ X_i‖ ≲√(nlog(p))+k√(nlog(p))(k√(nlog(p))+n)/n ≲ k^2 log(p) for X_j not among the columns of X_z and ‖z‖_0=k, where we used n≳ klog(p). Similarly, for ‖z‖_0≤ k and supp(v)⊂ 1-z, ‖v‖_0 ≤ k, on event ℋ, we have v^⊤ X_1-z^⊤(I_n+τ_1^2/σ^2 X_zX_z^⊤)^-1X_1-z v = ‖X_1-z v‖^2 - v^⊤ X_1-z^⊤ X_z(σ^2/τ_1^2I+X_z^⊤ X_z)^-1X_z^⊤ X_1-z v ≳ n‖v‖^2-‖X_z^⊤ X_1-z v‖^2/n ≳ n‖v‖^2 - knlog(p)/n·‖v‖^2 > 0 for n≳ klog(p). The warm start guarantee of Lasso for Gaussian design under the β-min condition follows from classical results on support recovery <cit.>. We will analyze a blocked variant of Gibbs with lazy updates (it is well-known that the lazy version of the Markov chain only slows down the convergence by a constant factor). To implement, at step k we perform the following updates. 
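The algorithm listing itself is not reproduced here; the following is a hedged sketch of one such step, consistent with the transition kernel written immediately below: draw z as a block given the current β, then lazily either hold β or redraw it from π(β | z, y). The exact blocked draw of z enumerates {0,1}^p and is therefore only feasible for toy p — this is precisely the "not particularly cheap" step discussed next.

```python
import itertools
import numpy as np

def lazy_blocked_gibbs_step(beta, X, y, sigma, q, tau0, tau1, rng):
    """One lazy blocked step: z ~ pi(z | beta, y) drawn exactly as a block
    (by enumeration, toy p only), then with prob. 1/2 keep beta, otherwise
    redraw beta ~ pi(beta | z, y).  Illustrative sketch only."""
    n, p = X.shape
    zs = np.array(list(itertools.product([0, 1], repeat=p)))
    logw = np.empty(len(zs))
    for i, z in enumerate(zs):
        act = np.flatnonzero(z)
        resid = y - X[:, act] @ beta[act] if act.size else y
        tau = np.where(z == 1, tau1, tau0)
        logw[i] = (-0.5 * resid @ resid / sigma**2
                   + np.sum(z * np.log(q) + (1 - z) * np.log(1 - q))
                   - np.sum(np.log(tau) + 0.5 * (beta / tau) ** 2))
    w = np.exp(logw - logw.max())
    z_new = zs[rng.choice(len(zs), p=w / w.sum())]
    if rng.random() < 0.5:                       # lazy move: hold beta
        return beta, z_new
    beta_new = rng.normal(0.0, tau0, size=p)     # otherwise redraw beta | z, y
    act = np.flatnonzero(z_new)
    if act.size:
        Xa = X[:, act]
        Sigma = Xa.T @ Xa + (sigma**2 / tau1**2) * np.eye(act.size)
        L = np.linalg.cholesky(Sigma)
        beta_new[act] = (np.linalg.solve(Sigma, Xa.T @ y)
                         + sigma * np.linalg.solve(L.T, rng.standard_normal(act.size)))
    return beta_new, z_new
```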
Written mathematically, the Markov transition kernel takes the form K(β_k,β_k+1)=∑_z_k+1∈{0,1}^pπ(z_k+1|β_k, y)(1/2δ_β_k(β_k+1)+1/2π(β_k+1|z_k+1,y)) . The sampling of the z_k+1|β_k+1 step in <ref> is not particularly cheap, but our focus is on the mixing property of the Markov chain, and in light of the discussion in <Ref>, blocked updates as studied here only give a stronger guarantee in terms of mixing (there could generally be more bottlenecks in the chain). We preface with a lemma before stating our main result for the algorithm above. The relative density for two models π(z_2| y)/π(z_1| y) where z_1⊂ z_2 can be shown to be as (<ref>)-(<ref>), and given tolerance ζ_0∈ (0,1), assuming q/(1-q)∼ 1/p^δ+1 for some δ>0, X_j_2^2=n ∀ j∈[p], we have π_0 K^k-π(β|y)_TV≤ 2p^(δ+1)t (1+τ_1^2· tn/σ^2)^t/2(1-SpecGap_ζ(K))^k/2+ζ_0/√(2) for ζ=ζ_0^2/8p^-2(δ+1)t (1+τ_1^2· tn/σ^2)^-t if we initialize with t false-positives and no false negatives. The posterior marginal over finite state space z∈{0,1}^p after integrating out β(z) =[β̅ β̅_c] is (this is a special feature of conjugate priors) π(z| y) ∝ q^z_0(1-q)^p-z_0× τ_0^z_0-p/τ_1^z_0∫_ℝ^pexp(-1/2σ^2(β̅^⊤X̅^⊤X̅β̅-2β̅^⊤X̅^⊤ y)-β̅^⊤ D(1/2τ_1^2)β̅-β̅_c^⊤ D(1/2τ_0^2)β̅_c)dβ ∝ q^z_0(1-q)^p-z_0(τ_0/τ_1)^z_0 (τ_0^2)^(p-z_0)/2exp(1/2σ^4y^⊤X̅(1/σ^2X̅^⊤X̅+1/τ_1^2 · I)^-1X̅^⊤ y)/√((1/σ^2X̅^⊤X̅+1/τ_1^2· I)) ∝ q^z_0(1-q)^p-z_0(τ_0/τ_1)^z_0 (τ_0^2)^(p-z_0)/2exp(1/2σ^4y^⊤X̅(1/σ^2X̅^⊤X̅+1/τ_1^2 · I)^-1X̅^⊤ y)/√((I_n+τ_1^2/σ^2 X̅X̅^⊤)) (τ_1^2)^z_0/2 ∝ (q/1-q)^z_0(τ_0/τ_1)^z_0(τ_1/τ_0)^z_0exp(1/2σ^4y^⊤X̅(τ_1^2· I-τ_1^4X̅^⊤(σ^2I+τ_1^2X̅X̅^⊤)^-1X̅)X̅^⊤ y)/√((I_n+τ_1^2/σ^2 X̅X̅^⊤)) ∝ (q/1-q)^z_0exp(-1/2σ^2y^⊤ (I_n+τ_1^2/σ^2 X_zX_z^⊤)^-1y)/√((I_n+τ_1^2/σ^2 X_zX_z^⊤)) where we used (1) Gaussian integral ∫_ℝ^kexp(-1/2x^⊤Σ^-1x)dx=(2π)^k/2(Σ)^1/2 and completion of squares; (2) matrix determinant lemma (A+UV^⊤)=(A)(I+V^⊤ A^-1U); (3) Woodbury identity (A+UCV)^-1=A^-1-A^-1U(C^-1+VA^-1U)^-1VA^-1 and the fact that y^⊤ (X̅X̅^⊤-τ_1^2X̅X̅^⊤(σ^2I+τ_1^2X̅X̅^⊤)^-1X̅X̅^⊤)y =σ^2 y^⊤X̅X̅^⊤(σ^2I+τ_1^2X̅X̅^⊤)^-1y=σ^2 y^⊤X̅ (τ_1^2 X̅^⊤X̅+σ^2 I)^-1X̅^⊤ y =σ^2/τ_1^2y^⊤ (I-σ^2(σ^2I+τ_1^2X̅X̅^⊤)^-1) y for the last step. Now if we want to look at the change in posterior for two models z_1 and z_2 where z_1⊂ z_2, since both numerator and denominator involve I_n+τ_1^2/σ^2 X_z_2X_z_2^⊤=I_n+τ_1^2/σ^2 X_z_1X_z_1^⊤+τ_1^2/σ^2∑_j z_1,j=0,z_2,j=1 X_jX_j^⊤ , matrix determinant lemma and Woodbury identity will again let us compute the ratio π(z_2| y)/π(z_1| y) = (q/1-q)^z_2_0-z_1_0×1/√((I+τ_1^2/σ^2X_z_2-z_1^⊤ A^-1X_z_2-z_1)) ×exp(1/2σ^2y^⊤ A^-1X_z_2-z_1(σ^2/τ_1^2I+X_z_2-z_1^⊤ A^-1X_z_2-z_1)^-1X_z_2-z_1^⊤ A^-1y) , where A=I_n+τ_1^2/σ^2 X_z_1X_z_1^⊤≽ I_n and X_z_2-z_1 denotes columns of X for which z_1,j=0 and z_2,j=1. Let us denote the initial model as z_0, and define f_0(β) := π(β|z_0,y)/π(β|y)≤1/π(z_0|y)≤2π(z^*|y)/π(z_0|y) since π(z^*|y) ≥ 1/2 on the event ℰ_s. This implies using (<ref>)-(<ref>) that since z^*⊂ z_0, denoting the number of initial false positives as t, and using the assumptions f_0_π,∞ := esssup|f_0(β)| w.r.t π(dβ) ≤ 2p^(δ+1)t√((I_t+τ_1^2/σ^2X_z_0-z^*^⊤ A^-1X_z_0-z^*))≤ 2p^(δ+1)t (1+τ_1^2· tn/σ^2)^t/2 . Using Lemma 1 from <cit.> we have for all iterations k≥ 1 and initial π_0(dβ)=π(β|z_0,y), π_0 K^k-π(β|y)_TV^2 ≤max{∫ |f_0(β)-∫ f_0(β)π(dβ)|^2 π(dβ),ζf_0_π,∞^2}(1-SpecGap_ζ(K))^k+ζf_0_π,∞^2 ≤f_0_π,∞^2(1-SpecGap_ζ(K))^k+ζ_0^2/2 if setting ζ=ζ_0^2/8p^-2(δ+1)t (1+τ_1^2· tn/σ^2)^-t for some ζ_0∈(0,1) the desired accuracy. 
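As an aside, the collapsed posterior π(z | y) derived at the start of the proof above can be evaluated directly up to its normalizing constant; a small sketch follows (function name and interface are illustrative), which can be used, e.g., to check the relative ratios π(z_2|y)/π(z_1|y) numerically on toy problems.

```python
import numpy as np

def log_marginal_z(z, X, y, sigma, q, tau1):
    """Unnormalized log pi(z | y) with beta integrated out, following the
    collapsed expression above; works on the n x n matrix I_n + (tau1^2/sigma^2) X_z X_z^T."""
    act = np.flatnonzero(z)
    M = np.eye(len(y)) + (tau1**2 / sigma**2) * (X[:, act] @ X[:, act].T)
    _, logdet = np.linalg.slogdet(M)
    return (act.size * np.log(q / (1 - q))
            - 0.5 * logdet
            - 0.5 * (y @ np.linalg.solve(M, y)) / sigma**2)

# e.g. relative density of two nested models z1 within z2, as used in the lemma:
# np.exp(log_marginal_z(z2, X, y, sigma, q, tau1) - log_marginal_z(z1, X, y, sigma, q, tau1))
```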
With these preparations, it only remains to bound the approximate spectral gap from <ref> to conclude, for which we leverage the framework developed in <cit.>. At a high level it states that if when constrained on a subset of models z̅ where the posterior mass concentrates, the marginal densities π(β|z_1,y),π(β|z_2,y) overlap sufficiently for z_1,z_2 on this set that are somewhat close to each other, ζ-spectral gap can be much larger than the classically defined spectral gap without such a restriction (hence tighter resulting bounds). We assume for some δ>0, q/(1-q)∼ 1/p^δ+1,τ_1∼σ p/√(n), τ_0∼σ/√(n),X_j_2^2=n for all j∈[p]. Throughout the paper we consider q∈ (0,1) to be fixed, i.e., non-data-adaptive as opposed to empirical Bayes approaches in the literature. Under the event ℰ_s, condition (<ref>),(<ref>),(<ref>), <ref> and warm start with number of false positives t≥ 0 bounded above as (1/p)^2(1+δ)t(1/1+tp^2)^t≥20/p^δ/2 (s+1)ζ_0^2 , after k≳ (s+1) p^(1+δ)t (1+tp^2)^t/2exp(ns^2/σ^2η^2+2√(nlog(p))/ση+n/2σ^2η^2)log(1/ζ_0) steps of <ref>, we have π_0 K^k-π(β|y)_TV≤ζ_0. In particular, if s=0, δ=1, the iteration complexity is k≳ p^2t (1+tp^2)^t/2exp((√(n)/√(ω(k))∨n^2/ω^2(k)) log(p)+(k𝒞(k)/ω(k)∨k^2𝒞^2(k)/ω^2(k)) log(p))log(1/ζ_0) . Each iteration implemented with <ref> costs at least 𝒪(max{n^2k,n^3}). On the event ℰ_s, we have that the posterior puts at least 1-ζ/10=1-ζ_0^2/80p^-2(δ+1)t (1+τ_1^2· tn/σ^2)^-t fraction of the mass on the set π(z∈{0,1}^p z^*⊂ z, z_0≤z^*_0+s| y)≥ 1-4/p^δ/2(s+1) if picking the initial false positives t small enough such that given s≥ 0,ζ_0∈ (0,1),δ>0 (1/p)^2(1+δ)t(1/1+tτ_1^2 n/σ^2)^t≥20/p^δ/2 (s+1)ζ_0^2 , so the statement of Theorem 3 from <cit.> applies (picking m=∞, B_i=ℝ^p) and we need to find κ>0 such that ∀ z_1,z_2 belonging to the set (<ref>) =I_0 that differs in 1 element (so both z_1, z_2 have at most s false positives), ∫_ℝ^pmin{π(β|z_1,y),π(β|z_2,y)} dβ≥κ . Suppose w.l.o.g z_1⊂ z_2 where z_1,j=0 and z_2,j=1, using <ref>, (<ref>) we have for A=I_n+τ_1^2/σ^2 X_z_1X_z_1^⊤≽ I_n and under <ref>, π(β|z_1,y)/π(β|z_2,y) = π(β,z_1|y)/π(z_1|y)π(z_2|y)/π(β,z_2|y) = q/1-q1/√(1+τ_1^2/σ^2X_j^⊤ A^-1X_j)exp(1/2σ^2(y^⊤ A^-1X_j)^2/σ^2/τ_1^2+X_j^⊤ A^-1X_j)1-q/qτ_1/τ_0exp(β_j^2/2(1/τ_1^2-1/τ_0^2)) ×exp(-1/σ^2y^⊤ X_j β_j+n/2σ^2β_j^2+β_jX_j^⊤ X_z_1β_z_1/σ^2) ≥p/√(1+τ_1^2/σ^2n)exp(1/2σ^2(y^⊤ A^-1X_j)^2/σ^2/τ_1^2+X_j^⊤ A^-1X_j-β_j^2/21/τ_0^2+β_jX_j^⊤ (X_z_1β_z_1-y)/σ^2) ≥exp(-n/2σ^2β_j^2-|β_jX_j^⊤ (X_z_1β_z_1-y)|/σ^2) . Since both z_1,z_2 contain z^*, it must be the case j∉ z^*=supp(β^*) with |supp(β^*)|=k, and as X_j is not part of z_1, under event ℰ_s and <ref>, 1/σ^2|β_j X_j^⊤ (Xβ^*+ϵ- X_z_1β_z_1)| ≤1/σ^2 (|β_jX_j^⊤ X_sβ_s|+|β_jX_j^⊤ϵ|) ≤ns/σ^2 |β_j|β_s_1 +2/σ|β_j|√(nlog(p)) where X_s is the n-by-at-most-s matrix composed of columns of X that are in the z_1 model (and therefore z_2) but not in z^* (these are false positives). 
Now take any j that is not in z^* but is in z_2, we know from <ref> the marginal distribution π(β_j|z_2,y) is Gaussian with absolute value of the mean bounded as (using the definition of (<ref>),(<ref>)) | e_1^⊤[ X_j^⊤ X_j+σ^2/τ_1^2 X_j^⊤ X_z_2\ j; X_z_2\ j^⊤ X_j X_z_2\ j^⊤ X_z_2\ j+σ^2/τ_1^2 · I ]^-1[ X_j^⊤ y; X_z_2\ j^⊤ y ]| = |X_j^⊤ y-X_j^⊤ X_z_2\ j(X_z_2\ j^⊤ X_z_2\ j+σ^2/τ_1^2· I)^-1X_z_2\ j^⊤ y/σ^2/τ_1^2+X_j^⊤(I+τ_1^2/σ^2· X_z_2\ jX_z_2\ j^⊤)^-1X_j| =| X_j^⊤ (I+τ_1^2/σ^2· X_z_2\ jX_z_2\ j^⊤)^-1 (Xβ^*+ϵ)/σ^2/τ_1^2+X_j^⊤(I+τ_1^2/σ^2· X_z_2\ jX_z_2\ j^⊤)^-1X_j|≤σ√(2(s+1)nlog(p))+β^*_1𝒞(k+s)/ω(k+s) by Hölder and triangle inequality and variance σ^2[(X_z_2^⊤ X_z_2+σ^2/τ_1^2 I)^-1]_jj =σ^2/X_j^⊤ X_j+σ^2/τ_1^2-X_j^⊤ X_z_2\ j(X_z_2\ j^⊤ X_z_2\ j+σ^2/τ_1^2 · I)^-1X_z_2\ j^⊤ X_j =σ^2/σ^2/τ_1^2+X_j^⊤(I+τ_1^2/σ^2· X_z_2\ jX_z_2\ j^⊤)^-1X_j≤σ^2/ω(k+s) where we used matrix block inversion and Woodbury identity. Noting that since these two expressions are independent of the choice of j, which in particular means that the upper bound holds for any such j, we can write β_j=μ_j+σ_j z for z∼𝒩(0,1), and ∫_ℝ^pmin{π(β|z_1,y),π(β|z_2,y)} dβ=𝔼_π(β|z_2,y)[min{π(β|z_1,y)/π(β|z_2,y),1}] ≥𝔼_β_s[𝔼_β_j[exp(-n/2σ^2β_j^2-ns/σ^2 |β_j|β_s_1 -2/σ|β_j|√(nlog(p)))|β_s ]] ≥1/2𝔼_β_s[exp(-ns/σ^2(|u_j|+σ_j)β_s_1-2√(nlog(p))/σ(|u_j|+σ_j)-n/2σ^2(|u_j|+σ_j)^2)] ≥1/2exp(-ns/σ^2(|u_j|+σ_j)𝔼[β_s_1]-2√(nlog(p))/σ(|u_j|+σ_j)-n/2σ^2(|u_j|+σ_j)^2) ≥1/2exp(-ns^2/σ^2η^2-2√(nlog(p))/ση-n/2σ^2η^2) where we used that (1) for any non-negative function g, 𝔼[g(z)]≥ℙ(|z|≤ 1)min_z:|z|≤ 1 g(z); (2) Jensen's inequality; (3) for any coordinate j of β_s, 𝔼[|β_s[j]|]≤√(𝔼[β_s[j]^2]) ≤√(σ^2/ω(k+s)+(σ√(2(s+1)nlog(p))+β^*_1𝒞(k+s)/ω(k+s))^2) ≤σ/√(ω(k+s))+σ√(2(s+1)nlog(p))+β^*_1𝒞(k+s)/ω(k+s) =: η , (4) it holds that |μ_j|+σ_j≤η. Therefore one can invoke Theorem 3 with κ= 1/2exp(-ns^2/σ^2η^2-2√(nlog(p))/ση-n/2σ^2η^2) for (<ref>). Using that the diameter of the graph constructed on I_0 (where z_1,z_2∈ I_0 differing in 1 element) is bounded above by 2s, we reach SpecGap_ζ(K) ≥κ/4smin_z: z^*⊂ z, z_0≤z^*_0+sπ(z|y) ≳1/sζ_0 p^δ (s+1)/4exp(-ns^2/σ^2η^2-2√(nlog(p))/ση-n/2σ^2η^2) where we used the relative ratio from <ref>, for any z with at most s false positives, π(z|y)≥1/2π(z|y)/π(z^*|y)≥1/2p^s(δ+1)(1+τ_1^2 ns/σ^2)^-s/2≥1/2p^s(δ+1)(1+p^2 s)^-s/2≳ p^-δ(s+1)/4ζ_0^-1 . Putting together with <ref> now yields π_0 K^k-π(β|y)_TV≤ζ_0 when k ≳ (s+1)ζ_0 p^δ(s+1)/4exp(ns^2/σ^2η^2+2√(nlog(p))/ση+n/2σ^2η^2) log(p^(δ+1)t(1+p^2 t)^t/2/ζ_0) ≳ (s+1) p^(1+δ)t (1+tp^2)^t/2exp(ns^2/σ^2η^2+2√(nlog(p))/ση+n/2σ^2η^2)log(1/ζ_0) , where we hide a poly-logarithmic factor in p. In the case of s=0, δ=1, the posterior puts most of the mass on z^*, and we have k≳ p^2t (1+tp^2)^t/2exp((√(n)/√(ω(k))∨n^2/ω^2(k)) log(p)+(k𝒞(k)/ω(k)∨k^2𝒞^2(k)/ω^2(k)) log(p))log(1/ζ_0) , where we used the separation condition on the signal (<ref>) to estimate β^*_1 ≥ kσ√(log(p)/n). <ref> therefore implies that warm-start (made possible by frequentist estimators) is one way of getting around the hardness result of <cit.>. Other than the less-than-ideal scaling with the number of false positives t (which capture the bottleneck moving in between lower and higher density regions), we'd like to note the exponential dependence of the mixing time on the coherence 𝒞(k) and restricted eigenvalue parameter ω(k) of the design matrix X – these won't be present if not due to spectral gap considerations, and it shows up even with warm start. 
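The block-inversion/Woodbury expressions for the conditional mean and variance of β_j given z_2 can be checked numerically; the following short sketch (purely a sanity check with arbitrary synthetic inputs, not part of the argument) verifies that they coincide with the entries of the full Gaussian conditional on the active block.

import numpy as np

rng = np.random.default_rng(0)
n, sigma2, tau1_sq = 30, 1.0, 4.0
X = rng.standard_normal((n, 5))            # columns of the active model z_2
y = rng.standard_normal(n)
j, rest = 0, slice(1, 5)                   # single out one active coordinate

# Direct route: full Gaussian conditional on the active block of z_2.
A_full = X.T @ X + (sigma2 / tau1_sq) * np.eye(5)
mean_full = np.linalg.solve(A_full, X.T @ y)
cov_full = sigma2 * np.linalg.inv(A_full)

# Block-inversion / Woodbury route used in the display above.
B = np.eye(n) + (tau1_sq / sigma2) * X[:, rest] @ X[:, rest].T
denom = sigma2 / tau1_sq + X[:, j] @ np.linalg.solve(B, X[:, j])
mu_j = (X[:, j] @ np.linalg.solve(B, y)) / denom
var_j = sigma2 / denom

assert np.allclose(mu_j, mean_full[j]) and np.allclose(var_j, cov_full[j, j])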
§.§ Spike-and-Slab for Random Design We consider a slightly different task in this section where the goal is to sample from a posterior π(β| y) of the following form: given y and assume X_i,j∼𝒩(0,1) independently, π(β|y) ∝∑_z∈{0,1}^p∫_X∈ℝ^n× pexp(-1/2σ^2y-Xβ_2^2) μ_G(dX)·∏_j=1^p ((1-q)G_0(β_j))^1-z_j(qG_1(β_j))^z_j with spike-and-slab prior on the parameter β∈ℝ^p. This is closer to random design setup where y=Xβ+ϵ for both X,y a random sample as opposed to just y, and one could be interested in the performance of β̂∼π(β| y) on future pairs of (X,y) from the same model. The Gaussian i.i.d entry assumption of course hardly holds in practice but it may serve as a good proxy for some class of design matrix. The posterior, shown below in <ref>, is only a function of y (therefore no expensive matrix inversion involved in the algorithm), and if q=0, the density only depends on the magnitude β which means that it's rotationally invariant (i.e., equal probability over sphere of fixed radius). For q≠ 0, due to the combinatorial nature of the mixture it introduces challenge for high-dimensional sampling – naïvely it could be exponential in p. The posterior with continuous Gaussian Spike-and-Slab prior under random design takes the form (with τ_1 ≫τ_0) π(β,z | y)∝ σ^n/(β^2+σ^2)^n/2exp(y^2β^2/2σ^4+2σ^2β^2)∏_j=1^p [1-q/τ_0exp(-β_j^2/2τ_0^2)]^1-z_j·[q/τ_1exp(-β_j^2/2τ_1^2)]^z_j which is non-log-concave, but it is amenable to Gibbs updates (that is known to be reversible). We calculate, since the entries of X are assumed to be independent, ∫_X exp(-1/2σ^2y-Xβ_2^2) μ_G(dX) ∝∏_i=1^n [∫_ℝ^pexp(1/σ^2y_ix_i^⊤β-1/2σ^2β^⊤ x_ix_i^⊤β-1/2x_i_2^2) dx_i] =∏_i=1^n[∫_ℝ^pexp(1/σ^2y_iβ^⊤ x_i-1/2σ^2x_i^⊤(ββ^⊤+σ^2 I)x_i) dx_i] = ∏_i=1^n exp(1/2σ^2y_i^2 β^⊤(ββ^⊤+σ^2 I)^-1β) × ∫_ℝ^pexp(-1/2σ^2[x_i-y_i(ββ^⊤+σ^2I)^-1β]^⊤(ββ^⊤+σ^2I)[x_i-y_i(ββ^⊤+σ^2I)^-1β]) dx_i ∝∏_i=1^n exp(y_i^2/2σ^2·1/σ^2β_2^2/1+1/σ^2β_2^2)√((σ^2(ββ^⊤+σ^2 I)^-1)) ∝∏_i=1^n exp(y_i^2/2σ^2·1/σ^2·β_2^2/1+1/σ^2·β_2^2)σ^p/√(β^2+σ^2)σ^(p-1) where we used Gaussian integral and the Sherman–Morrison formula, as claimed. Gibbs update alternate between π (β| z,y) ∝σ^n/(β^2+σ^2)^n/2exp(y^2β^2/2σ^4+2σ^2β^2)𝒩(β;0,D^-1) ∝σ^n/(β^2+σ^2)^n/2exp(y^2β^2/2σ^4+2σ^2β^2-1/2β^⊤ D(z)β) where D(z):=Diag(zτ_1^-2+(1_p-z)τ_0^-2) is a positive-definite diagonal matrix, and π(z|β,y) ∝∏_j=1^p (q𝒩(β_j;0,τ_1^2))^z_j·((1-q)𝒩(β_j;0,τ_0^2))^1-z_j ∼∏_j=1^p Bern(z_j;q𝒩(β_j;0,τ_1^2)/q𝒩(β_j;0,τ_1^2)+(1-q)𝒩(β_j;0,τ_0^2)) = ∏_j=1^p Q_j^z_j(1-Q_j)^1-z_j for Q_j = 1/1+1-q/qτ_1/τ_0exp(1/2(1/τ_1^2-1/τ_0^2)β_j^2) is product of independent Bernoulli's that can be sampled in parallel. The marginal over β is not log-concave therefore standard off-the-shelf sampler (e.g., Langevin, HMC etc.) doesn't come with efficiency guarantee, even in continuous time. To see this, we simply calculate the Hessian for the negative log density in (<ref>), -∇logπ(β|z,y)=n/β^2+σ^2β-y^2(1/σ^4+σ^2β^2-β^2/(σ^3+σβ^2)^2)β+D(z)β for some 1/τ_1^2I ≼ D(z) ≼1/τ_0^2 I. And -∇^2logπ(β|z,y) = (n/β^2+σ^2-y^2/σ^4+σ^2β^2+y^2β^2/(σ^3+σβ^2)^2)I+D(z) -(2σβ^2y^2-2σ^3y^2/(σ^3+σβ^2)^3-2y^2/(σ^3+σβ^2)^2+2n/(β^2+σ^2)^2)ββ^⊤ which we can see is not always positive semi-definite on the entire domain of β, e.g., for a counter-example one could consider σ^2 ≪β^2, y^2/σ^2 ≪ n, τ_0^2 ≫β^2/n. Therefore the posterior π(β|y) in this case is in fact a mixture of non-log-concave measures, unlike the fixed design case in <ref>. 
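The z-update of this Gibbs scheme is embarrassingly parallel, and the β-conditional is available in closed (if non-log-concave) form; the sketch below transcribes both (illustrative only; y_sq_norm denotes ||y||^2 and the function names are ours). The β-conditional is the target the inner sampler of the next subsection must handle.

import numpy as np

def sample_z_given_beta(beta, q, tau1_sq, tau0_sq, rng):
    # Independent Bernoulli updates with success probabilities Q_j as above.
    log_odds = (np.log(q / (1 - q)) + 0.5 * np.log(tau0_sq / tau1_sq)
                - 0.5 * (1.0 / tau1_sq - 1.0 / tau0_sq) * beta ** 2)
    Q = 1.0 / (1.0 + np.exp(-log_odds))
    return (rng.random(beta.size) < Q).astype(int)

def log_beta_given_z(beta, z, y_sq_norm, n, sigma2, tau1_sq, tau0_sq):
    # Unnormalized log pi(beta | z, y) for the random-design posterior.
    b2 = beta @ beta
    D = np.where(z == 1, 1.0 / tau1_sq, 1.0 / tau0_sq)
    return (-0.5 * n * np.log(b2 + sigma2)
            + y_sq_norm * b2 / (2 * sigma2 ** 2 + 2 * sigma2 * b2)
            - 0.5 * np.sum(D * beta ** 2))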
§.§.§ Inner Step Implementation of (<ref>) for Gibbs As it turns out target that has a density with respect to the Gaussian measure is somewhat easy to sample from. Consider the problem of sampling from the un-normalized density π(x)∝ f(x)𝒩(0,γ I) for f>0, where one can think of the prior as being Gaussian, and is performing optimal transport from 𝒩(0,γ I) to π in the space of probability measures. Schrödinger bridge admits closed-form expression as an SDE if starting at the origin at t=0. It is known from <cit.> that Q^π:=min_Q∈ℳ^πKL(Q|| P) where ℳ^π={Q:Q_0=δ_0, Q_1=π} the set of distributions with the two time marginals pinned at t=0 and t=1 end points and P the reference Wiener measure associated with the process dX_t = √(γ) dW_t, X_0 ∼δ_0 , is governed by an SDE with time-varying Föllmer drift (i.e., depends on both X_t and t, unlike Langevin): dX_t = ∇_X log𝔼_Z [f(X_t+√(1-t)Z)]dt+ √(γ) dW_t, X_0=0, t∈ [0,1] for Z∼𝒩(0,γ I). Using Stein's lemma (i.e., Gaussian integration by parts) this is the same as dX_t =𝔼_Z [Z· f(X_t+√(1-t)Z)]/√(1-t)·𝔼_Z [f(X_t+√(1-t)Z)]dt+ √(γ) dW_t, X_0=0 . At t=1 the backward heat semigroup/convolution kernel of (<ref>) localizes, but the crucial difference from (overdamped) Langevin dynamics is that it reaches target π in finite time, compared to Langevin that reaches target as t→∞ in infinite time horizon (but has arbitrary initialization under ergodicity). And without the drift (i.e., the control), one gets Brownian motion which indeed becomes 𝒩(0,γ I) at time t=1. In continuous time, no convexity assumption on f is needed for convergence, thanks to the optimal stochastic control interpretation <cit.>. The general problem of arbitrary endpoints with general reference measure will involve forward-backward iterative scheme for reaching a solution, but the particular case under consideration has a convenient analytical form (<ref>). In the case of Wiener measure as the reference measure, the solution to the Schrödinger bridge problem is also intimately connected to the entropy-regularized optimal transport (with quadratic cost) between the two time marginals. The following is a sanity check that discretization of the SDE is stable for the particular choice of f as demanded by <ref>, therefore one could hope to simply implement the inner step (<ref>) of the Gibbs sampler via e.g., Euler-Maruyama discretization: X_k+1 = X_k+h1/S∑_i=1^S v_i· f(X_k+√(1-kh)v_i)/√(1-kh)·1/S∑_i=1^S f(X_k+√(1-kh)v_i) + √(γ h) Z_k, X_0=0 for v_i∼𝒩(0,γ I) and Z_k∼𝒩(0,I) independent. Putting things together gives the following algorithm at iteration k. Between t∈(0,1), for any n>2 and σ>0 f(β_t)=σ^n/(β_t^2+σ^2)^n/2exp(y^2β_t^2/2σ^4+2σ^2β_t^2-1/2β_t^⊤ D(z) β_t+1/2γβ_t^2) and the drift b(β_t,t):=∇_βlog𝔼_Z∼𝒩(0,γ I) [f(β_t+√(1-t)Z)] is Lipschitz in β_t, assuming 1/τ_1^2I ≼ D(z) ≼1/τ_0^2 I and γ > τ_0^2. The goal is to show that b(β_t^1,t)-b(β_t^2,t)≤ C β_t^1-β_t^2 ∀β_t^1,β_t^2, or equivalently, ∇_β b(β_t,t)_op≤ C, for any t∈(0,1). We will need the following fact: if f(β_t) > 0 is L-Lipschitz, the convolved quantity g(β_t):=𝔼_Z [f(β_t+√(1-t)Z)]>0 will be Lipschitz and smooth. To see this, denote the Gaussian density with covariance γ· I as u_γ, since g(β_t)=∫ f(β_t+√(1-t)y)u_γ(y) dy=∫ f(β_t-y)u_(1-t)γ(y) dy = f*u_(1-t)γ is a positively-weighted linear combination of shifted f, it is clear that it will also be L-Lipschitz. 
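A minimal Python sketch of this Euler–Maruyama scheme with a self-normalized Monte Carlo estimate of the Föllmer drift is given below (a sketch under our notation, not a tuned implementation). Here log_f is a user-supplied batched evaluation of log f, expected to map an (S, p) array to S values; we stop one step short of t = 1 to avoid the singular 1/√(1-t) scaling, consistent with the discretized update above.

import numpy as np

def follmer_sample(log_f, p, gamma, n_steps=200, n_mc=512, rng=None):
    # Euler-Maruyama discretization of the Schrodinger-bridge SDE for a target
    # proportional to f(x) * N(0, gamma I), with a Monte Carlo drift estimate.
    rng = rng or np.random.default_rng()
    h = 1.0 / n_steps
    x = np.zeros(p)
    for k in range(n_steps - 1):
        s = np.sqrt(1.0 - k * h)
        v = np.sqrt(gamma) * rng.standard_normal((n_mc, p))
        lv = log_f(x + s * v)                    # shape (n_mc,)
        w = np.exp(lv - lv.max())                # self-normalized weights
        drift = (w[:, None] * v).sum(0) / (s * w.sum())
        x = x + h * drift + np.sqrt(gamma * h) * rng.standard_normal(p)
    return x

For the Gibbs inner step, log_f would be the logarithm of the function f specified in the lemma below, evaluated with the current z fixed.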
Now for the smoothness claim ∇^2 g(β_t)_op≤ L/√((1-t)γ), we compute since ∇ f≤ L, for any v=1, |v^⊤∇^2 g(β_t)v| = |v^⊤ (∇ f * ∇ u_(1-t)γ) v| = |∫ (v^⊤∇ f(y))· (y-β/(1-t)γ)^⊤ v · u_(1-t)γ(β-y) dy| ≤L/√((1-t)γ) |∫ (y-β/√((1-t)γ))^⊤ v · u_(1-t)γ(β-y) dy | =L/√((1-t)γ)𝔼_z∼𝒩(0,1)[|z|] = L/√((1-t)γ)√(2/π) . This in turn implies that ∇_β b(β_t,t)_op≤∇^2 g(β_t)_op/g(β_t)+∇ g(β_t)^2/g(β_t)^2≤ C since ∇^2 g(β)_op and ∇ g(β) is bounded from above and g(β) bounded from below. It remains to check that f is Lipschitz to conclude. Write f(β_t)=σ^n/(β_t^2+σ^2)^n/2α where α is shorthand for the exp(·) term, we have ∇ f(β_t)≤ασ^n β_t/(β_t^2+σ^2)^n/2(y^2/σ^4+σ^2 β_t^2+β_t^2/(σ^3+σβ_t^2)^2-n/β_t^2+σ^2)+(1/γ-D(z)) β_t and it is easy to see that it is always bounded from above on the domain of β. The drift in (<ref>) can also be written as a conditional expectation: ∇_x log𝔼[f(X_1) | X_t=x] for (X_t)_t∈[0,1] distributed as the prior Wiener process P <cit.>. In fact the dynamics can be viewed as X_t = tβ+B_t for β∼π and one reaches target at t=1 where B_t is the Brownian bridge on [0,1] (therefore B_0=B_1=0) – this is somewhat related to the stochastic localization dynamics (<ref>), which we turn to in <Ref>. The SDE (<ref>) also shows up in proximal sampler <cit.> as part of the backward heat flow interpretation of the RGO oracle (c.f. Lemma 15/ equation (21) therein), albeit with different initialization (we are initializing from the origin, while <cit.> initialize from a Gaussian-convolved version of the target). §.§ Extension: Spike-and-Slab Logistic Regression While the preceding results pertain only to linear regression, we sketch its possible applicability to some GLMs via data augmentation technique. In the case of logistic regression for example, ∀ i∈[n], y_i∈{0,1} with sparse β^*, y_i | x_i,β∼Bern(exp(x_i^⊤β)/1+exp(x_i^⊤β)) where x_i is the i-th row of the matrix X. Through the introduction of the auxiliary variable ω_i, one can write the quasi-likelihood as (note the resemblance to linear model after transformation) ℙ(y_i=1|ω_i,z,β)=1/√(2)exp((y_i-1/2)(x_i,z^⊤β_z) -ω_i/2 (x_i,z^⊤β_z)^2) for ω_i∼PG(1,0) the Pólya-Gamma distribution, which admits efficient sampling algorithm <cit.>. This step relies on the essential integral identity that holds for all a∈ℝ: (e^ϕ)^a/1+e^ϕ=1/2e^(a-1/2)ϕ∫_0^∞exp(-ωϕ^2/2) p(ω) dω , where p(ω) is the pdf for PG(1,0). Assuming a continuous Gaussian spike-and-slab prior on the parameter β, the Bayesian logistic regression with spike and slab prior has posterior that can be sampled with Gibbs by alternating between π (β| z,y,ω) ∝exp(-1/2(β̅^⊤X̅^⊤D(ω)X̅β̅-β̅^⊤X̅^⊤ (y-1/2)-β̅^⊤ D(1/2τ_1^2)β̅)∏_j=1^p (𝒩(β_j;0,τ_0^2))^1-z_j ∼𝒩(β̅;Σ^-1X̅^⊤ (y-1/2),Σ^-1) ∏_j=1^p (𝒩(β_j;0,τ_0^2))^1-z_j where Σ(z) = X̅^⊤ D(ω) X̅+2 D(z_j/2τ_1^2) and for each i∈[n] in parallel π(ω_i|β,z,y)∼PG(1,x_i,z^⊤β_z) and for each j∈[p] sequentially π(z_j|β,y,z_-j,ω) ∝∏_j=1^p (q𝒩(β_j;0,τ_1^2))^z_j·((1-q)𝒩(β_j;0,τ_0^2))^1-z_j𝒩(y-1/2/ω;X_zβ_z, D(1/ω_i)) ∝ (q𝒩(β_j;0,τ_1^2))^z_j·((1-q))^1-z_j×𝒩(β_z;Σ^-1X̅^⊤ (y-1/2),Σ^-1) ∼Bern(z_j;q𝒩(β_z;Σ^-1X̅^⊤ (y-1/2),Σ^-1)/(1-q)𝒩(β_j;0,τ_0^2)+q𝒩(β_z;Σ^-1X̅^⊤ (y-1/2),Σ^-1)) where we used completion of squares at various places. The most expensive step of the update is (<ref>), for which one can re-use similar tricks from <Ref> for speed-up. We leave investigation of the mixing along with statistical property of the posterior for future work. 
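As a rough illustration of the Pólya–Gamma augmented updates (not the exact alternating-series sampler of the cited reference), the sketch below draws ω_i from a crude truncation of the infinite-convolution representation of PG(1, c) and performs the Gaussian β-update with κ = y - 1/2; all names are ours and the truncation level K is an assumption.

import numpy as np

def pg1_approx(c, rng, K=200):
    # Truncated-series approximation to PG(1, c): a weighted sum of
    # independent Exp(1) variables; exact samplers should be used in practice.
    k = np.arange(1, K + 1)
    g = rng.exponential(1.0, K)
    return np.sum(g / ((k - 0.5) ** 2 + (c / (2 * np.pi)) ** 2)) / (2 * np.pi ** 2)

def beta_update(Xz, y, omega, tau1_sq, rng):
    # Gaussian update for the active block given omega and z, matching the
    # completion-of-squares form above with Sigma = Xz' D(omega) Xz + I/tau1^2.
    kappa = y - 0.5
    Prec = Xz.T @ (omega[:, None] * Xz) + np.eye(Xz.shape[1]) / tau1_sq
    cov = np.linalg.inv(Prec)
    return rng.multivariate_normal(cov @ (Xz.T @ kappa), cov)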
§ STOCHASTIC LOCALIZATION SAMPLER In this section, we study Stochastic Localization Sampler for (<ref>) under similar posterior contraction assumptions with warm start as in <Ref>. This class of samplers essentially takes a denoising perspective – as we already saw, computationally sampling from the posterior is harder than statistical estimation in some sense (even for identifying the support z as illustrated in <cit.>), but the approach below is not based on MCMC – therefore not sensitive to spectral gap, isoperemetric constant etc. – and put the two tasks on equal footing under favorable statistical conditions, at least for some spike-and-slab models. §.§ Preliminaries: From Denoising to Sampling The idea of stochastic localization came out of the analysis of functional inequalities (i.e., key ingredient behind the solution to the KLS conjecture <cit.>) as a proof technique. The work of <cit.> initiated its algorithmic use for sampling from the Sherrington-Kirkpatrick Gibbs measure with discrete hypercube support {± 1}^n, where approximate message passing (AMP) is used for implementing the mean estimation step, which we explain below (their guarantee holds with probability 1-o_n(1) over input A∼GOE(n)). The crucial insight of this method is that the following two processes have the same law <cit.> (this is sequential revelation of information) θ_t = tβ + W_t, β∼π (unknown signal where we know the prior & have Gaussian observation) which is ideal and un-implementable since we don't know β, and dθ_t = [∫_ℝ^pβ· p_t,θ_t(β) dβ] dt + dW_t=𝔼[β|θ_t=θ]dt+dW_t, θ_0 = 0 for which (notice it only depends on the last time point) p_t,θ_t(β) := 1/Z(t,θ_t)exp(θ_t^⊤β-t/2β^2) π(β) precisely describes the posterior ℙ(β| (θ_s)_0≤ s≤ t )=ℙ(β|θ_t = θ) for β under (<ref>). Above Z(t,θ_t) is a normalizing constant. The measure p_t,θ_t localizes to a Dirac measure δ_β for a random β∼π as t→∞ (this can also be seen from (<ref>) since the signal part scales as 𝒪(t) and the noise part 𝒪(√(t))). We abbreviate p_t,θ_t as p_t below, and let a_t:= ∫β· p_t(β)dβ that one can think of as a Bayes optimal estimator. As <ref> below will reveal, Stochastic Localization is evolving a measure p_t(β) driven by W_t that has the martingale property of p_0=π and p_∞=δ_β for β∼π. The process can be simulated via a SDE (<ref>) which reduces the task of sampling from π to estimating the denoising drift 𝔼[β|θ_t=θ] – an approximation of this is what we will output at the end after running it for sufficiently long, and we track the (random) evolving measure for its barycenter a_t. In some sense at every fixed t, the process decomposes π into a mixture of random measures, i.e., π = 𝔼_θ_t[π(·|θ_t)], and the variance of the component π(·|θ_t) decreases as t→∞. A more general version of (<ref>) can take the form p_t,θ_t(β) = 1/Z(t,θ_t)exp(θ_t^⊤β-β_G_t^2/2) π(β) for G_t ≻ 0 but we will not pursue such extension here. If π(β) has bounded second moment, a(θ_t,t) is Lipschitz in θ_t, since ∇_θ_t a(θ_t,t)_op=𝔼[ββ^⊤]-𝔼[β]𝔼[β]^⊤_op≤𝔼[ββ^⊤]_op≤𝔼[β^2] will be bounded, where above the expectation is taken with respect to p_t,θ_t(β), which means the SDE (<ref>) has a unique strong solution. The lemma below gives quantitative convergence rate for (<ref>) in continuous time. We have after t=1/ϵ^2, W_2(π,Law(a_t))=W_2(𝔼[p_t],Law(a_t))≤√(p)ϵ . 
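The resulting sampler is short to state in code; the following generic Python skeleton (illustrative, with our own names and step sizes) discretizes the SDE for θ_t with an oracle post_mean(θ, t) for the tilted posterior mean E[β | θ_t = θ], and returns the final drift value, which localizes on a draw from π.

import numpy as np

def stochastic_localization_sample(post_mean, p, T=20.0, h=0.05, rng=None):
    # Time-discretized localization dynamics:
    # theta_{k+1} = theta_k + h * a(theta_k, kh) + sqrt(h) * N(0, I).
    rng = rng or np.random.default_rng()
    theta, t = np.zeros(p), 0.0
    a = post_mean(theta, t)
    while t < T:
        theta = theta + h * a + np.sqrt(h) * rng.standard_normal(p)
        t += h
        a = post_mean(theta, t)
    return a

Concrete instances of post_mean for the spike-and-slab models are worked out in the following subsections.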
Based on covariance decay we have 𝔼[cov(μ_t)]≼1/tI for all t>0 <cit.>, which reflects the fact that the measure localizes, therefore 𝔼[W_2^2(p_t,δ_a_t)]≤𝔼[𝔼_p_t[x-a_t^2]]≤p/t by the coupling definition of W_2 distance and taking trace on both sides. Now since W_2^2 is convex (this can be seen from the dual formulation which is sup over a set of linear functions), we can push expectation inside using Jensen's inequality and conclude W_2^2(𝔼[p_t],Law(a_t))≤p/t. Recall 𝔼[𝔼_x∼ p_t[x]] = 𝔼_x∼ p_0[x]=𝔼_x∼π[x] from the martingale property, hence Law(a_t)→ p_0 = π as t→∞. This rate is slower than other SDE-based algorithms, which have exponential convergence in continuous time under strong convexity, but is nevertheless quite minimal in terms of the assumptions made. In fact there's a dynamics one can write for the barycenter as well, if one can compute the covariance of p_t(β). This is a drift-free, diffusion-only SDE with multiplicative noise, which could make discretization challenging. In this regard (<ref>) relies on the mean and (<ref>) relies on the covariance of the tilted measure for implementation. We have the following barycenter representation: da_t = A_t dW_t= [∫_ℝ^p (β-a_t)(β-a_t)^⊤p_t,θ_t(β) dβ] dW_t where a_∞∼π. And density-valued SDE representation: dp_t(β) = p_t(β)⟨β-a_t,dW_t⟩ . These are known in the context of stochastic localization so we simply refer the reader to <cit.> for the proof. As an immediate consequence of the martingale property, which is evident from (<ref>), we have 𝔼[∫ f(β) p_t,θ_t(β)d β]=∫ f(β) π(β) dβ remains constant for all t≥ 0 for any continuous function f. Therefore if π has bounded mean/second moment, (β_t)_t will have similarly bounded mean/second moment in expectation throughout the localization process. The density-valued SDE could potentially be used for an ensemble / interacting particle system implementation on a fixed grid with δ_β_1,δ_β_2, …, but it will likely require a fine grid for the localization on the continuous domain that we consider (i.e., exponential in dimension). We will not explore it here but nevertheless establish its validity: if we start with a probability distribution p_0=∑_i p_0(β_i)=1, the process will remain a probability measure over the discrete set since a_t = ∑_i β_i p_t(β_i), for β_i∈ℝ^p ∀ i, d ∑_i p_t(β_i)/dt = ∑_i p_t(β_i)⟨β_i-a_t,dW_t⟩ =0 ⇒∑_i p_t(β_i)=1 ∀ t>0 . The time-discretized algorithm for sampling from π using (<ref>) is given below. We note that the algorithm is in some sense gradient-free. §.§ Warm-up: Orthogonal Design In the case of X=I (sequence model), since we start with a product measure, we end up with another product measure that decouples across coordinates, which reduces the complexity significantly. With the point-mass spike and slab prior, the marginal posterior distribution of each coordinate is a mixture of (data-dependent, weighted) Dirac measure at zero and a continuous convolved density, with the weights signaling if the parameter has a higher chance coming from the spike or the slab part given the data and a fixed q: π(β_j|y_j,q) =ℙ(z_j=1|y_j,q)π(β_j|y_j,z_j=1)+ℙ(z_j=0|y_j,q)π(β_j|y_j,z_j=0) = (1-q)ϕ_σ(y_j)/(1-q)ϕ_σ(y_j)+q h(y_j)δ_0(β_j)+q h(y_j)/(1-q)ϕ_σ(y_j)+q h(y_j)ϕ_σ(y_j-β_j) g_τ_1(β_j)/∫ϕ_σ(y_j-β_j) g_τ_1(β_j) dβ_j where ϕ_σ(y_j-β_j)∝ e^-1/2σ^2(y_j-β_j)^2 is the likelihood, h(y_j):=∫ϕ_σ(y_j-β_j) g_τ_1(β_j) dβ_j the convolution and g_τ_1(·) the slab prior. 
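For the Gaussian slab g_{τ_1} = N(0, τ_1^2), the convolution h(y_j) is simply the N(0, σ^2+τ_1^2) density and the slab part of the posterior is N(τ_1^2 y_j/(σ^2+τ_1^2), σ^2τ_1^2/(σ^2+τ_1^2)), so the coordinate-wise inclusion probability and posterior mean are available in closed form; the short sketch below (illustrative only) computes both.

import numpy as np
from scipy.stats import norm

def seq_model_posterior(y, sigma, tau1, q):
    # Coordinate-wise posterior for the sequence model with a point-mass spike
    # and Gaussian slab: inclusion probability w_j and posterior mean.
    slab_marginal = norm.pdf(y, scale=np.sqrt(sigma**2 + tau1**2))   # h(y_j)
    spike_marginal = norm.pdf(y, scale=sigma)                        # phi_sigma(y_j)
    w = q * slab_marginal / ((1 - q) * spike_marginal + q * slab_marginal)
    post_mean = w * tau1**2 * y / (sigma**2 + tau1**2)
    return w, post_mean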
For the choice of q≥ 1/p, a known fact is that the posterior median behaves similarly as a coordinate-wise hard thresholding estimator with threshold σ√(2log(p)), i.e., the max of p independent Gaussians with variance σ^2, which capture the level below which there is no expected signal. It has been recognized since the 90s that shrinkage estimator can be tuned to attain minimax rates over a wide range of sparsity classes <cit.>. The empirical Bayes choice of q can be performed by maximizing the log-marginal q | y as max_q ∑_j=1^n log((1-q)ϕ_σ(y_j)+q h(y_j)) but we will not pursue such an extension here. We remark that sequence model is known to be polynomial-time computable – even with a hyper-prior on q that renders the coordinates dependent, existing exact method scales as 𝒪(n^3) using polynomial multiplication <cit.> for calculating various posterior point estimators. In what follows in this section we assume the data matrix satisfies X^⊤X = I_p, n=p, i.e., orthogonal, since some salient features of the dynamics can be more easily seen in this simpler case. Under point-mass spike, by definition, given t,θ_t,y the mean of the tilted measure is given by a_t(θ_t,t) = ∫_ℝ^pβ· p_t,θ_t(β) dβ = 1/Z∫_ℝ^pβ·∑_z∈{0,1}^p e^θ_t^⊤β-tβ^2/2-1/2σ^2y-X_zβ_z_2^2∏_j=1^p (q𝒩(β_j;0,τ_1^2))^z_j·((1-q)δ_0(β_j))^1-z_j dβ . Without loss of generality we look at the first coordinate. Let x_i denote the i-th column of the matrix X, for point-mass spike whether we assume quasi-likelihood or exact likelihood doesn't affect the calculation in this case. Recall a_t,1 can be viewed as a denoiser for β_1^* and a_t,1→δ_β_1^* for some β_1^*∼π as t→∞, which we output. a_t,1(θ_t,t) = ∫_ℝβ_1·exp((θ_t,1+1/σ^2 y^⊤x_1)β_1-(t/2+1/2σ^2)β_1^2 )[q1/√(2π)τ_1e^-β_1^2/2τ_1^2+(1-q)δ_0(β_1)] dβ_1/∫_ℝexp((θ_t,1+1/σ^2 y^⊤x_1)β_1-(t/2+1/2σ^2)β_1^2 )[q1/√(2π)τ_1e^-β_1^2/2τ_1^2+(1-q)δ_0(β_1)] dβ_1 = q1/√(2π)τ_1∫_ℝβ_1·exp((θ_t,1+1/σ^2 y^⊤x_1)β_1-(t/2+1/2σ^2+1/2τ_1^2)β_1^2 ) dβ_1/q1/√(2π)τ_1∫_ℝexp((θ_t,1+1/σ^2 y^⊤x_1)β_1-(t/2+1/2σ^2+1/2τ_1^2)β_1^2 ) dβ_1 + (1-q) = θ_t,1+1/σ^2y^⊤x_1/(t+1/σ^2+1/τ_1^2)+1-q/q(t+1/σ^2+1/τ_1^2)^3/2τ_1exp(-(θ_t,1+1/σ^2y^⊤x_1)^2/2(t+1/σ^2+1/τ_1^2)) where we used ∫_-∞^∞ xexp(-ax^2+bx) dx = √(π)b/2a^3/2exp(b^2/4a) and ∫_-∞^∞exp(-ax^2+bx) dx = √(π/a)exp(b^2/4a) for a>0. The effect of spike is to introduce shrinkage – in particular if we look at the denominator, it only becomes prominent when (for q≥ 1/p) |θ_t,1+1/σ^2y^⊤ x_1|≤√((t+1/σ^2+1/τ_1^2)log(τ_1^2 p^2(t+1/σ^2+1/τ_1^2))) , and in the case X=I, y^⊤ x_1 = y_1. For small t, this gives the threshold for |y_1| ≲σ√(2log(pτ_1/σ)); and for large t, this becomes |θ_t,1|≲√(2tlog(τ_1 p √(t))). For the sampling dynamics dβ_t,1 = a_t,1(β_t,t)dt+dW_t , we see that initially if |y_1| is above the threshold, it behaves almost like a linear SDE with time-dependent drift β_t,1+1/σ^2y^⊤x_1/t+1/σ^2+1/τ_1^2 that can be integrated exactly and β_t,1 scales as ∼ t; otherwise the Brownian motion part will take over and β_t,1 roughly scales as ∼√(t). As t→∞, with all else holding constant (i.e., for any finite sample size n), the drift a_t,1(β_t,t)≈β_t,1+1/σ^2y^⊤ x_1/t+1/σ^2+1/τ_1^2≈tβ_1^*+W_t/t→β_1^*∼π_1 if β_t,1≳√(t), signaling it will converge to the slab part of the posterior (<ref>); otherwise if β_t,1≲√(t), a_t,1(β_t,t)≈β_t,1+1/σ^2y^⊤ x_1/1-q/q(t+1/σ^2+1/τ_1^2)^3/2τ_1≈β_t,1/t^3/2→ 0 . 
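The closed-form drift above is cheap to evaluate coordinate-wise; the following sketch transcribes it (with s standing for y^T x_j, equal to y_j when X = I) and can be plugged directly into the generic localization loop sketched in the preliminaries. Names and defaults are ours.

import numpy as np

def drift_pointmass_orthogonal(theta, s, t, sigma2, tau1, q):
    # Tilted mean a_{t,j}(theta_t, t) for the point-mass spike under
    # orthogonal design, applied coordinate-wise.
    c = t + 1.0 / sigma2 + 1.0 / tau1**2
    u = theta + s / sigma2
    shrink = (1 - q) / q * c**1.5 * tau1 * np.exp(-u**2 / (2 * c))
    return u / (c + shrink)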
On the other hand, with Gaussian spike and sparsified likelihood (<ref>), for τ_1 ≫τ_0, a_t,1( θ_t,t)=∫β_1· e^(θ_t,1+1/σ^2 y^⊤x_1)β_1-(t/2+1/2σ^2)β_1^2 q1/√(2π)τ_1e^-β_1^2/2τ_1^2 dβ_1+∫β_1· e^θ_t,1β_1-t/2β_1^2 (1-q)1/√(2π)τ_0e^-β_1^2/2τ_0^2 dβ_1/∫ e^(θ_t,1+1/σ^2 y^⊤x_1)β_1-(t/2+1/2σ^2)β_1^2 q1/√(2π)τ_1e^-β_1^2/2τ_1^2 + e^θ_t,1β_1-t/2β_1^2 (1-q)1/√(2π)τ_0e^-β_1^2/2τ_0^2 dβ_1 = q/τ_1θ_t,1+1/σ^2 y^⊤ x_1/(t+1/σ^2+1/τ_1^2)^3/2exp((θ_t,1+1/σ^2y^⊤ x_1)^2/2t+2/σ^2+2/τ_1^2)+1-q/τ_0θ_t,1/(t+1/τ_0^2)^3/2exp(θ_t,1^2/2t+2/τ_0^2)/q/τ_11/(t+1/σ^2+1/τ_1^2)^1/2exp((θ_t,1+1/σ^2y^⊤ x_1)^2/2t+2/σ^2+2/τ_1^2)+1-q/τ_01/(t+1/τ_0^2)^1/2exp(θ_t,1^2/2t+2/τ_0^2) = θ_t,1+1/σ^2 y^⊤ x_1/(t+1/σ^2+1/τ_1^2)+1-q/qτ_1/τ_0(t+1/σ^2+1/τ_1^2)^3/2/√(t+1/τ_0^2)exp(θ_t,1^2/2t+2/τ_0^2-(θ_t,1+1/σ^2 y^⊤ x_1)^2/2t+2/σ^2+2/τ_1^2) +θ_t,1/(t+1/τ_0^2)+q/1-qτ_0/τ_1(t+1/τ_0^2)^3/2/√(t+1/σ^2+1/τ_1^2)exp((θ_t,1+1/σ^2 y^⊤ x_1)^2/2t+2/σ^2+2/τ_1^2-θ_t,1^2/2t+2/τ_0^2) . Therefore as t→∞, with all else holding constant (i.e., for any finite sample size n), one of the above two terms will go to θ_t,1/exp(t)=tβ_1^*+W_t/exp(t)→ 0 and the other go to θ_t,1/t=tβ_1^*+W_t/t→β_1^*∼π_1, depending on whether (θ_t,1+1/σ^2 y^⊤ x_1)^2/t+1/σ^2+1/τ_1^2≶θ_t,1^2/t+1/τ_0^2 , if θ_t,1≳√(t), which is the only possibility since the posterior π puts zero mass at 0 exactly (with the first δ_0(β_j) term from (<ref>) replaced by another convolved density ϕ_σ(y_j-β_j) g_τ_0(β_j)). Consequently, continuous spike-and-slab priors yield non-sparse posterior point estimators that require thresholding for variable selection, and the alternative of selection based on ℙ(z|y) can be expensive generally. For some intuition on the time discretization of the SDE, take the point-mass spike-and-slab for example, since π is sub-Gaussian (therefore Novikov's condition holds with a very similar argument as below), using Girsanov's theorem, and consider the two SDEs: dβ_t = a(β_t,t)dt+dW_t, same as (<ref>) dβ̂_t = a(β̂_kh,kh) dt+dW_t, for t∈[kh,(k+1)h] an interpolation of discrete update (<ref>) where (β_t)_t ∼ Q, (β̂_t)_t ∼ P are two path measures, and one can obtain with the data processing inequality, KL(π_Kh || μ_Kh) ≤KL(Q_Kh || P_Kh) ≲∑_k=1^K∫_kh^(k+1)h𝔼_Q[a(β_t,t)-a(β_kh,kh)^2] dt ≲ L(σ,h,τ_0,τ_1,y) ∑_k=1^K∫_kh^(k+1)h𝔼_Q[β_t-β_kh^2] dt ≲ L(σ,h,τ_0,τ_1,y) ∑_k=1^K∫_kh^(k+1)h [(t-kh)^2𝔼_Q[a(β_t,t)^2] + 2d (t-kh)] dt which means if h and K are sufficiently small, since using Jensen's inequality, the drift 𝔼_Q[a(β_t,t)^2]= 𝔼_Q[∫β p_t,θ_t(β) dβ^2]≤𝔼_Q[∫β^2 p_t,θ_t(β) dβ] = ∫β^2 π(β) dβ < ∞ along the dynamics as shown in <ref>, the two processes will be close to each other in law. Above L is a constant depending on σ, h, τ_0,τ_1,y since each coordinate a_j(β_t,t) can be written as for some c(0),c(1) > 0, min{v(0),v(1)}≤v(0)c(0)+v(1)c(1)/c(0)+c(1)≤max{v(0),v(1)} where v(1) = (1/σ^2+1/τ_1^2+t)^-1(1/σ^2 y^⊤ x_i+β_t,i) and v(0)=(1/τ_0^2+t)^-1β_t,i, and similarly for a_j(β_kh,kh) therefore a(β_t,t)-a(β_kh,kh) can be bounded by the claimed quantities. Notice that above we didn't use any approximations for a(·) – since the computation scales linearly with p instead of exponentially in this case, we didn't rely on probabilistic arguments / large-scale behavior on the model for showing convergence of the time-discretized SDE (<ref>) for sampling from π (of course, for π to behave well statistically however, τ_1,τ_0, q will have to be chosen carefully as we will see in <Ref>). 
§.§ Spike-and-Slab Linear Regression: Mean Computation Recall from <ref> the posterior marginal over β in this case is a discrete mixture of log-concave densities: π(β| y)∝∑_z∈{0,1}^p q^z_0(1-q)^p-z_0×e^-1/2β^⊤ D_z^-1β/√((2π D_z))×e^-1/2σ^2y-X_zβ_z^2/(2πσ^2)^n/2 where D_z is diagonal with τ_1^2 if z=1 and τ_0^2 otherwise (τ_1 ≫τ_0), and we will adopt the same assumption as in <Ref> that the data/posterior belong to ℰ_s implying posterior concentration with the initial number of false positives t bounded (the design matrix X is again assumed deterministic satisfying the same “restricted isometry" conditions). We note that the posterior (<ref>) is non-convex / non-smooth so max (MAP) estimator is also hard to obtain from optimization, but integration/sampling can be somewhat easier under favorable statistical assumptions. For the sparsified likelihood with continuous priors (<ref>), we have given fixed t,θ_t,y, q∈ (0,1) the approximate drift â(θ_t,t) = ∑_z: z ∈𝒮 v(z)· c(z)/∑_z: z ∈𝒮 c(z) , where v(z),c(z) are defined in (<ref>)-(<ref>), and 𝒮 is the warm start set with z^*⊂ z, z_0≤ k+t. Recall from <ref> that a warm-start with number of false positives t≍ k can generally be expected under <ref>, therefore ∑_i=0^k t+k i≤ (e(t+k)/k)^k≍ ((t+k)/k)^k≍ c^t number of sub-models are evaluated at each time step. Additionally, under the statistical assumptions for <Ref>, 1/√(p)â(θ_t,t)- a(θ_t,t) converges to zero in probability as n→∞, as the rest of z^* ⊄z contributes vanishingly small to the posterior. By definition the tilted mean (as a function of the random measure π) takes the form a(θ_t,t)= ∑_z∈{0,1}^p∫_ℝ^pβ· p_t,θ_t(β,z) dβ= ∫_ℝ^pβ·exp(θ_t^⊤β-tβ^2/2) π(β) dβ = ∑_z ∈{0,1}^p q^z_0(1-q)^p-z_0[∫_ℝ^pβ· e^θ_t^⊤β-tβ^2/2-1/2σ^2y-X_zβ_z_2^2-1/2β^⊤ D_z^-1β dβ]1/√((2π D_z))/∑_z∈{0,1}^p q^z_0(1-q)^p-z_0×∫_ℝ^p e^θ_t^⊤β-tβ^2/2×e^-1/2β^⊤ D_z^-1β/√((2π D_z))× e^-1/2σ^2y-X_zβ_z^2 dβ = ∑_z ∈{0,1}^p v(z)· c(z)/∑_z∈{0,1}^pc(z) for vector v(z) ∈ℝ^p, v(z)_j = [(1/σ^2X_z^⊤ X_z+1/τ_1^2I+tI)^-1(1/σ^2X_z^⊤ y+θ_t,z)]_j if j is active [(1/τ_0^2I+tI)^-1θ_t,1-z]_j otherwise , furthermore the scalar c(z) = exp(1/2(1/σ^2 y^⊤ X_z+θ_t,z^⊤) (1/σ^2X_z^⊤ X_z+1/τ_1^2I+tI)^-1(1/σ^2X_z^⊤ y+θ_t,z))/√((1/σ^2X_z^⊤ X_z+1/τ_1^2I+tI)) ×exp(1/2θ_t,1-z^⊤(1/τ_0^2I+tI)^-1θ_t,1-z)/√((1/τ_0^2I+tI))× (qτ_0/(1-q)τ_1)^z_0 where we used Gaussian integral and completion of squares. The approximate posterior mean which acts as the drift of the SDE (<ref>) is given by for the warm start set 𝒮:={z: z^*⊂ z, z_0≤z^*_0+t} with at most k+t ≪ p active coordinates, â(θ_t,t) = ∑_z: z ∈𝒮 v(z)· c(z)/∑_z: z ∈𝒮 c(z) where computing (<ref>) involves solving linear systems of size z_0×z_0 with both changing left and right hand sides t and θ_t. Asymptotically as t→∞, the drift becomes z-independent and approaches θ_t/t=β+W_t/t →β for some random β∼π, which we output. From the denoising perspective, the task gets easier as t→∞ since the signal-to-noise ratio grows like t/√(t)=√(t). Using (<ref>) as a consequence of <Ref>, we have ∀ϵ > 0, recall p_n→∞ as n→∞ such that p_n=e^o(n), since z^*∈𝒮 by definition, lim_n→∞ℙ(1/√(p)â(θ_t,t)-a(θ_t,t)≥ϵ) ≤lim_n→∞ℙ(1/√(p)â(θ_t,t)-a(θ_t,t)≥ϵ | π(z^*|y)≥ 1-1/p)+lim_n→∞ℙ(π(z^*|y) ≤ 1-1/p) ≤ 0+lim_n→∞1/p = 0 therefore p-lim_n→∞â(θ_t,t)_j = p-lim_n→∞ a(θ_t,t)_j yields the convergence in probability claim. We can use pre-computation scheme and cache a factorization of X_z^⊤ X_z (generally expected to be full rank since k≲ n) to speed up the subsequent calculation. 
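A direct (non-optimized) transcription of the approximate drift is sketched below; S is the warm-start model set, passed as a list of boolean masks each containing the warm-start support (so nonempty), the determinant over the inactive block is taken with its dimension p - ||z||_0, and log-weights are combined with a log-sum-exp for stability. Caching factorizations and low-rank updates, as discussed next, would be used in a serious implementation.

import numpy as np
from scipy.special import logsumexp

def approx_drift(theta, t, X, y, S, sigma2, tau1_sq, tau0_sq, q):
    # \hat a(theta_t, t) = sum_{z in S} v(z) c(z) / sum_{z in S} c(z),
    # with v(z), c(z) as in the display above.
    p = X.shape[1]
    logw, means = [], []
    for act in S:
        k = act.sum()
        v = np.empty(p)
        v[~act] = theta[~act] / (1.0 / tau0_sq + t)
        A = X[:, act].T @ X[:, act] / sigma2 + (1.0 / tau1_sq + t) * np.eye(k)
        u = X[:, act].T @ y / sigma2 + theta[act]
        sol = np.linalg.solve(A, u)
        v[act] = sol
        _, logdetA = np.linalg.slogdet(A)
        lw = (0.5 * u @ sol - 0.5 * logdetA
              + 0.5 * np.sum(theta[~act] ** 2) / (1.0 / tau0_sq + t)
              - 0.5 * (p - k) * np.log(1.0 / tau0_sq + t)
              + k * np.log(q * np.sqrt(tau0_sq) / ((1 - q) * np.sqrt(tau1_sq))))
        logw.append(lw)
        means.append(v)
    w = np.exp(np.array(logw) - logsumexp(logw))
    return w @ np.array(means)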
Since the sub-models under consideration share common features, one should also use Sherman-Morrison for low-rank updates whenever possible. If the integral is hard to compute analytically, one might hope to use Laplace approximation. It may also be possible to use mode instead of mean if the posterior consists of mixture of log-concave distributions (they can be shown to be not far apart due to measure concentration for log-concave densities), in the case of more general slab distributions. §.§ Spike-and-Slab Linear Regression: SDE Implementation Recall we discretize as β_k+1 = β_k+h·â(β_k,kh)+√(h)· z_k, z_k∼𝒩(0,I) independent and output â(β_k,kh) for sufficiently large k. In line with <Ref> we consider a sequence of problems with growing n,p_n,k_n →∞, so the posterior is implicitly indexed by n, and the probabilities are conditional on X. Here p_n/n∼ e^o(n)/n, k_n/n∼log(p_n)/n∼ o(n)/n serve as proxies for statistical difficulty of the problem, which cannot grow too fast. This is a more meaningful limit than the classical fixed p, large n setup. We are interested in the regime where one has variable-selection consistency in the sense 𝔼[π(z^*|y)] ≥ 1-1/p^2, which is established in <ref> under appropriate parameter choices (the allowed scaling of p,k will depend on X for such a guarantee to hold). We study the convergence rate of the Stochastic Localization sampler in this setting – in fact a guarantee of both computational & statistical nature along the lines of 𝔼_β^*(ℙ_n(â(β_K)-β^*≲ M | y^n)) ≥ 1-o_n(1) should also be within-reach for the output of the algorithm under such posterior contraction. The helper lemma below on the exact drift is crucial for the stable discretization of the SDE, where we borrow parts from <cit.>. For some constant C depending on t, the following regularity condition on β(t) ↦ a(β(t),t) holds: for any h ≤ t≤ T and β_k,β_t∈ℝ^p, with probability 1-o_n(1) over the data y^n, a(β_k,t)-a(β_t,t)≤ C(t) β_k-β_t+o_n(1) . Moreover with (k+1)h≤ T, for the continuous process (<ref>) on β̅(t), and sufficiently small h such that h < λ_min (X_z^*^⊤ X_z^*)/σ^2, sup_t∈ [kh,(k+1)h]1/√(p)a(β̅(t),t)-a(β̅(kh),kh) = O_p(√(h)) . Above both are stated under the assumptions for <ref>. Since τ_1→∞, τ_0→ 0 as n→∞, which ensures π(z=z^*|y)→ 1 as n→∞ from <ref>, recall we have for any given β_t,t, v(z)_j = [(1/σ^2X_z^⊤ X_z+1/τ_1^2I+tI)^-1(1/σ^2X_z^⊤ y+β_t,z)]_j → [(1/σ^2X_z^⊤ X_z+tI)^-1(1/σ^2X_z^⊤ y+β_t,z)]_j [(1/τ_0^2I+tI)^-1β_t,1-z]_j → 0 as n→∞ and for some c(z)≥ 0, min_z v(z)_j ≤ a(β_t,t)_j=∑_z ∈{0,1}^p v(z)_j· c(z)/∑_z∈{0,1}^pc(z)≤max_z v(z)_j . Now for any t ≥ h, with probability 1-o_n(1), since X_z^*^⊤ X_z^*≻ 0, a(β_k,t)-a(β_t,t)≤(1/σ^2X_z^*^⊤ X_z^*+tI)^-1_opβ_t-β_k+o_n(1) ≤1/tβ_k-β_t+o_n(1) . 
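Combining the approximate drift with this discretization gives the full sampler; the driver loop below is a sketch that reuses the approx_drift helper sketched in the previous subsection (an assumption of this illustration) and outputs the final drift value as the approximate posterior draw, in line with the algorithm analyzed below.

import numpy as np

def localization_regression_sampler(X, y, S, sigma2, tau1_sq, tau0_sq, q,
                                    T=20.0, h=0.05, rng=None):
    # beta_{k+1} = beta_k + h * \hat a(beta_k, kh) + sqrt(h) * N(0, I);
    # return \hat a at the final time as the sample.
    rng = rng or np.random.default_rng()
    p = X.shape[1]
    beta, k = np.zeros(p), 0
    a = approx_drift(beta, 0.0, X, y, S, sigma2, tau1_sq, tau0_sq, q)
    while k * h < T:
        beta = beta + h * a + np.sqrt(h) * rng.standard_normal(p)
        k += 1
        a = approx_drift(beta, k * h, X, y, S, sigma2, tau1_sq, tau0_sq, q)
    return a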
For the second part, using that for two linear systems Lu=r and L̂û=r̂ where L^-1(L̂-L)<1, the perturbed solution obeys û-u≤L^-1/1-L^-1(L̂-L)((L̂-L)u+r̂-r) ; with probability taken over both the stochastic process (β̅(t))_t and the data y^n/posterior π_n, the sequence of random variables sup_t∈ [kh,(k+1)h]1/pa(β̅(t),t)-a(β̅(kh),kh)^2 is bounded in probability as n→∞ by p-lim_n→∞1/pa(β̅((k+1)h),(k+1)h)-a(β̅(kh),kh)^2 = lim_n→∞1/p𝔼[a(β̅((k+1)h),(k+1)h)-a(β̅(kh),kh)^2] ≤lim_n→∞1/p((1/σ^2X_z^⊤ X_z+kh I)^-1/1-(1/σ^2X_z^⊤ X_z+kh I)^-1 hI)^2𝔼[h a(β̅(kh),kh)+β̅((k+1)h)_z-β̅(kh)_z^2] ≲lim_n→∞1/p((1/σ^2X_z^⊤ X_z+kh I)^-1/1-h(1/σ^2X_z^⊤ X_z+kh I)^-1)^2 𝔼[h^2 a(β̅(kh),kh)^2+β̅((k+1)h)-β̅(kh)^2] ≲lim_n→∞1/p (h^2 𝔼[a(β̅(kh),kh)^2]+h∫_kh^(k+1)h𝔼[a(β̅(t),t)^2] dt+ph) ≲lim_n→∞1/p(h^2 max_t∈[kh,(k+1)h]𝔼[a(β̅(t),t)^2] + ph) ≲ h for h sufficiently small such that h < λ_min (X_z^*^⊤ X_z^*)/σ^2, where we used (1) the update (<ref>) and Cauchy-Schwarz; (2) a(·)_j is bounded almost surely through the localization process as shown in <ref>; (3) dominated convergence theorem to exchange limit and expectation together with π(z=z^*|y)→ 1 as n→∞. Above ≲ hides constant independent of the dimension p. The reduction in the first step (<ref>) where we go from sup over t∈[kh,(k+1)h] to t=(k+1)h follows since t→ a(β̅(t),t) is a bounded martingale according to <ref> for any a(·) constructed with the localization process, therefore 1/√(p)a(β̅(t),t)-a(β̅(kh),kh) is a positive bounded sub-martingale for t≥ kh by Jensen's inequality. Then Doob's maximal inequality gives for a fixed c>0, lim_n→∞ℙ(sup_t∈ [kh,(k+1)h]1/√(p)a(β̅(t),t)-a(β̅(kh),kh)≥ c) ≤1/clim_n→∞1/√(p)𝔼[a(β̅((k+1)h),(k+1)h)-a(β̅(kh),kh)] ≤1/clim_n→∞1/√(p)𝔼[a(β̅((k+1)h),(k+1)h)-a(β̅(kh),kh)^2]^1/2 . Therefore using (<ref>) we can choose c≳√(h)/ϵ deterministically large enough such that the probability above is smaller than ϵ. This in turn implies p-lim_n→∞sup_t∈[kh,(k+1)h]1/√(p)a(β̅(t),t)-a(β̅(kh),kh)≲√(h) , as claimed. Putting everything together, the theorem below is our main result for the Stochastic Localization sampler. Under the assumptions for <ref>, with probability at least 1-o_n(1) over the data and the randomness of the algorithm, for all kh≤ T, we have the following recursion for the errors: 1/√(p)â(β_k,kh)-a(β̅(kh),kh)≲1/kh√(p)β_k-β̅(kh)+ o_n(1)≲ e^ckh√(h)+ o_n(1) . Moreover, there is a constant K independent of the dimension such that after K many steps of <ref> where 𝒯 is implemented with <ref>, we have W_2(π,Law(â(β_K))) ≤√(p)ζ for any desired tolerance ζ with probability at least 1-o_n(1). The total complexity of the algorithm is O_p(c^t n^2k)≲ O_p(c^k p^3) for some constant c if we focus on the scaling with p for warm-start with at most t≍ k false positives. We couple the continuous β̅(kh) and discrete β_k processes (<ref>) with the same Brownian increment, i.e., β̅((k+1)h) = β̅(kh)+∫_kh^(k+1)h a(β̅(t),t) dt + ∫_kh^(k+1)hdW(t) where √(h)z_k = ∫_kh^(k+1)hdW(t) with same initial condition β_0 = β̅(0)=0 and a(β_0,0)=a(β̅(0),0). Here a(·) denotes the exact drift from <ref> and â(·) the approximate one from (<ref>). Therefore we have for any (k+1)h ≤ T, with probability 1-o_n(1), 1/√(p)β̅((k+1)h)-β_k+1 ≤1/√(p)β̅(kh)-β_k+1/√(p)∫_kh^(k+1)ha(β̅(t),t)-â(β_k,kh)dt ≤1/√(p)β̅(kh)-β_k+h/√(p)a(β̅(kh),kh)-â(β_k,kh)+h/√(p)sup_t∈ [kh,(k+1)h]a(β̅(t),t)-a(β̅(kh),kh) ≲1/√(p)β̅(kh)-β_k+h/√(p)a(β̅(kh),kh)-â(β_k,kh)+h^3/2 where we used the regularity property from <ref> in the last step. 
Due to the posterior concentration assumption, using Markov's inequality, with probability 1-o_n(1), for any k, 1/√(p)â(β_k)-a(β_k)≤δ (n) where lim_n→∞δ(n)=0 is a non-negative deterministic sequence. Together with <ref> give that with probability 1-o_n(1), 1/√(p)â(β_k+1,(k+1)h)-a(β̅((k+1)h),(k+1)h) ≤1/√(p)â(β_k+1,(k+1)h)-a(β_k+1,(k+1)h)+1/√(p)a(β_k+1,(k+1)h)-a(β̅((k+1)h),(k+1)h) ≤δ(n)+1/(k+1)h√(p)β_k+1-β̅((k+1)h) . Now putting the last two displays together, and inducting over k, we conclude that with high probability 1/√(p)β̅((k+1)h)-β_k+1≲ e^c(k+1)h(k+1)h^3/2 +δ(n) , 1/√(p)â(β_k,kh)-a(β̅(kh),kh)≲1/kh(e^ckhkh^3/2+ δ(n))+δ(n) , since it verifies the recursion 1/√(p)β̅((k+1)h)-β_k+1 ≲ e^kh kh^3/2 +δ(n) + h/kh (e^ckhkh^3/2+ δ(n))+hδ(n)+h^3/2 ≲ e^ckhh^3/2(k+1)+h^3/2+δ(n) ≲ e^c(k+1)h(k+1)h^3/2 +δ(n) , finishing the first part of the statement. This in turn implies using the continuous time convergence rate from <ref> and the coupling definition of the W_2 distance, for K=T/h, 1/√(p)W_2(π,Law(â(β_K))) ≤1/√(p)W_2(π,Law(a(β̅(T)))) + 1/√(p)W_2(Law(a(β̅(T))),Law(â(β_K))) ≤ 1/√(T)+C(T)h^1/2+δ(n) therefore for n sufficiently large, when T is sufficiently large and h suitably small (both are independent of the dimension), we have W_2(π,Law(â(β_K))) ≤√(p)ζ, for any desired ζ>0, which holds with probability 1-o_n(1) w.r.t randomness in y such that (<ref>) holds (X deterministically verifies restricted eigenvalue properties). The complexity of the algorithm now follows by putting together with <ref>. The main benefit of the Stochastic Localization sampler lies in its obliviousness to the “ill-design" of the data matrix X (e.g., if there are strong correlation between some columns of X), where we see from <ref> that even under warm-start and posterior contraction (s=0), such terms still show up and scale with the mixing time exponentially. The guarantee of <ref> is in W_2 distance and not TV, but both have √(p) scaling with dimension. In both cases the scaling with the number of initial false positives t is less than ideal, but a warm-start is essentially necessary for efficiently simulating from such a mixture posterior. § (FREQUENTIST) STATISTICAL PROPERTIES OF POSTERIOR (<REF>) In this section, we justify the posterior concentration assumption made on the sparsified likelihood model (<ref>). We highlight the importance of diffusing and shrinking priors for this class of posteriors as in <cit.> (i.e., allowing the prior parameters to depend on n), which is required for strong model selection consistency π(z=z^*|y ) 1 as n→∞ in high-dimensional setting where p is allowed to grow with n exponentially, i.e., p_n=e^o(n). This choice can in some sense be seen as adjusting for multiplicity. §.§ Warm-up: Sparse Normal Means Model Let us motivate the choice of τ_0,τ_1, q by considering the setup X^⊤ X = n I_p where p_n≤ n, and study under what conditions on the priors do the corresponding posteriors confer model selection consistency. With a sparsified likelihood, the model selection consistency requirement is the same for point-mass spike and Gaussian spike under orthogonal design, which is satisfied by the choice in <ref> under β-min condition (<ref>). 
The posterior for z is (define β̂_j:= y^⊤ X_z^*,j/n) ℙ(z=z^*|y) ∝∫_ℝ^pexp(-1/2σ^2y-X_z^*β_z^*^2)∏_j=1^p(1-q/τ_0exp(-β_j^2/2τ_0^2))^1-z_j^*(q/τ_1exp(-β_j^2/2τ_1^2))^z_j^* dβ ∝∏_z^*_j=0∫_ℝ (1-q/τ_0exp(-β_j^2/2τ_0^2))^1-z_j^*dβ_j∏_z_j^*=1∫_ℝexp(-n/2σ^2(β_j-β̂_j)^2)(q/τ_1exp(-β_j^2/2τ_1^2))^z_j^* dβ_j = ∏_z_j^*=1𝔼_β_j∼𝒩(β̂_j,σ^2/n)[exp(-β_j^2/2τ_1^2)]·q/τ_1exp(1/2σ^2 ny^⊤ X_z^*,jX_z^*,j^⊤ y)/𝔼_β_j∼𝒩(β̂_j,σ^2/n)[exp(-β_j^2/2τ_1^2)]·q/τ_1exp(1/2σ^2 ny^⊤ X_z^*,jX_z^*,j^⊤ y)+1-q× ∏_z_j^*=01-q/𝔼_β_j∼𝒩(β̂_j,σ^2/n)[exp(-β_j^2/2τ_1^2)]·q/τ_1exp(1/2σ^2 ny^⊤ X_z^*,jX_z^*,j^⊤ y)+1-q =: ∏_z_j^*=1 a_j ∏_z_j^*=0 b_j One can show for each of the k terms corresponding to z_j^*=1 using completion of squares, r_j :=q/τ_1𝔼_β_j∼𝒩(β̂_j,σ^2/n)[exp(-β_j^2/2τ_1^2)]exp(1/2σ^2 ny^⊤ X_z^*,jX_z^*,j^⊤ y) =q/√(1+n τ_1^2/σ^2)exp(1/2(1/τ_1^2+n/σ^2)^-1β̂_j^2n^2/σ^4-β̂_j^2n/2σ^2)exp(β̂_j^2 n/2σ^2) = q/√(1+n τ_1^2/σ^2)exp(1/2(1/τ_1^2+n/σ^2)^-1(y^⊤ X_z^*,j)^2/σ^4) where we recall y=X_z^*β^*_z^*+ϵ and we require for j such that z_j^*=1, |β̂_j|^2 > cσ^2log(p)/n for a large enough c, and z^*_0=k_n <p_n ≤ n. To have ℙ(z=z^*|y ) 1, a sufficient condition is to have ∑_j=1^p ℙ(z_j≠ z_j^*|y) 0, or equivalently, min_j∈ [p]ℙ(z_j= z_j^*|y) ≥ 1-η/p for a sufficiently small η; but generally requiring min_j∈ [p]ℙ(z_j= z_j^*|y) 1 is a weaker consistency result. We begin with the first term, for the product of k terms to go to 1 as n→∞, we see that using Bernoulli's inequality, 1←∏_z_j^*=1 a_j ≥ ( min_z_j^*=1 a_j)^k_n≥ (1-max_z_j^*=1ℙ(z_j=0|y))^k_n≥ 1-k_n ·max_z_j^*=1ℙ(z_j=0|y) therefore we need k_n ·max_z_j^*=1ℙ(z_j=0|y) → 0, which means it suffices for |β̂_j|^2 ≍σ^2log(p)/n 1-q/r_j=1-q/q√(1+nτ_1^2/σ^2)exp(-1/2(1/τ_1^2+n/σ^2)^-1β̂_j^2n^2/σ^4) ≪1/k_n . Similarly for the second term, 1←∏_z_j^*=0 b_j ≥ ( min_z_j^*=0 b_j)^p≥ (1-max_z_j^*=0ℙ(z_j=1|y))^p≥ 1-p ·max_z_j^*=0ℙ(z_j=1|y) . which implies since z_j^*=0, the exp(·) term from r_j vanishes using (<ref>), q/(1-q)√(1+n τ_1^2/σ^2)≪1/p . In both cases, it suffices to impose (1-q)/q∼ p,τ_1∼σ p/√(n), and it is crucial for them to scale with (n,p) to achieve model selection consistency. In this particular case τ_0 doesn't play a role, e.g., whether we pick point-mass spike or Gaussian spike. Variable selection is generally considered a harder problem than parameter estimation / prediction <cit.>, therefore one should expect good performance with respect to those criteria as well from these choices, which is indeed the case as shown next in the regression setting. §.§ Posterior Contraction We show that posterior contraction conditions in ℰ_s are statistically grounded in this section. From an information-theoretic perspective, one generally needs some identifiability assumptions on the design matrix for statistical estimation / posterior consistency, and these will show up here as follows. For all u∈ℝ^p such that z^*,c(u-β^*)_1 ≤ 7z^*(u-β^*)_1, there exist R>0 where for β^*_0=k, 1/n(u-β^*)^⊤(X^⊤ X) (u-β^*)≥ R·z^*(u-β^*)_2^2 , which is closely related to (<ref>), but slightly relaxed so the restricted eigenvalue direction can be not exactly sparse, rather take small values off support of z^*. We also assume a general condition for k'≍ k, the exact-sparsity restricted eigenvalue min_v_0≤ k' v^⊤ (X^⊤ X)v ≥ω(k') n v^2 is bounded away from 0 by a small constant ω(k'). Additionally, β^*_∞ = 𝒪(1) doesn't grow with n. 
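As a small numerical companion to this calculation, the sketch below evaluates r_j and the resulting marginal inclusion probability a_j = r_j/(r_j + 1 - q) under the prescribed scalings q/(1-q) = 1/p and τ_1 = σ p/√n (here s_j stands for y^T X_{z^*,j} = n·β̂_j); for strong signals the exponential should be handled in log space to avoid overflow, which we omit for brevity.

import numpy as np

def inclusion_prob_normal_means(s_j, n, p, sigma):
    # r_j and P(z_j = 1 | y) for the normal-means calculation above.
    q = 1.0 / (1.0 + p)                      # q/(1-q) = 1/p
    tau1_sq = sigma**2 * p**2 / n            # tau_1 = sigma * p / sqrt(n)
    r = (q / np.sqrt(1.0 + n * tau1_sq / sigma**2)
         * np.exp(0.5 * s_j**2 / (sigma**4 * (1.0 / tau1_sq + n / sigma**2))))
    return r / (r + (1.0 - q))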
We consider a sequence of problems with n,p→∞ where n=o(p) and demonstrate that for s=0, condition π(z∈{0,1}^p z^*⊂ z, z_0≤z^*_0+s| y)≥ 1-4/p^δ/2(s+1) from <Ref> holds with δ=2, which implies π(z^*| y)≥ 1/2 for p>9 with probability at least 1-o_n(1) over the data, since using (3) from <ref> and Markov's inequality ℙ((1-π(z^*|y))≥ 1/p) ≤𝔼[1-π(z^*|y)]/1/p≤1/p^2/1/p=1/p . We characterize the large scale behavior of the posterior (<ref>) below. Under the parameter choice q/(1-q)∼ 1/p^δ+1 for some constant δ>0, τ_1∼σ p/√(n), X_j_2^2=n from <ref>, in addition to the β-min condition (<ref>) and <ref> above, it holds in the regime p_n=e^o(n) that * 𝔼[π(z:z_0≳ k(1+1/δ)|y)] ≤2/p^2 * 𝔼[π(B^c | y)] ≲ 1/p^2 for B=∪_z:z_0≲ k {β:β_z-β^*≲σ√(klog(p))/√(n)ω(k), β_z-β≲τ_0 √(p)} * 𝔼[π(z^*|y)] ≳ 1-1/p^2 where in the above expectation is taken with respect to the noise ϵ only and X deterministically satisfy the stated assumptions. Moreover, for <ref> to hold with probability tending to one as n→∞, for example with a Gaussian design matrix X_ij∼𝒩(0,1), it entails n≳ klog(p) and k≲log(p) for the sample size and sparsity level respectively. In general n,k will scale with the “ill-design-ness" of the matrix X. We build upon the result in <cit.> and verify the conditions stated there. In our case, ℓ(β_z,y)=1/2σ^2y-X_zβ_z^2, therefore with probability at least 1-2/p^2 since ϵ∼𝒩(0,σ^2 I), ∇ℓ(β^*;y)_∞=-1/σ^2X^⊤ (y-Xβ^*)_∞=1/σ^2X^⊤ϵ_∞≤√(n)/σ√(2log(p))=: ρ̅/2 and for β,β^* ∈ℝ^p where β has the same support as β^*, since X_z^⊤ X_z≼X_z_F^2 · I, ℒ_β^*(β;y) = -1/2σ^2(β-β^*)^⊤ (X^⊤ X) (β-β^*) ≥ -nk/2σ^2β-β^*_2^2 =: -κ̅/2β-β^*_2^2 which means H1 is satisfied. For H2, it suffices to check p^δ/2≳ e^2n/σ^2 p^2, which holds under p=e^o(n) as we assume. Starting from Theorem 2 therein, we check equation (2), picking ℰ as the intersection of (<ref>) and <ref>, we have on this event using the Gaussian moment-generating function, 𝔼[e^ℒ_β^*(β;y)+(1-ρ_1/ρ̅)⟨∇ℓ(β^*;y),β-β^*⟩] =𝔼[e^-1/2σ^2(β-β^*)^⊤ (X^⊤ X) (β-β^*)-1-ρ_1/ρ̅/σ^2(β-β^*)^⊤ X^⊤ϵ] =𝔼[e^-1/2σ^2(1-(1-ρ_1/ρ̅)^2)(β-β^*)^⊤ (X^⊤ X) (β-β^*)] ≤ e^-Rn(1-(1-ρ_1/ρ̅)^2)/2σ^2β-β^*_2^2 therefore we can pick the rate function r_0(x)=Rn(1-(1-ρ_1/ρ̅)^2)/σ^2 x^2 for such β's. Since ρ_1 in our context is 1/τ_1^2, it is clear that ρ_1 < ρ̅, therefore r_0(x) ≥Rn/σ^2τ_1^2 ρ̅x^2, x≥ 0, which means since neither R nor σ scales with n, a_0:= -min_x>0{ r_0(x)-4/τ_1^2√(k)x}≲k√(nlog(p))/Rσ p^2 is bounded above by an absolute constant. It can then be checked that condition (3): k(1/2+2/τ_1^2)+k/2log(1+κ̅τ_1^2)+a_0/2+2/τ_1^2β^*_2^2 ≤ c_0 klog(p) holds with an absolute constant c_0 with the specified τ_1 and β^*_∞. Therefore Theorem 2 concludes for k':=k(1+1/δ) > k, picking j= 4/δ, 𝔼[π(z:z_0≳ k'|y)] ≤2/p^2 . For the second part, again using <ref>, for all β with at most k' active coordinates, ℒ_β^*(β;y)=-1/2σ^2(β-β^*)^⊤ (X^⊤ X) (β-β^*) ≤ -1/2n/σ^2ω(k+k')β-β^*_2^2=: -1/2r(β-β^*_2) therefore we are on the event ℰ_1(k') with the above rate function. Take the contraction radius ζ :=inf{l>0: n/σ^2ω(k+k')x^2-4√(k+k')√(nlog(p^2))/σx ≥ 0 ∀ x≥ l} ≍σ√((k'+k)log(p^2))/√(n)ω(k+k')≍σ√(klog(p))/√(n)ω(k) Note this contraction rate is largely comparable to the “ideal" near-minimax benchmark in (<ref>) assuming ω(k) is a constant. Now we check that equation (8) C√(nlog(p^2))/σ√(k+k')σ√((k'+k)log(p^2))/√(n)ω(k+k')≳max{k'log(p),(1+δ)klog(p+p^3 k)} holds with an absolute constant C since we assume both ω(k+k') and δ to be constants. 
Applying Theorem 3, together with (<ref>) gives the contraction rate 𝔼[π(B^c | y)]≤2/p^2+8e^-√(nlog(p^2))/σ√(k+k')ζ +2e^-p≲1/p^2 where we define the set B:=∪_z:z_0≤ k' {β:β_z-β^*≲ζ, β_z-β≲τ_0 √(p)} , which describes the set of β's that have most of the mass concentrated on k'-sparse sub-vector and on the support is close to β^*. Now for the (perfect) model selection, on event ℰ_2(k') we have ∩_j=1^k'-k 𝒰_j := ∩_j=1^k'-k{max_z^*⊂ z,z_0=k+j 1/2σ^2(y-Xβ_z_2^2 - y-Xβ_z^*_2^2)≤jδ/2log(p)} , which happens with high probability since by union bound and y-Xβ_z^2=(I-P_z)y^2=y^2-P_z y^2, ∑_j=1^k'-kℙ(𝒰_j^c) = ∑_j=1^k'-kℙ(max_z^*⊂ z,z_0=k+j y^⊤ (P_z^*-P_z)y ≥ jδσ^2 log(p)) = ∑_j=1^k'-kℙ(max_z^*⊂ z,z_0=k+jχ^2(dof=z_0-z^*_0,non-central=(Xβ^*)^⊤(P_z^*-P_z)Xβ^*) ≥ jδσ^2 log(p)) ≲∑_j=1^k'-k p^-σ^2 δ j/4≲1/p^2 where we used the concentration inequality for the central χ^2 distribution since y=Xβ^*+ϵ∼𝒩(Xβ^*,σ^2 I), the above non-centrality parameter is in fact 0 and P_z∈ℝ^n× n denotes the orthogonal projector onto the column span of X_z (idempotent of rank z_0), and similarly for P_z^*. We also used that z^*⊂ z above. We can also deduce that κ̅=nk/σ^2, κ=nω(k+k')/σ^2, i.e., the matrix X_z is full-column rank (restricted strong-convexity) and restricted smooth on the event ℰ_1(k') (since Hessian is constant, the inner inf and sup in the definition of (12) and (13) are immaterial here). Invoking Theorem 5 by setting j=0 with a_2=0 since ℓ is quadratic, with the β-min condition (<ref>) yields 𝔼[1{∩_j=1^k'-k 𝒰_j } (1-π(z^*|y))] ≲ e^√(k')ζ/τ_1^2√(1/τ_1^2 κp^δ)+1/p^2≲1/p^2 , where we used (<ref>) and κp^δ≳ 1/τ_1^2 that is satisfied by our choice. Now to remove the conditional event inside, putting together with (<ref>) gives the desired result 𝔼[π(z^*|y)] ≳ 1-1/p^2 , since 𝔼[1-π(z^*|y)] ≤𝔼[1{∩_j=1^k'-k 𝒰_j} (1-π(z^*|y))] +ℙ({∩_j=1^k'-k 𝒰_j}^c) ≤𝔼[1{∩_j=1^k'-k 𝒰_j} (1-π(z^*|y))] +∑_j=1^k'-kℙ(𝒰_j^c), and the last required condition ζ√(κ)≳√(k) also checks out. The last claim about the scaling of n,k for the Gaussian design to hold with high probability follows from well-known results in high-dimensional statistics <cit.> – the condition on ω(k') is already used in the proof of <ref>. This result implies that in the high-dimensional regime n=o(p) and for well-chosen parameters, one has with high probability (1) sparse support; (2) contraction towards β^*; (3) model selection consistency for the posterior π(·|y). We remark that the result does not in fact depend crucially on the scaling of τ_0 (the prior for the spike), other than it should decrease with n. Both the posterior contraction rate and the dependence of prior parameters on n,p also bear resemblance with another family of continuous priors <cit.> with heavier-tailed Laplace spike and slab, assuming q fixed (i.e., non-hierarchical prior). In fact, the relative density ratio expression from (<ref>)-(<ref>) also hint at a connection to ℓ_0-penalty if we look at the posterior mode. 
Since we have τ_1 →∞ and q/(1-q)∼ 1/p, max_z∈ℰ_slog(π(z| y)/π(z^*| y)) =max_z∈ℰ_slog( (q/1-q)^z_0-z^*_0/√((I+τ_1^2/σ^2X_z-z^*^⊤ (I+τ_1^2/σ^2X_z^*X_z^*^⊤)^-1X_z-z^*))exp(-τ_1^2/2σ^2y^⊤ X_z (τ_1^2 X_z^⊤ X_z+σ^2 I)^-1 X_z^⊤ y)/exp(-τ_1^2/2σ^2y^⊤ X_z^* (τ_1^2 X_z^*^⊤ X_z^*+σ^2 I)^-1 X_z^*^⊤ y)) ≈min_z∈ℰ_s (z_0-z^*_0)log(p)+1/2σ^2(Xβ_z-y^2-Xβ_z^*-y^2) +log(√((I+X_z-z^*^⊤(X_z^*X_z^*^⊤)^-1X_z-z^*))) ≈min_z∈ℰ_s (z_0-k)log(p)+1/2σ^2Xβ_z-y^2 , which means that asymptotically when the posterior concentrates on z∈ℰ_s with ≤ s false positives, since (<ref>) implies the (·) is uniformly bounded away from 0 on this set, the posterior mode is approximately imposing a ℓ_0-penalty on the model size while trading off with data fitting. The following is an immediate corollary that shows the posterior spread can quantify the remaining uncertainty for inferring β^* based on the observed data y^n. Note 𝒞_n(y^n) below is random since it's constructed using the data y^n. We omit the proof as it is straightforward. Given the conditions that allow consistent model selection π(z=z^*|y ) 1, credible sets for individual parameters β_j building upon the posterior are valid asymptotic confidence sets: π_n(𝒞_n|y^n)=1-α⇒ℙ_β^*(β_j^*∈𝒞_n) 1-α by virtue of the BvM distributional approximation from <cit.> and equation (15) therein. We also mention in passing that the fact we assumed ϵ∼𝒩(0,σ^2 I) should not be considered a limitation for the statistical guarantee stated above. For example, for ϵ with subgaussian tails, concentration inequality for the quadratic form (<ref>) and (<ref>) are readily available. Therefore the posterior (<ref>), which would be slightly mis-specified in this case, is still a meaningful object for inference and design sampling procedures for. § DISCUSSION Our work contributes to the ongoing effort of understanding statistical / computational trade-offs arising from contemporary data science problems. The continuous spike-and-slab priors with quasi-likelihood we study strike good balance between these two goals. While the number of submodels scales as 2^p, natural statistical considerations indicate that it is not necessary to explore the entire state space to get a good approximate sample from the posterior for inference purpose. Moreover, under the same (1) posterior concentration on the parameter; and (2) warm start conditions (possibly implemented using a frequentist point estimator) that enable efficient sampling with a Gibbs sampler, we propose an improved method, based on Stochastic Localization, that is oblivious to the well-posedness of the design matrix. Much like the flurry of work on non-convex optimization which demonstrate that, under various mild statistical assumptions on the data/model and with possibly good initialization, simple gradient-based method can be shown to find good local/global minima efficiently; what we observe in this work is similar in spirit for the sampling analogue that exploit problem structure to avoid worst-case scenarios for sampling from non-log-concave distributions. 
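For illustration of this ℓ_0 connection, one can score candidate models by the asymptotic objective above, taking β_z as the least-squares fit on the support of z (a simplification we adopt here since τ_1 → ∞):

import numpy as np

def l0_style_score(z, X, y, sigma2, k_true):
    # (||z||_0 - k) * log(p) + ||X beta_z - y||^2 / (2 sigma^2), with beta_z
    # the least-squares fit on the support of z; lower is better.
    p = X.shape[1]
    Xz = X[:, z.astype(bool)]
    beta_ls, *_ = np.linalg.lstsq(Xz, y, rcond=None)
    rss = np.sum((Xz @ beta_ls - y) ** 2)
    return (z.sum() - k_true) * np.log(p) + rss / (2 * sigma2)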
Beyond spike-and-slab models, the Stochastic Localization sampler can be more broadly applicable whenever an estimate of the denoising drift 𝔼[β|θ_t=θ] is available (not necessarily in closed-form, an output from an efficient algorithm is also an option) for the Gaussian estimation problem (<ref>), which can be especially useful when the posterior arising from interesting Bayesian statistical models exhibit multi-modal structure – they pose challenge for MCMC-based method but seem to be quite prevalent in practice. siamplain
http://arxiv.org/abs/2307.03974v2
20230708132712
Comparing EventB, $\{log\}$ and Why3 Models of Sparse Sets
[ "Maximiliano Cristiá", "Catherine Dubois" ]
cs.SE
[ "cs.SE" ]
Short-time large deviations of the spatially averaged height of a KPZ interface on a ring Baruch Meerson August 12, 2023 ========================================================================================= Many representations for sets are available in programming languages libraries. The paper focuses on sparse sets used, e.g., in some constraint solvers for representing integer variable domains which are finite sets of values, as an alternative to range sequence. We propose in this paper verified implementations of sparse sets, in three deductive formal verification tools, namely , and 3. Furthermore, we draw some comparisons regarding specifications and proofs. § INTRODUCTION Sets are widely used in programs. They are sometimes first-class objects of programming languages, e.g. SETL <cit.> or <cit.>, but more frequently they are data structures provided in libraries. Many different representations are available, depending on the targeted set operations. In this paper, we focus on sparse sets, introduced by Briggs and Torczon in <cit.>, used in different contexts and freely available for different programming languages (Rust, C++ and many others). In particular, sparse sets are used in constraint solvers as an alternative to range sequences or bit vectors for implementing domains of integer variables <cit.> which are nothing else than mathematical finite sets of integers. Their use in solvers implementations is motivated by -at least- the two following properties: searching and removing an element are constant-time operations—removing requires only two swapping operations on arrays; sparse sets are cheap to trail and restore, which is a key point when backtracking. Confidence on constraint solvers using sparse sets can be improved if the algorithms implementing the main operations are formally verified, as it has been done by Ledein and Dubois in <cit.> for the traditional implementation of domains as range sequences. Hence, the main contribution of this paper is a verified implementation of sparse sets for representing finite sets of integers in , and 3. We prove that the implemented operations preserve the invariants and we also prove properties that can be seen as formal foundations of trailing and restoring. As far as we know, this is the first formally verified implementation of sparse sets, whereas it has been done for other representations e.g. <cit.>. All the specifications and proofs can be found here: <https://gitlab.com/cdubois/sets2023.git>. It has been known for decades that there is no silver bullet for software engineering or software development. The best we can do as software engineers is to increase our toolbox as much as possible and use the best available tool in it for the problem at hand. This software engineer practical principle still applies when it comes to formal development, formal methods and formal verification. In our opinion the Formal Methods (FM for short) community should have as much information as possible about the relative advantages and disadvantages of different FM methods and tools. With the intention to shed some light on the ups and downs of different FM, we specified and verified sparse sets with three different FM techniques. Then, a second contribution of this paper is a comparison of these FM w.r.t. aspects such as expressiveness, specification analysis and automated proof. § SPARSE SETS We deal here with sets as subsets of natural numbers up to N-1, where N is any non null natural number. 
A sparse set S is represented by two arrays of length N called mapD and domD (as in <cit.>), and a natural number sizeD. The array mapD maps any value v ∈ [0,N-1] to its index ind_v in domD; the value indexed by ind_v in domD is v. The main idea that brings efficiency when removing an element or testing membership is to split domD into two sub-arrays, domD[0,sizeD-1] and domD[sizeD, N-1], containing resp. the elements of S and the elements of [0,N-1] not in S. Thus, if S is empty, sizeD is equal to 0; if S is the full set, then sizeD is N. Checking whether an element i belongs to the sparse set S simply consists in evaluating the expression mapD[i]<sizeD. Removing an element from the set consists in moving this element to domD[sizeD, N-1] (with two swaps, in mapD and domD, and a decrement of sizeD). Binding S to the singleton set {v} follows the same idea: moving this element to the first position of domD and assigning the value 1 to sizeD. In our formalizations, we only deal with two operations, removing an element from a sparse set and binding a sparse set to a singleton set, since these two operations are fundamental when solving constraints. In this context, we may also need to walk through all the elements of a variable domain, which means exploring domD[0..sizeD-1]. If minimal and maximal values are required, then they have to be maintained in parallel. This is outside the scope of this work. § EVENT-B FORMAL DEVELOPMENT In this section we succinctly introduce the Event-B formal specification language and, with more detail, the Event-B models for sparse sets. §.§ Event-B Event-B <cit.> is a deductive formal method based on set theory and first order logic allowing users to design correct-by-construction systems. It relies on a state-based modeling language in which a model, called a machine, is made of a state and a collection of events allowing for state changes. The state consists of variables constrained by invariants. Proof obligations are generated to verify the preservation of invariants by events. A machine may use a (mathematical) context which introduces abstract sets, constants, axioms or theorems. A formal design in Event-B starts with an abstract machine which is usually refined several times. Proof obligations are generated to verify the correctness of a refinement step. An event may have parameters. When its guards are satisfied, its actions, if any, are executed, updating state variables. Actions may be (multiple) deterministic assignments, x,y := e,f, or (multiple) nondeterministic ones, x,y :| BAP(x,x',y,y'), where BAP is called a Before-After Predicate relating current (x, y) and next (x', y') values of state variables x and y. In the latter case, x and y are assigned arbitrary values satisfying the BAP predicate. When using such a non-deterministic form of assignment, a feasibility proof obligation is generated in order to check that there exist values for x' and y' such that BAP(x,x',y,y') holds when the invariants and guards hold. Furthermore, when this kind of action is used and refined, the concrete action updating x and y is required to assign them values which satisfy the BAP predicate. In the following, we use Rodin, an Eclipse-based IDE for project management, model edition, refinement and proof, automatic proof obligation generation, model animation and code generation. Rodin supports automatic and interactive provers <cit.>. In this work we used the standard provers (AtelierB provers) and also the SMT solvers VeriT, CVC3 and CVC4. More details about Event-B and Rodin can be found in <cit.> and <cit.>.
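To make the operations of Section 2 concrete before turning to the formal models, here is a minimal executable sketch in Python. It is only an illustration, not part of the three verified developments, and the class and method names are ours.

```python
# Illustrative Python sketch of the sparse-set operations described in Section 2.
# Not part of the verified developments; names are ours.

class SparseSet:
    def __init__(self, n):
        # Initially the set is the full set [0, n-1]: domD and mapD are the
        # identity and sizeD = n, as in the initialisation of the formal models.
        self.n = n
        self.domD = list(range(n))
        self.mapD = list(range(n))
        self.sizeD = n

    def member(self, v):
        # Constant-time membership test: v belongs to the set iff its index
        # in domD is below sizeD.
        return self.mapD[v] < self.sizeD

    def _move(self, v, j):
        # Put value v at position j of domD, keeping mapD and domD inverse
        # of each other (the two swaps mentioned in the introduction).
        i, w = self.mapD[v], self.domD[j]
        self.domD[i], self.domD[j] = w, v
        self.mapD[w], self.mapD[v] = i, j

    def remove(self, v):
        # Move v into the "absent" zone domD[sizeD..n-1] and shrink the set.
        if self.member(v):
            self._move(v, self.sizeD - 1)
            self.sizeD -= 1

    def bind(self, v):
        # Bind the set to the singleton {v}: move v to the first position of
        # domD and set sizeD to 1.
        self._move(v, 0)
        self.sizeD = 1

    def to_set(self):
        # Abstract view: the set is exactly domD[0..sizeD-1].
        return set(self.domD[:self.sizeD])
```

Removed values remain stored in domD[sizeD..n-1], which is the property underlying the cheap trailing and restoration mentioned in the introduction.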
§.§ formalization The formalization is made of six components, i.e. two contexts, a machine and three refinements. Context Ctx introduces the bound N as a non-zero natural number and context Ctx1 extends the latter with helper theorems. The high level machine gives the abstract specification. This model contains a state composed of a finite set D, constrained to be a subset of the (integer) range 0..N-1, and two events, to remove an element from D or set D as a singleton set (see Fig. <ref> in which bind is removed for lack of space). The first refinement (see Fig.<ref>) introduces the representation of the domain as a sparse set, i.e. two arrays mapD and domD modeled as total functions and also the variable sizeD which is a natural number in the range 0..N. Invariants inv4 and inv5 constrain mapD and domD to be inverse functions of each other. The gluing invariant inv6 relates the states between the concrete and former abstract machines. So the set domD[0..sizeD-1] containing the elements of the subarray from 0 to sizeD-1 is exactly the set D. Theorem inv7 is introduced to ease some interactive proofs, it is proved as a consequence of the previous formulas (inv1 to inv6). It follows directly from a theorem of Ctx1 whose statement is inv7 where domD and mapD are universally quantified. Theorem inv8, also used in an interactive proof, and automatically proved by CVC3, states that domD is an injective function. Variables mapD and domD are both set initially to the identity function on 0..N-1 and sizeD to N. So invariants are satisfied at the initial state. Machine SparseSets_ref1 refines the events of the initial machine by non deterministic events. So here the remove event assigns the three state variables with values that satisfy invariants and also such that sizeD strictly decreases and removed elements in domD are kept at the same place (properties in bold font). Event bind follows the same pattern (again not shown here). The second refinement has the same state than the previous refinement (see Fig. <ref>). Its events implement the operations using the new state variables. It is a straightforward translation of the algorithms described in <cit.>. The only reason to have introduced the intermediate model SparseSets_ref1 is to express the properties written in bold font and thus generate, in the next refinement, proof obligations which, when discharged, will not only ensure that the events refined in Fig. <ref> preserve the invariants inv1, inv2 …inv6 but also the local properties regarding sizeD and domD[sizeD..N-1] (SIM proof obligations). The feasibility (FIS) proof obligations generated by the non-deterministic events of SparseSets_ref1 require to prove that there exist values such that the BAP predicate holds. We can prove it using the new values of domD, mapD and sizeD specified in the last refinement as witnesses. The simulation (SIM) proof obligations generated by events of SparseSets_ref2 require to prove that the latter values again satisfy the BAP predicate used in SparseSets_ref1. In order not to do these -interactive- proofs twice, we generalize them and prove them as theorems of the context. Thus to discharge the FIS and SIM proof obligations, we only have to instanciate these theorems to provide a proof. A last algorithmic refinement, omitted here, refines the remove event in two events, removeLastButOne and removeLast. 
The former differs from remove only by its more restrictive guard; the latter is dedicated to the case where the element with index sizeD-1 in domD is removed, thus avoiding the unnecessary swapping. § {log} FORMAL DEVELOPMENT In this section we briefly present the {log} tool and how we used it to encode the model of sparse sets. §.§ {log} {log} is a constraint logic programming (CLP) language and satisfiability solver where sets and binary relations are first-class citizens <cit.>. The tool implements several decision procedures for expressive fragments of set theory and set relation algebra including cardinality constraints <cit.>, restricted universal quantifiers <cit.>, set-builder notation <cit.> and integer intervals <cit.>. In previous work, {log} has been satisfactorily tested against some known case studies <cit.>. {log} code enjoys the formula-program duality. This means that {log} code can behave as both a formula and a program. When seen as a formula, it can be used as a specification on which verification conditions can be (sometimes automatically) proved. When seen as a program, it can be used as a (less efficient) regular program. Due to the formula-program duality, a piece of {log} code is sometimes called a forgram—a portmanteau word resulting from combining formula with program. §.§ {log} formalization The {log} formalization presented in this paper is the result of translating the abstract specification (i.e., Fig. <ref>) and the second refinement (i.e. Fig. <ref>). Both models can be easily translated into {log} by using the (still under development) state machine specification language (SMSL) defined on top of {log} (see Fig. <ref> and <ref>) <cit.>. The notions of context and refinement are not available in SMSL. For this reason, refinements introduced in the Event-B model have to be manually encoded in {log}. The context is encoded simply as an axiom. In order to ensure that the {log} code verifies the properties highlighted in bold in Fig. <ref> as well as the gluing invariant (i.e., inv6), a few user-defined verification conditions are introduced as theorems. Since the first refinement is introduced to express the properties written in bold, its events have not been encoded in {log}. Figures <ref> and <ref> list only representative parts of the forgram. We tried to use the same identifiers as for the Event-B models as much as possible. In this way, for example, the invariant labeled inv6 in the SparseSets_ref1 machine (Fig. <ref>) is given a corresponding name in the forgram. The names of variables in {log} cannot fully comply with those used in the Event-B models because {log} requires all variables to begin with a capital letter. So, for example, domD in the SparseSets_ref1 machine becomes DomD in {log}. As can be seen in Fig. <ref>, the state machine specification language defined on top of {log} allows for the declaration of parameters (similar to context constants), state variables, axioms (similar to context axioms) and invariants. The parameter is used to compute the identity relation on the integer interval [0,N-1], as shown in one of the axioms, which in turn is used in one of the invariants. As {log} is a CLP language implemented on top of Prolog, it inherits many of Prolog's features. In particular, integer expressions are evaluated by means of a dedicated predicate. Along the same lines, all set operators are implemented in {log} as constraints. For example, one constraint is true when a given binary relation is the identity relation on a given set, and a dedicated term denotes the integer interval [0,M]. Three invariants of the forgram together correspond to invariant inv1 of the SparseSets_ref1 machine.
Splitting invariants in smaller pieces, is a good practice when using as a prover because it increases the chances of automated proofs. implements the negation of invariant . does not automatically compute the negation of user-defined predicates. As a user-defined predicate can contain existential variables, its negation could involve introducing universal quantifiers which fall outside 's decision procedures. Then, users are responsible for ensuring that all predicates are safe. In invariant we can see the constraint. This constraint implements the notion of restricted universal quantifier (RUQ). That is, for some formula ϕ and set , corresponds to ∀ X.(X ∈ A ϕ(X)). In a constraint it is possible to quantify over binary relations, as is the case of . Hence, we have a quantified ordered pair (), rather than just a variable. Likewise, offers the constraint implementing the notion of restricted existential quantifier (REQ). The important point about REQ and RUQ is not only their expressiveness but the fact that there is a decision procedure involving them <cit.>. In these constraints are used to state a double set inclusion equivalent to the formula domD[0 .. sizeD - 1] = D. If the user is not convinced or unsure about the validity of this equivalence (s)he can use itself to prove it. Note that is not declared as an invariant because in Fig. <ref> it is a theorem that can be deduced from previous invariants. Therefore, we introduce it as a simple predicate but then we declare a theorem whose conclusion is . Later, will include as a proof obligation and will attempt to discharge it. Given that is a satisfiability solver, if Φ is intended to be a theorem then we ask it to prove the unsatisfiability of ¬Φ. Moving into in Fig. <ref> we can see the encoding of the remove operation specified in the SparseSets_ref2 machine of Fig. <ref>, along with two user-defined proof obligations. In , there is no global state so state variables have to be included as explicit arguments of clauses representing operations. Next-state variables are denoted by decorating the base name with an underscore character (e.g., corresponds to the value of in the next state). Another important difference between the and the specifications is that in the latter we can use set unification to implement function application. For instance, is equivalent to the predicate: ∃ y_2, y_5, domD_1. (domD = {sizeD - 1 ↦ y_2, y_1 ↦ y_5}∪ domD_1), where y_1 = mapD(v) (due to the previous set unification). The not-membership constraints following the equality constraint prevent to generate repeated solutions. Hence, when is called with some set term in its fourth argument, this term is unified with . If the unification succeeds, then the images of and are available. As said before, some user-defined proof obligations are introduced as theorems to ensure that the forgram verifies the gluing invariant (i.e., inv6) and the properties written in bold in machine SparseSets_ref1. Precisely, theorem states that if holds and and its abstract version (not shown in the paper) are executed, then holds in the next state.[ and its abstract version can be distinguished by their arities.] Likewise, theorem ensures that the second property written in bold in machine SparseSets_ref1 is indeed a property of the forgram. As can be seen, the theorem states that if is executed and the functional image[ is a user-defined predicate computing the relational image through a function— stands for functional image.] 
of the interval from up to through is , then it must coincide with the functional image of the same interval but through . Once the specification is ready, we can call the verification condition generator (VCG) and run the verification conditions (VC) so generated: VCs include the satisfiability of the conjunction of all axioms, the satisfiability of each operation and preservation lemmas for each and every operation and invariant. The last command above will attempt to automatically discharge every VC. Part of the output is as follows: An inconclusive answer means that, for some reason, {log} is unable to discharge the VC. Most of the time this is due to some missing hypothesis which, in turn, is due to the way the VCG generates the VCs. Briefly, when it comes to invariance lemmas, the VCG generates them with the minimum number of hypotheses. So, for instance, the invariance lemma relating a given operation and invariant is generated in this minimal form. By including minimal hypotheses, {log} has to solve a simpler goal, which reduces the possibility of a complexity explosion. If the hypotheses are not enough, a dedicated command can be used to find potentially missing hypotheses. In this way, users can edit the VC file, add the missing hypotheses and run the VC again. If more hypotheses are still missing, the process can be repeated until the proof is done—or the complexity explosion cannot be avoided. {log} discharges all the VC generated by the VCG for the present forgram. § WHY3 FORMAL DEVELOPMENT In this section we briefly introduce the Why3 platform and describe in some detail our specification of sparse sets. §.§ Why3 Why3 <cit.> is a platform for deductive program verification providing a language for specification and programming, called WhyML, and relying on external automated and interactive theorem provers to discharge verification conditions. In the context of this paper, we used Why3 with the SMT provers CVC4 and Z3. Proof tactics are also provided, making Why3 a proof environment close to the one of Rodin for interactive proofs. Why3 supports modular verification. WhyML allows the user to write functional or imperative programs featuring polymorphism, algebraic data types, pattern-matching, exceptions, references, arrays, etc. These programs can be annotated with contracts and assertions and thus verified. User-defined types with invariants can be introduced; the invariants are verified at function call boundaries. Furthermore, to prevent logical inconsistencies, Why3 generates a verification condition to show the existence of at least one value satisfying the invariant. To help the verification, a witness is explicitly given by the user (see the corresponding clause in Fig. <ref>). The old and at operators can be used inside post-conditions and assertions to refer to the value of a mutable program variable at some past moment of execution. In particular, old t in a function post-condition refers to the value of term t when the function is called. Programs may also contain ghost variables and ghost code to facilitate specification and verification. From verified WhyML programs, correct-by-construction OCaml programs (and recently C programs) can be automatically extracted. §.§ Why3 formalization From the Why3 library, we use pre-defined theories for integer arithmetic, polymorphic finite sets and arrays. In the latter, we use in particular the operation that exchanges two elements in an array, together with the predicate used in its specification.
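Across the three developments, the same consistency properties relate the concrete arrays to the abstract set: the arrays have length N, their contents stay in [0,N-1], mapD and domD are inverse of each other, sizeD stays in 0..N, and domD[0..sizeD-1] is exactly the abstract set (the gluing invariant inv6, which reappears below as a Why3 type invariant over a ghost field). As a purely illustrative aid (not part of any of the verified artefacts), these properties can be written as a runtime check over the Python sketch given earlier; the function name is ours.

```python
# Illustrative runtime check (ours) of the invariants maintained by the verified
# developments, over the SparseSet sketch given earlier. abstract_d plays the
# role of the abstract model set D (the ghost field in the Why3 development).

def check_invariants(s, abstract_d):
    n = s.n
    assert len(s.domD) == n and len(s.mapD) == n
    assert 0 <= s.sizeD <= n
    assert all(0 <= v < n for v in s.domD)
    assert all(0 <= i < n for i in s.mapD)
    # mapD and domD are inverse total functions on [0, n-1]
    assert all(s.mapD[s.domD[i]] == i for i in range(n))
    assert all(s.domD[s.mapD[v]] == v for v in range(n))
    # gluing invariant (inv6): the arrays represent exactly the abstract set
    assert s.to_set() == abstract_d
```

Calling check_invariants after each remove or bind (updating abstract_d as a plain Python set) mimics, by testing particular runs, what the provers establish once and for all for every possible run.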
We first define a record type, , whose mutable fields are a record of type containing the computational elements of a sparse set representation and a ghost finite set of integer numbers which is the abstract model of the data structure. The type invariant of relates the abstract model with the concrete representation. It is used to enforce consistency between them. Invariants enforcing consistency between the two arrays and and the bound are attached to the type: lengths of the arrays is , contents are belonging to 0..-1 and the two arrays are inverse of each other, is in the interval 0... These type definitions and related predicates are shown in Fig. <ref>. Our formalization (see Fig. <ref>, where, again, bind is removed for lack of place) contains three functions, , and , which update their arguments. They are the straightforward translation of the algorithms in <cit.> in WhyML, except for the supplementary ghost code (the last statement in both and ) which updates the abstract model contained in . Function is a helper function which is called in the other ones. The contract of makes explicit the modifications of both arrays and , using the predicate defined in the library. Verification conditions for this function concern the conformance of the code to the two post-conditions (trivial as it is ensured by ) and also the preservation of the invariant attached to the type—i.e. mainly that and after swapping elements remain inverse from each other. Both and act not only on the two arrays and the bound but also on the ghost part, i.e. the corresponding mathematical set . Thus the verification conditions here not only concern the structural invariants related to , and but also the ones deriving from the use of the type, proving the link between the abstract logical view (using finite sets) and the computational one implemented through arrays. Observe that types and correspond to the state and invariants of the refinements. The abstract specification presented in the first machine becomes a ghost field in WhyML. The invariant of the type corresponds to the gluing invariant (inv6). A similar transposition happens for the operations. Actions in the abstract events, i.e. updating the abstract set, appear as ghost code in WhyML. All proofs are discovered by the automatic provers except for some proof obligations related to the function. Nevertheless these proofs are simplified thanks to some 3 tactics that inject some hints that can be used by the external provers to finish the proofs. § COMPARISON AND DISCUSSION Set theory is primitive in and whereas Why3 which permits to express other theories, provides a theory for it. Rodin uses provers where set theory is primitive but can also call external provers such as VeriT, Z3 and CVC4—where set theory is not primitive. However a big effort has been done to process set theory in VeriT, which is often recognized as allowing significant improvements in proofs <cit.>. Why3 relies entirely on external provers where set theory is not primitive. Conversely, is a satisfiability solver that can only work with set theory—and linear integer algebra. It is the only of the three tools implementing advanced decision procedures for set theory. Likely, this proved to be crucial for being able to be the only tool that automatically discharged all the VC, although it required a simple hypothesis discovery procedure. It should be a concern the time needs to discharge all the VC because with more complex models the resolution time might be prohibitive. 
Ways of avoiding the algorithmic complexity of the decision procedures implemented in {log} are worth studying. Results on Computable Set Theory should be revisited (e.g. <cit.>). Why3 and Rodin interactive proofs are not numerous and remain quite simple. For the Event-B development, 51 proof obligations were generated for the whole development, around half of them coming from the first refinement. 37 were proven automatically by the standard provers (AtelierB provers), 18 automatically by SMT provers, mainly VeriT, either directly or after applying the Rodin lasso allowing for adding additional, backup hypotheses having identifiers in common with the goal. Only two proof obligations required real human intervention, mainly instantiations of the general theorems introduced in Ctx1 or explicit witness introduction in the case of feasibility proof obligations. After working in the way described in Sect. <ref>, {log} discharges all 38 VC generated by the VCG in around 7 minutes. Why3 makes it possible to apply transformations (e.g. split conjunctions) on a proof goal instead of calling an automatic prover on it. Some of these transformations are very simple, e.g. splitting conjunctions, and can then be applied systematically and automatically. Most of the generated VC in our formalization were proven automatically thanks to the split transformation. Only two of them, about pieces of type invariants, required human interaction to insert some more complex transformations, e.g. a case analysis on indexes in mapD. In the end, 55 VC were proved by CVC4, except two of them discharged by Z3, in a total time of 30 seconds. Clearly, all three tools are expressive enough for the problem at hand. However, the specification is probably the most readable. The tools make it possible to express axioms and invariants and automatically generate similar VC. {log} still needs work to express how two models are linked in terms of abstraction/refinement relations. Writing some key properties proved to be complex in Event-B. Indeed, it was necessary to add a somewhat artificial refinement level for Rodin to be able to generate the desired VC linking the two models. These properties can be easily defined by the user in {log}. However, in Why3 and Event-B, proof obligations are automatically generated from the specifications; in particular the abstract and concrete models can be naturally linked and the tool automatically generates the corresponding VC. In that regard, Why3 and Event-B are safer than {log}. The possibility of having executable code without much effort enables many lightweight analyses that can be put into practice before attempting complex proofs. {log} is a tool where specification and implementation are described by a single piece of code (cf. forgrams). This tool is not the integration of an interpreter and a prover; the same set of rewrite rules is used to compute and to prove. In Event-B/Rodin there is only a specification—it can later be converted into an executable representation if tools such as ProB are used. Why3 can execute WhyML programs natively thanks to its interpreter and the corresponding command. Furthermore, once the program is proved to verify the specification, correct-by-construction OCaml and C programs can be automatically extracted. These programs will be orders of magnitude more efficient than the equivalent forgrams.
§ CONCLUSION We formally verified the implementation of sparse sets using three formal languages and associated tools, focusing on the operations and correctness properties required by a constraint solver when domains of integer variables are implemented with sparse sets. In particular, we compared the various statements of invariants and pre-post properties and their proofs. As future work, two directions can be investigated. The first one is to complete the formal developments with other set operations. A second one is to implement and verify, in Why3 or Event-B, a labeling procedure such as the ones used in constraint solvers; it would need to backtrack on the values of some domains, and thus make use of the theorems proven in this paper. Labeling is native in {log} when the CLP(FD) solver is active.
http://arxiv.org/abs/2307.06111v2
20230712121202
Strongly anisotropic magnetocaloric effect in a dipolar magnet LiGdF$_4$
[ "G. Iu. Andreev", "I. V. Romanova", "O. A. Morozov", "S. L. Korableva", "R. G. Batulin", "V. N. Glazkov", "S. S. Sosin" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
Kazan Federal University, 420008 Kazan, Russia Kazan Federal University, 420008 Kazan, Russia Kazan Federal University, 420008 Kazan, Russia Zavoisky Physical-Technical Institute, FRC Kazan Scientific Center of RAS, 420029 Kazan, Russia Kazan Federal University, 420008 Kazan, Russia Kazan Federal University, 420008 Kazan, Russia P. Kapitza Institute for physical problems RAS, 117334 Moscow, Russia HSE University, 101000 Moscow, Russia [email protected] P. Kapitza Institute for physical problems RAS, 117334 Moscow, Russia HSE University, 101000 Moscow, Russia We report a detailed study of the magnetocaloric effect (MCE) in the dipolar-Heisenberg magnet LiGdF_4 using magnetization measurements performed on a single crystal sample. The entropy variation on isothermal demagnetization from magnetic fields of up to 3 T is determined in the temperature range 2–10 K for two principal directions of the applied field (parallel and perpendicular to the tetragonal c-axis of the crystal). The MCE is found to be highly anisotropic, with the cooling efficiency being up to twice as high at H∥ c. The results are nicely interpreted in the framework of a conventional molecular field approach taking into account the considerable anisotropy of the paramagnetic Curie-Weiss temperature. These results are compared with earlier studies of the MCE in powder samples of LiGdF_4 [T. Numazawa et al., AIP Conf. Proc. 850, 1579 (2006)] as well as with analogous data for other well-known magnetocaloric materials. Our findings may open new possibilities to enhance the efficiency of magnetic refrigeration in the liquid helium-4 temperature range. 75.30.Sg, 75.20.-g, 75.40.Cx Strongly anisotropic magnetocaloric effect in a dipolar magnet LiGdF_4 S. S. Sosin August 12, 2023 ====================================================================== § INTRODUCTION Adiabatic demagnetization is an efficient tool to achieve low temperatures, being one of the beautiful manifestations of basic thermodynamic principles: the release of magnetic entropy on lowering the magnetic field leads to a decrease of the lattice entropy, thus resulting in cooling of the entire sample. Demagnetization of paramagnetic salts, originally discussed by P. Debye <cit.>, was the first instrument to reach sub-kelvin temperatures. A nuclear demagnetization stage <cit.> still remains the only route to the sub-millikelvin range in dilution-fridge cryostats. Conventional methods of magnetic cooling for practical applications are now well developed and reviewed in textbooks (see e.g. <cit.>). From the practical point of view, magnetic refrigerators have the considerable convenience of a simple and compact construction, as well as independence of gravity and avoidance of the expensive and scarce ^3He-based cooling agent. However, a particular disadvantage of paramagnetic salts is the small concentration of magnetic ions in the substance, resulting in a low entropy density and an insufficient cooling capacity of these materials. An attempt to increase the entropy density in regular paramagnets generally encounters the problem of growing magnetic interactions, for example dipolar ones, which limit the temperature range of efficient cooling. The latter problem can be partly avoided in systems with competing magnetic interactions. The family of magnets with a strongly frustrated principal exchange interaction is believed to be promising in this respect.
An “infinite” degeneracy of the ground state and a macroscopic number of soft modes in the excitation spectrum leaves a finite part of entropy of a concentrated system unfrozen at a temperature scale much lower than the energy of the principal interaction <cit.>. Lifting this degeneracy by magnetic field in a spin-saturated state opens broad space for an enhanced magnetocaloric effect <cit.> in various ranges of temperatures and magnetic fields, as was observed for some types of rare-earth garnets (see e.g. <cit.> or more recent research <cit.>) or pyrochlores <cit.>. In the present paper we discuss a fresh look at magnetocaloric properties of a lithium-gadolinium fluoride . The absence of magnetic ordering down to at least 400 mK in combination with exceptionally high entropy density have already attracted much attention to this system as one of promising powder magnetic refrigerants <cit.>. Lack of microscopic model underlying the magnetic disorder was filled by recent experiments performed on single-crystal samples of a concentrated and strongly diluted LiY_1-xGd_xF_4 which reveal an unusual type of “hidden” magnetic frustration, i.e. a competition between various types of interactions <cit.>. Moreover, the fine compensation of contributions from exchange coupling, long-range dipolar interaction and single-ion anisotropy to the magnetic susceptibility makes the Curie-Weiss temperature strongly anisotropic, being very close to zero for one of the principal directions of the external field. Here we demonstrate that applying the magnetic field along the tetragonal axis of a single-crystal sample makes the demagnetization process up to 30% more efficient than that previously observed for a powder material, thus opening new ways to enhance the magnetic refrigeration at liquid helium-4 temperature range under moderate applied fields. § EXPERIMENTAL RESULTS The crystal structure of is of Scheelite-type with the space group I4_1/a (C_ 4h^6) and the local symmetry S_4 on each Gd-site. The tetragonal unit cell with the parameters a=5.219 and c=10.97 Å contains four formula units <cit.>. A single-crystal sample was grown using a standard Bridgman-Stockbarger technique. The directions of crystal axes were precisely determined by X-ray Laue diffraction patterns. Magnetization measurements have been carried out using the Quantum Design PPMS Vibrating Sample Magnetometer. The sample was cut from the parent single crystal in a shape of a thin plate 16.8 mg by mass containing the ac crystal plane. A magnetic field H has been applied along the two principal crystal axes, c and a, within the sample plane to exclude the demagnetization corrections. The temperature of the experiment varied from 2 to 10 K with the data obtained on cooling and heating being indistinguishable. The isothermal magnetization curves recorded at T=2.5 and 5 K for two directions of the external field H∥ c,a are presented in the main panel of Fig. <ref>. Linear low-field parts of the curves demonstrate significant anisotropy of the susceptibility amounting to χ^c/χ^a≃ 1.5 at T=2.5 K. This anisotropy, clearly visible by temperature dependences of magnetization measured at small constant field (see Inset), reflects the above mentioned anisotropic paramagnetic Curie-Weiss temperature. The standard procedure to study the MCE from static magnetization data involves collecting a set of M(T) curves measured at constant fields (from 0.05 to 3 T with a step of 0.05 T in our experiment). 
Using the Maxwell relation (∂ M/∂ T)_H=(∂ S/∂ H)_T, one obtains a set of isothermal field dependences of the derivative (∂ S/∂ H)_T (see upper panel of Fig. <ref>) which can then be integrated over the field to obtain the entropy change for the demagnetization process performed at various temperatures. A few examples of these curves integrated at several temperatures for the two principal directions of the applied field are shown in the lower panel of Fig. <ref>. The final results, i.e. the magnetic entropy change on isothermal demagnetization from magnetic fields μ_0H_ i=3, 2 and 1 T applied along the two principal axes c and a, are presented in the main panel of Fig. <ref>. The demagnetization process traced at various temperatures reveals a considerable anisotropy, being up to twice as efficient under a field H∥ c, for which the Curie-Weiss temperature θ_ cw^c≃ 0 and the system is expected to be very close to an ideal paramagnet. § DISCUSSION AND CONCLUDING REMARKS The above experimental results can be adequately interpreted in terms of the standard approach to a system of interacting magnetic moments in a disordered state (see e.g. <cit.>). The interaction is introduced in the form of an effective molecular field acting on a single moment from the rest of the ensemble, which in the antiferromagnetic case can be written as H_M=-λ M, where λ>0 is the molecular field constant and M is the uniform net magnetization per spin. This molecular field leads to a non-zero Curie-Weiss temperature in the Curie-like paramagnetic susceptibility, which can be directly related to the molecular field strength: θ_ cw=-Cλ, where C=(gμ_ B)^2S(S+1)/3k_ B is the Curie constant (g≃ 2.0 is the g-factor of the Gd^3+ ion, μ_ B is the Bohr magneton, k_ B is the Boltzmann constant). For arbitrary (H,T) values the magnetization can be self-consistently expressed in the form of a Brillouin function with the external field H replaced by the sum of the external and molecular fields: M̃=B_7/2 (H-λ̃M̃), where M̃=M/(gμ_ BS) is the reduced magnetization per spin and λ̃=gμ_ BSλ≃ 7.0 kOe (for θ_ cw=-1.4 K) is the molecular field constant expressed in CGS field units. The value of M obtained from (<ref>) can be used to find the magnetic entropy per moment as a function of field and temperature in the following form <cit.>: TS = E - F = (λ M^2/2 - MH) + k_ B T ln[sinh((2S+1)x/2)/sinh(x/2)], where E and F are the energy and free energy per magnetic moment, respectively, and x=gμ_ B(H-λ M)/(k_ BT). Theoretical curves for θ_ cw^c=0 (an ideal paramagnet) and θ_ cw^a=-1.4 K are shown in Figs. <ref>–<ref> by solid and dashed lines, respectively. One can see that the theoretical curves computed for an ideal paramagnet perfectly reproduce our data for H∥ c in the whole temperature range 2–10 K. As was mentioned above, all substantial magnetic interactions (exchange, dipolar and single-ion anisotropy) contributing to the susceptibility compensate each other in such a way that, over a wide temperature range, the system is magnetized along the c-axis effectively as a set of non-interacting magnetic ions. However, when the magnetic field is applied in the perpendicular direction, the compensation is broken and the magnetic ions are subjected to a relatively strong internal molecular field. The results of the molecular field approximation with an antiferromagnetic Curie-Weiss temperature θ_ cw=-1.4 K are also in good agreement with the experimental data obtained for H∥ a.
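As a numerical illustration of the molecular-field description above, the following Python sketch solves the self-consistent Brillouin equation for the average moment at given (H, T) and evaluates the magnetic entropy of a spin in the self-consistent effective field. It is our own sketch, not the authors' analysis code; the values S = 7/2, g = 2.0 and θ_cw = 0 or -1.4 K are taken from the text, while μ_B/k_B ≈ 0.672 K/T and the helper names are our assumptions.

```python
import numpy as np

S = 3.5                  # spin of the Gd3+ ion
g = 2.0                  # g-factor
MU_B_OVER_K_B = 0.6717   # Bohr magneton over Boltzmann constant, in K/T (assumed)

def avg_moment(x):
    # <S_z> of a single spin S in an effective field, x = g*mu_B*H_eff/(k_B*T);
    # this is d(ln Z)/dx for Z = sinh((2S+1)x/2)/sinh(x/2)
    if abs(x) < 1e-9:
        return 0.0
    return 0.5*(2*S + 1)/np.tanh(0.5*(2*S + 1)*x) - 0.5/np.tanh(0.5*x)

def solve_sz(H, T, theta_cw):
    # Self-consistent <S_z> in the molecular field; theta_cw = -C*lambda translates
    # into lam = -3*theta_cw/(S*(S+1)) kelvin per unit <S_z> (lam = 0: ideal paramagnet)
    lam = -3.0*theta_cw/(S*(S + 1))
    sz = S
    for _ in range(300):                        # damped fixed-point iteration
        x = (g*MU_B_OVER_K_B*H - lam*sz)/T
        sz = 0.5*sz + 0.5*avg_moment(x)
    x = (g*MU_B_OVER_K_B*H - lam*sz)/T
    return sz, x

def entropy(H, T, theta_cw):
    # Magnetic entropy per moment, in units of k_B: S/k_B = ln Z(x) - x*<S_z>,
    # i.e. the entropy of a spin S placed in the self-consistent effective field
    sz, x = solve_sz(H, T, theta_cw)
    lnZ = np.log(2*S + 1) if abs(x) < 1e-9 else \
        np.log(np.sinh(0.5*(2*S + 1)*x)/np.sinh(0.5*x))
    return lnZ - x*sz

# entropy released (in k_B per Gd ion) on isothermal demagnetization from 1 T at 2 K
dS_c = entropy(0.0, 2.0,  0.0) - entropy(1.0, 2.0,  0.0)   # H || c: ideal paramagnet
dS_a = entropy(0.0, 2.0, -1.4) - entropy(1.0, 2.0, -1.4)   # H || a: theta_cw = -1.4 K
```

With these parameters the sketch gives ΔS_c ≈ 0.7 k_B and ΔS_a ≈ 0.36 k_B per Gd ion at T = 2 K for a starting field of 1 T, i.e. close to the factor-of-two anisotropy discussed next.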
This anisotropy immediately leads to a considerable difference in the amount of entropy released on demagnetizing along the two principal directions of the external field. The ratio Δ S_c/Δ S_a can reach a factor of 2 at T=2 K for the starting field μ_0H_ i=1 T (Fig. <ref>). One should note that analogous results measured in a powder sample <cit.> and shown in this figure by the bold dash-dotted line appear to fall between our data obtained for the H∥ c and a-axes. Moreover, averaging our results over orientations in the powder as Δ S_ p=1/3Δ S_c+2/3Δ S_a, one achieves perfect quantitative agreement with the previous data. Further, we have summarized in Table <ref> the entropy changes (expressed in k_ B per magnetic ion) on demagnetizing from two different starting fields μ_0H_ i=1 and 2 T measured in some other state-of-the-art cooling materials, as well as in the LiGdF_4 powder known from the literature. The comparison both with two well-known rare-earth gallium garnets <cit.> and with KBaYb(BO_3)_2, a recently studied frustrated material suitable for cooling to very low temperatures <cit.>, is clearly in favor of a LiGdF_4 single crystal demagnetized at H∥ c. The advantage of LiGdF_4 is especially pronounced if one takes into account the enhanced density of magnetic ions in this material, which is important from the practical point of view. Our data (see inset to Fig. <ref>) show that the most efficient demagnetization process in LiGdF_4 can be achieved under moderate applied fields. In the temperature range 3–4 K the highest cooling efficiency is observed for a starting magnetic field μ_0H_ i≃ 2 T, while that for 1–2 K shifts to smaller initial fields around 1 T. In both regimes the cooling capacity of LiGdF_4 reaches the value ≃ 0.25 J/T per cm^3 of the material, which enables a cooling power of up to ≃ 10 mW/cm^3 for a reasonable field sweep rate of 2 T/min accessible in typical laboratory cryomagnets. To summarize, using static magnetization measurements of a single-crystal LiGdF_4 sample performed in the temperature range 2–10 K, we have demonstrated that the MCE in this system is considerably anisotropic. This anisotropy results from competing contributions of various magnetic interactions to the paramagnetic susceptibility of the system. We show that when the magnetic field is applied along the tetragonal axis of the crystal, LiGdF_4 is magnetized over a wide temperature range in a way similar to a system of non-interacting magnetic moments, thus enhancing the MCE to the maximum possible level of an ideal paramagnet. These results can be described in the framework of a conventional molecular field approach taking into account the considerable anisotropy of the paramagnetic Curie-Weiss temperature. Comparison with other well-known materials for magnetic refrigeration shows a significant advantage of a LiGdF_4 single crystal for demagnetization at liquid helium-4 temperatures (1–4 K) in moderate applied fields, which may open new opportunities for practical applications. § ACKNOWLEDGMENTS The work was financially supported by: Russian Science Foundation, Grant No 22-12-00259 (sample growth); Basic research program of HSE University (data processing and theoretical calculations); Kazan Federal University Strategic Academic Leadership Program PRIORITY-2030 (magnetization measurements). 99 Debye P. Debye, Ann. Phys. 81, 1154 (1926). Kurti N. Kurti, F. N. Robinson, F. Simon, D. A. Spohr, Nature 178, 450 (1956). Lounasmaa O. V. Lounasmaa, Experimental Principles and Methods, (Academic Press, London and New York, 1974). gardner_review J. S. Gardner, M. J. P. Gingras, J. E. Greedan, Rev. Mod.
Phys. 82, 53 (2010). mzh03 M. E. Zhitomirsky, Phys. Rev. B 67, 104421 (2003). Numazawa_DGGG T. Numazawa, K. Kamiya, T. Okano, K. Matsumoto, Physica B 329-333, 1656 (2003). Bras D. A. P. Brasiliano, J.-M. Duval, C. Marin, E. Bichaud, J.-P. Brison, M. Zhitomirsky, N. Luchier, Cryogenics 105, 103002 (2020). sosin S. S. Sosin, L. A. Prozorova, A. I. Smirnov, A. I. Golov, I. B. Berkutov, O. A. Petrenko, G. Balakrishnan, and M. E. Zhitomirsky, Phys. Rev. B 71, 094413 (2005). Wolf B. Wolf, U. Tutsch, S. Dorschug, C. Krellner, F. Ritter, W. Assmus, M. Lang, J. Appl. Phys. 120, 142112 (2016). Numazawa06 T. Numazawa, K. Kamiya, P. Shirron, M. DiPirro, and K. Matsumoto, AIP Conf. Proc. 850, 1579 (2006). Numazawa09 T. Numazawa, K. Kamiya, P. Shirron, and K. Mitsuda, J. Phys.: Conf. Series 150, 012032 (2009). Wikus14 P. Wikus, E. Canavan, S. Trowbridge Heine, K. Matsumoto, and T. Numazawa, Cryogenics 62, 150 (2014). sosin1 S. S. Sosin, A. F. Iafarova, I. V. Romanova, O. A. Morozov, S. L. Korableva, R. G. Batulin, M. Zhitomirsky, V. N. Glazkov, JETP Lett. 116, 771 (2022). Keller C. Keller and H. Scmutz, J. Inorg. Nucl. Chem. 27, 900 (1965). Sanders M. B. Sanders, F. A. Cevallos, R. J. Cava, Mater. Res. Express 4, 036102 (2017). Tokiwa Y. Tokiwa, S. Bachus, K. Kavita, A. Jesche, A. A. Tsirlin, P. Gegenwart, Commun. Mater. 2, 42 (2021). smart J. Samuel Smart, Effective Field Theories in Magnetism, (W. B. Saunders Company, Philadelphia – London, 1966).
http://arxiv.org/abs/2307.04546v1
20230710132437
Safety Analysis of Parameterised Networks with Non-Blocking Rendez-Vous
[ "Lucie Guillou", "Arnaud Sangnier", "Nathalie Sznajder" ]
cs.LO
[ "cs.LO", "cs.MA", "C.2.4; F.4.3" ]
Safety Analysis of Parameterised Networks with Non-Blocking Rendez-Vous ======================================================================= We consider networks of processes that all execute the same finite-state protocol and communicate via a rendez-vous mechanism. When a process requests a rendez-vous, another process can respond to it and they both change their control states accordingly. We focus here on a specific semantics, called non-blocking, where the process requesting a rendez-vous can change its state even if no process can respond to it. In this context, we study the parameterised coverability problem of a configuration, which consists in determining whether there is an initial number of processes and an execution allowing the system to reach a configuration bigger than a given one. We show that this problem is EXPSPACE-complete and can be solved in polynomial time if the protocol is partitioned into two sets of states, the states from which a process can request a rendez-vous and the ones from which it can answer one. We also prove that the problem of the existence of an execution bringing all the processes to a final state is undecidable in our context. These two problems can be solved in polynomial time with the classical rendez-vous semantics. § INTRODUCTION Verification of distributed/concurrent systems. Because of their ubiquitous use in applications we rely on constantly, the development of formal methods to guarantee the correct behaviour of distributed/concurrent systems has become one of the most important research directions in the field of computer systems verification in the last two decades. Unfortunately, such systems are difficult to analyse for several reasons. Among others, we can highlight two aspects that make the verification process tedious. First, these systems often generate a large number of different executions due to the various interleavings generated by the concurrent behaviours of the entities involved. Understanding how these interleavings interact is a complex task which can often lead to errors at the design level or make the model of these systems very complex. Second, in some cases, the number of participants in a distributed system may be unbounded and not known a priori. To fully guarantee the correctness of such systems, the analysis would have to be performed for all possible instances of the system, i.e., an infinite number of times. As a consequence, classical techniques to verify finite-state systems, like testing or model-checking, cannot be easily adapted to distributed systems and it is often necessary to develop new techniques. Parameterised verification. When designing systems with an unbounded number of participants, one often provides a schematic program (or protocol) intended to be implemented by multiple identical processes, parameterised by the number of participants. In general, even if the verification problem is decidable for a given instance of the parameter, verifying all possible instances is undecidable (<cit.>). However, several settings come into play that can be adjusted to allow automatic verification. One key aspect to obtain decidability is to assume that the processes do not manipulate identities in the protocols and use simple communication mechanisms like pairwise synchronisation (or rendez-vous) <cit.>, broadcast of a message to all the entities <cit.> (which can as well be lossy in order to simulate mobility <cit.>), a shared register containing values from a finite set <cit.>, and so on (see <cit.> for a survey).
In every aforementioned case, all the entities execute the same protocol given by a finite state automaton. Note that parameterised verification, when decidable like in the above models, is also sometimes surprisingly easy, compared to the same problem with a fixed number of participants. For instance, liveness verification of parameterised systems with shared memory is Pspace-complete for a fixed number of processes and in NP when parameterised  <cit.>. Considering rendez-vous communication. In one of the seminal papers for the verification of parameterised networks <cit.>, German and Sistla (and since then <cit.>) assume that the entities communicate by “rendez-vous”, a synchronisation mechanism in which two processes (the sender and the receiver) agree on a common action by which they jointly change their local state. This mechanism is synchronous and symmetric, meaning that if no process is ready to receive a message, the sender cannot send it. However, in some applications, such as Java Thread programming, this is not exactly the primitive that is implemented. When a Thread is suspended in a waiting state, it is woken up by the reception of a message sent by another Thread. However, the sender is not blocked if there is no suspended Thread waiting for its message; in this case, the sender sends the message anyway and it is simply lost. This is the reason why Delzanno et al. introduced in <cit.> the non-blocking rendez-vous, a communication primitive in which the sender of a message is not blocked if no process receives it. One of the problems of interest in parameterised verification is the coverability problem: is it possible that, starting from an initial configuration, (at least) one process reaches a bad state? In <cit.>, and later in <cit.>, the authors introduce variants of Petri nets to handle this type of communication. In particular, the authors investigate in <cit.> the coverability problem for an extended class of Petri nets with non-blocking arcs, and show that for this model the coverability problem is decidable using the techniques of Well-Structured Transition Systems <cit.>. However, since their model is an extension of Petri nets, the latter problem is Expspace-hard <cit.> (no upper bound is given). Relying on Petri nets to obtain algorithms for parameterised networks is not always a good option. In fact, the coverability problem for parameterised networks with rendez-vous can be solved in polynomial time <cit.>, while it is Expspace-complete for Petri nets <cit.>. Hence, no upper bound or lower bound can be directly deduced for the verification of networks with non-blocking rendez-vous from <cit.>. Our contributions. We show that the coverability problem for parameterised networks with non-blocking rendez-vous communication over a finite alphabet is Expspace-complete. To obtain this result, we consider an extension of counter machines (without zero test) where we add non-blocking decrement actions and some restore mechanism, i.e. edges that can bring the machine back to its initial location at any moment. We show that the coverability problem for these extended counter machines is Expspace-complete (<ref>) and that it is equivalent to our problem over parameterised networks (<ref>). We then consider a subclass of parameterised networks – wait-only protocols – in which no state allows a process to both request a rendez-vous and wait for one. This restriction is very natural to model concurrent programs since when a thread is waiting, it cannot perform any other action.
We show that coverability problem can then be solved in polynomial time (<ref>). Finally, we show that the synchronization problem, where we look for a reachable configuration with all the processes in a given state, is undecidable in our framework, even for wait-only protocols (<ref>). Due to lack of space, some proofs are only given in the appendix. § RENDEZ-VOUS NETWORKS WITH NON-BLOCKING SEMANTICS For a finite alphabet Σ, we let Σ^* denote the set of finite sequences over Σ (or words). Given w∈Σ^*, we let |w| denote its length: if w=w_0… w_n-1∈Σ^*, then |w|=n. We write to denote the set of natural numbers and [i,j] to represent the set k∈| i≤ k k ≤ j for i,j ∈. For a finite set E, the set ^E represents the multisets over E. For two elements m,m' ∈^E, we denote m+m' the multiset such that (m+m')(e) = m(e) +m'(e) for all e ∈ E. We say that m ≤ m' if and only if m(e) ≤ m'(e) for all e ∈ E. If m ≤ m', then m'-m is the multiset such that (m'-m)(e) = m'(e)-m(e) for all e ∈ E. Given a subset E' ⊆ E and m ∈^E, we denote by ||m||_E' the sum Σ_e∈ E'm(e) of elements of E' present in m. The size of a multiset m is given by ||m|| =||m||_E. For e ∈ E, we use sometimes the notation e for the multiset m verifying m(e)=1 and m(e')=0 for all e' ∈ E∖e and, to represent for instance the multiset with four elements a, b,b and c, we will also use the notations a, b, b, c or a, 2· b, c. §.§ Rendez-Vous Protocols We can now define our model of networks. We assume that all processes in the network follow the same protocol. Communication in the network is pairwise and is performed by rendez-vous through a finite communication alphabet Σ. Each process can either perform an internal action using the primitive τ, or request a rendez-vous by sending the message m using the primitive !m or answer to a rendez-vous by receiving the message m using the primitive ?m (for m ∈Σ). Thus, the set of primitives used by our protocols is RV(Σ)=τ∪?m,!m | m ∈Σ. A rendez-vous protocol (shortly protocol) is a tuple = (Q, Σ, , q_f, T) where Q is a finite set of states, Σ is a finite alphabet, ∈ Q is the initial state, q_f ∈ Q is the final state and T ⊆ Q × RV(Σ) × Q is the finite set of transitions. For a message m ∈Σ, we denote by m the set of states q from which the message m can be received, i.e. states q such that there is a transition (q, ?m, q') ∈ T for some q' ∈ Q. A configuration associated to the protocol is a non-empty multiset C over Q for which C(q) denotes the number of processes in the state q and ||C|| denotes the total number of processes in the configuration C. A configuration C is said to be initial if and only if C(q)=0 for all q ∈ Q∖. We denote by () the set of configurations and by () the set of initial configurations. Finally for n ∈∖0, we use the notation _n() to represent the set of configurations of size n, i.e. _n()=C ∈() | ||C||=n. When the protocol is made clear from the context, we shall write , and _n. We explain now the semantics associated with a protocol. For this matter we define the relation ⊆⋃_n≥ 1_n ×(τ∪Σ∪𝐧𝐛(m) | m ∈Σ) ×_n as follows (here · is a special symbol). 
Given n ∈∖0 and C,C' ∈_n and m ∈Σ, we have: * C τ C' iff there exists (q, τ, q') ∈ T such that C(q) > 0 and C' = C - q + q' (internal); * C m C' iff there exists (q_1, !m, q_1') ∈ T and (q_2, ?m, q_2')∈ T such that C(q_1)>0 and C(q_2)>0 and C(q_1)+C(q_2)≥ 2 (needed when q_1 = q_2) and C' = C - q_1, q_2 + q_1', q_2' (rendez-vous); * C 𝐧𝐛(m) C' iff there exists (q_1, !m, q_1') ∈ T such that C(q_1)>0 and (C-q_1)(q_2)=0 for all (q_2, ?m, q_2') ∈ T and C' = C - q_1 + q'_1 (non-blocking request). Intuitively, from a configuration C, we allow the following behaviours: either a process takes an internal transition (labeled by τ), or two processes synchronize over a rendez-vous m, or a process requests a rendez-vous to which no process can answer (non-blocking sending). This allows us to define the transition system S_ = ((), ) associated to . We will write C C' when there exists a ∈τ∪Σ∪𝐧𝐛(m) | m ∈Σ such that C a C' and denote by ^∗ the reflexive and transitive closure of . Furthermore, when clear from the context, we might simply write instead of . An execution is a finite sequence of configurations ρ = C_0C_1… such that, for all 0≤ i< |ρ|, C_i C_i+1. The execution is said to be initial if C_0∈(). Figure <ref> provides an example of a rendez-vous protocol where is the initial state and the final state. A configuration associated to this protocol is for instance the multiset 2 · q_1, 1· q_4, 1 · q_5 and the following sequence represents an initial execution: 2 ·𝐧𝐛(a), b, c 2 ·. When we only allow behaviours of type (internal) and (rendez-vous), this semantics corresponds to the classical rendez-vous semantics (<cit.>). By contrast, we will refer to the semantics defined here as the non-blocking semantics, where a process is not blocked if it requests a rendez-vous and no process can answer it. Note that all behaviours possible in the classical rendez-vous semantics are also possible in the non-blocking semantics, but the converse is false. §.§ Verification Problems We now present the problems studied in this work. For this matter, given a protocol = (Q, Σ, , q_f, T), we define two sets of final configurations. The first one () = { C ∈()  | C(q_f)> 0} characterises the configurations where one of the processes is in the final state. The second one () = { C ∈()  | C(Q ∖{q_f})= 0} represents the configurations where all the processes are in the final state. Here again, when the protocol is clear from the context, we might use the notations and . We study the following problems: the state coverability problem, the configuration coverability problem, the synchronization problem and the termination problem. They all take as input a protocol and can be stated as follows: * State coverability: are there C_0 ∈ and C_f ∈, such that C_0 ^∗ C_f? * Configuration coverability: given C ∈, are there C_0 ∈ and C' ≥ C, such that C_0 ^∗ C'? * Synchronization: are there C_0 ∈ and C_f ∈, such that C_0 ^∗ C_f? * Termination: does _∞ (S_) = ∅? State coverability expresses a safety property: if q_f is an error state and the answer is negative, then for any number of processes, no process will ever be in that error state. Synchronization, on the other hand, is a liveness property: if q_f is a deadlock state (a state in which no action is possible), and the answer is negative, then for any number of processes, all processes together are never blocked at the same time.
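To make the three kinds of steps concrete, the following Python sketch (ours, purely illustrative and independent of the paper's formal development) enumerates the successors of a configuration under the non-blocking semantics. Configurations are represented as multisets of states using collections.Counter, and transitions as triples (q, action, q') where the action is 'tau', '!m' or '?m' for a message m.

```python
from collections import Counter

def successors(config, transitions):
    """Yield (label, successor) pairs of a configuration under the non-blocking semantics.

    config: Counter mapping states to numbers of processes (a multiset).
    transitions: iterable of triples (q, action, q2), action in {'tau', '!m', '?m'}.
    """
    # internal steps
    for (q, a, q2) in transitions:
        if a == 'tau' and config[q] > 0:
            c = config.copy(); c[q] -= 1; c[q2] += 1
            yield ('tau', +c)
    # rendez-vous and non-blocking requests
    for (q1, a, q1p) in transitions:
        if not a.startswith('!') or config[q1] == 0:
            continue
        m = a[1:]
        receivers = [(q2, q2p) for (q2, b, q2p) in transitions if b == '?' + m]
        answered = False
        for (q2, q2p) in receivers:
            # another process must be available to answer (needed when q1 == q2)
            if config[q2] > (1 if q2 == q1 else 0):
                c = config.copy()
                c[q1] -= 1; c[q2] -= 1; c[q1p] += 1; c[q2p] += 1
                yield (m, +c)
                answered = True
        if not answered:
            # nobody besides the sender itself can receive m: non-blocking request
            c = config.copy(); c[q1] -= 1; c[q1p] += 1
            yield ('nb(' + m + ')', +c)
```

Iterating this function from an initial configuration explores the reachable configurations for one fixed number of processes; the problems above quantify over every initial size at once, which is what makes them non-trivial.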
The difficulty in solving these problems lies in the fact that we are seeking for an initial configuration allowing a specific execution but the set of initial configurations is infinite. The difference between  and   is that in the first one we ask for at least one process to end up in the final state whereas the second one requires all the processes to end in this state. Note that  is an instance of  but  is not. The rendez-vous protocol of Figure <ref> is a positive instance of , as shown in <ref>. However, this is not the case for : if an execution brings a process in , this process cannot be brought afterwards to . If is the final state,  is now a positive instance of  (see Example <ref>). Note that if the final state is , is not a positive instance of  anymore. In fact, the only way to reach a configuration with a process in is to put (at least) two processes in state as this is the only state from which one process can send the message b. However, this cannot happen, since from an initial configuration, the only available action consists in sending the message a as a non-blocking request. Once there is one process in state q_5, any other attempt to put another process in this state will induce a reception of message a by the process already in q_5, which will hence leave q_5. Finally, note that for any n ∈ℕ, the configuration n · is coverable, even if with as final state is not a positive instance of . § COVERABILITY FOR NON-BLOCKING COUNTER MACHINES We first detour into new classes of counter machines, which we call non-blocking counter machines and non-blocking counter machines with restore, in which a new way of decrementing the counters is added to the classical one: a non-blocking decrement, which is an action that can always be performed. If the counter is strictly positive, it is decremented; otherwise it is let to 0. We show that the coverability of a control state in this model is -complete, and use this result to solve coverability problems in rendez-vous protocols. To define counter machines, given a set of integer variables (also called counters) , we use the notation to represent the set of associated actions given by ,,|∈∪. Intuitively, increments the value of the counter , while decrements it and checks if it is equal to 0. We are now ready to state the syntax of this model. A counter machine (shortly CM) is a tuple M = (, , Δ, ) such that is a finite set of locations, ∈ is an initial location, is a finite set of counters, and Δ⊆×× is finite set of transitions. We will say that a CM is test-free (shortly ) whenever Δ∩×{|∈}× = ∅. A configuration of a CM M = (, , Δ, ) is a pair (ℓ, v) where ℓ∈ specifies the current location of the CM and v∈^ associates to each counter a natural value. The size of a CM M is given by |M|= || + || + |Δ|. Given two configurations (ℓ, v) and (ℓ',v') and a transition δ∈Δ, we define (ℓ, v) δ_M (ℓ', v') if and only if δ = (ℓ, op, ℓ') and one of the following holds: [t]7cm * op = and v =v'; * op = and v'() = v() + 1 and v'(') = v(') for all ' ∈∖; [t]7cm * op = and v'() = v() - 1 and v'(') = v(') for all ' ∈∖; * op = and v() = 0 and v'= v. In order to simulate the non-blocking semantics of our rendez-vous protocols with counter machines, we extend the class of test-free CM with non-blocking decrement actions. A non-blocking test-free counter machine (shortly ) is a tuple M=(, , Δ_b, Δ_nb, ) such that (, , Δ_b, ) is a  and Δ_nb⊆×{|∈}× is a finite set of non-blocking transitions. 
Observe that in a , both blocking and non-blocking decrements are possible, according to the definition of the transition relation. Again, a configuration is given by a pair (ℓ,v)∈×^. Given two configurations (ℓ, v) and (ℓ', v') and δ∈Δ_b∪Δ_nb, we extend the transition relation (ℓ,v)δ_M (ℓ',v') over the set Δ_nb in the following way: for δ= (ℓ, , ℓ') ∈Δ_nb, we have (ℓ,v) δ_M (ℓ',v') if and only if v'() = max(0, v() - 1), and v'(') = v(') for all ' ∈∖. We say that M is an  with restore (shortly ) when (ℓ, , ) ∈Δ for all ℓ∈, i.e. from each location, there is a transition leading to the initial location with no effect on the counter values. For a CM M with set of transitions Δ (resp. an   with sets of transitions Δ_b and Δ_nb), we will write (ℓ, v) _M (ℓ', v') whenever there exists δ∈Δ (resp. δ∈Δ_b∪Δ_nb) such that (ℓ, v) δ_M (ℓ', v') and use ^∗_M to represent the reflexive and transitive closure of _M. When the context is clear we shall write instead of _M. We let 0_ be the valuation such that 0_()=0 for all ∈. An execution is a finite sequence of configurations (ℓ_0, v_0) (ℓ_1, v_1) …(ℓ_k, v_k). It is said to be initial if (ℓ_0,v_0)=(, 0_). A configuration (ℓ,v) is called reachable if (, 0_) ^∗ (ℓ,v). We shall now define the coverability problem for (non-blocking test-free) counter machines, which asks whether a given location can be reached from the initial configuration. We denote this problem [ℳ], for ℳ∈{CM, , , }. It takes as input a machine M in ℳ (with initial location and working over a set of counters) and a location ℓ_f and it checks whether there is a valuation v ∈ℕ^ such that (, 0_) ^*(ℓ_f, v). In the rest of this section, we will prove that [] is -complete. To this end, we first establish that [] is in , by an adaptation of Rackoff's proof which shows that coverability in Vector Addition Systems is in Expspace <cit.>. This also gives the upper bound for , since any  is a . This result is established by the following theorem, whose proof is omitted due to lack of space. [] and [] are in . To obtain the lower bound, inspired by Lipton's proof showing that coverability in Vector Addition Systems is -hard <cit.>, we rely on 2Exp-bounded . We say that a CM M = (,, Δ,) is 2Exp-bounded if there exists n ∈ O(|M|) such that any reachable configuration (ℓ, v) satisfies v() ≤ 2^2^n for all ∈. We then use the following result. [2Exp-bounded ] is -hard. We now show how to simulate a 2Exp-bounded  by a , by carefully handling restore transitions that may occur at any point in the execution. We will ensure that each restore transition is followed by a reset of the counters, so that we can always extract from an execution of the  a correct initial execution of the original . The way we enforce resetting of the counters is inspired by the way Lipton simulates 0-tests of a CM in a . As in <cit.>, we will describe the final  by means of several submachines. To this end, we define procedural non-blocking counter machines that are   with several identified output states: formally, a procedural- is a tuple N = (, , Δ_b, Δ_nb, ℓ_in, L_out) such that (, , Δ_b, Δ_nb, ℓ_in) is a , L_out⊆, and there are no outgoing transitions from states in L_out. Now fix a 2Exp-bounded  M = (,, Δ,), ℓ_f∈ the location to be covered. There is some c such that any reachable configuration (ℓ, v) satisfies v() < 2^2^c |M| for all ∈; fix n = c|M|. We build a  N as pictured in <ref>. The goal of the procedural  𝚁𝚜𝚝𝙸𝚗𝚌 is to ensure that all counters in are reset.
Hence, after each restore transition, we are sure that we start a fresh execution of the  M. We will need the mechanism designed by Lipton to test whether a counter is equal to 0. For a counter bounded by some value K, this is done by duplicating into and ensuring along any execution that the sum of and is equal to K. So, we define two families of sets of counters (Y_i)_0≤ i ≤ n and (Y_i)_0≤ i≤ n as follows. Let Y_i = {_i, _i, _i } and Y_i = {_i, _i, _i} for all 0≤ i < n and Y_n = and Y_n = ∅ and '=⋃_0≤ i≤ n Y_i∪Y_i. All the machines we will describe from now on will work over the set of counters '. Procedural- 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(). We use a family of procedural- defined in <cit.>: for all 0≤ i <n, for all ∈Y_i, 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i() is a procedural- with an initial location ^𝚃𝚂,i,, and two output locations ℓ^𝚃𝚂,i,_z and ℓ^𝚃𝚂,i,_nz. It tests if the value of is equal to 0, using the fact that the sum of the values of and is equal to 2^2^i. If =0, it swaps the values of and , and the execution ends in the output location ℓ^𝚃𝚂,i,_z. Otherwise, counter values are left unchanged and the execution ends in ℓ^𝚃𝚂,i,_nz. In any case, other counters are not modified by the execution. Note that 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i() makes use of variables in ⋃_1≤ j< i Y_j∪Y_j. Formally, these machines have the following property. Let 0≤ i < n, and ∈Y_i. For all v,v'∈ℕ^X', for ℓ∈{ℓ^𝚃𝚂,i,_z,ℓ^𝚃𝚂,i,_nz}, we have (^𝚃𝚂,i,v)^*(ℓ,v') in 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i() if and only if: * (PreTest1): for all 0 ≤ j < i, for all _j ∈Y_j, v(_j) = 2^2^j and for all _j ∈ Y_j, v(_j) = 0; * (PreTest2): v(_i) = 2^2^i and v( _i) = 0; * (PreTest3): v() + v() = 2^2^i; * (PostTest1): For all ∉{,}, v'() = v(); * (PostTest2): either (i) v() = v'() = 0, v() = v'() and ℓ = ℓ^i_z, or (ii) v'() = v() >0, v'() = v() and ℓ = ℓ^𝚃𝚂,i,_nz. Moreover, if for all 0 ≤ j ≤ n, and any counter ∈ Y_j ∪Y_j, v()≤ 2^2^j, then for all 0 ≤ j ≤ n, and any counter ∈ Y_j ∪Y_j, the value of will never go above 2^2^j during the execution. Note that for a valuation v∈ℕ^X' that meets the requirements (PreTest1), (PreTest2) and (PreTest3), there is only one configuration (ℓ,v') with ℓ∈{ℓ^𝚃𝚂,i,_z,ℓ^𝚃𝚂,i,_nz} such that (ℓ_in,v) ^* (ℓ,v'). Procedural  𝚁𝚜𝚝_i. We use these machines to define a family of procedural- (𝚁𝚜𝚝_i)_0≤ i≤ n that reset the counters in Y_i∪Y_i, assuming that their values are less than or equal to 2^2^i. For 0≤ i≤ n, we let 𝚁𝚜𝚝_i=(^𝚁,i, ',Δ_b^𝚁,i,Δ^𝚁,i_nb, ℓ^𝚁,i_in, {ℓ_out^𝚁,i}). The machine 𝚁𝚜𝚝_0 is pictured in Figure <ref>. For all 0≤ i< n, the machine 𝚁𝚜𝚝_i+1 uses counters from Y_i∪Y_i and procedural- 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(_i) and 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(_i) to control the number of times variables from Y_i+1 and Y_i+1 are decremented. It is pictured in Figure <ref>. Observe that since Y_n=, and Y_n=∅, the machine 𝚁𝚜𝚝_n will be a bit different from the picture: there will only be non-blocking decrements over counters from Y_n, that is over counters from the initial  M. If _i, _i (and 𝚜_i) are set to 2^2^i and _i, _i (and 𝚜_i) are set to 0, then each time this procedural-  takes an outer loop, the variables of Y_i+1∪Y_i+1 are decremented (in a non-blocking fashion) 2^2^i times. This is ensured by the properties of 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i() (Proposition <ref>). Moreover, the location ℓ^𝚃𝚂, i, _z will only be reached when the counter _i is set to 0, and this will happen after 2^2^i iterations of the outer loop, again thanks to the properties of 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i() (Proposition <ref>). So, all in all, variables from Y_i+1∪Y_i+1 will take a non-blocking decrement 2^2^i· 2^2^i times, that is 2^2^i+1 times.
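This counting argument can be checked with a small, purely arithmetic sketch; it only illustrates the bound and abstracts the control of 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i by two plain loops of 2^2^i iterations each, which is an assumption of the sketch rather than a model of the machines themselves.

```python
# Illustrative check of the bound achieved by Rst_{i+1}: an outer loop of
# 2^(2^i) iterations, each performing 2^(2^i) non-blocking decrements,
# removes exactly 2^(2^(i+1)) from a counter, or floors it at 0.
def rst_effect(value, i):
    bound = 2 ** (2 ** i)
    for _ in range(bound):                # outer loop, driven by a counter of Y_i
        for _ in range(bound):            # inner loop, driven by the other one
            value = max(0, value - 1)     # one non-blocking decrement
    return value

for i in range(3):
    big = 2 ** (2 ** (i + 1))             # 2^(2^i) * 2^(2^i)
    assert rst_effect(big, i) == 0        # a counter at 2^(2^(i+1)) is reset to 0
    assert rst_effect(big + 7, i) == 7    # larger values lose exactly 2^(2^(i+1))
```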
These properties are formalized in the following proposition. For all 0≤ i≤ n, for all v∈ℕ^' such that * (PreRst1): for all 0 ≤ j < i, for all ∈Y_j, v() = 2^2^j and for all ∈ Y_j, v() = 0, for all v' ∈ℕ^', if (^𝚁,i, v) ^* (ℓ^𝚁,i_out,v') in 𝚁𝚜𝚝_𝚒 then * (PostRst1): for all ∈ Y_i ∪Y_i, v'() = max(0, v() - 2^2^i), * (PostRst2): for all ∉Y_i ∪Y_i, v'() = v(). For all ∈', we say that is initialized in a valuation v if ∈ Y_i for some 0≤ i≤ n and v()=0, or ∈Y_i for some 0≤ i≤ n and v()=2^2^i. For 0≤ i≤ n, we say that a valuation v∈ℕ^' is i-bounded if for all ∈ Y_i ∪Y_i, v() ≤ 2^2^i. The procedural- 𝚁𝚜𝚝_i takes care of resetting the counters in Y_i∪Y_i. The following lemma states that no counter in Y_j∪Y_j, for 1≤ j≤ n, will be increased over 2^2^j during this process, and that it properly resets the counters in Y_i ∪Y_i. Let 0≤ i ≤ n, and let v∈ℕ^' satisfying (PreRst1) for 𝚁𝚜𝚝_𝚒. If for all 0≤ j ≤ n, v is j-bounded, then for all (ℓ,v')∈^𝚁,i×ℕ^' such that (ℓ^𝚁,i_in,v) ^* (ℓ, v') in 𝚁𝚜𝚝_i, v' is j-bounded for all 0≤ j ≤ n. Furthermore, the unique configuration such that (ℓ^𝚁,i_in,v) ^* (ℓ^𝚁,i_out, v') in 𝚁𝚜𝚝_i is defined by v'() = 0 for all ∈ Y_i ∪Y_i and v'() = v() for all ∉ Y_i ∪Y_i. The construction ensures that when one enters 𝚁𝚜𝚝_i with a valuation v that is i-bounded, and in which all variables in ⋃_0≤ j<i Y_j∪Y_j are initialized, the location ℓ^𝚁,i_out is reached with a valuation v' such that: v'() = 0 for all ∈ Y_i ∪Y_i and v'() = v() for all ∉ Y_i ∪Y_i. Moreover, if v is j-bounded for all 0≤ j≤ n, then any valuation reached during the execution remains j-bounded for all 0≤ j≤ n. Procedural  𝙸𝚗𝚌_i. The properties we seek for 𝚁𝚜𝚝_i are ensured whenever the variables in ⋃_0≤ j<iY_j∪Y_j are initialized. This is taken care of by a family of procedural- introduced in <cit.>. For all 0≤ i< n, 𝙸𝚗𝚌_i is a procedural- with initial location ^𝙸𝚗𝚌, i, and unique output location ℓ^𝙸𝚗𝚌, i_out. They enjoy the following property: for 0≤ i<n, when one enters 𝙸𝚗𝚌_i with a valuation v in which all the variables in ⋃_0≤ j<i Y_j∪Y_j are initialized and v()=0 for all ∈Y_i, then the location ℓ^𝙸𝚗𝚌_i_out is reached with a valuation v' such that v'()=2^2^i for all ∈Y_i, and v'()=v() for all other ∈'. Moreover, if v is j-bounded for all 0≤ j≤ n, then any valuation reached during the execution remains j-bounded for all 0≤ j≤ n. For all 0≤ i< n, for all v,v'∈ℕ^', (^𝙸𝚗𝚌, i,v) ^* (ℓ_out^𝙸𝚗𝚌, i, v') in 𝙸𝚗𝚌_i if and only if: * (PreInc1) for all 0 ≤ j < i, for all ∈Y_j, v() = 2^2^j and for all ∈ Y_j, v() = 0; * (PreInc2) for all ∈Y_i, v( ) = 0; * (PostInc1) for all ∈Y_i, v'() = 2^2^i; * (PostInc2) for all ∉Y_i, v'() = v(). Moreover, if for all 0≤ j ≤ n, v is j-bounded, then for all (ℓ,v”) such that (ℓ^𝙸𝚗𝚌,i_in,v) ^* (ℓ, v”) in 𝙸𝚗𝚌_i, v” is j-bounded for all 0≤ j≤ n. Procedural  𝚁𝚜𝚝𝙸𝚗𝚌. Finally, let 𝚁𝚜𝚝𝙸𝚗𝚌 be a procedural-  with initial location ℓ_a and output location ℓ_b, over the set of counters ' and built as an alternation of 𝚁𝚜𝚝_i and 𝙸𝚗𝚌_i for 0≤ i<n, finished by 𝚁𝚜𝚝_n. It is depicted in <ref>. Thanks to the properties of the machines 𝚁𝚜𝚝_i and 𝙸𝚗𝚌_i, in the output location of each 𝙸𝚗𝚌_i machine, the counters in Y_i are set to 2^2^i, which allows the counters in Y_i+1∪Y_i+1 to be set to 0 in the output location of 𝚁𝚜𝚝_i+1. Hence, in the output location of 𝚁𝚜𝚝_n, the counters in Y_n= are set to 0. The reduction.
To build the final  N, we compose the procedural  𝚁𝚜𝚝𝙸𝚗𝚌 with the   M in the way described in <ref>, and we add to every location ℓ of 𝚁𝚜𝚝𝙸𝚗𝚌 and M a restore transition (ℓ, ∅,') which is represented in the figure in an abstract way with dashed arrows, for readability's sake. From <cit.>, each procedural machine 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i() and 𝙸𝚗𝚌_i has size at most C × n^2 for some constant C. Hence, observe that N is of size at most B for some B∈ O(|M|^3). One can show that (, 0_) ^*_M (ℓ_f, v) for some v∈ℕ^, if and only if (', 0_') ^*_N (ℓ_f, v') for some v'∈ℕ^'. Using <ref>, we obtain: [] is -hard. § COVERABILITY FOR RENDEZ-VOUS PROTOCOLS In this section, we prove that the  and  problems are both -complete for rendez-vous protocols. To this end, we present the following reductions:  reduces to [] and [] reduces to . This will prove that  is in  and  is -hard (from <ref> and <ref>). As  is an instance of , the two reductions suffice to prove -completeness for both problems. §.§ From Rendez-vous Protocols to Let = (Q, Σ, , q_f, T) be a rendez-vous protocol and C_F a configuration of  to be covered. We shall also decompose C_F as a sum of multisets 𝐪_1 + 𝐪_2 + … + 𝐪_s. Observe that there might be 𝐪_i=𝐪_j for i≠ j. We build the  M = (, , Δ_b, Δ_nb, ) described in <ref>. Here, the set of counters is Q. A configuration C of is meant to be represented in M by (,v), with v(q)=C(q) for all q∈ Q. The only meaningful location of M is then . The other ones are here to ensure correct updates of the counters when simulating a transition. We let = {}∪{ℓ_(t,t')^1, ℓ_(t,t')^2,ℓ_(t,t')^3| t=(q,!a,q'), t'=(p,?a,p')∈ T}∪{ℓ_t, ℓ_t,p_1^a,⋯,ℓ_t,p_k^a| t=(q,!a,q')∈ T, a={p_1,…, p_k}}∪{ℓ_q| t=(q,τ,q')∈ T}∪{ℓ_1 …ℓ_s}, with final location ℓ_f = ℓ_s, where m for a message m ∈Σ has been defined in <ref>. The sets Δ_b and Δ_nb are shown in <ref>. Transitions pictured in <ref> show how to simulate a rendez-vous protocol with the classical rendez-vous mechanism. The non-blocking rendez-vous are handled by the transitions pictured in <ref> (where the only non-blocking transitions of the  occur): to simulate the occurrence of (q,!a,q'), the  M decrements the value of q by a transition of the form (3). It then takes a sequence of non-blocking decrements for each state in a. The last transition of the simulation of a non-blocking rendez-vous is to increment the counter q' by a transition of the form (3). If the  M faithfully simulates , then this loop of non-blocking decrements is taken when the values of the counters in a are equal to 0, and the configuration reached still corresponds to a configuration in . However, it could be that this loop is taken in M while some counters in a are strictly positive. In this case, a blocking rendez-vous has to be taken in , e.g. (q,!a,q') and (p,?a,p') if the counter p in M is strictly positive. Therefore, the reached configuration (, v) and the corresponding configuration C in will be different: first, C(p')>v(p'), since the process in p has moved to the state p' in , whereas there has been no increment of p' in M. Furthermore, all other non-blocking decrements of counters in a in M may have effectively decremented the counters, whereas in no other process has left a state of a. However, this ensures that C≥ v. The reduction then ensures that if (, v) is reachable in M, then a configuration C≥ v is reachable in . Then, if it is possible to reach a configuration (, v) in M whose counters are high enough to cover C_F, then the corresponding initial execution in will reach a configuration C≥ v, which hence covers C_F.
over rendez-vous protocols is in . §.§ From  to Rendez-Vous Protocols The reduction from [] to  in rendez-vous protocols mainly relies on the mechanism that can ensure that at most one process evolves in some given set of states, as explained in <ref>. This will allow us to select a “leader” among the processes that will simulate the behaviour of the  whereas other processes will simulate the values of the counters. Let M = (, , Δ_b, Δ_nb, ) be a  and ℓ_f ∈ a final target location. We build the rendez-vous protocol pictured in <ref>, where (M) is the part that will simulate the  M. The locations {1_|∈} will encode the values of the different counters during the execution: for a configuration C, C(1_) will represent the value of the counter . We then define (M)=(Q_M,Σ_M,,ℓ_f,T_M) with Q_M=∪{ℓ_δ|δ∈Δ_b}, Σ_M= {inc_,inc_, dec_, dec_, nbdec_|∈}, and T_M={(ℓ_i,!inc_, ℓ_δ),(ℓ_δ, ?inc_, ℓ_j)|δ=(ℓ_i, , ℓ_j)∈Δ_b}∪{(ℓ_i, !dec_, ℓ_δ), (ℓ_δ, ?dec_, ℓ_j)| δ = (ℓ_i, , ℓ_j) ∈Δ_b}∪{(ℓ_i, !nbdec_, ℓ_j)| (ℓ_i, ,ℓ_j)∈Δ_nb}∪{(ℓ_i, τ, ℓ_j)| (ℓ_i, ,ℓ_j)∈Δ_b}. Here, the reception of a message inc_ (respectively dec_) works as an acknowledgement, ensuring that a process has indeed received the message inc_ (respectively dec_), and that the corresponding counter has been incremented (resp. decremented). For the non-blocking decrement, no acknowledgement is required. The protocol =(Q,Σ,, ℓ_f,T) is then defined with Q= Q_M∪{1_, q_, q'_|∈}∪{, q, q_}, Σ=Σ_M∪{L, R}, and T is the set of transitions T_M along with the transitions pictured in <ref>. Note that there is a transition (ℓ,?L,q_) for all ℓ∈ Q_M. With two non-blocking transitions on L and R at the beginning, protocol can faithfully simulate the  M without further ado, provided that the initial configuration contains enough processes to simulate all the counter values during the execution: after having sent a process to state , any transition of M can be simulated in . Conversely, an initial execution of can send multiple processes into the 𝒫(M) zone, which can mess up the simulation. However, each new process entering 𝒫(M) will send the message L, which will send the process already in {q}∪ Q_M to the deadlock state q_, and send the message R, which will be received by any process in {q_,q'_|∈}. Moreover, the construction of the protocol ensures that there can only be one process in the set of states {q_,q'_|∈}. Then, if we have reached a configuration simulating the configuration (ℓ, v) of M, sending a new process into the 𝒫(M) zone will lead to a configuration (, v), and hence simply mimics a restore transition of M. So every initial execution of corresponds to an initial execution of M.  and over rendez-vous protocols are -complete. § COVERABILITY FOR WAIT-ONLY PROTOCOLS In this section, we study a restriction on rendez-vous protocols in which we assume that a process waiting to answer a rendez-vous cannot perform another action by itself. This allows for a polynomial time algorithm for solving .
§.§ Wait–Only Protocols We say that a protocol = (Q, Σ, , q_f, T) is wait-only if the set of states Q can be partitioned into Q_A — the active states — and Q_W — the waiting states — with ∈ Q_A and: * for all q ∈ Q_A, for all (q',?m,q”)∈ T, we have q'≠ q; * for all q∈ Q_W, for all (q', !m, q”) ∈ T, we have q' ≠ q and for all (q', τ, q”) ∈ T, we have q'≠ q. From a waiting state, a process can only perform receptions (if it can perform anything), whereas in an active state, a process can only perform internal actions or send messages. Examples of wait-only protocols are given by Figures <ref> and <ref>. In the sequel, we will often refer to the paths of the underlying graph of the protocol. Formally, a path in a protocol = (Q, Σ, , q_f, T) is either a control state q ∈ Q or a finite sequence of transitions in T of the form (q_0,a_0,q_1)(q_1,a_1,q_2)…(q_k,a_k,q_k+1), the first case representing a path from q to q and the second one from q_0 to q_k+1. §.§ Abstract Sets of Configurations To solve the coverability problem for wait-only protocols in polynomial time, we rely on a sound and complete abstraction of the set of reachable configurations. In the sequel, we consider a wait-only protocol = (Q, Σ, , q_f, T) whose set of states is partitioned into a set of active states Q_A and a set of waiting states Q_W. An abstract set of configurations γ is a pair (S,) such that: * S ⊆ Q is a subset of states, and, * ⊆ Q_W ×Σ is a subset of pairs composed of a waiting state and a message, and, * q ∉S for all (q,m) ∈. We then abstract the set of reachable configurations as a set of states of the underlying protocol. However, as we have seen, some states, like states in Q_A, can host an unbounded number of processes together (this will be the states in S), while some states can only host a bounded number (in fact, 1) of processes together (this will be the states stored in ). This happens when a waiting state q answers a rendez-vous m that has necessarily been requested for a process to be in q. Hence, in , along with a state q, we remember the last message m having been sent on the path leading from to q (such a state q is necessarily in Q_W). Observe that, since several paths can lead to q, there can be (q,m_1),(q,m_2)∈ with m_1≠ m_2. We denote by Γ the set of abstract sets of configurations. Let γ=(S,) be an abstract set of configurations. Before we go into the configurations represented by γ, we need some preliminary definitions. We note the set q ∈ Q_W |there exists m∈Σ such that (q,m) ∈ of control states appearing in . Given a state q ∈ Q, we let q be the set m ∈Σ|there exists q'∈ Q such that (q,?m, q') ∈ T of messages that can be received in state q (if q is not a waiting state, this set is empty). Given two different waiting states q_1 and q_2 in , we say q_1 and q_2 are conflict-free in γ if there exist m_1,m_2 ∈Σ such that m_1 ≠ m_2, (q_1,m_1),(q_2,m_2) ∈ and m_1 ∉q_2 and m_2 ∉q_1. We now say that a configuration C∈() respects γ if and only if for all q ∈ Q such that C(q)>0 one of the following two conditions holds: * q ∈ S, or, * q ∈ and C(q)=1 and for all q' ∈∖q such that C(q')=1, we have that q and q' are conflict-free. Note that the condition is on states q such that C(q) > 0 and not all states q ∈ Q because it might be that some states do not appear in S∪ st(Toks) (non-reachable states, for instance). Let γ be the set of configurations respecting γ. Note that in γ, for q in S there is no restriction on the number of processes that can be put in q, and if q is in , it can host at most one process.
Two states from can both host a process if they are conflict-free. Finally, we will only consider abstract sets of configurations that are consistent. This property aims to ensure that concrete configurations that respect it are indeed reachable from states of S. Formally, we say that an abstract set of configurations γ=(S,) is consistent if (i) for all (q,m) ∈, there exists a path (q_0,a_0,q_1)(q_1,a_1,q_2)…(q_k,a_k,q) in such that q_0 ∈ S and a_0= !m and for all 1≤ i ≤ k, we have that a_i= ?m_i and that there exists (q'_i,!m_i,q”_i) ∈ T with q'_i ∈ S, and (ii) for two tokens (q,m), (q',m') ∈ either m∈q' and m'∈q, or, m∉q' and m'∉q. Condition (i) ensures that processes in S can indeed lead to a process in the states from . Condition (ii) ensures that if in a configuration C, some states in are pairwise conflict-free, then they can all host a process together. Given γ∈Γ and a configuration C, there exists C' ∈γ such that C' ≥ C if and only if C ∈γ. Checking that C∈γ can be done in polynomial time. §.§ Computing Abstract Sets of Configurations Our polynomial time algorithm is based on the computation of a polynomial length sequence of consistent abstract sets of configurations leading to a final abstract set characterising in a sound and complete manner (with respect to the coverability problem), an abstraction for the set of reachable configurations. This will be achieved by a function F:Γ→Γ, that inductively computes this final abstract set starting from γ_0=(, ∅). Formal definition of the function F relies on intermediate sets S”⊆ Q and ”⊆ Q_W ×Σ, which are the smallest sets satisfying the conditions described in <ref>. From S and , rules described in <ref> add states and tokens to S” and ” from the outgoing transitions from states in S and (). It must be that every state added to S” can host an unbounded number of processes, and every state added to ” can host at least one process, furthermore, two conflict-free states in ” should be able to host at least one process at the same time. We now provide the formal definition of this function. For an abstract set of configurations γ=(S,), we will have γ'=F(γ) if and only if γ'=(S',') where S' and ' are built as follows. First we use some intermediate sets of states S”⊆ Q and ”⊆ Q_W ×Σ which are the smallest sets satisfying the following conditions S ⊆ S” and ⊆” and: * for all (p,τ,p') ∈ T with p ∈ S, we have p' ∈ S”; * for all (p,!a,p') ∈ T with p ∈ S, we have: (a) p' ∈ S” if a ∉p' or if there exists (q,?a,q') ∈ T with q ∈ S; (b) (p',a) ∈” otherwise (i.e. when a ∈p' and there does not exists (q,?a,q') ∈ T with q ∈ S); * for all (q,?a,q') ∈ T with q ∈ S or (q,a) ∈, we have q' ∈ S” if there exists (p,!a,p') ∈ T with p ∈ S; * for all (q,?a,q') ∈ T with (q,m) ∈ with m ≠ a, we have: (a) q' ∈ S” if m ∉q' and there exists (p,!a,p') ∈ T with p ∈ S; (b) (q',m) ∈” if m ∈q' and there exists (p,!a,p') ∈ T with p ∈ S. We have then that S' is the smallest set including S” and such that: * for all (q_1, m_1), (q_2, m_2) ∈” such that m_1 m_2 and m_2 ∉q_1 and m_1 ∈q_2, we have q_1 ∈ S'; * for all (q_1, m_1), (q_2, m_2), (q_3,m_2) ∈” s.t m_1 m_2 and (q_2, ?m_1, q_3) ∈ T, we have q_1 ∈ S'; * for all (q_1, m_1), (q_2, m_2), (q_3, m_3) ∈” such that m_1 m_2 and m_1 m_3 and m_2 m_3 and m_1 ∉q_2, m_1 ∈q_3 and m_2∉q_1, m_2 ∈q_3, and m_3 ∈q_2 and m_3 ∈q_1, we have q_1 ∈ S'. And finally '=(q,m) ∈”| q ∉S'. Consider the wait-only protocol _1 depicted on Figure <ref>. 
From (q_in,∅), rules described in <ref> construct the following pair (S_1”, _1”) = (q_in,q_4,(q_1,a),(q_1,b),(q_5,c)). In _1, it is indeed possible to reach a configuration with as many processes as one wishes in the state q_4 by repeating the transition (q_in,!d,q_4) (rule <ref>). On the other hand, it is possible to put at most one process in the waiting state q_1 (rule <ref>), because any other attempt from a process in will yield a reception of the message a (resp. b) by the process already in q_1. Similarly, we can put at most one process in q_5. Note that in _1”, the states q_1 and q_5 are conflict-free and it is hence possible to have simultaneously one process in both of them. If we apply rules of <ref> one more time to (S”_1, ”_1), we get S_2”=, q_2, q_4, q_6,q_7 and _2”=(q_1,a), (q_1,b), (q_3,a),(q_3,b),(q_5,c). We can put at most one process in q_3: to add one, a process will take the transition (q_1,?c,q_3). Since (q_1,a), (q_1,b)∈”_1, there can be at most one process in state q_1, and this process arrived by a path in which the last request of rendez-vous was !a or !b. Since {a,b}⊆q_3, by rule <ref>, (q_3,a),(q_3,b) are added. On the other hand, we can put as many processes as we want in the state q_7 (rule <ref>): from a configuration with one process on state q_5, successive non-blocking requests on letter c and rendez-vous on letter d allow one to increase the number of processes in state q_7. However, one can observe that q_5 can in fact host an unbounded number of processes: once two processes have been put on states q_1 and q_5 respectively (remember that q_1 and q_5 are conflict-free in (S”_1, ”_1)), iterating rendez-vous on letter c (with transition (q_1, ?c, q_3)) and rendez-vous on letter a puts as many processes as one wants on state q_5. This is why we need another transformation, applied to (S_2”, _2”), to obtain F(S”_1, ”_1). As we shall see, this transformation does not have any impact on S”_1 and ”_1 and so it holds that F((, ∅)) = (S”_1, ”_1). Writing F(γ) = (S', '), <ref> describes the construction of S' from (S”, ”), while ' = ”∖ (S' ×Σ), i.e. all states added to S' are removed from ', so a state belongs either to S' or to '. Now the case of state q_5 discussed in the previous example leads to an application of rule <ref>, since (q_5,c), (q_1,a), (q_3,a) ∈”_2 and (q_1,?c,q_3)∈ T. Finally, F(F(q_in,∅))=(q_in, q_2,q_4, q_5, q_6,q_7,(q_1,a), (q_1,b), (q_3,a),(q_3,b)). Since q_1 and q_3 are not conflict-free, they will not be reachable together in a configuration. We consider now the wait-only protocol _2 depicted in Figure <ref>. In that case, to compute F((q_in,∅)) we will first have S”=q_in and ”=(q_1,a),(q_2,b),(p_1,m_1),(p_2,m_2),(p_3,m_3) (using rule <ref>), to finally get F((q_in,∅))=(q_in,q_1,p_1,(q_2,b),(p_2,m_2),(p_3,m_3)). Applying rule <ref> to tokens (q_1, a) and (q_2, b) from ”, we obtain that q_1∈ S': whenever one manages to obtain one process in state q_2, this process can answer the requests on message a instead of processes in state q_1, allowing one to obtain as many processes as desired in state q_1. Now since (p_1,m_1), (p_2, m_2) and (p_3, m_3) are in ” and respect the conditions of rule <ref>, p_1 is added to the set S' of unbounded states. This case is a generalisation of the previous one, with 3 processes. Once one process has been put on state p_2 from , iterating the following actions (rendez-vous over m_3, rendez-vous over m_1, non-blocking request of m_2) ensures as many processes as one wants on state p_1.
Finally, applying F successively, we get in this case the abstract set (q_in,q_1,q_3,p_1,p_2,p_3,p_4,(q_2,b)). We show that F satisfies the following properties. * F(γ) is consistent and can be computed in polynomial time for all consistent γ∈Γ. * If (S',')=F(S,) then S ≠ S' (and S ⊆ S') or ⊆'. * For all consistent γ∈Γ, if C ∈γ and C C' then C' ∈F(γ). * For all consistent γ∈Γ, if C' ∈F(γ), then there exists C”∈ and C ∈γ such that C”≥ C' and C ^∗ C”. Points 1 and 2 ensure that if we apply the function F successively to (q_in,∅), then the computation reaches a consistent abstract set γ_f such that γ_f=F(γ_f), and that it takes polynomial time. Point 3 ensures that the computed abstraction is complete, whereas Point 4 guarantees its soundness. §.§ Polynomial Time Algorithm We now present our polynomial time algorithm to solve  for wait-only protocols. We define the sequence (γ_n)_n ∈ as follows: γ_0=(,∅) and γ_i+1=F(γ_i) for all i ∈. First note that γ_0 is consistent and that γ_0= is the set of initial configurations. Using Lemma <ref>, we deduce that γ_i is consistent for all i ∈. Furthermore, each time we apply F to an abstract set of configurations (S,), either S or increases, or (S, ) stabilises. Hence for all n ≥ |Q|^2·|Σ|, we have γ_n+1=F(γ_n)=γ_n. Let γ_f=γ_|Q|^2·|Σ|. Using Lemma <ref>, we get: Given C ∈, there exist C_0 ∈ and C' ≥ C such that C_0 ^∗ C' if and only if there exists C”∈γ_f such that C”≥ C. We need to iterate the function F |Q|^2·|Σ| times to compute γ_f, and each computation of F can be done in polynomial time. Furthermore, checking whether there exists C”∈γ_f such that C”≥ C for a configuration C ∈ can be done in polynomial time by Lemma <ref>; hence, using the previous lemma, we obtain the desired result.  and  restricted to wait-only protocols are in . § UNDECIDABILITY OF It is known that [CM] is undecidable in its full generality <cit.>. This result holds for a very restricted class of counter machines, namely Minsky machines (Minsky-CM for short), which are CM over 2 counters, _1 and _2. Actually, it is already undecidable whether there is an execution (,0_{_1,_2})^* (ℓ_f, 0_{_1,_2}). A reduction from this last problem gives the following result.  is undecidable, even for wait-only protocols. Fix M = (, ℓ_0, {_1, _2}, Δ ) with ℓ_f ∈ the final state. W.l.o.g., we assume that there is no outgoing transition from state ℓ_f in the machine. The protocol  is described in <ref>. The states {0_i,p_i,1_i,p'_i| i=1,2} will be visited by processes simulating values of counters, while the states in will be visited by a process simulating the different locations in the Minsky-CM. If at the end of the computation, the counters are equal to 0, it means that each counter has been incremented and decremented the same number of times, so that all processes simulating the counters end up in the state ℓ_f. The first challenge is to appropriately check when a counter equals 0. This is achieved thanks to the non-blocking semantics: the process sends a message !zero_i to check if the counter i equals 0. If it does not, the message will be received by a process that will end up in the deadlock state . The second challenge is to ensure that only one process simulates the Minsky-CM in the states in . This is ensured by the states {w, w'}. Each time a process arrives in the state, another must arrive in the w' state, as a witness that the simulation has begun.
This witness must reach ℓ_f for the computation to witness a positive instance of , but it should be the first to do so, otherwise a process already in ℓ_f will receive the message “w” and reach the deadlock state . Thus, if two processes simulate the Minsky-CM, there will be two witnesses, and they will not be able to reach ℓ_f together. § CONCLUSION We have introduced the model of parameterised networks communicating by non-blocking rendez-vous, and showed that safety analysis of such networks becomes much harder than in the framework of classical rendez-vous. Indeed,  and  become -complete and  undecidable in our framework, while these problems are solvable in polynomial time in the framework of <cit.>. We have introduced a natural restriction of protocols, in which control states are partitioned between active states (that allow requesting of rendez-vous) and waiting states (that can only answer to rendez-vous) and showed that  can then be solved in polynomial time. Future work includes finding further restrictions that would yield decidability of . A candidate would be protocols in which waiting states can only receive one message. Observe that in that case, the reduction of <ref> can be adapted to simulate a , hence  for this subclass of protocols is as hard as reachability in Vector Addition Systems with States, i.e. non-primitive recursive <cit.>. Decidability remains open though. § PROOFS OF <REF> We present here the omitted proofs of <ref>. §.§ Proof of <ref> We will in fact prove the  upper bound for a more general model: Non-Blocking Vector Addition Systems (). A  is composed of a set of transitions over vectors of dimension d, sometimes called counters, and an initial vector of d non-negative integers, like in VAS. However, in a , a transition is a pair of vectors: one is a vector of d integers and is called the blocking part of the transition and the other one is a vector of d non-negative integers and is called the non-blocking part of the transition. Let d ∈ℕ. A Non-blocking Vector Addition System () of dimension d is a tuple (T, v_init) such that T ⊆ℤ^d ×ℕ^d and v_init∈ℕ^d. Formally, for two vectors v, v' ∈ℕ^d, and a transition t=(t_b, t_nb) ∈ T, we write v t v' if there exists v”∈ℕ^d such that v” = v + t_b and, for all i ∈ [1,d], v'(i) = max(0, v”(i) - t_nb(i)). We write for ⋃_t ∈ Tt. We define an execution as a sequence of vectors v_1 v_2 … v_k such that for all 1 ≤ i < k, v_i v_i+1. Intuitively, the blocking part t_b of the transition has a strict semantics: to be taken, it needs to be applied to a vector large enough so that no value goes below 0. The non-blocking part t_nb can be taken even if it decreases some component below 0: the corresponding component will simply be set to 0. We can now define the  problem on . The  problem for a  V = (T,v_init) of dimension d ∈ℕ and a target vector v_f asks whether there exists v∈ℕ^d such that v ≥ v_f and v_init^∗ v. Adapting the proof of <cit.> to the model of  yields the following result. The  problem for  is in . Fix a  (T,v_init) of dimension d. We will extend the semantics of  to a slightly relaxed semantics: for v,v' ∈ℕ^d and t = (t_b, t_nb) ∈ T, we write v t v' when, for all 1≤ j ≤ d, v'(j) = max(0, (v+t_b -t_nb)(j)). Note that v t v' implies that v t v' but the converse is false: consider an   of dimension d = 2, with t = (t_b, t_nb) ∈ T such that t_b =(-3, 0) and t_nb = (0, 1), and let v = (1, 2) and v' =(0, 1). One can easily see that there does not exist v”∈ℕ^2 such that v” = v + t_b, as 1 - 3<0.
So, t cannot be taken from v and it is not the case that vt v', however, v t v'. We use for ⋃_t ∈ Tt. Let J ⊆ [1,d], a path v_0 v_1 … v_m is said to be J-correct if for all v_i such that i < m, there exists t = (t_b, t_nb) ∈ T, such that v_i t v_i+1 and for all j ∈ J, (v_i + t_b)(j) ≥ 0. We say that the path is correct if the path is [1,d]-correct. It follows from the definitions that for all v,v'∈ℕ^d, v^* v' if and only if there exists a correct path between v and v'. Fix a target vector v_f ∈ℕ^d, and define = |v_f| + max_(t_b, t_nb)∈ T(|t_b| + |t_nb|), where |·| is the norm 1 of vectors in ℤ^d. Let ρ = v_0 v_1 … v_m and J ⊆ [1,d]. We say the path ρ is J-covering if it is J-correct and for all j ∈ J, v_m(j) ≥ v_f(j). Let r ∈ℕ, we say that ρ is (J,r)-bounded if for all v_i, for all j ∈ J, v_i(j) < r. Let v ∈ℕ^d, we define m(J,v) as the length of the shortest J-covering path starting with v, 0 if there is none. Note 𝒥_i = {J⊆ [1,d]| |J| = i } and define the function f as follows: for 1 ≤ i ≤ d, f(i) = max{m(J_i, v) | J_i ∈𝒥_i, v∈ℕ^d}. We will see that f is always well defined, in . f(0) = 1. From any vector v ∈ℕ^d, the path with one element v is ∅-covering. For all 0 ≤ i < d, f(i+1) ≤ (· f(i))^i+1 + f(i). Let J ∈𝒥_i+1 and v∈ℕ^d such that there exists a J-covering path starting with v. Note ρ = v_0t^1…t^mv_m the shortest such path. First case: ρ is (J, .f(i))-bounded. Assume, for sake of contradiction, that for some k < ℓ, for all j∈ J, v_k(j)=v_ℓ(j). Then we show that v_0… v_kv_ℓ+1…v_m is also a J-correct path, with the vectors (v_ℓ')_ℓ< ℓ'≤ m, defined as follows. v_ℓ+1(j)=v_ℓ+1(j) for all j∈ J max(0,(v_k(j)+t^ℓ+1_b(j)-t^ℓ+1_nb(j))) otherwise. And for all ℓ + 1< ℓ'≤ m, v_ℓ'(j)=v_ℓ'(j) for all j∈ J max(0, (v_ℓ'-1(j)+t_b^ℓ'(j)-t_nb^ℓ'(j))) otherwise. Then v_0… v_kv_ℓ+1…v_m is also a J-correct path. Indeed, since v_k(j)=v_ℓ(j) for all j∈ J, we have that v_ℓ+1(j)=v_ℓ+1(j)=max(0,(v_ℓ(j) + t^ℓ+1_b(j) - t^ℓ+1_nb(j)))=max(0,(v_k(j) + t^ℓ+1_b(j) - t^ℓ+1_nb(j))). Moreover, for j∈ J, since v_ℓ(j)+t^ℓ+1_b(j)≥ 0, we get that v_k(j)+ t^ℓ+1_b(j)≥ 0. By definition, for j∉ J, v_ℓ+1(j)=max(0,(v_k(j) + t^ℓ+1_b(j) - t^ℓ+1_nb(j))). Hence, v_k^t^ℓ+1v_ℓ+1, and v_0^t^1… v_k^t^ℓ+1v_ℓ+1 is J-correct. Now let ℓ<ℓ'<m. By definition, for j∈ J, v_ℓ'+1(j)=v_ℓ'+1(j). Then, v_ℓ'+1(j)=max(0,(v_ℓ'(j)+t^ℓ'+1_b(j) - t^ℓ'+1_nb(j))) = max(0,(v_ℓ'(j)+t^ℓ'+1_b(j) - t^ℓ'+1_nb(j))). Again, since ρ is J-correct, we deduce that for j∈ J, v_ℓ'(j)+t^ℓ'+1_b(j)≥ 0, hence v_ℓ'(j)+t^ℓ'+1_b(j)≥ 0. For j∉ J, v_ℓ'+1(j)=max(0, (v_ℓ'(j)+t_b^ℓ'+1(j)-t_nb^ℓ'+1(j))). So v_ℓ'^t^ℓ'+1v_ℓ'+1, and v_0^t^1… v_k^t^ℓ'+1v_ℓ'+1 is J-correct. Then, ρ'=v_0… v_kv_ℓ+1…v_m is a J-correct path, and since v_m(j)=v_m(j) for all j∈ J, it is also J-covering, contradicting the fact that ρ is minimal. Hence, for all k < ℓ, there exists j ∈ J such that v_k(j) ≠ v_ℓ(j). The length of such a path is at most (.f(i))^i+1, so m(J,v)≤ (.f(i))^i+1≤ (.f(i))^i+1+f(i). Second case: ρ is not (J, .f(i))-bounded. We can then split ρ into two paths ρ_1 ρ_2 such that ρ_1 is (J,.f(i))-bounded and ρ_2 = v'_0 … v'_n is such that v'_0(j) ≥.f(i) for some j ∈ J. As we have just seen, |ρ_1|≤ (.f(i))^i+1. Note J' = J ∖{j} with j such that v'_0(j) ≥.f(i). Note that ρ_2 is J'-covering, therefore, by definition of f, there exists a J'-covering execution ρ = w_0 … w_k with w_0=v'_0, and such that |ρ|≤ f(i). Also, by definition of , for all 1≤ j' ≤ d, for all (t_b,t_nb)∈ T, ≥ |t_b(j')|+|t_nb(j')|, then t_b(j')≥ -, and t_b(j')-t_nb(j')≥ -. 
Hence, for all v∈^d, 1≤ j'≤ d, and c∈ such that v(j')≥ + c, for all (t_b,t_nb)∈ T, (v+t_b)(j') ≥ c and (v+t_b-t_nb)(j') ≥ c. Now, since w_0 = v'_0, we get w_0(j)≥.f(i). We deduce two things: first, for all 0 ≤ℓ < k, if t=(t_b,t_nb)∈ T is such that w_ℓ^t w_ℓ+1, it holds that (w_ℓ + t_b)(j)≥.(f(i)- ℓ - 1). Since k = f(i) - 1, it yields that ρ is J-correct. Second, for all 0 ≤ℓ≤ k, w_ℓ(j)≥(f(i) - ℓ). Again, k = f(i) - 1, so w_k(j) ≥≥ v_f(j). Hence ρ is also J-covering. Since ρ is the shortest J-covering path, we conclude that |ρ|≤ (.f(i))^i+1 + f(i), and so m(J,v)≤ (.f(i))^i+1 + f(i). We define a function g such that g(0) = 1 and g(i+1) = (+1)^d(g(i))^d for 0 ≤ i < d; then f(i)≤ g(i) for all 1 ≤ i ≤ d. Hence, f(d) ≤ g(d) ≤ (+1)^d^d+1≤ 2^2^cnlog n for some n ≥max( d, , |v_init|) and a constant c which does not depend on d, v_0, nor v_f or the . Hence, we can cover vector v_f from v_init if and only if there exists a path (from v_init) of length ≤ 2^2^cn log n which covers v_f. Hence, there is a non-deterministic procedure that guesses a path of length ≤ 2^2^cn log n, checks if it is a valid path and accepts it if and only if it covers v_f. As |v_init|≤ n, |v_f| ≤ n and for all (t_b, t_nb) ∈ T, |t_b| + |t_nb| ≤ n, this procedure takes an exponential space in the size of the protocol. By Savitch theorem, there exists a deterministic procedure in exponential space for the same problem. We are now ready to prove that the  problem for  is as hard as the  problem for . [] reduces to  in . Let a  M = (, , Δ_b,Δ_nb, ), for which we assume wlog that it does not contain any self-loop (replace a self loop on a location by a cycle using an additional internal transition and an additional location). We note = {_1, …, _m}, and = {ℓ_1…ℓ_k}, with ℓ_1= and ℓ_k=ℓ_f, and let d = k+m. We define the  V = (T, v_init) of dimension d as follows: it has one counter by location of the , and one counter by counter of the . The transitions will ensure that the sum of the values of the counters representing the locations of M will always be equal to 1, hence a vector during an execution of V will always represent a configuration of M. First, for a transition δ = (ℓ_i, op, ℓ_i')∈Δ, we define (t_δ, t'_δ)∈ℤ^d×ℕ^d by t_δ(i) = -1, t_δ(i')= 1 and, * if op=, then t_δ(y)= 0 for all other 1≤ y≤ d, and t'_δ=0_d (where 0_d is the null vector of dimension d), i.e. no other modification is made on the counters. * if op=_j, then t_δ(k+j)=1, and t_δ(y)= 0 for all other 1≤ y≤ d, and t'_δ=0_d, i.e. the blocking part of the transition ensures the increment of the corresponding counter, while the non-blocking part does nothing. * if op=_j, then t_δ(k+j)=-1, and t_δ(y)= 0 for all other 1≤ y≤ d, and t'_δ=0_d, i.e. the blocking part of the transition ensures the decrement of the corresponding counter, while the non-blocking part does nothing. . * if op=_j, then t_δ(y)= 0 for all other 1≤ y≤ d, and t'_δ(k+j)=1 and t'_δ(y)=0 for all other 1≤ y≤ d, i.e. the blocking part of the transition only ensures the change in the location, and the non-blocking decrement of the counter is ensured by the non-blocking part of the transition. We then let T={t_δ|δ∈Δ}, and v_0 is defined by v_init(1)=1 and v_init(y)=0 for all 2≤ y≤ d. We also fix v_f by v_f(k)=1, and v_f(y)=0 for all other 1≤ y≤ d. One can prove that v_f is covered in V if and only if ℓ_f is covered in M. If there exists w ∈ℕ^ such that (, 0_) ^* (ℓ_f, w), then there exists v ∈ℕ^d such that v_0 ^* v and v ≽ v_f. 
Any configuration (ℓ_i,w) of M can be turned into a vector v(ℓ_i,w) ∈ℕ^d such that v(ℓ_i,w)(i)=1, v(ℓ_i,w)(k+j)=w(_j) for all 1≤ j≤ m, and v(ℓ_i,w)(y)=0 for all other 1≤ y≤ k. Observe that v(,0_)=v_0. It follows from the definitions that (ℓ_i,w)(ℓ_i',w') if and only if v(ℓ_i,w) v(ℓ_i',w'). Hence, v_0^*v(ℓ_f,w)≥ v_f. If there exists v ∈ℕ^d such that v_0 ^* v and v ≽ v_f, then there exists w ∈ℕ^ such that (, 0_) ^* (ℓ_f, w). One can prove by induction that every vector v reachable from v_0 is such that there exists only one 1 ≤ i ≤ k such that v(i) = 1 and for all 1 ≤ i' ≤ k such that i ≠ i', v(i') = 0. Hence, given a reachable vector v, one can define γ_v a machine configuration as (ℓ_i, w) where i is the unique index 1≤ i≤ k such that v(i) = 1 and, for all 1 ≤ j ≤ m, w(_j) = v(k+j). Note v_0 v_1 … v_n = v, and observe that γ_v_n = (ℓ_f, w) for some w ∈ℕ^. Again, by a simple induction, one can prove that γ_v_0γ_v_1…γ_v_n, which concludes the proof. Putting together Lemma <ref> and Lemma <ref>, we obtain the proof of <ref>. §.§ Proof of <ref> In this subsection, we prove <ref> by proving that the [] problem is -hard. Put together with <ref>, this proves the -completeness of []. §.§.§ Proofs on the Procedural  Defined in <ref> We formalize some properties of the procedural  presented in <ref> that are used in the proof. For the procedural  𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i, we use the following proposition from <cit.>. Let 0≤ i < n, and ∈Y_i. For all v,v'∈ℕ^X', for ℓ∈{ℓ^𝚃𝚂,i,_z,ℓ^𝚃𝚂,i,_nz}, we have (^𝚃𝚂,i,v)^*(ℓ,v') in 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i() if and only if: * (PreTest1): for all 0 ≤ j < i, for all _j ∈Y_j, v(_j) = 2^2^j and for all _j ∈ Y_j, v(_j) = 0; * (PreTest2): v(_i) = 2^2^i and v( _i) = 0; * (PreTest3): v() + v() = 2^2^i; * (PostTest1): For all ∉{,}, v'() = v(); * (PostTest2): either (i) v() = v'() = 0, v() = v'() and ℓ = ℓ^i_z, or (ii) v'() = v() >0, v'() = v() and ℓ = ℓ^𝚃𝚂,i,_nz. Moreover, if for all 0 ≤ j ≤ n, and any counter ∈ Y_j ∪Y_j, v()≤ 2^2^j, then for all 0 ≤ j ≤ n, and any counter ∈ Y_j ∪Y_j, the value of will never go above 2^2^j during the execution. Note that for a valuation v∈ℕ^X' that meets the requirements (PreTest1), (PreTest2) and (PreTest3), there is only one configuration (ℓ,v') with ℓ∈{ℓ^𝚃𝚂,i,_z,ℓ^𝚃𝚂,i,_nz} such that (ℓ_in,v) ^* (ℓ,v'). *Procedural  𝚁𝚜𝚝_i. We shall now prove that the procedural s we defined and displayed in <ref> meet the desired requirements. Any procedural  𝚁𝚜𝚝_i has the following property. For all 0≤ i≤ n, for all v∈ℕ^' such that * (PreRst1): for all 0 ≤ j < i, for all ∈Y_j, v() = 2^2^j and for all ∈ Y_j, v() = 0, for all v' ∈ℕ^', if (^𝚁,i, v) ^* (ℓ^𝚁,i_out,v') in 𝚁𝚜𝚝_𝚒 then * (PostRst1): for all ∈ Y_i ∪Y_i, v'() = max(0, v() - 2^2^i), * (PostRst2): for all ∉Y_i ∪Y_i, v'() = v(). For 𝚁𝚜𝚝_0, (PreRst1) trivially holds, and it is easy to see that (PostRst1) and (PostRst2) hold. Now fix 0 ≤ i < n, and consider the procedural- 𝚁𝚜𝚝_𝚒+1. Let v_0 ∈ℕ^' be such that for all 0 ≤ j < i+1, for all ∈Y_j, v_0() = 2^2^j and for all ∈ Y_j, v_0( ) = 0, and let v_f be such that (^𝚁,i, v_0) ^+ (ℓ^𝚁,i_out,v_f) in 𝚁𝚜𝚝_i+1. First, we show the following property. Property (∗): if there exist v,v'∈ℕ^' such that v(_i)=k, (^𝚃𝚂,i,,v)^*(ℓ^𝚃𝚂,i,_z,v') with no other visit of ℓ^𝚃𝚂,i,_z in between, then v'(_i)=2^2^i, v'(_i)=0, for all ∈ Y_i+1∪Y_i+1, v'()=max(0, v()-k), and v'()=v() for all other ∈'. If k=0, then Proposition <ref> ensures that v'(_i)=2^2^i, v'(_i)=0, and for all other ∈', v'()=v().
Otherwise, assume that the property holds for some k≥ 0 and consider (^𝚃𝚂,i,,v)^*(ℓ^𝚃𝚂,i,_z,v') with no other visit of ℓ^𝚃𝚂,i,_z in between, and v(_i)=k+1. Here, since v(_i)=k+1, Proposition <ref> and the construction of the procedural- ensure that (^𝚃𝚂,i,,v)^*(ℓ^𝚃𝚂,i,_nz,v)(ℓ^𝚁,i+1_2,v)^*(^𝚃𝚂,i,,v_1) with v_1(_i)=k, v_1(_i)=v(_i)+1, for all ∈ Y_i+1∪Y_i+1, v_1()=max(0, v()-1), and for all other ∈', v_1()=v(). Induction hypothesis tells us that (^𝚃𝚂,i,,v_1)^* (ℓ^𝚃𝚂,i,_z,v') with v'(_i)=2^2^i, v'(_i)=0, for all ∈ Y_i+1∪Y_i+1, v'()=max(0, v()-k-1), and v'()=v() for all other ∈'. Next, we show the following. Property (∗∗): if there exist v,v'∈ℕ^' such that v(_i)=k, v(_i)=2^2^i, v(_i)=0, and (^𝚃𝚂,i,,v)^*(ℓ^𝚃𝚂,i,_z,v') with no other visit of ℓ^𝚃𝚂,i,_z in between, then v'(_i)=2^2^i, v'(y_i)=0, for all ∈ Y_i+1∪Y_i+1, v'()=max(0, v()- k.2^2^i), and v'()=v() for all other ∈'. If k=0, then Proposition <ref> ensures that v'(_i)=2^2^i, v'(_i)=0, and v'()=v() for all other ∈'. Otherwise, assume that the property holds for some k≥ 0 and consider (^𝚃𝚂,i,,v)^*(ℓ^𝚃𝚂,i,_z,v') with no other visit of ℓ^𝚃𝚂,i,_z in between, and v(_i)=k+1. Again, since v(_i)=k+1, Proposition <ref> and the construction of the procedural- ensure that (^𝚃𝚂,i,,v)^*(ℓ^𝚃𝚂,i,_nz,v)(^𝚁,i+1,v)^*(^𝚃𝚂,i,,v_1)^* (ℓ^𝚃𝚂,i,_z,v'_1) (^𝚃𝚂,i,,v'_1), with v_1(_i)=v(_i)-1=k, v_1(_i)=v(_i)+1, v_1(_i)=v(_i)-1=2^2^i-1, v_1(_i)=v(_i)+1=1, for all ∈ Y_i+1∪Y_i+1, v_1()=max(0,v()-1), and for all other ∈', v_1()=v(). By Property (∗), v'_1(_i)=2^2^i, v'_1(_i)=0, for all ∈ Y_i+1∪Y_i+1, v'_1()=max(0, v()-2^2^i), and v'_1()=v_1() for all other ∈'. Induction hypothesis allows to conclude that since (^𝚃𝚂,i,,v'_1)^* (ℓ^𝚃𝚂,i,_z,v'), v'(_i)=2^2^i, v'(_i)=0, for all ∈ Y_i+1∪Y_i+1, v'()=max(0, v'_1()- k.2^2^i) = max(0, v() - (k+1).2^2^i), and v'()=v'_1()=v() for all other ∈'. Since (^𝚁,i, v_0) ^+ (ℓ^𝚁,i_out,v_f), we know that (^𝚁,i, v_0) ^* (^𝚃𝚂,i,,v)^*(ℓ^𝚃𝚂,i,_z,v')(^𝚃𝚂,i,,v')^*(ℓ^𝚃𝚂,i,_z,v”) (ℓ^𝚁,i_out,v_f). By construction, v(_i)=2^2^i-1, v(_i)=2^2^i-1, v(_i)=1, v(_i)=1, for all ∈ Y_i+1∪Y_i+1, v()=max(0,v_0()-1), and for all other counter , v()=v_0(). By Property (∗), v'(_i)=2^2^i=v_0(_i), v'(_i)=0=v_0(_i), for all ∈ Y_i∪Y_i+1, v'()=max(0, v_0()-2^2^i) and for all other ∈', v'()=v(). By Property (∗∗), v”(_i)=2^2^i=v_0(_i), v”(_i)=0=v_0(_i), for all ∈ Y_i∪Y_i+1, v”()=max(0, v_0()-2^2^i - (2^2^i-1).2^2^i)=max(0, v_0()-2^2^i.2^2^i)=max(0, v_0()-2^2^i+1), and for all other ∈', v”()=v'()=v_0(). We get the immediate corollary: Let 0≤ i≤ n, and v∈ℕ^' satisfying (PreRst1) for 𝚁𝚜𝚝_i. If v is i-bounded, then the unique configuration such that (ℓ^𝚁,i_in,v) ^+ (ℓ^𝚁,i_out, v') in 𝚁𝚜𝚝_i is defined v'() = 0 for all ∈ Y_i ∪Y_i and v'() = v() for all ∉ Y_i ∪Y_i. Let 0≤ i ≤ n, and let v∈ℕ^' satisfying (PreRst1) for 𝚁𝚜𝚝_𝚒. If for all 0≤ j ≤ n, v is j-bounded, then for all (ℓ,v')∈^𝚁,i×ℕ^' such that (ℓ^𝚁,i_in,v) ^* (ℓ, v') in 𝚁𝚜𝚝_i, v' is j-bounded for all 0≤ j ≤ n. We will prove the statement of the property along with some other properties: (1) if ℓ is not a state of 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(_i) or 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(_i), then for all 0 ≤ j < i, for all ∈Y_j, v'() = 2^2^j and for all ∈ Y_j, v'() = 0, and v'(_i) =2^2^i and v'(_i) = 0. (2) if ℓ is not a state of 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(_i) or 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(_i) and if ℓℓ_1^𝚁, i+1, then v'(_i) + v'(_i) = 2^2^i, and if ℓℓ_3^𝚁, i+1, then v'(_i) + v'(_i) = 2^2^i. For 𝚁𝚜𝚝_0, the property is trivial. Let 0≤ i <n, and a valuation v∈ℕ^' such that for all 0 ≤ j ≤ i, for all ∈Y_j, v() = 2^2^j and for all ∈ Y_j, v() = 0, and such that, for all 0≤ j≤ n, v is j-bounded. 
Let now (ℓ,v') be such that (ℓ^𝚁,i+1_in,v) ^* (ℓ, v') in 𝚁𝚜𝚝_i+1. We prove the property by induction on the number of occurrences of ^𝚃𝚂,i,z and ^𝚃𝚂,i,y. If there is no occurrence of such a state in (ℓ^𝚁,i+1_in,v) ^* (ℓ, v'), then, for all ∈ Y_j ∪Y_j∪{_i, _i} with j i, j i+1, we have v'() = v() and so v' is j-bounded. Furthermore, for ∈ Y_i ∪ Y_i+1∪Y_i+1, v'() ≤ v(), and for all ∈Y_i, v'() ≤ v() + 1 = 1. The property (2) is easily verified. Hence the properties hold. Assume now that we have proved the properties for k occurrences of ^𝚃𝚂,i,z and ^𝚃𝚂,i,y, and let us prove the claim for k+1 such occurrences. Note ℓ_k+1∈{^𝚃𝚂,i,z,^𝚃𝚂,i,y} the last occurrence such that: (ℓ^𝚁,i+1_in,v) ^+ (ℓ_k, v_k) (ℓ_k+1, v_k+1) ^* (ℓ, v'). By induction hypothesis, v_k is j-bounded for all 0 ≤ j ≤ n and it respects (1) and (2), and by construction, (ℓ_k, , ℓ_k+1) and ℓ_k ℓ_1^𝚁,i+1, ℓ_k ℓ_3^𝚁, i+1, hence v_k+1 is j-bounded for all 0 ≤ j ≤ n and respects (PreTest1), (PreTest2), and (PreTest3) for 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(_i) and 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(_i). As a consequence, if ℓ is a state of one of these machines such that (ℓ_k+1, v_k+1)^* (ℓ, v'), then by <ref>, for all 0 ≤ j ≤ n, as v_k+1 is j-bounded, so is v'. Assume now that ℓ is not a state of one of the two machines, and keep in mind that v_k+1 respects (1) and (2). Then, either ℓ = ℓ_out^𝚁, i+1 and so v'() = v_k+1() for all ∈ Y_j ∪Y_j for all j i, and v'(_i) = 2^2^i and v'(_i) = 0 and so the claim holds, or ℓ∈{ℓ_in^𝚁,𝚒+1, ℓ_j'^𝚁, i+1}_j' = 1, 2, 3, 4, 5, 6, …, r. In this case, the execution is such that: (ℓ_k+1, v_k+1) ^+ (ℓ_nz, k+1, v_k+1) ^* (ℓ, v'), where if ℓ_k+1 =^𝚃𝚂,i,z, ℓ_nz, k+1 = ℓ^𝚃𝚂, i ,z_nz and otherwise ℓ_nz, k+1 = ℓ^𝚃𝚂, i ,y_nz. In any case, for all j i, j i+1, ∈ Y_j∪Y̅_j ∪{_i, _i}, v'() = v_k+1(), hence (1) holds and v' is j-bounded for all j < i and j > i+1. Observe as well that for all ∈ Y_i+1∪Y_i+1, v'() ≤ v_k+1(), and so v' is i+1-bounded. The last thing to prove is that (2) holds. This follows directly from the fact that v_k+1 respects (2). Concerning the procedural  𝙸𝚗𝚌_i, we use the following proposition from <cit.>. For all 0≤ i< n, for all v,v'∈ℕ^', (^𝙸𝚗𝚌, i,v) ^* (ℓ_out^𝙸𝚗𝚌, i, v') in 𝙸𝚗𝚌_i if and only if: * (PreInc1) for all 0 ≤ j < i, for all ∈Y_j, v() = 2^2^j and for all ∈ Y_j, v() = 0; * (PreInc2) for all ∈Y_i, v( ) = 0; * (PostInc1) for all ∈Y_i, v'() = 2^2^i; * (PostInc2) for all ∉Y_i, v'() = v(). Moreover, if for all 0≤ j ≤ n, v is j-bounded, then for all (ℓ,v”) such that (ℓ^𝙸𝚗𝚌,i_in,v) ^* (ℓ, v”) in 𝙸𝚗𝚌_i, v” is j-bounded for all 0≤ j≤ n. *Procedural  𝚁𝚜𝚝𝙸𝚗𝚌. We shall now prove the properties of the procedural  𝚁𝚜𝚝𝙸𝚗𝚌 defined in <ref>. The next proposition establishes the correctness of the construction 𝚁𝚜𝚝𝙸𝚗𝚌. Let v ∈ℕ^' be a valuation such that for all 0≤ i ≤ n and for all ∈ Y_i ∪Y_i, v() ≤ 2^2^i. Then the unique valuation v' ∈ℕ^' such that (ℓ_a, v) ^* (ℓ_b, v') in 𝚁𝚜𝚝𝙸𝚗𝚌 satisfies the following: for all 0≤ i ≤ n, for all ∈Y_i, v'() = 2^2^i and for all ∈ Y_i, v'() = 0. Moreover, for all (ℓ,v”) such that (ℓ_a, v) ^* (ℓ, v”) in 𝚁𝚜𝚝𝙸𝚗𝚌, for all 0≤ i≤ n, v” is i-bounded. We can split the execution as (ℓ_a,v) (^𝚁,0,v)^*(ℓ^𝚁,0_out, v_0) (^𝙸𝚗𝚌,0,v_0)^* (ℓ_out^𝙸𝚗𝚌,0,v'_0) (^𝚁,1,v'_0)^*(ℓ^𝚁,1_out,v_1)^*(^𝙸𝚗𝚌,n-1, v_n-1)^*(ℓ^𝙸𝚗𝚌,n-1_out, v'_n-1) (^𝚁,n, v'_n-1)^*(ℓ_out^𝚁,n,v_n)(ℓ_b,v'), with v'=v_n and v=v'_-1. We show that for all 0≤ i≤ n:
* P_3(i): For all v” such that (ℓ_a, v) ^* (ℓ, v”)^* (ℓ^𝚁,i_out, v_i), v” is j-bounded for all 0≤ j≤ n. For k=0, <ref> implies that for all ∈ Y_0∪Y_0, v_0()=0, and that for all other ∈', v_0()=v(). Moreover, for all v” such that (^𝚁,0,v)^*(ℓ, v”)^*(ℓ_out^𝚁,0,v_0), <ref> ensures that v” is i-bounded, for all 0≤ i≤ n. P_2(0) is trivially true. Let 0≤ k< n, and assume that P_1(k), P_2(k) and P_3(k) hold. P_1(k) and P_2(k) and <ref> imply that for all ∈Y_k, v'_k()= 2^2^k, and that for every other counter ∈', v'_k()=v_k(). Thanks to P_1(k), P_2(k+1) holds. Moreover, we also know by <ref> that for all v” such that (ℓ_out^𝚁,k,v_k) (^𝙸𝚗𝚌,k, v_k)^*(ℓ, v”)^*(ℓ_out^𝙸𝚗𝚌,k, v'_k), v” is i-bounded for all 0≤ i≤ n. Since v'_k is then i-bounded for all 0≤ i≤ n, and since P_2(k) holds, <ref> implies that v_k+1()=0 for all ∈ Y_k+1∪Y_k+1, and that, for all other ∈', v_k+1()=v'_k(). So P_1(k+1) holds. Moreover, by <ref>, for all v” such that (ℓ_out^𝙸𝚗𝚌,k, v'_k)(^𝚁,k+1,v'_k)^*(ℓ,v”)^* (ℓ_out^𝚁,k+1,v_k+1), v” is i-bounded for all 0≤ i≤ n. Hence P_3(k+1) holds. By P_1(n), v'()=0 for all ∈ Y_n, and since Y_n=∅, v'()=2^2^n for all ∈Y_n. Let ∉ (Y_n∪Y_n). Then v'()=v'_n-1(), and by P_2(n), for all 0≤ i <n, for all ∈Y_i, v'()=2^2^i, and for all ∈ Y_i, v'()=0. By P_3(n), for all (ℓ,v”) such that (ℓ_a, v) ^* (ℓ, v”) in 𝚁𝚜𝚝𝙸𝚗𝚌, for all 0≤ i≤ n, v” is i-bounded. §.§.§ Proofs of the Reduction We are now ready to prove <ref>, i.e. that the reduction is sound and complete. For a subset of counters Y, we write v_| Y for the restriction of the valuation v to the counters of Y; formally, v_| Y : Y →ℕ and it is equal to v on its domain. If there exists v ∈ℕ^ such that (, 0_) ^*_M (ℓ_f, v), then there exists v' ∈ℕ^' such that (', 0_') ^*_N (ℓ_f, v'). From <ref>, we have that (', 0_') ^*_N (, v_0) where v_0 is such that, for all 0 ≤ j ≤ n, for all ∈Y_j, v_0() = 2^2^j and for all ∈ Y_j, v_0( ) = 0. By construction of N, (, v_0)^*_N (ℓ_f,v') with v' defined by: for all 0≤ j <n, for all ∈Y_j, v'() = 2^2^j, for all ∈ Y_j, v'() = 0, and, for all ∈, v'() = v(). Note that in this path, there is no restore step.
We have (, v_k)^*_N(ℓ,v)_N(',v)^*_N(',v)_N(ℓ_a,v)_N^*(ℓ_b,v_k+1) _N(, v_k+1). Since the  M is 2EXP-bounded, and v_k=v_0, we obtain that for all ∈=Y_n, v()≤ 2^2^n. For all 0≤ i<n, for all ∈ Y_i∪Y_i, v()=v_0(), then for all 0≤ i≤ n, for all ∈ Y_i∪Y_i, v()≤ 2^2^i. Then, as proved above, v()≤ 2^2^i for all 0≤ i≤ n, for all ∈ Y_i∪Y_i. By <ref>, v'=v_0. Consider now the execution (',0_')^*_N(,v)^*_N(ℓ_f,v'), where (,v) is the last time the location is visited. Then, as proved above, v=v_0. From the execution (,v)^*_N(ℓ_f,v'), we can deduce an execution (, v_|)^*_M (ℓ_f, v'_|). Since v=v_0 and for all ∈=Y_n, v()=0, we can conclude the proof. The two previous lemmas prove that the reduction is sound and complete. By <ref>, we proved the -hardness of the problem, and so <ref>. § PROOFS OF <REF> In this section, we present proofs omitted in <ref>. §.§ Proof of <ref> We present here the proof of <ref>. The two lemmas of this subsection prove the soundness and completeness of the reduction presented in <ref>. Put together with <ref>, we prove <ref>. Let C_0 ∈, C_f ≥ C_F. If C_0 ^* C_f, then there exists v∈ℕ^Q such that (, 0_)^*(ℓ_f, v). For all q∈ Q, we let v_q(q)=1 and v_q(q')=0 for all q'∈ such that q'≠ q. Let n=||C_0||=C_0(), and let C_0C_1⋯ C_mC_f be the configurations visited in . Then, applying the transition (, , ), we get (, 0_) (, v^1) … (, v^n) with v_0 = v^n and v_0()=n and v_0()=0 for all ≠. Let i≥ 0 and assume that (,0_)^*(, C_i). We show that (, C_i)^*(, C_i+1). * If C_im C_i+1, let t=(q_1,!m,q'_1), t'=(q_2, ?m, q'_2)∈ T such that C_i(q_1)>0, C_i(q_2)>0, C_i(q_1)+C_i(q_2)≥ 2, and C_i+1= C_i - q_1,q_2+q'_1,q'_2. Then (, C_i) (ℓ_(t,t')^1, v_i^1)(ℓ_(t,t')^2, v_i^2)(ℓ_(t,t')^3, v_i^3)(, v_i^4), with v_i^1= C_i - v_q_1, v_i^2=v_i ^1 - v_q_2, v_i^3 = v_i^2 + v_q'_1, v_i^4 = v_i^3+v_q'_2. Observe that v_i^4=C_i+1 and then (, C_i)^*(, C_i+1). * If C_iτ C_i+1, let t=(q,τ,q') such that C_i(q)>0 and C_i+1=C_i-q+q'. Then, (, C_i) (ℓ_q, v_i^1) (, v_i^2) with v_i^1=C_i- v_q and v_i^2 = v_i^1+ v_q'. Observe that v_i^2 = C_i+1, then (, C_i)^*(, C_i+1). * If C_im C_i+1, let t=(q,!m,q') such that C_i+1=C_i-q+q', and m = {q_1,…, q_k}. Then C_i(p_j)=0 for all 1≤ j≤ k. We then have that (, C_i) (ℓ_t, v_i^1) (ℓ_t,q_1^m, v_i^1)⋯(ℓ_t,q_k^m, v_i^1) (, v_i^2) with v_i^1= C_i - v_q and v_i^2= v_i^1 + v_q'. Indeed, v_i^1(q_j)=0 for all q_j∈m, so the transitions (ℓ^m_t,q_j, q_j+1), ℓ^m_t,q_j+1) do not change the value of the counters. Hence, v_i^2= C_i+1 and (, C_i)^* (, C_i+1). So we know that (, 0_)^* (, C_f). Moreover, since C_f ≥ C_F, it holds that C_f ≥ v_𝐪_1 + v_𝐪_2 + … + v_𝐪_s. Then (, C_f)^s (ℓ_f, v) with v=C_f-(v_𝐪_1 + v_𝐪_2 + … + v_𝐪_s). Let v∈^Q. If (, 0_)^*(ℓ_f, v), then there exists C_0 ∈, C_f ≥ C_F such that C_0 ^* C_f. Let (, v_0), (, v_1) … (, v_n) be the projection of the execution of M on {}×ℕ^. We prove that, for all 0≤ i≤ n, there exists C_0 ∈, and C≥ v_i such that C_0 ^* C. For i = 0, we let C_0 be the empty multiset, and the property is trivially true. Let 0≤ i < n, and assume that there exists C_0 ∈, C≥ v_i such that C_0 ^* C. * If (, v_i)δ(, v_i+1) with δ=(, , ), then v_i+1 = v_i +v_. The execution C_0^* C built so far cannot be extended as it is, since it might not include enough processes. Let N be such that C_0 C_1… C_N = C, and let C'_0∈ with C'_0()=C_0()+N+1. We build, for all 0≤ j ≤ N, a configuration C'_j such that C'_0^j C'_j, C'_j≥ C_j and C'_j()>C_j()+N-j. For j=0 it is trivial. Assume now that, for 0≤ j < N, C'_j≥ C_j and that C'_j() > C_j()+N-j. 
If C_jm C_j+1 for m∈Σ, with t_1=(q_1,!m, q'_1) and t_2=(q_2,?m,q'_2). Then, C_j+1=C_j - q_1,q_2 + q'_1,q'_2. Moreover, C'_j(q_1) ≥ C_j(q_1)>0 and C'_j(q_2) ≥ C_j(q_2) >0 and C'_j(q_1) + C'_j(q_2)≥ C_j(q_1) + C_j(q_2) ≥ 2. We let C'_j+1 = C'_j - q_1,q_2 + q'_1,q'_2, and C'_jm C'_j+1. It is easy to see that C'_j+1≥ C_j+1. Moreover, C'_j+1() > C_j+1() +N -j > C_j+1 + N -j-1. If C_jm C_j+1 and for all q∈m, C'_j-q_1(q)=0, with t=(q_1,!m,q_2), (respectively C_jτ C_j+1 with t=(q_1,τ,q_2)), we let C'_j+1=C'_j - q_1+q_2, and C'_jmC'_j+1 (respectively C'_jτ C'_j+1). Again, thanks to the induction hypothesis, we get that C'_j+1≥ C_j+1, and C'_j+1 ()> C_j+1() + N - j > C_j+1() + N - j-1. If now C_jm C_j+1, with t_1=(q_1,!m,q_2) and there exists q'_1∈m such that C'_j - q_1(q'_1) >0. Let (q'_1,?m,q'_2)∈ T, and then C'_j+1=C'_j - q_1,q'_1 + q_2, q'_2. Since C'_j≥ C_j, C'_j(q_1)≥ 1, and since C'_j-q_1(q'_1) >0, C'_j(q'_1)≥ 1 and C'_j(q_1) + C'_j(q'_1) ≥ 2. Hence, C'_jm C'_j+1. We have that C'_j(q'_1) > C_j(q'_1), so C'_j+1(q'_1) ≥ C_j+1(q'_1) and C'_j+1(q)≥ C_j+1(q) for all other q∈ Q. Hence C'_j+1 > C_j+1. Also, C_j+1() = C_j() + x, with x∈{0,1}. If q'_1≠, then C'_j+1() = C'_j() + y, with y≥ x. Hence, since C'_j() > C_j() + N - j, we get C'_j+1() > C_j+1() + N - j > C_j+1() + N -j - 1. If q'_1 =, then we can see that C'_j+1() = C'_j() +y, with x-1≤ y ≤ x. In that case, C'_j+1() > C_j() + N-j+y≥ C_j() + N- j + x-1 ≥ C_j+1() + N-j-1. So we have built an execution C'_0 ^* C'_N such that C'_N≥ C_N and C'_N() > C_N(). Hence, C'_N≥ v_i+1. * If (,v_i) (ℓ_(t,t')^1, v_i^1) (ℓ_(t,t')^2, v_i^2) (ℓ_(t,t')^3, v_i^3) (, v_i+1), with t= (q_1,!m,q_2) and t'=(q'_1, ?m, q'_2), then v_i^1 = v_i - v_q_1, v_i^2= v_i^1 - v_q'_1, v_i^3 = v_i^2 + v_q_2, and v_i+1 = v_i^3+ v_q'_2. Then by induction hypothesis, C(q_1)≥ 1, C(q'_1)≥ 1, and C(q_1) + C(q'_1) ≥ 2. We let C' = C - q_1, q'_1 + q_2, q'_2. We have Cm C' and C' ≥ v_i+1. * If (, v_i) (ℓ_q, v_i^1) (, v_i+1) with (q,τ, q')∈ T and v_i^1 = v_i - v_q and v_i+1 = v_i^1 + v_q', then by induction hypothesis, C≥ 1, and if we let C'=C- q+q', then CτC', and C'≥ v_i+1. * If (, v_i) (ℓ_t, v_i^1) (ℓ_t,p_1^m, v_i^2)… (ℓ_t,p_k^m, v_i^k+1) (, v_i+1) with t=(q,!m,q') and m = {p_1,…,p_k}, and (C-q)(p)=0 for all p∈m. We let C' = C- q+q', hence Cm C'. Moreover, v_i^1 = v_i - v_q, and, for all 1≤ j <k, it holds that v_i^j+1(p_j) = max(0, v_i^j(p_j) - 1) and v_i^j+1(p)=v_i^j(p) for all p≠ p_j. By induction hypothesis, C≥ v_i, hence v_i^j(p)=0 for all p∈m, for all 1≤ j≤ k+1. Hence, v_i+1 = v_i^k+1 + v_q' = v_i^1 + v_q', and C' ≥ v_i+1. * If (, v_i) (ℓ_t, v_i^1) (ℓ_t,p_1^m, v_i^2)… (ℓ_t,p_k^m, v_i^k+1) (, v_i+1) with t=(q,!m,q') and m = {p_1,…,p_k}, and (C-q)(p_j)>0 for some p_j∈m. Let (p_j,?m,p'_j)∈ T and C' = C - q,p_j+q',p'_j. Obviously, Cm C'. It remains to show that C'≥ v_i+1. This is due to the fact that in the M, the counter p'_j will not be incremented, unlike C(p'_j). Moreover, in the protocol , only p_j will lose a process, whereas in M, other counters corresponding to processes in m may be decremented. Formally, by definition and by induction hypothesis, C-q≥ v_i^1. Also, for all p∈m, either v_i^1(p)=v_i^k+1(p) = 0, or v_i^k+1(p) = v_i^1(p)-1. Remark that since C≥ v_i, then C-q≥ v_i-v_q = v_i^1, hence (C-q,p_j)(p_j) = (C-q)(p_j) - 1 ≥ v_i^1(p_j)-1. Also, (C-q)(p_j) - 1≥ 0, hence (C-q)(p_j) - 1≥max(0,v_i^1(p_j)-1)=v_i^k+1(p_j). Observe also that, for all p≠ p_j∈m, if v_i^1(p)>0, then (C-q,p_j)(p)= (C-q)(p) ≥ v_i^1(p) > v_i^k+1(p). If v_i^1(p) = 0, then (C-q,p_j)(p)≥ v_i^1(p)= v_i^k+1(p). 
For all other p∈ Q, (C-q,p_j)(p) = (C-q)(p) ≥ v_i^1(p)= v_i^k+1(p). Hence, C-q,p_j≥ v_i^k+1. By definition, v_i+1 = v_i^k+1 + v_q'. Hence, (C-q,p_j+q',p'_j)(p)≥ v_i+1(p), for all p≠ p'_j, and (C-q,p_j+q',p'_j)(p'_j)> v_i+1(p'_j). So, C'> v_i+1. Now we know that the initial execution of M is: (, 0_)^∗(, v_n)^∗ (ℓ_f, v_f) with v_f = v_n - (v_𝐪_1 + v_𝐪_2 + … + v_𝐪_s). Thus v_n>v_𝐪_1 + v_𝐪_2 + … + v_𝐪_s. We have proved that we can build an initial execution of P: C_0^*C_n and that C_n≥ v_𝐪_1 + v_𝐪_2 + … + v_𝐪_s. Hence C_n ≥ C_F. §.§ Proofs of <ref> To prove <ref>, we shall use <ref> along with the reduction presented in <ref>. If the reduction is sound and complete, it will prove that  is -hard. As  is a particular instance of the  problem, this is sufficient to prove <ref>. The two lemmas of this subsection prove the soundness and completeness of the reduction presented in <ref>, put together with <ref>, it proves that  is -hard. For all v∈ℕ^d, if (, 0_)_M^*(ℓ_f, v), then there exists C_0 ∈, C_f ∈ such that C_0 ^* C_f. For all ∈, we let N_ be the maximal value taken by in the initial execution (, 0_)^*(ℓ_f, v), and N=Σ_∈ N_. Now, we let C_0∈∩ C_N+1 be the initial configuration with N+1 processes. In the initial execution of that we will build, one of the processes will evolve in the (M) part of the protocol, simulating the execution of the , the others will simulate the values of the counters in the execution. Now, we show by induction on k that, for all k≥ 0, if (, 0_)^k (ℓ, w), then C_0^* C, with C(1_)=w() for all ∈, C(ℓ)=1, C()=N-Σ_∈ w(), and C(s)=0 for all other s∈ Q. C_0L C_0^1R C_0^2, and C_0^2()=N, C_0^2()=1, and C_0^2(s)=0 for all other s∈ Q. So the property holds for k=0. Suppose now that the property holds for k≥ 0 and consider (, 0_)^k (ℓ,w)δ (ℓ',w'). * if δ=(ℓ,,ℓ'), then Cinc_C_1 with C_1=C-ℓ, +ℓ_δ,q_. Indeed, by induction hypothesis, C(ℓ)=1> 0, and C()>0, otherwise Σ_∈ w()=N and w() is already the maximal value taken by so no increment of could have happened at that point of the execution of M. We also have C_1inc_C', since C_1(ℓ_δ)>0 and C_1(q_)>0 by construction, and C'=C_1-ℓ_δ,q_+ℓ', 1_. So C'(ℓ')=1, for all ∈, C'(1_)=w'(), and C'()=N-Σ_∈ w'(). * if δ=(ℓ,,ℓ'), then C(ℓ)=1>0 and C(1_)>0 since w()>0. Then Cdec_C_1 with C_1=C-ℓ,1_+ℓ_δ,q'_. Then C_1dec_C', with C'=C_1-q'_, ℓ_δ+, ℓ'. So C'(ℓ')=1, C'(1_)=C(1_)-1, C'()=C()+1. * if δ=(ℓ,,ℓ') and w()>0 then Cnbdec_C', and C'=C-ℓ, 1_+ℓ', and the case is proved. * if δ=(ℓ,,ℓ') and w()=0 then by induction hypothesis, C(1_)=0 and Cnbdec_C', with C'=C-ℓ+ℓ'. Then, C'(1_)=0=w'(), and C'(ℓ')=1. * if δ=(ℓ,,ℓ'), then CτC', avec C'=C-ℓ+ℓ'. This includes the restore transitions. Then C_0^* C with C(ℓ_f)=1 and C∈. Let C_0 ∈, C_f ∈ such that C_0 ^* C_f, then (ℓ_0, 0_)^*_M(ℓ_f, v) for some v∈ℕ^. Before proving this lemma we establish the following useful result. Let C_0 ∈. For all C∈ such that C_0^+ C, we have Σ_p∈{q}∪ Q_M C(p)= 1. Note C_0C_1…C_n = C_f. Now, thanks to <ref>, for all 1≤ i≤ n, we can note 𝗅𝖾𝖺𝖽𝖾𝗋(C_i) the unique state s in {q}∪ Q_M such that C_i(s) = 1. In particular, note that 𝗅𝖾𝖺𝖽𝖾𝗋(C_n) = ℓ_f. We say that a configuration C is M-compatible if 𝗅𝖾𝖺𝖽𝖾𝗋(C)∈. For any M-compatible configuration C∈, we define the configuration of the  π(C_i)=(𝗅𝖾𝖺𝖽𝖾𝗋(C), v) with v=C(1_) for all ∈. We let C_i_1⋯ C_i_k be the projection of C_0C_1… C_n onto the M-compatible configurations. We show by induction on j that: P(j): For all 1≤ j≤ k, (,0_)^*_M π(C_i_j), and Σ_∈C_i_j(q_)+C_i_j(q'_)=0. Moreover, for all C such that C_0^*CC_i_j, Σ_∈C(q_)+C(q'_)≤ 1. 
By construction of the protocol, C_0L C_1(L)^k C_2R C_i_1 for some k ∈ℕ. So π(C_i_1)=(, 0_), and for all C such that C_0^*CC_i_1, Σ_∈C(q_)+C(q'_)=0, so P(0) holds true. Let now 1≤ j <k, and suppose that (,0_)^*_M π(C_i_j), and Σ_∈C_i_j(q_)+C_i_j(q'_)=0. We know that C_i_j^+C_i_j+1. * If there is no C∈ such that C(q)=1 and C_i_j^+C^*C_i_j+1, the only possible transitions from C_i_j are in T_M. Let π(C_i_j)=(ℓ,v). * if C_i_jinc_C then C=C_i_j-ℓ,+ℓ_δ,q_ for δ=(ℓ,,ℓ')∈Δ_b. Σ_∈C(q_)+C(q'_)=1. Note that the message inc_ is necessarily received by some process, otherwise C(q_)=0 and C has no successor, which is in contradiction with the fact the the execution reaches C_f. Moreover, the only possible successor configuration is Cinc_ C_i_j+1, with C_i_j+1=C-q_, ℓ_δ+1_, ℓ'. Hence, obviously, π(C_i_j)π(C_i_j+1). * if C_i_jdec_C then C=C_i_j-ℓ,1_+ℓ_δ,q'_ for δ=(ℓ,,ℓ')∈Δ_b. Σ_∈C(q_)+C(q'_)=1. Note that the message dec_ is necessarily received by some process, otherwise C(q'_)=0 and C has no successor, which is in contradiction with the fact the the execution reaches C_f. Besides, C_i_j(1_)>0 hence v()>0. Moreover, the only possible successor configuration is Cdec_ C_i_j+1, with C_i_j+1=C-q'_, ℓ_δ+, ℓ'. Hence, obviously, π(C_i_j)π(C_i_j+1). * if C_i_jnbdec_C_i_j+1 then C_i_j+1=C_i_j-ℓ,1_+ℓ', for δ=(ℓ,,ℓ')∈Δ_nb. Σ_∈C(q_)+C(q'_)=0. Besides, C_i_j(1_)>0 hence v()>0. Hence, obviously, π(C_i_j)π(C_i_j+1). * if C_i_j𝐧𝐛(nbdec_)C_i_j+1 then C_i_j+1=C_i_j-ℓ+ℓ' for δ=(ℓ,,ℓ')∈Δ_nb. Σ_∈C(q_)+C(q'_)=0. Besides, C_i_j(1_)=0 hence v()=0. Hence, obviously, π(C_i_j) π(C_i_j+1). * if C_i_jτC_i_j+1 then C_i_j+1=C_i_j-ℓ+ℓ' for δ=(ℓ,,ℓ')∈Δ_nb. Σ_∈C(q_)+C(q'_)=0. Besides, C_i_j(1_)=C'_i_j+1(1_) for all ∈. Hence, obviously, π(C_i_j)π(C_i_j+1). * Otherwise, let C be the first configuration such that C(q)=1 and C_i_j^+C^*C_i_j+1. The transition leading to C is necessarily a transition where the message L has been sent. Remember also that by induction hypothesis, Σ_∈C_i_j(q_)+C_i_j(q'_)=0. * if C_i_jLC, then C(q)=1, and by induction hypothesis, Σ_∈C(q_)+C(q'_)=0. Then the only possible successor configuration is CRC_i_j+1, with Σ_∈C_i_j+1(q_)+C_i_j+1(q'_)=0, and π(C_i_j+1)=(, v), so π(C_i_j)π(C_i_j+1), by a restore transition. * if C_i_jinc_C_1LC then C_1=C_i_j-ℓ,+ℓ_δ,q_ for δ=(ℓ,,ℓ')∈Δ_b and Σ_∈C_1(q_)+C_1(q'_)=1. Now, C=C_1 - ℓ_δ, + q_, q, so C(q)=1=C(q_), and Σ_∈C(q_)+C(q'_)=1. * If CRC_i_j+1, then C_i_j+1 = C - q,q_+,, then Σ_∈C_i_j+1(q_)+C_i_j+1(q'_)=0 and π(C_i_j+1)=(, v), hence π(C_i_j)π(C_i_j+1) by a restore transition. * Now C(q_)=1 so it might be that Cinc_ C', with C'=C - q_+1_. Here, Σ_∈C'(q_)+C'(q'_)=0. However, 𝚕𝚎𝚊𝚍𝚎𝚛(C')={q} so C' is not M-compatible. The only possible transition from C' is now C'R C_i_j+1 with C_i_j+1= C'-q+. Hence, C_i_j+1(1_)= C'(1_)=C_i_j(1_)+1=v()+1, and C_i_j+1(1_)=C'(1_)=C_i_j(1_)=v() for all ≠. So π(C_i_j)=(ℓ,v)δ (ℓ',v+v_)(, v+v_)=π(C_i_j+1), the last step being a restore transition. Finally, Σ_∈C_i_j+1(q_)+C_i_j+1(q'_)=0. * if C_i_jdec_C_1L C, then C_1=C_i_j-ℓ,1_+ℓ_δ,q'_ for δ=(ℓ,,ℓ')∈Δ_b and Σ_∈C_1(q_)+C_1(q'_)=1. Now, C=C_1 - ℓ_δ, + q_, q, so C(q)=1=C(q'_), and Σ_∈C(q_)+C(q'_)=1. Again, two transitions are available: * If CRC_i_j+1, then C_i_j+1 = C - q,q'_+,, then Σ_∈C_i_j+1(q_)+C_i_j+1(q'_)=0 and π(C_i_j+1)=(, v), hence π(C_i_j)π(C_i_j+1) by a restore transition. * Now C(q'_)=1 so it might be that Cdec_ C', with C'=C - q'_+. Here, Σ_∈C'(q_)+C'(q'_)=0. However, 𝚕𝚎𝚊𝚍𝚎𝚛(C')={q} so C' is not M-compatible. The only possible transition from C' is now C'R C_i_j+1 with C_i_j+1= C'-q+. 
Hence, C_i_j+1(1_)= C'(1_)=C_i_j(1_)-1=v()-1, and C_i_j+1(1_)=C'(1_)=C_i_j(1_)=v() for all ≠. So π(C_i_j)=(ℓ,v)δ (ℓ',v-v_)(, v+v_)=π(C_i_j+1), the last step being a restore transition. Finally, Σ_∈C_i_j+1(q_)+C_i_j+1(q'_)=0. * If C_i_jinc_ C_1 then, it means that C_i_j()=0. In that case, let δ=(ℓ,,ℓ')∈Δ_b, and C_1=C_i_j -ℓ+ℓ_δ. Since, by induction hypothesis, C_1(q_)=C_i_j()=0, the only possible transition from C_1 would be C_1LC_i_j+1. However, C_i_j()=C_1()=0, so this transition is not possible, and C_1 is a deadlock configuration, a contradiction with the hypothesis that C_i_jC_i_j+1. * If C_i_jdec_ C_1 then it means that C_i_j(1_)=0. In that case, let δ=(ℓ,,ℓ')∈Δ_b, and C_1=C_i_j -ℓ+ℓ_δ. Since, by induction hypothesis, Σ_∈C_1(q_)+C_1(q'_) = Σ_∈C_i_j(q_)+C_i_j(q'_) = 0, the only possible transition from C_1 is C_1LC, with C=C_1 - ,ℓ_δ + q, q_. Again, Σ_∈C(q_)+C(q'_) = 0, and C(ℓ)= for all ℓ∈ Q_M, so the only possible transition is CR C_i_j+1. Observe that C_i_j+1 is M-compatible, with C_i_j+1()=1, and C_i_j+1(1_)=C_i_j(1_) for all ∈. Hence π(C_i_j+1)=(, v), and π(C_i_j)π(C_i_j+1), thanks to a restore transition of M. We then have, by P(k), that (,0_)^*_M π(C_i_k), with C_i_k M-compatible and such that C_i_k^* C_f, and C_i_k is the last M-compatible configuration. Then, by definition of an M-compatible configuration, C_i_k=C_f, and π(C_i_k)=(ℓ_f,v) for some v∈ℕ^. § PROOF OF SECTION <REF> We present here omitted proofs of <ref>. §.§ Technical Lemma We provide here a lemma which will be useful in different parts of this section. Let be rendez-vous protocol and C,C' ∈ such that C=C_0 C_1 ⋯ C_ℓ=C'. Then we have the two following properties. * For all q ∈ Q verifying C(q)=2.ℓ+a for some a ∈, we have C'(q)≥ a. * For all D_0 ∈ such that D_0 ≥ C_0, there exist D_1,…,D_ℓ such that D_0 D_1 ⋯ D_ℓ and D_i ≥ C_i for all 1 ≤ i ≤ℓ. According to the semantics associated to (non-blocking) rendez-vous protocols, each step in the execution from C to C' consumes at most two processes in each control state q, hence the result of the first item. Let C,C' ∈ such that C C'. Let D ∈ such that D ≥ C. We reason by a case analysis on the operation performed to move from C to C' and show that there exists D' such that D D' and D'≥ C'. (To obtain the final result, we repeat k times this reasoning). * Assume C m C' then there exists (q_1, !m, q_1') ∈ T and (q_2, ?m, q_2')∈ T such that C(q_1)>0 and C(q_2)>0 and C(q_1)+C(q_2)≥ 2 and C' = C - q_1, q_2 + q_1', q_2'. But since D ≥ C, we have as well D(q_1)>0 and D(q_2)>0 and D(q_1)+D(q_2)≥ 2 and as a matter of fact D m D' for D' = D - q_1, q_2 + q_1', q_2'. Since D≥ C, we have D' ≥ C'. * The case C τ C' can be treated in a similar way. * Assume C 𝐧𝐛(m) C', then there exists (q_1, !m, q_1') ∈ T, such that C(q_1)>0 and (C-q_1)(q_2)=0 for all (q_2, ?m, q_2') ∈ T and C' = C - q_1 + q'_1. We have as well that D(q_1)>0. But we need to deal with two cases: * If (D-q_1)(q_2)=0 for all (q_2, ?m, q_2') ∈ T. In that case we have D 𝐧𝐛(m) D' for D' = D - q_1 + q'_1 and D' ≥ C'. * If there exists (q_2, ?m, q_2') ∈ T such that (D-q_1)(q_2)>0. Then we have that D m D' for D' = D - q_1, q_2 + q_1', q_2'. Note that since (C-q_1)(q_2)=0 and D ≥ C, we have here again D' ≥ C'. §.§ Properties of Consistent Abstract Sets of Configurations §.§.§ Proof of Lemma <ref> Let C' ∈γ such that C' ≥ C. Let q ∈ Q such that C(q)>0. Then we have C'(q)>0. If q ∉ S, then q ∈ and C'(q)=1 and C(q)=1 too. Furthermore for all q' ∈∖q such C(q')=1, we have that C'(q')=1 and q and q' are conflict-free. 
This allows us to conclude that C ∈γ. Checking whether C belongs to γ can be done in polynomial time applying the definition of ·. §.§.§ Building Configurations from a Consistent Abstract Set Let γ be a consistent abstract set of configurations. Given a subset of states U ⊆ Q, if for all N ∈ and for all q ∈ U there exists C_q ∈γ and C'_q ∈ such that C_q ^∗ C'_q and C'_q(q)≥ N, then for all N ∈, there exists C ∈γ and C' ∈ such that C ^∗ C' and C'(q) ≥ N for all q ∈ U. We suppose γ=(S,) and reason by induction on the number of elements in U∖ S. The base case is obvious. Indeed assume U ∖ S=∅ and let N∈. We define the configuration C such that C(q)=N for all q ∈ S and C(q)=0 for all q ∈ Q∖ S. It is clear that C ∈γ and that C(q) ≥ N for all q ∈ U (since U ∖ S=∅, we have in fact U ⊆ S). We now assume that the property holds for a set U and we shall see it holds for U ∪p, p∉ S. We assume hence that for all N ∈ and for all q ∈ U ∪p there exists C_q ∈γ and C'_q ∈ such that C_q ^∗ C'_q and C'_q(q)≥ N. Let N ∈. By induction hypothesis, there exists C_U ∈γ and C'_U ∈ such that C_U ^∗ C'_U and C_U'(q) ≥ N for all q ∈ U. We denote by ℓ_U the minimal number of steps in an execution from C_U to C'_U. We will see that that we can build a configuration C ∈γ such that C ^∗ C”_U with C”_U ≥ C_U and C”_U(p) ≥ N+2*ℓ_U. Using Lemma <ref>, we will then have that C”_U ^∗ C' with C' ≥ C'_U and C'(p) ≥ N. This will allow us to conclude. We as well know that there exist C_p ∈γ and C'_p ∈ such that C_p ^∗ C'_p and C'_p(p)≥ N+2*ℓ_U+(k*ℓ). We denote by ℓ_p the minimum number of steps in an execution from C_p to C'_p. We build the configuration C as follows: we have C(q)=C_U(q)+2*ℓ_p+(k*ℓ)+C_p(q) for all q ∈ S, and we have C(q)=C_p(q) for all q ∈. Note that since C_p ∈γ, we have that C ∈γ. Furthermore, we have C ≥ C_p, hence using again Lemma <ref>, we know that there exists a configuration C”_p such that C ^∗ C”_p and C”_p ≥ C'_p (i.e. C”_p(p) ≥ N+2*ℓ_U+(k*ℓ) and C”_p(q) ≥ C_U(q)+(k*ℓ) + C_p(q) for all q ∈ S by <ref>,<ref>) Having C_U ∈γ, we name (q_1, m_1) … (q_k, m_k) the tokens in such that C_U(q_j) = 1 for all 1 ≤ j ≤ k, and for all q ∈∖{q_j}_1 ≤ j ≤ k, C_U(q) =0. Since γ is consistent, for each (q_j, m_j) there exists a path (q_0,j,!m_j,q_1,j)(q_1,j,?m_1,j,q_2,j)…(q_ℓ_j,j,?m_ℓ_j,j,q_j) in such that q_0,j∈ S and such that there exists (q'_i,j,!m_i,j,q”_i,j) ∈ T with q'_i,j∈ S for all 1 ≤ i ≤ℓ_j. We denote by ℓ = max_1 ≤ j≤ k(ℓ_j)+1. Assume there exists 1≤ i≤ j≤ k such that (q_i,m_i),(q_j,m_j)∈ and C_U(q_i)=C_U(q_j)=1, and m_i∈q_j and m_j∈q_i. Since C_U respects γ, q_i and q_j are conflict-free: there exist (q_i,m), (q_j,m')∈ such that m∉q_j and m'∉q_i. Hence, (q_i,m_i), (q_i, m), (q_j,m_j), (q_j,m')∈, and m∉q_j and m_j∈q_i. Therefore, we have (q_i,m), (q_j,m_j)∈ and m∉q_j and m_j∈q_i, which is in contradiction with the fact that γ is consistent. Hence, for all 1≤ i≤ j≤ k, for all (q_i,m_i), (q_j,m_j)∈, m_i∉q_j and m_j∉q_i. We shall now explain how from C”_p we reach C”_U in k*ℓ steps, i.e. how we put (at least) one token in each state q_j such that q_j ∈ and C_U(q_j)=1 in order to obtain a configuration C”_U ≥ C_U. We begin by q_1. Let a process on q_0,1 send the message m_1 (remember that q_0,1 belongs to S) and let ℓ_1 other processes on states of S send the messages needed for the process to reach q_1 following the path (q_0,1,!m_1,q_1,1)(q_1,1,?m_1,1,q_2,1)…(q_ℓ_1,1,?m_ℓ_1,1,q_1). 
At this stage, we have that the number of processes in each state q in S is bigger than C_U(q)+((k-1)*ℓ) and we have (at least) one process in q_1. We proceed similarly to put a process in q_2, note that the message m_2 sent at the beginning of the path cannot be received by the process in q_1 since, as explained above, m_2 ∉q_1. We proceed again to put a process in the states q_1 to q_K and at the end we obtain the configuration C”_U with the desired properties. §.§ Proof of Lemma <ref> In this subsection, the different items of Lemma <ref> have been separated in distinct lemmas. F(γ) is consistent and can be computed in polynomial time for all consistent γ∈Γ. The fact that F(γ) can be computed in polynomial time is a direct consequence of the definition of F (see <ref>). Assume γ = (S,) ∈Γ to be consistent. Note (S”, ”) the intermediate sets computed during the computation of F(γ), and note F(γ) = (S', '). To prove that F(γ) is consistent, we need to argue that (1) for all (q, m) ∈”∖, there exists a finite sequence of transitions (q_0, a_0, q_1) … (q_k, a_k, q) such that q_0 ∈ S, and a_0 = !m and for all 1 ≤ i≤ k, we have that a_i = ?m_i and that there exists (q'_i, !m_i, q'_i+1) ∈ T with q'_i ∈ S, and (2) for all (q,m), (q',m') ∈' either m∈q' and m'∈q or m∉q' and m'∉q. We start by proving property (1). If (q, m) has been added to ” with rule <ref>, then by construction, there exists p ∈ S such that (p, !a, p') ∈ T, and (q, m) = (p', a). The sequence of transition is the single transition is (p, !a, q). If (q, m) has been added to ” with rule <ref>, then there exists (q',m) ∈, and (q', ?a, q) with m a. Furthermore, m ∈q and there exists (p, !a,p') ∈ T with p ∈ S. By hypothesis, γ is consistent, hence there exists a finite sequence of transitions (q_0, q_0, q_1) … (q_k, a_k, q') such that q_0 ∈ S, and a_0 = !m and for all 1 ≤ i≤ k, we have that a_i = ?m_i and that there exists (q'_i, !m_i, q'_i+1) ∈ T with q'_i ∈ S. By completing this sequence with transition (q', ?a, q) we get an appropriate finite sequence of transitions. It remains to prove property (2). Assume there exists (q, m), (q',m') ∈' such that m ∈q' and m' ∉q, then as ' ⊆”, (q, m), (q',m') ∈”. By condition <ref>, q ∈ S', therefore, as ' = {(p, a) ∈”| p ∉ S'}, we have that (q, m) ∉', and we reached a contradiction. If (S',')=F(S,) then S ⊊ S' or ⊆'. From the construction of F (see <ref>), we have S ⊆ S”⊆ S'. Assume now that S=S'. First note that ⊆” (see Table <ref>) and that ∩ S=∅. But '=(q,m) ∈”| q ∉S'=(q,m) ∈”| q ∉S. Hence the elements that are removed from ” to obtain ' are not elements of . Consequently ⊆'. For all consistent γ∈Γ, if C ∈γ and C C' then C' ∈F(γ). Let γ = (S,)∈Γ be a consistent abstract set of configurations, and C ∈ such that C ∈γ and C C'. Note F(γ) = (S', ') and γ' = (S”, ”) the intermediate sets used to compute F(γ). We will first prove that for all state q such that C'(q) > 0, q ∈ S' or q ∈('), and then we will prove that for all states q such that q ∈(') and C'(q)>0, C'(q) = 1 and for all other state p∈(') such that C'(p) >0, p and q are conflict-free. Observe that S ⊆ S”⊆ S', ⊆”, and (”) ⊆(') ∪ S'. First, let us prove that for every state q such that C'(q)>0, it holds that q ∈ S' ∪('). Note that for all q such that C(q) > 0, because C respects γ, q ∈() ∪ S. As () ∪ S ⊆(') ∪ S', the property holds for q. Hence, we only need to consider states q such that C(q) = 0 and C'(q) > 0. 
If C τ C' then q is such that there exists (q', τ, q) ∈ T, q' is therefore an active state and so q' ∈ S, (recall that ⊆ Q_W ×Σ). Hence, q should be added to (”) ∪ S” by condition <ref>. As (”) ∪ S”⊆(') ∪ S', it concludes this case. If C a C' then q is such that there exists (q', !a, q) ∈ T, with q' an active state. With the same argument, q' ∈ S and so q should be added to (”) ∪ S” by condition <ref> or <ref>. If C a C', then q is either a state such that (q', !a, q) ∈ T and the argument is the same as in the previous case, or it is a state such that (q', ?a, q) ∈ T, and it should be added to (”)∪ S” by condition <ref>, <ref>, or <ref>. Therefore, we proved that for all state q such that C'(q) >0, it holds that q ∈(') ∪ S'. It remains to prove that if q ∈(), then C'(q) = 1 and for all q' ∈(') ∖{q} such that C'(q') = 1, we have that q and q' are conflict-free. Note that if q ∈() and C(q) = C'(q) = 1, then for every state p such that p ∈() and C(p) = C'(p) = 1, it holds that q and p are conflict-free. Observe that if C τ C', then note q the state such that (q', τ ,q), it holds that {p | p ∈(') and C'(p) > 0}⊆{p | p ∈() and C(p) = 1}: q' is an active state, q might be in () but it is added to S”⊆ S' with rule <ref>, and for all other states, C'(p) = C(p). If p ∈(') and C(p) > 0, it implies that C'(p)= C(p) = 1 and p∈() (otherwise p is in S ⊆ S'). Hence, there is nothing to do as C respects γ. Take now q ∈(') ∖() with C'(q) > 0, we shall prove that C'(q) =1 and for all p ∈(') and C'(p) > 0, q and p are conflict-free. If q ∈(') ∖(), it implies that C(q) = 0 because C respects γ. Hence: either (1) C a C' with transition (q', !a, q) ∈ T, either (2) C a C' with transitions (q_1, !a, q'_1) ∈ T and (q_2, ?a, q'_2) ∈ T and q = q'_1 or q=q'_2. In the latter case, we should be careful as we need to prove that q'_2 q'_1, otherwise, C'(q) = 2. Case (1): Note that as only one process moves between C and C' and C(q)= 0, it is trivial that C'(q) = 1. In this first case, as it is a non-blocking request on a between C and C', it holds that: for all p ∈() such that C(p) = 1, a ∉p. Take p ∈('), such that p q and C'(p) = 1, then C'(p) = C(p) = 1 and so p ∈(), and a ∉p. Suppose (p, m) ∈' such that m ∈q, then we found two tokens in ' such that m ∈q and a ∉p which contradicts F(γ)'s consistency. Hence, p and q are conflict-free. Case (2): Note that if q'_2 ∈('), then q_2 ∈() (otherwise, q'_2 should be in S' by condition <ref>), and note (q_2, m) ∈, with (q'_2, m) ∈'. Note as well that if q'_1 ∈('), then a ∈q'_1 (otherwise, q'_1 should be in S' by condition <ref>) and (q'_1 ,a) ∈' by condition <ref>. Furthermore, if q'_1 ∈('), q_2 ∈() as well as otherwise q'_1 should be added to S' by condition <ref>. We first prove that either q'_1 ∈ S', or q'_2 ∈ S'. For the sake of contradiction, assume this is not the case, then there are three tokens (q'_1, a), (q_2, m), (q'_2, m) ∈' ⊆”, such that (q_2, ?a, q'_2) ∈ T. From condition <ref>, q'_1 should be added to S' and so (q'_1, a) ∉'. Note that, as a consequence q'_1 q'_2 or q'_1 = q'_2 ∈ S'. Take q ∈(') ∖() such that C'(q) >0, if such a q exists, then q = q'_1 or q = q'_2 and q'_1 q'_2. As a consequence, C'(q) = 1 (note that if q'_1 = q_2, C(q_2) = 1). Take p ∈(') ∖{q} such that C'(p) > 0, it is left to prove that q and p are conflict-free. If p q and p ∈('), then C'(p) = C(p) (because q'_1 ∈ S' or q'_2 ∈ S'). Hence, p ∈() and C'(p) = 1. Assume q = q'_1 and assume q and p are not conflict-free. Remember that we justified that q_2 ∈(), and therefore, C(q_2) = 1. 
Hence, either C'(q_2) = 0, or q_2 = q'_2 and in that case q_2,q_2' ∈ S' or q_2' = q_1' and then q_2=q. In any case, p q_2. As C respects γ, there exists (p, m_p) and (q_2, m) ∈ such that m_p ∉q_2 and m ∉p (q_2 and p are conflict-free). As p ∈('), (p,m_p) ∈' and so m_p∈q or a ∈p (q and p are not conflict-free). As F(γ) is consistent, m_p∈q and a ∈p. Note that a m_p because a ∈q_2, a m because m ∉p, and obviously m m_p. Note also that if m ∉q, then we found two tokens (q,a) and (q_2,m) in ' such that a ∈q_2 and m ∉q, which contradicts the fact that F(γ) is consistent (Lemma <ref>). Hence, m∈q. Note that even if q_2 is added to S”, it still is in ”. As ' ⊆” we found three tokens (p, m_p), (q_2,m), (q, a) in ”, satisfying condition <ref>, and so p should be added to S', which is absurd as p ∈('). We reach a contradiction and so q and p should be conflict-free. Finally assume q = q_2'. If q = q_2, then, because C respects γ, q and p are conflict-free. Otherwise, as q_2 is conflict-free with p, there exists (q_2, m ) and (p, m_p) in such that m ∉p and m_p ∉q_2. Note that (q,m) ∈” from condition <ref> (otherwise, q ∈ S” which is absurd). Hence, (q, m) ∈' and, as p ∈('), (p,m_p) is conserved from to '. It remains to show that m_p ∉q. Assume this is not the case, then there exists (p,m_p) and (q,m) ∈' such that m∉p and m_p∈q which is absurd given F(γ)'s consistency. As a consequence, q and p are conflict-free. We managed to prove that for all q such that C'(q) >0, q ∈ S' ∪('), and if q ∈('), then C'(q) = 1 and for all others p∈(') such that C'(p) = 1, p and q are conflict-free. For all consistent γ∈Γ, if C' ∈F(γ), then there exists C”∈ and C ∈γ such that C”≥ C' and C ^∗ C”. Let γ be a consistent abstract set of configurations and C'∈F(γ). We suppose that γ=(S,) and F(γ)=γ'=(S','). We will first show that for all N ∈, for all q ∈ S' there exists a configuration C_q ∈γ and a configuration C_q' ∈ such that C_q ^∗ C_q' and C'_q(q) ≥ N. This will allow us to rely then on Lemma <ref> to conclude. Take N ∈ and q ∈ S', if q ∈ S, then take C_q ∈γ to be N · q. Clearly C_q ∈F(γ), C_q(q) ≥ N and C_q ^∗ C_q. Now let q ∈ S' ∖ S. Note (”, S”) the intermediate sets of F(γ)'s computation. Case 1: q ∈ S”. As a consequence q was added to S” either by one of the conditions <ref>, <ref>, <ref> or <ref>. In cases <ref> and <ref> when a ∉q, note q' the state such that (q', τ, q) or (q', !a, q), and consider the configuration C_q = N · q'. By doing N internal transitions or non-blocking requests, we reach C'_q= N · q. Note that the requests on a are non-blocking as q' ∈ Q_A and a ∉q. C'_q ∈F(γ). In cases <ref> with a∈q and in case <ref>, note (q_1, !a, q_1') and (q_2, ?a, q_2') the two transitions realizing the conditions. As a consequence q_1, q_2 ∈ S. Take the configuration C_q =N · q_1, N · q_2. C_q ∈γ and by doing N successive rendez-vous on the letter a, we reach configuration C'_q = N· q'_1 + N · q'_2. C'_q ∈F(γ), and as q ∈{q'_1, q'_2}, C'_q(q) ≥ N. In case <ref>, there exists (q', m) ∈ such that (q', ?a, q) ∈ T, m ∉q, and there exists p ∈ S such that (p, !a,p') ∈ T. Remember that γ is consistent, and so there exists a finite sequence of transitions (q_0, !m, q_1) (q_1, a_1, q_2) … (q_k, a_k, q') such that q_0 ∈ S and for all 1 ≤ i ≤ k, a_i = ?m_i and there exists (q'_i , !m_i, q”_i) ∈ T with q'_i ∈ S. Take C_q = (N-1) · q_0 + (N-1) · q'_1 + … + (N-1) · q'_k + N · p + q'. Clearly C_q ∈γ as all states except q' are in S and q' ∈(), C_q(q') = 1. 
We shall show how to put 2 processes on q from C_q and then explain how to repeat the steps in order to put N. Consider the following execution: C_q a C_1 x_m C_2 m_1…m_k C_k+2a C_k+3. The first rendez-vous on a is made with transitions (p, !a, p') and (q', ?a, q). Then either m ∉p' and x_m = m, otherwise, x_m = m, in any case, the rendez-vous or non-blocking sending is made with transition (q_0, !m, q_1) and the message is not received by the process on q (because m ∉q) and so C_2 ≥q + q_1. Then, each rendez-vous on m_i is made with transitions (q'_i, !m_i,q”_i) and (q_i, ?m_i, q_i+1) (q_k+1 = q'), . Hence C_k+3≥(N-2)· q_0+ (N-2) · q'_1 + … + (N-2) · q'_k + (N-2) · p + 2 · q. We can reiterate this execution (without the first rendez-vous on a) N-2 times to reach a configuration C'_q such that C'_q ≥N · q. Case 2: q ∉ S”. Hence, q should be added to S' by one of the conditions <ref>, <ref>, and <ref>. If it was added with condition <ref>, let (q_1, m_1), (q_2, m_2) ∈” such that q =q_1, m_1 m_2, m_2 ∉q_1 and m_1 ∈q_2. From the proof of Lemma <ref>, one can actually observe that all tokens in ” correspond to "feasible" paths regarding states in S, i.e there exists a finite sequence of transitions (p_0, !m_1, p_1) (p_1, a_1, p_2) … (p_k, a_k, q_1) such that p_0 ∈ S and for all 1 ≤ i ≤ k, a_i = ?b_i and there exists (p'_i , !b_i, p”_i) ∈ T with p'_i ∈ S. The same such sequence exists for the token (q_2, m_2), we note the sequence (s_0, !m_2, s_1)… (s_ℓ, a_ℓ, q_2) such that s_0 ∈ S and for all 1 ≤ i ≤ℓ, a_i = ?c_i and there exists (s'_i , !c_i, s”_i) ∈ T with s'_i ∈ S. Take C_q = N · p_0 + N · s_0 + N p'_1 + … + N p'_k + N · s'_1 + … + N · s'_ℓ. Clearly, C_q ∈γ, as all states are in S. Consider the following execution: C_q m_1 C_1 b_1…b_k C_k+1, the non-blocking sending of m_1 is made with transition (p_0, !m_1, p_1) and each rendez-vous on letter b_i is made with transitions (p'_i, !b_i, p_i”) and (p_i, ?b_i, p_i+1) (p_k+1 = q_1). Hence, C_k+1 is such that C_k+1≥q_1. From C_k+1, consider the following execution: C_k+1x_m_2 C_k+2c_1…c_ℓ C_k+ℓ +2m_1C_k+ℓ +3, where x_m_2 = m_2 if no process is on a state in R(m_2), or x_m_2 = m_2 otherwise. In any case, as m_2 ∉q_1, C_k+2≥q_1. And each rendez-vous on letter c_i is made with transitions (s'_i, !c_i, s_i”) and (s_i, ?c_i, s_i+1) (s_k+1 = q_2), the last rendez-vous on m_1 is made with transitions (p_0, !m_1, p_1) and (q_2, ?m_1, q_2') (such a q_2' exists as m_1 ∈q_2). Hence, C_k+ℓ +3≥p_1 + q_1. By repeating the two sequences of steps (without the first non-blocking sending of m_1) N-1 times (except for the last time where we don't need to repeat the second execution), we reach a configuration C'_q such that C'_q≥N · q_1. If it was added with condition <ref>, then let (q_1, m_1), (q_2,m_2), (q_3,m_2) ∈” such that m_1 m_2 and (q_2, ?m_1, q_3) ∈ T with q =q_1. From the proof of Lemma <ref>, ” is made of "feasible" paths regarding S and so there exists a finite sequence of transitions (p_0, !m_2, p_1) (p_1, a_1, p_2) … (p_k, a_k, q_2) such that p_0 ∈ S and for all 1 ≤ i ≤ k, a_i = ?b_i and there exists (p'_i , !b_i, p”_i) ∈ T with p'_i ∈ S. The same sequence exists for the token (q_1, m_1), we note the sequence (s_0, !m_1, s_1)… (s_ℓ, a_ℓ, q_1) such that s_0 ∈ S and for all 1 ≤ i ≤ℓ, a_i = ?c_i and there exists (s'_i , !c_i, s”_i) ∈ T with s'_i ∈ S. Take C_q = N · p_0 + N · s_0 + N p'_1 + … + N p'_k + N · s'_1 + … + N · s'_ℓ. Clearly, C_q ∈γ, as all states are in S. 
We do the same execution from C_q to C_k+1 as in the previous case: C_q m_2 C_1 a_1…a_k C_k+1. Here C_k+1 is then such that C_k+1≥q_2. Then, from C_k+1 we do the following: C_k+1m_1 C_k+2c_1…c_ℓ C_k+ℓ+2m_2 C_k+ℓ+3: the rendez-vous on letter m_1 is made with transitons (s_0, !m_1, s_1) and (q_2, ?m_1, q_3). Then, each rendez-vous on letter c_i is made with transitions (s'_i, !c_i, s_i”) and (s_i, ?c_i, s_i+1) (s_k+1 = q_1), and the last rendez-vous on letter m_2 is made with transitions (p_0, !m_2, p_1) and (q_3, ?m_2,q_3') (such a state q_3' exists as (q_3, m_2) ∈” and so m_2∈q_3). Hence, C_k+ℓ+3 is such that C_k+ℓ +3≥q_1 + p_1. We can repeat the steps from C_1 N-1 times (except for the last time where we don't need to repeat the second execution), to reach a configuration C'_q such that C'_q≥N · q_1. pas encore relu condition 8If it was added with condition <ref>, then let (q_1, m_1), (q_2, m_2), (q_3, m_3) ∈”, such that m_1 m_2, m_2 m_3, m_1 m_3, and m_1 ∉q_2, m_1 ∈q_3, and m_2 ∉q_1, m_2 ∈q_3 and m_3 ∈q_2 and m_3 ∈q_1, and q_1 = q. Then there exists three finite sequences of transitions (p_0, !m_1, p_1) (p_1, ?b_1, p_2) … (p_k, ?b_k, p_k+1), and (s_0, !m_2, s_1) (s_1, ?c_1, s_2) … (s_ℓ, ?c_k, s_ℓ +1), and (r_0, !m_3, r_1) (r_1, ?d_1, r_2) … (r_j, ?d_j, r_j+1) such that p_k+1 = q_1, s_ℓ +1 = q_2 and r_j+1 = q_3, and for all messages a ∈{ b_i_1, c_i_2, d_i_3}_1 ≤ i_1 ≤ k, 1 ≤ i_2 ≤ℓ, 1 ≤ i_3 ≤ j = M, there exists q_a∈ S such that (q_a, !a, q'_a). Take C_q = Np_0 + Ns_0 + Nr_0 + ∑_a ∈ MNq_a. From C_q there exists the following execution: C_q m_1 C_1 b_1…b_k C_k +1 where the non-blocking sending is made with the transition (p_0, !m_1, p_1) and each rendez-vous with letter b_i is made with transitions (q_b_i, !b_i, q'_b_i) and (p_i, ?b_i, p_i+1). Hence, C_k+1≥q_1. Then, we continue the execution in the following way: C_k+1x_m_2 C_k+2c_1…c_ℓ C_k+ ℓ +2 where x_m_2 = m_2 if there is no process on R(m_2), and x_m_2 = m_2 otherwise. In any case, the rendez-vous is not answered by a process on state q_1 because m_2 ∉q_1. Furthermore, each rendez-vous with letter c_i is made with transitions (q_c_i, !c_i, q'_c_i) and (s_i, ?c_i, s_i+1). Hence, C_k +ℓ+2≥q_2 + q_1. From C_k+ℓ +2 we do the following execution: C_k+ℓ +2m_3 C_k+ℓ +3d_1…d_j C_k +ℓ + j +3 where the rendez-vous on letter m_3 is made with transitions (r_0, !m_3, r_1) and (q_2, ?m_3, q_2') (this transition exists as m_3 ∈q_2). Each rendez-vous on d_i is made with transitions (q_d_i, !d_i, q'_d_i) and (r_i, ?d_i, r_i+1). Hence, the configuration C_k+ ℓ +j+3 is such that C_k+ℓ +j +3≥q_3 + q_1. Then from C_k+ℓ +j +3: C_k+ℓ + j +3m_1 C_k+ℓ + j +4 where the rendez-vous is made with transitions (p_0, !m_1, p_1) and (q_3, ?m_1, q'_3) (this transition exists as m_1 ∈q_3). By repeating N-1 times the execution from configuration C_1, we reach a configuration C'_q such that C'_q(q_1) ≥ N. Hence, for all N ∈ℕ, for all q ∈ S', there exists C_q ∈γ, such that C_qC'_q and C'_q(q) ≥ N. From Lemma <ref>, there exists C'_N and C_N ∈γ such that C_N ^∗ C'_N and for all q ∈ S', C_N(q) ≥ N. Take C' ∈F(γ), we know how to build for any N ∈, a configuration C'_N such that C'_N(q) ≥ N for all states q ∈ S' and there exists C_N ∈γ, such that C_N ^∗ C'_N, in particular for N bigger than the maximal value C'(q) for q ∈ S', C'_N is greater than C'_N on all the states in S'. To conclude the proof, we need to prove that from a configuration C'_N' for a particular N', we can reach a configuration C” such that C”(q) ≥ C'(q) for q ∈ S' ∪('). 
As C' respects F(γ), remember that for all q ∈('), C'(q) = 1. The execution is actually built in the manner of the end of the proof of Lemma <ref>. Note N_max the maximum value for any C'(q). We enumerate states q_1, …, q_m in (') such that C'(q_i) = 1. As C' respects F(γ), for i j, q_i and q_j are conflict free. From Lemma <ref>, F(γ) is consistent, and so we note (p^j_0, !m^j, p^j_1) (p^j_1, ?m^j_1, p^j_2) … (p^j_k_j, ?m^j_k_j, p^j_k_j+1) the sequence of transitions associated to state q_j such that: p^j_k_j+1 = q_j, (q_j, m^j) ∈ and for all m^j_i, there exists (q_m^j_i, !m_i^j, q'_m^j_i) with q_m^j_i∈ S'. Note that for all i j, q_i and q_j are conflict-free and so there exists (q_i, m), (q_j,m') ∈' such that m ∉q_j and m' ∉q_i. As F(γ) is consistent, it should be the case for all pairs of tokens (q_i, a), (q_j, a'). Hence m^j ∉q_i and m^i ∉q_j. Note ℓ_j = k_j + 1. For N' = N_max + ∑_1≤ j ≤ mℓ_j, there exists a configuration C'_N' such that there exists C_N'∈γ, C_N'^*C'_N', and C'_N'(q) ≥ N' for all q ∈ S'. In particular, for all q ∈ S', C'_N'(q) ≥ C'(q) + ∑_1≤ j ≤ mℓ_j. Then, we still have to build an execution leading to a configuration C” such that for all q ∈('), C”(q) ≥ C'(q). We then use the defined sequences of transitions for each state q_j. With ℓ_1 processes we can reach a configuration C_1 such that C_1(q_1) ≥ 1: C_1 x_m^1 C_2 m_1^1…m_k_1^1 C_ℓ_1+ 1. x_m^1 = m^1 if there is no process on R(m^1), and x_m^1 = m^1 otherwise. Each rendez-vous on m_i^1 is made with transitions (p_i^1, ?m_i^1, p_i+1^1) and (q_m_i^1, ! m_i^1, q'm_i^1). As a result, for all q ∈ S', C_ℓ_1+1(q) ≥ C'(q) +∑_2≤ j ≤ mℓ_j and C_ℓ_1 +1(q_1) ≥ 1. We then do the following execution form C_ℓ_1 + 1: C_ℓ_1 +1x_m^2 C_ℓ_1+2m_1^2…m_k_2^2 C_ℓ_1+ ℓ_2+ 2. x_m^2 = m^2 if there is no process on R(m^2), and x_m^2 = m^2 otherwise. Remember that we argued that m^2 ∉q_1, and therefore C_ℓ_1 + 2(q_1) ≥ C_ℓ_1 +1(q_1) ≥ 1. Each rendez-vous on m_i^2 is made with transitions (p_i^2, ?m_i^2, p_i+1^2) and (q_m_i^2, ! m_i^2, q'm_i^2). As a result, C_ℓ_1+ℓ_2 +2(q) ≥ C'(q) +∑_3≤ j ≤ mℓ_j for all q ∈ S' and C_ℓ_1+ ℓ_2 + 2≥q_1 + q_2. We can then repeat the reasoning for each state q_i and so reach a configuration C” such that C”(q) ≥ C'(q) for all q ∈ S' and, C”≥q_1 + q_2 + …q_m. We built the following execution: C_N'^∗ C'_N'^∗ C”, such that C”≥ C', and C'_N'∈γ. §.§ Proof of Lemma <ref> Assume that there exists C_0 ∈ and C' ≥ C such that C_0 C_1 … C_ℓ =C'. Then using the Lemma <ref> iteratively, we get that C' ∈γ_ℓ. From the definition of F and ·, one can furthermore easily check that γ⊆F(γ) for all γ∈Γ. Hence we have γ_ℓ⊆γ_f and C' ∈γ_f. Before proving the other direction, we first prove by induction that for all i ∈ and for all D ∈γ_i, there exists C_0 ∈ and D' ≥ D such that C_0 ^∗ D'. The base case for i=0 is obvious. Assume the property holds for γ_i and let us show it is true for γ_i+1. Let E ∈γ_i+1. Since γ_i+1=F(γ_i), using Lemma <ref>, we get that there exists E' ∈ and D ∈γ_i such that E' ≥ E and D ^∗ E'. By the induction hypothesis, there exist C_0 ∈ and D' ≥ D such that C_0 ^∗ D'. Using the monotonicity property stated in Lemma <ref>, we deduce that there exists E”∈ such that E”≥ E' ≥ E and C_0 ^∗ D' ^∗ E”. Suppose now that there exists C”∈γ_f such that C”≥ C. By the previous reasoning, we get that there exist C_0 ∈ and C' ≥ C”≥ C such that C_0 ^∗ C'.
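The decision procedure implicit in the lemmas above computes the fixpoint γ_f by iterating F on the initial abstract set until it stabilises. The following Python sketch is not part of the original development — all identifiers (saturate, apply_F, Q, Sigma) are illustrative, and the concrete conditions defining F on a pair made of a set of states S and a set of tokens (state, message pairs) are abstracted into the user-supplied function apply_F, assumed to run in polynomial time and to satisfy the progress property proved above: if F(S, T) = (S', T') then either S is strictly enlarged, or S' = S and no token of T is removed. The sketch only illustrates the shape of the saturation loop and why it terminates after polynomially many rounds.

def saturate(S0, T0, apply_F, Q, Sigma):
    """Iterate the abstract operator F until a fixpoint gamma_f is reached.

    S0     : initial state component (a subset of Q)
    T0     : initial token component (a set of (state, message) pairs)
    apply_F: function (frozenset, frozenset) -> (frozenset, frozenset),
             a hypothetical stand-in for the rules defining F in the paper
    Q, Sigma: states and messages, used only for the iteration bound
    """
    S, T = frozenset(S0), frozenset(T0)
    # Progress: a non-fixpoint round either strictly enlarges S (at most |Q| times)
    # or keeps S and only adds tokens (at most |Q|*|Sigma| times), so the potential
    # |S|*(|Q||Sigma|+1) + |T| strictly increases and the loop is polynomial.
    for _ in range(len(Q) * (len(Q) * len(Sigma) + 1) + len(Q) * len(Sigma) + 1):
        S_new, T_new = apply_F(S, T)
        if S_new == S and T_new == T:
            return S, T  # the fixpoint gamma_f
        S, T = S_new, T_new
    raise AssertionError("apply_F violates the progress property")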
http://arxiv.org/abs/2307.05654v2
20230711150925
Practical Dirac Majorana confusion theorem: Issues and Applicability
[ "C. S. Kim" ]
hep-ph
[ "hep-ph", "hep-ex", "hep-th", "nucl-th" ]
http://arxiv.org/abs/2307.05703v1
20230711181149
Simple unbalanced optimal transport
[ "Boris Khesin", "Klas Modin", "Luke Volk" ]
math.DG
[ "math.DG", "math.OC" ]
We introduce and study a simple model capturing the main features of unbalanced optimal transport. It is based on equipping the conical extension of the group of all diffeomorphisms with a natural metric, which allows a Riemannian submersion to the space of volume forms of arbitrary total mass. We describe its finite-dimensional version and present a concise comparison study of the geometry, Hamiltonian features, and geodesics for this and other extensions. One of the corollaries of this approach is that along any geodesic the total mass evolves with constant acceleration, as an object's height in a constant buoyancy field. Data-driven Discovery of Diffuse Interstellar Bands with APOGEE Spectra [ August 12, 2023 ======================================================================= § INTRODUCTION Many problems of optimal transport are closely related to the differential geometry of diffeomorphism groups. In particular, the problem of moving one mass (or density) to another by a diffeomorphism while minimizing a certain (quadratic) cost can be understood as construction of geodesics in an appropriate metric on the space of normalized densities (or on its completion), see e.g. <cit.>. Similar problems arise in applications when one attempts to evaluate the proximity between different shapes or medical images <cit.>. However, the action by a diffeomorphism does not allow a change of the total mass of the density. Hence one arrives at the problem of constructing a natural extension of the action which would allow one to connect in the most economical way densities of different total masses. Such problems belong to the domain of unbalanced optimal transport (UOT), and they have received a lot of attention lately, see e.g. <cit.>. Usually, the setting of unbalanced optimal transport involves a “large” extension G=⋉ C^∞_+(M) of the group of all diffeomorphisms of a manifold by means of a semidirect product with the space of smooth positive functions. Such a large semidirect-product group acts on densities by a change of coordinates and then by adjusting pointwise the obtained density by means of a function. In this paper we instead introduce and study a much simpler “small" extension ()= of the same group . This way both the group of diffeomorphisms and the space of normalized densities have similar conical extensions () and =((M)) by one extra parameter, the total mass m of the density. We describe natural metrics and geodesics for those extensions. It turned out that the corresponding problem of unbalanced optimal transport, while being much easier to handle, captures most of the main features for the large extensions. For instance, for both small and large extensions, a common phenomenon is that in many two-point problems a geodesic joining two end-densities goes through densities whose total mass dips below the smallest of the two it connects. In particular, one of the corollaries of this approach is that along any geodesic the total mass m evolves with constant acceleration, m̈=const, i.e. as an object's height in a constant buoyancy field. We also introduce special variables in which we demonstrate the convexity of the dynamical formulation for the simple conical extension, generalizing the convexity of standard optimal transport. This convex minimization formulation is known to be central for the existence and uniqueness of the corresponding solutions in such variational problems. 
One immediate additional advantage of the present approach is that it admits a finite-dimensional model, where the diffeomorphism group for M=ℝ^n is replaced by its subgroup GL(n), while the space of all volume forms is constrained to its subspace of non-normalized Gaussian densities on ℝ^n. Finally, we compare in more detail our “small" extension with two other “larger" extensions: the one considered in <cit.> and called Wasserstein-Fisher-Rao and the one which is indeed a weighted sum of the Wasserstein and Fisher-Rao metrics. In a sense, those two models can be viewed as extensions of our simpler model in, respectively, Lagrangian and Hamiltonian settings, as we discuss below. We describe the corresponding geodesics and candidates for their finite-dimensional counterparts. It turns out that the corresponding larger finite-dimensional models are less natural than for the small extension, as they require additional restrictions on orbits of the corresponding action. Acknowledgements. We would like to thank F.X. Vialard and T. Gallouët for thought-stimulating discussions. Research of B.K. was partially supported by an NSERC Discovery Grant. K.M. was supported by the Swedish Research Council (grant number 2022-03453) and the Knut and Alice Wallenberg Foundation (grant number WAF2019.0201). § A CONICAL EXTENSION OF THE DIFFEOMORPHISM GROUP Let M be a Riemannian manifold with volume form μ of total volume (or “mass") equal to 1. Let denote the set of all (un-normalized) volume forms on M of finite total volume. While for most applications one can think of a compact manifold M, it is also convenient to keep in mind the case of M=ℝ^n with Gaussian densities on it. Let denote the direct product Lie group. A left action of on is given by (φ,m)·ϱ = m φ_*ϱ for φ∈, ϱ∈, and m∈ℝ_+. Fixing a Riemannian volume form μ, this left action endows the product ×ℝ_+ with the structure of a principal G-bundle with projection π×ℝ_+ → (φ,m) ↦ m φ_*μ and corresponding isotropy subgroup G given by G = { (φ, m)| m φ_*μ = μ} = ×{ 1}. It follows that m=1 by taking the integral. The Lie algebra of G is thus 𝔤 = 𝔛_μ(M)×{ 0}. The tangent space of the fibre through (φ,m), denoted 𝒱_(φ,m) = dπ_(φ,m), gives the vertical distribution associated with the bundle π. The vertical distribution for π×ℝ_+→ is given by 𝒱_(φ,m) = {(v∘φ,0)|div(ρ v) = 0 for ρ = m φ_*μ/μ}. Given a curve (φ(t),m(t)) ∈×ℝ_+ with (φ(0),m(0)) = (φ,m), we have that for (φ̇(0),ṁ(0)) = (v∘φ,ξ m )∈ T_(φ, m )(×ℝ_+): dπ_(φ, m )(v∘φ,ξ m ) = .d/dt|_t=0 m (t)φ(t)_*μ = ξ m φ_*μ + m .d/dt|_t=0φ(t)_*μ = ξρμ - m L_vφ_*μ = ξρμ - m (ρ v)μ. Thus (v∘φ,ξ m )∈𝒱_(φ, m ) if and only if ξρ = m (ρ v). Now, by integrating the both sides against μ over M we see that the integral of the divergence is zero. This implies that the constant ξ=0, which in turn implies that the divergence is zero pointwise. This concludes the proof. §.§ A natural metric for unbalanced optimal transport (UOT) Consider the following metric on the direct product group : [(φ, m )]((φ̇,ṁ ), (φ̇,ṁ )) = m ∫_M |φ̇|^2 μ + ṁ ^2/ m = ∫_M | v |^2 ϱ + m ξ^2 for vaiables v = φ̇∘φ^-1, ξ = ṁ / m, and ϱ = m φ_*μ. Recall that for a Riemannian manifold N with metric g(v,v) its conical extension (N):=N×ℝ_+ is a Riemannian manifold with metric r^2g(v,v)+dr^2. Consequently, the above product group is a natural conical extension of the most straightforward L^2 metric on given by ⟨φ̇, φ̇⟩_φ = ∫_M |φ̇|^2 μ . 
Indeed, by changing variables m =r^2 (implying ṁ =2rṙ) we come to the conical extension with metric ⟨ (φ̇,ṙ), (φ̇,ṙ)⟩_(φ,r) = r^2∫_M |φ̇|^2 μ + 4ṙ^2 . Here and below we always consider vector fields, densities, etc. of an appropriate Sobolev class H^s(M) with sufficiently large s (s>n/2+1), and such that all the integrals discussed are finite, cf. <cit.>. The orthogonal complement of the vertical distribution with respect to the metric on ×ℝ_+ gives the horizontal distribution of the bundle. For the metric in equation (<ref>), the horizontal distribution at (φ, m ) is given by ℋ_(φ, m ) = { (∇θ∘φ, ξ m )|θ∈ C^∞(M), ξ∈ℝ} ≃{θ∈ C^∞(M) } , where (∇θ∘φ,∫_Mθϱ) ↔θ . The vertical distribution 𝒱_(φ, m ) consists of (v∘φ,0) where v is divergence-free with respect to ϱ = m φ_*μ, i.e., div(ρ v) = 0. Thus it follows from the (generalized) Hodge decomposition and the choice of metric (<ref>) that if (u∘φ,ξ m ) ∈ℋ then u = ∇θ is a gradient vector field. It now follows that (∇θ∘φ,ξ m ) is orthogonal to 𝒱_(φ, m ) for any ξ∈ℝ. In particular, we may encode ξ in the arbitrary constant of θ for ∇θ. The choice ξ m = ∫_M θϱ gives a geometric identification of ℋ_(φ, m ) with the space C^∞(M). The metric in (<ref>) projects as a Riemannian submersion to the metric on given at any point ρ∈ by [ϱ](ϱ̇,ϱ̇) = ∫_M ( |∇θ|^2 + ξ^2 )ϱ , ρ̇= -(ρ∇θ) + ξρ , ∫_Mϱ̇= m ξ . Furthermore, the variable θ∈ C^∞(M) defined by the equations above together with ξ m = ∫_M θϱ is Legendre-dual to ϱ̇ under the pairing ⟨ϱ̇,θ⟩ = ∫_M θϱ̇. Consequently, the Hamiltonian on T^* corresponding to the metric is H(ϱ,θ) = 1/2∫_M |∇θ|^2ϱ + 1/2 m ( ∫_M θϱ)^2_1/2 m ξ^2 . First, notice that the metric 𝒢 is invariant under the right action of the isotropy subgroup G on the tangent bundle T(Diff(M)×ℝ_+). Thus, 𝒢 is compatible with the principal bundle structure, so it indeed induces a metric 𝒢̅ on the base Vol(M). Now take an arbitrary horizontal vector (∇θ∘φ,ξ m ) ∈ℋ_(φ,m). If ϱ = π(φ,m) and ϱ̇= dπ_(φ,m)(∇θ∘φ,ξ m ) is the lifted bundle projection, then, by definition, 𝒢̅_ϱ(ϱ̇, ϱ̇) ≡𝒢_(φ,m)(∇θ∘φ,ξ m ,∇θ∘φ,ξ m ) = ∫_M |∇θ|^2 ϱ + m ξ^2. From equation (<ref>) for dπ we get that ρ̇= -div(ρ∇θ) + ξρ. Applying integration and using that m = ∫_M ϱ, we see that ∫_M ϱ̇= m ξ . This confirms the formula (<ref>) for the induced metric. For the second statement, that θ is in fact the Legendre transform, the variable Legendre-dual to ϱ̇ is defined by δ L/δϱ̇ where L is the Lagrangian corresponding to 𝒢̅. Given a variation ϱ̇_ϵ = ϱ̇+ ϵ δϱ̇ we obtain .d/dϵ|_ϵ=0L(ϱ,ϱ̇_ϵ) = ∫_M(∇θ·∇.d/dϵ|_ϵ=0θ_ϵ) ϱ_(i) + ξ m.d/dϵ|_ϵ=0ξ_ϵ_(ii). On the other hand, from the definition of θ in (<ref>) we see that δρ̇= .d/dϵ|_ϵ=0ρ̇_ϵ = -(ρ∇.d/dϵ|_ϵ=0θ_ϵ) + ρ.d/dϵ|_ϵ=0ξ_ϵ . By applying the divergence theorem to the term (i) and then comparing (<ref>) with (<ref>), we see that ⟨δϱ̇,θ⟩ = .d/dϵ|_ϵ=0L(ϱ,ϱ̇_ϵ), giving θ as the Legendre-dual variable of ϱ̇. The form of the Hamiltonian follows readily. Equipping ×ℝ_+ and with the metrics and (see (<ref>) and (<ref>)) makes π×ℝ_+→ into a Riemannian submersion, which gives a correspondence between geodesics in and horizontal geodesics in ×ℝ_+ (i.e. those tangent to the horizontal distribution), when given an initial point in the fibre. The convexity of the dynamical formulation of standard optimal transport, as studied by Benamou and Brenier <cit.>, carries over to the conical extension. Indeed, in the variables ρ̅= ρ/m = φ_*μ/μ > 0, w =ρ̅∇θ, and r = √(m) > 0 it becomes min_w,ρ̅, r∫_0^1 ( r^2 ∫_M | w|^2/ρ̅μ + 4ṙ^2) dt , i.e. 
a minimization of a convex functional, under the affine constraints ρ̇̅̇ + div w = 0, ρ̅(0,·) = ρ_0/m_0 , ρ̅(1,·) = ρ_1/m_1, r(0) = √(m_0), r(1) = √(m_1). This convex minimization formulation is important for existence and uniqueness of solutions. §.§ Geodesic equations The equations of geodesics for the above metrics can be computed in either Lagrangian or Hamiltonian form. The Lagrangian form of the geodesic equations, i.e. equations in the corresponding tangent bundle, for the conical manifold can be obtained using the formulas for warped Riemannian manifolds (see <cit.>). We review this approach in the Appendix (Section <ref>). Here we derive the geodesic equations on the cotangent bundle, i.e. as the Hamiltonian equations for the Hamiltonian (<ref>). The geodesic equations in the Hamiltonian form for the Hamiltonian (<ref>) are given by ρ̇ = -div(ρ∇θ) + ξρ θ̇ = -1/2|∇θ|^2 - ξθ + ξ^2/2. Hamilton's equations are ϱ̇= δ H/δθ and θ̇= -δ H/δϱ. First, consider a variation θ_ϵ = θ + ϵ δθ, where: .d/dϵ|_ϵ=0H(ϱ,θ_ϵ) = ∫_M (ρ∇θ)·∇(δθ) μ + 1/m(∫_Mθ ϱ)(∫_Mδθ ϱ) = ∫_M (ρ∇θ)·∇(δθ) μ + ∫_M(ξρ)δθ μ = ∫_M (((ρ∇θ)δθ) - (ρ∇θ)δθ + (ξρ)δθ) μ = ∫_M (- (ρ∇θ) + ξρ)δθ μ, and so ρ̇= -(ρ∇θ) + ξρ. Similarly, considering a variation ϱ_ϵ = ϱ + ϵ δϱ: .d/dϵ|_ϵ=0 H(ϱ_ϵ,θ) = 1/2∫_M|∇θ|^2 δϱ - ṁ_0/2m^2(∫θ ϱ)^2 + 1/m(∫_Mθ ϱ)(∫_Mθ δϱ) = 1/2∫_M|∇θ|^2 δϱ - ṁ_0/2m^2(mξ)^2 + 1/m(mξ)(∫_Mθ δϱ), but ṁ_0 = .d/dϵ|_ϵ=0∫_Mϱ_ϵ = ∫_Mδϱ, so: = 1/2∫_M(|∇θ|^2 - ξ^2/2 + ξθ) δϱ, hence θ̇= -1/2|∇θ|^2 - ξθ + ξ^2/2. Recall from above that m = ∫_M ϱ is the total volume and that ξ = ∫_M θϱ / m is the logarithmic derivative of m. The evolution of m and ξ is described by the following theorem. The variables m and ξ fulfill the equations ξ̇ = 1/ m ( H(ϱ,θ) - m ξ^2 ) ṁ = m ξ . It can also be written as the second order equation m̈ = H . Since H(ϱ,θ) is constant along solutions, we obtain the following: The total volume m :=∫_Mϱ evolves with constant acceleration that depends only on the energy level of the initial conditions. In other words, the volume m evolves as an object's height in a constant gravity or buoyancy field. Note that in a conical metric it is a common phenomenon that, depending of the boundary conditions, a geodesic joining two densities might enter the region where the total mass is smaller than the smallest of the two it connects. By construction ṁ = ξ m. From ξ m = ∫_Mθϱ we then get ξ̇= d/dt1/ m ∫_M θϱ = 1/ m ∫_M ( θ̇ϱ+θϱ̇) - ξ^2 = 1/ m ∫_M ( ( -1/2|∇θ|^2 - ξθ + ξ^2/2)ϱ + θ( -div(ρ∇θ) + ξϱ)) - ξ^2 = 1/ m ∫_M ( 1/2|∇θ|^2 )ϱ - ξ^2/2 = 1/ m ( ∫_M 1/2|∇θ|^2 ϱ + 1/2 m ξ^2 - 1/2 m ξ^2 ) - ξ^2/2 = H(ϱ,θ)/ m - ξ^2 . We then get that m̈ = ṁξ + m ξ̇= m ξ^2 + m 1/ m (H - m ξ^2) = H. Note from equation (<ref>) that H ≥1/2mξ^2 with equality if and only if θ is constant, which corresponds to the invariant subset of pure scalings of the density ϱ. §.§ A finite-dimensional version of the simple UOT The existence of a finite-dimensional version of the conical extension is based on the following observation. Suppose that a submanifold N⊂ M is a totally geodesic in the manifold M. Then (N) is totally geodesic in (M). To prove the totally geodesic property, one needs to compute the geodesic equations. One can see that if the covariant derivatives ∇_q̇q̇ for q∈ N⊂ M belong to the tangent bundle of N then its extension by the radial variable r∈ℝ_+ can belong to the product of the tangent bundle of N and ℝ_+. 
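As an illustration of the corollary on the evolution of the total volume above — the following elementary computation is not taken from the text, but is an immediate consequence of m̈ = H with H constant and non-negative, for a geodesic parametrised on [0,1] — the mass profile with endpoint masses m(0) = m_0 and m(1) = m_1 is the parabola m(t) = m_0 + (m_1 - m_0 - H/2) t + (H/2) t^2 . For H > 0 its minimum is attained at t_* = 1/2 + (m_0 - m_1)/H, so the total mass dips strictly below min(m_0, m_1) exactly when t_* ∈ (0,1), i.e. when H > 2|m_1 - m_0|; otherwise m is monotone on [0,1]. This quantifies the remark above that a geodesic may pass through densities whose total mass is smaller than both endpoint masses.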
Conical extensions (n)×ℝ_+⊂×ℝ_+ of the submanifolds (n)⊂ are totally geodesic for the natural UOT metric.[The same statement holds for the unbalanced Ḣ^1 and Fisher-Rao metrics considered below.] On the base, we now restrict the metric to the space of scaled (or non-normalized) Gaussian densities 𝒩⊂. In the total space, we restrict the metric to the finite-dimensional direct product subgroup (n)×ℝ_+⊂: [(A, m )]((Ȧ,ṁ ),(Ȧ,ṁ )) = m ∫_ℝ^nȦx^2η(x) + ṁ ^2/ m , where φ(x) = Ax for A∈(n) and η = p(x,Σ) dx is a normal density with covariance matrix Σ and zero mean, p(x,Σ) = 1/√((2π)^n|Σ|)exp(-1/2 x^⊺Σ^-1 x). Recall that the isotropic Gaussian is given by μ(x) = 1/√((2π)^n)exp(-1/2x^⊺ x ) dx. Consider now a group element (φ: x↦ Ax, m ). The action on μ is m φ_*μ = √( m ^2/(AA^⊺)(2π)^n)exp(-1/2x^⊺(AA^⊺)^-1x ) dx =: m p(x,AA^⊺_Σ)dx. The latter has the natural scaling property: m p(x,Σ) = p( x/√( m ), Σ/√( m ^2)). Note that one cannot write m p(x,Σ) = p(x,Σ̃) for some covariance matrix Σ̃, since p(·,Σ_1) = p(·,Σ_2) Σ_1 = Σ_2. It stands that after identifying the Gaussian densities with their (symmetric positive definite) covariance matrices in _+(n), the finite-dimensional version of our bundle is π(n)×ℝ_+ →_+(n)×ℝ_+ (A, m ) ↦ (AΣ A^⊺, m ) where we parametrize the base using both the covariance matrix and the total volume. The metric (<ref>) on GL(n)×ℝ_+ in terms of (A,V = Ȧ A^-1)∈ T_A(n) and ( m ,ξ)∈ T_ m ℝ_+ is given by m ( ∫_ℝ^n |V x|^2 p(x, Σ)dx + ξ^2) = m ((Σ V^⊺ V) + ξ^2 ). The vertical and horizontal distributions of (n)×ℝ_+ with the metric are given by: 𝒱_(A, m ) = {(VA,0) ∈ T_A(n)×ℝ| 0 = V(AΣ A^⊺) + (AΣ A^⊺)V^⊺}, ℋ_(A, m ) = {(VA,ξ m ) ∈ T_A(n)×ℝ| V∈(n), ξ∈ℝ}. If (A(t), m (t)) is a path in (n)×ℝ_+ with (A(0), m (0)) = (A, m ) and (Ȧ(0),ṁ (0))=(VA,ξ m ), then: dπ_(a, m )(VA,ξ m ) = .d/dt|_t=0(A(t)Σ A(t)^⊺, m (t)) = (Ȧ(0)Σ A(0)^⊺ + A(0)ΣȦ(0)^⊺,ṁ (0)) = (VAΣ A^⊺ + AΣ A^⊺ V^⊺,ξ m ), which gives the desired vertical distribution as its kernel. Noting that 𝒱_(A, m ) consists of VA such that VAΣ A^⊺ is antisymmetric, if WA∈ℋ_(A, m ) then for all such Z = VA we have: 0 = [(A, m )]((W,ξ m ),(Z,0)) = m (WAΣ Z^⊺) = - m (W(ZΣ A^⊺)). Picking ZΣ A^⊺ to be the elementary antisymmetric matrix with 1 in the (i,j)-entry and -1 in the (j,i)-entry (for i≠ j) gives that W must be symmetric, giving the desired horizontal distribution. The projection π(n)×ℝ_+→_+(n)×ℝ_+ subduces a metric on _+(n)×ℝ_+ by defining: [π(A, m )](dπ_(A, m )(X,a),dπ_(A, m )(Y,b)) = [(A, m )]((X,a)_ℋ,(Y,b)_ℋ), where the subscript ·_ℋ denotes the horizontal part of the vector. This metric makes π into a Riemannian submersion. This gives the expression: [(V, m )]((X,ξ m ),(X,ξ m )) = m ((VSS) + ξ^2), where S is a symmetric n× n matrix that is a solution to the continuous Lyapunov equation given by X = SV + VS. This is simply the cone metric built from the “balanced” case described in <cit.>. Let us now compute the Legendre transform. The dual variable P to V̇ = X is given by ⟨ P,δV̇⟩ = d/dϵ1/2[(V,m)]((V̇_ϵ,ṁ),(V̇_ϵ,ṁ)) = m/2(Σ (δ S S + S δ S)) = m/2((V δ S + δ SV) S) = m/2(S δV̇), where for δ S we have δV̇ = δ SV + V δ S . Thus, the dual variable is P = mS/2. The dual variable for m is ξ. This gives the Hamiltonian H(V, m ,P,ξ) = (VPP)/2m + 1/2 mξ^2 . 
This gives the Hamiltonian form of the geodesic equations on T^*(_+(n)×ℝ_+) as V̇ = 2/ m (PV + VP), ṁ = ξ m Ṗ = -P^2/2 m , ξ̇= 1/2(tr(vP^2)/ m ^2 - ξ^2 ) Note that here the need for all four equations, as opposed to Theorem <ref> where we only have two equations, arises from the observation of Remark <ref> that we need two parameters to describe the unscaled Gaussians. §.§ Affine transformations and Gaussians with nonzero means It turns out that considering the group of affine transformations (n)⋉ℝ^n⊂ acting on Gaussians with arbitrary (not necessarily zero) means does not essentially change the above picture. While the group extension is semi-direct, its metric extension is a direct product, provided that a reference Gaussian is η = p(x,Σ) dx with mean μ = 0. For a more general reference Gaussian measure the metric accumulates the following terms: [(A,b, m )]((Ȧ,ḃ,ṁ ),(Ȧ,ḃ,ṁ )) = m [(ȦΣȦ^⊺) + Ȧμ^2 + 2⟨Ȧḃ,μ⟩ + ḃ^2] + ṁ ^2/ m , and it descends to the metric on ((n)⋉ℝ^n)×ℝ_+ given by: [(U,v, m )]((X,y,a),(W,z,b))= m [(XU^1/2Σ U^1/2W^⊺) + ⟨ XU^1/2Σ^1/2μ,WU^1/2Σ^1/2μ⟩. .+ ⟨ XU^1/2Σ^1/2z,μ⟩ + ⟨ WU^1/2Σ^1/2y,μ⟩ + ⟨ y,z⟩] + ab/ m . Note that if μ = 0, then several terms vanish, and one is left with the product metric of (_+(n)×ℝ^n)×ℝ_+. This implies that the geodesics between two Gaussian densities with different means are the pushforwards of measures by affine transformations, which decompose into the uniform motion between the centers of the two Gaussian densities and the (n) transformation with the fixed center. The explicit geodesics for _+(n) with the Wasserstein metric are given in McCann <cit.>. In particular, for U,V∈_+(n), define T = U^1/2(U^1/2VU^1/2)^-1/2U^1/2∈_+(n), and then W(t) = [(1-t)E + tT]V[(1-t)E + tT] is a geodesic between U and V. In our case, if the reference measure is of mean zero (μ=0) the geodesics in the balanced affine extension are those of the product _+(n)×ℝ^n. The geodesics in the unbalanced case are those of the conical extension _+(n)×ℝ^n×ℝ_+. The sectional curvatures of _+(n) with the Wasserstein metric are well understood (see <cit.>) and are known to be non-negative. Hence in the case μ = 0 the affine and conical extensions also have non-negative sectional curvatures. § A “LARGE" EXTENSION FOR UOT §.§ The “large" group, metric, and the geodesic equations A more “classical" approach to an unbalanced optimal transport involves the following large semidirect extension of the group of all diffeomorphisms of a manifold by means of the space of smooth functions, see e.g. <cit.>. Namely, the semi-direct product G=⋉ C^∞_+(M) acts on by (φ, λ )·ϱ = φ_*(λϱ), i.e. diffeomorphisms act on densities by changes of coordinates, while functions adjust the obtained density pointwise. Let μ∈ denote the reference volume form. Then we get a projection Π G→ by the action on μ. The vertical bundle is given by 𝒱_(φ,λ) = { (v∘φ,φ^*(L_v ϱ)/μ) | v∈𝔛(M)}≃𝔛(M), where ϱ = φ_*(λμ). A curve (φ(t),λ(t)) belongs to the fiber of ϱ∈ iff φ(t)_*(λ(t)μ) = ϱ for all t. Equivalently, λ(t) = φ(t)^*ϱ/μ. By differentiating this relation we get the result. This description of 𝒱 is equivalent to the one given by Vialard <cit.> as dπ(φ,√(Jac φ)) = {.(v, v/2)∘(φ,√(Jac φ)) | v∈𝔛(M)} The relation is ϱ = Jac(φ) μ, where the square root appears if one passes to half densities. Consider now the Riemannian metric on G considered in <cit.> and given by [(φ,λ)]((φ̇,λ̇), (φ̇,λ̇)) = ∫_M |φ̇|^2 λμ + ∫_M λ̇^2/λμ . The horizontal bundle of the metric  (<ref>) is ℋ_(φ,λ) = { (∇θ∘φ,λ (θ∘φ))|θ∈ C^∞(M) }≃ C^∞(M). 
Any element in T_(φ,λ) G can be written (u∘φ, λ (θ∘φ)). Suppose that (u∘φ, λ(θ∘φ))∈ℋ_(φ,λ). Since for all v∈𝔛(M) the pairs (v∘φ, φ^*(L_vϱ)/μ) span the vertical space 𝒱_(φ,λ), we have that 0 = [(φ,λ)]((v∘φ,φ^*(L_vϱ)/μ),(u∘φ,λ(θ∘φ))) = ∫_M⟨ v∘φ,u∘φ⟩ λμ + ∫_M(θ∘φ)φ^*(L_vϱ) = ∫_M⟨ v,u⟩ φ_* (λμ) + ∫_Mθ L_vϱ = ∫_M⟨ v,u⟩ ϱ - ∫_M⟨ v, ∇θ⟩ϱ = ∫_M⟨ v,u - ∇θ⟩ ϱ . The latter integral vanishes for any v∈𝔛(M) if and only if u = ∇θ, which concludes the proof. The metric given by (<ref>) projects as a Riemannian submersion to the metric on given by [ϱ](ϱ̇,ϱ̇) = 1/2∫_M ( |∇θ|^2 + θ^2 )ϱ , ρ̇= -div(ρ∇θ) + ρθ. The variable θ∈ C^∞(M) is Legendre-dual to ϱ̇, i.e., the Hamiltonian corresponding to the metric is H(ϱ,θ) = 1/2∫_M ( |∇θ|^2 + θ^2 )ϱ . The equations of geodesics (in Hamiltonian form) are ρ̇= -div(ρ∇θ) + ρθ θ̇= -1/2|∇θ|^2 - θ^2/2 . The proof of this follows similarly to Theorem <ref>. Hamilton's equations are ϱ̇= δ H/δθ and θ̇= -δ H/δϱ. Given a variation θ_ϵ = θ + ϵ δθ, note: .d/dϵ|_ϵ=0 H(ϱ,θ_ϵ) = ∫_M ρ(∇θ·∇(δθ) + θ δθ) μ = ∫_M (-(ρ∇θ) + ρθ)δθ μ, and so ρ̇= -(ρ∇θ) + ρθ. Similarly, considering a variation ϱ_ϵ = ϱ + ϵ δϱ, we see: .d/dϵ|_ϵ=0 H(ϱ_ϵ,θ) = ∫_M1/2(|∇θ|^2 + θ^2) δϱ, and so we immediately get θ̇= - |∇θ|^2/2 - θ^2/2. The metric in Theorem <ref> can be interpreted as an interpolation between Wasserstein–Otto and Fisher–Rao, but not a convex combination of the Riemannian metric tensors (see Remark <ref> below). One way of understanding the relation is the following: the Wasserstein–Otto part of the metric depends on the finite-dimensional metric g on M, but the second term does not. Thus, let us introduce a parameter β by making the replacement g↦β g. Then as β→∞ we recover the Fisher–Rao metric (since ∇θ→ 0 for the metric β g). On the other hand, as β→ 0 we recover the (scaled) Wasserstein–Otto metric. A direct interpretation is that the mixed metric behaves as Fisher–Rao on small scales, but as Wasserstein–Otto on large scales. The metric in Theorem <ref> lifted to a metric on ×ℝ_+ is given by ⟨ (v∘φ,λ̇), (v∘φ,λ̇)⟩_(φ,λ) = ∫_M ( |∇θ|^2 + θ^2 )ϱ , where ϱ = λφ_*μ and θ∈ C^∞(M) is the solution to the equation -div(ρ∇θ) + ρθ = -div(ρ v) + ρλ̇/λ . Notice, however, that θ is somewhat difficult to find, as it requires the solution of a non-local equation. The “small” extension discussed above does not encounter this difficulty. This shows that the metric of the simple UOT is not a restriction of the noticeably more complicated metric in Theorem <ref>. §.§ A finite-dimensional version of the large extension Consider the finite-dimensional group (n)⋉(n)⊂⋉ C^∞_+(ℝ^n), where (n) is the additive space of symmetric n× n matrices (or equivalently, the corresponding quadratic forms on ℝ^n), on which linear transformations act by the variable change. We regard (n) as a subset of C^∞_+(ℝ^n) by using the map E = {x↦exp(x^⊺ Sx) | S∈(n)}⊂ C^∞_+(ℝ^n) . This way the addition group of symmetric matrices (n) becomes a multiplication subgroup of positive functions C^∞_+(ℝ^n). An advantage of this approach is that the Riemannian submersion can be restricted to the finite-dimensional model, where the group (n)⋉ E acts on E⊂. Here an element (A,S)∈(n)⋉ E acts naturally on a density p(x,Σ) by changing variables and bringing the quadratic into the exponential: (A, S):    p(x,Σ)↦ p(x,Σ):= exp(x^⊺ S x) p(x,A^⊤Σ A) = 1/√((2π)^n|Σ|)exp(-1/2 x^⊺((A^⊤Σ A)^-1 - 2S)x). The drawback is that even if p(x,Σ) is a Gaussian density and S is also positive-definite, the symmetric matrix (A^⊤Σ A)^-1 - 2S might not be positive-definite! 
This means that the total volume of the density p(x, Σ )=exp(x^⊺ S x) p(x,A^⊤Σ A) in ℝ^n might be infinite. Thus one has to consider a restricted orbit of this action, constrained by the condition of positivity of the matrix (A^⊤Σ A)^-1 - 2S. Note that the required positivity is automatically satisfied and does not constrain anything in the infinite-dimensional setting of the space of L^2 densities . Also this constraint is not required in the finite-dimensional simple 1D conical extension described in Section <ref>. § OTHER VERSIONS OF THE UNBALANCED TRANSPORT METRIC §.§ A “small” extension with a divergence term Consider now the Riemannian metric on given by [(φ,λ)]( (v∘φ,ξλ), (v∘φ,ξλ) ) = 1/2∫_M ( | v|^2 + div(ρ v)^2/ρ^2)ϱ + λξ^2. Notice the similarity of with given by (<ref>): it is the same metric supplemented by the divergence term. In particular, on vertical vectors it is exactly the same metric. Thus, the horizontal bundle is the same as in Lemma <ref>. Consider now the metric on given by [ϱ](ϱ̇,ϱ̇) = ∫_M ( |∇ S|ϱ + (ϱ̇/ϱ)^2ϱ) , -div(ρ∇ S) = ρ̇- κρ . Here, we think of κ as a Lagrange multiplier to ensure that the average of the right hand side vanishes. The projection π→ given by π(φ,λ) = λφ_*μ is a Riemannian submersion with respect to and . The tangent derivative of the projection is T_(φ,λ)(v∘φ,ξλ) = ξϱ - L_vϱ . In particular, for a horizontal vector (∇θ∘φ, ∫_M θϱ) we have T_(φ,λ)(∇θ∘φ,∫_M θϱ) = ϱ/λ∫_M θϱ - div(ρ∇θ)μ . Taking this expression as ϱ̇ we see from the definition of that -div(ρ∇ S) = -div(ρ∇θ). Thus, ∇θ = ∇ S. We now plug this into the metric : [ϱ](ϱ/λ∫_M θϱ - div(ρ∇θ)μ,ϱ/λ∫_M θϱ - div(ρ∇θ)μ) = ∫_M |∇θ|ϱ + 1/λ∫_M θϱ∫_M θϱ + ∫_M div(ρ∇θ)^2/ρμ = [(φ,λ)]( (∇θ∘φ, ∫_M θϱ), (∇θ∘φ, ∫_M θϱ) ) This proves the assertion. The small conical extension with metric (see (<ref>)) can be viewed as the “common ground" for the Lagrangian and Hamiltonian extensions in constructions of an unbalanced optimal transport. Indeed, the Hamiltonian H(ϱ,θ) = 1/2∫_M ( |∇θ|^2 + θ^2 )ϱ (see (<ref>)) expressing the metric on in the dual variables is the sum of two terms. The first one corresponds to the Wasserstein metric, while the second term ∫_M θ^2 ϱ = ∫_M(ν/ϱ)^2ϱ for the density ν:=θϱ represents the Fisher-Rao metric. Hence the name of the WFR metric for the semidirect generalization of UOT developed in <cit.>. On the other hand, the metric on with an extra divergence term (see (<ref>)) has a similar WFR form, although not in the Hamiltonian, but in the Lagrangian setting: the first term is the Wasserstein metric, while the second, divergence term is the degenerate Ḣ^1 contribution giving the Fisher-Rao metric on . From this point of view, the direct product descending to metric on is the “intersection” of the two approaches, delivering the common terms for both extensions. §.§ Conical Fisher–Rao metrics Consider the group equipped with the Ḣ^1-type metric and its projection to the space (M) of normalized densities, equipped with the Fisher-Rao metric. It also admits the conical extensions with the projection on . Note that since the Fisher-Rao metric on (M) is spherical, its conical extension to ⊃(M) is an (infinite-dimensional) positive quadrant of the (pre-Hilbert) space of highest-degree forms naturally equipped with the flat L^2-metric. The positive quadrant is formed by all volume forms on the manifold. The projection →(M) from diffeomorphisms to volume forms is known to be a Riemannian submersion <cit.>, and it remains a Riemannian submersion for its conical extension →. 
The space equipped with the conical Ḣ^1-type metric has non-positive sectional curvatures. Indeed, under a Riemannian submersion the sectional curvature cannot decrease <cit.>. Since the base manifold of all volume forms is flat (i.e. its sectional curvatures all vanish) under the projection, the sectional curvatures of the space must be negative or equal to zero. This conical extension also admits a finite-dimensional version, extending the one in <cit.>. Indeed, the finite-dimensional submanifold (n)×ℝ_+⊂×ℝ_+ is totally geodesic and according to Lemma <ref> the corresponding projection to is totally geodesic as well. One can expect similar matrix decompositions coming from this Riemannian submersion, extending those in <cit.>. § APPENDIX: A GENERAL FORM OF THE GEODESIC EQUATIONS FOR A CONICAL EXTENSION The geodesic equations for the cone Q×ℝ_+ formed from a Riemannian manifold (Q,g) with the metric r^2g+dr^2 are: 0 = ∇_q̇q̇ + 2/αα̇q̇, 0 = α̈- g(q̇,q̇) α, for a geodesic γ = (q, α) ∈ Q×ℝ_+. For the cone Q×ℝ_+ consider the two projections: Q×ℝ_+ [r,"π"] [d,swap,"σ"] ℝ_+ Q This cone can be viewed as a warped product ℝ_+ ×_f Q for fℝ_+ →ℝ defined by r↦ r and the metric defined for v∈ T_(r,q)ℝ_+× Q by: ⟨ v,v⟩_(r,q) = dπ(v)^2 + f(r)^2 g_q(dσ(v),dσ(v)). The geodesic equations for such a warped product is given for a geodesic γ = (α,q)∈ℝ_+×_f Q by 0 = ∇_α̇α̇- g(q̇,q̇)(f∘α)∇ f, 0 = ∇_q̇q̇ + 2/f∘αd(f∘α)/dtq̇ , see <cit.>. With our setup, ∇ f = 1 (with the standard metric on ℝ_+), and f∘ r = r, and the result follows. In the present paper we apply the corresponding conical one-dimensional extension r^2 g(v,v)+dr^2 to the group of diffeomorphisms and the space of normalized densities, where g(v,v) is, respectively, the L^2-metric on Diff(M) and the Wasserstein metric on Dens(M). For an arbitrary p∈ℝ, the geodesic equations for a geodesic γ = (q, α) on the cone Q×ℝ_+ with the metric r^2pg(v,v)+dr^2 assume the form: 0 = ∇_q̇q̇ + 2p/αα̇q̇, 0 = α̈- pr^p-1g(q̇,q̇) α^p, of which Theorem <ref> is the special case of p=1, while p=0 corresponds to the direct product metric on the cylinder Q×ℝ. One can also consider the one-parameter extensions r^2p g+dr^2 in infinite dimensions as well. Other hyperbolic and parabolic-type metrics for negative and positive values of p might be useful in problems of optimal transport whenever it is convenient to tune the mass balance. amsplainnat
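As a quick numerical sanity check of the cone geodesic equations above (not part of the original text), one can take the simplest base Q = S^1 with its standard metric: the cone metric r^2 g + dr^2 over the unit circle is the Euclidean plane in polar coordinates, so an integrated geodesic must trace a straight line. The SciPy-based sketch below assumes this flat one-dimensional base, where the covariant derivative reduces to the ordinary second derivative; the initial conditions are arbitrary illustrative values.

# Numerical sanity check of the cone geodesic equations (p = 1) for the base
# Q = S^1 with its standard metric, where the covariant derivative is trivial
# and g(qdot, qdot) = qdot**2.  The cone r^2 g + dr^2 over the unit circle is
# the Euclidean plane in polar coordinates, so the geodesic must be a line.
import numpy as np
from scipy.integrate import solve_ivp

def cone_geodesic(t, state):
    q, qdot, a, adot = state          # angle, angular velocity, radius alpha, radial velocity
    return [qdot,
            -(2.0 / a) * adot * qdot,  # 0 = q'' + (2/alpha) alpha' q'
            adot,
            qdot ** 2 * a]             # 0 = alpha'' - g(q', q') alpha

sol = solve_ivp(cone_geodesic, (0.0, 2.0), [0.0, 0.8, 1.0, 0.3],
                dense_output=True, rtol=1e-9, atol=1e-9)
q, _, a, _ = sol.sol(np.linspace(0.0, 2.0, 200))
x, y = a * np.cos(q), a * np.sin(q)    # image of the geodesic in Cartesian coordinates

# Collinearity test: the cross products below vanish exactly for a straight line.
dev = np.abs((x - x[0]) * (y[-1] - y[0]) - (y - y[0]) * (x[-1] - x[0]))
print("max deviation from a straight line:", dev.max())   # tiny, of the order of the integration error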
http://arxiv.org/abs/2307.05603v1
20230710203941
Can You Improve My Code? Optimizing Programs with Local Search
[ "Fatemeh Abdollahi", "Saqib Ameen", "Matthew E. Taylor", "Levi H. S. Lelis" ]
cs.SE
[ "cs.SE", "cs.LG", "cs.PL" ]
Spin-EPR-pair separation by conveyor-mode single electron shuttling in Si/SiGe Lars R. Schreiber August 12, 2023 ============================================================================== This paper introduces a local search method for improving an existing program with respect to a measurable objective. Program Optimization with Locally Improving Search () exploits the structure of a program, defined by its lines. improves a single line of the program while keeping the remaining lines fixed, using existing brute-force synthesis algorithms, and continues iterating until it is unable to improve the program's performance. was evaluated with a 27-person user study, where participants wrote programs attempting to maximize the score of two single-agent games: Lunar Lander and Highway. was able to substantially improve the participants' programs with respect to the game scores. A proof-of-concept demonstration on existing Stack Overflow code measures applicability in real-world problems. These results suggest that could be used as a helpful programming assistant for programming problems with measurable objectives. § INTRODUCTION Recent advances in large language models and program synthesis have enabled the development of powerful artificial intelligence assistants for computer programmers. For example, Copilot <cit.> can provide an initial solution to a problem if the programmer is unsure of how to approach the problem or auto-complete what the programmer writes to speed up coding. Copilot and other assistants were designed to interact with the programmer throughout the development of the program. This paper considers a setting where the assistant interacts with the programmer only after a working version of the program is available. In this paper's setting, the assistant attempts to improve the programmer's solution with respect to a real-valued, measurable objective function, something systems such as Copilot cannot perform. We introduce Program Optimization with Locally Improving Search (), an intelligent assistant to improve existing programs. leverages the ability of existing synthesizers to generate high-quality (short) programs by treating each line of an existing program as an independent program synthesis task. uses an enumeration algorithm for synthesis, called bottom-up search <cit.>, for each line of the program. Since selects the best solution encountered in each bottom-up search, it can be seen as a hill-climbing algorithm in the program-line space. Despite not using any models for guiding its search, can handle complex programs because it divides the original problem into much smaller sub-problems by considering the synthesis of one line at a time. To evaluate , 27 programmers wrote programs for playing Lunar Lander and Highway, two single-agent games commonly used to evaluate reinforcement learning algorithms. was able to improve the score of all programs written by the participants, often by a large margin. Our results also show that often the modified programs retain most of the structure of the original programs. As a result, the users who wrote the programs are likely to understand 's modifications to their implementations. We also present a proof-of-concept demonstration of 's ability of fixing bugs in 4 simple programs posted on Stack Overflow. 's modified programs can be seen as the result of the work done by an effective human-AI team. 
This is because bottom-up search would not be able to synthesize the resulting programs from scratch, as the programs are long and complex. However, bottom-up search is able to substantially improve human-generated programs. As our results demonstrate, human programmers are unable to write on their own programs of the quality obtained with . These results suggest that can be a helpful assistant to programmers for problems with measurable objectives. This paper makes two contributions. First, it defines a problem setting for intelligent programming assistants where the assistant attempts to improve existing programs with respect to an objective function. Second, it introduces , a system that employs a novel local search algorithm based on a simple brute-force search algorithm. § RELATED WORK is related to intelligent programming assistants, program synthesis, programmatically interpretable policies, and program enhancement algorithms. §.§ Intelligent Programming Assistants Intelligent assistants for programmers are getting popular and have become a popular area of research lately. SnipPy <cit.> is one such tool that allows the programmer to synthesize instructions by defining input-output examples in the context of live programming. Similarly, Blue-Pencil <cit.> is a system that identifies repetitive tasks that arise in programming and suggests transformations for such tasks. reCode <cit.> observes code transformation to identify other places of the code that would require similar changes.  code-completion-statistical introduced a statistical model for code completion and  guo2022learning introduced a model for code completion that leaves “holes” where the model is uncertain. differs from these works in how it assists the programmer. Instead of real-time interactions during the development of the program, we consider the scenario where the programmer provides a complete, compilable version of their program. leverages human-defined code structure to improve the user's implementation with a simple synthesizer. §.§ Program Synthesis The task of synthesizing programs that satisfy a specification is a long-standing problem <cit.> and it has received much attention lately <cit.>. While previous works attempt to improve the synthesis process and generate programs which satisfy given specification, uses program synthesis to optimize existing programs with respect to a given objective function. §.§ Programmatic Policies One way to solve the problems considered in this work is to synthesize programs encoding a policy for solving the tasks. Neurally directed program search (NDPS) <cit.> synthesizes programs while imitating a neural oracle. Viper <cit.> also employs imitation learning to train decision trees encoding policies. In order to provide better search guidance for synthesis, Propel <cit.> trains neural policies that are not “too different” from the synthesized program. Sketch-SA <cit.> is another such system that uses imitation learning to synthesize a sketch of a policy; the policy is synthesized from the sketch by evaluating it directly in the environment. Oracle-free programmatically interpretable reinforcement learning (π-PRL) <cit.> and Bilevel Synthesis (Bi-S) <cit.> bypass the need of an oracle to guide the synthesis of programmatic policies. π-PRL uses a differentiable language and trains the model using policy gradient methods, while Bi-S uses the result of a search in a feature space to guide the search in the programmatic space. 
differs from these algorithms because they were designed to synthesize programs from scratch, while focuses on leveraging the structure of existing programs. §.§ Program Enhancement Refactoring is a well-known program enhancement technique used to improve a program's quality without affecting its external behavior <cit.>. Another way of enhancing a program is the Automated Program Repair (APR) technique which refers to the process of fault localization in software and the development of patches using search-based software engineering and logic rules <cit.>. For instance,  1genprog use genetic programming to develop bug-fixing patches without affecting software functionality. is different from these techniques because a) improves programs with respect to an objective function and its external behavior is likely to change; and b) while fixes unintended programmer mistakes (similar to APR), it is likely to also change sub-optimal parts of the program, improving overall performance. § PROBLEM DEFINITION Rather than using a general-purpose language like Python, which defines a very large program space, we use a domain-specific language (DSL) to define a more constrained space of programs for solving a programming task. A DSL is defined as a context-free grammar (V, Σ, R, S), where V is a finite set of non-terminals, Σ is a finite set of terminals, and R is the set of relations corresponding to the production rules of grammar. S is the grammar's start symbol. An example of a DSL defined by a grammar G is shown below, where V = {S, C, B}, Σ = {c_1, c_2, c_3, b_1, b_2, if-then-else}, R are the relations (e.g., C → c_1), and S is the start symbol. S →if(B) then S else S C C → c_1 c_2 c_3 CC B → b_1 b_2 This DSL allows programs with a single instruction (c_1, c_2, or c_3), or multiple commands using nested if-then-else blocks. Let G be the set of programs (possibly infinite) that can be written with grammar G. Each program p ∈ G is defined by a pair {T, L}, where T is a multiset of non-terminal symbols and L defines a partition of symbols from T into program lines, i.e., L defines how a programmer organizes the symbols in T in a text editor. Note that two programs that have identical functionality could have different partitions L. takes as input a program p ∈ G, and an objective function F (real-valued evaluation of the program), and outputs a program p' ∈ G that is at least as good as p and approximates a solution for max_p ∈ G F(p), assuming a maximization problem. § : A PROGRAMMING ASSISTANT The pseudocode in Algorithm <ref> shows the local search algorithm employs. It receives an existing program p and two time limits, t and t_l, for the overall running time of the search and for the running time allowed to optimize each line of code, respectively, and an evaluation function F. returns a new program, p', that is at least as good as p in terms of F-value. While there is time available to improve the input program, iterates through each line (the for loop in line <ref>) and it attempts to synthesize a program that replaces the code in the i-th line of p such that the objective function F is improved. This is achieved with a call to the synthesizer (line <ref>), which returns a version of p where the i-th line of p is replaced by a program that optimizes F. The synthesizer can return the program unchanged, if its original i-th line returns the best F-value or it exceeds its time limit before finding a better line. 
Lastly, returns the optimized program (line <ref>) if the search reaches a local optimum, i.e., the improved program p has the same F-value as p'. Our system uses size-based bottom-up search (BUS) <cit.> as the synthesizer. BUS was shown to outperform other uninformed enumeration-based synthesizers <cit.>. BUS starts by enumerating the smallest possible programs of a given language. It then uses the smallest programs with the production rules of the DSL to generate larger programs. One can use different metrics of “size” for defining BUS's enumeration procedure. A commonly used metric, which we use in our implementation, is the number of nodes in the abstract syntax tree representing the synthesized programs. That is, in BUS's first iteration it generates all programs whose tree has a single node, then all programs whose tree has two nodes, and so on, until a solution is found. In its first iteration, for the DSL shown in Equation <ref>, BUS generates programs c_1, c_2, c_3, b_1, b_2. Then, in its second iteration BUS generates programs c_1 c_1, c_1 c_2, c_1 c_3 c_2 c_2, c_2 c_1, c_2 c_3, and so on. One advantage of BUS is that, once it finds a solution program, the program is provably the smallest one that solves the problem. Another advantage is that all programs generated in search are executable, which allows one to run them and perform an observational equivalence check (i.e., the search only keeps one of two programs that produce the same set of output values for a given set of input values of interest). §.§ Domain-Dependent Implementation Details We evaluate on programmatic policies for playing games, which are written by human programmers. A programmatic policy is a program encoding a function (policy) that receives a state of a game and returns the action the agent should take at that state. In what follows, we describe 's implementation details. §.§.§ Input-Output Examples For the task of writing programmatic policies for playing games, we use the approach introduced by  pirl to define a set of input-output examples. That is, we train a neural policy that generates a set of input-output pairs: for a set of observations o (input), we store the neural policy's chosen action a (output). We use DQN <cit.> to train a neural policy π for 2000 episodes. We let the agent follow π in the environment for 2000 steps and collect all the observation-action pairs along with their Q-values. §.§.§ Evaluation Function We use two evaluation functions. The function F is given by running the programmatic policy and computing its game score. This evaluation function is computationally expensive, since we need to play the game several times to evaluate a program, due to the stochastic nature of the environments. Instead of computing F for all programs generated in search, we keep a list of the current k-best programs with respect to an action-agreement metric: the number of observations each program correctly maps to the action a neural policy π selects for that observation. The action-agreement metric we use is computed as ∑_o ∈ T1[p(o) = π(o)]/|T|, where T is the set of input-output examples, 1[·] is the indicator function, p(o) and π(o) are the actions returned by the program p and policy π, respectively, for observation o. We evaluate the value of F only for the programs in the k-best set. Once the synthesizer runs out of time, it returns the best program in the set of k best with respect to F, not with respect to the action agreement metric. We use k=20 in our experiments. 
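To summarize the search procedure and its cheap surrogate objective in code, a minimal sketch (not the authors' implementation) is given below: synthesize_line stands in for the bottom-up search over the DSL restricted to a single line, F for the game-score objective, and examples for the observation–action pairs collected from the neural policy; all of these interfaces are assumptions introduced for illustration.

import time

def action_agreement(program, examples):
    # Fraction of (observation, action) pairs on which the candidate program
    # matches the neural policy's action -- the cheap surrogate used to rank
    # candidates before the game score F is computed for the k best.
    return sum(program(obs) == act for obs, act in examples) / len(examples)

def local_search(program_lines, F, synthesize_line, t_total, t_line):
    # Hill climbing in the space of program lines: each line is treated as an
    # independent synthesis task while all other lines are kept fixed, and the
    # search stops at a local optimum or when the time budget is exhausted.
    best, best_score = list(program_lines), F(program_lines)
    deadline = time.time() + t_total
    while time.time() < deadline:
        improved = False
        for i in range(len(best)):
            if time.time() >= deadline:
                break
            new_line = synthesize_line(best, i, F, t_line)   # may return line i unchanged
            candidate = best[:i] + [new_line] + best[i + 1:]
            score = F(candidate)
            if score > best_score:
                best, best_score, improved = candidate, score, True
        if not improved:          # no single line could be improved: local optimum
            return best
    return best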
§.§.§ Highlights Highlights ranks a set of observations according to the largest difference in Q-values for different actions available at a given observation. We employ the idea of highlights to further optimize the computational cost of our evaluation function by using a small number of input-output examples. Instead of collecting a large number of observation-action pairs uniformly at random, we collect the 400 observations ranked most important by Highlights <cit.>. §.§.§ Bayesian Optimization The real numbers n in the DSL (Figure <ref>) are set using Bayesian optimization <cit.>. Bottom-up enumeration in the synthesizer generates programs with the symbol n, later replaced with real values by the optimizer. The optimizer chooses these values while attempting to optimize for the action agreement metric. §.§.§ Restarts The initial program and the set of input-output pairs define the optimization landscape traverses with its hill-climbing algorithm. 's greedy approach to optimization could lead to the algorithm returning locally optimal solutions. An effective strategy for dealing with local optimum solutions is to restart the search from a different starting location in the optimization landscape once the search stops in a local optimum <cit.>. To restart the search and allow for different initial starting conditions, we train a different DQN agent to generate a new set of input-output pairs every time we restart the algorithm. A restart is triggered in Algorithm <ref> when line <ref> is reached and still has time available for synthesis. § USER STUDY EVALUATION This section describes the experimental design of the study.[Our implementation and the data collected in our user study is available at <https://github.com/FatemehAB/POLIS>.] §.§ Problem Domains We use to improve programs written by users to play two games: Lunar Lander and Highway (Figure <ref>). Both games have a game score, which serves as a clear metric for evaluating the quality of the programs. Lunar Lander In this game the player controls three thrusters of a spaceship trying to land on the moon. Each thruster can be either on or off. The game score is maximized if the player does not use the thrusters unnecessarily and gently reaches the landing pad. We use the LunarLander-v2 implementation from OpenAI Gym <cit.>. Highway In this game the player controls a car on a three-lane highway. The game score is higher when the player drives fast, avoids collisions, and spends more time in the rightmost lane. The player can change lanes, increase, or reduce speed. We use the implementation of  highway-env. §.§ User Study Design We developed a web-based system based on HIPPO Gym <cit.> to conduct the user study and advertised it in mailing lists of graduate and undergraduate Computing Science students at our university.[The study was approved by the University of Alberta Research Ethics Office (Pro00113586).] Each participant first electronically signed a consent form, explaining that they would write a program to play a computer game. It also explained that their compensation would be impacted by the game score of their final program; higher game scores would result in higher monetary compensation. The minimum compensation was $15. We used the following formulae to compute the compensation of each participant: 15+ (100+x) × (1/30) and 15 + x × (1/5) for Lunar Lander and Highway, respectively. 
x represents the participants' average game score over 100 and 25 episodes of Lunar Lander and Highway, respectively (an episode is completed when the player finishes landing the spaceship in Lunar Lander or when the player crashes the car or a time limit is reached in Highway). The maximum compensation was capped at $25. After agreeing with the terms of the study, each participant was randomly assigned to one of the two games. Then, they read a tutorial about the assigned game. In the tutorial, we explained the features in each observation passed as an input parameter to the program as well as the actions available to the player. Our tutorial had a few examples with screenshots of the game showing situations where different actions were applied to different observations of the game. The tutorial finished with a multiple-choice question about the game; immediate feedback was provided to the participant showing whether they chose the correct or wrong answer. If an answer was incorrect, the participant would have as many attempts as needed to answer it correctly. Following the game tutorial, each participant read a tutorial about our DSL. The tutorial presented the DSL (Figure <ref>) and explained Boolean and algebraic expressions as well as the programming structures our DSL supports. Similarly to the game tutorial, we provided several examples of programs that can be written in our DSL. The tutorial finished with a multiple-choice question where the participant had to select, among four options, the program that was accepted in our DSL; the participant had as many attempts as needed to answer the question correctly. Before writing a program for playing the game, the participant had the chance to play the game using their keyboard for a maximum of 10 minutes. Our graphical user interface showed, in real-time, the observation values and the game score each participant obtained for each run of the game. The participant could choose to stop playing the game at any time (within the 10 minutes allowed by our system) and start writing their program. Our goal with this step of the study was to allow the participant to develop a strategy for playing the game, something they could try to encode in their programs. We provided the participants with a Python-like editor, where the keywords of the DSL are highlighted. The editor also had an example of a simple program for playing the game. For Highway, the initial program moves the car to the right lane if the car is not already there; the player takes no action otherwise. Our interface also allowed the participants to go back to the tutorials while writing their program. Our interface also showed the game so that participants could execute their program and see its behavior. Similarly to the interface where the participant played the game, we showed the observation values and the game scores in real-time. The participant could stop the simulation at any time to inspect the values of the observations. We stored all programs the participants evaluated so that they could be used as input for our evaluation. The total time allowed for the experiment was 60 minutes. The participant could submit the final version of their program at any time within the 60-minute limit. We used the final program submitted to compute the participant's monetary compensation. The participant then answered demographic questions before leaving. § USER STUDY RESULTS In our results, we abbreviate standard deviation as SD and interquartile range as IQR. 
§.§ Demographics 40 people consented to participate and 26 completed the survey. The average age was 20.96 (SD of 4.13), with their ages ranging from 18 to 40; 20 of the participants identified themselves as male, 5 as female, and 1 withheld gender information. Most (20) had received or were pursuing undergraduate education, 4 had completed high school, and 2 were pursuing post-secondary training. Most (25) had not done any form of game artificial intelligence (AI) research and about half of them had not taken any AI courses. More than one-third of the participants (10) rarely or never played computer games and others occasionally or often played computer games. We asked about the participants' programming experience: 22 had more than one year of experience and 4 had less than a year. We also asked about their knowledge of Python, how hard it was to write a program in our DSL, and how hard it was to write a program for solving the game. We used a 5-point, Likert-like scale: 1 being “novice” in Python and “very easy” for writing programs, to 5 being “expert” in Python and “very hard” for writing programs. The median response to these three questions were: 3 (IQR = 1), 2.5 (IQR = 2), and 4 (IQR = 1), respectively. On average, the participants had some experience in Python, and found it easy to use our DSL, but found it hard to write a program to play the game. To evaluate we considered the data from those who submitted at least one working program (different from the example program we provided), resulting in a total of 27 participants (one of them did not complete the survey). §.§ Computational Results <Ref> show the results for Lunar Lander and Highway, respectively. Here, each participant is represented by an ID. The game score of both the participants' and 's programs is an average of the score the program obtained in 100 of Lunar Lander and 25 episodes of Highway. The game score shown for is the average over 10 independent runs of the system. Each run of can result in different game scores due to the random initialization of the neural policy used to generate input-output pairs. We also present the standard deviation, minimum, and maximum game scores across these 10 independent runs. We performed 5 restarts for each run of the system; the result of a run is the best program encountered across the 5 restarts. The average score we present for both participants and are for the program that achieved the highest average score throughout the study; the program the participant submits is not necessarily the program with the highest score. The number of lines of code (LoC) indicates how many lines the original program has. In both tables, we sort the rows according to the participant's program game score, from lowest (top) to highest (bottom). The number of edited lines (Edited LoC) refers to the average number of lines that modifies in the restart that resulted in the best program of a given run. We also show the average number of car collisions in Highway (Hits). 's average score is higher for all programs written in our study. Even the minimum value across the 10 independent runs is often much higher than the score of the program the participants wrote. A Wilcoxon signed-rank test pointed to a large effect size for the average results of both domains: 0.624 for Lunar Lander (p<4.9 × 10^-4) and 0.621 for Highway (p < 3.1 × 10^-5). 
For Lunar Lander, provided quite significant improvements to some of the participants' scores (e.g., IDs 3 and 11), but for some others the improvements were minor (e.g., IDs 4 and 5). The number of lines edited for the programs of participants 4 and 5 is much smaller than for the other programs, which indicates that quickly reached a local minimum for these programs. Interestingly, for Highway, improved the performance of all programs to an average game score above 33 (the best program a participant wrote achieved a score of 35.71). Moreover, substantially reduced the number of collisions, in some cases from more than 20 to less than 3 collisions. Since does not change the overall structure of the program, we conjecture that the participants identified the program structure needed to play Highway, which makes the programs for that game more amenable to 's improvements. The Lunar Lander results might be pointing to a limitation of which is its inability to improve programs that need simultaneous changes to more than one line of code. §.§ Representative Program The program shown in Figure <ref> is a representative program written by one of the participants of our study for the Highway domain; we refer to this program as p in this section. This program obtains an average game score of 6.8 over 25 episodes. Figure <ref> shows 's improved program for p, which we will refer to as p'. We lightly edited p' for readability. 's p' obtains an average game score of 39.0 over 25 episodes, a major improvement over the original program. The participant of our study made a mistake while writing the first if-statement of p as the Boolean condition checks whether o[5] is equal to o[1] and if o[5] - o[1] > 200; the two parts of the expression cannot be simultaneously true as once o[5] is equal to o[1], we have that o[5] - o[1] is zero. As a result, the player never slows down (action 4). The participant's intention with this if-statement was likely to slow the car down if the player's car was on the same lane as the nearest car on the road (the condition “o[5] is equal to o[1]” returns true if the cars are on the same lane). not only fixed the problem with the Boolean condition in the participant's program, but also changed the player's strategy. Instead of slowing down if another car is on the same lane, p' only slows down when changing lanes; o[3] is the car's velocity on the y-axis, which is different from zero when the car is changing lanes. Since the car is changing lanes, o[1] cannot be zero, as o[1] is zero when the car is in the leftmost lane. Unlike p, p' changes lanes when there is another car in the same lane. This is encoded in the elif structure of the program, which can be translated as if the nearest car is on the same lane (o[5] is equal to o[1]) and the car is not already in the rightmost lane (line 7), then move to the right lane (action 2; line 8). The agent will move to the left lane if already in the rightmost lane (action 0; line 10). 's improved program prefers to drive in the rightmost lane if the car driving in the same lane is not the closest (i.e., there is still time to change lanes). The program maximizes its score by driving in the rightmost lane. Finally, 's program does nothing (action 1) if it is not changing lanes and there is no car in front of it. 's strategy is a cautious one as the car slows as it changes lanes, but never accelerates. This cautious strategy achieves a much higher game score than the participant's program. 
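The improved program p' itself appears only as a figure in the original paper; purely as an illustration of the prose description above, a plausible reconstruction might look as follows. The observation indices and action numbers are taken from the text, while in_rightmost_lane is a hypothetical placeholder, since the exact rightmost-lane test is not spelled out.

# Illustrative reconstruction of the improved Highway policy p' described in
# the text -- not the verbatim program from the paper's figure.  Following the
# prose: o[1] relates to the player's lane position, o[3] is the car's velocity
# on the y-axis, o[5] relates to the lane of the nearest car; actions are
# 0 = move left, 1 = do nothing, 2 = move right, 4 = slow down.
def improved_highway_policy(o, in_rightmost_lane):
    if o[3] != 0:                       # changing lanes -> drive cautiously
        return 4                        # slow down
    elif o[5] == o[1]:                  # nearest car is in the same lane
        if not in_rightmost_lane(o):    # placeholder: exact test not given in the prose
            return 2                    # move to the right lane
        else:
            return 0                    # move to the left lane
    else:
        return 1                        # otherwise do nothing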
§ PROOF OF CONCEPT: STACK OVERFLOW To demonstrate that is general and can be applied to problems other than games and also to languages with more complex structures such as loops, we collected four programs with implementation problems on Stack Overflow and translated them to our Python-like language so that could fix them. Three of the four programs are sorting algorithms; the last program attempts to compute the cumulative sum of a set of numbers. Figure <ref> shows the DSL used in this experiment. The input parameter indicates that the program can accept and return a variable number of arguments depending on the problem being solved. Compared to the DSL used in the user study, this DSL accepts more data types (arrays) and more complex structures (loops). corrected all three sorting programs with the evaluation function that simply counts the number of input examples that are correctly mapped to the desired output. The problems with the Stack Overflow sorting programs were simple (e.g., one of the programs used instead of in the Boolean expression of a while loop) and was able to fix them by changing a single line of the original programs. The fourth program we collected on Stack Overflow attempts to solve a “cumulative sum problem,” which is defined as follows. Given an array of numbers, the goal is to replace each element with index i in the array with the sum of all elements with index j ≤ i. For example, the expected output for array [4,3,6] is [4, 7, 13]. Figure <ref> shows the incorrect implementation of a program for solving the cumulative sum problem () and 's corrected version for the problem (). The cumulative sum program had two implementation errors: the Boolean expression of the while-loop and the list used in the operation within the loop. could not fix them by simply using the number of input examples correctly mapped to the desired outputs. Instead, we used an F function that computed the sum of the absolute differences between each element of the list the program produced as output and the desired list of numbers. Using this F function, corrected the program, as shown in Figure <ref>. In this proof-of-concept experiment, we manually generated the input-output examples, similar to how a programmer would come up with a set of test cases for their program. Such a set could possibly be used to define 's F function, so it can attempt to correct the implementation errors. § CONCLUSIONS In this paper, we present , a system capable of improving existing programs with respect to a measurable, real-valued metric. employs a simple synthesizer within the loop of its local search. divides the problem of improving an existing implementation into smaller sub-problems by considering each line of the program as an independent program synthesis task. This way, employs a bottom-up search synthesizer that attempts to replace a single line of the original program at a given time, while all the other lines remain unchanged. We conducted a user study where 27 participants wrote programs to play two games. was able to improve the performance of the programs of all participants, often by a large margin. Since performs local changes with an enumerative synthesizer, its modified program shares the same structure as the original program. The similarity of the programs allowed us to understand how was able to improve the performance of a representative program from our study. 
We also performed a proof-of-concept experiment with four programs collected from Stack Overflow to demonstrate that POLIS can also be applied to other application domains and handle more complex languages such as those with loops. POLIS was able to correct all four programs. The results of our experiments suggest that POLIS can be used as a programming assistant in scenarios where one is interested in improving an existing program with respect to a measurable, real-valued metric. § ACKNOWLEDGEMENTS This research was supported by Canada's NSERC and the CIFAR AI Chairs program. The research was carried out using computational resources from Compute Canada. Part of this work has taken place in the Intelligent Robot Learning (IRL) Lab at the University of Alberta, which is supported in part by research grants from the Alberta Machine Intelligence Institute (Amii); a Canada CIFAR AI Chair, Amii; Compute Canada; Huawei; Mitacs; and NSERC. We thank the anonymous reviewers for their feedback.
http://arxiv.org/abs/2307.04114v1
20230709080743
FILM: How can Few-Shot Image Classification Benefit from Pre-Trained Language Models?
[ "Zihao Jiang", "Yunkai Dang", "Dong Pang", "Huishuai Zhang", "Weiran Huang" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CL", "cs.CV", "cs.MM" ]
Neutron scattering and muon-spin spectroscopy studies of the magnetic triangular-lattice compounds A_2La_2NiW_2O_12 (A = Sr, Ba) T. Shang Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023 ================================================================================================================================ Few-shot learning aims to train models that can be generalized to novel classes with only a few samples. Recently, a line of works are proposed to enhance few-shot learning with accessible semantic information from class names. However, these works focus on improving existing modules such as visual prototypes and feature extractors of the standard few-shot learning framework. This limits the full potential use of semantic information. In this paper, we propose a novel few-shot learning framework that uses pre-trained language models based on contrastive learning. To address the challenge of alignment between visual features and textual embeddings obtained from text-based pre-trained language model, we carefully design the textual branch of our framework and introduce a metric module to generalize the cosine similarity. For better transferability, we let the metric module adapt to different few-shot tasks and adopt MAML to train the model via bi-level optimization. Moreover, we conduct extensive experiments on multiple benchmarks to demonstrate the effectiveness of our method. § INTRODUCTION Deep neural networks <cit.> have achieved remarkable success in many fields. However, training deep neural networks requires a large number of labeled data, which can be expensive and time-consuming to obtain. For instance, in medical imaging, obtaining labeled data requires expert radiologists to annotate images. This limits the application of deep learning models in real-world scenarios. In contrast, humans possess the ability to recognize and classify objects of unseen categories with only a few examples. This highlights the potential value of few-shot learning <cit.>, where models are trained on base classes and can be generalized well to novel classes with limited amounts of samples. Previous works mainly focus on image classification tasks, and most of them adopt the meta-learning paradigm <cit.>. Recent works consider leveraging additional information from other modalities such as text to enhance the performance of few-shot learning. In particular, some methods <cit.> adopt static word embedding models (e.g., GloVe <cit.>) to extract textual representations of class names and use them to adjust visual prototypes or classifiers. With the appearance of general language models such as BERT <cit.> and GPT <cit.>, another line of works <cit.> adopt public pre-trained language models (PLMs) to extract more comprehensive semantic information from class names. However, these works still focus on improving existing modules of the standard few-shot learning framework (e.g., visual prototypes and feature extractors), which confines the full utilization of powerful PLMs in few-shot learning. Inspired by the success of vision-language models <cit.> trained by contrastive learning, we explore the idea of aligning visual features and textual embeddings for few-shot image classification in this paper, where textual embeddings are extracted by a public PLM from class names following the setting of <cit.>. However, there are two main factors making this alignment challenging. 
Firstly, unlike vision-language models that have sufficient pairs of image and textual descriptions available for model training, we only have the class name of each image instead of a rich description. Secondly, in contrast to vision-language models where both visual and textual encoders are learnable to align embeddings, our textual encoder inherits from a puublic PLM trained on uni-modal text data. This leads to totally different structures of textual embedding spaces and thus makes the alignment between visual and textual features difficult. For instance, if we directly align visual features and textual embeddings, the probability[Here probabilities mean the elements outputted by softmax function.] of a sample image being assigned to its true label is extremely low (see blue bars in Figure <ref>). This indicates that the visual feature of an image is hard to approach the corresponding text embedding of its true label. In this paper, we propose a novel framework (Figure <ref>) to boost few-shot learning by means of public PLMs. To bridge the gap between visual and textual modalities, we carefully design a textual branch of our framework and introduce a metric module to measure the similarity between visual and textual embeddings. The textual branch first incorporates class labels into our hand-crafted prompt template containing a [MASK] token and then inputs the filled sentence to a PLM. The PLM transforms the input sentence into a hidden vector sequence and the final textual embedding is extracted from the vector corresponding to the [MASK] token. Meanwhile, the visual feature is obtained by a standard visual encoder. After that, we compute the similarities between visual features and textual embeddings through the proposed metric module, and send them into the contrastive loss. For better transferability on novel classes, we let the metric module adapt to different few-shot tasks and adopt Model-Agnostic Meta-Learning (MAML) <cit.> to train the model via bi-level optimization. Moreover, we conduct extensive experiments on multiple benchmarks to demonstrate that the proposed method significantly outperforms the state-of-the-art few-shot learning methods based on PLMs. The main contributions of this paper can be summarized as follows. * We propose a novel few-shot learning framework that leverages semantic information extracted by a pre-trained language model based on contrastive learning. * We carefully design a textual branch of the framework and introduce a metric module to generalize the similarity measure. * The metric module is designed to be adaptive to different few-shot tasks for better transferability, and MAML is adopted to train the model via bi-level optimization. * We conduct extensive experiments on multiple benchmarks with different domains to demonstrate the effectiveness of our method. § RELATED WORK Few-shot Learning. In general, few-shot learning methods are mainly divided into two categories: metric-based methods and optimization-based methods. Metric-based methods aim to map samples into an appropriate embedding space on the basis of certain distance metrics. Most previous methods use task-agnostic distance metrics, e.g., cosine similarity distance <cit.>, Euclidean distance <cit.>, CNN relation module <cit.>, and Earth Mover’s Distance <cit.>. Additionally, several methods <cit.> involve learning task-specific distance metrics, which can be adjusted for different tasks. 
Optimization-based methods <cit.> aims at learning optimal initial model parameters on base classes and quickly fine-tune them on novel classes with a few support examples. Our paper generalizes the similarity measure by the proposed metric module, and uses MAML <cit.> to train the model. Few-shot Learning with Semantic Information. Recent works on few-shot learning start to utilize semantic information from class labels to enhance few-shot learning. AM3 <cit.> proposes an adaptive modality mixture mechanism to model prototype representation as a combination of visual features and language semantic features. KTN <cit.> learns classifiers by fusing visual information and knowledge information acquired from a knowledge graph and word embeddings with a semantic-visual mapping network based on Graph Convolutional Network <cit.>. VS-Alignment <cit.> introduces a contrastive alignment between visual and semantic features as an additional objective. Semantic Prompt <cit.> considers semantic information as prompts to tune the ViT <cit.> feature extractor. All these methods leverage semantic features as auxiliary information to adjust visual prototypes, classifiers, or feature extractors. In contrast, we propose a new few-shot learning framework to directly align visual and textual embeddings via contrastive learning. Contrastive Learning. Contrastive learning is a popular method in self-supervised representation learning. It learns representations by pulling positive samples close and driving negative samples away from them in the latent embedding space with a contrastive loss. A set of previous works have shown the excellent performance of contrastive learning in computer vision <cit.> and natural language processing <cit.> tasks. Furthermore, recent works <cit.> apply contrastive learning to multi-modal settings by aligning image-text pairs in the embedding space. Our work introduces contrastive learning to few-shot learning, and proposes a learnable metric module to make aligning visual features and textual embeddings possible. § PROBLEM DEFINITION Few-shot learning involves two disjoint class sets: a base class set 𝒞_base classes and a novel class set 𝒞_novel classes. Sufficient labeled samples are provided for each base class, while abundant unlabeled samples and only a few labeled samples are provided for each novel class. Few-shot learning targets at classifying unlabeled samples from novel classes through training on all the given labeled samples. Previous works usually formulate the few-shot learning problem as N-way K-shot classification, which denotes a classification task among N classes with K labeled samples available for each class. In addition, given a fixed pre-trained language model, we use bimodal contrastive learning to leverage the semantic information extracted by it. Concretely, for each embedded sample image z and N embedded class labels {t_1,t_2,…,t_N} in a N-way K-shot classification task, contrastive learning adjusts the embedding space through the following widely-used contrastive loss <cit.> (using cosine similarity as an example): ℒ = -logexp(z· t_+/τ)/∑^N_i=1exp(z· t_i/τ), where t_+ is the embedded true label of the sample image and τ is a temperature hyper-parameter. Meta-learning paradigm <cit.> is commonly used to solve the few-shot learning problem, which trains and evaluates the model with the episodic mechanism. The standard meta-learning paradigm contains two stages: meta-training and meta-testing. 
In each episode of the meta-training stage, a N-way K-shot M-query classification task 𝒯=(𝒮,𝒬) is constructed with samples from the base classes. We first randomly select N classes from 𝒞_base as 𝒞_𝒯. For each class, we randomly sample K support images and M query images. Then we form the support set 𝒮={(x_i,y_i)|y_i∈𝒞_𝒯,i=1,2,…,N× K} and the query set 𝒬={(x_i,y_i)|y_i∈𝒞_𝒯,i=1,2,…,N× M} with the support images and the query images respectively, where x_i is the i-th sample image and y_i is the class label of x_i. To learn an appropriate embedding space, bi-level optimization is performed on 𝒮 and 𝒬 respectively, utilizing a contrastive loss. In each episode of the meta-testing stage, a classification task is built on the novel classes in a similar way. The support set is formed with a few label samples, while the query set is sampled from the unlabeled samples. After adapting to the novel classes by minimizing the contrastive loss on the support set, the model is used to predict class labels for the sample images in the query set. § METHOD We introduce our method of Few-shot Image classification with pre-trained Language Models (FILM) in this section. The overall framework is illustrated in Figure <ref>, which consists of three modules: a textual branch, a visual branch, and a metric module. For each episode, the textual branch extracts textual embeddings from class labels, while the visual branch extracts visual embeddings from support and query images. Moreover, the metric module computes the similarity score matrix between textual and visual embeddings from these two branches. In addition, we utilize a training strategy based on MAML algorithm to train the model via bi-level optimization. §.§ Textual Branch In this section, we explain how we design the textual branch to get textual embeddings from class labels. The textual branch comprises a text-based pre-trained language model (PLM) and a language model head. During meta-training and meta-testing, the PLM is frozen while the language model head is tuned for the downstream classification tasks. In our study, we mainly use the masked language model as the PLM. Notice that PLMs mainly take sentences rather than single words or phrases as input during the pre-training stage. Therefore, to bridge the gap between the pre-training and downstream tasks, for each class label y_i, we insert it into a hand-crafted prompt template and get y_i^prompt as the input of the PLM. The token sequence of y_i^prompt is first converted to a token embedding sequence through a token vocabulary. The input embedding sequence is calculated by summing the corresponding token embeddings and positional embeddings. Then PLM transforms the input embeddings into a sequence of hidden vectors. Two straightforward ways to get the textual embedding from the output hidden vector sequence are respectively: (1) taking the average vector of the output vector sequence as the textual embedding; (2) taking the hidden vector of the [CLS] token as the textual embedding. To make textual embeddings more relevant to the visual descriptive information of the corresponding categories, we design a prompt template with one [MASK] token as y_i^prompt = [CLS] The appearance ofy_i is [MASK] . [SEP] and extract the textual embedding by sending the hidden vector of the [MASK] token to the language model head. 
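As a concrete illustration of this textual branch, the sketch below uses a masked language model from HuggingFace Transformers (roberta-base as an example PLM, whose tokenizer uses <s>/</s>/<mask> in place of the [CLS]/[SEP]/[MASK] notation above); the 768-to-640 linear head follows the dimensions used in the experiments, and the exact preprocessing is an assumption rather than the authors' code.

# Sketch of the textual branch: fill the prompt, run the frozen PLM, take the
# hidden vector at the mask position, and map it through a tunable linear
# "language model head".
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
plm = AutoModel.from_pretrained("roberta-base").eval()   # kept frozen
lm_head = torch.nn.Linear(768, 640)                      # tuned for the downstream task

def textual_embedding(class_name):
    prompt = f"The appearance of {class_name} is {tokenizer.mask_token}."
    enc = tokenizer(prompt, return_tensors="pt")         # special tokens added automatically
    with torch.no_grad():                                # the PLM itself is not updated
        hidden = plm(**enc).last_hidden_state            # (1, seq_len, 768)
    mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    return lm_head(hidden[0, mask_pos])                  # 640-dim textual embedding

t_cat = textual_embedding("cat")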
In this way, the extraction of textual embeddings is treated as a masked language modeling task, which makes downstream classification tasks more consistent with the pre-training of the PLM. The comparison among different designs of textual branches will be shown in Table <ref> later. §.§ Metric Module Inspired by vision-language models trained by contrastive learning, we explore aligning visual and textual modalities for few-shot image classification. However, directly aligning visual features and textual embeddings extracted by text-based PLM with cosine similarity has a poor effect in few-shot setting. The blue bars in Figure <ref> show that the probability of a sample image being assigned to its true label is extremely low if we directly align the visual and textual embeddings. In this paper, we introduce a metric module to generalize the similarity measure between visual features and textual embeddings. Moreover, we let the metric module adapt to different few-shot tasks for better transferability on novel classes. Specifically, we define f_θ_I as the image encoder with learnable parameters θ_I to transform each sample image x_i into a feature map z_i = f_θ_I(x_i). Textual branch f_θ_T with learnable parameters θ_T is used to extract the textual embedding t_y_i = f_θ_T(y_i) from each class label y_i. We generalize the similarity measure between visual embeddings z and textual embeddings t as a learnable function M(z, t) called metric module, whose parameters are denoted as θ_M. For example, the metric module could be a bilinear function M(z, t)=z^⊤θ_Mt (degenerating to the cosine similarity if θ_M is the identity matrix) or a neural network, e.g., M(z, t)=MLP_θ_M([z,t]). During meta-testing, we first fine-tune the task-specific parameters θ_M on the support set 𝒮. Then we use the similarity score matrix computed by the metric module as a reference to infer labels for sample images in the query set 𝒬. As is shown in Figure <ref>, the correct classification probabilities of our method are significantly higher than that of direct alignment, which means that our metric module can effectively align the visual features and textual embeddings. §.§ Loss Function We formulate the learning objective as a contrastive loss (Eq (<ref>)), which pulls together images and corresponding class labels while pushing away unmatched pairs in the embedding space. Moreover, we aim to train a model to maximize the similarity between visual features and textual embeddings for matching (image, text) pairs while reducing the similarity for non-matching pairs. Specifically, for a classification task 𝒯=(𝒮,𝒬), we calculate the contrastive loss on the support set 𝒮 and the query set 𝒬 respectively. On the support set, the contrastive loss ℒ_𝒮 is computed with all the support samples, which has a formulation as: ℒ_𝒮 = -1/|𝒮|∑_x_i∈𝒮logexp( M(z_i, t_y_i) /τ )/∑_c∈𝒞_𝒯exp(M(z_i, t_c)/τ ), where z_i is the visual embedding of the i^th support image x_i, t_y_i is the textual embedding of the true label y_i corresponding to x_i, t_c is the textual embedding of the class label c, and M(·, ·) is the similarity measure. On the query set, the contrastive loss ℒ_𝒬 has almost the same formulation as ℒ_𝒮, except it is computed with all the query samples of 𝒬. §.§ Training Strategy In this work, we incorporate the Model-Agnostic Meta-Learning (MAML) <cit.> algorithm to train the model via bi-level optimization as our training strategy. 
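Before the bi-level optimization is described in detail, the following sketch (an illustration, not the authors' code) shows the bilinear form of the metric module and the episode-level contrastive loss it feeds; the task-specific parameters adapted in the inner loop discussed next correspond to theta_M here, and the 640-dimensional embedding sizes mirror the experimental setup.

# Bilinear metric module M(z, t) = z^T theta_M t and the episode-level
# contrastive loss; initializing theta_M at the identity recovers plain
# dot-product similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearMetric(nn.Module):
    def __init__(self, dim_visual=640, dim_text=640):
        super().__init__()
        self.theta_M = nn.Parameter(torch.eye(dim_visual, dim_text))

    def forward(self, z, t):
        # z: (B, dim_visual) visual embeddings, t: (N, dim_text) class embeddings
        return z @ self.theta_M @ t.T                    # (B, N) similarity scores

def contrastive_loss(scores, labels, tau):
    # Equivalent to the negative mean log-softmax of scores / tau at the true labels.
    return F.cross_entropy(scores / tau, labels)

# Example on a 5-way 5-shot support set with dummy embeddings:
metric = BilinearMetric()
z_support, t_classes = torch.randn(25, 640), torch.randn(5, 640)
y_support = torch.arange(5).repeat_interleave(5)
loss_S = contrastive_loss(metric(z_support, t_classes), y_support, tau=1.0)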
Our training strategy aims to learn a good model initialization (through the outer-loop optimization), which can be quickly adapted to novel tasks given a few examples (through the inner-loop optimization). The whole algorithm for our training strategy is outlined in Algorithm <ref>. First, we randomly initialize the parameters of the image encoder θ_I, language model head θ_T, and metric module θ_M. For each task instance 𝒯_j from the distribution p(𝒯), we divide 𝒯_j into a support set 𝒮_j and a query set 𝒬_j. To make the metric module task-specific, we create copies of θ_M as the adapted parameters θ_M^'. In the inner loop, we adapt the model to the current task 𝒯_j by updating θ_M^' with a number of gradient descent steps on the support set while keeping θ_I, θ_T and θ_M fixed. In the outer loop, θ_M^' are utilized to evaluate the performance of the adapted model on the query set. Specifically, we compute the loss on the query set with θ_I, θ_T, θ_M^' and perform gradient descent with respect to all the model parameters θ = {θ_I, θ_T, θ_M}. The optimization objective of the meta-training stage is to learn a good initialization across tasks. For example, when using one gradient update in the inner loop, the optimization objective can be formulated as follows: min_θ∑_𝒯_j ∼ p(𝒯)ℒ_𝒬_j (θ_I, θ_T, θ_M -α∇_θ_Mℒ_𝒮_j(θ_I, θ_T, θ_M)), where ℒ_𝒮_j and ℒ_𝒬_j denote the loss functions that evaluate the performance on the support and query sets, respectively, and α is the learning rate of the inner loop. § EXPERIMENTS §.§ Setup Datasets. We experiment on three general object recognition datasets, i.e., miniImageNet, tieredImageNet and CIFAR-FS, and one fine-grained categorization image classification dataset, i.e., CUB-200-2011. The miniImageNet dataset was proposed in <cit.> as a benchmark for few-shot image classification tasks. It contains a subset of 100 classes in the ImageNet <cit.> dataset, where 64 classes are used for training, 16 classes for validation, and 20 classes for testing. The tieredImageNet dataset <cit.>, which is also derived from the ImageNet <cit.> dataset, contains 351 classes for training, 97 classes for validation, and 160 classes for testing. The CIFAR-FS dataset is built upon the CIFAR-100 <cit.> dataset. Following the recent work of <cit.>, we use the same training/validation/testing splits consisting of 64/16/20 classes respectively. CUB-200-2011 (CUB) <cit.> is a dataset for fine-grained bird species classification tasks consisting of 100/50/50 classes for training/validation/testing splits respectively. We also evaluate the domain transferability of our method by training on the miniImageNet dataset and then testing on the CUB dataset. Architecture. For the visual branch, following previous works <cit.>, we use ResNet-12 as our image encoder, which consists of four residual blocks. Each block contains three 3×3 convolutional layers and a 2×2 max-pooling layer. Similar to <cit.>, we adopt DropBlock as the regularizer and set the number of filters to (64, 160, 320, 640). We apply a global average pooling layer after the last residual block. The backbone network takes images with a spatial size of 84×84 as input and outputs 640-dim support and query visual embeddings. To extract comprehensive semantic information from class names, we adopt RoBERTa-base <cit.> as our text-based pre-trained language model, which is trained on large-scale corpora and available for public use.
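Connecting the bi-level optimization above with these components, one meta-training episode might look as sketched below. This is a schematic single-task version only: the module names (`image_enc`, `text_branch`, `metric`) are the hypothetical objects from the previous sketches, and the default hyper-parameters follow the settings reported later (inner-loop learning rate 0.5, 25 inner steps, τ = 0.2/0.1 in the inner/outer loop for the 5-shot setting); it is not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def meta_train_episode(image_enc, text_branch, metric, support, query, class_names,
                       inner_lr=0.5, inner_steps=25, tau_in=0.2, tau_out=0.1):
    """Schematic single-task episode of the bi-level optimization."""
    (xs, ys), (xq, yq) = support, query        # labels are episode-local indices in 0..N-1
    t = text_branch(class_names)               # (N, 640) textual embeddings
    zs, zq = image_enc(xs), image_enc(xq)      # visual embeddings of support / query images

    # Inner loop: adapt a task-specific copy of the metric parameters on the support set,
    # keeping theta_I, theta_T and the original theta_M fixed.
    W = metric.W.clone()
    for _ in range(inner_steps):
        loss_s = F.cross_entropy((zs @ W @ t.t()) / tau_in, ys)
        (grad_W,) = torch.autograd.grad(loss_s, W, create_graph=True)
        W = W - inner_lr * grad_W              # adapted parameters theta_M'

    # Outer loop: evaluate the adapted metric on the query set; backpropagating through the
    # inner updates reaches all parameters theta = {theta_I, theta_T, theta_M}.
    loss_q = F.cross_entropy((zq @ W @ t.t()) / tau_out, yq)
    return loss_q   # the caller averages over a meta-batch of tasks and takes an SGD step
```

In practice the returned query losses are summed over tasks 𝒯_j ∼ p(𝒯) before the outer gradient step, as in Algorithm <ref>; the remaining architectural details are specified next.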
The language model head is a linear layer, which transforms 768-dim hidden vectors into 640-dim textual embeddings. In addition, we use the bilinear form of our metric module. Implementation Details. Following <cit.>, we first pre-train the image encoder for 200 epochs on the miniImageNet, CIFAR-FS and CUB datasets, and 100 epochs on the tieredImageNet dataset. Then we adopt the episodic training procedure under 5-way 1-shot and 5-shot settings. In each episode, 16 unlabeled query images per class are used for the meta-training and meta-testing phases. We use the SGD optimizer with a momentum of 0.9 and a weight decay of 5e-4. The outer-loop learning rate is initialized as 1e-3 on the miniImageNet, CIFAR-FS, and CUB datasets and 1e-4 on the tieredImageNet dataset. The inner-loop learning rate is initialized as 0.5 on all four datasets. The number of inner-loop update steps is set to 25. Our model is meta-trained for 80 epochs on all datasets. The hyper-parameter τ is set to 1 for the 1-shot setting and 0.2 for the 5-shot setting in the inner loop, and to 0.1 in the outer loop. To ensure the stability of the evaluation results, we test 1,000 episodes and report the average performance with 95% confidence intervals. We conduct experiments with an NVIDIA GeForce RTX 4090 GPU. §.§ Comparison with State-of-the-Art General Object Recognition and Fine-Grained Categorization. For fair comparisons, we compare with other methods using the same backbone or similar methods in both 5-way 1-shot and 5-way 5-shot settings on the miniImageNet, tieredImageNet, CIFAR-FS and CUB datasets. As shown in Table <ref>, our method is superior to existing methods and achieves the best performance. Compared with previous methods that leverage semantic information from class names, such as KTN <cit.>, AM3 <cit.>, TRAML <cit.> and Vs-Alignment <cit.>, our method improves 1-shot accuracy by 2.42% and 5-shot accuracy by 4.41% on miniImageNet. Furthermore, our method outperforms AM3 <cit.> by 3.88% and 4.41% in the 1-shot and 5-shot settings on tieredImageNet, respectively. According to Table <ref>, our method outperforms MetaOptNet <cit.> by 4.99% and 3.06% in the 1-shot and 5-shot settings, respectively, on the CIFAR-FS dataset. In addition, on the CUB dataset, our method surpasses all the competitors, including RE-Net <cit.>, which previously achieved the best result. One observation worth highlighting is that our method not only outperforms traditional methods based on meta-learning but is also superior to methods using textual information on the four benchmark datasets. These results validate the effectiveness of our proposed few-shot learning framework, which can leverage semantic information well in few-shot image classification tasks. Evaluation on Cross Domain and Larger Shots. To evaluate the cross-domain transferability of different few-shot learning methods, we train them on the source-domain miniImageNet dataset and test them on the target-domain CUB dataset. This setting is challenging due to the domain gap between the training and testing datasets. The results are reported in Table <ref>, showing that our method has competitive performance and obtains consistent improvements in the cross-domain setting. This indicates the transferability of our method in a situation where the meta-testing tasks are entirely different from the meta-training tasks. Furthermore, we evaluate the performance when the number of shots increases (e.g., 10-shot, 30-shot, and 50-shot) in Table <ref>.
The results show that our method becomes even more effective when more (image, text) pairs are available for the novel classes. These comparisons demonstrate that our method has more robust transferability, meaning it works well in cross-domain and larger-shot scenarios. §.§ Ablation Study In this subsection, we empirically show the effectiveness of each component. To investigate the effects of our designed textual branch, we try different extraction methods and prompt templates. Moreover, we conduct extensive ablation studies to verify the effectiveness in the absence of the metric module and visualize our method on the miniImageNet and tieredImageNet datasets. Analysis of the Textual Branch. To evaluate the effect of our textual branch, we test different extraction methods (i.e., “Avg”, “[CLS]”, and “[MASK]”) and prompt templates in our framework in the 5-way 1-shot setting on miniImageNet. As shown in Table <ref>, our “[MASK]” extraction method with the “[CLS] The appearance of y_i is [MASK]. [SEP]” prompt template outperforms the “[CLS]” extraction method by 5.39% and the “Avg” extraction method by 3.94%. Our proposed hand-crafted prompt template treats the extraction of textual embeddings as a masked language modeling task, which makes the textual embeddings more relevant to the visual description of object categories. The results demonstrate that the carefully designed textual branch is effective for aligning visual and textual embeddings for downstream few-shot classification tasks. Analysis of the Metric Module. As shown in Table <ref>, we design a new model without using the support set to update the parameters in the inner-loop optimization and directly compute the similarity score matrix between the query visual embeddings and textual embeddings with cosine similarity in the outer loop. The results show a significant decrease in performance on four widely-used few-shot image classification datasets, demonstrating the importance of the task-specific metric module. By leveraging the metric module to generalize the cosine similarity, our model can adaptively measure the similarity between visual features and textual embeddings for different few-shot tasks. Visualization. To qualitatively evaluate our method, we apply t-SNE <cit.> to visualize the results, which represent the visual features of five categories. We randomly sample 300 examples for each class in the 5-way 5-shot setting on the miniImageNet and tieredImageNet datasets. As shown in Figure <ref>, the t-SNE visualization results indicate that our method can learn more compact and separated clusters, which means that the learned representations are more discriminative. § CONCLUSION In this paper, we propose a novel few-shot learning framework with a text-based pre-trained language model to boost few-shot learning. Furthermore, we introduce a task-specific metric module to enable the alignment between visual features and textual embeddings. Extensive experiments on miniImageNet, tieredImageNet and CIFAR-FS demonstrate the effectiveness of our method. Supplementary Materials § ADDITIONAL EXPERIMENTS Influence of Inner-Loop Temperature. To study the influence of the inner-loop temperature hyper-parameter, we conduct experiments on four widely-used few-shot datasets with different inner-loop temperature values in our method. The remaining settings are consistent with Section <ref>. Table <ref> shows the results in the 5-way 5-shot setting. We find that 0.2 is an appropriate inner-loop temperature value for this setting on all four datasets.
Effect of the Number of Inner-Loop Update Steps. To find a suitable number of inner-loop update steps, we keep the experimental setup of Section <ref> and update the model for 10, 15, 20, 25, and 30 steps in the inner loop. Table <ref> shows the results in the 5-way 5-shot setting on miniImageNet and tieredImageNet. Based on these results, we set the number of inner-loop update steps to 25 in our experiments. Visualization of Grad-CAM. In Figure <ref>, we visualize the gradient-weighted class activation maps (Grad-CAM) of the pre-trained model and our method with a ResNet-12 feature extractor. It is observed that our method pays more attention to the discriminative parts of the target object than the pre-trained model does. For example, we find that for dog samples, the pre-trained model pays more attention to the body and background parts while our model focuses on the head part.
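For readers unfamiliar with the visualization used above, the following is a minimal, generic Grad-CAM sketch using forward/backward hooks; it is not the authors' code, and the `class_score_fn` argument is a hypothetical callable that, in this setting, would return the metric-module score M(z, t_c) for the class of interest.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, feature_layer, image, class_score_fn):
    """Minimal Grad-CAM: weight the chosen conv feature maps by spatially pooled gradients."""
    feats, grads = {}, {}
    h1 = feature_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = feature_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

    score = class_score_fn(model(image))       # scalar score for the class of interest
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    weights = grads["a"].mean(dim=(2, 3), keepdim=True)        # global-average-pooled gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1))            # (B, H, W) activation map
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)    # normalize to [0, 1]
    return cam
```

In the experiments above, `feature_layer` would be the last residual block of the ResNet-12 encoder, and the resulting map is upsampled and overlaid on the input image.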
http://arxiv.org/abs/2307.03885v1
20230708033002
Hot QCD Phase Diagram From Holographic Einstein-Maxwell-Dilaton Models
[ "Romulo Rougemont", "Joaquin Grefa", "Mauricio Hippert", "Jorge Noronha", "Jacquelyn Noronha-Hostler", "Israel Portillo", "Claudia Ratti" ]
nucl-th
[ "nucl-th", "hep-ph", "hep-th" ]
Romulo Rougemont^1 (corresponding author, [email protected]), Joaquin Grefa^2, Mauricio Hippert^3, Jorge Noronha^3, Jacquelyn Noronha-Hostler^3, Israel Portillo^2, Claudia Ratti^2. ^1 Instituto de Física, Universidade Federal de Goiás, Av. Esperança - Campus Samambaia, CEP 74690-900, Goiânia, Goiás, Brazil. ^2 Physics Department, University of Houston, Houston TX 77204, USA. ^3 Illinois Center for Advanced Studies of the Universe, Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA. In this review, we provide an up-to-date account of quantitative bottom-up holographic descriptions of the strongly coupled quark-gluon plasma (QGP) produced in relativistic heavy-ion collisions, based on the class of gauge-gravity Einstein-Maxwell-Dilaton (EMD) effective models. The holographic approach is employed to tentatively map the QCD phase diagram at finite temperature onto a dual theory of charged, asymptotically Anti-de Sitter (AdS) black holes living in five dimensions. With a quantitative focus on the hot QCD phase diagram, the nonconformal holographic EMD models reviewed here are adjusted to describe first-principles lattice results for the finite-temperature QCD equation of state, with 2+1 flavors and physical quark masses, at zero chemical potential and vanishing electromagnetic fields. We review the evolution of such effective models and the corresponding improvements produced in quantitative holographic descriptions of the deconfined hot QGP phase of QCD. The predictive power of holographic EMD models is tested by quantitatively comparing their predictions for the hot QCD equation of state at nonzero baryon density with the corresponding state-of-the-art lattice QCD results. Hydrodynamic transport coefficients such as the shear and bulk viscosities predicted by these EMD constructions are also compared to the corresponding profiles favored by the latest phenomenological multistage models simultaneously describing different types of heavy-ion data. We briefly report preliminary results from a Bayesian analysis using EMD models, which provide systematic evidence that lattice QCD results at finite temperature and zero baryon density strongly constrain the free parameters of such bottom-up holographic constructions. Remarkably, the set of parameters constrained by lattice results at vanishing chemical potential turns out to produce EMD models in quantitative agreement with lattice QCD results also at finite baryon density. We also review results for equilibrium and transport properties from anisotropic EMD models, which effectively describe the hot and magnetized QGP at finite temperatures and magnetic fields with zero chemical potentials. Finally, we provide a critical assessment of the main limitations and drawbacks of the holographic models reviewed in the present work, and point out some perspectives we believe are of fundamental importance for future developments. Keywords: QCD phase diagram, critical point, quark-gluon plasma, gauge-gravity duality, equations of state. § INTRODUCTION Quantum chromodynamics (QCD) is the quantum field theory (QFT) responsible for the sector of the standard model of particle physics associated with the strong interaction. At the most fundamental level, it comprises quarks and gluons (collectively called partons) as particles of the corresponding fermionic and non-Abelian gauge vector fields, respectively <cit.>.
A rich and complex diversity of phases and regimes is possible for QCD matter, depending on the conditions to which partons are subjected <cit.>. These different regimes have been intensively investigated in the last five decades, combining simultaneous efforts from theory, experiments, astrophysical observations, and large computational simulations <cit.>. At the microscopic level, QCD is fundamentally responsible for two of the most important aspects of ordinary baryonic matter in our universe, namely: i) the stability of nuclei due to the effective exchange of pions binding the nucleons (protons and neutrons), with the most fundamental interaction between the composite hadronic particles being mediated via gluon exchange between quarks; ii) most of its mass, thus generating the vast majority of the mass of ordinary matter in our universe, as a result of the dynamical breaking of chiral symmetry at low energies — for instance, at low temperatures compared to the typical scale T_c∼ 150 MeV of the QCD deconfinement crossover transition at zero baryon density <cit.>. In fact, about ≳ 98% of the mass of the nucleons (and, consequently, also the mass of atoms and the ordinary macroscopic structures of the universe built upon them) comes from strong interactions, with the small remainder due to the current quark masses generated by the Higgs mechanism <cit.>. Intrinsically related to the two aforementioned facts, QCD also presents what is called color confinement, which generically refers to the fact that quarks and gluons, as degrees of freedom carrying color charge under the non-Abelian gauge group SU(N_c=3) of QCD, are never observed in isolation as asymptotic states in experiments, being confined inside color-neutral hadrons <cit.>. Relying on various properties of QCD, we can determine its degrees of freedom at specific energy scales. Due to the number of colors, N_c=3, and quark flavors, N_f=6, QCD is an asymptotically free non-Abelian gauge theory <cit.>. That is, the β-function for the QCD coupling constant is negative, implying that it is a decreasing function of the renormalization group energy scale, vanishing at asymptotically high energies. Conversely, QCD becomes a strongly coupled non-perturbative QFT at energy scales below or around the QCD dimensional transmutation scale, Λ_QCD∼ 200 MeV, indicating the failure of perturbative QFT methods when applied to low energy QCD phenomena (e.g. quark confinement). Indeed, due to quark confinement, one expects a hadron resonance gas (HRG) phase at low energies and temperatures, while, due to asymptotic freedom, a deconfined phase of quarks and gluons called the quark-gluon plasma (QGP) is expected at high energies. Because of its asymptotic freedom, the latter could naively be expected to be a weakly interacting medium. In fact, at high enough temperatures, as attained in the quark epoch (where the cosmic background radiation temperature varied from hundreds of GeV to hundreds of MeV within a time window of microseconds), and before the QCD phase transition in the early universe, the QGP was a weakly coupled fluid. As a point of comparison, hard thermal loop (HTL) perturbation theory in QCD seems to provide a reasonable description of some thermodynamic observables computed non-perturbatively in lattice QCD (LQCD) simulations for temperatures T≳ 300 MeV <cit.>.
However, at temperatures below that approximate threshold, the agreement between perturbative QCD (pQCD) and non-perturbative LQCD results is generally lost, which approximately sets the temperature window T_c ∼ 150 MeV < T < 2T_c ∼ 300 MeV (at zero baryon density) for which the QGP is a strongly coupled fluid <cit.>. This is just within the range of temperatures probed by relativistic heavy-ion collision experiments conducted e.g. at the Relativistic Heavy Ion Collider (RHIC) <cit.> and at the Large Hadron Collider (LHC) <cit.>. §.§ Some phenomenological results from heavy-ion collisions The strongly coupled nature of the QGP produced in heavy-ion collisions is not only deduced from thermodynamic observables but also from hydrodynamic transport coefficients. These coefficients are typically inferred from the analysis of phenomenological models simultaneously describing several types of heavy-ion data <cit.>. The hot and dense medium produced in relativistic heavy-ion collisions is commonly believed to pass through several different stages during its space and time evolution, as sketched in Fig. <ref>. Initially, two heavy ions are accelerated to speeds close to the speed of light, and at very high energies, the gluon density inside those nuclei grows until reaching a saturation value, forming the so-called color glass condensate (CGC) <cit.>, which is a typical source of initial conditions for the medium produced after the collision. For a characteristic time interval ≲ 1 fm/c after the collision[Notice that 1 fm/c ≈ 3.33564× 10^-24 s, so that the characteristic time scales involved in heavy-ion collisions are extremely short.], in the pre-equilibrium stage, the system is expected to be described by a turbulent medium composed by highly coherent gluons. Therefore, this stage is dominated by the dynamics of classical chromodynamic fields forming the so-called glasma, a reference to the fact that this is an intermediate stage between the color glass condensate and the quark-gluon plasma <cit.>. As the glasma expands and cools, it begins to decohere towards a state of QCD matter which possesses an effective description in terms of relativistic viscous hydrodynamics <cit.> and whose physically relevant degrees of freedom correspond to deconfined, but still strongly interacting quarks and gluons formed around ≳ 1 fm/c after the collision. As the QGP keeps expanding and cooling, it eventually hadronizes by entering into the QGP-HRG crossover region of the QCD phase diagram <cit.>. The next stage of the space and time evolution of the system comprise the so-called chemical freeze-out <cit.>, when inelastic collisions between the hadrons cease and the relative ratio between the different kinds of particles in the hadron gas is kept fixed. Afterwards, there is the thermal or kinetic freeze-out, when the average distance between the hadrons is large enough to make the short-range residual strong nuclear interaction between them effectively negligible. This fixes the momentum distribution of the hadrons. After that, the produced hadrons are almost free and the particles resulting from their decays reach the experimental detectors, providing information on the previous stages in the evolution of the system. Of particular relevance for the topics to be approached in the present review are the shear, η, and bulk viscosities, ζ. 
These hydrodynamic transport coefficients cannot be directly measured in heavy-ion collision experiments and are typically employed as free functions (of temperature and eventually also of other possible variables, such as chemical potentials and/or electromagnetic fields) in phenomenological hydrodynamic models, which are then fixed by comparison to heavy-ion data (for example, using Bayesian inference methods <cit.>). From such an approach, it is generally found that, around the QGP-HRG crossover region at zero baryon density in the QCD phase diagram, η/s (where s is the entropy density of the medium) should be of the same order of magnitude (in natural units with c=ħ=k_B=1) of 1/4π (which, as we shall discuss in section <ref>, is a benchmark value for strongly coupled quantum fluids coming from a very broad class of holographic models <cit.>), being at least one order of magnitude smaller than perturbative calculations <cit.>. The small value of the shear viscosity to entropy density ratio, η/s, inferred for the QGP produced in heavy-ion collisions is physically interpreted as a clear manifestation of its nearly-perfect fluidity, as sketched in Fig. <ref>. As a reference, in the QGP-HRG crossover window, where the QGP temperature is low enough to make the medium hadronize, T_c∼ 150 MeV ∼ 1.72× 10^12 K ∼ 10^5 T_center of sun (see e.g. https://solarscience.msfc.nasa.gov/interior.shtmlNASA/Marshall Solar Physics). In heavy-ion collisions realized in particle accelerators, the QGP attains temperatures at most 2 - 3 times T_c while much higher temperatures were achieved in the early universe. Besides η/s, also the bulk viscosity to entropy density ratio ζ/s plays a prominent role in the phenomenological description of heavy-ion data <cit.>. For instance, in Ref. <cit.> the JETSCAPE Collaboration developed a state-of-the-art phenomenological multistage model for heavy-ion collisions, which was employed to simultaneously describe several hadronic measurements from different experiments at RHIC and LHC. Their results favor the temperature-dependent profiles (at zero baryon density) for ζ/s and η/s shown in Fig. <ref>. These phenomenological results for the hydrodynamic viscosities will be compared to quantitative microscopic holographic calculations and predictions in section <ref>. By varying the conditions under which heavy-ion collisions take place in particle accelerators, it is possible to experimentally probe some aspects and regions of the QCD phase diagram at finite temperature and nonzero baryon density. For instance, for heavy-ion collisions at the LHC operating at the center of mass energies of √(s_NN) = 2.76 - 5.02 TeV, the energy of the collisions is so large that average effects due to a nonzero baryon chemical potential μ_B become negligible (note that fluctuations of conserved charges do still play a role at these energies <cit.>). On the other hand, the Beam Energy Scan (BES) program at RHIC scans out lower collision energies spanning the interval √(s_NN) = 7.7 - 200 GeV <cit.>, where the baryon chemical potential reached within the QGP is of the same order of magnitude of the temperature, allowing experimental access to some regions of the QCD phase diagram at nonzero μ_B. Furthermore, fixed-target experiments at RHIC <cit.>, and also experiments with lower collision energies at HADES <cit.>, and FAIR <cit.>, aim at experimentally probing the structure of the QCD phase diagram in the (T,μ_B)-plane at higher baryon densities. 
One of the main purposes of such experiments is to determine the location of the conjectured critical endpoint (CEP) of the line of first-order phase transition which, from several different model calculations, is expected to exist in the QCD phase diagram at high-baryon densities <cit.>. §.§ Lattice QCD results An important limitation of phenomenological multistage models is that several physical inputs are not calculated from self-consistent microscopic models or systematic effective field theories. As mentioned above, these inputs can be constrained by experimental data (and some underlying phenomenological model assumptions). However, such a phenomenological approach cannot explain why and how certain transport and equilibrium properties arise from QCD. The strongly coupled nature of QCD at low energies renders the systematic methods of pQCD not applicable to describe a wide range of physically relevant phenomena that can be probed by experiments in high-energy particle accelerators and also by astrophysical observations. However, at vanishing or small chemical potentials μ_B, another first-principles method for investigating equilibrium phenomena (such as the behavior of several thermodynamic observables) in QCD is available, namely, LQCD simulations. The general reasoning behind this method, originally developed by Kenneth Wilson <cit.>, amounts to discretizing the Euclidean, imaginary-time version of the background spacetime. Matter fields, such as the fermion fields of the quarks, are defined at the sites of the resulting discretized grid, while gauge fields, such as the gluons, are treated as link variables connecting neighboring sites <cit.>. The Euclidean path integral, defined in the imaginary-time Matsubara formalism for finite-temperature statistical systems, can then be performed using Monte Carlo methods. Continuum QCD can formally be recovered by taking the limit in which the lattice spacing between neighboring sites goes to zero. In practice, due to the large increase in the computational cost of numerical simulations with decreasing lattice spacing, the formal continuum limit is approached by extrapolating a sequence of calculations with progressively decreasing lattice spacings, which are nonetheless still large enough to be computationally manageable <cit.>. Some very remarkable achievements of LQCD relevant to this review include the first principles calculation of light hadron masses, like pions and nucleons, compatible with experimental measurements <cit.>, and also the determination of the nature of the transition between the HRG and QGP phases of QCD at zero baryon density, which turns out to be a broad continuous crossover <cit.>. However, despite its notable successes, LQCD calculations also feature some important limitations, in particular: i) the difficulties in performing numerical simulations at nonzero baryon density, due to the so-called sign problem of lattice field theory <cit.>, and ii) the issues in calculating non-equilibrium transport observables associated with the real-time dynamics of the system. The former is an algorithmic issue that arises from the fermion determinant of the quarks becoming a complex quantity at real nonzero μ_B, which implies that it cannot be employed to define a probabilistic measure to be used in importance sampling — thus spoiling the direct evaluation of the LQCD path integral by means of Monte Carlo methods. 
The latter is due to difficulties in analytically continuing the Euclidean correlators calculated in the lattice at imaginary times to real-time intervals in a spacetime with Minkowski signature <cit.>. Nonetheless, in recent years several different techniques have been developed and applied to calculate in LQCD the equation of state at finite temperature and moderate values of baryon chemical potential, and also to estimate the behavior of some transport coefficients at finite temperature and zero baryon density, as reviewed in Refs. <cit.>. In fact, state-of-the-art lattice simulations for the continuum-extrapolated QCD equation of state with 2+1 flavors and physical values of the quark masses are now available up to μ_B/T≤ 3.5 <cit.> from a novel expansion scheme, and up to μ_B/T≤ 3 from a traditional Taylor expansion <cit.>. Some of these LQCD results for thermodynamic observables at finite (T,μ_B) will be compared to quantitative microscopic holographic calculations and predictions in section <ref>. §.§ Some basic aspects of the holographic gauge-gravity duality The limitations of present-day lattice simulations mentioned above prevent first-principles QCD calculations to be employed in the investigation of strongly interacting QCD matter at higher baryon densities, where an actual phase transition between confining hadronic and deconfined partonic degrees of freedom may exist, as depicted in the sketch displayed in Fig. <ref>. Also, LQCD simulations of QCD transport properties are considerably difficult already at μ_B=0 <cit.>, let alone at finite baryon density. In such cases, it is customary to resort to effective models and other alternative theoretical approaches to obtain some qualitative insight and even some quantitative predictions for the behavior of QCD matter under such extreme conditions. One such alternative approach, which is the theoretical tool considered in the present review, is what is broadly called the holographic gauge-gravity duality (also known, under more restricted conditions, as the AdS-CFT correspondence) <cit.>. The holographic gauge-gravity duality is motivated by the framework of string theory, which originally had an old and curious relationship with the strong interaction. Indeed, (non-supersymmetric) string theory was originally developed as an S-matrix theory for the strong nuclear force between hadrons, which were empirically known to fall into linear Regge trajectories relating their total angular momentum J to their mass squared m^2, in what is known as the Chew-Frautschi plots <cit.>. By modeling a meson as a relativistic open string spinning around its center, it is possible to reproduce the observed Chew-Frautschi relation, J=α_0+α'm^2, where the relativistic string tension is given in terms of the measured slope of the linear Regge trajectory, σ=(2πα')^-1≈(440 MeV)^2 <cit.>. The slope is approximately the same for the different Regge trajectories defined by the different measured values of the Regge intercept, α_0 (which is known to depend on the flavor quantum numbers of the hadrons considered — hadrons with the same flavor quantum numbers fall into the same Regge trajectory, and can be viewed as resonances of this trajectory with different values of mass and angular momentum). However, since this simple string model also predicts results in striking contradiction with hadronic experiments (e.g. 
a wrong, soft exponential falloff for the associated Veneziano scattering amplitude in the high energy limit of hard scattering for hadrons at fixed angles), it has been abandoned as a model for hadrons, being superseded by the advent of QCD, with its theoretical and experimental successes as the fundamental description of the strong interaction. Later, the theoretical interest in string theory greatly resurfaced, although within a very different context, with the so-called first and second superstring revolutions, which correspond, respectively: 1) to the discovery of five different consistent supersymmetric quantum string theories in 10 spacetime dimensions (superstring theories of Type I, Type IIA, Type IIB, Heterotic SO(32) and Heterotic E_8⊗ E_8); and also, 2) the latter discovery that these five superstring theories in 10 dimensions are related through a web of duality transformations, besides being also related to a theory of membranes defined in 11 spacetime dimensions called M-theory, whose low energy limit corresponds to a unique 11-dimensional theory of supergravity. A remarkable common feature of all superstring theories is that all of them possess a tensorial spin 2 massless particle in their spectrum, which is the graviton, the hypothetic vibrational string mode responsible for mediating the gravitational interaction at the quantum level. Due to that reason, and also due to the fundamental fact that at low energies superstring reduces to supergravity, therefore containing general relativity as the low energy, classical description of gravity, superstring theory is an interesting candidate for a theory of quantum gravity <cit.>. There is also some expectation that the standard model would emerge as a low-energy sector in string theory with 6 of its 10 dimensions compactified in some appropriate manifold, which should be chosen in a very specific way in order to generate the observed phenomenology of particle physics in our universe. This way, string theory could be seen as a “theory of everything”, in the sense of possibly describing all the particles and fundamental interactions in nature. Regardless of whether string theory is the unifying theory of all the fundamental interactions of nature <cit.> or not, it is undeniable that new effective approaches and applications, directly inspired by string theory and aimed towards the strong interaction, flourished with the advent of the holographic gauge-gravity duality. Before discussing some of their phenomenological aspects in regard to the physics of the hot and baryon dense strongly-coupled QGP in section <ref>, we discuss below some basic general aspects of the holographic correspondence. The original formulation of the so-called AdS-CFT correspondence <cit.>, relates Type IIB superstring theory defined on the product manifold between a 5-dimensional Anti-de Sitter (AdS) spacetime and a 5-dimensional sphere, AdS_5⊗ S^5, to a conformal quantum field theory (CFT) corresponding to 𝒩=4 Supersymmetric Yang-Mills (SYM) theory with gauge group SU(N_c),[𝒩=4 refers to the number of different supersymmetries of the theory.] defined on the conformally flat 4-dimensional boundary of AdS_5. 
Two other early realizations of the AdS-CFT duality comprise also the relation between M-theory defined on AdS_4⊗ S^7 and the Aharony-Bergman-Jafferis-Maldacena (ABJM) superconformal field theory defined on the 3-dimensional boundary of AdS_4, besides the relation between M-theory defined on AdS_7⊗ S^4 and the so-called 6D (2,0) superconformal field theory defined on the 6-dimensional boundary of AdS_7. In a very naive and imprecise way, one could in principle think of the first example of the 𝒩=4 SYM theory as a “toy model” for QCD, while the second example regarding the ABJM theory could be taken as a “toy model” for low-dimensional condensed matter systems. However, this is inadequate from a realistic phenomenological perspective, both at the quantitative and qualitative levels, as we shall discuss in section <ref>. Before doing that, let us first comment a little bit more on the original proposal (see e.g. the discussion in section 3 of the standard review <cit.>, and also other works such as <cit.> for details). We take for definiteness the example relating Type IIB superstring theory compactified on AdS_5⊗ S^5 and 𝒩=4 SYM theory living on the boundary of AdS_5. One first considers Type IIB string theory in flat ℝ^1,9 Minkowski spacetime and a collection of N_c coincident parallel D3-branes in this background.[An endpoint of an open string must satisfy either Dirichlet or Neumann boundary conditions. If one considers Neumann boundary conditions on p spatial dimensions plus time, then the remaining D-p-1 dimensions must satisfy Dirichlet boundary conditions. Since for Dirichlet boundary conditions a string endpoint is fixed in space, while for Neumann boundary conditions it must move at the speed of light, then with Neumann boundary conditions on p+1 dimensions, the open string endpoints are constrained to move within a (p+1)-dimensional hypersurface, which is a dynamical object called Dp-brane. Dp-branes are shown to be related to black p-branes <cit.>, which are solutions of higher dimensional (super)gravity which generalize the concept of black holes by having extended event horizons which are translationally invariant through p spatial dimensions. They actually provide different descriptions of a single object, which in a perturbative string regime is accurately described by Dp-branes not backreacting on the background spacetime, while at low energies (corresponding to take α'≡ l_s^2 to be small, where l_s is the fundamental string length, so that massive string states can be neglected) and large gravitational fields, the backreaction of the Dp-branes on the background produces a black p-brane geometry <cit.>.] The perturbative string theory excitations in this system correspond to vibrational modes of both, closed strings, and also open strings with their ends attached to the D3-branes. If we consider the system defined at low energies compared to the characteristic string scale, (α')^-1/2≡(l_s)^-1, only massless string modes can be excited which, for closed strings give a gravity supermultiplet and, for the open strings with their ends attached to the (3+1)-dimensional worldvolume of the N_c coincident D3-branes, give a 𝒩=4 vector supermultiplet with gauge group SU(N_c). 
A low energy effective action for these massless string excitations in the background considered can be schematically written by integrating out the massive string modes, S_eff = S_ℝ^1,9 bulk + S_ℝ^1,3 brane + S_int, where S_ℝ^1,9 bulk is the low energy action for the gravity supermultiplet, corresponding to Type IIB supergravity (SUGRA) in ℝ^1,9 plus higher order derivative corrections coming from the integration of the string massive modes; S_ℝ^1,3 brane is the low energy action for the 𝒩=4 vector supermultiplet living on the ℝ^1,3 worldvolume of the N_c coincident D3-branes, corresponding to 𝒩=4 SYM theory with gauge group SU(N_c) plus higher order derivative corrections coming from the integration of the string massive modes; and S_int is an interaction term between the bulk and brane modes. The higher order derivative corrections for the bulk and brane actions coming from the integration of massive string modes are proportional to positive powers of α', while the interaction action is proportional to positive powers of the square root of the 10D Newton's gravitational constant, κ_10≡√(8π G_10)∼ g_sα' ^2, where g_s is the string coupling, so that by considering the so-called decoupling limit where α'≡ l_s^2→ 0 with fixed N_c,g_s, one has S_ℝ^1,9 bulk→ S_ℝ^1,9 IIB SUGRA, S_ℝ^1,3 brane→ S_ℝ^1,3 𝒩=4 SYM, and S_int→ 0, so that we end up with two decoupled actions, lim_α'→ 0 (fixed N_c,g_s) S_eff = S_ℝ^1,9 IIB SUGRA + S_ℝ^1,3 𝒩=4 SYM. For a given number N_c of coincident D3-branes, the `t Hooft coupling effectively controlling the strength of the interactions in the 𝒩=4 SYM SU(N_c) gauge theory is given by λ_t≡ N_c g_SYM^2= N_c g_s.[The relation g_SYM^2= g_s can be inferred from the fact that a closed string, governed by the g_s coupling, can be formed from the collision between the endpoints of two open strings moving on the D3-branes, with g_SYM being the coupling of the non-Abelian gauge field corresponding to the massless mode of the open strings on these branes <cit.>.] This picture holds for any value of λ_t (and since the SYM theory is a CFT, its `t Hooft coupling remains constant for any value of energy so that one actually has infinitely many different SYM theories, each one of them defined at some given value of λ_t). Another perspective for the same system can be considered as follows. The effective gravitational field generated by the collection of N_c coincident D3-branes is ∼ N_c g_s (l_s/r)^4 <cit.>, and by considering a very large N_c such that λ_t = N_c g_s≫ 1 even for small values of g_s (so that one can ignore quantum string loop contributions in the bulk), very close to the D3-branes for r→ 0 the gravitational field is very intense and its backreaction on the background spacetime highly distorts its geometry, producing a curved manifold. In this limit it is necessary to replace the perturbative string description of D3-branes in flat Minkowski spacetime with the associated black 3-brane supergravity solution, whose near-horizon (i.e. near-black brane) geometry approaches precisely that of AdS_5(L)⊗ S^5(L), with the same curvature radius L for the AdS_5 and S^5 manifolds.[For the other two early examples of the AdS-CFT correspondence mentioned before, one obtains: AdS_4(L/2)⊗ S^7(L) and AdS_7(2L)⊗ S^4(L) (see e.g. <cit.>).] On the other hand, far away from the black brane the background geometry is still that of Minkowski ℝ^1,9. 
In both regions (near and far from the black brane), since we considered that the string coupling g_s is small (so that string loops may be discarded), by taking the decoupling limit as before, with l_s→ 0 and fixed N_c,g_s, the bulk spacetime is inhabited only by Type IIB SUGRA fields. By comparing the two perspectives above for the same system, when defined in the same regime corresponding to low energies, low string coupling, large N_c, and strong `t Hooft coupling (α'≡ l_s^2→ 0 with fixed N_c,g_s, but such that g_s is small, N_c is large and λ_t = N_c g_SYM^2 = N_c g_s≫ 1), one notices that in both views there is a common element, which is Type IIB SUGRA defined on ℝ^1,9, and it is then conjectured that the remaining pieces in each perspective should be dual to each other: strongly coupled, large N_c, 𝒩=4 SYM theory with gauge group SU(N_c), defined on ℝ^1,3 (which is equivalent, up to a conformal factor, to the boundary of AdS_5), and classical, weakly coupled Type IIB SUGRA defined on AdS_5(L)⊗ S^5(L). The duality involved in this comparison actually conveys a detailed mathematical dictionary translating the evaluation of physical observables in a classical SUGRA theory defined at weak coupling on top of a background given by the product of an AdS spacetime and a compact manifold, to the calculation of other observables in a different, conformal quantum gauge field theory defined at strong coupling and with a large number of colors on top of the conformally flat boundary of the AdS manifold. Then, the notion of the hologram comprised in the AdS-CFT duality refers to the fact that the gravitational information of a higher dimensional bulk spacetime can be encoded in its boundary. This is the weakest form of the holographic AdS-CFT correspondence, and a particular case of the broader gauge-gravity duality, being largely supported by a plethora of independent consistency checks (see e.g. <cit.>). The strongest version of the AdS-CFT conjecture, corresponding to a particular case of the so-called gauge-string duality (which is more general than the gauge-gravity duality, which can be seen as a low-energy limit of the latter), proposes that the duality should be valid for all values of g_s and N_c, therefore relating 𝒩=4 SYM theory on ℝ^1,3 with arbitrary `t Hooft coupling and an arbitrary number of colors for the gauge group SU(N_c), and full quantum Type IIB superstring theory generally formulated in a nonperturbative way on AdS_5(L)⊗ S^5(L) (instead of just its classical low energy limit corresponding to Type IIB SUGRA). It is also posited that high derivative/curvature corrections in the bulk correspond to the inverse of `t Hooft coupling corrections in the dual CFT, since according to the detailed holographic dictionary, α'/L^2={l_s/[l_s (N_c g_s)^1/4]}^2 =1/√(λ_t), and that quantum string loop corrections in the bulk correspond to the inverse of N_c corrections in the dual CFT, since, g_s (l_s/L)^4 = g_s (l_s/[l_s (N_c g_s)^1/4])^4 = 1/N_c. The conjectured holographic AdS-CFT duality has a very clear attractive feature, which is the fact that complicated nonperturbative calculations in a strongly coupled quantum CFT can be translated, through the detailed mathematical holographic dictionary, into much simpler (although not necessarily easy) calculations involving weakly coupled classical gravity in higher dimensions. 
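As a compact summary of the regime of validity implied by these relations (this is merely a consolidation of the expressions just quoted, with order-one numerical factors omitted as above): inverting them gives L^4/l_s^4 = λ_t and g_s = λ_t/N_c, so that α' (curvature) corrections are suppressed for λ_t ≫ 1 while string loop corrections are suppressed for N_c ≫ λ_t; the classical two-derivative gravity regime therefore corresponds to the hierarchy 1 ≪ λ_t ≪ N_c.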
More generally, the broader holographic gauge-gravity duality[The even broader gauge-string duality is very difficult to handle in practice, due to the present lack of a detailed and fully nonperturbative definition of string theory on asymptotically AdS spacetimes. Consequently, we focus in this review only on its low-energy manifestation corresponding to the gauge-gravity duality, which is the framework where the vast majority of the calculations are done in the literature regarding the holographic correspondence.] is not restricted to bulk AdS spacetimes and dual boundary CFTs. Indeed, for instance, by considering the backreaction of effective 5D massive fields living on AdS_5, which are associated with the Kaluza-Klein (KK) reduction on S^5 of the originally 10D massless modes of SUGRA, the background AdS_5 metric is generally deformed within the bulk, and the effective 5D bulk spacetime geometry becomes just asymptotically AdS, with the metric of AdS_5 being recovered asymptotically near the boundary of the bulk spacetime. Generally, there is also a corresponding deformation of the dual QFT theory at the boundary of the asymptotically AdS spacetime induced by the consideration of relevant or marginal operators, which may break conformal symmetry and supersymmetry and whose scaling dimension is associated through the holographic dictionary to the masses of the effective 5D bulk fields. In this sense, one has a broader holographic gauge-gravity duality relating a strongly coupled QFT (not necessarily conformal or supersymmetric) living at the boundary of a higher dimensional asymptotically AdS spacetime, whose geometry is dynamically determined by a classical gravity theory interacting with different matter fields in the bulk. In the holographic gauge-gravity duality, the extra dimension connecting the bulk asymptotically AdS spacetime to its boundary plays the role of a geometrization of the energy scale of the renormalization group flow in the QFT living at the boundary <cit.>, with low/high energy processes in the QFT being mapped into the deep interior/near-boundary regions of the bulk spacetime, respectively. Since its original proposal by Maldacena in 1997 <cit.>, the holographic gauge-gravity duality has established itself as one of the major breakthroughs in theoretical physics in the last few decades, being applied to obtain several insights into the nonperturbative physics of different strongly coupled quantum systems, comprising studies in the context of the strong interaction <cit.>, condensed matter systems <cit.> and, more recently, also quantum entanglement and information theory <cit.>. §.§ Main purpose of this review Holographic gauge-gravity models are generally classified as being either i) top-down constructions when the bulk supergravity action comes from known low-energy solutions of superstrings and the associated holographic dual at the boundary is precisely determined, ii) or bottom-up constructions when the bulk effective action is generally constructed by using phenomenological inputs and considerations with the purpose of obtaining a closer description of different aspects of some real-world physical systems, but the exact holographic dual, in this case, is not precisely known. 
Actually, for bottom-up holographic models, one assumes or conjectures that the main aspects of the gauge-gravity dictionary inferred from top-down constructions remain valid under general circumstances, such that for a given asymptotically AdS solution of the Einstein field equations coupled to other fields in the bulk, some definite holographic dual QFT state at the boundary should exist.[This putative bottom-up holographic dual does not need to (and generally will not) coincide with the exact QFT taken as a target to be described in the real world. Instead, one will generally obtain some holographic dual of a QFT which is close to some aspects of the target QFT, but which differs from the latter in many other regards. In a general sense, this is not different, for instance, from the reasoning employed to construct several non-holographic effective models for QCD, where a given effective model is used to produce approximate results for some but not all aspects of QCD. In fact, if an exact holographic dual of real-world QCD (with gauge group SU(3), 6 flavors and physical values of the quark masses) does exist, its dual bulk formulation will likely comprise not merely a gravity dual, but instead some complicated nonperturbative full string dual whose formulation is currently unknown.] For this assumption to be useful in practice for different phenomenological purposes, bottom-up holographic models should provide explicit examples where the target phenomenology is indeed well reproduced by the considered bulk gravity actions, which should furthermore be able to provide new and testable predictions. In fact, as we are going to discuss in this review, one can construct holographic bottom-up models which are able to provide quantitative results and predictions compatible with first-principles LQCD simulations and with some phenomenological outputs inferred from heavy-ion collisions, besides providing new predictions for thermodynamic and transport quantities in regions of the QCD phase diagram currently not amenable to first-principles analysis due to the limitations discussed in the preceding sections. Let us first analyze thermal SYM theory[That the SYM theory is completely inadequate as a holographic model for the confined phase of QCD is immediately obvious from e.g. the fact that SYM is a CFT and QCD is not. Even if one considers a comparison of SYM with just pure YM theory (i.e. the pure gluon sector of QCD without dynamical quarks), issues remain since YM features linear confinement between static, infinitely heavy probe quarks (corresponding to an area law for the Wilson loop <cit.>) and a mass gap in the spectrum.] as a possible “proxy” for the strongly coupled deconfined QGP, as it has been commonly considered within a considerable part of the holographic literature for years. It is often said that SYM theory has some qualitative features in common with QCD at the typical temperatures attained by the QGP in heavy-ion collisions, namely: within the considered temperature window, both theories are strongly coupled, deconfined, with non-Abelian vector fields corresponding to gluons transforming in the adjoint representation of the gauge group, and their η/s have comparable magnitude. Although the points above are true, they are insufficient to establish a reliable connection between SYM and QCD. Indeed, there are infinitely many different holographic theories with the same properties listed above.
In fact, all gauge-gravity duals are strongly coupled and all isotropic and translationally invariant Einstein[That is, with the kinetic term for the metric field in the bulk action given by the usual Einstein-Hilbert term with two derivatives.] gauge-gravity duals have a specific shear viscosity given by the “(quasi)universal holographic” result η/s=1/4π <cit.>, which is actually a clear indication that even for nonconformal gauge-gravity duals with running coupling (which is not the case for SYM theory, since it is a CFT), the effective coupling of the holographic theory remains large at all temperature scales. Consequently, classical gauge-gravity duals lack asymptotic freedom, featuring instead a strongly coupled ultraviolet fixed point, being asymptotically safe but not asymptotically free. Moreover, there are infinitely many different holographic duals with deconfined phases at high temperatures. In the face of this infinite degeneracy of holographic gauge-gravity duals with the very same generic features often employed to “justify” the use of SYM theory as a “proxy” for the QGP, one may be led to conclude that such a choice is not well-defined. One may argue that this choice is more related to the fact that SYM theory is the most well-known and one of the simplest examples of gauge-gravity duality, than to any realistic phenomenological connection between the SYM plasma and the real-world QGP. In order to take steps towards lifting the infinite degeneracy of holographic models to describe (some aspects of) the actual QGP, one needs to look at the behavior of more physical observables than just η/s. In this regard, the SYM plasma is easily discarded as a viable phenomenological holographic model for the QGP for several reasons, among which we mention mainly the following. The SYM plasma is a CFT, while the QGP is highly nonconformal within the window of temperatures probed by heavy-ion collisions, and this fact makes the equation of state for the SYM plasma completely different from the one obtained for the QGP in LQCD simulations, not only quantitatively, but also qualitatively <cit.>. Indeed, dimensionless ratios for thermodynamic observables such as the normalized pressure (P/T^4), energy density (ϵ/T^4), entropy density (s/T^3), the speed of sound squared (c_s^2), and the trace anomaly (I/T^4=(ϵ-3P)/T^4, which is identically zero for a CFT), are all given by constants in the SYM plasma, while they display nontrivial behavior as functions of the temperature in the QGP. Furthermore, the bulk viscosity vanishes for the conformal SYM plasma, while it is expected to possess nontrivial behavior as a function of the temperature in the QGP, playing an important role in the description of heavy-ion data, as inferred from phenomenological multistage models (see the discussion in section <ref> and Fig. <ref>). Therefore, when considering thermodynamic equilibrium observables and transport coefficients, the SYM plasma is not a realistic model for the QGP at both the quantitative and qualitative levels. On the other hand, the holographic duality can indeed be employed to construct effective gauge-gravity models which make it possible to actually calculate several thermodynamic and transport observables, displaying remarkable quantitative agreement with state-of-the-art LQCD simulations at zero and finite baryon density, while simultaneously possessing transport properties very close to those inferred in state-of-the-art phenomenological multistage models for heavy-ion collisions.
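As a practical aside, the dimensionless combinations mentioned above are obtained from a tabulated equation of state through standard thermodynamic identities at μ_B = 0 (s = ∂p/∂T, ϵ = Ts - p, c_s^2 = dp/dϵ). The short sketch below illustrates this bookkeeping; it is a generic numerical illustration with hypothetical variable names, not code from the works reviewed here.

```python
import numpy as np

def normalized_eos(T, p):
    """Given a tabulated pressure p(T) at mu_B = 0 (T in MeV, p in MeV^4),
    return the dimensionless combinations discussed in the text."""
    s = np.gradient(p, T)                   # entropy density s = dp/dT
    eps = T * s - p                         # energy density from eps = T s - p
    cs2 = np.gradient(p, eps)               # speed of sound squared c_s^2 = dp/deps
    trace_anomaly = (eps - 3.0 * p) / T**4  # I/T^4, identically zero for a CFT
    return {"p/T^4": p / T**4, "eps/T^4": eps / T**4,
            "s/T^3": s / T**3, "c_s^2": cs2, "I/T^4": trace_anomaly}
```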
Additionally, such holographic models also provide quantitative predictions for the QGP in regions of the QCD phase diagram which are currently out of the reach of first-principles calculations. The main purpose of the present paper is to review these results, mainly obtained through specific bottom-up constructions engineered within the so-called Einstein-Maxwell-Dilaton class of holographic models, discussing the main reasoning involved in their formulation, and also pointing out their phenomenological limitations and drawbacks, in addition to their successful achievements. This will be done in the course of the next sections, with holographic applications to the hot and baryon dense strongly coupled QGP being discussed in section <ref>. We will also review some applications to the hot and magnetized QGP (at zero chemical potential) in section <ref>. In the concluding section <ref>, we provide an overview of the main points discussed through this review and list important perspectives for the future of phenomenological holographic model applications to the physics of the QGP. In this review, unless otherwise stated, we make use of natural units where c=ħ=k_B=1, and adopt a mostly plus metric signature. § HOLOGRAPHIC MODELS FOR THE HOT AND BARYON DENSE QUARK-GLUON PLASMA In this section, we review the construction and the main results obtained from phenomenologically-oriented bottom-up holographic models aimed at a quantitative description of the strongly coupled QGP at finite temperature and baryon density. We focus on a class of holographic constructions called Einstein-Maxwell-Dilaton (EMD) gauge-gravity models, which has provided up to now the best quantitative holographic models for describing equilibrium thermodynamic and hydrodynamic transport properties of the hot and baryon dense QGP produced in heavy-ion collisions. We also discuss different predictions for the structure of the QCD phase diagram, comprising at high baryon chemical potential a line of first-order phase transition ending at a CEP, which separates the phase transition line from the smooth crossover observed at low baryon densities. §.§ Holographic Einstein-Maxwell-Dilaton models In order to possibly obtain a quantitative holographic model for the QGP (and also quantitative holographic constructions for other strongly coupled physical systems in the real world), one necessarily needs to break conformal symmetry in the holographic setting. Breaking conformal symmetry alone is not sufficient to reproduce QCD, since one needs to obtain a holographic modeling of specific phenomenological properties, and not just an arbitrary or generic nonconformal model. Therefore, the conformal symmetry-breaking pattern needs to be driven in a phenomenologically-oriented fashion. One possible approach to obtain a non-conformal system is a bottom-up holographic construction where the free parameters of the model are constrained by existing results from LQCD in some specific regime. Once the parameters are fixed, one can then use this model to make predictions. Of course, as in any effective theory construction, the functional form of the bulk action and also the ansatze for the bulk fields must be previously chosen based on some symmetry and other relevant considerations, taking into account a given set of observables from the target phenomenology and the basic rules for evaluating these observables using holography. 
The seminal works of <cit.> laid down a remarkably simple and efficient way of constructing quantitative holographic models for the strongly coupled QGP in equilibrium. The general reasoning originally developed in these works may be schematically structured as follows: * The focus is on constructing an approximate holographic dual or emulator for the equation of state of the strongly coupled QGP in the deconfined regime of QCD, without trying to implement confinement (e.g. Regge trajectories for hadrons), chiral symmetry breaking at low temperatures, asymptotic freedom at asymptotically high temperatures, nor an explicit embedding into string theory. In this construction, the QCD equation of state (and the second-order baryon susceptibility for the case of finite baryon densities, see <ref>) is used to fix the free parameters at finite temperature and vanishing chemical potentials. Note that only these specific LQCD data are used to fix the free parameters of the models. All other resulting thermodynamic quantities or transport coefficients are then predictions of the model; * The dynamical field content and the general functional form of the bulk gravity action is taken to be the simplest possible in order to accomplish the above. One considers a bulk metric field (holographically dual to the boundary QFT energy-momentum tensor) plus a Maxwell field with the boundary value of its time component providing the chemical potential at the dual QFT. Additionally, a real scalar field (called the dilaton) is used to break conformal symmetry in the holographic setting, emulating the QGP equation of state at zero chemical potential. The dilaton field also relates string and Einstein frames, as used e.g. in the holographic calculation of parton energy loss (some results in this regard will be briefly reviewed in section <ref>); * The general functional form for the bulk action constructed with the dynamical field content features at most two derivatives of the fields. The bulk action includes the Einstein-Hilbert term with a negative cosmological constant (associated with asymptotically AdS_5 spacetimes) for the metric field g_μν, the kinetic terms for the Abelian gauge field A_μ and the dilaton field ϕ, an almost arbitrary potential (free function) V(ϕ) for the dilaton, and an interaction term between the Maxwell and the dilaton fields, which features another free function of the dilaton field, f(ϕ). The free functions, V(ϕ) and f(ϕ), the effective 5D Newton's constant, G_5, and the characteristic energy scale of the nonconformal model, Λ∝ L^-1, need to be dynamically fixed by holographically matching the specific set of LQCD results mentioned in the first item above. Note that these parameters comprise the entire set of free parameters of the bottom-up EMD construction. * The effects of the dynamical quarks in the medium are assumed to be effectively encoded in the form of the bottom-up model parameters fixed to holographically match the QCD equation of state and second-order baryon susceptibility obtained from LQCD simulations at zero chemical potential (no explicit flavor-branes are employed for this purpose in the holographic EMD models reviewed in the present paper). More details on the procedure mentioned above will be discussed in section <ref>. Let us now comment on the main limitations of such an approach, some of which are fairly general and refer to all classical gauge-gravity models. First, gauge-gravity models such as the one mentioned above lack asymptotic freedom. 
This is expected from the original AdS-CFT correspondence since classical gravity in the bulk lacks the contributions coming both from massive string states and quantum string loops. By discarding such contributions in the bulk, one obtains a strongly coupled dual QFT at the boundary with a large number of degrees of freedom (large N_c). The consideration of deformations of the bulk geometry given by asymptotically (but not strictly) AdS solutions of classical gravity does not seem enough to claim that such deformations could in principle describe asymptotic freedom in the dual gauge theory at the boundary. The fact that η/s=1/4π for any value of temperature (and chemical potentials) in isotropic and translationally invariant gauge-gravity models with two derivatives of the metric field, conformal or not, is a clear indication that such models are strongly coupled at all energy scales. Therefore, these models miss asymptotic freedom in the ultraviolet regime. It is then clear that the ultraviolet regime of such models is in striking contradiction with perturbative QCD (expected to be relevant at high temperatures), where η/s is an order of magnitude larger than 1/4π. One possible way of improving this situation has been discussed in Ref. <cit.>. There they consider the effects of higher curvature corrections to the metric field in the bulk (i.e., higher derivative corrections to Einstein's gravity) in the presence of a dilaton field, which allows for a temperature-dependent η/s. Higher derivative corrections to the bulk action are associated with contributions coming from massive string states, which are expected to lead to a reduction of the effective coupling of the boundary QFT. However, consistently including higher derivative curvature corrections in an EMD model, taking into account the full dynamical backreaction of the higher curvature terms on the background geometry, is a very challenging task that has yet to be done. Another general limitation of gauge-gravity models for QCD is that a realistic holographic description of thermodynamic and hydrodynamic observables in the HRG confining phase is unfeasible. Standard gauge-gravity models describe large N_c systems. However, the pressure of the QCD medium in the confining hadronic phase goes as ∼ N_c^0 = 𝒪(1), while in the deconfined QGP phase it goes as ∼ N_c^2. Therefore, the pressure in hadron thermodynamics is N_c^-2 suppressed relative to the pressure in the QGP phase in a large N_c expansion. Formally, the hadron phase requires string loop corrections in the bulk in order to have a feasible holographic dual description at the boundary. Such a quantum string loop corrected holographic dual would be more complicated than simple classical gauge-gravity models. The two above limitations are common to all gauge-gravity models aimed at realistically describing QCD. Further limitations are related to the EMD constructions reviewed here. We have already alluded to the fact that such models are not intended to describe chiral symmetry breaking, confinement, and thus, hadron spectroscopy. These points, together with the intrinsic limitations of gauge-gravity models regarding the description of hadron thermodynamics and asymptotic freedom, clearly restrict the target phenomenology of such EMD models to the hot deconfined phase of QCD matter corresponding to the strongly coupled QGP produced in heavy-ion collisions. Another phenomenological limitation of EMD models is that they only describe a single conserved charge (i.e.
only one finite chemical potential is possible). Typically, finite baryon chemical potential, μ_B is considered (see Sec. <ref>). However, the hot and baryon dense QGP produced in relativistic heavy-ion collisions at low energies actually comprises three chemical potentials (μ_B, the electric charge chemical potential, μ_Q, and the strangeness chemical potential, μ_S). In equilibrium, these chemical potentials can be related to each other through the global strangeness neutrality condition realized in such collisions, due to the fact that the colliding nuclei do not carry net strangeness. The strangeness neutrality condition is ⟨ S⟩ = ⟨ N_S̅-N_S⟩ = VT^3χ̂_1^S = 0 where N_S is the number of strange quarks, N_S̅ is the number of strange antiquarks, and χ̂_1^S≡∂(P/T^4)/∂(μ_S/T) is the reduced strangeness density. Additionally, μ_Q can also be constrained by the charge to baryon number ratio of the colliding nuclei. There is a small isospin imbalance for lead-lead (Pb+Pb) collisions at the LHC and gold-gold (Au+Au) collisions at RHIC, ⟨ Q⟩/⟨ B⟩ = ⟨ N_Q - N_Q̅⟩/⟨ N_B - N_B̅⟩ = χ̂_1^Q / χ̂_1^B = Z/A ≈ 0.4 where Z is the atomic number and A is the mass number of the colliding nuclei. Thus, between strangeness neutrality and charge conservation, we can then determine μ_Q=μ_Q(T,μ_B) and μ_S=μ_S(T,μ_B) from (T,μ_B) <cit.>. These phenomenological constraints from heavy-ion collisions are not implemented in the holographic EMD constructions reviewed here, where one simply sets μ_Q=μ_S=0. We finish these introductory comments on phenomenological bottom-up holographic EMD models for the QGP by remarking that these models are partially inspired by, but not actually derived from string theory. Therefore, the actual applicability of the holographic dictionary for such constructions, and more generally, for any bottom-up gauge-gravity model, may be questioned. Indeed, the phenomenological viability of bottom-up holographic models can be checked by direct comparison with the results of the target phenomenology. The degree of agreement between holographic EMD results and several first principles LQCD calculations as well as hydrodynamic viscosities inferred from phenomenological multistage models describing several heavy-ion data, provides compelling evidence that the holographic dictionary works in practice for these models. The general reasoning outlined above may be systematically adapted to successfully describe different aspects of phenomenology, indicating that at least some of the entries in the holographic dictionary may have a broad range of validity. For instance, one could consider using gauge-gravity models to describe pure YM theory without dynamical quarks. Bottom-up dilatonic gauge-gravity models with specific functional forms for the dilaton potential may be engineered to quantitatively describe the thermodynamics of a deconfined pure gluon plasma with a first-order phase transition (although the thermodynamics of the confining phase corresponding to a gas of glueballs cannot be described by classical gauge-gravity models), besides describing also glueball spectroscopy <cit.>. §.§.§ Holographic equations of state A gauge-gravity model is usually defined by its action on the classical gravity side of the holographic duality, while different dynamic situations for its dual QFT, living at the boundary of the asymptotically AdS bulk spacetime, are related to different ansatze and boundary conditions for the bulk fields. 
For instance, given some bulk action, the vacuum state in the dual QFT is associated with solutions of the bulk equations of motion with no event horizon, which is accomplished by an ansatz for the metric field with no blackening function. Thermal states in equilibrium for the same dual QFT are often associated with equilibrium black hole (or more generally, black brane) solutions of the bulk equations of motion, which now require a blackening function in the ansatz for the metric field. Hydrodynamic transport coefficients and characteristic equilibration time scales may be evaluated from the spectra of quasinormal modes <cit.> of these black hole solutions slightly disturbed out of thermal equilibrium, while different far-from-equilibrium dynamics may be simulated by taking into account boundary conditions and ansatze for the bulk fields with nontrivial dependence on spacetime directions parallel to the boundary <cit.>. The main bottom-up holographic models reviewed in the present manuscript are specified by actions of the EMD class, whose general form in the bulk is given below <cit.>, S=∫_ℳ_5 d^5x ℒ = 1/2κ_5^2∫_ℳ_5 d^5x √(-g)[R-(∂_μϕ)^2/2 -V(ϕ) -f(ϕ)F_μν^2/4], where κ_5^2≡ 8π G_5 is the 5D gravitational constant. The bulk action (<ref>) is supplemented by two boundary terms: i) the Gibbons-Hawking-York (GHY) boundary action <cit.>, which in a manifold ℳ_5 with a boundary (as in the case of asymptotically AdS spacetimes) is required in the formulation of a well-defined variational problem with a Dirichlet boundary condition for the metric field,[By the variational principle, the variation of the gravity action must vanish for arbitrary variations δ g_μν of the metric field in the bulk. In the case of spacetime manifolds with a boundary, in calculating the variation of the metric tensor in the bulk, integration by parts in directions transverse to the boundary leads to a boundary term that is nonvanishing even by imposing the Dirichlet boundary condition that the metric is held fixed at the boundary, δ g_μν|_∂ℳ_5=0. This boundary term is exactly canceled out by the variation of the GHY action (see e.g. chapter 4 of <cit.>), allowing for the variation of the total gravity action to vanish in compatibility with Einstein's equations in a bulk spacetime with a boundary.] and ii) a boundary counterterm action employed to remove the ultraviolet divergences of the on-shell action by means of the holographic renormalization procedure <cit.>. Since these two boundary actions do not contribute to the bulk equations of motion while being required in order to write the full holographic renormalized on-shell action, which will not be needed in the calculations reviewed in the present work, we do not write their explicit form here.[The holographic renormalized on-shell action is employed in the evaluation of the pressure of the medium defined in the dual QFT at the boundary, also for the calculation of hydrodynamic transport coefficients extracted from perturbations of the bulk fields, and for the analysis of far-from-equilibrium dynamics. However, here we will not consider far-from-equilibrium calculations. Regarding the equilibrium pressure of the medium, its calculation can also be done by integrating the entropy evaluated through the Bekenstein-Hawking relation for black hole thermodynamics <cit.> over the temperature, which does not require holographic renormalization. 
Moreover, for the holographic calculation of the specific hydrodynamic transport coefficients reviewed in this work, which are related through Kubo formulas to the imaginary part of thermal retarded correlators of the relevant dual QFT operators, holographic renormalization can also be bypassed through the use of radially conserved fluxes extracted from the equations of motion for the relevant bulk perturbations — see <cit.> and also <cit.>.] The set of free parameters and functions {G_5,Λ,V(ϕ),f(ϕ)} comprised in the bottom-up EMD setup can be fixed by taking as phenomenological inputs some adequate lattice results on QCD thermodynamics at finite temperature and zero chemical potentials (and vanishing electromagnetic fields), where Λ is a characteristic energy scale of the nonconformal holographic model employed to express in powers of MeV dimensionful observables in the dual QFT, which are calculated in the gravity side of the holographic correspondence in powers of the inverse of the asymptotic AdS radius L. In practice, we simply set L=1 and trade it off as a free parameter by the energy scale Λ, without changing the number of free parameters of the model <cit.>. The set {G_5,Λ,V(ϕ)} can be fixed by the LQCD equation of state evaluated at vanishing chemical potential, while f(ϕ) may be fixed, up to its overall normalization, by the LQCD second order baryon susceptibility, also evaluated at zero chemical potential <cit.>.[However, as we are going to discuss afterward in this section, and more deeply in section <ref>, available LQCD results cannot constrain the set of free parameters of the EMD model to be fixed in a unique way.] In order to do this, one first needs to specify the adequate ansatze for the bulk EMD fields such as to describe isotropic and translationally invariant thermal states at the dual boundary quantum gauge theory (as in LQCD simulations). Since we are going to consider, in general, also the description of thermal states at finite baryon chemical potential, we take the form below for the bulk fields corresponding to isotropic and translationally invariant charged EMD black hole backgrounds in equilibrium <cit.>, ds^2 = g_μνdx^μ dx^ν = e^2A(r)[-h(r)dt^2+dx⃗^2]+dr^2/h(r), ϕ = ϕ(r), A = A_μdx^μ=Φ(r)dt, where r is the holographic radial coordinate, with the boundary at r→∞ and the black hole horizon at r=r_H, and r_H being the largest root of the blackening function, h(r_H)=0. The set of general EMD equations of motion obtained by extremizing the bulk action (<ref>) with respect to the EMD fields can be written in the following form <cit.>, R_μν-g_μν/3[V(ϕ)-f(ϕ)/4F_αβ^2]-1/2∂_μϕ∂_νϕ-f(ϕ)/2g^αβF_μαF_νβ =0, ∂_μ(√(-g)f(ϕ)g^μαg^νβF_αβ) =0, 1/√(-g)∂_μ(√(-g)g^μν∂_νϕ)-∂ V(ϕ)/∂ϕ-F_μν^2/4∂ f(ϕ)/∂ϕ =0, which, for the isotropic ansatze for the EMD fields in equilibrium given in Eqs. (<ref>), reduce to the following set of coupled ordinary differential equations of motion, ϕ”(r)+[h'(r)/h(r)+4A'(r)]ϕ'(r)-1/h(r)[∂ V(ϕ)/∂ϕ-e^-2A(r)Φ'(r)^2/2∂ f(ϕ)/∂ϕ] =0, Φ”(r)+[2A'(r)+d[lnf(ϕ)]/dϕϕ'(r)]Φ'(r) =0, A”(r)+ϕ'(r)^2/6 =0, h”(r)+4A'(r)h'(r)-e^-2A(r)f(ϕ)Φ'(r)^2 =0, h(r)[24A'(r)^2-ϕ'(r)^2]+6A'(r)h'(r)+2V(ϕ)+e^-2A(r)f(ϕ)Φ'(r)^2 =0, where Eq. (<ref>) is a constraint. These equations of motion are discussed in detail in Refs. <cit.>. They must be solved numerically, and different algorithms have been developed through the years to accomplish this task with increasing levels of refinement <cit.>. 
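To make the numerical strategy concrete, the sketch below illustrates in Python one possible minimal implementation of the horizon-to-boundary integration of the background equations of motion, written in the numerical coordinates discussed next, where the horizon sits at r_H=0 with A(r_H)=Φ(r_H)=h(r_H)=0 and h'(r_H)=1, and where the pair (ϕ_0,Φ_1) provides the remaining initial data. The values of ϕ'(r_H) and A'(r_H) follow from horizon regularity and from the constraint equation, respectively; the specific potentials, tolerances, and the example initial condition are illustrative assumptions of this sketch, not the refined algorithms of the references quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative dilaton potential and Maxwell-dilaton coupling (the specific
# parametrization quoted later in this section); treat them as assumptions.
def V(phi):
    return -12.0*np.cosh(0.63*phi) + 0.65*phi**2 - 0.05*phi**4 + 0.003*phi**6

def dV(phi):
    return -12.0*0.63*np.sinh(0.63*phi) + 1.3*phi - 0.2*phi**3 + 0.018*phi**5

def f(phi):
    u1, u2 = -0.27*phi + 0.4*phi**2, 100.0*phi
    return (1.0/np.cosh(u1) + 1.7/np.cosh(u2))/2.7

def df(phi):
    u1, u2 = -0.27*phi + 0.4*phi**2, 100.0*phi
    return (-np.tanh(u1)/np.cosh(u1)*(-0.27 + 0.8*phi)
            - 170.0*np.tanh(u2)/np.cosh(u2))/2.7

def emd_rhs(r, y):
    """First-order form of the background EMD equations of motion,
    with y = (A, A', h, h', phi, phi', Phi, Phi')."""
    A, Ap, h, hp, phi, phip, Phi, Phip = y
    phipp = (-(hp/h + 4.0*Ap)*phip
             + (dV(phi) - 0.5*np.exp(-2.0*A)*Phip**2*df(phi))/h)
    Phipp = -(2.0*Ap + df(phi)/f(phi)*phip)*Phip
    App   = -phip**2/6.0
    hpp   = -4.0*Ap*hp + np.exp(-2.0*A)*f(phi)*Phip**2
    return [Ap, App, hp, hpp, phip, phipp, Phip, Phipp]

def solve_background(phi0, Phi1, r_start=1e-8, r_max=10.0):
    """Integrate one charged EMD black-hole background in numerical coordinates
    (horizon at r = 0, with A(0) = Phi(0) = h(0) = 0 and h'(0) = 1)."""
    # Horizon regularity of the dilaton equation fixes phi'(0) (using h'(0)=1):
    phip0 = dV(phi0) - 0.5*Phi1**2*df(phi0)
    # The constraint equation evaluated at the horizon fixes A'(0):
    Ap0 = -(2.0*V(phi0) + f(phi0)*Phi1**2)/6.0
    # Leading-order Taylor step off the singular point at the horizon; the
    # production codes use higher-order near-horizon expansions instead.
    y0 = [Ap0*r_start, Ap0, r_start, 1.0,
          phi0 + phip0*r_start, phip0, Phi1*r_start, Phi1]
    return solve_ivp(emd_rhs, (r_start, r_max), y0,
                     rtol=1e-10, atol=1e-12, dense_output=True)

# Example: one background at nonzero baryon density (illustrative values).
sol = solve_background(phi0=2.0, Phi1=0.3)
```

The ultraviolet coefficients of such a numerical solution would then be extracted by matching it to the near-boundary expansions of the bulk fields, as discussed in what follows.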
Two different sets of coordinates are used in this endeavor: the so-called standard coordinates (denoted with a tilde), in which the blackening function goes to unity at the boundary, h̃(r̃→∞)=1, and also Ã(r̃→∞)→r̃, such that holographic formulas for the physical observables are expressed in standard form; and the so-called numerical coordinates (denoted without a tilde), corresponding to rescalings of the standard coordinates used to specify definite numerical values for the radial location of the black hole horizon and also for some of the initially undetermined infrared expansion coefficients of the background bulk fields close to the black hole horizon, which is required to start the numerical integration of the bulk equations of motion from the black hole horizon up to the boundary.[Notice that the part of the bulk geometry within the interior of the black hole horizon is causally disconnected from observers at the boundary.] In fact, with such rescalings, all the infrared coefficients are determined in terms of just two initially undetermined coefficients, ϕ_0 and Φ_1, which are taken as the “initial conditions” (in the holographic radial coordinate, r) for the system of differential equations of motion. Those correspond, respectively, to the value of the dilaton field and the value of the radial derivative of the Maxwell field evaluated at the black hole horizon. For the holographic calculation of physical observables at the boundary QFT, one also needs to obtain the ultraviolet expansion coefficients of the bulk fields near the boundary, far from the horizon. For the evaluation of the observables reviewed in this paper, it suffices to determine four ultraviolet expansion coefficients of the bulk fields, namely, h_0^far coming from the blackening function h(r) of the metric field, Φ_0^far and Φ_2^far coming from the nontrivial component of the Maxwell field Φ(r), and ϕ_A coming from the dilaton field ϕ(r), with the functional forms of the ultraviolet expansions being derived by solving the asymptotic forms of the equations of motion near the boundary <cit.>. In order to determine the numerical values of the ultraviolet coefficients for a given numerical solution generated by a given choice of the pair of initial conditions (ϕ_0,Φ_1), one matches the full numerical solution for the bulk fields to the functional forms of their corresponding ultraviolet expansions near the boundary. While the values of h_0^far, Φ_0^far and Φ_2^far can be easily obtained, the evaluation of ϕ_A is much more subtle and delicate due to the exponential decay of the dilaton close to the boundary <cit.>. In Refs. <cit.>, different algorithms were proposed to extract ϕ_A in a reliable and numerically stable way from the near-boundary analysis of the numerical solutions for the dilaton field, with progressively increasing levels of accuracy and precision. Moreover, in Ref. <cit.>, a new algorithm for choosing the grid of initial conditions (ϕ_0,Φ_1) was devised in order to cover the phase diagram of the dual QFT in the (T,μ_B)-plane in a much more efficient and broader way than in earlier works, like e.g. <cit.>. Together with more precise fittings to LQCD results at zero chemical potential, which led to the construction of an improved version of the EMD model at finite temperature and baryon density in Ref. 
<cit.>, all the algorithmic upgrades mentioned above made it possible to obtain predictions from this improved EMD model not only for the location of the CEP <cit.>, but also for the location of the line of first-order phase transition and the calculation of several thermodynamic <cit.> and transport <cit.> observables in a broad region of the (T,μ_B)-plane, including the phase transition regions, where the numerical calculations are particularly difficult to perform due to the coexistence of competing branches of black hole solutions and the manifestation of significant noise in the numerical solutions. Before comparing thermodynamic results from different versions of the EMD model in the literature, displaying the aforementioned improvements and discussing some of their consequences for the holographic predictions regarding the structure of the QCD phase diagram in the (T,μ_B)-plane, we provide below the relevant formulas for their calculation on the gravity side of the holographic duality. The numerical solutions for the EMD fields in thermal equilibrium generated by solving the bulk equations of motion for different pairs of initial conditions (ϕ_0,Φ_1) are associated through the holographic dictionary with definite thermal states at the boundary QFT, where the temperature T, the baryon chemical potential μ_B, the entropy density s, and the baryon charge density ρ_B of the medium are given by <cit.>,[We provide the formulas in the standard coordinates (with a tilde) and in the numerical coordinates (in terms of which the numerical solutions are obtained and the relevant ultraviolet coefficients are evaluated). It is worth mentioning that <cit.> introduced three extra free parameters in the holographic model, corresponding to different energy scaling parameters for μ_B, s, and ρ_B, besides the one for T. These parameters are unnecessary as they artificially augment the number of free parameters of the bottom-up construction without a clear physical motivation. In the holographic formulas reviewed in this paper there is just a single energy scale Λ associated with the nonconformal nature of the EMD model <cit.>, as mentioned above. In this context, if an observable has energy dimension p, its formula on the gravity side of the holographic duality gets multiplied by Λ^p in order to express the corresponding result in the dual QFT at the boundary in physical units of MeV^p.] T = √(-g'_t̃t̃g^r̃r̃ ')/4π|_r̃=r̃_HΛ=e^Ã(r̃_H)|h̃'(r̃_H)|/4πΛ = 1/4πϕ_A^1/ν√(h_0^far)Λ, μ_B = lim_r̃→∞Φ̃(r̃)Λ = Φ_0^far/ϕ_A^1/ν√(h_0^far)Λ, s = S/VΛ^3=A_H/4G_5VΛ^3=2π/κ_5^2e^3Ã(r̃_H)Λ^3 = 2π/κ_5^2ϕ_A^3/νΛ^3, ρ_B = lim_r̃→∞∂ℒ/∂(∂_r̃Φ̃)Λ^3 = -Φ_2^far/κ_5^2ϕ_A^3/ν√(h_0^far)Λ^3, where A_H is the area of the black hole event horizon, the prime denotes a radial derivative, and ν≡ d-Δ, with d=4 being the number of spacetime dimensions of the boundary and Δ=(d+√(d^2+4m^2L^2))/2 being the scaling dimension of the (relevant) QFT operator dual to the bulk dilaton field ϕ(r), whose mass m is obtained from the form of the dilaton potential V(ϕ), to be discussed in a moment. The dimensionless ratio χ̂_2^B≡χ_2^B/T^2≡∂^2(P/T^4)/∂(μ_B/T)^2 corresponds to the reduced second-order baryon susceptibility.
When evaluated at μ_B=0, χ̂_2^B has an integral expression given by <cit.> χ̂_2^B(T,μ_B=0)=1/16π^2s/T^31/f(0)∫_r_H^∞dr e^-2A(r)f(ϕ(r))^-1, which is to be evaluated over EMD backgrounds generated with the initial condition Φ_1=0.[Although the holographic mapping (ϕ_0,Φ_1)↦(T,μ_B,s,ρ_B) is highly nontrivial <cit.>, choosing Φ_1=0 automatically provides only EMD backgrounds with μ_B=0.] In numerical calculations <cit.>, one actually takes the following substitutions in Eq. (<ref>), r_H→ r_start and ∞→ r_max, where r_start is some small number (typically r_start∼ 10^-8) employed to avoid the singular point of the EMD equations of motion at the rescaled numerical horizon r_H=0, and r_max is a numerical parametrization of the radial position of the boundary, which is ideally at r→∞. Of course, it is not possible to use infinity in numerical calculations, and in practice, r_max∼ 2 - 10 is typically enough for the numerical EMD backgrounds to reach, within a small numerical tolerance, the ultraviolet fixed point of the holographic renormalization group flow associated with the AdS_5 geometry. It must be also emphasized that Eq. (<ref>) is not valid at μ_B≠ 0. In fact, to calculate the second order baryon susceptibility at finite μ_B, we take in practice χ̂_2^B=∂(ρ_B/T^3)/∂(μ_B/T) where ρ_B is the baryon density. The pressure of the dual QFT fluid can be approximated as follows (for fixed values of μ_B), P(T, μ_B)≈∫_T_low^T dT s(T,μ_B), where T_low is the lowest value of temperature available for all solutions with different values of μ_B within the set of EMD black hole backgrounds generated with the grid of initial conditions considered. Eq. (<ref>) ceases to be a good approximation for the pressure for values of T∼ T_low.[The reason for taking a finite T_low instead of zero as the lower limit in the temperature integral of the entropy density in Eq. (<ref>) is that it is numerically difficult to obtain solutions of the EMD equations of motion at very low temperatures. For instance, T_low=2 MeV for the calculations done in Ref. <cit.>. By varying the value of T_low it is possible to numerically check the window of values for which the approximate results for the pressure remain stable within a given numerical tolerance.] The first law of thermodynamics then allows the calculation of the energy density of the medium according to ϵ(s,ρ_B) = Ts(T,μ_B)-P(T,μ_B)+μ_Bρ_B(T,μ_B), and the trace anomaly of the energy-momentum tensor (also known as the interaction measure) of the dual QFT at the boundary is I(T,μ_B) = ϵ(T,μ_B)-3P(T,μ_B). The square of the speed of sound in the medium is defined as c_s^2=(d P/d ϵ)_s/ρ_B, which can be calculated along different trajectories of constant entropy over baryon number in the (T,μ_B)-plane. For phenomenological applications in the context of heavy-ion collisions, one can rewrite this c_s^2 in terms of derivatives of T,μ_B <cit.>, [c_s^2(T,μ_B)]_s/ρ_B=ρ_B^2∂_T^2P-2sρ_B∂_T∂_μ_BP +s^2∂_μ_B^2P/(ϵ+P)[∂_T^2P∂_μ_B^2P-(∂_T∂_μ_BP)^2] that provides a much more convenient formula since most equations of state use T,μ_B as the free variables. The above expressions allow the calculation of the main thermodynamic observables characterizing the equilibrium state of the QGP. 
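As an illustration of how the boundary thermodynamics is assembled in practice from the formulas above, the following sketch maps precomputed ultraviolet coefficients of a family of backgrounds to (T, μ_B, s, ρ_B), builds the pressure by integrating the entropy density in T at fixed μ_B (with the lower limit playing the role of T_low), and obtains the energy density, the trace anomaly, and c_s^2 at μ_B=0 by finite differences. The array-based interface and the helper names are assumptions of this sketch.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Model constants (the values quoted for the second-generation improved model).
kappa5_sq = 8.0*np.pi*0.46
Lambda    = 1058.83          # MeV
nu        = 1.26706          # nu = d - Delta for this dilaton potential

def boundary_observables(phiA, h0far, Phi0far, Phi2far):
    """Map the near-boundary coefficients of a background to (T, mu_B, s, rho_B)
    in powers of MeV, following the holographic formulas quoted above."""
    T     = Lambda/(4.0*np.pi*phiA**(1.0/nu)*np.sqrt(h0far))
    mu_B  = Lambda*Phi0far/(phiA**(1.0/nu)*np.sqrt(h0far))
    s     = 2.0*np.pi*Lambda**3/(kappa5_sq*phiA**(3.0/nu))
    rho_B = -Lambda**3*Phi2far/(kappa5_sq*phiA**(3.0/nu)*np.sqrt(h0far))
    return T, mu_B, s, rho_B

def eos_along_fixed_muB(T, s, mu_B, rho_B):
    """T, s, rho_B: arrays sorted in T along a fixed-mu_B slice; mu_B: scalar.
    Returns P (up to P(T_low)), the energy density, and the trace anomaly."""
    P   = cumulative_trapezoid(s, T, initial=0.0)   # P(T, mu_B) - P(T_low, mu_B)
    eps = T*s - P + mu_B*rho_B                      # first law of thermodynamics
    I   = eps - 3.0*P                               # interaction measure
    return P, eps, I

def speed_of_sound_squared(P, eps, T):
    """c_s^2 = dP/d(eps) along the slice (adequate at mu_B = 0)."""
    return np.gradient(P, T)/np.gradient(eps, T)
```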
Particularly, in order to fix the free parameters of the EMD model, we take as phenomenological inputs state-of-the-art continuum extrapolated results from first principles LQCD simulations with 2+1 flavors and physical values of the quark masses, regarding the QCD equation of state <cit.> and the second-order baryon susceptibility <cit.>, both evaluated at finite temperature and zero chemical potential. In fact, the choice of an adequate susceptibility is what seeds the bottom-up EMD model with phenomenological information concerning the nature of the controlling state variable(s) of the medium besides the temperature.[For instance, while the baryon susceptibility is used in the present section, the magnetic susceptibility will be employed in section <ref> within the context of the anisotropic EMD model at finite temperature and magnetic field, but with zero chemical potential.] In this way, Ref. <cit.> constructed (and Refs. <cit.> later also used) a second-generation improved version of the EMD model (relative to previous constructions in the literature, namely, the original one in Refs. <cit.>, and the first-generation improved EMD model of Refs. <cit.>), which is defined by the bulk action (<ref>) with the following set of holographically fixed bottom-up parameters and functions, V(ϕ) = -12cosh(0.63 ϕ)+0.65 ϕ^2-0.05 ϕ^4+0.003 ϕ^6, κ_5^2 = 8π G_5=8π(0.46), Λ=1058.83 MeV, f(ϕ) = [sech(-0.27 ϕ+0.4 ϕ^2)+1.7 sech(100 ϕ)]/2.7. A number of observations are in order concerning the forms fixed above for the dilaton potential V(ϕ) and the Maxwell-dilaton coupling function f(ϕ). First, regarding the dilaton potential, since from the ultraviolet asymptotic expansions for the EMD fields the dilaton is known to vanish at the boundary for relevant QFT deformations <cit.>, the boundary value V(0)=-12=2Λ_AdS_5 is required in order to recover the value of the negative cosmological constant of AdS_5 in the ultraviolet regime, as Λ_AdS_d+1=-d(d-1)/2L^2 is equal to -6 for d=4 and L=1 (recall that we set here the asymptotic AdS radius to unity).[We remark that, in spite of the similar notation, the cosmological constant Λ_AdS_5=-6 has no relation to the nonconformal energy scale Λ in (<ref>).] One notices from (<ref>) that for this EMD model, the dilaton field has a mass squared given by m^2=∂_ϕ^2V(0)≈ -3.4628, which satisfies the Breitenlohner-Freedman (BF) stability bound <cit.> for massive scalar fields in asymptotically AdS backgrounds, m^2 > m^2_BF = -d^2/4L^2 = -4. Also, since the scaling dimension of the QFT operator dual to the dilaton is Δ=(d+√(d^2+4m^2L^2))/2≈ 2.73294 < d = 4 (which implies that ν≡ d-Δ≈ 1.26706), as anticipated, this is a relevant operator triggering a renormalization group flow from the AdS_5 ultraviolet fixed point towards a nonconformal state as one moves from the ultraviolet to the infrared regime of the dual QFT, or correspondingly, as one moves from the near-boundary region to the interior of the bulk on the gravity side of the holographic duality. In fact, if one wishes to introduce a relevant deformation in the dual QFT away from the conformal regime asymptotically attained in the ultraviolet, and simultaneously satisfy the BF stability bound, then one should engineer the dilaton potential such as to have Δ_BF = 2 < Δ < d = 4, or equivalently, m^2_BF = -4 < m^2 < 0.
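The quoted values of m^2, Δ, and ν, the BF bound, and the normalization f(0)=1 can be checked directly from the parametrization above; a short numerical verification is sketched below.

```python
import numpy as np

d, L = 4, 1.0

# V(phi) = -12 cosh(0.63 phi) + 0.65 phi^2 + ...  =>  m^2 = V''(0)
m2    = -12.0*0.63**2 + 2.0*0.65                   # ~ -3.4628
m2_BF = -d**2/(4.0*L**2)                           # Breitenlohner-Freedman bound
Delta = 0.5*(d + np.sqrt(d**2 + 4.0*m2*L**2))      # ~ 2.73294
nu    = d - Delta                                  # ~ 1.26706

def f(phi):
    u1, u2 = -0.27*phi + 0.4*phi**2, 100.0*phi
    return (1.0/np.cosh(u1) + 1.7/np.cosh(u2))/2.7

print(m2, m2_BF < m2 < 0.0, Delta, nu, f(0.0))     # f(0) = 1 by construction
```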
Moreover, the dilaton potential in (<ref>) monotonically decreases from its maximum at the boundary to the deep infrared of the bulk geometry, such that there are no singular points (associated with local extrema of the potential) in the bulk equations of motion between the boundary and the black hole horizon, and also, Gubser's criterion for admissible classical gravitational singularities <cit.>, V(ϕ(r_H))≤ V(ϕ(r→∞)=0)=-12, is satisfied. Second, concerning the Maxwell-dilaton coupling function, one should note from Eq. (<ref>) that the baryon susceptibility calculated at zero chemical potential cannot fix the overall normalization of f(ϕ). In (<ref>) this overall normalization was chosen such that f(0)=1, as originally proposed in <cit.>.[In practice, this choice for the overall normalization of f(ϕ) can be motivated by the fact that it allows a quantitative description of LQCD results at nonzero μ_B, as we are going to see later in this review.] Moreover, by also following <cit.>, we choose f(ϕ) such that it asymptotically goes to zero for large ϕ(r), in the infrared regime of the theory. However, differently from <cit.>, in order to obtain a quantitative description of this observable at zero chemical potential one seems to be forced to engineer a functional form for f(ϕ) such that it presents a very fast variation close to the boundary (i.e., for ϕ(r→∞)→ 0).[This is the practical reason for the term ∼ sech(100 ϕ) in (<ref>) (the numerical factor of 100 can be substituted by some other `large number' without considerably affecting the results).] This peculiar feature has also been observed in other bottom-up EMD constructions with different functional forms for f(ϕ) which have been shown to quantitatively describe χ̂_2^B(T,μ_B=0) from LQCD simulations with 2+1 flavors and physical values of the quark masses <cit.>. In Fig. <ref>, we display the improvements in the holographic fits, from three different EMD models in the literature, taking as the target data to be described the LQCD results for the reduced second-order baryon susceptibility at vanishing chemical potential — one can also notice the improvements in the lattice results (see the figure caption for the details). The profile for the Maxwell-dilaton coupling f(ϕ) in Eq. (<ref>) was engineered to produce the result in the bottom panel of this figure, by using Eq. (<ref>) evaluated over the zero chemical potential, finite temperature EMD backgrounds. Those backgrounds, in turn, are generated with the choices of the EMD parameters in Eq. (<ref>), which were fixed in order to produce the results shown in Fig. <ref> for the holographic equation of state at μ_B=0. In Fig. <ref>, the full set of LQCD results shown was used as input for the model. In particular, the holographic trace anomaly seems very difficult to match quantitatively to the corresponding LQCD results over the entire temperature interval considered. With the bottom-up EMD parameters in Eq. (<ref>) fixed for the μ_B=0 portion of the equation of state by the results displayed in Fig. <ref>, and the parameters in Eq. (<ref>) relevant for the μ_B>0 portion of the equation of state fixed by the results displayed in the bottom panel of Fig. <ref> (i.e. χ̂_2^B at μ_B=0), one can proceed to make holographic predictions for several observables relevant for the physics of the strongly coupled QGP.
Aside from the specific set of LQCD results at μ_B = 0 used to fix the free parameters of the EMD model, any other calculation follows as a legitimate prediction of the holographic setup considered. In order to populate the phase diagram of the model, several EMD black hole solutions are numerically generated with a set of initial conditions (ϕ_0,Φ_1/Φ_1^max) chosen as indicated in the two top panels of Fig. <ref> <cit.>, where Φ_1^max = √(-2V(ϕ_0)/f(ϕ_0)) is a bound on the maximum value of Φ_1, given some ϕ_0>0 (which produces only positive values for the dilaton field), such as to have asymptotically AdS_5 solutions <cit.>. The corresponding holographic EMD predictions for the QCD equation of state at finite temperature and baryon chemical potential are also shown in Fig. <ref> and compared to state-of-the-art LQCD results at finite baryon density (with μ_Q=μ_S=0, as in the holographic model) <cit.>. One notices a good quantitative agreement between the EMD holographic predictions and the lattice results for the QCD equation of state at finite (T,μ_B), except for the baryon charge density for T≳ 190 MeV with μ_B/T≳ 2. It is important to emphasize that the holographic predictions shown in Fig. <ref> were obtained from the holographic EMD model of Ref. <cit.>, which was constructed in 2017, 4 years before the publication of the lattice results of Ref. <cit.>. As far as we know, this was the first model in the literature, holographic or not, to correctly predict at the quantitative level the behavior of this state-of-the-art lattice QCD equation of state at finite temperature and baryon chemical potential. In this regard, it is also important to point out that in the same 2017 paper <cit.>, holographic predictions were put forward for higher-order baryon susceptibilities at zero chemical potential, which were quantitatively confirmed one year later by the LQCD simulations of Ref. <cit.>, as depicted in the top panel of Fig. <ref>. A broad scanning of the phase diagram of the EMD model of Ref. <cit.>, comprising not only the crossover region and the CEP originally reported in this paper, but also the line of first-order phase transition ending at the CEP, was finally obtained in Ref. <cit.>, thanks to the significant algorithmic and numerical improvements achieved in that work, which also allowed the calculation of physical observables over the phase transitions regions in the phase diagram of the model. The EMD model prediction for the QCD phase diagram in the (T,μ_B)-plane is displayed in the bottom panel of Fig. <ref>, with the predicted CEP location lying around (T,μ_B)_CEP^[1706.00455]≈(89,724) MeV. The different curves characterizing the crossover region refer to characteristic points (extrema or inflections) of different equilibrium and transport observables that evolve with μ_B such that they merge at the CEP <cit.>. The CEP location also coincides with the end of the coexistence region with multiple black hole solutions with the same values of (T,μ_B) in the phase diagram of the model, as displayed in Fig. <ref> (b). Within this coexistence region, the thermodynamically stable branch of black hole solutions refers to the backgrounds with the largest pressure (or, equivalently, the smallest free energy). In Ref. <cit.>, also the discontinuity gaps for all the considered thermodynamic observables were calculated across the first-order phase transition line. 
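A minimal sketch of how such a grid of initial conditions respecting the bound Φ_1^max(ϕ_0) might be generated is given below; the grid ranges and spacings are placeholder choices, and each resulting pair would be fed to a background solver and mapped to a point of the (T,μ_B)-plane through the boundary observables discussed above.

```python
import numpy as np

def V(phi):
    return -12.0*np.cosh(0.63*phi) + 0.65*phi**2 - 0.05*phi**4 + 0.003*phi**6

def f(phi):
    u1, u2 = -0.27*phi + 0.4*phi**2, 100.0*phi
    return (1.0/np.cosh(u1) + 1.7/np.cosh(u2))/2.7

def Phi1_max(phi0):
    """Upper bound on Phi_1 at a given phi_0 > 0 required for asymptotically
    AdS_5 backgrounds, as quoted in the text."""
    return np.sqrt(-2.0*V(phi0)/f(phi0))

# Illustrative grid of initial conditions (phi_0, Phi_1/Phi_1^max).
phi0_grid  = np.linspace(0.3, 6.0, 80)
ratio_grid = np.linspace(0.0, 0.95, 60)
initial_conditions = [(phi0, ratio*Phi1_max(phi0))
                      for phi0 in phi0_grid for ratio in ratio_grid]
```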
Before discussing transport coefficient results from holography, we close the present section with an important observation that will be further discussed in section <ref>. The functional forms of V(ϕ) and f(ϕ) are not uniquely fixed by current lattice QCD results. The very same set of LQCD results at μ_B=0 <cit.>, which was used to fix the dilaton potential and the Maxwell-dilaton coupling function for the EMD model of Refs. <cit.>, was also employed to fix different functional forms for V(ϕ) and f(ϕ) in the EMD model proposed in Ref. <cit.>. They also found a good quantitative fit to that set of LQCD results, and a very close result to that of <cit.> (Δ≈ 2.73294) for the scaling dimension of the QFT operator dual to the bulk dilaton field, namely Δ≈ 2.769. Although the EMD model of Ref. <cit.> has not been compared to LQCD results at finite μ_B, it predicts a CEP in a different location in the phase diagram, (T,μ_B)_CEP^[1702.06731]=(111.5± 0.5,611.5± 0.5) MeV. More recently, another competing EMD model was proposed in Ref. <cit.> that employed the LQCD results for the equation of state at finite temperature and μ_B=0 from the HotQCD collaboration <cit.> to fix V(ϕ). For the baryon susceptibility they used the Wuppertal-Budapest results from <cit.> to fix f(ϕ), also imposing by construction Δ=3 for the scaling dimension of the QFT operator dual to the dilaton. Up until this work from 2022 <cit.>, only the Wuppertal-Budapest LQCD results had been used. While the Wuppertal-Budapest and HotQCD collaboration results predominantly agree, there are still quantitative differences at large temperatures, and the error bars from HotQCD are slightly larger. The results from <cit.> also produced a holographic equation of state in good quantitative agreement with the state-of-the-art LQCD results at finite temperature and baryon chemical potential from Ref. <cit.>, besides a good agreement with lattice results on higher-order baryon susceptibilities <cit.>, but with yet another different location for the CEP, (T,μ_B)_CEP^[2201.02004]≈(105,555) MeV. In Fig. <ref>, we display the holographic predictions for the QCD CEP from the three competing bottom-up EMD models mentioned above, which were shown to be in quantitative agreement with available state-of-the-art LQCD results, while presenting different predictions for regions of the QCD phase diagram still out of the reach of first principles LQCD simulations. These three competing EMD models all present a very fast variation of the Maxwell-dilaton coupling function f(ϕ) near the boundary, which seems to be a rather robust feature connected to the holographic EMD description of LQCD results with 2+1 flavors and physical values of the quark masses. These results motivated the need for a more systematic approach to investigate, in a quantitative way, the structure of the different EMD predictions for the location of the QCD critical endpoint. This can be accomplished through a Bayesian analysis of holographic EMD models, and initial results will be briefly mentioned in section <ref>.
§.§.§ Holographic transport coefficients One of the most attractive features of the holographic gauge-gravity duality, when applied to the strongly coupled QGP, is that, besides the evaluation of thermodynamic observables at finite temperature and baryon density, it also allows for the calculation of transport coefficients entering as microscopic inputs into hydrodynamic calculations and also the evaluation of other microscopic properties such as partonic energy loss. These transport observables, which are of fundamental relevance for the phenomenology of the QGP produced in relativistic heavy-ion collisions, are generally determined through the holographic duality by employing two kinds of approaches, namely, * Hydrodynamic coefficients (such as the first-order shear and bulk viscosity transport coefficients <cit.> and coefficients associated with higher-order derivative expansions of the energy-momentum tensor of the boundary QFT <cit.>, besides different conductivities and diffusion coefficients associated with the transport of conserved charges <cit.>), and also the thermal production rates of photons and dileptons within the medium <cit.>, may be evaluated through the use of holographic Kubo formulas obtained via linear response theory. The Kubo formulas relate transport coefficients to the expectation values of retarded thermal correlators of gauge invariant operators at the dual QFT, which can be calculated by solving with some adequate boundary conditions linearized equations of motion for quadratic perturbations of the bulk fields defined at the level of the bulk action, with these linearized equations of motion for the perturbations being evaluated over the equilibrium background geometries holographically associated with definite thermal states at the boundary QFT;[Alternatively, some of these hydrodynamic transport coefficients can also be calculated from the spectra of quasinormal modes in different channels of holographic gauge-gravity models, see e.g. <cit.>.] * Observables associated with momentum transport and the energy loss of partons within the strongly coupled quantum fluid are generally evaluated by employing the Nambu-Goto action for strings within different setups (which may be holographically associated with probe partons traversing the medium described by the background black hole solutions) <cit.> (see also <cit.>). Let us first review some relevant EMD predictions for a few hydrodynamic transport coefficients, namely the shear viscosity, bulk viscosity, and baryon conductivity. Afterwards, we shall also briefly review some EMD results for transport observables associated with partonic energy loss. Here we will consider the calculation of homogeneous hydrodynamic transport coefficients of the hot and baryon dense quantum fluid holographically dual to the EMD model close to thermal equilibrium. The SO(3) rotation symmetry of the isotropic medium classifies into different irreducible representations (also called “channels”) the gauge and diffeomorphism invariant combinations of the linearized plane-wave EMD field perturbations at the level of the equations of motion, evaluated at zero spatial momentum <cit.>. The bulk viscosity of the boundary QFT is holographically dual to the diffeomorphism and gauge invariant bulk EMD perturbation transforming under the singlet (scalar) representation of SO(3). 
The baryon conductivity is dual to the EMD perturbations transforming under the triplet (vector) representation, and the shear viscosity is dual to the EMD perturbations transforming under the quintuplet (tensor) representation of the SO(3) rotation symmetry group of the isotropic medium. Indeed, due to the fact that these gauge and diffeomorphism invariant EMD perturbations transform under different irreducible representations of SO(3), they do not mix at the linearized level and, consequently, one obtains a single decoupled equation of motion for each of these bulk perturbations <cit.>. The tensor components of the isotropic EMD SO(3) quintuplet graviton perturbation are given by five independent combinations of components of the bulk metric field perturbation sourcing the piece of the boundary energy-momentum tensor which is traceless and transverse to the fluid flow. These components satisfy the same differential equation, corresponding to the equation of motion for a massless scalar perturbation over the background geometry considered. The equation of motion has the same form in the standard and in the numerical coordinates (as a consequence of the diffeomorphism invariance of these perturbations) <cit.>. Then, it was shown <cit.> that the shear viscosity satisfies η/s = 1/4π ∀ T>0, μ_B≥ 0, as expected since the isotropic EMD model fits into the very broad class of holographic gauge-gravity models which are translationally and rotationally invariant, besides having two derivatives of the metric field in the bulk action <cit.>. However, the natural dimensionless ratio for the shear viscosity at finite baryon densities is no longer simply η/s, but rather η T/(ϵ+P) <cit.>. This dimensionless ratio reduces to η/s when evaluated at μ_B=0, developing a nontrivial behavior as a function of (T,μ_B) at nonzero baryon densities. η T/(ϵ+P) has been analyzed in detail across the phase diagram of the EMD model in Ref. <cit.>, where it was shown that η T/(ϵ+P) decreases with increasing values of μ_B. In that work, η T/(ϵ+P) developed an inflection point and a minimum, with the former evolving toward the CEP of the model, where it acquires an infinite slope. For larger values of the baryon chemical potential and lower temperatures, η T/(ϵ+P) develops a discontinuity gap across the first order phase transition line of the model, as depicted in Fig. <ref> (a). With the overall reduction in the value of η T/(ϵ+P) with increasing μ_B, the EMD model predicts that the QGP becomes even closer to the perfect fluid limit in its baryon-dense regime. The three vector components of the EMD SO(3) triplet perturbation are associated with the spatial components of the perturbation of the bulk Maxwell field sourcing the baryon vector current at the dual boundary QFT. Again, due to the spatial isotropy of the medium, these vector components satisfy a single decoupled equation of motion. One may consider the bulk spatial Maxwell perturbation, a≡ a_i, i∈{x,y,z}, to calculate in holography the baryon conductivity, which gives the same result in any direction. The equation of motion for the vector perturbation <cit.> is a”(r,ω)+[2A'(r)+h'(r)/h(r)+∂_ϕ f(ϕ)/f(ϕ)ϕ'(r)]a'(r,ω)+e^-2A(r)/h(r)[ω^2/h(r)-f(ϕ)Φ'(r)^2]a(r,ω)=0, where ω is the frequency of the plane-wave ansatz for the Maxwell perturbation and the prime denotes the radial derivative. One must solve Eq. (<ref>) imposing the infalling wave boundary condition for the Maxwell perturbation at the background black hole horizon. 
In holography, this is equivalent to solving for the retarded thermal correlator of the boundary baryon vector current operator with the further requirement that the Maxwell perturbation is normalized to unity at the boundary <cit.>. These two boundary conditions may be systematically implemented by writing the Maxwell perturbation as follows <cit.>, a(r,ω)≡r^-iωP(r,ω)/r_max^-iωP(r_max,ω), where r_max is a numerical parametrization of the boundary (see below Eq. (<ref>)), and P(r,ω) is a regular function at the black hole horizon, whose equation of motion is obtained by substituting (<ref>) into (<ref>). The holographic Kubo formula for the baryon conductivity in the EMD model in physical units of MeV <cit.> is given by σ_B(T,μ_B)=-1/2κ_5^2ϕ_A^1/νlim_ω→01/ω(e^2A(r)h(r)f(ϕ)Im[a^*(r,ω)a'(r,ω)])|_on-shell Λ [MeV], where the term between brackets in Eq. (<ref>) is a radially conserved flux that can be calculated at any value of the radial coordinate. The details regarding the numerical procedure are discussed in <cit.>. The dimensionless ratio σ_B/T has been analyzed in detail in Ref. <cit.> where it was shown that it generically increases with the temperature, as displayed in Fig. <ref> (b). For σ_B/T there is a temperature window from T∼ 150-180 MeV where the different curves at fixed values of μ_B approximately cross. For values of temperature above this crossing window, T>180 MeV, σ_B/T decreases with increasing μ_B, whereas the opposite behavior is observed for temperatures below the crossing window T<150. One also notices that at the CEP of the model, the baryon conductivity is finite and develops an infinite slope, with a small discontinuity gap being observed across the first-order phase transition line at larger values of μ_B and lower values of T. In Ref. <cit.>, they also calculated the second-order baryon susceptibility, χ_2^B, and the baryon diffusion coefficient, D_B across the phase diagram of the EMD model. It was found that χ_2^B diverges at the critical point (a universal feature of all critical points) whereas D_B→ 0 at the CEP since D_B=σ_B/χ_2^B. The traceful and transverse piece of the boundary energy-momentum tensor T^μν is associated with the bulk viscous pressure of the medium. Note that Tr[T^μν]≠ 0 in nonconformal boundary QFTs, where the trace anomaly of T^μν is related to the bulk dilaton field. The dilaton field is introduced in the bulk action to break the conformal symmetry of the dual gauge theory at the boundary. The scalar EMD SO(3) singlet perturbation is composed by the spatial trace of the graviton and the dilaton perturbation. The singlet perturbation sources the traceful part of T^μν, being holographically related to the bulk viscosity. Denoting the singlet perturbation by ℋ, its equation of motion <cit.> is shown to be given by ℋ”+[4A'+h'/h+2ϕ”/ϕ-2A”/A']ℋ'+[e^-2Aω^2/h^2+h'/h(A”/A'-ϕ”/ϕ')+e^-2A/hϕ'(3A'∂_ϕ f(ϕ)-f(ϕ)ϕ')Φ'^2]ℋ=0, which must be solved with infalling boundary condition at the background black hole horizon and normalized to unity at the boundary. In practice this is implemented by setting, ℋ(r,ω)≡r^-iωF(r,ω)/r_max^-iωF(r_max,ω), where F(r,ω) is a regular function at the black hole horizon, whose equation of motion is obtained by substituting (<ref>) into (<ref>). The holographic Kubo formula for the bulk viscosity in the EMD model <cit.> is ζ/s(T,μ_B)=-1/36πlim_ω→01/ω(e^4A(r)h(r)ϕ'(r)^2Im[ℋ^*(r,ω)ℋ'(r,ω)]/A'(r)^2)|_on-shell, where the term between brackets in Eq. 
(<ref>) is a radially conserved flux that may be evaluated at any value of the radial coordinate. The details concerning the numerical calculations are discussed in <cit.>. At μ_B=0, the numerical results obtained using this holographic formula were checked to be the same as those obtained from the holographic formula provided in <cit.>, which follows a different approach based on the r=ϕ gauge. The latter approach, however, does not seem to be extensible to finite μ_B calculations. Similarly to the shear viscosity at μ_B>0, one can no longer use ζ/s as the natural hydrodynamic ratio, but instead the dimensionless combination ζ T/(ϵ+P), which reduces to ζ/s at μ_B=0. ζ T/(ϵ+P) was analyzed in detail in Ref. <cit.>, where it was shown that ζ T/(ϵ+P) develops a peak in the crossover region at μ_B=0. In contrast to older versions of the EMD model from Refs. <cit.>, this peak does not move toward the CEP of the model as one increases μ_B. Instead, in newer versions of the EMD model, the location of the peak in ζ T/(ϵ+P) moves to slightly higher values of T as the baryon density increases. While in the original EMD construction of Ref. <cit.> the height of the peak of ζ T/(ϵ+P) remains approximately constant as μ_B increases toward the CEP, both in the second-generation improved EMD model of Ref. <cit.> (see Fig. <ref> (c)) and in the first-generation improved model of Ref. <cit.> the magnitude of the peak of ζ T/(ϵ+P) is reduced as one increases the value of μ_B. Therefore, the behavior of the peak of ζ T/(ϵ+P) is clearly model dependent within the class of holographic EMD constructions. In Fig. <ref> (c), at different values of μ_B, ζ T/(ϵ+P) starts to develop both an inflection point and a minimum as a function of T, with both characteristic points evolving toward the CEP location as the baryon density of the medium is increased (see also the bottom panel in Fig. <ref>). At the CEP, ζ T/(ϵ+P) acquires an infinite slope, while further developing discontinuity gaps across the first-order phase transition line of the EMD model. Similarly to what happens with the shear viscosity, the magnitude of the bulk viscosity is also suppressed with increasing values of μ_B. This overall suppression of viscous effects within the strongly coupled medium may constitute a robust property of holographic EMD models seeded with lattice QCD inputs, since the same qualitative behavior has also been observed in the older versions of the EMD model of Refs. <cit.>. In Fig. <ref> (d), we show the comparison between the EMD prediction for [ζ/s](T) at μ_B=0 and the values of [ζ/s](T) extracted from recent Bayesian analyses <cit.> that simultaneously describe several experimental heavy-ion data. The holographic EMD prediction for [ζ/s](T) is in the ballpark of values favored by state-of-the-art phenomenological models. Considering that η/s (for any holographic model) is of the correct magnitude compared to the η/s extracted from experimental data, and that there is also quantitative agreement between the EMD predictions and the QCD equation of state and susceptibilities at finite (T,μ_B) (see Figs. <ref> and <ref>), there is reasonable evidence for the practical and quantitative applicability of bottom-up EMD holography as an effective model of the strongly coupled QGP produced in heavy-ion collisions. This argument will be further strengthened in section <ref>, where we will discuss the applicability of the anisotropic version of the EMD model at finite temperature and magnetic fields to the physics of the hot and magnetized QGP.
The fact that at the CEP of the EMD model the baryon conductivity and also the shear and bulk viscosities remain finite indicates that the EMD model is compatible with the model B dynamical universality class <cit.>. This seems to be a common feature of large N_c gauge theories (as in any holographic gauge-gravity model) <cit.>, and it is different from general expectations for N_c=3 QCD, where these three observables are expected to diverge at the CEP <cit.>, in compatibility with the model H dynamical universality class <cit.>. It is also informative to briefly comment on some results obtained from the calculation of the spectra of homogeneous quasinormal modes (QNMs) in the SO(3) quintuplet, triplet, and singlet channels of the EMD model <cit.>. In fact, the QNMs of asymptotically AdS black holes <cit.> encode a wide range of physical information concerning the holographic dual QFT linearly perturbed out of thermal equilibrium. The near-boundary expansions of the perturbed bulk fields typically feature a leading order non-normalizable mode and a subleading normalizable mode for each field perturbation. The leading modes source the corresponding local and gauge invariant operators at the dual boundary QFT, while the subleading modes are associated with the expectation values of these operators. If one sets the subleading modes to zero at the boundary and imposes the infalling wave condition at the black hole horizon, the corresponding solutions to the linearized equations of motions for the bulk perturbations can be used to evaluate the on-shell action and obtain the retarded thermal correlators of the dual QFT, which are associated through Kubo formulas to transport coefficients of the strongly coupled quantum fluid. For transport coefficients extracted from the imaginary part of the Green's functions, this procedure is physically equivalent to the calculation of transport coefficients through the use of radially conserved fluxes, which has been discussed before. On the other hand, since the retarded thermal correlators of the dual QFT are given by minus the ratio between the subleading and the leading modes of the bulk perturbations <cit.>, by setting these leading modes to zero at the boundary and imposing the causal infalling wave condition at the black hole horizon, one gets the poles of these Green's functions. Since the frequency eigenvalue problem for QNMs defined on asymptotically AdS spacetimes is precisely defined by the Dirichlet boundary condition corresponding to the vanishing of these leading modes at the boundary <cit.>,[Notice this is different from the calculation of transport coefficients discussed before, where these leading modes for the on-shell perturbations of the bulk fields were normalized to unity at the boundary.] one sees that the QNMs describing the exponential decay of linear perturbations of asymptotically AdS black holes holographically correspond to the poles of retarded thermal Green's functions at the dual QFT. These, in turn, describe hydrodynamic and non-hydrodynamic dispersion relations of collective excitations in the strongly coupled quantum fluid, in terms of which it is possible to calculate, respectively, some hydrodynamic transport coefficients <cit.> (in an alternative way to the more direct method of holographic Kubo formulas previously discussed) and also some upper values for characteristic equilibration times of the dual QFT linearly perturbed out of equilibrium. 
Indeed, as discussed in <cit.>, the non-hydrodynamic QNMs[Non-hydrodynamic QNMs are associated with collective excitations of the medium with nonvanishing frequencies even in the homogeneous regime of perturbations with zero wavenumber.] with the lowest absolute value of its imaginary part, corresponding to the longest-lived non-hydrodynamic excitations of the system, give upper bounds for different equilibration times of the medium close to thermal equilibrium. From the lowest homogeneous non-hydrodynamic QNMs in the SO(3) quintuplet, triplet, and singlet channels of the EMD model of Ref. <cit.>, it has been shown that the equilibration times in these different channels are very close to each other at high temperatures while developing a pronounced separation at the CEP. This result indicates that the energy-momentum tensor dual to the bulk metric field, the baryon current dual to the bulk Maxwell field, and the scalar condensate dual to the bulk dilaton field, equilibrate at considerably different rates in the critical regime of the EMD model, with the baryon current taking the longest time to approach thermal equilibrium, while the energy-momentum tensor generally equilibrates faster than the other observables, also within the regions of the phase diagram far from the criticality. Moreover, in most cases, the characteristic equilibration times of the medium decrease with increasing values of the baryon chemical potential, while strongly increasing with decreasing values of temperature. There have been various holographic calculations of transport coefficients associated with partonic energy loss within the strongly coupled quantum fluid, such as the energy loss of heavy quarks due to the heavy quark drag force <cit.>, the Langevin momentum diffusion coefficients for heavy quarks <cit.>, and the jet quenching parameter associated with the energy loss of light partons moving at the speed of light <cit.>. These energy loss transport coefficients are evaluated by considering different calculations done with a probe Nambu-Goto (NG) action for a classical string defined over the background solutions for the bulk fields. The NG action depends on √(λ_t), where the `t Hooft coupling is typically considered in holographic calculations as an extra free parameter. In principle, this parameter may be fixed in different ways by considering holographic observables calculated with the NG action compared to different kinds of phenomenological data (see e.g. Refs. <cit.>). For the class of isotropic EMD models at finite temperature and baryon density, the holographic formulas for these partonic energy loss observables were derived in Ref. <cit.>, and in Ref. <cit.>. The corresponding results for the improved EMD model were also numerically calculated across its phase diagram, including the regions with the CEP and the line of first-order phase transition. It was found that the heavy quark drag force and energy loss, the Langevin momentum diffusion, and the jet quenching parameter are all enhanced by increasing the baryon density of the medium toward the critical region of the phase diagram. In fact, faster partons are more sensitive to the temperature and baryon chemical potential of the medium. Those results indicate that there is more jet suppression and partonic energy loss in the baryon-dense regime of the fluid. All of these observables developed an infinite slope at the CEP, while displaying large discontinuity gaps across the line of first-order phase transition. In the bottom panel of Fig. 
<ref>, some crossover characteristic curves (formed by sequences of inflection points or extrema) of these observables, which converge to a single location corresponding to the CEP, are displayed together with other characteristic curves for different observables of the model — see Ref. <cit.> for details. §.§.§ Holographic Bayesian analysis The results for the EMD model discussed above rely on the choice of holographic potentials V(ϕ) and f(ϕ). That is, calculations require that suitable functional forms be provided, along with the corresponding parameters. As discussed above, several competing parametrizations for these functions can be found in the literature, but no systematic comparison between them has been performed thus far. A pressing question regarding any particular parametrization of the EMD model concerns to what extent its predictions are informed by the lattice QCD results used to fit the different parameters, and how robust they are against uncertainties in these results. Such issues can only be addressed by quantifying uncertainties in V(ϕ) and f(ϕ) and systematically comparing different parametrizations. The tools required for a systematic analysis of parameter sensitivity and uncertainty quantification in modeling the QCD equation of state can be found in the framework of Bayesian statistical inference <cit.>. In recent years, Bayesian statistics has become the state-of-the-art tool for systematically assessing models and hypotheses across high-energy physics, including neutron-star <cit.> and heavy-ion physics <cit.>. The core tenet of Bayesian inference resides in Bayes' theorem: P(M^(θ)| D) = P(D| M^(θ)) × P(M^(θ))/P(D), where D represents the data and M^(θ) is a given model with parameters θ. Equation (<ref>) follows from expressing the joint probability P(D∩ M^(θ)) in terms of the associated conditional probabilities P(M^(θ)| D) and P(D| M^(θ)). The conditional distribution P(M^(θ)| D) is called the posterior and can be used to discriminate between different parameter sets θ. It is the product of the likelihood P(D| M^(θ)), quantifying the agreement between model and data, and the prior P(M^(θ)), which assigns a priori weights to the different parameter sets to reflect prior knowledge. The denominator P(D) on the right-hand side of Eq. (<ref>) is known as the evidence and can be obtained as a normalization constant. Recently, an improved numerical implementation of the EMD model developed within the MUSES Collaboration has enabled a Bayesian analysis over lattice QCD results for the zero-density equation of state. In Eqs. (<ref>) and (<ref>), the very nonlinear character of the potentials in ϕ makes the functional forms, such as those seen in Fig. <ref>, highly sensitive to the precise parameter values. A complete Bayesian analysis is underway and will be published shortly. Here, we briefly highlight and explain the results obtained from an initial analysis. New parametric ansatze for the free functions V(ϕ) and f(ϕ) of the holographic EMD action (<ref>) are introduced to reproduce the qualitative features of Eqs. (<ref>) and (<ref>) in a way that depends more transparently on the parameter values: V(ϕ) = -12cosh[(γ_1 Δϕ_V^2 + γ_2 ϕ^2/Δϕ_V^2 + ϕ^2) ϕ], f(ϕ) = 1 - (1-A_1) [1/2 + 1/2tanh(ϕ - ϕ_1/δϕ_1)] - A_1[1/2 + 1/2tanh(ϕ - ϕ_2/δϕ_2)]. Equation (<ref>) interpolates between two different exponential slopes, γ_1 and γ_2, for ϕ≪Δϕ_V and ϕ≫Δϕ_V, respectively.
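The two ansatze above are simple enough to be transcribed directly. The short Python sketch below (ours, not part of the cited analysis) does so and checks their limiting behaviors numerically; the parameter values used are arbitrary placeholders, not the fitted values of the Bayesian study described next.

```python
import numpy as np

def V(phi, gamma1, gamma2, dphi_V):
    """Dilaton potential: interpolates between exponential slopes gamma1 (phi << dphi_V)
    and gamma2 (phi >> dphi_V)."""
    slope = (gamma1 * dphi_V**2 + gamma2 * phi**2) / (dphi_V**2 + phi**2)
    return -12.0 * np.cosh(slope * phi)

def f(phi, A1, phi1, dphi1, phi2, dphi2):
    """Maxwell-dilaton coupling: ~1 below phi1, plateau of height A1 between phi1 and phi2,
    and ~0 above phi2."""
    step1 = 0.5 + 0.5 * np.tanh((phi - phi1) / dphi1)
    step2 = 0.5 + 0.5 * np.tanh((phi - phi2) / dphi2)
    return 1.0 - (1.0 - A1) * step1 - A1 * step2

# Arbitrary placeholder parameter values, only to exercise the functional forms:
pars_V = dict(gamma1=0.6, gamma2=0.9, dphi_V=3.0)
pars_f = dict(A1=0.3, phi1=4.0, dphi1=0.5, phi2=8.0, dphi2=0.5)
phi = np.array([0.1, 6.0, 20.0])
print(V(phi, **pars_V))   # slope -> gamma1 at small phi, -> gamma2 at large phi
print(f(phi, **pars_f))   # ~1, ~A1 (plateau), ~0
```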
Equation (<ref>), on the other hand, goes from f(ϕ)≈ 1 for ϕ_1-ϕ≫δϕ_1, to a plateau of height f(ϕ)≈ A_1 for ϕ between ϕ_1 and ϕ_2, before finally going to f(ϕ)≈ 0 for ϕ-ϕ_2≫δϕ_2. The prior distribution for the parameter values was taken to be uniform within designated ranges, shown in Table <ref>, on the left. Random samples from this prior distribution were then fed into a Markov Chain Monte Carlo (MCMC) algorithm <cit.>. This MCMC implements random changes to the parameters such that the equilibrium probability distribution, to be reached after a sufficiently large number of iterations, coincides with the posterior distribution given by Eq. (<ref>). This algorithm can then be reiterated to generate a large sample of parameter sets from the posterior. Parameters are fit based on both the baryon susceptibility and the entropy density from lattice QCD at μ_B=0 <cit.>. The agreement between model and lattice results is quantified by the likelihood P(D| M^(θ)), chosen to be Gaussian. The corresponding covariance matrix is chosen according to the lattice QCD error bars while implementing auto-correlation between neighboring points. An extra parameter is introduced to gauge these correlations and is also estimated within the Bayesian inference <cit.>. The 95% confidence interval obtained from lattice QCD results in this fashion is shown in Table <ref>, on the right. Finally, parameter sets from the posterior can be used to compute predictions. The statistical distribution of predictions can then be used to quantify uncertainties stemming from the lattice QCD error bars, as well as the sensitivity to different model parameters. As a check that these predictions are compatible with lattice QCD results, Fig. <ref> compares predictions for different values of μ_B/T, shown as thin semitransparent lines, to the finite-density lattice QCD equation of state from <cit.>, shown as wide bands with the same color scheme. While it is not apparent at first sight, thousands of lines are shown over each band in Fig. <ref>. Remarkably, the zero-density equation of state constrains the model parameters so tightly that these lines accumulate in what appears to be a very thin band. Constraining the model with input from lattice QCD in this fashion, one is able to extract predictions at higher densities, and even around the QCD phase transition. Because it generates a large set of model realizations, this Bayesian analysis of the EMD model will also enable the investigation of the role of each different model parameter, both in predictions and in fitting lattice results. Perhaps even more importantly, this kind of analysis provides the possibility of assigning probabilities to predictions and hypotheses. In principle, Bayesian model selection can also be used to discriminate between different models. Overall, the combination of bottom-up holographic models with Bayesian tools thus provides a promising framework for extrapolating knowledge of the low-density and high-temperature QCD equation of state to higher densities in a partially systematic way. Because of its ability to capture the physics of the strongly coupled QGP in the crossover region, the EMD model is a particularly fitting candidate for this task. §.§ Other holographic models Although the focus of the present review is on the results from bottom-up holographic EMD models for the hot and baryon-dense QCD phase diagram, in this section, we briefly mention some results obtained from other kinds of holographic constructions.
Within the broad class of bottom-up Einstein-Dilaton constructions, but without considering the effects of flavor dynamics effectively encoded in the form of the dilaton potential matched to the corresponding LQCD results, as originally proposed in Refs. <cit.>, there is the so-called class of “Improved Holographic QCD” (ihQCD) models, originally devised in Refs. <cit.> and further reviewed in <cit.>. Since flavor dynamics is not taken into account in those ihQCD models, this class of bottom-up holographic constructions actually provides effective models for pure Yang-Mills (YM) systems, rather than for QCD. In a pure YM system at T=0, there is a linear confining potential for infinitely heavy probe quarks at large color-charge separations, as well as a mass gap in the physical spectrum of glueball excitations, both of which are well described by ihQCD models. In contrast to the deconfinement crossover observed in actual QCD with 2+1 dynamical quark flavors, pure YM theory has a first-order phase transition between a confining gas of glueballs and a deconfined phase corresponding to a pure gluon plasma. At finite temperature, the ihQCD models are able to reproduce this first-order phase transition, just as seen in pure YM theory. However, η/s=1/4π in these ihQCD models, which demonstrates that the theory is strongly coupled at all energy scales and, therefore, misses crucial properties related to asymptotic freedom in the ultraviolet. As explicitly shown in Ref. <cit.>, higher curvature corrections to ihQCD models can provide a nontrivial temperature dependence for [η/s](T), allowing this observable to acquire a profile similar to what is expected for pure YM and also for QCD matter, where [η/s](T) is expected to increase significantly with the temperature of the medium in the ultraviolet regime due to asymptotic freedom. Simple Einsteinian gauge-gravity models with two derivatives of the metric field in the bulk gravity action lack asymptotic freedom, while the consideration of higher curvature corrections for the bulk action is associated with corrections that reduce the value of the effective `t Hooft coupling of the dual QFT at the boundary. Generalizations of the original ihQCD constructions for pure YM systems that consider a very large number N_f of quark flavors, with the ratio x≡ N_f/N_c remaining finite in the holographic setup,[The number of colors N_c is always very large.] are known in the literature as V-QCD models <cit.>. The letter “V” stands for the so-called Veneziano limit of large N_c and N_f with fixed x=N_f/N_c. In such bottom-up models, the flavor dynamics are taken into account by considering the full backreaction of tachyonic flavor D-branes on the gluonic backgrounds. The V-QCD models have been employed to calculate a large number of physical observables, ranging from spectroscopy <cit.> to thermodynamic quantities <cit.> and transport coefficients <cit.>, and have also been used in some far-from-equilibrium calculations, see e.g. <cit.>. Most of these V-QCD models have been applied in the literature mainly to study the physics of neutron stars and QCD matter at high densities, see also the recent review <cit.>. The class of EMD models reviewed here may be viewed as Taylor-expanded versions of the more general class of V-QCD models with vanishing tachyon field (see the discussion in section 3.2 of Ref. <cit.>). However, it is important to stress that the details involved in the holographic constructions may lead to considerably different results.
By comparing the fitting results for the EMD model of Refs. <cit.> with the LQCD results in Fig. <ref> and in the bottom panel of Fig. <ref>, one can see that the EMD model provides a better description of first principles lattice results on QCD thermodynamics than the several different V-QCD models considered in Fig. <ref>. In particular, for the trace anomaly of the energy-momentum tensor, one notices in Fig. <ref> (a) that the different V-QCD constructions miss, even qualitatively, the correct LQCD behavior for this observable below the pseudocritical temperature. Indeed, while in actual QCD with 2+1 flavors there is no phase transition at μ_B=0 between the hadron gas and QGP regimes, but just an analytic crossover <cit.>, in the holographic V-QCD approach there is a first-order phase transition <cit.>, which is reminiscent of the ihQCD backgrounds embedded in such constructions. Therefore, keeping in mind the limitations and shortcomings stated in section <ref>, it is fair to say that the EMD class of holographic models discussed in this review remains the leading holographic approach for providing a quantitative description of lattice results on actual QCD thermodynamics with 2+1 dynamical flavors with physical quark masses, both at zero and finite baryon density. Another class of holographic models, but of top-down nature, which has been extensively studied in the literature, mainly in connection with spectroscopic properties of QCD, is the so-called Witten-Sakai-Sugimoto model <cit.> — see also <cit.> for a review.[This top-down holographic construction stems from Type IIA instead of Type IIB superstring theory. Contrary to most gauge-gravity models, the background geometries in the Witten-Sakai-Sugimoto model are not asymptotically AdS and feature a dilaton field that diverges at the boundary; consequently, the Witten-Sakai-Sugimoto model has no ultraviolet fixed point <cit.>.] This kind of holographic model has not been shown to be able to provide an accurate quantitative description of first principles lattice results on QCD thermodynamics with dynamical quark flavors. In Ref. <cit.>, the Witten-Sakai-Sugimoto approach has been employed to provide a phenomenologically realistic description of cold and dense nuclear matter at zero temperature, which is in good agreement with some known theoretical and observational constraints regarding the physics of neutron stars. See also the recent review <cit.> for a broad discussion of the holographic modeling of compact stars. § HOLOGRAPHIC MODELS FOR THE HOT AND MAGNETIZED QUARK-GLUON PLASMA The QCD phase diagram is not just a function of (T,μ_B) but also depends on the chemical potentials for strangeness (μ_S) and electric charge (μ_Q), on electromagnetic fields, on the number of flavors relevant for a given environment, etc. By varying the centrality class of heavy-ion collisions, it is possible to investigate the phase diagram of QCD in the plane of temperature and magnetic field, (T,eB). The most intense magnetic fields ever created by humankind are reached in high-energy peripheral heavy-ion collisions at RHIC (eB_max∼ 5 m_π^2∼ 0.09 GeV^2 for Au+Au collisions at center of mass energies of √(s_NN)=200 GeV with an impact parameter of b∼ 12 fm) and at the LHC (eB_max∼ 70 m_π^2∼ 1.3 GeV^2 for Pb+Pb collisions at center of mass energies of √(s_NN)=2.76 TeV with an impact parameter of b∼ 13 fm) [We note that eB = 1 GeV^2⇒ B ≈ 1.69× 10^20 G.] — see e.g. Fig. 2 in <cit.>; see also Refs. <cit.>.
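The unit conversion quoted in the footnote above can be checked directly from fundamental constants. The brief sketch below is our own convenience check (not from the cited references), using CODATA values via scipy.

```python
# Check of eB = 1 GeV^2  =>  B ~ 1.69e20 G, using SI constants.
from scipy.constants import hbar, c, e   # J*s, m/s, C

GeV_in_J = 1e9 * e                       # 1 GeV expressed in joules

def eB_GeV2_to_gauss(eB_in_GeV2):
    """Convert a field strength quoted as eB in GeV^2 (natural units) to gauss."""
    eB_SI = eB_in_GeV2 * GeV_in_J**2 / (hbar * c**2)   # e*B in SI units (J*s/m^2)
    B_tesla = eB_SI / e                                # divide out the elementary charge
    return B_tesla * 1e4                               # 1 T = 10^4 G

print(eB_GeV2_to_gauss(1.0))    # ~1.69e20 G, as quoted in the footnote
print(eB_GeV2_to_gauss(0.09))   # RHIC-scale estimate, ~1.5e19 G
```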
The study of QCD matter under the influence of strong magnetic fields is also relevant in the context of the physics of magnetars <cit.> and of the early universe <cit.>, making it a very active research field in recent years; see, e.g., <cit.>. Very intense magnetic fields are produced in the early stages of noncentral heavy-ion collisions, and they should therefore play an important role in the initial conditions. However, it is generally expected that the strength of these magnetic fields decays significantly by the time the QGP is formed, because the spectator nucleons (and the protons, in particular, which carry electric charge) quickly leave the collision zone <cit.>. Early papers argued that, by considering effects due to the electric conductivity induced in the medium <cit.> and the quantum nature of the sources of such fields <cit.>, the decay of the magnetic field may be considerably delayed within the medium. More recently, it was argued in <cit.> that an incomplete electromagnetic response of the medium to the decaying external magnetic field, associated with an induced electric current that is lower than expected from Ohm's law, leads to a strong suppression in the magnitude of the induced magnetic field in the medium (two orders of magnitude below previous estimates in the literature obtained by assuming the validity of Ohm's law). This argument may help to explain the outcome of the recent STAR isobar run <cit.>, where it was originally thought that strong magnetic fields would lead to the chiral magnetic effect. Nonetheless, it is interesting to investigate the structure of the QCD phase diagram in the (T,eB)-plane from a theoretical perspective. At low temperatures, the magnitude of the chiral condensate is enhanced with increasing magnetic fields, constituting the so-called magnetic catalysis phenomenon <cit.>. However, for higher temperatures slightly above the QCD crossover region, the inverse effect is observed, with a reduction in the magnitude of the chiral condensate and a decreasing pseudocritical crossover temperature for increasing values of the magnetic field, known as inverse magnetic catalysis (or magnetic inhibition), as found in the first principles lattice QCD simulations of Refs. <cit.>; see also <cit.>. There is also a prediction <cit.> that a first-order phase transition line ending at a critical point exists in the (T,eB)-plane of the QCD phase diagram for very high values of the magnetic field, eB∼ 4 - 10(2) GeV^2 <cit.>, although current lattice simulations <cit.> for the QCD equation of state with 2+1 flavors and physical values of the quark masses only found an analytic deconfinement crossover for values of 110 MeV < T < 300 MeV and eB ≲ 0.7 GeV^2. Various holographic models have been proposed in the literature to study different aspects of strongly coupled quantum systems under the influence of external magnetic fields, either with a more qualitative view towards different physical observables calculated from holographic methods — see e.g. <cit.> — or with a more quantitative perspective aimed at direct comparisons with results from first principles LQCD calculations — see, for instance, <cit.>.
In the present section, we focus on quantitative holographic EMD predictions for some thermodynamic and transport observables of the hot and magnetized strongly coupled QGP. §.§ Anisotropic Einstein-Maxwell-Dilaton models The first phenomenological anisotropic holographic EMD model at finite temperature with a constant external magnetic field (and μ_B=0) was proposed in Ref. <cit.>. This model generalized the isotropic approach considered in the previous section to anisotropic EMD backgrounds with the SO(3) rotation symmetry broken down to SO(2) rotations in the plane transverse to the magnetic field. The general form of the bulk action in this case is the same as in Eq. (<ref>), but the Maxwell-dilaton coupling function f(ϕ) must be different from the isotropic case at finite temperature and baryon chemical potential. In the isotropic EMD model at finite (T,μ_B), f(ϕ) effectively represents the coupling associated with the conserved baryon current, with this coupling being dynamically fixed in the holographic setup by matching the LQCD baryon susceptibility evaluated at finite temperature and zero chemical potential, as discussed in section <ref>. In the case of the anisotropic EMD model at finite (T,eB), the coupling must be associated with the electric sector, instead of the baryon sector, of the dual QFT at the boundary. Then, instead of the baryon susceptibility, the phenomenological input seeded to the holographic model to “teach” the asymptotically AdS black hole backgrounds to behave as a hot and magnetized QGP is the LQCD magnetic susceptibility evaluated at finite temperature and zero magnetic field.[In principle, one could also choose to use the electric susceptibility, instead of the magnetic susceptibility, in order to fix the Maxwell-dilaton coupling f(ϕ) for the electric sector of the dual QFT at the boundary. However, as discussed in Appendix A of Ref. <cit.>, a simple EMD model is not versatile enough to adequately cover the entire electromagnetic sector of the QGP, in the sense that if one fixes f(ϕ) by matching the LQCD electric susceptibility, one obtains a holographic prediction for the magnetic susceptibility in disagreement with the corresponding LQCD result, and vice-versa. Therefore, it seems unfeasible to obtain a simultaneously good description of QCD magnetic and electric response functions using a single EMD model. Consequently, in order to describe magnetic field-related phenomena, one chooses the magnetic susceptibility as the phenomenological input to fix f(ϕ) within the holographic EMD approach.] We shall review the main aspects of this endeavor in the next section, but before that, paralleling the discussion made in section <ref> of the improvements made through the years to the isotropic EMD model at finite (T,μ_B), we briefly comment below on the improvements also made in the construction of the anisotropic EMD model at finite (T,eB). The original construction at finite (T,eB) presented in Ref. <cit.> has the same set of free parameters {G_5,Λ,V(ϕ)} as the first generation improved isotropic EMD model of Refs. <cit.>, meaning that both models represent the same system at finite temperature when the baryon chemical potential and the magnetic field are turned off. On the other hand, as already mentioned, the Maxwell-dilaton coupling f(ϕ) for the anisotropic EMD model is different from the isotropic case. An improved version of the anisotropic EMD model was constructed in Ref. <cit.> (with this improved version also being used in Refs.
<cit.>), where the set of free parameters and functions {G_5,Λ,V(ϕ),f(ϕ)} was updated by performing a better matching procedure to more recent lattice results on the QCD equation of state and magnetic susceptibility at finite temperature and zero magnetic fields. The set of improved free parameters {G_5,Λ,V(ϕ)}, originally obtained in Ref. <cit.> for the improved anisotropic EMD model, was later employed also in the second generation improved isotropic EMD model of Refs. <cit.>. In what follows, we mainly review the results for physical observables calculated with the improved version of the anisotropic EMD model at finite (T,eB) from Refs. <cit.>. §.§.§ Anisotropic holographic thermodynamics The general EMD equations of motion obtained from the bulk action (<ref>) are given by Eqs. (<ref>) — (<ref>). The presence of a constant external magnetic field, which we take to be directed along the z-axis, breaks the SO(3) rotation symmetry of the dual QFT down to SO(2) rotations around the direction of the magnetic field. This symmetry breaking implies that the ansatz for the bulk metric field must be anisotropic when the magnetic field is turned on. Thus, for the description of a hot and magnetized fluid in thermodynamic equilibrium, we take the following anisotropic and translationally invariant charged black hole ansatze for the bulk EMD fields <cit.>, ds^2 = g_μνdx^μ dx^ν= e^2a(r)[-h(r)dt^2+dz^2]+e^2c(r)(dx^2+dy^2)+dr^2/h(r), ϕ =ϕ(r), A=A_μ dx^μ=ℬxdy ⇒ F=dA=ℬdx∧ dy, where ℬ is the constant magnetic field expressed in the numerical coordinates. By substituting the ansatze (<ref>) into the general EMD field equations (<ref>) — (<ref>), one obtains the following set of coupled ordinary differential equations of motion <cit.>, ϕ”+(2a'+2c'+h'/h)ϕ'-1/h(∂ V(ϕ)/∂ϕ+ℬ^2e^-4c/2∂ f(ϕ)/∂ϕ) =0, a”+(14/3c'+4/3h'/h)a' +8/3a'^2+2/3c'^2+2/3h'/hc' +2/3h V(ϕ)-1/6ϕ'^2 =0, c”-(10/3a'+1/3h'/h)c' +2/3c'^2-4/3a'^2-2/3h'/ha' -1/3h V(ϕ)+1/3ϕ'^2 =0, h”+(2a'+2c')h' =0, a'^2+c'^2-1/4ϕ'^2+(a'/2+c')h'/h+4a'c' +1/2h(V(ϕ)+ℬ^2e^-4c/2f(ϕ)) =0, where Eq. (<ref>) is a constraint. The steps used to numerically solve the above equations of motion for a given pair of initial conditions (ϕ_0,ℬ) are discussed in detail in Refs. <cit.> (with algorithmic and numerical improvements regarding the original approach devised in <cit.>). Similarly to the isotropic EMD model at finite temperature and baryon density discussed in section <ref>, one extracts the following set of ultraviolet expansion coefficients required for the holographic calculation of several thermodynamic observables: {h_0^far,a_0^far,c_0^far,ϕ_A} from the numerical solutions for the background anisotropic EMD fields at finite temperature and magnetic field evaluated near the boundary. From these ultraviolet coefficients one can write down the following holographic formulas for the temperature T, the electric charge e times the constant external magnetic field B at the boundary (expressed in standard coordinates), and the entropy density s (measured, respectively, in units of MeV, MeV^2, and MeV^3) <cit.>, T=1/4πϕ_A^1/ν√(h_0^far)Λ, eB=e^2(a_0^far-c_0^far)ℬ/ϕ_A^2/νΛ^2, s=2π e^2(a_0^far-c_0^far)/κ_5^2 ϕ_A^3/νΛ^3, where the energy scale Λ, as well as the 5D Newton's constant and the dilaton potential are the same as given in Eq. (<ref>). 
In order to fix the Maxwell-dilaton coupling function f(ϕ) for the anisotropic EMD model at finite temperature and magnetic field, one needs to dynamically match the holographic magnetic susceptibility at finite temperature and zero magnetic field with the corresponding LQCD result. As discussed in <cit.>, the holographic EMD formula for the regularized magnetic susceptibility evaluated at finite temperature and zero magnetic field may be written as follows in the numerical coordinates,[One should ideally take T_low=0, however, due to numerical difficulties in reaching exactly the vacuum geometry in the EMD model, we numerically subtract a zero magnetic field background geometry with a small but nonzero temperature, similarly to what was done in Eq. (<ref>) for the calculation of the pressure.] χ(T,B=0)=χ_bare(T,B=0)-χ_bare(T_low,B=0)=-1/2κ_5^2[(1/√(h_0^far)∫_r_start^r^var_max dr f(ϕ(r)))|_T,B=0-(same)|_T_low,B=0]_on-shell, where r^var_max≡√(h_0^far)[r̃^fixed_max- a_0^far+ln(ϕ_A^1/ν)], with r̃^fixed_max being a fixed ultraviolet cutoff in standard coordinates which must be chosen such that the upper limits of integration in Eq. (<ref>) satisfy r_conformal≤ r^var_max≤ r_max for all the background geometries under consideration. We remark that r_conformal is a value of the radial coordinate[Typically, r_conformal∼ 2.] where the background geometry already reached the conformal AdS_5 ultraviolet fixed point (within some numerical tolerance), and r_max≥ r_conformal is the maximum value of the radial coordinate up to which we perform the numerical integration of the bulk equations of motion. By taking as phenomenological input the LQCD magnetic susceptibility at finite temperature and zero magnetic field with 2+1 flavors and physical values of the quark masses from Ref. <cit.>, one may fix the form of the Maxwell-dilaton coupling function as follows <cit.>, f(ϕ)=0.95 sech(0.22ϕ^2-0.15ϕ-0.32), with the result displayed in Fig. <ref> (a). Also in Fig. <ref>, we show the predictions from the anisotropic EMD model at finite (T,eB) <cit.> compared to the LQCD results from <cit.> for (b) the pressure difference,[Similarly to what was done in Eq. (<ref>), one may evaluate the pressure as the temperature integral of the entropy density in (<ref>), calculated with the magnetic field held fixed. As discussed in detail in Section 2 of <cit.>, this gives the isotropic pressure in the so-called “B-scheme”, where the magnetic field is held fixed during compression, with the pressure being the response function of the system to such a compression. Correspondingly, this also gives the anisotropic longitudinal pressure in the direction of the magnetic field in the so-called “Φ-scheme”, where it is the magnetic flux that is held fixed during a compression. In the Φ-scheme, the transverse pressures (to the direction of the magnetic field) depend on the magnetization of the medium, which requires holographic renormalization of the bulk action to be evaluated through the gauge-gravity duality, and that has not been calculated in Refs. <cit.>.] Δ p(T,eB)≡ p(T,eB)-p(T=125MeV,eB), (c) the normalized entropy density s/T^3 (we also show the LQCD results from <cit.> at B=0), and (d) the crossover temperature as a function of the magnetic field, as extracted from the inflection of s/T^3. For the values of the magnetic field considered there is no actual phase transition between the hadronic and partonic regimes of the hot and magnetized QCD matter, just an analytic crossover. 
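Returning to the regularized susceptibility formula above, its evaluation on a pair of numerical backgrounds amounts, schematically, to a one-dimensional radial integration of f(ϕ(r)) followed by the low-temperature subtraction. The sketch below is our own schematic transcription (with placeholder background arrays and the quoted coupling), and it deliberately glosses over the careful choice of the matching upper limits r^var_max discussed above.

```python
import numpy as np

def f_coupling(phi):
    """Maxwell-dilaton coupling fixed by the LQCD magnetic susceptibility (formula quoted above)."""
    return 0.95 / np.cosh(0.22 * phi**2 - 0.15 * phi - 0.32)   # 0.95*sech(...)

def chi_regularized(r, phi_of_r, h0_far, r_ref, phi_ref, h0_ref, kappa5_sq):
    """chi(T, B=0) = chi_bare(T) - chi_bare(T_low), with
    chi_bare ~ -(1/(2*kappa5^2)) * (1/sqrt(h0_far)) * Int f(phi(r)) dr.
    The radial grids are assumed to already run from r_start to the matched r^var_max."""
    bare = lambda rr, pp, hh: np.trapz(f_coupling(pp), rr) / np.sqrt(hh)
    return -(bare(r, phi_of_r, h0_far) - bare(r_ref, phi_ref, h0_ref)) / (2.0 * kappa5_sq)
```

In an actual calculation, the arrays phi_of_r and phi_ref would come from the numerically integrated EMD backgrounds at the target temperature and at the low-temperature reference background, respectively.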
Contrary to the isotropic EMD model at finite (T,μ_B) from Refs. <cit.>, whose phase diagram has been thoroughly investigated, the phase diagram of the anisotropic EMD model at finite (T,eB) from Refs. <cit.> still remains largely unexplored. One challenge, however, is that the anisotropic EMD model typically requires a much larger set of background black hole solutions than the isotropic model in order to allow for smooth interpolations of physical observables as functions of T and eB. In Ref. <cit.>, the holographic anisotropic EMD model at finite (T,eB) was further employed to calculate the magnitude of the expectation value of the renormalized Polyakov loop operator <cit.>,[The holographic renormalization procedure for the calculation of the Polyakov loop involves only the on-shell Nambu-Goto (NG) action for a probe string extending from an isolated quark at the boundary up to the background black hole horizon deep into the bulk, and not the bulk action (which generates the black hole backgrounds, over which the probe string described by the NG action is defined) <cit.>.] P_r(T,eB)=|⟨L̂_P⟩_r|=e^-F_Q^r(T,eB)/T, where F_Q^r(T,eB) is the renormalized free energy of a single static heavy quark at the boundary.[The renormalization scheme at nonzero magnetic field employed in Ref. <cit.> was the same one used in the LQCD simulations of Refs. <cit.>.] In holography, this quantity depends on the `t Hooft coupling coming from the NG action, which in a bottom-up setup is taken as an extra free parameter. Since √(λ_t)=L^2/α'=(L/l_s)^2,[See the discussion in section <ref>.] where l_s is the fundamental string length and L is the asymptotic AdS radius (which is set here to unity), one expects that in the classical gauge-gravity regime of the holographic duality the `t Hooft coupling should be large, since in this limit l_s≪ L. Indeed, by matching the overall magnitude of the holographic Polyakov loop, P_r(T,eB), with the corresponding LQCD results from <cit.>, as illustrated in Fig. <ref> (e), the large value √(λ_t)=1450 was fixed in Ref. <cit.>, which hints at a nontrivial consistency between top-down theoretical expectations and bottom-up phenomenological results within this holographic approach. Furthermore, one also notices that the anisotropic EMD model provides a reasonable description of the LQCD results for the Polyakov loop in the deconfined regime of QCD matter corresponding to the strongly coupled hot and magnetized QGP, for magnetic fields up to eB≲ 1 GeV^2 with T≳ 150 MeV. Also in Ref. <cit.>, the holographic EMD prediction for the heavy quark entropy, S_Q(T,eB)=-∂ F_Q^r(T,eB)/∂ T, was computed. The ratio between any two different values of S_Q is particularly interesting because it does not depend on the extra free parameter √(λ_t) present in the holographic calculation of the Polyakov loop. Consequently, once the background black hole solutions are obtained, there are no extra free parameters to fix in such a calculation. In Fig. <ref> (f), the EMD predictions for the ratio S_Q(T,eB)/S_Q(T=200MeV,eB=0) are shown, with the result at zero magnetic field being compared to the corresponding available LQCD result from <cit.>. Interestingly enough, the EMD prediction at B=0 is in perfect quantitative agreement with the LQCD result in the deconfined regime for T≳150 MeV, while completely missing the correct behavior for the heavy quark entropy in the confined hadronic regime.
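For completeness, both heavy-quark observables just discussed are straightforward to evaluate from a tabulated free energy. The following sketch (ours, on invented mock data) computes P_r = exp(-F_Q^r/T) and S_Q = -∂F_Q^r/∂T by finite differences; as noted above, multiplying F_Q^r by an overall constant, such as the √(λ_t)-dependent normalization of the NG action, rescales S_Q but cancels in ratios of S_Q.

```python
import numpy as np

def polyakov_loop(T, F_Q):
    """Renormalized Polyakov loop P_r = exp(-F_Q^r / T)."""
    return np.exp(-F_Q / T)

def heavy_quark_entropy(T, F_Q):
    """Heavy-quark entropy S_Q = -dF_Q^r/dT, by finite differences on the T grid."""
    return -np.gradient(F_Q, T)

# Mock free-energy table (GeV vs GeV), purely illustrative:
T = np.linspace(0.15, 0.40, 26)
F_Q = 0.5 - 0.8 * (T - 0.15) ** 1.2
S_Q = heavy_quark_entropy(T, F_Q)
i0 = np.argmin(np.abs(T - 0.20))          # reference point T = 200 MeV
print(S_Q / S_Q[i0])                      # ratio S_Q(T)/S_Q(200 MeV); lambda_t drops out
```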
The disagreement with the lattice results for the Polyakov loop and the heavy quark entropy below the crossover temperature in the hadronic regime, in contrast with the quantitative agreement found above the crossover temperature in the partonic regime, occurs because the holographic EMD model is suited to describe the deconfined QGP phase of hot QCD matter, but not the confined hadronic phase. The results of the anisotropic EMD model at finite (T,eB) in Fig. <ref> and of the isotropic EMD model at finite (T,μ_B) in Figs. <ref>, <ref>, and <ref> (d), compared to first principles LQCD results on several thermodynamic observables and to transport coefficient posteriors from Bayesian analyses using heavy-ion data, comprise the main argument for the actual phenomenological applicability of EMD holography in the description of many aspects of the hot QGP produced in heavy-ion collisions. These results are interesting mainly for the following reasons: * The class of relatively simple bottom-up holographic EMD constructions reviewed here may be used to make physically reasonable predictions for the QGP, providing not only qualitative insight but also some quantitatively reliable results, which may extend beyond the current reach of first principles approaches in QCD. * As a class of bottom-up holographic constructions, the phenomenological EMD models reviewed here provide further evidence that the holographic dictionary may be useful in practice even when the precise form of the holographic dual QFT at the boundary of the higher dimensional bulk spacetime is unknown. * Even though the precise holographic dual is unknown, the results reviewed here show that this holographic dual must be some effective 4D strongly coupled QFT which very closely mimics several aspects of QCD. While EMD holography differs from QCD in several aspects, e.g., the lack of asymptotic freedom and the thermodynamic behavior in the confining hadronic regime, it is still able to capture several other key features of QCD. §.§.§ Anisotropic holographic transport coefficients The presence of an external magnetic field (or, more generally, of any source of anisotropy) in the medium splits the transport coefficients into several anisotropic components, compared to the simpler case of an isotropic medium. Holographic analyses of the anisotropic heavy quark drag forces and the Langevin momentum diffusion coefficients, and also of the anisotropic jet quenching parameters involving light partons, were carried out, e.g., in Refs. <cit.>. Additionally, anisotropic shear and bulk viscosities were analyzed in <cit.>. At this time, systematic checks of these transport coefficients have not yet been performed, since the field of relativistic magnetohydrodynamics is currently under intense development <cit.>. The purpose of the present section is to briefly review some of the main results obtained in Refs. <cit.> regarding the anisotropic EMD predictions at finite (T,eB) for some transport coefficients of the strongly coupled hot and magnetized QGP. The holographic formulas for the anisotropic heavy quark drag forces and Langevin momentum diffusion coefficients can be found in Appendix A of <cit.>, and were then applied to the anisotropic EMD model at finite (T,eB) in sections III.B and III.C of the same reference. The general conclusion[Which was shown, in section II of <cit.>, to also hold for the top-down magnetic brane model of Ref. <cit.>.]
is that energy loss and momentum diffusion for heavy quarks traversing a strongly coupled anisotropic plasma are enhanced by the presence of an external magnetic field, being larger in the transverse directions than in the direction of the magnetic field. In Ref. <cit.>, it was found that the anisotropic jet quenching parameters for light partons also display an overall enhancement with increasing values of the external magnetic field, with the phenomenon of transverse momentum broadening being larger in the transverse directions than in the direction of the magnetic field.[These conclusions were shown in <cit.> to also hold for the top-down magnetic brane model of Ref. <cit.>.] Consequently, one generally predicts more energy loss for heavy and light partons traversing a strongly coupled quantum medium in the presence of an external magnetic field. The holographic formulas for the anisotropic shear viscosities in the plane transverse to the magnetic field, η_⊥, and along the direction of the magnetic field, η_∥, were derived in <cit.> and reviewed in Appendix A of <cit.>. The anisotropic η/s ratios are then η_⊥/s = 1/4π, η_∥/s = 1/4π g_zz(r_H)/g_xx(r_H), from which one recovers the isotropic result η_⊥/s=η_∥/s≡η/s=1/4π when B=0, since in this case the background metric is isotropic, with g_zz=g_xx. At nonzero magnetic fields, only η_∥/s varies with the value of the external magnetic field B, while η_⊥/s=1/4π remains constant. In Fig. <ref>, the results for the ratio η_∥/η_⊥ in the anisotropic EMD model at finite (T,eB) are shown. The anisotropic shear viscosity is lower in the direction parallel to the magnetic field than in the transverse plane, with its magnitude being reduced as one increases the value of B. Along the direction of the external magnetic field, a strongly coupled magnetized medium therefore becomes progressively closer to the idealized perfect fluid limit as the value of the magnetic field is enhanced.[See also Ref. <cit.> for a discussion about the breaking of rotational invariance and its effects in the calculation of the shear viscosity of a p-wave superfluid model. In the case considered in <cit.>, the rotational symmetry breaking does not lead to a value of η/s below 1/4π.] § SUMMARY AND OUTLOOK In this work, we provided an up-to-date review of quantitative holographic EMD models for the hot and strongly coupled QGP produced in relativistic heavy-ion collisions. We reviewed both isotropic EMD constructions at finite temperature and baryon chemical potential with vanishing electromagnetic fields, and anisotropic EMD models at finite temperature and magnetic field with zero chemical potential. Evidence that the holographic duality can quantitatively provide reliable predictions for the hot and deconfined QGP phase of QCD, depending on the class(es) of gauge-gravity models considered and on how their free parameters are fixed by phenomenological inputs, was discussed. The following key results highlight this evidence for the reliability of the EMD predictions: * Isotropic EMD model for the (T,μ_B)-plane of QCD: in Figs. <ref> and <ref> we displayed, respectively, the holographic predictions for the equation of state at finite temperature and baryon chemical potential, and for the 6th and 8th order baryon susceptibilities at μ_B=0, compared to state-of-the-art first principles LQCD results; and in Fig.
<ref> (d), we have shown the EMD prediction for the bulk viscosity to entropy density ratio at vanishing baryon density, compared to the profiles favored by the latest phenomenological multistage models that simultaneously describe several different sets of experimental data from relativistic heavy-ion collisions. As an isotropic and translationally invariant holographic model with two derivatives of the metric field in the bulk gravity action, the model naturally encompasses a small shear viscosity, η/s=1/4π, compatible with the overall magnitude estimated for the strongly coupled QGP produced in heavy-ion collisions. A number of other holographic EMD models are currently available in the literature which have also been shown to successfully describe LQCD results at the quantitative level, such as the works presented in Refs. <cit.>. * Anisotropic EMD model for the (T,eB)-plane of QCD: in Fig. <ref> we displayed the holographic predictions for the anisotropic equation of state, the crossover transition temperature, the renormalized Polyakov loop, and the heavy quark entropy at finite temperature and magnetic field compared to the available first principles LQCD results. The holographic EMD model allows one to go beyond the current capabilities of LQCD simulations. For instance, one prediction of this model is the existence of a critical end point. While different competing EMD models differ in the predicted location of this critical point after fitting to LQCD results at μ_B=0, they all lead to the existence of a critical point in approximately the same region of the QCD phase diagram. Such a spread of critical points clearly motivates a more systematic investigation of different parametrizations of the free functions and parameters of the bottom-up class of holographic EMD models through Bayesian statistical inference. A detailed Bayesian analysis of such models is currently underway, but preliminary results were discussed in section <ref>. This Bayesian analysis considered uniform prior distributions of the free parameters. Using the LQCD results for the entropy density and the baryon susceptibility at μ_B=0 as constraints, the posterior distributions for the free parameters of the holographic EMD setup become strongly constrained, as shown in Table <ref>. Thousands of different EMD models were generated within the constrained posterior distributions, providing holographic predictions for the behavior of the QCD equation of state at finite temperature and baryon density. The resulting equation of state has remarkably thin bands, as shown in Fig. <ref>, which are in quantitative agreement with state-of-the-art lattice results for the QCD equation of state also at finite baryon density.[Some deviations exist for the baryon charge density at high temperatures and high baryon chemical potentials, as depicted in Fig. <ref>. However, that is also precisely the regime where the lattice QCD expansion scheme may begin to break down and/or weaker coupling may become relevant.] A complete analysis considering regions of the phase diagram beyond the reach of current lattice simulations, and the distribution of critical points predicted by a broader class of holographic EMD models, will be presented elsewhere. A critical assessment of the most relevant limitations and drawbacks of holographic approaches to the description of hot QCD phenomenology was also discussed in detail.
First, classical holographic gauge-gravity models with two derivatives of the metric field in the bulk gravity action lack asymptotic freedom, with the dual effective QFT at the boundary of the higher dimensional bulk spacetime being strongly coupled at all energy scales. This is explicitly manifest in the temperature-independent value of η/s=1/4π found in these models, which is in contrast to the gas-like pQCD results at asymptotically high temperatures. Instead of a trivial ultraviolet fixed point, classical holographic gauge-gravity models which are asymptotically AdS feature a strongly coupled ultraviolet fixed point, being asymptotically safe but not asymptotically free. The lack of asymptotic freedom and η/s=const are presumably tied to the neglected contributions from massive string states and quantum string loops in the classical gravity bulk theory. This can be possibly improved by considering higher derivative corrections associated with massive string states in the bulk action, which in the presence of a nontrivial dilaton background has already been shown in the literature <cit.> to produce temperature-dependent profiles for η/s in holographic models. However, the systematic construction of phenomenologically realistic and fully-backreacted dilatonic models with higher-order derivative corrections is a challenging task still not accomplished in the literature. Another very general limitation of classical holographic gauge-gravity models regards the inability to describe the thermodynamic and transport properties of the confining hadron resonance gas phase of QCD. This limitation is related to the large N_c character of classical gauge-gravity models, in which the pressure in the confining phase is largely suppressed by a multiplicative factor of ∼ N_c^-2 relatively to the deconfined QGP phase.[One very clear manifestation of such a limitation has been shown in Fig. <ref> (f), where the holographic prediction for the heavy quark entropy was found to be in perfect agreement with the corresponding LQCD results above the pseudocritical crossover temperature, while for temperatures below the crossover region the holographic heavy quark entropy suddenly completely misses the correct LQCD behavior.] In principle, this situation can be improved by considering quantum string loops contributions to the dilatonic bulk theory. However, this task is considerably more complicated than the one discussed in the previous paragraph. Specific limitations and drawbacks of the holographic EMD models reviewed here have been also identified in the literature. For instance, the strangeness neutrality condition realized in heavy-ion collisions is not implemented in the EMD model, as it only features a single chemical potential (in the case considered here, the baryon chemical potential). Moreover, in the investigation of the phase diagram of the EMD model of Refs. <cit.> no regions were found where the square of the speed of sound exceeds its conformal limit (c_s^2|_CFT=1/3), strongly indicating that such models are inadequate to describe the dense QCD equation of state of the most massive neutron stars <cit.>. As mentioned in section <ref>, the anisotropic EMD model is not versatile enough to simultaneously describe the magnetic and the electric sectors of the QGP with a single Maxwell-dilaton coupling function f(ϕ). For future work, it is important to extend dilatonic holographic approaches to simultaneously include fully backreacted effects from conserved baryon, electric, and strangeness charges. 
Such an endeavor would enable the implementation of strangeness neutrality, which is relevant for applications in heavy-ion collisions. In order to pursue this task within a consistent implementation of QCD flavor symmetry in the holographic setup, the EMD class of holographic models should be substituted by a more general class of (fully backreacted) Einstein-Yang-Mills-Dilaton (EYMD) models. Still within the class of holographic EMD models, the more complicated anisotropic EMD setups at finite temperature and magnetic field remain largely unexplored, and most of their phase diagram has yet to be investigated. Additionally, a Bayesian analysis would be another important next step to understand properties at large B fields (as in the case of the Bayesian analysis currently under development for the isotropic setup at finite baryon density). Other important developments to be pursued in the future include the consideration of rotation effects for the strongly coupled dual plasma, by taking into account more general ansatze for the bulk fields allowing for rotating and charged asymptotically AdS black holes. Also, numerical simulations of far-from-equilibrium holographic dynamics <cit.> should be further pursued, such as the consideration of holographic Bjorken flow and holographic collisions of shockwaves in the context of the phenomenologically realistic EMD models reviewed in this manuscript. § ACKNOWLEDGEMENTS This material is based upon work supported in part by the National Science Foundation under grants No. PHY-2208724 and No. PHY-2116686 and in part by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Award Numbers DE-SC0022023, DE-SC0021301, DE-SC0020633, and DE-SC0023861. This work was supported in part by the National Science Foundation (NSF) within the framework of the MUSES collaboration, under grant No. OAC-2103680. This research was supported in part by the National Science Foundation under Grant No. PHY-1748958.
http://arxiv.org/abs/2307.04387v1
20230710074833
Classification of metric fibrations
[ "Yasuhiko Asao" ]
math.AT
[ "math.AT", "math.CT", "math.MG" ]
Classification of metric fibrations Yasuhiko Asao ========================================================================================= In this paper, we study `a fibration of metric spaces' that was originally introduced by Leinster (<cit.>) in the study of the magnitude and called metric fibrations. He showed that the magnitude of a metric fibration splits into the product of those of the fiber and the base, which is analogous to the behavior of the Euler characteristic for topological fiber bundles. His idea and our approach are based on Lawvere's suggestion of viewing a metric space as an enriched category (<cit.>). Actually, the metric fibration turns out to be the restriction of the enriched Grothendieck fibrations (<cit.>) to metric spaces (<cit.>). We give a complete classification of metric fibrations by several means, which is parallel to that of topological fiber bundles. That is, the classification of metric fibrations is reduced to that of `principal fibrations', which is done by the `1-Čech cohomology' in an appropriate sense. Here we introduce the notion of torsors in the category of metric spaces, and the discussion is analogous to sheaf theory. Further, we can define the `fundamental group π^m_1(X)' of a metric space X, which is a group object in metric spaces, such that the conjugation classes of homomorphisms (π^m_1(X), ) correspond to the isomorphism classes of `principal -fibrations' over X. Namely, they are classified like topological covering spaces. § INTRODUCTION The idea of the metric fibration was first introduced by Leinster in the study of magnitude (<cit.>). The magnitude theory that he coined can be considered as a promotion of Lawvere's suggestion of viewing a metric space as a [0, ∞]-enriched category. The magnitude of a metric space is defined as the `Euler characteristic of enriched categories'. In fact, he showed that the magnitude of a metric fibration splits into the product of those of the fiber and the base (Theorem 2.3.11 of <cit.>), which is analogous to the case of topological fiber bundles. Later, the author (<cit.>) pointed out that it is actually a restriction of the enriched Grothendieck fibration (<cit.>) to metric spaces, by dealing with small categories and metric spaces from a unified viewpoint, namely as filtered set enriched categories. By this approach, we can expect to transfer ideas that are well studied on one side to the other. As an example, the following Figure 1 shows one of the simplest non-trivial metric fibrations. Note that we consider connected graphs as metric spaces by taking the shortest path metric (see also Proposition <ref>). Both graphs are metric fibrations over the complete graph K_3 with the fiber K_2, as shown in Example 5.29 of <cit.>. Further, they have the same magnitude, as pointed out in Example 3.7 of <cit.>. In Proposition 5.30 of <cit.>, it is shown that the right one is the only non-trivial metric fibration over K_3 with the fiber K_2. Here, `trivial' means that it is the cartesian product of graphs. On the other hand, any metric fibration over a four cycle graph C_4, or more generally an even cycle graph, is shown to be trivial in the same proposition. In this paper, we give a complete classification of metric fibrations by several means, which is parallel to that of topological fiber bundles.
Namely, we define `principal fibrations', `fundamental groups' and `a 1-Čech cohomology' for metric spaces, and obtain the equivalence between categories of these objects. Roughly speaking, we obtain an analogy of the following correspondence in the case of topological fiber bundles with a discrete structure group. Fiber bundles over X with structure group G@<->[d] Principal G-bundles over X (G-torsors)@<->[d] [X, BG] ≅(π_1(X), G)/ conjugation@<->[d] H^1(X, G) We explain more in detail in the following. First recall that any usual Grothendieck fibration over a small category C can be obtained from a lax functor C, which is called the Grothendieck construction (<cit.>). In <cit.>, it is shown that any metric fibration over a metric space X can be obtained from a `lax functor' X that is called metric action (Definition <ref>). Here is the category of metric spaces and Lipschitz maps. We can consider the Grothendieck and the metric fibration as the definition of fibrations via `the lifting property', while the lax functor and the metric action is the one via `the transformation functions'. More precisely, we have the following. The Grothendieck construction gives a category equivalence _X ≃_X, where we denote the category of metric actions X by _X and the category of metric fibrations over X by _X (Definitions <ref>, <ref>). We can define a subcategory _X^ of _X that consists of `principal -fibrations' (Definition <ref>). We call it a category of -torsors. On the other hand, we can also define a subcategory _X^ of _X^ that is the counterpart of _X^ (Definition <ref>). The category _X^ consists of a metric action X that takes a group , not just a metric space, as the value. Then we have the following. The Grothendieck construction gives a category equivalence _X^≃_X^. Here, a group is not just a group but is a group object of , which we call a metric group (Definition <ref>). As an example of a metric group, we construct the fundamental group π_1^m(X) of a metric space X (Definition <ref>). We also define a category (π_1^m(X), ) of homomorphisms π_1^m(X), where a morphism between homomorphisms is defined as a conjugation relation (Definition <ref>). Then we have the following. We have a category equivalence (π^m_1(X, x_0), ) ≃^_X. As a corollary, we reprove Proposition 5.30 of <cit.> in the following form. We note that the notion of a metric group is equivalent to that of a `normed group' (Proposition <ref>). For a metric group , we denote the corresponding norm of an element g ∈ by |g| ∈_≥ 0. Let C_n be an undirected n-cycle graph. Then we have π^m_1(C_n) ≅ with |1| = 1 n : odd, 0 n : even. Hence we have that _C_n^≃ (, ) n : odd, 0 n : even, for any metric group , which implies that there is only a trivial metric fibration over C_2n and that there is at most one non-trivial metric fibration over C_2n+1. Now, similarly to the topological case, we can define an `associated bundle construction' from a torsor and a metric space Y (Corollary <ref>). This construction gives the following. Suppose that Y is a bounded metric space. Then we have a category equivalence _X^ Y≃ core_X^Y, where _X^Y is the full subcategory of _X that consists of metric fibrations with the fiber Y (Definition <ref>), and we denote the core of a category by core (Definition <ref> (4)). Here, we equip the group Y of isometries on Y with a metric group structure by d_ Y(f, g) = sup_y ∈ Yd_Y(fy, gy) (Example <ref>). However, we should suppose that Y is a bounded metric space so that d_ Y is indeed a distance function. 
For the case of general metric fibrations, we should extend our arguments concerning extended metric group that allows ∞ as values of a distance function (Definition <ref>), and we obtain an essentially same but extended result (Proposition <ref>). Finally, we define a `1-Čech cohomology' ^1(X, ), which is a category, of a -torsor X (Definition <ref>). This is an analogy from the Čech cohomology constructed from the local sections of a principal bundle. Similarly to the topological case, we can construct a cocycle from a family of local sections (Proposition <ref>), and conversely we can construct a -torsor by pasting copies of 's along a cocycle (Proposition <ref>). Then we have the following from this correspondences. We have a category equivalence ^1(X; ) ≃^_X. §.§.§ Acknowledgements The author is grateful to Luigi Caputi for fruitful and helpful comments and feedbacks on the first draft of the paper. He also would like to thank Masahiko Yoshinaga for valuable discussions and comments. § CONVENTIONS In this section, we prepare terms for categories, graphs, weighted graphs and metric spaces that are well-known but may not be commonly used. §.§ Categories In this article, we suppose that categories are locally small. We denote the object class of a category C by C, and the set of all morphisms from a to b by C(a, b) for any objects a, b ∈ C. We also denote the class of all morphisms in C by C. Let C and D be categories, and F : C D be a functor. * We say that F is faithful if the map F : C(a, b) D(Fa, Fb) is injective for any objects a, b ∈ C. We say that F is full if the map F : C(a, b) D(Fa, Fb) is surjective for any objects a, b ∈ C. We also say that F is fully faithful if it is faithful and full. * We say that F is split essentially surjective if there is a family of isomorpshisms {Fc ≅ d | c ∈ C}_d ∈ D. * We say that F is a category equivalence if there exists a functor G : D C and natural isomorpshisms GF ≅ id_C and FG ≅ id_D. When there exists a category equivalence C D, we say that C and D are equivalent. * We define a groupoid C by C = C and C(a, b) = {f ∈ C(a, b) |f is an isomorphism} for any a, b ∈ C. The following are standard. If a functor F : C D is fully faithful and split essentially surjective, then it is a category equivalence. A category equivalence F : C D induces a category equivalence F : C D. For a classification of objects of a category, we often want to consider `isomorphism classes of objects' and compare it with another category. However, in general, we can't do that since the class of objects is not necessarily a set. Instead, we consider a category equivalence C D that implies a bijection between isomorphism classes of objects if they are small. §.§ Metric spaces * A quasi metric space (X, d) is a set X equipped with a function d : X _≥ 0 satisfying that * d(x, x) = 0, * d(x, x') = d(x', x), * d(x, x') + d(x', x”) ≥ d(x, x”), for any x, x', x”∈ X. * A Lipschitz map f : X Y between quasi metric spaces X and Y is a map satisfying that d_Y(fx, fx') ≤ d_X(x, x') for any x, x' ∈ X. We denote the category of quasi metric spaces and Lipschitz maps by . We call an isomorphism in an isometry. * A metric space (X, d) is a quasi metric space satisfying that * d(x, x') = 0 if and only if x = x'. We denote the full subcategory of that consists of metric spaces by . * A graph G is a pair of sets (V(G), E(G)) such that E(G) ⊂{e ∈ 2^V(G)|# e = 2}, where we denote the cardinality of a set by #. We call an element of V(G) a vertex, and an element of E(G) an edge. 
A graph homomorphism f : G H between graphs G and H is a map f : V(G) V(H) such that fe ∈ E(H) or # fe = 1 for any e ∈ E(G). We denote the category of graphs and graph homomorphisms by . * A path on a graph G is a tuple (x_0, …, x_n) ∈ V(G)^n+1 for some n≥ 0 such that {x_i, x_i+1}∈ E(G) for any 0≤ i ≤ n-1. A connected graph G is a graph such that there exists a path (x_0, …, x_n) with x_0 = x and x_n = x' for any x, x' ∈ V(G). We denote the full subcategory of that consists of connected graphs by _ conn. * A weighted graph (G, w_G) is a graph G equipped with a function w_G : E(G) _≥ 0. A weighted graph homomorphism f : G H between weighted graphs G and H is a graph homomorphism such that w_H(fe) ≤ w_G(e) for any e ∈ E(G), where we abuse that w_H(fe) = 0 if # fe = 1. We denote the category of weighted graphs and weighted graph homomorphisms by . We also denote the full subcategory of that consists of weighted graphs (G, w_G) such that the graph G is connected by _ conn. We define functors and _ conn_ conn by forgetting additional structures. We also define the functor _ conn that sends a quasi metric space (X, d) to a weighted graph (X, w_X) defined by V(X) = X, E(X) = {e ∈ 2^X|# e = 2} and w_X {x, x'} = d(x, x'). The above functors have left adjoints. We describe each functor F in the following, and they are the left adjoint functors of each functor G of the above since the unit and the counit give that FGF = F and GFG = G. * We define a functor _ conn_ conn by sending a connected graph to a weighted graph with w = 0. * We define a functor _ conn by sending a weighted graph (G, w_G) to a quasi metric space (V(G), d_G) defined by d_G(x, x') = inf∪_n≥ 0{∑_i=0^n-1w_G{x_i, x_i+1}| (x = x_0, …, x_n = x') is a path on G}. * We define a functor by sending a quasi metric space (X, d) to a metric space ( KQX, d) defined as follows. We define an equivalence relation ∼ on X by x ∼ x' if and only if d(x, x') = 0. We also define a function KQX := X/∼_≥ 0 by d([x], [x']) = d(x, x'). For a quasi metric space X, we call the metric space KQX the Kolmogorov quotient of X. * For quasi metric spaces (X, d_X) and (Y, d_Y), we define a metric space called the L^1-product (X× Y, d_X× Y) by d_X× Y((x, y), (x', y')) = d_X(x, x') + d_Y(y, y') for any x, x' ∈ X and y, y' ∈ Y. * For graphs G and H, we define a graph called the cartesian product G× H by V(G× H) = V(G)× V(H), and {(x, y), (x', y')}∈ E(G× H) if and only if one of the following holds : * x = x' and {y, y'}∈ E(H), * {x, x'}∈ E(G) and y = y', for any x, x' ∈ V(G) and y, y' ∈ V(H). * For weighted graphs (G, w_G) and (H, w_H), we define a weighted graph (G× H, w_G× H) by w_G× H{(x, y), (x', y')} = w_G{x, x'} + w_H{y, y'} for any {(x, y), (x', y')}∈ E(G× H), where G× H is the cartesian product of graphs and we abuse that w_G{x, x} = w_H{y, y} = 0. These products make each category a symmetric monoidal category. The functors _ conn_ conn and their left adjoints are strong monoidal except for the functor _ conn that is lax monoidal. For the functors and _ conn_ conn, it is obvious since they are inclusions. It is also obvious for the functor _ conn_ conn by the definition. For the functor , we define a map KQ(X× Y) KQX× KQY by [(x, y)] ↦ ([x], [y]). Then it is obviously natural and is an isometry since we have that [(x, y)]∼ [(x', y')] if and only if [x]∼ [x'] and [y]∼ [y']. 
For the functor F : _ conn, the identity on the set F(G× H) = F(G)× F(H) is an isometry since d_w_G× H((x, y), (x', y')) = inf∪_n≥ 0{∑_i=0^n-1w_G× H{(x_i, y_i), (x_i+1, y_i+1)}| ((x, y) = (x_0, y_0), …, (x_n, y_n) = (x', y')) is a path on G× H} = inf∪_n≥ 0{∑_i=0^n-1w_G{x_i, x_i+1} + w_H{y_i, y_i+1}| ((x, y) = (x_0, y_0), …, (x_n, y_n) = (x', y')) is a path on G× H} = inf∪_n≥ 0{∑_i=0^n-1w_G{x_i, x_i+1}| (x = x_0, …, x_n = x')} + inf∪_m≥ 0{∑_i=0^m-1 w_H{y_i, y_i+1}| (y = y_0, …, y_m = y')} = d_w_G(x, x') + d_w_H(y, y') = d_F(G)× F(H)((x, y), (x', y')), for any x, x' ∈ V(G) and y, y' ∈ V(H). It is obviously natural. Finally, for the functor G : _ conn, the identity on the set G(X)× G(Y) = G(X× Y) is a weighted graph homomorphism since it is an inclusion of graphs and preserves weightings. It is obviously natural. This completes the proof. * An extended quasi metric space is a set X equipped with a function d : X [0, ∞] that satisfies the same conditions for quasi metric spaces. Namely, it is a quasi metric space admitting ∞ as a value of distance. A Lipschitz map between extended quasi metric spaces is a distance non-increasing map. We denote the category of extended quasi metric spaces and Lipschitz maps by . We similarly define extended metric spaces and we denote the full subcategory of that consists of them by . * For extended quasi metric spaces X and Y, we define the L^1-product of them similarly to that of quasi metric spaces. It makes the category a symmetric monoidal category. * We define functors and by forgetting additional structures. We also define the functor similarly to the functor _ conn except that {x, x'} does not span an edge for x, x' ∈ X with d(x, x') = ∞. The following is immediate. * The functors have left adjonts. Further, all of these functors are commutative with the inclusions , , _ conn and _ conn. * The functors of (1) are strong monoidal except for the functor that is lax monoidal. § _X ≃_X In this section, we introduce two notions, the metric action and the metric fibration, and show the equivalence between them. The notion of metric fibation is originally introduced by Leinster (<cit.>) in the study of magnitude. The other was introduced by the author in <cit.>, which is the counterpart of lax functors in category theory, while the metric fibration is a generalization of the Grothendieck fibration. As written in the introduction, we can consider the Grothendieck (or metric) fibration as the definition of fibrations via `the lifting property', while the lax functor is the one via `the transformation functions'. Let X be a metric space. * A metric action F : X consists of metric spaces Fx ∈ for any x ∈ X and isometries F_xx' : Fx Fx' for any x, x' ∈ X satisfying the following for any x, x', x”∈ X : * F_xx = id_Fx and F_x'x = F_xx'^-1, * d_Fx”(F_x'x”F_xx'a, F_xx”a) ≤ d_X(x, x') + d_X(x', x”) - d_X(x, x”) for any a ∈ Fx. * A metric transformation θ : F ⟹ G consists of Lipschitz maps θ_x : Fx Gx for any x ∈ X satisfying that G_xx'θ_x = θ_x'F_xx' for any x, x' ∈ X. We can define the composition of metric transformations θ and θ' by (θ'θ)_x = θ'_xθ_x. We denote the category of metric actions X and metric transformations by _X. * Let π : E X be a Lipschitz map between metric spaces. We say that π is a metric fibration over X if it satisfies the following : For any ∈ E and x ∈ X, there uniquely exists _x ∈π^-1x such that * d_E(, _x) = d_X(π, x), * d_E(, ') = d_E(, _x) + d_E(_x, ') for any ' ∈π^-1x. We call the point _x the lift of x along . 
* For metric fibrations π : E X and π' : E' X, a morphism φ : ππ' is a Lipschitz map φ : E E' such that π'φ = π. We denote the category of metric fibrations over X and morphisms by _X. For a product of metric spaces E = X× Y, the projection X× Y X is a metric fibration. We call it a trivial metric fibration. Let π : E X be a metric fibration, and x, x' ∈ X. Then the correspondence π^-1x ∋ a ↦ a_x'∈π^-1x' is an isometry, where we equip the sets π^-1x and π^-1x' with the induced metric from E. Note that the statement is obviously true if E = ∅. We suppose that E ≠∅ in the following, and then any fiber π^-1x is non-empty. For a ∈π^-1x, we have d_E(a_x', a) = d_E(a_x', (a_x')_x) + d_E((a_x')_x, a) = d_X(x', x) + d_E((a_x')_x, a). We also have d_E(a, a_x') = d_X(x, x'). Hence we obtain that d_E((a_x')_x, a) = 0, hence (a_x')_x = a for any x, x' ∈ X. This implies that the correspondence is a bijection. Further, we have d_E(a, b_x') = d_E(a, a_x') + d_E(a_x', b_x') = d_X(x, x') + d_E(a_x', b_x') and d_E(b_x', a) = d_E(b_x', b) + d_E(b, a) = d_X(x', x) + d_E(b, a) for any a, b ∈π^-1x. Hence we obtain that d_E(a, b) = d_E(a_x', b_x') for any x, x' ∈ X and a, b ∈π^-1x, which implies that the correspondence is an isometry. This completes the proof. Let φ : ππ' be a morphism of metric fibrations. For any x, x' ∈ X and a ∈π^-1x, we have (φ a)_x' = φ a_x'. We have d_E'((φ a)_x', φ a_x') = d_E'(φ a, φ a_x') - d_X(x, x') ≤ d_E(a, a_x') - d_X(x, x') = 0, hence we obtain that (φ a)_x' = φ a_x'. This completes the proof. Let F : X be a metric action. We define a metric fibration π_F : E(F) X as follows : * E(F) = {(x, a) | a ∈ Fx, x ∈ X}, * d_E(F)((x, a), (x', b)) = d_X(x, x') + d_Fx'(F_xx'a, b), * π_F(x, a) = x. We call the above construction the Grothendieck construction. The Grothendieck construction gives a functor E : _X _X. Let θ : F ⟹ G be a metric transformation. Then we construct Lipschitz maps φ_θ : E(F) E(G) by φ_θ (x, a) = (x, θ_x a) for any x ∈ X and a ∈ Fx. It is checked that φ_θ is a Lipschitz map as follows : d_E(G)(φ_θ (x, a), φ_θ (x', b)) = d_E(G)((x, θ_x a), (x', θ_x' b)) = d_X(x, x') + d_Gx'(G_xx'θ_x a, θ_x'b) = d_X(x, x') + d_Gx'(θ_x' F_xx' a, θ_x'b) ≤ d_X(x, x') + d_Fx'( F_xx' a, b) = d_E(F)((x, a), (x', b)). Next we show that the correspondence θ↦φ_θ is functorial, that is, we have φ_ id_F = id_E(F) and φ_θ'θ = φ_θ'φ_θ for any metric transformations θ : F ⟹ G and θ' : G ⟹ H. The former is obvious and the latter is checked as follows : φ_θ'θ(x, a) = (x, (θ'θ)_x a) = (x, θ'_xθ_x a) = φ_θ'φ_θ(x, a). Finally, φ_θ is obviously a morphism of the metric fibration. This completes the proof. We have a functor F : _X _X. Let π : E X be a metric fibration. We define a metric action F_π : X by F_π x = π^-1x and (F_π)_xx'a = a_x' for any x, x' ∈ X and a∈π^-1x, where we equip the set π^-1x with the induced metric from E. It follows that (F_π)_xx = id_F_π x by the uniqueness of the lifts, and that (F_π)_xx' defines an isometry F_π x F_π x' with (F_π)_xx'^-1 = (F_π)_x'x by Lemma <ref>. Further, we have that d_F_π x”((F_π)_x'x”(F_π)_xx'a, (F_π)_xx”a) = d_F_π x”((a_x')_x”, a_x”) = d_E(a, (a_x')_x”) - d_X(x, x”) ≤ d_E(a, a_x') + d_E(a_x', (a_x')_x”) - d_X(x, x”) = d_X(x, x') + d_X(x', x”) - d_X(x, x”), for any x, x', x”∈ X and a ∈ F_π x. Hence F_π certainly defines a metric action X. Next, let φ : ππ' be a morphism of metric fibrations. We define a metric transformation θ_φ : F_π⟹ F_π' by (θ_φ)_x a = φ a for any x ∈ X and a ∈ F_πx. 
Then it satisfies that (F_π')_xx'(θ_φ)_x a = (F_π')_xx'φ a = (φ a)_x' = φ a_x' = (θ_φ)_x'(F_π)_xx', where the third line follows from Lemma <ref>, hence θ_φ certainly defines a metric transformation F_π⟹ F_π'. Note that we have θ_ id_π = id_F_π and (θ_ψφ)_xa = ψφ a = (θ_ψ)_x(θ_φ)_xa for morphisms φ and ψ, which implies the functoriality of F. This completes the proof. The following is the counterpart of the correspondence between lax functors and the Grothendieck fibrations (B1 <cit.>), and enhances Corollary 5.26 of <cit.>. The Grothendieck construction functor E : _X _X is a category equivalence. We show that FE ≅ id__X and EF ≅ id__X. It is immediate to verify FE ≅ id__X by the definition. We show that EF_π≅π for a metric fibration π : E X. Note that EF_π is a metric space consists of points (x, a) with x ∈ X and a ∈π^-1x, and we have d_EF_π((x, a), (x', a')) = d_X(x, x') + d_π^-1x'(a_x', a'). We define a map f : EF_π E by f(x, a) = a for any x ∈ X and a ∈π^-1x. Then it is obviously an isometry and preserves fibers, hence an isomorphism of metric fibrations. The naturality of this isomorphism is obvious. This completes the proof. Note that the trivial metric fibration corresponds to the constant metric action, that is F_xx'= id for any x, x' ∈ X. § THE FUNDAMENTAL METRIC GROUP OF A METRIC SPACE In this section, we give a concise introduction to metric groups. We also give a definition of metric fundamental group, which plays a role of π_1 for metric space in the classification of metric fibrations. §.§ Metric groups * A metric group is a group object in . That is, a metric space equipped with Lipschitz maps · : ×, (-)^-1 : and a point e ∈ satisfying the suitable conditions of groups. * For metric groups 𝒢 and ℋ, a homomorphism from to $̋ is a Lipschitz map$̋ that commutes with the group structure. * We denote the category of metric groups and homomorphisms by . Let (, d) be a metric group. Then * we have d(kg, kh) = d(g, h) = d(gk, hk) for any g, h, k ∈. * we have d(g, h) = d(g^-1, h^-1) for any g, h ∈. * Since the map : g kg is a Lipschitz map for any k ∈, we have d(kg, kh) ≤ d(g, h) and d(k^-1(kg), k^-1(kh)) ≤ d(kg, kh). Hence we obtain that d(kg, kh) = d(g, h). The other can be proved similarly. * By (1), we have d(g^-1, h^-1) = d(e, gh^-1) = d(h, g) = d(g, h). This completes the proof. Let (X, d) be a metric space, and let ^u X be the set of isometries f on X such that sup_x∈ Xd_X(x, fx)< ∞. We equip ^u X with a group structure by compositions. We also define a distance function on ^u X by d_^u X(f, g) = sup_x∈ X d_X(fx, gx). Then it is immediate to verify the conditions that (^u X, d_^u X) is a metric group. Note that, if the metric space X is bounded, namely we have sup_x,x'∈ X d_X(x, x')< ∞, then the group ^u X consists of all isometries on X, by which we denote X. * A normed group is a group G equipped with a map |-| : G _≥ 0 satisfying that * |g| = 0 if and only if g = e, * |gh| ≤ |g| + |h| for any g, h ∈ G. Here we denote the unit of G by e. * A normed group G is called conjugation invariant if it satisfies that |h^-1gh| = |g| for any g, h ∈ G. * A normed group G is called inverse invariant if it satisfies that |g^-1| = |g| for any g ∈ G. * For normed groups G and H, a normed homomorphism from G to H is a group homomorphism φ : G H satisfying that |φ g|≤ |g|. * We denote the category of conjugation and inverse invariant normed groups and normed homomorphisms by _ conj^-1. The categories and _ conj^-1 are equivalent. 
For a metric group , we define a conjugation and inverse invariant normed group N by * N = as a group, * |g| = d_(e, g) for any g ∈ N. Note that this construction is functorial. Conversely, we define a metric group MG from a conjugation and inverse invariant normed group G by * MG = G as a group, * d_ MG(g, h) = |h^-1g|. This construction is also functorial. It is straightforward to verify that the compositions of these functors are naturally isomorphic to the identities. This completes the proof. §.§ The fundamental metric group Let X be a metric space and x ∈ X. * For each n ≥ 0, we define a set P_n(X, x) by P_n(X, x) := {(x, x_1, …, x_n, x) ∈ X^n+2}. We also define that P(X, x) := ⋃_nP_n(X, x). * We define a connected graph G(X, x) with the vertex set P(X, x) as follows. For u, v ∈ P(X, x), an unordered pair {u, v} spans an edge if and only if it satisfies both of the following : * There is an n ≥ 0 such that u ∈ P_n(X, x) and v ∈ P_n+1(X, x). * There is a 0 ≤ j ≤ n such that u_i = v_i for 1 ≤ i ≤ j and u_i = v_i+1 for j+1 ≤ i ≤ n, where we have u = (x, u_1, …, u_n, x) and v = (x, v_1, …, v_n+1, x). * We equip the graph G(X, x) with a weighted graph structure by defining a function w_G(X, x) on edges by w_G(X, x){u, v} = d_X(v_j, v_j+1) + d_X(v_j+1, v_j+2) - d_X(v_j, v_j+2) v_j≠ v_j+2, 0 v_j = v_j+2, where we use the notations in (2). * We denote the quasi-metric space obtained from the weighted graph G(X, x) by Q(X, x). We also denote the Kolmogorov quotient of Q(X, x) by π_1^m(X, x). Let X be a metric space and x ∈ X. * The metric space π^m_1(X, x) has a metric group structure given by the concatenation defined as [(x, u_1, …, u_n, x)]∙ [(x, v_1, …, v_k, x)] = [(x, u_1, …, u_n, v_1, …, v_k, x)]. The unit is given by [(x, x)] ∈π^m_1(X, x). * For any x' ∈ X, we have an isomorphism π^m_1(X, x) ≅π^m_1(X, x') given by [(x, u_1, …, u_n, x)] ↦ [(x', x, u_1, …, u_n, x, x')]. * We first show that the weighted graph G(X, x) is a monoid object in _ conn by the concatenation. Let (u, v), (u', v') ∈ G(X, x)× G(X, x), and suppose that {(u, v), (u', v')} spans an edge. Then we have that u = u' and v ∈ P_n(X, x), v' ∈ P_n+1(X, x), or v = v' and u ∈ P_n(X, x), u' ∈ P_n+1(X, x) for some n. We also have that w_G(X, x)× G(X, x){(u, v), (u', v')} = w_G(X, x){u, u'} + w_G(X, x){v, v'}. Note that {u∙ v, u'∙ v'} spans an edge in G(X, x). Further, we have w_G(X, x){u∙ v, u'∙ v'} = w_G(X, x){u, u'} + w_G(X, x){v, v'}. Hence the concatenation map ∙ : G(X, x)× G(X, x) G(X, x) is a weighted graph homomorphism. It is immediate to verify that the identity is the element (x, x) and that the product is associative. Thus the weighted graph G(X, x) is a monoid object in _ conn, and by Proposition <ref>, π^m_1(X, x) is a monoid object in . Now we show that it is a group object, namely, any element [(x, x_0, …, x_n, x)] has the inverse [(x, x_n, …, x_0, x)]. It reduces to show that d_Q(X, x)((x, x_n, …, x_0, x_0, …, x_n x), (x, x)) = 0. However, it is obvious that the elements (x, x_n, …, x_0, x_0, …, x_n x) and (x, x) can be connected by a path that consists of edges with weight 0 in G(X, x), that implies the desired equality. This completes the proof. * It is straightforward. Let X be a metric space and x ∈ X. We call the metric group π_1^m(X, x) the fundamental metric group of X with the base point x. We sometimes omit the base point and denote it by π_1^m(X). 
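The distance in π_1^m(X, x) is defined as an infimum over zig-zag paths in the weighted graph G(X, x) and is not directly computable in general. However, deleting a single interior point of a loop traverses one edge of G(X, x), so greedily deleting points yields an upper bound on the distance from the class of a loop to the class of the constant loop (x, x), i.e. on the norm of that class. The following Python sketch implements this bound; it is only an upper bound, since paths in G(X, x) may also insert points, but on the cycle graphs C_4 and C_5 it already returns 0 and 1, matching the values computed for cycle graphs later in the paper.

def removal_cost(d, loop, j):
    # weight of the edge of G(X, x) that deletes the interior point loop[j]
    a, b, c = loop[j - 1], loop[j], loop[j + 1]
    return 0 if a == c else d[a][b] + d[b][c] - d[a][c]

def greedy_norm_bound(d, loop):
    # upper bound on the norm of [loop] in pi_1^m: greedily delete interior points
    loop, total = list(loop), 0
    while len(loop) > 2:
        j = min(range(1, len(loop) - 1), key=lambda j: removal_cost(d, loop, j))
        total += removal_cost(d, loop, j)
        del loop[j]
    return total

def cycle_metric(n):
    # shortest-path metric of the cycle graph C_n
    return [[min(abs(i - j), n - abs(i - j)) for j in range(n)] for i in range(n)]

for n in (4, 5):
    print(n, greedy_norm_bound(cycle_metric(n), list(range(n)) + [0]))   # 4 -> 0, 5 -> 1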
As just a group, π_1^m(X) is obtained as the fundamental group of a simplicial complex S_X whose n-simplices are subsets {x_0, …, x_n}⊂ X such that any distinct 3 points x_i, x_j, x_k satisfy that |Δ(x_i, x_j, x_k)| = 0 (see Definition <ref>). Note that our fundamental group π_1^m(X) is not functorial with respect to Lipschitz maps. However, it is functorial with respect to Lipschitz maps that preserve coline'ness ( |Δ(x_i, x_j, x_k)| = 0 ), in particular embedding of metric spaces. § _X^≃_X^≃ (Π^M_1(X, X_0), ) In this section, we introduce the notion of `principal -bundles' for metric spaces. We define it from two different view points, namely as a metric action and as a metric fibration, which turn out to be equivalent. As a metric action, we call it a -metric action, and as a metric fibration, we call it a -torsor. Then we show that they are classified by the conjugation classes of homomorphisms π^m_1(X, x_0). §.§ _X^≃_X^ Let X be a metric space and be a metric group. * A -metric action F : X is a metric action satisfying the following : * F_x = for any x ∈ X. * F_xx' is a left multiplication by some f_xx'∈ for any x, x' ∈ X. * Let F, G : X be -metric actions. A -metric transformation θ : F ⟹ G is a metric transformation such that each component θ_x : Fx Gx is a left multiplication by an element θ_x ∈. We denote the category of -metric actions X and -metric transformations by _X^. Apparently, _X^ is a subcategory of _X and is also a groupoid. Let G be a group and X be a metric space. We say that X is a right G-torsor if G acts on X from the right and satisfies the following : * It is free and transitive. * g : X X is an isometry for any g ∈ G. * we have d_X(x, xg) = d_X(x', x'g) for any x, x' ∈ X and g ∈ G. Let (X, d_X) be a metric space and G be a group. Suppose that X is a right G-torsor. Then there exist a distance function d_G on G and a metric group structure ·_x on X for each x ∈ X such that the map G X ; g ↦ xg gives an isomorphism of metric groups (G, d_G) ≅ (X, ·_x). Furthermore, the unit of the metric group (X, ·_x) is x. Fix a point x ∈ X. We define a map d_G : G × G _≥ 0 by d_G(f, g) = d_X(xf, xg), which is independent from the choice of x ∈ X. It is immediate to check that (G, d_G) is a metric space. Further, we have d_G(ff', gg') = d_X(xff', xgg') ≤ d_X(xff', xgf') + d_X(xgf', xgg') ≤ d_X(xf, xg) + d_X(xf', xg') = d_G(f, g) + d_G(f', g'), and d_G(f^-1, g^-1) = d_X(xf^-1, xg^-1) = d_X(x, xg^-1f) = d_X(xg, (xg)g^-1f) = d_X(xg, xf) = d_X(xf, xg) = d_G(f, g), for any f, f', g, g' ∈ G. Hence (G, d_G) is a metric group. Now we define a map G X by g ↦ xg. Then this map is an isometry by the definition. Hence we can transfer the metric group structure on G to X by this map. With respect to this group structure ·_x on X, we have x·_x x' = eg' = x' and x'·_x x = g'e = x', where we put x' = xg'. Hence x ∈ X is the unit of the group (X, ·_x). This completes the proof. Let G be a group. A metric fibration π : E X is a G-torsor over X if it satisfies the following : * G acts isometrically on E from the right, and preserves each fiber of π. * each fiber of π is a right G-torsor with respect to the above action. Let π : E X be a G-torsor, and x, x' ∈ X. Then the metric group structures on G induced from the fibers π^-1x and π^-1x' are identical. Note that, for any ∈π^-1x and f ∈, we have d_E(( f)_x', _x'f) = d_E( f, _x'f) - d_E( f, ( f)_x') = d_E(, _x') - d_E( f, ( f)_x') = d_X(x, x') - d_X(x, x') = 0, hence we obtain that ( f)_x' = _x'f. 
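The classification announced in the introduction can be verified by brute force for small cycles. The sketch below enumerates all /2-metric actions on C_3 and on C_4 (symmetric assignments f_xx' in /2 subject to the compatibility inequality in the definition of a metric action, with the fiber /2 carried as the two-point space K_2), forms the total space of each action via the Grothendieck construction, and groups the results up to isometry. It reports two isometry classes over C_3 (the two graphs of Figure 1) and a single class over C_4, in agreement with the triviality of fibrations over even cycles. The exhaustive permutation-based isometry test is, of course, only feasible for such tiny examples.

import numpy as np
from itertools import combinations, permutations, product

def cycle_metric(n):
    # shortest-path metric of the cycle graph C_n
    return np.array([[min(abs(i - j), n - abs(i - j)) for j in range(n)] for i in range(n)])

def valid_actions(dX):
    # all Z/2-metric actions: symmetric f_xx' in {0,1} with
    # d_{Z/2}(f_x'x'' + f_xx', f_xx'') <= d(x,x') + d(x',x'') - d(x,x'')
    n = len(dX)
    pairs = list(combinations(range(n), 2))
    for bits in product((0, 1), repeat=len(pairs)):
        f = {(x, x): 0 for x in range(n)}
        for (x, y), b in zip(pairs, bits):
            f[(x, y)] = f[(y, x)] = b
        if all((1 if (f[(x, y)] ^ f[(y, z)]) != f[(x, z)] else 0)
               <= dX[x, y] + dX[y, z] - dX[x, z]
               for x in range(n) for y in range(n) for z in range(n)):
            yield f

def total_space(dX, f):
    # Grothendieck construction: d((x,a),(y,b)) = d_X(x,y) + d_{Z/2}(f_xy + a, b)
    pts = [(x, a) for x in range(len(dX)) for a in (0, 1)]
    return np.array([[dX[x, y] + ((f[(x, y)] ^ a) != b) for (y, b) in pts] for (x, a) in pts])

def isometric(d1, d2):
    # brute-force isometry test, only sensible for very small spaces
    return any(np.array_equal(d1[np.ix_(p, p)], d2) for p in map(list, permutations(range(len(d1)))))

for n in (3, 4):
    dX = cycle_metric(n)
    spaces = [total_space(dX, f) for f in valid_actions(dX)]
    reps = []
    for dE in spaces:
        if not any(isometric(dE, r) for r in reps):
            reps.append(dE)
    print(f"C_{n}: {len(spaces)} Z/2-metric actions, {len(reps)} total space(s) up to isometry")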
Let d_x and d_x' be the distance function on G induced from the fibers π^-1x and π^-1x' respectively. Namely, for ∈π^-1x and f, g ∈ G, we have d_x(f, g) = d_E( f, g) and d_x'(f, g) = d_E(_x'f, _x'g). Therefore we obtain that d_x'(f, g) = d_E(_x'f, _x'g) = d_E(( f)_x', ( g)_x') = d_E( f, g) = d_x(f, g) by Lemma <ref>. This completes the proof. For a G-torsor π : E X, we can consider the group G as a metric group that is isometric to a fiber of π by Lemma <ref>. Further, such a metric structure is independent from the choice of the fiber by Lemma <ref>. Hence, in the following, we write `G-torsors' by `-torsors', where is the metric group that is the group G equipped with the above metric structure. Let π : E X and π' : E' X be -torsors. A -morphism φ : ππ' is a G-equivariant map E E' that is also a morphism of metric fibrations. We denote the category of -torsors over X and -morphisms by _X^. Note that the category _X^ is a subcategory of _X. Further, we can show that any -morphism is an isomorphism as follows : Note that for any ∈ E, x ∈ X and g ∈, we have d_E(, _xg) = d_X(π, x) + |g| by the definitions. Then the -equivariance of φ and Lemma <ref> implies that d_E'(φ, φ (_xg)) = d_E'(φ, (φ)_xg) = d_X(π' φ, x) + |g| = d_X(π, x) + |g| = d_E(, _xg), which implies that φ preserves distances. The invertibility of φ is immediate from the G-equivariance. Now we show the equivalence of -metric actions and -torsors in the following. The Grothendieck construction functor E : _X _X of Proposition <ref> restricts to a functor _X^_X^. Let F : X be a -metric action. Let E(F) be the metric fibration given by the Grothendieck construction. Note that we have d_E(F)((x, g), (x', g')) = d_X(x, x') + d_(g_xx'g, g'). We define a action on E(F) by (x, g)h = (x, gh) for any g, h ∈ and x ∈ X. Then it is obviously compatible with the projection, and also free and transitive on each fiber. We also have that d_E(F)((x, g)h, (x', g')h) = d_E(F)((x, gh), (x', g'h)) = d_X(x, x') + d_(g_xx'gh, g'h) = d_X(x, x') + d_(g_xx'g, g') = d_E(F)((x, g), (x', g')), hence it acts isometrically. Further, we have that d_E(F)((x, g), (x, g)h) = d_E(F)((x, g), (x, gh)) = d_(g, gh) = d_(e, h), hence each fiber is a right -torsor. Therefore, we obtain that E(F) is a -torsor. Let θ : F ⟹ F' be a -metric transformation. The Grothendieck construction gives a map φ_θ : E(F) E(F') by φ_θ (x, g) = (x, θ_x g), which is a morphism of metric fibrations. It is checked that φ_θ is -equivariant as follows : (φ_θ (x, g))h = (x, θ_x gh) = φ_θ (x, gh). Hence it is a -morphism. This completes the proof. The functor F : _X _X of Proposition <ref> restricts to a functor _X^_X^. Let π : E X be a -torsor. We fix points x_0 ∈ X and ∈π^-1x_0. For any x ∈ X, we equip each set π^-1x with a metric group structure isomorphic to with the unit _x by Lemma <ref>. Hence we can identify each fiber with by the map g ↦_xg for any x ∈ X. Now we put (_x)_x' = _x'g_xx'∈π^-1x' for x, x' ∈ X and g_xx'∈. Then, for any h ∈, we have d_X(x, x') = d_E(_xh, (_xh)_x') = d_E(_x, (_xh)_x'h^-1) = d_E(_x, _x'g_xx') + d_E(_x'g_xx', (_xh)_x'h^-1) = d_X(x, x') + d_E(_x'g_xx', (_xh)_x'h^-1), hence we obtain that (_xh)_x' = _x'g_xx'h. This implies that the map π^-1x π^-1x' given by lifts _xh ↦ (_xh)_x' is the left multiplication by g_xx' when we identify each fiber with as above. Hence the functor F gives a -metric action. Next, let φ : ππ' be a -morphism between -torsors π : E X and π' : E' X. It induces a Lipschitz map φ_x : π^-1x π'^-1x. 
Since fibers π^-1x and π'^-1x are idetified with and φ_x is -equivariant, we can identify φ_x with the left multiplication by φ_x_x. This implies that the functor F sends the -morphism φ to a -metric transformation between F_π and F_π'. This completes the proof. The Grothendieck construction functor _X^_X^ is a category equivalence. By Proposition <ref>, we have natural isomorphisms EF ≅ id__X and FE ≅ id__X. We should show that these isomorphisms are obtained in _X^ and _X^ when restricted to them, which is immediate. This completes the proof. §.§ _X^≃ (π^m_1(X, x_0), ) First we define the category of homomorphisms of metric groups '. Let and ' be metric groups, and let (, ') be the set of all homomorphisms '. We equip (, ') with a groupoid structure by defining (, ')(φ, ψ) = {h ∈' |φ = h^-1ψ h} for any homomorphisms φ, ψ : '. The identity on φ∈ (, ') is the unit e ∈', and the composition of morphisms h ∈ (, ')(φ, ψ) and h' ∈ (, ')(ψ, ξ) is defined by h'∘ h = h'h. Let X be a metric space and be a metric group. For each x_0 ∈ X, we have a functor A : (π^m_1(X, x_0), ) ^_X. Let φ : π^m_1(X, x_0) be a homomorphism. We define a -metric action F_φ : X by F_φ x = and (F_φ)_xx' = φ[(x_0, x', x, x_0)]· :, where we denote the left multiplication by (-)·. It is verified that this certainly defines a -metric action as follows. For any x, x' ∈ X, we have (F_φ)_xx = φ[(x_0, x, x, x_0)]· = e· = id_, and (F_φ)_x'x = φ[(x_0, x, x', x_0)]· = (φ[(x_0, x', x, x_0)])^-1· = (F_φ)_xx'^-1. Further, we have d_((F_φ)_x'x”(F_φ)_xx'g, (F_φ)_xx”g) = d_(φ[(x_0, x”, x', x_0)]φ[(x_0, x', x, x_0)], φ[(x_0, x”, x, x_0)]) = d_(φ[(x_0, x”, x', x, x_0)], φ[(x_0, x”, x, x_0)]) = d_((φ[(x_0, x”, x, x_0)])^-1φ[(x_0, x”, x', x, x_0)], e) = d_(φ[(x_0, x, x”, x', x, x_0)], e) ≤ d_π^m_1(X, x_0)([(x_0, x, x”, x', x, x_0)], [x_0, x_0]) ≤ d_X(x, x') + d_X(x', x”) - d_X(x, x”), for any x, x', x”∈ X and g∈. Let h : φψ be a morphism in (π^m_1(X, x_0), ), namely we have φ = h^-1ψ h with h ∈. Then we can construct a -metric transformation θ : F_φ⟹ F_ψ by θ_x = h· :. It satisfies that (F_ψ)_xx'θ_x = θ_x'(F_φ)_xx' since we have ψ[(x_0, x', x, x_0)]h = hφ[(x_0, x', x, x_0)]. This completes the proof. Let X be a metric space and be a metric group. For each x_0 ∈ X, we have a functor B : ^_X (π^m_1(X, x_0), ). Let F : X be a -metric action. Then we can define a homomorphism φ_F : π^m_1(X, x_0) by φ_F [(x_0, x_1, …, x_n, x_0)] = F_x_1x_0F_x_2x_1… F_x_nx_n-1F_x_0x_n, for any [(x_0, x_1, …, x_n, x_0)] ∈π^m_1(X, x_0). It is immediate to check the well-defined'ness. Let F, F' : X be -metric actions and θ : F ⟹ F' be a -metric transformation. Then we have θ_x_0^-1φ_F'[(x_0, x_1, …, x_n, x_0)]θ_x_0 = θ_x_0^-1F'_x_1x_0F'_x_2x_1… F'_x_nx_n-1F'_x_0x_nθ_x_0 = F_x_1x_0F_x_2x_1… F_x_nx_n-1F_x_0x_n = φ_F[(x_0, x_1, …, x_n, x_0)], for any [(x_0, x_1, …, x_n, x_0)] ∈π^m_1(X, x_0). Hence θ_x_0∈ gives a morphism θ_x_0 : φ_F φ_F'. This correspondence is obviously functorial. This completes the proof. The functor A : (π^m_1(X, x_0), ) ^_X of Lemma <ref> is a category equivalence. We show the natural isomorphisms BA ≅ id_ (π^m_1(X, x_0), ) and AB ≅ id_^_X. For a homomorphism φ : π^m_1(X, x_0), we have φ_F_φ[(x_0, x_1, …, x_n, x_0)] = (F_φ)_x_1x_0(F_φ)_x_2x_1… (F_φ)_x_0x_n = φ[(x_0, x_0, x_1, x_0)]φ[(x_0, x_1, x_2, x_0)]…φ[(x_0, x_n, x_0, x_0)] = φ[(x_0, x_0, x_1, x_1, … x_n, x_n, x_0, x_0)] = φ[(x_0, x_1, … x_n, x_0)], for any [(x_0, x_1, …, x_n, x_0)] ∈π^m_1(X, x_0). Hence we obtain an isomorphism BA φ = φ that is obviously natural. 
Conversely, let F : X be a -metric action. Then we have (F_φ_F)_x = and (F_φ_F)_xx' = φ_F[(x_0, x', x, x_0)] = F_x'x_0F_xx'F_x_0x. Now we define a -metric transformation θ : F_φ_F⟹ F by θ_x = F_x_0x. It is obvious that we have F_xx'θ_x=θ_x'(F_φ_F)_xx', hence it is well-defined and obviously an isomorphism. For a -metric transformation τ : F ⟹ F', we have (ABτ)_x = τ_x_0· : (F_φ_F)_x (F'_φ_F')_x by the construction. Hence the condition τ_xF_x_0x = F'_x_0xτ_x_0 of the -metric transformation implies the naturality of this isomorphism. This completes the proof. §.§ Example We give the following example of fundamental metric group. Let C_n be an undirected n-cycle graph. Then we have π^m_1(C_n) ≅ with |1| = 1 n : odd, 0 n : even. Hence we have that _C_n^≃ (, ) n : odd, 0 n : even, for any metric group , which implies that there is only a trivial metric fibration over C_2n and that there is at most one non-trivial metric fibration over C_2n+1. Let V(C_n) = {v_1, …, v_n} be the vertex set whose numbering is anti-clockwise. For C_2n, it reduces to show that [(v_1, v_2, …, v_2n, v_1)] = [(v_1, v_1)]. Since we have d_C_2n(v_i, v_j) = d_C_2n(v_i, v_k) + d_C_2n(v_k, v_j) for any i≤ k ≤ j with j-i≤ n, we obtain that [(v_1, v_2, …, v_2n, v_1)] = [(v_1, …, v_n+1, …, v_2n, v_1)] = [(v_1, v_n+1, v_1)] = [(v_1, v_1)]. For C_2n+1, the possible non-trivial element of π^m_1(C_2n+1) is a concatenation or its inverse of the element [(v_1, …, v_2n+1, v_1)]. Now we have [(v_1, …, v_2n+1, v_1)] = [(v_1, v_n+1, v_n+2, v_1)], by the same argument as above, and d_Q(C_2n+1, v_1)((v_1, v_n+1, v_n+2, v_1), (v_1, v_n+1, v_1)) = d_C_2n+1(v_n+1, v_n+2) + d_C_2n+1(v_n+2, v_1) - d_C_2n+1(v_n+1, v_1) = d_C_2n+1(v_n+1, v_n+2) = 1. Hence we obtain that |[(v_1, …, v_2n+1, v_1)]| = 1. This completes the proof. Note that the cycle graph C_n is a metric group /n with |1| = 1. Hence the examples in Figure 1 are /2-torsors, which are classified by (, /2) ≅/2. § CLASSIFICATION OF METRIC FIBRATIONS In this section, we classify general metric fibrations by fixing the base and the fiber. It is analogous to that of topological fiber bundles, namely it reduces to classifying principal bundles whose fiber is the structure group of the concerned fibration. We divide it into two cases, whether the fiber is bounded or not, since we need to consider expanded metric spaces for the unbounded case, which are essentially same although. §.§ The functor (-)^x_0 Before we show the classification, we introduce a technical functor that will be used later. For any metric action F : X and a point x_0 ∈ X, we define a metric action ^x_0 : X as follows. We define that ^x_0 x = Fx_0 and ^x_0_xx' = F_x'x_0F_xx'F_x_0x : Fx_0 Fx_0 for any x, x' ∈ X. Then it is verified that this defines a metric action as follows : We have ^x_0_xx = F_xx_0F_xxF_x_0x = id_Fx_0 = id_^x_0 x. We also have (^x_0_x'x)^-1 = (F_xx_0F_x'xF_x_0x')^-1 = F_x'x_0F_xx'F_x_0x = ^x_0_xx' and d_^x_0 x”(^x_0_x'x”^x_0_xx'a, ^x_0_xx”a) = d_Fx_0(F_x”x_0F_x'x”F_x_0x'F_x'x_0F_xx'F_x_0xa, F_x”x_0F_xx”F_x_0xa) = d_Fx”(F_x'x”F_xx'F_x_0xa, F_xx”F_x_0xa) ≤ d_X(x, x') + d_X(x', x”) - d_X(x, x”), for any x, x', x”∈ X and a ∈^x_0 x. The correspondence F ↦^x_0 defines a fully faithful functor (-)^x_0: _X _X. Further, it is restricted to a fully faithful functor _X^_X^ for any metric group . Let θ : F ⟹ G be a metric transformation. We define a metric transformation θ^x_0 : ^x_0⟹G^x_0 by θ^x_0_x = θ_x_0 : ^x_0x G^x_0x ; a ↦θ_x_0a. 
Then we have G^x_0_xx'θ^x_0_x = G_x'x_0G_xx'G_x_0xθ_x_0 = G_x'x_0G_xx'θ_xF_x_0x = G_x'x_0θ_x'F_xx'F_x_0x = θ_x_0F_x'x_0F_xx'F_x_0x = θ^x_0_xF^x_0_xx', hence this certainly defines a metric transformation. It is obvious that id_F^x_0 = id_^x_0 and (θ' θ)^x_0 = θ'^x_0θ^x_0. It is a faithful functor because G_xx_0θ_x = θ_x_0F_xx_0 implies that θ_x = θ'_x for any x ∈ X if two metric transformation θ, θ' satisfies θ_x_0 = θ'_x_0. By the definition, it is restricted to a faithful functor _X^_X^ for any metric group . Next we show the fullness. Let η : ^x_0⟹G^x_0 be a metric transformation. Then we have G^x_0_x_0xη_x_0 = η_xF^x_0_x_0x and F^x_0_x_0x = id_F_x_0, G^x_0_x_0x = id_G_x_0. Hence we obtain that η_x_0 = η_x for any x ∈ X. Now we define a metric transformation η : F ⟹ G by η_x = G_x_0xη_x_0F_xx_0 : Fx Gx. Then we have G_xx'η_x = G_xx'G_x_0xη_x_0F_xx_0 = G_x_0x'G^x_0_xx'η_xF_xx_0 = G_x_0x'η_x'F^x_0_xx'F_xx_0 = G_x_0x'η_x'F_x'x_0F_xx'F_x_0xF_xx_0 = G_x_0x'η_x_0F_x'x_0F_xx' = η_x'F_xx', hence this certainly defines a metric transformation. We obviously have (η)^x_0 = η, which implies that the functor (-)^x_0 is full. The restriction to _X^_X^ is immediate. This completes the proof. The functor (-)^x_0: _X _X is split essentially surjective. Its restriction _X^_X^ is also split essentially surjective for any metric group . Let F : X be a metric action. We define a metric transformation θ : ^x_0⟹ F by θ_x = F_x_0x : F^x_0x Fx ; a ↦ F_x_0xa. It certainly satisfies that F_xx'θ_x = F_xx'F_x_0x = F_x_0x'F_x'x_0F_xx'F_x_0x = θ_x'F^x_0_xx'. Further, we define a metric transformation θ^-1 : F ⟹^x_0 by θ^-1_x = F_xx_0 : Fx ^x_0x for any x ∈ X. Then we have _xx'^x_0θ^-1_x = θ^-1_xF_xx' similarly to the above, hence it certainly defines a metric transformation. It is obviously an isomorphism. The restriction to _X^_X^ is immediate. This completes the proof. The functor (-)^x_0: _X _X and its restriction _X^_X^ for any metric group are category equivalences. * We denote the image of the functor (-)^x_0: _X _X by _X^x_0. * We denote the full subcategory of _X that consists of metric actions F : X such that Fx ≅ Y for any x ∈ X and a metric space Y by _X^Y. * We denote the image of (-)^x_0 restricted to _X^Y and _X^ by _X^Y, x_0 and _X^, x_0 respectively. * We denote the full subcategory of _X that consists of metric fibrations π : E X such that π^-1x ≅ Y for any x ∈ X and a metric space Y by _X^Y. * We have category equivalences _X^Y _X^Y, x_0 and _X^_X^, x_0. * The Grothendieck construction functor E : _X _X is restricted to the category equivalence _X^Y _X^Y. (1) follows from Corollary <ref>, and (2) follows from the proof of Proposition <ref>. §.§ Classification for the case of bounded fibers In this subsection, we suppose that X and Y are metric spaces and Y is bounded. Note that we have a metric group Y (Example <ref>). We have a faithful functor -↷ Y : _X^ Y_X^ Y. Let F ∈_X^ Y. We define a metric action F ↷ Y : X by (F ↷ Y)x = Y and (F ↷ Y)_xx' = F_xx' : Y Y. It is immediate to verify that this certainly defines a metric action. For an Y-metric transformation θ : F ⟹ G, we define a metric transformation θ↷ Y : F↷ Y ⟹ G↷ Y by (θ↷ Y)_x = θ_x : Y Y ; y ↦θ_xy. Then it is also immediate to verify that it is a metric transformation. Further, this obviously defines a faithful functor. This completes the proof. The functor -↷ Y : _X^ Y_X^Y is split essentially surjective. Let F ∈_X^Y and fix isometries φ_x : Y Fx by the axiom of choice. 
We define an Y-metric action F by ( F)x = Y and ( F)_xx' = φ_x'^-1F_xx'φ_x· that is a left multiplication. Then we can verify that it is an Y-metric action as follows. Note that we have ( F)_xx = φ_x^-1F_xxφ_x· = id_ Y and ( F)_xx'^-1 = φ_x^-1F_x'xφ_x'· = ( F)_x'x. We also have that d_ Y(( F)_x'x”( F)_xx', ( F)_xx”) = d_ Y(φ_x”^-1F_x'x”φ_x'φ_x'^-1F_xx'φ_x, φ_x”^-1F_xx”φ_x) = d_ Y(φ_x”^-1F_x'x”F_xx'φ_x, φ_x”^-1F_xx”φ_x) = sup_a ∈ Yd_Y(φ_x”^-1F_x'x”F_xx'φ_xa, φ_x”^-1F_xx”φ_xa) = sup_a ∈ Fxd_Fx”(F_x'x”F_xx'a, F_xx”a) ≤ d_X(x, x') + d_X(x', x”) - d_X(x, x”). Now we define a metric transformation φ : F↷ Y ⟹ F by φ_x : ( F↷ Y)x = Y Fx. Then it certainly satisfies that F_xx'φ_x = φ_x'( F↷ Y)_xx' and is an isomorphism by the definition. This completes the proof. Since the category _X^ Y is a groupoid, the image of the functor -↷ Y is in core_X^Y, where we denote the subcategory that consists of all isomorphisms by core (Definition <ref> (4)). The functor -↷ Y^x_0 : _X^ Y core_X^Y, x_0 is full. Note that we have -↷ Y^x_0 = (-)^x_0↷ Y by the definitions. Since the functor (-)^x_0 : _X^ Y_X^ Y is full by Lemma <ref>, we show that the restriction -↷ Y : _X^ Y, x_0 core_X^Y, x_0 is full. Let θ : F^x_0↷ Y ⟹G^x_0↷ Y be an isomorphism in _X^Y, x_0, where F, G ∈_X^ Y. Then we have an isometry θ_x : Y Y such that G_x'x_0G_xx'G_x_0xθ_x = θ_x'F_x'x_0F_xx'F_x_0x for any x, x' ∈ X. Since we have θ_x ∈ Y, we obtain a morphism θ' : F^x_0⟹G^x_0∈_X^ Y, x_0 defined by θ'_x = θ_x. It is obvious that we have θ' ↷ Y = θ. This completes the proof. The functor -↷ Y^x_0 : _X^ Y core_X^Y, x_0 is a category equivalence. The categories _X^ Y and core_X^Y are equivalent. It follows from Corollary <ref> with core_X^Y ≃ core_X^Y ≃ core_X^Y, x_0 by Lemma <ref>. §.§ Classification for the case of unbounded fibers To classify general metric fibrations, we generalize the discussions so far to extended metric groups. * An extended metric group is a group object in . * For extended metric groups 𝒢 and ℋ, a homomorphism from to $̋ is a Lipschitz map$̋ that commutes with the group structure. * We denote the category of extended metric groups and homomorphisms by . Note that the category is a full subcategory of . Let (X, d) be a metric space, and let X be the group of isometries on X. We define a distance function on X by d_ X(f, g) = sup_x∈ X d_X(fx, gx). Then it is immediate to verify the conditions that ( X, d_ X) is an extended metric group. We note that the `unit component' of X, that is a set of isometries f such that d_ X( id_X, f)< ∞, is exactly ^u X (Example <ref>). Note that, if the metric space X has finite diameter, then we have X = ^u X that is a metric group. Let and ' be extended metric groups, and let (, ') be the set of homomorphisms. We equip (, ') with a groupoid structure similarly to the metric group case by defining (, ')(φ, ψ) = {h ∈' |φ = h^-1ψ h} for any homomorphisms φ, ψ : '. We note that the same statement as Lemma <ref> holds for extended metric groups. Further, the relationship between extended metric spaces and normed groups similar to Proposition <ref> holds if we replace the codomain of norms by [0, ∞]. Let be an extended metric group and X be a metric space. An extended -metric action F is a correspondence X ∋ x ↦ Fx = and F_xx'∈ such that * F_xx = e, F_xx' = F_x'x^-1, * d_(F_x'x”F_xx', F_xx”) ≤ d_X(x, x')+d_X(x', x”) - d_X(x, x”). For extended -metric actions F and G, an extended -metric transformation θ : F⟹ G is a family of elements {θ_x ∈}_x∈ X such that G_xx'θ_x = θ_x'F_xx'. 
We denote the category of extended -metric actions and extended -metric transformations by _X^. The following is obtained from the same arguments in subsection <ref> by replacing the `metric group' by `extended metric group'. For an extended metric group and a metric space X, the categories ^_X and (π^m_1(X, x_0), ) are equivalent. Further, the arguments in subsection <ref> can be applied for extended case, and we obtain the following. For any metric spaces X and Y, the categories _X^ Y and core_X^Y are equivalent. Hence metric fibrations with fiber Y are classified by (π^m_1(X, x_0), ). § COHOMOLOGICAL INTERPRETATION In this section, we give a cohomological classification of -torsors. It is an analogy of the 1-Čech cohomology. Before giving the definition, we introduce the following technical term. Let X be a metric space, and x_1, x_2, x_3 ∈ X. We denote the subset {x_1, x_2, x_3}⊂ X by Δ(x_1, x_2, x_3) and call it a triangle. We define the degeneracy degree of the triangle Δ(x_1, x_2, x_3) by |Δ(x_1, x_2, x_3)| := min{d_X(x_i, x_j) + d_X(x_j, x_k) - d_X(x_i, x_k) |{i, j, k} = {1, 2, 3}}. Note that it is enough to consider i, j, k's running in the cyclic order to obtain the above minimum. The following is the definition of our `1-Čech chomology'. Let X be a metric space and suppose that points of X are indexed as X = {x_i}_i ∈ I. For a metric group , we define the 1-cohomology of X with the coefficient in as the category ^1(X; ) by ^1(X; ) = {(a_ijk) ∈^I^3| a_ijka_kjℓ = a_ijℓ, |a_ijka_jkia_kij| ≤ |Δ(x_i, x_j, x_k)|}, and ^1(X; )((a_ijk), (b_ijk)) = {(f_ij) ∈^I^2| a_ijkf_jk = f_ijb_ijk}, where we denote the conjugation invariant norm on by |-|. We call an object of ^1(X; ) a cocycle. Apparently, the above constructions are independent from the choice of the index I. Note that, for a cocycle (a_ijk) ∈^1(X; ), the condition a_ijka_kjℓ = a_ijℓ implies that a_iji = e and a_ijk = a_kji^-1 for any i, j, k ∈ I. Further, for a morphism (f_ij), we have f_ij = f_ji from the condition a_ijkf_jk = f_ijb_ijk and a_iji = b_iji = e. The 1-cohomology of X with the coefficient in is well-defined, that is, ^1(X; ) is indeed a category, in particular a groupoid. Let (a_ijk), (b_ijk), (c_ijk) ∈^1(X; ), and (f_ij) : (a_ijk) (b_ijk) and (f'_ij) : (b_ijk) (c_ijk) be morphisms. Then (f'∘ f)_ij := f_ijf'_ij defines a morphism ((f'∘ f)_ij) : (a_ijk) (c_ijk) since we have a_ijkf_jkf'_jk = f_ijb_ijkf'_jk = f_ijf'_ijc_ijk. It obviously satisfies the associativity. The identity on a_ijk is apparently defined by e_ij = e, where e denotes the unit of . Further, (f^-1_ij) defines a morphism (f^-1_ij) : b_ijk a_ijk that is the inverse of (f_ij). This completes the proof. We have a faithful functor β : ^1(X; ) ^_X. For (a_ijk) ∈^1(X; ), we define a -torsor β (a_ijk) as follows. Let 𝒰 = ∐_(i, j) ∈ I^2_ij, where _ij = ^ij_i ∐^ij_j = ∐. We write an element of ^ij_∙ as g^ij_∙ and we denote the identification = ^ij_∙ by the map ^ij_∙ ; g ↦ g^ij_∙, where ∙∈{i, j} for any i ≠ j ∈ I. We define an equivalence relation ∼ on 𝒰 generated by g^ij_j ∼ (ga_ijk)^jk_j. Note that we have g^ij_j∼ g^ji_j for any i, j ∈ I. We denote the quotient set 𝒰/∼ by β (a_ijk) in the following. Then we have a surjective map π : β (a_ijk) X defined by π [g^ij_j] = x_j. For this map π, we have the following. For any i, j ∈ I, the map π^-1x_j ; g ↦ [g^ij_j] is a bijection. The surjectivity is clear. We show the injectivity. Suppose that we have [g^ij_j] = [h^ij_j] for g, h ∈. 
That is, we have elements a_k_0jk_1, a_k_1jk_2, …, a_k_N-1jk_N∈ such that ga_k_0jk_1… a_k_N-1jk_N = h and k_0 = k_N = i. Then the condition a_ijka_kjℓ = a_ijℓ implies that ga_iji=h, hence g = h. This completes the proof. Note that Lemma <ref> implies that [g^ij_j] = [h^jk_j] implies that h = ga_ijk. Now we can define a distance function d_β (a_ijk) on β (a_ijk) as follows. Let ε_i ∈π^-1x_i and ε_j ∈π^-1x_j. Then there uniquely exist g, h ∈ such that [g^ij_i] = ε_i and [h^ij_j] = ε_j by Lemma <ref>. Then we define that d_β (a_ijk)(ε_i, _j) = d_X(x_i, x_j) + d_(g, h). The non-degeneracy is clear. The symmetry follows from that [g^ij_i] = [g^ji_i]. The triangle inequality is verified as follows. Let ε_i ∈π^-1x_i, ε_j ∈π^-1x_j and ε_k ∈π^-1x_k. Suppose that we have [g^ij_i] = ε_i = [g'^ik_i], [h^ij_j] = ε_j = [h'^jk_j], and [m^jk_k] = ε_k = [m'^ik_k]. Then we have g = g'a_kij, h' = ha_ijk and m = m'a_ikj, hence we obtain that d_β (a_ijk)(ε_i, ε_j) + d_β (a_ijk)(ε_j, ε_k) = d_X(x_i, x_j) + d_(g, h) + d_X(x_j, x_k) + d_(h', m) = d_X(x_i, x_j) + d_X(x_j, x_k) + d_(g'a_kij, h) + d_(ha_ijk, m'a_ikj) = d_X(x_i, x_j) + d_X(x_j, x_k) + d_(g'a_kij, h) + d_(ha_ijka_jkia_kij, m'a_kij) + d_(h, ha_ijka_jkia_kij) - d_(h, ha_ijka_jkia_kij) ≥ d_X(x_i, x_j) + d_X(x_j, x_k) + d_(g'a_kij, m'a_kij) - |a_ijka_jkia_kij| ≥ d_X(x_i, x_j) + d_X(x_j, x_k) + d_(g', m') - |Δ(x_i, x_j, x_k)| ≥ d_X(x_i, x_k) + d_(g', m') = d_β (a_ijk)(ε_i, ε_k). Now a map π : β (a_ijk) X is obviously a 1-Lipschitz map. Further, we verify that it is a metric fibration as follows. Let x_i, x_j ∈ X and ε_i ∈π^-1x_i. Suppose that we have ε_i = [g^ij_i] for g ∈. Then ε_j := [g^ij_j] ∈π^-1x_j is the unique element in π^-1x_j such that d_β (a_ijk)(ε_i, ε_j) = d_X(x_i, x_j). Also, for ε'_j := [h^ij_j] ∈π^-1x_j, we have d_β (a_ijk)(ε_i, ε'_j) = d_X(x_i, x_j) + d_(g, h) = d_β (a_ijk)(ε_i, ε_j) + d_β (a_ijk)(ε_j, ε'_j). Finally, we equip the metric fibration π : β (a_ijk) X with a right action by as [g^ij_∙]h = [(h^-1g)^ij_∙] for any i, j ∈ I and ∙∈{i, j}. This is well-defined since we have that [(ga_ijk)^jk_j]h = [(h^-1ga_ijk)^jk_j] = [(h^-1g)^ij_j] = [g^ij_j]h. It is straightforward to verify that this is a -torsor. Next we show the functoriality. Let (f_ij) : (a_ijk) (b_ijk) ∈^1(X; ). We construct a map f_∗ : β(a_ijk) β(b_ijk) by [g^ij_∙] ↦ [(gf_ij)^ij_∙] for any i, j ∈ I and ∙∈{i, j}. It is well-defined since we have that [(ga_ijk)^jk_j] ↦ [(ga_ijkf_jk)^jk_j]= [(gf_ijb_ijk)^jk_j] = [(gf_ij)^ij_j]. The map f_∗ obviously preserves fibers, and is an isometry since we have that d_β(b_ijk)(f_∗ [g^ij_i], f_∗ [h^ij_j]) = d_β(b_ijk)( [(gf_ij)^ij_i], [(hf_ij)^ij_j]) = d_X(x_i, x_j) + d_(gf_ij, hf_ij) = d_X(x_i, x_j) + d_(g, h) = d_β(a_ijk)([g^ij_i], [h^ij_j]). Further, it is -equivariant since we have that (f_∗[g^ij_j])m = [(gf_ij)^ij_j]m = [(m^-1gf_ij)^ij_j] = f_∗([g^ij_j]m). The faithfullness is obvious from the construction. This completes the proof. The functor β : ^1(X; ) ^_X is full. Let (a_ijk), (b_ijk) ∈^1(X; ) be cocycles, and suppose that we have a morphism φ : β(a_ijk) β(b_ijk) in ^_X. We denote the projections β(a_ijk) X and β(b_ijk) X by π_a and π_b respectively in the following. For any i, j ∈ I, we have bijections A_ij : π_a^-1x_j and B_ij : π_b^-1x_j given by g ↦ [g^ij_j] by Lemma <ref>. Then we define a map φ_ij = B_ij^-1φ A_ij :, namely we have φ[g^ij_j] = [(φ_ijg)^ij_j]. 
Now the -equivariance of φ implies that φ[g^ij_j] = φ[(ge)^ij_j] = (φ[e^ij_j])g^-1 = [(φ_ije)^ij_j]g^-1 = [(gφ_ije)^ij_j], which implies that φ_ijg = gφ_ije by Lemma <ref>. From this, we obtain that φ[(ga_ijk)^jk_j] = φ[(ga_ijk)^kj_j] = [(φ_kj(ga_ijk))^kj_j] = [(ga_ijkφ_kje)^kj_j]. Since we have [g^ij_j] = [(ga_ijk)^jk_j], we obtain that a_ijkφ_kje = (φ_ije)b_ijk by Lemma <ref>. Further, since the lift of x_j along [g^ij_i] is [g^ij_j] and φ preserves the lift, the conditions φ[g^ij_j] = [(φ_ijg)^ij_j] and φ[g^ji_i] = [(φ_jig)^ji_i] implies that φ_ij = φ_ji. Hence we obtain a morphism (φ_ije) : (a_ijk) (b_ijk) in ^1(X; ), which satisfies that β (φ_ije) = φ by the construction. This completes the proof. Let π : E X be a -torsor. For x_i, x_j ∈ X, we define a local section of π over a pair (x_i, x_j) as a pair of points (ε_i, ε_j) ∈ E^2 such that π_i = x_i, π_j = x_j and ε_j is the lift of x_j along ε_i. We say that ((ε^ij_i,ε^ij_j))_(i, j)∈ I^2 is a local section of π if each (ε^ij_i, ε^ij_j) is a local section of π over a pair (x_i, x_j) and satisfies that ε^ij_i = ε^ji_i. Let π : E X be a -torsor. For a local section s =((ε^ij_i,ε^ij_j))_(i, j)∈ I^2 of π, we can construct a cocycle α_s π∈^1(X;). Further, for any two local sections s, s' of π, the corresponding cocycles α_s π and α_s'π are isomorphic. We define a_ijk∈ as the unique element such that ε^ij_ja_ijk = ε^jk_j. Then (a_ijk) satisfies that a_ijka_kjℓ = a_ijℓ since we have ε^ij_ja_ijka_kjℓ = ε^jk_ja_kjℓ = ε^kj_ja_kjℓ = ε^jℓ_j. Now note that we have ε_xg = (ε g)_x for any ε∈ E, x ∈ X and g ∈. Hence we have that ε^ij_ja_ijka_jkia_kij = ε^jk_ja_jkia_kij = (ε^jk_k)_x_ja_jkia_kij = (ε^jk_ka_jki)_x_ja_kij = (ε^ki_k)_x_ja_kij = ((ε^ki_i)_x_ka_kij)_x_j = ((ε^ki_ia_kij)_x_k)_x_j = ((ε^ij_i)_x_k)_x_j. Hence we obtain that |a_ijka_jkia_kij| = d_E(ε^ij_j, ε^ij_ja_ijka_jkia_kij) = d_E(ε^ij_j, ((ε^ij_i)_x_k)_x_j) = -d_E(ε^ij_j, ε^ij_i) + d_E(ε^ij_i, ((ε^ij_i)_x_k)_x_j) ≤ -d_E(ε^ij_j, ε^ij_i) + d_E(ε^ij_i, (ε^ij_i)_x_k) + d_E((ε^ij_i)_x_k, ((ε^ij_i)_x_k)_x_j) = -d_X(x_j, x_i) + d_X(x_i, x_k) + d_X(x_k, x_j). Since the norm |-| on is conjugation invariant, the value |a_ijka_jkia_kij| is invariant under the cyclic permutation on {i, j, k}, hence we obtain that |a_ijka_jkia_kij| ≤ |Δ(x_i, x_j, x_k)|. Thus we obtain a cocycle α_s π := (a_ijk) ∈^1(X ; ). Suppose that we have local sections s = ((ε^ij_i,ε^ij_j))_(i, j)∈ I^2 and s' = ((μ^ij_i,μ^ij_j))_(i, j)∈ I^2. Then there exists an element (f_ij) ∈^I^2 such that (ε^ij_if_ij,ε^ij_jf_ij) = (μ^ij_i,μ^ij_j). Let α_s π = (a_ijk) and α_s'π = (b_ijk). Then we obtain that ε^ij_ja_ijkf_jkb^-1_ijk = ε^jk_jf_jkb^-1_ijk = μ^jk_jb^-1_ijk = μ^ij_j, which implies that f_ij = a_ijkf_jkb^-1_ijk. Hence (f_ij) defines a morphism (f_ij) : (a_ijk) (b_ijk) in ^1(X; ). Since ^1(X; ) is a groupoid, this is an isomorphism. This completes the proof. The functor β : ^1(X; ) ^_X is split essentially surjective. Let π : E X be a -torsor. Fix a local section s = ((ε^ij_i,ε^ij_j))_(i, j)∈ I^2 of π. Let α_sπ = (a_ijk) be the cocycle constructed in Proposition <ref>. We show that the -torsors β(a_ijk) and π are isomorphic. We define a map φ : β(a_ijk) E by [g^ij_∙] ↦ε^ij_∙ g^-1. It is well-defined since we have that [(ga_ijk)^jk_j] ↦ε^jk_ja^-1_ijkg^-1 = ε^ij_jg^-1. It obviously preserves fibers and is a bijection. 
Also, it is an isometry since we have that d_E(φ[g^ij_i], φ[h^ij_j]) = d_E(ε^ij_ig^-1, ε^ij_jh^-1) = d_E(ε^ij_i, ε^ij_jh^-1g) = d_E(ε^ij_i, ε^ij_j) + d_E(ε^ij_j, ε^ij_jh^-1g) = d_X(x_i, x_j) + d_(g^-1, h^-1) = d_β(a_ijk)([g^ij_i], [h^ij_j]). Further, it is immediately verified that φ is -equivariant. Hence the map φ gives an isomorphism in ^_X. This completes the proof. The functor β : ^1(X; ) ^_X is a category equivalence. § REFERENCES A0 Y. Asao, Magnitude and magnitude homology of filtered set enriched categories, preprint (2023), arXiv:2303.05677. Gr A. Grothendieck, Technique de descente et théorèmes d’existence en géométrie algébrique. I. Généralités. Descente par morphismes fidèlement plats, Séminaire N. Bourbaki, exp. no. 190 (1960), 299–327. Gr2 A. Grothendieck, Revêtements Étales et Groupe Fondamental - Séminaire de Géométrie Algébrique du Bois Marie 1960/61, Lecture Notes in Mathematics 224, Springer (1971). JPT P. T. Johnstone, Sketches of an Elephant: A Topos Theory Compendium, Oxford University Press, Oxford (2002). La F. W. Lawvere, Metric spaces, generalized logic and closed categories, Rendiconti del Seminario Matematico e Fisico di Milano XLIII (1973), 135–166; reprinted in Reprints in Theory and Applications of Categories 1 (2002), 1–37. L3 T. Leinster, The magnitude of metric spaces, Documenta Mathematica 18 (2013), 857–905. L1 T. Leinster, The magnitude of a graph, Mathematical Proceedings of the Cambridge Philosophical Society 166 (2019), 247–264. Mc S. Mac Lane, Categories for the Working Mathematician, Graduate Texts in Mathematics 5, Springer, Berlin (1971). Roff E. Roff, The size and shape of things: magnitude, diversity, homology, PhD thesis, University of Edinburgh (2022).
http://arxiv.org/abs/2307.05563v1
20230709220915
RidgeBase: A Cross-Sensor Multi-Finger Contactless Fingerprint Dataset
[ "Bhavin Jawade", "Deen Dayal Mohan", "Srirangaraj Setlur", "Nalini Ratha", "Venu Govindaraju" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
RidgeBase: A Cross-Sensor Multi-Finger Contactless Fingerprint Dataset Bhavin Jawade, Deen Dayal Mohan, Srirangaraj Setlur, Nalini Ratha, Venu Govindaraju Computer Science and Engineering University at Buffalo, SUNY {bhavinja, dmohan, setlur, nratha, govind}@buffalo.edu Received: date / Accepted: date ====================================================================================================================================================================================================================== empty Contactless fingerprint matching using smartphone cameras can alleviate major challenges of traditional fingerprint systems including hygienic acquisition, portability and presentation attacks. However, development of practical and robust contactless fingerprint matching techniques is constrained by the limited availability of large scale real-world datasets. To motivate further advances in contactless fingerprint matching across sensors, we introduce the RidgeBase benchmark dataset. RidgeBase consists of more than 15,000 contactless and contact-based fingerprint image pairs acquired from 88 individuals under different background and lighting conditions using two smartphone cameras and one flatbed contact sensor. Unlike existing datasets, RidgeBase is designed to promote research under different matching scenarios that include Single Finger Matching and Multi-Finger Matching for both contactless-to-contactless (CL2CL) and contact-to-contactless (C2CL) verification and identification. Furthermore, due to the high intra-sample variance in contactless fingerprints belonging to the same finger, we propose a set-based matching protocol inspired by the advances in facial recognition datasets. This protocol is specifically designed for pragmatic contactless fingerprint matching that can account for variances in focus, polarity and finger-angles. We report qualitative and quantitative baseline results for different protocols using a COTS fingerprint matcher (Verifinger) and a Deep CNN based approach on the RidgeBase dataset. The dataset can be downloaded here: <https://www.buffalo.edu/cubs/research/datasets/ridgebase-benchmark-dataset.html> § INTRODUCTION Fingerprints are one of the most widely used biometric modalities. Recent works <cit.>in fingerprint recognition have focused their attention on contactless fingerprint matching owing to various benefits over contact-based methods. Traditional fingerprint sensors which require a physical contact with the acquisition surface elevate the risk of spread of contagious diseases. Furthermore, contact with a fingerprint platen leaves a latent impression which can be captured for fingerprint presentation attacks. Contactless fingerprint matching using smartphone cameras alleviates these concerns while also making the acquisition process easier, faster, and portable. [Code for the acquisition app can be accessed here: <https://github.com/bhavinjawade/FingerprintCameraApp>] Despite its apparent benefits, performing robust contactless fingerprint matching is more challenging than traditional fingerprint matching. The major challenges with contactless fingerprint matching include: out-of-focus image acquistion, lower contrast between ridges and valleys, variations in finger-angle, and perspective distortion. A resilient contactless fingerprint acquisition system must overcome these challenges while being capable of performing both contactless to contactless (CL2CL) and contact to contact-less (C2CL) fingerprint matching. 
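Since raw smartphone captures exhibit exactly these distortions, contactless matching pipelines usually normalize the finger photo before feature extraction. The following Python sketch illustrates one common style of preprocessing (grayscale conversion, contrast-limited adaptive histogram equalization, and ridge-polarity inversion) using OpenCV; the resize target and CLAHE parameters are arbitrary illustrative choices, and this is not the specific pipeline used for the baselines reported below.

import cv2

def enhance_finger_photo(path, size=(416, 416)):
    # illustrative preprocessing for a contactless finger photo:
    # grayscale -> CLAHE contrast enhancement -> polarity inversion,
    # so ridges appear dark on a light background, closer to a contact impression
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)
    return cv2.bitwise_not(img)

# example usage with a hypothetical file name:
# ridge_img = enhance_finger_photo("contactless_capture.jpg")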
Early attempts <cit.> at contact-to-contactless fingerprint matching proposed datasets that were collected in specialized environmental settings. Other works <cit.> proposed smartphone-captured finger-selfies under different background and lighting conditions. In order to develop robust contactless fingerprint matching that can be used in practically viable systems, research datasets should preferably include: (i) images acquired in different lighting conditions and backgrounds, (ii) different camera sensors, (iii) multi-finger (four-finger) images, (iv) images acquired in unconstrained or semi-constrained settings, and (v) a large number of high-resolution images. Existing contactless fingerprint matching datasets are found to be limited in their scope because they do not meet one or more of the aforementioned conditions. In this paper, we propose RidgeBase, a large-scale multi-finger contactless and contact-based fingerprint dataset obtained using multiple sensors in diverse environmental conditions and backgrounds. Over 3500 contactless and contact-based four-finger images are obtained from 88 subjects in multiple sessions. To enable finger-to-finger matching, the four-finger images are further split into single-finger images. In all (including single-finger and four-finger), RidgeBase consists of 17,784 contactless and contact-based images captured in self-operated mode by participants using two smartphone cameras and contact-based sensors. Contactless finger images of the same finger distal acquired using a smartphone camera contain a higher degree of intra-class variance (due to focus, contrast, and finger-angle distortions) when compared to traditional contact-based images. Capturing multiple images of the same finger at acquisition and inference time can improve matching performance. We observe that existing works use a traditional sample-based evaluation protocol for contactless fingerprint matching. Inspired by the Janus Benchmark Dataset's <cit.> evaluation protocol for face recognition, we propose a set-based evaluation protocol for RidgeBase along with other matching protocols. This comprehensive evaluation suite consists of three tasks: 1. single finger-to-finger matching, 2. four-finger image matching, and 3. set-based fingerprint matching. Each evaluation task is performed for both contactless (CL2CL) and cross-sensor (C2CL) fingerprint matching, thereby facilitating the development of a robust cross-sensor fingerprint matching framework. The key contributions of this work are summarized below: * Collected a new cross-sensor fingerprint dataset which overcomes many drawbacks of existing datasets, and is designed to promote practical contactless fingerprint matching research. * Proposed a novel fingerprint distal labeling heuristic algorithm to generate pseudo labels for training a Faster-RCNN based object detector for distal segmentation. We also provide the fingerprint quality metric (NFIQ) distribution on the RidgeBase dataset. * Developed an extensive tasks and protocols suite for RidgeBase that emulates real-world scenarios and ensures reproducibility. * Finally, we report baseline results on the RidgeBase dataset using a state-of-the-art commercial-off-the-shelf fingerprint matcher (Verifinger 12.0) and a DeepCNN <cit.> based method. § RELATED WORKS Automatic fingerprint matching is a well-researched area. Recently, fingerprint interoperability, especially C2CL matching, has gained popularity. In this section, we discuss relevant datasets and prior methods for C2CL matching.
NISTIR 8307 <cit.> performed an interoperability assessment with data collected from 200 federal employees to evaluate various existing contactless fingerprint acquisition devices and smartphone apps. They observed that the performance of DUTs (devices under test) can be categorized into three tiers: the best-performing tier consists of contact-based devices, the middle tier consists of stationary contactless devices, and the worst-performing tier consists of smartphone-based contactless fingerprint matching apps. Furthermore, NISTIR 8307 <cit.> also concluded that multi-finger acquisition of contactless fingerprints increases the performance of contactless matching, thereby enhancing potential operational utility. This further corroborates the importance of our publicly released dataset for research in multi-finger smartphone-based contactless fingerprint matching. Ross et al. <cit.> were among the first to draw attention to problems with biometric sensor interoperability in the context of fingerprints, and they observed that the need for sensor interoperability was paramount as it significantly impacted the usability of a biometric matching system, with cross-sensor performance being notably worse. <cit.> addresses the problem of interoperability by analyzing fingerprint data from 9 different sensors. These methods, while promising, only focused on data acquired using contact-based fingerprint sensors. Lee et al. <cit.> proposed methods to process images captured using mobile phones. <cit.> used gradient information coherence to perform finger quality estimation. Recent methods on contactless fingerprint recognition have focused on three main sub-areas, namely segmentation of the area of interest, enhancement of the segmented area, and representation learning for matching. <cit.> developed a segmentation model using saliency maps and skin color, following which Grosz et al. <cit.> proposed a U-Net based autoencoder to avoid failure cases in the presence of complex backgrounds. Compared to contact fingerprint data, contactless fingerprint images suffer from various distortions. Lin et al. <cit.><cit.> have proposed algorithms that correct non-linear deformations as well as methods for generalized distortion correction based on a robust thin-plate spline model. Once the image enhancement is performed, fingerprint matching is generally done using either minutiae-based or deep-learning-based methods. <cit.> proposed the use of a Siamese CNN architecture for matching contact to contactless fingerprints. Malhotra et al. <cit.> designed a network to extract features which preserve the multi-orientation and multi-scale information of the fingerprint. Table <ref> compares our dataset to other contactless datasets present in the literature. Although Deb et al. 2018 <cit.> and Wild et al. 2019 <cit.> have multi-finger images, they do not have the environmental and background variations present in our dataset. Furthermore, the number of unique samples (four-finger and single-finger images) is small compared to the proposed dataset. § RIDGEBASE DATASET §.§ Collection Methodology The RidgeBase benchmark dataset was collected over a period of 3 months from 88 participants. Contactless fingerprints were acquired using two smartphones, iPhone 11 and Google Pixel 5. We used an application similar to Jawade et al. <cit.> for acquiring four-finger images using the two smartphones.
As shown in Figure 2, the application presents volunteers with a bounded region within which they can place their hand in an unconstrained manner. Corresponding contact-based fingerprints were acquired using a Futronic FS64 EBTS flatbed fingerprint scanner. For each participant, fingerprint images were collected over two sessions separated by at least two weeks. There was significant gender, ethnicity (East Asian, White American, African American, Filipino, and Asian Indian), race and age variation among the subjects participating in the data collection. This data collection was approved by the institutional IRB, and the identities of the subjects involved in the data collection have been anonymized. For each participant, contactless fingerprint images were acquired in three different lighting conditions and backgrounds, namely (i) indoor, (ii) white background, and (iii) outdoor. Each image was captured using flash and auto-focus. Contactless images were acquired using the Apple iPhone 11 and Google Pixel 5 with resolutions of 2016 x 4224 and 3024 x 4032, respectively. Across the 88 participants, we captured 280 contact-based four-finger images and 3374 contactless four-finger images. The dataset is further split using the distal segmentation approach described in section 3.2. Table <ref> summarizes the dataset size and scope. §.§ Distal Segmentation Method Most fingerprint matching algorithms (such as Verifinger <cit.> <cit.>) primarily work on distal fingerprints rather than on full four-finger prints. To support compatibility with these algorithms and interoperability with existing datasets, we segment the four-finger images to extract distal phalanges. To produce pseudo bounding boxes for distal phalanges that can then be used to train an object detection model, we formulate a heuristic algorithm based on the localization of convexity defects. We start by segmenting the background and the four-finger foreground. To perform this segmentation, we follow steps similar to <cit.>. First, we downsample the image and apply the GrabCut algorithm using the guiding region presented to the user as a prior. We next apply morphological opening using kernels of size (11,11) and (5,5) over the predicted GrabCut mask M. Applying the up-scaled and Gaussian-blurred mask over the original image gives us the segmented four-finger region. Next, we find the convex hull C for the segmented mask M using Sklansky's algorithm. Figure <ref> shows the convex hull over the four-finger region. For the set of images in the dataset that are acquired keeping the four fingers close to each other, the top-most point shared by any two fingers in contact must also be the farthest point on the perimeter of the segmented region from the convex hull. Under this premise, we detect the three farthest points (denoted by the set S) from the convex hull (referred to as convexity defects): S = {(x_2, y_2), (x_3, y_3), (x_4, y_4)}. Next, we apply a set of empirically observed measures to generate bounding boxes for the four distals using the set S. We start by computing the finger width using y_2, y_3, y_4: D_w = max((y_3-y_2), (y_4-y_3)). Next, the following set of rules is used to predict bounding boxes around the distals: D_TL_1 = (x_2 + 2*α - β*D_w, y_2 - D_w), D_BR_1 = (x_2 + 2*α, y_2), D_TL_2 = (x_3 + 4*α - β*D_w, y_2), D_BR_2 = (x_3 + 4*α + 0.5*D_w, y_3), D_TL_3 = (x_3 + 3*α - β*D_w, y_3), D_BR_3 = (x_3 + 3*α, y_4), D_TL_4 = (x_4 + 2*α - β*D_w, y_4), D_BR_4 = (x_4 + 2*α, y_4 + D_w). Here, α = 1.5 denotes an approximation of the distal height-to-width ratio and β is selected empirically as 50.
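The bounding-box rules above can be transcribed directly into code. The sketch below is illustrative only: the function name, the assumption that the three convexity-defect points are supplied as pixel coordinates ordered across the hand, and the packaging of each box as a (top-left, bottom-right) pair are our own choices, while the constants α and β are taken verbatim from the rules above.

```python
# Constants from the heuristic above: distal height-to-width ratio and empirical scale.
ALPHA, BETA = 1.5, 50

def distal_bounding_boxes(defects):
    """Transcribe the rules above: `defects` holds the three convexity-defect
    points (x2, y2), (x3, y3), (x4, y4); returns four (top_left, bottom_right)
    boxes, one pseudo-label per distal phalange."""
    (x2, y2), (x3, y3), (x4, y4) = defects
    d_w = max(y3 - y2, y4 - y3)  # finger-width estimate D_w
    return [
        ((x2 + 2 * ALPHA - BETA * d_w, y2 - d_w), (x2 + 2 * ALPHA, y2)),
        ((x3 + 4 * ALPHA - BETA * d_w, y2), (x3 + 4 * ALPHA + 0.5 * d_w, y3)),
        ((x3 + 3 * ALPHA - BETA * d_w, y3), (x3 + 3 * ALPHA, y4)),
        ((x4 + 2 * ALPHA - BETA * d_w, y4), (x4 + 2 * ALPHA, y4 + d_w)),
    ]
```

In the full pipeline these boxes serve only as pseudo-labels; the FasterRCNN detector trained on them, described next, is what is used on unconstrained images.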
We employ a visual selection method to pick 980 images that are perfectly annotated by the heuristic method, with 800 images being used for training and 180 images being used to test a FasterRCNN network for recognizing distal phalanges. The trained FasterRCNN network achieves an mAP of 95.7% at IoU = 50%. The final trained FasterRCNN is capable of detecting finger distals in images where the four fingers are not close to each other, despite the fact that it was trained with images with four fingers close to each other. § TASKS AND PROTOCOLS We design the RidgeBase dataset to support three sets of tasks: (i) Single Finger Matching (or distal-to-distal matching), (ii) Four Finger Matching, and (iii) Set-Based Distal Matching. Each task is further divided into contactless-to-contactless and contact-to-contactless verification and identification tasks. Unlike previous datasets, to ensure reproducibility, we provide fixed test evaluation pairs for each of the tasks. Below, we provide a detailed description of the tasks, evaluation protocols, and associated train-test splits. §.§ Single Finger Matching Task 1 represents the single-finger distal-to-distal matching scenario, which is equivalent to traditional fingerprint matching. For task 1, we segment the distal phalanges for the whole dataset using the method described in section 3. So, for the 88 participants, the dataset consists of 704 (88 x 4 x 2) unique fingers. We select 200 unique fingers (classes) for the test set and 504 disjoint unique fingers (classes) for the training set. This protocol is for one-to-one fingerprint matching approaches, and considers each unique finger as a unique identity. This provides comparability and support for dataset augmentation with existing contactless fingerprint datasets that consist only of single-finger images. In total, the test set consists of 2229 contactless finger distal images and 200 contact-based fingerprints. The train set consists of 11255 contactless distal images and 916 contact-based fingerprints. For the contactless-to-contactless matching (CL2CL verification) task, we provide 2,483,106 test pairs for evaluation, and for contact-to-contactless matching (C2CL verification), we provide 454,716 test pairs. §.§ Four Finger Matching Task 2 represents the four-finger to four-finger matching scenario. Typically, multi-finger authentication is more robust than single-finger authentication. This protocol promotes research in end-to-end trainable algorithms and feature fusion methods that can overcome the distortion challenges of contactless images by utilizing identity features available in the entire four-finger region. Here, we consider a hand (four-finger region) as a unique identity. For 88 participants, the task consists of 176 unique hands. We use 25 participants (∼30%) for test and 63 participants for training (as in task 1). Therefore, task 2 consists of 50 unique four-finger identities (hands) in the test set and 126 in the train set.
§.§ Set-Based Matching To overcome the inconsistencies and distortions observed in real-time unconstrained capture of fingerprint images using smartphone cameras, we introduce a set-based matching protocol. Set-based matching schemes have been previously used for face recognition <cit.>, where there are high intra-class variations. In task 3, each set consists of finger-distal images of the same finger under different backgrounds and lighting conditions, acquired using different devices in multiple sessions. For contactless-to-contactless distal matching, the test split consists of 200 query sets and 200 associated gallery sets. On average, each query set consists of 4 samples, and each gallery set consists of 5 samples. Similarly, for contact-to-contactless matching, each gallery set consists of 8 samples on average, and each query set consists of 1 contact-based image. A robust feature fusion method that performs well on the set-based matching protocol can greatly improve contactless matching performance in real-world settings, where multiple images can be acquired from a continuous video. § QUALITY ANALYSIS OF FINGERPRINTS (NFIQ 2.0) Figure <ref> shows the distribution of fingerprint quality estimated using NFIQ 2.0 <cit.> for the test-set split of Task 1 (only distals). All raw contactless distal images are grayscaled and converted to 8-bit, 500 dpi before computing NFIQ scores. As can be observed from the distribution, a majority of fingerprints have NFIQ 2.0 scores in the range 20-45. Galbally et al. <cit.> trained a Bayes classifier for computing NFIQ 1.0 classes from NFIQ 2.0 values. Their learned mapping function <cit.> can be summarized as: NFIQ1 = 5 if 0 < NFIQ2 ≤ 5, 3 if 5 < NFIQ2 ≤ 35, 2 if 35 < NFIQ2 ≤ 45, 1 if 45 < NFIQ2 ≤ 100, where NFIQ1 = 5 denotes the worst quality images and NFIQ1 = 1 denotes the best quality images (NFIQ1 = 4 and NFIQ1 = 5 are treated as one unique class <cit.>). Using this mapping function, we observe that for raw grayscale contactless images in the RidgeBase test dataset, 2.5% of images lie in NFIQ1 class 5, 76.7% in class 3, 16.0% in class 2, and 4.8% in class 1. Figure <ref> shows the NFIQ2 score distribution for RidgeBase's training split. As can be observed from Figures <ref> and <ref>, the test set is representative of the training set in terms of raw fingerprint image quality distribution. Figure <ref> shows the NFIQ2 score distribution after enhancing contactless fingerprints using Hong et al.'s algorithm <cit.> to improve ridge clarity based on local ridge orientation and frequency. § EXPERIMENTS We evaluate baseline methods for verification (1:1) and identification (1:N) tasks. We provide subject-disjoint training and testing sets for all the tasks. Furthermore, the protocol also provides defined query and gallery templates for both verification and identification for Task 3. §.§ Metrics - Verification and Identification (1:N, 1:1), ROC and CMC Methods are compared on the verification task using EER (Equal Error Rate), TAR(%)@FAR=10^-2 (as in previous contactless matching works), and AUC. For identification tasks, we report Rank(%)@1, Rank(%)@10, Rank(%)@50 and Rank(%)@100. Additionally, we compare methods using the Receiver Operating Characteristic (ROC) and Cumulative Match Characteristic (CMC) for verification and identification, respectively. §.§ Preprocessing and Enhancements We perform a set of pre-processing and ridge enhancement steps over the distal images segmented using the method described in section 3.2.
We start by grayscaling the contactless fingerprint images and then performing adaptive contrast enhancement with binary inversion. This is done to improve the ridge-valley contrast and account for the ridge inversion. We observe that due to variations in the focus over the distal region, directly enhancing the preprocessed contactless image leads to a large number of spurious minutiae in the out-of-focus region. To address this, we perform adaptive Gaussian thresholding over the preprocessed image followed by a series of median blurs. This removes the out-of-focus regions of the distal image, leaving behind the sharp ridge pattern. Next, we enhance the fingerprint ridge pattern using the ridge-frequency-based enhancement method proposed by Hong et al. <cit.>. §.§ Baselines We present evaluations on the RidgeBase dataset using the commercial-off-the-shelf (COTS) Verifinger matcher and the CNN-based deep metric learning method proposed in <cit.>. To generate ISO templates using Verifinger 12.0, we first preprocess and enhance fingerprints using the algorithm described in section 6.2, and then convert the fingerprints to 8-bit, 500 dpi images. For the second baseline, we evaluate the AdaCos-based branch as described in <cit.>. The model takes a channel-sequenced, enhanced, and grayscaled image as input, followed by a DenseNet-161 representation extractor optimized with the adaptive scaling cosine (AdaCos) loss <cit.>. We first pretrain the network using 50,000 synthetic fingerprints generated using the Anguli [Anguli: https://dsl.cds.iisc.ac.in/projects/Anguli/index.html] Fingerprint Generator and then fine-tune on RidgeBase. We use 2,000 of the 11,252 images for validation and the remaining 9,252 images for training. Results reported for both baselines are over the RidgeBase test split. For task 2 and task 3, we segment the distal phalanges and perform score fusion using the sum rule. End-to-end training on the four-finger region and association-based feature pooling are left for future exploration. §.§ Results Tables <ref> and <ref> report the verification results for contactless-to-contactless matching and contact-to-contactless matching, respectively, for all three tasks, i.e., Distal Matching, Four Finger Matching, and Set-Based Distal Matching. Figures <ref> and <ref> show the receiver operating characteristic (ROC) curves for all three tasks. Tables <ref> and <ref> report the identification results for the contactless-to-contactless and contact-to-contactless matching tasks, respectively. Figures <ref> and <ref> show the cumulative match curves for the identification rate. Based on the performance evaluation of both the widely used COTS Verifinger matcher and the CNN-based method, we observe that RidgeBase is more challenging than other existing contactless fingerprint datasets, and hence motivates further innovation in contactless fingerprint matching algorithms. § CONCLUSION In this work, we have proposed a novel smartphone-based contactless fingerprint matching dataset. RidgeBase, a multi-use full-finger dataset, will help advance new avenues for contactless fingerprint matching, promoting methods that could leverage different parts of the four-finger region for matching. With the set-based matching protocol introduced along with RidgeBase, novel contactless fusion algorithms can be investigated to achieve better query-set to gallery-set matching performance. Along with this dataset, we release the cross-platform app developed to collect the fingerphotos.
§ ACKNOWLEDGEMENT This work was conducted at the Center for Unified Biometrics and Sensors (CUBS) at the University at Buffalo and was supported by the Center for Identification Technology Research (CITeR) and the National Science Foundation through grant #1822190.
http://arxiv.org/abs/2307.05582v1
20230710143957
DBFed: Debiasing Federated Learning Framework based on Domain-Independent
[ "Jiale Li", "Zhixin Li", "Yibo Wang", "Yao Li", "Lei Wang" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CY", "cs.DC" ]
DBFed: Debiasing Federated Learning Framework based on Domain-Independent Jiale Li School of Software Dalian University of Technology Dalian, China [email protected] Zhixin Li School of Computer Science Fudan University Shanghai, China [email protected] Yibo Wang School of Software Dalian University of Technology Dalian, China [email protected] Yao Li School of Software Dalian University of Technology Dalian, China [email protected] Lei Wang^* (* Lei Wang is the corresponding author) School of Software Dalian University of Technology Dalian, China [email protected] August 12, 2023 ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== As digital transformation continues, enterprises are generating, managing, and storing vast amounts of data, while artificial intelligence technology is rapidly advancing. However, this also brings challenges in information security and data security. Data security refers to the protection of digital information from unauthorized access, damage, theft, etc. throughout its entire life cycle. With the promulgation and implementation of data security laws and the emphasis on data security and data privacy by organizations and users, privacy-preserving technology represented by federated learning has a wide range of application scenarios. Federated learning is a distributed machine learning computing framework that allows multiple subjects to train joint models without sharing data, thereby protecting data privacy and solving the problem of data islands. However, the data held by different subjects are independent of each other, and differences in data quality may cause fairness issues in federated learning modeling, such as data bias among multiple subjects, resulting in biased and discriminatory models. Therefore, we propose DBFed, a debiasing federated learning framework based on domain-independent training, which mitigates model bias by explicitly encoding sensitive attributes during client-side training. This paper conducts experiments on three real datasets and uses five evaluation metrics covering accuracy and fairness to quantify the effect of the model. Most metrics of DBFed exceed those of the other three comparative methods, fully demonstrating the debiasing effect of DBFed. data security; federated learning; model fairness; information security § INTRODUCTION Artificial intelligence technology relies heavily on large amounts of data as input, which is used for learning and training to recognize patterns, discover rules, make decisions, and predict outcomes. This data can be structured, such as table data in a database, or unstructured, such as images, text, and speech. Due to the diversity and scale of data, AI technology requires powerful computing capabilities and algorithm support to realize its application value. Additionally, to protect personal privacy and data security, reasonable restrictions and protections need to be placed on the use of sensitive data.
Therefore, as AI technology continues to evolve, ensuring the quality and security of data in terms of acquisition, storage, sharing, and utilization becomes an important issue. For enterprises, expanding data sources, establishing a complete data lifecycle management system, and adopting privacy-preserving computing technology can better utilize data resources, improve the efficiency of mining and utilizing data value, and better meet the needs of data security and privacy protection. Therefore, with the development of artificial intelligence technology, how to ensure the quality and security of data in terms of data acquisition, storage, sharing, and utilization through privacy computing technology has become an important frontier research topic. Many fields, such as finance, healthcare, and communication<cit.>, have extremely high requirements for data security. As a result, data between different institutions often becomes a data island, making it difficult to share and use it safely, resulting in ineffective utilization of data value and hindering the development of artificial intelligence technology. In order to protect data security, Privacy-preserving computation technology represented by federated learning has been widely applied in the field of artificial intelligence. Privacy-preserving computation refers to a computing mode for computing and data processing under the premise of protecting data privacy. It can encrypt, share, calculate, and analyze data without exposing the original data, thereby protecting personal privacy and business secrets. Federated learning is a distributed machine learning technology that allows multiple participants to jointly train a model, but each participant can only access local data, thereby protecting data privacy. Federated learning can avoid collecting and storing user data on a central server, which better protects user privacy. However, the issue of fairness also becomes more important when using federated learning. Fairness issues usually involve inequalities between different participants, including data imbalance, computing resource imbalance, capability differences, etc. Fairness issues in federated learning can be grouped into the following categories: §.§.§ Data Bias The data distribution and characteristics of different participants may differ, leading to the imbalanced performance of the model among different participants. Factors such as gender, race, geographical location, etc. can lead to the neglect or unfair treatment of certain clients' datasets. §.§.§ Model Bias Since different participants have different data distributions and characteristics, the model may be biased towards certain participants, resulting in the unbalanced performance of the model, which will also affect model fairness. §.§.§ Imbalance of Computing Resources Different participants may have different computing resources. Some participants may have more computing power and storage resources to conduct more iterative training locally, while others may only have limited iterative training. This may result in some participants having better model performance than others, leading to fairness issues. §.§.§ Capability Differences Different participants may have different abilities and professional knowledge. Some participants may have more domain expertise and experience to better understand and interpret the model results, while others may lack these abilities, which may lead to issues with the interpretability and fairness of the model. 
Addressing bias issues in federated learning can improve the robustness and generalization ability of federated learning models, as well as improve the model's coverage and service quality for different groups. Therefore, in order to address these fairness issues, it is necessary to conduct relevant research and develop appropriate algorithms and tools to promote fair federated learning. This includes methods based on multi-party data joint learning, the use of distributed privacy protection technology in federated learning, and the development of appropriate metrics to evaluate the fairness of the model. The main contributions of this paper are as follows: * This paper proposes a debiasing federated learning framework based on domain-independent training, which alleviates model bias by explicitly encoding sensitive attributes during client training, effectively improving the fairness of deep learning classification models in federated learning. * This paper conducts experiments on 3 real datasets, and uses 4 fairness indicators to quantify the debiasing effect of the model for multi-classification and multi-sensitive-attribute tasks. The effectiveness of DBFed was verified through experiments. § RELATED WORK §.§ Federated Learning Federated learning is a distributed machine learning framework proposed by McMahan et al. <cit.>, which allows clients to collaborate with each other to train machine learning models without exposing their own data. Nowadays, federated learning has been applied to fields such as healthcare and finance. Yang et al. <cit.> divided federated learning into horizontal federated learning, vertical federated learning, and federated transfer learning according to the distribution and alignment characteristics of data in the model training process. In the horizontal federated learning framework, datasets from different clients share the same feature space, but the samples are different; in the vertical federated learning framework, datasets from different clients have the same or partially identical sample IDs, but the feature spaces of the datasets are not the same; in the federated transfer learning framework, the sample IDs and feature spaces of datasets on different clients are different, but different clients have similar business scenarios. In federated learning, clients protect local data privacy by passing model gradients or weights instead of sharing data. However, Zhu et al. <cit.> showed that gradient leakage attacks can recover clients' datasets, thereby compromising client data privacy. After obtaining the gradients, this method randomly generates a pair of pseudo features and pseudo labels and performs forward and backward propagation; it then optimizes the pseudo features and pseudo labels by minimizing the distance between the pseudo gradient and the real gradient, so that the pseudo data continuously approach the original data. Hanchi et al. <cit.> proposed a generative adversarial network for generating random data to minimize the distance between pseudo gradients and real gradients, thereby inferring the raw data of federated learning clients. Melis et al. <cit.> have demonstrated that malicious clients can deceive the global model by using multi-task learning, allowing the model to learn more of their desired features and extract more data information from other clients.
In order to protect the privacy of clients in the federated learning process, researchers have applied homomorphic encryption <cit.> and differential privacy <cit.> to federated learning. Homomorphic encryption applies the encryption algorithm to the process of gradient exchange. A value encrypted by the algorithm can undergo addition and multiplication, and the decrypted result is the same as if the operations had been performed on the unencrypted values. Therefore, it can effectively protect data privacy in the gradient exchange process between the client and the server. However, homomorphic encryption reduces the computational efficiency of the federated learning framework. Differential privacy effectively protects data privacy by adding noise to the dataset or blurring certain features through generalization methods, making it difficult for attackers to distinguish between samples and recover data. However, due to the modifications made to the dataset, differential privacy technology usually needs to balance accuracy and security. Many researchers have made trade-offs between security, computational efficiency, and accuracy in federated learning, and have improved and optimized the federated learning framework. FedCG, proposed by Wu et al. <cit.>, utilizes conditional generative adversarial networks to achieve a high level of privacy protection while maintaining the computational performance of the model. FedCG has a private extractor and a public classifier on each client, and in the process of weight aggregation, a client generator is used instead of a public extractor. Client knowledge is aggregated through knowledge distillation, and keeping the extractor weights private prevents user information leakage while aggregating client knowledge. In the training process of the client, the extractor and classifier are first trained to learn the features of the local dataset, so that the output distribution of the extractor moves closer to the distribution produced by the generator. Then, the local generative adversarial network, namely the generator and the discriminator, is trained separately on the local data to improve the accuracy of the model. Zhu et al. <cit.> proposed a method to aggregate client knowledge through knowledge distillation without using additional data. They set up a lightweight generator on the server and used the learned knowledge as an inductive bias to adjust local training. This method requires fewer communication rounds and has good generalization ability. §.§ Model bias and debiasing Although machine learning models have been applied to a wide range of real-life scenarios such as face recognition and medical image analysis, some models make decisions based on information such as race, gender, and nationality, resulting in algorithmic bias. As Larson et al. <cit.> once pointed out, the COMPAS system has a certain degree of racial bias. The research of Kohavi et al. <cit.> shows that under the same circumstances, deep neural networks tend to predict higher salaries for males than for females. Ashraf et al. <cit.> pointed out that commercial gender classification systems developed by Microsoft, Face++, and IBM have a high recognition error rate for dark-skinned women. Some works have proposed solutions to algorithmic bias. Mehrabi et al. <cit.> divided the solutions to algorithmic bias into three types: preprocessing, in-processing, and post-processing.
Preprocessing technology eliminates or alleviates algorithmic bias by changing the training dataset; in-processing technology modifies the machine learning algorithm itself to eliminate or reduce algorithmic bias during training; post-processing technology usually treats the trained model as a black box and recalculates the labels output by the black box according to a new function to eliminate or reduce algorithmic bias. In the field of community detection, Mehrabi et al. <cit.> proposed a community detection method for nodes with low connection attributes, which can alleviate the bias against low-degree nodes. In the field of classifiers, Bilal et al. <cit.> proposed a fairness constraint to prevent classifiers from making predictions related to sensitive attributes in the data, and Kamishima et al. <cit.> controlled the trade-off between classification accuracy and fairness by adjusting regularization parameters. This regularization method is applicable to any probabilistic discriminant model prediction algorithm. In the field of language models, Bordia et al. <cit.> proposed a regularization loss term for language models, which minimizes the projection of the encoder-trained embeddings onto the gender-encoding embedding subspace, effectively alleviating gender bias in language models. In the field of causal inference, Lu et al. <cit.> proposed a framework for discovering and eliminating bias in causal networks, capturing direct and indirect discrimination through the causal effects of protected attributes on decisions passed along different causal paths. The problems of algorithmic bias and model fairness also exist in the field of federated learning. In recent years, some researchers have proposed debiasing methods under the framework of federated learning. Zhang et al. <cit.> designed a reward mechanism to balance the model's accuracy and fairness during training, which drives fairness across all demographic groups and addresses the challenges of limited information and limited coordination. The FairBatch framework proposed by Roh et al. <cit.>, while retaining the standard training algorithm as an internal optimizer, incorporates an external optimizer that adaptively adjusts the mini-batch sampling, thereby improving the fairness of the model. This framework can significantly improve the fairness of any pre-trained model through fine-tuning. Papadaki et al. <cit.> proposed an algorithm for minimax group fairness in federated learning, where the server requires each client to explicitly share the performance of the model on each demographic group separately. § METHOD This section details the federated training process of DBFed. As shown in Figure <ref>, in the learning process of DBFed, a global server and K clients jointly train a deep neural network image classification model. The server first initializes the global model and then sends the weights to each client to initialize the clients' local models. After receiving the model weights, the clients use gradient descent to update the model weights on their local datasets to minimize the local loss function. After a certain number of rounds, the clients send their local model weights to the server, and the server performs federated averaging over the clients' model weights to obtain the global model weights, which are then distributed back to the clients.
The specific process of client training and server aggregation will be introduced in detail below. §.§ Client Domain-Independent Training Inspired by the work of Wang et al. <cit.>, this paper improves the fairness of the model through domain-independent training, which encodes sensitive attributes explicitly. For the problem of bias in deep learning classification tasks, the predicted features are called target attributes, and the potentially biased populations are called sensitive attributes. The fully connected layer of the deep learning classification model is organized into D N-way discriminant classifiers, one for each sensitive-attribute value, where N is the number of target-attribute categories and D is the number of sensitive-attribute categories. DBFed mitigates model bias by explicitly encoding sensitive attribute information during training and reducing the correlation between sensitive attributes and predicted attributes during prediction. Assume f(·) is a deep learning classification model with N × D neurons in its last layer, that is, the last layer of the model produces N × D output values, and denote by f_z(x;θ) the output of the z-th node of the classification layer, where x is the data sample and θ is the model weight. The outputs are converted into probabilities by applying a softmax activation within each group of N nodes associated with a sensitive-attribute value d: P(y|d,x) = Softmax(f_(y+dN)(x;θ)) = e^f_(y+dN)(x;θ) / ∑_i=0^N-1 e^f_(i+dN)(x;θ), which is interpreted as the probability that the prediction for a sample x with sensitive attribute d is y. For a data sample, according to the law of total probability, the prediction of the deep learning classifier can be calculated according to the following formula: ŷ = argmax_y P(y|x) = argmax_y ∑_d∈ G P(y|d,x)P(d|x), where P(d|x) is the probability that data sample x has sensitive attribute d, and G is the set of sensitive attributes, so that |G| = D. For a data sample with a known sensitive attribute, the predicted value is ŷ = argmax_y P(y|d,x). However, in order to be blind to the sensitive attribute, that is, to ignore the correlation between predicted attributes and sensitive attributes during the prediction process, and to achieve Demographic Parity on sensitive attributes (that is, for any a, b ∈ G, P(a|x) = P(b|x)), thereby guaranteeing fairness between data samples with different sensitive attributes and reducing the bias of the algorithm, we take P(d|x) = 1/|G| in the prediction process. For a training data sample x whose true target-attribute value is y and whose sensitive attribute is d, the cross-entropy loss is L(x,y,d;θ) = -log P(y|d,x). Therefore, using the gradient descent algorithm, the model weight update of client k can be expressed as θ^k ← θ^k - η∇ L(b;θ^k), where η is the learning rate and b is a batch of training samples. In the local training of a client, the local training dataset is divided into multiple batches, and every batch is used for training in each epoch. After multiple iterations of training, the local model weights are sent to the server. §.§ Server Aggregation After the server receives the clients' weights, it performs weight aggregation through the FedAvg algorithm. The aggregation formula for the global weights of round t+1 can be expressed as: θ_t+1^g = ∑_k=1^K n_k/n θ_t^k, where n_k is the number of samples in the local training dataset of client k, and n = ∑_k=1^K n_k is the total number of samples in the training datasets of all clients.
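To make the client objective and the server aggregation above concrete, the following PyTorch-style sketch mirrors the formulas directly. It is illustrative rather than the authors' released implementation: the class and function names are ours, the backbone is assumed to map an image to a feat_dim-dimensional feature vector, and the data loader is assumed to yield (x, y, d) triples of image, target label, and sensitive-attribute label; the learning rate and weight decay follow the parameter settings reported in the experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainIndependentClassifier(nn.Module):
    """Backbone plus a final layer with N*D logits; indices d*N .. d*N+N-1 form
    the N-way classifier associated with sensitive-attribute value d."""
    def __init__(self, backbone, feat_dim, n_classes, n_domains):
        super().__init__()
        self.backbone = backbone
        self.n_classes, self.n_domains = n_classes, n_domains
        self.fc = nn.Linear(feat_dim, n_classes * n_domains)

    def log_prob(self, x):
        # log P(y | d, x): softmax over the N logits within each domain group
        logits = self.fc(self.backbone(x)).view(-1, self.n_domains, self.n_classes)
        return F.log_softmax(logits, dim=-1)            # shape (B, D, N)

    def predict(self, x):
        # Blind prediction: P(d|x) = 1/|G|, i.e. average over d, then argmax over y
        return self.log_prob(x).exp().mean(dim=1).argmax(dim=-1)

def client_update(net, loader, epochs, lr=1e-4, wd=3e-4, device="cpu"):
    """Local training on one client: minimize the cross entropy -log P(y | d, x)."""
    net = net.to(device)
    net.train()
    opt = torch.optim.Adam(net.parameters(), lr=lr, weight_decay=wd)
    for _ in range(epochs):
        for x, y, d in loader:                          # d = sensitive-attribute label
            x, y, d = x.to(device), y.to(device), d.to(device)
            logp = net.log_prob(x)                      # (B, D, N)
            loss = F.nll_loss(logp[torch.arange(len(y)), d], y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return {k: v.detach().cpu() for k, v in net.state_dict().items()}

def fedavg(client_weights, client_sizes):
    """Server aggregation: theta_g = sum_k (n_k / n) * theta_k."""
    n = float(sum(client_sizes))
    return {k: sum(w[k].float() * (s / n) for w, s in zip(client_weights, client_sizes))
            for k in client_weights[0]}
```

Note that prediction never looks at the sensitive-attribute label: the per-group probabilities are averaged with the uniform prior P(d|x) = 1/|G|, exactly as in the blind-prediction rule above.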
§.§ Joint Training In the training process of the federated learning debiasing framework, as shown in Algorithm <ref>, the global model weights θ_0^g are first initialized by the server, and then many rounds of communication are performed. In each round, the server first sends the global model weights of this round to each client, and then each client receives the global model weights and performs local training in parallel. The client executes many iterations for each local training. In the iterations, the batch data sets are extracted in batches, and the local model weights are adjusted through gradient descent to optimize the local model loss function. After the client is trained locally, the client sends the local weights to the server. The server receives the weights of all clients in this round and starts to aggregate the weights of this round to obtain the global model weights of a new round, finally ending the communication training of this round. § EXPERIMENT §.§ Dataset §.§.§ CelebA CelebA dataset <cit.> is a face attribute dataset provided by Liu et al. It contains 202,599 face pictures of 10,177 celebrity identities. The training data set contains 162,770 pictures, the test dataset contains 19,962 pictures, and each picture is marked with 40 features such as gender, hair color and lips. This dataset is widely used in computer vision deep learning tasks.TABLE <ref> shows the data distribution and settings of the dataset in this paper. In the experiment, this paper chooses the "Smiling" label as the target attribute and uses the "Male" label as the sensitive attribute to study the bias of the model in the gender population when predicting smiles. §.§.§ FairFace The FairFace dataset <cit.> is a face image dataset proposed by Karkkainen et al. It consists of 108,501 pictures, of which the training data set contains 86,744 pictures, and the verification data set contains 10,954 pictures. It includes seven ethnic groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latin. Each facial image is labeled with race, gender, and age, with age attributes classified into nine categories based on age group. In the experiment, this paper chooses the "Age" label as the target attribute and uses the "Race" label as the sensitive attribute to study the bias of the deep learning model on the racial group when predicting age. §.§.§ UTKFace The UTKFace dataset <cit.> is a facial dataset with a long age span proposed by Zhang et al., which contains over 20,000 images. In this paper, the dataset is divided into a training dataset with a size of 18,964 and a testing dataset with a size of 4,741 in a ratio of 80% and 20%. Each image in the dataset is labeled with gender, age, and race. There are five types of race labels, including white, black, Asian, Indian, and Others. This article uses images from the first four races for experiments[The number of the last race "Others" is too small and not properly labeled, which has a significant random impact on the experiment.], using the "Race" label as a sensitive attribute. The age tags in the image are divided into nine categories based on the age groups of "less than 2", "3-9", "10-19", "20-29", "30-39", "40-49", "50-59", "60-69", and "more than 70", with the "Age" label as the target attribute. 
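Returning to the joint training procedure described above, the pieces sketched after the Server Aggregation subsection (client_update and fedavg) can be combined into a single communication loop. This is again only an illustrative sketch: the clients are visited sequentially rather than in parallel, and treating the reported three local training iterations between communications as three local epochs is our assumption.

```python
def train_dbfed(global_net, client_loaders, client_sizes, rounds, local_epochs=3):
    """Server-side driver: broadcast global weights, run local domain-independent
    training on every client, then aggregate the returned weights with FedAvg."""
    for _ in range(rounds):
        broadcast = {k: v.clone() for k, v in global_net.state_dict().items()}
        local_weights = []
        for loader in client_loaders:                   # run in parallel in practice
            global_net.load_state_dict(broadcast)       # start from the global model
            local_weights.append(client_update(global_net, loader, epochs=local_epochs))
        global_net.load_state_dict(fedavg(local_weights, client_sizes))
    return global_net
```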
§.§ Evaluating Metrics In order to quantify the actual debiasing effectiveness of deep learning classification models, this paper selects one metric to measure classification accuracy and four metrics to measure model fairness as evaluating metrics. §.§.§ Accuracy Accuracy refers to the probability that the model correctly classifies the predicted attribute, which can be calculated by the following formula: ACC = P(Ŷ=c | Y=c), where c ∈ C and C is the set of target-attribute categories. §.§.§ Skewed Error Ratio (SER) The skewed error ratio is a metric that evaluates the maximum difference between different sensitive attributes. It mainly represents the difference between the sensitive attribute with the highest accuracy and the sensitive attribute with the lowest accuracy. The larger the value, the greater the difference in the algorithm's classification accuracy across groups. The formula can be expressed as: SER = max_g∈ G Error_g / min_g∈ G Error_g, where g is a sensitive-attribute value, G is the set of sensitive attributes, and Error_g is the classification error rate of images with sensitive attribute g. §.§.§ Equality of Opportunity (EO) Equality of Opportunity is a metric that evaluates whether different groups are treated equally, mainly indicating how equally samples from different groups are correctly classified. The larger the value, the greater the difference in the probability of correct classification across groups. Achieving equal opportunity requires the model to have equal true positive rates (equivalently, equal false negative rates) across groups, and the condition can be expressed as: for all a, b ∈ G, P(Ŷ=1 | S=a, Y=1) = P(Ŷ=1 | S=b, Y=1). Since this paper focuses on the situation of multiple target-attribute and sensitive-attribute categories, Equality of Opportunity is defined as the mean, over target classes, of the variance across sensitive-attribute groups of the per-class accuracy, as follows: EO = 1/|C| ∑_c∈ C var_g∈ G(P(Ŷ=c | S=g, Y=c)), where var(·) is the variance. §.§.§ Bias Amplification (BA) Bias amplification is a metric that evaluates the degree to which the algorithm's decisions are inclined towards specific target-attribute categories, mainly indicating the unfairness of the algorithm among the types of target attributes. The larger the value, the more inclined the algorithm is towards certain specific target attributes. Bias Amplification can be calculated using the following formula: BA = 1/|C| ∑_c∈ C ( max_g∈ G g_c / ∑_g∈ G g_c - 1/|G| ). §.§.§ Demographic Parity (DP) Demographic Parity is a metric that evaluates how similar the algorithm's decisions are across different groups. The larger the value, the greater the difference in the algorithm's decisions for different populations. The model satisfies Demographic Parity if, for all a, b ∈ G, P(Ŷ=1 | S=a) = P(Ŷ=1 | S=b). The calculation formula for Demographic Parity is as follows: DP = 1/|C| ∑_c∈ C var_g∈ G(P(Ŷ=c | S=g)).
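The five metrics above can be computed directly from model predictions on a test set. The NumPy sketch below is illustrative: the function name is ours, SER is written in the worst-group-error over best-group-error form, and reading g_c in the Bias Amplification formula as the number of samples from group g that the model assigns to class c is our interpretation of the definition; every group is assumed to contain samples of every class.

```python
import numpy as np

def fairness_metrics(y_true, y_pred, s, classes, groups):
    """Compute ACC, SER, EO, BA, and DP from labels, predictions, and the
    sensitive-attribute values s, following the definitions above."""
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, s))
    acc = np.mean(y_pred == y_true)

    # Skewed Error Ratio: worst-group error over best-group error.
    err = [np.mean(y_pred[s == g] != y_true[s == g]) for g in groups]
    ser = max(err) / max(min(err), 1e-12)

    eo = ba = dp = 0.0
    for c in classes:
        # Per-group accuracy on class c -> Equality of Opportunity term.
        rec = [np.mean(y_pred[(s == g) & (y_true == c)] == c) for g in groups]
        eo += np.var(rec)
        # Share of class-c predictions attributed to each group -> Bias Amplification term.
        pred_c = [np.sum((y_pred == c) & (s == g)) for g in groups]
        ba += max(pred_c) / max(sum(pred_c), 1) - 1.0 / len(groups)
        # Per-group rate of predicting class c -> Demographic Parity term.
        rate = [np.mean(y_pred[s == g] == c) for g in groups]
        dp += np.var(rate)

    n = len(classes)
    return {"ACC": acc, "SER": ser, "EO": eo / n, "BA": ba / n, "DP": dp / n}
```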
§.§ Comparative Experiment §.§.§ Environment Settings The experimental operating system is Linux 3.10.0, and the development environment is Anaconda3, Python 3.10.9, and PyCharm. The deep learning model is mainly written with the deep learning framework PyTorch 1.13.1 and trained on an NVIDIA A100. §.§.§ Comparison Methods This paper chooses Federated Averaging (FedAvg) and a local training algorithm as baselines, and chooses the Fair Federated Learning model (FairFed) <cit.> proposed by Ezzeldin et al. as the state-of-the-art (SOTA) reference. In the local training algorithm, each client only trains on its local dataset and does not aggregate weights through communication. §.§.§ Parameter settings This paper uses ResNet-34 <cit.> as the basic deep learning model for the experiments. ResNet-34 is a deep residual network with 34 convolutional layers, which is easy to optimize and widely used in computer vision tasks. In the experiments, the adaptive moment estimation (Adam) <cit.> optimizer was used for gradient descent, which achieves high computational efficiency with a small memory footprint. The learning rate of the optimizer is 0.0001, and the weight decay is 0.0003. During model training, five clients are set up, and the data is randomly divided into equal partitions that serve as their local training datasets. The batch size is 128, and the clients perform communication aggregation after every three local training iterations. This paper mainly uses three image datasets, CelebA, FairFace, and UTKFace, for the experiments. Each image has three channels, R, G, and B, with values ranging from 0 to 255. All images are uniformly resized to a fixed pixel size. §.§.§ Results and Analysis Table <ref> shows the experimental results; the best-performing result for each metric on each dataset is highlighted in bold, while the second-best is underlined. DBFed performs best on the CelebA dataset. It outperforms the other methods in terms of Equality of Opportunity and Demographic Parity and ranks second in terms of Accuracy and Skewed Error Ratio. This shows that DBFed achieves good model fairness while maintaining high prediction accuracy, and effectively reduces gender bias in smile classification. In the experiments on the FairFace dataset, DBFed achieved the highest Accuracy, but the four fairness metrics did not significantly surpass those of the other methods. This may be because the large number of sensitive-attribute categories in the dataset weakens the debiasing effect. In the UTKFace experiment, the framework outperformed the other methods in terms of Skewed Error Ratio, Bias Amplification, and Demographic Parity while achieving high accuracy, indicating excellent results in both model accuracy and fairness. Overall, DBFed performs well in both model accuracy and fairness, effectively reducing discrimination against demographic groups when performing classification tasks. § CONCLUSION This paper proposes a debiasing federated learning framework based on domain-independent training that can make predictions without using sensitive attribute labels, mitigate model biases during federated learning, and improve model fairness. This paper verifies the debiasing effect of DBFed through experiments on three datasets and five evaluation metrics. In addition, due to the need for sensitive attribute labels during the training process, there are certain requirements for dataset annotation. The research in this paper is a new attempt at model fairness under the federated learning computing architecture, which can be applied to many scenarios that require high data security, such as model fine-tuning and model unbinding in human recognition, or intelligent dialogue models for fund trading in the financial field. The next research direction will focus on exploring how to remove model biases in federated learning processes without using sensitive attribute labels, and strive to achieve better debiasing effects across more categories of the sensitive attribute.
§ ACKNOWLEDGEMENTS Joint Research and Development Project of Yangtze River Delta Region Technology and Innovation Community (2022CSJGG0800).
http://arxiv.org/abs/2307.03924v1
20230708074639
Real-Time Simulation of Open Quantum Spin Chains with Inchworm Method
[ "Geshuo Wang", "Zhenning Cai" ]
quant-ph
[ "quant-ph", "cond-mat.stat-mech" ]
Real-Time Simulation of Open Quantum Spin Chains with Inchworm Method Geshuo Wang, Zhenning Cai ======================================================================== We study the real-time simulation of open quantum systems, where the system is modeled by a spin chain, with each spin associated with its own harmonic bath. Our method couples the inchworm method for the spin-boson model and the modular path integral methodology for spin systems. In particular, the introduction of the inchworm method can significantly suppress the numerical sign problem. Both methods are tweaked to make them work seamlessly with each other. We represent our approach in the language of diagrammatic methods, and analyze the asymptotic behavior of the computational cost. Extensive numerical experiments are done to validate our method. § INTRODUCTION An open quantum system refers to a quantum-mechanical system coupled to an environment. The coupling can significantly affect the quantum dynamics, resulting in effects such as quantum dissipation and quantum decoherence. It can also lead to non-Markovian evolution of the quantum system, posing significant challenges in the numerical simulation. Nevertheless, the study of open quantum systems is becoming increasingly important and has practical applications in many fields <cit.>, as real-world systems are never completely isolated. In the simulation of open quantum systems, a simple harmonic bath is generally assumed so that the effect of the bath on the system can be analytically given by the bath influence functional <cit.>, allowing the path integral approach <cit.> to be used to formulate the system dynamics. One classical method based on path integrals is the quasi-adiabatic propagator path integral (QuAPI) <cit.>. Other methods have been developed based on QuAPI to improve simulation efficiency by reducing computational complexity or enhancing computational accuracy, including the iterative QuAPI method <cit.>, the blip decomposition of the path integral <cit.>, and the differential equation-based path integral method (DEBPI) <cit.>. Due to the non-Markovian nature of the dynamics, the path-integral-based methods often suffer from increasing memory costs for longer simulation times. The small matrix decomposition of the path integral (SMatPI) <cit.>, however, has successfully overcome the problem by summarizing the contribution of the paths into small matrices representing the kernel of the quantum master equation. An alternative approach to dealing with the high memory cost in simulating quantum systems is to use the quantum Monte Carlo method to evaluate the high-dimensional integrals in the Dyson series <cit.>. However, the Monte Carlo method introduces stochastic errors and can lead to the so-called “sign problem” for highly oscillatory integrands <cit.>. To relieve the sign problem, the inchworm Monte Carlo method was developed in <cit.>, which takes the idea of the bold diagrammatic Monte Carlo method introduced in <cit.>. The idea is to compute quantum propagators for shorter time intervals, and then combine them into the propagators of longer time intervals. The extension of the propagators can also be formulated into an integro-differential equation <cit.>, so that classical numerical methods can be applied. The inchworm Monte Carlo method has been proven to be successful in reducing the severity of the sign problem <cit.>.
Some efficient numerical methods for solving the integro-differential equation have been discussed in <cit.>. The methods discussed above mainly focus on simple systems such as a single spin or other systems with a small number of possible states, since the dimension of the Hilbert space grows exponentially with the number of particles. As a result, simulating more complex systems requires new approaches. One such approach is the method of modular path integral (MPI) <cit.>, which leads to linear scaling with the number of particles. Other methods apply tensor train decompositions to keep the memory cost low for large systems <cit.>, utilizing low-rank approximations to reduce the computational and memory cost. In these methods, a typical system under consideration is the Ising chain model, a one-dimensional chain of interacting spins <cit.>. The Ising model has wide applications in magnetism <cit.>, neuroscience <cit.> and many other fields. The dynamics of closed Ising chains is well studied in the literature <cit.>. Recently, there has been more research focusing on the dissipative Ising chain <cit.>. This paper focuses on the evolution of an Ising chain coupled with harmonic baths, which are characterized by the Ohmic spectral density <cit.>.

The Ising model used in this study is introduced in <Ref>. In <Ref>, we propose a diagrammatic representation of the model based on the special structure of the Ising chain. The computation of the diagrams is introduced in detail in <Ref> and <Ref>: <Ref> mainly discusses the computation of the diagrams for each single spin, and <Ref> contains the algorithm for merging the diagrams. The estimation of the computational cost is given in <Ref>, and numerical experiments are given in <Ref>. Finally, in <Ref>, we provide some concluding remarks and introduce possible future work inspired by our results.

§ ISING CHAIN WITH SPIN-BATH COUPLING

This section provides a brief introduction to the model studied in this paper, which is an Ising chain coupled with baths consisting of harmonic oscillators. In this model, the baths for different spins are not directly coupled. An isolated Ising chain is a chain of spins in which each spin couples with its nearest neighbors <cit.>. The Hamiltonian for an Ising chain with K spins is generally given by

H_Ising = ∑_{k=1}^K H_s^(k) + ∑_{k=1}^{K-1} U^(k) ⊗ V^(k+1),

where H_s^(k) = ϵ^(k) σ_z^(k) + Δ^(k) σ_x^(k), with σ_x^(k), σ_z^(k) being the Pauli matrices for the kth spin in the chain. The parameter ϵ^(k) describes the energy difference between the two spin states, and Δ^(k) is the frequency of the spin flipping. The term U^(k) ⊗ V^(k+1) describes the nearest-neighbor coupling between the kth and (k+1)th spins. In this paper, a more complicated case is studied, where each spin in the Ising chain is coupled with a harmonic bath. The total Hamiltonian for the coupled system and baths is then given by

H = H_Ising + ∑_{k=1}^K H_b^(k) + ∑_{k=1}^K W_s^(k) ⊗ W_b^(k)

where

H_b^(k) = ∑_j 1/2 [ (p̂_j^(k))^2 + (ω_j^(k))^2 (q̂_j^(k))^2 ], W_s^(k) = σ_z^(k), W_b^(k) = ∑_j c_j^(k) q̂_j^(k).

In this expression, p̂_j^(k) and q̂_j^(k) are the momentum operator and the position operator of the jth harmonic oscillator in the bath of the kth spin, respectively; ω_j^(k) is the frequency of the jth harmonic oscillator in the bath of the kth spin, and c_j^(k) is the coupling intensity between the kth spin and the jth oscillator in its bath.
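For concreteness, the tensor-product structure of H_Ising can be assembled explicitly for a small chain. The sketch below is illustrative only: the coupling operators U^(k), V^(k) are not fixed at this point in the text, so we take the conventional Ising choice U^(k) ⊗ V^(k+1) = J_k σ_z^(k) σ_z^(k+1) as an assumption, and the function names are ours.

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def embed(op, site, K):
    """Kronecker-embed a single-site operator acting on `site` (0-based)
    into the 2^K-dimensional Hilbert space of a K-spin chain."""
    out = np.array([[1.0 + 0.0j]])
    for k in range(K):
        out = np.kron(out, op if k == site else id2)
    return out

def ising_hamiltonian(eps, delta, J, K):
    """H_Ising = sum_k (eps_k sz_k + delta_k sx_k) + sum_k J_k sz_k sz_{k+1},
    i.e. the chain Hamiltonian under the assumed choice U = V ∝ sigma_z."""
    H = np.zeros((2**K, 2**K), dtype=complex)
    for k in range(K):
        H += eps[k] * embed(sz, k, K) + delta[k] * embed(sx, k, K)
    for k in range(K - 1):
        H += J[k] * embed(sz, k, K) @ embed(sz, k + 1, K)
    return H

K = 4
H = ising_hamiltonian(eps=[1.0] * K, delta=[1.0] * K, J=[0.5] * (K - 1), K=K)
print(H.shape, np.allclose(H, H.conj().T))   # (16, 16) True: Hermitian, as expected
```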
<Ref> illustrates the overall Hamiltonian and the coupling relations in this model more intuitively with an Ising chain of 4 spins. Similar to the assumption in <cit.>, in this paper the baths for different spins are not directly coupled with each other. Similar to <cit.>, we simply use U^(k) = V^(k) so that our method can be better illustrated by diagrams in the following sections. The method discussed in this paper is also applicable to the more general case U^(k) ≠ V^(k). As for the initial condition, the spins and the baths are assumed to be decoupled. More specifically, the kth spin is assumed to be in the state |ς^(k)⟩ and the baths are at their thermal equilibria. The initial density matrix for the whole system is then given by

ρ(0) = ⊗_{k=1}^K ρ^(k)(0) = ⊗_{k=1}^K ( ρ_s^(k)(0) ⊗ ρ_b^(k)(0) ) = ⊗_{k=1}^K ( |ς^(k)⟩⟨ς^(k)| ⊗ exp(-β^(k) H_b^(k)) / Tr(exp(-β^(k) H_b^(k))) ),

where β^(k) is the inverse temperature of the kth bath <cit.>.

§ DIAGRAMMATIC REPRESENTATION OF THE PATH INTEGRAL

In this section, we rewrite the evolution of the spin chain system using path integrals, so that the computation of each spin can be decoupled. Such an approach has been studied in many previous works <cit.>, and here we represent the path integrals using diagrams to facilitate our later discussions. We first split the total Hamiltonian in <ref> into two parts H = H_0 + V, where

H_0 = ∑_{k=1}^K H_0^(k) = ∑_{k=1}^K ( H_s^(k) + H_b^(k) + W_s^(k) ⊗ W_b^(k) ), V = ∑_{k=1}^{K-1} V^(k) ⊗ V^(k+1).

Below, we will assume that the interaction between spins V is a perturbation of the unperturbed Hamiltonian H_0, and describe the dynamics in the interaction picture. Given an observable O = O_s ⊗ Id_b, we can define the following propagator

G(-t,t) = e^{-i H_0 t} e^{i H t} O e^{-i H t} e^{i H_0 t},

which can be expanded into the following Dyson series

G(-t,t) = ∑_{N=0}^∞ ∫_{-t ⩽ s ⩽ t} ( ∏_{n=1}^N i sgn(s_n) ) 𝒯[ V_I(s_N) ⋯ V_I(s_1) O_{s,I}(0) ] ds,

where V_I(s_n) = e^{-i H_0 |s_n|} V e^{i H_0 |s_n|}, O_{s,I}(0) = O_s, and 𝒯 is the time-ordering operator that sorts all the operators in descending time order. The integrals in the equation are interpreted as

∫_{-t ⩽ s ⩽ t} (integrand) ds = ∫_{-t}^{t} ∫_{-t}^{s_N} … ∫_{-t}^{s_2} (integrand) ds_1 … ds_{N-1} ds_N.

Note that the coefficient ∏_{n=1}^N i sgn(s_n) comes from the coupling operators V, meaning that each V_I(s_n) is attached by i or -i according to the sign of s_n. With this propagator, the expectation of the observable can be expressed by <cit.>

⟨O_s(t)⟩ = Tr( ρ_I(t) G(-t,t) ) with ρ_I(t) = e^{-i H_0 t} ρ(0) e^{i H_0 t}.

If the observable has the form O_s = O_s^(1) ⊗ … ⊗ O_s^(K), we can plug the definition of V in <ref> into the Dyson series <ref>, so that the integrand contains N summation symbols, and each summand can be written in tensor product form. Precisely speaking, for the kth spin, the summand has the form

𝒢^(k)(s') = ( ∏_{n'=1}^{N'} √(i sgn(s'_{n'})) ) 𝒯[ V_I^(k)(s'_{N'}) … V_I^(k)(s'_1) O_{s,I}^(k)(0) ],

where s' is a subsequence of s of length N' ⩽ N. In particular, if s' is an empty sequence, we use the notation 𝒢^(k)(∅) = O_{s,I}^(k)(0) to denote the above quantity. Here we have again used the interaction picture:

V_I^(k)(s_n) = e^{-i H_0^(k) |s_n|} V^(k) e^{i H_0^(k) |s_n|}, O_{s,I}^(k)(0) = O_s^(k).

In <ref>, the subsequence s' depends on the number of operators V^(k) appearing in the summand, and the reason for the square root is that the term i V^(k) ⊗ V^(k+1) or -i V^(k) ⊗ V^(k+1), appearing in the expansion of iV or -iV, is separated into the terms 𝒢^(k) and 𝒢^(k+1) after the decomposition. In this work, we stick to the choice √(i) = e^{iπ/4} and √(-i) = e^{-iπ/4}.
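As a quick sanity check on the definitions above, note that by cyclicity of the trace Tr(ρ_I(t) G(-t,t)) collapses to Tr(e^{-iHt} ρ(0) e^{iHt} O_s), i.e., the usual Schrödinger-picture expectation. The short sketch below verifies this numerically with random Hermitian matrices standing in for H_0, V, and O_s; the sizes and names are illustrative only.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

n = 8                        # arbitrary dimension for this check
H0 = random_hermitian(n)     # stand-in for the uncoupled Hamiltonian H_0
V = random_hermitian(n)      # stand-in for the spin-spin interaction V
H = H0 + V
O = random_hermitian(n)      # stand-in for the observable O_s
rho0 = np.eye(n) / n         # any density matrix works for this identity
t = 0.7

# G(-t,t) = e^{-i H_0 t} e^{i H t} O e^{-i H t} e^{i H_0 t}
G = expm(-1j * H0 * t) @ expm(1j * H * t) @ O @ expm(-1j * H * t) @ expm(1j * H0 * t)
# rho_I(t) = e^{-i H_0 t} rho(0) e^{i H_0 t}
rho_I = expm(-1j * H0 * t) @ rho0 @ expm(1j * H0 * t)

lhs = np.trace(rho_I @ G)                                         # Tr(rho_I(t) G(-t,t))
rhs = np.trace(expm(-1j * H * t) @ rho0 @ expm(1j * H * t) @ O)   # Tr(rho(t) O_s)
print(abs(lhs - rhs))        # agrees up to round-off
```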
With these propagators, the terms in <ref> can be represented by sums of integrals whose integrands are tensor products of the 𝒢^(k)(s). For example, when N=1 and K=4, we have

∫_{-t}^{t} i sgn(s_1) 𝒯[ V_I(s_1) O_{s,I}(0) ] ds_1
= ∫_{-t}^{t} 𝒢^(1)(s_1) ⊗ 𝒢^(2)(s_1) ⊗ 𝒢^(3)(∅) ⊗ 𝒢^(4)(∅) ds_1
+ ∫_{-t}^{t} 𝒢^(1)(∅) ⊗ 𝒢^(2)(s_1) ⊗ 𝒢^(3)(s_1) ⊗ 𝒢^(4)(∅) ds_1
+ ∫_{-t}^{t} 𝒢^(1)(∅) ⊗ 𝒢^(2)(∅) ⊗ 𝒢^(3)(s_1) ⊗ 𝒢^(4)(s_1) ds_1.

In this equation, different spins are separated inside the integrals, allowing us to perform computations for each spin independently. For simplicity, we may express the above equation as a diagrammatic equation:

[Diagram: a single bold line from -t to t with one red cross at s_1 equals the sum of three diagrams; in each, four gray lines (one per spin) run from -t to t, and a pair of red crosses at s_1, joined by a dotted segment, sits on one of the three pairs of neighboring lines.]

In this diagrammatic equation, the bold line on the left-hand side represents an operator acting on all spins. The red cross indicates that only one coupling operator, at time s_1, appears in the integral. On the right-hand side, each gray line represents a single spin. Since the interaction operator V consists of three terms, each acting on two neighboring spins, we have three diagrams on the right-hand side, and each diagram includes two red crosses connected by a dotted line, indicating the two involved spins. By comparison with (<ref>), we find that every diagram on the right-hand side is an integral with respect to s_1, and the kth line corresponds to the expression 𝒢^(k)(…), where the ellipsis is filled with the time points of the red crosses on that line. In this case, the argument can only be the single point s_1 or the empty set.
Similarly, for the term with two coupling operators (N=2), the expansion is

[Diagram: a single bold line with red crosses at s_1 and s_2 equals the sum of nine diagrams; in each, the four gray spin lines carry one dotted cross pair at s_1 on one of the three pairs of neighboring lines and one dotted cross pair at s_2 on one of the three pairs of neighboring lines.]

Here the left-hand side corresponds to the two-dimensional integral in <ref>. On the right-hand side, we have nine diagrams since both interaction operators, at s_1 and s_2, have three choices. For general N and K, the number of diagrams is (K-1)^N.
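The count (K-1)^N can be reproduced by brute-force enumeration: a diagram is nothing but an assignment of one of the K-1 nearest-neighbor bonds to each of the N interaction times. A minimal sketch (function names are ours):

```python
from itertools import product

def diagrams(K, N):
    """Each interaction time s_n is assigned one of the K-1 nearest-neighbour
    bonds (k, k+1); a diagram is one such assignment per time point."""
    bonds = [(k, k + 1) for k in range(1, K)]          # (1,2), (2,3), ..., (K-1,K)
    return list(product(bonds, repeat=N))

K, N = 4, 2
print(len(diagrams(K, N)), (K - 1) ** N)   # 9 9: the nine diagrams above
print(len(diagrams(K, 1)), (K - 1) ** 1)   # 3 3: the three diagrams for N = 1
```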
In particular, for the first term in <ref>, where no interaction exists, no integral is required and we have

O_{s,I}(0) = O_s = O_s^(1) ⊗ O_s^(2) ⊗ O_s^(3) ⊗ O_s^(4) = 𝒢^(1)(∅) ⊗ 𝒢^(2)(∅) ⊗ 𝒢^(3)(∅) ⊗ 𝒢^(4)(∅),

which can be represented by the following diagrammatic equation:

[Diagram: the bold line from -t to t without any cross equals the four parallel gray lines from -t to t without any crosses.]

As a result, the final diagrammatic expansion of G(-t,t) is

[Diagram: G(-t,t) equals the bold line with no crosses, plus the bold line with one cross at s_1, plus the bold line with two crosses at s_1 and s_2, and so on; expanding each term as above, this equals the sum, over the number of crosses and over all assignments of each cross to one of the three pairs of neighboring gray lines, of the corresponding four-line diagrams.]

where the right-hand side includes all possible connections between neighboring spins. The advantage of this expansion is two-fold:

* For each diagram, when the time points s_1, ⋯, s_N are fixed, the kth line with its crosses is mathematically represented by 𝒢^(k)(s'), where s' collects the crosses on that line; this involves only one spin, so that it can be computed relatively easily.

* We can shuffle the diagrams and truncate the series appropriately to obtain efficient algorithms.

The idea for the computation of each line on the right-hand side will be based on an efficient path integral method known as the inchworm method <cit.>, and our algorithm for the integration over the time points and the summation of the diagrams is inspired by the method of modular path integrals <cit.>. The following two sections will be devoted to these two steps, respectively.

§ INCHWORM ALGORITHM FOR EACH SPIN

Recall that our purpose is to compute the expectation of the observable in the form of <ref>. Based on our decomposition <ref>, we can first take the trace for each diagram, and then sum up the results. Thus, for each diagram, we need to compute

Tr( ρ_I^(k)(t) 𝒢^(k)(s) ) with ρ_I^(k)(t) = e^{-i H_0^(k) t} ρ^(k)(0) e^{i H_0^(k) t}.

In this section, we will introduce an efficient algorithm to evaluate this single-spin quantity 𝒢^(k)(s) for given s. The algorithm is inspired by the inchworm Monte Carlo method for system-bath coupling <cit.>, where a single heat bath interacts with the entire system. Since each spin is associated with its own thermal bath, we can apply the Dyson series expansion again to separate the spin and the bath.
Since the baths are initially in their thermal equilibrium states, the trace with respect to the bath part can be calculated explicitly using Wick's theorem <cit.>. We refer the readers to <cit.> for the detailed calculation, and here we only present the final result:

Tr( ρ_I^(k)(t) 𝒢^(k)(s) ) = Tr_s^(k)[ ρ_{s,I}^(k)(t) ( ∏_{n=1}^N √(i sgn(s_n)) ) ∑_{M=0}^∞ i^M ∫_{-t ⩽ τ ⩽ t} ( ∏_{m=1}^M sgn(τ_m) ) 𝒰_0^(k)(τ,s) ℒ_b^(k)(τ) dτ ],

where

𝒰_0^(k)(τ,s) = 𝒯[ V_{s,I}^(k)(s_1) … V_{s,I}^(k)(s_N) W_{s,I}^(k)(τ_1) … W_{s,I}^(k)(τ_M) O_{s,I}^(k)(0) ]

with

V_{s,I}^(k)(s) = e^{-i H_s^(k) |s|} V^(k) e^{i H_s^(k) |s|}, W_{s,I}^(k)(τ) = e^{-i H_s^(k) |τ|} W_s^(k) e^{i H_s^(k) |τ|}, ρ_{s,I}^(k)(t) = e^{-i H_s^(k) t} ρ_s^(k)(0) e^{i H_s^(k) t},

and the bath influence functional ℒ_b^(k)(τ) has the form <cit.>

ℒ_b^(k)(τ_1,…,τ_M) = 0 if M is odd, and ℒ_b^(k)(τ_1,…,τ_M) = ∑_{𝔮 ∈ 𝒬_M} ∏_{(j,j') ∈ 𝔮} B^(k)(τ_j,τ_j') if M is even.

Here B^(k) is the two-point correlation function to be defined later in our test cases, and the set 𝒬_M contains all possible pairings of the integers {1,2,⋯,M}. For example,

𝒬_2 = {{(1,2)}}, 𝒬_4 = {{(1,2),(3,4)}, {(1,3),(2,4)}, {(1,4),(2,3)}}.

The general definition of 𝒬_M for even M is

𝒬_M = { {(j_1,j_1'),…,(j_{M/2},j_{M/2}')} | ⋃_{l=1}^{M/2} {j_l,j_l'} = {1,…,M}, j_l < j_l' for l = 1,…,M/2 },

which includes (M-1)!! pairings. According to <ref>, our objective now is to evaluate the quantity

𝒢^(k)(-t,s,t) = ( ∏_{n=1}^N √(i sgn(s_n)) ) ∑_{M=0}^∞ i^M ∫_{-t ⩽ τ ⩽ t} ( ∏_{m=1}^M sgn(τ_m) ) 𝒰_0^(k)(τ,s) ℒ_b^(k)(τ) dτ,

which yields Tr( ρ_I^(k)(t) 𝒢^(k)(s) ) = Tr_s^(k)( ρ_{s,I}^(k)(t) 𝒢^(k)(-t,s,t) ). Recall that we have used a gray line with red crosses to represent 𝒢^(k)(s). Due to the equivalence given in <ref>, below we will use the same diagram to represent the quantity 𝒢^(k)(-t,s,t). For example, given s = (s_1,s_2) with both s_1 and s_2 between -t and t, <ref> can be represented diagrammatically as

[Diagram: the gray line from -t to t with crosses at s_1 and s_2 equals a series of thin-line diagrams: the bare thin line with the two crosses, plus diagrams in which two, four, … time points τ_1, τ_2, … on the line are connected in pairs by arcs, each arc standing for a two-point correlation B(τ_j,τ_j'), summed over all numbers of arcs and all pairings.]
In this diagrammatic equation, the locations of the cross marks, given by s, are fixed, while on the right-hand side the τ's are integration variables. Note that the time-ordering operator 𝒯 in the definition of 𝒰_0^(k)(τ,s) guarantees that the operators are applied in the correct order. Each arc represents a two-point correlation function B(τ_j,τ_j') in the bath influence functional ℒ_b. The equation <ref> is ready for computation: one can directly apply the Monte Carlo method to the right-hand side to approximate the sum of integrals, which is known as the bare diagrammatic quantum Monte Carlo method (bare dQMC). To design a more efficient approach, we will follow the method in <cit.> to derive an integro-differential equation. We first generalize the definition of 𝒢^(k)(-t,s,t) to 𝒢^(k)(s_i,s,s_f) for any s_i < s_f:

𝒢^(k)(s_i,s,s_f) = ( ∏_{n=1}^N √(i sgn(s_n)) ) ∑_{M=0}^∞ i^M ∫_{s_i ⩽ τ ⩽ s_f} ( ∏_{m=1}^M sgn(τ_m) ) 𝒰_0^(k)(s_i,τ,s,s_f) ℒ_b^(k)(τ) dτ,

where s is an increasing sequence of time points, each of which lies between s_i and s_f, and

𝒰_0^(k)(s_i,τ,s,s_f) = 𝒯[ V_{s,I}^(k)(s_1) … V_{s,I}^(k)(s_N) W_{s,I}^(k)(τ_1) … W_{s,I}^(k)(τ_M) O_s^(k)(0) ] if 0 ∈ [s_i,s_f], and 𝒰_0^(k)(s_i,τ,s,s_f) = 𝒯[ V_{s,I}^(k)(s_1) … V_{s,I}^(k)(s_N) W_{s,I}^(k)(τ_1) … W_{s,I}^(k)(τ_M) ] if 0 ∉ [s_i,s_f].

Note that only operators between s_i and s_f are included in the definition; therefore, when [s_i,s_f] does not include the origin, O_s(0) is excluded. This definition can also be represented diagrammatically as in <ref>, only with -t replaced by s_i and t replaced by s_f. It can then be seen that for two intervals satisfying [s_i,s_f] ⊂ [s_i',s_f'], 𝒢^(k)(s_i,s,s_f) can be understood as a portion of 𝒢^(k)(s_i',s',s_f') if s is the subvector of s' consisting of all components between s_i and s_f.

To formulate an integro-differential equation for 𝒢^(k)(s_i,s,s_f), we extend the gray line from s_i to s_f by a length of ds (see the left-hand side of <ref>). In the expansion of the extended gray line, all diagrams on the right-hand side of <ref> are included. Besides these, the diagrams not included in <ref> are thin lines with arcs ending within the interval [s_f, s_f + ds] (second line of <ref>). Since ds is infinitesimal, it suffices to assume that there is only one time point inside [s_f, s_f + ds]. We can further assume that this time point is fixed at s_f, and then the diagram must be multiplied by ds when being added to the sum (third to fifth lines of <ref>). For simplicity, we will name the arc ending at s_f as 𝒜_{s_f} (thick black arcs in <ref>). We can now categorize all the diagrams with a point at s_f into classes characterized by the connected component of arcs that includes the arc 𝒜_{s_f}. Here the “connected component” is established by beginning with a set containing only the arc 𝒜_{s_f}, and then expanding the set iteratively by including all arcs that intersect any arc already in the set, until the set no longer changes. In <ref>, two categories are labeled by yellow and green backgrounds, and the connected components are highlighted using thick lines (including both black and white lines). For all diagrams sharing the same connected component including 𝒜_{s_f}, we can sum them up, and the result is a connection of a few thick lines joined by all arcs in this connected component, which is known as a “bold diagram”.
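The iterative construction of the connected component containing 𝒜_{s_f} is easy to state in code. In the sketch below, an arc is a pair of time indices, the component is grown exactly as described above (repeatedly adding arcs that intersect the current set), and the three pairings of four points illustrate that only the crossing pairing links both arcs to the last one; the helper names are ours.

```python
def crosses(a, b):
    """Two arcs (i, j) and (k, l) with i < j, k < l intersect iff they interleave."""
    (i, j), (k, l) = a, b
    return (i < k < j < l) or (k < i < l < j)

def component_containing(arcs, seed):
    """Grow the set of arcs connected to `seed` through crossings, following
    the iterative construction described in the text."""
    comp = {seed}
    changed = True
    while changed:
        changed = False
        for arc in arcs:
            if arc not in comp and any(crosses(arc, c) for c in comp):
                comp.add(arc)
                changed = True
    return comp

# The three pairings of four points; the "last arc" is the one ending at point 4.
for pairing in ([(1, 2), (3, 4)], [(1, 3), (2, 4)], [(1, 4), (2, 3)]):
    last_arc = next(a for a in pairing if a[1] == 4)
    print(pairing, "->", sorted(component_containing(pairing, last_arc)))
# The disjoint pairing {(1,2),(3,4)} and the nested pairing {(1,4),(2,3)} leave the
# last arc isolated; only the crossing pairing {(1,3),(2,4)} joins both arcs.
```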
The derivation is summarized in the following diagrammatic equation:

[Diagram: the gray (bold) line on [s_i, s_f + ds] with crosses at s_1,…,s_N equals the bold line on [s_i, s_f], plus ds times a series of thin-line diagrams in which a new coupling point at s_f is connected by the arc 𝒜_{s_f} to an earlier time point, possibly accompanied by further arcs. Grouping the thin-line diagrams according to the connected component of arcs containing 𝒜_{s_f} (two such groups are highlighted with yellow and green backgrounds) and summing within each group turns the thin segments between the component's time points back into bold lines, so that the correction term becomes a sum of bold diagrams.]

where the labels of the τ's are omitted in some diagrams without ambiguity. The mathematical formulae of the bold diagrams can be read off easily. For example, the bold diagram with the yellow background (a single arc connecting τ_1 and s_f) should be interpreted as

∫_{s_i}^{s_f} dτ_1 ( i sgn(τ_1) ) ( i sgn(s_f) ) W_s^(k)(s_f) 𝒢^(k)(τ_1, s_1, s_f) W_s^(k)(τ_1) 𝒢^(k)(s_i, s_0, τ_1) B^(k)(τ_1, s_f),

where s_0, s_1 are subsequences of s such that (s_0, τ_1, s_1) is an ascending sequence and s = (s_0, s_1); and the bold diagram with the green background (the connected arcs (τ_1,τ_3) and (τ_2,s_f)) reads

∫_{s_i}^{s_f} dτ_1 dτ_2 dτ_3 ( i sgn(τ_1) ) ( i sgn(τ_2) ) ( i sgn(τ_3) ) ( i sgn(s_f) ) W_s^(k)(s_f) 𝒢^(k)(τ_3, s_3, s_f) W_s^(k)(τ_3) 𝒢^(k)(τ_2, s_2, τ_3) W_s^(k)(τ_2) 𝒢^(k)(τ_1, s_1, τ_2) W_s^(k)(τ_1) 𝒢^(k)(s_i, s_0, τ_1) B^(k)(τ_1,τ_3) B^(k)(τ_2,s_f),

where s_0, s_1, s_2, s_3 are subsequences of s such that (s_0, τ_1, s_1, τ_2, s_2, τ_3, s_3) is an ascending sequence and s = (s_0, s_1, s_2, s_3).
The explicit expression of the diagrammatic equation (<ref>) is as follows:

𝒢^(k)(s_i, s, s_f + ds) = 𝒢^(k)(s_i, s, s_f) + 𝒦^(k)(s_i, s, s_f) ds,

where 𝒦^(k)(s_i, s, s_f) is the sum of the bold diagrams inside the parentheses in <ref>:

[Diagram: 𝒦^(k)(s_i,s,s_f) equals the sum of bold diagrams on [s_i,s_f]: the diagram with a single arc (τ_1, s_f), plus the diagram with the two crossing arcs (τ_1, τ_3) and (τ_2, s_f), plus all higher-order bold diagrams whose arcs form a single connected component containing 𝒜_{s_f}.]

The integro-differential equation for 𝒢^(k)(s_i,s,s_f) can then be derived as

∂𝒢^(k)(s_i,s,s_f) / ∂s_f = 𝒦^(k)(s_i,s,s_f).

For the purpose of easier implementation, we also provide the mathematical expression of 𝒦^(k)(s_i,s,s_f). Its general form is

𝒦^(k)(s_i,s,s_f) = ∑_{M=1, M odd}^∞ ∫_{s_i ⩽ τ_1 ⩽ … ⩽ τ_M ⩽ s_f} dτ_1 … dτ_M ( ∏_{m=1}^{M+1} i sgn(τ_m) ) W_s^(k)(s_f) 𝒰^(k)(s_i,τ,s,s_f) ℒ_b^{c(k)}(τ),

where τ = (τ_1,…,τ_M,τ_{M+1}) and τ_{M+1} = s_f. The system-associated operator 𝒰^(k) is defined by

𝒰^(k)(s_i,τ,s,s_f) = 𝒢^(k)(τ_M, s_M, s_f) W_s^(k)(τ_M) 𝒢^(k)(τ_{M-1}, s_{M-1}, τ_M) W_s^(k)(τ_{M-1}) ⋯ W_s^(k)(τ_1) 𝒢^(k)(s_i, s_0, τ_1)

with s_0, ⋯, s_M being subsequences of s such that s = (s_0, s_1, ⋯, s_M) and the extended sequence (s_i, s_0, τ_1, s_1, τ_2, ⋯, τ_M, s_M, s_f) is increasing. This indicates that s_0, ⋯, s_M are the subsequences of s separated by τ_1, ⋯, τ_M. The bath influence functional ℒ_b^{c(k)} is exactly the same as the bath influence functional in <cit.>:

ℒ_b^{c(k)}(τ_1,…,τ_{M+1}) = ∑_{𝔮 ∈ 𝒬_{M+1}^c} ∏_{(j,j') ∈ 𝔮} B^(k)(τ_j,τ_j'),

where 𝒬_{M+1}^c is the set of connected diagrams. For example,

𝒬_2^c = {{(1,2)}}, 𝒬_4^c = {{(1,3),(2,4)}}, 𝒬_6^c = {{(1,3),(2,5),(4,6)}, {(1,4),(2,5),(3,6)}, {(1,4),(2,6),(3,5)}, {(1,5),(2,4),(3,6)}}.

One may refer to <cit.> for more information about the set 𝒬_{M+1}^c. In general, the number of pairings in 𝒬_{M+1}^c is asymptotically e^{-1} M!! when M is a large odd integer <cit.>.

For fixed s_i and s, solving the integro-differential equation <ref> requires an initial condition at s_f = s_N (or s_f = s_i if s is an empty sequence). By definition, it can be immediately seen that

𝒢^(k)(s_i, s_f = s_i) = 𝕀^(k), if s_i ≠ 0;
𝒢^(k)(s_i, s_1, ⋯, s_N, s_f = s_N) = √(i sgn(s_N)) V_{s,I}^(k)(s_N) 𝒢^(k)(s_i, s_1, ⋯, s_{N-1}, s_f = s_N), if s_N ≠ 0.

Due to the observable O_s^(k) appearing in the definition of 𝒢^(k), there is a discontinuity when any of the time points touches zero. The jump condition needed in the computation is

lim_{s_f → 0^+} 𝒢^(k)(s_i,s_1,…,s_N,s_f) = O_s^(k) lim_{s_f → 0^-} 𝒢^(k)(s_i,s_1,…,s_N,s_f).

By these conditions, all the full propagators 𝒢^(k)(s_i,s,s_f) can be uniquely determined.
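The sets 𝒬^c can be generated by filtering all Wick pairings down to those whose arcs form a single connected component under the crossing relation; this criterion is our interpretation of “connected diagrams”, but it reproduces the examples listed above, giving counts 1, 1, and 4 for 2, 4, and 6 points. A small self-contained sketch:

```python
def pairings(points):
    """All Wick pairings of an even list of points (the sets Q_M)."""
    if not points:
        return [[]]
    first, rest = points[0], points[1:]
    out = []
    for idx, partner in enumerate(rest):
        remaining = rest[:idx] + rest[idx + 1:]
        for sub in pairings(remaining):
            out.append([(first, partner)] + sub)
    return out

def crosses(a, b):
    (i, j), (k, l) = a, b
    return (i < k < j < l) or (k < i < l < j)

def is_connected(pairing):
    """True if the arcs form a single component under the crossing relation."""
    comp = {pairing[0]}
    changed = True
    while changed:
        changed = False
        for arc in pairing:
            if arc not in comp and any(crosses(arc, c) for c in comp):
                comp.add(arc)
                changed = True
    return len(comp) == len(pairing)

for n_points in (2, 4, 6):
    all_pairings = pairings(list(range(1, n_points + 1)))
    connected = [p for p in all_pairings if is_connected(p)]
    print(n_points, len(all_pairings), len(connected))
    # 2: 1 of 1,  4: 1 of 3,  6: 4 of 15 -- matching Q_2^c, Q_4^c, Q_6^c above
```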
To solve the integro-differential equation (<ref>) numerically, we start by solving all 𝒢^(k)(s_i, s_f), i.e., the case N = 0, and then increase the length of s iteratively. Such an order guarantees that the initial condition <ref> can be applied whenever needed. When solving for 𝒢^(k)(s_i, s, s_f) with fixed s_i and s, the second-order Heun's method is applied, and the jump condition <ref> must be applied when s_f crosses zero. For the series of integrals on the right-hand side of <ref>, we select an odd positive integer M̅ and truncate the series at M = M̅ as an approximation. In our experiments, the value of M̅ is at most 5, and therefore the integrals in <ref> are computed numerically using the second-order composite trapezoidal rule. If a larger M̅ needs to be used, one can apply Monte Carlo methods to approximate the integrals, leading to the inchworm Monte Carlo method as introduced in <cit.>. To save computational cost, we have also utilized the following property of the full propagators: for all T > 0,

𝒢^(k)(s_i+T, s_1+T, …, s_N+T, s_f+T) = e^{-i H_s^(k) T} 𝒢^(k)(s_i, s_1, …, s_N, s_f) e^{i H_s^(k) T}, if s_i > 0;
𝒢^(k)(s_i-T, s_1-T, …, s_N-T, s_f-T) = e^{-i H_s^(k) T} 𝒢^(k)(s_i, s_1, …, s_N, s_f) e^{i H_s^(k) T}, if s_f < 0.

Note that this property holds only when all the time points are on the same side of the origin.

§ RESUMMATION OF THE FULL PROPAGATOR

Using the algorithm introduced in the previous section, we are able to compute all the gray lines in <ref>. In this section, we propose a fast algorithm to sum up all the diagrams. Before introducing the algorithm, we first note that the same gray line for the same spin can sometimes be used multiple times during the summation. For example, in the 4-spin case, when the propagator 𝒢^(4)(-t, s_1, t) is computed for the fourth spin, it can be applied in the following terms, all of which appear in <ref>:

[Diagram: five of the four-line diagrams from <ref>: the N=1 diagram whose single cross pair sits on the third and fourth lines, and the four N=2 diagrams in which one cross pair sits on the third and fourth lines while the other sits on the first and second or on the second and third lines.]
Instead of applying <ref> directly to compute the summation, we will follow the idea of the modular path integral <cit.> and assemble all the gray lines by adding spins iteratively. Suppose that we want to add up the five diagrams in <ref>. Notice that the terms related to the last spin are essentially the same in all these diagrams. Therefore, instead of computing all the diagrams separately, a more efficient way is to apply the distributive law to separate the last spin and only add up the terms for the first three spins. Similarly, when dealing with the sum involving the first three spins, the first and the second diagrams in <ref> can be combined, and the third and the fifth diagrams in <ref> can also be combined. In general, to deal with the sum on the right-hand side of <ref>, we can first separate all the diagrams into groups according to the number of crosses on the last line. Then, for each of the groups, we further separate the diagrams into subgroups according to the crosses on the third line. For each of the subgroups, we apply such a grouping one more time according to the crosses on the second line. When performing computations, we first sum up the terms involving only the first spin in all the smallest groups. The result of each group is then multiplied by the corresponding term related to the second spin, and a similar procedure is repeated for the rest of the spins. Mathematically, this idea is based on the following iterative representation of the observable:

G^[1](-t,s,t) = Tr_s^(1)( ρ_{s,I}^(1)(t) 𝒢^(1)(-t,s,t) );
G^[k+1](-t,s,t) = ∑_{N'=0}^∞ ∫_{-t ⩽ s' ⩽ t} G^[k](-t,s',t) Tr_s^(k+1)( ρ_{s,I}^(k+1)(t) 𝒢^(k+1)(-t,𝒫(s,s'),t) ) ds', for k = 1,…,K-2;
G^[K](-t,t) = ∑_{N=0}^∞ ∫_{-t ⩽ s ⩽ t} G^[K-1](-t,s,t) Tr_s^(K)( ρ_{s,I}^(K)(t) 𝒢^(K)(-t,s,t) ) ds,

where s' = (s'_1,…,s'_{N'}) and s = (s_1,…,s_N) are two non-descending lists. In <ref>, 𝒫 is the sorting operator that merges s and s' into a single sorted list. We start from the first spin with <ref>, add the middle spins by <ref>, and close the diagram by <ref>. These equations show that there are many duplicate computations in the procedure above, which can be avoided. The details of the final algorithm will again be illustrated using diagrams below. The computation of (<ref>) is straightforward.
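The distributive-law bookkeeping behind this recursion can be illustrated with a schematic toy in which each bond between neighboring spins carries a finite label (standing in for the crosses shared by the two spins) and each spin contributes a factor depending on the labels of its bonds. Contracting the chain spin by spin, in the order G^[1] → G^[2] → ⋯ → G^[K], reproduces the brute-force sum over all label assignments at a cost linear in K. The quantities below are random placeholders, not the actual propagators or quadratures.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

K = 5   # number of spins
L = 6   # number of possible "cross configurations" per bond (toy label set)

# Per-spin factors: the first and last spins see one bond, the middle spins two.
f_first = rng.normal(size=L)
f_mid = [rng.normal(size=(L, L)) for _ in range(K - 2)]
f_last = rng.normal(size=L)

# Brute force: sum the product of factors over all L^(K-1) bond configurations.
brute = 0.0
for cfg in product(range(L), repeat=K - 1):
    term = f_first[cfg[0]]
    for k in range(K - 2):
        term *= f_mid[k][cfg[k], cfg[k + 1]]
    brute += term * f_last[cfg[-1]]

# Iterative resummation: absorb one spin at a time (distributive law),
# following the pattern G^[1] -> G^[2] -> ... -> G^[K].
G = f_first.copy()            # G^[1], indexed by the bond between spins 1 and 2
for k in range(K - 2):
    G = G @ f_mid[k]          # G^[k+2], indexed by the next bond
total = G @ f_last            # G^[K], a scalar

print(np.isclose(brute, total))   # True; cost O(K L^2) instead of O(K L^(K-1))
```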
We start our discussion with the case k = 1 in <ref>, which becomes

G^[2](-t,s,t) = ∑_{N'=0}^∞ ∫_{-t ⩽ s' ⩽ t} G^[1](-t,𝒫(s,s'),t) Tr_s^(2)( ρ_{s,I}^(2)(t) 𝒢^(2)(-t,s',t) ) ds'.

If s has length 1, the equation can be diagrammatically represented by

[Diagram: on the left-hand side, the first two gray lines are bound together by short vertical black lines at both ends, and the second line carries a cross at s_1 attached to an open dashed segment; on the right-hand side, this equals the same two lines with only the open cross, plus the diagrams with one, two, or three additional cross pairs joining the two lines, plus higher-order terms.]

On the left-hand side, the diagram represents the quantity G^[2](-t,s,t), where the two short black lines binding the gray lines indicate that all connections between the first two spins are taken into account. The parameter s is shown as the cross on the second spin; we use an open dashed line to indicate that it will be connected to the third spin in the next step. The right-hand side of the equation represents the sum and the integral in <ref>: the four displayed diagrams correspond to the terms N' = 0, 1, 2, 3, respectively.
Similarly, if the length of s is 2, we have the following diagrammatic equation:

[Diagram: the two bound gray lines now carry open dashed crosses at s_1 and s_2 on the second line; the right-hand side is again the series with zero, one, two, three, … additional cross pairs joining the two lines.]

After computing the values of G^[2](-t,s,t) for all s, we can move forward to adding the third spin into the diagram.
An example for N=3 is

[Diagrammatic expansion of G^[3](-t,s_1,s_2,s_3,t): the three-spin propagator on the left-hand side is expanded into a sum of diagrams in which successively more cross pairs connect the newly added spin to the previous ones.]

We then repeat this process recursively until we add the second last spin into the diagram. This completes the computation of <ref>. To add the last spin, <ref> is applied instead of <ref>. The only difference is that there are no further spins, so that the time sequence s in G^[K](-t,s,t) can only be an empty list, which will then be simply denoted by G^[K](-t,t).
Diagrammatically, in the 4-spin case, the last step can be represented by

[Diagrammatic expansion of the full four-spin propagator: the fully bound diagram on the left-hand side equals the sum of diagrams in which the last spin is connected to the others by N' = 0, 1, 2, 3, … pairs of crosses.]

Additionally, the quantity on the left-hand side is exactly the observable expectation ⟨O_s⟩ = tr(ρ_I(t) G(-t,t)). In practical simulations, it is impossible to consider an infinite number of diagrams. Instead, a sufficiently large integer N̅ is chosen as the maximum number of interactions between any spin and its neighboring spins. Diagrammatically, N̅ corresponds to the maximum number of red crosses on each line. Furthermore, as depicted in <ref>, each diagram corresponds to an integral over a simplex, which is approximated using the composite trapezoidal quadrature rule in our numerical implementation. Recall that the integro-differential equation is also solved using a second-order method, so the overall convergence rate of our method is second order.

Here we would like to comment on the relation and difference between the modular path integral (MPI) proposed in <cit.> and our approach. Both methods compute the Ising chain dynamics iteratively based on the connection of spins. MPI utilizes QuAPI for the computation of the single-spin dynamics, while our method uses the inchworm algorithm.
Another significant difference between the two methods is that MPI considers all possible connections between spins given a specific time discretization, while our method instead introduces a cutoff for the spin couplings. With the cutoff, it is possible to reduce the number of diagrams and hence improve the computational efficiency.

§ ESTIMATION OF THE COMPUTATIONAL COST

In this section, we estimate the computational cost of our method. As discussed above, the computation contains two parts: the computation of all bold lines with red crosses for all the spins (<ref>) and the summation of the full propagators (<ref>). For simplicity, a uniform time step Δ t is chosen throughout the computation. All the discrete time points are therefore multiples of Δ t. Below we will estimate the cost for computing G(-t,t) for t = Δ t, 2Δ t, …, LΔ t given a positive integer L.

§.§ Computational cost for each spin

The integro-differential equation (<ref>) shows that the computation of longer diagrams depends on the knowledge of shorter diagrams. To compute G(-t,t) for t up to LΔ t, the maximum length of the diagrams is 2LΔ t. For any l = 1,⋯,2L, we can then assume that all the diagrams of length less than l Δ t are already computed, and focus on the diagrams of length l Δ t. For fixed l, the computational costs for all diagrams of length lΔ t are generally the same. The most costly part is the computation of 𝒢^(k)(s_i,s,s_f) in <ref>. Taking the forward Euler method as an example, we need to evaluate 𝒦^(k)(s_i,s,s_i + (l-1)Δ t) to obtain 𝒢^(k)(s_i,s,s_i + lΔ t). According to <ref>, the computational cost can be estimated by

∑_{M=1, M odd}^{M̅} C_M \binom{M+l}{M},

where the binomial coefficient \binom{M+l}{M} is the number of grid points in the M-dimensional simplex s_i ⩽ s ⩽ s_i + (l-1)Δ t, and C_M is the computational cost of the integrand. Note that this estimation is based on the grid-based numerical quadrature, which does not apply to Monte Carlo methods. For large M, the computation of the bath influence functional becomes dominant since the number of diagrams increases as 𝒪(M!!), so that C_M can be estimated by 𝒪((M+2)!!). In our tests, M̅ is no more than 5. Hence, we will regard C_M as a constant for simplicity.

With the cost of each diagram estimated by <ref>, we now need to calculate the number of diagrams of length lΔ t. The estimation of the computational cost starts from the number of different bold lines with total length lΔ t for l=1,…,2L. When l ⩽ L, the interval [s_i,s_f] may or may not contain the origin 0. By <ref>, if 0∉[s_i,s_f], we may apply the shift-invariance property to reduce the number of diagrams. Since each spin has at most N̅ couplings, the total number of different diagrams with length lΔ t ⩽ LΔ t is

∑_{N=0}^{N̅} (2L+1-l) \binom{N+l}{N} = (2L+1-l) \binom{N̅+l+1}{N̅},

where the factor 2L+1-l is the number of different choices of s_i, namely s_i = -LΔ t, (-L+1)Δ t, …, (L-l)Δ t, and the binomial coefficient \binom{N+l}{N} represents the different choices of N spin interactions on the set {s_i, s_i + Δ t, …, s_i + lΔ t}. Practically, when 0∉[s_i,s_f], the translation relation <ref> can be applied for the reduction of diagrams. However, the reduction does not change the order of the estimated cost. Therefore, for the single-spin full propagators of all lengths, the computational cost is estimated by

∑_{l=1}^{2L} (2L+1-l) \binom{N̅+l+1}{N̅} ∑_{M=1, M odd}^{M̅} C_M \binom{M+l}{M} ⩽ ∑_{l=1}^{2L} (2L+1-l) \binom{N̅+l+1}{N̅} C_{M̅} (M̅+1)/2 \binom{M̅+l}{M̅} ≲ M̅ C_{M̅} L ∑_{l=1}^{2L} l^{N̅} l^{M̅} ≲ L^{M̅+N̅+2},

where M̅ and N̅ are relatively small in practice and are regarded as constants in the above estimation.
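To make the counting argument above concrete, the short Python sketch below evaluates the estimate for given L, M̅ and N̅. It is only an illustration of the formula (the function names, the choice C_M = 1, and the grid-point check are our own simplifications, not part of the implementation described in this paper); the doubling exponent printed at the end should approach M̅ + N̅ + 2 as L grows.

import math
from itertools import combinations_with_replacement

def simplex_grid_size(dim, points):
    # number of non-decreasing dim-tuples drawn from a one-dimensional grid of `points` nodes
    return math.comb(points + dim - 1, dim)

def inchworm_cost(L, Mbar, Nbar, C=lambda M: 1.0):
    # sum_{l=1}^{2L} (2L+1-l) * binom(Nbar+l+1, Nbar) * sum_{M odd <= Mbar} C_M * binom(M+l, M)
    total = 0.0
    for l in range(1, 2 * L + 1):
        n_diagrams = (2 * L + 1 - l) * math.comb(Nbar + l + 1, Nbar)
        per_diagram = sum(C(M) * math.comb(M + l, M) for M in range(1, Mbar + 1, 2))
        total += n_diagrams * per_diagram
    return total

# sanity check: the grid of an M-dimensional simplex over l+1 time points has binom(M+l, M) nodes
l, M = 6, 3
assert len(list(combinations_with_replacement(range(l + 1), M))) == simplex_grid_size(M, l + 1) == math.comb(M + l, M)

prev = None
for L in (8, 16, 32, 64):
    cost = inchworm_cost(L, Mbar=3, Nbar=2)
    if prev is not None:
        print(f"L = {L:3d}, cost = {cost:.3e}, doubling exponent = {math.log2(cost / prev):.2f}")
    prev = cost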
For a spin chain with K spins, the computational cost should be multiplied by K if all spins have different parameters.

§.§ Computational cost for the summation

We now estimate the cost of the summation of diagrams described in <ref>. Note that in this step, we only need to use the values of 𝒢^(k)(s_i,s,s_f) with -s_i = s_f = lΔ t, so that the total number of diagrams involved is much less than in the previous step. We now consider the computation of G^[k+1](-t,s,t) with s = (s_1, …, s_N) and t = lΔ t according to <ref>. Recall that we have the values of 𝒢^(k+1)(-t,𝒫(s,s'), t) only for N + N' ⩽ N̅ (see the text about the truncation before <Ref>). The series (<ref>) should be truncated at N' = N̅ - N in the computation. As a result, the computational cost of <ref> is

∑_{N'=0}^{N̅-N} \binom{2l+N'}{N'} = \binom{1+2l+N̅-N}{N̅-N},

where the binomial coefficient inside the sum on the left-hand side is the number of grid points in the N'-dimensional simplex. Since we need to evaluate G^[k+1](-t,s,t) for s on all the grid points of an N-dimensional simplex, and N ranges from 0 to N̅, we have the following estimation of the total computational cost:

∑_{N=0}^{N̅} \binom{2l+N}{N} \binom{1+2l+N̅-N}{N̅-N} ≲ ∑_{N=0}^{N̅} l^N l^{N̅-N} ≲ N̅ l^{N̅}.

Finally, to compute observables on all time steps l = 1,…,L, the time complexity is then 𝒪(L^{N̅+1}). Compared to the solver of the inchworm equation, the computational cost of the summation is relatively small. Hence, the total computational cost remains at 𝒪(L^{M̅+N̅+2}) as analyzed in <ref>.

§.§ Numerical verification

In agreement with our analysis, our numerical experiments (to be presented in detail in <ref>) also show that the computational cost of the summation is nearly negligible compared with the solver of the inchworm equation. Therefore, to verify our estimation of the computational cost, we will focus only on the analysis in <ref>. A convenient way to check the time complexity is to count the number of evaluations of the bath influence functional ℒ_b^(c), which depends only on L, M̅ and N̅ and is independent of all other parameters. Results for M̅ = 1, N̅ = 1 and M̅ = 3, N̅ = 2 with different values of L are plotted in <ref>. It can be clearly seen that when L gets larger, the trend of growth agrees better with our analysis.

In general, this estimation of the computational cost is the same as for direct path-integral methods such as the summation of the Dyson series. However, the use of bold lines can significantly accelerate the convergence of the series, resulting in a much smaller M̅ needed in the simulation. The time complexity 𝒪(L^{M̅+N̅+2}) shows that reducing M̅ has a great impact on the computational cost, especially for large values of L.

We would like to comment that in the algorithm, the most time-consuming step is the evaluation of 𝒢^(k)(s_i, s, s_f). To reduce the computational time, multithreading is implemented to parallelize the computation. In general, according to the structure of the inchworm equation (<ref>), the value of 𝒢^(k)(s_i, s, s_f) for shorter s is needed to obtain the full propagator for longer s. Therefore, we first compute 𝒢^(k)(s_i, ∅, s_f) for all s_i, s_f, and then solve 𝒢^(k)(s_i, s_1, s_f) for all s_i, s_1, s_f, followed by the computation of 𝒢^(k)(s_i, s_1, s_2, s_f) for all s_i, s_1, s_2, s_f, and so forth until the maximum length of s is reached. The computations of 𝒢^(k)(s_i, ∅, s_f) and 𝒢^(k)(s_i, s_1, s_f) are carried out sequentially. When the length of s in 𝒢^(k)(s_i, s, s_f) is greater than or equal to 2, the algorithm is parallelized.
The parallelization is based on the fact that the inchworm equations (<ref>) for 𝒢^(k)(s_i, s_1, …, s_N, s_f) can actually be decoupled. Precisely speaking, the propagator 𝒢^(k)(s_i', s_1', …, s_N', s_f') can appear on the right-hand side of <ref> for s = (s_1, …, s_N) only when s_k' = s_k for all k = 1,…,N, and in this case, we have s_i' = τ_m and s_f' = τ_{m+1} for a certain m. If 0 ∉ (s_i', s_f'), the value of 𝒢^(k)(s_i', s_1', …, s_N', s_f') (or 𝒢^(k)(τ_m, s_1, …, s_N, τ_{m+1})) is actually obtained from <ref>. Therefore, 𝒢^(k)(s_i, s_1, …, s_N, s_f) and 𝒢^(k)(s_i', s_1', …, s_N', s_f') are coupled only if there exists T such that s_j' = s_j + T for all j = 1,⋯,N. This allows decoupling of the equations according to the vector (s_2 - s_1, …, s_N - s_{N-1}), and thus the algorithm can be parallelized. In fact, when 0 ∈ (s_1, s_N), the equations for 𝒢^(k)(s_i, s, s_f) are decoupled simply for different values of s, since the translational relation <ref> cannot be applied. Using this structure helps with a better distribution of the computational cost across the threads.

§ NUMERICAL EXPERIMENTS

In this section, we evaluate our newly proposed method using several numerical examples. To begin with, we introduce the parameters used for the numerical tests. For the coupling intensity between spins, the operator V^(k) is simply a scaled Pauli matrix: V^(k) = J^(k)σ_z^(k), where J^(k) indicates the coupling intensity between the kth spin and its neighboring spins. The observable is chosen to be O_s = σ_z^(k) for k=1,…,K, respectively. In <ref>, the two-point correlation functions B^(k)(τ_1,τ_2) are set to be the same for every k:

B^(k)(τ_j,τ_j') = B^*(Δτ) = 1/π ∫_0^∞ J(ω) [ coth(βω/2) cos(ωΔτ) - i sin(ωΔτ) ] dω,

where Δτ = |τ_j| - |τ_j'| and J(ω) is the spectral density of the harmonic oscillators in the bath. In this paper, we set it to be the Ohmic spectral density

J(ω) = π/2 ∑_{l=1}^{L} c_l^2/ω_l δ(ω - ω_l),

where L is the number of harmonic oscillators and is set to be 400 in all our tests. The coupling intensity c_l and the frequency ω_l of each harmonic oscillator are given by

ω_l = -ω_c ln(1 - l/L [1-exp(-ω_max/ω_c)]), c_l = ω_l √(ξω_c/L [1-exp(-ω_max/ω_c)]).

The values of the parameters, including the Kondo parameter ξ, the primary frequency of the harmonic oscillators ω_c, and the maximum frequency ω_max, will be given later for each experiment. In addition to the above physical parameters, three numerical parameters need to be specified to carry out the simulation, including two truncation parameters (M̅ for system-bath couplings and N̅ for interspin couplings) and the time step Δ t. The convergence of the numerical results with respect to these parameters will be studied in the following subsection.

§.§ Convergence tests

This section carries out experiments on three convergence parameters, M̅, N̅ and Δ t, among which M̅ and N̅ are the two truncation parameters and Δ t stands for the time step. In this section, all spins in the spin chain are prepared in the state |+1⟩. In other words, ς^(k) = +1 for k=1,…,K in <ref>. In the spin-boson model with a single spin, the convergence with respect to the parameter M̅ has been studied numerically in <cit.>, where it was shown that the convergence of the inchworm method was much faster than that of the Dyson series. Here we will carry out a numerical test for the convergence with respect to M̅ by considering a 5-spin system. We choose the time step to be Δ t = 0.2. Other parameters are chosen as follows: ξ = 0.2, β = 5, ω_c = 2.5, ω_max = 4ω_c, N̅ = 2, ϵ^(k) = 1, Δ^(k) = 1, J^(k) = 0.2, ∀ k = 1,…,5.
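Before turning to the results, we note that the bath discretization above is easy to reproduce. The sketch below (our own plain double precision illustration, not the code used for the simulations) builds ω_l and c_l from the formulas above and evaluates the resulting two-point correlation B^*(Δτ) by summing over the discrete modes, assuming the coth form written above.

import numpy as np

def bath_modes(xi, omega_c, omega_max, L=400):
    # discrete frequencies omega_l and couplings c_l of the discretized Ohmic bath
    l = np.arange(1, L + 1)
    scale = (1.0 - np.exp(-omega_max / omega_c)) / L
    omega = -omega_c * np.log(1.0 - l * scale)
    c = omega * np.sqrt(xi * omega_c * scale)
    return omega, c

def bath_correlation(dtau, beta, omega, c):
    # B*(dtau) = (1/2) sum_l (c_l^2/omega_l) [coth(beta omega_l/2) cos(omega_l dtau) - i sin(omega_l dtau)],
    # obtained by inserting the discrete spectral density J(omega) into the integral above
    w = c**2 / omega
    return 0.5 * np.sum(w * (np.cos(omega * dtau) / np.tanh(beta * omega / 2)
                             - 1j * np.sin(omega * dtau)))

omega, c = bath_modes(xi=0.2, omega_c=2.5, omega_max=10.0)   # omega_max = 4 * omega_c
for dtau in (0.0, 0.2, 0.4):
    print(dtau, bath_correlation(dtau, beta=5.0, omega=omega, c=c))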
Our numerical results are given in <ref>, which shows the evolution of ⟨σ_z^(k)⟩ for k = 1,⋯,5. Note that due to the symmetry of the spin chain system, we have σ_z^(1)(t) = σ_z^(5)(t) and σ_z^(2)(t) = σ_z^(4)(t) for all t, and therefore only three figures are shown in <ref>. These figures show fast convergence with respect M̅ for this set of parameters, due to the use of the inchworm method. The curves for M̅ = 3 and M̅ = 5 are almost on top of each other, while some slight differences can be observed for the computation with M̅ = 1, which is less accurate. We now fix M̅ and consider the convergence with respect to N̅. We again consider a chain of 5 spins and choose the time step to be = 0.2. Other parameters are ξ = 0.2, β = 5, ω_c = 2.5, ω_max = 4ω_c, M̅ = 3, ϵ^(k) = 0, Δ^(k) = 1, J^(k) = 0.5, ∀ k = 1,…,5. The results for N̅=2,3,4,5 are shown in <ref>. In general, due to the numerical sign problem, for longer-time simulations, larger values of N̅ are needed to obtain accurate results. For the first and the last spins, since they are coupled only with one neighboring spin, the results of N̅ = 3 already show good quality until t = 5. For the remaining three spins, the results for N̅ = 4 and N̅ = 5 almost coincide, showing the convergence for the coupling intensity J^(k) = 0.5 up to t = 5. Further increasing N̅ does not significantly improve the results. Additionally, the convergence test is also carried out for the time step , with the parameters of the 5-spin Ising chain being ξ = 0.2, β = 5, ω_c = 2.5, ω_max = 4ω_c, M̅ = 3, N̅ = 2 ϵ^(k) = 1, Δ^(k) = 1, J^(k) = 0.2, ∀ k = 1,…,5. We perform simulations for the time step being 0.4, 0.2, 0.1, and 0.05 and present the results in <ref>. Note that for M̅ = 3 and N̅ = 2, according to our analysis in <ref>, the computational cost is estimated by 𝒪(L^7) with L being the total number of time steps. Therefore, to save computational time, we run the simulation only up to t = 3. It can be observed that for our second-order numerical method, the time step Δ t = 0.2 can give sufficiently accurate results. Such a time step will be taken for all the simulations in the following subsections. §.§ Numerical tests for different coupling intensities In this section, we conduct numerical experiments to examine the effects of varying coupling intensities between spins. We again consider the 5-spin Ising chain with the following parameters: ξ = 0.2, β = 5, ω_c = 2.5, ω_max = 4ω_c, M̅ = 3, N̅ = 4 ϵ^(k) = 1, Δ^(k) = 1, ∀ k = 1,…,5. As mentioned previously, the time step is chosen as Δ t = 0.2, which is sufficient to guarantee a small truncation error. We again set J^(k) to be the same for all k = 1,…,5. Three values J^(k) = 0.2, 0.4, 0.6 are considered in our experiments, and the results are given in <ref> given that all spins are initially in the state |ς^(k)⟩ = |+1⟩ for all k. Again, our results correctly reflect the symmetry of the Ising chain, and therefore only three lines are plotted in each figure. For the purpose of comparison, we also include the result for J^(k) = 0, meaning that all the spins are decoupled. In this case, the evolution of the observable is identical for all the spins, and they are the same as the spin-boson model studied in <cit.>. Generally, for higher coupling intensity J^(k), the discrepancy between spins is more significant, and they differ more from the decoupled case. In particular, when J^(k) = 0, all the curves coincide as predicted. 
It can also be observed that the curve for the first and the last spin is more separated from the other three spins, especially in the initial stage of the dynamics. This is due to the fact that the two spins at the ends of the chain interact only with one spin instead of two. In all cases, the interaction between the spin and the bath causes smaller amplitude of the fluctuation as the system evolves. Additionally, we also carry out an experiment where the first spin is initially at the state |ς^(1)⟩ = |-1⟩ and all other spins have the initial state |ς^(k)⟩ = |+1⟩ for k=2,…,5. Such a spin chain is no longer symmetric. The evolution of the observable ⟨σ_z^(k)(t) ⟩ is plot in <ref>. In this experiment, when J^(k)=0, Spins 2 to 5 are physically identical, so there are only two distinct curves in the figure. For non-zero coupling intensities between spins, it is clear that the behavior of the first spin is affected by the other spins. The local minimum of the blue curves around t = 2.2 is obviously higher when the coupling intensity J^(k) gets larger. Similar to <ref>, the separation of the curves for Spins 2 to 5 also gets clearer for stronger coupling between spins. §.§ Simulation of a long Ising chain This section aims to study the behavior of a long spin chain, in which the middle part can mimic the behavior of an infinite Ising chain, and meanwhile, one can observe the end effects. We consider an Ising chain comprising of 50 spins and 100 spins, respectively. The parameters of all the spins are set to be the same. Under such settings, we anticipate observing very similar behaviours for the spins near the center of the chain. Note that in our method, if the spins and baths have the same physical parameters, the computational cost grows only linearly as the number of spins increases. The parameters used in this experiment are ξ = 0.2, β = 5, ω_c = 2.5, ω_max = 4ω_c, M̅ = 3, N̅ = 4 ϵ^(k) = 0, Δ^(k) = 1, J^(k) = 0.5, ∀ k = 1,…,K. with K=50 or K=100. The time step is chosen as = 0.2. For comparison, we also carry out the experiments for the same parameters with K=1 and K=5. Since all spins have the same parameters, the inchworm equation needs to be solved only once. For longer spin chains, more computational cost is needed for the for the summation of full propagators. But even so, according to our analysis in <Ref>, the summation only takes a small proportion of the computational time. Our numerical results are presented in <ref>. In general, the case of a single spin is clearly different from the interacting spin chains, while the three spin chains show very similar behaviors. Due to the end effect, the first and the last spins have a slightly higher flipping frequency. Between the third and the third last spin, the curves for all spins are indistinguishable in the plots, and in this example, the five-spin case can already well represent a long spin chain. § CONCLUSION AND DISCUSSION We proposed a method to simulate an Ising chain coupled with harmonic baths. The algorithm is derived by two steps: firstly, the Dyson series decompose the system into spin-boson units and the problem is also decomposed to a single spin problem; secondly, the inchworm algorithm is applied to evaluate the evolution of spin-boson units with special “crosses” representing the spin-spin couplings. The algorithm leads to the sum of diagrams. A special order for the summation based on distributive law is then proposed for faster evaluation of the sum, which accelerates the computation. 
Under this special order for the summation, the most time-consuming step is the computation for a single spin-boson unit. The computational cost is then estimated by 𝒪(L^{M̅+N̅+2}), where L is the number of time steps and M̅, N̅ are the two truncation parameters for the series expansions. Numerical experiments are carried out to validate our method. While this paper focuses mainly on the Ising chain coupled with harmonic baths, a similar idea can be carried over to more complicated interacting systems in a way similar to <cit.>. Also, since our approach can be regarded as a perturbation theory, it is mainly applicable to short-time simulations. Long-time simulations can be made possible by truncation of the memory kernel, as in the iterative QuAPI method. These will be considered in our future works.
http://arxiv.org/abs/2307.04295v1
20230710012341
Quasi-normal modes of naked singularities in presence of non-linear scalar fields
[ "O. S. Stashko", "O. V. Savchuk", "V. I. Zhdanov" ]
gr-qc
[ "gr-qc" ]
http://arxiv.org/abs/2307.04578v1
20230710141120
Exceptional points and phase transitions in non-Hermitian binary systems
[ "Amir Rahmani", "Andrzej Opala", "Michał Matuszewski" ]
quant-ph
[ "quant-ph", "cond-mat.quant-gas" ]
Institute of Physics Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw, Poland Institute of Physics Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw, Poland Institute of Experimental Physics, Faculty of Physics, University of Warsaw, ul. Pasteura 5, PL-02-093 Warsaw, Poland Institute of Physics Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw, Poland Recent study demonstrated that steady states of a polariton system may demonstrate a first-order dissipative phase transition with an exceptional point that appears as an endpoint of the phase boundary [R. Hanai et al., Phys. Rev. Lett. 122, 185301 (2019)]. Here, we show that this phase transition is strictly related to the stability of solutions. In general, the exceptional point does not correspond to the endpoint of a phase transition, but rather it is the point where stable and unstable solutions coalesce. Moreover, we show that the transition may occur also in the weak coupling regime, which was excluded previously. In a certain range of parameters, we demonstrate permanent Rabi-like oscillations between light and matter fields. Our results contribute to the understanding of nonequilibrium light-matter systems, but can be generalized to any two-component oscillatory systems with gain and loss. Exceptional points and phase transitions in non-Hermitian binary systems Michał Matuszewski August 12, 2023 ======================================================================== Phase transitions correspond to significant alterations of the properties of a system caused by the modification of physical parameters. Examples include the ferromagnetic-paramagnetic phase transition <cit.>, gas-liquid-solid transition <cit.>, Bose-Einstein condensation in bosonic and fermionic systems <cit.>, metal–insulator transition in solid state <cit.>, and topological phase transitions <cit.>. Phase transitions may also occur in non-Hermitian systems, which are systems that do not satisfy the condition of Hermiticity, which is embedded in quantum mechanics <cit.>. Here the non-Hermitian contributions may stem from dissipation <cit.> or asymmetric coupling <cit.> and lead to a number of unique properties such as non-reciprocity <cit.>, mutually interlinked non-Hermitian phase transitions <cit.> and the non-Hermitian skin effect <cit.>. A striking example of non-Hermitian physics that deviates significantly from the Hermitian case is the coalescence of eigenstates and energy eigenvalues at so-called exceptional points (EPs). These spectral singularities may be accompanied by a non-Hermitian phase transition <cit.>. Standard procedure to investigate these phase transitions is through the study of the spectrum of the system as some controllable parameters are changed <cit.>. Typically, the process involves meticulous adjustment of loss and gain in order to achieve the desired outcome. In general, in a linear system the presence of EPs is independent of the stability of the stationary state that the system evolves to <cit.>. However, in a nonlinear system, more than one solution may be stable, which gives rise to the phenomena of bistability and multistability <cit.>. The existence of nonlinear features may affect the non-Hermitian effects realized in linear cases or give rise to entirely new phenomena <cit.>. In order to examine the relationship between nonlinearity and non-Hermitian physics, it is necessary to study systems that possess variable nonlinearity and controllable gain and loss. 
Particularly suitable systems for this study are those where matter couples with light, as they allow to take advantage of the difference in physical properties of these components. For example, it was demonstrated that exceptional points appear naturally in light-matter systems of exciton-polaritons and subtreshold Fabry-Perot lasers <cit.>. Moreover, it is possible to induce exceptional points by manipulating spatial and spin degrees of freedom of exciton-polaritons in various configurations <cit.>. In the case of bosonic condensates of exciton-polaritons, it was predicted that a dissipative first-order phase transition line exists in the phase diagram <cit.>, similar to a critical point in a liquid-gas phase transition. According to this study, this phase transition line exists in the regime of strong light-matter coupling and has an endpoint which corresponds to an exceptional point <cit.>. In this letter, we investigate a non-Hermitian model describing interaction between two oscillating modes. We use it to examine the significance of nonlinearity in a non-Hermitian phase transition. This model can describe light and matter modes in exciton-polariton condensation and lasing, as investigated in Ref. <cit.>. We find that the model is incomplete unless nonlinear saturation of gain is taken into account. Importantly, saturation increases the complexity of the phase diagram and leads to the appearance of bistability. It has also profound consequences on the physics of the system. We find that while the first-order phase transition line with an endpoint is present, the equivalence of the endpoint to an exceptional point as found in <cit.> is no longer valid in the general case. The phase diagram of Ref. <cit.> can be restored in the limit of strong saturation. In contrast to the results of Ref. <cit.>, the transition between solutions can occur also in the weak coupling regime. This suggests that the second threshold from polariton to photon lasing, observed in experiments <cit.>, may be related to a dissipative phase transition in the weak coupling regime. Moreover, we find a regime of permanent Rabi-like oscillations between two stable solutions. This regime corresponds to a line in the phase diagram that ends with an exceptional point. Model and Analytical Solutions. We consider a system of two coupled oscillators described by a non-Hermitian Hamiltonian with gain and loss. The imbalance between gain and loss in a linear system leads in general to solutions exponentially growing or decaying in time. To obtain non-trivial stationary solutions it is necessary to include nonlinearity. Here we adopt cubic nonlinearity that appears naturally in symmetric systems with no dependence on the complex phase. Such a model can be realized, among many other physical systems, in the case of cavity photons coupled to excitons, where the nonlinearity occurs only in the matter (exciton) component <cit.>. The system is described by complex functions ψ_C=n_Ce^iφ_C and ψ_X=n_Xe^iφ_X, corresponding to amplitudes of cavity photons and excitons, respectively. The dynamics is governed by equations iħ∂ψ/∂ t = iħ∂_t|Ψ⟩=H|Ψ⟩ with |Ψ⟩=(ψ_C,ψ_X)^T, where non-Hermitian Hamiltonian H is given by <cit.> H=( E_C-iħγ_C ħΩ_R ħΩ_R E_X+g|ψ_X|^2+ip ) . Here ħΩ_R is the coupling strength, γ_C is the decay rate of the photon field, and p represents the gain to the exciton field. This gain can be realized in practice by nonresonant optical or electrical pumping. 
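As an aside, trajectories of this model are easy to generate numerically. The sketch below integrates iħ∂_t(ψ_C,ψ_X)^T = H(ψ_C,ψ_X)^T with a standard ODE solver, treating g simply as a complex constant whose decomposition into conservative and dissipative parts is specified in the text below; all numerical values are placeholders chosen for illustration and are not taken from this work.

import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0                      # units with hbar = 1
E_C, E_X = 0.2, 0.0             # photon and exciton energies (illustrative)
gamma_C = 0.3                   # photon decay rate
Omega_R = 0.4                   # light-matter coupling
p = 0.35                        # exciton gain
g = 0.1 - 0.05j                 # complex nonlinearity

def rhs(t, y):
    psi_C, psi_X = y[0] + 1j * y[1], y[2] + 1j * y[3]
    dC = -1j / hbar * ((E_C - 1j * hbar * gamma_C) * psi_C + hbar * Omega_R * psi_X)
    dX = -1j / hbar * (hbar * Omega_R * psi_C + (E_X + g * abs(psi_X)**2 + 1j * p) * psi_X)
    return [dC.real, dC.imag, dX.real, dX.imag]

y0 = [0.01, 0.0, 0.01, 0.0]     # small seed in both fields
sol = solve_ivp(rhs, (0.0, 200.0), y0, max_step=0.05, rtol=1e-8, atol=1e-10)
n_C = sol.y[0]**2 + sol.y[1]**2
n_X = sol.y[2]**2 + sol.y[3]**2
print("final photon / exciton occupations:", n_C[-1], n_X[-1])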
We define the complex nonlinear coefficient as g=g_1-ig_2, where g_1 is the strength of two body interactions (Kerr-like nonlinearity) and g_2|ψ_X|^2 is the saturation term that allows to avoid instability. Spectrum of Hamiltonian (<ref>) can be found analytically E= 1/2[E_c+ℰ+i(𝒫-ħγ_c) ±√(4ħ^2Ω_R^2+[ℰ-E_c+i(𝒫+ħγ_c)]^2)] , where 𝒫=p-g_2(n_X^SS)^2 and ℰ=E_x+g_1 (n_X^SS)^2. For convenience, we denote the solution associated with plus (minus) by U(L). The respective steady state analytical solutions |Ψ⟩=|Ψ_0⟩ e^-i E t can be found from the condition Im[E]=0, that is, the imaginary part of the eigenvalue of (<ref>) must be zero. In <cit.>, it was argued that one or two real energy solutions exist in certain regions in parameter space. However, it can be seen from (<ref>) that except from special values of parameters, real energy solutions can exist only when saturation represented by g_2 is taken into account. We will show below that accounting for the nonlinear g_2 term does in fact lead to the appearance of up to three real-energy solutions, each of them of the form (<ref>). The condition Im[E]=0 allows one to find analytical expression for n_X^SS (n_X^SS)^2=1/g(Re[E]-E_X-iP-(ħΩ_R)^2/Re[E]-E_C+iħγ_C). The resulting explicit formula for n_X^SS is tedious, but for a given n_X^SS, one can find closed forms of steady state n_C^SS and φ_CX=φ_C-φ_X n^SS_C= n^SS_X√(p/ħγ_C-(n_X^SS)^2g_2/ħγ_C) , φ_CX^SS= (δ-g_1(n_X^SS)^2/ħΩ_R(n^SS_C/n_X^SS-n_X^SS/n^SS_C)-iγ_C n^SS_C/Ω_R n_X^SS) , where we introduced photon-exciton energy detuning δ=E_C-E_X. Non-Hermitian Phase Transitions. We use the analytical solutions from the previous section to determine the phase diagram of the system, looking at it from two perspectives. We analyze the steady state solutions and their multiplicity, as in Fig. <ref>(a). On the other hand, we consider the lowest-energy state among the dynamically stable ones and investigate its properties and possible transitions, see Fig. <ref>(b). The latter approach is equivalent to analyzing a system that is weakly coupled to an energy sink, which does not perturb the spectrum, but picks the lowest-energy stable solution after a sufficiently long evolution due to its energetic stability. In the case when the conservative nonlinearity g_1 is stronger than the dissipative nonlinearity g_2, representative phase diagrams are shown in Fig. <ref>. We focus on the blue-detuned case (δ>0), which is much richer that the red-detuned case. In Fig. <ref>(a) the number of steady state solutions is shown. Up to three non-zero solutions, corresponding to both upper and lower branches of Eq. (<ref>) can exist, which results from the nonlinearity of the system. The region of zero solutions corresponds to the situation where pumping cannot overcome losses and no lasing nor polariton condensation occurs. For given Ω and γ_C, increasing pumping p can lead to one or several thresholds, as indicated with horizontal lines. Special points in the phase diagram (marked by stars in Fig. <ref>) include the exceptional point (EP) and the endpoint of the first-order phase transition (ET). In contrast to <cit.>, we find that in general they do not coincide. To determine the position of the EP, one can find the following conditions for which the real and imaginary parts of eigenvalues are zero in Eq. (<ref>) p^EP=ħΩ_R+g_2δ/g_1 ,  γ_C=Ω_R . This can occur when n_X^SS=δ/g_1, that is, whenever the system is blue-detuned (δ>0). 
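A minimal numerical companion to the expressions above: scan (n_X^SS)^2, evaluate Im E on both branches of Eq. (<ref>), and locate the sign changes of Im E, which mark the steady states. The parameter values and the brute-force scan are our own illustrative choices, not the procedure used to produce the figures, and the ± labelling can swap across branch cuts of the complex square root.

import numpy as np

hbar = 1.0
E_C, E_X = 0.2, 0.0
gamma_C, Omega_R = 0.3, 0.4
p, g1, g2 = 0.35, 0.1, 0.05

def branches(nX2):
    # eigenvalues E_U, E_L of the non-Hermitian Hamiltonian at fixed occupation (n_X)^2
    P = p - g2 * nX2
    Eps = E_X + g1 * nX2
    root = np.sqrt(4 * (hbar * Omega_R)**2 + (Eps - E_C + 1j * (P + hbar * gamma_C))**2 + 0j)
    base = E_C + Eps + 1j * (P - hbar * gamma_C)
    return 0.5 * (base + root), 0.5 * (base - root)

nX2 = np.linspace(0.0, 20.0, 200001)
for name, E in zip(("upper", "lower"), branches(nX2)):
    s = np.sign(E.imag)
    for i in np.where(s[:-1] * s[1:] < 0)[0]:       # sign change of Im E -> steady state
        print(f"{name} branch: (n_X^SS)^2 = {nX2[i]:.4f}, Re E = {E.real[i]:.4f}")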
On the other hand, the ET point is clearly visualised in the phase diagram that takes into account the energetic instability in panel Fig. <ref>(b). The first-order phase transition line begins at the ET point in the weak coupling regime (γ_C>Ω_R) and follows the arc represented by the ET-EP line towards the EP point. Below the EP, the phase transition line follows into the strong coupling regime. We conclude that, contrary to the results of <cit.>, the first-order phase transition can occur also in the weak coupling regime. This can be explained by a simple physical argument. Since the pumping influences the effective photon-exciton detuning δ̃=E_C-(E_X+g (n^SS_X)^2), the increase of pumping can change of the sign of δ̃, leading to an abrupt change of the lowest-energy state in the weak-coupling regime. Figure <ref>(d) shows the dependence of the real part of the energy of solutions shown in Figs. <ref>(a,b), in the vicinity of the ET-EP line. As can be seen, the ET point is the point of the transition to bistability. On the other hand, the EP point corresponds to a turning point in the bistability curve. The cross-section including the EP point (γ_C=Ω) is depicted in more detail in Figure <ref>(c), which shows the occurrence of two stable branches from the upper and lower branches of Eq. (<ref>) and one unstable branch. At the EP, the unstable upper branch coalesces with the lower stable branch, leading to the first-order phase transition. The cross-section with the ET point (γ_C>Ω_R) is shown in Fig. <ref>(e), where the bistability curve closes, and the transition from the upper to lower branch becomes smooth. This leads to the possibility to encircle the exceptional point as indicated with arrows in Fig. <ref>(d). Interestingly, additional features that have an influence on the physics of the system can occur in the strong coupling case (γ_C<Ω_R), see Fig. <ref>(f). These include the disappearance of one of the solutions in a certain parameter range and the dynamical instability of the lowest-energy branch (marked with orange line). Consequently, the upper, higher-energy solution may become the only viable solution despite the existence of lower-energy solutions. In the opposite case when the dissipative nonlinearity dominates over the conservative one, we find that the phase diagram of energetically stable solutions recovers the results of <cit.>, see Fig. <ref>. As the dissipative nonlinearity is increased, the length of the ET-EP arc decreases, and finally the two points coalesce. In this specific case, the exceptional point is characterized by a jagged crest in the phase diagram, embodying a third-order exceptional point (see supplementary materials). This phenomenon arises from the coalescence of two stable solutions and a single unstable solution. Permanent Rabi-Like Oscillations: R-Line. Our analysis allows to predict that a peculiar oscillating state may form, as indicated in Fig. <ref>(a) by R-Line. In this case, long evolution leads to permanent oscillations, resembling Rabi oscillations in a two-level system, instead of stationary solutions. To explain this phenomenon, we examine imaginary and real parts of eigenvalues given in Eq. (<ref>). An example is shown in Figs. <ref>(a) and <ref>(b). In general, two kinds of stationary solutions corresponding to Im[E(n_X)]=0 may exist. As shown in Fig. <ref>(a), in this particular case there are two solutions from the upper branch and one solution from the lower branch (the black dashed vertical lines denote the emergent solutions). 
Our interest is in solutions from upper and lower branches that occur at the same n_X, while there is a gap in respective real parts, see Fig. <ref>(b). Such solutions occur when p=(g_2/g_1)δ+ħγ_C, which corresponds to a straight line (marked by R-line) in the phase diagram of Fig. <ref>(c). An example of such permanent oscillations is shown in Fig. <ref>(c). After initial transient time, the oscillations stabilize at a cetain amplitude. When different initial conditions are used, the system may end up in one of the steady state solutions, as shown in Fig. <ref>(d). The frequency of oscillations is given by the gap, Ω=2√(Ω_R^2-γ_C^2). When the parameters of the system approach the exceptional point along the R-line, the gap decreases and the period of oscillations increases. At the exceptional point (Ω_R=γ_C), the solutions coalesce and the period becomes infinite. Therefore, the exceptional point is the endpoint of the R-line. Discussion. We showed that, contrary to previous understanding, non-Hermitian polariton systems exhibit first-order phase transition with an endpoint that in general does not coincide with the exceptional point. Explanation of this phenomenon requires taking into account the nonlinear gain saturation and the consideration of the bistability curve. While the endpoint of the phase transition is where the bistability appears, the exceptional point is where the stable and unstable solutions coalesce. In addition, we demonstrated that first-order phase transition may occur in the weak coupling regime, and that for certain values of parameters one can predict permanent oscillations, whose frequency vanishes at the exceptional point. The predicted results contribute to the ongoing debate surrounding polariton/photon lasing. The presence of an exceptional point has been identified as the possible underlying factor for the observed second threshold  <cit.>. Here, we provide further insights by identifying several other thresholds in phase diagrams and pointing out that multiplicity and stability of solutions are also crucial factors, so far overlooked. The presented results may be applied to much broader class of systems. The non-Hermitian Hamiltonian represented by the 2×2 matrix in Eq. (<ref>) describes in general an arbitrary two-mode oscillatory system with gain and loss in the two modes, and the cubic nonlinearity in one of them. This term appears naturally in any oscillatory system in the first order as long as the nonlinearity respects the global U(1) symmetry of the oscillations. Examples include not only all quantum mechanical systems such as Bose-Einstein condensates, but also high-frequency coupled classical oscillators, where phase of oscillations is irrelevant on the time scale of a slowly varying envelope. The results presented here should be applicable to any such system that exhibits exceptional points and nonlinearity. A.R. and M.M. acknowledge support from National Science Center, Poland (PL), Grant No. 2016/22/E/ST3/00045. A.O. acknowledges support from Grant No. 2019/35/N/ST3/01379.
http://arxiv.org/abs/2307.06072v1
20230712104910
Acceleration of complex matrix multiplication using arbitrary precision floating-point arithmetic
[ "Tomonori Kouya" ]
math.NA
[ "math.NA", "cs.NA" ]
Acceleration of complex matrix multiplication using arbitrary precision floating-point arithmetic Tomonori Kouya Shizuoka Institute of Science and Technology Fukuroi, Japan ORCID: 0000-0003-0178-5519 13th July, 2023 =============================================================================================================== Efficient multiple precision linear numerical computation libraries such as MPLAPACK are critical in dealing with ill-conditioned problems. Specifically, there are optimization methods for matrix multiplication, such as the Strassen algorithm and the Ozaki scheme, which can be used to speed up computation. For complex matrix multiplication, the 3M method can also be used, which requires only three multiplications of real matrices, instead of the 4M method, which requires four multiplications of real matrices. In this study, we extend these optimization methods to arbitrary precision complex matrix multiplication and verify the possible increase in computation speed through benchmark tests. The optimization methods are also applied to complex LU decomposition using matrix multiplication to demonstrate that the Ozaki scheme can be used to achieve higher computation speeds. complex matrix multiplication, arbitrary precision arithmetic, Strassen algorithm, Ozaki scheme § INTRODUCTION Efficient multiple precision (MP) floating-point arithmetic is critical for a linear computation library, such as MPLAPACK <cit.>, to obtain accurate numerical solutions of ill-conditioned problems. Currently, MPLAPACK is the de-facto standard MP linear library owing to embedded trustworthy MP floating-point arithmetic libraries such as QD <cit.>, GNU MP <cit.>, MPFR <cit.>, and MPC <cit.>. Nevertheless, MPBLAS of MPLAPACK, which is based on Reference BLAS (Basic Linear Algebra Subprograms), has plenty of room for improvement. The current LAPACK using IEEE binary32 and binary64 is utilized in conjunction with highly optimized BLAS such as ATLAS <cit.>, OpenBLAS <cit.>, and Intel Math Kernel <cit.>. The optimized MPBLAS library can accelerate the functionalities of MPLAPACK even more. Therefore we can expect that this type of optimization when applied to complex basic linear computations is more effective than the original MPBLAS. In speeding up multiple precision linear computations using the MPFR <cit.> arbitrary precision floating-point library, which is based on the multiple precision natural number (MPN) kernel provided by GNU MP, the algorithm that reduces the amount of computation is the most effective in speeding up the process. For matrix multiplication, there are optimization methods such as the Strassen algorithm <cit.> and the Ozaki scheme <cit.>, which can be used to speed up calculations. In particular, the recently developed Ozaki scheme is an algorithm that pursues both accuracy and speed by dividing the original matrix into matrices of short mantissa parts, thereby performing fast low-precision matrix multiplication without errors. The Ozaki scheme has already been shown to be effective in float128 calculations <cit.>; however, since its effectiveness depends on the nature of the matrices used, we must verify the effectiveness of the matrices for concrete problems through benchmark tests. In addition, although there is ongoing research on optimization of real matrix multiplication, there are no comparative studies on the effectiveness of these optimization techniques for multiple precision complex matrix multiplication. 
The 3M method is used in MPC <cit.>, an arbitrary precision complex arithmetic library based on MPFR, which can be employed in complex matrix multiplication. It is expected that further speed-up can be achieved by using the 3M method. There are already comparisons of BLIS implementations for binary32 and binary64 complex matrix multiplications<cit.>, and it has been shown that the 3M method is effective in reducing computational time despite its numerical instability. Another example of implementing binary64 complex multiplication based on the Ozaki scheme for real matrices is that of Kazal.et.al <cit.>, which uses the 4M method and, hence, cannot be compared with the 3M method. In this study, we implement complex matrix multiplication using the Strassen algorithm and the Ozaki scheme based on the 3M and 4M methods and report the results of benchmark tests on well-conditioned matrices generated using random numbers. We have already verified the speedup of arbitrary precision real matrix multiplication based on the Strassen algorithm and Ozaki scheme using MPFR<cit.>, and we expect to be able to speed up the 3M and 4M methods based on these results so far. As an application, we implemented a complex LU decomposition incorporating these matrix multiplications and compared the computational time and accuracy of the numerical solutions through benchmark tests on well-conditioned linear system of equations. As a result, it is found that the Ozaki scheme is effective for relatively small precision; however, the computational time increases as precision increases and is slower than the Strassen matrix multiplication for precisions greater than 768 bits. The following two computation environments, EPYC and Xeon, are used in this study. MPLAPACK and our library, including QD and GNU MP, MPFR, and MPC, are natively compiled with Intel Compiler and DGEMM (cblas_dgemm function) in Intel Math Kernel. Xeon Intel Xeon W-2295 3.0GHz 18 cores, Ubuntu 20.04.3 LTS, Intel Compiler version 2021.5.0, MPLAPACK 2.0.1, GNU MP 6.2.1, MPFR 4.1.0, MPC 1.2.1 EPYC AMD EPYC 7402P 24 cores, Ubuntu 18.04.6 LTS, Intel Compiler version 2021.4.0, MPLAPACK 2.0.1, GNU MP 6.2.1, MPFR 4.1.0, MPC 1.2.1 § MATHEMATICAL NOTATION Here we define 𝔽_bS and 𝔽_bL as sets of the S- and L-bit mantissas of floating-point numbers, respectively. For instance, 𝔽_b24 and 𝔽_b53 refer to sets of IEEE754-1985 binary32 and binary64 floating-point numbers, respectively. Although any mantissa length can be selected in MPFR arithmetic, the set of MPFR numbers is expressed as 𝔽_bM, which is primarily defined as M-bit using the mpfr_set_default_prec function. We used (𝐱)_i (= x_i) as the i-th element of the n-dimensional real or complex vector 𝐱 = [x_i]_i=1, 2, .., n∈ℝ^n or ℂ^n, and real or complex matrix (A)_ij (= a_ij) as the (i, j)-th element of A = [a_ij]_i=1, 2, ..., m, j = 1, 2, ..., n∈ℝ^m× n or ℂ^m× n. Real and imaginary parts of complex vector 𝐱∈ℂ^n and complex matrix A∈ℂ^m× n are expressed as Re(𝐱), Im(𝐱)∈ℝ^n, and Re(A), Im(A)∈ℝ^m× n, respectively. § ALGORITHMS OF COMPLEX MATRIX MULTIPLICATION In this section, the algorithms used in this paper are introduced. First, the 4M and 3M methods of complex multiplication are explained and subsequently, their application to matrix multiplication is described. Next, the Strassen matrix multiplication and the Ozaki scheme are explained. §.§ 3M and 4M methods Given A∈ℂ^m× l and B∈ℂ^l× n, we calculate the complex product C := AB∈ℂ^m× n. 
From the mathematical definition of the above matrix multiplication, we can obtain the element of C as follows: (C)_ij := ∑_k=1^l (A)_ik (B)_kj, where the element (C)_ij can be calculated with a simple triple-loop in the programs. The formula (<ref>) includes complex multiplication and addition. The complex multiplication is calculated as follows: Re((A)_ik (B)_kj) := Re((A)_ik)Re((B)_kj) - Im((A)_ik)Im((B)_kj) Im((A)_ik (B)_kj) := Re((A)_ik)Im((B)_kj) + Im((A)_ik)Re((B)_kj) The formula for standard complex multiplication is called the “4M" method. In contrast, the method shown below, which reduces the number of multiplications by one, as in the Karatsuba method, is called the “3M" method. t_1 := Re((A)_ik)Re((B)_kj) t_2 := Im((A)_ik)Im((B)_kj) Re((A)_ik (B)_kj) := t_1 - t_2 Im((A)_ik (B)_kj) := (Re((A)_ik) + Im((A)_ik)) · (Re((B)_kj) + Im((B)_kj)) - t_1 - t_2 In the MPC library based on MPFR arithmetic, the fast complex multiplication (mpc_mul function) is implemented using the 3M method. All multiple precision complex linear computations defined in MPLAPACK/MPBLAS are constructed on “mpcomplex" C++ class with MPC arithmetic functions. The 3M and 4M complex general matrix multiplication (CGEMM) methods are implemented using real matrix multiplication. The 4M CGEMM method is expressed in a straightforward manner, as shown in (<ref>), as follows: Re(AB) := Re(A)Re(B) - Im(A)Im(B) Im(AB) := Re(A)Im(B) + Im(A)Re(B) The 3M CGEMM method is also simple to express, as shown in (<ref>), as follows: T_1 := Re(A)Re(B) T_2 := Im(A)Im(B) Re(AB) := T_1 - T_2 Im(AB) := (Re(A) + Im(A))(Re(B) + Im(B)) - T_1 - T_2 <ref> describes the number of real arithmetic operations to obtain complex number or matrix products using the 3M and 4M methods. For complex multiple precision arithmetic and matrix multiplication, reducing one multiplication is more efficient than increasing three additions/subtractions. Although it is possible to obtain errors in imaginary parts <cit.>, we could not confirm this phenomenon in our benchmark tests, which are described later. §.§ Strassen algorithm The Strassen matrix multiplication algorithm<cit.> is categorized as a divide-and-conquer method and is well-known to drastically reduce the number of arithmetic operations when the size of matrices is increased. Multiple precision floating-point arithmetic incurs high costs, so the Strassen algorithm shown in Algorithm <ref> is useful when multiple precision matrix multiplication is needed. We have implemented the Strassen algorithm using the MPC library. The thresholds of matrix size are defined as m_0 = n_0 = 32. §.§ Ozaki scheme The usefulness of the Ozaki scheme<cit.> is also becoming clear at multiple precision levels owing to the success of Mukunoki et al. in accelerating the float128 precision matrix multiplication<cit.>. The float128 arithmetic supported by GCC, features triple-double to quadruple-double precision performance for addition and multiplication, which is expected to be sufficient for this precision range. The Ozaki scheme is an algorithm that aims to simultaneously accelerate performance and improve accuracy by dividing original matrices into another matrices with elements represented by shorter digits. Similar to the “Split” method in error-free transformation technique, this approach takes advantage of the speed of optimized short-precision matrix multiplication (xGEMM) functions without round-off errors. 
For a given matrix A ∈ℝ^m× l and B ∈ℝ^l× n, to obtain a matrix product C := AB ∈ℝ^m× n of long L-bit precision, A and B are divided using the Ozaki scheme, where d ∈ℕ is the maximum number of divisions of short S-bit precision matrices (S << L), as shown in Algorithm <ref>. The S-bit arithmetic is used for calculations when no particular description is provided, and the L-bit arithmetic is used only when high-precision operations are required. We implemented complex matrix multiplication based on 4M CGEMM (<ref>) and 3M CGEMM (<ref>) methods including the Ozaki scheme to obtain real matrix products. For practical purposes, it is desirable to set the maximum number of divisions d for both the real and imaginary parts of the calculation; however, in this case, the calculations were performed on a real example using values of the same order in both cases, and a common d is used. § BENCHMARK TEST In this section, we present the results of complex matrix multiplication in solving complex linear system of equations including a complex coefficient matrix to be well-conditioned generated using random numbers. In particular, since the accuracy of the Ozaki scheme depends on the number of partitions, it is necessary to compare the computational time for the smallest number of partitions d to obtain the best accuracy. For this reason, graphs and tables presenting both computational time and computation accuracy are included. §.§ Complex matrix multiplication Here we describe a benchmark test using complex square matrix multiplication. The elements of A and B used in the matrix multiplication (<ref>) are random numbers (using the mpfr_nrandom function) with the real and imaginary parts following a normal distribution of [-1, 1]. The matrix size is increased and the computational time and maximum relative error per element of C are compared. The sorts of precision used are 256, 512, and 768 bits. For comparison, the results of the Cgemm function of MPBLAS are also included, omitting cases with excessive computational time. The graphs of computational time (left) and maximum relative error (right) on the Xeon environment are shown in <ref>. In these figures, “OZ_4M" and “OZ_3M" mean results obtained by the Ozaki scheme based on 4M and 3M methods, respectively, and the numbers that follow mean the maximum numbers of divisions, d. In the graphs of maximum relative error, the errors increase as matrix size increases. Overall, the Strassen matrix multiplication has the largest error, and the accuracy deteriorates by up to two decimal places compared to that of other CGEMM. In the Ozaki scheme, we also see that d ≥ 13 for 256-bit, d ≥ 25 for 512-bit, and d ≥ 37 for 768-bit calculations minimize the relative error for any matrix size. Although this example did not require a large number of divisions for the Ozaki scheme because of the small order difference in the absolute values of the matrix elements, for DGEMM, a larger number of divisions is still required with increasing precision. We cannot confirm that a relative error of more than one decimal place occurred between the 3M and 4M CGEMM methods. The 3M CGEMM method is faster than the 4M CGEMM method by a 24 to 26% decrease in computational time. Next, the graphs of computational time (left) and maximum relative error (right) are shown in <ref> for the EPYC environment. Overall, the computational time is slower than on the Xeon: about 20% slower for MPC alone, and up to 40% slower for the Ozaki scheme with DGEMM. 
There is little difference in the maximum relative error. In terms of computational time, the Ozaki scheme is the fastest for 256-bit; the Strassen matrix multiplication is superior to the Ozaki scheme for 512-bit with n ≤ 1500; and the Strassen matrix multiplication is the fastest for 768-bit computation. From the above results, the 3M CGEMM method based on the Ozaki scheme with d=14 is the fastest for 256-bit calculations, whereas the Strassen matrix multiplication is the fastest for 768-bit calculations. Therefore, for matrices A and B, the Ozaki scheme is the fastest in the precision range of 256 bits or less, and the Strassen matrix multiplication is the fastest for a precision of 768 bits or more. §.§ Complex LU decomposition If the Ozaki scheme is faster than other matrix multiplication algorithms, it is expected to be effective for complex LU decomposition using matrix multiplication. Below we present the results of a benchmark test of complex LU decomposition using complex matrix multiplication based on the fast 3M method. The corresponding n-dimensional linear system of equations is: A𝐱 = 𝐛, where A∈ℂ^n× n, 𝐱∈ℂ^n, and 𝐛∈ℂ^n. In this problem, we use n=1024. The elements of the complex coefficient matrix A are given as random numbers with real and imaginary parts following a normal distribution in [-1, 1]. The exact solution of 𝐱 is (x)_k = k + k i and 𝐛 := A𝐱 is a constant vector obtained with 2048-bit precision arithmetic. Assuming that LU decomposition in the current LAPACK standard allows the use of fast matrix multiplication, a constant width K is predefined as in <ref>, and the rectangular component A - L_21U_12 is updated for each K column. Therefore, the complexity of matrix multiplication changes with respect to K; MPLAPACK's LU decomposition (Cgetrf) is also implemented using Cgemm in MPBLAS. For fast LU decomposition, L_21U_12 must be fast in xGEMM. In our benchmark test, the computational time and the maximum error of the numerical solution of the LU decomposition obtained by varying K are measured using 256-, 512-, and 768-bit precision. The graphs of computational time (left) and relative error (right) of the numerical solution are displayed in <ref> and <ref>, respectively and are obtained by forward and backward substitutions. In addition, the results of normal column-wise LU decomposition (Normal LU, K=1) are also shown in these graphs. There is no difference in the variation of the maximum relative error of the numerical solution when K is changed in either the Xeon or EPYC environment. The trend in computational time is also approximately the same. This leads us to conclude that, * Regardless of precision, the computational time is never less than that of Normal LU as long as the Strassen matrix multiplication is used. The only exception is 768-bit precision of LU decomposition on EPYC; however, the difference in computational time is extremely small. * Using the Ozaki scheme, a 256-bit calculation can be performed in less computational time than for Normal LU with relatively small K. The number of divisions d for which the relative error is minimal regardless of K is d=13 for 256 bits, d=25 for 512 bits, and d=37 for 768 bits. * Normal LU is the fastest for 512-bit and 768-bit calculations. The K and the maximum relative error associated with minimum computational time for each algorithm for each computing environment and precision are shown in <ref> and <ref>. The results of MPLAPACK's Cgetrf function are also shown for comparison. 
The relative errors of Cgetrf, Normal LU, and Strassen, which use MPC only, are consistent across environments. Overall, the Cgetrf function is slow, but this may be due to the overhead of the C++ mpcomplex class and the performance of Cgemm in MPBLAS. These results indicate that computation speed can be increased further by using the Ozaki scheme and specifying the optimal K and number of divisions d, at least for precisions up to 256 bits. § CONCLUSION AND FUTURE WORK We have demonstrated that complex matrix multiplication can be optimized on both the Xeon and EPYC environments, and that the Strassen matrix multiplication and the implementation using the Ozaki scheme both contribute to the speed-up, especially compared to MPLAPACK. We also find that the 3M CGEMM method is fast and that the difference in accuracy is not noticeable for the problems examined in this study; hence, there is no need to choose the 4M method, especially for multiple-precision calculations. The use of fast complex matrix multiplication can also reduce the computational time of complex LU decomposition with the Ozaki scheme, which is sufficiently faster than the Strassen matrix multiplication. Future plans to expand this study include: * Pursuing further speed-up of arbitrary-precision complex matrix multiplication. If the Strassen matrix multiplication is faster than the Ozaki scheme, a 3M CGEMM method using the Strassen algorithm is expected to be faster still. Therefore, we plan to implement such a method and also to parallelize it using OpenMP. * Implementing and evaluating the performance of complex matrix multiplication in multi-component fixed-precision (double-double, triple-double or quadruple-double) arithmetic. In this setting the Ozaki scheme is expected to be advantageous in many cases, because the comparatively low target precision keeps the required number of divisions small. * Investigating cases where the Ozaki scheme is useful for a wider range of problems. Complex matrix multiplication requires changing the number of divisions according to the difference in the orders of the absolute values of the real and imaginary parts of the matrices. Therefore, we would like to determine, for real matrices, the precisions and numbers of divisions for which the Ozaki scheme remains faster than the Strassen matrix multiplication. A method can then be established to set the optimal number of divisions for complex matrix multiplication based on these results. § ACKNOWLEDGMENT This work was supported by JSPS KAKENHI Grant Number 23K11127. mplapack MPLAPACK/MPBLAS, “Multiple precision arithmetic LAPACK and BLAS,” <https://github.com/nakatamaho/mplapack>. qd D. Bailey, “QD,” <https://www.davidhbailey.com/dhbsoftware/>. gmp T. Granlund and the GMP development team, “The GNU Multiple Precision arithmetic library,” <https://gmplib.org/>. mpfr The MPFR project, “The MPFR library,” <https://www.mpfr.org/>. mpc A. Enge, P. Théveny, and P. Zimmermann, “MPC,” <http://www.multiprecision.org/mpc/>. atlas ATLAS (Automatically Tuned Linear Algebra Software), <http://math-atlas.sourceforge.net/>. openblas OpenBLAS, <http://www.openblas.net/>. imkl Intel Math Kernel Library, <http://www.intel.com/software/products/mkl/>. strassen_original V. Strassen, “Gaussian elimination is not optimal,” Numerische Mathematik, vol. 13, no. 4, pp. 354–356, 1969. [Online]. Available: <http://dx.doi.org/10.1007/BF02165411> ozaki_scheme K. Ozaki, T. Ogita, S. Oishi, and S. M. Rump, “Error-free transformations of matrix multiplication by using fast routines of matrix multiplication and its applications,” Numerical Algorithms, vol. 59, no. 1, pp. 95–118, Jan. 2012. [Online]. Available: <https://doi.org/10.1007/s11075-011-9478-1> mukunoki_binary128 D. Mukunoki, K. Ozaki, T. Ogita, and T. Imamura, “Accurate matrix multiplication on binary128 format accelerated by Ozaki scheme,” in Proc. 50th International Conference on Parallel Processing (ICPP 2021), New York, NY, USA: Association for Computing Machinery, 2021. [Online]. Available: <https://doi.org/10.1145/3472456.3472493> cmatmul_3m4m F. G. Van Zee and T. M. Smith, “Implementing high-performance complex matrix multiplication via the 3M and 4M methods,” ACM Trans. Math. Softw., vol. 44, no. 1, Jul. 2017. [Online]. Available: <https://doi.org/10.1145/3086466> cmatmul_ozaki N. Y. Kazal, I. Mukhlash, B. A. Sanjoyo, N. Hidayat, and K. Ozaki, “Extended use of error-free transformation for real matrix multiplication to complex matrix multiplication,” Journal of Physics: Conference Series, vol. 1821, no. 1, p. 012022, Mar. 2021. [Online]. Available: <https://dx.doi.org/10.1088/1742-6596/1821/1/012022> kouya_utsugiri_ozaki T. Kouya and T. Utsugiri, “Optimization of multiple-precision LU decomposition using Ozaki scheme,” in Computational Science and Its Applications – ICCSA 2023 Workshops, O. Gervasi, B. Murgante, A. M. A. C. Rocha, C. Garau, F. Scorza, Y. Karaca, and C. M. Torre, Eds. Cham: Springer Nature Switzerland, 2023, pp. 529–545. higham_accuracy N. J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd ed. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 2002.
http://arxiv.org/abs/2307.04516v1
20230710122404
An Examination of Wearable Sensors and Video Data Capture for Human Exercise Classification
[ "Ashish Singh", "Antonio Bevilacqua", "Timilehin B. Aderinola", "Thach Le Nguyen", "Darragh Whelan", "Martin O'Reilly", "Brian Caulfield", "Georgiana Ifrim" ]
cs.CV
[ "cs.CV" ]
Wearable Sensors and Video Data Capture for Human Exercise Classification Insight Centre for Data Analytics, University College Dublin, Ireland {ashish.singh,antonio.bevilacqua,timi.aderinola,thach.lenguyen,b.caulfield, georgiana.ifrim}@insight-centre.org Output Sports Limited, NovaUCD, Dublin, Ireland {darragh, martin}@ouputsports.com An Examination of Wearable Sensors and Video Data Capture for Human Exercise Classification Ashish Singh Antonio Bevilacqua Timilehin B. Aderinola Thach Le Nguyen Darragh Whelan Martin O'Reilly Brian Caulfield Georgiana Ifrim August 12, 2023 ============================================================================================================================================ Wearable sensors such as Inertial Measurement Units (IMUs) are often used to assess the performance of human exercise. Common approaches use handcrafted features based on domain expertise or automatically extracted features using time series analysis. Multiple sensors are required to achieve high classification accuracy, which is not very practical. These sensors require calibration and synchronization and may lead to discomfort over longer time periods. Recent work utilizing computer vision techniques has shown similar performance using video, without the need for manual feature engineering, and avoiding some pitfalls such as sensor calibration and placement on the body. In this paper, we compare the performance of IMUs to a video-based approach for human exercise classification on two real-world datasets consisting of Military Press and Rowing exercises. We compare the performance using a single camera that captures video in the frontal view versus using 5 IMUs placed on different parts of the body. We observe that an approach based on a single camera can outperform a single IMU by 10 percentage points on average. Additionally, a minimum of 3 IMUs are required to outperform a single camera. We observe that working with the raw data using multivariate time series classifiers outperforms traditional approaches based on handcrafted or automatically extracted features. Finally, we show that an ensemble model combining the data from a single camera with a single IMU outperforms either data modality. Our work opens up new and more realistic avenues for this application, where a video captured using a readily available smartphone camera, combined with a single sensor, can be used for effective human exercise classification. § INTRODUCTION Recent years have seen an accelerated use of machine learning solutions to assess the performance of athletes. New technologies allow easier data capture and efficient machine learning techniques enable effective measurement and feedback. In this paper, we focus on the application of human exercise classification where the task is to differentiate normal and abnormal executions for strength and conditioning (S&C) exercises. S&C exercises are widely used for rehabilitation, performance assessment, injury screening and resistance training in order to improve the performance of athletes <cit.>. Approaches to data capture are either sensor-based or video-based. For sensor-based approaches, sensors such as Inertial Measurement Units (IMUs) are worn by participants <cit.>. For video, a participant's motion is captured using 3D motion capture <cit.>, depth-capture based systems <cit.>, or 2D video recordings using cameras <cit.>. The data obtained from these sources is processed and classified using machine learning models. 
Classification methods based on sensor data are popular in the literature and real-world applications, and yet, video-based approaches are gaining popularity <cit.> as they show potential for providing high classification accuracy and overcoming common issues of inertial sensors. Sensors require fitting on different parts of the body and the number of sensors to be worn depends upon the context of the exercise. For instance, the Military Press exercise requires at least 3 IMUs for optimal performance. Despite their popularity, sensors may cause discomfort, thereby hindering the movement of participants. In addition, using multiple sensors leads to overheads such as synchronization, calibration and orientation. Recent advances in computer vision have enabled the usage of 2D videos for human exercise classification. Past work explored posture detection <cit.> and the application of human exercise classification using pose estimation. Our previous work <cit.> proposed a novel method named BodyMTS to classify human exercises using video, human pose estimation and multivariate time series classification. There is less work comparing sensors with video in real-world applications. In this paper, we compare the performance of a sensor-based approach utilizing 5 IMUs with that of video from a single front-facing camera, on the same set of 54 participants, on two real-world datasets consisting of Military Press (MP) and Rowing exercises. These are important S&C exercises and are widely used for injury risk screening and rehabilitation <cit.>. Incorrect executions may lead to musculoskeletal injuries and undermine the performance of athletes <cit.>. Hence, correct detection of abnormal movements is crucial to avoid injuries and maximize performance. The main requirements for an effective human exercise classification application are <cit.>: accurate monitoring of body parts movement, correct classification of deviations from normal movements, timely feedback to end users, simple data capture using available smartphones and coverage of a wide range of S&C exercises. Previous work <cit.> has shown that this task is difficult and has poor intra and inter-rater accuracy in user studies with domain experts, with Kappa scores for inter-rater agreement between 0.18-0.53, and intra-rater between 0.38-0.62. Through discussions with domain experts, we established that an effective application should achieve a minimum accuracy of 80% to be useful for end users. Existing methods using IMUs involve pre-processing the raw data, creating handcrafted features <cit.>, and applying classical machine learning algorithms. Handcrafted feature extraction is often tedious and time-consuming, requires access to domain knowledge and is prone to cherry-pick features that only work for a specific set of exercises. Deep learning methods <cit.> overcome this issue by automatically constructing features during training, but still require expertise in deep learning architectures along with hardware resources such as GPUs. Hence, we take two approaches to feature extraction: (1) using lightweight packages such as catch22 <cit.> and tsfresh <cit.> to automate the feature extraction from raw signals and (2) using the raw time series data with time series classifiers, which implicitly construct features inside the algorithm. For videos, we first extract multivariate data using human pose estimation with OpenPose <cit.> to obtain (X,Y) location coordinates of key body parts over all the frames of a video. 
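As an illustration of the first of these two strategies, the sketch below shows one way the tsfresh-based automatic feature extraction could be organized for segmented repetition signals. The channel naming, long-format layout and default feature settings are illustrative assumptions, not the exact pipeline used in the experiments; catch22 can be used analogously and yields 22 features per channel.

    import numpy as np
    import pandas as pd
    from tsfresh import extract_features

    def auto_features(X):
        # X: segmented repetitions, array of shape (N, C, T)
        # (N samples, C signals/channels, T time steps).
        rows = []
        for i, sample in enumerate(X):
            for c, channel in enumerate(sample):
                for t, v in enumerate(channel):
                    rows.append((i, f"ch{c}", t, float(v)))
        long_df = pd.DataFrame(rows, columns=["id", "kind", "time", "value"])
        # One row of features per sample; tsfresh computes its feature set per
        # channel ("kind") and concatenates the results column-wise.
        return extract_features(long_df, column_id="id", column_kind="kind",
                                column_sort="time", column_value="value")

    # Tiny random stand-in for the real segmented data, just to show the shapes.
    X = np.random.default_rng(0).standard_normal((4, 3, 161))
    print(auto_features(X).shape)

The resulting tabular matrix can then be fed to any standard classifier, which is the route taken for the handcrafted and automatically extracted feature baselines below.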
Figure <ref> shows data captured with IMUs and video for the Military Press exercise. The top part shows the Y-signal for 3 body parts for a total of 10 repetitions, while the bottom part shows the X, Y, and Z signals of the magnetometer from an IMU worn on the right arm for the same set of 10 repetitions. Our main contributions are: * We compare 3 strategies for creating features from IMU data for human exercise classification. We observe that directly classifying the raw signals using multivariate time series classifiers outperforms the approach based on handcrafted features by a margin of 10 and 4 percentage points in accuracy for MP and Rowing respectively. Automatic feature extraction shows better performance than handcrafted features. * We compare the performance of IMU and video for human exercise classification. We observe that a single video-based approach outperforms a single IMU-based approach by a margin of 5 percentage points accuracy for MP and 15 percentage points for Rowing. Additionally, we observe that a minimum of 3 IMU devices are needed to outperform a single video for both exercises. * We propose an ensemble model that combines the data modalities from IMU and video, which outperforms either approach by a minimum of 2 percentage points accuracy for both MP and Rowing. This leads to an accuracy of 93% for MP and 87% for Rowing, using only a single IMU and a reduced-size video. We discuss reasons why combining video and sensor data is beneficial, in particular, the 2D video provides positional information, while the sensor provides information on orientation and depth of movement. * To support this paper we have made all our code and data available [<https://github.com/mlgig/Video_vs_Shimmer_ECML_2023>]. The rest of the paper is organized as follows. Section <ref> presents an overview of related work, Section <ref> describes the data collection procedure, Section <ref> describes the data analysis and methodology for classification and Section <ref> presents the classification results using IMUs and video. Section <ref> concludes and outlines directions for future work and Section <ref> discusses ethical implications of this work. § RELATED WORK This section describes the purpose of S&C exercises and provides an overview of sensor-based and video-based data capture approaches. §.§ S&C Exercise Classification S&C exercises aim at improving the performance of human participants in terms of strength, speed and agility, and they can be captured using sensor-based or video-based techniques. Wearable sensor-based approaches involve fitting Inertial Measurement Units (IMUs) <cit.> on different parts of the body. This is followed by creating handcrafted features which are used in conjunction with a classical machine learning model. Deep learning methods attempt to automate the process of feature extraction. CNN models work by stacking IMU signals into an image <cit.>, whereas <cit.> uses an attention mechanism to identify the important parts in a signal. Using IMUs has its own limitations. First, the number of inertial sensors required and their positions can vary from exercise to exercise <cit.>. Furthermore, sensors require calibration and synchronization and may also hinder the movement of the body and cause discomfort when used over longer time periods <cit.>. Video-based systems can be categorized into 3 types: 3D motion capture, depth camera-based and 2D video camera. Though they are accurate, 3D motion capture systems are expensive and require complex setups. 
In addition, fitting multiple markers on the body may hinder the normal movement of the body <cit.>. Microsoft Kinect is commonly used for depth camera-based systems <cit.>. These systems are less accurate and are affected by poor lighting, occlusion, and clothing, and require high maintenance <cit.>. The third subcategory uses video-based devices such as DSLR or smartphone cameras. Works based on video rely on human pose estimation to track different body parts <cit.> and have shown 2D videos to be a potential alternative to IMU sensors. The video-based analysis also includes commercial software such as Dartfish <cit.> by providing the option to analyze motion at a very low frame rate. However, these are less accurate and require fitting body markers of a different colour to the background. §.§ Multivariate Time Series Classification (MTSC) In multivariate time series classification tasks, the data is ordered and each sample has more than one dimension. We focus on recent linear classifiers and deep learning methods, which have been shown to achieve high accuracy with minimal run-time and memory requirements <cit.>. Linear Classifiers. ROCKET <cit.> is a state-of-the-art algorithm for MTSC in terms of accuracy and scalability. Two more extensions named MiniROCKET <cit.> and MultiROCKET <cit.>, have further improved this method. These classifiers work by using a large number of random convolutional kernels which capture different characteristics of a signal and hence do not require learning the kernel weights as opposed to deep learning methods. These features are then classified using a linear classifier such as Logistic or Ridge Regression. Deep Learning Classifiers. Deep learning architectures based on Fully Convolutional Networks (FCN) and Resnet <cit.> have shown competitive performance for MTSC, without suffering from high time and memory complexity. § DATA COLLECTION Participants. 54 healthy volunteers (32 males and 22 females, age: 26 ± 5 years, height: 1.73 ± 0.09 m, body mass: 72 ± 15 kg) were recruited for the study. Participants were asked to complete multiple repetitions of the two exercises in this study; the Military Press and Rowing exercises. In each case, the exercises were performed under 'normal' and 'induced' conditions. In the 'normal' condition the exercise was performed with the correct biomechanical form and in the 'induced' condition the exercise was purposefully performed with pre-determined deviations from the normal form, assessed and confirmed in real-time by the movement scientist. Please refer to these sources <cit.> for additional information on the experiment protocol. The data was collected using two video cameras and 5 Shimmer IMUs placed on 5 different parts of the body. Two cameras (30 frames/sec with 720p resolution) were set up in front and to the side of the participants. In this work, we only use the video recordings from the front view camera which is a more common use case. The 5 IMUs with settings: sampling frequency of 51.2 Hz, tri-axial accelerometer(±2 g), gyroscope (±500 ^∘/s) and magnetometer (±1.9 Ga) <cit.> were fitted on the participants at the following five locations: Left Wrist (LW), Right Wrist (RW), Left Arm (LA), Right Arm (RA) and Back. The orientation and locations of all the IMUs were consistent for all the participants. Exercise Technique and Deviations. The induced forms were further sub-categorized depending on the exercise. 
§.§ Exercise Classes for Military Press (MP) Normal (N): This class refers to the correct execution, involving lifting the bar from shoulder level to above the head, fully extending the arms, and returning it back to shoulder level with no arch in the back. The bar must be stable and parallel to the ground throughout the execution. Asymmetrical (A): The bar is lopsided and asymmetrical. Reduced Range (R): The bar is not brought down completely to the shoulder level. Arch (Arch): The participant arches their back during execution. Figure <ref> shows these deviations using a single frame. §.§ Exercise Classes for Rowing Normal (N): This class refers to the correct execution, where the participant begins by positioning themselves correctly, bending knees and leaning forward from the waist. The execution starts by lifting the bar with fully extended arms until it touches the sternum and bringing it back to the starting position. The bar must be stable and parallel to the ground and the back should be straight. Asymmetrical (A): The bar is lopsided and asymmetrical. Reduced Range (R): The bar is not brought up completely until it touches the sternum. Ext: The participant moves his/her back during execution. RB: The participant executes with a rounded back. Figure <ref> shows these deviations by depicting a single frame. § DATA ANALYSIS AND METHODS This section presents the data pre-processing, features extraction and classification models. We present the feature extraction for IMU data, followed by feature extraction for video. We also provide a description of the train/test splits for IMUs and video data. §.§ IMU Data We discuss three strategies to create features from IMU data. First, we directly use the raw signal as a time series. Second, we use existing approaches to create handcrafted features. Third, we use dedicated packages to automatically extract features. Features extraction is performed after segmenting the full signal to obtain individual repetitions. §.§.§ Raw Signal as Multivariate Time Series. The raw signal from IMU records data for 10 repetitions. Hence, we segment the time series to obtain signals for individual repetitions. The Y signal of the magnetometer from the IMU placed on the right arm is utilized to segment the signals. The time series obtained after this step has variable length since the time taken to complete each repetition differs from participant to participant. Further, current implementations of selected time series classifiers cannot handle variable-length time series and therefore all time series are re-sampled to a length of 161 (the length of the longest time series). This does not impact the performance as shown in the supplementary material. Every single repetition constitutes a single sample for train/test data. The final data D has a shape of D ∈ℝ^N × 45 × 161, where N indicates the total samples. Each sample denoted by x_i in the data has a dimension of x_i ∈ ℝ^45 × 161, where 45 denotes the total number of time series (5 IMUs x 9 signals) and 161 is the length of each time series. §.§.§ Handcrafted Features. Each of the 5 IMUs outputs 9 signals (X,Y,Z) for each of the accelerometer, magnetometer and gyroscope. We follow the procedure as described in <cit.> to create handcrafted features. Additionally, 5 signals were created for each IMU: pitch, roll, yaw signal and vector magnitude of accelerometer and gyroscope, giving a total of 70 signals (5 × (9 + 5)). 
For each repetition signal, 18 handcrafted features that capture time and frequency domain characteristics were created. Hence, we obtain the final data D ∈ℝ^N × 1260, where N is the total samples and 1260 represents the features extracted from 70 signals with 18 features each for both MP and Rowing. §.§.§ Auto Extracted Features. We use packages catch22 <cit.> and tsfresh <cit.> to perform automatic feature extraction from a single repetition signal. These packages calculate a wide range of pre-defined metrics in order to capture the diverse characteristics of a signal. They are straightforward to use and avoid the need for domain knowledge and signal processing techniques. Catch22 captures 22 features for each of the 45 signals (5 IMUs x 9 signals) giving a total of 990 tabular features for MP and Rowing in the final dataset D ∈ℝ^N × 990, where N indicates the total samples. Similarly, tsfresh captures a large number of time series characteristics by creating a large number of features. The final dataset D has a shape of D ∈ℝ^N × 15000 and D ∈ℝ^N × 16000, for MP and Rowing respectively. Both manual and automatic feature extraction are performed on the normalized time series, as we observed that normalizing the time series leads to an increase in accuracy. §.§ Video Data We follow the methodology presented in our previous work <cit.> to classify human exercise from videos. OpenPose is used for human pose estimation to track the key body parts, followed by a multivariate time series classifier. Each video consists of a sequence of frames where each frame is considered a time step. Each frame is fed to OpenPose which outputs coordinates (X,Y) for 25 body parts. We only use the 8 upper body parts most relevant to the target exercises but also conduct experiments with the full 25 body parts. The time series obtained from a single body part is denoted by b^n = [(X,Y)^1, (X,Y)^2, (X,Y)^3,...(X,Y)^T] where n indicates the n^th body part and T is the length of the video clip. §.§.§ Multivariate Time Series Data. Since each video records 10 repetitions for each exercise execution, segmentation is necessary in order to obtain single repetitions. Each repetition forms a single time series sample for training and evaluating a classifier. We use peak detection to segment the time series as mentioned in our previous work <cit.>. Similarly to the IMU case, every time series obtained after this step has a variable length and therefore is re-sampled to a length of 161. The final data is denoted by D ∈ℝ^N × 16 × 161, where N indicates the total samples. Each sample denoted by x_i has a dimension of x_i∈ℝ^16 × 161, where 16 indicates X and Y coordinates for 8 body parts and 161 is the length of each time series. §.§.§ Auto Extracted Features. We use catch22 <cit.> and tsfresh <cit.> to perform automatic feature extraction from each single repetition signal. §.§ Train/Test Splits We use 3 train/test splits in the ratio of 70/30 on the full data set to obtain train and test data for both IMUs and video. Each split is done based on the unique participant IDs to avoid leaking information into the test data. Train data is further split in the ratio of 85/15 to create validation data to fine-tune the hyperparameters. The validation data is merged back into the train data before the final classification. The data is balanced across all the classes. Table <ref> shows the number of samples across all classes for a single train/test split for MP and Rowing respectively. 
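A minimal sketch of such a participant-grouped split is given below, using scikit-learn's GroupShuffleSplit; the shapes and the random stand-in data are placeholders for the real segmented repetitions, labels and participant IDs.

    import numpy as np
    from sklearn.model_selection import GroupShuffleSplit

    # Dummy stand-ins: 540 repetitions, 45 channels, length 161, 54 participants.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((540, 45, 161))
    y = rng.integers(0, 4, size=540)
    groups = np.repeat(np.arange(54), 10)   # participant ID for each repetition

    # Splitting on groups keeps all repetitions of a participant on one side of
    # the split, preventing subject-level leakage into the test set.
    gss = GroupShuffleSplit(n_splits=3, test_size=0.3, random_state=0)
    for train_idx, test_idx in gss.split(X, y, groups=groups):
        X_train, y_train = X[train_idx], y[train_idx]
        X_test, y_test = X[test_idx], y[test_idx]
        # A further GroupShuffleSplit(test_size=0.15) on the training portion
        # would give the validation split used for hyperparameter tuning.
        print(len(np.intersect1d(groups[train_idx], groups[test_idx])))  # always 0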
§.§ Classification Models We use tabular machine learning models to work with handcrafted and automated features. Informed by previous literature on feature extraction for IMU data <cit.>, we focus on Logistic Regression, Ridge Regression, Naive Bayes, Random Forest and SVM as classifiers for tabular data. We select ROCKET, MultiROCKET and deep learning models FCN and Resnet as recent accurate and fast multivariate time series classifiers <cit.>. § EMPIRICAL EVALUATION We present results on IMU data, video data and combinations using ensembles. We report average accuracy over 3 train/test splits for all the results. We use the sklearn library <cit.> to classify tabular data and sktime <cit.> to classify time series data. All the experiments are performed using Python on an Ubuntu 18.04 system (16GB RAM, Intel i7-4790 CPU @ 3.60GHz). The Supplementary Material [<https://github.com/mlgig/Video_vs_Shimmer_ECML_2023/blob/master/Supplementary_material.pdf>] presents further detailed results on leave-one-participant-out cross-validation, demographic results, execution time, as well as the impact of normalization and re-sampling length on the classification accuracy. §.§ Accuracy using IMUs We present the classification results using 3 different strategies for creating features from IMU data. For tabular features, we perform feature selection to reduce overfitting and execution time. We use Lasso Regression (C=0.01) with L1 penalty for feature selection, where C is the regularization parameter. Logistic Regression achieves the best performance followed by Ridge Regression and SVM. These results suggest that linear classifiers are best suited for this problem. Hence we only present results using Logistic Regression here. We tune hyperparameters, particularly regularization parameter C of Logistic Regression using cross validation. We observed that Logistic Regression (LR) with C=0.01 achieves the highest accuracy (Table <ref> presents results with Logistic Regression). Table <ref> presents the results using raw data and multivariate time series classifiers. ROCKET achieves the best performance with MultiROCKET having similar accuracy for this problem. ROCKET has the added benefit that it can also work with unnormalised data and it is faster during training and prediction, so we select this classifier for the rest of the analysis. We analyse the average accuracy using all 5 IMUs as well as combinations of IMUs using raw time series with ROCKET as classifier. The goal is to select the minimum number of IMUs needed to achieve the best performance for MP and Rowing. Table <ref> presents the average accuracy over 3 splits obtained using all IMUs whereas Table <ref> presents the average accuracy using different combinations of IMUs. Results and Discussion: From Table <ref> we observe that using raw data with ROCKET achieves the highest accuracy when compared to the approaches based on handcrafted and automated feature extraction. We tune hyperparameters of ROCKET using the validation data, particularly the number-of-kernels and observe no impact on the accuracy. The normalization flag is set to True here as turning it off leads to a 4 percentage points drop in the accuracy. ROCKET can easily be run on a single CPU machine without the need for much engineering effort (only 2 parameters to tune) and dedicated hardware. It is much faster than using tsfresh or catch22 for feature extraction followed by classification. 
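For concreteness, the kind of ROCKET pipeline evaluated here can be sketched as follows. The module path and argument names assume a recent sktime release and may differ between versions, and the random arrays merely stand in for the segmented IMU (or pose) panels; they are not the authors' code or data.

    import numpy as np
    from sklearn.linear_model import RidgeClassifierCV
    from sklearn.pipeline import make_pipeline
    from sktime.transformations.panel.rocket import Rocket

    # Random stand-ins for panels of shape (N, n_channels, 161) and their labels.
    rng = np.random.default_rng(0)
    X_train = rng.standard_normal((100, 45, 161)); y_train = rng.integers(0, 4, 100)
    X_test = rng.standard_normal((40, 45, 161));   y_test = rng.integers(0, 4, 40)

    # Random convolutional kernels followed by a linear (ridge) classifier, the
    # configuration described in the text; normalise corresponds to the
    # normalization flag discussed above.
    rocket = Rocket(num_kernels=10_000, normalise=True, random_state=0)
    clf = make_pipeline(rocket, RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)))
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))

Because the kernels are random and fixed, only the linear classifier is trained, which is what keeps training and prediction fast on a single CPU.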
Table <ref> presents the accuracy using different combinations of IMUs placed on different parts of the body. Accuracy is lowest when using only a single sensor. Accuracy starts to increase as more IMUs are included, for both MP and Rowing. We observe that placing 1 IMU on each wrist and 1 at the back achieved the same accuracy as using all 5 IMUs. The accuracy jumps from 0.83 to 0.88 moving from one IMU placed on the right wrist to two IMUs placed on both wrists and finally jumps to 0.91 when adding one more IMU at the back for MP. Similar behaviour is observed for Rowing. This suggests that 3 IMUs are sufficient for these exercises. §.§ Accuracy using Video Here we present the results of classification using video as the data source. We report the average accuracy over 3 train/test splits for MP and Rowing. We also present results using tabular classifiers with automated features for comparison with the IMU based approach. For the raw data approach, we study the accuracy when involving different body parts, e.g., all 25, the 8 upper body parts suggested by domain experts and results using automated channel selection technique <cit.>. The normalization flag is set to False here as turning it on leads to a 4 percentage points drop in accuracy. This is in contrast to the setting configured for IMUs. We tune hyperparameters of ROCKET, particularly the number-of-kernels and observe no impact on the accuracy. Table <ref> presents the average accuracy using these different approaches for classifying MP and Rowing exercises. Results and Discussion: From Table <ref> we observe that the average accuracy achieved using raw time series is highest when using the 8 body parts suggested by domain experts. Using automated features does not seem to work very well, in this case, achieving accuracy below 80% for both exercises. Moreover, using channel selection techniques leads to an improvement by 1 and 3 percentage points in accuracy versus using the full 25 body parts. §.§ IMU versus Video We compare IMU and video data for human exercise classification, using the raw data approach for both IMU and video as it achieves the best performance. We report the accuracy, the execution time and the storage space required. Table <ref> presents the results for both MP and Rowing exercises. We observe that a minimum of 3 IMUs are required to achieve a higher accuracy than a single video. A single video outperforms a single IMU for both exercises by a minimum of 5 percentage points. Table <ref> reports the real train/test time for both approaches. This time includes time taken for data pre-processing and to train/test the model. It also includes time to run pose estimation in case of video. The IMUs approach takes the least amount of time to train/test as compared to the video-based approach. For video, OpenPose extracts the multivariate time series data. The total duration of all videos is 1h 38 minutes for MP, whereas OpenPose took 1h 12 minutes thus OpenPose can run faster than real-time, which is important for getting fast predictions. Table <ref> presents the storage consumption for both approaches. We note savings in terms of storage space: 5 IMUs require 6 times more space than the time series obtained from videos. Even after selecting the minimum number of sensors which is 3 in both exercises, the storage consumption is more than 200 MB which is also higher as compared to using time series from video. 
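The two modalities can also be combined. The ensemble experiments discussed in the next subsection average the class-probability estimates of independently trained per-modality models; a minimal sketch of that combination step is given below, with placeholder array names and dummy scores.

    import numpy as np

    def ensemble_predict(proba_video, proba_imu, classes):
        # proba_*: (N, n_classes) probability (or normalised score) matrices from
        # two independently trained models evaluated on the same test repetitions.
        avg = (proba_video + proba_imu) / 2.0
        return np.asarray(classes)[np.argmax(avg, axis=1)]

    # Example with dummy scores for the four MP classes.
    rng = np.random.default_rng(0)
    p_video = rng.dirichlet(np.ones(4), size=10)
    p_imu = rng.dirichlet(np.ones(4), size=10)
    print(ensemble_predict(p_video, p_imu, classes=["N", "A", "R", "Arch"]))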
Our previous work in <cit.> explored the impact of video quality such as resolution and bit rate on classification accuracy and demonstrated how much video quality can be degraded without having a significant impact on the accuracy, whilst saving storage space and processing power. §.§ Combining IMU and Video We create an ensemble model by combining individual models trained independently on IMU and Video. For IMUs, we take the 3 sensors that achieved the highest accuracy. When video is combined with just a single sensor, we take the IMU placed on the left wrist, as it had the highest accuracy among single sensors and it is the most common location for people to wear their smartwatch. Probabilities are combined by averaging and the class with the highest average probability is predicted for a sample during test time. Table <ref> presents a comparison of different approaches, using ROCKET as a multivariate time series classifier. From Table <ref>, we observe that an ensemble model achieves the best average accuracy when compared to using any number of IMUs and a single video-based approach. The accuracy for MP jumps by 2 percentage points when transitioning from 5 IMUs to an ensemble approach, and by 5 percentage points when moving from a single video to an ensemble. Similar results are observed for Rowing. These results suggest that combining IMU and video modalities enhances the performance of exercise classification. Combining video and IMU data sources, with video providing 2D location coordinates for key anatomical landmarks and IMUs capturing acceleration and orientation of the body parts, results in improved classification accuracy, as shown in this investigation (see supplementary material). This finding is consistent with previous work in <cit.> that highlights the complementary nature of video and IMUs in enhancing human pose estimation quality, while in this work we see a similar benefit for human exercise classification. § CONCLUSION We presented a comparison of IMU and video-based approaches for human exercise classification on two real-world S&C exercises (Military Press and Rowing) involving 54 participants. We compared different feature-creation strategies for classification. The results show that an automated feature extraction approach outperforms classification that is based on manually created features. Additionally, directly using the raw time series data with multivariate time series classifiers achieves the best performance for both IMU and video. While comparing IMU and video-based approaches, we observed that using a single video significantly outperforms the accuracy obtained using a single IMU. Moreover, the minimum number of IMUs required is not known in advance, for instance, 3 IMUs are required for MP to reach a reasonable accuracy. Next, we compared the performance of an ensemble method combining both IMU and video with the standalone approaches. We showed that an ensemble approach outperforms either data modality deployed in isolation. The accuracy achieved was 93% and 88% for MP and Rowing respectively. The criteria to select sensors or videos will ultimately depend on the goal of the end user. For instance: the choice between video and IMUs will depend on a combination of factors such as convenience and levels of accuracy required for the specific application context. We acknowledge the fact that the scenario that was tested in this research does not accurately reflect real-world conditions. 
This does mean that we are exposed to the risk that the induced deviation performances could be exaggerated, and therefore not reflective of the often very minor deviations that can be observed in the real-world setting. However, we would argue that performing exercises under induced deviation conditions, if done appropriately, is a very necessary first step towards validating these exercise classification strategies in this field. It would not be prudent to assume that this model could be generalised to operate to the same level in real-world conditions. Having said that, the use of conditioned datasets is a necessary first step in this kind of application and provides the proof of concept evidence necessary to move onto the real-world setting. §.§.§ Acknowledgements This work was funded by Science Foundation Ireland through the Insight Centre for Data Analytics (12/RC/2289_P2) and VistaMilk SFI Research Centre (SFI/16/RC/3835). § ETHICAL IMPLICATIONS Using videos for human exercise classification raises ethical implications that need to be mitigated, prompting a discussion of potential ethical implications. Data Collection. Participants in this study provided written consent and the Human Research Ethics Committee of the university approved this study. All experiments were conducted under the supervision of an expert physiotherapist. The potential implications, in this case, can arise when the language used for the consent form may not be native to all the participants. In our case, the organizing authority or professional who was carrying out the data collection made sure that all the participants have well understood the consent form and the use of this data in the future. Privacy and Confidentiality. This study uses videos which record participants executing exercises. This poses obvious privacy challenges. A first step is to blur the video to protect the participant's identity. This work utilizes human pose estimation to extract time series from video, thereby avoiding the need to directly use the original video. By working with the extracted time series, it largely safeguards the privacy and confidentiality of the participants. Diversity of Representation. The participants considered in this study fall into the age group of 20 to 46. Hence the results presented here may not generalise for other age groups. Therefore the final use case will depend on the specific target users, such as athletes competing in the Olympic games versus individuals with less intensive training goals. While there were slightly more male participants than female participants, it does not impact the conclusions drawn in this work, as analysed in the supplementary material. However, this requires further exploration to avoid any biases in the conclusion. Future studies should aim for equal representation among participants in terms of age, sex, gender, race etc., from the start of the study. Transparency and Feedback. The prediction of the model in this case outputs whether the execution of the exercise was correct or incorrect. Deep learning-based models and other posthoc explanation methods support saliency maps which can be used to highlight the discriminative regions of the data that can be mapped back to the original video thus providing more information about the model decision to the participant. The above list is not exhaustive and other inherent biases may appear because of the chosen model and the way the data has been collected. splncs04
http://arxiv.org/abs/2307.04313v1
20230710025609
Unknotted Curves on Seifert Surfaces
[ "Subhankar Dey", "Veronica King", "Colby T. Shaw", "Bülent Tosun", "Bruce Trace" ]
math.GT
[ "math.GT", "57K30, 57K10" ]
Department of Mathematics University of Alabama Tuscaloosa AL [email protected] Department of Mathematics University of Texas Austin Austin TX [email protected] School of Mathematics Georgia Institute of Technology Atlanta GA [email protected] Department of Mathematics University of Alabama Tuscaloosa AL [email protected] Department of Mathematics University of Alabama Tuscaloosa AL [email protected] [2010]57K33, 57K43, 32E20 We consider homologically essential simple closed curves on Seifert surfaces of genus one knots in S^3, and in particular those that are unknotted or slice in S^3. We completely characterize all such curves for most twist knots: they are either positive or negative braid closures; moreover, we determine exactly which of those are unknotted. A surprising consequence of our work is that the figure eight knot admits infinitely many unknotted essential curves up to isotopy on its genus one Seifert surface, and those curves are enumerated by Fibonacci numbers. On the other hand, we prove that many twist knots admit homologically essential curves that cannot be positive or negative braid closures. Indeed, among those curves, we exhibit an example of a slice but not unknotted homologically essential simple closed curve. We further investigate our study of unknotted essential curves for arbitrary Whitehead doubles of non-trivial knots, and obtain that there is a precisely one unknotted essential simple closed curve in the interior of the doubles' standard genus one Seifert surface. As a consequence of all these we obtain many new examples of 3-manifolds that bound contractible 4-manifolds. Unknotted Curves on Seifert Surfaces Bruce Trace ==================================== § INTRODUCTION Suppose K ⊆ S^3 is a genus g knot with Seifert Surface Σ_K. Let b be a curve in Σ_K which is homologically essential, that is it is not separating Σ_K, and a simple closed curve, that is it has one component and does not intersect itself. Furthermore, we will focus on those that are unknotted or slice in S^3, that is each bounds a disk in S^3 or B^4. In this paper we seek to progress on the following problem: Characterize and, if possible, list all such b's for the pair (K, Σ_K) where K is a genus one knot and Σ_K its Seifert surface. Our original motivation for studying this problem comes from the intimate connection between unknotted or slice homologically essential curves on a Seifert surface of a genus one knot and 3-manifolds that bound contractible 4-manifolds. We defer the detailed discussion of this connection to Section <ref>, where we also provide some historical perspective. For now, however, we will focus on getting a hold on the stated problem above for a class of genus one knots, and as we will make clear in the next few results, this problem is already remarkably interesting and fertile on its own. §.§ Main Results. A well studied class of genus one knots is so called twist knot K=K_t which is described by the diagram on the left of Figure <ref>. We note that with this convention K_-1 is the right-handed trefoil T_2,3 and K_1 is the figure eight knot 4_1. We will consider the genus one Seifert surface Σ_K for K=K_t as depicted on the right of Figure <ref>. The first main result in this paper is the following. Let t≤ 2. Then the genus one Seifert surface Σ_K of K=K_t admits infinitely many homologically essential, unknotted curves, if and only if t=1, that is K is the figure eight knot 4_1. 
Indeed, we can be more precise and characterize all homologically essential, simple closed curves on Σ_K, from which Theorem <ref> follows easily. To state this we recall an essential simple closed curve c on Σ_K can be represented (almost uniquely) by a pair of non-negative integers (m,n) where m is the number of times c=(m,n) runs around the left band and n is the number of times it runs around the right band in Σ_K. Moreover, since c is connected, we can assume gcd(m,n) = 1. Finally, to uniquely describe c, we adopt the notation of ∞ curve and loop curve for a curve c, if the curve has its orientation switches one band to the other and it has the same orientation on both bands, respectively (See Figure <ref>). Let K=K_t be a twist knot and Σ_K its Seifert surface as in Figure <ref>. Then; * For K =K_t, t≤ -1, we can characterize all homologically essential simple closed curves on Σ_K as the closures of negative braids in Figure <ref>. In case of the right-handed trefoil K_-1=T_2,3, exactly 6 of these, see Figure <ref>, are unknotted in S^3. For t<-1, exactly 5 of these, see Figure <ref>, are unknotted in S^3. * For K=K_1=4_1, we can characterize all homologically essential simple closed curves on Σ_K as the closures of braids in Figure <ref>. A curve on this surface is unknotted in S^3 if and only if it is (1) a trivial curve (1,0) or (0,1), (2) an ∞ curve in the form of (F_i+1,F_i), or (3) a loop curve in the form of (F_i,F_i+1), where F_i represents the i^th Fibonacci number, see Figure <ref>. For twist knot K=K_t with t>1 the situation is more complicated. Under further hypothesis on the parameters m,n we can obtain results similar to those in Theorem <ref>, and these will be enough to extend the theorem entirely to the case of K=K_2, so called Stevedore's knot 6_1 (here we use the Rolfsen's knot tabulation notation). More precisely we have; Let K=K_t be a twist knot and Σ_K its Seifert surface as in Figure <ref>. Then; * When t>1 and m<n, we can characterize all homologically essential simple closed curves on Σ_K as the closures of positive braids in Figure <ref>(a)(b). Exactly 5 of these, see Figure <ref>, are unknotted in S^3. * When t>1 and m>n. * If m-tn>0, then we can characterize all homologically essential simple closed curves on Σ_K as the closures of negative braids in Figure <ref> and  <ref>. Exactly 5 of these, see Figure <ref>, are unknotted in S^3. * If m-n<n and the curve is ∞ curve, then we can characterize all homologically essential simple closed curves on Σ_K as the closures of positive braids Figure <ref>. Exactly 5 of these, see Figure <ref>, are unknotted in S^3. * For K =K_2=6_1, we can characterize all homologically essential simple closed curves on Σ_K as the closures of positive or negative braids. Exactly 5 of these, see Figure <ref>, are unknotted in S^3. What Theorem <ref> cannot cover is the case t>2, m>n and m-tn<0 or when m-n<n and the curve is a loop curve. Indeed in this range not every homologically essential curve is a positive or negative braid closure. For example, when (m,n)=(5,2) and t=3 one obtains that the corresponding essential ∞ curve, as a smooth knot in S^3, is the knot 5_2, and for (m,n)=(7,3) and t=3, the corresponding knot is 10_132 both of which are known to be not positive braid closures–coincidentally, these knots are not unknotted or slice. 
Moreover we can explicitly demonstrate, see below, that if one removes the assumption of “∞” from part 2(b) in Theorem <ref>, then the conclusion claimed there fails for certain loop curves when t>2. A natural question is then whether for knot K =K_t with t>2, m>n and m-tn<0 or m-n<n loop curve, there exists unknotted or slice curves on Σ_K other than those listed in Figure <ref>? A follow up question will be whether there exists slice but not unknotted curves on Σ_K for some K=K_t? We can answer the latter question in affirmative as follows: Let K=K_t be a twist knot with t>2 and Σ_K its Seifert surface as in Figure <ref> and consider the loop curve (m,n) with m=3, n=2 on Σ_K. Then this curve, as a smooth knot in S^3, is the pretzel knot P(2t-5, -3, 2). This knot is never unknotted but it is slice (exactly) when t=4, in which case this pretzel knot is also known as the curious knot 8_20. We note that the choices of m,n values made in Theorem <ref> are somewhat special in that they yielded an infinite family of pretzel knots, and that it includes a slice but not unknotted curve. Indeed, by using Rudolph's work in <cit.>, we can show (see Proposition <ref>) that the loop curve (m,n) with m-n=1, n>2 and t>4 on Σ_K, as a smooth knot in S^3, is never slice. The calculation gets quickly complicated once m-n>1, and it stays an open problem if in this range one can find other slice but not unknotted curves. We can further generalize our study of unknotted essential curves on minimal genus Seifert surface of genus one knots for the Whitehead doubles of non-trivial knots. We first introduce some notation. Let P be the twist knot K_t embedded (where t=0 is allowed) in a solid torus V⊂ S^3, and K denote an arbitrary knot in S^3, we identify a tubular neighborhood of K with V in such a way that the longitude of V is identified with the longitude of K coming from a Seifert surface. The image of P under this identification is a knot, D^±(K,t), called the positive/negative t–twisted Whitehead double of K. In this situation the knot P is called the pattern for D^±(K,t) and K is referred to as the companion. Figure <ref> depicts the positive -3–twisted Whitehead double of the left-handed trefoil, D^+(T_2,-3, -3). If one takes K to be the unknot, then D^+(K,t) is nothing but the twist knot K_t. Let K denote a non-trivial knot in S^3. Suppose that Σ_K is a standard genus one Seifert surface for the Whitehead double of K. Then there is precisely one unknotted homologically essential, simple closed curves in the interior of Σ_K. §.§ From unknotted curves to contractible 4-manifolds. The problem of finding unknotted homologically essential curves on a Seifert surface of a genus one knot is interesting on its own, but it is also useful for studying some essential problems in low dimensional topology. We expand on one of these problems a little more. An important and still open question in low dimensional topology asks: which closed oriented homology 3-sphere [A homology 3-sphere/4-ball is a 3-/4- manifold having the integral homology groups of S^3/B^4.] bounds a homology 4-ball or contractible 4-manifold (see <cit.>). This problem can be traced back to the famous Whitney embedding theorem and other important subsequent results due to Hirsch, Wall and Rokhlin <cit.> in the 1950s. Since then the research towards understanding this problem has stayed active. 
It has been shown that many infinite families of homology spheres do bound contractible 4-manifolds <cit.> and at the same time many powerful techniques and invariants, mainly coming from Floer and gauge theories <cit.> have been used to obtain constraints. In our case, using our main results, we will be able to list some more homology spheres that bound contractible 4-manifolds. This is because of the following theorem of Fickle <cit.>. Let K be a knot in S^3 which has a genus one Seifert surface F with a primitive element [b]∈ H_1(F) such that the curve b is unknotted in S^3. If b has self-linking s, then the homology sphere obtained by 1/(s± 1) Dehn surgery on K bounds a contractible [Indeed, this contractible manifold is a Mazur-type manifold, namely it is a contractible 4-manifold that has a single handle of each index 0, 1 and 2 where the 2-handle is attached along a knot that links the 1-handle algebraically once. This condition yields a trivial fundamental group.] 4-manifold. This result in <cit.> was generalized to genus one knots in the boundary of an acyclic 4–manifold W, and where the assumption on the curve b is relaxed so that b is slice in W. This will be useful for applying to the slice but not unknotted curve/knot found in Theorem <ref>. The natural task is to determine self-linking number s, with respect to the framing induced by the Seifert surface, for the unknotted curves found in Theorem <ref> and <ref>. For this we use the Seifert matrix given by S = [ -1 -1; 0 t ] where we use two obvious cycles–both oriented counterclockwise–in Σ_K. Recall that, if c=(m,n) is a loop curve then m and n strands are endowed with the same orientation and hence the same signs. On the other hand for ∞ curve they will have opposite orientation and hence the opposite signs. Therefore, given t, the self-linking number of c=(m,n) loop curve is s=-m^2-mn+n^2t, and the self-linking number of (m,n) ∞ curve is s=-m^2+mn+n^2t. A quick calculation shows that the six unknotted curves in Figure <ref> for K_-1=T_2,3 share self-linking numbers s=-1, -3. As we will see during the proof of Theorem <ref> the infinitely many unknotted curves for the figure eight knot K_1=4_1 reduce (that are isotopic) to unknotted curves with s=-1 or s=1. The five unknotted curves in Figure <ref> for K_t, t<-1 or t>1, share self-linking numbers s=-1, t and t-2 (see <cit.> and references therein for some relevant work). Finally, Theorem <ref> finds a slice but not unknotted curve which is the curve (3,2) with t=4. One can calculate from the formula above that this curve has self-linking number s=1. Finally, the unique unknotted curve from Theorem <ref> has self linking s=-1. Thus, as an obvious consequence of these calculations and Theorem <ref> and its generalization in <cit.> we obtain: Let K be any non-trivial knot. Then, the homology spheres obtained by * -1/2 Dehn surgery on D^+(K,t) * ±1/2 Dehn surgery on K_1=4_1 * -1/2 and -1/4 Dehn surgeries on K_-1=T_2,3 * -1/2 and 1/t±1 and 1/(t-2)±1 Dehn surgeries on K_t, t≠± 1 * 1/2 Dehn surgery on K_4 bound contractible 4-manifolds. The 3-manifolds in part (3) are Brieskorn spheres Σ(2,3,13) and Σ(2,3,25); they were identified by Casson-Harer and Fickle that they bound contractible 4-manifolds. Also, it was known already that the result of 1/2 Dehn surgery on the figure eight knot bounds a contractible 4-manifold (see <cit.>) from this we obtain the result in part (2) as the figure eight knot is an amphichiral knot. 
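As a quick consistency check of the self-linking formulas above, note that both are of the form s(c) = v^T S v, where v records the band multiplicities of [c] with the orientation convention just described: v=(m,n) for a loop curve and v=(m,-n) for an ∞ curve, and expanding v^T S v with S as above gives exactly -m^2-mn+n^2t and -m^2+mn+n^2t. For instance, for t=-1 the unknotted (2,1) ∞ curve on the trefoil surface gives s = -4+2-1 = -3, one of the two values quoted for T_2,3, and the (3,2) loop curve with t=4 from Theorem <ref> gives s = -9-6+16 = 1, which is consistent with the 1/2 Dehn surgery on K_4 recorded in the corollary above.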
It is known that the result of 1/n Dehn surgery on a slice knot K⊂ S^3 bounds a contractible 4-manifold. To see this, note that at the 4-manifold level with this surgery operation what we are doing is to remove a neighborhood of the slice disk from B^4 (the boundary at this stage is zero surgery on K) and then attach a 2-handle to a meridian of K with framing -n. Now, simple algebraic topology arguments shows that this resulting 4-manifold is contractible. It is a well known result that <cit.>; a nontrivial twist knot K=K_t is slice if and only if K=K_2 (Stevedore's knot 6_1). So, by arguments above we already know that result of 1/n surgery on K_2 bounds contractible 4-manifold for any integer n. But interestingly we do not recover this by using Theorem <ref>. The paper is organized as follows. In Section <ref> we set some basic notations and conventions that will be used throughout the paper. Section <ref> contains the proofs of Theorem <ref>,  <ref> and  <ref>. Our main goal will be to organize, case by case, essential simple closed curves on genus one Seifert surface Σ_K, through sometimes lengthy isotopies, into explicit positive or negative braid closures. Once this is achieved we use a result due to Cromwell that says the Seifert algorithm applied to the closure of a positive/negative braid closure gives a minimal genus surface. This together with some straightforward calculations will help us to determine the unknotted curves exactly. But sometimes it will not be obvious or even possible to reduce an essential simple closed curve to a positive or negative closure (see Section <ref>,  <ref> and  <ref>). Further analyzing these cases will yield interesting phenomenon listed in Theorem  <ref> and  <ref>. Section <ref> contains the proof of Theorem <ref>. §.§ Acknowledgments We thank Audrick Pyronneau and Nicolas Fontova for helpful conversations. The first, second and third authors were supported in part by a grant from NSF (DMS-2105525). The fourth author was supported in part by grants from NSF (CAREER DMS-2144363 and DMS-2105525) and the Simons Foundation (636841, BT). § PRELIMINARIES In this section, we set some notation and make preparations for the proofs in the next three sections. In Figure <ref> we record some basic isotopies/conventions that will be repeatedly used during proofs. Most of these are evident but for the reader's convenience we explain how the move in part (f) works in Figure <ref>. We remind the reader that letters on parts of our curve, as in part (e) of the figure, or in certain location is to denote the number of strands that particular curve has. Recall also an essential, simple closed curve on Σ_K can be represented by a pair of non-negative integers (m,n) where m is the number of times it runs around the left band and n is the number of times it runs around the right band in Σ_K, and since we are dealing with connected curves we must have that m,n are relatively prime. We have two cases: m>n or n>m. For an (m,n) curve with m>n, after the m strands pass under the n strands on the Seifert surface, it can be split into two sets of strands. For this case, assume that the top set is made of n strands. They must connect to the n strands going over the right band, leaving the other set to be made of m-n strands. Now, we can split the other side of the set of m strands into two sections. The m-n strands on the right can only go to the bottom of these two sections, because otherwise the curve would have to intersect itself on the surface. 
This curve is notated as an (m,n) ∞ curve. See Figure <ref>(a). The other possibility for an (m,n) curve with m>n has n strands in the bottom set instead, which loop around to connect with the n strands going over the right band. This leaves the other set with m-n strands. We can split the other side of the set of m strands into two sections. The m-n strands on the right can only go to the top of these two sections, because again otherwise the curve would have to intersect itself on the surface. The remaining subsection must be made of n strands and connect to the n strands going over the right band. This curve is notated as an (m,n) loop curve. See Figure <ref>(b). The case of an (m,n) curve with n>m is similar. See Figure <ref>(c)&(d). § TWIST KNOTS In this section we provide the proofs of Theorem <ref>, <ref> and <ref>. We do this in four parts. Sections <ref> and <ref> contain all technical details of Theorem <ref>, Section <ref> contains details of Theorem <ref> and Section <ref> contains Theorem <ref>. §.§ Twist knot with t<0 In this section we consider the twist knot K=K_t, t≤ -1. This in particular includes the right-handed trefoil K_-1. All essential, simple closed curves on Σ_K can be characterized as the closure of one of the negative braids in Figure <ref>. It suffices to show all possible curves for an arbitrary m and n such that gcd(m, n) = 1 are the closures of either braid in Figure <ref>. As mentioned earlier we will deal with cases where both m, n ≥1 since cases involving 0 are trivial. There are four cases to consider. The arguments for each of these will be quite similar, and so we will explain the first case in detail and refer to the rather self-explanatory drawings/figures for the remaining cases. Case 1: (m,n) ∞ curve with m>n>0. This case is explained in Figure <ref>. The picture on the top left is the (m,n) curve we are interested in. The next picture to its right is the (m,n) curve where we ignore the surface it sits on and use the convention from Figure <ref>(e). The next picture is an isotopy where we push the split between the n strands and the m-n strands along the dotted blue arc. The next three pictures are obtained by applying simple isotopies coming from Figure <ref>. For example, the passage from the bottom right picture to the one to its left is via Figure <ref>(c). Finally, one can easily see that the picture on the bottom left is the closure of the negative braid depicted in Figure <ref>(a). Case 2: (m,n) loop curve with m>n>0. By a series of isotopies, as indicated in Figure <ref>, the (m,n) curve in this case can be simplified to the knot depicted on the right of Figure <ref>, which is the closure of the negative braid in Figure <ref>(b). Case 3: (m,n) ∞ curve with n>m>0. By a series of isotopies, as indicated in Figure <ref>, the (m,n) curve in this case can be simplified to the knot depicted on the bottom left of Figure <ref>, which is the closure of the negative braid in Figure <ref>(c). Case 4: (m,n) loop curve with n>m>0. By a series of isotopies, as indicated in Figure <ref>, the (m,n) curve in this case can be simplified to the knot depicted on the right of Figure <ref>, which is the closure of the negative braid in Figure <ref>(d). Next, we determine which of those curves in Proposition <ref> are unknotted. It is a classic result due to Cromwell <cit.> (see also <cit.>) that the Seifert algorithm applied to the closure of a positive braid gives a minimal genus surface. Let β be a braid as in Figure <ref> and K = β̂ be its closure.
Let s(K) be the number of Seifert circles and l(K) be the number of crossings. Then: (s(K), l(K)) = (m, |t|n(n-1) + (m - n)(m - n - 1) + n(m - n)) for β as in Figure <ref>(a); (m+n, (|t|+1)n(n-1) + (m - n)(m - n - 1) + nm + 2n(m - n)) for β as in Figure <ref>(b); (n, (|t|-1)n(n-1) + (n - m)(n - m - 1) + m(m-1) + m(n - m)) for β as in Figure <ref>(c); (m+n, |t|n(n-1) + m(m - 1) + nm) for β as in Figure <ref>(d). Consider the braid β as in Figure <ref>(a). Clearly, it has m Seifert circles as β has m strands. Next, we will analyze the three locations in which crossings occur. First, the t negative full twists on n strands: since in each full twist each strand crosses the other n-1 strands, we obtain |t|n(n-1) crossings. Second, the negative full twist on m-n strands produces an additional (m-n)(m-n-1) crossings. Lastly, notice the part of β where the m-n strands pass over the other n strands; each of these m-n strands contributes an additional n crossings. Hence for K=β̂ we calculate: l(β̂) = |t|n(n-1) + (m - n)(m - n - 1) + n(m - n). The calculations for the other cases are similar. We can now prove the first part of Theorem <ref>. Proposition <ref> proves the first half of our theorem. To show that there are exactly six unknotted curves when t=-1 and five when t<-1, let B be the set containing the six and five unknotted curves as in Figure <ref> and <ref>, respectively. It suffices to show that an essential, simple closed curve c on Σ_K with c ∉B cannot be unknotted in S^3. We know by Proposition <ref> that c is the closure of one of the braids in Figure <ref> in S^3, where m,n ≥ 1, gcd(m,n) = 1. We show, case by case, that the Seifert surface obtained via the Seifert algorithm for curves c∉B in each case has positive genus, and hence it cannot be unknotted. * Let c=(m,n) be the closure of the negative braid as in Figure <ref>(a) and Σ_c its Seifert surface obtained by the Seifert algorithm. There are m Seifert circles and by Proposition <ref> l(c) = |t|n(n-1) + (m-n)(m-n-1) + n(m-n). Hence, g(Σ_c) = (1 + l - s)/2 = (m(m-n-2) + n(|t|(n-1)+1) + 1)/2. If m=n+1, then we get g(Σ_c)=|t|n(n-1)/2, which is positive as long as n>1–note that when c=(2,1) we indeed get an unknotted curve. If m>n+1, then g(Σ_c)≥ (n(|t|(n-1)+1)+1)/2>0 as long as n>0. So, c∉B is not an unknotted curve as long as m>n≥ 1. * Let c=(m,n) be the closure of the negative braid as in Figure <ref>(b) and Σ_c its Seifert surface obtained by the Seifert algorithm. There are n+m Seifert circles and by Proposition <ref> l(c) = (|t|+1)n(n-1) + (m - n)(m - n - 1) + nm + 2n(m - n). Hence, g(Σ_c) = (m(m+n-2)+n(|t|(n-1)-1)+1)/2. One can easily see that this quantity is always positive as long as n≥ 1. So, c∉B is not an unknotted curve when m>n≥ 1. * Let c=(m,n) be the closure of the negative braid as in Figure <ref>(c) and Σ_c its Seifert surface obtained by the Seifert algorithm. There are n Seifert circles and by Proposition <ref> l(c) = (|t|-1)n(n-1) + (n - m)(n - m - 1) + m(m-1) + m(n - m). Hence, g(Σ_c) = (n(|t|(n-1)-m-1)+m^2+1)/2. This is always positive as long as m≥ 1 and |t|≠ 1–note that when c=(1,2) and |t|=1 we indeed get an unknotted curve. So, c∉B is not an unknotted curve when n>m≥ 1. * Let c=(m,n) be the closure of the negative braid as in Figure <ref>(d) and Σ_c its Seifert surface obtained by the Seifert algorithm. There are n+m Seifert circles and by Proposition <ref> l(c) = |t|n(n-1) + m(m - 1) + nm. Hence, g(Σ_c) = (|t|n(n-1)+m(m-2)+n(m-1)+1)/2. One can easily see that this quantity is always positive as long as m≥ 0.
So, c∉B is not an unknotted curve when n>m≥ 1. This completes the first part of Theorem <ref>. §.§ Figure eight knot The case of the figure eight knot is certainly the most interesting one. It is rather surprising, even to the authors, that there exists a genus one knot with infinitely many unknotted curves on its genus one Seifert surface. As we will see, understanding homologically essential curves for the figure eight knot will be similar to what we did in the previous section. The key difference develops in Cases 2 and 4 below, where we show how, under certain conditions, a homologically essential (m,n) ∞ (resp. (m,n) loop) curve can be reduced to the homologically essential (m-n, 2n-m) ∞ (resp. (2m-n, n-m) loop) curve, how this recursively produces infinitely many distinct homology classes that are represented by the unknot, and how certain Fibonacci numbers can be used to describe these unknotted curves. Finally, we will show that for the figure eight knot this is the only way that an unknotted curve can arise. Adapting the notations developed thus far, we start characterizing homologically essential simple closed curves on the genus one Seifert surface Σ_K of the figure eight knot K. All essential, simple closed curves on Σ_K can be characterized as the closure of one of the braids in Figure <ref> (note the first and third braids from the left are negative and positive braids, respectively). The curves (1,0), (0,1) are clearly unknots. Moreover, because gcd(m,n)=1, the only curve with n=m is the (1,1) curve, which is also an unknot in S^3. For the rest of the arguments below, we will assume n>m or m>n. There are four cases to consider: Case 1: (m,n) loop curve with m>n>0. This curve can be turned into a negative braid following the process in Figure <ref>. Case 2: (m,n) ∞ curve with m>n>0. As mentioned at the beginning, this case (together with Case 4) is much more involved and interesting (in particular the subcases of Case 2c and 4c). Following the process as in Figure <ref>, the curve can be isotoped as in the bottom right of that figure, which is the closure of the braid on its left–that is, the second braid from the left in Figure <ref>. Case 3: (m,n) ∞ curve with n>m>0. This curve can be turned into a positive braid following the process in Figure <ref>. Case 4: (m,n) loop curve with n>m>0. This curve can be turned into the closure of a braid following the process in Figure <ref>. We next determine which of these curves are unknotted: A homologically essential curve c characterized as in Proposition <ref> is unknotted if and only if it is (a) a trivial curve (1,0) or (0,1), (b) an ∞ curve of the form (F_{i+1},F_i), or (c) a loop curve of the form (F_i,F_{i+1}). Let c denote one of the homologically essential curves listed in Proposition <ref>. We will analyze the unknottedness of c in four separate cases. Case 1. Suppose c=(m,n) is the closure of the negative braid in the bottom left of Figure <ref>. Note that the minimal Seifert surface of c, Σ_c, has n(m-n)+m(m-1) crossings and m Seifert circles. Hence, g(Σ_c) = (n(m-n)+(m-1)^2)/2. This is a positive integer for all m,n with m>n. So c is never unknotted in S^3 as long as m>n>0. Case 2. Suppose c is of the form in the bottom right of Figure <ref>. Since this curve is not a positive or negative braid closure, we cannot directly use Cromwell's result as in Case 1 or the previous section. There are three subcases to consider. Case 2a: m-n=n.
Because m and n are relatively prime integers, we must have that m=2, n=1, and we can easily see that this (2,1) curve is unknotted. Case 2b: m-n>n. This curve can be turned into a negative braid following the process in Figure <ref>. More precisely, we start, on the top left of that figure, with the curve appearing on the bottom right of Figure <ref>. We extend the split along the dotted blue arc and isotope m strands to reach the next figure. We note that this splitting can be done since by assumption m-2n>0. Then using Figure <ref>(a) and further isotopy we reach the final curve on the bottom right of Figure <ref>, which is obviously the closure of the negative braid depicted on the bottom left of that picture. The minimal Seifert surface coming from this negative braid closure contains m-n Seifert circles and (m-2n)n+(m-n)(m-n-1) crossings. Hence, g(Σ_c)=((m-2n)n+(m-n)(m-n-2)+1)/2. This is a positive integer for all integers m,n with m-n>n. So, c is not unknotted in S^3. Case 2c: m-n<n. We reorganize this curve a bit more. We start, on the top left of Figure <ref>, with the curve that is appearing on the bottom left of Figure <ref>. We extend the split along the dotted blue arc and isotope m-n strands to reach the next figure. After some isotopies we reach the curve on the bottom left of Figure <ref>. In other words, this subcase of Case 2c leads to a reduced version of the original picture (top left curve in Figure <ref>), in the sense that the number of strands over either handle is less than the number of strands in the original picture. This case can be further subdivided depending on the relationship between 2n-m and m-n, but this braid (or rather its closure) will turn into an (m-n, 2n-m) ∞ curve when m-n>2n-m: Case 2c-i: 2n-m = m-n. This simplifies to 3n=2m. Because gcd(m,n)=1, this will only occur for m=3 and n=2, and the resulting curve is the (1,1) ∞ curve. In other words, we observe here that the (3,2) curve has been reduced to the (1,1) curve. Case 2c-ii: 2n-m > m-n. This means that we are dealing with a curve under Case 3, and we will see that all curves considered there are positive braid closures. Case 2c-iii: 2n-m < m-n. This means we are back under Case 2. So for m>n>m-n, the (m,n) ∞ curve is isotopic to the (m-n, 2n-m) ∞ curve. This isotopy series will be notated (m,n) ∼ (m-n, 2n-m). Equivalently, there is a series of isotopies such that (m-n, 2n - m) ∼ (m,n). If (k,l) denotes a curve at one stage of this isotopy, then (k,l) ∼ ((k +l) + k, k+l). So, starting with k = l = 1, we recursively obtain: (1,1) ∼ (3,2) ∼ (8,5) ∼ (21, 13) ∼ (55, 34) ∼⋯ In a similar fashion, if we start with k = 2, l = 1 we obtain: (2,1) ∼ (5,3) ∼ (13,8) ∼ (34, 21) ∼ (89, 55) ∼⋯ Notice every curve c above is of the form c = (F_{i+1}, F_i), i ∈ℤ_>0, where F_i denotes the i^th Fibonacci number. We will call these Fibonacci curves. We choose (1,1) and (2,1) as starting points because they are known unknots. As a result, this relation generates an infinite family of homologically distinct simple closed curves on Σ_K that are unknotted in S^3. Case 3. Suppose a curve, c, is of the form (3), which is the closure of the positive braid depicted in the bottom left of Figure <ref>. An argument similar to that applied to Case 1 can be used to show c is never unknotted in S^3. Case 4. Suppose c is of the form as in the bottom middle of Figure <ref>. Similar to Case 2, there are three subcases to consider. Case 4a: m = n - m. Then 2m=n. Because gcd(m,n)=1, m=1 and n=2, resulting in an unknot. Case 4b: n-m>m.
Then n-2m>0 and following the isotopies in Figure <ref>, the curve can be changed into the closure of positive braid depicted on the bottom right of that figure. Identical to Case 2b, the curve c in this case is never unknotted in S^3. Case 4c: m>n-m. Then 2m-n>0, and we can split the m strands into two: a n-m strands and a 2m-n strands. This case can be further subdivided depending on the relationship between n-m and 2m-n, but this braid will turn into a (2m-n, n-m) loop curve when n-m>2m-n: Case 4c-i: 2m-n = n-m. This simplifies to 3m=2n. Because gcd(m,n)=1, this will only occur for m=2 and n=3, and the resulting curve is a (1,1) loop curve. Case 4c-ii: n-m < 2m-n. This means that we are dealing with a curve under Case 1, and we saw that all curves considered there are negative braid closures. Case 4c-iii: n-m > 2m-n. This means that we are back to be under Case 4. So for n>m>n-m, an (m,n) loop curve has the following isotopy series: (m,n) ∼ (2m-n,n-m). If (k,l) denote a curve at one stage of this isotopy, then the reverse also holds: (k,l) ∼ (k+l, (k+l)+l). As a result, much like Case 2c, we can generate two infinite families of unknotted curves in S^3: (1,1) ∼ (2,3) ∼ (5,8) ∼ (13, 21) ∼ (34, 55) ∼⋯ and (1,2) ∼ (3,5) ∼ (8,13) ∼ (21, 34) ∼ (55, 89) ∼⋯ Notice every curve c is of the form c = (F_i, F_i + 1), i ∈ℤ_>0. Finally, we show that this is the only way one can get unknotted curves. That is, we claim: If a homologically essential curve c on Σ_K for K=4_1 is unknotted, then it must be a Fibonacci curve. From above, it is clear that if our curve c is Fibonacci, then it is unknotted. So it suffices to show if a curve is not Fibonacci then it is not unknotted. We will demonstrate this for loop curves under Case 4. Let c be a loop curve that is not Fibonacci but is unknotted. Since it is unknotted, it fits into either Case 4a or 4c. But the only unknotted curve from Case 4a is (1,1) curve which is a Fibonacci curve, so c must be under Case 4c. By our isotopy relation, (m,n) ∼ (2m-n, n - m). So, the curve can be reduced to a minimal form, say (a,b) where (a,b) ≠ (1,1) and (a,b) ≠ (2,1). We will now analyze this reduced curve (a,b): * If a = b, then (a,b) = (1,1); a contradiction. * If a > b, then (a,b) is under Case 1; none of those are unknotted. * If b - a < a < b, then (a,b) is still under Case 4c, and not in reduced form; a contradiction. * If a < b - a < b, then (a,b) is under Case 4b; none of those are unknotted. * If b-a = a<b, then (a,b) = (2,1); a contradiction. So, it has to be that either (a,b) ∼ (1,1) or (a,b) ∼ (2,1). Hence, it must be that c = (F_i, F_i+1) for some i. The argument for the case where c is an ∞ curve under Case 2 is identical. §.§ Twist knot with t>1–Part 1 In this section we consider twist knot K=K_t, t≥ 2, and give the proof of Theorem <ref>. All essential, simple closed curves on Σ_K can be characterized as the closure of one of the braids in Figure <ref>. It suffices to show all possible curves for an arbitrary m and n such that gcd(m, n) = 1 are the closures of braids in Figure <ref>. Here too there are four cases to consider but we will analyze these in slightly different order than in the previous two sections. Case 1: (m,n) ∞ curve with n>m>0. In this case the curve is the closure of a positive braid, and this is explained in Figure <ref> below. 
More precisely, we start with the curve which is drawn in the top left of the figure, and after a sequence of isotopies this becomes the curve in the bottom right of the figure, which is obviously the closure of the braid in the bottom left of the figure. In particular, when n>m≥ 1, none of these curves will be unknotted. Case 2: (m,n) loop curve with n>m>0. In this case too the curve is the closure of a positive braid, and this is explained in Figure <ref> below. In particular, when n>m>1, none of these curves will be unknotted. In the remaining two cases we will follow a slightly different way of identifying our curves as braid closures. As we will see (and as is evident in parts (c) and (d) of Proposition <ref>), the braids will not be positive or negative braids for general m, n and t values. We will then verify how, under the various hypotheses listed in Theorem <ref>, these braids can be reduced to positive or negative braids. Case 3: (m,n) ∞ curve with m>n>0. We explain in Figure <ref> below how the (m,n) ∞ curve with m>n>0 is the closure of the braid in the bottom left of the figure. This braid is not obviously a positive or negative braid. Case 3a: (m,n) ∞ curve with m>n>0 and m-tn>0. We want to show that, under the hypothesis m-tn>0, the braid in the bottom left of Figure <ref> can be made a negative braid. We achieve this in Figure <ref>. More precisely, in part (a) of the figure we see the braid that we are working on. We apply the move in Figure <ref>(f) and some obvious simplifications to reach the braid in part (d). In part (e) of the figure we re-organize the braid: more precisely, since m-tn>0 and m-n=m-tn+(t-1)n, we can split the piece of the braid in part (d) made of m-n strands into a stack of m-tn strands and t-1 sets of n strands. We then apply the move in Figure <ref>(f) repeatedly (t-1 times) to obtain the braid in part (f). We note that it is not important for our purposes to draw the block labeled “all negative crossings” explicitly, but we emphasize that each time we apply the move in Figure <ref>(f) it produces a full left-handed twist between a set of n strands and the rest. Next, sliding these t-1 full twists one by one from the n strands over the block of negative crossings, we reach part (g). After further obvious simplifications and organizations in parts (h)–(j) we reach the braid in part (k), which is a negative braid. Case 3b: (m,n) ∞ curve with m>n>0 and m-n<n. We want to show that in this case the braid in the bottom left of Figure <ref>, under the hypothesis that m-n<n, can be made a positive braid (regardless of the t value). This is achieved in Figure <ref>. Case 4: (m,n) loop curve with m>n>0. The arguments for this case are identical to those in Cases 3 and 3a above. The (m,n) loop curve with m>n>0 is the closure of the braid that is drawn in the bottom left of Figure <ref>. Case 4a: (m,n) loop curve with m>n>0 and m-tn>0. We show that the braid which the (m,n) loop curve with m>n>0 is the closure of can be made a negative braid under the hypothesis m-tn>0. This follows steps very similar to Case 3a and is explained through a series of drawings in Figure <ref>. Case 4b: (m,n) loop curve with m>n>0 and m-n<n. Finally, we consider the (m,n) loop curve with m>n>0 and m-n<n. Interestingly, for t>2 this curve does not have to be the closure of a positive or negative braid. This will be further explored in the next section, but for now we observe, through Figure <ref>(a)-(c), that when t=2 the curve is the closure of a negative braid: the braid in (a) of the figure is the braid from Figure <ref>(d).
After applying the move in Figure <ref>(f) and simple isotopies, we obtain the braid in (c), which is clearly a negative braid when t=2. The proof of part (1) follows from Cases 1 and 2 above. Part (2)a/b follows from Cases 3a/b and Case 4a above. As for part (3), observe that when n>m, by using Cases 1 and 2 we obtain that all homologically essential curves are the closures of positive braids. When m>n, we have either m-2n>0 or m-2n<0. In the former case we use Cases 3a and 4a to obtain that all homologically essential curves are the closures of negative braids. In the latter case, first note that m-2n<0 is equivalent to m-n<n. Now by Case 3b all homologically essential ∞ curves are the closures of positive braids, and by Case 4b all homologically essential loop curves are the closures of negative braids. Now by using Cromwell's result and some straightforward genus calculations we deduce that when m>n>1 or n>m≥ 1 there are no unknotted curves among the (positive/negative) braid closures obtained in Cases 1-4 above. Therefore, there are exactly 5 unknotted curves among homologically essential curves on Σ_K for K=K_t in Theorem <ref>. §.§ Twist knot with t>1–Part 2 In this section we consider the twist knot K=K_t, t≥ 3, and give the proof of Theorem <ref>. We show that the loop curve (3,2) when t≥ 3 is the pretzel knot P(2t-5, -3,2). This is explained in Figure <ref>. The braid in (a) is from Figure <ref>(d) with m=3, n=2, where we moved (t-2) full right-handed twists to the top right end. We take the closure of the braid and cancel the left-handed half twist on the top left with one of the right-handed half twists on the top right to reach the knot in (c). In (c)-(g) we implement simple isotopies, and finally reach, in (h), the pretzel knot P(2t-5, -3,2). This knot has genus t-1 (<cit.>, Corollary 2.7), and so is never unknotted as long as t>1. This pretzel knot is slice exactly when 2t-5+(-3)=0, that is, when t=4. The pretzel knot P(3,-3,2) is also known as 8_20. An interesting observation is that although P(2t-5, -3, 2) for t>2 is not a positive braid closure, it is a quasi-positive braid closure. The (m,n) loop curve with m-n=1, n>3 and t>4 is never slice. By Rudolph <cit.>, we have that for a braid closure β̂ with k_+≠ k_-, g_4(β̂) ≥ (|k_+ - k_-| - n + 1)/2, where β is a braid on n strands, and k_± is the number of positive and negative crossings in β. For quasi-positive knots equality holds, in which case the Seifert genus is also the same as the four-ball (slice) genus. Note that this formula can also be thought of as a generalization of the Seifert genus calculation we used for positive/negative braid closures, since for those braids |k_+ - k_-| is the number of crossings and n, the braid number, is exactly the number of Seifert circles. Thus, by Rudolph's inequality, the calculations above that rule out unknotted curves on the various genus one Seifert surfaces also show that there are no slice knots other than the unknotted ones found. Now for the loop curve c=(m,n) as in Figure <ref>(c), we have that k_+ = (t-2)n(n-1), k_- = (m-n)(m-n-1) + 3(m-n)n. Hence, when m-n = 1, we get that k_- = 3n. Notice also that for n ≥ 3, t ≥ 4, we have k_+ > k_-. Thus, for n > 3, t > 4 and m-n=1, we obtain that c=β̂ is never slice since g_4(β̂=c) ≥ ((t-2)n(n-1) - 3n - m +1)/2 = n((t-2)(n-1) - 4)/2 > 0. It can be manually checked that the (4,3) loop curve when t = 3 is not slice either.
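To make the displayed bound concrete, consider for instance n=4, m=5 and t=5 (so m-n=1); these values are chosen purely for illustration. Then k_+ = (t-2)n(n-1) = 3· 4· 3 = 36 and k_- = 3n = 12, so g_4(β̂=c) ≥ (36 - 12 - 5 + 1)/2 = 10 > 0, in agreement with n((t-2)(n-1)-4)/2 = 4(9-4)/2 = 10, and the corresponding (5,4) loop curve is not slice.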
§ WHITEHEAD DOUBLES In this section we provide the proof of Theorem <ref>. Let f:S^1× D^2→ S^3 denote a smooth embedding such that f(S^1×{0})=K. Set T=f(S^1× D^2). Up to isotopy, the collection of essential, simple closed, oriented curves in ∂ T is parameterized by {mμ+nλ | m, n∈ℤ and gcd(m,n)=1} where μ denotes a meridian in ∂ T and λ denotes a standard longitude in ∂ T coming from a Seifert surface. With this parameterization, the only curves that are null-homologous in T are ±μ and the only curves that are null-homologous in S^3∖int(T) are ±λ. Of course ±μ will bound embedded disks in T, but ±λ will not bound embedded disks in S^3∖int(T) as K is a non-trivial knot. In other words, the only compressing curves for ∂ T in S^3 are meridians. Suppose now that C is a smooth, simple closed curve in the interior of T, and there is a smoothly embedded 2-disk, say Δ, in S^3 such that ∂Δ=C. Since C lies in the interior of T, we may assume that Δ meets ∂ T transversely in a finite number of circles. Initially observe that if Δ∩∂ T=∅, then we can use Δ to isotope C in the interior of T so that the result of this isotopy is a curve in the interior of T that misses a meridional disk for T. Now suppose that Δ∩∂ T≠∅. We show, in this case too, that C can be isotoped to a curve that misses a meridional disk for T. To this end, let σ denote a simple closed curve in Δ∩∂ T such that σ is innermost in Δ. That is, σ bounds a sub-disk, Δ' say, in Δ and the interior of Δ' misses ∂ T. There are two cases, depending on whether or not σ is essential in ∂ T. If σ is essential in ∂ T, then, as has already been noted, σ must be a meridian. As such, Δ' will be a meridional disk in T and C misses Δ'. If σ is not essential in ∂ T, then σ bounds an embedded 2-disk, say D, in ∂ T. It is possible that Δ meets the interior of D, but we can still cut and paste Δ along a sub-disk of D to reduce the number of components in Δ∩∂ T. Repeating this process yields that if C is a smoothly embedded curve in the interior of T and C is unknotted in S^3, then C can be isotoped in the interior of T so as to miss a meridional disk for T. With all this in place, we return to discuss the Whitehead double of K. Suppose that F is a standard, genus 1 Seifert surface for a double of K. See Figure <ref>. The surface F can be viewed as an annulus A with a 1-handle attached to it. Here K is a core circle for A, and the 1-handle is attached to A as depicted in Figure <ref>. Observe that F can be constructed so that it lives in the interior of T. Now, the curve C that passes once over the 1-handle and zero times around A obviously misses a meridional disk for T, and it obviously is unknotted in S^3. On the other hand, if C is any other essential simple closed curve in the interior of F, then C must go around A some positive number of times. It is not difficult to see that, upon orienting C, it can be isotoped so that the strands of C going around A are coherently oriented. As such, C is homologous to some non-zero multiple of K in T. This, in turn, implies that C cannot be isotoped in T so as to miss some meridional disk for T. It follows that C cannot be an unknot in S^3. CH A. Casson and J. Harer, Some homology lens spaces which bound rational homology balls, Pacific Journal of Mathematics 96 (1981), no. 1, 23–36. CG A. Casson and C. McA. Gordon, On slice knots in dimension three, Proc. Sympos. Pure Math. XXXII, Amer. Math. Soc. (1978), 39–53. CD T. D. Cochran and C. W. Davis, Counterexamples to Kauffman's conjectures on slice knots, Adv. Math. 274 (2015), 263–284. Cr P.
R. Cromwell, Homogeneous links, J. London Math. Soc. (series 2) 39 (1989), 535–552. 1002465 ET J. B. Etnyre and B. Tosun, Homology spheres bounding acyclic smooth manifolds and symplectic fillings, Michigan Math. Journal (2022). Hirsch M. W. Hirsch, On imbedding differentiable manifolds in euclidean space, Ann. of Math. (2) 73 (1961), 566–571. 124915 Fickle H. C. Fickle, Knots, Z-homology 3-spheres and contractible 4-manifolds, Houston J. Math. 10 (1984), no. 4, 467–493. 774711 FintushelStern84 R. Fintushel and R. J. Stern, A μ-invariant one homology 3-sphere that bounds an orientable rational ball, Four-manifold theory (Durham, N.H., 1982), Contemp. Math., vol. 35, Amer. Math. Soc., Providence, RI, 1984, pp. 265–268. 780582 Kirby:problemlist R. Kirby, Problems in low dimensional manifold theory, Algebraic and geometric topology (Proc. Sympos. Pure Math., Stanford Univ., Stanford, Calif., 1976), Part 2, Proc. Sympos. Pure Math., XXXII, Amer. Math. Soc., Providence, R.I., 1978, pp. 273–312. 520548 KimLee D. Kim and J. Lee, Some invariants of pretzel links, Bull. Austral. Math. Soc. 75 (2007), 253–271. Manolescu:T C. Manolescu, Pin(2)-equivariant Seiberg-Witten Floer homology and the triangulation conjecture, J. Amer. Math. Soc. 29 (2016), no. 1, 147–176. 3402697 rudolph L. Rudolph, Quasipositivity as an obstruction to sliceness, Bulletin of the American Mathematical Society 29 (1993). Rohlin V. A. Rohlin, The embedding of non-orientable three-manifolds into five-dimensional Euclidean space, Dokl. Akad. Nauk SSSR 160 (1965), 549–551. 0184246 Stern R. Stern, Some Brieskorn spheres which bound contractible manifolds, Notices Amer. Math. Soc. 25 (1978). St A. Stoimenow, Positive knots, closed braids and the Jones polynomial, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (5) Vol. II (2003), 237–285. 2004964 tosun:survey B. Tosun, Stein domains in ℂ^2 with prescribed boundary, Adv. Geom. 22(1) (2022), 9–22. 4371941 Wall:embedding C. T. C. Wall, All 3-manifolds imbed in 5-space, Bull. Amer. Math. Soc. 71 (1965), 564–567. 175139 Zeeman E. C. Zeeman, Twisting spun knots, Trans. Amer. Math. Soc. 115 (1965), 471–495. 195085
http://arxiv.org/abs/2307.04442v1
20230710094930
Automatic diagnosis of knee osteoarthritis severity using Swin transformer
[ "Aymen Sekhri", "Marouane Tliba", "Mohamed Amine Kerkouri", "Yassine Nasser", "Aladine Chetouani", "Alessandro Bruno", "Rachid Jennane" ]
cs.CV
[ "cs.CV" ]
Knee osteoarthritis (KOA) is a widespread condition that can cause chronic pain and stiffness in the knee joint. Early detection and diagnosis are crucial for successful clinical intervention and management to prevent severe complications, such as loss of mobility. In this paper, we propose an automated approach that employs the Swin Transformer to predict the severity of KOA. Our model uses publicly available radiographic datasets with Kellgren and Lawrence scores to enable early detection and severity assessment. To improve the accuracy of our model, we employ a multi-prediction head architecture that utilizes multi-layer perceptron classifiers. Additionally, we introduce a novel training approach that reduces the data drift between multiple datasets to ensure the generalization ability of the model. The results of our experiments demonstrate the effectiveness and feasibility of our approach in predicting KOA severity accurately. Automatic diagnosis of knee osteoarthritis severity using Swin transformer Rachid Jennane ========================================================================== § INTRODUCTION Knee osteoarthritis (KOA) is a degenerative disease of the knee joint and the most common form of arthritis. It affects almost half of the population aged 65 years or older worldwide, causing pain, mobility limitation, and impaired quality of life. KOA is caused by a breakdown of knee articular cartilage and bone micro-architecture changes <cit.>. Joint space narrowing, osteophyte formation, and sclerosis are KOA's most visually relevant pathological features that can be visualized with radiographs. Although various imaging techniques such as magnetic resonance, computed tomography, and ultrasound have been introduced to diagnose osteoarthritis, radiography remains the most widely used method for initial diagnosis due to its accessibility, low cost, and widespread use. Kellgren and Lawrence (KL) classified KOA severity into five stages based on the radiographic features, from KL-G0 for healthy cases to KL-G4 for severe cases <cit.> (see Fig. <ref>). However, KOA changes gradually, so the assignment of a stage is often subjective and depends on the operator, which makes automatic KOA diagnosis a difficult task. In addition, the high similarity between the X-ray images increases the challenge of achieving an accurate diagnosis. Several deep learning-based methods have been proposed for medical imaging applications <cit.>, and many to diagnose KOA in recent years. In <cit.>, Antony et al. employed Convolutional Neural Networks (CNNs) to quantify the severity of KOA from radiographic images. Their method is based on two main steps: first, automatically locating the knee joints using a Fully Convolutional Network (FCN); then, classifying the knee joint images using a second CNN.
In addition, to improve the quantification of KOA, they combined the classification loss with the regression loss to consider the continuous aspect of the disease progression. Tiulpin et al. <cit.> presented a Siamese CNN for KL grade prediction. They used three models with different random seeds and combined their outputs with a softmax layer to obtain the final KL grade. Chen et al. <cit.> proposed an ordinal loss for fine-tuning various CNN models to classify KOA severity. They leveraged the ordinal nature of the knee KL grading system and penalized incorrect classifications more by increasing the distance between the real and predicted KL grades. Nasser et al. <cit.> proposed a Discriminative Regularized Auto-Encoder (DRAE) for early KOA prediction using X-ray images. The proposed model uses a discriminative penalty term and the traditional AE reconstruction cost function to enhance the separability of the features learned from different classes. The aim was to boost the recognition system's performance by minimizing the intra-class variance and maximizing the inter-class distance. Recently, transformers have shown promising results in various medical imaging tasks <cit.>. Wang et al. <cit.> proposed a novel data augmentation method for early detection of KOA using a Vision Transformer model. The method involves shuffling the position embedding of non-ROI patches and exchanging the ROI patches with other images. The authors also used a hybrid loss function that combines label smoothing and cross-entropy to improve the model's generalization capability and avoid over-fitting. Several important studies <cit.>, <cit.>, <cit.>, <cit.>, <cit.> used the two multi-center databases, the Osteoarthritis Initiative (OAI, <https://nda.nih.gov/oai/>) and the Multicenter Osteoarthritis Study (MOST, <https://most.ucsf.edu/>), without accounting for the data drift problem. The latter occurs when a machine learning model trained on one dataset loses performance when tested on another dataset; data drift thus causes poor generalization and performance degradation. In this work, we first investigate the use of the Swin transformer in predicting KOA severity from radiographic images. In particular, the Swin transformer is the core network that extracts high-level features and detects KOA-induced changes. Second, we introduce a multi-prediction classification head to address the high similarity problem between different KOA grades. In addition, to reduce the data drift problems between the data in the two databases, OAI and MOST, we tested several learning strategies to find the one providing the model with better generalization capabilities and balanced classification results. The remainder of the paper is organized as follows: the proposed method is described in Section <ref>. Next, the obtained experimental results are presented in Section <ref>. Finally, the conclusions and outlooks are given in Section <ref>. § PROPOSED METHOD The method proposed in this paper consists of two parts: 1) a Swin transformer as a feature extractor and 2) a multi-prediction head network as a classifier. The schematic illustration of our proposed network is presented in Figure <ref>. §.§ Swin Transformer The Swin Transformer <cit.> is a state-of-the-art model that has been specifically designed to address the challenges of applying transformer models in the visual domain.
While transformers have been widely successful in natural language processing, they have been less effective in computer vision due to the unique characteristics of visual data. The Swin Transformer proposes a novel architecture that leverages hierarchical feature maps and shift-based windows to improve the efficiency and performance of the model. With its innovative approach, the Swin Transformer has emerged as one of the most efficient and effective transformer models for visual applications. The model is divided into four stages, where the features are hierarchically extracted in each stage. The input image with dimensions H × W × 3 is divided into H/4×W/4 non-overlapping patches as tokens of size 4× 4 × 3 = 48. These tokens are then passed through the first stage, consisting of a linear embedding layer and two Swin Transformer blocks. The linear embedding layer projects the tokens into a higher-dimensional space denoted by C; after that, in the first Swin Transformer block, the multi-headed window self-attention mechanism (W-MSA) is employed. This mechanism computes self-attention only between patches within the same window, where each window contains M× M patches. The second Swin Transformer block utilizes shifted window multi-headed self-attention (SW-MSA), in which the partitioning windows are shifted by (⌊M/2⌋, ⌊M/2⌋) patches with respect to the standard partitioning windows used in the previous block. This approach aims to create more relationships between neighboring patches previously located in different windows and reduce the computational complexity of the global MSA module used in vision transformer. In the second stage, a patch merging layer is applied to group each 2× 2 neighboring patches into a single patch of length 4C, thus reducing the number of patches to H/8×W/8. These patches are then linearly projected to a dimension of size 2C and passed to two Swin Transformer blocks as in the first stage. This process is repeated in the third stage, using 18 Swin Transformer blocks to produce H/16×W/16 patches of length 4C. Finally, in the fourth stage, two Swin Transformer blocks are used to produce H/32×W/32 of length 8C. These consecutive stages jointly produced a hierarchical representation like those of typical convolutional networks. §.§ Multi-Prediction Head Network The main task of our designed model is to be able to predict the KOA severity grade. This presents a case of a multi-class classification task. Traditionally this is solved by using a single MLP classification head with 5 outputs activated by a softmax function. The complex nature of X-ray images imposes a high similarity between the images of adjacent KL Grades as shown in Figure <ref>. To address this issue, we decompose the task into multiple binary classification tasks. We use 5 MLP networks, each specializing in predicting one KL-Grade. This enhances the model's ability to extract and filter a rich representation for each class. Let f: X → Z be our feature extractor, where X and Z are the input and latent spaces, respectively. x represents the input image and y their corresponding one hot encoding label. The predictive label ŷ_i at the head classifier MLP_i is defined as: ŷ_i = MLP_i(f(x)) The final predictive label ŷ is computed then as follows: ŷ = argmax(⋃_i = 0 ^ 4 ŷ_i) where i ∈{0 … 4} represents the KL grades. 
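A minimal PyTorch sketch of this multi-prediction head is given below; the layer widths (1024 → 384 → 48 → 48 → 1) follow the description in the text, while the ReLU activations, the sigmoid used at inference, and all identifier names are illustrative assumptions rather than details taken from the original implementation.

import torch
import torch.nn as nn

class MultiPredictionHead(nn.Module):
    # One small binary MLP per KL grade, applied to the pooled Swin feature vector.
    def __init__(self, feat_dim=1024, n_grades=5):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Linear(feat_dim, 384), nn.ReLU(),
                nn.Linear(384, 48), nn.ReLU(),
                nn.Linear(48, 48), nn.ReLU(),
                nn.Linear(48, 1),
            )
            for _ in range(n_grades)
        ])

    def forward(self, z):
        # z: (batch, feat_dim); returns one score per grade, shape (batch, n_grades).
        return torch.cat([head(z) for head in self.heads], dim=1)

# Each head is trained as a binary classifier for its grade; at inference the
# predicted KL grade is torch.sigmoid(scores).argmax(dim=1).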
To sum up, our final model consists of a basic Swin-B encoder with C=128 and 2, 2, 18, 2 Swin Transformer blocks, followed by Normalisation and average pooling layers to produce a final representation vector of size 1024. This vector is then passed to 5 MLPs, one for each KL grade. Each MLP contains 3 linear layers of size 384, 48, 48, 1, respectively. The final layer of each MLP network has a single neuron to predict the occurrence probability of each grade. §.§ Data Drift Correction In this paper, we employ 2 of the most widely used datasets for KOA classification (i.e. MOST and OAI datasets). These datasets were collected over a substantial amount of time, from several medical centers, and were annotated by a multitude of medical practitioners. The inherent disparity of equipment, study subjects, radiography, and diagnostics methods between different medical centers caused a shift between the datasets as further discussed in Section <ref>. We represent our model using the formula h = g ∘ f, where f : X → Z and g : Z → Y, represent the feature extractor and the multi-classification head, respectively. X is the input image, Z is the latent feature space, and Y represents the label space. To address the issue of data drift between the MOST and OAI datasets, we need to align the latent representational spaces between Z_MOST and Z_OAI. This means that the feature extractor f needs to be able to perceive the data distributions from 𝒟_ℳ𝒪𝒮𝒯 and 𝒟_𝒪𝒜ℐ as belonging to the same distribution 𝒟. It models relevant mutual features while discarding any dataset-specific information that could be considered noisy. This could be represented using the following equation: 𝒟 = ( 𝒟_ℳ𝒪𝒮𝒯∪𝒟_𝒪𝒜ℐ ) ∖ ( 𝒩_MOST∪𝒩_OAI ) where 𝒩_MOST and 𝒩_OAI represent the noisy distribution of information specific to the MOST and OAI datasets, respectively. To achieve this result, we train the model h on the MOST dataset and then freeze the MLP layers g. We continue to train the feature extractor f on the OAI dataset. This way, we force the feature extractor f to align the representational space for both datasets. This proposed approach leverages the pre-trained source model effectively and adapts it to the target dataset by minimizing the shift between the data distributions in the latent representational space Z. The objective is to achieve this without compromising the prior knowledge of the pre-trained classifier. §.§ Implementation In order to train the model, we used the AdamW optimizer <cit.> with a learning rate of 3e-5, a weight decay of 0.05, an epsilon of 1e-8, and betas of (0.9, 0.999) to adjust the weights. We trained the model with a batch size of 32 images for 300 epochs. We implemented the code in PyTorch and used an NVIDIA RTX A4000 GPU with 16 GB of VRAM to speed up the training process. We also implemented various data augmentation techniques such as 15-degree rotation, translation, scaling, random horizontal flipping, and contrast adjustment with a factor of 0.3. These techniques have previously been used in similar studies to improve the performance of deep learning models on image classification tasks in order to address the problem of limited data and overfitting. § EXPERIMENTAL RESULTS To evaluate the efficacy of the proposed approach, we conducted five experiments, described in this section. §.§ Datasets In this study, we employed two widely used and publicly available datasets: MOST dataset: It contains 18,269 knee images that were segmented in the same manner as in <cit.>. 
We divided this dataset into three subsets, namely training, validation, and testing with a ratio of 6:1:3. Table <ref> provides a summary of the dataset's partitioning. We use this dataset to train and evaluate our model's performance on knee image classification. OAI dataset: It consists of 8260 already prepared knee images <cit.>. It is randomly divided into three subsets, namely training, validation, and testing with a ratio of 7:1:2. Table <ref> summarizes the partitioning of the OAI dataset. We use this dataset to validate and test our model's performance. §.§ Experimental Protocol During the development of our model, we tested multiple configurations and compared them. In the first experiment, we use a single classifier to predict all grades simultaneously. In the second experiment, we use the same settings but employed the Multi-prediction head architecture, which involves breaking down the multi-classification problem into sub-binary classifications. For experiments three and four, we explored the data drift between two datasets by training only one dataset per experiment. Finally, in the fifth experiment, we tackled the issue of data drift by transferring the knowledge from the trained classifier on the source dataset (MOST) and solely training the feature extractor of our model on the target dataset (OAI). §.§ Quantitative Evaluation The performances obtained for each considered configurations are presented in Table <ref>. In the first two experiments, we observed an improvement in the F1 score for our model when using the Multi-prediction head architecture in the second experiment. Specifically, the model yielded a 0.062 and 0.042 F1 score increase compared to the first experiment in the MOST and OAI test sets, respectively. We also notice an increase in accuracy on the MOST dataset. Moreover, as seen by the confusion matrices in Figure <ref>, the architecture proposed in experiment 2 was able to avoid the catastrophic failure of detecting the KL-G1 observed in experiment 1. The grad KL-G1 is notoriously challenging to detect even for trained doctors due to the high similarity with the KL-G0 and KL-G2. In fact, the model correctly predicted 54 images in KL-G1 in experiment 2, while 0 images were classified in experiment 1. These results highlight the impact of dividing the multi-classification problem into sub-binary classification problems as described in sections <ref>. The substantial drop of performance in experiment 3 on both datasets is mainly attributed to the lack of a sufficient quantity of data. Transformer-based models are known to require a lot of data for training <cit.>. This has led to the underfitting of our model as it was not able to extract meaningful representations from this dataset. On the other hand, we notice that the performance of the model on the MOST dataset is quite similar, this is due to the richness of the representations in this dataset. In experiment 4, the MOST dataset contains more samples that cover a broader range of KOA severity levels than the OAI dataset as shown in Table <ref>. Consequently, MOST provides a more diverse and representative training set for our model, leading to better performance in the MOST test set. However, we still see a greater decrease in performance on the OAI dataset compared to experiment 2 in terms of accuracy and F1 score. 
Experiment 5 showed a considerable enhancement in performances on the OAI dataset compared to all other experiments, achieving a 70.17% accuracy and 0.671 F1-score, as shown in Table <ref>, while maintaining a high accuracy on the MOST dataset. This particularly highlights the significance and effectiveness of our method to reduce the data drift and align the latent representations of both datasets as described in section <ref>. §.§ Latent Representation Ability The reduction of the data drift is an important task for our model as shown in the previous quantitative results. Figure <ref> depicts the distribution of latent features extracted for the samples of each dataset across the models produced through our previous experiments. We used the t-SNE algorithm <cit.> in order to reduce the dimensionality of the features. The data drift in the representation of the two datasets is clearly apparent for both experiments 1 and 2. Even though experiment 2 achieved better results, we still noticed the high disparity of performance between datasets. Due to the underfitting of the model in experiment 3, it was also unable to address the data drift. In experiment 4 the model was trained only on the MOST dataset. Because of the availability of data, we noticed a better general alignment for data distribution between datasets. But Figure <ref> shows that the shift on the scale of individual classes is still noticeable. In experiment 5, we noticed a very strong alignment for both datasets on the general and class-specific levels in Figures <ref> and <ref>, respectively. Our approach successfully aligned all the data points from both datasets, effectively mitigating the data drift problem. As a result, the learned representations were more relevant to the task, and the model's performance improved significantly. Figure <ref> illustrates the distribution of latent representations of each class for each of our previous experiments on the OAI test-set. It highlights the ability of the model to discriminate and separate the different classes of KL-Grade. In experiment 3 where the underfitting occurred, we can observe the inability of the model to separate the distributions of the different classes. In experiments 1,2 and 4, the models were able to clearly separate the distributions of KL-G3 and KL-G4. Separating the KL-G0, KL-G1, and KL-G2 grades was more challenging in the first experiment due to the significant similarity between them and the use of a single MLP classifier. Along with the ability to align the distributions of both datasets, we noticed in Experiment 5 a better separability between KL-G0, KL-G1, and KL-G2 which posed a challenge in other experiments. We observed a clear ability to discriminate between KL-G1 and KL-G2 especially, while KL-G0 and KL-G1 still pose some challenges because they represent the none existence and the very early stages of OA respectively. Overall, these results demonstrate the effectiveness of our method in handling data drifts and enhancing the model's ability to differentiate between grades of KOA. §.§ Qualitative Evaluation We use GradCAM as a tool for interpretability purposes. By visualizing the last layer's activations of the feature extractor, we chose a sample from each grade, where the true labels of samples from (a) to (e) are from KL-G0 to KL-G4, respectively, as shown in Figures <ref> and <ref>. 
In Figure <ref>, we observed that the model effectively identified areas like osteophytes, joint space narrowing, and sclerosis, which are essential factors for assessing the severity of KOA <cit.>. This points out that our model bases its classifications on the right regions of interest commonly used in clinical diagnosis and not on non-relevant features. Figure <ref> represents misclassified samples. As can be observed, the model still focuses on the relevant regions around the knee joint. For instance, the model predicts sample (a) as KL-G1, even though the true KL grade was zero. It focused on the area where a medial joint space narrowing was present, which is a possible feature of KL-G1. Similar misclassifications occurred for samples (b), (c), and (d), where the model either overestimated or underestimated the KL grade, indicating the challenge of distinguishing between grades due to their high similarity and also the fact that the KL grade suffers from subjectivity/ambiguity among experts <cit.>. In sample (e), we encountered an image that contained an unusual object (i.e. A screw) in the tibia, which could potentially distract the model from the areas of the image that are crucial for grading KOA. However, our model demonstrated robustness by still being able to focus on the region of interest. Furthermore, our model classified the image as a KL-G3 instead of KL-G4, which are close compared to other KL-Grades. This result highlights the ability of our model to prioritize task-specific important features in the image and not be affected by irrelevant and noisy distractors. §.§ State-of-the-art Comparison Table <ref> presents a comparison of the results obtained with state-of-the-art methods. We note that the methods used in these studies were trained differently. Specifically, some methods used the OAI training set exclusively, others used the MOST training set exclusively, and others used both bases. This diversity in learning can have an impact on the overall performance, and should therefore be carefully considered when interpreting the results. Antony et al. <cit.> and <cit.> achieved accuracies of 53.40% and 63.60%, respectively, and F1-scores of 0.43 and 0.59, respectively. Chen et al. <cit.> used ordinal loss with different deep learning architectures and achieved accuracies of 69.60%, 66.20%, and 65.50% with Vgg19, ResNet50, and ResNet101, respectively, but they did not report F1-score. Tiulpin et al. <cit.> used a Siamese network and reported an accuracy of 66.71%. Wang et al. <cit.> achieved an accuracy of 69.18%. Our proposed method, experiment 5, outperformed all other methods with an accuracy of 70.17% and an F1-score of 0.67. These results indicate the potential of our proposed method for improving the accuracy and reliability of knee osteoarthritis diagnosis, which could be valuable in clinical practice. § CONCLUSION In this paper, we proposed a new method to predict the severity of Knee OA from radiographic images using the Swin Transformer. Our results showed that this method achieved state-of-the-art performance on the OAI test set, significantly outperforming existing methods. We show that the Swin Transformer network is effective in extracting relevant knee OA information, which can be used to detect most of the symptoms of the disease. In addition, handling the data drift and using the multi-prediction head architecture significantly improves the accuracy of the model and helps reduce the similarity between features of nearby grades. 
Prospects for future work may involve other imaging modalities such as MRI, while exploring clinical and demographic data, to further improve the prediction of KOA severity. Funded by the TIC-ART project, Regional fund (Region Centre-Val de Loire)
http://arxiv.org/abs/2307.04779v1
20230710075009
Law of Large Numbers for Bayesian two-layer Neural Network trained with Variational Inference
[ "Arnaud Descours", "Tom Huix", "Arnaud Guillin", "Manon Michel", "Éric Moulines", "Boris Nectoux" ]
stat.ML
[ "stat.ML", "math.PR", "math.ST", "stat.TH" ]
Law of Large Numbers for Bayesian two-layer Neural Network trained with Variational Inference ========================================================================================= We provide a rigorous analysis of training by variational inference (VI) of Bayesian neural networks in the two-layer and infinite-width case. We consider a regression problem with a regularized evidence lower bound (ELBO) which is decomposed into the expected log-likelihood of the data and the Kullback-Leibler (KL) divergence between the a priori distribution and the variational posterior. With an appropriate weighting of the KL, we prove a law of large numbers for three different training schemes: (i) the idealized case with exact estimation of a multiple Gaussian integral from the reparametrization trick, (ii) a minibatch scheme using Monte Carlo sampling, commonly known as Bayes by Backprop, and (iii) a new and computationally cheaper algorithm which we introduce as Minimal VI. An important result is that all methods converge to the same mean-field limit. Finally, we illustrate our results numerically and discuss the need for the derivation of a central limit theorem. Bayesian neural networks, variational inference, mean-field, law of large numbers, infinite-width neural networks. § INTRODUCTION Deep Learning has led to a revolution in machine learning with impressive successes. However, some limitations of DL have been identified and, despite many attempts, our understanding of DL is still limited. A long-standing problem is the assessment of predictive uncertainty: DL tends to be overconfident in its predictions <cit.>, which is a problem in applications such as autonomous driving <cit.>, medical diagnosis <cit.>, or finance; cf. <cit.>. Therefore, on the one hand, analytical efforts are being made to thoroughly investigate the performance of DL; and on the other hand, many approaches have been proposed to alleviate its shortcomings. The Bayesian paradigm is an attractive way to tackle predictive uncertainty, as it provides a framework for training uncertainty-aware neural networks (NNs) (e.g. <cit.>). Thanks to a fully probabilistic approach, Bayesian Neural Networks (BNN) combine the impressive neural-network expressivity with the decision-theoretic approach of Bayesian inference, making them capable of providing predictive uncertainty; see <cit.>. However, Bayesian inference requires deriving the posterior distribution of the NN weights. This posterior distribution is typically not tractable. A classical approach is to sample the posterior distribution using Markov chain Monte Carlo methods (such as Hamiltonian Monte Carlo). There are however long-standing difficulties, such as the proper choice of the prior and fine-tuning of the sampler. Such difficulties often become prohibitive in large-dimensional cases, <cit.>. An alternative is to use variational inference, which has a long history <cit.>. Simpler methods that do not require exact computation of integrals over the variational posterior were then developed, e.g. first by <cit.> thanks to some approximation and then by <cit.> with the Bayes by Backprop approach. In the latter, the posterior distribution is approximated by a parametric distribution and a generalisation of the reparametrization trick used by <cit.> leads to an unbiased estimator of the gradient of the ELBO; see also <cit.>.
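As a minimal illustration of the Bayes-by-Backprop estimator just described, the following Python sketch computes a one-sample reparametrized ELBO estimate for a Gaussian mean-field variational family with a standard normal prior; the family, the prior, the single-sample estimate, and all identifier names (including the log_likelihood placeholder) are assumptions made for the example, not details of the schemes analyzed in this paper.

import torch

d = 10  # dimension of the weight vector w
mu = torch.zeros(d, requires_grad=True)           # variational means
rho = torch.full((d,), -3.0, requires_grad=True)  # pre-softplus scale parameters

def elbo_estimate(x, y, log_likelihood, kl_weight=1.0):
    # Reparametrization trick: w = mu + softplus(rho) * eps with eps ~ N(0, I),
    # giving an unbiased, differentiable one-sample estimate of the ELBO.
    sigma = torch.nn.functional.softplus(rho)
    w = mu + sigma * torch.randn(d)
    # Closed-form KL divergence between N(mu, diag(sigma^2)) and the N(0, I) prior.
    kl = 0.5 * torch.sum(sigma**2 + mu**2 - 1.0 - 2.0 * torch.log(sigma))
    return log_likelihood(w, x, y) - kl_weight * kl

# A training step maximizes this estimate, e.g.:
# loss = -elbo_estimate(x, y, log_lik); loss.backward(); optimizer.step()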
Despite the successful application of this approach, little is known about the overparameterized limit and about the weighting that must be assumed to obtain a nontrivial Bayesian posterior; see <cit.>. Recently, <cit.> outlined the importance of balancing, in the ELBO, the integrated log-likelihood term and the KL regularizer, in order to avoid both overfitting and dominance of the prior. However, a suitable limiting theory has yet to be established, as well as guarantees for the practical implementation of the stochastic gradient descent (SGD) used to estimate the parameters of the variational distribution. Motivated by the need for a solid theoretical framework, the asymptotic analysis of NNs has recently gained much interest. The main focus has been on the gradient descent algorithm and its variants <cit.>. In many of these works, a mean-field analysis is performed to characterize the limiting nonlinear evolution of the weights of a two-layer NN, allowing the derivation of a law of large numbers and a central limit theorem for the empirical distribution of neuron weights. A long-term goal of these works is to prove that these mean-field limits converge toward a global minimum. Despite some progress in this direction, this is still an open and highly challenging problem; cf. <cit.>. Nevertheless, this asymptotic analysis is also of interest in its own right, as we show here in the case of variational inference for Bayesian neural networks. Indeed, based on this asymptotic analysis, we develop a new and efficient variant of the stochastic gradient descent (SGD) algorithm for variational inference in BNN that computes only the information necessary to recover the limit behavior. Our goal, then, is to work at the intersection of analytical efforts, which provide theoretical guarantees and insight, and of practical methods for a workable variational inference procedure. By adapting the framework developed by <cit.>, we produce a rigorous asymptotic analysis of BNN trained in a variational setting for a regression task. From the analysis of the limit equation, we first find that a proper weighting of the Kullback-Leibler divergence term relative to the integrated loss leads to the right asymptotic balance between the two. Second, we prove the asymptotic equivalence of the idealized and Bayes-by-Backprop SGD schemes, as both preserve the same core contributions to the limit. Finally, we introduce a computationally more favourable scheme that retains only the terms contributing effectively to the asymptotic limit. This scheme is the genuinely mean-field algorithm, as it is built solely from non-interacting terms. More specifically, our contributions are the following: * We first focus on the idealized SGD algorithm, where the variational expectations of the derivative of the loss from the reparametrization trick of <cit.> are computed exactly. More precisely, we prove that, as the number of neurons N→ +∞, the sequence of trajectories of the scaled empirical distributions of the parameters satisfies a law of large numbers. This is the purpose of Theorem <ref>. The proof is completely new: it establishes the limit directly in the topology inherited from the Wasserstein distance, bypassing the highly technical Sobolev space arguments used in <cit.>. The idealized SGD requires the computation of some integrals, which in practice prevents a direct application of this algorithm. However, we can prove its convergence to an explicit nonlinear process.
These integrals are usually approximated by Monte Carlo sampling, leading to the Bayes-by-Backprop SGD; see <cit.>. * We show for the Bayes-by-Backprop SGD (see Theorem <ref>) that the sequence of trajectories of the scaled empirical distributions of the parameters satisfies the same law of large numbers as that in Theorem <ref>, which justifies this approximation procedure. Note that each step of the algorithm involves the simulation of O(N) Gaussian random variables, which can make the associated gradient evaluation prohibitively expensive. * A careful analysis of the structure of the limit equation (<ref>) allows us to develop a new algorithm, called Minimal-VI SGD, which at each step generates only two Gaussian random variables and for which we prove the same limiting behavior. The key idea here is to keep only those contributions which affect the asymptotic behavior, which can be understood as the mean-field approximation arising from the uncorrelated degrees of freedom. This is all the more interesting since we observe numerically that the number of neurons N required to reach this asymptotic limit is quite small, which makes this variant of immediate practical interest. * We numerically investigate the convergence of the three methods to the common limit behavior on a toy example. We observe that the mean-field regime is already reached for a small number of neurons (N=300). The differences between the methods are reflected in their variances. The paper is organized as follows: Section <ref> introduces variational inference in BNN, as well as the SGD schemes commonly considered, namely the idealized and Bayes-by-Backprop variants. Then, in Section <ref>, we establish our initial result, the LLN for the idealized SGD. In Section <ref> we prove the LLN for the Bayes-by-Backprop SGD and its variants, showing that both SGD schemes have the same limit behavior. Based on an analysis of the obtained limit equation, we present in Section <ref> the new Minimal-VI scheme. Finally, in Section <ref> we illustrate our findings using numerical experiments. The proofs of the mean-field limits, which are original and quite technically demanding, are gathered in the supplementary paper. Related works. Laws of large numbers (LLN) for mean-field interacting particle systems have attracted a lot of attention; see for example <cit.> and references therein. The use of mean-field particle systems to analyse two-layer neural networks with random initialization has been considered in <cit.>, which establish a LLN for the empirical measure of the weights at fixed times; in the present paper we consider trajectory convergence, i.e. the whole (time-indexed) empirical measure process converges in the Skorohod topology. This enables not only the use of the limiting PDE, for example to study the convergence of the weights towards the infimum of the loss function (see <cit.> for preliminary results), but it is also crucial to establish the central limit theorem; see for example <cit.>. <cit.> give conditions for global convergence of GD for the exact mean-square loss and of online stochastic gradient descent (SGD) with mini-batches increasing in size with the number of weights N. A LLN for the entire trajectory of the empirical measure is also given in <cit.> for a standard SGD. <cit.> establish the propagation of chaos for SGD with different step size schemes.
Compared to the existing literature dealing with the SGD empirical risk minimization in two-layer neural networks, <cit.> provide the first rigorous proof of the existence of the limit PDE, and in particular its uniqueness, in the LLN. We are interested here in deriving a LLN but for Variational Inference (VI) of two-layer Bayesian Neural Networks (BNN), where we consider a regularized version of the Evidence Lower Bound (ELBO). § VARIATIONAL INFERENCE IN BNN: NOTATIONS AND COMMON SGD SCHEMES §.§ Variational inference and Evidence Lower Bound Setting. Let 𝖷 and 𝖸 be subsets of 𝐑^n (n≥ 1) and 𝐑 respectively. For N≥1 and w=(w^1,…,w^N)∈(𝐑^d)^N, let f_w^N: 𝖷→𝐑 be the following two-layer neural network: for x∈𝖷, f_w^N(x):=1/N∑_i=1^Ns(w^i,x)∈𝐑, where s:𝐑^d×𝖷→𝐑 is the activation function. We work in a Bayesian setting, in which we seek a distribution of the latent variable w which represents the weights of the neural network. The standard problem in Bayesian inference over complex models is that the posterior distribution is hard to sample. To tackle this problem, we consider Variational Inference, in which we consider a family of distribution 𝒬^N={ q_θ^N, θ∈Ξ^N} (where Ξ is some parameter space) easy to sample. The objective is to find the best q_θ^N∈𝒬^N, the one closest in KL divergence (denoted 𝒟_ KL) to the exact posterior. Because we cannot compute the KL, we optimize the evidence lower bound (ELBO), which is equivalent to the KL up to an additive constant. Denoting by 𝔏: 𝐑×𝐑→𝐑_+ the negative log-likelihood (by an abuse of language, we call this quantity the loss), the ELBO (see <cit.>) is defined, for ∈Ξ^N, (x,y)∈𝖷×𝖸, by E_ lbo(θ,x,y) :=- ∫_(𝐑^d)^N𝔏(y,f_w^N(x))q_θ^N(w)w - 𝒟_ KL(q_^N|P_0^N), where P_0^N is some prior on the weights of the NN. The ELBO is decomposed into two terms: one corresponding to the Kullback-Leibler (KL) divergence between the variational density and the prior and the other to a marginal likelihood term. It was empirically found that the maximization of the ELBO function is prone to yield very poor inferences <cit.>. It is argued in <cit.> and <cit.> that optimizing the ELBO leads as N →∞ to the collapse of the variational posterior to the prior. <cit.> proposed to consider a regularized version of the ELBO, which consists in multiplying the KL term by a parameter which is scaled by the inverse of the number of neurons: E_ lbo^N(θ,x,y) :=- ∫_(𝐑^d)^N𝔏(y,f_w^N(x))q_θ^N(w)w -1/N𝒟_ KL(q_^N|P_0^N), A first objective of this paper is to show that the proposed regularization leads to a stable asymptotic behavior and the effect of both the integrated loss and Kullback-Leibler terms on the limiting behavior are balanced in the limit N →∞. The maximization of E_ lbo^N is carried out using SGD. The variational family 𝒬^N we consider is a Gaussian family of distributions. More precisely, we assume that for any =(θ^1,…,θ^N)∈Ξ^N, the variational distribution q_^N factorizes over the neurons: for all w=(w^1,…,w^N)∈(𝐑^d)^N, q_^N(w)=∏_i=1^Nq^1_θ^i(w^i), where θ=(m,ρ)∈Ξ:=𝐑^d×𝐑 and q^1_θ is the probability density function (pdf) of 𝒩(m,g(ρ)^2 I_d), with g(ρ)=log(1+e^ρ), ρ∈𝐑. In the following, we simply write 𝐑^d+1 for 𝐑^d×𝐑. In addition, following the reparameterisation trick of <cit.>, q^1_θ(w) w is the pushforward of a reference probability measure with density γ by Ψ_θ (see more precisely Assumption A1). In practice, γ is the pdf of 𝒩(0,I_d) and Ψ_θ(z)=m+g(ρ)z. 
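For concreteness, the following sketch instantiates these notations in NumPy: reparametrized sampling of the N neuron weights, the two-layer prediction f_w^N(x), and a Monte Carlo estimate of the negative regularized ELBO with the 1/N-weighted KL term; the Gaussian KL is the standard closed form spelled out in the next paragraph. The choice s(w,x)=tanh(⟨ w,x⟩), the sample sizes, and all function names are illustrative assumptions rather than part of the setting.

```python
import numpy as np

rng = np.random.default_rng(1)
g = lambda rho: np.log1p(np.exp(rho))                # g(rho) = log(1 + e^rho)

def s(w, x):
    """Activation s(w, x); tanh(<w, x>) is an illustrative choice."""
    return np.tanh(w @ x)

def f_N(w, x):
    """Two-layer network f_w^N(x) = (1/N) sum_i s(w^i, x); w has shape (N, d)."""
    return np.mean(s(w, x))

def sample_weights(m, rho):
    """Reparametrization Psi_theta(z) = m + g(rho) z, z ~ N(0, I_d), one z per neuron."""
    z = rng.standard_normal(m.shape)
    return m + g(rho)[:, None] * z

def kl_q_prior(m, rho, m0, sigma0):
    """D_KL(q_theta^N | P_0^N): sum over neurons of the Gaussian KL in closed form."""
    d = m.shape[1]
    var = g(rho) ** 2
    return np.sum(np.sum((m - m0) ** 2, axis=1) / (2 * sigma0 ** 2)
                  + 0.5 * d * (var / sigma0 ** 2 - 1.0)
                  + 0.5 * d * np.log(sigma0 ** 2 / var))

def neg_elbo(m, rho, x, y, m0=0.0, sigma0=0.2, n_mc=10):
    """Monte Carlo estimate of -E_lbo^N(theta, x, y), with the 1/N-weighted KL term."""
    N = m.shape[0]
    data_term = np.mean([0.5 * (y - f_N(sample_weights(m, rho), x)) ** 2
                         for _ in range(n_mc)])
    return data_term + kl_q_prior(m, rho, m0, sigma0) / N

N, d = 50, 5                                         # toy sizes
m, rho = 0.01 * rng.standard_normal((N, d)), -2.0 * np.ones(N)
x, y = rng.uniform(-1.0, 1.0, d), 0.3
print(neg_elbo(m, rho, x, y))
```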
With these notations, (<ref>) writes E_ lbo^N(θ,x,y) =- ∫_(𝐑^d)^N𝔏(y,1/N∑_i=1^Ns(Ψ_θ^i(z^i),x)) γ(z^1)…γ(z^N) z_1… z_N -1/N𝒟_ KL(q_^N|P_0^N). Loss function and prior distribution. In this work, we focus on the regression problem, i.e. 𝔏 is the Mean Square Loss: for y_1,y_2∈𝐑, 𝔏(y_1,y_2)=1/2|y_1-y_2|^2. We also introduce the function ϕ:(θ,z,x)∈𝐑^d+1×𝐑^d×𝖷↦ s(Ψ_θ(z),x). On the other hand, we assume that the prior distribution P_0^N write, for all w∈(𝐑^d)^N, P_0^N(w)=∏_i=1^NP_0^1(w^i), where P_0^1:𝐑^d→𝐑_+ is the pdf of 𝒩(m_0,σ^2_0I_d), and σ_0>0. Therefore 𝒟_ KL(q_^N|P_0^N)=∑_i=1^N𝒟_ KL(q_θ^i|P_0^1) and, for θ=(m,ρ)∈𝐑^d+1, 𝒟_ KL(q_θ^1|P_0^1)=∫_𝐑^d q^1_θ(x) log(q^1_θ(x)/P_0^1(x)) x=m-m_0_2^2/2σ_0^2+d/2(g(ρ)^2/σ_0^2-1)+d/2log(σ_0^2/g(ρ)^2). Note that 𝒟_ KL has at most a quadratic growth in m and ρ. Note that we assume here a Gaussian prior to get an explicit expression of the Kullback-Leibler divergence. Most arguments extend to sufficiently regular densities and are essentially the same for exponential families, using conjugate families for the variational approximation. §.§ Common SGD schemes in backpropagation in a variational setting Idealized SGD. Let (Ω, ℱ,𝐏) be a probability space. Consider a data set {(x_k,y_k)}_k≥ 0 i.i.d. w.r.t. π∈𝒫(𝖷×𝖸), the space of probability measures over 𝖷×𝖸. For N≥1 and given a learning rate η>0, the maximization of θ∈𝐑^d+1↦E_ lbo^N(θ,x,y) with a SGD algorithm writes as follows: for k≥ 0 and i∈{1,…,N}, θ_k+1=θ_k+ η∇_θE_ lbo^N(θ_k,x_k,y_k) θ_0 ∼μ_0^⊗ N, where μ_0∈𝒫(𝐑^d+1) and θ_k=(θ^1_k,…, θ^N_k). We now compute ∇_θE_ lbo^N(θ,x,y). First, under regularity assumptions on the function ϕ (which will be formulated later, see A1 and A3 below) and by assumption on 𝔏, we have for all i∈{1,…,N} and all (x,y)∈𝖷×𝖸, ∫_(𝐑^d)^N∇_θ^i𝔏(y,1/N∑_j=1^Nϕ(θ^j,z^j,x))γ(z^1)…γ(z^N) z^1… z^N = -1/N^2∑_j=1^N∫_(𝐑^d)^N(y-ϕ(θ^j,z^j,x))∇_θϕ(θ^i,z^i,x)γ(z^1)…γ(z^N) z^1… z^N =-1/N^2[∑_j=1,j≠ i^N(y-⟨ϕ(θ^j,·,x),γ⟩)⟨∇_θϕ(θ^i,·,x),γ⟩ + ⟨(y-ϕ(θ^i,·,x))∇_θϕ(θ^i,·,x),γ⟩], where we have used the notation ⟨ U,ν⟩=∫_𝐑^qU(z)ν( z) for any integrable function U:𝐑^q→𝐑 w.r.t. a measure ν (with a slight abuse of notation, we denote by γ the measure γ(z) z). Second, for θ∈𝐑^d+1, we have ∇_θ𝒟_ KL(q_θ^1|P_0^1)= [ ∇_m𝒟_ KL(q_θ^1|P_0^1); ∂_ρ𝒟_ KL(q_θ^1|P_0^1) ] = [ 1/σ_0^2(m-m_0); d/σ_0^2g'(ρ)g(ρ)-dg'(ρ)/g(ρ) ]. In conclusion, the SGD (<ref>) writes: for k≥ 0 and i∈{1,…,N}, θ_k+1^i=θ_k^i-η/N^2∑_j=1,j≠ i^N(⟨ϕ(θ_k^j,·,x_k),γ⟩-y_k)⟨∇_θϕ(θ_k^i,·,x_k),γ⟩ -η/N^2⟨(ϕ(θ_k^i,·,x_k)-y_k)∇_θϕ(θ_k^i,·,x_k),γ⟩-η/N∇_θ𝒟_ KL(q_θ^i_k^1|P_0^1) θ_0^i ∼μ_0. We shall call this algorithm idealised SGD because it contains an intractable term given by the integral w.r.t. γ. This has motivated the development of methods where this integral is replaced by an unbiased Monte Carlo estimator (see <cit.>) as detailed below. Bayes-by-Backprop SGD. The second SGD algorithm we study is based on an approximation, for i∈{1,…,N}, of ∫_(𝐑^d)^N(y-ϕ(θ^j,z^j,x))∇_θϕ(θ^i,z^i,x)γ(z^1)…γ(z^N) z^1… z^N (see (<ref>)) by 1/B∑_ℓ=1^B (y-ϕ(θ^j, 𝖹^j,ℓ,x) )∇_θϕ(θ^i,𝖹^i,ℓ,x) where B∈𝐍^* is a fixed integer and (𝖹^q,ℓ, q∈{i,j}, 1≤ℓ≤ B) is a i.i.d finite sequence of random variables distributed according to γ(z) z. 
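The two per-neuron ingredients entering all the SGD schemes below can be made explicit as follows: the closed-form gradient of the KL term displayed above, and a single-draw Monte Carlo evaluation of ϕ(θ,z,x)=s(Ψ_θ(z),x) together with its gradient in θ=(m,ρ). The sketch assumes the activation s(w,x)=tanh(⟨ w,x⟩); the function names are ours. The Bayes-by-Backprop update displayed next simply averages B such draws per neuron.

```python
import numpy as np

g = lambda rho: np.log1p(np.exp(rho))                # g(rho)
dg = lambda rho: 1.0 / (1.0 + np.exp(-rho))          # g'(rho)

def grad_kl(m, rho, m0, sigma0):
    """Closed-form gradient of D_KL(q_theta^1 | P_0^1) from the display above.
    m: (N, d), rho: (N,); returns (nabla_m, partial_rho)."""
    d = m.shape[1]
    grad_m = (m - m0) / sigma0 ** 2
    grad_rho = d * dg(rho) * g(rho) / sigma0 ** 2 - d * dg(rho) / g(rho)
    return grad_m, grad_rho

def phi_and_grad(m_i, rho_i, z, x):
    """One Monte Carlo draw of phi(theta, z, x) = s(Psi_theta(z), x) and its gradient
    in theta = (m, rho), for the assumed activation s(w, x) = tanh(<w, x>)."""
    w = m_i + g(rho_i) * z
    val = np.tanh(w @ x)
    dval_dw = (1.0 - val ** 2) * x
    return val, dval_dw, (dval_dw @ z) * dg(rho_i)   # (phi, d phi/dm, d phi/drho)
```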
In this case, for N≥ 1, given a dataset (x_k,y_k)_k≥0, the maximization of θ∈𝐑^d+1↦E_ lbo^N(θ,x,y) with a SGD algorithm is the following: for k≥ 0 and i∈{1,…,N}, θ_k+1^i=θ_k^i -η/N^2B∑_j=1^N∑_ℓ=1^B (ϕ(θ_k^j,𝖹^j,ℓ_k,x_k)-y_k )∇_θϕ(θ_k^i,𝖹^i,ℓ_k,x_k) -η/N∇_θ𝒟_ KL(q_θ^i_k^1|P_0^1) θ_0^i=(m_0^i,ρ_0^i)∼μ_0, where η>0 and (𝖹^j,ℓ_k, 1≤ j≤ N, 1≤ℓ≤ B, k≥ 0) is a i.i.d sequence of random variables distributed according to γ. § LAW OF LARGE NUMBERS FOR THE IDEALIZED SGD Assumptions and notations. When E is a metric space and ℐ= 𝐑_+ or ℐ=[0,T] (T≥ 0), we denote by 𝒟(ℐ,E) the Skorohod space of càdlàg functions on ℐ taking values in E and 𝒞(ℐ,E) the space of continuous functions on ℐ taking values in E. The evolution of the parameters ({θ_k^i, i=1,…,N})_k≥ 1 defined by (<ref>) is tracked through their empirical distribution ν_k^N (for k≥ 0) and its scaled version μ_t^N (for t∈𝐑_+), which are defined as follows: ν_k^N:=1/N∑_i=1^Nδ_θ_k^i and μ_t^N:=ν_⌊ Nt⌋^N, where the θ^i_k's are defined (<ref>). Fix T>0. For all N≥1, μ^N:={μ_t^N, t∈[0,T]} is a random element of 𝒟([0,T],𝒫(𝐑^d+1)), where 𝒫(𝐑^d+1) is endowed with the weak convergence topology. For N≥1 and k≥1, we introduce the following σ-algebras: ℱ_0^N=σ(θ_0^i, 1≤ i≤ N) and ℱ_k^N=σ(θ_0^i, (x_q,y_q),1≤ i≤ N, 0≤ q≤ k-1). Recall q_θ^1:𝐑^d→𝐑_+ be the pdf of 𝒩(m,g(ρ)^2I_d) (θ=(m,ρ)∈𝐑^d+1). In this work, we assume the following. A1. There exists a pdf γ:𝐑^d→𝐑_+ such that for all θ∈𝐑^d+1, q^1_θ x=Ψ_θ#γ x, where {Ψ_θ, θ∈𝐑^d+1} is a family of 𝒞^1-diffeomorphisms over 𝐑^d such that for all z∈𝐑^d, θ∈𝐑^d+1↦Ψ_θ(z) is of class 𝒞^∞. Finally, there exists 𝔟:𝐑^d→𝐑_+ such that for all multi-index α∈𝐍^d+1 with |α|≥ 1, there exists C_α>0, for all z∈𝐑^d and θ=(θ_1,…,θ_d+1)∈𝐑^d+1, | ∂_αΨ_θ(z)| ≤ C_α𝔟(z) with for all q≥ 1, ⟨𝔟^q, γ⟩ <+∞, where ∂_α= ∂_θ_1^α_1…∂_θ_d+1^α_d+1 and ∂_θ_j^α_j is the partial derivatives of order α_j w.r.t. to θ_j. A2. The sequence {(x_k,y_k)}_k≥ 0 is i.i.d. w.r.t. π∈𝒫(𝖷×𝖸). The set 𝖷×𝖸⊂𝐑^d×𝐑 is compact. For all k≥0, (x_k,y_k)ℱ_k^N, where ℱ_k^N is defined in (<ref>). A3. The activation function s:𝐑^d×𝖷→𝐑 belongs to 𝒞^∞_b(𝐑^d×𝖷) (the space of smooth functions over 𝐑^d×𝖷 whose derivatives of all order are bounded). A4. The initial parameters (θ_0^i)_i=1^N are i.i.d. w.r.t. μ_0∈𝒫(𝐑^d+1) which has compact support. Note that A1 is satisfied when γ is the pdf of 𝒩(0,I_d) and Ψ_θ(z)=m+g(ρ)z, with 𝔟(z)=1+|z|. With these assumptions, for every fixed T>0, the sequence ({θ_k^i, i=1,…,N})_k=0, …, ⌊ NT ⌋ defined by (<ref>) is a.s. bounded: Assume A1→A4. Then, there exists C>0 such that a.s. for all T>0, N≥ 1, i∈{1,…, N}, and 0≤ k≤⌊ NT⌋, |θ_k^i|≤ Ce^[ C(2+T)]T. Lemma <ref> implies that a.s. for all T>0 and N≥ 1, μ^N ∈𝒟([0,T],𝒫(Θ_T)), where Θ_T={θ∈𝐑^d+1, |θ|≤ Ce^[ C(2+T)]T}. Law of large numbers for (μ^N)_N≥1 defined in (<ref>). The first main result of this work is the following. Assume A1→A4. Let T>0. Then, the sequence (μ^N)_N≥1⊂𝒟([0,T],𝒫(Θ_T)) defined in (<ref>) converges in probability to the unique deterministic solution μ̅∈𝒞([0,T],𝒫(Θ_T)) to the following measure-valued evolution equation: ∀ f∈𝒞^∞(Θ_T) and ∀ t∈ [0,T], ⟨ f,μ̅_t⟩-⟨ f,μ_0⟩ =- η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ̅_s⊗γ⟩⟨∇_θ f·∇_θϕ( · ,·,x),μ̅_s⊗γ⟩π( x, y) s - η∫_0^t⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ̅_s⟩ s. The proof of Theorem <ref> is given in Appendix <ref>. We stress here the most important steps and used techniques. In a first step, we derive an identity satisfied by (μ^N)_N≥ 1, namely the pre-limit equation (<ref>); see Sec. <ref>. Then we show in Sec. 
<ref> that (μ^N)_N≥ 1 is relatively compact in 𝒟([0,T],𝒫(Θ_T)). To do so, we check that the sequence (μ^N)_N≥ 1 satisfies all the required assumptions of <cit.> when E= 𝒫(Θ_T) there. In Sec. <ref> we prove that every limit point of (μ^N)_N≥ 1 satisfies the limit equation (<ref>). Then, in Section <ref>, we prove that there is a unique solution of the measure-valued equation (<ref>). To prove the uniqueness of the solution of (<ref>), we use techniques developed in <cit.> which are based on a representation formula for solution to measure-valued equations <cit.> together with estimates in Wasserstein distances between two solutions of (<ref>) derived in <cit.>. In Section <ref>, we also conclude the proof of Theorem <ref>. Compared to <cit.>, the fact that ({θ_k^i, i=1,…,N})_k=0, …, ⌊ NT ⌋ defined by (<ref>) are a.s. bounded allows to use different and more straightforward arguments to prove (i) the relative compactness in 𝒟([0,T],𝒫(Θ_T)) of (μ^N)_N≥1 (defined in (<ref>)) (ii) the continuity property of the operator 𝗆↦Λ_t[f](𝗆) defined in (<ref>) w.r.t. the topology of 𝒟([0,T],𝒫(Θ_T)) and (iii) (μ^N)_N≥ 1 has limit points in 𝒞([0,T],𝒫(Θ_T)). Step (ii) is necessary in order to pass to the limit N→ +∞ in the pre-limit equation and Step (iii) is crucial since we prove that there is at most one solution of (<ref>) in 𝒞([0,T],𝒫(Θ_T)). It is worthwhile to emphasize that, as N →∞, the effects of the integrated loss and of the KL terms are balanced, as conjectured in <cit.>. To avoid further technicalities, we have chosen what may seem restrictive assumptions on the data or the activation function. Note however that it readily extends to unbounded set 𝖷, and also unbounded 𝖸 assuming that π as polynomial moments of sufficiently high order. Also, RELU (or more easily leaky RELU) may be considered by using weak derivatives (to consider the singularity at 0), and a priori moment bounds on the weights. § LLN FOR THE BAYES-BY-BACKPROP SGD The sequence {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ defined recursively by the algorithm (<ref>) is in general not bounded, since ∇_θϕ(θ ,𝖹, x) is not necessarily bounded if 𝖹∼γ(s) z. Therefore, we cannot expect Lemma <ref> to hold for {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ set by (<ref>). Thus, the sequence {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ is considered on the whole space 𝐑^d+1. Wasserstein spaces and results. For N≥1, and k≥ 1, we set ℱ_k^N=σ (θ_0^i , 𝖹^j,ℓ_q,(x_q,y_q), 1≤ i,j≤ N, 1≤ℓ≤ B, 0≤ q≤ k-1} ). In addition to A1→A4 (where in A2, when k≥ 1, ℱ_k^N is now the one defined in (<ref>)), we assume: A5. The sequences (𝖹^j,ℓ_k,1≤ j≤ N, 1≤ℓ≤ B, k≥ 0) and ((x_k,y_k), k≥ 0) are independent. In addition, for k≥ 0, ((x_k,y_k),𝖹^j,ℓ_k, 1≤ j≤ N, 1≤ℓ≤ B)ℱ_k^N. Note that the last statement of A5 implies the last statement of A2. We introduce the scaled empirical distribution of the parameters of the algorithm (<ref>), i.e. for k≥ 0 and t≥ 0: ν_k^N:=1/N∑_i=1^Nδ_θ_k^i and μ_t^N:=ν_⌊ Nt⌋^N, where the θ^i_k's are defined (<ref>). One can no longer rely on the existence of a compact subset Θ_T⊂𝐑^d+1 such that a.s. (μ^N)_N≥1⊂𝒟([0,T], 𝒫(Θ_T)), where μ^N={t≥ 0↦μ_t^N} is defined in (<ref>). For this reason, we will work in Wasserstein spaces 𝒫_q(𝐑^d+1), q≥ 0, which, we recall, are defined by 𝒫_q(𝐑^d+1)={ν∈𝒫(𝐑^d+1), ∫_𝐑^d+1 |θ|^q ν (θ)<+∞}. These spaces are endowed with the Wasserstein metric 𝖶_q, see e.g. <cit.> for more materials on Wasserstein spaces. For all q≥ 0, (μ^N)_N≥1⊂𝒟(𝐑_+,𝒫_q(𝐑^d+1)). The second main results of this work is a LLN for (μ^N)_N≥1 defined in (<ref>). Assume A1→A5. 
Let γ_0> 1+ d+1/2. Then, the sequence (μ^N)_N≥1 defined in (<ref>) converges in probability in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) to a deterministic element μ̅∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)), where μ̅∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) is the unique solution in 𝒞(𝐑_+,𝒫_1(𝐑^d+1)) to the following measure-valued evolution equation:∀ f∈𝒞^∞_b(𝐑^d+1) and ∀ t∈𝐑_+, ⟨ f,μ̅_t⟩-⟨ f,μ_0⟩ =- η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ̅_s⊗γ⟩⟨∇_θ f·∇_θϕ( · ,·,x),μ̅_s⊗γ⟩π( x, y) s - η∫_0^t⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ̅_s⟩ s. Theorem <ref> is proved in the appendix <ref>. Since {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ defined by (<ref>) is not bounded in general, we work in the space 𝒟(𝐑_+, 𝒫_γ_0(𝐑^d+1)). The proof of Theorem <ref> is more involved than that of Theorem <ref>, and generalizes the latter to the case where the parameters of the SGD algorithm are unbounded. We prove that (μ^N)_N≥1 (defined in (<ref>)) is relatively compact in 𝒟(𝐑_+, 𝒫_γ_0(𝐑^d+1)). To this end we now use <cit.>. The compact containment, which is the purpose of Lemma <ref>, is not straightforward since 𝒫_γ_0(𝐑^d+1) is not compact contrary to Theorem <ref> where we used the compactness of 𝒫(Θ_T). More precisely, the compact containment of (μ^N)_N≥ 1 relies on a characterization of the compact subsets of 𝒫_γ_0(𝐑^d+1) (see Proposition <ref>) and moment estimates on {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ (see Lemma <ref>). We also mention that contrary to what is done in the proof of Theorem <ref>, we do not show that every limit point of (μ^N)_N≥1 in 𝒟(𝐑_+, 𝒫_γ_0(𝐑^d+1)) is continuous in time but we still manage to prove that they all satisfy (<ref>). Then, using the duality formula for the 𝖶_1-distance together with rough estimates on the jumps of t↦⟨ f, μ_t^N⟩ (for f uniformly Lipschitz over 𝐑^d+1), we then show that every limit point of (μ^N)_N≥1 in 𝒟(𝐑_+, 𝒫_γ_0(𝐑^d+1)) belongs a.s. to 𝒞(𝐑_+, 𝒫_1(𝐑^d+1)). Again this is important since we have uniqueness of (<ref>) in 𝒞(𝐑_+, 𝒫_1(𝐑^d+1)). We conclude this section with the following important uniqueness result. Under the assumptions of Theorems <ref> and <ref>, the solution to (<ref>) is independent of T and is equal to the solution to (<ref>). This uniqueness result states that both idealized and Bayes-by-backprop SGD have the same limiting behavior. It is also noteworthy that the mini-batch B is held fixed B. The effect of batch size can be seen at the level of the central limit theorem, which we leave for future work. § THE MINIMAL-VI SGD ALGORITHM The idea behing the Bayes-by-Backprop SGD stems from the fact that there are integrals wrt γ in the loss function that cannot be computed in practice and it is quite natural up to a reparameterization trick, to replace these integrals by a Monte Carlo approximation (with i.i.d. gaussian random variables). To devise a new cheaper algorithm based on the only terms impacting the asymptotic limit, we directly analyse the limit equation (<ref>) and remark that it can be rewritten as, ∀ f∈𝒞^∞(Θ_T) and ∀ t∈ [0,T], ⟨ f,μ̅_t⟩-⟨ f,μ_0⟩ =- η∫_0^t∫_𝖷×𝖸× (𝐑^d)^2⟨ϕ(·,z_1,x)-y,μ̅_s⟩⟨∇_θ f·∇_θϕ( · ,z_2,x),μ̅_s⟩γ^⊗ 2( z_1 z_2)π( x, y) s - η∫_0^t⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ̅_s⟩ s. Thus, the integration over γ^⊗ 2 can be considered as that over π, i.e., we can consider them as two more data variables that only need to be sampled at each new step. 
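The following toy computation illustrates why a single independent pair (z_1,z_2) per step suffices: by independence, the product of the two γ-integrals in the drift is estimated without bias, whereas reusing one draw for both factors would add a spurious variance term. This is only a sketch, with the activation taken to be tanh and, for readability, the same scalar functional ⟨ϕ(·,z,x)-y,μ⟩ standing in for both factors (the argument is identical for the factor involving ∇_θ f·∇_θϕ); all names and numerical values are ours.

```python
import numpy as np

rng = np.random.default_rng(2)
g = lambda rho: np.log1p(np.exp(rho))

# a small neuron population (playing the role of mu_s) and one data point (x, y)
N, d = 20, 5
M, RHO = 0.3 * rng.standard_normal((N, d)), -1.0 * np.ones(N)
x, y = rng.uniform(-1.0, 1.0, d), 0.2

def A(z):
    """<phi(., z, x) - y, mu> with s(w, x) = tanh(<w, x>) and one shared draw z."""
    return np.mean(np.tanh((M + g(RHO)[:, None] * z) @ x)) - y

# brute-force value of <phi - y, mu x gamma>, hence of the product of two such integrals
ref = np.mean([A(z) for z in rng.standard_normal((50000, d))])
target = ref * ref

# one independent pair (z1, z2) per step, as in Minimal-VI: unbiased for the product ...
pairs = rng.standard_normal((50000, 2, d))
indep = np.mean([A(z1) * A(z2) for z1, z2 in pairs])

# ... whereas reusing a single draw for both factors is biased by the variance of A(z)
same = np.mean([A(z) ** 2 for z, _ in pairs])

print(target, indep, same)   # indep agrees with target up to MC error; same overshoots
```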
In this case, the SGD  (<ref>) becomes: for k≥ 0 and i∈{1,…,N}, θ_k+1^i=θ_k^i -η/N^2∑_j=1^N (ϕ(θ_k^j,𝖹^1_k,x_k)-y_k )∇_θϕ(θ_k^i,𝖹^2_k,x_k) -η/N∇_θ𝒟_ KL(q_θ^i_k^1|P_0^1) θ_0^i=(m_0^i,ρ_0^i)∼μ_0, where η>0 and (𝖹^p_k, p∈{1,2}, k≥ 0) is a i.i.d sequence of random variables distributed according to γ^⊗2. We call this backpropagation scheme minimal- VI SGD which is much cheaper in terms of computational complexity, with the same limiting behavior as we now discuss. We introduce the σ-algebra for N,k≥ 1: ℱ_k^N=σ (θ_0^i , 𝖹^p_q,(x_q,y_q), 1≤ i≤ N, p∈{1,2}, 0≤ q≤ k-1} ). In addition to A1→A4 (where in A2, ℱ_k^N is now the one defined above in (<ref>) when k≥ 1), the following assumption A6. The sequences (𝖹^p_k, p∈{1,2}, k≥ 0) and ((x_k,y_k), k≥ 0) are independent. In addition, for k≥ 0, ((x_k,y_k),𝖹^p_k, p∈{1,2})ℱ_k^N, where ℱ_k^N is defined in (<ref>). Set for k≥ 0 and t≥ 0, ν_k^N:=1/N∑_i=1^Nδ_θ_k^i and μ_t^N:=ν_⌊ Nt⌋^N, where the θ^i_k's are defined in (<ref>). The last main result of this work states that the sequence (μ^N)_N≥1 satisfies the same law of large numbers when N→ +∞ as the one satisfied by (<ref>), whose proof will be omitted as it is the same as the one made for Theorem <ref>. Assume A1→A4 and A6. Then, the sequence of (μ^N)_N≥1 satisfies all the statements of Theorem <ref>. § NUMERICAL EXPERIMENTS In this section we illustrate the theorems <ref>, <ref>, and <ref> using the following toy model. We set d=5. Given θ^*∈𝐑^d (drawn from a normal distribution and scaled to the unit norm), we draw i.i.d observations as follows: Given x∼𝒰([-1,1]^d), we draw y=tanh(x^⊤θ^*)+ϵ, where ϵ is zero mean with variance 10^-4. The initial distribution of parameters is centered around the prior: θ_0∼ (𝒩(m_0,0.01I_d)×𝒩(g^-1(σ_0),0.01))^⊗ N, with m_0=0 and σ_0=0.2. Since the idealized algorithm cannot be implemented exactly, a mini-batch of size 100 is used as a proxy for the following comparisons of the different algorithms. For the algorithm (<ref>) SGD we set B=1. Evolution and limit of the distribution Fig. <ref> displays the histograms of {F(θ_⌊ Nt⌋^i), i=1,…,N} (F(θ)=m_2, g(ρ) or m, where θ=(m,ρ)∈𝐑^d×𝐑), for N=10000, at initialization, halfway through training, and at the end of training. The empirical distributions illustrated by these histograms are very similar over the course of training. It can be seen that for N=10000 the limit of the mean field is reached. Convergence with respect to the numbers of neurons. We investigate here the speed of convergence of μ_t^N to μ̅_t (as N→+∞), when tested against test functions f. More precisely, we fix a time T (end of training) and Figure <ref> represents the empirical mean of ⟨ f, μ_T^N⟩ over 50 realizations. The test functions used for this experiment are f_m(θ) = ‖ m‖_2, f_Elbo(θ) = - Ê_lbo(θ)^N where Ê_lbo is the empirical E_lbo^N (see (<ref>)) computed with 100 samples of (x,y) and (z^1,…,z^N). Finally, f_pred(θ) = 𝔼̂_x[𝕍̂_w∼ q_θ^N[f_w^N(x)]^1/2] where 𝔼̂ and 𝕍̂ denote respectively the empirical mean and the empirical variance over 100 samples. All algorithms are converging to the same limit and are performing similarly even with a limited number of neurons (N=300 in this example). Convergence with respect to time. This section illustrates the training process of a BNN with a given number of neurons N = 10000. In Figure <ref>, we plot the negative ELBO on a test set and its two components, the loss and the KL-divergence terms. Figure <ref> shows that the BNN is able to learn on this specific task and all algorithms exhibit a similar performance. 
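For reference, the following sketch reproduces the toy experiment and one update step of each scheme in NumPy. We read the "mini-batch of size 100" proxy for the idealized algorithm as 100 Monte Carlo draws of z per step, drop the O(1/N^2) self-interaction term of the idealized update, and choose the learning rate and number of steps arbitrarily; these choices, the activation s(w,x)=tanh(⟨ w,x⟩), and all function names are our assumptions, not specifications from the text.

```python
import numpy as np

rng = np.random.default_rng(3)
g = lambda rho: np.log1p(np.exp(rho))
dg = lambda rho: 1.0 / (1.0 + np.exp(-rho))

d, sigma0, m0, eta = 5, 0.2, 0.0, 1.0
theta_star = rng.standard_normal(d)
theta_star /= np.linalg.norm(theta_star)

def sample_data(n):
    """x ~ U([-1,1]^d), y = tanh(<x, theta*>) + eps with Var(eps) = 1e-4."""
    x = rng.uniform(-1.0, 1.0, (n, d))
    return x, np.tanh(x @ theta_star) + 1e-2 * rng.standard_normal(n)

def phi_and_grads(m, rho, z, x):
    """phi(theta^i, z^i, x) and its gradients in (m, rho) for all neurons at once.
    m, z: (N, d); rho: (N,); x: (d,)."""
    u = np.tanh((m + g(rho)[:, None] * z) @ x)       # shape (N,)
    dm = (1.0 - u ** 2)[:, None] * x                 # d phi / d m, shape (N, d)
    drho = (1.0 - u ** 2) * (z @ x) * dg(rho)        # d phi / d rho, shape (N,)
    return u, dm, drho

def grad_kl(m, rho):
    return (m - m0) / sigma0 ** 2, d * dg(rho) * g(rho) / sigma0 ** 2 - d * dg(rho) / g(rho)

def step(m, rho, x, y, scheme, n_proxy=100):
    """One SGD step on a single data point (x, y); 'scheme' selects the noise layout."""
    N = m.shape[0]
    if scheme == "idealized":                        # proxy: average n_proxy draws of z
        zs = rng.standard_normal((n_proxy, N, d))
        u = np.mean([phi_and_grads(m, rho, z, x)[0] for z in zs], axis=0)
        dm = np.mean([phi_and_grads(m, rho, z, x)[1] for z in zs], axis=0)
        drho = np.mean([phi_and_grads(m, rho, z, x)[2] for z in zs], axis=0)
    elif scheme == "bbb":                            # Bayes-by-Backprop with B = 1
        z = rng.standard_normal((N, d))              # one draw per neuron
        u, dm, drho = phi_and_grads(m, rho, z, x)
    else:                                            # Minimal-VI: one shared pair (z1, z2)
        z1 = np.tile(rng.standard_normal(d), (N, 1))
        z2 = np.tile(rng.standard_normal(d), (N, 1))
        u = phi_and_grads(m, rho, z1, x)[0]          # residual factor uses z1
        _, dm, drho = phi_and_grads(m, rho, z2, x)   # gradient factor uses z2
    resid = np.mean(u) - y                           # (1/N) sum_j (phi_j - y_k)
    gkl_m, gkl_r = grad_kl(m, rho)
    return m - eta / N * (resid * dm + gkl_m), rho - eta / N * (resid * drho + gkl_r)

# initialization theta_0 ~ (N(m0, 0.01 I_d) x N(g^{-1}(sigma0), 0.01))^{x N}
N = 300
ginv = lambda sig: np.log(np.expm1(sig))             # inverse softplus
m = m0 + 0.1 * rng.standard_normal((N, d))
rho = ginv(sigma0) + 0.1 * rng.standard_normal(N)

X, Y = sample_data(2000)
for k in range(2000):
    m, rho = step(m, rho, X[k], Y[k], scheme="minimal")
```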
Figure <ref> also illustrates the trajectorial convergence of {μ_t^N, t∈[0,T]}_N≥1 to {μ̅_t, t∈[0,T]} as N→+∞. Behavior around the limit μ̅. In Figure <ref>, we plot boxplots of ⟨ f,μ_t^N⟩ over 50 realizations and N=10000, at different times of the training. The Minimal-VI scheme (which is computationally cheaper, as explained in <ref>) exhibits a larger variance than the other algorithms. § CONCLUSION By establishing the limit behavior of the idealized SGD for the variational inference of BNN with the weighting suggested by <cit.>, we have rigorously shown that the Bayes-by-Backprop scheme most commonly used in practice exhibits the same limit behavior. Furthermore, the analysis of the limit equation allowed us to validate the correct scaling of the KL divergence term with respect to the loss. Notably, the mean-field limit dynamics also helped us devise a far less costly new SGD algorithm, Minimal-VI. This scheme shares the same limit behavior but stems only from the non-vanishing asymptotic contributions, hence the reduced computational cost. Aside from confirming the analytical results, the first simulations presented here show that the three algorithms, while having the same limit, may differ in terms of variance. Thus, deriving a CLT result and discussing the right trade-off between computational complexity and variance will be addressed in future work. More generally, regarding uncertainty quantification, an interesting question is to analyse the impact of the correct scaling of the KL divergence term on error calibration, and how to apply the same analysis in the context of deep ensembles. A.D. is grateful for the support received from the Agence Nationale de la Recherche (ANR) of the French government through the program "Investissements d'Avenir" (16-IDEX-0001 CAP 20-25). A.G. is supported by the French ANR under the grant ANR-17-CE40-0030 (project EFI) and the Institut Universitaire de France. M.M. acknowledges the support of the French ANR under the grant ANR-20-CE46-0007 (SuSa project). B.N. is supported by the grant IA20Nectoux from the Projet I-SITE Clermont CAP 20-25. E.M. and T.H. acknowledge the support of ANR-CHIA-002, "Statistics, computation and Artificial Intelligence". Part of the work has been developed under the auspices of the Lagrange Center for Mathematics and Calculus. § PROOF OF THEOREM <REF> For simplicity, we prove Theorem <ref> when T=1, and we denote Θ_1 simply by Θ. In this section we assume A1–A4. §.§ Pre-limit equation (<ref>) and error terms in (<ref>) §.§.§ Derivation of the pre-limit equation The aim of this section is to establish the so-called pre-limit equation (<ref>), which will be our starting point to derive Equation (<ref>). Let N≥ 1, k∈{0,…,N}, and f∈𝒞^∞(Θ). Recall that by Lemma <ref> and since 0≤ k ≤ N, a.s. θ^i_k∈Θ, and thus a.s. f(θ^i_k) is well-defined. The Taylor-Lagrange formula yields ⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩ =1/N∑_i=1^Nf(θ_k+1^i)-f(θ_k^i) =1/N∑_i=1^N∇_θ f(θ_k^i)·(θ_k+1^i-θ_k^i) +1/2N∑_i=1^N(θ_k+1^i-θ_k^i)^T∇^2f(θ̂_k^i)(θ_k+1^i-θ_k^i), where, for all i∈{1,…, N}, θ̂_k^i∈ (θ_k^i,θ_k+1^i)⊂Θ. Using (<ref>), we then obtain ⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩ = -η/N^3∑_i=1^N∑_j=1,j≠ i^N (⟨ϕ(θ_k^j,·,x_k),γ⟩-y_k )⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x_k),γ⟩ -η/N^2⟨(ϕ(·,·,x_k)-y_k)∇_θ f·∇_θϕ(·,·,x_k),ν_k^N⊗γ⟩ -η/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),ν_k^N⟩ + 𝐑_k^N[f], where 𝐑_k^N[f]:=1/2N∑_i=1^N(θ_k+1^i-θ_k^i)^T∇^2f(θ̂_k^i)(θ_k+1^i-θ_k^i).
Let us define 𝐃_k^N[f] := 𝐄[-η/N^3∑_i=1^N∑_j=1,j≠ i^N (⟨ϕ(θ_k^j,·,x_k),γ⟩-y_k )⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x_k),γ⟩|ℱ_k^N] -𝐄[η/N^2⟨(ϕ(·,·,x_k)-y_k)∇_θ f·∇_θϕ(·,·,x_k),ν_k^N⊗γ⟩|ℱ_k^N]. Note that using (<ref>) and (<ref>) together with the fact that |∇_θ f(θ_k^i)|≤sup_θ∈Θ |∇_θ f(θ)|, the integrant in (<ref>) is integrable and thus 𝐃_k^N[f] is well defined. Using the fact that (x_k,y_k)ℱ_k^N by A2 and that {θ_k^i, i=1,…,N} is ℱ_k^N-measurable by (<ref>), we have: 𝐃_k^N[f] =-η/N^3∑_i=1^N∑_j=1,j≠ i^N∫_𝖷×𝖸 (⟨ϕ(θ_k^j,·,x),γ⟩-y )⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x),γ⟩π( x, y) -η/N^2∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y). Introduce also 𝐌_k^N[f] :=-η/N^3∑_i=1^N∑_j=1,j≠ i^N(⟨ϕ(θ_k^j,·,x_k),γ⟩-y_k)⟨∇_ θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x_k),γ⟩ -η/N^2⟨(ϕ(·,·,x_k)-y_k)∇_θ f·∇_θϕ(·,·,x_k),ν_k^N⊗γ⟩-𝐃_k^N[f]. Note that 𝐄 [𝐌_k^N[f]|ℱ_k^N]=0. Equation (<ref>) then writes ⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩ =𝐃_k^N[f]+ 𝐌_k^N[f] -η/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),ν_k^N⟩ +𝐑_k^N[f]. Notice also that 𝐃_k^N[f] =-η/N^3∑_i=1^N∑_j=1^N∫_𝖷×𝖸(⟨ϕ(θ_k^j,·,x),γ⟩-y)⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x),γ⟩π( x, y) +η/N^3∑_i=1^N∫_𝖷×𝖸(⟨ϕ(θ_k^i,·,x),γ⟩-y)⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x),γ⟩π( x, y) -η/N^2∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y) =-η/N∫_𝖷×𝖸⟨ϕ(·,·,x)-y,ν_k^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y) +η/N^2∫_𝖷×𝖸⟨(⟨ϕ(·,·,x),γ⟩-y)⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,ν_k^N⟩π( x, y) -η/N^2∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y). Now, we define for t∈ [0,1]: 𝐃_t^N[f]:=∑_k=0^⌊ Nt⌋-1𝐃_k^N[f], 𝐑_t^N[f]:=∑_k=0^⌊ Nt⌋-1𝐑_k^N[f], and 𝐌_t^N[f]:=∑_k=0^⌊ Nt⌋-1𝐌_k^N[f] . We can rewrite 𝐃_t^N[f] has follows: 𝐃_t^N[f]=∑_k=0^⌊ Nt⌋-1∫_k/N^k+1/NN 𝐃_⌊ Ns⌋^N[f] s=N∫_0^t 𝐃_⌊ Ns⌋^N[f] s-N∫_⌊ Nt⌋/N^t 𝐃_⌊ Ns⌋^N[f] s. Since ν_⌊ Ns⌋^N=μ_s^N (by definition, see (<ref>)), we have, using also (<ref>) with k=⌊ Ns⌋, 𝐃_t^N[f] =-η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s +η/N∫_0^t∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y) s -η/N∫_0^t∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s-𝐕_t^N[f], where 𝐕_t^N[f] :=-η∫^t_⌊ Nt⌋/N∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s +η/N∫^t_⌊ Nt⌋/N∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y) s -η/N∫^t_⌊ Nt⌋/N∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s. On the other hand, we also have for t∈ [0,1], ∑_k=0^⌊ Nt⌋-1-η/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),ν_k^N⟩ =-η∫_0^⌊ Nt⌋/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s. We finally set: 𝐖_t^N[f]:=- 𝐕_t^N[f] + η∫^t_⌊ Nt⌋/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s. Since ⟨ f,μ_t^N⟩-⟨ f,μ_0^N⟩=∑_k=0^⌊ Nt⌋-1⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩, we deduce from (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), the so-called pre-limit equation satisfied by μ^N: for N≥1, t∈ [0,1], and f∈𝒞^∞(Θ), ⟨ f,μ_t^N⟩-⟨ f,μ_0^N⟩ =-η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s -η∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s +η/N∫_0^t∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y) s -η/N∫_0^t∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s + 𝐌_t^N[f] +𝐖_t^N[f]+ 𝐑_t^N[f]. §.§.§ The last five terms in (<ref>) are error terms The purpose of this section is to show that the last five terms appearing in the r.h.s. of (<ref>) are error terms when N→+∞. For J∈𝐍^* and f∈𝒞^J(Θ), set ‖ f‖_𝒞^J(Θ):=∑_|k|≤ J‖∂_kf ‖_∞, Θ, where ‖ g‖_∞, Θ=sup_θ∈Θ|g(θ)| for g:Θ→𝐑^m. Assume A1→A4. Then, there exists C>0 such that a.s. for all f∈𝒞^∞(Θ) and N≥1, * η/N∫_0^1∫_𝖷×𝖸|⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩| π( x, y) s ≤ C‖ f‖_𝒞^1(Θ)/N. * η/N∫_0^1∫_𝖷×𝖸|⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩| π( x, y) s≤ C‖ f‖_𝒞^1(Θ)/N. 
* sup_t∈[0,1]|𝐖_t^N[f]|+ sup_t∈[0,1]|𝐑_t^N[f]| ≤ C‖ f‖_𝒞^2(Θ)/N. Finally, sup_t∈[0,1]𝐄[|𝐌_t^N[f]|]≤C‖ f‖_𝒞^1(Θ)/√(N). All along the proof, C>0 denotes a positive constant independent of N≥ 1,k∈{0,…,N-1},(s,t)∈ [0,1]^2,(x,y)∈𝖷×𝖸,θ∈Θ,z∈𝐑^d, and f∈𝒞^∞(Θ) which can change from one occurrence to another. Using (<ref>), the Cauchy-Schwarz inequality, and the fact that ∇_θ f is bounded over Θ imply: |⟨∇_θ f(θ)·∇_θϕ(θ,·,x),γ⟩|≤⟨|∇_θ f(θ)·∇_θϕ(θ,·,x)|,γ⟩≤ C‖ f‖_𝒞^1(Θ). Combining (<ref>) and (<ref>), we obtain: ∫_0^1∫_𝖷×𝖸|⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩| π( x, y) s≤ C‖ f‖_𝒞^1(Θ) and ∫_0^1∫_𝖷×𝖸|⟨(ϕ(·,·,x)-y)∇_mf·∇_mϕ(·,·,x),μ_s^N⊗γ⟩| π( x, y) s≤ C‖ f‖_𝒞^1(Θ), which proves Items <ref> and <ref>. Let us now prove Item <ref>. By (<ref>) and (<ref>), sup_t∈[0,1]|𝐕_t^N[f]|≤ C‖ f‖_𝒞^1(Θ)/N. On the other hand, because f∈𝒞^∞(Θ) and θ↦∇_θ𝒟_ KL(q_θ^1|P_0^1) is continuous (see (<ref>)) over Θ which is compact, it holds, ‖∇_θ f·∇_θ𝒟_ KL(q_θ^1|P_0^1)‖_∞,Θ<+∞. Hence, it holds: sup_t∈[0,1]|∫^t_⌊ Nt⌋/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s|≤ C‖ f‖_𝒞^1(Θ)/N. Using (<ref>), it then holds sup_t∈[0,1]|𝐖_t^N[f]| ≤ C‖ f‖_𝒞^1(Θ)/N. Since f∈𝒞^∞(Θ), we have, by (<ref>), for N≥ 1 and 0≤ k≤ N-1, |𝐑_k^N[f]|≤‖ f‖_𝒞^2(Θ)C/N∑_i=1^N|θ_k+1^i-θ_k^i|^2. By (<ref>) and Lemma <ref>, |θ_k+1^i-θ_k^i|^2≤ C/N^2 and consequently, one has: |𝐑_k^N[f]|≤C‖ f‖_𝒞^2(Θ)/N^2. Hence, for all t∈[0,1], |𝐑_t^N[f]|≤C‖ f‖_𝒞^2(Θ)/N. This proves Item <ref>. Let us now prove the last item in Lemma <ref>. Let t∈[0,1]. We have, by (<ref>), |𝐌_t^N[f]|^2=∑_k=0^⌊ Nt⌋-1 |𝐌_k^N[f] |^2+2∑_k<j𝐌_k^N[f] 𝐌_j^N[f] . For all 0≤ k<j<⌊ Nt⌋, 𝐌_k^N[f] is ℱ_j^N-measurable (see (<ref>)), and since 𝐄 [𝐌_j^N[f]|ℱ_j^N]=0, one deduces that 𝐄 [ 𝐌_k^N[f] 𝐌_j^N[f] ]=𝐄 [𝐌_k^N[f] 𝐄 [𝐌_j^N[f]|ℱ_j^N] ]=0. Hence, 𝐄[|𝐌_t^N[f]|^2]=∑_k=0^⌊ Nt⌋-1𝐄[|𝐌_k^N[f]|^2]. By (<ref>) and (<ref>), one has a.s. for all 0≤ k≤ N-1, |𝐌_k^N[f]|≤ C‖ f‖_𝒞^1(Θ)/N. Hence, 𝐄[|𝐌_t^N[f]|^2]≤ C‖ f‖_𝒞^1(Θ)/N, which proves the last inequality in Lemma <ref>. §.§ Convergence to the limit equation as N→+∞ In this section we prove the relative compactness of (μ^N)_N≥ 1 in 𝒟([0,1],𝒫(Θ)). We then show that any of its limit points satisfies the limit equation (<ref>). §.§.§ Wasserstein spaces and duality formula In this section we recall some basic results which will be used throughout this work on the space 𝒫(𝒮) when (𝒮, 𝖽) is a Polish space. First when endowed with the weak convergence topology, 𝒫(𝒮) is a Polish space <cit.>. In addition, 𝒫_q(𝒮)= {ν∈𝒫(𝒮), ∫_𝒮𝖽(w_0,w)^q ν ( w)<+∞}, where w_0∈𝒮 is arbitrary (note that this space was defined previously in (<ref>) when 𝒮=𝐑^d+1) when endowed with the 𝖶_q metric is also a Polish space <cit.>. Recall also the duality formula for the 𝖶_1-distance on 𝒫_1(𝒮) (see e.g <cit.>): 𝖶_1(μ,ν)=sup{|∫_𝒮f(w)μ(w)-∫_𝒮f(w)ν( w)|, f_Lip≤ 1}. Finally, when 𝒦⊂𝐑^d+1 is compact, the convergence in 𝖶_q-distance is equivalent to the usual weak convergence on 𝒫(𝒦) (see e.g. <cit.>). §.§.§ Relative compactness The main result of this section is to prove that (μ^N)_N≥ 1 is relatively compact in 𝒟([0,1],𝒫(Θ)), which is the purpose of Proposition <ref> below. To this end, we need to prove that for all f∈𝒞^∞(Θ), every sequence (⟨ f,μ_t^N⟩)_N≥ 1 satisfies some regularity conditions, which is the purpose of the next result. Assume A1→A4. Then there exists C>0 such that a.s. for all f∈𝒞^∞(Θ), 0≤ r<t≤ 1, and N≥1: |⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩|≤ C‖ f‖_𝒞^2(Θ)[|t-r|+|t-r|/N+1/N]. Let f∈𝒞^∞(Θ) and let N≥1 and 0≤ r<t≤ 1. 
In the following C>0 is a positive constant independent of f∈𝒞^∞(Θ), N≥1, and 0≤ r<t≤ 1, which can change from one occurrence to another. From (<ref>), we have ⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩ =𝐀_r,t^N[f] - η∫_r^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s +𝐌_t^N[f]-𝐌_r^N[f] +𝐖_t^N[f]-𝐖_r^N[f]+𝐑_t^N[f]-𝐑_r^N[f], where 𝐀_r,t^N[f] =-η∫_r^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) +η/N∫_r^t∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y) -η/N∫_r^t∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y). By (<ref>) and (<ref>), |𝐀_r,t^N[f]| ≤ C‖ f‖_𝒞^1(Θ)[|t-r|+|t-r|/N]. In addition, since θ↦𝒟_ KL(q_θ^1|P_0^1) is bounded over Θ (since it is smooth and Θ is compact), | ∫_r^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s |≤ C‖ f‖_𝒞^1(Θ)|t-r|. Furthermore, using (<ref>), |𝐌_t^N[f]-𝐌_r^N[f]|=|∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐌_k^N[f]|≤ (⌊ Nt⌋-⌊ Nr⌋) C‖ f‖_𝒞^1(Θ)/N. Next, we have, by Item <ref> in Lemma <ref>, |𝐖_t^N[f]-𝐖_r^N[f]|≤|𝐖_t^N[f]|+|𝐖_r^N[f]|≤C‖ f‖_𝒞^2(Θ)/N. Finally, by (<ref>), |𝐑_t^N[f]-𝐑_r^N[f]|=|∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐑_k^N[f]|≤ (⌊ Nt⌋-⌊ Nr⌋) C‖ f‖_𝒞^2(Θ)/N^2. The proof of Proposition <ref> is complete plugging all the previous estimates in (<ref>). Assume A1→A4. Then, the sequence (μ^N)_N≥ 1 is relatively compact in 𝒟([0,1],𝒫(Θ)). The proof consists in applying <cit.> with E=𝒫(Θ) endowed with the weak convergence topology. Set 𝔽={𝔏_f, f∈𝒞^∞(Θ)} where 𝖫_f: ν∈𝒫(Θ)↦⟨ f, ν⟩. The class of continuous functions 𝔽 on 𝒫(Θ) satisfies Conditions <cit.>. On the other hand, the condition <cit.> is satisfied since 𝒫(Θ) is compact because Θ is compact (see e.g. <cit.> together with <cit.>). It remains to verify Condition (3.4) of <cit.>, i.e. that for all f∈𝒞^∞(Θ), (⟨ f,μ^N⟩)_N≥1 is relatively compact in 𝒟([0,1],𝐑). To this end, we apply <cit.>. Condition (i) in <cit.> is satisfied because |⟨ f,μ^N_t⟩|≤‖ f‖_∞,Θ for all t∈ [0,1] and N≥ 1. Let us now show that Condition (ii) in <cit.> holds. For this purpose, we use Lemma <ref>. For δ,β>0 sufficiently small, it is possible to construct a subdivision { t_i}_i=0^v of [0,1] such that t_0 =0, t_v=1, t_i+1-t_i = δ+β for i∈{0,…,v-2} and δ+β≤ t_v -t_v-1≤ 2(δ+β). According to the terminology introduced in <cit.>, { t_i}_i=0^v is δ-sparse. Then, by Lemma <ref>, there exists C>0 such that a.s. for all δ,β>0, all such subdivision { t_i}_i=0^v, i∈{0,…,v-1}, and N≥ 1, sup_t,r∈[t_i ,t_i+1 ] |⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩|≤ C(|t_i+1 -t_i |+|t_i+1 -t_i |/N+1/N)≤ C(2(δ+β)+2(δ+β)/N+1/N). Thus, one has: inf_β>0max_isup_t,r∈[t_i ,t_i+1 ]|⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩|≤ C(2δ+2δ/N+1/N). Consequently, there exists C>0 such that a.s. for all δ>0 small enough and N≥ 1, w'_⟨ f,μ^N⟩(δ):=inf_{t_i} δ-sparsemax_isup_t,r∈[t_i,t_i+1]|⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩|≤ C(2δ+2δ/N+1/N). This implies lim_δ→0lim sup_N→+∞𝐄[w'_⟨ f,μ^N⟩(δ)]=0. By Markov's inequality, this proves Condition (ii) of <cit.>. Therefore, for all f∈𝒞^∞(Θ), using also Prokhorov theorem, the sequence (⟨ f,μ^N⟩)_N≥1⊂𝒟([0,1],𝐑) is relatively compact. In conclusion, according to <cit.>, (μ^N)_N≥ 1⊂𝒟([0,1],𝒫(Θ)) is tight. §.§.§ Limit points satisfy the limit equation (<ref>) In this section we prove that every limit point of (μ^N)_N≥ 1 in 𝒟([0,1],𝒫(Θ)) satisfies (<ref>). Let 𝗆,(𝗆^N)_N≥ 1⊂𝒟([0,1],𝒫(Θ)) be such that 𝗆^N→𝗆 in 𝒟([0,1],𝒫(Θ)). Then, for all Lipschitz continuous function f:Θ→𝐑, we have ⟨ f,𝗆^N⟩→⟨ f,𝗆⟩ in 𝒟([0,1],𝐑). Let f be such a function. By <cit.>, 𝗆^N→𝗆 in 𝒟([0,1],𝒫(Θ)) iff there exist functions λ_N: [0,1]→ [0,1] continuous, increasing onto itself such that sup_t∈[0,1]|λ_N(t)-t|→_N→∞ 0 and sup_t∈ [0,1]𝖶_1(𝗆_λ_N(t)^N,𝗆_t)→_N→∞0. 
Then ⟨ f,𝗆^N⟩→⟨ f,𝗆⟩ in 𝒟([0,1],𝐑) since by (<ref>), sup_t∈ [0,1]|⟨ f,𝗆_λ_N(t)^N⟩-⟨ f,𝗆_t⟩| ≤f_Lipsup_t∈ [0,1]𝖶_1(𝗆_λ_N(t)^N,𝗆_t)→_N→∞0. Let f∈𝒞^∞(Θ). Then, any limit point of (⟨ f,μ^N⟩)_N≥1⊂𝒟([0,1],𝐑) belong a.s. to 𝒞([0,1],𝐑). Fix t∈ (0,1]. Letting r→ t in (<ref>), we obtain |⟨ f,μ_t^N⟩-⟨ f,μ_t^-^N⟩|≤ C/N. Therefore sup_t∈(0,1]|⟨ f,μ_t^N⟩-⟨ f,μ_t^-^N⟩| 0 as N→+∞. The result follows from <cit.>. Let μ^*∈𝒟([0,1], 𝒫(Θ)) be a limit point of (μ^N)_N≥1⊂𝒟([0,1], 𝒫(Θ)). Then, a.s. μ^*∈𝒞([0,1], 𝒫(Θ)). Up to extracting a subsequence, we assume that μ^Nμ^*. By Skorohod representation theorem, there exists another probability space (Ω̂, ℱ̂,𝐏̂) on which are defined random elements (μ̂^N)_N≥1 and μ̂^*, where, μ̂^*𝒟=μ^*, and for all N≥1, μ̂^N𝒟=μ^N, and such that 𝐏̂-a.s., μ̂^N→μ̂^* in 𝒟([0,1], 𝒫(Θ)) as N→ +∞. Fix f∈𝒞^∞(Θ). We have, by Lemma <ref>, 𝐏̂-a.s., ⟨ f,μ̂^N⟩→_N→+∞⟨ f,μ̂^*⟩ in 𝒟([0,1],𝐑). In particular, ⟨ f,μ̂^N⟩→_N→+∞⟨ f,μ̂^*⟩ in distribution. By Proposition <ref>, there exists Ω̂_f ⊂Ω̂ of 𝐏̂-mass 1 such that for all ω∈Ω̂_f, ⟨ f,μ̂^*(ω)⟩∈𝒞([0,1],𝐑). Denote by ℱ the class polynomial functions with rational coefficients. Since this class is countable, the set Ω̂_ℱ:=∩_f∈ℱΩ̂_f is of 𝐏̂-mass 1. Consider now an arbitrary f∈𝒞(Θ) and let us show that for all ω∈Ω̂_ℱ, ⟨ f,μ̂^*(ω)⟩∈𝒞([0,1],𝐑). By the Stone-Weierstrass theorem, there exist (f_n)_n≥1⊂ℱ such that f_n-f_∞,Θ→_n→+∞0. On Ω̂_ℱ, for all n, t∈ [0,1]↦⟨ f_n,μ̂_t^*⟩ is continuous and converges uniformly to t∈ [0,1]↦⟨ f,μ̂_t^*⟩. Hence, for all ω∈Ω̂_ℱ and f∈𝒞 (Θ), ⟨ f,μ̂^*(ω)⟩∈𝒞([0,1],𝐑), i.e. for all ω∈Ω̂_ℱ, μ̂^*(ω)∈𝒞([0,1],𝒫(Θ)). This concludes the proof. Now, we introduce, for t∈[0,1] and f∈𝒞^∞(Θ), the function Λ_t[f]:𝒟([0,1],𝒫(Θ))→𝐑_+ defined by: Λ_t[f]:𝗆↦ |⟨ f,𝗆_t⟩-⟨ f,μ_0⟩ +η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,𝗆_s⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),𝗆_s⊗γ⟩π( x, y) s + η∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s⟩ s |. We now study the continuity of Λ_t[f]. Let (𝗆^N)_N≥ 1⊂𝒟([0,1],𝒫(Θ)) converge to 𝗆∈𝒟([0,1],𝒫(Θ)). Then, for all continuity point t∈[0,1] of 𝗆 and all f∈𝒞^∞(Θ), we have Λ_t[f](𝗆^N)→Λ_t[f](𝗆). Let f∈𝒞^∞(Θ) and denote by 𝒞(𝗆)⊂[0,1] the set of continuity points of 𝗆. Let t∈𝒞(𝗆). From <cit.>, we have, for all s∈𝒞(𝗆), 𝗆^N_s→𝗆_s in 𝒫(Θ). Thus, ⟨ f,𝗆_t^N⟩→_N→∞⟨ f,𝗆_t⟩. For all z∈𝐑^d and (x,y)∈𝖷×𝖸, A1 and A3 ensure that the functions θ∈Θ↦ϕ(θ ,z,x)-y and θ∈Θ↦∇_θ f(θ)·∇_θϕ(θ,z,x) are continuous and also bounded because Θ is compact. Hence, for all s∈ [0,t]∩𝒞(𝗆), using (<ref>), ⟨ϕ(·,z,x)-y,𝗆_s^N⟩→⟨ϕ(·,z,x)-y,𝗆_s⟩ and ⟨∇_θ f·∇_θϕ(·,z,x),𝗆_s^N⟩→⟨∇_θ f·∇_θϕ(·,z,x),𝗆_s⟩ Since [0,1]\𝒞(𝗆) is at most countable (see <cit.>) we have that for a.e. (s,z',z,x,y)∈ [0,t]×𝐑^d×𝐑^d×𝖷×𝖸, ⟨ϕ(·,z',x)-y,𝗆_s^N⟩⟨∇_θ f·∇_θϕ(·,z,x),𝗆_s^N⟩→⟨ϕ(·,z',x)-y,𝗆_s⟩⟨∇_θ f·∇_θϕ(·,z,x),𝗆_s⟩. Since ϕ(θ,z',x)-y is bounded and by (<ref>), there exists C>0 such that for all (s,z',z,x,y)∈ [0,t]×𝐑^d×𝐑^d×𝖷×𝖸, ⟨ |ϕ(·,z',x)-y|,𝗆_s^N⟩⟨|∇_θ f·∇_θϕ(·,z,x)|,𝗆_s^N⟩≤ C‖∇ _θ f‖_∞,Θ𝔟(z). By the dominated convergence theorem, we then have: ∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,𝗆_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),𝗆_s^N⊗γ⟩π( x, y) s N→+∞⟶∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,𝗆_s⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),𝗆_s⊗γ⟩π( x, y) s. With the same arguments as above, one shows that ∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s^N ⟩ s →∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s ⟩ s. The proof of the lemma is complete. Let μ^*∈𝒟([0,1],𝒫(Θ)) be a limit point of (μ^N)_N≥1⊂𝒟([0,1],𝒫(Θ)). Then, a.s. μ^* satisfies (<ref>). Up to extracting a subsequence, we can assume that μ^Nμ^* as N→ +∞. Let f∈𝒞^∞(Θ). The pre-limit equation (<ref>) and Lemma <ref> imply that a.s. 
for all N≥ 1 and t∈[0,1], Λ_t[f](μ^N)≤ C/N+ 𝐌_t^N[f]. Hence, using the last statement in Lemma <ref>, it holds for all t∈[0,1], lim_N→∞𝐄[Λ_t[f](μ^N)]=0. In particular, Λ_t[f](μ^N) 0. Let us now show that Λ_t[f](μ^N)Λ_t[f](μ^*). Denoting by 𝖣(Λ_t[f]) the set of discontinuity points of Λ_t[f], we have, from Proposition <ref> and Lemma <ref>, for all t∈[0,1] and f∈𝒞^∞(Θ), 𝐏(μ^*∈𝖣(Λ_t[f])) =0. By the continuous mapping theorem, Λ_t[f](μ^N)Λ_t[f](μ^*). By uniqueness of the limit in distribution, we have that for all t∈[0,1] and f∈𝒞^∞(Θ), a.s. Λ_t[f](μ^*)=0. Let us now prove that a.s. for all t∈[0,1] and f∈𝒞^∞(Θ), Λ_t[f](μ^*)=0. On the one hand, for all f∈𝒞^∞(Θ) and 𝗆∈𝒟([0,1],𝒫(Θ)), the function t↦Λ_t[f](𝗆) is right-continuous. Since [0,1] is separable, we have that for all f∈𝒞^∞(Θ), a.s. for all t∈[0,1], Λ_t[f](μ^*)=0. One the other hand 𝒞^∞(Θ) is separable (when endowed with the norm f_𝒞^∞(Θ)= ∑_k≥ 02^-kmin(1,∑_|j|=k∂_jf_∞,Θ)) and the function f∈𝒞^∞(Θ) ↦Λ_t[f](𝗆) is continuous (for fixed t∈[0,1] and 𝗆∈𝒟([0,1],𝒫(Θ))) relatively to the topology induced by f_𝒞^∞(Θ). Hence, we obtain that a.s. for all t∈[0,1] and f∈𝒞^∞(Θ), Λ_t[f](μ^*)=0. The proof of the proposition is thus complete. §.§.§ Uniqueness and end of the proof of Theorem <ref> There exists a unique solution to (<ref>) in 𝒞([0,1],𝒫(Θ)). First of all, the fact that there is a solution to (<ref>) is provided by Propositions <ref>, <ref> and <ref>. The proof of the fact that there is a unique solution to (<ref>) relies on the same arguments as those used in the proof of <cit.>. For μ∈𝒫(𝐑^d+1), we introduce v[μ]:𝐑^d+1→𝐑^d+1 defined, for θ=(m,ρ)∈𝐑^d+1, by v[μ](θ)= -η∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ⊗γ⟩⟨∇_θϕ(θ,·,x),γ⟩π( x, y)-η∇_θ𝒟_ KL(q_θ^1|P_0^1). In addition, if μ̅∈𝒞([0,1],𝒫(Θ)) is solution to (<ref>), it satisfies also (<ref>) with test functions f∈𝒞^∞_c( 𝐑^d+1). Then, adopting the terminology of <cit.>, any solution μ̅ to (<ref>) is a weak solution[We mention that according to <cit.>, the two notions of solutions of (<ref>) (namely the weak solution and the distributional solution) are equivalent.] on [0,T] of the measure-valued equation ∂_tμ̅_t=div( v[μ̅_t]μ̅_t) μ̅_0=μ_0. Let us now prove that: * There exists C>0 such that for all μ∈𝒫(𝐑^d+1) and θ∈𝐑^d+1, |J_θ v[μ](θ)|≤ C. * There exists C>0 such that for all μ̅∈𝒞([0,1],𝒫(Θ)) solution to (<ref>), 0≤ s,t≤ 1, and θ∈𝐑^d+1, | v[μ̅_t](θ)- v[μ̅_s](θ)|≤ C|t-s|. * There exists L'>0 such that for all μ,ν∈𝒫_1(𝐑^d+1), sup_θ∈𝐑^d| v[μ](θ)- v[ν](θ)|≤ L'𝖶_1(μ,ν). Before proving the three items above, we quickly conclude the proof of the proposition. Items 1 and 2 above imply that v(t,θ)= v[μ̅_t](θ) is globally Lipschitz continuous over [0,1]×𝐑^d+1 when μ̅∈𝒞([0,1],𝒫(Θ)) is a solution to (<ref>). Since μ̅∈𝒞([0,1],𝒫(Θ))⊂𝒞([0,1],𝒫(𝐑^d+1)), this allows to use the representation theorem <cit.> for the solution of (<ref>) in 𝒞([0,1],𝒫(𝐑^d+1)), i.e. it holds: ∀ t∈ [0,1], μ̅_t=ϕ_t#μ_0, where ϕ_t is the flow generated by the vector field v[μ̅_t](θ) over 𝐑^d+1. Equation (<ref>) and the fact that 𝒞([0,1],𝒫(Θ))⊂𝒞([0,1],𝒫_1(𝐑^d+1)) together with Item 3 above and the same arguments as those used in the proof of <cit.> (which we recall is based estimates in Wasserstein distances between two solutions of (<ref>) derived in <cit.>), one deduces that there is a unique solution to (<ref>). Let us prove Item 1. Recall g(ρ)= ln(1+e^ρ). The functions ρ↦ g”(ρ)g(ρ), ρ↦ g'(ρ), ρ↦g'(ρ)/g(ρ), and ρ↦g”(ρ)/g(ρ) are bounded on 𝐑. Thus, in view of (<ref>), ‖ Hess_θ 𝒟_ KL(q_θ^1|P_0^1)‖_∞,𝐑^d+1<+∞. 
On the other hand, by A1 and A3, for x∈𝖷, z∈𝐑^d, θ∈Θ↦ϕ(θ,z,x) is smooth and there exists C>0, for all x∈𝖷, θ∈𝐑^d+1, z∈𝐑^d: | Hess_θϕ(θ,z,x) | ≤ C(𝔟(z)^2+𝔟(z)). This bound allows us to differentiate under the integral signs in (<ref>) and proves that |J_θ∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ⊗γ⟩⟨∇_θϕ(θ,·,x),γ⟩π( x, y)|≤ C, where C>0 is independent of μ∈𝒫(Θ) and θ∈Θ. The proof of Item 1 is complete. Let us prove Item 2. Let μ̅∈𝒞([0,1],𝒫(Θ)) be a solution to (<ref>), 0≤ s≤ t≤ 1, and θ∈𝐑^d+1. We have v[μ̅_t](θ)- v[μ̅_s](θ)= -η∫_𝖷×𝖸⟨ϕ(·,·,x),(μ̅_t-μ̅_s)⊗γ⟩⟨∇_θϕ(θ,·,x),γ⟩π( x, y). Let z∈𝐑^d and x∈𝖷. By A1 and A3, ϕ(·,z,x)∈𝒞^∞(Θ). Therefore, by (<ref>), ⟨ϕ(·,z,x),μ̅_t-μ̅_s⟩ = -η∫_s^t ∫_𝖷×𝖸⟨ϕ(·,·,x')-y,μ̅_r⊗γ⟩⟨∇_θϕ(·,z,x)·∇_θϕ(·,·,x'),μ̅_r⊗γ⟩π( x', y) r -η∫_s^t⟨∇_θϕ(·,z,x)·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ̅_r⟩ r We have ‖∇_θ𝒟_ KL(q_θ^1|P_0^1)‖_∞,Θ<+∞. Using also (<ref>) and the fact that 𝖷×𝖸 is a compact (see A2), it holds: |⟨ϕ(·,z,x),μ̅_t-μ̅_s⟩|≤ C 𝔟(z)|t-s|. Hence, for all x'∈𝖷, |⟨ϕ(·,·,x'),(μ̅_t-μ̅_s)⊗γ⟩|≤⟨|⟨ϕ(·,·,x'),μ̅_t-μ̅_s⟩|,γ⟩≤ C|t-s|. Thus, by (<ref>) and (<ref>), | v[μ̅_t](θ)- v[μ̅_s](θ)|≤ C|t-s|. This ends the proof of Item 2. Let us now prove Item 3. Fix μ,ν∈𝒫_1(𝐑^d+1) and θ∈𝐑^d+1. We have v[μ](θ)- v[ν](θ)= -η∫_𝖷×𝖸⟨ϕ(·,·,x),( μ -ν)⊗γ⟩⟨∇_θϕ(θ,·,x),γ⟩π( x, y) For all x∈𝖷, using (<ref>) and (<ref>), it holds: |⟨ϕ(·,·,x),(μ-ν)⊗γ⟩| ≤∫_𝐑^d|⟨ϕ(·,z,x),μ⟩-⟨ϕ(·,z,x),ν⟩|γ(z) z ≤ C ∫_𝐑^d𝖶_1(μ,ν)𝔟(z)γ(z) z≤ C 𝖶_1(μ,ν). Finally, using in addition (<ref>) and (<ref>), we deduce Item 3. This ends the proof of the proposition. We are now ready to prove Theorem <ref>. Recall Lemma <ref> ensures that a.s. (μ^N)_N≥1⊂𝒟([0,1],𝒫(Θ)). By Proposition <ref>, this sequence is relatively compact. Let μ^*∈𝒟([0,1],𝒫(Θ)) be a limit point. Along some subsequence N', it holds: μ^N'μ^*. In addition, a.s. μ^*∈𝒞([0,1],𝒫(Θ)) (by Proposition <ref>) and μ^* satisfies (<ref>) (by Proposition <ref>). By Proposition <ref>, (<ref>) admits a unique solution μ̅∈𝒞([0,1],𝒫(Θ)). Hence, a.s. μ^*=μ̅. Therefore, μ^N'μ̅. Since the sequence (μ^N)_N≥1 admits a unique limit point, the whole sequence converges in distribution to μ̅. The convergence also holds in probability since μ̅ is deterministic. The proof of Theorem <ref> is complete. §.§ Proof of Lemma <ref> In this section we prove Lemma <ref>. We start with the following simple result. Let T>0, N≥ 1, and c_1>0. Consider a sequence (u_k)_0≤ k≤⌊ NT⌋⊂𝐑_+ for which there exists v_0 such that u_0≤ v_0 and for all 1≤ k≤⌊ NT⌋, u_k≤ c_1 (1+1/N∑_ℓ=0^k-1u_ℓ). Then, for all 0≤ k≤⌊ NT⌋, u_k≤ v_0e^c_1T. Define v_k=c_1(1+1/N∑_ℓ=0^k-1v_ℓ). For all 0≤ k≤⌊ NT⌋, u_k≤ v_k and v_k=v_k-1(1+c_1/N). Hence v_k=v_0 (1+ c_1/N)^k≤ v_0(1+ c_1/N)^⌊ NT⌋≤ v^0e^c_1T. This ends the proof of the Lemma. Since ρ↦ g'(ρ) and ρ↦ g'(ρ)/g(ρ) are bounded continuous functions over 𝐑, and since |g(ρ)|≤ C(1+|ρ|), according to (<ref>), there exists c>0, for all θ∈𝐑^d+1, |∇_θ𝒟_ KL(q_θ^1|P_0^1)|≤ c(1+|θ|). All along the proof, C>0 is a constant independent of N≥ 1, T>0, i∈{1,…, N}, 1≤ k≤⌊ NT⌋, (x,y)∈𝖷×𝖸, θ∈𝐑^d+1, and z∈𝐑^d, which can change from one occurence to another. It holds: |θ_k^i|≤ |θ_0^i|+ ∑_ℓ=0^k-1|θ_ℓ+1^i-θ_ℓ^i|. Using (<ref>), we have, for 0≤ℓ≤ k-1, |θ_ℓ+1^i-θ_ℓ^i| ≤η/N^2∑_j=1,j≠ i^N|(⟨ϕ(θ_ℓ^j,·,x_ℓ),γ⟩-y_ℓ)⟨∇_θϕ(θ_ℓ^i,·,x_ℓ),γ⟩| + η/N^2|⟨(ϕ(θ_ℓ^i,·,x_ℓ)-y_ℓ)∇_θϕ(θ_ℓ^i,·,x_ℓ),γ⟩| +η/N |∇_θ𝒟_ KL(q_θ_ℓ^i^1|P_0^1)|. For all θ∈𝐑^d+1, z∈𝐑^d, (x,y)∈𝖷×𝖸, we have, by A2 and A3, since ϕ(θ,z,x)=s(Ψ_θ(z),x), |ϕ(θ,z,x)-y|≤ C. Moreover, we have ∇_θϕ(θ,z,x)=∇_1s(Ψ_θ(z),x) J_θΨ_θ(z) (here ∇_1s refers to the gradient of s w.r.t. its first variable). 
By A3, |∇_1s(Ψ_θ(z),x)|≤ C and, hence, denoting by J_θ the Jacobian w.r.t. θ, using (<ref>), |∇_θϕ(θ,z,x)|≤ C|J_θΨ_θ(z)|≤ C𝔟(z). Therefore, by (<ref>), ⟨|∇_θϕ(θ,·,x)|,γ⟩≤ C. Hence, we obtain, using (<ref>) and (<ref>), |θ_ℓ+1^i-θ_ℓ^i| ≤η/N^2∑_j=1,j≠ i^NC+η/N^2C + cη/N(1+|θ_ℓ^i|) ≤C/N(1+ |θ_ℓ^i|). Using A4, there exists K_0>0 such that a.s. for all i, |θ_0^i|≤ K_0. Then, from (<ref>) and (<ref>), for 1≤ k≤⌊ NT⌋, it holds: |θ_k^i|≤ K_0 + C/N∑_ℓ=0^k-1(1+|θ_ℓ^i|)≤ K_0+CT+ C/N∑_ℓ=0^k-1 |θ_ℓ^i|≤ C_0,T(1+ 1/N∑_ℓ=0^k-1 |θ_ℓ^i|), with C_0,T=max(K_0+CT, C)≤ K_0+C(1+T). Then, by Lemma <ref> and A4, we have that for all N≥1, i∈{1,…,N} and 0≤ k≤⌊ NT⌋, |θ_k^i|≤ K_0e^[K_0+C(1+T)]T. The proof of Lemma <ref> is thus complete. § PROOF OF THEOREM <REF> In this section, we assume A1→𝐀5 (where in A2, when k≥ 1, ℱ_k^N is now the one defined in (<ref>)) and the θ^i_k's (resp. μ^N) are those defined by (<ref>) for i∈{1,…,N} and k≥ 0 (resp. by (<ref>) for N≥ 1). §.§ Preliminary analysis and pre-limit equation §.§.§ Notation and weighted Sobolev embeddings For J∈N and β≥0, let ℋ^J,β(𝐑^d+1) be the closure of the set 𝒞_c^∞(𝐑^d+1) for the norm f_ℋ^J,β:=(∑_|k|≤ J∫_𝐑^d+1|∂_kf(θ)|^2/1+|θ|^2βθ)^1/2. The space ℋ^J,β(𝐑^d+1) is a separable Hilbert space and we denote its dual space by ℋ^-J,β(𝐑^d+1) (see e.g. <cit.>). The associated scalar product on ℋ^J,β(𝐑^d+1) will be denoted by ⟨·,·⟩_ℋ^J,β. For Φ∈ℋ^-J,β(𝐑^d+1), we use the notation ⟨ f,Φ⟩_J,β= Φ[f], f∈ℋ^J,β(𝐑^d+1). For ease of notation, and if no confusion is possible, we simply denote ⟨ f,Φ⟩_J,β by ⟨ f,Φ⟩. The set 𝒞^J,β_0(𝐑^d+1) (resp. 𝒞^J,β(𝐑^d+1)) is defined as the space of functions f:𝐑^d+1→𝐑 with continuous partial derivatives up to order J∈N such that for all |k|≤ J, lim_|θ|→∞|∂_kf(θ)|/1+|θ|^β=0 (resp. ∑_|k|≤ J sup_θ∈𝐑^d+1|∂_kf(θ)|/1+|θ|^β<+∞). The spaces 𝒞^J,β(𝐑^d+1) and 𝒞^J,β_0(𝐑^d+1) is endowed with the norm f_𝒞^J,β:=∑_|k|≤ J sup_θ∈𝐑^d+1|∂_kf(θ)|/1+|θ|^β. We note that θ∈𝐑^d+1↦ (1-χ(θ))|θ|^α∈ℋ^J,β(𝐑^d+1) if β-α>(d+1)/2, where χ∈𝒞_c^∞(𝐑^d+1) equals 1 near 0. We recall that from <cit.>, for m'>(d+1)/2 and α,j≥ 0, ℋ^m'+j,α(𝐑^d+1)↪𝒞_0^j,α(𝐑^d+1). In the following, we consider γ_0,γ_1∈𝐑 and L_0∈𝐍 such that γ_1>γ_0> d+1/2+1 and L_0> d+1/2 +1. We finally recall the following standard result. Let q>p≥ 1 and C>0. The set 𝒦_C^q:={μ∈𝒫_p(𝐑^d+1), ∫_𝐑^d+1|x|^qμ( x)≤ C} is compact. §.§.§ Bound on the moments of the θ_k^i's We have the following uniform bound in N≥ 1 on the moments of the sequence {θ_k^i, i∈{1,…,N}}_k= 0,…, ⌊ NT ⌋ defined by (<ref>). Assume A1→ 𝐀5. For all T>0 and p≥ 1, there exists C>0 such that for all N≥1, i∈{1,…,N} and 0≤ k≤⌊ NT⌋, 𝐄[|θ_k^i|^p]≤ C. Let p≥ 1. By A4, 𝐄[|θ_0^i|^p]≤ C_p for all i∈{1,…,N}. Let T>0. In the following C>0 is a constant independent of N≥1, i∈{1,…,N}, and 1≤ k≤⌊ NT⌋. Using (<ref>), the fact that ϕ is bounded, 𝖸 is bounded, and (<ref>), we have, for 0≤ n ≤ k-1, |θ_n+1^i-θ_n^i| ≤C/N^2B∑_j=1^N∑_ℓ=1^B 𝔟(𝖹^i,ℓ_n) +C/N |∇_θ𝒟_ KL(q_θ_n^i^1|P_0^1)| ≤C/NB∑_ℓ=1^B (1+𝔟(𝖹^i,ℓ_n)) +C/N (1+|θ_n^i|), where we have also used (<ref>) for the last inequality. Let us recall the following convexity inequality: for m,p≥ 1 and x_1,…,x_p∈𝐑_+, (∑_n=1^mx_n)^p≤ m^p-1∑_n=1^mx_n^p. Using (<ref>), A1 with q=p, and the fact that 1≤ k ≤⌊ NT⌋, one has setting u_k=𝐄[|θ_k^i|^p], u_k≤ C (1+1/N∑_n=0^k-1u_n). The result then follows from Lemma <ref>. §.§.§ Pre-limit equation In this section, we derive the pre-limit equation for μ^N defined by (<ref>). 
For simplicity we will keep the same notations as those introduced in Section <ref>, though these objects will now be defined with θ^i_k set by (<ref>), and on 𝒞^2,γ_1(𝐑^d+1), for all integer k≥ 0, and all time t≥ 0. Let f∈𝒞^2,γ_1(𝐑^d+1). Then, set for k≥ 0, 𝐃_k^N[f] =-η/N^3∑_i=1^N∑_j=1,j≠ i^N∫_𝖷×𝖸 (⟨ϕ(θ_k^j,·,x),γ⟩-y )⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x),γ⟩π( x, y) -η/N^2∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y). Note that 𝐃_k^N above is the one defined in (<ref>) but now on 𝒞^2,γ_1(𝐑^d+1) and with θ^i_k defined by (<ref>). For k≥ 0, we set 𝐌_k^N[f]= -η/N^3B∑_i,j=1^N ∑_ℓ=1^B(ϕ(θ_k^j,𝖹_k^j,ℓ,x_k)-y_k)∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,𝖹_k^i,ℓ,x_k)-𝐃_k^N[f]. By Lemma <ref> together with (<ref>) and (<ref>), 𝐌_k^N[f] is integrable. Also, using A5 and the fact that θ_k^j is ℱ_k^N-measurable (see (<ref>)), 𝐄 [𝐌_k^N[f]|ℱ_k^N ]=0. Set 𝐌_t^N[f]=∑_k=0^⌊ Nt⌋-1𝐌_k^N[f], t≥ 0. We now extend the definition of 𝐖_t^N[f] and 𝐑_k^N[f] in (<ref>) and (<ref>) to any time t≥ 0, k≥ 0, and f∈𝒞^2,γ_1(𝐑^d+1), and with θ^i_k set by (<ref>). We then set 𝐑_t^N[f]=∑_k=0^⌊ Nt⌋-1𝐑_k^N[f], t≥ 0. With the same algebraic computations as those made in Section <ref>, one obtains the following pre-limit equation: for N≥ 1, t≥ 0, and f∈𝒞^2,γ_1(𝐑^d+1), ⟨ f,μ_t^N⟩-⟨ f,μ_0^N⟩ =-η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s -η∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s +η/N∫_0^t∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y) s -η/N∫_0^t∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s + 𝐌_t^N[f] +𝐖_t^N[f]+ 𝐑_t^N[f]. We will now show that the sequence (μ^N)_N≥1 is relatively compact in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). §.§ Relative compactness and convergence to the limit equation §.§.§ Relative compactness in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) In this section we prove the following result. Assume A1→𝐀5. Recall γ_0> d+1/2+1. Then, the sequence (μ^N)_N≥1 is relatively compact in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). We start with the following lemma. Assume A1→ 𝐀5. Then, ∀ T>0 and f∈𝒞^2,γ_1(𝐑^d+1), sup_N≥1𝐄[sup_t∈[0,T]⟨ f,μ_t^N⟩^2]<+∞. Let T>0. In what follows, C>0 is a constant independent of f∈𝒞^2,γ_1(𝐑^d+1), (s,t)∈ [0,T]^2, and z∈𝐑^d which can change from one occurence to another. We have by A4, 𝐄[⟨ f,μ_0^N⟩^2]≤ C f_𝒞^2,γ_1^2. By (<ref>) and (<ref>), it holds: sup_t∈[0,T]⟨ f,μ_t^N⟩^2 ≤ C[ f_𝒞^2,γ_1^2+ ∫_0^T∫_𝖷×𝖸 |⟨⟨ |∇_θ f·∇_θϕ(·,·,x) |,γ⟩,μ_s^N⟩ | ^2 π( x, y) s ∫_0^ T | ⟨ |∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1) |,μ_s^N⟩ | ^2 s +1/N^2∫_0^T∫_𝖷×𝖸 |⟨⟨ |∇_θ f·∇_θϕ(·,·,x) |,γ⟩,μ_s^N⟩ | ^2 π( x, y) s + sup_t∈[0,T] |𝐌_t^N[f]|^2 +sup_t∈[0,T] |𝐖_t^N[f]|^2+ sup_t∈[0,T] |𝐑_t^N[f]|^2.]. We have using (<ref>), for s∈ [0,T] and z∈𝐑^d, | ∇_θ f (θ^i_⌊ Ns⌋) ·∇_θϕ(θ^i_⌊ Ns⌋,z,x)|≤ C f_𝒞^1,γ_1𝔟(z) (1+|θ^i_⌊ Ns⌋|^γ_1). Thus, using Lemma <ref>, 𝐄[ ⟨⟨|∇_θ f·∇_θϕ(·,·,x)|,γ⟩ ,μ_s^N⟩^2 ]≤ Cf_𝒞^1,γ_1^2. Using (<ref>), for s∈ [0,T], it holds: | ∇_θ f(θ^i_⌊ Ns⌋)·∇_θ𝒟_ KL(q_θ^i_⌊ Ns⌋^1|P_0^1) | ≤ C f_𝒞^1,γ_1 (1+|θ^i_⌊ Ns⌋|^γ_1+1). Thus, using Lemma <ref>, 𝐄 [ | ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ | ^2 ]≤ Cf_𝒞^1,γ_1^2. On the other hand, we have using (<ref>): sup_t∈ [0,T]|𝐌_t^N[f]|^2≤⌊ NT⌋∑_k=0^⌊ NT⌋-1| 𝐌_k^N[f]|^2. Recall (<ref>). By (<ref>), (<ref>), A1, and (<ref>), it holds: |𝐃_k^N[f]|^2≤ Cf_𝒞^1,γ_1^2 [1/N^4∑_i≠ j=1^N (1+|θ^i_k|^2γ_1)+ 1/N^4 (1+⟨ |· |^2γ_1, ν_k^N⟩)]≤C/N^2f_𝒞^1,γ_1^2 (1+|θ^i_k|^2γ_1) and |𝐌_k^N[f]|^2≤C/N^4B∑_i,j=1^N ∑_ℓ=1^Bf^2_𝒞^1,γ_1 |𝔟(𝖹_k^i,ℓ)|^2 (1+|θ^i_⌊ Ns⌋|^2γ_1)+ |𝐃_k^N[f]|^2. By Lemma <ref> and A1, one deduces that 𝐄[|𝐌_k^N[f]|^2]≤Cf_𝒞^1,γ_1^2/N^2. Going back to (<ref>), we then have 𝐄[sup_t∈ [0,T]|𝐌_t^N[f]|^2]≤ Cf_𝒞^1,γ_1^2. 
Using the same arguments as those used so far, one also deduces that for t∈ [0,T] sup_t∈[0,T]|𝐖_t^N[f]|^2 ≤Cf_𝒞^1,γ_1^2/N^2sup_t∈[0,T] (1+⟨ |· |^γ_1+1, ν_⌊ Nt⌋^N⟩)^2 = Cf_𝒞^1,γ_1^2/N^2max_0≤ k≤⌊ NT⌋(1+⟨ |· |^γ_1+1, ν_k^N⟩)^2 ≤Cf_𝒞^1,γ_1^2/N^2∑_k=0^⌊ NT⌋ (1+⟨ |· |^γ_1+1, ν_k^N⟩)^2. and thus 𝐄[sup_t∈[0,T]|𝐖_t^N[f]|^2] ≤Cf_𝒞^1,γ_1^2/N. Let us finally deal with the term involving 𝐑_t^N[f]. One has using (<ref>): sup_t∈[0,T]|𝐑_t^N[f]|^2≤⌊ NT⌋∑_k=0^⌊ NT⌋-1|𝐑_k[f]|^2. For 0≤ k≤⌊ NT⌋-1, we have, from (<ref>), |𝐑_k^N[f]|^2 ≤Cf_𝒞^2,γ_1^2/N∑_i=1^N|θ_k+1^i-θ_k^i|^4(1+|θ̂_k^i|^γ_1)^2 ≤Cf_𝒞^2,γ_1^2/N∑_i=1^N|θ_k+1^i-θ_k^i|^4(1+|θ_k+1^i|^2γ_1+|θ_k^i|^2γ_1). Using (<ref>), |θ_k+1^i-θ_k^i|^4≤ C[1/N^4+|θ_k^i|^4/N^4+1/N^4B∑_ℓ=1^B|𝔟(𝖹_k^i,ℓ)|^4]. By Lemma <ref> and A1, it then holds 𝐄[|θ_k+1^i-θ_k^i|^4(1+|θ_k+1^i|^2γ_1+|θ_k^i|^2γ_1)] ≤C/N^4. Hence, one deduces that 𝐄[sup_t∈[0,T]|𝐑_t^N[f]|^2]≤ C f_𝒞^2,γ_1^2 /N^2. This ends the proof of Lemma <ref>. Assume A1→𝐀5. Let 0<ϵ<γ_1-γ_0. For every T>0, sup_N≥1𝐄[sup_t∈[0,T]∫_𝐑^d+1|x|^γ_0+ϵμ_t^N( x) ] <+∞. Apply Lemma <ref> with f:θ↦(1-χ)|θ|^γ_0+ϵ∈𝒞^2,γ_1(𝐑^d+1). Assume A1→𝐀5. Let T>0 and f∈𝒞^2,γ_1(𝐑^d+1). Then, there exists C>0 such that for all δ>0 and 0≤ r<t≤ T such that t-r≤δ, one has for all N≥ 1, 𝐄[|⟨ f,μ_t^N⟩ -⟨ f,μ_r^N⟩ |^2]≤ C (δ^2+δ/N+ 1/N). Using (<ref>), Jensen's inequality, (<ref>), (<ref>), and (<ref>), one has for f∈𝒞^2,γ_1(𝐑^d+1), 𝐄[|⟨ f,μ_t^N⟩ -⟨ f,μ_r^N⟩ |^2] ≤ C[(t-r)^2(1+1/N^2)f_𝒞^1,γ_1^2 +𝐄[ | ∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐌_k^N[f] |^2] +𝐄[| 𝐖_t^N[f] - 𝐖_r^N[f] |^2]+𝐄[| 𝐑_t^N[f] - 𝐑_r^N[f] |^2]. We also have with the same arguments as those used just before (<ref>) 𝐄[ | ∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐌_k^N[f] |^2]=∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐄[|𝐌_k^N[f]|^2]. Using in addition (<ref>), one has 𝐄[ | ∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐌_k^N[f] |^2]≤ C (Nδ+1) f_𝒞^1,γ_1^2/ N^2. Note that with this argument, we also deduce that 𝐄[ | 𝐌_t^N[f]|^2]≤ Cf_𝒞^1,γ_1^2/ N. On the other hand, by (<ref>) and (<ref>), one has 𝐄[| 𝐖_t^N[f] - 𝐖_r^N[f] |^2]≤ C f^2_𝒞^1,γ_1/ N and 𝐄[| 𝐑_t^N[f] - 𝐑_r^N[f] |^2]≤ C f_𝒞^2,γ_1^2/ N^2. One then plugs all the previous estimates in (<ref>) to deduce the result of Lemma <ref>. We are now in position to prove Proposition <ref>. The proof consists in applying <cit.> with E= 𝒫_γ_0(𝐑^d+1) and 𝔽={𝖧_f, f∈𝒞^∞_c(𝐑^d+1)} where 𝖧_f: ν∈𝒫_γ_0(𝐑^d+1)↦⟨ f, ν⟩. The set 𝔽 on 𝒫_γ_0(𝐑^d+1) satisfies Conditions <cit.>. Condition (4.8) there follows from Proposition <ref>, Lemma <ref>, and Markov's inequality. Let us now show <cit.> is verified, i.e. that for all f∈𝒞^∞_c(𝐑^d+1), the family (⟨ f,μ^N⟩)_N≥1 is relatively compact in 𝒟(𝐑_+,𝐑). To do this, it suffices to use Lemma <ref> and <cit.> (with ℋ_1=ℋ_2=𝐑 there). In conclusion, according to <cit.>, the sequence (μ^N)_N≥1⊂𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) is relatively compact. §.§.§ Limit points satisfy the limit equation (<ref>) For f∈𝒞^1,γ_0-1(𝐑^d+1) and t≥ 0, we introduce for 𝗆∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)), Φ_t[f]:𝗆↦ |⟨ f,𝗆_t⟩-⟨ f,μ_0⟩ +η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,𝗆_s⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),𝗆_s⊗γ⟩π( x, y) s + η∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s⟩ s |. Note that Φ_t[f] is the function Λ_t[f] previously defined in (<ref>) for test functions f∈𝒞^1,γ_0-1(𝐑^d+1) and for 𝗆∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Assume A1→𝐀5. Let f∈𝒞^1,γ_0-1(𝐑^d+1). Then Φ_t[f] is well defined. In addition, if a sequence (𝗆^N)_N≥ 1 converges to 𝗆 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)), then, for all continuity point t≥ 0 of 𝗆, we have Φ_t[f](𝗆^N)→Φ_t[f](𝗆). Using A1, and because 𝖸 is bounded and the function ϕ is bounded, 𝒢_1^x,y: θ↦⟨ϕ(θ,·,x)-y,γ⟩∈𝒞^∞_b(𝐑^d+1). 
In addition, for all multi-index α∈𝐍^d+1, there exists C>0, for all x,y∈𝖷×𝖸 and all θ∈𝐑^d+1, |∂_α𝒢_1^x,y(θ)|≤ C. The same holds for the function 𝒢_2^x: θ∈𝐑^d+1↦⟨∇_θϕ(θ,·,x), γ⟩. Consequently, θ↦∇_θ f(θ)·𝒢_2^x(θ)∈𝒞^0,γ_0-1(𝐑^d+1)↪𝒞^0,γ_0(𝐑^d+1). Then, there exists C>0 independent of (x,y)∈𝖷×𝖸 and s∈ [0,t] such that |⟨𝒢_1^x,y,𝗆_s⟩|≤ C, and |⟨∇_θ f·𝒢_2^x,𝗆_s⟩ |≤ C ‖ f ‖_𝒞^1,γ_0-1⟨ 1+|.|^γ_0, 𝗆_s⟩. Finally, the function θ↦∇_θ𝒟_ KL(q_θ^1|P_0^1) is smooth (see (<ref>)) and (<ref>) extends to all its derivatives, i.e. for all multi-index α∈𝐍^d+1, there exists c>0, for all θ∈𝐑^d+1, |∂_α∇_θ𝒟_ KL(q_θ^1|P_0^1)|≤ c(1+|θ|). Thus, ∇_θ f·∇_θ𝒟_ KL(q_θ^1|P_0^1)∈𝒞^0,γ_0(𝐑^d+1) and for some C>0 independent of s∈ [0,t] |⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s⟩|≤ C ‖ f ‖_𝒞^1,γ_0-1⟨ 1+|.|^γ_0, 𝗆_s⟩. Since in addition sup_s∈ [0,t]⟨ 1+|.|^γ_0, 𝗆_s⟩<+∞ (since s↦⟨ 1+|.|^γ_0, 𝗆_s⟩∈𝒟(𝐑_+,𝐑)), Φ_t[f] is well defined. To prove the continuity property of Φ_t[f] it then suffices to use the previous upper bounds together similar arguments as those used in the proof of Lemma <ref> (see also <cit.>). Assume A1→𝐀5. Let μ^* be a limit point of (μ^N)_N≥1 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Then, μ^* satisfies a.s. Equation (<ref>). Let us consider f∈𝒞_c^∞(𝐑^d+1) and μ^* be a limit point of (μ^N)_N≥1 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Recall that by <cit.>, the complementary of the set 𝒞(μ^*)={t≥ 0, 𝐏(μ^*_t^-= μ^*_t)=1} is at most countable. Let t_*∈𝒞(μ^*). Then, by Lemma <ref>, one has that 𝐏(μ^*∈𝖣(Φ_t_*[f]))=0. Thus, by the continuous mapping theorem, it holds Φ_t_*[f](μ^N)Φ_t_*[f](μ^*). On the other hand, using (<ref>) and the estimates (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), it holds lim_N→∞𝐄[Φ_t_*[f](μ^N)]=0. Consequently, for all f∈𝒞_c^∞(𝐑^d+1) and t_*∈𝒞(μ^*), it holds a.s. Φ_t_*[f](μ^*)=0. On the other hand, for all ψ∈𝒞_c^∞(𝐑^d+1), 𝗆∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)), and s≥ 0, the mappings t≥ 0↦Φ_t[ψ ](𝗆) is right continuous, and f∈ℋ^L_0,γ_0-1(𝐑^d+1)↦Φ_s[f](𝗆) is continuous (because ℋ^L_0,γ_0-1(𝐑^d+1)↪𝒞_0^1,γ_0-1(𝐑^d+1)). In addition, ℋ^L_0,γ_0-1(𝐑^d+1) admits a dense and countable subset of elements in 𝒞_c^∞(𝐑^d+1). Moreover, there exists a countable subset 𝒯_μ^* of 𝒞(μ^*) such that for all t≥ 0 and ϵ>0, there exists s∈𝒯_μ^*, s∈ [t,t+ϵ]. We prove this claim. Since ℝ_+ is a metric space, 𝒞(μ^*) is separable and thus admits a dense subset 𝒪_μ^*. Since [t+ϵ/4,t+3ϵ/4]∩𝒞(μ^*)≠∅, there exists u∈ [t+ϵ/4,t+3ϵ/4]∩𝒞(μ^*). Consider now s∈𝒪_μ^* such that |s-u|≤ϵ/4. It then holds t≤ s≤ t+ ϵ, proving the claim with 𝒯_μ^*=𝒪_μ^*. Hence, we have with a classical argument that a.s. for all f∈ℋ^L_0,γ_0-1(𝐑^d+1) and t≥ 0, Λ_t[f](μ^*)=0. Note also that 𝒞^∞_b(𝐑^d+1)⊂ℋ^L_0,γ_0-1(𝐑^d+1) since 2γ_0>d+1. This ends the proof of the proposition. §.§ Uniqueness of the limit equation and end of the proof of Theorem <ref> In this section, we prove that there is a unique solution to (<ref>) in 𝒞(𝐑_+,𝒫_1(𝐑^d+1)). To this end, we first need to prove that every limit points of (μ^N)_N≥ 1 a.s. belongs to 𝒞(𝐑_+,𝒫_1(𝐑^d+1)). §.§.§ Limit points belong to 𝒞(𝐑_+,𝒫_1(𝐑^d+1)) Assume A1→𝐀5. Let μ^*∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) be a limit point of (μ^N)_N≥ 1 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Then, a.s. μ^*∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)). Note that since 𝖶_1≤𝖶_γ_0, μ^N'μ^* also in 𝒟(𝐑_+,𝒫_1(𝐑^d+1)), along some subsequence N'. According to <cit.>, μ^*∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) a.s. if for all T>0, lim_N→ +∞𝐄[ sup_t∈ [0,T]𝖶_1(μ^N_t_-,μ^N_t) ]=0. Using (<ref>), this is equivalent to prove that lim_N→ +∞𝐄[ sup_t∈ [0,T]sup_‖ f‖_Lip≤ 1|⟨ f,μ^N_t_-⟩-⟨ f,μ^N_t⟩| ]=0. Let us consider T>0 and a Lipschitz function f:𝐑^d+1→𝐑 such that ‖ f‖_Lip≤ 1. 
We have ⟨ f,μ_t^N⟩=⟨ f,μ_0^N⟩+ ∑_k=0^⌊ Nt⌋-1⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩ (with usual convention ∑_0^-1=0). Thus the discontinuity points of t∈ [0,T]↦⟨ f,μ_t^N⟩ lies exactly at {1/N, 2/N,…, ⌊ NT⌋/N} and |⟨ f,μ^N_t_-⟩-⟨ f,μ^N_t⟩|≤max_k=0,…,⌊ NT⌋-1|⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩|, ∀ t∈ [0,T], f Lipschitz. Pick k=0,…,⌊ NT⌋-1. We have by (<ref>), |⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩| ≤1/N∑_i=1^N |θ_k+1^i-θ_k^i| ≤C/N∑_i=1^N[ 1/NB∑_ℓ=1^B (1+𝔟(𝖹^i,ℓ_k)) +1/N (1+|θ_k^i|)]=:d_k^N Hence, it holds: |d_k^N|^2 ≤C/N∑_i=1^N[ 1/N^2B∑_ℓ=1^B (1+𝔟^2(𝖹^i,ℓ_k)) +1/N^2 (1+|θ_k^i|^2)], where thanks to Lemma <ref> and A1, for all k=0,…,⌊ NT⌋-1, 𝐄[|d_k^N|^2]≤ C/N^2 for some C>0 independent of N≥ 1 and k=0,…,⌊ NT⌋-1. Thus, using (<ref>) and (<ref>), 𝐄[ sup_t∈ [0,T]sup_‖ f‖_Lip≤ 1|⟨ f,μ^N_t_-⟩-⟨ f,μ^N_t⟩| ] ≤𝐄[ sup_‖ f‖_Lip≤ 1max_k=0,…,⌊ NT⌋-1|⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩| ] ≤𝐄[ max_k=0,…,⌊ NT⌋-1d_k^N ] ≤𝐄[ √(∑_k=0^⌊ NT⌋-1 |d_k^N|^2 )] ≤√(𝐄[ ∑_k=0^⌊ NT⌋-1 |d_k^N|^2 ])≤C/√(N). This concludes the proof of Proposition <ref>. §.§.§ Uniqueness of the solution to (<ref>) There is a unique solution μ̅∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) to (<ref>). First of all, the existence of a solution is provided by Propositions <ref>, <ref> and <ref>. Let us now prove that there is a unique solution to (<ref>) in 𝒞(𝐑_+,𝒫_1(𝐑^d+1)). Recall the definition of v[μ] in (<ref>). We claim that for all T>0 and all solution μ̅∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) of (<ref>), there exists C>0 such that | v[μ̅_t](θ)- v[μ̅_s](θ)|≤ C|t-s|, for all 0≤ s ≤ t≤ T and θ∈𝐑^d+1. The proof of item (<ref>) is the same as the one made for Item 2 in Proposition <ref> since it holds using (<ref>) and (<ref>), for all 0≤ s≤ t≤ T and z∈𝐑^d, |∫_s^t⟨∇_θϕ(·,z,x)·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ̅_r⟩ r | ≤ C𝔟(z)∫_s^t ⟨ (1+|·| ), μ̅_r⟩ r ≤ C𝔟(z) max_r∈ [0,T]⟨ (1+|·| ), μ̅_r⟩ |t-s|. We now conclude the proof of Proposition <ref>. Item 1 in the proof of Proposition <ref> and (<ref>) imply that v(t,θ)= v[μ̅_t](θ) is globally Lipschitz on [0,T]×𝐑^d+1, for all T>0, when μ̅∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) is a solution of (<ref>). Since in addition a solution μ̅ to (<ref>) is a weak solution on 𝐑_+ to (<ref>) in 𝒞(𝐑_+,𝒫(𝐑^d+1)), it holds by <cit.>: ∀ t≥ 0, μ̅_t=ϕ_t#μ_0, where ϕ_t is the flow generated by the vector field v[μ̅_t](θ) over 𝐑^d+1. Together with Item 3 in the proof of Proposition <ref> and using the same arguments as those used in Step 3 of the proof of <cit.>, two solutions agrees on each [0,T] for all T>0. One then deduces the uniqueness of the solution to (<ref>). The proof of Proposition <ref> is complete. We are now in position to end the proof of Theorem <ref>. By Proposition <ref>, (μ^N)_N≥1 is relatively compact in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Let μ^1,μ^2∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) be two limit points of this sequence. By Proposition <ref>, a.s. μ̅^1,μ̅^2∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)). In addition, according to Proposition <ref>, μ^1 and μ^2 are a.s. solutions of (<ref>). Denoting by μ̅∈𝒞(𝐑_+,𝒫_γ_0(𝐑^d+1)) the unique solution to (<ref>) (see Proposition <ref>), we have a.s. μ̅^1 =μ̅ and μ̅^2=μ̅ in 𝒞(𝐑_+,𝒫_1(𝐑^d+1)). In particular μ̅∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1) ) and μ̅^j=μ̅ in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)), j∈{1,2}. As a consequence, μ̅ is the unique limit point of (μ^N)_N≥1 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) and the whole sequence (μ^N)_N≥1 converges to μ̅ in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Since μ̅ is deterministic, the convergence also holds in probability. The proof of Theorem <ref> is complete. Let us now prove Proposition <ref>. Any solution to (<ref>) in 𝒞([0,T],𝒫(Θ_T)) is a solution to (<ref>) in 𝒞([0,T],𝒫_1( 𝐑^d+1)). The result follows from Proposition <ref>.
http://arxiv.org/abs/2307.05667v1
20230711180001
The Tunneling Potential Approach to Q-Balls
[ "José Ramon Espinosa", "Julian Heeck", "Mikheil Sokhashvili" ]
hep-ph
[ "hep-ph", "hep-th" ]
http://arxiv.org/abs/2307.04980v1
20230711023822
A Model for Circuit Execution Runtime And Its Implications for Quantum Kernels At Practical Data Set Sizes
[ "Travis L. Scholten", "Derrick Perry II", "Joseph Washington", "Jennifer R. Glick", "Thomas Ward" ]
quant-ph
[ "quant-ph" ]
Quantum machine learning (QML) is a fast-growing discipline within quantum computing. One popular QML algorithm, quantum kernel estimation, uses quantum circuits to estimate a similarity measure (kernel) between two classical feature vectors. Given a set of such circuits, we give a heuristic, predictive model for the total circuit execution time required, based on a recently-introduced measure of the speed of quantum computers. In doing so, we also introduce the notion of an “effective number of quantum volume layers of a circuit", which may be of independent interest. We validate the performance of this model using synthetic and real data by comparing the model's predictions to empirical runtime data collected from IBM Quantum computers through the use of the Qiskit Runtime service. At current speeds of today's quantum computers, our model predicts data sets consisting of on the order of hundreds of feature vectors can be processed in on the order of a few hours. For a large-data workflow, our model's predictions for runtime imply further improvements in the speed of circuit execution – as well as the algorithm itself – are necessary. § INTRODUCTION Quantum machine learning (QML) is a broad, interdisciplinary topic at the intersection of quantum information/computation and classical machine learning <cit.>. Within QML, there has been much study of one particular QML algorithm, called “quantum kernel estimation" or “quantum support vector machines" <cit.>. Quantum kernels are a similarity measure K(𝐱, 𝐲) between two classical feature vectors (data points) 𝐱, 𝐲 evaluated using a quantum circuit[For details on kernel methods in general, see <cit.>.]. This circuit uses an n-qubit parameterized encoding circuit U(θ). Given U, 𝐱, and 𝐲, and some fiducial starting state |ψ_0⟩, the corresponding quantum kernel value is given by K(𝐱, 𝐲) = |⟨ψ_0|U^†(𝐲)U(𝐱)|ψ_0⟩|^2. Usually, |ψ_0⟩ is taken to be a computational basis state (typically, the all-zeros state, |0^⊗ n⟩). To calculate a quantum kernel using a quantum computer, |ψ_0⟩ is prepared, and the circuit U(𝐱)∘ U^†(𝐲) is applied. (Here, ∘ means the composition of two operators.) Finally, the resulting state is measured, resulting in a classical bitstring 𝐛. The probability of obtaining the bitstring corresponding to |ψ_0⟩ is estimated by repeating the just-described process many times (i.e., for many “shots") to build up statistics: Pr(|ψ_0⟩) = (# of outcomes 𝐛 corresponding to |ψ_0⟩)/S, with S as the number of shots. Here, the hat symbol is used in the statistical sense of “is an estimate of", not in the quantum-mechanical sense of “is a quantum-mechanical operator". That is, Equation (<ref>) is an estimate of the quantum kernel, Equation (<ref>). Given a data set 𝒟 = {𝐱_1, 𝐱_2, ⋯ , 𝐱_N}, usually the collection of pairwise quantum kernel values K(𝐱_1, 𝐱_1), K(𝐱_1, 𝐱_2), ⋯ is estimated. These values can then be used in classical kernel-based algorithms, such as support vector machines <cit.>, Gaussian processes <cit.>, etc. <cit.>. In this way, quantum kernels “enhance" classical kernel-based algorithms. This work focuses on quantum-enhanced support vector machines.
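As a concrete illustration of the estimation procedure just described, the following minimal sketch builds the fidelity circuit U(𝐱)∘ U^†(𝐲), runs it for S shots, and returns the fraction of all-zeros outcomes. It assumes Qiskit and its Aer simulator are available; the toy two-qubit encoding circuit at the bottom is purely illustrative and is not the encoding circuit studied later in this work.

import numpy as np
from qiskit import transpile
from qiskit.circuit import QuantumCircuit, ParameterVector
from qiskit_aer import AerSimulator

def estimate_kernel_entry(encoding, x, y, shots=4000):
    # Fidelity circuit U(x) composed with U^dagger(y), measured in the computational basis.
    circ = encoding.assign_parameters(x).compose(encoding.assign_parameters(y).inverse())
    circ.measure_all()
    backend = AerSimulator()
    counts = backend.run(transpile(circ, backend), shots=shots).result().get_counts()
    # The fraction of all-zeros bitstrings estimates |<0...0| U^dag(y) U(x) |0...0>|^2.
    return counts.get("0" * encoding.num_qubits, 0) / shots

# Toy two-qubit encoding circuit U(theta): Hadamards followed by data-dependent phases.
theta = ParameterVector("t", 2)
enc = QuantumCircuit(2)
enc.h([0, 1])
enc.p(theta[0], 0)
enc.p(theta[1], 1)
K_xy = estimate_kernel_entry(enc, np.array([0.1, 1.2]), np.array([0.4, 0.9]))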
Quantum kernels have already been used in a variety of contexts, including high-energy physics <cit.>, healthcare and life sciences <cit.>, many-body physics <cit.>, natural language processing <cit.>, industrial manufacturing <cit.>, and financial services and banking <cit.>. However, to date the only proof of an advantage from using quantum kernels is theoretical in nature <cit.>. In a practical context, quantum advantage with quantum kernels has yet to be attained. One obstacle to deploying quantum kernels in practice – and at scale across a data set where N >> 1 – is the time spent executing the necessary quantum circuits could become a bottleneck to the total runtime of the quantum-enhanced, kernel-based algorithm. At least two places exist where this bottleneck could arise: first, transferring data to the quantum computer (necessary because, usually, quantum computers are not closely co-located with the data sets they are processing, necessitating the transfer of data over networks), and second, the total time required for the quantum computer to run the required circuits. The former obstacle can be alleviated by minimizing the amount of data transfer required <cit.>; the latter is the subject of this work. The question we consider here is: “How much time is needed to execute a job consisting of S shots each of M circuits, each of which estimates a quantum kernel value based on an encoding circuit U(θ)?". The runtime must clearly relate to M itself, as well as S, as evidenced by Figure <ref>. However, other properties of the circuit itself – as well as the system the job is being run on – may also impact runtime. In this work, we introduce a well-motivated model for job runtime (Section <ref>), and evaluate its performance by comparing the model's predictions to results obtained from running jobs on IBM Quantum's systems using the Qiskit Runtime <cit.> service (Section <ref>). Using this model, we then discuss the implications of estimating quantum kernels on practical and large data sets in for a climatologically-relevant problem; namely, flash flood prediction (Section <ref>). Finally, we conclude with a discussion of the implications of the model for processing large data sets (N>>1), as well as interesting directions for future work (Section <ref>). § A MODEL FOR CIRCUIT EXECUTION (JOB) RUNTIME This section presents a model for job runtime. It model does not take into account the time a given job spends waiting in a queue prior to being executed on hardware. Empirical studies of queue times show wide variation in how long a given circuit spends waiting to execute; see <cit.>. Queue time depends strongly on the queuing system used; instead, this work focuses on modeling the time required to run the job once it has been removed from the queue. Modeling job runtime is hindered due to a lack of well-defined notions of “How long does it take a quantum computer to run a circuit?". One starting point is using information about how much time is needed for state initialization, gates, and measurements. However, such a model may be overly-cumbersome to use in practice, as modeling the runtime of a circuit with even a moderate number of qubits or depth could be difficult. Doing so would require getting down into the weeds of the circuits, and considering the vagaries of how the hardware executes them[For example, whether the compilers used to schedule pulses attempt to bring pulses forwards in time in the pulse-based representation of the circuit.]. 
What's more, such a low-level model misses the impact of contributions higher up the stack on timing performance – for example, the time spent compiling an abstract quantum circuit or program into the requisite pulse signals would clearly impact overall runtime, but wouldn't be captured by such a model. Hence, a better model – in the sense of capturing more of the stack that impacts timing performance – would focus on modeling runtime starting from the moment a given job is pulled from a queue of jobs, to the time its results are sent back to the end-user. The necessary ingredient to do so is a holistic notion of “system speed". Such a quantity has been recently introduced in the literature, and is called “Circuit Layer Operations Per Second" (CLOPS) <cit.>. The methodology used to calculate the CLOPS of a given system explicitly encompasses the entire stack from the moment a job is de-queued, and is straightforward to describe. Consider running a job of M parameterized quantum volume circuits <cit.> on a system with quantum volume V. Each circuit in the job has a number of quantum volume layers (repetitions of permutations and random 2-qubit gates) D=log_2(V). And suppose the parameters of each circuit are updated updated K times, and each circuit in the job is repeated for S shots. Let the total elapsed time be T. The CLOPS C of the system is then C = MDKS/T. The methodology for computing CLOPS presented takes S=M=100, K=10, and performs the parameter updates by chaining the output of one run of a circuit to the inputs of the next run, through the use of a pseudo-random number generator <cit.>. Assuming the stack has no fixed overheads or time costs with respect to varying any of M,K,S, or D, then a multiplicative scaling of any of these parameters would result in a corresponding scaling of the total runtime. That is, if another job was run with M' circuits, K' parameter updates, S' shots, and D' quantum volume layers, then a system with CLOPS C should take a time T' = (M'*K'*D'*S')/C to run such a job. To apply Equation (<ref>) to jobs consisting of circuits which estimate quantum kernel values, two modifications are necessary. Both relate to the fact the CLOPS metric is defined using quantum volume circuits, but quantum volume circuits are not usually used as encoding circuits in QML. The first – and most straightforward – issue is the CLOPS metric incorporates the notion of parameter updates through the variable K. When calculating quantum kernels, no parameter updates are done; K should be fixed to one[Note if quantum kernel training <cit.> was performed, then K≠ 1, and should reflect the number of update calls performed.]. The second issue is what the notion of “number of quantum volume layers" (D) would mean. While a given feature map may have a parameter which seems similar in spirit to D – for example, by repeating a base template for an encoding circuit several times – these are different categories of items, making them incomparable. Figure <ref> shows examples of what both “number of repetitions of a base template" mean for quantum volume and a particular QML circuit, called a “ZZFeatureMap" ([Equation (<ref>)] and reference <cit.>). Consequently, a notion of the “effective" number of quantum volume layers is needed. We provide a definition below, based on 2 observations. 
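Before turning to those observations, the bare CLOPS scaling just described can be summarized in a few lines of code. This is a sketch only, assuming no fixed per-job overheads; the numbers in the example (quantum volume 64, roughly 2,400 CLOPS, matching the ibmq_auckland figures quoted later in the text) are used purely as an illustrative operating point.

def clops_runtime_seconds(m_circuits, d_layers, s_shots, clops, k_updates=1):
    # T' = M' * K' * D' * S' / C, assuming no fixed overheads anywhere in the stack.
    # For quantum kernel jobs no parameter updates are performed, so K = 1.
    return m_circuits * k_updates * d_layers * s_shots / clops

# Example: the CLOPS benchmark job itself (M = S = 100, K = 10, D = log2(64) = 6)
# on a system with roughly 2,400 CLOPS takes about 250 seconds under this model.
t_bench = clops_runtime_seconds(100, 6, 100, 2400, k_updates=10)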
The first observation is for an n-qubit encoding circuit U(𝐱), with a number of repetitions of its template D, the corresponding circuit for calculating a quantum kernel acts on n qubits and has a number of repetitions of the base template 2D. Thus, its volumetric area[The notion of volumetric area of a circuit is based on the idea of volumetric benchmarking of quantum computers <cit.>, with the difference that in <cit.>, the depth of the circuit when transpiled to a canonical gate set is used in place of a notion of “number of layers".] – the product of circuit width and number of base layers – is 2Dn. A quantum volume circuit acting on q qubits has volumetric area[Recall quantum volume circuits are square, meaning the number of quantum volume layers is equal to the number of qubits the circuit acts on.] q^2. Thus, a quantum volume circuit with q^2 = 2Dn has the same volumetric area as a quantum kernel circuit. This sets a required number of qubits the quantum volume circuit needs to act on in order to have the same volumetric area as the quantum kernel circuit. The second observation is even when two circuits have the same volumetric area, their depths when transpiled to hardware will generally not be the same (see Figure <ref>). A variety of circuits with different values of n and D can have the same volumetric area, but the circuit execution time can be dramatically different – intuitively, a circuit with higher depth will take more time to execute. Hence, capturing the effect of circuit depth is necessary. To do so, we normalize the depth of the quantum kernel circuit to the depth of a quantum volume circuit with the same volumetric area, and use it as a scaling factor. These two observations above lead to a definition of the “effective number of quantum volume layers" of a quantum kernel circuit as D_eff≡⟨Depth(U^†(𝐱)U(𝐲))⟩/⟨Depth(QV_v) ⟩*v, where v = ⌈√(2Dn)⌉. Here, QV_j denotes a quantum volume circuit with a number of layers j, and Depth() denotes the circuit depth when transpiled onto hardware. The expectation values are taken with respect to the parameters 𝐱, 𝐲 and random seeds for the kernel and quantum volume circuits, respectively. Thus, our model for execution time of a job consisting of M quantum kernel circuits with an effective number of quantum volume layers D_eff on a system with with CLOPS C for a total of S shots is given by T̂ = MS/C*D_eff. Note here, T̂ means “An estimate of the runtime", not “Is a quantum-mechanical operator". § MODEL PERFORMANCE The performance of the model is evaluated using 2 kinds of circuits: quantum volume circuits and kernel circuits based on the ZZFeatureMap circuit. Both of these circuits are parameterized, so synthetic data is generate the parameters. Empirical runtime information is collected by submitting the jobs to IBM Quantum systems using the Qiskit Runtime, a quantum computing service and programming model allowing users to optimize workloads and efficiently execute them on quantum systems at scale <cit.>, via the Runtime's Sampler primitive <cit.>. Across the jobs, the number of circuits M, shots S, backend used, and number of qubits n are varied. In addition, for the ZZFeatureMap circuits, both the number of repetitions of the base template D and the circuit's entanglement structure are varied. To quantify the model's performance at predicting runtime, two numbers are used. Suppose the actual runtime for the job is T, and the runtime predicted by the model is T̂. 
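For concreteness, a hedged sketch of how D_eff and the prediction T̂ might be computed with Qiskit is shown below. The backend argument, the number of random parameter draws, and the use of the library QuantumVolume and transpile routines are tooling assumptions; the small sample average stands in for the expectation values appearing in the definition of D_eff.

import math
import numpy as np
from qiskit import transpile
from qiskit.circuit.library import QuantumVolume

def effective_qv_layers(encoding, n_qubits, reps, backend, n_samples=10):
    # v = ceil(sqrt(2 D n)): qubit count of a quantum volume circuit with the
    # same volumetric area as the kernel circuit U(x) o U^dagger(y).
    v = math.ceil(math.sqrt(2 * reps * n_qubits))
    kernel_depths, qv_depths = [], []
    for seed in range(n_samples):
        rng = np.random.default_rng(seed)
        x, y = rng.uniform(0.0, 2.0 * np.pi, (2, encoding.num_parameters))
        kernel = encoding.assign_parameters(x).compose(
            encoding.assign_parameters(y).inverse())
        kernel_depths.append(transpile(kernel, backend).depth())
        qv_depths.append(transpile(QuantumVolume(v, seed=seed), backend).depth())
    return v * np.mean(kernel_depths) / np.mean(qv_depths)

def predicted_runtime_seconds(m_circuits, s_shots, clops, d_eff):
    # T_hat = (M * S / C) * D_eff.
    return m_circuits * s_shots / clops * d_eff

With T̂ in hand, the comparison to the measured runtime T can be quantified as follows.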
The corresponding loss L of the model with respect to the job is be L = r-1      r ≥ 1 1/r-1   r<1  with  r = T̂/T, By construction L ≥ 0, with equality if, and only if, the predicted and actual runtimes agree. The number r – the runtime ratio – is another quantifier of the degree to which the predicted and actual runtime agree. When r < 1, the model under-predicts runtime. One problem with this is if the predictions of the model are used in other contexts – for example, analyzing the overall runtime of a QML workflow – then an under-prediction on the part of the model would negatively impact such an overall analysis. Hence, the loss function more strongly penalizes under-prediction of runtime (i.e., increases more quickly when r < 1). §.§ Model Performance: Quantum Volume Circuits The runtime model uses the CLOPS metric as the notion of the speed of circuit execution. The CLOPS metric is computed using quantum volume circuits. Hence, we first evaluate the performanc of the model when the circuits in the job are quantum volume circuits. Note for these jobs, D_eff in Equation (<ref>) is taken to be the number of quantum volume layers D. Table <ref> shows – across 5 backends – the actual runtime T, the runtime predicted by the model T̂, the runtime ratio r=T̂/T, and the corresponding loss for jobs where S=M=100, and D= log_2(QV). (Note in these experiments K=1, whereas for the CLOPS experiments, K=10.) The value of the runtime ratio T̂/T shows the model consistently under-estimates the runtime. As a result, the model's loss is non-zero. One reason for this discrepancy could be that, when calculating a system's CLOPS, the quantum volume circuits are pre-transpiled to a given system. For the jobs submitted here, they were not, meaning some additional time was spent in transpilation. Note the number of quantum volume layers D depends on the quantum volume of the backend; the circuits run on systems with higher quantum volumes have more layers than those run on systems with lower quantum volumes. Hence, even if 2 systems have roughly the same CLOPS values, their actual runtimes may be different, due to differences in the number of layers in the circuit. Further, different systems have different numbers of qubits, which impacts the time cost of circuit transpilation and waveform loading. Thus, even though ibmq_jakarta, ibmq_guadalupe, and ibm_hanoi all have a comparable CLOPS value, their differences in quantum volume and qubit count mean the actual runtime T will be different for these CLOPS jobs. The methodology used to compute CLOPS uses S=100. This is a small value for applications where precise estimates are required; commonly, jobs use on the order of thousands of shots. For quantum kernels, increasing the number of shots directly increases the accuracy with which the kernel [Equation (<ref>)] can be estimated. And, as shown back in Figure <ref>, changing S dramatically changes the runtime. This is also reflected in the results of Table <ref> which extend Table <ref> to run the exact same set of jobs, except the number of shots is changed. Considering the model's loss for these jobs, we see it is minimized when S is 100 or 500 – exactly (or close to) the number of shots used for measuring CLOPS. As S→ 0 the loss increases substantially, because the runtime ratio approaches 0, driven by the fact that in the model [Equation (<ref>)] the number of shots enters multiplicatively in the predicted runtime. However, there are fixed overheads across the stack which don't scale with S. 
For example, as noted in <cit.>, the time required for circuit compilation and data transfer is independent of S. Such an overhead would dominate circuit runtime in a low-shot regime. When the number of shots increases, the loss does as well, albeit less dramatically as when the number of shots decreases. In terms of the runtime ratio, as S increases, the model over-predicts job runtime, though the runtime ratio appears to be similar across similar backends. These results imply that although the model is not perfectly accurate with respect to predicting runtimes for the CLOPS job, it is – comparatively speaking – most accurate for such a (or a very similar) job, as opposed to jobs involving a small or large number of shots. As we will discuss in Section <ref>, one of the main reasons for these discrepancies could be the fact the CLOPS metric is evaluated using an execution path different from the one used here. That is, the manner in which jobs are set up and run is different, which can lead to differences in execution time, a point returned to in the Conclusions (Section <ref>). In the next subsection, we repeat similar experiments as those whose results are presented here, but with a different kind of circuit. §.§ Model Performance: Quantum Machine Learning Circuits The previous sub-section evaluated the model's performance on quantum volume circuits. Next, we turn to the task of evaluating the model using a circuit used for quantum kernels, which evaluate a similarity measure K(𝐱,𝐲) between two classical feature vectors 𝐱, 𝐲. Note in this section, synthetic values for 𝐱 and 𝐲 are used. Given an encoding circuit U(𝐱), the corresponding quantum kernel circuit is U(𝐱)∘ U^†(𝐲). We focus on a particular encoding circuit on n qubits, based on an encoding circuit introduced in <cit.>. The encoding circuit we use is given by U(𝐱) =V(𝐱)∘ H^⊗ n, where H^⊗ n is the Hadamard gate on all n qubits, and V(𝐱) = Exp(i∑_𝐣∈ S[ ϕ_𝐣(𝐱)∏_a ∈𝐣Z_a]). (Note that in <cit.>, the encoding circuit used is V(𝐱) ∘ H^⊗ n∘ V(𝐱)∘ H^⊗ n.) Here the set S indexes both individual qubits, as well as pairs of them. The function ϕ_𝐣(𝐱) is given by ϕ_𝐣(𝐱) = 𝐱_j                       single qubit j (π - 𝐱_j)(π - 𝐱_k)    qubit pair j,k. On the j^th individual qubit, V(𝐱) applies a phase rotation, with the phase being set by the value the j^th component of 𝐱, 𝐱_j. On a pair of qubits j,k, V(𝐱) applies an entangling ZZ operation, with a phase set by (π - 𝐱_j)(π - 𝐱_k). Implicit in the notation above is the idea of an “entangling strategy", which determines which pairs of qubits become entangled. In this work, we consider two strategies: * “Linear", in which adjacent pairs of qubits are entangled: S = {0, 1, ⋯, n-1 }∪{(0,1), (1,2), (2,3), ⋯, (n-2,n-1)} * “Full", in which all pairs of qubits are entangled: S = {0, 1, ⋯, n-1 }∪{(0,1), (0,2), (0,3), ⋯, (0,n-1), (1,2), ⋯, (n-2,n-1)} In general, quantum kernel circuits are rectangular: the total number of layers (2D) does not equal the circuit width (n). We use the aspect ratio of the circuit, a≡ 2D/n to capture whether the kernel circuits are wide and shallow (a < 1), square (a=1), or narrow and deep (a > 1). Table <ref> shows the average performance of the model for kernel circuits with an aspect ratio a=1, and where M=S=100. (Note that here, the data is aggregated over circuits whose width varies between 2 and 6.) Similar behavior as Table <ref> is observed; namely, the model generally under-predicts job runtime. 
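For reference, the encoding circuit defined above corresponds closely to Qiskit's ZZFeatureMap, so the kernel circuits for the two entanglement strategies can be sketched as follows; the assumption that the library's default data map matches Equation (<ref>) should be checked before reuse.

from qiskit.circuit.library import ZZFeatureMap

def kernel_circuit(x, y, reps=1, entanglement="linear"):
    # Encoding circuit U with D = reps repetitions and either "linear" or "full"
    # entanglement; the kernel circuit is U(x) composed with U^dagger(y).
    encode = ZZFeatureMap(feature_dimension=len(x), reps=reps, entanglement=entanglement)
    circ = encode.assign_parameters(x).compose(encode.assign_parameters(y).inverse())
    circ.measure_all()
    return circ

# Aspect ratio a = 2D/n: four qubits with D = 2 repetitions gives a square circuit (a = 1).
square_full = kernel_circuit([0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8],
                             reps=2, entanglement="full")

As noted above, the model generally under-predicts runtime for these circuits.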
The degree to which the model does so depends on the entanglement structure of the encoding circuit. In particular, circuits with a “linear" entanglement structure have a runtime ratio closer to 0 than those whose entanglement structure is “full". From Figure <ref>, we see the former family of circuits has a lower depth compared to quantum volume circuits of a similar volumetric area. This suggests the depth-dependent factor in the definition of D_eff in Equation (<ref>) plays a significant role in the model's performance. Figure <ref> extends Table <ref> to include rectangular circuits, and to vary the number of shots. The behavior of the model is very similar as to what was seen for quantum volume circuits: namely, the model's runtime ratio decreases dramatically as S→ 0, and once S is on the order of 500 or so, the ratio stabilizes. This behavior consistently occurs across a variety of circuit aspect ratios a, and is also consistent when the entanglement structure of the circuit is changed. With respect to the circuit's aspect ratio, the model does better when the circuits are narrow and deep (a > 1) than wide and shallow (a < 1). This effect appears to be more pronounced for the “full" entanglement structure, especially when S is large. One way to understand this is that with the “full" entanglement structure, every qubit is entangled with every other; for such circuits which are also wide, the depth of the circuit when transpiled to hardware could be quite large; this impacts the model's predictions via D_eff. Finally, we consider the impact of changing the number of circuits in the job, M. Figure <ref> shows the mean runtime ratio r as a function of M, where the data is segregated on the number of shots, and whether a=1. The model's behavior is consistent for both square (a=1) and rectangular (a≠ 1) kernel circuits, and the mean runtime ratio is fairly stable across a wide range of values for M. Taken together, Figures <ref> and <ref> suggest that of the four parameters in the model, it is the number of shots S and the number of effective quantum volume layers D_eff which play the most (and second-most) substantial role in influencing the model's performance, respectively. Given the assumptions of the model, this makes sense. S enters multiplicatively in the model; as it goes down, the impact of fixed, shot-independent overheads becomes more important, but isn't explicitly captured by the model[As noted in Section <ref>, this is an intentional choice, to avoid creating an unwieldy and over-parameterized model.]. The job's runtime is also impacted by how deep the circuits in the job are. The depth of the circuits is impacted both by the number of repetitions of the template and the entangling strategy, both of which impact D_eff. Having evaluating the model's performance on two kinds of circuits using synthetic data, we now turn to using the model to estimate runtimes for large data sets in a real-world context. § IMPLICATIONS FOR RUNTIME ON PRACTICAL DATA SET SIZES The prior section studied the model's performance. In this section, we use the model to examine the implications of running jobs for calculating quantum kernels where the underlying data set is both large and practical. 
The choice of the data set was influenced by the fact this work started as part of a summer internship program offered by IBM and its Operations Risk Insights (ORI) organization[Operations Risk Insights (ORI) is an automated, comprehensive, and Watson-powered alert service which assesses employee safety, operations and natural disaster risk events to identify those posing the greatest threat of impact to the business continuity.]. Over the summer of 2022, ORI began incorporating into its capabilities a purely classical model to predict flash floods. In parallel, the authors (and others, noted in the Acknowledgements) began exploring the use of a quantum-enhanced model through the use of quantum kernels <cit.>. Flash floods are are a significant contributor to annual, weather-inflicted monetary losses. They can be catastrophic to communities, infrastructure, and of course, people. Flash flood events are often unpredictable, making it hard to prepare for or mitigate their potential effects. For example, California's flooding rains and heavy snows which killed at least 17 people likely caused more than $30 billion in damages and economic losses in January of 2023 <cit.>. Improved early warnings of flash floods thus can save lives and reduce economic losses. The ORI effort initially focused on flash flood prediction within the state of Texas, at two levels of geographic granularity: county level, and ZIP code level. At these two levels of granularity, the available data set had N=2513 records and N=70571 records, respectively. Although this number of records may be modest from a classical ML perspective, it is important to keep in mind that generating quantum kernels for both of these data sets requires running on the order of 3 million and 2.5 billion circuits, respectively. Utilizing the runtime model in Equation (<ref>), we can roughly predict how long running those jobs would take. Figure <ref> plots the predictions of the model out to data set sizes encompassing both the Texas county and ZIP code data sets[For a given number of feature vectors N, the number of quantum kernel circuits M = N(N-1)/2.]. Here, specific values for both D_eff and S are used; namely, D_eff = 2 and S=4000. As we've seen in the previous section, the runtime will be impacted by both of these quantities. The primary focus of the figure is the impact of improving the speed of circuit execution (as measured by CLOPS, C). Current system speeds are on the order of 1K. At such speeds, processing the Texas county data set would take on the order of approximately 1 year, and processing the Texas ZIP code data set would be infeasible for all practical purposes. Recently, a demonstration of C>10K CLOPS has been made <cit.>. At those system speeds, processing the Texas county data set could take on the order of months, and processing the Texas ZIP code data set would still remain infeasible. Setting aside whether quantum advantage can be found for these particular data sets and the particular encoding circuit used, it is still useful to highlight how considerations from the overall flash-flood prediction workflow used by ORI would place constraints on the acceptable amount of runtime on quantum hardware, assuming quantum-enhanced classifiers were deployed to the platform. That is, the ORI platform updates its flash flood predictions every 2 hours. If a quantum-enhanced classifier was incorporated into the platform, it would be necessary to refresh the kernel values within that time window. 
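As a rough, back-of-the-envelope check of the scale involved, the model can be evaluated for a full data set of N feature vectors; the settings below (D_eff = 2, S = 4000) mirror those used for the figure, while the CLOPS values are illustrative operating points rather than measured numbers.

def dataset_runtime_days(n_vectors, clops, s_shots=4000, d_eff=2.0):
    m_circuits = n_vectors * (n_vectors - 1) // 2   # one circuit per pairwise kernel entry
    return m_circuits * s_shots / clops * d_eff / 86400.0

# Texas county data set, N = 2513: about 292 days at 1K CLOPS and about 29 days
# at 10K CLOPS, consistent with the order-of-magnitude statements above.
for c in (1_000, 10_000, 100_000):
    print(c, round(dataset_runtime_days(2513, c), 1), "days")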
And while re-processing an entire data set may not be necessary, the implication from the model presented here is that, barring advances in the underlying algorithm itself, the runtime on quantum hardware would need to come down by several orders of magnitude in order for the quantum kernel part of the ORI platform to sustain the desired rate of updates for the model. This highlights the quantum part of a quantum-enhanced workflow doesn't exist in isolation, and there are considerations which have nothing to do with quantum computing per se which can impact the feasibility of deploying a quantum-enhanced approach to a classical workflow. § CONCLUSIONS & DISCUSSION Quantum kernels are one particular quantum machine learning algorithm, in which classical ML models are enhanced by similarity measures computed by running quantum circuits on quantum systems. Given a data set of size N, 𝒪(N^2) kernels need to be calculated. In this work, we studied the problem of modeling the runtime of a collection of circuits used to calculate quantum kernel values, and presented a predictive model to do so [Equation (<ref>)], based on a recently-introduced measure of the speed of quantum computers, CLOPS <cit.>. We validated the model's performance by comparing its predictions against empirical runtime information, and found the model is most accurate when the job closely mimics those used to calculate the CLOPS of a given system. When the job being run is substantially different, the model's performance suffers. When the number of shots is small, the model consistently under-predicts runtime, due to the fact that, in reality, the software stack has fixed (and unavoidable!) overheads not accounted for by the model. When the number of shots is large, the model generally over-predicts runtime in a shot-dependent fashion. This suggests the model could be used – to reasonable accuracy – in a regime where the number of shots is modest, or large. Further, the model's performance is relatively stable as with respect to the number of circuits in the job, meaning it can be applied in the context of jobs with very large numbers of circuits. We note here one of the main difficulties in making statement about the model as such is the degree to which the job execution path used to establish a system's CLOPS value differs from the one used here. This work leverages the Qiskit Runtime service for job execution, a service not currently used for CLOPS values. It would be interesting to re-consider the analysis presented here if it was, as we could then better understand whether the issues with the model's performance come from the model as such, or the particular execution path of the jobs. By extrapolating the model to very large data set sizes (i.e., a number of feature vectors on the order of thousands and beyond), we find at current system speeds, processing such data sets would require a prohibitively large amount of runtime on quantum hardware. However, for smaller data set sizes, quantum kernels can be processed in a reasonable amount of time on today's systems. What's more, as noted in the Introduction, quantum advantage with quantum kernels has yet to be attained in a practical setting, meaning scaling up to larger data set sizes wouldn't be necessary right now for early users of quantum-enhanced models. 
That is, for small data set sizes, classical data scientists could already begin exploring quantum-enhanced, kernel based algorithms on real-world data, with circuit execution runtimes that enable interesting experimentation and work. In this sense, the speed of the hardware is not an obstacle to data scientists and other early end-users of quantum-enhanced models to begin upskilling themselves today. It is important to note this work does not touch on the other practical or theoretical considerations necessary to substantiate a claim of quantum advantage. We make no claims – nor dare speculate – on whether improvements in job runtime would enable quantum advantage using the particular encoding circuit we studied, the particular quantum computing modality used (namely, superconducting qubits), and the particular data set considered. The results of this work suggest 4 primary lines of additional research. First, there is a need to apply and validate the runtime model introduced here to a larger variety of circuits used for quantum machine learning. For example, ad-hoc (or “hardware-efficient") circuits are used to encode data in a way with minimal circuit depth and for which their 2-qubit gates respect the connectivity of the qubits in the hardware. Studying a larger variety of circuits would provide more evidence of the regimes of validity of the model. Second, hardware runtime could be further reduced through parallelization of the job across multiple QPUs. If the time on 1 QPU is T, parallelizing across X > 1 QPUs could reduce the total time to approximately T/X. As more quantum systems come online, the feasibility of doing this parallelization becomes higher[Note this approach ignores any latency effects, the overhead of the software orchestrating the parallelization, and the potentiality of the parallelized jobs being sent to different queues, each with their own queue behavior.]. Further, multiple quantum kernel circuits could be executed on the same chip, assuming a sufficient number of qubits is available. This would provide another level of parallelization. Third, one of the most straightforward ways to decrease job execution is to reduce the number of shots S. Doing so comes with the cost of increasing the shot noise of the estimated kernel values. A close collaboration with classical ML scientists and practitioners looking at kernelized ML algorithms with robust performance guarantees in the face of noisy kernel values would be fruitful, and could help the quantum ML research community understand what the practical upper bounds on S might be, both in the context of quantum-enhanced support vector machines, and other ML algorithms. For example, recent work has shown that in order for an SVM to have a generalization error at most ϵ when trained on a data set of size N, the total number of shots required per kernel entry scales as S ∼𝒪(N^8/3/ϵ^2) <cit.>. In turn, this implies a runtime – across the entire data set – of 𝒪(N^2)*𝒪(N^8/3/ϵ^2) * D_eff/C = 𝒪(N^4.67D_eff/(Cϵ^2)). This is a rather unfavorable scaling with respect to N in practice, and motivates exploring regimes wherein small amounts of training data are required, and algorithms which can tolerate relatively large amounts of error in the estimated kernel entries. Fourth, the notion of “effective number of quantum volume layers of a circuit" should be studied in more depth. We presented one definition [Equation (<ref>)]; others are possible. 
In particular, the definition of D_eff introduced here was particular to quantum kernel circuits; defining one which could be applied across a wider family of circuits would be useful. In sum, this work showed it is possible to model job execution time using a holistic measure of the speed of quantum systems. This model has four parameters: number of circuits M, number of shots S, system CLOPS C, and number of effective quantum volume layers D_eff. Although simple, we showed this model can be used – with reasonable accuracy – to predict job execution time, especially in a regime where the number of shots is large. We encourage end-users of quantum computing systems to leverage this model for analyzing the quantum-enhanced portion of their workflows, and for quantum computing applications researchers to find ways to apply it to other applications of quantum algorithms beyond quantum kernels. § ACKNOWLEDGEMENTS We acknowledge prior collaborative contributions from the other IBM ORI Extreme Blue Interns for Summer 2022: Chelsea Zackey, Christopher Moppel, and Samantha Anthony. Further, we acknowledge the support of other ORI Exterme Blue mentors, including Bhanwar Gupta, Chester Karwatowski, Rinku Kanwar, Mallikarjun Motagi and Ayush Kumar. In addition, we acknowledge the support of the IBM Extreme Blue program, as well as Dr. Liliana Horne of IBM's Global Chief Data Office. TLS thanks Drs. Paul Nation, Omar Shehab, and Stefan Wörner for feedback on earlier versions of this manuscript. JW thanks Fausto Palma of the IBM CIO Supply Chain and Technology Systems group for his gracious support. Finally, we acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or IBM Quantum. § REAL-WORLD RESULTS: METHODS AND DETAILS This appendix describes the methods and workflows used to generate the empirical results presented in Figure <ref>. These workflows were built in a broader context of creating an end-to-end pipeline for training classical and quantum-enhanced models, leveraging state-of-the-art, cloud-based tools. In particular, the workflows were built using Kubeflow <cit.> running on the IBM Cloud Kubernetes Service, to manage the complexity of both the classical and quantum machine learning experimental workflows. Kubeflow is an open source toolkit and a de-facto standard for building, experimenting with, and deploying ML pipelines to various environments for development, testing, and production-level model serving, on containerized environments such as Red Hat OpenShift <cit.> and vanilla Kubernetes <cit.>. Within Kubeflow are Kubeflow Pipelines (KFP), which is a “platform for building and deploying portable, scalable machine learning (ML) workflows based on Docker containers”. Each KFP step or component is containerized, with the ability to share and track results and associated experiment artifacts between components, while allowing independent, long-running steps to proceed in parallel. The end-to-end Kubeflow pipeline consisted of the following steps: * Initialization: Obtaining the latest source code binaries from Github * Data preparation: Performing feature selection and data resizing. * Quantum kernel generation: create the jobs needed to calculate quantum kernel values, and send them to IBM Quantum systems. 
* Aggregate Qiskit Runtime job results: extract empirical runtime information and a quantum kernel matrix from the job results * Classical kernel generation: calculate a classical kernel (RBF kernel) for the data set generated in Step 2. * Model training and analysis: train 2 SVMs (one for each kernel matrix), and evaluate their accuracy. An example pipeline – showing the launching of 5 independent quantum and classical kernel generation tasks – is given in Figure <ref>. A major benefit of using Kubeflow for running the experiment done in this work is the ability to parallelize the workflow across multiple splits, where each split can consist of independent data sets. In addition, pipeline runs are automated and asynchronous, on a managed cloud environment (vs., e.g., running manually on a standalone machine). As a result, a very large experiment can be split into multiple independent ones, meaning the failure of any one sub-experiment does not impact whether other sub-experiments fail. This also allows for an easy reboot/restart of the failed sub-experiments. In addition, because of the cloud-based nature of Kubeflow, long-running experiments (e.g., several hours) can be easily handled, due to the fact the orchestration of the work is handled via the cloud. Finally, the use of splits allows for more usage of Qiskit Runtime compute resources as they become available, by, e.g., sending different splits to different systems. We now provide brief descriptions of some of the steps above. For step 2, the real-world data sets used consisted of 38 features, and was constructed out of long-term flash flood records and historical analysis from the following sources: * National Oceanic and Atmospheric Administration (NOAA), for historical precipitation data * The Weather Channel (TWC), for hourly atmospheric and precipitation data * Multi-Resolution Land Characteristics Consortium (MRLC), for land surface data * US Geological Survey (USGS), for regional land classification The particular dataset used here is one generated for the state of Texas at the county level, which had N=2513 records. For the data preprocessing, classical principal component analysis (PCA) was used to perform feature reduction to go from the initial 38 features to the statistically most significant 2, 3, 5, and 7 features. This allowed for a study of the impact on model accuracy as the number of features was changed. For the data points in Figure <ref>, the two most significant features – PrecipAmountAvg and RelativeHumidityAvg – were used. Because flash floods represented only 3% of the data set, caution was needed during data preparation to avoid issues that are typical of highly imbalanced datasets. When attempting resize the dataset from the initial N=2513 records to smaller batches of N=10, 25, 50, 75, 100, 150, 200 the Imbalanced Learn RandomUnderSampler <cit.> was used ensure we maintained an appropriate representation of flash floods in the training dataset. Note that both the feature reduction and the data resizing are done each time our experimental pipelines are run, as they are computationally easy. For step 3, the code used to generate jobs consisting of quantum kernel circuits was based on the open source, quantum kernel library in Qiskit Machine Learning project <cit.>, the compute_overlap, compute_circuit, and evaluate methods in particular. 
These functions were modified to include calls to the Qiskit Runtime APIs to facilitate the extraction of job execution information, as well as the quantum kernel matrix itself (step 4). Jobs were run on the ibmq_auckland system, using a dedicated reservation mode made available via the IBM Quantum Platform. The ibmq_auckland system is a 27 qubit machine, with a quantum volume of 64, and CLOPS of 2400. For step 5, the choice of RBF kernel was motivated by prior work from the authors and other collaborators <cit.>, which showed the RBF kernel yielded the best balanced accuracy and F1 score compared to other classical kernel functions and model approaches for the flash flood data set. This step is part of the pipeline since it is not computationally intensive, and provided a classical benchmark against which to compare a quantum-enhanced classifier.
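A condensed sketch of the data-preparation and classical-benchmark steps described above is given below; it assumes scikit-learn and imbalanced-learn as mentioned in the text, while the resampling strategy and hyperparameters shown are illustrative placeholders rather than the exact settings used in the study.

from imblearn.under_sampling import RandomUnderSampler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def prepare_and_benchmark(X, y, n_features=2):
    # Step 2: reduce the 38 raw features to the statistically most significant ones.
    X_reduced = PCA(n_components=n_features).fit_transform(X)
    # Keep flash-flood events represented when shrinking the data set; the exact
    # sampling strategy used in the study is not specified here, so the library
    # default (balancing the two classes) is used purely for illustration.
    X_small, y_small = RandomUnderSampler(random_state=0).fit_resample(X_reduced, y)
    # Step 5: classical RBF-kernel SVM benchmark against which the
    # quantum-enhanced classifier is compared.
    return SVC(kernel="rbf").fit(X_small, y_small)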
http://arxiv.org/abs/2307.03925v2
20230708074740
Probe of soft-QCD in minimum bias events of pp collisions with the ATLAS at the LHC
[ "Yuri A. Kulchitsky" ]
hep-ex
[ "hep-ex", "nucl-ex" ]
§ INTRODUCTION The study of soft Quantum Chromodynamics (QCD) charged-particle distributions in proton–proton (pp) and proton–antiproton (pp̅) collisions probes the strong interaction in the low transverse momentum (p_T) regime of non-perturbative QCD (non-pQCD). A description of low-p_T processes within pQCD is not possible; predictions can be made with phenomenological models inspired by QCD (see reviews in <cit.>). In the low-p_T region, charged-particle interactions are typically described by QCD-inspired models implemented in Monte Carlo (MC) event generators. Data are used to constrain such MC models and gain further insight into the particle dynamics of the low-p_T regime, and measurements are used to constrain the free parameters of these models. Low-p_T processes arising from pile-up events[Pile-up events are pp interactions in the same bunch crossing at higher instantaneous luminosities, additional to the triggered collision between two protons.] may also affect the topologies of events involving an interaction with a high-p_T scale. An understanding of soft-QCD processes is therefore important both in its own right and as a means of reducing systematic uncertainties in measurements of high-p_T phenomena. An accurate description of low-p_T strong-interaction processes is essential for simulating single pp and pp̅ interactions and the pile-up effects. Understanding of soft-QCD interactions has a direct impact on precision measurements of high-p_T phenomena and searches for new physics; it provides insight into strong interactions in the non-pQCD regime: soft-QCD results are used in MC generator tuning, and a soft-QCD description is essential for simulating the underlying event (UE) with multiple parton interactions (MPI) and initial- and final-state gluon radiation (ISR, FSR). An important example of a process which is entirely governed by soft-QCD physics is hadronization. Since there is no uniform description of the phenomena that occur at low p_T, a variety of models try to explain them through comparisons with the extracted data. There is a wealth of measurements at CERN's Large Hadron Collider (LHC) <cit.> that probe the soft-QCD region, and essentially all LHC experiments measure soft-QCD phenomena. Minimum bias (MB) events were used for these soft-QCD studies. MB events are inelastic events selected by an MB trigger with as little bias as possible, i.e. predominantly low-p_T events. MB events include non-diffractive (ND), single- (SD), double- (DD) and central-diffractive (CD) processes. In order to make a more complete study of particle properties in MB events, results are given for different multiplicity and kinematic selections termed “phase spaces” (PS). Measurements of charged-particle distributions by the ATLAS <cit.> detector <cit.> at the centre-of-mass (CM) energies √(s) = 0.9, 2.36, 7, 8 and 13 TeV were performed for the pseudorapidity (η) region |η| < 2.5, for samples of events with primary charged-particle multiplicity (n_ch) greater than or equal to 2 and charged-particle transverse momentum p_T > 100 MeV, and with n_ch ≥ 1, 6, 20, 50 and p_T > 500 MeV.
Charged-particle transverse momentum results for pp and Pb + Pb interactions at 2.76 <cit.>, for pp and p + Pb interactions at 5.02 <cit.> in the pseudorapidity range |η| <2 of particles with p_T > 500  and p_T > 4000 , respectively, and with p_T⪅ 200  <cit.> were studied by the ATLAS. Charged-particle distributions were measured by the ALICE <cit.> Collaboration <cit.>, the CMS <cit.> Collaboration <cit.>, the CMS and TOTEM <cit.> Collaborations <cit.>, the LHCb <cit.> Collaboration <cit.>, the LHCf <cit.> Collaboration and the TOTEM <cit.> Collaboration <cit.>. Similar measurements aimed at probing strong interactions at low p_T have been made in lower-energy from √(s) = 0.03 to 0.9  for e^+ e^-, e p and p p̅ collisions. The low p_T studies were carried out in pp collisions at the ISR (CERN) by the ACHM and ABCDHW Collaborations at √(s) = 0.0304, 0.0445, 0.0526 and 0.0622  <cit.>. Similar studies were also carried out in p p̅ collisions at the SPS (CERN) by the NA22 <cit.>, UA1 <cit.>, UA4 <cit.> and UA5 <cit.> Collaborations at √(s) = 0.022, 0.2, 0.54 and 0.9 . Important results on this subject were obtained also in p p̅ collisions at Tevatron (Fermilab) by the CDF <cit.> Collaboration at √(s) = 0.63, 1.8 and 1.96  <cit.> and by the E735 Collaboration at √(s) = 0.3, 0.54, 0.9 and 1.8  <cit.>. The hypothesis that at very high energies the probability distributions P (n, √(s)) of producing n particles in a certain collision process should exhibit a scaling relation was proposed in <cit.>. This scaling behaviour is a property of particle multiplicity distributions known as the KNO scaling hypothesis. The main assumption of the KNO scaling is the Feynman scaling <cit.>, where it was concluded that for asymptotically large energies the mean total number of any kind of particle rises logarithmically with the CM energy as ⟨ n ⟩∝ln√(s). Results of the KNO scaling study using the ATLAS experiment data are presented in <cit.>. The KNO scaling was also studied at the LHC energies by the CMS <cit.> and ALICE <cit.>. Charged-particle multiplicity and transverse momentum distributions in pp collisions at CM energies √(s) = 0.2 – 14  within the MC Quark-Gluon String Model (QGSM) <cit.> based on Gribov’s Reggeon field theory (RFT) <cit.> were studied in <cit.>, where a special attention was given to the origin of violation of the KNO scaling. A detailed theoretical description of the KNO scaling was done in <cit.>. The novel physically well-motivated scaling rules for high-energy data were introduced in <cit.>. The MB events were also used by the LHC experiments to study UE, Bose-Einstein correlations (BEC), an inelastic cross section, track jets, particle correlations, hadronization and colour reconnection. To perform precise Standard Model measurements or to search for new physics phenomena at hadron colliders, it is important to have a good understanding not only of the primary short-distance hard scattering process, but also of the accompanying interactions of the rest of the pp collision — collectively termed the UE. It is impossible to separate uniquely the UE from the hard scattering process on an event-by-event basis, but observables can be defined which are particularly sensitive to the properties of the UE. Such observables have been studied using the MB events measurements performed by the ATLAS detector in pp collisions at √(s) = 0.9 and 7  <cit.> and at √(s) = 13  <cit.>. 
Using the MB events the BEC effect with one size parameter, the source radius, has been studied by the ATLAS detector in pp collisions at √(s) = 0.9 and 7  <cit.> and at √(s) = 13  <cit.>. Fiducial inelastic cross-sections were measured by the ATLAS at √(s) = 7  <cit.> and at √(s) = 13  <cit.>. The recent soft-QCD measurement results of the LHC experiments are reported, for example, in <cit.>. This paper is organized as follows. A short description of the soft-QCD physics is presented in Sec. <ref>. The ATLAS detector for study of MB events is described in Sec. <ref>. The MC model tunes are presented in Sec. <ref>. The charged-particle analysis is performed in Sec. <ref>. A study of the KNO scaling is presented in Sec. <ref>. The summary and conclusions are given in Sec. <ref>. § SOFT QCD Understanding of soft-QCD interactions has a direct impact on precision measurements in high energy physics and searches for new physics which provides insight into strong interactions in non-pQCD regime: the soft-QCD results are used * in MC generator tuning, * for description of UE simulation, * for description of multiple parton interactions (MPI), * for description of initial and final state gluon radiation (ISR, FSR). Schematic diagrams of non-diffractive (ND) and diffractive processes with single dissociation (SD), double dissociation (DD), and central diffraction (CD) are shown in Fig. <ref>. As discussed in Ref. <cit.>, the Ryskin-Martin-Khoze (RMK) model introduced in <cit.> based on a modification of the classic Gribov's Reggeon Field Theory (RFT) <cit.> allows one to trace the smooth transition from the pure perturbative region with large parton transverse momentum (k_T) into the soft domain. Strong absorption of low-k_T partons plays a crucial role here since it produces an effective infrared cut-off and provides a possibility of extending the parton approach used for hard processes to also describe high-energy soft and semi-hard interactions. This approach combines a description of the soft physics and diffraction with the jet physics in a coherent self-consistent way. The soft and hard components independently include <cit.> is also possible. In this approach the soft part is described in terms of RFT with the phenomenological soft Pomeron pole while the hard part is calculated in terms of the parton model for mini-jet production with the energy-dependent cut-off k_T > k_0 (s). A combined description of soft and hard processes in hadronic collisions is reached within the QGSJET-II MC model <cit.> using of the semi-hard Pomeron approach <cit.>. In Ref. <cit.> a model was constructed, which incorporated attractive features of two successful theoretical approaches to high-energy QCD: Balitsky-Fadin-Kuraev-Lipatov (BFKL) Pomeron calculus <cit.> and the Colour Glass Condensate approach (leads to a saturation of parton density with s) <cit.>. In Refs. <cit.> an analysis was done for the data set divided into two classes corresponding to soft and hard interactions. The term hard' interactions is typically understood to mean high-p_T parton-parton interactions associated with such phenomena as jets, while the soft component consists of everything else. A comparison of the results shows distinct differences in the behaviour of the two samples as a function of the CM energy. Evidence was found that the properties of the soft sample are invariant as a function of the CM energy. 
The separation of hard and soft interactions in the LHC experiments can be done using the event shape observables <cit.>, for example, spherocity or transverse trust. § ATLAS DETECTOR The ATLAS is a multipurpose particle physics experiment <cit.> operating at one of the beam interaction points at the LHC <cit.>. the cut-away view of the ATLAS detector[ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upward. Cylindrical coordinates (r, ϕ) are used in the transverse plane, ϕ being the azimuthal angle around the beam pipe. The pseudorapidity is defined in terms of the polar angle θ as η=-lntan(θ/2). The angular distance is measured in units of Δ R = √( (Δη)^2 + (Δϕ)^2). ] is shown in Fig. <ref>. The ATLAS detector covers almost the whole solid angle around the collision point with layers of tracking detectors, calorimeters and muon chambers. It is designed to study a wide range of physics topics at LHC energies. The tracking devices and the trigger system <cit.> are of particular importance for the study of MB events. The innermost part of the ATLAS detector is the Inner Detector tracker (ID), which has full coverage in ϕ and covers the pseudorapidity range |η|<2.5. The cut-away view of the ATLAS ID is shown in Fig. <ref>. The ID is immersed in the 2 T axial magnetic field of a superconducting solenoid and measures trajectories of charged particles. It consists of a silicon pixel detector (Pixel), a silicon microstrip detector (SCT) and a straw-tube transition radiation tracker (TRT), each of which is split into a barrel and two endcap components. The Pixel, SCT and TRT are located around the interaction point spanning radial distances of 33–150 mm, 299–560 mm and 563–1066 mm, respectively. The barrel (each endcap) consists of four (three) pixel layers, four (nine) double layers of silicon microstrips and 73 (160) layers of TRT straws. The Pixel, SCT and TRT have (r, ϕ)-position resolutions of 10 μm, 17 μm, and 130 μm, respectively. During the first long shutdown of the LHC, the Insertable B-Layer (IBL) <cit.> was constructed, inserted and commissioned to become an additional (innermost) layer of the existing Pixel Detector. The IBL is composed of 14 lightweight staves arranged in a cylindrical geometry, each made of 12 silicon planar sensors in its central region and 2× 4 three-dimensional sensors at the ends. The IBL pixel dimensions are 50 μm in the ϕ-direction and 250 μm in the z-direction (compared with 50 μm by 400 μm for the other pixel layers). The intrinsic spatial resolution of the IBL readout is 10 μm in the (r, ϕ)-position and 75 μm in the z-position <cit.>. The smaller radius and the reduced pixel size result in improvements in both the transverse and longitudinal impact parameter resolutions <cit.>. The services for the existing pixel detector were upgraded, significantly reducing the amount of material in the region |η| > 1.5, in particular at the boundaries of the active tracking volume. A track from a charged particle traversing the barrel detector typically has 12 silicon measurement points (hits), of which 4 at the Pixel and 8 at the SCT, and more than 30 TRT straw hits. Requirements on an IBL hit and on impact parameters strongly suppress the number of tracks from secondary particles. 
The ATLAS detector has a two-level trigger system: the first-level (L1) trigger and the high-level trigger (HLT) <cit.>. MB events were required to satisfy L1 triggers using the MB trigger scintillators (MBTS). These are mounted at each end of the detector in front of the liquid-argon endcap-calorimeter cryostats at z = ±3.56 m, and are segmented into two rings in pseudorapidity (2.07 < |η| < 2.76 and 2.76 < |η| < 3.86). The inner (outer) ring consists of eight (four) azimuthal sectors, giving a total of 12 sectors on each side. The MB events were selected on the basis of the MBTS alone. The trigger used in this measurement requires at least one signal in a scintillator on one side to be above threshold. The MB ATLAS trigger collect to inelastic events (INEL) in the definition of the ALICE or the CMS. The methods developed for the measurement of the properties of MB events during low luminosity runs using the ATLAS detector was described in Ref. <cit.>. An extensive software suite <cit.> is used in the reconstruction and analysis of real and simulated data, in detector operations and in the trigger and data acquisition systems of the experiment. § MONTE CARLO MODELS Inclusive MB data are modelled in MC event generators assuming three different diffractive processes: non-diffractive, single diffractive and double diffractive. Low-p_T scattering processes may be described by the lowest-order (LO) pQCD two-to-two parton scatters, where the divergence of the cross section at p_T = 0  is regulated by phenomenological models. A summary of MC generator tunes used for comparison with the MB results based on the ATLAS measurements <cit.> is presented in Table <ref>. The Pythia 6 <cit.>, Pythia 8 <cit.>, PHOJET <cit.>, EPOS <cit.> and QGSJET-II <cit.> MC generators are used to correct the data for detector effects and to compare with particle-level corrected data. For the purpose of comparing the present measurements to different phenomenological models describing MB events, the following particle-level MC samples were generated. Pythia 8 <cit.> and EPOS <cit.> models use the effects of colour coherence, which is important in dense parton environments and effectively reduces the number of particles produced in multiple parton–parton interactions. In Pythia 8 the simulation is split into non-diffractive and diffractive processes, the former dominated by t-channel gluon exchange and amounting to approximately 80% of the selected events, and the latter described by a Pomeron-based approach <cit.>. Different parameter settings in the models are used in simulation to reproduce the existing experimental data and are referred to as tunes. A tune is a particular configuration or set of values of the parameters of a particular MC model. The Pythia 8 MC generator <cit.> was used with the parameter values set to the A2 tune <cit.> and with the MSTW2008LO PDF set <cit.>. The contributions from ND, SD and DD processes were included in proportion to the cross sections predicted by Pythia 8 with the A2 tune. The ATLAS MB tune Pythia 8 A2 was used for determination of detector corrections. This was tuned using ATLAS MB data at 7  for the MPI parameters. The Pythia 8 Monash <cit.> is used the tune to MB and UE results. It was constructed using Drell–Yan and UE data from ATLAS, and also data from the CMS, SPS, and Tevatron in order to constrain energy scaling. 
The Monash UE tune is based on the NNPDF2.3LO PDF <cit.> and incorporates updated fragmentation parameters, as well as SPS and Tevatron data to constrain the energy scaling. The Pythia 8 version 8.130 MC generator <cit.> uses the diffraction model that produces much harder p_T and n_cn spectra for the SD and DD contributions than Pythia 6. The default parton shower model is similar to that used in Pythia 6 MC09. The new Pythia 8 A3 tune <cit.> is suitable for inclusive QCD modelling for LHC Run 3. The Pythia 8 A3 uses the ATLAS Run 2 charged particle distribution and inelastic cross section results in addition to the Run 1 results used previously to construct MB tunes. The A3 uses the same NNPDF 2.3LO PDF and demonstrates that an acceptable description of data can be achieved by using the Donnachie–Landshoff (DL) model for diffraction. The ATLAS Pythia 6 <cit.> MC09 tune <cit.> uses a specific set of optimized parameters; it employs the MRST LO* PDF <cit.> and the p_T-ordered parton shower <cit.>. These parameters were derived by tuning to the UE and MB Tevatron results from energy region √(s) = 0.63 – 1.96 . The ATLAS Pythia 6 MC09c tune <cit.> is an extension of the ATLAS MC09 tune where the strength of the colour reconnection (CR) was tuned to describe the ⟨ p_T⟩ distributions as a function of n_ch measured by CDF in p p̅ collisions at the Tevatron <cit.>. The CR phenomenon is a pure soft-QCD effect. The point is that after a number of coloured secondary partons are produced, there are different possibilities of forming the colour flow between these partons and grouping the partons into colourless clusters. In the process of reconnection, one rearranges the colour flow in such a way as to minimize the size of the clusters. This is especially important when dealing with contributions of MPI. The reconnection between the different cut of Pomeron diagrams diminishes the final multiplicity and can change the form of the n_ch distributions <cit.>. The Pythia 6 AMBT1 tune (ATLAS Minimum Bias Tune 1) <cit.> was developed in order to adapt the free parameters of the ND models to the experimental data at √(s) = 0.9 and 7  in a diffraction-reduced PS with n_cn≥ 6, p_T > 500 , |η| <2.5. The starting point for this tune is the ATLAS Pythia 6 MC09c <cit.>. The Pythia 6 DW tune <cit.> uses virtuality-ordered showers and was derived to describe the CDF Run II UE and Drell–Yan data. The Pythia 6 AMBT2B tune <cit.> with the CTEQ6L1 PDF <cit.> was evaluated using jet and MB data. EPOS <cit.> provides implementation of a parton-based Gribov's Reggeon theory <cit.> which is an effective QCD-inspired field theory describing hard and soft scattering simultaneously. The EPOS generator, version LHCv3400, was used with the LHC tune <cit.>. The EPOS generator does not rely on PDF. The QGSJET-II model version 04 <cit.> provides phenomenological ] treatment of hadronic and nuclear interactions in the framework of the Reggeon field theory. The soft and semihard parton processes are included within the “semihard Pomeron” approach. For QGSJET-II the default settings of the generator are applied. The QGSJET-II generator does not rely on PDF. The PHOJET MC generator <cit.> version 1.12.1.35 is used as an alternative model to Pythia-based generators. It describes low-p_T physics using the two-component Dual Parton Model (DPM) <cit.> which includes soft hadronic processes described by Pomeron exchange and semi-hard processes described by perturbative parton scattering. 
The PHOJET relies on Pythia 6 version 6.1.15 for the fragmentation of partons. The Pythia 6 MC generator Perugia 0 tune <cit.> with the soft-QCD part is tuned using only MB data from the p p̅ Tevatron and CERN colliders. All large MC samples of MB events were generated and passed through the ATLAS simulation program <cit.>, which is based on Geant4 <cit.>, and the reconstruction chain, which is exactly the same as used for collision dataset. The ATLAS used 13 MC generators and theirs tunes Pythia 6 <cit.>, Pythia 8 <cit.>, PHOJET <cit.>, EPOS <cit.>, QGSJET-II <cit.> to correct the data for detector effects and to compare with particle-level corrected MB results, which are presented in Table <ref>. The comparisons of the MC predictions with the ATLAS MB results are presented in Sec. <ref>. § ANALYSIS OF MINIMUM-BIAS EVENTS Measurements of inclusive particle spectra belong to basic items in the physics programs of LHC experiments, and they are usually measured regularly at each collision energy. The charged-particle multiplicity is one of the key characteristics of high-energy hadron collisions and has been the subject of many experimental and theoretical studies because, although quite simple to measure, it is quite difficult to describe it in the full measured range. Measurements of charged-particle distributions probe the non-pQCD regime where QCD-inspired models implemented in MC event generators are used to describe the data and to constrain free parameters of MC models. Accurate description of low-p_T strong interaction processes is essential for simulating single pp and pile-up multiple pp interactions. Such pp measurements are also used as input in many models trying to describe heavy-ion results. The results used in this review are based on the pp data collected at √(s) = 0.9 – 13  recorded by the ATLAS experiment <cit.> at the LHC <cit.> in 2010 – 2015 <cit.>. The data were taken in a special configuration of the LHC with low beam currents and a reduced beam focusing, producing the low mean number of interactions per bunch-crossing in the range 0.003 – 0.007. The corrected distributions for primary charged particles in five separate PS regions for events with n_ch≥ 2, p_T >100 , n_ch≥ 1, p_T >500  and n_ch≥ 6, 20, 50, p_T >500  are used. The results are compared to predictions of models tuned to a wide range of measurements. The measured distributions are presented as inclusive-inelastic distributions within a given PS region with minimal model-dependent corrections to facilitate comparisons with models. §.§ Observables The following observables were studied by ATLAS: 1/N_ev·d N_ch/dη , 1/N_ev·1/2 π p_T·d^2 N_ch/dηd p_T , 1/N_ev·d N_ev/d n_ch , d⟨ p_T⟩/d n_ch , where, η is the particle pseudorapidity, p_T is the charged-particle transverse momentum,[The factor 2π p_T in the p_T spectrum comes from the Lorentz-invariant definition of the cross-section in terms of d^3 p. The results could thus be interpreted as the massless approximation to d^3 p.] n_ch is the number of primary charged particles in an event within the kinematic acceptance. N_ev is the event number yield for a given event selection, N_ch is the total number of primary charged particles in all selected events in the data sample, ⟨ p_T⟩ is the average transverse momentum of primary charged particles within the kinematic acceptance. 
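As an illustration of how the observables defined above are assembled, the following sketch builds them from toy per-event lists of (η, p_T) values. The event generation, binning and variable names are purely illustrative assumptions of this sketch and are not the ATLAS analysis code.

import numpy as np

rng = np.random.default_rng(0)

def toy_event():
    # Toy stand-in for the selected primary charged particles of one event
    n = rng.poisson(25)
    eta = rng.uniform(-2.5, 2.5, n)
    pt = 100.0 + rng.exponential(600.0, n)   # MeV
    return eta, pt

events = [toy_event() for _ in range(5000)]
n_ev = len(events)
n_ch = np.array([len(pt) for _, pt in events])
eta_all = np.concatenate([eta for eta, _ in events])
pt_all = np.concatenate([pt for _, pt in events])

# (1/N_ev) dN_ch/deta
eta_bins = np.linspace(-2.5, 2.5, 51)
h_eta, _ = np.histogram(eta_all, bins=eta_bins)
dn_deta = h_eta / (n_ev * np.diff(eta_bins))

# (1/N_ev) (1/(2 pi pT)) d^2N_ch/(deta dpT), averaged over |eta| < 2.5
# (hence the factor 5.0 for the pseudorapidity range width)
pt_bins = np.logspace(2.0, 4.0, 41)              # 100 MeV .. 10 GeV
pt_centres = 0.5 * (pt_bins[:-1] + pt_bins[1:])
h_pt, _ = np.histogram(pt_all, bins=pt_bins)
d2n = h_pt / (n_ev * 2.0 * np.pi * pt_centres * np.diff(pt_bins) * 5.0)

# (1/N_ev) dN_ev/dn_ch
mult_bins = np.arange(-0.5, n_ch.max() + 1.5)
p_nch, _ = np.histogram(n_ch, bins=mult_bins, density=True)

# <pT> as a function of n_ch
evt_idx = np.repeat(np.arange(n_ev), n_ch)       # event index of each particle
nch_of_particle = n_ch[evt_idx]
avg_pt_vs_nch = {int(n): float(pt_all[nch_of_particle == n].mean())
                 for n in np.unique(n_ch) if n > 0}

A real analysis additionally corrects these raw distributions for trigger, vertex and track-reconstruction efficiencies, which the sketch above omits.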
A primary charged particle is defined as a charged particle with a mean lifetime τ > 300 ps, which is either directly produced in p p interactions or from decays of directly produced particles with τ < 30 ps. Charged particles produced from decays of particles with τ > 30 ps are considered as secondary particles and are thus excluded. The usually used inclusive charged-particle spectra correspond to events with a minimum multiplicity n_ch≥ 2 or n_ch≥ 1 and contain primary charged particles possessing a minimum transverse momentum p_T > 100  or p_T > 500 , respectively, for the pseudorapidity region |η| < 2.5. Primary charged-particle spectra are also shown for higher-multiplicity events (n_ch≥ 6, 20 and 50, p_T > 500 ). §.§ Pseudorapidity dependence of charged-particle multiplicity §.§.§ ATLAS distributions of charged-particle multiplicity over η The primary charged-particle multiplicity density pseudorapidity distributions (or “pseudorapidity distribution”) for events with n_ch≥ 2, p_T >100  and n_ch≥ 1, p_T >500  for |η| < 2.5 studied by the ATLAS <cit.> at the CM energies √(s)= 13, 8, 7, 2.36 and 0.9 are shown in Figs. <ref>, <ref>(a) and (b), <ref>(a) and (b), <ref> and <ref>, respectively. The pseudorapidity distributions for particles with p_T >500  and higher minimum multiplicities per event n_ch≥ 6, 20, 50 at √(s)= 8  are shown in Figs. <ref>(c) – (d), and for n_ch≥ 6 at √(s)= 7 and 0.9  in Figs. <ref>(c) and <ref>(c), respectively. The accuracy of measurement of pseudorapidity distributions increases with increasing energy, because of the better understanding of dead material values in the ATLAS ID in the data analysis for higher energies. The ATLAS experimental results are compared to predictions of models tuned to a wide range of measurements described in Sec. <ref> and presented in Table <ref>. The measured spectra are presented as inclusive distributions with corrections that minimally rely on the MC model used, in order to facilitate an accurate comparison with predictions. In general, the systematic uncertainties are larger than the statistical uncertainties. In most regions of all distributions the dominant uncertainty comes from the track reconstruction efficiency. Figure <ref> shows the pseudorapidity distributions at √(s) = 13 . The distribution corresponding to the PS with n_ch≥ 2, p_T >100  <cit.> rises as |η| increases, peaking at |η|≈ 1.7 before falling. For the PS with n_ch≥ 1, p_T >500 <cit.>, the mean particle density is roughly constant at 2.9 for |η|≲ 1.5 and falls at higher η. For pseudorapidity distributions at 13  for n_ch≥ 2 with p_T >100  the Pythia 8 Monash tune, EPOS and QGSJET-II give a good description for |η|≲ 1.5 in Fig. <ref>(a). The prediction from the Pythia 8 A2 tune has the same shape as predictions from the other generators, but lies below the data. In case of PS with n_ch≥ 1, p_T >500 , EPOS describes the data for |η|≲ 1.0, and predicts a slightly larger multiplicity at larger |η| values. QGSJET-II and the Pythia 8 Monash tune predict multiplicities that are too large by approximately 15% and 5%, respectively. The Pythia 8 A2 tune predicts a primary charged-particle multiplicity density that is 3% too low in the central region but describes the data well in the forward region. In Fig. <ref>(a) at 8  <cit.> the distribution corresponding to the PS with n_ch≥ 2, p_T >100  is well described by EPOS and Pythia 8 Monash tune but is underestimated by the Pythia 8 A2 tune and QGSJET-II. In Fig. 
<ref>(b) for the PS with n_ch≥ 1, p_T >500  EPOS overestimates the distribution at |η| > 1.7 and describes the data well for the rest of the pseudorapidity range. The data are overestimated by the QGSJET-II and Pythia 8 Monash tune calculations and underestimated by the Pythia 8 A2 tune prediction. A similar shape is seen for the PS corresponding to higher multiplicities with n_ch≥ 6, 20, 50 shown in Fig. <ref>(c) – (e) with the extent of the plateau becoming shorter as the multiplicity threshold is raised. A small apparent structure in the distributions of the central values of the data points occurs at values of |η|∼ 1.7. In this figures all models overestimate the overall yield for the PS with n_ch≥ 6, 20 although Pythia 8 A2 describes the plateau in the central region well. For the largest multiplicity threshold, n_ch≥ 50, all of the models overestimate the data at |η| > 1.7 but provide a better description in the central region. Figures <ref>(a) and <ref>(a) show the η distributions for the most inclusive PS region with n_ch≥ 2, p_T >100 . In these cases the distributions show weaker dependence on |η| than in the other plots at √(s)= 7  and √(s)= 0.9 . Figures <ref>(b), <ref> and <ref>(b) show the pseudorapidity distributions in the PS region with n_ch≥ 1, p_T >500 at √(s)= 7 , √(s)= 2.36  and √(s)= 0.9 , respectively. The mean particle density is roughly constant for |η| < 1.0 and decreases at higher |η|. The distribution shapes of the models are similar except for that of the Pythia 6 DW tune, which has a flatter spectrum and a more pronounced dip at central |η|, especially at low √(s). At energies 7 , 2.36  and 0.9  the Pythia 6 AMBT1 tune gives the best shape and normalisation description of the data, although it was tuned for n_ch≥ 6 in Figs. <ref>(c) and <ref>(c). At √(s)= 7  all the shapes seem to model the observed spectrum reasonably well, but at this energy the difference in normalisation among the models varies more widely and no model reproduces the data. At √(s)= 0.9  there is very little difference between the models both in shape and normalisation with the exception of PHOJET, which shows excellent agreement with the data. The other models show on average too few particles. The shape of the distribution is reasonably well described by all models. In Ref. <cit.> the performance of the ATLAS Pythia 8 A3 tune was presented for primary charged-particle multiplicity density pseudorapidity distributions, transverse momentum distributions and multiplicity distributions; and also average transverse momentum multiplicity distributions, compared to the predictions of the previous ATLAS Pythia 8 tunes — A2 and Monash. Both these tunes use the default Schuler–Sjöstrand (SS) diffraction model <cit.>, and predict the same value. The SS model overestimates the inelastic cross-section measured by ATLAS at 7  and 13 , as can be seen in Table <ref>; alternative models are therefore considered here. Changing the diffractive model affects the charged particle distributions not only at the low multiplicity or in the low p_T region, but also at intermediate values, and in each case, the MPI and CR parameters need retuning in order to preserve reasonable agreement with data. The DL model <cit.> is found to give the best description of the MB observables and the measured fiducial inelastic cross-section <cit.>. The DL model comes with two tunable parameters which control the Pomeron Regge trajectory. 
To understand the energy dependence of the parameters, the tuning results at different √(s) individually using just MB distributions were initially determined. For each parameter at each √(s), a tuned value was determined and then compared to values of the same parameter when a subset of sampling runs is used. The spread of these points was an indication of the statistical and extrapolation uncertainty on the tune, as well as how well was constrained the tuned value of the parameter by the observables used. The next step was to determine the sensitivity of each of these parameters to different observables by successively adding distributions other than those from the MB analysis and varying the relative weight. The fiducial inelastic cross section predictions from Pythia 8 A3 are about 5% lower compared to SS, which is somewhat closer to the values from the data. This does not come at a cost of sacrificing agreement with other distributions. In Figs. <ref>, <ref>, <ref> and <ref> the performance of the ATLAS Pythia 8 A3 tune can be seen for primarily charged-particle multiplicity pseudorapidity distributions, primary charged-particle multiplicity transverse momentum distributions, primary charged-particle multiplicity distributions; and average transverse momentum multiplicity distributions, compared to the previous Pythia 8 A2 and Monash tunes. The predicted values of the fiducial inelastic cross-section at √(s) = 7  and 13 for the tunes compared with data are shown in Table <ref>. Figures <ref> shows that the Pythia 8 A3 tune provides a small improvement in the modelling of charged particle pseudorapidity distributions at √(s)= 8 , and to a lesser extent, at √(s)= 13 , at the expense of larger deterioration of the modelling of √(s)= 0.9  data. Since the aim is to model soft collisions for pile-up at √(s) = 13 , the Pythia 8 A3 tune’s mis-modelling of the √(s)= 0.9  data is acceptable. The models EPOS LHC, PHOJET, QGSJET-II, Pythia 6 and Pythia 8 show big troubles in describing the whole spectrum in the data, but the best agreement is achieved with EPOS. For p_T > 100  at the highest energies Pythia 8 Monash, EPOS, QGSJET-II give a good description for |η|< 1.5. The prediction from Pythia 8 A2 has the same shape but lies below the data. For p_T > 500  at the highest energies the MCs have the same shape but different normalisation; EPOS and Pythia 8 A2 give remarkably good predictions. As discussed in Ref. <cit.>, in terms of Feynman diagrams (Fig. <ref>) the cut Pomeron can be viewed as a set of ladder diagrams corresponding to a sum of completely inelastic 2 → n processes, that is, to the last term G_inel = 1 - e^-Ω in the unitarity equation (20.9) in <cit.>. Here n > 2 means the production of additional (n - 2) gluons which form minijet. Minijets result from hadronization of partons emitted from the cut QCD Pomeron. Typically, minijets are groups of hadrons with comparatively low overall transverse momentum, p_T≲ 10 . In the final state driven by one Pomeron, can be expect to observe gluon minijets with a flat rapidity distribution in the central pseudorapidity region of primary charged-particle multiplicities distributions which are presented in Figs. <ref> – <ref> and <ref>. This plateau is more pronounced for the results with higher p_T threshold, p_T > 500  in Figs. <ref>(b) – <ref>(b) and <ref>(a). This would correspond to a flat pseudorapidity distribution of produced particles if they were massless. The dip observed at η = 0 in Figs. 
<ref>(a) – <ref>(a) for events with p_T > 100  is explained by the presence of massive particles. §.§.§ Distributions of charged-particle multiplicity over η of the LHC experiments The CMS results for pseudorapidity distributions for events for |η| < 2.4 at the CM energies √(s)= 13  with n_ch≥ 1, p_T >500 <cit.> are shown in Fig. <ref>(a). The measured distributions are presented for three different event data sets: * the most inclusive sample (inelastic), * the sample dominated by non-single diffractive dissociation events (NSD-enhanced sample), * the sample enriched by single diffractive dissociation events (SD-enhanced sample). The SD-minus and SD-plus samples are mutually exclusive, depending on the side of the forward-detector that contains the hadronic activity. The pseudorapidity distribution of the SD-enhanced event sample is also presented as a symmetrized distribution constructed from the SD-minus and SD-plus enhanced samples and is referred to as the SD-One-Side enhanced event sample. The symmetrization is performed by reflecting the distribution with respect to |η| = 0. In general terms, the inelastic and NSD distributions are similar. The pseudorapidity density of the SD-enhanced event sample is about a factor of 4 lower than that of the most inclusive event samples. The combined CMS–TOTEM pseudorapidity distributions are presented in Figs. <ref>(b) – (d) for the inclusive event selection sample, the NSD-enhanced event selection sample and the SD-enhanced event selection sample <cit.>. The measurements are compared to the results from Pythia 6 (version 6.426) <cit.> tune Z2* <cit.>, Pythia 8 (version 8.153) <cit.> tune 4C <cit.>, HERWIG++ (version 2.5.0) <cit.> tune UE-EE-3 with CTEQ6L1 <cit.> PDFs, EPOS LHCv3400 tune LHC <cit.> and QGSJET-II version 04 <cit.>. In Ref. <cit.> the similar figures for the pseudorapidity distributions were presented with additional η regions from TOTEM: 3.7 < η < 4.8 and -7.0 < η < -6.0. The results are derived in the central region by averaging the data points in the corresponding ±η bins and in the forward region by averaging over the half-arms four TOTEM T2 telescopes. The primarily charged-particle multiplicity density at η = 0 is 5.35 ± 0.36 for the inclusive sample, 6.20 ± 0.46 for the NSD-enhanced sample, and 1.94^+ 0.26 _-0.23 for the SD-enhanced sample, with negligible statistical uncertainties. The CMS primarily charged-particle multiplicity density at η = 0 for the NSD-enhanced sample is in agreement within error bars with the ATLAS one presented in Table <ref> at √(s) =13 for PS n_ch≥ 2, p_T >100 . The predictions from various MC event generators differ from the data by up to 20% for the inclusive and NSD-enhanced samples, with even larger discrepancies for the SD-enhanced sample. The data are well described by Pythia 6 and QGSJET-II for the inclusive selection. For the NSD-enhanced sample, the predictions obtained from Pythia 6 and QGSJET-II agree with the data for most η bins. A good description of the measurement for the SD-enhanced sample is provided by both EPOS and Pythia 6. The forward primarily charged-particle multiplicity density over pseudorapidity decreases with |η|. In the inclusive sample, d N_ch / dη is 3.85 ± 0.49 at η = 5.375 and 2.61 ± 0.28 at η = 6.350 with negligible statistical uncertainty. The pseudorapidity density of the NSD-enhanced sample varies between 4.80 ± 0.62 and 3.17 ± 0.35, while for the SD-enhanced sample it is in the range of 1.49 ± 0.27 to 1.20 ± 0.20. 
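As a brief aside on the symmetrisation mentioned above for the SD-One-Side enhanced sample: reflecting the distribution with respect to η = 0 amounts to averaging the contents of the ±η bins. A minimal sketch with hypothetical bin values (not the measured ones) is:

import numpy as np

# Hypothetical dN_ch/deta values in 10 bins symmetric about eta = 0
eta_edges = np.linspace(-2.5, 2.5, 11)
dn_deta = np.array([1.60, 1.72, 1.81, 1.93, 2.02,
                    2.08, 1.97, 1.86, 1.78, 1.66])

dn_deta_sym = 0.5 * (dn_deta + dn_deta[::-1])   # average the +eta and -eta bins
print(dn_deta_sym)                              # symmetric about eta = 0 by construction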
The MC predictions for the three samples differ from the data by up to about ± 30%. For the inclusive and NSD-enhanced samples, the data in the forward region are in agreement with the prediction from QGSJET-II and are between the EPOS and Pythia 8 results. For the SD-enhanced selection, the TOTEM data points are close to the Pythia 8 and HERWIG++ predictions, while QGSJET-II underestimates the data. The change in the slope of the MC curves close to η = 5.3, more visible for the NSD- and SD-enhanced distributions, is due to the event selection requirement of at least one charged particle in the pseudorapidity region of the TOTEM T2 telescopes. §.§ Charged-particle multiplicity density §.§.§ Energy dependence of the multiplicity density at ATLAS The energy dependence of primary charged-particle multiplicity density, 1/N_ev·d N_ch/ dη|_η =0, is of interest because it * provides information about the basic properties of p p collisions, * is related to the average energy density achieved in the interaction of protons, * constitutes a reference for the comparison with heavy ion collisions. The average primary charged-particle multiplicity in pp interactions per unit of pseudorapidity, multiplicity density, for |η| <0.2 as a function of the CM energy √(s) in three separate PS regions for events with n_ch≥ 2, p_T >100 , n_ch≥ 1, p_T >500  and n_ch≥ 6, p_T >500  are shown in Fig. <ref>. The results are compared to predictions of MC models tuned to a wide range of measurements. The comparison with the MC models Pythia 8 A2, Pythia 8 Monash, EPOS LHC, QGSJET-II for √(s) from 0.9 to 13  <cit.> and Pythia 6 AMBT1, Pythia 6 MC09, Pythia 6 DW, Pythia 8, PHOJET for √(s) from 0.9 to 7  <cit.> is show in Fig. <ref>(a) and Fig. <ref>(b) , respectively. The primary charged-particle multiplicity density in the central pseudorapidity region at √(s) = 13  for events with n_ch≥ 2, p_T >100  is measured for fiducial PS to be 6.42 ± 0.10, by averaging over |η| < 0.2; the quoted error is the systematic uncertainty, the statistical uncertainty is negligible. In order to compare with other measurements, it is corrected for the contribution from strange baryons (and therefore extrapolated to primary charged particles with τ > 30 ps) by a correction factor of 1.0121 ± 0.0035. The central value is taken from EPOS; the systematic uncertainty is taken from the difference between EPOS and Pythia 8 A2, and the statistical uncertainty is negligible. The mean number of primary charged particles after the correction is 6.50 ± 0.10 at √(s) = 13  for events with n_ch≥ 2, p_T >100 . The mean number of primary charged particles in the central region is computed by averaging over |η| < 0.2 and found to be 2.874 ± 0.001 (stat)± 0.033 (syst) at √(s) = 13  for events with n_ch≥ 1, p_T >500 . This measurement is corrected for the contribution from strange baryons. The prediction from EPOS is used to perform the extrapolation, and the deviation from the Pythia 8 Monash prediction is taken as a systematic uncertainty and symmetrised to give 1.024 ± 0.009. A summary of central primary charged-particle multiplicity densities at η = 0 in all measured PS at √(s) = 8, 13 is given in Table <ref>. The primary charged-particle multiplicity density increases by a factor of 2.2 when √(s) increases by a factor of about 14 from 0.9  to 13 . These extrapolated results from Table <ref>. are shown in Fig. <ref>(a) <cit.> and compared to predictions of the MC models Pythia 8 A2, Pythia 8 Monash, EPOS LHC and QGSJET-II for √(s) from 0.9 to 13  <cit.>. 
The predictions of EPOS and Pythia 8 MONASH match the data well at √(s) = 13  for events with n_ch≥ 2, p_T >100 . For Pythia 8 A2, the match is not so good as was observed when measuring particles with p_T >500  <cit.>. For events with n_ch≥ 1, p_T >500 at √(s) = 13  EPOS and Pythia 8 A2 describe the dependence on √(s) very well, while Pythia 8 Monash and QGSJET-II predict a steeper rise in multiplicity with √(s). In order to make consistent comparisons of pseudorapidity density at 8  <cit.> with other measurements, these results are corrected to the earlier τ > 30 ps definition of stable particles, using the factor 1.012 ± 0.004 in the p_T > 100  PS and 1.025 ± 0.008 in the p_T > 500  PS derived from predictions of the EPOS LHC tune with uncertainties following comparisons of the predictions of different MC models. Results at 8  are shown in Fig. <ref>(a) for the PS (p_T > 500 , n_ch≥ 1; 6) and (p_T > 100 , n_ch≥ 2). It can be seen that the total uncertainty in the measurement at √(s) = 8  is about 30–40% less than for the study with the √(s) = 7  data. This was achieved due to improved knowledge of the ID material distribution <cit.>, which reduced the dominant source of systematic uncertainty by more than 50% with respect to the √(s) = 0.9, 2.36, 7  measurements. The best description of the data is given by EPOS. The predictions of the Pythia 8 tunes provide a fair description of the shape of the multiplicity dependence with CM energy. As in the case of the other presented distributions, QGSJET-II calculations give the worst description. The values for three PS regions are shown in Fig. <ref>(b) with comparison of Pythia 6 AMBT1, Pythia 6 MC09, Pythia 6 DW, Pythia 8 and PHOJET predictions for √(s) from 0.9 to 7  and in Table <ref> <cit.>. The PS region with the largest minimum p_T and the highest minimum multiplicity, (p_T > 500 , n_ch≥ 6), which is the region with the least amount of diffraction, is the one where the models vary the least and the energy extrapolations of most models is in the best agreement with the data. For the most inclusive measurements, none of the models agree with the data and the spread at √(s) = 7  in the expected values is almost one third of the mean predicted value. The observed value is significantly higher at this energy than in any of the models. The total multiplicity density of charged particles with p_T > 100  within the |η| < 2.5 are computed as the mean of the distributions shown in Figs. <ref>(a) and <ref>(a). They are found to be 5.881 ± 0.002 (stat)± 0.276 (syst) at √(s) = 7  and 3.614 ± 0.006 (stat)± 0.170 (syst) at √(s) = 0.9 (see Table <ref>). These charged-particle total multiplicities density in the full pseudorapidity region, -2.5 < η < 2.5, are 29.04 ± 0.01 (stat)± 1.38 (syst) at √(s) = 7  and 18.07 ± 0.03 (stat)± 0.85 (syst) at √(s) = 0.9  and are in good agreement with the results presented in Table <ref>. With extrapolation to p_T = 0 , these numbers were multiplied by the model-dependent scale factors. The averaged inclusive charged-particle multiplicity for events with two or more particles for the kinematic region with p_T≥ 0 is found to be 6.252 ± 0.002 (stat)± 0.304 (syst) at √(s) = 7  and 3.849 ± 0.006 (stat)± 0.185 (syst) at √(s) = 0.9 (see Table <ref>). These are ≈ 6% higher than average multiplicities for p_T > 100 . This result is interpreted as the average total inelastic multiplicity for events with two or more particles within |η| < 2.5. 
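The corrected densities quoted above are simple products with the strange-baryon correction factors (for example 6.42 × 1.0121 ≈ 6.50), while the total multiplicities correspond to integrals of 1/N_ev·d N_ch/dη over |η| < 2.5. A minimal sketch of the latter extraction — fitting a tabulated pseudorapidity density with a fourth-degree polynomial and integrating it, as described below for the average multiplicities entering the KNO comparison — is given here with toy values rather than the measured distributions:

import numpy as np

# Toy tabulated (1/N_ev) dN_ch/deta values at bin centres (not ATLAS data)
eta = np.linspace(-2.4, 2.4, 25)
dn_deta = 6.4 - 0.15 * eta**2 + 0.01 * eta**4

coeffs = np.polyfit(eta, dn_deta, deg=4)        # fourth-degree polynomial fit
antider = np.polyint(coeffs)                    # antiderivative of the fitted polynomial
n_avg = np.polyval(antider, 2.5) - np.polyval(antider, -2.5)
print(f"<n_ch> over |eta| < 2.5: {n_avg:.1f}")

Applied to the measured distributions instead of the toy values, this procedure yields the ⟨ n_ch (√(s), p_T^min) ⟩ entries of Table <ref>.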
For correct comparison of charged-particle multiplicity and average transverse momentum distributions for different energies or PS regions the scaled multiplicity is introduced as follows: z = n_ch (√(s), p_T^min) /⟨ n_ch (√(s), p_T^min) ⟩. For example, a comparison of results for different PS regions, with two p_T^min thresholds, was presented in Ref. <cit.>. A fit with a fourth-degree polynomial function of the primary charged-particle multiplicity density distributions in the pseudorapidity region -2.5 < η < 2.5 was used in <cit.> for the calculation of an average total multiplicity, ⟨ n_ch ( √(s), p_T^min ) ⟩, for different CM energies and p_T^min using the ATLAS results <cit.>. The 1/N_ev· d N_ch/dη distributions over pseudorapidity are shown in Fig. <ref>. The average multiplicity, ⟨ n_ch ( √(s), p_T^min ) ⟩, resulting from fit of these distributions with the fourth-degree polynomial function are presented in Table <ref>. The average multiplicities from Table <ref> were used for calculation of horizontal axes using Eq. (<ref>) for correct comparison of primary charged-particle multiplicity distributions in and multiplicity dependences of an average transverse momentum in Sec. <ref>, and for KNO scaling study in Sec. <ref>. §.§.§ Energy dependence of the multiplicity density of the LHC experiments The average total primary charged-particle multiplicity, ⟨ n_ch⟩, is equal to the integral of the corresponding single-particle inclusive density in the η interval considered. The ⟨ n_ch⟩ is observed to rise with increasing CM energy in hadron-hadron collisions <cit.>. The same behaviour is also observed in e^+ e^- collisions, in deep-inelastic scattering <cit.>, and in heavy ion collisions <cit.>. The CMS measured average total primary charged-particle multiplicity for |η| < 2.4 presented in Table <ref> and shown in Fig. <ref>(a), where the CMS data are compared with experimental data obtained at lower energies and various theoretical predictions. Recent Regge-inspired models <cit.> predict a power-like behaviour among which only Ref. <cit.> describes the highest energy data very well. Parton saturation models (such as <cit.>) predict a strong rise of the central rapidity plateau as well. The Pythia 6 <cit.> generator and its fragmentation model tuned to CDF data <cit.>, called Pythia D6T, is used as a baseline model to simulate inelastic pp collisions. At 7 a dedicated Pythia tune <cit.> better describing the high multiplicities is used for correcting the data. Alternative tunings that differ mainly in the modelling of MPI have also been considered <cit.>. PHOJET <cit.> is used as an alternative event generator that differs mainly in the underlying dynamical model for particle production. Table <ref> gives an overview of the average total primary charged-particle multiplicity for the data and for the Pythia D6T tune, Pythia 8 and PHOJET models. The Pythia D6T tune produces on average too few particles per event at all energies. PHOJET is consistent with the data within uncertainties for √(s) = 0.9 , but is not able to predict properly the average total multiplicity at higher energies. Pythia 8 describes best the √(s) = 7  data, but underestimates ⟨ n_ch⟩ systematically at all energies. The CMS results at √(s) = 0.9 and 7  presented in Table <ref> are in agreement within the error bars with the ATLAS results at the same energies with p_T > 100  in Table <ref>. The CM energy dependence of the pseudorapidity distribution at η = 0 is shown in Fig. 
<ref>(b), which includes data from various experiments for NSD events in pp and p p̅ collisions. The different experiments do not use identical event selection criteria, they all include a large fraction of NSD events. Particle production at η = 0 is expected to follow a power-law dependence, d N_ch / dη|_η =0 ∝ s^Δ, where Δ is the Pomeron intercept <cit.> and the effective Pomeron intercept defined as α_eff (0) = 1 + Δ with Δ in the range 0.14 – 0.24 <cit.>. The result of fitting the high-energy pp and p p̅ central-pseudorapidity particle densities with this function is shown in Fig. <ref>(b). The value of Δ = 0.23± 0.01 is obtained. In ALICE the definition for multiplicity density in pp collisions, 1/N_ev·d N_ch/ dη|_η =0, is an integral of the data over the pseudorapidity range |η|< 0.5. The results of the measurements of multiplicity density are shown in Fig. <ref> and given in Table <ref>. Results are given for three conventional event classes: inelastic (INEL) events, non-single diffractive (NSD) events and events with at least one charged particle in |η| < 1 (INEL>0). The fits based on Eq. (<ref>) to combination of the ALICE data with other data at the LHC experiments and other experiments at lower energies in Fig. <ref> yield Δ = 0.102± 0.003 for INEL events, Δ = 0.114 ± 0.003 for NDS events and Δ = 0.114 ± 0.002 for INEL>0 events. These results are compared to Δ = 0.15 for central Pb–Pb collisions <cit.>. This is clear evidence that the charged-particle multiplicity density increases with energy in Pb–Pb collisions faster than in p p collisions. Fits results are shown in Fig. <ref>(a). The results of the extrapolations to CM energies of 13, 13.5 and 14 are presented in Table <ref>. The multiplicity densities ⟨d N_ch / dη⟩ measured in the INEL and INEL>0 events in the pseudorapidity range |η|< 0.5 at √(s)= 13 are shown in Fig. <ref>(b) <cit.> and are 5.31± 0.18 and 6.46± 0.19, respectively. The multiplicity density for the INEL>0 events is also measured in |η|< 1 for direct comparison with the INEL>0 results of ALICE at lower energies and is found to be 6.61± 0.20 <cit.>. Figure <ref>(b) shows compilation of results on multiplicity density of charged particles measured in |η|< 0.5 for the INEL and INEL>0 results at different p p energies by ALICE <cit.>, CMS <cit.>, ACHM <cit.>, UA5 <cit.> and PHOBOS <cit.>. The energy dependence of ⟨d N_ch / dη⟩ is parametrised by the power law (<ref>) fitted to data. By combining the data at lower energies with ALICE and CMS results at √(s) = 13 , it was obtained that Δ = 0.103 ± 0.002 for INEL events and Δ = 0.111 ± 0.004 for INEL>0 events. These fit results are in agreement within error bars with the results obtained in Fig. <ref>(a). The CMS obtained value Δ = 0.23 ± 0.01 in Fig. <ref>(b) is higher than ALICE result Δ = 0.114 ± 0.003 in Fig. <ref>(a) by 0.12± 0.01 for NSD event class. Note that a more complete data sample was used for the ALICE fit that for the CMS one. The measurement of average multiplicity density at 13  by CMS <cit.> for the pseudorapidity region |η| < 2.4 resulted in d N_ch/ dη|_|η| <0.5 = 5.49±0.01 (stat)± 0.17 (syst) for inelastic events, which is consistent with the ALICE extrapolation of 5.30 ± 0.24 in Table <ref>. Over the LHC energy range from 0.9 to 14 , while the CM energy increases by a factor of 15.5, extrapolation of the present data for d N_ch/ dη|_|η| =0 shows an increase by a factor of 1.75 ± 0.03 for the INEL event class, 1.87 ± 0.03 for the NSD event class and 1.87 ± 0.01 for the INEL>0 event class. 
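The increase factors just quoted follow directly from the power-law parametrisation, since the density scales as s^Δ = (√(s))^2Δ; for Δ = 0.103 one finds (14/0.9)^2Δ≈ 1.76, consistent with the 1.75 ± 0.03 above. A minimal fitting sketch, using synthetic points generated with Δ = 0.11 rather than the experimental values, is the following:

import numpy as np
from scipy.optimize import curve_fit

def density(sqrt_s, amp, delta):
    # dN_ch/deta at eta = 0 parametrised as amp * s**delta, with s = (sqrt_s)**2
    return amp * sqrt_s ** (2.0 * delta)

sqrt_s = np.array([0.9, 2.36, 7.0, 8.0, 13.0])            # TeV
rng = np.random.default_rng(1)
data = density(sqrt_s, 3.6, 0.11) * (1.0 + 0.02 * rng.standard_normal(sqrt_s.size))
sigma = 0.03 * data                                        # assumed point-by-point uncertainties

popt, pcov = curve_fit(density, sqrt_s, data, p0=(3.0, 0.10), sigma=sigma)
print(f"Delta = {popt[1]:.3f} +/- {np.sqrt(pcov[1, 1]):.3f}")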
The multiplicity increase is similar for the NSD and INEL>0 classes but slightly lower for the INEL class. The ALICE results at √(s) = 0.9, 7 and 8  and extrapolation at √(s) = 13  for the average multiplicity density for the NSD events in Table <ref> are in agreement within uncertainties with the ATLAS results presented in Table <ref> at √(s) =8 and 13  and in Table <ref> at √(s) = 0.9 and 7  for inelastic events with p_T > 100  and n_ch≥ 2. The multiplicity pseudorapidity distributions, the charged-particle multiplicity density at mid-rapidity (|η| < 0.2) measured at several √(s) points were found to be well described by the Pythia 8 Monash and EPOS models for three event selections. For p_T > 100  at the highest energies, the predictions from EPOS and Pythia 8 Monash match the data well. For the predictions from Pythia 8 A2, the match is not as good as was observed when measuring particles with p_T > 500 . For p_T > 500  at the highest energies, the predictions from EPOS and Pythia 8 A2 match the data well. The energy dependence of the particle density 1/N_ev·d N_ch / dη|_η =0 is shown in Fig. <ref> for ATLAS, in Fig. <ref>(b) for CMS–TOTEM and in Fig. <ref> for ALICE. As discussed in Ref. <cit.>, neglecting absorptive corrections given by the enhanced diagrams, which mainly change (“renormalize”) the effective Pomeron intercept in Eq. (<ref>), one can conclude that according to the Abramovsky-Gribov-Kancheli <cit.> (AGK) rules[The relation between the cross sections of subprocesses with a different number of cut Pomerons within a given diagram with n Pomerons is given by the AGK cutting rules.], the plateau height in Eq. (<ref>) is driven just by the one-Pomeron exchange with effective Δ∼ 0.2. That is, the density of secondaries observed in the inclusive process increases with increasing energy faster than the total cross section, whose growth is tamed by the multi-Pomeron diagrams. Indeed, as is seen in Fig. <ref>(b), in the interval of collider energies d N_ch / dη = 1/σ_inel·dσ / dη∝ s^0.115 (i.e. dσ / dη∝ s^0.215), while σ_inel∝ s^0.1. §.§ Transverse momentum dependence of charged-particle multiplicity §.§.§ ATLAS distributions of multiplicity over p_T The transverse momentum distributions of charged-particle measured by ATLAS are shown in Figs. <ref> – <ref> at the CM energies √(s)= 0.9, 2.36, 7, 8, and 13 . Figure <ref>(a) shows the charged-particle transverse momentum distribution at √(s)= 13  for p_T >100  <cit.>. The EPOS describes the data well for p_T >300 . For lower p_T the data are underestimated by up to 15%. The other generators show similar mis-modelling at low momenta but with larger discrepancies up to 35% for QGSJET-II. MC models mostly overestimate the charged-particle multiplicity for p_T >400 ; Pythia 8 A2 yields overestimated results only in the intermediate p_T region and slightly underestimates the data for p_T >800 . Figure <ref>(b) shows the charged-particle transverse momentum distribution at √(s)= 13  for p_T >500  <cit.>. EPOS describes the data well over the entire p_T spectrum. The Pythia 8 tunes describe the data reasonably well, but they are slightly above the data in the high-p_T region. QGSJET-II gives a poor prediction over the entire spectrum, overshooting the data in the low-p_T region and undershooting it in the high-p_T region. Figures <ref> show charged-particle multiplicities as a function of the transverse momentum, see Eq. (<ref>), for various PS at the CM energy √(s)= 8  <cit.>. No model is fully consistent with the distributions. 
Above 1  Pythia 8 Monash predictions agree well with the data. This model is the only, that gives a fair description of the data corresponding to the highest multiplicity threshold with n_ch≥ 50 and p_T >500 , where all other models show large deviations as p_T increases. The EPOS predictions give the best description of the data corresponding to the PS n_ch≥ 2 and p_T >100 , particularly at transverse momenta below 1 , while the other models underestimate the data at the lowest p_T values. The EPOS provides fair predictions for the PS n_ch≥ 1; 6 and p_T >500 , but for the higher multiplicity thresholds, n_ch≥ 20; 50, deviations from the data are seen at high transverse momenta. Pythia 8 A2 gives fair descriptions of the data below 6 , yet shows deviations of up to 30% around p_T∼ 10 . In all measured PS the QGSJET-II approach shows large disagreements with the data as p_T increases. Figures <ref>, <ref>(a) and <ref> show the charged-particle multiplicities as a function of the transverse momentum, Eq. (<ref>). Figures <ref>(b), <ref>(a) and <ref>(b) show three CM energies considered in the PS region n_ch≥ 1, p_T >500  and |η|< 2.5. The observed p_T spectrum is not described by any of the models over the whole range. The region that is most difficult for the models to describe is the region above 1 . Figures <ref>(a) and <ref>(a) show the charged-particle multiplicities in the most inclusive PS region n_ch≥ 2, p_T >100  and |η|< 2.5. At √(s) = 0.9  PHOJET describes the data best over the whole range even though the agreement is still not excellent. The other models tend to under-predict the number of low-p_T particles, while at higher p_T the models vary widely. At √(s) = 7  the effect at low p_T is more pronounced, whereas at high p_T the agreement of Pythia 8 and PHOJET with the data is quite good. The AMBT1 and MC09 tunes of Pythia 6 predict too many particles at higher p_T. Figures  <ref>(c) and <ref>(c) show the charged-particle multiplicities with the smallest contribution from diffractive events. This distribution carried the most weight in the Pythia 6 AMBT1 tune. Considerable improvement in the agreement with the data is seen between the older Pythia 6 MC09 and AMBT1 but the parameters varied in this tune were not sufficient to describe the full spectrum. The charged-particle multiplicities as a function of the transverse momentum measured in pp collisions at √(s) = 2.76 and in Pb+Pb collisions at √(s_NN) = 2.76 are shown in Fig. <ref>(b) for the pseudorapidity range |η| <2 and for five centrality intervals in Pb+Pb collisions: 0–5%, 10–20%, 30–40%, 50–60% and 60–80% in the 0.5 < p_T < 150 . This figure shows the Pb + Pb spectra divided by the ⟨ T_AA⟩ (which is estimated as the number of nucleon–nucleon collisions over their cross section) of the corresponding centrality interval compared with the charged-particle production cross sections measured in pp collisions at √(s) = 2.76 . The charged-particle multiplicities as a function of the transverse momentum combine the measurement of the soft regime at low p_T with the hard regime at high p_T which can be calculated in pQCD. While early measurements could focus only on the regime up to a few , distributions were later measured up to ≈ 200 as presented in Fig. <ref>(b) <cit.> and in pp collisions at √(s) = 5.02 <cit.>. The similar result of the CMS is presented in Ref. <cit.>. For p_T > 100  at the highest energies EPOS describes the data well for p_T > 300 , while for p_T < 300 , the data are underestimated by up to 15%. 
MCs show similar mis-modelling at low momentum but with larger discrepancies up to 35% for QGSJET-II. MCs mostly overestimate the charged-particle multiplicity for p_T > 400 . Pythia 8 A2 overestimates the data only in the intermediate p_T region and slightly underestimates them for p_T > 800 . For p_T > 500  at the highest energies the measurement spans 10 orders of magnitude; EPOS and Pythia 8 Monash give remarkably good predictions. Contrary to the ‘old’ Regge theory where it was assumed that all transverse momenta are limited, in QCD the k_T distributions of jets have a long k_T tail (dσ / d k_T^2 ∝α_s^2 ( k_T^2 ) / k_T^4 at large k_T and very large energy s ≫ k_T^2). An examples of the p_T primary charged-particle distributions are shown in Figs. <ref>, <ref>(a) and <ref>(a). In Fig. <ref> for charged-particle multiplicity, Pythia 8 A3 is comparable to data at √(s) = 0.9, 2.36, 7, 8, 13 and other tunes: Pythia 8 A3 A2 and Monash. At √(s) = 13 , Pythia 8 A2 describes the low multiplicity part better than Pythia 8 A3 in the range of 40 < n_ch < 60 charged particles. The shape of the distribution predicted by the new tune is consistent across the center-of-mass energies. In Fig. <ref> for charged particle multiplicity, ATLAS Pythia 8 A3 is comparable to other tunes except at √(s) = 0.9 . At √(s) = 13 , Pythia 8 A2 describes the low multiplicity part better than Pythia 8 A3 in the range of 40–60 charged particles. The shape of the distribution predicted by the Pythia 8 A3 tune is consistent across the center-of-mass energies. Compared to Pythia 8 A2, Pythia 8 A3 provides a slightly worse description of the charged particle multiplicity distribution, which coincides with the improved charged-particle p_T distribution that performs similarly to Pythia 8 Monash, as shown by Fig. <ref>. In all cases, √(s) = 8  results are very similar to those at √(s) = 7 . The comparison of the primary charged-particle multiplicities as a function of the transverse momentum for |η| < 2.5 measured at the CM energies from 0.9 to 13  by the ATLAS <cit.> are presented for events with PS n_ch≥ 2, p_T >100  in Fig. <ref>(a) and with n_ch≥ 1, p_T >500  in Fig. <ref>(b). Figures <ref>(a) and (b) show an increase of the primary charged-particle multiplicity distributions with the transverse momentum. As expected the distributions acquire higher values at higher collision energies and an increase by ≈ 40% and ≈ 10% is observed in the region of p_T < 1  as the energy increases from 0.9 to 13  for p_T >100  and p_T >500 , respectively. The results at 7 and 8  are in agreement within error bars. The particle multiplicity in transverse momentum region of p_T > 5 increases by ≈ 40% for particle p_T threshold of 100  and for that of 500  when energy rises from 7 to 13 . Charged-particle multiplicities p_T distributions were compared using “z-scaling”, see details in Refs. <cit.>. The energy independence of the scaling function for some reactions was observed. The concept of z-scaling is considered to reflect the general features of high-p_T particle production in hadron-hadron and hadron-nucleus collisions. Violation of z-scaling is suggested to be considered as a signature of new physics. §.§.§ Distributions of multiplicity over p_T of the LHC experiments The CMS results for primary charged-particle multiplicities as a function of the transverse momentum, p_T, and a leading transverse momentum, p_T, leading, for events for |η| < 2.4 at the CM energy √(s)= 13  with n_ch≥ 1 and p_T >500  <cit.> are shown in Fig. <ref>. 
The measured distributions are presented for three different event data sets: an inelastic (INEL) sample, an NSD-enhanced sample, and an SD-enhanced sample. The p_T distributions (i. e. p_T and p_T, leading) of the SD-enhanced event sample fall very steeply for large p_T values. The ALICE measurement of primary charged particle transverse momentum spectra in pp collisions at √(s) = 0.9, 2.76, 7  were presented in Ref. <cit.>. The measurement is performed in the pseudorapidity range |η| < 0.8 for particles with p_T > 150 . The differential cross section for the INEL pp collisions as a function of p_T measured by ALICE is shown in Fig. <ref>(a) for three measured collision energies <cit.>. At high p_T a clear evolution of the slope from √(s) = 0.9 to 7  can be observed. The next-to-Leading-Order pQCD (NLO-pQCD) calculation <cit.> for p_T > 3  is compared to the spectra. The calculation shows a similar evolution of the high-p_T dependence with √(s) but over-predicts the data by a factor of two <cit.>. The low systematic uncertainties demonstrate the accuracy of the measurements for all energies over the full p_T range. Though the p_T dependence of the cross section for a single √(s) is not well described by NLO-pQCD, the relative dependence on p_T of cross sections of two collision energies is described better. Figure <ref>(b) shows the ratio between the differential cross section in INEL pp collisions at √(s) = 2.76 to 7 , √(s) = 0.9 to 2.76  and √(s) = 0.9 to 7 as a function of p_T in comparison to the same ratio calculated with NLO-pQCD. The total p_T-dependent systematic uncertainties on the ratios are evaluated with allowance for correlated contributions, and amount to 8.1–9.8% for 0.9 /2.76 , 7.8–9.9% for 0.9 /7 , and 7.9–9.9% for 2.76 /7 . The corresponding normalisation uncertainties amount to +5.4%/-4.4%, +6.2%/-5.4%, and ± 4.1%, and are calculated assuming that the normalisation uncertainties on the p_T spectra are uncorrelated. In all ratios good agreement between the data and the NLO-pQCD calculations is found, which can be seen in the double ratio of data and NLO-pQCD for the three energy ratios in the lower panel of Fig. <ref>(b). §.§ Charged-particle multiplicity dependence §.§.§ ATLAS multiplicity distributions The charged-particle multiplicity distributions are shown in Figs. <ref> – <ref> at the CM energies √(s)= 0.9, 2.36, 7, 8, and 13 . Figures <ref>(a) and (b) show the charged-particle multiplicity distributions at the CM energy √(s)= 13  for events with n_ch≥ 2, p_T >100  <cit.> and n_ch≥ 1, p_T >500  <cit.>, respectively. In Fig. <ref>(a) for events with n_ch≥ 2, p_T >100  at √(s)= 13  the form of the measured distribution is reproduced reasonably by all models. Pythia 8 A2 describes the data well for 30 < n_ch < 80 but underestimates them for higher n_ch. For this multiplicity region, Pythia 8 Monash, EPOS and QGSJET-II underestimate the data by up to 20%. Pythia 8 Monash and EPOS overestimate the data for the multiplicity region n_ch > 80 and drop below the measurement in the high-n_ch region, starting from n_ch > 130 and n_ch > 200, respectively. QGSJET-II significantly overestimates the data for the multiplicity region n_ch > 100. Figure <ref> (b) shows the charged-particle multiplicity distribution for events with n_ch≥ 1, p_T >500  at √(s)= 13 . The high-n_ch region has significant contributions from events with numerous MPI. Pythia 8 A2 describes well the data in the multiplicity region n_ch < 50 but predicts too few events at larger n_ch. 
Pythia 8 Monash, EPOS and QGSJET-II describe the data reasonably well in the multiplicity region n_ch < 30 but predict too many events in the mid-n_ch region, with Pythia 8 Monash and EPOS predicting too few events in the region n_ch > 100 while QGSJET-II continues to be above the data. In Figs. <ref>(a) and (b) the distributions of primary charged-particle multiplicity are shown for the minimum transverse momentum thresholds of 100  and 500  at √(s)= 8  <cit.>, respectively. For the lower threshold, the distribution rises until n_ch∼ 9 before falling steeply. For the higher threshold the distribution peaks at n_ch∼ 2. The models are consistent with the data although the EPOS model provides a fair description. The two Pythia 8 calculations predict distribution peaks which are at higher n_ch than those observed and underestimate the event yield at low and high multiplicities. The QGSJET-II tune overestimates the data at low and high n_ch values and underestimates the data for intermediate n_ch values. In Figs. <ref>(a) and <ref>(a) the distributions of primary charged-particle multiplicity are shown for the most inclusive PS region n_ch≥ 2, p_T >100  and |η| < 2.5 at the CM energies √(s) = 7  and √(s) = 0.9 , respectively. Here the variations between models at both low n_ch and high n_ch are increased and no model predicts the observed spectra. Due to the normalisation, 1 / N_ev, a deviation observed in one region needs to be compensated for by one in the other direction somewhere else. Figures <ref>(b), <ref> and <ref>(b) show the primary charged-particle multiplicity distributions for n_ch≥ 1, p_T >500  and |η| < 2.5 at the CM energies √(s) = 7 , 2.36  and 0.9 , respectively. At low n_ch, all models predict more events than observed in the data, which is compensated for by an under-prediction in the tails of the distributions. The predictions of PHOJET at √(s) = 0.9  model the data reasonably well, but at √(s) = 2.36  and √(s) = 7  they do not model the observed spectrum so well. The Pythia 6 AMBT1 tune seems to provide the best agreement with the data. Figures <ref>(c) and <ref>(c) show the distribution for the diffraction-reduced PS region for events with n_ch > 6, p_T >500 . The distributions are very similar to those in Figs. <ref>(c) and <ref>(c) with a cut at n_ch > 6; only the normalisation is different. In Fig. <ref>, for the charged-particle multiplicity, ATLAS Pythia 8 A3 is comparable to the data at √(s) = 0.9, 2.36, 7, 8 and 13 and to the other tunes, Pythia 8 A2 and Monash. At √(s) = 13  Pythia 8 A2 describes the low-multiplicity part better than Pythia 8 A3 in the range of 40–60 charged particles. The shape of the distribution predicted by the Pythia 8 A3 tune is consistent across the centre-of-mass energies. Compared to the Pythia 8 A2 tune, the Pythia 8 A3 tune provides a slightly worse description of the charged-particle multiplicity distribution, which coincides with an improved charged-particle p_T distribution that performs similarly to Pythia 8 Monash, as shown by Fig. <ref>. In all cases, √(s) = 8  results are very similar to those at √(s) = 7 .
For correct comparison of the charged-particle multiplicity and average transverse momentum distributions for different energies or kinematic regions the scaled multiplicity z, usually called KNO variable, see Eq. (<ref>), is introduced. For example, comparison of the results for different kinematic regions, with two p_T^min thresholds, was presented in Ref. <cit.>. The comparison of the primary charged-particle multiplicities as a function of the scaled multiplicity z or the KNO scale for events with n_ch≥ 2 and p_T >100 ; n_ch≥ 1 and p_T >500  for |η| < 2.5 measured by the ATLAS at √(s) from 0.9 to 13  <cit.> are presented in Fig. <ref> and Fig. <ref> <cit.>, respectively. For these figures the multiplicity axis was compressed by the factor ⟨ n_ch ( √(s), p_T^min ) ⟩. The KNO scale is the same and therefore it is the correct scale for comparing distributions at different √(s) or distributions in different PS regions. The scaled multiplicity regions are up to 7.5 of the average total multiplicity for p_T >100  and up to 10.5 of the average total multiplicity for p_T >500  as shown in Figs. <ref>(a) and <ref>(a), respectively. In Table <ref> the relative uncertainty, δ⟨ n_ch⟩ / ⟨ n_ch⟩, is presented for average total multiplicities. Relative uncertainties are small and equal to 0.32–0.66% for p_T >100  and 0.24–0.46% for p_T >500 , except of the result at √(s)=2.36  which was measured with the lower accuracy. In the bottom panels in Figs. <ref> and <ref> ratios of the charged-particle distributions at 0.9 – 8  to the distribution at 13  are shown. These ratios, and their uncertainties, are obtained by interpolation. For the interpolation procedure the Interpolator method of the Root statistical analysis framework <cit.> was used. In Figs. <ref> – <ref>, the gray curve and the band of the uncertainties are the result of the interpolation of the distribution at 13 . Figures <ref> and <ref> show that primary charged-particle multiplicity distributions decrease as the collision energy increases from 0.9 to 13  by the factor of ≈ 3 for maximum of the functions at z ≈ 0.7. The results for √(s) = 7, 8 and 13 TeV and z ≤ 3 are presented in Fig. <ref>(b) for p_T >100  and in Fig. <ref>(b) for p_T >500 . The distributions at √(s) = 7 and 8  are in agreement within error bars except for the region 0.5 < z < 1.5. The multiplicity distribution at 8  is ≈ 20% larger than at 13  for the region z < 3 in both cases. For p_T > 100  and p_T > 500  at the highest energies the form of the measured distribution is reproduced reasonably by all models. Pythia 8 A2 describes the data well for middle n_ch but underestimates it for higher. For middle n_ch Pythia 8 Monash, EPOS, QGSJET-II underestimate the data by up to 10–20%. Pythia 8 Monash, EPOS overestimate the data for higher n_ch and drop below the measurement in the very high-n_ch region. QGSJET-II overestimates the data significantly. The high-n_ch region has significant contributions from events with numerous MPI. As discussed in Ref. <cit.>, negative binomial distributions (NBDs) were successful to describe (n_ch) at SPS energies <cit.> but failed at higher energies. Two-component approaches using two <cit.> or three <cit.> NBDs could not survive up to LHC energies (see e.g. <cit.>). Multiplicity distributions are a very sensitive probe of multiple parton interactions as collisions with large multiplicities are mostly composed of several parton interactions. 
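As an aside for readers who wish to reproduce such ratio plots, the interpolation step described above can be sketched in a few lines of Python; scipy's cubic interpolation stands in here for the Root Interpolator method, and the tabulated (z, distribution) points are hypothetical placeholders rather than the measured values.

```python
# Sketch of the ratio procedure described above, assuming tabulated
# (z, distribution) points at 13 TeV and at a lower energy are available.
# scipy's cubic interpolation stands in for the Root Interpolator method.
import numpy as np
from scipy.interpolate import interp1d

def distribution_ratio(z_low, y_low, z_13, y_13):
    """Ratio of a lower-energy distribution to the interpolated 13 TeV one,
    evaluated at the z points of the lower-energy measurement."""
    interp_13 = interp1d(z_13, y_13, kind="cubic", bounds_error=False)
    y_13_at_z = interp_13(z_low)   # 13 TeV curve at the other energy's z values
    return y_low / y_13_at_z

# Hypothetical example input (not the measured values):
z_13 = np.linspace(0.1, 7.0, 50)
y_13 = np.exp(-z_13)                      # placeholder shape
z_09 = np.linspace(0.1, 5.0, 30)
y_09 = 3.0 * np.exp(-1.2 * z_09)          # placeholder shape
ratio = distribution_ratio(z_09, y_09, z_13, y_13)
```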
Event generators fail to describe the tail of the multiplicity distribution without considering MPI and the careful tuning of the related parameters. §.§.§ Multiplicity distributions of the LHC experiments The CMS results for primary charged-particle multiplicities as a function of the multiplicity for events with |η| < 2.4 at the CM energy √(s)= 13  with n_ch≥ 1 and p_T >500 <cit.> are shown in Fig. <ref>. The measured distributions are presented for two different event data sets: an INEL sample and an NSD-enhanced sample. The charged particle multiplicity distribution of the NSD-enhanced event sample shows a depletion of low-n_ch events and an increase of high-n_ch multiplicity events compared to that of the inelastic sample. The NSD charged hadron multiplicity distributions are measured in increasing ranges of pseudorapidity from |η| < 0.5 to |η| < 2.4. The fully corrected results at √(s) = 0.9, 2.36 and 7  are compared in Fig. <ref> with the measurements in the same pseudorapidity ranges performed by the UA5 <cit.> and ALICE <cit.>. The CMS measurements were also compared with the results obtained from the CMS cross-check analysis of the data at √(s) = 0.9 and 7 using a tracklet-based tracking algorithm as in Ref. <cit.>. With a reconstruction efficiency exceeding 90% for p_T > 50 , the latter provided a cross-check of the extrapolation for tracks below p_T < 100 , including the use of the data without the magnetic field at √(s) = 7 . All measurements agree well within their total uncertainties. In the largest pseudorapidity interval |η| < 2.4, there is a change of slope in P_n for n_ch > 20, indicating a multicomponent structure, as was discussed in Refs. <cit.> in terms of multiple-soft-Pomeron exchanges. This feature becomes more pronounced with increasing CM energies, notably at √(s) = 7 . The Pythia 6 <cit.> generator and its fragmentation model tuned to CDF data <cit.> hereafter called Pythia D6T, is used as a baseline model to simulate inelastic pp collisions. However, at 7 a dedicated Pythia tune <cit.> describing better the high multiplicities is used for correcting the data. Alternative tunings that differ mainly in the modelling of multiple parton interactions have also been considered <cit.>. PHOJET <cit.> is used as an alternative event generator that differs mainly in the underlying dynamical model for particle production. An extensive range of tunes <cit.> based on the Pythia 6 fragmentation model have been developed. They differ mainly in their parametrisation of the multiple-parton interaction model. Some reproduce the charged hadron multiplicities better than others, but none is able to give a good description simultaneously at all √(s) and in all pseudorapidity ranges. For clarity, only the baseline tune Pythia D6T <cit.> is shown in comparison with other models having a different physical description of soft-particle production such as PHOJET <cit.> and the fragmentation model of Pythia 8 <cit.>. A comparison of the CMS measurements with three classes of models is shown in Fig. <ref> for all charged hadrons and for those with p_T > 500 . Pythia D6T drastically underestimates the multiplicity at all measured energies but improves when p_T > 500  is required. Pythia 8 is the only model that gives a reasonable description of the multiplicity distribution at all energies, but tends to overestimate the multiplicity at √(s) = 7  when p_T > 500  is required. 
PHOJET produces too few charged hadrons overall but gives a good description of the average transverse momentum ⟨ p_T⟩ at the fixed multiplicity n_ch, as illustrated in Fig. <ref>. The ALICE results of study the multiplicity (N_ch) distributions and transverse momentum spectra and KNO scaling of inclusive primary charged particles in the kinematic range of |η| < 0.8 and 0.15 < p_T < 10  for pp, p–Pb, Xe–Xe and Pb–Pb collisions at CM energies per nucleon pair ranging from √( s_NN) = 2.76 up to 13 were published in Ref. <cit.>. The N_ch distributions for pp collisions at the different centre-of-mass energies √(s) = 2.36, 5.02, 7, 8 and 13  for the kinematic region |η| < 0.8 and 0.15 < p_T < 10 are shown in Fig. <ref>(a). These distributions reach a maximum around N_ch≈ 2 and then fall steeply off over several orders of magnitude. The slope of the decay with N_ch decreases with increasing collision energy. This can be attributed to the larger p_T in the initial hard scattering which results in larger multiplicities. Figure <ref>(b) compare measured results for pp collisions for the respective multiplicity distributions with predictions from Pythia 8 <cit.> (solid lines) and EPOS LHC <cit.> (dashed lines). The Pythia 8.306 event generator is used with the Monash-2013 tune <cit.> for pp collisions. The overall shapes of the multiplicity distribution shown in Fig. <ref>(b) are better described by EPOS LHC, while Pythia 8 falls sharply off above N_ch/ ⟨ N_ch⟩≈ 4. Both models agree with the experimental distributions within 25% with larger deviations at highest multiplicities. §.§ Average transverse momentum multiplicity dependence §.§.§ ATLAS average transverse momentum distributions The charged-particle average transverse momentum distributions are shown in Figs. <ref> – <ref> at the CM energies √(s)= 0.9, 2.36, 7, 8, and 13 . The average transverse momentum versus the primary charged-particle multiplicity is shown in Fig. <ref> at √(s)= 13  for n_ch≥ 2, p_T >100  <cit.> and n_ch≥ 1, p_T >500  <cit.>, respectively. For p_T >100  in Fig. <ref>(a) it increases towards higher n_ch, as modelled by a colour reconnection mechanism in Pythia 8 and by the hydrodynamical evolution model in EPOS. The QGSJET-II generator, which has no model for colour coherence effects, describes the data poorly. For low n_ch, Pythia 8 A2 and EPOS underestimate the data, where Pythia 8 Monash agrees within the uncertainties. For higher n_ch all generators overestimate the data, but for n_ch > 40, there is a constant offset for both Pythia 8 tunes, which describe the data to within 10%. EPOS describes the data reasonably well and to within 2%. Figure <ref>(b) for n_ch≥ 1, p_T >500  shows the mean transverse momentum versus the charged-particle multiplicity. The ⟨ p_T⟩ rises with n_ch, from 0.8 to 1.2 . This increase is expected due to colour coherence effects being important in dense parton environments and is modelled by the colour reconnection mechanism in Pythia 8 or by the hydrodynamical evolution model used in EPOS. If the high-n_ch region is assumed to be dominated by events with numerous MPI, without colour coherence effects the ⟨ p_T⟩ is approximately independent of n_ch. Inclusion of colour coherence effects leads to fewer additional charged particles produced with every additional MPI, with an equally large p_T to be shared among the produced hadrons <cit.>. EPOS predicts a slightly lower ⟨ p_T⟩ but describes the dependence on n_ch very well. 
The Pythia 8 tunes predict a steeper rise of ⟨ p_T⟩ with n_ch than the data, predicting lower values in the low-n_ch region and higher values in the high-n_ch region. QGSJET-II predicts a ⟨ p_T⟩ of ∼ 1 , with very little dependence on n_ch; this is expected as it contains no model for colour coherence effects. Similar plots as for 13  are also shown for 8  in Fig. <ref> for transverse momentum thresholds of 100  and 500 , respectively. The average p_T rises with multiplicity although the rise becomes progressively less steep as the multiplicity increases. This is expected due to colour coherence effects in dense parton environments, which are modelled by a colour reconnection mechanism in Pythia 8 or by the hydrodynamical evolution model used in EPOS. It is assumed that numerous MPI dominate the high-multiplicity events, and that colour coherence effects thereby lead to fewer additional charged particles produced with every additional MPI, which share a higher average p_T. The EPOS and Pythia 8 models provide a fair description of the data. The QGSJET-II model fails to predict the mean transverse momentum over the entire multiplicity range, as it does not simulate colour coherence effects and therefore shows very little dependence on the multiplicity. Figures <ref> and <ref> show the results for events at the CM energies √(s)= 7  and √(s)= 0.9 for n_ch≥ 2, p_T >100  and n_ch≥ 1, p_T >500 , respectively. Globally one can say that at √(s)= 0.9  the slope versus n_ch for high values of n_ch seems to be well described by most models, but the absolute value is best modelled by Pythia 6 DW. At the highest CM energy (8 and 13 ) above multiplicity of 20 the models vary widely both in slope and in absolute value; at low values of n_ch none of the models describe the data very well. In the more inclusive PS region, Figs.<ref>(a) and <ref>(a), the models vary widely, especially at √(s)= 7 . The measurement of ⟨ p_T⟩ as a function of the charged multiplicity at √(s)= 2.36  is not shown because different track reconstruction methods are used for determining p_T and multiplicity distributions. In Fig. <ref>, which shows the mean transverse momentum, ⟨ p_T⟩, against the charged particle multiplicity correlation <cit.>, the choice of lower colour reconnection strength led to slight improvement over Pythia 8 A2. Although √(s) = 2.36  <cit.> and √(s) = 8  charged particle distributions were not used in tuning, comparisons are made with those distributions for completeness. In Figs. <ref>, <ref>, <ref> and <ref> distributions at √(s) = 7  and √(s) = 13  predicted by Pythia 8 A3, in compared to Pythia 8 A2, show a broadly comparable, or better, level of agreement. Pythia 8 A2 demonstrates that an acceptable description of data can be achieved by using the DL model for diffraction and can be viewed as a possible starting point for further systematic studies of soft-QCD tunes. The results of Pythia 8 A3 provide good reasons to believe that an improved and more reliable simulation of pile-up overlay can be obtained. The correct comparison of the primary charged-particle average transverse momentum, ⟨ p_T⟩, as a function of the scaled multiplicity z for events with n_ch≥ 2 and p_T >100 ; n_ch≥ 1 and p_T >500  measure for |η| < 2.5 at the CM energies from 0.9 to 13  by the ATLAS <cit.> are presented in Fig. <ref> <cit.>. The ⟨ p_T⟩ distribution as a function of z acquires a higher value at higher collision energies. 
The values of the ⟨ p_T⟩ distributions increase by 18% and 13% for z > 1 as the energy increases from 0.9 to 13  for p_T >100  and p_T >500 , respectively. The results at 7 and 8  are in agreement within error bars. The values of the ⟨ p_T⟩ distributions increase by ≈ 3% for p_T >100  and by ≈ 2.5% for p_T >500  as the energy increases from 8 to 13  for z > 0.5. The ratio of the ⟨ p_T⟩ distributions for 8 to 13  is ≈ 6 times smaller than the ratio for 0.9 to 13 . For p_T > 100  and p_T > 500  at the highest energies the distributions increase towards higher n_ch, as modelled by the CR mechanism in Pythia 8 and by the hydrodynamical evolution model in EPOS. The QGSJET-II generator describes the data poorly. For low n_ch, Pythia 8 A2 and EPOS underestimate the data, and for higher n_ch all generators overestimate the data. EPOS describes the data reasonably well and to within 2%. As discussed in Ref. <cit.>, the ⟨ p_T (n) ⟩ of primary charged particles produced via jet fragmentation slowly increases with collision energy, as shown in Fig. <ref>. This is caused by the stronger absorption (at larger √(s)) of the gluons with a smaller k_T (σ^abs∝ 1 / k_T^2). The growth of ⟨ p_T⟩ with multiplicity can be explained by the fact that events with larger n_ch correspond to a smaller impact parameter, b, where the absorption of the low-k_T component is stronger, and larger multiplicities can originate from events with jets/minijets of higher p_T. Since the ⟨ p_T⟩ of primary charged particles grows with √(s), the increase with √(s) of the transverse energy flow is a bit faster than that of the particle density. §.§.§ Average transverse momentum distributions of the LHC experiments Figure <ref> (top) shows a CMS comparison of the average transverse momentum, ⟨ p_T⟩, as a function of the charged-particle multiplicity, n_ch, for the inclusive pseudorapidity region |η| < 2.4 with predictions of the Pythia D6T tune, the Pythia 8 and PHOJET models at √(s) = 0.9, 2.36 and 7  <cit.>. In Fig. <ref> (bottom) the ratios of the higher-energy data to the fit at √(s) = 0.9  indicate the approximate energy independence of ⟨ p_T⟩ at fixed n_ch. These results are in disagreement with the ATLAS results presented in Fig. <ref>, where the ratio depends on the multiplicity. The ATLAS ratio of the ⟨ p_T⟩ distributions for 7  to 0.9  is ≈ 1.18 for z ≳ 2, as shown in Fig. <ref>(a). According to CMS, the same ratio shown in Fig. <ref> is ≈ 1.05 for n_ch≳ 30 or z ≳ 1, because ⟨ n_ch⟩ = 30.4 at 7  in Table <ref>. That is ≈ 3.5 times smaller than for ATLAS. Among the three classes of models, Pythia 8 gives the best overall description of the multiplicity distribution and the dependence of the average transverse momentum on n_ch. Inspired by <cit.>, the multiplicity dependence of ⟨ p_T (n_ch ) ⟩ for n_ch > 1.5 was fitted at each energy by a first-degree polynomial in √( n_ch), yielding a good description which is valid at all three energies. The ratios of the data obtained at √(s) = 7  and √(s) = 2.36  with respect to the data at √(s) = 0.9  show that the rise of the average transverse momentum with the multiplicity weakly depends on energy. The average charged-particle transverse momenta for pp collisions at the different centre-of-mass energies √(s) = 2.36, 5.02, 7, 8 and 13  for the kinematic region |η| < 0.8 and 0.15 < p_T < 10  were obtained by the ALICE experiment <cit.> and are presented in Fig. <ref>. In Fig. <ref>(a) the average charged-particle transverse momentum ⟨ p_T⟩ spectra and in Fig.
<ref>(b) the ⟨ p_T⟩ spectra divided by their respective multiplicity-integrated values, ⟨ p_T⟩_incl, as a function of the relative multiplicity N_ch /⟨ N_ch⟩, the same as the scale variable z, are shown. The value of ⟨ p_T⟩_incl for pp collisions increases from 6.05 ± 0.17 at √(s) = 2.76  to 9.48 ± 0.07 at √(s) =13  (see Table 2 in <cit.>). The values for each collision system align almost perfectly for ⟨ p_T⟩ / ⟨ p_T⟩_incl. In pp collisions, the overall shapes of the ⟨ p_T⟩ distributions are shown in Fig. <ref>(c) in comparison with predictions from Pythia 8 <cit.> (solid lines) and EPOS LHC <cit.> (dashed lines). Pythia 8 underpredicts the experimental data on ⟨ p_T⟩ at the lowest values of N_ch by up to 4%. The N_ch-dependent ⟨ p_T⟩ values produced by Pythia 8 increase faster than the measurements, with an almost linear dependence up to N_ch≈ 20, after which the ratio shows a flat multiplicity dependence with an offset from unity varying from 0.5% at √(s) = 5.02  up to 4% at the highest CM energy. EPOS LHC is further off at low multiplicities by up to 5% and increases slower than the measurements, underestimating them by up to 6% around N_ch≈ 9. At higher multiplicities, the increase is faster with a linearly rising ratio up to N_ch≈ 20 - 30, reaching a plateau which describes the measurements within ± 2%. § KNO SCALING §.§ Study of the KNO scaling using the ATLAS results Deviation from the KNO scaling was already observed long ago at the ISR energies in pp collisions at √(s) from 0.0304 to 0.0622 , in the full PS, for inelastic events <cit.>. For hadron-hadron collisions, the approximate KNO scaling holds up to the ISR energies <cit.>. On the other hand, for NSD collisions, scaling was still found to be present <cit.>, suggesting that diffractive processes might also play a role in KNO scaling violations. In e^+ e^- collisions, at √(s) from 0.005 to 0.034 , the KNO scaling was found to hold within ± 20% <cit.>. Clear scaling violations become manifest above √(s)≈ 0.2  both for the multiplicity distributions in full PS and in central pseudorapidity ranges <cit.>. In pp̅ collisions at the CERN collider at √(s) = 0.2, 0.546 and 0.9 , the KNO scaling was found to be violated for NSD collisions in full PS <cit.>. Nevertheless, for NSD collisions, in limited central pseudorapidity intervals, the KNO scaling was still found to hold up to 0.9 , and at √(s) = 0.546  the KNO scaling was found to hold in the pseudorapidity interval |η| < 3.5 <cit.>. In p p̅ collisions, and for large rapidity ranges, the UA5 experiment was the first to observe a larger-than-expected high-multiplicity tail and a change of slope <cit.>, which was interpreted as evidence for a multi-component structure of the final states <cit.>. In NSD pp collisions at the LHC, at √(s) = 2.36  and 7  and in |η| < 0.5, ALICE <cit.> and CMS <cit.> observed no significant deviation from the KNO scaling. On the other hand, the CMS observation of strong KNO scaling violations at √(s) = 7 , as well as a change of slope in P_n, confirms the earlier measurements. The KNO variable z provides another way to study the evolution of the shape of multiplicity distributions with varying CM energies and pseudorapidity intervals. For the verification of the KNO scaling hypothesis the following equation, with dependence on the CM energy and a kinematic region, p_T^min, was used in Ref.
<cit.>: Ψ ( z , √(s)) = ⟨ n_ch (√(s), p_T^min) ⟩· P (n_ch, √(s), p_T^min) = ⟨ n_ch (√(s), p_T^min) ⟩/ N_ev (√(s), p_T^min ) ·d N_ev (√(s), p_T^min )/ d n_ch, where n_ch is the number of primary charged particles within the kinematic acceptance in an event, P (n_ch, √(s)) is the probability distributions of producing n_ch particles, N_ev is the number of events with primary charged particles in the kinematic acceptance, ⟨ n (√(s)) ⟩ is the average multiplicity of primary particles at the CM energy, and Ψ ( z ) is the particle distribution as a function of the scaled multiplicity. The KNO scale variable z provides a way to study evolution of shapes of the KNO charged-particle multiplicity distributions (see Eq. (<ref>)) with varying CM energy and kinematic region, for example p_T^min threshold. The KNO distributions and their ratios, studied using ATLAS results, are presented in Fig. <ref> for charged particles with p_T >100  and in Fig. <ref> for those with p_T >500 . These figures are similar to Fig. <ref> and Fig. <ref> but the vertical axis is stretched by the factor ⟨ n_ch (√(s), p_T^min)⟩. The quantities of interest are derived from the original set of KNO distributions and the ratios of these distributions to the one at 13 . The high-multiplicity tail of the distributions is pushed up and the maximum of the distribution is shifted towards small values of z with increasing collision energy. Ratios of the KNO distributions between the smallest CM energy 0.9  to 13  reach the maximum value at z ≈ 0.8 and the minimum value for the highest multiplicity at z ≈ 5.5 for p_T >100 , as can be seen in Fig. <ref>(a), and z ≈ 6.5 for p_T >500 , in Fig. <ref>(a). There is an intersection point for all distributions at z ≈ 2. A test of the KNO scaling distributions between √(s) = 0.9 and 13  confirms that KNO scaling violation increases with decreasing collision energy. Ratios of the KNO distributions between the highest energies 8 and 13  exceed the maximum value of +8% at z ≈ 0.5 and the minimum value of -15% at z ≈ 0.1 for p_T >100 , as can be seen in Fig. <ref>(b), and the maximum value of +5% at z ≈ 0.5 and -13% at z ≈ 0.1 for p_T >500 , in Fig. <ref>(b). For the high multiplicity tail, these ratios are in agreement within error bars with the KNO distribution at 13 . Single- and double-diffractive processes make an important contribution only for the low-multiplicity region, z ≲ 0.3. The typologies of diffractive and non-diffractive events are different and their KNO behaviour may also be different. The negative spread, ≲ -8%, for the low multiplicity may be the result of the contribution from diffractive processes. The KNO scaling tends to be valid in the energy region from √(s) = 7 to √(s) =13  within ≈^+8_-15% for z ≲ 2 and within error bars for z ≳ 2 for events with the charged-particle transverse momentum p_T >100  (Fig. <ref>(b)), and within ^+5_-13% for z ≲ 3 and within error bars for z ≳ 3 for events with the charged-particle transverse momentum p_T >500  (Fig. <ref>(b)). The tendency of the KNO scaling to hold for the highest collision energies is observed. The MC QGSM predictions are made for the KNO non-diffractive charged-particle multiplicity distributions for pp collisions including at the highest LHC CM energy √(s)= 14  for |η| <2.4 in Fig. 12 in Ref. <cit.>. These distributions have the same qualitative behaviour as those presented in Fig. <ref>(a). The MC QGSM described the KNO distributions as the contribution of the cylinder diagram and diagrams with multi-Pomeron scattering. 
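As a minimal numerical illustration of Eq. (<ref>), the following Python sketch computes the KNO function Ψ(z) from a tabulated multiplicity distribution; the input P(n_ch) is a hypothetical placeholder, not a measured spectrum.

```python
import numpy as np

def kno_function(n_ch, prob):
    """KNO-scaled distribution Psi(z) = <n_ch> * P(n_ch), with z = n_ch / <n_ch>.
    `prob` must be normalised so that sum(prob) = 1."""
    prob = np.asarray(prob, dtype=float)
    n_ch = np.asarray(n_ch, dtype=float)
    mean_n = np.sum(n_ch * prob)   # <n_ch>
    z = n_ch / mean_n              # KNO variable
    psi = mean_n * prob            # Psi(z)
    return z, psi

# Hypothetical placeholder distribution, not a measured one:
n = np.arange(2, 200)
p = np.exp(-n / 25.0)
p /= p.sum()
z, psi = kno_function(n, p)
```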
The pronounced peak at low z arises solely due to a single Pomeron exchange, and the maxima of the distributions for multi-Pomeron processes are moved in the direction of high z, thus pushing up the tail <cit.>. The energy independence of the moments C_q (√(s)) of the probability distributions P (n_ch, √(s)), defined as C_q (√(s)) = ∑_n_ch=1^n_max n_ch^q P (n_ch, √(s)) / ( ∑_n_ch=1^n_max n_ch P (n_ch, √(s)) )^q, in the asymptotic energy limit was the precise finding of the KNO scaling <cit.>. The analysis results for the validity of KNO scaling are shown quantitatively in Fig. <ref> by the C_q (√(s)) of the multiplicity distributions measured by ATLAS, complemented with the CMS measurements at √(s) =0.9, 2.36 and 7  <cit.> and results of the lower-energy experiments NA22 <cit.>, UA1 <cit.>, and UA5 <cit.>. The C_q (√(s)) calculations based on the ATLAS results for the kinematic region |η| < 2.5, n_ch≥ 2 and p_T >100  are shown in Fig. <ref>(a). The ATLAS and CMS results agree within the errors. The values of C_q (√(s)) for all experiments increase linearly with log√(s), as illustrated by the fits in Fig. <ref>(a). Since, as mentioned above, the KNO scaling requires that C_q (√(s)) be independent of energy, one can state that the KNO scaling is violated at least for the full region of scaled multiplicity. Figure <ref>(b) shows for the first time the values of C_q (√(s)) calculated using multiplicity distributions measured by ATLAS for the kinematic region |η| < 2.5, n_ch≥ 1 and p_T >500 . Similarly to Fig. <ref>(a), the values of C_q (√(s)) increase linearly with log√(s). The C_q values at √(s) = 2.36 TeV in Fig. <ref>(b) are much smaller than those for other energies. This is because the region of primary charged-particle multiplicity distributions at 2.36  is smaller (up to z ≈ 3.5) than that for higher CM energies (up to z ≈ 9) <cit.>. Therefore, the C_q values at √(s) = 2.36  were not used in the fits. The C_q (√(s)) for p_T >500  have a higher bias (α) and slope (β) of the fits than those for the minimum p_T threshold, the bias increasing from 1.1 at q=2 up to 2.1 at q=5, and the slope increasing from 1.4 at q=2 up to 2.6 at q=5. This is the result of stronger interactions with a higher p_T threshold. Figure <ref>(c) shows the moments C_q for events with n_ch≥ 2, p_T >100  and for z > 0.5 without the fraction of single- and double-diffraction events, which was accepted by the ATLAS minimum-bias trigger <cit.>. In this case, the values of C_q (√(s)) are systematically higher than those for the full distributions with z > 0 and show a similar linear increase with log√(s), as illustrated in Fig. <ref>(c). For multiplicity distributions for z > 1.0 the values of C_q (√(s)) at the highest energies √(s) =7, 8 and 13  are in agreement within uncertainties, as can be seen in Fig. <ref>(c). Therefore, the energy independence of the moments of various orders can be considered as a confirmation of the KNO scaling. §.§ Study of the KNO scaling at the LHC experiments The KNO scaling violation was studied for different pseudorapidity ranges in the LHC experiments by CMS <cit.> and ALICE <cit.> at the CM energies from √(s) = 0.9 to 8 . The multiplicity distributions obtained by the CMS detector are shown in the KNO form <cit.> for the pseudorapidity interval |η| < 2.4 in Fig. <ref>(a), which is close to the similar ATLAS results with |η| < 2.5, and for a more central pseudorapidity interval |η| < 0.5 in Fig. <ref>(b).
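The moment calculation of Eq. (<ref>) and the straight-line fits in log√(s) discussed above can likewise be sketched in Python; the normalised multiplicity distributions used as input below are hypothetical placeholders.

```python
import numpy as np

def c_q(n_ch, prob, q):
    """Normalised moment C_q = <n_ch^q> / <n_ch>^q of a multiplicity distribution."""
    n_ch = np.asarray(n_ch, dtype=float)
    prob = np.asarray(prob, dtype=float) / np.sum(prob)
    return np.sum(n_ch**q * prob) / np.sum(n_ch * prob) ** q

def fit_cq_vs_logsqrts(sqrt_s, cq_values):
    """Straight-line fit C_q = alpha + beta * log10(sqrt(s))."""
    beta, alpha = np.polyfit(np.log10(sqrt_s), cq_values, deg=1)
    return alpha, beta

# Hypothetical placeholder inputs (energies in TeV, toy distributions):
energies = np.array([0.9, 2.36, 7.0, 8.0, 13.0])
n = np.arange(1, 300)
cq_at_each_energy = []
for width in 10.0 * energies**0.2:        # toy widening with energy
    p = np.exp(-n / width)
    cq_at_each_energy.append(c_q(n, p, q=2))
alpha, beta = fit_cq_vs_logsqrts(energies, cq_at_each_energy)
```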
The variation of the ratio for the central region of 0.9 to 7  with |η| < 0.5 is about ± 15% and agree with 1 within error bars; therefore the KNO scaling holds. The variation of the ratio for the full region with |η| < 2.4 is twice wider ≈± 30% and does not agree with 1 in error bars, therefore the KNO scaling is violated similar to the ATLAS data in Fig. <ref>(a). Scaling is a characteristic property of the multiplicity distribution in cascade processes of a single jet with self-similar branching and a fixed coupling constant <cit.>. A similar conclusion about the shape evolution of the multiplicity distributions like from Fig. <ref>(b) can be extracted from Fig. <ref>(c), where are compared the ALICE measurements plotted in terms of KNO variables at the two energies and UA5 p p̅ data at √(s) = 0.2 and 0.9 , for NSD collisions and pseudorapidity interval |η| < 0.5. While the KNO scaling gives a reasonable description of the data from √(s) = 0.2 and 2.36 , the ratio between the √(s) = 0.9 and 2.36  data shows a slight departure from unity above z = 4, but it is in agreement with unit within error bars. The KNO test on the ALICE results in the range of 0.9 to 8  <cit.> is presented in Fig. <ref>. The KNO-scaled distributions and their ratios were obtained for each of the available combinations of corrections with the same procedure used for multiplicity distribution measurements. Bin-to-bin correlations were ignored when comparing KNO distributions and q-moments at various CM energies. Consequently, the relative errors obtained on the ratios are somewhat overestimated. The ratios between two highest energies and 0.9  exceed the value of 2 at z > 5.5, 5 and 4.5, for |η| < 0.5, |η| < 1.0 and |η| < 1.5, respectively, Fig. <ref>. This confirms that KNO scaling violation increases with the size of increasing pseudorapidity interval. The shape of the KNO scaling violation reflects the fact that the high-multiplicity tail of the distribution increases with energy and with size of pseudorapidity interval faster than that for low-multiplicity tail (n_ch≤ 20). A test of the KNO scaling between √(s) = 0.9 to 8 confirms that KNO scaling violation increases with increasing √(s) and, at a given CM energy, with increasing width of pseudorapidity intervals. This is similar to the ATLAS result in Fig. <ref>(a). The KNO test on the ALICE results for pp collisions at the different centre-of-mass energies √(s) = 2.36, 5.02, 7, 8 and 13  for the kinematic region |η| < 0.8 and 0.15 < p_T < 10 is presented in Fig. <ref>(a). Figure <ref>(b) shows the corresponding ratios of the KNO scaled multiplicity distributions at various CM energies relative to √(s) = 13 . The KNO scaling apparently holds within ≈ 30% for CM energies from 2.36 to 8 in relative to √(s) = 13 . Figure <ref>(c) compare measured results for the respective KNO scaled multiplicity distributions with predictions from Pythia 8 <cit.> (solid lines) and EPOS LHC <cit.> (dashed lines). Like for multiplicity distributions in Fig. <ref>(b), the overall shapes of the KNO-scaled distribution shown in Fig. <ref>(c) are better described by EPOS LHC, while Pythia 8 falls sharply off above N_ch/ ⟨ N_ch⟩≈ 4 and these models within 25% agree with the experimental distributions with larger deviations at highest multiplicities. Figure <ref> shows the ALICE results for the trans-max and trans-min UE regions for charged-particle multiplicity distributions in KNO variables for pp collisions at √(s)=2.76, 5.02, 7 and 13  <cit.>. 
The trans-max and trans-min regions of the UE refer to the sub-transverse regions with the largest and smallest charged-particle multiplicity, which have an enhanced sensitivity to ISR-FSR and UE, respectively <cit.>. In the trans-max region, within 20%, the KNO-like scaling is observed in a wider range of multiplicity (0<z<4) relative to the results reported in <cit.>, while for higher z values (z > 4) the scaling is broken. It is worth noticing that for trans-max both contributions are considered: UE and ISR-FSR. If the effect of ISR-FSR is suppressed, i.e., exploiting the features of the trans-min region, the KNO-like scaling also holds for 0 < z < 4, while for z > 4 the KNO-like scaling is still broken but a higher z reach is achieved; especially for z>6, a larger violation is observed. Events with high-multiplicity jets can contribute to the large violation of the scaling properties. It was observed that for z > 3 the number of uncorrelated seeds (or MPI) deviates from the linear trend, suggesting the presence of high-multiplicity jets <cit.>. Multiplicity distributions may be characterized by their normalized C_q-moments, where q is a positive integer, studied here for the values 2, 3, 4 and 5, for NSD events. The results obtained by different experiments for the C_q-moment dependence on √(s) are shown in Fig. <ref>. For the three pseudorapidity intervals |η| < 0.5, |η| < 1.0 and |η| < 1.5, C_2 remains constant over the energy range, C_3 shows a small increase with increasing energy for the two largest η intervals, and C_4 and C_5 show an increase with increasing energy, which becomes stronger for larger η intervals. These ALICE data are in agreement with UA5 <cit.> and CMS <cit.>. The results of the KNO scaling research based on the data of the ALICE, CMS and ATLAS experiments have been analysed. The shape evolution of the multiplicity distributions with collision energy at ATLAS is studied in terms of KNO scaling variables at √(s) from 0.9 to 13 in the inclusive region |η| < 2.5. The KNO scaling and C_q-moments were studied by CMS at √(s) from 0.9 to 7 in the central pseudorapidity region |η| < 0.5 and the more inclusive region |η| < 2.4, and by ALICE at √(s) from 0.9 to 8 in three pseudorapidity regions: |η| < 0.5, |η| < 1.0 and |η| < 1.5. The charged-particle multiplicity distributions on the KNO scale for all experiments have a similar shape and decrease with increasing collision energy. For all experiments the KNO scaling is violated for energies from 0.9 to 7 when the more inclusive pseudorapidity regions are taken into account. The ATLAS data demonstrate the tendency for the KNO scaling to become independent of energy at the highest energies. The CMS results show that the KNO scaling holds for the central pseudorapidity region, |η| < 0.5, and is independent of the energy from √(s) = 0.9 to 7 , because the C_q-moments demonstrate independence of energy and the shape of the KNO function is similar. The situation is different for the inclusive region |η| < 2.4, where the C_q-moments demonstrate a linear increase with energy. The ALICE results show KNO scaling violation for all pseudorapidity regions in the energy range from √(s) = 0.9 to 8 , because the C_q-moments increase linearly with log√(s). Ratios of the KNO distributions between the smallest √(s) = 0.9  and 8  reach the maximum value at z ≈ 0.5 and the minimum value at high multiplicity at z ≈ 4.5, z ≈ 5.5 and z ≈ 6.0 for the pseudorapidity intervals |η| < 1.5, |η| < 1.0 and |η| < 0.5, respectively.
There is an intersection point for all distributions at z ≈ 2. The shapes at √(s) =7 and 8 are similar and agree within error bars. The ALICE results show the tendency for the KNO scaling to become independent of energy at the highest energies. Therefore, an investigation of the KNO scaling at energies higher than 13 is important. The validity of KNO scaling is shown more quantitatively in Fig. <ref>(a) for the wider pseudorapidity region and for the smaller pseudorapidity region, |η| < 0.5, in Fig. <ref>(a) by the normalized order-q moments C_q of the multiplicity distribution, complemented with measurements of the lower-energy experiments NA22 <cit.> and UA5 <cit.>. For |η| < 0.5 the values of C_q remain constant over the full CM energy range, as illustrated by the fits in Fig. <ref>(a). The KNO-scaling study by ALICE is carried out for the NSD event class only, so that SD events, which may have a different behaviour, are not included in the data samples. The ALICE data are consistent with the UA5 p p̅ measurements at 0.9  <cit.>. The energy dependence of the reduced moments C_q shown in Fig. <ref>(b) indicates a slight increase, which is not significant given the size of the systematic uncertainties. Systematic uncertainties are assumed to be uncorrelated between energies. § CONCLUSIONS ATLAS studied MB events in pp interactions at the CM energies √(s) = 0.9, 2.36, 7, 8 and 13  for the absolute pseudorapidity region less than 2.5 in five separate PS regions, n_ch≥ 2, p_T > 100  and n_ch≥ 1, 6, 20, 50, p_T > 500 , recorded in 2010 – 2015. The data were taken in the special configuration of the LHC with low beam currents and reduced beam focusing, producing a low mean number of interactions per bunch-crossing in the range 0.003 – 0.007. The charged-particle multiplicity dependences on pseudorapidity, charged-particle multiplicity and transverse momentum, as well as the dependence of the mean transverse momentum on multiplicity, were presented for the study of the soft-QCD phenomena. The measured distributions are presented as inclusive-inelastic distributions within a given PS region with minimal model-dependent corrections to facilitate the comparison with models. These variables are tuned in the MC event generators using the MB measurements of the LHC and Tevatron experiments, because there is a variability in the modelling since non-perturbative QCD is used. The results are compared to the predictions of more than ten MC models tuned to a wide range of measurements. This review reported that the multiplicity distribution is not described perfectly by any of the models; there are large discrepancies, especially at large multiplicities. Having observed similar discrepancies at all measured energies, we conclude that for every collision energy, model parameters usually need to be re-tuned in every MC generator. Reasonable agreement of the tunes used in the MC models with the data was presented. The EPOS LHC, PHOJET, QGSJET-II, Pythia 6 and Pythia 8 models have difficulties in describing the whole spectrum of the data, but the best agreement is achieved with EPOS. A new ATLAS Pythia 8 A3 tune was presented for predictions at Run 3 of the LHC. The comparisons of the charged-particle multiplicity and the average transverse momentum distributions on the basis of the scaled multiplicity using the LHC experiment results were presented.
The charged-particle multiplicity distributions on the KNO scale have a similar shape and decrease with increasing energy. The KNO scaling was studied using the results of the LHC experiments. A test of the KNO scaling between 0.9 and 13  confirms that the KNO scaling violation increases with decreasing collision energy. The KNO distributions tend to be independent of energy for the highest energies. The mean transverse momentum on the KNO scale has the same shape and increases with increasing energy. § ACKNOWLEDGEMENTS We thank the ATLAS collaboration for the excellent experimental results which were used in this review. Special thanks go to Edward K. Sarkisyan-Grinbaum and Stanislav Tokar for very productive discussions. We are grateful to Pavel Tsiareshka for the technical support.
http://arxiv.org/abs/2307.07347v1
20230714135513
Achieving unidirectional propagation of twisted magnons in a magnetic nanodisk array
[ "Zhixiong Li", "Xiansi Wang", "Xuejuan Liu", "Peng Yan" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
School of Physics, Central South University, Changsha, 410083, China School of Physics and Electronics, Hunan University, Changsha, 410082, China College of Physics and Engineering, Chengdu Normal University, Chengdu 611130, China School of Physics and State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu 610054, China [Corresponding author: ][email protected] School of Physics and State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu 610054, China Twisted magnons (TMs) have great potential applications in communication and computing owing to the orbital angular momentum (OAM) degree of freedom. Realizing the unidirectional propagation of TMs is the key to design functional magnonics devices. Here we theoretically study the propagation of TMs in one-dimensional magnetic nanodisk arrays. By performing micromagnetic simulations, we find that the one-dimensional nanodisk array exhibits a few bands due to the collective excitations of TMs. A simple model by considering the exchange interaction is proposed to explain the emerging multiband structure and theoretical results agree well with micromagnetic simulations. Interestingly, for a zigzag structure, the dispersion curves and propagation images of TMs show obvious nonreciprocity for specific azimuthal quantum number (l), which originates from a geometric effect depending on the phase difference of TMs and the relative angle between two adjacent nanodisks. Utilizing this feature, one can conveniently realize the unidirectional propagation of TMs with arbitrary nonzero l. Our work provides important theoretical references for controlling the propagation of TMs. Achieving unidirectional propagation of twisted magnons in a magnetic nanodisk array Peng Yan August 12, 2023 ==================================================================================== § INTRODUCTION Ever since the quantized orbital angular momentum (OAM) states were originally introduced in photonics <cit.>, the peculiar twisted structure has been rapidly extended to a broad field of electronics <cit.>, acoustics <cit.>, neutronics <cit.>, and spintronics <cit.>. In magnetic system, the magnons (quantized quasiparticle of spin wave) carrying OAM are called twisted magnons (TMs) <cit.>. The researches about the OAM states of magnons have attracted growing interest owing to both the fundamental interest and potential applications. By using the twisted phase structure of TMs as individual information channels, it is possible to realize the frequency-division multiplexing which can greatly enhance the communication capacity of magnons <cit.>. It has been proposed that the TMs can act as “magnetic tweezers" to drive the rotation of spin texture (such as skyrmion) <cit.>. Very recently, Wang et al. <cit.> showed that the magnonic frequency comb emerges in the nonlinear interaction between TMs and magnetic vortex. Twisted magnonics focuses on the generation, propagation, manipulation, and detection of TMs. To design various functional devices based on TMs, realizing the unidirectional propagation of TMs with arbitrary OAM quantum number (l) is one of the central tasks. Generally speaking, TMs only exist in magnetic nanocylinder and nanodisk. On the one hand, Jiang et al. <cit.> and Jia et al. <cit.> have theoretically studied the spectrum of TMs in a single magnetic nanocylinder. 
In such configuration, it is however difficult to excite the TMs with a specific l because of the multiband structure. On the other hand, although the intrinsic dynamics of TMs in single magnetic nanodisk has been investigated <cit.>, the collective dynamics of TMs in nanodisk arrays is rarely explored. The magnetic nanodisk array is an ideal platform for studying the collective propagation of TMs with the following reasons: (i) The desired lattice structure based on magnetic nanodisks can be fabricated within the reach of current experimental techniques, for example, electron-beam lithography <cit.>. (ii) It is convenient to excite TMs with arbitrary l in nanodisk arrays by means of the so-called spin-to-orbital angular momentum conversion mechanism <cit.>. (iii) For two- or three-dimensional nanodisk lattice, one may realize the chiral propagation of TMs with topological features. It is thus naturally expected that the collective excitations of TMs in nanodisks array can exhibit abundant physics (unidirectional propagation for instance), which should provide important theoretical references for designing functional magnonic devices. In this work, we study the collective dynamics of TMs in one-dimensional magnetic nanodisk arrays. For a straight lattice, the system supports a few symmetric magnon bands describing different collective excitation modes of TMs. A simple exchange model is proposed to explain the emergence of multiband structure. Interestingly, for the zigzag structure, the TM dispersion relations can exhibit visible nonreciprocity. These asymmetric bands are explained by a geometric effect: when the phase difference of TMs does not match the geometric angle (θ) [see Fig. <ref>(b)], the nonreciprocity occurs. It allows us to realize unidirectional propagation of TMs for any nonzero l by tuning θ. In addition, we find that the propagation direction of TMs can be conveniently tuned by changing the sign of l or the position of excitation field. Our results provide a simple and effective method to control the propagation of TMs which should greatly promote the development of twisted magnonics. The paper is organized as follows. In Sec. <ref>, we present micromagnetic simulations for collective excitations of TMs in straight one-dimensional nanodisk lattices. Section <ref> introduces the theoretical model to explain the emerging multiband structures of TMs. In Sec. <ref>, we focus on the unidirectional propagation of TMs in a zigzag nanodisk array. Discussion and conclusion are drawn in Sec. <ref>. § MICROMAGNETIC SIMULATION We consider a straight one-dimensional lattice consisting of 101 identical magnetic nanodisks with radius r=50 nm and thickness d=2 nm, as shown in Fig. <ref>. The distance between nearest-neighboring nanodisks is 2r, which indicates that the TMs can interact with each other through the exchange interaction. The material parameters of yttrium iron garnet (YIG) are used <cit.>: the saturation magnetization M_s=1.92×10^5 Am^-1, the exchange stiffness A=3.1×10^-12 Jm^-1, and the Gilbert damping constant α=10^-3. The magnetic moments are perpendicularly magnetized by external magnetic field H_0=400 mT. The cell size is set to be 2×2×2 nm^3. The micromagnetic software package MUMAX3 <cit.> is used to simulate the magnetization dynamics. 
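As an illustrative companion to the geometry just described (and not the MUMAX3 input script itself), the short Python sketch below lays out the 101 disk centres with radius r = 50 nm and centre-to-centre spacing 2r on the 2 nm grid and builds a boolean mask for each disk, which is convenient for later post-processing of disk-averaged quantities.

```python
# Sketch of the one-dimensional disk-array geometry described above:
# 101 disks of radius 50 nm, nearest-neighbour (centre-to-centre) distance 2r,
# discretised with 2 nm cells. Illustrative helper only, not MUMAX3 input.
import numpy as np

N_DISKS = 101
RADIUS = 50e-9          # disk radius (m)
CELL = 2e-9             # cell size (m)
SPACING = 2 * RADIUS    # centre-to-centre distance

# Disk-centre x coordinates, centred on the middle disk (index 50)
centres_x = (np.arange(N_DISKS) - N_DISKS // 2) * SPACING

# Grid covering the whole chain (x) and one disk diameter (y)
nx = int(np.ceil((N_DISKS * SPACING) / CELL))
ny = int(np.ceil(2 * RADIUS / CELL))
x = (np.arange(nx) - nx / 2) * CELL
y = (np.arange(ny) - ny / 2) * CELL
X, Y = np.meshgrid(x, y, indexing="ij")

# masks[j] is True inside disk j; useful for disk-averaged quantities
masks = [(X - cx) ** 2 + Y ** 2 <= RADIUS ** 2 for cx in centres_x]
```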
To excite the collective oscillation of TMs, we apply a sinc-function magnetic field 𝐇(t)=H_1sin[2π f_0(t-t_0)]/2π f_0(t-t_0)[cos(lϕ),sin(lϕ),0], with H_1=40 mT, f_0=15 GHz (cutoff frequency), and t_0=1 ns, confined to the disk located at the center of the lattice, as labeled by the black arrow in Fig. <ref>(a). Here ϕ is the polar angle. The spatiotemporal profile of magnetizations in all nanodisks are recorded every 20 ps and the total simulation time is 200 ns. The dispersion relation of TMs is obtained by calculating the spatiotemporal fast Fourier transformation (FFT) of the averaged (over the whole disk) magnetization x-component ⟨ m_x⟩ (or y-component). For every azimuthal (OAM) quantum number l, we can calculate the spectrum. To get the full band structure, we sum the spectra for all l. Figure <ref> shows the results, from which we can clearly see that the system exhibits five separate dispersion curves below 15 GHz, as marked by blue arabic number 1-5. Besides, by analyzing the spatial distribution of the FFT intensity for these bands, we can identify five different TMs modes, as shown in right column of Fig. <ref>. For each dispersion relations, at the bottom (top) of bands, the adjacent TMs oscillate in-phase (out-of-phase), which is similar to other (quasi-)particles system. Interestingly, we find that the signs of the group velocity are opposite when l is even (bands 1, 3, and 4) and odd (bands 2 and 5) for the same value of wave vector k. § THEORETICAL MODEL To explain the emerging multiband structure of TM, we propose a theoretical model which is similar to the framework of massless Thiele's equation <cit.>. Here the dynamics of TM can be described by analogous Thiele's equation based on the following facts. At first, due to the distinctive mode profile of TM (see Fig. <ref>), it is reasonable to use a wavepacket description. Then we consider the position of the peak (or trough) to represent the TM in nanodisk because of the circular symmetry, as denoted by blue ball in Fig. <ref>(a) [here (l,s)=(2,0)]. At last, we envision that the steady-state magnetization of the nanodisk only depends on the position of TM. Assuming the displacement vector of TM from the disk center in jth nanodisk as 𝐔_j=(u_j,v_j), we obtain the dynamic equation characterizing TM as Gẑ×d𝐔_j/dt+𝐅_j=0, where G is a gyroscopic coefficient depending on both the values l and s. The conservative force can be expressed as 𝐅_j=-∂ W/∂𝐔_j. Here W denotes the total potential energy W=∑_jK𝐔_j^2/2+W_d+W_z+W_e. The first term at the right hand in Eq. (<ref>) originates from the confinement of disk boundary, while the terms W_d, W_z, and W_e represent the potential energy from magnetostatic, Zeeman and exchange interactions, respectively. Then we consider the excitation of TM, i.e., 𝐦=(m_x,m_y,1) with m_x^2+m_y^2≪ 1, with 𝐦 being the unit vector of the local magnetic moment. On the one hand, it is straightforward that the Zeeman energy -μ_0M_s∫ H_0ẑ·𝐦d𝐫=-Nμ_0M_s is a constant. Here, μ_0 is vacuum permeability and N is the number of magnetic moment. On the other hand, the magnetostatic energy can also be treated as constant value under the linear approximation (see Appendix A for details). At last, we assume that the exchange energy takes the simple form <cit.> W_e=∑_k∈⟨ j ⟩I𝐔_j·𝐔_k, where I is the coupling coefficient. Then the total potential energy becomes the following form W=W_0+∑_jK𝐔_j^2/2+∑_k∈⟨ j ⟩I𝐔_j·𝐔_k, where W_0=W_d+W_z denotes the constant term of energy, ⟨ j⟩ is the set of nearest neighbors of j. 
Substituting Eq. (<ref>) into Eq. (<ref>) and assuming ψ_j=u_j+iv_j, we obtain the eigen-equation dψ_j/dt+iC_1ψ_j+iC_2(ψ_j-1+ψ_j+1)=0, with parameters C_1=K/G and C_2=I/G. Then we consider the plane-wave expansion of ψ_j=ϕ_jexp(-iω t)exp[i(n𝐤·𝐚)], where 𝐤 is the wave vector, n is an integer, and 𝐚=ax̂ is the basis vector with a=100 nm representing the lattice constant. We thus obtain the dispersion relation of TM ω=C_1+2C_2cos(𝐤·𝐚). Then we use the formula (<ref>) to fit the dispersion curves of TM obtained from micromagnetic simulation. The dashed black lines in Fig. <ref> shows the best fit of the numerical data, from which we can clearly see that the theoretical curves agree well with simulations for small l or s (bands 1, 2, and 3). However, for larger values of l or s (bands 4 and 5), there exists obvious discrepancy between theoretical value and micromagnetic result, which may come from the fact that the form of exchange energy (<ref>) is too simple to accurately describe the interaction between TMs with high l (or s). The fitting parameters C_1 and C_2 for different l or s are summarized in Table <ref>. Overall, C_1 and C_2 are sensitive to l and s: (i) The parameter C_1 is always positive, while C_2 is negative (positive) when l takes an even (odd) number. (ii) With the increase of l (s) for fixed s (l), the magnitude of C_1 and C_2 increases. § UNIDIRECTIONAL PROPAGATION OF TMS Next, we discuss the propagation characteristics of TMs in zigzag structure, as shown in Fig. <ref>(b). By changing the value of θ, one can tune the geometric shape of the lattice. We first choose θ=2π/3 as an example. Interestingly, in this case, the dispersion relations of TM show obvious nonreciprocity for l=1 and l=2 [see Figs. <ref> and <ref>], which is in a sharp contrast to the straight structure. We focus on this feature in this section. Figures <ref>(c) and <ref>(d) show the band structures of TM with (l,s)=(2,0) for straight and zigzag lattice, respectively. Here the excitation fields with the form of Eq. (<ref>) are applied to disk 51 (the center disk). One can clearly see that the FFT strength of dispersion curves are symmetric for +k and -k in the straight lattice, while it shows visible asymmetric feature for the zigzag case. Besides, we plot the spectra of the magnetization (m_x) oscillation at disks 45 and 57 [as marked in Figs. <ref>(a) and <ref>(b)], as shown in Figs. <ref>(e) and <ref>(f), from which one can identify again the existence of nonreciprocity for TMs propagation in zigzag lattice. What's more, the TM with (l,s)=(1,0) also exhibits the similar behaviors, as plotted in Fig. <ref>. The band structures [Figs. <ref>(a) and <ref>(b)] and the disk spectra [Figs. <ref>(c) and <ref>(d)] clearly show that the propagation of TMs [(l,s)=(1,0)] is nonreciprocal (reciprocal) for zigzag (straight) shape. However, for l=0 and l=3, the dispersion relations are symmetric in both zigzag and straight lattice (see Appendix B for details). To further visualize the nonreciprocal propagation of TMs, we choose one representative frequency: f_1=9.19 GHz for (l,s)=(2,0), as marked by red lines in Fig. <ref>. We then simulate the dynamics of TMs by the excitation field 𝐁(t)=B_0sin(2π f_1t)[cos(lϕ),sin(lϕ),0], with B_0=1 mT applied at the center disk, indicated by the black arrows in Fig. <ref>. Figure <ref>(b) shows the propagation of TMs in the zigzag structure, from which one can clearly observe the unidirectional propagation of TMs. 
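Before turning to the comparison with the straight lattice, we note that the cosine-band fit of Eq. (<ref>) can be reproduced with a short script; the sketch below assumes that band points (k_i, f_i) extracted from the simulated spectra are available (placeholder values are used here) and employs scipy's curve_fit with the lattice constant a = 100 nm.

```python
# Sketch of fitting the tight-binding-like band f(k) = C1 + 2*C2*cos(k*a)
# to band points extracted from the simulated spectra. The (k, f) arrays
# below are hypothetical placeholders, not the simulated data.
import numpy as np
from scipy.optimize import curve_fit

A_LATTICE = 100e-9  # lattice constant a = 100 nm

def band(k, c1, c2):
    """Dispersion f(k) = C1 + 2*C2*cos(k*a), with C1, C2 in frequency units."""
    return c1 + 2.0 * c2 * np.cos(k * A_LATTICE)

# Placeholder band points (k in rad/m, f in GHz):
k_pts = np.linspace(-np.pi / A_LATTICE, np.pi / A_LATTICE, 41)
f_pts = 8.1 + 2.0 * 0.15 * np.cos(k_pts * A_LATTICE)  # toy data

(c1_fit, c2_fit), cov = curve_fit(band, k_pts, f_pts, p0=(8.0, 0.1))
```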
For comparison, we also plot the propagation images of TMs in the straight lattice, as shown in Fig. <ref>(a), which shows a symmetric spread. Interestingly, we find that for the zigzag structure, the propagation direction of TMs can be reversed by changing the sign of l [see Fig. <ref>(c)] or the position of the excitation field [see Fig. <ref>(d)]. The physical mechanism of the symmetric and asymmetric TM dispersion relations can be explained as a geometric effect. For l=0, because the phase structure of the TM is symmetric along any radial direction [see Fig. <ref>], the propagation of TMs is symmetric for both the straight and zigzag lattice, as shown in Fig. <ref>. It is worth noting that this conclusion always holds for any value of θ (here the condition π/3<θ≤π should be satisfied to guarantee that there is no overlap between neighboring nanodisks; if θ=π/3, each disk is tangent to the four surrounding disks and the system is no longer a simple one-dimensional structure). For l≠ 0, we define β=π/l to represent the angle between the nearest-neighbor azimuthal nodes of the TM. First, we must stress the fact that the TM can spread to the adjacent disk only when the contact point is not at a node of the TM. Considering that the TM is excited in a nanodisk, when the left contact point is (not) located at a node of the TM, the right contact point is also (not) located at a node, if θ is an integer multiple of β. In this case, the dispersion relation is symmetric. However, if θ is not an integer multiple of β, the two contact points cannot both be at nodes simultaneously, and the dispersion relation is thus asymmetric. These conclusions can be used to explain our results. On the one hand, for the straight lattice, i.e., θ=π, no matter what value l takes, θ is always an integer multiple of β. Therefore, the dispersion relations for all l are reciprocal [see Fig. <ref>]. On the other hand, for the zigzag structure considered in our paper, i.e., θ=2π/3, the situation is different. When l=1, β=π, and θ=2π/3 is not an integer multiple of β. Naturally, the dispersion relation is asymmetric for (l,s)=(1,0) [see Fig. <ref>]. A similar analysis applies for l=2: in this case β=π/2, again θ is not an integer multiple of β, and the dispersion relation is thus asymmetric for (l,s)=(2,0) [see Fig. <ref>]. However, when l=3, β=π/3, we have θ=2β; therefore, the band structure is reciprocal for (l,s)=(3,0) [see Figs. <ref>(i) and <ref>(l)]. Here the spectra show a little nonreciprocity, which originates from the fact that the software MUMAX3 is based on the finite difference method, and the position of contact is thus not a strict point. Finally, it is worth noting that the propagation direction of the unidirectional TMs depends on both the sign of l and the position of the excitation field [we use P=1 (P=-1) to denote the excitation field located at the lower (upper) disks]. Concretely, when sgn(l)sgn(P)=1 (or -1), the TMs propagate leftward (or rightward). Based on the above analysis, we can easily infer that when one contact point is located at a node and the other contact point is located at a peak (or trough), the nonreciprocity of TMs reaches its maximum. In this case, θ=(2n+1)π/(2l) with n=0,1,2,3,… (note that the condition π/3<θ≤π should be satisfied simultaneously). We can therefore realize the unidirectional propagation of TMs for any nonzero l by tuning θ.
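The geometric criterion formulated above can be condensed into a short numerical check; the helper below is an illustrative sketch that tests whether θ is an integer multiple of β=π/l and lists the maximal-nonreciprocity angles θ=(2n+1)π/(2l) lying in the allowed interval π/3<θ≤π.

```python
# Sketch of the geometric reciprocity criterion discussed above.
import numpy as np

def is_reciprocal(theta, l, tol=1e-9):
    """Dispersion is symmetric when theta is an integer multiple of beta = pi/l.
    For l = 0 the propagation is always reciprocal."""
    if l == 0:
        return True
    beta = np.pi / l
    ratio = theta / beta
    return abs(ratio - round(ratio)) < tol

def maximal_nonreciprocity_angles(l):
    """Angles theta = (2n+1)*pi/(2l) lying in the allowed range pi/3 < theta <= pi."""
    if l == 0:
        return []
    angles = [(2 * n + 1) * np.pi / (2 * l) for n in range(2 * l)]
    return [t for t in angles if np.pi / 3 < t <= np.pi]

# Example: the zigzag lattice with theta = 2*pi/3
theta = 2 * np.pi / 3
print([(l, is_reciprocal(theta, l)) for l in range(4)])   # reciprocal for l = 0, 3
print(maximal_nonreciprocity_angles(2))                   # e.g. 3*pi/4 for l = 2
```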
§ DISCUSSION AND CONCLUSION Research on twisted magnonics is still at a very early stage, and many questions remain to be answered and many new physical phenomena remain to be discovered. For example, by constructing the Su-Schrieffer-Heeger <cit.> and Haldane <cit.> models based on magnetic nanodisks, we can realize topological edge states of TMs, which may have great potential for designing topologically protected high-capacity communication devices. Besides, the interaction between TMs and various spin textures (for example, skyrmions, vortices, and domain walls) also deserves careful investigation, which may lead to peculiar physical phenomena, for example, the magnetic frequency comb <cit.>. To conclude, we have studied the collective excitations of TMs in one-dimensional magnetic nanodisk arrays. For a straight lattice, by performing micromagnetic simulations, we identified multiple symmetric bands which characterize different collective modes of TMs. A theoretical model was proposed to explain the band structure, and the results agree well with the simulations. For the zigzag structure, we found that the TM dispersion relations for l=1 and l=2 show obvious nonreciprocity, which does not occur for l=0 and l=3. The propagation characteristics (reciprocal or nonreciprocal) of these bands result from a geometric effect: when θ is (not) an integer multiple of β (=π/l), the dispersion relation is symmetric (asymmetric). Utilizing this principle, we can achieve unidirectional propagation of TMs with any nonzero l. Our work provides a simple and effective method to manipulate the propagation of TMs, which should be helpful for designing useful TM devices. § ACKNOWLEDGMENTS We thank Z. Wang and H. Y. Yuan for helpful discussions. This work was supported by the National Key Research and Development Program under Contract No. 2022YFA1402802 and the National Natural Science Foundation of China (NSFC) (Grants No. 12074057, No. 11604041, and No. 11704060). Z.-X.L. acknowledges financial support from the NSFC (Grant No. 11904048) and the Natural Science Foundation of Hunan Province of China (Grant No. 2023JJ40694). X.S.W. was supported by the NSFC (Grants No. 12174093 and No. 11804045) and the Fundamental Research Funds for the Central Universities. X. L. acknowledges the support from the Talent Introduction Program of Chengdu Normal University under Grant No. YJRC2021-14. § APPENDIX A: THE MAGNETOSTATIC ENERGY OF THE SYSTEM In a magnetic nanodisk array, the magnetostatic energy between two arbitrary magnetic moments (𝐌_1 and 𝐌_2) can be expressed as ε(𝐫)=μ_0/4π[𝐌_1·𝐌_2/r^3-3(𝐌_1·𝐫)(𝐌_2·𝐫)/r^5], where 𝐫 is the position vector from 𝐌_1 to 𝐌_2. In our model, all magnetic moments are perpendicularly magnetized along the z-axis, so we can define 𝐫=(rcosγ,rsinγ), with r the magnitude of the position vector 𝐫 and γ the angle between 𝐫 and the x-axis. When a TM is excited, 𝐌_1=M_s(m_1^x,m_1^y,1) and 𝐌_2=M_s(m_2^x,m_2^y,1), and Eq. (<ref>) can be simplified as ε(𝐫)=μ_0M_s^2/4π r^3(1+b), with the parameter b=m_1^xm_2^x(1-3cos^2γ)+m_1^ym_2^y(1-3sin^2γ)-3sinγcosγ(m_1^xm_2^y+m_1^ym_2^x). Under the linear approximation, we have b=0, and ε(𝐫)=μ_0M_s^2/4π r^3, which means that the magnetostatic energy between two arbitrary magnetic moments is a constant. As a result, the total magnetostatic energy of the system remains invariant when TMs are excited. § APPENDIX B: THE BAND STRUCTURES FOR L=0 AND L=3 Figure <ref> plots the dispersion relations and disk spectra for l=0 and l=3 obtained with the help of micromagnetic simulations.
One can clearly see that the propagations of TMs are absolutely symmetric [see Figs. <ref>(a)-<ref>(f)] in a straight lattice. For the zigzag lattice, the bands and spectra show symmetric characteristics for l=0 [see Figs. <ref>(g), <ref>(h), <ref>(j), and <ref>(k)]. There exists a little nonreciprocity for l=3 [see Figs. <ref>(i) and <ref>(l)], which comes from the calculation errors because of the finite difference method (also see related discussions in the main text). We thus conclude that the propagations of TMs are reciprocal for l=0 and l=3 in both straight and zigzag structure. 99 AllenPRA1992L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, and J. P. Woerdman, Orbital angular momentum of light and the transformation of Laguerre-Gaussian laser modes, https://journals.aps.org/pra/abstract/10.1103/PhysRevA.45.8185Phys. Rev. A 45, 8185 (1992). MolinaNP2007G. Molina-Terriza, J. P. Torres, and L. Torner, Twisted photons, https://www.nature.com/articles/nphys607Nat. Phys. 3, 305 (2007). PadgettOE2017M. J. Padgett, Orbital angular momentum 25 years on, https://www.osapublishing.org/oe/fulltext.cfm?uri=oe-25-10-11265 id=363720Opt. Express 25, 11265 (2017). JiS2020Z. Ji, W. Liu, S. Krylyuk, X. Fan, Z. Zhang, A. Pan, L. Feng, A. Davydov, and R. Agarwal, Photocurrent detection of the orbital angular momentum of light, https://www.science.org/doi/abs/10.1126/science.aba9192Science 368, 763 (2020). FrankeNRP2022S. Franke-Arnold, 30 years of orbital angular momentum of light, https://www.nature.com/articles/s42254-022-00467-xNat. Rev. Phys. 4, 361 (2022). UchidaN2010M. Uchida and A. Tonomura, Generation of electron beams carrying orbital angular momentum, https://www.nature.com/articles/nature08904Nature (London) 464, 737 (2010). VerbeeckN2010J. Verbeeck, H. Tian, and P. Schattschneider, Production and application of electron vortex beams, https://www.nature.com/articles/nature09366Nature (London) 467, 301 (2010). McmorranS2011B. J. Mcmorran, A. Agrawal, I. M. Anderson, A. A. Herzing, H. J. Lezec, J. J. McClelland, and J. Unguris, Electron vortex beams with high quanta of orbital angular momentum, https://www.science.org/doi/10.1126/science.1198804Science 331, 192 (2011). SilenkoPRL2017A. J. Silenko, P. Zhang, and L. Zou, Manipulating Twisted Electron Beams, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.243903Phys. Rev. Lett. 119, 243903 (2017). LloydRMP2017S. M. Lloyd, M. Babiker, G. Thirunavukkarasu, and J. Yuan, Electron vortices: Beams with orbital angular momentum, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.89.035004Rev. Mod. Phys. 89, 035004 (2017). DashtiPRL2006P. Z. Dashti, F. Alhassen, and H. P. Lee, Observation of Orbital Angular Momentum Transfer between Acoustic and Optical Vortices in Optical Fiber, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.96.043604Phys. Rev. Lett. 96, 043604 (2006). AnhauserPRL2017A. Anhäuser, R. Wunenburger, and E. Brasselet, Acoustic Rotational Manipulation Using Orbital Angular Momentum Transfer, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.109.034301Phys. Rev. Lett. 109, 034301 (2012). BareschPRL2018D. Baresch, J.-L. Thomas, and R. Marchiano, Orbital Angular Momentum Transfer to Stably Trapped Elastic Particles in Acoustical Vortex Beams, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.121.074301Phys. Rev. Lett. 121, 074301 (2018). MarzoPRL2018A. Marzo, M. Caleap, and B. W. 
Drinkwater, Acoustic Virtual Vortices with Tunable Orbital Angular Momentum for Trapping of Mie Particles, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.120.044301Phys. Rev. Lett. 120, 044301 (2018). BliokhPRB2019K. Y. Bliokh and F. Nori, Spin and orbital angular momenta of acoustic beams, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.99.174310Phys. Rev. B 99, 174310 (2019). ClarkN2015C. W. Clark, R. Barankov, M. G. Huber, M. Arif, D. G. Cory, and D. A. Pushin, Controlling neutron orbital angular momentum, https://www.nature.com/articles/nature15265Nature (London) 525, 504 (2015). CappellettiPRL2018R. L. Cappelletti, T. Jach, and J. Vinson, Intrinsic Orbital Angular Momentum States of Neutrons, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.120.090402Phys. Rev. Lett. 120, 090402 (2018). LarocqueNP2018H. Larocque, I. Kaminer, V. Grillo, R. W. Boyd, and E. Karimi, Twisting neutrons may reveal their internal structure, https://www.nature.com/articles/nphys4322xNat. Phys. 14, 1 (2018). AfanasevPRC2019A. V. Afanasev, D. V. Karlovets, and V. G. Serbo, Schwinger scattering of twisted neutrons by nuclei, https://journals.aps.org/prc/abstract/10.1103/PhysRevC.100.051601Phys. Rev. C 100, 051601(R) (2019). SherwinPLA2022J. A. Sherwin, Scattering of slow twisted neutrons by ortho- and parahydrogen, https://www.sciencedirect.com/science/article/abs/pii/S0375960122001840Phys. Lett. A 437, 128102 (2022). JiangPRL2020Y. Jiang, H. Y. Yuan, Z.-X. Li, Z. Wang, H. W. Zhang, Y. Cao, and P. Yan, Twisted Magnon as a Magnetic Tweezer, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.124.217204Phys. Rev. Lett. 124, 217204 (2020). JiaNC2019C. Jia, D. Ma, A. F. Schäffer, and J. Berakdar, Twisted magnon beams carrying orbital angular momentum, https://www.nature.com/articles/s41467-019-10008-3Nat. Commun. 10, 2077 (2019). ChenAPL2020M. Chen, A. F. Schäffer, J. Berakdar, and C. Jia, Generation, electric detection, and orbital-angular momentum tunneling of twisted magnons, https://aip.scitation.org/doi/abs/10.1063/5.0005764Appl. Phys. Lett. 116, 172403 (2020). JiaJO2019C. Jia, D. Ma, A. F. Schäffer, and J. Berakdar, Twisting and tweezing the spin wave: on vortices, skyrmions, helical waves, and the magnonic spiral phase plate, https://iopscience.iop.org/article/10.1088/2040-8986/ab4f8e/metaJ. Opt. 21, 124001 (2019). LiPRB2022Z.-X. Li, Z. Wang, Y. Cao, and P. Yan, Generation of twisted magnons via spin-to-orbital angular momentum conversion, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.105.174433Phys. Rev. B 105, 174433 (2022). WangPRL2022Z. Wang, H. Y. Yuan, Y. Cao, and P. Yan, Twisted Magnon Frequency Comb and Penrose Superradiance, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.129.107203Phys. Rev. Lett. 129, 107203 (2022). HanSR2013D. S. Han, A. Vogel, H. Jung, K. S. Lee, M. Weigand, H. Stoll, G. Schütz, P. Fischer, G. Meier, and S. K. Kim, Wave modes of collective vortex gyration in dipolar-coupled-dot-array magnonic crystals, https://www.nature.com/articles/srep02262Sci. Rep. 3, 2262 (2013). BehnckePRB2015C. Behncke, M. Hänze, C. F. Adolff, M. Weigand, and G. Meier, Band structure engineering of two-dimensional magnonic vortex crystals, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.91.224417Phys. Rev. B 91, 224417 (2015). SunPRL2013L. Sun, R. X. Cao, B. F. Miao, Z. Feng, B. You, D. Wu, W. Zhang, A. Hu, and H. F. 
Ding, Creating an Artificial Two-Dimensional Skyrmion Crystal by Nanopatterning, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.110.167201Phys. Rev. Lett. 110, 167201 (2013). VansteenkisteAA2014A. Vansteenkiste, J. Leliaert, M. Dvornik, M. Helsen, F. Garcia-Sanchez, and B. V. Waeyenberge, The design and verification of MuMax3, https://aip.scitation.org/doi/10.1063/1.4899186AIP Adv. 4, 107133 (2014). ThielePRL1973A. A. Thiele, Steady-State Motion of Magnetic Domains, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.30.230Phys. Rev. Lett. 30, 230 (1973). LiPRB2021Z.-X. Li, Z. Wang, Z. Zhang, Y. Cao, and P. Yan, Third-order topological insulator in three-dimensional lattice of magnetic vortices, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.103.214442Phys. Rev. B 103, 214442 (2021). SuPRL1979W. P. Su, J. R. Schrieffer, and A. J. Heeger, Solitons in Polyacetylene, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.42.1698Phys. Rev. Lett. 42, 1698 (1979). HaldanePRL1988F. D. M. Haldane, Model for a Quantum Hall Effect without Landau Levels: Condensed-Matter Realization of the “Parity Anomal”, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.61.2015Phys. Rev. Lett. 61, 2015 (1988).
http://arxiv.org/abs/2307.05268v1
20230711140206
Temporal Graphs Anomaly Emergence Detection: Benchmarking For Social Media Interactions
[ "Teddy Lazebnik", "Or Iny" ]
cs.SI
[ "cs.SI", "cs.IR" ]
Temporal Graphs Anomaly Emergence Detection: Benchmarking For Social Media Interactions Teddy Lazebnik^1 and Or Iny^2 ^1 Department of Cancer Biology, Cancer Institute, University College London, London, UK ^2 Department of Economy, The Academic College of Tel Aviv–Yaffo, Tel Aviv, Israel * Corresponding author: [email protected] Temporal graphs have become an essential tool for analyzing complex dynamic systems with multiple agents. Detecting anomalies in temporal graphs is crucial for various applications, including identifying emerging trends, monitoring network security, understanding social dynamics, tracking disease outbreaks, and understanding financial dynamics. In this paper, we present a comprehensive benchmarking study that compares 12 data-driven methods for anomaly detection in temporal graphs. We conduct experiments on two temporal graphs extracted from Twitter and Facebook, aiming to identify anomalies in group interactions. Surprisingly, our study reveals an unclear pattern regarding the best method for such tasks, highlighting the complexity and challenges involved in anomaly emergence detection in large and dynamic systems. The results underscore the need for further research and innovative approaches to effectively detect emerging anomalies in dynamic systems represented as temporal graphs. Keywords: Dynamic systems; social interactions; anomaly detection; emerging trends; group interactions. § INTRODUCTION The analysis of complex dynamic systems with multiple agents has gained significant attention in various fields, such as social networks <cit.>, biological systems <cit.>, and transportation networks <cit.>. Recently, temporal graphs have gained much attention as a fundamental framework for capturing the dynamic nature of these systems, enabling the study of evolving relationships and interactions over time <cit.>. Representing systems as temporal graphs is considered straightforward in most cases, which makes them a robust and appealing data structure to use <cit.>. Anomalies in temporal graphs can manifest as unexpected shifts in network behavior, sudden changes in interaction patterns, or the emergence of unusual group dynamics <cit.>. These anomalies often provide valuable insights into significant events, emerging phenomena, or potentially malicious activities within the underlying system. Detecting emerging anomalies in such temporal graphs has become a critical task with wide-ranging applications, including identifying credit fraud <cit.>, identifying social trends <cit.>, and understanding cell-level biological processes <cit.>.
Consequently, developing effective methods for anomaly emergence detection in temporal graphs enables reacting to shifts in the dynamics in close temporal proximity to their emergence, or even immediately. Several approaches have been proposed to tackle the challenge of anomaly emergence detection in general <cit.>, and in temporal graphs in particular <cit.>. These approaches span statistical methods, machine learning algorithms, and graph-based techniques, each leveraging different assumptions and models to capture the unique characteristics of temporal graph data <cit.>. However, due to the complexity and inherent uncertainty associated with detecting anomalies in dynamic systems, identifying the most suitable method for a specific application remains mostly unclear. In this paper, we present a comprehensive benchmarking study that focuses on the task of anomaly emergence detection in temporal graphs, with a specific emphasis on social media interactions. Social media platforms, such as Twitter and Facebook, provide rich sources of temporal graph data, capturing the dynamic interactions among individuals, groups, and communities that can shed light on social and economic trends in real-time. Detecting anomalies in group interactions within these platforms holds immense value in understanding influential events, collective behaviors, and the spread of information. In particular, we evaluated 12 state-of-the-art methods that represent a diverse range of approaches and techniques employed in the field. By conducting experiments on two temporal graphs obtained from Twitter and Facebook, we seek to investigate the performance of these methods in identifying anomalies in group interactions within the context of social media. Our findings present an unexpected outcome: an unclear pattern emerges regarding the best-performing method for anomaly emergence detection in social media interactions. This outcome underscores the need for further research and the development of novel techniques tailored to the unique characteristics of social media data. This paper is structured as follows. Section <ref> provides an overview of the temporal graph data structure as well as the formalization of anomaly emergence detection. Next, section <ref> describes the methodology and experimental setup employed in our benchmarking study. Subsequently, section <ref> presents the performance of each method on the Twitter and Facebook temporal graphs. Finally, section <ref> analyzes our findings and suggests potential future studies. § RELATED WORK Temporal graphs have gained significant attention in various domains as a means to capture the evolving relationships and interactions in complex dynamic systems <cit.>. In this section, we provide a formalization of temporal graphs followed by the anomaly emergence detection task definition. Temporal (also known as dynamic, evolving, or time-varying) graphs can be informally described as graphs that change with time. A temporal graph is a mathematical representation of a dynamic system that captures both the structural properties of a graph and the temporal aspects of interactions between entities. Formally, a temporal graph can be defined as follows.
Let G = (V, E, T) be a temporal graph, where V ∈ℕ^k represents the set of nodes or entities in the graph, represented as finite state machines with k ∈ℕ possible states, E ⊂ V × V ×ℝ denotes the set of edges such that each edge e ∈ E := (u, v, t) represents an interaction between nodes u and v at time t, and T ∈ℕ is the set of discrete time points or intervals at which the interactions occur. Intuitively, one can represent a temporal graph as a set of timestamped edges, G = {(u, v, t) | (u, v) ∈ E, t ∈ T}, that implicitly indicates the nodes of the graph and their interactions over time. Though the formal treatment of temporal graphs is still in its infancy, there is already a large identified set of applications and research domains that motivate it and that could benefit from the development of a concrete set of results, tools, and techniques for temporal graphs <cit.>. In the domain of biological systems, for instance, gene regulatory networks can be represented as temporal graphs, where nodes correspond to genes and edges capture interactions between genes at different time points, which allows the study of gene expression patterns <cit.>. Indeed, <cit.> proposed an inference algorithm based on linear ordinary differential equations. The authors show that the algorithm can infer the local network of gene–gene interactions surrounding a gene of interest from time-series gene expression profiles of synthetic genomics samples. In addition, in the transportation systems realm, nodes of a temporal graph can represent locations, and edges capture movements or interactions between locations at different time points, providing an intuitive formalization to analyze traffic flows and congestion patterns <cit.>. For example, <cit.> propose a framework that enables extending the traditional convolutional neural network model to graph domains and learns the graph structure for traffic forecasting. Most relevant for this work, temporal graphs can capture the evolving relationships between individuals, communities, and groups over time. They enable the study of social phenomena, such as information diffusion <cit.>, opinion formation <cit.>, and community detection <cit.>. <cit.> propose a dynamic graph-based framework that leverages the dynamic nature of the users' network for detecting fake news spreaders. Using their model, the authors show that by analyzing the users' time-evolving semantic similarities and social interactions, one can detect the spread of misinformation. While there are many possible queries one can perform on a temporal graph, we focus on detecting anomalies over time in close temporal proximity to when they start to emerge. Namely, the anomaly emergence detection (AED) task aims to identify and characterize anomalous events or patterns in temporal graphs and alert about them shortly after they start to occur. Since anomalies can manifest in many forms, such as unexpected changes in the interaction patterns, shifts in network behavior, or the emergence of unusual group dynamics, the AED task's definition is, in practice, closely related to the definition of an anomaly. Abstractly, we can assume the anomaly's definition is implicitly provided by the tagging of anomalies in a given dataset <cit.>. Mathematically, the AED task can be defined as follows.
Let G be a temporal graph and let A = {a_1, a_2, …, a_n} represent the set of anomalies in G such that a_i := (U_i, T_i), where U_i ⊂ V is a subset of nodes representing the entities involved in the anomaly, and T_i ∈ T is the point in time that indicates the start of the anomaly's emergence. The AED task is concerned with finding a function M that accepts G and a subset A_train := (a_1, a_2, …, a_k) and predicts A_test := (a_k+1, …, a_n). For example, let us consider a temporal graph that represents a transportation network's dynamics, where nodes represent physical locations and edges represent the movement of vehicles between these locations over time. An anomaly can be sudden and unexpected traffic congestion in a location or set of locations, which could be caused by an accident or an unplanned road closure. In this example, one can use historical records of such events and the data about the transportation network to try to predict the emergence of unexpected traffic congestion. § EXPERIMENT SETUP In this section, we outline the experimental setup used for our benchmarking, including six main steps. To conduct the benchmarking study, we carefully selected 12 data-driven models that encompass a wide range of computational approaches. Our aim was to ensure that these models represent the current state-of-the-art in the field, to the best of our knowledge. In the following sections, we provide a detailed description of each model, including its working principles and the rationale behind our selection. * Tree-based pipeline optimization tool (TPOT) <cit.> - is an automated machine learning (AutoML) framework that optimizes a pipeline of preprocessing steps and machine learning models using genetic programming, based on the Scikit-learn library <cit.>. * AutoKeras <cit.> - is an automated machine learning framework that uses neural architecture search to automatically select and optimize deep learning models based on the TensorFlow framework <cit.>. * Time Series Anomaly Detection Using Generative Adversarial Networks (TADGAN) <cit.> - is a model that uses generative adversarial networks (GANs) to detect anomalies in time series data. We include TADGAN in the analysis to explore the effectiveness of the GAN framework for anomaly detection, as it can capture both local and global patterns in the temporal graph data. * Deep Isolation Forest (DIF) <cit.> - is an extension of the Isolation Forest algorithm <cit.> that uses deep learning techniques to improve anomaly detection performance. * Long short-term memory (LSTM) neural network <cit.> - is a type of recurrent neural network (RNN) that can model sequential data and capture long-term dependencies. It has the ability to learn temporal dependencies in the data without taking into consideration the graph-based nature of the data. * Policy-based reinforcement learning for time series anomaly detection <cit.>. This model applies reinforcement learning techniques to train a policy network for anomaly detection in time series data. It is an adaptive approach that learns through trial and error, which potentially allows it to detect complex and evolving anomalies. * XGBoost for anomaly detection (XGBOD) <cit.> - is an anomaly detection algorithm based on the XGBoost gradient boosting framework <cit.>. XGBoost is widely considered one of the best machine learning models.
* A Python library for graph outlier detection (Pygod) <cit.> - Pygod is a Python library specifically designed for detecting outliers in graph-structured data. * Graph AutoEncoder with Random Forest (GAE+RF) <cit.>. This model combines a graph autoencoder, which obtains a meaningful representation of the data from the graph and operates as a feature engineering component, with a random forest (RF) classifier. * Singular Value Decomposition with Random Forest (SVD+RF) <cit.> - This model combines the singular value decomposition method, which operates as an unsupervised feature engineering component, with a random forest classifier. * Spatio-Temporal Graph Neural Networks (STGNN) <cit.> - is a model that integrates graph neural networks (GNNs) with spatial and temporal information for anomaly detection in spatio-temporal data. * Scalable Python Library for Time Series Data Mining (STUMPY) <cit.> - is a Python library that provides scalable algorithms for time series data mining, including motif discovery and time series approximation. In addition, we include a Random model that simply decides at random whether an anomaly occurs or not, to serve as a naive baseline. We acquire data from the Twitter[<https://developer.twitter.com/en/docs/twitter-api>] and Facebook[<https://developers.facebook.com/docs/graph-api/>] social media websites using their official application programming interfaces (APIs). We picked these two social media websites as they provide access to the interaction data between their users over time. In addition to capturing user profiles, we also collected information about user interactions with posts (tweets) on both platforms. This included data on actions such as re-tweeting, commenting, and reacting (liking) to posts. For each interaction, we recorded the type of action, the timestamp, and the ID of the post owner. Overall, our dataset consisted of 44.8 thousand users from Twitter and 29.7 thousand users from Facebook, encompassing a total of 51.07 million and 65.93 million interactions, respectively. The data covered a duration of one month, specifically from the 22nd of August to the 22nd of September, 2020, and the 1st of February to the 1st of March, 2023, respectively. In order to generate the temporal graph representation of this data, one has to define the nodes and edges first. To this end, each account in the dataset represents a node v ∈ V in the graph, while an action (like, comment, share) that an account v ∈ V performs on a post of account u ∈ V at some time t ∈ℕ represents an edge e := (v, u, t). Based on this definition, we obtain a directed temporal graph. For simplicity, we bin all actions into time windows of 15 minutes, in order to get a representation that agrees with a temporal sequence of graphs, since the chosen models (see Section <ref>) require such a representation. Moreover, in order to obtain a population of temporal graphs from each dataset, we sampled 100 sub-graphs as follows (see the sketch after this paragraph). First, we picked at random a node of the graph, denoted by v_c. Next, starting from v_c, we computed a Breadth-first search (BFS) <cit.> while ignoring the time (t) component of the edges e ∈ E (and the duplicate edges caused as a result) until |V| = 10000 nodes were obtained. Once the nodes were obtained, we trimmed the temporal graph representing the entire dataset to include only these nodes. Since we do not have anomalies tagged on these temporal graphs, we had to generate them synthetically.
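The following is a minimal, illustrative sketch of the graph construction and sub-graph sampling procedure described above (not the authors' code); the interaction record layout, the column names, and the use of networkx are assumptions made for the example.

import random
import networkx as nx
import pandas as pd

BIN = 15 * 60  # 15-minute bins, in seconds

def build_temporal_edges(interactions: pd.DataFrame):
    """interactions: rows with columns (actor, post_owner, timestamp_sec)."""
    edges = []
    for actor, owner, ts in interactions.itertuples(index=False):
        t = int(ts) // BIN  # discretized time index
        edges.append((actor, owner, t))
    return edges

def sample_subgraph(edges, n_nodes=10_000, seed=0):
    """Pick a random seed node, run BFS on the static (time-ignored) graph,
    then keep only temporal edges among the first n_nodes nodes reached."""
    static = nx.Graph()
    static.add_edges_from((u, v) for u, v, _ in edges)
    rng = random.Random(seed)
    v_c = rng.choice(list(static.nodes))
    kept = set()
    for node in nx.bfs_tree(static, v_c):
        kept.add(node)
        if len(kept) >= n_nodes:
            break
    return [(u, v, t) for u, v, t in edges if u in kept and v in kept]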
Importantly, these synthetic tags have to be computed from information that is not fully available to the models; otherwise, one would just examine the models' ability to reconstruct the rules used to generate the synthetic tags. As such, inspired by the works of <cit.>, we define three anomaly rules. For all of them, let us consider a node v ∈ V at a time t ∈ℕ to be an anomaly if and only if: N_t(v) > E_t-z, t+z[N(v)] + 2*S_t-z, t+z[N(v)], or ∑_i = t-z^t+z d^2N_i(v)/di^2 > ∑_i = t-z^t+z 1/N_i(v)∑_u ∈ C_i(v) dN_i(u)/di, or the largest eigenvalue of a matrix representing node v's number of interactions with the rest of the nodes between t-z and t+z is larger than 1, where C_t(v) := {∀ u: (u, v, t) ∈ E}, N_t(v) := |C_t(v)|, z ∈ℕ is a window size, E_a, b(x) is the mean value of x such that t ∈ [a, b], and S_a,b(x) is the standard deviation of x such that t ∈ [a, b]. Based on these anomalies, for each instance of a temporal graph, we computed the weighted F_1 score <cit.> using each one of the models; a sketch of the first rule and of the evaluation protocol is given below. For all models, we used the first 80% of the temporal samples of each temporal graph instance to train the model, while using the remaining 20% for the evaluation. Importantly, each model's prediction is made for the next step in time, and the window size is obtained for each model using the grid search method <cit.>, ranging from 1 to 2z. Afterward, for each model, we conducted four sensitivity analysis tests, measuring the effect of changing one parameter of the task on each model's performance: the prediction lag, temporal concept drift, spatial size, and spatial density. Formally, we increase the prediction lag from 1 to z with steps of 1. For the temporal concept drift, at each step in time t, with a probability p ∈ [0, 0.001, …, 0.01], all edges that are connected to a node v are removed from the temporal graph. The spatial size sensitivity test was conducted by repeating the temporal graph instance construction but with 9500 + 100i nodes, such that i ∈ [0, …, 10]. Finally, the spatial density test was implemented by adding |E_0|t · i · 10^-5 edges to the graph at time t, where i ∈ [1, 10]. § RESULTS Fig. <ref> summarizes the main results obtained, where Figs. <ref> and <ref> show the weighted F_1 score of each model for the Twitter and Facebook datasets, respectively. The results are shown as the mean ± standard deviation of n=100 instances for each dataset. Upon examining the results, it becomes evident that the Facebook dataset consistently yielded lower performance, on average, compared to the Twitter dataset. This observation holds true when comparing each individual model's performance within the dataset, as well as when considering the collective performance of all the models. In addition, focusing on Fig. <ref>, we can see that STGNN provides the best results with 0.735 ± 0.037, followed by STUMPY with 0.718 ± 0.088 and DIF with 0.709 ± 0.048. All three of these models are approaches that have been specifically designed for anomaly detection. In contrast, Fig. <ref> reveals that TADGAN obtained the best results with 0.652 ± 0.055, followed by DIF with 0.649 ± 0.081 and STUMPY with 0.625 ± 0.075, showing some consistency in the results. Similarly, the LSTM and SVD with RF models consistently performed worse compared to the other models.
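As an illustration, the sketch below implements the first anomaly rule (a node's interaction count exceeding the window mean by more than two standard deviations) and the 80%/20% temporal split with a weighted F_1 score; the data layout and the use of scikit-learn are assumptions for the example, not the authors' implementation.

import numpy as np
from sklearn.metrics import f1_score

def rule_one_labels(counts: np.ndarray, z: int) -> np.ndarray:
    """counts[v, t] = number of in-neighbors N_t(v); label 1 if
    N_t(v) > mean(N(v)) + 2*std(N(v)) over the window [t-z, t+z]."""
    n_nodes, n_steps = counts.shape
    labels = np.zeros_like(counts, dtype=int)
    for t in range(n_steps):
        lo, hi = max(0, t - z), min(n_steps, t + z + 1)
        window = counts[:, lo:hi]
        mu, sd = window.mean(axis=1), window.std(axis=1)
        labels[:, t] = (counts[:, t] > mu + 2 * sd).astype(int)
    return labels

def evaluate(pred: np.ndarray, truth: np.ndarray) -> float:
    """Weighted F1 on the last 20% of time steps (train on the first 80%)."""
    split = int(0.8 * truth.shape[1])
    return f1_score(truth[:, split:].ravel(), pred[:, split:].ravel(),
                    average="weighted")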
However, the performance order of the remaining models varied inconsistently between the two cases, indicating that the relative performance of these models is not consistently predictable or generalizable across different datasets or scenarios. Furthermore, the sensitivity analysis results for each model are summarized in Table <ref>, which is divided into four sensitivity tests; the values presented represent the average change in performance, as measured by the weighted F_1 score, resulting from variations in the parameters investigated in each sensitivity test. § DISCUSSION AND CONCLUSION In this study, we conducted a comprehensive benchmarking analysis to compare 12 data-driven methods for anomaly emergence detection in temporal graphs, with a specific focus on social media interactions. We evaluated the performance of these methods on two temporal graphs obtained from Twitter and Facebook, aiming to identify anomalies in pairwise and group interactions alike. The comparison of the various anomaly detection methods on both the Twitter and Facebook datasets (Figs. <ref> and <ref>) has yielded surprising results. Despite employing different computational approaches, several methods achieved statistically similar results while demonstrating inconsistency between the two datasets. This finding highlights the complex nature of anomaly detection in temporal graphs and the challenges associated with generalizing results across different platforms. For instance, we observed that the TPOT automatic machine-learning framework performed as the 9th-best model for the Twitter dataset, while ranking as the 7th-best for the Facebook dataset. This discrepancy emphasizes the need for tailored approaches and the consideration of dataset-specific characteristics when selecting the most effective anomaly detection method. Unsurprisingly, anomaly detection algorithms tailored to time series and graph data, such as STGNN and STUMPY, outperformed general-purpose models such as AutoKeras and LSTM-based neural networks. This outcome highlights the advantage of leveraging the inherent temporal dependencies and graph structures present in the data for improved anomaly detection performance. More generally, deep learning models seem to outperform other types of models. This can be explained by the ability of these models to capture more complex spatio-temporal connections in the data <cit.>. The inconsistency observed in the performance order of models across datasets further emphasizes the importance of dataset-specific exploration and evaluation. Different social media platforms exhibit unique characteristics in terms of user behaviors, network dynamics, and information propagation patterns. Indeed, the patterns of interactions differ significantly between Twitter and Facebook <cit.>, leading to variations in the effectiveness of the methods. This outcome further supports the well-known no-free-lunch theorem, as we were not able to find a single clear model that outperforms all others, even on a small sample of only two datasets <cit.>. In the same manner, these results agree with a similar benchmarking analysis conducted for unsupervised outlier node detection on static attributed graphs <cit.>. More interestingly, Table <ref> shows that different models excel in different tests.
Generally speaking, the models designed for anomaly detection are more sensitive to temporal concept drift and spatial density, while for the prediction lag and spatial size, the general-purpose models were found to decrease in performance faster. This research contributes to a better understanding of the complexities and challenges associated with anomaly detection in large and dynamic systems represented as temporal graphs. Future work should continue to explore novel techniques and methodologies that can effectively address these challenges and provide more robust anomaly detection solutions for diverse real-world applications. This study is not without limitations. First, the evaluation was conducted on a limited number of datasets, which may not fully capture the diversity and complexity of social media interactions. Furthermore, the anomalies used in this study are synthetic, due to the time and resource burden of tagging such events in real data. As such, our results might change slightly given realistic anomaly tagging. Commonly, data-driven models in general, and anomaly detection models in particular, benefit from the introduction of domain knowledge <cit.>. As such, it is of great interest how the proposed results would change if domain knowledge were integrated into the examined models. These limitations provide fertile ground for future studies of temporal graph anomaly emergence detection. § DECLARATIONS §.§ Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. §.§ Conflicts of interest/Competing interests None. §.§ Data availability The data used as part of this study is available upon reasonable request from the authors. §.§ Acknowledgement The author wishes to thank Tom Hope for inspiring this research and implicitly suggesting several of the models used as part of this study. §.§ Author Contribution Teddy Lazebnik: Conceptualization, Data Curation, Methodology, Software, Validation, Formal analysis, Investigation, Writing - Original Draft, Writing - Review & Editing. Or Iny: Conceptualization, Data Curation, Validation, Resources, Writing - Review & Editing.
http://arxiv.org/abs/2307.04829v1
20230710180755
General wetting energy boundary condition in a fully explicit non-ideal fluids solver
[ "Chunheng Zhao", "Alexandre Limare", "Stephane Zaleski" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
Chunheng Zhao^a, Alexandre Limare^a, and Stephane Zaleski^a,b ^a Sorbonne Université and CNRS, Institut Jean Le Rond d'Alembert UMR 7190, F-75005 Paris, France ^b Institut Universitaire de France, Paris, France We present an explicit finite difference method to simulate non-ideal multi-phase fluid flow. The local density and the momentum transport are modeled by the Navier-Stokes (N-S) equations, and the pressure is computed by the Van der Waals equation of state (EOS). Simulations of a static droplet and of the dynamics of liquid-vapor separation are performed as validations of this numerical scheme. In particular, to maintain thermodynamic consistency, we propose a general wetting energy boundary condition at the contact line between fluids and the solid boundary. We conduct a series of comparisons between the current boundary condition and the constant contact angle boundary condition as well as the stress-balanced boundary condition. This boundary condition alleviates the instability induced by the constant contact angle boundary condition at θ≈0 and θ≈π. Using this boundary condition, the equilibrium contact angle is correctly recovered, and the contact line dynamics are consistent with simulations applying a stress-balanced boundary condition. Nevertheless, unlike the stress-balanced boundary condition, for which we need to further introduce the interface thickness parameter, the current boundary condition implicitly incorporates the interface thickness information into the wetting energy. * Energy consistent boundary condition for single species multi-phase Van der Waals' model * Explicit finite difference method with adaptive mesh refinement Keywords: Van der Waals; energy consistent boundary condition; explicit finite difference § INTRODUCTION Fluids spreading on solids are practical multi-phase systems in the real world <cit.>. Industrial applications of solid wetting research can be found in 3D printing <cit.>, nucleate boiling <cit.>, and surface material construction <cit.>. Numerical simulations of the wetting problem are made extremely difficult, or even impossible, by the existence of a very wide range of scales, from the macroscopic down to the nanometric <cit.>. When it comes to models such as level-set and volume of fluid (VOF), which treat the interface between two fluids as a sharp interface, the no-slip boundary condition contradicts the actual behavior observed in droplet spreading <cit.>. To address the limitation of those methods on the moving contact line, researchers have implemented explicit Navier-slip or implicit numerical slip boundary conditions <cit.>. Nevertheless, the boundary conditions associated with the sharp interface method introduce nonphysical dynamics and prove ineffective in handling small or large contact angles. Hence, it is worth considering the diffuse interface method, a thermodynamically consistent mathematical model for multi-phase systems, to effectively simulate the dynamics of contact lines <cit.>. The diffuse interface method introduces energy dissipation, enabling the modeling of droplet spreading even with a no-slip or small-slip-length boundary condition <cit.>. In the vicinity of a diffused contact line, the bulk free energy and the surface energy determine the contact line profile as well as the fluid flow <cit.>. Moreover, the boundary condition within the diffuse interface method can be described as a wetting energy to ensure thermodynamic consistency <cit.>.
By employing the diffuse interface method, it becomes possible to accurately simulate contact angles, regardless of whether they are small or large in magnitude. A well-known classical diffuse interface method is derived from the Van der Waals (VDW) equation of state (EOS) for a single species, (p+aρ^2)(1/ρ-b)=RT, with classical notations, where a and b are modification parameters of the molecular interaction and the molecular volume, respectively <cit.>. Under the pressure- and energy-driven mechanism, the VDW method is able to separate the single species into two phases, one with higher density and the other with lower density. Compared to the Cahn-Hilliard (C-H) method, the VDW method has some distinct characteristics worth noting. First, the VDW method describes the single-species phase change, where the interface is indicated by the local density ρ, while the C-H method describes the physical situation of a binary system of two essentially immiscible species, and the interface profile is normally steeper than in the VDW method. In addition, in the VDW model, the bulk energy density is computed from the entropy and the molecular interaction, and can be represented by the equation ρ f_0=- ρ RTlog(1/ρ-b)-aρ^2. In contrast, the bulk free energy density adopted in the C-H model is a double-well fourth-order polynomial ρ f_0=β (ρ-ρ_l)^2(ρ-ρ_g)^2, where β denotes the constant bulk energy coefficient, and ρ_l, ρ_g are the saturated liquid and gas densities. One of the benefits of using the C-H type energy form is that it allows us to accurately represent the flat interface profile at equilibrium using a hyperbolic tangent function. Additionally, the C-H energy form enables us to explicitly determine the interface thickness and the surface tension <cit.>. However, the inclusion of a fourth-order partial differential equation greatly amplifies the intricacy of the problem, thereby intensifying the difficulty of numerically simulating the C-H equation. Conversely, the VDW method offers a viable diffuse interface approach that is not only comparatively efficient but also valid. Over the past few decades, extensive research has been conducted to numerically investigate the diffuse interface model of single-species multi-phase systems <cit.>, and various boundary condition methods have been employed in the context of the diffuse interface model <cit.>. The stress-balanced boundary condition, as proposed in <cit.>, takes into account a smooth variation of surface tension at the diffused interface along the solid boundary. Moreover, from a thermodynamic perspective, the energy-consistent boundary condition can be applied in the diffuse interface method <cit.>. It establishes a connection between the bulk free energy and the wetting energy at the boundary, ensuring a uniform interface thickness as the system reaches thermodynamic equilibrium. The above-mentioned boundary conditions are based on the C-H type bulk free energy formulation. In this case, the wetting energy and the surface tension can be evaluated without having to compute an integral. However, for the VDW energy form, the values of the interface thickness and the surface tension are not explicit. In order to obtain the surface tension, we need to further compute the integral along the surface numerically, which makes it challenging to apply the mentioned boundary conditions. In recent years, a constant contact angle boundary condition <cit.> and a chemical potential based boundary condition have been employed for the VDW single species model <cit.>.
As we shall see, the constant contact angle boundary condition induces an instability at equilibrium contact angles θ_eq≈0 or θ_eq≈π. In addition, the boundary condition used in <cit.> is applied to the pseudopotential LBM method. In this approach, the exact determination of the contact angle requires several free parameters, which adds complexity when utilizing other simulation methods. In this study, analogous to the energy-consistent boundary condition used in the C-H model, we provide a general energy-consistent boundary condition for the VDW single species multi-phase model <cit.>. The boundary condition ensures energy consistency and allows for a uniform interface profile as the equilibrium contact angle is approached. To solve the Navier-Stokes equations, which incorporate a Korteweg stress form to model the surface effect, we employ a fully explicit finite-difference method <cit.>. This finite difference scheme enables easy implementation of adaptive mesh refinement, which further enhances the computational efficiency of our approach. We perform a comparison of various boundary condition methods and present the wetting energy for different equilibrium contact angles and interface thickness parameters. Furthermore, we validate our numerical scheme by showcasing two benchmark problems: a single static droplet and the dynamics of liquid-vapor separation. The energy evolution for Laplace numbers La=[10,1000] is shown for a single static droplet, and the evolution of the average domain length in the dynamics of liquid-vapor separation is provided. § METHODOLOGY In this section, we provide an introduction to the mathematical model utilized in this work. We begin by presenting the governing equations and the thermodynamic energy of the system. With a focus on the energy aspect, we derive the wetting energy and proceed to compare different boundary condition methods based on their profiles while simulating a simple one-dimensional (1-D) equilibrium surface. §.§ Governing Equations The governing equations employed in our study consist of the compressible Navier-Stokes equations, incorporating the Korteweg stress surface tension force, along with the equation of state (EOS) <cit.>. These formulations can be expressed as: ∂ρ/∂ t+∇·(ρ𝐮)=0, ∂ρ𝐮/∂ t+∇·(ρ𝐮⊗𝐮)= ∇·(σ_v+σ_s-p𝐈), p=ρ RT(1/(1-bρ)-aρ/(RT)). Eqs. (<ref>) and (<ref>) are the continuity and momentum equations. Here, the operator ⊗ represents the tensor product operation. Eq. (<ref>) is the VDW EOS, from which we can obtain the pressure and close this non-ideal gas system. In Eq. (<ref>), ρ denotes the local density of the liquid or gas phase, and 𝐮 is the velocity vector. In Eq. (<ref>), p𝐈 is the pressure tensor, where 𝐈 is the identity matrix, σ_v= η[(∇𝐮+∇^T𝐮)-2/3(∇·𝐮)𝐈] represents the viscous stress tensor, and σ_s= λ[(1/2|∇ρ|^2+ρ∇^2ρ)𝐈-∇ρ⊗∇ρ] is the surface stress tensor. Within these equations, η is the local viscosity, while λ corresponds to the surface energy coefficient. It should be noted that the thermodynamic pressure p can be determined from Eq. (<ref>), where R denotes the universal gas constant, T is the temperature, and a, b are two gas constants that signify the intermolecular attraction and the volume modification ratio, respectively. We can rearrange Eq. (<ref>) in dimensionless form: p'=8ρ'T'/(3-ρ')-3ρ'^2, where p'=p/p_c, ρ'=ρ/ρ_c, and T'=T/T_c are the dimensionless forms of the pressure, density, and temperature.
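As a quick illustration of the dimensionless EOS above, the following sketch (an example under stated assumptions, not part of the authors' solver) evaluates p'(ρ',T') and numerically checks that ρ'=1, T'=1 is the critical point, where both ∂p'/∂ρ' and ∂²p'/∂ρ'² vanish.

import numpy as np

def p_reduced(rho, T):
    """Dimensionless Van der Waals EOS: p' = 8*rho'*T'/(3 - rho') - 3*rho'^2."""
    return 8.0 * rho * T / (3.0 - rho) - 3.0 * rho**2

# Finite-difference check of the critical point (rho' = 1, T' = 1)
h = 1e-4
dp = (p_reduced(1 + h, 1.0) - p_reduced(1 - h, 1.0)) / (2 * h)
d2p = (p_reduced(1 + h, 1.0) - 2 * p_reduced(1.0, 1.0) + p_reduced(1 - h, 1.0)) / h**2
print(f"p'(1,1) = {p_reduced(1.0, 1.0):.4f}, dp'/drho' = {dp:.2e}, d2p'/drho'^2 = {d2p:.2e}")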
p_c=3/8ρ_c R T_c, ρ_c=1/3b, and T_c=8a/27Rb are critical pressure, density and temperature. For our simulations, we select the values a=3 and b=1/3, leading to ρ_c=1 and p_c=1. As the pressure term solely appears in a derivative form, we calculate ∇ p' during the simulation instead. An expression for the energy associated with the pressure term can be expressed as follows <cit.>: p=ρ^2 ∂ f_0/∂ρ, where the Helmholtz free energy per unit volume is expressed as ρ f_0. The dimensionless formula f_0'=ρ_c f_0/p_c is given by <cit.>: f_0'=-8/3 T'log(1/ρ'-1/3)-3ρ'- μ^*, where μ^* denotes the dimensionless bulk chemical potential <cit.>, which is a universal constant value in both the liquid and gas regions. The bulk chemical potential can be determined through the Maxwell construction of the pressure profile or the common tangent construction of the free energy <cit.>. §.§ Wetting energy model The energy derivation presented in <cit.> establishes a connection between the stress form and potential form surface tension force formulations. In addition, when there is a solid boundary in simulation, a wetting energy, and a constraint function were introduced to close the system. As outlined in <cit.>, we derive the boundary condition for the VDW model from an energy perspective. To incorporate the surface effect, we introduce a mixed energy density formulation, where the surface energy per unit volume is expressed as follows: e_s=λ/2|∇ρ|^2, and the mixed energy per unit volume is: e_mix=ρ f_0+e_s. In this expression, we also consider the kinetic energy per unit volume ρ e_k=1/2ρ|𝐮|^2 and the wetting energy per unit area e_w. The total energy of the system can be expressed in integral form as follows: E=∫_Ω(e_mix+ρ e_k) d𝐱+∫_∂Ωe_w ds. Considering a constant temperature, viscous dissipation is the only dissipation of the energy. The evolution of the total energy E is then ∂ E/∂ t=∫_Ω(∂ e_mix/∂ t+∂ρ e_k/∂ t) d𝐱 +∫_∂Ω∂ e_w/∂ t ds=∫_Ω𝐮·∇·σ_v d𝐱. In this equation, Ω represents the fluid-dominated region, while ∂Ω corresponds to the solid boundary. Through variable substitution and integration by parts, Eq. (<ref>) can be rearranged as follows: ∫_Ω[(∂ρ f_0/∂ρ+λ∇ρ·∇) ∂ρ/∂ t+∂ρ e_k/∂ t] d𝐱 +∫_∂Ω∂ e_w/∂ t ds = ∫_Ω(μ_mix∂ρ/∂ t+∂ρ e_k/∂ t) d𝐱+∫_∂Ω(λ∂_𝐧ρ+∂ e_w/∂ρ)∂ρ/∂ t ds, where μ_mix=δ e_mix/δρ represents the mixed chemical potential, which is obtained by taking the functional derivative of the mixed energy. To ensure non-dissipation at the boundary, we obtain the following expression: λ∂_𝐧ρ+∂ e_w/∂ρ=0, where ∂_𝐧ρ denotes the wall direction derivative of the density, ∂ e_w/∂ρ is referred to as the wetting potential. The potential surface force formulation can be derived from the volume integral part. The consistency between the potential form and the stress-form formulations can be demonstrated through the inclusion of an additional stress term: ∇·(σ_s-p𝐈)=-ρ∇μ_mix+∇·σ_ρ, where the additional stress term σ_ρ takes the form: σ_ρ=λ(|∇ρ|^2𝐈-∇ρ⊗∇ρ). By utilizing the potential surface force formulation, the presence of spurious currents can be significantly reduced to a level below the round-off limit <cit.>. For a 1-D planar simulation with σ_ρ=0, in the equilibrium state of the system, the mixed chemical potential must satisfy the following condition: μ_mix=∂ρ f_0/∂ρ-λd^2ρ/d x^2=0. When we multiply Eq. (<ref>) by dρ/d x and integrate it, the following equation can be obtained: λ/2(dρ/dx)^2=∫∂ρ f_0/∂ x dx. The first derivative of the density can then be derived as: |dρ/dx|=√(2ρ f_0/λ). 
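As an aside, the saturated densities ρ_l and ρ_g that appear throughout the following derivations follow from the Maxwell (common tangent) construction mentioned above. The sketch below is a minimal illustration, not the authors' implementation, of how they could be obtained numerically by imposing equal pressure and equal chemical potential μ = f_0 + p/ρ (which follows from p = ρ^2 ∂f_0/∂ρ, up to additive constants that cancel) in the two bulk phases; the initial guesses and the use of scipy are assumptions for the example.

import numpy as np
from scipy.optimize import fsolve

def p_reduced(rho, T):
    return 8.0 * rho * T / (3.0 - rho) - 3.0 * rho**2

def f0_reduced(rho, T):
    # Dimensionless bulk free energy per unit mass (up to additive constants)
    return -(8.0 / 3.0) * T * np.log(1.0 / rho - 1.0 / 3.0) - 3.0 * rho

def mu_reduced(rho, T):
    # Chemical potential mu = f0 + p/rho
    return f0_reduced(rho, T) + p_reduced(rho, T) / rho

def coexistence(T, guess=(1.5, 0.5)):
    """Solve p(rho_l) = p(rho_g) and mu(rho_l) = mu(rho_g)."""
    def eqs(x):
        rl, rg = x
        return [p_reduced(rl, T) - p_reduced(rg, T),
                mu_reduced(rl, T) - mu_reduced(rg, T)]
    return fsolve(eqs, guess)

rho_l, rho_g = coexistence(0.95)
print(f"T'=0.95: rho_l' ~ {rho_l:.2f}, rho_g' ~ {rho_g:.2f}")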
To extend Eq. (<ref>) to multi-dimensional problems, we make the approximation |∇ρ|≈√(2ρ f_0/λ). Considering the constraint given by Eq. (<ref>), we can derive an energy-consistent wetting energy per unit area as follows: e_w1=cosθ_eq∫_ρ_gs^ρ_ls√(2λρ f_0) dρ +C. Here, C represents a constant parameter. However, in the simulation, this constant does not affect the evolution of the contact line, so we can set C=0. In this case, the wetting energy e_w1 is characterized by two saturation densities, and these values align with the equilibrium densities in the liquid and gas phases <cit.>. Therefore, we have ρ_ls=ρ_l, ρ_gs=ρ_g. In addition to the aforementioned e_w1 formulation, there are other approaches utilized to constrain the dynamics of the contact line. One such method is the constant contact angle boundary condition, which enforces the dynamic contact angle to be equal to the equilibrium contact angle. The energy formulation for this condition is expressed as follows <cit.> e_w2=λθ_eq∫_ρ_gs^ρ_ls∂ρ/∂ x dρ +C. Another option is the stress-balanced energy formulation <cit.> e_w3=-σ/2 cosθ_eq sin(πϕ/2), where ϕ=(2ρ-ρ_l-ρ_g)/(ρ_l-ρ_g) is known as the order parameter, varying over [-1,1], and σ is the surface tension between the two fluids. In addition, a thermodynamically consistent formulation based on the pseudopotential lattice Boltzmann method can be written as <cit.> e_w4=-K_EOS K_INTγ(ρ_l-ρ_g)/(2ζ) tanh(ζϕ), where K_EOS and K_INT are scaling factors that adjust the interface thickness of the phase field method, and γ and ζ are independent parameters that determine the contact angle. Among the various wetting energy formulations, e_w2 maintains a constant contact angle throughout the evolution of the contact line. However, this approach may violate thermodynamic principles. In particular, when the equilibrium contact angle θ_eq is close to 0 or π, the simulation becomes highly unstable. On the other hand, the formulation of e_w3 is derived by considering the stress balance and minimizing the free energy, thereby ensuring the preservation of correct thermodynamics <cit.>. In order to establish the exact relationship between σ and λ, it is necessary to further determine the profile of the interface, as shown in the work by Chen et al. <cit.>. This relationship plays a crucial role in ensuring the accuracy of the contact line dynamics. In Figure <ref>, we present a comparison of the wetting potential values along the interface for different Cahn numbers, denoted as Cn=δ/L, where δ represents the initial interface thickness and L is the length of the system. Specifically, in Figure <ref> (a), (b), and (c), we consider an equilibrium contact angle of θ_eq=π/12 for varying values of Cn. It can be observed that, in the case of a small contact angle, the wetting potential of the energy e_w2 exhibits significantly higher values than those of the other two methods. Furthermore, when we increase the equilibrium contact angle to θ_eq=π/4, as depicted in Figure <ref> (c), (d), and (e), the wetting energy formulation e_w3 exhibits varying peak values for different values of Cn. It is worth noting that the density profile is represented by a hyperbolic tangent function in each case. Consequently, the relationship between σ and λ is precisely determined as σ≈0.943λ/δ when the value of δ is known. The formulation of e_w4 is heavily influenced by the parameter selection and is more suitable for specific numerical methods.
In recent studies, an implicit chemical potential boundary condition has been proposed to address the contact line problem <cit.>. Due to the fully implicit nature of the method, it becomes challenging to determine the contact angle precisely from the provided chemical potential value and temperature. In our energy boundary condition, as described in Eq. (<ref>), the computation of e_w1 through integration is required. However, in a realistic simulation, this value is not necessary. Therefore, this approach can be utilized as a general boundary condition that effectively preserves thermodynamic consistency. Additionally, the information regarding the interface thickness δ in e_w1 is implicitly incorporated into the bulk free energy, and all the essential parameters are computed locally. This approach successfully addresses the instability issues encountered in previous methods. There are linear, quadratic, and cubic wetting energy formulations based on the C-H model. However, similar to the formulation e_w3, these formulations require prior relations to evaluate the interface thickness and determine the density profile on the boundary. Therefore, we have not considered these formulations in the current work. For a detailed analysis of these formulations, refer to <cit.>. § NUMERICAL SCHEME To solve the governing equations presented in the previous section, we employ the two-step MacCormack methodology <cit.>. To begin, we define a vector 𝐟 consisting of the density ρ and momentum ρ𝐮. Then, we proceed to reconstruct the governing equations Eqs. (<ref>), (<ref>) using this vector: 𝐟 = [ ρ; ρ𝐮 ]. Eqs. (<ref>), (<ref>) can now be expressed as functions of 𝐟: ∂_t 𝐟 +∇·𝐅 (ρ,∇ρ, ∇^2ρ)=0, where 𝐅 can be further expressed as: 𝐅 = [ ρ𝐮; ρ𝐮⊗𝐮+p𝐈-σ_surf-σ_vis ]. As shown in <cit.>, Eq. (<ref>) can be solved by a prediction-and-correction (predictor-corrector) finite difference method. The time derivative is dealt with in a fully explicit manner: 𝐟^*=𝐟^n -Δ t ∇^bck·𝐅^n, 𝐟^n+1=1/2(𝐟^n+𝐟^*) - Δ t/2∇^fwd·𝐅^*. Here, ∇^fwd stands for the forward finite difference: ∇^fwdϕ(𝐱)=(ϕ(𝐱+h)-ϕ(𝐱))/h, ∇^bck is the backward finite difference: ∇^bckϕ(𝐱)=(ϕ(𝐱)-ϕ(𝐱-h))/h, and ∇^ctr represents the central finite difference: ∇^ctrϕ(𝐱)=(ϕ(𝐱+h)-ϕ(𝐱-h))/(2h). In addition, the derivatives appearing in 𝐅 are computed as 𝐅^n(ρ^n,∇^fwdρ^n,∇^2_ctrρ^n) and 𝐅^*(ρ^*,∇^bckρ^*,∇^2_ctrρ^*), respectively. Our simulation is implemented using the free software platform Basilisk, which provides a common framework for octree-structured adaptive mesh refinement methods <cit.>. Given that our method relies on finite differences and is fully explicit, the strategy for adaptive mesh refinement is straightforward. The complete code is accessible at the following link: http://basilisk.fr/sandbox/zchmacchiato/. § RESULTS In this study, the VDW model is utilized to simulate phase transformations in an isothermal single-species multi-phase system. The interface between the two phases evolves from its initial shape to the equilibrium shape, resulting in fluid flow. Our simulations aim to assess the stability, energy oscillation, and morphology changes that occur during this phase transition process. §.§ Single droplet simulation To validate the numerical method, we simulate the coexisting saturated density values at a fixed temperature. The simulation begins by initializing a single droplet with a radius of r inside a gas tank, and it continues until the system reaches an equilibrium state.
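Before describing the initialization of this test, the two-step update above can be illustrated in its simplest setting. The following is a minimal sketch (illustrative only, not the Basilisk implementation) of the MacCormack predictor-corrector scheme applied to a 1-D scalar conservation law ∂_t f + ∂_x F(f) = 0 with a linear flux F = u f on a periodic grid; a backward difference is used in the predictor and a forward difference in the corrector, as in the equations above.

import numpy as np

def maccormack_step(f, flux, dt, dx):
    """One two-step MacCormack update for df/dt + dF/dx = 0 (periodic grid)."""
    F_n = flux(f)
    # Predictor: backward difference of the flux
    f_star = f - dt * (F_n - np.roll(F_n, 1)) / dx
    F_star = flux(f_star)
    # Corrector: average with a forward difference of the predicted flux
    return 0.5 * (f + f_star) - 0.5 * dt * (np.roll(F_star, -1) - F_star) / dx

# Example: advect a smooth bump with constant velocity u
u, N, L = 1.0, 200, 1.0
dx = L / N
x = np.arange(N) * dx
f = np.exp(-200 * (x - 0.5) ** 2)
dt = 0.4 * dx / u  # CFL-limited explicit time step
for _ in range(100):
    f = maccormack_step(f, lambda q: u * q, dt, dx)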
The initial density profile is represented by a hyperbolic tangent function: ρ(𝐱,0)=ρ_l+ρ_g/2-ρ_l-ρ_g/2tanh|𝐱-𝐱_0|-r/δ. Here, |𝐱-𝐱_0| represents the distance between the local position and the droplet interface, and δ denotes the initial interface thickness, r is the radius of the initial droplet. In Figure <ref>, we compare the simulation results of the density values with the analytic solutions obtained from the Maxwell construction. Our numerical scheme accurately captures the results, which are in good agreement with the theoretical solutions. We proceed with the simulation of a single droplet in a square domain under a constant temperature T'=0.95 with periodic boundaries. Based on the results shown in Figure <ref>, the approximate saturated density of the liquid phase is ρ_l≈1.46, and that of the gas phase is ρ_g≈0.58. In this test, we do not consider the viscosity ratio. It is important to note that the exact density profile in a static solution is more complex, but it can be qualitatively represented by a hyperbolic tangent function as given in Eq.(<ref>). Consequently, the presence of different initial density values compared to the saturated density values introduces oscillations in the simulation. The initial density distribution, along with the surface tension stress, drives the droplet towards its equilibrium shape, while pressure helps separate the saturated density profile simultaneously. In an ideal scenario, with sufficient evolution time, we would expect a constant surface energy E_s=∫_Ω e_s and zero kinetic energy E_k=0. However, due to unbalanced numerical schemes and the choice of the surface force formulation<cit.>, spurious currents can occur. In this test, we characterize the system using the Laplace number, La=λρ_c R/η^2, where R is the initial radius of the droplet. With La≫1, we expect a significant surface effect that induces pronounced spurious currents when the system reaches equilibrium <cit.>. The logarithmic evolution of kinetic energy and surface energy is presented in Figure <ref>. We vary La from 10 to 1000. As the viscous force dissipates the system's energy and balances the oscillations caused by capillary waves, reducing La leads to a rapid decrease in kinetic energy. The viscous dissipation gradually consumes the energy associated with the droplet shape, causing the kinetic energy to converge to a small, constant value. In our simulations, the final kinetic energy, attributed to spurious currents, does not reach zero. However, the surface energy converges to the same value for different La values, indicating that the surface effect accelerates the system's attainment of the equilibrium profile. When La≥1000, oscillations in the energies are observed. In such high-temperature systems, the significant surface effect induces capillary waves around the phase interface. The imbalance between surface tension and thermodynamic pressure, combined with the explicit numerical scheme, leads to the generation of spurious currents, preventing the system from reaching zero kinetic energy. §.§ Dynamics of liquid-vapor separation In this section, we explore the applicability of the VDW model to the dynamics of liquid-vapor separation, aiming to assess its performance in a complex morphology-changing problem. Additionally, we have incorporated adaptive mesh refinement into the simulation to enhance efficiency. 
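The single-droplet test above is straightforward to set up; the fragment below builds the hyperbolic-tangent initial condition, evaluates the Laplace number, and computes the two quantities monitored during the run. All parameter values are illustrative, and the surface energy density is assumed to be the gradient contribution e_s=(λ/2)|∇ρ|^2, which is not spelled out in this excerpt.

```python
import numpy as np

rho_l, rho_g, rho_c = 1.46, 0.58, 1.0      # densities at T' = 0.95 (illustrative)
lam, eta = 1e-3, 5e-3                      # assumed surface coefficient and viscosity
L, N, delta, r = 1.0, 256, 0.02, 0.2       # domain, grid, interface thickness, radius

h = L / N
x = (np.arange(N) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")
dist = np.hypot(X - 0.5 * L, Y - 0.5 * L)  # |x - x0| to the droplet centre

# hyperbolic-tangent initial condition of the equation above
rho = 0.5 * (rho_l + rho_g) - 0.5 * (rho_l - rho_g) * np.tanh((dist - r) / delta)
u = np.zeros((2, N, N))                    # quiescent start

print("Laplace number La =", lam * rho_c * r / eta ** 2)

def energies(rho, u, lam, h):
    # kinetic energy and (assumed) gradient surface energy, integrated over the box
    Ek = 0.5 * np.sum(rho * (u[0] ** 2 + u[1] ** 2)) * h * h
    gx, gy = np.gradient(rho, h, h)
    Es = 0.5 * lam * np.sum(gx ** 2 + gy ** 2) * h * h
    return Ek, Es

print("E_k, E_s at t = 0:", energies(rho, u, lam, h))
```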
When a single species is subjected to a temperature close to the critical temperature and possesses a random density profile, the pressure and surface stress act as driving forces, leading the mixture to undergo phase separation. This results in coarsening dynamics and the formation of two distinct phases: one with a higher density and the other with a lower density. In the VDW model, the phase separation solely relies on the equilibrium density corresponding to specific temperatures. This enables the system to minimize the free energy during its evolution. As depicted in Figure <ref> (a), we initiate the simulation by introducing a single species with a random density fluctuation within a 2D square domain. The boundaries of the domain are set as periodic conditions to ensure continuity. The initial density profile is defined as follows: ρ(𝐱,0)=ρ_c+0.2ρ_c (rand), where the amplitude of random fluctuation is set to 0.2 ρ_c. The random number for generating the fluctuations is obtained from the random seed rand=[-1,1]. The phase separation is characterized by the growth of the domain length scale, defined as L=L_0^2/χ_m, where L_0^2 represents the area of the square domain, and χ_m=⟨ C^2(1-C)^2⟩ is the space average quantity parameter associated with the concentration of the gas phase, denoted as C=(ρ-ρ_g)/(ρ_l-ρ_g) <cit.>. In our previous work, we utilized the explicit method to investigate the dynamics of liquid-vapor separation under constant temperature conditions <cit.>. When the system temperature was set to T'=0.85, simulation results exhibited a growth rate characterized by L∼(t-t_0)^0.7, which was close to but slightly higher than the (t-t_0)^2/3 growth rate reported by Miranda et al. <cit.>. In the present study, we simulate the dynamics of liquid-vapor separation under T'=0.95 with a Laplace number of La=0.04. In this example, the simulations are performed with adaptive meshes using the feature of Basilisk <cit.>. The smallest (dimensionless) cell size, Δ, used is 0.0039 in order to fully resolve the liquid-vapor interface. The results presented here are obtained by averaging over 5 runs with different random initial density configurations. The evolution of the mixture's morphology at different time steps is shown in Figure <ref>. Over time, the complexity of the mixture gradually diminishes, and the influence of surface tension becomes prominent, resulting in the formation of circular liquid droplets in the later stages. In Figure <ref>, we compare the simulation results of the domain length scale L evolution when La=0.04 with the corresponding theoretical solutions depicted by the red dashed curve. It can be observed that our simulation results exhibit a close agreement with the theoretical prediction L∼ (t-t_0)^2/3. §.§ Equilibrium contact angle and energy evolution In the previous section, we compared various boundary condition methods based on their profiles along the interface for the 1D planar case. Now, we employ different boundary conditions to model the equilibrium contact angle and assess the energy evolution of the contact angle simulation. For this test, we start with a liquid droplet of radius r=0.2L, where the density is set to ρ=ρ_l, located on a solid boundary. The region to the left of the droplet is filled with gas, with a density of ρ=ρ_g. The density profile function and the velocity can be defined as follows: ρ(𝐱,0)=ρ_l+ρ_g/2-ρ_l-ρ_g/2tanh|𝐱-𝐱_0|-r/δ, 𝐮(𝐱,0)=0. 
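(The symbols entering this initial condition are defined next.) As a brief aside before continuing, the coarsening diagnostics of the previous subsection can be evaluated directly from a density field; the resolution, random seed and uniform random number generator below are arbitrary choices.

```python
import numpy as np

rho_l, rho_g, rho_c = 1.46, 0.58, 1.0
L0, N = 1.0, 256
rng = np.random.default_rng(0)

# random initial condition rho = rho_c + 0.2*rho_c*rand, with rand in [-1, 1]
rho = rho_c + 0.2 * rho_c * rng.uniform(-1.0, 1.0, size=(N, N))

def domain_length(rho):
    # C: gas-phase concentration; chi_m = <C^2 (1 - C)^2>; L = L0^2 / chi_m
    C = (rho - rho_g) / (rho_l - rho_g)
    chi_m = np.mean(C ** 2 * (1.0 - C) ** 2)
    return L0 ** 2 / chi_m

print("initial domain length scale L =", domain_length(rho))
# during a run, fitting log L against log(t - t0) gives the reported 2/3 growth exponent
```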
Here, |𝐱-𝐱_0| represents the distance to the interface of the liquid droplet, and 𝐱_0 denotes the center position of the droplet. The temperature for this simulation is fixed at T'=0.95. The viscosity η and surface energy coefficient λ are chosen to satisfy La=4 for all simulations. To ensure stable simulations, the time interval Δ t needs to satisfy ηΔ t/Δ x^2≤0.1. Additionally, the Courant-Friedrichs-Lewy (CFL) condition is imposed with CFL=|𝐮max|Δ t/Δ x≤0.1. Since the equilibrium interface thickness, δ_eq, is unknown in the e_w3 formulation, we set δ_eq=0.3 and approximate σ as ≈3.143λ. The equilibrium contact angle for each simulation is computed using the method described in <cit.>. The simulation results are evaluated when the kinetic energy reaches a constant value, typically when t≫1. In the provided figures, the interface position is defined as ρ=ρ_c. Figure <ref> illustrates the evolution of density profiles for an equilibrium contact angle of θ_eq=π/6 at various time points. In this study, we employ adaptive mesh refinement techniques, where the resolution of the mesh is determined by the density distribution. Notably, the grids demonstrate a clear refinement in the contact line region as the droplet spreads over the solid surface. Figure <ref>(a) compares three different boundary conditions with the corresponding analytic solution, indicated by the black dashed line. The equilibrium shapes of the droplets residing on solids with different contact angles are shown in Figure <ref>(b). Upon comparison, it can be observed that when the equilibrium contact angle θ_eq is set to π/2, the results from the three methods align well with each other. Similarly, when the equilibrium contact angle approaches π/2, the simulation results of the form e_w1 exhibit good agreement with the reference curve. However, when θ_eq is set to 2π/3 and π/6, the results from form e_w2 deviate from the analytic solutions. Additionally, the accuracy of the results from form e_w3 is lower compared to those from e_w1 or the analytic solutions. By considering these comparisons, we can conclude that the form e_w1 consistently provides accurate results across a wider range of contact angles compared to the other two formulations. It is important to evaluate the energy evolution of the contact line moving until the system reaches the equilibrium state, as it provides insights into the contact line dynamics of the system <cit.>. Figure <ref> illustrates the evolution of kinetic energy during the simulation. Once the droplet achieves its equilibrium shape, it stops evolving, and the kinetic energy initially increases and then gradually decreases toward zero. For the energy form e_w1, in the late stages of the simulation, the kinetic energy of each case converges to a very small value, approximately E_k∼10^-10. To further analyze the differences in kinetic energy evolution, Figure <ref> compares the kinetic energy profiles for the energy forms e_w1 and e_w2. It is worth noting that for contact angles θ_eq>5π/6, the simulation process becomes highly unstable when using form e_w2. Hence, the results for an even larger contact angle, θ_eq=35π/36, are not compared. The comparison in Figure <ref>(b) clearly demonstrates that the kinetic energy evolution differs significantly between the two boundary conditions. This indicates that the contact line dynamics associated with these two methods during the simulation are also distinct. 
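For completeness, the two stability constraints quoted above and a simple read-out of the equilibrium contact angle can be sketched as follows. The circular-cap fit θ = 2 arctan(h/a) is an assumption standing in for the measurement procedure of the cited reference, and the grid conventions (wall on the first row, interface at ρ = ρ_c) are illustrative only.

```python
import numpy as np

def stable_dt(eta, dx, u_max, visc_limit=0.1, cfl_limit=0.1):
    # eta*dt/dx^2 <= 0.1 and |u_max|*dt/dx <= 0.1
    return min(visc_limit * dx ** 2 / eta, cfl_limit * dx / max(u_max, 1e-12))

print("admissible dt:", stable_dt(eta=5e-3, dx=1.0 / 256, u_max=0.05))

def contact_angle(rho, h_grid, rho_c, wall_row=0):
    liquid = rho >= rho_c                              # interface taken at rho = rho_c
    cols = np.where(liquid[wall_row + 1])[0]           # wetted cells just above the wall
    a = 0.5 * (cols.max() - cols.min() + 1) * h_grid   # base half-width of the droplet
    height = liquid.sum(axis=0).max() * h_grid         # apex height above the wall
    return 2.0 * np.arctan2(height, a)                 # circular-cap relation

# sanity check on a synthetic half-disk sitting on the wall (true angle pi/2)
N, h_grid = 200, 1.0 / 200
X, Y = np.meshgrid((np.arange(N) + 0.5) * h_grid,
                   (np.arange(N) + 0.5) * h_grid, indexing="ij")   # X: height above wall
rho_test = np.where(np.hypot(X, Y - 0.5) < 0.25, 1.46, 0.58)
print("measured angle:", contact_angle(rho_test, h_grid, rho_c=1.0))
```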
Moreover, when using the e_w1 formulation for the boundary, the system reaches the equilibrium state more rapidly. In summary, the analysis of the kinetic energy evolution supports the superiority of the e_w1 formulation, as it leads to faster convergence to the equilibrium state and provides more stable contact line dynamics compared to the e_w2 formulation. We proceed to compare the energy forms e_w1 and e_w3 in Figure <ref>. In order to maintain consistency between the two boundary conditions, the interface thickness δ_eq=0.164 is obtained from the equilibrium state of the simulation using e_w1 as the wetting energy. The comparison in Figure <ref>(b) reveals that the kinetic energy evolution of both cases is qualitatively consistent, and even the capillary-induced oscillation exhibits similar characteristics. It is worth noting that the idea behind form e_w3 stems from the concept of stress balance between the gas phase and liquid phase at the interface region <cit.>. The surface tension difference Δσ between the solid-gas and solid-liquid interfaces remains continuous along the boundary, and the dynamics of the moving contact line simulated using this method have been widely employed and compared with molecular dynamics and experimental studies <cit.>. Consequently, from a kinetic energy and dynamics standpoint, similar outcomes can be achieved whether we utilize e_w1 or e_w3 as the wetting energy. This suggests that both formulations can capture the essential features of the contact line dynamics and yield comparable results in kinetic energy and capillary-driven oscillations. § CONCLUDING REMARKS In this study, we investigated an explicit finite difference method for solving the Van der Waals (VDW) multi-phase flow. Based on the MacCormack methodology, the numerical scheme provided qualitative simulation results for single static droplets and the dynamics of liquid-vapor separation. We proposed a general energy-based approach to address the contact line problem by relating the wetting energy to bulk free energy and surface energy, and compared them with existing boundary condition methods. In the simulation tests, we evaluated the energy evolution and spurious currents of single static droplets under different Laplace numbers (La). As La was decreased to approximately 1, we observed a more stable equilibrium system with reduced intensity of spurious currents. We validated our method by analyzing the growth of domain length during the liquid-vapor separation process, and our results were in good agreement with the predicted solution L=(t-t_0)^2/3. Using the general energy-based boundary condition, we achieved highly consistent equilibrium contact angles with the predicted analytic solution. However, the other two existing methods failed to provide qualitative results due to large wetting potential and uncertain interface thickness. Furthermore, the kinetic energy of the simulation for the equilibrium shape of the sessile droplet converged to E_k∼10^-10, which is at a similar level as the simulation of spurious currents in the single static droplet. Additionally, we observed consistent dynamics between the energy-consistent boundary condition and the stress balance boundary condition when the same interface thickness was employed in both approaches. Overall, our study demonstrated the effectiveness of the explicit finite difference method for VDW multi-phase flow and provided valuable insights into the energy-based approach for modeling contact line phenomena. 
§ APPENDIX We here establish the distinction between the Korteweg stress-based surface force and the potential-based surface force. We begin by examining the force terms of the momentum equation, Eq. (<ref>), excluding the contribution from viscous dissipation ∇· ( σ_s - p𝐈 ) = ∇·(λ[( 1/2|∇ρ|^2+ρ∇^2ρ)𝐈-∇ρ⊗∇ρ]-p𝐈). In this context, the pressure is determined by the equation of state, which can be obtained from the thermodynamic energy as p=ρ^2∂ f_0/∂ρ, as explained in the main text. By substituting this expression into Eq. (<ref>), we obtain the following result: ∇· ( σ_s - p𝐈 ) = ∇(λ/2|∇ρ|^2+λρ∇^2ρ-ρ^2∂ f_0/∂ρ) -∇·(λ∇ρ⊗∇ρ). In addition, the flux term of the momentum equations can be also represented by a potential form surface force <cit.> -ρ∇μ_mix=-∇ρμ_mix+μ_mix∇ρ, with mixed chemical potential μ_mix=∂ (ρ f_0)/∂ρ-λ∇^2ρ. Eq. (<ref>) can be further simplified to -ρ∇μ_mix=∇(λρ∇^2ρ-ρ^2∂ f_0/∂ρ)-λ∇^2ρ∇ρ. Finally, the additional stress term σ_ρ can be evaluated by the difference between Eq. (<ref>) and Eq. (<ref>) ∇·σ_ρ=∇·(σ_s-p𝐈)+ρ∇μ_mix =∇·(λ/2|∇ρ|^2𝐈-λ∇ρ⊗∇ρ)+λ∇^2ρ∇ρ, which can be simplified as σ_ρ=λ(|∇ρ|^2𝐈-∇ρ⊗∇ρ)+C, The constant C is typically set to zero in practice, as only the divergence of the stress term appears in the momentum equation. The additional stress term is mostly implicitly incorporated into the pressure term. In the case of a 1-D simulation, this term simplifies to zero. elsarticle-num
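The one-dimensional statement at the end of the appendix is easy to verify symbolically: in 1D the divergence of the Korteweg form σ_s - p𝐈 coincides with the potential-form force -ρ∇μ_mix, so the additional stress has no effect there. The SymPy fragment below checks this for an arbitrary cubic f_0, which is only a stand-in; the identity does not depend on the particular equation of state.

```python
import sympy as sp

x, lam, r = sp.symbols("x lambda r")
a0, a1, a2, a3 = sp.symbols("a0 a1 a2 a3")
rho = sp.Function("rho")(x)

# stand-in bulk free energy f0(rho): an arbitrary cubic in the density
f0_r = a0 + a1 * r + a2 * r ** 2 + a3 * r ** 3
f0 = f0_r.subs(r, rho)
df0 = sp.diff(f0_r, r).subs(r, rho)                    # d f0 / d rho evaluated at rho(x)

p = rho ** 2 * df0                                     # thermodynamic pressure
mu_mix = f0 + rho * df0 - lam * sp.diff(rho, x, 2)     # d(rho f0)/d rho - lam * rho''

# 1D Korteweg form: sigma_s - p = lam*(rho*rho'' - rho'^2/2) - p
sigma_minus_p = lam * (rho * sp.diff(rho, x, 2) - sp.diff(rho, x) ** 2 / 2) - p

lhs = sp.diff(sigma_minus_p, x)                        # div(sigma_s - p I) in 1D
rhs = -rho * sp.diff(mu_mix, x)                        # potential-form surface force
print(sp.simplify(lhs - rhs))                          # -> 0
```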
http://arxiv.org/abs/2307.03967v1
20230708124657
End-to-End Supervised Multilabel Contrastive Learning
[ "Ahmad Sajedi", "Samir Khaki", "Konstantinos N. Plataniotis", "Mahdi S. Hosseini" ]
cs.CV
[ "cs.CV" ]
Impact of noise on inverse design: The case of NMR spectra matching O. Anatole von Lilienfeld August 12, 2023 =================================================================== Multilabel representation learning is recognized as a challenging problem that can be associated with either label dependencies between object categories or data-related issues such as the inherent imbalance of positive/negative samples. Recent advances address these challenges from model- and data-centric viewpoints. In model-centric, the label correlation is obtained by an external model designs (e.g., graph CNN) to incorporate an inductive bias for training. However, they fail to design an end-to-end training framework, leading to high computational complexity. On the contrary, in data-centric, the realistic nature of the dataset is considered for improving the classification while ignoring the label dependencies. In this paper, we propose a new end-to-end training framework–dubbed KMCL (Kernel-based Mutlilabel Contrastive Learning)–to address the shortcomings of both model- and data-centric designs. The KMCL first transforms the embedded features into a mixture of exponential kernels in Gaussian RKHS. It is then followed by encoding an objective loss that is comprised of (a) reconstruction loss to reconstruct kernel representation, (b) asymmetric classification loss to address the inherent imbalance problem, and (c) contrastive loss to capture label correlation. The KMCL models the uncertainty of the feature encoder while maintaining a low computational footprint. Extensive experiments are conducted on image classification tasks to showcase the consistent improvements of KMCL over the SOTA methods. PyTorch implementation is provided in <https://github.com/mahdihosseini/KMCL>. § INTRODUCTION Learning from multilabel representation is a common practice that is considered in both computer vision <cit.> and medical image <cit.> application domains. Images usually contain more than one object for classification, where they can be semantically related to each other. The idea is to create an embedded feature space that can capture label dependencies to improve the classification task <cit.>. However, effectively learning such embedded space is known to be a challenging problem and various methods have been proposed over the past few years, including sequence-to-sequence modeling <cit.>, graph approaches <cit.>, and new loss-function designs <cit.>. Generally, there are two main approaches to addressing the multilabel representation learning problem: the data-centric approach and the model-centric approach. The data-centric approach focuses on addressing data-related issues like inherent imbalance <cit.>, impartial label training <cit.>, and hierarchical relationships <cit.> while ignoring label dependencies. On the contrary, the model-centric approach aims to capture label interactions for semantic embedding such as graph convolutional networks <cit.>, attention mechanisms <cit.>, and transformer-based learning <cit.>. Despite the benefits, they fail to design an end-to-end learning framework due to their high computational costs or the laborious task of capturing heuristic label dependencies like using correlation matrices. These limitations make them challenging to implement, optimize, and interpret. In this paper, we aim to combine the benefits of both data-centric and model-centric approaches while addressing their potential drawbacks. 
The solution lays on the foundation of asymmetric loss <cit.> which tackles the imbalance between positive and negative samples in multilabel classification. Our design augments this loss function by capturing the semantic relationships between labels using a kernel-based contrastive loss. This is achieved through two steps: (a) leveraging a Kernel Mixture Module (KMM) to explore the epistemic uncertainty of the feature encoder (see Figs. <ref> and <ref>). This is done by converting the embedded features of multilabel images into a Gaussian Reproducing Kernel Hilbert Space (RKHS) ℋ, and (b) employing a contrastive learning framework on the Gaussian RKHS to capture label dependencies through a weighted loss-function design (see Fig. <ref>). The resulting loss is trainable from end-to-end, providing high numerical stability during training. The following summarizes the contribution of the paper: [C1]: We propose a novel end-to-end framework –dubbed KMCL– to strike a balance between model-centric and data-centric approaches using a new contrastive loss augmented on asymmetric classification loss from <cit.>. KMCL is capable of capturing both the epistemic uncertainty of the model and label dependencies between classes simultaneously. [C2]: We introduce a KMM block design within the KMCL framework to generate a mixture of exponential kernels in Gaussian RKHS to model the uncertainty of the feature encoder and improve the robustness of the classification task. To reconstruct the mixture kernels from data, we propose a loss function ℒ_REC (in Eq. <ref>) as an alternative to the negative log-likelihood loss that addresses the numerical instabilities mentioned in <cit.>. [C3]: We construct the ℒ_KMCL (in Eq. <ref>) as a complementary loss to ℒ_ASL <cit.> to capture label dependencies and enhance classification performance. We utilize the Bhattacharyya coefficient (ρ) as a similarity metric between two kernel representations to pull together similar classes (positive) from a pair of multilabel images while contrasting dissimilar ones (negative) in Gaussian RKHS. [C4]: We consistently improve classification performance on both computer vision and medical imaging tasks with low computational footprints. Our loss design yields robust behavior toward a range of hyperparameters that are fixed across all experiments. §.§ Related Work Multilabel Image Representation. Multilabel image representation problems have been extensively studied, focusing on exploiting label dependencies within semantically aware regions. Previous approaches include RNN-CNN models for sequence-to-sequence modeling <cit.>, transforming the problem into a multi-instance problem <cit.>, and using recurrent attention reinforcement learning <cit.>. Later, efforts were made to incorporate linguistic embedding of training labels into graph neural network designs <cit.>. However, graph-based approaches assume the presence of coexisting label dependencies, which may not hold true when labels co-occur infrequently. Attention mechanisms have been introduced in dynamic graph modeling networks to address this issue <cit.>. Despite their effectiveness, these approaches often result in complex models with heavy computational requirements and limited generalization in different domains. A residual attention mechanism was introduced <cit.> to reduce such complexities by augmenting independent class feature scores using a class-agnostic average pooling method for aggregation scoring. 
Recent developments in this field emphasize the realistic nature of multilabel data representation. For example, the design proposed in <cit.> introduces an asymmetric loss function to balance the frequency of positive and negative classes. Other approaches include class-aware loss design for impartial label training <cit.> and exploring hierarchical relationships of multilabel data in a contrastive learning framework <cit.>. In this paper, we leverage both data- and model-centric approaches to reduce the above-mentioned complexities. Contrastive Learning. Self-supervised learning methods primarily focus on contrastive learning, which involves capturing inter-relational object information in image representation. This is achieved through the use of contrastive loss functions, either in unsupervised contrastive learning where labels are absent <cit.>, or in supervised contrastive learning where labels are available <cit.>. The framework has been extended to multilabel representation learning <cit.> by considering shared label images as positive and unshared label images as negative. The existing multilabel contrastive loss designs rely on hard-coded features and lack flexibility in representing semantically aware objects and their label dependencies. However, we propose transforming embedded features into a mixture of exponential kernels in Gaussian RKHS to account for the potential uncertainty of model parameters and accordingly relax the embeddings. § BACKGROUND ON BHATTACHARYYA COEFFICIENT BETWEEN EXPONENTIAL KERNELS The Bhattacharyya coefficient is a widely used metric to measure the similarity between probability distributions in various fields, including computer vision, pattern recognition, and statistical analysis <cit.>. Normal distributions are commonly evaluated using this metric to determine class separability in transfer learning <cit.>, perform point cloud instance segmentation <cit.>, and employ pseudo-labels for semi-supervised classification <cit.>. However, the Gaussian probability may not always be the best option for estimating the target variable due to normality assumptions which leads to numerical instabilities such as singularity <cit.>. A mixture of exponential kernels can be used as a reliable alternative to estimate the relative likelihood of the target variable, especially when the distribution is unknown or multimodal. In such cases, the Bhattacharyya coefficient ρ between the normalized versions of the kernel components can assess the geometric similarity and degree of overlap. Compared to Kullback-Leibler divergence <cit.> or L_p norms, ρ takes values in the range of [0, 1], which makes it a practical choice for comparing two statistical samples. In the following remark, we will elaborate on the closed-form expression of ρ between two exponential kernels. Let p(𝐱):= K_Σ_p(𝐱, μ_p) = exp(-1/2𝐱 - μ_p^2_Σ_p^-1) and q(𝐱) := K_Σ_q(𝐱, μ_q) = exp(-1/2𝐱 - μ_q^2_Σ_q^-1) be anisotropic multivariate squared exponential kernels that define a Gaussian RKHS ℋ <cit.>. Then, the Bhattacharyya coefficient between the normalized p(𝐱) and q(𝐱) is: ρ(p(𝐱), q(𝐱) ) = ∫(p(𝐱)∫p(𝐱) d𝐱)^1/2(q(𝐱)∫q(𝐱) d𝐱)^1/2d𝐱 = |Σ_p|^1/4|Σ_q|^1/4/|Σ|^1/2exp(-1/8μ_p-μ_q^2_Σ^-1), where, μ_p-μ_q^2_Σ^-1 = (μ_p-μ_q)^TΣ^-1(μ_p-μ_q) and Σ = Σ_p+Σ_q/2. The μ_p, μ_q∈ℝ^M and Σ_p, Σ_q∈𝕊_++^M are the mean vectors and the covariance matrices, respectively, and the operation |·| represents the determinant of a matrix. The proof of Remark <ref> is provided in Supplementary material. 
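For reference, the closed form of Remark <ref> takes only a few lines of NumPy. The check below confirms that identical kernels yield a coefficient of one, and that either a mean shift or a covariance mismatch pulls the value below one; the example matrices are arbitrary.

```python
import numpy as np

def bhattacharyya_coeff(mu_p, cov_p, mu_q, cov_q):
    # closed form of Remark 1 for two anisotropic squared-exponential kernels
    cov = 0.5 * (cov_p + cov_q)
    diff = mu_p - mu_q
    maha = diff @ np.linalg.solve(cov, diff)            # ||mu_p - mu_q||^2 w.r.t. cov^{-1}
    scale = (np.linalg.det(cov_p) * np.linalg.det(cov_q)) ** 0.25 / np.linalg.det(cov) ** 0.5
    return scale * np.exp(-0.125 * maha)

mu = np.array([0.3, -1.2])
cov = np.array([[1.0, 0.2], [0.2, 0.5]])
print(bhattacharyya_coeff(mu, cov, mu, cov))              # identical kernels -> 1.0
print(bhattacharyya_coeff(mu, cov, mu + 1.0, 2.0 * cov))  # shifted and reshaped -> < 1
```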
The Bhattacharyya coefficient, also known as the Hellinger affinity <cit.>, measures the normalized correlation between the square roots of kernels over the entire space. This similarity metric compares p(𝐱) and q(𝐱) by projecting their square roots onto a unit hypersphere and measuring the cosine of the angle between them in the complete inner product space ℋ. A careful examination of Equation <ref> reveals that the Bhattacharyya coefficient between normalized p(𝐱) and q(𝐱) consists of two terms: a scale factor and an exponential component. The scale factor measures overlap by comparing the generalized variances of the kernels, which are determined by the determinant of their covariance matrices. The scale factor converges to one when the covariance matrices of the two kernels are similar, indicating an overlap between them. The generalized variance of a kernel is related to its entropy and power entropy <cit.>, which measure uncertainty and spread. This allows the scale factor to consider differences in information content and orientation, resulting in separability due to covariance dissimilarity. On the other hand, the second term measures the similarity between the means μ_p and μ_q weighted by the precision matrix Σ^-1, providing separability based on positional differences. This exponential component represents the Mahalanobis kernel similarity <cit.> between μ_p and μ_q with respect to Σ^-1. The following corollary will further elucidate the connection of the Bhattacharyya coefficient with the Mahalanobis and Gaussian similarities. Let p(𝐱) := K_Σ_p(𝐱, μ_p) and q(𝐱) := K_Σ_q(𝐱, μ_q) be multivariate kernels defined in Remark <ref>. The Bhattacharyya coefficient between normalized p(𝐱) and q(𝐱) can be reduced to either the Mahalanobis or the RBF kernel similarity, depending on the covariance matrices: (i) The Mahalanobis kernel similarity, Sim_M(p(𝐱), q(𝐱)), is obtained when the covariance matrices are homoscedastic, i.e., Σ_p = Σ_q = Σ. It has the following closed-form expression: Sim_M(p(𝐱), q(𝐱)) = ρ(K_Σ(𝐱, μ_p), K_Σ(𝐱, μ_q)) = exp(-12 (2)^2μ_p-μ_q^2_Σ^-1). The described Mahalanobis metric evaluates the similarity between p(𝐱) and q(𝐱) based on their mean difference and relative positions (see Fig. <ref>d). (ii) The Gaussian kernel similarity, Sim_G(p(𝐱), q(𝐱)), is obtained when the covariance matrices are equal and isotropic, meaning Σ_p = Σ_q = σ^2I. The closed-form expression will be: Sim_G(p(𝐱), q(𝐱)) = ρ(K_Σ(𝐱, μ_p), K_Σ(𝐱, μ_q)) = exp(-μ_p-μ_q^28σ^2). In cases where two kernels have similar means but different covariance matrices, the Mahalanobis and Gaussian kernel similarities often exhibit a perfect correlation that may not precisely reflect true similarities (Figs. <ref>a and c). Instead, the Bhattacharyya coefficient evaluates the generalized variances of the kernels and identifies similarities in their orientation, shape, and means (Figs. <ref>a and c). Therefore, it is often a superior metric to the Mahalanobis and the Gaussian kernel similarities. The process of computing the final value of the closed-form expression between high-dimensional kernels can be time-consuming and resource-intensive. This problem can be alleviated by imposing constraints on the mean vectors and/or the covariance matrices. Following <cit.>, we will cover how specific constraints can be applied to improve computational efficiency in a subsequent corollary. Let p(𝐱) := K_Σ_p(𝐱, μ_p) and q(𝐱) := K_Σ_q(𝐱, μ_q) be two multivariate kernels as defined in Remark <ref>. 
The following statements hold: (i) If the covariance matrices are diagonal, meaning that Σ_p = diag(σ_p,1^2, ⋯, σ_p,M^2) and Σ_q = diag(σ_q,1^2, ⋯, σ_q,M^2), the Bhattacharyya coefficient between normalized p(𝐱) and q(𝐱) will be ρ(p(𝐱), q(𝐱)) = (∏_i=1^M(σ_p,i^2+σ_q,i^2/2σ_p,iσ_q,i)^-1/2)exp(-14∑_i=1^M(μ_p,i -μ_q,i)^2/σ_p,i^2+σ_q,i^2). (Anisotropic) (ii) If the mean vectors have identical values across all dimensions (μ_p = μ_p1, μ_q = μ_q1, where 1 = [1, ⋯, 1]^T∈ℝ^M is the one vector), and the covariance matrices are diagonal with homogeneous variances (Σ_p = σ_p^2I, Σ_q = σ_q^2I, where I∈𝕊^M_++ is the identity matrix), then the Bhattacharyya coefficient between two normalized isotropic kernels p(𝐱) and q(𝐱) can be calculated as ρ(p(𝐱), q(𝐱)) = (σ_p^2+σ_q^2/2σ_pσ_q)^-M/2exp(-M4(μ_p -μ_q)^2/σ_p^2+σ_q^2). (Isotropic) § PROPOSED METHOD [13]R0.68 < g r a p h i c s > Overview of KMCL framework. The training pipeline comprises a feature encoder that feeds into the KMM, which outputs the parameters of a mixture model in the Gaussian RKHS ℋ. These parameters then define the objective function that captures label correlation to aid in training the model for the multi-label classification. The multi-label classification task involves assigning multiple labels to an image 𝐱^n from sample space 𝐗. These labels are typically correlated with each other and represented by a multi-hot binary vector 𝐲^n∈{0,1}^K, where K denotes the number of labels. In this section, we propose an end-to-end multi-label learning framework–dubbed Kernel-based multi-label Contrastive Learning (KMCL), that captures label correlations to improve recognition performance. Given an input batch of data, we first propagate it through the encoder network to obtain the feature embedding. The embedding is then inputted into a novel fully connected layer called the Kernel Mixture Module (KMM), which produces a Gaussian Reproducing Kernel Hilbert Space ℋ. The Gaussian RKHS embedding can handle higher-order statistics of the features and has a complete inner product that enables linear geometry, making it richer than the deterministic feature embedding. Finally, we compute the loss function using the KMM outputs on space ℋ to capture label correlation and train the model for multi-label classification. Figure <ref> provides a visual explanation. §.§ KMCL Framework The main components of the KMCL framework are: [2]R0.23 < g r a p h i c s > Internal architecture of KMM. Feature Encoder.The encoder network takes two samples from the input batch separately and generates corresponding feature representation vectors 𝐟∈ℝ^M. The dimension of the feature vector depends on the encoder type. KMM. Most feature encoders produce deterministic results that do not quantify or control uncertainty, leading to low confidence in robust multi-label classification tasks and errors in interpreting the output predictions. Uncertainty in deep learning arises from two sources: epistemic uncertainty (model uncertainty), resulting from uncertainty in model parameters, and aleatoric uncertainty (data uncertainty), which stems from the inherent noise in data and label ambiguity. In this study, we propose the Kernel Mixture Module (KMM) to estimate epistemic uncertainty in predictions. The KMM takes the feature vector 𝐟 from the encoder network and generates a mixture of exponential kernels within the Hilbert space, each corresponding to a specific class in an image. 
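Before returning to the KMM, the isotropic form above (the one used later in the contrastive loss) can be written compactly; the tensor shapes and example values below are purely illustrative.

```python
import torch

def bhattacharyya_isotropic(mu_p, var_p, mu_q, var_q, M):
    # Corollary (isotropic): scalar mean/variance per kernel, feature dimension M.
    # Arguments may be broadcastable tensors, so all class pairs can be scored at once.
    var_sum = var_p + var_q
    scale = (var_sum / (2.0 * torch.sqrt(var_p * var_q))) ** (-M / 2.0)
    return scale * torch.exp(-M / 4.0 * (mu_p - mu_q) ** 2 / var_sum)

# example: two kernels in a 512-dimensional feature space
print(bhattacharyya_isotropic(torch.tensor(0.1), torch.tensor(1.0),
                              torch.tensor(0.3), torch.tensor(1.5), M=512))
```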
Specifically, the fully connected layer in the KMM utilizes learnable weights and biases to produce three outputs for each unimodal exponential kernel component: the mixture coefficient π_k, mean vector μ_k, and covariance matrix Σ_k (Fig. <ref>). The parameters π_k, μ_k, and Σ_k quantify the existence, relative spatial positioning, and relative statistical complexities (measures of spread and uncertainty) of the kth class membership. These parameters are then used to model the label representation of a given sample 𝐱^n associated with a class vector 𝐲^n using the following expression: 𝒢_𝒮(𝐟^n) := ∑_k ∈𝒮π_k^n g_k(𝐟^n) = ∑_k ∈𝒮π_k^nexp(-‖𝐟^n - μ_k^n1‖^2/2(σ_k^n)^2), where, 𝒮 = {k: y_k^n = 1} and 𝐟^n is the extracted feature vector of the input sample. The component g_k(𝐟^n) := K_Σ_k^n(𝐟^n, μ_k^n) is an isotropic exponential kernel where μ_k^n = μ_k^n1, Σ_k^n = (σ^n_k)^2I, and π_k^n∈ [0, 1]. These adaptive parameters i.e., θ_k^n = [μ_k^n, (σ^n_k)^2, π_k^n] are calculated through forward propagation, using suitable activation functions to ensure that the parameters adhere to their constraints. The sigmoid activation function is used to normalize the mixture coefficient for efficient multi-label classification, accurately predicting the likelihood of multiple labels. The modified version of the exponential linear unit (ELU) <cit.> is also used as an activation function for variances, ensuring their semi-positivity. The detailed architecture of KMM can be found in Fig. <ref> and Supplementary material. §.§ Multi-label Learning with KMCL Building upon the KMCL framework, we aim to provide insights into the learning process of multi-label tasks. To achieve this, we introduce the details of our objective function, which comprises three components: reconstruction loss, classification loss, and contrastive loss. Throughout this paper, we use N and K to denote the mini-batch size and the total number of classes, respectively. Reconstruction Loss. [12]R0.3 < g r a p h i c s > Relative frequency histograms of class distributions in four datasets show that most images have 2, 2, 4, and 1 labels in the Pascal-VOC, MS-COCO, ADP, and ChestX-ray14, respectively. It is straightforward to compute the mixture model defined in Equation <ref> using the KMM output parameters, which provide 3K values for each input sample. Following this calculation, the model can be used to learn label-level representations in Hilbert space ℋ by minimizing its negative log-likelihood. Therefore, we introduce to optimize the following reconstruction loss over the data batch to train the mixture model ℒ_REC = 1/N∑_n=1^N-log𝒢_𝒮(𝐟^n)/𝒢_𝒴(𝐟^n), where, 𝒢_𝒴(𝐟^n) := ∑_k∈𝒴={1, ⋯, K}π_kg_k and 𝒢_𝒮(𝐟^n) denotes the kernel mixture associated with image 𝐱^n defined in Equation <ref>. The log-ratio term in Equation <ref> is always negative i.e. 𝒢_𝒮(𝐟^n)≤𝒢_𝒴(𝐟^n), where the loss is led by the supervised labels for reconstruction. We propose this as an alternative choice for reconstruction loss, which is commonly used in the literature <cit.>. Our new loss function ℒ_REC exhibits robust behavior without relying on numerical tricks for stabilization. Classification Loss. The analysis in Figure <ref> reveals that despite varying statistical and conceptual properties across datasets, most images have only a fraction of labels, causing a significant imbalance between positive and negative samples. This imbalance can lead to poor training accuracy as gradients from positive labels may be underemphasized. 
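Before moving on to the classification loss, the label-level mixture of Eq. <ref> and the reconstruction term built from it admit a short PyTorch sketch; the shapes and toy inputs are illustrative, with the real feature vector and KMM outputs supplied by the network.

```python
import torch

def mixture_score(feat, pi, mu, var, mask):
    # G(f) restricted to the classes selected by `mask` (G_S for the labels, G_Y for all).
    # feat: (N, M); pi, mu, var: (N, K); each class uses exp(-||f - mu_k*1||^2 / (2 var_k)).
    sq_dist = ((feat.unsqueeze(2) - mu.unsqueeze(1)) ** 2).sum(dim=1)   # (N, K)
    return (mask * pi * torch.exp(-0.5 * sq_dist / var)).sum(dim=1)

def reconstruction_loss(feat, pi, mu, var, labels, eps=1e-12):
    g_s = mixture_score(feat, pi, mu, var, labels)
    g_y = mixture_score(feat, pi, mu, var, torch.ones_like(labels))
    return (-torch.log(g_s / g_y + eps)).mean()

feat = torch.randn(4, 16)                       # toy batch: N = 4, M = 16, K = 5
pi, mu = torch.rand(4, 5), torch.randn(4, 5)
var = torch.rand(4, 5) + 1.0
labels = torch.randint(0, 2, (4, 5)).float()
print(reconstruction_loss(feat, pi, mu, var, labels))
```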
To mitigate this issue, we use ASL <cit.> as a classification loss function that adjusts the contributions of positive and negative samples by down-weighting easy negative samples and focusing on the hard ones. Therefore, given the predictive mixture of coefficients π^n from KMM and the ground-truth multi-hot label vector 𝐲^n, the classification loss for a batch is obtained as ℒ_ASL = 1/N∑_n=1^N∑_k=1^K -y_k^n(L_k^n)_+-(1-y_k^n)(L_k^n)_-, where, (L_k^n)_+ = (1-π_k^n)^γ_+log (π_k^n), and (L_k^n)_- = (max(π_k^n-m, 0))^γ_-log (1-max(π_k^n-m, 0)) represent the positive and negative loss parts, respectively, such that γ_+, γ_-, and m are the hyper-parameters used to balance the loss. For additional information on ℒ_ASL, please refer to <cit.>. Kernel-based Contrastive Loss. The ASL loss function classifies labels independently, making it difficult to capture correlations between co-occurring semantic labels. Moreover, it fails to account for uncertainty in predictions, which can undermine decision-making confidence. To address these limitations, we propose a new loss function, ℒ_KMCL, which incorporates label correlation and epistemic uncertainty into supervised contrastive learning to improve representation. The objective of kernel-based multi-label contrastive loss ℒ_KMCL is to pull together the kernel representations of positive images that have shared classes with the anchor image 𝐱^n in the embedding space ℋ, while pushing apart negative samples that do not share any classes. This approach differs from deterministic supervised contrastive losses <cit.> as ℒ_KMCL constructs the positive and negative pairs using similarity measures that consider the uncertainty of kernel representations. The similarity is measured by a Bhattacharyya coefficient discussed in Corollary <ref> (isotropic), which determines the overlap between these exponential kernels and their confidence in proximity. Essentially, the kernel-based contrastive loss optimizes the similarity of frequently co-occurring labels and captures their statistical dependencies, making it a valuable complement to ASL. The contrastive loss is defined for the entire minibatch as follows: ℒ_KMCL = 1/N∑_n=1^N -1/|𝒜(n)| ∑_m∈𝒜(n)J(n, m) (∑_k∈𝒦(n, m) logexp(ρ_k,k^n,m/τ)/∑_i∈{N\n}exp(ρ_k,k^n,i/τ)), where, ρ_k,l^n,m:=ρ(g_k(𝐟^n), g_l(𝐟^m)) indicates the Bhattacharyya coefficient between the normalized exponential kernels g_k(𝐟^n) and g_l(𝐟^m) (see Corollary <ref>) and τ is the temperature parameter. The positive set 𝒜(n) = {m ∈{N \ n}: 𝐲^n·𝐲^m≠ 0, where · is a dot product.} includes samples that share at least one label with the anchor image 𝐱^n, while 𝒦(n,m)= {k∈𝒴: y_m^k = y_n^k = 1} represents the indices of shared labels between 𝐱^n and 𝐱^m. The Jaccard index J(n, m)=𝐲^n·𝐲^m/𝐲^n^2+𝐲^m^2-𝐲^n·𝐲^m serves as a weighting factor for positive samples based on the number of shared labels with the anchor. It measures the intersection over union (IOU) of the label vectors between the anchor and positive image, taking into account object co-occurrences. In this way, ℒ_KMCL prioritizes positive samples with a high Jaccard index for a given anchor while downplaying samples with few shared labels. [14]R0.44 < g r a p h i c s > (a) Training loss over different epoch training. Plots show the normalized total loss ℒ as well as different normalized sub-losses, and (b) Training accuracy of KMCL pipeline over different epoch training Objective Function. 
The overall training loss of the KMCL is the augmented Lagrangian of the three aforementioned losses, which can be expressed as: ℒ = ℒ_REC + λ_1 ℒ_ASL + λ_2 ℒ_KMCL, where λ_1 and λ_2 are the Lagrangian multipliers used to balance the gradients of ℒ_ASL and ℒ_KMCL, respectively. We use an end-to-end pipeline to incorporate contrastive learning into supervised classification, which simultaneously trains the feature encoder and classification parts. This approach is different from previous methods that use contrastive losses <cit.>. In those methods, the encoder is trained with a contrastive loss and then frozen before being transferred to the classifier for tuning. Instead, the KMCL framework combines these training regimes into one formulation, enabling us to learn multi-label classification and label correlations with data-driven techniques. §.§ KMCL Algorithm [15]R0.57 The pseudo-code of the proposed KMCL framework is outlined in Algorithm <ref>, which takes a set of batches and a specified number of epochs as inputs. The pair of anchor images and their positive set are fed through the network depicted in Figure <ref> to obtain the feature vectors and parameters of the corresponding kernel mixtures (lines <ref>-<ref>). The overall loss is then computed as an augmented Lagrangian of the ℒ_REC, ℒ_ASL, and ℒ_KMCL using the KMM parameters (lines <ref>-<ref>). Finally, the objective function is back-propagated through the KMM and the feature encoder for each iteration to update the weights based on the gradients associated with the subsequent forward pass (line <ref>). This iterative process continues until convergence is reached. Figures <ref> (a) and (b) demonstrate the results of implementing the KMCL framework with TResNet-L <cit.> as the encoder network on the Pascal-VOC dataset <cit.>. Fig. <ref> (b) displays the objective loss behavior along with the evolution of the three loss terms for the training and test sets; whereas The mean average precision (mAP) accuracy is presented in <ref> (a). The losses decrease with different multiplicative factors due to the tuned Lagrangian multipliers. The convergence speed of the method on multi-label tasks is impressive, reaching 96.2% mAP accuracy in fewer than 30 epochs. § EXPERIMENTS In this section, we present the experimental setup and demonstrate the superior performance of KMCL in both general computer vision and medical imaging domains. To ensure robust feature extraction, we utilized TResNet-M and TResNet-L <cit.>, state-of-the-art architectures designed for different image resolutions (224 and 448, respectively). The features are then passed through the KMM to obtain the mixture parameters π, μ, and Σ. Additional information regarding the encoders, KMM, datasets, evaluation metrics, and training details can be found in Supplementary material. Datasets. We evaluate the KMCL's performance on popular computer vision datasets, PASCAL-VOC <cit.> and MS-COCO <cit.>, as well as on medical datasets, ADP <cit.> and ChestX-ray14 <cit.>. Evaluation Metrics. Following SOTA <cit.>, we report the standard metrics of mean average precision (mAP), average overall precision (OP), recall (OR), and F1 score (OF1) in addition to per-class precision (CP), recall (CR), and F1 score (CF1). We considered the number of parameters (M) and GMAC as measures of computational costs. Finally, for the ChestX-ray14 dataset <cit.>, we reported per-class AUC scores to assess model discriminability for specific classes. Training Details. 
We implemented the KMCL framework using PyTorch, following Alg. <ref>. The backbone feature encoders were initialized with pre-trained architectures, while the mixture parameters were initialized by applying a uniform distribution to π and μ and setting Σ to a constant value of 1. In all experiments, we assign fixed values of 0.1 and 0.3 to λ_1 and λ_2 respectively, as specified in Eq. <ref>. The Adam optimizer <cit.> was used with an initial learning rate of 2e-4, and the OneCycleLR scheduler <cit.> for 40 epochs. Standard augmentations from RandAugment policy <cit.> were applied to the training data. Experiments were conducted on four NVIDIA GeForce RTX 2080Ti GPUs. How does KMCL compare to SOTA methods on computer vision datasets? We evaluate KMCL with SOTA methods on computer vision datasets in Table <ref> and Fig. <ref>. KMCL outperforms the best competitors on PascalVOC and MS-COCO, achieving superior performance with a margin of 0.4% and 0.2% in mAP score, respectively. In particular, KMCL excels in challenging classes on PascalVOC, such as the sofa and bus classes, with an improvement of over 3.0%. On MS-COCO, KMCL demonstrates significant improvements across multiple metrics, including mAP, OF1, and CF1. Using the TResNet-M encoder at resolution 224, we achieve state-of-the-art results with a 5.0% increase in mAP compared to the best method. Similarly, with TResNet-L at a resolution of 448, KMCL surpasses other methods in overall and per-class metrics. These achievements are attained by integrating the proposed contrastive learning with ASL classification loss, to capture label correlation and enhance prediction accuracy. This is illustrated through the Top3-metrics on MS-COCO, where our 3 classes are better selected by considering label correlation when ranking the predictions. How well KMCL generalizes to medical imaging datasets? [8]r7cm Comparisons with state-of-the-art methods on the ADP dataset. ! 2pt1pt1pt 22emMethod 7c|Performance 2cComplexity 2-10 mAP OP OR OF1 CP CR CF1 Parameters (MM) GMAC 2pt1pt1pt ML-GCN (Binary) <cit.> 94.9 92.0 86.9 89.7 91.8 87.0 89.3 44.90 31.39 ASL (TResNet-L) <cit.> 96.1 92.1 90.7 91.4 92.5 89.2 90.8 44.14 35.28 TDRG <cit.> 95.5 94.3 86.2 90.5 94.6 84.8 89.4 75.20 64.40 CSRA <cit.> 96.1 93.0 89.7 91.7 93.1 88.6 90.8 42.52 31.39 KMCL (TResNet-M) 95.1 94.2 91.0 90.4 94.7 88.9 89.8 29.41 5.74 KMCL (TResNet-L) 96.5 92.7 92.9 92.8 92.6 92.0 92.3 44.20 35.28 2pt1pt1pt We evaluate KMCL against SOTA methods on medical imaging datasets presented in Tables <ref> and<ref>. The recall is a crucial factor in these datasets, as it reflects the likelihood of missing a medical diagnosis. The proposed method achieves a superior tradeoff between precision and recall by significantly improving recall metrics while maintaining competitive precision scores, including SOTA mAP. On the ADP dataset, KMCL outperforms the surveyed SOTA with margins of 0.4%, 2.2%, and 2.8% for mAP, OR, and CR, respectively. Similarly, on the ChestX-ray14 dataset, both TResNet-M and TResNet-L models exhibit significant improvements, with our best model surpassing SOTA results by 5.2%, 7.0%, and 11.6% in mAP, OR, and CR, respectively. In comparison, competing methods such as ML-GCN <cit.> use label correlation but suffer from increased computational complexity and a multi-stage approach, as shown in Table <ref>. However, our method surpasses the SOTA while maintaining a small model size and low GMAC scores. 
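For reference, the remaining loss terms and the overall objective used in these experiments can be condensed into a short, unofficial PyTorch sketch. The γ and m values in the asymmetric term are the common ASL defaults rather than values reported here, the isotropic Bhattacharyya coefficient follows the corollary above, the λ multipliers are the fixed values quoted in the training details, and the reconstruction term is the one sketched earlier.

```python
import torch

def asl_loss(pi, labels, gamma_pos=0.0, gamma_neg=4.0, margin=0.05, eps=1e-8):
    # asymmetric classification loss: margin-shift and focus away easy negatives
    pi_neg = (pi - margin).clamp(min=0.0)
    l_pos = labels * (1.0 - pi) ** gamma_pos * torch.log(pi.clamp(min=eps))
    l_neg = (1.0 - labels) * pi_neg ** gamma_neg * torch.log((1.0 - pi_neg).clamp(min=eps))
    return -(l_pos + l_neg).sum(dim=1).mean()

def bhatta_iso(mu_a, var_a, mu_b, var_b, M):
    var_sum = var_a + var_b
    return ((var_sum / (2.0 * torch.sqrt(var_a * var_b))) ** (-M / 2.0)
            * torch.exp(-M / 4.0 * (mu_a - mu_b) ** 2 / var_sum))

def kmcl_contrastive_loss(mu, var, labels, M, tau=0.5, eps=1e-12):
    # rho[n, i, k]: similarity between the class-k kernels of samples n and i
    N = labels.shape[0]
    rho = bhatta_iso(mu.unsqueeze(1), var.unsqueeze(1), mu.unsqueeze(0), var.unsqueeze(0), M)
    logits = rho / tau
    not_self = 1.0 - torch.eye(N).unsqueeze(-1)
    log_denom = torch.log((not_self * logits.exp()).sum(dim=1) + eps)       # over i != n
    inter = labels @ labels.t()                                             # shared labels
    union = labels.sum(1, keepdim=True) + labels.sum(1) - inter
    jaccard = inter / union.clamp(min=1.0)                                  # J(n, m)
    positive = (inter > 0).float() * (1.0 - torch.eye(N))                   # the set A(n)
    shared = labels.unsqueeze(1) * labels.unsqueeze(0)                      # K(n, m)
    per_pair = (shared * (logits - log_denom.unsqueeze(1))).sum(-1)
    per_anchor = (positive * jaccard * per_pair).sum(1) / positive.sum(1).clamp(min=1.0)
    return -per_anchor.mean()

# toy composition of the overall objective
lambda1, lambda2 = 0.1, 0.3
pi, mu, var = torch.rand(8, 5), torch.randn(8, 5), torch.rand(8, 5) + 0.5
labels = torch.randint(0, 2, (8, 5)).float()
l_rec = torch.tensor(0.0)      # stands in for the reconstruction term sketched earlier
total = l_rec + lambda1 * asl_loss(pi, labels) + lambda2 * kmcl_contrastive_loss(mu, var, labels, M=512)
print(total)
```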
These findings highlight the advantage of KMCL in computationally constrained environments. How KMCL's performance varies with different similarity measurements? In this ablation study, we examine the impact of changing the Battacharya coefficient to either Mahalanobis kernel similarity or Gaussian kernel similarity in the KMCL framework (Corrolary <ref> (i) and (ii)). Under the Mahalanobis kernel similarity, the performance decreases across the PascalVOC and ADP, as indicated in Table <ref>. This is likely due to the constraint that the variance must be identical across all classes, leading to an inability to capture entropy and uncertainty as reported in Section <ref>. [7]r7cm Ablative comparison for similarity measures and kernel representation cases. ! 2pt1pt1pt 2c||Configuration 7c|ADP PascalVOC 2cComplexity Similarity Metric Case mAP OP OR OF1 CP CR CF1 mAP Params(MM) GMAC 2pt1pt1pt Bhattacharyya Anisotropic 95.4 94.0 92.7 90.6 94.8 90.7 90.5 95.4 104.91 5.81 Bhattacharyya Isotropic 95.1 94.2 91.0 90.4 94.7 88.9 89.8 95.2 29.41 5.74 Mahalanobis - 94.7 92.0 92.4 90.9 92.6 90.5 90.4 95.1 71.34 5.78 Gaussian Kernel - 94.5 91.5 89.7 90.6 92.3 86.5 89.3 95.0 29.40 5.74 2pt1pt1pt Similarly, when utilizing Gaussian kernel similarity, the performance further deteriorates because the model is constrained to learn a single variance value that applies to both the label classes and feature dimensions. Therefore, it is more meaningful to use the Bhattacharyya coefficient since it evaluates the generalized variances of the kernels and identifies similarities in their orientation, shape, and means (Eq. <ref>). We further investigate the assumptions from both isotropic and anisotropic cases of the exponential kernel representations in KMCL framework as discussed in Corrolary <ref>. While the anisotropic case leads to an improved performance as shown in Table <ref>, but results in an increase in learnable parameters at the cost of higher computational complexity. By incorporating variances over the feature dimension, we better capture epistemic uncertainty and achieve enhanced overall results. Thus, if computational resources are available, one could best leverage our framework in the anisotropic case to achieve sota results. [11]r0.7 < g r a p h i c s > Reduced t-SNEs for ASL (left) and KMCL(Center) on PascalVOC color-coded by user-defined super-classes in the legend; (Right) ground truth correlation matrix for PascalVOC. Intuitive Visualizations. KMCL presents an end-to-end framework for contrastive learning that has achieved quantitatively significant results compared to existing methods. In this section, we visualize how the learned feature representation incorporates label correlation and epistemic uncertainty. Figure <ref> shows a reduced t-SNE <cit.> visualization of the feature representation for ASL and KMCL on the Pascal VOC dataset. Both methods accurately discriminate between different classes, as seen from the plotted centroids of each cluster. Notably, both methods exhibit a clustering pattern based on user-defined super-classes (e.g., car and bus are both forms of Transportation). Upon analyzing the ground truth correlation matrix, it becomes apparent that KMCL captures label correlation more effectively. Specifically, the sofa class exhibits the highest correlation with the chair class, resulting in their closer proximity in the t-SNE visualization for KMCL compared to ASL. [8]r0.65 < g r a p h i c s > GradCam visualization of KMCL and competing SOTA method. 
Bolded class labels indicate instances where KMCL outperforms SOTA by a large margin. Figure <ref> showcases the GradCam visualization for KMCL and a competing SOTA method. KMCL effectively distinguishes the sofa and chair classes, consistent with the t-SNE visualization results. Moreover, by capturing epistemic uncertainty from the kernel representation, our method accurately identifies the correct classes in the ADP sample with minimal extraneous activations. For more visualizations, please refer to the Supplementary material. § BROADER IMPACT KMCL provides an end-to-end supervised contrastive learning framework for multilabel datasets. It requires fewer resources for the design and implementation of downstream tasks such as classification. Contrastive learning methods like <cit.> typically involve two stages of encoder training and fine-tuning for the task, which can take several hundred epochs. In contrast, KMCL only requires one stage of training with significantly fewer epochs. This translates into a much smaller carbon emission footprint, as highlighted in <cit.> for using more compact models for training. Although KMCL has been successfully applied in computer vision and medical imaging domains, its effectiveness has not yet been tested for segmentation/detection tasks or in other modalities like natural language processing. In future work, we will consider broadening our experiments for further validation. Additionally, we believe that society can benefit from the theoretical analysis of the similarity metrics presented in this paper, which can be adapted to different application domains. § ACKNOWLEDGMENT Authors would like to thank Rahavi Selvarajan, Xiao Hu and Jiarui Zhang for their assistant and helpful discussion. ieee_fullname § APPENDIX §.§ Proof of Remark 1. The Bhattacharyya coefficient between the normalized p(𝐱):= K_Σ_p(𝐱, μ_p) = exp(-1/2𝐱 - μ_p^2_Σ_p^-1) and q(𝐱):= K_Σ_q(𝐱, μ_q) = exp(-1/2𝐱 - μ_q^2_Σ_q^-1) is defined as ρ(p(𝐱), q(𝐱) ) = ∫_𝒳(p(𝐱)∫_𝒳 p(𝐱) d𝐱)^1/2(q(𝐱)∫_𝒳 q(𝐱) d𝐱)^1/2d𝐱 = ∫_𝒳p(𝐱)^1/2q(𝐱)^1/2d𝐱/√(∫_𝒳p(𝐱) d𝐱)√(∫_𝒳q(𝐱) d𝐱). To begin, we expand the integrand part of the enumerator, i.e., √(p(𝐱)q(𝐱)) as follows: exp(-14𝐱^T(Σ_p^-1+Σ_q^-1)𝐱+12(Σ_p^-1μ_p+Σ_q^-1μ_q)^T𝐱 -14(μ_p^TΣ_p^-1μ_p + μ_q^TΣ_q^-1μ_q )). In order to overcome the challenge of integrating the derived integrand in Equation <ref>, we will introduce a new approach. We will represent √(p(𝐱)q(𝐱)) as the product of a constant value, denoted as h(μ_p, μ_q, Σ_p, Σ_q), and a newly defined anisotropic multivariate squared exponential kernels, denoted as r(𝐱):= K_Σ_r(𝐱, μ_r). This formal representation can be expressed as follows: √(p(𝐱)q(𝐱)) = h(μ_p, μ_q, Σ_p, Σ_q)r(𝐱). We defined the new exponential kernel of Equation <ref> as r(𝐱) := K_Σ_r(𝐱, μ_r) = exp(-1/2𝐱 - μ_r^2_Σ_r^-1) = exp(-12(𝐱-μ_r)^TΣ_r^-1(𝐱-μ_r)), where Σ_r≜(12Σ_p^-1+12Σ_q^-1)^-1 and μ_r≜Σ_p(12Σ_p^-1μ_p+12Σ_q^-1μ_q). Once the values of Σ_r and μ_r are replaced in Equation <ref>, the kernel r(𝐱) will be r(𝐱) = exp(-14𝐱^T(Σ_p^-1+Σ_q^-1)𝐱 + 12(Σ_p^-1μ_p+Σ_q^-1μ_p)^T𝐱 -14(Σ_p^-1μ_p+Σ_q^-1μ_p)^T +(Σ_p^-1+Σ_q^-1)^-1(Σ_p^-1μ_p+Σ_q^-1μ_p)). By substituting Equations <ref> and <ref> into Equation <ref>, we obtain the closed-form expression of h(μ_p, μ_q, Σ_p, Σ_q) as presented below. 
exp(-1/4( μ_p^T(Σ_p^-1-Σ_p^-1(Σ_p^-1+Σ_q^-1)^-1Σ_p^-1)μ_p+ μ_q^T(Σ_q^-1-Σ_q^-1(Σ_p^-1+Σ_q^-1)^-1Σ_q^-1)μ_q -μ_p^T(Σ_p^-1(Σ_p^-1+Σ_q^-1)^-1Σ_q^-1)μ_q -μ_q^T(Σ_q^-1(Σ_p^-1+Σ_q^-1)^-1Σ_p^-1)μ_p )) Given the fact that Σ_p^-1-Σ_p^-1(Σ_p^-1+Σ_q^-1)^-1Σ_p^-1 = Σ_q^-1-Σ_q^-1(Σ_p^-1+Σ_q^-1)^-1Σ_q^-1 = Σ_p^-1(Σ_p^-1+Σ_q^-1)^-1Σ_q^-1 = Σ_q^-1(Σ_p^-1+Σ_q^-1)^-1Σ_p^-1 = (Σ_p+Σ_q)^-1 <cit.>, we can simplify Equation <ref> and derive exp(-1/4μ_p^T(Σ_p+Σ_q)^-1μ_p+μ_q^T(Σ_p+Σ_q)^-1μ_q-μ_p^T(Σ_p+Σ_q)^-1μ_q-μ_q^T(Σ_p+Σ_q)^-1μ_p), where can be further simplified to yield the following expression: h(μ_p, μ_q, Σ_p, Σ_q) = exp(-18(μ_p-μ_q)^TΣ^-1(μ_p-μ_q)), where Σ = Σ_p+Σ_q/2. Ultimately, by utilizing the definition of the Bhattacharyya coefficient, Equation <ref>, and Equation <ref>, we can deduce the following conclusion: ρ(p(𝐱), q(𝐱)) = ∫_ℝ^Mp(𝐱)^1/2q(𝐱)^1/2d𝐱/√(∫_ℝ^Mp(𝐱) d𝐱)√(∫_ℝ^Mq(𝐱) d𝐱) =∫_ℝ^Mh(μ_p, μ_q, Σ_p, Σ_q)r(𝐱)d𝐱/√(∫_ℝ^Mp(𝐱) d𝐱)√(∫_ℝ^Mq(𝐱) d𝐱) = h(μ_p, μ_q, Σ_p, Σ_q) ∫_ℝ^M|2πΣ_r|^1/2𝒩(𝐱;μ_r, Σ_r)d𝐱/√(∫_ℝ^M|2πΣ_p|^1/2𝒩(𝐱;μ_p, Σ_p) d𝐱)√(∫_ℝ^M|2πΣ_q|^1/2𝒩(𝐱;μ_q, Σ_q) d𝐱) = |Σ_r|^1/2/|Σ_p|^1/4|Σ_q|^1/4h(μ_p, μ_q, Σ_p, Σ_q) = |2Σ_p(Σ_p+Σ_q)^-1Σ_q|^1/2/|Σ_p|^1/4|Σ_q|^1/4h(μ_p, μ_q, Σ_p, Σ_q) (a)= |Σ_p|^1/2|Σ_q|^1/2|Σ|^1/2exp(-18(μ_p-μ_q)^TΣ^-1(μ_p-μ_q)), where, Σ = Σ_p+Σ_q/2 and (a) is followed by the probability property that the total area underneath a probability density function is 1. The notation 𝒩(𝐱;μ, Σ) represents a multivariate Gaussian probability distribution in M dimensions, characterized by a mean vector μ and a covariance matrix Σ. This completes the proof of Remark 1. §.§ Forward Propagation in KMM. The KMM (Kernel Mixture Module) takes the feature vector 𝐟^n∈ℝ^M as input from the encoder network and produces the parameters for each exponential kernel component in the kernel mixture model. This transformation converts the feature vector into 3K values, where each K represents the parameters for the kth kernel component (existing class), such as μ_k^n∈ℝ, σ_k^n∈ℝ^+, π_k^n∈ [0, 1]. The adaptive parameters are computed through forward propagation, employing suitable activation functions to ensure that the parameters satisfy their respective constraints. The activations corresponding to the parameters of the kth component for the KMM ((a_k^μ)^n,(a_k^σ^2)^n, (a_k^π)^n) are used to accomplish this, and they are calculated through the forward propagation of a fully connected layer by (a_k^μ)^n = 𝐰_k^μ𝐟^n + b^μ_k, (a_k^σ^2)^n = 𝐰_k^σ^2𝐟^n + b^σ^2_k, (a_k^π)^n = 𝐰_k^π𝐟^n + b^π_k, where, {𝐰_k^μ, 𝐰_k^σ^2, 𝐰_k^π}∈ℝ^M are the weights, and {b^μ_k, b^σ^2_k, b^π_k}∈ℝ represent the biases associated with {(a_k^μ)^n, (a_k^σ^2)^n, (a_k^π)^n}, respectively. We make a minor revision to the idea of using nonlinear activation from <cit.> by replacing softmax with sigmoid to normalize the mixture of coefficients and address multilabel issues. In the following, we define the nonlinear and linear transformations applied to (ak^μ)^n, (ak^σ^2)^n, (a_k^π)^n using π_k^n = 11+exp(-(a_k^π)^n), μ_k^n = (a_k^μ)^n, (σ_k^n)^2 = ELU((a_k^σ^2)^n)+2+ϵ, where ELU(·) and ϵ are the exponential linear unit function <cit.> and the hyperparameter used to ensure training stability, respectively. We use a modified ELU function rather than the exponential function as the activation on (a_k^σ^2)^n in order to ensure that variances remain non-negative ((σ_k^n)^2≥ 0). 
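Gathering these forward-propagation rules, a minimal and unofficial PyTorch rendering of the KMM head is given below; the single 3K-wide linear layer, the feature dimension and the class count are illustrative choices, and ε is the small stability constant mentioned above.

```python
import torch
import torch.nn as nn

class KernelMixtureModule(nn.Module):
    # One fully connected layer emits (pi_k, mu_k, sigma_k^2) for each of K classes,
    # with sigmoid for pi, identity for mu, and the modified ELU for the variance.
    def __init__(self, feat_dim, num_classes, eps=1e-6):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 3 * num_classes)
        self.eps = eps

    def forward(self, feat):
        a_pi, a_mu, a_var = self.fc(feat).chunk(3, dim=-1)
        pi = torch.sigmoid(a_pi)                             # mixture coefficients in [0, 1]
        mu = a_mu                                            # unconstrained means
        var = nn.functional.elu(a_var) + 2.0 + self.eps      # (sigma_k)^2 = ELU(a) + 2 + eps > 0
        return pi, mu, var

kmm = KernelMixtureModule(feat_dim=512, num_classes=20)
pi, mu, var = kmm(torch.randn(4, 512))
print(pi.shape, bool((var > 0).all()))
```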
This modification is necessary because the vanilla exponential function exhibits rapid growth for larger values, which can lead to training instability, particularly when dealing with high-variance datasets. It is important to note that there is no constraint on the mean μ_k^n, as it is obtained directly from the activation (a_k^μ)^n. § DATASETS PASCAL-VOC The PASCAL Visual Object Classes Challenge (2007) <cit.> is a common computer vision dataset used in multi-label classification. It contains a total of 9963 images over 20 classes, including 'cat', 'bottle', and 'person'. Being consistent with the state of the art, we trained our architecture on the trainval set and evaluated it on the test set with a total of 5011 and 4952 images in each set, respectively. Referencing the relative frequency in the main paper, we can see that the ratio of the number of classes per image to the total number of classes is heavily unbalanced, with the majority of images having only 2-4 classes. MS-COCO The Microsoft COCO dataset <cit.> is another common computer vision dataset used in multi-label classification. This dataset includes 82,081 training and 40,504 validation images across 80 different classes including 'person', 'bicycle', and 'elephant'. Following the state of the art, we test our method on the validation dataset, making it comparable with competitive approaches. ADP The Atlas of Digital Pathology for Histological Tissue Type Classification <cit.> is composed of digital histology images taken from several organ tissues, including the colon, brain, stomach, etc. These images were generated via a Whole Slide Image (WSI) scanner. This database includes 17,668 image patches that are multilabel in nature. The training, validation, and test sets contain 14,134, 1767, and 1767 images respectively. This labeling scheme follows a three-tier hierarchy: L1 (9 labels), L2 (11 labels), and L3 (22 labels). As we progress down the levels, the features being annotated gradually progress from coarse to fine detail. The highest level (L1) contains classes that amalgamate several lower-level classes. For example, Dense Regular Connective (C.D.R) is an L3 precise label that falls under the more coarse L1 category of Connective (C). For the purpose of our work, we have selected L1 as it seems to be the most statistically significant selection with a better balance of per-class distribution. ChestXray-14 The ChestX-Ray 14 dataset contains hospital-scale frontal-view chest X-ray images from 30,805 unique patients. Each image either contains multiple common thoracic illnesses including ‘cardiomegaly’ or ‘pneumonia’ or is designated ‘normal’ indicating no illness. The released version of the dataset catalogs 14 common illnesses to date, as opposed to the original 8 that were released at the time of publication. §.§ Hyperparameters & Tuning In this section, we list all the necessary parameters for the reproducibility of our method. We have categorized our hyperparameters depending on which part of the pipeline they relate to (i.e., Training Optimization refers to any parameters used in setting up the training phase). A special note is made for the Loss Development λ values. In order to best tune our method, we sampled a 15-point log-random search in a subset of the provided range to best adapt our model to the given datasets. See Table <ref>.
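To make the KMM forward propagation described above concrete, the following PyTorch-style sketch maps an M-dimensional feature vector to the K mixture parameters using the stated activations (sigmoid for π_k, identity for μ_k, and the shifted ELU for (σ_k)^2). Class and variable names are illustrative rather than taken from the released implementation, and the offset 2 + ϵ simply follows the equation above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelMixtureHead(nn.Module):
    # Minimal sketch: one fully connected layer per parameter group, as in the equations above.
    def __init__(self, feat_dim, num_classes, eps=1e-3):
        super().__init__()
        self.fc_pi = nn.Linear(feat_dim, num_classes)    # activations (a_k^pi)^n
        self.fc_mu = nn.Linear(feat_dim, num_classes)    # activations (a_k^mu)^n
        self.fc_var = nn.Linear(feat_dim, num_classes)   # activations (a_k^sigma^2)^n
        self.eps = eps

    def forward(self, f):                                # f: (batch, feat_dim)
        pi = torch.sigmoid(self.fc_pi(f))                # mixture coefficients in [0, 1] (multilabel)
        mu = self.fc_mu(f)                               # unconstrained means
        var = F.elu(self.fc_var(f)) + 2.0 + self.eps     # shifted ELU keeps variances strictly positive
        return pi, mu, var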
§.§ Additional Information on Metrics Being consistent with state-of-the-art methods, we calculate the average overall precision (OP), recall (OR), and F1 score (OF1), in addition to the average per-class precision (CP), recall (CR), and F1 score (CF1), as metrics for evaluating the different methods on the datasets <cit.>. Overall these metrics challenge the model’s ability to accurately discriminate the class of interest in terms of measuring false positives and false negatives. Superior OF1 and CF1 indicate that the model is well-tuned for class discrimination as these metrics encompass both recall and precision in the calculation. For some experiments, we include the following computational complexity measures: Parameters (MM) to indicate model size, and GMAC to indicate the forward computational resource required. The motivation behind these metrics is to illustrate that performance is not only measured through how well the method discriminates classes but also through the complexity of deploying said method in the real world. Finally, due to the increased difficulty of the ChestX-ray14 dataset, we additionally report per-class AUC scores to identify model discriminability for the class of interest; this has been a common trend in papers that have cited results on this dataset <cit.>. §.§ Additional Visualizations To further augment the main paper visualizations, we attach supplemental visualizations on the two additional datasets: MS-COCO and ChestXray-14. As can be seen from the visualizations, our model is more precise at localizing the correct features. Due to capturing the epistemic uncertainty from the kernel representation, our method is able to focus the activation on the correct class, limiting extraneous false positive results. See Figure <ref>.
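For reference, the overall and per-class precision/recall/F1 metrics listed above can be computed from binary multi-label predictions as in the following NumPy sketch. Function and variable names are ours, and thresholding of scores into 0/1 predictions is assumed to have been done beforehand.

import numpy as np

def multilabel_metrics(y_true, y_pred, eps=1e-12):
    # y_true, y_pred: (num_samples, num_classes) binary arrays.
    tp = (y_true * y_pred).sum(axis=0).astype(float)     # per-class true positives
    pred_pos = y_pred.sum(axis=0).astype(float)          # per-class predicted positives
    gt_pos = y_true.sum(axis=0).astype(float)            # per-class ground-truth positives

    OP = tp.sum() / (pred_pos.sum() + eps)               # overall precision: pool counts over classes
    OR = tp.sum() / (gt_pos.sum() + eps)                 # overall recall
    OF1 = 2 * OP * OR / (OP + OR + eps)

    CP = (tp / (pred_pos + eps)).mean()                  # per-class precision, then average
    CR = (tp / (gt_pos + eps)).mean()                    # per-class recall, then average
    CF1 = 2 * CP * CR / (CP + CR + eps)
    return OP, OR, OF1, CP, CR, CF1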
http://arxiv.org/abs/2307.04172v1
20230709133825
Can Generative Large Language Models Perform ASR Error Correction?
[ "Rao Ma", "Mengjie Qian", "Potsawee Manakul", "Mark Gales", "Kate Knill" ]
cs.CL
[ "cs.CL", "cs.SD", "eess.AS" ]
ASR error correction continues to serve as an important part of post-processing for speech recognition systems. Traditionally, these models are trained with supervised training using the decoding results of the underlying ASR system and the reference text. This approach is computationally intensive and the model needs to be re-trained when switching the underlying ASR model. Recent years have seen the development of large language models and their ability to perform natural language processing tasks in a zero-shot manner. In this paper, we take ChatGPT as an example to examine its ability to perform ASR error correction in the zero-shot or 1-shot settings. We use the ASR N-best list as model input and propose unconstrained error correction and N-best constrained error correction methods. Results on a Conformer-Transducer model and the pre-trained Whisper model show that we can largely improve the ASR system performance with error correction using the powerful ChatGPT model. ASR error correction, generative model, large language model, speech recognition, zero-shot § INTRODUCTION Automatic speech recognition (ASR) systems aim to transcribe human speech into readable text and are the key component for human-computer interaction <cit.>. In recent years, significant advancements have been made in this area. End-to-end (E2E) systems such as LAS or RNN-T are effective at modelling long context within the utterance and show superior performance compared to the HMM-based counterparts <cit.>. The training of ASR systems requires the availability of high-quality transcribed speech data, which can be costly to obtain. In general, the training sets of publicly available corpora contain thousands of hours of annotated data. In contrast, the recently released ASR model, Whisper <cit.>, is pre-trained on 680,000 hours of weakly supervised data collected from the Internet. Once published, Whisper gained extensive attention from both academia and industry. The decoder part of an RNN-T or a LAS model acts as a language model that estimates the probability of the generated word sequence <cit.>. It learns from the labelled reference text and is jointly trained with the acoustic encoder. Due to the limited availability of training speech data, ASR systems struggle to generate rare words that have low frequency in the training corpus. Compared to speech data, large quantities of text data covering a wide range of domains are much easier to collect and process. Therefore, text-based methods have been explored to improve the performance of speech recognition systems. Among these, ASR error correction, which automatically identifies errors within the ASR hypothesis and outputs the corrected transcription, is widely used <cit.>. The development of the error correction model follows the trend of Natural Language Processing (NLP) technology. Early models were rule-based systems, which required carefully designed features and human expertise <cit.>. With the emergence of recurrent networks and attention mechanisms, models with the E2E architecture later became mainstream. These models usually adopt a similar structure where the bidirectional encoder takes the ASR transcription as input and the reference text is used as the training target.
This approach has shown promising performance on diverse datasets for ASR models of different architectures <cit.>. In the past few years, large-scale pre-trained language models are made available, which are generally trained on multi-domain text data that is several magnitudes more than the prevailing ASR systems. For instance, BERT is pre-trained on 3,300M words <cit.> and T5 is trained on 750GB text <cit.>. Previous works <cit.> developed methods to build an ASR error correction model based on the powerful T5 model. By fine-tuning from the pre-trained NLP model, implicit knowledge learned from huge amounts of text data can be effectively transferred to the target error correction task. Results indicate the importance of adopting the ASR N-best list rather than the top one hypothesis as model input for accessing richer context in the correction process. Traditional error correction models are trained in a supervised fashion to effectively learn the error patterns made by the ASR system. The training process requires first decoding the ASR model on large amounts of speech data, and then using the erroneous hypotheses to train the correction model. These two stages can be computationally intensive to adopt in practice. Additionally, the error correction model is usually bound to a specific ASR system for a specific domain. Therefore, when we switch the underlying ASR system or apply it to a new domain, the corresponding error correction model can be less effective and needs to be re-trained. To address the above issues, we develop approaches to perform zero-shot or few-shot ASR error correction within the scope of this paper. The proposed methods are training-free and enable plug-and-play support to an existing ASR system. Generative large language models (LLMs) such as ChatGPT have demonstrated remarkable performance of language understanding on text processing tasks <cit.>. In our work, we examine its performance in identifying and correcting errors made by the ASR system. In the experimental section, several prompts and both unconstrained and constrained generation methods are compared on standard speech recognition datasets. Results show that for both a Transducer-based ASR system and the pre-trained Whisper model, ChatGPT shows great potential in performing ASR error correction. § BACKGROUND Error correction models aim to fix errors in the ASR transcription and are an integral part of ASR post-processing. A standard error correction model adopts an E2E structure, taking the ASR transcriptions as the model input and generating the corrected sentence. Several model variants incorporating additional inputs have been proposed. <cit.> proposes an N-best T5 error correction model that is fine-tuned from a pre-trained T5 model. It leverages the N-best ASR hypotheses as model input and demonstrates significant performance gain over the model using the 1-best input. It also proposes an N-best constrained decoding approach in error correction, which uses the combined scores of the ASR model and the T5 model to find the best hypothesis in the N-best list. There has been rapid growth in current LLM literature, and larger and better LLMs are constantly released. Recently, as LLMs have been scaled up in size, pre-trained on increasingly more data, and further fine-tuned to follow instructions, they are capable of performing several NLP tasks in a zero-shot manner <cit.>. For example, LLMs such as ChatGPT have been applied to summary assessment <cit.>, and grammatical error correction <cit.>. 
However, their inherent ability to perform ASR post-processing tasks, such as ASR error correction, has been less explored. In this work, we follow <cit.> to use the ASR N-best list as input to the error correction model while using ChatGPT rather than T5 to perform the task. § ASR ERROR CORRECTION WITH LLM In this section, we introduce our methods of utilising generative large language models for zero-shot or few-shot error correction. Two types of tasks: unconstrained error correction and N-best constrained error correction are discussed. §.§ Unconstrained Error Correction In the unconstrained error correction (uncon) setting, we ask ChatGPT to directly output the corrected hypothesis without adding further explanation. Since ChatGPT has no prior knowledge about the error patterns of the ASR system and no access to the original utterance, this task can be relatively difficult to perform. Therefore, instead of the 1-best ASR transcription, we input the N-best list obtained from the beam search decoding of the ASR model to ChatGPT. Hypotheses from the N-best list can act as hints to help the model better detect and correct the errors <cit.>. In the ablation study, we show that using a reasonable number of N is important for the model to achieve good performance. When only the top one ASR hypothesis is used as input, ChatGPT yields much worse performance than the proposed method. The prompt designed for the zero-shot uncon setting is illustrated in Figure <ref>. In the designed prompt, all the hypotheses are sorted by the ASR posterior score. Furthermore, tags like and are used to surround each N-best hypothesis. Other input formats such as using numbers rather than tags or using plain sentences without the explicitly specified order are also examined and show degraded performance to our selected prompt. Considering the complexity of this task, we additionally experiment with the 1-shot setting to perform in-context learning. Here, we give an example for ChatGPT to refer to before conducting error correction (shown in orange colour in Figure <ref>). This example is selected from the decoding result of Conformer-Transducer on the dev_other set of LibriSpeech. By showing both input and the desired output in the prompt, we hope to remind ChatGPT to match the sentence length of the given hypotheses and only make edits to the detected errors. §.§ N-best Constrained Error Correction In the above section, we perform standard ASR error correction to generate the corrected transcription based on the information from the given hypotheses. Results in <cit.> suggest that constraining the decoding space to the given N-best list leads to performance gain in some cases. In the following, we design two methods to constrain the output of ChatGPT to be a hypothesis within the given N-best list, namely the selective approach and the closest mapping. §.§.§ Selective Approach With the selective approach (select), ChatGPT is asked to select the most likely ASR transcription from all the candidates rather than generate one from scratch. All the input sentences are listed as , and ChatGPT is asked to return the selected option in the format of . This method is similar to language model rescoring to some extent, however, it performs the selection in one go. More importantly, ChatGPT sees all the candidates before deciding on the best one. 
This is different from the rescoring process where language model scores are generated individually for each of the N-best hypotheses without comparing the similarity and correlation between each other. §.§.§ Closest Mapping The closest mapping method (closest) is based on the assumption that when ChatGPT performs unconstrained error correction, it first selects the best hypothesis from the given N-best list and makes modifications based on this sentence to yield the final output. Therefore, we hope to find this “closest match” in a reverse process by finding the hypothesis within the ASR N-best list that has the smallest Levenshtein distance to the ChatGPT unconstrained generation result. For instance, for the zero-shot uncon example in Figure <ref>, the Levenshtein distance of the ChatGPT output to the 3-best ASR hypotheses is 1, 0, 1 respectively. Therefore, the second hypothesis will be selected as the corrected result for this utterance. § EXPERIMENTS §.§ Setup We conduct experiments on ChatGPT (gpt-3.5-turbo-0613) to study its performance on error correction for two ASR models. A novel Conformer-Transducer <cit.> model containing 12 encoder layers is utilised. The model was trained on 960 hours LibriSpeech data with SpecAugment <cit.> and speed perturbation applied, following the ESPnet recipe <cit.>. The other ASR model studied is the Whisper <cit.> small.en model. In decoding, we follow <cit.> to suppress the probability of the most common punctuation. Each ASR model is decoded with a beam size of 10 that generates a 10-best list as a byproduct at inference. If not stated otherwise, the top five hypotheses are used as input to ChatGPT, i.e. the size of the input N-best list is 5. The effect of adopting different N is studied in the ablation experiment. We apply lowercase representation to the ASR N-best list without performing other text processing steps. At the evaluation stage, we run the text normalisation scripts from the Whisper project on both the ASR reference and the hypothesis text before calculating WER results. The proposed approaches are evaluated on three public datasets, namely LibriSpeech <cit.>, TED-LIUM3 <cit.>, and Artie bias corpus <cit.>. LibriSpeech is an audiobook-based English speech corpus, TED-LIUM3 is an audio dataset collected from TED talks, and Artie bias corpus is a subset of the Common Voice dataset <cit.> which is also read speech. The details of the datasets are presented in Table <ref>. We undertook a comparative analysis between ASR error correction using the generative LLM ChatGPT and a standard error correction model that adopts an E2E structure. To be specific, we trained an N-best T5 error correction model for the ASR system, as described in Section <ref>. The N-best T5 model was fine-tuned on the 10-best list of the Conformer-Transducer ASR model decoded on the 960 hours LibriSpeech training set. §.§ Experiments on Conformer-Transducer In Table <ref>, we study the behaviour of ChatGPT on ASR error correction when the Conformer-Transducer model is used as the base system. Results from the fine-tuned T5 error correction model are listed for comparison. Since both the ASR system and the T5 error correction model are trained on 960 hours of LibriSpeech training data, LibriSpeech can be considered as an in-domain dataset to the T5 model. In this case, the supervised trained N-best T5 model yields a performance gain of 10.9% (6.90 to 6.15) over the ASR baseline. 
Error correction results using ChatGPT with different methods are presented, which does not require any form of model training and is therefore more efficient. In the zero-shot setting, both the selective approach and closest mapping perform better than the unconstrained generation, which is in line with the T5 model results that constrained decoding performs better. Moreover, the 0-shot closest which finds the closest match of the output corrected hypothesis in the given N-best list performs better than asking ChatGPT to directly select the best one from the N-best list. The unconditional error correction results become much better when we switch to the 1-shot uncon prompt (6.64 to 6.29), indicating that ChatGPT has a better understanding of the task by referring to the given example. When we further apply the closest mapping in the 1-shot setting, WER on the test set is reduced to 6.24, which is comparable to the T5 model performance. TED-LIUM3 can be considered as an out-of-domain dataset for both the ASR model and the trained T5 error correction model. Therefore, the ASR system shows high error rates on the test set while the T5 model gives 11.3% WERR by performing error correction. Results from the ChatGPT-based methods show significant performance improvement. The 1-shot uncon approach largely outperforms the ASR baseline by 25.1% (13.53 to 10.13). The result is even better than the oracle WER of the 5-best list output by the ASR model. As the upper bound for the constrained decoding methods is the 5-best oracle WER, the 1-shot closest shows worse performance on the test set. The results on both datasets suggest that ChatGPT is effective at detecting errors in the given ASR hypotheses and generating the corrected transcription, especially for out-of-domain scenarios. To further study where the performance gain comes, we built a ROVER-based system <cit.> to align and combine the hypotheses in an N-best list with weighted voting, but it leads to worse results compared to the ASR baseline. Experimental results suggest that ChatGPT leverages the implicitly learned world knowledge to generate the corrected ASR transcription based on the given input information, instead of performing a simple voting process on the N-best list. In Table <ref>, we calculate the WER breakdown of different types of errors. When using the zero-shot uncon prompt, the error correction results from ChatGPT contain fewer substitution and insertion errors compared to the original ASR baseline while leading to much more deletions. With human evaluation, we find out that in the ChatGPT output, error correction results for 14 sentences are truncated (only the first few words are in the ChatGPT output rather than the entire sentence), contributing to 0.2% absolute WER. With 1-shot learning, ChatGPT performs more stable and all the badcases are solved, yielding better overall performance. Additionally, we observe that in some cases, ChatGPT has a tendency to remove redundant spoken words from the given ASR hypothesis to make the transcription more fluent. With 1-shot closest, we search from the given N-best list for the final output and therefore the introduced deletion errors can be reduced. In table <ref>, we perform ablation for the size of the N-best list on the LibriSpeech test_other set. Results show that using a large number of N is important for ChatGPT to perform well with the zero-shot uncon prompt. 
In the extreme case of only using the top one ASR hypothesis as input, ChatGPT makes many unnecessary changes to the input to make the sentence more “reasonable” due to lack of information. With the increased N-best list size, it learns to compare the differences between the hypotheses and correct when the sentences disagree with each other. For the selective approach, the size of the N-best list matters less as ChatGPT performs choice selection rather than generating the entire corrected hypothesis. The 1-shot closest method achieves the best performance on the test_other set. With the closest mapping, we select the ASR hypothesis within the N-best list that is most similar to the ChatGPT output. Thus, for each utterance, the selected hypothesis falls in the range of hypothesis-1 to hypothesis-5, and we divide the test set into 5 splits accordingly. The proportions of each subset are 67%, 14%, 8%, 5%, and 6%. In Figure <ref>, we calculate the WER of the ASR baseline and the WER after error correction for each subset. When the selected hypothesis is the same as the top one ASR hypothesis, WER remains the same as the ASR baseline. When the model selects other transcription from the N-best list, performance improvement can be seen compared to the ASR baseline. §.§ Experiments on Whisper Next, we investigate the impact of the proposed methods on the pre-trained Whisper model, and the results are listed in Table <ref>. Although Whisper already demonstrates state-of-the-art performance, ChatGPT proves to be effective in correcting ASR errors on both LibriSpeech and Artie, yielding 5.4% and 8.9% WERR on the test sets respectively. However, ChatGPT shows worse performance compared to the ASR baseline on the TED-LIUM3 set. In particular, much more deletion errors compared to the ASR baseline can be seen in the ChatGPT output with all the proposed methods. To further study the possible cause for ChatGPT to be less effective on Whisper outputs, we analyse the ASR N-best list of both Whisper and the Transducer model, as shown in Table <ref>. When computing the statistics, punctuation and special symbols are removed from the ASR hypotheses, leaving only English characters and numbers to focus on meaningful content. The Uniq metric refers to the number of unique hypotheses within one N-best list. We compute the average of all samples in the test set. For Transducer outputs, the result is close to 5 which is the size of the N-best list, however, there are more repeated entries in Whisper outputs. This is due to the fact that Whisper learns to generate sentences with inverse text normalisation (ITN) to improve the readability, i.e. capitalisation added, punctuation and other symbols included, and disfluency removed. Accordingly, in many cases, multiple hypotheses in an N-best list only differ in format, not in actual content. Nevertheless, the diversity of the N-best list is important for our error correction method to perform well. Another observation is that in Whisper output, even when the hypotheses in the N-best list are diverse, the difference may come from one hypothesis omitting or inserting some irrelevant words in the output. This is illustrated with the Cross WER metric in Table <ref>. Here, we keep all the unique hypotheses in an N-best list. Then for each pair of hypotheses in the remaining list, we calculate the WER result against each other and sum the result on the entire set. This metric can help us measure the difference between hypotheses within one N-best list. 
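To illustrate one plausible implementation of the Uniq and Cross WER statistics just described, the sketch below counts the unique hypotheses in an N-best list and accumulates pairwise word-level edit errors. Helper names are ours, text normalisation (removing punctuation and special symbols) is assumed to have been applied already, and the substitution/deletion/insertion breakdown reported in the table is omitted for brevity.

from itertools import combinations

def word_edit_counts(ref_words, hyp_words):
    # Word-level Levenshtein distance; returns (#edits, #reference words).
    R, H = len(ref_words), len(hyp_words)
    d = [[0] * (H + 1) for _ in range(R + 1)]
    for i in range(R + 1):
        d[i][0] = i
    for j in range(H + 1):
        d[0][j] = j
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            sub = d[i - 1][j - 1] + (ref_words[i - 1] != hyp_words[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[R][H], R

def nbest_statistics(nbest):
    # Uniq: number of distinct hypotheses; Cross WER: pairwise errors over the unique hypotheses.
    uniq = list(dict.fromkeys(nbest))
    errors, ref_len = 0, 0
    for a, b in combinations(uniq, 2):
        e, r = word_edit_counts(a.split(), b.split())
        errors += e
        ref_len += r
    return len(uniq), errors, ref_len   # corpus-level Cross WER = sum(errors) / sum(ref_len)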
The results show that the deletion and the insertion rates of Whisper on the Cross WER metric are much higher than those of the Transducer model. This suggests that Whisper may fail to faithfully transcribe the utterance in all N-best hypotheses, resulting in sentences with varying lengths. ChatGPT tends to choose more coherent ones, leading to the large number of deletions in the error correction results. In Table <ref>, we conduct a case analysis for an error correction example from the test set of TED-LIUM3. As the table shows, for the Transducer ASR model, all the hypotheses are of similar length containing all the information from the utterance, and the Uniq metric is 5. ChatGPT helps to correct “blue” into “blew” utilising the given N-best list and world knowledge. Meanwhile, for Whisper 5-best hypotheses, the Uniq metric is only 3 due to the repetition problem. In addition, disfluencies in the utterance (“that”, “you know”) and the non-existent word (“and”) are incorrectly removed or introduced in the output, resulting in more deletions and insertions in Cross WER. The produced N-best list is hence less informative and misleads ChatGPT into the wrong output. § CONCLUSIONS In this paper, we propose to use ChatGPT, a powerful generative large language model, to perform ASR error correction in zero-shot or 1-shot settings. Results on standard datasets suggest that when using the ASR N-best list as input, ChatGPT has the ability to detect and correct errors for the ASR output. 10% and 25% WER reductions can be observed for the Transducer model in the in-domain and out-of-domain settings. We also analyse the Whisper N-best list to explore potential reasons that cause the proposed methods to be less effective.
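As an appendix-style illustration, the closest-mapping selection described earlier (picking the N-best hypothesis with the smallest Levenshtein distance to the unconstrained ChatGPT output) can be sketched as follows. It reuses the word_edit_counts helper from the previous sketch and measures distance over words, consistent with the example distances quoted above, although a character-level distance would work similarly; the tie-breaking rule is an assumption of ours.

def closest_mapping(chatgpt_output, nbest):
    # Pick the ASR hypothesis closest to ChatGPT's unconstrained correction;
    # ties keep the earlier (higher-scoring) hypothesis in the N-best ordering.
    best_hyp, best_dist = None, float("inf")
    for hyp in nbest:
        dist, _ = word_edit_counts(chatgpt_output.split(), hyp.split())
        if dist < best_dist:
            best_hyp, best_dist = hyp, dist
    return best_hyp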
http://arxiv.org/abs/2307.04013v1
20230708164601
BPNet: Bézier Primitive Segmentation on 3D Point Clouds
[ "Rao Fu", "Cheng Wen", "Qian Li", "Xiao Xiao", "Pierre Alliez" ]
cs.CV
[ "cs.CV" ]
This paper proposes BPNet, a novel end-to-end deep learning framework to learn Bézier primitive segmentation on 3D point clouds. The existing works treat different primitive types separately, thus limiting them to finite shape categories. To address this issue, we seek a generalized primitive segmentation on point clouds. Taking inspiration from Bézier decomposition on NURBS models, we transfer it to guide point cloud segmentation, casting off primitive types. A joint optimization framework is proposed to learn Bézier primitive segmentation and geometric fitting simultaneously on a cascaded architecture. Specifically, we introduce a soft voting regularizer to improve primitive segmentation and propose an auto-weight embedding module to cluster point features, making the network more robust and generic. We also introduce a reconstruction module where we successfully process multiple CAD models with different primitives simultaneously. We conducted extensive experiments on the synthetic ABC dataset and real-scan datasets to validate and compare our approach with different baseline methods. Experiments show superior performance over previous work in terms of segmentation, with a substantially faster inference speed. § INTRODUCTION Structuring and abstracting 3D point clouds via segmentation is a prerequisite for various computer vision and 3D modeling applications. Many approaches have been proposed for semantic segmentation, but the finite set of semantic classes limits their applicability. 3D instance-level segmentation and shape detection are much more demanding, yet this literature lags far behind its semantic segmentation counterpart. Finding a generalized way to decompose point clouds is essential. For example, man-made objects can be decomposed into canonical primitives such as planes, spheres, and cylinders, which are helpful for visualization and editing. However, the limited types of canonical primitives are insufficient to describe objects' geometry in real-world tasks. We are looking for a generalized way of decomposing point clouds. The task of decomposing point clouds into different geometric primitives with corresponding parameters is referred to as parametric primitive segmentation. Parametric primitive segmentation is more reasonable than semantic instance segmentation for individual 3D objects, since it unifies the 3D objects in the parametric space instead of forming artificially defined parts. However, the task is quite challenging as 1) there is no exhaustive repertoire of canonical geometric primitives, 2) the number of primitives and points belonging to that primitive may significantly vary, and 3) points assigned to the same primitive should belong to the same type of primitive. Inspired by Bézier decomposition, whereby NURBS models can be divided into canonical geometric primitives (plane, sphere, cone, cylinder, etc.) and parametric surfaces into rational Bézier patches, we propose to learn Bézier decomposition on 3D point clouds. We focus on segmenting point clouds sampled from individual objects, such as CAD models. Departing from previous primitive segmentation, we generalize different primitive types to Bézier primitives, making them suitable for end-to-end and batch training.
To the best of our knowledge, our method is the only work to learn Bézier decomposition on point clouds. To summarize our contributions: * We introduce a novel soft voting regularizer for the relaxed intersection over union (IOU) loss, improving our primitive segmentation results. * We design a new auto-weight embedding module to cluster point features which is free of iterations, making the network robust to real-scan data and work for axis-symmetric free-form point clouds. * We propose an innovative reconstruction module where we succeed in using a generalized formula to evaluate points on different primitive types, enabling our training process to be fully differential and compatible with batch operations. * Experiments demonstrate that our method works on the free-form point clouds and real-scan data even if we only train our model on the ABC dataset. Furthermore, we present one application of Bézier primitive segmentation to reconstruct the full Bézier model while preserving the sharp features. The code is available at: <https://github.com/bizerfr/BPNet>. § RELATED WORK Bézier primitive segmentation involves parametric fitting, instance segmentation, and multi-task learning. We now provide a brief review of these related research areas. Primitive segmentation. Primitive segmentation refers to the search and approximation of geometric primitives from point clouds. Primitives can be canonical geometric primitives, such as planes or spheres, or parametric surface patches, such as Bézier, BSpline, or NURBS. We can classify primitive segmentation methods into two lines of approaches: geometric optimization and machine learning. Popular geometric optimization-based methods include RANSAC <cit.>, region growing <cit.> and Hough transforms <cit.>. We refer to <cit.> for a comprehensive survey. One limitation of geometric optimization-based methods is that they require strong prior knowledge and are hence sensitive to parameters. In order to alleviate this problem, recent approaches utilize neural networks for learning specific classes of primitives such as cuboids <cit.>. The SPFN supervised learning approach <cit.> detects a wider repertoire of primitives such as planes, spheres, cylinders, and cones. Apart from the canonical primitives handled by SPFN, ParSeNet <cit.> and HPNet <cit.> also detect open or closed BSpline surface patches. Nevertheless, different types of primitives are treated separately with insufficient genericity. This makes them unsuitable for batch operations, thus suffering long inference times. Deep learning-based methods are less sensitive to parameters but often support a limited repertoire of primitives. Our work extends SPFN, ParSeNet, and HPNet with more general Bézier patches. Instance segmentation. Instance segmentation is more challenging than semantic segmentation as the number of instances is not known a priori. Points assigned to the same instance should fall into the same semantic class. We distinguish between two types of methods: proposal-based <cit.> and proposal-free methods <cit.>. On the one hand, proposal-based methods utilize an object-detection module and usually learn an instance mask for prediction. On the other hand, proposal-free methods tackle the problem as a clustering step after semantic segmentation. We refer to a recent comprehensive survey <cit.>. 
The significant difference between instance segmentation and primitive segmentation is that instance segmentation only focuses on partitioning individual objects where primitive fitting is absent. Patch-based representations. Patch-based representations refer to finding a mapping from a 2D patch to a 3D surface. Previous works including <cit.> learn a parametric 2D mapping by minimizing the Chamfer distance <cit.>. One issue with Chamfer distance is that it is not differentiable when using the nearest neighbor to find matched pairs. We learn the uv mapping instead. Learning uv parameters enables us to re-evaluate points from our proposed generalized Bézier primitives, making our training process differentiable and supporting batch operations. Multi-task learning. Multi-task learning aims to leverage relevant information contained in multiple related tasks to help improve the generalization performance of all the tasks <cit.>. Compared to single-task learning, the architectures used for multi-task learning—see, e.g., <cit.>—share a backbone to extract global features, followed by branches that transform the features and utilize them for specific tasks. Inspired by <cit.>, we use a cascaded architecture for our joint optimization tasks. § METHOD Figure <ref> shows an overview of the proposed neural network. The input to our method is a 3D point cloud P={p_i | 0≤ i ≤ N-1}, where p_i denotes the point coordinates (with or without normals). The output is the per-point patch labels { P_k | ∪_k=0 P_k = P}, where each patch corresponds to a Bézier primitive. The network will also output patch degree (d_u-by-d_v) and weighted control points C={𝐜_kmn = (x,y,z,w)|0≤ m ≤ d_u, 0≤ n ≤ d_v, 0 ≤ k ≤ K-1}, where K denotes the number of patches. We constrain the maximum degree to be M_d*N_d. We let our network output a maximum number of K Bézier patches for all CAD models, and we use K̂ to denote the ground-truth number of patches which is smaller than K and varies for each CAD model. §.§ Architecture Our architecture consists of two components: a backbone for extracting features and a cascaded structure for joint optimization. The backbone is based on three stacked EdgeConv <cit.> layers and extracts a 256D pointwise feature for each input point. Let 𝐏∈ℝ^N × D_in denote the input matrix, where each row is the point coordinates (D_in is three) with optional normals (D_in is six). Let 𝐗∈ℝ^N × 256 denote the 256D pointwise feature matrix extracted from the backbone. We use a cascaded structure to optimize the per-point degree probability matrix 𝐃∈ℝ^N × (M_d*N_d), the soft membership matrix 𝐖∈ℝ^N × K, the UV parameter matrix 𝐓∈ℝ^N × 2, and the weighted control points tensor 𝐂∈ℝ^K × (M_d+1) × (N_d+1) × 4 jointly. Because 𝐃, 𝐖, 𝐓, and 𝐂 are coupled, it is natural to use a cascaded structure to jointly optimize them. Here, the cascaded structure is similar to <cit.>, where the features are concatenated and transformed for different MLP branches. §.§ Joint Optimization We have four modules: decomposition, fitting, embedding, and reconstruction. They are coupled to optimize 𝐃, 𝐖, 𝐓 and 𝐂 jointly by using our proposed four modules. §.§.§ Decomposition Module Degree classification. We use Bézier primitive with different degrees to replace classical primitives, including plane, sphere, plane, BSpline, etc. For the sake of the classification of degrees, the straightforward idea would be to use a cross-entropy loss: CE = -log(p_t), where p_t denotes the possibility of the true degree labels. 
However, the degree type is highly imbalanced. For example, surfaces of degree type 1-by-1 represent more than 50%, while 3-by-2 surfaces are rare. To deal with the imbalance, we utilize the multi-class focal-loss <cit.>: FL = -(1-p_t)^γlog(p_t), where γ denotes the focusing parameter. Then the degree type classification loss is defined as: L_deg = 1/N∑_i=0^N-1FL(𝐃_i,:) Primitive segmentation. The output of primitive segmentation is a soft membership indicating per-point primitive instance probabilities. Each element w_ik is the probability for a point p_i to be a member of primitive k. Since we can acquire pointwise patch labels from our data pre-processing, we use a relaxed IOU loss <cit.> to regress 𝐖: L_seg = 1/K̂∑_k=0^K̂-1[1 - 𝐖_:,k^T 𝐖̂_:,k̂/(‖𝐖_:,k‖_1 + ‖𝐖̂_:,k̂‖_1 - 𝐖_:,k^T 𝐖̂_:,k̂)], where 𝐖 denotes the output of the neural network and 𝐖̂ is the one-hot encoding of the ground truth primitive instance labels. The best matching pairs (k, k̂) between prediction and ground truth are found via the Hungarian matching <cit.>. Please refer to <cit.> for more details. Soft voting regularizer. Since we learn 𝐃 and 𝐖 separately, points belonging to the same primitive instance may have different degrees, which is undesirable. To favor degree consistency between points assigned to the same primitive, we propose a soft voting regularizer that penalizes pointwise degree probabilities. We first compute a score for each degree case for all primitive instances by 𝐒 = 𝐖^T𝐃, where each element s_kd denotes the soft number of points for degree d in primitive instance k. We then perform L_1-normalization to convert 𝐒 into primitive degree distributions Ŝ: Ŝ = [1/∑_d s_kd] ⊙𝐒, where the first term is the reciprocal of the per-primitive sum over all degree types and ⊙ denotes the element-wise product. Finally, we utilize a focal loss to compute the primitive degree voting loss: L_voting = 1/K̂∑_k=0^K̂-1FL(Ŝ_k,:), where FL denotes the focal loss. The global loss for the decomposition module is defined as: L_dec= L_deg + L_seg + L_voting. §.§.§ Fitting Module Parameter regression. Through Bézier decomposition we obtain the ground truth labels for the (u, v) parameters and record all parameters into matrix 𝐓̂. We regress the uv parameters using a mean squared error (MSE) loss: L_para = 1/N∑_i=0^N-1‖𝐓_i,: - 𝐓̂_i,:‖_2^2 Control point regression. We select a maximum number of primitive instances K for all models. As the ground-truth number of primitive instances K̂ varies for each model, we reuse the matching pairs directly from the Hungarian matching already computed in the primitive segmentation step. Note that as the predicted degree (d_u, d_v) may differ from the ground truth (d̂_u, d̂_v), we align the degree to compute the loss via a maximum operation as (max(d_u, d̂_u), max(d_v, d̂_v)). The network always outputs (M_d+1) × (N_d+1) control points for each primitive corresponding to the predefined maximum degree in U and V direction, and these control points will be truncated by the aligned degree. Furthermore, if the ground-truth degree is smaller than the prediction, we can pad “fake” control points that are zero for the ground-truth patch; otherwise, we just use the aligned degree, which is the maximum of the predicted and the ground truth. Finally, the control point loss is defined as: L_ctrl = 1/N_𝐜∑_t=0^N_𝐜-1‖𝐜_t - 𝐜̂_t‖_2^2, where 𝐜_t and 𝐜̂_t denote the matched control points, and N_𝐜 is the number of matched control point pairs. Finally, we define the L_fit loss as: L_fit = L_para + L_ctrl.
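Before moving to the embedding module, the two decomposition losses above can be sketched in PyTorch as follows. Tensor shapes and function names are illustrative, and the Hungarian matching between predicted and ground-truth primitives is assumed to have been applied already so that the columns of the two membership matrices are aligned.

import torch

def relaxed_iou_loss(W, W_gt, eps=1e-8):
    # W: (N, K_hat) matched soft memberships; W_gt: (N, K_hat) one-hot ground truth.
    inter = (W * W_gt).sum(dim=0)                             # soft intersection per primitive
    union = W.sum(dim=0) + W_gt.sum(dim=0) - inter            # L1 norms minus intersection (entries are non-negative)
    return (1.0 - inter / union.clamp(min=eps)).mean()

def soft_voting_loss(W, D, deg_gt, gamma=3.0, eps=1e-8):
    # W: (N, K_hat) memberships, D: (N, C) per-point degree probabilities,
    # deg_gt: (K_hat,) long tensor with the ground-truth degree index of each matched primitive.
    S = W.t() @ D                                             # soft per-primitive degree scores
    S_hat = S / S.sum(dim=1, keepdim=True).clamp(min=eps)     # L1-normalised degree distributions
    p_t = S_hat.gather(1, deg_gt.view(-1, 1)).squeeze(1)      # probability assigned to the true degree
    return ((1.0 - p_t) ** gamma * -torch.log(p_t.clamp(min=eps))).mean()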
§.§.§ Embedding Module We use the embedding module to eliminate over-segmentation by pulling point-wise features toward their center and pushing apart different centers. Unlike ParSeNet and HPNet, 1) we do not need a mean-shift clustering step which is time-consuming; 2) we calculate the feature center in a weighted manner rather than simply averaging. The weights are chosen as 𝐖 and will be automatically updated in the decomposition module; 3) 𝐖 will be further optimized to improve the segmentation. Moreover, our embedding module is suitable for batch operations even though the number of primitive instances for each CAD model and the number of points for each primitive vary. Otherwise, one has to apply mean-shift for each primitive, which deteriorates timing further. To be specific, we use 𝐖 to weight 𝐗 to obtain primitive features for all candidate primitive instances. Then, we reuse 𝐖 to weight all the primitive instance features to calculate a “soft” center feature for each point. We favor that each point feature embedding is close to its “soft” center feature and that primitive instance feature embeddings are far from each other. The primitive instance-wise feature matrix 𝐗_ins is defined as: 𝐗_ins = [1/∑_i=0^N-1w_ik] ⊙ (𝐖^T𝐗), where each row of 𝐗_ins denotes the instance-wise features for each patch. We then compute the “soft” center feature matrix 𝐗_center as: 𝐗_center = 𝐖𝐗_ins, where each row denotes the “soft” center for each point. Then we define L_pull as: L_pull = 1/N∑_i=0^N-1ReLU(‖𝐗_i,: - (𝐗_center)_i,:‖_2^2 - δ_pull), and we define L_push as: L_push = 1/(2K(K-1))∑_k_1<k_2 ReLU(δ_push - ‖(𝐗_ins)_k_1,: - (𝐗_ins)_k_2,:‖_2^2). Finally, the total embedding loss L_emb is defined as: L_emb = L_pull + L_push. §.§.§ Reconstruction Module The reconstruction module is designed to reconstruct points from the predicted multiple Bézier primitives, i.e., rational Bézier patches, and further jointly optimize 𝐖. One difficulty is that each CAD model has various numbers of primitives, and the degree of each primitive is also different. Therefore, we seek a generalized formula to support tensor operations on re-evaluating points for a batch of CAD models. The straightforward approach would be to compute a synthesizing score for all degree types. Assume the maximum number of primitive instances is K, and we have M_d * N_d types of different degrees. The total number of combinations is K * M_d * N_d. We define a synthesizing score for each case in Einstein summation form: (s_w)_kci = w_ik * s_kc, where w_ik denotes the probability of point p_i to belong to primitive instance k and s_kc denotes the degree score for degree type m-by-n indexed with c = M * (m - 1) + (n - 1) for primitive instance k coming from 𝐒. Then, we need to normalize (s_w)_kci such that ∑_k, c, i (s_w)_kci = 1. Finally, the reconstructed coordinates of point p_i are defined as: (x_i', y_i', z_i')^T = ∑_k,m,n(s_w)_kci𝐑_kmn(u_i,v_i), where parameter (u_i,v_i) for point p_i is shared for all combinations. Such a formulation makes extending the formula in matrix form easy and avoids resorting to loop operations. However, such an approach is too memory-intensive. We thus truncate the degree from the degree probability matrix by re-defining the Bernstein basis function for degree d as: (B_M)_d^l(t) = (d!/(l!(d-l)!)) t^l(1-t)^(d-l) for l ≤ d, and 0 for l > d, where 0 ≤ l ≤ M, and M is the maximum degree.
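A small sketch of the truncated Bernstein basis just defined, assuming PyTorch tensors (names are ours); coefficients for l > d are zeroed so that every patch produces a fixed-size (M+1)-dimensional basis regardless of its actual degree:

import torch
from math import comb

def truncated_bernstein(t, degree, max_degree):
    # t: tensor of parameter values in [0, 1]; returns a basis of shape (..., max_degree + 1).
    t = t.unsqueeze(-1)
    ls = torch.arange(max_degree + 1, device=t.device)
    coeff = torch.tensor([comb(degree, l) if l <= degree else 0.0
                          for l in range(max_degree + 1)],
                         dtype=t.dtype, device=t.device)
    exponents = (degree - ls).clamp(min=0)          # avoid negative powers where the coefficient is zero anyway
    return coeff * t ** ls * (1.0 - t) ** exponents

For a d_u-by-d_v patch, weighting the tensor product of truncated_bernstein(u, d_u, M_d) and truncated_bernstein(v, d_v, N_d) by the control points and their weights gives the rational evaluation written out in the next equation.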
Then, the reconstructed point coordinates for p_i for a degree m-by-n patch k are: (x_i', y_i', z_i')^T = [∑_m_i=0^M_d∑_n_i=0^N_d(B_M_d)_m^m_i(u)(B_N_d)_n^n_i(v)𝐜_m_in_i(c_w)_m_in_iw_ik] / [∑_m_i,n_i(B_M_d)_m^m_i(u)(B_N_d)_n^n_i(v)(c_w)_m_in_iw_ik], where 𝐜_m_in_i denotes the control point coordinates and (c_w)_m_in_i denotes its weight, and w_ik is the element of 𝐖. If we also input the normal (n_x_i, n_y_i, n_z_i) for point p_i, we can also reconstruct the normal (n_x_i', n_y_i', n_z_i') by: (n_x_i', n_y_i', n_z_i')^T = (∂ x_i'/∂ u, ∂ y_i'/∂ u, ∂ z_i'/∂ u)^T × (∂ x_i'/∂ v, ∂ y_i'/∂ v, ∂ z_i'/∂ v)^T, where × denotes the cross product. 𝐩_i denotes the input point coordinates. 𝐩_i^* denotes the reconstructed point coordinates. 𝐧_p_i denotes the input point normals. 𝐧_p_i^* denotes the reconstructed normals. The coordinate loss is defined as: L_coord = 1/N∑_i=0^N-1‖𝐩_i - 𝐩_i^*‖_2^2. If we also input the normals, the normal loss is defined as: L_norm = 1/N∑_i=0^N-1(1 - |𝐧_p_i^T𝐧_p_i^*|). The loss for the reconstruction module is defined as: L_recon = L_coord without normals, and L_recon = L_coord + L_norm with normals. §.§.§ Total Loss The total loss is defined as the sum of decomposition, fitting, embedding, and reconstruction losses: L = L_dec + L_fit + L_emb + L_recon. We do not use different weights for each loss item because all point clouds are normalized into a unit sphere. Moreover, the uv parameters are output directly by a sigmoid layer, and the control points are output directly by a tanh layer. Thus, each loss item is almost at the same scale, so we do not need different weights for each loss item. Furthermore, we use different learning rates for different modules to balance the training. Specific training details are listed in section <ref>. § EXPERIMENTS §.§ Dataset Pre-Processing We evaluate our approach on the ABC dataset <cit.>. However, the ABC dataset does not have the annotations to learn Bézier decomposition on point clouds. Therefore, we do a pre-processing step. Specifically, we utilize the CGAL library <cit.> and OpenCascade library <cit.> to perform Bézier decomposition on STEP files directly and perform random sampling on the surface to obtain the following labels: point coordinates, point normals, point uv parameters, surface patch indices of the corresponding points, surface patch degrees, and surface patch control points. Finally, we use 5,200 CAD models for training and 1,300 CAD models for testing. Each CAD model contains randomly sampled 8,192 points (non-uniform) with annotations. §.§ Training Details We train a multi-task learning model. The learning rates differ depending on the MLP branch. The learning rate for the backbone, soft membership, and uv parameters is set to 10^-3, while the learning rate for the degree probabilities and control points is set to 10^-4. As we have several learning tasks that are not independent, we set a lower learning rate for loss items, such as the degree probabilities, which converge faster. We set γ as 3.0 for the focal loss, and δ_pull as 0 and δ_push as 2.0 for the embedding losses. We employ ADAM to train our network. The model is then trained for 150 epochs. §.§ Comparisons We compare our algorithm with SPFN, ParSeNet, and HPNet <cit.>. We use both points and normals for training all the algorithms. Since SPFN only supports four types of canonical primitives (plane, sphere, cone, and cylinder), we consider points belonging to other primitives falling out of the supported canonical primitive types as the “unknown” type.
To make fair comparisons, we modify SPFN to let the network take point coordinates and normals as input for training. For ParSeNet, we only train the segmentation module on the ABC dataset. We use their pre-trained fitting model (SplineNet) directly. For HPNet, we also use the pre-trained fitting model directly, which is the same as ParSeNet. We observed that the output of HPNet is very sensitive to the number of points. In order to use HPNet at its best, we down-sample the point clouds to 7k points for training and testing. We choose the following evaluation metrics: * Primitive Type Accuracy (“Acc”): 1/K∑_k=0^K-1𝕀(t_k==t̂_k), where t_k and t̂_k are predicted primitive type and ground truth type, respectively. This is used to measure the type accuracy. Note that our primitive types differ from other baselines. * Rand Index (“RI”): (a+b)/c, where c = N(N-1)/2 denotes the total number of possible point pairs, a denotes the number of pairs of points that are in the same primitive in both the prediction and the ground truth, and b denotes the number of pairs of points that are in different primitives in both the prediction and the ground truth. Rand index is a similarity measurement between two instances of data clustering, and a higher value means better performance <cit.>. * Normal Error (“Err”): 1/N∑_i=0^N-1arccos( |𝐧_p_i^T𝐧_p_i^*|), where 𝐧_p_i and 𝐧_p_i^* are ground truth and predicted unit normals, respectively. * Inference Time (“Time”): The inference time on the whole test dataset. * Average Primitive Number (“Num”): The predicted average number of primitives on the whole test dataset. We record these evaluation metrics in Tables <ref> and <ref>. Figure <ref> shows visual depictions of the results. Our results show the best performance regarding primitive type accuracy, normal fitting error, and inference time. Our method is much faster for inference because it uses a general formula for different primitive types, and the embedding module is free of iterations. Other methods treat primitives with different equations, and ParSeNet and HPNet need a mean-shift step. Even though our approach may lead to more segmented primitives by the nature of Bézier decomposition, the evaluation metrics of primitive type accuracy and normal fitting error are computed in a point-wise manner. Thus, over-segmentation and under-segmentation will not lead to smaller or bigger errors due to fewer or more segmented primitives. We also show the performance of all the methods without normals as input. For our method and SPFN, we only input point coordinates into the neural networks but use normals as supervision. Since ParSeNet does not regress normals, we cannot use normals as supervision. We train ParSeNet without normals as input to test its performance. HPNet uses the network to regress the normals from the input and also utilizes the ground truth normals to construct an affinity matrix as a post-processing step for clustering. We modify HPNet to let the affinity matrix be constructed from the regressed normals instead of the ground-truth normals. Table <ref> records the evaluation metrics of each method. From the experiments, we deduce that normals are important for the task of parametric primitive segmentation. §.§ Ablation Studies We first conduct experiments to verify the usefulness of the soft voting regularizer. The soft voting regularizer favors point primitive type consistency for each primitive instance, i.e., points assigned to the same primitive instance should have the same primitive type.
From our experiment, we find that the soft voting regularizer not only improves the primitive type accuracy but also accelerates training relaxed IOU. Please refer to figure <ref> and the last two rows of table <ref>. We also verify the functionalities of each module. If we only use the decomposition module, the result is not good even though the “Acc” and “RI” are slightly higher because the decomposition module ignores the fitting, limiting the segmentation applicable to specific datasets. The reconstruction module reduces the “Err” significantly compared to the fitting module because the reconstruction module controls how “well-fitted” a predicted Bézier primitive is to the input point clouds. In contrast, the fitting module only regresses the control points and uv parameters. The embedding module is designed to eliminate small patches that contain few points, seeing the “Num” column. Therefore, experimenting with the embedding module results in fewer patch numbers than its counterpart. To conclude, training with all the modules yields the best results. §.§ Stress Tests To test whether our algorithm can work in real-world scenarios, we show more results from the real-scan data from the Aim@Shape dataset <cit.>. The sampling is non-uniform, with missing data and measurement noise compared to the ABC dataset. Besides, We cannot train the network on those data directly because they lack ground-truth labels. Instead, we use the models trained on the ABC dataset and test the performance on real-scan data. Our algorithm still works, while other methods are sensitive. Another positive aspect is that our algorithm could decompose the axis-symmetric free-form point clouds with much smoother boundaries of different patches. Please refer to figure <ref>. We also test the performance of our network by adding Gaussian white noise. Specifically, we apply different scales of Gaussian white noise to the point coordinates after normalizing them into a unit sphere. The noise scale denotes the standard deviation of the Gaussian white noise. It ranges from 0.01 to 0.05. We train our network on noise-free data but test the network with Gaussian white noise. Please refer to table <ref>. §.§ Applications We can reconstruct the full Bézier model from the Bézier primitive segmentation. We do not follow ParSeNet to pre-train a model that outputs a fixed control point size. Instead, we reuse the rational Bézier patch to refit the canonical Bézier patch. We treat the degrees of the canonical Bézier patch the same as the rational Bézier patch. As a result, we fetch the segmentation and degrees of each patch predicted from the network. Then, we use the parameterization <cit.> to recompute uv parameters and least squares to refit control points for each patch. Each patch is expanded by enlarging the uv domain to guarantee intersections with its adjacent patches. After that, we use the CGAL co-refinement package <cit.> to detect intersecting polylines for adjacent tessellated patches and trim the tessellated patch with the intersected polylines. Our reconstructed full Bézier model can preserve the sharp features, while the boundaries of ParSeNet for different primitives are jaggy and thus fail to preserve the sharp features. Please refer to figure <ref>. § CONCLUSION This paper presents an end-to-end method to group points by learning Bézier decomposition. In contrast to approaches treating different geometric primitives separately, our method uses a general formulation for different primitive types. 
Regarding limitations, Bézier decomposition may naturally generate overly complex segmentations. In addition, we choose the rational Bézier patch as the primitive type. As the formulation is not linear, fitting the parametric patch is not direct. In future work, we wish to use the neural network to directly regress the canonical Bézier patch. § ACKNOWLEDGEMENTS This research is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 860843. The work of Pierre Alliez is also supported by the French government, through the 3IA Côte d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002.
http://arxiv.org/abs/2307.04470v1
20230710104044
Test-Time Adaptation for Nighttime Color-Thermal Semantic Segmentation
[ "Yexin Liu", "Weiming Zhang", "Guoyang Zhao", "Jinjing Zhu", "Athanasios Vasilakos", "Lin Wang" ]
cs.CV
[ "cs.CV" ]
Test-Time Adaptation for Nighttime Color-Thermal Semantic Segmentation
Yexin Liu, Weiming Zhang, Guoyang Zhao, Jinjing Zhu, Athanasios Vasilakos, and Lin Wang^†
Manuscript received April 19, 2023. ^† corresponding author. Y. Liu, W. Zhang, and Jinjing Zhu are with the Artificial Intelligence Thrust, HKUST(GZ), Guangzhou, China. E-mail: [email protected], [email protected], and [email protected]. G. Zhao is with the Robotics and Autonomous Systems Thrust, HKUST(GZ), Guangzhou, China. E-mail: [email protected]. Athanasios V. Vasilakos is with the Center for AI Research (CAIR), University of Agder (UiA), Grimstad, Norway. E-mail: [email protected]. L. Wang is with the Artificial Intelligence Thrust, HKUST(GZ), Guangzhou, and Dept. of Computer Science and Engineering, HKUST, Hong Kong SAR, China. E-mail: [email protected].
The ability to understand scenes in adverse visual conditions, e.g., nighttime, has sparked active research on color-thermal semantic segmentation. However, it is hampered by two critical problems: 1) the day-night gap of color images is larger than that of thermal images, and 2) the class-wise performance of color images at night is not consistently higher or lower than that of thermal images. We propose the first test-time adaptation (TTA) framework, dubbed Night-TTA, to address these problems for nighttime color-thermal semantic segmentation without access to the source (daytime) data during adaptation. Our method has three key technical components. First, as one modality (i.e., color) suffers from a larger domain gap than the other (i.e., thermal), Imaging Heterogeneity Refinement (IHR) employs an interaction branch, in addition to the color and thermal branches, to prevent cross-modal discrepancy and performance degradation. Second, Class Aware Refinement (CAR) is introduced to obtain reliable ensemble logits based on pixel-level distribution aggregation of the three branches. In addition, we design a specific learning scheme for our TTA framework, which enables the ensemble logits and the three student logits to collaboratively learn to improve the quality of predictions during the testing phase of Night-TTA. Extensive experiments show that our method achieves state-of-the-art (SoTA) performance with a 13.07% boost in mIoU.
Index Terms: Night-time segmentation, TTA, Cross-modal learning.
Night-time segmentation is a critical task for autonomous driving under challenging visual conditions. Existing methods mostly focus on daytime segmentation with perfect illumination.
This has inspired active research on color-thermal semantic segmentation as thermal cameras are less affected by illumination changes and can complement color modality. However, thermal images suffer from a lack of large-scale labeled datasets, which are labor-intensive to obtain. TTA allows for the on-the-fly adaptation to different target domains at the testing phase while protecting data privacy. In light of this, we propose the first TTA framework that achieves SoTA nighttime color-thermal segmentation performance at the testing phase without relying on the source (daytime) data. This is practically valuable for real-world application scenarios. The proposed method presents a robust solution for all-day scene understanding, which may hopefully inspire more research in the community. § INTRODUCTION Recent years have witnessed the success of deep neural networks (DNNs) for color image semantic segmentation, which is crucial for the scene understanding, , autonomous driving <cit.>. However, models trained in favorable lighting conditions show poor generalization ability to the nighttime data. Thus, nighttime image semantic segmentation has become a challenging problem. Recently, increasing attention has been paid to thermal images because they are inherently robust to illumination changes and may complement semantic information to the color images (especially nighttime images). <cit.>. This has sparked research for supervised  <cit.> and unsupervised  <cit.> color-thermal semantic segmentation as both modalities can compensate for each other’s deficiencies. However, existing supervised methods necessitate well-label annotations, particularly for thermal images captured during nighttime, which poses significant labor-intensive challenges. Meanwhile, most unsupervised methods (, unsupervised domain adaptation (UDA)) entail the drawbacks of time-consuming offline domain adaptation training, and its performance is greatly affected by the domain gap, leading to limited adaptation in diverse testing environments. Therefore, it is non-trivial as only the nighttime color-thermal data is available under a limited overhead for adaption. This motivates us to explore a suitable adaptation strategy for nighttime color-thermal semantic segmentation. Test-Time Adaptation (TTA)  <cit.> presents a practical domain adaptation approach that enables the seamless adaptation of pre-trained models to the target domain in real-time during the testing phase. TTA is different from the UDA-based semantic segmentation setting <cit.>: TTA does not need to access source data during adaptation. Moreover, the TTA framework can achieve privacy protection while allowing for on-the-fly adaptation to different target domains during the testing phase without the need for offline domain adaptation training. This is practically valuable for real-world applications. However, directly extending existing TTA methods to color-thermal semantic segmentation leads to less optimal performance, as demonstrated in Tab. <ref> in the experiments. For example, entropy minimization of TENT <cit.> generates overconfident predictions. Therefore, applying it individually to color and thermal branches aggravates the color-thermal discrepancy. Motivation: In this paper, we, for the first time, explore a TTA framework for nighttime color-thermal semantic segmentation without access to the source (daytime) color-thermal data. 
Our work addresses two challenges for nighttime color-thermal semantic segmentation arising from the modality differences during TTA, as shown in Fig. <ref>. (1) Due to the different imaging mechanisms, the day-night domain gap, denoted as G_color, of color images is larger than that, denoted as G_T, of the thermal images (See Fig. <ref>(a)). This unbalanced difference between G_color and G_T leads to the considerable cross-modal discrepancy and performance degradation in the adaption process. We refer to this issue as imaging heterogeneity. (2) Existing color-thermal segmentation methods, , <cit.>, apply the same weights to all classes. However, we find that the class-wise performance at night (denoted as P_color) of color images is not consistently higher or lower than that of the thermal images (denoted as P_T). Therefore, these methods might neglect the discriminative features of the modalities with smaller weights during the color-thermal nighttime segmentation ensemble process. An example is shown in Fig. <ref>(b), where the performance P_T^person on the class `person' in the thermal image is larger than P_color^person of the color image. We refer to this as class-wise prediction heterogeneity. To address aforementioned challenges, we propose a novel nighttime TTA framework, called Night-TTA, which consists of three key technical components: (1) Imaging Heterogeneity Refinement (IHR) (Sec. <ref>) and (2) Class Aware Refinement (CAR) (Sec. <ref>) and (3) a learning scheme (Sec. <ref>), as shown in Fig. <ref>(c). For IHR, we propose an interaction branch to obtain the color-thermal cross-modal invariant feature to prevent the performance degradation in the adaptation process caused by the difference in the cross-modal domain gap (G_color>G_T). Specifically, we first take the color-thermal image pairs as input to the interaction branch and then use the two encoders to obtain the color and thermal features that need to be fused. However, directly fusing the color and thermal features induces inconsistent noises due to the private information in the two individual branches. Therefore, we introduce a novel cross-modal shared attention (CMSA) module to aggregate the cross-modal invariant features while suppressing the noisy ones between the two modalities. The CAR strategy employs an element-wise entropy-based fusion (EEF) module to generate reliable ensemble logits. This subtly avoids neglecting the discriminative feature information of each class in each branch. Specifically, we first evaluate Shannon entropy in the channel dimension of each student's logits. Then, we re-weight the students' logits to generate more reliable ensemble logits (, teacher) based on the pixel-level distribution of three students. By performing pixel-wise re-weight on the logits of the three branches, the performance advantages of different modalities in different classes can be utilized, and more reliable ensemble logits can be obtained. Lastly, we present a novel learning scheme to overcome the potential problematic segmentation results during TTA. By utilizing the reliable ensemble logits generated by the EEF module as a self-supervised signal, we enable three student networks to learn from each other through online distillation <cit.> during the adaptation process. This allows our Night-TTA model to fully utilize the discriminative information in each branch, thus preventing the ensemble logits from making false predictions among the categories. 
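For orientation, a minimal PyTorch-style sketch of how the three student branches could be organized is given below. Whether the interaction branch shares encoders with the individual branches, and the fusion by concatenation, are assumptions for exposition rather than the exact Night-TTA architecture (which builds on FEANet encoders and decoders, as described in the experiments); the CMSA block is sketched in the IHR subsection below.

```python
import torch
import torch.nn as nn

class NightTTASkeleton(nn.Module):
    """Illustrative three-branch layout: a color branch, a thermal branch, and
    an interaction branch whose paired features are rectified by a shared
    cross-modal attention block (CMSA) before decoding."""

    def __init__(self, enc_color, enc_thermal, dec_color, dec_thermal,
                 enc_int_color, enc_int_thermal, dec_int, cmsa):
        super().__init__()
        self.enc_color, self.dec_color = enc_color, dec_color
        self.enc_thermal, self.dec_thermal = enc_thermal, dec_thermal
        self.enc_int_color, self.enc_int_thermal = enc_int_color, enc_int_thermal
        self.dec_int, self.cmsa = dec_int, cmsa

    def forward(self, x_color, x_thermal):
        # Individual student branches.
        logits_color = self.dec_color(self.enc_color(x_color))
        logits_thermal = self.dec_thermal(self.enc_thermal(x_thermal))
        # Interaction branch: encode both modalities, rectify with shared
        # attention, fuse (concatenation assumed), and decode.
        f_c = self.enc_int_color(x_color)
        f_t = self.enc_int_thermal(x_thermal)
        f_c, f_t = self.cmsa(f_c, f_t)
        logits_int = self.dec_int(torch.cat([f_c, f_t], dim=1))
        return logits_color, logits_int, logits_thermal  # three "students"
```

The only structural requirement for what follows is that three logit maps of identical shape are produced for each color-thermal pair; the EEF module and the learning scheme operate on these.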
Contribution: In summary, our major contributions are four-fold: (I) We make the first attempt and propose a novel TTA framework for color-thermal semantic segmentation. (II) We propose an IHR strategy with the CMSA module, to reduce the imaging heterogeneity during TTA. We also propose the CAR strategy to take advantage of the segmentation performance of different modalities in different classes and then generate reliable ensemble logits. (III) For cross-modal ensemble distillation of our Night-TTA framework, we propose a novel learning scheme to achieve cross-modal ensemble distillation in the testing phase. (IV) Extensive experiments demonstrate that our method significantly surpasses the baselines and prior methods (at least 3.11% mIoU improvement on the MF-1 dataset, and 2.69% mIoU improvement on the KP dataset). § RELATED WORK Color-Thermal Image Semantic Segmentation. Color-thermal segmentation methods can be divided into two main categories: supervised methods and unsupervised methods. The former includes the fusion of multi-modalities using multiple encoders with a shared decoder <cit.> and the translation between the RGB and thermal images <cit.>. MFNet <cit.> extracts features from the color and thermal images using two encoders and expands the receptive field by using the 'mini-inception' module. ABMDRNet <cit.> solves the problems of multimodal disparity and multi-scale contextual information fusion by using a bridging-then-fuse strategy to obtain more discriminative cross-modal information. UDA-based methods, , HeatNet <cit.>, propose a teacher-student learning method <cit.> to transfer the knowledge from the daytime color image domain to the nighttime thermal image domain to avoid expensive nighttime image annotation. MS-UDA <cit.> enhances the performance of thermal segmentation by transferring knowledge from color to thermal modality. By contrast, we propose the first color-thermal TTA framework that consists of triple student networks for nighttime image semantic segmentation without access to the source domain (daytime) data. Moreover, our TTA framework not only considers the difficulty of the domain gap faced by UDA but also proposes and solves the two novel problems based on the differences between modalities. Test-Time Adaptation (TTA). TTA methods enable the model to adapt quickly to the target domain, which does not require access to source domain data.<cit.>. TTA has been applied to unimodal<cit.> and cross-modal<cit.> segmentation tasks. For the former task, the typical model Tent<cit.> presents an entropy minimization strategy to optimize affine parameters during testing. For the Cross-modal segmentation task, xMUDA<cit.> allows the 2D and 3D modalities to learn from each other via imitation, disentangled from the segmentation objective to prevent false predictions. MM-TTA<cit.> proposes two complementary modules to obtain and select more reliable pseudo-labels (from 2D and 3D modalities) as self-learning signals during TTA. However, directly using previous TTA methods for color-thermal semantic segmentation leads to less optimal performance. Therefore, we propose the IHR and CAR strategies to make our color-thermal TTA framework more robust and generalized, with a unique learning scheme that can perform better in both the training and testing phases. Ensemble distillation. 
Compared with the standard knowledge distillation (KD) paradigm<cit.>, online KD (ensemble distillation)<cit.> enables efficient and single-stage training via collaborative learning among the student networks. Collaborative learning relies on two main ways: students learn from each other <cit.> or generate ensemble logits to supervise their learning<cit.>. The former methods facilitate peers' mutual learning by sharing knowledge among the student networks. For example, CLNN<cit.> allows multiple classifier heads to share intermediate-level representation for collaborative learning to reduce generalization errors. The latter methods focus on generating ensemble logits that update each student's network based on the contributions shared by the students. In particular, <cit.> select the logits based on the cross-entropy loss of each student with the true label. However, we cannot access the labels during test time. Therefore, we propose the CAR strategy to generate reliable ensemble logits, which considers the different class-wise performance between the two modalities. § METHOD Overview. In multi-modal TTA for color-thermal image semantic segmentation, we consider a source domain dataset, where each sample consists of daytime paired color images (x_s^color∈ℝ^H × W × 3), thermal images (x_s^T∈ℝ^H × W × 1), and corresponding segmentation ground truth (GT). A source model is trained on the labeled source domain dataset. Usually, the source model consists of a color encoder E_color, a thermal encoder E_T, and the decoder D utilized to generate pixel-level semantic labels. The source model can be denoted as f_θ=D(E_color(x_s^color), E_T(x_s^T)). Typically, the performance of the source model f_θ is unsatisfactory when confronted with new test data characterized by a different distribution from the source samples. The primary objective of TTA is to enhance the prediction performance in the target domain by conducting model adaptation solely on unlabeled target data. Specifically, given a target dataset t, which comprises nighttime paired color images (x_t^color) and thermal images (x_t^T). The model is updated using *min_θ̃ℒ(𝐱;θ),𝐱∼ t , where θ̃⊆θ represent the model parameters that should be updated (, batch normalization layer), ℒ denotes self-supervised loss functions. Prior research works on TTA have employed the entropy minimization for single-modality (, color image) semantic segmentation <cit.> or utilized consistency loss and pseudo-labels for cross-modal (, 2D-3D) segmentation <cit.>. However, as discussed above, applying existing TTA methods directly to color-thermal semantic segmentation poses challenges due to two main factors: imaging heterogeneity and class-wise prediction heterogeneity. To this end, we propose a novel TTA framework for nighttime color-thermal image semantic segmentation. Specifically, as depicted in Fig. <ref>, the proposed TTA framework consists of color, thermal, and interaction branches, representing three separate student networks. color, thermal, and interaction branches take the x_t^color, x_t^T, and both as the input, respectively. There are two novel technical components: IHR (Sec. <ref>) and CAR (Sec. <ref>). To solve the problems caused by imaging heterogeneity, the IHR employs an interaction branch with a novel cross-modal shared attention (CMSA) module to generate reliable pseudo labels. The CMSA module is introduced before the decoder to aggregate the complementary features and suppress the noisy features of the color and thermal modalities. 
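One plausible PyTorch-style form of such a shared-attention rectification is sketched below, ahead of the formal definition in the next subsection; the global pooling, MLP heads, averaging as the aggregation, and the 7×7 convolution are assumptions rather than the exact CMSA design.

```python
import torch
import torch.nn as nn

class CMSA(nn.Module):
    """Sketch of cross-modal shared attention: each modality is embedded into a
    channel-attention vector, the two vectors are aggregated into a single
    shared vector, and likewise a single shared spatial map is computed; both
    residually rectify the color and thermal features."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        def channel_head():
            return nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.chan_color, self.chan_thermal = channel_head(), channel_head()
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, f_color, f_thermal):
        b, c, _, _ = f_color.shape
        # Shared channel vector: aggregate (here: average) the per-modality vectors.
        v_c = self.chan_color(f_color.mean(dim=(2, 3)))
        v_t = self.chan_thermal(f_thermal.mean(dim=(2, 3)))
        v_shared_c = (0.5 * (v_c + v_t)).view(b, c, 1, 1)
        f_color_c = v_shared_c * f_color + f_color
        f_thermal_c = v_shared_c * f_thermal + f_thermal
        # Shared spatial map computed from the channel-rectified features.
        v_shared_s = self.spatial_conv(torch.cat([f_color_c, f_thermal_c], dim=1))
        f_color_s = v_shared_s * f_color_c + f_color_c
        f_thermal_s = v_shared_s * f_thermal_c + f_thermal_c
        return f_color_s, f_thermal_s
```

The key point is that a single channel vector and a single spatial map, computed jointly from both modalities, residually rectify the color and thermal features, rather than each modality attending only to itself.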
To solve the problems caused by class-wise prediction heterogeneity, the CAR is buttressed by an element-wise entropy-based fusion (EEF) module to generate the ensemble logits by aggregating the reliable logits from three branches. We also propose a specific learning scheme that enables the three student networks to collaboratively learn to improve the quality of predictions during adaptation. §.§ Imaging Heterogeneity Refinement (IHR) The straightforward fusion of the color and thermal branches leads to a noticeable degradation in the segmentation performance due to the significant domain gap between the two modalities, as evidenced by the results presented in Tab. <ref>. To address this challenge, we propose the integration of an interaction branch to facilitate the extraction of cross-modal invariant features, which are crucial for generating reliable pseudo labels. Specifically, color images provide abundant textual information that is valuable for segmentation tasks, particularly in well-illuminated daytime scenarios. However, their performance suffers greatly when confronted with adverse lighting conditions. On the contrary, thermal images exhibit robustness to illumination changes but exhibit limitations such as lower resolution and ambiguous object boundaries. Therefore, a direct fusion of color and thermal features may introduce inconsistencies caused by the individual characteristics of each modality, undermining segmentation accuracy. To mitigate these issues, the introduction of the interaction branch aims to exploit the complementary nature of color and thermal modalities. This branch facilitates the extraction of cross-modal invariant features that are resilient to domain gaps, enabling the generation of more reliable pseudo labels. By integrating these cross-modal invariant features with the individual modalities, we can effectively capture both shared and unique information, leading to improved segmentation performance in color-thermal images. This may cause generating unreliable pseudo labels. For this reason, we design the CMSA module (see Fig. <ref>) to rectify the noisy features and extract the cross-modal invariant features. For the CMSA, we first embed both color (F_color∈ℝ^H × W × C) and thermal (F_T∈ℝ^H × W × C) features into two individual channel (C) attention vectors (V_color^C∈ℝ^C) and (V_T^C∈ℝ^C). Unlike <cit.>, rectifying features by utilizing the individual vectors, we generate the shared channel attention vectors ( V_shared^C∈ℝ^C) by aggregating the vectors from the color-thermal features to maintain the shared features while suppressing the noisy features. The channel-wise feature rectification can be described as: F^C_color =V_shared^C ⊙ F_color +F_color, F^C_T =V_shared^C ⊙ F_T + F_T. Similar to the channel-wise rectification, a shared spatial (S) attention vector (V_shared^S∈ℝ^H × W) is embedded to calibrate the local information, which is formulated as follows: F^S_color =V_shared^S ⊙ F^C_color + F^C_color, F^S_T =V_shared^S ⊙ F^C_T + F^C_T. F^S_color and F^S_T are the rectified features after the CMSA module, which will be aggregated to the decoder of the interaction branch. Once obtained the logits in each branch, pseudo-labels are provided for the CAR. §.§ Class Aware Refinement (CAR) To generate ensemble logits, previous method, , <cit.> usually assigns an image-level weight to each branch by measuring the consistency between the cross-modal branches. 
This may encounter class performance imbalance problems for color-thermal segmentation due to the class-wise prediction heterogeneity in cross-modalities. Take the cross-modal branches as an example (See Fig. <ref>). We assume that the weights calculated by the existing method for the color and thermal branch are 0.7 and 0.3, respectively. When generating the ensemble logits, all classes in the color branch are assigned a weight of 0.7, while those of the thermal branch are assigned 0.3. This leads to poor segmentation performance for some classes that were originally better segmented in the thermal branch (, person). To alleviate this problem, we propose the EEF module to refine the ensemble logits, as shown in Fig. <ref>. §.§.§ Element-wise Entropy-Based Fusion (EEF) The EEF module uses the outputs of three branches as the input, which are denoted as ỹ_1^M, ỹ_2^M, and ỹ_3^M (ỹ_1^M, ỹ_2^M, ỹ_3^M ∈ℝ^H × W × C) respectively, where M ∈{s, t}and C denotes the number of channels. To assign the weight W_i for branch i, specifically, the softmax is firstly computed along the channel dimension. Then, we calculate the Shannon entropy (H(ỹ_i^M) ∈ℝ^H × W × 1) of the logits ỹ_i^M. For each pixel (i, j) ∈ H × W, we can obtain a vector v_i,j∈ 1 × C consisting of the elements of logits at position (i, j) for all channels. Then, we calculate the Shannon entropy H(v_i,j^C) of the vector v_i,j: H(v_i,j^C)=∑_C=1^Nsoftmax(v_i,j^C) · log softmax(v_i,j^C), where v_i,j^C denotes the value of vector v_i,j in channel C. H(ỹ_i^M) is composed of the Shannon entropy (SE) of all vectors v_i,j. Assume that the true label at position (i, j) is k. When the value on the k-th channel becomes larger, the value on other channels diminishes. Then the cross entropy (CE) loss with the label decreases, which means the segmentation performance becomes better. The ideal probability distribution is that the prediction on the k-th channel is close to 1, while the prediction on the other channels is close to 0. In this situation, Shannon entropy will be kept to a relatively small extent. An effective way to generate teacher logits is to re-weight the student's logits based on the element-wise Shannon entropy. For each element in the teacher's logits, the smaller the Shannon entropy in the channel dimension, the greater the weight of the branch. We define the teacher's logits as the combination of all students' weighted logits. The pixel-wise weights W_i of branch i are calculated as: W_i=e^(1-H(ỹ_i^M))/temp/∑_i=1^3 e^(1-H(ỹ_i^M))/temp, where W_i ∈ℝ^H × W × 1, temp denotes the temperature. Finally, the teacher's logits are as follows: ỹ^EN=∑_i=1^3 W_i*ỹ_i^M. §.§ Learning Scheme For TTA, we denote the updated parameters of the Batch normalization layer of color, interaction, and thermal branch as γ^color, γ^Int, and γ^T, respectively. Given paired color-thermal images, there are i classes in the image. The predictions of different branches can be denoted as P_color={P_color^1,P_color^2,..., P_color^i}, P_Int={P_Int^1,P_Int^2,..., P_Int^i}, and P_T={P_T^1,P_T^2,..., P_T^i}. During TTA, the class-wise segmentation performance of one branch is not consistently higher or lower than the other branches. For some classes, one branch can achieve the best segmentation performance while the other branch could achieve the best performance in other classes. Without loss of generality, we consider the case of three classes where the color, interaction, and thermal branch achieves the best performance on class 1, 2, and 3, respectively. 
The ensemble logits of traditional methods are calculated as P_EN = (P_color + P_Int + P_T)/3. Then, the consistency loss ℒ_KL^tta, which distills knowledge from the ensemble logits to the student logits, is used to train the three branches. During TTA, the parameters of the batch normalization layers γ are updated by: γ_t^color = γ_t-1^color - β·∇_γ ℒ_KL^tta(color,EN), γ_t^Int = γ_t-1^Int - β·∇_γ ℒ_KL^tta(Int,EN), γ_t^T = γ_t-1^T - β·∇_γ ℒ_KL^tta(T,EN). Based on our assumptions, for class 1, the entropy of the color branch is smaller than that of the ensemble logits (SE(P_color) < SE(P_EN)), whereas the entropies of the interaction and thermal branches are larger than that of the ensemble logits (SE(P_Int) > SE(P_EN) and SE(P_T) > SE(P_EN)). Therefore, although the interaction and thermal branches will improve their segmentation performance, the color branch will degrade after optimization. The other two classes behave similarly. To mitigate the issues mentioned above, we propose the EEF module and a learning scheme (see Fig. <ref>). During TTA, we consider the teacher logits as the self-training signal to update the model. We define the KL loss as ℒ_KL^tta(i,EN) = KL(ỹ_i^s, ỹ^EN) to ensure collaborative learning of the three students. Moreover, to boost the performance of all three student networks, we introduce the Shannon entropy losses ℒ_i^tta = SE(ỹ_i^t) and ℒ_EN^tta = SE(ỹ^EN). For each student network i, the final learning objective is: ℒ^tta = ∑_i=1^3 ℒ_i^tta + λ_1 ℒ_EN^tta + λ_2 ∑_i=1^3 ℒ_KL^tta(i,EN), where λ_1 and λ_2 are hyperparameters.
Dynamic Weighting of Each Branch. Existing methods for multi-modal test-time adaptation typically assign the same weight to all branches. However, for color-thermal segmentation, the day-night domain gap of color images is more significant than that of thermal images. Consequently, using identical weights for all branches can lead to instability during adaptation. To address this issue, we propose a dynamic weighting scheme for the branches, which only affects the loss function and incurs no additional computational overhead for model adaptation. Specifically, we introduce a weight ω_i for each branch according to its adaptation extent. Measuring the extent of adaptation typically relies on labeled samples, which is a challenge in our setting where training data is unavailable and the test samples remain unlabeled; quantifying the extent of adaptation is therefore non-trivial. Instead, we leverage the ensemble logits to estimate it. In particular, we first compute the symmetric KL distance between the student logits of each branch and the ensemble logits within a batch: D_i = 1/B ∑_b=1^B 1/2 (KL(ỹ^EN||ỹ_i^M) + KL(ỹ_i^M||ỹ^EN)). Then we calculate the weight of each branch as ω_i = D_i/min{D_1, D_2, D_3}, and the final objective becomes: ℒ^tta = ∑_i=1^3 ω_i ℒ_i^tta + λ_1 ℒ_EN^tta + λ_2 ∑_i=1^3 ω_i ℒ_KL^tta(i,EN), with the same hyperparameters λ_1 and λ_2 as above. With the EEF module, we can generate ensemble logits with small entropy at the pixel level, so that for each class i we have SE(P_EN) < SE(P_color), SE(P_EN) < SE(P_Int), and SE(P_EN) < SE(P_T); that is, we obtain better ensemble logits to train the three branches. Adaptation with our learning scheme therefore continuously improves the segmentation performance of the three student branches through ensemble distillation, gradually yielding more accurate segmentation results.
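For concreteness, a minimal PyTorch-style sketch of one test-time update is given below, combining the element-wise entropy weighting of the EEF module, the entropy and KL terms of the objective, the dynamic branch weights ω_i, and the restriction of optimization to batch-normalization affine parameters. Detaching the teacher, the ε terms, and the helper names are assumptions rather than the authors' released code.

```python
import torch
import torch.nn.functional as F

def entropy_map(logits, eps=1e-8):
    """Per-pixel Shannon entropy over classes: (B, C, H, W) -> (B, 1, H, W)."""
    p = logits.softmax(dim=1)
    return -(p * (p + eps).log()).sum(dim=1, keepdim=True)

def eef_ensemble(student_logits, temp=2.0):
    """Element-wise entropy-based fusion: per-pixel weights favor the lower-entropy student."""
    ents = torch.stack([entropy_map(l) for l in student_logits])   # (3, B, 1, H, W)
    weights = torch.softmax((1.0 - ents) / temp, dim=0)            # softmax over the three branches
    return sum(w * l for w, l in zip(weights, student_logits))

def dynamic_branch_weights(student_logits, teacher_logits, eps=1e-8):
    """Symmetric KL distance of each student to the teacher, normalized by the minimum (omega_i)."""
    t = teacher_logits.softmax(dim=1)
    dists = []
    for l in student_logits:
        s = l.softmax(dim=1)
        d = 0.5 * (F.kl_div((s + eps).log(), t, reduction="batchmean")
                   + F.kl_div((t + eps).log(), s, reduction="batchmean"))
        dists.append(d)
    d = torch.stack(dists)
    return d / d.min()

def night_tta_loss(student_logits, lam1=1.0, lam2=1.0, temp=2.0):
    """Entropy minimization on weighted students and teacher, plus KL distillation to the teacher."""
    teacher = eef_ensemble(student_logits, temp)
    t = teacher.softmax(dim=1).detach()  # teacher treated as a fixed self-training signal (assumption)
    omega = dynamic_branch_weights(student_logits, teacher.detach()).detach()
    loss = lam1 * entropy_map(teacher).mean()
    for w, l in zip(omega, student_logits):
        loss = loss + w * entropy_map(l).mean()
        loss = loss + lam2 * w * F.kl_div(l.log_softmax(dim=1), t, reduction="batchmean")
    return loss

def bn_affine_parameters(model):
    """Collect only BatchNorm affine parameters, the subset updated during TTA."""
    return [p for m in model.modules() if isinstance(m, torch.nn.BatchNorm2d)
            for p in (m.weight, m.bias) if p is not None]
```

A driver loop would forward each nighttime color-thermal batch through the three branches, evaluate night_tta_loss, and step an optimizer built over bn_affine_parameters of all branches (the implementation details below report a learning rate of 1e-5 and a single adaptation epoch).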
§ EXPERIMENTS §.§ Datasets MF dataset. It contains 1569 images (784 for training, 392 for validation, and 393 for test) in which 820 daytime and 749 nighttime images are mixed in training, validation, and test sets. The resolution of images is 480×640 with annotated semantic labels for 8 classes. To evaluate our method, we just drop out the nighttime color-thermal image pairs in the original training and validation sets and drop out the daytime color-thermal image pairs in the original test sets to form a new dataset (410 for training, 205 for validation, and 188 for test), which is denoted as MF-1. For UDA methods, under our investigation, there only exist two UDA methods (HeatNet and MS-UDA) for nighttime image semantic segmentation leveraging color and thermal images. Thus, we compare the segmentation performance with these two methods. For a fair comparison, we use the same training and testing set with MS-UDA: We reorganize the daytime and nighttime images in the MF dataset as training and testing sets (820 daytime images for training and 749 nighttime images for testing ), which is denoted as MF-2. Three categories of labels overlapping the KP dataset (, car, person, and bike) are used for evaluation. The modified KP dataset. The KAIST Multispectral Pedestrian Detection (KP) dataset  <cit.> is a color-thermal paired urban driving dataset without semantic segmentation labels. Kim <cit.> create a modified KP dataset with manually annotated 503 daytime and 447 nighttime color-thermal image pairs and the pixel-level labels of 19 classes consistent with Cityscapes  <cit.>. The resolution of color-thermal image pairs is 512 × 640 × 3 and 512 × 640 × 1, respectively. §.§ Implementation Details The proposed method is implemented using PyTorch libraries with a single A6000 GPU. Source model. As the first TTA framework for nighttime color-thermal semantic segmentation, our approach adopts a three-branch network structure. Each branch utilizes an untrained encoder and decoder from FEANet <cit.> (which after the training step already reaches good performance based on a supervised manner) to obtain the logits. We utilize the encoder and decoder from FEANet as the source model without changing the network architecture. Pre-training the source model. In our experiment setting, we want to use daytime data for training and nighttime data for testing. However, the source model from FEANet was trained and tested on day-night mixed dataset which is a different dataset splitting scheme from ours. Therefore, we pre-train the source encoder and source decoder with the source domain dataset. For a fair comparison, We follow the training details of FEANet apart from using the original dataset. Test-time Adaptation Details. We apply the source model that only uses daytime data as training to each branch and use unlabeled nighttime paired data as input for test time adaptation. Similar to previous TTA methods <cit.>, we only optimize the batch norm affine parameters for one epoch. The learning rate for three sub-networks is set to 1e^-5. The temperature is set to 2. §.§ Comparative Studies We evaluate the proposed framework against state-of-the-art TTA methods on the MF-1, MF-2, and modified KP datasets. MF-1 dataset. We compare our TTA framework with uni-modal and multi-modal TTA frameworks on MF-1 dataset. The quantitative and qualitative results are shown in Tab. <ref> and Fig. <ref>. 
The proposed Night-TTA could bring a significant adaptation effect on nighttime color-thermal image semantic segmentation compared to the source model (increases 13.07 % mIoU). Specifically, in Tab. <ref>, we conduct a comparison of the segmentation performance among different TTA frameworks across three categories: Car, Person, and Bike. Based on the analysis of the experimental data, our TTA framework exhibits a notable improvement in the segmentation performance for all three categories. Moreover, our Night-TTA achieves a substantial performance advantage over both uni-modal TTA methods, with an improvement of over 17.34% in mIoU. It should be noted that directly applying the uni-model TTA methods would degrade the segmentation performance. Our method also surpasses multi-modal TTA methods with an improvement of over 3.01% in mIoU. MF-2 dataset. We also compare our method with existing UDA methods. The results are shown in Tab. <ref>. In the MF-2 dataset setting, where training is conducted on daytime data and testing on nighttime data, our Night-TTA approach showcases remarkable performance superiority over UDA methods, specifically achieving a significant 6.05% improvement in comparison to MS-UDA. These results highlight the efficacy and professionalism of our Night-TTA framework in addressing the challenges of domain adaptation in the context of semantic segmentation for nighttime scenarios. The modified KP dataset. Tab. <ref> and Fig. <ref> show the quantitative and qualitative results. We can conclude that the proposed Night-TTA performs better than existing nighttime color-thermal image semantic segmentation methods. Specifically, our Night-TTA framework achieves the best segmentation performance in most categories. In addition, our proposed learning scheme for the TTA framework improves the segmentation performance of the source model (from 36.35 % mIou to 47.77 % mIou) more significantly than other TTA methods (The highest increase to 45.08% mIou). §.§ Ablation Studies and Analysis 1) Imaging Heterogeneity Refinement 1) Interaction Branch. We validate the effectiveness of the proposed interaction branch on the MF-1 dataset. The results are shown in Tab. <ref>. During the assessment of single-modal nighttime semantic segmentation, our findings indicate that thermal imaging exhibits superior performance compared to color imaging, highlighting its heightened robustness and reliability in low-light environments. Compared with single-modal nighttime image semantic segmentation, multi-modal (color-thermal) achieves better performance. Besides, the dual path (without the interaction branch) worsens the segmentation performance (from 49.71% mIou to 32.16 % mIou when using EEF), demonstrating the interaction branch's effectiveness. 2) CMSA. We conduct additional experiments to validate the efficacy of the CMSA module, comparing its performance in an interaction-only network and a complete network. The results, presented in Tab. <ref>, demonstrate the significant improvements achieved by the CMSA module in both the interaction-only network (from 35.82% mIoU to 41.26% mIoU) and the triple branches networks (from 49.71% mIoU to 52.06%). 2) Class Aware Refinement 1) EEF module. We compare EEF module against different methods of generating the ensemble logits (as shown in Tab. <ref>). The 'Merge' approach represents taking the mean of the logits from the three branches, while 'IE' refers to methods based on image-level entropy(<cit.>). 
The results demonstrate that our EEF module performs better than the other strategies, with an increase of 5.64% in mIoU (from 47.52% to 53.16%) over 'Merge' and 6.79% (from 46.37% to 53.16%) over 'IE'. This highlights the superior performance of our EEF module in ensemble logits generation.
2) Learning Scheme. In this experiment, λ_1 and λ_2 are set to 1. Tab. <ref> shows the quantitative results. Based on our experimental data, using the individual losses alone or combining any two of them already improves adaptation performance. Specifically, the three losses ℒ_i^tta, ℒ_EN^tta, and ℒ_KL^tta(i,EN) contribute similarly during TTA, with ℒ_KL^tta(i,EN) playing a slightly more important role than the others. It should be noted that our learning scheme significantly improves the performance of the source model (by 13.07% mIoU).
3) Sensitivity Analysis 1) Batch size. We explore the impact of batch size on the semantic segmentation performance of different TTA methods (as shown in Tab. <ref>). The results indicate that a small batch size (1 or 2) leads to degraded segmentation performance, while a larger batch size (4 or 8) results in improved performance. Tab. <ref> shows that TTA methods are quite sensitive to batch size. This sensitivity can be attributed to the parameters updated during the test phase, which lie primarily within the batch normalization layers: increasing the batch size brings the test data in a batch closer to the real data distribution during adaptation, thus improving the segmentation performance. The proposed method consistently performs well across different batch sizes and outperforms the other evaluated TTA methods in terms of mIoU, showcasing its effectiveness in semantic segmentation tasks. For example, at a batch size of 8, the proposed method achieves an mIoU of 53.16, surpassing the mIoU of the other methods (ranging from 49.28 to 50.05). 2) Robustness to perturbations. We further evaluate the robustness of our method on the MF dataset with an ablation study on different input perturbations during test-time adaptation. Three types of perturbations are applied: image cropping, brightness adjustment, and the addition of Gaussian noise. Specifically, we crop the image at a rate of 0.2, randomly add Gaussian noise (noise range set to 5) to the image, or adjust the brightness of the images, to build three new test sets. Tab. <ref> shows the quantitative results of the different TTA methods. We can conclude that our method is more robust to noise and image corruption. 3) Parameters updated in TTA. We analyze TTA performance by examining the impact of updating specific network layers during adaptation. Three scenarios are considered: updating only the encoder parameters, updating only the decoder parameters, and updating both the encoder and decoder parameters. The experiment is conducted with a batch size of 8. Tab. <ref> presents the results of updating the affine parameters in different parts of the network. When only the encoder parameters are updated during TTA, the method achieves an mIoU of 48.71. Updating only the decoder parameters results in the best performance, with an mIoU of 53.16.
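For reproducibility of the perturbation study above, a minimal numpy sketch of the three corruptions is given below; the center-crop placement, the interpretation of the noise range as a standard deviation in 8-bit intensity units, and the brightness factor are assumptions, since the text only fixes the crop rate (0.2) and the noise range (5).

```python
import numpy as np

def crop_perturbation(img, rate=0.2):
    """Remove a fraction `rate` of each spatial dimension (center crop assumed;
    any downstream resizing is not specified in the text)."""
    h, w = img.shape[:2]
    dh, dw = int(h * rate / 2), int(w * rate / 2)
    return img[dh:h - dh, dw:w - dw]

def gaussian_noise_perturbation(img, noise_range=5, seed=None):
    """Add zero-mean Gaussian noise; `noise_range` is read as the std in 8-bit units (assumption)."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float32) + rng.normal(0.0, noise_range, img.shape)
    return np.clip(noisy, 0, 255).astype(img.dtype)

def brightness_perturbation(img, factor=0.7):
    """Scale intensities to darken or brighten the image; the factor is an assumed value."""
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(img.dtype)
```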
§ DISCUSSION For the IHR strategy, naively combining the individual color and thermal branches yields subpar performance due to modality gap and noise (Fig. <ref>). The proposed IHR strategy enhances prediction reliability by incorporating an interaction branch and a CMSA module. The CMSA module effectively combines cross-modal invariant features while suppressing noisy information between color and thermal modalities. Evaluating with nighttime color-thermal image pairs, we observe a performance gap between color and thermal branch logits without IHR, along with considerable noise in ensemble logits. By introducing the interaction branch and CMSA module, the discrepancy between color and thermal branch logits decreases, resulting in ensemble logits that align better with ground truth labels. This reduction in cross-modal discrepancy highlights the effectiveness of the interaction branch in mitigating the influence of image heterogeneity. As the first TTA framework, we design three branches to generate reliable pseudo labels without considering much about the parameters and computational costs, which is typical for other cross-modal TTA methods, , <cit.>. Future work will focus more on designing tight frameworks. Moreover, while our TTA framework is specifically designed for nighttime color-thermal semantic segmentation, there is potential for its application to address other types of multi-modality data. For instance, it can be extended to handle data combinations such as color and event data or color and depth data, opening up opportunities for broader applicability. § CONCLUSION In this paper, we addressed two potential problems of nighttime color-thermal image semantic segmentation to reduce the cross-modal discrepancy via test time adaptation (TTA) with cross-modal ensemble distillation. We presented a novel TTA framework, dubbed Night-TTA, with two novel refinement strategies: imaging heterogeneity refinement (IHR) and class aware refinement (CAR). In the experiments, both strategies were shown effective in achieving credible performance. The experimental results also proved the benefits of our learning scheme. Moreover, for nighttime color-thermal semantic segmentation, Night-TTA outperformed the existing methods by a considerable margin. IEEEtran [ < g r a p h i c s > ] Yexin Liu is a Mphil. student in the Visual Learning and Intelligent Systems Lab, Artificial Intelligence Thrust, The Hong Kong University of Science and Technology, Guangzhou (HKUST-GZ). His research interests include infrared- and event-based vision, and unsupervised domain adaptation. [ < g r a p h i c s > ] Weiming Zhang is a research assistant in the Visual Learning and Intelligent Systems Lab, Artificial Intelligence Thrust, The Hong Kong University of Science and Technology, Guangzhou (HKUST-GZ). His research interests include event-based vision, Deep Learning, . [ < g r a p h i c s > ] Guoyang ZHAO is a Mphil. student in the Intelligent Autonomous Driving Center, Thrust of Robotics and Autonomous Systems, The Hong Kong University of Science and Technology, Guangzhou (HKUST-GZ). His research interests include vision-based perception systems and Deep learning. [ < g r a p h i c s > ] Jinjing Zhu is a Ph.D. student in the Visual Learning and Intelligent Systems Lab, Artificial Intelligence Thrust, The Hong Kong University of Science and Technology, Guangzhou (HKUST-GZ). 
His research interests include CV (image classification, person re-identification, action recognition, etc.), DL (especially transfer learning, knowledge distillation, multi-task learning, semi-/self-unsupervised learning, etc.), omnidirectional vision, and event-based vision. [ < g r a p h i c s > ] Athanasios V. Vasilakos is with the Center for AI Research (CAIR), University of Agder(UiA), Grimstad, Norway. He served or is serving as an Editor for many technical journals, such as the IEEE TRANSACTIONS ON AI, IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT; IEEE TRANSACTIONS ON CLOUD COMPUTING, IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, IEEE TRANSACTIONS ON CYBERNETICS; IEEE TRANSACTIONS ON NANOBIOSCIENCE; IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE; ACM Transactions on Autonomous and Adaptive Systems; the IEEE JOURNAL ON SELECTED AREAS IN COM-MUNICATIONS . He is WoS highly cited researcher(HC). [ < g r a p h i c s > ] Lin Wang (IEEE Member) is an assistant professor in the AI Thrust, HKUST-GZ, HKUST FYTRI, and an affiliate assistant professor in the Dept. of CSE, HKUST. He did his Postdoc at the Korea Advanced Institute of Science and Technology (KAIST). He got his Ph.D. (with honors) and M.S. from KAIST, Korea. He had rich cross-disciplinary research experience, covering mechanical, industrial, and computer engineering. His research interests lie in computer and robotic vision, machine learning, intelligent systems (XR, vision for HCI), etc.
http://arxiv.org/abs/2307.04635v1
20230710152254
Self-consistent Combined HST, K-band, and Spitzer Photometric Catalogs of the BUFFALO Survey Fields
[ "Amanda Pagul", "F. Javier Sánchez", "Iary Davidzon", "Anton M. Koekemoer", "Hakim Atek", "Renyue Cen", "Lukas J. Furtak", "Mathilde Jauzac", "Guillaume Mahler", "Bahram Mobasher", "Mireia Montes", "Mario Nonino", "Keren Sharon", "Charles L. Steinhardt", "John R. Weaver" ]
astro-ph.GA
[ "astro-ph.GA" ]
0000-0002-6015-8614]Amanda Pagul A. Pagul et al. Space Telescope Science Institute, 3700 San Martin Dr., Baltimore, MD 21218, USA Department of Physics and Astronomy, University of California Riverside, Pierce Hall, Riverside, CA 92521, USA Amanda Pagul [email protected] 0000-0003-3136-9532]F. Javier Sánchez Space Telescope Science Institute, 3700 San Martin Dr., Baltimore, MD 21218, USA 0000-0002-2951-7519]Iary Davidzon Cosmic Dawn Center (DAWN) Niels Bohr Institute, University of Copenhagen, Lyngbyvej 2, Copenhagen Ø 2100 0000-0002-6610-2048]Anton M. Koekemoer Space Telescope Science Institute, 3700 San Martin Dr., Baltimore, MD 21218, USA Institut d'astrophysique de Paris, CNRS UMR7095, Sorbonne Université, 98bis Boulevard Arago, F-75014 Paris, France Department of Astrophysical Sciences, 4 Ivy Lane, Princeton, NJ 08544, USA 0000-0001-6278-032X]Lukas J. Furtak Physics Department, Ben-Gurion University of the Negev, P. O. Box 653, Be'er-Sheva, 8410501, Israel 0000-0003-1974-8732]Mathilde Jauzac Centre for Extragalactic Astronomy, Durham University, South Road, Durham DH1 3LE, U.K. Institute for Computational Cosmology, Durham University, South Road, Durham DH1 3LE, U.K Astrophysics and Cosmology Research Unit, School of Mathematical Sciences, University of KwaZulu-Natal, Durban 4041, South Africa 0000-0003-3266-2001]Guillaume Mahler Centre for Extragalactic Astronomy, Durham University, South Road, Durham DH1 3LE, UK Institute for Computational Cosmology, Durham University, South Road, Durham DH1 3LE, UK Department of Physics and Astronomy, University of California Riverside, Pierce Hall, Riverside, CA 92521, USA 0000-0001-7847-0393]Mireia Montes Instituto de Astrofísica de Canarias, c/ Vía Láctea s/n, E-38205 - La Laguna, Tenerife, Spain Departamento de Astrofísica, Universidad de La Laguna, E-38205 - La Laguna, Tenerife, Spain 0000-0001-6342-9662]Mario Nonino INAF-Trieste Astronomical Observatory 0000-0002-7559-0864]Keren Sharon Department of Astronomy, University of Michigan, 1085 S. University Ave, Ann Arbor, MI 48109, USA 0000-0003-3780-6801]Charles L. Steinhardt Cosmic Dawn Center (DAWN) Niels Bohr Institute, University of Copenhagen, Lyngbyvej 2, Copenhagen Ø 2100 0000-0003-1614-196X]John R. Weaver Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA BUFFALO Catalogs This manuscript presents new astronomical source catalogs using data from the BUFFALO Survey. These catalogs contain detailed information for over 100,000 astronomical sources in the 6 BUFFALO clusters: Abell 370, Abell 2744, Abell S1063, MACS 0416, MACS 0717, and MACS 1149 spanning a total 240 arcmin^2. The catalogs include positions and forced photometry measurements of these objects in the F275W, F336W, F435W, F606W, F814W, F105W, F125W, F140W, and F160W HST-bands, Keck-NIRC2/VLT-HAWKI Ks band, and IRAC Channel 1 and 2 bands. Additionally, we include photometry measurements in the F475W, F625W, and F110W bands for Abell 370. This catalog also includes photometric redshift estimates computed via template fitting using LePhare. When comparing to spectroscopic reference, we obtain an outlier fraction of 9.2% and scatter, normalized median absolute deviation (NMAD), of 0.062. The catalogs are publicly available for their use by the community. § INTRODUCTION The Hubble Frontier Fields (HFF) <cit.> is a multi-waveband program obtaining deep imaging observations of six massive clusters in a narrow redshift range z∼ 0.308 - 0.545. 
Combining the sensitivity, resolution power and multi-wavelength capability of the Hubble Space Telescope (HST), with the gravitational lensing effect introduced by the massive galaxy clusters selected for this study, one can reach unprecedented depths. Two HST instruments, the Advanced Camera for Surveys (ACS) and Wide-Field Camera 3 (WFC3), were used in parallel to simultaneously observe each cluster and parallel field. The parallel fields separated by ∼ 6 arcmin from the cluster core, corresponding to > 1.8 projected co-moving Mpc for a z>0.3 cluster. The six parallel fields are comparable in depth to the Hubble Ultra Deep Field <cit.>, corresponding to m(AB) ∼ 29 mag. The area coverage and depth of the parallel fields provide significant improvement in the volume covered and statistics of faint galaxies. The aims of the HFF observations were: (1) leverage gravitational lensing due to massive clusters <cit.> to magnify fluxes and hence detect very faint background galaxies at z ∼ 5 - 10 <cit.>. Strong lensing allows us to probe ∼ 2 magnitudes fainter than in blank fields. At the time of HFF observations, blank fields studies reached ∼-17 rest-frame UV magnitudes <cit.>; (2) study the stellar population of these faint galaxies at high redshifts and constrain the mass function of galaxies at early epochs. Stellar masses reach down to 10^8 M_⊙ in blank fields <cit.> and down to 10^6 M_⊙ in HFF lensed fields <cit.>; (3) study of the morphology and other observable properties of lensed galaxies at z > 8. The Beyond Ultra-deep Frontier Fields and Legacy Observations (BUFFALO) is an HST treasury program with 101 prime orbits (and 101 parallel orbits) (GO-15117; PIs: Steinhardt and Jauzac), covering the immediate areas around the HFF clusters where deep Spitzer (IRAC channels 1 and 2) and multi-waveband coverage already exist <cit.>. BUFFALO extends the spatial coverage of each of the six HFF clusters by three to four times. Observing these fields in five filters (ACS: F606W, F814W and WFC3: F105W, F125W and F160W), BUFFALO aims at a factor of 2 improvement in the statistics of high redshift galaxies <cit.>, improves the cosmic variance and allows a more accurate modeling of the dark matter distribution in the foreground clusters. The HST and Spitzer data for BUFFALO, combined with ground-based observations <cit.> was specifically designed to expand the HFF to sufficiently large area to encompass a full James Webb Space Telescope NIRSpec field of view, without the need for JWST/NIRCam pre-imaging. The program significantly improves the statistics of galaxies in the outskirts of clusters and field samples. In this paper, we present photometric and redshift catalogs for the BUFFALO galaxies. The catalogs presented in this work aim to extend and complement previous efforts in the HFF <cit.>. In section 2, we present the data used in this study. In section 3, we briefly outline the data reduction process, referring the reader to <cit.> for a more detailed description. In section 4, we describe our photometric validation procedure. Section 5, details the data products and results. Section 6 describes the photometric redshifts extracted. Finally, our conclusions are presented in section 7. Throughout this paper we assume standard cosmology with Ω_M = 0.23, Ω_Λ = 0.76 and H_0 = 73 Km/sec/Mpc. Magnitudes are in the AB system. § THE DATA We provide a brief summary of the dataset in the following subsections. 
For more details about the design, aims and observations of BUFFALO we refer the reader to the BUFFALO overview paper <cit.>. All our data products are available at MAST as a High Level Science Product via [10.17909/t9-w6tj-wp63]10.17909/t9-w6tj-wp63 §.§ HST observations The BUFFALO images provide the deepest exposures of galaxy clusters by HST, only second to the HUDF with respect to depth. With 101 additional prime (and 101 parallel) orbits, they build on the existing HFF cluster and parallel field surveys. BUFFALO slightly increases the depth at the center of the HFF clusters while increasing their areal coverage three- to four fold. As a result, it expands the radial coverage of cluster outskirts, providing observations of the global mass distribution of clusters to almost the virial radius, i.e. ∼ 3/4 × R_vir. The coverage was chosen to increase the high-z sample size, in particular for rare bright high-mass galaxies at z∼8-9. Furthermore, BUFFALO's footprint is chosen to be compatible with JWST's NIRSpec field of view, allowing multiwavelength programs with JWST[These were produced using the module (<https://github.com/spacetelescope/JWST_footprints>).] (Figure <ref>), which is especially timely for planning robust observations with JWST. In the HFF, the gravitational potential of the clusters' halo, besides binding together the galaxies in the system, produces a lensing magnification that could detect background objects to apparent magnitudes of 30–33 mag, i.e. 10–100 times fainter than previous surveys. With BUFFALO, we get magnifications of ∼ 4 on average. Details of the BUFFALO survey design are provided in <cit.>. In Table <ref>, we report the main characteristics of the six clusters, with a summary of the ancillary observations in Table <ref>. We use the official BUFFALO mosaics, with a pixel scale of 0.06"/pix, which have been produced following the procedures outlined in <cit.>; the full BUFFALO dataset is described further in <cit.>. We complement this data with the available public F275W and F336W HFF data from the HFF-Deepspace campaign <cit.>, which uses observations from <cit.>. §.§ Ancillary data The large wealth of complementary legacy datasets and programs for the HFF clusters has contributed to its success. The Spitzer Space Telescope dedicated more than 1,000 hours of Director's Discretionary time to obtain Infrared Array Camera (IRAC) 3.6 μm (channel 1) and 4.6 μm (channel 2) imaging down to the depths of 26.5 and 26.0 mag., in cluster and parallel fields respectively (program IDs: Abell 2744: 83, 90275; MACS J0416.1-2403: 80168, 90258; MACS J0717.4+3745: 40652, 60034, 90009, 90259; MACS J1149.4+2223: 60034, 90009, 90260; Abell S1063 (RXC J2248.7-4431): 83, 10170, 60034; Abell 370: 137, 10171, 60034). These observations are especially important for redshift determination given that they help break the degeneracies between low-redshift interlopers and high-redshift galaxies, and are beneficial in constraining galaxy properties since they provide a good proxy for galaxy stellar mass. The HFF clusters in the southern sky are also covered in the Ks band using the High Acuity Wide Field K-band Imager (HAWK-I) <cit.> at the Very Large Telescope (VLT), reaching a depth of 26.0 mag (5σ, point-like sources) for Abell 2744, MACS-0416, Abell S1063, and Abell 370 clusters. 
In the northern sky, this campaign used the Multi-Object Spectrometer for Infrared Exploration (MOSFIRE) <cit.> at Keck to observe MACS-0717 and MACS-1149 to a K-band 5σ depth of 25.5 and 25.1 mag respectively. This data covers all of the cluster and parallel field centers, but not the entirety of the outer area observed by BUFFALO. Table <ref> summarizes the available ancillary data. lcccccc Frontier Field cluster and parallel field positions, along with clusters' mean redshift (z_clu), virial mass (M_vir), and X-ray luminosity (L_X) <cit.> Field Cluster Center (J2000) Parallel Center (J2000) z_clu M_vir L_X R.A., Decl. R.A., Decl. Abell 370 02:39:52.9, -01:34:36.5 02:40:13.4, -01:37:32.8 0.375 ∼ 1×10^15 1.1×10^45 Abell 2744 00:14:21.2, -30:23:50.1 00:13:53.6, -30:22:54.3 0.308 1.8 × 10^15 3.1×10^45 Abell S1063 22:48:44.4, -44:31:48.5 22:49:17.7, -44:32:43.8 0.348 1.4×10^15 1.8×10^45 MACS J0416.1-2403 04:16:08.9, -24:04:28.7 04:16:33.1, -24:06:48.7 0.396 1.2 × 10^15 1.0×10^45 MACS J0717.5+3745 07:17:34.0 +37:44:49.0 07:17:17.0 +37:49:47.3 0.545 ∼ 2-3×10^15 3.3×10^45 MACS J1149.5+2223 11:49:36.3, +22:23:58.1 11:49:40.5, +22:18:02.3 0.543 2.5×10^15 1.8×10^45 lccccc Existing multi-wavelength HFF coverage from follow-up programs, as used in the present work. The 5-σ point-source depth was estimated by integrating the noise in a 2D Gaussian PSF aperture with the FWHM value set to the ones given in Table <ref>. The HFF <cit.> program is led by PIs T. Soifer and P. Capak; KIFF PI is G. Brammer <cit.>. Field Observatory/Camera Central Wavelength Depth Abell 370 VLT/HAWK-I 2.2μ m ∼ 26.18 Spitzer IRAC 1,2 3.6μ m, 4.5μ m ∼ 25.19, 25.09 MACS J0717.5+3745 Keck/MOSFIRE 2.2μ m ∼ 25.31 Spitzer IRAC 1,2 3.5μ m, 4.5 μ m ∼ 25.04, 25.17 MACS J0416.1-2403 VLT/HAWK-I 2.2μ m ∼ 26.25 Spitzer IRAC 1,2 3.5μ m, 4.5 μ m ∼ 25.31, 25.44 Abell S1063 VLT/HAWK-I 2.2μ m ∼ 26.31 Spitzer IRAC 1,2 3.6μ m, 4.5μ m ∼ 25.04, 25.04 Abell 2744 VLT/HAWK-I 2.2μ m ∼ 26.28 Spitzer IRAC 1,2 3.6μ m, 4.5μ m ∼ 25.32, 25.08 MACS J1149.5+2223 Keck/MOSFIRE 2.2μ m ∼ 25.41 Spitzer IRAC 1,2 3.5μ m, 4.5 μ m ∼ 25.24, 25.01 lccc[t] The Point Spread Function radius and effective wavelengths for different photometric bands used for the BUFFALO fields. Band FWHM λ_pivot (Å) F275W 011 2710 F336W 012 3354 F435W 013 4329 F606W 011 5922 F814W 010 8045 F105W 020 10551 F125W 020 12486 F140W 020 13923 F160W 020 15369 Ks 036 21524 I1 129 35634 I2 142 45110 Values were calculated for the cluster Abell 370. § DATA PROCESSING The workflow followed for the data processing in this work is the same as the one in <cit.> (P21 hereafter). The main steps taken to obtain the data products presented here are summarized as follows: * Error map correction: we compare the standard deviation of the values of the background pixels in the science image, with the reported root mean-square (rms) values as given by the error maps, and correct the latter so that the mean ratio in the background pixels are equal to 1. * PSF extraction: we select unsaturated, unblended stars and perform median stacking to obtain an estimate of the PSF. * Intracluster light (ICL) + bright galaxy modeling: Perform multi-object fits to Sérsic profiles, plus a local background using a combination of  <cit.> and  <cit.>. * Bright galaxy photometry: we run Source Extractor <cit.> on HST bands PSF-matched to the reddest, F160W, band, and obtain photometric measurements. 
* Background galaxy photometry: we subtract the bright galaxies and ICL, and run Source Extractor on the “cleaned” field for the PSF-matched HST images.
* Spitzer and K-band photometry: we use T-PHOT <cit.> to obtain self-consistent photometry measurements on the Spitzer and K-band images, using the HST images and segmentation maps as priors.
* Synthetic source injection: we inject synthetic sources and repeat the process to validate and correct the photometric measurements.
* Estimate photometric redshifts: the last step consists of using LePhare <cit.> to obtain photometric redshift estimates of the detected galaxies in these catalogs.
In the following subsections some of these steps are described in more detail. For a detailed description of all the steps, we refer the reader to P21.
§.§ Point Spread Function
A well-defined point spread function (PSF) as a function of wavelength is crucial for performing consistent photometry across a `panchromatic' baseline, for correctly modeling galaxies, and for obtaining galaxy fluxes in PSF-matched images. In order to perform multi-waveband photometry with accurate signal-to-noise and resolution for each aperture, we convolve each image with a kernel generated by taking (in Fourier space) the ratio between its original PSF and the target PSF, chosen to be that of the reddest band, F160W. In order to generate the PSFs for the HST and K-band images, we stack isolated and unsaturated stars in each individual image, taking the median of the stack. Up to this point, the procedure is identical to that followed in P21. We improve upon our previous work by creating PSFs for the representative inner (deeper) and outer (shallower) regions in both the cluster and parallel fields. Figure <ref> shows examples of the stacked PSFs derived in different regions, and Table <ref> gives the representative FWHM as a function of wavelength. We note that the full-width-at-half-maximum (FWHM) values in the two regions are compatible. Due to the large spatial variations of the PSF in the mid-IR Spitzer channels[See the https://irsa.ipac.caltech.edu/data/Spitzer/docs/irac/calibrationfiles/psfprf/ Spitzer/IRAC handbook.], we do not use the same approach to create our Spitzer PSF model. Furthermore, the individual pixel response functions (PRFs) are asymmetric and are thus dependent on the orientation of the camera. Moreover, the pixels on IRAC Ch 1 and 2 tend to undersample the PRF[More information in the https://irsa.ipac.caltech.edu/data/Spitzer/docs/files/Spitzer/simfitreport52_final.pdf Spitzer/IRAC handbook.]. Thus, instead of stacking stars and generating a single PSF per field, we use a synthetic pixel response function (PRF) that combines the information on the PSF, the detector sampling, and the intrapixel sensitivity variation in response to a point-like source, as done in P21. A PRF model for a given position on the IRAC mosaic is generated with a dedicated code (A. Faisst, private communication) by combining the single-epoch frames that contribute to that mosaic. To do so, the code stacks individual PRF models with the same orientation as the frames, resulting in a realistic, spatially dependent PSF model.
§.§ Modeling the intra-cluster light
The deep potential well and high density of galaxy clusters make them rich laboratories to study galaxy dynamics and interactions. Due to these complex processes, stars and gas stripped from their constituent galaxies build up in the cluster core as intracluster light (ICL) <cit.>. This can bias the flux measurements of galaxies that are close, in angular space, to the cluster center.
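Before turning to the ICL modeling, the PSF-homogenization step described above can be illustrated with a short sketch. This is a minimal example rather than the production pipeline: it builds a matching kernel as the Fourier-space ratio of the target (F160W) PSF to the PSF of a given band, with an ad hoc regularization threshold of our own choosing (real kernels are usually built with a tapering window).

    import numpy as np
    from scipy.signal import fftconvolve

    def matching_kernel(psf_band, psf_target, eps=1e-4):
        """Kernel that degrades an image with PSF `psf_band` to the resolution
        of `psf_target` (both centered, normalized, on the same pixel grid)."""
        f_band = np.fft.fft2(np.fft.ifftshift(psf_band))
        f_target = np.fft.fft2(np.fft.ifftshift(psf_target))
        # Regularize small Fourier amplitudes before taking the ratio
        f_band = np.where(np.abs(f_band) < eps, eps, f_band)
        kernel = np.fft.fftshift(np.fft.ifft2(f_target / f_band).real)
        return kernel / kernel.sum()

    def psf_match(image, psf_band, psf_target):
        """Convolve `image` so that its effective PSF matches `psf_target`."""
        return fftconvolve(image, matching_kernel(psf_band, psf_target), mode="same")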
Following <cit.> and P21, in order to model the ICL in the BUFFALO clusters, we first generate 18×18 arcsecond (300×300 pixel) stamps centered on each galaxy with a magnitude brighter than 26 in each image/band. Using <cit.>, we fit all galaxies in each stamp with a single Sérsic profile, masking those that are fainter than magnitude 26. If a given pixel with coordinates (x, y) is included in only one cutout, the ICL emission (F_ICL) is defined as the local background measurement reported by the fit. If there are overlapping cutouts at (x, y), we use the inverse χ^2-weighted mean of their background measurements: F_ICL(x,y)=[Σ_i s_i(x,y)/χ^2_i(x,y)] / [Σ_i 1/χ^2_i(x,y)], where s_i and χ_i^2 are, respectively, the fitted local background of the postage stamp and the goodness-of-fit value for the i-th cutout. As described in P21, the resulting ICL map has unphysical sharp features, which are smoothed out using a Gaussian kernel with σ=4.32". Similarly, for the K_s and Spitzer bands, we obtain the local background for each measured source in the same way, merge the measurements into a single mosaic, and smooth it with a representative kernel. As a caveat, though these maps primarily contain ICL emission, they also contain inhomogeneities in the background. This, however, ensures a robust `background+ICL' subtraction in the individual images. Cleaning of these maps via color selection of the individual stamps will be performed in a subsequent step.
§.§ Modeling the brightest galaxies
The procedure to model bright galaxies (magnitude brighter than 19) is also unchanged from P21. We rely on GALAPAGOS-M <cit.> to fit Sérsic profiles simultaneously to galaxies in all bands, with the fitting parameters varying as a function of wavelength. We construct galaxy models for the relevant galaxies and also cross-check the fits with those in <cit.>. The results of the ICL and bright galaxy modeling and subtraction are illustrated in Figure <ref>. Finally, we apply a median filter to the ICL+bright galaxy subtracted images. We use a filter with a box size of 1^∘ per side, applied only to pixels within 1σ of the background level, to reduce the effects of over-subtraction in the residual. Figure <ref> shows the modeling and filtering process. The lower right panel shows the effect of median filtering. Note that this process does not significantly affect the outskirts of the cluster.
§.§ Source Extraction
To detect galaxies and perform photometry, we use Source Extractor, focusing only on the "super hot" mode, rather than creating a dual run with hot and cold modes (see P21 for the definition of "hot" and "cold" modes). This is one of the main differences from the procedure presented in P21, where a second “cold" mode run is performed. We find that this second run does not have a significant impact on either the detection or the photometric performance (< 0.05 mag), especially after bright galaxy and ICL subtraction. This is a consequence of the cold mode focusing on extracting information about the brightest objects, which have already been removed by the bright galaxy subtraction. This is illustrated in Figure <ref>, where we compare a dual run with our new “super hot" run, finding similar magnitudes for the BUFFALO cluster Abell 370. The final Source Extractor configuration is presented in Appendix <ref>. We also show the magnitude distribution of sources in the F160W band for all clusters in Figure <ref>.
The large number density (defined as the number of sources per square arcmin) and depth of these catalogs are indicated. We subdivided the catalogs into sources detected in the inner field regions (the overlap with the HFF), which reach a significant depth, and the outer regions (the extension), where the depth is noticeably lower. The differences between the distributions of the cluster and the parallel regions are apparent. The cluster regions typically contain an over-abundance of brighter galaxies, whereas the parallel fields contain fewer of these bright objects but reach slightly deeper levels.
§.§ Photometry in Ancillary images
Because the Ks and Spitzer images have lower angular resolution than the HST images, they are more affected by blending. In order to effectively deblend sources and maximize the information extracted from each image, we use T-PHOT as in P21 to perform forced photometry in the Ks and IRAC images on sources detected in the IR-weighted HST image. T-PHOT <cit.> is a software package that uses priors from high-resolution data in order to deblend and extract fluxes of the same objects in a lower-resolution image. We first use T-PHOT's built-in background routine to generate a local background for each source and remove the excess ICL light as well as inhomogeneities in the backgrounds. Then, as “real” galaxy priors, we use the IR-weighted segmentation map and flux measurements from the F160W-band image. Additionally, we use the galaxy models that have been created in the bright galaxy+ICL removal step as the “model” priors. Given the spatial variation of the PRF in the IRAC bands, we take advantage of T-PHOT's “multikernel” option, and use a separate PRF to model sources at each position. We emphasize that the flux (FitQty) provided by T-PHOT corresponds to the total flux emitted by a given source.
§ PHOTOMETRIC VALIDATION
In order to characterize the performance of our detection and measurement procedures, we proceed as in P21, injecting synthetic galaxies into the original BUFFALO images using <cit.> to render noiseless, realistic galaxies that follow the morphology measurements in COSMOS by <cit.>. This catalog only contains flux information in the F814W band. Thus, we match these sources to the COSMOS catalog <cit.> in order to obtain the fluxes in the rest of our bands of interest. We choose to keep the morphology and centroids fixed across bands in order to simplify data handling and bookkeeping. In this case, we generate 10 realizations of a set of 160 sources using the F160W image footprint as reference. Note that, since not all bands cover the same footprint, some sources will not be recovered after processing. We then insert these sources into the original images, run our pipeline on the resulting combined image (which is the sum of the original and the noiseless synthetic sources), and compare their measured fluxes and positions to the inputs. This provides valuable information about completeness and absolute zeropoint calibration. The two catalogs are matched using a nearest-neighbor matching routine included in the package of <cit.>. The results of this comparison are shown in Figure <ref>. We see that for all of the HST bands (F435W, F606W, F814W, F105W, F125W, F140W, F160W) the recovered magnitude is within 20 mmag of the input, and that the reconstruction of the fluxes is relatively stable across the considered range of magnitudes.
We note that at the bright end there is a small fraction of the flux missing, probably due to the extended tails of the sources not being captured by the aperture. This photometric bias becomes smaller with increasing magnitude, up to the point where we start to lose sensitivity. We use these offsets to robustly correct the fluxes in each band. For Ks the performance is also excellent, and we find a median value of Δmag=-0.05 mag. For the Spitzer IRAC channels, we find small photometric offsets of Δmag = -0.12 and Δmag = -0.13 for I1 and I2, respectively. We compare the mean uncertainty reported by the measurement pipeline to the standard deviation of Δmag as a function of magnitude. Again, for the HST bands the performance is excellent, and we find that the reported errors are in good agreement with the scatter measured using our synthetic sources. This is not the case for Ks or IRAC, where we find that a correction is needed. In particular, we use a power-law correction: Δ F_new = Δ F_old A F^B, where Δ F_new is the corrected uncertainty estimate, Δ F_old is the uncertainty reported by the measurement software, F is the reported flux, and A, B are free parameters. We fit A, B and tabulate the results in Table <ref>.
§ DATA PRODUCTS AND RESULTS
In this section, we discuss the data products from this work and present some validation results. We produce several new data products from BUFFALO, including catalogs, models for the point spread function, and models for the ICL and bright galaxies. The final catalogs include properties of >100,000 sources in the 6 BUFFALO cluster and parallel fields, and extend the Frontier Fields footprint, covering a total of ∼240 square arcminutes. These include positions, multi-waveband photometry, and photometric redshift estimates, provided by LePhare <cit.>, for the detected sources. Additional details about the information provided by these catalogs can be found in Appendix <ref>. Point spread function (PSF) estimates are provided as FITS images. Section <ref> describes the modeling of the PSFs. We summarize some of their properties in Table <ref>. Unsurprisingly, these results are very similar to those found by P21, as the BUFFALO fields are mostly extensions of the HFF. The procedure to obtain models for the ICL and bright galaxies is described in Section <ref>. These models are also available as FITS images.
§.§ Photometric redshifts
In this section we present our redshift estimates based on the photometric measurements presented in the previous sections. We run LePhare <cit.>, a template-based code that derives a redshift likelihood function for each source. As in P21, the fluxes used as inputs to LePhare are rescaled by the factor f_tot = ∑_i w_i (FLUX_AUTO/FLUX_ISO)_i / ∑_i w_i, i.e. the weighted mean of the AUTO-to-ISO flux ratio over the observed HST bands, where the weights, w_i, are the sum in quadrature of the Source Extractor errors: w_i= √(σ_i,AUTO^2 +σ_i,ISO^2). This is done in order to improve the accuracy of the colors. For the T-PHOT-based photometry (Ks and IRAC bands), as we do not have an equivalent to FLUX_ISO, we include our baseline fluxes. The template library and dust attenuation follow <cit.>, using the <cit.> or <cit.> extinction laws depending on the galaxy type. For details about the templates and the extinction prescriptions we refer the reader to <cit.> and P21. In our catalog, the reported redshift estimates correspond to the position of the maximum of the likelihood for each object.
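As a concrete illustration, the rescaling factor defined above can be computed per source with a few lines. The sketch below is our own (the function name is a placeholder) and assumes per-band arrays of Source Extractor AUTO and ISO fluxes and their errors for a single source.

    import numpy as np

    def total_flux_rescaling(flux_auto, flux_iso, err_auto, err_iso):
        """Weighted mean of the AUTO-to-ISO flux ratio over the observed HST bands,
        with weights w_i = sqrt(sigma_AUTO,i^2 + sigma_ISO,i^2)."""
        w = np.sqrt(np.asarray(err_auto) ** 2 + np.asarray(err_iso) ** 2)
        ratio = np.asarray(flux_auto) / np.asarray(flux_iso)
        return np.sum(w * ratio) / np.sum(w)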
The redshift calibration procedure is similar to that presented in P21, and is based on the spectroscopic data described in <cit.>. We obtain the best-fit template for each source and look for a systematic offset in each band by comparing the predicted and observed fluxes for all sources that have a measured spectroscopic redshift with a spectroscopic quality flag >3. These magnitude offsets, when applied to the photometric baseline, compensate for a possible bias in the template library and/or for calibration issues in the data reduction. We find these corrections to be below 9% for all the HST bands. For the K_s band, we find a correction factor of 0.883, while for IRAC channels 1 and 2 the correction factors are 1.117 and 1.182, respectively. These corrections are shown in Table <ref>. Figure <ref> also shows the photometric redshift distribution for objects in each cluster, estimated from the SED fits with a reduced χ^2< 10.

Multiplicative factors applied to each band in the photo-z calibration step.
Band | Multiplicative Factor
F275W | 1.055
F336W | 1.011
F435W | 1.085
F475W | 1.060
F606W | 1.004
F625W | 1.006
F814W | 0.992
F105W | 1.004
F110W | 1.015
F125W | 1.011
F140W | 1.008
F160W | 0.995
Ks | 0.883
IRAC1 | 1.117
IRAC2 | 1.182

§ COMPARISON WITH THE HUBBLE FRONTIER FIELDS
By design, there is significant overlap between the HFF and the BUFFALO fields. This makes the HFF catalogs an exceptional reference to verify and validate the data presented in this work and to check for potential improvements, given the increased number of exposures. Here, we compare our BUFFALO data products with those presented in P21. Figure <ref> compares the magnitude distribution of sources in the F160W band between the catalog presented here and the catalogs in P21 in the overlapping region of the MACS J1149 cluster. We show that our new BUFFALO catalogs reach fainter sources than those from the HFF. We also show the fraction of detected objects as a function of magnitude, finding that both catalogs have a similar completeness down to magnitude ∼ 27.5 in the F160W band. This is in agreement with P21, where the completeness dropped below 100% at ∼ 27.5. Other bands and clusters show a similar behavior. We note that these completeness estimates do not take into account the effects of strong lensing.
§ SUMMARY
The wealth of deep HST observations and ancillary data in the HFF <cit.> opens a window to the high-redshift universe and provides a sample complementary to JWST. The BUFFALO survey <cit.> used these data and extended the observations in the 6 HFFs, to allow for follow-up spectroscopy. This work presents a new set of data products based on the BUFFALO observations. The data products include models for the point spread function (PSF), the intra-cluster light (ICL), and the bright galaxies, as well as catalogs of astronomical sources. The catalogs contain detailed information (including positions and photometry) for over 100,000 sources distributed across 6 separate cluster and parallel fields covering a total area of 240 arcmin^2. The data products are obtained using a procedure similar to that outlined in <cit.>. First, models of the bright galaxies and of the ICL are created. These models are then subtracted from the original image in order to increase our sensitivity to fainter sources, which are detected and measured using Source Extractor in the HST bands.
We then use the IR-weighted segmentation map as priors in the T-PHOT package to obtain forced-photometry in ancillary data from Keck Ks band, and Spitzer IRAC channels 1 and 2. The photometric measurements are validated using synthetic source injection. Finally, LePhare is run to obtain redshift estimates based on our photometric measurements. The main change with respect to the procedure in P21 is the usage of a “super hot” mode Source Extractor run, that simplifies bookkeping, while not biasing the photometric estimates. As a sanity check, we plot the redshift histograms and note that the peaks of these histograms correspond to the redshift of each respective cluster. This catalog represents one of the deepest views at galaxy clusters to date and a sample that lends itself well for JWST follow-up. All of the data products presented in this work will be made publicly available to the astronomical community through the usual astronomical archive databases (MAST and Vizier). § ACKNOWLEDGEMENTS ID acknowledges the support received from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 896225. This work has made use of the CANDIDE Cluster at the Institut d'Astrophysique de Paris and made possible by grants from the PNCG and the DIM-ACAV. The Cosmic Dawn Center is funded by the Danish National Research Foundation under grant No. 140. LF acknowledges support by Grant No. 2020750 from the United States-Israel Binational Science Foundation (BSF) and Grant No. 2109066 from the United States National Science Foundation (NSF). Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs GO-15117, WFC3/UV imaging (GO 13389, 14209; B. Siana), A370 HST/ACS additional imaging (GO 11507; K. Noll, 11582; A. Blain, 13790; S. Rodney, 11591; J.P. Kneib) This work is based in part on data and catalog products from HFF-DeepSpace, funded by the National Science Foundation and Space Telescope Science Institute (operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555). Support for HST Program GO-15117 was provided through a grant from the STScI under NASA contract NAS5-26555. This work is based in part on observations made with the Spitzer Space Telescope, which was operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme(s) 090.A-0458, 092.A-0472, and 095.A-0533. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. 
§ CATALOG DETAILS The catalogs presented in this work contain the following information: * ID: Source number * FLUX_FXXXW: Total scaled flux in cgs units of erg/cm^2/s/Hz * FLUXERR_FXXXW: Corrected flux error in cgs units of erg/cm^2/s/Hz * ZSPEC: reported spectroscopic redshift * ZSPEC_Q: reported quality flag of spectroscopic redshift * ZSPEC_REF: dataset from which spectroscopic redshift was obtained * ALPHA_J2000_STACK: Right Ascension (J2000) in degrees using GAIA DR2 as reference. * DELTA_J2000_STACK: Declination (J2000) in degrees using GAIA DR2 as reference. * FIELD: denotes the field object belongs to * ZCHI2: photometric redshift goodness of fit * CHI2_RED: reduced chi square * ZPDF: photometric redshift derived via maximum likelihood * ZPDF_LOW: lower threshold for photometric redshift * ZPDF_HIGH: upper threshold for photometric redshift * MOD_BEST: galaxy model for best χ^2 * EXT_LAW: Extinction law * E_BV: E(B-V) * ZSECOND: secondary photometric redshift peak in maximum likelihood distribution * BITMASK: Base 2 number to determine which bands were used. Calculated via bitmask=∑_n=good band index 2^n * NB_USED: number of bands used § SOURCE EXTRACTOR CONFIGURATION 3 0.5 0.5 Y gauss_4.0_7x7.conv 64 0.000005 Y 0.8 CORRECT 2.0, 3.5 0.5 0.17 goods_default.nnw 64 3 LOCAL 24
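For convenience, the columns listed in the catalog details above can be accessed as in the following sketch, which assumes the catalogs are read as FITS tables with astropy; the file name is a placeholder for the released products on MAST/Vizier.

    import numpy as np
    from astropy.table import Table

    # Placeholder file name; see MAST/Vizier for the released catalogs.
    cat = Table.read("buffalo_cluster_catalog.fits")

    # Keep sources with a reasonable photo-z fit, as in the text (reduced chi^2 < 10)
    good = cat[cat["CHI2_RED"] < 10]
    print(len(good), "sources; median photo-z:", np.median(good["ZPDF"]))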
http://arxiv.org/abs/2307.04318v1
20230710032008
Two-Sample and Change-Point Inference for Non-Euclidean Valued Time Series
[ "Feiyu Jiang", "Changbo Zhu", "Xiaofeng Shao" ]
stat.ME
[ "stat.ME" ]
Feiyu Jiang [1], Changbo Zhu [2] (corresponding author; email: [email protected]), Xiaofeng Shao [3]
[1] Department of Statistics and Data Science, Fudan University
[2] Department of Applied and Computational Mathematics and Statistics, University of Notre Dame
[3] Department of Statistics, University of Illinois at Urbana Champaign

Two-Sample and Change-Point Inference for Non-Euclidean Valued Time Series
==========================================================================

Data objects taking value in a general metric space have become increasingly common in modern data analysis. In this paper, we study two important statistical inference problems, namely, two-sample testing and change-point detection, for such non-Euclidean data under temporal dependence. Typical examples of non-Euclidean valued time series include yearly mortality distributions, time-varying networks, and covariance matrix time series. To accommodate unknown temporal dependence, we advance the self-normalization (SN) technique <cit.> to the inference of non-Euclidean time series, which is substantially different from the existing SN-based inference for functional time series that reside in Hilbert space <cit.>. Theoretically, we propose new regularity conditions that could be easier to check than those in the recent literature, and derive the limiting distributions of the proposed test statistics under both the null and local alternatives. For the change-point detection problem, we also derive the consistency of the change-point location estimator, and combine our proposed change-point test with wild binary segmentation to perform multiple change-point estimation. Numerical simulations demonstrate the effectiveness and robustness of our proposed tests compared with existing methods in the literature. Finally, we apply our tests to two-sample inference in mortality data and change-point detection in cryptocurrency data.

§ INTRODUCTION

Statistical analysis of non-Euclidean data that reside in a metric space is gradually emerging as an important branch of functional data analysis, motivated by the increasing encounter of such data in many modern applications. Examples include the analysis of sequences of age-at-death distributions over calendar years <cit.>, covariance matrices in the analysis of diffusion tensors in medical imaging <cit.>, and graph Laplacians of networks <cit.>. One of the main challenges in dealing with such data is that the usual vector/Hilbert space operations, such as projection and inner product, may not be well defined, and only the distance between two non-Euclidean data objects is available. Despite this challenge, the list of papers that propose new statistical techniques to analyze non-Euclidean data has been growing.
Building on Fréchet mean and variance <cit.>, which are counterparts of mean and variance for metric space valued random object, <cit.> proposed a test for comparing N(≥ 2) populations of metric space valued data. <cit.> developed a novel test to detect a change point in the Fréchet mean and/or variance in a sequence of independent non-Euclidean data. The classical linear and nonparametric regression has also been extended to metric spaced valued data; see <cit.>, <cit.>, and <cit.>, among others. So far, the majority of the literature on non-Euclidean data has been limited to independent data, and the only exceptions are <cit.> and <cit.>, which mainly focused on the autoregressive modeling of non-Euclidean valued time series. To the best of our knowledge, no inferential tools are available for non-Euclidean valued time series in the literature. In this paper, we address two important problems: two-sample testing and change-point detection, in the analysis of non-Euclidean valued time series. These two problems are also well motivated by the data we analyzed in the paper, namely, the yearly age-at-death distributions for countries in Europe and daily Pearson correlation matrices for five cryptocurrencies. For time series data, serial dependence is the rule rather than the exception. This motivates us to develop new tests for non-Euclidean time series that is robust to temporal dependence. Note that the two testing problems have been addressed by <cit.> and <cit.>, respectively for independent non-Euclidean data, but as expected, their tests fail to control the size when there is temporal dependence in the series; see Section <ref> for simulation evidence. To accommodate unknown temporal dependence, we develop test statistics based on self-normalization <cit.>, which is a nascent inferential technique for time series data. It has been mainly developed for vector time series and has been extended to functional time series in Hilbert space <cit.>. The functional extension is however based on reducing the infinite dimensional functional data to finite dimension via functional principal component analysis, and then applying SN to the finite-dimensional vector time series. Such SN-based inference developed for time series in Hilbert space cannot be applied to non-Euclidean valued time series, since the projection and inner product commonly used for data in Hilbert space are not available for data objects that live in a general metric space. The SN-based extension to non-Euclidean valued time series is therefore fairly different from that in <cit.> and <cit.>, in terms of both methodology and theory. For independent non-Euclidean valued data, <cit.> build on the empirical process theory <cit.> by regulating the complexity of the analyzed metric space, which is in general abstract and may not be easy to verify. In our paper, we take a different approach that is inspired by the M-estimation theory in <cit.> and <cit.> for Euclidean data, and extend it to non-Euclidean setting. We assume that the metric distance between data and the estimator of the Fréchet mean admits certain decomposition, which includes a bias term, a leading stochastic term, and a remainder term. Our technical assumptions are more intuitive and could be easier to check in practice. 
Furthermore, we are able to obtain explicit asymptotic distributions of our test statistics under the local alternatives of rate O(n^-1/2), where n is the sample size, under our assumptions, whereas they seem difficult to derive under the entropy integral type conditions employed by <cit.>. The remainder of the paper is organized as follows. Section <ref> provides background of non-Euclidean metric space in which random objects of interest reside in, and some basic assumptions that will be used throughout the paper. Section <ref> proposes SN-based two-sample tests for non-Euclidean time series. Section <ref> considers SN-based change-point tests. Numerical studies for the proposed tests are presented in Section <ref>, and Section <ref> demonstrates the applicability of these tests through real data examples. Section <ref> concludes. Proofs of all results are relegated to Appendix <ref>. Appendix <ref> summarizes the examples that satisfy assumptions in Section <ref>, and Appendix <ref> provides simulation results for functional time series. Some notations used throughout the paper are defined as follows. Let · denote the conventional Euclidean norm. Let D[0,1] denote the space of functions on [0, 1] which are right continuous with left limits, endowed with the Skorokhod topology <cit.>. We use ⇒ to denote weak convergence in D[0,1] or more generally in ℝ^m-valued function space D^m[0,1], where m∈ℕ; →_d to denote convergence in distribution; and →_p to denote convergence in probability. A sequence of random variables X_n is said to be O_p(1) if it is bounded in probability. For x∈ℝ, define ⌊ x⌋ as the largest integer that is smaller than or equal to x, and ⌈ x ⌉ as the smallest integer that is greater than or equal to x. § PRELIMINARIES AND SETTINGS In this paper, we consider a metric space (Ω,d) that is totally bounded, i.e. for any ϵ>0, there exist a finite number of open ϵ-balls whose union can cover Ω. For a sequence of stationary random objects {Y_t}_t∈ℤ defined on (Ω,d), we follow <cit.>, and define their Fréchet mean and variance by μ=min_ω∈Ω𝔼d^2(Y_t,ω), V=𝔼d^2(Y_t,μ), respectively. Fréchet mean extends the traditional mean in linear spaces to more general metric spaces by minimizing expected squared metric distance between the random object Y_t and the centroid akin to the conventional mean by minimizing the expected sum of residual squares. It is particularly useful for objects that lie in abstract spaces without explicit algebraic structure. Fréchet variance, defined by such expected squared metric distance, is then used for measuring the dispersion in data. Given finite samples {Y_t}_t=1^n, we define their Fréchet subsample mean and variance as μ̂_[a,b]=min_ω∈Ω∑_t=1+⌊ na⌋^⌊ nb⌋d^2(Y_t,ω), V̂_[a,b]=1/⌊ nb⌋-⌊ na⌋∑_t=1+⌊ na⌋^⌊ nb⌋d^2(Y_t,μ̂_[a,b]), where (a,b)∈ℐ_η, ℐ_η={(a,b): 0≤ a<b≤ 1, b-a≥η} for some trimming parameter η∈(0,1). The case corresponding to a=0 and b≥η is further denoted as μ̂_[0,b]=μ̂_b, V̂_[0,b]=V̂_b, with special case of b=1 corresponding to Fréchet sample mean and variance <cit.>, respectively. Note that both Fréchet (subsample) mean and variance depend on the space Ω and metric distance d, which require further regulation for desired inferential purposes. In this paper, we do not impose independence assumptions, and our technical treatment differs substantially from those in the literature, c.f. <cit.>. μ is unique, and for some δ>0, there exists a constant K>0 such that, inf _d(ω, μ)<δ{𝔼(d^2(Y_0, ω))-𝔼(d^2(Y_0, μ))-K d^2(ω, μ)}≥ 0. 
For any (a,b)∈ℐ_η, μ̂_[a,b] exists and is unique almost surely. For any ω∈Ω, and (a,b)∈ℐ_η, as n→∞, 1/⌊ nb⌋-⌊ na⌋∑_t=⌊ na⌋+1^⌊ nb⌋[d^2(Y_t,ω)-𝔼d^2(Y_t,ω)]→_p 0. For some constant σ>0, 1/√(n)∑_t=1^⌊ nr⌋(d^2(Y_t,μ)-V)⇒σ B(r), r∈(0,1], where B(·) is a standard Brownian motion. Let B_δ(μ) ⊂Ω be a ball of radius δ centered at μ. For ω∈ B_δ(μ), i.e. d(ω,μ)≤δ, we assume the following expansion d^2(Y_t,ω)-d^2(Y_t,μ)= K_dd^2(ω,μ)+ g(Y_t,ω,μ)+R(Y_t,ω,μ), t∈ℤ, where K_d∈(0,∞) is a constant, and g(Y_t,ω,μ) and R(Y_t,ω,μ) satisfy that, as n→∞, sup_(a,b)∈ℐ_ηsup_ω∈ B_δ(μ)| n^-1/2∑_t=⌊ n a⌋+1^⌊ n b⌋ g(Y_t,ω,μ)/d(ω,μ)|=O_p(1), and sup_(a,b)∈ℐ_ηsup_ω∈ B_δ(μ)|n^-1/2∑_t=⌊ n a⌋+1^⌊ n b⌋ R(Y_t,ω,μ)/d(ω,μ)+n^1/2d^2(ω,μ)|→_p 0, respectively. Several remarks are given in order. Assumptions <ref>-<ref> are standard and similar conditions can be found in <cit.> and <cit.>. Assumptions <ref> and <ref> are adapted from Assumption (A1) in <cit.>, and are required for identification purpose. In particular, Assumption <ref> requires that the expected squared metric distance 𝔼d^2(Y_t,ω) can be well separated from the Fréchet variance, and the separation is quadratic in terms of the distance d(ω,μ). Assumption <ref> is useful for obtaining the uniform convergence of the subsample estimate of Fréchet mean, i.e., μ̂_[a,b], which is a key ingredient in forming the self-normalizer in SN-based inference. Assumption <ref> is a pointwise weak law of large numbers, c.f. Assumption (A2) in <cit.>. Assumption <ref> requires the invariance principle to hold to regularize the partial sum that appears in Fréchet subsample variances. Note that d^2(Y_t,ω) takes value in ℝ for any fixed ω∈Ω, thus both Assumption <ref> and <ref> could be implied by high-level weak temporal dependence conditions (e.g., strong mixing) in conventional Euclidean space, see <cit.> for discussions. <Ref> distinguishes our theoretical analysis from the existing literature. Its idea is inspired by <cit.> and <cit.> for M-estimators. In the conventional Euclidean space, i.e. (Ω,d)=(ℝ^m,·) for m≥ 1, it is easy to see that the expansion in <Ref> holds with K_d=1, g(Y_t,ω,μ)=2(μ-ω)^⊤(Y_t-μ) and R(Y_t,ω,μ)≡ 0. In more general cases, Assumption <ref> can be interpreted as the expansion of d^2(Y_t,ω) around the target value d^2(Y_t,μ). In particular, K_dd^2(ω,μ) can be viewed as the bias term, g(Y_t,ω,μ) works as the asymptotic leading term that is proportional to the distance d(ω,μ) while R(Y_t,ω,μ) is the asymptotically negligible remainder term. More specifically, after suitable normalization, it reads as, n^-1/2 ∑_t=⌊na⌋+1^⌊nb⌋ [d^2(Y_t,ω)-d^2(Y_t,μ)] = n^1/2(b-a)K_dd^2(ω,μ)_bias term + d(ω,μ)n^-1/2∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,ω,μ)/d(ω,μ)_stochastic term +n^-1/2∑_t=⌊na⌋+1^⌊nb⌋ R(Y_t,ω,μ)_remainder term. And the verification of this assumption can be done by analyzing each term. In comparison, existing literature, e.g. <cit.>, <cit.>, impose assumptions on the complexity of (Ω,d). These assumptions typically involve the behaviors of entropy integral and covering numbers rooted in the empirical process theory <cit.>, which are abstract and difficult to check in practice, see Propositions 1 and 2 in <cit.>. Assumption <ref>, on the contrary, regulates directly on the metric d and could be easily checked for the examples below. Moreover, Assumption <ref> is useful for deriving local powers of tests to be developed in this paper, see Section <ref> and <ref> for more details. 
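To make the Euclidean-space remark above concrete, note that for (Ω,d)=(ℝ^m,‖·‖) one has
d^2(Y_t,ω)-d^2(Y_t,μ)=‖(Y_t-μ)+(μ-ω)‖^2-‖Y_t-μ‖^2=‖ω-μ‖^2+2(μ-ω)^⊤(Y_t-μ),
so the expansion indeed holds with K_d=1, g(Y_t,ω,μ)=2(μ-ω)^⊤(Y_t-μ) and R(Y_t,ω,μ)≡ 0. By the Cauchy–Schwarz inequality, |n^-1/2∑_t=⌊ na⌋+1^⌊ nb⌋g(Y_t,ω,μ)|/d(ω,μ)≤ 2‖ n^-1/2∑_t=⌊ na⌋+1^⌊ nb⌋(Y_t-μ)‖, so the uniform condition on g follows from a functional central limit theorem for the partial-sum process of Y_t-μ, while the condition on R holds trivially.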
Examples that can satisfy Assumptions <ref>-<ref> include: * L_2 metric d_L for Ω being the set of square integrable functions on [0,1]; * 2-Wasserstein metric d_W for Ω being the set of univariate probability distributions on ℝ; * Frobenius metric d_F for Ω being the set of square matrices, including the special cases of covariance matrices and graph Laplacians; * log-Euclidean metric d_E for Ω being the set of covariance matrices. We refer to Appendix <ref> for more details of these examples and verifications of above assumptions for them. § TWO-SAMPLE TESTING This section considers two-sample testing in metric space under temporal dependence. For two sequences of temporally dependent random objects {Y_t^(1),Y_t^(2)}_t∈ℤ on (Ω,d), we denote Y_t^(i)∼ P^(i), where P^(i) is the underlying marginal distribution of Y_t^(i) with Fréchet mean and variance μ^(i) and V^(i), i=1,2. Given finite sample observations {Y_t^(1)}_t=1^n_1 and {Y_t^(2)}_t=1^n_2, we are interested in the following two-sample testing problem, ℍ_0: P^(1)=P^(2),  ℍ_a: P^(1)≠ P^(2). Let n=n_1+n_2, we assume two samples are balanced, i.e. n_1/n→γ_1 and n_2/n→γ_2 with γ_1,γ_2∈(0,1) and γ_1+γ_2=1 as min(n_1,n_2)→∞. For r∈(0,1], we define their recursive Fréchet sample mean and variance by μ̂^(i)_r=min_ω∈Ω∑_t=1^⌊ rn_i⌋d^2(Y_t^(i),ω), V̂^(i)_r=1/⌊ rn_i⌋∑_t=1^⌊ rn_i⌋d^2(Y_t^(i),μ̂^(i)_r), i=1,2. A natural candidate test of ℍ_0 is to compare their Fréchet sample mean and variance by contrasting (μ̂^(1)_1,V̂^(1)_1) and (μ̂^(2)_1,V̂^(2)_1). For the mean part, it is tempting to use d(μ̂^(1)_1,μ̂^(2)_1) as the testing statistic. However, this is a non-trivial task as the limiting behavior of d(μ̂^(1)_1,μ̂^(2)_1) depends heavily on the structure of the metric space, which may not admit conventional algebraic operations. Fortunately, both V̂^(1)_1 and V̂^(2)_1 take value in ℝ, and it is thus intuitive to compare their difference. In fact, <cit.> propose the test statistic of the form U_n= n_1n_2/n σ̂_1^2σ̂_2^2(V̂^(1)_1-V̂^(2)_1)^2, where σ̂_i^2 is a consistent estimator of lim_n_i→∞Var{√(n)(V̂^(i)_1-V^(i))}, i=1,2. However, U_n requires both within-group and between-group independence, which is too stringent to be realistic for applications in this paper. When either of such independence is violated, the test may fail to control size, see Section <ref> for numerical evidence. Furthermore, taking into account the temporal dependence requires replacing the variance by long-run variance, whose consistent estimation usually involves laborious tuning such as choices of kernels and bandwidths <cit.>. To this end, we invoke self-normalization technique to bypass the foregoing issues. The core principle of self-normalization for the time series inference is to use an inconsistent long-run variance estimator that is a function of recursive estimates to yield an asymptotically pivotal statistic. The SN procedure does not involve any tuning parameter or involves less number of tuning parameters compared to traditional counterparts. See <cit.> for a comprehensive review of recent developments for low dimensional time series. For recent extension to inference for high-dimensional time series, we refer to <cit.> and <cit.>. 
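As a concrete illustration of these building blocks, the recursive Fréchet sample variances V̂^(i)_r entering the test statistics below can be computed directly when the random objects are univariate distributions equipped with d_W. The sketch below is our own and assumes each observation is stored as its quantile function evaluated on a common, equally spaced grid of probability levels; in that representation the Fréchet mean is the pointwise average of quantile functions and squared Wasserstein distances are integrated squared quantile differences. (For graph Laplacians or covariance matrices under d_F, the same code applies with each matrix flattened into a row, up to an immaterial constant rescaling of the metric.)

    import numpy as np

    def frechet_mean_var_w2(quantiles):
        """Fréchet sample mean and variance under the 2-Wasserstein metric.

        `quantiles` has shape (n, m): row t holds the quantile function of Y_t
        on a common grid of m equally spaced probability levels.
        """
        mu = quantiles.mean(axis=0)                  # quantile function of the barycenter
        d2 = np.mean((quantiles - mu) ** 2, axis=1)  # d_W^2(Y_t, mu), Riemann approximation
        return mu, d2.mean()

    def recursive_frechet_variance(quantiles, eta=0.15):
        """V-hat_r on the grid r = k/n for k = floor(n*eta), ..., n (eta is the trimming)."""
        n = quantiles.shape[0]
        return {k / n: frechet_mean_var_w2(quantiles[:k])[1]
                for k in range(int(np.floor(eta * n)), n + 1)}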
§.§ Test Statistics Define the recursive subsample test statistic based on Fréchet variance as T_n(r)=r(V̂^(1)_r-V̂^(2)_r), r∈ [η,1], and then construct the SN based test statistic as D_n,1=n[T_n(1)]^2/∑_k=⌊ nη⌋^n [T_n(k/n)-k/nT_n(1)]^2, where η∈(0,1) is a trimming parameter for controlling the estimation effect of T_n(r) when r is close to 0, which is important for deriving the uniform convergence of {√(n)T_n(r), r∈[η,1]}, see <cit.> and <cit.> for similar technical treatments. The testing statistic (<ref>) is composed of the numerator n[T_n(1)]^2, which captures the difference in Fréchet variances, and the denominator ∑_k=⌊ nη⌋^n [T_n(k/n)- k/nT_n(1)]^2, which is called self-normalizer and mimics the behavior of the numerator with suitable centering and trimming. For each r∈[η,1], T_n(r) is expected to be a consistent estimator for r(V^(1)-V^(2)). Therefore, under ℍ_a, T_n(1) is large when there is significant difference in Fréchet variance, whereas the key element T_n(r)-rT_n(1) in self-normalizer remains to be small. This suggests that we should reject ℍ_0 for large values of D_n,1. Note that (<ref>) only targets at difference in Fréchet variances. To detect the difference in Fréchet means, we can use contaminated Fréchet variance <cit.>. Let V̂^C,(1)_r=1/⌊ rn_1⌋∑_t=1^⌊ rn_1⌋d^2(Y_t^(1),μ̂^(2)_r), and V̂^C,(2)_r=1/⌊ rn_2⌋∑_t=1^⌊ rn_2⌋d^2(Y_t^(2),μ̂^(1)_r), and T_n^C(r)=r(V̂^C,(1)_r+V̂^C,(2)_r-V̂^(1)_r-V̂^(2)_r). The contaminated Fréchet sample variances V̂^C,(1)_r and V̂^C,(2)_r switch the role of μ̂_r^(1) and μ̂_r^(2) in V̂^(1)_r and V̂^(2)_r, respectively, and could be viewed as proxies for measuring Fréchet mean differences. Intuitively, it is expected that V̂^C,(i)_r≈𝔼d^2(Y_t^(i),μ^(3-i)), and V̂^(i)_r≈𝔼d^2(Y_t^(i), μ^(i)), i=1,2. Under ℍ_0, both μ̂_r^(1) and μ̂_r^(2) are consistent estimators for μ^(1)=μ^(2), thus V̂^C,(i)_r≈V̂^(i)_r, i=1,2, which indicates a small value for T_n^C(r). On the contrary, when d(μ^(1),μ^(2))>0, V̂^C,(i)_r could be much larger than V̂^(i)_r as 𝔼d^2(Y_t^(i),μ^(3-i))>𝔼d^2(Y_t^(i),μ^(i))=min_ω∈Ω𝔼d^2(Y_t^(i),ω), i=1,2, resulting in large value of T_n^C(r). The power-augmented test statistic is thus defined by D_n,2=n{[T_n(1)]^2+[T_n^C(1)]^2}/∑_k=⌊ nη⌋^n {[T_n(k/n)-k/nT_n(1)]^2+ [T_n^C(k/n)-k/nT_n^C(1)]^2}, where the additional term ∑_k=⌊ nη⌋^n [T_n^C(k/n)-k/nT_n^C(1)]^2 that appears in the self-normalizer is used to stabilize finite sample performances. Our proposed tests could be adapted to comparison of N-sample populations <cit.>, where N≥ 2. An natural way of extension would be aggregating all the pairwise differences in Fréchet variance and contaminated variance. Specifically, let the N groups of random data objects be {Y_t^(i)}_t=1^n_i, i=1,⋯,N. The null hypothesis is given as ℍ_0: P^(1)=⋯=P^(N), for some N≥ 2. Let μ̂^(i)_r and V̂^(i)_r, r∈[η,1] be the Fréchet subsample mean and variance, respectively, for the ith group, i=1,⋯, N. For 1≤ i≠ j≤ N, define the pairwise contaminated Fréchet subsample variance as V̂^C,(i,j)_r=1/⌊ rn_i⌋∑_t=1^⌊ rn_i⌋d^2(Y_t^(i),μ̂^(j)_r), r∈ [η,1], and define the recursive statistics T_n^i,j(r)=r(V̂^(i)_r-V̂^(j)_r), T_n^C,i,j(r)=r(V̂^C,(i,j)_r+V̂^C,(j,i)_r-V̂^(i)_r-V̂^(j)_r), r∈ [η,1]. 
In the same spirit of the test statistics D_n,1 and D_n,2, for n=∑_i=1^N n_i, we may construct their counterparts for the N-sample testing problem as D^(N)_n,1=n∑_i<j[T_n^i,j(1)]^2/∑_k=⌊ nη⌋^n ∑_i<j[T_n^i,j(k/n)-k/nT_n^i,j(1)]^2, and D^(N)_n,2=n∑_i<j{[T_n^i,j(1)]^2+[T_n^C,i,j(1)]^2}/∑_k=⌊ nη⌋^n ∑_i<j{[T_n^i,j(k/n)-k/nT_n^i,j(1)]^2+[T_n^C,i,j(k/n)-k/nT_n^C,i,j(1)]^2}. Compared with classical N-sample testing problem in Euclidean spaces, e.g. analysis of variance (ANOVA), the above modification does not require Gaussianity, equal variance, or serial independence. Therefore, they could be work for broader classes of distributions. We leave out the details for the sake of space. §.§ Asymptotic Theory Before we present asymptotic results of the proposed tests, we need a slightly stronger assumption than Assumption <ref> to regulate the joint behavior of partial sums for both samples. For some σ_1>0 and σ_2>0, we have 1/√(n)∑_t=1^⌊ nr⌋( d^2(Y_t^(1),μ^(1))-V^(1) d^2(Y_t^(2),μ^(2))-V^(2))⇒(σ_1B^(1)(r) σ_2B^(2)(r) ), where B^(1)(·) and B^(2)(·) are two standard Brownian motions with unknown correlation parameter ρ∈ (-1,1), and σ_1,σ_2≠ 0 are unknown parameters characterizing the long-run variance. Suppose Assumptions <ref>-<ref> (with <ref> replaced by <ref>) hold for both {Y_t^(1)}_t=1^n_1 and {Y_t^(2)}_t=1^n_2. Then as n→∞, under ℍ_0, for i=1,2, D_n,i→_d ξ^2_γ_1,γ_2(1;σ_1,σ_2)/∫_η^1[ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2)]^2dr:=𝒟_η, where ξ_γ_1,γ_2(r;σ_1,σ_2)=γ_1^-1σ_1B^(1)(γ_1r)-γ_2^-1σ_2B^(2)(γ_2r). Theorem <ref> obtains the same limiting null distribution for Fréchet variance based test D_n,1 and its power-augmented version D_n,2. Although D_n,2 contains contaminated variance T_n^C(1), its contribution is asymptotically vanishing as n→∞. This is an immediate consequence of the fact that sup_r∈[η,1]|√(n)T_n^C(r)|→_p0, see proof of Theorem <ref> in Appendix <ref>. Similar phenomenon has been documented in <cit.> under different assumptions. We next consider the power behavior under the Pitman local alternative, ℍ_an: V^(1)-V^(2)=n^-κ_VΔ_V, d^2(μ^(1),μ^(2))=n^-κ_MΔ_M, with Δ_V∈ℝ, Δ_M∈(0,∞), and κ_V,κ_M∈ (0,∞). Suppose Assumptions <ref>-<ref> (with <ref> replaced by <ref>) hold for both {Y_t^(1)}_t=1^n_1 and {Y_t^(2)}_t=1^n_2. As n→∞, under ℍ_an, * if max{κ_V,κ_M}∈(0,1/2), then for i=1,2, D_n,i→_p∞; * if min{κ_V,κ_M}∈(1/2,∞), then for i=1,2, D_n,i→_d𝒟_η; * if κ_V=1/2 and κ_M∈(1/2,∞), then for i=1,2, D_n,i→_d (ξ_γ_1,γ_2(1;σ_1,σ_2)+Δ_V)^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr; * if κ_V∈ (1/2,∞) and κ_M=1/2, then D_n,1→_d𝒟_η, and D_n,2→_d (ξ_γ_1,γ_2(1;σ_1,σ_2))^2+4K_d^2Δ_M^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr; * if κ_V=κ_M=1/2, then D_n,1→_d (ξ_γ_1,γ_2(1;σ_1,σ_2)+Δ_V)^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr, D_n,2→_d (ξ_γ_1,γ_2(1;σ_1,σ_2)+Δ_V)^2+4K_d^2Δ_M^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr; where K_d is defined in Assumption <ref>. Theorem <ref> presents the asymptotic behaviors for both test statistics under local alternatives in various regimes. In particular, D_n,1 can detect differences in Fréchet variance at local rate n^-1/2, but possesses trivial power against Fréchet mean difference regardless of the regime of κ_M. In comparison, D_n,2 is powerful for differences in both Fréchet variance and Fréchet mean at local rate n^-1/2, which validates our claim that D_n,2 indeed augments power. Our results merit additional remarks when compared with <cit.>. 
In <cit.>, they only obtain the consistency of their test under either n^1/2|V^(1)-V^(2)|→∞ or n^1/2d^2(μ^(1),μ^(2))→∞, while Theorem <ref> explicitly characterizes the asymptotic distributions of our test statistics under local alternatives of order O(n^-1/2), which depend on κ_V and κ_M. Such theoretical improvement relies crucially on our newly developed proof techniques based on Assumption <ref>, and it seems difficult to derive such limiting distributions under empirical-process-based assumptions in <cit.>. However, we do admit that self-normalization could result in moderate power loss compared with t-type test statistics, see <cit.> for evidence in Euclidean space. Note that the limiting distributions derived in <Ref> and <Ref> contain a key quantity ξ_γ_1,γ_2(r;σ_1,σ_2) defined in (<ref>), which depends on nuisance parameters σ_1,σ_2 and ρ. This may hinder the practical use of the tests. The following corollary, however, justifies the wide applicability of our tests. Under Assumption <ref>, if either γ_1=γ_2=1/2 or ρ=0, then for any constants C_a,C_b∈ℝ, (ξ_γ_1,γ_2(1;σ_1,σ_2)+C_a)^2+C_b^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr=_d (B(1)+C_a/C_ξ)^2+ (C_b/C_ξ)^2/∫_η^1(B(r)-rB(1))^2dr, where C_ξ=√(2σ_1^2+2σ_2^2-4ρσ_1σ_2), if γ_1=γ_2, √(σ_1^2/γ_1+σ_2^2/γ_2), if ρ=0. Therefore, by choosing C_a=C_b=0 in <Ref>, we obtain the pivotal limiting distribution 𝒟_η=_dB^2(1)/∫_η^1(B(r)-rB(1))^2dr. The asymptotic distributions in <Ref> can be similarly derived by letting either C_a=Δ_V or C_b=2K_dΔ_M. Therefore, when either two samples are of the same length (γ_1=γ_2) or two samples are asymptotically independent (ρ=0), the limiting distribution 𝒟_η is pivotal. In practice, we reject ℍ_0 if D_n,i>Q_𝒟_η(1-α) where Q_𝒟_η(1-α) denotes the 1-α quantile of (the pivotal) D_η. In Table <ref>, we tabulate commonly used critical values under various choices of η by simulating 50,000 i.i.d. 𝒩(0,1) random variables 10,000 times and approximating a standard Brownian motion by standardized partial sum of i.i.d. 𝒩(0,1) random variables. § CHANGE-POINT TEST Inspired by the two-sample tests developed in Section <ref>, this section considers the change-point detection problem for a sequence of random objects {Y_t}_t=1^n, i.e. ℍ_0: Y_1, Y_2, …, Y_n∼ P^(1) against the single change-point alternative, ℍ_a: there exists 0<τ<1 such that Y_t={[ Y_t^(1)∼ P^(1), 1≤ t≤⌊ nτ⌋; Y_t^(2)∼ P^(2), ⌊ nτ⌋ +1≤ t ≤ n. ]. The single change-point testing problem can be roughly viewed as two-sample testing without knowing where the two samples split, and they share certain similarities in terms of statistical methods and theory. Recall the Fréchet subsample mean μ̂_[a,b] and variance V̂_[a, b] in (<ref>), we further define the pooled contaminated variance separated by r∈(a,b) as V̂_[r ; a, b]^C=1/⌊ n r⌋-⌊ n a⌋∑_i=⌊ n a⌋+1^⌊ n r⌋ d^2(Y_i, μ̂_[r, b])+1/⌊ n b⌋-⌊ n r⌋∑_i=⌊ n r⌋+1^⌊ n b⌋ d^2(Y_i, μ̂_[a, r]). Define the subsample test statistics T_n(r ; a, b)=(r-a)(b-r)/b-a(V̂_[a, r]-V̂_[r, b]), and T_n^C(r ; a, b)=(r-a)(b-r)/b-a(V̂_[r ; a, b]^C-V̂_[a, r]-V̂_[r, b]). Note that T_n(r ; a, b) and T_n^C(r ; a, b) are natural extensions of T_n(r) and T_n^C(r) from two-sample testing problem to change-point detection problem by viewing {Y_t}_t=⌊ na⌋+1^⌊ nr⌋ and {Y_t}_t=⌊ nr⌋+1^⌊ nb⌋ as two separated samples. Intuitively, the contrast statistics T_n(r ; a, b) and T_n^C(r ; a, b) are expected to attain their maxima (in absolute value) when r is set at or close to the true change-point location τ. 
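As a small illustration, the contrast T_n(r;a,b) can be evaluated over a grid of candidate split points by reusing the Wasserstein-based representation from the sketch in the two-sample section. The helper below is again our own, assumes quantile-function inputs on a common grid, and is meant as a schematic rather than an optimized implementation.

    import numpy as np

    def frechet_var_w2(quantiles):
        # Fréchet sample variance under d_W for quantile functions on a common grid
        mu = quantiles.mean(axis=0)
        return float(np.mean((quantiles - mu) ** 2))

    def contrast_T(quantiles, a, b, r):
        """T_n(r; a, b) = (r-a)(b-r)/(b-a) * (V-hat_[a,r] - V-hat_[r,b])."""
        n = quantiles.shape[0]
        ia, ir, ib = int(np.floor(n * a)), int(np.floor(n * r)), int(np.floor(n * b))
        v_left = frechet_var_w2(quantiles[ia:ir])
        v_right = frechet_var_w2(quantiles[ir:ib])
        return (r - a) * (b - r) / (b - a) * (v_left - v_right)

    # Example: scan candidate change points over the full sample (a = 0, b = 1)
    # grid = np.linspace(0.15, 0.85, 15)
    # scores = [contrast_T(quantiles, 0.0, 1.0, r) for r in grid]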
§.§ Test Statistics For some trimming parameters η_1 and η_2 such that η_1>2η_2, and η_1∈(0,1/2), in the same spirit of D_n,1 and D_n,2, and with a bit abuse of notation, we define the testing statistics SN_i= max _⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k), i=1,2, where D_n,1(k)= n[T_n(k/n ; 0,1)]^2/ ∑_l=⌊nη_2⌋^k-⌊nη_2⌋ [T_n(l/n ; 0, k/n)]^2+ ∑_l=k+⌊nη_2⌋^n-⌊nη_2⌋ [T_n(l/n ; k/n, 1)]^2, D_n,2(k)= n{[T_n(k/n ; 0,1)]^2+[T_n^C(k/n ; 0,1)]^2}/ L_n(k)+R_n(k), with L_n(k)= ∑_l=⌊nη_2⌋^k-⌊nη_2⌋ {[T_n(l/n ; 0, k/n)]^2+[T^C_n(l/n ; 0, k/n)]^2 }, R_n(k)= ∑_l=k+⌊nη_2⌋^n-⌊nη_2⌋ {[T_n(l/n ; k/n, 1)]^2+ [T^C_n(l/n ; k/n, 1)]^2}. The trimming parameter η_1 plays a similar role as η in two-sample testing problem for stabilizing the estimation effect for relatively small sample sizes, while the additional trimming η_2 is introduced to ensure that the subsample estimates in the self-normalizers are constructed with the subsample size proportional to n. Furthermore, we note that the self-normalizers here are modified to accommodate for the unknown change-point location, see <cit.>, <cit.> for more discussion. §.§ Asymptotic Theory Suppose Assumptions <ref>-<ref> hold. Then, under ℍ_0, we have for i=1,2, SN_i=max _⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k) ⇒sup _r∈[η_1,1-η_1][B(r)-rB(1)]^2/V(r,η):=𝒮_η, where V(r,η)=∫_η_2^r-η_2 [B(u)-u/rB(r)]^2du+∫_r+η_2^1-η_2 [B(1)-B(u)-(1-u)/(1-r){B(1)-B(r)}]^2du. Similar to Theorem <ref>, Theorem <ref> states that both change-point test statistics have the same pivotal limiting null distribution 𝒮_η. The test is thus rejected when SN_i>Q_𝒮_η(1-α), i=1,2, where Q_𝒮_η(1-α) denotes the 1-α quantile of 𝒮_η. In Table <ref>, we tabulate commonly used critical values under various choices of (η_1,η_2) by simulations. Recall in Theorem <ref>, we have obtained the local power of two-sample tests D_n,1 and D_n,2 at rate n^-1/2. To this end, consider the local alternative ℍ_an: V^(1)-V^(2)=n^-1/2Δ_V, d^2(μ^(1),μ^(2))=n^-1/2Δ_M, where Δ_V∈ℝ and Δ_M∈(0,∞). The following theorem states the asymptotic power behaviors of SN_1 and SN_2. Suppose Assumptions <ref>-<ref> (with <ref> replaced by <ref>) hold. If Δ_V≠ 0 and Δ_M≠ 0 are fixed, then under ℍ_an, if τ∈(η_1,1-η_1), then as n→∞, we have lim_|Δ_V|→∞lim_n→∞{max _⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,1(k)}→_p∞, lim_max{|Δ_V|,Δ_M}→∞lim_n→∞{max _⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,2(k)}→_p∞. We note that <Ref> deals with the alternative involving two different sequences before and after the change-point, while <Ref> only involves one stationary sequence. Therefore, we need to replace <Ref> by <Ref>. <Ref> demonstrates that our tests are capable of detecting local alternatives at rate n^-1/2. In addition, it is seen from Theorem <ref> that SN_1 is consistent under the local alternative of Fréchet variance change as |Δ_V|→∞, while SN_2 is consistent not only under |Δ_V|→∞ but also under the local alternative of Fréchet mean change as Δ_M→∞. Hence SN_2 is expected to capture a wider class of alternatives than SN_1, and these results are consistent with findings for two-sample problems in Theorem <ref>. When ℍ_0 is rejected, it is natural to estimate the change-point location by τ̂_i=n^-1k̂_i, k̂_i=max_⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k), We will show that the estimators are consistent under the fixed alternative, i.e. ℍ_a: V^(1)-V^(2)=Δ_V. Before that, we need to regulate the behaviour of Fréchet mean and variance under ℍ_a. 
Let μ(α)= min_ω∈Ω{α𝔼(d^2(Y_t^(1),ω))+(1-α)𝔼(d^2(Y_t^(2),ω))}, V(α)= α𝔼(d^2(Y_t^(1),μ(α)))+(1-α)𝔼(d^2(Y_t^(2),μ(α))), be the limiting Fréchet mean and variance of two mixture distributions indexed by α∈[0,1]. μ(α) is unique for all α∈[0,1], and |V^(2)-V(α)|≥φ(α), |V^(1)-V(α)|≥φ(1-α), such that φ(α)≥ 0 is a continuous, strictly increasing function of α∈[0,1] satisfying φ(0)=0 and φ(1)≤ |Δ_V|. The uniqueness of Fréchet mean and variance for mixture distribution is also imposed in <cit.>, see Assumption (A2) therein. Furthermore, Assumption <ref> imposes a bi-Lipschitz type condition on V(α), and is used to distinguish the Fréchet variance V(α) under mixture distribution from V^(1) and V^(2). Suppose Assumptions <ref>-<ref> (with <ref> replaced by <ref>), and Assumption <ref> hold. Under ℍ_a, for i=1,2, we have τ̂_i→_pτ, where τ̂_i is defined in (<ref>). Theorem <ref> obtains the consistency of τ̂_i, i=1,2 when Fréchet variance changes. We note that it is very challenging to derive the consistency result when ℍ_a is caused by Fréchet mean change alone, which is partly due to the lack of explicit algebraic structure on (Ω,d) that we can exploit and the use of self-normalization. We leave this problem for future investigation. §.§ Wild Binary Segmentation To detect multiple change-points and identify the their locations given the time series {Y_t}_t=1^n, we can combine our change-point test with the so-called wild binary segmentation (WBS) <cit.>. The testing procedure in conjunction with WBS can be described as follows. Let I_M = { (s_m, e_m) }_m=1,2, …, M, where s_m, e_m are drawn uniformly from { 0, 1/n, 1/(n-1), …, 1/2, 1 } such that ⌈ n e_m ⌉ - ⌊ n s_m ⌋≥ 20. Then we simulate J i.i.d samples, each sample is of size n, from multivariate Gaussian distribution with mean 0 and identity covariance matrix, i.e., for j=1, 2, …, J, { Z^j_i }_i=1^n i.i.d.∼𝒩(0,1). For the jth sample { Z^j_i }_i=1^n, let D(k; s_m,e_m; {Z_i^j}_i=1^n) be the statistic D_⌊ n e_m ⌋ - ⌈ n s_m ⌉ +1, 2(k) that is computed based on sample { Z_⌈ n s_m ⌉^j, Z_⌈ n s_m ⌉ + 1^j, …, Z_⌊ n e_m ⌋^j } and ξ_j = max_1 ≤ m ≤ Mmax_⌊ñ_m η_1 ⌋≤ k ≤ñ_m - ⌊ñ_m η_1 ⌋D(k; s_m,e_m; {Z_i^j}_i=1^n), where ñ_m = ⌈ n e_m ⌉ - ⌊ n s_m ⌋ +1. Setting ξ as the 95% quantile of ξ_1, ξ_2, …, ξ_J, we can apply our test in combination with WBS algorithm to the data sequence {Y_1, Y_2, … Y_n} by running Algorithm <ref> as WBS(0, 1, ξ). The main rational behind this algorithm is that we exploit the asymptotic pivotality of our SN test statistic, and the limiting null distribution of our test statistic applied to random objects is identical to that applied to i.i.d 𝒩(0,1) random variables. Thus this threshold is expected to well approximate the 95% quantile of the finite sample distribution of the maximum SN test statistic on the M random intervals under the null. § SIMULATION In this section, we examine the size and power performance of our proposed tests in two-sample testing (Section <ref>), change-point detection (Section <ref>) problems, and provide simulation results of WBS based change-point estimation (Section <ref>). We refer to Appendix <ref> with additional simulation results regarding comparison with FPCA approach for two-sample tests in functional time series. The time series random objects considered in this section include (i). univariate Gaussian probability distributions equipped with 2-Wasserstein metric d_W; (ii). graph Laplacians of weighted graphs equipped with Frobenius metric d_F; (iii). 
covariance matrices <cit.> equipped with log-Euclidean metric d_E. Numerical experiments are conducted according to the following data generating processes (DGPs): (i) Gaussian univariate probability distribution: we consider Y_t^(1)=𝒩(arctan (U_t,1),[arctan(U_t,1^2)+1]^2), Y_t^(2)=𝒩(arctan (U_t,2)+δ_1, δ_2^2[arctan(U_t,2^2)+1]^2). (ii) graph Laplacians: each graph has N nodes (N=10 for two-sample test and N=5 for change-point test) that are categorized into two communities with 0.4N and 0.6N nodes respectively, and the edge weight for the first community, the second community and between community are set as 0.4+arctan(U_t,1^2), 0.2+arctan(U_t,1^'2), 0.1 for the first sample Y_t^(1), and δ_2[0.4+arctan(U_t,2^2)], δ_2[0.2+arctan(U_t,2^'2)], 0.1+δ_1 for the second sample Y_t^(2), respectively; (iii) covariance matrix: Y_t^(i)=(2I_3+Z_t,i)(2I_3+Z_t,i)^⊤, i=1,2, such that all the entries of Z_t,1 (resp. Z_t,2) are independent copies of arctan(U_t,1 ) (resp. δ_1+δ_2arctan(U_t,2)). For DGP (i)-(iii), (U_t,1,U_t,2)^⊤ (with independent copies (U'_t,1,U'_t,2)^⊤) are generated according to the following VAR(1) process, ( U_t,1 U_t,2)=ρ( U_t-1,1 U_t-1,2)+ϵ_t, ϵ_ti.i.d.∼𝒩(0,( 1 a a 1 )); where a∈{0,0.5} measures the cross-dependence, and ρ∈{-0.4,0,0.4,0.7} measures the temporal dependence within each sample (or each segment in change-point testing). For size evaluation in change-point tests, only {Y_t^(1)} is used. Furthermore, δ_1∈[0,0.3] and δ_2∈[0.7,1] are used to characterize the change in the underlying distributions. In particular, δ_1 can only capture the location shift, while δ_2 measures the scale change, and the case (δ_1,δ_2)=(0,1) corresponds to ℍ_0. For DGP (i) and (ii), i.e. Gaussian distribution with 2-Wasserstein metric d_W and graph Laplacians with Euclidean metric d_F, the location parameter δ_1 directly shifts Fréchet mean while keeping Fréchet variance constant; and the scale parameter δ_2 works on Fréchet variance only while holding the Fréchet mean fixed. For DGP (iii), i.e. covariance matrices, the log-Euclidean metric d_E operates nonlinearly, and thus changes in either δ_1 or δ_2 will be reflected on changes in both Fréchet mean and variance. The comparisons of our proposed methods with <cit.> for two-sample testing and <cit.> for change-point testing are also reported, which are generally referred to as DM. §.§ Two-Sample Test For the two-sample testing problems, we set the sample size as n_1=n_2∈{50,100,200,400}, and trimming parameter as η=0.15. Table <ref> presents the sizes of our tests and DM test for three DGPs based on 1000 Monte Carlo replications at nominal significance level α=5%. In all three subtables, we see that: (a) both D_1 and D_2 can deliver reasonable size under all settings; (b) DM suffers from severe size distortion when dependence magnitude among data is strong; (c) when two samples are dependent, i.e. a=0.5, DM is a bit undersized even when data is temporally independent. These findings suggest that our SN-based tests provide more accurate size relative to DM when either within-group temporal dependence or between-group dependence is exhibited. In Figure <ref>, we further compare size-adjusted power of our SN-based tests and DM test, in view of the size-distortion of DM. That is, the critical values are set as the empirical 95% quantiles of the test statistics obtained in the size evaluation, so that all curves start from the nominal level at 5%. For all settings, we note that D_2 is more powerful than (or equal to) D_1. 
In particular, D_1 has trivial power in DGP (i) and (ii) when only a Fréchet mean difference is present. In addition, D_2 is more powerful in detecting Fréchet mean differences than DM for DGP (i) and (ii), and beats DM in DGP (i) for detecting Fréchet variance differences, although it is slightly worse than DM in detecting Fréchet variance differences for DGP (ii) and (iii). Due to its robust size and power performance, we thus recommend D_2 for practical purposes. §.§ Change-Point Test For the change-point testing problems, we set the sample size n∈{200, 400, 800}, and the trimming parameters as (η_1,η_2)=(0.15,0.05). Table <ref> outlines the size performance of our tests and the DM test for three DGPs based on 1000 Monte Carlo replications at nominal significance level α=5%. DM tests based on the asymptotic critical value and on the bootstrap (with 500 replications) are denoted as DM^a and DM^b, respectively. From Table <ref>, we find that SN_1 always exhibits accurate size while SN_2 is a bit conservative. As a comparison, the tests based on DM^a and DM^b suffer from severe size distortion when strong temporal dependence is present, although DM^b is slightly better than DM^a in DGP (i) and (ii). In Figure <ref>, we plot the size-adjusted power of our tests and the DM test based on bootstrap calibration. Here, the size-adjusted power of DM^b is implemented following <cit.>. Similar to the findings in the two-sample tests, we find that SN_1 has trivial power in DGP (i) and (ii) when there is only a Fréchet mean change and is the worst among all three tests. Furthermore, SN_2 is slightly less powerful than DM, but the power loss is moderate. Considering its better size control, SN_2 is preferred. We further provide numerical evidence for the estimation accuracy by considering the alternative hypothesis of δ_1=1-δ_2=0.3 with true change-point location at τ=0.5 for DGP (i)-(iii) in the main context. Varying the sample size n∈{400,800,1600}, we find that for all DGPs the histograms of τ̂ (based on SN_2) plotted in Figure <ref> become more concentrated around the truth τ=0.5 as the sample size increases, which is consistent with our consistency theory for τ̂. §.§ Multiple Change Point Detection For simulations of multiple change point estimation, we consider non-Euclidean time series of length n=500 generated from the following two models. These models are the same as before, but reformulated for better presentation purposes. * Gaussian univariate probability distribution: Y_t=𝒩(arctan (U_t)+δ_t,1, δ_t,2^2[arctan(U_t^2)+1]^2). * covariance matrix: Y_t=(2I_3+Z_t)(2I_3+Z_t)^⊤ with Z_t= δ_t,1+δ_t,2arctan(U_t). Here, U_t are generated according to the AR(1) process U_t=ρ U_t-1+ϵ_t, ϵ_t i.i.d.∼𝒩(0,1). There are 3 change points at t=110, 250 and 370. The change-point locations are reflected in the definitions of {δ_t,1} and {δ_t,2}, where δ_t,1 = a_1 𝕀_{t ≤ 110 } + a_2 𝕀_{ 110 < t ≤ 250 } + a_3 𝕀_{ 250 < t ≤ 370 } + a_4 𝕀_{ 370 < t ≤ 500 }, δ_t,2 = b_1 𝕀_{ t ≤ 110 } + b_2 𝕀_{ 110 < t ≤ 250 } + b_3 𝕀_{ 250 < t ≤ 370 } + b_4 𝕀_{ 370 < t ≤ 500 }. For each model, we consider 3 cases that are differentiated by the magnitudes of a_i, b_i, i=1,2,3,4. For the data generating model of Gaussian distributions, we set * (a_1, a_2, a_3, a_4) = (0, 0.7, 0, 0.8), (b_1, b_2, b_3, b_4) = (1, 1.5, 0.7, 1.4); * (a_1, a_2, a_3, a_4) = (0, 0.2, 0, 0.3), (b_1, b_2, b_3, b_4) = (0.5, 1.5, 0.4, 1.4); * (a_1, a_2, a_3, a_4) = (0, 0.5, 1.5, 3.3), (b_1, b_2, b_3, b_4) = (0.2, 1.5, 3.8, 6.5).
As for the data generating model of covariance matrices, we set * (a_1, a_2, a_3, a_4) = (0, 1.2, 0, 1.3), (b_1, b_2, b_3, b_4) = (0.8, 1.5, 0.7, 1.6); * (a_1, a_2, a_3, a_4) = (0, 1, 0, 1), (b_1, b_2, b_3, b_4) = (0.5, 2, 0.4, 1.9); * (a_1, a_2, a_3, a_4) = (0, 2, 3.9, 5.7), (b_1, b_2, b_3, b_4) = (0.2, 0.7, 1.3, 2). Cases 1 and 2 correspond to non-monotone changes and Case 3 considers a monotone change. Here, our method described in Section <ref> is denoted as WBS-SN_2 (that is, a combination of WBS and our SN_2 test statistic). The method DM in conjunction with binary segmentation, referred to as BS-DM, is proposed in <cit.> and included in this simulation for comparison purposes. In addition, our statistic SN_2 in combination with binary segmentation, denoted as BS-SN_2, is implemented and included as well. The critical values for BS-DM and BS-SN_2 are obtained from their respective asymptotic distributions. The simulation results are shown in Table <ref>, where we present the ARI (adjusted Rand index) and the number of detected change points for two dependence levels ρ=0.3, 0.6. Note that ARI ∈ [0,1] measures the accuracy of change point estimation and a larger ARI corresponds to more accurate estimation. We summarize the main findings as follows. (a) WBS-SN_2 is the best method in general as it can accommodate both monotone and non-monotone changes, and appears quite robust to temporal dependence. For Cases 1 and 2, we see that BS-SN_2 does not work for non-monotone changes, due to the use of the binary segmentation procedure. (b) BS-DM tends to have more false discoveries compared to the other methods. This is expected, as the DM method is primarily proposed for i.i.d. data sequences and exhibits severe oversizing in the presence of temporal dependence, as seen in Section <ref>. (c) When we increase ρ from 0.3 to 0.6, the performance of WBS-SN_2 appears quite stable for both distributional time series and covariance matrix time series. § APPLICATIONS In this section, we present two real data illustrations, one for two-sample testing and the other for change-point detection. Both datasets are in the form of non-Euclidean time series and neither seems to have been analyzed before using techniques that take into account unknown temporal dependence. §.§ Two-sample tests Mortality data. Here we are interested in comparing the longevity of people living in different countries of Europe. From the Human Mortality Database (<https://www.mortality.org/Home/Index>), we can obtain a time series that consists of yearly age-at-death distributions for each country. We shall focus on distributions for females from 1960 to 2015, and there are 26 countries included in the analysis after excluding countries with missing data. Pairwise two-sample tests between the included countries are performed using our statistic D_2 to understand the similarity of age-at-death distributions between different countries. The resulting p-value matrix is plotted in Figure <ref> (left). To better present the testing results and gain more insights, we define the dissimilarity between two given countries by subtracting each p-value from 1. Treating these dissimilarities as “distances", we apply multidimensional scaling to “project" each country onto a two-dimensional plane for visualization. See Figure <ref> (right) for the plot of “projected" countries. A code sketch of this dissimilarity-and-MDS construction is given below.
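The sketch below is our own illustration, not the authors' code: the p-value matrix is assumed to have been produced by pairwise applications of the D_2 test, the country labels are placeholders, and classical multidimensional scaling from scikit-learn stands in for whatever MDS variant was actually used.

import numpy as np
from sklearn.manifold import MDS

def embed_countries(pvals, labels, seed=0):
    """Project countries onto the plane from pairwise two-sample test p-values.

    pvals  : (m, m) symmetric matrix of p-values from the D_2 test,
             with 1.0 on the diagonal (a country compared with itself).
    labels : list of m country names (placeholders here).
    """
    # Dissimilarity between two countries = 1 - p-value, as described in the text.
    diss = 1.0 - np.asarray(pvals, dtype=float)
    np.fill_diagonal(diss, 0.0)        # zero dissimilarity of a country to itself
    diss = (diss + diss.T) / 2.0       # enforce symmetry against numerical noise

    # Metric MDS on the precomputed dissimilarities ("project" onto 2D).
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=seed)
    coords = mds.fit_transform(diss)
    return dict(zip(labels, coords))

if __name__ == "__main__":
    # Made-up 3x3 p-value matrix, for illustration only.
    labels = ["Country A", "Country B", "Country C"]
    pvals = np.array([[1.00, 0.80, 0.05],
                      [0.80, 1.00, 0.10],
                      [0.05, 0.10, 1.00]])
    for name, xy in embed_countries(pvals, labels).items():
        print(name, xy)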
It appears that several western European countries, including the UK, Belgium, Luxembourg, Ireland, Austria, and Denmark, form a cluster; whereas several central and eastern European countries, including Poland, Latvia, Russia, Bulgaria, Lithuania, and Czechia, share similar distributions. We suspect that the similarity in mortality distributions is largely related to the similarity of their economic development and healthcare systems, and depends less on geographical location. §.§ Change point detection Cryptocurrency data. Detecting change points in the Pearson correlation matrices for a set of cryptocurrencies of interest can uncover structural breaks in the correlation of these cryptocurrencies and can play an important role in investors' investment decisions. Here, we construct the daily Pearson correlation matrices from minute-level prices of Bitcoin, Dogecoin, Cardano, Monero and Chainlink for the year 2021. The cryptocurrency data can be downloaded at <https://www.cryptodatadownload.com/analytics/correlation-heatmap/>. See Figure <ref> for the plot of the time series of pairwise correlations. Three methods, namely our SN_2 test combined with WBS (WBS-SN_2), the SN_2 test combined with binary segmentation (BS-SN_2), and the DM test of <cit.> in conjunction with binary segmentation (BS-DM), are applied to detect potential change points for this time series. Method WBS-SN_2 detects an abrupt change on 2021-05-17 and method BS-SN_2 detects a change point on 2021-04-29. By comparison, more than 10 change points are detected by BS-DM, and we suspect that many of them are false discoveries (see Section <ref> for simulation evidence of BS-DM's tendency toward over-detection). The change point in mid-May 2021 is well expected and corresponds to a major crash in the crypto market that wiped out 1 trillion dollars. The major causes of this crash are the withdrawal of Tesla's commitment to accept Bitcoin as payment and warnings regarding cryptocurrency sent by the Chinese central bank to financial institutions and businesses in China. Since this major crash, the market has been dominated by negative sentiment and fear of a recession. We refer to the following CNN article for some discussion of this crash: <https://www.cnn.com/2021/05/22/investing/crypto-crash-bitcoin-regulation/index.html>. § CONCLUSION Motivated by the increasing availability of non-Euclidean time series data, this paper considers two-sample testing and change-point detection for temporally dependent random objects. Our inferential framework builds upon the nascent SN technique, which has been mainly developed for conventional Euclidean time series or functional time series in Hilbert space, and the extension of SN to time series of objects residing in metric spaces is the first in the literature. The proposed tests are robust to weak temporal dependence, enjoy effortless tuning, and are broadly applicable to many non-Euclidean data types with easily verified technical conditions. On the theory front, we derive the asymptotic distributions of our two-sample and change-point tests under both the null and local alternatives of order O(n^-1/2). Furthermore, for the change-point problem, the consistency of the change-point estimator is established under mild conditions. Both simulation and real data illustrations demonstrate the robustness of our tests with respect to temporal dependence and their effectiveness in testing and estimation problems. To conclude, we mention several interesting but unsolved problems for analyzing non-Euclidean time series.
For example, although powerful against Fréchet mean differences/changes, the testing statistics developed in this paper rely on the asymptotic behaviors of Fréchet (sub)sample variances. It is imperative to construct formal tests that can target directly at Fréchet mean differences/changes. For the change-point detection problem in non-Euclidean data, the existing literature, including this paper, only derives the consistency of the change-point estimator. It would be very useful to derive explicit convergence rate and the asymptotic distribution of the change-point estimator, which is needed for confidence interval construction. Also it would be interesting to study how to detect structural changes when the underlying distributions of random objects change smoothly. We leave these topics for future investigation. § TECHNICAL PROOFS §.§ Auxiliary Lemmas We first introduce some notations. We denote o_up(·) as the uniform o_p(·) w.r.t. the partial sum index (a,b)∈ℐ_η. Let M_n(ω,[a,b])=n^-1∑_t=⌊ na⌋+1^⌊ nb⌋f_ω(Y_t), where f_ω(Y)=d^2(Y,ω)-d^2(Y,μ), then it is clear that μ̂_[a,b]=min_ω∈ΩM_n(ω,[a,b]). Let Ṽ_[a,b]=1/⌊ n b⌋-⌊ n a⌋∑_t=⌊ n a⌋+1^⌊ n b⌋ d^2(Y_t, μ). The following three main lemmas are verified under Assumption <ref>-<ref>, and they are used repeatedly throughout the proof for main theorems. sup_(a,b)∈ℐ_η√(n)d(μ̂_[a,b],μ)=O_p(1). (1). We first show the uniform convergence, i.e. sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ)=o_up(1). For any ϵ>0, define ψ(ϵ):=inf_d(ω,μ)>ϵ𝔼f_ω(Y), and we know by that ψ(ϵ)>0 by the uniqueness of μ in Assumption <ref>. Hence, let M(ω,[a,b])=(b-a)𝔼f_ω(Y), we have P(sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ)>ϵ) = P(⋃_(a,b)∈ℐ_η{d(μ̂_[a,b],μ)>ϵ}) ≤ P(⋃_(a,b)∈ℐ_η{M(μ̂_[a,b],[a,b])-inf_d(ω,μ)> ϵM(ω,[a,b])≥ 0}) ≤ P(⋃_(a,b)∈ℐ_η{M(μ̂_[a,b],[a,b])≥ηψ(ϵ)/2}) ≤ P(⋃_(a,b)∈ℐ_η{M(μ̂_[a,b],[a,b])-M_n(μ̂_[a,b],[a,b]) +M_n(μ,[a,b])-M(μ,[a,b])≥ηψ(ϵ)/2}) ≤ P(sup_(a,b)∈ℐ_ηsup_ω∈Ω|M_n(ω,[a,b])-M(ω,[a,b])|≥ηψ(ϵ)/4) where the first inequality holds because the event {d(μ̂_[a,b],μ)>ϵ} implies that μ̂_[a,b]∈{ω∈Ω:d(ω,μ)> ϵ}, and thus M(μ̂_[a,b],[a,b])≥inf_d(ω,μ)>ϵM(ω,[a,b]); the second inequality holds by b-a≥η (hence (⌊ nb⌋-⌊ na⌋)/n>η/2 for large n) and the definition of (<ref>) such that inf_d(ω,μ)>ϵM(ω,[a,b])=(b-a)ψ(ϵ)>ηψ(ϵ)/2; and the third holds by that M(μ,[a,b])=0 and M_n(μ,[a,b])≥ M_n(μ̂_[a,b],[a,b]). Note M_n(ω,[a,b])-M(ω,[a,b])=M_n(ω,[0,b])-M(ω,[0,b])-M_n(ω,[0,a])+M(ω,[0,a]). Therefore, it suffices to show the weak convergence of the process {M_n(ω,[0,u])-M(ω,[0,u]), u∈[0,1],ω∈Ω} to zero. Note the pointwise convergence holds easily by the boundedness of f_ω and Assumption <ref>, so we only need to show the stochastic equicontinuity, i.e. lim sup_n→∞P(sup_|u-v|<δ_1,d(ω_1,ω_2)<δ_2|M_n(ω_1,[0,u])-M(ω_1,[0,u]) -M_n(ω_2,[0,v])+M(ω_2,[0,v])|>ϵ)→ 0 as max(δ_1,δ_2)→ 0. Then, by triangle inequality, we have |M_n(ω_1,[0,u])-M(ω_1,[0,u])-M_n(ω_2,[0,v])+M(ω_2,[0,v])| ≤ |M_n(ω_1,[0,u])-M_n(ω_1,[0,v])|+|M_n(ω_1,[0,v])-M_n(ω_2,[0,v])| +|M(ω_1,[0,u])-M(ω_1,[0,v])|+|M(ω_1,[0,v])-M(ω_2,[0,v])| := ∑_i=1^4 R_n,i. Without loss of generality, we assume v>u, and by the boundedness of the metric, we have for some K>0, R_n,1≤n^-1∑_t=⌊nu⌋+1^⌊nv⌋d^2(Y_t,ω_1)≤K|u-v|≤Kδ_1. Similarly, R_n,3≤ K. Furthermore, we can see that R_n,2,R_n,4≤ 2diam(Ω)d(ω_1,ω_2)≤ Kδ_2. Hence, the result follows by letting δ_1 and δ_2 sufficiently small. Thus, the uniform convergence holds. (2). We then derive the convergence rate based on Assumption <ref>. By the consistency, we have for any δ>0, P(sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ)≤δ)→ 1. 
Hence, on the event that sup_(a,b)∈ℐ_ηd(μ̂_a,b,μ)≤δ, and note that M_n(μ,[a,b])=n^-1∑_t=⌊ na⌋+1^⌊ nb⌋[d^2(Y_t,μ)-d^2(Y_t,μ)]=0, we have 0= M_n(μ,[a,b]) ≥ M_n(μ̂_[a,b],[a,b]) = K_d⌊nb⌋-⌊na⌋/nd^2(μ̂_[a,b],μ) + n^-1∑_t=⌊na⌋+1^⌊nb⌋[g(Y_t,μ̂_[a,b],μ)+R(Y_t,μ̂_[a,b],μ)] ≥ K_d η/2d^2(μ̂_[a,b],μ) +d(μ̂_[a,b],μ)[ n^-1∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,μ̂_[a,b],μ)/d(μ̂_[a,b],μ)+o_up(n^-1/2+d(μ̂_[a,b],μ))], where the last inequality holds by Assumption <ref> and the fact (⌊ nb⌋-⌊ na⌋)/n>η/2 for large n. Note the above analysis holds uniformly for (a,b)∈ℐ_η, this implies that sup_(a,b)∈ℐ_η[K_d η/2d(μ̂_[a,b],μ)-o_up(d(μ̂_[a,b],μ))] ≤ n^-1/2 sup_(a,b)∈ℐ_η| n^-1/2∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,μ̂_[a,b],μ)/d(μ̂_[a,b],μ)|+o_up(n^-1/2)=O_p(n^-1/2), and hence sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ)=O_p(n^-1/2). sup_(a,b)∈ℐ_η√(n)|V̂_[a,b]-Ṽ_[a,b]|=o_p(1). By Lemma <ref>, and Assumption <ref>, we have sup_(a,b)∈ℐ_η√(n)M_n(μ̂_[a,b],[a,b]) ≤ K_dsup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ) sup_(a,b)∈ℐ_η|√(n)d(μ̂_[a,b],μ) + n^-1/2∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,μ̂_[a,b],μ)/d(μ̂_[a,b],μ)+o_up(1+√(n)d(μ̂_[a,b],μ))| = O_p(n^-1/2). Hence, we have that sup_(a,b)∈ℐ_η√(n)|V̂_[a,b]-Ṽ_[a,b]|≤η^-1sup_(a,b)∈ℐ_η√(n)M_n(μ̂_[a,b],[a,b]), the result follows. Let V̂^C_[a,b](ω̃)=1/⌊ nb⌋-⌊ na ⌋∑_t=⌊ na⌋+1^⌊ nb⌋d^2(Y_i,ω̃), where ω̃∈Ω is a random object such that √(n)sup_(a,b)∈ℐ_ηd(ω̃,μ̂_[a,b])=O_p(1). Then, √(n)sup_(a,b)∈ℐ_η|V̂^C_[a,b](ω̃)-Ṽ_[a,b]|=o_p(1). By triangle inequality and Lemma <ref>, √(n)sup_(a,b)∈ℐ_η|V̂^C_[a,b](ω̃)-Ṽ_[a,b]| = sup_(a,b)∈ℐ_η|√(n)/⌊n b⌋-⌊n a⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ d^2(Y_t, ω̃)-d^2(Y_i,μ)| ≤ (η/2)^-1sup_(a,b)∈ℐ_η√(n)M_n(ω̃,[a,b]). Note by triangle inequality for the metric, d(ω̃,μ)≤ d(μ̂_[a,b],μ)+d(ω̃,μ̂_[a,b])=O_p(n^-1/2), and we know that d(ω̃,μ)<δ with probability tending to 1, and on this event, by Assumption <ref>, √(n)M_n(ω̃,[a,b]) ≤ K_dd^2(ω̃,μ) +n^-1| ∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,ω̃,μ)| +n^-1|∑_t=⌊na⌋+1^⌊nb⌋R(Y_t,ω̃,μ)|. Similar to the proof of Lemma <ref>, we get the result. §.§ Proof of Theorems in Section <ref> Let Ṽ^(1)_r=1/⌊ rn_1⌋∑_t=1^⌊ rn_1⌋d^2(Y_t^(1),μ^(1)), and Ṽ^(2)_r=1/⌊ rn_2⌋∑_t=1^⌊ rn_2⌋d^2(Y_t^(2),μ^(2)). For each r∈[η,1], we consider the decomposition, √(n)T_n(r)= √(n)r(V̂^(1)_r-V̂^(2)_r) = √(n)r(V̂^(1)_r-Ṽ^(1)_r+Ṽ^(1)_r-V^(1)) -√(n)r(V̂^(2)_r-Ṽ^(2)_r+Ṽ^(2)_r-V^(2)) +√(n)r(V^(1)-V^(2)) := R_n,1(r)+R_n,2(r)+R_n,3(r). and √(n)T_n^C(r)= √(n)r(V̂^C,(1)_r-Ṽ^(1)_r)-√(n)r(V̂^(1)_r-Ṽ^(1)_r) +√(n)r(V̂^C,(2)_r -Ṽ^(2)_r)-√(n)r(V̂^(2)_r-Ṽ^(2)_r) := R^C_n,1(r)+R^C_n,2(r)+R^C_n,3(r)+R^C_n,4(r). By Lemma <ref>, sup_r∈[η,1]√(n)r(V̂^(1)_r-Ṽ^(1)_r)=o_p(1), sup_r∈[η,1]√(n)r(V̂^(2)_r-Ṽ^(2)_r)=o_p(1), i.e. {R^C_n,2(r)+R^C_n,4(r)}_r∈[η,1]⇒ 0. Furthermore, by Assumption <ref>, √(n)r(V̂^(1)_r-V^(1))⇒γ_1^-1σ_1B^(1)(γ_1r), √(n)r(V̂^(2)_r-V^(2))⇒γ_2^-1σ_2B^(2)(γ_2r). This implies that {R_n,1(r)+R_n,2(r)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)}_r∈[η,1]. §.§ Proof of Theorem <ref> Under ℍ_0, R_n,3(r)≡ 0, and μ^(1)=μ^(2)=μ. Hence, by (<ref>) and (<ref>), we obtain that {√(n)T_n(r)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)}_r∈[η,1]. Next, by Lemma <ref>, can obtain that √(n)sup_r∈[η,1]d(μ̂^(1)_r,μ)=o_p(1), √(n)sup_r∈[η,1]d(μ̂^(2)_r,μ)=o_p(1). Hence, by Lemma <ref>, we have {R^C_n,1(r)+R^C_n,3(r)}_r∈[η,1]⇒ 0. Together with (<ref>), we have {√(n)T^C_n(r)}_r∈[η,1]⇒ 0. Hence, by continuous mapping theorem, for both i=1,2, D_n,i→_d ξ^2_γ_1,γ_2(1;σ_1,σ_2)/∫_η^1[ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2)]^2dr. §.§ Proof of Theorem <ref> In view of (<ref>) and (<ref>), {√(n)T_n(r)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)+rn^-κ_V+1/2Δ_V}_r∈[η,1]. Hence * For κ_V ∈(1/2,∞), {√(n)T_n(r)}⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)}_r∈[η,1]. 
* For κ_V=1/2, {√(n)T_n(r)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)+rΔ_V}_r∈[η,1]. * For κ_V∈(0,1/2), √(n)T_n(1)→_p∞, and {√(n)T_n(r)-√(n)rT_n(1)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2)}_r∈[η,1]. Next, we focus on √(n)T_n^C(r). When κ_M∈ (0,∞), it holds that d(μ^(1),μ^(2))=O(n^-κ_M/2)=o(1), and by triangle inequality, for any r∈[η,1], |d(μ^(1),μ^(2))-d(μ̂^(2)_r,μ^(2))|≤ d(μ̂^(2)_r,μ^(1))≤ |d(μ^(1),μ^(2))+d(μ̂^(2)_r,μ^(2))|. By Lemma <ref>, we have sup_r∈[η,1]d(μ̂^(2)_r,μ^(2))=O_p(n^-1/2). This and (<ref>) imply that * when κ_M∈(1/2,∞), d^2(μ̂^(2)_r,μ^(1))=o_up(n^-1/2); * when κ_M∈(0,1/2], d^2(μ̂^(2)_r,μ^(1))=d^2(μ^(1),μ^(2))+o_up(n^-1/2)=n^-κ_MΔ_M+o_up(n^-1/2). Similarly, * when κ_M∈(1/2,∞), d^2(μ̂^(1)_r,μ^(2))=o_up(n^-1/2); * when κ_M∈(0,1/2], d^2(μ̂^(1)_r,μ^(2))=n^-κ_MΔ_M+o_up(n^-1/2). Furthermore, by Assumption <ref>, equations (<ref>) and (<ref>), we obtain √(n)T_n^C(r)=R_n,1^C(r)+R_n,3^C(r)+o_up(1) = √(n)K_d rd^2(μ̂^(2)_r,μ^(1))+ rd(μ̂^(2)_r,μ^(1))[n^-1/2∑_t=1^⌊γ_1nr⌋g(Y_t^(1),μ̂^(2)_r,μ^(1))/d(μ̂^(2)_r,μ^(1))] +o_up(d(μ̂^(2)_r,μ^(1))+√(n)d^2(μ̂^(2)_r,μ^(1))) +√(n)K_dr d^2(μ̂^(1)_r,μ^(2))+ rd(μ̂^(1)_r,μ^(2))[n^-1/2∑_t=1^⌊γ_2nr⌋g(Y_t^(2),μ̂^(1)_r,μ^(2))/d(μ̂^(1)_r,μ^(2))] +o_up(d(μ̂^(1)_r,μ^(2))+√(n)d^2(μ̂^(1)_r,μ^(2))) +o_up(1). * For κ_M ∈(1/2,∞), d^2(μ̂^(2)_r,μ^(1))=o_up(n^-1/2), and d^2(μ̂^(1)_r,μ^(2))=o_up(n^-1/2). Hence, {√(n)T_n^C(r)}_r∈[η,1]⇒ 0. * For κ_M=1/2, we note that d^2(μ̂^(2)_r,μ^(1))=n^-1/2Δ_M+o_up(1), and d^2(μ̂^(1)_n,μ^(2))=n^-1/2Δ_M+o_up(1). Hence, {√(n)T_n^C(r)}_r∈[η,1]⇒{2rK_dΔ_M}_r∈[η,1], and {√(n)[T_n^C(r)-rT_n^C(1)]}_r∈[η,1]⇒ 0. * For κ_M∈(0,1/2), we multiply n^2κ_M-1 on both denominator and numerator of D_n,2, and obtain D_n,2=n^2κ_M{[T_n(1)]^2+[T_n^C(1)]^2}/n^-1∑_k=⌊ nη⌋^n n^2κ_M{[T_n(k/n)-k/nT_n(1)]^2+[T_n^C(k/n)-k/nT^C_n(1)]^2}. Note that n^κ_M-1/2→0, as n→∞, we obtain that {n^κ_M[T_n(r)-rT_n(1)]}_r∈[η,1]⇒ 0. Furthermore, in view of (<ref>), we obtain n^κ_MT^C_n(r)= n^κ_Mr(K_d+o_up(1))[d^2(μ̂^(2)_r,μ^(1))+d^2(μ̂^(1)_r,μ^(2))]+o_up(1), By arguments below (<ref>), we know that n^κ_Md^2(μ̂^(2)_r,μ^(1))=Δ_M+o_up(n^κ_M-1/2)=Δ_M+o_up(1). And similarly, n^κ_Md^2(μ̂^(1)_r,μ^(2))=Δ_M+o_up(1). We thus obtain that {n^κ_M T^C_n(r)-rT^C_n(1)}_r∈[η,1]⇒ 0, and n^κ_MT_n^C(1)→_p 2K_dΔ_M. Therefore, (<ref>) and (<ref>) implies that the denominator of (<ref>) converges to 0 in probability, while (<ref>) implies the numerator of (<ref>) is larger than a positive constant in probability, we thus obtain D_n,2→_p∞. Summarizing the cases of κ_V and κ_M, and by continuous mapping theorem, we get the result. §.§ Proof of <Ref> When γ_1=γ_2=1/2, it can be shown that ξ_γ_1,γ_2(r;σ_1,σ_2)=2σ_1B^(1)(r/2)-2σ_2B^(1)(r/2)=_d √(2σ_1^2+2σ_2^2-4ρσ_1σ_2)B(r); and when ρ=0. ξ_γ_1,γ_2(r;σ_1,σ_2)=_d √(σ_1^2/γ_1+σ_2^2/γ_2)B(r). The result follows by the continuous mapping theorem. §.§ Proof of Theorems in Section <ref> With a bit abuse of notation, we define ℐ_η={(a,b): 0≤ a<b≤ 1, b-a≥η_2 } and 𝒥_η={(r;a,b): 0≤ a<r<b≤ 1, b-r≥η_2, r-a≥η_2 }. §.§ Proof of Theorem <ref> For (r;a,b)∈𝒥_η, we note that √(n)T_n(r;a,b) = √(n){(r-a)(b-r)/(b-a)(V̂_[a, r]-Ṽ_[a,r]+Ṽ_[a,r]-V)} -√(n){(r-a)(b-r)/(b-a)(V̂_[r, b]-Ṽ_[r, b]+Ṽ_[r, b]-V)}. By Lemma <ref> we know that sup_(a,r)∈ℐ_η√(n)|V̂_[a, r]-Ṽ_[a,r]|=o_p(1), sup_(r,b)∈ℐ_η√(n)|V̂_[r,b]-Ṽ_[r,b]|=o_p(1), and by Assumption <ref>, {√(n)(r-a)(Ṽ_[a,r]-V)}_(a,r)∈ℐ_η⇒{σ[B(r)-B(a)]}_(a,r)∈ℐ_η, {√(n)(b-r)(Ṽ_[r,b]-V)}_(r,b)∈ℐ_η⇒{σ[B(b)-B(r)]}_(r,b)∈ℐ_η. Hence, {√(n)T_n(r;a,b)}_(r;a,b)∈𝒥_η ⇒ σ{ (b-r)/(b-a)[B(r)-B(a)]-(r-a)/(b-a)[B(b)-B(r)]}_(r;a,b)∈𝒥_η. 
Furthermore, we note that √(n)T_n^C(r;a,b) = (b-r)/(b-a)n^-1/2{∑_i=⌊n a⌋+1^⌊n r⌋ [d^2(Y_i, μ̂_[r, b])-d^2(Y_i, μ)] - ∑_i=⌊n a⌋+1^⌊n r⌋ [d^2(Y_i, μ̂_[a,r])-d^2(Y_i, μ)]} + (r-a)/(b-a)n^-1/2∑_i=⌊n r⌋+1^⌊n b⌋ {[d^2(Y_i, μ̂_[a, r])-d^2(Y_i, μ)] - ∑_i=⌊n r⌋+1^⌊n b⌋ [d^2(Y_i, μ̂_[r, b])-d^2(Y_i, μ)]}+o_up(1) where o_up(1) is the rounding error due to [n(r-a)]^-1-[⌊ nr⌋-⌊ na⌋]^-1 and [n(b-r)]^-1-[⌊ nb⌋-⌊ nr⌋]^-1. Note by Lemma <ref>, we know that sup_(a,r)∈ℐ_ηd(μ̂_[a, r],μ)=O_p(n^-1/2) and sup_(r,b)∈ℐ_ηd(μ̂_[r, b],μ)=O_p(n^-1/2), hence by Lemma <ref> and <ref>, we obtain sup_(r;a,b)∈𝒥_η|√(n)T_n^C(r;a,b)|=o_p(1). The result follows by continuous mapping theorem. §.§ Proof of Theorem <ref> Note for any k=⌊ nη_1⌋,⋯,n-⌊ nη_1⌋, and i=1,2, max_⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k)≥ D_n,i(⌊ nτ⌋). We focus on k^*=⌊ nτ⌋. In this case, the left and right part of the self-normalizer are both from stationary segments, hence by similar arguments as in ℍ_0, {√(n)T_n(r ; 0, τ)}_r∈[η_2,τ-η_2]⇒{σ_1𝒢_1(r; 0, τ) }_r∈[η_2,τ-η_2], {√(n)T^C_n(r ; 0, τ)}_r∈[η_2,τ-η_2]⇒ 0; and {√(n)T_n(r ; τ, 1)}_r∈[τ+η_2,1-η_2]⇒{σ_2𝒢_2(r;τ, 1) }_r∈[τ+η_2,1-η_2], {√(n)T^C_n(r ; τ, 1)}_r∈[η_2,τ-η_2]⇒ 0, where 𝒢_i(r;a,b)=(b-r)/(b-a)[B^(i)(r)-B^(i)(a)]-(r-a)/(b-a)[B^(i)(b)-B^(i)(r)] for i=1,2. Hence, we only need to consider the numerator, where √(n)T_n(τ;0,1)=√(n)τ(1-τ)(V̂_[0, τ]-V̂_[τ, 1]), √(n)T_n^C(τ;0,1)=√(n)τ(1-τ)(V̂_[τ; 0, 1]^C-V̂_[0,τ]-V̂_[τ, 1]). For √(n)T_n(τ;0,1), we have √(n)T_n(τ;0,1)= √(n){τ(1-τ)(V̂_[0, τ]-Ṽ_[0,τ]+Ṽ_[0,τ]-V^(1))} -√(n){τ(1-τ)(V̂_[τ, 1]-Ṽ_[τ, 1]+Ṽ_[τ, 1]-V^(2))} +√(n)τ(1-τ)(V^(1)-V^(2)) = T_11+T_12+T_13. By Lemma <ref>, we know that √(n)(V̂_[0,τ]-Ṽ_[0,τ])=o_p(1), and by Assumption <ref>, we have √(n)τ(Ṽ_[0,τ]-V^(1))→_d σ_1B^(1)(τ). This implies that T_11→_d (1-τ)σ_1B^(1)(τ). Similarly, we can obtain T_12→_d -τσ_2[B^(2)(1)-B^(2)(τ)]. Hence, using the fact that √(n)(V^(1)-V^(2))=Δ_V, we obtain √(n)T_n(τ;0,1)→_d(1-τ)σ_1B^(1)(τ)-τσ_2[B^(2)(1)-B^(2)(τ)]+τ(1-τ)Δ_V. For √(n)T_n^C(τ;0,1) we have √(n)T_n^C(τ;0,1) = (1-τ)n^-1/2{ ∑_i=1^⌊n τ⌋ [d^2(Y_i, μ̂_[τ,1])- d^2(Y_i, μ^(1))] - ∑_i=1^⌊n τ⌋[ d^2(Y_i, μ̂_[0,τ])- d^2(Y_i, μ^(1))]} +τn^-1/2 {∑_i=⌊n τ⌋+1^n [d^2(Y_i, μ̂_[0,τ])-d^2(Y_i, μ^(2))] -∑_i=⌊n τ⌋+1^n [d^2(Y_i, μ̂_[τ,1])-d^2(Y_i, μ^(2))]}+o_p(1) := T_21+T_22+T_23+T_24+o_p(1), where o_p(1) is the rounding error due to (nτ)^-1-⌊ nτ⌋^-1 and [n(1-τ)]^-1-(n-⌊ nτ⌋)^-1. Note by Lemma <ref>, we have d(μ̂_[0,τ],μ^(1))=O_p(n^-1/2), and by triangle inequality, we know that d(μ̂_[τ,1],μ^(1))≤ d(μ̂_[τ,1],μ^(2))+d(μ^(1),μ^(2))=O_p(n^-1/4). Then, by Assumption <ref>, we know T_21 = √(n)(1-τ)τK_d d^2(μ̂_[τ,1],μ^(1)) +(1-τ) d(μ̂_[τ,1],μ^(1))[n^-1/2∑_i=1^⌊nτ⌋g(Y_i,μ̂_[τ,1],μ^(1))/d(μ̂_[τ,1],μ^(1))] +o_p(d(μ̂_[τ,1],μ^(1))+√(n)d^2(μ̂_[τ,1],μ^(1))) = √(n)(1-τ)τK_dd^2(μ̂_[τ,1],μ^(1))+O_p(n^-1/4)+o_p(1). Now, by triangle inequality, we know √(n)[d(μ̂_[τ,1],μ^(2))-d(μ^(1),μ^(2))]^2≤√(n)d^2(μ̂_[τ,1],μ^(1)) ≤√(n)[d(μ̂_[τ,1],μ^(2))+d(μ^(1),μ^(2))]^2, and note d(μ̂_[τ,1],μ^(2))=O_p(n^-1/2) by Lemma <ref>, we obtain √(n)d^2(μ̂_[τ,1],μ^(1))=Δ_M+o_p(1), and T_21=(1-τ)τ K_dΔ_M+o_p(1). By Lemma <ref>, T_22=o_p(1). Hence T_21+T_22=(1-τ)τ K_dΔ_M+o_p(1). Similarly, we obtain that T_23+T_24=(1-τ)τ K_dΔ_M+o_p(1). Therefore, √(n)T_n^C(τ;0,1)=2τ(1-τ)K_dΔ_M+o_p(1). 
Hence, combining results of (<ref>)–(<ref>), we have D_n,1(⌊nτ⌋) →_d[(1-τ)σ_1B^(1)(τ)-τσ_2[B^(2)(1)-B^(2)(τ)]+τ(1-τ)Δ_V]^2/[∫_η_2^r-η_2 σ_1^2𝒢_1^2(u ; 0, r) d u+∫_r+η_2^1-η_2 σ_2^2𝒢_2^2(u ; r, 1) d u] := 𝒮_η,1(τ;Δ), and, D_n,2(⌊ nτ⌋) →_d [(1-τ)σ_1B^(1)(τ)-τσ_2[B^(2)(1)-B^(2)(τ)]+τ(1-τ)Δ_V]^2+4[τ(1-τ)Δ_M]^2/[∫_η_2^r-η_2σ_1^2𝒢_1^2(u ; 0, r) d u+∫_r+η_2^1-η_2σ_2^2𝒢_2^2(u ; r, 1) d u] := 𝒮_η,2(τ;Δ). Therefore, we know that for the 1-α quantile of 𝒮_η, denoted by Q_1-α(𝒮_η), for i=1,2, lim_n→∞ P(max_⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k)≥ Q_1-α(𝒮_η)) ≥ lim_n→∞ P(D_n,i(⌊ nτ⌋)≥ Q_1-α(𝒮_η)) = P(𝒮_η,i(τ;Δ)≥ Q_1-α(𝒮_η)). In particular, lim_|Δ_V|→∞P(𝒮_η,1(τ;Δ)≥Q_1-α(𝒮_η))=1, lim_max{|Δ_V|,Δ_M}→∞P(𝒮_η,2(τ;Δ)≥Q_1-α(𝒮_η))=1. §.§ Proof of Theorem <ref> Define the pointwise limit of μ̂_[a,b] under ℍ_a as μ_[a,b]= μ^(1), b≤τ min_ω∈Ω{(τ-a)𝔼d^2(Y_t^(1),ω)+(b-τ)𝔼d^2(Y_t^(2),ω)}, a<τ<b μ^(2), τ≤ a Define the Fréchet variance and pooled contaminated variance under ℍ_a as V_[a,b]= V^(1) b≤τ τ-a/b-a𝔼(d^2(Y_t^(1),μ_[a,b]))+b-τ/b-a𝔼(d^2(Y_t^(2),μ_[a,b])), a<τ<b V^(2), τ≤ a, and V^C_[r;a,b]= V^(1) b≤τ τ-a/r-a𝔼(d^2(Y_t^(1),μ_[r,b]))+r-τ/r-a𝔼(d^2(Y_t^(2),μ_[r,b]))+𝔼(d^2(Y_t^(2),μ_[a,r])), a<τ≤r 𝔼(d^2(Y_t^(1),μ_[r,b]))+τ-r/b-r𝔼(d^2(Y_t^(1),μ_[a,r]))+b-τ/b-r𝔼(d^2(Y_t^(2),μ_[a,r])), r<τ<b V^(2), τ≤a. We want to show that {T_n(r;a,b)}_(r;a,b)∈𝒥_η ⇒{T(r;a,b)}_(r;a,b)∈𝒥_η, {T^C_n(r;a,b)}_(r;a,b)∈𝒥_η ⇒{T^C(r;a,b)}_(r;a,b)∈𝒥_η, where T(r;a,b)=(r-a)(b-r)/b-a(V_[a, r]-V_[r, b]), T^C(r;a,b)=(r-a)(b-r)/b-a(V_[r ; a, b]^C-V_[a, r]-V_[r, b]). To achieve this, we need to show (1). sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ_[a,b])=o_p(1); (2). sup_(a,b)∈ℐ_η|V̂_[a,b]-V_[a,b]|=o_p(1); and (3). sup_(r;a,b)∈𝒥_η|V̂^C_[r;a,b]-V^C_[r;a,b]|=o_p(1). (1). The cases when b≤τ and a≥τ follow by Lemma <ref>. For the case when τ∈(a,b), recall μ̂_[a, b]= min_ω∈Ω1/⌊nb⌋-⌊na⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ d^2(Y_t, ω) = min_ω∈Ω{n/⌊nb⌋-⌊na⌋1/n ∑_t=⌊n a⌋+1^⌊n τ⌋ d^2(Y_t^(1), ω) +n/⌊nb⌋-⌊na⌋ 1/n ∑_t=⌊n τ⌋+1^⌊n b ⌋ d^2(Y_t^(2), ω)}. By the proof of (1) in Lemma <ref>, for i=1,2, we have {1/n∑_t=1^⌊ n u ⌋ d^2(Y_t^(i), ω)-u𝔼d^2(Y_t^(i),ω)}_ω∈Ω,u∈[0,1]⇒ 0, which implies that {n/⌊nb⌋-⌊na⌋1/n ∑_t=⌊n a⌋+1^⌊n τ⌋ d^2(Y_t^(1), ω) +n/⌊nb⌋-⌊na⌋ 1/n ∑_t=⌊n τ⌋+1^⌊n b ⌋ d^2(Y_t^(2), ω)}_ω∈Ω,(a,b)∈ℐ_η ⇒{τ-a/b-a𝔼(d^2(Y_t^(1),ω)+b-τ/b-a𝔼(d^2(Y_t^(2),ω))}_ω∈Ω,(a,b)∈ℐ_η. By Assumption <ref>, and the argmax continuous mapping theorem (Theorem 3.2.2 in <cit.>), the result follows. (2). The cases when b≤τ and a≥τ follows by Lemma <ref>. For the case when τ∈(a,b), we have for some constant K>0 sup_(a,b)∈ℐ_η|V̂_[a,b]-V_[a,b]| ≤ sup_(a,b)∈ℐ_η(1/⌊nb⌋-⌊na⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ |d^2(Y_t, μ̂_[a,b])-d^2(Y_t, μ_[a,b])|) +sup_(a,b)∈ℐ_η|1/⌊nb⌋-⌊na⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ d^2(Y_t, μ_[a,b])-V_[a,b]| ≤ sup_(a,b)∈ℐ_η(1/⌊nb⌋-⌊na⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ K|d(Y_t, μ̂_[a,b])-d(Y_t, μ_[a,b])|)+o_p(1) ≤ sup_(a,b)∈ℐ_ηKd(μ̂_[a,b],μ_[a,b])+o_p(1)=o_p(1) where the second inequality holds by the boundedness of the metric and (<ref>), and the third inequality holds by the triangle inequality of the metric. (3). The proof is similar to (2). By continuous mapping theorem, we obtain that for i=1,2, {D_n,i(⌊ nr⌋)}_r∈[η_1,1-η_1]⇒{D_i(r)}_r∈[η_1,1-η_1], where D_1(r)= [T(r;0,1)]^2/∫_η_2^r-η_2[T(u;0,r)]^2du+∫_r+η_2^1-η_2[T(u;r,1)]^2du, D_2(r)= [T(r;0,1)]^2+[T^C(r;0,1)]^2/∫_η_2^r-η_2[T(u;0,r)]^2+[T^C(u;0,r)]^2du+∫_r+η_2^1-η_2[T(u;r,1)]^2+[T^C(u;r,1)]^2du. In particular, at r=τ, we obtain D_i(τ)=∞. Hence, to show the consistency of τ̂, it suffices to show that for any small ϵ>0, if |r-τ|>ϵ, D_i(r)<∞. By symmetry, we consider the case of r-τ>ϵ. 
For r-τ>ϵ, we note that for both i=1,2, sup_r-τ>ϵD_i(r)≤sup_r{[T(r;0,1)]^2+[T^C(r;0,1)]^2}/inf_r-τ>ϵ∫_η_2^r-η_2[T(u;0,r)]^2du. By proof of Proposition 1 in <cit.>, we obtain that for some universal constant K>0, sup_r{[T(r;0,1)]^2+[T^C(r;0,1)]^2}≤K(Δ^2_M+Δ^2_V)<∞. Therefore, it suffices to show that there exists a function ζ(ϵ)>0, such that for any r-τ>ϵ, ∫_η_2^τ-η_2[T(u;0,r)]^2du>ζ(ϵ). For r>τ, and for any u∈[η_2,τ-η_2], T(u;0,r) = u(r-u)/r(V^(1)-V_[u,r]) = u(r-u)/r[V^(1)-τ-u/r-u𝔼(d^2(Y_t^(1),μ_[u,r]))-r-τ/r-u𝔼(d^2(Y_t^(2),μ_[u,r]))] = u(r-u)/r[V^(1)-V(τ-u/r-u)]. By Assumption <ref>, we can obtain that |T(u;0,r)|>u(r-u)/rφ(ϵ/r-u)≥η_2^2φ(ϵ). Hence, we can choose ζ(ϵ)=η_2^6φ^2(ϵ). § EXAMPLES As we have mentioned in the main context, since d^2(Y_t,ω) takes value in ℝ for any fixed ω∈Ω, both Assumption <ref> and <ref> could be implied by high-level weak temporal dependence conditions in conventional Euclidean space. Therefore, we only discuss the verification of Assumption <ref>, <ref> and <ref> in what follows. §.§ Example 1: L_2 metric d_L for square integrable functions defined on [0,1] Let Ω be the Hilbert space of all square integrable functions defined on I=[0,1] with inner product ⟨ f,g⟩=∫_If(t)g(t)dt for two functions f,g∈Ω. Then, for the corresponding norm f=⟨ f,f⟩^1/2, L_2 metric is defined by d_L^2(f,g)=∫_I[f(t)-g(t)]^2dt. Assumptions <ref> and <ref> follows easily by the Riesz representation theorem and convexity of Ω. We only consider Assumption <ref>. Note that d_L^2(Y,ω)-d_L^2(Y,μ)= ∫_0^1 [ω(t)-μ(t)][ω(t)+μ(t)-2Y(t)]dt = d_L^2(ω,μ)+2∫_0^1 [ω(t)-μ(t)][μ(t)-Y(t)]dt := d_L^2(ω,μ)+g(Y,ω,μ), and R(Y,ω,μ)≡ 0. Furthermore, |n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋g(Y_i,ω,μ)| = |2∫_0^1 [ω(t)-μ(t)]n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋[Y_i(t)-μ(t)]dt| ≤ 2d_L(ω,μ) {∫_0^1 |n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋[Y_i(t)-μ(t)]|^2dt}^1/2, where the inequality holds by Cauchy-Schwarz inequality. By the boundedness of d_L(ω,μ), Assumption <ref> then follows if sup_t∈[0,1]sup_(a,b)∈ℐ_η|n^-1/2∑_i=⌊ n a⌋+1^⌊ n b⌋[Y_i(t)-μ(t)]|=O_p(1), which holds under general weak temporal dependence for functional observations, see, e.g. <cit.>. §.§ Example 2: 2-Wasserstein metric d_W of univariate CDFs Let Ω be the set of univariate CDF function on ℝ, consider the 2-Wasserstein metric defined by d_W^2(G_1,G_2)=∫_0^1 (G_1(t)-G_2(t))^2dt, where G_1 and G_2 are two inverse CDFs or quantile functions. The verification of Assumption <ref> and <ref> can be found in Proposition C.1 in <cit.>. Furthermore, by similar arguments as Example 1, Assumption <ref> holds under weak temporal dependence conditions, see <cit.>. §.§ Example 3: Frobenius metric d_F for graph Laplacians or covariance matrices Let Ω be the set of graph Laplacians or covariance matrices of a fixed dimension r, with uniformly bounded diagonals, and equipped with the Frobenius metric d_F, i.e. d_F^2(Σ_1,Σ_2)=tr[(Σ_1-Σ_2)^⊤(Σ_1-Σ_2)]. for two r× r matrices Σ_1 and Σ_2. The verification of Assumption <ref> and <ref> can be found in Proposition C.2 in <cit.>. We only consider Assumption <ref>. Note that d_F^2(Y,ω)-d_F^2(Y,μ)= tr(ω-μ)^⊤(ω+μ-2Y) = d_F^2(ω,μ)+2tr(ω-μ)^⊤(μ-Y) := d_F^2(ω,μ)+g(Y,ω,μ), and R(Y,ω,μ)≡ 0. Furthermore, by Cauchy-Schwarz inequality, |n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋g(Y_i,ω,μ)| = 2|tr[(ω-μ)^⊤n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋(Y_i-μ)]| ≤ 2d_F(ω,μ) d_F(n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋[Y_i-μ],0). By the boundedness of d_F(ω,μ), Assumption <ref> then follows if sup_(a,b)∈ℐ_ηn^-1/2∑_i=⌊ n a⌋+1^⌊ n b⌋vec(Y_i-μ)=O_p(1), which holds under common weak dependence conditions in conventional Euclidean space. 
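As a numerical companion to Examples 1-3 above (our own illustration, not part of the paper), the following sketch computes the 2-Wasserstein distance between two univariate Gaussians via discretized quantile functions, the Frobenius distance between two matrices, and the empirical Fréchet mean and variance of a matrix sample under the Frobenius metric, where the Fréchet mean reduces to the entrywise average; the grid size is an arbitrary implementation choice.

import numpy as np
from scipy.stats import norm

def wasserstein2_gaussian(mu1, sd1, mu2, sd2, n_grid=1000):
    """2-Wasserstein distance between two univariate Gaussians via quantile functions:
    d_W^2(G1, G2) = int_0^1 (G1^{-1}(u) - G2^{-1}(u))^2 du, approximated on a grid."""
    u = (np.arange(n_grid) + 0.5) / n_grid          # midpoints of [0, 1]
    q1 = norm.ppf(u, loc=mu1, scale=sd1)            # quantile (inverse CDF) functions
    q2 = norm.ppf(u, loc=mu2, scale=sd2)
    return np.sqrt(np.mean((q1 - q2) ** 2))

def frobenius_distance(A, B):
    """Frobenius metric d_F between two graph Laplacians or covariance matrices."""
    return np.linalg.norm(A - B, ord="fro")

def frechet_mean_variance_frobenius(mats):
    """Empirical Fréchet mean and variance of a matrix sample under d_F.
    Under the Frobenius metric the Fréchet mean is the entrywise average."""
    mats = np.asarray(mats)
    mean = mats.mean(axis=0)
    var = np.mean([frobenius_distance(M, mean) ** 2 for M in mats])
    return mean, var

if __name__ == "__main__":
    print(wasserstein2_gaussian(0.0, 1.0, 0.5, 1.2))   # N(0,1) vs N(0.5, 1.2^2)
    rng = np.random.default_rng(0)
    sample = [np.eye(3) + 0.1 * rng.standard_normal((3, 3)) for _ in range(50)]
    sample = [(S + S.T) / 2 for S in sample]           # symmetrize
    _, fvar = frechet_mean_variance_frobenius(sample)
    print(fvar)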
§.§ Example 4: Log-Euclidean metric d_E for covariance matrices Let Ω be the set of all positive-definite covariance matrices of dimension r, with uniformly both upper and lower bounded eigenvalues, i.e. for any Σ∈Ω, c≤λ_min(Σ)≤λ_max(Σ)≤ C for some constant 0<c<C<∞. The log-Euclidean metric is defined by d_E^2(Σ_1,Σ_2)=d_F^2(log_mΣ_1,log_mΣ_2), where log_m is the matrix-log function. Note that log_mΣ has the same dimension as Σ, hence the verification of Assumptions <ref>, <ref> and <ref> follows directly from Example 3. § FUNCTIONAL DATA IN HILBERT SPACE Our proposed tests and DM test are also applicable to the inference of functional data in Hilbert space, such as L_2[0,1], since the norm in Hilbert space naturally corresponds to the distance metric d. In a sense, our methods can be regarded as fully functional <cit.> since no dimension reduction procedure is required. In this section, we further compare them with SN-based testing procedure by <cit.> for comparing two sequences of temporally dependent functional data, i.e. {Y_t^(i)}_t=1^n_i i=1,2, defined on [0,1]. The general idea is to first apply FPCA, and then compare score functions (for mean) or covariance operators (for covariance) between two samples in the space spanned by leading K eigenfunctions. SN technique is also invoked to account for unknown temporal dependence. Although the test statistic in <cit.> targets at the difference in covariance operators of {Y_t^(1)} and {Y_t^(2)}, their test can be readily modified to testing the mean difference. To be specific, denote μ^(i) as the mean function of Y_t^(i), t=1,⋯,n_i, i=1,2, we are interested in testing ℍ_0: μ^(1)(x)=μ^(2)(x), ∀ x∈[0,1]. We assume the covariance operator is common for both samples, which is denoted by C_p. By Mercer’s Lemma, we have C_p=∑_j=1^∞λ_p^jϕ_p^j⊗ϕ_p^j, where {λ^j_p}_j=1^∞ and {ϕ^j_p}_j=1^∞ are the eigenvalues and eigenfunctions respectively. By the Karhunen-Loève expansion, Y_t^(i)=μ^(i)+∑_j=1^∞η_t,j^(i)ϕ^j_p, t=1,⋯,n_i;  i=1,2, where {η_t,j^(i)} are the principal components (scores) defined by η_t,j^(i)=∫_[0,1]{Y_t^(i)-μ^(i)}ϕ^j_p(x)dx=∫_[0,1]{Y_t^(i)-μ_p+μ_p-μ^(i)}ϕ^j_p(x)dx with μ_p=γ_1μ^(1)+γ_2μ^(2). Under ℍ_0, μ^(1)=μ^(2)=μ_p, and η_t,j^(i) should have mean zero. We thus build the SN based test by comparing empirical estimates of score functions. Specifically, define the empirical covariance operator based on the pooled samples as Ĉ_p= 1/n_1+n_2(∑_t=1^n_1𝒴^(1)_t+∑_t=1^n_2𝒴^(2)_t), where 𝒴^(i)_t= Y_t^(i)⊗ Y_t^(i), i=1,2. Denote by {λ̂^j_p}_j=1^∞ and {ϕ̂^j_p}_j=1^∞ the corresponding eigenvalues and eigenfunctions. We define the empirical scores (projected onto the eigenfunctions of pooled covariance operator) for each functional observation as η̂^(i)_t,j=∫_[0,1]{Y_t^(i)(x)-μ̂_p(x)}ϕ̂^j_p(x)dx, t=1,⋯,n_i;  i=1,2;   j=1,⋯, K, where μ̂_p=(∑_t=1^n_1Y_t^(1)+∑_t=1^n_2Y_t^(2))/n is the pooled sample mean function. Let η̂^(i,K)_t,(K)=(η̂^(i)_t,1,⋯,η̂^(i)_t,K)^⊤, and α̂^(K)(r)=(⌊ rn_1⌋)^-1∑_t=1^⌊ rn_1⌋η̂^(1,K)_t-(⌊ rn_2⌋)^-1∑_t=1^⌊ rn_2⌋η̂^(2,K)_t as the difference of recursive subsample mean of empirical scores, we consider the test statistic as ZSM= n[α̂^(K)(1)]^⊤{∑_k=1^nk^2/n^2[α̂^(K)(k/n) -α̂^(K)(1)][α̂^(K)(k/n)-α̂^(K)(1)]^⊤}^-1[α̂^(K)(1)], and under ℍ_0 with suitable conditions, it is expected that ZSM→_d B_K(1)^⊤{∫_0^1(B_K(r)-r B_K(1))(B_K(r)-r B_K(1))^⊤d r}^-1 B_K(1), where B_K(·) is a K-dimensional vector of independent Brownian motions. 
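To make this FPCA-based competitor concrete, here is a rough sketch (ours, not the authors' implementation) of the ZSM statistic just described; it assumes equal sample sizes n_1 = n_2 = n, curves observed on a common grid, and it omits a constant grid-spacing factor that cancels in the self-normalized ratio.

import numpy as np

def zsm_statistic(Y1, Y2, K=2):
    """Self-normalized two-sample mean test for functional data (sketch).

    Y1, Y2 : arrays of shape (n, p) -- n curves observed on a common grid of p points,
             assuming equal sample sizes n_1 = n_2 = n for simplicity.
    K      : number of leading eigenfunctions of the pooled covariance operator.
    """
    n, p = Y1.shape
    pooled = np.vstack([Y1, Y2])
    mu_pool = pooled.mean(axis=0)                  # pooled sample mean function
    Z = pooled - mu_pool
    # Pooled covariance operator on the grid; its eigenvectors approximate eigenfunctions.
    C = Z.T @ Z / (2 * n)
    _, eigvecs = np.linalg.eigh(C)
    phi = eigvecs[:, ::-1][:, :K]                  # top-K eigenvectors as columns
    # Empirical scores: projections of centered curves onto the eigenfunctions
    # (the constant grid-spacing factor of the integral is omitted; it cancels below).
    eta1 = (Y1 - mu_pool) @ phi                    # shape (n, K)
    eta2 = (Y2 - mu_pool) @ phi
    # Recursive subsample mean differences alpha_hat(k/n), k = 1, ..., n.
    counts = np.arange(1, n + 1)[:, None]
    alpha = np.cumsum(eta1, axis=0) / counts - np.cumsum(eta2, axis=0) / counts
    alpha_full = alpha[-1]                         # alpha_hat(1)
    # Self-normalizer: sum_k (k/n)^2 (alpha(k/n)-alpha(1))(alpha(k/n)-alpha(1))^T.
    W = np.zeros((K, K))
    for k in range(1, n + 1):
        diff = alpha[k - 1] - alpha_full
        W += (k / n) ** 2 * np.outer(diff, diff)
    return float(n * alpha_full @ np.linalg.solve(W, alpha_full))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    grid = np.linspace(0, 1, 50)
    Y1 = rng.standard_normal((200, 1)) * np.sin(2 * np.pi * grid) + 0.1 * rng.standard_normal((200, 50))
    Y2 = rng.standard_normal((200, 1)) * np.sin(2 * np.pi * grid) + 0.1 * rng.standard_normal((200, 50))
    print(zsm_statistic(Y1, Y2, K=2))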
Consider the following model taken from <cit.>, Y_t(x)= ∑_j=1^3{ξ^j, 1_t √(2) sin(2 πj x)+ξ^j, 2_t √(2) cos(2 πj x)}, t=1,2, …,n_1, where the coefficients ξ_t=(ξ^1,1_t, ξ^2,1_t, ξ^3,1_t, ξ^1,2_t, ξ^2,2_t, ξ^3,2_t)^' are generated from a VAR(1) process, ξ_t= ρξ_t-1+√(1-ρ^2) e_t, e_t i.i.d.∼ 𝒩(0,1/2 diag(𝐯)+1/2 1_6)∈ℝ^6 with v=(12, 7, 0.5, 9, 5, 0.3)^⊤. To compare the size and power performance, we generate independent functional time series {Y_t^(1)} and {Y_t^(2)} from the above model, and modify {Y_t^(2)} according to the following settings: * (Case 1m) Y_t^(2)(x)= Y_t(x)+20δ_1sin(2π x), x∈[0,1]; * (Case 1v) Y_t^(2)(x)= Y_t(x)+20δ_2η_tsin(2π x), x∈[0,1]; * (Case 2m) Y_t^(2)(x)= Y_t(x)+20δ_1x, x∈[0,1]; * (Case 2v) Y_t^(2)(x)= Y_t(x)+20δ_2η_tx, x∈[0,1]; * (Case 3m) Y_t^(2)(x)= Y_t(x)+20δ_11(x∈[0,1]); * (Case 3v) Y_t^(2)(x)= Y_t(x)+20δ_2η_t1(x∈[0,1]); where η_t i.i.d.∼𝒩(0,1) and δ_1,δ_2∈[0,0.3]. The size performance of all tests is evaluated by setting δ_1=δ_2=0. As for the power performance, Cases 1m-3m with δ_1∈(0,0.3] correspond to alternatives caused by mean differences and Cases 1v-3v with δ_2∈(0,0.3] correspond to covariance operator differences. In particular, we note that the alternatives of Cases 1m and 1v depend on the signal function f(x)=sin(2π x), x∈[0,1], which is in the space spanned by the eigenfunctions of Y_t(x), while for Cases 3m and 3v, the signal function f(x)=1(x∈[0,1]) is orthogonal to these eigenfunctions. We denote the two-sample mean test and covariance operator test based on <cit.> as ZSM and ZSV respectively. The empirical sizes of all tests are outlined in Table <ref> at nominal level α=5%. From this table, we see that (a) D_1 has accurate size across all model settings and D_2 is generally reliable for moderate dependence levels, albeit with some oversizing for small n when ρ=0.7; (b) DM suffers from severe size distortion when temporal dependence is exhibited, even for large n; (c) although both ZSM and ZSV utilize SN to robustify the tests against temporal dependence, we find that their performance depends heavily on the user-chosen parameter K, and they still suffer from size distortion when n is small. In particular, the size distortion when K=4 is considerably larger than that for K=2 in the presence of temporal dependence. Figure <ref> further compares their size-adjusted powers when n_1=n_2=400 and ρ=0.4. As can be seen, D_1 possesses trivial power against mean differences while D_2 is rather stable in all settings, with evident advantages in Cases 2m and 3m. In contrast, the power performances of DM, ZSM and ZSV vary among different settings. For example, when the alternative signal function is in the span of the leading eigenfunctions, i.e. Cases 1m and 1v, ZSM and ZSV with K=2 can deliver the (second) best power performance as expected, while they are dominated by other tests when the alternative signal function is orthogonal to the eigenfunctions in Cases 3m and 3v. As for DM, it is largely dominated by D_2 in terms of mean differences, although it exhibits a moderate advantage over D_2 for covariance operator differences. In general, whether or not the difference in the mean/covariance operator is orthogonal to the leading eigenfunctions is unknown to the user. Our test D_2 is robust to unknown temporal dependence, exhibits quite accurate size and delivers comparable power in all settings, and thus should be preferred in practice. A simulation sketch of the functional data generating process used in this comparison is given below.
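A minimal simulation sketch (ours, not the authors' code) of the data generating process above; the grid of 100 evaluation points and the reading of 1_6 as the 6×6 matrix of ones are our assumptions, and only the Case 1m/1v perturbations are shown.

import numpy as np

def simulate_fts(n, rho, seed=0):
    """Simulate Y_t(x) = sum_{j=1}^3 [xi^{j,1}_t sqrt(2) sin(2 pi j x)
    + xi^{j,2}_t sqrt(2) cos(2 pi j x)] with VAR(1) coefficients, on a grid of 100 points."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0, 1, 100)
    v = np.array([12, 7, 0.5, 9, 5, 0.3])
    Sigma = 0.5 * np.diag(v) + 0.5 * np.ones((6, 6))   # innovation covariance (assumed form)
    chol = np.linalg.cholesky(Sigma)
    basis = np.vstack([np.sqrt(2) * np.sin(2 * np.pi * j * x) for j in (1, 2, 3)]
                      + [np.sqrt(2) * np.cos(2 * np.pi * j * x) for j in (1, 2, 3)])  # (6, 100)
    xi = np.zeros(6)
    Y = np.empty((n, x.size))
    for t in range(n):
        xi = rho * xi + np.sqrt(1 - rho ** 2) * (chol @ rng.standard_normal(6))
        Y[t] = xi @ basis
    return x, Y

def perturb_case1(Y, x, delta1=0.0, delta2=0.0, seed=1):
    """Case 1m (deterministic mean shift) and Case 1v (random scale) perturbations:
    Y_t(x) + 20*delta1*sin(2*pi*x) + 20*delta2*eta_t*sin(2*pi*x), eta_t ~ N(0,1)."""
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal(Y.shape[0])
    signal = np.sin(2 * np.pi * x)
    return Y + 20 * delta1 * signal + 20 * delta2 * eta[:, None] * signal

if __name__ == "__main__":
    x, Y1 = simulate_fts(n=400, rho=0.4, seed=0)
    _, Y2 = simulate_fts(n=400, rho=0.4, seed=10)
    Y2 = perturb_case1(Y2, x, delta1=0.1)   # Case 1m alternative with delta_1 = 0.1
    print(Y1.shape, Y2.shape)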
http://arxiv.org/abs/2307.04492v1
20230710113046
Calculating Originality of LLM Assisted Source Code
[ "Shipra Sharma", "Balwinder Sodhi" ]
cs.SE
[ "cs.SE" ]
Calculating Originality of LLM Assisted Source Code Shipra Sharma [email protected] Balwinder Sodhi Department of Computer Science and Engineering Indian Institute of Technology Ropar India [email protected] ========================================================================================================================================================================== The ease of using a Large Language Model (LLM) to answer a wide variety of queries and their high availability has resulted in LLMs getting integrated into various applications. LLM-based recommenders are now routinely used by students as well as professional software programmers for code generation and testing. Though LLM-based technology has proven useful, its unethical and unattributed use by students and professionals is a growing cause of concern. As such, there is a need for tools and technologies which may assist teachers and other evaluators in identifying whether any portion of a source code is LLM generated. In this paper, we propose a neural network-based tool that instructors can use to determine the original effort (and LLM's contribution) put by students in writing source codes. Our tool is motivated by minimum description length measures like Kolmogorov complexity. Our initial experiments with moderate sized (up to 500 lines of code) have shown promising results that we report in this paper. LLM, ChatGPT, plagiarism in education, automation in CSE education, Minimum Description Length § INTRODUCTION With the advent of Large Language Models (LLM) models such as ChatGPT, several coding tasks have become easy to complete via use of such LLMs. Such tasks include programming assignments in courses, generating subroutines and code fragments for commonly encountered algorithmic tasks, and so on. For example, programming assignments in many Computer Science and Engineering (CSE) courses can be generated in large measure <cit.> via these models. It has become very difficult to detect by standard plagiarism detection tools such as Turnitin <cit.>, that such source code is LLM generated. Even a complex assignment can be broken into simpler components, and each component can be written separately using such LLMs. Given this situation, it is highly desirable to construct a tool which can detect unauthorized or unattributed LLM help taken by the students in preparing their coding assignments. Usage of such LLM-assisted coding tools is recommended as the engineers/students may be required by the employers to be conversant with the use of such tools <cit.>. Although the LLM-based coding assistant tools seem to reply correctly to complex queries akin to an expert, they still lack the conceptual understanding of the queries as well as the results generated by the tool. The major shortcoming of these tools is lack of deep reasoning and analytical skills <cit.>. Hence, before we begin to resolve the difficulties mentioned above, we should first be able to measure (at least approximately) the amount of originality in an assignment. Motivated by the above, and by potential applications in the domain of Software Engineering, we consider the following research questions in this paper. RQ1RQ 1 Can we quantify the amount of original contribution by a student in an assignment, assuming that he/she has used an LLM such as ChatGPT for its preparation? 
RQ2RQ 2 How can we detect the similarity in the original contribution portion of two separate submissions when it is known that the students can take assistance from LLM-based tools in creating the submissions? RQ3RQ 3 How efficiently can we automate our answers to the above questions? In this paper, we propose two scores: the originality score o(D) and the similarity score s(D) of a source code D as solutions to the above questions. We further propose to use these scores extensively in an adaptable teaching process as follows: * Students with less measure of original contribution in their assignments (i.e., less originality scores) may be awarded suitably reduced scores. * Students with large amounts of overlap in their respective contributions (i.e., high similarity scores) may not be awarded extra “originality credits”. * More credits may be allocated to the “difficult” fragments of the program (or, assignment submission), and lesser credits may be allocated to the “easier” fragments of the program (or, assignment submission). These steps will lead to a constructive assessment of students, which encourages the students to develop original and high-depth analytic thinking. The above discussed scenario is one of the many applications of our work. Others are its usage in software development as these LLM-based models cannot replace software engineers (as of now), but can assist them <cit.>. § COMPUTING ORIGINALITY SCORE OF A PROGRAM §.§ Setting up the problem Suppose a programmer has unlimited access to a large language model 𝒜 (𝒜 can be ChatGPT, GPT-J, etc.). The programmer constructs a software program D using (see Figure <ref>): * the answers A_1, A_2, …, A_z to a sequence P_1, P_2, …, P_z of z prompts to 𝒜, and * the programmer's own original contribution 𝒪. Program D is finally constructed by combining A_1, A_2, …, A_z and 𝒪 using conventional text editing, rearrangements, etc. To be more specific, a conventional plagiarism detection software (say, Turnitin) will detect high similarity between the strings D and the corpus {A_1, A_2, …, A_n, 𝒪}. We define the following metrics: * total effort e(D) of the programmer as the total length of all prompts and the programmer's original contribution: e(D) = ∑_i=1^z |P_i| + |𝒪| * originality score o(D) (0 ≤ o(D) ≤ 1) of the program: o(D) = |𝒪|/|D| Our assumption is that a lower originality score would imply a lower original contribution by the programmer. Any programmer or student using LLM models to assist in writing programs implicitly minimizes e(D) and in turn also minimizes o(D). This motivates the following question. Question 1. Given a document D and LLM 𝒜, calculate the minimum originality score o(D). (This corresponds to <ref>). §.§ Solving <ref> To solve Question 1 we bound the maximum number of prompts z, which is a positive integer and the maximum length L of each prompt (P_1, P_2, …, P_z). We now formulate a bounded version of Question 1 above: Question 1.1. Compute the minimum value of the originality score o(D), under the assumption that the programmer can give at most z prompts, each of length at most L. Let T be a conventional plagiarism detector (a trivial one to use could be the diff command in UNIX-based systems). Figure <ref> illustrates the algorithm for solving Question 1.1. The program D in Figure <ref> forms the input to a neural network N. The output of N is of size z · L, and corresponds to the z unknown prompts to LLM 𝒜. The output of N is given as input to LLM 𝒜 to obtain answers A_1, A_2, …, A_z. 
A conventional plagiarism detector T is used to find the similarity percentage t between D and the output answers (A_1, A_2, …, A_n). The original contribution 𝒪 is estimated by removing the parts of D which match with the output answers. Finally, the output (originality score) u is equal to |𝒪|/|D|. If the similarity percentage between D and (A_1, A_2, …, A_n) is t, the originality score is expected to be approximately 1 - 0.01 · t[as t is percentage score we convert it to a number between 0 and 1 by multiplying by 0.01]. The output originality score u is given as the feedback to neural network N, with the objective of minimizing u. Remark. Please note that giving the same prompt again to an LLM can generate somewhat different answers. To cover all possibilities, our model allows for the same prompt to be repeated more than once in the sequence P_1, P_2, …, P_z. §.§ Applying the minimum description length (MDL) principle The minimum description length (MDL) principle <cit.> is a well-known principle for model selection. The MDL principle always selects the shortest description of given data, from the set of all possible descriptions. The quantity Γ=(P_1, P_2, …, P_z, 𝒪) (see Section <ref>) can be viewed as the content comprising of prompts plus the original code added by the student that results in the desired program as the output from an LLM. Thus, Γ can be thought to represent a description of D, which can lead to generation of the desired code. In other words, given the description Γ and LLM 𝒜, we can reconstruct program D almost completely. Our proposed solution (see Section <ref>) can then be viewed as an application of the MDL principle. For each possible description Γ, our algorithm selects the description with minimum “length", where the length of a description Γ is defined as its originality score |𝒪|/|D|. § COMPUTING SIMILARITY SCORE OF TWO PROGRAMS §.§ Setting up the problem Suppose two programmers Alice and Bob produce programs D_1 and D_2 respectively. Both programs solve the same computational problem, and both Alice and Bob had unlimited access to LLM 𝒜 during the coding process. Suppose Alice constructed D_1 using prompts P_1, P_2, …, P_z and original contribution 𝒪_1. Similarly, suppose Bob constructed D_2 using prompts Q_1, Q_2, …, Q_z and original contribution 𝒪_2. Let p be the similarity percentage between the two descriptions, Γ_1=(P_1, P_2, …, P_z, 𝒪_1) and Γ_2=(Q_1, Q_2, …, Q_z, 𝒪_2) using the conventional plagiarism detector T. Then we define similarity score, s(D_1, D_2) = 0.01 · p We now state the second question considered in this paper: Question 2. Given two source codes D_1 and D_2 and LLM 𝒜, calculate the similarity score s(D_1, D_2). (This corresponds to <ref>.) §.§ Solving <ref> In analogy with our approach for originality score, we consider a bounded version of Question 2: Question 2.1. Given two source codes D_1 and D_2, compute the maximum value of similarity score s(D_1, D_2), under the assumption that both Alice and Bob can give at most z prompts, each of length at most L. Figure <ref> illustrates the algorithm for solving Question 2.1: Source codes D_1 and D_2 are the inputs to two neural networks N_1 and N_2. The output of each neural network is of size z · L. The output of N_1 corresponds to the z unknown prompts of Alice and the output of N_2 corresponds to the z unknown prompts of Bob. Next, the outputs of N_1 and N_2 are given as input to LLM 𝒜 to generate answers A_1, A_2, …, A_z and B_1, B_2, …, B_z respectively. 
Using algorithm T, we compute the original contribution 𝒪_1 of Alice for prompts P_1, P_2, …, P_z and the original contribution 𝒪_2 of Bob for prompts Q_1, Q_2, …, Q_z. Finally, the similarity s between (P_1, P_2, …, P_z, 𝒪_1) and (Q_1, Q_2, …, Q_z, 𝒪_2) is computed using T, and this is used as feedback for both neural networks N_1 and N_2. The objective of the training process is to maximize (see Question 2.1) the output similarity s. Remark 1. In our implementation, we input (D_1, D_2) to a single neural network N, with ouput (P_1, P_2, …, P_z, Q_1, Q_2, …, Q_z). The intuition is that a single neural network may lead to faster convergence due to information flow along cross connections between input neurons of D_1 and D_2. Remark 2. In terms of MDL principle, the above network tries to compute the shortest description ((P_1, P_2, …, P_z, 𝒪_1), (Q_1, Q_2, …, Q_z, 𝒪_2)) of (D_1, D_2), where the “length" of the description is defined as the similarity score of T on inputs (P_1, P_2, …, P_z, 𝒪_1) and (Q_1, Q_2, …, Q_z, 𝒪_2). § PREVIOUS WORK Kolmogorov complexity and related measures. When the algorithm 𝒜 is a universal Turing machine (instead of a LLM), the minimum length description of program P is called its Kolmogorov complexity <cit.>. In <cit.>, the authors propose that neural network models such as GPT-3 have a “simplicity bias" and prefer data with low Kolmogorov complexity. Kolmogorov complexity inspired measures have a long history of application in similarity detection and compression. In <cit.>, the authors define a similarity metric called Normalized Information Distance (NID), based on Kolmogorov complexity. Since Kolmogorov complexity is non-computable, the authors further develop the notion of Normalized Compression Distance (NCD), which is an efficiently computable variant of NID using compression algorithms like gzip. More in-depth treatment of this topic is available in <cit.> and related papers. Autoencoders. An autoencoder <cit.> is a neural network which first compresses the input using an encoder network and then tries to recover the input from the compressed code by using a decoder network <cit.>. For the use of minimum description length (MDL) principle for autoencoders, see <cit.>. In the algorithm proposed in this paper (Figure <ref>), the neural network N can be viewed as the encoder, and the LLM 𝒜 can be viewed as the decoder. Further, note that only the encoder is trained using feedback from the output. AI-detection tools. We briefly discuss few recent softwares for detecting whether a text is generated by a LLM or written by a human. An AI text classifier by OpenAI, the company behind ChatGPT, is now available <cit.>. The classifier outputs the probability that a given input text is AI-generated. GPTZero <cit.> is another AI-detection tool, which also provides scores for burstiness and perplexity <cit.>. Another well-known tool is Originality.AI <cit.>. § PRELIMINARY EXPERIMENTS AND VISION FOR FUTURE WORK For an initial experimental setup for the proposed ideas, we designed a prompt space 𝒫 of size 64. Each prompt in this space is defined by a tuple of three words taken from independent sets A, B, C. Each of A, B and C contains words taken from common programming vocabulary encountered while describing the programs. For our experiments we chose |A|=8, |B|=2, |C|=4. For example, if the prompt is (“insertion", “sort", “C"), it is equivalent to writing a prompt: . We generated a pool of 10 answers to this prompt using calls to ChatGPT and BLOOM. 
The BLOOM model was run on a Macintosh, while ChatGPT was prompted through API calls. This gave us a collection of 64 · 10 = 640 (prompt, answer) pairs. We store this set in an offline repository ℛ, which we used to train a neural network N using PyTorch. For each answer, the neural network was trained with the following loss function: generate two prompts independently at random from the output probability distribution and calculate their similarity with the answer. Next, we collected a test set 𝒯 of 50 programs. Each program D in 𝒯 was manually evaluated for similarity with the repository. Accordingly, an originality score o(D) was assigned to every program in 𝒯 using the formulas discussed in Section <ref>. The neural network N takes as input a source code D∈𝒯 and the output is a probability distribution over the prompt space 𝒫. The best score provided by the neural network using two prompts is the computed originality score f(D). We found that the mean squared error ϵ between o(D) and f(D) was 0.3 (0≤ϵ≤ 1), which is an encouraging result (<ref>). This experiment required a considerable amount of manual effort, as our goal was to prove the viability of our proposed idea. As the proposed idea has been shown to be implementable and valid, we propose the following research vision: * We plan to create a prompt space that accurately maps to the internal representation of prompts for large-scale deployed LLMs such as BLOOM, ChatGPT, BARD, etc. * We plan to increase the size of the repository ℛ, so that it consists of a realistic number of (prompt, answer) pairs. * In the future, we plan to automate data cleaning, processing and model building so that the model can be trained and updated on real-world data on a regular basis. * We plan to increase the number of prompts in the prompt sequence to at least 20. * Finally, we will define prompt complexity, and study how it relates to minimizing the originality score to be always less than 0.45. The implication is that the easier the prompt is to write to obtain the desired code fragment, the lower the originality score of the source code will be. § CONCLUSION As current plagiarism detection tools use a corpus of documents obtained from various sources for comparison, we envision an originality detection tool which generates a prompt sequence and calculates the minimum originality score. The key idea we have proposed in this paper is: the tools for detecting originality of LLM-generated source code need to “learn” from the LLM-generated source code itself and the prompts used to generate such source code. Rather than trying to compute the probability that a text is AI-generated or human-generated (this has its technical limitations), we feel the focus should be on computing an originality score using a pool of LLMs. Our initial results are encouraging, and our computed originality scores are in agreement with human evaluations of originality and similarity. farrokhnia1 Farrokhnia, Mohammadreza, et al. A SWOT analysis of ChatGPT: Implications for educational practice and research, Innovations in Education and Teaching International (2023): 1-15. rosenblatt2 Rosenblatt, Kalhan. ChatGPT passes MBA exam given by a Wharton professor, Retrieved Jan 25 (2023): 2023. dwivedi3 Y.K. Dwivedi, N. Yogesh, et al., “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management 71 (2023): 102642. khalil4 Khalil, Mohammad, and Erkan Er. Will ChatGPT get you caught?
Rethinking of plagiarism detection. arXiv preprint arXiv:2302.04335 (2023). weisz5 Weisz, Justin D., et al. Better together? an evaluation of ai-supported code translation. 27th International Conference on Intelligent User Interfaces. 2022. peng6 Peng, Sida, et al. The impact of ai on developer productivity: Evidence from github copilot. arXiv preprint arXiv:2302.06590 (2023). anu7 Baidoo-Anu, David, and Leticia Owusu Ansah. Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Available at SSRN 4337484 (2023). ss8 Shipra Sharma and Balwinder Sodhi. FACT-from actual to conceptual tie-ins: a multi-level knowledge graph structured on context and semantics of software artefacts. Proceedings of the 35th Annual ACM Symposium on Applied Computing. 2020 mdl1 A. Barron, J. Rissanen and B. Yu, The minimum description length principle in coding and modeling, IEEE transactions on information theory, vol. 44, no. 6, pp. 2743–2760, 1998, IEEE. goldblum2023free Micah Goldblum and Marc Finzi and Keefer Rowan and Andrew Gordon Wilson, The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning, 2023. kolmogorovbook Ming Li and Paul Vitányi, An Introduction to Kolmogorov Complexity and Its Applications (2nd Ed.), ISBN: 0387948686, Springer-Verlag, Berlin, Heidelberg, 1997. livitanyi1 Ming Li, Xin Chen, Xin Li, Bin Ma and P. M. B. Vitanyi, The similarity metric, IEEE Transactions on Information Theory, vol. 50, no. 12, pp. 3250-3264, Dec. 2004, doi: 10.1109/TIT.2004.838101. vitanyi2 Rudi Cilibrasi and Paul M. B. Vitányi, Clustering by compression, CoRR:cs.CV/0312044, 2003. vitanyi3 M. Li, J.H. Badger, X. Chen, S. Kwong, P. Kearney, and H. Zhang. An information-based sequence distance and its application to whole mitochondrial genome phylogeny, Bioinformatics, 17:2(2001), 149–154. cilibrasi2 R. Cilibrasi, P. Vitanyi and R. de Wolf, Algorithmic clustering of music, Proceedings of the Fourth International Conference on Web Delivering of Music, 2004. EDELMUSIC 2004., Barcelona, Spain, 2004, pp. 110-117, doi: 10.1109/WDM.2004.1358107. deeplearningbook Ian J. Goodfellow and Yoshua Bengio and Aaron Courville, Deep Learning, MIT Press, Cambridge, MA, USA, 2016 openai-classifier https://platform.openai.com/ai-text-classifier gptzero https://gptzero.me/ perplexity D. M. Blei, A. Y. Ng and M. I. Jordan, Latent Dirichlet Allocation, Journal of machine Learning research, 3 Jan 2003, 993-1022. burstiness T. Lappas, B. Arai, M. Platakis, D. Kotsakos and D. Gunopulos, On burstiness-aware search for document sequences, InProceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining 2009 Jun 28, pp. 477-486. originalityai https://originality.ai/ autoencoder C.Y. Liou, W.C. Cheng, J.W. Liou and D.R. Liou, Autoencoder for words, Neurocomputing 139:84-96, Sep 2 2014 . hinton G. E. Hinton and R. Zemel, Autoencoders, Minimum Description Length and Helmholtz Free Energy, Advances in Neural Information Processing Systems, Editors: J. Cowan and G. Tesauro and J. Alspector, Vol. 6, 1993.
http://arxiv.org/abs/2307.07380v1
20230714143935
Composition-contrastive Learning for Sentence Embeddings
[ "Sachin J. Chanchani", "Ruihong Huang" ]
cs.CL
[ "cs.CL", "cs.LG" ]
Investigating ChatGPT's Potential to Assist in Requirements Elicitation Processes Krishna Ronanki1, Christian Berger2 and Jennifer Horkoff3 Dept. of Computer Science and Engineering, University of Gothenburg Gothenburg, Sweden [email protected], [email protected], [email protected] July 14, 2023 =============================================================================================================================================================================================================================== Vector representations of natural language are ubiquitous in search applications. Recently, various methods based on contrastive learning have been proposed to learn textual representations from unlabelled data; by maximizing alignment between minimally-perturbed embeddings of the same text, and encouraging a uniform distribution of embeddings across a broader corpus. Differently, we propose maximizing alignment between texts and a composition of their phrasal constituents. We consider several realizations of this objective and elaborate the impact on representations in each case. Experimental results on semantic textual similarity tasks show improvements over baselines that are comparable with state-of-the-art approaches. Moreover, this work is the first to do so without incurring costs in auxiliary training objectives or additional network parameters.[Code, pre-trained models, and datasets will be available at https://www.github.com/perceptiveshawty/CompCSEgithub.com/perceptiveshawty/CompCSE.] § INTRODUCTION Significant progress has been made on the task of learning universal sentence representations that can be used for a variety of natural language processing tasks without task-specific fine-tuning (, , , , , , , ). Recent works have shown the potential to learn good sentence embeddings without labeled data by fine-tuning pre-trained language models (PLMs) using the unsupervised framework introduced in SimCLR <cit.>, adapted to the natural language processing (NLP) domain. In computer vision (CV), SimCLR exploits a series of transformations (blurs, crops, color distortions, etc.) to construct positive pairs from otherwise unique data points. A cross entropy objective (InfoNCE; ) is then applied to minimize distance between representations originating from the same datum, while maximizing the distance to all other points in a mini-batch. The success of the framework in computer vision is due largely to the diversity of augmentations used for creating positive pairs, which leave the identity of the original example intact while reducing pairwise mutual information in the input space (; ; ). Constructing positive pairs via discrete augmentations have not been effective when applying the same objective to sentence embeddings. In fact, <cit.> perform an ablation study of textual augmentations (e.g., cropping, synonym replacement) and find that training on these pairs hurts downstream performance on semantic textual similarity (STS) tasks. Instead, they observe that minimal (10%) dropout noise can be used to create positive pairs on-the-fly, and empirically results in stronger representations. This framework relying on nearly identical pairs is known as SimCSE. Since the dropout noise exists as a regularization component of the BERT architecture <cit.>, explicit augmentations are unnecessary, making it a simple yet effective framework for unsupervised learning of sentence embeddings. 
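To make the dropout-based construction of positive pairs concrete, the following minimal PyTorch-style sketch illustrates the idea; the checkpoint name, [CLS] pooling, and temperature value are illustrative assumptions and not the exact training code of SimCSE or of this work.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.train()  # keep dropout active so two passes over the same batch differ slightly

def embed(sentences):
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0]  # [CLS] pooling, one common choice

def simcse_loss(sentences, temperature=0.05):
    # Two forward passes of the same sentences: dropout noise provides the "positive" view.
    z1, z2 = embed(sentences), embed(sentences)
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / temperature
    labels = torch.arange(sim.size(0))   # positives lie on the diagonal
    return F.cross_entropy(sim, labels)  # InfoNCE over in-batch negatives

In practice, SimCSE additionally trains a small MLP head over the [CLS] vector; that detail is omitted here for brevity.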
Here, we make a case for composition as augmentation, by exploiting its presence in language as a signal for learning sentence encoders. We conduct a series of experiments to illustrate the impact of training on positive examples derived by averaging representations of textual constituents in the latent space. Following previous works, we benchmark the proposed strategy on 7 STS tasks. Our results show that it is feasible to significantly improve upon SimCSE without making expensive architectural modifications or changing the overall training objective. We hope our findings can inspire new avenues of inquiry in text representation learning that draw on long-standing notions in semantics and linguistics. § BACKGROUND AND RELATED WORK §.§ Unsupervised Contrastive Learning Contrastive learning <cit.> aims to learn vector-valued representations of data without relying on annotations. Meaning is derived from these representations based on their proximity to other points in the same space, e.g. two images of dogs will be closer in space than a dog and a chair. Several works have theoretically verified the utility of representations derived from contrastive learning <cit.> under various assumptions; <cit.> showed that SimCLR can even outperform supervised counterparts on CV transfer learning benchmarks. In SimCLR (and SimCSE), the learning objective for an example is: l_i = -log e^sim(z_i, z^+_i) / τ/∑^N_j=1e^sim(z_i, z^+_j)/τ, where z_i = f(x_i), z_i^+ = f(x_i^+) are vector representations of an input and its corresponding augmented positive, τ is a temperature hyperparameter, sim(.,.) is cosine similarity, and N is batch size. Drawbacks of InfoNCE. In examination of eq. <ref>, it is evident that InfoNCE uniformly repels examples in the mini-batch besides the minimally augmented positive. Consequentially, the resulting embeddings show poor group-wise discrimination, especially in language, since it is likely that different examples in the batch can have different relative similarities to a given anchor. Another consequence of the unsupervised InfoNCE objective is dimensional collapse, wherein embedding vectors are mostly differentiated by a small proportion of the feature axes; thus under-utilizing the full expressive capacity of the encoder. This was theoretically posited in <cit.>. They prove that minimal augmentation, coupled with an over-parameterized network, results in low rank solutions to the unsupervised contrastive objective. We hypothesize that this is closely tied to short-cut learning <cit.> —- in the context of sentence embeddings, <cit.> observed that spurious features related to the lengths of sentences are relied on to solve the contrastive objective. Such solutions can yield non-generalizable features that poorly represent data from new domains. Qualifying the representation space. <cit.> proposed two metrics to measure the quality of embeddings derived through contrastive learning. First, alignment measures on average the proximity of pairs of examples that should be close in space, i.e. for a set of positive pairs p_pos and their normalized representations f(x), f(x^+): .73! ℓ_align≜(x, x^+)∼ p_pos𝔼‖ f(x) - f(x^+) ‖^2. Conversely, uniformity measures how scattered the embeddings are upon the unit hypersphere: .85! ℓ_uniform≜log   x, yi.i.d.∼ p_data𝔼 e^-2‖ f(x)-f(y) ‖^2, where p_data denotes the full data distribution. We use these metrics to explore the advantages and drawbacks of various augmentations in contrastive pre-training, similarly to <cit.>. 
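As a concrete reference, the two diagnostics above can be computed in a few lines of PyTorch over L2-normalized sentence embeddings; the squared-distance form and the t = 2 weighting follow the formulas, while the variable names and the source of positive pairs are assumptions for illustration.

import torch

def alignment(z, z_pos):
    # E ||f(x) - f(x+)||^2 over positive pairs; z and z_pos are (N, D), L2-normalized.
    return (z - z_pos).norm(p=2, dim=1).pow(2).mean()

def uniformity(z, t=2):
    # log E exp(-t ||f(x) - f(y)||^2) over all pairs drawn from the data distribution.
    sq_dists = torch.pdist(z, p=2).pow(2)
    return sq_dists.mul(-t).exp().mean().log()

In the setting used later in this paper, z_pos would come from STS-B pairs labeled as highly similar, and uniformity would be measured over all pairs.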
§.§ Learning Sentence Embeddings Early works. First approaches to learning sentence embeddings span unsupervised <cit.>, and supervised <cit.> methods which have been studied extensively in the literature. More recent work has focused on unsupervised contrastive learning with the advent of SimCSE <cit.>, which passes the same sentence to a language model twice; the independent dropout masks sampled in the two forward passes encode the sentence at slightly different positions in vector space. A cross-entropy objective is then used to maximize the probability of top-1 proximity between positives while uniformly repelling other examples. Successors to SimCSE. Works that follow SimCSE attempt to improve the framework with auxiliary training objectives <cit.>, verbalized or continuous prompts <cit.>, instance generation or weighting strategies <cit.>, momentum encoders with negative sample queues <cit.>, or entirely new parameters with secondary networks <cit.>. Many works combine several of these components, making it difficult to discern their impact in isolation. As the design choices have become more intricate and less parameter-efficient, performance on STS benchmarks has too become saturated. § COMPOSITION-BASED CONTRASTIVE LEARNING Our augmentation strategy retains the simplicity and efficiency of SimCSE, as illustrated in Figure <ref>. Specifically, it requires just one additional forward pass that is ultimately compensated by a non-trivial reduction in convergence time (<ref>). Beginning with a corpus of unlabelled sentences {x_i}_i=1^m, we consider x_i^+ only in the latent space, as a composition of the representations of (x_i^'+, x_i^”+). A simple (and effective) way to curate (x_i^'+, x_i^”+) is to split the tokens of x_i in half, and encode the left and right phrases in independent forward passes through the encoder and linear projector. After obtaining their respective token representations (z_i, z_i^'+, z_i^”+), (z_i^'+, z_i^”+) is aggregrated and taken to be the corresponding positive example for z_i. The training objective for a single pair is then the same as in eq.<ref>, where z^+ = aggregate(z_i^'+, z_i^”+). We experiment with aggregation methods in <ref>, and find that the best approach varies according to the size and type of underlying PLM. In our final model based on BERT, we find that this manner of augmentation is especially suitable for the scheme proposed in DirectCLR <cit.>, which aims to directly mitigate dimensional collapse by computing the loss from eq.<ref> on a subset of the embedding vector axes before backpropagating to the entire representation. Decomposition as data augmentation. To explain the motivation for decomposing examples in the input space, we can consider an example from the development subset of STS-B labelled as having high semantic similarity: There are two semantic atoms at play in the first text: 1) a man is lifting weights, and 2) a man is in a garage. The similarity between the two texts can only be considered high based on the first atom; lifting weights. It cannot be said that there is a general relation between being in a garage and lifting weights - a garage is equally, if not more likely to be related to cars, parking, or storage, yet this does not preclude a connection between them. It is only through the composition of both atoms that we can relate the two. Thus, there is a need for sentence encoders to learn more generalized phrase representations; to at least implicitly abide by principles of semantic compositionality. 
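A minimal sketch of the composition-based positive described above, reusing the embed helper and imports from the SimCSE sketch earlier: the anchor is encoded directly, while its positive is the average of the embeddings of its two halves. The whitespace split and mean aggregation are simplifying assumptions; the released code may tokenize and aggregate differently.

def composed_positive(sentence):
    # Split the sentence roughly in half and encode the two phrases independently.
    tokens = sentence.split()
    mid = max(1, len(tokens) // 2)          # assumes sentences of at least two tokens
    left, right = " ".join(tokens[:mid]), " ".join(tokens[mid:])
    z_left, z_right = embed([left]), embed([right])
    return 0.5 * (z_left + z_right)         # aggregate constituents in the latent space

def compositional_loss(sentences, temperature=0.05):
    z = embed(sentences)                    # anchors from a single forward pass
    z_pos = torch.cat([composed_positive(s) for s in sentences], dim=0)
    sim = F.cosine_similarity(z.unsqueeze(1), z_pos.unsqueeze(0), dim=-1) / temperature
    return F.cross_entropy(sim, torch.arange(sim.size(0)))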
The challenge in enforcing this kind of constraint through a contrastive objective is in the choice of data — it would require a corpus where lexical collocations are encountered across a diverse set of contexts. Subsampling from decomposed inputs. To further examine the effect of decomposition in the input space, we leverage a pre-trained discourse parser[https://github.com/seq-to-mind/DMRST_Parser] to extract atomic semantic units from each unique example in the training set; typically simple phrases or clauses. We experiment with 3 kinds of strategies (Figure <ref>) to expand the training set, besides considering our augmentation in isolation: let C = {x_i, k}_k=1^c represent the c non-overlapping phrases extracted from an input x_i: * adjacent spans are sampled by taking each unique pair in C such that there is no overlap between inputs; * overlapping and adjacent spans are sampled by taking (potentially) overlapping pairs in C; * overlapping, adjacent, and subsuming spans are sampled by recursively partitioning the elements of C in half, i.e. maximizing the lexical overlap of extracted input samples. Impact on the representation space. A consequence of expanding the training set with subsamples is the presence of harder in-batch negatives. Prior work has demonstrated that this is generally beneficial to contrastive learning <cit.>. Following <cit.>, we measure the uniformity and alignment of representations obtained for the development set of STS-B to understand the effect of training with additional subsamples. STS-B is comprised of pairs of sentences accompanied by a score between 1-5 indicating degree of semantic similarity. We take all pairs as p_data, and pairs with a score greater than 4 as p_pos. Both metrics are measured every 10 steps for 500 training steps, to understand the direction in which each of our strategies drives the encoder. As shown in Figure <ref>, any of the subsampling strategies can bring non-trivial improvements over unsupervised SimCSE in both alignment and uniformity. Specifically, expanding the training set with subsamples (+ adjacent, + overlapping, + subsuming) encourages a more uniform embedding distribution. On the other hand, forgoing subsampling for just the compositional augmentation (naive partition) achieves the better alignment while retaining the uniformity of SimCSE. This is because we leave the self-prediction objective intact, while increasing its difficulty: although subsamples are potentially highly related, positive pairs are only curated from the exact same text. As a consequence, the underlying PLM is forced to effectively distinguish examples with high lexical overlap — which is precisely the intuition underlying DiffCSE <cit.>, and other discriminative pre-training objectives. § EXPERIMENT Setup. In our experiments, we modify the public PyTorch implementation[https://github.com/princeton-nlp/SimCSE] of SimCSE to support our proposed augmentation and subsampling methods. All of our language models are initialized from pre-trained BERT/RoBERTa checkpoints <cit.>, except the randomly-initialized MLP over the representation. For all models, we employ the scheme illustrated in Figure <ref> and report the best results after training with or without the 3 subsampling strategies. We keep the best checkpoints after evaluating on the development set of STS-B every 125 steps during training. Batch size is fixed at 64 for all models; for base and large sized models, learning rates are fixed to 3e-5 and 1e-5 respectively. 
Besides those covered in <ref>, extensive hyperparameter searches were not conducted in this work. Data. We use the same 1 million randomly sampled sentences[https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse] as SimCSE for training, besides incorporating the subsampling strategies from <ref>. We evaluate on 7 semantic textual similarity tasks: STS 2012-2016, STS-Benchmark, SICK-Relatedness <cit.> and report averaged Spearman's correlation across all available test subsets. We employ the modified SentEval[https://github.com/facebookresearch/SentEval] <cit.> package accompanying the source code of SimCSE for fair comparison with other works. Baselines. We compare our results with many contemporaries: ESimCSE <cit.>, SNCSE <cit.>, PCL <cit.>, DCLR <cit.>, ArcCSE <cit.>, MoCoSE <cit.>, and L2P-CSR <cit.>. We consider SimCSE <cit.> as our baseline, since we leave its training objective and network architecture intact. Results. We can observe in Table <ref> that our methods bring non-trivial improvements to SimCSE with both BERT encoders, as well as RoBERTa. In fact, we achieve an average F1 score within 0.8 points of SNCSE-BERT<cit.>. SNCSE exploits biases in test sets by engineering hard negatives via explicitly negated sentences — the impact of this strategy is more apparent in the results utilizing RoBERTa, where there is parity in all works besides SNCSE. In the case of BERT, the gap in performance between our approach and SNCSE is narrower at 0.52 points. A clear failure of the composition-augmented objective presents itself in the results with RoBERTa. This could be attributed to poor hyperparameter settings, or a fundamental incompatibility between our approach and the model size/RoBERTa pre-training objective, since other works achieve better results with this PLM. § ABLATION We ablate several aspects of the approach to understand their impact in isolation. We first consider the subsampling strategy, or lack thereof, in which each model achieves the best STS-B development set performance. These are then tied to each model in subsequent ablations. Including subsamples. In the process of designing DeCLUTR, <cit.> report gains from subsampling more than one anchor per input document. In our experiments, we find that the aligment-uniformity trade-off differs between BERTand BERT, ie. different strategies can be better suited to different PLMs. In Table <ref>, we show that including subsamples is beneficial to the BERTPLM, but harmful to BERT. This is likely a result of the difference in no. of parameters — the smaller PLM may not possess the expressive capacity to distinguish highly related texts without suffering a degeneration in alignment. With RoBERTa, we observe that subsampling non-overlapping spans gives the best results, whereas none of our strategies appeared compatible with RoBERTa. Aggregration method. In SBERT <cit.>, and in historically effective works such as InferSent <cit.>, PLMs are fine-tuned with a cross entropy loss to predict whether two sentences u and v entail or contradict eachother. Their pooler is a concatenation of the two sentence embeddings, along with second-order features such as the element-wise difference, |u - v|. We experiment with these aggregration methods, as well as simpler choices such as element-wise sums/averages. We can see in Table <ref> that simply interpolating the embeddings is preferable to other methods for BERT-based encoders. 
We postulate that this interpolation functions as a form of self-distillation, and amplifies the salience of desirable sparse features correlated with sentential context <cit.>. For RoBERTa, we find that concatenating the first and last halves of the representations is better. Since RoBERTa does not use the next-sentence prediction (NSP) objective, its embeddings will not encode sentential knowledge. Averaging RoBERTa embeddings may not correlate well with real tokens in its vocabulary, whereas concatenating the first and last halves of constituent embeddings retains localized token-level information, making it a better choice in this case. Composing z vs. z^+. In our training objective, there are two sets of sentence representations, one derived from pure dropout noise, and the second by averaging the coordinates of constituent representations. However, for each sentence we can: 1) compose the anchor z in latent space, which means other in-batch examples are repelled from a synthetic example's coordinate, 2) compose the positive z^+, which means synthetic coordinates are repelled from representations of real examples, or 3) compose both z and z^+ in the latent space. In Table <ref>, we can see that with BERT, we found the best results by directly embedding the anchor sentence, and composing z^+ from constituents. Number of partitions. Within our framework, we can aggregrate the embeddings of two or more phrases. Increasing the number of phrases increases the number of forward passes, and magnifies the impact of dropout noise. We find that partitioning into more than two bins is detrimental to the objective (Table <ref>), though perhaps this is the case because the evaluation data consists mostly of short-length sentences. Hyperparameter d_0. In our experiments with BERT, computing the contrastive loss on a subvector of (z_i, z_i^+) is complementary to composing z_i^+ in the latent space. When d_0 → d, our training objective is the exact same as in all *CSE works, ie. computing the loss on all coordinates of (z_i, z_i^+). For BERT, we search d_0 ∈{192, 256, 384} with the compositional augmentation in isolation (w/ composition); for BERT, d_0 ∈{320, 384, 512} with the expanded training set of subsamples (+ subsuming). Our results in Table <ref> indicate that taking a subvector to compute the loss is beneficial for BERT, but the entire vector is necessary for BERT. With RoBERTa encoders, we aggregrate embeddings by concatenating the first and last halves of the phrase embeddings, so d_0 is inapplicable. § ANALYSIS Stability and efficiency of training. Successors to SimCSE have incrementally improved STS performance while disproportionately driving up resource requirements. This limits accessibility to practitioners who wish to learn embeddings from their own corpora, perhaps in other languages. Differently, our approach relies on a single additional forward pass while converging much faster than SimCSE. In Figure <ref>, we compare our BERTmodel's evaluation curve to SimCSE's for 1000 training steps in the same setting. We observe that composition as augmentation greatly speeds up convergence, with evaluation metrics plateauing much faster, and more stably than SimCSE. In fact, on a single NVIDIA A100 GPU (40GB), our model can finish training in under 15 minutes. Text length as a feature. To investigate the structure of the learned space, In Figure <ref>, we visualize embeddings of sentences from the development set of STS-B after down-projecting to 2D Euclidean space. 
We employ UMAP <cit.> with cosine distance as the metric to preserve local and global topological neighborhoods. The same parameters are used to compute the embeddings in Figure <ref> and <ref>, which are derived from dropout noise, and composition-based augmentations (w/ composition) respectively. In Figure <ref>, we can observe several clusters of dark points corresponding to shorter sentences. This corroborates our intuition that minimal augmentation to create positive pairs can lead to shortcut learning, wherein text length is relied upon to solve the training objective. In contrast, we see a more scattered distribution of points in Figure <ref>, particularly with shorter sentences. Coupled with the improved performance on STS tasks, we can conclude that our framework is less prone to learning from spurious correlations. Learned similarity metric. Returning to the example initially posed in <ref>, we show in Figure <ref> similarity scores for pairs of examples computed by our BERTmodel, as well as the corresponding DiffCSE and SimCSE variants. Notice that all three assign higher similarities between anchor: "A man is lifting weights in a garage", and phrases: "A man is lifting weights", "A man in a garage". However, despite their equal constitution in the anchor text, SimCSE incorrectly assesses a higher similarity between the anchor and the first phrase, whereas DiffCSE and our model better capture the equivalence in similarity. The same occurs with anchor: "We store it outside of the house", and texts: "A man is in a garage", "She parked on the driveway"; despite both being unrelated to the anchor, SimCSE spuriously assigns a higher affinity to the former. Overall, we observed parity in the similarity assessments given by our model and DiffCSE, which validates the ability of our approach to remedy the suboptimal alignment of SimCSE without explicit incentive. § CONCLUSION In summary, we proposed a new way to construct positive pairs for unsupervised contrastive learning frameworks relying on pre-trained language models. Our experiments on STS tasks verified the effectiveness of the approach, which achieved competitive results with more complex learning methods, with the benefit of stabilizing and reducing the overall cost of training. We provided empirical studies and qualitative examinations into our approach, verifying its ability to train sentence encoders with better alignment. We believe this work can foster new avenues of inquiry in contrastive learning, especially those that draw upon a human cognition of language. § LIMITATIONS There are several limitations in this work. First, we have not explored how to make use of composition-based augmentations in the supervised setting. A second limitation is a lack of theoretical grounding in the impact of our latent space composition. Finally, we have not explored interoperability with other training objectives. § NOTE ON ETHICS We do not believe there are significant ethical considerations stemming from our work, except those that accompany the use of language models and unlabelled corpora in general. Pre-trained language models, including BERT and RoBERTa, are known to learn and reiterate harmful prejudices. Although our pre-training corpus is sourced from Wikipedia and cited in several related works, it cannot be feasibly vetted for explicit or inappropriate content. § ACKNOWLEDGEMENTS We thank the anonymous reviewers for their valuable feedback and input. 
We also thank Haotian Xu, for his insight and suggestions that shaped the course of this work. We gratefully acknowledge support from National Science Foundation (NSF) via the awards IIS-1942918 and IIS-2127746. Portions of this research were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing. acl_natbib
http://arxiv.org/abs/2307.04285v1
20230710002427
HistRED: A Historical Document-Level Relation Extraction Dataset
[ "Soyoung Yang", "Minseok Choi", "Youngwoo Cho", "Jaegul Choo" ]
cs.CL
[ "cs.CL" ]
Generalizing Graph ODE for Learning Complex System Dynamics across Environments Wei Wang August 12, 2023 ================================================================================= Despite the extensive applications of relation extraction (RE) tasks in various domains, little has been explored in the historical context, which contains promising data across hundreds and thousands of years. To promote the historical RE research, we present constructed from . is a collection of records originally written in Hanja, the classical Chinese writing, which has later been translated into Korean. provides bilingual annotations such that RE can be performed on Korean and Hanja texts. In addition, supports various self-contained subtexts with different lengths, from a sentence level to a document level, supporting diverse context settings for researchers to evaluate the robustness of their RE models. To demonstrate the usefulness of our dataset, we propose a bilingual RE model that leverages both Korean and Hanja contexts to predict relations between entities. Our model outperforms monolingual baselines on , showing that employing multiple language contexts supplements the RE predictions. The dataset is publicly available at: <https://huggingface.co/datasets/Soyoung/HistRED> under https://creativecommons.org/licenses/by-nc-nd/4.0/CC BY-NC-ND 4.0 license. § INTRODUCTION Relation extraction (RE) is the task of extracting relational facts from natural language texts. To solve RE problems, diverse datasets and machine learning (ML) methods have been developed. Earlier work limits the scope of the problem to sentence-level RE, in which the task is to predict a relationship between two entities in a single sentence <cit.>. However, such a setting is impractical in real-world applications where relations between entities can exist across sentences in large unstructured texts. Therefore, document-level RE datasets for general and biomedical domains have been introduced <cit.>, serving as benchmarks for document-level RE models <cit.>. Despite the vast amount of accumulated historical data and the ML methods available for extracting information from it, research on information extraction targeting historical data has been rarely conducted. We believe this is due to the high complexity of analyzing historical records which are written in early languages and cover hundreds and thousands of years. For instance, early languages pose a challenge for accurate translation and knowledge extraction due to their differences in expressions, styles, and formats compared to contemporary languages. Also, since historical records are translated a long time after their creation, reading bilingual texts is necessary to fully understand the text. Such discrepancy requires domain experts who are able to understand both languages in order to accurately annotate the data. There has been a demand from historical academics to utilize ML algorithms to extract information from the huge amount of records; however, because of the aforementioned challenges, the historical domain has been overlooked by most ML communities. In response, we introduce , a document-level RE dataset annotated on historical documents for promoting future historical RE studies. contains 5,816 documents extracted from 39 books in the corpus (see Section <ref> for details). 
As described in Table <ref>[The statistics of our dataset is calculated when is 2.], our dataset is the first dataset that extracts relational information from the historical domain and differs from other RE datasets in that it supports both sentence-level and document-level contexts, as well as two languages: Korean and Hanja. Furthermore, researchers can select different sequence levels (), which we define as a unit of context lengths, when evaluating their RE models. Such independent subtexts are constructed by considering evidence sentences, which the annotators have tagged. The intuition is that evidence sentences, which provide context for deriving a certain relation between two entities, should not be separated from the original text when splitting a document; thus, we introduce an algorithm that properly splits a full document into several self-contained subtexts. Finally, we propose a novel architecture that can fully utilize bilingual contexts using pretrained language models (PLMs). Experimental results demonstrate that our bilingual RE model outperforms other monolingual ones. Our contributions are summarized as follows: * We introduce , a historical RE dataset built from scratch on , a historical record written between the 16th and 19th centuries. * We define new entity and relation types fit for our historical data and proceed with the dataset construction in collaboration with domain experts. * We introduce a sequence level () as a unit of varying sequence lengths, which properly splits a full document into several independent contexts, serving as a testbed for evaluating RE models on different context lengths. § DATASET CONSTRUCTION To the best of our knowledge, is the first RE dataset in the historical domain; thus, there is no consensus regarding the dataset construction process on the historical corpus. In the process of designing our dataset, we collaborated with experts in the linguistics and literature of Hanja to arrive at a consensus. This section describes how we collaborated with the domain experts to construct without losing annotation quality. §.§ Background Joseon, the last dynastic kingdom of Korea, lasted just over five centuries, from 1392 to 1897, and many aspects of Korean traditions and customs trace their roots back to this era. Numerous historical documents exist from the Joseon dynasty, including Annals of Joseon Dynasty (AJD) and Diaries of the Royal Secretariats (DRS). Note that the majority of Joseon's records were written in Hanja, the archaic Chinese writing that differs from modern Chinese, because the Korean language had not been standardized until much later. We considered a number of available historical texts and selected , taking into account the amount of text and the annotation difficulty. is essentially a travel diary from the Joseon period. In the past, traveling to other places, particularly to foreign countries, was rare. Therefore, intellectuals who traveled to Chung (also referred to as the Qing dynasty) meticulously documented their journeys, and is a compilation of these accounts. Diverse individuals from different generations recorded their business trips following similar routes from Joseon to Chung, focusing on people, products, and events they encountered. 
The Institute for the Translation of Korean Classics (ITKC) has open-sourced the original and their translated texts for many historical documents, promoting active historical research[The entire documents were collected from an open-source database at <https://db.itkc.or.kr/>]. §.§ Dataset Schema We engaged in rounds of deliberate discussions with three experts who have studied the linguistics and literature of Hanja for more than two decades and defined our dataset schema. Documents Written between the 16th and 19th centuries, the books in have different formats and contexts depending on the author or the purpose of the book. After consulting with the experts, a total of 39 books that contain rich textual information were selected for our dataset, excluding ones that only list the names of people or products. The collection consists of a grand total of 2,019 complete documents, with each document encompassing the text for a single day. This arrangement is made possible because each book separates its contents according to date, akin to a modern-day diary. Entity and Relation Types Since is a unique record from the Joseon dynasty, entity and relation types used in typical RE tasks are not fit for our dataset. After conferring with the experts, we newly define the entity and relation types appropriate for our historical data. The details are described in Appendix <ref>. §.§ Annotate and Collect Annotators 15 annotators were recruited, who can comprehend the Hanja texts with the Korean translations and have studied the linguistics and literature of Hanja for at least four years. Data Annotation The annotation process was divided into two steps: Each annotator first annotates the text from scratch, and then a different annotator cross-checks the annotations. Prior to each step, we provided the annotators with guidelines and promptly addressed any inquiries they had throughout the annotation process. The annotators were instructed to tag four types of information: entities, relation types, coreferences, and evidence sentences. Entities are annotated in both Korean and Hanja texts, whereas the relations between entities are tagged in the Korean text only, reducing redundant workload for the annotators. Coreferences, which are words or expressions that refer to the same entity, are also tagged such that they are all used to represent a single entity during model training. Evidence sentences, which provide context why the entities have a particular relation, are labeled as well, following <cit.>. For 2,019 parallel texts, the average number of sentences is 24, and the average number of characters in a sentence is 45 in Korean, and 65 and 7 in Hanja, respectively. Preprocessing The initial annotated data is preprocessed to facilitate model training due to several issues it presents. First, some texts contain quotes from other books and poems, which may be unnecessary information for performing the RE task, and thus we exclude them from our dataset. Second, the annotators have found no relation information in some texts either because they were too short or the author of the text had not written any meaningful information. We filter out such texts accordingly. Lastly, the average number of sentences is quite high, with a high variance of 1,503 characters in Korean and 12,812 characters in Hanja. This is because the writing rule of is not stringent. Therefore, we divide these texts with respect to different sequence levels, as described in Section <ref>. 
Consequently, the original 2,019 texts yield a total of 5,852 data instances[When is 0. The detailed statistics are in Table <ref>.]. The mean and the variance of the number of sentences are reduced from 24(1503) to 2(4.15) in Korean and from 65(12812) to 5(57.62) in Hanja. Statistics of The collected dataset is split into the training, validation, and test sets, and their statistics are demonstrated in Table <ref>. Since the sequence length of each document varies, we first sort all data by Korean character lengths, followed by random sampling in a 2:1:1 ratio for the training, validation, and test sets, respectively. §.§ Sequence Level A length of a document is a major obstacle to training a PLM such as BERT, which can take sequences of length only up to a specified length, e.g., 512 tokens. Naively, we can split long documents into multiple chunks; however, a problem may arise when the context for identifying a certain relation exists in a different chunk of text. To resolve this issue, we introduce a sequence level (), a unit of sequence length for extracting self-contained subtexts without losing context information for each relation in the text. This is achieved since we have instructed the annotators beforehand to mark evidence sentence(s), which are contextual sentences that help identify the corresponding relation. As a result, we can utilize these sentences as indicators when varying the lengths of a document. Formally, let T^k_a represent a subtext for relation A when is k. Assume two relations exist in separate sentences of a document, i.e., D = [s_1, ⋯, s_n], which consists of n sentences. When is 0 and i+1 < j, the two subtexts can be defined as T^0_a = [s_i, s_i+1], T^0_b = [s_j], where relation A exists in s_i and its context in s_i+1, while relation B exists and has its context in s_j. If SL is set as k, each subtext is expanded to T^k_a = [s_i-k, ⋯, s_i+k], T^k_b = [s_j-k, ⋯, s_j+k], where 1 ≤ i-k, 1 ≤ j-k, i+k ≤ n, and j+k≤ n. Note that the expansion is based on the sentence where the relation exists, i.e., s_i and s_j. If i-k < 1 or j-k<1, we set the initial index of T^k as 1, and if n < i+k or n < j+k, we set the last index of T^k as n. In addition, we must verify whether duplication occurs between the subtexts. If s_i+k of T^k_a becomes the same sentence as s_j-k of T^k_b, we combine two subtexts to a new subtext T^k_a+b to remove the duplication between them. As shown in Table <ref>, the size of the dataset decreases as increases due to the removal of duplication. Based on this process, we produce five versions of our dataset, where {0, 1, 2, 4, 8}∈ k. Because our dataset contains the bilingual corpus, the new documents are first generated in Korean text, followed by constructing the corresponding Hanja subtexts. § DATA ANALYSIS In this section, we analyze various aspects of to provide a deeper understanding and highlight several characteristics of our historical data. Table <ref> shows the properties and statistical aspects of with three most related datasets: I.PHI <cit.>, DocRED <cit.>, and KLUE-RE <cit.>. The tokenizer of mBERT <cit.> is utilized to obtain the number of tokens in diverse languages. is the first dataset comprised of historical texts targeting the document-level RE task. There have been several studies on the historical corpus <cit.>; however, most RE datasets are based on a general or biomedical domain <cit.>, making it hard to derive historical knowledge. 
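As a rough illustration of the sequence-level construction formalized in the preceding subsection, the Python sketch below expands each relation-bearing sentence index by k sentences on either side, clips the window to the document boundary, and merges windows that share a sentence, mirroring the duplication rule for T^k. The data structures are hypothetical, and the handling of tagged evidence sentences and the paper's 1-based indexing are simplified away.

def build_subtexts(num_sentences, relation_sentence_ids, k):
    """Return merged [start, end] sentence windows (inclusive, 0-indexed) for sequence level k."""
    windows = sorted(
        (max(0, i - k), min(num_sentences - 1, i + k)) for i in relation_sentence_ids
    )
    merged = []
    for start, end in windows:
        if merged and start <= merged[-1][1]:        # windows overlap: combine the subtexts
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

# A 10-sentence document with relations in sentences 2 and 5 at k = 2
# collapses into a single subtext covering sentences 0..7.
print(build_subtexts(10, [2, 5], k=2))  # [[0, 7]]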
Named Entity Types contains 10 entity types, including Location (35.91%), Person (34.55%), Number (13.61%), DateTime (4.82%), and Product (4.40%)[The percentage is calculated when is 1.]. On average, approximately 11 entities appear in a single document, with the median being 10. The aforementioned types are the five most frequent entity types. This can be explained that is a business-travel journal from Joseon to Chung; thus, the authors described whom they had met and when and where they had traveled. The full description is in Appendix Table <ref>. Relation Types Our dataset encloses 20 relation types, including “per:position_held” (32.05%), “nearby” (27.28%), “alternate_name” (7.59%), “per:country_of_citizenship” (5.35%), and “product:provided_by” (3.82%)[The percentage is calculated when is 1, same as entity.]. The frequent occurrence of “per:position_held” can be explained by the distinctive writing style during the Joseon dynasty. For instance, people wrote the name of another person along with their title (e.g., “Scientist Alan Turing” rather than “Alan Turing.”) People referred to each other by their titles or alternative names, such as pseudonyms because using a person's given name implied a lack of respect and courtesy. The second most common relation is “nearby,” which indicates that the place or organization is located nearby[Since there were no mechanical mobilities and the diplomatic group moved with about 200 people, the authors could not move fast and usually walked inside a city.]. This demonstrates that the authors were interested in geographic information when traveling. The full description is in Appendix Table <ref>. Varying Sequence Length As described in Section <ref>, the input text length can be altered via the sequence level (SL). Table <ref> shows a distribution of the number of tokens within a document when SL changes. When is 1, our sequence length becomes longer than the sentence-level RE dataset, including KLUE-RE. Additionally, when ≥ 4, our dataset exceeds the length of other document-level RE datasets, including DocRED. Annotation Procedure Statistics Since our dataset construction consists of annotation and cross-checking steps, we summarize the statistics of this procedure. As shown in Table <ref>, each annotator tagged an average of 51.3 Korean entities, 50.6 Hanja entities, and 4.9 relations on each raw text. At the cross-checking step, a different annotator added an average of 6.5 Korean entities, 6.2 Hanja entities, and 0.5 relations, while deleting 2.2 Korean entities, 2.0 Hanja entities, and 0.3 relations. As a result, the final annotations consist of 55.6 Korean entities, 54.8 Hanja entities, and 5.1 relations for each raw text on average. § BILINGUAL RELATION EXTRACTION MODEL Unlike translation between modern languages, such as translation from English to Korean, historical records have been translated hundreds of years after their creation. As a result, the gap between ancient and present makes the translation task from Hanja into Korean difficult. Also, the translated texts can vary across translators; thus, the domain experts read both Hanja and Korean texts to fully understand the original text. Based on this observation, we hypothesize that understanding the bilingual text would help a model extract valuable information and design our bilingual RE model. As shown in Figure <ref>, our model is a joint model of two separate encoders for Hanja and Korean, along with a cross-attention block from the Transformer architecture <cit.>. 
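A minimal PyTorch-style sketch of this joint architecture is given below, ahead of the formal description in the next paragraph: the two encoders exchange information through cross-attention, and the per-language relation logits are mixed by a weight α. Module names, mean pooling over tokens, and the plain residual connection are simplifying assumptions rather than the authors' implementation, which pools entity embeddings and uses bilinear classifiers.

import torch
import torch.nn as nn

class BilingualRE(nn.Module):
    def __init__(self, enc_kor, enc_han, hidden=768, num_rel=20, alpha=0.5):
        super().__init__()
        self.enc_kor, self.enc_han = enc_kor, enc_han    # e.g. KLUE-BERT and AnchiBERT
        self.cross_kor = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.cross_han = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.cls_kor = nn.Linear(hidden, num_rel)
        self.cls_han = nn.Linear(hidden, num_rel)
        self.alpha = alpha

    def forward(self, batch_kor, batch_han):
        h_kor = self.enc_kor(**batch_kor).last_hidden_state   # (B, m, d)
        h_han = self.enc_han(**batch_han).last_hidden_state   # (B, n, d)
        # Each language queries the other's token representations, then a residual connection.
        z_kor = h_kor + self.cross_kor(h_kor, h_han, h_han)[0]
        z_han = h_han + self.cross_han(h_han, h_kor, h_kor)[0]
        # Entity-pair pooling is simplified here to mean pooling over all tokens.
        logit_kor = self.cls_kor(z_kor.mean(dim=1))
        logit_han = self.cls_han(z_han.mean(dim=1))
        return self.alpha * logit_han + (1 - self.alpha) * logit_kor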
For a document D of length n in Hanja and m in Korean, we have D_han=[x_t]_t=1^n and D_kor=[y_t]_t=1^m, where x and y are input tokens of each document. We use the PLM encoder to obtain contextualized embeddings: H_kor, H_han. Based on these hidden representations, we adopt the multi-head cross-attention block, which consists of a cross-attention layer and residual connection layer <cit.>. For instance, when the encoder process the Hanja text, we set the query as the Hanja token and the key and value to the Korean tokens. Cross-attended representation H' is defined as H'_han = softmax(Q_han, K_kor)V_kor, where we denote query Q_han = W_Q H_han, key K_kor = W_K H_kor, and value V_kor = W_V H_kor, which are all linear projections of hidden representation H. W_Q∈ℝ^d× d, W_K∈ℝ^d× d, and W_V∈ℝ^d× d are learnable weight matrices. After the cross attention, H'_han is further processed in a residual-connection layer, Z_han = Linear(H_han + H'_han). We get Z_kor in the same manner. Our model pools entity embeddings from Z_han and Z_kor. Each bilinear classifier predicts relation types, returning separate logits: logit_han and logit_kor. At last, our model generates final logits as follows: logit_out = α·logit_han + (1-α)·logit_kor, where logit∈ℝ^k× c denotes the output logits of k entity pairs for all c relations, and α is a hyper-parameter. § EXPERIMENTS §.§ Settings Models Since our dataset consists of two languages, we build separate models for each language. We implement all models based on Huggingface Transformers <cit.>. For Korean, the baselines are mBERT <cit.>, KoBERT (a Korean BERT)[<https://github.com/SKTBrain/KoBERT>], and KLUE <cit.>. For Hanja, the baselines are mBERT and AnchiBERT <cit.>. For our bilingual model, we consider combinations of these PLMs, i.e., KLUE, KoBERT, and mBERT for the Korean encoder and mBERT and AnchiBERT for the Hanja encoder. In our experiments, the combination of KLUE and AnchiBERT shows consistent scores when varying . Therefore, our model consists of KLUE and AnchiBERT for Korean and Hanja encoders. Evaluation Metric Following previous work in RE <cit.>, precision, recall, and micro-F1 scores are used for evaluating models. Hyper-parameters Hyper-parameters are set similarly to the BERT-base model in <cit.>. The size of the embedding and hidden vector dimensions are set to 768, and the dimension of the position-wise feed-forward layers to 3,072. All encoders consist of 12 layers and 12 attention heads for each multi-head attention layer. Also, the cross-attention block consists of 8 multi-head attention, and α is set as 0.5 when we get the final logits (L_out). However, when is 2, 4, and 8, α is set to 0.6. The batch size for all experiments is set to 8. The learning rate is set to 5e-5 using the Adam optimizer <cit.>. All models are trained for 200 epochs and computed on a single NVIDIA TESLA V100 GPU. Computational details are in Appendix <ref>. §.§ Results As shown in Table <ref>, our model outperforms other monolingual baselines and consistently demonstrates the best performance even as grows. Even though KLUE as a monolingual model performs worse than mBERT when is 1, our model, which combines KLUE and AnchiBERT, outperforms mBERT. This indicates that exploiting bilingual contexts improves performance. We believe that the cross-attention module and the joint architecture not only incorporate the knowledge from the Korean model, but also create synergy between the Korean and Hanja language models by compensating for each other's deficiencies. 
We test this hypothesis with analysis in Section <ref>. Consequently, the experimental results imply that utilizing a bilingual model would be efficient in analyzing other historical records if the record is written in an early language and translated into a modern one. As our dataset also supports using only one language, we also make note of the monolingual performance. In the Korean dataset, KLUE outperforms mBERT and KoBERT when is 0 and 2, while mBERT performs better than KLUE when is 1. We also find that KoBERT shows worse performance than mBERT, even though KoBERT was trained specifically on the Korean corpus. This demonstrates that our historical domain is dissimilar from the modern Korean one. In Hanja, AnchiBERT performs best regardless of input text length. Additional experimental results are reported in Appendix Table <ref>. § ANALYSIS In this section, we introduce a real-world usage scenario and analyze our model on , describing how our historical dataset can be utilized in detail. §.§ Usage Scenario of Let us assume that a domain expert aims to collect information about the kings of Chung. In our dataset, he or she can extract the facts via the entity of “Hwang Jae (황제)” in Korean, which is a particular word to indicate the emperors of Chung, and chronologically order the events around the title. Note that this is possible because our dataset contains (i) the text in both Korean and Hanja and (ii) the year when the text was written. In total, 34 relational facts are derived from eight distinct years between 1712 and 1849, including that (a) the king in 1713 had the seventh child via the “person:child” class, and (b) the king in 1848 presented the various products with specific names, including “五絲緞” and “小荷包,” to Joseon via the “product:given_by” class. Since most of the historical records only mentioned a crown prince of Chung, describing the seventh child of the king of Chung is a rare event, which can be a motive for other creative writings. In addition, the exact name of the products the king gives reveals that those products were produced in Chung in 1848 and would be a cue to guess the lifestyle of Chung. The expert can derive the facts from our dataset only by reading the 34 relational facts. However, if he or she has to extract them from the raw corpus, they must read at least 20 raw documents containing 1,525 sentences in Korean and 4,995 in Hanja. This scenario illustrates how can accelerate the analysis process in the historical domain. §.§ Advantage of the Bilingual RE Model To analyze the stability of our joint model, we compare three models on random samples from the test set. We use KLUE and AnchiBERT models independently for a monolingual setting, whereas we combine them for our joint model. The SL is set to 4. As shown in Figure <ref>, we sample two examples: case A and B, each of which displays the most representative sentences that contain the relations for the sake of readability. In both examples, our model successfully predicts accurate relation classes. In the case of A, the ground truth (GT) label is “per:worn_by” for first and second relation triplets. Despite the successful prediction of our model with relatively high confidence scores, the Korean model matches only one of the two, while the Hanja model fails to predict both. In the case of B, the GT label is “nearby” for the third and fourth ones. Since the third and fourth relations exist across sentences, predicting them is crucial for a document-level RE task. 
Our model successfully predicts both relation types even with a low confidence score, while the other monolingual models fail. This case study confirms our hypothesis on our joint model; the jointly trained model can improve the performance by compensating for each monolingual model's weaknesses, and our model successfully harmonizes the separate PLMs. § RELATED WORK §.§ Relation Extraction RE datasets <cit.> have been extensively studied to predict relation types when given the named entities in text. RE dataset begins at the sentence level, where the input sequence is a single sentence. This includes human-annotated datasets <cit.> and utilization of distant supervision <cit.> or external knowledge <cit.>. Especially, TACRED <cit.> is one of the most representative datasets for the sentence-level RE task. However, inter-sentence relations in multiple sentences are difficult for models trained on a sentence-level dataset, where the model is trained to extract intra-sentence relations. To resolve such issues, document-level RE datasets <cit.> have been proposed. Especially, DocRED <cit.> contains large-scale, distantly supervised data, and human-annotated data. KLUE-RE <cit.> is an RE dataset constructed in the Korean language. However, KLUE-RE is a sentence-level RE dataset, making it challenging to apply document-level extraction to the historical Korean text. To the best of our knowledge, our dataset is the first document-level RE dataset in both Korean and Hanja. §.§ Study on Historical Records Several studies have been conducted on the application of deep learning models in historical corpora, particularly in Ancient Greece and Ancient Korea. The restoration and attribution of ancient Greece <cit.> have been studied in close collaboration with experts of epigraphy, also known as the study of inscriptions. In Korea, thanks to the enormous amount of historical records from the Joseon dynasty, a variety of research projects have been conducted focusing on AJD and DRS  <cit.>. In addition, using the Korean text of AJD, researchers have discovered historical events such as magnetic storm activities <cit.>, conversation patterns of the kings of Joseon <cit.>, and social relations <cit.>. <cit.> also suggests a translation model that restores omitted characters when both languages are used. <cit.> introduce BERT-based pretrained models for AJD and DRS. As interests in historical records grow, numerous research proposals have emerged. However, most studies only utilize the translated text to analyze its knowledge. In this paper, we aim to go beyond the studies that rely solely on the text. § CONCLUSION In this paper, we present , a document-level relation extraction dataset of our historical corpus. Our study specializes in extracting the knowledge in by working closely with domain experts. The novelty of can be summarized by two characteristics: it contains a bilingual corpus, especially on historical records, and SL is used to alter the length of input sequences. We also propose a bilingual RE model that can fully exploit the bilingual text of and demonstrate that our model is an appropriate approach for . We anticipate not only will our dataset contribute to the application of ML to historical corpora but also to research in relation extraction. § LIMITATIONS We acknowledge that our dataset is not huge compared to other sentence-level relation extraction datasets. However, is the first bilingual RE dataset at the document level on the historical corpus. 
In addition, we constructed 5,816 data instances, and our bilingual model trained on achieved an F1 score of 63.48 percent when SL is 2. This reveals that our dataset is sufficient for finetuning the pretrained language models. Also, because is a collection of travel records, the domain is not as expansive as other Joseon dynasty records. Additional research on massive corpora covering a broader domain is required in future studies. § ETHICAL CONSIDERATION We conducted two separate meetings before the first and second steps of data construction. At first, we introduced the reason we built this dataset and the goal of our study and clarified what the relation extraction task is and how the dataset will be used. All annotators agreed that their annotated dataset would be used to build an RE dataset and train neural networks. We explained each type of the named entity and the relation with multiple examples and shared user guidance. In the second meeting, we guided the annotators in evaluating and modifying the interim findings in an appropriate manner. We adjusted the workload of each annotator to be similar by assigning different text lengths during the first and second steps. We compensated each annotator an average of $1,700, which is greater than the minimum wage in Korea. Among 15 annotators, 14 were Korean, one was Chinese, 11 were female, and four were male. 30% of annotators are in a doctorate and 65% are in a master's degree. Regarding copyrights, since our corpus is a historical record, all copyrights belong to ITKC. ITKC officially admit the usage of their corpus under https://creativecommons.org/licenses/by-nc-nd/4.0/CC BY-NC-ND 4.0 license. § ACKNOWLEDGEMENT This research was supported by the KAIST AI Institute (“Kim Jae-Chul AI Development Fund” AI Dataset Challenge Project) (Project No. N11210253), the National Supercomputing Center with supercomputing resources including technical support (KSC-2022-CRE-0312), and the Challengeable Future Defense Technology Research and Development Program through the Agency For Defense Development (ADD) funded by the Defense Acquisition Program Administration (DAPA) in 2022 (No. N04220080). We also thank Junchul Lim, Wonseok Yang, Hobin Song of Korea University, and the Institute for the Translation of Korean Classics (ITKC) for their discussions and support. acl_natbib § DATASET CONSTRUCTION The procedure consists of the following five steps: 1) collecting corpus from the open-source data of ITKC; 2) defining the schema of the named entities and relations; 3) identifying the entities in given documents; 4) annotating corresponding relations; and 5) modifying the interim results. This section illustrates the overall procedure. Note that the construction process is divided into two phases because the raw text of is significantly long, where the average length of Korean text is 1,106 characters, and the history-specialized annotators are rare. Before beginning the first phase, the annotators received instructions on the purpose of this study, the types of entities and relations, and how to operate the user interface (UI) for data tagging. After instructions, annotators identified the named entities and the relations between them. In the second phase, the annotators cross-checked the intermediate results and modified incorrect annotations. During both phases, we provided the annotators with user guidance and maintained real-time communication. 
§.§ Corpus Collection As mentioned in <ref>, we selected 39 books from and divided them into 2,019 texts, each containing a single day's content. We did not divide the text into shorter texts before providing it to the annotators because a relation may exist across multiple sentences or have its evidence sentence distant from where the relation appears. We provided the entire text to the annotators to reduce the possibility of losing relational data. Due to the highly variable length of the text, an additional process step was required to extract relational information in a manageable length. To select the sentences containing all the information that can indicate the relational fact, we guided the annotators to detect the evidence sentence(s) when they annotated the relation types. §.§ Defining Schema §.§.§ Types of Named Entities As shown in Table <ref>, we defined 10 entity types. Here, we added the date and time as entity type; thus, we can estimate the exact time because most of the corpus includes the time when the text was written. For example, if a text contains tomorrow's plan by mentioning “tomorrow” and the written date is June 6, we can recognize the date of tomorrow as June 7. In historical studies, it is essential to understand the lifestyle of ancient times. Lifestyle includes clothing, food, and utilized products. For instance, humans began consuming grains such as wheat and rice after the agricultural revolution. Since lifestyle has changed according to time and location, detecting food, clothes, and products on our corpus becomes a non-trivial task. We also excluded two text types in the preprocessing: poems and quotations. When writing the , the writers commonly composed poems or quoted related or ancient books, including the Analects of Confucius and Mencius. We decided to detect the books' name because it helps us imply the political status of the writer. However, the poems usually describe the sentiments or thoughts of the writer, and the quotations are written in a more ancient time than Joseon. Since we concentrated on finding objective relational facts about the Joseon dynasty, we determined to exclude the poems and quotations. A special “exclude” entity type was provided to the annotators, and the annotators tagged such subtexts if the text was a poem or a quotation. §.§.§ Types of Relations Since our corpus is a collection of travel reports, the authors wrote the people they had met and the places they had visited. As shown in Table  <ref>, we defined 20 relation classes, including 14 personal and 4 location relations. In the Joseon dynasty, it was a convention to refer to one another by their alternative name or title; thus, identifying the alternative name of a specified person is essential for tracking the individual's life. Also, since the name of a particular location can vary depending on time and place, we added “alternate name” as a relation class to account for these instances. Additionally, in , the number indicates the distance traveled from one location to another. We hypothesized that the locations are close to each other if the text contains the distance between the locations where the author moved because there was no mechanical mobility and they usually walked the cities. In addition, they described the characteristics of a location, such as its regional product or cuisine and its functional role. Therefore, “loc:famous_for” and “loc:function_as” were added to the set of relation types. 
§.§ Entity Detection The annotators annotated entities using a predefined set of entity types. We provided the original Hanja and the translated Korean texts, as shown in Fig. <ref>. As most annotators' native language is Korean, we recommended detecting the entities in the Korean text first and the parallel entities in the Hanja text after. After detecting entities in both texts, the annotators drew a line connecting the same entity between the two languages (as in apple and pomme in English and French texts). The annotators also drew a line connecting entities that express a certain relation. To avoid confusion, the two lines are colored in blue and orange, respectively, as shown in Figure <ref>. §.§ Relation Annotation After identifying the relations in the previous step, the annotators added relations by using the “add relation” button and selected a relation class for the relation triplet. They also tagged the indices of evidence sentences on the Korean and Hanja texts. §.§ Cross-Checking and Modification After the first phase, we analyzed the intermediate result and updated the user manual, focusing on instructions for editing initial annotations. Before the cross-checking stage, we conducted a second tutorial for the annotators using the updated manual. We assigned annotators to texts such that they had not seen them during the first phase. If they found an error(s) during cross-checking, they revised the annotations by adding or removing the entity(s) or relation(s). § EXPERIMENTS §.§ Computational Details Our experiments include monolingual and bilingual settings. For each model, we describe the number of total parameters and computational budget (hours) for training on 200 epochs on our dataset when is 0. For the Korean model, mBERT consists of 178M parameters and consumes about 4.2 hours, KoBERT is 93M and 3.3 hours, and KLUE is 111M and 4.0 hours, respectively. For the Hanja model, mBERT consists of 178M parameters and requires 4.6 hours, and AnchiBERT is 95M and 3.3 hours. Our joint model consists of 206M parameters and consumes 6.6 hours because our model adopts two separate PLMs. §.§ Performance Comparison on Large As shown in Table <ref>, our joint model outperforms other baseline models when is 2, 4, and 8, where the average length of documents is 153, 250, and 427 tokens on the Korean text. Our model scores better when α is 0.6 rather than 0.5 when is 2, 4, and 8. This can be explained by the fact that ours is affected by the low performance of the Hanja encoder, i.e., AnchiBERT. The Hanja encoder significantly drops its scores as increases. § DATASET EXAMPLES We include additional full data samples: Table <ref>, Table <ref>, and Table <ref>.
http://arxiv.org/abs/2307.06256v1
20230712155045
Groups of Binary Operations and Binary $G$-Spaces
[ "Pavel S. Gevorgyan" ]
math.GN
[ "math.GN", "54H15, 57S99" ]
Moscow State Pedagogical University [email protected] The group of continuous binary operations on a topological space is studied; its relationship with the group of homeomorphisms is established. The category of binary G-spaces and bi-equivariant maps is constructed, which is a natural extension of the category of G-spaces and equivariant maps. Results related to the foundations of the theory of binary G-spaces are obtained. 54H15; 57S99 Groups of Binary Operations and Binary G-Spaces Pavel S. Gevorgyan August 12, 2023 =============================================== § AUXILIARY RESULTS AND NOTATION Throughout this paper, by a space we mean a topological space. All spaces are assumed to be Hausdorff. We denote the category of topological spaces and continuous maps by Top. By C(X,Y) we denote the space of all continuous maps of X to Y endowed with the compact-open topology, that is, the topology generated by the subbase consisting of all sets of the form W(K, U)={f:X→ Y; f(K)⊂ U}, where K a compact subset of X and U is an open subset of Y. All continuous function spaces are considered in the compact-open topology. If G is a topological group, then there is a natural group operation on C(X,G): given any continuous maps f,g∈ C(X,G), their product fg∈ C(X,G) is defined by (fg)(x)=f(x)g(x) for all x∈ X. If G is a topological group, then so is C(X,G). The group of all self-homeomorphisms of X is denoted by H(X). This group is not generally a topological group. However, the following theorem is valid. If a space X is locally compact and locally connected, then H(X) is a topological group. Let X be a topological space, and let G be a topological group G with identity element e. Suppose given a continuous map θ:G × X → X satisfying the conditions (1) θ(g, θ(h, x))=θ(gh,x) and (2) θ(e,x)=x for g, h ∈ G and x∈ X. Then X is called a G-space, and the continuous map θ :G × X → X is called the action of the group G on the G-space X. In this case, we use the notation θ(g,x)=gx. Let X and Y be G-spaces. A continuous map f: X→ Y is said to be equivariant if f(gx)=gf(x) for any g∈ G and x∈ X. All G-spaces and their equivariant maps form a category. This category is denoted by G-Top. The symmetric group on a set X is denoted by S(X). In the case of a finite set X, this group is denoted by S_n(X) or S_n, where n is the number of elements in X. The order of S_n(X) equals n!: |S_n(X)|=n! Details on these notions, as well as on all definitions, notions, and results used in this paper without reference, can be found in <cit.>, <cit.>, <cit.> and <cit.>. § THE GROUP OF CONTINUOUS BINARY OPERATIONS Let X be any topological space. A continuous map f:X^2→ X is called a continuous binary operation, or a binary map, on X. We denote the set of all continuous binary operations on X by C_2(X). On C_2(X) we define an operation “∗” by (f*φ) (t,x)=f(t, φ(t,x)) for all t,x∈ X. The set C_2(X) under the operation “∗” is a topological semigroup with identity element e(t,x)=x, i.e., a topological monoid. Let f, φ, and h be continuous binary operations on a topological space X. Let us check the semigroup axiom: [f*(φ*h)] (t,x)=f(t, (φ*h)(t,x))=f(t, φ(t, h(t,x))) =(f*φ)(t, h(t,x))=[(f*φ)*h] (t,x). The binary operation e:X^2→ X defined by e(t,x)=x is the identity element of the semigroup, because f*e=e*f=f. Indeed, (f*e)(t,x)=f(t, e(t,x))=f(t,x), (e*f)(t, x)=e(t, f(t,x))=f(t,x). A continuous binary operation f∈ C_2(X) is said to be invertible if there exists a continuous binary operation φ∈ C_2(X) such that f*φ=φ*f=e. 
In this case, f and φ are said to be mutually inverse binary operations. We denote the set of all invertible elements of C_2(X) by H_2(X). The set H_2(X) is a group. Let X={a, b} be a two-point discrete space. The symmetric group S_2(X) of permutations of this set is the cyclic group ℤ_2, and the group of all invertible binary operations on X={a, b} is the group of order 4 with two generators φ_1 and φ_2, which are specified by -2.80ex[0cm][0cm]φ_1: b a a 2-4 a b b 2-4 a b -2.80ex[0cm][0cm]φ_2: b b a 2-4 a a b 2-4 a b The multiplication table for this group is as follows: e φ_1 φ_2 φ_3 1-4 φ_1 e φ_3 φ_2 1-4 φ_2 φ_3 e φ_1 1-4 φ_3 φ_2 φ_1 e This is the Klein four-group. In the case of a three-point set X, the order of H_2(X) equals (3!)^3=216 (see Corollary <ref> below). If a continuous binary operation f:X^2→ X is invertible, then, for any t∈ X, the continuous map f_t:X→ X defined by f_t(x)=f(t,x) for x∈ X is a homeomorphism. Suppose that a binary operation f:X^2→ X is invertible, i.e., there exists a binary operation φ∈ C_2(X) satisfying relations (<ref>). Take any element t∈ X. Let us prove that the continuous map f_t:X→ X is a homeomorphism. The map f_t is a monomorphism. Indeed, suppose that f_t(x)=f_t(x'), i.e., f(t,x)=f(t,x'). Then x=e(t,x)=(φ*f)(t,x)=φ(t,f(t,x))= =φ(t,f(t,x'))=(φ*f)(t,x')=e(t,x')=x'. The continuous map φ_t:X→ X defined by φ_t(x)=φ(t,x) is inverse to f_t:X→ X. Indeed, (f_t∘φ_t(x) = f_t(φ_t(x))=f_t(φ(t,x))=f(t,φ(t,x))= =(f*φ)(t,x)=e(t,x)=x, i.e., f_t∘φ_t=1_X. A similar argument proves that φ_t∘ f_t=1_X. It follows from Theorem <ref> that if an invertible binary operation f:X^2→ X is represented in the form of a family of homeomorphisms {f_t}, t∈ X, then the inverse binary operation has the form {f^-1_t}. The converse of Theorem <ref> is true for locally compact and locally connected spaces. Suppose that a space X is locally compact and locally connected and f:X^2→ X is a continuous binary operation. If the map f_t:X→ X is a homeomorphism for each t∈ X, then the binary operation f is invertible, and the inverse binary operation f^-1 is defined by f^-1(t,x)=f_t^-1(x). Consider the binary operation φ given by φ(t,x)=f_t^-1(x). Let us prove that this binary operation is continuous and inverse to f:X^2→ X: φ=f^-1. First, we prove the continuity of the map φ:X^2→ X. Let (t_0,x_0)∈ X^2 be any point, and let φ(t_0,x_0)=f_t_0^-1(x_0)=y_0. Consider any open neighborhood W⊂ X of y_0 such that the closure W is compact. There exists a compact connected neighborhood K of x_0 for which f_t_0^-1(K)⊂ W. We denote the interior of K by K^∘. We have f_t_0(y_0)=x_0∈ K^∘. Inclusion (<ref>) implies f_t_0(W^C∩W)⊂ K^C, where W^C and K^C are the complements of W and K, respectively. Since f:X^2→ X is a continuous binary operation, {y_0} and W^C∩W are compact supsets of X, and K^∘ and K^C are open subsets, it follows from (<ref>) and (<ref>) that the point t_0 has an open neighborhood U such that, for any t∈ U, we have f_t(y_0)∈ K^∘ and f_t(W^C∩W)⊂ K^C. Inclusion (<ref>) implies that K⊂ f_t(W∪W^C) for any t∈ U. Therefore, f_t^-1(K)⊂ W∪W^C. Since f_t^-1(K) is connected and W and W^C are disjoint open sets, it follows from the last inclusion that f_t^-1(K) is contained in one of the sets W and W^C. However, by virtue of (<ref>), we obviously have f_t^-1(K)⊂ W. Hence f_t^-1(K^∘)⊂ W for all t∈ U. Thus, given any open neighborhood W of y_0=f_t_0^-1(x_0), we have found open neighborhoods U of t_0 and K^∘ of x_0 for which (<ref>) holds. 
This proves the continuity of the binary operation φ(t,x)=f_t^-1(x). It is easy to show that the continuous binary operation φ(t,x)=f_t^-1(x) is inverse to f:X^2→ X. Indeed, (f*φ)(t,x)=f(t,φ(t,x))=f(t,f_t^-1(x))=f_t(f_t^-1(x))=x, i.e. f*φ=e. The relation φ*f=e is proved in a similar way. Theorems <ref> and <ref> imply the following criterion for the invertibility of continuous binary operations on locally compact and locally connected spaces. A continuous binary operation f:X^2→ X on a locally compact locally connected space X is invertible if and only if the continuous map f_t:X→ X defined by (<ref>) is a homeomorphism for any t∈ X. The group H(X) of all self-homeomorphisms of a topological space X is isomorphic (algebraically and topologically) to a subgroup of the group H_2(X) of invertible binary operations. To each f∈ H(X) we assign the continuous map f̃:X^2→ X defined by f̃(t, x)=f(x), t,x∈ X. Obviously, f^-1=f̃^-1. Thus, f̃ is a continuous invertible binary operation, i.e., f̃∈ H_2(X). The correspondence f →f̃ is the required isomorphism between the group H(X) and a subgroup of H_2(X). For any locally compact locally connected space X, the group H_2(X) is isomorphic (algebraically and topologically) to C(X,H(X)). Consider the map p:C(X,H(X))→ H_2(X) defined by p(f)(t,x)=f(t)(x) for f∈ C(X,H(X)) and t,x∈ X. The map f(t) : X → X is a homeomorphism for each t∈ X. Therefore, by virtue of Theorem <ref>, the binary operation p(f):X× X→ X is invertible, that is, indeed belongs to the group H_2(X). Let us prove that p is a monomorphism. Take f,g∈ C(X,H(X)), f≠ g. There exists a point t_0∈ X such that f(t_0)≠ g(t_0). Since f(t_0), g(t_0) ∈ H(X), it follows that f(t_0)(x_0)≠ g(t_0)(x_0) for some x_0∈ X. Thus, p(f)(t_0,x_0)≠ p(g)(t_0,x_0), and hence p(f)≠ p(g). The map p is also an epimorphism. Indeed, let φ∈ H_2(X) be any continuous binary operation. By virtue of Theorem <ref>, the map φ_t:X→ X defined by φ_t(x)=φ(t,x), t, x∈ X, is a homeomorphism. It is easy to see that the element f∈ C(X,H(X)) determined by the condition f(t)=φ_t is the preimage of the binary operation φ. Indeed, we have p(f)(t,x)=f(t)(x)=φ_t(x)=φ(t,x). Thus, the map p^-1:H_2(X)→ C(X,H(X)) defined by p^-1(φ)(t)(x)= φ(t,x) for φ∈ H_2(X) and t,x∈ X is inverse to p:C(X,H(X))→ H_2(X). The map p is a homomorphism, that is, p(f∘ g)=p(f)*p(g). Indeed, for any t,x∈ X, we have p(f∘ g)(t,x)=(f∘ g)(t)(x)= (f(t)∘ g(t))(x)=f(t)(g(t)(x)) =f(t)(p(g)(t,x))=p(f)(t,p(g)(t,x))=(p(f)*p(g))(t,x). Let us prove the continuity of p. Take any element W(K× K', U) of the subbase of the compact-open topology on H_2(X), where U⊂ X open and K,K'⊂ X are compact subsets of X. Let us show that the preimage of W(K× K', U) is the set W(K, W(K', U)), which is an element of the subbase of the compact-open topology on C(X,H(X)). Indeed, for any φ∈ W(K× K', U) and f=p^-1(φ)∈ C(X,H(X)), we have φ∈ W(K× K', U) φ(t,x)∈ U p(f)(t,x)∈ U f(t)(x)∈ U f∈ W(K, W(K', U)), where t∈ K and x∈ K' are arbitrary elements. The continuity of the inverse map p^-1:H_2(X)→ C(X,H(X)) is proved in precisely the same way. If X is a locally compact locally connected space, then H_2(X) is a topological group. By Theorem <ref>, H(X) is a topological group. Thus, C(X,H(X)) is a topological group as well (by Theorem <ref>). According to Theorem <ref>, H_2(X) is a topological group. If |X|=n<∞, then |H_2(X)|=(n!)^n. For a finite set X, we have H(X)=S_n(X), where S_n(X) is the symmetric group of permutations of X. Since |S_n(X)|=n!, the corollary follows directly from Theorem <ref>. 
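As a concrete illustration of the finite case, the sketch below (plain Python, written for this note) implements the * operation on binary operations of a small discrete set, checks the monoid axioms, and brute-forces the count of invertible operations, recovering |H_2(X)| = (n!)^n: the 4 = (2!)^2 operations of the Klein four-group example and the 216 = (3!)^3 operations on a three-point set.

```python
# Finite-set sketch: binary operations on X = {0, ..., n-1} stored as dicts
# mapping (t, x) -> f(t, x); the "*" operation is (f * g)(t, x) = f(t, g(t, x)).
from itertools import product
from math import factorial

def star(f, g, X):
    return {(t, x): f[(t, g[(t, x)])] for t, x in product(X, X)}

def identity_op(X):
    return {(t, x): x for t, x in product(X, X)}

def is_invertible(f, X):
    # by the criterion above, f is invertible iff every section f_t = f(t, .) is a bijection
    n = len(X)
    return all(len({f[(t, x)] for x in X}) == n for t in X)

def count_invertible(n):
    X = list(range(n))
    count = 0
    # enumerate all n^(n^2) binary operations as tuples of "rows" (one row per t)
    for rows in product(product(X, repeat=n), repeat=n):
        f = {(t, x): rows[t][x] for t, x in product(X, X)}
        count += is_invertible(f, X)
    return count

X = list(range(3))
e = identity_op(X)
f = {(t, x): (t + x) % 3 for t, x in product(X, X)}
g = {(t, x): (x + 1) % 3 for t, x in product(X, X)}
h = {(t, x): (2 * x) % 3 for t, x in product(X, X)}

# monoid axioms: associativity of * and two-sided identity e(t, x) = x
assert star(f, star(g, h, X), X) == star(star(f, g, X), h, X)
assert star(f, e, X) == f == star(e, f, X)

# |H_2(X)| = (n!)^n: 4 for a two-point set (the Klein four-group), 216 for three points
for n in (2, 3):
    assert count_invertible(n) == factorial(n) ** n
```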
§ BINARY ACTIONS OF GROUPS Let G be a topological group, and let X be a space. A continuous map α :G× X^2→ X is called a binary action of G on X if the following conditions hold: α (gh, t,x)=α(g, t, α(h,t,x)), α (e, t,x)=x, where e is the identity element of G, g,h∈ G, and t,x∈ X. We refer to a space X together with a fixed binary action of a group G, that is, to a triple (G,X,α), as a binary G-space. Note that in the notation g(t,x)=α (g, t,x) relations (<ref>) and (<ref>) take the form gh(t,x)=g(t, h(t,x)), e(t,x)=x. The continuous map α :G× G^2→ G defined by α(g,h_1,h_2)=h_1gh_1^-1h_2, or g(h_1,h_2)=h_1gh_1^-1h_2, is a binary action of the topological group G on itself. Indeed, conditions (<ref>) and (<ref>) in Definition <ref> are satisfied: we have α(gh,h_1,h_2)=h_1ghh_1^-1h_2=h_1gh_1^-1h_1hh_1^-1h_2= =α(g, h_1, h_1hh_1^-1h_2)=α(g, h_1, α(h,h_1,h_2)) and α (e, h_1,h_2)=h_1eh_1^-1h_2=h_2 for all g,h,h_1,h_2∈ G. Let GL(n, 𝐑) be the general linear group of degree n. Consider the continuous map α:GL(n, 𝐑)×𝐑^n×𝐑^n →𝐑^n defined by α (A,𝐱,𝐲)=(E-A)𝐱+A𝐲, or, equivalently, A(𝐱,𝐲)=(E-A)𝐱+A𝐲, where A∈ GL(n, 𝐑), E is the identity matrix, and 𝐱,𝐲∈𝐑^n. Note that, for any A, B ∈ GL(n, 𝐑), we have A(𝐱,B(𝐱,𝐲))=(E-A)𝐱+A(B(𝐱,𝐲))= (E-A)𝐱+A((E-B)𝐱+B𝐲) =(E-A)𝐱+A(E-B)𝐱+AB𝐲=(E-A+A-AB)𝐱+AB𝐲 =(E-AB)𝐱+AB𝐲=AB(𝐱,𝐲) and E(𝐱,𝐲)=(E-E)𝐱+E𝐲=𝐲. Therefore, relation (<ref>) defines a binary action of the general linear group GL(n, 𝐑) on the n-dimensional vector space 𝐑^n. Let α be a binary action of a topological group G on a space X. For each g∈ G, we define a continuous map α_g:X^2→ X as α_g(t,x)= α(g,t,x). Considitions (<ref>) and (<ref>) imply that α_gh=α_g * α_h and α_e is the identity element in the monoid of binary operations on X. Thus, the following proposition is valid. The map g→α_g is a continuous homomorphism from G to the group H_2(X) of all invertible continuous binary operations. Thus, the elements of the group G can be treated as continuous binary operations on the space X which act by the rule g(t,x)=α_g(t,x)= α(g,t,x). Take any t∈ X. Consider the continuous map α_t:G× X→ X defined by α_t(g,x)= α(g,t,x). The map α_t is an action of the group G on the space X. The map α_t satisfies the conditions in the definition of an action of a group on a topological space. Indeed, we have (1) α_t(gh,x)=α(gh,t,x)=α(g, t, α(h,t,x))= α(g, t, α_t(h,x))=α_t(g, α_t(h,x)) and (2) α_t(e,x)=α(e,t,x)=x for all g,h∈ G and x∈ X. Thus, a binary action α of a group G on a space X induces the family {α_t}, t∈ X, of “ordinary” actions of G on X. In the case of the binary action (<ref>) (see Example <ref>), all induced actions of G on itself are equivalent. To be more precise, the homeomorphism f:G→ G given by f(g)=h^-1gh, where h is any fixed element of G, is an equivalence with respect to the actions α_e and α_h. Indeed, f(α_e(g,g̃))=f(e^-1geg̃)=f(gg̃)=h^-1gg̃= =h^-1ghh^-1g̃h=h^-1ghf(g̃)=α_h(g,f(g̃)). The binary action of the general linear group GL(n, 𝐑) on 𝐑^n defined by (<ref>) (see Example <ref>) induces the family {α_a; a∈𝐑^n} of ordinary actions of the group GL(n, 𝐑) on 𝐑^n. It is easy to show that all these actions are equivalent. To be more precise, the map f(x)=x-a is an equivariant self-homeomorphism of the space 𝐑^n with respect to the actions α_a and α_0. Moreover, the action α_0 is multiplication of a given matrix by a given element of 𝐑^n: α_0(A,y)=α(A,0,y)=0+Ay= Ay. Given A⊂ X and g∈ G, we set gA={g(a,a'); a,a'∈ A}. Similarly, given K⊂ G, we set KA={g(a,a'); g∈ K, a,a'∈ A}. 
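Before moving on, a quick numerical spot-check of the GL(n, 𝐑) example above; this is a sketch written for this note, using random matrices, which are invertible with probability one.

```python
# Numerical check of the binary action A(x, y) = (E - A) x + A y on R^n:
# it should satisfy A(x, B(x, y)) = (AB)(x, y) and E(x, y) = y.
import numpy as np

rng = np.random.default_rng(0)
n = 4
E = np.eye(n)

def act(A, x, y):
    return (E - A) @ x + A @ y

A = rng.normal(size=(n, n))   # generic matrices; invertibility is not needed for the identities
B = rng.normal(size=(n, n))
x, y = rng.normal(size=n), rng.normal(size=n)

assert np.allclose(act(A, x, act(B, x, y)), act(A @ B, x, y))
assert np.allclose(act(E, x, y), y)
```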
Let A be a subset of a binary G-space X, and let K be a subset of a compact topological group G. Then the following assertions hold: (i) if A is open, then so is KA; (ii) if A is compact and K is closed, then KA is compact. (i) Note that KA=⋃_g∈ K⋃_a∈ Ag_a(A), where g_a:X→ X is the map defined by g_a(x)=g(a,x), x∈ X. By virtue of Theorem <ref>, g_a:X→ X is a homeomorphism; therefore, the set g_a(A) is open. Hence KA is open as well. (ii) If K is closed, then it is also compact, because G is a compact group. Thus, the set K× A^2 is compact, and so is its continuous image KA=α(K× A^2). Suppose that G is a compact group, X is a binary G-space, and A⊂ X is a subset of X. If A is open (compact), then so is GA. § THE CATEGORY OF BINARY G-SPACES Let (G,X,α) and (G,Y,β) be two binary G-spaces. We say that a continuous map f:X→ Y is bi-equivariant if f(α(g,t,x))=β(g,f(t),f(x)), or, equivalently, f(g(t,x))=g(f(t),f(x)), for all g∈ G and t,x ∈ X. We refer to a bi-equivariant map f:X→ Y which is simultaneously a homeomorphism as a bi-equivalence of binary G-spaces. Note that the inverse map f^-1:Y→ X is bi-equivariant as well. Indeed, we have f^-1(g(y,y'))=f^-1(g(f(x),f(x'))=f^-1(f(g(x,x')))= =g(x,x')=g(f^-1(y),f^-1(y')), where x,x'∈ X and y,y'∈ Y. The following assertion is valid. The binary G-spaces and bi-equivariant maps form a category. We denote the category of binary G-spaces and bi-equivariant maps by G-Top^2. Any bi-equivariant map f:X→ Y is equivariant with respect to the induced actions α_t and β_f(t) for any t∈ X. Indeed, f(α_t(g,x))=f(α(g,t,x))=β(g,f(t),f(x)) = β_f(t)(g,f(x)). Now, let X be any G-space. On X we define a binary G-action by g(x,x')=g· x' for all g∈ G and x, x'∈ X. Note that if X and Y are G-spaces, then any equivariant map f:X→ Y is bi-equivariant with respect to the action (<ref>). Indeed, f(g(x,x')=f(g· x')=g· f(x')=g(f(x), f(x')). Thus, the category G-Top is a subcategory of the category G-Top^2. We have the following chain of natural extensions of categories: Top⊂ G-Top⊂ G-Top^2. Let (G, X, α) be a binary G-space. Consider the G-space on which the group G acts as α̃(g, x,x')=(x, α(g,x,x')). Using the notations α̃(g, x,x')=g· (x,x') and α(g,x,x')=g(x,x'), we rewrite this formula as g· (x,x')=(x, g(x,x')). Note that if X and Y are binary G-spaces, then any bi-equivariant map f:X→ Y generates the equivariant map f̃:X× X→ Y× Y defined by f̃(x,x')=(f(x), f(x')), where x,x'∈ X. Indeed, f̃(g·(x,x'))=f̃(x, g(x,x'))=(f(x), f(g(x,x'))) =(f(x), g(f(x),f(x')))=g·(f(x),f(x'))= g·f̃(x, x'). This correspondence is a covariant functor from the category G-TOP^2 to the category G-TOP. § INVARIANT SETS We say that a subset A⊂ X is invariant with respect to the binary action of a group G if GA=A. It is easy to see that the intersection A∩ B of invariant sets A, B ⊂ X is invariant. However, the union A∪ B of two invariant sets is not generally invariant. Indeed, any one-point set {x}, x∈𝐑^n, is invariant with respect to the binary action (<ref>) of the general linear group GL(n, 𝐑) on the n-dimensional vector space 𝐑^n (see Example <ref>), since A(x,x)=(E-A)x+Ax=x for all A∈ GL(n, 𝐑). However, the union {x}∪{y} of two one-point sets is not invariant, because GL(n, 𝐑){x,y}=𝐑^n. What are binary G-spaces in which the union A∪ B of any invariant subsets A, B ⊂ X is invariant? The orbit of an element x∈ X of a binary G-space X is defined as the minimal invariant set [x]⊂ X containing x. Obviously, x∈ G(x,x)⊂ [x] for all x∈ X. Therefore, if G(x,x) is invariant, then G(x,x)= [x]. 
The following problem naturally arises. In what binary G-spaces is the set G(x,x) invariant? The solution of this problem in a special case is given by Theorem <ref> below. A binary G-space X is said to be distributive if g(h(x,x'), h(x,x''))=h(x,g(x', x'')) for all x, x', x''∈ X and g, h∈ G. A group G with binary action (<ref>) (see Example <ref>) is a distributive binary G-space. Indeed, for any g,h,k,k_1,k_2∈ G, we have g(h(k,k_1), h(k,k_2))=g(khk^-1k_1, khk^-1k_2) =khk^-1k_1gk_1^-1kh^-1k^-1khk^-1k_2 =khk^-1k_1gk_1^-1k_2 =h(k,k_1gk_1^-1k_2)=h(k,g(k_1, k_2)). If X is a distributive binary G-space, then the set G(x,x) is invariant for any x∈ X. We must show that g(h(x,x), k(x,x))∈ G(x,x) for any h(x,x), k(x,x)∈ G(x,x) and any g∈ G. By virtue of (<ref>), we have g(h(x,x), k(x,x))=g(h(x,x), h(x,h^-1k(x,x))) =h(x, g(x, h^-1k(x,x))) = h(x, gh^-1k(x, x)) =hgh^-1k(x, x)∈ G(x,x), because hgh^-1k∈ G. Is the converse of Theorem <ref> true? Given a distributive G-space X and x'∉ G(x,x), is the set G(x,x') invariant?
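The distributivity of the conjugation-type action g(h_1,h_2)=h_1gh_1^-1h_2 can also be confirmed by brute force on a small finite group; the sketch below does this for S_3 realized as permutations (an illustration written for this note, not part of the paper).

```python
# Check distributivity g(h(x, x'), h(x, x'')) = h(x, g(x', x'')) for the binary
# action g(h1, h2) = h1 g h1^{-1} h2, with G = S_3 acting on itself.
from itertools import permutations, product

S3 = list(permutations(range(3)))

def mul(p, q):               # composition (p q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    out = [0, 0, 0]
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

def act(g, h1, h2):          # the binary action g(h1, h2) = h1 g h1^{-1} h2
    return mul(mul(mul(h1, g), inv(h1)), h2)

for g, h, x, x1, x2 in product(S3, repeat=5):
    assert act(g, act(h, x, x1), act(h, x, x2)) == act(h, x, act(g, x1, x2))
```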
http://arxiv.org/abs/2307.04369v1
20230710065440
Exact generalized Turán number for $K_3$ versus suspension of $P_4$
[ "Sayan Mukherjee" ]
math.CO
[ "math.CO", "05C35" ]
New results on the dynamics of critical collapse Cheng-Gang Shao2 August 12, 2023 ================================================ Let P_4 denote the path graph on 4 vertices. The suspension of P_4, denoted by P_4, is the graph obtained via adding an extra vertex and joining it to all four vertices of P_4. In this note, we demonstrate that for n≥ 8, the maximum number of triangles in any n-vertex graph not containing P_4 is ⌊ n^2/8⌋. Our method uses simple induction along with computer programming to prove a base case of the induction hypothesis. Keywords: generalized Turán problem, suspension of a graph, computer programming. 2020 Mathematics Subject Classification: 05C35. § INTRODUCTION The generalized Turán number (n, T, H) is defined as the maximum number of copies of T in an n-vertex graph not containing H as a (not necessarily induced) subgraph. When T=K_2, this is the Turán number (n,H) of the graph. The first systematic study of (n, T, H) for T≠ K_2 was carried out by Alon and Shikhelman <cit.>. In more recent years, several researchers have studied the asymptotic behavior of (n, K_3, H) for the case T=K_3 (see, for example <cit.>). It is known that when χ(H)>3, (n,K_3,H)∼χ(H)-13/(χ(H)-1)^2· n^2, where χ(H) denotes the chromatic number of H <cit.>. Alon and Shikhelman <cit.> extensively study the case when χ(H)=2. Mubayi and the author <cit.> initiated the study of (n, K_3, H) for a simple family of graphs H with χ(H)=3. For any graph G, they denoted the suspension G as the graph obtained from G by adding a new vertex v and joining it with all vertices of G. They proceeded to analyze the asymptotic behavior of (n,K_3,G) for different bipartite graphs G. One of the several bipartite graphs they consider is the path P_4 on four vertices. It was shown that for any n≥ 4, n^2/8-O(1)≤(n, K_3, P_4) < n^2/8+3n. An exact result for sufficiently large n was given by Gerbner <cit.> using the technique of progressive induction. In particular, they prove that for a number K≤ 1575 and n≥ 525+4K, (n,K_3,P_4) = ⌊ n^2/8⌋. They mention that a proof of the upper bound of (<ref>) for n=8,9,10,11 together with induction would suffice to prove (<ref>) for every n≥ 8. In this note, we leverage this idea to determine the exact value of (n, K_3, P_4) for every n≥ 4, thus closing the gap in the literature for this extremal problem. For n≥ 8, (n, K_3, P_4) = ⌊ n^2/8⌋. For n=4,5,6,7 the values of (n, K_3,P_4) are 4,4,5,8 respectively. The lower bound constructions for Theorem <ref> are different for the cases n∈{4,5,6,7} and n≥ 8. Figure <ref> illustrates graphs on n vertices for n∈{4,5,6,7} that achieve the maximum number of triangles. In fact, we shall see later in Section <ref> that these constructions are unique up to isomorphism. The general lower bound construction considered in <cit.> (for n≥ 8) was the complete bipartite graph K_⌊ n/2⌋, ⌈ n/2⌉ with a matching in any of the even parts. A short case analysis shows that the total number of triangles in these graphs is given by ⌊ n^2/8⌋, hence proving the lower bound in Theorem <ref> for general n. Thus, the main goal of this manuscript is to prove that these lower bounds on (n,K_3,P_4) are tight. This work is organized as follows. We present some preliminaries in Section <ref>. Then, we show the upper bound of Theorem <ref> for n≥ 5 in Section <ref>. Finally, we make some concluding remarks regarding uniqueness of the lower bound constructions in Section <ref>. 
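As a sanity check of the lower bound, the following sketch (plain Python, written for this note) builds a nearly balanced complete bipartite graph with a matching in an even part, counts its triangles, and verifies that no vertex neighborhood contains a P_4. For n ≡ 2 (mod 4), where both parts of the balanced split are odd, the split is shifted to (n/2 − 1, n/2 + 1) so that an even part exists; that shift is an assumption about the intended construction.

```python
# Build the lower-bound construction, count triangles, and check it is free of
# suspensions of P_4 (no vertex whose neighborhood contains a 4-vertex path).
from itertools import combinations, permutations

def construction(n):
    a = n // 2
    if a % 2 == 1 and (n - a) % 2 == 1:      # n = 2 (mod 4): shift so one part is even
        a -= 1
    A, B = list(range(a)), list(range(a, n))
    edges = {frozenset((u, v)) for u in A for v in B}    # complete bipartite part
    side = A if len(A) % 2 == 0 else B                   # perfect matching in an even part
    edges |= {frozenset(side[i:i + 2]) for i in range(0, len(side) - 1, 2)}
    return edges

def adj(edges, u, v):
    return frozenset((u, v)) in edges

def triangles(n, edges):
    return sum(adj(edges, a, b) and adj(edges, b, c) and adj(edges, a, c)
               for a, b, c in combinations(range(n), 3))

def has_suspended_P4(n, edges):
    for v in range(n):
        nbrs = [u for u in range(n) if adj(edges, u, v)]
        for p in permutations(nbrs, 4):      # P_4 as a (not necessarily induced) subgraph
            if adj(edges, p[0], p[1]) and adj(edges, p[1], p[2]) and adj(edges, p[2], p[3]):
                return True
    return False

for n in range(8, 14):
    E = construction(n)
    assert triangles(n, E) == n * n // 8     # matches the claimed lower bound
    assert not has_suspended_P4(n, E)
```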
§ PRELIMINARIES Throughout the rest of this paper, we assume without loss of generality that all graphs are edge-minimal. This implies that every edge of the graphs considered must lie in a triangle, as we can simply delete edges that do not help forming a triangle. We also assume that the vertex set of any n-vertex graph in the rest of this section is {0,…,n-1}, and abuse notation to represent a K_3 on vertex subset {a,b,c} as simply abc. Let n(G), e(G) and t(G) denote the number of vertices, edges and triangles in G, respectively. Now we recall some definitions and state a two important lemmas from <cit.> and <cit.> which are instrumental in our proof. For a graph G, two edges e and e' are said to be triangle-connected if there is a sequence of triangles {T_1,…,T_k} of G such that e∈ T_1, e'∈ T_k, and T_i and T_i+1 share a common edge for every 1≤ i ≤ k-1. A subgraph H⊆ G is triangle-connected if e and e' are triangle-connected for every edges e and e' of H. A subgraph H⊆ G is a triangle block (or simply a block) if it is edge-maximally triangle-connected. By definition, the triangle blocks of any graph G are edge-disjoint. Let B_s denote the book graph on (s+2) vertices, consisting of s triangles all sharing a common edge. Let this common edge be called the base of the B_s. The following lemma characterizes the triangle blocks of any P_4-free graph G. Every triangle block of a P_4-free graph G is isomorphic to a K_4 or a B_s for some s≥ 1. Let H⊆ G be an arbitrary triangle block. If H contains only one or two triangles, it is isomorphic to B_1 or B_2. Suppose H contains at least three triangles. Let two of them be abx_1 and abx_2 (see Figure <ref>). If another triangle is of the form ax_1y for some y∈ V(H), then there are two possible cases. If y≠ x_2, then N_H(a) contains the 4-path x_2bx_1y, a contradiction. Otherwise if y=x_2, then the vertices a,b,x_1,x_2 create a K_4, and this K_4 is a triangle block by itself. Similarly, if a triangle contained any of the edges bx_1, ax_2, bx_2, we would end up with a K_4-block, and this block cannot be extended any further. Therefore all triangles in H would intersect the edge ab, implying H≅ B_s for some s≥ 1. Suppose G is an n-vertex P_4-free graph containing no K_4. Then, we have t(G)≤⌊ n^2/8⌋. By Lemma <ref>, all triangle blocks of G are isomorphic to B_s for some s≥ 1. Let G' be obtained from G by deleting the base edges of each of the books (if s=1, delete any arbitrary edge). As each triangle of G contains two distinct edges from G', we have t(G)=e(G')/2. By Mantel's theorem, e(G')≤⌊ n^2/4⌋, implying t(G)≤1/2⌊ n^2/4⌋, i.e. t(G)≤⌊ n^2/8⌋. § UPPER BOUNDS In order to prove that (n,K_3,P_4)≤ K for some fixed n and K, we need to show that any n-vertex graph containing at least K+1 triangles contains a copy of P_4. §.§ The cases 5≤ n≤ 8: brute force While a case-by-case analysis is tractable by hand for n=5 for example, we quickly run into several possible configurations while trying to prove (8,K_3,P_4)=8. This is where we turn to a computer-generated check. For example, to prove that all 8-vertex graphs with more than 9 triangles is P_4-free, we can assume that 012 and 013 are two triangles in some 8-vertex graph G containing 9 triangles. Then triangles that have an edge from the set {02, 03, 12, 13} and have a node from {4,5,6,7} are excluded from G since any of these patterns form a P_4. This excludes 16 triangles. Hence the plausible triangles that G may contain other than 012 and 013 are 83 - 18 = 38 in number. 
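The counting step behind the brute-force check (which verifies that every 8-vertex graph with exactly 9 triangles contains a suspension of P_4) can be reproduced directly. The short sketch below is written for this note and is independent of the notebook in the authors' repository.

```python
# List the candidate triangles on 8 vertices that remain once 012 and 013 are fixed.
from itertools import combinations

fixed = [frozenset(t) for t in ((0, 1, 2), (0, 1, 3))]
all_triples = [frozenset(t) for t in combinations(range(8), 3)]

# triangles using an edge from {02, 03, 12, 13} and a vertex from {4,...,7} are excluded:
# together with the fixed triangles 012 and 013 they create a vertex whose neighborhood
# contains a P_4
excluded = {frozenset(e) | {v}
            for e in ((0, 2), (0, 3), (1, 2), (1, 3)) for v in range(4, 8)}

plausible = [t for t in all_triples if t not in excluded and t not in fixed]
print(len(all_triples), len(excluded), len(plausible))   # 56, 16, 38
```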
We generate 387≈ 1.26× 10^7 possible graphs, filter out the ones that have exactly 9 triangles, and check for P_4's in each of them. Our program is available at the Github repository in <cit.>. We run |triangle_count.ipynb|. Our computation shows that ex(n,K_3,P_4)=4,5,8,8 for n=5,6,7,8, respectively. The total computation time required for (n,t)=(8,9) on 7 threads of an laptop processor running at 1.80GHz was around 18 minutes. §.§ The cases 9≤ n ≤ 11: identifying K_4 The main idea behind these cases is to follow the steps of the proof in <cit.>, Section 5.2. Suppose (n,t)∈{(9,11), (10,13), (11, 16)}, and G is an (edge-minimal) n-vertex graph with t triangles. Then G must contain a P_4. For the sake of contradiction, assume that G was P_4-free. If G was also K_4-free, then by Lemma <ref>, t(G)≤⌊ n^2/8⌋ = 10, 12, 15 for n=9,10,11, contradicting our initial assumption on t(G). Therefore G must contain a K_4. Let this K_4 be induced by vertex subset S={u_0,u_1,u_2,u_3}⊂ V(G). Define X_i := N(u_i) - S for 0≤ i ≤ 3. As G[S] is a triangle block, X_i∩ X_j=∅ for every i≠ j. Further, ∑_i=0^3|X_i|≤ n-4. Without loss of generality assume |X_0|≤⋯≤ |X_3|. Now we consider each case separately. * Case 1. (n,t)=(9,11): In this case, ∑_i=0^3|X_i|≤ 5. If |X_1|>0, by edge-minimality we would have |X_1|≥ 2, implying |X_1|+|X_2|+|X_3|≥ 6, a contradiction. Thus, |X_0| = |X_1| = 0, and by a similar argument, |X_2|≤ 2. This means the vertex u_2 lies in at most one triangle outside of G[S]. Let G' be obtained by deleting {u_0,u_1,u_2} from G. Clearly n(G')=6 and t(G')≥ t(G)-5 = 6. As (6,K_3,P_4)=5 by the discussion in Section <ref>, G' has a P_4, a contradiction. * Case 2. (n,t)=(10,13): Here, ∑_i=0^3 |X_i|≤ 6. By a similar analysis as before, we can infer that |X_0|=0 and |X_1|≤ 2. If |X_1|=0, we could consider G'=G-{u_0,u_1}, which would have n(G')=8 and t(G')=13-4=9, which would lead us to a P_4 since (8,K_3,P_4)=8 by the calculation in Section <ref>. Thus, we have |X_0|=0, |X_1|=2, and hence |X_2|=|X_3|=2. Now, if we consider G”=G-S, we have n(G”)=6 and t(G”) = 13-4-3=6, again implying that G” has a P_4. * Case 3. (n,t)=(11,16): For this pair of (n,t), we have ∑_i=0^3|X_i|≤ 7, implying |X_0|=0 again. Since u_0 lies in exactly three triangles of G[S], G'=G-{u_0} has n(G')=10 and t(G')=13, leading us to the previous case. In either of the three cases, we obtain a contradiction, finishing the proof for these cases. §.§ The case n≥ 12: identifying K_4 Now that we have proved (n,K_3,P_4) = ⌊ n^2/8⌋ for 8≤ n ≤ 11, we are now ready to handle the general case using induction on n. Our proof follows the idea of <cit.> with a more careful analysis to obtain the desired bound. Let us assume that (k,K_3,P_4)=⌊ k^2/8⌋ for all 8≤ k ≤ n-1. We note that a simple case analysis leads to ⌊ n^2/8⌋ - ⌊ (n-1)^2/8⌋ ≥⌊ n/4⌋ ⌊ n^2/8⌋ - ⌊ (n-4)^2/8⌋ = n-2. For the sake of contradiction, suppose G is an n-vertex P_4-free graph with t(G)≥⌊ n^2/8⌋ +1. For a subset U⊂ V(G), let us denote by t(U) the number of triangles containing at least one vertex from U. By (<ref>), we may assume that |U|=1 t(U) ≥⌊ n/4⌋ +1, |U|=4 t(U) ≥ n-1. Now, notice that by Lemma <ref>, G must contain a K_4. As in the previous section, let S={u_0,u_1,u_2,u_3} induce this K_4, and denote X_i=N(u_i)-S for 0≤ i ≤ 3. Again, |X_i∩ X_j|=∅ for every i≠ j. Observe that t(S)= ∑_i=0^3 e(X_i)+4, and so by (<ref>), ∑_i=0^3 e(X_i)≥ n-5. On the other hand, since each X_i is P_4-free, we have ∑_i=0^3 e(X_i)≤∑_i=0^3 |X_i| ≤ n-4. 
Hence, ∑_i=0^3 e(X_i)∈{n-5, n-4} This implies that e(X_i)=|X_i| for at least three u_i∈ S. Assume that e(X_i)=|X_i| for 0≤ i ≤ 2 and e(X_3)∈{|X_3|-1, |X_3|}. This also means that G[X_i] are vertex-disjoint unions of triangles for 0≤ i≤ 2, and X_3 is a union of triangles and a star on r vertices for some r≥ 0. Further, (<ref>) gives us the bound |X_i|≥⌊ n/4⌋ -2 . We now continue with a more detailed analysis of the neighborhoods of vertices in G. In what follows, let x_i denote the size of X_i. For a subset A⊂ V(G), let 𝒯(A) denote the set of triangles in G[A]. We now consider two cases. Case 1: ∑_i=0^4x_i=n-5. In this case, note that since ∑_i=0^3 e(X_i) = n-5, we have e(X_3)=x_3. Thus, the subgraphs G[X_i] are all disjoint unions of triangles, and there is exactly one vertex y in V(G)-⋃_i X_i ∪ S, and thus 3| n-5, implying n≡ 2 3. Moreover, (<ref>) implies x_i≥ 3, and hence n≥ 17. Now, observe that for G'=G-{y}, ∑_v∈ V(G) v = ∑_i=0^3∑_vwz∈𝒯(X_i)(_G'v+_G'w+_G'z) + ∑_v∈ S v + 2 y. We proceed by upper bounding each term of (<ref>) separately. * Let vwz∈𝒯(X_0). For any j≠ 0, as N(v)-X_0-S-{y} cannot contain two adjacent vertices from the same X_j, v can only be adjacent to at most one vertex from each triangle of X_j. Finally, v is adjacent to exactly three nodes from X_0∪ S, leading to _G' v + _G' w + _G' z ≤ 3(x_1/3 + x_2/3 + x_3/3) + 9 = (x_1+x_2+x_3)+9. By repeating the same argument over all x_i/3 triangles from 𝒯(X_i), we have ∑_vwz∈𝒯(X_i)(_G'v+_G'w+_G'z) ≤x_i/3∑_j≠ ix_j + 3x_i. * As y is not adjacent to any vertex of S, we have ∑_v∈ S v = (x_0+x_1+x_2+x_3) + 12 = n+7. * For each i, N(y)∩ X_i has at most x_i/3 vertices, as otherwise by the pigeonhole principle we would have v,w∈ N(y)∩ X_i that are adjacent, leading to a triangle yvw sharing an edge with the K_4 containing u_i, v and w. Further, y does not have a neighbor in S. Thus, y ≤x_0+x_1+x_2+x_3/3 = n-5/3. Putting these inequalities together and noting that 3t(G)≤∑_v∈ V(G) v, (<ref>) gives us 3⌊ n^2/8⌋ + 3 ≤ 3t(G) ≤2/3∑_i<jx_ix_j + 3(x_0+x_1+x_2+x_3)+(n+7) + 2/3(n-5) = 1/3(n-5)^2 - 1/3∑_i=0^3x_i^2 + 14n-34/3. On the other hand, we note that by the Cauchy-Schwarz inequality, ∑_i=0^3x_i^2 ≥1/4(n-5)^2. Therefore, 3⌊ n^2/8⌋ + 3 ≤1/4(n-5)^2+14n-34/3 = 1/12(3 n^2 + 26 n - 61), A contradiction to n≥ 17. This completes the proof in this case. ▪ Case 2: ∑_i=0^4x_i=n-4. In this case, recall that G[X_i] are disjoint unions of triangles for 0≤ i≤ 2, and X_3 is a union of triangles and a star on r≥ 0 vertices. Let us denote this star as S^∗ = {c,ℓ_1,…, ℓ_r-1} where c is the center and ℓ_j the leaves. We now continue with the exact same analysis of the neighborhoods of vertices in G as in the previous case. For a subset A⊂ V(G), let 𝒯(A) denote the set of triangles in G[A]. First, we note that ∑_v∈ V(G) v = ∑_i=0^2∑_vwz∈𝒯(X_i)( v+ w+ z) + ∑_v∈ X_3 v + ∑_v∈ S v. Let us now upper bound each term in (<ref>) separately. * Let vwz∈𝒯(X_0). Clearly N(v)-X_0-S cannot contain two adjacent vertices from the same X_j, j≠ 0. Therefore, v can only be adjacent with at most one vertex from each triangle of X_j for j≠ 0. Moreover, N(v)∩ S^∗, N(w)∩ S^∗ and N(z)∩ S^∗ are disjoint, implying v + w + z ≤ 3(x_1/3 + x_2/3 + x_3-r/3) + r + 9 = (x_1+x_2+x_3) + 9. Similar inequalities hold for each of the x_i/3 triangles in 𝒯(X_i), 0≤ i≤ 2. In particular, we have ∑_vwz∈𝒯(X_i)( v+ w+ z) ≤x_i/3∑_j≠ ix_j + 3x_i. * Let v∈ X_3. Then, N(v)-X_3-S can have at most one vertex from each triangle of X_i. 
Thus, v ≤{[ 1/3(x_0+x_1+x_2) + 3, v∉S^∗,; 1/3(x_0+x_1+x_2) + r, v = c,; 1/3(x_0+x_1+x_2) + 2, v ∈ S^∗-{c}. ]. Thus, if r≥ 1, ∑_v∈ X_3 v ≤x_3(x_0+x_1+x_2)/3 + 3(x_3-r) + r + 2(r-1) = x_3(x_0+x_1+x_2)/3 + 3x_3 - 2, and if r=0, ∑_v∈ X_3 v ≤x_3(x_0+x_1+x_2)/3 + 3x_3. We use the latter inequality as it holds for any value of r. * Finally, we have ∑_v∈ S v = (x_0+x_1+x_2+x_3)+12 = n+8. Therefore, (<ref>) along with 3t(G)≤∑_v∈ V(G) v, gives us 3t(G) ≤2/3∑_i<jx_ix_j + 3(x_0+x_1+x_2+x_3) + n + 8. = 1/3(n-4)^2 - 1/3∑_i=0^3 x_i^2 + 4n - 4 Observe that by Cauchy-Schwarz, ∑_i=0^3x_i^2≥1/4(n-4)^2. Hence, (<ref>) implies, 3t(G)≤1/4(n-4)^2 + 4n-4 t(G)≤1/12 n(n+8). By t(G)≥⌊ n^2/8⌋ + 1, this implies n≤ 14. Note that as n-4 = ∑_i=0^3x_i ≥ 9+x_3, we would have x_3≤ 1. By (<ref>), this would mean x_3 = 1. However, this contradicts edge-minimality of G, as the edge between u_3 and the only vertex of X_3 would not be incident to any triangle in G, again leading to a contradiction in this case. ▪ This completes the proof of the induction step, implying (n,K_3,P_4)≤⌊ n^2/8⌋ for all n≥ 12. § CONCLUDING REMARKS: UNIQUENESS For n≥ 8, one may ask whether the lower bound construction of K_⌊ n/2⌋, ⌈ n/2⌉ with a matching in any of the even parts is unique or not. In particular, our proof of Theorem <ref> implies that if the extremal construction contained a K_4, then ⌊ n^2/8⌋≤1/12n(n+8). This implies n≤ 16, and indeed, setting x_i=3 for every i leads us to an equality case in Case 2. Our proof therefore gives us the following construction from Figure <ref> for n=16 consisting entirely of K_4-blocks: consider a K_4 given by S={u_0,u_1,u_2,u_3}. For 0≤ i≤ 3, let N(u_i)-S consist of the triangles b_io_ir_i, where the b_i's are colored blue, o_i's olive and r_i's red. Suppose the blue, red and olive vertices each form a K_4 (the diagonal edges are omitted in Figure <ref> for clarity). Clearly each vertex neighborhood has 6 edges, leading to a total of 16· 6/3=32 triangles, and hence this graph is a valid extremal configuration for n=16. It seems many extremal constructions are possible for smaller values of n whenever divisibility and structural constraints are satisfied. For example, when n=8, we enumerate in our repository <cit.> all extremal constructions with 8 triangles programmatically, and these constructions are comprised of either two edge-disjoint K_4's, or only books. However, our proof of Theorem <ref> provides uniqueness of the extremal configuration for n≥ 17. § ACKNOWLEDGMENTS This work was supported by the Center of Innovations for Sustainable Quantum AI (JST Grant Number JPMJPF2221). plain
http://arxiv.org/abs/2307.04861v2
20230710191726
Bragg-Primakoff Axion Photoconversion in Crystal Detectors
[ "James B. Dent", "Bhaskar Dutta", "Adrian Thompson" ]
hep-ph
[ "hep-ph" ]
apsrev4-1 Department of Physics, Sam Houston State University, Huntsville, TX 77341, USA Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University, College Station, TX 77845, USA Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University, College Station, TX 77845, USA Axions and axion-like pseudoscalar particles with dimension-5 couplings to photons exhibit coherent Primakoff scattering with ordered crystals at keV energy scales, making for a natural detection technique in searches for solar axions. We find that there are large suppressive corrections, potentially greater than a factor of 𝒪(10^3), to the coherent enhancement when taking into account absorption of the final state photon. This effect has already been accounted for in light-shining-through-wall experiments through the language of Darwin classical diffraction, but is missing from the literature in the context of solar axion searches that use a matrix element approach. We extend the treatment of the event rate with a heuristic description of absorption effects to bridge the gap between these two languages. Furthermore, we explore the Borrmann effect of anomalous absorption in lifting some of the event rate suppression by increasing the coherence length of the conversion. We study this phenomenon in Ge, NaI, and CsI crystal experiments and its impact on the the projected sensitivities of SuperCDMS, LEGEND, and SABRE to the solar axion parameter space. Lastly, we comment on the reach of multi-tonne scale crystal detectors and strategies to maximize the discovery potential of experimental efforts in this vein. MI-HET-804 Bragg-Primakoff Axion Photoconversion in Crystal Detectors Adrian Thompson ^1Central European University, Quellenstraße 51, 1100 Vienna, Austria ^2Computational Social Science - Research Center for Educational and Network Studies, Centre for Social Sciences, Tóth Kálmánutca 4,Budapest, 1097, Hungary ^3Department of Social Research Methodology, Faculty of Social Sciences, Eötvös Loránd University, Pázmány Péter s étány 1/A, Budapest, 1117, Hungary. ^4 National Laboratory for Health Security, Hungary. ^5 Rényi Institute of Mathematics, Reáltanodautca 13-15, Budapest, 1053, Hungary. ^*Corresponding author: [email protected] =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Axions and axion-like particles (ALPs) - potentially long-lived pseudoscalars with weak couplings to the Standard Model (SM) that may have masses from the sub-eV to the GeV - are central features in the landscape of solutions to the strong CP problem <cit.>, dark matter problem <cit.>, and in the spontaneous breaking of generic global symmetries <cit.>. In addition to being dark matter candidates, axion-like particles in the keV to sub-eV mass range produced in the sun are well motivated <cit.>. 
Searches were carried out by several experimental collaborations by looking for a →γ Primakoff conversion in solid crystal detectors, including DAMA <cit.> (NaI), CUORE <cit.> (TeO_2), Edelweiss-II <cit.>, SOLAX <cit.>, COSME <cit.>, CDMS <cit.>, and Majorana <cit.> (Ge). Other upcoming experiments like SuperCDMS <cit.>, LEGEND <cit.>, and SABRE <cit.> are projected to greatly expand coverage over the axion parameter space and test QCD axion solutions to the strong CP problem in the eV mass range. These experiments aim to take advantage of coherence in the conversion rate when axions satisfy the Bragg condition, enhancing the detection sensitivity by orders of magnitude relative to incoherent scattering. Searching for solar axions via their coherent conversion in perfect crystals was first treated by Buchmüller & Hoogeveen <cit.> using the Darwin theory of classical X-ray diffraction under the Bragg condition <cit.>. The authors also alluded to potential enhancements in the signal yield when one considers the symmetrical Laue-case of diffraction for the incoming ALP waves. Yamaji et al. <cit.> treated this case thoroughly for the 220 plane of cubic crystals, also using the classical theory, and included the effect of anomalous absorption, also known as the Borrmann effect. It was shown by these authors that an enhancement to the signal yield was possible, replacing the Bragg penetration depth (L_bragg∼ 1 μm) with the Borrmann-enhanced attenuation length (ranging from 10 μm all the way to centimeter scales). The effect of anomalous absorption of X-rays was first shown by Borrmann <cit.>, and theoretically explained by Zachariasen <cit.> and other later authors (Battermann <cit.>, Hirsch <cit.>). A quantum mechanical treatment was offered by Biagini <cit.> in which the Borrmann effect was explained by the interference of statistical ensembles of the so-called |α⟩ and |β⟩ Bloch waves. There have been numerous modern studies that utilize the Borrmann effect, notably as in photon-photon dissipation on Bragg-spaced arrays of superconduncting qubits <cit.>, and in measuring quadrupole transitions in X-ray absorption spectra <cit.>. Now, the calculation of the event rates expected for the Primakoff conversion of solar axions coherently with a perfect crystal was treated in a more traditional, particle physics-based approach in refs. <cit.> and it was applied to derive many of the constraints set by crystal-based solar axion experiments including DAMA, CUORE, Edelweiss-II, SOLAX, COSME, CDMS, and Majorana Demonstrator <cit.>. However, absorption effects in Bragg and Laue case diffraction were not considered in refs. <cit.>; indeed, when comparing the event rates between these references and those presented in light-shining-through-wall (LSW) experiments, which used the classical Darwin theory approach (e.g. ref. <cit.> and more recently ref. <cit.>), there is a clear inconsistency. While the event rates in the LSW literature only consider the coherent volume of the crystal up to the relevant attenuation length (λ∼ 1 μm in the Bragg diffraction case or λ≲ 100 μm in the Laue-case), the solar axion searches have considered the whole volume of the crystal to exhibit coherence. In this work, we show that such effects reduce the expected event rates potentially up to the 𝒪(10^3) level depending on the assumed crystal size (and therefore, the assumed coherent volume enhancement) and material. 
Although this may impact the existing sensitivities set by solar axion searches in solid crystals, measures can be taken to optimize suppression of the event rate due to absorption effects and recover some or potentially all of the coherent volume. In  <ref> we re-derive the event rate formula for solar axion Primakoff scattering under the Bragg condition, and in  <ref> we discuss the anomalous enhancement to the absorption length under the Borrmann effect and numerically estimate the level of suppression in the coherent sum. In  <ref> we write down the event rates for a perfect crystal exposed to the solar axion flux with and without the absorption effects and discuss the relevant phenomenology. In  <ref> we project the impact on sensitivities with and without absorption effects for SuperCDMS, LEGEND-200, LEGEND-1000, SABRE, and multi-tonne benchmark detector setups and discuss possibilities to restore sensitivity from coherence in  <ref>. Finally, in  <ref> we conclude and discuss further work. § COHERENCE AND ABSORPTION In order to show how photon absorption in coherent Bragg-Primakoff scattering affects the event rate, it is worth going through a pedagogical review of what we mean by coherent scattering and first assume that no absorption takes place. For the reader who is familiar with coherence in neutrino scattering, please refer to the approach illustrated by Bednyakov and Naumov <cit.> in which coherent neutrino-nucleus scattering is calculated by taking a sum over N scattering centers in a nucleus. Let f(k⃗,k⃗^') be the Primakoff scattering matrix element for a single atomic target, for an incoming ALP 3-momentum k⃗ and outgoing γ 3-momentum k⃗^'. Written in terms of the atomic form factor F_A, f = ℳ_free F_A (q) where ℳ_free is the single-atomic scattering amplitude, q is the momentum transfer, with the angle of scattering defined by k⃗·k⃗^' = E_γ k cos2θ, averaged over spins and taken in the limit k ≫ m_a, m_N ≫ k,E_γ <cit.>, |⟨ℳ_free||⟩^2 = 8 e^2 g_aγ^2q^4 E_γ^2 m_N^2 k^2 sin^2 2θ for a nuclear mass m_N. The real atomic scattering form factor can be taken from ref. <cit.> which is defined such that F_A(0) = Z; F_A(q) = Z r_0^2 q^21 + r_0^2 q^2 for atomic number Z and screening constant parameterization r_0=184.15 e^-1/2 Z^-1/3 / m_e, where m_e is the electron mass. Similarly, we sum over the N scattering centers in a crystal; ℳ(k⃗,k⃗^') = ∑_j=1^N f_j(k⃗,k⃗^') e^i(k⃗^' - k⃗)·r⃗_j where e^i(k⃗^' - k⃗)·r⃗_j is a phase factor that comes from assuming plane wave solutions for the in and out states. This assumption is key; for atomic scattering in vacuum, the eigenstates of the final state photon should be a spectrum of plane waves. If we square the total matrix element, we get |ℳ(k⃗,k⃗^')|^2 = ∑_i=1^N | f_i|^2 + ∑_j≠ i^N ∑_i=1^N f_j^† f_i e^-iq⃗·(r⃗_i - r⃗_j) taking q⃗≡k⃗ - k⃗^'. The first (diagonal) term is the incoherent piece, while the second term is usually suppressed by the average destructive interference of the phase factors. Using the Laue diffraction condition <cit.>, q⃗·(r⃗_i - r⃗_j) = 2π n for n∈ℤ, then the phase factor in the exponential goes to one and the scattering is coherent. In this limit, the diagonal term is subdominant and the final matrix element squared tends to ℳ^2 → N^2 f^2 and we have full coherence. See appendix <ref> for a derivation of the event rate in full with this approach. Now consider interactions of the final state γ with the crystal lattice, including the absorption and scattering effects. 
Pragmatically, we modify the plane wave solutions of the final state photon to that of one in a dielectric medium, k⃗^'→nk⃗^', n = n - i κ, where n̅ is the complex index of refraction with real part n and imaginary part κ. Making this modification, we have e^i n̅k⃗^'· (r⃗_i - r⃗_j) → e^i n k⃗^'· (r⃗_i - r⃗_j) e^-μ/2 |k̂^'· (r⃗_j - r⃗_i)|, The absorption coefficient μ (which can also be expressed in terms of attenuation length or mean free path λ = 1μ) is related to the imaginary part of the index of refraction through μ≡ 2 κ |k⃗|. Conceptually, this factor encodes the effect of a reduced coherent interference amplitude between any two scattering centers, since a photon plane wave sourced at one scattering center will have been attenuated after reaching another scattering center. We note that Eq. <ref> and Eq. <ref> are heuristic modifications, since the attenuated plane wave solution is not a true eigenstate of the interaction Hamiltonian, but rather a simple ansatz made to estimate the phenomenology of absorption. For further convenience, we use z_ij≡|k̂^̂'̂·(r⃗_i-r⃗_j)| and λ = 1/μ. We then have |ℳ(k⃗,k⃗^')|^2 = ∑_i=1^N | f_i|^2 + ∑_j≠ i^N ∑_i=1^N f_j^† f_j e^-iq⃗·(r⃗_i - r⃗_j)e^-z_ij/(2λ) After using the Laue diffraction condition q⃗·(r⃗_i - r⃗_j) = 2π n and several manipulations of the sum, we find that |ℳ(k⃗,k⃗^')|^2 ≳ f^† f∑_j≠ i^N λ L_x L_y NV ≳ f^† f N^2 λL_z Comparing the proportionailty in Eq. <ref> to the usual result ∝ N^2, we see that the coherent volume is V ×λ / L_z, and the total scattering rate is suppressed by a factor λ/L_z, and is now more consistent with Darwin theory calculations <cit.>. This inequality above is strictly a lower limit because, as we will show in  <ref>, the suppression to the coherent sum by the absorptive sum, which we label as I, I ≡∑_j≠ i^N ∑_i=1^N e^-z_ij/(2λ), may be mitigated under certain conditions. Therefore, the suppression factor λ / L_z serves as a pessimistic guiding estimate, but in principle we should compute the sum in Eq. <ref> explicitly. § ANOMALOUS ABSORPTION AND THE BORRMANN EFFECT The suppression to the event rate can be alleviated by considering the anomalous enhancement to the absorption depth or mean free path λ, which, in crystallographic diffraction, is not strictly proportional to the the inverse photon cross section multiplying into the material number desnsity, 1/(nσ). Take for instance ref. <cit.> in which the authors have found that for the Laue-case conversion of ALPs, the attenuation length is modified as L_att→ L_α / β≡ 2L_att,α / β(1 - exp(-L2L_att,α / β) ) where L_att,α / β = L_att1 ∓ϵ and ϵ is a ratio involving the imaginary parts of the scattering form factor. These modifications come from the anomalous dispersion or anomalous absorption effect, or the Borrmann effect. It is an effect that occurs for so-called “Bloch waves" α and β that form in the crystal, discussed further in refs. <cit.>. The total scattering form factor can be decomposed into the real and imaginary parts <cit.>; f = f^0 + Δ f^' + i Δ f^'' where f^0 is the atomic form factor, usually given as the Fourier transform of the charge density; f^0(q) ≡∫ d^3 x⃗ρ(x⃗) e^i q⃗·x⃗ The second term in the real part of the form factor is the anomalous form factor Δ f^', and Δ f^'' is the imaginary part of the form factor associated with absorption. 
From Batterman <cit.>, the anomalous absorption due to the Borrmann effect modifies the absorption coefficient μ_0 = 1/λ as 1/λ = μ_eff = μ_0 [1 - F^''(hkl)F^''(000)] Here F^''(hkl) is the combination of structure function and imaginary form factor, F^''(hkl) = S(hkl) Δ f^''. The ratio in the second term of the expression is the Borrmann parameter, usually denoted as ϵ[In ref. <cit.>, they use κ.]. More explicitly, studies by Wagenfield have related the Borrmann parameter to the quadrupole photoelectric cross section <cit.>; ϵ≡ D (1 - 2 sin^2θ_B σ^Q/σ_PE) |S(h,k,l)|/|S(0,0,0)| where D is the Debye-Waller factor accounting for thermal vibrations in anomalous absorption, D = e^-B s^2 where s = sinθ / λ and B is a temperature-dependent constant. The Debye-Waller factors for cryogenic temperatures can be found in ref. <cit.> as well as fits to Δ f^'' for several pure materials of interest. Equivalently, we can express the Borrmann factor in terms of the imaginary form factor Δ f^'' and the quadrupole form factor Δ f^''_Q (which obeys the selection rules ℓ = ℓ^'± 2); ϵ≡ D (1 - 2 sin^2θ_B Δ f^''_Q/Δ f^'') |S(h,k,l)|/|S(0,0,0)| and Δ f^'' is more explicitly written as <cit.> Δ f^'' = ∑_ℓ^',m^'∑_n,ℓ,mπħ^2m_e| ∫ψ_f^*(r) _0 ·∇ e^i k·rψ_i(r) d^3 r |^2 While fits to this form factor can be found in ref. <cit.>, we can also usefully relate it to the vectorial form factor defined in ref. <cit.> and calculated using the (Python) or (C++) codes; Δ f^''(k) =πħ^2 m_e |f_1→2(k)|_^2 For more discussion and example functional forms of the Borrmann parameter, see appendix <ref>. While a dedicated study of the Borrmann parameter would require the calculation of the photoelectric quadrupole cross section σ^Q, Borrmann parameters for germanium crystal are already reported in the literature. We use the form factors derived in ref. <cit.> to estimate the Borrmann effect for each reciprocal lattice plane, giving us an anomalous attenuation length along the direction of travel of photons inside the detector I(k⃗,G⃗). We tabulate these and the corresponding values of ϵ in Table <ref> and plot the Borrmann parameters for Ge, Si, CsI, and NaI crystals in Fig. <ref>. The absorptive part of the coherent sum that remains after the Laue condition is met is I(k⃗,G⃗) ≡∑_j≠ i^N ∑_i=1^N e^-(k⃗-G⃗)/|k⃗-G⃗|·(r⃗_i - r⃗_j)/(2λ) which, when the Bragg condition is met, is strictly a function of k⃗ and G⃗ since the mean free path λ can be related via Eq. <ref>. Taking the Ge lattice as an example, with lattice constant d = 5.657 Å, we evaluate I(k⃗,G⃗) numerically by constructing a lattice of N Ge atoms. Since computing the full sum for a real crystal of centimeter length scale would require a huge number of evaluations (∝ N^2), we take a sparse sampling of N atoms across the physical crystal volume such that the sum is computationally feasible. The sum can then be evaluated in increments of increasing N to test for convergence. We find that a lattice of around N≃ 10^4 atoms in a cubic geometry is enough to obtain a convergent error of around 5%. Some evaluations of I(k⃗,G⃗) as a function of varying mean free path λ are shown in Fig. <ref> for several choices of scattering planes G⃗ and incoming wavevectors k⃗. One interesting phenomenon that can be seen in Fig. <ref> is that there are certain choices of k⃗^' = k⃗ - G⃗ such that k⃗^'· (r⃗_i - r⃗_j) = 0. In this special circumstance, while many of the terms in the coherent sum will tend to zero with decreasing λ, the terms where this dot product is zero will survive. 
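As a rough stand-in for the sparse-lattice convergence test just described, the following sketch estimates the normalized absorptive factor by sampling points uniformly in a crystal-sized volume and averaging the pairwise damping factors. The uniform sampling (instead of actual lattice sites), the normalization by N(N−1), and the hand-set attenuation length are simplifying assumptions made for illustration; in the text λ is fixed by the Bragg condition and the Borrmann-corrected μ_eff.

```python
# Estimate I(k, G): project sampled positions onto the outgoing-photon direction
# k' = k - G and average exp(-|k_hat' . (r_i - r_j)| / (2 lambda)); I -> 1 when
# lambda greatly exceeds the crystal size.
import numpy as np

def absorption_factor(k_vec, G_vec, lam_cm, size_cm=(6.4, 6.4, 6.4), N=2000, seed=1):
    rng = np.random.default_rng(seed)
    k_out = k_vec - G_vec
    k_hat = k_out / np.linalg.norm(k_out)
    r = rng.uniform(0.0, 1.0, size=(N, 3)) * np.asarray(size_cm)   # ~260 cm^3 cube
    s = r @ k_hat                                                  # projection onto k_hat'
    z = np.abs(s[:, None] - s[None, :])                            # |k_hat' . (r_i - r_j)|
    weights = np.exp(-z / (2.0 * lam_cm))
    return (weights.sum() - N) / (N * (N - 1))                     # drop i == j terms

# toy usage: suppression for a ~1 micron attenuation length vs. a ~1 cm one
k = np.array([1.0, 0.3, 0.2]); G = np.array([0.5, 0.5, 0.5])       # arbitrary directions
print(absorption_factor(k, G, lam_cm=1e-4), absorption_factor(k, G, lam_cm=1.0))
```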
What this means physically is that the plane in which r⃗_i - r⃗_j lies will avoid the decoherence from absorption as long as it remains orthogonal to k⃗^'. This relation can be made more apparent by considering the dot product under the Bragg condition; k̂^'· (r⃗_i - r⃗_j) = (G⃗/2 k⃗·Ĝ - G⃗/k)·(r⃗_i - r⃗_j) = 0 where we take k̂ = (cosϕsinθ, sinϕsinθ,cosθ), solving this equation for θ in the hkl = 400 case gives θ = ^-1(n_x cos (ϕ )-n_y sin (ϕ )/n_z)+π c_1 for n_x, n_y, n_z, c_1 ∈ℤ. This defines a family of lattice points that remain in the absorption sum I even in the limit λ→ 0, resulting a lower bound on I as shown for some example choices of k̂ in Fig. <ref>. This effect is similar in nature to the Laue-case diffraction enhancements where the photoconversion occurs down the scattering planes, minimizing the absorption, as studied in ref. <cit.>. In Fig. <ref> the absorption factor I is shown for the plane G⃗(1,1,1) as a function of azimuthal and polar angles of the incoming axion momentum θ, ϕ under the Bragg condition. This fixes k = E_γ for a given (θ, ϕ), and therefore the attenuation length λ given by Eq. <ref>. We see a two prominent features of mitigated absorption in the S-shaped band (tracing out a great circle on the 2-sphere), where (i) I→ 1 as these (θ,ϕ) combinations correspond to larger energies where the photon absorption cross section falls off as we move further into the S, and (ii) there is a jump discontinuity in the S-band due to an absorption edge in the photoelectric cross section for germanium at around 11 keV. § EVENT RATES The event rate for Primakoff coherent scattering with a perfect crystal worked out in <cit.> where full-volume coherence was assumed and there is no dependence on the attenuation length[Notice the factor of (ħ c)^3 rather than ħ c as written in ref. <cit.> for dimensional consistency.]; the event rate in an energy window [E_1, E_2] is dNdt = π g_aγ^2 (ħ c)^3 Vv_cell^2∑_G⃗[dΦ_adE_a|F_j (G⃗) S_j(G⃗) |^2/|G⃗|^2sin^2 (2θ) 𝒲] where S_j is the crystal structure factor (see appendix), F_j is the atomic form factor for species j, and dΦ_a / dE_a is the solar axion flux from Primakoff scattering and photon coalescence in the sun <cit.>. For the solar axion flux, we take the parameterized form appearing in ref. <cit.> which expands upon the form originally given by CAST <cit.> by accounting for the axion mass; see Eq. <ref>. The event rate in Eq. <ref> encodes the effect of detector energy resolution Δ within the function 𝒲; 𝒲(E_a, E_1, E_2, Δ) = 1/2(erf(E_a - E_1/√(2)Δ) - erf(E_a - E_2/√(2)Δ) ) The sum over the reciprocal lattice vectors G⃗ effectively counts the contributions to the coherent scattering from each set of lattice planes, illustrated in Fig. <ref>. The reader may refer to appendix <ref> for a compact description of the reciprocal lattice. At this stage the effect of absorption will simply modify the event rate, as seen in the previous section, by replacing the full coherent volume V → V × I(k⃗,G⃗) with λ = [μ_0 (1 - ϵ(G⃗))]^-1, giving dNdt = π g_aγ^2 (ħ c)^3 Vv_cell^2 ∑_G⃗[dΦ_adE_a·I(k⃗,G⃗)/|G⃗|^2 × |F_j (G⃗) S_j(G⃗) |^2 sin^2 (2θ) 𝒲] With sin^2 (2θ) simplifying to 4(Ĝ·k̂)^2 (1 - (Ĝ·k̂)^2) <cit.> where k̂ is the unit vector pointing toward the Sun's location, we have dNdt = π g_aγ^2 (ħ c)^3 Vv_cell^2 ∑_G⃗ I(k⃗,G⃗) [ dΦ_adE_a |F_j (G⃗) S_j(G⃗) |^2 ×4(Ĝ·k̂)^2 (1 - (Ĝ·k̂)^2)/|G⃗|^2𝒲] At this stage, we have also used the Bragg condition E_a = ħ c|G⃗|^2 / (2 k̂·G⃗). 
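To illustrate which energies the sum over G⃗ picks out, the sketch below enumerates reciprocal lattice vectors for a Ge crystal and applies the Bragg condition for a given solar direction. It is a simplified illustration written for this note: the standard diamond-lattice selection rule stands in for the full structure-factor weighting, the crystal axes are assumed aligned with the lab frame, and the solar flux, atomic form factors, Borrmann factors, and resolution function 𝒲 are omitted.

```python
# Bragg condition E_a = hbar*c |G|^2 / (2 k_hat . G) for reciprocal lattice vectors
# G = (2*pi/a)(h, k, l) of a Ge crystal, for a given solar direction k_hat.
import numpy as np
from itertools import product

HBARC = 1.97327          # hbar*c in keV * Angstrom
A_GE = 5.657             # Ge lattice constant in Angstrom (value quoted in the text)

def allowed(h, k, l):
    # diamond-lattice selection rule: indices all odd, or all even with h+k+l = 0 (mod 4)
    if (h, k, l) == (0, 0, 0):
        return False
    parities = {h % 2, k % 2, l % 2}
    return parities == {1} or (parities == {0} and (h + k + l) % 4 == 0)

def bragg_energies(k_hat, hkl_max=5, e_window=(1.0, 20.0)):
    out = []
    for h, k, l in product(range(-hkl_max, hkl_max + 1), repeat=3):
        if not allowed(h, k, l):
            continue
        G = 2.0 * np.pi / A_GE * np.array([h, k, l])
        kdotG = np.dot(k_hat, G)
        if kdotG <= 0:                       # need k_hat . G > 0 for a positive energy
            continue
        E = HBARC * np.dot(G, G) / (2.0 * kdotG)
        if e_window[0] <= E <= e_window[1]:  # keep keV-scale energies in the solar window
            out.append(((h, k, l), E))
    return sorted(out, key=lambda t: t[1])

# toy solar direction (zenith 40 deg, azimuth 10 deg), crystal axes taken as lab axes
theta, phi = np.radians(40.0), np.radians(10.0)
k_hat = np.array([np.cos(phi) * np.sin(theta), np.sin(phi) * np.sin(theta), np.cos(theta)])
for hkl, E in bragg_energies(k_hat)[:5]:
    print(hkl, round(E, 2), "keV")
```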
The time dependence is encoded in the solar position, which we can express through k̂ = (cosϕsinθ, sinϕsinθ, cosθ) for θ = θ(t) and ϕ = ϕ(t). For the solar angle as a function of time and geolocation, we use the NREL solar position algorithm <cit.>. In principle, the sum over reciprocal lattice vectors G⃗ is taken to arbitrarily large combinations (h,k,l), but due to the 1/|G⃗|^2 suppression and the upper limit of the solar axion flux of around ∼ 20 keV, we can safely truncate the sum at max{h,k,l}=5. The corresponding event rates for various energy windows are shown in Fig. <ref> for Ge crystal, where we compare the relative enhancements with and without the Borrmann effect to the case of full-volume coherence and to the case of incoherent scattering on an amorphous lattice[Atomic Primakoff scattering is still coherent here; we only turn off the coherence at the level of the lattice for the sake of comparison with scattering on amorphous materials, in this case, amorphous germanium.]. The fluctuating features in the event rate are the result of the sum over G⃗ which contributes to the Bragg peaks. Here we have assumed a volume of 260 cm^3 (corresponding roughly to the volumetric size of a SuperCDMS germanium module), and so the relative suppression for each G⃗ lattice plane goes like V^1/3 / λ(k⃗,G⃗), giving a suppression on the order of 10^2 compared to the full-volume coherence assumption. The time-dependence can be visualized further by viewing the event rates as a function of incident angles integrated across the whole solar axion energy window, as shown in Fig. <ref>. Depending on the time of year, different sets of Bragg peaks will be traced over during the day, inducing an annual modulation in addition to the intra-day modulation of the signal. Since the time of day fixes the solar zenith and azimuth (θ, ϕ), we can finally show the spectrum of the Primakoff signal as a function of energy deposition and time of day; see Fig. <ref>. § PROJECTED SENSITIVITIES FOR SOLAR AXION SEARCHES We forecast the event rates for SuperCDMS <cit.>, LEGEND-200, LEGEND-1000, SABRE, in addition to envisioned multi-tonne setups, with detector specifications listed in Table <ref>. For the background-free limits, we look for the Poisson 90% CL corresponding to ≃ 3 events observed for a given exposure. The projected reach over the (g_aγ - m_a) parameter space for these detector benchmarks is shown in Fig. <ref>, where we show projections including the effects of absorption and the Borrmann enhancement to the absorption length, in addition to the projected limits assuming full volume coherence (FVC), i.e. I(k⃗,G⃗)→1, indicated by the arrows and dotted lines. The QCD axion parameter space is shown (yellow band) for the Kim-Shifman-Vainshtein-Zakharov (KSVZ) type <cit.> and Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) type benchmark models <cit.>, where the range is defined by taking the anomaly ratios of E/N = 44/3 to E/N = 2 <cit.>, although the space of heavier masses is also possible in high-quality axion models and other scenarios <cit.>. To probe this model parameter space beyond the existing bounds from CAST and horizontal branch (HB) stars, when FVC is maintained, multi-tonne scale experiments are needed. Additionally, the stellar cooling hints that could be explained by ALPs with g_aγ≲ 10^-11 GeV^-1 (and for non-vanishing g_ae≃ 10^-13), are also shown in Fig. <ref>, indicated by the gray band (1σ) and down to vanishing g_aγ <cit.>. 
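Because the Bragg-Primakoff rate scales as g_aγ^4 (two powers from the solar flux and two from the conversion), a background-free counting limit improves only as the fourth root of the exposure. A minimal sketch of this bookkeeping is given below; the reference rate is a made-up placeholder, and the standard frequentist Poisson limit used here (≈2.3 events at 90% CL for zero observed events) is a slightly different convention from the ≃3 events quoted above.

from scipy.stats import chi2

def poisson_upper_limit(n_obs=0, cl=0.90):
    """Frequentist Poisson upper limit on the mean for n_obs observed events, no background."""
    return 0.5 * chi2.ppf(cl, 2 * (n_obs + 1))

def coupling_limit(g_ref, n_expected_ref, mu_up):
    """The rate scales as g_agamma^4, so the coupling limit scales as the fourth root."""
    return g_ref * (mu_up / n_expected_ref) ** 0.25

if __name__ == "__main__":
    mu_up = poisson_upper_limit(0)           # ~2.3 events at 90% CL
    # Hypothetical reference point: 100 expected events at g_agamma = 1e-9 GeV^-1 per ton-year.
    g_ref, n_ref = 1.0e-9, 100.0
    for exposure in [1.0, 10.0, 100.0]:      # ton-years
        g_lim = coupling_limit(g_ref, n_ref * exposure, mu_up)
        print(f"{exposure:6.1f} ton-yr  ->  g_agamma < {g_lim:.2e} GeV^-1")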
These hints, though mild, could be tested by the multi-tonne setups with FVC restored. With the effects of absorption included, we project SuperCDMS, LEGEND, and SABRE to test parameter space unexplored by laboratory-based probes beyond the CAST and XENONnT constraints for m_a ≳ 1 eV, but already excluded by HB stars constraints. However, multi-tonne CsI and NaI setups would extend this to nearly cover the HB stars exclusion. Similar reach could in principle be found when considering the joint parameter space of multiple ALP couplings to photons, electrons, and nucleons <cit.>. For instance, by considering the ^57Fe solar axion flux, one could look for 14.4 keV energy signatures and their Bragg-Primakoff peaks, although the sensitivity would likely contend with astrophysics constraints as well <cit.>. The existing bounds from DAMA <cit.>, CUORE <cit.>, Edelweiss-II <cit.>, SOLAX <cit.>, COSME <cit.>, CDMS <cit.>, and Majorana <cit.> are not shown here, but their exclusions would necessarily shift to larger coupling values to account for absorption effects in the Bragg-Primakoff rates, depending on the detector volume and material. Note that the relative reach between NaI and CsI crystals is relatively suppressed when absorption is included here, due to the behavior of the imaginary form factor for CsI giving more modest Borrmann enhancements at the lower reciprocal lattice planes; see Fig. <ref>. In order to push the sensitivity envelope beyond the current bounds by CAST and HB stars, even with multi-tonne setups, the absorption effects need to be mitigated. Some possibilities are discussed in the next section. § RESTORING COHERENCE There may be ways to recover the sensitivity initially projected in the case of full-volume coherence by mitigating the loss of coherence due to absorption. These are of course speculative routes. Some of these routes for future work are enumerated below; * Since the attenuation of the coherent volume is direction-dependent, as shown in Fig. <ref>, one could imagine optimizing a detector geometry such that the size and orientation relative to the incoming flux of axions is ideal, maximizing use of the Laue-type scattering and Borrmann effect to minimize the absorption. This would require precise knowledge of the crystal purity and plane orientation obtained from X-ray measurements. * Along a similar vein, since the effects of absorption are minimized when the detector scale V^1/3 becomes comparable to the photon mean free path λ, one could instead prefer to use smaller detector volumes but with a large total mass partitioned into many individual modules. As long as each module is optically insulated from the others, the loss of coherence due to absorption will be contained within each module and the suppression to the event rate can be mitigated. * It might be possible to apply the principles in this work to radioisotope experiments like those proposed in ref. <cit.>, where a keV-scale nuclear transition line (e.g. the 14.4 keV line of ^57Fe) could source ALPs through a coupling to nucleons. Subsequent detection by an array of crystals encasing the radioactive source searching for transition photons of known energy Primakoff-converting in the crystal would leave a missing energy signature in the detector. By looking for disappearing keV-scale transitions the signal rate would enjoy the coherent enhancement relative to the incoherent scattering considered in ref. <cit.>. 
* A dedicated keV photon source that would impinge on a crystal detector could fire at a fixed angle of incidence such that the event rate enhancement from the Borrmann effect and Laue effects are optimized and full volume coherence is restored as best as possible. One might achieve this with a keV laser <cit.> or synchrotron sources in a similar fashion to LSW experiments <cit.>. By performing a similar “missing” photon search as the one discussed above, the event rate for the detection of missing energy will be proportional to g_aγ^2, rather than g_aγ^4 as in solar axion searches, greatly enhancing the sensitivity. In the case where we assume full volume coherence, shown in Fig. <ref>, dotted lines, ton-scale setups like LEGEND-200 and LEGEND-1000 can reach significantly smaller couplings, probing values of g_aγ beyond the existing bounds fom HB Stars <cit.> and CAST <cit.> for masses m_a ≲ 10 keV, losing sensitivity for higher masses for which the axion production rates from photon coalescence and Primakoff scattering are diminished (see also Fig. <ref>). These reach more than an order of magnitude lower in the coupling than previous Bragg-Primakoff solar axion searches. § CONCLUSIONS In this work, we have taken into account a more proper estimate of the effects of anomalous absorption into the event rate, i.e. via the Borrmann effect on the coherence condition of Bragg-Primakoff photoconversion of solar axions. The sensitivity of crystal technologies used in the SuperCDMS, LEGEND, and SABRE setups has been demonstrated, and we find that the inclusion of absorption effects even with Borrmann-enhanced signal rates still would require multi-tonne scale detectors to surpass the existing astrophysical constraints in sensitivity to ALPs. However, a dedicated study with a thorough and careful treatment of the absorption suppression and Borrmann effects is definitely needed to better understand its impact on experiments that utilize Bragg-Primakoff conversion. In particular, the evaluation of the imaginary form factor in other crystals (namely, PbWO_4 may be an interesting option) would help determine potential enhancements to the anomalous absorption effect in other detector materials. Crystal detector technologies are also necessary tools to discriminate axion-like particle signals from other types of BSM and neutrino signatures, with high sensitivity to time modulation from the directional sensitivity of Bragg-Primakoff scattering. This is a powerful tool for background rejection as well, and ideally a joint analysis of multiple detectors situated at different latitudes and longitudes would benefit greatly from leveraging the time modulation of the signal. They are also complimentary to future helioscope experiments like IAXO; while the projected reach for IAXO over the axion-photon coupling parameter space is vast, the sensitivity to solar axions with masses m_a ≳ 1 eV becomes weaker to coherent Primakoff conversion in magnetic field helioscopes. Sensitivity to this region of parameter space is necessary in order to test QCD axions, especially in non-traditional models of high quality axions and the like, which have parametrically larger masses <cit.>. It was shown in ref. 
<cit.> that future liquid noble gas detectors for dark matter direct detection at kiloton-year scales could begin to probe couplings beyond the astrophysics constraints for axion-like particles, while in this work we find that equivalent reach is possible at ton-year exposures with crystal detector technology, if utilized to its fullest potential. The presence of complementary searches at these mass scales is essential for a complete test of the axion solution to the strong CP problem and the broader space of ALPs. § ACKNOWLEDGEMENTS We are very grateful to Imran Alkhatib, Miriam Diamond, Amirata Sattari Javid, and John Sipe for the vigorous discussions and studies on the theoretical treatment of coherent Primakoff scattering in crystals and the comparison of numerical computations. We graciously thank Tomohiro Yamaji for the insight on Laue-type diffraction, Timon Emken for the technical correspondence on the package, and Alexander Poddubny for the useful comments on Biagini's theory of anomalous absorption. The work of BD and AT is supported by the DOE Grant No. DE-SC0010813. JBD acknowledges support from the National Science Foundation under grant no. PHY-2112799. Portions of this research were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing. We also thank the Center for Theoretical Underground Physics and Related Areas (CETUP*) and SURF for facilitating portions of this research. § CRYSTAL STRUCTURE For convenience of the reader we repeat the standard discussion on the description of the lattice vector space for the crystals we have considered, much of which can be found in <cit.> and other canonical literature. The α⃗_j describe the positions of each atom within the cell, while the basis vectors a⃗_i describe the Bravais lattice. The linear combination of the two is used to translate anywhere on the lattice by stepping in integer multiples of these basis vectors; r⃗_i = n_1 a⃗_1 + n_2 a⃗_2 + n_3 a⃗_3 + α⃗_i We can then introduce the reciprocal lattice, giving reciprocal lattice basis vectors b⃗_i which satisfy b⃗_i ·a⃗_j = 2πδ_ij. In general the transformations give b⃗_1 = 2π (a⃗_2 ×a⃗_3)/|a⃗_1 · (a⃗_2 ×a⃗_3)| b⃗_2 = 2π (a⃗_3 ×a⃗_1)/|a⃗_1 · (a⃗_2 ×a⃗_3)| b⃗_3 = 2π (a⃗_1 ×a⃗_2)/|a⃗_1 · (a⃗_2 ×a⃗_3)| The reciprocal lattice basis vectors are used to construct the reciprocal lattice vector G⃗ that points along the surface normals of the scattering planes. In terms of integers m_1, m_2, and m_3, each scattering plane is defined by G⃗ = m_1 b⃗_1 + m_2 b⃗_2 + m_3 b⃗_3 Sometimes the integers h,k,l are used instead, and in some contexts one can use this basis to express G⃗ as G⃗(hkl) = (2π/a) (h, k, l) The lattice constants, cell volumes, and basis vectors for a few examples (Ge, Si, CsI, and NaI) are listed in Table <ref>. § DERIVATION OF THE EVENT RATE Let f(k⃗,k⃗^') be the Primakoff scattering matrix element for a single atomic target, for an incoming ALP 3-momentum k⃗ and outgoing γ 3-momentum k⃗^'; f = ℳ_free F_A (q) where ℳ_free is the single-atomic scattering amplitude with the angle of scattering defined by k⃗_a ·k⃗_γ = E_γ k cos2θ, averaged over spins and taken in the limit k ≫ m_a, m_N ≫ k,E_γ; ⟨|ℳ_free|^2⟩ = (8 e^2 g_aγ^2/q^4) E_γ^2 m_N^2 k^2 sin^2 2θ We sum over the N scattering centers in a crystal; ℳ(k⃗,k⃗^') = ∑_j=1^N f_j(k⃗,k⃗^') e^i(k⃗^' - k⃗)·r⃗_j where e^i(k⃗^' - k⃗)·r⃗_j is a phase factor that comes from assuming plane wave solutions for the in and out states.
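Before moving on with the derivation, the reciprocal-lattice relations above are easy to verify numerically. The sketch below (Python) builds b⃗_i from the Ge Bravais vectors written out just below, checks b⃗_i·a⃗_j = 2πδ_ij, and evaluates |S(G⃗)| of the two-atom basis for a few example planes of the conventional cell; the planes printed are arbitrary examples (chosen all-even or all-odd so that they correspond to genuine reciprocal lattice vectors of the FCC lattice).

import numpy as np

a = 5.657e-8  # Ge lattice constant in cm, as used in the main text

# FCC Bravais vectors and the two-atom diamond basis of the conventional Ge cell
# (these are the a_i and alpha_mu written out in the following appendix).
a1 = 0.5 * a * np.array([0.0, 1.0, 1.0])
a2 = 0.5 * a * np.array([1.0, 0.0, 1.0])
a3 = 0.5 * a * np.array([1.0, 1.0, 0.0])
alphas = [np.zeros(3), 0.25 * a * np.array([1.0, 1.0, 1.0])]

# Reciprocal basis b_i = 2 pi (a_j x a_k) / [a_1 . (a_2 x a_3)]
vol = np.dot(a1, np.cross(a2, a3))
b1 = 2.0 * np.pi * np.cross(a2, a3) / vol
b2 = 2.0 * np.pi * np.cross(a3, a1) / vol
b3 = 2.0 * np.pi * np.cross(a1, a2) / vol

# Check the defining relation b_i . a_j = 2 pi delta_ij
B, A = np.array([b1, b2, b3]), np.array([a1, a2, a3])
assert np.allclose(B @ A.T, 2.0 * np.pi * np.eye(3))

def structure_factor(hkl):
    """|S(G)| for the two-atom basis, with G = (2 pi / a)(h, k, l) of the conventional cell."""
    G = 2.0 * np.pi / a * np.asarray(hkl, dtype=float)
    return abs(sum(np.exp(1j * np.dot(G, alpha)) for alpha in alphas))

for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0), (2, 2, 2), (4, 0, 0)]:
    print(hkl, f"|S| = {structure_factor(hkl):.3f}")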
The position vector r⃗_j can be expressed in terms of the Bravais lattice basis vectors and the primitive basis vectors for each unit cell of the crystal. For germanium crystal with lattice constant a, we have primitive basis vectors α⃗_0 = (0,0,0) α⃗_1 = (a/4) (1,1,1) while the basis vectors of the Bravais lattice are described by a⃗_1, a⃗_2, and a⃗_3; a⃗_1 = (a/2)(0,1,1) a⃗_2 = (a/2) (1,0,1) a⃗_3 = (a/2) (1,1,0) we can represent any scattering site as a linear combination of the a's and either the first or second primitive; r⃗_i,0 = R⃗_i + α⃗_0 = n_1 a⃗_1 + n_2 a⃗_2 + n_3 a⃗_3 + α⃗_0 r⃗_i,1 = R⃗_i + α⃗_1 = n_1 a⃗_1 + n_2 a⃗_2 + n_3 a⃗_3 + α⃗_1 where the index i maps to a unique combination (n_1, n_2, n_3). If we square this, we get |ℳ(k⃗,k⃗^')|^2 = ∑_i=1^N | f_i|^2 + ∑_j≠ i^N ∑_i=1^N f_j^† f_i e^-iq⃗·(r⃗_i - r⃗_j) taking q⃗≡k⃗ - k⃗^'. Rewriting in terms of a sum over N_c cells and the cell primitives, the coherent part (second term) is |ℳ(k⃗,k⃗^')|^2 = ∑_j≠ i^N_c∑_i=1^N_c∑_μ = 0^1∑_ν = 0^1 f_j^† f_i e^-iq⃗·(R⃗_i - R⃗_j + α⃗_μ - α⃗_ν) When the Laue condition is met, we have q⃗ = G⃗ and G⃗·R⃗_i is a 2π integer multiple; |ℳ|^2 ≡∑_j≠ i^N_c∑_i=1^N_c∑_μ,ν = 0^1 f_j^† f_i e^-iG⃗·(α⃗_μ - α⃗_ν) Now we can factorize the sum over primitives, and since we are considering a monoatomic crystal we can also take f_i = f_j, simplifying things; |ℳ|^2 = N_c^2 f^† f ∑_μ,ν = 0^1 e^-iG⃗·(α⃗_μ - α⃗_ν) In Eq. <ref> the structure function can be substituted, which is nothing but the sum over primitives; S(G⃗) = ∑_μ e^i G⃗·α_μ and we have no need for a species index j on S_j(G⃗) since we only have one atomic species, but it is trivial to extend this derivation to include it - we just need to add another index to the primitive basis vectors and sum over it. With this identification and also taking f^† f = |ℳ_free|^2 F^2_A (G⃗), we have |ℳ|^2 = N_c^2 |ℳ_free|^2 |F_A (G⃗) S(G⃗)|^2 Now let's write down the cross section. dσ = [1/(4 E_a m_N v_a)] |ℳ|^2 [d^3 k^'/((2π)^3 2E_γ)] [d^3 p^'/((2π)^3 2E_p^')] (2π)^4 δ^4 (k + p - k^' - p^') Taking the ALP velocity v_a = 1, momentum transfer minimal such that E_p^' = m_N, and integrating out the δ^3 we get dσ = [1/(64 π^2 E_a E_γ m_N^2)] |ℳ|^2 d^3 k^' δ(E_a - E_γ) Performing a change of variables to d^3k^'→ d^3q (since q = k - k^' and k is fixed), we would integrate this over q⃗. Since we have q⃗ = G⃗ at this stage, we should replace the integral with a sum; ∫ d^3 q →(2π)^3/V∑_G⃗ The event rate formula is constructed from a convolution of the detector response, axion flux Φ_a, and cross section; dN/dt = ∫_E_1^E_2 dE_ee∫_0^∞ dE_a [(2π)^3/V]∑_G⃗ (dΦ_a/d E_a) [1/(64 π^2 E_a E_γ m_N^2)] |ℳ|^2 δ(E_a - E_γ) ·[1/(Δ√(2π))] e^-(E_ee - E_γ)^2/(2Δ^2) Putting in the definition of |ℳ|^2 that we worked out and substituting the free Primakoff cross section, integrating over the energy delta function (and identifying E_a = E_γ = E for simplicity), and integrating over dE_ee we get dN/dt = [(2π)^3 e^2 g_aγ^2/(8 π^2)] (V/v_cell^2) ∑_G⃗ (dΦ_a/dE) [k^2 sin^2 (2θ)/|G⃗|^4] |F_A(G⃗)S(G⃗)|^2 𝒲(E_1, E_2, E) This is almost identical to the rate in ref. <cit.>, which uses a different definition of the atomic form factor up to a factor of q^2/e k^2. After some algebra, the event rate in Eq. <ref> is still different from that given in ref. <cit.> up to a factor of 4sin^2(θ). However, the event rate formula derived here is consistent with the calculation performed in refs. <cit.>. After rederiving the coherent sum using the replacements in Eqns.
<ref>-<ref>, the event rate becomes dN/dt = [(2π)^3 e^2 g_aγ^2/(8 π^2)] (V/v_cell^2) ∑_G⃗ I(k⃗,G⃗) (dΦ_a/dE) [k^2 sin^2 (2θ)/|G⃗|^4] |F_A(G⃗)S(G⃗)|^2 𝒲(E_1, E_2, E) § SOLAR AXION FLUX We use the parameterization appearing in ref. <cit.> for massive axion production in the sun; the flux parameterizations are repeated here for convenience dΦ_γ→ a/dE_a = 4.20· 10^10 cm^-2s^-1keV^-1 (g_aγ/10^-10GeV^-1)^2 [E_a p_a^2/(e^(E_a/1.1) - 0.7)] (1 + 0.02 m_a) dΦ_γγ→ a/dE_a = 1.68· 10^9 cm^-2s^-1keV^-1 (g_aγ/10^-10GeV^-1)^2 m_a^4 p_a (1 + 0.0006 E_a^3 + 10/(E_a^2 + 0.2)) e^-E_a where Φ_γ→ a is the Primakoff solar flux and Φ_γγ→ a is the flux resulting from resonant photon coalescence, both in units of cm^-2s^-1keV^-1, given for axion energy and momentum E_a and p_a in keV, and for the coupling g_aγ in GeV^-1. The solar axion flux from photon coalescence and Primakoff conversion is shown in Fig. <ref> for several benchmark axion masses. § UTILIZING / FOR CALCULATION OF THE ABSORPTIVE FORM FACTOR Wagenfield's form factor for the anomalous dispersion of X-rays with incoming and outgoing momenta and polarizations k, ϵ_0, k^', ϵ_0^' is <cit.> Δ f^'' = (πħ^2/m_e) ( ∫ψ_f^*(r) ϵ_0 ·∇ e^i k·rψ_i(r) d^3 r ) ( ∫ψ_f(r) ϵ^'_0 ·∇ e^-i k^'·rψ_i^*(r) d^3 r ) Applying the gradient and expanding, we get some terms proportional to ϵ_0 ·k which vanish, leaving us with Δ f^'' = (πħ^2/m_e) ( ϵ_0 ·∫ψ_f^*(r) e^i k·r∇ψ_i(r) d^3 r ) ( ϵ^'_0 ·∫ψ_f(r) e^-i k^'·r∇ψ_i^*(r) d^3 r ) Referring to Catena et al. <cit.>, we can then apply the definition of the vectorial form factor (Eq. B18, but with some changes made to keep the notation more consistent), f_1→2(q) = ∫ d^3 r ψ^*_f (r) e^i q·r (i ∇/m_e) ψ_i (r). Here the final state and initial state wave functions have quantum numbers i = n,ℓ,m and f = p^',ℓ^', m^', where p^' is the final state electron momentum, and {n,ℓ,m},{ℓ^',m^'} are the initial and final quantum numbers, respectively. Applying this definition, we have Δ f^'' = (πħ^2/m_e) ( ϵ_0 · (-i m_e) f_1→2(k) ) ( ϵ^'_0 · (i m_e) f^*_1→2(k^') ) = πħ^2 m_e (ϵ_0 ·f_1→2(k)) (ϵ_0^'·f^*_1→2(k^')) If our photons are unpolarized, then we can take a sum over the helicity states, giving the completeness relation ∑_s (ϵ_0(s))_i (ϵ_0^'(s))_j = δ_ij. Taking k^' = k - q, this reduces the polarization-summed imaginary form factor to Δ f^''(k,q) = πħ^2 m_e (f_1→2(k) ·f^*_1→2(k - q))
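For completeness, the solar flux parameterization of the previous appendix is simple to evaluate. The sketch below (Python) transcribes the two expressions as written above (with the Primakoff denominator read as e^(E_a/1.1) - 0.7); the energies and masses printed are arbitrary examples.

import numpy as np

def primakoff_flux(E, m_a, g10):
    """dPhi/dE_a from Primakoff production, in cm^-2 s^-1 keV^-1.

    E and m_a in keV; g10 = g_agamma / (1e-10 GeV^-1).
    """
    p = np.sqrt(np.maximum(E**2 - m_a**2, 0.0))
    return 4.20e10 * g10**2 * E * p**2 / (np.exp(E / 1.1) - 0.7) * (1.0 + 0.02 * m_a)

def coalescence_flux(E, m_a, g10):
    """dPhi/dE_a from resonant photon coalescence, gamma gamma -> a, same units."""
    p = np.sqrt(np.maximum(E**2 - m_a**2, 0.0))
    return 1.68e9 * g10**2 * m_a**4 * p * (1.0 + 0.0006 * E**3 + 10.0 / (E**2 + 0.2)) * np.exp(-E)

if __name__ == "__main__":
    E = np.linspace(1.0, 20.0, 39)       # keV
    for m_a in [0.1, 1.0, 10.0]:         # keV, illustrative benchmark masses
        total = primakoff_flux(E, m_a, 1.0) + coalescence_flux(E, m_a, 1.0)
        print(f"m_a = {m_a:5.1f} keV: peak flux ~ {total.max():.2e} cm^-2 s^-1 keV^-1 "
              f"at E ~ {E[np.argmax(total)]:.1f} keV")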
http://arxiv.org/abs/2307.04713v1
20230710172234
Sphaleron in the Higgs Triplet Model
[ "Jiahang Hu", "Bingrong Yu", "Shun Zhou" ]
hep-ph
[ "hep-ph", "astro-ph.CO", "hep-th" ]
footnote Sphaleron in the Higgs Triplet Model Jiahang Hu ^a [E-mail: [email protected]], Bingrong Yu ^a, b [E-mail: [email protected] (corresponding author)], Shun Zhou ^a, b [E-mail: [email protected] (corresponding author)] ^a School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China ^b Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China The Higgs triplet model (HTM) extends the Standard Model (SM) by one complex triplet scalar (also known as the type-II seesaw model), offering a simple and viable way to account for nonzero neutrino masses. On the other hand, the nontrivial couplings of the triplet to the gauge fields and to the SM Higgs field are expected to influence the topological vacuum structure of the SM, and consequently, the energy and the field configuration of the electroweak sphaleron. The sphaleron process plays a crucial role in dynamically generating the baryon asymmetry of the Universe. In this work, we study the vacuum structure of the gauge and Higgs fields and calculate the saddle-point sphaleron configuration in the HTM. The coupled nonlinear equations of motion of the sphaleron are solved using the spectral method. We find the inclusion of the triplet scalar could in principle significantly change the sphaleron energy compared with the SM. Nevertheless, at zero temperature, the current stringent experimental constraint on the vacuum expectation value of the triplet suppresses the difference. Interestingly, we find that there still exists some narrow parameter space where the sphaleron energy can be enhanced up to 30% compared with the SM case. footnote § INTRODUCTION Despite its great success, the Standard Model (SM) of particle physics is unable to accommodate nonzero neutrino masses, which has been firmly established by the neutrino oscillation experiments during the last two decades <cit.> (see, e.g., Ref. <cit.> for a recent theoretical review). Another important unsolved problem in the SM is the observed baryon asymmetry of the Universe <cit.>. Given the 125 GeV Higgs boson discovered at the Large Hadron Collider <cit.>, the SM cannot provide a successful electroweak (EW) baryogenesis since the EW phase transition in the SM is a smooth cross-over <cit.>, failing to depart from thermal equilibrium <cit.>. Therefore, the SM should be incomplete, and new physics beyond the SM is indispensable. The extension of the SM by adding one triplet scalar with hypercharge Y=-1, dubbed the Higgs Triplet Model (HTM), offers an economical way to explain the tiny neutrino masses through the type-II seesaw mechanism <cit.>. On the other hand, following the idea of thermal leptogenesis <cit.>, the out-of-equilibrium decays of the heavy triplets in the early Universe generate the lepton number asymmetry <cit.>,[In order to generate CP violation, at least two triplet scalars are needed. Alternatively, one can also introduce one triplet scalar and one additional heavy Majorana neutrino, which is able to accommodate both the neutrino mass spectrum and the observed baryon asymmetry <cit.>. Recently, it was pointed out that the inclusion of only one triplet scalar could fulfill successful leptogenesis through the Affleck-Dine mechanism <cit.> while the triplet could also play a role in inflation <cit.>.] which can partly be converted to the baryon number asymmetry via the sphaleron process <cit.>. 
In addition, the triplet scalar modifies the scalar potential of the SM and thus may change the pattern of the EW phase transition. Recently, it was found that there exists viable parameter space for a strong first-order EW phase transition in the HTM, and the spectrum of the produced gravitational waves was calculated <cit.>. Nevertheless, it is still unclear whether or not a successful EW baryogenesis could be fulfilled in the framework of the HTM. To achieve this goal, a necessary step is to calculate the sphaleron configuration in the presence of a triplet scalar, which is the main purpose of the present work. The sphaleron process plays a crucial role in dynamically generating the cosmological matter-antimatter asymmetry <cit.>. It is well known that the vacuum structure of non-Abelian gauge theories is nontrivial and the topologically distinct vacua are characterized by the Chern-Simons numbers <cit.>, which can be directly related to the baryon (B) and lepton (L) numbers. Due to the chiral anomaly <cit.>, B and L are not conserved in the SM. The transition between two topologically distinct vacua changes the Chern-Simons number and hence B and L (but with B-L conserved). The energy barrier between different vacua is characterized by the sphaleron energy E^_ sph. At zero temperature, we have E^_ sph∼ 4π v/g ∼ 5  TeV, where v≈ 246  GeV is the EW vacuum expectation value (VEV) and g≈ 0.65 is the SU(2)_ L^ gauge coupling. Therefore, the B-violating sphaleron rate is highly suppressed at low temperatures: Γ^_ sph∼ exp(-E^_ sph/T) <cit.>. At temperatures above the EW scale, the VEV becomes zero and the energy barrier vanishes. In this case, the B-violating rate is no longer suppressed[Strictly speaking, there is no classical sphaleron solution above the critical temperature T^_c of the EW phase transition. This is because the temperature-dependent VEV v(T) turns out to be zero at T>T^_c and the classical configuration scale 1/v(T) goes infinity. However, the B-violating process is still significant above T^_c and the temperature provides a typical scale (α^_ W T)^-1 for the sphaleron-like configuration <cit.>.] and is given by Γ^_ sph∼α_ W^5 T^4 with α^_ W≡ g^2/(4π) <cit.>. On the other hand, from the view of the classical field theory, the sphaleron configuration is the saddle-point solution of the energy functional <cit.>. The sphaleron energy in the SM is mainly contributed by the Higgs and the gauge bosons. However, in the HTM, the triplet scalar has additional couplings to the gauge fields and to the SM Higgs field, hence is expected to influence the vacuum structure and the sphaleron configuration. As has been discussed above, the sphaleron energy plays an important role in both EW baryogenesis and leptogenesis. Therefore, it is necessary to recalculate the sphaleron configuration in the presence of a triplet scalar in order to realize a self-consistent baryogenesis in the framework of the HTM. The remaining part of this paper is organized as follows. In Sec. <ref>, we briefly review the minimax procedure to find the sphaleron solution and set up our formalism. In Sec. <ref> and Sec. <ref>, we calculate the sphaleron configuration in the HTM, where a minimal version of the potential and a full potential is adopted, respectively. Our main conclusion is summarized in Sec. <ref>, together with some further discussions. Finally, the numerical techniques to solve the equations of motion (EOM) of the sphaleron are provided in appendices. 
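The scales quoted above follow immediately from v ≈ 246 GeV and g ≈ 0.65; a few illustrative lines (the temperatures are arbitrary examples) make this explicit:

import numpy as np

v = 246.0      # electroweak VEV in GeV
g = 0.65       # SU(2)_L gauge coupling

E_sph = 4.0 * np.pi * v / g              # characteristic sphaleron barrier at T = 0
print(f"4 pi v / g ~ {E_sph / 1e3:.1f} TeV")

# Boltzmann suppression of the sphaleron rate at temperatures below the EW scale
for T in [50.0, 100.0, 150.0]:           # GeV
    print(f"T = {T:5.1f} GeV:  exp(-E_sph/T) ~ {np.exp(-E_sph / T):.1e}")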
§ THEORETICAL SETUP AND SPHALERON ANSATZ In this section, we set up the general formalism to calculate the sphaleron configuration in the SM extended by a complex triplet scalar. We make the following two reasonable assumptions: * The contribution from fermion fields to the sphaleron is neglected. * The finite Weinberg angle has little influence on the sphaleron (e.g., less than 1% correction to the sphaleron energy) <cit.>. Therefore, we can safely neglect the mixing between SU(2)_ L and U(1)_ Y gauge bosons such that the sphaleron configuration is spherically symmetric. Under the above assumptions, the Lagrangian in the HTM is given by L_ HTM=-1/2(F^_μνF^μν_)+(D^_μϕ)^†_(D^μ_ϕ)+1/2[(D^μ_Δ)^†_(D^_μΔ)]-V(ϕ,Δ) . The field strength in Eq. (<ref>) is defined as F^_μν=∂^_μ W^_ν-∂^_ν W^_μ- ig[W^_μ,W^_ν], where W^_μ≡ W_μ^a σ^a_/2 with W_μ^a the SU(2)^_ L gauge fields and σ^a_ (for a=1,2,3) the Pauli matrices. In addition, D^_μ is the covariant derivative, ϕ is the SM Higgs doublet, and Δ is the triplet scalar with hypercharge Y=-1 and transforms according to the adjoint representation of the SU(2)^_ L group ϕ=( ϕ^+_ ϕ^0_) , Δ=( Δ^-_ -√(2)Δ^0_ √(2)Δ^–_ -Δ^-_) . The VEVs of the scalar fields, namely ⟨ϕ⟩=v^_ϕ/√(2) and ⟨Δ⟩= -v^_Δ, are determined by minimizing the scalar potential V(ϕ,Δ), and satisfy √(v_ϕ^2+2v_Δ^2)=v≈ 246  GeV. We will discuss it in more detail later. For the calculation of the sphaleron, since we are only focusing on the static field configuration, all the time components in Eq. (<ref>) can consistently be set to zero. Then the energy density reads H[W^_μ,ϕ,Δ]=1/2g^ik_g^jl_ Tr(F^_ijF^_kl)+g^ij_(D^_iϕ)^†_(D^_jϕ)+1/2g^ij_[(D^_i Δ)^†_(D^_jΔ)]+V(ϕ,Δ) , where g^ij_ is the metric of the coordinate system. Since the sphaleron has a spherical symmetry in a pure SU(2)_ L gauge theory, it is most convenient to adopt the spherical coordinates (r,θ,φ). Then we have g^_ij=(g^ij_)^-1_= diag(1,r^2_,r^2_sin^2_θ). Moreover, the degrees of freedom from the gauge symmetry allow us to take the polar gauge. That is, the radial part of the gauge field can always be set to zero: W^_r=0. The total energy is determined by integrating over the whole space E[W^_μ,ϕ,Δ]=∫_0^2π dφ∫_0^π dθsinθ∫_0^∞ dr r^2_ H[W^_μ,ϕ,Δ] , which is the functional of the field configuration. Below we use the minimax procedure <cit.> to find the sphaleron solution in the HTM. The basic idea is to construct a set of non-contractible loops[The loops are defined on the infinite-dimensional field configuration space {W_μ( x),ϕ( x),Δ( x)}, on which the energy functional E[W_μ( x),ϕ( x),Δ( x)] is also defined. Here x denotes the general spatial indices.] starting and ending at the vacuum. For each of the loop there exists a configuration with maximum energy. Then the infimum of the maximum energies defines the sphaleron configuration, which corresponds to the saddle point of the energy functional. Along this line, the sphaleron configuration in the SM can be worked out <cit.>. Similar strategies have also been used to study the sphaleron in the new-physics scenarios, which extend the SM by adding new singlet or doublet scalars <cit.>. However, as far as we know, the study of the sphaleron in the presence of a triplet scalar is still lacking. In what follows we show that the minimax procedure works in the HTM as well. 
First, the fields at infinity (r→∞) should be related to the vacuum configuration via W_j^∞ = - i/g∂^_j U^_∞(θ,φ)U_∞^-1(θ,φ) , j=θ,φ , ϕ^∞_ = 1/√(2)U_∞(θ,φ)^( 0 v^_ϕ) , Δ^∞_ = U^_∞(θ,φ)( 0 -v^_Δ 0 0 )U_∞^-1(θ,φ) , where U^_∞(θ,φ)∈ SU(2)^_ L denotes the gauge transformation that preserves the polar gauge condition. Note that Eq. (<ref>) satisfies the pure gauge such that the field strength F_μν vanishes at the infinity, and Eq. (<ref>) comes from the fact that Δ belongs to the adjoint representation of SU(2)^_ L. The gauge transformation U_∞(θ,φ) (or equivalently, the Higgs field at infinity ϕ^∞_) defines a map: S^2 → S^3 that is contractible, because the homotopy group π^_2(S^3_) is trivial. This implies that the fields at infinity can be continuously transformed to the vacuum configuration. In order to find a non-contractible loop in the field configuration space, we could introduce a new parameter μ∈ [0,π], and extend the gauge transformation to U(μ,θ,φ)=( e^ iμ_(cosμ- isinμcosθ) e^ iφ_sinμsinθ -e^- iφ_sinμsinθ e^- iμ_(cosμ+ isinμcosθ) ) , which satisfies U(μ,θ=0,φ)=U(μ=0,θ,φ)=U(μ=π,θ,φ)=1 with 1 the identity matrix. Therefore, μ=0 and μ=π correspond to the vacuum configuration, and the varying μ∈ [0,π] parametrizes the loop. Then it follows that equipped with the loop parametrized by μ, the gauge transformation U(μ,θ,φ) defines a map: S^3 → S^3. Since the homotopy group is π_3(S^3)=ℤ, the topological degree of the map is nonzero and the loop is non-contractible. Now it is straightforward to construct the general field configuration using Eq. (<ref>). A suitable ansatz is W^_j(μ,r,θ,φ) = - i/gf(r)∂^_j U(μ,θ,φ)U^-1_(μ,θ,φ) , j=θ,φ , ϕ(μ,r,θ,φ) = v^_ϕ/√(2)h(r)U(μ,θ,φ)( 0 1 ) , Δ(μ,r,θ,φ) = v^_Δ h^_Δ(r) U(μ,θ,φ)( 0 -1 0 0 )U^-1(μ,θ,φ) , where f(r), h(r) and h^_Δ(r) are radial profile functions to be determined. Since the polar gauge is singular at the origin, the smoothness requires the profile functions of all gauge multiplets to vanish at the origin. In addition, at spatial infinity the field configuration should go back to the vacuum configuration. This ensures the finiteness of the energy. Therefore, the boundary conditions of the profile functions should be f(0) = h(0)=h_Δ(0)=0 , f(∞) = h(∞)=h^_Δ(∞)=1 . Substituting Eqs. (<ref>)-(<ref>) into Eq. (<ref>), we obtain the kinematic terms 1/2g^jk_g^jl_(F^_ijF^_kl) = 4/g^2_ r^4_sin^2_μ[2f^2_(1-f)^2sin^2_μ+r^2_ f'^2_] , g^ij_(D^_iϕ)^†_(D^_jϕ) = v_ϕ^2/2r^2_[2(1-f)^2_ h^2_sin^2_μ+r^2_ h'^2_] , 1/2g^ij_[(D^_i Δ)^†_(D^_jΔ)] = v_Δ^2/2r^2_[(5-cos2θ)(1-f)^2_h_Δ^2 sin^2_μ+r^2_ h_Δ'^2] , where we have suppressed all arguments in the profile functions for simplicity, and all derivatives are with respect to r. It is interesting to notice that the kinetic terms of gauge fields and the doublet are spherically symmetric while that of the triplet is not. Also note that the contribution from the kinetic term of the triplet is suppressed by v_Δ^2/v_ϕ^2 compared with that of the doublet. Furthermore, once the scalar potential V(ϕ,Δ) is known (as shown in the next two sections), one could obtain the total energy E(μ) by performing the integral in Eq. (<ref>), which is the function of the loop parameter μ. The sphaleron configuration (labeled by μ^_0) is determined by finding the maximum energy along the non-contractible loop, namely δ E(μ)/δμ|_μ=μ_0 = 0 , δ^2 E(μ)/δμ^2|_μ=μ_0 < 0 . The sphaleron energy is given by E_ sph=E(μ_0), and the EOM of the sphaleron are obtained from δ E(μ^_0)/δ f=δ E(μ^_0)/δ h=δ E(μ^_0)/δ h^_Δ=0 . 
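Before solving these EOM numerically, it is straightforward to check the properties of the loop ansatz directly. The short sketch below (Python) verifies that U(μ,θ,φ) is unitary with unit determinant and reduces to the identity at μ = 0, μ = π, and θ = 0, as stated above; the sample points are random and purely illustrative.

import numpy as np

def U(mu, theta, phi):
    """Gauge transformation parametrizing the non-contractible loop."""
    a = np.exp(1j * mu) * (np.cos(mu) - 1j * np.sin(mu) * np.cos(theta))
    b = np.exp(1j * phi) * np.sin(mu) * np.sin(theta)
    return np.array([[a, b],
                     [-np.conj(b), np.conj(a)]])

rng = np.random.default_rng(1)
for _ in range(5):
    mu, th, ph = rng.uniform(0, np.pi, 3)
    M = U(mu, th, ph)
    assert np.allclose(M @ M.conj().T, np.eye(2))      # unitarity
    assert np.isclose(np.linalg.det(M), 1.0)           # det U = 1, so U is in SU(2)
# Loop endpoints and the north pole map back to the vacuum configuration
for mu, th in [(0.0, 0.7), (np.pi, 0.7), (0.4, 0.0)]:
    assert np.allclose(U(mu, th, 0.3), np.eye(2))
print("U(mu, theta, phi) is in SU(2) and equals 1 at mu = 0, pi and at theta = 0")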
Solving the EOM together with the boundary conditions in Eq. (<ref>), one obtains the field configuration of the sphaleron. In the next two sections, we will use the above formalism to calculate the sphaleron configuration in the HTM. § SPHALERON WITH THE MINIMAL POTENTIAL §.§ Scalar Potential The most general scalar potential in the HTM has 8 independent parameters. Before investigating the full potential in the next section, we first consider a simplified potential V(ϕ,Δ)=λ(ϕ^†_ϕ)^2_-κ^2_ϕ^†_ϕ+1/2M_Δ^2 (Δ^†_Δ)-(λ^_Δ M^_Δϕ^ T_ϵΔϕ+ h.c.) , where ϵ≡ iσ^2_. In Eq. (<ref>), only the trilinear interaction (ϕ-Δ-ϕ) is kept and all the quartic terms of triplet self-interaction and doublet-triplet interaction are turned off. This is a minimal version of the HTM, which still violates the lepton number and can accommodate the tiny neutrino masses. We will restrict ourselves to the minimal HTM throughout this section. It helps to exhibit the effects of the triplet on the sphaleron in a more apparent way. Without loss of any generality, we can take M_Δ and λ_Δ in Eq. (<ref>) to be real and positive. Substituting the VEVs into the scalar potential we have V(v^_ϕ,v^_Δ)≡ V(⟨ϕ⟩,⟨Δ⟩)=1/4λ v_ϕ^4-1/2κ^2_ v_ϕ^2+1/2M_Δ^2 v_Δ^2-λ^_Δ M^_Δ v^_Δ v_ϕ^2 . The VEVs are determined by minimizing the potential ∂/∂ v^_ϕV(v^_ϕ,v^_Δ)=λ v_ϕ^3-κ^2_ v^_ϕ-2λ^_Δ M^_Δ v^_Δ v^_ϕ=0 , ∂/∂ v^_ΔV(v^_ϕ,v^_Δ)=M_Δ^2 v^_Δ-λ^_Δ M^_Δ v_ϕ^2=0 , from which one obtains v^_ϕ=√(κ^2_/λ-2λ_Δ^2) , v_Δ^=λ^_Δ v_ϕ^2/M^_Δ . In order to have a real positive v_ϕ, we require κ^2_>0 and λ-2λ_Δ^2>0. Besides, the vacuum stability requires λ>0. Substituting the VEVs back to Eq. (<ref>) we obtain the minimum V^_ min=-κ^4_/4(λ-2λ_Δ^2)=-1/4(λ-2λ_Δ^2) v_ϕ^4 . The nonzero minimum of the potential would bring about infinity after integrating over the whole space. To obtain a finite energy, one can perform a constant shift to the potential V(ϕ,Δ) → V(ϕ,Δ)+1/4(λ-2λ_Δ^2) v_ϕ^4 = λ(ϕ^†_ϕ-v_ϕ^2/2)^2_+2λ_Δ^2 v_ϕ^2(ϕ^†_ϕ-v_ϕ^2/2)+λ_Δ^2 v_ϕ^4/2v_Δ^2[(Δ^†_Δ)-v_Δ^2] +λ_Δ^2 v_ϕ^2/v^_Δ[v^_Δ v_ϕ^2-2 Re(ϕ^ T_ϵΔϕ)] . Note that such a shift has no impact on the sphaleron configuration since it does not involve any dynamical degrees of freedom. In Eq. (<ref>) we have replaced κ^2_ and M^_Δ with the VEVs using Eq. (<ref>). Therefore, in the minimal HTM the scalar potential depends on 4 real positive parameters: {λ,λ^_Δ,v^_ϕ,v^_Δ}. Substituting Eqs. (<ref>)-(<ref>) into Eq. (<ref>), we get the scalar potential in terms of the profile functions V(ϕ,Δ)=1/4v_ϕ^4[λ(1-h^2_)^2_+2λ_Δ^2(2h^2_-1-h^_Δ)(1-h^_Δ)] . It can be seen that the scalar potential is also spherically symmetric, although the fields themselves (i.e., ϕ and Δ) are not. §.§ Equations of Motion Now one can calculate the total energy using Eq. (<ref>). It is helpful to define the following dimensionless quantity ξ≡ g v r ≈ 8.1 ×(r/10^-15  cm) , where we have used g≈ 0.65 and v=√(v_ϕ^2+2v_Δ^2)≈ 246  GeV. As one can see later, ξ characterizes the typical scale of the sphaleron. Substituting Eqs. (<ref>)-(<ref>) and (<ref>) into Eq. (<ref>) and integrating out the angular part, we obtain E(μ)=4π v/g∫_0^∞ dξ( H^_ gauge+ H^_ doublet+ H^_ triplet) , where[From here on, unless otherwise specified, all derivatives are with respect to ξ.] 
H^_ gauge = 4 f'^2_sin^2_μ+8/ξ^2_f^2_(1-f)^2sin^4_μ , H^_ doublet = ϱ^_1/4β^2_ξ^2_(1-h^2_)^2+1/2βξ^2_ h'^2_+1/βh^2_(1-f)^2sin^2_μ , H^_ triplet = ϱ_2/4β^2_ξ^2(2h^2_-1-h^_Δ)(1-h^_Δ)+ϱ^_3/6β[3ξ^2_ h_Δ'^2+16h_Δ^2(1-f)^2sin^2_μ] , and ϱ^_1≡λ/g^2_ , ϱ^_2≡2λ_Δ^2/g^2_ , ϱ^_3≡v_Δ^2/v_ϕ^2 , β≡v^2_/v_ϕ^2=1+2ϱ^_3 . In Eq. (<ref>) we have divided the contributions into three parts: H^_ gauge and H^_ doublet come from the kinetic and self-interaction terms of the gauge bosons and the doublet, respectively, while H^_ triplet arises from the triplet kinetic term, the triplet mass term, and the doublet-triplet interaction. To reduce to the SM case, one can simply take ϱ^_2=ϱ^_3=0. The next step is to determine the value of μ corresponding to the maximum energy. To this end, we calculate the variation of the energy with respective to μ, i.e., δ E(μ)/δμ=4π v/3gsin2μ∫_0^∞ dξ[12f'^2_+1/β(1-f)^2_(3h^2_+8ϱ^_3 h_Δ^2)+48/ξ^2_f^2_(1-f)^2_sin^2_μ]=0 , which gives μ=0, π/2 or π. A further investigation of the second-order variation leads to δ^2_ E(μ)/δμ^2|^_μ=0 =δ^2_ E(μ)/δμ^2_|^_μ=π=4π v/g∫_0^∞ dξ[8f'^2_+2/3β(1-f)^2(3h^2_+8ϱ^_3 h_Δ^2)]>0 , δ^2_ E(μ)/δμ^2_|^_μ=π/2 =4π v/g∫_0^∞ dξ[-8f'^2_-2/3β(1-f)^2_(3h^2_+8ϱ^_3 h_Δ^2)-32/ξ^2_f^2_(1-f)^2_]<0 . Therefore, μ=0 or π corresponds to the minimum energy (i.e., the vacuum configuration) as expected, while μ=π/2 corresponds to the maximum energy (i.e., the sphaleron configuration). Substituting μ=π/2 into Eq. (<ref>) we obtain the sphaleron energy E^_ sph=4π v/g∫_0^∞ dξ { 4f'^2_+8/ξ^2_f^2_(1-f)^2_+1/β(1-f)^2_h^2_+1/2βξ^2_ h'^2_+ϱ^_3/6β[3ξ^2_ h_Δ'^2+16h_Δ^2(1-f)^2_]. .+ξ^2_/4β^2_[(ϱ^_1-ϱ^_2)(1-h^2_)^2_+ϱ^_2(h^2_-h^_Δ)^2_] } . The EOM of the fields are determined by the variation of the sphaleron energy with respect to the profile functions δ E^_ sph/δ f=δ E^_ sph/δ h=δ E^_ sph/δ h^_Δ=0 , which results in ξ^2_ f” = 2 f(1-f)(1-2f)-ξ^2_/4β(1-f)h^2_-2ϱ^_3/3βξ^2_(1-f)h_Δ^2 , (ξ^2_ h')' = 2(1-f)^2_ h-ξ^2_/β[(ϱ^_1-ϱ^_2)h(1-h^2_)-ϱ^_2h(h^2_-h^_Δ)] , (ξ^2_ h_Δ')' = 16/3(1-f)^2_h^_Δ-ϱ^_2/2βϱ^_3ξ^2_(h^2_-h^_Δ) . In addition, the profile functions should satisfy the boundary conditions in Eq. (<ref>). Once the solutions of the EOM are found, one can simply substitute them back to Eq. (<ref>) to get the sphaleron energy, which is expected to be of the order of 4π v/g≈ 5  TeV. Before solving Eqs. (<ref>)-(<ref>), it is interesting to first take a look at the heavy-mass limit of the triplet scalar (i.e., M_Δ→∞ or v^_Δ/v^_ϕ→ 0). Note that the coupling ϱ^_2/(2ϱ^_3) in Eq. (<ref>) is actually M_Δ^2/(g^2 v_ϕ^2) using the second relation in Eq. (<ref>). In the heavy-mass limit, M_Δ^2/(g^2_ v_ϕ^2) goes infinity and Eq. (<ref>) enforces h^_Δ→ h^2_. Then the EOM of f(ξ) and h(ξ) reduce to ξ^2_ f” = 2f(1-f)(1-2f)-ξ^2_/4(1-f)h^2_ , (ξ^2_ h')' = 2(1-f)^2_ h-ξ^2_(ϱ^_1-ϱ^_2)h(1-h^2_) , which are exactly those in the SM <cit.>, except for the replacement ϱ^_1→ϱ^_1-ϱ^_2, or equivalently, λ→λ_ eff≡λ-2λ_Δ^2. Therefore, a very heavy triplet scalar has no influence on the sphaleron but only shifts the quartic Higgs coupling λ to λ^_ eff. This is consistent with the result that one integrates out the triplet scalar at the tree level and retains only the leading-order term: L^_ eff= L^_ SM+2λ_Δ^2 (ϕ^†_ϕ)^2_+ O(1/M^_Δ) . §.§ Sphaleron Solution The EOM in Eqs. (<ref>)-(<ref>) are coupled nonlinear differential equations. It is difficult to solve them analytically. In Appendix <ref>, we have developed a numerical algorithm based on the spectral method that can be used to efficiently solve the sphaleron EOM. 
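The results below are obtained with the spectral algorithm of the appendix, but as an independent cross-check the same EOM can be handed to a generic boundary-value solver. A minimal sketch is given below, using SciPy's collocation-based solve_bvp on a truncated domain with tanh-like initial guesses; the parameter values, domain cutoff, mesh, and guesses are our own illustrative choices, and convergence may require tuning them.

import numpy as np
from scipy.integrate import solve_bvp

# Illustrative parameters (not a fit): rho1 = lambda/g^2, rho2 = 2 lambda_Delta^2/g^2,
# rho3 = v_Delta^2/v_phi^2, beta = 1 + 2 rho3.
rho1, rho2, rho3 = 0.306, 0.10, 1.0e-3
beta = 1.0 + 2.0 * rho3

def rhs(xi, y):
    # First-order system for (f, f', h, h', h_Delta, h_Delta') from the EOM above,
    # using (xi^2 u')' = xi^2 u'' + 2 xi u'.
    f, fp, h, hp, hd, hdp = y
    xi2 = xi**2
    fpp = (2.0 * f * (1 - f) * (1 - 2 * f)
           - xi2 / (4 * beta) * (1 - f) * h**2
           - 2 * rho3 / (3 * beta) * xi2 * (1 - f) * hd**2) / xi2
    hpp = (2.0 * (1 - f)**2 * h
           - xi2 / beta * ((rho1 - rho2) * h * (1 - h**2) - rho2 * h * (h**2 - hd))
           - 2.0 * xi * hp) / xi2
    hdpp = (16.0 / 3.0 * (1 - f)**2 * hd
            - rho2 / (2 * beta * rho3) * xi2 * (h**2 - hd)
            - 2.0 * xi * hdp) / xi2
    return np.vstack([fp, fpp, hp, hpp, hdp, hdpp])

def bc(ya, yb):
    # f = h = h_Delta = 0 at the origin and -> 1 at large xi
    return np.array([ya[0], ya[2], ya[4], yb[0] - 1, yb[2] - 1, yb[4] - 1])

xi = np.linspace(1e-3, 30.0, 200)
guess = np.zeros((6, xi.size))
guess[0] = np.tanh(xi**2 / 4.0); guess[1] = (xi / 2.0) / np.cosh(xi**2 / 4.0)**2
guess[2] = np.tanh(xi / 2.0);    guess[3] = 0.5 / np.cosh(xi / 2.0)**2
guess[4] = np.tanh(xi**2 / 4.0); guess[5] = (xi / 2.0) / np.cosh(xi**2 / 4.0)**2

sol = solve_bvp(rhs, bc, xi, guess, tol=1e-4, max_nodes=100000)
print("solver status:", sol.status, "-", sol.message)

# Sphaleron energy in units of 4 pi v / g, evaluated on the solution mesh
# (the text quotes values around 1.9 for SM-like parameters).
x = sol.x
f, fp, h, hp, hd, hdp = sol.y
integrand = (4 * fp**2 + 8 / x**2 * f**2 * (1 - f)**2
             + (1 - f)**2 * h**2 / beta + x**2 * hp**2 / (2 * beta)
             + rho3 / (6 * beta) * (3 * x**2 * hdp**2 + 16 * hd**2 * (1 - f)**2)
             + x**2 / (4 * beta**2) * ((rho1 - rho2) * (1 - h**2)**2 + rho2 * (h**2 - hd)**2))
E_over_4pivg = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
print("E_sph / (4 pi v / g) ~", E_over_4pivg)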
See Appendix <ref> for more details. The solutions of the profile functions and the sphaleron energy density obtained from the spectral method are shown in Fig. <ref>. Note that ϱ_3 violates the custodial symmetry and thus is strictly constrained by the EW precision measurements: √(ϱ^_3) = v^_Δ/v^_ϕ≲ 0.03 <cit.>. Moreover, in the SM, ϱ^_1 is related to the mass ratio of the Higgs boson and W boson via ϱ_1^ SM=m_h^2/(8m_ W^2)≈ 0.306. In Fig. <ref>, as an illustration, we have taken ϱ^_3 to saturate the experimental upper bound, namely ϱ^_3=10^-3_ (corresponding to v^_Δ≈ 8  GeV). We also fix ϱ^_1=ϱ_1^ SM and show the solutions of profile functions and the sphaleron energy density for different ϱ^_2. From Fig. <ref>, it can be seen that all the profile functions approach the vacuum configuration [i.e., f(∞)=h(∞)=h^_Δ(∞)=1] quickly. The sphaleron energy is restricted within a very narrow region: ξ≲ 10, corresponding to r≲ 10^-15_  cm using Eq. (<ref>), which is even two orders of magnitude smaller than the length scale of a proton. This implies that the sphaleron looks like a “particle" localized near the origin. If the triplet couples with the doublet, then a larger trilinear coupling ϱ^_2 makes the profile functions tend to the vacuum configuration more slowly. In addition, ϱ^_2 would diffuse the distribution of the sphaleron energy density and also decrease the total energy of the sphaleron. It is also interesting to investigate the asymptotic behavior of the triplet field near the origin. First, from Eqs. (<ref>) and (<ref>), the smoothness of the profile functions at the origin requires f and h to satisfy f∼ξ^2 and h∼ξ, which is the same as the SM case <cit.>. Then suppose h^_Δ∼ξ^α_ (with α>0) near ξ=0 and substitute it into Eq. (<ref>). If ϱ^_3≠ 0, keeping only the leading-order term of ξ one obtains[If ϱ^_3=0, the term proportional to ξ^2_/ϱ^_3 in Eq. (<ref>) cannot be neglected near ξ=0. Instead, the finiteness of the both sides of Eq. (<ref>) enforces h_Δ→ h^2. Therefore we have h_Δ∼ h^2∼ξ^2 near the origin if ϱ^_3=0.] α(α-1)+2α=16/3 ⇒ α=1/6(√(201)-3)≈ 1.86 . The above asymptotic behavior of the triplet field near the origin has also been verified numerically. In the left panel of Fig. <ref>, we show the contour plot of the sphaleron energy with respect to ϱ^_1 and ϱ^_2, where ϱ^_3=10^-3_ is fixed. It is obvious that a larger ϱ^_1 (or ϱ^_2) would increase (or decrease) the sphaleron energy. One may wonder how large is the difference of the sphaleron energy between the minimal HTM and the SM. The answer is that for ϱ^_3≲ 10^-3_ the difference is negligible. This is because for such a small ϱ^_3, the triplet almost decouples and shifts λ to λ-2λ_Δ^2. As a result, the sphaleron energy in the minimal HTM only depends on ϱ^_1-ϱ^_2, as is shown in the left panel of Fig. <ref>. In Table <ref>, we compare the sphaleron energy in the SM and in the minimal HTM. As one can see, the difference is only about 1‰, if one replaces ϱ_1 in the SM with ϱ^_1-ϱ^_2 in the minimal HTM. Note that such a difference is of the same order of ϱ^_3. However, things are different for a larger ϱ^_3.[We comment here that a large value of v^_Δ/v^_ϕ may be available when taking into account the temperature corrections in the early Universe. See more discussions in Sec. <ref>.] In the right panel of Fig. <ref> we show the behavior of E^_ sph with ϱ_3. It can be seen that a large ϱ^_3 could significantly decrease the sphaleron energy. This can be understood as follows. 
For small ϱ^_3, β≈ 1, h^_Δ≈ h^2_, and the term proportional to ϱ^_3 in Eq. (<ref>) is suppressed, which means the contribution of the triplet to the sphaleron energy is negligible, and it reduces to the SM case. However, for large ϱ^_3 we have β≈ 2ϱ^_3, then the terms relevant to the doublet in Eq. (<ref>) are suppressed by the inverse power of β. In this case, the sphaleron energy is dominated by the contribution of gauge fields and the triplet. More explicitly, we have E^_ sph(ϱ^_3≫ 1)≈4π v/g∫_0^∞ dξ{ 4f'^2_+8/ξ^2_f^2_(1-f)^2_+1/12[3ξ^2_ h_Δ'^2+16h_Δ^2(1-f)^2_] }≈ 1.32×4π v/g , which tends to a fixed value. This explains why curves with different ϱ^_2 in the right panel of Fig. <ref> converge together in the large ϱ^_3 limit. Compared with the case of small ϱ^_3, we find the sphaleron energy could be decreased by 30% if ϱ^_3 is sufficiently large. To summarize, in the minimal HTM, there are three relevant parameters which could affect the sphaleron configuration, i.e., the doublet quartic coupling ϱ^_1, the doublet-triplet trilinear coupling ϱ_2, and the VEV-ratio parameter ϱ^_3. As in the SM, the sphaleron energy increases monotonically with ϱ^_1, while the two additional parameters ϱ^_2 and ϱ^_3 would decrease the sphaleron energy. However, at zero temperature, the stringent constraint on the triplet VEV has highly suppressed the effects of the triplet on the sphaleron. The sphaleron energy in the minimal HTM can be simply obtained from that in the SM with the replacement ϱ^_1→ϱ^_1-ϱ^_2. As we will see below, the situation becomes different when considering the full potential in the HTM. § SPHALERON WITH THE FULL POTENTIAL In this section, we calculate the sphaleron configuration in the HTM with the full potential. §.§ Scalar Potential and Equations of Motion The most general scalar potential in the HTM is given by V(ϕ,Δ)= λ(ϕ^†_ϕ)^2_-κ^2_ϕ^†_ϕ+1/2M_Δ^2 (Δ^†_Δ)-(λ^_Δ M^_Δϕ^ T_ϵΔϕ+ h.c.) +λ_1/4[(Δ^†_Δ)]^2_+λ^_2/4[(Δ^†_Δ)^2_]+λ^_3(ϕ^†_ϕ)(Δ^†_Δ)+λ^_4ϕ^†_ΔΔ^†_ϕ , where λ^_i (for i=1,2,3,4) are real couplings. Substituting the VEVs of the doublet and the triplet into the potential above and minimizing it leads to ∂/∂ v^_ϕV(v^_ϕ,v^_Δ) =(-κ^2_+λ v_ϕ^2-2λ^_ΔM^_Δv^_Δ+λ^_3v_Δ^2)v^_ϕ=0 , ∂/∂ v^_ΔV(v^_ϕ,v^_Δ) =-λ^_ΔM^_Δv_ϕ^2+M_Δ^2v^_Δ+(λ^_1+λ^_2)v_Δ^3+λ^_3v_ϕ^2v^_Δ=0 . From Eqs. (<ref>) and (<ref>) one can determine v^_ϕ and v^_Δ from the couplings, though the general expressions are very tedious. Alternatively, we could also use Eqs. (<ref>) and (<ref>) to express the couplings as λ^_3 =κ^2_-λ v_ϕ^2+2λ^_ΔM^_Δv^_Δ/v_Δ^2 , λ^_1+λ^_2 =-M^_Δ/v_Δ^3(v^_Δ M^_Δ + λ^_Δ v_ϕ^2 )+v_ϕ^2/v_Δ^4(λ v_ϕ^2 - κ^2_) . With the help of Eqs. (<ref>) and (<ref>), the vacuum energy is given by V(v_ϕ,v_Δ)=1/4[M^_Δv^_Δ(M^_Δv^_Δ-λ^_Δv_ϕ^2)-κ^2_v_ϕ^2] . As what we have done before, in order to have a finite total energy, we perform a shift to the potential to make the vacuum energy being zero V(ϕ,Δ) → V(ϕ,Δ)-1/4[M^_Δv^_Δ(M^_Δv^_Δ-λ^_Δv^2_)-κ^2_v_ϕ^2] = +λ[(ϕ^†_ϕ)-v_ϕ^2/2]^2_+(λ v_ϕ^2-κ^2_)[(ϕ^†_ϕ)-v_ϕ^2/2]+1/2M_Δ^2 [(Δ^†_Δ)-v_Δ^2] -λ^_Δ M^_Δ[2 (ϕ^ T_ϵΔϕ)-v^_Δ v_ϕ^2]+λ^_1/4{[(Δ^†_Δ)]^2_-v_Δ^4}+λ^_2/4{[(Δ^†_Δ)^2_]-v_Δ^4} +λ^_3[(ϕ^†_ϕ)(Δ^†_Δ)-1/2v_ϕ^2 v_Δ^2]+λ^_4 ϕ^†_ΔΔ^†_ϕ . With the above scalar potential, the total energy turns out to be E(μ)=4π v/g∫_0^∞ dξ( H^_ gauge+ H^_ doublet+ H^_ triplet) , where H^_ gauge and H^_ doublet are the same as those in the minimal HTM [i.e., Eqs. 
(<ref>) and (<ref>)], and H^_ triplet is given by H^_ triplet= +λ_Δ^2/2g^2_β^2_ξ^2_(2h^2_-1-h^_Δ)(1-h^_Δ)+v_Δ^2/6β v^2_ϕ[3ξ^2_ h_Δ'^2+16h_Δ^2(1-f)^2_sin^2_μ] +λ_Δ^2/2g^2_β^2_ξ^2_{κ^2_-(λ-2λ_Δ^2 )v_ϕ^2/λ_Δ^2 v_ϕ^2(1-h^2_). . +(v^_Δ M^_Δ/λ^_Δ v_ϕ^2-1)[2(1-h^2_ h_Δ^2)-(v^_Δ M^_Δ/λ^_Δ v_ϕ^2+1)(1-h_Δ^2)]} -λ^_1+λ^_2/4g^2_β^2_v_Δ^4/v_ϕ^4ξ^2_(1-h_Δ^4)-λ^_3/2g^2_β^2_v_Δ^2/v_ϕ^2ξ^2_(1-h^2_ h_Δ^2) , where β is still defined as β≡ v^2_/v_ϕ^2. Note that λ_4 does not appear in the energy, because ϕ^†_ΔΔ^†_ϕ always vanishes with the ansatz in Eqs. (<ref>) and (<ref>). It is easy to check that in the limit of λ^_1+λ^_2 =0 and λ^_3=0, the parameters κ^2_ and M^_Δ are related to the VEVs by Eq. (<ref>), then the 2nd to 4th lines of Eq. (<ref>) vanish and Eq. (<ref>) reduces to Eq. (<ref>). Moreover, the terms in the 2nd to 4th lines of Eq. (<ref>) are independent of the loop parameter μ, implying that they do not influence the extreme points of the energy. Therefore, we conclude that the sphaleron configuration in the HTM with the full potential is still located at μ=π/2. In order to recast the sphaleron energy into a more compact form, we introduce the following dimensionless parameters ϱ^_1≡λ/g^2_ , ϱ^_2≡2λ_Δ^2/g^2_ , ϱ^_3≡v_Δ^2/v_ϕ^2 , ϱ^_4≡κ^2_/g^2_ v_ϕ^2 , ϱ^_5≡M_Δ^2/g^2_ v_ϕ^2 . Then λ_1+λ_2 and λ_3 are related to them via λ^_1+λ^_2=g^2_(-ϱ^_5/ϱ^_3-ϱ^_5/ϱ^_3√(ϱ^_2/2ϱ^_3ϱ^_5)+ϱ^_1-ϱ^_4/ϱ_3^2) , λ^_3=g^2_(ϱ^_4-ϱ^_1/ϱ^_3+√(2ϱ^_2ϱ^_5/ϱ^_3)) . Notice that in the limit of λ^_3 = 0 and λ^_1+λ^_2=0, it goes back to the minimal HTM, where ϱ^_4 and ϱ^_5 are not independent and they are related to other three parameters by ϱ^_4 =ϱ^_1-ϱ^_2 and ϱ^_5=ϱ^_2/(2ϱ^_3). With the help of Eq. (<ref>), the sphaleron energy can be written as E^_ sph=4π v/g∫_0^∞ dξ { 4f'^2_+8/ξ^2_f^2_(1-f)^2_+1/β(1-f)^2_ h^2_+1/2βξ^2_ h'^2_. .+ξ^2/4β^2_[(ϱ^_1-ϱ^_2)(1-h^2_)^2_+ϱ^_2(h^2_-h^_Δ)^2_]+ϱ^_3/6β[3ξ^2_ h_Δ'^2+16h_Δ^2 (1-f)^2_]. .+ξ^2_/4β^2_[2(ϱ^_4-ϱ^_1+ϱ^_2 )(1-h^2_)-(2ϱ^_3 ϱ^_5 -ϱ^_2)(1-h_Δ^2)]. .+ξ^2_/2β^2_(√(2ϱ^_2 ϱ^_3 ϱ^_5)-ϱ^_2)(1-h^2_ h^_Δ)+ξ^2_/2β^2_(ϱ^_1-ϱ^_4-√(2ϱ^_2 ϱ^_3 ϱ^_5))(1-h^2_ h_Δ^2). .-ξ^2_/4β^2_(ϱ^_1-ϱ^_4-ϱ^_3 ϱ^_5-√(ϱ^_2 ϱ^_3 ϱ^_5/2))(1-h_Δ^4)} . Starting with the energy, we obtain the sphaleron EOM via Eq. (<ref>) ξ^2_ f” = 2f(1-f)(1-2f)-ξ^2_/4β(1-f)h^2_-2ϱ^_3/3βξ^2_(1-f)h_Δ^2 , (ξ^2_ h')' = 2(1-f)^2_ h-ξ^2_/β[(ϱ^_1-ϱ^_2)h(1-h^2_)-ϱ^_2 h(h^2_-h^_Δ). .+(ϱ^_4-ϱ^_1+ϱ^_2)h+(√(2ϱ^_2 ϱ^_3 ϱ^_5)-ϱ^_2)h h^_Δ+(ϱ^_1-ϱ^_4-√(2ϱ^_2ϱ^_3 ϱ^_5))h h_Δ^2 ] , ϱ^_3 (ξ^2_ h_Δ')' = 16/3ϱ^_3 (1-f)^2_ h^_Δ -ϱ^_2 ξ^2_/2β(h^2_-h^_Δ)+ξ^2_/2β[(2ϱ^_3 ϱ^_5 - ϱ^_2 )h^_Δ-(√(2ϱ^_2 ϱ^_3 ϱ^_5)-ϱ^_2)h^2_. .-2(ϱ^_1-ϱ^_4-√(2ϱ^_2 ϱ^_3 ϱ^_5)) h^2_ h^_Δ+2(ϱ^_1-ϱ^_4-ϱ^_3 ϱ^_5-√(ϱ^_2 ϱ^_3 ϱ^_5/2))h_Δ^3] . The profile functions f, h and h_Δ should also satisfy the boundary conditions in Eq. (<ref>). Although there are totally 8 parameters in the scalar potential, namely λ, λ^_Δ, κ^2_, M^_Δ, and λ^_i (for i=1,2,3,4), the sphaleron configuration is only affected by 5 independent parameters, i.e., ϱ^_1-ϱ^_5 defined in Eq. (<ref>). This implies that not all parameters in the HTM are relevant to the B-violating process. §.§ Constraints on the Parameters We have seen that the sphaleron configuration in the HTM is determined by 5 parameters. Using the spectral method developed in Appendix <ref>, one can solve Eqs. (<ref>)-(<ref>) and calculate the sphaleron energy in Eq. (<ref>) for any given parameters. However, there are constraints from both theoretical and experimental aspects on the parameters in the HTM <cit.>. 
Below we list all the constraints that are relevant to the sphaleron. * Triplet VEV: From the first equality of Eq. (<ref>) one can obtain ϱ^_4 = ϱ^_1-1/2ϱ^_3ϱ^_5(2+√(2ϱ^_2/ϱ^_3ϱ^_5))-(λ^_1+λ^_2)ϱ_3^2/g^2_ ≈ ϱ^_1-1/2ϱ^_3ϱ^_5(2+√(2ϱ^_2/ϱ^_3ϱ^_5)) , where in the second line we have neglected the term proportional to ϱ_3^2. This is a good approximation because the EW precision measurements require ϱ^_3≲ 10^-3, and λ^_i cannot be too large for unitarity. Therefore, ϱ^_4 can be approximated using Eq. (<ref>) in the calculation of the sphaleron. Substituting Eq. (<ref>) back to the second equality of Eq. (<ref>) we have λ^_3/g^2≈√(ϱ^_2ϱ^_5/2ϱ^_3)-ϱ^_5 . * Bounded-from-below conditions and the requirement of unitarity: These conditions provide a series of inequalities on the couplings λ_i in the scalar potential, and part of them can be translated to the constrains on ρ_i. For a complete set of these constraints, see Refs. <cit.>. Here we only list those which are relevant to the sphaleron: 0 < ϱ^_1⩽4π/g^2_ , -√(4π/g^2_ϱ^_1) < √(ϱ^_2ϱ^_5/2ϱ^_3)-ϱ^_5⩽4π/g^2_ , ϱ^_1-ϱ^_3ϱ^_5-√(ϱ^_2ϱ^_3ϱ^_5/2)>0 . In addition, there are also constraints relevant to λ_4: -√(4π/g^2_ϱ^_1)<λ^_3+λ^_4/g^2_⩽4π/g^2_ , |2λ^_3 + 3λ^_4|⩽8π , |2λ^_3-λ^_4|⩽ 8π . Although λ^_4 does not directly contribute to the sphaleron configuration, it would be related to other parameters via the Higgs mass (as discussed below). * Higgs mass: The HTM should also predict a CP-even neutral Higgs boson h, whose mass is around 125 GeV. In the HTM, the mass of h is predicted by m_h^2=g^2_ v_ϕ^2 [ϱ^_1+1/2√(ϱ^_2ϱ^_5/2ϱ^_3)+λ^_1+λ^_2/g^2_ϱ^_3. .-√((ϱ^_1-1/2√(ϱ^_2ϱ^_5/2ϱ^_3)-λ^_1+λ^_2/g^2_ϱ^_3)^2_+4(√(ϱ^_2ϱ^_5/2)-λ^_3+λ^_4/g^2_√(ϱ^_3))^2_)] . The terms proportional to λ^_1+λ^_2 in Eq. (<ref>) are suppressed by ϱ^_3 and can be safely neglected. Then one can extract λ^_4 in terms of m_h and ϱ^_i: λ^_4/g^2_≈ϱ^_5±1/(2ϱ^_3)^3/4√((ϱ^_1-m_h^2/2 g^2_ v_ϕ^2)(√(ϱ^_2ϱ^_5)-√(2ϱ^_3)m_h^2/g^2_ v_ϕ^2)) . Given g≈ 0.65, m^_h≈ 125  GeV and v^_ϕ≈ 246  GeV, the combination of Eqs. (<ref>), (<ref>) and (<ref>) provides additional constraints on ϱ^_i. * Collider constraints: The collider searches put the lower bound on the mass of doubly-charged Higgs, namely m^_H^±±_≳ 350  GeV or m^_H^±±_≳ 1  TeV for the decay channels dominated by vector-boson (v^_Δ≳ 10^-4_  GeV) or charged-lepton (v^_Δ≲ 10^-4_  GeV) final states, respectively <cit.>. In the HTM, the mass of the doubly-charged Higgs is predicted to be m_H^±±_^2=g^2_v_ϕ^2(√(ϱ^_2ϱ^_5/2ϱ^_3)-λ^_4/g^2_-λ^_2/g^2_ϱ^_3)≈ g^2_v_ϕ^2(√(ϱ^_2ϱ^_5/2ϱ^_3)-λ^_4/g^2_) . For ϱ^_3=10^-3_, the dominant decay channel is the gauge-boson final state, so the collider constraint implies √(ϱ^_2ϱ^_5/2ϱ^_3)-λ^_4/g^2_≳ 4.8 , where g≈0.65 and v^_ϕ≈ 246  GeV have been used. * Charged lepton flavor violation (cLFV): The lack of the observation of cLFV in the HTM gives <cit.> M^_Δ v^_Δ≳ 10^2  GeV· eV ⇒ ϱ^_3ϱ^_5≳ 10^-24_ . This constraint is easy to satisfy for v^_Δ∼ O( GeV). In summary, the relevant constraints on the parameters that contribute to the sphaleron configuration are given by Eqs. (<ref>), (<ref>), (<ref>) and (<ref>), where λ^_3 and λ^_4 are given by Eqs. (<ref>) and (<ref>), respectively. §.§ Sphaleron Solution Basically, the contribution of the triplet to the sphaleron energy is suppressed by its VEV. For a small enough VEV-ratio parameter ϱ^_3, it should reduce to the SM case. 
Therefore, we fix ϱ^_3 to be its upper bound (i.e., ϱ^_3=10^-3_), and see how much the difference of the sphaleron energy between the HTM and the SM is under all theoretical and experimental constraints. In addition, ϱ^_4 could be calculated from Eq. (<ref>) as a good approximation. Therefore, we are left with three independent parameters, namely the doublet quartic coupling ϱ^_1, the doublet-triplet trilinear coupling ϱ^_2, and the triplet mass parameter ϱ^_5. In the SM, ϱ^_1 is completely fixed by the Higgs mass, i.e., ϱ_1^ SM=m_h^2/(2g^2_ v_ϕ^2)≈ 0.306, and so is the sphaleron energy E_ sph^ SM≈ 1.92× 4π v/g. However, in the HTM, ϱ^_1 is not fixed because the Higgs mass depends on other parameters [see Eq. (<ref>)]. It is not difficult to prove that for ϱ^_1<ϱ_1^ SM there is no allowed parameter space under the constraints discussed in Sec. <ref>. Therefore we must have ϱ^_1⩾ϱ_1^ SM≈ 0.306 and ϱ^_2ϱ^_5⩾ 2 ϱ^_3 m_h^4/(g^4_ v_ϕ^4)≈ 7.5 × 10^-4_. In Fig. <ref>, we have taken ϱ^_1=0.306 and shown the sphaleron energy with respect to ϱ^_2 and ϱ_5. It is clear that a larger ϱ^_5 (corresponding to a heavier triplet) would decrease the sphaleron energy, though the difference is small compared with the SM case because of the suppression from ϱ^_3. However, unlike the SM where ϱ^_1 is fixed to be 0.306, ϱ^_1>0.306 is also allowed in the HTM. Due to the constraints in Sec. <ref>, the parameter space of ϱ_2 and ϱ^_5 begins to split into two distinct regions when ϱ^_1≳ 0.34, as is shown in Fig. <ref>. In Region A (left panel of Fig. <ref>), it can be seen that the allowed parameter space of ϱ^_2 and ϱ^_5 moves to upper-right as ϱ^_1 increases. This can be understood by observing the expression of λ^_4 in Eq. (<ref>), whose magnitude should be bounded by the requirement of unitarity. Moreover, the sphaleron energy decreases as ϱ^_5 increases, while larger ϱ^_1 would bring about larger sphaleron energies. The value of ϱ^_1 can keep increasing until the unitarity bound, i.e., ϱ_1^ max=4π/g^2, is reached. We have verified numerically that the maximum sphaleron energy in Region A is around 1.97× 4π v/g. Basically, the parameters in Region A correspond to a heavy mass scale M^_Δ of the triplet scalar, which can reach TeV or above. Things are quite different for Region B (shown in the right panel of Fig. <ref>). The allowed values of ϱ^_2 and ϱ^_5 are much smaller. More explicitly, the lower and upper bounds of ϱ^_2ϱ^_5 in Region B are given by √(ϱ^_2ϱ^_5) ⩽ 1/2[1/√(2ϱ^_3)(ϱ^_1-m_h^2/2 g^2_ v^2_ϕ)-4√(2π)/g√(ϱ^_1ϱ^_3)-√(A^_1)] , √(ϱ^_2ϱ^_5) ⩾ 1/2[2√(2ϱ^_3)(ϱ^_5+24/5)+1/√(2ϱ^_3)(ϱ^_1-m_h^2/2g^2_v^2_ϕ)-√(A^_2)] , where A^_1 ≡ 1/2ϱ^_3(ϱ^_1-m_h^2/2 g^2_ v^2_ϕ)[ϱ^_1-16√(π)ϱ^_3/g√(ϱ^_1)-m^2_h/2g^2_ v^2_ϕ(1+16ϱ^_3)] , A^_2 ≡ 1/10ϱ^_3(ϱ^_1-m^2_h/2g^2_v^2_ϕ)[5ϱ^_1+8ϱ^_3(24+5ϱ^_5)-5 m_h^2/2g^2 v^2_ϕ(1+16ϱ^_3)] . Note that Eq. (<ref>) comes from the constraints in Sec. <ref>, where m^_h≈ 125  GeV, g≈ 0.65 and ϱ^_3=10^-3_ should be substituted to evaluate the lower and upper bounds. The allowed values of ϱ^_2 and ϱ^_5 are restricted to a narrow parameter space by Eq. (<ref>). For example, for ϱ^_1=0.6, the validity of Eq. (<ref>) requires ϱ^_5≲ 0.987 and 1.05 × 10^-3_≲ϱ^_2ϱ^_5≲ 1.22× 10^-3_, which corresponds to the narrow band in the bottom-right subfigure of Fig. <ref>. Since ϱ^_5 is relatively small, the sphaleron energy in Region B can be significantly enhanced as ϱ^_1 increases. 
In particular, for ϱ^_1=ϱ_1^ max=4π/g^2, the sphaleron energy can reach 2.48× 4π v/g, which is enhanced by about 30% compared with the sphaleron energy in the SM. The parameters in Region B correspond to a much smaller M^_Δ than that in Region A (basically lighter than 1 TeV). However, it does not violate the collider constraints on the mass of doubly-charged Higgs, because m^_H^±± depends on the combination of ϱ^_2ϱ^_5 rather than ϱ^_5 itself, and is enhanced by ϱ_3^-1/2 [see Eq. (<ref>)]. On the other hand, since the allowed parameter space in Region B is quite narrow and is sensitive to the lower bound of m^_H^±±, we point out that it is readily testable by future collider searches and EW precision measurements. In Fig. <ref>, we have shown the sphaleron energy with respect to ϱ^_1 and ϱ^_2 for different values of ϱ^_5. Note that all allowed parameters in Fig. <ref> belong to Region A because the corresponding values of ϱ^_5 are not small enough to satisfy Eq. (<ref>). It is clear that for larger ϱ^_5, the allowed parameter space moves to upper-right. The increase of ϱ^_1 (or ϱ^_5) would enhance (or reduce) the sphaleron energy. For ϱ^_5≳ 100 (corresponding to M^_Δ≳ 1.6  TeV), the lower bound of the sphaleron energy tends to about 1.88× 4π v/g. To sum up, the sphaleron energy in the SM is completely fixed by the Higgs mass, while that in the HTM is not. The allowed parameter space begins to split into two regions when ϱ^_1≳ 0.34. In Region A, the sphaleron energy is bounded to be 1.88 × 4π v/g ≲ E_ sph^≲ 1.97 × 4 π v/g. The difference of the sphaleron energy between the HTM and the SM is less than 3%. On the contrary, in Region B, since ϱ^_5 is relatively small, the sphaleron energy could be significantly enhanced as ϱ^_1 increases. Therefore we have 1.92 × 4π v/g ≲ E_ sph^≲ 2.48 × 4 π v/g, where the sphaleron energy could be enhanced up to about 30% compared with the SM case. § SUMMARY AND DISCUSSIONS The origin of neutrino masses and the baryon asymmetry of the Universe are two of the most important unsolved problems in the SM. Both of them are possible to be explained in a unified framework of the HTM, which extends the SM by adding a complex triplet scalar. The couplings of the triplet to the gauge fields and to the SM Higgs field are expected to affect the sphaleron configuration in the SM, which plays an important role in baryogenesis. Therefore, to realize a self-consistent baryogenesis in the HTM, either via EW baryogenesis or via leptogenesis, the calculation of the sphaleron energy is indispensable. In this work, we calculate the sphaleron configuration in the HTM for the first time, where both the doublet and the triplet scalar fields exist. Although there are 8 parameters in the scalar potential of the HTM, we find that the sphaleron configuration is determined by only 5 independent parameters, i.e., those defined in Eq. (<ref>). Among them, the doublet quartic parameter ϱ^_1 would increase the sphaleron energy, as in the SM case; while the doublet-triplet trilinear parameter ϱ^_2, the VEV-ratio parameter ϱ^_3, and the triplet mass parameter ϱ^_5 would decrease the sphaleron energy in general compared with the SM. Nevertheless, at zero temperature, the constraint from EW precision measurements on the triplet VEV puts a stringent upper bound on ϱ^_3, thus highly suppresses the difference of the sphaleron energy between the HTM and the SM. 
Interestingly, we find there still exists some narrow parameter space where the sphaleron energy could be enhanced by 30% compared with the SM case. Such narrow parameter space can be tested by future collider searches of doubly-charged Higgs and EW precision measurements. In the following, we discuss some possible extensions of the present work. All of the calculations in this paper have neglected the finite-temperature effects. However, the sphaleron transition rate is significant above the temperature of O(100)  GeV in the early Universe, which is a crucial process for baryogenesis. Therefore, in principle one should include the finite-temperature corrections as well as the one-loop corrections into the scalar potential in Eq. (<ref>) and recalculate the sphaleron configuration using the formalism developed above. This is beyond the scope of this paper, and will be left for a future work. As a good approximation, one could estimate the sphaleron energy at finite temperatures using the scaling law <cit.> E^_ sph(T)=E^_ sphv(T)/v , where v and E_ sph are the VEV and the sphaleron energy at zero temperature, and v(T)=[v_ϕ^2(T)+2v_Δ^2(T)]^1/2_ is the VEV at a finite temperature, with v^_ϕ(T) and v^_Δ(T) being the VEVs of the doublet and the triplet. On this point, it is worthwhile to emphasize that v^_Δ(T)/v^_ϕ(T) is not constrained by experiments as at zero temperature, and hopefully we could have a larger ϱ^_3 at finite temperatures. As has been shown in the right panel of Fig. <ref>, a large ϱ^_3 would significantly decrease the sphaleron energy compared with the SM. Apart from the finite-temperature effects, one can study the sphaleron configuration in the Georgi-Machacek (GM) model <cit.>. The GM model further extends the HTM by introducing an additional real triplet scalar with hypercharge Y=0, and can maintain the custodial symmetry at the tree level by adjusting the VEVs of the complex and real triplets. In this way, the VEVs of the triplets are no longer suppressed and can even be larger than that of the doublet. This may significantly change the sphaleron configuration in the SM according to the results in this work. Therefore, it would be interesting to calculate the sphaleron energy and investigate whether a successful EW baryogenesis could be carried out in the GM model, given that the strong first-order EW phase transition is possible in this model <cit.>. Note added. During the final preparation of this paper, a relevant work <cit.> appeared, which studied the sphaleron configuration in extensions of the SM with general electroweak multiplets (see also Ref. <cit.> for earlier efforts). In particular, Ref. <cit.> calculated the sphaleron energy in a septuplet extension of the SM. Besides, Ref. <cit.> focused on the scenario where the neutral component of the multiplet can be a dark matter candidate. In this case, the hypercharge of the multiplet should be zero and the VEV is vanished at zero temperature. This is different from the scenario we considered in the current work. § ACKNOWLEDGEMENTS We would like to thank Huai-Ke Guo, Yu Tian, Yanda Wu and Deshan Yang for helpful discussions about the sphaleron energy and the spectral method. This work was supported in part by the National Natural Science Foundation of China under grants No. 11835013 and No. 12235008. § SPECTRAL METHODS The EOM of the relevant fields in the calculation of the sphaleron configuration are nonlinear differential equations coupled with each other. 
It is usually difficult to solve them in an analytical way. In this appendix we show how to use the spectral method to numerically solve the EOM and calculate the sphaleron energy.[The code is publicly available at https://github.com/Bingrong-Yu/Spectral_Sphaleron_Solverhttps://github.com/Bingrong-Yu/Spectral_Sphaleron_Solver.] The main advantage of the spectral method is that it converges very quickly with high precision as the number of the grid points increases. In what follows, we first give a brief introduction to the spectral method, and then apply it to the SM and the HTM. §.§ Basic Ideas The spectral method is an efficient technique to numerically solve differential equations <cit.>. The core idea is to approximate the unknown function by a set of basis functions. Let {ϕ^_n(x)} being a set of orthogonal and complete functions, the unknown function u(x) can be expanded as u(x)=∑_n=0^∞a^_n ϕ^_n(x) , a^_n=∫ dx u(x) ϕ^*_n(x) . For practical numerical computation, one has to truncate at a finite number n=N, and u(x) can be approximated by u(x)≈ u^_N(x)=∑_n=0^Na^_n ϕ^_n(x) , where the coefficients a^_n are calculated at grid points {x^_i} a^_n ≈∑_i=1^Nu^_i ϕ^*_n(x_i) , with u_i≡ u(x_i). Substituting Eq. (<ref>) back to (<ref>) one obtains u^_N(x) = ∑_n=0^N∑_i=0^Nu^_i ϕ_n^*(x_i)ϕ^_n(x) . Then the derivative of the unknown function can be approximated by that of the basis functions, namely u_j' ≈ u_N'(x)|_x=x_j = ∑_n=0^N∑_i=0^Nu^_i ϕ_n^*(x_i)ϕ_n'(x)|_x=x^_j . The differentiation matrix D^_N, which relates the unknown function to its derivative at grid points, is given by (D_N)^_ji=∑_n=0^Nϕ_n^*(x^_i) ϕ_n'(x)|_x=x^_j . Starting from the differentiation matrix, the values of the derivative function can be easily expressed as the linear combination of the values of the raw function. For example, we have u_j' = ∑_i=1^N(D^_N)^_jiu^_i , u_j” = ∑_i=1^N(D_N^2)^_jiu^_i . Then the differential equations of u(x) are reduced to a set of algebraic equations of {u^_i}, which can be numerically solved directly. The numerical error of the above method is described by the residual function R(x)=|u(x)-u^_N(x)|. Therefore, a “good choice" of the basis functions {ϕ^_n(x)} and the grid points {x^_i} should make the residual function as small as possible. For periodic functions, the best choice of the basis functions is the Fourier series. However, for non-periodic functions, as what we encountered in the calculation of the sphaleron, it can be shown that in most cases the best choice of the basis functions is the Chebyshev polynomials (see Appendix <ref>) <cit.>. In addition, the grid points should be taken as the extrema of the Chebyshev polynomials, i.e., x_j = cos(jπ/N) , j=0,1,⋯,N . Then it is straightforward to construct the Chebyshev spectral differentiation matrix <cit.> (D^_N)^_00 = 2N^2_+1/6 , (D^_N)^_NN = -2N_^2+1/6 , (D^_N)^_jj = -x^_j/2(1-x_j^2), j=1,⋯, N-1 , (D^_N)^_ij = c^_i/c^_j(-1)_^i+j/x^_i-x^_j, i≠ j, 0⩽ i,j ⩽ N , where c^_i={ 2 i=0 or N 1 otherwise . . One should keep in mind that when using the Chebyshev spectral method to solve differential equations, the following two conditions need to be satisfied * domain of the variable: x∈ [-1,1] ; * boundary conditions: u(-1) = u(1) =0 . They are easily to achieve after a linear transformation of the variable. In the following parts we will show how to use the spectral method introduced above to solve the differential equations relevant to the sphaleron. 
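As a concrete illustration, the extrema grid and the differentiation matrix given above can be assembled in a few lines of Python/NumPy. This is a generic sketch rather than the released solver; the diagonal is filled through the negative row sums, which is algebraically equivalent to the explicit diagonal entries quoted above and numerically more robust.

import numpy as np

def cheb(N):
    """Chebyshev extrema grid x_j = cos(j*pi/N) and the (N+1) x (N+1) differentiation matrix D_N."""
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)                          # x_0 = 1, ..., x_N = -1
    c = np.where((j == 0) | (j == N), 2.0, 1.0) * (-1.0) ** j
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))    # off-diagonal entries (c_i/c_j)(-1)^(i+j)/(x_i - x_j)
    D -= np.diag(D.sum(axis=1))                        # diagonal from negative row sums
    return D, x

# sanity checks: differentiation is exact on polynomials of degree <= N,
# and the corner entry reproduces (2N^2+1)/6
D, x = cheb(16)
assert np.allclose(D @ x**2, 2 * x)
assert np.isclose(D[0, 0], (2 * 16**2 + 1) / 6)

The second-derivative matrix used for f'' and h'' is then simply D @ D.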
§.§ Sphaleron in the Standard Model As a warm up, we first use the spectral method to calculate the sphaleron configuration in the SM. There are only two dynamical fields [i.e., f(ξ) and h(ξ)], and their EOM are given by (recalling that we have defined ξ≡ g v r and ϱ^_1≡λ/g^2_) ξ_^2 f” = 2f(1-f)(1-2f)-ξ_^2/4(1-f)h_^2 , (ξ_^2 h')' = 2(1-f)_^2 h-ϱ_1 ξ_^2 h(1-h_^2) , with the boundary conditions f(0)=h(0)=0 and f(∞)=h(∞)=1. In the practical calculation, the variable is truncated at some finite distance ξ^_ max=2a. This is reasonable because the sphaleron energy is localized near the origin and the profile functions f and h tend to the constant quickly as the distance increases. In order to satisfy the conditions of the Chebyshev spectral method, we perform a linear transformation to the variable ξ→ x=ξ/a-1 . In addition, the profile functions should be shifted to f(x) →f̅(x) = f(x) - 1+x/2 , h(x) →h̅(x) = h(x) - 1+x/2 . Then the domain of the variable is x∈ [-1,1] and the boundary conditions become f̅(-1)=f̅(1)=h̅(-1)=h̅(1)=0. The EOM of the shifted profile functions turn out to be 2(1+x)_^2 f̅” = (2f̅+1+x)(2f̅-1+x)(2f̅+x) +a_^2/16(1+x)_^2(2f̅-1+x)(2h̅+1+x)_^2 , (1+x)_^2 h̅” + (1+x)(2h̅'+1) = 1/4(2f̅-1+x)_^2(2h̅+1+x) - a_^2 ϱ^_1/8(1+x)_^2(2h̅+1+x)[4-(2h̅+1+x)_^2] . Note that all the derivatives in Eqs. (<ref>) and (<ref>) are with respective to x rather than ξ. Now we can use the Chebyshev spectral method introduced above to solve the EOM. Given the grid points in Eq. (<ref>), it is straightforward to construct the (N+1)× (N+1) differentiation matrix D_N using Eq. (<ref>). The derivatives of the profile functions are given by f̅' = D^_N f̅, h̅' = D^_N h̅, f̅” = D_N^2 f̅, and h̅” = D_N^2 h̅. Then Eqs. (<ref>) and (<ref>) are reduced to 2(N-1) algebraic equations with respect to {f̅(x^_1),⋯,f̅(x^_N-1),h̅(x^_1),⋯,h̅(x^_N-1)} , which can be numerically solved directly. Finally, the profile functions should be shifted back via f(x)=f̅(x)+(1+x)/2 and h(x)=h̅(x)+(1+x)/2, and the energy of the sphaleron could be computed by E^_ sph=4π v a/g∫_x^_N-1^x^_1 dx {4/a^2_f'^2_+8/a_^2(1+x)_^2f_^2(1-f)_^2+(1-f)_^2h_^2+1/2(1+x)_^2 h'^2_. .+ϱ^_1/4 a_^2(1+x)_^2 (1-h_^2)_^2 } , where the upper and lower bounds x^_1 and x^_N-1 are given by Eq. (<ref>). We find the results converge rapidly as the number of grid points N increases (see Fig. <ref>). For N ≳ 20, the numerical results are stable and independent of the cut-off a. This is because the profile functions and the sphaleron energy density tend to constants quickly as ξ increases. In Fig. <ref> we show the sphaleron configuration in the SM obtained using the spectral method, where N=60 and a=30 have been taken. It is worthwhile to mention that the spectral method takes only about 1 second to calculate the sphaleron configuration for a given ϱ^_1 using a usual personal desktop. In particular, for ϱ_1=0, ϱ^_1=0.306 and ϱ^_1 →∞, we obtain E^_ sph≈ 1.54, E^_ sph≈ 1.92 and E^_ sph≈ 2.71 (in units of 4π v/g), which matches very well with the result in the literature <cit.>. §.§ Sphaleron in the Higgs Triplet Model Then we turn to calculate the sphaleron configuration in the HTM using the spectral method. We have three dynamical fields, i.e., f(ξ), h(ξ), and h^_Δ(ξ). As what we did in the SM, in order to satisfy the boundary conditions of the spectral method, the variable ξ should be transform to x via Eq. (<ref>), and the profile functions should be shifted to f(x) →f̅(x) = f(x) - 1+x/2 , h(x) →h̅(x) = h(x) - 1+x/2 , h^_Δ(x) →h̅^_Δ(x) = h^_Δ(x) - 1+x/2 . 
Now the domain of the variable is x∈ [-1,1] and the boundary conditions become f̅(-1)=f̅(1)=h̅(-1)=h̅(1)=h̅^_Δ(-1)=h̅^_Δ(1)=0. After some straightforward calculations, the EOM of the shifted profile functions turn out to be 2(1+x)_^2 f̅” = (2f̅+1+x)(2f̅-1+x)(2f̅+x) +a_^2/16β(1+x)_^2(2f̅-1+x)(2h̅+1+x)_^2 +a_^2 ϱ_3/6β(1+x)_^2(2f̅-1+x)(2h̅^_Δ+1+x)_^2 , (1+x)_^2 h̅” + (1+x)(2h̅'+1) = 1/4(2f̅-1+x)_^2(2h̅+1+x) -a_^2/8β(1+x)_^2{(ϱ^_1-ϱ^_2)(2h̅+1+x) [4-(2h̅+1+x)_^2]. .-ϱ^_2(2h̅+1+x)[(2h̅+1+x)_^2-2(2h̅^_Δ+1+x)]. .+4(ϱ^_4-ϱ^_1+ϱ^_2)(2h̅+1+x). .+2(√(2ϱ^_2ϱ^_3ϱ^_5)-ϱ^_2)(2h̅+1+x)(2h̅^_Δ+1+x). .+(ϱ^_1-ϱ^_4-√(2ϱ^_2ϱ^_3ϱ^_5))(2h̅+1+x)(2h̅^_Δ+1+x)_^2} , ϱ^_3(1+x)_^2 h̅_Δ” + ϱ^_3(1+x)(2h̅_Δ'+1) = 2ϱ^_3/3(2f̅-1+x)_^2(2h̅^_Δ+1+x) -a_^2 ϱ^_2/8β(1+x)_^2[(2h̅+1+x)_^2-2(2h̅^_Δ+1+x)] +a_^2/8β(1+x)_^2[2(2ϱ^_3ϱ^_5-ϱ^_2)(2h̅^_Δ+1+x). .-(√(2ϱ^_2ϱ^_3ϱ^_5)-ϱ^_2)(2h̅+1+x)^2. .-(ϱ^_1-ϱ^_4-√(2ϱ^_2ϱ^_3ϱ^_5))(2h̅+1+x)_^2 (2h̅^_Δ+1+x). .+(ϱ^_1-ϱ^_4-ϱ^_3ϱ^_5-√(ϱ^_2ϱ^_3ϱ^_5/2))(2h̅^_Δ+1+x)_^3] . Note that all the derivatives are with respective to x. If we take ϱ^_4=ϱ^_1-ϱ^_2 and ϱ^_5=ϱ^_2/(2ϱ^_3), then Eqs. (<ref>)-(<ref>) simply reduce to the EOM of the shifted profile functions in the minimal HTM. Constructing the differentiation matrix D_N using Eq. (<ref>), the derivatives of the profile functions are given by f̅' = D_N f̅ , h̅' = D_N h̅ , h̅_Δ' = D_N h̅_Δ , f̅” = D_N^2 f̅ , h̅” = D_N^2 h̅ , h̅”_Δ= D_N^2 h̅^_Δ . Then Eqs. (<ref>)-(<ref>) reduce to 3(N-1) algebraic equations with respective to {f̅(x^_1),⋯,f̅(x^_N-1),h̅(x^_1),⋯,h̅(x^_N-1),h̅^_Δ(x^_1),⋯,h̅^_Δ(x^_N-1)} , and they can be numerically solved directly. The profile functions should be shifted back: f(x)=f̅(x)+(1+x)/2, h(x)=h̅(x)+(1+x)/2, and h^_Δ(x)=h̅^_Δ(x)+(1+x)/2. Finally, the energy of the sphaleron is calculated by E_ sph=4π v a/g∫_x^_N-1^x^_1 dx {4/a_^2f'^2_+8/a_^2(1+x)_^2f_^2(1-f)_^2+1/β(1-f)^2h_^2+1/2β(1+x)_^2 h'^2_. .+a_^2(1+x)_^2/4β^2_[(ϱ^_1-ϱ^_2)(1-h_^2)^2_+ϱ^_2(h_^2-h^_Δ)^2_]+ϱ^_3/6β[3(1+x)_^2 h_Δ'^2.. .. +16h_Δ^2 (1-f)_^2]+a_^2(1+x)^2/4β^2_[2(ϱ^_4-ϱ^_1+ϱ^_2 )(1-h_^2).. ..-(2ϱ^_3 ϱ^_5 -ϱ^_2)(1-h_Δ^2)]+a_^2(1+x)_^2/2β_^2(√(2ϱ^_2 ϱ^_3 ϱ^_5)-ϱ^_2)(1-h_^2 h^_Δ). .+a_^2(1+x)_^2/2β_^2(ϱ^_1-ϱ^_4-√(2ϱ^_2 ϱ^_3 ϱ^_5))(1-h_^2 h_Δ^2). .-a^2_(1+x)^2_/4β^2_(ϱ^_1-ϱ^_4-ϱ^_3 ϱ^_5-√(ϱ^_2 ϱ^_3 ϱ^_5/2))(1-h_Δ^4)} , where x^_1 = cos(π/N) and x^_N-1=cos[(N-1)π/N]=-cos(π/N). As in the SM, we find the final results converge rapidly as N increases and depend very weakly on a. Therefore, in the numerical calculation throughout this work, we fix N=60 and a=30. § CHEBYSHEV POLYNOMIALS In this mathematical appendix, we briefly review some properties of the Chebyshev polynomials. We also demonstrate why the Chebyshev polynomials serve as a “good candidate" of the basis functions in the spectral method. The Chebyshev polynomial of degree n is defined as T^_n(cosθ)=cos(nθ) , n=0,1,2,⋯ . From the definition one can obtain T^_0(x) = 1 , T^_1(x) = x , T^_n+2(x) = 2xT^_n+1(x)-T^_n(x) . It is easy to show that the Chebyshev polynomials satisfy the following properties: * Orthonormality. The Chebyshev polynomials are orthogonal with respect to the weight function ρ(x)=1/√(1-x_^2), i.e., ∫_-1^1 dx/√(1-x^2)T^_m(x) T^_n(x) = 0 for m≠ n , ∫_-1^1 dx/√(1-x^2)T_n^2(x) = { π for n=0 π/2 for n=1,2,3,⋯ . . * Completeness. Any function u(x) defined on [-1,1] can be expanded as u(x) = '∑_n=0^∞ a^_n T^_n(x) , a^_n=2/π∫_-1^1 dx/√(1-x^2)u(x)T^_n(x) , where ∑' denotes a sum whose first term is halved. * Roots and extrema. 
The Chebyshev polynomial of degree n has n+1 extrema and n roots in [-1,1] extrema: x^_j = cos(jπ/n) , j=0,1,⋯,n , roots: x̃^_j = cos(2j+1/2nπ) , j=0,1,⋯,n-1 . For the practical numerical calculation, the infinite sum in Eq. (<ref>) should be truncated at n=N, and the coefficients are evaluated at grid points <cit.> u(x)≈ u^_N(x) = ”∑_j=0^N b^_n T^_n(x) , b^_n = 2/N”∑_n=0^N u(x^_j) T^_n(x^_j) , where ∑” denotes a sum whose first and last terms are halved, and x^_j = cos(jπ/N) (for j=0,1,⋯,N) are extrema of the Chebyshev polynomial of degree N. Alternatively, one can also evaluate the coefficients at roots of the Chebyshev polynomials u(x)≈ũ^_N(x)= '∑_n=0^Nb̃^_n T^_n(x) , b̃^_n = 2/N+1∑_j=0^N u(x̃^_j) T^_n(x̃^_j) , where x̃^_j=cos[(2j+1)π/(2N+2)] (for j=0,1,⋯,N) are roots of of the Chebyshev polynomial of degree N+1. Then it follows that the interpolation functions u^_N(x) and ũ^_N(x) fit u(x) exactly at the grid points, i.e., u^_N(x^_j)=u(x^_j) and ũ^_N(x̃^_j)=u(x̃^_j). Moreover, it can be proved that the upper bounds of the residue functions turn out to be <cit.> |u(x)-u^_N(x)| ⩽ 2 ∑_n=N+1^∞|a^_n| , |u(x)-ũ^_N(x)| ⩽ 2 ∑_n=N+1^∞|a^_n| . This means the error of evaluating the coefficients at grid points can never exceed twice the error of computing the coefficients using the integral in Eq. (<ref>). The grid points in Eqs. (<ref>) and (<ref>) are known as extrema grid and roots grid, respectively. Both of them have been widely used in the Chebyshev spectral method <cit.>. In this work, we take the grid points to be extrema grid [see Eq. (<ref>)]. One could also compare the interpolation using the Chebyshev polynomials with other polynomials. First, recall that the general Lagrange interpolation of u(x) is given by L(x) = ∑_j=0^Nu^_j ℓ^_j(x) , where u^_j≡ u(x^_j) and ℓ^_j(x)=1/c^_j∏^N_k=0 k≠ j(x-x^_k) , c^_j=∏^N_k=0 k≠ j(x^_j-x^_k) . Then we have L(x^_j)=u(x^_j) (for j=0,1,⋯ N). The remainder of the Lagrange interpolation reads R(x)=u(x)-L(x)=u_^(N+1)(ζ)/(N+1)!P^_N+1(x) , P^_N+1(x)≡(x-x^_1)⋯(x-x^_N) , where u_^(N)(x) is the N-th derivative of u(x) and ζ∈ (-1,1). The question is: how to choose the grid points x_j so that we could have the smallest remainder? An intuitive answer is to look at the upper bound of the remainder, which turns out to be max|R(x)|⩽ max|u_^(N+1)(x)|/(N+1)! max|P^_N+1(x)| . It is not difficult to prove that max|P^_N+1(x)|⩾1/2^N_ max|T^_N+1(x)|=1/2^N_ . If P^_N+1(x) is the monic Chebyshev polynomial T^_N+1(x)/2^N, namely the grid points x_j are taken to be the roots of T^_N+1(x), then max|R(x)| has the minimum upper bound. Therefore, the Chebyshev polynomial is the “best choice" of the interpolation polynomial, in the sense that the remainder has a minimum upper bound. elsarticle-num
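Putting the pieces together, the Standard-Model case of the appendix above can be reproduced with a short script. The sketch below is purely illustrative and is not the released solver: it builds the Chebyshev differentiation matrix, imposes the boundary conditions by solving the shifted equations of motion only at the interior collocation points, uses a generic root finder starting from the linear ramp f = h = (1+x)/2 (convergence from this simple guess is not guaranteed for all parameter values), and evaluates the energy integral with the trapezoidal rule.

import numpy as np
from scipy.optimize import fsolve

def cheb(N):
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)
    c = np.where((j == 0) | (j == N), 2.0, 1.0) * (-1.0) ** j
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

def sm_sphaleron_energy(rho1=0.306, N=60, a=30.0):
    """SM sphaleron energy in units of 4*pi*v/g, from the shifted profile equations."""
    D, x = cheb(N)
    D2 = D @ D
    I = slice(1, N)                                    # interior nodes; fbar, hbar vanish at x = +-1

    def residual(u):
        fb, hb = np.zeros(N + 1), np.zeros(N + 1)
        fb[I], hb[I] = u[:N - 1], u[N - 1:]
        F, H = 2 * fb + 1 + x, 2 * hb + 1 + x          # shorthands (2 fbar + 1 + x), (2 hbar + 1 + x)
        r1 = (2 * (1 + x)**2 * (D2 @ fb)
              - F * (F - 2) * (F - 1)
              - a**2 / 16 * (1 + x)**2 * (F - 2) * H**2)
        r2 = ((1 + x)**2 * (D2 @ hb) + (1 + x) * (2 * (D @ hb) + 1)
              - 0.25 * (F - 2)**2 * H
              + a**2 * rho1 / 8 * (1 + x)**2 * H * (4 - H**2))
        return np.concatenate([r1[I], r2[I]])

    u = fsolve(residual, np.zeros(2 * (N - 1)))
    fb, hb = np.zeros(N + 1), np.zeros(N + 1)
    fb[I], hb[I] = u[:N - 1], u[N - 1:]
    f, h = fb + (1 + x) / 2, hb + (1 + x) / 2
    fp, hp = D @ f, D @ h

    xi, fi, hi, fpi, hpi = x[I], f[I], h[I], fp[I], hp[I]
    dens = (4 / a**2 * fpi**2
            + 8 / (a**2 * (1 + xi)**2) * fi**2 * (1 - fi)**2
            + (1 - fi)**2 * hi**2
            + 0.5 * (1 + xi)**2 * hpi**2
            + rho1 * a**2 / 4 * (1 + xi)**2 * (1 - hi**2)**2)
    s = np.argsort(xi)                                 # ascending x for the trapezoidal rule
    integral = np.sum(0.5 * (dens[s][1:] + dens[s][:-1]) * np.diff(xi[s]))
    return a * integral                                # E_sph / (4*pi*v/g)

print(sm_sphaleron_energy(0.306))                      # expected to lie close to the quoted 1.92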
http://arxiv.org/abs/2307.05409v1
20230711162319
3D detection of roof sections from a single satellite image and application to LOD2-building reconstruction
[ "Johann Lussange", "Mulin Yu", "Yuliya Tarabalka", "Florent Lafarge" ]
cs.CV
[ "cs.CV", "astro-ph.IM", "cs.AI" ]
3D detection of roof sections from a single satellite image and application to LOD2-building reconstruction Johann Lussange^1 [email protected] Mulin Yu^1 [email protected] Yuliya Tarabalka^2 [email protected] Florent Lafarge^1 [email protected] ^1 INRIA Sophia Antipolis Méditerrannée, 2004 route des Lucioles, 06902, Valbonne, France. ^2 LuxCarta Technology, 460 avenue de la Quiéra, voie K, bat. 119 B, 06370, Mouans-Sartoux, France. August 12, 2023 ====================================================================================================================================================================================================================================================================================================================================================================================================================== Reconstructing urban areas in 3D out of satellite raster images has been a long-standing and challenging goal of both academical and industrial research. The rare methods today achieving this objective at a Level Of Details 2 rely on procedural approaches based on geometry, and need stereo images and/or LIDAR data as input. We here propose a method for urban 3D reconstruction named KIBS(Keypoints Inference By Segmentation), which comprises two novel features: i- a full deep learning approach for the 3D detection of the roof sections, and ii- only one single (non-orthogonal) satellite raster image as model input. This is achieved in two steps: i- by a Mask R-CNN model performing a 2D segmentation of the buildings' roof sections, and after blending these latter segmented pixels within the RGB satellite raster image, ii- by another identical Mask R-CNN model inferring the heights-to-ground of the roof sections' corners via panoptic segmentation, unto full 3D reconstruction of the buildings and city. We demonstrate the potential of the KIBS method by reconstructing different urban areas in a few minutes, with a Jaccard index for the 2D segmentation of individual roof sections of 88.55% and 75.21% on our two data sets resp., and a height's mean error of such correctly segmented pixels for the 3D reconstruction of 1.60 m and 2.06 m on our two data sets resp., hence within the LOD2 precision range. § INTRODUCTION In the rapidly evolving era of smart cities and intelligent urbanization, digital city models have become crucial tools for urban planning, environmental analysis, and infrastructure management. Relying on satellite, aerial, and LIght Detection and Ranging (LIDAR) imagery, these models offer detailed three-dimensional representations of urban environments, and facilitate better-informed decision-making processes. At the same time, computer vision research in satellite and aerial imagery has made great strides in recent years. However, the unique challenges posed by satellite, aerial, and LIDAR imagery, such as variation in perspective, scale, lighting, atmospheric conditions, and data density, necessitate constant technical advancements. New algorithms and models have allowed for much more accurate and efficient image analysis, notably with the rise of deep learning methods. In a larger scope, these have shown much promise in automatically detecting and classifying objects in images <cit.>. This object detection and classification is especially relevant to the fields of semantic segmentation <cit.> and 3D reconstruction <cit.>. 
With the ever-increasing availability of data (notably LIDAR data <cit.>), new applications are constantly arising and allow for the automation of detection and correction of various types of distortion in images, such as those caused by atmospheric conditions <cit.> and the curvature of the Earth's surface <cit.>, or building <cit.> and vegetation occlusion <cit.>, etc. Also, new methods are being developed for automatically extracting information from raster images <cit.> such as land cover <cit.>, or topographical features <cit.>. In this Paper, we will first give a brief overview of the related work and other pertaining methods in Section <ref>. We will then proceed to explain our own proposed KIBS (Keypoints Inference By Segmentation) approach in Section <ref>, where we will describe the model's two-steps architecture and its data post-processing. In Section <ref>, we will then describe our experiment, with a presentation and discussion of the results of the method on our data set, together with details on the model generalisation and limitations. We also describe the training, validation and testing procedure for Mourmelon-le-Grand and Sissonne data sets, for both the 2D segmentation and the 3D reconstruction of the KIBS method in the Section <ref> of the Supplementary Material. § RELATED WORK The interest of using satellite data as input for reconstruction relies on the abundance and low costs of such data, compared to other sources such as LIDAR or aerial data, which face legal or technical constraints, flight authorisation issues over certain areas, etc. The Level Of Details (or LOD) is a usual metric that allows one to specify the desired precision of such reconstruction. As shown on Fig. <ref>, LOD1 denotes a building reconstruction precision looking like a rectangular shoe box, LOD2 denotes a reconstruction precision displaying the shape of the building's roof, while LOD3 denotes a reconstruction precision of objects' sizes below this range, such as windows, balconies, etc. A few studies have claimed to perform urban 3D reconstruction at a LOD1 <cit.>, or similar outcomes of ground surface reconstructions, but the portability of such methods have often remained limited due to their highly procedural architectures <cit.>. A method for 3D reconstruction at LOD2 is being patented <cit.>, but proposes a very different approach than the one-shot procedure presented here, by using pre-existing primes of rooftops. Most of such methods also rely on extra data sets that are not purely of satellite origin <cit.>, such as LIDAR data <cit.>, aerial photography <cit.>, etc. Others rely on data sets of pre-existing primes of rooftops <cit.>. We here review the latest advancements in 3D plane detection and reconstruction, which is an active area of research within computer vision, with substantial contributions made through the use of both single-image and multi-view images or point cloud data. Plane detection from single image Single-image plane detection and reconstruction have seen remarkable progress thanks to advancements in deep learning. Researchers have developed several methods to detect and reconstruct planes using just a single image. For example, in the PlaneFormers paper <cit.>, they utilize deep learning to develop an algorithm that can reconstruct 3D planes from sparse view planes. Another method, PlaneRCNN, was proposed by <cit.> that detects and reconstructs 3D planes from a single image using a Region Convolutional Neural Network. 
Similarly, <cit.> proposed a method for single-image piece-wise planar 3D reconstruction via associative embedding, and <cit.> introduced PlaneNet for piece-wise planar reconstruction from a single RGB image. Further, <cit.> employed convolutional neural networks for recovering 3D planes from a single image, highlighting the potential of deep learning in plane detection and reconstruction from single images. However these methods are designed to extract a few large planes in certain types of images, typically indoor scenes, and fails to detect the numerous small planes, e.g. hundred of thousand, contained in a satellite image representing a city. Plane detection from point clouds and multiview images Moving beyond single images, point clouds and multiview images offer additional information that can be leveraged for plane detection and reconstruction. Classical methods such as region growing <cit.> and RANSAC <cit.> have been widely used for this purpose. On the other hand, energy minimization methods <cit.> provide a more rigorous approach, leveraging the mathematical foundation of energy functions for plane detection. Scale-space exploration, as demonstrated by <cit.>, is another valuable technique that adapts to various scales for improved detection. Recently, deep learning-based methods have shown great promise, offering new opportunities for plane detection from point clouds and multiview images <cit.>. Unfortunately, such techniques cannot be used in our context where point clouds generated by MVS from satellite imagery have a very low precision on the spatial coordinates of points. Building reconstruction Roof reconstruction has been a challenging task in 3D building modeling, requiring special attention. Different data sources provide different opportunities and challenges for roof reconstruction. In this context, roof skeletonization techniques <cit.> and deep learning-based aerial image analysis methods <cit.> have shown promising results. In the case of LiDAR data, methods like <cit.> have proven effective. Generative models like Roof-GAN <cit.> have demonstrated the ability to learn and generate roof geometry and relations for residential houses. In another approach, <cit.> proposed neural procedural reconstruction for residential buildings, merging the power of deep learning with the procedural generation approach. Such approaches operate from aerial data and are not robust anymore from satellite data. Reconstructing urban environments in 3D out of satellite raster images has been a long-standing ambitious objective of both industrial and academic research <cit.>. One exciting application of plane detection and reconstruction is in building reconstruction from satellite images. Researchers have employed a variety of methods for this task. For instance, automated building extraction from satellite imagery is explored in <cit.>, and building detection from remotely sensed data is studied in <cit.>. In the context of LOD2 models, roof type classification plays a crucial role, as seen in <cit.>, which utilized PointNet for this purpose. For LOD1 models, beyond <cit.>, methods like Voronoi-based algorithms <cit.> and polygonalization of footprints <cit.> have shown effectiveness. Furthermore, advancements in dense mesh and Digital Surface Model (DSM) generation techniques, such as IMPLICITY, which uses deep implicit occupancy fields for city modeling from satellite images <cit.>, have further pushed the boundaries of what is possible in this domain. 
To the best of our knowledge, we are the first to attempt LOD2 building reconstruction by detecting and assembling planes directly from a single satellite image.
§ PROPOSED APPROACH §.§ Overview The KIBS procedure performs the 3D reconstruction of urban areas at a LOD2 with two new features compared to previous methods: i- a full deep learning solution for the 3D detection of the buildings' roof sections, and ii- an input consisting of only a single satellite raster image. In order to do this, the KIBS model follows a two-step procedure: first a 2D segmentation task identifies the roof sections, and then a second 3D reconstruction task infers those roof sections' corners with their height-to-ground (as a unique class). Such monocular or single-view 3D reconstruction approaches have recently been used in the general field of computer vision <cit.>; however, these methods were applied to simpler images (such as individual objects or indoor scenes), and applying them to complex raster data like satellite imagery of urban areas is a more challenging problem. Input The KIBS method is here trained on a data set of satellite raster images with a precision of 0.38 meter per pixel (see Fig. <ref> for a sample). The input of the first data set used for the training, validation, and testing of this method derives from one RGB satellite image of Mourmelon-le-Grand, France, of size 30564 × 26320 pixels, corresponding to a surface area of ∼ 73km^2. This raster image comes from Maxar's Worldview-3 satellite, and was acquired on the 13th of August 2020, with a satellite azimuth angle of 181.10^∘ and an elevation angle of 59.30^∘. The raster image is accompanied by a data set serving as ground truth for the urban 3D reconstruction, consisting of a hand-annotated shapefile of all individual roof sections' contours, together with their corners' heights above mean sea level. It is also accompanied by a Digital Terrain Model (DTM, i.e. an elevation map of the ground surface, without its urban or natural objects), courtesy of LuxCarta.
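Since the hand-annotated corners are given as altitudes above mean sea level while the model's classes encode heights above ground, the DTM is what allows passing from one to the other during ground-truth preparation. The snippet below is our own minimal illustration of such a lookup (pure NumPy, nearest-pixel sampling, hypothetical coordinate convention); the actual pipeline and its georeferencing details are in the released code.

import numpy as np

def height_above_ground(x, y, z_asl, dtm, dtm_origin, dtm_res):
    """Convert a roof corner's altitude above sea level into its height-to-ground class (meters).

    dtm        : 2D array of terrain elevations in meters ASL
    dtm_origin : (x0, y0) map coordinates of the DTM's upper-left pixel (hypothetical convention)
    dtm_res    : DTM pixel size in meters
    """
    col = int((x - dtm_origin[0]) / dtm_res)            # nearest-pixel lookup
    row = int((dtm_origin[1] - y) / dtm_res)            # image rows grow towards smaller y
    return max(1, int(round(z_asl - dtm[row, col])))    # discretized to the 1 m classes used later

# synthetic example: flat terrain at 95 m ASL, a corner annotated at 101.3 m ASL
dtm = np.full((100, 100), 95.0)
print(height_above_ground(1215.0, 4985.0, 101.3, dtm, dtm_origin=(1200.0, 5000.0), dtm_res=1.0))  # -> 6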
Once the KIBS method has been developed for Mourmelon-le-Grand, the model has been trained, validated and tested on a second, similar, data set in order to further confirm its validity, this time on the city of Sissonne, France, whose raster data is of size 19120 × 17420, corresponding to a surface area of ∼ 25km^2. This raster image also comes from Maxar's Worldview-3 satellite, and was acquired on the 4th of November 2020, with a satellite azimuth angle of 172.9^∘, elevation angle of 66.6^∘. It also comes with a DTM specifying the altitudes to sea-level of the terrain, and a hand-annotated shapefile of all individual roof sections' contours, together with their corners' altitudes to sea-level. Output Once trained, the first part of the KIBS model outputs a 2D segmentation of the roof sections, which is fed into a second part of the model employing panoptic segmentation in order to derive those roof section corners' height-to-ground, so as to compute their associated 3D planes coefficients, unto full building and urban area reconstruction. We finally use the Kinetic Shape Reconstruction (KSR) method developed in <cit.> in order to visualize it. A sketch of the whole KIBS procedure is shown in Fig. <ref>. Hypotheses The general working hypothesis of this research study is that it is possible to perform the 3D reconstruction of buildings at a LOD2, for a model taking as input only one single, non-orthogonal, satellite raster image with a resolution of 0.38 meter per pixel (see Fig. <ref> for a comparison). More specifically, within the scope of the KIBS method, our working hypothesis is that a deep learning approach can segment in 2D and reconstruct in 3D the roof sections of the buildings of an urban area with a LOD2, at this image resolution, and based on a single-shot satellite raster image. The fundamental intuition behind this hypothesis is that the non-orthogonality of the satellite raster image provides the deep learning algorithms with non-trivial information (e.g. buildings' walls' inclination, buildings' shadows, roof peak or ridge perspective, etc.) allowing them to infer the height-to-ground of the roof sections' corners with a precision within the bounds of the LOD2 requirement. §.§ 2D detection of roof sections The process for training data preprocessing for the Mask-RCNN model for 2D segmentation of roof lines involves several steps. Firstly, the initial 8687 × 9890 satellite image is segmented into individual 230 × 230 tiles, overlapping by a margin of 10 pixels. Subsequently, ground truth shapefile polygons delimiting roof sections are extracted from these tiles. Each tile then gets a set of corresponding black and white images with white pixels representing a unique roof section per image. Using the algorithm <cit.>, annotation files in PYCOCO format are created for these ground truth masks. After randomly shuffling the set of tiles and associated ground truth images, it is divided into three disjoint sets: training (60%), validation (20%), and testing (20%). These sets and their associated annotation files are then fed into a Mask-RCNN neural network named , a combination of a ResNet-50 model stacked with a Feature Pyramid Network (FPN). This model was chosen due to its robustness and ability to handle complex segmentation tasks. The training, which ran for six days on a Dell T630 GPU node with four GeForce GTX 1080 Ti GPUs, was monitored via TensorBoard to manage regularization issues. The trained network weights are available on the KIBS GitHub repository. 
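To make this preprocessing concrete, the overlapping tiling and the per-roof-section binary masks can be sketched as follows (Python with NumPy and Pillow; the function names and the simplified polygon handling are ours — the actual pipeline reads the ground-truth shapefile and then relies on pycococreator for the COCO-format annotations).

import numpy as np
from PIL import Image, ImageDraw

TILE, MARGIN = 230, 10

def tile_origins(height, width, tile=TILE, margin=MARGIN):
    """Upper-left corners of the tiles, adjacent tiles overlapping by `margin` pixels."""
    step = tile - margin
    ys = list(range(0, height - tile + 1, step))
    xs = list(range(0, width - tile + 1, step))
    if ys[-1] != height - tile: ys.append(height - tile)   # cover the bottom/right borders
    if xs[-1] != width - tile: xs.append(width - tile)
    return [(y, x) for y in ys for x in xs]

def roof_masks_for_tile(origin, polygons, tile=TILE):
    """One black-and-white mask (0/255) per roof-section polygon touching the tile.

    `polygons`: list of [(x, y), ...] vertex lists in full-raster pixel coordinates.
    The vertex-in-tile test is a coarse simplification of proper polygon clipping.
    """
    oy, ox = origin
    masks = []
    for poly in polygons:
        local = [(x - ox, y - oy) for (x, y) in poly]
        if not any(0 <= x < tile and 0 <= y < tile for (x, y) in local):
            continue
        img = Image.new("L", (tile, tile), 0)
        ImageDraw.Draw(img).polygon(local, fill=255)        # white pixels = this roof section
        masks.append(np.array(img))
    return masks

# usage on a synthetic raster with a single rectangular roof section
origins = tile_origins(9890, 8687)
masks = roof_masks_for_tile(origins[0], [[(40, 40), (120, 40), (120, 100), (40, 100)]])
print(len(origins), len(masks), masks[0].max())             # -> 1800 1 255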
More implementation details on the training procedure, as well as the training metrics are given in Section <ref> and <ref> of the Supplementary Material, respectively. The weights of the trained model, which represent the learned features, are available on the KIBS GitHub repository for further exploration and reproducibility of our results <cit.>. §.§ 3D roof corners extraction The 3D reconstruction training leverages a Mask-RCNN model, similar to the 2D segmentation process but geared towards panoptic segmentation. This involves marking roof section corners on the image output of the 2D segmentation and assigning unique class labels to these corners, representing their heights. After training, the 2D segmentation output is integrated with the original RGB raster image, improving the 3D reconstruction's efficiency in identifying roof corners. Class labels corresponding to specific heights are used in the Detectron2 framework, extendable to handle taller structures. Generating the training, validation, and testing sets follows a similar procedure to the 2D segmentation. Each blended raster image is linked with ground truth images representing roof corners, and this data is processed via pycococreator to create annotation files compatible with the Detectron2 framework. A Mask-RCNN model is trained to recognize roof corners and their heights. The training process, monitored online to manage regularization issues, leverages the same hardware as the 2D segmentation, with model weights available on the KIBS GitHub repository. More implementation details on the training procedure, as well as the training metrics are given in Section <ref> and <ref> of the Supplementary Material, respectively. §.§ Plane estimation and meshing As said, once at least three roof section's corners are inferred, and their height-to-ground estimated, one can easily geometrically derive the 3D plane coefficients of the associated roof section, and hence the height-to-ground of each pixel belonging to this roof section, unto full building and then city-wide 3D reconstruction. Now for a number N ⩾ 4 of segmented roof section corners, the algorithm proceeds to select three corners among these forming the largest triangle area via a basic Delaunay triangulation, so as to increase 3D reconstruction accuracy, as shown in Fig. <ref>. §.§ Implementation details We can cover the testing procedure of the KIBS method in five general steps. Firstly, the whole satellite raster image is split in a grid of 230 × 230 tile images, with a margin overlap of 10 pixels on each four sides of the image. Secondly, the aforementioned Mask-RCNN model trained for 2D segmentation is applied to each of these tile images so as to infer the roof sections 2D segmentation. Thirdly, these segmented pixels are blended within their associated raster tile image as blue pixels, with a value {0, 0, 200 } if belonging to the training data set, {0, 0, 210 } if belonging to the validation data set, and {0, 0, 220 } if belonging to the testing data set. Fourthly, the aforementioned Mask-RCNN model trained for 3D reconstruction is then applied to each of these blended tile images so as to infer the roof section corners as keypoints, with their own height-to-ground (as a class in meters, according to the LOD2 precision standards, i.e. 1 m, 2 m, 3 m, etc.). 
After some postprocessing, the output represents these roof section squares as red squares of 15 × 15 pixels where the red RGB channel is given the value 200 + z, where z ∈ℕ^∗ is the height-to-ground of the corner, as shown on Fig. <ref> for Mourmelon-le-Grand and <ref> for Sissonne. Fifthly, as already said, for N ⩾ 3, one can easily geometrically derive the 3D plane coefficients of the roof section, and hence the height-to-ground of each pixel belonging to this roof section, unto full building and then city-wide 3D reconstruction. The latter can then be visualised in 3D via the Kinetic Shape Reconstruction (KSR) method developed in <cit.> (see below). The details of the data postprocessing of the KIBS model are given in the section <ref> of the Supplementary material. We here simply sum up this procedure via Fig. <ref> as a general description. § EXPERIMENTS §.§ Qualitative results The results of the 2D segmentation of the roof sections for all data sets (training, validation, testing) are shown in Fig. <ref> for Mourmelon-le-Grand and Fig. <ref> for Sissonne. These figures provide a detailed visual comparison between the original satellite images and the output of the 2D segmentation part of the KIBS model, allowing to qualitatively assess the accuracy and precision of our model in identifying and segmenting the roof sections from the satellite images. One can thus see the model's capability to accurately perform 2D segmentation of urban satellite images, which is a crucial step towards achieving our ultimate goal of 3D urban reconstruction. The results of the 3D inference on all data sets (training, validation, testing) are shown in Fig. <ref> for Mourmelon-le-Grand and Fig. <ref> for Sissonne. The 3D inference results are represented via color-coded roof section corners, each color code being derived from a unique class corresponding to the discrete corner's height-to-ground in meters. This visual representation and panoptic segmentation allows us to qualitatively evaluate the model's ability to infer the 3D structure of the urban landscape from the 2D segmentation output. It is noteworthy that the model exhibits a high level of detail in the 3D inference, successfully capturing the complex architectural features and the varying heights of the buildings in both cities. The visualization of this 3D inference, scaled to DSM values, is displayed after the KSR reconstruction <cit.> in Fig. <ref> for Mourmelon-le-Grand and <ref> for Sissonne. This provides a more tangible and intuitive understanding of the model's output, effectively transforming the aforementioned panoptic segmentation into a 3D model of the urban landscape, not only for the roof structures but for the whole buildings underneath. §.§ Quantitative results The results of the KIBS model are shown in Tab. <ref> for Mourmelon-le-Grand and Tab. <ref> for Sissonne. The results of the 2D segmentation can be summed up through the Jaccard index, also called Intersection over Union (IoU), which is the percentage of the M accurately segmented pixels on the 2D map, with respect to the ground truth pixels. We obtain an IoU of 88.55 % for the testing set. The accuracy of the 3D inference can be summed up for these pixels that were correctly 2D segmented wrt. ground truth, through their heights mean accuracy, and mean square error. 
The heights' mean accuracy is defined as 100% minus the average absolute difference between the height ẑ_i of each correctly segmented pixel and the height z_i of its associated ground truth pixel, expressed as a percentage of the latter: 100 - Σ_i=1^M 100 |ẑ_i - z_i|/(z_i M). We find a heights' mean accuracy for the testing set of 74.85 % for Mourmelon-le-Grand, and 72.57 % for Sissonne. We also find a heights' mean absolute error for the testing set of 1.60 m for Mourmelon-le-Grand, and 2.06 m for Sissonne. One can further assess the 3D reconstruction efficiency via the mean square error, knowing that our data set has an average roof height of 6.36 m for Mourmelon-le-Grand, and 7.53 m for Sissonne. The heights' mean square error is the average squared difference between the heights of the correctly segmented pixels and the heights of their associated ground truth pixels: Σ_i=1^M (ẑ_i - z_i)^2/M. We thus find a heights' mean square error for the testing set of 2.35 m^2 for Mourmelon-le-Grand, and 7.41 m^2 for Sissonne. These two latter statistics show that the aim of urban 3D reconstruction at LOD2 is reached.
§.§ Performance Let s ∈ℕ^∗ be the number of pixels giving the (squared) raster tile images' size (e.g. for s=230, the raster tile images are of size s × s = 230 × 230), p ∈ℕ^∗ the number of pixels of the raster tile images' margin overlap, and q ∈ℕ^∗ the size in pixels of the segmented roof section corners (e.g. for q=15, the roof section corners are segmented as red squares of size q × q = 15 × 15). The KIBS model performance has been explored on the validation set through several combinations of these hyperparameters s, p, q, by visualizing the output results. The change in performance for different raster tile images' sizes s was explored, with values s = {150, 230, 300, 768}. We found the use of larger tile images as input to be a limiting factor on the number of roof section corners that could be detected by the second Mask-RCNN model (as shown in the Supplementary Material section <ref> by comparison with Fig. <ref>, where s=768, or with Fig. <ref>, where s=300). We thus found a better performance for smaller tile sizes, especially at s = 230 (calculated for values p=10 and q=15 only). Secondly, the raster tile images' margin overlap p was explored, with values p = {10, 150} pixels (calculated for values s=230 and q=15 only). We found a large margin overlap to cause intractable memory issues at run time, and hence selected p = 10. Thirdly, the size of the segmented roof section corners q was explored, with values q = {10, 15} (for values s=230 and p=10 only). As said, this hyperparameter has a great impact on the overall KIBS model performance, since too large squares may assign the height of a given corner to several others as well, while too small squares may produce false negatives by not overlapping their associated segmented roof sections at postprocessing. We thus found better performance for q = 15 (calculated for values s=230 and p=10 only).
§.§ Limitations As said, the core premise behind the KIBS model hypothesis is that the oblique perspective of the satellite raster image supplies the deep learning algorithms with valuable and complex information related to the roof corners' height-to-ground. This includes aspects such as the tilt of the buildings' walls, the shadows cast by the buildings, the perspective of the roof peak or ridge, and so on.
These elements collectively enable the algorithms to deduce the height-to-ground of the corners of the roof sections with a level of accuracy that meets the standards of the LOD2 requirement. This is an important feature of the KIBS prior pertaining to its generalization, because each data set used to train the 3D reconstruction part of the model has its own specific buildings inclinations (related to the raster' satellite viewing angle α), and its own specific shading of the buildings (related to the raster' solar zenith angle θ), as aforementioned for a our satellite data sets. The KIBS method trained on a data set with such angles α and θ, should hence only generalize to new raster sets taken with angle parameters lying in the neighborhoods of those of the training set, so that the variations in the model inference of the buildings' height-to-ground are negligible within the requirements of a LOD2 precision range. §.§ Baseline comparison Due to the uniqueness of the results of this study, the KIBS method faces a challenge in finding relevant methods for a useful baseline comparison. Other interesting research works like <cit.> (which does 3D urban reconstruction at LOD1), and <cit.> (which both use roof primes for the urban reconstruction) rely on third parties code which is not accessible. But a rigorous approach can be to use the 2D segmentation step of our KIBS approach, and then assign the segmented pixels' height-to-ground via another DSM, courtesy of LuxCarta, which is of LOD1. The resulting point cloud can then be approximated as roof sections unto 3D reconstruction, as shown on Fig. <ref> below, with a rather poor precision. §.§ Ablation study We have used different ablations of the model and studied its change in performance. Firstly, if one tries to infer the roof section corners' position and heights (x, y, z) directly through the raster satellite image (cf. images on the left of Fig. <ref> of section <ref> in the Supplementary Material), the result output shows a poor performance. Secondly, when the blending of the 2D segmentation output is performed on either the red or green channels of the RGB raster image, the results and model accuracy do not change much with our current approach (cf. images in the center of Fig. <ref> of section <ref> in the Supplementary Material). Thirdly, if one tries to improve the 2D location (x, y) of the roof section corners by an image input consisting in the direct raster satellite image, or in the binary masks of the roof sections (cf. images on the right of Fig. <ref> of section <ref> in the Supplementary Material), the output results are very unsatisfying. Fourthly, if one tries and infers the roof section corners' heights out of a blending of three stereo satellite pictures of the same geographical area, each taken with different satellite viewing angle and solar zenith angle, the output results show the poor performance of this approach (cf. Fig. <ref> of section <ref> in the Supplementary Material). Fifthly, other very different models and algorithms than the Mask-RCNN solution were tried and used in this research, both for the 2D segmentation part and the 3D reconstruction part with the work of <cit.>, but as can be shown on Fig. <ref> of section <ref> in the Supplementary Material, this approach failed completely. Sixthly, likewise, a RegNet architecture <cit.> of the Detectron2 model zoo () has been used instead of the Mask-RCNN blocks, but with poor results in our time-constrained hyperparameter space optimization procedure so far. 
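For reference, the evaluation metrics used in the experiments above (Jaccard index of the 2D segmentation, and the heights' mean accuracy, mean absolute error, and mean square error of the 3D inference) can be computed from pixel-wise outputs with a short routine. The sketch below is ours and only illustrates the definitions; the array names are hypothetical.

import numpy as np

def kibs_metrics(pred_mask, gt_mask, pred_height, gt_height):
    """pred_mask, gt_mask: boolean roof-section maps; *_height: per-pixel heights-to-ground [m]."""
    inter = np.logical_and(pred_mask, gt_mask)
    union = np.logical_or(pred_mask, gt_mask)
    iou = inter.sum() / union.sum()                            # Jaccard index (IoU) of the 2D segmentation
    zp, zg = pred_height[inter], gt_height[inter]              # correctly segmented pixels only
    mean_acc = 100.0 - np.mean(100.0 * np.abs(zp - zg) / zg)   # heights' mean accuracy [%]
    mae = np.mean(np.abs(zp - zg))                             # heights' mean absolute error [m]
    mse = np.mean((zp - zg) ** 2)                              # heights' mean square error [m^2]
    return iou, mean_acc, mae, mse

# synthetic example
rng = np.random.default_rng(0)
gt = rng.random((64, 64)) > 0.5
pred = gt.copy(); pred[:4] = ~pred[:4]                         # corrupt a few rows of the prediction
h_gt = np.full((64, 64), 6.0)
h_pred = h_gt + rng.normal(0.0, 1.5, size=(64, 64))
print(kibs_metrics(pred, gt, h_pred, h_gt))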
§ CONCLUSION We have thus presented a new method named KIBS for the urban 3D reconstruction of satellite images at a LOD2, with two central features: an end-to-end deep learning approach, and a model input based on a one-shot satellite raster image. The backbone of this deep learning model is a two-step method relying firstly on a Mask-RCNN algorithm performing the 2D segmentation of the individual roof sections, and secondly on another Mask-RCNN algorithm of exact same architecture using the latter output blended into the raster image in order to infer the roof section corners and their heights. The performance of this KIBS approach is displayed by a Jaccard index for the 2D segmentation of the roof sections of 88.55 % (Mourmelon-le-Grand) and 75.21% (Sissonne), and a heights' mean value for the roof section pixels correctly inferred by the 2D segmentation method of 1.60 m (Mourmelon-le-Grand) and 2.06 m (Sissonne). The KIBS method can thus perform 3D reconstruction of urban satellite raster images within the requirements of the LOD2 precision range. As such, the authors posit that the weight played by deep learning methods in satellite and aerial data ground reconstruction, whether via end-to-end approaches or in complement of more procedural approaches, will only increase in coming years. Bearing in mind the time-constrained optimization procedure of the method presented in this research work, the authors also posit that the performance results of the KIBS method may be easily enhanced at a little cost, notably by a further exploration of the hyperparameter space, and by use of deep learning architectures other than the Mask-RCNN neural networks here employed. Especially, a direct natural extension of the KIBS approach should study whether one single neural network comprising this two-step approach (2D segmentation followed by 3D reconstruction) into one backbone architecture could be designed. The authors posit this general monocular or single-shot approach to 3D inference could find many promising applications reaching far beyond the satellite and aerial imagery segments of computer vision, and pertain to all 3D inference methods of machine learning in the largest sense, with other potential applications in autonomous driving, drones engineering, environmental monitoring, and virtual reality. This said, our work also raises new questions, such as how to further improve the accuracy of 3D inference, how to handle taller structures, and how to apply our methods to other types of data. A crucial future prospect of the KIBS method pertains to its generalization, not only for very different data sets (e.g. dense city centers with tall buildings), but also wrt. to raster data sets of different satellite viewing angle α and solar zenith angle ω from those of our training set. Thanks to its short computational training and inference times, a suite of several KIBS algorithms could be trained on sets of data taken with different combinations of these two angle values' neighborhoods, so as to reach a practical generalisation threshold by modular learning, corresponding to the offers of satellite data vendors. § SUPPLEMENTARY MATERIAL §.§ Implementation details of the 2D segmentation part The training data preprocessing for the Mask-RCNN model performing the 2D segmentation of the roof lines first relies on first slicing the overall 8687 × 9890 satellite raster image into individual tiles of 230 × 230 individual raster images. 
These are cut to overlap each other on all four sides by a margin of 10 pixels, to improve the future reconstruction at inference level. Then, the ground truth shapefile of the polygons delimiting each roof section contours is extracted for each associated 230 × 230 tile raster image. For each such tile raster image, a set of 230 × 230 black and white images is generated for each roof section, where each white pixel belongs to one unique roof section per image, and all other pixels are set as black. Each roof section mask is given one same dummy class label at this stage. All these generated black and white images associated with each tile raster image are given a unique file name that allows a specific algorithm <cit.> to generate a .json file for these ground truth masks in the PYCOCO format <cit.>. The set of all such tile raster images, together with their associated ground truth images is then shuffled randomly according to a uniform distribution in order to build three disjoints sets: one for training (60% of the whole data set), one for validation (20% of the whole data set), and one for testing (20% of the whole data set). Via , a .json ground truth file associated with the training set is hence generated, and likewise for the validation and testing sets. All the tile raster images and these generated .json ground truth files are then given as input to train a Mask-RCNN artificial neural network <cit.> from the Detectron2 suite <cit.> named . This network consists in a backbone combination of a ResNet-50 model <cit.> stacked with a Feature Pyramid Network <cit.> (FPN), comprising standard convolution and fully-connected heads for mask and box prediction, respectively. It is pretrained with a 3x schedule, corresponding to about 37 COCO epochs. The results, presented in Section <ref>, are based on a six days training, on a Dell T630 GPU node of dual-Xeon E5-26xx with four GeForce GTX 1080 Ti GPUs cards, 3584 CUDA cores per card, and 11 GB of RAM capacity per card. The training metrics are shown in Fig. <ref>-<ref> of Section <ref> below, and were monitored online via TensorBoard <cit.> in order to limit regularization issues. The weights of the Mask-RCNN network trained for this 2D segmentation are available on the KIBS GitHub repository <cit.>. §.§ Implementation details of the 3D reconstruction part From a deep learning perspective, the 3D reconstruction training relies on a Mask-RCNN model of exact same architecture as for the 2D segmentation, but designed for panoptic segmentation, i.e. both pixel segmentation and class inference. In our case, the pixel segmentation here consists in the model drawing a 15 × 15 pixels square over each roof section corner of a raster image blended with the output of the 2D segmentation, and the class inference consists in giving each such corner a unique class label allowing to retrieve the corner's height-to-ground in meters. As said, after training, the output of the latter 2D segmentation is blended within the original associated RGB raster image, such that each segmented pixel (identifying a roof section) is given a value {0, 0, 200 } if belonging to an image from the training data set, {0, 0, 210 } if belonging to an image from the validation data set, and {0, 0, 220 } if belonging to an image from the testing data set. Roof sections on the raster images hence appear in blue color. 
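A minimal sketch of this blending step is given below (our own illustration; the roof-section masks are assumed to be boolean arrays produced by the first Mask-RCNN, and the blue-channel codes are those quoted above).

import numpy as np

SPLIT_BLUE = {"train": 200, "val": 210, "test": 220}      # blue-channel values used for the blending

def blend_tile(rgb_tile, roof_masks, split):
    """Paint every segmented roof-section pixel of a tile with the split-specific blue value."""
    out = rgb_tile.copy()
    union = np.zeros(rgb_tile.shape[:2], dtype=bool)
    for m in roof_masks:                                  # one boolean mask per predicted roof section
        union |= m.astype(bool)
    out[union] = (0, 0, SPLIT_BLUE[split])
    return out

# usage: one 20 x 20 segmented patch on a grey 230 x 230 tile, blended as a training tile
tile = np.full((230, 230, 3), 120, dtype=np.uint8)
mask = np.zeros((230, 230), dtype=bool); mask[50:70, 60:80] = True
print(blend_tile(tile, [mask], "train")[55, 65])          # -> [  0   0 200]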
Ablation studies (see below) show this method allows the 3D reconstruction algorithm to identify much more efficiently the roof sections' corners, than if the ground truth was associated with the original raster images only. In our code (for particular reasons related to the Detectron2 framework), class labels are: for a height of 1 m, for 2 m, for 3 m, …, for 19 m. This range of 19 possible different classes is due to the maximum corner's height in our particular data set not exceeding 19 m above ground, but one can extend the number of these classes/heights much more in the Detectron2 framework to cope with the potential taller building structures of other data sets. Hence, if our data set contained skyscrapers or buildings of greater heights, the KIBS method and training could remain similar by simply increasing the number of possible classes, and/or raising the height granularity above 1 m, and/or using non-linear graduations in the heights increments, etc. This said, the training, validation, and testing sets generation is done in a similar way as for the 2D segmentation: each aforementioned 230 × 230 blended raster image with a margin overlap of 10 pixels on all four sides, is associated with a set of ground truth images, each of them representing on a black background a square of 15 × 15 white pixels, in order to represent a roof corner on this image. Its class name (i.e. corner's height) is given in the image file name. This is likewise proper and fed to the pycococreator framework, in order to produce .json annotation files for this ground truth data in a PYCOCO format that is understandable to the Detectron2 framework. A same Mask-RCNN model () as before is thus trained on this training data set, so as to identify roof corners and their classes (i.e. heights). The results, presented in Section <ref>, are based on a six days training, also on the same hardware as before (four GeForce GTX 1080 Ti GPUs cards). The training metrics are shown in Fig. <ref>-<ref> of Section <ref> below, and can be monitored online in order to limit regularization issues. The learning weights of the Mask-RCNN model for this 3D reconstruction are available on the KIBS GitHub repository <cit.>. §.§ Training and validation metrics §.§ Ablation studies §.§ Data postprocessing There are six general comments one can make wrt. the data postprocessing of the KIBS method. We also refer the reader to Fig. <ref> for a general recap of this procedure. Firstly, one needs to beware the procedure sometimes fails to correctly infer one or several roof section corners. For a number N ⩾ 4 of segmented roof section corners, the algorithm proceeds to select three corners among these forming the largest triangle area by a Delaunay triangulation, so as to increase 3D reconstruction accuracy. For N=1 or N=2, the model considers for simplification purposes the roof section to be parallel to the ground, and at a height equal to that of this corner, or the average of these two corners, respectively. Finally, if N=0 and no corner is detected by the algorithm, the roof section is also considered to be parallel to the ground, but assigned a height equal to the mean of all roof corners heights of the training set (which in the case of our data set amounts to 6.11 m). Secondly, another basic postprocessing consists in “filing” the heights of the corners used for the roof section 3D reconstruction, based on the assumption that virtually no real roof section contains three corners of different heights. 
Let's assume these three corner heights z_1, z_2, z_3, in ascending order, are all unequal to each other: then, if z_2 < (z_1+z_3)/2, the algorithm hence sets z_1 and z_2 to the value of their average (z_1+z_2)/2; and if z_2 ⩾ (z_1+z_3)/2, the algorithm sets z_2 and z_3 to the value of their average (z_2+z_3)/2. Thirdly, the output of the this Mask-RCNN model for 3D reconstruction gives after some postprocessing (and some changes to the native Detectron2 code <cit.>), 15 × 15 pixels red squares representing the roof section corners, and the class of each of these corners (i.e. their height-to-ground in meters) is embedded in RGB format by assigning these pixels a value 200+z in the red channel, where z ∈ℕ^∗ is their height in meters (as shown in Fig. <ref> for Mourmelon-le-Grand and Fig. <ref> for Sissonne). This 3D reconstruction output of red squares over a black background is then blended over its associated 2D segmentation output (i.e. the blue roof sections over a black background). This blending process must be done cautiously for several reasons, and the KIBS code contains several postprocessing methods to ensure no data is lost or mismatched at this stage. The reasons are the following: i- in certain complex roof structures, some of these 15 × 15 corner squares can overlap other roof section segmentation pixels they don't belong to, and hence assign a wrong height to them; ii- some of these corner squares can sometimes be placed at inference sufficiently far away from the segmented roof section, so that no match is made and the whole roof section is ill-reconstructed; iii- the Delaunay triangulation will have to chose for each 15 × 15 pixels square only one pixel overlapping the segmented roof section rim, and hence a dedicated method must find and select this pixel among many others overlapping the plane. This data postprocessing pipeline of the 3D reconstruction output is shown in Fig. <ref>. Fourthly, the segmented roof sections need to be perfectly distinguished (i.e. pixel-separated) from each other at the 3D reconstruction stage, since each has its own 3D plane coefficients inferred by the model. Fifthly, when these individual blended tile raster images are put back together to form the large 8687 × 9890 original image corresponding to the full satellite view, some roof sections and corners may be found to intersect two or more of these former tile images. Hence, some parts of a given reconstructed building may spread over several former tile images, and thus belong to different data sets (training, validation, or testing), with different associated blue pixel values. A function from the scikit-image collection for image processing with right parameter can correct this by assigning one single value via the blue channel of each spread roof section (200, 210, or 220 for training, testing, or validation, respectively). This flood-fill is done on a first come, first served basis, with no particular priority from the training, validation, or testing set queues. Sixthly, the data postprocessing methods ultimately writes a text file containing for each line, each roof section points' 3D coordinates {x, y, z} and ID (0 for training set origin, 1 for validation set origin, 2 for testing set origin). This is fed to the KSR method reconstruction <cit.> of the whole city in 3D for visualisation. § ACKNOWLEDGEMENT We graciously thank LuxCarta for providing the satellite raster data with its hand-annotated ground truth. * abbrvnat
http://arxiv.org/abs/2307.03984v1
20230708141612
Optimizing Task Waiting Times in Dynamic Vehicle Routing
[ "Alexander Botros", "Barry Gilhuly", "Nils Wilde", "Armin Sadeghi", "Javier Alonso-Mora", "Stephen L. Smith" ]
cs.RO
[ "cs.RO", "cs.SY", "eess.SY", "68M20", "J.2" ]
http://arxiv.org/abs/2307.04296v1
20230710012648
K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment
[ "Jinbao Wang", "Guoyang Xie", "Yawen Huang", "Jiayi Lyu", "Feng Zheng", "Yefeng Zheng", "Yaochu Jin" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Journal of Class Files, Vol. 18, No. 9, September 2020 How to Use the IEEEtran Templates K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment Jinbao Wang^1, Member, IEEE, Guoyang Xie^1, Yawen Huang^1, Jiayi Lyu, Feng Zheng, Member, IEEE, Yefeng Zheng, Fellow, IEEE, and Yaochu Jin, Fellow, IEEE Jinbao Wang, Jiaqi Liu and Feng Zheng are with the Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China (e-mail: [email protected]; [email protected]; [email protected]) Guoyang Xie is with the Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China and is also with the Department of Computer Science, University of Surrey, Guildford GU2 7YX, United Kingdom (e-mail: [email protected]) Yawen Huang and Yefeng Zheng are with Tencent Jarvis Lab, Shenzhen 518040, China (e-mail: [email protected]; [email protected]). Jiayi Lyu is with the School of Engineering Science, University of Chinese Academy of Sciences, Beijing, China (e-mail: [email protected]) Yaochu Jin is with the Faculty of Technology, Bielefeld University, 33619 Bielefeld, Germany and also with the Department of Computer Science and Engineering, University of Surrey, Guildford GU2 7YX, United Kingdom (e-mail: [email protected]) ^1Contributed Equally. August 12, 2023 ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== The problem of how to assess cross-modality medical image synthesis has been largely unexplored. The most used measures like PSNR and SSIM focus on analyzing the structural features but neglect the crucial lesion location and fundamental k-space speciality of medical images. To overcome this problem, we propose a new metric K-CROSS to spur progress on this challenging problem. Specifically, K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location, together with a tumor encoder for representing features, such as texture details and brightness intensities. To further reflect the frequency-specific information from the magnetic resonance imaging principles, both k-space features and vision features are obtained and employed in our comprehensive encoders with a frequency reconstruction penalty. 
The structure-shared encoders are designed and constrained with a similarity loss to capture the intrinsic common structural information for both modalities. As a consequence, the features learned from lesion regions, k-space, and anatomical structures are all captured, which serve as our quality evaluators. We evaluate the performance by constructing a large-scale cross-modality neuroimaging perceptual similarity (NIRPS) dataset with 6,000 radiologist judgments. Extensive experiments demonstrate that the proposed method outperforms other metrics, especially in comparison with the radiologists on NIRPS. Medical image, quality assessment, synthesized neuroimages, k-space § INTRODUCTION PSNR <cit.>, SSIM <cit.>, and MAE <cit.> are the most commonly used evaluation metrics in cross-modality magnetic resonance imaging (MRI) synthesis works. However, these metrics are inappropriate to a certain degree, considering that they are based on natural images and naturally ignore the inherent properties of MRI data. In general, the quality of neuroimage can be assessed by the content (i.e., lesion region), frequency space, and structure details. Although MAE, PSNR and SSIM are effective in assessing image quality, they are ineffective as a neuroimage metric, because they only focus on the structural details in the pixel space. Therefore, it is important to find a new way to measure how good the cross-modality neuroimage synthesis is. Empirically, the content details of neuroimages, particularly the texture and brightness, are disregarded by either PSNR or SSIM. Instead, radiologists pay more attention to the lesion regions, since the usefulness of analyzing pathology and human cognitive functions. The purpose of K-CROSS is to fully reflect the lesion region by introducing a cross-modality neuroimage segmentation network which has already been trained to precisely forecast the tumor location. The prediction mask (i.e., tumor region) is fed into the proposed tumor encoder to extract features. The proposed tumor loss function improves the extracted feature to capture more essential texture details and brightness information. In Fig. <ref> (A), we can observe that the content of the synthesis neuroimage does not align with the target modality neuroimage. Though PSNR and SSIM scores are the highest for the synthesized ones, they only evaluate the structure details without taking the content into account. By contrast, K-CROSS is reliable in exploring neuroimaging perceptual similarity (NIRPS) for the synthesized results. Besides, PSNR and SSIM are unable to account for differences in the k-space between the synthesized images and the target modality data, whereas K-CROSS can. The fundamental difference between MRI and natural images, as seen from the standpoint of imaging principles, is the basis of MR image reconstruction. The Fourier transformation, often known as the "k-space" in MRI, is a mathematical concept that calculates various frequencies mixed into the received signal of all spins. It forms the basis for all image reconstruction in MRI. Therefore, we believe that the proposed metric can estimate the distance between MRIs in both k-space and pixel space. In k-space, the sophisticated K-CROSS encoder can capture the invariant modal-specific feature, where the frequency loss can be used to further enhance the complicated encoder. When a k-space shift occurs, as seen at the bottom of Fig. 
<ref> (B), K-CROSS is more stable in accordance with the radiologist’s score, which is able to measure the gap in k-space between the synthesized neuroimage and the corresponding ground truth. To constrain the structural features that are extracted by the shared structure encoder from both the source modality and the target modality, we set up a cross-modality similarity loss function, as the entire structure information between the source and the target modality neuroimaging data is very similar. PSNR and SSIM, on the other hand, only assess the input image, which limits their capacity to recognize the structural details that the source modality and the target modality share. Our contributions can be summarized as follows: * We propose a new metric, called K-CROSS, to evaluate the quality of the synthetic data based on all the structural information, k-space feature shift, and lesion area. This multidimensional quantification indication enables K-CROSS to achieve more precise results than other metrics that only consider natural images. * To properly verify the effectiveness of our K-CROSS, we construct a large-scale and multi-modal neuroimaging perceptual similarity (NIRPS) dataset, which includes 6,000 assessments from radiologists. * K-CROSS achieves highly competitive results based on the judgments from radiologists on NIRPS, which can be treated as a general evaluation metric for various purposes of medical image synthesis. The rest of this paper is organized as follows: Section <ref> presents a literature review on image quality assessment and GAN-based assessment methods. Section <ref> explains the proposed algorithm K-CROSS in detail. In addition, a large-scale multi-modal neuroimaging perceptual similarity (NIRPS) dataset is constructed in Section <ref>. Section <ref> presents comprehensive experimental evaluations while Section V draws the conclusion and limitation of the current work. § RELATED WORK §.§ Image Quality Assessment Image quality assessment (IQA) can be divided into two categories. One is fully referenced IQA, and the other is non-referenced IQA <cit.>. IQA with all references refers to estimating the quality of natural images with references: SSIM <cit.>, MS-SSIM <cit.> and FSIM <cit.>, focus more on image structure specifics. Specifically, FSIM builds up a novel feature similarity index according to the phase congruence and image gradient magnitude, while PSNR focuses on edge estimation for the synthesized images. Most of them <cit.> use low-level features for evaluation. LPIPS <cit.> is the first work that uses a high-level feature for fully referenced IQA in light of the popularity of deep learning. Estimating the synthesized image quality without a reference (ground truth) is known as non-referenced IQA <cit.>. RankIQA <cit.> is the mainstream for non-referenced IQA. Considering the limited size of IQA, Liu et al. propose a Siamese network to rank images and their distorted ones. The Siamese network's knowledge (ranking result) can be transferred to a conventional neural network, whose function is to assess the quality of a single image. Since K-CROSS requires a reference image for evaluation, it belongs to a fully referenced IQA. However, few public data in the medical imaging community could be used to train for the learning-based fully referenced IQA methods. The NIRPS dataset, the first extensive neuroimaging perceptual similarity dataset with radiologists' labels is constructed. 
As for fully referenced IQA methods, K-CROSS is, therefore, able to use the supervised training methods. §.§ GAN Assessment The existing sample-based methods <cit.> have been proposed to access GAN performance, like Kernel MMD <cit.>, Inception Score <cit.>, Mode Score <cit.> and FID <cit.>. The classical approach is to compare the log-likelhood of generative models. But this approach cannnot accurately indicate the quality of synthesized image. In other words, a model can achieve high likelihood, but low image quality, and conversely, low image quality, and conversely. As for Inception score <cit.>, it computes the KL divergence between the conditional class distribution and the marginal class distribution over the generated data. However, IS does not capture intra-class diversity, which is insensitive to the prior distribution over labels. Among them, the most popular metric is FID. Heusel et al. <cit.> use InceptionV3 <cit.> to extract the features from the real and synthetic neuroimaging data, and then compute the differences in the features between them. However, the majority of them are created in the pixel space and ignore the lesion region and k-space, which are the fundamental elements of MR image properties. In this regard, K-CROSS considers the underlying MR imaging principle as well as the difference between the neuroimages of the source and target modality. § PROPOSED METHOD §.§ Preliminary §.§.§ K-Space Representations The spatial frequencies of an MR picture are represented in k-space by a matrix of numbers. Despite MR images and k-space having the identical dimension, in practice, each point (k_x, k_y) in k-space represents the spatial frequency and phase information about each pixel in the MR image rather than corresponding to a specific pixel value. By contrast, every pixel in the MR image maps to a point in k-space. As a result, we transform MRI into k-space using the 2D discrete Fourier transform: F (u, v) = ∑_x=0^M-1∑_y=0^N-1 f(x, y) e^-i2π (ux/M + vy/N), where the MR image size is M × N, (x,y) is the MRI's pixel coordinate, (u,v) is its spatial coordinate in k-space, F(u,v) is its complex frequency value, and e and i stand for the Euler's number and the imaginary unit, respectively. We concentrate on the real and imaginary components of F(u,v). According to (<ref>), we rewrite F(u, v) as follows: F(u, v) = R(u, v) + I(u, v)i = a + bi, where the imaginary and real parts of F(u, v) are I(u, v) and R(u, v)= a, respectively. Furthermore, we introduce two key k-space concepts. Here, the amplitude can be defined as: | F(u, v) | = √(R(u,v)^2 + I(u,v)^2) = √(a^2 + b^2). The amplitude is a measure of how strongly a 2D wave reacts to an MR image. We typically visualize k-space using the amplitude. ∠ F(u,v) = arctan ( I(u,v)/R(u,v) ) = arctan ( b/a). The peak shit distance between two 2D sinusoidal waves of the same frequency is referred to as a phase. The phase is the second concept, which is defined in (<ref>). §.§.§ Complex Convolution The complex-valued convolution <cit.> is different from the real-valued convolution. Given a complex-valued convolution filter W = A + iB with real-valued matrices A and B. The operation is expressed as follows: W∗h = (A∗x - B∗y) + i(B∗x + A∗y). The visualization can be found in Fig. <ref>. 
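To illustrate the complex-valued convolution defined above, here is a minimal PyTorch sketch (our own illustration with hypothetical class and argument names, not the authors' implementation) that realizes W ∗ h = (A ∗ x − B ∗ y) + i(B ∗ x + A ∗ y) using two real-valued convolution layers applied to the real and imaginary parts of the input:

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution W*h = (A*x - B*y) + i(B*x + A*y),
    where A, B are the real/imaginary parts of the filter W and
    x, y are the real/imaginary parts of the input feature map."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv_A = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)  # real part of W
        self.conv_B = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)  # imaginary part of W

    def forward(self, x, y):
        # x: real part of the input, y: imaginary part of the input
        real = self.conv_A(x) - self.conv_B(y)
        imag = self.conv_B(x) + self.conv_A(y)
        return real, imag
```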
§.§.§ Complex Leaky RELU It applies separate Leaky RELUs <cit.> to the real part R(z) and the imaginary part Im(z) of a complex-valued input, and is defined as: ℂLeakyRELU = LRELU(R(z)) + i ∗ LRELU(Im(z)). §.§.§ Complex RELU It applies separate RELUs <cit.> to the real part R(z) and the imaginary part Im(z) of a complex-valued input, and is defined as: ℂRELU = RELU(R(z)) + i ∗ RELU(Im(z)). The visualization is given in Fig. <ref>. §.§.§ Complex Tanh Complex Tanh applies a separate tanh activation <cit.> to the real part R(z) and the imaginary part Im(z) of a complex-valued input, and is defined as: ℂTanh = Tanh(R(z)) + i ∗ Tanh(Im(z)). §.§.§ Complex BatchNorm As described in Cogswell et al. <cit.>, complex-valued batch normalization can be applied separately to the imaginary and real parts, which can reduce the risk of over-fitting. The operation is defined as: ℂBN = BN(R(z)) + i ∗ BN(Im(z)). §.§.§ Complex Upsample The complex-valued upsampling operation can likewise be applied separately to the real and imaginary parts, and is defined as: ℂUpsample = Upsample(R(z)) + i ∗ Upsample(Im(z)). §.§ Architecture §.§.§ Complex Branch The proposed complex encoder is a refined U-Net architecture implemented in k-space. Specifically, each downsampling block in the encoding stage consists of a complex convolution, ℂBN and ℂLeakyRELU. In the decoding phase, the complex convolution is replaced with a complex transposed convolution for up-sampling; each upsampling block contains a complex transposed convolution, ℂBN, and ℂRELU. In the final layer of the decoding stage, we apply ℂUpsample, a complex convolution and ℂTanh to reconstruct the images. Fig. <ref> shows the complex branch architecture in detail. §.§.§ Tumor and Structure Branch The well-trained nnU-Net <cit.> is used in K-CROSS as the cross-modality segmentation network, with its weights adjusted in the second stage of training. The modality-specific tumor encoder and decoder are kept private because the tumor information (texture details and brightness) differs between the source and target modalities. Except that they use ordinary convolution, batch norm, and Leaky RELU operators, the architecture details of the tumor encoder-decoder and the structure encoder-decoder are similar to those of Fig. <ref>. §.§.§ Score Network and Quality Prediction Regressor We construct a two-layer MLP for quality prediction, since the regressor simply maps the output vectors of the triple-path decoder to the labeled quality scores. The network is made up of two fully connected layers with 512-256 and 256-1 channels. The complex score network is composed of two complex fully connected layers. Its structure is similar to that shown in Fig. <ref>. However, the operator of the complex score network substitutes MLP layers for ℂRELU. The natural score network also has two fully connected layers. The channels of the complex score network and the natural score network are 512-256 and 256-1, respectively. The regressor is trained with the L_1 loss function. §.§ Loss Function §.§.§ Frequency Loss Directly measuring the distance between two complex vectors is difficult, and recent works focus mainly on the image amplitude. However, as Fig. <ref> illustrates, it is impossible to reconstruct the entire neuroimage without the phase information. Our solution is based on the focal frequency loss <cit.>, as shown in Fig. <ref>.
The hidden k-space of real MRI is F_r (u, v) = a_r + b_ri, and the corresponding k-space of synthesis MRI is F_f (u, v) = a_f + b_fi. To calculate their distance, we map F_r and F_f into the Euclidean space as v⃗_⃗r⃗ and v⃗_⃗f⃗. Specifically, the lengths of v⃗_⃗r⃗ and v⃗_⃗f⃗ are the amplitudes of F_r and F_f, respectively. And the angles θ_r and θ_f correspond to the phases of F_r and F_f, respectively. As a result, the distance between F_r and F_f can be converted to the distance between v⃗_⃗r⃗ and v⃗_⃗f⃗ (termed as d(v⃗_⃗r⃗, v⃗_⃗f⃗)), which is defined as follows: d(F_r, F_i) = d(v⃗_⃗r⃗, v⃗_⃗f⃗) = v⃗_⃗r⃗ - v⃗_⃗f⃗^2. The complex feature maps are extracted from each layer l of the encoder in the complex U-Net. Each pixel of the complex feature maps for each layer is denoted as m^l∈ℝ^H_l× W_l× C_l. Finally, we compute spatial and channel averages. As a result, the frequency loss for a complex U-Net is defined as follows: ℒ_freq(m^l_r, m^l_f) = ∑_l1/H_lW_l∑_h, wv⃗_⃗r⃗ - v⃗_⃗f⃗^2. §.§.§ Similarity Loss For the similarity loss ℒ_simi, K-CROSS uses the maximum mean discrepancy (MMD) loss <cit.> to measure it. That is, K-CROSS computes the squared population MMD between shared structure encoding of the source modality h^s_c and the target modality h^s_t using a biased statistic. We express this as: ℒ_simi = 1/(N^s)^2∑_i,j=0^N^sκ(h^s_c_i, h^s_c_j) - 2/N^sN^t∑_i,j=0^N^s, N^tκ(h^s_c_i, h^s_c_j) + 1/(N^t)^2∑_i,j=0^N^tκ(h^s_c_i, h^s_c_j), where κ is a linear combination of multiple RBF kernels: κ(x_i, x_j) = ∑_nη_nexp{ - 1/2σ x_i - x_j^2}, where σ_n is the standard deviation and η_n is the weight for n-th RBF kernel. The similarity loss function encourages the shared structure encoder to learn the invariant structure feature irrespective of the modality. §.§.§ Tumor Loss The tumor loss function consists of a Laplacian loss function ℒ_lap and the LPIPS loss function ℒ_lpips <cit.>. The Laplacian loss function is defined as : ℒ_lap = 𝔼 L(x) - L(x̂)^2_2. The LPIPS loss function is defined as: ℒ_lpips = ∑_kτ^k (ϕ^k(x) - ϕ^k(x̂)). So the tumor loss is described below: ℒ_tumor = λ_lapℒ_lap + λ_lpipsℒ_lpips. In (<ref>), ϕ(·) represents the feature extractor and τ(·) computes the feature score from the k-th layer of the backbone architecture. As a result, the LPIPS value is the average score of all backbone layers. To compute the LPIPS loss, we used a well-trained VGG <cit.> network. The Laplacian loss is used to identify the tumor region's high-frequency component. Due to LPIPS loss, the real tumor region and the reconstructed tumor region are more similar, which is more consistent with the radiologist's judgment. §.§.§ Structure Loss We employ L_1 loss function to extract meaningful semantic structure features, where the structure loss function is defined as: ℒ_stru = ||x - x̂||_1. §.§.§ Inconsistency Loss We adopt the MSE loss function to optimize the weights of complex score network n_c and natural score network n_nat, where the inconsistency loss is defined as: ℒ_inc = ||η_total - η_ra||_1, where the score of K-CROSS η_total is aligned with the scale of the radiologist's rating score η_ra via our proposed ranking algorithm. The details of the ranking algorithm can be found in Algorithm <ref>. §.§.§ Total Loss For the first stage, the loss function is described below: ℒ_first = λ_1ℒ_tumor + λ_2ℒ_stru + λ_3ℒ_freq + λ_4ℒ_sim. In the second stage, we optimize the parameters of the complex score network and the natural score network via ℒ_second = ℒ_inc. In this work, all weights of λ are set to 1. 
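As an illustration of the similarity loss, the sketch below (our own, with hypothetical names and kernel settings, not the authors' code) computes the biased multi-kernel RBF MMD between the shared structure encodings; note that the second and third sums pair the source encodings h^s_c with the target encodings h^s_t, as required for a discrepancy between the two modalities, and we use the common exp(−‖·‖²/2σ_n²) convention for the RBF kernels:

```python
import torch

def mmd_similarity(h_src, h_tgt, sigmas=(1.0, 2.0, 4.0), weights=None):
    """Biased multi-kernel MMD^2 between shared structure encodings.

    h_src: (N_s, d) encodings of the source modality
    h_tgt: (N_t, d) encodings of the target modality
    """
    weights = weights if weights is not None else [1.0 / len(sigmas)] * len(sigmas)

    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2                      # pairwise squared distances
        return sum(w * torch.exp(-d2 / (2.0 * s ** 2))   # mixture of RBF kernels
                   for w, s in zip(weights, sigmas))

    # biased statistic: mean over all pairs in each of the three terms
    return (kernel(h_src, h_src).mean()
            - 2.0 * kernel(h_src, h_tgt).mean()
            + kernel(h_tgt, h_tgt).mean())
```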
§.§ Algorithms The two stages of training K-CROSS are depicted in Fig <ref>. The details of two-stage training algorithms and the inference algorithm are described in Algorithm <ref>, Algorithm <ref> and Algorithm <ref>, respectively. For clarity, Table <ref> provides notation descriptions that occurred in our algorithms. § NIRPS DATASET AND RADIOLOGIST SCORE To comprehensively evaluate the synthesis performance, we construct a large-scale multi-modal neuroimaging perceptual similarity (NIRPS) dataset with 6,000 radiologist judgments. NIRPS dataset is composed of three subsets generated by CycleGAN <cit.>, MUNIT <cit.> and UNIT <cit.>. Each set contains 800 images generated by IXI and 1,200 images generated by BraTS. The IXI dataset includes two modalities, PD and T2, while the BraTS dataset includes three modalities, T1, T2, and FLAIR. In both the IXI and BraTS datasets, we randomly select 10 slices for training and collect the training results after each epoch of the model trained over 40 epochs. IXI <cit.> collects nearly 600 MR images from normal and healthy subjects at three hospitals. The MR image acquisition protocol for each subject includes T1, T2, PD-weighted images (PD), MRA images, and Diffusion-weighted images. In this paper, we only use T1 (581 cases), T2 (578 cases) and PD (578 cases) data to conduct our experiments, and select the paired data with the same ID from the three modes. The image has a non-uniform length on the z-axis with the size of 256 256 on the x-axis and y-axis. The IXI dataset is not divided into a training set and a test set. Therefore, we randomly split the whole data as the training set (80%) and the test set (20%). BraTS2021 <cit.> is designed for brain disease analysis and diagnosis. The dataset of multi-institutional and pre-operative MRI sequences is made publicly available, and it includes both training data (1251 cases) and validation data (219 cases). Each 3D volume is 155×240×240 in size and is imaged by four sequences: T1, T2, T1ce, and FLAIR. Training Data Processing To ensure data validity and diversity, we remove their skulls for each slice, by splitting the three-dimensional volume and choosing slices ranging from 50 to 80 on the z-axis. All images are cropped to 256 256 pixels in size. During the training stage, we choose a total of 10k images from the IXI and BraTS2021 datasets. §.§.§ Radiologist Score The NIRPS dataset contains radiologist scores (RS) resulting from manual annotation for each image. It is worth noting that the radiologist score RS includes 10 levels, i.e., RS ∈ [0, 0.1, 0.2, .., 0.9]. The higher RS value indicates better-synthesized neuroimage quality. The radiologists give scores in accordance with the level of diagnosis and therapy by using the synthesized neuroimage. Fig. <ref> gives the distribution result of RS. We can see that synthesized performance varies among the three models and the average RS is in the middle. §.§.§ How Radiologists Assess? We prepare the real paired modalities neuroimage dataset M in advance. M consists of source modalities M_s and target modalities M_t. We generate the synthesized target modality neuroimages M̂_t via feeding M_s into the generative model, i.e., CycleGAN, MUNIT and UNIT in NIPRS. Then radiologist gives the score for M̂_t according to the comparison with M̂_t. For instance, we have paired ground-truth modality datasets, T1 and T2. As shown in Fig. <ref>, we synthesized the fake T2 by feeding T1 into the MUNIT model. 
The radiologists make direct comparisons between fake T2 and real T2 and give their score for the synthesised quality of T2. §.§.§ How Radiologists Combine Their Evaluations? We hire 10 radiologists to evaluate the quality of each synthesized neuroimage. We remove the highest score and the lowest score from all radiologists. Then the final score is averaged by the rest score from 8 radiologists. § EXPERIMENT AND ABLATION STUDY §.§ K-CROSS vs Other Metrics Table <ref> illustrates the inconsistency between metrics and human evaluations of several datasets and generative models, with the highest performance shown in red. The calculation method for inconsistency value is given in Algorithm <ref>. We evaluate K-CROSS on datasets created by CycleGAN, MUNIT, and UNIT. The first column indicates various IQA methods. The second column indicates which datasets were used to train the K-CROSS model, including IXI or BraTS. From Table <ref>, our proposed K-CROSS is more compatible with the assessments of radiologists. Note that the IXI dataset is a healthy person dataset. There is no lesion for each neuroimage. So K-CROSS only use the tumor branch and complex branch to assess the quality of neuroimage. The details are described in Section <ref>. §.§ K-Space Importance Table <ref> records the ablation study of individual branches (the complex branch, tumor branch and structure branch) on various datasets. For instance, when we conduct the ablation study of the complex branch, K-CROSS remove the tumor branch and structure branch in the inference phase. In other words, K-CROSS only obtain η_complex score. It applies the same setting for the other branches. It can be clearly observed that the complex branch obtain the highest score among the three branches. It strongly indicates the importance of k-space, which reflects the inherent properties of magnetic resonance imaging principles. The second best is the tumor branch. It also verifies the effectiveness of the tumor branch for the lesion disease dataset. §.§ Metrics for Healthy Person Table <ref> shows K-CROSS performance that surpasses the mainstream IQA methods on the IXI healthy-person dataset. As for assessing the synthesised neuroimage of healthy persons, K-CROSS removes the tumor branch in the inference phase. Because there are no lesions on healthy person datasets. It means that K-CROSS only combines η_complex and the score of the structure encoder as the final score. From Table <ref>, it can be obviously observed that K-CROSS complex branch score η_complex (blue value) has surpassed the other IQA methods, which identify the importance of k-space for MRI of healthy persons. Thus, K-CROSS still be able to serve as the metric for the synthesized quality of healthy person's neuroimage. §.§ Segmentation Network Effect Table <ref> shows K-CROSS remains stable performance even using different state-of-the-art medical segmentation models. Note that the parameters of the pre-trained segmentation network are frozen during the training phase. The first column denotes the segmentation method. We calculate the variance score of the K-CROSS value for CycleGAN, MUNIT and UNIT by using different segmentation backbone models. We find that the variance of K-CROSS performance is tiny (0.2%, 0.3%, and 0.2%). Hence, the performance of K-CROSS is not affected by the segmentation model. §.§ General Metric? Overcoming Domain Gap The purpose of this paper is to demonstrate that K-CROSS is capable of serving as the standard measure for MRI datasets. 
We conduct extensive experiments and the results are given in Table <ref>, Table <ref>, and Table <ref> to demonstrate that K-CROSS is not affected by dataset domain gap and the generative model. The training dataset is the BraTS dataset, and the test dataset is IXI. As described in Section <ref>, we remove the tumor branch score, when K-CROSS evaluates the quality of neuroimage in healthy cases. We also observe that K-CROSS averagely surpasses DIST and LPIP (SOTA for natural image) by 7.8% and 16.5%, respectively, which proves that K-CROSS is built upon the basis of MRI principle instead of only on the natural image level. From this ablation study, we demonstrate that the performance of K-CROSS (ℒ_stru+ℒ_freq) is stable across several MRI datasets, with the potential to serve as a generic measure for evaluating the quality of the synthesized MRI. § CONCLUSION In this paper, we proposed a new metric K-CROSS for assessing the performance of the synthesized medical images, which is built on the magnetic resonance imaging principle. To improve the capability of reconstruction during training K-CROSS, a complex U-Net was developed. As for training a learning-based full IQA metric, we further constructed a large-scale multi-modal neuroimaging perceptual similarity (NIRPS) dataset. Experimental results indicate that K-CROSS is a useful indicator for evaluating the quality of the generated medical data. Limitation and Negative Society Impact Our method heavily relies on deep learning-based techniques but without directly injecting the knowledge of radiologists into K-CROSS. In the future, K-CROSS need to combine causal inference methods to enhance interpretability. § ACKNOWLEDGMENT This work is partially supported by the National Key R&D Program of China (Grant NO. 2022YFF1202903) and the National Natural Science Foundation of China (Grant NO. 62122035, 61972188, and 62206122). Y. Jin is supported by an Alexander von Humboldt Professorship for AI endowed by the German Federal Ministry of Education and Research. ieee_fullname
http://arxiv.org/abs/2307.04134v1
20230709092249
Stimulated Brillouin scattering at 1 nm-1 wavevector by extreme ultraviolet transient gratings
[ "Danny Fainozzi", "Laura Foglia", "Riccardo Mincigrucci", "Nupur N. Khatu", "Ettore Paltanin", "Claudio Masciovecchio", "Filippo Bencivenga" ]
physics.optics
[ "physics.optics", "cond-mat.mes-hall" ]
APS/123-QED [email protected] Elettra-Sincrotrone Trieste, SS 14 km 163,5 in AREA Science Park, 34149, Trieste, Italy. Elettra-Sincrotrone Trieste, SS 14 km 163,5 in AREA Science Park, 34149, Trieste, Italy. Department of Molecular Sciences and Nanosystems, Ca’ Foscari University of Venice, Venice, Italy. European XFEL, Holzkoppel 4, 22869 Schenefeld, Germany Elettra-Sincrotrone Trieste, SS 14 km 163,5 in AREA Science Park, 34149, Trieste, Italy. Elettra-Sincrotrone Trieste, SS 14 km 163,5 in AREA Science Park, 34149, Trieste, Italy. Elettra-Sincrotrone Trieste, SS 14 km 163,5 in AREA Science Park, 34149, Trieste, Italy. Elettra-Sincrotrone Trieste, SS 14 km 163,5 in AREA Science Park, 34149, Trieste, Italy. We crossed two femtosecond extreme ultraviolet (EUV) pulses in a β -Ga_2O_3 (001) single crystal to create transient gratings (TG) of light intensity with sub-100 nm spatial periodicity. The EUV TG excitation launches phonon modes, whose dynamics were revealed via the backward diffraction of a third, time-delayed, EUV probe pulse. In addition to the modes typically observed in this kind of experiment, the phase-matching condition imposed by the TG, combined with the sharp penetration depth of the EUV excitation pulses, permitted to generate and detect phonons with a wavevector tangibly larger (≈ 1 nm^-1) than the EUV TG one, via stimulated Brillouin back-scattering (SBBS) of the EUV probe. While SBBS of an optical probe was reported in previous EUV TG experiments, the extension of SBBS to short wavelength radiation can be used as a contact-less experimental tool for filling the gap between the wavevector range accessible through inelastic hard X-ray and thermal neutron scattering techniques, and the one accessible through Brillouin scattering of visible and UV light. Stimulated Brillouin scattering at 1 nm^-1 wavevector by extreme ultraviolet transient gratings Filippo Bencivenga August 12, 2023 ================================================================================================ Studying thermal and vibrational dynamics in nanoscale materials is critical for advancing the technological applications of faster, more efficient and more compact nanoelectronic devices, such as smartphone and computer chips, as well as for thermal barrier coatings <cit.>, heat-assisted magnetic recording <cit.>, nano-enhanced photovoltaics and thermoelectric energy conversion, to name a few. To achieve this, layer upon layer of very thin films are often used, with impurities added to tailor their function <cit.>. However, the complex structure of these materials makes it challenging to predict and characterise their thermoelastic properties.  Material properties such as elasticity, thermal conductivity and heat capacity are mostly determined by collective lattice dynamics that exhibit strong length-scale dependencies, which can drastically differ when the spatial dimensions reduce from macroscopic to microscopic scales, i.e., to sizes comparable with the characteristic length scales of nanostructures. Over the years, an obstacle to the full description of thermoelastic responses in the 10s of nm length-scale was given by the lack of experimental techniques capable of accessing such range <cit.> without the requirement of modifying or physically touching the sample. This inherently introduces limitations in the experiment design and complicates data interpretation. 
Collective lattice dynamics in condensed matter at wavevector q > 1 nm^-1 can be measured by inelastic scattering of hard X-ray and thermal neutron, while Brillouin scattering and optical transient grating (TG) can be used for q < 0.1 nm^-1. The intermediate q = 0.1-1 nm^-1 is hardly accessible, despite efforts to expand the capabilities of Brillouin spectroscopy in the UV range <cit.> and for improving the performance of X-ray spectrometers <cit.>. In addition, these spectroscopic methods are inherently limited by the instrumental resolution when measuring narrow lines, i.e. long dynamics. This limitation does not affect time-domain techniques, such as picosecond ultrasonics and time-domain thermoreflectance. In these techniques, metal films or other nanostructures are fabricated on the sample for transducing an ultrafast optical excitation in a short wavelength thermoelastic perturbation <cit.>. However, this intrinsically modifies the sample under investigation. The advent of free-electron laser (FEL) sources has recently permitted the usage of extreme ultraviolet (EUV) pulses for extending the TG approach to shorter wavelengths, i.e. in the 10-100 nm range, enabling the excitation and probing of nanoscale thermoelasticity in a contact-less fashion <cit.>. The EUV TG approach has been pioneered at the FERMI FEL (Trieste, Italy) with the dedicated endstation TIMER <cit.>, capable of incisively and selectively studying bulk and surface phonons <cit.>, thermal transport kinetics <cit.> and magnetic dynamics <cit.>. In this paper, we exploit EUV TG to probe acoustic phonons in β-Ga_2O_3. In particular, by taking advantage of the phase-matching conditions imposed by the nanoscale EUV TG, we demonstrated the possibility to detect stimulated Brillouin back-scattering (SBBS) from an EUV pulse at 13.3 nm wavelength. This enabled us to probe the dynamics of phonon modes with a wavelength as short as ≈ 6 nm. The employed sample was an Mg-doped β-Ga_2O_3 (001)-oriented bulk crystal with monoclinic structure (space group C2/m), obtained from the Czochralski method at the Leibniz-Institut für Kristallzüchtung <cit.>. The excellent surface quality and well-known elastic parameters made this sample adapted for the present EUV TG experiment. TG is a third-order non-linear optical technique (four-wave-mixing), wherein two pulses of equal wavelength λ (referred to as pumps) are temporally and spatially overlapped on the sample at a crossing angle of 2θ. The interference between these two pulses, assuming parallel polarization of the beams, induces a spatial modulation in the intensity of light. This modulation exhibits a periodicity Λ_TG=λ/(2sinθ); see Figs. <ref>a)-<ref>b). Such a patterned excitation acts as a transient diffraction grating for a third variably-delayed pulse (probe), with wavelength λ_pr, giving rise to a fourth pulse: the diffracted beam (signal). The experiment was performed at the TIMER beamline at the FERMI FEL, which is described in detail elsewhere <cit.>. Two time-coincident ≈ 60 fs (FWHM) EUV pulses were crossed on a crystalline β-Ga_2O_3 (001) sample at the angle 2θ=27.6^∘ (set with 2% accuracy), generating a transient grating in the [100] direction. Two values of λ were used: 39.9 nm and 26.6 nm, resulting in corresponding grating periods of Λ_TG≈ 84 nm and ≈ 56 nm, respectively. In the following, we will refer to the 39.9 nm and 26.6 nm pump-related quantities with the superscript ^39 and ^26, respectively. 
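As a quick numerical check of the transient-grating geometry (a back-of-the-envelope sketch of our own, not part of the original analysis), the relation Λ_TG = λ/(2 sin θ) with a crossing angle 2θ = 27.6° reproduces the quoted grating periods, and q_TG = 2π/Λ_TG gives the corresponding in-plane grating wavevectors:

```python
import numpy as np

theta = np.radians(27.6 / 2)             # half the crossing angle
for lam in (39.9, 26.6):                 # pump wavelengths in nm
    period = lam / (2 * np.sin(theta))   # TG period, nm
    q_tg = 2 * np.pi / period            # grating wavevector, nm^-1
    print(f"lambda = {lam} nm -> Lambda_TG ~ {period:.0f} nm, q_TG ~ {q_tg:.3f} nm^-1")
# -> Lambda_TG ~ 84 nm and 56 nm; q_TG ~ 0.075 and 0.113 nm^-1.
```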
The probe pulse (≈ 40 fs FWHM) impinged on the sample with an angle of 4.6^∘, and λ_pr=13.3 nm (hereafter denoted as ^13). The backwards-diffracted signal beam was collected by a EUV mirror and detected by a CCD camera, as outlined in <cit.>. The beamline is designed to satisfy the TG phase matching conditions at the Bragg angle (i.e. θ_i = θ_o = sin^-1(λ_pr/2Λ_TG); being θ_i and θ_o the incidence and diffraction angles of the probe beam, respectively) for λ=3λ_pr. However, since the excitation light is absorbed in a subsurface layer shorter than Λ_TG (the absorption lengths of the pumps are: L_abs^39∼ 12.9 nm and L_abs^26∼ 15.9 nm), phase matching conditions are relaxed. In this case only the wavevector component parallel to the sample surface (q_TG^39=2π/Λ^39_TG≈ 0.075 nm^-1, and q_TG^26≈ 0.113 nm^-1) is well-defined <cit.>, while the component perpendicular to the surface (q_z) results in a broad spectrum; see Fig. <ref>. Therefore, acoustic waves with a well-defined wavevector equal to q_TG (parallel to the surface) are launched. In contrast, waves with a broad spectrum in q_z are generated along the z-direction. However, as shown in Ref <cit.>, only two values of q_z satisfy the TG phase-matching conditions i.e., q_z=0, which yields a signal in the forward direction, and q_z=2k√(1-q_TG^2/4k^2) yielding a back-scattered signal, that encodes the dynamics of SBBS modes. Here, k=2π n /λ_pr is the wavevector of the probe in the medium, where n is the refractive index at λ_pr. Thus, the modulus of the acoustic wavevector for the SBBS signal is: q_SBBS=√(q_z^2+q_TG^2)=2k=4π n/λ_pr which is independent of q_TG and is collinear with the backward diffracted signal from the TG. Therefore, under the current experimental conditions, the combination of the sharp penetration depth of the EUV TG pump in the material and the short wavelength EUV probe, enables the excitation and detection of phonons with q as large as q_SBBS≈ 1 nm^-1 (Fig. <ref>b). To further illustrate the excitation mechanism, Fig. <ref>c displays the EUV TG generated on the sample in the 26.6/13.3 configuration plotted against the (x,z) coordinates, taking into account the finite value of L^26_abs. We note that the modulation along x extends in a much larger range, comparable with the width (FWHM_x≈ 100s of μm) of the excitation pulses. This is what we usually call TG. Additionally, there is a steep gradient along z. Such gradient launches acoustic waves in a broad range of Δ q (roughly extending up to ∼ 2π /L^pump_abs). This is represented in Fig. <ref>e as the Fourier transform (FT) along z of the EUV TG intensity profile shown in Fig. <ref>d. In an excitation scheme relying on a single EUV pump, there is no capability to selectively choose a specific phonon wavevector along the z-axis. However, in the current scenario, the phase matching condition imposed by the EUV TG (see Eq.<ref>) selects a specific wavevector phonon with q_SBBS∼ 1 nm^-1 from the wide range of available phonons This is illustrated by the vertical segment in Fig. <ref>e. We detected the EUV TG signal by varying the time delay (Δ t) between the EUV TG excitation and the probe pulse. Measurements were conducted at both long timescales (Figs. <ref>a and <ref>e) and short timescales (Figs. <ref>c and <ref>g). As expected, at long timescales the overall signal is characterized by a slow decay, which can be attributed to the thermal relaxation of the EUV TG, modulated by phonon oscillations <cit.>. After a few oscillations, these modulations become highly regular. 
For larger Δ t values, when the slow relaxation is decayed, double-frequency oscillations become visible, indicating the long-living nature of this dominant mode <cit.>. Conversely, the irregular shape of the initial oscillations suggests the presence of additional dynamics that damps out after some 10s of ps. The EUV TG data obtained at short timescales (Figs. <ref>c and <ref>g) were sampled with finer steps and exhibit modulations at significantly higher frequencies. These higher-frequency modulations are compatible with the previously mentioned mixing between the SBBS signal and the backward diffracted signal from the EUV TG. In order to quantitatively describe the waveforms at both long (blue line in Fig. <ref>a and <ref>e) and short timescale (black line in Fig. <ref>c and <ref>g) an initial fitting procedure was conducted using Eq. <ref>: I(t) = |1/2[1+erf(Δ t/σ)] · A e^-Δ t/τ|^2, where the erf function accounts for a sudden rise of the signal (with σ representing the width of the rise), followed by an exponential decay with a time constant τ. Subsequently, FTs were computed on the differences between the measured traces and their respective exponential fits. The obtained results are illustrated in Figs. <ref>b, <ref>d, <ref>f and <ref>h. The FTs of the long timescale waveforms present a well-defined mode and its second harmonic, plus a weaker and spectrally broader feature. All frequencies in these FTs vary proportionally to q_TG, as depicted in Figs. <ref>b) and <ref>f). The presence of this broad feature confirms the existence of a damped mode, which predominantly affects the initial portion of the waveform, as already evident from the raw data. To comprehensively describe the signal, the complete fitting procedure incorporated these two vibrational modes, specifically a damped sinusoidal term and an undamped sinusoidal term: I(t) = |1/2[1+erf(Δ t/σ)] ·[A e^-Δ t/τ + - A_SAWsin(2π ν_SAW Δ t + ϕ_SAW) + - A_LAsin(2π ν_LA Δ t + ϕ_LA) e^-Δ t/τ_LA]|^2 The resulting best-fit results are reported as black lines in Figs. <ref>a and <ref>e. All parameters and errors mentioned further below have been obtained using Eq. <ref>. The values obtained from the preliminary fitting of the EUV TG signal with Eq. <ref> and from the FTs were used as an initial guess for fitting the data with Eq. <ref>. The results concerning the oscillation frequencies are shown in Fig. <ref>a. The undamped mode is compatible with a Surface Acoustic Wave (SAW), which exhibits a linear dispersion relation as a function of q_TG. From the slope of such liner dispersion a value for the sound velocity of c_SAW^[100] = 3.15 ± 0.01 km/s is obtained. This value is close to the estimated velocity of 3.24 km/s, as evaluated by using the transverse acoustic (TA) phonon velocity c_TA^[100]=3.57 km/s <cit.> and the Poisson's ratio ν_p=0.2 <cit.> of β-Ga_2O_3 [100], through the relation c_SAW≈ c_TA· (0.862+0.14ν_p)/(1+ν_p) <cit.>. SAW modes represent long-lived coherent surface displacements characterized by mechanical energy confined to the surface. In the employed backward diffraction geometry, these modes are expected to be the dominant contribution to the EUV TG signal, as observed experimentally. The damped mode also presents a liner dispersion with a velocity c_LA^[100]=5.97 ± 0.14, which is similar to the expected value (6.18 km/s) for longitudinal acoustic (LA) phonons <cit.>. 
Such marginal deviations between the expected and observed velocities may arise from factors such as slight misalignment of the sample relative to the [100] crystallographic direction, sample heating caused by the FEL, or the 10^^∘ tilt in the (x,y)-plane, necessary for collecting the backward diffracted signal <cit.>. Surface-skimming LA modes and, more in general, bulk waves are expected in these types of TG experiments <cit.>, although they do not contribute significantly in the employed geometry and are often disregarded. Furthermore, at these q values bulk modes are not expected to show tangible damping in the probed Δ t range. However, EUV TG data indicate a quite fast decay time, i.e.: τ^39_LA = 22.5 ± 1.5 ps and τ^26_LA = 17.6 ± 1.5 ps, which is compatible with the broad feature observed in the FT (see Fig. <ref>b and <ref>f). The finite decay time can be explained by the fact that we are observing a thin region below the surface, with thickness ≈ L_abs^13∼ 26.3 nm < Λ_TG, and the excitation intensity steeply varies along the sample depth. Consequently, LA modes are strongly influenced by the surface and manifest as leaky waves, such as surface-skimming longitudinal waves, which rapidly transfer mechanical energy away from the subsurface region toward the bulk. The FTs of the short timescale waveforms exhibit two peaks (Figs. <ref>d and <ref>h) located at considerably higher frequencies compared to SAW and LA modes. Furthermore, these peaks do not show dispersion vs q_TG, as shown in Fig. <ref>b. This behaviour is indeed expected from the SBBS of the EUV probe, since the phonon wavevector is given by q_SBBS and in this specific case the dependence on q_TG can be neglected (see Eq. <ref>). The absence of dispersion vs q_TG of phonon modes detected via SBBS does not imply that they do not exhibit dispersion; rather, it indicates that the changes in q_TG allowed under the specific experimental conditions were not sufficient to significantly alter q_SBBS. A more effective approach to modifying q_SBBS would be to vary λ_pr, as in this case, q_SBBS∝λ_pr^-1 (see Eq. <ref>). On the other hand, the observed frequencies (ν_SBBS, as extracted from the FT) match with the ones expected by considering the sound velocities of TA (c_SAW^[001]=4.01 km/s) and LA (c_LA^[001]=7.55 km/s) modes along the relevant crystallographic direction <cit.>; see Fig. <ref>b. Indeed, the LA mode detected via SBBS propagates along q_SBBS, which means with a small tilt angle (ϕ^39=4.8^∘ and ϕ^26=7.2^∘) with respect to q_z, i.e., essentially towards the bulk of the sample ([001]). This is a different crystallographic direction with respect to the leaky LA mode detected at long timescales (see Figs. <ref>a and <ref>e) which essentially propagates beneath the surface ([100]) with wavevector q_TG≪ q_SBBS. However, since the employed setup did not allow precisely selecting crystallographic directions, such modes have to be regarded as quasi-LA and quasi-TA. It is worth mentioning that Brilluoin back-scattering from quasi-TA modes can be observed in monoclinic crystals, exhibiting signal amplitudes (in the optical regime) comparable to those from quasi-LA modes <cit.>. However, while EUV Brillouin scattering reasonably relies on the same selection rules as in the optical regime, the signal amplitude may differ due to potential wavelength-dependent variations in the photoelastic constants. 
Most likely, the modes associated with larger density variations provide stronger signals, as the EUV refractive index (far from core-hole resonances) primarily depends on density <cit.>. Nevertheless, further experiments beyond the scope of this study are required to investigate these aspects. The combination of the sharp penetration depth of EUV excitation pulses and the phase-matching conditions imposed by the EUV TG permitted the detection of stimulated backscattered Brillouin oscillations with a wavevector as large as ≈ 1 nm^-1. This wavevector range overlaps with the lower limit of the wavevector range covered by inelastic scattering of hard X-ray and thermal neutrons. In this case, the limitations on the q_SBBS and the SBBS signal come from the wavelength of the probe, rather than from the EUV TG periodicity (see Eq. <ref>). This limit can be straightforwardly overcome by using a shorter probe wavelength, that can be envisioned extending all the way to the X-ray spectral range <cit.>. This would provide a longer penetration depth and an increased range in q_SBBS. Furthermore, the described approach also allowed for the detection of high-frequency surface acoustic waves and longitudinal acoustic phonons propagating below the surface, without the need for nanofabrication and in a broad range of materials. In fact, unlike optical laser excitation, EUV photons are highly absorbed by any materials. The current setup at FERMI already makes it possible to conduct transient grating measurements at grating periods as short as 24 nm <cit.>, and a further extension down to approximately 10 nm is feasible, pushing the SAW frequency close to the THz region and q_SBBS above 1 nm^-1. The authors thank Z. Galazka from Leibniz-Institut für Kristallzüchtung for providing the β-Ga_2O_3 (001) sample and Alexei Maznev (MIT, Boston) for useful discussions. E. P. acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860553.
http://arxiv.org/abs/2307.04035v1
20230708191401
A novel framework for Shot number minimization in Quantum Variational Algorithms
[ "Seyed Sajad Kahani", "Amin Nobakhti" ]
quant-ph
[ "quant-ph" ]
High Fidelity 3D Hand Shape Reconstruction via Scalable Graph Frequency Decomposition Tianyu Luan^1 Yuanhao Zhai^1 Jingjing Meng^1 Zhong Li^2 Zhang Chen^2 Yi Xu^2 Junsong Yuan^1 ^1State University of New York at Buffalo        ^2OPPO US Research Center, InnoPeak Technology, Inc. {tianyulu,yzhai6,jmeng2,jsyuan}@buffalo.edu {zhong.li,zhang.chen,yi.xu}@oppo.com =================================================================================================================================================================================================================================================================================================== Variational Quantum Algorithms (VQAs) have gained significant attention as a potential solution for various quantum computing applications in the near term. However, implementing these algorithms on quantum devices often necessitates a substantial number of measurements, resulting in time-consuming and resource-intensive processes. This paper presents a generalized framework for optimization algorithms aiming to reduce the number of shot evaluations in VQAs. The proposed framework combines an estimator and an optimizer. We investigate two specific case studies within this framework. In the first case, we pair a sample mean estimator with a simulated annealing optimizer, while in the second case, we combine a recursive estimator with a gradient descent optimizer. In both instances, we demonstrate that our proposed approach yields notable performance enhancements compared to conventional methods. § INTRODUCTION Variational Quantum Algorithms <cit.> have emerged as a promising solution for near-term applications of quantum computers. These versatile algorithms offer the capability to tackle a diverse range of complex problems, including but not limited to quantum chemistry <cit.>, combinatorial optimization <cit.>, and machine learning <cit.>. Despite their potential for near-term applications, variational algorithms often require a large number of measurements. This makes implementation of those algorithms on quantum devices extremely time and resource-intensive <cit.>, even when performed on shallow and low-width circuits. Various research efforts have sought to employ optimizers to reduce the computational burden of VQAs. These include application of both existing and novel optimization techniques <cit.>. Such approaches are related to well studied and rich literature on optimization of noisy functions in various fields such as signal processing and control theory (see for example <cit.> and <cit.>). Sweke et al.<cit.> introduced a quantum stochastic gradient descent optimizer that relies on a gradient estimator with a limited number of shots. They proved that with some simplifying assumptions this approach will converge to the optimal values. However, the convergence rate is dependent on the error of the estimator. In another study, Polloreno et al.<cit.> studied the robustness of a double simulated annealing optimizer against inherent quantum noise, even when only a few shots are available and the noise is noticeable. Another approach to solve this problem has been to employ a nested optimization framework in which a high-level optimizer is used to improve the performance of a low-level optimizer by tuning its parameters. For example, Tamiya et al.<cit.> employed Bayesian optimization on stochastic measurement results to determine the optimal step size through a line search. 
Inspired by stochastic gradient descent, this method incorporates an adaptive shot technique to reduce the number of measurements required during the line search. Similarly, Mueller et al.<cit.> proposed a technique to identify a suitable initial value set using Gaussian Processes. Subsequently, they utilized ImFil as the optimizer in their approach. In this work we propose a generalized framework for optimization algorithms which seek to reduce shot-number evaluations in VQAs. The key performance improving novelty in our approach are two fold. First, devising a framework to incorporate powerful estimation techniques to achieve near-true parameter estimates with much fewer data samples. Secondly, by utilizing the sensitivity analysis of the optimizers, it will be assured that the error level of estimators (and the number of shots as a result) are suitably chosen. This is made possible by breaking the problem into two separate estimation and optimization problems, and deriving theoretical results on the sufficient number of shot. We explore two specific case studies within this framework. For the first case, a sample mean estimator is paired with a simulated annealing optimizer, and in the second case, a recursive estimator is paired with a gradient descent optimizer. The remainder of the paper is organized as follows; In section <ref> background material, including quantum variational circuits, and estimation theory are presented. In section <ref> we develop the proposed error control strategy and discuss the resulting optimization framework. In section <ref> we present two case studies together with numerical results. Finally, in section <ref>, we conclude our work. § BASIC CONCEPTS §.§ Quantum Variational Algorithms 𝒞 ℝ In theory of quantum variational algorithms, the expected value of an observable O over a state, generated by applying the parameterized quantum circuit U(*θ) on the initial state |0⟩ is a required data. This value is used by cost function ∈^m to be minimized with respect to the parameter space *θ. Accordingly, the class of algorithms such as VQE, QAOA and QNN, can be formulated as <cit.>, *θ^* = min_*θ∈^m( U(*θ)^† O U(*θ)0 ). Specific details of these algorithms are available in <cit.>. Here we would like to focus on the underlying operation of these algorithms. Let, f^U, O(*θ) = U(*θ)^† O U(*θ)0, in which U and O may be omitted when discussion is not related to the specific choice of U and O. One of the simplest and widely used parameter-shift rules to compute the derivatives of f is given in Lemma <ref>. i [Parameter-shift rule <cit.>] under the circumstance that each the dependence of f to each parameter (like *θ_k) is in the form of e^*θ_k P_k where P_k is a Pauli operator, we have, ∂_k f(*θ) = f(*θ + e_k π / 2) - f(*θ - e_k π / 2)/2. Variable ∂_k is θ_k and e_k is the vector with 1 in the k-th position and 0 elsewhere. Lemma <ref> is not only useful in calculating the derivative of f, it can also be used to bound higher derivatives of f as shown in Lemma <ref>. For any *θ∈^m, we have, Hess f_2 ≤ mO_2. From the definition we know that f < O_2∀*θ∈^m. For any i and j there always exist some values of *θ_1, *θ_2, *θ_3, *θ_4 for which, Hess f_ij = f(*θ_1) - f(*θ_2) - f(*θ_3) + f(*θ_4)/4≤O_2. Accordingly, Hess f_2 ≤ mO_2. §.§ Estimation and Error Analysis Var MSE Bias Contrary to the simple definition of f^U, O, evaluating such an expected value at each sample point may involve measurements with respect to ℓ multiple bases. 
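Before turning to the measurement decomposition, a brief illustration of the parameter-shift rule stated above may be helpful. The sketch uses the exact expectation f(θ) = ⟨0|R_x(θ)^† Z R_x(θ)|0⟩ = cos θ (the single-qubit benchmark introduced later in the paper) in place of a shot-based estimate; this exact evaluation is an assumption made purely for illustration, since on hardware f would be replaced by the estimator f̂ discussed next.

```python
import numpy as np

def f(theta):
    # Exact expectation <0| Rx(theta)^dag Z Rx(theta) |0> = cos(theta);
    # on a device this value would be a shot-based estimate f_hat.
    return np.cos(theta)

def parameter_shift_grad(f, theta):
    # Parameter-shift rule: df/dtheta = [f(theta + pi/2) - f(theta - pi/2)] / 2
    return 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))

theta = 0.3
print(parameter_shift_grad(f, theta))   # ~ -0.2955
print(-np.sin(theta))                   # analytic derivative, identical up to rounding
```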
Accordingly, the observable O will be decomposed to ℓ observables, each of which is diagonal in a different basis, such as, O = ∑_j=1^ℓ V^†_j D_j V_j. For each ℓ, it is necessary to perform r_j repetitive measurements on a quantum circuit. The lth (out of r_j) measurement outcome will be considered as a sample from a random variable χ_j, l∼ X(UV_j, D_j, *θ). We know that 𝔼[χ_j,l] = f^UV_j, D_j(*θ) and this is the reason we typically define an estimator f^U, O(*θ) as follows. A sample mean estimator for f is defined as, f̂^U, O(*θ) = ∑_j=1^ℓ1/r_j∑_l = 1^r_jχ_j, l. And for any of ∂_k fs, ∂̂_k f^U, O(*θ) = ∑_j=1^ℓ1/2r_j+∑_l = 1^r_j+χ_j+, l - 1/2r_j+∑_l = 1^r_j-χ_j-, l. where χ_j+, l∼ X(UV_j, D_j, *θ + e_i π / 2) and χ_j-, l∼ X(UV_j, D_j, *θ - e_i π / 2). The performance of such an estimator can be bounded with the aid of the Hoeffding's inequality. The inequality provides confidence intervals of the estimators of bounded random variables. 𝔼 [Hoeffding's inequality <cit.>] For n random variables ξ_1, ξ_2, …, ξ_n with a_i ≤ξ_i ≤ b_i for all i, and any t > 0, we have, (∑_i=1^n ξ_i - ∑_i=1^n [ξ_i]≥ t) ≤ 2e^-2t^2/∑_i=1^n (b_i - a_i)^2. Based on this, the following bounds are obtained for the MSE (mean square error) and confidence interval (CI) of the sample mean estimator. [Sample mean estimator bounds] By defining, ϵ_f = ∑_j=1^ℓD_j^2_2/r_j, and, ϵ_∂_k f = ∑_j=1^ℓD_j^2_2/4(1/r_j+ + 1/r_j-). When ŝ is f̂^U, O or ∂̂_k f^U, O, it can be respectively bounded by ϵ_f and ϵ_∂_k f for any *θ and κ > 0 as follows, [ ŝ(*θ)] ≤ϵ, (ŝ(*θ) - s(*θ) > κ√(ϵ)) ≤ 2e^-κ^2/2. To prove the bounds for f, we start by setting ξs in Hoeffding's inequality to χ_j,l/r_j for different j and ls. They are bounded to -D_j/r_j≤χ_j,l/r_j≤D_j/r_j, it can thus be shown that, (f̂(*θ) - f(*θ) > t) ≤ 2e^-2t^2/4ϵ_f. It is now only required to replace t with κ√(ϵ_f). From Popoviciu's inequality <cit.> it is evident that [ξ_i] ≤b_i - a_i/4 which is used for the MSE of bounded random variables. The same results hold for the partial derivatives, if we set ξs to χ_j±,l/2r_j± for different j and l and + and - signs. § MAIN RESULTS §.§ Error Control Strategy As mentioned in the introduction, a key performance improving novelty of our work is the means to control the error level, as well as the number of shots. This will be possible by connecting the number of shots to the error level of any estimator, using the problem below. Contrary to the normal estimators that often use a constant number of shots without any further analysis, we intend to find a sufficient value for r_js such that the resulting estimation error is bounded by a specified amount. [Sufficient Number of Shots] Given an estimator ŝ, find the values of r_js which satisfy the following constraints, [ŝ] ≤ E_s. For the sample mean estimator discussed previously, solving Problem <ref>, for f^U, O and ∂_k f^U, O is equivalent to the following optimisation problems, r_j∈ℕargmin∑_j=1^ℓ r_j s. t. [f̂] ≤ E_f. r_j±∈ℕargmin∑_j=1^ℓ r_j+ + r_j- s. t. [∂̂_k f] ≤ E_∂_k f. Optimization problems <ref> and <ref> can be approximately solved using Algorithm <ref>. This algorithm solves the optimisations by relaxing MSE values to the bounds ϵ_f and ϵ_∂_k f defined in Theorem <ref> and limiting r_js and r_j±s to have real values. We can easily verify the algorithm by replacing the values using the formulas in Theorem <ref> and deduce that the algorithm not only bounds the MSE but also provides a CI for the values. 
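The relaxed allocation problem admits a simple closed form: minimizing the total number of shots Σ_j r_j subject to ε_f = Σ_j ‖D_j‖_2^2 / r_j ≤ E_f gives, by a Lagrange-multiplier argument, r_j = ‖D_j‖_2 (Σ_i ‖D_i‖_2) / E_f, rounded up to integers. The sketch below implements this closed-form relaxation as one possible realization of the referenced algorithm; the paper's algorithm is not reproduced verbatim here and may differ in its details.

```python
import math

def sufficient_shots(d_norms, error_bound):
    """
    Relaxed shot allocation for the sample-mean estimator of f.

    d_norms     : spectral norms ||D_j||_2 of the diagonal observables
    error_bound : target bound E_f on eps_f = sum_j ||D_j||_2^2 / r_j

    Returns integer shot counts r_j guaranteeing sum_j ||D_j||_2^2 / r_j <= E_f.
    """
    total = sum(d_norms)
    # Continuous optimum r_j = ||D_j||_2 * sum_i ||D_i||_2 / E_f, rounded up.
    return [max(1, math.ceil(d * total / error_bound)) for d in d_norms]

# Example: three measurement bases with different operator norms.
d_norms = [1.0, 0.5, 2.0]
r = sufficient_shots(d_norms, error_bound=1e-3)
eps_f = sum(d ** 2 / r_j for d, r_j in zip(d_norms, r))
print(r, eps_f)   # eps_f <= 1e-3 by construction
```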
§.§ Optimizing Agent Regardless of technical detail, the function of all variational algorithms can be considered as that of agent which interacts with a quantum computer as shown in Figure <ref>. Such a high level conceptualization permits development of a unified framework for the evaluation of f, ∂_k f and higher derivatives. Most general purpose optimizers will not aim to control the number of shots which is often taken as a constant during the optimization. There have been attempts to develop adaptive algorithms such as <cit.> but the scope of their application is limited. Any optimizing agent will ultimately utilize available data by calculating a set of estimators. Statistically, it is possible to reduce the number of estimators to a sufficient set of estimators. For most typical optimizer, those estimates will be limited to f̂^U, O(θ_i) and ∂̂_k f^U, O(θ_i), where f^U, O is the function that is being optimized. However, by application of sufficient shot problem proposed earlier, it is possible to control the optimization error, instead of the number of shots. In our view this is a more natural way of looking at the problem. In such an improved strategy, the optimizer is provided with the errors E_f and E_∂_k f instead of r_j, and solves for f̂, ∂̂_k f instead of χ_j, l. This is illustrated in Figure <ref>. For the sake of simplicity we shall henceforth refer to f^U, O(θ_i) and ∂_k f^U, O(θ_i) as f_i and ∂_k f_i respectively. Moreover, this strategy can also be extended to the sample mean estimator f̂_i and ∂̂_̂k̂f_i, defined in Definition <ref>. In the proposed framework the main problem is broken down into two separate problems. These are, * An optimization problem of uncertain values, with a sensitivity analysis * An estimation problem, with the question of sufficient shots for the estimator. In the proposed framework one is not limited to the sample mean estimator defined in Definition <ref> and can make use of any static or dynamic estimator. Dynamic estimators will also have an internal states which is shown by a gray arrow in Figure <ref>. We will demonstrate the profound effectiveness of this approach by introducing a few examples of estimators and optimizers in the following section. For the sake of illustrating the methodology we shall make use of existing standard and rather simple optimization and estimation techniques. Evidently the eventual obtainable performance improvements can be much greater by a well matched and individually powerful optimizer and estimator. § CASE STUDIES §.§ Example I: Error-Aware Simulated Annealing A simple simulated annealing algorithm is a stochastic process that starts from a random point in the search space and iteratively moves to a new point with a transition probability P based on the values and temperature T_i at step i. In order to introduce the uncertainty, we only need to redefine the transition probability P̂ based on the estimator as follows, P̂(*θ_i+1 | *θ_i) = 1 if f̂_i+1 < f̂_i e^-f̂_i+1 - f̂_i/T_i otherwise. Then, the sensitivity can be analyzed as follows. In order to maintain an accuracy for P̂(*θ_i+1 | *θ_i) we seek, [D_KL(P ∥P̂)] ≤η, where D_KL is the Kullback-Leibler divergence. We know that this equation will hold if, [logP(*θ_i+1 | *θ_i)/P̂(*θ_i+1 | *θ_i)] ≤η ∀*θ_i+1. 
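A minimal sketch of this error-aware acceptance step is given below. The variance budget passed to the estimator anticipates the condition Var[f̂_{i+1}] ≤ η²T_i²/2 obtained from the bound derived next; the objective (the cos θ benchmark), the Gaussian proposal, the cooling schedule, and the simulated noisy estimator are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_f(theta, var_budget):
    """Placeholder for a shot-based estimate of f(theta) whose variance is kept
    below var_budget (e.g., via a sufficient-shots allocation). Here we simply
    simulate an unbiased noisy evaluation of cos(theta)."""
    return np.cos(theta) + rng.normal(scale=np.sqrt(var_budget))

def sa_step(theta, T, eta, step_size=0.3):
    """One error-aware simulated-annealing move using the transition rule P_hat."""
    var_budget = (eta * T) ** 2 / 2.0   # anticipates Var[f_hat] <= eta^2 T^2 / 2
    proposal = theta + rng.normal(scale=step_size)
    f_cur = estimate_f(theta, var_budget)
    f_new = estimate_f(proposal, var_budget)
    if f_new < f_cur or rng.random() < np.exp(-(f_new - f_cur) / T):
        return proposal
    return theta

theta, T, eta = 0.1, 1.0, 0.2
for _ in range(200):
    theta = sa_step(theta, T, eta)
    T *= 0.98                            # monotonically decreasing temperature
print(theta, np.cos(theta))              # theta should drift toward the minimum at pi
```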
The RHS could be bounded using [x - [x]] ≤√([x]) and the independence of f̂_i+1 and f̂_i and by assuming a monotonically decreasing temperature T_i+1 < T_i, [log P(*θ_i+1 | *θ_i) - logP̂(*θ_i+1 | *θ_i)] ≤1/T_i[f̂_i+1 - f̂_i - f_i+1 + f_i], ≤1/T_i√([f̂_i+1 - f̂_i]), ≤1/T_i√([f̂_i+1] + [f̂_i]) . Note that the estimators should be unbiased, otherwise the equation above will not hold. Finally we will introduce the condition below, that is sufficient for the equation above and furthermore to bound KL divergence by η, [f_i+1] ≤η^2 T^2_i/2. This is a more efficient condition for the estimator in comparison to the simply asking [f_i+1] ≤ E. In order to compare the performance of the simulated annealing with and without the sensitivity analysis, we conducted three experiments as follows, * Simple Optimizer (1): A simulated annealing optimizer with the condition [f_i+1] ≤ E with a high value for E. * Simple Optimizer (2): A simulated annealing optimizer with the condition [f_i+1] ≤ E with a low value for E. * Error-Aware Optimizer: A simulated annealing optimizer with Equation <ref> as the condition. For experimental studies, consider the benchmark problem defined in <ref>. [Benchmark problem] Assume a variational task with one qubit and U(θ) = R_x(θ) and O = Z with 𝒞 = I, which implies ℓ = 1 and m = 1. Also C(θ) = R_x^†(θ) Z R_x^†(θ)0 could be simplified further into cosθ. We start with an ensemble of θs near 0 and compare the distribution of the exact value of the function f through the optimization (with respect to the number of shots conducted) for each optimizer. The results are shown in Figure <ref>. To more clearly highlight the difference between the distributions, we have also plotted the distribution of data points after 7000 shots for each optimizer in Figure <ref>. Note that the error bound for different optimizers as a function of the number of shots is shown in Figure <ref> which is just a visualisation of condition <ref>. The results show that the error-aware simulated annealing is able to find a better solution with less number of shots. §.§ Example II: Recursive Estimator for Gradient Descent To illustrate the flexibility of the framework with respect to the choice of estimators and optimizers, in this section we perform experiments with a standard gradient descent algorithm and a novel recursive estimator for the function and its derivative. The proposed recursive estimator works on the assumption that the distance between two function evaluations required by the optimizer at two consecutive iterations is not great. That is, the function (and possibly its gradient) at a point *θ_i and its next evaluation at *θ_i+1 doesn't differ drastically from *θ_i. This assumption allows the update rule of the optimizer to be written in the form *θ_i+1 = *θ_i + δ*θ_i where δ*θ_i is a vector with bounded norm. The proposed recursive estimation methodology is formally defined in Definition <ref>. f̂^*_i = α_i(f̂^*_i-1 + δ*θ_i-1·f^*_i-1) + (1 - α_i) f̂_i ∂̂_̂k̂f^*_i = β_i ∂̂_̂k̂f^*_i-1 + (1 - β_i) ∂̂_̂k̂f_i , f̂^*_0 = f̂_0 ∂̂_̂k̂f^*_0 = ∂̂_̂k̂f_0 Note that α_is and β_is are values between 0 and 1 and act as hyperparameters which control the relative weight given to prior knowledge. The optimal values of these parameters are derives in later sections. First we present Theorem <ref> which derives theoretical bounds for the bias and variance of the estimate so obtained. [Recursive estimator bounds] For any i, [f̂^*_i] ≤ B_i [∂̂_̂k̂ f^*_i] ≤ B_∂_k, i. 
Where B_i and B_∂_k, i are calculated recursively as follows, B_i = α_i(B_i-1 + ∑_k=1^m (δ*θ_i-1)_k B_∂_k, i-1 + m/2δ*θ_i-1_2^2 O_2) B_∂_k, i = β_k,i(B_∂_k, i-1 + δ*θ_i-1_2 O_2) , B_0 = 0 B_∂_k, 0 = 0. and similarly for the variance, [f̂^*_i] ≤ A^2_i [∂̂_̂k̂f^*_i] ≤ A^2_∂_k, i. Using the notation in, Theorem <ref> A^2_i = α_i^2 (A^2_i-1 + ∑_k=1^m (δ*θ_i-1)_k^2 A^2_∂_k, i-1) + (1 - α_i)^2 ϵ^2_f_i A^2_∂_k, i = β_k,i^2 A^2_∂_k, i-1 + (1 - β_k,i)^2 ϵ^2_∂_k f_i, Defining the drift term d_i = f_i - 1 + δ*θ_i-1· f_i-1 - f_i, we can write the bias and variance of f̂^*_i as, [f̂^*_i] = α_i ([f̂^*_i-1] + δ*θ_i-1·[f^*_i-1] + d_i) [f̂^*_i] = α_i^2 ([f̂^*_i-1] + δ*θ_i - 1^2·[f^*_i-1]) + (1 - α_i)^2 [f̂_i]. In an abuse of notation, δ*θ^2_i-1 represents a vector of squared elements and [f^*_i-1] represents a vector of variances. This facilitates a more compact proof as shall be seen. With the same objective, we define another drift term for the derivatives of f as d_∂_k, i = ∂_k f_i - 1 - ∂_k f_i will helps us to write the bias and variance of ∂̂_̂k̂f^*_i as, [∂̂_̂k̂f^*_i] = β_k,i([∂̂_̂k̂f^*_i-1] + d_∂_k, i) [∂̂_̂k̂f^*_i] = β_k,i^2 [∂̂_̂k̂f^*_i-1] + (1 - β_k,i)^2 [∂̂_̂k̂f_i]. Combining Lemma <ref> with the mean value theorem, we have, d_i≤1/2δ*θ_i-1_2^2 m O_2 d_∂_k, i≤δ*θ_i-1_2 O_2. Finally, combining the above equations with the fact that [f̂_i] ≤ϵ^2_f_i and [∂̂_̂k̂ f_i] ≤ϵ^2_∂_k f_i completes the proof. For the confidence interval of recursive estimator, we can prove the following result, [Confidence Interval] As a result of Theorem <ref> the following equation is valid for s^* is any of f_is or ∂_k f_is, simply by setting corresponding A and Bs. [ŝ^*] ≤ B^2 + A^2, (ŝ^* - f > κ A + B) ≤ 2e^-κ^2/2. While the expression for the MSE is trivial, for the confidence interval we have, (f̂^*_i - [f̂^*_i] > κ√(A_i)) ≤ 2e^-κ^2/2. This is true because f̂^*_i is a linear combination of χs that are from bounded distributions. Accordingly, Hoeffding's inequality applies. Moreover, there is a one-to-one correspondence between bounds from Hoeffding's and Popoviciu's inequalities (see the proof of Theorem <ref>), which obviously validates the equation above. Since f̂^*_i - f_i > κ√(A_i) + B_i ⇒f̂^*_i - [f̂^*_i] > κ√(A_i), (f̂^*_i - f_i > κ√(A_i) + B_i) ≤(f̂^*_i - [f̂^*_i] > κ√(A_i)) ≤ 2e^-κ^2/2. Finally, we need to solve the sufficient shots problem (Problem <ref>) for the recursive estimator. The actual objective is to solve, r_j, i, r_j±,i∈ℕ, α_i, β_k,iargmin ∑_i=1^∞∑_j=1^ℓ r_j, i + ∑_k=1^m r_j+, k, i + r_j-, k, i s. t. ∀ i [f̂^*_i] ≤ E_f s. t. ∀ i, k [∂̂_k f^*_i] ≤ E_∂_k f. However, we solve an iterative version as in Algorithm <ref>, min_r_j ∈ℕ, α_i∑_j=1^ℓ r_j s. t. [f̂^*_i] ≤ E_f. min_r_j,±∈ℕ, β_k,i∑_j=1^ℓ r_j+ + r_j- s. t. [∂̂_k f^*_i] ≤ E_∂_k f. Combining the two leads to Algorithm <ref>. Note that with this algorithm, for the same error bound, the number of shots for a recursive estimator of a function will be at max equal to the number of shots for the naive estimator of that function. To illustrate the performance of Algorithm <ref>, first we apply the estimator for the variational Problem <ref> with a random (zero mean) initial point and a simple gradient-descent optimizer. Figure <ref> shows the estimated values (with CIs) of the loss function, for different estimators, as a function of the number of shots used to evaluate the function. It is evident that the proposed recursive estimator is outperforming the sample mean estimator by a significant margin. 
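For reference, the following is a self-contained sketch of the recursive estimates in the definition above, driving a plain gradient-descent loop on the single-qubit cos θ benchmark. Fresh measurements are simulated as unbiased Gaussian-noise evaluations, and α_i, β_i are kept fixed for simplicity, whereas the paper chooses them (together with the shot counts) by solving the sufficient-shots problem at every iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
noise = 0.05   # std of each fresh shot-based estimate (illustrative)

def noisy_f(theta):     return np.cos(theta) + rng.normal(scale=noise)
def noisy_grad(theta):  return -np.sin(theta) + rng.normal(scale=noise)  # e.g. via parameter shift

def gd_with_recursive_estimator(theta0, lr=0.1, steps=100, alpha=0.6, beta=0.6):
    theta = theta0
    f_star, g_star = noisy_f(theta), noisy_grad(theta)      # f*_0 and grad*_0
    for _ in range(steps):
        delta = -lr * g_star                                 # gradient-descent update
        theta_new = theta + delta
        # Recursive estimates at the new point: propagate the previous estimate
        # through a first-order model and blend it with a fresh measurement.
        f_star = alpha * (f_star + delta * g_star) + (1 - alpha) * noisy_f(theta_new)
        g_star = beta * g_star + (1 - beta) * noisy_grad(theta_new)
        theta = theta_new
    return theta, f_star

theta, f_est = gd_with_recursive_estimator(theta0=2.0)
print(theta, f_est)   # theta approaches pi, f_est approaches -1
```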
Another comparison, visualizing the number of shots used per GD iteration, is shown in Figure <ref>. To verify the theoretical results derived earlier, the bounds on the MSE and CI are compared with the actual values of the MSE and CI of the estimators in Figures <ref> and <ref>, respectively. For further experimental verification, the same experiment has also been carried out on the more complex MaxCut problem for a square graph (V = 4 and E = 4). The results are shown in Figure <ref> and Figure <ref>. § CONCLUDING REMARKS In this paper, we proposed a generalized framework for optimization algorithms that seek to reduce the number of shot evaluations in VQAs. In its general form, the proposed framework combines an estimator with a numerical optimization algorithm. We introduced the sufficient-shots problem and proposed an algorithm for solving it with the sample mean estimator. This concept, together with the sensitivity analysis of the optimizers, allows us to control the number of shots, leading to a more natural and effective optimization process. Two specific case studies of this framework were subject to extensive experiments. In the first case, a sample mean estimator was coupled with a simulated annealing optimizer, and in the second case, a recursive estimator was coupled with a gradient descent optimizer. In both cases we demonstrated that the proposed approach achieves significant performance improvements over conventional methods. Our results highlight the importance of considering error control strategies and incorporating them into the design of optimizers for variational quantum algorithms. By leveraging estimators with error control and integrating them with interactive optimization processes, we can achieve better optimization performance and reduce the resource requirements for quantum computations. Overall, this work contributes to advancing the field of variational quantum algorithms by providing a systematic framework for designing error-aware optimizers. The presented approaches and results open up new possibilities for improving the efficiency and effectiveness of quantum computing research in various domains, such as quantum chemistry, combinatorial optimization, and machine learning. Future directions could explore further extensions and applications of the proposed framework, as well as experimental validations on quantum devices.
http://arxiv.org/abs/2307.04461v1
20230710101657
Multi-modal Graph Learning over UMLS Knowledge Graphs
[ "Manuel Burger", "Gunnar Rätsch", "Rita Kuznetsova" ]
cs.LG
[ "cs.LG" ]
Deformations at Earth's dayside magnetopause during quasi-radial IMF conditions: Global kinetic simulations and soft X-ray imaging Chi Wang August 12, 2023 ================================================================================================================================== Clinicians are increasingly looking towards machine learning to gain insights about patient evolutions. We propose a novel approach named Multi-Modal UMLS Graph Learning (MMUGL) for learning meaningful representations of medical concepts using graph neural networks over knowledge graphs based on the unified medical language system. These representations are aggregated to represent entire patient visits and then fed into a sequence model to perform predictions at the granularity of multiple hospital visits of a patient. We improve performance by incorporating prior medical knowledge and considering multiple modalities. We compare our method to existing architectures proposed to learn representations at different granularities on the MIMIC-III dataset and show that our approach outperforms these methods. The results demonstrate the significance of multi-modal medical concept representations based on prior medical knowledge. We provide our code here[<https://anonymous.4open.science/r/mmugl/>] and showcase some of our results with an online demo available under this link[<https://mmugl.dnsalias.org>]. § INTRODUCTION Modern healthcare facilities record patient information as Electronic Health Records (EHR). EHR Datasets such as MIMIC-III <cit.>, HiRID <cit.>, and eICU <cit.> enable modeling of disease progressions within a single hospital visit, for example in Intensive Care Units (ICU) <cit.>, or progressions across multiple patient visits <cit.>. These progressions can be meaningfully encoded into patient representations using deep learning as shown by numerous prior works <cit.>. This large body of work highlights the value of strong patient representations which aggregate information across entire patient histories from multiple hospital stays, enabling clinicians to model potential risks in various predictive tasks regarding patients' evolution. Further, we see advantages of recent multi-modal approaches in the ICU setting <cit.> and visit sequence modeling <cit.>. In multi-modal EHR representation learning <cit.>, we benefit from two modalities: structured EHR data (e.g., billing codes) and unstructured text information stored in rich clinical reports. Other modalities of medical data exist outside of in-hospital datasets, where a vast amount of prior medical knowledge is stored in static form in databases such as the Unified Medical Language System (UMLS <cit.>). We identify two drawbacks of current UMLS based approaches <cit.>. First, the approaches do not consider a complete set of relational information stored in UMLS (considering multiple vocabularies) and solely use UMLS as a unified concept space. Second, prior solutions <cit.> specify the usage of hierarchical relations, which implies the use of an underlying graph in the form of a tree (single vocabulary). More complex graph structures inside and across vocabularies are thus omitted. We introduce Multi-Modal UMLS Graph Learning (MMUGL) to overcome the previously stated limitations. 
MMUGL is a novel approach for learning representations over medical concepts extracted from the UMLS Metathesaurus in the form of a complex knowledge graph and relations; extracted using a simple and ambitious procedure considering a considerable set of vocabularies and all the relations across and within them. We apply auto-encoder pretraining techniques (e.g., <cit.>). By training a shared latent space <cit.>, we bridge the modality gap between structured EHR codes and unstructured text. The approach includes rich prior knowledge important in the medical domain, deals with sample scarcity by relying on prior knowledge structure and pretraining techniques, and leverages multiple modalities as inputs. §.§ Generalizable Insights about Machine Learning in the Context of Healthcare The contributions of our work are threefold: * In Section <ref>, we introduce a novel medical knowledge representation learning approach with graph neural networks (GNN) over knowledge graphs based on the UMLS Metathesaurus of previously unseen complexity. While modern machine learning techniques are unlocking amazing advancements in health-care, improved precision, early detection, personalized treatments, and democratized access, all by learning from large amounts of data, most of the accumulated medical knowledge often remains untouched by our algorithms. Prior work has considered to tap into this knowledge, but we go one step further and show, that we can extract large and complex knowledge graphs by considering a considerable amount of the entire UMLS Metathesaurus and build a strong structural prior into our machine learning model and gain performance in the process. * We introduce a shared latent Concept Embedding (Sec. <ref>) space and a shared Visit Encoder (Sec. <ref>) to optimize the single latent space from any modality jointly in a parameter efficient manner. Prior work has established the importance of leveraging EHR records in their entirety, thus incorporating all the available modalities. In our work we show the benefits of grounding all modalities by the same prior knowledge and training a single latent space in end-to-end fashion for all input modalities (structured and unstructured EHR). * In Section <ref> we demonstrate, that we strongly outperform prior graph-based works in pretraining and downstream tasks and can perform competitively with prior work trained at a much larger scale of data. We show the benefits of our large-scale knowledge graph, shared latent space from multiple modalities and tailored (pre-)training procedure. § RELATED WORK In the following, we introduce related work in EHR modeling, knowledge graph learning, and graph learning in the context of EHRs. EHR Various types of deep learning architectures have been proposed to learn representations at different granularities (patients, visits, histories, etc.) in EHR datasets. <cit.> propose EHR-specific visit sequence models. <cit.> propose to focus on the inherent structure of EHRs w.r.t. treatments, diagnosis, visits, and patients. <cit.> adapt the masked language modeling approach to learn medical concept embeddings. Multi-Modality Prior work has considered learning representations from either structured components of EHR data <cit.> or from unstructured clinical text reports <cit.>. <cit.> have proposed multi-modal architectures and <cit.> go a step further and introduce even stronger structural priors, while considering the two modalities of structured EHR data, as well as unstructured clinical reports. 
Knowledge Graphs and GNNs A vast amount of static prior medical knowledge often remains untouched in current modeling approaches. This prior knowledge can be extracted and transformed into knowledge graphs <cit.>. Existing work in natural language processing has established the benefits of knowledge graph representations to various downstream applications <cit.>; where the most recent approaches include GNNs <cit.>. We aim to leverage the recent success of GNNs, which apply graph convolutions over arbitrary graph structures to learn node (and edge) representations <cit.>. Graph Learning in EHR GRAM <cit.> proposed to include prior knowledge from medical ontologies such as the International Classification of Diseases (ICD). To model structural and relational data explicitly, approaches have started to use GNNs. <cit.> proposed to use the Graph Attention <cit.> operator together with an architecture to pretrain embeddings over two ontologies. Other works learn over heterogeneous graphs with different types of nodes <cit.>. <cit.> construct a global graph of diseases, as well as dynamic local (within a single visit) subgraphs. <cit.> focus on the EHR structure within a single visit. Finally, <cit.> consider hyperbolic embeddings for medical ontologies. The learned embeddings can then be incorporated into task-specific architectures <cit.> to improve outcome predictions in different healthcare settings. Previous approaches do consider dataset-specific structures such as the hierarchical organization of EHRs (patients, visits, etc.) and co-occurrence information or structure coming from ontologies. However, the explored set of ontologies is usually kept small and most of them are tree-like structures. To the best of our knowledge, no prior work has considered using a GNN directly on top of a complex large-scale ontology such as the UMLS Metathesaurus and the complete set of unstructured relational information within it. Further, while previous work considered multiple modalities, they use fusion approaches to join modalities, which can require larger amounts of data to train effectively. Our work proposes to use the learned knowledge representations over the UMLS Metathesaurus as a single shared latent space for information coming from both the structured (billing codes) and unstructured modalities (clinical reports). § GLOSSARY We consider an EHR dataset of multiple patients and present the following terminology: * Patient: p_i indexed by i * Visit: a patient p_i has one or multiple visits v_i,t indexed by t. A visit contains a set of medical concepts c ∈𝒞_i, t, the total set of medical concepts over the dataset is then 𝒞 = ∪_∀ i, t𝒞_i,t. A medical concept can be of different types and we distinguish them by index 𝒞(*): * Disease: indexed by d s.t. 𝒞_i, t(d) and 𝒞(d) = ∪_∀ i, t𝒞_i,t(d) the total set of disease concepts * Medication: (or prescriptions) with type m, similar to diseases we introduce 𝒞(m) and 𝒞_i,t(m) * Concept from clinical reports: a set of medical concepts extracted from text data (clinical reports, Sec. <ref>). The total set of considered medical concepts from text 𝒞(n) = ∪_∀ i, t𝒞_i,t(n) where the set 𝒞_i,t(n) is collected from all reports at a specific visit t of patient i. The type is n for text note. The vector representation of a visit considering data of a specific type * is 𝐯_𝐢,𝐭(*) ∈ℝ^k. 
* Ontology: each of them has a vocabulary 𝒱_Ont and defines some relation amongst the members of the vocabulary using an edge set ℰ_Ont, which defines the ontology graph 𝒢_Ont = (𝒱_Ont, ℰ_Ont). We consider the following ontologies/databases: * 𝒢_ICD (International Classification of Diseases) where 𝒞(d) ⊆𝒱_ICD * 𝒢_ATC (Anatomical Therapeutic Chemical) where 𝒞(m) ⊆𝒱_ATC * 𝒢_UMLS (Unified Medical Language System) where {𝒞(d) ∪𝒞(m) ∪𝒞(n)} = 𝒞⊆𝒱_UMLS § METHOD The architecture consists of three main components and is derived from the work done by <cit.>; fig:architecture provides an overview. * Concept embedding module f_θ(c): 𝒞 ↦ ℝ^k (parametrized by θ), which computes a representation for any given medical concept c. * Visit encoding module Assume q = |𝒞_i, t| and r ∈{2, 3} the number of concept types considered (diseases and medications with or without concepts from text) then g_ψ(v): ℝ^q × k↦ℝ^r × k (parametrized by ψ), which, given all concept token representations of a single visit v computes single representations for each different type of tokens thereof. * Predictor module which performs either a pretraining task on a single visit or a downstream fine-tuning task across a sequence of visits. In either case, this module receives representations for each visit of a patient from the previous visit encoding module. In the following subsections, we introduce the Concept Embedding module (Sec. <ref>), present how we extract richer concepts from clinical reports (Sec. <ref>), encode the information (Sec. <ref>), and perform predictions (Sec. <ref>). §.§ Concept Embeddings We consider the following implementations of f_θ(c): ICD/ATC Hierarchies Based on the work done by <cit.>, we consider the two tree hierarchies ICD[We consider the 9th revision, as of working on MIMIC-III] for diseases and ATC for medications. In this case, we consider c ∈{𝒞(d) ∪𝒞(m)}. We compute the node embeddings 𝐍_* (⊕ for concatenation): 𝐍_𝒞(d) = GNN_θ_1(𝒢_ICD), 𝐍_𝒞(m) = GNN_θ_2(𝒢_ATC), f_θ(c) = Lookup(𝐍_𝒞(d)⊕𝐍_𝒞(m))(c) where we use a distinct (parametrized by θ_1 and θ_2) multi-layer GNN for each of the two hierarchies (Eqns. <ref>, <ref>) and then perform a lookup (retrieve nodes by index) against the resulting node embeddings (Eqn. <ref>). In this case, we initialize all of the nodes with randomly initialized trainable embeddings. We refer to this approach to learn concept embeddings with ICD/ATC. We can additionally consider co-occurrence information (e.g., <cit.>) to connect the two hierarchies. We refer to this approach with ICD/ATC-CO (details in Appendix <ref>). MMUGL We present our novel approach to rely on the UMLS Metathesaurus as a unified concept space to learn representations for any general medical concept present in the database based on multiple modalities. Given that, we refer to our approach as Multi-Modal UMLS Graph Learning (MMUGL). To constrain the number of concepts we consider from the database we use the set of clinical reports present in EHR datasets such as MIMIC-III <cit.>. Using an extraction pipeline (Sec. <ref>) we collect the set of medical concepts 𝒞(n); additionally, we ensure all of the concepts in the ICD and ATC hierarchies are present as well in our final vocabulary. The final vocabulary {𝒞(d) ∪𝒞(m) ∪𝒞(n)} = 𝒞 = 𝒱_UMLS is used to construct 𝒢_UMLS by extracting all the edges in UMLS fully contained within the vocabulary. To simplify we consider all edges to be undirected. In UMLS many concepts are annotated with a short natural language description. 
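As a rough illustration of the graph construction and the GNN concept-embedding step f_θ(c) described here, the sketch below keeps only UMLS relations whose endpoints both lie in the extracted vocabulary, treats them as undirected, and runs a small GraphSAGE stack (the convolution operator reported in the appendix) with PyTorch Geometric. The CUIs, edge list, feature dimension, and the random node features (which stand in for the description-based initialization introduced next) are placeholders.

```python
import torch
from torch_geometric.nn import SAGEConv

# Vocabulary: concepts appearing in the records (diseases, medications, text concepts).
vocab = {"C0018801": 0, "C0011849": 1, "C0004057": 2}        # placeholder UMLS CUIs

# Keep only UMLS relations whose endpoints are both inside the vocabulary; undirected.
umls_relations = [("C0018801", "C0011849"), ("C0011849", "C0004057"),
                  ("C0018801", "C9999999")]                   # last edge leaves the vocabulary
edges = [(vocab[a], vocab[b]) for a, b in umls_relations if a in vocab and b in vocab]
edges += [(b, a) for a, b in edges]                           # add reverse direction
edge_index = torch.tensor(edges, dtype=torch.long).t()

class ConceptGNN(torch.nn.Module):
    """f_theta(c): node embeddings from a two-layer GraphSAGE over G_UMLS."""
    def __init__(self, dim):
        super().__init__()
        self.conv1, self.conv2 = SAGEConv(dim, dim), SAGEConv(dim, dim)

    def forward(self, x, edge_index):
        return self.conv2(torch.relu(self.conv1(x, edge_index)), edge_index)

dim = 64
x = torch.randn(len(vocab), dim)    # placeholder features (description embeddings in the paper)
node_emb = ConceptGNN(dim)(x, edge_index)
concept_embedding = node_emb[vocab["C0011849"]]   # retrieve a concept by its node id
print(concept_embedding.shape)                    # torch.Size([64])
```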
We use SapBERT <cit.>[<https://github.com/cambridgeltl/sapbert>], a pretrained language model fine-tuned to discriminate amongst UMLS concepts, to initialize the node embeddings from these descriptions. This contributes in two ways: (i) by not using trainable embeddings, we reduce the otherwise huge amount of free parameters given the large vocabulary 𝒱_UMLS (ii) we incorporate prior medical knowledge by considering the concept descriptions. We then train a multi-layer GNN on top of the extracted graph: f_θ(c) = GNN_θ(𝒢_UMLS)(c) To retrieve a concept, we return its computed node embedding. We additionally found it to be beneficial for performance to consider two distinct stacks of GNN layers over the same graph and perform a Max-Pooling operation after the final layer across the two stacks. This falls in line with using two distinct GNNs in the simple ICD/ATC Hierarchy case presented in Eqn. <ref>. §.§ Concept Extraction The goal of our approach is to include data from additional modalities such as the clinical reports found in EHR datasets. MMUGL learns modality agnostic representations of medical concepts based on UMLS knowledge. It fuses discrete code information (e.g., ICD codes) with medical concepts extracted from text. The extraction with QuickUMLS <cit.> yields a set of medical concepts 𝒞_i,t(n) based on the collection of clinical reports of that particular visit. Further, we perform a rule-based negation extraction using NegEx <cit.>; for each concept, we extract a binary feature, whether it is negated or not, and concatenate it with its learned concept embedding (Eqn. <ref>). This is a crucial piece of information as clinical reports can both mention the existence or the absence of a certain condition. §.§ Visit Encoder We present implementations of the function g_ψ(v). In line with the work by <cit.>, we consider a multi-layer transformer without positional encodings. To aggregate a set of concepts into a single representation we use a learned token representation at the transformer output as the aggregate. For each concept type in a given visit, we encode a separate representation using the same (weight-sharing) Transformer_ψ with parameters ψ where * ∈{d, m, n}. The aggregated representations g_ψ(v) for each modality are considered as the output of this module in MMUGL: 𝐯_𝐢,𝐭(*) = Transformer_ψ( f_θ( 𝒞(*)_i,t) ) [], g_ψ(v) = ( 𝐯_𝐢,𝐭(d), 𝐯_𝐢,𝐭(m), 𝐯_𝐢,𝐭(n) ) we can also consider a case without the information from clinical reports (Eqn. <ref>), e.g., in cases where we use a simpler graph such as ICD/ATC (Eqn. <ref>) or in MMUGL without 𝒞(n). g_ψ(v) = ( 𝐯_𝐢,𝐭(d), 𝐯_𝐢,𝐭(m) ) §.§ Predictors and Training In the following, we introduce the pretraining module and downstream fine-tuning modules. §.§.§ Pretraining Module We replicate the auto-encoding pretraining approach developed by <cit.> with the reconstruction loss ℒ_recon and perform four different predictions (from each of the two modalities, disease and prescription, as a source to either as the label) using distinct Multi-layer Perceptrons (MLP_∙→ * predicting type * from representations of type ∙) and attach a binary cross-entropy loss ℒ_BCE to model multi-label classification. ℒ_recon = ∑_∙, * ∈{d, m}ℒ'(∙, *), ℒ'(∙, *) = ℒ_BCE( MLP_∙→ *( 𝐯(∙) ) , 𝒞(*)) During pretraining, we additionally randomly mask and replace certain tokens at the input in Eqn. <ref> (same as <cit.>, inspired by masked language modeling <cit.>). Weighted reconstruction pretraining We consider a weighted version of Eqn. 
<ref>: ℒ_recon = ∑_∙, * ∈{d, m} w_∙, * ℒ'(∙, *) As some of the considered downstream tasks focus on disease diagnosis we consider a tailored disease-focused pretraining approach. In this setting, we omit the predictions (and loss signal) to medications and only predict diseases from either the visits aggregated disease or medication representation. Meaning we set w_∙, d = 1 ∧ w_∙, m = 0. The contributions to the performance of this adaption are presented in Section <ref> and Appendix <ref>. Sum Aggregation Loss Due to the strong imbalance in the distribution of diseases and medications, we explore additional loss components to prevent the attention mechanism from overfitting to the most common tokens. Instead of taking the token representation we take the sum over all tokens excluding and again decode this unbiased aggregate using an MLP to predict the set of diseases or prescriptions (∖ for set difference): ℒ_sum = ∑_* ∈{d, m}ℒ'(t), ℒ'(*) = ℒ_BCE(MLP^ sum_* → *( 𝐯^sum(*) ) , 𝒞(*) ), 𝐯^sum(*) = ∑( Transf._ψ(f_θ( 𝒞(*) ) ) ∖{}) the idea is to ensure a more unbiased aggregation while still allowing the tokens to interact and impute masked or missing information. With this approach, we can induce a more dispersed distribution in the attention mechanism (Sec. <ref>). Concepts from clinical reports In our approach MMUGL we consider additional medical concepts extracted from text (clinical reports) and we concatenate the aggregated representation of these concepts for the respective visit 𝐯(n) to each of the two modalities at the input to the predictor MLP. For example in the case of ℒ_recon: ℒ_recon = ∑_∙, * ∈{d, m}ℒ'(∙, *), ℒ'(∙, *) = ℒ_BCE( MLP_∙→ *( 𝐯(∙) ⊕𝐯(n) ) , 𝒞(*) ) The final loss for pretraining ℒ_pre is a combination of ℒ_recon (Eqn. <ref>, <ref>, <ref>) and ℒ_sum (Eqn. <ref>): ℒ_pre = ℒ_recon + λℒ_sum, where ℒ_sum is configured as a regularizer with hyperparameter λ (for which we provide an ablation in Sec. <ref>). §.§.§ Downstream Modules In this work, we focus our contribution on learning concept representations over a knowledge graph from multiple modalities. We thus consider two prior architectures to perform time-series modeling and leave them mostly unchanged. It is intentional, that we do not propose a novel downstream architecture, but aim to show performance improvements alone through learning more robust and meaningful medical knowledge graph representations and aggregations thereof. Average Pooling To compare to work by <cit.> in medication recommendation, we consider their downstream architecture. Given a patient history of visits (of which we get the representations using modules from Sec. <ref> and <ref>), we perform the same pooling scheme over the past and current visit to get a final representation which is used as input to an MLP to perform a predictive task. RNN Based on the architecture by <cit.> given a patient and sequence of past visits (obtained by encoding in Sec. <ref>), we feed them through a GRU <cit.>. The hidden states at the output of the GRU are aggregated using a temporal attention mechanism where the query is a trainable embedding. We perform a minor modification here w.r.t. to the architecture by <cit.> and introduce a hyperparameter n_q, which refers to the number of trainable queries. If more than one query is used, we aggregate the different temporal aggregations of each query to get a single representation of the entire past of the patient. This representation is used to perform a prediction into the future using a MLP. 
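A condensed sketch of the visit encoder and one reconstruction head follows: concept embeddings of a single type are aggregated by a Transformer encoder without positional encodings through a learned [CLS]-style token, and the aggregate is decoded by an MLP under a binary cross-entropy reconstruction loss. Dimensions, depths, and the dummy batch are placeholders, and input masking, the weights w_{∙,*}, the sum-aggregation regularizer, and the concatenation of the text representation v(n) are omitted for brevity.

```python
import torch
import torch.nn as nn

class VisitEncoder(nn.Module):
    """Order-invariant aggregation of concept embeddings via a learned CLS token."""
    def __init__(self, dim, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, n_heads, dim_feedforward=2 * dim,
                                           batch_first=True)   # no positional encodings added
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.cls = nn.Parameter(torch.randn(1, 1, dim))         # learned aggregation token

    def forward(self, concept_emb):                              # (batch, n_concepts, dim)
        cls = self.cls.expand(concept_emb.size(0), -1, -1)
        out = self.encoder(torch.cat([cls, concept_emb], dim=1))
        return out[:, 0]                                         # v_{i,t}(*): the CLS output

dim, n_diseases = 64, 500
encoder = VisitEncoder(dim)           # shared across concept types (weight sharing)
decoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, n_diseases))

disease_emb = torch.randn(8, 12, dim)                       # f_theta(C(d)) for 8 visits
targets = torch.randint(0, 2, (8, n_diseases)).float()      # multi-hot disease labels

v_d = encoder(disease_emb)
loss_recon = nn.BCEWithLogitsLoss()(decoder(v_d), targets)
loss_recon.backward()
```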
§ EXPERIMENTS We perform our experiments on the MIMIC-III <cit.> dataset, using the , , tables. Medications are mapped to the ATC hierarchy using the approach shared by <cit.>. For any of the approaches and baselines during pretraining we consider the training split of the respective baseline as well as any other patient in the dataset not present in the test or validation splits; this concerns especially patients with only a single visit (which are not usable for fine-tuning sequence tasks) in the dataset. We consider three different downstream tasks all trained using binary cross-entropy (binary/multi-label). Appendix <ref> shows data statistics for each of them. The result tables show standard deviations over three seeded training runs and we highlight the best results in bold font. In Appendix <ref>, <ref>, and <ref> we share training, architecture, and task details. Medication Recommendation To compare to the work by <cit.> (who have shown improvements over any previously published results on this task) we benchmark the medication recommendation task. We use their provided preprocessed patient data derived from MIMIC-III. The multi-label prediction task was evaluated on a sample-averaged Area under the precision-recall curve AuPRC, as well as sample-averaged macro F1 score. Heart Failure This task has been benchmarked in CGL <cit.> (Collaborative Graph Learning), Chet <cit.> (Context-aware Health Event Prediction via Transition Functions), and Sherbet <cit.> (Self-Supervised Graph Learning With Hyperbolic Embedding for Temporal Health Event Prediction); who have performed extensive benchmarking against prior work. We run their provided preprocessing and extract the used target code sets, as well as the computed patient splits. The binary classification is evaluated using F1 score and area under the receiver-operator curve AuROC. Diagnosis Similar to the previous heart failure task we compare to the results of CGL <cit.>, Chet <cit.>, and Sherbet <cit.>. We extract the target code sets and patient splits by running the provided preprocessing in each of the repositories to ensure comparability. We consider thresholded weighted F1 (w-F1) score, and to be comparable to <cit.> we consider their adapted computation of F1. The variant is slightly inflated by considering the number of ground truth positive labels for each sample[<https://github.com/LuChang-CS/CGL/blob/main/metrics.py>]. This avoids the need to set a threshold, but leaks the number of ground-truth positives to the evaluation; we refer to it as w-F1 (infl.). We also report recall at top k predictions (according to model confidence); referred to as R@k (e.g. R@20). § RESULTS AND DISCUSSION §.§ Pretraining: Sum Loss In fig:pretraining-sum-loss-entropy we perform an ablation w.r.t. to the hyperparameter λ controlling the contribution of ℒ_sum (Eqn. <ref>) to the total pretraining loss. <cit.> have computed the entropy of the distribution induced by the attention mechanism to analyze Transformer behavior. Similarly, we show the average (test set) entropy of the distribution induced by attention from the token to all the other tokens. For larger λ the entropy increases, hence the distribution is more dispersed, and we can see an improvement in pretraining performance (shown by the test set reconstruction loss ℒ_recon corresponding to improved test log-likelihood of our model). 
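Two small helpers for quantities referenced in the evaluation and the analysis above: recall over the top-k most confident predictions (R@k) and the entropy of an attention distribution (e.g., from the [CLS] token to the concept tokens), where higher entropy corresponds to a more dispersed aggregation. Both are single-sample sketches; the shapes and random inputs are placeholders.

```python
import numpy as np

def recall_at_k(scores, targets, k=20):
    """Recall over the k labels with the highest model confidence (single sample)."""
    topk = np.argsort(scores)[::-1][:k]
    positives = np.flatnonzero(targets)
    if positives.size == 0:
        return 0.0
    return np.intersect1d(topk, positives).size / positives.size

def attention_entropy(attn, eps=1e-12):
    """Entropy of an attention distribution (e.g., CLS -> concept tokens)."""
    p = attn / (attn.sum() + eps)
    return float(-(p * np.log(p + eps)).sum())

scores = np.random.rand(500)                         # model confidences over the label set
targets = (np.random.rand(500) < 0.02).astype(int)   # sparse multi-hot ground truth
print(recall_at_k(scores, targets, k=20))

attn = np.random.dirichlet(np.ones(12))              # attention over 12 concept tokens
print(attention_entropy(attn))
```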
The idea is, that a more dispersed distribution is a better aggregator and generalizes better to rare diseases, which might otherwise be overlooked by a pointy (overfitted) attention distribution. In Appendix <ref> and <ref> we provide further experimental results ablating pretraining and the different loss terms. §.§ Medication Recommendation We report our performance on the medication recommendation task (Sec. <ref>) using the average pooling architecture (Sec. <ref>) in tab:baseline-comparison-med. Our method and training approach can outperform the previously published state-of-the-art results by <cit.>, however, we note that the multi-modal approach with medical concepts from clinical reports cannot provide improvements on this task and data split (patients have high variation w.r.t. the richness of available clinical reports); also see Appendix <ref>. §.§ Disease Tasks In tab:baseline-comparison-diag we present benchmarking results on two disease-related tasks (Sec. <ref>) using the RNN architecture (Sec. <ref>). We train and evaluate our models on the patient splits and code sets extracted by considering three different prior work implementations, which have performed extensive benchmarking on previous state-of-the-art methods. For Heart Failure we see that our approach can outperform any previous state-of-the-art published methods. Considering the diagnosis task our method outperforms CGL <cit.> (which considers unstructured text data), as well as Chet <cit.>; to be fair, neither considers a pretraining scheme. We also considerably outperform MedPath <cit.>, which considers personalized graphs to enhance the predictive performance of backbone time-series architectures for EHR. Our general method, considering pretraining tailored to encode both diseases and medications, performs on par with the hyperbolic approach Sherbet <cit.>, which performs pretraining too. However, if we tune our visit representations towards encoding disease-specific information (see Eqn. <ref> with w_∙, d = 1 ∧ w_∙, m = 0) we can also outperform this prior method. §.§ Concept Embedding Ablation In tab:concept-embedding-ablation we show ablations over different types of concept embeddings (Sec. <ref>) on a diagnosis task (Sec. <ref>). Our approach strongly benefits from richer multi-modal information coming from clinical reports and thus outperforms prior work (the multi-modal approach can also increase robustness w.r.t. missing and erroneous information, Appendix <ref>). We can see further improvements by tailoring our pretraining towards the downstream task by using disease-focused pretraining (Eqn. <ref> with w_∙, m = 0). Note that in some cases (e.g. Heart Failure) using MMUGL w/o 𝒞(n) (i.e. w/o clinical reports) can slightly harm performance and be outperformed by more dataset-specific approaches such as using co-occurrence information and relying on simpler ontology structures with trainable embeddings, thus being able to adapt better to the dataset than the language model initialized MMUGL embeddings. However, our approach enables the use of richer information coming from clinical reports and a larger concept vocabulary without introducing new parameters. Further, our approach is more general, grounded by prior knowledge, and can hopefully be used to push transfer learning performance in the future. This is crucial in the medical data setting, where publicly available training data is scarce and sharing among institutions difficult to protect the privacy of individual patients. 
We compare to two alternative approaches for learning concept embeddings by replacing the Concept Embedding <ref> module and performing the same proposed training procedure. As presented by <cit.> we additionally pretrain our knowledge graph concept embeddings using Node2Vec <cit.>. Secondly we compare to Cui2Vec <cit.>. Cui2Vec consists of medical concept embeddings pretrained on a large-scale corpus using a Word2Vec <cit.> style objective function. We show, that using our graph on the scale of 100'000 and around 30'000 patients for pretraining, we can compete with an approach that used training data on the order of 60 million patients, 20 million clinical notes, and 1.7 million biomedical journal articles. §.§ Interpretability Analysis on Clinical Reports We can use the attention mechanism to interpret the results on a patient level to rank diagnosis and medications, as well as general medical concepts from clinical reports w.r.t. their importance for the prediction using their respective attention score. See Figure <ref> where we show aggregated attention values for disease and prescription categories (Fig. <ref>), as well as the highest ranked concepts inside the highest ranked clinical reports (Fig. <ref>). We can also perform various dataset global analyses. We analyze the distribution of medical concepts extracted from clinical reports w.r.t. MIMIC-III report type and present the results in fig:text-attention-category-distribution; please mind the logarithmic y-scale (Appendix <ref> also shows a linear scale). After pretraining, we can see a very strong shift from the dataset's type distribution toward discharge summaries. This is sensible given the pretraining task is an auto-encoder, essentially training for summarizing the visit. By fine-tuning for specific tasks we can see slight shifts towards more specific report types, which can help provide more detailed insights for a given task; note for example how the focus in the respiratory category increases as we fine-tune for a general diagnosis, but decreases below the pretraining level for a heart failure prediction. §.§ Limitations Based on the previously shown results we can see the strong benefits of incorporating larger scale prior knowledge. We conclude the feasibility of extracting a complex graph from the large UMLS database using a fairly simple extraction pipeline and effectively learn strong medical knowledge representations over it. We have proposed a simple extraction pipeline, where we extract an undirected graph from UMLS and ignore potential edge information. A more sophisticated extraction paired with an appropriate GNN should be able to handle the increased heterogeneity of different nodes, edges, and their respective features. However, this will come at a computational cost. One will have to navigate the complexities associated with the various node and edge types within the heterogenous set of subvocabularies present inside the UMLS Metathesaurus. By creating a single shared latent space (our knowledge graph) for multiple modalities, we can achieve improved performance using much less data than prior art or outperform work using the same amounts of data. However, by reducing a clinical report to a set of medical concepts, which we can map onto our graph space, we neglect the natural language context and ordering. As we are already using a Transformer architecture inside our Visit Encoder (Sec. 
<ref>), we could include the remaining text (without concept matches) to provide the language context to obtain even finer grained final representations of patients and their visits. § CONCLUSION We have introduced a novel way to train a unified latent space for general medical knowledge from multiple modalities. By grounding our representations with prior knowledge from the UMLS Metathesaurus, we have demonstrated improved performance on downstream tasks. Our extended pretraining approach and the corresponding results emphasize its importance to tackle the supervised label scarcity in the medical domain. The more generalized approach to medical concept representations can aid in future designs and explorations of knowledge embedding transferability. Knowledge transfer is an important factor in the medical setting where publicly available training samples are scarce due to necessary regulations to protecting patient privacy. Our results pave the way for future research to bridge the gap between within-visit modeling (e.g., ICU time-series models <cit.>) and across-visit modeling, such as we benchmarked against in this work. Whereas disease and medication codes are usually assigned post-visit (for billing or archival purposes), many clinical reports are generated during the patient stays. To provide richer context information, future within-visit models might include patient histories and the knowledge captured in our global concept representations. This project was supported by grant #2022-278 of the Strategic Focus Area “Personalized Health and Related Technologies (PHRT)” of the ETH Domain (Swiss Federal Institutes of Technology). Further, we would like to thank Hugo Yèche for his feedback during the revision process. Thanks go to Jonas Bokstaller and Severin Husmann whose theses have provided relevant insights. § EXPERIMENTAL DETAILS §.§ Dataset and Split details A small overview of data and task statistics are provided in tab:apd-data-statistics-disease. Splits and target code sets have been extracted from the respective repositories[<https://github.com/jshang123/G-Bert>, <https://github.com/LuChang-CS/CGL>, <https://github.com/LuChang-CS/Chet>, <https://github.com/LuChang-CS/sherbet>] §.§ Knowledge Graph Statistics The extracted knowledge graph contains 87'445 nodes, 261'212 edges with node degrees of 5.97±20.91. The total vocabulary of all considered medical concepts is a subset of 21 UMLS Metathesaurus Vocabularies (percentages in brackets, some concepts belong to multiple): SNOMEDCT_US (46.75%), ICD9_CM (10.44%), CCPSS (7.71%), CSP (6.75%), FMA (6.15%), RXNORM (5.25%), DXP (4.21%), NCI_CDISC (4.13%), WHO (2.40%), ATC (2.12%), DRUGBANK (1.80%), CPT, NOC, BI, CCS, ICNP, NIC, ICF, CCC, PCDS, RAM. Given a patient split we compute the coverage over our vocabulary during pretraining and downstream training. Inclusion criterias causing differences between the two are availability of medication (req. for pretraining) and multiple visits (req. for downstream training). * All splits: 91.59% (pre), 71.24% (down) * Train: 90.30% (pre), 68.21% (down) * Validation: 16.45% (pre), 16.97% (down) * Test: 37.68% (pre), 38.47% (down) A percentage of concepts in the validation and test splits are unseen during training. 
Because of the graph structure, we can still learn meaningful representations for them: * Validation: 0.78% (pre), 1.93% (down) * Test: 3.11% (pre), 7.07% (down) §.§ Architecture and Training We perform early stopping based on the validation set loss both during pretraining and fine-tuning. The network is first fully pretrained until early stopped, the concept embedding (Sec. <ref>) backend is then frozen, the visit encoder (Sec. <ref>) is left trainable together with the downstream architecture to allow the attention mechanism to be fine-tuned to perform task-specific aggregations. We find a larger batch size (e.g. 32 or more) to be beneficial for better training stability. Appendix <ref> shows an overview of the hyperparameters, which have been tuned w.r.t. validation set performance. §.§.§ GNN Architecture In Section <ref> we use a parametrized GNN in Eqns. <ref>, <ref>, and <ref>. We use Pytorch Geometric <cit.> to implement these networks and based on our hyperparameter searches in Appendix <ref> we settled on using the graph convolution operator GraphSAGE as introduced by <cit.>. The ICD and ATC hierarchical ontologies or our complex UMLS based knowledge graph are passed to the GNN considering all edges as undirected. In the case of multiple GNN layers we use a non-linear ReLU activation after all but the last layer. The representations for each medical concept of an ontology or the knowledge graph at the GNN output are cached and used to retrieve concept embeddings for further processing by the Visit Encoder (Sec. <ref>) module. §.§.§ GNN with Co-Occurrence Similar to work done by <cit.> or <cit.> we can additionally consider co-occurrence information present in our dataset. We construct a new graph 𝒢_ICD/ATC-CO which contains multiple sets of nodes and edges. The node sets are the ICD and ATC tree hierarchy nodes, while the edge sets consist of the two ontologies and four co-occurrence edge sets; one for co-occurrence within each of the two ontologies and one (directed) from each of the two to the other. We then compute a heterogeneous (nodes of different types) multi-layer GNN (see <cit.>) over these node and edge sets, where each edge set is associated with its own parametrized graph convolution operator. As a result, we compute multiple different embeddings for a given node in each layer, which are summed. Co-Occurrence edges can additionally be weighted by computing a count over the dataset (training split) and normalizing s.t. incoming edges sum to one. Such weights can be considered by the GNN by multiplying messages from neighboring nodes with the corresponding weight. Again we have c ∈{𝒞(d) ∪𝒞(m)}: f_θ(c) = GNN_hetero(𝒢_ICD/ATC-CO)(c) §.§ Hyperparameters In tab:apd-hp-pretraining,tab:apd-hp-medication-recommend,tab:apd-hp-heart-failure,tab:apd-hp-diagnosis we present an overview of the model hyperparameters. Final choices based on validation set performances have been marked in bold font. Hardware A typical training is finished in under a day. Depending on the task and set of considered input modalities it can be much faster. We trained our models using mostly GPUs with 11GB of dedicated GPU memory; some larger models, which included medical concepts extracted from text have been trained on GPUs with 24GB of dedicated GPU memory. We use 2-6 worker processes and around 32-64GB of main memory. §.§ Tasks and Evaluation In the following, we provide a more detailed overview of the benchmarked downstream tasks (Sec. <ref>) and the evaluation thereof. 
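Before detailing the individual tasks, a minimal sketch of the GraphSAGE-based concept-embedding backend described in the previous subsections may help fix ideas. Class and variable names are illustrative and not taken from our code base; the node features could equally be initialized at random or from SapBERT (Appendix <ref>).

import torch
import torch.nn as nn
from torch_geometric.nn import SAGEConv

class ConceptGNN(nn.Module):
    # Two-layer GraphSAGE encoder over the static concept graph (ontologies or the
    # UMLS-derived knowledge graph). edge_index holds both directions of each
    # undirected edge; node_feats holds one trainable row per medical concept.
    def __init__(self, num_nodes, dim=128):
        super().__init__()
        self.node_feats = nn.Parameter(0.02 * torch.randn(num_nodes, dim))
        self.conv1 = SAGEConv(dim, dim)
        self.conv2 = SAGEConv(dim, dim)

    def forward(self, edge_index):
        h = self.conv1(self.node_feats, edge_index).relu()  # ReLU after all but the last layer
        return self.conv2(h, edge_index)                     # [num_nodes, dim]

# The output is cached once per step; the Visit Encoder then retrieves concept
# embeddings by index lookup: visit_emb = all_embeddings[visit_concept_ids]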
§.§.§ Medication Recommendation We benchmark the medication recommendation task based on preprocessed data by <cit.>. The task is to predict a set of medications (ATC level 4 codes) given a patient's history and the current diagnosis (assigned ICD codes). Given a patient i and a trained predictor ĥ we can formalize as follows: 𝒞̂_i, t(m) = ĥ( 𝒞_i, 0… t-1(*), 𝒞_i, t(d) ) where * ∈{d, m, n} Given that this is a multi-label prediction we consider sample-averaged scores. Due to a significant imbalance in the distribution of medication codes, we use the F1 score for thresholded hard predictions and the area under the precision-recall curve (AuPRC) for unthresholded confidence scores. This is in line with the evaluation by <cit.>. §.§.§ Heart Failure This is a binary prediction task as already benchmarked by many prior works on the MIMIC-III <cit.> dataset. The task is to predict the risk of heart failure for a patient in a future visit given the patient's history. The label is extracted from the set of assigned ICD codes by matching with the prefix 428 after stripping the codes of any special characters. Let y_i, t be the target label and it is 1 if there exists a code c ∈𝒞_i, t(d) which has the prefix 428. For a patient i and trained predictor ĥ we can formalize as follows: ŷ_i, t = ĥ( 𝒞_i, 0… t-1(*) ) where * ∈{d, m, n} The task with mild label imbalance is evaluated using F1 score and area under the receiver-operator curve (AuROC) for untresholded performance evaluation; this is in line with work by <cit.> and others. §.§.§ Diagnosis This is a multi-label prediction over a set of diseases. Given a patient's history we predict the set of potential diseases for an upcoming visit . For a patient i and trained predictor ĥ we can formalize as follows: 𝒞̂_i, t(d) = ĥ( 𝒞_i, 0… t-1(*) ) where * ∈{d, m, n} This task might not seem very sensible at first as we cannot expect to reliably predict accidents that cause a hospital visit based on past EHR records. However, this is useful to catch chronic diseases and re-occurring patient patterns. Such a model's predictions could serve as a high-level aggregation of all EHR records for a specific patient. A doctor can get a very quick assessment of the potential risks for a patient upon admission and can tailor further investigations to this. Due to the extreme imbalance over the very large set of potential labels we use weighted-F1 score. To assess the unthresholded model confidence scores we use a popular metric from information retrieval. Recall at top k predictions (ranked by model confidence scores) can give an intuitive indication if the model can retrieve the desired ground truth diseases. The evaluation is in line with prior work e.g. by <cit.>. §.§ Baselines In this section, we provide a summary overview of the presented baselines and the key points of their architectures. §.§.§ CGL: Collaborative Graph Learning In this work, <cit.> propose a collaborative graph learning approach. They consider two graphs, one where patients and diseases are connected based on co-occurrence and one where only diseases are connected amongst each other based on the ICD ontology. GNN layers over the two edge sets and the shared set of nodes are run in an interleaved fashion (collaboratively). The computed embeddings for a certain disease are aggregated to represent patient visits and a sequence model performs task predictions. 
§.§.§ Chet: Context aware Health Event Prediction via Transition Functions The core contribution of this work by <cit.> is to consider a global disease graph, which connects diseases by co-occurrence and ontology relations, as well as a local graph (for each visit), which models the interactions of assigned disease codes within this specific visit. The architecture includes aggregation functions and sequence modeling to perform task-specific predictions. §.§.§ Sherbet: Self- Supervised Graph Learning With Hyperbolic Embedding for Temporal Health Event Prediction With Sherbet <cit.> propose to encode the structure of a disease ontology in hyperbolic space. The hyperbolic embeddings for the respective diseases are used to pretrain (using a patient history reconstruction task) and fine-tune a sequence model architecture to perform task-specific predictions. §.§.§ MedPath: Augmenting Health Risk Prediction via Medical Knowledge Paths With MedPath <cit.> propose to enhance the performance of existing EHR representation learning architectures by incorporating a personalized graph extracted using knowledge from Semantic MEDLINE <cit.>. The extracted graph is dataset and task-specific and can improve the performance of the backbone architecture. We transformed our data to adapt to their published pipeline and performed the heart failure prediction task using their implementations. We use HiTANet <cit.> as the backbone architecture, because it performed the best on the validation set in our hyperparameter search. §.§.§ G-BERT: Pre-training of Graph Augmented Transformers for Medication Recommendation <cit.> show performance improvements on a medication recommendation task by pretraining disease and medication code embeddings using GNNs over two ontologies. The pretraining objective is a reconstruction task of observed codes during a patient visit and borrows ideas from masked language modeling. The pretrained architecture includes a Transformer-based encoder, which outputs a encoding for each patient visit. The proposed downstream architecture performs a pooling scheme over patient histories and recommends medications for a current patient visit given the patient's history and the current diagnosis of diseases. §.§.§ Embedding Matrix In tab:concept-embedding-ablation we show a concept embedding ablation using an Embedding Matrix. This refers to a matrix of trainable parameters 𝐄∈ℝ^|𝒞| × k where |C| is the total number of considered medical concepts and k the embedding dimension. The embedding matrix replaces the Concept Embedding (Sec. <ref>) module and is pretrained and fine-tuned using the same procedure. §.§.§ Concept Embeddings using Node2Vec In tab:concept-embedding-ablation we show a concept embedding ablation using Node2Vec <cit.>. We consider our extracted complex UMLS-based knowledge graph and perform Node2Vec-style pretraining to obtain embeddings for each concept in our knowledge graph. We then initialize an embedding matrix (which is used to retrieve concept embeddings by index lookup) and use it to replace our proposed GNN-based concept embeddings. To ensure fair comparison we then perform the same reconstruction pretraining as our proposed approach MMUGL to ensure the parameters of the Visit Encoder (Sec. <ref>) module are well pretrained too. Similarly, we apply the same pipeline as for our approach during fine-tuning for downstream tasks. 
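As a rough illustration of this ablation, Node2Vec pretraining over the extracted knowledge graph can be run with PyTorch Geometric along the following lines; the walk and optimizer hyperparameters shown here are placeholders, not the values used in our experiments.

import torch
from torch_geometric.nn import Node2Vec

def pretrain_node2vec(edge_index, num_nodes, dim=128, epochs=5):
    # Learns one embedding per knowledge-graph node via random-walk skip-gram training.
    model = Node2Vec(edge_index, embedding_dim=dim, walk_length=20, context_size=10,
                     walks_per_node=10, num_nodes=num_nodes)
    loader = model.loader(batch_size=128, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(epochs):
        for pos_rw, neg_rw in loader:
            optimizer.zero_grad()
            loss = model.loss(pos_rw, neg_rw)  # skip-gram loss on positive/negative walks
            loss.backward()
            optimizer.step()
    # The resulting matrix replaces the GNN backend as a plain index-lookup table.
    return model.embedding.weight.detach()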
§.§.§ Concept Embeddings using Cui2Vec Cui2Vec as introduced by <cit.> is a collection of pretrained medical concept embeddings mapped to the space of UMLS. Their training optimizes a Word2Vec <cit.> style objective over a large-scale corpus (60 million patient records, 20 million clinical notes, and 1.7 million full-text biomedical journal articles). We use the Cui2Vec embeddings to initialize a lookup matrix from which concept embeddings are retrieved by index and replace our GNN-based concept embeddings. To ensure fair comparison we apply the same pretraining (reconstruction) and fine-tuning procedure to obtain downstream task performance results. § TRAINING AND ARCHITECTURE ABLATIONS §.§ Clinical Reports Performance Contribution In this section, we would like to clarify our findings about why the additional modality of extracted concepts from unstructured text (i.e. clinical reports) cannot yield a performance improvement in all cases. Overall, the billing codes represent an aggregate of information for an entire patient's visit to the hospital and the labels are defined based on them. Thus, the billing codes (ICD, ATC codes) are the strongest signal for our predictions. The additional medical concepts from clinical reports can help in two ways. First, they can help to deal with missing or noisy information from the billing codes (see also Appendix <ref>). Second, they can help the model to make more fine-grained predictions due to the higher level of detail. Heart Failure Here the additional concepts from clinical reports do seem to help, but in most cases only marginally. We hypothesize this is due to the fact, that we are only performing a binary prediction and the finer details of the clinical reports cannot yield enough additional information in most cases to significantly improve our predictions. Diagnosis This is a very complex classification task and here we see the strongest improvement after adding the concepts extracted from clinical reports. For this task, we can benefit from the higher level of detail present in the clinical reports compared to the billing codes. Medication Recommendation On this task the strongest signal comes from the current set of diseases. The additional concepts from clinical reports are only present in the representation of the patient's history, where we do not seem to benefit from the more detailed content of the clinical reports. To avoid information leakage we cannot directly use all concepts from all reports of the current visit when performing the medication recommendation. To accommodate for this we would have to adapt the task to a within-visit online medication recommendation; predicting medication based on the patient's global (past hospital visits) as well as local (past time within current visit) history. This would enable the inclusion of already accumulated clinical reports in the local (current visit) context. §.§ Ablation: SapBERT We ablate the use of SapBERT <cit.> compared to training randomly initialized node embeddings. SapBERT performs better in pretraining (the selection criteria), where we see an increase from 49.38±0.49 to 61.77±0.44 AuPRC. The improvement carries over to the downstream performance, where for the diagnosis prediction we see an improvement of 25.46±0.50 to 26.19±0.30 in the F1 (inflated) score. §.§ Pretraining Ablation In tab:baseline-comparison-diag we can see, that prior work including pretraining schemes performs much stronger than the ones that don't. 
In tab:pretraining-ablation we perform an ablation w.r.t. pretraining different concept embeddings and report performance on the Diagnosis task (Sec. <ref>) on pretrained and on randomly initialized networks. We note that the more structural bias we provide, the better the performance without pretraining. In Appendix <ref> and <ref> we present further results on exploring modifications to the pretraining loss function. §.§ Sum Aggregation Loss We provide further empirical evidence for the contribution of the additional loss term introduced in Eqn. <ref> in tab:sum-loss-diagnosis. tab:sum-loss-diagnosis shows results on the Diagnosis downstream task across different Concept Embedding implementations and with different pretraining regimes. We show results without pretraining, with pretraining on only the default reconstruction loss ℒ_recon (Eqn. <ref>), and with the additional introduced loss term ℒ_sum (Eqn. <ref>) included. The additional loss component ℒ_sum during pretraining contributes to better pretrained representations: across different downstream models we observe either the same or increased performance. This difference is especially notable and important for the best-performing model implementation MMUGL, where w_∙, m = 0 (Eqn. <ref>, pretraining focused on recovering diseases only). We hypothesize that, without the additional loss regularization, we experience stronger overfitting to the training distribution during pretraining, as we have more data available (given that MMUGL includes additional rich information coming from medical concepts in clinical reports) and we have reduced the task complexity (as we set w_∙, m = 0 in the pretraining loss, Eqn. <ref>). We also observe a tendency toward more consistent results when pretraining includes the ℒ_sum loss component, as standard deviations tend to be lower. This remains consistent on further tasks, e.g., heart failure. For MMUGL with w_∙, m = 0 and ℒ_sum included in pretraining we observe a downstream heart failure prediction performance (on the CGL <cit.> patient split) of 87.60±0.40, which drops to 86.93±0.13 if we pretrain without ℒ_sum. §.§ Reconstruction Loss In tab:recon-loss-ablation,tab:recon-loss-ablation-med we perform an ablation with respect to the different weights in the weighted version of the pretraining reconstruction loss ℒ_recon (Eqn. <ref>). The base version as introduced by <cit.> considers all weights w_∙, * = 1. This is flexible in the sense that it does not enforce a bias towards encoding information relevant for disease or medication predictions. However, by weighting (or fully disabling) the different terms, we can tailor our pretraining to different downstream scenarios; a schematic form of this weighting is sketched below. Please also note that the following experiments have been performed without the additional loss component ℒ_sum (Eqn. <ref>) to focus purely on the effects within the reconstruction loss term ℒ_recon (Eqn. <ref>, <ref>). Downstream Diagnosis tab:recon-loss-ablation shows this effect on the downstream Diagnosis task. We can see that while having all loss terms active yields strong performance, in the case of a diagnosis prediction it is beneficial to only pretrain on loss terms that are predictive for diseases, i.e., w_∙, d = 1 ∧ w_∙, m = 0. This is further supported by results shown in tab:baseline-comparison-diag and tab:concept-embedding-ablation, where results on the full MMUGL model (including medical concepts from clinical reports) improve by pretraining with w_∙, m = 0.
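Schematically, the weighted pretraining objective amounts to a per-modality weighting of the reconstruction terms. The sketch below assumes multi-label binary cross-entropy decoders for disease and medication codes; this is a simplification for illustration, not a verbatim restatement of Eqn. <ref>.

import torch.nn.functional as F

def weighted_recon_loss(disease_logits, disease_targets, med_logits, med_targets,
                        w_d=1.0, w_m=1.0):
    # Multi-label reconstruction of the two code modalities from the visit representation.
    loss_d = F.binary_cross_entropy_with_logits(disease_logits, disease_targets)
    loss_m = F.binary_cross_entropy_with_logits(med_logits, med_targets)
    return w_d * loss_d + w_m * loss_m

# w_d=1.0, w_m=0.0 reproduces the disease-only pretraining (w_∙,m = 0) discussed above.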
Downstream Medication Recommendation tab:recon-loss-ablation-med shows the exact same behaviour when performing downstream medication recommendation. The best performance is achieved by only considering loss terms towards predicting the modality relevant for the downstream prediction task. We can conclude, that cross-modality pretraining is beneficial to learn embeddings that can be useful for a yet unspecified downstream application. However, if the nature of the target modality of the downstream task is known and the cost of pretraining affordable, we can achieve better performance by adapting the pretraining to the downstream scenario. § CLINICAL REPORT CONCEPT CATEGORY DISTRIBUTION In fig:apd-text-attention-lin-log-comparison we show the plot discussed in Section <ref> with both logarithmic and linear scale. The plot with logarithmic scale in Figure <ref> is better suited to highlight the fine details and changes in categories such as Respiratory or Radiology. The linear scale in fig:apd-text-attention-lin shows the strong changes caused by the pretraining (compared to the actual token distribution per category) in e.g. the discharge summary type of reports. One might notice a particularly large drop in tokens from reports of the respiratory type. First we would like to highlight that fig:apd-text-attention-log uses a logarithmic y-axis and thus the absolute number of tokens found in the respective report type is comparatively low. Still, we can observe a change over one order of magnitude. This can be explained by looking more in-depth at the reports of this specific type. In MIMIC-III the clinical reports of type Respiratory are mostly highly structured status reports assessing a patient's state w.r.t. the respiratory system. Being a structured report, there is a large set medical concepts matched, which correspond to the field names of the structured report to be filled with patient information and further most of the provided assessments in the form do not vary much across patients. As such, many of the extracted medical concepts from these reports are not discriminative across patients and thus we observe a drop in attention to the tokens extracted from these reports after training the model. § HEART FAILURE PERFORMANCE DISENTANGLEMENT Due to the chronic nature of heart failure, we disentangle the performance on the test set with a fixed model for patients with and without reported histories of heart failure (the target codes have appeared in the patient history). The results are shown in Table <ref>. The model is naturally performing much better on the subset of patients with a reported history of heart failure and can exploit the chronic nature of the disease. However, we note that with our proposed multi-modal approach we see a notable performance improvement on the hard cases of patients without a reported history of heart failure. We conclude, using clinical report concepts backed by a knowledge graph, not just billing codes, aids in understanding disease progressions. § SINGLE PATIENT INTERPRETABILITY In fig:apd-single-patient we present various ways how attention scores of our visit encoder (Sec. <ref>) can be used to provide interpretability of our predictions. We provide an example score analysis of visit 121518 by patient 1784 in the MIMIC-III <cit.> dataset. 
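The report-type statistics in this section boil down to grouping per-concept attention scores by the category of the report they were extracted from and reducing each group; a small sketch follows (the percentile reduction mirrors the patient-level aggregation in Appendix <ref>, but the exact reduction is a design choice rather than a fixed part of the method).

from collections import defaultdict
import numpy as np

def aggregate_attention_by_category(scores, categories, q=90):
    # scores: attention score per extracted concept token
    # categories: MIMIC-III report type each token was extracted from
    groups = defaultdict(list)
    for score, category in zip(scores, categories):
        groups[category].append(score)
    return {category: float(np.percentile(values, q))
            for category, values in groups.items()}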
The patient was assigned the following set of codes: * ICD: 519.1, 496.0, 414.01, 401.9, 443.9, V45.82 * ATC4: N05CD, A02BC, B01AB, A06AD, C07AB, B05CX, G04CA, A07EA The scores can be used to highlight the most relevant diseases and medications (fig:apd-patient-overview). By grouping scores of individual codes and computing an aggregate for each group (e.g. 90th-percentile of scores) we can highlight the most relevant disease and medication categories for this patient at the given visit. We can further extract which of the reports collected during the entire visit contain the most predictive identifiers by computing an aggregated score over the scores of all the matched concepts within each report (fig:apd-patient-overview). In (fig:apd-patient-top-reports) we then highlight the concepts within the two highest-ranked reports with the largest attention scores. We can see that the scores are consistent across different modalities, considering for example the high score given to the Respiratory category for the disease (ICD) codes (fig:apd-patient-overview), as well as high scores for concepts found in clinical reports (e.g. (Tracheomalacia) in or (Carinal reconstruction) in ; fig:apd-patient-top-reports) related to respiratory conditions. We can conclude that for this sample the unified concept latent space promotes consistency across modalities and can improve interpretability. § ROBUSTNESS W.R.T. MISSING INFORMATION In fig:masking-progression we show the results of an experiment, where we progressively mask a larger percentage of input tokens of different modalities. This is done by replacing the respective token identifier with the token used during masked language modeling style pretraining <cit.>. Tokens can either be masked randomly or we sort them with respect to the attention score assigned to them in the visit encoder. The y-axis shows the pretraining performance w.r.t to Eqn. <ref>; decoding to any of the two modalities (diseases, medications) from the visit representation of either. The results show, that although the auto-encoding objective is only formulated w.r.t. the disease and medications tokens, the additional text information can successfully prevent stronger decay in performance and help impute the missing or incorrect information. We can further see that masking tokens according to their attention scores results in a faster overall decrease in performance, highlighting the benefits of using an attention-based encoder, that can focus on relevant medical concepts when encoding a patient's current state.
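The masking protocol of this experiment can be summarized in a few lines; mask_id stands for the identifier of the mask token from masked-language-model-style pretraining, and the helper below is a simplified stand-in for the actual evaluation loop rather than the code used to produce Fig. <ref>.

import torch

def mask_tokens(token_ids, attention_scores, fraction, mask_id, by_attention=True):
    # Replace a fraction of input concept tokens with the mask token, either at random
    # or ranked by attention score (highest first), before re-evaluating the auto-encoder.
    num_tokens = token_ids.numel()
    k = int(fraction * num_tokens)
    if by_attention:
        masked_idx = torch.argsort(attention_scores, descending=True)[:k]
    else:
        masked_idx = torch.randperm(num_tokens)[:k]
    masked = token_ids.clone()
    masked[masked_idx] = mask_id
    return masked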
http://arxiv.org/abs/2307.04154v1
20230709113104
Well posedness of fluid/solid mixture models for biofilm spread
[ "Ana Carpio", "Gema Duro" ]
math.AP
[ "math.AP", "cs.NA", "math-ph", "math.MP", "math.NA" ]
Well posedness of fluid-solid mixture models for biofilm spread Ana Carpio (Universidad Complutense de Madrid), Gema Duro (Universidad Autónoma de Madrid) August 12, 2023 ================================================================================================= Abstract Two-phase solid-fluid mixture models are ubiquitous in biological applications. For instance, models for growth of tissues and biofilms combine time dependent and quasi-stationary boundary value problems set in domains whose boundary moves in response to variations in the mechano-chemical variables. For a model of biofilm spread, we show how to obtain better posed models by characterizing the time derivatives of relevant quasi-stationary quantities in terms of additional boundary value problems. We also give conditions for well posedness of time dependent submodels set in moving domains depending on the motion of the boundary. After constructing solutions for transport, diffusion and elliptic submodels for volume fractions, displacements, velocities, pressures and concentrations with the required regularity, we are able to handle the full model of biofilm spread in moving domains assuming we know the dynamics of the boundary. These techniques are general and can be applied to models with a similar structure arising in biological and chemical engineering applications. Keywords. Fluid-solid mixture models, thin film approximations, evolution equations in moving domains, quasi-stationary approximations, stationary transport equations. § INTRODUCTION Biofilms are bacterial aggregates that adhere to moist surfaces. Bacteria are encased in a self-produced polymeric matrix <cit.> which shelters them from chemical and mechanical aggressions. Biofilms formed on medical equipment, such as implants and catheters, are responsible for hospital-acquired infections <cit.>. In industrial environments, they cause substantial economic and technical problems associated with food poisoning, biofouling, biocorrosion, contaminated ventilation systems, and so on <cit.>. Modeling biofilm spread is important to be able to eradicate biofilms. We describe here biofilms in terms of solid-fluid mixtures, see Figure <ref>. At each point 𝐱 of the biofilm we have a solid volume fraction of biomass ϕ_s(𝐱,t) (cell biomass, polymeric threads) and a volume fraction of water ϕ_f(𝐱,t) containing dissolved substances (nutrients, autoinducers and so on), in such a way that ϕ_s(𝐱,t) + ϕ_f(𝐱,t) = 1. The solid and fluid volume fractions move with velocities 𝐯_s and 𝐯_f, respectively. Biofilm spread on an air/solid interface is governed by the following system of equations, see <cit.>. Assume a biofilm occupies a region Ω^t that varies with time. Figure <ref> represents schematic views of two dimensional slices. The upper boundary Γ^t_+ separates the biofilm from an outer fluid, which can be a liquid or air. A lower boundary Γ^t_- separates the biofilm from the substratum it attaches to. The main variables satisfy a set of quasi-stationary equations [ div (𝐯_f ϕ_f) = - k_s c/(c + K_s) ϕ_s,; div(k_h(ϕ_s) ∇ (p-π(ϕ_s))) = div (𝐯_s),; μΔ𝐮_s + (μ + λ) ∇ ( div(𝐮_s)) = ∇ p,; -d Δ c + div (𝐯_f c) = - k_c c/(c + K_c) ϕ_s, ] constrained by the additional conditions ϕ_f 𝐯_f = - k_h(ϕ_s) ∇ (p-π(ϕ_s)) + ϕ_f 𝐯_s, 𝐯_s = ∂𝐮_s/∂ t, ϕ_f+ϕ_s = 1, in the region occupied by the biofilm Ω^t, which varies with time.
In this quasi-static framework, the displacement vector 𝐮_s(𝐱,t) and the scalar pressure p(𝐱,t), volume fraction ϕ_s(𝐱,t) and concentration c(𝐱,t) fields depend on time through variations of the boundary Γ^t, which expands due to cell division and swelling. The positive functions k_h(ϕ_s) and π(ϕ_s) represent the permeability and the osmotic pressure. This system is subject to a set of boundary conditions: [ p - π = p_ext - π_ext, Γ^t= Γ^t_+∪Γ^t_-,; (σ̂(𝐮_s) - p 𝐈) 𝐧 = 𝐭_ext, ∂ c ∂𝐧 =0, Γ^t_+,; [1ex] 𝐮_s = 0, c = c_0, Γ^t_-, ] where 𝐧 is the outer unit normal and σ̂(𝐮_s)= λ Tr (ε(𝐮_s)) 𝐈 + 2 μ ε(𝐮_s), ε_ij(𝐮)= 1 2( ∂ u_i ∂ x_j + ∂ u_j ∂ x_i), i,j=1,…,n, n=2, 3, represent elastic stress and strain tensors. Boundary conditions for ϕ_f are required or not depending on the sign of 𝐯_f·𝐧 at the border. The displacement and velocity vectors have components 𝐮 = (u_1,…,u_n) and 𝐯 = (v_1,…,v_n), n= 2,3, respectively. All the parameters appearing in the model, k_s, K_s, k_c, K_c, μ, λ, d are positive constants. For ease of the reader, we have summarized the modeling in Appendix A. In some limits, the system can be reformulated as a poroelastic model <cit.>. The model is complemented with an equation for the dynamics of Γ^t, t>0. If we consider biofilms represented by the scheme in Figure <ref>(a), the contact points between biofilm, air and agar require specific additional information to avoid singularities. We will work with the geometry represented in Figure <ref>(b), that avoids this difficulty by introducing precursor layers <cit.>. Then, Γ^t_- is fixed. The upper boundary Γ^t_+ is parametrized by a height function h(x_1,x_2,t), which satisfies the equation <cit.> ∂ h ∂ t + ∂∂ x_1[ ∫_0^h (𝐯·𝐱̂_1) dx_3 ] + ∂∂ x_2[ ∫_0^h (𝐯·𝐱̂_2) dx_3 ] = 𝐯·𝐱̂_3|_0, where the composite velocity of the mixture 𝐯 = ϕ_f 𝐯_f + ϕ_s 𝐯_s has components 𝐯·𝐱̂_i = v_s,i - k_h(ϕ_s) ∂ (p-π) ∂ x_i, i=1,2,3. At present, only perturbation analyses and numerical studies are available for this type of models <cit.> in simple geometries. Asymptotic studies yield thin film type approximations for (<ref>)-(<ref>) assuming circular geometries and radial symmetry. Non standard lubrication equations for the height h are obtained, which admit families of self-similar solutions in radial geometries. However, the construction of reliable numerical solutions of the model in general experimental configurations faces difficulties due to the lack of well-posedness results. In this paper, we assume we know the dynamics of the upper boundary Γ_+^t, given by a smooth curve x_3 = h(x_1,x_2,t), and develop an existence and stability theory for the model equations. To simplify the analysis, we take k_h(ϕ_s) = k_h >0, k_h(ϕ_s)/ϕ_f = ξ_∞ >0 and π(ϕ_s) = Πϕ_s >0. In this quasi-stationary framework, the displacements 𝐮_s depend on time through the motion of the boundary. However, we lack equations for the velocities, other than the relation ∂𝐮_s ∂ t = 𝐯_s. In Section <ref> we obtain a system of equations characterizing the velocity: div(σ̂(𝐯_s)) = μΔ𝐯_s + (μ + λ) ∇ ( div(𝐯_s)) = ∇ p_t, , 𝐯_s = 0, , σ̂(𝐯_s) 𝐧 = ∂𝐠∂ t + 𝐫(𝐠,𝐮_s), with g= - p 𝐧 = -( p_ ext - π_ ext)𝐧 and 𝐫 to be defined later. A similar equation is obtained for p_t from the equation for p. Taking the divergence of the equations for 𝐮_s and 𝐯_s we find additional equations to close the system de dt = k_h (2 μ + λ) Δ e - k_h ΠΔϕ_s, , de_t dt = k_h (2 μ + λ) Δ e_t - k_h ΠΔϕ_s,t, , where e = div (𝐮_s) and e_t = div (𝐯_s). 
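For the reader's convenience, the computation behind the closure equation for e can be sketched in two lines (written here in explicit LaTeX form; it uses the constitutive simplifications k_h(ϕ_s) = k_h and π(ϕ_s) = Πϕ_s adopted above, and the analogous computation with 𝐯_s, p_t in place of 𝐮_s, p gives the equation for e_t):

\begin{align*}
\operatorname{div}\!\big(\mu\,\Delta \mathbf{u}_s + (\mu+\lambda)\,\nabla(\operatorname{div}\mathbf{u}_s)\big)
  = \operatorname{div}(\nabla p)
  \;&\Longrightarrow\; (2\mu+\lambda)\,\Delta e = \Delta p,
  \qquad e = \operatorname{div}\mathbf{u}_s,\\
k_h\,\Delta\big(p - \Pi\,\phi_s\big) = \operatorname{div}\mathbf{v}_s = \frac{\partial e}{\partial t}
  \;&\Longrightarrow\;
  \frac{\partial e}{\partial t} = k_h\,(2\mu+\lambda)\,\Delta e - k_h\,\Pi\,\Delta\phi_s.
\end{align*}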
We will neglect Δϕ_s,t in (<ref>) because Π and Δϕ_s are small compared to other terms. Notice that (<ref>) and (<ref>) are time dependent problems set in time dependent domains, while most results in the literature refer to fixed domains. The construction of solutions for such systems combines a number of difficulties that we will address in stages. Section <ref> characterizes the time derivatives of 𝐮_s and p, solutions of elliptic problems in time dependent domains, by means of additional boundary value problems. In this way we improve the stability of the model, since solving additional partial differential equations in each spatial domain is more effective than approximating time derivatives by quotients of differences of solutions calculated in variable spatial domains. Section <ref> establishes well posedness results for linear parabolic problems (<ref>) set in domains with moving boundaries for specific types of parametrizations. Section <ref> considers the elliptic and stationary transport problems involved in the quasi-stationary submodels, separately and in fixed domains, under hypotheses motivated by asymptotic studies and numerical solutions. Finally, section <ref> considers the full coupled time dependent problem and section <ref> discusses our conclusions and open issues. A final appendix summarizes modeling details. § DIFFERENTIATION OF QUASI-STATIONARY PROBLEMS In the previous section, we have defined the velocity 𝐯_s as the time derivative of the displacement 𝐮_s. The change in time of 𝐮_s is due to the motion of the upper boundary Γ^t_+, that is, time variations in h. In this section we seek an equation characterizing 𝐯_s. We expect 𝐯_s to solve the same boundary value problem as 𝐮_s, but differentiating all sources with respect to time. However, since the boundary Γ^t of Ω^t moves with time, we need to calculate the adequate boundary conditions too. In the region Ω^t occupied by the moving biofilm, the displacements 𝐮_s of the solid phase satisfy equations (<ref>) with boundary conditions (<ref>). To simplify later computations, it is convenient to recast these equations in the general linear elasticity framework. The components of the displacement u_j(t), j=1,…,n, n being the dimension, fulfill - ∂∂ x_α( c_j α m β∂ u_m(t) ∂ x_β) = f_j(t), j=1,…,n, , u_j(t) = 0, j=1,…,n, , c_j α m β∂ u_m(t) ∂ x_β n_α(t) =g_j(t), j=1,…,n, where 𝐧(t) is the outer unit normal vector and c_j α m β the elastic constants. Γ_n^t and Γ_d^t are parts of the boundary Γ^t where we enforce conditions on the stresses of the displacements, respectively. We use the Einstein summation convention that implies summation over a set of indexed terms in a formula when repeated in it. In the above equations, summation over α, β, m is implied, but not over j. The elastic constants c_jα m β for a isotropic solids like the ones we consider are c_j α m β=λδ_j αδ_m β+μ (δ_jmδ_αβ +δ_j βδ_α m) where δ_jm stands for the Kronecker delta, whereas λ and μ represent the Lamé constants. The stress tensor is σ_jα = c_j α m βε_mβ = λδ_j αε_pp + 2 με_jα. In this framework, the velocity 𝐯 is the `Frèchet derivative' or `domain derivative' of 𝐮 with respect to t <cit.>, which is characterized by the solution of a boundary value problem, as we show next. Theorem 2.1. We assume that the body 𝐟 and boundary 𝐠 forces are differentiable in time, with values in [L^2(Ω^t)]^n and [L^2(Γ^t)]^n, respectively, with t>0, n=2,3 being the dimension. Moreover, the C^2 boundaries Γ^t are obtained deforming Γ^0 along a smooth vector field ν. 
Then, the time derivative 𝐯(t)= ∂𝐮(t) ∂ t, t>0, of the displacement given by (<ref>) satisfies - ∂∂ x_α( c_j α m β∂ v_m(t) ∂ x_β) = ∂ f_j(t) ∂ t, j=1,..,n, 𝐱∈Ω^t, v_j(t) = 0, j=1,..,n, 𝐱∈Γ_d^t, c_j α m β∂ v_m(t) ∂ x_β n_α(t) = ∂ g_j(t) ∂ t + r_j(g_j(t),𝐮(t)), j=1,..,n, 𝐱∈Γ_n^t, where [ r_j =c_jα mβ∂ u_m(t)∂ x_β∂ν_q ∂ x_α n_q(t) + c_jα mβ∂ u_m(t) ∂ x_β∂ (ν_p n_α(t)) ∂ x_p; 5mm + c_j α m β∂ u_m(t) ∂ x_β∂ν_p ∂ x_p n_α(t) - g_j(t) 𝐧(t)^T ∇ν 𝐧(t), j=1,…,n. ] As a corollary, we get the expressions of interest for our model. Corollary 2.2. Under the previous hypotheses, the time derivative 𝐯_s(t), t>0, of the solution 𝐮_s of (<ref>) with boundary conditions (<ref>) satisfies div( σ̂(𝐯_s))= μΔ𝐯_s + (μ + λ) ∇ ( div(𝐯_s)) = ∇ p_t, 𝐱∈Ω^t, 𝐯_s = 0, 𝐱∈Γ_-^t, σ̂(𝐯_s) 𝐧 = ∂ g_j ∂ t + r_j(g_j,𝐮_s), j=1,2 𝐱∈Γ_+^t, with g= - p 𝐧 = -( p_ ext - π_ ext)𝐧 and 𝐫 is defined by (<ref>) with c_j α m β= λδ_j αδ_m β+μ(δ_jmδ_αβ +δ_j βδ_α m). In practice, our moving boundaries are given by parametrizations of the form x_3=h(x_1,x_2,t). Therefore, the field ν∼ (0,0,h_t(x_1,x_2,t)) and 𝐧 = (h_x_1(x_1,x_2,t), h_x_2(x_1,x_2,t), -1) √(h_x_1(x_1,x_2,t)^2 +h_x_2(x_1,x_2,t)^2 +1). Thus, r_j = λ∂ u_j ∂ x_j∂ν_3 ∂ x_j n_3 + μ( ∂ u_j ∂ x_α∂ν_3 ∂ x_α +∂ u_m ∂ x_m∂ν_3 ∂ x_j) n_3 - d g_j dt( n_1∂ν_3 ∂ν_1 + n_2∂ν_3 ∂ν_2) n_3. Corollary 2.3 Under the previous hypotheses, assuming k_k(ϕ_s)=k_h and π(ϕ_s) = Πϕ_s, the derivative p_t(t)= ∂ p(t) ∂ t, t>0, of the solution p of (<ref>) with Dirichlet boundary conditions p= p_ext(t) satisfies, [ k_h Δ p_t = div(𝐯_s,t) + k_h ΠΔϕ_s,t + , 𝐱∈Ω^t,; [1ex] p_t = p_ ext'(t), 𝐱∈Γ^t. ] Proof of Theorem 2.1. We will follow a similar variational approach to that employed in <cit.> for 2D exterior elasticity problems with zero Dirichlet boundary conditions on a moving boundary. We are going to calculate the derivative at t=0. Similar arguments hold for any t>0. Step 1: Variational formulation. First, we write the boundary value problem for 𝐮 in variational form <cit.>. The boundary value problem (<ref>) becomes: Find 𝐮^t ∈ [H^1_Γ_d^t(Ω^t)]^n such that b^t(Ω^t; 𝐮^t, 𝐰^t)= ℓ^t(Ω^t;𝐰^t), ∀ 𝐰^t ∈ [H^1_Γ_d^t(Ω^t)]^n, where b^t(Ω^t; 𝐮^t,𝐰^t) = ∫_Ω^t c_j α m β∂ u_m^t∂ x_β^t ∂w_j^t ∂ x_α^t d𝐱^t, ∀ 𝐮^t, 𝐰^t ∈ [H^1_Γ_d^t(Ω^t)]^n, ℓ^t(Ω^t; 𝐰^t)= ∫_Ω^t f_j(t) w_j^t d𝐱^t + ∫_Γ_n^t g_j(t) w_j^t d𝐒_𝐱^t, ∀ 𝐰^t ∈ [H^1_Γ_d^t(Ω^t)]^n. Here, H^1_Γ_d^t(Ω^t) denotes the usual Sobolev space of H^1(Ω^t) functions vanishing on Γ_d^t ⊂∂Ω^t. H^1(Ω^t) if formed by all functions whose square, and the squares of their derivatives, are integrable in Ω^t, that is, belong to L^2(Ω^t). When 𝐟(t) ∈ [L^2(Ω^t)]^n, 𝐠∈ [L^2(Γ^t)]^n and meas(Γ_d^t)≠ 0, this problem admits a unique solution 𝐮^t ∈ [H^1_Γ_d^t(Ω^t)]^n <cit.>, which in fact belongs to [H^2(Ω^t)]^n, vanishes on Γ_d^t and satisfies σ (𝐮^t) 𝐧 = 𝐠 on Γ_n^t= ∂Ω^t ∖Γ_d^t. For t=0, we have u^0. Here, σ_α j (𝐮^t) = c_j α m β∂ u_m^t ∂ x_β. Step 2: Change of variables. We now transform all the quantities appearing in (<ref>)-(<ref>) back to the initial configuration Ω^0. The process is similar to transforming deformed configurations back to a reference configuration in continuum mechanics <cit.>. We are assumig that the evolution of the moving part of the boundary Γ^t = {𝐱 + t ν(𝐱) | 𝐱∈Γ^0 } is given by a family of deformations 𝐱^t = ϕ^t(𝐱) = 𝐱 + t ν(𝐱) starting from a smooth surface Γ^0 ∈ C^2 (twice differentiable) and following a smooth vector field ν∈ C^2 (Ω), Ω^t ⊂Ω, t>0. 
The deformation gradient is the jacobian of the change of variables <cit.> 𝐉^t(𝐱) = ∇_𝐱ϕ^t(𝐱) = (∂ x^t_i ∂ x_j(𝐱) ) = 𝐈 + t ∇ν(𝐱), and its inverse (𝐉^t)^-1 = (∂ x_i ∂ x^t_j) is the jacobian of the inverse change of variables. Then, volume and surface elements are related by d 𝐱^t = det 𝐉^t(𝐱) d 𝐱, d S_𝐱^t = det 𝐉^t(𝐱) (𝐉^t(𝐱))^-T𝐧 dS_𝐱 and the chain rule for derivatives reads ∇_𝐱 u_m(𝐱^t(𝐱)) = (J^t(𝐱))^T ∇_𝐱^t u_m(𝐱^t(𝐱)), that is, ∇_𝐱^t u_m = (𝐉^t)^-T∇_𝐱 u_m. For each component we have ∂ u_m ∂ x_β^t(𝐱^t(𝐱)) = ∂ u_m ∂ x_k(𝐱^t(𝐱)) (J^t)^-1_kβ(𝐱). We define 𝐮̃(𝐱)= 𝐮^t ∘ϕ^t (𝐱) = 𝐮^t (𝐱^t(𝐱)), definition that extends to 𝐰̃ and other functions. Changing variables and using (<ref>)-(<ref>) we have: b^t(Ω^t; 𝐮^t,𝐰^t) = ∫_Ω^t c_j α m β∂ u_m^t∂ x_β^t (𝐱^t) ∂ w_j^t ∂ x_α^t (𝐱^t) d𝐱^t = ∫_Ω^0 c_j α m β∂ũ_m∂ x_p(𝐱) (J^t)^-1_p β(𝐱) ∂w̃_j ∂ x_q(𝐱) (J^t)^-1_q α(𝐱) det 𝐉^t(𝐱) d 𝐱 = b̃^t(Ω^0; 𝐮̃,𝐰̃) ℓ^t(Ω^t; 𝐰^t) = ∫_Ω^t f_j(𝐱^t,t) w_j^t(𝐱^t) d𝐱^t + ∫_Γ_n^t g_j(𝐱^t,t) w_j^t(𝐱^t) d S_𝐱^t = ∫_Ω^0 -2mm f̃_j(𝐱,t) w̃_j(𝐱) det 𝐉^t d 𝐱+∫_Γ_n^0 -2mm g̃_j(𝐱,t) w̃_j(𝐱) det 𝐉^t (𝐉^t)^-T𝐧 dS_𝐱 = ℓ̃^t(Ω^0; 𝐰̃). For arbitrary test functions 𝐰^t ∈ [H^1_Γ_d^t(Ω^t)]^n, 𝐰̃∈ [H^1_Γ_d^t(Ω^0)]^n is a test function in Ω^0. Therefore, we obtain the equivalent variational formulation: Find 𝐮̃∈ [H^1_Γ_d^t(Ω^0)]^n such that b̃^t(Ω^0; 𝐮̃, 𝐰)= ℓ̃^t(Ω^0;𝐰), ∀ 𝐰∈ [H^1_Γ_d^t(Ω^0)]^n, with b̃^t(Ω^0; 𝐮̃, 𝐰) and ℓ̃^t(Ω^0;𝐰) defined in (<ref>)-(<ref>) replacing 𝐰̃ by 𝐰. Let us analyze the dependence on t of the terms appearing in the expression for b̃^t and ℓ̃^t. From the definitions of the Jacobian matrices (<ref>) we obtain <cit.> det 𝐉^t(𝐱) = 1 + t div(ν(𝐱) ) + O(t^2), (𝐉^t)^-1(𝐱) = 𝐈 - t ∇ν(𝐱) + O(t^2), det 𝐉^t(𝐱) (𝐉^t(𝐱))^-T𝐧 = 1 + t div_Γ(ν(𝐱)) + O(t^2), where div_Γ(ν(𝐱)) = div(ν(𝐱)) - 𝐧^T ∇ν(𝐱) 𝐧. Inserting (<ref>)-(<ref>) in (<ref>) we find the following expansions. When p=β and q=α we get ∫_Ω^0 c_j α m β∂ũ_m∂ x_β∂ w_j ∂ x_α d 𝐱 + t ∫_Ω^0 c_j α m β∂ũ_m ∂ x_β∂ w_j ∂ x_α div(ν) d 𝐱 - t ∫_Ω^0 c_j α m β[ ∂ũ_m∂ x_β∂ν_β∂ x_β∂ w_j ∂ x_α + ∂ũ_m ∂ x_β∂ w_j ∂ x_α∂ν_α∂ x_α] d 𝐱 + O(t^2), whose leading term is b^0(Ω^0; 𝐮̃, 𝐰 ). When p≠β and q ≠α the summands are O(t^2). The remaining terms provide the contribution -t ∫_Ω^0 c_j α m β[ ∂ũ_m ∂ x_p∂ν_p∂ x_β∂ w_j ∂ x_α + ∂ũ_m ∂ x_β∂ w_j ∂ x_q∂ν_q∂ x_α] d 𝐱 + O(t^2), with p ≠β, q=α in the first one and q ≠α, p=β in the second one. Adding up the contributions we get b̃^t(Ω^0; 𝐮̃, 𝐰 ) = b^0(Ω^0; 𝐮̃, 𝐰 ) + t[I_1(𝐮̃)+I_2(𝐮̃)+I_3(𝐮̃)] +O(t^2), where [ I_1(𝐮̃) = ∫_Ω^0 c_j α m β∂ũ_m ∂ x_β∂ w_j ∂ x_α div(ν) d 𝐱,; I_2(𝐮̃) = - ∫_Ω^0 c_j α m β∂ũ_m ∂ x_p∂ν_p∂ x_β∂ w_j ∂ x_α d 𝐱,; I_3(𝐮̃) = - ∫_Ω^0 c_j α m β∂ũ_m ∂ x_β∂ w_j ∂ x_q ∂ν_q ∂ x_α d 𝐱 = ∫_Ω^0∂∂ x_α(c_j α m β∂ũ_m ∂ x_β) ∂ w_j ∂ x_q ν_q d 𝐱; + ∫_Ω^0 c_j α m β∂ũ_m∂ x_β∂^2 w_j ∂ x_α∂ x_qν_q d 𝐱 - ∫_∂Ω^0 c_j α m β∂ũ_m ∂ x_β n_α∂ w_j ∂ x_q ν_q d S_𝐱. ] Similarly, from the definition (<ref>) of the linear form ℓ̃^t and the definition of 'material derivative' 𝐟̇ 𝐟̃(𝐱,t) = 𝐟(𝐱^t(𝐱),t) = 𝐟(𝐱,0) + t 𝐟̇(𝐱,0) + O(t^2), we find the expansion [ ℓ̃^t(Ω^0; 𝐰 ) = ∫_Ω^0 f_j(0) w_j d 𝐱 + t ∫_Ω^0 [ f_j(0) div(ν) + ḟ_j(0) ] w_j d 𝐱; [1.5ex] + ∫_Γ_n^0 g_j(0) w_j d S_𝐱 + t ∫_Γ_n^0 [ g_j(0) div_Γ(ν) + ġ_j(0) ] w_j d S_𝐱 + O(t^2) ] whose leading term is ℓ^0(Ω^0; 𝐰). Step 3. Variational problem for the domain derivative 𝐮'. Let us compare the transformed function 𝐮̃ and the solution 𝐮^0 of b^0(Ω^0;𝐮^0, 𝐰) = ℓ^0(Ω^0;𝐰). For any 𝐰∈ [H^1_Γ_d^t(Ω^0)]^n we have b^0(Ω^0;𝐮̃- 𝐮^0, 𝐰) = b^0(Ω^0;𝐮̃, 𝐰) - ℓ^0(Ω^0;𝐰) = b^0(Ω^0;𝐮̃, 𝐰) - b̃^t(Ω^0;𝐮̃, 𝐰) + ℓ̃^t(Ω^0;𝐰) - ℓ^0(Ω^0;𝐰). 
Well posedness of the variational problems (<ref>) with respect to changes in domains Ω^t and sources 𝐟(t), 𝐠(t), implies uniform bounds on the solutions for t ∈ [0,T]: 𝐮^t_[H^1(Ω^t)]^n≤ C(T), 𝐮̃ _[H^1(Ω^0)]^n≤ C(T). Expansions (<ref>)-(<ref>) show that the right hand side in (<ref>) tends to zero as t → 0. Well posedness of the variational problem again implies 𝐮̃→𝐮^0 in [H^1_Γ_d^t(Ω^0)]^n as t→ 0. Dividing by t equation (<ref>) and using (<ref>)-(<ref>), we find [ b^0(Ω^0;𝐮̃- 𝐮^0 t, 𝐰) = 1 t [b^0(Ω^0;𝐮̃, 𝐰) - b̃^t(Ω^0;𝐮̃, 𝐰)] + 1 t [ ℓ̃^t(Ω^0;𝐰) -ℓ^0(Ω^0;𝐰)]; [1.5ex] = - [I_1(𝐮̃)+I_2(𝐮̃)+I_3(𝐮̃)]+ ∫_Ω^0 [ f_j(0) div(ν) + ḟ_j(0) ] w_j d 𝐱; [1.5ex] + ∫_Γ_n^0 [ g_j(0) div_Γ(ν) + ġ_j(0) ] w_j d S_𝐱+ O(t). ] Then, the limit 𝐮̇ = lim_t → 0𝐮̃- 𝐮^0 t satisfies [ b^0(Ω^0;𝐮̇, 𝐰) = ∫_Ω^0 [ f_j(0) div(ν) + ḟ_j(0) ] w_j d 𝐱 - [I_1(𝐮^0)+I_2(𝐮^0)+I_3(𝐮^0)]; [1.5ex] + ∫_Γ_n^0 [ g_j(0) div_Γ(ν) + ġ_j(0) ] w_j d S_𝐱. ] As before, the function 𝐮̇ is the so called `material derivative', that is, 𝐮̇ = ∂𝐮∂ t + ∇𝐮^0 ν. The domain derivative becomes 𝐮' = 𝐮̇ - ∇𝐮^0 ν. Then, b^0(Ω^0; 𝐮', 𝐰) = b^0(Ω^0;𝐮̇, 𝐰) - b^0(Ω^0;∇𝐮^0 ν, 𝐰), where b^0(Ω^0;∇𝐮^0 ν, 𝐰) = ∫_Ω^0∂∂ x_β( c_j α m β∂ u_m^0∂ x_p ν_p) ∂ w_j ∂ x_α d 𝐱. Notice that this function vanishes on Γ_d whenever 𝐮̇ and ν do so. Step 4. Differential equation for the domain derivative 𝐮'. We evaluate the different terms in the right hand side of (<ref>) to calculate the right hand side in (<ref>). First, notice that -∂∂ x_α(c_j α m β∂ u_m^0 ∂ x_β) = f_j(0) in Ω^0 and c_j α m β∂ u_m^0 ∂ x_β n_α= g_j(0) on Γ_n^0, u_j^0=0 on Γ_d^0, j=1,...,n, imply: [ I_3(𝐮^0) = ∫_Ω^0( ∂ f_j(0) ∂ x_qν_q + f_j(0) ∂ν_q ∂ x_q) w_j d 𝐱 - ∫_∂Ω^0 f_j(0) w_j n_q ν_q d 𝐱; [1.5ex] - ∫_Γ_n^0 g_j(0) ∂ w_j ∂ x_q ν_q d S_𝐱 + ∫_Ω^0 c_j α m β∂ u_m^0∂ x_β∂^2 w_j ∂ x_q ∂ x_αν_q d 𝐱. ] Using ∂ u_m^0∂ x_p∂ν_p ∂ x_β = ∂∂ x_β(∂ u_m^0 ∂ x_p ν_p ) - ∂^2 u_m^0∂ x_p ∂ x_βν_p, we get [ I_2(𝐮^0) = - b^0(Ω^0;∇𝐮^0 ν, 𝐰) - ∫_Ω^0 c_j α m β∂ u_m^0∂ x_β∂ν_p∂ x_p∂ w_j ∂ x_α d 𝐱; [1.5ex] - ∫_Ω^0 c_j α m β∂ u_m^0∂ x_βν_p∂^2 w_j ∂ x_α∂ x_p d 𝐱 + ∫_∂Ω0 c_j α m β∂ u_m^0∂ x_βν_p n_p ∂ w_j ∂ x_α d 𝐱. ] As a result of the two previous identities [ - [I_1(𝐮^0)+I_2(𝐮^0)+I_3(𝐮^0)]= b^0(Ω^0;∇𝐮^0 ν, 𝐰) - ∫_∂Ω^0 c_j α m β∂ u_m^0∂ x_βν_p n_p ∂ w_j ∂ x_α d 𝐱; -∫_Ω^0( ∂ f_j(0) ∂ x_qν_q + f_j(0) ∂ν_q ∂ x_q) w_j d 𝐱 + ∫_∂Ω^0 f_j(0) w_j n_q ν_q d 𝐱 + ∫_Γ_n^0 g_j(0) ∂ w_j ∂ x_q ν_q d S_𝐱 ] and (<ref>) becomes [ b^0(Ω^0; 𝐮', 𝐰) = - ∫_∂Ω^0 c_j α m β∂ u_m^0∂ x_βν_p n_p ∂ w_j ∂ x_α d 𝐱 + ∫_∂Ω^0 f_j(0) w_j n_q ν_q d 𝐱 +; [1.5ex] ∫_Ω^0 f_j'(0) w_j d 𝐱 + ∫_Γ_n^0 [ g_j(0) div_Γ(ν) + ġ_j(0) ] w_j d S_𝐱 + ∫_Γ_n^0 g_j(0) ∂ w_j ∂ x_q ν_q d S_𝐱. ] Integrating by parts in b^0(Ω^0; 𝐮', 𝐰) and choosing 𝐰 with compact support inside Ω^0, this identity yields the following equation for 𝐮' in Ω^0 - ∂∂ x_α(c_j α m β∂ u_m' ∂ x_β(𝐱) ) = f_j'(𝐱,0), j=1,...,n. However, to obtain a pointwise boundary condition for 𝐮' we need to rewrite the integral on ∂Ω^0 in such a way that no derivatives of the test function 𝐰 are involved. Step 5: Boundary condition for the domain derivative 𝐮'. 
We integrate by parts the original expressions of I_i(𝐮^0), i=1,2,3 to get [ I_1 = - ∫_Ω^0∂∂ x_α( c_j α m β∂ u_m^0∂ x_β div(ν) ) w_j d 𝐱 + ∫_∂Ω^0 c_j α m β∂ u_m^0 ∂ x_β div(ν) n_α w_j d S_𝐱, ] -5mm [ I_2 = - ∫_Ω^0 c_jα mβ∂∂ x_β(∂ u_m^0 ∂ x_pν_p ) ∂ w_j ∂ x_α d 𝐱 - ∫_Ω^0 c_jα mβ∂∂ x_α∂∂ x_p( ∂ u_m^0 ∂ x_βν_p ) w_j d 𝐱; [1.5ex] + ∫_Ω^0 c_jα mβ∂∂ x_α( ∂ u_m^0 ∂ x_β∂ν_p ∂ x_p) w_j d 𝐱 + ∫_∂Ω^0 c_jα mβ∂^2 u_m^0 ∂ x_p ∂ x_βν_p w_j n_α d 𝐱 ] -4mm [ I_3 = ∫_Ω^0∂∂ x_q∂∂ x_α( c_jα mβ∂ u_m^0 ∂ x_βν_q ) w_j d 𝐱 + ∫_Ω^0∂∂ x_q( f_j(0) ν_q ) w_j d 𝐱; [1.5ex] - ∫_∂Ω^0 c_jα mβ∂ u_m^0 ∂ x_β∂ν_q ∂ x_α n_q w_j d S_𝐱 ] b^0(Ω^0; ∇𝐮^0 ν, 𝐰) - ∫_Ω^0( ∂∂ x_q f_j(0) ν_q + f_j(0) ∂ν_q ∂ x_q) w_j d 𝐱 + ∫_∂Ω^0 c_jα mβ∂ u_m^0 ∂ x_β∂ν_q ∂ x_α n_q w_j d S_𝐱 - ∫_∂Ω^0 c_jα mβ∂^2 u_m^0 ∂ x_p ∂ x_βν_p w_j n_α d 𝐱 + ∫_∂Ω^0 c_j α m β∂ u_m^0 ∂ x_β div(ν) n_α w_j d S_𝐱. We integrate by parts b^0(Ω^0; 𝐮', 𝐰) to get - ∫_Ω^0∂∂ x_α( c_j α m β∂ u_m' ∂ x_β) w_j d 𝐱 + ∫_Γ_n^0 c_j α m β∂ u_m' ∂ x_β n_α w_j d S_𝐱. Adding up to compute -[I_1+I_2+I_3], integrating by parts b^0(Ω^0; 𝐮', 𝐰), inserting (<ref>) in (<ref>) and setting ν=0 on Γ_d we find [ ∫_Γ_n^0 c_j α m β∂ u_m' ∂ x_β n_α w_j d S_𝐱 = ∫_∂Ω^0 c_jα mβ∂ u_m^0 ∂ x_β∂ν_q ∂ x_α n_q w_j d S_𝐱; [1.5ex] - ∫_∂Ω^0 c_jα mβ [ ∂^2 u_m^0 ∂ x_p ∂ x_βν_p n_α + ∂ u_m^0 ∂ x_β div(ν) n_α] w_j d S_𝐱; [1.5ex] + ∫_Γ_n^0 [ g_j(0) div_Γ(ν) + ġ_j(0) ] w_j d S_𝐱. ] Now, using identifies - c_jα mβ∂^2 u_m^0 ∂ x_p ∂ x_βν_p n_α = - ∂∂ x_p (g_j(0) ν_p) + c_jα mβ∂ u_m^0 ∂ x_β∂ (ν_p n_α) ∂ x_p, and g_j(0) div_Γ(ν) + ġ_j(0) = ∂∂ x_p (g_j(0) ν_p) - g_j(0) 𝐧^T ∇ν 𝐧 + g'_j(0), we obtain [ c_j α m β∂ u_m' ∂ x_β n_α = c_jα mβ∂ u_m^0 ∂ x_β∂ν_q ∂ x_α n_q + c_jα mβ∂ u_m^0 ∂ x_β∂ (ν_p n_α) ∂ x_p; [1.5ex] + c_j α m β∂ u_m^0 ∂ x_β∂ν_p ∂ x_p n_α - g_j(0) 𝐧^T ∇ν 𝐧 + g'_j(0) ] on Γ_n^0. □ § STUDY OF DIFFUSION PROBLEMS IN TIME DEPENDENT DOMAINS We study here parabolic problems of the form [ e_t - κΔ e = f(𝐱, t), 𝐱∈Ω^t, t>0,; e = g(t), 𝐱∈Γ^t, t>0,; e(𝐱, 0) = e_0, 𝐱∈Ω^t. ] As in Section <ref> we assume that the evolution of the moving part of the boundary is given by a family of deformations <cit.> Γ^t = {𝐱 + t ν(𝐱) | 𝐱∈Γ^0 }, starting from a smooth surface Γ^0 ∈ C^2 (twice differentiable) and following a smooth vector field ν∈ C(Ω^0) ∪ C^2(Ω^0). We can assume e(t)=0 by making the change e = ê + g. Then ê solves (<ref>) with zero Dirichlet boundary condition, initial datum e_0(𝐱)-g(0) and right hand side f(𝐱, t)-g'(t). Therefore, we will work with zero Dirichlet boundary conditions in the sequel. To solve (<ref>) we will first refer it to a fixed domain and then construct converging Faedo-Galerkin approximations. §.§ Variational formulation in the undeformed configuration As usual, we denote as H^1_0(Ω^t) the subspace of H^1(Ω^t) formed by functions whose trace vanishes on Γ^t with the induced norm. Multiplying (<ref>) by w^t ∈ H^1_0(Ω^t) and integrating, we find [ [ ∫_Ω^t e_t(𝐱^t,t) w^t(𝐱^t) d 𝐱^t +∫_Ω^t∇_𝐱^t e(𝐱^t,t) ∇_𝐱^t w^t(𝐱^t) d 𝐱^t =∫_Ω^t f(𝐱^t,t) w^t(𝐱^t) d 𝐱^t ] ] for each t. We use (<ref>), (<ref>), (<ref>) to refer these integrals to a fixed domain. The jacobian of the change of variables is the deformation gradient 𝐉^t(𝐱) = ∇_𝐱ϕ^t(𝐱) = (∂ x^t_i ∂ x_j(𝐱) ) = 𝐈 + t ∇ν(𝐱), and its inverse (𝐉^t)^-1 = (∂ x_i ∂ x^t_j) is the jacobian of the inverse change of variables. Then, volume and surface elements are related by d 𝐱^t = det 𝐉^t(𝐱) d 𝐱, d S_𝐱^t = det 𝐉^t(𝐱) (𝐉^t(𝐱))^-T𝐧 dS_𝐱, and the chain rule for derivatives reads ∇_𝐱 e(𝐱^t(𝐱)) = (J^t(𝐱))^T ∇_𝐱^t e(𝐱^t(𝐱)), that is, ∇_𝐱^t e = (𝐉^t)^-T∇_𝐱 e. 
For each component we have ∂ e ∂ x_α^t(𝐱^t(𝐱)) = ∂ e ∂ x_k(𝐱^t(𝐱)) (J^t)^-1_kα(𝐱). Changing variables we have: [ ∫_Ω^t∇_𝐱 e(𝐱^t(𝐱))^T ∇_𝐱 w(𝐱^t(𝐱)) d𝐱^t =; [2ex] ∫_Ω^0∇_𝐱 e^T (𝐉^t)^-T (𝐉^t)^-T∇_𝐱 e det 𝐉^t(𝐱) d 𝐱, ] where we assume the repeated index summing rule, that is, sum over repeated indices is intended. We define w̃(𝐱)= w^t ∘ϕ^t (𝐱) = w^t (𝐱^t(𝐱)), ϕ^t as in (<ref>). Notice that [ e_t(𝐱^t(𝐱),t) = d dt[e(𝐱^t(𝐱),t)] - ∇_𝐱^t e(𝐱^t(𝐱),t)^T d 𝐱^t dt; = d dtẽ(𝐱,t) - (𝐉^t)^-T∇_𝐱ẽ(𝐱,t)^T ν̃(𝐱). ] After changing variables, problem (<ref>) reads: Find e ∈ C([0,T],L^2(Ω^0)) ∩ L^2(0,T;H^1_0(Ω^0)) such that e(𝐱, 0) = e_0(𝐱) and [ ∫_Ω^0ẽ_t(𝐱,t) w̃(𝐱) det 𝐉^t(𝐱) d 𝐱 - ∫_Ω^0∇_𝐱ẽ(𝐱,t)^T (𝐉^t(𝐱))^-1ν̃(𝐱) w̃(𝐱) det 𝐉^t(𝐱) d 𝐱; [2ex] + ∫_Ω^0∇_𝐱 e(𝐱,t)^T ((𝐉^t(𝐱))^T 𝐉^t(𝐱))^-1∇_𝐱 e(𝐱,t) w̃(𝐱) det 𝐉^t(𝐱) d 𝐱; [2ex] = ∫_Ω^0f̃(𝐱,t) w̃(𝐱) det 𝐉^t(𝐱) d 𝐱. ] Since w^t ∈ H^1_0(Ω^t), we have w̃∈ H^1_0(Ω^0). In fact, we can take the same arbitrary function w ∈ H^1_0(Ω^0) for all t. §.§ Construction of stable solutions Consider a basis {ϕ_1, ϕ_2, …, ϕ_M …} of the Hilbert space L^2(Ω). We choose the normalized eigenfunctions ϕ_j ∈ H^2(Ω) ∩ H^1_0(Ω), j∈ℕ, of -Δ in H^1_0(Ω), see <cit.>. 1mm Theorem 3.1 Let Ω⊂ℝ^n be an open and bounded C^2 domain. Given a function f ∈ C^1([0,T]; L^2(Ω)) there exists a unique solution u ∈ C([0,T]; H^2(Ω)) ∩ H^1(0,T;H^1_0(Ω)) of [ ∫_Ω u_t(𝐱,t) w(𝐱) c(𝐱,t) d 𝐱 + ∫_Ω∇ u(𝐱,t)^T 𝐛(𝐱,t) w(𝐱) d 𝐱 +; [2ex] ∫_Ω∇ u(𝐱,t)^T 𝐀(𝐱,t) ∇ w(𝐱) d 𝐱 = ∫_Ω f(𝐱,t) w(𝐱) d 𝐱, ] for all w ∈ H^1_0(Ω), t ∈ [0,T], provided * 𝐀(𝐱,t) ∈ C^1(Ω× [0,T]), 𝐛(𝐱,t) ∈ C^1(Ω× [0,T]) and c(𝐱,t) ∈ C^2(Ω× [0,T]), * the matrices 𝐂^M(t) with elements ∫_Ω c(t) ϕ_m ϕ_k d 𝐱, m,k=1, …, M, are invertible for t ∈ [0,T], * the matrices 𝐀(𝐱, t) are uniformly coercive, that is, ξ^T 𝐀(𝐱, t) ξ≥ a_0 |ξ|^2, a_0>0, for all ξ∈ℝ^n, and the scalar field c(𝐱,t) is bounded from below, c(𝐱,t) ≥ c_0 >0, for all 𝐱∈ℝ^n and t>0, * u_0 ∈ L^2(Ω) and w_0 = div(𝐀(𝐱, 0) ∇ u_0(𝐱)) + 𝐛(𝐱, 0)^T ∇ u_0(𝐱) ∈ L^2(Ω). Moreover, the solution depends continuously on parameters and data. We obtain a solution for the original time dependent problem set in a moving domain undoing the change of variables. 1mm Proof. Existence. We use the Faedo-Galerkin method <cit.>. First, we change variables u(𝐱,t) = e^λ t v(𝐱,t), u_t(𝐱,t) = e^λ t [v_t(𝐱,t) + λ v(𝐱,t)], with λ >0 to be selected large enough. We obtain similar variational equations for v with an additional term λ c v and g and f multiplied by e^-λ t. Then we seek approximate solutions v^M(𝐱,t) = ∑_m=1^M α_m(t) ϕ_m(𝐱) such that [ ∫_Ω c(𝐱, t) v^M_t(𝐱,t) w(𝐱) d 𝐱 + ∫_Ω∑_p,q=1^n a_pq(𝐱,t) ∂ v^M ∂ x_p(𝐱,t) ∂ w ∂ x_q(𝐱) d 𝐱; + ∫_Ωλ c(𝐱,t) v^M(𝐱,t) w(𝐱,t) d 𝐱 + ∫_Ω∑_p=1^n b_p(𝐱,t) ∂ v^M ∂ x_p(𝐱,t) w(𝐱) d 𝐱; = ∫_Ω e^-λ t f(𝐱,t) w(𝐱) d 𝐱,; v^M(𝐱, 0) = ∑_m=1^M α_m(0) ϕ_m(𝐱), α_m(0) = ∫_Ω u_0(𝐱) ϕ_m(𝐱) d 𝐱, ] for all w ∈ V^M= span{ϕ_1, ϕ_2, …, ϕ_M}. We find a system of M differential equations for the coefficient functions α_m(t) setting w = ϕ_k, k=1,…,M, [ ∑_m=1^M α_m'(t) ∫_Ω c(t) ϕ_m ϕ_k d 𝐱 = - ∑_m=1^M α_m(t) ∫_Ω∑_p=1^n b_p(t) ϕ_m∂ x_pϕ_k d 𝐱 -; ∑_m=1^M α_m(t) ∫_Ω[ ∑_p,q=1^n a_pq(t) ∂ϕ_m ∂ x_p∂ϕ_k ∂ x_q + λ c(t) ϕ_m ϕ_k ] d 𝐱 + ∫_Ω e^-λ t f(t) ϕ_k d 𝐱. ] This can be written as a linear system with continuous and bounded coefficients in [0,T] d dtα^M = 𝐂^M(t)^-1𝐀^M(t) α^M + 𝐂^M(t)^-1𝐠^M(t) + 𝐂^M(t)^-1𝐟^M(t) with initial datum α^M(0), which admits a unique solution α^M(t), t ∈ [0,T] <cit.>. 
Multiplying identity (<ref>) by α_k and adding over k, we obtain [ 1 2d dt∫_Ω c(𝐱, t) |v^M(𝐱,t)|^2 d 𝐱 + ∫_Ω∑_p,q=1^n a_pq(𝐱,t) ∂ v^M ∂ x_p(𝐱,t) ∂ v^M ∂ x_q(𝐱,t) d 𝐱; +∫_Ω -1mm (λ c - 1 2 c_t)(𝐱, t) |v^M(𝐱,t)|^2 d 𝐱 + ∫_Ω∑_p=1^n b_p(𝐱,t) ∂ v^M ∂ x_p(𝐱,t) v^M(𝐱,t) d 𝐱; = ∫_Ω e^-λ t f(𝐱,t) v^M(𝐱,t) d 𝐱. ] Integrating in [0,T] and using coercivity, lower bounds for A and c, L^∞ bounds, as well as Young's inequality <cit.>, we find c_0 2∫_Ω |v^M(𝐱,t)|^2 d 𝐱 + a_0 2∫_0^t ∫_Ω |∇ v^M(𝐱,t)|^2 d 𝐱 ds + λ c_0 2∫_0^t ∫_Ω |v^M(𝐱,t)|^2 d 𝐱 ds ≤c _L^∞_xt 2 v^M(0)_L^2(Ω) + 1 2 f_L^2(0,T,L^2(Ω)) for λ large enough depending on a_0, c_t _L^∞_xt, c_0, 𝐛_L^∞_xt, n. Gronwall inequality, and the fact that v^M(0) → u_0 in L^2, imply that v^M is bounded in L^∞(0,T,L^2(Ω)) and L^2(0,T,H^1_0(Ω)). We extract a subsequence v^M' converging a limit v weakly star in L^∞(0,T,L^2(Ω)) and weakly in L^2(0,T,H^1_0(Ω)). Moreover, d dt∫_Ω c(t) v^M'(t) ϕ_k d 𝐱 tends to d dt∫_Ω c(t) v(t) ϕ_k d 𝐱 in the sense of distributions in D'(0,T) for any k. Similar convegences hold for u^M' and u= e^λ tv. We undo the change in (<ref>), multiply by a function ψ∈ C_c^∞([0,T)), integrate over t and pass to the limit as M' →∞ to find [ - ∫_Ω c(𝐱, 0) u(𝐱,0) w(𝐱) ψ(0) d 𝐱 - ∫_0^t ∫_Ω c_t(𝐱,t) u(𝐱,t) w(𝐱) ψ(t) d 𝐱 ds +; ∫_0^t ∫_Ω[ ∑_p,q=1^n a_pq(𝐱,t) ∂ v ∂ x_p(𝐱,t) ∂ w ∂ x_q(𝐱) + ∑_p=1^n b_p(𝐱,t) ∂ v^M ∂ x_p(𝐱,t) w(𝐱)] ψ(t) d 𝐱 ds; = ∫_0^t ∫_Ω e^-λ t f(𝐱,t) w(𝐱) ψ(t) d 𝐱 ds, ] for any w ∈ H^1_0(Ω), so that the limiting solution satisfies the condition on the initial data and the equation c u_t - div(𝐀∇ u) + 𝐛^T ∇ u = f in the sense of distributions <cit.>. Uniqueness. To prove uniqueness, we assume there are two solutions u_1 and u_2, and set u=u_1-u_2. We subtract the equations satisfied by both, multiply by u, set u=e^λ tv and integrate over Ω to get [ 1 2d dt∫_Ω c(𝐱, t) |v(𝐱,t)|^2 d 𝐱 + ∫_Ω∑_p,q=1^n a_pq(𝐱,t) ∂ v ∂ x_p(𝐱,t) ∂ v ∂ x_q(𝐱,t) d 𝐱; + ∫_Ω∑_p=1^n b_p(𝐱,t) ∂ v ∂ x_p(𝐱,t) v(𝐱,t) d 𝐱+∫_Ω -1mm (λ c -1 2 c_t)(𝐱, t) |v(𝐱,t)|^2 d 𝐱 = 0. ] Using uniform coercivity, the L^∞ bounds, and taking λ large enough, we see that ∫_Ω c(𝐱, t) |v(𝐱,t)|^2 ≤∫_Ω c(𝐱, 0) |v(𝐱,0)|^2 =0. Therefore, the solution is unique. Regularity. Next, we differentiate with respect to t to get [ d dt∫_Ω u_t(𝐱,t) w(𝐱) c(𝐱,t) d 𝐱 + ∫_Ω∇ u_t(𝐱,t)^T 𝐛(𝐱,t) w(𝐱) d 𝐱; [2ex] + ∫_Ω∇ u_t(𝐱,t)^T 𝐀(𝐱,t) ∇ w(𝐱) d 𝐱 + ∫_Ω u_t(𝐱,t) w(𝐱) c_t(𝐱,t) d 𝐱; [2ex] = ∫_Ω f_t(𝐱,t) w(𝐱) d 𝐱 - ∫_Ω u(𝐱,t) w(𝐱) c_tt(𝐱,t) d 𝐱; [2ex] + ∫_Ω∇ u(𝐱,t)^T 𝐛_t(𝐱,t) w(𝐱) d 𝐱 + ∫_Ω∇ u(𝐱,t)^T 𝐀_t(𝐱,t) ∇ w(𝐱) d 𝐱, ] with u_t(𝐱, 0) = w_0(𝐱). The functions ∇ u^T 𝐛_t, ∇ u^T 𝐀_t, u c_tt define linear forms in H^1(Ω). Arguing as in Theorem 3.1, we see that the function u_t is the unique solution in C([0,T];L^2(Ω)) ∩ L^2(0,T;H^1_0(Ω)) of this problem. Then, (<ref>) implies that - div(𝐀∇ u) + 𝐛^t ∇ u = -c u_t + f ∈ C([0,T];L^2(Ω)) zero Dirichlet boundary condition. Elliptic regularity theory ensures that u ∈ C([0,T];H^2(Ω)). Stability. The limiting solution inherits all the bounds established on the approximating sequence. Therefore its L^∞([0,T];H^2(Ω)) and H^1(0,T,H^1_0(Ω)) norms are bounded from above in terms of constants depending on the parameters of the problem and the norms of the data. □ 2mm Theorem 3.2 Under the hypotheses of Theorem 3.1, if f∈ L^q(Ω× [0,T]) and u_0 ∈ L^q(Ω), then u, its first and second order spatial derivatives, and u_t belong to L^q(Ω× [0,T]), 1<q<∞. Proof. We set v/c = u. Then ∇ u = ∇ v/c - v/c^2 ∇ c and cu_t = v_t - c_t/c v. 
Therefore, v is a solution of v_t - div(𝐀 c∇ v ) + 𝐀∇ c c^2∇ v + 𝐛^T c∇ v + [ div(𝐀∇ c c^2) + 𝐛^T ∇ c c^2 - c_t c] v = f. The result is a consequence of the regularity result stated in Theorem 9.1 in <cit.>. □ 2mm Theorem 3.3 Under the hypotheses of Theorem 3.2, if f∈ L^∞(Ω× [0,T]) and u_0 ∈ L^∞(Ω), then the solution u ∈ L^∞(Ω× [0,T]). As a result, u ∈ L^∞ ([0,T], L^q(Ω)), 1 ≤ q < ∞. Proof. Let M = max( f _L^∞_𝐱,t, u_0 _L^∞_𝐱)/(λ c_0). Then, v = e^-λ t u-M solves c v_t - div(𝐀∇ v) + 𝐛^T ∇ v + λ c v = e^-λ t f - λ c M ≤ 0 and v(0) ≤ 0. Multiplying the equation by v^+ and integrating we get [ 1 2d dt∫_Ω c(𝐱, t) |v^+(𝐱,t)|^2 d 𝐱 + ∫_Ω∑_p,q=1^n a_pq(𝐱,t) ∂ v^+ ∂ x_p(𝐱,t) ∂ v^+ ∂ x_q(𝐱,t) d 𝐱; + ∫_Ω∑_p=1^n b_p(𝐱,t) ∂ v^+ ∂ x_p(𝐱,t) v(𝐱,t) d 𝐱+∫_Ω -1mm (λ c -1 2 c_t)(𝐱, t) |v^+(𝐱,t)|^2 d 𝐱≤ 0. ] Choosing λ large enough, v^+=0 and u ≤ M e^λ t. A similar argument with M = - max( f _L^∞_𝐱,t, u_0 _L^∞_𝐱)/(λ c_0) shows that u ≤ max( f _L^∞_𝐱,t, u_0 _L^∞_𝐱)e^λ t/(λ c_0). □ 2mm Corollary 3.3 Let Ω^t ⊂Ω⊂ℝ^n, t>0, be a family of open and bounded C^2 domains, with Γ_- fixed and Γ_+^t defined deforming a reference curve Γ_+^0 by means of the transformation 𝐱^t = 𝐱 + t ν(𝐱), ν∈ C(Ω), t>0, Assuming that u_0 ∈ L^∞(Ω^0) and the transformed functions f̃(𝐱,t)= f (𝐱^t(𝐱),t) ∈ C^1([0,T], L^2(Ω^0)) ∩ L^∞(Ω× [0,T]), there exists a unique e solution of (<ref>) such that ẽ(𝐱,t) ∈ C([0,T],H^2(Ω^0)) ∩ H^1(0,T;H^1_0(Ω^0)) ∩ L^∞(Ω× [0,T]). Proof. We apply Theorems 3.1-3.2 and use the explicit characterizations of the variable matrix, vector and coefficient fields to prove existence in an interval [0,t_ν], t_ν small enough depending on ν. This solution can then be successively extended until we cover [0,T]. □ § WELL POSEDNESS RESULTS FOR THE QUASI-STATIONARY SUBMODELS In this section we establish the pertinent existence and regularity results for the elliptic submodels and the stationary transport problem in fixed domains. Constructing solutions for the stationary transport problems considered here is a non trivial issue. We are able to obtain them by a regularization procedure under sign hypotheses on the velocity fields motivated by asymptotic studies, which will have to be preserved by any implemented scheme. §.§ Elliptic problems for displacements, velocities and concentrations Consider the first the submodel for mechanical fields: [ μΔ𝐮_s + (μ +λ) ∇ div(𝐮_s) - ∇ p = Π∇ϕ_s, on Ω,; μΔ𝐯_s + (μ +λ) ∇ div(𝐯_s) = ∇ p', on Ω,; k_h Δ p - div(𝐯_s) =0, on Ω,; Δ p' = (2μ + λ) Δ e', on Ω,; p = p_ ext, p' = p_ ext' on Γ,; 𝐮 = 0, 𝐯 = 0, on Γ_-,; (σ̂(𝐮_s) - (p+Πϕ_s) 𝐈) 𝐧 = 𝐠, (σ̂(𝐯_s) - p' 𝐈) 𝐧 = 𝐠', on Γ_+. ] We denote by H^1_0,-(Ω) the Sobolev space of H^1(Ω) functions vanishing on Γ_-. 1mm Theorem 4.1. Let Ω⊂ℝ^n, n=2,3, be an open bounded domain with C^4 boundary ∂Ω. Let us assume that ϕ_s ∈ H^1(Ω) and e' ∈ H^2(Ω). Given positive constants μ, λ, k_h, Π, there exists a unique solution 𝐮_s ∈ [H^2(Ω)]^n × [H^1_0,-(Ω)]^n, 𝐯_s ∈ [H^3(Ω)]^n × [H^1_0,-(Ω)]^n, p ∈ H^4(Ω), p' ∈ H^2(Ω) of (<ref>) for any p_ ext, p_ ext' ∈ℝ and 𝐠, 𝐠' ∈ℝ^n. Moreover, if ϕ_s ∈ W^1,q(Ω) and e' ∈ W^1,q(Ω), n<q<∞, then p' ∈ W^1,q(Ω), 𝐯_s ∈ W^2,q(Ω), p ∈ W^3,q(Ω) and 𝐮_s ∈ W^2,q(Ω). Proof. The equation for p' uncouples from the rest and provides a solution p' ∈ H^2(Ω) by classical theory for Laplace equations <cit.>. Next, the equation for 𝐯 is a classical Navier elasticity system which admits a unique solution 𝐯_s ∈ [H^2(Ω)]^n × [H^1_0,-(Ω)]^n <cit.>. Since the source ∇ p' ∈ [H^1(Ω)]^n, elliptic regularity theory implies 𝐯_s ∈ [H^3(Ω)]^n. 
Now, div(𝐯_s) ∈ H^2(Ω) implies that the unique solution p of the corresponding Poisson problem has H^4(Ω) regularity. Finally, the equation for 𝐮_s is again a classical Navier elasticity system with L^2 right hand side which admits a unique solution 𝐮_s ∈ [H^2(Ω)]^n ∩ [H^1_0,-(Ω)]^n. When ϕ_s ∈ W^1,q(Ω) and e' ∈ W^1,q(Ω), we obtain the increased regularity <cit.>. Notice that since the boundary values are constant, we can construct extensions to H^k(Ω) and W^k,q for the necessary k, q <cit.>. □ 2mm Now, the equation for the concentrations is: [ -d Δ c + div (𝐯_f c) = - k_c g_c ϕ_s, 𝐱∈Ω,; c = c_0 𝐱∈Γ_-,; ∂ c ∂𝐧 = 0 𝐱∈Γ_+, ] given positive constants d, c_0, k_c, g_c and known functions 𝐯_f and ϕ_s. 1mm Theorem 4.2. Let Ω⊂ℝ^n, n=2,3, be an open bounded domain with C^2 boundary ∂Ω. Given positive constants k_c, g_c, d, c_0, a vector function 𝐯_l ∈ [H^1(Ω)]^n ∩ C(Ω), and a positive function ϕ_b ∈ L^2(Ω) there exists a unique nonnegative solution c ∈ H^1(Ω) of (<ref>) provided d is sufficiently large. Proof. Set c= c̃ + c_0. The resulting problem admits the variational formulation: Find c̃∈ H^1_0,-(Ω) such that d ∫_Ω∇c̃^T ∇ w d 𝐱 - ∫_Ω𝐯_f^T c̃∇ w d 𝐱 + ∫_Γ_+c̃ w 𝐯_l^T 𝐧 dS_𝐱 = - k_c g_c ∫_Ωϕ_s w d 𝐱 + c_0 ∫_Ω𝐯_f^T ∇ w d 𝐱, for all w ∈ H^1_0,-(Ω). The continuous bilinear form is coercive provided d is large enough compared to 𝐯_f_∞. Thus, we have a unique solution c̃∈ H^1_0,-(Ω) with H^2(Ω) regularity. The function c^- ∈ H^1_0,-(Ω) satisfies d ∫_Ω |∇ c^-|^2 d 𝐱 - ∫_Ω𝐯_f^T c^-∇ c^- d 𝐱 + ∫_∂Ω^+ |c^-|^2 𝐯_f^T 𝐧 dS_𝐱 = - k_c g_c ∫_Ωϕ_s c^- d 𝐱≤ 0. Coercivity implies c^-=0 and c ≥ 0 provided d is large enough compared to 𝐯_l _∞. For uniqueness, assume we have two positive solutions c_1 and c_2 in H^1(Ω) and set c = c_1 - c_2 ∈ H^1_0,-(Ω). Then u is a solution of [ - d Δ c + div (𝐯_l c) = 0, 𝐱∈Ω,; c = 0, 𝐱∈∂Ω^-,; [0.5ex] ∂ c ∂𝐧 = 0, 𝐱∈∂Ω^+. ] The variational equation with test function c and coercivity imply c=0, that is, c_1= c_2. □ §.§ Conservation law for volume fractions Consider the equation div(-𝐯_f ϕ_f) + k_s g_s ϕ_f = k_s g_s , 𝐱∈Ω, where k_s and g_s are positive constants and 𝐯_f a known function. 1mm Theorem 4.3. Let Ω⊂ℝ^n, n=2,3, be a thin open, bounded subset, with C^4 boundary ∂Ω. Let 𝐯_f ∈ [H^2(Ω) ∩ C(Ω)]^n such that div(𝐯_f)≤ 0 in Ω, div(𝐯_f) ∈ L^∞(Ω) and 𝐯_f^T 𝐧≤ 0 a.e. on ∂Ω. We assume that ∇𝐯_f ∈ [L^∞(Ω)]^n^2 with ∇𝐯_f_[L^∞]^n^2 small enough compared to k_s g_s. Then, given positive constants k_s and g_s, there exists a solution ϕ_f ∈ L^2(Ω) of (<ref>) in the sense of distributions. Moreover, * 0 ≤ϕ_f ≤ 1 on Ω and ϕ does not vanish in sets of positive measure. * ϕ_f ∈ H^1(Ω) is the unique solution of the variational formulation in H^1(Ω) and 1 2 k_s g_s ∇ϕ_L^2≤∇ div(𝐯_f)_[L^2]^n. * If we assume that Ω is a thin domain for which 𝐧∼𝐞_n and div(𝐯_f) ∈ W^1,q(Ω), n<q<∞, then ∇ϕ_f ∈ L^q(Ω) and 1 2 k_s g_s ∇ϕ_L^q≤∇ div(𝐯_f)_[L^q]^n. Proof. Existence. For each ε >0, we follow <cit.> and let ϕ_ε∈ H^1(Ω) be the solution of the variational formulation b(ϕ_ϵ, w) = ε∫_Ω∇ϕ _ε^T ∇ w d 𝐱 + ∫_Ω𝐯_f^T ϕ _ε∇ w d 𝐱 - ∫_∂Ωϕ_ε w 𝐯_f^T 𝐧 d S_𝐱 + ∫_Ω k_s g_s ϕ_ε w d 𝐱 = ∫_Ω k_s g_s w d𝐱 = L(w), ∀ w ∈ H^1(Ω) of - εΔϕ _ε - div(𝐯_f ϕ_ε) + k_s g_s ϕ_ε = k_s g_s in Ω, ∂ϕ_ε∂𝐧 = 0 on ∂Ω. The bilinear form b(φ, w) is continuous on H^1(Ω) <cit.>, while the linear form L is continuous on L^2(Ω). Since div(𝐯_f) ≤ 0 and 𝐯_f^T 𝐧≤ 0, the bilinear form b is also coercive in H^1(Ω). Indeed, ∫_Ω -1mm 𝐯_f^T ϕ_ε∇ϕ_ε d 𝐱 = 1 2∫_Ω -1mm 𝐯_f^T ∇ |ϕ_ε|^2 d 𝐱 = 1 2∫_∂Ω -1mm |ϕ_ε|^2 𝐯_f^T 𝐧 d 𝐱 - 1 2∫_Ω -1mm div(𝐯_f) |ϕ_ε|^2 d 𝐱. 
The positive term - ∫_Ω div(𝐯_f) |ϕ_ε|^2 d 𝐱 is finite because |ϕ_ε|^2 ∈ L^2(Ω) thanks to Sobolev embeedings <cit.>. Since the bilinear form ε∫_Ω∇ϕ^T∇ w d 𝐱 + ∫_Ω k_s g_s ϕ w d 𝐱 is coercive in H^1(Ω), we have a unique solution ϕ_ε∈ H^1(Ω) by Lax Milgram's theorem <cit.>. We set w=ϕ _ε and apply Young's inequality <cit.> to obtain the uniform bound ϕ_ε_L^2≤ meas(Ω)^1/2 from 0 ≤ε∫_Ω |∇ϕ _ε|^2 d 𝐱 - 1 2∫_∂Ω |ϕ_ε|^2 𝐯_f^T 𝐧 d S_𝐱 + ∫_Ω[ - 1 2 div(𝐯_f) + k_s g_s ] |ϕ_ε|^2 d 𝐱 = ∫_Ω k_s g_s ϕ_ε d𝐱≤ k_s g_s _L^2( ∫_Ω |ϕ_ε|^2 )^1/2. Each of the positive terms in the left hand side of the above inequality are uniformly bounded too. Thus, we can extract a subsequence ϕ_ε' such that ϕ_ε' tends weakly in L^2(Ω) to a limit ϕ, and ε∇ϕ_ε tends strongly to zero. Setting w ∈ C_c^∞(Ω) in the variational formulation, and taking limits <cit.>, ϕ is a solution of (<ref>) in the sense of distributions. The variational equation holds with ϵ =0, replacing the boundary integral by the duality _H^-1/2(∂Ω)<ϕ 𝐯_f^T 𝐧, w>_H^1/2(∂Ω) for w∈ H^1(Ω) <cit.>. L^∞ estimates. Setting ψ_ε = ϕ_ε - 1 and w = ψ_ε^+ we get ε∫_Ω |∇ψ _ε^+|^2 d 𝐱 - 1 2∫_∂Ω |ψ_ε^+|^2 𝐯_f^T 𝐧d S_𝐱 + ∫_Ω[ - 1 2 div(𝐯_f) + k_s g_s ] |ψ_ε^+|^2 d 𝐱 = ∫_Ω div(𝐯_f) ψ_ε^+ d𝐱≤ 0. Thus, ψ_ε^+=0 and ϕ_ε≤ 1. Similarly, we set ψ_ε = - ϕ_ε to find ε∫_Ω |∇ψ _ε^+|^2 d 𝐱 - 1 2∫_∂Ω (𝐯_f^T 𝐧) |ψ_ε^+|^2 d S_𝐱 + ∫_Ω[ - 1 2 div(𝐯_f) + k_s g_s ] |ψ_ε^+|^2 d 𝐱 = - ∫_Ω k_s g_s ψ_ε^+ d𝐱≤ 0. Thus, ψ_ε^+=0 and ϕ_ε≥ 0. Weak limits ϕ in L^2 inherit these properties. Moreover, (<ref>) implies that ϕ cannot vanish in sets of positive measure. H^1 Regularity. Elliptic regularity for system (<ref>) implies that ϕ_ε∈ H^2(Ω) <cit.>. We multiply (<ref>) by Δϕ_ε and integrate over Ω to get - ε∫_Ω |Δϕ _ε|^2 d 𝐱 - ∫_Ω𝐯_b^T ∇ϕ_εΔϕ _ε d 𝐱 + ∫_Ω[ - div(𝐯_b) + k_s g_s ] ϕ_εΔϕ_ε d 𝐱 = ∫_Ω k_s g_s Δϕ _ε d 𝐱. Integrating by parts, and using the boundary condition, we find - ε∫_Ω |Δϕ _ε|^2 d 𝐱 + ∫_Ω[ 1 2 div(𝐯_f) - k_s g_s ] |∇ϕ_ε|^2 d 𝐱 + 1 2∫_∂Ω |∇ϕ_ε|^2 𝐯_f^T 𝐧 d S_𝐱 = ∫_Ω∇[ - div(𝐯_f) + k_s g_s ] ^T ϕ_ε∇ϕ_ε d 𝐱 - ∫_Ω v_l,j,x_kϕ_ε, x_jϕ_ε, x_k d 𝐱. We know that 0≤ϕ _ε≤ 1. Therefore, ∫_Ω[ -1 2 div(𝐯_f) + k_s g_s ] |∇ϕ_ε|^2 d 𝐱≤∇ div(𝐯_f)_[L^2]^n∇ϕ_ε_L^2 + ∫_Ω |v_l,j,x_kϕ_ε, x_jϕ_ε, x_k| d 𝐱. If ∇𝐯_l_[L^∞]^n^2 is small enough compared to k_s g_s 1 2 k_s g_s ∇ϕ_ε_L^2≤∇ div(𝐯_f)_[L^2]^n. We extract a subsequence ϕ_ε' converging weakly in H^1(Ω) to a limit ϕ, strongly in L^2(Ω), and pointwise in Ω. The traces of ϕ on ∂Ω belong to L^2(∂Ω), and are weak limits of traces of ϕ_ε'. Passing to the limit in the variational formulation for (<ref>), ϕ∈ H^1(Ω) is a solution with ϵ =0 which inherits these bounds. Uniqueness. Given two solutions ϕ_1, ϕ_2 ∈ H^1(Ω), we set ψ = ϕ_1-ϕ_2. Subtracting the variational equations we get for the test function ψ∈ H^1(Ω) - 1 2∫_∂Ω (𝐯_f^T 𝐧) |ψ|^2 d S_𝐱 + ∫_Ω[ - 1 2 div(𝐯_f) + k_s g_s ] |ψ|^2 d 𝐱 = 0, that is, ϕ_1=ϕ_2 in view of the signs. □ W^1,q regularity. By elliptic regularity, ϕ _ε∈ W^3,q(Ω), since the source in (<ref>) belongs to W^1,q(Ω). Following <cit.>, we differentiate (<ref>) with respect to x_k, multiply by h(ϕ_ε) ϕ_x_k for h(ϕ_ε) = (|∇ϕ_ε|^2 + δ)^(q-2)/2, add k and integrate over Ω to get - ε∫_ΩΔ(∇ϕ_ε)^T h(ϕ_ε) ∇ϕ_ε d 𝐱 + ∫_Ω k_s g_s h(ϕ_ε) |∇ϕ_ε|^2 d 𝐱 - ∫_Ω v_l,iϕ_ε,x_i x_k h(ϕ_ε) ϕ_ε, x_k d 𝐱 - ∫_Ω v_l,i,x_kϕ_ε, x_i h(ϕ_ε) ϕ_ε, x_k d 𝐱 - ∫_Ω div(𝐯_f) h(ϕ_ε) |∇ϕ_ε|^2 d 𝐱 - ∫_Ω∇( div(𝐯_f))^T h(ϕ_ε) ϕ_ε∇ϕ_ε d 𝐱 = 0. Sum over repeated indices is intended. 
Notice that Lemma 3.1 from <cit.> holds in our framework for our thin domains, so that the first term is nonnegative. The fourth term becomes 1 q∫_Ω div(𝐯_f)(|∇ϕ_ε|^2 + δ)^q/2 d 𝐱 - 1 q∫_∂Ω (|∇ϕ_ε|^2 + δ)^q/2𝐯_l^T 𝐧 dS_𝐱. Putting all together we get ∫_Ω k_s g_s h(ϕ_ε) |∇ϕ_ε|^2 d 𝐱≤ - 1 q∫_Ω div(𝐯_f)(|∇ϕ_ε|^2 + δ)^q/2 d 𝐱 + ∫_Ω v_l,i,x_kϕ_ε, x_iϕ_ε, x_k h(ϕ_ε) d 𝐱 + ∫_Ω div(𝐯_f) h(ϕ_ε) |∇ϕ_ε|^2 d 𝐱 + ∫_Ω∇( div(𝐯_f))^T h(ϕ_ε) ϕ_ε∇ϕ_ε d 𝐱. We let δ→ 0 and use that ∇𝐯_f _[L^∞]^n^2 is small enough to find 1 2 k_s g_s ∫_Ω |∇ϕ_ε|^q d 𝐱≤∇( div(𝐯_f)) _L^q |∇ϕ_ε| _L^q^q-1, which yields the bound we seek letting ε→ 0. □ § WELL POSEDNESS RESULTS FOR THE FULL MODEL WITH A KNOWN BOUNDARY DYNAMICS Once we have analyzed the different submodels, we consider the whole system when the boundary of the domains Ω^t moves with time according to a given dynamics [ μΔ𝐮_s + (μ +λ) ∇ div(𝐮_s) - ∇ p = Π∇ϕ_s, in Ω^t,; μΔ𝐯_s + (μ +λ) ∇ div(𝐯_s) = ∇ p' , in Ω^t,; k_h Δ p = div(𝐯_s), in Ω^t,; Δ p' = (2μ + λ) Δ e', in Ω^t,; p = p_ ext, p = p_ ext' on Γ^t,; 𝐮_s = 0, 𝐯_s = 0, on Γ_-^t,; (σ̂ (𝐮_s) - (p+Πϕ_s) 𝐈) 𝐧 = 𝐠, on Γ_+^t,; (σ̂ (𝐯_s) - p' 𝐈) 𝐧 = 𝐠'(∇𝐮_s), on Γ_+^t, ] [ div (-𝐯_f ϕ_f) + k_s g_s ϕ_f = k_s g_s, 1cm in Ω^t,; 𝐯_f = - ξ_∞∇ p + 𝐯_s, ϕ_f+ϕ_s = 1, 1cm in Ω^t, ] [ de' dt = k_h (2 μ + λ) Δ e', 2.9cm ,; e' = e_ ext, 1mm on Γ^t,; e'(0) = e_0, 1mm on Ω^0, ] [ -d Δ c + div (𝐯_f c) = - k_c g_c ϕ_s, 1.8cm in Ω^t,; c = c_0 on Γ^t_-,; ∂ c ∂𝐧 = 0 on Γ^t_+. ] 1mm Theorem 5.1. Let Ω^t ⊂Ω⊂ℝ^n, n=2,3, t ∈ [0,T], be a family of open bounded C^4 domains. The lower boundary Γ_- is fixed, while the upper boundary Γ_+^t is obtained deforming Γ^0_+ along a vector field ν(𝐱) ∈ C(Ω) ∩ C^4(Ω). Assume that * e_ ext(t), 𝐠(t), 𝐠'(t), p_ ext(t), p_ ext'(t), c_0(t) ∈ C([0,T]), e_0 ∈ L^2(Ω^0) ∩ L^q(Ω^0), for q>n, * e_ ext, 𝐠', Π and p_ ext are small enough. Given positive constants μ, λ, Π, k_h, k_s, k_c, g_s, g_c, ξ_∞, and d large enough, system (<ref>)-(<ref>) admits a unique solution e' ∈ H^2(Ω^t)∩ W^2,q(Ω^t), 𝐮_s ∈ [H^2(Ω^t)]^n ∩ W^2,q(Ω^t), 𝐯_s, 𝐯_f ∈ [H^3(Ω^t)]^n, p ∈ H^4(Ω^t), p' ∈ H^2(Ω^t), ϕ_f, ϕ_s ∈ H^1(Ω^t)∩ W^1,q(Ω^t), c ∈ H^2(Ω^t), for q >n, satisfying c ≥ 0 and 0 ≤ϕ_f, ϕ_s ≤ 1, t ∈ [0,T]. Moreover, the norms of the solutions are bounded in terms of the parameters and data of the problem. Proof. Assume first that 𝐠'(∇𝐮_s) does not depend on 𝐮_s. Then, the result is a consequence of Corollary 3.3, Theorems 4.1-4.3 and Sobolev embeddings <cit.> (neither L^q regularity nor conditions on the domain geometry nor smallness assumptions are needed). We calculate the unknowns according to the sequence e', p', 𝐯_s, p, 𝐯_f, ϕ_f, ϕ_s, 𝐮_s, and c. When 𝐠'(∇𝐮_s) does depend on 𝐮_s, we construct e' thanks to Corollary 3.3. For each fixed t>0, e' ∈ H^2(Ω^t)∩ W^2,q(Ω^t) and we can construct p' ∈ H^2(Ω^t)∩ W^2,q(Ω^t). Next, we solve the quasi-stationary system by means of an iterative scheme. At each step ℓ, we freeze Π∇ϕ_s^(ℓ -1) in the equation for 𝐮_s^(ℓ) and 𝐠'(∇𝐮_s^ℓ-1) in the boundary condition for 𝐯_s^(ℓ). Initially, we set ϕ_s^(0)= ϕ_∞∈ (0,1) constant and ϕ_f^(0)=1-ϕ_∞. We set 𝐮^(0)=0. Theorem 4.1, Theorem 4.2, Theorem 4.3 guarantee the existence of 𝐯_s^(1), p^(1), 𝐮_s^(1), 𝐯_f^(1), ϕ_f^(1), ϕ_s^(1), and c^(1), with the stated regularity. In a similar way, given all the fields at step ℓ-1, we can construct the solutions for step ℓ. Notice that 𝐯_f^(ℓ-1)∈ W^2,q implies 𝐯_f^(ℓ-1)∈ W^1,∞(Ω) and 𝐯_f^(ℓ-1)∈ C(Ω). To apply Theorem 4.3 we also need to satisfy smallness and sign assumptions that we will consider later. 
Assuming they hold, we get for the elliptic system involving 𝐯_s^(ℓ), 𝐮_s^(ℓ), p^(ℓ) and for the transport equation for ϕ_s^(ℓ) p^(ℓ)_H^2(Ω^t) + 𝐯_s^(ℓ)_H^2(Ω^t) + 𝐮_s^(ℓ)_H^2(Ω^t)≤ C_1^t [Π∇ϕ_s^(ℓ-1)_L^2(Ω^t)   + ∇ p' _L^2(Ω^t) + p_ ext_H^3/2(Γ^t_+) + 𝐠'(∇𝐮_s^(ℓ-1)) _H^1/2(Γ^t_+) + 𝐠_H^1/2(Γ^t_+) ], p^(ℓ)_W^2,q(Ω^t) + 𝐯_s^(ℓ)_W^2,q(Ω^t) + 𝐮_s^(ℓ)_W^2,q(Ω^t)≤ C_2^t [ Π∇ϕ_s^(ℓ-1)_L^q(Ω^t) + ∇ p' _L^q(Ω^t) +  p_ ext_W^1-1 q,q(Γ^t_+) + 𝐠'(∇𝐮_s^(ℓ-1)) _W^1-1 q,q(Γ^t_+) + 𝐠_W^1-1 q,q(Γ^t_+) ], p^(ℓ)_W^3,q(Ω^t)≤ C^t_3 [ 𝐯_s^(ℓ)_W^1,q(Ω^t) + p_ ext_W^3-1/q,q(Γ^t) ] 𝐯_f^(ℓ)_W^2,q(Ω^t)≤ξ_∞ p^(ℓ)_W^3,q(Ω^t) + 𝐯_s^(ℓ)_W^2,q(Ω^t) 1 2 k_s g_s ∇ϕ_f^(ℓ)_L^q≤∇ div(𝐯_f^(ℓ))_[L^q]^n. Notice that ∇ϕ_f^(ℓ) = - ∇ϕ_s^(ℓ). Combining the above inequalities, and provided Π and 𝐠' are small enough, we obtain an upper bound for 𝐯_f^(ℓ)_W^2,q(Ω^t), 𝐯_s^(ℓ)_W^2,q(Ω^t), p^(ℓ)_W^2,q(Ω^t), ϕ_s _W^1,q(Ω^t), in terms of constants depending on the problem data and parameters, and also on time, but remain bounded in time for t∈ [0,T]. We guarantee by induction the smallness of 𝐯_f^(ℓ)|_[W^1,∞] and div(𝐯_f^(ℓ)) ≤ 0, 𝐯_f^(ℓ)·𝐧≤ 0. Initially, ϕ_s^(0) is constant and ∇ϕ_s^(0)=0. We construct 𝐯_s^(1) and p^(1) in such a way that 𝐯_s^(1)_[W^2,q]^n, p^(1)_[W^3,q]^n and 𝐯_f^(1)_[W^2,q]^n are bounded in terms of the problem parameters and data. By Sobolev injections for n < q < ∞, 𝐯_s^(1)_[W^1,∞]^n satisfies a similar bound, and can be made as small as required by making 𝐠' and p_ ext small. Then, ∇ϕ_f^(1)_L^q is bounded by 𝐯_f^(1)_[W^2,q]^n and is equally small. Furthermore, div(𝐯_f^(1)) ϕ_f^(1) + 𝐯_f^(1)∇ϕ_f^(1) = - k_s g_s ϕ_f^(1)≤ 0. Since 𝐯_f^(1) and ∇ϕ_f^(1) are small compared to - k_s g_s ϕ_f^(1)≤ 0 which is almost constant. Thus, div(𝐯_l^(1)) ≤ 0. Finally, ∫_A div(𝐯_l^(1)) d 𝐱 = ∫_∂ A𝐯_l^(1)·𝐧 d S_𝐱≤ 0 for all A ⊂Ω so that 𝐯_l^(1)·𝐧≤ 0 on ∂Ω. By induction, if 𝐯_f^(ℓ-1)_[W^1,∞]^n is small and 𝐯_f^(ℓ-1) satisfies the sign conditions, we can repeat the argument to show that this holds for 𝐯_f^(ℓ) too and that it also satisfies the sign conditions. We need to estimate ∇ div(𝐯_f^(ℓ-1)) _[L^q]^n, which is possible since Π is small. These estimates allow us to extract subsequences converging weakly to limits 𝐯_s, 𝐮_s, p, ϕ_s satisfying variational formulations of the equations. Problem (<ref>) is already studied in Theorem 4.2. □ A similar result (except for the uniqueness) can be obtained by means of an iterative scheme if we allow for almost constant smooth coefficients k_h(ϕ_f) ξ_∞(ϕ_f), g_s(c), g_s(c). 1mm § DISCUSSION AND CONCLUSIONS The study of biological aggregates and tissues often leads to complex mixture models, combining transport equations for volume fractions of different phases, with continuum models for mechanical behavior of the mixture and chemical species <cit.>. These models are set in domains that change with time, because cells grow, die and move and because of fluid transport within the biological network. Here, we have considered a fluid-solid mixture description of the spread of cellular systems called biofilms, which could be adapted to general tissues. These models involve different time scales, so that part of the equations are considered quasi-stationary, that is, they are stationary problems solved at different times in different domains and with some time dependent coefficients. Such equations are coupled to time dependent problems set in moving domains and to variables not directly characterized by means of equations. 
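As a concrete illustration of the regularization procedure used for the stationary transport submodel (Theorem 4.3) and embedded in the iterative scheme of Theorem 5.1, the following self-contained sketch solves a one-dimensional analogue of the ε-regularized problem with homogeneous Neumann conditions and a velocity obeying the sign conditions div(𝐯) ≤ 0 and 𝐯·𝐧 ≤ 0. The discretization, domain and parameter values are ours and purely illustrative; they are not taken from the paper.

```python
import numpy as np

# 1D analogue of the regularized transport problem in the proof of Theorem 4.3:
#   -eps*phi'' - (v*phi)' + k*phi = k   on (0,1),   phi'(0) = phi'(1) = 0,
# with a velocity obeying the sign conditions v' <= 0 and v*n <= 0 at both ends.
N, eps, k = 200, 1e-3, 1.0
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
v = 0.05 * (1.0 - 2.0 * x)            # v' = -0.1 <= 0,  v(0) >= 0,  v(1) <= 0

A = np.zeros((N, N))
b = k * np.ones(N)
for i in range(1, N - 1):
    # centered differences: diffusion -eps*phi'', advection -(v*phi)', reaction +k*phi
    A[i, i - 1] = -eps / h**2 + v[i - 1] / (2.0 * h)
    A[i, i]     =  2.0 * eps / h**2 + k
    A[i, i + 1] = -eps / h**2 - v[i + 1] / (2.0 * h)
# homogeneous Neumann boundary conditions via first-order one-sided differences
A[0, 0], A[0, 1] = 1.0, -1.0
A[-1, -1], A[-1, -2] = 1.0, -1.0
b[0] = b[-1] = 0.0

phi_eps = np.linalg.solve(A, b)
print(phi_eps.min(), phi_eps.max())   # expected to remain within [0, 1], cf. Theorem 4.3
```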
In this paper, we have developed mathematical frameworks to tackle some of the difficulties involved in the construction of solutions for these multiphysics systems and the study of their behavior. First, we have shown how to improve these models by characterizing time derivatives of solutions of stationary boundary value problems with varying coefficients set in moving domains in terms of complementary boundary value problems derived for them. In this way we obtain a quasi-stationary elliptic system for the mechanical variables of the solid phase, not only displacements and pressure, but also velocity, that can be solved at each time coupled to the other submodels. This option is more stable than evaluating velocities as quotients of differences of displacements calculated in meshes of different spatial domains. On one side, the error committed is easier to control. On the other side, the computational is cost smaller, since we use a single mesh at each time. Once we know the velocity of the solid phase and the pressure, the velocity of the fluid phase follows by a Darcy type law. Next, we have devised an strategy to construct solutions of an auxiliary class of time dependent linear diffusion problems set in moving domains with parametrizations satisfying a number of conditions. We are able to refer the model to a fixed domain and then solve by Galerkin type schemes. The complete model involves a quasi-stationary transport problem. We show that we can construct smooth enough solutions by a regularization procedure, under sign hypothesis on the fluid velocity field suggested by asymptotic solutions constructed in simple geometries. Once we know how to construct stable solutions of each submodel satisfying adequate regularity properties, an iterative scheme allows us to solve the full problem when the time evolution of the boundary of the spatial region occupied by the biological film is known. In applications one must couple these models with additional lubrication type equations for the motion of the film boundary, see equation (<ref>). Perturbation analyses <cit.> provide approximate solutions with selfsimilar dynamics for h. Establishing existence and regularity results for such complex models that can guide construction of reliable numerical solutions is a completely open problem. The techniques we have developed are general and can be applied in models with a similar structure arising in other biological and chemical engineering applications. § APPENDIX: THE MODEL EQUATIONS We study biofilms as solid-fluid mixtures, composed of a solid biomass phase and a liquid phase formed by water carrying dissolved chemicals (nutrients, autoinducers, waste). Under the equipresence hypothesis of mixtures, each location 𝐱 in a biofilm can contain both phase simultaneously, assuming that no voids or air bubbles form inside. Let us denote by ϕ_s(𝐱,t) the volume fraction of solid and by ϕ_f(𝐱,t) the volume fraction of fluid, which satisfy ϕ_s + ϕ_f = 1. Taking the densities the mixture and both constituents to be constant and equal to that of water ρ_f= ρ_s= ρ= ρ_w, the mass balance laws for ϕ_s and ϕ_f are <cit.> ∂ϕ_s ∂ t + div (ϕ_s 𝐯_s) = r_s(ϕ_s,c), r_s(ϕ_s,c) = k_s c c + K_cϕ_s, ∂ϕ_f ∂ t + div (ϕ_f 𝐯_f) = - r_s(ϕ_s,c), where 𝐯_s and 𝐯_f denote the velocities of the solid and fluid components, respectively, c is the substrate concentration and r_s(ϕ_s,c) = k_s c c + K_cϕ_s stands for the production of biomass due to nutrient consumption. 
The parameters K_c (starvation threshold) and k_s (intake rate) are positive constants. The substrate concentration c <cit.> is governed by: ∂ c ∂ t + div (𝐯_f c) - div (d ∇ c) = -r_n(ϕ_s,c), r_n(ϕ_s, c) =ϕ_s k_c c c + K_c, where r_n(ϕ_s,c) represents consumption by the biofilm. The parameters d (diffusivity), k_c (uptake rate) and K_c (half-saturation) are positive constants. We impose zero-flux boundary conditions on the air–biofilm interface and constant Dirichlet boundary condition on the agar–biofilm interface. In equation (<ref>), typical parameter values are such that the time derivatives can be neglected. The solutions depend on time though the motion of the biofilm boundary. Adding up equations (<ref>) and (<ref>), we obtain a conservation law for the growing mixture: 0= div (ϕ_s 𝐯_s+ ϕ_f 𝐯_f) = div (𝐯) = div (𝐯_s + 𝐪), where 𝐯= ϕ_s 𝐯_s+ ϕ_f 𝐯_f is the averaged velocity and 𝐪 = ϕ_f (𝐯_f - 𝐯_s) is the filtration flux. The theory of mixtures hypothesizes that the motion of each phase obeys the usual momentum balance equations <cit.>. In the absence of external body forces, the momentum balance for the solid and the fluid reads ρϕ_s a_s + divσ_s + ρϕ_s (𝐟_s + ∇π_s) = 0, ρϕ_f a_f + divσ_f + ρϕ_f (𝐟_f + ∇π) = 0. In biofilms, the velocities 𝐯_s and 𝐯_f are small enough for inertial forces to be neglected, that is, ρ_s 𝐚_s ≈ρ_f 𝐚_f ≈ρ𝐚≈ 0, where 𝐚_s, 𝐚_f, 𝐚 denote the solid, fluid, and average accelerations. Let us detail now expressions for the stresses and forces appearing in these equations, following <cit.>. When the biofilm contains a large number of small pores, the stresses in the fluid are σ_f = - ϕ_f p 𝐈, p being the pore hydrostatic pressure. In case large regions filled with fluid were present, the standard stress law for viscous fluids should be considered. Under small deformations, and assuming an isotropic solid, the stresses in the solid biomass are σ_s = σ̂_s - ϕ_s p 𝐈, σ̂_s = λ Tr (ε(𝐮_s)) 𝐈 + 2 μ ε(𝐮_s), ε_ij(𝐮)= 1 2( ∂ u_i ∂ x_j + ∂ u_j ∂ x_i), where 𝐮_s is the displacement vector of the solid, ε(𝐮) the deformation tensor, and λ, μ, the Lamé constants. The stresses in the solid are due to interaction with the fluid and strain within the solid. The interaction forces and concentration forces satisfy the relations ϕ_s 𝐟_s +ϕ_f 𝐟_s = 0 and ϕ_s ∇π_s + ϕ_f ∇π = 0 <cit.>. The osmotic pressure is a function of the biomass fraction ϕ_f = Π (ϕ_s) <cit.>. For isotropic solids with isotropic permeability the filtration force 𝐟_f = - 1 k_h𝐪, where k_h (hydraulic permeability) is a positive function of ϕ_s <cit.>. Typically, k_h(ϕ_f)=ϕ_f^2 ζ, where ζ is a friction parameter often set equal to ζ= μ_f ξ(ϕ_s)^2 >0 and ξ is the “mesh size” of the underlying biomass network <cit.>. Using the expressions for the stress tensors (<ref>) and (<ref>), equations (<ref>) become div σ̂_s + ϕ_s (-∇ p + ∇π_s ) + ϕ_s 𝐟_s = 0, ϕ_f (-∇ p+∇π) + ϕ_f 𝐟_f = 0. Combining (<ref>), (<ref>), and (<ref>) we obtain 𝐪 = - k_h ∇ (p - π) = ϕ_f (𝐯_f -𝐯_s). This is Darcy's law in the presence of concentration gradients. Adding up equations (<ref>), we find an equation relating solid displacements and pressure div σ̂_s(𝐮_s) - ∇ p = 0. At the biofilm boundary, the jumps in the total stress vector and the chemical potential vanish: (σ̂_s - p 𝐈) 𝐧 = 𝐭_ext, p - π = p_ext - π_f,ext, when applicable. The solid velocity is then 𝐯_s =∂𝐮_s ∂ t. These equations are complemented by (<ref>) and (<ref>), which now becomes div(𝐯_s) = - div(𝐪) = div(k_h ∇ (p - π)). 5mm Acknowledgements. 
This research has been partially supported by the FEDER /Ministerio de Ciencia, Innovación y Universidades - Agencia Estatal de Investigación grant PID2020-112796RB-C21. 5mm 9 adams R.A. Adams, Sobolev Spaces, Academic Press, New York, 1975 adn2 S. Agmon, A. Douglis, L. Nirenberg, Estimates Near the Boundary for Solutions of Elliptic Partial Differential Equations Satisfying General Boundary Conditions II, Communications on Pure and Applied Mathematics, XVII, 35-92, 1964 bamberger A. Bamberger, R. Glowinski, Q.H. Tran, A domain decomposition method for the acoustic wave equation with discontinuous coefficients and grid change, SIAM Journal on Numerical Analysis 34(2), 603-639, 1997 beirao H. Beirao da Veiga, On a stationary transport equation, Ann. Univ. Ferrara - Sz. VII - Sc. Mat., Vol XXXII, 1986 brezis H. Brézis, Analyse fonctionnelle, Théorie et applications, Masson, 1987 ibm A. Carpio, R. González-Albaladejo, Immersed boundary approach to biofilm spread on surfaces, Commun. Comput. Phys. 31, 257-292, 2022 entropy A. Carpio, E. Cebrián, Incorporating cellular stochasticity in solid-fluid mixture biofilm models, Entropy 22(2), 188, 2020 poroelastic A. Carpio, E. Cebrián, P. Vidal, Biofilms as poroelastic materials, International Journal of Non-Linear Mechanics 109, 1-8, 2019 amm16 A. Carpio, G. Duro, Well posedness of an angiogenesis related integrodifferential diffusion model, Applied Mathematical Modelling 40 (9-10), 5560-5575, 2016 econ C.C. de Carvalho, Biofilms: recent developments on an old battle, Recent. Pat. Biotechnol. 1, 49-57, 2007 coddington E.A. Coddington, N. Levinson, Theory of ordinary differential equations, New York: McGraw-Hill, 1955 degennes P.G. De Gennes, Wetting: statics and dynamics, Reviews of Modern Physics, 57(3), 828-863, 1985. biofilm H.C. Flemming, J. Wingender, The biofilm matrix, Nat. Rev. Microbiol. 8, 623-633, 2010 gurtin M.E. Gurtin, An introduction to continuum mechanics, Mathematics in Science and Engineering 158, Academic Press 1981. kapellos G.E. Kapellos, T.S. Alexiou, A.C. Payatakes, Theoretical modeling of fluid flow in cellular biological media: An overview, Math. Biosci. 225, 83-93, 2010 kozlov V.A. Kozlov, J.A. Maz'ya, Elliptic boundary value problems in domains with point singularities, Mathematical surveys and monographs 52, AMS, 1997 ladyzenskaya O.A. Ladyzhenskaya, N.N. Ural'tseva, Linear and quasilinear elliptic equations, Academic Press 1968. lanir Y. Lanir, Biorheology and fluid flux in swelling tissues. I. Bicomponent theory for small deformations, including concentration effects, Biorheology 24, 173-187, 1987 lionsmagenes J.L. Lions, E. Magenes, Problémes aux limites non homogénes, Dunod, 1968 lions J.L. Lions, Quelques Méthodes Pour les Problèmes aux Limites Nonlinéaires, Gauthier-Villards, 1969 raviart P.A. Raviart, J.M. Thomas, Introduction a l'analyse numérique des équations aux dérivées partielles, Masson 1983 slimy B. Schachter, Slimy business-the biotechnology of biofilms, Nat. Biotechnol. 21, 361-365, 2003 seminara A. Seminara, T.E. Angelini, J.N. Wilking, H. Vlamakis, S. Ebrahim, R. Kolter, D.A. Weitz, M.P. Brenner, Osmotic spreading of Bacillus subtilis biofilms driven by an extracellular matrix. Proc. Nat. Acad. Sci. USA 109, 1116–1121, 2012 feijoooberai G.R. Feijoo. A.A. Oberai, P.M. 
Pinsky, An application of shape optimization in the solution of inverse acoustic scattering problems Inverse Problems 20, 199-228, 2004 dirichlet2D P Li, Y Wang, Z Wang, Y Zhao, Inverse obstacle scattering for elastic waves, Inverse Problems 32, 115018, 2016 fstissue M.M. Schuff, J.P. Gore, E.A. Nauman, A mixture theory model of fluid and solute transport in the microvasculature of normal and malignant tissues. I. Theory, J. Math. Biol. 66, 1179-1207, 2013 fsbrain M. Terzano, A. Spagnoli, D. Dini, A.E. Forte, Fluid-solid interaction in the rate-dependent failure of brain tissue and biomimicking gels, Journal of the Mechanical Behavior of Biomedical Materials 119, 104530, 2021 hai K. Vickery, H. Hu, A.S. Jacombs, D.A. Bradshaw, A.K. Deva, A review of bacterial biofilms and their role in device-associated infection, Healthcare Infection 18, 61-66, 2013 zhu Y. Zhu, G. McHale, J. Dawson, S. Armstrong, G. Wells, R. Han, H. Liu, W. Vollmer, P. Stoodley, N. Jakubovics, J. Chen, Slippery liquid-like solid surfaces with promising antibiofilm performance under both static and flow conditions, ACS Appl. Mater. Interfaces 14, 5, 6307-6319, 2022
http://arxiv.org/abs/2307.06217v1
20230712150511
A preliminary model for optimal control of moisture content in unsaturated soils
[ "Marco Berardi", "Fabio V. Difonzo", "Roberto Guglielmi" ]
math.OC
[ "math.OC" ]
Optimal Control of Moisture Content]A preliminary model for optimal control of moisture content in unsaturated soils Berardi]Marco Berardi Istituto di Ricerca sulle Acque, Consiglio Nazionale delle Ricerche, Via De Blasio, 5, 70132 Bari, Italy [email protected] Difonzo]Fabio V. Difonzo Dipartimento di Matematica, Università degli Studi di Bari Aldo Moro, Via E. Orabona 4, 70125 Bari, Italy [email protected] Guglielmi]Roberto Guglielmi Department of Applied Mathematics, University of Waterloo, 200 University Ave W, Waterloo, N2L 3G1 Ontario, Canada [email protected] 34H05, 76S05 Version of August 12, 2023, In this paper we introduce an optimal control approach to Richards' equation in an irrigation framework, aimed at minimizing water consumption while maximizing root water uptake. We first describe the physics of the nonlinear model under consideration, and then develop the first-order necessary optimality conditions of the associated boundary control problem. We show that our model provides a promising framework to support optimized irrigation strategies, thus facing water scarcity in irrigation. The characterization of the optimal control in terms of a suitable relation with the adjoint state of the optimality conditions is then used to develop numerical simulations on different hydrological settings, that supports the analytical findings of the paper. [ [ Received May 01, 2023 / Accepted May 31, 2023 ================================================= § INTRODUCTION More and more often, extreme weather events are accompanied by longer, more intense heat waves and consequent periods of drought, and a forecast global warming is increasing the urgent need of freshwater for human life. In this context, freshwater necessary for agriculture represents almost 70% of the whole amount of freshwater reserve <cit.>. In this scenario, a wise management of water resources for agricultural purposes is of fundamental importance, even at the irrigation district scale <cit.>. Nevertheless, the vast majority of irrigation models just applies heuristic approaches for determining the amount and the timing of irrigation. Albeit an expert knowledge of agricultural and phenological issue is crucial, very seldom such information is coupled with proper mathematical models for controlling irrigation. In most sophisticated cases, the irrigation is managed mainly by studying soil water infiltration into the root zone, starting from Gardner's pioneering works, reviewed in <cit.>; several tools have been proposed in this context, generally providing some type of solver for Richards' equation, the advection-diffusion equation which describes water infiltration in unsaturated porous media accounting also for root water uptake models. In the water resources management or agronomic framework several tools have been proposed in order to benefit from the Richards' equation for irrigation purposes: in <cit.> a Python code is presented, able to solve the Richards' equation with any type of root uptake model by a transverse method of lines; also, the popular Hydrus software is often used for simulating water flow and root uptake with different crops (e.g apples, as in <cit.>, pecan trees as in <cit.>). To the best of our knowledge, control methods are seldom applied to irrigation problems, and always with simplified models. 
For instance, in <cit.> a zone model predictive control is designed after defining a linear parameter varying model, aimed at maintaining the soil moisture in the root zone within a certain target interval; in <cit.> an optimal control is applied to an irrigation problem, modelled by a simpler (with respect to Richards' equation) hydrologic balance law; a simplified optimization method based on the computation of steady solutions of Richards' equation is proposed in <cit.>; finally, <cit.> presents an interesting sliding-mode control approach, but only considering a constant diffusion term in Richards' equation. An elegant approach for applying control techniques in a Richards' equation framework is provided in <cit.>, yet with very different applications and tools, i.e. maximizing the amount of absorbed liquid by redistributing the materials, when designing the material properties of a diaper. In this paper, we propose a model for solving an optimal control problem under the quasi-unsaturated assumptions (see Section <ref>), which provide a suitable hydrological setting that prevents to reach water moisture saturation in the soil. We derive the appropriate optimality conditions for the boundary control of a class of nonlinear Richards' equations, and implement these results in the development and computation of numerical solutions by a classical Projected Gradient Descent algorithm. Albeit the focus of this paper does not consist in proposing a novel numerical method, and here a standard MATLAB solver is integrated with control tools, some significant advances in the numerical solution of the unsaturated flow model deserve to be reminded; as a matter of fact, the numerics literature on Richards' equation is currently enriching and constantly evolving, since its possibly degenerate and highly nonlinear nature poses several challenges. For instance, the treatment of nonlinearities is a significant issue, and has been faced by different techniques, as Newton methods (<cit.>), L-scheme or its variants <cit.> or Picard iterations <cit.>. The discretization in space has been dealt, for instance, by finite elements or mixed finite element methods <cit.>, discontinuous Galerkin <cit.>, finite volume methods <cit.>. A separate mention is deserved by the problem of infiltration in presence of discontinuities, which can be handled by domain decomposition methods <cit.>, Filippov approach <cit.>. For a more detailed discussion, the interested reader is addressed to the following complete reviews on numerical issues in Richards' equation <cit.>. The aforementioned methods can be used to face peculiar issues in the numerical integration of Richards' equation, where is known to blow up in a relatively small integration time (see, e.g., <cit.>). Thus, they could provide further directions in the development of specific algorithms for solving optimal control problems. It is worth stressing that there is a vast literature about the problems of existence, uniqueness and regularity of the solution to degenerate parabolic differential equations. In the specific case of Richards' equations, the existence of solutions is tackled in the seminal paper by Alt and Luckhaus <cit.>, whose ideas were subsequently developed by other authors. For example, <cit.> describes the semigroup approach to determine the existence of weak solutions, further developed in <cit.>. 
In this paper we adopt the functional framework from <cit.>, that exhaustively describes the minimal assumptions to derive the necessary regularity conditions to develop our analysis, for general classes of hydrological settings. We also refer to <cit.> for a thorough review of such results. The paper is organized as follows: In Section <ref> we introduce the quasi-unsaturated model of the Richards' equation; in Section <ref> we describe the framework to ensure the well-posedness of the model of interest, and then derive first order necessary optimality conditions via the Lagrangian method in Section <ref>. Finally, we present numerical simulations in Section <ref>. § THE MATHEMATICAL MODEL Our work stems from the quasi-unsaturated Richards' model <cit.>, describing a fast diffusion of water in soils. This framework provides a convenient setting to apply optimal control methods and derive optimal irrigation strategies. Indeed, from the mathematical point of view, the quasi-unsaturated diffusive model retains many crucial features of the nonlinear diffusion of water in the soil, while avoiding the special mathematical treatment required to face the limit case of a saturated diffusion. We consider the model of water diffusion in the space domain (0,Z), where Z>0 is the depth of the domain under consideration. We denote by T∈ (0,+∞) the time horizon of the interval (0, T), by Q = (0,Z)× (0,T) the space-time domain, and by (·, ·) and ‖·‖ the scalar product and the norm in L^2(Ω), respectively. In terms of hydraulic parameters, θ: Q→ [θ_r,θ_S) is the water content or moisture, where θ_r and θ_S represent the residual and the saturated water content, respectively. The function β :[θ_r,θ_S)→ [ϱ,+∞) is the water diffusivity, satisfying the following condition: (𝐇_β) β is locally Lipschitz continuous and monotonically increasing, β(θ) ≥ϱ > 0 for all θ∈ [θ_r, θ_S), and lim_θ↗θ_Sβ(θ) = +∞. The function β^* is the primitive of the water diffusivity β that vanishes at θ_r. Thus, assumption (𝐇_β) implies that β^* is differentiable and monotonically increasing on [θ_r, θ_S), and satisfies (β^*(θ_1) - β^*(θ_2))(θ_1 - θ_2) ≥ϱ (θ_1 - θ_2)^2 ∀ θ_1, θ_2 ∈ [θ_r, θ_S) . Moreover, the hydraulic conductivity K:[θ_r,θ_S]→ℝ is non-negative, Lipschitz continuous on [θ_r, θ_S], and monotonically increasing. Other hydraulic functions of interest are the liquid pressure head h:Q→ (-∞,0), which is negative for unsaturated porous media, and the specific water capacity C(h) = dθ/dh, which practically represents a storage term. The relation between the functions β, K and C is then expressed by β(θ(h)) = K(h)/C(h). With these notations, and assuming the vertical axis with downward positive orientation, the implicit form of the quasi-unsaturated model of the nonhysteretic infiltration of an incompressible fluid into an isotropic, homogeneous, unsaturated porous medium with a constant porosity and truncated diffusivity, with non-homogeneous Dirichlet Boundary Conditions (BCs), is given by the system ∂θ/∂ t - ∂^2β^*(θ)/∂ z^2 + ∂ K(θ)/∂ z = f(θ) in Q , θ(z,0) = θ_0(z) in (0,Z) , θ(0,t) = v(t) for t∈ (0,T) , θ(Z,t) = g(t) for t∈ (0,T) . The source term f(θ) represents a sink function, which in our model describes the root water uptake.
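Although the analysis below works with a weak formulation of this system, it may help to see how the θ-form above is typically advanced in time. The following minimal method-of-lines sketch (our own illustration, not the solver used later in the paper) discretizes ∂θ/∂t = ∂/∂z(β(θ) ∂θ/∂z) - ∂K(θ)/∂z + f(θ) with Dirichlet data v(t), g(t); the callables beta, K, f, v and g are assumed to be supplied by the user.

```python
import numpy as np
from scipy.integrate import solve_ivp

def richards_theta_rhs(t, theta, z, beta, K, f, v, g):
    """Method-of-lines right-hand side for the theta-form of the system above:
       d(theta)/dt = d/dz( beta(theta)*d(theta)/dz ) - dK(theta)/dz + f(theta),
    with Dirichlet data theta(0,t) = v(t), theta(Z,t) = g(t).
    beta, K, f, v, g are assumed user-supplied callables (illustrative, not from the paper)."""
    th = theta.copy()
    th[0], th[-1] = v(t), g(t)                  # impose the boundary data in the fluxes
    dz = z[1] - z[0]
    # diffusive flux beta(theta)*dtheta/dz at the cell interfaces (arithmetic average of beta)
    beta_face = 0.5 * (beta(th[:-1]) + beta(th[1:]))
    flux_diff = beta_face * np.diff(th) / dz
    # gravity-driven flux K(theta) at the interfaces, upwinded in the +z (downward) direction
    flux_adv = K(th[:-1])
    dth = np.zeros_like(th)
    dth[1:-1] = (np.diff(flux_diff) - np.diff(flux_adv)) / dz + f(th[1:-1])
    return dth                                  # boundary entries are not evolved here

# usage (illustrative): theta0, z and the constitutive callables must be provided by the user
# sol = solve_ivp(richards_theta_rhs, (0.0, T), theta0,
#                 args=(z, beta, K, f, v, g), method="BDF", rtol=1e-6)
```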
The non-homogeneous Dirichlet BCs are given by two functions v,g:(0,T)→ℝ, where the BC at z = 0 is the control input v, which describes the irrigation strategy over the time horizon (0,T), while the BC at z = Z is a given function g modeling the interaction with the environment below the root zone, with θ_r < v(t), g(t) < θ_S for a.e. t∈ (0,T). Let us notice that the first equation in (<ref>) is the Richards' equation. In fact, we can easily compute that ∂β^*(θ)/∂ z = (∂β^*/∂θ)(∂θ/∂ h)(∂ h/∂ z) = (K(h)/C(h)) C(h) ∂ h/∂ z = K(h) ∂ h/∂ z, which provides the diffusion term in the classical mixed form of the Richards' equation <cit.>. § EXISTENCE OF SOLUTIONS Following <cit.>, we first reduce system (<ref>) to a problem with homogeneous Dirichlet BCs. For this purpose, we assume the following hypothesis (𝐇_ω) ∃ ω:Q→ℝ such that ω∈ L^2(0,T; H^1(0,Z))∩ L^∞(Q) , ω_t∈ L^2(Q) , ‖ω‖_L^∞(Q) < θ_S , essinf_(z,t)∈ Q ω(z,t) >θ_r , ω(0,t) = v(t) , ω(Z,t) = g(t) , for a.e. t ∈ (0,T) , where ω_t denotes the time derivative in the sense of distributions from (0, T) to L^2(0,Z). Thus, the function ω is defined on the cylinder Q and it attains the boundary values of v and g at z=0 and z = Z, respectively. Then, introducing the function ϕ = θ - ω, system (<ref>) is equivalent to ∂ϕ/∂ t - ∂^2 F^ω(ϕ)/∂ z^2 + ∂ K(ϕ + ω)/∂ z = f^B(ϕ) - ω_t in Q , ϕ(z,0) = ϕ_0(z) in (0,Z) , ϕ(0,t) = 0 for t∈ (0,T) , ϕ(Z,t) = 0 for t∈ (0,T) , where ϕ_0 := θ_0 - ω_0 and, for all ϕ∈ V, F^ω(ϕ) := β^*(ϕ + ω) - β^*(ω) , f^B(ϕ) := f(ϕ + ω) + ∂^2β^*(ω)/∂ z^2 . We now introduce a suitable functional framework for (<ref>). Let us consider the Gelfand triple H = L^2(0,Z), V = H^1_0(0,Z), and its dual V' = H^-1(0,Z), with their usual norms. Then, (<ref>) is equivalent to the abstract equation dϕ/d t + B(t)ϕ = f^B(ϕ) - ω_t a.e. t∈ (0,T) , ϕ(0) = ϕ_0 , where the operator B(t):V→ V' is defined by ⟨ B(t)ϕ,ψ⟩_V',V = ∫_0^Z (∂ F^ω(ϕ(t))/∂ z - K(ϕ(t) + ω(t)))∂ψ/∂ z dz ∀ϕ, ψ∈ V . Notice that, thanks to (𝐇_ω), we have that ∂^2β^*(ω)/∂ z^2∈ L^2(0,T;V'). Indeed, for all φ∈ V = H^1_0(0,Z), ∫_0^Z -∂^2 β^*(ω)/∂ z^2 φ dz = ∫_0^Z ∂β^*(ω)/∂ z ∂φ/∂ z dz ≤‖β(ω)∂ω/∂ z‖ ‖∂φ/∂ z‖≤ M_ω‖φ‖_V . Hence ‖∂^2 β^*(ω)/∂ z^2‖_V'≤ M_ω, and thus f^B - ω_t∈ L^2(0,T;V'). In particular, this implies that the right-hand side of (<ref>) is also in L^2(0,T;V'). Let θ_0∈ L^2(0,Z) and f:[θ_r,θ_S]→ℝ be Lipschitz continuous. We say that the function ϕ∈ C([0,T];L^2(0,Z)) is a solution to (<ref>) if dϕ/dt∈ L^2(0,T;V'), F^ω∈ L^2(0,T;V), ϕ(0) = ϕ_0 and, for a.e. t∈ (0,T) and for all ψ∈ V, ⟨dϕ/dt(t),ψ⟩_V',V + ⟨ B(t)ϕ,ψ⟩_V',V = ⟨ f^B(ϕ) - ω_t,ψ⟩_V',V . Existence of solutions to (<ref>) with a source term independent of the water content θ is proved in <cit.>. For our purpose, we need to extend such well-posedness result to the case of a nonlinear source term f(θ), as in system (<ref>). This can indeed be achieved by following a standard Galerkin approximation approach (see, e.g., <cit.>). Assume (𝐇_ω), θ_0 ∈ L^2(0,Z), and that f:[θ_r,θ_S]→ℝ is Lipschitz continuous. Then the problem (<ref>) admits a unique solution ϕ∈ C([0, T];L^2(0,Z)) ∩ L^2(0, T;V) with dϕ/dt∈ L^2(0, T; V') and F^ω(ϕ)∈ L^2(0, T; V). Therefore, system (<ref>) admits a unique solution θ∈ L^2(0,T;H^1(0,Z))∩ C([0,T];H) with dθ/dt∈ L^2(0,T;V') and β^*(θ)∈ L^2(0,T;H^1(0,Z)). Moreover, for appropriate initial conditions, we can prove that the solution stays away from the saturation value θ_S uniformly in time. To this aim, we introduce the function j:ℝ→ (-∞,∞] defined by j(r) := ∫_0^r β^*(ξ) dξ if r < θ_S, and j(r) := ∞ if r ≥θ_S, and the space M_j := {θ∈ L^2(0,Z) : j(θ)∈ L^1(0,Z)} .
Assume (𝐇_ω), θ_0 ∈ M_j, and that f:[θ_r,θ_S]→ℝ is Lipschitz continuous. Moreover, assume that f is non-negative, that is, there exists f_m∈ [0,∞) such that f_m≤ f, and essinf_x∈ (0,Z)θ_0(x)≥ 0 , θ_m(t)≤ g(t),v(t) < θ_S , for all t∈ [0,T] , where θ_m(t) = essinf_x∈ (0,Z)θ_0(x) + f_m t . Then the solution θ to problem (<ref>) satisfies θ_m(t)≤θ(x,t) < θ_S , for all (x,t)∈{0,Z}× [0,T] . § THE OPTIMAL CONTROL PROBLEM In this section we formally derive the first order necessary optimality conditions for the cost functional J(θ,u) = 1/2∫_Q f(θ(z,t)) - 1^2 dzdt + λ/2∫_0^T u(t)^2 dt , where u = v - θ_r, v is the control that appears in (<ref>), λ > 0 is the coefficient of the control cost, f:[θ_r,θ_S]→ℝ describes the normalized root water uptake model as in (<ref>), and θ is the solution to (<ref>) with f as the sink term. Roughly speaking, the performance index (<ref>) optimizes the root water uptake (see, for example, expression (<ref>) in Section <ref>, where f is maximized when f≡1) while minimizing the irrigation cost u. In this setting, it is natural to consider the following space of admissible control U_ad := {u∈ L^∞(0,T) : 0≤ u(t) < θ_S - θ_r for a.e. t∈ (0,T)} . Fixing g ∈ L^2(0, T), θ_0 ∈ L^2(0,Z), we introduce the control-to-state operator Λ: U_ad→ C([0, T ];L^2(0,Z)) such that u∈ U_ad↦θ∈ C([0, T ]; L^2(0,Z)) solution of (<ref>). Theorem <ref> ensures that the mapping Λ is well-posed. We can thus reformulate the minimization of a functional J̃(θ,u) constrained to the control system (<ref>) in terms of the so-called reduced cost functional J : U_ad→ℝ defined by J (u) := J̃(Λ(u), u). We first introduce the Lagrangian functional ℒ(θ,u, p) = J(θ,u) - ∫_Q [∂θ/∂ t - ∂/∂ z(β∂θ/∂ z) + ∂ K(θ)/∂ z - f] p dzdt - ∫_0^T(θ(0,t) - u(t))p_1 dt - ∫_0^T(θ(Z,t) - g(t))p_2 dt, where p = (p,p_1,p_2) are adjoint variables that will be useful to find a representation of the optimal control. After integration by parts, we can rewrite the Lagrangian functional as ℒ(θ,u, p) = J(θ,u) - ∫_Q ∂θ/∂ t p + β∂θ/∂ z∂ p/∂ z + ( ∂ K(θ)/∂ z - f)p dzdt + ∫_0^T[(β∂θ/∂ z p)_| z = Z - (β∂θ/∂ z p)_| z = 0] dt + - ∫_0^T(θ(0,t) - u(t))p_1 dt - ∫_0^T(θ(Z,t) - g(t))p_2 dt . Hereafter, we shall assume that the source term f∈ H^1(θ_r,θ_S) to justify the following computations. In order to derive the first order optimality conditions of problem (<ref>)-(<ref>) with input constraints (<ref>), we enforce the condition D_θℒ(θ^*,u^*,p^*)θ = 0 for all θ, that determines the equation satisfied by the adjoint variable p; and the condition D_uℒ(θ^*,u^*,p^*)· (u - u^*)≥ 0 for all u∈ U_ad, that returns the optimality condition satisfied by any optimal control u^*. After direct computations, we get that D_θℒ(θ^*,u^*, p)θ = - ∫_0^Z[θ(z,T) p(z,T) - θ(z,0) p(z,0)] dz + ∫_Q θ[∂ p/∂ t + (1 - f(θ^*)) df/dθ(θ^*) + β∂^2 p/∂ z^2 + dK/dθ(θ^*) ∂ p/∂ z] dzdt + ∫_0^T[∂θ/∂ z(Z,t) (β(θ^*)p)_| z = Z - ∂θ/∂ z(0,t) (β(θ^*)p)_| z = 0] dt + ∫_0^T θ(Z,t)[- β(θ^*) ∂ p/∂ z - dK/dθ(θ^*) p + dβ/dθ(θ^*) ∂θ^*/∂ z p - p_2] dt - ∫_0^T θ(0,t)[- β(θ^*) ∂ p/∂ z - dK/dθ(θ^*) p + dβ/dθ(θ^*) ∂θ^*/∂ z p + p_1] dt . Thus, we deduce that the adjoint variable p satisfies ∂ p/∂ t + β(θ^*) ∂^2 p/∂ z^2 + dK/dθ(θ^*) ∂ p/∂ z = F(θ^*) in Q, p(z,T) = 0 in (0,Z) , p(0,t) = p(Z,t) = 0 for t∈ (0,T) , p_1 = (β(θ^*) ∂ p/∂ z)_| z = 0 p_2 = -(β(θ^*) ∂ p/∂ z )_| z = Z where F(θ)[f( θ) - 1] df/dθ(θ). 
On the other hand, since D_u ℒ(θ^*,u^*, p)u = ∫_0^T (λ u^* + p_1) u dt , the condition D_uℒ(θ^*,u^*,p^*)· (u - u^*)≥ 0 for all u∈ U_ad implies the optimality condition ⟨λ u^*(t) + (β(θ^*) ∂ p/∂ z )_| z = 0,u - u^*⟩_L^2(0,T)≥ 0 for all u∈ U_ad. We thus obtain that any optimal solution (θ^*,u^*,p^*) of problem (<ref>)-(<ref>)-(<ref>) must satisfy the optimality system ∂θ/∂ t - ∂^2β^*(θ)/∂ z^2 + ∂ K(θ)/∂ z = f in Q, θ(z,0) = θ_0(z) in (0,Z), θ(0,t) = v(t) for t∈ (0,T), θ(Z,t) = g(t) for t∈ (0,T), ∂ p/∂ t + β(θ^*) ∂^2 p/∂ z^2 + dK/dθ(θ^*) ∂ p/∂ z = F(θ^*) in Q, p(z,T) = 0 in (0,Z) , p(0,t) = p(Z,t) = 0 for t∈ (0,T) , ⟨λ u^*(t) + (β(θ^*) ∂ p/∂ z )_| z = 0,u - u^*⟩_L^2(0,T)≥ 0 for all u∈ U_ad, where we recall that v = θ_r + u. In the next section, we exploit this optimality system to build suitable algorithms to numerically solve the optimal control problem (<ref>)-(<ref>). § ALGORITHM AND NUMERICAL SIMULATIONS Our optimization procedure will follow the Projected Gradient Descent (PGD) described in Algorithm <ref> (see <cit.> for a thorough introduction to such optimization algorithms). However, when solving (<ref>) with PGD, it could happen that Theorem <ref> is not satisfied at each iteration, thus incurring numerical difficulties due to the singularity of water diffusivity β at θ = θ_S. Therefore, we shall approximate it by truncation: given a small ε > 0, we define β_ε(r)β(r), r≤θ_S - ε, β(θ_S -ε), r > θ_S -ε, as shown in the Figure <ref>. Regularization (<ref>) is a standard technique when dealing with Richards' equation to handle singularities in the diffusion term, and it is used in finite difference schemes <cit.> or FEM <cit.> for both the mathematical and numerical analysis of degenerate, and possibly doubly-degenerate, parabolic equations. In the following simulations, we are then actually computing the numerical solutions to (<ref>) after replacing β with β_ε as defined in (<ref>). Moreover, we have set a maximum number of iterations equal to 100 before exiting PGD iterations, a tolerance of 10^-5, and a regularization parameter ε=10^-3 for water diffusivity in (<ref>). We stress that the order of magnitude of ε has been chosen so to be consistent with that of the different θ_S values selected in all the simulations that follow. Moreover, we select a root water uptake model of Feddes type (used, for instance, in <cit.>) as source term in (<ref>). Its expression is given by f(h)=φf̂(h), f̂(h) 0, if h_1≤ h≤0 or h≤ h_4, h-h_1/h_2-h_1, if h_2< h<h_1, 1, if h_3≤ h ≤ h_2, h-h_4/h_3-h_4, if h_4< h<h_3, with the following values, in cm: h_4≈-820, h_3≈-400, h_2≈-350, h_1=0. Also, we set φ=0.1/Z, where Z is the soil depth, and λ=0.1 in (<ref>). Let us notice that the maximum value for f̂(h) in (<ref>) is set to 1 for normalization purposes. In fact, when it comes to practical problems, one uses f(h) properly rescaled according to experimental evidences through the factor φ, which is the ratio of the potential transpiration rate and the rooting depth, as explained in <cit.>. Moreover, we stress that in general one does not necessarily require the source term f(h) to be zero for the values of h corresponding to the boundary of [θ_r,θ_S]. However, from a physical point of view, it makes sense for a source term to vanish when the soil is either dry or saturated. This is exactly the case of Feddes-type source terms as the one we consider in (<ref>) for our numerical simulations. 
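A minimal sketch of how the optimality system and the projection onto U_ad enter the PGD loop is the following. The routines solve_state and solve_adjoint are assumed placeholders for discretizations of the state equation (with β replaced by β_ε) and of the adjoint equation derived above; their names, array layout and the step size are illustrative choices, not part of the paper.

```python
import numpy as np

def pgd_irrigation(u0, solve_state, solve_adjoint, beta_eps, lam, dz,
                   theta_r, theta_S, step=1e-2, max_iter=100, tol=1e-5):
    """Projected Gradient Descent sketch for the reduced cost J(u).

    Assumed placeholders (not defined in the paper):
      solve_state(u)       -> theta, array of shape (Nz, Nt): solution of the state system
                              with boundary control v = theta_r + u and diffusivity beta_eps;
      solve_adjoint(theta) -> p, array of shape (Nz, Nt): solution of the adjoint system;
      beta_eps             -> truncated water diffusivity, evaluated elementwise.
    """
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_iter):
        theta = solve_state(u)
        p = solve_adjoint(theta)
        # reduced gradient lam*u(t) + (beta_eps(theta)*dp/dz)|_{z=0}, one-sided difference in z
        grad = lam * u + beta_eps(theta[0, :]) * (p[1, :] - p[0, :]) / dz
        u_new = np.clip(u - step * grad, 0.0, theta_S - theta_r)   # projection onto (the closure of) U_ad
        if np.linalg.norm(u_new - u) <= tol * max(1.0, np.linalg.norm(u)):
            return u_new
        u = u_new
    return u
```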
In Example <ref> and Example <ref> below, we simulate a soil described by the Haverkamp model <cit.>, whose constitutive relations are given by θ(h) = α(θ_S - θ_r)/(α + |h|^β_2) + θ_r , K(h) = K_S A/(A + |h|^β_1) , representing the water retention curve and the hydraulic conductivity, respectively. We first verify that the Haverkamp model falls within the quasi-unsaturated model for a suitable choice of the parameters involved in the model setting. In fact, in the Haverkamp model we have β(θ(h)) = K_S A (α+(-h)^β_2)^2 / [ (A+(-h)^β_1) α (θ_S-θ_r) β_2 (-h)^(β_2-1) ] with β_1,β_2>0, which is Lipschitz, monotonically increasing and bounded from below. In order to satisfy assumption (𝐇_β), we shall ensure that lim_h↗0^-β(θ(h))=+∞. From (<ref>), a straightforward computation provides that this condition is satisfied if and only if β_2>1. This is the case of the sandy soil considered in <cit.>, with parameters K_S=34 cm/h, A=1.175× 10^6, β_1=4.74, θ_S=0.287, θ_r=0.075, α=1.611×10^6, β_2=3.96, where the root water uptake model is as in (<ref>). We have performed a simulation of T=3 hours with a maximum depth of Z=70 cm. As in (<ref>), the boundary condition at the top varies in time according to the irrigation strategy: θ(0,t)=θ_top(t)=u(t)+θ_r, while the bottom condition has been chosen so as to be constant over time: θ(Z,t)=θ_bottom(t)=0.9θ_r+0.1θ_S, t∈[0,T]. Finally, the initial condition varies linearly with depth as θ(z,0)=θ_top(0)+z(θ_bottom(0)-θ_top(0))/Z, z∈[0,Z]. Our simulations have been produced using algorithm; we report that the same results are obtained using . We have observed that convergence is reached after 3 iterates within the given tolerance, and the numerical solution is locally optimal. Results are in Figure <ref>. As can be seen, the optimal control framework succeeds in determining an optimal control that optimizes the performance index (<ref>), with a reduced water consumption and average water content over time. In this second simulation, using the same soil as in Example <ref>, we consider a time-varying bottom condition θ(Z,t)=θ_bottom(t)=(1-t/T)θ_b1+(t/T)θ_b2, t∈[0,T], where θ_b1 := 0.9θ_r+0.1θ_S, θ_b2 := 0.7θ_r+0.3θ_S, while the top condition is given by θ(0,t)=θ_top(t)=u(t)+θ_r. Moreover, the initial condition is θ(z,0)=θ_top(0)+z(θ_top(0)-θ_bottom(0))/Z, z∈[0,Z]. It turns out that converges, in 3 iterates, to a locally optimal solution, further providing the best results if compared to . Results are displayed in Figure <ref>. In Example <ref> and Example <ref> that follow, we consider the classical Van Genuchten-Mualem constitutive relations in the unsaturated zone, given by θ(ψ) = θ_r + (θ_S - θ_r)/(1 + |αψ|^n)^m , m := 1 - 1/n , k(ψ) = K_S [1/(1 + |αψ|^n)]^(m/2) [1 - (1 - 1/(1 + |αψ|^n))^m]^2 . In order to verify under which conditions the Van Genuchten-Mualem model satisfies the quasi-unsaturated model, we need to analyze its corresponding function β(θ(h)). Letting φ(h) := 1/(1 + |α h|^n), from (<ref>) and exploiting the fact that h<0, it follows that β(θ(h)) = K_S [1-(1-φ(h))^m]^2 / [ m n α^n (θ_S-θ_r) |h|^(n-1) φ(h)^(m/2+1) ]. It is an easy computation that β is Lipschitz, monotonically increasing and bounded from below. In order to satisfy assumption (𝐇_β), we need lim_h↗0^-β(θ(h))=+∞. From (<ref>), this condition is satisfied if and only if n>1. This is the case for the simulations reported below. More specifically, we are going to consider a Berino loamy fine sand and a Glendale clay loam, with parameters drawn from <cit.>. The Berino loamy fine sand is defined by the following hydraulic parameters: θ_r = 0.0286, θ_S = 0.3658, α = 0.0280, n = 2.2390, K_S = 22.5416 cm/h.
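For reference, the constitutive laws and regularizations entering the examples can be written out directly in code. The snippet below only restates the Van Genuchten-Mualem relations with the Berino parameters quoted above (the Haverkamp case is entirely analogous), together with the truncated diffusivity β_ε and the normalized Feddes uptake f̂ introduced earlier; the inverse retention curve h_of_theta is an added convenience helper, and this sketch is not the MATLAB implementation used for the simulations.

```python
import numpy as np

# Van Genuchten-Mualem relations with the Berino loamy fine sand parameters quoted above
theta_r, theta_S, alpha, n, K_S = 0.0286, 0.3658, 0.0280, 2.2390, 22.5416   # K_S in cm/h
m = 1.0 - 1.0 / n

def phi(h):                       # shorthand 1/(1 + |alpha*h|^n), for pressure head h < 0
    return 1.0 / (1.0 + np.abs(alpha * h) ** n)

def theta_of_h(h):                # water retention curve
    return theta_r + (theta_S - theta_r) * phi(h) ** m

def k_of_h(h):                    # Mualem hydraulic conductivity
    return K_S * phi(h) ** (m / 2) * (1.0 - (1.0 - phi(h)) ** m) ** 2

def beta_of_h(h):                 # water diffusivity beta(theta(h)) = k(h)/C(h), as above
    return (K_S * (1.0 - (1.0 - phi(h)) ** m) ** 2
            / (m * n * alpha ** n * (theta_S - theta_r)
               * np.abs(h) ** (n - 1) * phi(h) ** (m / 2 + 1)))

def h_of_theta(theta):            # inverse retention curve (helper; theta_r < theta < theta_S)
    p = ((theta - theta_r) / (theta_S - theta_r)) ** (1.0 / m)
    return -(1.0 / alpha) * (1.0 / p - 1.0) ** (1.0 / n)

def beta_eps(theta, eps=1e-3):    # truncated diffusivity: freeze beta above theta_S - eps
    return beta_of_h(h_of_theta(np.minimum(theta, theta_S - eps)))

def feddes_hat(h, h1=0.0, h2=-350.0, h3=-400.0, h4=-820.0):
    """Normalized Feddes uptake f_hat(h) with the threshold values quoted above (h in cm)."""
    h = np.asarray(h, dtype=float)
    out = np.zeros_like(h)                                   # 0 for h >= h1 or h <= h4
    out = np.where((h > h2) & (h < h1), (h - h1) / (h2 - h1), out)
    out = np.where((h >= h3) & (h <= h2), 1.0, out)
    out = np.where((h > h4) & (h < h3), (h - h4) / (h3 - h4), out)
    return out
```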
Here, as in Example <ref>, we consider a time-varying bottom condition θ(Z,t)=θ_bottom(t)=(1-t/T)θ_b1+t/Tθ_b2, t∈[0,T], where θ_b10.3θ_r+0.7θ_S, θ_b20.1θ_r+0.9θ_S; boundary condition at the top of the domain is, again as in Example <ref>, θ(0,t)=θ_top(t)=u(t)+θ_r. However, now initial condition is a quadratic polynomial function of depth and is set as θ(z,0)(θ_ bottom(0)-θ_ top(0))(z/Z)^2+θ_ top(0), z∈[0,Z]. Results relative to this soil are depicted in Figure <ref> and are obtained using , where Z=50 cm and T=12 hours. Simulations on the Glendale clay loam are obtained using the following parameters: θ_r = 0.1060, θ_S = 0.4686, α = 0.0104, n = 1.3954, K_S = 0.5458 cm/h. For this experiment, we fix Z=30 cm and T=36 hours; bottom boundary condition is given by θ(Z,t)=θ_bottom(t)=(1-t/T)θ_b1+t/Tθ_b2, t∈[0,T], where θ_b10.5θ_r+0.5θ_S, θ_b20.7θ_r+0.3θ_S, and top boundary condition is, as in previous examples, θ(0,t)=θ_top(t)=u(t)+θ_r. Initial condition is again set as θ(z,0)(θ_ bottom(0)-θ_ top(0))(z/Z)^2+θ_ top(0), z∈[0,Z]. Results are depicted in Figure <ref>. Here, we employed to for solving the optimization problem by PGD. § CONCLUSIONS In this paper we introduce an optimal control approach aimed at optimizing the water content provided by irrigation, applying Richards' equation for unsaturated flow. We make use of quasi-unsaturated model introduced in <cit.>, extending the well-posedness results for nonlinear sink terms and deriving suitable optimality conditions for an irrigation performance index of tracking type. We set the model within a MATLAB solver by implementing a properly adapted Projected Gradient Descent method, and provide significant numerical results over a meaningful variety of soils; a deeper analytical treatise of the control system is beyond the scopes of this paper, and it is currently under investigations by the authors. This paper could pave the way to an extensive use of control techniques for optimizing irrigation in real life applications, and this framework could easily be incorporated in existing irrigation software based on Richards' equation solvers. Moreover, it is worth investigating qualitative features of the more general saturated-unsaturated model, for which there is an increasing need of both numerical and analytical results and approaches. In this context, tools from set-valued analysis and discrete control techniques could carry improvements in understanding such problems. § ACKNOWLEDGMENTS MB acknowledges the partial support of RIUBSAL project funded by Regione Puglia under the call “P.S.R. Puglia 2014/2020 - Misura 16 – Cooperazione - Sottomisura 16.2 “Sostegno a progetti pilota e allo sviluppo di nuovi prodotti, pratiche, processi e tecnologie”: in particular he thanks Mr. Giuseppe Leone and Mrs. Gina Dell'Olio for supporting the project activities; FVD has been supported by REFIN Project, grant number 812E4967, funded by Regione Puglia: both authors acknowledge the partial support of GNCS-INdAM. RG acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number RGPIN-2021-02632. elsarticle-num
http://arxiv.org/abs/2307.07405v1
20230714153145
Performance of $\ell_1$ Regularization for Sparse Convex Optimization
[ "Kyriakos Axiotis", "Taisuke Yasuda" ]
cs.LG
[ "cs.LG", "cs.DS", "math.ST", "stat.ML", "stat.TH" ]
Despite widespread adoption in practice, guarantees for the LASSO and Group LASSO are strikingly lacking in settings beyond statistical problems, and these algorithms are usually considered to be a heuristic in the context of sparse convex optimization on deterministic inputs. We give the first recovery guarantees for the Group LASSO for sparse convex optimization with vector-valued features. We show that if a sufficiently large Group LASSO regularization is applied when minimizing a strictly convex function l, then the minimizer is a sparse vector supported on vector-valued features with the largest ℓ_2 norm of the gradient. Thus, repeating this procedure selects the same set of features as the Orthogonal Matching Pursuit algorithm, which admits recovery guarantees for any function l with restricted strong convexity and smoothness via weak submodularity arguments. This answers open questions of Tibshirani et al. <cit.> and Yasuda et al. <cit.>. Our result is the first to theoretically explain the empirical success of the Group LASSO for convex functions under general input instances assuming only restricted strong convexity and smoothness. Our result also generalizes provable guarantees for the Sequential Attention algorithm, which is a feature selection algorithm inspired by the attention mechanism proposed by Yasuda et al. <cit.>. As an application of our result, we give new results for the column subset selection problem, which is well-studied when the loss is the Frobenius norm or other entrywise matrix losses. We give the first result for general loss functions for this problem that requires only restricted strong convexity and smoothness. § INTRODUCTION A common task in modern machine learning is to sparsify a large model by selecting a subset of its inputs. This often leads to a number of improvements to the model such as interpretability and computational efficiency due to the smaller size of the model, as well as improved generalizability due to removal of noisy or redundant features. For these reasons, feature selection and sparse optimization are heavily studied subjects in signal processing, statistics, machine learning, and theoretical computer science. We continue this line of investigation by studying the following sparse optimization problem <cit.>: design an efficient algorithm that, given l:ℝ^n→ℝ and a sparsity parameter k, outputs a sparse solution x̂ such that l(0) - l(x̂) ≥ γ ( l(0) - min_{x∈ℝ^n : ‖x‖_0 ≤ k} l(x) ), where ‖x‖_0 := |{ i∈[n] : x_i ≠ 0 }|, for some approximation factor γ > 0. In practice, there is also much interest in feature selection for vector-valued features, due to the widespread usage of vector representations of discrete features via embeddings <cit.>, as well as for applications to block sparsification for hardware efficiency <cit.>, structured sparsification when pruning neurons in neural nets <cit.> or channels and filters in convolutional nets <cit.>. In such vector-valued or group settings, the n inputs x∈ℝ^n are partitioned into t disjoint groups of features T_1, T_2, …, T_t⊆[n], and we would like to select whole groups of features at a time.
We thus also study the question of solving l(0) - l(x̂) ≥ γ ( l(0) - min_{x∈ℝ^n : |{ i∈[t] : x|_{T_i} ≠ 0 }| ≤ k} l(x) ), where x|_{T_i} denotes the T_i-dimensional vector obtained by restricting x to the coordinates j∈ T_i.[We also allow x|_{T_i} to denote the corresponding n-dimensional vector padded with zeros outside of T_i whenever this makes sense.] Although problems (<ref>) and (<ref>) are computationally challenging in general <cit.>, a multitude of highly efficient algorithms have been proposed for solving these problems in practice. Perhaps one of the most popular algorithms in practice is the use of ℓ_1 regularization. That is, if we wish to optimize a function l:ℝ^n→ℝ over k-sparse inputs x∈ℝ^n with ‖x‖_0 ≤ k, then we instead optimize the ℓ_1-regularized objective min_{x∈ℝ^n} l(x) + λ‖x‖_1. The resulting optimal solution x^* often has few nonzero entries and thus helps identify a sparse solution. This idea was first introduced for the linear regression problem by Tibshirani <cit.>, known as the LASSO in this case, and has subsequently enjoyed wide adoption in practice in applications far beyond the original scope of linear regression. For the group sparsification setting, one can consider a generalization of the LASSO known as the Group LASSO <cit.>, which involves minimizing the following objective: min_{x∈ℝ^n} l(x) + λ∑_{i=1}^t ‖x|_{T_i}‖_2 That is, the regularizer is now the sum of the ℓ_2 norms of each group of variables T_i for i∈[t]. In practice, this encourages groups of variables to be selected at a time, which facilitates feature selection in the group setting. We refer the reader to the monograph <cit.> on the LASSO and its generalizations for further references and discussion. §.§ Related work: prior guarantees for L1 regularization Due to the practical importance of solving (<ref>) and (<ref>), there has been an intense focus on theoretical work surrounding these optimization problems, especially for the sparse linear regression problem, i.e., when l(x) = ‖Ax - b‖_2^2 is the least squares objective for a design matrix A and target vector b. However, as remarked in a number of works <cit.>, recovery guarantees for the LASSO and the Group LASSO are strikingly lacking in settings beyond statistical problems with average-case inputs or strong assumptions on the input, and these algorithms are usually considered to be heuristics in the context of sparse convex optimization for deterministic inputs. For example, one line of work focuses on the linear regression problem in the setting where the target vector b is exactly a k-sparse linear combination of the columns of A plus i.i.d. Gaussian noise, and we seek guarantees on the solution to (<ref>) <cit.> when A satisfies the restricted isometry property (RIP) or its various relaxations such as the restricted eigenvalue condition (RE). This can be viewed as an instantiation of (<ref>) for l(x) = ‖Ax - b‖_2^2, under the assumption that there exists an approximate global optimum of l that is exactly k-sparse. Statistical consistency results have also been established, which also assume a “true” k-sparse target solution <cit.>. A more recent line of work has studied algorithms for the sparse linear regression problem under a correlated Gaussian design matrix with other general structural assumptions on the covariance matrix <cit.>. All of these works exclude the consideration of worst-case error on a desired k-sparse target solution, which is an undesirable restrictive assumption when solving (<ref>) in general.
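As a concrete illustration of how the Group LASSO regularizer above induces group sparsity, a standard way to minimize this objective is proximal gradient descent, in which each group is block soft-thresholded and blocks with small magnitude are zeroed out exactly. The sketch below is illustrative only: the smooth loss is assumed to be supplied through its gradient grad_l, and this is not the algorithm analyzed in this paper.

```python
import numpy as np

def block_soft_threshold(z, tau):
    """Proximal map of tau*||.||_2 on one group: shrink the whole block toward zero."""
    nrm = np.linalg.norm(z)
    return np.zeros_like(z) if nrm <= tau else (1.0 - tau / nrm) * z

def group_lasso_prox_grad(grad_l, x0, groups, lam, step, iters=500):
    """Proximal gradient (ISTA-style) sketch for min_x l(x) + lam * sum_i ||x|_{T_i}||_2.

    grad_l : callable returning the gradient of the smooth loss l (assumed supplied);
    groups : list of index arrays T_1, ..., T_t partitioning [n];
    step   : step size, e.g. 1/L for an L-smooth loss.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        y = x - step * grad_l(x)                 # gradient step on the smooth part l
        for T in groups:                         # exact prox of the separable group penalty
            x[T] = block_soft_threshold(y[T], step * lam)
    return x
```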
Indeed, one of the most remarkable aspects about the LASSO is its empirical success on a wide variety of real input distributions that can be far from Gaussian or even general i.i.d. designs. Thus, gaining a theoretical explanation of the success of the LASSO in more general settings is a critical question in this literature. Why are the LASSO and Group LASSO successful on general input distributions, beyond statistical settings? An important exception is the work of <cit.>, which establishes that in the setting of sparse linear regression, a sequential variation on the LASSO known as the Sequential LASSO <cit.>, in which the LASSO is applied sequentially k times to select k inputs one at a time, is in fact equivalent to the Orthogonal Matching Pursuit algorithm (OMP) <cit.>.[We also note a work of <cit.>, which proposes a similar procedure called the Strong Sequential Rule of sequentially zeroing out variables using the LASSO, but does not obtain provable guarantees for the resulting selected features.] The work of <cit.> showed that OMP achieves bounds of the form of (<ref>) whenever satisfies a restricted isometry property in the absence of additional distributional assumptions on the input instance. From <cit.>, it follows that the Sequential LASSO does as well. Thus, the works of <cit.> provide a form of an answer to Question <ref> for the sparse linear regression problem, for general inputs with RIP. Given the previous success of analyzing the LASSO for general inputs under RIP, one may ask for generalizations of this result to other objective functions, such as generalized linear models, logistic regression, or even general sparse convex optimization. Indeed, as mentioned previously, the LASSO and Group LASSO are used in practice in settings far beyond linear regression, and fast algorithms for solving the optimization problems of (<ref>) and (<ref>) are plentiful in the literature <cit.>. However, none of these works provide satisfactory answers on why the LASSO and Group LASSO are successful at selecting a good sparse set of inputs. Why are the LASSO and Group LASSO successful on general convex objectives, beyond ℓ_2 linear regression? Why do they select a sparse set of inputs? Which inputs are chosen? While the work of <cit.> provides answers for the sparse linear regression problem by showing that the selected inputs are precisely the inputs selected by OMP, their analysis relies on specific geometric properties of the linear regression loss such as the Pythagorean theorem and the fact that the dual of the LASSO objective is a Euclidean norm projection onto a polytope <cit.>, and thus the techniques there do not immediately generalize even to specific problems such as ℓ_p regression or regularized logistic regression. Such a generalization is left as a central open question in their work. Similarly, the work of <cit.> asks the question of why sequentially discarding variables using the LASSO performs so well. §.§ Our results The main result of this work is a resolution of Question <ref> for both the LASSO (<ref>) and the Group LASSO (<ref>) setting for any strictly convex objective function l. To state our results, we first recall the (Group) Sequential LASSO and (Group) OMP algorithms in Algorithms <ref> and <ref>, which are both iterative algorithms that maintain a set of selected features S⊆[t] by adding one feature at a time starting with S = ∅. 
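Before the formal statements, a hedged sketch of the Group OMP loop may help: repeatedly minimize l restricted to the currently selected groups, then add the unselected group with the largest ℓ_2 norm of the gradient at that restricted minimizer. The theorem stated below shows that a single Group Sequential LASSO step with λ just below the threshold τ selects the same group. The sketch is our own (it is not the paper's pseudocode), and it assumes l and its gradient are available as callables; the use of scipy.optimize.minimize is our choice for the restricted minimization.

```python
import numpy as np
from scipy.optimize import minimize

def restricted_minimizer(loss, grad, n, support_idx):
    """Minimize loss over vectors supported on support_idx (zero elsewhere)."""
    idx = np.asarray(sorted(support_idx), dtype=int)
    if idx.size == 0:
        return np.zeros(n)
    def embed(z):
        x = np.zeros(n)
        x[idx] = z
        return x
    res = minimize(lambda z: loss(embed(z)),
                   np.zeros(idx.size),
                   jac=lambda z: grad(embed(z))[idx],
                   method="L-BFGS-B")
    return embed(res.x)

def group_omp(loss, grad, n, groups, k):
    """Greedily select k groups by largest ||grad|_{T_i}||_2 at the restricted minimizer."""
    selected, support = [], set()
    for _ in range(k):
        x_inf = restricted_minimizer(loss, grad, n, support)   # x^infty in the paper's notation
        g = grad(x_inf)
        scores = [np.linalg.norm(g[T]) if i not in selected else -np.inf
                  for i, T in enumerate(groups)]
        i_star = int(np.argmax(scores))
        selected.append(i_star)
        support |= set(groups[i_star].tolist())
    return selected, restricted_minimizer(loss, grad, n, support)
```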
We show that the result of <cit.> generalizes to the setting of group-sparse convex optimization: the Group Sequential LASSO update rule selects a group of features T_i⊆[n] that maximizes the ℓ_2 gradient mass ∇ l()|_T_i_2^2, i.e., the same update rule as Group OMP. Our analysis simultaneously gives a substantial simplification as well as a generalization of the analysis of <cit.>, which gives us the flexibility to handle both group settings as well as general convex functions. Let l:ℝ^n→ℝ be strictly convex. Let S⊆[t] be a set of currently selected features. For each λ>0, define ^λmin_∈ℝ^n l() + λ∑_i∈S|_T_i_2 and let τsupλ > 0: ∃ i∈S, ^λ|_T_i≠ 0 and let ^∞^τ = lim_λ→∞^λ. Then for λ = τ - for all >0 sufficiently small, ^λ|_T_i≠ 0 only if ∇ l(^∞)|_T_i_2^2 = max_j∈S∇ l(^∞)|_T_j_2^2. We give our discussion of this result in Section <ref>. In other words, if we add the Group LASSO regularization only on unselected features i∈S and take λ as large as possible without causing the solution ^λ to be zero, then ^λ must be supported on a group of features maximizing the ℓ_2 gradient mass at ^∞ among the unselected features i∈S. Furthermore, note that ^∞ is exactly the minimizer of l() subject to the constraint that |_T_i = 0 for every i∈S. Thus, in the non-group setting, this algorithm sequentially selects a feature i∈[n] that maximizes ∇ l(^∞)_i, which is exactly the OMP update rule analyzed in <cit.>. The works of <cit.> show that this OMP update rule gives a guarantee of the form of (<ref>) with an approximation factor γ depending on the restricted strong convexity (RSC) of l, which is a generalization of the RIP parameter for matrices to general functions. Thus, as reasoned in <cit.>, the Sequential LASSO for general functions l inherits this guarantee of OMP. We also show in Section <ref> that the group version of the OMP update rule obtained here based on selecting the group with the largest ℓ_2 gradient mass ∇ l()|_T_i_2^2 in fact also gives an analogous guarantee. In particular, we give guarantees for Group OMP both in the setting of outputting exactly k-group-sparse solutions (Corollary <ref>) as well as bicriteria solutions that use a slightly larger sparsity to get within an additive of the function value of the optimal k-sparse solution (Corollary <ref>), restated below. [Exactly k-group-sparse solutions]CorollaryCorSparseOMP After k iterations of Algorithm <ref>, ^∞ (Line <ref>) has group sparsity ^∞_≤ k and satisfies (<ref>) with γ = 1-exp*-μ_2k/L_1 , where μ_2k is a lower bound on the restricted strong convexity constant of l at group sparsity 2k and L_1 is an upper bound on the restricted smoothness constant of l at group sparsity 1 (see Definition <ref>). [Bicriteria sparsity with additive error]CorollaryCorBicriteriaOMP After k' iterations of Algorithm <ref>, for k' ≥ k·L_1/μ_k+k'logl(^(0)) - l(^*)/, then ^∞ (Line <ref>) has group sparsity ^∞_≤ k' and satisfies l(^∞) ≤ l(^*) + , where μ_k+k' is a lower bound on the restricted strong convexity constant of l at group sparsity k+k' and L_1 is an upper bound on the restricted smoothness constant of l at group sparsity 1 (see Definition <ref>). We additionally note that our analysis also immediately extends to an analysis of a local search version of OMP, known as OMP with Replacement (Algorithm <ref>) <cit.>, which gives a bicriteria sparsity bound which does not depend on (Corollary <ref>). 
[Bicriteria sparsity with additive error]CorollaryCorBicriteriaOMPR After R iterations of Algorithm <ref> with k'≥ k*L_2^2/μ_k+k'^2 + 1, for R ≥ k·L_2/μ_k+k'logl(^(0)) - l(^*)/, then ^∞ (Line <ref>) has group sparsity ^∞_≤ k' and satisfies l(^∞) ≤ l(^*) + , where μ_k+k' is a lower bound on the restricted strong convexity constant of l at group sparsity k+k' and L_2 is an upper bound on the restricted smoothness constant of l at group sparsity 2 (see Definition <ref>). This variant of OMP can be analogously simulated by the LASSO as well, leading to a new LASSO-based feature selection algorithm which we call (Group) Sequential LASSO with Replacement. §.§.§ Techniques Our main technique involves exploiting the correspondence between variables of a primal optimization problem with the gradient of the dual optimization problem, via the Fenchel–Young inequality (Theorem <ref>). We start with an observation given by <cit.>. When we take the dual of the LASSO objective, then the resulting problem involves minimizing the Fenchel dual l^* of l (Definition <ref>), subject to a hypercube constraint set. When the regularization λ is sufficiently large (say larger than some threshold τ), then this increases the size of the constraint set large enough to contain the global minimizer of the Fenchel dual l^*, and thus the gradient of l^* vanishes at this minimizer. Then by the equality case of the Fenchel–Young inequality, this implies that the corresponding primal variable is zero as well. On the other hand, if λ is smaller than this threshold point τ, only some coordinates will be unconstrained (i.e. strictly feasible), while others coordinates will become constrained by the smaller λ. In this case, the strictly feasible coordinates will have zero gradient, which leads to zeroes in the corresponding primal variable and thus a sparse solution. The argument until this point is known in prior work, and <cit.> used this observation to give an algorithm which tunes the value of λ such that at least k variables are selected in a single application, while <cit.> proposed a sequential procedure with better empirical performance. Our central observation, inspired by the work of <cit.>, is that if we regularize strongly enough such that only one feature is selected at a time via the LASSO, then this feature is the one maximizing the absolute value of the gradient. Indeed, note that if λ is just slightly smaller than the threshold point τ, then the global minimizer ^*∈ℝ^n of l^* just slightly violates exactly a single constraint in the dual problem, which corresponds to the feature i^*∈[n] with the largest absolute coordinate value ^*_i in the dual variable. We show that for such λ, all other coordinates j∈[n]∖{i^*} are unconstrained optimizers and thus the gradient is (Lemma <ref>). Thus, by the equality case of the Fenchel–Young inequality, this corresponds to a primal variable that is supported only on this coordinate i^*∈[n]. The crucial next step then is to apply the Fenchel–Young inequality again in the dual direction: via the Fenchel–Young inequality, this coordinate i^*∈[n] maximizes the absolute coordinate value of the dual variable , and thus is the coordinate that maximizes the absolute coordinate value of the gradient of the primal variable . Thus, this selects a coordinate which follows the first step of the OMP update rule. 
While we have sketched the proof only for this first step in the non-group setting, the analysis also carries through for all steps of the OMP algorithm, as well as for the group setting. Thus, this establishes the equivalence between (Group) Sequential LASSO and (Group) OMP for general convex functions. §.§.§ Connections to analysis of attention mechanisms As noted in <cit.>, we make a connection of our work to the analysis of recently popularized techniques for discrete optimization via continuous and differentiable relaxations inspired by the attention mechanism <cit.>. The attention mechanism can be viewed as a particular algorithm for the sparse optimization problem (<ref>), in which an additional set of variables ∈ℝ^n are introduced, and we solve a new optimization problem min_,∈ℝ^n l(()⊙), where ⊙ denotes the Hadamard (entrywise) product and ()∈ℝ^n is defined as ()_i exp(_i)/∑_j=1^n exp(_j). The idea is that serves as a measure of “importance” of each feature i∈[n], and the softmax allows for a differentiable relaxation for the operation of selecting the most “important” feature when minimizing the loss l. Alternatively, can be viewed as the amount of “attention” placed on feature i∈[n] by the algorithm. Such ideas have been applied extremely widely in machine learning, with applications to feature selection <cit.>, feature attribution <cit.>, permutation learning <cit.>, neural architecture search <cit.>, and differentiable programming <cit.>. Thus, it is a critical problem to obtain a theoretical understanding of subset selection algorithms of the form of (<ref>). The work of <cit.> showed that a slight variation on (<ref>) is in fact amenable to analysis when l is the problem of least squares linear regression. In this case, <cit.> show (using a result of <cit.>) that if we instead consider min_,∈ℝ^n l(⊙) + λ/2*_2^2 + _2^2 i.e., remove the softmax and add ℓ_2 regularization, then this is in fact equivalent to the ℓ_1-regularized problem considered in (<ref>). In Lemma <ref>, we show a generalization of this fact to the group setting, by showing that if we have t features corresponding to disjoint subsets of coordinates T_1, T_2, …, T_t⊆[n], then multiplying each of the features |_T_i by a single “attention weight” _i for ∈ℝ^t gives a similar correspondence to the Group LASSO algorithm (<ref>). Thus, the attention-inspired feature selection algorithm given in Algorithm <ref> also enjoys the same guarantees as the Group Sequential LASSO algorithm. We note that this generalization to the group setting is particularly important for the various applications in attention-based subset selection algorithms, due to the fact that the objects |_T_i being selected are often large vectors in these applications. Finally, we also note that our analysis of Hadamard product-type of algorithms of the form of (<ref>) may prove to be useful in the analysis of similar algorithms in the literature of online convex optimization that have been developed to solve sparse optimization problems <cit.>. §.§.§ Applications to column subset selection As a corollary of our analyses of group feature selection algorithms, we obtain the first algorithms for the column subset selection (CSS) problem for general loss functions with restricted strong convexity and smoothness. 
In the CSS problem, we are given an input matrix ∈ℝ^n× d, and the goal is to select a small subset of k columns S⊆[d] of that minimizes the reconstruction error min_∈ℝ^k× d* - |^S _F^2, where |^S ∈ℝ^n× k is the matrix restricted to the columns indexed by S. As with sparse linear regression, this problem is known to be computationally difficult <cit.>, and thus most works focus on approximation algorithms and bicriteria guarantees to obtain tractable results. The CSS problem can be viewed as an unsupervised analogue of sparse convex optimization, and has been studied extensively in prior work. In particular, the works of <cit.> gave analyses of greedy algorithms for this problem, showing that iteratively selecting columns that maximizes the improvement in reconstruction error (<ref>) leads to bicriteria sparsity algorithms that depend on the sparse condition number of . In a separate line of work, randomized methods have been employed in the randomized numerical linear algebra literature to sample columns of that span a good low rank approximation <cit.>. Furthermore, there has recently been a large body of work aimed at generalizing CSS results to more general loss functions beyond the Frobenius norm, including ℓ_p norms <cit.> and other entrywise losses <cit.>. All of these works use complicated arguments and rely heavily on the entrywise structure of the loss function. We show that by a surprisingly simple argument, we can immediately obtain the first results on column subset selection for general convex loss functions with restricted strong convexity and smoothness. Our key insight is to view this problem not as a column subset selection problem for , but rather a row subset selection problem for . That is, note that min_S≤ kmin_∈ℝ^k× dl* - |^S = min_S≤ kmin_∈ℝ^d× dl* - |_S where |_S zeros out all rows of not indexed by S. Then, this is just a group variable selection problem, where we have d groups given by each of the rows of , and thus we may write this problem as computing = min_∈ℝ^d× d, _≤ k l( - ) Thus, by using our guarantees for Group OMP in Corollaries <ref> and <ref> (which also hold for Group Sequential LASSO and Group Sequential Attention by Theorem <ref> and Lemma <ref>), we obtain the first algorithm and analysis of the column subset selection problem under general loss functions with restricted strong convexity and smoothness. This gives a substantial generalization of results known in prior work. [Column subset selection via Group OMP] Let ∈ℝ^n× d and let l:ℝ^n× d→ℝ be a strictly convex and differentiable loss function. Let ↦ l( - ) satisfy L_1-group-sparse smoothness and μ_k+k'-group-sparse convexity (Definition <ref>), where the groups are the rows of . The following hold: * Let κ = L_1/μ_2k. After k' = k iterations, Algorithm <ref> outputs a subset S⊆[n] of size S≤ k such that l() - l(-|^S) ≥*1 - e^-κ*l() - . * Let κ = L_1/μ_k+k'. After k' ≥ k·κlogl() - / iterations, Algorithm <ref> outputs a subset S⊆[n] of size S≤ k' such that l(-|^S) ≤ + . This follows from applying Corollaries <ref> and <ref> to the group-sparse convex optimization formulation of column subset selection. Our proof is arguably simpler than prior work even for the Frobenius norm. Indeed, the prior works require arguments that use the special structure of Euclidean projections, whereas we simply observe that CSS is a group-sparse convex optimization problem and use a generalization of techniques for sparse regression. 
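The reduction just described, viewing column subset selection for a matrix as row subset selection for the coefficient matrix so that each row is one group, can be sketched directly. The snippet below is our own toy illustration for the Frobenius-norm special case only: at the restricted minimizer the residual is R = A - A_S A_S^+ A, and (by our reading of the Group OMP rule for this loss) the next column is the one maximizing the ℓ_2 norm of the corresponding gradient row, i.e. ||A[:, j]^T R||_2. The general-loss version in the theorem would replace the pseudoinverse solves by minimizing l over the selected rows.

```python
import numpy as np

def css_group_omp(A, k):
    """Greedy column subset selection: Group OMP on the rows of V in min ||A - A V||_F^2."""
    n, d = A.shape
    S = []
    R = A.copy()                                   # residual at the restricted minimizer
    for _ in range(k):
        scores = np.linalg.norm(A.T @ R, axis=1)   # scores[j] = ||A[:, j]^T R||_2
        scores[S] = -np.inf
        j = int(np.argmax(scores))
        S.append(j)
        A_S = A[:, S]
        R = A - A_S @ np.linalg.pinv(A_S) @ A      # new residual after re-solving on S
    return S, R

# Usage: pick 3 columns of a random 50 x 10 matrix and report the reconstruction error.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
S, R = css_group_omp(A, k=3)
print(S, np.linalg.norm(R))
```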
We also immediately obtain analyses for natural algorithms which were previously not considered in the context of column subset selection, such as Group OMP (with Replacement), Group LASSO, and attention-based algorithms. In particular, by applying guarantees for Group OMP with Replacement (Corollary <ref>), we obtain the first column subset selection algorithm with no dependence on in the sparsity, even for the Frobenius norm problem. [Column subset selection with Group OMPR] Let ∈ℝ^n× d and let l:ℝ^n× d→ℝ be a strictly convex and differentiable loss function. Let ↦ l( - ) satisfy L_2-group-sparse smoothness and μ_k+k'-group-sparse convexity (Definition <ref>), where the groups are the rows of . Let κ = L_2/μ_k+k' and k' ≥ k(κ^2 + 1). After R ≥ k·κlogl() - / iterations, Algorithm <ref> outputs a subset S⊆[n] of size S≤ k' such that l(-|^S) ≤ + . This follows from applying Corollary <ref> to the group-sparse convex optimization formulation of column subset selection. §.§ Related work: the Forward Stagewise Regression conjecture A separate line of work has investigated a closely related connection between the LASSO and OMP-like algorithms. In particular, the “continuous” OMP (or coordinate descent) algorithm which updates ^(t+1)^(t) - η·sign(∇_i l(^(t)))_i for i = max_i=1^n ∇ l(^(t)) known as Forward Stagewise Regression is conjectured <cit.>) to have the same solution path as the LASSO path (i.e. the set of solutions as λ ranges from 0 to ∞) when η→0 <cit.>. While a full proof of this conjecture may be useful towards proving our main result, to the best of our knowledge, the only known result towards this conjecture establishes an “instantaneous” result which shows the convergence of the difference between the two paths to the gradient <cit.> under technical assumptions under the underlying loss function such as the monotonicity of the coordinates of the LASSO solution. Our result can be viewed as a full proof of this conjecture in an open ball near for general strictly convex differentiable functions, and our techniques may be useful for a full resolution of this conjecture. §.§ Related work: algorithms for sparse convex optimization While we have argued so far that guarantees for ℓ_1 regularization in solving (<ref>) in prior work are limited, other efficient algorithms have in fact been shown to solve (<ref>), both for sparse linear regression as well as general sparse convex optimization. Via a connection between convexity and weakly submodular optimization, the works of <cit.> showed that the greedy forward algorithm and Orthogonal Matching Pursuit both give guarantees of the form of (<ref>). Efficiency guarantees have also been given for OMP with Replacement (OMPR) <cit.> and Iterative Hard Thresholding (IHT) <cit.>, using the restricted smoothness and strong convexity properties. Ultimately, these results show that an ϵ-approximate sparse solution can be recovered if we allow an O(κ) blowup to the sparsity, where κ is the restricted condition number of the problem. §.§ Open directions We suggest several directions for future study. Our first question is on showing analogous results for the one-shot version of LASSO, which is used much more frequently in practice than the Sequential LASSO. That is, if λ is chosen in (<ref>) such that only k nonzero entries are selected, then can we obtain a guarantee of the form of (<ref>) for this solution? It is known that one-shot variants of OMP or greedy have this type of guarantee <cit.> (also called “oblivious” algorithms in these works). 
However, our proof techniques do not immediately apply, since we crucially use the fact that for large enough regularizations λ, the resulting solution is close to the λ = ∞ solution, while this is not true when λ can be much smaller. A second question is whether our results generalize beyond convex functions or not. For example, the analysis of OMP carries through to smooth functions that satisfy the Polyak-Łojasiewicz condition <cit.>. Can a similar generalization be shown for our results? There are several parts of our proofs that crucially use convexity, but the LASSO is known to give good results even for nonconvex functions in practice and thus there is still a gap in our understanding of this phenomenon. Finally, we ask if our analyses for ℓ_1 regularization can be extended to an analogous result for nuclear norm regularization for rank-constrained convex optimization. In the setting of rank-constrained convex optimization, it has been shown in special cases, such as affine rank minimization, that nuclear norm regularization can be used to efficiently recover low rank solutions <cit.>. This suggests that our results may have a natural generalization in this setting as well. In particular, an extension of OMP to the rank-sparse setting was shown by <cit.>, and thus it is possible that nuclear norm regularization can be used to simulate this algorithm as well. § PRELIMINARIES Let l:ℝ^n→ℝ be strictly convex and differentiable. For each i∈[t], let T_i⊆[n] denote the group of variables that belong to the i-th feature. §.§ Fenchel duality We will use the following standard facts about Fenchel duality <cit.>. [Fenchel dual] Let l:ℝ^n→ℝ. Then, the Fenchel dual l^* of l is l^*() sup_∈ℝ^n^⊤ - l(). [Fenchel–Young inequality] Let l:ℝ^n→ℝ be convex and differentiable. Then, l() + l^*() ≥^⊤ with equality if and only if = ∇ l(). [Conjugacy theorem] Let l:ℝ^n→ℝ be convex. Then, (l^*)^* = l. The following is known about the convexity and differentiability of the Fenchel dual. [Differentiability of dual, Theorem 26.3, <cit.>] Let l:ℝ^n→ℝ be strictly convex and differentiable. Then, l^* is strictly convex and differentiable. §.§ Berge's theorem We will use a well-known theorem of Berge on the continuity of the argmin for constrained optimization problems with parameterized constraint sets. Recall that a correspondence h:ℝ⇉ℝ^n is a set-valued function which maps real numbers λ to subsets h(λ)⊆ℝ^n. A correspondence h is upper hemicontinuous if for every λ∈ℝ and every open set G⊆ℝ^n such that h(λ)⊂ G, there is an open set U⊆ℝ such that τ∈ U h(τ)⊂ G. [Berge's theorem <cit.>] Let g:ℝ^n→ℝ be a continuous function and let φ:ℝ⇉ℝ^n be a continuous correspondence that map into compact sets. Consider the correspondence h:ℝ⇉ℝ^n given by h(λ) = *∈ℝ^n : g() = min_'∈φ(λ) g(') Then, h is upper hemicontinuous. The following corollary of Theorem <ref> for strictly convex functions is more useful for our purposes. [Berge's theorem for convex functions] Let g:ℝ^n→ℝ be a strictly convex function and let φ:ℝ⇉ℝ^n be a continuous correspondence that map into compact sets. Consider the function h:ℝ→ℝ^n given by h(λ) = min_'∈φ(λ) g(') Then, h is continuous. Because g is strictly convex, there is a unique minimizer ^λ of g for each λ∈ℝ, so h is well-defined. Furthermore, h is upper hemicontinuous as a correspondence that maps real numbers λ to singleton sets {h(λ)} by Theorem <ref>, and any function h that is upper hemicontinuous as a correspondence is continuous as a function. 
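As a quick sanity check of the Fenchel dual and of the equality case of the Fenchel–Young inequality used throughout, the following worked example (ours, included only for illustration) treats the simplest strictly convex function, l(x) = (1/2)||x||_2^2.

```latex
% The supremum in l^*(y) = sup_x y^T x - (1/2)||x||_2^2 is attained at x = y,
% so l^*(y) = (1/2)||y||_2^2.  Completing the square gives the Fenchel--Young gap:
\[
  l^*(y) = \sup_{x \in \mathbb{R}^n} \; y^\top x - \tfrac{1}{2}\|x\|_2^2
         = \tfrac{1}{2}\|y\|_2^2,
  \qquad
  l(x) + l^*(y) - y^\top x = \tfrac{1}{2}\|x - y\|_2^2 \;\ge\; 0,
\]
% with equality if and only if y = x = \nabla l(x), matching the equality case
% of the Fenchel--Young inequality.
```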
§ EQUIVALENCE OF GROUP SEQUENTIAL LASSO AND GROUP ORTHOGONAL MATCHING PURSUIT We will give our proof of Theorem <ref> in this section. §.§ The dual problem Consider the Group Sequential LASSO objective: min_∈ℝ^n l() + λ∑_i∈S*|_T_i_2 We will show that the dual of this problem is max_∈ℝ^n -l^*(-) = -min_∈ℝ^n l^*(-) |_T_i_2 ≤λ  |_T_i_2 = 0 We write the objective of (<ref>) as a constrained optimization problem in the form of min_∈ℝ^n, ∈ℝ^d l() + λ∑_i∈S*|_T_i_2 = Then, the Lagrangian dual of this problem is min_∈ℝ^n, ∈ℝ^nmax_∈ℝ^nl() + λ∑_i∈S*|_T_i_2 + ^⊤(-) Furthermore, the objective of (<ref>) is convex and strictly feasible, so strong duality holds (see, e.g., Section 5.2.3 of <cit.>) and thus we may interchange the min and the max to obtain max_∈ℝ^nmin_∈ℝ^n, ∈ℝ^n l() + λ∑_i∈S*|_T_i_2 + ^⊤(-) =   max_∈ℝ^nmin_∈ℝ^n l() + ^⊤ + min_∈ℝ^nλ∑_i∈S*|_T_i_2 - ^⊤ Now note that the first minimization over ∈ℝ^n gives exactly the Fenchel dual objective min_∈ℝ^n l() + ^⊤ = -max_∈ℝ^n (-)^⊤ - l() = -l^*(-). On the other hand, we show in the next lemma that the second minimization over ∈ℝ^n gives the constraints on the variables given in (<ref>). We have that inf_∈ℝ^dλ∑_i∈S*|_T_i_2 - ^⊤ = 0 if |_T_i_2 ≤λ for i∈S and |_T_i_2 = 0 for i∈ S -∞ otherwise If |_T_i_2 > λ for some coordinate i∈S, then we may choose = |_T_i so that λ|_T_i_2 - |_T_i_2^2 = |_T_i_2(λ - |_T_i_2) < 0 so the objective can be made arbitrarily small by scaling. If |_T_i_2 > 0 for some i∈ S, then we may choose = |_T_i so that λ∑_i∈S*|_T_i_2 - |_T_i_2^2 = 0 - |_T_i_2^2 < 0 so the objective can be made arbitrarily small by scaling. Otherwise, we have that ^⊤ = ∑_i∈S|_T_i^⊤|_T_i since |_T_i = 0 for every i∈ S ≤∑_i∈S|_T_i_2 |_T_i_2 Cauchy–Schwarz ≤λ∑_i∈S|_T_i_2 since |_T_i_2 ≤λ for every i∈S. Thus, λ∑_i∈S*|_T_i_2 - ^⊤≥ 0 and furthermore, this value can be achieved by = 0. §.§ Selection of features We will use Berge's theorem (Theorem <ref>) to prove the following lemma, which characterizes the gradient of the optimal solution to the dual optimization problem given by (<ref>). Let λ > 0 and let ^λ be the minimizer of (<ref>). Let ^∞ be the minimizer of (<ref>) without the constraint that |_T_i_2 ≤λ for every i∈S. Define the threshold τmax_i∈S*^∞|_T_i_2 and let M^τ⊆S denote the corresponding set of indices i∈S that witnesses the max, that is, M^τ*i∈S : ^∞|_T_i_2 = τ. The following hold: * If λ≥τ, then ∇ l^*(-^λ)|_T_i= 0 for all i∈S. * If λ = τ - for sufficiently small >0, then ∇ l^*(-^λ)|_T_i = 0 for all i∈S∖ M^τ and ∇ l^*(-^λ)|_T_i≠ 0 for some i∈ M^τ. If λ≥τ, then the constraint max_i∈S*^λ|_T_i_2≤λ can be removed without affecting the optimal solution, so ^λ = ^∞. Then for the coordinates in T_i for i∈S, ^∞ is a minimizer for an unconstrained optimization problem, so the gradient is 0 on these coordinates. This shows the first bullet point. On the other hand, suppose that λ = τ - for some small >0. Then, ^∞ is outside the set ∈ℝ^n: max_i∈S|_T_i_2 ≤λ. Now consider the function h(λ) = max_i∈S∖ M^τ^λ|_T_i_2, i.e., the second largest value of ^λ|_T_i_2 after excluding the maximizers i∈ M^τ. Note that this function is continuous since λ↦^λ is continuous by Corollary <ref>. Furthermore, we have that h(τ) < τ, since the maximum in the definition of h excludes the indices M^τ. Let τ' satisfy h(τ) < τ' < τ. Then, for all sufficiently small , we have that h(τ-) < τ' by the continuity of h. 
For these , we can remove the constraints of |_T_i_2 ≤λ = τ- for i∈S∖ M^τ without affecting the optimal solution ^λ in the optimization problem of (<ref>), so on the coordinates T_i for i∈S∖ M^τ, ^λ is an unconstrained minimizer and thus has zero gradient. On the other hand, for the coordinates T_i for i∈ M^τ, ^λ cannot be the unconstrained minimizer and thus there must be some nonzero coordinate in the gradient due to the convexity of l^*. We can then show that Lemma <ref> in fact characterizes the support of the optimal solution ^* by relating the primal and dual variables via the Fenchel–Young inequality (Theorem <ref>). [Primal vs dual variables] We have that - = ∇ l() and = ∇ l^*(-). The primal variable is related to the dual variable via Fenchel dual, that is, l^*(-) = (-)^⊤ - l() Then by the tightness of the Fenchel–Young inequality (Theorem <ref>) for l, we have that - = ∇ l(). Furthemore, by the conjugacy theorem (Theorem <ref>), we have that (l^*)^* = l, so l^*(-) + (l^*)^*() = (-)^⊤. Then by tightness of the Fenchel–Young inequality (Theorem <ref>) for l^*, we have that = = ∇ l^*(-). Thus, by Lemma <ref>, ^λ has a nonzero support on some group T_i if and only if the group T_i maximizes ^∞|_T_i_2 = ∇ l(^∞)|_T_i_2. This is precisely the Group Orthogonal Matching Pursuit selection rule (see Line <ref> of Algorithm <ref>). alpha § GUARANTEES FOR GROUP ORTHOGONAL MATCHING PURSUIT In this section, we give guarantees for the Group OMP algorithm (Algorithm <ref>). Our analysis is similar to <cit.>. We first introduce the notion of restricted strong convexity and smoothness, generalized to the group setting. [Restricted strong convexity and smoothness] Let l:ℝ^n→ℝ. Let T_i⊆[n] for i∈[t] form a partition of [n]. Then, l is μ_s-restricted strongly convex at group sparsity s if for any ∈ℝ^n and ∈ℝ^n with _≤ s, l( + ) - l() - ∠*∇ l(), ≥μ_s/2_2^2 and L_s-restricted smooth at group sparsity s if for any ∈ℝ^n and ∈ℝ^n with _≤ s, l( + ) - l() - ∠*∇ l(), ≤L_s/2_2^2. [Smoothness] Let l be L_1-restricted smooth at group sparsity 1. Let r∈[k'] and let ^∞ and i^* be defined as in Lines <ref> and <ref> of Algorithm <ref> on the r-th iteration. Let ' ^∞ + for = - L_1^-1∇ l(^∞)|_T_i^*. Then, (2L_1)^-1∇ l(^∞)|_T_i^*_2^2 ≤ l(^∞) - l(') Note that has group sparsity 1. We then have that l(') - l(^∞) ≤∠*∇ l(^∞), + L_1/2_2^2 L_1-restricted smoothness = -L_1^-1∇ l(^∞)|_T_i^*_2^2 + 1/2 L_1^-1∇ l(^∞)|_T_i^*_2^2 = -1/2 L_1^-1∇ l(^∞)|_T_i^*_2^2. Rearranging gives the desired result. [Convexity] Let l be μ_k+k'-restricted strongly convex at group sparsity k+k'. Let r∈[k'] and let ^∞ and i^* be defined as in Lines <ref> and <ref> of Algorithm <ref> on the r-th iteration. Let ^* min_∈ℝ^n : _≤ k l() Then, ∇ l(^∞)|_T_i^*_2^2 ≥2μ_k+k'/k*l(^∞) - l(^*). Let U^*⊆[n] be the support of ^* and let U⊆[n] be the support of ^∞. Note that *^* - ^∞_≤ k + k'. Then, l(^*) - l(^∞) ≥∠*∇ l(^∞), ^* - ^∞ + μ_k+k'/2*^* - ^∞_2^2 = ∠*∇ l(^∞), (^* - ^∞)|_U^*∖ U + μ_k+k'/2*^* - ^∞_2^2 ∇ l(^∞)|_U = ≥ -∇ l(^∞)|_U^*∖ U_2(^* - ^∞)|_U^*∖ U_2 + μ_k+k'/2*(^* - ^∞)|_U^*∖ U_2^2 ≥min_x -∇ l(^∞)|_U^*∖ U_2 x + μ_k+k'/2 x^2 = -∇ l(^∞)|_U^*∖ U_2^2/2μ_k+k' so ∇ l(^∞)|_U^*∖ U_2^2 ≥ 2μ_k+k'*l(^∞) - l(^*). Now note that U^*∖ U is supported on at most k groups, so by averaging, there exists some group T_i outside of U such that ∇ l(^∞)|_T_i_2^2 ≥2μ_k+k'/k*l(^∞) - l(^*). Combining Lemmas <ref> and <ref> leads to the following stepwise guarantee for Algorithm <ref>. Let ^(r) denote the value of ^∞ (Line <ref>) after r iterations of Algorithm <ref> with ^(0) =. 
Let ^* min_∈ℝ^n : _≤ k l() Then, l(^(r)) - l(^*) ≤exp*-r/kμ_k+k'/L_1*l(^(0)) - l(^*) By Lemmas <ref> and <ref>, we have that l(^(r)) - l(^(r+1)) ≥ (2L_1)^-1∇ l(^(r))|_T_i^*_2^2 ≥1/kμ_k+k'/L_1*l(^(r)) - l(^*) so l(^(r+1)) - l(^*) = l(^(r)) - l(^*) - *l(^(r)) - l(^(r+1)) ≤ l(^(r)) - l(^*) - 1/kμ_k+k'/L_1*l(^(r)) - l(^*) = *1 - 1/kμ_k+k'/L_1*l(^(r)) - l(^*) ≤exp*- 1/kμ_k+k'/L_1*l(^(r)) - l(^*) Applying the above inductively proves the claim. As a result of Lemma <ref>, we obtain two guarantees for Algorithm <ref>, one for exact k-group-sparse solutions with large approximation and one for bicriteria sparsity with additive error. After k iterations, we have by Lemma <ref> applied for k'=k that l(^(k)) - l(^*) = l(^(k)) - l(^(0)) + l(^(0))- l(^*) ≤exp*-μ_2k/L_1*l(^(0)) - l(^*) which rearranges to l(^(0)) - l(^(k)) ≥*1-exp*-μ_2k/L_1*l(^(0)) - l(^*) This follows immediately from the bound of Lemma <ref> and rearranging. §.§ Group OMP with Replacement In this section, we give guarantees for the Group OMP with Replacement algorithm (Algorithm <ref>), which is an improvement to Group OMP that can achieve a sparsity bound that is independent of the accuracy parameter <cit.>. [Smoothness] Let l be L_2-restricted smooth at group sparsity 2. Let r∈[k'] and let ^∞, i^*, j^* be defined as in Lines <ref>, <ref> and <ref> of Algorithm <ref> on the r-th iteration. Let ' ^∞ + for = - L_2^-1∇ l(^∞)|_T_i^* - ^∞|_T_j^*. Then, (2L_2)^-1∇ l(^∞)|_T_i^*_2^2 - (1/2) L_2 ^∞|_T_j^*_2^2 ≤ l(^∞) - l(') Note that has group sparsity 2. We then have that l(') - l(^∞) ≤∠*∇ l(^∞), + L_2/2_2^2 L_2-restricted smoothness = -L_2^-1∇ l(^∞)|_T_i^*_2^2 + 1/2 L_2^-1∇ l(^∞)|_T_i^*_2^2 + 1/2 L_2 ^∞|_T_j^*_2^2 (∇ l(^∞)|_T_j^*_2^2=0) = -1/2 L_2^-1∇ l(^∞)|_T_i^*_2^2 + 1/2 L_2^∞|_T_j^*_2^2. Rearranging gives the desired result. [Convexity] Let l be μ_k+k'-restricted strongly convex at group sparsity k+k'. Let r∈[k'] and let ^∞, i^*, j^* be defined as in Lines <ref>, <ref> and <ref> of Algorithm <ref> on the r-th iteration. Let ^* min_∈ℝ^n : _≤ k l() Then, ∇ l(^∞)|_T_i^*_2^2 ≥2μ_k+k'/k*l(^∞) - l(^*) + (k'-k)μ_k+k'^2/k^∞|_T_j^*_2^2. Let U^*⊆[n] be the support of ^* and let U⊆[n] be the support of ^∞. Note that *^* - ^∞_≤ k + k'. Then, l(^*) - l(^∞) ≥∠*∇ l(^∞), ^* - ^∞ + μ_k+k'/2*^* - ^∞_2^2 = ∠*∇ l(^∞), (^* - ^∞)|_U^*∖ U + μ_k+k'/2*^* - ^∞_2^2 ≥ -∇ l(^∞)|_U^*∖ U_2(^* - ^∞)|_U^*∖ U_2 + μ_k+k'/2*(^* - ^∞)|_U^*∖ U_2^2 + μ_k+k'/2*(^* - ^∞)|_U∖ U^*_2^2 ≥min_x -∇ l(^∞)|_U^*∖ U_2 x + μ_k+k'/2 x^2 + μ_k+k'/2*^∞|_U∖ U^*_2^2 = -∇ l(^∞)|_U^*∖ U_2^2/2μ_k+k' + μ_k+k'/2*^∞|_U∖ U^*_2^2 so ∇ l(^∞)|_U^*∖ U_2^2 ≥ 2μ_k+k'*l(^∞) - l(^*) + μ_k+k'^2 *^∞|_U∖ U^*_2^2. Now note that U^*∖ U is supported on at most k groups, so by averaging, there exists some group T_i outside of U such that ∇ l(^∞)|_T_i_2^2 ≥2μ_k+k'/k*l(^∞) - l(^*) + μ_k+k'^2/k*^∞|_U∖ U^*_2^2 ≥2μ_k+k'/k*l(^∞) - l(^*) + (k'-k)μ_k+k'^2/k^∞|_T_j^*_2^2. Let ^(r) denote the value of ^∞ (Line <ref>) after r iterations of Algorithm <ref> with ^(0) = and |S^0| = k' ≥ k*L_2^2/μ_k+k'^2 + 1. Let ^* min_∈ℝ^n : _≤ k l() Then, l(^(r)) - l(^*) ≤exp*-r/kμ_k+k'/L_2*l(^(0)) - l(^*) By Lemmas <ref> and <ref>, we have that l(^(r)) - l(^(r+1)) ≥ (2L_2)^-1∇ l(^(r))|_T_i^*_2^2 - (1/2)L_2^∞|_T_j^*_2^2 ≥1/kμ_k+k'/L_2*l(^(r)) - l(^*) + 1/2*(k'-k)μ_k+k'^2/kL_2- L_2^∞|_T_j^*_2^2 ≥1/kμ_k+k'/L_2*l(^(r)) - l(^*), as long as k' ≥ k*L_2^2 / μ_k+k'^2 + 1. 
So, l(^(r+1)) - l(^*) = l(^(r)) - l(^*) - *l(^(r)) - l(^(r+1)) ≤ l(^(r)) - l(^*) - 1/kμ_k+k'/L_2*l(^(r)) - l(^*) = *1 - 1/kμ_k+k'/L_2*l(^(r)) - l(^*) ≤exp*- 1/kμ_k+k'/L_2*l(^(r)) - l(^*) Applying the above inductively proves the claim. This follows immediately from the bound of Lemma <ref> and rearranging. § EQUIVALENCE OF GROUP SEQUENTIAL ATTENTION AND GROUP SEQUENTIAL LASSO We generalize a result of <cit.> to the group setting, which allows us to translate guarantees for Group Sequential LASSO (Algorithm <ref>) to Group Sequential Attention (Algorithm <ref>). Let l:ℝ^n→ℝ and λ>0. Let T_i⊆[n] for i∈[t] form a partition of [n]. Let S⊆[t]. Then, inf_∈ℝ^n l() + λ∑_i∈S|_T_i_2 = inf_∈ℝ^t, ∈ℝ^n l(_) + λ/2*|_S_2^2 + ∑_i∈S|_T_i_2^2 where _∈ℝ^n is the vector such that _|_T_i_i ·|_T_i. We have that inf_∈ℝ^t, ∈ℝ^n l(_) + λ/2*|_S_2^2 + ∑_i∈S|_T_i_2^2 = inf_∈ℝ^t, ∈ℝ^n l() + λ/2∑_i∈S_i^2 + |_T_i_2^2/_i^2 Now note that for each i∈S, we have that _i^2 + |_T_i_2^2/_i^2≥ 2|_T_i_2 with equality if and only if _i^2 = |_T_i_2 by tightness of the AM-GM inequality.
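The lemma above shows that attaching one scalar weight per unselected group and adding the squared ℓ_2 regularizer reproduces the Group LASSO penalty, which is the basis of the Group Sequential Attention algorithm. Below is a hedged PyTorch sketch of a single selection step (the first one, where no groups have been selected yet, so the regularizer runs over all groups); the optimizer, initialization, iteration count, and names are our own assumptions rather than the paper's implementation. Groups are encoded by a group-id vector `gid` with `gid[j]` equal to the index of the group containing coordinate j.

```python
import torch

def group_sequential_attention_step(loss_fn, gid, lam=1.0, steps=2000, lr=1e-2):
    """One attention-based selection step (our sketch).

    loss_fn : differentiable strictly convex loss taking a length-n tensor
    gid     : LongTensor of length n with the group index of each coordinate
    Minimizes  loss(beta[gid] * x) + lam/2 * (||beta||^2 + ||x||^2),
    which by the lemma above matches the Group LASSO regularization, and
    returns the group carrying the largest attention weight |beta_i|.
    """
    n, t = gid.numel(), int(gid.max()) + 1
    beta = torch.ones(t, requires_grad=True)    # one "attention" weight per group
    x = torch.zeros(n, requires_grad=True)
    opt = torch.optim.Adam([beta, x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        obj = loss_fn(beta[gid] * x) + 0.5 * lam * (beta.pow(2).sum() + x.pow(2).sum())
        obj.backward()
        opt.step()
    return int(beta.detach().abs().argmax())
```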
http://arxiv.org/abs/2307.04091v1
20230709042412
CMDFusion: Bidirectional Fusion Network with Cross-modality Knowledge Distillation for LIDAR Semantic Segmentation
[ "Jun Cen", "Shiwei Zhang", "Yixuan Pei", "Kun Li", "Hang Zheng", "Maochun Luo", "Yingya Zhang", "Qifeng Chen" ]
cs.CV
[ "cs.CV" ]
CMDFusion: Bidirectional Fusion Network with Cross-modality Knowledge Distillation for LIDAR Semantic Segmentation ^1Authors are with Cheng Kar-Shun Robotics Institute, The Hong Kong University of Science and Technology, Hong Kong SAR, China. jcenaa}@connect.ust.hk. {cqf}@ust.hk. ^2Authors are with Alibaba Group, China. zhangjin.zsw, zh334251, luomaochun.lmc, yingya.zyy}@alibaba-inc.com. lk158400}@cainiao.com. ^3Authors are with the SMILES LAB at the School of Information and Communication Engineering'an Jiaotong University, Xi'an, China. peiyixuan}@stu.xjtu.edu. ^*Work done as an intern at Alibaba DAMO Academy. Jun Cen^1,2*, Shiwei Zhang^2, Yixuan Pei^3, Kun Li^2, Hang Zheng^2, Maochun Luo^2, Yingya Zhang^2, Qifeng Chen^1 August 12, 2023 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== 2D RGB images and 3D LIDAR point clouds provide complementary knowledge for the perception system of autonomous vehicles. Several 2D and 3D fusion methods have been explored for the LIDAR semantic segmentation task, but they suffer from different problems. 2D-to-3D fusion methods require strictly paired data during inference, which may not be available in real-world scenarios, while 3D-to-2D fusion methods cannot explicitly make full use of the 2D information. Therefore, we propose a Bidirectional Fusion Network with Cross-Modality Knowledge Distillation (CMDFusion) in this work. Our method has two contributions. First, our bidirectional fusion scheme explicitly and implicitly enhances the 3D feature via 2D-to-3D fusion and 3D-to-2D fusion, respectively, which surpasses either one of the single fusion schemes. Second, we distillate the 2D knowledge from a 2D network (Camera branch) to a 3D network (2D knowledge branch) so that the 3D network can generate 2D information even for those points not in the FOV (field of view) of the camera. In this way, RGB images are not required during inference anymore since the 2D knowledge branch provides 2D information according to the 3D LIDAR input. We show that our CMDFusion achieves the best performance among all fusion-based methods on SemanticKITTI and nuScenes datasets. The code will be released at https://github.com/Jun-CEN/CMDFusion. § INTRODUCTION 3D LIDAR is significant for the perception system of autonomous vehicles, and one of the applicable tasks with LIDAR is semantic segmentation. Great efforts have been made for better LIDAR semantic segmentation performance using single LIDAR modality <cit.>. Recently, several multi-modality methods are developed <cit.> to fuse the features of LIDAR and colorful cameras since they provide complementary information. LIDAR provides reliable depth information and is robust to light conditions such as dark nights, while the camera offers a dense colorful appearance and fine-grained textures. In this work, we also aim to study how to effectively leverage these two modality data for better LIDAR semantic segmentation. 
Existing fusion-based methods can be divided into 2D-to-3D fusion method (PMF <cit.>) and 3D-to-2D fusion method (2DPASS <cit.>), as shown in Fig. <ref> (a) and (b). PMF injects 2D knowledge into the LIDAR features, so it needs strictly paired data during training and inference. However, the FOV of LIDAR and the camera may not totally overlap with each other, so those points out of the FOV of the camera cannot be tested. For example, SemanticKITTI <cit.> only provides two front-view images, and points at the side and back cannot be involved in the PMF framework. 2DPASS notices this problem and proposed injecting 3D features into 2D features during training to implicitly enhance the 3D features. In this way, 2DPASS does not require images during inference. However, 3D features do not explicitly contain 2D information in such a 3D-to-2D scheme. To solve the mentioned problems of 2D-to-3D and 3D-to-2D fusion methods, we propose a Bidirectional Fusion Network with Cross-Modality Knowledge Distillation (CMDFusion), as shown in Fig. <ref> (c). Specifically, on the one hand, we propose a Bidirectional Fusion Block (BFB) to explicitly and implicitly enhance the 3D features through 2D-to-3D and 3D-to-2D injection, which owns the benefits of both single fusion schemes. On the other hand, we propose a Cross-Modality Distillation (CMD) module to let a 3D network (2D knowledge branch) memorize the information of the 2D network (camera branch) during training. During inference, the 2D knowledge branch provides the 2D image information based on the 3D LIDAR point cloud inputs so that we can obtain the 2D knowledge for the whole point cloud, including those points not in the FOV of the camera. We evaluate our method on two challenging datasets, including SemanticKITTI <cit.> and NuScenes <cit.>. Experiments show that our method achieves the best performance among all fusion-based methods. In summary, our contributions include the following: * We develop a bidirectional fusion method CMDFusion for the LIDAR semantic segmentation task, which surpasses the single directional 2D-to-3D fusion and 3D-to-2D fusion methods. * We develop a cross-modality distillation module to generate 2D information for those points that are out of the FOV of the camera. * We experimentally show that our method achieves the best performance among fusion-based methods on SemanticKITTI and Nuscenes datasets. § RELATED WORK 3D LIDAR semantic segmentation has grown very fast based on well-annotated public datasets, such as SemanticKITTI <cit.> and NuScenes <cit.>. Most methods in this area are single-modality, i.e., only use LIDAR point cloud to extract information. Specifically, single-modality methods can be categorized into point-based, projection-based, voxel-based, and multi-view fusion methods. 1) Point-based methods <cit.> adapt PointNet <cit.> and PointNet++ <cit.> to the LIDAR domain. These point-based methods do not generalize very well in the LIDAR point cloud scenarios since their sampling and searching algorithms cannot perfectly handle the sparse outdoor point clouds. 2) Voxel-based methods divide the whole point cloud into voxels <cit.> and apply efficient 3D convolution for semantic segmentation like SparseConv <cit.>. Cylinder3D <cit.> proposed a cylindrical partition and asymmetrical 3D convolutional network which follows the geometry structure of the LIDAR point cloud. 
3) Projection-based methods first project 3D LIDAR point cloud into 2D range-view images <cit.> or bird’s-eye-view (BEV) images <cit.> and then apply 2D convolution network for semantic segmentation. However, such a projection inevitably loses some of the 3D geometry information. 4) Multi-view fusion methods combine different views of the LIDAR point cloud as inputs. FusionNet <cit.> and SPVCNN <cit.> fuse voxel and point level information, while RPVNet <cit.> fuses the information of voxel, point, and range views. Recently, multi-modality fusion has become popular in the autonomous driving area. In the 3D object detection task, BEV fusion <cit.> unifies the LIDAR and image features in the BEV space and achieves the state-of-the-art performance. However, the height information is much more critical in the semantic segmentation task than the object detection task, so the BEV-based method <cit.> has limited performance on the semantic segmentation task. Instead, PMF <cit.> projects the LIDAR point cloud into the image space and then conducts 2D-to-3D fusion for better 3D feature representation. 2DPASS <cit.> finds that the 2D-to-3D fusion method like PMF can only be applied on the points in the overlapping FOVs of the LIDAR and camera, so 2DPASS conducts 3D-to-2D fusion to strengthen the 3D features by supervising the 3D features from the 2D branch. Compared to PMF and 2DPASS, our bidirectional fusion network enjoys the benefits of both 2D-to-3D and 3D-to-2D fusion schemes. Besides, we propose a cross-modality distillation module so that our network can be applied to the whole LIDAR point cloud, including the points that are out of the FOV of the camera. § METHODOLOGY §.§ Framework Overview The simplified and specific overall structure of our proposed CMDFusion is shown in Fig. <ref> (c) and Fig. <ref> (a), respectively. Our CMDFusion is composed of three branches, including a camera branch (2D network), a 2D knowledge branch (3D network), and a 3D LIDAR branch (3D network). §.§.§ Training During training, the 2D knowledge branch (a 3D network) learns the 2D image information from the camera branch (a 2D network) via Cross-Modality Distillation (CMD). Although the CMD is conducted on those points in the overlapping FOVs of the LIDAR and camera, the 2D knowledge branch can be generalized to the points that are out of the FOV of the camera. In this way, we can obtain the 2D information of the whole point cloud, which is not approachable in PMF <cit.> or 2DPASS <cit.>. Then we fuse the features of the 2D knowledge branch and 3D LIDAR branch through Bidirectional Fusion Block (BFB). On the one hand, 2D-to-3D directional fusion explicitly enhances the 3D feature via 2D information injection. On the other hand, 3D-to-2D directional fusion implicitly improves the robustness of the 3D feature since it is required to have the potential to be well adapted to the 2D space. Therefore, our BFB enjoys the benefits of both PMF and 2DPASS. §.§.§ Testing During inference, the camera branch is not needed anymore since its knowledge is already distilled to the 2D knowledge branch. Besides, only 2D-to-3D directional fusion is involved as the final prediction results come from the 3D LIDAR branch. The right-hand side of Fig. <ref> (c) shows the parts that are needed during inference. §.§ Point-to-pixel Corrspondence Point-to-pixel correspondence is the pre-request of Cross-Modality Distillation (CMD). 
Given a LIDAR point cloud P = {p_i}_i=1^N ∈ℝ^N× 3, where p_i = (x_i, y_i, z_i) ∈ℝ^3 refers to the XYZ coordinates of a point and N is the number of points in the point cloud, the projected 2D coordinates of the point p_i is calculated as: [u_i, v_i, 1]^T = 1/z_i× K× T × [x_i, y_i, z_i, 1]^T, where K ∈ℝ^3× 4 and T ∈ℝ^4× 4 denote the intrinsic and extrinsic matrices of the camera, respectively. Then we have p̂_i =(⌊ v_i⌋ , ⌊ u_i ⌋ ) ∈ℝ^2 as the integer projected 2D coordinates, where ⌊·⌋ is the floor operation. For the SemanticKITTI dataset, K and T are already given. For the NuScenes dataset, the extrinsic matrix T is calculated as: T=T_C←ego_t_c× T_ego_t_c←G× T_G←ego_t_l× T_ego_t_l←L, where L, C, and G refer to the LIDAR, camera, and global. Note that CMD is only applied on the points that are in the overlapping FOVs of LIDAR and camera, as shown in the colorized region in the input of the 2D knowledge branch in Fig. <ref> (a). Formally, suppose the points set in the overlapping FOVs of LIDAR and camera is P^O = {p_i}_i=1^N^O∈ℝ^N^O× 3, where N^O denotes the number of points in the overlapping FOVs of the LIDAR and camera, then for each point p_i in P^O, its corresponding projected coordinates p̂_i =(⌊ v_i⌋ , ⌊ u_i ⌋ ) should meet: {[ 0 ≤⌊ v_i⌋≤ H; 0 ≤⌊ u_i⌋≤ W, ]. where H and W refer to the height and width of corresponding images. Note that for feature maps under different scales, we first upsample the feature maps to the original scale and then use the corresponding point-to-pixel corresponding. §.§ Cross-Modality Distillation Cross-Modality Distillation (CMD) is to distillate the 2D knowledge from the camera branch (a 2D network) to the 2D knowledge branch (a 3D network), so we can generate the 2D information for those points out of the FOV of the camera and do not need the images during inference. §.§.§ Camera Branch Unlike PMF <cit.> and 2DPASS <cit.> that train the camera branch with the ground truth projected from the LIDAR point cloud, we use a ResNet101 <cit.> which is pre-trained on the Cityscapes dataset <cit.>. Cityscapes is a popular dataset for 2D image semantic segmentation in the autonomous driving scenario. We adopt this strategy for two reasons. First, if we use the ground truth which is projected from the LIDAR point cloud, the camera branch may learn the overlapping knowledge with the 3D LIDAR branch since they share the same ground truth source. In contrast, the pre-trained camera branch using another dataset could provide additional information on top of the LIDAR point cloud. Second, we could freeze the camera branch during training since it is well-trained, so less back-propagation is needed for the whole structure. In this way, the training process consumes less GPU memory and time. §.§.§ 2D Knowledge Branch Following 2DPASS <cit.>, we use SPVCNN <cit.> as the 3D network used in this paper, including the 2D knowledge branch and 3D LIDAR branch. Now let us formulate the process of CMD. For points in the overlapping FOVs of LIDAR and camera p_i ∈ P^O, we feed them into the 2D knowledge branch f_2D to obtain the features z_2D^s: z_2D^s={ f_2D^s(p_i) }_i=1^N^O∈ℝ^N^O× d, where s={1,2,3,4 } and d refer to the feature map scale and the dimension of the features, respectively. Then we obtain the corresponding features z_C^s of P^O from the camera branch through the point-to-pixel projection described in Sec. <ref>. The CMD is realized through this loss ℒ_CMD: ℒ_CMD = 1/N^O∑ z_2D^s - z_C^s _2, where ·_2 denotes the L2 loss. 
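A hedged sketch of the two ingredients just described may be useful: projecting LIDAR points into the image plane with the intrinsic and extrinsic matrices, and computing the L2 distillation loss on the points that fall inside the image. The tensor layouts, the positive-depth check, and the function names are our assumptions for illustration, not the authors' released code.

```python
import torch

def point_to_pixel(points, K, T, H, W):
    """Project LIDAR points (N, 3) to pixels with intrinsic K (3x4) and extrinsic T (4x4).

    Returns integer pixel coordinates (u, v) and the mask of points inside the image.
    """
    N = points.shape[0]
    hom = torch.cat([points, torch.ones(N, 1)], dim=1)    # (N, 4) homogeneous coordinates
    cam = (K @ T @ hom.T).T                               # (N, 3), last column is depth z
    uv = cam[:, :2] / cam[:, 2:3]                         # divide by z as in Eq. (1)
    u, v = uv[:, 0].floor().long(), uv[:, 1].floor().long()
    mask = (cam[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    return u, v, mask

def cmd_loss(feat_2d_branch, img_feat_map, points, K, T):
    """L_CMD: mean L2 distance between 2D-knowledge-branch point features (N, d)
    and camera-branch features (d, H, W) sampled at the projected pixels."""
    d, H, W = img_feat_map.shape
    u, v, mask = point_to_pixel(points, K, T, H, W)
    z_c = img_feat_map[:, v[mask], u[mask]].T             # (N_O, d) camera features
    z_2d = feat_2d_branch[mask]                           # (N_O, d) distilled features
    return (z_2d - z_c).norm(dim=1).mean()
```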
In this way, the 2D knowledge branch can mimic the function of the camera branch to provide the 2D information based on the 3D LIDAR point cloud. Although ℒ_CMD is only available for P^O during training, the trained 2D knowledge branch can be generalized to the whole point cloud P during inference. §.§ Bidirectional Fusion Our bidirectional fusion block (BFB) is composed of a 3D-to-2D fusion block and a 2D-to-3D fusion block, as shown in Fig. <ref> (b). 2D-to-3D directional fusion explicitly enhances the 3D features via 2D feature injection, while 3D-to-2D implicitly enhances the 3D features via 2D knowledge branch supervision. Note that the 3D-to-2D fusion block and 2D-to-3D fusion block share the same single directional fusion structure, as shown in Fig. <ref> (c), and the only difference is the input position. Fig. <ref> (c) is the example of the 3D-to-2D single directional fusion block, and we can obtain the 2D-to-3D single directional fusion block by simply changing the positions of two inputs in Fig. <ref> (c). Unlike CMD which can only be applied on the P^O, BFB is applied on the whole point cloud. So z_2D^s ∈ℝ^N× d and z_3D^s∈ℝ^N× d in this section. §.§.§ 3D-to-2D Fusion 3D-to-2D fusion is illustrated in Fig. <ref> (c). Formally, we first have: z_3D2D^s = _2((_1(z_3D^s), z_2D^s)), where is a multiplayer perceptron, and refers to the feature concatenation. _1 is used to transfer the 3D feature z_3D^s into the 2D feature space. _2 is responsible to transfer the concatenated feature into the residual space of z_2D^s. Then we have: z̃_2D^s = z_2D^s ⊕σ(_3(((z_3D2D^s),z_3D2D^s))) ⊙ z_3D2D^s, where ⊕ and ⊙ denote the element-wise plus and element-wise multiply, respectively. means global average pooling, and σ means Sigmoid activation function. is used to integrate the gloable information, and _3 is used to transfer the feature into the attention value. z̃_2D^s represents the enhanced 2D features of scale s. Then we concatenate z̃_2D^s and the enhanced features of previous scales z_2DF^s-1 to obtain z_2DF^s: z_2DF^s = (z_2DF^s-1,z̃_2D^s), where z_2DF^s contains all enhanced 2D features from scale 1 to s. Finally, z_2DF^4 contains the enhanced 2D features of all 4 scales, and we use a linear classifier g_2D to output the logits. The loss of 2D knowledge branch ℒ_2D is formulated as: ℒ_2D = -1/N∑ ylog(g_2D(z_2DF^4)_y), where y refers to the ground truth, and g(z_2DF^4)_y denotes the y^th logit of g(z_2DF^4). Note that single directional fusion does not share MLPs for different scales. §.§.§ 2D-to-3D Fusion 2D-to-3D fusion shares the symmetric structure with 2D-to-3D fusion. Formally, we have the following: z_2D3D^s = _2( (_1(z_2D^s), z_3D^s)), z̃_3D^s = z_3D^s ⊕σ(_3(( (z_2D3D^s),z_2D3D^s))) ⊙ z_2D3D^s, z_3DF^s = ( z_3DF^s-1,z̃_3D^s). Similarly, z_3DF^4 is the final enhanced 3D feature, and a linear classifier g_3D is used to output the logits. The loss of 3D knowledge branch ℒ_3D is formulated as: ℒ_3D = -1/N∑ ylog(g_3D(z_3DF^4)_y). Note that 2D-to-3D fusion blocks do not share MLPs and classifiers with 3D-to-2D fusion blocks. §.§ Overall Training and Testing Process §.§.§ Training The overall loss ℒ_all for training the model is calculated as: ℒ_all = ℒ_CMD + ℒ_2D + ℒ_3D. §.§.§ Testing We use the output of the classifier in the 3D LIDAR branch as the final prediction results. Specifically, the prediction result ŷ is: ŷ = max_i=1,2,...,C g_3D(z_3DF^4)_i, where C denotes the total number of classes in the dataset. 
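The single-directional fusion block of the equations above can be read as a small point-wise module; swapping the two inputs gives the 2D-to-3D variant, and, as the text notes, separate (non-shared) instances are used per scale and per direction, with the per-scale outputs concatenated before the linear classifier. The sketch below is our interpretation, with hypothetical layer widths and a per-point-cloud mean standing in for the global average pooling.

```python
import torch
import torch.nn as nn

class SingleDirectionFusion(nn.Module):
    """3D-to-2D fusion; passing (z_2D, z_3D) instead gives the 2D-to-3D direction."""
    def __init__(self, dim):
        super().__init__()
        self.mlp1 = nn.Linear(dim, dim)          # maps source features into the target space
        self.mlp2 = nn.Linear(2 * dim, dim)      # concat -> residual features
        self.mlp3 = nn.Linear(2 * dim, dim)      # (GAP, residual) -> attention logits

    def forward(self, z_src, z_tgt):
        # z_src, z_tgt: (N, dim) point-wise features of the two branches at one scale
        z_res = self.mlp2(torch.cat([self.mlp1(z_src), z_tgt], dim=1))
        gap = z_res.mean(dim=0, keepdim=True).expand_as(z_res)          # global average pooling
        attn = torch.sigmoid(self.mlp3(torch.cat([gap, z_res], dim=1))) # sigma(MLP(...))
        return z_tgt + attn * z_res                                     # enhanced target features
```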
§ EXPERIMENTS §.§ Experiment Settings §.§.§ Datasets We conduct experiments on three large-scale outdoor datasets: SemanticKITTI <cit.>, SemanticKITTI-O <cit.>, and NuScenes <cit.>. SemanticKITTI provides dense segmentation labels for sequences 00-10, of which sequence 08 is used for validation and the others for training. The ground truth of sequences 11-21 is not available to the public and is used for testing. Each LIDAR scan in SemanticKITTI is paired with two front-view color images; we use the image captured by the left camera in our experiments. NuScenes contains 8130 samples for training, 6019 samples for validation, and 6008 samples for testing. Each LIDAR scan in NuScenes is paired with six images, and we randomly pick one image for training. SemanticKITTI-O is a subset of SemanticKITTI that contains the points in the overlapping FOVs of the camera and LIDAR. PMF <cit.> proposed SemanticKITTI-O because its 2D-to-3D fusion scheme cannot be applied to points that are out of the FOV of the camera. §.§.§ Evaluation Metrics We adopt the commonly used mean intersection-over-union (mIoU) over all classes as the evaluation metric. Specifically, mIoU is formulated as: mIoU = (1/C) ∑_c=1^C TP_c/(TP_c + FP_c + FN_c), where TP_c, FP_c, and FN_c denote the numbers of true positive, false positive, and false negative points of class c, and C is the number of classes. In addition, we also report the frequency-weighted IoU (fwIoU) provided by the NuScenes leaderboard. FwIoU is a version of mIoU in which each class is weighted by its point-level frequency. §.§.§ Network Settings The camera branch is a ResNet101 <cit.> network pre-trained on the Cityscapes <cit.> dataset. Following 2DPASS <cit.>, the 2D knowledge branch and the 3D LIDAR branch are two modified SPVCNN <cit.> networks with the same structure. The feature maps from the three branches are first reduced to dimension 128 for SemanticKITTI and 256 for NuScenes, and then upsampled through bilinear interpolation to the original scale and used for CMD and BFB. As shown in Fig. <ref> (a), we use feature maps from 4 scales for better performance. §.§.§ Training and Inference Details Our model is trained in an end-to-end manner with the SGD optimizer. The initial learning rate is set to 0.24, following 2DPASS <cit.> and SPVCNN <cit.>. We train the model for 128 epochs on SemanticKITTI and 80 epochs on NuScenes. We use the augmentation strategy commonly used in LIDAR semantic segmentation, including global scaling with a random scaling factor sampled from [0.95, 1.05] and global rotation around the Z axis with a random angle. Image augmentation includes horizontal flipping and color jitter. The cropped image size is 1200 × 360 (W × H) for SemanticKITTI and 400 × 240 for NuScenes. The voxel size in the 2D knowledge branch and 3D LIDAR branch is set to 0.1. We train our model with batch size 8 on 2 Nvidia Tesla A100 GPUs with 80G memory. §.§ Results on Benchmarks §.§.§ Results on SemanticKITTI-O PMF <cit.> provides a comprehensive benchmark on the SemanticKITTI-O validation set, as shown in Table <ref>. Traditional 2D-to-3D fusion methods such as PointPainting <cit.>, RGBAL <cit.>, and PMF require both LIDAR and camera data during training and inference, while our CMDFusion is trained on LIDAR-camera pairs but does not require camera data during inference. We can see that our method significantly surpasses the PMF method by 6.2 mIoU.
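For reference, the evaluation metrics described above can be computed from a confusion matrix as in the short snippet below. The helper is our own (not the authors' evaluation code) and, for simplicity, guards empty classes by assigning them an IoU of 0; fwIoU simply reweights the per-class IoU by the point-level class frequency.

```python
import numpy as np

def miou_and_fwiou(pred, gt, num_classes):
    """pred, gt: 1-D integer label arrays over all evaluated points."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (gt, pred), 1)                  # confusion matrix, rows = ground truth
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp                      # predicted as class c but actually not c
    fn = conf.sum(axis=1) - tp                      # actually class c but predicted otherwise
    iou = tp / np.maximum(tp + fp + fn, 1)          # per-class IoU_c (0 for empty classes)
    freq = conf.sum(axis=1) / max(conf.sum(), 1)    # class frequency from ground truth
    return iou.mean(), (freq * iou).sum()
```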
Note that our CMDFusion can be trained on the whole SemanticKITTI dataset based on our 2D knowledge branch and CMD, while PointPainting, RGBAL, and PMF can be only trained on the training set of SemanticKITTI-O due to their 2D-to-3D fusion scheme. §.§.§ Results on SemanticKITTI Similar to 2DPASS <cit.>, our CMDFusion is trained on the LIDAR and camera modality, while only LIDAR modality is required during inference, so 2DPASS and our CMDFusion can be tested on the whole LIDAR point cloud. However, our CMDFusion includes both 2D-to-3D and 3D-to-2D fusion while 2DPASS only includes 3D-to-2D fusion, so our method surpasses the 2DPASS according to Table <ref>. Note that 2DPASS only released the codebase and the checkpoint without the validation set involved in the training set and instance-level augmentation, so we retrain their model following the same setting and evaluate on the test set. We also try their released checkpoint on the test set and find that both of them achieve a similar mIoU (67.7). We follow the same setting for fair comparison and our method achieves the better performance (68.6 mIoU). We also try the instance-level augmentation from Polarmix <cit.> on 2DPASS and our method, and our method still surpasses the 2DPASS by 0.6 mIoU. Note that since 2DPASS does not release the code to reproduce the performance reported in their paper, we only compare with them under the same training settings, where our method achieves the better performance. To avoid the mis-correspondence between images and LIDAR point cloud brought by the instance-level augmentation, we do not involve the camera branch during finetuning, and use the frozen 2D knowledge branch to provide 2D information and only finetune the 3D LIDAR branch. In general, our method achieves the best performance among all public methods. §.§.§ Results on NuScenes Table <ref> shows that our method achieves better performance (2.0 mIoU) than 2DPASS. Similar to the SemanticKITTI, the performance of 2DPASS comes from the higher one between our retrained model and their released checkpoint. Unlike the SemanticKITTI dataset, the NuScenes dataset provides 6 images to cover the FOV of the LIDAR, so the 2D-to-3D fusion methods like PMF <cit.> and 2D3DNet <cit.> can also be evaluated on the whole LIDAR point cloud. Among all fusion-based methods, our CMDFusion achieves the best performance. §.§.§ Visualization We provide two samples from SemanticKITTI and NuScenes datasets in Fig. <ref>. The top sample shows that 2DPASS and our method have less error on the building compared to the SPVCNN, which illustrates the effectiveness of multi-modality fusion. Besides, our method has better results on the car and truck than 2DPASS, because 2D-to-3D fusion is involved in our method but not in the 2DPASS. In addition, we visualize the feature representation of 2DPASS and our method on the NuScenes dataset. As shown in Fig. <ref>, our method has more discriminative features, e.g., the pedestrian class is more separable in our method than 2DPASS. §.§ Runtime Analysis Table <ref> provides the runtime analysis on the NuScenes dataset. PointPainting, RGBAL, and PMF use 2D networks for semantic segmentation since the input is range-view or perspective-view, so they can be accelerated using TensorRT by a large margin (125.0 to 22.3 ms for the PMF method). In contrast, the 3D network in Cylinder3D, 2DPASS, and our method cannot be accelerated by TensorRT. 
Compared to PMF without TensorRT, our method has fewer FLOPs and parameters during inference while sharing the same runtime. Compared to 2DPASS, our method achieves better performance since two 3D networks (the 2D knowledge branch and the 3D LIDAR branch) are used during inference, which inevitably consumes more runtime. §.§ Ablation Study We conduct a careful ablation study to show the effectiveness of the different modules in our method. The comprehensive ablation results are based on the SemanticKITTI-O dataset since classical 2D-to-3D fusion without CMD can only be applied to points in the overlapping FOVs of the LIDAR and camera. The results are in Table <ref>. The baseline refers to a single SPVCNN 3D network. We can see that both 3D-to-2D fusion and 2D-to-3D fusion are helpful, but 2D-to-3D fusion brings a larger performance gain since the camera information is explicitly injected into the LIDAR branch. After we replace the camera branch (CB) with a frozen CB pre-trained on Cityscapes, the performance is further improved. The reason may be that the pre-trained camera branch provides additional information for the current LIDAR point cloud dataset. Then we introduce cross-modality distillation (CMD) to let a 3D network output the 2D information so that the model can be trained on the whole dataset rather than only the overlapping FOVs of the camera and LIDAR. As a result, the performance is greatly boosted by CMD. Similar to 2DPASS, we also apply voting test-time augmentation (TTA), i.e., rotating the input point cloud by 12 angles around the Z axis and averaging the prediction scores as the final output. TTA improves performance by 2.46 mIoU. § CONCLUSION In this paper, we propose a Bidirectional Fusion Network with Cross-Modality Knowledge Distillation (CMDFusion) to fuse camera and LIDAR information for better LIDAR semantic segmentation. Compared to the 2D-to-3D fusion-based method PMF <cit.>, our proposed Cross-Modality Distillation (CMD) module solves the problem that the camera branch cannot output 2D information for points outside the FOV of the camera. Compared to the 3D-to-2D fusion-based method 2DPASS <cit.>, our proposed Bidirectional Fusion Block (BFB) contains additional 2D-to-3D fusion, which explicitly strengthens the 3D features through 2D information injection for better LIDAR semantic segmentation. We show the effectiveness of our proposed method through comprehensive experiments on the SemanticKITTI and NuScenes datasets. Overall, we provide an alternative approach to fully utilize multi-modality information for 3D semantic segmentation, and introduce a new and feasible way to handle the problem that the sensors' FOVs do not fully overlap. We hope this paper can provide inspiration for future work on autonomous vehicles and robots. § ACKNOWLEDGMENT This work is supported by Alibaba Group through the Alibaba Research Intern Program.
http://arxiv.org/abs/2307.05573v1
20230710083608
On the first bifurcation of Stokes waves
[ "Vladimir Kozlov" ]
math.AP
[ "math.AP" ]
We consider Stokes water waves on the vorticity flow in a two-dimensional channel of finite depth. In the paper <cit.> it was proved existence of subharmonic bifurcations on a branch of Stokes waves. Such bifurcations occur near the first bifurcation in the set of Stokes waves. Moreover it is shown in that paper that the bifurcating solutions build a connected continuum containing large amplitude waves. This fact was proved under a certain assumption concerning the second eigenvalue of the Frechet derivative. In this paper we investigate this assumption and present explicit conditions when it is satisfied. Optical-power-dependent splitting of magnetic resonance in nitrogen-vacancy centers in diamond Kensuke Kobayashi Received / Accepted ============================================================================================== § FORMULATION OF THE PROBLEM Stokes and solitary waves were the main subject of study in the nonlinear water wave theory up to 1980. In 1980 (see Chen <cit.> and Saffman <cit.>) it was discovered numerically and in 2000 (see <cit.>) this was supported theoretically for the ir-rotational case for a flow of infinite depth that there exist new types of periodic waves with several crests on the period (the Stokes wave has only one crest). These waves occur as a result of bifurcation on a branch of Stokes waves when they approach the wave of greatest amplitude. In my papers <cit.> and <cit.> the existence of subharmonic bifurcations was proved on branches of Stokes waves on vorticity flow. The main result in the latest paper <cit.> is proved under a certain assumption on the second eigenvalue of the Frechet derivative. The main goal of this paper is to study this assumption and to give an explicit conditions for its validity. Consider steady surface waves in a two-dimensional channel bounded below by a flat, rigid bottom and above by a free surface that does not touch the bottom. The surface tension is neglected and the water motion can be rotational. In appropriate Cartesian coordinates (X, Y ), the bottom coincides with the X-axis and gravity acts in the negative Y -direction. We choose the frame of reference so that the velocity field is time-independent as well as the free-surface profile which is supposed to be the graph of Y = ξ(X), x ∈ R, where ξ is a positive and continuous unknown function. Thus 𝒟=𝒟_ξ = {X∈ R, 0 <Y < ξ(X)}, 𝒮=𝒮_ξ={X∈ R, Y=ξ(X)} is the water domain and the free surface respectively. We will use the stream function Ψ, which is connected with the velocity vector ( u, v) as u=-Ψ_Y and v=Ψ_X. We assume that ξ is a positive, periodic function having period Λ>0 and that ξ is even and strongly monotonically decreasing on the interval (0,Λ/2). Since the surface tension is neglected, Ψ and ξ after a certain scaling satisfy the following free-boundary problem (see for example <cit.>): ΔΨ+ω(Ψ)=0 , 1/2|∇Ψ|^2+ξ=R , Ψ=1 , Ψ=0 , where ω∈ C^1,α, α∈ (0,1), is a vorticity function and R is the Bernoulli constant. We assume that Ψ is even, Λ-periodic in X and Ψ_Y>0 , which means that the flow is unidirectional. The Frechet derivative for the problem is evaluated for example in <cit.>, <cit.>, and the corresponding eigenvalue problem for the Frechet derivative has the form Δ w+ω'(Ψ)w+μ w=0 , ∂_ν w-ρ w=0 , w=0 , where ν is the unite outward normal to Y=ξ(X) and ρ= ρ(X)=(1+Ψ_XΨ_XY+Ψ_YΨ_YY)/Ψ_Y(Ψ_X^2+Ψ_Y^2)^1/2|_Y=ξ(X). The function w in (<ref>) is supposed also to be even and Λ-periodic. Let us introduce several function spaces. Let α∈ (0,1) and k=0,1,…. 
The space C^k,α(𝒟) consists of bounded functions in 𝒟 such that the norms C^k,α(𝒟_a,a+1) are uniformly bounded with respect to a∈ R. Here 𝒟_a,a+1={(X,Y)∈𝒟, : a≤ x≤ a+1}. The space C^k,α_0,Λ(𝒟) (C^k,α_0,Λ, e(𝒟)) consists of Λ-periodic (Λ-periodic and even) functions, which belong to C^k,α(𝒟) and vanish at Y=0. Similarly we define the space C^k,α_Λ( R) (C^k,α_Λ, e( R)) consisting of functions in C^k,α( R), which are Λ-periodic (Λ-periodic and even). We will consider a branch of Stokes water waves depending on a parameter t≥ 0, i.e. ξ=ξ(X,t), ψ=ψ(X,Y;t), Λ=Λ(t). For each t the functions ξ∈ C^2,α_Λ,e( R) and Ψ∈ C^3,α_Λ,e(𝒟). This branch starts from a uniform stream solution for t=0. The dependence on t is analytic in the sense explained in Sect. <ref>. The definition of uniform stream solution together with the dispersion equation which is required for existence of the branch of the Stokes waves (<ref>) is given in the next section <ref>. Existence of such branches was a subject of many papers. In the case of non-zero vorticity we note a fundamental work <cit.>, where a bifurcation branches for the flow with vorticity was constructed for the first time. In the case with variable period we refer to the papers <cit.> and <cit.>. The first (lowest eigenvalue of the problem (<ref>)] is always negative and simple and the second one we denote by μ(t). Assume that Assumption There exists t_0>0 such that μ(t)≥ 0 for t∈ (0,t_0) and μ(t)<0 for t∈ (t_0,t_0+ϵ) for a certain positive ϵ. This assumption describes the first bifurcation point t_0 on the branch (<ref>) in the class of Stokes waves of period Λ(t). It is convenient to separate two types of bifurcations of branches of Stokes waves: (i) in the class of Λ(t)-periodic solutions (Stokes bifurcation); (ii) in the class of MΛ(t)-periodic solutions (M-subharmonic bifurcation); Then the following theorem is proved in <cit.>. Let Assumption be fulfilled. Then there exists an integer M_0 and pairs (t_M,M), where M is integer M>M_0 and t_M>t_0, satisfying t_M→ t_0 M→∞, such that t_M is M- subharmonic bifurcation point. There are no subharmonic bifurcations for t<t_0. Moreover in Theorem 9.2, <cit.>, a structure of the set of bifurcating solutions is given. In particular it was shown that the bifurcating solutions build a connected continuum containing large amplitude waves. The main aim of this paper is to give explicit conditions for validity of Assumption. Our analysis consists of two parts: (i) analysis of behaviour of μ(t) for small t; (ii) analysis of μ(t) for large positive t. For t=0, Λ(0)=Λ_0 and μ(0)=0. Our first goal is to study the functions Λ(t) and μ(t) for small t. One of the results is the following. It's quite straightforward to show that these functions has the following asymptotic representations μ (t)=μ_2t^2+0(t^3) Λ(t)=Λ_0+Λ_2t^2+O(t^3), where Λ_0=Λ(0). It is proved that μ_2=CΛ_2 with a positive constant C to be evaluated later. To prove formula (<ref>), first we study the function λ(t)=Λ_0/Λ(t)=1+λ_2t^2+O(t^3) and established the relation -4λ_2τ_*^2∫_0^dγ(Y;τ_*)^2dY=μ_2∫_0^d γ(Y;τ_*)^2dY/Ψ_Y, where γ(Y;τ) solves the problem (<ref>). Since Λ_2=-λ_2Λ_0, the last relation implies (<ref>) with a positive constant C. Thus the sign of μ_2 is the same as of Λ_2 and opposite to the sign of λ_2. In the irrotational case, i.e. 
ω=0, we study the dependence of μ_2 on the parameter θ>1 connected with the Froude number F=d_-^-3/2 by[It follows from (<ref>)] θ=(1+√(1+8F^-2)/4)^3F^4=(F+√(F^2+8)/4)^3F, where the right-hand side is monotone with respect to F. We prove that μ_2(θ)>0 θ_0≈ 2.48. In terms of the Froude number the eigenvalue μ(t) is positive when F<F_0, F_0≈ 1,511. This give a condition for validity of the first part in Assumption. Let us turn to the second part of the above assumption. It is enough to show an appearance of negative eigenvalues of the Frechet derivative when t→∞. According to Corollary 2.2, <cit.>, there exists a sequence {t_j}, j=1,…, such that a). ξ(0,t_j) tends to R when j→∞ (extreme wave) or b). ξ(0,t_j) tends to a solitary wave as j→∞ In the case a) the limit configuration is the extreme wave with the angle 120^∘ at the crest (see <cit.>, <cit.>, <cit.> and <cit.>) and the appearance of negative eigenvalues follow from Theorem 3.1, <cit.> and <cit.>. To show that the option b) is impossible we choose parameters of the problem such that solitary waves are excluded. We will do this by using known upper estimates for the Froude number of solitary waves. The best known upper estimate for the Froude number of solitary wave, which follows from <cit.> (see also <cit.> and Introduction of <cit.>) is the following F<√(2). This means that if F>√(2) there are no solitary waves with such Froude number. Hence every global branch of Stokes waves must approach a Stokes waves of maximal amplitude which have the angle 120^∘ at the crest. According to Theorem 3.1 <cit.> this fact implies appearance of infinitely many negative eigenvalues of the Frechet derivative when t→∞. This implies the validity of the second part of Assumption. Therefore 1,414<F<1,511 . Another upper estimate for the Froude number obtained numerically (see <cit.>, <cit.>, <cit.> and Introduction in <cit.>) is F<1,29. Hence 1,29<F<1,511 . This estimate is supported now by numerics only, but we present it because it can be used for numerical study of subharmonic bifurcations. §.§ Uniform stream solution, dispersion equation The uniform stream solution Ψ=U(Y) with the constant depth η =d satisfies the problem U^”+ω(U)=0 , U(0)=0, U(d)=1, 1/2U'(d)^2+d=R. In order to find solutions to this problem we introduce a parameter s=U'(0). We assume that s>s_0:=2max_τ∈ [0,1]Ω(τ), where Ω(τ)=∫_0^τω(p)dp. Then the problem (<ref>) has a solution (U,d) with a strongly monotone function U for R=ℛ(s):=1/2s^2+d(s)-Ω(1). The solution is given by Y=∫_0^Udτ/√(s^2-2Ω(τ)), d=d(s)=∫_0^1dτ/√(s^2-2Ω(τ)). If we consider (<ref>) as the equation with respect to s then it is solvable if R≥ R_c, where R_c=min_s≥ s_0ℛ(s), and it has two solutions if R∈ (R_c,R_0), where R_0=ℛ(s_0). We denote by s_c the point where the minimum in (<ref>) is attained. Existence of small amplitude Stokes waves is determined by the dispersion equation (see, for example, <cit.>). It is defined as follows. The strong monotonicity of U guarantees that the problem γ^”+ω'(U)γ-τ^2γ=0, γ(0,τ)=0, γ(d,τ)=1 has a unique solution γ=γ(y,τ) for each τ∈ R, which is even with respect to τ and depends analytically on τ. Introduce the function σ(τ)=κγ'(d,τ)-κ^-1+ω(1), κ=U'(d). It depends also analytically on τ and it is strongly increasing with respect to τ>0. Moreover it is an even function. The dispersion equation (see, for example <cit.>) is the following σ(τ)=0. It has a positive solution if σ(0)<0. By <cit.> this is equivalent to s+d'(s)<0 or what is the same 1<∫_0^ddY/U'^2(Y). 
The right-hand side here is equal to 1/F^2 where F is the Froude number (see <cit.> and <cit.>). Therefore (<ref>) means that F<1, which is well-known condition for existence of Stokes waves of small amplitude. Another equivalent formulation is given by requirement (see, for example <cit.>) s∈ (s_0,s_c). The existence of such s is guaranteed by R∈ (R_c,R_0). One more formula for the froude number is the following 1/F^2(s)=d'(s)/s, where the Froude number F(s) corresponds to the uniform stream solution (U(Y;s),d(s)) and R=ℛ(s). One can verified directly from (<ref>) that (d'(s)/s)'>0. Therefore ℛ'(s)=s(1-F^-2(s)) and 1-F^-2(s)=1-d'(s)/s=(d'(s)/s)'(s_0-s)+O(s_0-s). The value σ(0) admits the following representation (see [DispEqv]): σ(0)=-3/2κℛ'(s)/d'(s)=3(F^2(s)-1)/2κ. The function σ has the following asymptotic representation σ(τ)=κτ +O(1) and equation (<ref>) has a unique positive root, which will be denoted by τ_*. It is connected with Λ_0 by the relation τ_*=2π/Λ_0. To give another representation of the function σ we introduce ρ_0=1+U'(d)U^”(d)/U'(d)^2 and note that 1+U'(d)U^”(d)/U'(d)^2=κ^-2-ω(1)/κ. Hence another form for (<ref>) is σ(τ)=κγ'(d,τ)-κρ_0. The following problem will be used in asymptotic analysis of the branch (<ref>) for small t: v^”+ω'(U)v-τ^2v=f , v'(d)-ρ_0v(d)=g v(0)=0. Let τ≥ 0 and τ≠τ_*. Let also f∈ C^1,α([0,d]) and g be a constant. Then the problem (<ref>) has a unique solution v∈ C^3,α. If τ=τ_* then the problem (<ref>) has the one dimensional kernel which consists of function cγ(Y;τ_*). § A CONNECTION BETWEEN THE FUNCTIONS Μ(T) AND Λ(T) FOR SMALL T In this section we prove formula (<ref>). It appears that the partial hodograph transform is very useful for this purpose. §.§ Partial hodograph transform In what follows we will study branches of Stokes waves (Ψ(X,Y;t),ξ(X;t)) of period Λ(t), t≥ 0, started from the uniform stream at t=0. The existence of such branches is established in [ConStr] with fixed period but variable R and in [KL] for variable Λ and fixed R. In our case of variable Λ it is convenient to make the following change of variables x=λ X, y=Y, λ=Λ_0/Λ(t) in order to deal with the problem with a fixed period. Here as before Λ_0=Λ(0)=2π/τ_*, where τ_* is the root of the equation (<ref>). As the result we get (λ^2∂_x^2+∂_y^2)ψ+ω(ψ)=0 , 1/2(λ^2ψ_x^2+ψ_y^2)+η=R , ψ=1 , ψ=0 , where ψ(x,y;t)=Ψ(λ^-1x,y;t) η(x;t)=ξ(λ^-1 x;t). Here all functions have the same period Λ_0:=Λ(0), D_η and B_η are the domain and the free surface after the change of variables (<ref>). From (<ref>) it follows that ψ_y>0 . Using the change of variables q=x, p=ψ, we get q_x=1, q_y=0, p_x=ψ_x, p_y=ψ_y, and ψ_x=-h_q/h_p, ψ_y=1/h_p, dxdy=h_pdqdp. System (<ref>) in the new variables takes the form (1+λ^2h_q^2/2h_p^2+Ω(p))_p-λ^2(h_q/h_p)_q=0 , 1+λ^2h_q^2/2h_p^2+h=R , h=0 . Here Q={(q,p) : q∈ R , p∈ (0,1)}. The uniform stream solution corresponding to the solution U of (<ref>) is H(p)=∫_0^pdτ/√(s^2-2Ω(τ)), s=U'(0)=H_p^-1(0). One can check that H_pp-H_p^3ω(p)=0 or equivalently (1/2H_p^2)_p+ω(p)=0. Moreover it satisfies the boundary conditions 1/2H_p^2(1)+H(1)=R, H(0)=0. The Froude number in new variables can be written as 1/F^2=∫_0^1H_p^3dp. Then according to Theorem 2.1, <cit.> there exists a branch of solutions to (<ref>) h=h(q,p;t):[0,∞)→ C^2,γ_pe(Q), λ=λ(t):[0,∞)→ (0,∞), which has a real analytic reparametrization locally around each t≥ 0. 
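As a concrete illustration of the quantities introduced in this subsection, the short Python sketch below evaluates H(p) = ∫_0^p dτ/√(s²−2Ω(τ)), the depth d(s) = H(1), and the Froude number through 1/F² = ∫_0^1 H_p³ dp for a user-supplied vorticity function ω. It is only a numerical sketch (the quadrature routine, tolerances, and the sample value of s are illustrative choices, not taken from the paper); in the irrotational case ω = 0 it reproduces d = 1/s and F² = s³.

```python
import numpy as np
from scipy.integrate import quad

def Omega(tau, omega):
    """Omega(tau) = int_0^tau omega(p) dp."""
    val, _ = quad(omega, 0.0, tau)
    return val

def H_p(p, s, omega):
    """H_p(p) = 1 / sqrt(s^2 - 2*Omega(p)); requires s^2 > 2*max Omega."""
    return 1.0 / np.sqrt(s**2 - 2.0 * Omega(p, omega))

def depth(s, omega):
    """d(s) = H(1) = int_0^1 H_p(p) dp."""
    val, _ = quad(lambda p: H_p(p, s, omega), 0.0, 1.0)
    return val

def froude(s, omega):
    """F from 1/F^2 = int_0^1 H_p(p)^3 dp."""
    inv_F2, _ = quad(lambda p: H_p(p, s, omega)**3, 0.0, 1.0)
    return 1.0 / np.sqrt(inv_F2)

# Irrotational check: omega = 0 gives d = 1/s and F^2 = s^3.
omega0 = lambda p: 0.0
s = 0.8
print(depth(s, omega0), 1.0 / s)       # ~1.25 vs 1.25
print(froude(s, omega0)**2, s**3)      # ~0.512 vs 0.512
```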
§.§ Bifurcation equation In order to find bifurcation points and bifuracating solutions we put h+w instead of h in (<ref>) and introduce the operators ℱ(w;t)=(1+λ^2(h_q+w_q)^2/2(h_p+w_p)^2)_p -(1+λ^2h_q^2/2h_p^2)_p -λ^2(h_q+w_q/h_p+w_p)_q+λ^2(h_q/h_p)_q and 𝒢(w;t)=1+λ^2(h_q+w_q)^2/2(h_p+w_p)^2-1+λ^2h_q^2/2h_p^2+w acting on Λ_0-periodic, even functions w defined in Q. After some cancelations we get ℱ=𝒥_p+ℐ_q, 𝒢=𝒥+w, where 𝒥=𝒥(w;t)=λ^2h_p^2(2h_q+w_q)w_q-(2h_p+w_p)(1+λ^2h_q^2)w_p/2h_p^2(h_p+w_p)^2 and ℐ=ℐ(w;t)=-λ^2h_pw_q-h_qw_p/h_p(h_p+w_p). Both these functions are well defined for small w_p. Then the problem for finding solutions close to h is the following ℱ(w;t)=0 𝒢(w;t)=0 w=0 . Furthermore, the Frechet derivative (the linear approximation of the functions ℱ and 𝒢) is the following Aw=A(t)w=(λ^2h_qw_q/h_p^2-(1+λ^2h_q^2)w_p/h_p^3)_p-λ^2(w_q/h_p-h_qw_p/h_p^2)_q and 𝒩w=𝒩(t)w=(N w-w)|_p=1, where N w=N(t)w=(-λ^2h_qw_q/h_p^2+(1+λ^2h_q^2)w_p/h_p^3)|_p=1. The eigenvalue problem for the Frechet derivative, which is important for the analysis of bifurcations of the problem (<ref>), is the following A(t)w=μ w , 𝒩(t)w=0 , w=0 . For t=0 and μ=0 this problem becomes A_0w:=-(w_p/H_p^3)_p-(w_q/H_p)_q=0 , B_0w:=-w_p/H_p^3+w=0 , w=0 . Since the function H depends only on p this problem admits the separation of variables and its solutions are among the functions v(q,p)=α(p)cos (τ q), τ=kτ_*, k=0,1,…. According to <cit.> the function (<ref>) solves (<ref>) if and only if α(p)=γ(H(p);τ)H_p, where the function γ(Y;τ) solves the euation (<ref>) and σ(τ)=0. Therefore if τ≠τ_* then the problem (<ref>) has no non-trivial solutions. If τ=τ_* then the kernel of the above operator is one dimensional in the class of Λ_*:=2π/τ_* periodic, even function and it is given by v=α(p)cos(τ_*q), α(p)=γ(H(p);τ_*)H_p. We will need also the problem -(u_p/H_p^3)_p+τ^2u/H_p=F u(0)=0, -u_p/H_p^3+u=c , where F∈ C^0,α([0,1]) and c is a constant. Clearly this problem is elliptic and uniquely solvable for all τ≥ 0, τ≠τ_*, the problem (<ref>) has a unique solution in C^2,α([0,1]). This solution is given by u(p)=v(H(p))H_p(p), where v(Y) solves the problem (<ref>) with f=F(H(y)) and g=c. §.§ Stokes waves for small t Here we consider asymptotics of solutions of (<ref>) for small t. For this purpose we take h=H(p) and represent the solution in the form H(p)+w(q,p,t), w=tv, where v(q,p;t)=v_0(q,p)+tv_1(q,p)+t^2v_2(q,p)+⋯ The function λ=λ(t) is sought in the form λ(t)=1+λ_2t^2+O(t^4). The coefficients λ_1 and λ_3 in the above formula are zero as one can easily see from the forthcoming calculations. Our aim is to find Stokes waves close to H. Since the functions w, v and λ analytically depend on t it is sufficient to find coefficients v_j and λ_j. In this case 𝒥=A_1(1+w_p/H_p)^-2+A_2(1+w_p/H_p)^-2, where A_1=-w_p/H_p^3 and A_2=λ^2w_q^2/2H_p^2-w_p^2/2H_p^4. Therefore 𝒥=𝒥_1+𝒥_2+𝒥_3+O(t^4), where 𝒥_1=A_1, 𝒥_2=A_2-2w_p/H_pA_1=λ^2w_q^2/2H_p^2+3/2w_p^2/H_p^4 and 𝒥_3=3w_p^2/H_p^2A_1-2w_p/H_pA_2=-2w_p^3/H_p^5-w_pw_q^2/H_p^3. Furthermore ℐ=-λ^2w_q/H_p(1+w_p/H_p)^-1 =ℐ_1+ℐ_2+ℐ_3+O(t^4). Here ℐ_1=-λ^2w_q/H_p, ℐ_2=λ^2w_qw_p/H^2_p, ℐ_3(w)=-λ^2w_qw_p^2/H^3_p. Inserting (<ref>) and (<ref>) into (<ref>) and equating terms of the same power with respect to t, we get Av_0:=-(v_0p/H_p^3)_p-(v_0q/H_p)_q=0 , Bv_0:=-v_0p/H_p^3+v_0=0 , v_0=0 . As we have shown in previous section the kernel of the above operator is one dimensional and is generated by the function v_0=α_0(p)cos(τ_*q), α_0=γ(H(p);τ_*)H_p. 
The next term in the asymptotics satisfies the boundary value problem Av_1+(v_0q^2/2H_p^2+3/2v_0p^2/H_p^4)_p+(v_0qv_0p/H^2_p)_q=0 , Bv_1+v_0q^2/2H_p^2+3/2v_0p^2/H_p^4=0 , v_1=0 . The solution of this problem, orthogonal to v_0 in L^2, is given by v_1=α_1(p)+β_1(p)cos(2τ_* q), where α_1 and β_1 satisfy the problem (<ref>9 with τ=0 and τ=2τ_* respectively with certain right-hand sides. Further, the term v_2 is fond from the following problem Av_2+(v_0qv_1q/H_p^2+3v_0pv_1p/H_p^4+𝒥_3(v_0))_p +(v_1qv_0p+v_0qv_1p/H^2_p+ℐ_3(v_0))_q=2λ_2(v_0q/H_p)_q , Bv_2+v_0qv_1q/H_p^2+3v_0pv_1p/H_p^4+𝒥_3(v_0)=0 v_2(q,0)=0. The solvability condition for the last problem has the form 2λ_2∫_Ωv_0q^2/H_pdqdp-∫_Ω((v_0qv_1q/H_p^2+3v_0pv_1q/H_p^4)v_0p+v_0qv_1p+v_1qv_0p/H^2_pv_0q)dqdp +∫_Ω((2v_0p^3/H_p^5+v_0pv_0q^2/H_p^3)v_0p+v_0p^2v_0q/H_p^3v_0q)dqdp=0. This relation can be used to find λ_2. It is quite difficult to find the sign of λ_2 from this relation but it implies a continuity of λ_2 on R and ω. The function v_2 has the form v_2=α_2(p)cos(τ_* q)+β_2(p)cos(3τ_* q), where α_2 and β_2 satisfy the problem (<ref>) with τ=τ_* and τ=3τ_* respectively with certain right-hand sides. Thus we have shown that λ and v have the form (<ref>) and (<ref>) respectively. More exactly v_0 is given by (<ref>), v_1 is represented as (<ref>) and v_2 by (<ref>). §.§ Formula for λ_2 and the proof of the relation (<ref>) Using the representation (<ref>), (<ref>) with h=H+w, where w is evaluated in the previous section, we can write the Frechet derivative of the operators 𝒥 U and ℐ U in the form d𝒥(U)=-U_p/H_p^3+(w_qU_q/H_p^2+3w_pU_p/H_p^4)-6w_p^2U_p/H_p^5 -w_q^2U_p+2w_pw_qU_q/H_p^3+O(t^3) and dℐ(U)=-λ^2U_q/H_p+w_pU_q+w_qU_p/H_p^2-w_p^2U_q+2w_qw_pU_p/H_p^3+O(t^3). The eigenvalue problem is described by the boundary value problem (d𝒥(U))_p+(dℐ(U))_q=(μ_2t^2+O(t^3))U d𝒥(U)+U=0 U=0 We are looking for the eigenfunction U in the form U=U(q,;pt)=U_0(q,p)+tU_1(q,p)+t^2U_2(q,p)+O(t^3), U_0=v_0. Equating terms of the same order with respect to t, we get AU_1+(v_0qU_0q/H_p^2+3v_0pU_0p/H_p^4)_p+(w_0pU_0q+v_0qU_0p/H_p^2)_q=0 , BU_1+(v_0qU_0q/H_p^2+3v_0pU_0p/H_p^4)=0 , U_1=0 . Comparing this problem with (<ref>) and using that U_0=v_0, we conclude that U_1=2v_1. Next, we write the equation for U_2 -2λ_2(U_0q/H_p)_q+AU_2+(v_1qU_0q+v_0qU_1q/H_p^2+3v_1pU_0p+v_0pU_1p/H_p^4)_p +(v_1pU_0q+v_1qU_0p+v_0pU_1q+v_0qU_1p/H_p^2)_q -(6v_0p^2U_0p/H_p^5 +v_0q^2U_0p+2v_0pv_0qU_0q/H_p^3)_p-(v_0p^2U_0q+2v_0qv_0pU_0p/H_p^3)_q=μ_2U_0 and the boundary equations U_2=0 for p=0 and BU_2+(v_1qU_0q+v_0qU_1q/H_p^2+3v_1pU_0p+v_0pU_1p/H_p^4) -(6v_0p^2U_0p/H_p^5+v_0q^2U_0p+2v_0pv_0qU_0q/H_p^3)=0 Since U_0=v_0 and U_1=2v_1, the solvability condition for (<ref>) has the form 2λ_2∫_Q_pv_0q^2/H_pdqdp-3∫_Q_p(v_1qv_0q/H_p^2+3v_0pv_1q/H_p^4)v_0pdqdp -3∫_Q_pv_1pv_0q+v_1qv_0p/H_p^2v_0qdqdp+3(∫_Q_p(2v_0p^3/H_p^5+v_0q^2v_0p/H_p^3)v_0p+v_0p^2v_0q^2/H_p^3)dqdp =μ_2∫_Q_p v_0^2dqdp. Taking the sum of (<ref>) and (<ref>) with the factor -3, we get -4λ_2∫_Ωv_0q^2/H_pdqdp=μ_2∫_Ω v_0^2dqdp, which coincides with (<ref>). § THE COEFFICIENT Λ_2 FOR THE IRROTATIONAL FLOW In this section we evaluate the coefficient λ_2 in the case ω=0. The problem (<ref>) is solvable if R≥ R_c, where R_c=3/2. If R>R_c then the equation 1/d^2+2d=2R has exactly two solutions 0<d_-<1<d_+ which are called supercritical and subcritical, respectively. The Stokes branches appears only for the stream solutions- (Y/d_+,d_+). We will make the following change of variables X=x/d_+, Y=y/d_+-1, ξ(X)=η(x)/d_+-1, Ψ(X,Y)=ψ(x,y). 
Then the problem (<ref>) takes the form Δ_x,yψ=0 , |∇_x,yψ|^2+2θη=1 , ψ=1 , ψ=0 , where θ=d_+^3. So θ is the only parameter in the problem and θ∈ (1,∞). In the irrotational case one can derive an explicit equation for λ_2(θ). This derivation is based on the application of the integral Byatt-Smith equation, see <cit.>. §.§ Hodograph transformation Let x + iy→ϕ + iψ be a conformal mapping of D={(x,y) : x∈ R, -1<y<η(x)} onto R × (0, 1). Now we apply the hodograph transform, that is, use the imaginary part y(ϕ,ψ) of the inverse conformal mapping as the unknown function instead of the stream function ψ and the potential ϕ. From problem (<ref>) we get the following one: y_ϕϕ+y_ψψ=0, (ϕ,ψ)∈ R×(0,1); y=-1, ψ=0,ϕ∈ R; y=η, ψ=1,ϕ∈ R; (y_ϕ^2+y_ψ^2)^-1+2θ y=1, ψ=1, ϕ∈ R. Let us eliminate y in order to obtain an equation that contains only η. It is clear that relations (24) and (25) yield y_ψ(ϕ,1)=(1/1-2θη(φ)-η_ϕ^2(ϕ))^1/2 Here and below we write η(ϕ) instead of η(x(ϕ, 1)) and hope that this will not cause confusion. The Dirichlet-to-Neumann operator in the left-hand side of formula (26) can be expressed by virtue of the Fourier transform y(τ,ψ)=∫_-∞^∞ y(ϕ,ψ)e^iτϕdϕ. In order to solve the Dirichlet problem (22)–(24) we define the operator N by Nf(ξ)=ν(ξ) f(ξ), ν(ξ)=ξξ . The important property of this operator is N(cos(τϕ))=ν(τ)cos(τϕ) . Let ℱ(u,v)=v^2H_1(u,v)-u^2H_0(u), where H_0(u)=2θ^2[2+S(u)]/S(u)[1+S(u)]^2, H_1(u,v)=S(u)/1+√(1-v^2S^2(u)) and S(u)=√(1-2θ u). Then equation for η=η(ϕ) has the form (θ I-N)η=ℱ(η,η_ϕ). Here I is the identity operator. This equation coincides with that of Byatt-Smith <cit.> up to some algebraic manipulations. It also used in <cit.> and <cit.>, where various properties of this equation can be found. Equation (<ref>) is valid for all solutions with arbitrary period. To fix period we make the change of the variable φ=λϕ, λ=Λ_0/Λ. Then equation (<ref>) becomes (θ I-λ N)η=ℱ(η,λη_ϕ). We are looking for a solution to (<ref>) in the form η(φ)=t(η_1+tη_2+t^2η_3+…) =t(cos(τ_*φ)+t(a_0+a_1cos(2τ_*φ))+t^2(a_2cos(τ_*φ)+...)+⋯) and λ=1+λ_2t^2+⋯, where τ_* is the root of the equation ττ =θ. Using that H_0(u)=θ^2/2(3+5θ u)+O(u^2) H_1(u,v)=1/2(1-θ u)+O(u^2+v^2) we can solve (<ref>) asymptotically (θ I- N)η_2=1/2η_1φ^2-3θ^2/2η_1^2 and (θ I-N)η_3-λ_2Nη_1=1/22η_1φη_2φ-3θ^2/22η_1η_2- θ^2/25θη_1^3-1/2θη_1η_1φ^2. From (<ref>) it follows (θ-1)a_0=τ_*^2-3θ^2/4, (ν(2τ_*)-θ)a_1=3θ^2+τ_*^2/4. Using the relations cos Acos B=1/2(cos(A+B)+cos(A-B)), sin Asin B=1/2(cos(A-B)-cos(A+B)), and equating in (<ref>) coefficients in cos(τ_*φ), we obtain -λ_2ν(τ_*)=τ_*^2a_1-3θ^2a_0-3θ^2/2a_1-15θ^3/8-τ_*^2θ/8=:f(θ). §.§ Sign of λ_2 Since ν(τ_*)=θ we have τ_*<θ. One can check that the function ν(ξ) is convex and hence θ=ν(τ_*)<1/2(1+ν(2τ_*)) θ-1<ν(2τ_*)-θ. In the case θ≫ 1 we have τ_*≈θ, a_0≈-θ/2, a_1≈θ, ν(τ_*)=θ, ν(2τ_*)≈ 2τ_*, and hence λ_2≈θ^2. If we assume that θ=1+ϵ where ϵ is a small positive number then we get ν(τ)=1+τ^2/2+⋯, τ_*=√(2ϵ), ν(2τ_*)=1+4ϵ, a_0=-3/4ϵ, a_1=3/16ϵ and λ_2=-9/4ϵ7/8. Evaluating the root θ_0 of the equation f(θ)=0 we get θ_0≈ 2.479. Therefore if θ∈ (1,θ_0 then Λ_2>0. According to (<ref>) and (<ref>), we conclude that μ_2>0 . §.§ Upper estimates of the Froude number The following relation connected d_-and d_+ can be found in Sect. 2.1, <cit.> (see the formula (14) there): d_+/d_-=1+√(1+8d_-^3)/4d_-^3, which implies d_+=1+√(1+8d_-^3)/4d_-^2. 
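The relations just stated between d_-, d_+, the Froude number F = d_-^{-3/2}, and θ = d_+³ can be checked numerically. The short Python sketch below (illustrative only) computes d_+ and θ from a given F and verifies that both depths satisfy 1/d² + 2d = 2R for the same Bernoulli constant; for F ≈ 1.511 it returns θ ≈ 2.479, consistent with the root θ_0 of f(θ) = 0 evaluated above.

```python
import numpy as np

def theta_from_froude(F):
    """Map F = d_-^(-3/2) to theta = d_+^3 via
    d_+ = (1 + sqrt(1 + 8 d_-^3)) / (4 d_-^2)."""
    d_minus = F**(-2.0 / 3.0)
    d_plus = (1.0 + np.sqrt(1.0 + 8.0 * d_minus**3)) / (4.0 * d_minus**2)

    # Both depths must solve 1/d^2 + 2d = 2R for the same Bernoulli constant R.
    R = 0.5 / d_minus**2 + d_minus
    assert abs(0.5 / d_plus**2 + d_plus - R) < 1e-9
    return d_plus**3

print(theta_from_froude(1.511))   # ~2.479, matching theta_0 for F_0 ~ 1.511
```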
A necessary condition for existence of solitary wave is the lower estimate F>1, Therefore the depth d=d_- corresponds to solitary waves and the corresponding Froude number is F=d_-^-3/2. The best known upper estimate for the Froude number can be derived from <cit.> as it is explained in Introduction of <cit.> and it is given by (<ref>). Since the function x→1+√(1+8x^3)/4x^2 is strongly decreasing we conclude that the condition F^2=d_-^-3>2 which implies non-existence of solitary waves, is equivalent to θ=d_+^3>1,745. Another numerical estimate F<1,29 is obtained in <cit.>. Both these estimates together with (<ref>) lead to relations (<ref>) and (<ref>). §.§ On the validity of Assumption A As before we assume here that ω=0. Consider the branch (<ref>) of Stokes waves which starts from a uniform stream solution. According to <cit.> the limit behaviour of this branch is reduced to one of the following options: the branch approches a solitary wave or it approches an extreme wave. If we assume that F>√(2) then the first option is impossible due to the estimate (<ref>). Therefore in this case the bransh is approaching an extreme wave, which has the angle 120^∘ at the crest. By <cit.> and Theorem 3.1, <cit.> the number of negative eigenvalues of the Frechet derivative becomes more and more when t approaches infinity. As a result we arrive at (<ref>). Similarly if the numerical estimate F<1,29 is excepted then we arrive at the interval (<ref>) where the Assumption A is valid. Certainly both conditions (<ref>) and (<ref>) are sufficient for the validity of the Assumption and this problem requires further research. § ACKNOWLEDGMENTS I want to thank M. Wheeler for fruitful discussions on the estimates of the Froude number. § REFERENCES 20 Am J Amick, Bounds for water waves, rch. Ration. Mech. Anal., 99, pp. 91–114 1987. T2 CJ Amick, LE Fraenkel, JF Toland, On the Stokes conjecture for the wave of extreme form, Acta Mathematica 148 (1), 1982. BS J.G.B. Byatt-Smith, An exact integral equation for steady surface waves, Proc. Roy. Soc. Lond. A 315 (1970) 405–418. BDT1 B Buffoni, EN Dancer, JF Toland, The Regularity and Local Bifurcation of Steady Periodic Water Waves, Archive for rational mechanics and analysis 152 (3), 207-240, 2000. BDT2 B Buffoni, EN Dancer, JF Toland, The sub-harmonic bifurcation of Stokes waves, Archive for rational mechanics and analysis 152 (3), 241-271, 2000. Che Chen, B. and Saffman, P.G. Numerical evidence for the existence of new types of gravity waves on deep water. Stud. Appl. Math. 62, 1980. CSst A Constantin, W Strauss, Exact steady periodic water waves with vorticity, Communications on Pure and Applied Mathematics 57 (4), 481-527, 2004. HVB83 J. K. Hunter and Jean-Marc Vanden-Broeck. Accurate computations for steep solitary waves. Journal of fluid Mechanics, 136:63–71, 1983. KP74 G. Keady and W. G. Pritchard. Bounds for surface solitary waves. Proc. Cambridge Philos. Soc., 76:345–358, 1974. Koz1 V. Kozlov, The subharmonic bifurcation of Stokes waves on vorticity flow, JDE, 2023, arXiv:2204.10699. Koz1a V.Kozlov, On first subharmonic bifurcations in a branch of Stokes waves, arXiv:2303.11440, 2023. KN2008 V Kozlov, N Kuznetsov, On behaviour of free-surface profiles for bounded steady water waves, Journal de mathématiques pures et appliquées 90 (1), 1-14, 2008. KN14 V Kozlov, N Kuznetsov, Dispersion equation for water waves with vorticity and Stokes waves on flows with counter-currents, Archive for Rational Mechanics and Analysis 214 (3), 971-1018, 2014. 
KN11a V Kozlov, N Kuznetsov, The Benjamin–Lighthill conjecture for near-critical values of Bernoulli’s constant, Archive for rational mechanics and analysis 197, 433-488, 2010. KL1 V Kozlov, E Lokharu, Global bifurcation and highest waves on water of finite depth, arXiv preprint arXiv:2010.14156, 2020. KL2 V Kozlov, E Lokharu, On negative eigenvalues of the spectral problem for water waves of highest amplitude, Journal of Differential Equations, 342, 239-281, 2023. KL3 V Kozlov, E Lokharu, On Rotational Waves of Limit Amplitude, Functional Analysis and Its Applications 55 (2), 165-169, 2021. KLW V Kozlov, E Lokharu, MH Wheeler, Nonexistence of subcritical solitary waves, Archive for Rational Mechanics and Analysis, 241 (1), 535-552, 2021. LHF74 M. S. Longuet-Higgins and J. D. Fenton. On the mass, momentum, energy and circulation of a solitary wave. II. Proc. Roy. Soc. (London) Ser. A, 340:471–493, 1974. McL J. B. McLeod, The Stokes and Krasovskii conjectures for the wave of greatest height, Studies in Applied Mathematics, 98 (1997), pp. 311-333. Mil80 John W. Miles. Solitary waves. In Annual review of fluid mechanics, Vol. 12, pages 11–43. Annual Reviews, Palo Alto, Calif., 1980. P2 PI Plotnikov, A proof of the Stokes conjecture in the theory of surface waves, Studies in Applied Mathematics, 108 (2), 2002. Sa Saffman, P.G. Long wavelength bifurcation of gravity waves on deep water J. Fluid Mech. 101, 1980. Star Victor P. Starr. Momentum and energy integrals for gravity waves of finite height. J. Mar. Res., 6:175– 193, 1947. arXiv preprint arXiv:2204.10071. VW1 E Varvaruca, GS Weiss, A geometric approach to generalized Stokes conjectures, Acta mathematica 206 (2), 363-403, 2011. We M. Wheeler, The Froude number for solitary water waves with vorticity, Journal of Fluid Mechanics 768, 91-112, 2015. §.§ Small τ_* Here we assume that 0<t≪τ_*≪ 1. The the relation (<ref>) for finding v_1 has the form Av_1+3/2(v_0p^2/H_p^4)_p=O(τ_*^2) , Bv_1+3/2v_0p^2/H_p^4=O(τ_*^2) , v_1=0 . We are looking for the solution in the form v_1(q,p)=a_1(p)cos^2(τ_*q). Then a_1 satisfies the equation a_1p=3/2α_0p^2/H_p+H_p^3c_1, where c_1 is a constant. Integrating this relation from 0 to p we get a_1(p)=∫_0^p(3/2α_0p^2/H_s+H_s^3c_1)ds. From the boundry condition for p=1 we get a_1(1)-c_1=0. Therefore c_1(1-∫_0^1H_s^3ds)=∫_0^13/2α_0p^2/H_sds. Since F<1 and the left hand side is equal to c_1(1-F^-2) the coefficient c_1 is negative and c_1=(1-F^-2)^-1∫_0^13/2α_0p^2/H_sds. Now we turn to the next term v_2. The problem for v_2 is the following Av_2+(3v_0pv_1p/H_p^4+𝒥_3(v_0))_p=2λ_2(v_0q/H_p)_q+O(τ_*^2) , Bv_2+3v_0pv_1p/H_p^4+𝒥_3(v_0)=O(τ_*^2) v_2(q,0)=0. It is solvable if 2λ_2τ_*^2∫_Q_pv_0^2/H_pdqdp=∫_Q_p(3v_0pv_1p/H_p^4+𝒥_3(v_0))v_0pdqdp. Since ??? we have 2λ_2τ_*^2∫_0^1α_0^2/H_pdp=-c_1∫_0^13α_0p^2/H_pdp+O(1).
http://arxiv.org/abs/2307.07549v1
20230714180003
Halo Properties from Observable Measures of Environment: I. Halo and Subhalo Masses
[ "Haley Bowden", "Peter Behroozi", "Andrew Hearin" ]
astro-ph.GA
[ "astro-ph.GA" ]
Halo Masses]Halo Properties from Observable Measures of Environment: I. Halo and Subhalo Masses ^1Department of Astronomy and Steward Observatory, University of Arizona, Tucson, AZ 85721, USA ^2Division of Science, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan ^3High-Energy Physics Division, Argonne National Laboratory, Argonne, IL 60439, USA The stellar mass – halo mass relation provides a strong basis for connecting galaxies to their host dark matter halos in both simulations and observations. Other observable information, such as the density of the local environment, can place further constraints on a given halo’s properties. In this paper, we test how the peak masses of dark matter halos and subhalos correlate with observationally-accessible environment measures, using a neural network to extract as much information from the environment as possible. For high mass halos (peak mass >10^12.5), the information on halo mass contained in stellar mass–selected galaxy samples is confined to the ∼ 1 Mpc region surrounding the halo center. Below this mass threshold, nearly the entirety of the information on halo mass is contained in the galaxy's own stellar mass instead of the neighboring galaxy distribution. The overall root-mean-squared error of the best-performing network was 0.20 dex. When applied to only the central halos within the test data, the same network had an error of 0.17 dex. Our findings suggest that, for the purposes of halo mass inference, both distances to the kth nearest neighbor and counts in cells of neighbors in a fixed aperture are similarly effective measurements of the local environment. § INTRODUCTION In the Lambda Cold Dark Matter (ΛCDM) paradigm, all galaxies form at the centers of dark matter halos. Structure formation within this framework proceeds hierarchically, such that small halos merge into larger ones, becoming satellites of the larger central halo. A galaxy's formation and evolution are tied to the central or satellite halo in which it lives (see for a review). This connection has fueled interest in learning about halos to better understand the physics of galaxy formation and conversely to use observations of galaxies to constrain the structure and abundance of dark matter. The galaxy-halo connection persists across a wide range of scales from small halos hosting dwarf galaxies (≲ 10^11) to massive halos hosting galaxy clusters (≳ 10^14). Across these mass scales, accurate measurements of halo masses have many potential applications. At the low-mass end, host halo masses are vital for understanding the physical processes that govern the formation and evolution of galaxies. The halo mass is directly linked to the amount of baryonic matter available for star formation, and it is also a primary driver of the growth and feedback processes that shape the galaxy <cit.>. In particular, there is a well-studied relationship between stellar mass and host halo mass, known as the stellar mass – halo mass relation (SHMR; see e.g., and references therein). The exact mapping between stellar mass and halo mass, as well as the amount of scatter in this relationship, tells us about the efficiency of the gas-to-stars conversion processes (). Hence, investigations into the masses of halos hosting dwarf galaxies are typically aimed at determining: 1) the halo mass at which galaxies start forming, and/or 2) the scatter in the SHMR for dwarf galaxies. 
Overall, more widespread access to halo masses will lead to an improved understanding of the physics involved and the creation of galaxy models that more accurately correspond to the real Universe. At slightly higher halo masses, accurate measurements are useful in interpreting observations. For example, in studies aiming to probe the contents or extent of the circumgalactic medium (e.g., ), knowledge of the halo mass is important for understanding the size of the virialized region around the galaxy. The gas surrounding the target galaxy is often probed by observing a background quasar or galaxy (see for a review). In these cases, accurate measurement of a foreground halo's mass is essential for estimating the halo radius, and thus determining whether a given sight line is probing the circumgalactic medium inside the dark matter halo or the intergalactic medium outside the halo <cit.>. In this way, the dark matter halo mass gives us a way of interpreting observations that would otherwise be ambiguous. For studies like these that use the halo mass to understand galaxy formation, ∼0.3 dex uncertainties on observed stellar masses (see e.g., ) mean that measuring halo masses more accurately than ∼ 0.2 dex typically does not lead to significantly tighter constraints on galaxy formation. Halo masses, particularly on the scale of galaxy groups and clusters, also carry information about cosmological parameters and large-scale structure <cit.>. Hence, there is widespread interest in measuring halo masses for clusters to determine the total number of halos above a given mass and thereby constrain the matter density in the universe, as well as the normalization of the power spectrum (). Cosmology applications typically require as accurate constraints as possible on halo masses to achieve the best constraints on cosmological parameters. Despite their relevance across several fields, obtaining accurate host-halo masses remains a challenge across the entire mass spectrum. Additionally, as there are distinct goals relevant to the different mass regimes, many previous techniques have been optimized for specific subsets of halo mass. Over the past two decades, numerous studies have connected galaxy properties with host halo mass using empirical modeling techniques (e.g., halo occupation distributions and empirical modeling). These techniques use abundance and clustering data to match galaxies to halos and subhalos (see and references therein). Halo abundance matching can be highly effective <cit.>, but is limited by the intrinsic scatter between the matched galaxy property and halo mass. More recent efforts (e.g., ) have sought to eliminate this issue by empirically modeling the evolving connection between galaxies and halos over cosmic time. This is the only existing technique that can be applied to satellite halos as well as central halos. For massive halos, particularly on galaxy cluster scales, there are three major alternative approaches for measuring individual halo masses. One approach relies on satellite kinematics from spectroscopic surveys (e.g., ), where satellite positions and velocities are used to determine which galaxies are satellites and which are unbound, thereby determining the extent of the halo. This technique tends to work down to halo masses of ∼ 10^12, below which there are too few observable satellite galaxies per halo in extragalactic surveys. 
Alternatively, satellite number counts (i.e., ‘richness’) can be estimated from photometry using tools such as redMaPPer <cit.> to identify clusters. <cit.> and <cit.> provide an overview of these approaches, finding that dynamical and galaxy-based techniques can provide accurate measurements of halo mass to a factor of ∼ 2 in the M > 10^14 regime, with significantly larger errors at lower halo masses. The third technique estimates halo masses using evidence of a hot halo through X-ray measurements () or the Sunyaev-Zeldovich effect (). Like the satellite-based approaches, this technique is less effective at lower halo masses. Machine learning (ML) techniques provide a way to circumvent the limitations of traditional techniques and incorporate high-dimensional data to develop models of a wide variety of physical phenomena. Over the last decade, ML methods have been used to extract information from observations (or simulated observations) to enhance halo mass estimates. On the massive cluster scale, several studies have used ML to measure halo masses using dynamical and/or X-ray data (). Other ML studies have incorporated different types of observables, such as photometric, structural, and kinematic data of the hosted galaxy <cit.> or a diverse set of galaxy and group features <cit.>, finding an improvement in accuracy over traditional halo abundance matching and dynamical mass estimates when applied to simulated datasets. Previous local-environment-based methods have tended to consider information that is difficult to retrieve fully from observations, with, for example, <cit.> using all nearby galaxies without imposing a stellar mass limit or <cit.> using 3D distances between galaxies. In this paper, we limit our data to realistically observable information by restricting the stellar mass range of our sample as well as the available galaxy properties and position information. Our major sources of information on halo properties are based on two standard environmental measures in observational work. Simulations suggest that the local environment contains information about halo properties (). However, there is no standard environmental indicator, as some have been found to have advantages and disadvantages for different research goals and different sets of observational data. This paper focuses on two popular methods for probing the density of galaxies: 1) the distance to the kth nearest neighbor (kNN), and 2) counts of neighbors within a fixed aperture (see for a review of both techniques). We compare the effectiveness of the two separate probes over different halo mass regimes to better understand the environmental information provided by each. Many popular environmental measures (e.g., the two-point correlation function) are functions of the distances to the k nearest neighbors. These distances are usually defined as the projected distances (i.e., the 2D comoving separations) to neighboring galaxies within a redshift separation of typically ≲ 1000 km s^-1. Neighbor distances have been used by a number of studies (e.g., ). <cit.> explored the effectiveness of different nearest neighbors-based statistics and found nearest neighbors to be an effective probe of the local environment, with fixed aperture methods being more effective at measuring the large-scale environment. However, previous studies have focused on small values of k (<10), while more distant neighbors still potentially contain information about the target halo. 
With ML techniques, it is simple to retain a substantial number of neighbors and search for an approximate optimal mass estimator over the large resultant parameter space. Yet, even when retaining a large number of neighbors, this probe might break down on cluster mass scales, where we expect many satellites. In these cases, the nearest neighbors probe could inadequately probe the full extent of the halo (e.g., if the number of satellites is ≫ 50) or fail to separate close satellites from other neighbors given limited redshift information. The second method probes a fixed length scale for all galaxies, rather than probing a length scale dependent on the density of the environment. This is usually done by defining a cylinder around a target object with fixed projected distance (∼ 0.5 – 5 h^-1Mpc) apertures (e.g., ) or annuli <cit.>, within a certain redshift offset (ranging from 500 to 6000 km s^-1). This method is based on the correlation between richness and halo mass (e.g., and ), which suggests that counts of galaxies in cylinders should scale with halo mass, particularly at cluster mass scales. Large-scale bias is also expected to scale with halo mass at high masses (see ). Given these scaling relationships, we expect that cylinder counts would perform best for high-mass halos (M > 10^13). The goals of this paper series are to extract the relevant information from these environmental measures to provide estimates of halo properties (including mass, concentration, and assembly history), analyze the correlations between these properties and galaxy properties, and determine the observational metrics that are most sensitive to a given halo property. The focus of this paper is on halo and subhalo masses and is organized as follows. In Section <ref>, we discuss the simulated halo and galaxy properties used. Section <ref> gives an overview of the sources of environmental information (Section <ref>), sample statistics (Section <ref>), and the methods by which we develop and train a neural network (Sections <ref> and <ref>). We evaluate the performance of the trained networks in Section <ref>. In Section <ref>, we summarize our results and discuss future applications of this technique. Throughout, we adopt a standard ΛCDM cosmology with (h, Ω_m, σ_8, n_s) = (0.678, 0.307, 0.823, 0.96). § DATA §.§ Overview To estimate halo mass, we used galaxy properties that are both observable and confidently simulated, including 1) projected distances to the target galaxy's neighbors within bins in redshift space, and 2) cumulative number densities of the target galaxy's stellar mass and the stellar masses of its neighbors. Simulated halo properties are from the Small MultiDark Planck (SMDPL) and Bolshoi-Planck cosmological simulations (Section <ref>; ). Individual galaxy properties were assigned to halos using the UniverseMachine empirical model (Section <ref>; ). §.§ Halo Properties The neural network was trained on z=0 halo properties extracted from the SMDPL simulation <cit.>, which has a periodic (400 h^-1Mpc)^3 volume and 3840^3 particles, corresponding to a mass resolution of 9.63 × 10^7 h^-1 per particle and a force resolution of 1.5 h^-1kpc. This simulation adopts a flat ΛCDM cosmology with (h, Ω_m, σ_8, n_s) = (0.678, 0.307, 0.823, 0.96), consistent with the most recent Planck results <cit.>. We assume the same cosmology throughout this work. Halo finding was conducted using Rockstar <cit.> and merger trees were constructed with the ConsistentTrees code <cit.>. 
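As a quick arithmetic check of the quoted resolution (ours, not from the paper), the particle mass follows from the mean matter density and the box volume, m_p = Ω_m ρ_crit L³ / N_p, with ρ_crit ≈ 2.775 × 10^11 h² M_⊙ Mpc^-3:

```python
# Particle-mass check for SMDPL: m_p = Omega_m * rho_crit * L^3 / N_p.
# The h factors cancel so that m_p comes out in h^-1 Msun.
rho_crit = 2.775e11          # h^2 Msun / Mpc^3
omega_m = 0.307
box_length = 400.0           # h^-1 Mpc
n_particles = 3840**3

m_p = omega_m * rho_crit * box_length**3 / n_particles
print(f"{m_p:.3e} h^-1 Msun")   # ~9.6e7, matching the quoted 9.63e7 h^-1 Msun
```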
Halo masses were defined using the <cit.> virial spherical overdensity criterion (ρ_vir). Throughout, we consider peak halo mass (M_p), defined as the maximum mass of the halo across all prior snapshots, rather than the current halo mass (at the time of the snapshot) as it is more closely linked to stellar mass <cit.>. It is essential to test the neural network's performance on distinct data from the training sample. For this purpose, we used the smaller Bolshoi-Planck dark matter simulation box <cit.>, which has a periodic (250 h^-1Mpc)^3 co-moving volume with 2048^3 particles, corresponding to a mass resolution of 1.55 × 10^8 h^-1 per particle and a force resolution of 1.0 h^-1kpc. The simulation uses a similar cosmology to the SMDPL simulation with (h, Ω_m, σ_8, n_s) = (0.68, 0.30711, 0.82, 0.96). Given its relatively small size, the Bolshoi-Planck box contains a limited sample of high-mass halos (fewer than 1000 with peak halo mass M_p > 10^14). By using the SMDPL simulation as the training sample, we ensured that the network has a sufficient number of high-mass halos on which to train (more than 3000 with M_p > 10^14). §.§ Galaxy Properties The dark matter halos were populated with galaxies using UniverseMachine <cit.>. UniverseMachine is an empirical model that uses a Markov Chain Monte Carlo algorithm to constrain how galaxy star formation rates depend on halo mass, halo growth rates, and cosmic time. The algorithm constrains the galaxy–halo relationship by requiring the overall population to match observations of: 1) the stellar mass function (z∼ 0-4), 2) cosmic star formation rates (z∼ 0-10), 3) specific star formation rates (z∼ 0-8), 4) UV luminosity functions (z∼ 4-10), 5) quenched fractions (z∼ 0-4), 6) median UV-stellar mass relations (z∼4-10), 7) correlation functions for quenched and star-forming galaxies (z∼ 0-1), and 8) the dependence of the quenched fraction on environment (z∼ 0). Appendix C in <cit.> contains the full references for these observational constraints. We selected one snapshot at z=0 from the UniverseMachine mock galaxy-catalog for each of the Bolshoi-Planck and SMDPL boxes. From these snapshots, we extracted galaxy positions, velocities, and observed stellar masses. UniverseMachine models both true and observed stellar masses. The observed stellar mass values were adjusted from true stellar masses, taking into account systematic offsets between true and observed stellar masses as well as the random scatter in observed stellar masses <cit.>. Observed galaxy stellar masses from the UniverseMachine were converted to cumulative number densities as measurements of observed stellar masses are model and calibration dependent. Stellar masses are primarily derived from light using spectral energy distribution fitting, which depends on assumptions such as the star formation history of the galaxy and the relevant dust attenuation law. Different model assumptions produce inconsistent stellar masses <cit.>. Translating to cumulative number densities instead of stellar masses removes the largest systematic offsets in stellar mass between different models, and hence allows a neural network trained on one model to more easily be applied to other models. While UniverseMachine provides star formation rates, the relationship between star formation rates and halo properties is not as robustly established as that for stellar masses, and so we take a conservative approach by not using star formation rates as inputs here. 
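The conversion from observed stellar masses to cumulative number densities is essentially a rank statistic. Below is a minimal sketch of one way it could be implemented for a periodic box of side length L in h^-1 Mpc; the function name, the choice to express the result as a comoving number density n(>M_*), and the neglect of ties are our illustrative choices rather than a prescription from the paper.

```python
import numpy as np

def cumulative_number_density(log_mstar, box_length):
    """Convert observed stellar masses (log10 Msun) into cumulative comoving
    number densities n(> M_*) in (h^-1 Mpc)^-3 for a periodic box."""
    volume = box_length**3                            # (h^-1 Mpc)^3
    order = np.argsort(-log_mstar)                    # most massive first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(log_mstar) + 1)   # rank 1 = most massive
    return ranks / volume                             # ties ignored for simplicity

# Example: five galaxies in a 250 h^-1 Mpc box.
log_mstar = np.array([9.2, 11.0, 10.1, 9.8, 10.5])
print(cumulative_number_density(log_mstar, box_length=250.0))
```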
§ METHODS In Section <ref>, we discuss how the local galaxy environment is defined. Section <ref> covers the statistics of the galaxy and halo populations considered. We then preprocess the data in Section <ref>. Section <ref> describes the general architecture of the neural networks and the training process. We define our galaxy sample as all galaxies within the simulation box with M_* > 10^9, corresponding typically to halos with M_p ≳ 10^11.5. This includes both central and satellite galaxies, as the two categories cannot be perfectly separated using observations. §.§ Sources of Environmental Information The input layer of each network was composed of the stellar mass of the target object concatenated with a data vector of environmental information. The following sections describe how these data vectors were defined. §.§.§ Distances to Nearest Neighbors We searched for the fifty closest neighboring galaxies with a redshift offset of <1000 km s^-1 and did not consider galaxies outside this cut as potential neighbors. This cut eliminated neighbors with a high redshift separation from the target galaxy without excluding the majority of a galaxy's potential satellites from consideration. A 1000 km s^-1 cut corresponds to the virial velocity of clusters with M_p∼ 10^14. Some galaxies within a cluster may be excluded with this cut, but including galaxies at a larger velocity separation has the potential to introduce more noise from projection effects for neighboring galaxy distances at the low-mass end. We searched for the nearest neighbors to each galaxy within these redshift cuts, where the projected distance to a neighbor is measured in the x-y plane. We imposed a stellar mass cut on neighbors such that a neighbor must have a stellar mass no less than 1) 1.5 dex below that of the galaxy under consideration, or 2) 10^9, whichever is highest. For each galaxy, we considered the fifty nearest neighbors that met those criteria, covering projected distances up to ∼11h^-1Mpc in the case of the most isolated galaxies. This information was given to the neural network as a vector of stellar masses and projected separations from the target. We refer to these separations as the distance to the kth nearest neighbor, where k is the rank of the neighbor in separation from the target. In addition, we considered the redshift separations between the target and its neighbors as potential inputs. However, no significant changes in network performance were noted with these additional inputs, and the additional inputs resulted in increased network training times. Thus, we excluded this data from all further analyses. §.§.§ Counts in Cylinders Counts in cylinders are an environmental measure with a fixed spatial scale, unlike the kth nearest neighbors measure, which covers an area dependent on the local density of galaxies. To measure counts in cylinders, we selected circular apertures with radii of 0.5 h^-1Mpc, 1 h^-1Mpc, 2 h^-1Mpc, and 5 h^-1Mpc. These values are spaced in ∼ 0.3 dex intervals in radius, corresponding to changes in virial radius associated with ∼ 1 dex intervals in halo mass. These aperture sizes were selected to provide sensitivity to a wide range of halo masses. We retained the same stellar mass cuts as in the kth nearest neighbors case. A search was then performed to find the number of neighboring galaxies within each cylinder. Once this was complete, the neighbors were further split into bins by absolute redshift separation from the target. 
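A hedged sketch of how the nearest-neighbor inputs described above could be assembled is given below (brute force for clarity; the array names are illustrative and periodic boundaries are ignored). It applies the |Δv| < 1000 km s^-1 cut, the stellar-mass floor of max(M_*,target − 1.5 dex, 10^9 M_⊙), and keeps the 50 smallest projected x–y separations; a real pipeline would presumably use a spatial tree for speed.

```python
import numpy as np

def knn_features(idx, x, y, v_los, log_mstar, k=50, v_cut=1000.0):
    """Projected distances and stellar masses of the k nearest neighbors of
    galaxy `idx`. v_los is a line-of-sight separation proxy in km/s."""
    mass_floor = max(log_mstar[idx] - 1.5, 9.0)          # log10 Msun
    keep = (np.abs(v_los - v_los[idx]) < v_cut) & (log_mstar >= mass_floor)
    keep[idx] = False                                    # exclude the target itself

    d_proj = np.hypot(x[keep] - x[idx], y[keep] - y[idx])
    order = np.argsort(d_proj)[:k]
    return d_proj[order], log_mstar[keep][order]         # network inputs
```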
Bin widths are |Δ z| = 250 km s^-1 each and together cover separations of up to |Δ z| = 2000 km s^-1. Splitting the data into narrower bins provides information to better exclude sources at larger redshift separations that have low projected distances from the target galaxy. The bin spacing of 250 km s^-1 was chosen to retain this redshift separation information while also reducing the Poisson noise that would result from using narrower bins and avoiding the additional complexity of a neural network with significantly more velocity bins. §.§ Sample Statistics We selected 2,877,669 objects from the SMDPL simulation box that met our sample criteria, including both central and satellite halos. The distributions of the peak halo masses and z=0 stellar masses of these galaxies are shown in Figure <ref>. The galaxies in the SMDPL box make up the training and validation data sets for the neural networks. The Bolshoi-Planck simulation box contains similar halo and galaxy mass distributions to SMDPL. From this box, we selected 695,554 galaxies meeting the stellar mass cutoff criterion, which is approximately 25% of the size of the SMDPL dataset. The objects in the Bolshoi-Planck box make up the test data set. §.§ Pre-processing Before neural network training, we normalized and scaled the inputs (commonly known as features) and outputs (labels) of the network. This is essential as neural networks can be sensitive to the scale of data, and having input and output data covering several orders of magnitude in scale can result in poor model performance <cit.>. Each property was individually standardized to have a mean of zero and a standard deviation of one. For stellar and halo masses, this was performed on a base-ten logarithmic scale, while the distances to the fifty nearest neighbors and the counts in cylinders were scaled linearly. In addition, we wanted the network to prioritize halos at the extremes of halo mass. The vast majority (98%) of SMDPL halos had peak masses between 10^10.5 M_⊙ and 10^13 M_⊙. Less than 0.05% of halos fell below this mass range, while 1.9% had peak masses above this range and only 0.12% had peak masses of 10^14 M_⊙ or above. Due to this bias in halo number density when separated by mass bin, the default fitting procedure prioritizes typical-mass halos. To counteract this effect, and incentivize the network to also fit the halos in the less populated mass bins, we tested weighting the data conditionally by halo mass. To weight the data, we first separated the training and validation data into 25 bins by halo mass. We then found the number of objects within these bins. These count values were assigned to the median halo mass in each bin, and a linear interpolation from these counts was used to calculate an approximate normalized halo mass function n(M_halo), the number density of halos at a given halo mass in the simulation. The weight (W) assigned to a given halo is defined according to: W(M_halo) = 1/√(n(M_halo)). In the following sections, we consider a network trained on an unweighted dataset and one trained on a sample weighted as in Eq. <ref> to prioritize objects with extreme halo masses. §.§ The Neural Network Using a neural network, we created an approximate mapping between the inputs (observable data about a galaxy) and outputs (the peak mass of the host halo).
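A minimal sketch of the W = 1/√(n) weighting described above, under the assumption that a simple 1D linear interpolation of the bin counts anchored at the bin medians (e.g., via numpy.interp) matches the intent; the bin count of 25 is taken from the text, and the overall normalization of n only rescales the weights.

```python
import numpy as np

def halo_mass_weights(log_mpeak, n_bins=25):
    """Per-halo weights W = 1/sqrt(n(M_halo)): bin the log10 peak masses,
    anchor each bin's count at its median mass, and interpolate to every halo."""
    edges = np.linspace(log_mpeak.min(), log_mpeak.max(), n_bins + 1)
    counts, _ = np.histogram(log_mpeak, bins=edges)
    which = np.clip(np.digitize(log_mpeak, edges) - 1, 0, n_bins - 1)

    med, cts = [], []
    for i in range(n_bins):
        in_bin = which == i
        if in_bin.any():                       # skip empty bins
            med.append(np.median(log_mpeak[in_bin]))
            cts.append(counts[i])

    n_of_m = np.interp(log_mpeak, np.array(med), np.array(cts))
    return 1.0 / np.sqrt(n_of_m)
```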
We used supervised learning to train the network, i.e., the network is iteratively trained to minimize the error of output predictions by using the provided input-output pairs to adjust its internal weights. The trained network then acts as a function that takes data from outside the training sample as input and can make new predictions based on that data. We developed networks based on three major input types consisting of a fixed length vector of 1) the distances and stellar masses of the target galaxy’s fifty nearest neighbors, 2) counts of galaxies in cylindrical apertures around the target galaxy, or 3) the combination of the previous inputs. Each network also takes the stellar mass of the target galaxy as an additional input. The three networks are all designed to predict the halo mass of the target galaxy’s host halo. We used Keras <cit.>, a popular open-source machine learning library, with a Tensorflow backend <cit.>, to construct and train our neural network. Each network uses a fully-connected network structure, which means every node in a layer receives input from all the nodes in the previous layer. A non-linear activation function is applied to each node after weighting. The nonlinearity of the network and the large number of flexible weights allow the network to learn complex relationships between the input data and the target outputs. For training, the SMDPL box was split in two along the x-axis, with 70% of the volume (2,048,724 galaxies) making up the training data set and the remaining 30% (828,945 galaxies) reserved for model validation. The network was provided with fully-labeled data (i.e., including true values for outputs in addition to inputs) during the training stage. The validation data is used to evaluate the performance of the network during the training process to prevent it from overfitting the training set (see ). The test set, composed of galaxies from the Bolshoi-Planck box, was not viewed by the network during training. Instead, this data was set aside to evaluate the performance of the final trained networks on data not seen before. In addition to the numerous trainable parameters, there are several non-trainable parameters that describe the model architecture, design, and training. We ran a search comparing model performances with different sets of architectures and hyperparameters. The parameters considered are described in the remainder of this section and summarized in Table <ref>. Model parameters were chosen primarily based on their accuracy on the validation dataset. When different models obtained similar performances, we favored the models with fewer trainable weights and shorter training times. The values chosen were based on optimization tests with weighted data, but are consistent across weighted and unweighted networks. A standard fully-connected network consists of an input layer of nodes, followed by several hidden layers connecting the inputs to the output layer. We considered networks with depth of 4, 8, or 12 hidden layers. We chose to tie the number of nodes per hidden layer to the input size with a strictly decreasing number of nodes in each successive layer. Successive layers have × f the number of the nodes of the previous layer (rounded to the nearest whole number), where we considered a fiducial value of f=0.8. The strategy of decreasing the number of nodes in successive layers was designed to allow the network to discard unhelpful information in the input data and only carry important or reduced information forward. 
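To make the narrowing scheme concrete, a minimal sketch of the width calculation is given below. It assumes (one possible reading of the description above) that the first hidden layer matches the input size; the function and variable names are illustrative rather than taken from our analysis code.

```python
def hidden_layer_widths(input_dim, n_hidden=4, f=0.8):
    """Number of nodes in each successive hidden layer: the first layer is
    tied to the input size, and each later layer has f times the nodes of
    the previous one, rounded to the nearest whole number."""
    widths = [input_dim]
    for _ in range(n_hidden - 1):
        widths.append(max(1, round(widths[-1] * f)))
    return widths

# kNN-style input: 50 neighbor distances + 50 neighbor stellar masses
# + the stellar mass of the target galaxy = 101 features.
print(hidden_layer_widths(101))  # [101, 81, 65, 52]
```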
Even given the same number of hidden layers and value for f, the structure (and the number of free parameters) varies across the three different inputs we considered based on the size of the input information. For the kNN networks, the input vector has a length of 101, including the distances to the 50 neighbors, their stellar masses, and the stellar mass of the target galaxy itself. On the other hand, the cylinder counts networks take an input of length 33 to include counts in all bins and the stellar mass of the target galaxy. The combination network, which uses the information from both environmental measures, takes an input vector of 134 values. In addition, as the stellar mass of the target object is known to be highly related to the target halo mass, we consider introducing an additional skip connection between the stellar mass input node and the layer directly before the output layer for the deeper networks (8 and 12 layers) to ensure the information contained in the stellar mass input is not discarded before the final layer. Varying the number of hidden layers and the number of units per layer within the values considered led to little variance in the model's performance on the validation dataset (<5% change in MSE) for all three inputs. The exception to this was a small subset of models, consisting mainly of deeper networks with no additional skip connection between stellar mass and output, for which the training process diverged or otherwise failed to improve upon the base SHMR. Overall, the addition of this skip connection between stellar mass and output tended to improve the performance of the deeper networks. Even with these additional connections, the deeper networks still do not show a substantial reduction in error over the shallower, 4-layer networks. Hence, we proceeded with a network structure consisting of four hidden, fully-connected layers throughout the remainder of this paper, regardless of input. Varying the base narrowing factor of f=0.8 by ± 0.1 resulted in no significant changes in network performance. Hence, we retain the fiducial value for our final networks. The networks were trained to minimize the loss of predicted halo masses. Trainable model weights were initialized from a random uniform distribution. Of the three activation functions considered (see Table <ref>), the rectified linear unit activation function (ReLU; ) provided the best performance with the fewest training epochs. Training and validation losses of regression networks are typically measured via mean squared error (MSE) or mean absolute error (MAE). In this case, we chose a Mean Absolute Error (MAE) loss function. MAE was chosen over MSE as MAE is generally more robust to outliers and is a better choice when the data is not normally distributed or has outliers. We found that networks trained to minimize MAE converged much more quickly than networks trained with MSE. The two primary optimization algorithms we considered provided similar accuracy. However, the Adam optimization algorithm () provided faster training than the classic stochastic gradient descent algorithm (SGD; ). Network performance was mostly insensitive to changes in initial learning rate and batch size within the range of parameters considered. For the final models, the initial learning rate was set to 0.001, with a training batch size of 128. This information, as well as the remaining parameters of the networks described in this paper, is summarized in Table <ref>. 
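As a schematic illustration of the final configuration described above, the following Keras sketch builds a four-layer, narrowing, fully-connected regression network with ReLU activations, uniform random weight initialization, an MAE loss, and the Adam optimizer with an initial learning rate of 0.001. As in the earlier sketch, we assume the first hidden layer matches the input size; the input length of 101 corresponds to the kNN-style network, and placeholder names such as X_train and y_train stand in for the standardized feature and label arrays.

```python
from tensorflow import keras

def build_network(input_dim, n_hidden=4, f=0.8):
    """Fully-connected regression network with successively narrowing layers."""
    inputs = keras.Input(shape=(input_dim,))
    x, width = inputs, input_dim
    for _ in range(n_hidden):
        x = keras.layers.Dense(width, activation="relu",
                               kernel_initializer="random_uniform")(x)
        width = max(1, round(width * f))
    # Single linear output node: the (standardized) log10 peak halo mass.
    outputs = keras.layers.Dense(1)(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss="mae")
    return model

model = build_network(input_dim=101)   # kNN inputs; 33 or 134 for the other networks
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           batch_size=128, epochs=50)
```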
One potential pitfall of neural networks is a lack of generalizability. Failure to train a network using a dataset representative of the intended data for the network's application may result in poor performance. This pitfall is particularly apparent if the network overfits the training set. Therefore, we allowed networks to train for up to 50 epochs but imposed an early stopping criterion to reduce the potential overfitting of the training dataset. After each epoch, the validation dataset's loss was assessed, and if there was no improvement within 10 epochs (i.e., a patience of 10), the training was halted. A warm-up period of ten epochs was implemented to prevent training from being stopped before a solution was found. The final model weights were then selected from the epoch with the best validation loss score. Figure <ref> shows the evolution of training and validation loss with epoch during the training period of the unweighted kNN network. The training for this network was stopped at 30 epochs, as the validation loss had not decreased below its value at epoch 20. In the following section, we analyze the results of networks trained with and without weights as applied to the test data. § RESULTS In this section, we compare the performance of the nearest neighbors, cylinder counts, and combined models when applied to the Bolshoi-Planck test data. We consider the performance of the models over (1) the whole dataset, (2) bins in halo mass, and (3) central and satellite galaxies separately. Errors are compared against the scatter in the SHMR as described in Section <ref>. Section <ref> describes the performance of the fifty nearest neighbors networks, while Section <ref> explores the impact of removing the data from more distant neighbors. Similarly, Sections <ref> and <ref> provide an analysis of the cylinder counts networks and the relative importance of the different features used therein. Lastly, Section <ref> covers the networks that take a combination of the two environmental measures. For each model, the loss values shown represent the root mean squared error (RMSE) in the predicted halo mass compared to the true values from the simulation unless otherwise indicated. All reported uncertainties, including error bars and shaded regions, correspond to 68% confidence intervals. §.§ Stellar Mass Alone There exists a well-constrained relationship between observed stellar mass (or, in practice, cumulative number density) and the peak mass of the host dark matter halo <cit.>. To provide a baseline prediction of halo mass from stellar mass alone, we first found the average SHMR from SMDPL (effectively, the average halo mass as a function of observed stellar mass from the UniverseMachine). We then used this average relationship to assign masses to halos in Bolshoi-Planck. The results of this method are shown in Figure <ref>. While scatter in the SHMR is often reported as the scatter in stellar mass at fixed halo mass, as we are estimating halo masses from known stellar masses, we are interested in the scatter in halo mass at fixed stellar mass. For stellar masses of 10^9 - 10^10 (corresponding on average to halo masses 10^10.5-10^11.5), the one-sigma scatter in halo masses within SMDPL is ≲ 0.2 dex. As stellar mass increases, so does the scatter in halo mass, reaching more than 0.5 dex at M_* ∼ 10^11.5.
Figure <ref> shows how this results in increasing scatter in the halo mass estimates for galaxies with M_p > 10^12. The upturn in the loss for M_p < 10^11 is primarily the result of a sample selection effect. As we excluded galaxies with stellar masses less than 10^9 from our sample, we preferentially selected low-mass halos hosting galaxies that are over-massive for their halo size. Another consideration regarding interpolation from stellar mass is the different behaviors of central versus satellite halos. A large portion of our sample, particularly at the low halo mass end, is composed of satellites. Figure <ref> shows the fraction of satellites (purple dashed line) in the SMDPL box as a function of peak halo mass. The sharp upturn in the satellite fraction at low mass is a generic reflection of higher stellar mass – halo mass ratios in subhalos in combination with a fixed stellar mass cut. Galaxies in subhalos continue forming stars even after their halos stop accreting matter, leading to higher stellar mass to peak halo mass ratios than centrals <cit.>. With a fixed stellar mass cut, the galaxies selected at the lowest halo masses will have the highest ratios of stellar mass to halo mass, which means that primarily galaxies in subhalos will be selected. Figure <ref> shows how the inclusion of satellites drives up the error in predicted halo mass. Here, the interpolation method provided a loss of ≲ 0.6 dex for central halos across the full mass range, but performed more poorly on satellite halos, particularly at the massive end. §.§ Nearest Neighbors Results The first source of environmental information we consider is the kNN distances. As shown in Figure <ref>, the average projected distance to the kth nearest neighbor strongly depends on halo mass. Halos with peak masses of 10^11 - 10^12 tend to be found in low-density environments (large distance to kth nearest neighbor) while halos at the group and cluster mass scales (M_p > 10^12.5) are found in denser galaxy environments (small distance to kth nearest neighbor). Below M_p ∼ 10^11, the average distance to neighbors decreases again. In this regime, nearly all halos (∼ 99%) are satellites of massive halos (Fig. <ref>), and thus they are also found in high-density environments (i.e., they inherit a high-density environment from the nearby central halo they orbit). From Figure <ref>, we can also see that the dependence of kNN distance on halo mass varies with the value of k, suggesting that certain values of k may be more sensitive to halo masses in different regimes. For example, the distribution of distances to the 50th nearest neighbor does not considerably change between halo masses of 10^11 and 10^14, so we can expect that this value will not be a useful probe in this regime. However, it may be helpful for distinguishing halo masses between 10^14 and 10^15, where the slope of the relationship is steeper. When training with no prior weighting of the data, the best overall RMSE achieved was 0.19 dex. As shown by Figure <ref>, the losses from the network (green dashed line) have a similar shape to the interpolation from the SHMR (black solid line) for halo masses below 10^12.3. Above this threshold, the network outperforms SHMR interpolation, with errors tending to decrease at higher halo masses and remaining between ∼ 0.3 and 0.4 dex, compared to the median error of 0.6 dex found by the SHMR interpolation for 10^14 - 10^15 halos.
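Before turning to the weighted networks, we sketch how the inverse-root-density weighting described in the pre-processing section and the early-stopping setup could be implemented. The binning and interpolation details below are a plausible reading of that description rather than our exact code, and names such as logM_train reuse the placeholder conventions of the earlier sketches.

```python
import numpy as np
from tensorflow import keras

def halo_mass_weights(logM, n_bins=25):
    """Weight each halo by 1/sqrt(n(M_halo)), with n(M_halo) approximated by
    interpolating the bin counts evaluated at the median mass of each bin.
    (Assumes every bin is populated; the overall normalization only rescales
    all weights by a constant and does not affect the relative weighting.)"""
    edges = np.linspace(logM.min(), logM.max(), n_bins + 1)
    counts, _ = np.histogram(logM, bins=edges)
    idx = np.clip(np.digitize(logM, edges) - 1, 0, n_bins - 1)
    medians = np.array([np.median(logM[idx == i]) for i in range(n_bins)])
    n_of_M = np.interp(logM, medians, counts)   # approximate halo mass function
    return 1.0 / np.sqrt(n_of_M)

w_train = halo_mass_weights(logM_train)         # logM_train: true log10 peak halo masses

# Patience of 10 epochs, keeping the weights from the best validation epoch;
# recent Keras versions also expose a start_from_epoch argument for the warm-up.
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                           restore_best_weights=True)
model.fit(X_train, y_train, sample_weight=w_train,
          validation_data=(X_val, y_val),
          batch_size=128, epochs=50, callbacks=[early_stop])
```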
We also consider a network trained on weighted data (as described in Section <ref>), which outperforms both the stellar mass interpolation and the unweighted network at halo masses > 10^12.1, with a sacrifice in accuracy at lower masses (green dotted line in Figure <ref>) and a slightly higher overall RMSE of 0.20 dex. Hence, in applications where the higher-mass end is more important, the weighted network would be more relevant, and vice versa for lower masses. In both the unweighted and weighted network results, the median error in the prediction of the networks for halo masses below ∼ 10^12 does not substantially improve on the stellar mass-only prediction. In the higher mass regime, the neighbor information is more helpful since halos in this regime tend to have multiple satellites, and thus the neighbor information being probed is more directly tied to the target halo's mass than to the large-scale environment. In particular, in the galaxy group and cluster regime (M_p ≳ 10^13.5), the number of satellites, as indicated by the neighbor density, is expected to be strongly correlated with halo mass <cit.>. §.§ Nearest Neighbors Feature Importance To decipher how the given information is used by the networks, we attempted to isolate the impact of certain input features on the network predictions. In the case of an average-mass halo, we may expect that the first several neighbors provide information about the halo's satellites, while the remaining neighbors probe a larger region out to several Mpc, which may or may not provide additional information about the halo's mass. On the other hand, given the choice of a fixed number of neighbors, including neighbors out to fifty (or beyond) could potentially be necessary to probe the full satellite populations of the most massive halos. To determine the relevance of including more distant neighbors, we performed a process where information about neighbors beyond a given value of k was masked (i.e., the feature was replaced with the value zero) and the network re-trained. Each iteration was executed with the same network structure and hyperparameters as for the full 50-neighbor case. Figure <ref> shows the resulting errors in predictions, with each colored line representing a different number of neighbors included. Masking neighbors did not result in changes in network performance for M_p ≲ 10^12.5. This is to be expected, as the performances of the full networks are not distinguishable from the SHMR interpolation in this regime. The impact of masking neighbors only becomes apparent at higher masses. In the unweighted case, the one-neighbor network does not perform as well as the networks provided with more neighbors for M_p ≳ 10^13. However, there is no significant difference between the performances of the 5≤ k ≤ 50 neighbor networks, all falling within the confidence interval of the 50-neighbor network. For the weighted networks, we found that up to halo masses of ∼ 10^13 and ∼ 10^14, predictions based on one neighbor (yellow line) and five neighbors (pink line), respectively, are as accurate as predictions based on larger neighbor numbers. There is no substantial difference in accuracy between the ten, twenty-five, and fifty-neighbor networks. Except for the one-neighbor case, each weighted network achieves a loss of ≤ 0.20 dex. Including more neighbors beyond five does not significantly alter the performance of the overall network as constructed in the weighted cases.
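The masking experiment just described could be implemented along the following lines; build_network and early_stop refer to the earlier sketches, and the assumed column ordering (neighbor distances, then neighbor stellar masses, then the target stellar mass) is illustrative rather than a statement about the actual feature layout.

```python
def mask_beyond_k(X, k, n_neighbors=50):
    """Zero out the features of neighbors ranked beyond k. The text masks with
    the value zero; here this is applied to the standardized feature array."""
    Xm = X.copy()
    Xm[:, k:n_neighbors] = 0.0                       # distances of neighbors > k
    Xm[:, n_neighbors + k:2 * n_neighbors] = 0.0     # stellar masses of neighbors > k
    return Xm

for k in (1, 5, 10, 25, 50):
    net = build_network(input_dim=X_train.shape[1])  # same architecture/hyperparameters
    net.fit(mask_beyond_k(X_train, k), y_train,
            validation_data=(mask_beyond_k(X_val, k), y_val),
            batch_size=128, epochs=50, callbacks=[early_stop])
```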
When evaluating loss as a function of halo mass in Figure <ref>, including five neighbors is a clear improvement on the one-neighbor case at the high mass end. Including ten neighbors may result in slight improvements for clusters with M_p ≳ 10^14. There is no clear improvement gained in moving to more than ten neighbors. The process is not extended beyond fifty neighbors, due to the lack of clear improvement in including more than ten neighbors. To better understand why the more distant neighbors are not more informative about halo mass, we considered the average halo mass as a function of distance to the kth nearest neighbor (Figure <ref>). If we consider all halos (a), no one value for k stands out as particularly informative, however, when limiting to centrals only (b), the 5th neighbor's distance stands out as having the largest variance with halo mass. If we limit our analysis to the high mass end (M_* > 10^11) where the inclusion of neighbor information was found to improve the network's performance, the 5th neighbor still stands out as the most relevant (Figure <ref>, panel c). However, when limited to high mass centrals (panel d), there is an apparent trend in halo mass with distance for 5 ≤ k ≤ 25. This suggests, as expected, that more distant neighbors also carry information about the most massive central halos. However, it is also possible that some information at higher values of k is redundant. §.§ Counts in Cylinders Results We also consider the alternative environmental measure of counts in cylinders as defined in Section <ref>. Figure <ref> shows how the median value for counts in cylinders evolves with halo mass for different cylinder sizes. Given the similarity in the distribution shapes between Figures <ref> and <ref>, we expected the network trained on counts in cylinders to have a performance highly similar to the nearest neighbors network. One exception we considered was for the highest mass clusters, where the 5 h^-1Mpc radius bin, as well as the inclusion of redshift separations up to 2000 km s^-1, might cover more of a cluster's satellites than the 50 nearest neighbors measure. Additionally, the finer redshift binning might allow the network to better separate nearby galaxies from those with low projected distances but with large velocity offsets. On the other hand, by using counts instead of masses (or some proxy of masses), there is other information lost. For example, with counts alone, the network would not have a way to know if one of the nearby neighbors is more massive than the target galaxy, which might otherwise allow the network to determine whether the target galaxy is a central or a satellite. For the counts in cylinders, we used a dense network with 33 inputs and 4 hidden layers. The overall loss was 0.20 dex for both the unweighted network and weighted networks. Figure <ref> shows the performance of the unweighted (dashed blue line) and weighted (dotted blue line) networks compared to stellar mass alone (black solid line) and the full fifty neighbor networks described in section <ref> (green dashed and dotted lines). There is little difference in performance between the nearest neighbor networks and the cylinder networks. In both the unweighted and weighted cases, the lines denoting the nearest neighbor network's loss fall within the shaded error region for the corresponding cylinder network. 
The similarity in performance between the kNN networks and the counts in cylinders networks suggests that the neural networks are not able to extract more information from one environmental measure than the other. We can additionally ascertain whether the networks are extracting the same information by comparing the mass-prediction errors for individual galaxies. Figure <ref> shows the errors in estimated halo masses from the weighted counts in cylinders network versus those from the weighted kNN network. The predictions of the two networks are highly similar, suggesting that they are likely extracting the same information from the different environmental measures. To further evaluate this, we considered a network provided with both the kNN and cylinder counts information in section <ref>. §.§ Counts in Cylinders Feature Importance To analyze the relative importance of different information from the counts in cylinders, we performed a feature removal process. This process follows the same general method as described in Section <ref> for the nearest neighbors measure. First, we grouped the counts in cylinder data by cylinder radius, masking information from the larger cylinders while retaining the full range of redshift bins. In trial one, we masked the 5 h^-1Mpc cylinder by replacing the number of counts with zero. In trial two, both the 5 h^-1Mpc and 2 h^-1Mpc cylinders were set to zero. Finally, in trial three, every cylinder excluding the 0.5 h^-1Mpc cylinder was masked. Each iteration is executed with the same network structure and hyperparameters as the full cylinders case. Figure <ref> shows the resulting errors in predictions for the unweighted (top) and weighted (bottom) networks, with the full information shown in green and trials one, two, and three shown by the blue, purple, and pink lines respectively. In both the unweighted and weighted cases, all four networks have highly similar overall losses. Even with predictions divided into halo mass bins, the networks show no distinguishing characteristics. This suggests that the majority of the information is in the stellar mass and the nearby environment (as represented by the 0.5 h^-1Mpc cylinder). Beyond this inner region, the network as designed is not extracting any significant information about halo mass. In addition to considering the projected area covered by the cylinders, we also analyzed the information in each redshift bin, by masking out different bins (Figure <ref>). In trial one, we masked the bins with redshift separation |Δ z| > 1000 km s^-1 (blue dashed line) by setting the counts in each bin to zero. In trials two (purple dotted line) and three (pink solid line) we masked out all bins with separation |Δ z| > 500 km s^-1 and |Δ z| > 250 km s^-1 respectively. Figure <ref> shows results for the unweighted networks (top) and weighted networks (bottom). In both cases, there is no significant change in the loss as a result of removing all redshift bins |Δ z| > 500 km s^-1. The additional masking of the |Δ z| = 250-500 km s^-1 in trial three produces a network that is slightly less accurate (< 0.1 dex difference) at the high-mass end. These results suggest the majority of the information relevant to the network is contained in the smallest redshift bin, |Δ z| = 0 - 250 km s^-1, with some potential additional information in the |Δ z| = 250-500 km s^-1 bin. As in the aperture size tests, the innermost region tested appears to contain the majority of the relevant information on halo mass. 
Figure <ref> shows the average halo mass as a function of neighbor counts for halos hosting galaxies with M_* > 10^11. The neighbor counts are limited to the |Δ z| = 0 - 250 km s^-1 bin, which proved most relevant to the network. When considering both centrals and satellites (left), there is some small, non-monotonic trend between average halo mass and neighbor counts, but for every cylinder radius, the change in average halo mass over the full range of neighbor counts is not dramatic (< 0.4 dex). When we limited the analysis to central halos (right), the 0.5 h^-1Mpc cylinder (and to a lesser extent the 1 h^-1Mpc cylinder) display a more significant evolution (∼ 1 dex) in average halo mass with neighbor counts. This suggests the smaller cylinders contain more information about halo mass, in keeping with the results of our masked networks. §.§ Combination Network Results From Figure <ref>, we saw that the weighted kNN and counts in cylinders networks have similar errors in their halo mass predictions for the same halos. The small amount of scatter (≲ 0.5 dex) in the prediction errors of the two networks decreases towards larger errors, suggesting that a network combining the information from the two environmental measures will likely not be much more accurate than one using the individual measures. To further test this, we combine the input information from the two environmental measures to create a new network that takes 134 inputs. Figure <ref> shows the performance of a network trained on the combined information (purple) compared with the individual kNN (green) and cylinder counts (blue) networks. There is no substantial improvement in the performance of the combined network over either of the single metric networks. The overall error of the unweighted network is 0.19 dex, which is a ≲ 1% difference from the loss of the individual unweighted kNN and cylinder counts networks. The weighted network had an overall error of 0.20 dex, which is similarly a negligible change from the performance of the weighted kNN and cylinder counts networks. The performance is also similar to the previous models when limited to halo masses M_p > 10^13, where it has an RMSE of 0.36 dex. This is a 27% improvement on the stellar mass alone estimates. The halo masses predicted by the weighted combined network are plotted against the true halo mass values in Figure <ref>. The network is highly accurate, with median predictions (as shown by black points) falling mainly on the one-to-one line with a reduced level of scatter compared to the SHMR interpolation (as shown by the size of the error bars). The network tends to overpredict the masses of low-mass halos and underpredict the masses of high-mass halos, similar to the SHMR interpolation. Weighting the training data has reduced this bias, but not fully eliminated it. We additionally evaluated the performance of the weighted combined network on centrals and satellites separately (Figure <ref>). The network is highly accurate for central halos alone, with an overall error of 0.17 dex and an average loss of ≲ 0.3 dex in all mass bins. Excluding the M_p < 10^11 regime, the peak in error is at M_p ∼ 10^13. This is above the turning point in the SHMR, where we observe greater scatter in halo mass at fixed stellar mass, yet below the regime where the satellite information is expected to become highly informative. On the other hand, for satellite halos, the network loss diverges with increasing halo mass. 
This is expected as the local environmental information will be more closely tied to the mass of the host halo of the satellite rather than the mass of the satellite halo itself. Thus, there is contrasting information provided to the network by the stellar mass of the satellite galaxy and the neighboring galaxy density. The information contained in the environment would likely be different if we had instead studied the larger composite dark matter halo containing the low-mass subhalo. However, when considering solely the mass of the satellite halo, there is no substantial improvement in the overall loss of the weighted network on satellites compared to the stellar mass-only estimates. When taken together with the results in Sections <ref> & <ref>, this suggests that even at the field-level of the galaxy distribution, the available information about halo mass is fundamentally limited, and is entirely contained by a small handful of summary statistics covering the immediate environment of the galaxy. § DISCUSSION AND CONCLUSIONS The aim of this paper is to address what information regarding halo mass is present in different parts of the halo's observable environment. We find that, beyond the stellar mass of the hosted galaxy, information about the nearest neighbors in the innermost region around a halo's center (≲ 1 Mpc) is the most informative. This information is similarly contained in measurements expressed through either distance to the halo's nearest neighbors or counts of neighbors in cylinders surrounding the halo. Our results indicate that at low halo masses (M_p ≲ 10^12.5), the environment contains little supplemental information about the target's host halo mass above and beyond the target's stellar mass. At higher halo masses, the information content of the environment increases, and including data about the distribution of nearby galaxies can improve halo mass estimates (see Section <ref>). The neural networks trained on distances to nearest neighbors and on counts in cylinders had markedly similar performances, as evidenced by the alignment of their prediction errors (Figure <ref>). We expected that the nearest neighbors' distances would be the more sensitive of the two probes on small scales, and thus more relevant for halos with M_p ∼ 10^12.5-10^13.5 that likely have few satellites. The 5th nearest neighbor is often used as a probe of the environment in the literature (e.g., ). Our results support the choice of the distance to the k = 5 neighbor for probing the environment on scales that are sensitive to halo mass. Smaller values of k are more prone to noise, while larger values tend to probe the larger-scale environment that is less sensitive to halo mass. Surprisingly, it appears the two different environmental measures contain the same information, with that information primarily concentrated at small distance scales. Additionally, we expected the cylinder counts to be more helpful than the neighbor distances on the more massive end (M_p > 10^13.5) where halos have an abundance of satellites. This is evidenced by the relationship between counts in cylinders and halo mass for M_p > 10^13.5 within the smaller radii bins (Figure <ref>), while the trend between halo mass and projected distance to the kth nearest neighbor is more ambiguous (Figure <ref>). 
However, in both cases, the high number of low-mass halos found in high-density environments, particularly when including satellite halos, results in significant overlaps between high- and low-mass objects for a given environmental parameter (see Figures <ref> and <ref>). Both environmental measures likely share similar sources of error. For example, an increase in the density of neighboring galaxies as a result of projection effects would result in a similar impact to the two measures. The correlation in prediction loss on individual galaxies (Fig. <ref>) supports the idea that error is primarily driven by the same sources across the two network types. We can also consider the particular cases where the networks fail to capture the true mass of a halo. For this purpose, we looked for objects with errors in halo mass estimation of greater than 1.0 dex. This corresponds to 0.13% of the objects in the Bolshoi-Planck dataset in the case of the full kNN network, with similar failure rates for the cylinder counts and combination networks. Already, these values indicate a low failure rate for the networks on individual objects. With further consideration, the vast majority of these objects (∼ 92%) correspond to halos with SHMR more than five standard deviations from the average relationship. This includes, for example, a ∼ 10^14 halo hosting a galaxy with a stellar mass of ∼ 10^10. We suspect the mass values assigned to many of these halos are the result of a bug in UniverseMachine or in the merger tree construction. In order to avoid the suspected anomalous halos, we considered removing objects that fell more than five standard deviations from the average SHMR. However, training and testing on datasets sampled in this manner resulted in no significant changes in network performance overall. Hence, the SHMR outliers were retained in all other calculations. This study differs from previous ML-based halo mass estimates due to the inclusion of satellite halos in the target population. We found drastically different performances between centrals and satellites. As local galaxy density is expected to scale with the mass of the central halo, it is less informative about the masses of the satellites. Thus, it is not unexpected that our best networks, which perform well on centrals (≲ 0.3 dex mean squared error), do not capture the halo mass of satellites to the same level of accuracy. While past studies have focused on the masses of central halos (e.g., ), we considered both satellites and centrals together due to the difficulty of fully separating the two populations in observations. Appendix <ref> demonstrates preliminary results for predicting the halo mass of centrals assuming a perfect separation of central and satellite halos. Given the substantial difference in satellite and central behavior, separating the two categories based on observational data will likely be important for future work probing the mass and secondary properties of halos. An ML study to find the most accurate method for separating centrals and satellites would be an important milestone for understanding halo properties in observations. The neural networks presented in this paper are designed to be applied to observational surveys, but several factors should be considered prior to such an application. Our training and testing are conducted on fully complete and clean data. Real observational surveys will include fiber collisions and other sources of error or incompleteness to which the network may or may not be robust. 
Hence, it is important to apply the networks to a relatively clean data set. However, in the realistic limit that no data set is perfectly clean, one should test the robustness of the network against perturbations on the scale of the error expected in the observed data. In addition, it is worth noting that it may be simpler to correct cylinder counts data for fiber incompleteness than the distance to the kth nearest neighbor. Therefore, the cylinder counts network may be more easily applied to an observational survey. Future papers in this series are planned to address secondary halo properties such as concentration, mass accretion history, and time since the last major merger. The findings of this paper reemphasized the importance of separating centrals and satellites when investigating trends between secondary halo properties and the environment, as the environment may correlate less with satellite properties. In addition, our results highlight the relationship between the local environment and halo mass, which will need to be marginalized over when considering secondary halo properties. Our main conclusions are summarized as follows: * Stellar mass alone is a strong predictor of halo mass, containing far more information than is found in the local galaxy distribution. This is especially clear for M_p ≲ 10^12.5, where there is no clear improvement gained from including information about the environment. Above this mass threshold, the inclusion of environmental information becomes more significant with increasing halo mass (Sections <ref> and <ref>). * Information about halo mass is extremely spatially restricted, with the innermost regions (∼ 1 Mpc) surrounding the center of the target halo containing the majority of the information about halo mass. This is demonstrated by the similar performances of both the kNN and cylinder counts networks with larger-scale information masked out in comparison with the performances of the full networks (Sections <ref> and <ref>). * The performance of the kNN and cylinder counts networks are remarkably similar, suggesting that both environmental measures contain the same information about halo mass (Section <ref>). § ACKNOWLEDGEMENTS We thank Tom Abel, Han Aung, Aleksandra Ciprijanovic, Tim Eifler, Xiaohui Fan, Robert Feldman, ChangHoon Hahn, Chris Lovell, Josh Peek, Joel Primack, Risa Wechsler, John Wu, Ann Zabludoff, and Haowen Zhang for insightful discussion during the development of this paper. HB and PB were funded through a Fellowship from the Packard Foundation, Grant #2019-69646. Work done by APH at Argonne National Laboratory was supported under the DOE contract DE-AC02-06CH11357. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. This research is based upon High Performance Computing (HPC) resources supported by the University of Arizona TRIF, UITS, and Research, Innovation, and Impact (RII) and maintained by the UArizona Research Technologies department. The University of Arizona sits on the original homelands of Indigenous Peoples (including the Tohono O’odham and the Pascua Yaqui) who have stewarded the Land since time immemorial. The Bolshoi-Planck simulation was performed by Anatoly Klypin within the Bolshoi project of the University of California High-Performance AstroComputing Center (UC-HiPACC; PI Joel Primack). Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. 
The SMDPL simulation was performed by Gustavo Yepes on the SuperMUC supercomputer at LRZ (Leibniz-Rechenzentrum) using time granted by PRACE, project number 012060963 (PI Stefan Gottloeber). § DATA AVAILABILITY Trained models, as well as the codes used to create them, are available online at <https://github.com/hbowden-arch/HaloProperties>. UniverseMachine galaxy catalogs <cit.> can be found at <https://www.peterbehroozi.com/data.html>. mnras § CENTRALS ONLY TRAINING For ease of comparison with other works, we include here the results of networks trained and tested on only central halos. These results assume zero contamination from satellite halos in the test population. The Bolshoi-Planck simulated test set can be perfectly separated into central and satellite populations, but this level of accuracy is not available for observational surveys. Figure <ref> shows the resulting prediction errors in peak halo mass for the SHMR interpolation method as well as the weighted and unweighted nearest neighbors and cylinder counts networks trained and tested on central halos. All the tested neural networks outperform the SHMR for halos with M_p ≳ 10^12. The overall RMSE for all networks is ∼ 0.16-0.17 dex. As found in the case with all halos included, the weighted networks (dotted lines) outperform the unweighted networks (dashed lines) for M_p ≳ 10^13. No substantial difference between the performance of the different types of inputs is evident. There is no significant change (< 1%) in the overall RMSE performance on centrals between the centrals-only networks and the networks trained on centrals and satellites (Figure <ref>). However, when limited to halos with M_p ∼ 10^14, the centrals-only networks perform ∼ 0.02-0.06 dex better than the networks trained on a combination of centrals and satellites. It appears that removing the satellites from the training sample had little effect on how the trained network performed on central halos at low halo masses but did improve performance slightly for the most massive halos. At the cluster mass end (M_p ∼ 10^14), we can compare the results of our centrals-only neural network approach against a variety of cluster mass recovery techniques such as those evaluated in <cit.>. They consider the accuracy of over twenty non-ML methods applied to clusters from two galaxy catalogs. The different techniques have a range of root-mean-squared accuracies from ∼ 0.2-0.6 dex when applied to cluster populations with an average true halo mass of ∼ 10^14. Selecting halos from our test catalog with M_p > 10^13.5 provides a population with a similar average true halo mass. The average RMSE of all three weighted networks on this population is ∼ 0.2 dex, corresponding to the best performances seen in the <cit.> review.
http://arxiv.org/abs/2307.05379v2
20230711155037
Three-stage thermalisation of a quasi-integrable system
[ "Leonardo Biagetti", "Guillaume Cecile", "Jacopo De Nardis" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "cond-mat.quant-gas", "nlin.SI" ]
http://arxiv.org/abs/2307.04735v1
20230710175007
On tricyclic graphs with maximum edge Mostar index
[ "Fazal Hayat", "Shou-jun Xu", "Bo Zhou" ]
math.CO
[ "math.CO" ]
On tricyclic graphs with maximum edge Mostar index Fazal Hayat^a, Shou-Jun Xu^a[Corresponding author E-mail addresses: [email protected] (F. Hayat), [email protected] (S. J. Xu), [email protected] (B. Zhou)], Bo Zhou^b ^aSchool of Mathematics and Statistics, Gansu Center for Applied Mathematics, Lanzhou University, Lanzhou 730000, P.R. China ^bSchool of Mathematical Sciences, South China Normal University, Guangzhou 510631, P.R. China ========================================================================================================================================================================================================================================================================================================================================================================================================================== For a given connected graph G, the edge Mostar index Mo_e(G) is defined as Mo_e(G)=∑_e=uv ∈ E(G)|m_u(e|G) - m_v(e|G)|, where m_u(e|G) and m_v(e|G) are respectively, the number of edges of G lying closer to vertex u than to vertex v and the number of edges of G lying closer to vertex v than to vertex u. In this paper, we determine the sharp upper bound for the edge Mostar index on tricyclic graphs with a fixed number of edges, and the graphs that attain the bound are completely characterized. Keywords: Mostar index, edge Mostar index, tricyclic graph, distance-balanced graph. 2010 Mathematics Subject Classification: 05C12; 05C35 § INTRODUCTION All graphs considered in this paper are simple, connected and undirected. Let G be a graph on n vertices with vertex set V(G) and edge set E(G). For a set X, denoted by |X| is its cardinality. Thus, the order and size of G are the cardinality of V(G) and E(G), respectively. For v ∈ V(G), denoted by N_G(v) the set of all neighbors of v in G. The degree of v ∈ V(G) , denoted by d_G(v), is the cardinality of N_G(v). A vertex with degree one is called a pendent vertex and an edge incident to a pendent vertex is called a pendent edge. The distance between u and v in G is the least length of the path connecting u and v, denoted by d(u,v). A graph G with n vertices is a tricyclic graph if |E(G)|=n+2. As usual, by S_n, P_n and C_n we denote the star, path and cycle on n vertices, respectively. Let e=uv ∈ E(G), and define two subsets of V(G) as follows: N_u(e|G)= {x∈ V(G): d_G(u,x)< d_G(v,x)}, N_v(e|G)= {x∈ V(G): d_G(v,x)< d_G(u,x)}. Let n_i(e|G)= |N_i(e|G)|, for i = u, v. A graph G is distance-balanced if n_u(e|G) = n_v(e|G) for each e=uv ∈ E(G). One may refer to <cit.>, and the references cited therein, for the study on distance-balanced graph invariants. Since there exist many graphs which are not distance-balanced, measuring how far is a graph from being distance-balanced is a natural problem. However, such a measuring invariant was proposed by Doslić et al. <cit.>, named the Mostar index. For a graph G, the Mostar index of G is defined as Mo(G)=∑_e=uv ∈ E(G)|n_u(e|G) - n_v(e|G)|. Doslić et al. <cit.> studied the Mostar index of trees and unicyclic graphs, and gave a cut method for computing the Mostar index of benzenoid systems. Hayat and Zhou <cit.> determined all the n-vertex cacti with the largest Mostar index, and obtained a sharp upper bound for the Mostar index among cacti of order n with k cycles, and characterized the extremal cacti. 
Hayat and Zhou  <cit.> identified those trees with minimum and/or maximum Mostar index in the families of trees of order n with fixed parameters like maximum degree, diameter and the number of pendent vertices. Deng and Li <cit.> determined those trees with a given degree sequence have a maximum Mostar index. In <cit.> Deng and Li studied the extremal problem for the Mostar index among trees with a given number of segment sequence. Ali and Doslić <cit.> stated more modifications and generalizations of the Mostar index. For more studies about the Mostar index see  <cit.>. For a vertex x and edge e = uv of a graph G, the distance between x and e, denoted by d_G (x, e) , is defined as d_G (x, e)= min{ d_G(x,u), d_G(x,v)}. For e=uv ∈ E(G), let M_u(e|G) and M_v(e|G) respectively, the set of edges of G lying closer to u than to v and the set of edges of G lying closer to v than to u. Let m_u(e|G) and m_v(e|G) denote the size of M_u(e|G) and M_v(e|G), respectively. Arockiaraj et al. <cit.>, introduced the edge Mostar index as a quantitative refinement of the distance non-balancedness, also it can measure the peripherality of every edge and consider the contributions of all edges into a global measure of peripherality for a given chemical graph.. The edge Mostar index of G is defined as Mo_e(G)=∑_e=uv ∈ E(G)ψ_G(uv), where ψ_G(uv)=|m_u(e|G) - m_v(e|G)|, we use ψ(uv)=|m_u(e) - m_v(e)| for short, if there is no ambiguity. Imran et al. <cit.> studied the edge Mostar index of chemical structures and nanostructures using graph operations. Liu et al. <cit.> determined the extremal values of the edge Mostar index among trees and unicyclic graphs and determined the maximum and the second maximum value of the edge Mostar index among cactus graphs with a given number of vertices. Ghalavand et al. <cit.> determined the minimum values of the edge Mostar index among bicyclic graphs with fixed size, and characterized the corresponding extremal graphs. The edge Mostar index for several classes of cycle-containing graphs was computed in <cit.>. Recently, Hayat et al. <cit.> determined the sharp upper bound for the edge Mostar index on bicyclic graphs with a fixed number of edges, and the graphs that achieve the bound are completely characterized. In this paper, we determine the sharp upper bound for the edge Mostar index on tricyclic graphs with a fixed number of edges, and the graphs that achieve the bound are completely characterized. Let G be a tricyclic graph of size m. Then Mo_e(G) ≤{[ 12, if m=7, and equality holds iff G ≅ F_1, H_1,; 23, if m=8, and equality holds iff G ≅ A_3, F_1, H_1,; 36, if m=9, and equality holds iff G ≅ F_1, H_1, A_i (i= 2,...,6),; 53, if m=10, and equality holds iff G ≅ A_2,; 72, if m=11, and equality holds iff G ≅ A_1, A_2,; m^2-m-36, if m ≥ 12, and equality holds iff G ≅ A_0, ]. (where A_i (i= 0,1,...,6) are depicted in Fig. <ref>, F_1, H_1 are depicted and Fig. <ref> and Fig. <ref>, respectively). In section 2, we give some definitions and preliminary results. Theorem <ref> is proved in section 3. § PRELIMINARIES Let G_1 · G_2 be the graph obtained from G_1 and G_2 by identifying one vertex of the two graphs. Set u as the identified vertex of G_1 and G_2. If G_1 contains a cycle and u belongs to some cycle, and G_2 is a tree, then we call G_2 a pendent tree in G_1 · G_2 associated with u. For each e ∈ E(G_1), every path from e to some edges of G_2 passes through u. 
Therefore, the contribution of G_2 to ∑_e∈ E(G_1)ψ(e) totally depends on the size of G_2, that is, changing the structure of G_2 cannot alter the value ∑_e∈ E(G_1)ψ(e). If a graph H is gotten by removing repeatedly all pendants (If any) of G. Then we say H is the brace of G. That is to say, H does not contain any pendent vertex. Obviously, for all connected tricyclic graphs, their braces are shown in Fig. <ref>. Let 𝒢_m^i be the collection whose element includes α_i as its brace for i=1, … , 15. For convenience, let 𝒜 = ∪_i=5^15𝒢_m^i. <cit.> Let G be a bicyclic graph of size m. Then Mo_e(G) ≤{[ 4, if m=5, and equality holds iff G ≅ B_3, B_4,; m^2-3m-6, if 6 ≤ m ≤ 8, and equality holds iff G ≅ B_1, B_3,; 48, if m=9, and equality holds iff G ≅ B_0, B_1, B_2, B_3, B_4,; m^2-m-24, if m ≥ 10, and equality holds iff G ≅ B_0, ]. (where B_0, B_1, B_2, B_3, B_4 are depicted in Fig. <ref>). Let S_m,r≅ S_m-r· C_r, where the common vertex of S_m-r and C_r is the center of S_m-r. <cit.> Let G_1 be a connected graph of size m_1 and G_2 be a unicyclic graph of size m_2. Then Mo_e(G_1 · G_2 ) ≤ Mo_e(G_1 · S_m_2, 3 ) for m_1 + m_2 ≤ 8, Mo_e(G_1 · S_m_2, 3 )= Mo_e(G_1 · S_m_2, 4 ) for m_1 + m_2 = 9, Mo_e(G_1 · S_m_2, 4 ) for m_1 + m_2 ≥ 10. By means of Theorem <ref> and the above result, the following conclusions are obtained. Let G=G_1 · G_2 be a tricyclic graph, where G_1 is a bicyclic graph of size m_1 and G_2 is a unicyclic graph of size m_2. Then Mo_e(G) ≤ Mo_e(B_3 · S_m_2, 3 ) for m_1 + m_2= 8, Mo_e(G) ≤ Mo_e(B_2 · S_m_2, 3 )= Mo_e(B_3 · S_m_2, 3 ) = Mo_e(B_3 · S_m_2, 4)= Mo_e(B_4 · S_m_2, 3 ) = Mo_e(B_4 · S_m_2, 4 ) for m_1 + m_2= 9, Mo_e(G) ≤ Mo_e(B_0 · S_m_2, 4 ) for m_1 + m_2 ≥ 12. § PROOF OF THEOREM <REF> Let G ∈𝒜 of size m. Then Mo_e(G) ≤{[ 23, if m=8, and equality holds iff G ≅ A_3,; 36, if m=9, and equality holds iff G ≅ A_i (i= 2,...,6),; 53, if m=10, and equality holds iff G ≅ A_2,; 72, if m=11, and equality holds iff G ≅ A_1, A_2,; m^2-m-36, if m ≥ 12, and equality holds iff G ≅ A_0, ]. Suppose G ∈𝒜, then G contains α_i (i=5,6,...,15) as its brace. Let G_1 be a bicyclic graph of size m_1 and G_2 be a unicyclic graph of size m_2 such that G = G_1 · G_2. Then, in view of Lemmas <ref> and <ref>, if m= 8, we get Mo_e(G ) = Mo_e(G_1 · G_2 ) ≤ Mo_e(G_1 · S_m_2, 3 ) ≤ Mo_e(B_3 · S_m_2, 3 )= Mo_e(A_3); if m= 9, we get Mo_e(G ) = Mo_e(G_1 · G_2 ) ≤ Mo_e(B_2 · S_m_2, 3 )= Mo_e(B_3 · S_m_2, 3 ) = Mo_e(B_3 · S_m_2, 4)= Mo_e(B_4 · S_m_2, 3 ) = Mo_e(B_4 · S_m_2, 4 ) = Mo_e(A_i) (i= 2,...,6); if m ≥ 12, we have Mo_e(G ) = Mo_e(G_1 · G_2 ) ≤ Mo_e(G_1 · S_m_2, 4 ) ≤ Mo_e(B_0 · S_m_2, 4 )= Mo_e(A_0). By simple calculation, it is easy to check that, Mo_e(A_0)= m^2-m-36, Mo_e(A_1)= Mo_e(A_2)=m^2-2m-27, Mo_e(A_3)= Mo_e(A_4)=m^2-4m-9, Mo_e(A_5)= Mo_e(A_6)=Mo_e(A_7)=m^2-3m-18. Clearly, Mo_e(A_0)= m^2-m-36 > A_i (i=3,...,7), for m ≥ 10, but A_0 contains at least 12 edges. Therefore, if m=11, then Mo_e(A_1)= Mo_e(A_2) > A_i (i=3,...,7); if m=10, then Mo_e(A_2) > A_i (i=3,...,7). Let G ∈𝒢_m^1 with brace α_1 (1,1,1,2,1,1). Then Mo_e(G) ≤{[ m^2-3m-24, if 7 ≤ m ≤ 10, and equality holds iff G ≅ D_1,; 64, if m=11, and equality holds iff G ≅ D_1, D_2,; m^2-2m-35, if m ≥ 12, and equality holds iff G ≅ D_2. ]. Suppose that v_i (i=1,...,5) be the vertices in α_1 of G, as shown in Fig. <ref>. Let a_i be the number of pendent edges of v_i (i=1,...,5). Suppose that a_1+a_3 ≥ a_2 + a_4 ≥ 1 . Let G_1 be the graph obtained from G by shifting a_2 (resp. a_4) pendent edges from v_2 (resp. v_4) to v_1 (resp. v_3). 
We deduce that Mo_e(G_1 )- Mo_e(G ) = (a_1+a_2-a_3-a_4-a_5)-(a_1+a_4-a_3-a_5) + (a_3+a_4+a_5+2-3)-(a_2+a_4+3-a_3-a_5-2) + (a_1+a_2+a_3+a_4-a_5)-(a_1+a_3-a_4-a_5) + (a_3+a_4+3-a_5-2)-(a_2+a_3+3-a_4-a_5-2) + (a_1+a_2)-(a_1-a_2)+(a_1+a_2+a_3+a_4+3-a_5-1) - (a_1+a_2+a_3+3-a_4-a_5-1) + (a_1+a_2+3-a_3-a_4-a_5-1) - (a_1+a_2+a_4+3-a_3-a_5-1) = 2( a_2+a_3+a_4 + a_5 )-2 > 0. For a_5 >0, let G_2 be the graph obtained from G_1 by shifting a_5 pendent edges from v_5 to v_3. We obtain Mo_e(G_2 )- Mo_e(G_1 ) = (a_1+a_3+a_5)-(a_1+a_3-a_5)+(a_3+a_5+3-2) - (a_3+3-a_5-2)+(a_1+a_3+a_5+3-1) - (a_1+a_3+3-a_5-1) = 6 a_5 > 0. Let G_3 be the graph obtained from G_2 by shifting a_1 pendent edges from v_1 to v_3. We obtain Mo_e(G_3 )- Mo_e(G_2 ) = (a_1+a_3)-(a_3-a_1)+(a_1+a_3+2-3) - (a_3+2-3)+(a_1+a_3+3-2)-(a_3+3-2) + 0-a_1+(a_1+a_3+1-3)-(a_3+1-a_1-3) = 5 a_1 >0. Clearly, G_3 ≅ D_2, and G_2 ≅ D_1 for a_3=0. Observe that Mo_e(D_1 )=m^2-3m-24, and Mo_e(D_2 )=m^2-2m-35 . Let G ∈𝒢_m^1 of size m. Then Mo_e(G) < m^2-m-36. Suppose that G ∈𝒢_m^1, then G has a brace α_1 (a_1, a_2, a_3, a_4, a_5, a_6) as shown in Fig. <ref>. We consider the following three possible cases. Case 1. α_1 have at least three paths with length at least two. Subcase 1.1. The three paths inclose a cycle. Assume that the three paths are P(a_1), P(a_2) and P(a_6) by the symmetry of α_1. We choose nine edges, two edges in the path P(a_1) such that each one is incident to x or u, two edges in the path P(a_2) such that each one is incident to y or u, two edges in the path P(a_6) such that each one is incident to y or z, one edge in the path P(a_3) incident to z, one edge in the path P(a_4) incident to z and one edge in the path P(a_5) incident to z. Let e be one of the nine edges. Then ψ(e) ≤ m-7. This fact is also true for the remaining eight edges. Thus, Mo_e(G) ≤ 9(m-7)+(m-9)(m-1) < m^2-m-36. Subcase 1.2. The three paths composed a new path. Assume that the three paths are P(a_1), P(a_2) and P(a_4) by the symmetry of α_1. We choose nine edges, two edges in the path P(a_1) such that each one is incident to x or u, two edges in the path P(a_2) such that each one is incident to y or u, two edges in the path P(a_4) such that each one is incident to y or z, one edge in the path P(a_3) incident to z, one edge in the path P(a_5) incident to z and one edge in the path P(a_6) incident to x. Thus, Mo_e(G) ≤ 2(m-6)+4(m-7)+2(m-8)+(m-9)+(m-9)(m-1) < m^2-m-36. Subcase 1.3. The three paths share a common vertex. Assume that the three paths are P(a_1), P(a_2) and P(a_3) by the symmetry of α_1. We choose nine edges, two edges in the path P(a_1) such that each one is incident to x or u, two edges in the path P(a_2) such that each one is incident to y or u, two edges in the path P(a_3) such that each one is incident to u or z, one edge in the path P(a_4) incident to y, one edge in the path P(a_5) incident to z and one edge in the path P(a_6) incident to x. We have, Mo_e(G) ≤ 3(m-7)+3(m-8)+3(m-9)+(m-9)(m-1) < m^2-m-36. Case 2. α_1 have just two paths with length at least two. Subcase 2.1. The two paths belong to the same cycle at α_1. Assume that the two paths are P(a_1) and P(a_2) by the symmetry of α_1. We choose eight edges, two edges in the path P(a_1) such that each one is incident to x or u, two edges in the path P(a_2) such that each one is incident to y or u, one edge in the path P(a_3) incident to u, one edge in the path P(a_4) incident to y, one edge in the path P(a_5) incident to x and one edge in the path P(a_6) incident to x. 
We deduce that, Mo_e(G) ≤ 4(m-6)+3(m-7)+(m-8)+(m-8)(m-1) < m^2-m-36. Subcase 2.2. The two paths belong to the two different cycles at α_1. We choose eight edges in a similar way, as in Subcase 2.1. We obtain Mo_e(G) ≤ 4(m-5)+4(m-8)+(m-8)(m-1) < m^2-m-36. Case 3. α_1 has exactly one path with length at least two. Assume that the path is P(a_4) with a_4 ≥ 2. If a_4=2, then by Lemma <ref>, Mo_e(G) < m^2-m-36. If a_4 ≥ 3, then similarly choose eight edges as in Subcase 2.1. We obtain Mo_e(G) ≤ 2(m-5)+6(m-8)+(m-8)(m-1) < m^2-m-36. Let G ∈𝒢_m^2 with brace α_2 (2,1,1,2,1). Then Mo_e(G) ≤{[ m^2-4m-9, if 7 ≤ m ≤ 16, and equality holds iff G ≅ F_1,; 212, if m=17, and equality holds iff G ≅ F_1, F_2,; m^2-3m-26, if m ≥ 18, and equality holds iff G ≅ F_2. ]. Suppose that v_i (i=1,...,5) is the vertices in α_2 (2,1,1,2,1) of G, as shown in Fig. <ref>. Let a_i be the number of pendent edges of v_i (i=1,...,5). Suppose a_2+a_4 ≥ a_3 + a_5 ≥ 1 . Let G_1 be the graph obtained from G by shifting a_3 (resp. a_5) pendent edges from v_3 (resp. v_5) to v_2 (resp. v_4). We deduce that Mo_e(G_1 )- Mo_e(G ) = (a_2+a_3+1-a_1-4)-(a_1+a_3+a_5+4-a_2-1) + (1+a_2+a_3-3-a_4-a_5)-(a_2+1-a_4-a_5-3) + (a_1+3-a_4-a_5-2)-(a_1+a_3+3-a_4-2) + (a_2+a_3+a_4+a_5+3-3)-(a_2+a_4+3-a_5-a_3-3) + (a_1+a_2+a_3+a_4+a_5+4-1)-(a_1+a_2+a_4+4-a_3-1) + (a_4+a_5+3-1)-(a_4+a_5+3-1-a_3) + (a_1+a_2+a_3+3-2)-(a_1+a_2+a_3-a_5-2) = 2 a_2+6a_3+2a_5 -2a_1-6 > 0. Let G_2 be the graph obtained from G_1 by shifting a_4 pendent edges from v_4 to v_1. We obtain Mo_e(G_2 )- Mo_e(G_1 ) = (a_1+a_4+4-a_2-1)-(a_1+4-a_2-1) + (a_1+a_4+3-2)-(a_1+3-a_4-2) + (a_1+a_2+a_4+3-2)-(a_1+a_2+3-2) + (a_2+1-3)-(a_2+1-3-a_4)+(a_2+3-3) - (a_2+a_4+3-3)+(3-1)-(a_4+3-1) = 3 a_4 > 0. Let G_3 be the graph obtained from G_2 by shifting a_2 pendent edges from v_2 to v_1. We obtain Mo_e(G_3 )- Mo_e(G_2 ) = (a_1+a_2+4-1)-(a_1+4-a_2-1)+(a_1+a_2+3-2) - (a_1+3-2)+(1-3)-(a_2+1-3)+0-(a_2+3-3) = 2 a_2 > 0. For a_1 >6-2a_2, let G_4 be the graph obtained from G_3 by shifting a_1 pendent edges from v_1 to v_2. We have Mo_e(G_3 )- Mo_e(G_2 ) = (a_1+a_2+1-4)-(a_1+4-a_2-1)+(3-2) - (a_1+3-2)+(a_1+a_2+1-3)-(a_2+1-3) + (a_1+a_2+3-3)-(a_2+3-3) = a_1+ 2 a_2-6 > 0. Clearly, G_3 ≅ F_1 and G_4 ≅ F_2. By simple calculation, we have Mo_e(F_1 )=m^2-4m-9, and Mo_e(F_2 )=m^2-3m-26. Let G ∈𝒢_m^2 with brace α_2 (2,1,1,2,2). Then Mo_e(G) ≤ m^2-3m-20 with equality if and only if G ≅ F_3. Suppose that v_i (i=1,...,6) be the vertices in α_2 (2,1,1,2,2) of G, as shown in Fig. <ref>. Let a_i be the number of pendent edges of v_i (i=1,...,6). For a_6 >0, let G_1 be the graph obtained from G by shifting a_6 pendent edges from v_6 to v_1. We obtain Mo_e(G_1 )- Mo_e(G ) = (a_1+a_3+ a_5+ a_6+5-a_2-1)-(a_1+a_3+a_5+5-a_2-1) + (a_1+a_2+a_4+a_6+5-a_3-1)-(a_1+a_2+a_4+5-a_3-3) + (a_1+a_3+a_5+a_6+4-a_4-2) - (a_1+a_3+a_5+4-a_4-a_6-2) + (a_1+a_2+a_4+a_6+4-a_5-2) - (a_1+a_2+a_4+4-a_5-a_6-2) + (a_1+a_2+a_4+a_6+4-a_5-2) - (a_1+a_2+a_4+4-a_5-a_6-2) + (a_1+a_3+a_5+4-a_4-2)-(a_1+a_3+a_5+4-a_4-a_6-2) + (a_2+1-a_4-3)-(a_2+1-a_4-a_6-3) + (a_3+1-a_5-3)-(a_3+1-a_5-a_6-3) = 11a_6 > 0. For a_2+a_3>a_1, let G_2 be the graph obtained from G_1 by shifting a_3 (resp. a_5) pendent edges from v_3 (resp. v_5) to v_2 (resp. v_4). 
We deduce that Mo_e(G_2 )- Mo_e(G_1 ) = (a_2+a_3+1-a_1-5)-(a_1+a_3+a_5+5-a_2-1) + (a_1+a_2+a_3+a_4+a_5+5-1)-(a_1+a_2+a_4+5-a_3-1) + (a_1+4-a_4-a_5-2)-(a_1+a_3+a_5+4-a_4-2) + (a_1+a_2+a_3+a_4+a_5+4-2)-(a_1+a_2+a_4+4-a_5-2) + (a_2+a_3+1-a_4-a_5-3)-(a_2+1-a_4-3) + (3-1)-(a_3+1-a_5-3)+(a_1+a_2+a_3+a_4+a_5-2 - (a_1+a_2+a_4+4-a_5-2)+(a_1+4-a_4-a_5-2) - (a_1+a_3+a_5+4-a_4-2) = 2a_2+2a_3-2a_1 > 0. For a_2+a_4≥ 1, let G_3 be the graph obtained from G_2 by shifting a_2 (resp. a_4) pendent edges from v_2 (resp. v_4) to v_1 (resp. v_4). We have Mo_e(G_3 )- Mo_e(G_2 ) = (a_1+a_2+a_4+5-1)-(a_1+5-a_2-1) + (a_1+a_2+a_4+5-1)-(a_1+5-a_2-1) + (a_1+a_2+a_4+4-2)-(a_1+a_3+a_5+4-a_4-2) + (1-3)-(a_2+1-a_4-3) + (a_1+a_2+a_4+4-2)-(a_1+4-a_4-2) = 5a_2+7a_4 > 0. Clearly, G_3 ≅ F_3, and Mo_e(F_3 )=m^2-3m-20. Let G ∈𝒢_m^2 with brace α_2 (3,1,1,2,1). Then Mo_e(G) ≤ m^2-2m-33 with equality if and only if G ≅ F_4. Suppose that v_i (i=1,...,6) be the vertices in α_2 (3,1,1,2,1) of G, as shown in Fig. <ref>. Let a_i be the number of pendent edges of v_i (i=1,...,6). For a_6 >0, let G_1 be the graph obtained from G by shifting a_5 (resp. a_6) pendent edges from v_5 (resp. v_6) to v_1. We obtain Mo_e(G_1 )- Mo_e(G ) = (a_1+a_3+ a_5+ a_6+3-a_2-2) - (a_1+a_3+a_5+3-a_2-a_6-2) + (a_1+a_2+a_3+a_4+a_5+a_6+5-1) - (a_1+a_2+a_3+a_4+5-a_5-a_6-1) + (a_1+a_2+a_3+a_4+a_5+a_6+5-1) - (a_1+a_2+a_3+a_4+5-a_5-a_6-1) + (a_1+a_3+a_5+a_6+3-a_2-2) - (a_1+a_3+a_5+3-a_2-a_6-2) + (a_2+a_4+3-a_3-1)-(a_2+a_4+a_6+3-a_3-1) + (a_3+a_4+2-a_2-3)-(a_2+a_6+3-a_3-a_4-2) + (a_1+a_5+a_6+4-a_2-2)-(a_1+a_5+4-a_4-2) = 2a_3+4a_5+7a_6-2a_2-2 > 0. Let G_2 be the graph obtained from G_1 by shifting a_3 (resp. a_4) pendent edges from v_3 (resp. v_4) to v_1 (resp. v_2). We deduce that Mo_e(G_2 )- Mo_e(G_1 ) = (a_1+a_2+a_3+a_4+2-3)-(a_1+a_3+3-a_2-2) + (a_1+a_2+a_3+a_4+2-3)-(a_1+a_3+3-a_2-2) + (a_1+a_2+a_3+a_4+5-1)-(a_1+a_2+5-a_3-1) + (a_1+a_2+a_3+a_4+3-1)-(a_2+a_4+3-a_3-1) + (a_1+a_2+a_3+a_4+3-2)-(a_2+3-a_3-a_4-2) = a_1+ 3a_2+6a_3+5a_4 > 0. Let G_3 be the graph obtained from G_2 by shifting a_1 pendent edges from v_1 to v_2. We obtain Mo_e(G_3 )- Mo_e(G_2 ) = (a_1+a_2+2-3)-(a_1+3-a_2-2) + (a_1+a_2+2-3)-(a_2+2-a_1-3) + (a_1+a_2+3-1)-(a_2+3-1) + (a_1+a_2+3-2)-(a_2+3-2) + (4-2)-(a_1+4-2) = 3a_1+ 2a_2-2 > 0. Thus, Mo_e(G )< Mo_e(G_1 )< Mo_e(G_2 )< Mo_e(G_3 ). Clearly, G_3 ≅ F_4, and Mo_e(F_4) = m^2-2m-33. Let G ∈𝒢_m^2 of size m. Then Mo_e(G) < m^2-m-36 for m ≥ 9, and Mo_e(G) ≤ Mo_e(F_1) for m ≤ 9. Suppose that G ∈𝒢_m^2, then G has a brace α_2 (a_1, a_2, a_3, a_4, a_5) as shown in Fig. <ref>. Assume that a_4, a_5 ≥ 2. We consider the following three possible cases. Case 1. a_4, a_5 ≥ 3. Subcase 1.1. a_1= a_2= a_3 =1. We choose nine edges, three edges in the path P(a_4) such that two are incident to x or y and one is in the middle of P(a_4), three edges in the path P(a_5) such that two are incident to x or z and one is in the middle of P(a_5), one edge in the path P(a_2) incident to x, one edge in the path P(a_3) incident to x and one edge in the path P(a_1) incident to y. We have Mo_e(G) ≤ 4(m-4)+4(m-7)+(m-9)+(m-9)(m-1) < m^2-m-36. Subcase 1.2. At least one of a_1, a_2, a_3 is greater than 1. If a_2, a_3 ≥ 2, then we choose 10 edges, three edges in the path P(a_4) such that two are incident to x or y and one is in the middle of P(a_4), three edges in the path P(a_5) such that two are incident to x or z and one is in the middle of P(a_5), two edges in the path P(a_2) incident to x or y, one edge in the path P(a_3) incident to x and one edge in the path P(a_1) incident to y. 
We have Mo_e(G) ≤ 2(m-4)+ (m-5)+2(m-6)+2(m-8)+3(m-9)+(m-10)(m-1) < m^2-m-36. If a_1 ≥ 2, then we choose 10 edges, three edges in the path P(a_4) such that two are incident to x or y and one is in the middle of P(a_4), three edges in the path P(a_5) such that two are incident to x or z and one is in the middle of P(a_5), two edges in the path P(a_2) incident to x or y, one edge in the path P(a_3) incident to x and two edges in the path P(a_1) incident to y or z. We obtain Mo_e(G) ≤ 4(m-4)+ 6(m-7)+(m-10)(m-1) < m^2-m-36. Case 2. a_4 ≥ 3, a_5 = 2. Subcase 2.1.a_4 ≥ 4, a_5 = 2, and a_1= a_2= a_3 =1. We choose nine edges, four edges in the path P(a_4) such that two are incident to x or y and two are in the middle of P(a_4), two edges in the path P(a_5) such that one is incident to x and one is in the middle of P(a_5), one edge in the path P(a_2) incident to x, one edge in the path P(a_3) incident to x and one edge in the path P(a_1) incident to y. We have Mo_e(G) ≤ (m-4)+2(m-5)+(m-6)+2(m-7)+3(m-8)+(m-9)(m-1) < m^2-m-36. Subcase 2.2.a_4 =3, a_5 = 2, and a_1= a_2= a_3 =1. The Subcase follows from Lemma <ref>. Subcase 2.3.a_4 ≥ 3, a_5 = 2, and at least one of a_1, a_2, a_3 is greater than 1. The proof is similar to the Subcase 2.1. Case 3. a_4 = a_5 = 2. Subcase 3.1. At least one of a_1, a_2, a_3 is greater than 1. If a_2, a_3 ≥ 2, then we choose eight edges, three edges in the path P(a_4) such that two are incident to x or y and one is in the middle of P(a_4), two edges in the path P(a_5) such that one is incident to x and other is in the middle of P(a_5), two edges in the path P(a_2) incident to x or y, one edge in the path P(a_3) incident to x and one edge in the path P(a_1) incident to y. We have Mo_e(G) ≤ 4(m-5)+4(m-7)+(m-8)(m-1) < m^2-m-36. If a_1 ≥ 3, then we choose nine edges, two edges in the path P(a_4) such that one is incident to x and other is in the middle of P(a_4), two edges in the path P(a_5) such that one is incident to x and the other is in the middle of P(a_5), one edge in the path P(a_2) incident to x, one edge in the path P(a_3) incident to x and three edges in the path P(a_1) such that two are incident to y or z and one is in the middle of P(a_1). We obtain Mo_e(G) ≤ 2(m-5)+ 2(m-6)+4(m-7)+(m-9)+(m-9)(m-1) < m^2-m-36. If a_1 = 2, then by Lemma <ref>, Mo_e(G) ≤ m^2-3m-20 < m^2-m-36. Subcase 3.2. a_1= a_2= a_3 =1. By Lemma <ref>, we have Mo_e(G) < m^2-m-36 for m ≥ 9, and Mo_e(G) ≤ Mo_e(F_1) for m ≤ 9. Let G ∈𝒢_m^3 with brace α_3 (1,2,2,2). Then Mo_e(G) ≤{[ m^2-4m-9, if 7 ≤ m ≤ 10, and equality holds iff G ≅ H_1,; 68, if m=11, and equality holds iff G ≅ H_1, H_2,; m^2-2m-31, if m ≥ 12, and equality holds iff G ≅ H_2. ]. Suppose that v_i (i=1,...,5) be the vertices in α_3 (1,2,2,2) of G with d_G(v_1)=d_G(v_2)=4 and d_G(v_3)=d_G(v_4)=d_G(v_5)=2, as shown in Fig. <ref>. Let a_i be the number of pendent edges of v_i (i=1,...,5). Suppose that a_3 ≥ a_4 ≥ a_5. For a_4 +a_5 > a_1+a_2+8, let G_1 be the graph obtained from G by shifting a_4 (resp. a_5) pendent edges from v_4 (resp. v_5) to v_3. We deduce that Mo_e(G_1 )- Mo_e(G ) = (a_3+a_4+a_5+1-a_1-3)-(a_1+a_4+a_5+3-a_3-1) + (a_3+a_4+a_5+1-a_2-3)-(a_1+a_4+a_5+3-a_3-1) + (a_1+a_3+a_4+a_5+3-1)-(a_1+a_3+a_5+3-a_4-1) + (a_2+a_3+a_4+a_5+3-1)-(a_2+a_3+a_5+3-a_4-1) + (a_1+a_3+a_4+a_5+3-1)-(a_1+a_3+a_4+3-a_5-1) + (a_1+a_3+a_4+a_5+3-1)-(a_2+a_3+a_4+3-a_5-1) = 4( a_3+a_4 + a_5 )-2(a_1+a_2) -8> 0. For a_2 +a_3 > 1, let G_2 be the graph obtained from G_1 by shifting a_2 ) pendent edges from v_2 to v_1. 
We have Mo_e(G_2 )- Mo_e(G_1 ) = (a_1+a_2+3-a_3-1)-(a_1+3-a_3-1) + (a_3+1-3)-(a_2+3-a_3-1) + (a_1+a_2+3-3)-(a_1+3-a_2-3) + (a_1+a_2+a_3+3-1)-(a_1+a_3+3-1) + (a_3+3-1)-(a_2+a_3+3-1) + (a_1+a_2+a_3+3-1)-(a_1+a_3+3-1) + (a_3+3-1)-(a_2+a_3+3-1) = 2( a_2+a_3 )-4> 0. Clearly, G_2 ≅ H_2 for a_1=0, a_3 >0, and G_2 ≅ H_1 for a_3=0, a_1 >0. For a_1 +a_3 > 2, let G_3 be the graph obtained from G_2 by shifting a_1 pendent edges from v_1 to v_3. We have Mo_e(G_2 )- Mo_e(G_1 ) = (a_1+a_3+1-3)-(a_1+3-a_3-1) + (a_1+a_3+1-3)-(a_3+1-3)+(3-3) - (a_1+3-3)+(a_1+a_3+3-1)-(a_3+3-1) + (a_1+a_3+3-1)-(a_3+3-1) = 2( a_1+a_3 )-4> 0. Thus, Mo_e(G )< Mo_e(G_1 )< Mo_e(G_2 )< Mo_e(G_3 ). Clearly, G_3 ≅ H_2, and by simple calculation, we deduce that Mo_e(H_2) = m^2-2m-31, Mo_e(H_1) = m^2-4m-9. Let G ∈𝒢_m^3 with brace α_3 (2,2,2,2). Then Mo_e(G) ≤ m^2-m-48 with equality if and only if G ≅ H_3. Suppose that v_i (i=1,...,6) be the six vertices in α_3 (2,2,2,2) of G with d_G(v_1)=d_G(v_2)=4 and d_G(v_3)=d_G(v_4)=d_G(v_5)=d_G(v_6)=2, as shown in Fig. <ref>. Let a_i be the number of pendent edges of v_i (i=1,...,6). Suppose that a_3 ≥ a_4 ≥ a_5 ≥ a_6>0. Let G_1 be the graph obtained from G by shifting a_i ( i ≥ 4) pendent edges from v_i ( i ≥ 4) to v_3. We obtain Mo_e(G_1 )- Mo_e(G ) = (a_2+a_3+ a_4+a_5+ a_6+1-a_1-3) - (a_1+ a_4+a_5+a_6+3-a_2-a_3-1) + (a_1+a_3+a_4+a_5+a_6+1-a_2-3) - (a_2+a_4+a_5+a_6+3-a_1-a_3-1) + (a_1+a_3+a_4+a_5+a_6+3-a_2-1) - (a_1+a_3+a_5+a_6+3-a_4-a_2-1) + (a_2+a_3+a_4+a_5+a_6+3-a_1-1) - (a_2+a_3+a_5+a_6+3-a_1-a_4-1) + (a_1+a_3+a_4+a_5+a_6+3-a_2-1) - (a_1+a_3+a_4+a_6+3-a_2-a_5-1) + (a_2+a_3+a_4+a_5+a_6+3-a_1-1) - (a_2+a_3+a_4+a_6+3-a_1-a_5-1) + (a_1+a_3+a_4+a_5+a_6+3-a_2-1) - (a_1+a_3+a_4+a_5+3-a_2-a_6-1) + (a_2+a_3+a_4+a_5+a_6+3-a_1-1) - (a_2+a_3+a_4+a_5+3-a_1-a_6-1) = 4(a_3+a_4+a_5+a_6)-8 > 0. For a_1 +a_2 > 0, let G_2 be the graph obtained from G_1 by shifting a_1 (resp. a_2) pendent edges from v_1 (resp. v_2) to v_3. We have Mo_e(G_2 )- Mo_e(G_1 ) = (a_1+a_2+a_3+1-3)-(a_2+a_3+1-a_1-1) + (a_1+a_2+a_3+1-3)-(a_2+3-a_1-a_3-1) + (a_1+a_2+a_3+3-1)-(a_1+a_3+3-a_2-1) + (a_1+a_2+a_3+3-1)-(a_2+a_3+3-a_1-1) + (a_1+a_2+a_3+3-1)-(a_1+a_3+3-a_2-1) + (a_1+a_2+a_3+3-1)-(a_2+a_3+3-a_1-1) + (a_1+a_2+a_3+3-1)-(a_1+a_3+3-a_2-1) + (a_1+a_2+a_3+3-1)-(a_2+a_3+3-a_1-1) = 10a_1+6a_2+2a_3-8> 0. Thus, Mo_e(G )< Mo_e(G_1 )< Mo_e(G_2 ). Clearly, G_2 ≅ H_3, and by simple calculation, we obtain Mo_e(H_3) = m^2-m-48. Let G ∈𝒢_m^3 with brace α_3 (1,2,2,3). Then Mo_e(G) ≤ m^2-3m-24 with equality if and only if G ≅ H_4. Suppose that v_i (i=1,...,6) be the six vertices in α_3 (1,2,2,3) of G with d_G(v_1)=d_G(v_2)=4 and d_G(v_3)=d_G(v_4)=d_G(v_5)=d_G(v_6)=2, as shown in Fig. <ref>. Let a_i be the number of pendent edges of v_i (i=1,...,6). Assume that a_3 ≥ a_2, and a_4+a_5+a_6 >1. Let G_1 be the graph obtained from G by shifting a_i ( i ≥ 4) pendent edges from v_i ( i ≥ 4) to v_3. We get Mo_e(G_1 )- Mo_e(G ) = (a_1+4-a_3- a_4-a_5+-a_6-1) - (a_1+ a_4+a_5+4-a_3-1) + (a_3+a_4+a_5+a_6+1-a_2-4) - (a_2+a_4+a_6+4-a_3-1) + (a_1+a_3+a_4+a_5+a_6+4-1) - (a_1+a_3+a_5+4-a_4-1) + (a_2+a_3+a_4+a_5+a_6+4-1) - (a_2+a_3+a_6+4-a_4-1) + (a_1+a_2+a_3+a_4+a_5+a_6+5-1) - (a_1+a_2+a_2+a_4+5-a_5-a_6-1) + (a_1+a_2+a_3+a_4+a_5+a_6+5-1) - (a_1+a_2+a_2+a_4+5-a_5-a_6-1) + (a_1+3-a_2-3)-(a_1+a_5+3-a_2-a_6-3) + (a_1+3-a_2-3)-(a_1+a_5+3-a_2-a_6-3) = 2(a_3+a_4+a_5)+6a_6-2a_2-12 > 0. For a_1 +a_2 > 0, let G_2 be the graph obtained from G_1 by shifting a_1 (resp. a_2) pendent edges from v_1 (resp. v_2) to v_3. 
We have Mo_e(G_2 )- Mo_e(G_1 ) = (a_1+a_2+a_3+1-4)-(a_3+1-a_1-4) + (a_1+a_2+a_3+1-4)-(a_3+1-a_2-4) + (a_1+a_2+a_3+4-1)-(a_1+a_3+4-1) + (a_1+a_2+a_3+4-1)-(a_2+a_3+4-1) + (3-3)-(a_1+3-a_2-3) + (3-3)-(a_1+3-a_2-3) = 2a_1+6a_2> 0. Thus, Mo_e(G )< Mo_e(G_1 )< Mo_e(G_2 ). Clearly, G_2 ≅ H_4, and by simple calculation, we get Mo_e(H_4) = m^2-3m-248. Let G ∈𝒢_m^3 of size m. Then Mo_e(G) < m^2-m-36 for m ≥ 9, and Mo_e(G) ≤ Mo_e(H_1) for m ≤ 9. Suppose that G ∈𝒢_m^3, then G has a brace α_2 (a_1, a_2, a_3, a_4) as shown in Fig. <ref>. Assume that 1 ≤ a_1 ≤ a_2 ≤ a_3 ≤ a_4. We proceed with the following three possible cases. Case 1. 3 ≤ a_1 ≤ a_2 ≤ a_3 ≤ a_4. We choose twelve edges, eight edges in the paths P(a_i) (i=1,2,3,4) such that each one is incident to x or y, four edges in the middle of P(a_i) (i=1,2,3,4). We deduce that Mo_e(G) ≤ 8(m-8)+ 4(m-12)+(m-12)(m-1) < m^2-m-36. Case 2. a_1 = 2. Subcase 2.1. 3 ≤ a_2 ≤ a_3 ≤ a_4. We choose eleven edges, eight edges in the paths P(a_i) (i=1,2,3,4) such that each one is incident to x or y, three edges in the middle of P(a_i) (i=2,3,4). We deduce that Mo_e(G) ≤ 6(m-7)+ 2(m-9)+3(m-11)+(m-11)(m-1) < m^2-m-36. Subcase 2.2. a_2 = a_3 = a_4= 2. The Subcase follows from Lemma <ref>. Case 3. a_1 = 1. Subcase 3.1. 3 ≤ a_2 ≤ a_3 ≤ a_4. We choose ten edges, six edges in the paths P(a_i) (i=2,3,4) such that each one is incident to x or y, three edges in the middle of P(a_i) (i=2,3,4), and one edge in P(a_1) incident to x. It follows that Mo_e(G) ≤ 6(m-4)+ 4(m-10)+(m-10)(m-1) < m^2-m-36. Subcase 3.2. a_2=2, 3 ≤ a_3 ≤ a_4. The proof is similar to the Subcase 3.1. Subcase 3.3. a_2= a_3=2, 3 ≤ a_4. If a_4=3, then it follows from Lemma <ref>. If a_4 ≥ 4, then we choose nine edges, four edges in the path P(a_4) such that two are incident to x or y and the other two are in the middle of P(a_4), two edges in the path P(a_3) (resp. P(a_2)) such that one is incident to x and the other is in the middle of P(a_3) (resp. P(a_2)) and one edge in P(a_1) incident to x. We have Mo_e(G) ≤ 2(m-5)+2(m-7)+ 4(m-6)+(m-9)+(m-9)(m-1) < m^2-m-36. Subcase 3.4. a_2= a_3=a_4=2. By Lemma <ref>, Mo_e(G) < m^2-m-36 for m ≥ 9, and Mo_e(G) ≤ Mo_e(H_1) for m ≤ 9. Let G ∈𝒢_m^4 of size m. Then Mo_e(G) < m^2-m-36. Suppose that G ∈𝒢_m^4, then G has a brace α_4 (a_1, a_2, a_3, a_4, a_5, a_6) as shown in Fig. <ref>. We choose eight edges, two edges in the path P(a_5) such that each is incident to w or y, two edges in the path P(a_6) such that each is incident to z or x, the four edges yz, yw, wx, zx. We obtain Mo_e(G) ≤ 4(m-5)+4(m-8)+ (m-8)(m-1) < m^2-m-36. The proof of the Theorem <ref> follows from Lemmas <ref>, <ref>, <ref>, <ref> and <ref>. Acknowledgement: This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 12071194, 11571155 and 12071158). 20 ACT M. Arockiaraj, J. Clement and N. Tratnik, Mostar indices of carbon nanostructures and circumscribed donut benzenoid systems. Int. J. Quantum Chem. 119 (2019) e26043. AD A. Ali, T. Došlić, Mostar index: results and perspectives. Appl. Math. Comput. 404 (2021) 19. 126245. DL K. Deng, S. Li, On the extremal values for the Mostar index of trees with given degree sequence. Appl. Math. Comput. 390 (2021) 11. 125598. DL1 K. Deng, S. Li, On the extremal Mostar indices of trees with a given segment sequence. Bull. Malays. Math. Sci. Soc. 390 (2021) 45. 593–612. DL2 K. Deng, S. Li, Chemical trees with extremal Mostar index. MATCH Commun. Math. Comput. Chem. 85 (2021) 161–180. DL3 K. Deng, S. 
Li, Extremal catacondensed benzenoid with respect to the Mostar index. J. Math. Chem. 58 (2020) 1437–1465. DoM T. Došlić, I. Martinjak, R. Škrekovski, S. Tipurić Spužević, I. Zubac, Mostar index. J. Math. Chem. 56 (2018) 2995–3013. GAN A. Ghalvandi, A. R. Ashrafi, M. H. Nezhaad, On Mostar and edge Mostar indices of graphs. Journal of Mathematics (2021) 6651220. GRI M. Ghorbani, S. Rahmani, M. J. Islampoor, Some new results on Mostar index of graphs. Iranian J. Math. Chem. 11 (2020) 33–42. GXD F. Gao, K. Xu, T. Došlić, On the difference between Mostar index and irregularity of graphs. Bull. Malays. Math. Sci. Soc. 44 (2021) 45. 905–926. H O. C. Havare, Mostar index and edge Mostar index for some cycle related graphs. Rom. J. Math. Comput. Sci. 10 (2020) 53–66. HXZ F. Hayat, S. J. Xu, B. Zhou, On bicyclic graphs with maximum edge Mostar index. (Preprint). HZ F. Hayat, B. Zhou, On cacti with large Mostar index. Filomat 33 (2019) 4865–4873. HZ1 F. Hayat, B. Zhou, On Mostar index of trees with parameters. Filomat 33 (2019) 6453–6458. HLM S. Huang, S. Li, M. Zhang, On the extremal Mostar indices of hexagonal Chains. MATCH Commun. Math. Comput. Chem. 84 (2020) 249–271. IAI M. Imran, S. Akhter, Z. Iqbal, Edge Mostar index of chemical structures and nanostructures using graph operations. Int. J. Quan. Chem. 120 (2020) e26259. JKR J. Jerebic, S. Klavžar, D.F. Rall, Distance-balanced graphs. Ann. Combin. 12 (2008) 71–79. LD G. Liu, K. Deng, The maximum Mostar indices of unicyclic graphs with given diameter. Appl. Math. Comput. 439 (2023) 127636. LSX H. Liu, L. Song, Q. Xiao, Z. Tang, On edge Mostar index of graphs. Iranian J. Math. Chem. 11(2) (2020) 95–106. MS Š. Miklavič, P. Šparl, ℓ-distance-balanced graphs. Discrete Appl. Math. 244 (2018) 143–154. Te A. Tepeh, Extremal bicyclic graphs with respect to Mostar index. Appl. Math. Comput. 355 (2019) 319–324. XZT Q. Xiao, M. Zeng, Z. Tang, The hexagonal chains with the first three maximal Mostar indices. Discrete Appl. Math. 288 (2020) 180–191. XZT2 Q. Xiao, M. Zeng, Z. Tang, H. Deng, H. Hua, Hexagonal chains with first three minimal Mostar indices. MATCH Commun. Math. Comput. Chem. 85 (2021) 47–61.
http://arxiv.org/abs/2307.04004v1
20230708161850
MAP-NBV: Multi-agent Prediction-guided Next-Best-View Planning for Active 3D Object Reconstruction
[ "Harnaik Dhami", "Vishnu D. Sharma", "Pratap Tokekar" ]
cs.RO
[ "cs.RO", "cs.MA" ]
MAP-NBV: Multi-agent Prediction-guided Next-Best-View Planning for Active 3D Object Reconstruction Harnaik Dhami* Vishnu D. Sharma* Pratap Tokekar *Equal contribution. Names are listed alphabetically. Authors are with the Department of Computer Science, University of Maryland, U.S.A. .This work is supported by the ONR under grant number N00014-18-1-2829. August 12, 2023 ===================================================================================================================================================================================================================================================================== We propose MAP-NBV, a prediction-guided active algorithm for 3D reconstruction with multi-agent systems. Prediction-based approaches have shown great improvement in active perception tasks by learning the cues about structures in the environment from data. But these methods primarily focus on single-agent systems. We design a next-best-view approach that utilizes geometric measures over the predictions and jointly optimizes the information gain and control effort for efficient collaborative 3D reconstruction of the object. Our method achieves 22.75% improvement over the prediction-based single-agent approach and 15.63% improvement over the non-predictive multi-agent approach. We make our code publicly available through our project website: <http://raaslab.org/projects/MAPNBV/> § INTRODUCTION Visual surveying and inspection with robots have been studied for a long time for a wide range of applications such as inspection of civil infrastructure <cit.> and large vehicles <cit.>, precision agriculture <cit.>, and digital mapping for real estate <cit.>. The utilization of robots in these applications is highly advantageous as they can access hard-to-reach areas with greater ease and safety compared to situations with direct human involvement. Recent work on making robots autonomous for these tasks make their use more appealing. This work focuses on one such long-studied problem of 3D object reconstruction <cit.>, where the objective is to digitally reconstruct the object of interest by combining observations from multiple vantage points. While it could be easier to achieve this in an indoor environment by carefully placing sensors around the object, the same can't be achieved for the outdoors and open areas. For the latter, the sensor(s), must be moved around the object to capture information from different viewpoints. This can be realized with sensors such as cameras and LiDARs mounted on unmanned aerial vehicles (UAVs). A UAV with unlimited power supply capacity could capture infinite observations for an almost perfect reconstruction of the object, but the real-world limitation of battery capacity adds another dimension to the problem: achieving an accurate 3D reconstruction as fast as possible. The trade-off between reconstruction accuracy and task duration in unknown environments is commonly addressed through Next-Best-View (NBV) planning, wherein a robot determines the optimal location for the next observation to maximize information gain. Numerous solutions have been proposed by the research community to tackle this problem, with a majority of them catering to single-agent systems <cit.>. However, deploying a team of robots instead of a single agent can enhance task efficiency multi-fold, while also offering additional benefits such as fault tolerance through redundancy. 
But the direct application of single-agent NBV methods to multi-agent systems does not translate well in terms of performance. This issue stems from the potential overlap between the individual observations. An efficient multi-agent NBV formulation requires coordination among robots to build a joint representation and minimize the overlap. In this work, we extend our previous work on prediction-driven single-agent NBV, Pred-NBV <cit.>, to a team of robots for 3D reconstruction to bring the advantages of the prediction-guided approach to a multi-agent system. We call this multi-agent prediction-based next-best-view method MAP-NBV. Pred-NBV <cit.> uses a 3D point cloud prediction network along with a geometric NBV approach while also considering the control effort required for object reconstruction. An important feature of Pred-NBV is that it doesn't require the partially observed point cloud to be centered at the full object center, an implicit assumption in many 3D reconstruction networks. Naively extending Pred-NBV to a team of robots would result in significant overlap as all the agents would move in the same direction to maximize individual information gain. This is inefficient as it would be more advantageous for the robots to move in different directions. MAP-NBV solves this issue by defining NBV measures over joint observation. We accomplish this by removing duplicate points in observations from multiple robots when calculating the information gain. Along with this, we account for the total control effort in our NBV objective, which results in efficient planning for the whole team. We make the following contributions in this work: * We propose a multi-agent, prediction-based NBV planning approach for active 3D reconstruction of various objects with a novel objective combining visual information gain and control effort. * We modify a single-agent baseline NBV algorithm based on <cit.> that uses frontier-based information gain, and extend its functionality to effectively operate in multi-agent settings. * We show that our method outperforms Pred-NBV <cit.>, a single-agent prediction-based algorithm, by 22.75% and the multi-agent version of a traditional NBV baseline <cit.> by 15.63%. We share the qualitative results and release the project code from our method on our project website[<http://raaslab.org/projects/MAPNBV/>]. § RELATED WORK The use of robots for data acquisition purposes is an extensively studied topic for various domains. Their usage range from infrastructure inspection <cit.> and environment monitoring <cit.> for real-world application to the real-world digitization for research datasets and simulations <cit.>. When the environment is unknown, active methods such as next-best-view (NBV) are used to construct an object model on the fly by capturing additional observations. A majority of the works on NBV planning use information-theoretic measures <cit.> for selection to account for uncertainty in observations <cit.>. The widely used frontier and tree-based exploration approaches also utilize uncertainty about the environment for guiding the robot motion <cit.>. Some works devise geometric methods which make inferences about the exact shape of the object of interest and try to align the observations with the inferred model <cit.>. Prediction-based NBV approaches have emerged as another alternative in recent years, where a neural network takes the robot and/or the environment state as the input and NBV pose or velocity as the output <cit.>. 
A majority of the existing work on NBV is focused on single robot systems. The task performance can be enhanced by adding more robots to the systems, but directly extending single-robot NBV approaches to multi-robot systems may result in sub-optimal performance due to significant overlap in observations. This issue led to the development of exploration algorithms specifically for multi-robot systems <cit.> with information-theoretic measures for determining NBV. Some recent works on multi-robot systems have explored the utilization of predictions for improvement in task efficiency. Almadhoun et al. <cit.> designed a hybrid planner that switches between a classical NBV approach and a learning-based predictor for NBV selection but uses a partial model obtained by robot observations only. Wu et al. <cit.> use a point cloud prediction model for plants, treating the predicted point cloud as an oracle, which leads to better results than the traditional approaches. This method uses entropy-based information gain measures for NBV and is designed for plant phenotyping with robotic arms. These methods do not consider the control effort required, which is important for UAVs with energy constraints when deployed for observing large objects such as airplanes and ships. Also, these works employ information-theoretic NBV approaches. We aim to explore a prediction-based approach for geometric NBV selection. In this work, we extend Pred-NBV <cit.>, which also uses point cloud prediction, and build a multi-robot NBV planner. The prediction on the point cloud makes the pipeline modular and interpretable and can be improved by improving individual modules. We select NBV based on information gain, as well as control effort, making our approach more grounded in real-world limitations. § PROBLEM FORMULATION We are given a team of n robots, each equipped with a 3D sensor. The team flies around a closed object of volume 𝒱∈ℝ^3 and observes the points on its surface 𝒮⊂𝒱. The surface points s_i observed by the robot r_j from the view-point ϕ_k ∈Φ are represented as a voxel-filtered point cloud and the relationship between them is defined as s_i = f(r_j, ϕ_k). The robot r_j follows a trajectory ξ_r_j, consisting of multiple viewpoints, and keeps track of the points observed so far. The distance traveled by a robot between two poses ϕ_i and ϕ_j is represented by d(ϕ_i, ϕ_j). The point cloud observed by the team of robots is the union of the surface points observed by the individual robots over their respective trajectories, i.e., s_ξ = ⋃_i=1^n ⋃_ϕ∈ξ_r_i f(r_i, ϕ), and ξ represents the set of trajectories for each robot, i.e., ξ = {ξ_r_1, ξ_r_2,..., ξ_r_n}. The objective is to find a set of feasible trajectories ξ^* = {ξ_r_1^*, ξ_r_2^*, ..., ξ_r_n^*}, such that the team observes the whole voxel-filtered surface, while also minimizing the total distance traveled by the robots on their respective trajectories. ξ^* = argmin_ξ∑_i=1^n ∑_j=1^| ξ_r_j| - 1 d(ϕ_j, ϕ_j+1) such that  ⋃_i=1^n ⋃_ϕ∈ξ_r_i f(r_i, ϕ) = 𝒮. Given a finite set of trajectories, if 𝒮, the object model, is known, the optimal set of trajectories can be found with an exhaustive search. As the object model is not known a priori in an unknown environment, the optimal solution cannot be found beforehand. Thus, each robot needs to move based on the partial observations of the team to determine the NBV to reconstruct the object's surface.
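To make the objective above concrete, here is a minimal Python sketch (not taken from the paper) of the exhaustive search that would solve it in the idealized case where the surface set 𝒮 is known; the helper names (observed_points, path_length) and the voxel-keyed representation of f(r_i, ϕ) are illustrative assumptions.

import itertools

def path_length(trajectory, dist):
    # trajectory: ordered list of viewpoints; dist(p, q): travel cost d(p, q) between two poses
    return sum(dist(trajectory[j], trajectory[j + 1]) for j in range(len(trajectory) - 1))

def exhaustive_best_trajectories(candidate_trajs_per_robot, observed_points, surface, dist):
    # candidate_trajs_per_robot[i]: finite list of candidate trajectories for robot i
    # observed_points(i, traj): set of voxelized surface points f(r_i, phi) seen along traj
    # surface: the full voxel-filtered surface point set S (known only in this idealized setting)
    best, best_cost = None, float("inf")
    for combo in itertools.product(*candidate_trajs_per_robot):
        covered = set()
        for i, traj in enumerate(combo):
            covered |= observed_points(i, traj)
        if covered != surface:                                  # feasibility: the team must observe all of S
            continue
        cost = sum(path_length(traj, dist) for traj in combo)   # total distance of the team
        if cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost

Since 𝒮 is unknown during a real mission, this brute-force oracle is only a reference point; the planner described below replaces the true surface with a predicted model and selects views one step at a time.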
Here we assume that each robot can observe the object at the start of the mission, which can be accomplished by moving the robots till they see the object. In this work, we define this problem in a centralized form; all the robots share their observations with a central entity that prescribes the NBV for each by solving the aforementioned objective. § PROPOSED APPROACH In this paper, we present Multi-Agent Pred-NBV (MAP-NBV), a model prediction-guided NBV approach for a team of robots. Figure <ref> shows the overview of our process, which consists of two parts: (1)3D Model Prediction, where we combine the observations from all the robots to build a partial model of the object and use PoinTr-C <cit.>, a 3D point cloud completion network, to predict the full shape of the objects, and (2) Multi-Agent NBV Algorithm, which uses the partial model and the predicted model to determine the NBV for the team, while trying to minimize the distance traveled. Our NBV solution performs a greedy selection over the candidate points to generate the trajectory, which also reduces the computation complexity. The following subsections provide further details of our approach. §.§ 3D Model Prediction To start, the target object is segmented out from the rest of the environment in the captured RGB images for each UAV. This allows the algorithm to focus on only the target infrastructure as opposed to also including other obstacles. Then, each of these segmented images is aligned with the captured depth image per UAV to segment the target object out. Point clouds are then generated per each segmented depth image. This gives us a point cloud per each UAV that contains points belonging only to the target object. Assuming a centralized system, each segmented point cloud per UAV is transformed into a central reference frame and concatenated together into a singular point cloud. This point cloud represents the entire multi-agent system's observations of the target object at the current timestamp. The point cloud concatenation can be replaced with a registration algorithm <cit.>, but we use concatenation due to its ease of use. Lastly, this current timestamp's point cloud is then concatenated with previous observations to get an up-to-date observation point cloud. This process is shown in Figure <ref>. In order to get an approximation of the 𝒱̂ of the full model 𝒱, we use PoinTr-C <cit.> a 3D point cloud completion network, developed by fine-tuning PoinTr <cit.> using curriculum learning over ShapeNet dataset <cit.>. Unlike PoinTr and similar point cloud completion networks, PoinTr-C doesn't make implicit assumptions about the knowledge of the center of the full model by fine-tuning over rotationally and translationally perturbed point clouds. Relaxing this assumption makes PoinTr-C more suitable for inputs from an unknown environment than PoinTr. The 3D point cloud of the object obtained as the union of the observed surface points goes as input to PoinTr-C and it predicts the full object point cloud 𝒱̂. PoinTr-C was trained over isolated point clouds and therefore requires object point clouds to be isolated from the scene. This can be realized with the help of distance-based filters and state-of-the-art segmentation networks<cit.> without any fine-tuning. An example of an input point cloud and a predicted point cloud is shown in Figure <ref>. §.§ Next-Best View Planner We use the predicted point cloud as an approximation of the ground truth point cloud for NBV planning. 
For this, we first generate a set of candidate poses around the partially observed object. From these, we select a set of n poses, corresponding to each robot, based on information gain and control effort. The information gain for the set of n viewpoints is defined as the number of new, unique points expected to be observed after the robots move to these viewpoints. The control effort is defined as the total distance covered by the robots in moving to the viewpoints. The number of new points varies in each iteration since the robots observe more of the surface of the object as they move to new locations. While PoinTr-C predicts the point cloud for the whole object, the robots can observe only the surface points. Hence, before counting the number of new points, we apply hidden point removal <cit.> to the predicted point cloud. We represent this relationship between the number of points observed and the trajectories traversed till time t by I(ξ_t), where ξ_t = {ξ_r_1, ξ_r_2, ..., ξ_r_n}_t represents the set of trajectories for all the robots till time t. To balance the information gain and control effort, we use a hyperparameter τ which is kept fixed throughout an episode. The robots select the candidate pose set which results in at least a fraction τ of the total possible information gain over all candidate poses. Thus, we formulate our multi-agent NBV objective as follows. {ϕ_r_1, ϕ_r_2, ..., ϕ_r_n}_t+1 = argmin_ϕ∈𝒞∑_i=1^n d(ϕ_r_i, ϕ_r_it)  such that  ⋃_i=1^n I(ξ_r_it∪ϕ)/max_ϕ∈𝒞⋃_i=1^n I(ξ_r_it∪ϕ)≥τ In our experiments, we implement the information gain by first isolating the predicted points that can be observed from a given set of viewpoints and then taking a union of such points from each agent to identify the unique points in the joint observation. The number of the points thus obtained is used as the information gain. For finding the control effort, we use RRT-Connect <cit.> to find the path from a robot's current location to each candidate pose. The candidate poses are generated similarly to Pred-NBV <cit.>, i.e. on circles at different heights around the center of the predicted object point cloud. One circle is at the same height as the predicted object center with radius 1.5 × d_max, where d_max is the maximum distance of a point from the center of the predicted point cloud. The other two circles are located above and below this circle, 0.25 × z-range away, with a radius of 1.2 × d_max. The viewpoints are located at steps of 30^∘ on each circle. We set τ = 0.95 for all our experiments. § EXPERIMENTS AND EVALUATION In order to gauge our method's effectiveness, we compare it with a non-predictive multi-agent baseline and a prediction-driven NBV approach which was developed for a single agent. While the first highlights the benefits of including predictions in the NBV pipeline, the latter supports the argument for using a team of robots. §.§ Setup We extend the setup in Pred-NBV <cit.> to work in a multi-agent setting. Similarly, we use Robot Operating System (ROS) Melodic and AirSim <cit.> on Ubuntu 18.04 for our simulation experiments. Multiple UAVs are spawned into the AirSim environment. We equipped each of the UAVs with a depth camera and an RGB camera. Each UAV published a segmented image using AirSim's built-in segmentation. We adapted the depth segmentation package from Pred-NBV to work with multiple UAVs. We then converted these segmented depth images into 3D point clouds.
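As a concrete illustration of that last step (turning a segmented depth image into a 3D point cloud in a shared frame), the sketch below back-projects the masked depth pixels through a pinhole model and then fuses the per-UAV clouds with a voxel filter; the intrinsics (fx, fy, cx, cy), the 4x4 camera-to-world transform and the 0.1 m voxel size are assumed inputs for illustration and are not tied to AirSim's actual interfaces.

import numpy as np

def depth_to_world_points(depth, mask, fx, fy, cx, cy, T_world_cam):
    # depth: HxW depth image (meters); mask: HxW boolean segmentation of the target object
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)   # 4 x M homogeneous points
    return (T_world_cam @ pts_cam)[:3].T                     # M x 3 points in the shared frame

def fuse_team_observations(per_uav_points, voxel=0.1):
    # concatenate all UAVs' points and keep one representative per voxel (duplicate removal)
    pts = np.concatenate(per_uav_points, axis=0)
    keys = np.unique(np.floor(pts / voxel).astype(np.int64), axis=0)
    return (keys + 0.5) * voxel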
For our collision-free point-to-point planner, we use the MoveIt <cit.> software package implementing the work done by Köse <cit.>. §.§ Qualitative Example We evaluate MAP-NBV on the same 20 objects that were used in Pred-NBV to allow a direct comparison. The 20 objects consist of 5 different ShapeNet classes: airplane, rocket, tower, train, and watercraft. Examples of each class are shown in Figure <ref>. These classes represent diverse shapes and infrastructures that are regularly inspected. Figure <ref> shows the path followed by 2 UAVs as given by MAP-NBV in the C-17 airplane simulation. This environment includes other obstacles that are not of interest but still need to be accounted for in collision-free path planning. MAP-NBV finds a collision-free path for both UAVs while targeting the maximum coverage of the C-17 airplane. §.§ Comparison with Single-agent Baseline We compared the performance of MAP-NBV with a single-agent prediction-based NBV planner called Pred-NBV <cit.>. MAP-NBV is an extension of Pred-NBV designed for multi-agent scenarios. However, in single-agent cases, both algorithms function identically. In MAP-NBV, UAVs are spawned close together, ensuring that the initial environment information is virtually the same as in the single-agent Pred-NBV case. Consequently, the initial points observed and the initial shape completion predictions for both algorithms are highly similar. This means that MAP-NBV and Pred-NBV select their initial NBVs using nearly identical information. To demonstrate the immediate information gain of MAP-NBV over Pred-NBV, we compare the number of points observed after navigating to the first NBVs selected by the algorithms. Our findings, presented in Table <ref>, reveal that, on average, MAP-NBV observes 22.75% more points after the first iteration compared to Pred-NBV in the context of object reconstruction. These results are based on evaluations across 20 objects and 5 object classes. Furthermore, on average, each UAV in MAP-NBV flew a similar distance to the UAV in Pred-NBV. This similarity arises from both algorithms generating candidate viewpoints in the same manner and employing the same point-to-point planner. §.§ Comparison with Multi-agent Baseline We also compared the performance of MAP-NBV with a modified baseline NBV method <cit.> designed for multi-agent use. The baseline method employs frontiers to select the next-best views. Frontiers are points located at the edge of the observed space near unknown areas. We utilized the same modifications described in Pred-NBV <cit.>. Specifically, we used our segmented point cloud to choose frontiers near the target object. To ensure that the UAVs always face the target object, the orientation of all poses selected by the baseline aligns with the center of the observed target object point clouds. We further adapted this baseline method to function in a multi-agent setting. The pose for the first UAV is selected in the exact same manner as in the single-agent baseline. For each subsequent UAV, the remaining best pose is chosen, as long as it does not fall within a certain distance threshold compared to the previously selected poses in the current iteration of the algorithm. Both MAP-NBV and the baseline algorithm employ the same stopping criteria. The algorithm terminates if the total points observed in the previous step exceed 95% of the total points observed in the current step. 
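The stopping rule and the duplicate-free counting of jointly observed points can be summarized as follows; treating points as voxel keys is one simple way to make observations of the same surface patch by different UAVs count once, and the 0.95 ratio mirrors the criterion stated above.

import numpy as np

def voxel_keys(points, voxel=0.1):
    # map each 3D point to a discrete voxel id so duplicates across UAVs collapse to one key
    return set(map(tuple, np.floor(np.asarray(points) / voxel).astype(np.int64)))

def joint_observed(per_uav_points, voxel=0.1):
    keys = set()
    for pts in per_uav_points:
        keys |= voxel_keys(pts, voxel)
    return keys

def should_stop(prev_count, curr_count, ratio=0.95):
    # terminate when the previous step already held at least 95% of the points seen now
    return prev_count >= ratio * curr_count

In a planning loop, curr_count = len(joint_observed(...)) is recomputed after every move, and the mission ends once should_stop(prev_count, curr_count) returns True.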
Our evaluation, presented in Table <ref>, demonstrates that MAP-NBV observes, on average, 15.63% more points than the multi-agent baseline for object reconstruction across all 20 objects from the 5 different model classes. In our simulations, we utilized 2 UAVs for both algorithms. Furthermore, the MAP-NBV algorithm can be readily extended to accommodate more than just 2 robots. By incorporating additional UAVs, the algorithm can effectively leverage the collaborative efforts of a larger multi-agent system to improve object reconstruction performance and exploration efficiency. However, our current evaluation was limited to 2 UAVs because the simulations were computationally intensive and our computer experienced significant slowdowns with just 2 robots in the simulation. Despite this limitation, the promising results obtained with 2 UAVs suggest that scaling up the algorithm to include more robots has the potential to yield even more significant improvements in performance. Additionally, Figure <ref> illustrates that MAP-NBV observes more points per step than the multi-agent baseline while also covering a shorter flight distance. § CONCLUSION We present a multi-agent, prediction-guided NBV planning approach for active 3D reconstruction. This method can be helpful in a variety of applications including civil infrastructure inspection. We show that our method is able to faithfully reconstruct the object point clouds efficiently compared to non-predictive multi-agent methods and single-agent prediction-based methods. Our NBV planning objective considers both information gain and control effort, making it more suitable for real-world deployment given the flight time limit imposed on UAVs by their battery capacity. In this work, we focus solely on geometric measures for information gain. Many existing works on NBV have developed sophisticated information-theoretic measures. We will explore combining both types of measures in our future work. Also, we consider all possible viewpoint pairs for finding the NBV for the team, which hinders the scalability of MAP-NBV. We will look into methods to make this process more computationally efficient and to search over a larger candidate viewpoint set.
http://arxiv.org/abs/2307.03960v2
20230708115812
Nonparametric estimation of the diffusion coefficient from S.D.E. paths
[ "Eddy Ella-Mintsa" ]
math.ST
[ "math.ST", "stat.TH" ]
Nonparametric estimation of the diffusion coefficient from S.D.E. paths August 12, 2023 ============================================================================================== Consider a diffusion process X=(X_t)_t∈[0,1] observed at discrete times and high frequency, solution of a stochastic differential equation whose drift and diffusion coefficients are assumed to be unknown. In this article, we focus on the nonparametric estimation of the diffusion coefficient. We propose ridge estimators of the square of the diffusion coefficient, built from discrete observations of X and obtained by minimization of the least squares contrast. We prove that the estimators are consistent and derive rates of convergence as the size of the sample paths tends to infinity and the discretization step of the time interval [0,1] tends to zero. The theoretical results are completed with a numerical study over synthetic data. Keywords. Nonparametric estimation, diffusion process, diffusion coefficient, least squares contrast, repeated observations. MSC: 62G05; 62M05; 60J60 § INTRODUCTION Let X=(X_t)_t∈[0,1] be a one dimensional diffusion process with finite horizon time, solution of the following stochastic differential equation: dX_t=b(X_t)dt+σ(X_t)dW_t, X_0=0 where (W_t)_t≥ 0 is a standard Brownian motion. The drift function b and the diffusion coefficient σ are assumed to be unknown Lipschitz functions. We denote by (ℱ_t)_t∈ [0,1] the natural filtration of the diffusion process X. The goal of the article is to construct, from N discrete observations X̅^j=(X^j_kΔ_n)_0≤ k≤ n,    1 ≤ j ≤ N with time step Δ_n = 1/n, a nonparametric estimator of the square of the diffusion coefficient σ^2(.). We are in the framework of high frequency data since the time step Δ_n tends to zero as n tends to infinity. Furthermore, we consider estimators of σ^2(.) built from a single diffusion path (N = 1), and those built on N paths when N →∞. In this paper, we first propose a ridge estimator of σ^2(.) on a compact interval. Secondly, we focus on a nonparametric estimation of σ^2(.) on the real line ℝ. We measure the risk of any estimator σ̂^2 of the square of the diffusion coefficient σ^2 by 𝔼[σ̂^2 - σ^2^2_n,N], where σ̂^2 - σ^2^2_n,N := (Nn)^-1∑_j=1^N∑_k=0^n-1(σ̂^2(X^j_kΔ_n) - σ^2(X^j_kΔ_n))^2 is the squared empirical norm defined from the sample paths. Related works. There is a large literature on the estimation of coefficients of diffusion processes, and we focus on the papers studying the estimation of σ^2. Estimation of the diffusion coefficient has been considered in the parametric case (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). In the nonparametric case, estimators of the diffusion coefficient from discrete observations are proposed under various frameworks. First, the diffusion coefficient is constructed from one discrete observation of the diffusion process (N = 1) in long time (T →∞) (see e.g. <cit.>, <cit.>, <cit.>, <cit.>), or in short time (T = 1) (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). Note that in short time (T<∞), only the diffusion coefficient can be estimated consistently from a single discrete path contrary to the drift function whose consistent estimation relies on repeated discrete observations of the diffusion process (see e.g. <cit.>, <cit.>). For the case of short time diffusion processes (for instance T = 1), estimators of a time-dependent diffusion coefficient t ↦σ^2(t) have been proposed.
In this context, <cit.> built a nonparametric estimator of t↦σ^2(t) and studied its L_2 risk using wavelets methods, <cit.> studies the L_p risk of a kernel estimator of σ^2(t), and <cit.> derived a minimax rate of convergence of order n^-ps/(1+2s) where s>1 is the smoothness parameter of the Besov space ℬ^s_p,∞([0,1]) (see later in the paper). For the space-dependent diffusion coefficient x ↦σ^2(x), a first estimator based on kernels and built from a single discrete observation of the diffusion process with T = 1 is proposed in <cit.>. The estimator has been proved to be consistent under a condition on the bandwidth, but a rate of convergence of its risk of estimation has not been established. Secondly, the diffusion coefficient is built in short time (T < ∞) from N repeated discrete observations with N →∞. In <cit.>, a nonparametric estimator of σ^2 is proposed from repeated discrete observations on the real line when the time horizon T = 1. The estimator has been proved to be consistent with a rate of order N^-1/5 over the space of Lipschitz functions.
Two main methods are used to build consistent nonparametric estimators of x ↦σ^2(x). The first method is the one using kernels (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>), the other method consists in estimating σ^2 as solution of a nonparametric regression model using the least squares approach. Since the diffusion coefficient is assumed to belong to an infinite dimensional space, the method consists in projecting σ^2 into a finite dimensional subspace, estimating the projection and making a data-driven selection of the dimension by minimizing a penalized least squares contrast (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). Main contribution. In this article, we assume to have at our disposal N i.i.d. discrete observations of length n of the diffusion process X. The main objectives of this paper are the following. * Construct a consistent and implementable ridge estimator of σ^2 from a single diffusion path (N=1) using the least squares approach. We derive rates of convergence of the risk of estimation of the ridge estimators built on a compact interval and on the real line over a Hölder space, taking advantage of the properties of the local time of the diffusion process, and its link with the transition density. * We extend the result to the estimation of σ^2 on repeated observations of the diffusion process (N →∞). We prove that the estimators built on a compact interval and on are more efficient considering their respective rates compared to nonparametric estimators built from a single diffusion path. * Focusing on the support of the diffusion coefficient, we consider an intermediate case between a compact interval and by proposing a ridge estimator of σ^2 restricted to the compact interval [-A_N,A_N] where A_N→∞ as N→∞. The benefit of this approach is that the resulting projection estimator can reach a faster rate of convergence compared to the rate obtained on the real line . * Finally, we propose adaptive estimators of σ^2 based on a data-driven selection of the dimension through the minimization of the penalized least squares contrast in different settings. We sum up below the rates of convergence (up to a log-factor) of the ridge estimators of σ^2_|I with I⊆ over a Hölder space defined in the next section with a smoothness parameter β≥ 1. Outline of the paper. In Section <ref>, we define our framework with the key assumptions on the coefficients of the diffusion process ensuring for instance that Equation (<ref>) admits a unique strong solution. Section <ref> is devoted to the non-adaptive estimation of the diffusion coefficient from one diffusion path both on a compact interval and on the real line . In Section <ref>, we extend the study to the non-adaptive estimation of the diffusion coefficient from repeated observations of the diffusion process. We propose in Section <ref>, adaptive estimators of the diffusion coefficient, and Section <ref> complete the study with numerical evaluation of the performance of estimators. We prove our theoretical results in Section <ref>. § FRAMEWORK AND ASSUMPTIONS Consider a diffusion process X=(X_t)_t∈[0,1], solution of Equation (<ref>) whose drift and diffusion coefficient satisfy the following assumption. * There exists a constant L_0>0 such that b and σ are L_0-Lipschitz functions on ℝ. * There exist constants σ_0,σ_1>0 such that : σ_0≤σ(x)≤σ_1, ∀ x∈ℝ. * σ∈𝒞^2(ℝ) and there exist C >0 and α≥ 0 such that: |σ^'(x)|+|σ^''(x)|≤ C(1+|x|^α), ∀ x∈ℝ. 
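Coefficients satisfying these assumptions are easy to exhibit, and discrete paths of the corresponding diffusion can be simulated with an Euler-Maruyama scheme; the sketch below is only an illustration (the drift b(x) = -x and the bounded, smooth diffusion coefficient are example choices, not necessarily those of the paper's numerical study).

import numpy as np

def simulate_paths(N, n, b=lambda x: -x, sigma=lambda x: 1.0 + 0.3 * np.tanh(x), x0=0.0, seed=0):
    # Euler-Maruyama discretization of dX_t = b(X_t)dt + sigma(X_t)dW_t on [0,1], step 1/n
    rng = np.random.default_rng(seed)
    dt = 1.0 / n
    X = np.full((N, n + 1), x0, dtype=float)
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), size=N)            # Brownian increments
        X[:, k + 1] = X[:, k] + b(X[:, k]) * dt + sigma(X[:, k]) * dW
    return X                                                 # row j holds (X^j_0, ..., X^j_1)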
Under Assumption <ref>, X=(X_t)_t∈[0,1] is the unique strong solution of Equation (<ref>), and this unique solution admits a transition density (t,x)↦ p_X(t,x). Besides, we draw from Assumption <ref> that ∀ q≥ 1, 𝔼[t∈[0,1]sup|X_t|^q]<∞. §.§ Definitions and notations We suppose to have at our disposal, a sample D_N,n={X̅^j, j=1,⋯,N} constituted of N independent copies of the discrete observation X̅ = (X_kΔ_n)_0≤ k≤ n of the diffusion process X where Δ_n = 1/n is the time-step. The objective is to construct, from the sample D_N,n, a nonparametric estimator of the square σ^2 of the diffusion coefficient on an interval I ⊆. In the sequel, we consider two main cases, the first one being the estimation of σ^2 on the interval I from a single path (N=1 and n→∞). For the second case, we assume that both N and n tend to infinity. For each measurable function h, such that 𝔼[h^2(X_t)]<∞ for all t∈[0,1], we define the following empirical norms: h^2_n:=𝔼_X[1/n∑_k=0^n-1h^2(X_kΔ_n)], h^2_n,N:=1/Nn∑_j=1^N∑_k=0^n-1h^2(X^j_kΔ_n). For all h ∈𝕃^2(I), we have h^2_n=∫_Ih^2(x)1/n∑_k=0^n-1p_X(kΔ_n,x_0,x)dx=∫_Ih^2(x)f_n(x)dx, where f_n: x↦1/n∑_k=0^n-1p_X(kΔ_n,x) is a density function. For the case of non-adaptive estimators of σ^2, we also establish bounds of the risks of the estimators based on the empirical norm ._n or the 𝕃^2-norm . when the estimation interval I is compact. For any integers p,q ≥ 2 and any matrix M ∈^p × q, we denote by ^tM, the transpose of M. §.§ Spaces of approximation We propose projection estimators of σ^2 on a finite-dimensional subspace. To this end, we consider for each m ≥ 1, a m-dimensional subspace 𝒮_m given as follows: 𝒮_m:=Span(ϕ_ℓ, ℓ=0,⋯,m-1),    m≥ 1 where the functions (ϕ_ℓ,  ℓ∈ℕ) are continuous, linearly independent and bounded on I. Furthermore, we need to control the ℓ^2-norm of the coordinate vectors of elements of 𝒮_m, which leads to the following constrained subspace, 𝒮_m,L:={h=∑_ℓ=0^m-1a_ℓϕ_ℓ, ∑_ℓ=0^m-1a^2_ℓ=𝐚^2_2≤ mL, 𝐚=(a_0,⋯,a_m-1), L>0}. Note that 𝒮_m,L⊂𝒮_m and 𝒮_m,L is no longer a vector space. The control of the coordinate vectors allows to establish an upper bound of the estimation error that tends to zero as n→∞ or N,n→∞. In fact, we prove in the next sections that the construction of consistent estimators of σ^2 requires the functions h=∑_ℓ=0^m-1a_ℓϕ_ℓ to be bounded, such that h_∞≤ℓ=0,…,m-1maxϕ_ℓ_∞ 𝐚_2. This condition is satisfied for the functions of the constrained subspaces 𝒮_m,L with m ≥ 1. In this article, we work with the following bases. [B] The B-spline basis This is an exemple of a non-orthonormal basis defined on a compact interval. Let A > 0 be a real number, and suppose (without restriction) that I = [-A,A]. Let K,M∈ℕ^*, and consider 𝐮=(u_-M,⋯,u_K+M) a knots vector such that u_-M = ⋯ = u_-1 = u_0 = -A, u_K+1 = ⋯ = u_K+M = A, and for all i=0,⋯,K, u_i = -A+i2A/K. One calls B-spline functions, the piecewise polynomial functions (B_ℓ)_ℓ=-M,⋯,K-1 of degree M, associated with the knots vector 𝐮 (see <cit.>, Chapter 14). The B-spline functions are linearly independent smooths functions returning zero for all x∉[-A,A], and satisfying some smoothness conditions established in <cit.>. Thus, we consider approximation subspaces 𝒮_K+M defined by 𝒮_K+M=Span{B_ℓ, ℓ=-M,⋯,K-1} of dimension (𝒮_K+M)=K+M, and in which, each function h=∑_ℓ=-M^K-1a_ℓB_ℓ is M-1 times continuously differentiable thanks to the properties of the spline functions (see <cit.>). 
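For completeness, the B-spline basis just described can be evaluated with the clamped knot vector u and the Cox-de Boor recursion; the sketch below is an illustration (the convention used to place x = A in the last non-empty knot interval is an implementation choice, not something prescribed in the paper).

import numpy as np

def clamped_knots(A, K, M):
    # u_{-M}=...=u_0=-A, u_i=-A+2Ai/K for 0<=i<=K, u_{K+1}=...=u_{K+M}=A
    interior = -A + 2.0 * A * np.arange(K + 1) / K
    return np.concatenate([np.full(M, -A), interior, np.full(M, A)])

def bspline_basis(x, A, K, M):
    # evaluate the K+M B-splines of degree M on [-A, A] at a scalar x
    u = clamped_knots(A, K, M)
    B = np.zeros(len(u) - 1)
    for i in range(len(u) - 1):                  # degree-0 splines: indicators of [u_i, u_{i+1})
        if (u[i] <= x < u[i + 1]) or (x == A and u[i] < u[i + 1] == A):
            B[i] = 1.0
    for p in range(1, M + 1):                    # Cox-de Boor recursion up to degree M
        newB = np.zeros(len(u) - 1 - p)
        for i in range(len(newB)):
            left = 0.0 if u[i + p] == u[i] else (x - u[i]) / (u[i + p] - u[i]) * B[i]
            right = 0.0 if u[i + p + 1] == u[i + 1] else (u[i + p + 1] - x) / (u[i + p + 1] - u[i + 1]) * B[i + 1]
            newB[i] = left + right
        B = newB
    return B                                     # length K+M, i.e. (B_{-M}(x), ..., B_{K-1}(x))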
Besides, the spline basis is included in the definition of both the subspace 𝒮_m and the constrained subspace 𝒮_m,L (see Equations (<ref>) and (<ref>)) with m = K + M and for any coordinates vector (a_-M,…,a_K-1) ∈^K+M, ∑_ℓ=-M^K-1a_ℓB_ℓ = ∑_ℓ=0^m-1a_ℓ-MB_ℓ-M. The integer M ∈ℕ^* is fixed, while K varies in the set of integers ℕ^*. If we assume that σ^2 belongs to the Hölder space Σ_I(β,R) given as follows: Σ_I(β,R):={h∈𝒞^⌊β⌋+1(I), |h^(ℓ)(x)-h^(ℓ)(y)|≤ R|x-y|^β-l, x,y∈ I}, where β≥ 1, ℓ=⌊β⌋ and R>0, then the unknown function σ^2_|I restricted to the compact interval I can be approximated in the constrained subspace 𝒮_K+M,L spanned by the spline basis. This approximation results to the following bias term: h ∈𝒮_K+M,Linfh - σ^2_|I^2_n≤ C|I|^2βK^-2β where the constant C > 0 depends on β, R and M, and |I| = sup I - inf I. The above result is a modification of Lemma D.2 in <cit.>. [F] The Fourier basis The subspace 𝒮_m can be spanned by the Fourier basis {f_ℓ,   ℓ = 0, …, m-1} = {1,√(2)cos(2π jx), √(2)sin(2π jx),  j=1,...,d}  with   m=2d+1. The above Fourier basis is defined on the compact interval [0,1]. The definition can be extended to any compact interval, replacing the bases functions x ↦ f_ℓ(x) by x ↦ 1/(max I - min I)f_ℓ(x-min I/max I - min I). We use this basis to build the estimators of σ^2 on a compact interval I ⊂. Define for all s ≥ 1 and for any compact interval I ⊂, the Besov space ℬ^s_2,∞(I) which is a space of functions f ∈ L^2(I) such that the ⌊ s⌋^th derivative f^(⌊ s ⌋) belongs to the space ℬ^s-⌊ s ⌋_2,∞(I) given by ℬ^s - ⌊ s ⌋_2,∞(I) = {f ∈ L^2(I)  and w_2,f(t)/t^s - ⌊ s ⌋∈ L^∞(I∩^+)} where for s-⌊ s⌋∈ (0,1), w_2,f(t)=|h|≤ tsupτ_hf - f_2 with τ_hf(x) = f(x-h), and for s-⌊ s⌋ = 1, w_2,f(t)=|h|≤ tsupτ_hf + τ_-hf - 2f_2. Thus, if we assume that the function σ^2_|I belongs to the Besov space ℬ^s_2,∞, then it can be approximated in a constrained subspace 𝒮_m,L spanned by the Fourier basis. Moreover, under Assumption <ref> and from Lemma 12 in <cit.>, there exists a constant C>0 depending on the constant τ_1 of Equation (<ref>), the smoothness parameter s of the Besov space such that h∈𝒮_m,Linfh-σ^2_|I^2_n≤τ_1h∈𝒮_m,Linfh-σ^2_|I^2≤ C|σ^2_|I|^2_β m^-2β where |σ^2_|I|_s is the semi-norm of σ^2_|I in the Besov space ℬ^s_2,∞(I). Note that for all β≥ 1, the Hölder space Σ_I(β,R) and the Besov space ℬ^β_2,∞ satisfy: L^∞() ∩Σ_I(β,R) ⊂ℬ^β_∞,∞(I) ⊂ℬ^β_2,∞(I) (see <cit.>, Chap. 2 page 16). As a result, we rather consider in the sequel the Hölder space Σ_I(β,R) which can also be approximated by the Fourier basis. [H] The Hermite basis The basis is defined from the Hermite functions (h_j,j≥ 0) defined on ℝ and given for all j≥ 0 and for all x∈ℝ by: h_j(x)=c_jH_j(x), where H_j(x)=(-1)^jexp(x^2/2)d^j/dx^j(e^-x^2/2) and c_j=(2^jj!√(π))^-1/2. The polynomials H_j(x), j≥ 0 are the Hermite polynomials, and (h_j,j≥ 0) is an orthonormal basis of L^2(ℝ). Furthermore, for all j≥ 1 and x∈, |h_j(x)|≤ c|x|exp(-c_0x^2) for x^2≥(3/2)(4j+3) where c,c_0>0 are constants independent of j (see  <cit.>, Proof of Proposition 3.5). We use the Hermite basis in the sequel for the estimation of σ^2 on the real line . If one assumes that σ^2 belongs to the Sobolev space W^s_f_n(,R) given for all s ≥ 1 by W^s_f_n(,R) := {g ∈ L^2(, f_n(x)dx), ∀ ℓ≥ 1, g - g_ℓ^2_n≤ Rℓ^-s} where for each ℓ≥ 1, g_ℓ is the L^2(, f_n(x)dx)-orthogonal projection of g on the ℓ-dimensional vector space 𝒮_ℓ spanned by the Hermite basis. 
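If one wants to evaluate the Hermite basis numerically, the functions h_j are more conveniently computed through the standard three-term recurrence than through the factorials appearing in c_j; the short sketch below (an illustration, not code from the paper) returns h_0(x),...,h_m-1(x).

import numpy as np

def hermite_functions(x, m):
    # evaluate the first m Hermite functions h_0,...,h_{m-1} at the points x
    x = np.asarray(x, dtype=float)
    H = np.zeros((m,) + x.shape)
    H[0] = np.pi ** (-0.25) * np.exp(-x ** 2 / 2.0)          # h_0(x) = pi^{-1/4} e^{-x^2/2}
    if m > 1:
        H[1] = np.sqrt(2.0) * x * H[0]                        # h_1(x) = sqrt(2) x h_0(x)
    for j in range(1, m - 1):
        # h_{j+1}(x) = sqrt(2/(j+1)) x h_j(x) - sqrt(j/(j+1)) h_{j-1}(x)
        H[j + 1] = np.sqrt(2.0 / (j + 1)) * x * H[j] - np.sqrt(j / (j + 1.0)) * H[j - 1]
    return H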
Consider a compact interval I ⊂ ℝ and the following spaces: W^s(I,R) :=  {g ∈ L^2(I),  ∑_j=0^∞j^s<g,ϕ_j>^2≤ R}, W^s_f_n(I,R) :=  {g ∈ L^2(I, f_n(x)dx), ∀ ℓ≥ 1, g - g_ℓ^2_n≤ Rℓ^-s} where (ϕ_j)_j≥ 0 is an orthonormal basis defined on I and for all ℓ≥ 1, g_ℓ is the orthogonal projection of g onto 𝒮_ℓ = Span(ϕ_j,  j≤ℓ) of dimension ℓ≥ 1 (see e.g. <cit.>). Then, for all g ∈ W^s(I,R), we have g=∑_j=0^∞<g,ϕ_j>ϕ_j  and  g-g_ℓ^2 = ∑_j=ℓ+1^∞<g,ϕ_j>^2≤ℓ^-s∑_j=ℓ+1^∞j^s<g,ϕ_j>^2≤ Rℓ^-s. We have W^s_f_n(I,R) = W^s(I,R) since the empirical norm ._n and the L^2-norm . are equivalent. The space W^s_f_n(ℝ,R) is an extension of the space W^s_f_n(I,R) to the case where I = ℝ and (ϕ_j)_j ≥ 0 is the Hermite basis. The B-spline basis is used for the estimation of σ^2 on a compact interval on the one hand (N = 1 and N>1), and on the real line ℝ on the other hand, restricting σ^2 to the compact interval [-log(n), log(n)] for N = 1, or [-log(N), log(N)] for N > 1, and bounding the exit probability of the process X from the interval [-log(N), log(N)] (or [-log(n), log(n)]) by a negligible term with respect to the estimation error. In a similar context, the Fourier basis is used as an orthonormal basis to build nonparametric estimators of σ^2 on a compact interval and on ℝ, both for N = 1 and for N > 1. The main goal is to show that, in addition to the spline basis which is not orthogonal, we can build projection estimators of σ^2 on orthonormal bases that are consistent. The advantage of the Hermite basis compared to the Fourier basis is its definition on the real line ℝ. As a result, we use the Hermite basis to propose, for N > 1, a projection estimator of σ^2 whose support is the real line ℝ. Denote by ℳ the set of possible values of the dimension m ≥ 1 of the approximation subspace 𝒮_m. If (ϕ_0,⋯,ϕ_m-1) is an orthonormal basis, then for all m,m^'∈ℳ such that m < m^', we have 𝒮_m⊂𝒮_m^'. For the case of the B-spline basis, one can find a subset 𝒦⊂ℳ of the form 𝒦={2^q, q=0,⋯,q_max} such that for all K,K^'∈𝒦, K < K^' implies 𝒮_K + M⊂𝒮_K^' + M (see for example <cit.>). The nesting of the subspaces 𝒮_m, m∈ℳ is of great importance in the context of adaptive estimation of the diffusion coefficient and the establishment of upper-bounds for the risk of adaptive estimators. In the sequel, we denote by [𝐅],  [𝐇] and [𝐁] the respective collections of subspaces spanned by the Fourier basis, the Hermite basis and the B-spline basis. §.§ Ridge estimators of the square of the diffusion coefficient We establish from Equation (<ref>) and the sample D_N,n the regression model for the estimation of σ^2. For all j ∈ [[1,N]] and k ∈ [[0,n-1]], define U^j_kΔ_n := (X^j_(k+1)Δ_n - X^j_kΔ_n)^2/Δ_n. The increments U^j_kΔ_n are approximations in discrete times of d<X,X>_t/dt since, from Equation (<ref>), one has d<X,X>_t = σ^2(X_t)dt. From Equation (<ref>), we obtain the following regression model, U^j_kΔ_n=σ^2(X^j_kΔ_n)+ζ^j_kΔ_n+R^j_kΔ_n,   ∀ (j,k)∈[[1,N]]×[[0,n-1]] where U^j_kΔ_n is the response variable, and ζ^j_kΔ_n and R^j_kΔ_n are respectively the error term and a negligible residual whose explicit formulas are given in Section <ref>. We consider the least squares contrast γ_n,N defined for all m ∈ℳ and for any function h∈𝒮_m,L by γ_n,N(h):=1/Nn∑_j=1^N∑_k=0^n-1(U^j_kΔ_n-h(X^j_kΔ_n))^2. For each dimension m ∈ℳ, the projection estimator σ^2_m of σ^2 over the subspace 𝒮_m,L satisfies: σ^2_m∈h∈𝒮_m,Lmin γ_n,N(h).
Indeed, for each dimension m ∈ℳ, the estimator σ^2_m of σ^2 given in Equation (<ref>) satisfies σ^2_m=∑_ℓ=0^m-1a_ℓϕ_ℓ, where 𝐚=(a_0,⋯,a_m-1):=𝐚^2_2≤ mLmin𝐔-𝐅_m𝐚^2_2 with ^tU = (U^1_0,…,U^1_(n-1)Δ_n, …, U^N_0,…,U^N_(n-1)Δ_n) and the matrix 𝐅_m is defined as follows F_m := ( ^t(ϕ_ℓ(X^j_0),…,ϕ_ℓ(X^j_(n-1)Δ_n)))_1 ≤ j ≤ N0 ≤ℓ≤ m-1∈ℝ^Nn × m. The vector of coefficients 𝐚 is unique and called the ridge estimator of 𝐚 because of the ℓ^2 constraint on the coordinate vectors (see <cit.> Chap. 3 page 61). § ESTIMATION OF THE DIFFUSION COEFFICIENT FROM A SINGLE DIFFUSION PATH This section focuses on the nonparametric estimation of the square of the diffusion coefficient σ^2 on an interval I ⊆ when only a single diffusion path is observed at discrete times (N=1). It is proved in the literature that one can construct consistent estimators of the diffusion coefficient from one path when the time horizon T is finite (see e.g. <cit.>). Two cases are considered. First, we propose a ridge estimator of σ^2 on a compact interval I ⊂, say for example I = [-1,1]. Secondly, we extend the study to the estimation of σ^2 on the real line I =. §.§ Non-adaptive estimation of the diffusion coefficient on a compact interval In this section, we consider the estimator σ^2_m of the compactly supported square of the diffusion coefficient σ^2_|I on the constrained subspaces 𝒮_m,L from the observation of a single diffusion path. Since the interval I⊂ is compact, the immediate benefit is that the density function f_n defined from the transition density of the diffusion process X̅ = (X_kΔ) is bounded from below. In fact, there exist constants τ_0,τ_1∈(0,1] such that ∀ x∈ I, τ_0≤ f_n(x)≤τ_1, (see <cit.>). Thus, for each function h∈𝕃^2(I), τ_0h^2≤h^2_n≤τ_1h^2 where . is the 𝕃^2-norm. Equation (<ref>) allows to establish global rates of convergence of the risk of the ridge estimators σ^2_m of σ^2_|I with m∈ℳ using the L^2-norm . which is, in this case, equivalent with the empirical norm ._n. To establish an upper-bound of the risk of estimation that tends to zero as n tends to infinity, we need to establish equivalence relations between the pseudo-norms ._n,1  (N=1) and ._X on one side, and ._X and the L^2-norm . on the other side, where the random pseudo-norm ._X is defined for each function h∈𝕃^2(I) by h^2_X := ∫_0^1h^2(X_s)ds. Define for x∈, the local time ℒ^x of the diffusion process X = (X_t)_t∈[0,1] by ℒ^x = ε→ 0lim1/2ε∫_0^1_(x-ε,x+ε)(X_s)ds. In general, the local time of a continuous semimartingale is a.s. càdlàg (see e.g. <cit.>). But, for diffusion processes and under Assumption <ref>, the local time ℒ^x is bicontinuous at any point x∈ (see Lemma <ref> in Section <ref>). Furthermore, we obtain the following result. Under Assumption <ref>, and for any continuous and integrable function h, it yields, * ∫_0^1h(X_s)ds = ∫_h(x)ℒ^xdx. * For all x∈, (ℒ^x) = ∫_0^1p_X(s,x)ds. In Lemma <ref>, we remark that there is a link between the local time and the transition density of the diffusion process. Thus, if we consider the pseudo-norm ._X depending on the process X = (X_t)_t∈[0,1] and given in Equation (<ref>), and using Lemma <ref>, we obtain that, [h_X^2] = ∫_h^2(x)[ℒ^x]dx = ∫_h^2(x)∫_0^1p_X(s,x)dsdx≥τ_0h^2. where ∫_0^1p_X(s,x)ds≥τ_0 >0 (see <cit.>, Lemma 4.3), and h^2 is the 𝕃^2-norm of h. Set L = log(n). Suppose that σ^2 is approximated in one of the collections [𝐁] and [𝐅]. 
Under Assumption <ref>, it yields [σ^2_m - σ^2_|I^2_n,1] ≤ 3h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/n + m^2γ+1log(n)/n^γ/2 + Δ^2_n) [σ^2_m - σ^2_|I^2_n] ≤34τ_1/τ_0h∈𝒮_m,Linfh-σ^2_|I^2_n+C^'(m/n + m^2γ+1log(n)/n^γ/2 + Δ^2_n) where the number γ > 1 comes from the use of the Hölder inequality. The constant C>0 depends on σ_1 and the constant C^'>0 depends on σ_1, τ_0 and τ_1. We observe that the upper-bound of the risk of estimation of σ^2_m is composed of the bias term, which quantifies the cost of approximation of σ^2_|I in the constrained space 𝒮_m,L, the estimation error O(m/n) and the cost of the time discretization O(Δ^2_n) are established on a random event in which the pseudo-norms ._n,1 and ._X are equivalent, and whose probability of the complementary times σ^2_m - σ^2_|I^2_∞ is bounded by the term O(m^2γ+1log(n)/n^γ/2) (see Lemma <ref> and proof of Theorem <ref>). The next result proves that the risk of estimation can reach a rate of convergence of the same order than the rate established in <cit.> if the parameter γ > 1 is chosen such that the term O(m^2γ+1log(n)/n^γ/2) is of the same order than the estimation error of order m/n. Note that the risk σ^2_m - σ^2_|I^2_n is random since σ^2_m - σ^2_|I^2_n = _X[1/n∑_k=0^n-1(σ^2_m - σ^2_|I)(X_kΔ)] and the estimator σ^2_m is built from an independent copy X̅^1 of the discrete times process X̅. Thus, the expectation relates to the estimator σ^2_m. Suppose that σ^2∈Σ_I(β,R) with β > 3/2, and γ = 2(2β+1)/(2β-3). Assume that K_opt∝ n^1/(2β+1) for [𝐁] (m_opt = K_opt + M), and m_opt∝ n^1/(2β+1) for [𝐅]. Under Assumptions <ref>, it yields, [σ^2_m_opt - σ^2_|I^2_n,1] = O(log(n)n^-2β/(2β+1)) [σ^2_m_opt - σ^2_|I^2_n] = O(log(n)n^-2β/(2β+1)). Note that we obtain the exact same rates when considering the risk of σ^2_m_opt defined with the 𝕃^2-norm equivalent to the empirical norm ._n. Moreover, these rates of convergence are of the same order than the optimal rate n^-s/(2s+1) established in <cit.> over a Besov ball. §.§ Non-adaptive estimation of the diffusion coefficient on the real line In this section, we propose a ridge estimator of σ^2 on the real line , built from one diffusion path. In this context, the main drawback is that the density function f_n:x↦1/n∑_k=0^n-1p_X(kΔ,x) is no longer lower bounded. Consequently, the empirical norm ._n is no longer equivalent to the L_2-norm . and the consistency of the estimation error is no longer ensured under the only assumptions made in the previous sections. Consider the truncated estimator σ^2_m,L of σ^2 given by σ^2_m,L(x) = σ^2_m(x)_σ^2_m(x) ≤√(L) + √(L)_σ^2_m(x) > √(L). Thus, the risk of the ridge estimator σ^2_m,L is upper-bounded as follows: [σ^2_m,L - σ^2^2_n,1] ≤  [(σ^2_m,L - σ^2)_[-log(n),log(n)]^2_n,1] + [(σ^2_m,L - σ^2)_[-log(n),log(n)]^c^2_n,1] ≤  [(σ^2_m,L - σ^2)_[-log(n),log(n)]^2_n,1] + 4log^2(n)t∈[0,1]sup(|X_t|>log(n)). The first term on the r.h.s. is equivalent to the risk of a ridge estimator of σ^2 on the compact interval [-log(n),log(n)]. The second term on the r.h.s. is upper-bounded using Lemma <ref>. We derive below, an upper-bound of the risk of estimation of σ^2_m. Suppose that L = log^2(n). Under Assumption <ref>, it yields, [σ^2_m,L - σ^2^2_n,1] ≤h∈𝒮_m,Linfh-σ^2^2_n+C√(m^qlog^2(n)/n) where C>0 is a constant, q = 1 for the collection [𝐁], and q = 2 for the collection [𝐅]. We first remark that the upper-bound of the risk of the truncated estimator of σ^2 differs with respect to each of the chosen bases. 
This contrast comes from the fact that the Fourier basis {f_ℓ, ℓ = 0, …, m-1} and the spline basis {B_ℓ-M,  ℓ = 0, …, m-1} satisfy ∑_ℓ = 0^m - 1f_ℓ(x)≤ C_fm,  and ∑_ℓ=0^m-1B_ℓ-M(x) = 1. Secondly, the estimation error is not as fine as the one established in Theorem <ref> where σ^2 is estimated on a compact interval. In fact, on the real line , the pseudo-norm ._X can no longer be equivalent to the 𝕃^2-norm since the transition density is not bounded from below on . Consequently, we cannot take advantage of the exact method used to establish the risk bound obtained in Theorem <ref> which uses the equivalence relation between the pseudo-norms ._n,1 and ._X on one side, and ._X and the 𝕃^2-norm . on the other side. Moreover, we can also notice that the term of order 1/n^2 does not appear since it is dominated by the estimation error. We obtain below rates of convergence of the ridge estimator of σ^2 for each of the collections [𝐁] and [𝐅]. Suppose that σ^2∈Σ_I(β,R) with β≥ 1 For [B]. Assume that K ∝ n^1/(4β+1). Under Assumptions <ref>, there exists a constant C>0 depending on β and σ_1 such that [σ^2_m,L - σ^2^2_n,1] ≤ Clog^2β(n)n^-2β/(4β+1). For [F]. Assume that m ∝ n^1/2(2β+1). Under Assumptions <ref>, it yields, [σ^2_m,L - σ^2^2_n,1] ≤ Clog(n)n^-β/(2β+1) where the constant C>0 depends on β and σ_1. As we can remark, the obtained rates are slower than the ones established in Section <ref> where σ^2 is estimated on a compact interval. This result is the immediate consequence of the result of Theorem <ref>. § ESTIMATION OF THE DIFFUSION COEFFICIENT FROM REPEATED DIFFUSION PATHS We now focus on the estimation of the (square) of the diffusion coefficient from i.i.d. discrete observations of the diffusion process (N →∞). §.§ Non-adaptive estimation of the diffusion coefficient on a compact interval We study the rate of convergence of the ridge estimators σ^2_m of σ^2_|I from D_N,n when I is a compact interval. The next theorem gives an upper-bound of the risk of our estimators σ^2_m,  m∈ℳ. Suppose that L = log(Nn) and ℳ = {1,…,√(min(n,N))/log(Nn)}. Under Assumption <ref> and for all m ∈ℳ, there exist constants C>0 and C^'>0 depending on σ_1 such that, 𝔼[σ^2_m-σ^2_|I^2_n,N]≤   3h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/Nn+mlog(Nn)exp(-C√(min(n,N)))+Δ^2_n) 𝔼[σ^2_m-σ^2_|I^2_n]≤   34h∈𝒮_m,Linfh-σ^2_|I^2_n + C^'(m/Nn+mlog(Nn)exp(-C√(min(n,N)))+Δ^2_n). Note that the result of Theorem <ref> is independent of the choice of the basis that generate the approximation space 𝒮_m. The first term on the right-hand side represents the approximation error of the initial space, the second term O(m/(Nn)) is the estimation error, and the last term characterizes the cost of the time discretization. The next result is derived from Theorem <ref>. Suppose that σ^2∈Σ_I(β,R) with β > 3/2. Moreover, assume that K_opt∝ (Nn)^1/(2β+1) for [𝐁] (m_opt = K_opt + M), and m_opt∝ (Nn)^1/(2β+1) for [𝐅]. Under Assumptions <ref>, it yields, 𝔼[σ^2_m_opt-σ^2_|I^2_n,N] =  O((Nn)^-2β/(2β+1)) 𝔼[σ^2_m_opt-σ^2_|I^2_n] =  O((Nn)^-2β/(2β+1)). The obtained result shows that the nonparametric estimators of σ^2_|I based on repeated observations of the diffusion process are more efficient when N,n→∞. Note that the same rate is obtained if the risk of σ^2_m_opt is defined with the 𝕃^2-norm . equivalent to the empirical norm ._n. The rate obtained in Corollary <ref> is established for β > 3/2. 
If we consider for example the collection [B] and assume that β∈ [1, 3/2], then K_opt∝ (Nn)^1/(2β+1) belongs to ℳ for n ∝√(N)/log^4(N) and we have 𝔼[σ^2_m_opt-σ^2_|I^2_n,N]≤ C(Nn)^-2β/(2β+1). Under the condition n ∝√(N)/log^4(N) imposed on the length of diffusion paths, the obtained rate is of order n^-3β/(2β+1) (up to a log-factor) which is equivalent to N^-3β/2(2β+1) (up to a log-factor). §.§ Non-adaptive estimation of the diffusion coefficient on the real line Consider a ridge estimator of σ^2 on built from N independent copies of the diffusion process X observed in discrete times, where both N and n tend to infinity. For each m ∈ℳ, we still denote by σ^2_m the ridge estimators of σ^2 and σ^2_m,L the truncated estimators of σ^2 given in Equation (<ref>). We establish, through the following theorem, the first risk bound that highlights the main error terms. Suppose that L=log^2(N). Under Assumptions <ref> and for any dimension m∈ℳ, the following holds: 𝔼[σ^2_m,L-σ^2^2_n,N] ≤   2h∈𝒮_m,Linfh-σ^2^2_n+C(√(m^qlog^2(N)/Nn) + Δ^2_n) where C>0 is a constant depending on the upper bound σ_1 of the diffusion coefficient. Moreover, q = 1 for the collection [𝐁] and q = 2 for the collection [𝐇]. If we consider the risk of σ^2_m,L using the empirical norm ._n, then we obtain 𝔼[σ^2_m,L-σ^2^2_n] ≤ 2h∈𝒮_m,Linfh-σ^2^2_n+C(√(m^qlog^2(N)/Nn) + m^2log^3(N)/N+Δ^2_n) The risk bound given in Equation (<ref>) is a sum of four error terms. The first term is the approximation error linked to the choice of the basis, the second term is the estimation error given in Theorem <ref>, the third term m^2log^3(N)/N comes from the relation linking the empirical norm ._n to the pseudo-norm ._n,N (see Lemma <ref>), and the last term is the cost of the time-discretization. We derive, in the next result, rates of convergence of the risk bound of the truncated ridge estimators σ^2_m,L based on the collections [𝐁] and [𝐇] respectively. Suppose that σ^2∈Σ_I(β,R) with β≥ 1,  I = [-log(N),log(N)], and K ∝ (Nn)^1/(4β+1) for [𝐁], and σ^2∈ W^s_f_n(,R) with s ≥ 1 and m ∝ (Nn)^1/2(2s+1) for [𝐇]. Under Assumption <ref>, the following holds: For  [𝐁]   𝔼[σ^2_m,L-σ^2^2_n,N] ≤ C(log^2β(N)(Nn)^-2β/(4β+1) + 1/n^2), For  [𝐇]   𝔼[σ^2_m,L-σ^2^2_n,N] ≤ C(log^3(N)(Nn)^-s/(2s+1) + 1/n^2). where C>0 is a constant depending on β and σ_1 for [𝐁], or s and σ_1 for [𝐇]. The obtained rates are slower compared to the rates established in Section <ref> for the estimation of σ^2_|I where the interval I⊂ is compact. In fact, the method used to establish the rates of Theorem <ref> from which the rates of Corollary <ref> are obtained, does not allow us to derive rates of order (Nn)^-α/(2α+1) (up to a log-factor) with α≥ 1 (e.g. α = β, s). Finally, if we consider the risk defined with the empirical norm ._n, then from Equation (<ref>) with n ∝ N and assuming that m ∝ N^1/4(s+1) for [𝐇] or K ∝ N^1/4(β+1) for [𝐁], we obtain [𝐁]:     𝔼[σ^2_m,L-σ^2^2_n] ≤   Clog^2β(N)(Nn)^-β/2(β+1), [𝐇]:     𝔼[σ^2_m,L-σ^2^2_n] ≤   Clog^3(N)(Nn)^-s/2(s+1), where C>0 is a constant depending on σ_1 and on the smoothness parameter. We can see that the obtained rates are slower compared to the results of Corollary <ref> for n ∝ N. The deterioration of the rates comes from the additional term of order m^2log^3(N)/N which is now regarded as the new estimation error since it dominates the other term in each case as N→∞. 
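Before turning to the case of an estimation interval that grows with the sample size, we summarise how the estimators studied so far can be computed in practice. The following Python sketch forms the pseudo-regression variables U^j_{kΔ}, builds the design matrix F_m for a generic basis (for instance the bspline_design sketch given earlier), solves the constrained least-squares problem defining the ridge coefficients, and applies the truncation at √L used on the real line. Imposing the constraint ‖𝐚‖²_2 ≤ mL through a bisection on a ridge penalty is one possible implementation (it coincides with the constrained minimiser when the constraint is active); neither this choice nor the helper names come from the paper.

import numpy as np

def ridge_sigma2(paths, design, m, L):
    # paths: array of shape (N, n+1) containing the discrete observations X^j_{k*Delta_n}
    # design: callable x -> (len(x), m) matrix of basis values (e.g. bspline_design above)
    N, n_plus_1 = paths.shape
    n = n_plus_1 - 1
    X = paths[:, :-1].ravel()                           # X^j_{k*Delta_n}, k = 0, ..., n-1
    U = (np.diff(paths, axis=1) ** 2 * n).ravel()       # U^j = (increment)^2 / Delta_n
    F = design(X)                                       # design matrix F_m of size (Nn, m)

    def ridge_coef(lam):                                # ridge solution for penalty lam > 0
        return np.linalg.solve(F.T @ F + lam * np.eye(m), F.T @ U)

    a, *_ = np.linalg.lstsq(F, U, rcond=None)           # unconstrained least squares
    if a @ a > m * L:                                   # constraint ||a||_2^2 <= mL is active
        lo, hi = 0.0, 1.0
        while ridge_coef(hi) @ ridge_coef(hi) > m * L:  # find a penalty level that is large enough
            hi *= 2.0
        for _ in range(60):                             # bisection on the penalty level
            mid = 0.5 * (lo + hi)
            if ridge_coef(mid) @ ridge_coef(mid) > m * L:
                lo = mid
            else:
                hi = mid
        a = ridge_coef(hi)

    def estimator(x):                                   # truncated estimator sigma2_{m,L}
        return np.minimum(design(np.atleast_1d(x)) @ a, np.sqrt(L))
    return a, estimator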
§.§ Non-adaptive estimation of the diffusion coefficient on a compact interval depending on the sample size This section combines the two first sections <ref> and <ref> focusing on the estimation of σ^2 on the compact interval [-A_N,A_N] where (A_N) is a strictly positive sequence such that A_N →∞ as N→∞. Consequently, we obtain that the estimation interval tends to as the sample size N tends to infinity. Define from the observations and for each dimension m∈ℳ, the following matrices: Ψ_m:=(1/Nn∑_j=1^N∑_k=0^n-1ϕ_ℓ(X^j_kΔ)ϕ_ℓ^'(X^j_kΔ))_0≤ℓ,ℓ^'≤ m-1, Ψ_m:=𝔼(Ψ_m)=([1/n∑_k=0^n-1ϕ_ℓ(X_kΔ)ϕ_ℓ^'(X_kΔ)])_0≤ℓ,ℓ^'≤ m-1. These two matrices play an essential role in the construction of a consistent projection estimator of σ^2 over any approximation subspace 𝒮_m spanned by the basis (ϕ_0,⋯,ϕ_m-1). Furthermore, for all h=∑_ℓ=0^m-1a_ℓϕ_ℓ∈𝒮_m, we have: h^2_n,N = ^t𝐚Ψ_m𝐚, h^2_n = 𝔼(h^2_n,N) = ^t𝐚Ψ_m𝐚, where 𝐚=(a_0,⋯,a_m-1). The Gram matrix Ψ_m is invertible under the spline basis (see <cit.>) and the Hermite basis (see <cit.>). We define for any invertible matrix M, the operator norm M^-1_op of M^-1 given by M^-1_op=1/inf{λ_j} where the λ_j are eigenvalues of M. For all dimension m∈ℳ, the matrices Ψ_m and 𝐅_m satisfy: Ψ_m= ^t𝐅_m𝐅_m. Consider the ridge estimator σ^2_m of σ^2_A_N = σ^2_[-A_N,A_N], with m∈ and A_N →∞ as N →∞. The estimator σ^2_m can reach a faster rate of convergence if the Gram matrix Ψ_m given in Equation (<ref>) satisfies the following condition, ℒ(m)(Ψ^-1_m_op∨ 1)≤ CN/log^2(N),   where  ℒ(m):=x∈ℝsup∑_ℓ=0^m-1ϕ^2_ℓ(x)<∞ where C>0 is a constant. In fact, the optimal rate of convergence is achieved on a random event Ω_n,N,m in which the two empirical norms ._n,N and ._n are equivalent (see  <cit.>, <cit.>). Then, Condition (<ref>)  is used to upper-bound (Ω^c_n,N,m) by a negligible term with respect to the considered rate (see <cit.>). Note that in Equation (<ref>), the square on log(N) is justified by the fact that the value of constant C>0 is unknown, and that the spline basis is not othonormal (see <cit.>, proof of Lemma 7.8). The assumption of Equation (<ref>) is also made in <cit.> on the operator norm of Ψ^-1_m based on an orthonormal basis with the bound 𝐜N/log(N) where the value of 𝐜 is known, and chosen and such that the upper-bound of (Ω^c_n,N,m) is negligible with respect to the estimation error. In our framework, since the transition density is approximated by Gaussian densities, we derive the following result. Suppose that n ∝ N and that the spline basis is constructed on the interval [-A_N,A_N] with A_N > 0. Under Assumption <ref> , for all m∈ and for all w∈^m such that w_2,m=1, there exists a constant C>0 such that For  [𝐇]:          w^'Ψ_mw ≥C/log(N)exp(-3c_σ(4m+3)/2(1-log^-1(N))), For  [𝐁]:          w^'Ψ_mw ≥CA_N/mlog(N)exp(-c_σA^2_N), where the constant c_σ>1 that comes from the approximation of the transition density, depends on the diffusion coefficient σ. The result of Lemma <ref>  implies for the Hermite basis that (Ψ^-1_m_op∨ 1)≤log(N)/Cexp(3c_σ(4m+3)/2(1-log^-1(N))) where the upper-bound is an exponentially increasing sequence of N since the dimension m∈ has a polynomial growth with respect to N. Thus, Condition (<ref>)  cannot be satisfied for the Hermite basis in our framework. Considering the spline basis, one has ℒ(m)=ℒ(K+M)≤ 1 and there exists a constant C>0 such that Ψ^-1_m_op≤ Cmlog(N)/A_Nexp(c_σA^2_N). For K ∝(N^2/(2β+1)A_N), Condition (<ref>) is satisfied if the estimation interval [-A_N,A_N] is chosen such that A_N = o (√(log(N))). 
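The condition above involves the theoretical Gram matrix Ψ_m, which is not available in practice; a numerical proxy is to replace it by its empirical counterpart Ψ̂_m and to approximate the supremum defining ℒ(m) on a fine grid. The Python sketch below does exactly that in the repeated-paths setting (N>1), taking the unknown constant C equal to 1 for illustration; the helper design is assumed to return the matrix of basis values, as in the previous sketches, and none of this is claimed to be the author's procedure.

import numpy as np

def check_stability_condition(paths, design, m):
    # Numerical proxy for  L(m) * (||Psi_m^{-1}||_op v 1) <= N / log^2(N)  (constant taken as 1)
    N, n_plus_1 = paths.shape
    n = n_plus_1 - 1
    X = paths[:, :-1].ravel()
    Phi = design(X)                                     # (Nn, m) matrix of basis values
    Psi_hat = Phi.T @ Phi / (N * n)                     # empirical Gram matrix Psi_hat_m
    lam_min = np.linalg.eigvalsh(Psi_hat)[0]            # smallest eigenvalue of Psi_hat_m
    op_norm_inv = np.inf if lam_min <= 0 else 1.0 / lam_min
    grid = np.linspace(X.min(), X.max(), 2000)          # grid approximation of L(m) = sup_x sum_l phi_l(x)^2
    L_m = (design(grid) ** 2).sum(axis=1).max()
    lhs = L_m * max(op_norm_inv, 1.0)
    rhs = N / np.log(N) ** 2
    return lhs, rhs, lhs <= rhs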
In the next theorem, we prove that the spline-based ridge estimator of σ^2_A_N reaches a faster rate of convergence compared to the result of Corollary <ref> for the collection [𝐁]. Suppose that N ∝ n and consider the ridge estimator σ^2_A_N,m of σ^2_A_N based on the spline basis. Furthermore, suppose that L = log(N), A_N = o(√(log(N))) and K ∝ (Nn)^1/(2β+1)A_N (m = K + M). Under Assumptions <ref> and for σ^2∈Σ_I(β,R) with I = [-A_N, A_N], the following holds: 𝔼[σ^2_A_N,m-σ^2_A_N^2_n,N] ≤ Clog^β(N)(Nn)^-2β/(2β+1) where C>0 is a constant depending on β. The above result shows that the risk of the ridge estimator of σ^2_A_N on [-A_N,A_N] reaches a rate of order (Nn)^-β/(2β+1) (up to a log-factor) thanks to Condition (<ref>), which allows us to take advantage of the equivalence relation between the empirical norms ._n and ._n,N given in Equation (<ref>) to derive a finer estimation error (see proof of Theorem <ref>). Note that the obtained result depends on an appropriate choice of the estimation interval [-A_N,A_N], which tends to ℝ as N tends to infinity. Therefore, any choice of A_N such that A_N/√(log(N))⟶ +∞ cannot lead to a consistent estimation error, since Equation (<ref>) is no longer satisfied for the upper-bounding of ℙ(Ω^c_n,N,m) by a term that tends to zero as N →∞. Thus, the assumption A_N = o(√(log(N))) is a necessary and sufficient condition for the validation of Condition (<ref>), which leads, together with Assumption <ref>, to the result of Theorem <ref>. Finally, under the assumptions of Theorem <ref> and considering the risk of σ^2_A_N,m based on the empirical norm ._n, we also obtain 𝔼[σ^2_A_N,m-σ^2_A_N^2_n] = O(log^β(N)(Nn)^-2β/(2β+1)). In fact, under Condition (<ref>), the estimator σ^2_A_N,m satisfies the results of Theorem <ref> with I = [-A_N,A_N] and A_N = o(√(log(N))), which implies rates of the same order for the two empirical norms. § ADAPTIVE ESTIMATION OF THE DIFFUSION COEFFICIENT FROM REPEATED OBSERVATIONS In this section, we suppose that n ∝ N and we propose an adaptive ridge estimator of σ^2 by selecting an optimal dimension from the sample D_N. In fact, consider the estimator σ^2_K,L where K satisfies: K:=K∈𝒦min{γ_n,N(σ^2_K)+pen(K)} and the penalty function pen : K↦pen(K) is established using the chaining technique of <cit.>. We derive below the risk of the adaptive estimator of σ^2_|I when the interval I⊂ℝ is compact and the sample size N →∞. Suppose that N ∝ n,   L=log(N) and consider the collection [B] with K ∈𝒦 = {2^q,  q=0,1,…,q_max}⊂ℳ = {1,…,√(N)/log(N)}. Under Assumption <ref>, there exists a constant C>0 such that, 𝔼[σ^2_K,L-σ^2_|I^2_n,N]≤ 34K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)}+C/Nn where pen(K) = κ(K+M)log(N)/Nn with κ > 0 a numerical constant. We deduce from Corollary <ref> and its assumptions that the adaptive estimator σ^2_K,L satisfies: 𝔼[σ^2_K,L-σ^2_|I^2_n] = O((Nn)^-2β/(2β+1)). This result is justified since the penalty term is of the same order (up to a log-factor) as the estimation error established in Theorem <ref>. Considering the adaptive estimator of σ^2 on the real line I = ℝ when the sample size N →∞, we obtain the following result. Suppose that N ∝ n and L = log(N), and consider the collection [𝐁] with K ∈𝒦 = {2^q,  q=0,1,…,q_max}⊂ℳ = {1,…,√(N)/log(N)}. Under Assumption <ref> and for N large enough, there exists a constant C>0 such that, 𝔼[σ^2_K,L - σ^2^2_n,N] ≤ 3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2^2_n + pen(K)} + C/Nn, where pen(K) = κ^'(K+M)log(N)/Nn with κ^'>0 a numerical constant.
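In practice, the selection of K amounts to computing the contrast of each candidate estimator over the dyadic collection and adding the penalty. A Python sketch is given below for the spline collection [𝐁] in the repeated-paths setting, reusing the bspline_design and ridge_sigma2 helpers sketched earlier; the default value κ = 4 anticipates the numerical calibration reported in the numerical study below, and the whole routine is an illustration rather than the author's implementation.

import numpy as np

def select_dimension(paths, A, M=3, q_max=5, kappa=4.0, L=None):
    # Data-driven choice of K in {2^q, q = 0, ..., q_max} for the collection [B]
    N, n_plus_1 = paths.shape
    n = n_plus_1 - 1
    if L is None:
        L = np.log(N * n)                               # cut-off used for the constrained spaces
    X = paths[:, :-1].ravel()
    U = (np.diff(paths, axis=1) ** 2 * n).ravel()
    best = None
    for K in (2 ** q for q in range(q_max + 1)):
        design = lambda x, K=K: bspline_design(np.atleast_1d(x), A=A, K=K, M=M)
        a, est = ridge_sigma2(paths, design, K + M, L)
        contrast = np.mean((U - design(X) @ a) ** 2)    # gamma_{n,N}(sigma2_hat_K)
        crit = contrast + kappa * (K + M) * np.log(N) / (N * n)   # contrast + pen(K)
        if best is None or crit < best[0]:
            best = (crit, K, est)
    return best[1], best[2]                             # selected K and the associated estimator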
We have a penalty term of the same order as the one obtained in Theorem <ref> where σ^2 is estimated on a compact interval. One can deduce that the adaptive estimator reaches a rate of the same order as the rate of the non-adaptive estimator given in Corollary <ref> for the collection [𝐁]. If we consider the adaptive estimator of the compactly supported diffusion coefficient built from a single diffusion path, we obtain below an upper-bound of its risk of estimation. Suppose that N = 1,   L = √(log(n)) and consider the collection [𝐁] with K ∈𝒦 = {2^q,  q=0,…,q_max}⊂ℳ = {1,…,√(n)/log(n)}. Under Assumption <ref>, it yields 𝔼[σ^2_K,L-σ^2_|I^2_n,1] ≤   3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)} + C/n, where C>0 is a constant depending on τ_0, and pen(K) = κ(K+M)log(n)/n with κ>0 a numerical constant. We deduce from Theorem <ref> that if we assume that σ^2 ∈Σ_I(β,R), then the adaptive estimator σ^2_K,L reaches a rate of order n^-β/(2β+1) (up to a log-factor). The result of this theorem is almost a deduction of the result of Theorem <ref>, the slight difference being the use, in the proofs, of the local time of the process and of the equivalence relation between the pseudo-norm ._n,1 and the pseudo-norm ._X instead of the empirical norm ._n considered in the proof of Theorem <ref>. § NUMERICAL STUDY This section is devoted to the numerical study on a simulation scheme. Section <ref> focuses on the presentation of the chosen diffusion models. In Section <ref>, we describe the scheme for the implementation of the ridge estimators. We mainly focus on the B-spline basis for the numerical study, and in Section <ref>, we add a numerical study on the performance of the Hermite-based ridge estimator of σ^2 on ℝ. Finally, we compare the efficiency of our estimator built on the real line from a single path with that of the Nadaraya-Watson estimator proposed in <cit.>. §.§ Models and simulations Recall that the time horizon is T=1 and X_0 = 0. Consider the following diffusion models:
Model 1 (Ornstein-Uhlenbeck): b(x) = 1-x,   σ(x)= 1
Model 2: b(x) = 1-x,   σ(x) = 1-x^2
Model 3: b(x) = 1-x,   σ(x) = 1/3+sin(2π x)+cos^2(π/2x)
Model 1 is the commonly used Ornstein-Uhlenbeck model, known to be a simple diffusion model satisfying Assumption <ref>. Model 2 does not satisfy Assumption <ref>. Model 3 satisfies Assumption <ref> with a multimodal diffusion coefficient. The size N of the sample D_N takes values in the set {1,10,100,1000}, while the length n of the paths varies in the set {100,250,500,1000}. As we work with the spline basis, the dimension m=K+M of the approximation space is chosen such that M=3 and K takes values in 𝒦={2^p, p=0,⋯,5}, so that the subspaces are nested inside each other. The diffusion paths are simulated using the function of the R package (see <cit.> for more details on the simulation of SDEs). §.§ Implementation of the ridge estimators In this section, we assess the quality of estimation of the adaptive estimator σ^2_m in each of the 3 models through the computation of its risk of estimation. We compare the performance of the adaptive estimator with that of the oracle estimator σ^2_m^* where m^* is given by: m^*:=m∈ℳmin σ^2_m-σ^2^2_n,N. For the spline basis, we have m^* = K^* + M with M=3. Finally, we complete the numerical study with a representation of a set of 10 estimators of σ^2 for each of the 3 models.
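The simulation step of the Monte-Carlo protocol described next can be reproduced with a standard Euler-Maruyama scheme. The Python sketch below generates a sample D_{N,n} for the three models with T = 1 and X_0 = 0; it is a self-contained illustration and does not reproduce the R routine referred to above, and the grouping of the terms in σ for Model 3 simply follows the displayed formula.

import numpy as np

def simulate_paths(model, N, n, x0=0.0, seed=None):
    # Euler-Maruyama scheme on [0, 1] with time step Delta_n = 1/n
    rng = np.random.default_rng(seed)
    b = lambda x: 1.0 - x                               # common drift b(x) = 1 - x
    sigma = {
        1: lambda x: np.ones_like(x),                   # Model 1 (Ornstein-Uhlenbeck)
        2: lambda x: 1.0 - x ** 2,                      # Model 2
        3: lambda x: 1.0 / 3.0 + np.sin(2 * np.pi * x) + np.cos(np.pi * x / 2) ** 2,  # Model 3
    }[model]
    dt = 1.0 / n
    X = np.empty((N, n + 1))
    X[:, 0] = x0
    dW = rng.normal(scale=np.sqrt(dt), size=(N, n))
    for k in range(n):
        X[:, k + 1] = X[:, k] + b(X[:, k]) * dt + sigma(X[:, k]) * dW[:, k]
    return X

paths = simulate_paths(model=3, N=100, n=250, seed=1)   # a sample D_{100,250} for Model 3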
We evaluate the MISE of the spline-based adaptive estimators σ^2_K by repeating 100 times the following steps: * Simulate samples D_N,n and D_N^',n with N∈{1,10,100,1000}, N^'=100 and n ∈{100, 250,1000}. * For each K∈𝒦, and from D_N,n, compute estimators σ^2_K given in Equations (<ref>) and (<ref>). * Select the optimal dimension K∈𝒦 using Equation (<ref>) and compute K^* from Equation (<ref>) * Using D_N^',n, evaluate σ^2_K-σ^2^2_n,N^' and σ^2_K^*-σ^2^2_n,N^'. We deduce the risks of estimation considering the average values of σ^2_m-σ^2^2_n,N^' and σ^2_m^*-σ^2^2_n,N^' over the 100 repetitions. Note that we consider in this section, the estimation of σ^2 on the compact interval I = [-1,1] and on the real line . The unknown parameters κ and κ^' in the penalty functions given in Theorem <ref> and Theorem <ref> respectively, are numerically calibrated (details are given in Appendix <ref>), and we choose κ = 4 and κ^' = 5 as their respective values. §.§ Numerical results We present in this section the numerical results of the performance of the spline-based adaptive estimators of σ^2_|I with I ⊆ together with the performance of the oracle estimators. We consider the case I=[-1,1] for the compactly supported diffusion coefficient, and the case I=. Tables <ref> and <ref> present the numerical results of estimation of σ^2_|I from simulated data following the steps given in Section <ref>. The results of Table <ref> and Table <ref> show that the adapted estimator σ^2_K is consistent, since its MISE tends to zero as both the size N of the sample D_N,n and the length n of paths are larger. Moreover, note that in most cases, the ridge estimators of the compactly supported diffusion coefficients perform better than those of the non-compactly supported diffusion functions. As expected, we observe that the oracle estimator has generally a better performance compared to the adaptive estimator. Nonetheless, we can remark that the performances are very close in several cases, highlighting the efficiency of the data-driven selection of the dimension. An additional important remark is the significant influence of the length n of paths on the performance of σ^2_K and σ^2_K^*,L (by comparison of Table <ref> with Table <ref>), which means that estimators built from higher frequency data are more efficient. A similar remark is made for theoretical results obtained in Sections <ref> and <ref>. Performance of the Hermite-based estimator of the diffusion coefficient We focus on the estimation of σ^2 on and assess the performance of its Hermite-based estimator (see Section <ref>). We present in Table <ref>, the performance of the oracle estimator σ^2_m^*,L. From the numerical results of Table <ref>, we observe that the Hermite-based estimator of σ^2 is consistent as the sample size N and the length n paths take larger values. Estimation of the diffusion coefficient from one path Consider ridge estimators of σ^2_|I with I=[-1,1]. For the case of the adaptive estimators of σ^2_|I, the dimension K is selected such that K = K∈𝒦minγ_n(σ^2_K) + pen(K) where pen(K) = κ(K+M)log(n)/n with κ >0. We choose the numerical constant κ = 4 and we derive the numerical performance of the adaptive estimator of σ^2_|I. Table <ref> gives the numerical performances of both the adaptive estimator and the oracle estimator of σ^2_|I on the compact interval I=[-1,1] and from a single diffusion path. From the obtained results, we see that the estimators are numerically consistent. 
However, we note that the convergence is slow (when n increases from 100 to 1000), which highlights the significant impact of the number N of paths on the efficiency of the ridge estimator. Comparison of the efficiency of the ridge estimator of the diffusion coefficient with its Nadaraya-Watson estimator. Consider the adaptive estimator σ^2_K of the square of the diffusion coefficient built on the real line from a single diffusion path (N=1), where the dimension K is selected using Equation (<ref>). For the numerical assessment, we use the interval I = [-10^6, 10^6] to approximate the real line ℝ, and then use Equation (<ref>) for the data-driven selection of the dimension. We want to compare the efficiency of σ^2_K with that of the Nadaraya-Watson estimator of σ^2 built from a diffusion path X̅ = (X_k/n)_1≤ k≤ n and given for all x ∈ℝ by S_n(x) = ∑_k=1^n-1K((X_k/n - x)/h_n)[X_(k+1)/n - X_k/n]^2/(n^-1∑_k=1^nK((X_k/n - x)/h_n)) where K is a positive kernel function and h_n is the bandwidth. Thus, the estimator S_n(x) is consistent under the condition nh^4_n→ 0 as n tends to infinity (see <cit.>). We use the corresponding function of the R package to compute the Nadaraya-Watson estimator S_n. We remark from the results of Table <ref> that our ridge estimator is more efficient. Note that for the kernel estimator S_n, the bandwidth is computed using the rule of thumb of Scott (see <cit.>). The bandwidth is proportional to n^-1/(d+4), where n is the number of points and d is the number of spatial dimensions. §.§ Concluding remarks The results of our numerical study show that our ridge estimators built both on a compact interval and on the real line are consistent as N and n take larger values, or as only n takes larger values when the estimators are built from a single path. These results are in accordance with the theoretical results established in the previous sections. Moreover, as expected, we obtained the consistency of the Hermite-based estimators of σ^2 on the real line ℝ. Nonetheless, we only focus on the Hermite-based oracle estimator, since we did not establish a risk bound for the corresponding adaptive estimator. Finally, we remark that the ridge estimator of σ^2 built from a single path performs better than its Nadaraya-Watson kernel estimator proposed in <cit.> and implemented in the R package. § CONCLUSION In this article, we have proposed ridge-type estimators of the diffusion coefficient on a compact interval from a single diffusion path. We took advantage of the local time of the diffusion process to prove the consistency of non-adaptive estimators of σ^2 and derived a rate of convergence of the same order as the optimal rate established in <cit.>. We also proposed an estimator of σ^2 on the real line from a single path. We proved its consistency using the method described in Section <ref>, and derived a rate of convergence of order n^-β/(4β+1) over a Hölder space for the collection [𝐁]. Then, we extended the study to the estimation of σ^2 from repeated discrete observations of the diffusion process. We established rates of convergence of the ridge estimators both on a compact interval and on ℝ. We completed the study by proposing adaptive estimators of σ^2 on a compact interval for N=1 and N→∞, and on the real line for N→∞. A perspective on the estimation of the diffusion coefficient could be the establishment of a minimax rate of convergence of the compactly supported (square of the) diffusion coefficient from repeated discrete observations of the diffusion process.
The case of the non-compactly supported diffusion coefficient may be a lot more challenging, since the transition density of the diffusion process is no longer lower-bounded. This new fact can lead to different rates of convergence depending on the considered method (see Section <ref>). § ACKNOWLEDGEMENTS I would like to thank my supervisors, Christophe Denis, Charlotte Dion-Blanc, and Viet-Chi Tran, for their sound advice, guidance and support throughout this research project. Their experience in scientific research and their expertise in stochastic calculus and process statistics were decisive in providing precise and relevant answers to the issues raised in this paper, taking into account what has already been done in the literature. I am particularly grateful for their precise and constant help throughout the writing of this article, from editorial advice to proofreading the introduction, the proofs and all other sections of the paper. § PROOFS In this section, we prove our main results of Sections <ref>, <ref> and <ref>. To simplify our notations, we set Δ_n = Δ(=1/n) and constants are generally denoted by C>0 or c>0 whose values can change from a line to another. Moreover, we use the notation C_α in case we need to specify the dependency of the constant C on a parameter α. §.§ Technical results Recall first some useful results on the local time and estimates of the transition density of diffusion processes. For all integer q≥ 1, there exists C^*>0 depending on q such that for all 0≤ s<t≤ 1, [|X_t-X_s|^2q]≤ C^*(t-s)^q. The proof of Lemma <ref> is provided in <cit.>. Under Assumptions <ref>, there exist constants c_σ >1, C > 1 such that for all t ∈ (0,1], x ∈ℝ, 1/C√(t)exp(-c_σx^2/t) ≤ p_X(t,x) ≤C√(t)exp(-x^2/c_σt). The proof of Proposition <ref> is provided in <cit.>, Proposition 1.2. Let h be a L_0-lipschitz function. Then there exists h̃∈𝒮_K_N,M, such that |h̃(x)-h(x)| ≤ C log(N)/K_N, ∀ x ∈ (-log(N),log(N)), where C >0 depends on L_0, and M. The proof of Proposition <ref> is provided in <cit.>. The finite-dimensional vector space 𝒮_K_N,M = 𝒮_K_N+M is introduced in Section <ref>. Under Assumption <ref>, there exist C_1,C_2 >0 such that for all A >0, sup_t ∈ [0,1](|X_t|≥ A) ≤C_1/Aexp(-C_2A^2). The proof of Lemma <ref> is provided in <cit.>, Lemma 7.3. Under Assumption <ref>, the following holds: ∀ x∈,   ℒ^x = ℒ^x_-   a.s. where ℒ^x_- = ε→ 0limℒ^x-ε. The result of Lemma <ref> justifies the definition of the local time ℒ^x, for x∈, given in Equation (<ref>). From <cit.>, Theorem 1.7, we have ∀ x∈,   ℒ^x - ℒ^x_- = 2∫_0^1_X_s = xdX_s = 2∫_0^1_X_s = xb(X_s)ds + 2∫_0^1_X_s = xσ(X_s)dW_s. For all x∈ and for all s∈[0,1], we have for all ε>0, (X_s = x) =  ε→ 0lim (X_s≤ x + ε) - ε→ 0lim (X_s≤ x - ε) = ε→ 0lim F_s(x + ε) - ε→ 0lim F_s(x - ε) =   F_s(x) - F_s(x^-) =   0 Thus, for all x∈, [|ℒ^x - ℒ^x_-|] ≤   2∫_0^1|b(x)|(X_s = x)ds + 2[|∫_0^1_X_s=xσ(X_s)dW_s|] =   2[|∫_0^1_X_s=xσ(X_s)dW_s|]. Using the Cauchy Schwartz inequality, we conclude that [|ℒ^x - ℒ^x_-|] ≤   2√((∫_0^1_X_s=xσ^2(X_s)ds)) = 2σ(x)∫_0^1(X_s = x)ds = 0. Using the Markov inequality, we have ∀ ε>0,   (|ℒ^x - ℒ^x_-|>ε) ≤1/ε[|ℒ^x - ℒ^x_-|] = 0. We finally conclude that for all x ∈, (ℒ^x≠ℒ^x_-) = (|ℒ^x - ℒ^x_-|>0) = 0. §.§ Proofs of Section <ref>  §.§.§ Proof of Lemma <ref> The proof is divided into two parts for each of the two results to be proven. First result. Since the function h is continuous on , let H be a primitive of h on . 
We deduce that for all s ∈ [0,1], h(X_s) = ε→ 0limH(X_s + ε) - H(X_s - ε)/2ε = ε→ 0lim1/2ε∫_X_s - ε^X_s + εh(x)dx = ε→ 0lim1/2ε∫_-∞^+∞h(x)_(x-ε,x+ε)(X_s)dx. Finally, since h is integrable on and using the theorem of dominated convergence, we obtain ∫_0^1h(X_s)ds = ∫_-∞^+∞h(x)ε→ 0lim1/2ε∫_0^1_(x-ε,x+ε)(X_s)dsdx = ∫_-∞^+∞h(x)ℒ^xdx. Second result. Fix t∈(0,1] and consider P_X : (t,x) ↦∫_-∞^xp_X(t,y)dy the cumulative density function of the random variable X_t of the density function x ↦ p_X(t,x). We have: ∀ x∈,   (ℒ^x) = ε→ 0lim1/2ε∫_0^1[_(x-ε,x+ε)(X_s)]ds = ε→ 0lim1/2ε∫_0^1(x - ε≤ X_s≤ x + ε)ds = ∫_0^1ε→ 0limP_X(s,x+ε) - P_X(s,x-ε)/2εds = ∫_0^1p_X(s,x)ds. §.§.§ Proof of Theorem <ref>  Let Ω_n,m be the random event in which the two pseudo-norms ._n,1 and ._X are equivalent and given by Ω_n,m := g∈𝒮_m∖{0}⋂{|g^2_n,1/g^2_X-1| ≤1/2}. The proof of Theorem <ref> relies on the following lemma. Let γ > 1 be a real number. Under Assumption <ref>, the following holds (Ω^c_n,m) ≤ Cm^2γ/n^γ/2, where C>0 is a constant depending on γ. The parameter γ > 1 has to be chosen appropriately (i.e. such that m^2γ/n^γ/2 = o(1/n)) so that we obtain a variance term of the risk of the estimator σ^2_m of order mlog(n)/n (see Theorem <ref> and Corollary <ref>). Recall that since N = 1, ζ^1_kΔ=ζ^1,1_kΔ+ζ^1,2_kΔ+ζ^1,3_kΔ is the error term of the regression model, with: ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds], ζ^1,2_kΔ=2/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW^1_s, ζ^1,3_kΔ=2b(X^1_kΔ)∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s. Besides, R^1_kΔ =R^1,1_kΔ+R^1,2_kΔ, with: R^1,1_kΔ=1/Δ(∫_kΔ^(k+1)Δb(X^1_s)ds)^2+1/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)Φ(X^1_s)ds R^1,2_kΔ=2/Δ(∫_kΔ^(k+1)Δ(b(X^1_s)-b(X^1_kΔ))ds)(∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s) where Φ:=2bσ^'σ+[σ^''σ+(σ^')^2]σ^2. By definition of the projection estimator σ^2_m for each m∈ℳ (see Equation (<ref>)), for all h∈𝒮_m,L, we have: γ_n,1(σ^2_m)-γ_n,1(σ^2_|I)≤γ_n,1(h)-γ_n,1(σ^2_|I). Furthermore, for all h∈𝒮_m,L, γ_n,1(h)-γ_n,1(σ^2_|I)=σ^2_|I-h^2_n,1+2ν_1(σ^2_|I-h)+2ν_2(σ^2_|I-h)+2ν_3(σ^2_|I-h)+2μ(σ^2_|I-h), where, ν_i(h) = 1/n∑_k=0^n-1h(X^1_kΔ)ζ^1,i_kΔ, i∈{1,2,3}, μ(h)=1/n∑_k=0^n-1h(X^1_kΔ)R^1_kΔ, and ζ^1,1_kΔ, ζ^1,2_kΔ, ζ^1,3_kΔ are given in Equations (<ref>), (<ref>), (<ref>), and finally, R^1_kΔ = R^1,1_kΔ+R^1,2_kΔ given in Equations (<ref>) and (<ref>). Then, for all m ∈ℳ, and for all h ∈𝒮_m,L, we obtain from Equation (<ref>) that σ^2_m-σ^2_|I^2_n,1≤h-σ^2_|I^2_n,1+2ν(σ^2_m-h)+2μ(σ^2_m-h), with ν=ν_1+ν_2+ν_3. Then, it comes, 𝔼[σ^2_m-σ^2_|I^2_n,1] ≤h∈𝒮_m,Linfh-σ^2_|I^2_n+2𝔼[ν(σ^2_m-h)]+2𝔼[μ(σ^2_m-h)]. Besides, for any a,d>0, using the inequality xy ≤η x^2 + y^2/η with η = a, d, we have, 2ν(σ^2_m-h) ≤2/aσ^2_m-σ^2_|I^2_X+2/ah-σ^2_|I^2_X+ah∈𝒮_m, h_X=1supν^2(h), 2μ(σ^2_m-h) ≤2/dσ^2_m-σ^2_|I^2_n,1+2/dh-σ^2_|I^2_n,1+d/n∑_k=1^n(R^1_kΔ)^2. §.§.§ Upper bound of 1/n∑_k=1^n(R^1_kΔ)^2 We have: ∀ k∈[[1,n]], R^1_kΔ=R^1,1_kΔ+R^1,2_kΔ+R^1,3_kΔ with,   R^1,1_kΔ=1/Δ(∫_kΔ^(k+1)Δb(X^1_s)ds)^2, R^1,2_kΔ=1/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)Φ(X^1_s)ds   R^1,3_kΔ=2/Δ(∫_kΔ^(k+1)Δ(b(X^1_s)-b(X^1_kΔ))ds)(∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s). For all k∈[[1,n]], using the Cauchy-Schwarz inequality and Equation (<ref>), 𝔼[|R^1,1_kΔ|^2] ≤𝔼[(∫_kΔ^(k+1)Δb^2(X^1_kΔ)ds)^2]≤Δ𝔼[∫_kΔ^(k+1)Δb^4(X^1_kΔ)ds]≤ CΔ^2. Consider now the term R^1,2_kΔ. From Equation (<ref>), we have Φ=2bσ^'σ+[σ^''σ+(σ^')^2]σ^2 and according to Assumption <ref>, there exists a constant C>0 depending on σ_1 and α such that |Φ(X^1_s)| ≤ C[(2+|X^1_s|)(1+|X^1_s|^α) + (1+|X^1_s|^α)^2]. 
Then, from Equation (<ref>) and for all s∈(0,1], [Φ^2(X^1_s)] ≤ Cs∈(0,1]sup[(2+|X^1_s|)^2(1+|X^1_s|^α)^2 + (1+|X^1_s|^α)^4] < ∞ and 𝔼[|R^1,2_kΔ|^2] ≤1/Δ^2∫_kΔ^(k+1)Δ((k+1)Δ-s)^2ds∫_kΔ^(k+1)Δ𝔼[Φ^2(X^1_s)]ds≤ CΔ^2 Finally, under Assumption <ref>, from Equation (<ref>) and using the Cauchy-Schwarz inequality, we have 𝔼[|R^1,3_kΔ|^2] ≤4/Δ^2𝔼[Δ∫_kΔ^(k+1)ΔL^2_0|X^1_s-X^1_kΔ|^2ds(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2] ≤4/Δ√(𝔼[L^4_0Δ∫_kΔ^(k+1)Δ|X^1_s-X^1_kΔ|^4ds]𝔼[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^4]) ≤ CΔ^2. As a result, there exists a constant C>0 such that, 𝔼[1/n∑_k=1^n(R^1_kΔ)^2]≤ CΔ^2. We set a = d = 8 and considering the event Ω_n,m on which the empirical norms ._X and ._n,1 are equivalent, we deduce from Equations (<ref>), (<ref>) and (<ref>) that, 𝔼[σ^2_m-σ^2_|I^2_n,1_Ω_n,m]≤ 3h∈𝒮_minfh-σ^2_|I^2_n+C𝔼(h∈𝒮_m, h_X=1supν^2(h))+CΔ^2 where C>0 is a constant depending on σ_1. §.§ Upper bound of 𝔼(h∈𝒮_m, h_X=1supν^2(h)) For all h=∑_ℓ=0^m-1a_ℓϕ_ℓ∈𝒮_m such that h^2_X=1, we have h^2≤1/τ_0 (see Equation (<ref>)) and the coordinate vector 𝐚 = (a_-M,⋯,a_K-1) satisfies: * 𝐚^2_2≤ Cm    (m = K+M) for the spline basis (see <cit.>, Lemma 2.6) * 𝐚^2_2≤ 1/τ_0 for an orthonormal basis since h^2 = 𝐚^2_2. Furthermore, using the Cauchy-Schwarz inequality, we have: ν^2(h)=(∑_ℓ=0^m-1a_ℓν(ϕ_ℓ))^2≤𝐚^2_2∑_ℓ=0^m-1ν^2(ϕ_ℓ). Thus, since ν=ν_1+ν_2+ν_3, for all ℓ∈[[-M,K-1]] and for all i∈{1,2,3}, 𝔼[ν^2_i(ϕ_ℓ)]=  1/n^2𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,i_kΔ)^2]. * Case i=1 Recall that ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds] where W=W^1. We fix a initial time s∈[0,1) and set M^s_t=∫_s^tσ(X^1_u)dW_u, ∀ t≥ s. (M^s_t)_t≥ s is a martingale and for all t∈[s,1], we have: <M^s,M^s>_t=∫_s^tσ^2(X^1_u)du. Then, ζ^1,1_kΔ=1/Δ(M^kΔ_(k+1)Δ)^2-<M^kΔ,M^kΔ>_(k+1)Δ is also a ℱ_kΔ-martingale, and, using the Burkholder-Davis-Gundy inequality, we obtain for all k∈[[0,n-1]], 𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0, 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤C/Δ^2𝔼[(∫_kΔ^(k+1)Δσ^2(X^1_u)du)^2]≤ Cσ^4_1. Then, using Equation (<ref>) we have: 𝔼[ν^2_1(ϕ_ℓ)] =  1/n^2𝔼[∑_k=0^n-1ϕ^2_ℓ(X^1_kΔ)(ζ^1,1_kΔ)^2]=1/n^2𝔼[∑_k=0^n-1ϕ^2_ℓ(X^1_kΔ)𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]] ≤  Cσ^4_1/n^2𝔼[∑_k=0^n-1ϕ^2_ℓ(X^1_kΔ)] and, ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤Cσ^4_1/n^2𝔼[∑_k=0^n-1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)]. One has: ∑_ℓ=-M^K-1B^2_ℓ(X^1_η(s))≤ 1   for the Spline basis   (m = K + M), ∑_ℓ = 0^m-1ϕ^2_ℓ(X^1_η(s))≤ Cm   for an orthonormal basis with   C = 0 ≤ℓ≤ m-1maxϕ_ℓ^2_∞. Thus, it comes that * ∑_ℓ=-M^K-1𝔼[ν^2_1(B_ℓ)]≤ C/n   for the Spline basis, * ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤ Cm/n   for an orthonormal basis, and, 𝔼(h∈𝒮_m, h^2_X=1supν^2_1(h))≤ Cm/n where C>0 is a constant depending on σ_1 and the basis. * Case i=2 Wa have ζ^1,2_kΔ=2/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s and, 𝔼[ν^2_2(ϕ_ℓ)] =  4𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)∫_kΔ^(k+1)Δ(k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2] =  4𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))(η(s)+Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2] ≤  Cσ^4_1Δ^2𝔼[∫_0^1ϕ^2_ℓ(X^1_η(s))ds] where C>0 is a constant. We deduce for both the spline basis and any orthonormal basis that there exists a constant C>0 depending on σ_1 such that: 𝔼(h∈𝒮_m, h^2_X=1supν^2_2(h))≤ Cm/n^2. * Case i=3 We have ζ^1,3_kΔ=2b(X^1_kΔ)∫_kΔ^(k+1)Δσ(X^1_s)dW_s and, 𝔼[ν^2_3(ϕ_ℓ)] =  4/n^2𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))b(X^1_η(s))σ(X^1_s)dW_s)^2] ≤  4σ^2_1/n^2𝔼[∫_0^1ϕ^2_ℓ(X^1_η(s))b^2(X^1_η(s))ds] Since for all x∈ℝ, b^2(x)≤ C_0(1+x^2) and t∈[0,1]sup𝔼(|X_t|^2)<∞, there exists a constant C>0 depending on σ_1 such that: 𝔼(h∈𝒮_m, h^2_X=1supν^2_3(h))≤ Cm/n^2. 
We finally obtain from Equations (<ref>), (<ref>) and (<ref>) that there exists a constant C>0 depending on σ_1 such that: 𝔼(h∈𝒮_m, h^2_X=1supν^2(h))≤ Cm/n. We deduce from Equations (<ref>) and (<ref>) that there exists a constant C>0 depending on σ_1 such that, [σ^2_m - σ^2_|I^2_n,1_Ω_n,m] ≤ 3h∈𝒮_m,Linfσ^2_|I - h^2_n + C(m/n + Δ^2). For n large enough, we have σ^2_m - σ^2_|I^2_∞≤ 2mL since σ^2_m_∞≤√(mL). Then, from Lemma <ref> and for all m∈ℳ, there exists a constant C>0 depending on σ_1 such that 𝔼[σ^2_m-σ^2_|I^2_n,1] =𝔼[σ^2_m-σ^2_|I^2_n,1_Ω_n,m]+𝔼[σ^2_m-σ^2_|I^2_n,1_Ω^c_n,m] ≤𝔼[σ^2_m-σ^2_|I^2_n,1_Ω_n,m]+2mLℙ(Ω^c_n,m) ≤ 3h∈𝒮_m,Linfσ^2_|I - h^2_n + C(m/n + m^2γ+1L/n^γ/2 + Δ^2). Since the pseudo-norms ._n,1 and ._X are equivalent on the event Ω_n,m, then, using Lemma <ref>, there exists a constant C>0 depending on σ_1 such that 𝔼[σ^2_m-σ^2_|I^2_X] = 𝔼[σ^2_m-σ^2_|I^2_X_Ω_n,m] + 𝔼[σ^2_m-σ^2_|I^2_X_Ω^c_n,m] ≤ 8𝔼[σ^2_m-σ^2_|I^2_n,1] + 10h∈𝒮_minfσ^2_|I-h^2_n + 2mL(Ω^c_n,m) ≤ 34h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/n+m^2γ+1L/n^γ/2+Δ^2). Finally, since the estimator σ^2_m is built from a diffusion path X̅^1 independent of the diffusion process X, and from Equations (<ref>) and (<ref>), the pseudo-norm ._X depending on the process X and the empirical norm ._n are equivalent (∀ h∈𝕃^2(I),  h^2_n≤ (τ_1/τ_0)[h^2_X]), there exists a constant C>0 depending on σ_1, τ_0 and τ_1 such that 𝔼[σ^2_m-σ^2_|I^2_n] ≤ 34τ_1/τ_0h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/n+m^2γ+1L/n^γ/2+Δ^2). The proof of this Lemma mainly focus on the spline basis and the Fourier basis based on functions cos and sin which are Lipschitz functions. Thus, for all g = ∑_ℓ = 0^m-1a_ℓϕ_ℓ∈𝒮_m, |g^2_n,1 - g^2_X| ≤∫_0^1|g^2(X_η(s)) - g^2(X_s)|ds≤ 2g_∞∫_0^1|g(X_η(s)) - g(X_s)|ds. From Equation (<ref>), one has [g^2_X] ≥τ_0 g^2. Thus, if g^2_X = 1, then g^2≤ 1/τ_0, and we deduce for all g = ∑_ℓ=0^m-1a_ℓϕ_ℓ that there exists a constant C>0 such that * Spline basis: g_∞≤a_2 ≤ C√(m)   (see   <cit.>) * Fourier basis: g_∞≤ C√(m)   since g = a_2 and ∑_ℓ=0^m-1ϕ^2_ℓ = O(m). Moreover, each g∈𝒮_m such that g^2_X = 1 is the Lipschitz function with a Lipschitz coefficient L_g = O(m^3/2). For the spline basis, this result is obtained in <cit.>, proof of Lemma C.1 combined with Lemma 2.6. For the Fourier basis, for all x,y∈ I and using the Cauchy Schwarz inequality, we obtain |g(x) - g(y)| ≤ ∑_ℓ = 0^m - 1|a_ℓ|.|ϕ_ℓ(x) - ϕ_ℓ(y)| ≤ 2π m√(m)𝐚_2|x-y| ≤ 2π/τ_0m√(m)|x-y|. Back to Equation (<ref>), there exists a constant C>0 such that |g^2_n,1 - g^2_X| ≤ Cm^2∫_0^1|X_η(s) - X_s|ds We have: Ω^c_n,m = {ω∈Ω,  ∃ g∈𝒮_m∖{0},  |g^2_n,1/g^2_X-1| > 1/2}, and, using Equation (<ref>), we obtain g∈𝒮_m∖{0}sup|g^2_n,1/g^2_X-1| = g∈𝒮_m, g^2_X = 1sup|g^2_n,1-g_X|≤ Cm^2∫_0^1|X_η(s) - X_s|ds. Finally, using the Markov inequality, the Hölder inequality, Equation (<ref>), and Lemma <ref>, we conclude that (Ω^c_n,m) ≤  (Cm^2∫_0^1|X_η(s) - X_s|ds≥1/2) ≤   Cm^2γ∫_0^1[|X_η(s) - X_s|^γ]ds ≤   Cm^2γ/n^γ/2 with γ∈ (1,+∞). §.§.§ Proof of Theorem <ref> Since L=log^2(n), we have 𝔼[σ^2_m,L-σ^2^2_n,1] =  𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1] + 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^c^2_n,1] ≤  𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1] + 2log^2(n)t∈(0,1]sup(|X_t|>log(n)). From Equation (<ref>) (Proof of Theorem <ref>), for all h∈𝒮_m,L, 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1] ≤h∈𝒮_m,Linfh-σ^2^2_n+2∑_i=1^3𝔼[ν_i(σ^2_m-h)]+2𝔼[μ(σ^2_m-h)] where ν_i,   i=1,2,3 and μ are given in Equation (<ref>). For all i∈{1,2,3} and for all h∈𝒮_m,L, one has 𝔼[ν_i(σ^2_m,L-h)]≤√(2mlog^2(n))√(∑_ℓ=0^m-1𝔼[ν^2_i(ϕ_ℓ)]). 
* Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)] According to Equation (<ref>), we have ∀ℓ∈[[0,m-1]], ν_1(ϕ_ℓ)=1/n∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,1_kΔ where ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds] is a martingale satisfying 𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0 and 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤1/Δ^2𝔼[(∫_kΔ^(k+1)Δσ^2(X^1_s)ds)^2]≤ Cσ^4_1 with C>0 a constant, W=W^1 and (ℱ_t)_t≥ 0 the natural filtration of the martingale (M_t)_t∈[0,1] given for all t∈[0,1] by M_t=∫_0^tσ(X^1_s)dW_s. We derive that ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]= 1/n^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,1_kΔ)^2]=1/n^2𝔼[∑_k=0^n-1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)(ζ^1,1_kΔ)^2] since for all integers k, k^' such that k > k^'≥ 0, we have [ϕ_ℓ(X^1_kΔ)ζ^1,1_kΔϕ_ℓ(X^1_k^'Δ)ζ^1,1_k^'Δ|ℱ_kΔ] = ϕ_ℓ(X^1_kΔ)ζ^1,1_k^'Δϕ_ℓ(X^1_k^'Δ)[ζ^1,1_kΔ|ℱ_kΔ] = 0. For each k∈[[0,n-1]], we have ∑_ℓ=0^m-1ϕ_ℓ(X^1_kΔ) = ∑_ℓ=-M^K-1B_ℓ(X^1_kΔ) =1   for   the   spline   basis ∑_ℓ=0^m-1ϕ_ℓ(X^1_kΔ)≤ Cm   For   an   orthonormal   basis   with  C=0 ≤ℓ≤ m-1maxϕ_ℓ_∞. Finally, there exists a constant C>0 such that ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤C/n  for   the   spline   basis Cm/n  for   an   orthonormal   basis. * Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)] For all k∈[[0,n-1]] and for all s∈[0,1], set η(s)=kΔ if s∈[kΔ,(k+1)Δ). We have: ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)] =4∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2] =4∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))(η(s)+Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2]. We conclude that ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)]≤C/n^2  for   the   spline   basis Cm/n^2  for   an   orthonormal   basis. where the constant C>0 depends on the diffusion coefficient and the upper bound of the basis functions. * Upper bound of ∑_ℓ=0^m-1𝔼[ν^3_2(ϕ_ℓ)] We have: ∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)] =4/n^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)b(X^1_kΔ)σ(X^1_s)dW_s)^2] =4/n^2∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))b(X^1_η(s))σ(X^1_s)dW_s)^2] ≤4/n^2𝔼[∫_0^1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_η(s))b^2(X^1_η(s))σ^2(X^1_s)ds]. Since for all x∈ℝ, b(x)≤ C_0(1+x^2) and t∈[0,1]sup𝔼(|X_t|^4)<∞, there exists a constant C>0 depending on the diffusion coefficient such that ∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)]≤C/n^2  for   the   spline   basis Cm/n^2  for   an   orthonormal   basis. We finally deduce that from Equations (<ref>) and (<ref>)  that for all h∈𝒮_m,L, 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n)+2𝔼[μ(σ^2_m,L-h)]    [B] 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n)+2𝔼[μ(σ^2_m,L-h)]    [F] where C>0 is a constant. It remains to obtain an upper bound of the term μ(σ^2_m,L-h). For all a>0 and for all h∈𝒮_m,L, 2μ(σ^2_m,L-h) ≤ 2/aσ^2_m,L-σ^2^2_n,1+2/ah-σ^2^2_n,1+a/n∑_k=0^n-1(R^1_kΔ)^2 2𝔼[μ(σ^2_m,L-h)] ≤ 2/a𝔼σ^2_m,L-σ^2^2_n,1+2/ah∈𝒮_minfh-σ^2^2_n +a/n∑_k=0^n-1𝔼[(R^1_kΔ)^2]. Using Equations (<ref>), (<ref>) and setting a=4, we deduce that there exists constant C>0 depending on σ_1 such that, 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n) + 2log^2(n)t∈(0,1]sup(|X_t|>A_n)   [B] 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n) + 2log^2(n)t∈(0,1]sup(|X_t|>A_n)   [F]. From Proposition <ref>, t∈(0,1]sup(|X_t|>log(n))≤log^-1(n)exp(-clog^2(n)) with c>0 a constant. Then, we obtain from Equation (<ref>) that 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n)     [B] 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n)     [F]. §.§.§ Proof of Corollary <ref> We have under Assumption <ref> from Theorem <ref> that 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n)     [B] 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n)     [F]. For [B]. We have m=K+M with M∈ℕ^* fixed. 
From Proposition <ref> and under Assumption <ref>, there exists a constant C>0 depending on β such that h∈𝒮_K+M,Linfh-σ^2^2_n≤ Clog^2β(n)K^-2β. Since K ∝ n^1/(4β+1), we obtain that [σ^2_K - σ^2^2_n,1] = O(log^2β(n)n^-2β/(4β+1)). For [F]. Under Assumptions <ref> and <ref> and From Lemma 12 in <cit.>, there exists a constant C>0 depending on τ_1 of Equation (<ref>) and the smoothness parameter β of the Besov space 𝐁^β_2,∞ such that h∈𝒮_m,Linfh-σ^2^2_n≤τ_1h∈𝒮_m,Linfh-σ^2^2≤ C|σ^2|^2_β m^-2β where |σ^2|_β is the semi-norm of σ^2 in the Besov space ℬ^β_2,∞([-log(n),log(n)]). Under Assumption <ref>, |σ^2|_β < ∞. Then, for m ∝ n^1/2(2β+1), the exists a constant C>0 depending on β, σ_1 and τ_1 such that [σ^2_m - σ^2^2_n,1] ≤ Clog(n)n^-β/(2β+1). §.§ Proof of Section <ref> The following lemma allows us to obtain a risk bound of σ^2_m,L defined with the empirical norm ._n from the risk bound defined from the pseudo norm ._n,N. Let σ^2_m,L be the truncated projection estimator on of σ^2 over the subspace 𝒮_m,L. Suppose that L = log^2(N),   N>1. Under Assumption <ref>, there exists a constant C>0 independent of m and N such that [σ^2_m,L - σ^2^2_n,N] - 2[σ^2_m,L - σ^2^2_n] ≤ C m^2log^3(N)/N. The proof of Lemma <ref> is provided in <cit.>, Theorem 3.3. The proof uses the independence of the copies X̅^1,…,X̅^N of the process X at discrete times, and the Bernstein inequality. §.§.§ Proof of Theorem <ref>  For fixed n and N in ℕ^*, we set for all m∈ℳ, Ω_n,N,m:=h∈𝒮_m∖{0}⋂{|h^2_n,N/h^2_n-1|≤1/2}. As we can see, the empirical norms h_n,N and h_n of any function h∈𝒮_m∖{0} are equivalent on Ω_n,N,m. More precisely, on the set Ω_n,N,m, for all h∈𝒮_m∖{0}, we have : 1/2h^2_n≤h^2_n,N≤3/2h^2_n. We have the following result: Under Assumption <ref>, the following holds: * If n ≥ N or n ∝ N, then m ∈ℳ = {1,…,√(N)/log(Nn)} and, ℙ(Ω^c_n,N,m)≤ 2exp(-C√(N)). * If n ≤ N, then m ∈ℳ= {1,…,√(n)/log(Nn)} and ℙ(Ω^c_n,N,m) ≤ 2exp(-C√(n)) where C>0 is a constant. We have: Ω^c_n,N,m={ω∈Ω, ∃ h_0∈𝒮_m, |h^2_n,N/h^2_n-1|>1/2}, Denote by ℋ_m = {h∈𝒮_m,  h_n = 1} and ℋ^ε_m the ε-net of ℋ_m for any ε >0. We have h∈ℋ_msup|h^2_n,N/h^2_n-1| = h∈ℋ_msup|h^2_n,N-1|. Let ε > 0 and let ℋ^ε_m be the ε-net of ℋ_m w.r.t. the supremum norm ._∞. Then, for each h∈ℋ_m, there exists h_ε∈ℋ^ε_m such that h-h_ε_∞≤ε. Then |h^2_n,N - 1| ≤|h^2_n,N - h_ε^2_n,N| + |h_ε^2_n,N - 1| and, |h^2_n,N - h_ε^2_n,N| ≤  1/Nn∑_j=1^N∑_k=0^n-1|h(X^j_kΔ) - h_ε(X^j_kΔ)|(h_∞ + h_ε_∞)≤(h_∞ + h_ε_∞)ε. Moreover, we have h^2, h_ε^2≤ 1/τ_0. Then, there exists a constant 𝐜 > 0 such that |h^2_n,N - h_ε^2_n,N| ≤ 2√( cm/τ_0)ε  for the spline basis  (see  Lemma 2.6  in   Denis  et   al.(2021)) |h^2_n,N - h_ε^2_n,N| ≤ 2√( cm/τ_0)ε  for   an   orthonormal   basis   (h^2_∞≤ (0≤ℓ≤ m-1maxϕ_ℓ^2_∞)mh^2). Therefore, for all δ > 0 and for both the spline basis and any orthonormal basis, (h∈ℋ_msup|h^2_n,N-1|≥δ) ≤(h∈ℋ^ε_msup|h^2_n,N-1|≥δ/2) + _4ε√( cm/τ_0)≥δ. We set δ = 1/2 and we choose ε > 0 such that 4ε√( cm/τ_0) < 1/2. Then, using the Hoeffding inequality, there exists a constant c>0 depending on c and τ_0 such that ℙ(Ω^c_n,N,m)≤ 2𝒩_∞(ε,ℋ_m)exp(-cN/m) where 𝒩_∞(ε,ℋ_m) is the covering number of ℋ_m satisfying: 𝒩_∞(ε,ℋ_m) ≤(κ√(m)/ε)^m where the constant κ>0 depends on c>0 (see <cit.>, Proof of Lemma D.1). We set ε = κ√(m^*)/N with m^* = maxℳ and we derive from Equations (<ref>) and (<ref>) that (Ω^c_n,N,m) ≤ 2N^m^*exp(-cN/m^*) = 2exp(-cN/m^*(1-m^*2log(N)/cN)). * If n ≥ N, then m ∈ℳ = {1,…,√(N)/log(Nn)}. 
Since m^*2log(N)/N → 0 as N→ +∞, there exists a constant C>0 such that ℙ(Ω^c_n,N,m)≤ 2exp(-C√(N)). * If n ≤ N, then m ∈ℳ = {1,…, √(n)/log(Nn)},   m^*2log(N)/N ≤log(N)/log^2(Nn) → 0 as N,n →∞, and ℙ(Ω^c_n,N,m)≤ 2exp(-C√(n)). * If n ∝ N, then m ∈ℳ = {1,…,√(N)/log(Nn)}. Since m^*2log(N)/N → 0 as N→ +∞, there exists a constant C>0 such that ℙ(Ω^c_n,N,m)≤ 2exp(-C√(N)). The proof of Theorem <ref> extends the proof of Theorem <ref> when N tends to infinity. Then, we deduce from Equation (<ref>) that 𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]≤ 3h∈𝒮_m,Linfh-σ^2_|I^2_n+C𝔼(h∈𝒮_m, h_n=1supν^2(h))+CΔ^2 where C>0 is a constant depending on σ_1, and ν = ν_1+ν_2+ν_3 with ν_i(h) = 1/Nn∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j,i_kΔ,    i=1,2,3 and the ζ^j,i_kΔ's are the error terms depending on each path X^j,  j=1,…,N. §.§ Upper bound of 𝔼(h∈𝒮_m, h_n=1supν^2(h)) For all h=∑_ℓ=0^m-1a_ℓϕ_ℓ∈𝒮_m such that h_n=1, we have h^2≤1/τ_0 and the coordinate vector a=(a_0,⋯,a_m-1) satisfies: * a^2_2≤ CK ≤ Cm for the spline basis (see <cit.>, Lemma 2.6) * a^2_2≤ 1/τ_0 for an orthonormal basis since h^2 = a^2_2. Furthermore, using the Cauchy Schwartz inequality, we have: ν^2(h)=(∑_ℓ=0^m-1a_ℓν(ϕ_ℓ))^2≤a^2_2∑_ℓ=0^m-1ν^2(ϕ_ℓ). Thus, for all ℓ∈[[0,m-1]], ν=ν_1+ν_2+ν_3 and for all i∈{1,2,3} 𝔼[ν^2_i(ϕ_ℓ)]=  1/Nn^2𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,i_kΔ)^2]. We finally deduce from  (<ref>), (<ref>) and (<ref>) that there exists a constant C>0 depending on σ_1 such that: 𝔼(h∈𝒮_m, h_n=1supν^2(h))≤ Cm/Nn. We deduce from  (<ref>) and (<ref>) that there exists a constant C>0 such that, 𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]≤ 3h∈𝒮_m,Linfσ^2_|I-h^2_n+C(m/Nn+Δ^2). Since we have σ^2_m_∞≤√(mL), then for m and L large enough, σ^2_m-σ^2_|I^2_∞≤ 2mL. There exists a constant C>0 such that for all m∈ℳ and for m and L large enough, 𝔼[σ^2_m-σ^2_|I^2_n,N] =𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]+𝔼[σ^2_m-σ^2_|I^2_n,N_Ω^c_n,N,m] ≤𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]+2mL(Ω^c_n,N,m). Then, from Equation (<ref>), Lemma <ref> and for m ∈ℳ = {1,…,√(min(n,N))/√(log(Nn))}, we have: 𝔼[σ^2_m-σ^2_|I^2_n,N]≤   3h∈𝒮_m,Linfh - σ^2_|I^2_n+C(m/Nn+mLexp(-C√(min(n,N)))+Δ^2) where C>0 is a constant. Recall that the empirical norms ._n,N and ._n are equivalent on Ω_n,N,m, that is for all h∈𝒮_m, h^2_n≤ 2h^2_n,N. Thus, we have 𝔼[σ^2_m-σ^2_|I^2_n] =  𝔼[σ^2_m-σ^2_|I^2_n_Ω_n,N,m] + 𝔼[σ^2_m-σ^2_|I^2_n_Ω^c_n,N,m] ≤  𝔼[σ^2_m-σ^2_|I^2_n_Ω_n,N,m] + 2mL(Ω^c_n,N,m). For all h ∈𝒮_m,L⊂𝒮_m, we have: 𝔼[σ^2_m-σ^2_|I^2_n_Ω_n,N,m] ≤   2𝔼[σ^2_m-h^2_n_Ω_n,N,m] + 2h-σ^2_|I^2_n ≤   4𝔼[σ^2_m-h^2_n,N_Ω_n,N,m] + 2h-σ^2_|I^2_n ≤   8𝔼[σ^2_m-σ^2_|I^2_n,N] + 10h-σ^2_|I^2_n. We finally conclude that 𝔼[σ^2_m-σ^2_|I^2_n] ≤ 34h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/Nn+mLexp(-C√(min(n,N)))+Δ^2). §.§.§ Proof of Corollary <ref>  Under Assumption <ref> and from Theorem <ref> and Lemma (<ref>), there exists a constant C>0 such that 𝔼[σ^2_m-σ^2_|I^2]≤ C(h∈𝒮_m,Linfh-σ^2_|I^2_n+m/Nn+L/min(N^4,n^4)+1/n^2). For [B]. We have m=K+M where M is fixed. From Lemma (<ref>), under Assumption <ref>, we have h∈𝒮_m,Linfh-σ^2_|I^2_n = O(K^-2β). Thus, for K ∝ (Nn)^1/(2β+1) and L = log(Nn), 𝔼[σ^2_m-σ^2_|I^2]≤ C(Nn)^-2β/(2β+1)+Clog(Nn)min(N^-4,n^-4). where C>0 is a constant depending on β. For [F]. From Equation (<ref>) and the proof of Corollary <ref>, we have h∈𝒮_minfh - σ^2_|I^2_n = O(m^-2s). Then, for m = (Nn)^1/(2s+1) and L = log(Nn), we obtain 𝔼[σ^2_m-σ^2_|I^2]≤ C(Nn)^-2s/(2s+1)+Clog(Nn)min(N^-4,n^-4). §.§.§ Proof of Theorem <ref>  We consider the restriction σ^2_[-log(N),log(N)] of σ^2 on the compact interval [-log(N),log(N)] on which the spline basis is built. 
Then we have: 𝔼[σ^2_m,L-σ^2^2_n] = 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^2_n] + 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^c^2_n] and from Proposition <ref>, Lemma <ref> and for N large enough, there exists constants c,C>0 such that 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^c^2_n] ≤ 2L/n∑_k=0^n-1(|X_kΔ| > log(N))≤ 2Lt∈[0,1]sup(|X_t|≥log(N)) ≤ C/log(N)exp(-clog^2(N)). We deduce that 𝔼[σ^2_m,L-σ^2^2_n] = 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^2_n] + C/log(N)exp(-clog^2(N)). It remains to upper-bound the first term on the right hand side of Equation (<ref>). Upper bound of 𝔼[σ^2_m,L-σ^2^2_n_[-log(N),log(N)]]. For all h∈𝒮_m,L, we obtain from Equation (<ref>), γ_n,N(σ^2_m,L)-γ_n,N(σ^2)≤γ_n,N(h)-γ_n,N(σ^2). For all h∈𝒮_m,L, γ_n,N(h)-γ_n,N(σ^2)=h-σ^2^2_n,N+2ν_1(σ^2-h)+2ν_2(σ^2-h)+2ν_3(σ^2-h)+2μ(σ^2-h) where ν_i(h)=1/nN∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j,i_kΔ, i∈{1,2,3}, μ(h)=1/nN∑_j=1^N∑_k=0^n-1h(X^j_kΔ)R^j_kΔ, we deduce from Equation (<ref>) that for all h∈𝒮_m,L, 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+2∑_i=1^3𝔼[ν_i(σ^2_m,L-h)]+2𝔼[μ(σ^2_m,L-h)]. For all i∈{1,2,3} and for all h∈𝒮_m,L, one has 𝔼[ν_i(σ^2_m,L-h)]≤√(2mL)√(∑_ℓ=0^m-1𝔼[ν^2_i(ϕ_ℓ)]). * Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)] According to Equation (<ref>), we have ∀ℓ∈[[0,m-1]], ν_1(ϕ_ℓ)=1/Nn∑_j=1^N∑_k=0^n-1ϕ_ℓ(X^j_kΔ)ζ^j,1_kΔ where ζ^j,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^j_s)dW^j_s)^2-∫_kΔ^(k+1)Δσ^2(X^j_s)ds] is a martingale satisfying 𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0 and 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤1/Δ^2𝔼[(∫_kΔ^(k+1)Δσ^2(X^1_s)ds)^2]≤ Cσ^4_1 with C>0 a constant, W=W^1 and (ℱ_t)_t≥ 0 the natural filtration of the martingale (M_t)_t∈[0,1] given for all t∈[0,1] by M_t=∫_0^tσ(X^1_s)dW_s. We derive that ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]= 1/Nn^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1ϕ_ℓ(X^j_kΔ)ζ^1,1_kΔ)^2]=1/Nn^2𝔼[∑_k=0^n-1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)(ζ^1,1_kΔ)^2]. For each k∈[[0,n-1]], we have ∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ) = ∑_ℓ=-M^K-1B^2_ℓ(X^1_kΔ) =1   for   the   spline   basis ∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)≤ Cm   For   an   orthonormal   basis   with  C=0 ≤ℓ≤ m-1maxϕ_ℓ^2_∞. Finally, there exists a constant C>0 such that ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤C/Nn  for   the   spline   basis Cm/Nn  for   an   orthonormal   basis. * Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)] For all k∈[[0,n-1]] and for all s∈[0,1], set η(s)=kΔ if s∈[kΔ,(k+1)Δ). We have: ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)] =4/N∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2] =4/N∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))(η(s)+Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2]. We conclude that ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)]≤C/Nn^2  for   the   spline   basis Cm/Nn^2  for   an   orthonormal   basis. where the constant C>0 depends on the diffusion coefficient and the upper bound of the basis functions. * Upper bound of ∑_ℓ=0^m-1𝔼[ν^3_2(ϕ_ℓ)] We have: ∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)] =4/Nn^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)b(X^1_kΔ)σ(X^1_s)dW_s)^2] =4/Nn^2∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))b(X^1_η(s))σ(X^1_s)dW_s)^2] ≤4/Nn^2𝔼[∫_0^1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_η(s))b(X^1_η(s))σ^2(X^1_s)ds]. Since for all x∈ℝ, b(x)≤ C_0(1+x^2) and t∈[0,1]sup𝔼(|X_t|^2)<∞, there exists a constant C>0 depending on the diffusion coefficient such that ∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)]≤C/Nn^2  for   the   spline   basis Cm/Nn^2  for   an   orthonormal   basis. We finally deduce that from Equations (<ref>) and (<ref>)  that for all h∈𝒮_m,L, 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+C√(mL/Nn)+2𝔼[μ(σ^2_m,L-h)]    (1) 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+C√(m^2L/Nn)+2𝔼[μ(σ^2_m,L-h)]    (2) where C>0 is a constant, the result (1) corresponds to the spline basis, and the result (2) corresponds to any orthonormal basis. 
It remains to obtain an upper bound of the term μ(σ^2_m,L-h). For all a>0 and for all h∈𝒮_m,L, 2μ(σ^2_m,L-h) ≤ 2/aσ^2_m,L-σ^2^2_n,N+2/ah-σ^2^2_n,N+a/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2 2𝔼[μ(σ^2_m,L-h)] ≤ 2/a𝔼σ^2_m,L-σ^2^2_n,N+2/ah∈𝒮_m,Linfh-σ^2^2_n +a/Nn∑_j=1^N∑_k=0^n-1𝔼[(R^j_kΔ)^2]. Using Equations (<ref>), (<ref>) and setting a=4, we deduce that there exists constant C>0 depending on σ_1 such that, 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]]≤h∈𝒮_m,Linfh-σ^2^2_n+C(√(mL/Nn)+Δ^2)    [𝐁] 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+C(√(m^2L/Nn)+Δ^2)    [𝐇]. The final result is obtained from Equations (<ref>) and (<ref>). §.§.§ Proof of Lemma <ref>  It is proven in <cit.> that for each dimension m∈ℳ, the Gram matrix Ψ_m built from the Hermite basis is invertible. For the case of the B-spline basis, let us consider a vector (x_-M,⋯,x_K-1)∈ℝ^m such that x_j∈[u_j+M,u_j+M+1) and B_j(x_j)≠ 0. Since [u_j+M,u_j+M+1)∩[u_j^'+M,u_j^'+M+1)=∅ for all j,j^'∈{-M,⋯,K-1} such that j≠ j^', then for all j,j^'∈{-M,⋯,K-1} such that j≠ j^', B_j(x_j^')=0. Consequently, we obtain: ((B_ℓ(x_ℓ^'))_-M≤ℓ,ℓ^'≤ K-1) =(diag(B_-M(x_M),⋯,B_K-1(x_K-1))) =∏_ℓ=-M^K-1B_ℓ(x_ℓ)≠ 0. Then, we deduce from <cit.>, Lemma 1 that the matrix Ψ_m is invertible for all m∈ℳ, where the function f_T are replaced by f_n : x↦1/n∑_k=0^n-1p_X(kΔ,x) with λ([-A_N,A_N]∩supp(f_n))>0, λ being the Lebesgue measure. Case of the B-spline basis. For all w∈ℝ^m such that w_2,m=1, we have: w^'Ψ_mw = t_w^2_n=∫_-A_N^A_Nt^2_w(x)f_n(x)dx+t^2_w(x_0)/n with t_w=∑_ℓ=-M^K-1w_ℓB_ℓ. Under Assumption <ref>, the transition density (t,x)↦ p_X(t,x) is approximated as follows ∀ (t,x)∈(0,1]×ℝ, 1/K_*√(t)exp(-c_σx^2/t)≤ p_X(t,x)≤K_*/√(t)exp(-x^2/c_σt) where K_*>1 and c_σ>1. Since s↦exp(-c_σx^2/s) is an increasing function, then for n large enough and for all x∈[-A_N,A_N], f_n(x) ≥1/K_*n∑_k=1^n-1exp(-cx^2/kΔ)≥1/K_*∫_0^1-Δexp(-c_σx^2/s)ds ≥1/K_*∫_1-(log(N))^-1^1-(2log(N))^-1exp(-c_σx^2/s)ds ≥1/2K_*log(N)exp(-c_σx^2/1-log^-1(N)). Thus, the density function satisfies ∀ x∈[-A_N,A_N], f_n(x)≥12K_*log(N)exp(-c_σA^2_N/1-log^-1(N))≥12K_*log(N)exp(-c_σA^2_N). Finally, since there exists a constant C_1>0 such that t_w^2≥ C_1A_NK^-1_N (see <cit.>, Lemma 2.6), for all w∈ℝ^m (m = K_N+M) such that w_2,m=1, there exists a constant C>0 such that, w^'Ψ_mw≥CA_N/mlog(N)exp(-c_σA^2_N). Case of the Hermite basis. For all w∈^m such that w_2,m=1, we have w^'Ψ_mw=t_w^2_n=∫_-∞^+∞t^2_w(x)f_n(x)dx+t^2_w(x_0)/n with t_w=∑_ℓ=0^m-1w_ℓh_ℓ. Recall that for all x∈ such that |x|≥√((3/2)(4m+3)), |h_ℓ(x)|≤ c|x|exp(-c_0x^2) for all ℓ≥ 0. Then we have w^'Ψ_mw ≥  ∫_|x|≤√((3/2)(4m+3))(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2f_n(x)dx ≥  x∈[-√((3/2)(4m+3)),√((3/2)(4m+3))]inff_n(x)∫_|x|≤√((3/2)(4m+3))(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx ≥  1/2K_*log(N)exp(-3c_σ(4m+3)/2(1-log^-1(N)))∫_|x|≤√((3/2)(4m+3))(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx since for all x∈ℝ, f_n(x)≥(1/2K_*log(N))exp(-c_σx^2/1-log^-1(N)). Set a_N=√((3/2)(4m+3)), then we obtain w^'Ψ_mw≥exp(-3c_σ(4m+3)/2(1-log^-1(N)))/2K_*log(N)(∫_-∞^+∞(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx-∫_|x|>a_N(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx) ≥exp(-3c_σ(4m+3)/2(1-log^-1(N)))/2K_*log(N)(1-2c^2m∫_a_N^+∞x^2exp(-8c_0x^2)dx) ≥exp(-3c_σ(4m+3)/2(1-log^-1(N)))/2K_*log(N)(1-c^2m/8c_0√(3/2(4m+3))exp(-12c_0(4m+3))) where c,c_0>0 are constants depending on the Hermite basis. Finally, for N large enough, 1-c^2m/8c_0√(3/2(4m+3))exp(-12c_0(4m+3))≥1/2. Finally, there exists a constant C>0 such that for all w∈^m such that w_2,m, w^'Ψ_mw ≥C/log(N)exp(-3c_σ(4m+3)/2(1-log^-1(N))). 
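A quick numerical illustration of this lemma is easy to set up. The Python sketch below (an illustration added here, not part of the original proofs) simulates N discretized paths of dX_t=(1-X_t)dt+dW_t (Model 1 of the calibration appendix) with an Euler–Maruyama scheme, builds the B-spline design matrix on [-A_N,A_N] with A_N=√(log N) through the Cox–de Boor recursion, and looks at the smallest eigenvalue of the empirical Gram matrix Ψ̂_m=(Nn)^{-1}𝐅'𝐅. The uniform knot layout, the spline degree M and the helper names are our own choices and should be read as assumptions matching the notation m=K+M.

import numpy as np

def bspline_basis(x, knots, degree):
    # Cox-de Boor recursion; returns the (len(x), len(knots)-degree-1) design matrix.
    B = np.zeros((len(x), len(knots) - 1))
    for i in range(len(knots) - 1):
        B[:, i] = (x >= knots[i]) & (x < knots[i + 1])
    for d in range(1, degree + 1):
        Bn = np.zeros((len(x), len(knots) - d - 1))
        for i in range(len(knots) - d - 1):
            den_l = knots[i + d] - knots[i]
            den_r = knots[i + d + 1] - knots[i + 1]
            left = (x - knots[i]) / den_l * B[:, i] if den_l > 0 else 0.0
            right = (knots[i + d + 1] - x) / den_r * B[:, i + 1] if den_r > 0 else 0.0
            Bn[:, i] = left + right
        B = Bn
    return B

rng = np.random.default_rng(0)
N, n = 100, 250                          # number of paths and of time steps on [0, 1]
dt = 1.0 / n
X = np.zeros((N, n))                     # X[j, k] plays the role of X^j_{k Delta}
for k in range(n - 1):                   # Euler-Maruyama for dX = (1 - X) dt + dW
    X[:, k + 1] = X[:, k] + (1.0 - X[:, k]) * dt + np.sqrt(dt) * rng.standard_normal(N)

A = np.sqrt(np.log(N))                   # estimation interval [-A_N, A_N]
K, M = 4, 3                              # K subintervals, degree-M splines, m = K + M basis functions
h = 2.0 * A / K
knots = np.linspace(-A - M * h, A + M * h, K + 2 * M + 1)
F = bspline_basis(X.ravel(), knots, M)   # (Nn) x (K + M) design matrix F
Psi_hat = F.T @ F / (N * n)              # empirical Gram matrix
print("m =", F.shape[1], ", smallest eigenvalue of Psi_hat:", np.linalg.eigvalsh(Psi_hat).min())

In such a run the smallest eigenvalue is positive but small, and it degrades quickly when K grows or when boundary basis functions are rarely visited by the paths; this is the numerical counterpart of the exp(-c_σA_N^2)/(m log N) factor in the lower bound above and of the restriction of the dimension m to be at most of order √(min(n,N)) up to logarithmic factors.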
§.§.§ Proof of Theorem <ref>  The proof of Theorem <ref>  relies on the following lemma: Under Assumptions <ref> and for σ^2∈Σ_I(β,R) with β≥ 1,   I = [-A_N,A_N] and N ∝ n,   A_N = o (√(log(N))),   K ∝((Nn)^1/(2β+1)A_N)    (m = K+M), the following holds: ℙ(Ω^c_n,N,m) ≤ Cexp(- c log^3/2(N)) where c,C>0 are constants independent of N. According to Equations (<ref>) in the proof of Theorem <ref>, for all dimension m=K+M, with K∈, and for all h∈𝒮_K+M, there exists a constant C>0 such that [σ^2_A_N,m-σ^2_A_N^2_n,N_Ω_n,N,m]≤ C[h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+(h∈𝒮_K+M,h_n=1supν^2(h))+Δ^2] where Ω_n,N,m is given in Equation (<ref>)  and ν=ν_1+ν_2+ν_3 with the ν_i given in Equation (<ref>) . For all h=∑_ℓ=-M^K-1a_ℓB_ℓ∈𝒮_K+M,L_N, h^2_n=[1/n∑_k=0^n-1h^2(X_kΔ)]=∑_ℓ=-M^K-1∑_ℓ=-M^K-1a_ℓa_ℓ^'[1/n∑_k=0^n-1B_ℓ(X_kΔ)B_ℓ^'(X_kΔ)]=a^'Ψ_ma. The Gram matrix Ψ_m is invertible for each K∈ℳ (see proof of Lemma <ref>). It follows that for all h=∑_ℓ=-M^K-1a_ℓB_ℓ such that h^2_n=a^'Ψ_ma=1, one has a=Ψ^-1/2_mu where u∈ℝ^m and u_2,m=1. Furthermore, we have: h=∑_ℓ=-M^K-1a_ℓB_ℓ=∑_ℓ=-M^K-1u_ℓ∑_ℓ^'^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'. Then for all h∈𝒮_K+M, we have ν^2(h)≤ 3(ν^2_1(h)+ν^2_2(h)+ν^2_3(h)) where, ∀ i∈{1,2,3}, ν^2_i(h)≤∑_ℓ=-M^K-1(1/Nn∑_j=1^N∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^j,i_kΔ)^2. So we obtain, ∀ i∈{1,2,3}, [h∈𝒮_K+M,h_n=1supν^2_i(h)]≤1/Nn^2∑_ℓ=-M^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^1,i_kΔ)^2] For i=1, we have ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds] and we obtained in the proof of Theorem <ref>  that there exists a constant C>0 such that for all k∈[[0,n-1]], 𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0, 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤ C𝔼[(∫_kΔ^(k+1)Δσ^2(X_u)du)^2]≤ Cσ^4_1Δ^2. We deduce that [h∈𝒮_K+M,h_n=1supν^2_1(h)] =1/Nn^2Δ^2∑_ℓ=0^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ)ζ^1,1_kΔ)^2] ≤1/N∑_ℓ=-M^K-1∑_k=0^n-1{(∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ))^2(ζ^1,1_kΔ)^2} ≤4σ^2_1/Nn∑_ℓ=-M^K-1∑_ℓ^'=-M^K-1∑_ℓ^''=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓ[Ψ^-1/2_m]_ℓ^'',ℓ[Ψ^-1/2_m]_ℓ^',ℓ^''. We have: ∑_ℓ=-M^K-1∑_ℓ^'=-M^K-1∑_ℓ^''=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓ[Ψ^-1/2_m]_ℓ^'',ℓ[Ψ^-1/2_m]_ℓ^',ℓ^''=Tr(Ψ^-1_mΨ_m)=Tr(I_m)=m. So we obtain [h∈𝒮_K+Msupν^2_1(h)]≤4σ^2_1m/Nn. For i=2, we have ζ^1,2_kΔ=2/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s and [h∈𝒮_K+M,h_n=1supν^2_2(h)] ≤1/Nn^2∑_ℓ=-M^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^1,2_kΔ)^2] ≤4σ^4_1σ^'^2_∞Δ/Nn^2∑_ℓ=-M^K-1∑_k=0^n-1[(∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ))^2] ≤4σ^2_1σ^'^2_∞m/Nn^2. For i=3, we have ζ^1,3_kΔ=2b(X^1_kΔ)∫_kΔ^(k+1)Δσ(X^1_s)dW_s and there exists constants C_1,C_2>0 such that [h∈𝒮_K+M,h_n=1supν^2_3(h)] ≤1/Nn^2∑_ℓ=-M^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^1,3_kΔ)^2] ≤ C_1σ^2_1Δ/Nn^2∑_ℓ=-M^K-1∑_k=0^n-1[(∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ))^2] ≤ C_2σ^2_1m/Nn^2. Finally, there exists a constant C>0 depending on σ_1 and M such that [h∈𝒮_K+M,h_n=1supν^2(h)]≤ Cm/Nn. From Equations (<ref>) and (<ref>) , we deduce that [σ^2_A_N,m-σ^2_A_N^2_n,N_Ω_n,N,m]≤ C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn+Δ^2) where C>0 is a constant depending on σ_1 and M. We obtain [σ^2_A_N,m-σ^2_A_N^2_n,N]≤ C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn+Δ^2)+[σ^2_A_N,m-σ^2_A_N^2_n,N_Ω^c_n,N,m] and for N large enough, σ^2_A_N,m-σ^2_A_N^2_n,N≤ 4mL, and according to Lemma <ref> , [σ^2_A_N,m-σ^2_A_N^2_n,N_Ω^c_n,N,m]≤ 4mLℙ(Ω^c_n,N,m)≤ CmLexp(-clog^3/2(N)) where c>0 is a constant. 
Thus, there exists a constant C>0 such that [σ^2_A_N,m-σ^2_A_N^2_n,N] ≤  [σ^2_A_N,m-σ^2_A_N^2_n,N_Ω_n,N,m]+[σ^2_A_N,m-σ^2_A_N^2_n,N_Ω^c_n,N,m] ≤  C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn+mLexp(-clog^3/2(N))+Δ^2). Then, as n ∝ N and L = log(N), there exists a constant C>0 such that [σ^2_A_N,m-σ^2_A_N^2_n,N] ≤  C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn). Finally, since σ^2∈Σ_I(β,R) with β≥ 1 and I = [-A_N, A_N], one has h∈𝒮_K+M,Linfh-σ^2_A_N^2_n≤ CA^2β_NK^-2β where the constant C>0 depends on β, R and M. Furthermore, as we chose the inverval [-A_N,A_N] such that A_N = o (√(log(N))) and for K ∝((Nn)^1/(2β+1)A_N), we obtain [σ^2_A_N,m-σ^2_A_N^2_n,N] ≤  Clog^β(N)(Nn)^-2β/(2β+1). §.§ Proof of Section <ref> §.§.§ Proof of Theorem <ref> Set for all K, K^'∈𝒦 = {2^q,   q=0,…, q_max,   2^q_max≤√(N)/log(N)}⊂ℳ, 𝒯_K,K^' = {g∈𝒮_K+M+𝒮_K^'+M, g_n=1,  g_∞≤√(L)}. Recall that for all j ∈ [[1,N]] and for all k ∈ [[0,n]], ζ^j,1_kΔ = 1/Δ[(∫_kΔ^(k+1)Δσ(X^j_s)dW^j_s)^2-∫_kΔ^(k+1)Δσ^2(X^j_s)ds]. The proof of Theorem <ref> relies on the following lemma whose proof is in Appendix. Under Assumption <ref>, for all ε, v>0 and g∈𝒯_K,K^', there exists a real constant C>0 such that, ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ζ^j,1_kΔ≥ε, g^2_n,N≤ v^2)≤exp(-CNnε^2/σ^2_1(εg_∞+4σ^2_1v^2)) and for all x>0 such that x≤ε^2/σ^2_1(εg_∞+4σ^2_1v^2), ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ζ^j,1_kΔ≥ 2σ^2_1v√(x)+σ^2_1g_∞x, g^2_n,N≤ v^2)≤exp(-CNnx). From Equation (<ref>), we have K:=K∈𝒦min{γ_n,N(σ^2_K)+pen(K)}. For all K∈𝒦 and h∈𝒮_K+M,L, γ_n,N(σ^2_K)+pen(K)≤γ_n,N(h)+pen(K), then, for all K∈𝒦 and for all h∈𝒮_K+M,L, γ_n,N(σ^2_K)-γ_n,N(σ^2_|I)≤  γ_n,N(h)-γ_n,N(σ^2_|I)+pen(K)-pen(K) σ^2_K-σ^2_|I^2_n,N≤  h-σ^2_|I^2_n,N+2ν(σ^2_K-h)+2μ(σ^2_K - h)+pen(K)-pen(K) ≤  h-σ^2_|I^2_n,N+1/dσ^2_K-t^2_n+dg∈𝒯_K,Ksupν^2(g)+1/dσ^2_K-h^2_n,N +d/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2+pen(K)-pen(K) where d>1 and the space 𝒯_K,K is given in Equation (<ref>). On the set Ω_n,N,K_max (given in Equation (<ref>)): ∀ h∈𝒮_K+M, 1/2h^2_n≤h^2_n,N≤3/2h^2_n. Then on Ω_n,N,K_max, for all d>1 and for all h∈𝒮_K+M with K∈𝒦, (1-10/d)σ^2_K-σ^2_|I^2_n,N≤  (1+10/d)h-σ^2_|I^2_n,N+dh∈𝒯_K,Ksupν^2(h)+d/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2 +pen(K)-pen(K). We set d=20. Then, on Ω_n,N,max and for all h∈𝒮_K+M,L, σ^2_K-σ^2_|I^2_n,N≤ 3h - σ^2_|I^2_n,N+20h∈𝒯_K,Ksupν^2(h)+20/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2+2(pen(K)-pen(K)). Let q : 𝒦^2⟶ℝ_+ such that 160 q(K,K^')≤ 18 pen(K)+16 pen(K^'). Thus, on the set Ω_n,N,K_max, there exists a constant C>0 such that for all h∈𝒮_K+M 𝔼[σ^2_K-σ^2_|I^2_n,N_Ω_n,N,K_max]≤   34(h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)) +160(h∈𝒯_K,Ksupν^2_1(h)-q(K,K))+CΔ^2 where ν_1(h):=1/Nn∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j,1_kΔ with ζ^j,1_kΔ the error term. We set for all K,K^'∈𝒦, G_K(K^'):=h∈𝒯_K,K^'supν^2_1(h) and for N and n large enough, σ^2_K-σ^2_|I^2_n,N≤ 4(K+M)L. We deduce that, 𝔼[σ^2_K-σ^2_|I^2_n,N] ≤  𝔼[σ^2_K-σ^2_|I^2_n,N_Ω_n,N,K_max]+𝔼[σ^2_K-σ^2_|I^2_n,N_Ω^c_n,N,K_max] ≤   34K∈𝒦inf(h∈𝒮_K+Minfh-σ^2_|I^2_n+pen(K)) +CΔ^2+4(K+M)Lℙ(Ω^c_n,N,K_max) +160𝔼[(G_K(K)-q(K,K))_+_Ω_n,N,K_max]. In the sequel, we refer to the proof of Proposition 6.1 in <cit.>. We known from Lorentz et al (see <cit.>) that given the unit ball B_._n(0,1) of the approximation subspace 𝒮_K+M with respect to norm ._n defined as follows: B_._n(0,1)={h∈𝒮_K+M : h_n≤ 1}={h∈𝒮_K+M : h≤1/τ_0}=B_2(0,1/τ_0), we can find a ε-net E_ε such that for each ε∈(0,1], |E_ε|≤(3/ετ_0)^K+M. Recall that 𝒯_K,K^'={g∈𝒮_K+M+𝒮_K^'+M, g_n=1, g_∞≤√(L)} and consider the sequence (E_ε_k)_k≥ 1 of ε-net with ε_k=ε_0 2^-k and ε_0∈(0,1]. Moreover, set N_k = log(|E_ε_k|) for each k≥ 0. 
Then for each g∈𝒮_K+M+𝒮_K^'+M such that g_∞≤√(L), there exists a sequence (g_k)_k≥ 0 with g_k∈ E_ε_k such that g=g_0+∑_k=1^∞g_k-g_k-1. Set ℙ:=ℙ(.∩Ω_n,N,K_max) and τ:=σ_1^2√(6x^n,N_0)+σ^2_1√(L)x^n,N_0+∑_k≥ 1ε_k-1{σ_1^2√(6x^n,N_k)+2σ^2_1√(L)x^n,N_k}=y^n,N_0+∑_k≥ 0y^n,N_k. For all h∈𝒯_K,K^' and on the event Ω_n,N,K_max, one has h^2_n,N≤3/2h^2_n=3/2. Then, using the chaining technique of <cit.>, we have ℙ(h∈𝒯_K,K^'supν_1(h)>τ) =ℙ(∃ (h_k)_k≥ 0∈∏_k≥ 0E_ε_k/ ν_1(h)=ν_1(h_0)+∑_k=1^∞ν_1(h_k-h_k-1)>τ) ≤∑_h_0∈ E_0ℙ(ν_1(h_0)>y^n,N_0)+∑_k=1^∞∑_h_k-1∈ E_ε_k-1h_k∈ E_ε_kℙ(ν_1(h_k-h_k-1)>y^n,N_k). According to Equation (<ref>) and Lemma <ref>, there exists a constant C>0 such that (ν_1(h_0) > y^n,N_0) ≤  (ν_1(h_0) > σ_1√(6x^n,N_0)+σ^2_1h_0_∞x^n,N_0) ≤  exp(-CNnx^n,N_0), ∀ k≥ 1,  (ν_1(h_k - h_k-1) > y^n,N_0) ≤  (ν_1(h_k - h_k-1) > σ_1√(6x^n,N_k)+σ^2_1h_k - h_k-1_∞x^n,N_k) ≤  exp(-CNnx^n,N_k). Finally, since N_k = log(|E_ε_k|) for all k≥ 0, we deduce that ℙ(h∈𝒯_K,K^'supν_1(h)>τ) ≤|E_ε_0|exp(-CNnx^n,N_0) + ∑_k=1^∞(|E_ε_k|+|E_ε_k-1|)exp(-CNnx^n,N_k) ≤exp(N_0-CNnx^n,N_0)+∑_k=1^∞exp(N_k+N_k-1-CNnx^n,N_k). We choose x^n,N_0 and x^n,N_k, k≥ 1 such that, N_0 - CNnx^n,N_0 = -a(K+K^' + 2M)-b N_k + N_k-1 - CNnx^n,N_k = -k(K + K^'+2M) - a(K + K^' + 2M) - b where a and b are two positive real numbers. We deduce that x^n,N_k≤ C_0(1+k)K + K^'+2M/Nn and τ≤ C_1σ^2_1√(√(L)K + K^'+2M/Nn) with C_0>0 and C_1 two constants depending on a and b. It comes that ∼ℙ(t∈𝒯_K,K^'supν(t)>τ) ≤e/e-1e^-bexp{-a(K + K^' + 2M)}. From Equation (<ref>), we set q(K,K^')=κ^*σ^2_1√(L)K + K^' + 2M/Nn where κ^*>0 depends on C_1>0. Thus, for all K,K^'∈𝒦, ℙ({h∈𝒯_K,K^'supν^2(h)>q(K,K^')}∩Ω_n,N,K_max)≤e^-b+1/e+1exp{-a(K + K^' + 2M)} and there exists constants c,C>0 such that 𝔼[(G_K(K^')-q(K,K^'))_+_Ω_n,N,K_max] ≤c(K+K^')/Nnℙ({t∈𝒯_K,K^'supν^2(t)>q(K,K^')}∩Ω_n,N,K_max) ≤C/Nnexp{-a/2(K+K^')}. Finally, there exists a real constant C>0 such that, 𝔼[(G_K(K)-q(K,K))_+_Ω_n,N,K_max]≤∑_K^'∈𝒦𝔼[(G_K(K^')-q(K,K^'))_+_Ω_n,N,K_max]≤C/Nn. We choose the penalty function pen such that for each K∈𝒦, pen(K)≥κσ^2_1√(L)K+M/Nn. For N large enough, one has σ^2_1≤√(L). Thus, we finally set pen(K)=κ(K+M)log(N)/Nn with L = log(N). Then, there exists a constant C>0 such that, 𝔼[σ^2_K-σ^2_|I^2_n,N] ≤ 34K∈𝒦inf{h∈𝒮_K+Minfh-σ^2_|I^2_n+pen(K)}+C/Nn. §.§.§ Proof of Theorem <ref>  From Equation (<ref>), we have K := K∈𝒦minγ_n,N(σ^2_K+M,L) + pen(K). Then, for all K∈𝒦 and for all h ∈𝒮_K+M,L, we have γ_n,N(σ^2_K,L) + pen(K) ≤γ_n,N(h) + pen(K). Then, for all K∈𝒦 and for all h∈𝒮_K+M,L, γ_n,N(σ^2_K,L) - γ_n,N(σ^2) ≤  γ_n,N(h) - γ_n,N(σ^2) + pen(K) - pen(K) σ^2_K,L - σ^2^2_n,N≤  h - σ^2^2_n,N + 2ν(σ^2_K,L - h) + 2μ(σ^2_K,L - h) + pen(K) - pen(K). We have for all a>0, 2𝔼[μ(σ^2_K,L-h)] ≤ 2/a𝔼σ^2_K,L-σ^2^2_n,N+2/ah∈𝒮_K+M,Linfh-σ^2^2_n +a/Nn∑_j=1^N∑_k=0^n-1𝔼[(R^j_kΔ)^2] and since ν = ν_1 + ν_2 + ν_3, according to the proof of Theorem <ref>, there exists a constant c>0 such that [ν(σ^2_K,L - h)] ≤ c[ν_1(σ^2_K,L - h)] where the for i∈{1,2,3} and for all h ∈𝒮_K+M,L,   K∈𝒦, ν_i(h) = 1/Nn∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j_kΔ, and the ζ^j_kΔ are given Then, (1-2/a)[σ^2_K,L - σ^2^2_n,N] ≤  (1+2/a)h∈𝒮_K+M,Linfh-σ^2^2_n + 2c[ν_1(σ^2_K,L - h)] + pen(K)-pen(K) + a/Nn∑_j=1^N∑_k=0^n-1[(R^j_kΔ)^2] From Equation (<ref>) and for a = 4, there exists a constant C>0 such that [σ^2_K,L - σ^2^2_n,N] ≤ 3h∈𝒮_K+M,Linfh-σ^2^2_n + 4c[ν_1(σ^2_K,L - h)] + 2(pen(K)-pen(K)) + CΔ^2. 
Since for all K∈𝒦,  pen(K) ≥ 2κ^*σ^2_1(K+M)√(2L)/(Nn), define the function q: (K,K^') ↦ q(K,K^') such that q(K,K^') = 2C^*σ^2_1(K+K^'+2M)√(2L)/Nn≥ 2σ^2_1v√(x^n,N) + σ^2_1vx^n,N where x^n,N∝(K+K^'+2M/Nn)^2   and   v = √(2L). The constant C^*>0 depends on constants κ^*>0 and c>0 of Equation (<ref>) such that 4cq(K,K^') ≤pen(K) + 2pen(K^'). Then for all K ∈𝒦 and for all h∈𝒮_K+M,L, [σ^2_K,L - σ^2^2_n,N] ≤ 3(h∈𝒮_K+M,Linfh-σ^2^2_n + pen(K)) + 4c[(ν_1(σ^2_m,L - h) - q(K,K))_+] + CΔ^2. For all K∈𝒦 and for all h∈𝒮_K+M,L such that h_∞≤√(L), we have , σ^2_K,L - h^2_n,N≤σ^2_K,L - h^2_∞≤ 2L =: v^2. Then, using Equation (<ref>) and Lemma <ref>, there exists a constant C>0 such that for all K,K^'∈𝒦 and for all h∈𝒮_K+M,L, (ν_1(σ^2_K^',L - h) ≥ q(K,K^'),  σ^2_K,L - h^2_n,N≤ v^2) ≤exp(-CNnx^n,N). Since L = log(N), then for N large enough, σ^2_1≤√(log(N)), we finally choose pen(K) = κ(K+M)log(N)/Nn where κ>0 is a new constant. Since [ν_1(σ^2_K,L - h)] ≤O(√((K_max+M)log^2(N)/Nn)) (see proof of Theorem <ref>), for all K ∈𝒦 and h ∈𝒮_K+M,L, there exists a constant c>0 such that [(ν_1(σ^2_K,L - h) - q(K,K))_+] ≤  K^'∈𝒦max{[(ν_1(σ^2_K^',L - h) - q(K,K^'))_+]} ≤   cq(K,K_max)K^'∈𝒦max{(ν_1(σ^2_K^',L - h) ≥ q(K,K^'))}. From Equation (<ref>), we obtain that [(ν_1(σ^2_K,L - h) - q(K,K))_+] ≤ cq(K,K_max)exp(-CNn)≤C/Nn since K and K_max increase with the size N of the sample paths D_N,n, and cNnq(K,K_max)exp(-CNn) → 0   as   N →∞. Then, from Equations (<ref>) and (<ref>), there exists a constant C>0 such that [σ^2_K,L - σ^2^2_n,N] ≤ 3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2^2_n + pen(K)} + C/Nn. §.§.§ Proof of Theorem <ref> The proof of Theorem <ref> is similar to the proof of Theorem <ref>. Then, from Equation (<ref>), for all h∈𝒮_K+M, σ^2_K,L-σ^2_|I^2_n,1≤ 3h - σ^2_|I^2_n,1+20h∈𝒯_K,Ksupν^2(h)+20/n∑_k=0^n-1(R^1_kΔ)^2+2(pen(K)-pen(K)), where 𝒯_K,K^' = {h ∈𝒮_K+M+𝒮_K^'+M,  h_X = 1,  h_∞≤√(L)}. Let q: 𝒦^2⟶_+ such that 160q(K,K^') ≤ 18pen(K) + 16pen(K^'). Recall that the 𝕃^2-norm ., the norm [._X] and the empirical norm ._n are equivalent on 𝕃^2(I) since the transition density is bounded on the compact interval I. Then, for all K ∈𝒦 and h ∈𝒮_K+M,L, we have [σ^2_K,L-σ^2_|I^2_n,1] ≤   3(h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)) + 20(h∈𝒯_K,Ksupν^2_1(h)-q(K,K)) + CΔ^2 where ν_1(h):=1/n∑_k=0^n-1h(X^1_kΔ)ζ^1,1_kΔ with ζ^1,1_kΔ the error term. We set for all K,K^'∈𝒦, G_K(K^'):=h∈𝒯_K,K^'supν^2_1(h). Then, there exists C>0 such that [σ^2_K,L-σ^2_|I^2_n,1] ≤   3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)} + 20∑_K^'∈𝒦[(G_K(K^')-q(K,K^'))_+] + CΔ^2. Considering the unit ball B_._X(0,1) of the approximation subspace given by B_._X(0,1) = {h∈𝒮_K+M,  h^2_X≤ 1} = {h∈𝒮_K+M,  h^2≤1/τ_0}. We obtain from the proof of Theorem <ref> with N=1 that, ∑_K^'∈𝒦[(G_K(K^')-q(K,K^'))_+]≤C/n, where C>0 is a constant, q(K,K^') ∝σ^4_1(K+K^'+2M)√(log(n))/n and pen(K) ∝(K+M)log(n)/n. Then we obtain 𝔼[σ^2_K,L-σ^2_|I^2_n,1] ≤3/τ_0K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)} + C/n. ScandJStat Appendix §.§ Calibration Fix the drift function b(x) = 1-x, the time-horizon T=1 and at time t=0,   x_0=0. Consider the following three models: Model 1: σ(x)=1 Model 2: σ(x)=0.1+0.9/√(1+x^2) Model 3: σ(x) = 1/3+sin^2(2π x)/π + 1/(π+x^2). 
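For concreteness, these three models (with the drift b(x)=1-x, horizon T=1 and x_0=0 fixed above) can be simulated with a basic Euler–Maruyama scheme, as in the Python sketch below; the reading of Model 3 as σ(x)=1/3+sin^2(2πx)/π+1/(π+x^2), the function names and the scheme itself are our own choices for illustration, not necessarily those used to produce the reported figures.

import numpy as np

def simulate_paths(sigma, N, n, T=1.0, x0=0.0, seed=0):
    # Euler-Maruyama for dX_t = (1 - X_t) dt + sigma(X_t) dW_t, N independent paths on [0, T].
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.full((N, n + 1), float(x0))
    for k in range(n):
        dW = np.sqrt(dt) * rng.standard_normal(N)
        X[:, k + 1] = X[:, k] + (1.0 - X[:, k]) * dt + sigma(X[:, k]) * dW
    return X                             # X[j, k] = X^j_{k Delta}, Delta = T / n

sigma_model1 = lambda x: np.ones_like(x)
sigma_model2 = lambda x: 0.1 + 0.9 / np.sqrt(1.0 + x**2)
sigma_model3 = lambda x: 1.0 / 3.0 + np.sin(2.0 * np.pi * x)**2 / np.pi + 1.0 / (np.pi + x**2)

paths = {name: simulate_paths(s, N=100, n=250)
         for name, s in [("Model 1", sigma_model1),
                         ("Model 2", sigma_model2),
                         ("Model 3", sigma_model3)]}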
The three diffusion models satisfy Assumption <ref>  and are used to calibrate the numerical constant κ of the penalty function given in Theorem <ref>  As we already know, the adaptive estimator of σ^2 on the interval [-√(log(N)), √(log(N))] necessitate a data-driven selection of an optimal dimension through the minimization of the penalized least squares contrast given in Equation (<ref>) . Since the penalty function pen(d_N)=κ (K_N+M)log^2(N)/N^2 depends on the unknown numerical constant κ>0, the goal is to select an optimal value of κ in the set 𝒱={0.1,0.5,1,2,4,5,7,10} of its possible values. To this end, we repeat 100 times the following steps: * Simulate learning samples D_N and D_N^' with N∈{50,100}, N^'=100 and n ∈{100, 250} * For each κ∈𝒱: * For each K_N∈𝒦 and from D_N, compute σ^2_d_N,L_N given in Equations (<ref>) and (<ref>). * Select the optimal dimension K_N∈𝒦 using Equation (<ref>)  * Using the learning sample D_N^', evaluate σ^2_d_N,L_N-σ^2_A^2_n,N^' where d_N=K_N+M. Then, we calculate average values of σ^2_d_N,L_N-σ^2_A^2_n,N^' for each κ∈𝒱 and obtain the following results: We finally choose 5∈𝒱 as the optimal value of κ in reference to the results of Figure <ref> . §.§ Proof of Lemma <ref>  We obtain from Comte,Genon-Catalot,Rozenholc (2007) proof of Lemma 3 that for each j∈[[1,N]], k∈[[0,n-1]] and p∈ℕ∖{0,1} 𝔼[exp(ug(X^j_kΔ)ξ^j,1_kΔ-au^2g^2(X^j_kΔ)/1-bu)|ℱ_kΔ]≤ 1 with a=e(4σ^2_1c^2)^2, b=4σ^2_1c^2eg_∞, u∈ℝ such that bu<1 and c>0 a real constant. Thus, ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ξ^j,1_kΔ≥ε, g^2_N,n≤ v^2)=𝔼(1_{∑_j=1^N∑_k=0^n-1ug(X^j_kΔ)ξ^(j,1)_kΔ≥ Nnuε}1_g^2_n,N≤ v^2) =𝔼(1_{exp(∑_j=1^N∑_k=0^n-1ug(X^j_kΔ)ξ^(j,1)_kΔ)e^-Nnuε≥ 1}_g^2_N,n≤ v^2) ≤e^-Nnuε𝔼[_g^2_n,N≤ v^2exp{∑_j=1^N∑_k=0^n-1ug(X^j_kΔ)ξ^j,1_kΔ}]. It follows that, ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ξ^j,1_kΔ≥ε, g^2_n,N≤ v^2) ≤  exp{-Nnuε+Nnau^2v^2/1-bu}. We set u=ε/ε b+2av^2. Then, we have -Nnuε+Nnav^2u^2/(1-bu)=-Nnε^2/2(ε b+2av^2) and, ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ξ^j,1_kΔ≥ε, g^2_n,N≤ v^2) ≤exp(-Nnε^2/2(ε b+av^2)) ≤exp(-CNnε^2/σ^2_1(εg_∞+4σ^2_1v^2)) where C>0 is a constant depending on c>0. §.§ Proof of Lemma <ref>  Set K_n,N = K_N since N ∝ n. Let us remind the reader of the Gram matrix Ψ_K_N given in Equation (<ref>), Ψ_K_N=[1/Nn𝐅^'_K_N𝐅_K_N]=(Ψ_K_N) where, 𝐅_K_N:= ((B_ℓ(X^j_0),…,(B_ℓ(X^j_(n-1)Δ)))_1 ≤ j ≤ N0 ≤ℓ≤ K_N-1∈ℝ^Nn× (K_N+M) The empirical counterpart Ψ is the random matrix given by Ψ_K_N of size (K_N+M) × (K_N+M) is given by Ψ_K_N:=1/Nn𝐅^'_K_N𝐅_K_N=(1/Nn∑_j=1^N∑_k=0^n-1f_ℓ(X^j_kΔ)f_ℓ^'(X^j_kΔ))_ℓ,ℓ^'∈[-M,K_N-1]. For all t=∑_ℓ=-M^K_N-1 a_ℓ B_ℓ,M, u∈ S_K_N, M one has t_n,N^2 = a^'Ψ_K_N a and t_n^2 = a^'Ψ_K_N a, with a=(a_-M,⋯,a_K_N-1)^'. Under Assumption <ref>, we follow the lines of  <cit.> Proposition 2.3 and Lemma 6.2. Then, sup _t ∈ S_K_N,M,t_n=1|t_n,N^2-t_n^2| = sup _w ∈^K_N+M,Φ_K_N^1 / 2 w_2, K_N+M=1|w^'(Ψ_K_N-Ψ_K_N) w| = sup _u ∈ℝ^K_N+M,u_2, K_N+M=1|u^'Ψ_K_N^-1 / 2(Ψ_K_N-Ψ_K_N) Ψ_K_N^-1 / 2 u| = Ψ_K_N^-1 / 2Ψ_K_NΨ_K_N^-1 / 2-Id_K_N+M_op. Therefore, Ω_n, N, K_N^c={Ψ_K_N^-1 / 2Ψ_K_NΨ_K_N^-1 / 2-Id_K_N+M_op > 1 / 2}. Since A_N = o(√(log(N))), we obtain from <cit.>, proof of Lemma 7.8, there exists a constant C>0 such that (Ω^c_n,N,K_N)≤ 2(K_N+M)exp(-C log^3/2(N)). Finally, since 2(K_N+M)exp(- (C/2) log^3/2(N)) ⟶ 0 as N ⟶ +∞, one concludes from Equation (<ref>) and for N large enough, (Ω^c_n,N,K_N)≤ Cexp(- c log^3/2(N)) where c >0 and C>0 are new constants.
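Returning to the calibration procedure described at the beginning of this appendix, the following self-contained Python sketch mimics its structure on Model 1. Several ingredients are assumptions made only for this illustration and are not restatements of the text: the least-squares contrast is implemented through the usual squared-increment pseudo-regression U^j_{kΔ}=(X^j_{(k+1)Δ}-X^j_{kΔ})^2/Δ (the exact definition of γ_{n,N} is only referenced above), the Hermite-function basis [H] replaces the spline basis to keep the code short, the truncation at level L is omitted, and only 20 repetitions are run instead of 100. The penalty shape κ·m·log^2(N)/N^2 follows the expression quoted in the calibration subsection, with m playing the role of K_N+M.

import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def simulate(N, n, seed):
    # Euler-Maruyama for dX = (1 - X) dt + dW (Model 1), T = 1, x0 = 0.
    rng = np.random.default_rng(seed)
    dt = 1.0 / n
    X = np.zeros((N, n + 1))
    for k in range(n):
        X[:, k + 1] = X[:, k] + (1.0 - X[:, k]) * dt + sqrt(dt) * rng.standard_normal(N)
    return X

def hermite_design(x, m):
    # Hermite functions h_l(x) = (2^l l! sqrt(pi))^{-1/2} H_l(x) exp(-x^2/2), l = 0, ..., m-1.
    cols = []
    for l in range(m):
        c = np.zeros(l + 1); c[l] = 1.0
        cols.append(hermval(x, c) * np.exp(-x**2 / 2.0) / sqrt(2.0**l * factorial(l) * sqrt(pi)))
    return np.column_stack(cols)

def fit(X, m):
    # Projection least-squares fit of sigma^2 from the squared increments U = (dX)^2 / Delta.
    n = X.shape[1] - 1
    x, U = X[:, :-1].ravel(), (np.diff(X, axis=1)**2 * n).ravel()
    a, *_ = np.linalg.lstsq(hermite_design(x, m), U, rcond=None)
    return a

def contrast(X, a):
    n = X.shape[1] - 1
    x, U = X[:, :-1].ravel(), (np.diff(X, axis=1)**2 * n).ravel()
    return np.mean((U - hermite_design(x, len(a)) @ a)**2)

def validation_error(Xv, a, sigma2):
    x = Xv[:, :-1].ravel()
    return np.mean((hermite_design(x, len(a)) @ a - sigma2(x))**2)

N, N_prime, n, reps = 100, 100, 250, 20
dims, kappas = range(1, 9), [0.1, 0.5, 1, 2, 4, 5, 7, 10]
sigma2_true = lambda x: np.ones_like(x)                      # Model 1: sigma^2 = 1
scores = {kappa: [] for kappa in kappas}
for r in range(reps):
    X, Xv = simulate(N, n, seed=r), simulate(N_prime, n, seed=10_000 + r)
    fits = {m: fit(X, m) for m in dims}
    crits = {m: contrast(X, fits[m]) for m in dims}
    for kappa in kappas:
        m_hat = min(dims, key=lambda m: crits[m] + kappa * m * np.log(N)**2 / N**2)
        scores[kappa].append(validation_error(Xv, fits[m_hat], sigma2_true))
for kappa in kappas:
    print(f"kappa = {kappa:>4}: mean validation error = {np.mean(scores[kappa]):.4f}")

The averaged validation errors play the role of the curves summarized in Figure <ref>, from which the value κ=5 is selected in the text.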
http://arxiv.org/abs/2307.04491v2
20230710112824
Thermal Corrections to Rényi Entropy in BMS Field Theory
[ "Yuan Zhong" ]
hep-th
[ "hep-th" ]
Calculating Originality of LLM Assisted Source Code Shipra Sharma [email protected] Balwinder Sodhi Department of Computer Science and Engineering Indian Institute of Technology Ropar India [email protected] ========================================================================================================================================================================== § INTRODUCTION On the journey of understanding the quantum gravity, one of the most remarkable idea is the holographic principle <cit.>, which relates the (d+1)-dimensional quantum gravity with the d-dimensional quantum field theory. The most fruitful incarnation of the holographic principle is the AdS/CFT correspondence <cit.>, which equates the quantum gravity on the (d+1)-dimensional asymptotically anti-de Sitter (AdS) spacetime and the d-dimensional conformal field theory (CFT) on the asymptotic boundary. An important entry in the holographic dictionary is that the asymptotic symmetry of the bulk theory agrees with the symmetry of the boundary theory. The constraints from symmetry are powerful, and many universal results can be obtained in a general way together with other constraints. In the study of holographic description of asymptotically flat gravity, inspired by the success of the role of asymptotic symmetry played in the AdS/CFT correspondence, the study of the asymptotic symmetry in the asymptotically flat spacetime, known as the Bondi–van der Burg–Metzner–Sachs symmetry <cit.>, receives much interest in the last few years. A simpler version of the asymptotically flat gravity is the three-dimensional BMS_3 symmetry. Based on the BMS_3 symmetry, the three-dimensional flat holography was proposed <cit.> that the three-dimensional asymptotic flat gravity is holographically described by a two-dimensional quantum field theory governed by the BMS_3 symmetry, known as the BMS field theory (BMSFT) or Carrollian conformal field theory, since the BMS_3 algebra is isomorphic to the Carrollian conformal algebra. This is an infinite-dimensional algebra, and the constraints from it lead to powerful constraints in the study of the BMS field theories. One important probe in the AdS/CFT correspondence is the holographic entanglement entropy. The Ryu-Takayanagi formula <cit.> proposed that the entanglement entropy in the boundary corresponds to the area of a minimal surface in the bulk. In the case of flat holography, the analogue of the Ryu-Takayanagi formula was proposed in <cit.>. On the BMS field theory side, the entanglement entropy for a single interval on the cylinder or on the plane in the vacuum state can be obtained with the help of replica trick <cit.>. The entanglement entropy is a good measurement of entanglement only when the system is in a pure state. In practice, however, it is always thermally polluted. In this paper, we are interested in the entanglement entropy for a single interval in the thermal state. Since there is a thermal circle and a spatial circle, this task is generally very difficult. However, in the low-temperature limit β_ϕ≫ L,β_u/β_ϕ≤ O(1), the leading thermal correction to the Rényi entropy is dominated by the first excited state and calculable. Here, L is the circumference of the cylinder coordinated by ϕ and u, and β_ϕ and β_u are the lengths of the thermal circle along the ϕ- and u-directions. 
Inspired by the universal results of the thermal correction to the entanglement entropy in the low-temperature limit in CFT <cit.>, we use the replica trick to rewrite the leading term in the thermal correction as an correlation function on the branched covering space and work it out with the help of the uniformizing map. It turns out that the leading thermal correction to the Rényi entropy takes a universal form δS_n =n/1-n[( sinπl_ϕ/L/nsinπl_ϕ/n L )^2Δ e^2 πl_u ξ/L ( πl_ϕ/L -1/nπl_ϕ/nL ) -1] e^-2πβ_ϕΔ/L -2πβ_uξ/L, which only depends on the scaling dimension Δ and the boost charge ξ of the first excited state and the geometric configuration of the entanglement interval. The thermal correction to the entanglement entropy is obtained by δ S_E = δ S_n→ 1. As a double check, we also use the entanglement first law to translate the calculation of the variation δ S_E of the entanglement entropy to the variation δ⟨K|_|$⟩ of the expectation value of the modular Hamiltonian. The latter can be calculated directly as the modular Hamiltonian for a single interval on the cylinder in the pure state can be written explicitly. We show that these two approaches agree. This paper is organized as follows. In Sec. 2, we give a quick review on BMS field theory. In Sec. 3, we calculate the thermal correction to the Rényi entropy in a type of low-temperature limit with the help of the replica trick and the uniformizing map. We also provide an alternative way to calculate the thermal correction to the entanglement entropy from the modular Hamiltonian and the entanglement first law as a double check. We conclude in Sec. 4 with a summary and some future directions. § REVIEW ON THE BMS FIELD THEORY In this section, we give a quick review on some aspects of the BMS field theory. ∙ BMSFT on the cylinder  A BMSFT on a cylinder(ϕ,u)with a circumference ϕ∼ϕ+L is a two-dimensional quantum field theory that is invariant under the following BMS transformations ϕ→ f(ϕ), u → f'(ϕ) u +g(ϕ). Here,f(ϕ)andg(ϕ)are periodic functions inϕwith the periodicityL. Then, the infinitesimal BMS transformation generators are obtained by taking the Fourier modes l_n = i L/2π e^i n 2π/Lϕ∂_ϕ -n e^i n 2π/Lϕ u∂_u, m_n =i L/2π e^i n 2π/Lϕ∂_u. ∙ BMSFT on the plane  The BMSFT on the(x,y)-plane is obtained from the following plane-to-cylinder transformation x =e^2π i /Lϕ, y = 2π i /L e^2π i/Lϕ u. The infinitesimal symmetry generators on the plane are l_n =-x^n+1∂_x -(n+1) x^n y ∂_y, m_n = -x^n+1∂_y. They form the BMS algebra without a central term via the Lie bracket [l_n ,l_m] =(n-m) l_m+n, [l_n, m_m] =(n-m) m_m+n, [m_n, m_m] =0. At the quantum level, these symmetry generatorsl_nandm_nwill become operatorsL_nandM_nwhich act on the state space. They form the BMS algebra with central chargesc_Mandc_Las [L_n ,L_m] =(n-m) L_m+n +c_L/12n(n^2-1)δ_m+n, [L_n, M_m] =(n-m) M_m+n+c_M/12n(n^2-1)δ_m+n, [M_n, M_m] =0. A primary operatorψof the boost chargeξand the conformal dimensionΔis specified by the following conditions [L_0, ψ] =Δψ, [M_0,ψ] =ξψ, [L_n, ψ] =0,  n>0, [M_n, ψ] =0,  n>0. Under a BMS transformation x̃ =f(x), ỹ = f'(x)y +g(x), a primary operatorψtransforms as ψ̃(x̃,ỹ) =(f')^-Δ e^-ξy f” +g'/f'ψ(x,y). On the plane, the currentsJ(x)andP(x)admit the following mode expansions J(x) = ∑_n L_n x^-n-2, P(x) =∑_n M_n x^-n-2. Under the BMS transformation (<ref>) and (<ref>), the currentsJ(x)andP(x)transform as <cit.>P̃(x̃) =( ∂ f/∂ x)^-2( P(x) -c_M/12{f,x}), J̃(x̃) =( ∂ f/∂ x)^-2( J(x) -c_L/12{f,x}) + ( ∂ g/∂ x)^-2( P(x) -c_M/12{g,x}) . 
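As a consistency check of the anomalous term in the transformation law just quoted, it is instructive to evaluate the Schwarzian derivative of the plane-to-cylinder map explicitly; the short computation below is added for the reader's convenience and is not part of the original text. For x=f(\phi)=e^{2\pi i\phi/L},
\begin{aligned}
\frac{dx}{d\phi}&=\frac{2\pi i}{L}\,x,\qquad
\frac{d^{2}x}{d\phi^{2}}=\Big(\frac{2\pi i}{L}\Big)^{2}x,\qquad
\frac{d^{3}x}{d\phi^{3}}=\Big(\frac{2\pi i}{L}\Big)^{3}x,\\
\{x,\phi\}&=\frac{x'''}{x'}-\frac{3}{2}\Big(\frac{x''}{x'}\Big)^{2}
=\Big(\frac{2\pi i}{L}\Big)^{2}-\frac{3}{2}\Big(\frac{2\pi i}{L}\Big)^{2}
=-\frac{1}{2}\Big(\frac{2\pi i}{L}\Big)^{2}=\frac{2\pi^{2}}{L^{2}},\\
\frac{c_{L}}{12}\{x,\phi\}&=\frac{\pi^{2}c_{L}}{6L^{2}},\qquad
\frac{c_{M}}{12}\{x,\phi\}=\frac{\pi^{2}c_{M}}{6L^{2}}.
\end{aligned}
The constant \pi^{2}c_{L}/(6L^{2}) is precisely the central-charge term in the cylinder one-point function \langle J(\phi)\rangle computed later in the modular Hamiltonian section (and \pi^{2}c_{M}/(6L^{2}) the one in \langle P(\phi)\rangle), and it is also the origin of the -c_{L}/24 and -c_{M}/24 shifts in the cylinder charges L_{0}^{cyl} and M_{0}^{cyl} quoted below.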
∙ State-operator correspondence  On the(x,y)-plane, the in-state corresponds to an operator inserted atx=0. From the plane-to-cylinder map (<ref>), in the cylinder coordinate, the in-state is inserted atϕ=i∞. Similarly, the out-state is inserted atϕ=-i∞in the cylinder coordinate. § THERMAL CORRECTIONS TO THE RÉNYI ENTROPY In this section, we use the replica trick and the uniformizing map to calculate the thermal correction to the Rényi entropy in the BMSFT for a single intervalon the cylinder with circumferenceL. §.§ Thermal Corrections to Rényi Entropy in CFT_2 Before we continue our calculation of the thermal correction to the Rényi entropy for a single interval on the cylinder in the BMSFT, we would like to first review the similar calculation in the case of CFT_2<cit.> first. We assume that the theory is put on a cylinder with the circumferenceL, coordinatized byw=x-it, the thermal density matrix written in terms of a complete set of states is ρ =1/(e^-β H)∑_|ϕ|⟩ |ϕ|⟨%s|⟩ϕ| e^-β E_ϕ. The Hamiltonian on the cylinder in the CFT is the combination of the left- and the right-moving zeroth-level Virasoro generators and the central charge, H =2π/L( L_0 +L̅_0 -c/12). Here, we have assumed thatc_L=c_R=c. With the assumptions that there exists a unique ground state|0|$⟩, and that the spectrum of conformal dimensions Δ=h+h̅ is positive and gapped from the smallest positive value, there should exist an operator ψ of conformal weights (h,h̅) carrying this smallest Δ. This ψ has the smallest energy E_ψ=2π/L(Δ -c/12). Then, in the low-temperature limit β≫ L, the thermal density matrix admits the following expansion ρ= |0|⟨%s|⟩0| +|ψ|⟨%s|⟩ψ|e^-2πΔβ/L+⋯/1 +e^-2πΔβ/L +⋯. We consider the entanglement region to be a single interval with two endpoints ∂_- : w=w̅=w_1,   ∂_+ : w=w̅=w_2. For convenience, we also introduce the rescaled endpoints θ_1,2 = 2πw_1,2/L and their difference l=w_2-w_1. The trace of the reduced density matrix ρ_ can be expanded according to the expansion (<ref>) of the thermal density matrix as ρ_A^n = [ _(|0|⟨%s|⟩0| +|ψ|⟨%s|⟩ψ|e^-2πΔβ/L+⋯) ]^n/(1 +e^-2πΔβ/L +⋯)^n =(_ |0|⟨%s|⟩0|)^n [1+ ( (_ |ψ|⟨%s|⟩ψ| (_ |0|⟨%s|⟩0|)^n-1)/(_ |0|⟨%s|⟩0|)^n -1 ) n e^-2πΔβ/L +⋯]. The first term (_ |0|⟨%s|⟩0|)^n is just the zero-temperature Rényi entropy. And the expression in the second term (_ |ψ|⟨%s|⟩ψ| (_ |0|⟨%s|⟩0|)^n-1)/(_ |0|⟨%s|⟩0|)^n, which determines the leading thermal correction, can be recasted as a 2-point function of the operator ψ(w) on an n-sheeted copy C_n of the cylinder branched over via the state operator correspondence |ψ|∼⟩lim_t→-∞ψ(x,t)|0|$⟩ and⟨ψ|| ∼lim_t→∞⟨0||ψ(x,t)as (_B |ψ|⟨%s|⟩ψ| (_B |0|⟨%s|⟩0|)^n-1)/ (_B |0|⟨%s|⟩0|)^n =lim_t_2 →∞, t_1 → -∞⟨ψ|(w_2,w̅_2)ψ(w_1,w̅_̅1̅) |_⟩C_n/⟨ψ|(w_2,w̅_2)ψ(w_1,w̅_̅1̅) |_⟩C_1. To calculate the 2-point function⟨ψ|(w_2,w̅_2)ψ(w_1,w̅_̅1̅) |_⟩C_non then-sheeted copyC_n, we can use the following uniformizing map ζ^(n) =( e^2π i w/L -e^iθ_2/e^2π i w/L -e^iθ_1)^1/n to sendC_nto theζ-plane. The 2-point function on a plane in the CFT is just ⟨ψ|(ζ^(n)_2,ζ̅^(n)_2)ψ(ζ^(n)_1,ζ̅^(n)_1) |=⟩1/(ζ^(n)_21)^2h(ζ̅^(n)_21)^2h̅. Mapping it back to then-sheeted copyC_nalong the uniformizing map (<ref>), we obtain the expression of the 2-point function onC_nas ⟨ψ|(w_2,w̅_2)ψ(w_1,w̅_̅1̅) |_⟩C_n =(d ζ_1/d w_1d ζ_2/d w_2)^h/ζ_12^2h(d ζ̅_1/d w̅_1d ζ̅_2/d w̅_2)^h̅/ζ̅_12^2h̅. 
Substituting this into (<ref>),we have ⟨ψ|(w_2,w̅_2)ψ(w_1,w̅_̅1̅) |_⟩C_n/⟨ψ|(w_2,w̅_2)ψ(w_1,w̅_̅1̅) |_⟩C_1 = [ 1/n^2h( ζ_1^(n)ζ_2^(n)/ζ_1^(1)ζ_2^(1))^h( ζ_2^(1) -ζ_1^(1)/ζ_2^(n) -ζ_1^(n))^2h] · [complex conjugate]. After taking the limitt_1→-∞andt_2→∞, we have ⟨ψ|(i∞)ψ(-i∞) |_⟩C_n/⟨ψ|(i∞)ψ(-i∞) |_⟩C_11/n^2Δ =( sinθ_2 -θ_1/2/sinθ_2 -θ_1/2n)^2Δ. Then, from (<ref>) and the definition of the Rényi entropy, we obtain the leading thermal correction to the Rényi entropy as δ S_n = 1/1-n( sin^2Δ(π l/L)/n^2Δ-1sin^2Δ(π l/nL)-n ) e^-2πΔβ/L+ o(e^-2πΔβ/L). In this calculation, suitable assumptions about the spectrum have been proposed so that the leading contribution to the thermal correction of the Rényi entropy is captured by the correlation function of the lightest operator on the branched covering space. The latter is further worked out with the help of the uniformizing map that sends thisn-sheeted copy space to the plane. §.§ Thermal Correction Dominated by the Singlet Primary Consider a two-dimensional BMS filed theory on the cylinder coordinated by(ϕ, u)with circumferenceLϕ∼ϕ +L. To introduce the temperature, we consider the following thermal identification [Here, we consider the case that β_u takes the same sign as β_ϕ, because we are going to assume the boost charge ξ is bounded from below. If ξ is bounded from above instead, then we should consider (ϕ, u) ∼ (ϕ +iβ_ϕ, u -iβ_u ) instead.] (ϕ, u) ∼ (ϕ +iβ_ϕ, u +iβ_u ), the corresponding thermal density matrix is ρ = e^-β_ϕ L_0^cyl -β_u M_0^cyl/( e^-β_ϕ L_0^cyl -β_u M_0^cyl) . Here,L_0^cylandM_0^cylare charges generating translations alongϕandudirections respectively. Under the plane-to-cylinder transformation of the currents (<ref>), these cylinder translation generators are related to the canonical BMS generatorsL_0andM_0as L_0^cyl =2π/L( L_0-c_L/24), M_0^cyl=2π/L( M_0-c_M/24). Substituting this back into (<ref>), the thermal density matrix written in terms of canonical BMS generators is ρ = e^-β_ϕ2π/L( L_0-c_L/24) -β_u 2π/L( M_0-c_M/24)/( e^-β_ϕ2π/L( L_0-c_L/24) -β_u 2π/L( M_0-c_M/24)) = e^-β_ϕ2π/L L_0 -β_u 2π/LM_0/( e^-β_ϕ2π/LL_0 -β_u 2π/LM_0). ∙ Low Temperature Expansion  We consider the BMSFT whose spectrum satisfies the following conditions so that the low-temperature expansion of the thermal density matrix is dominated by the first excited state. – There exists a unique ground state |0⟩, around which we can turn on a small temperature and expand the thermal density matrix. – In the spectrum both the conformal weight Δ and the boost charge ξ are bounded from below. – There exists a gap between the ground state |0⟩ and the lightest state |ψ⟩ corresponding to the primary operator ψ labelled by (Δ,ξ). The last condition requires more explanation. As we turn on a small temperature, there might be several candidate lightest states above the ground state. Depending on the approach to the low-temperature limit, the operatorψwith the smallestΔ+β_u/β_ϕ ξexcites first. There are still several difficulties to obtain an expansion dominated byψ. First, due to the non-unitary nature, althoughM_0is self-adjoint, it is not diagonalizable. For example, there are two descendants ofψat the level 1,M_-1|ψ⟩andL_-1|ψ⟩.M_0acts on them non-diagonally as a Jordan block M_0 [ M_-1|ψ⟩; L_-1|ψ⟩ ] = [ ξ 0; 1 ξ ][ M_-1|ψ⟩; L_-1|ψ⟩ ]. As a consequence, the thermal density matrixρis also non-diagonalizable, and it is not possible to expandρin terms of eigenstates{Φ}ofL_0andM_0such as ρ∝∑_Φ e^-2π/L(β_ϕ L_0^Φ +β_u M_0^Φ) |Φ|⟨%s|⟩Φ| . 
Another problem is that there are infinitely many descendants created byM_-k's with the same boost chargeξasψitself, becauseM_-kall commute withM_0. So, in a low-temperature limit withβ_u ≫β_ϕ, these descendants will not be suppressed. At this point, we will not try to answer the interesting question of the meaning of a non-diagonalizable density matrix. Instead, we restrict to a particular type of low-temperature limit to avoid the above difficulties. – Consider the following low-temperature limit β_ϕ≫ L, β_u/β_ϕ≤ O(1). Then, the primary operator ψ dominates the thermal density matrix expansion. Under these assumptions, the thermal density matrix is dominated byψat this low temperature as ρ = |0⟩⟨ 0| +|ψ⟩⟨ψ| e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯/1 +e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯. ∙ Entanglement measurements  Consider the entanglement region, the reduced density matrix onis ρ_ = _ρ. We are interested in the following entanglement measurements: the Renyi entropy S_n = 1/1-nlog(ρ_A^n) and the entanglement entropy S_E= -ρ_A logρ_A =S_n→ 1.   Concretely, we consider the entanglement region to be a single intervalAspecified by its endpoints ∂_- A =(ϕ_-,u_-), ∂_+ A =(ϕ_+,u_+). For convenience, let us introduce the range of the intervalin theϕ- and theu-directions as l_ϕ =ϕ_+ -ϕ_-,   l_u=u_+ -u_-. Under the above low-temperature expansion (<ref>),ρ_^ncan be expanded as ρ_^n =[_(|0⟩⟨ 0| +|ψ⟩⟨ψ| e^-2πβ_ϕΔ/L -2πβ_uξ/L +⋯)]^n /(1 +e^-2πβ_ϕΔ/L -2πβ_uξ/L +⋯)^n =(_ |0⟩⟨ 0|)^n [1+ ( [_ |ψ⟩⟨ψ| (_ |0⟩⟨ 0|)^n-1]/ (_ |0⟩⟨ 0|)^n -1 ) n e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯]. The first term(_ |0⟩⟨0|)^ncorresponds to the ground-state Rényi entropy. The second term determines the leading contribution to the low-temperature thermal correction. To calculate this term, we use the replica trick and the state-operator correspondence to replace it by a 2-point function ofψon then-sheeted copyC_nof the original space branched over∂. Use the state-operator correspondence, the in-state|ψ⟩corresponds to |ψ⟩=lim_ϕ→ i∞ψ(ϕ,u)|0|,⟩ and the out-state⟨ψ||corresponds to ⟨ψ||=lim_ϕ→ -i∞⟨0||ψ(ϕ,u). Together with the replica trick, the coefficient in the thermal correction term can be written as [_ |ψ|⟨%s|⟩ψ| (_ |0|⟨%s|⟩0|)^n-1]/ (_ |0|⟨%s|⟩0|)^n =lim_ϕ_1→ +i∞ ϕ_2→ -i∞ [_( ψ(ϕ_1,u_1)|0|⟨%s|⟩0|ψ(ϕ_2,u_2) ) (_ |0|⟨%s|⟩0|)^n-1]/ (_ |0|⟨%s|⟩0|)^n = lim_ϕ_1→ +i∞ ϕ_2→ -i∞⟨ψ|(ϕ_2,u_2) ψ(ϕ_1,u_1) |_⟩C_n/⟨ψ|(ϕ_2,u_2) ψ(ϕ_1,u_1) |_⟩C_1.   Now, we can use the uniformizing map to calculate this 2-point function ofψonC_n. ∙ Uniformizing Map  To calculate the 2-point function onC_n, we use the following uniformizing map fromC_nto the plane, x =(e^2π i ϕ/L -e^2π i ϕ_-/L/e^2π i ϕ/L-e^2π i ϕ_+/L)^1/n =: f^(n)(ϕ) y = ( u -l_u/2sinπ l_ϕ/Lsinπ( 2ϕ -ϕ_- -ϕ_+)/L)d/d ϕ f^(n)(ϕ). This transformation can be decomposed into several steps. In thex-direction, the plane-to-cylinder mapz=e^2πiϕ/Lmaps theS^1-coordinateϕto the analytically continued complexz-plane. Then, on this complex plane, thez-coordinate of∂becomesz_±=e^2πi ϕ_±/L. To introduce then-sheeted copy of this analytically continued space branched overz_±, we apply anSL(2,ℂ)transformationw=z-z_-/z-z_+which sendsz_-to0andz_+to∞, and take then-th root of it. In they-direction, the subtraction( u -l_u/2sinπl_ϕ/Lsinπ( 2ϕ-ϕ_- -ϕ_+)/L )cancels the rangel_uof the intervalAinu-direction. The 2-point function of the primary operators on the plane is determined by the symmetry up to a normalization factorN, ⟨ψ|(x_1,y_1)ψ(x_2,y_2) |=⟩N x_12^-2Δe^-2ξy_12/x_12. 
Mapped to the cylinder coordinate along (<ref>), the primary operatorψtransforms as ψ(ϕ, u) =( d x/d ϕ)^Δe^-ξyd^2ϕ/dx^2/dϕ/dx -ξd/dϕ( l_u/2sinπ l_ϕ/Lsinπ( 2ϕ -ϕ_- -ϕ_+)/L)ψ(x,y) =f^(n)'Δ(ϕ) e^-ξ( u f^(n)'(ϕ)d(f^(n)'(ϕ)^-1)/dϕ +π l_u/L sinπ l_ϕ/Lcosπ( 2ϕ -ϕ_- -ϕ_+)/L)ψ(x,y). Thus, the correlation function onC_nbecomes ⟨ψ|(ϕ_2,u_2) ψ(ϕ_1,u_1) |_⟩C_n = N f^(n)'Δ(ϕ_2) e^-ξ( u_2 f^(n)'(ϕ)d(f^(n)'(ϕ_2)^-1)/dϕ_2 +π l_u/L sinπ l_ϕ/Lcosπ( 2ϕ_2 -ϕ_- -ϕ_+)/L) × f^(n)'Δ(ϕ_1) e^-ξ( u f^(n)'(ϕ_1)d(f^(n)'(ϕ_1)^-1)/dϕ_1 +π l_u/L sinπ l_ϕ/Lcosπ( 2ϕ_1 -ϕ_- -ϕ_+)/L) x_12^-2Δe^-2ξy_12/x_12. Substituting this into (<ref>) and take the limit, we obtain the the correction term [_ |ψ|⟨%s|⟩ψ| (_ |0|⟨%s|⟩0|)^n-1]/ (_ |0|⟨%s|⟩0|)^n = lim_ϕ_1→ +i∞ ϕ_2→ -i∞⟨ψ|(ϕ_2,u_2) ψ(ϕ_1,u_1) |_⟩C_n/⟨ψ|(ϕ_2,u_2) ψ(ϕ_1,u_1) |_⟩C_1 = ( sinπ l_ϕ/L/nsinπ l_ϕ/n L)^2Δ e^2 π l_u ξ/L( π l_ϕ/L -1/nπ l_ϕ/nL) . In the explicit calculation, to take the limitϕ_1→+i∞, ϕ_2→-i∞, we have setϕ_1=i T_1andϕ_2 =-i T_2and expand the above in order ofϵ_1=e^-2πT_1/Landϵ_2=e^-2πT_2/L. Substituting this back to the definition (<ref>) of the Rényi entropy, we obtain the thermal correction to the Rényi entropy δ S_n =n/1-n[( sinπ l_ϕ/L/nsinπ l_ϕ/n L)^2Δ e^2 π l_u ξ/L( π l_ϕ/L -1/nπ l_ϕ/nL) -1] e^-2πβ_ϕΔ/L -2πβ_uξ/L. The thermal correction to the entanglement entropy can be obtained by taking then→1limit, δ S_E =[ 2Δ(1-π l_ϕ/Lπ l_ϕ/L) + 2 ξ( π^2 l_u l_ϕ/L^2 sin^2π l_ϕ/L-π l_u/Lπ l_ϕ/L) ] e^-2πβ_ϕΔ/L -2πβ_uξ/L. For a pure state,S_n()=S_n(). However, the thermal correction contribution violates this equality. The compliment ofis an interval of rangeL-l_ϕin theϕ-direction and-l_uin theu-direction. SinceδS_n(L-l_ϕ,-l_u)≠δS_n (l_ϕ,l_u), the Rényi entropy is indeed thermally polluted. §.§ Thermal Correction Dominated by the Multiplet Primary Previously, we obtained the thermal correction to the Rényi entropy and the entanglement entropy in the case that a singlet primary dominates the thermal correction. Now, we consider the case that a multiplet primary dominates the thermal correction. As we will see, the thermal correction to the Rényi entropy is just that of a singlet multiplied by the rank of the multiplet. However, this seemingly intuitive result is not that trivial. Actually, the off-diagonal terms dominate the expansion of the thermal density matrix, but they just do not contribute to the thermal correction to the Rényi entropy. TheM_0acts on a rank-rprimary multiplet=(O_0,O_1,⋯,O_r-1)^Tas M_0 | O_a | ⟩= ξ | O_a |+⟩ | O_a-1|,⟩ a=1,⋯,r-1, M_0 | O_0 | ⟩= ξ | O_0 |,⟩  a=0. Or in a more compact form,M_0 =(ξ_r +_r) . Here,_ris the rank-ridentity matrix, and_ris the rank-rJordan cell _r= [ 0 ; 1 0 ; ⋱ ⋱; 1 0; ]_r× r, which is nilpotent(_r)^r=0. The action ofe^-β_ϕ2π/LL_0-β_u 2π/LM_0on the primary part of this multiplet becomese^-β_u 2π/L_r e^-2πβ_ϕ/LΔ-2πβ_u/Lξ. The matrix parte^-β_u 2π/L_rcan be expanded into finitely many terms as e^-β_u 2π/L_r =∑_k=0^r-1(-β_u 2π/L)^k/k! (_r)^k. Sinceβ_u ≫L, it seems that thek=r-1term dominates the expansion (<ref>). However, as we will see later, although this^r-1term dominates the expansion of the matrix, it does not contribute to the thermal correction term to the Rényi entropy after taking trace. It is the^0term that dominates the thermal correction. Explicitly, thek=r-1term is (-β_u 2π/L)^r-1/(r-1)! (_r)^k= (-β_u 2π/L)^r-1/(r-1)![ 0 0; ⋮ ⋱; 0 ⋱; 1 0 ⋯ 0 ]_r× r = (-β_u 2π/L)^r-1/(r-1)! |O_0|⟨%s|⟩O_r-1^∨|. Here, the dual basis⟨O|_a^∨|is defined by ⟨O|_a^∨ |O_b|=⟩δ_a,b. 
Putting everything together, the multiplet version of the low-temperature expansion of density matrix (<ref>) is ρ = |0⟩⟨ 0| +|O_0|⟨%s|⟩O_r-1^∨ | (-2πβ_u/L)^r-1/(r-1)! e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯/1 +r e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯. We can use the inner product between the in-state and the out-state within a multiplet <cit.>⟨O|_a | O_b |=⟩δ_a+b,r-1 to transfrom from the dual basis to the out-states, ⟨O|_a^∨ |=⟨O|_r-1-a|. Then, the density matrix can be written as ρ = |0⟩⟨ 0| +|O_0|⟨%s|⟩O_0| (-2πβ_u/L)^r-1/(r-1)! e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯/1 +r e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯. The correlation function <cit.> among the rank-rmultiplet is ⟨ O_a(x_x,y_x)O_b(x_1,y_1)⟩ ={[ 0 ; d_r x_12^-2Δ_i e^-2ξ_iy_12/x_121/q!(-2y_12/x_12)^q, ]. , q=a+b-r+1. In particular, forr>1⟨O|_0(x,y) O_0(x',y') |=⟩0. So, we see this^r-1term does not contribute to the thermal correction term at all. Moreover, it turns out that all the off-diagonal terms do not contribute to the leading correction to the Rényi entropy. To see this, consider the^ksummand in (<ref>) written in the basis _r^k = ∑_a=0^r-1-k |O_a|⟨%s|⟩O_a+k^∨| =∑_a=0^r-1-k |O_a|⟨%s|⟩O_r-1-a-k|. Sinceq=(a) +(r-1-a-k) =r-1-k ≥r-1and the equality holds only ifk=0, the correlation function⟨O|_a(x,y) O_r-1-a-k(x',y')|$⟩ vanishes for any k>0 because of (<ref>). Only for k=0, the correlation function does not vanish, i.e., ⟨O|_a(x_2,y_2) O_r-1-a(x_1,y_1) |=⟩N x_12^-2Δ e^-ξy_12/x_12,  a=0,⋯,r-1, which is the same as the correlation function of a singlet (<ref>). So, the thermal correction to the Rényi entropy is just that of a singlet multiplied by r, δS_n =rn/1-n[( sinπl_ϕ/L/nsinπl_ϕ/n L )^2Δ e^2 πl_u ξ/L ( πl_ϕ/L -1/nπl_ϕ/nL ) -1] e^-2πβ_ϕΔ/L -2πβ_uξ/L. We see that when a multiplet primary dominates the low-temperature expansion, although the off-diagonal contributions dominate the correction to the thermal density matrix, they do not contribute to the correction of the Rényi entropy. The result is just that of the singlet multiplied by the rank r. It will be interesting to find if there exist any other entanglement measurements to which the off-diagonal contributions do not vanish. We leave this to future work. §.§ Comments on Another Limit So far, we consider the particular low-temperature limit (<ref>), but there is also a complimentary choice to reach the low-temperature limit so that the boost charge ξ dominates the first excited state. An extreme case is that the thermal circle is purely along the u-direction. The thermal circle is u ∼u+ iβ_u,   β_u ≫L. Then, the density matrix is proportional to e^-β_u M_0. In this case, any primary ψ with boost charge ξ>0 is heavier than not only the vacuum state, but all the descendants of the vacuum (e.g., M_-k⃗|0|$⟩), because these descendants of the vacuum all have the boost chargeξ=0. If in the spectrum the boost charge is gapped, then in theβ_u ≫Llimit, the density matrix is dominated by the vacuum block, and all the vacuum descendants are just as heavy as the vacuum, thus a low-temperature expansion is hardly accessible. However, the result of such thermal correction to the entanglement entropy might be even more universal than the previous case, as it depends only on the vacuum block and the algebraic structure, not on the details of the spectrum. On the other hand, if there exist any other primary operators with boost charge0, then the density matrix is dominated by these blocks together with the vacuum block. 
Since the operatore^-β_u M_0does not care about the conformal weight at all, the results of the thermal correction might be similar to the case that only the vacuum block dominates. To summarize, in this type of low-temperature limit, since all descendants of the vacuum are equally heavy measured by their boost charge, an honest calculation must include them all. Even it is still possible to expand the density matrixe^-β_u M_0organized by the orders of the Taylor expansion and the levels of the descendants, it is still hard to trace outand obtain the reduced density matrix onin a workable way. However, since we expect the result to be universal, once we work it out in one explicit example, hopefully we might find a solution according to the answer. Currently, since this type of thermal circle is in BMSFT is still not well understood, we leave this to future work. §.§ Modular Hamiltonian Approach In this subsection, we calculate the thermal correction to the entanglement entropy from the modular Hamiltonian. As a double check, the result agrees with the previous calculation (<ref>). The modular Hamiltonian for the reduced density matrix onis defined to be K_ = -logρ_. From the entanglement first law, for an infinitesimal variation of the state, the calculation of the variation of the entanglement entropy can be replaced by the variation of the expectation value of the modular Hamiltonian δ S_=δ⟨K|_|.⟩ In general, the modular HamiltonianK_cannot be written down in terms of local data. Only in theories with enough symmetry the modular Hamiltonian has an explicit formula for simple entanglement regions and special states. In particular, the modular Hamiltonian in BMSFT <cit.> can be written down explicitly for a single interval on the cylinder under the vacuum state. For the single intervalin the vacuum state on the cylinder with circumferenceL, the modular HamiltonianK_can be written as a local integral of the modular generatorζ_against the currentsJ(ϕ)andP(ϕ)as K_ =∫_ϕ_-^ϕ_+ dϕ[ L/2πcosπ l_ϕ/L-cosπ(2ϕ -ϕ_+ -ϕ_-)/L/sinπ l_ϕ/L J(ϕ) + l_u/2 π l_ϕ/Lcosπ(2ϕ -ϕ_+ -ϕ_-)/L -π l_ϕ/L/sinπ l_ϕ/L P(ϕ) ]. To calculate the variation of the modular Hamiltonian, we need to calculate the variation of the currentsJ(ϕ)andP(ϕ), δ⟨J||=⟩⟨J||_⟩ρ -⟨J||_⟩|0|⟩, δ⟨P||=⟩⟨J||_⟩ρ -⟨P||_⟩|0|⟩. Substitute the low-temperature expansion (<ref>) of the thermal density matrixρ, δ⟨J|(ϕ) |=⟩e^-2πβ_ϕ/LΔ -2πβ_u/Lξ( ⟨J|(ϕ) |_⟩|ψ|⟩ - ⟨J|(ϕ) |_⟩| 0|⟩), δ⟨P|(ϕ) |=⟩e^-2πβ_ϕ/LΔ -2πβ_u/Lξ( ⟨P|(ϕ) |_⟩|ψ|⟩ - ⟨P|(ϕ) |_⟩| 0|⟩). So, we need to calculate the difference of the expectation values of the currents between the primary state|ψ|$⟩ and the vacuum | 0|$⟩. For this, we apply the plane-to-cylinder transformation (<ref>) and insert the primary operatorψat the origin of the(x,y)-plane. Recall the mode expansion of the currents on the plane J(x) =∑_n L_n x^-n-2,   P(x)=∑_n M_n x^-n-2. Thus, the expectation values of the currents on the plane under a primary state are ⟨J|^pl(x)|=⟩x^-2Δ,  ⟨P|^pl(x)|=⟩x^-2ξ. Applying the transformation of currents (<ref>), the expectation values of currents on cylinder become ⟨J|(ϕ) | ⟩= ( ∂ x/∂ϕ)^2 J^pl(x) +c_L/12{x,ϕ}=-4π^2/L^2Δ +π^2/L^2c_L/6, ⟨P|(ϕ) | ⟩= ( ∂ x/∂ϕ)^2 P^pl(x) +c_M/12{x,ϕ}=-4π^2/L^2ξ +π^2/L^2c_M/6. Thus, the difference of the expectation values of the currents between|ψ|$⟩ and |0|$⟩ are ⟨J|(ϕ) |_⟩|ψ|⟩ - ⟨J|(ϕ) |_⟩| 0|⟩ =-4π^2/L^2Δ, ⟨P|(ϕ) |_⟩|ψ|⟩ -⟨P|(ϕ) |_⟩|0|⟩ =-4π^2/L^2ξ. 
Substituting this into (<ref>), we obtain the variation of the currents δ⟨J|(ϕ) | ⟩=e^-2πβ_ϕ/LΔ -2πβ_u/Lξ( ⟨J|(ϕ) |_⟩|ψ|⟩ - ⟨J|(ϕ) |_⟩| 0|⟩) =-4π^2/L^2Δ e^-2πβ_ϕ/LΔ -2πβ_u/Lξ , δ⟨P|(ϕ) | ⟩=e^-2πβ_ϕ/LΔ -2πβ_u/Lξ( ⟨P|(ϕ) |_⟩|ψ|⟩ - ⟨P|(ϕ) |_⟩| 0|⟩) =-4π^2/L^2ξ e^-2πβ_ϕ/LΔ -2πβ_u/Lξ. For the modular Hamiltonian (<ref>), the variation of the modular Hamiltonian is δ⟨K|_|=⟩ ∫_ϕ_-^ϕ_+ dϕ[ L/2πcosπ l_ϕ/L-cosπ(2ϕ -ϕ_+ -ϕ_-)/L/sinπ l_ϕ/Lδ⟨J|(ϕ) |+⟩l_u/2 π l_ϕ/Lcosπ(2ϕ -ϕ_+ -ϕ_-)/L -π l_ϕ/L/sinπ l_ϕ/Lδ⟨P|(ϕ) |⟩] = [ 2Δ(1-π l_ϕ/Lπ l_ϕ/L) + 2 ξ( π^2 l_u l_ϕ/L^2 sin^2π l_ϕ/L-π l_u/Lπ l_ϕ/L) ]e^-2πβ_ϕ/LΔ -2πβ_u/Lξ . This result agrees with the previous calculation (<ref>) of the variation of the entanglement entropy. § DISCUSSION In this paper, we consider the single interval entanglement region on the cylinder in the BMSFT. We find a suitable low-temperature limit under which an expansion of the thermal density matrix dominated by the first excited operator is possible. In this limit, we calculate the thermal correction to the Rényi entropy by the replica trick and the uniformizing map. As a double check, for the thermal correction to the entanglement entropy, we also provide an alternative calculation by the modular Hamiltonian and the entanglement first law. Though we provide a double check from another calculation of the entanglement entropy by modular Hamiltonian, it will be more satisfactory to have a numerical check in the concrete model. Despite the fact that several concrete BMSFT models have been found and studied recently, it seems that we still do not have a satisfactory understanding of their underlying Hilbert space structure and the correct way to discretize the models in a meaningful way. We leave this to future work until we have a better understanding of these concrete models. Also, a concrete model analysis might be helpful to understand another type of low-temperature limit in Sec. <ref>. Another interesting thing is to test this thermal correction term in the holographic entanglement proposals. For the finite temperature, the calculation on the cylinder is secretly on a torus, and the replica trick fails as the covering space is of high genus. However, a holographic calculation with temperature in the bulk is still possible using the geometric picture. Hence, a comparison between the low-temperature result in the bulk and boundary is possible. I would like to thank Peng-xiang Hao, Wenxin Lai and Jun Nian for useful discussions. I would like to specially thank Jun Nian for proofreading the manuscript. This work was supported in part by the NSFC under grant No. 12147103. JHEP
http://arxiv.org/abs/2307.06101v1
20230712114915
Air Bumper: A Collision Detection and Reaction Framework for Autonomous MAV Navigation
[ "Ruoyu Wang", "Zixuan Guo", "Yizhou Chen", "Xinyi Wang", "Ben M. Chen" ]
cs.RO
[ "cs.RO" ]
Experimental detectability of spin current shot noise Sebastian T. B. Goennenwein^1 August 12, 2023 ===================================================== empty empty Autonomous navigation in unknown environments with obstacles remains challenging for micro aerial vehicles (MAVs) due to their limited onboard computing and sensing resources. Although various collision avoidance methods have been developed, it is still possible for drones to collide with unobserved obstacles due to unpredictable disturbances, sensor limitations, and control uncertainty. Instead of completely avoiding collisions, this article proposes Air Bumper, a collision detection and reaction framework, for fully autonomous flight in 3D environments to improve the safety of drones. Our framework only utilizes the onboard inertial measurement unit (IMU) to detect and estimate collisions. We further design a collision recovery control for rapid recovery and collision-aware mapping to integrate collision information into general LiDAR-based sensing and planning frameworks. Our simulation and experimental results show that the quadrotor can rapidly detect, estimate, and recover from collisions with obstacles in 3D space and continue the flight smoothly with the help of the collision-aware map. § INTRODUCTION MAVs have gained increasing popularity for their ability to access and operate in environments that are difficult or impossible for humans to reach, making them valuable tools in various fields like infrastructure inspection <cit.>, subterranean exploration <cit.>, and search and rescue <cit.>, etc. However, safety becomes a critical concern for MAVs when operating in such complex and cluttered environments. These scenarios present a significant challenge for MAVs to conduct safe and collision-free flights. To address this challenge, much research has focused on utilizing onboard sensors such as LiDAR <cit.>, stereo cameras, and RGB-D cameras <cit.> for Simultaneous Localization and Mapping (SLAM); motion planning algorithms <cit.> have been developed to generate collision-free paths. Despite these efforts, MAVs are still susceptible to colliding with obstacles due to unpredictable disturbances, sensor limitations, and control uncertainty. Instead of dealing with MAV collision by completely avoiding it, increasing attention has been shifted to collision detection and reaction. In this paper, we introduce a unified IMU-based collision detection and reaction framework (Air Bumper) that estimates collision points and feeds the collision information into collision-aware volumetric mapping and general motion planning algorithms so that robots can move to the original target rather than get stuck by obstacles. The collision detection and estimation only rely on IMU data from the flight controller without requiring any external sensors. We also design and fabricate a fully-autonomous collision-resilient MAV with a 3D cage. The drone runs Air Bumper, the proposed IMU-based collision detection and reaction framework, in addition to general autopilot, SLAM, and motion planning algorithms. The proposed framework enables the drone to detect and react to unobserved collisions and update a collision-aware map for autonomous navigation after collisions (Fig. <ref>). The main contributions of this work are: * We propose Air Bumper, an IMU-based collision detection and reaction framework for autonomous MAV navigation in 3D environments. 
* We propose a collision-aware mapping method to utilize collision estimation as sensor information in the general autonomous navigation framework. * We design and fabricate a collision-resilient MAV that can autonomously navigate in unknown environments while sustaining collisions with obstacles. The rest of this paper is organized as follows. In Section <ref>, we review state of the art on MAV collision detection and reaction. In section <ref>, we introduce our overall framework's structure and the design of a fully-autonomous collision-resilient MAV. The IMU-based 3D collision detection and reaction methods are detailed in Section <ref> and <ref>, respectively. The simulation and experiment results that demonstrated the performance of Air Bumper are presented in Section <ref>. Finally, we draw some conclusions in Section <ref>. § RELATED WORKS In the face of possible collisions in flight, many researchers choose not to generate a collision-free path to avoid the collision but to design collision-resilient MAVs to deal with it. At the hardware level, there are many kinds of designs and structures to enhance collision resilience. As a high-speed rotating part, the propeller is the most vulnerable to damage in a collision. Therefore, propeller guards <cit.> are commonly used to protect it. At the same time, many cage-like structures are designed to provide more protection for the whole drone. Rigid cage structure <cit.> can use its strength to protect inside fragile parts, like sensors, flight controllers, and onboard computers. In addition to minimizing the impact of collisions through the hardware design discussed above, some researchers are also extracting environmental information from collisions in order to integrate it into the MAV perception system. Lew et al. in <cit.> proposed a contact-based inertial odometry (CIO), which can provide a usable but inaccurate velocity estimation for a hybrid ground and aerial vehicle performing autonomous navigation. In the flight, several not destructive collisions happen, and the controller can get updated information from collisions. The work in <cit.> analyzes the impact of collisions on visual-inertial odometry (VIO) and uses collision information to build a map with a downward camera for localization. In their experiment, two glass walls are included to present that the transparent objects may cause LiDAR to get an inaccurate distance. Still, collision mapping can help MAVs detect these transparent walls. Authors in <cit.> introduce hall sensors to detect collisions and estimate the intensity and location of the collision to realize reaction control. However, these works tend to navigate using only IMU or directly use collision data to perform reaction control, which makes the collision information hard to be recorded and reused. Although the method proposed in <cit.> successfully achieves collision recording for further flight in a laboratory environment using motion capture systems, the lack of integration with online sensing and planning modules limits its applicability in real-world settings. Additionally, most of these works <cit.> focus on collision detection and characterization in a 2D environment. However, the obstacles in cluttered environments are often not on the same level as MAVs, which means that collisions can occur from any direction. In this work, we combine the Air Bumper framework with LiDAR-based sensing on a caged, collision-resilient MAV. 
This allows for collision detection and estimation in 3D space and the generation of smooth reaction trajectories with the help of collision-aware mapping. § SYSTEM OVERVIEW §.§ Overview of Air Bumper Framework The structure of our proposed collision detection and reaction framework, Air Bumper, is shown in Fig. <ref>. When MAV is flying in unknown environments, it may collide with obstacles due to the onboard sensors' limitations. In this condition, the collision detection part of our framework will use inertial data from the flight controller to estimate the collision points and feed the collision information into collision reaction modules. In collision reaction parts, the collision recovery control algorithm will utilize the direction of the collision point to command the MAV away from obstacles. Meanwhile, it will also generate a collision point cloud to the collision-aware mapping module so that the position of unobserved obstacles can be stored for further navigation. Using the updated collision-aware map, a general motion planning system can easily get the ability to deal with unobserved obstacles in 3D environments. §.§ Design of Collision-Resilient MAV The collision resilient MAV, shown in Fig. <ref>, is designed and fabricated using a customized frame and cage made of carbon fiber composite material <cit.>, 3D printed parts, and commercial electrical components. The quadrotor weighs 1.45 kg with a battery and 3D LiDAR. The frame and cage of the MAV are made of a composite material consisting of carbon fiber and PVC foam, which provides full coverage for onboard components while keeping the entire MAV lightweight. The counter of the frame is designed as circular, which makes the collision estimation more efficient and accurate. An autopilot, Kakute H7, is utilized as the low-level controller. NVIDIA Xavier NX module with carrier board is chosen as the onboard computer for high-level control, providing computing capabilities for Air Bumper, GPU-accelerated volumetric mapping, and motion planning algorithms. The whole MAV is powered by an ACE 4-cell 45C 5300 mAh LiPo battery. Livox Mid-360 LiDAR sensor has been selected to enable 360 of horizontal field of view (FOV) and 59 of vertical FOV. This allows for precise point cloud data collection and navigation in indoor environments. The collision-resilient MAV integrates state-of-the-art flight control, localization, and mapping techniques to achieve fully autonomous and safe flight in unknown environments. The autopilot using PX4 firmware receives pose estimation and targets and runs a low-level controller to generate actuator commands. Robot Operating System (ROS) framework runs on the onboard computer for high-level algorithms. FAST-LIO2 <cit.> SLAM algorithm is chosen to provide odometry data. To meet the required frequency, the framework employs an Extended Kalman filter (EKF) to fuse the odometry data with the onboard IMU. The high-frequency odometry (200 Hz) is then used by autopilot. A volumetric mapping module <cit.> provides GPU-accelerated incremental euclidean distance transform for corresponding online motion planning <cit.>. The volumetric mapping module is modified to work with collision-aware map in the Air Bumper framework. § IMU-BASED COLLISION DETECTION AND ESTIMATION IN 3D SPACE §.§ Collision Detection To make Air Bumper easier to implement on any platform, IMU, the most common drone sensor, is used to collect linear acceleration data on x, y, and z axes to detect the collisions rapidly. 
When a collision happens, the contact force causes an additional acceleration on the MAV, and the measured values on the corresponding axes differ significantly from the normal state. Unlike previous work <cit.>, which only considers collisions in the horizontal plane or uses z-axis acceleration data to assist horizontal detection, our method also accounts for collisions that do not come from the horizontal plane. This feature allows popular caged MAVs to detect collisions from any angle. Let ^ba denote the acceleration vector of the MAV. Based on an analysis of acceleration data, we found that an acceleration sample can be identified as a potential collision signal if the magnitude of its component on either the x or y axis, represented as |^ba_x| or |^ba_y|, exceeds a threshold of 20 m/s^2. The threshold for collision detection on the z-axis is likewise 20 m/s^2, after compensating for the gravitational acceleration. Meanwhile, the impact of a collision may produce several related abnormal acceleration samples. To realize robust collision detection and estimation, a sliding-window method selects the maximum value from the ten samples following the first acceleration sample that exceeds the threshold; this representative sample is then recorded for the collision. In the collision detection stage, only the threshold and the sliding-window size need to be set to filter sensor noise and the post-impact response of a collision, and both can easily be adjusted for specific hardware. §.§ Collision Estimation The collision estimation module estimates the intensity and direction of the collision ^bC in the body frame for collision recovery control. It also outputs a collision point ^bp_c for generating a collision point cloud. We represent the intensity of ^bC by r, and the collision direction by the azimuth angle ϕ∈ (-π, π] and the polar angle θ∈ [0,π]. The acceleration direction is opposite to the direction of the collision point in the body frame. Therefore, we represent the measurement as a collision acceleration vector a_c, where a_c,x = -^ba_x, a_c,y = -^ba_y, a_c,z = -(^ba_z-g). The collision is then estimated via r = √(a_c,x^2 + a_c,y^2 + a_c,z^2), ϕ = atan2(a_c,y, a_c,x), θ = arccos(a_c,z/r). Collisions between the aircraft and an obstacle occur on the edge of the drone's cage. To update a collision-aware map for navigation, we need to estimate a collision point ^bp_c on this edge. For simplicity, we assume that the cage of our drone is a sphere with radius l. The collision point can then be estimated by ^bp_c = (^bC/||^bC||)· l. § COLLISION REACTION IN 3D SPACE Our collision reaction method first uses a simple but rapid recovery control strategy to quickly guide the MAV away from the obstacle and restore its stability. The collision mapping module then converts the collision point into a corresponding collision point cloud and integrates it with a volumetric mapping algorithm. This enables the robot to record the estimated positions of obstacles in the world frame and to navigate to the pre-collision goal using general motion planning algorithms. §.§ Collision Recovery Control Strategy When a collision occurs, an MAV without our framework maintains its target velocity but cannot achieve it; the motors continue to accelerate and eventually drive the robot to crash.
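Before detailing the recovery strategy, the sketch below makes the detection and estimation steps above concrete in NumPy: per-axis thresholding, sliding-window selection of a representative sample, and the spherical-coordinate estimate of r, ϕ, θ and the cage-edge collision point. The 20 m/s^2 threshold and 10-sample window follow the text; the function names, the way the representative sample is picked, and the default cage radius are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

G = 9.81           # gravitational acceleration [m/s^2]
ACC_THRESH = 20.0  # per-axis collision threshold [m/s^2], from the paper
WINDOW = 10        # sliding-window length in samples, from the paper

def is_collision_sample(acc_body):
    """Flag a raw body-frame IMU sample (shape (3,)) as a potential collision."""
    ax, ay, az = acc_body
    # x/y axes compared directly; z axis after removing gravity
    return abs(ax) > ACC_THRESH or abs(ay) > ACC_THRESH or abs(az - G) > ACC_THRESH

def estimate_collision(acc_window, cage_radius=0.35):
    """Estimate collision intensity r, azimuth phi, polar angle theta, and the
    collision point on the (assumed spherical) cage from the WINDOW samples
    following the first over-threshold sample. cage_radius is illustrative."""
    acc_window = np.asarray(acc_window)              # shape (WINDOW, 3)
    # representative sample: here, the one with the largest horizontal magnitude
    a = acc_window[np.argmax(np.abs(acc_window[:, :2]).max(axis=1))]
    # collision acceleration vector points opposite to the measured acceleration
    a_c = np.array([-a[0], -a[1], -(a[2] - G)])
    r = np.linalg.norm(a_c)
    phi = np.arctan2(a_c[1], a_c[0])
    theta = np.arccos(np.clip(a_c[2] / r, -1.0, 1.0))
    p_c = cage_radius * a_c / r                      # collision point, body frame
    return r, phi, theta, p_c
```

In practice such a detector would run on the high-rate IMU stream from the flight controller, with the threshold and window size tuned to the specific airframe as noted above.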
The collision recovery control strategy generates a target position on the opposite side of the collision point to address this condition rapidly. Instead of relying on the motion planning module, which typically needs time to re-plan, the position command is sent directly to the low-level controller. First, we compute the collision recovery position ^bp_r in the body frame as follows: ^bp_r,x = -d sinθcosϕ, ^bp_r,y = -d sinθsinϕ, ^bp_r,z = -d_z cosθ, where d is the reaction distance in the xy-plane and d_z is the reaction height on the z-axis. The collision recovery position in the world frame is ^wp_r = ^wT_b·^bp_r, where the transformation matrix ^wT_b maps the body frame to the world frame. We limit the reaction distance and height to keep the MAV a safe distance away from the obstacle rather than using the collision intensity r directly, since the acceleration caused by a collision can be very large. A standard cascaded proportional–integral–derivative (PID) controller then generates thrust commands from the desired reaction positions. §.§ Collision Point Cloud Generation The collision point cloud generation module records the positions of unobserved obstacles to avoid a secondary collision. It constructs a set of points that fit the collision plane where the drone detects the collision. The collision point cloud is then registered in the global volumetric map so that the motion planning algorithm can avoid invisible obstacles when re-planning a feasible path autonomously. We assume the object that collides with the surface of the drone is a circular plane with radius r_c and center point p_0. Here r_c is related to the geometric information of the MAV, while p_0 = (x_0,y_0,z_0) is given by the collision point ^bp_c produced by the collision estimation module. The 3D collision circular plane is constructed as the intersection of a sphere and a plane, following equation <ref>: (x-x_0)^2+(y-y_0)^2+(z-z_0)^2=r_c^2, x_0(x-x_0)+y_0(y-y_0)+z_0(z-z_0)=0. Guided by equation <ref>, a sphere point cloud ^bP_sph is generated with the help of the point cloud library (PCL), and each point in ^bP_sph is checked against the plane-fitting condition. The selected points are then used to construct the 3D collision circle plane point cloud ^bP_cir, which is converted to the world frame using ^wP_cir = ^wT_b·^bP_cir for building the collision-aware map. §.§ Collision-Aware Mapping For autonomous navigation purposes, we represent the environment with the help of a volumetric mapper <cit.>. The mapping system constructs Occupancy Grid Maps (OGMs) and Euclidean Distance Transforms (EDTs) by parallel computing on the GPU. An OGM stores the probability of each voxel (an element of the 3D grid) being occupied by obstacles, while an EDT consists of structured voxel grids in which every voxel stores the distance to its closest obstacle. The mapper reads depth and pose data from onboard sensors and constructs the OGM incrementally. Within the local range, a parallel EDT algorithm converts a batch of the OGM in the local volume to an EDT. In detail, given a 3D voxel v, the distance value is computed as f(v) = min_u∈ O ||u - v||, where O denotes the set of occupied voxels. Finally, the new observation in the local range is integrated into the global map.
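Returning to the recovery setpoint and collision point cloud just described, the following NumPy sketch computes the world-frame recovery position and approximates the collision circle as the sphere–plane intersection from the text. The 0.5 m reaction distance matches the experiments reported later; the function names, sampling strategy, and tolerance are illustrative stand-ins for the PCL-based implementation.

```python
import numpy as np

def recovery_setpoint(phi, theta, T_wb, d=0.5, d_z=0.5):
    """Recovery position in the world frame, on the side opposite the collision.
    T_wb is the 4x4 homogeneous body-to-world transform; d, d_z are the reaction
    distance/height (0.5 m matches the reported experiments)."""
    p_r_body = np.array([-d * np.sin(theta) * np.cos(phi),
                         -d * np.sin(theta) * np.sin(phi),
                         -d_z * np.cos(theta),
                         1.0])
    return (T_wb @ p_r_body)[:3]

def collision_circle_points(p0, r_c, T_wb, n_samples=5000, tol=0.01):
    """Approximate the collision circle: intersection of the sphere
    ||x - p0|| = r_c with the tangent plane through p0 (normal p0).
    Crude sampling-based stand-in for the PCL generation; returns world-frame points."""
    p0 = np.asarray(p0, dtype=float)
    n = p0 / np.linalg.norm(p0)                       # plane normal
    dirs = np.random.normal(size=(n_samples, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    sphere_pts = p0 + r_c * dirs                      # samples on the sphere around p0
    keep = np.abs((sphere_pts - p0) @ n) < tol        # keep samples close to the plane
    pts_h = np.hstack([sphere_pts[keep], np.ones((keep.sum(), 1))])
    return (T_wb @ pts_h.T).T[:, :3]
```

Sending the recovery setpoint straight to the low-level position controller, as the text describes, is what lets the vehicle back away within a fraction of a second while the planner is still re-planning.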
The actual distance value is propagated outside the local range by parallel wavefront algorithms, and the global EDT can be obtained. After the construction of OGM and EDT, voxels in the map are labeled in three states, occupied, free, and unknown. Besides, each observed voxel records its distance from the closest obstacle. Hence, the motion planner will drive the vehicle towards the goal through the observed region while avoiding occupied grids. We specially tailor the volumetric mapper for Air Bumper. The collision detection mechanism is modeled as a sensor that generates observations of an obstacle, which we refer to as a collision sensor in below. Upon receiving the point cloud from a collision sensor, the mapper uses a feature extractor from PCL to encapsulate all points to an OBB (oriented bounding box). The bounding vertices and corresponding transformation matrix associated with each collision-induced OBB are stored in the mapper and further streamed to GPU in OGM updating stage. After the local OGM is constructed with onboard sensor observation, the mapper inspects each voxel in parallel to check if the corresponding voxel should be set as occupied in the global OGM. In a thread dealing with the voxel v, all OBBs are iterated, and v is transformed into each OBB coordinate. If v is inside one of the OBBs marked by the collision sensor, or it is occupied in the local OGM, then the global OGM increases the occupancy probability of v. This indicates the collision sensor has a higher priority than onboard sensors, in that the obstacle registered by the collision sensor will not be cleared by onboard sensors. Local OGM is updated accordingly, and EDT takes the observation of the collision sensor as well. In consequence, the vehicle remembers all obstacles it ever collides with and will avoid them in future navigation. §.§ Collision Reaction Motion Planning Once the MAV platform detects the collision information, the motion planning will do re-planning based on the updated collision-aware OGM and EDT. Here we use the GTO-MPC <cit.> algorithm to plan a feasible trajectory to achieve the pre-collision goal and avoid obstacles simultaneously. GTO-MPC algorithm is divided into two steps. Firstly, a jerk-limited trajectory is generated to supply the guiding time-optimal (GTO) initial solution. Then, an MPC-based method is applied to find the trajectory with considering obstacles, smoothness, and flight performance. In our framework, the optimization problem of the second step is formulated as the equation <ref> to find the best trajectory x(t), t ∈ [t_0,t_0+T]. [ min J =∫_t_0^t_0+T u^2(t) d t+w_1 ∫_t_0^t_0+Tx(t)-x_j(t)^2 d t; +w_2 ∫_t_0^t_0+T e^-d(t) d t; s.t. ẋ(t)=f(x(t), u); x(t) ∈𝒳_free; x^(k)(t) ∈Φ_k ] Where the first term of J minimizes the jerk (the derivative of acceleration) to encourage the smoothness of the trajectory, the second term is to minimize the errors between the state trajectory x(t) and jerk limited trajectory. The third term penalizes the closest distance from the drone to obstacles in the EDT map. In the constraints, f(·) is the dynamic function of the quadrotor, 𝒳_free represents the free grids in the OGM, and Φ_k indicates the limited range of velocity, acceleration, and jerk. w_1 and w_2 are the weight coefficients for the corresponding term. In general, the trajectory generation method achieves the re-planning frequency of 5Hz with a prediction horizon T=2s while guaranteeing kinodynamic feasibility, flight safety, and smoothness simultaneously. 
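Stepping back to the collision-aware occupancy update described above, here is a minimal CPU-side sketch: each collision point cloud is wrapped in an oriented bounding box (OBB), and any voxel falling inside a stored box is forced to remain occupied regardless of what the onboard LiDAR later observes. The class name, the PCA-based box fit, and the dictionary-style grid are illustrative stand-ins for the GPU mapper and the PCL feature extractor used in the paper.

```python
import numpy as np

class CollisionAwareOGM:
    """Minimal occupancy grid giving collision-derived OBBs priority over
    onboard-sensor updates, mirroring the behaviour described in the text."""

    def __init__(self, resolution=0.1):
        self.res = resolution
        self.occupied = set()   # indices of occupied voxels
        self.obbs = []          # list of (rotation R, center c, half-extents e)

    def _voxel_center(self, idx):
        return (np.array(idx) + 0.5) * self.res

    def add_collision_cloud(self, points_world):
        """Fit an oriented bounding box to a collision point cloud via PCA."""
        c = points_world.mean(axis=0)
        centered = points_world - c
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)  # principal axes
        local = centered @ Vt.T
        e = np.maximum(np.abs(local).max(axis=0), self.res)      # half-extents
        self.obbs.append((Vt, c, e))

    def _inside_any_obb(self, p):
        return any(np.all(np.abs(R @ (p - c)) <= e) for R, c, e in self.obbs)

    def update_voxel(self, idx, occupied_by_lidar):
        """Collision OBBs take priority: a voxel inside one stays occupied even
        if the LiDAR currently reports it as free."""
        if occupied_by_lidar or self._inside_any_obb(self._voxel_center(idx)):
            self.occupied.add(idx)
        else:
            self.occupied.discard(idx)
```

In the actual system this per-voxel OBB check is performed in parallel on the GPU over the local OGM batch, and the EDT is rebuilt from the resulting occupancy, as described above.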
All of these great performances enable the MAV to react quickly to obstacles detected by the collision detection and estimation module. § EXPERIMENTS AND RESULTS §.§ Simulation in an Unknown Environment We use a customized environment to evaluate the Air Bumper framework in the Gazebo <cit.> simulator, as shown in Fig. <ref>. In the customized environment, we use a simulated MAV with a Velodyne VLP-16 LiDAR sensor, which has 360 horizontal FOV, 30 vertical FOV, and the maximum sensing distance is 100m. Two kinds of doors are designed to validate the framework. One is a black door frame without any obstacles. The other one is a white door frame and transparent material, like glass, within the frame, and it is used to simulate a scenario with transparent obstacles. LiDAR is unable to detect transparent obstacles during flight. As a result, the motion planning module may generate a path from the current position to the next goal that passes through the white glass door. This could cause the MAV, without our framework, to become stuck or crash. In the simulation test, we set three doors: two white doors with transparent obstacles located at [0, -3, 1]^⊤ m and [0, -8, 1]^⊤ m, and one black normal door at [0, -13, 1]^⊤ m. Once the start command is received, the drone takes off and flies autonomously through waypoints (WPs). It follows a path from the origin point [0, 0, 1]^⊤ m to the first waypoint (WP1) [0, -5, 1]^⊤ m, then to the second waypoint (WP2) [0, -10, 1]^⊤ m, and finally to the third waypoint (WP3) [0, -15, 1]^⊤ m. Without our collision detection and reaction framework, the MAV collides with the transparent obstacles and crashes when passing through the white glass doors (Fig. <ref>). In contrast, our Air Bumper framework enables the MAV rapidly recover from the collision upon detecting the abnormal acceleration data in the y direction. The collision-aware mapping module consequently updates the collision-aware map, where estimated obstacles are marked in red in Fig. <ref>. The collision-aware map assists the motion planning module in re-planning a smooth trajectory to the goal without colliding with the same obstacles (Fig. <ref>). Results demonstrate that our framework is able to handle several collisions with unobserved obstacles during autonomous flight and record the collision information for further safe navigation. §.§ Experiments in Real World The Air Bumper framework's performance is demonstrated using the collision-resilient MAV designed in Section <ref> in an unknown indoor environment with a transparent obstacle (Fig. <ref>) and an unpredictable obstacle (Fig. <ref>). The MAV is programmed to autonomously take off from the origin to the first waypoint (WP1) [0.0, 0.0, 1.5]^⊤ m, then fly towards the second waypoint (WP2) [0.0, -3.5, 1.5]^⊤ m, and then perform back-and-forth flights between the two waypoints. For the scenario with a transparent obstacle, a customized transparent object with a size of 2 m × 1 m and a thickness of 8 mm is considered an obstacle. The bottom center of the obstacle is located at [0.0, 1.7, 0.0]^⊤ m. The OGM in Fig. <ref>, represented by the black point cloud, demonstrates that the laser beams are able to penetrate the transparent object. Therefore, there are no occupied voxels in the proximity of the obstacle's location, and the motion planning algorithm plans a path through the obstacle, which leads the MAV to collide with the transparent obstacle and easily get stuck or crash. 
In one of the flight tests, the collision generates an abnormal acceleration on the y-axis, exceeding the threshold, which occurred at approximately 9.58 seconds (Fig. <ref>). The collision is detected and estimated as ^bC with ϕ = 103.5 and θ = 90.7. Then the collision recovery control module calculates and generates a recovery position setpoint in the negative y-direction to move the drone away from the obstacle at around 9.79 seconds (Fig. <ref>). The recovery position ensures the drone is at a safe distance of approximately 0.5 meters from the obstacle. Meanwhile, the collision-aware map is updated after receiving the collision point cloud, marked red in Fig. <ref>. With the help of the collision-aware map, GTO-MPC re-plans a feasible trajectory to the second waypoint, which is shown as a blue line in Fig. <ref>, and the low-level controller executes the re-planned setpoint at around 14.90 seconds (Fig. <ref>). The framework is designed to allow for a 5-second window after a collision has occurred for the motion planning algorithm to re-plan a feasible path. However, the actual time it takes for the drone to recover and stabilize after the collision is less than 1 second. We then conduct ten trials to demonstrate the robustness of our framework for this case. All the experimental trajectories in the testing scenario are shown in Fig. <ref>. In all the trials, the collision-resilient drone collides with the transparent obstacle, and our framework successfully detects and reacts to the collision. Although some of the trajectories do not intersect with the obstacles, the maximum distance between the surface of the obstacle and the collision position is only 0.1 m, which still falls within the drone's radius. For the scenario with an unpredictable obstacle, we use a stick to randomly hit the MAV outside the FOV of the LiDAR to demonstrate the ability of the Air Bumper framework to detect the collision with unpredictable obstacles and perform reactions in 3D space. When the stick hits the MAV from the lower left side of the cage, there are abnormal acceleration data on all three axes (Fig. <ref>.a), and the collision is detected at 18.59 s. Then a 3D recovery control is performed with setpoints on three axes at 18.70 s (Fig. <ref>.b), the recovery distance d ensures the drone is at a safe distance of approximately 0.5 m from the obstacle in the xy-plane and the recovery distance on z-axis, d_z, makes the drone ascend from 1.5 m to about 2 m. Results demonstrate that our framework enables the MAV to maintain a safe distance from the obstacle in 3D space rather than just in a certain plane. Then the collision-aware mapping module generates a collision point cloud at the collision point which helps the motion planning module generate a smooth feasible trajectory to the original waypoint successfully. § CONCLUSION In this work, we introduced a collision detection and reaction framework to help MAVs recover from collisions during autonomous flights in an unknown environment with unobserved obstacles. To do so, we designed an IMU-based collision detection and estimation module to estimate the collision intensity, direction, and position. Collision reaction modules are developed to assist the drone quickly away from the obstacle and update the collision-aware map to generate a smooth post-collision trajectory. In addition to the software, a caged collision-resilient MAV is also designed and fabricated which fully demonstrates the ability of our framework in the real world. 
The motion planning algorithm in the current framework still needs a certain time to re-plan after collisions. In the future, we aim to introduce collision-inclusive motion planning, which can better utilize collisions in autonomous navigation in complex environments. The framework can also be extended to assist multi-robot navigation in hazardous environments.
http://arxiv.org/abs/2307.04343v1
20230710045405
Hierarchical Semantic Tree Concept Whitening for Interpretable Image Classification
[ "Haixing Dai", "Lu Zhang", "Lin Zhao", "Zihao Wu", "Zhengliang Liu", "David Liu", "Xiaowei Yu", "Yanjun Lyu", "Changying Li", "Ninghao Liu", "Tianming Liu", "Dajiang Zhu" ]
cs.CV
[ "cs.CV" ]
Hierarchical Semantic Tree Concept Whitening for Interpretable Image Classification Haixing Dai*, Lu Zhang*, Lin Zhao, Zihao Wu, Zhengliang Liu, David Liu, Xiaowei Yu, Yanjun Lyu, Changying Li, Ninghao Liu, Tianming Liu, Dajiang Zhu. * Co-first authors. Haixing Dai, Lin Zhao, Zihao Wu, Zhengliang Liu, Ninghao Liu, Tianming Liu and Changying Li are with the Department of Computer Science, University of Georgia, Athens, GA, USA. (e-mail: hd54134, lin.zhao, zw63397,zl18864, ninghao.liu, [email protected], [email protected]). Lu Zhang, Xiaowei Yu, Yanjun Lyu and Dajiang Zhu are with the Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, USA. (e-mail: lu.zhang2, xxy1302, [email protected], [email protected]) David Weizhong Liu is with Athens Academy, Athens, GA, USA.(e-mail: [email protected]) August 12, 2023 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== With the popularity of deep neural networks (DNNs), model interpretability is becoming a critical concern. Many approaches have been developed to tackle the problem through post-hoc analysis, such as explaining how predictions are made or understanding the meaning of neurons in middle layers. Nevertheless, these methods can only discover the patterns or rules that naturally exist in models. In this work, rather than relying on post-hoc schemes, we proactively instill knowledge to alter the representation of human-understandable concepts in hidden layers. Specifically, we use a hierarchical tree of semantic concepts to store the knowledge, which is leveraged to regularize the representations of image data instances while training deep models. The axes of the latent space are aligned with the semantic concepts, where the hierarchical relations between concepts are also preserved. Experiments on real-world image datasets show that our method improves model interpretability, showing better disentanglement of semantic concepts, without negatively affecting model classification performance. Explainable AI (XAI), hierarchical tree of semantic concepts, image embedding, image interpretation. § INTRODUCTION Machine learning interpretability has recently received considerable attention in various domains <cit.>. An important challenge that arises with deep neural networks (DNNs) is the opacity of semantic meanings of data representations in hidden layers. Several types of methods have been proposed to tackle the problem. First, recent works have shown that some neurons could be aligned with certain high-level semantic patterns in data <cit.>. Second, it is possible to extract concept vectors <cit.> or clusters <cit.> to identify semantic meanings from latent representations. 
However, these methods are built upon the assumption that semantic patterns are already learned by DNNs, and the models would admit the post-hoc method of a specific form. There is no guarantee that the assumption holds true for any model, especially when meaningful patterns or rules may not be manifested in the model, thus leading to over-interpretation <cit.>. Meanwhile, although many post-hoc explanation methods are proposed with the expectation of improving or debugging models, it is challenging to achieve this goal in practice. Although we could collect human annotations to guide prediction explanations and improve model credibility <cit.>, manually labeling or checking semantic concepts is rather difficult. Unlike explaining individual predictions, which is a local and instance-level task, extracting concepts provides a global understanding of models, where manual inspection of such interpretation is time-consuming and much harder, if not impossible. Instead of relying on post-hoc approaches, we aim to instill interpretability as a constraint into model establishment. For example, explanation regularization is proposed in <cit.>, but it constrains gradient magnitude instead of focusing on semantic concepts. Meanwhile, β-VAE and its variants <cit.> add independence constraints to learn disentangled factors in latent representations, but it is difficult to explicitly specify and align latent dimensions with semantic meanings. Ideally, we want to construct DNNs whose latent space could tell us how it is encoding concepts. The recent decorrelated batch normalization (DBN) method <cit.> normalizes representations, providing an end-to-end technique for manipulating representations, but it is not directly related to interpretability. In this work, we propose a novel Hierarchical Semantic Tree Concept Whitening (HaST-CW) model to decorrelate the latent representations in image classification for disentangling concepts with hierarchical relations. The idea of our work is illustrated in Fig. <ref>. Specifically, we define each concept as one class of objects, where the concepts are of different granularities and form a hierarchical tree structure. We decorrelate the activations of neural network layers, so that each concept is aligned with one or several latent dimensions. Unlike the traditional DBN method (Fig. <ref>a), which treats different concepts as independent, our method is able to leverage the hierarchically related organization of label concepts inherent in domain knowledge (Fig. <ref>b). The consideration of relations between different concepts is crucial in many real-world applications <cit.>. For example, in the healthcare domain, the relationship of different disease stages (concepts) may reflect the progression of the disease, which is significant for reversing pathology <cit.>. Also, in the precision agriculture domain <cit.>, real-time monitoring of interactions of multiple agricultural objects (concepts) with each other and with the environment are crucial in maintaining agro-ecological balance <cit.>. In our model, a novel semantic constraint (SC) loss function is designed to regularize representations. As a result, the data representations of two concepts with higher semantic similarity will be closer with each other in the latent space. Moreover, a new hierarchical concept whitening (HCW) method is proposed to decorrelate different label concepts hierarchically. We evaluated the proposed HaST-CW model using a novel agriculture image dataset called Agri-ImageNet. 
The results suggest that our model could preserve the semantic relationship between the label concepts, and provide a clear understanding of how the network gradually learns the concept in different layers, without hurting classification performance. § RELATED WORK Post-Hoc Interpretation. Post-Hoc interpretation can be divided into approaches that explain predictions or models <cit.>. Prediction-oriented interpretation aims to develop faithful and robust measures to quantify feature importance towards individual predictions for identifying those features (e.g., pixels, super-pixels, words) that made most contributions <cit.>. Model-oriented interpretation analyzes behaviors of neural networks either by characterizing the function of model components <cit.> or analyzing semantic concepts from latent representations <cit.>. The proposed method also targets concept-level interpretation in deep neural networks. Different from post-hoc techniques that focus on discovering existing patterns in models, the newly proposed HaST-CW proactively injects concept-related knowledge into training and disentangles different concepts to promote model interpretability. Inherently Interpretable Models. Another school of thought favors building inherently explainable machine learning models <cit.>. Some approaches design models that highlight prototypical features of samples as interpretation. For example, Chen et al. <cit.> classifies images by dissecting images into parts and comparing these components to similar prototypes towards prediction. Li et al. <cit.> designs an encoder-decoder framework to allow comparisons between inputs and the learned prototypes in latent space. Some other works such as β-VAE and its variants <cit.> regularize representation learning for autoencoders to produce disentangled factors in representation dimensions, but the semantic meaning of each dimension remains unknown without further manual inspection. In contrast, our method attempts to explicitly align latent dimensions with specific semantic concepts contained in external knowledge. A recent technique called Concept Whitening (CW) <cit.> constrains the latent space, after revising Batch Whitening  <cit.>, such that it aligns with predefined classes. Our method attempts to infuse more complex knowledge of concept relations into representation learning. Applying Whitening to Computer Vision. Whitening is a standard image preprocessing technique, which refers to transforming the covariance matrix of input vectors into the identity matrix. In fact, the well-known Batch Normalization <cit.> can be regarded as a variant of whitening where only the normalization process is retained. There are many works in deep learning that describe the effectiveness of whitening <cit.> and the process of finding the whitening matrix <cit.>. Our work further takes semantics into consideration during the whitening process towards more interpretable representation learning. § METHODOLOGY §.§ Overview The proposed HaST-CW model aims to preserve the underlying hierarchical relationship of label concepts, as well as to disentangle these concepts by decorrelating their latent representations. To achieve this goal, we leverage the hierarchical tree structure of the label concepts extracted from specific domain knowledge (<ref>). Then, the obtained structure of label concepts is used as prior knowledge to be instilled into the model for guiding the representation learning process. 
There are two key components in the knowledge instillation process – the hierarchical concept whitening (HCW) module and the semantic constraint (SC) loss, which will be elaborated in <ref> and <ref>, respectively. §.§ The Hierarchical Semantic Tree of Concepts In this work, we used a newly collected and curated Agri-ImageNet dataset to develop and evaluate the HaST-CW model. There are 9173 high quality images in Agri-ImageNet, covering 21 different types of agricultural objects. Taking each type of agricultural object as one class, we have 21 label concepts in total. Some pairs of agriculture objects have the supertype-subtype relationship between them, so we obtain the parent-child relationship between the corresponding labels. As a result, a tree structure is built to represent the underlying hierarchically related organization of label concepts, which is shown in <ref>. Two concepts connected in the tree structure means they have parent-child relationship, where the parent is located at the lower hierarchy level. Besides the parent-child relation, we further introduce two notions – brother and cousin. If two concepts have the same parent, then they are brothers. If the parents of two concepts are brothers, then the two concepts are cousins. According to the laws of inheritance: (1) objects with the parent-child relation should be more similar than those with the uncle-child relation (vertical parent-child relationship); and (2) the traits of brothers should be more similar than cousins (horizontal brother-cousin relationship). An effective model should be able to capture both of the vertical relationship and horizontal relationship, so that the representation of any concept in the latent space should be closer to its parent than uncles, and closer to brothers than cousins. For our HaST-CW model shown in <ref>, a new HCW module (<ref>) is proposed to preserve the vertical relationship, and a novel SC loss (<ref>) is proposed to preserve the horizontal relationship. §.§ Hierarchical Concept Whitening The hierarchical concept whitening (HCW) module is one of the key components in the HaST-CW model, which aims to disentangle different label concepts while preserving their underlying hierarchical relationship. Specifically, in this work, the set of label concepts were denoted by C={C_i}_i=1^N_c, where C_i represents the i^th concept and N_c = 21 is the number of concepts. For C_i, its parent, children, brothers and cousins were denoted as C_i.𝒫, {C_i.children}, {C_i.ℬ} and {C_i.𝒞}, respectively. A dataset is denoted as 𝒟{x_i,y_i} ^n_i=1. We use X^C_i={x_j^C_i}_j=1^n_i to denote the set of i^th-class samples labeled by C_i. In traditional whitening transformation <cit.>, during the training process, data samples are first fed into the model in mini-batches to obtain the latent representation matrix Z_d× n, where n is the mini-batch size and d is the dimension of latent representation. We use ResNet as the model backbone in this work. Then a transformation ψ is applied to decorrelate and standardize Z_d× n: ψ(Z)=W(Z-μ1_n× 1^T), where W_d× d is the orthogonal whitening matrix, and μ=1/n∑^n_i=1z_i is the sample mean. A property of representation whitening is that Q^TW is still a valid whitening matrix if Q is an orthogonal matrix. We leverage this property for interpretable representation learning. 
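As a quick illustration of the whitening property exploited here, the snippet below computes a ZCA-style whitening matrix for a batch of latent vectors and checks that rotating it by an arbitrary orthogonal Q leaves the representation decorrelated. This is a self-contained NumPy sketch of the standard transform; the dimensions, eps regularization, and function name are illustrative and not the authors' implementation.

```python
import numpy as np

def whitening_matrix(Z, eps=1e-5):
    """ZCA whitening matrix W for a d x n batch Z, so that W (Z - mu) has
    (approximately) identity covariance."""
    mu = Z.mean(axis=1, keepdims=True)
    Zc = Z - mu
    cov = Zc @ Zc.T / Z.shape[1]
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return W, mu

rng = np.random.default_rng(0)
Z = rng.normal(size=(8, 256))              # d = 8 latent dims, n = 256 samples
W, mu = whitening_matrix(Z)
psi = W @ (Z - mu)

# any orthogonal Q keeps the representation whitened: cov(Q^T psi) = I
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
rotated = Q.T @ psi
print(np.allclose(rotated @ rotated.T / Z.shape[1], np.eye(8), atol=1e-3))
```

It is exactly this freedom in choosing Q that the method uses to align individual axes with semantic concepts without disturbing the whitening.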
In our model, besides decorrelation and standardization, we expect that the transformed representation of samples from concept C_i, namely Q^Tψ(Z^C_i), can align well with the i^th axis of latent space. Meanwhile, the underlying hierarchical relationship of concepts should also be preserved in their latent representations. That is, we need to find an orthogonal matrix Q= [q_1, q_2, …, q_N_c] with two requirements: (1) Z^C_i should be most activated by q_i, i.e., the i^th column of Q; (2) Z^C_i should also be activated by {q_c}, where c∈{C_i.children} is the child of concept C_i. The first constraint makes the representation align together with the corresponding concept dimension, and the second one maintains the vertical parent-child relationship between concepts. To this end, the optimization problem can be formulated as: max_q_1,…q_N_c ∑^N_c_i=1[ 1/n_iq^T_iψ(Z^C_i)1_n_i ×1 + ∑_c∈{C_i.children}1/n_i× N_cd(q_c)^Tψ(Z^C_i)1_n_i ×1], s.t. Q^TQ= I_d , where N_cd = |{C_i.children}| is the number of child concepts of C_i. To solve this optimization problem with the orthogonality constraint, a gradient descent method with the curvilinear search algorithm <cit.> is adopted. With the whitening matrix W and rotation orthogonality matrix Q, HaST-CW can replace any batch normalization layer in deep neural networks. The details of representation whitening for HaST-CW is summarized in Algorithm <ref>. The overall training pipeline of our HaST-CW model is shown in <ref>. We adopt an alternative training scheme. In the first stage, the deep neural network is trained with the traditional classification loss. In the second stage, we solve for Q to align representation dimension with semantic concepts. The two stages work alternatively during the training process. The classification loss of the first stage is defined as: min_θ,ω,W,μ,1/m∑^m_i=1ℓ(g(Q^Tψ( Φ(x_i;θ);W,μ);ω);y_i), where Φ(·) and g(·) are layers before and after the HaST-CW module parameterized by θ and ω, respectively. ψ(·) is the whitening transformation parameterized by the sample mean μ and whitening matrix W. The rotation orthogonal matrix Q will be updated according to <ref> in the second stage. The operation of Q^Tψ(·) forms the HCW module. During the first training stage, Q will be fixed and other parameters (θ,ω,W,μ) will be optimized according to <ref> to minimize the classification error. The first stage will take T_thre mini batches (we set T_thre=30 in experiments). After that, Q will be updated by the Cayley transform <cit.>: Q^' = (I+η/2A)^-1(I-η/2A)Q, A = GQ^T-QG^T, where A is a skew-symmetric matrix. G is the gradient of the concept alignment loss, which is defined in <ref>. η is the learning rate. At the end of the second stage, an updated Q^' will participate in the first training stage of the next iteration. [tb] The Overall Framework of HaST-CW §.§ Semantic Constraint Loss Besides preserving the vertical parent-child relationship of concepts, we further model the horizontal relation between concepts that are at the same hierarchy level (i.e., brothers or cousins). Different from the HCW in <ref> that focuses on concept alignment, here we directly control the distance between representations of different concepts with the horizontal relation <cit.>. 
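Before detailing the SC loss, the following sketch shows the Cayley-transform step used above to update the rotation matrix Q while keeping it orthogonal. The gradient G of the concept-alignment objective is passed in as an argument; how it is accumulated over the concept dataset, and the curvilinear search the paper pairs with this step, are omitted here, and the function name and learning rate are illustrative.

```python
import numpy as np

def cayley_update(Q, G, lr=0.1):
    """One Cayley step: build the skew-symmetric A = G Q^T - Q G^T and apply
    Q' = (I + lr/2 * A)^{-1} (I - lr/2 * A) Q, which preserves Q^T Q = I."""
    d = Q.shape[0]
    A = G @ Q.T - Q @ G.T                 # skew-symmetric by construction
    I = np.eye(d)
    return np.linalg.solve(I + 0.5 * lr * A, (I - 0.5 * lr * A) @ Q)

# orthogonality is preserved up to numerical error
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
G = rng.normal(size=(8, 8))               # stand-in gradient of the alignment loss
Q = cayley_update(Q, G)
print(np.allclose(Q.T @ Q, np.eye(8), atol=1e-8))
```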
To this end, we propose a Semantic Constraint (SC) loss to model the horizontal brother-cousin relationship as below: ℒ_SC = αℒ_ℬ + βℒ_𝒞, ℒ_ℬ=∑_j ∑_ℬ_i∈{C_i.ℬ}∑_k max{0,m_ℬ-d(z^C_i_j,z^ℬ_i_k)}, ℒ_𝒞 =∑_j ∑_ℬ_i∈{C_i.ℬ}∑_𝒞_i∈{C_i.𝒞}∑_k∑_l max{0,d(z^C_i_j,z^ℬ_i_k) -d (z^C_i_j,z^𝒞_i_l)+m_𝒞}. There are two components in the SC loss and their contributions are controlled by two hyperparameters – α and β. The first term ℒ_ℬ is a contrastive loss, which takes a pair of image representations labeled by two brother concepts as input and enlarges the distance between them. It uses a hyperparameter m_ℬ to control the distance. The distance between two concepts increases when m_ℬ is set larger. ℬ_i∈{C_i.ℬ} denotes one of the brothers of concept C_i. The second term ℒ_𝒞 is a triplet loss. It takes three inputs: the anchor image representation z^C_i_j, the image representation z^ℬ_i_k labeled by brother concept of the anchor, and the image representation z^𝒞_i_l labeled by cousin concept of the anchor. 𝒞_i∈{C_i.𝒞} denotes the cousins of concept C_i. The triplet loss encourages the anchor-brother distance to be smaller compared with the anchor-cousin distance in representation space. In this way, the distance of image representations from brother classes tends to be smaller than the distance of image representations from cousin classes. The gap between the two types of distance is controlled by the margin value m_𝒞. Consequently, the hierarchical concept whitening module, together with the SC loss, enables the latent representations of concepts with similar semantics to be close with each other in the latent space. §.§ Latent Feature Maps Activation The proposed HaST-CW model can generate latent representations (ẑ_i) for input images (x_i) at each neural network layer by ẑ_i=Q^Tψ( Φ(x_i;θ);W,μ). The latent representation can be used to assess the interpretability of the learning process by measuring the degree of activation of ẑ_i at different concept dimensions (i.e. {q_i}). In the implementation, Φ(·) is a CNN based deep network, whose convolution output z_i= Φ(x_i;θ) is a tensor with the dimension z_i∈ R^d× h× w. Since ẑ_i is calculated by ẑ_i = Q^Tψ(z_i) where Q^T∈ R^d× d, we obtain ẑ_i∈ R^d× h× w, where d is the channel dimension and h× w is the feature map dimension. The hierarchical concept whitening operation Q^Tψ(·) is conducted upon the d feature maps. Therefore, different feature maps contain the information of whether and where the concept patterns exist in the image. However, as a tensor the feature map cannot directly measure the degree of concept activation. To solve this problem and at the same time to reserve both of the high-level and low-level information, we first apply the max pooling on the feature map and then use the mean value of the downsteam feature map to represent the original one. By this way, we reshape the original feature map z_i∈ R^d× h× w to z_i^'∈ R^d× 1. Finally, z_i^' is used to measure the activation of image x_i at each concept dimension. § EXPERIMENTS In the experiments section, we first visually demonstrate how our method can effectively learn and hierarchically organize concepts in the latent space (<ref>). We also show that (<ref>), compared to existing concept whitening methods, HaST-CW not only separates concepts, but also can separate groups of semantically related concepts in the latent space. 
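Before turning to the experiments, here is a PyTorch-style sketch of the SC loss defined above, using Euclidean distance for d(·,·) and assuming the batch has already been grouped into aligned anchor/brother/cousin embeddings. The batching scheme, margins, and function name are simplifications chosen for illustration rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def semantic_constraint_loss(z_anchor, z_brother, z_cousin,
                             m_b=1.0, m_c=1.0, alpha=1.0, beta=1.0):
    """L_SC = alpha * L_B + beta * L_C for aligned batches of embeddings.
    z_anchor[i] and z_brother[i] come from brother concepts; z_cousin[i] from a
    cousin concept of the anchor. Margins and weights are illustrative."""
    d_ab = F.pairwise_distance(z_anchor, z_brother)   # anchor-brother distances
    d_ac = F.pairwise_distance(z_anchor, z_cousin)    # anchor-cousin distances
    # contrastive term: keep brother classes at least m_b apart
    loss_b = F.relu(m_b - d_ab).mean()
    # triplet term: brothers should be closer to the anchor than cousins, by m_c
    loss_c = F.relu(d_ab - d_ac + m_c).mean()
    return alpha * loss_b + beta * loss_c

# toy usage with 64-dimensional embeddings
z_a, z_b, z_c = (torch.randn(32, 64) for _ in range(3))
print(semantic_constraint_loss(z_a, z_b, z_c))
```

The hinge on the brother term keeps sibling classes separable, while the triplet term enforces the brother-closer-than-cousin ordering that encodes the tree's horizontal structure.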
After that, we discuss the advantages offered by our method with quantitative results and intuitive examples (<ref>) compared with baselines, including the CW module and ablated versions of our method. §.§ Experimental Setting §.§.§ Data Preparation In this work, we use a newly collected and curated Agri-ImageNet dataset to evaluate the proposed HaST-CW model. In total, 9173 images from 21 classes are used in our experiments. Each image is labeled with the class at the highest possible hierarchy level. For example, an image of Melrose apple will be labeled as "Melrose" rather than the superclass "Apple". Then we divide images per class into three parts by 60%/20%/20% for a standardized training/validation/test splitting. Because the resolution of the original images can range from 300 to 5000, we adopt the following steps to normalize the image data: 1) we first lock aspect ratio and resize the images to make the short edge to be 256; 2) During each training epoch, the images in the training and validation datasets are randomly cropped into 224×224; 3) During testing process, images in the test dataset are center cropped to be of size 224×224; 4) After cropping, the pixel values of images are normalized to [0,1]. Then, the whole training dataset is divided into two parts (𝒟_T and 𝒟_C in <ref>). 𝒟_C is the concept dataset used to update the matrix Q in the second stage (<ref>). It is created by randomly selecting 64 images from each class in the training dataset. The remaining images in the training dataset 𝒟_T are used in the first stage to train the model parameters (<ref>). §.§.§ Model Setting In this work, we use several ResNet structures <cit.> to extract features from images, including ResNet18 and ResNet50. During the training process, the two-stage training scheme adopts a 30-to-1 ratio to alternatively train the whole framework. In this case, after 30 mini batches of continuous training, the model will pause and the rotation orthogonal matrix Q will be optimized at the next mini batch. Two hyper-parameters α and β in the SC loss are set to be 1.0. Adam optimizer is used to train the whole model with a learning rate of 0.1, a batch size of 64, a weight decay of 0.01, and a momentum rate of 0.9. §.§ Visualization of Semantic Map To illustrate the learned semantic hierarchical structure, we show the representations extracted from the latent hidden layer of all the samples in <ref>. For better visualization, we use Uniform Manifold Approximation and Projection (UMAP)  <cit.> to project the representations to a two-dimensional space. All the images are color coded using the 17 sub-concepts which are defined on the left of <ref>. The top panel shows the result using CW method. In general, all the concepts are assembled as small groups, but neither semantic relations nor hierarchical structures have been learned. We highlight the super-concept of “Weed" (black) and three sub-concepts ( “Apple Golden" - green, “Apple Fuji" - red and “Apple Melrose" - blue) in the right column. We can see that the three types of apple (sub-concepts) are evenly distributed along with other fruits samples. The bottom panel shows our HaST-CW results. All the different concepts successfully keep their distinct cluster patterns as CW result. After our two-stage training process to instill the semantic and hierarchical knowledge, the three types of apple images have been pulled together and form a new concept (“Apple" with orange circle) at a higher level. 
Moreover, the newly learned concept of “Apple" simultaneously possesses sufficient distance to “Weed" (different super-concept) and maintains relatively close relations to “Strawberry", “Orange", “Mango" as well as other types of “Fruit". This result demonstrates the effectiveness of our hierarchical semantic concept learning framework, without negatively affecting the overall classification performance. §.§ Efficiency and Accuracy of Concept Alignment In this section, we compare the learning efficiency and accuracy of the proposed HaST-CW with that of the conventional CW method. We track the alignment between image representations and their corresponding concepts at each layer. Specifically, we randomly select six concepts, and for each concept we sort and select the top five images whose representations show the strongest activation at the corresponding concept axis. We show the results at both shallow and deep layers (layer 4 vs. layer 8) in <ref>. From the results of layer 4 (the left column) we can see that most of the top five images obtained by conventional CW (the rows marked by green box) are mismatched with the corresponding concepts. For example, the five images under the concept of “Apple-Melrose" obtained by CW are from the “Weed" class. The five images under the concept of “Snake Weed" are actually from other subclass of “Weed". Moreover, this situation continues in the following layers and has not been changed until layer 8. On the contrary, with the help of our designed semantic constraint loss, our HaST-CW (the rows marked by orange boxes) can learn the intrinsic concept faster and achieves the best performance at an earlier training stage (e.g., at a shallow layer). This result demonstrates that by paralleling multiple HCW layers the proposed HaST-CW model can capture the high-level features more efficiently. To further demonstrate the alignment between images and the corresponding concepts, we project each image in the test dataset into a latent space where each concept can be represented by an axis. To visualize the alignments at different concept hierarchies (<ref>), we show three pairs of concepts which belong to different hierarchical levels as examples: “Apple-Melrose"-“Apple-Fuji" is from hierarchy 3 (H-3), “Snake Weed"-“Parkinsonia" is from hierarchy 2 (H-2), and “Weed"-“Apple" crosses hierarchies 1 and 2 (“Weed": H-1, “Apple": H-2). Within each concept pair, a two-dimensional space has been built by taking the two concepts as axes. Thus, each image can be mapped into the space by calculating the similarity between image representation and the two concept representations. The results are shown in <ref>. Different rows correspond to different methods and the concept axes (space) are defined at the bottom. The first column of <ref> shows the data distribution in the two-dimensional space of “Apple-Melrose"-“Apple-Fuji" concept pair. The images belonging to Apple-Melrose class should have the highest similarity with the concept of “Apple-Melrose", and thereby they should be located at the right-bottom corner. Similarly, the images of Apple-Fuji class should be located at the left-top corner. The other images should distribute in the space according to the similarity with the two concepts. For example, compared to images of fruit-related classes, images of weed-related classes will have lower semantic similarity with the two concepts, so they should locate near the origin point (left-bottom corner). 
As shown in the first column, the two models which adopt the HaST-CW method (the second and third rows) can better follow the above-mentioned patterns. While in the CW model (the first row), nearly all the images are gathered at the right-bottom corner. This may be due to the high similarity between the two concepts considered, since they share the same super-class of “Apple". As a result, CW model may be limited in distinguishing different classes with high semantic similarity. A similar situation happens in the second column with the concept pair of “Snake Weed"-“Parkinsonia". These results suggest that compared to CW method, HaST-CW can better capture the subtle differences of semantic-related classes. The third column shows the results of the concept pair of two super-classes: “Weed" and “Apple". As each of the super-class concept contains multiple sub-classes, the intra-class variability is greater. Our proposed HaST-CW, together with the SC loss (the third row), can effectively capture the common visual features and project the “Weed" and “Apple" images to the left-top and right-bottom, respectively. At the same time, the images belonging to different sub-classes under “Weed" and “Apple" are assembled as blocks instead of scattered along the diagonal line. In the other two methods, especially in the CW method (the first row), the images of “Weed" class spread out over a wide range along the vertical axis. This result suggests that the proposed HaST-CW with SC loss can effectively model both the inter- and intra- class similarity. §.§ Interpretable Image Classification In this section, we compare the classification performance of the proposed HaST-CW method and the SC loss function with the conventional CW method using different backbones: ResNet18 and ResNet50. The results are summarized in <Ref>. Different rows correspond to different model settings. Within each model setting, we repeat the experiments for five times to reduce the effect of random noise. The mean and variance of accuracy (ACC.) are reported in the fourth column. From the results, we can see that the classification performance is slightly better than the other three model settings. This result indicates that the proposed HaST-CW model can improve the interpretability without hurting predictive performance. To track and visualize the classification process, we randomly select two images from Apple-Melrose class and Snake Weed class. The activation values between each image with the six relevant concepts are calculated and normalized to [0, 1]. The images, concepts and activation values are organized into a hierarchical activation tree. The results are shown in <ref>. We could observe that the activation values of each image correctly represent the semantic relationship between the images and the concepts. For example, in <ref> (a), the image located at the root is from Snake Weed class which is a subclass of Weed. The activation values of the image are consistent with this relationship and possess the highest activation values on the two concepts – “Weed" and “Snake Weed". § CONCLUSION AND FUTURE WORK In this study, we propose a new HaST-CW and demonstrate its superiority over Concept Whitening  <cit.>. HaST-CW decorrelates representations in the latent space and aligns concepts with corresponding dimensions. In addition, it correctly groups concepts at different granularity levels in the latent space and preserves hierarchical structures of concepts of interest. 
By doing so, we can interpret concepts better and observe the semantic relationships among concepts. We believe there are many possibilities for future work. One promising direction is automatically learning concepts from data. In this scenario, we can jointly learn possible concepts from common abstract features among images and how to represent these learned concepts in the latent space. For example, it might be possible to develop unsupervised or weakly-supervised methods to automatically learn the concept tree from data. By jointly learning concepts, their representations, and relations, the model may discover more data-driven semantic structures. HaST-CW can also be extended with post-hoc interpretability strategies (such as saliency-based methods that highlight focused areas used for classification). Such explanations at the concept level can provide a more global view of model behaviors. In addition, while this work focuses on the natural image domain, the idea of leveraging hierarchical knowledge to guide representation learning is generalizable to other domains such as natural language processing <cit.> and medical image analysis <cit.>. Exploring knowledge-infused learning in different domains <cit.> and tasks <cit.>, including innovative applications <cit.>, is an interesting future direction. In conclusion, as deep learning models become increasingly complex, model interpretability is crucial for understanding behaviors, gaining trust, and enabling human-AI collaboration. Our work complements previous work and lays a solid foundation for further exploration.
http://arxiv.org/abs/2307.07363v1
20230714141252
Computational progress on the unfair 0-1 polynomial Conjecture
[ "Kevin G. Hare" ]
math.NT
[ "math.NT", "11C08, 03C10" ]
Let c(x) be a monic integer polynomial with coefficients 0 or 1. Write c(x) = a(x) b(x) where a(x) and b(x) are monic polynomials with non-negative real (not necessarily integer) coefficients. The unfair 0–1 polynomial conjecture states that a(x) and b(x) are necessarily integer polynomials with coefficients 0 or 1. Let a(x) be a candidate factor of a (currently unknown) 0–1 polynomial. We will assume that we know if a coefficient is 0, 1 or strictly between 0 and 1, but that we do not know the precise value of non-integer coefficients. Given this candidate a(x), this paper gives an algorithm to either find a b(x) and c(x) with a(x) b(x) = c(x) such that b(x) has non-negative real coefficients and c(x) has coefficients 0 or 1, or (often) shows that no such c(x) and b(x) exist. Using this algorithm, we consider all candidate factors with degree less than or equal to 15. With the exception of 975 candidate factors (out of a possible 7141686 cases), this algorithm shows that there do not exist b(x) with non-negative real coefficients and c(x) with coefficients 0 or 1 such that a(x) b(x) = c(x). § INTRODUCTION Let X and Y be independent discrete random variables with finite support. Then Z = X + Y is also a discrete random variable with finite support. It is conjectured that if Z is uniform on its support, then X and Y must also be uniform on their support. To the author's knowledge, this was first asked by G. Letac in 1969. This conjecture can be translated to a conjecture about polynomial multiplication in the following way. Let c(x) be a monic integer polynomial with coefficients 0 or 1. Factor c(x) as c(x) = a(x) b(x) where a(x) and b(x) are monic polynomials with non-negative real coefficients. Is it true that a(x) and b(x) are necessarily integer polynomials with coefficients 0 or 1? This is clearly not true if we relax the restriction that a(x) and b(x) are monic. Simply take a(x) = 2 and b(x) = 1/2 c(x) as an example. Further, this is not true if we relax the restriction that the coefficients are non-negative real. Take for example x^3+1 = (x+1)(x^2-x+1) or x^2+1 = (x+i)(x-i). We will say that a 0–1 polynomial c(x) is fair if all factorizations c(x) = a(x) b(x) with a(x), b(x) monic polynomials with real non-negative coefficients have the property that a(x) and b(x) are 0–1 polynomials. We say that a 0–1 polynomial c(x) is unfair if there exists a factorization c(x) = a(x) b(x) with a(x), b(x) both monic with real non-negative coefficients, and at least one of a(x) or b(x) has a non-integer coefficient. The unfair 0–1 polynomial conjecture is that there does not exist an unfair polynomial. See <cit.> and the expansive references therein for more details. The goal of this paper is to provide a classification of those a(x) of degree at most 15 which are not factors of an unfair polynomial. Let a(x) = a_0 + a_1 x + … + a_k x^k be a potential factor of an unfair polynomial. We further assume that the shape of a(x) is given. By this, we mean that it is known whether each a_i is 0, 1 or strictly between 0 and 1. We wish to determine if a(x) is actually a factor of an unfair polynomial.
That is, we wish to determine if there exists a b(x) = b_0 + b_1 x + … b_n x^n with non-negative real coefficients such that a(x) b(x) =: c(x), with c(x) a 0–1 polynomial. We use two different algorithms for studying this problem, and substantial computational verification. We first state a surprisingly useful, although simple result. The polynomial a(x) = a_0 + a_1 x + … + a_k x^k is a factor of an unfair polynomial if and only if a^*(x) = a_k + a_k-1 x + … a_0 x^k is a factor of an unfair polynomial. In <cit.> the polynomial 1+ t x^2 + x^5 was considered, where 0 < t < 1. Some of the techniques utilized in <cit.> are also used in this paper, albeit in a more automated manner, (see Section <ref>). For polynomials up to degree 5, all polynomials can be quickly shown to not be factors of an unfair polynomial, with the exception of 1+t x^2 + x^5 and it's reciprocal. Here, 0 < t < 1. Using techniques similar to those in Section <ref> it was shown in <cit.> that b_n = 1 - t b_n-3 - b_n-5 for 5 ≤ n ≤ 10000. Further, the values b_0, b_1, …, b_5 are explicitly given in terms of t. For 0.005 ≤ t < 1, it was shown, based upon this linear recurrence, that there exists an n ≤ 10000 such that b_n < 0. Hence, if 0.005 ≤ t < 1 then a(x) is not a factor of an unfair polynomial. If instead 0 < t < 0.005, then an analysis on the location of the roots of a(x) was used to show that b(x) must eventually have a negative coefficient. For the first step, we use a simply trinary logic to show that most polynomials of degree at most 15 are not factors of an unfair polynomial. This step was also used in <cit.>, albeit in a less automated manner. This is discussed in Section <ref>. In some cases the trinary logic is insufficient, but a recursive case analysis can be utilized in addition to the trinary logic to derive a contradiction. This is done in Section <ref>. A more sophisticated and computationally expense technique is given in Section <ref>, utilizing Groebner basis and Quantifier Elimination. It is interesting to note that we only needed to go up to n=100 to show that 1 + t x^3 + x^5 is not a factor of an unfair polynomial. This is in contrast to n = 10000 which was used for 1 + t x^2 + x^5 in <cit.>. Section <ref> discusses the numerical results for all possible factors of degree less than or equal to 15. In Section <ref> we consider a relaxation of the definition of unfair polynomials for which solutions do exist. Lastly, in Section <ref> makes some final remarks. § TRINARY LOGIC In this section we will show how one can derive a contradiction using trinary logic. A coefficient of a(x) is one of three things. It is either 0, or 1 or something strictly between 0 and 1. If it is strictly between 0 and 1 we will denote it by *. This logic was used in <cit.> to show most cases up to degree 5 were not factors of unfair 0–1 polynomials, and to give the necessary structure for the last remaining degree 5 cases. A coefficient of b(x) is one of four things. It may be 0, 1 or * as before. In addition, it may simply be unknown as we do not have enough information to solve for it. It is often possible to determine unknown coefficients of b(x) to be one of 0, 1 or *. It is also often possible to determine a contradiction based upon known information. Consider the product (a_0 + a_1 x + … a_k x^k) (b_0 + b_1 x + … b_n x^n) = a_0 b_0 + (a_0 b_1 + b_0 a_1) x + … + (∑_i a_i b_j-i) x^j + … . For convenience, we will denote c_i,j = a_i b_j. 
This allows us to rewrite this product as (a_0 + a_1 x + … a_k x^k) (b_0 + b_1 x + … + b_n x^n) =c_0,0 + (c_0,1 + c_1,0) x + … + ∑_i c_i,j-i x^j + …. By construction we have that ∑_i c_i, j-i is either 0 or 1. We also have that a_0 = b_0 = 1. Below we give the trinary multiplication table: [ × 0 1 *; 0 0 0 0; 1 0 1 *; 0 * * ] If a_i and b_j are known, then we may use this information to determine c_i,j. If c_i,j and a_i are known, it is sometimes possible to use this information to determine b_j. From this, we can construct a table [ b_0 b_1 b_2 …; a_0 c_0,0 c_0,1 c_0,2 …; a_1 c_1,0 c_1,1 c_1,2 …; ⋮ ⋮ ⋮ ⋮ ; a_k-1 c_k-1,0 c_k-1, 1 c_k-1, 2 …; a_k c_k, 0 c_k, 1 c_k, 2 … ] By Equation (<ref>), we observe that we can derive the coefficients for c(x) from this table by summing along the diagonal. We note that this sum must always equal 0 or 1. Initially, many of the values b_j and c_i,j are unknown. It is often possible to determine what these values must be, based on known information. As ∑_i c_i,j-i is either 0 or 1 we see that if c_i,j-i = 1 for some i then c_k, j-k = 0 for all k ≠ i. Additionally, if we know that c_i,j-i = *, and c_k, j-k is unknown and lastly that c_ℓ, j-ℓ = 0 for all ℓ≠ i,k then we known that c_k, j-k = *. Consider a(x) = 1 + * x + x^3. We known that b_0 = 1. This gives us the table [ 1 ; 1 1; *; 0 0; 1 1; ] By observing that c_1,0 + c_0,1 = * + c_0,1 we get that c_0,1 = *. By observing that c_3,0 + c_2,1 + c_1,2 + c_0,3 = 1 + c_2,1 + c_1,2 + c_0,3 we get that c_2,1 = c_1,2 = c_0,3 = 0. We can now update the table to get [ 1 ; 1 1 * 0; * 0; 0 0 0; 1 1; ] By observing that c_0,1 = * and a_0 = 1 we get that b_1 = *. By observing that c_1,2 = 0 and a_1 = * we get that b_2 = 0. By observing that c_0,3 = 0 and a_0 = 1 we get that b_3 = 0. We can now update the table to get [ 1 * 0 0; 1 1 * 0; * 0; 0 0 0; 1 1; ] Updating the table to fill in known multiplications gives [ 1 * 0 0; 1 1 * 0 0; * * 0 0; 0 0 0 0 0; 1 1 * 0 0; ] At this point we have a contradiction. We have that c_2,0 + c_1,1 + c_0,2 = 0 + * + 0 = * which is a number strictly between 0 and 1. Such logic, when automated, and combined with Theorem <ref>, can show a large number of a(x) cannot be factors of an unfair polynomial. See Section <ref>. Using these techniques (combined with Theorem <ref>), we can eliminate all cases for k = 2, 3, 4, 5 and 6 with the exception of 1 + * x^2 + x^5, 1 + * x + * x^2 + x^5, 1 + * x + * x^2 + x^6 and their reciprocals. § RECURSION It occasionally happens that when we fill in the table using the techniques from Section <ref> that we do not arrive at a contradiction, and at the same time, we cannot derive any further information for the coefficients of b(x). As a coefficient of b(x) is either 0, 1 or *, we can recursively check the three cases separately. Consider a(x) = 1 + * x + * x^2 + x^5. Using the techniques from Section <ref>, we can construct the table (up to degree 6) to get [ 1 * 0 0 0 *; 1 1 * 0 0 0 *; * * * 0 0 0 *; * * 0 0 0 *; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 1 1 * 0 0 0 * ] At this point, no further information can be determined, and we have not reached a contradiction. We now recursively check b_2 as being either 0, 1 or *. When b_2 = 1 we get the table [ 1 * 1 0 0 0 *; 1 1 * 1 0 0 0 *; * * * 0 0 0 *; * * * 0 0 0 *; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 1 1 * * 0 0 0 * ] which gives a contradiction as c_2,0 + c_1,1 + c_0,2 = * + * + 1 > 1. 
When b_2 = 0 we get the table [ 1 * 0 0 0 0 *; 1 1 * 0 0 0 0 *; * * * 0 0 0 *; * * 0 0 0 0 *; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 1 1 * 0 0 0 0 * ] which gives a contradiction as c_1,2 = * and b_2 = 0. Lastly, when b_2 = * we get the table [ 1 * * 0 0 0 *; 1 1 * * 0 0 0 *; * * * 0 0 0 *; * * * 0 0 0 *; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 1 1 * * 0 0 0 * ] which gives a contradiction as c_4,0 + c_3,1 + c_2,2 + c_1,3 + c_0,4 = * which is a number strictly between 0 and 1. In some cases we have to use recursion multiple times on a candidate polynomial. Using these recursive techniques (combined with Theorem <ref>), we can eliminate all cases for k = 2, 3, 4, 5 and 6 with the exception of 1 + * x^2 + x^5 and it's reciprocal. § SYMBOLIC TECHNIQUES The logic in Section <ref> did not use detailed information about non-integer values. The only information we used was if a value was 0, 1 or strictly between 0 and 1. It is possible to use more precise information about how unknown values interact. In the previous section, we used a simplified form of multiplication for the values 0, 1 and *. Instead here, we use more precise information about the multiplication. We note that a_i b_j = c_i,j. Hence, if information is known about these terms, then we have the identity c_i,j - a_i b_j = 0. In the previous section, we used the fact that ∑_i c_i, j-i is either 0 or 1. As before, if we know that one of these terms is identically 1, then all other terms must be zero. If all values of the diagonal are determined (i.e., not “unknown”), then we have the additional polynomial identity (∑_i c_i, j-i)(1-∑ c_i, j-i) = 0. We will call the set of all known identities at any step of the calculation the basis of identities, and we will denote this by ℐ. We can use quantifier elimination to determine if there is a solution to this basis of identities where all variables in the basis are strictly between 0 and 1. If there is not a solution, then we have derived a contradiction for this stage of the calculation. If instead, there is a solution, we continue to check more terms to (hopefully) derive a contradiction, or find a counter-example to the unfair 0–1 polynomial conjecture. This process uses recursion, although most branches of the recursion quickly lead to a contraction. When adding new identities to ℐ, it is important to do this taking into account all known identities already in ℐ. We can do this by representing ℐ by its Groebner basis, and reducing all entries in the table with respect to this Groebner basis. It is often the case that the variety represented by the basis of identities is actually the union of two or more sub-varieties. (This is easy to check by looking at the Groebner basis). If this is the case, we typically recurse on these sub-varieites, as this improves the performance of the calculations and the algorithm. Consider as an example the polynomial 1 + s x + t x^4 + x^7. Our initial table looks like [ 1 ; 1 1 ; s s ; 0 0 ; t t ; 0 0 ; 0 0 ; 0 0 ; 1 1 ] Filling in the easy to determine information gives [ 1 - - - 0 - 0 0; 1 1 - - - 0 - 0 0; s s - - - 0 - 0 0; 0 0 0 0 0 0 0 0 0; t t - - - 0 - 0 0; 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0; 1 1 - - - 0 - 0 0 ] At this point, we recurse. Either b_1 = 0, b_1 = 1 or 0 < b_1 < 1. Both b_1 = 0 and b_1 = 1 quickly lead to a contradiction. So we may assume that 0 < b_1 < 1. 
This gives [ 1 b_1 - - 0 - 0 0 ; 1 1 b_1 - - 0 - 0 0 ; s s s b_1 - - 0 - 0 0 ; 0 0 0 0 0 0 0 0 0 ; t t t b_1 - - 0 - 0 0 ; 0 0 0 0 0 0 0 0 0 ; 0 0 0 0 0 0 0 0 0 ; 0 0 0 0 0 0 0 0 0 ; 1 1 b_1 - - 0 - 0 0 ] We also note that s+ b_1 = 0 or s + b_1 = 1. As s, b_1 > 0 we easily see that the first cannot occur. Hence we see that s+b_1 = 1. We now update ℐ to be ℐ = ⟨ s + b_1 -1 ⟩. We also have set of known inequalities. 0 < s, t, b_1 < 1. A quick check with quantifier elimination ensures that there is a possible solution to ∃ s, t, b_1: 0 < s < 1, 0 < t < 1, 0 < b_1 < 1, s+b_1 -1 = 0. We next recurse on b_2. This may be 0, 1 or strictly between 0 and 1. We quickly derive contradictions if b_2 = 0 or b_2 = 1. Hence we may assume that 0 < b_2 < 1. We perform all calculations modulo the basis of identities ℐ = ⟨ s + b_1 - 1⟩ (which is easy to do via Groebner basis). This gives us the table [ 1 -s+1 b_2 - 0 - 0 0 ; 1 1 -s+1 b_2 - 0 - 0 0 ; s s -s^2+s s b_2 - 0 - 0 0 ; 0 0 0 0 0 0 0 0 0 ; t t -s t+t t b_2 - 0 - 0 0 ; 0 0 0 0 0 0 0 0 0 ; 0 0 0 0 0 0 0 0 0 ; 0 0 0 0 0 0 0 0 0 ; 1 1 -s+1 b_2 - 0 - 0 0 ] By considering the sum of the diagonal c_2,0 + c_1,1 + c_0,2, we see that b_2 - s^2 + s is either 0 or 1. As b_2 is strictly positive and -s^2 + s is between 0 and 1, we see that this sum must equal 1. Hence we may add b_2 - s^2 +s -1 to our basis of identities. This gives us ℐ = ⟨ b_1 + s -1, b_2 -s^2 + s - 1 ⟩. We then check via quantifier elimination to determine that there exists a solution to ∃ s, t, b_1, b_2: 0 < s < 1, 0 < t < 1, 0 < b_1 < 1, 0 < b_2 < 1, s+b_1 -1 = 0, b_2 - s^2 + s -1 = 0. We next recurse on b_3. As before we quickly derive a contradiction if b_3 = 0 or b_3 = 1. When we assume 0 < b_3 < 1, we can conclude that s^3 - s^2 + s + t + b_3 -1 = 0. Expanding this table out, (reducing modulo ℐ), we get [ 1 -s+1 s^2-s+1 -s^3+s^2-s-t+1 0 - 0 0 ; 1 1 -s+1 s^2-s+1 -s^3+s^2-s-t+1 0 - 0 0 ; s s -s^2+s s^3-s^2+s -s^4+s^3-s^2-s t+s 0 - 0 0 ; 0 0 0 0 0 0 0 0 0 ; t t -s t+t s^2 t-s*t+t -s^3 t+s^2 t-s t-t^2+t 0 - 0 0 ; 0 0 0 0 0 0 0 0 0 ; 0 0 0 0 0 0 0 0 0 ; 0 0 0 0 0 0 0 0 0 ; 1 1 -s+1 s^2-s+1 -s^3+s^2-s-t+1 0 - 0 0 ] When summing along the diagonal ∑_i c_i, 4-i we get -s^4+s^3-s^2-2 s t+s+t we must be either 0 or 1. When this is added to the basis of identities, quantifier elimination shows that there are no solutions to either ∃ s, t, b_1, b_2, b_3: 0 < s < 1, 0 < t < 1, 0 < b_1 < 1, 0 < b_2 < 1, s+b_1 -1 = 0, b_2 - s^2 + s -1 = 0, s^3 - s^2 + s + t + b_3 -1 = 0, -s^4+s^3-s^2-2 s t+s+t = 0 or ∃ s, t, b_1, b_2, b_3: 0 < s < 1, 0 < t < 1, 0 < b_1 < 1, 0 < b_2< 1, s+b_1 -1 = 0, b_2 - s^2 + s -1 = 0, s^3 - s^2 + s + t + b_3 -1 = 0, -s^4+s^3-s^2-2 s t+s+t-1=0. As such, we conclude that 1 + s x + t x^3 + x^7 is not a potential factor of a unfair polynomial. If upon some branch of this calculation we have a sequence of k+1 coefficients of b(x) that are identically zero, and there exists a solution to ℐ with all variables in (0,1), then we have found a counter-example to the unfair 0–1 polynomial conjecture. These tests were performed with the assumption that the degree of b(x) was bounded by 200. Further, we used a four hour cap on the cpu time per test. Under these restrictions, no test returned a counter-example to the unfair 0–1 polynomial conjecture. Examples were found for a variation of the problem, as we will discuss in Section <ref>. § NUMERICAL RESULTS In this section we present some of the results of our computational experiments. 
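To make concrete what a single "test" in the tables below involves, the following is a minimal Python sketch of the non-recursive trinary check of Section <ref>. It is only an illustration under simplifying assumptions (a bounded search degree and a subset of the deduction rules), not the Maple implementation used for the reported computations, which also includes the recursion of Section <ref> and the symbolic techniques of Section <ref>.

```python
# Trinary values: 0, 1, or "*" (strictly between 0 and 1); None means "not yet known".
def tmul(x, y):
    """Trinary product of two known coefficients."""
    if x == 0 or y == 0:
        return 0
    if x == 1:
        return y
    if y == 1:
        return x
    return "*"                                 # (0,1) times (0,1) stays in (0,1)

def trinary_test(a, max_deg=40):
    """Return 'contradiction' if the trinary deductions rule out a(x) as a factor
    of a 0-1 polynomial, and 'unresolved' otherwise."""
    k = len(a) - 1
    b = [None] * (max_deg + 1)
    b[0] = 1                                   # a_0 = b_0 = 1
    changed = True
    while changed:
        changed = False
        for j in range(max_deg + 1):           # diagonal giving the coefficient of x^j in c(x)
            terms = []
            for i in range(0, min(k, j) + 1):
                if a[i] == 0:
                    continue                   # contributes 0 regardless of b
                v = None if b[j - i] is None else tmul(a[i], b[j - i])
                terms.append((v, i, j - i))
            ones = [t for t in terms if t[0] == 1]
            stars = [t for t in terms if t[0] == "*"]
            unknown = [t for t in terms if t[0] is None]
            # The diagonal sum must be exactly 0 or 1.
            if len(ones) >= 2 or (ones and stars):
                return "contradiction"         # the sum is strictly larger than 1
            if not ones and len(stars) == 1 and not unknown:
                return "contradiction"         # the sum lies strictly between 0 and 1
            if len(ones) == 1:                 # every other term on the diagonal must vanish
                for v, i, jj in terms:
                    if v is None:              # here a_i is 1 or "*", so b_jj must be 0
                        b[jj], changed = 0, True
            elif not ones and len(stars) == 1 and len(unknown) == 1:
                v, i, jj = unknown[0]
                if a[i] == 1:                  # b_jj can be neither 0 nor 1
                    b[jj], changed = "*", True
    return "unresolved"

# 1 + *x + x^3 is ruled out (the worked example of the trinary-logic section), while
# 1 + *x^2 + x^5 survives the trinary test alone, as reported in the text.
print(trinary_test([1, "*", 0, 1]))            # -> contradiction
print(trinary_test([1, 0, "*", 0, 0, 1]))      # -> unresolved
```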
In Table <ref> we indicate, for each k, how many polynomials of degree k there are with a_0 = a_k = 1 and at least one non-integer coefficient. The first two columns give the number of polynomials that can be shown not to be factors of an unfair polynomial using the simple non-recursive trinary logic of Section <ref>, along with the time needed to perform these tests. These techniques were very successful, showing that a vast majority of the polynomials are not factors of unfair polynomials. On average, these took 0.012 seconds per test. The next two columns give the number of polynomials that can be shown not to be factors of an unfair polynomial using the recursive trinary logic of Section <ref>, along with the time needed to perform these tests. This was successful for 65% of the remaining tests that were not resolved by non-recursive logic. These were computationally more expensive, taking on average 1.4 seconds per test. The last two columns give the number of polynomials that can be shown not to be factors of an unfair polynomial by using quantifier elimination as described in Section <ref>. This was successful for 75% of the remaining tests. This was by far the most computationally expensive test, taking on average 29 minutes per test. All calculations were done using Maple 2023 <cit.>. They were run on 4 machines, each with four Intel Xeon Gold 6230 20-core 2.1 GHz (Cascade Lake) processors (768GB memory). Calculations were allowed to run for a maximum of 4 hours. § α-UNFAIR POLYNOMIALS In the previous sections, we gave an algorithm to test if a potential factor was actually a factor of an unfair polynomial. In this section, we will examine how much we would need to relax the definition of an unfair polynomial so that a factor would exist. To this end, we give the following definition. Let c(x) = ∑ c_i x^i with c_i ∈{0,1}. We say that c(x) is α-unfair if there exists a factorization c(x) = a(x) b(x) with a(x) = ∑ a_i x^i, b(x) = ∑ b_i x^i and * b_0 = a_0 = 1, * there exists an i such that b_i ∉{0,1}, * -α≤ b_i ≤ 1+α, * -α≤ a_i ≤ 1+α. We easily see that a 0-unfair polynomial is an unfair polynomial from Section <ref>. By Table <ref>, we see that there do not exist 0-unfair polynomials with a factor of degree less than or equal to 6. The methods of Section <ref> can be modified to search for examples of α-unfair polynomials. There are two key differences. The first is that, when testing if there is a solution via quantifier elimination, we test if the variables are in the range [-α, 1+α] instead of (0,1). The second difference is that we can no longer assume that if c_i,j = 1 then all other terms in the diagonal must be identically 0. As an example, let α = 1 and a(x) = 1 + s x + x^2. We quickly get the table [ 1 ; 1 1 ; s s ; 1 1 ] We then recurse on b_1, testing if it is 0, 1 or a value in [-α,1+α] ∖{0,1} = [-1, 0) ∪ (0,1) ∪ (1, 2]. We derive contradictions if b_1 is 0 or 1, giving us [ 1 b_1 ; 1 1 b_1 ; s s s b_1 ; 1 1 b_1 ] Here the diagonal sum s + b_1 must be either 0 or 1. In this case, we do not derive a contradiction when this sum is 0. Hence, we first test the case where ℐ = ⟨ s + b_1 ⟩ and b_1 = -s. This gives [ 1 -s ; 1 1 -s ; s s -s^2 ; 1 1 -s ] Continuing in this fashion, one of the branches gets to [ 1 -s 1+s -s 1 0 0 0 0 0; 1 1 -s 1+s -s 1 0 0 0 0 0 ; s s -1-s 2s+1 -1-s s 0 0 0 0 0 ; 1 1 -s 1+s -s 1 0 0 0 0 0 ] with ℐ = ⟨ s^2-s-1, s+b_1, -1-s+b_2, b_3+s ⟩. This does have a solution with all variables in the range (-1, 2). This gives us an example of a 1-unfair polynomial.
In particular, we have that x^6 + x^4 + x^3 + x^2 + 1 = (1 + s x + x^2) (1 - s x + (1+s) x^2 - s x^3 + x^4) ≈ (1 - 0.618 x + x^2) (1 + 0.618 x + 0.381 x^2 + 0.618 x^3 + x^4), where s ≈ -0.618 is the negative root of s^2 - s - 1. This is in fact an example of a (√(5)-1)/2-unfair polynomial. We observe the following. If there exists an α-unfair polynomial with a factor of degree k, then for all 1 ≤ i and 0 ≤ j ≤ i-1 there exists an α-unfair polynomial with a factor of degree ik+j. Let a(x) and b(x) be factors of an α-unfair polynomial, with a(x) of degree k. We see that a(x) b(x) is a 0–1 polynomial. Define A(x) = a(x^i) and B(x) = b(x^i). We see that A(x) B(x) is a 0–1 polynomial, and hence there exists a factor of an α-unfair polynomial of degree ki. Similarly, if j ≠ 0 we can take A(x) = a(x^i) (x^j+1) and B(x) = b(x^i) to get a factor of an α-unfair polynomial of degree ki+j. We used this algorithm to computationally explore how small we could make α such that there exists an α-unfair polynomial. To do this, we started with a reasonably large α, say α = 1, and ran the algorithm. If we found a solution, we would then update α to be slightly smaller than the α generated by this solution and repeat. Based upon these experiments, and verification of an observed pattern, we make the following conjecture. For all ϵ > 0 there exists an ϵ-unfair polynomial. Let n ≥ 1. Let C(x) = 1 + x^3 + x^{6n+2} + x^{6n+3} + x^{6n+4} + x^{6n+5} + x^{6n+6} + x^{12n+6} + x^{12n+9}. Take A(x,t) = x^3 + t x^2 + t x + 1. Write C(x) = A(x,t) B(x,t) + R(x, t) for R(x, t) of degree 2 with respect to x. Write R(x, t) = r_2(t) x^2 + r_1(t) x + r_0(t), and let g(t) = gcd(r_2(t), r_1(t), r_0(t)). Computationally, g is non-trivial and has degree 6n+4. Let t_n be the smallest positive root of g. Find the minimal α_n such that the coefficients of A(x, t_n) and B(x, t_n) are in [-α_n, 1+α_n]. Computationally, α_n appears to tend to 0 as n →∞. See Table <ref>. It appears that log(α_n) ≈ -2.925903281 - 1.871057363 log(12 n + 9). This was based on the data for 1 ≤ n ≤ 40. See Figure <ref>. § FINAL REMARKS In this paper we computationally searched for unfair 0–1 polynomials with a given potential factor. Although we were not able to find an example of such a polynomial, we were also unable to prove that one does not exist. In Section <ref> we have provided considerable computational evidence that such polynomials do not exist. If it is true that no such polynomial exists, then there appears to be a clear divide between α-unfair 0–1 polynomials with α > 0 and 0-unfair 0–1 polynomials. § ACKNOWLEDGMENTS I would like to thank Juergen Gerhard for informing me of the new Quantifier Elimination package in Maple 2023. CLO: David A. Cox, John Little, Donal O'Shea. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. Springer, 2015. Ghidelli: Luca Ghidelli. Progress on the unfair 0–1 polynomials conjecture using linear recurrences and numerical analysis. arXiv:2209.09843. HomePage: Kevin G. Hare. <https://uwaterloo.ca/scholar/kghare/home>, 2023. Maple2023: Maple 2023. Maplesoft, a division of Waterloo Maple Inc., Waterloo, Ontario. Sturm: T. Sturm. A Survey of Some Methods for Real Quantifier Elimination, Decision, and Satisfiability and Their Applications. Math. Comput. Sci. 11, 483–502 (2017).
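As a quick sanity check on the explicit 1-unfair example above, the following minimal Python/numpy sketch (the paper's own computations were carried out in Maple 2023) multiplies the two candidate factors at the negative root of s^2 - s - 1 and confirms that the product is the stated 0–1 polynomial, while the factors have a coefficient as low as s ≈ -0.618, so α = (√5 - 1)/2 suffices.

```python
import numpy as np

s = (1 - np.sqrt(5)) / 2                     # negative root of s^2 - s - 1 = 0, about -0.618

a = np.array([1.0, s, 1.0])                  # 1 + s x + x^2 (coefficients in increasing degree)
b = np.array([1.0, -s, 1.0 + s, -s, 1.0])    # 1 - s x + (1+s) x^2 - s x^3 + x^4

c = np.convolve(a, b)                        # coefficients of a(x) b(x)
print(np.round(c, 12))                       # -> [1. 0. 1. 1. 1. 0. 1.], i.e. 1 + x^2 + x^3 + x^4 + x^6
print(min(a.min(), b.min()))                 # -> about -0.618 = -(sqrt(5) - 1)/2
```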
http://arxiv.org/abs/2307.05560v1
20230709161935
Automatic Coding at Scale: Design and Deployment of a Nationwide System for Normalizing Referrals in the Chilean Public Healthcare System
[ "Fabián Villena", "Matías Rojas", "Felipe Arias", "Jorge Pacheco", "Paulina Vera", "Jocelyn Dunstan" ]
cs.CL
[ "cs.CL" ]
The disease coding task involves assigning a unique identifier from a controlled vocabulary to each disease mentioned in a clinical document. This task is relevant since it allows information to be extracted from unstructured data in order to perform, for example, epidemiological studies about the incidence and prevalence of diseases in a given context. However, the manual coding process is subject to errors, as it requires medical personnel to be competent in coding rules and terminology. In addition, this process consumes a lot of time and energy, which could be allocated to more clinically relevant tasks. These difficulties can be addressed by developing computational systems that automatically assign codes to diseases. In this way, we propose a two-step system for automatically coding diseases in referrals from the Chilean public healthcare system. Specifically, our model uses a state-of-the-art NER model for recognizing disease mentions and a search engine system based on Elasticsearch for assigning the most relevant codes associated with these disease mentions. The system's performance was evaluated on referrals manually coded by clinical experts. Our system obtained a MAP score of 0.63 at the subcategory level and 0.83 at the category level, close to the best-performing models in the literature. This system could be a support tool for health professionals, optimizing the coding and management process. Finally, to guarantee reproducibility, we publicly release the code of our models and experiments. § INTRODUCTION Clinical text represents a significant proportion of patients' health records and is commonly found in a non-structured format. These texts pose particular challenges due to the extensive use of abbreviations, the variability of clinical language across medical specialties, and their restricted availability for privacy reasons <cit.>. Due to the complexity of its analysis, this data is commonly discarded in projects that seek to support clinical decision-making <cit.>. Clinical coding involves mapping medical texts into codes using a controlled vocabulary that is consistent across different departments, hospitals, or even countries <cit.>. The World Health Organization maintains an open, controlled vocabulary called the International Classification of Diseases (ICD), which is used in almost every country. Currently, the most widely used revision is the tenth (ICD-10) <cit.>, and the eleventh revision, which will not be limited to diseases, is under development <cit.>. Regarding the Chilean public health system, the ICD-10 terminology is used for coding hospital discharges (morbidity coding by each healthcare provider) and deaths (mortality coding by the Ministry of Health). Having patients' data normalized using these controlled vocabularies makes it possible to summarize information automatically without dealing with the noisiness of free-text data. This already-digested information empowers data analysts who are not experts in NLP to incorporate more complex information into their workflows.
The Waiting Time Management System (SIGTE, in Spanish) contains electronic records of referrals from the Chilean Waiting List, which is the system that manages the high demand existent for consultation by specialists <cit.>. This data provided by 29 health services contain information about the medical diagnoses of patients but is not standardized <cit.>. As of November 2022, SIGTE recorded 25,374,491 waiting list referrals, of which 18,716,629 correspond to "new specialty referrals" and are associated with patient pathologies. Of these referrals, approximately 5,760,750 (30.7 %) have an ICD-10 code. This calculation was performed by searching for a regular expression formatted as an ICD-10 code in the free-text diagnosis fields. Clinical experts perform the disease coding task manually, which is not optimal for several reasons. Firstly, since this process is subject to errors, medical personnel must have significant competence in coding rules and a thorough knowledge of specialized terminologies, such as ICD, which also get updated frequently. In other words, expert coding staff must be familiar with the clinical field, analytical and focused, and have fundamental skills for inspecting and analyzing highly specialized texts. In addition, manual coding is time-consuming <cit.>, which could be optimized by a support system, and this time could be used for other tasks relevant to clinical decision-making. These difficulties can be efficiently addressed using computational systems capable of automatically performing the coding task using NLP. Currently, most automatic coding systems are based on an end-to-end architecture based on deep learning techniques. Although these systems have boosted the performance of several coding tasks, they cannot incorporate context-specific rules, such as code priority, medical assumptions, code definition, and synonyms. In this work, we developed an automated disease coding system, thus being able to code the entire historical waiting list in Chile, identifying a total of 18,716,629 referrals. Our system is based on two steps; first, the automatic extraction of diseases is addressed using a state-of-the-art NER model, and then, using a search engine, the most probable code for each disease found is identified. Finally, we explored the potential applications derived from this system and studied in more depth the most frequent diseases in the country today. § RELATED WORK The disease coding task involves transforming clinical texts, commonly written by physicians in a non-structured format, into codes following medical terminologies. This is not an easy task since a medical ontology such as ICD in Spanish has 14,668 codes, an example of extreme multi-label classification <cit.>. We have identified two major groups of computational methods proposed to solve this task; rule-based coding and neural network-based coding. §.§ Rule-based Models This approach involves designing hand-crafted rules to represent and simulate the flow that clinical experts follow when assigning codes. Most of the studies are based on using regular expressions and keywords to transform diseases found in the text into their respective codes. However, these methods are not feasible since manually capturing all the relations between texts and codes is time-consuming and complex. Different approaches based on machine learning have been proposed to address this issue. 
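As a rough illustration of this style of rule-based matching, and of the regular-expression estimate of already-coded referrals mentioned in the previous section, the following is a minimal Python sketch; the pattern is an assumed approximation of how ICD-10-formatted codes appear in free text (a letter, two digits, and an optional decimal subcategory) rather than the exact expression behind the reported 30.7% figure.

```python
import re

# Hypothetical pattern for ICD-10-formatted codes such as "K02.2" or "J45".
ICD10_PATTERN = re.compile(r"\b[A-TV-Z]\d{2}(?:\.\d{1,2})?\b")

def find_icd10_codes(diagnosis_text: str) -> list[str]:
    """Return the ICD-10-looking codes found in a free-text diagnosis field."""
    return ICD10_PATTERN.findall(diagnosis_text.upper())

referrals = [
    "Sospecha de caries radicular K02.2, control en 3 meses",
    "Dolor lumbar cronico, sin codigo registrado",
]
already_coded = [r for r in referrals if find_icd10_codes(r)]
print(len(already_coded) / len(referrals))   # fraction of referrals carrying an ICD-10-formatted code
```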
In this way, features extracted from statistical models such as decision trees and support vector machines, among others, are incorporated into the manual rules <cit.>. Another method is to create a list of synonyms of the original text to calculate a word distance with respect to the code descriptions of the terminology. Despite their disadvantages, these methods have yielded high results in the literature, effectively supporting manual coding performed by humans <cit.>. §.§ Models based on neural networks Deep learning-based methods have significantly improved the disease coding task in recent years. The advantage of using these models is that the healthcare-specific domain knowledge is no longer needed for the manual development of complex rules. In contrast, these methods can automatically build features powerful enough to capture the relationships between clinical texts and their respective codes. Most proposed systems are based on posing the problem as a multi-label text classification task <cit.>. Thus, the algorithm's input is text, while the output can be one or more codes associated with diseases. Unlike traditional text classification problems, this problem is considered extreme since the number of possible labels increases to thousands (depending on the terminology). The main disadvantage of this approach is that manual coding requires incorporating context-specific rules, such as code priority, medical assumptions, code definition, and synonyms, among other types of information, to improve system performance. In the case of deep learning, this is not considered since the systems are commonly created using an end-to-end approach, meaning that no human knowledge is involved when creating the features or making the predictions. To solve the previous problem, we followed another approach used in the literature, which consists of mixing the previous ideas using two sequential steps; the first one uses deep learning algorithms, while the second allows us to incorporate medical knowledge into the computational system. Firstly, we used a Named Entity Recognition model for automatically recognizing sequences of words in the text which are associated with diseases. Then, each disease found is associated with its most likely ICD-10 code, a task better known as Entity Linking <cit.>. Nowadays, the most commonly used methods for solving the NER task are based on deep neural networks such as transformers-based models or recurrent neural networks, while a frequent technique for assigning codes is to use distance algorithms or search engines to compare the diseases found with the code descriptions of the terminology. §.§ Commercial Systems A handful of commercial products offer information extraction from clinical data, including automatic coding. These products usually are delivered as services and offered by leading cloud providers such as Amazon Web Services with Amazon Comprehend Medical[<https://aws.amazon.com/comprehend/medical/>], Google Cloud with Google Cloud Healthcare Data Engine[<https://cloud.google.com/healthcare>] and Microsoft Azure with Azure Cognitive Service for Language[<https://azure.microsoft.com/en-in/products/cognitive-services/language-service>]. The problem with these services is that they do not offer automatic coding for languages other than English. Data privacy concerns may arise from using this third-party software to extract patients' information. 
Some healthcare providers may prohibit sending data to systems outside the primary source due to potential cybersecurity issues. § DATA AND METHODS The Chilean Waiting List is characteristic of the the public healthcare system. This list arises due to the high demand for medical care and the limited capacity of the public health system to meet it. Entry on the waiting list begins when a patient goes to primary care or secondary care physician to treat pathology. The patient has two possible paths: if the pathology is included in the “Garantías Explícitas en Salud” (GES) program, the patient enters a process where his or her health problem is assured a maximum waiting time for medical attention. If the GES program does not cover the pathology, the referral is classified in one of these five options: New Specialty Consultations (CNE), Follow-up Consultations (CCE), Diagnostic Procedures (Proc), Surgical Intervention (IQ) and Complex Surgical Intervention (IQC). In any of these alternatives, the patient is placed on a waiting list and must wait a variable amount of time to receive medical attention from a specialist. The Chilean Waiting List comprises 25,374,491 referrals, divided into five categories: 18,716,629 correspond to CNE type referrals, 4,391,257 to Proc type referrals, 2,222,545 to IQ type referrals, 39,266 to CCE type referrals, and finally, 4,794 to IQC type referrals. In particular, this work will focus on CNE-type referrals. Within the Chilean Waiting database, 73 attributes are separated into two main types of sets. The first set corresponds to the attributes associated with the person (date of birth, sex, national identifier). In contrast, the second set corresponds to the administrative information associated with the referral given to the person (date of admission, date of discharge, the benefit provided, specialty, diagnostic suspicion, and diagnostic confirmation). For the analysis of the diagnoses present in the referrals, two free-text attributes representing medical diagnoses are considered: diagnostic suspicion and diagnostic confirmation. Table <ref> shows the frequency of referrals according to medical specialty, while Table <ref> shows corpus statistics of the texts analyzed. We used 10,000 referrals from the historical Chilean Waiting List to train the NER module for disease recognition. As detailed in <cit.>, these referrals were previously consolidated by a team of clinical experts, thus constituting the so-called Chilean Waiting list corpus. In addition, we performed rounds of evaluation of the NER performance, identifying diseases that the model could not identify. Thus, these diseases were incorporated as new examples of the model training process. § PROPOSED SYSTEM To code the narratives, we first used a NER model to automatically recognize sequences of words in the text associated with diseases. Then, each disease found is associated with its most likely ICD-10 code through a search engine. Figure <ref> shows an overview of our proposed system. §.§ NER Model As shown in Figure <ref>, the input of our system is the referral written by the physician in an unstructured format. These texts are used as input for the automatic disease recognition model. In particular, this NER model is based on the work proposed in <cit.>, where a simple but highly effective architecture for medical entity recognition is introduced. 
This model, named Multiple LSTM-CRF (MLC), is a deep neural network system composed of three main modules, emphasizing the impact of using domain-specific contextualized embeddings. The first layer of the MLC approach, the “stacked embedding layer”, transforms the texts associated with the diagnoses into a vector representation using character-level contextual embeddings and static word embeddings, both trained in the clinical domain. Then, in the encoding layer, a recurrent neural network is used to obtain long-distance dependencies between words in the sentence, thus obtaining a better context to improve the previous layer's representations. Finally, the classification layer assigns the most probable label to each word in the diagnosis using the CRF algorithm, identifying which parts of the text correspond to the beginning and end of a disease. Regarding the experimental setup, the disease model was trained to 150 epochs using an SGD optimizer with mini-batches of size 32 and a learning rate of 0.1. As mentioned, to encode sentences, we used two types of representations; a 300-dimensional word embedding model trained on the Chilean Waiting List corpus[<https://zenodo.org/record/3924799>] and character-level contextualized embeddings retrieved from the Clinical Flair model <cit.>. To implement the model and perform our experiments, we used the Flair framework, widely used by the NLP research community <cit.>. §.§ Search Engine The output of the NER step is a list containing all the diseases mentioned in the referral. This second module aims to assign an ICD-10 code to each disease found, which can be used later for clinical decisions or management. The assignment of the ICD-10 code is done through a search engine tool based on Elasticsearch[Registered trademark of Elasticsearch B.V. Available at <https://www.elastic.co/elasticsearch/>], an open-source search and analytics engine. This system can assign similarities between the mention of the disease and each of the codes of the ICD-10 tabular list. Unlike the algorithms of distance comparison between words, this search engine has an index that contains each of the ICD-10 diseases represented through a series of synonymous sentences extracted from different sources of information, simulating in a better way the process followed by clinical experts to determine the code of a disease. For example, in the index, the code “K02.2” contains the canonical code description “Caries of cementum” and multiple synonymous definitions, such as “Cement caries” and “Root caries”. This is important as disease mentions found in unstructured diagnoses are rarely equivalent to the exact definition. The sources of information used for the extraction of synonymous disease definitions were as follows: Tabular list of ICD-10 terminology: This is the basis of the index, which tells us which codes we will assign to the disease mentions. Alphabetical index of ICD-10 terminology: The guide for the manual assignment of codes to diseases and was obtained using the “web scraping” technique from the website of the Spanish Ministry of Health [<https://eciemaps.mscbs.gob.es/ecieMaps/browser/index_10_2008.html>]. IRIS dictionary: It maps natural language sentences to an ICD-10 code. This dictionary was built from the mortality coding rounds conducted in the Chilean Department of Statistics and Health Information. UMLS: Spanish definitions from multiple vocabularies were extracted from the metatresaurus database. 
DEIS abbreviations: Manually constructed list of abbreviations and their expansions. §.§ Experiments In our experiments, we measure how well the predictions made by the model agree with the decisions made by clinical experts. In this way, a subset of the referrals described in Section <ref> was selected to be manually coded by a team of two clinical coders. The manual annotation process and system validation steps are provided below. §.§.§ Manual coding The clinical experts carried out the annotation process using Excel software. For this purpose, a file containing a unique identifier for each referral, the associated diagnostic suspicion, and a blank column for the actual coding was provided to the coders. In this way, the expert coders identified disease codes in 1,188 clinical narratives from the Chilean Waiting List for new specialty consultations. It is important to mention that in this process, codes were identified at the referral level, not at the entity level; therefore, it is not possible to determine the performance of the NER model in this experiment. In future work, specialized software such as INCEpTION could be used, as proposed in <cit.>. This software would make it possible to identify which parts of the text refer to diseases. On the other hand, only diseases were coded, but future research could extend the annotation to new entity types, such as clinical procedures or clinical findings. §.§.§ Metric The Mean Average Precision (MAP) metric is used to evaluate the performance of our coding system. This metric is widely used in works that address the same automatic coding task. It is defined as follows: AveP = (∑_k P(k)· rel(k)) / R, where P(k) represents the precision at position k, rel(k) is an indicator function equal to 1 if the element at rank k is a relevant code and 0 otherwise, and R is the number of relevant codes for the referral. The MAP is then the mean of AveP over all evaluated referrals. The MAP is computed using the Python implementation of the TREC evaluation tool by <cit.>, where an adaptation was applied in which the coded diagnoses have to be ordered based on a ranking, which in this work is the order in which each mention was found and its code subsequently assigned. § RESULTS §.§ Coding Performance The ICD-10 consists of a single coded catalog composed of three-character categories, each of which can be further subdivided into as many as ten four-character subcategories. We computed the MAP metric over the test set at the category (e.g. K02) and subcategory (e.g. K02.2) levels. We achieved a MAP of 0.83 at the category level and 0.63 at the subcategory level. To underline the difficulty of achieving outstanding results in coding, we also measured the agreement between the clinical expert coders themselves. The expert coders achieved an agreement MAP of 0.75 at the subcategory level and 0.83 at the category level. Several reasons, such as subjectivity in clinical judgment, the complexity of coding guidelines, the evolving nature of medicine, time pressure and workload, personal bias, and lack of standardization, could explain the low agreement score. § ERROR ANALYSIS To better understand the errors made by our coding system, we performed a granular analysis of the scores obtained across the different specialties in the corpus. Tables <ref> and <ref> show the 14 best and 10 worst scores according to specialty.
We noted that in the top 14 best specialties the diagnostic suspicions registered in the referral were written straightforwardly and were specific diagnoses, such as “lipoma”, “caries”, and “nephrolithiasis”, avoiding other clinical information like comorbidity, medication intake, or some other medical history. Furthermore, it can be noted that half of these referrals are related to dental diagnosis. On the other hand, the top 10 worst specialties share in common that most of the diagnoses are very unspecific, with the incorporation of non-medical information such as the patient's phone number, patient's address, physician's name, the specialty the patient is referred to and information about comorbidity. Besides, several referrals are without a diagnosis but with the text “unspecific consultation” or “other”. § MODEL DEPLOYMENT AND USE CASES Due to internal regulations, we could not send patients' data to third-party systems such as cloud providers or academic supercomputing clusters <cit.>. For this reason, we deployed the whole coding system on-premise on a bare metal machine with a GPU compute module (NVIDIA RTX A4000[The compute module has 16 GB of GPU memory and 6.144 CUDA cores. More information at <https://www.nvidia.com/en-us/design-visualization/rtx-a4000/>]) to process the coding requests from the whole department efficiently. The complete automatic coding system was deployed as a pair of microservices running inside containers to ease portability. One container hosts the NER module and exposes an API as a web service listening to disease-mention detection requests. The other container consists of the recommended implementation of the Elasticsearch software, which also exposes its API as a web service listening to mention-coding requests. To code the waiting list and schedule recurrent coding when new data arrives, we used the KNIME[Registered trademark of KNIME GmbH. Available at <https://www.knime.com/>] software, a visual-programming data mining platform. We chose this software because of its ease of use for non-expert developers. The workflow starts with the raw waiting list, which is first passed through the NER module to detect disease mentions, and then each mention is sent to the coding module to assign the most relevant code. The automatic coding result from the workflow mentioned above is persisted on a table inside a database that stores each disease mention for each referral along with the predicted code from the system. § CONCLUSIONS In this work, we created a nationwide system to improve the management of the Chilean public healthcare system. Specifically, we addressed the challenge of creating an automated system to code the diseases present in the Chilean Waiting List referrals. We developed and validated a model based on two steps: a NER model to recognize disease mentions and a search engine based on Elasticsearch to assign the codes to each disease. This mapping system was enriched with several terminology resources used in real life by manual coders to assign codes, thus partially simulating the pipeline followed by these professionals when solving this task. The system allowed us to assign codes to 18,716,629 referrals, thus demonstrating its efficiency and effectiveness. The performance obtained in our experiments was 0.83 according to the MAP score, which is close to the most advanced systems currently in the coding task. 
The model was deployed into production in the Department of Health Statistics and Information Systems of the Ministry of Health of Chile. This system could provide important support for the management of waiting lists. In addition, since 75% of the Chilean population is covered by the public healthcare system, the analysis of new specialty consultations can be used for epidemiological studies, such as the one on the incidence of psoriasis <cit.>. § ACKNOWLEDGEMENTS This work was funded by ANID Chile: Basal Funds for Center of Excellence FB210005 (CMM); Millennium Science Initiative Program ICN17_002 (IMFD) and ICN2021_004 (iHealth); Fondecyt grant 11201250; and National Doctoral Scholarship 21220200. We also acknowledge Daily Piedra and Marcela Carmona for their work on annotating and coding the test dataset.
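As a rough sketch of how a client inside the department might call the two microservices described in the deployment section (the NER web service and the Elasticsearch index), the snippet below sends a referral to a hypothetical NER endpoint and then queries Elasticsearch over its REST API for the best-matching ICD-10 code. The endpoint path, index name, and field names are illustrative assumptions, not the deployed configuration.

```python
import requests

NER_URL = "http://localhost:8000/ner"        # hypothetical NER microservice endpoint
ES_URL = "http://localhost:9200"             # Elasticsearch REST API
INDEX = "icd10_synonyms"                     # assumed index of ICD-10 codes and synonym sentences

def code_referral(diagnosis_text: str) -> list[dict]:
    """Detect disease mentions, then retrieve the top-ranked ICD-10 code for each one."""
    mentions = requests.post(NER_URL, json={"text": diagnosis_text}).json()["mentions"]
    coded = []
    for mention in mentions:
        query = {"query": {"match": {"synonyms": mention}}, "size": 1}
        hits = requests.post(f"{ES_URL}/{INDEX}/_search", json=query).json()["hits"]["hits"]
        if hits:
            coded.append({"mention": mention,
                          "code": hits[0]["_source"]["code"],
                          "score": hits[0]["_score"]})
    return coded

print(code_referral("Sospecha de caries de cemento, control con especialista"))
```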
http://arxiv.org/abs/2307.06429v1
20230711034831
Mean-squared displacement and variance for confined Brownian motion
[ "Yi Liao", "Yu-Zhou Hao", "Xiao-Bo Gong" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
^1 Department of Materials Science and Engineering, College of Engineering, Southern University of Science and Technology, Shenzhen, 518055, China ^2 Department of Physics, College of Science, Southern University of Science and Technology, Shenzhen, 518055, China ^3 State Key Laboratory for Mechanical Behavior of Materials, School of Materials Science and Engineering, Xi'an Jiaotong University, Xi'an, 710049, China ^4 Yunnan Observatory and Key Laboratory for the Structure and Evolution of Celestial Objects, Chinese Academy of Sciences, Kunming, 650011, China For one-dimension Brownian motion in the confined system with the size L, the mean-squared displacement(MSD) defined by ⟨ (x-x_0)^2 ⟩ should be proportional to t^α(t). The power α(t) should range from 1 to 0 over time, and the MSD turns from 2Dt to c L^2, here the coefficient c independent of t, D being the diffusion coefficient. The paper aims to quantitatively solve the MSD in the intermediate confinement regime. The key to this problem is how to deal with the propagator and the normalization factor of the Fokker-Planck equation(FPE) with the Dirichlet Boundaries. Applying the Euler-Maclaurin approximation(EMA) and integration by parts for the small t, we obtain the MSD being 2Dt(1-2√(ξ)/3π√(π)), with t_ch=L^2/4π^2D,ξ≡t/t_ch, and the power α(t) being 1-0.18√(ξ)/1-0.12√(ξ). Further, we analysis the MSD and the power for the d-dimension system with γ-dimension confinement. In the case of γ< d, when t is small or large enough, the diffusion is normal(MSD∝ t). However, there exists the sub-diffusive behavior in the intermediate time. The universal description is consistent with the recent experiments and simulations in the micro-nano systems. Finally, we calculate the position variance(PV) meaning ⟨ (x-⟨ x ⟩)^2 ⟩. In the finite system, the variance is not necessarily the same as MSD. The initial conditions are essential to characterize the diffusion behavior described by the FPE, especially in the finite system. Under the initial condition referring to the different probability density function(PDF) being p_0(x), MSD and PV should exhibit different dependencies on time, which reflect corresponding diffusion behaviors.As examples, the paper discusses the representative initial PDFs reading p_0(x)=δ(x-x_0), with the midpoint x_0=L/2 and the endpoint x_0=ϵ(or 0^+). In the case of midpoint, the MSD(equal to PV) reads 2Dt(1-5π^3 Dt/L^2) for the small t, which reflects a kind of sub-diffusion, with D being the diffusion coefficient. In the case of endpoint, the MSD(equal to PV) reads 4/π(2Dt)[1+2√(π Dt)/L] for the small t, which reflects a kind of super-diffusion. Mean-squared displacement and variance for confined Brownian motion Yi Liao^1,2,[[email protected]], Yu-Zhou Hao^3 and Xiao-Bo Gong^4 August 12, 2023 ========================================================================== § I.INTRODUCTION The study of diffusion phenomenon originated from people's exploration of Brownian motion, and its theoretical basis is mainly statistical physics and molecular dynamics<cit.>. At the beginning of the 19-th century, the British botanist R. Brown found that the suspended small particles such as pollen in the water kept moving in an irregular curve, which was called Brownian motion<cit.>. Decades later, physicists such as J. Delsaulx, A. Einstein, and P. Langevin et. al. provided a good quantitative explanation for this phenomenon: the mean square displacement (MSD) of small particles is proportional to the observation duration (MSD∝ t^α,α=1). 
Its comprehensive mathematical description corresponds to the probability theory of random walking. Further research has shown that this proportional relationship to the power of time is only applicable to normal diffusion situations. There are also some anomalous diffusion phenomena in nature, such as sub diffusion and super-diffusion. α=0 corresponds to strict localization, and α=2 corresponds to ballistic transport, which corresponds to the power relationship of uniform motion. The transition between localization and normal diffusion is called sub-diffusion, while the transition between normal diffusion and ballistic transport is called super-diffusion. The extended diffusion model can explain many phenomena in physics, chemistry, biology, virus transmission, and even economic activities<cit.>. For the diffusion, researchers mainly consider the transport properties of their internal properties, with little exploration of the influence of boundary conditions on them, generally limited to free infinite space or periodic boundary conditions. However, the confinement effect require more critical and cautious treatment. For example, in the Brownian motion in a cup, as time increases, the square root of the mean square displacement of particles cannot exceed the physical scale ρ of the cup. After a sufficient period of time, the mean square displacement of particles is only related to and the dependence on time gradually disappears(MSD∝ t^α(t,ρ),α(t,ρ):1→ 0). This phenomenon naturally goes against the rule that the MSD is proportional to the observation duration. The quantitative description of this intuitive feeling also has academic appeal and considerable scientific significance. With the refinement and deepening of research, this confinement effect can essentially be attributed to the influence of scale effects, and its importance will be highlighted in low dimensional and microscale situations <cit.>. The confinement has been shown to the sub-diffusive dynamics of particles and macromolecules in micro-nano system, special the biological system<cit.>. Several studies have reported the sub-diffusion behaviors in confining system, such as the slits, spheres, channels, and other geometries<cit.>.The kind of slowdown was more pronounced as the degree of confinement increased. However, the previous papers have rarely explored the confinement effect purely from the perspective of boundary conditions, but have focused more on the size effect through comparing the Brownian particle scale with the confinement scale and exploring the effective diffusion coefficient. They avoid the tedious task of normalizing the conservation of probability in finite space. The paper attempts to study the confinement effect from the viewpoint of normalization factor. The normalization factor is equivalent to the partition function in statistical physics, and many confinement effects can be attributed to this. For example, the crucial mean-first-passage time(MFPT) in heat conduction problems can be considered as the integration with time variables of the time-dependent normalization factors. § II. PROPAGATOR AND NORMALIZATION FACTOR IN FINITE SYSTEM The propagator satisfies the Fokker-Planck Equation in the confined system<cit.>. ∂/∂ t Q(x,t|x_0,0)=(-F∂/∂ x+D ∂^2/∂ x^2)Q(x,t|x_0,0) . Dirichlet boundaries mean Q(0,t|x_0,0)=Q(L,t|x_0,0)=0. The corresponding propagator reads Q(x,t|x_0,0) = 2/Lexp[2F(x-x_0)-F^2 t/4D] × n=1+∞∑exp[-n^2π^2D t/L^2]sin(nπ x_0 /L)sin(nπ x /L). 
If the external force F=0, the propagator reads Q(x,t|x_0,0) = 2/Ln=1+∞∑exp[-n^2π^2D t/L^2]sin(nπ x_0 /L)sin(nπ x /L). To analysis the diffusion behavior,we need to know the probability density function P(x,t). To keep probability conserved, we have to obtain the normalization factor in different initial condition. In the paper, we discuss three initial condition. Initial condition 1 means Q(x,0|x_0,0)=δ(x-x_0),p_0(x_0)=1/L. Here, the p_0(x_0)=1/L denotes the uniform distribution for the initial point. The PDF reads P(x,t)=1/L Z̅(L,t)∫_0^L Q(x,t|x_0,0)dx_0, ∫ Q(x,t|x_0,0)dx_0 dx=LZ̅(L,t). The normalization factor Z̅(L,t) reads Z̅(L,t)=8/π^2m=0+∞∑exp[-A(2m+1)^2]1/(2m+1)^2, A ≡π^2Dt /L^2. The mean first passage time<cit.>reads T = ∫_0^+∞∫_0^L ∫_0^L Q(x,t|x_0,0) dxdx_0dt=∫_0^+∞Z̅(L,t) dt = 8L^2/π^4 Dm=0+∞∑1/(2m+1)^4=L^2/12D=∫_0^L [x_0(L-x_0)/2D]p_0(x_0)dx_0. Initial condition 2 reads Q(x,0|x_0,0)=p_0(x)=δ(x-L/2). The normalization factor Z(L,t) reads Z(L,t) = 2/L∫_0^Lm=0+∞∑{exp[-(2m+1)^2π^2D t/L^2] × [(-1)^m]sin[(2m+1)π x/L]}dx. = 4/πm=0+∞∑[(-1)^m/2m+1]exp[-(2m+1)^2π^2D t/L^2]. The PDF reads P(x,t) = 2/ L Z(L,t)m=0+∞∑{exp[-(2m+1)^2π^2D t/L^2] × [(-1)^m]sin[(2m+1)π x/L]}. Initial condition 3 reads Q(x,0|x_0,0)=p_0(x)=δ(x-ϵ). The normalization factor Z(L,t) reads Z(L,t) = 2πϵ/ L^2 ∫_0^Ln=0+∞∑{exp[-n^2π^2D t/L^2]nsin[nπ x/L]}dx = 4 ϵ/ L m=0+∞∑exp[-(2m+1)^2π^2D t/L^2]≡4 ϵ/ L Ẑ(L,t). The PDF reads P(x,t)=π/ 2L Ẑ(L,t)n=0+∞∑{exp[-n^2π^2D t/L^2]nsin[nπ x/L].} To deal with all kinds of sums of series, we introduce the Euler-Maclaurin approximation(EMA) which means ∑_m=0^∞M(m) ≈ ∫_0^+∞M(x)dx +M(0)+M(+∞)/2 + ∑_k=1^∞B_2k/(2k)![d^(2k-1)M/dx^(2k-1)(+∞)-d^(2k-1)M/dx^(2k-1)(0)]. § III. MSD AND PV FOR CONFINED BROWNIAN MOTION The mean square displacement(MSD)for the one-dimension system in initial condition 1 is defined by ⟨ (x-x_0)^2 ⟩=1/ L Z̅(L,t)∫ (x-x_0)^2Q(x,t|x_0,0)dx_0dx. The probability density function(PDF) reads P(x,t) = 4/π L Z̅(L,t)m=0+∞∑{exp[-(2m+1)^2π^2D t/L^2] × [1/(2m+1)]sin[(2m+1)π x/L]}. We re-write the normalization factor reads<cit.> Z̅(L,t)=8/π^2m=0+∞∑exp[-A (2m+1)^2]1/(2m+1)^2≡8/π^2 I(t). we introduce the reduced size meaning L̃^-1≡√(A)=π√(D t)/L. If L̃^-1 is small, Using the Eq.(<ref>), we obtain I(t)=π^2/8[1-4/π^3/2L̃^-1+O(L̃^-4)]≈π^2/8[ 1-4/√(π)1/L√(D t)]+O(t^2). The normalization factor reads Z̅(L,t)=1-4/π^3/2L̃^-1+O(L̃^-4)≈ 1-4/√(π)√(D t)/L. Making the Taylor expansion of normalization factors, we have 1/Z̅(L,t)=1+4β t^1/2+16β^2t+64β^3t^3/2,β≡√(D )/√(π)L. The normalization factor is related to the fluctuation-induction force, We have proved that the EMA is effective when L̃^-1<0.5 in Ref.<cit.>. Ones know ⟨ x_0^2 ⟩=⟨ x^2 ⟩=∫ x^2P(x,t)dx in this case. So ones get ⟨ (x-x_0)^2 ⟩=2[⟨ x^2 ⟩-⟨ x x_0 ⟩]. Here, the position correlation function reads ⟨ xx_0 ⟩ = 2L^2/π^2 Z̅(L,t)m=0+∞∑{exp[-(m+1)^2π^2D t/L^2][1/(m+1)^2] } ≡ 2L^2/π^2 Z̅(L,t) II(t). The average of the square of position variable reads ⟨ x^2 ⟩ = 4L^2/π^4 Z̅(L,t)m=0+∞∑{exp[-(2m+1)^2π^2D t/L^2] × [1/(2m+1)^4][π^2(2m+1)^2-4] }. It can turn into the following formula, which reads ⟨ x^2 ⟩ = L^2/2-16L^2/π^4 Z̅(L,t)m=0+∞∑{exp[-(2m+1)^2π^2D t/L^2]1/(2m+1)^4] } ≡ L^2/2-16L^2/π^2 Z̅(L,t)III(t). We introduce the function II'(t) which reads II'(t)≡6/π^2 II(t)≈ 1-6β t^1/2+3πβ^2 t. Here,the function III(t) satisfies ∂ III(t)/∂ t=-D/L^2I(t),III(0)=π^2/96. We introduce the function III'(t) which reads III'(t)≡96/π^2III(t)=1-12πβ^2t+32 πβ^3t^3/2. 
So, the MSD expressed as a series solution reads ⟨ (x-x_0)^2 ⟩=L^2{1-1/3Z̅(t)[III'(t)+2II'(t)]}≡ L^2 f(t). Adopting the EMA for the small t, we have ⟨ (x-x_0)^2 ⟩=2πβ^2t[1-4β/3t^1/2]=2Dt(1-4√(Dt)/3√(π)L). We introduce the characteristic time t_ch=L^2/4π^2D and the reduced time ξ≡t/t_ch. The MSD reads ⟨ (x-x_0)^2 ⟩=2Dt(1-2√(ξ)/3π√(π))=L^2/2π^2(ξ-0.12ξ^3/2). And for the large t, we can adopt the first-term approximation(FTA) for the series solution in Eq.(<ref>). When A≡0.25ξ is large (meaning A>A_0), the structure factor reads<cit.> S(q)≡⟨exp[iq(x-x_0)] ⟩=π^4[1+cos(qL)]/2(π^2-q^2L^2)^2. The MSD reads MSD=-d^2S(q)/dq^2|_q=0=(π^2-8)/2π^2L^2,t→ +∞. As the shown in Fig.(<ref>), A_0≈ 2.5, the approximation is reasonable. We define the power α(t) by<cit.> Δ t→ 0limMSD(t+Δ t)/2D_eff(t+Δ t)^α(t)=1. We have α(t)≡α(ξ)=1-0.18√(ξ)/1-0.12√(ξ). we know α(0.5)=0.95,α(1)=0.93,α(2)≈0.90. Because 2/3π√(π)≈ 0.120,(π^2-8)/2π^2=0.095, we notice t=2.28t_ch, 2Dt(1-4√(Dt)/3√(π)L)=(π^2-8)/2π^2L^2. Above results are Summarized in Table.<ref>. Further, we analysis the MSD and the power for the d-dimension system with γ-dimension confinement. As shown in Table.<ref>, in the case of γ<d, when t is small or large enough, the diffusion is normal(MSD∝ t). The fator η_1≈ 1, η_2≈ 1 is dependent of the numerical result. The function g(t) is a series summation similar to f(t). Using the Eq.(<ref>)and considering ⟨ x ⟩=L/2, the position variance(PV) reads PV ≡ ⟨ x^2 ⟩-⟨ x ⟩^2 =L^2/4-L^2/6 Z̅(L,t)III'(t) ≈ L^2/4-L^2/6(1-4β t^1/2)+2Dt(1-8/3β t^1/2/1-4β t^1/2). ≈ 2L^2/3[(3π-4)β^2t-β t^1/2]+ L^2/12. § IV. DIFFUSION BEHAVIOR DEPENDENT ON INITIAL CONDITION IN THE CONFINED GEOMETRY The initial conditions are essential to characterize the diffusion behavior described by the Fokker-Planck equation, especially in the finite system. To clarify this dependency, we consider the one-dimension FPE with the Dirichlet Boundaries in the confined geometry with the size L. Under the initial condition referring to the different probability density function(PDF) being p_0(x), the mean-squared displacement defined by ⟨ (x-x_0)^2 ⟩ and the position variance(PV) meaning ⟨ (x-⟨ x ⟩)^2 ⟩ should exhibit different dependencies on time, which reflect corresponding diffusion behaviors. The key to this problem is how to deal with the propagator of FPE and the normalization factor. For the small t, we also apply the Euler-Maclaurin approximation and integration by parts. §.§ i. Midpoint case In midpoint case, the normalization reads Z(L,t)=4/πm=0+∞∑[(-1)^m/2m+1]exp[-(2m+1)^2A]. It also reads Z(L,t) = 4/πm=0+∞∑{(1/4m+1)exp[-(4m+1)^2A ] - (1/4m+3)exp[-(4m+3)^2A ]}. = 4/π{∫_1^3exp(-Ax^2)/xdx+1/2[exp(-A )-exp(-9A)]+⋯} = 2/π{[Ei(-9A)-Ei(-A)+exp(-A )-exp(-9A)]+⋯} ≈ 1-40/π A^2+o(A^2). Here, the Airy function reads Ei(ζ)≡∫_-∞^ζexp(t)/tdt=γ+ln|ζ|+ m=1+∞∑ζ^m/m m!. The average of the square of position variable reads ⟨ x^2 ⟩ = 2L^2/ Z(L,t)m=0+∞∑{exp[-A(2m+1)^2] × [(-1)^m](2m+1)^2π^2-4 /(2m+1)^3π^3}. When A→ +∞, ⟨ x^2 ⟩=π^2-4/2π^2L^2. Considering ⟨ x ⟩=x_0=L/2, we have MSD(t→ +∞)=(π^2-8/4π^2)L^2≈ 0.047 L^2. We introduce the auxiliary function R(A), which reads R(A)≡m=0+∞∑{[(-1)^m] exp[-A(2m+1)^2]/(2m+1)^3} It satisfies ∂ R(A)/∂ A=-π/4 Z(L,t), R(0)=π^3/32. Thus, we get ⟨ x^2 ⟩ = L^2/2-8L^2/π^3 Z(L,t)m=0+∞∑{[(-1)^m] exp[-A(2m+1)^2]/(2m+1)^3} ≈ L^2/2-L^2[1-8A/π^2+O(A^3)]/ 4[1-40/πA^2+o(A^2)]. We have the following formula which reads(with small t) MSD=PV≈2Dt(1-5A/π )=2Dt(1-5π Dt/L^2). §.§ ii. 
Endpoint case We have defined the co-error function erfc(ζ), and for small ζ erfc(ζ) ≡ 1-erf(ζ) ≡ 1-2/√(π)∫_0^ζexp(-t^2)dt = 1-2/√(π)(ζ-ζ^3/3+ζ^5/5· 2!+⋯). Using the EMA, we have Ẑ(L,t) = m=0+∞∑exp[-(2m+1)^2A]≈√(π)/4√(A) erfc(√(A)) ∼√(π)/4√(A)(1-2/√(π)√(A)). Thus,we have MSD=PV = ⟨ x^2 ⟩= L^2 n=1+∞∑[(2-n^2π^2)(-1)^n-2/π^3n^2]exp[-n^2A]/m=0+∞∑exp[-(2m+1)^2A] = L^2/πẐ(L,t)m=0+∞∑{exp[-(2m+1)^2A]-exp[-(2m+2)^2A]} - 4L^2/π^3 Ẑ(L,t)m=0+∞∑exp[-A(2m+1)^2]/(2m+1)^2. ≡ L^2V(t)/πẐ(L,t) -4L^2I(t)/π^3 Ẑ(L,t). When A→ +∞, we have MSD(t→ +∞)=(π^2-4/π^3)L^2≈ 0.189 L^2. Because based on the Eq.(<ref>) for the second term, the divergent part offsets the first term related to V(t), we obtain(with small t) MSD=PV≈L^2/2πẐ(L,t)[(4π^-3/2)√(A)]= 4/π(2Dt)[1+2√(π Dt)/L]. § VI. SIMULATION THROUGH RANDOM WALK THEORY The above theoretical result is shown in Fig.(<ref>). The Fokker-Planck equation could be derived by the random walk theory. The position reads x_i for a particle which randomly takes i steps (i∈[0, i_m]), with x_i∈[0,N_m] for the confined Brownian motion. When x_i≠ 0 and x_i≠ N_m, x_i+1=x_i± 1, with the probability being 0.5, respectively. When x_i=0 , x_i+1=1, When x_i=N_m, x_i+1=N_m-1. It means that D=0.5. We need to introduce a re-scaling relation where t→ i, L^2→√(π)N^2_m and t_ch→ i_ch≡√(π)N^2_m/2π^2. The simulation result is shown in Fig.(<ref>). It need to be pointed that x_i and x_0 is symmetrical under the condition (a) in the simulation. Therefore, PV is equal to PV(0) which satisfies PV/(√(π)N_m^2)≈1/12√(π)≈ 0.047. For the confined system, there is some difference between Fokker-Planck equation and random walk theory, specially for the endpoint case (c). § VI. RESULTS AND DISCUSSION Based on the series solution in Eq.(<ref>), we obtain the MSD being 2Dt(1-2√(ξ)/3π√(π)) for smaill t, with t_ch=L^2/4π^2D,ξ≡t/t_ch, and the power α(t) being 1-0.18√(ξ)/1-0.12√(ξ). Further, as shown in Table.<ref>, we analysis the MSD and the power for the d-dimension system with γ-dimension confinement. In the case of γ<d, when t is small or large enough, the diffusion is normal(MSD∝ t). However, there exists the sub-diffusive behavior in the intermediate time. The universal description is consistent with the recent experiments and simulations in the micro-nano systems. In the Ref.<cit.>, there is a foundational formula in previous researches for the confined system, which reads MSD_L(t)=L^2/6-16L^2/π^4m=0+∞∑{[1/(2m+1)^4]exp[-(2m+1)^2π^2D t/L^2] }. Here, MSD_L(0)=0,MSD_L(t→ 0)≈ 2Dt. The formula has been widely used to discuss the diffusion of nano-materials,such as nanoporous structure<cit.>. Under the condition Z̅=1, the formula is very different of the series solution in the Eq.(<ref>). It is pointed out that it is similar with the PV when Z̅≈ 1. We have PV = L^2/2-16L^2/π^4m=0+∞∑{[1/(2m+1)^4]exp[-(2m+1)^2π^2D t/L^2]}-(L/2)^2 = L^2/12+MSD_L(t). Here, PV(t=0)= L^2/12. When the time t is small, we have a formula being similar to he Eq.(<ref>), which reads PV(t)-PV(t=0)=2Dt(1-8√(Dt)/3√(π)L). It also reflects the sub-diffusive behavior presented in the Ref.<cit.>. In previous studies, the MSD and the PV is equivalent to describe diffusion behavior. But in the paper we find that both is very different in the finite system. and we think that the Eq.(<ref>) is a better choose to study all kinds of macro-nano systems. The initial conditions are essential to characterize the diffusion behavior described by the FPE, especially in the finite system. 
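A minimal implementation of the random-walk scheme described in the simulation section can be sketched as follows (illustrative lattice size and trajectory count; the comparison values use the small-t series result and the long-time plateau quoted above together with the re-scaling L² → √π N_m², so close but not exact agreement is expected).

```python
import numpy as np

rng = np.random.default_rng(0)

def confined_walk_msd(N_m=100, n_steps=10000, n_walkers=5000):
    """Unbiased walk on {0, ..., N_m} with the boundary rule of the simulation
    section (x=0 -> 1, x=N_m -> N_m-1), so that D = 1/2 in lattice units.
    Initial positions are drawn uniformly, mimicking condition (a)."""
    x0 = rng.integers(0, N_m + 1, size=n_walkers)
    x = x0.copy()
    msd = np.empty(n_steps)
    for i in range(n_steps):
        x = x + rng.choice((-1, 1), size=n_walkers)
        x[x < 0] = 1                 # both moves from x=0 end up at x=1
        x[x > N_m] = N_m - 1         # both moves from x=N_m end up at N_m-1
        msd[i] = np.mean((x - x0) ** 2)
    return msd

N_m, D = 100, 0.5
L2 = np.sqrt(np.pi) * N_m**2                      # re-scaling L^2 -> sqrt(pi) N_m^2
msd = confined_walk_msd(N_m)
steps = np.arange(1, msd.size + 1)

# small-t series result and long-time plateau quoted in the text
small_t = 2 * D * steps * (1 - 4 * np.sqrt(D * steps) / (3 * np.sqrt(np.pi * L2)))
plateau = (np.pi**2 - 8) / (2 * np.pi**2) * L2

print("MSD(50 steps): simulated / small-t formula =", msd[49] / small_t[49])
print("MSD(final):    simulated / plateau         =", msd[-1] / plateau)
```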
As examples, the paper discusses two representative initial PDFs, p_0(x)=δ(x-x_0), with the midpoint x_0=L/2 and the endpoint x_0=ϵ (or 0^+). As shown in Fig.(<ref>), in the midpoint case the MSD reads 2Dt(1-5π Dt/L^2) for small t, which reflects a kind of sub-diffusion, with D being the diffusion coefficient. In the endpoint case the MSD reads 4/π(2Dt)[1+2√(π Dt)/L] for small t, which reflects a kind of super-diffusion. How can this type of super-diffusive behavior be understood? We use Dirichlet boundaries and also require conservation of probability within the interval L. In a certain sense, the boundary is then effectively equivalent to a reflective boundary. There is initially a forced one-way diffusion, which is faster than normal diffusion. § ACKNOWLEDGMENTS Y. Liao would like to thank Li-Cong Hu, Jia-Jun He, Zhi-Bin Gao, Xiang-Ying Shen, Jian-Ying Du and Zi-Qian Xie for drawing assistance and writing embellishment. Y. Liao is extremely grateful to Prof. Bao-Wen Li for helpful discussions. This work was supported in part by startup funding from the Southern University of Science and Technology. § REFERENCES Kardar M. Kardar, Statistical Physics of Particles (Cambridge University Press, New York, 2007). Mazur P. Mazur and I. Oppenheim, Molecular theory of Brownian motion, Physica 50, 241 (1970). Bian X. Bian, C. Kim and G. Em Karniadakis, 111 years of Brownian motion, Soft Matter 12, 6331 (2016). Plyukhin A. V. Plyukhin, Generalized Fokker-Planck equation, Brownian motion, and ergodicity, Phys. Rev. E 77, 061136 (2008). Dzugutov M. Dzugutov, A universal scaling law for atomic diffusion in condensed matter, Nature 381, 137 (1996). de Grooth B. G. de Grooth, A simple model for Brownian motion leading to the Langevin equation, Am. J. Phys. 67, 1248 (1999). Plyukhin2006 A. V. Plyukhin, Does a Brownian particle equilibrate?, Europhys. Lett. 75, 15 (2006). Liao2021 Y. Liao and X.-B. Gong, A new derivation of the relationship between diffusion coefficient and entropy in classical Brownian motion by the ensemble method, SciPost Phys. Core 4, 015 (2021). Faucheux L. P. Faucheux and A. J. Libchaber, Confined Brownian motion, Phys. Rev. E 49, 5158 (1994). Alonso D. Alonso, A. Ruiz, and I. de Vega, Polygonal billiards and transport: Diffusion and heat conduction, Phys. Rev. E 66, 066131 (2002). Krager J. Kärger, D. M. Ruthven, and D. N. Theodorou, Diffusion in Nanoporous Materials (Wiley-VCH, New York, 2012). Liao2015 Y. Liao and B. Miao, Structure factor of a Gaussian chain confined between two parallel plates, J. Chem. Phys. 142, 164903 (2015). Ernst M. Ernst, et al., A model for the transient subdiffusive behavior of particles in mucus, Biophys. J. 112, 172-179 (2017). Hitimana E. Hitimana, B. K. Roopnarine, and S. Morozova, Diffusive dynamics of charged nanoparticles in convex lens-induced confinement, Soft Matter 18, 832-840 (2022). Aporvari M. S. Aporvari, et al., Crowding and confinement act in concert to slow DNA diffusion within cell-sized droplets, iScience 25, 105122 (2022). Broersma S. Broersma, Diffusion and viscosity in a spherical cavity, J. Chem. Phys. 30, 707-717 (1959). Pawar Y. Pawar and J. L. Anderson, Hindered diffusion in slit pores: an analytical result, Ind. Eng. Chem. Res. 32, 743-746 (1993). Bevan M. A. Bevan and D. C. Prieve, Hindered diffusion of colloidal particles very near to a wall: revisited, J. Chem. Phys. 113, 1228-1236 (2000). Lin B. Lin, J. Yu, and S. A. 
Rice, Direct measurements of constrained Brownian motion of an isolated sphere between two walls, Phys. Rev. E 62, 3909-3919 (2000). Kazoe Y. Kazoe and M. Yoda, Measurements of the near-wall hindered diffusion of colloidal particles in the presence of an electric field, Appl. Phys. Lett. 99, 124104 (2011). Gitterman M. Gitterman, Mean first passage time for anomalous diffusion, Phys. Rev. E 62, 6065 (2000). Li2003 B.-W. Li and J. Wang, Anomalous heat conduction and anomalous diffusion in one-dimensional systems, Phys. Rev. Lett. 91, 044301 (2003).
http://arxiv.org/abs/2307.10210v1
20230714131907
Unsupervised Domain Adaptation using Lexical Transformations and Label Injection for Twitter Data
[ "Akshat Gupta", "Xiaomo Liu", "Sameena Shah" ]
cs.CL
[ "cs.CL" ]
Unsupervised Domain Adaptation using Lexical Transformations and Label Injection for Twitter Data Akshat Gupta, Xiaomo Liu, Sameena Shah ========================================================================================================================================================================================================================== Domain adaptation is an important and widely studied problem in natural language processing. A large body of literature tries to solve this problem by adapting models trained on the source domain to the target domain. In this paper, we instead solve this problem from a dataset perspective. We modify the source domain dataset with simple lexical transformations to reduce the domain shift between the source dataset distribution and the target dataset distribution. We find that models trained on the transformed source domain dataset perform significantly better than zero-shot models. Using our proposed transformations to convert standard English to tweets, we reach an unsupervised part-of-speech (POS) tagging accuracy of 92.14% (from 81.54% zero-shot accuracy), which is only slightly below the supervised performance of 94.45%. We also use our proposed transformations to synthetically generate tweets and augment the Twitter dataset to achieve state-of-the-art performance for POS tagging. § INTRODUCTION In a typical machine learning setting, training, development and test sets are usually carved out of the same data collection effort. In doing this, we caveat our models with an implicit assumption - the deployment dataset should belong to the same distribution as the training dataset. This is rarely the case and we see significant drops in performance when the model is deployed. The mismatch between the deployment data distribution, or target domain, and the training data distribution, or source domain, is known as domain shift <cit.> and the process of adapting to target domain distributions is known as domain adaptation <cit.>. The most widely studied domain adaptation methods are model-centric methods <cit.>, where parts of the model, including the feature space, the loss function or even the structure of the model are altered <cit.>. Data-centric methods <cit.> usually involve some form of bootstrapping and pseudo-labelling of the target domain data <cit.>. A popular data-centric domain adaptation method is data selection, which is an intermediate training step that aims to select a subset of data that is closest to the target domain <cit.>. We refer the reader to domain adaptation surveys in natural language processing for a detailed overview <cit.>. To the best of our knowledge, none of the works we encounter in the literature address the fundamental reason behind the need for domain adaptation - domain shift. If we are able to transform the source domain dataset such that the domain mismatch between the source domain and the target domain is reduced, while being able to exploit the annotations of the source domain corpus, then the models trained on such transformed source domain data will naturally perform better on the target domain. This is the main motivation behind our work. All model-centric and data-centric domain adaptation methods can be applied on top of our proposed method and are complementary to it. 
In this paper, we transform the source domain dataset to resemble the target domain dataset more closely through a series of transformations. In our case, the source domain consists of standard English sentences and the target domain consists of tweets. Through these transformations, we are able to improve the zero-shot POS tagging accuracy by 10.39% when averaged over five different BERT models. Also, when we combine the transformed data to augment the original target dataset, we achieve state-of-the-art POS tagging performance on the target dataset. § LEXICAL TRANSFORMATIONS AND LABEL INJECTIONS Standard English sentences and Tweets have both semantic and lexical differences. Tweets are more likely to be subjective and polarized (appendix <ref>). On the other hand, tweets also contain unique lexical features like acronyms, emojis, user mentions, retweets, hashtags, as shown in Figure <ref>, and can be used as different parts of speech (Table <ref>, appendix <ref>). In this paper, we focus on converting standard English sentences into tweets by making lexical transformations and injecting labels wherever required. Example transformations are shown in Figure <ref>. Lexcial transformations add target domain-specific lexical features to the source domain dataset such that these properties are `distributionally' conserved. For example, when our target domain is Twitter, we expect Tweets to contain emojis. We can measure the distributional presence of emojis in tweets, like the percentage of tweets that on average contain emojis or how they are distributed within the sentence, i.e. if they are more likely to occur in the beginning, middle, or end of a sentence. In lexical transformations, we add these distributional properties to the source domain sentences. Since we are adding these features to an annotated dataset, we also inject the label of the lexical feature wherever required. The process is discussed in detail in section <ref>. The resulting sentences are almost indistinguishable from Tweets, as can be seen in Figure <ref>. It is not trivial to inject these lexical features into the standard English sentences as the same feature can correspond to multiple parts of speech, as shown in Table <ref>. § DATASETS In this paper, we work with two annotated POS tagging datasets. For standard English, we use the GUM (Georgetown University Multilayer Corpus) dataset <cit.>. For Twitter data, we use Tweebank (TBv2) <cit.> dataset. We choose these two datasets because they are both labelled using the universal dependencies <cit.> framework, thus each of the datasets have identical 17 POS tags. The dataset statistics are shown in Table <ref>. The GUM dataset acts as our source domain dataset and is about 5 times larger than TBv2, which is our target domain dataset. GUM dataset is made up of articles and interviews from Wikinews, instructional articles from wikiHow and travel guides from Wikivoyage <cit.>. The GUM dataset contains longer sentences compared to the Tweebank dataset. The Tweebank dataset gets higher average polarity and subjectivity scores when compared to the GUM dataset. The experiments analysing dataset properties are shared in appendix <ref>. § EXPERIMENTS In this section, we present four different types of Lexical Transformations and corresponding label injection methods for Twitter as target domain. All transformations are performed on the GUM train-split (the standard English dataset). Models trained on the transformed dataset are tested on the TBv2 test set (the Twitter dataset). 
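Both corpora are distributed in CoNLL-U format with the same 17-tag UPOS inventory, so a small loader is enough to obtain (tokens, tags) pairs for the transformations described below. A minimal sketch follows; the file paths are hypothetical placeholders for local copies of the two treebanks.

```python
from pathlib import Path

def read_conllu(path):
    """Read a CoNLL-U file into a list of (tokens, upos_tags) sentence pairs.
    In CoNLL-U, column 2 is FORM and column 4 is UPOS."""
    sentences, tokens, tags = [], [], []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if not line.strip():                  # blank line ends a sentence
            if tokens:
                sentences.append((tokens, tags))
                tokens, tags = [], []
            continue
        if line.startswith("#"):              # sentence-level comments
            continue
        cols = line.split("\t")
        if "-" in cols[0] or "." in cols[0]:  # skip multi-word / empty nodes
            continue
        tokens.append(cols[1])
        tags.append(cols[3])
    if tokens:
        sentences.append((tokens, tags))
    return sentences

# Hypothetical local paths to the GUM and Tweebank v2 releases
gum_train = read_conllu("UD_English-GUM/en_gum-ud-train.conllu")
tb_test = read_conllu("Tweebank-v2/en_tweebank2-ud-test.conllu")
print(len(gum_train), "GUM sentences;", len(tb_test), "Tweebank test sentences")
```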
All experiments shown in this paper report accuracy scores on TBv2 test set, in accordance with previous works <cit.>. Each experiment is repeated five times and the mean score is reported with standard deviations reported in brackets. All experiments in this paper are done using the Huggingface implementations of different BERT models. We use five different BERT models, the original BERT-base-uncased and BERT-large-uncased <cit.> models, the RoBERTa-base and RoBERTa-large models <cit.> and the BERTweet model <cit.>. §.§ Zero-Shot Experiments We begin by training the model on the original GUM train-split and testing it on the TBv2 dataset. This experiment sets our baseline for unsupervised domain adaptation as it represents zero-shot application of a model trained on standard English, and then applied to tweets. The results are shown as the Zero Shot results of Table <ref>. §.§ Emoji Injections Social media text is filled with emojis and emoticons. In this paper, we refer to both as Emojis. To convert standard English sentences to Tweets, we inject emojis into standard English sentences. Emojis belong to the `SYM:symbol' class in the universal dependencies framework, which is inserted as the label for the injected emoji in the source domain dataset. To place an emoji within a standard English sentence, we first randomly select an emoji from a pre-decided list of emojis. Then we place the emoji inside a sentence according to a Gaussian distribution which is fit to the location of occurrence of emojis in a tweet. We randomly add emojis to 25% of the sentences in the GUM dataset. The different experiments done to reach the above methodology for emoji injection are described in appendix <ref>. The results for emoji injection are shown in Table <ref>. §.§ Inverse Lexical Normalization Lexical normalization is a common task where non-standard English tokens are corrected to standard English <cit.>. This includes expanding acronyms like wru -> where are you and correcting spelling errors. In this paper, we convert standard English to its lexically un-normalized version. We call this process Inverse Lexical Normalization (ILN). To do so, we use a lexical normalization dataset <cit.> as a dictionary lookup and create a mapping between lexically correct words and their un-normalized version. For example, you is written in various different ways including u, uuuu, youuuu. We randomly replace the correct tokens with their un-normalized versions 75% of the times. The ablation experiments for this lexical transformation are shown in <ref>. The POS tag of the original word is retained in the transformation. BERT-base observes maximum improvement with ILN (Table <ref>). §.§ Converting PROPN to User-Mentions and Hashtags Another distinguishing lexical features of Tweets is the use of user-mentions and hashtags. In this transformation, we randomly pick existing proper nouns in the GUM dataset and convert them into user-mentions or hashtags by adding an '@' or '#' symbol in front of the token, with a probability of 50% and 20% respectively. The existing proper noun labels are kept for the converted tokens. The ablations for this transformation can be found in appendix <ref>. We see consistent improvements with this transformation for all models except RoBERTa models (Table <ref>). 
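A minimal sketch of the emoji-injection step might look as follows. The emoji list and the Gaussian parameters loc_mean and loc_std are placeholders standing in for the fit to Tweebank emoji positions, and the function assumes the (tokens, tags) sentence format of the loader sketched earlier.

```python
import random

EMOJIS = ["😂", "😍", "🔥", "👍", ":)", ":-("]      # illustrative pre-decided list

def inject_emoji(tokens, tags, loc_mean=0.85, loc_std=0.2, p=0.25, rng=random):
    """Insert one emoji (UPOS label SYM) into a sentence with probability p.
    The relative position is drawn from a Gaussian; loc_mean and loc_std are
    placeholder values for the distribution fitted on Tweebank tweets."""
    if rng.random() > p:
        return tokens, tags
    rel = min(1.0, max(0.0, rng.gauss(loc_mean, loc_std)))
    idx = int(round(rel * len(tokens)))
    emoji = rng.choice(EMOJIS)
    return tokens[:idx] + [emoji] + tokens[idx:], tags[:idx] + ["SYM"] + tags[idx:]

print(inject_emoji(["I", "love", "this", "movie"],
                   ["PRON", "VERB", "DET", "NOUN"], p=1.0))
```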
§.§ Injecting ReTweets, URLS, user-mentions and hashtags as X The `X' part of speech tag or the other category in the universal dependency framework <cit.> is defined as - "The tag X is used for words that for some reason cannot be assigned a real POS category. It should be used very restrictively". While the `X' POS tag is used sparingly in standard English, a large number of tokens in tweets fall into this category. In this transformation, we insert re-tweets (at the beginning of sentences), urls (usually at the back of the sentences) and hashtags (randomly sampled from a Gaussian calculated from tweets). Re-tweets are added in 30% of the sentences, URL's are added in 60% of the sentences and hashtags are added in 10% of the sentences. The ablations can be found in appendix <ref>. The label `X' is added with these lexical transformations. We see massive improvements across the board by adding this lexical transformation. This is because the `X' POS tag, which is probably the most under-utilized tag when dealing with standard English, becomes vital when dealing with tweets. All Re-tweets, URL's and many hashtags and user mentions fall under this category. § RESULTS We now combine all transformations together, as shown in Table <ref>. The first section in Table <ref> represents our unsupervised domain adaptation results. The first row in Table <ref> shows models trained on the original GUM dataset (standard English) and tested on TBv2 test set, representing zero-shot domain transfer results. The GUM-T dataset represents the transformed dataset containing all the previously described transformations. Models trained on the GUM-T dataset represent our unsupervised domain adaptation performance, which improves on the zero-shot POS tagging accuracy by 10.39%, without ever seeing a single tweet (when averaged over all five models). The class-wise F1 improvements for different POS tags are shown in Table <ref>. BERT-base witnesses the maximum gain from our transformations (12.08%) and performs better than RoBERTa-large and BERTweet. The second section in Table <ref> contains supervised experiments where the training dataset contains tweets. We check the efficacy of our proposed transformations as a synthetic data generation process. We first augment the TBv2 dataset with the original GUM dataset and compare it with the improvements we get when TBv2 is combined with GUM-T. We see that the combination of TBv2 and GUM-T datasets outperforms all supervised models and gives 1.6 to 8 times larger performance boost over augmenting with the original GUM dataset. The TBv2 + GUM-T combination reaches (a saturated) state-of-the-art maxima for POS tagging on the TBv2 dataset, as shown in Table <ref>. § CONCLUSION A lot of focus in literature has been given to converting noisy social media text to standard English. In our work, we convert standard English into noisy social media-like text using simple lexical transformations and show that it can be used as an effective unsupervised domain adaptation and data augmentation method. The fundamental idea behind our work is to reduce domain shift by transforming the source domain into the target domain. We present experiments for these transformations between standard English and Twitter domain and find an average accuracy boost for POS tagging of 10.39% across 5 different BERT models, without ever using a single tweet for supervised training. 
§ ACKNOWLEDGEMENTS This paper was prepared for informational purposes in part by the Artificial Intelligence Research Group of JPMorgan Chase & Co and its affiliates (“J.P. Morgan”) and is not a product of the Research Department of J.P. Morgan. J.P. Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy, or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer, or solicitation for the purchase or sale of any security, financial instrument, financial product, or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person if such solicitation under such jurisdiction or to such person would be unlawful. © 2022 JPMorgan Chase & Co. All rights reserved. § LIMITATIONS In this paper, we focus on lexical transformations between source domain and target domain to reduce the domain shift between them. To do this, we identify unique lexical features in the target domain and place them in the source domain so that the transformed domain is distributionally similar to the target domain. But there are also semantic differences between the two domains in terms of content, domain-specific jargon, and other nuances. This work does not take into account those transformations. Also, we use Twitter as the target domain for our work. While the general principles of our work are applicable to any source-target domain pairs, the transformations discussed in this work cater broadly to social media text, and specifically to Twitter data. The generalizability to other target domains has not been tested in this paper and remains a topic of further investigation. In this paper, we work with a POS tagging dataset. POS tagging is a token level task where we classify each token as belonging to a certain category. We feel that because POS tagging is dependent on each token in the sentence, domain transfer affects this task most adversely. Sequence classification tasks like sentiment analysis that only require a high level representation of the entire sentence to make classification decisions might witness different levels of improvement. The current method needs to be tested for other task types, including sequence classification tasks like sentiment analysis, or generative tasks like question answering and text summarization. This was beyond the scope of a short paper. acl_natbib § APPENDIX §.§ Dataset In this paper, we work with two part-of-speech (POS) tagging datasets. The GUM dataset <cit.>, which is made up of standard English sentences from different wiki-sources like wikiNews, wikiHow etc., and the Tweebankv2 (TBv2) dataset <cit.>, which consists of tweets. The GUM dataset acts as our source domain dataset, while TBv2 acts as our target domain dataset. The number of sentences and the number of tokens in each dataset are given in Table <ref>. Figure <ref> shows the sentence length distribution between the GUM and the TBv2 dataset. We see that the GUM dataset contains longer sentences when compared to the TBv2 dataset. The mean tokens per sentence for GUM is 18.06 (std = 13.3) whereas the mean tokens per sentence for the TBv2 dataset is 15.10 (std = 7.74). This shows us that TBv2 not only has shorter sentences, but their spread is also shorter. 
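Using the loader sketched earlier, the token-count statistics quoted here can be reproduced along the following lines; gum_train and tb_test are assumed to hold the parsed corpora from that sketch.

```python
import numpy as np

def length_stats(sentences):
    """Mean and standard deviation of tokens per sentence."""
    lengths = np.array([len(toks) for toks, _ in sentences])
    return lengths.mean(), lengths.std()

print("GUM tokens/sentence:      mean %.2f, std %.2f" % length_stats(gum_train))
print("Tweebank tokens/sentence: mean %.2f, std %.2f" % length_stats(tb_test))
```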
We measure average subjectivity and polarity scores for the two datasets to indicate semantic differences. We find higher average subjectivity and polarity scores for the TBv2 dataset compared to the GUM dataset. To measure these, we use the spaCY textblob [ <https://spacy.io/universe/project/spacy-textblob> ] library to calculate subjectivity and polarity scores. Polarity is scored between -1 and 1 indicating the sentiment expressed in the sentence. We take the absolute value of the polarity scores since we consider both positive and negative sentiment since we are interested in the presence and absence of polarity in tweets. The mean polarity score for the TBv2 dataset was 0.23 compared to 0.13 for the GUM dataset. Subjectivity is scored between 0 and 1, with 0.0 being very objective and 1.0 being very subjective. TBv2 had a mean subjectivity score of 0.36 compared to 0.27 for the GUM dataset. §.§ Lexical Features Some of the lexical features specific to tweets that we are concerned with in this paper are - emojis, re-tweets, user-mentions, hashtags, URL's and un-normalized tokens. It is not trivial to inject these into the standard English sentences as same lexical feature can correspond to multiple parts of speech. This can also be seen in Figure <ref>, where user-mentions are used both for the category 'X' as well as proper nouns. A more detailed description of the different lexical features and the corresponding parts of speech the features can take can be seen in Table <ref>. Lexical features like user-mentions can take two parts of speech, where hashtags and un-normalized words can essentially be any part of speech. §.§ Emoji Injections Ablation Emoji Injection is a lexical transformation where we insert emojis in standard English sentences such that the distributional properties of the transformed text resemble a Twitter dataset. Lexical emoji injection is done in two steps: * Emoji Selection - Sample an emoji from a pre-selected list of emojis * Emoji Placement - Select a location in the standard English sentence to place the selected emoji Both these steps can be done randomly or based on a particular distribution. The selection step can be done by selecting an emoji based on the distribution of its occurrence in Twitter feeds. Although in this paper, in the emoji selection step, we select an emoji randomly from a pre-decided list of emojis. Similarly, the emoji placement step can be done in two ways. The selected emoji can be placed randomly anywhere in the sentence. This is called RANDOM-PLACEMENT. The alternative is to place the emojis in a sentence based on a certain distribution and sample the location of placement from that distribution. This method of placement is called LOCATION-SAMPLING. The distribution is found by studying the locations at which different emojis occur in a Twitter feed and fitting the location of their occurrence to a Gaussian distribution. We use the TBv2 train-split to calculate the distribution parameters. We experiment with these two methods for emoji injection for the BERT-base model by injecting tweets in 25% sentences in the GUM dataset. The models are trained on the transformed dataset and tested on the TBv2 test set. The results are shown in Table <ref>. We find that LOCATION-SAMPLING is significantly superior to the RANDOM-PLACEMENT method of emoji-injection. We also experimented with different thresholds for emoji injection. We found that injecting emojis into a larger number of sentences hurts the model performance as shown in Table <ref>. 
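A sketch of the sentiment measurement is given below. TextBlob is called directly here, whereas the paper uses the spaCyTextBlob wrapper around the same scorer, so the numbers should be comparable but are not guaranteed to match exactly.

```python
import numpy as np
from textblob import TextBlob

def corpus_sentiment(sentences):
    """Mean absolute polarity and mean subjectivity over (tokens, tags) pairs."""
    pol, subj = [], []
    for tokens, _ in sentences:
        blob = TextBlob(" ".join(tokens))
        pol.append(abs(blob.sentiment.polarity))       # keep magnitude only
        subj.append(blob.sentiment.subjectivity)
    return np.mean(pol), np.mean(subj)

print("GUM      |polarity|, subjectivity:", corpus_sentiment(gum_train))
print("Tweebank |polarity|, subjectivity:", corpus_sentiment(tb_test))
```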
Thus, we do emoji injection with a 25% probability. §.§ Inverse Lexcical Normalization Ablation Inverse Lexical Normalization (ILN) aims to convert standard English text into its un-normalized versions. This includes converting correct spellings to their noisy versions as used in social media and converting certain texts to corresponding acronyms. Some examples of such a conversion would be converting you -> u, that - dat, how are you -> hru. We do this by using the dataset released by <cit.> for lexical normalization. We use the training set as a dictionary and find mappings between the lexically-correct tokens and their noisy usage in social media. When a word in this dictionary is found in the standard English sentence, it is converted into its un-normalized version with a probability of 75%. The ablation experiments with BERT-base are shown in Table <ref>. §.§ Injecting User Mentions and Hashtags as PROPN - Ablation User mentions and hashtags are often used as proper nouns (PROPN) as shown in the two examples below : * #FOLLOW us #CHECKOUT the multi - talented Spanglish Pop Singer Model @USER779 aka Lady Boom Boom URL107 * Today I went to watch #Metallica #themostamazingconcertever In the first tweet, @USER779 mention is used as a proper noun. In the second example #Metallica is used as a proper noun followed by another hashtag which refers to a totally different part-of-speech. In this transformation, we convert pre-existing proper nouns in standard English sentences into user mentions or hashtags. In a brief analysis of Twitter feed, we found that user mentions were more common than hashtags. Thus we start by randomly changing proper nouns into user mentions with a probability of 25% and into hashtags with a probability of 10%. The ablation experiments with BERT-base model are shown in Table <ref>. §.§ Injecting Re-Tweets, URLS, User Mentions and Hashtags as X - Ablation Re-tweets involving user mentions are separate from when user mentions are used as proper nouns and are classified in the 'X:other' POS category. URLs and some hashtags also fall into this category. Examples of tweets containing these lexical features can be seen in Figure <ref>. Injecting these features is simpler than the other lexical features and yet results in the largest improvements. Re-tweets are almost always present at the beginning of a tweet. URLs are almost always present at the end of the tweet. We make a pre-selected list of certain hashtags that fall into the 'X:other' POS tag category and place them randomly in a sentence. We experiment with the relative probability of such injections in Table <ref>. §.§ Combining All Lexical Data Transformations When we combine all lexical data transformations, we achieve significant boost in performance on the Twitter dataset. When a model trained on the GUM dataset (standard English, source domain) is tested on the Tweebankv2 test set (Twitter dataset, target domain), we see that the model has about 81.52% accuracy using BERT-large for POS tagging (Table <ref>, first row, Unsupervised). When we use all lexical transformations to transform standard English dataset to Twitter like sentences, called GUM-T, we achieve 92.14% accuracy, and see a significant boost of 10.62% over the zero-shot performance. This shows us that our simple lexical data transformations give the model a massive boost without training on actual tweets annotated for POS tagging. 
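Putting the remaining transformations together, a GUM-to-GUM-T conversion pass might be sketched as below. The inverse-normalization dictionary and hashtag list are toy placeholders, the probabilities follow the values quoted in the main text (the appendix ablations start from different PROPN probabilities), and the sampling order is a guess rather than the authors' implementation; emoji injection from the earlier sketch would be applied on top.

```python
import random

rng = random.Random(0)

# toy inverse-normalization dictionary; the paper builds one from a
# lexical-normalization dataset used as a dictionary lookup
ILN_MAP = {"you": ["u", "youuuu"], "that": ["dat"], "are": ["r"]}
HASHTAGS_X = ["#tbt", "#fml", "#random"]          # illustrative hashtags tagged as X

def to_tweet(tokens, tags):
    """Apply the lexical transformations to one (tokens, UPOS) sentence."""
    out_tok, out_tag = [], []
    if rng.random() < 0.30:                        # retweet marker, labelled X
        out_tok += ["RT", "@USER1", ":"]; out_tag += ["X", "X", "PUNCT"]
    for tok, tag in zip(tokens, tags):
        low = tok.lower()
        if low in ILN_MAP and rng.random() < 0.75:        # inverse lexical normalization
            tok = rng.choice(ILN_MAP[low])
        elif tag == "PROPN" and rng.random() < 0.50:      # proper noun -> user mention
            tok = "@" + tok
        elif tag == "PROPN" and rng.random() < 0.20:      # proper noun -> hashtag
            tok = "#" + tok
        out_tok.append(tok); out_tag.append(tag)
    if rng.random() < 0.10:                        # random 'X' hashtag
        pos = rng.randrange(len(out_tok) + 1)
        out_tok.insert(pos, rng.choice(HASHTAGS_X)); out_tag.insert(pos, "X")
    if rng.random() < 0.60:                        # URL at the end, labelled X
        out_tok.append("URL123"); out_tag.append("X")
    return out_tok, out_tag

print(to_tweet(["How", "are", "you", ",", "Metallica", "?"],
               ["ADV", "AUX", "PRON", "PUNCT", "PROPN", "PUNCT"]))
```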
Our lexical data transformations can be used both for unsupervised domain adaptation and data augmentation, as shown in Table <ref>. §.§.§ The `X:other' POS class for Twitter The class-wise F1 score improvements in BERT-large for unsupervised domain adaptation are shown in Table <ref>. We see significant improvements for all POS classes. The improvement is massive for the `X' POS class because this class works very differently in standard English and tweets. Tweets contain a lot of hashtags, URLs, and re-tweets, which is completely different from standard English. Thus, the `X' POS class is the biggest lexical differentiator between standard English and how people communicate on Twitter. This is also why the performance of a POS tagger trained on standard English dataset performed abysmally, with and F1 score of 0.01. §.§ Lexicalally Transformed Sentences Some examples of the lexicalally transformed sentences from standard English to tweets are shown in Figure <ref>. The examples show different features including emojis, user mentions, re-tweets, URLs and lexically incorrect tokens. §.§ Average Runtimes, Hyperparameters and Hardware All experiments were performed on a single Tesla T4 GPU with 16GB GPU memory in a system with 16GB RAM. The run-time for base models per epoch was approximately 2 minutes for the Tweebank train-split and 6 minutes for the GUM train-split. For large models, the time taken per epoch was approximately 6 minutes for Tweebank train-split and 18 minutes for GUM train-split. The best performance and best dev-accuracy were chosen. We kept a batch size of 32, a learning rate of 1e-5 and maximum sequence length of 256. All models are trained for 25 epochs. We run each configuration 5 times and report the mean scores and standard deviation.
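With the hyperparameters listed above, a token-classification fine-tuning run can be sketched with the Hugging Face Trainer roughly as follows; gum_t_train is assumed to hold transformed (tokens, UPOS) sentences produced by the earlier sketches, and evaluation, seeding and model selection are omitted.

```python
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification, Trainer, TrainingArguments)

UPOS = ["ADJ", "ADP", "ADV", "AUX", "CCONJ", "DET", "INTJ", "NOUN", "NUM",
        "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "SYM", "VERB", "X"]
LABEL2ID = {t: i for i, t in enumerate(UPOS)}

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(UPOS))

def encode(sentences):
    """Word-piece tokenization with label alignment (-100 masks sub-word pieces)."""
    examples = []
    for tokens, tags in sentences:
        enc = tokenizer(tokens, is_split_into_words=True, truncation=True, max_length=256)
        labels, prev = [], None
        for wid in enc.word_ids():
            labels.append(-100 if wid is None or wid == prev else LABEL2ID[tags[wid]])
            prev = wid
        enc["labels"] = labels
        examples.append(enc)
    return examples

train_data = encode(gum_t_train)
args = TrainingArguments(output_dir="pos-gum-t", learning_rate=1e-5,
                         per_device_train_batch_size=32, num_train_epochs=25)
trainer = Trainer(model=model, args=args, train_dataset=train_data,
                  data_collator=DataCollatorForTokenClassification(tokenizer))
trainer.train()
```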
http://arxiv.org/abs/2307.03954v1
20230708112025
Magnon influence on the superconducting DOS in FI/S bilayers
[ "A. S. Ianovskaia", "A. M. Bobkov", "I. V. Bobkova" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mes-hall" ]
National Research University Higher School of Economics, Moscow, 101000 Russia Moscow Institute of Physics and Technology, Dolgoprudny, 141700 Russia Moscow Institute of Physics and Technology, Dolgoprudny, 141700 Russia Moscow Institute of Physics and Technology, Dolgoprudny, 141700 Russia National Research University Higher School of Economics, Moscow, 101000 Russia Superconductor/ferromagnetic insulator (FI/S) heterostructures are paradigmatic systems for studying the mutual influence of superconductivity and magnetism via proximity effects. In particular, spin-split superconductivity is realized in such structures. Recent experiments and theories demonstrate a rich variety of transport phenomena occurring in devices based on such heterostructures that suggest direct applications in thermoelectricity, low-dissipative spintronics, radiation detection and sensing. In this work we investigate the influence of the electron-magnon interaction at the superconductor/ferromagnetic insulator interface on the spin-split superconductivity. It is predicted that due to the magnon-mediated electron spin-flip processes the spin-split quasiparticle branches are partially mixed and reconstructed, and the BCS-like spin-split shape of the superconducting DOS, which is typical for superconductors in an effective exchange field, is strongly modified. An odd-frequency superconducting order parameter admixture to the leading singlet order parameter is also found. These findings expand the physical picture of spin-split superconductivity beyond the mean-field description of the ferromagnet exchange field. Magnon influence on the superconducting DOS in FI/S bilayers A. S. Ianovskaia, A. M. Bobkov, I. V. Bobkova August 12, 2023 ============================================================ § INTRODUCTION Long ago it was demonstrated that the exchange field of ferromagnetic insulators (FIs), such as EuS and EuO, can spin-split the excitation spectrum of an adjacent thin-film superconductor <cit.>. The spin splitting in the DOS observed in those experiments resembles the spin splitting created by a strong in-plane field applied to a thin superconducting film. This discovery opened up the way for performing spin-polarized tunneling measurements without the need to apply large magnetic fields. A renewed interest in studying ferromagnet/superconductor (F/S) structures came with the active development of superconducting spintronics <cit.>, caloritronics and spin caloritronics <cit.>. In particular, in F/S structures with a spin-split density of states (DOS) a series of promising phenomena have been studied. Among them are giant thermoelectric <cit.> and thermospin effects <cit.>, highly efficient thermally-induced domain wall motion <cit.>, spin and heat valves <cit.>, cooling at the nanoscale <cit.>, and low-temperature thermometry and the development of sensitive electron thermometers <cit.>. The spin-split DOS in F/S structures has also been explored in the presence of magnetic inhomogeneities, such as textured ferromagnets and domain walls <cit.>. Characteristic signatures of equal-spin triplet pairing were reported <cit.>. It was shown that the characteristic spatial and energy dependence of the spin-dependent DOS allows one to tomographically extract the structure of the spin-triplet Cooper pairs <cit.>. Furthermore, the influence of the domain structure on the position-averaged superconducting DOS in FI/S bilayers was studied <cit.>. 
Another important direction in the field of F/S hybrid structures is investigation of interplay between the superconducting state and ferromagnetic excitations - magnons. A series of interesting results, presumably related to the influence of the superconductor on the magnon spectrum have been reported. In particular, it was found that the adjacent superconductor works as a spin sink strongly influencing Gilbert damping of the magnon modes <cit.> and can result in shifting of k = 0 magnon frequencies (Kittel mode) <cit.>. The electromagnetic interaction between magnons in ferromagnets and superconductors also results in appearance of magnon-fluxon excitations <cit.> and efficient gating of magnons <cit.>. Further it was reported that the magnetic proximity effect in thin film F/S hybrids results in appearing of magnon-cooparons, which are composed of a magnon in F and an accompanying cloud of spinful triplet pairs in S <cit.>. Some aspects of back influence of magnons on superconducting state have already been investigated. For example, a possible realization of the magnon-mediated superconductivity in F/S hybrids has been proposed <cit.>. At the same time, the influence of magnons via the magnetic proximity effect on the superconducting DOS practically has not yet been studied, although the electron-magnon interaction and influence of this interaction on the DOS in ferromagnetic metals have been investigated long ago <cit.>. Here we consider how the effects of electron-magnon interactions in FI/S thin-film hybrids manifest themselves in the superconducting DOS and quasiparticle spectra of the superconductor. It is found that the magnon-mediated electron spin-flip processes cause the interaction and mixing of the spin-split bands resulting in their reconstruction, which is especially important near the edge of the superconducting gap. We demonstrate that the classical BCS-like Zeeman-split shape of the superconducting DOS can be strongly modified due to the electron-magnon interaction and this modification is temperature-dependent. The influence of magnons on the temperature dependence of the Zeeman splitting of the DOS and relevance of our findings to existing and future experiments are also discussed. The paper is organized as follows. In Sec. <ref> we describe the system under consideration and the Green's functions formalism taking into account magnon self-energies. In Sec. <ref> the modifications of the quasiparticle spectra in the superconductor due to the electron-magnon coupling are discussed. In Sec. <ref> we study signatures of the electron-magnon interaction in the Zeeman-split superconducting DOS and their temperature dependence. Our conclusions are summarized in Sec. <ref>. § SYSTEM AND FORMALISM We consider a thin-film bilayer as depicted in Fig. <ref>, in which a ferromagnetic insulator FI is interfaced with a conventional spin-singlet s-wave superconductor S. The thickness of the S layer d_S is assumed to be small as compared to the superconducting coherence length ξ_S. In this case the S layer can be considered as homogeneous along the normal to the interface plane. The FI layer in its ground state is magnetized in-plane, along the z-direction. The Hamiltonian of the system takes the form: Ĥ=Ĥ_S+Ĥ_FI+Ĥ_ex, where Ĥ_S is the standard mean-field BCS Hamiltonian describing electrons in the superconducting film: Ĥ_S = ∑_ k σξ_ k c_ k σ^† c_ k σ - ∑_ kΔ c_ k↑^† c_- k↓^† - ∑_ kΔ^* c_- k↓ c_ k↑ . 
ξ_ k = k^2/2m - μ is the normal state kinetic energy of the electrons in the S layer, counted from the chemical potential of the superconductor μ. Δ is the superconducting order parameter in S, which assumed to be of conventional isotropic s-wave type. c_ k σ^+ and c_ k σ are creation and annihilation operators of electrons with the wave vector k and spin σ. Ĥ_FI describes magnons in the FI. Assuming easy-axis magnetic anisotropy in the FI it can be written as Ĥ_FI = ∑_ q (ω_0 + D q^2) b_ q^† b_ q, where b_ q^+ and b_ q are creation and annihilation operators of magnons in FI with wave vector q, ω_0 = |γ| (μ_0 H_0 + 2 K_a/M_s) is the magnonic frequency at q=0, D is the magnon stiffness constant, γ is the typically negative gyromagnetic ratio, M_s is the saturation magnetization, μ_0 is the permeability of free space, K_a is the easy-axis anisotropy constant and H_0 is the external field (can be equal to zero in our consideration). Electronic and magnonic wave vectors k and q are assumed to be two-dimensional (2D), that is the electrons and magnons can only propagate in plane of the FI/S interface. The wave functions along the y-direction, perpendicular to the interface, are assumed to be quantized. For simplicity, in the formulas we leave only one transverse magnon mode. In fact, we have checked that different modes give quantitatively different, but qualitatively the same contributions to considered self-energies. Their effect can be accounted for by multiplying our results for the self-energy corrections by an effective number of working transverse modes (see below). Ĥ_ex accounts for the exchange interaction between S and FI: Ĥ_ex = -J∫ d^2 ρ S_FI(ρ) s_e(ρ) , where ρ is a two-dimensional radius-vector at the interface plane, S_FI and s_e are the spin density operators in the FI and S, respectively. J is the interface exchange constant. By performing the Holstein-Primakoff transformation to the second order in the magnonic operators in Eq. (<ref>) one obtains Ĥ_ex = Ĥ_1 + Ĥ_2 + Ĥ_3, with Ĥ_1 = ∑_ k, k' U_ k, k'(c_ k, ↑^† c_ k', ↑-c_ k,↓^† c_ k',↓) , U_ k, k' = JM_s/2|γ|∫ d^2 ρΨ_ k^*(ρ) Ψ_ k'(ρ), Ĥ_2 = ∑_ k, k', q, q' T_ k, k', q, q' b_ q^† b_ q' (c_ k, ↑^† c_ k', ↑-c_ k,↓^† c_ k',↓), T_ k, k', q, q' = - J/2∫ d^2 ρΨ_ k^*(ρ) Ψ_ k'(ρ) ϕ_ q^*(ρ) ϕ_ q'(ρ), Ĥ_3 = ∑_ k, k', q V_ k, k', q (b_ q c_ k, ↑^† c_ k', ↓ + b_ q^† c_ k', ↓^† c_ k, ↑), V_ k, k', q = J √(M_s/2|γ|)∫ d^2 ρΨ_ k^*(ρ) Ψ_ k'(ρ) ϕ_ q(ρ) , where Ĥ_1 describes a spin-splitting of the electronic energy spectrum in S in the mean-field approximation. The second term Ĥ_2 represents the Ising-term, which physically accounts for the renormalization of the spin-splitting by magnonic contribution. Since the processes of the spin transfer between electrons and magnons are of primary importance for our consideration, when calculating the electronic Green's function we simplify this term by substituting the magnon operator b_ q^† b_ q by its averaged value ⟨ b_ q^† b_ q⟩ = n_ qδ_ q q', where n_ q is the density of magnons with wave vector q. The third term Ĥ_3 transfers spin between electron and magnon operators and will turn out to be the most significant for effects under consideration. 
If we choose the wave functions of electrons Ψ_ k(ρ) and magnons ϕ_ q(ρ) at the interface in the form of plane waves propagating along the interface, that is Ψ_ k(ρ)=(1/√(d_S))e^i k ρ and ϕ_ q(ρ)=(1/√(d_FI))e^i q ρ, then Ĥ_ex can be simplified: Ĥ_ex = Ũ∑_k (c_k, ↑^† c_k, ↑-c_k,↓^† c_k,↓) + V ∑_k, q (b_q c_k, ↑^† c_k-q, ↓ + b_q^† c_k-q, ↓^† c_k, ↑) , where Ũ = -J (M_s-N_m |γ|)/(2|γ|d_S ) is the averaged spin-splitting field in the superconductor renormalized by the magnon density N_m, and V = J√(M_s/2|γ|d_FI A)(1/d_S) is the electron-magnon coupling constant, where A is the area of the FI/S interface. Introducing the following Nambu-spinor Ψ̌_ k = (c_ k ↑, c_ k ↓, -c_- k ↓^†, c_- k ↑^†)^T, we define the Gor'kov Green's function in the Matsubara representation Ǧ_ k(τ) = -⟨ T_τΨ̌_ kΨ̌_ k^†⟩, where ⟨ T_τ ... ⟩ means imaginary time-ordered thermal averaging. Turning to the Matsubara frequency representation the Green's function obeys the following equation: (iω - ξ_k τ_z - Ũσ_z - Δτ_x - Σ̌_m )Ǧ_ k (ω) = 1, where ω is the fermionic Matsubara frequency, σ_i and τ_i (i=x,y,z) are Pauli matrices in spin and particle-hole spaces, respectively. Σ̌_m is the magnonic self-energy, which describes corrections to the electronic Green's function due to the electron-magnon interaction and in the framework of the self-consistent Born approximation takes the form: Σ̌_m = - V^2 T ∑_ q,Ω B_ q(Ω) {σ_+ Ǧ_ k- q (ω - Ω)σ_- + . . σ_- Ǧ_ k+ q (ω + Ω)σ_+} , where σ_± = (σ_x ± i σ_y), Ω is the bosonic Matsubara frequency and B_ q(Ω) = [iΩ - (ω_0+Dq^2)]^-1 is the magnonic Green's function. From the spin structure of Eq. (<ref>) it is seen that Σ̌_m is diagonal in spin space. For this reason the electronic Green's function, which is given by the solution of Eq. (<ref>) is also diagonal matrix in spin space and Eq. (<ref>) can be written for the both spin subbands separately: (iω - ξ_k τ_z - σŨ - Δτ_x - Σ̂_m, σ )Ĝ_ k, σ (ω) = 1, where Ĝ_ k, σ is 2 × 2 matrix in the particle-hole space corresponding to the electron spin σ = ↑, ↓. Σ̂_m,σ is also 2 × 2 matrix in the particle-hole space representing the magnonic self-energy for the given spin subband σ: Σ̂_m,σ = - V^2 T ∑_ q,Ω B_ q(Ω) Ĝ_ k-σ q, σ̅ (ω - σΩ). As a factor in the expressions σ means ± 1 for the spin-up (spin-down) subbands, and σ̅ means the opposite spin subband. The dimensionless coupling constant quantifying the strength of the electron-magnon coupling is K=V^2 A / 4 πħ v_F √(D Δ). Our numerical estimates made for the parameters corresponding to EuS/Al or YIG/Nb structures suggest that K should be rather small, K ≪ 1, for the detailed discussion of the numerical estimates see Sec. <ref>. The smallness of the electron-magnon coupling constant allows us to use non self-consistent Born approximation when calculating magnon self-energy. That is, we substitute Ĝ_ k - σ q, σ̅ by the bare superconducting Green's function obtained without taking into account the magnon self-energy Ĝ_ k - σ q, σ̅^(0) in Eq. (<ref>). Then the explicit solution of Eq. (<ref>) takes the form: Ĝ_ k,σ (ω) = i ω_ k, σ +ξ_ k, στ_z + Δ_ k, στ_x/(i ω_ k, σ)^2 - (ξ_ k, σ)^2 - (Δ_ k, σ)^2 . 
where all the quantities marked by are renormalized by the electron-magnon interaction as follows: Δ_ k, σ (ω) = Δ + δΔ_ k,σ(ω) = Δ - - V^2 T ∑_ q, Ω B_ q(Ω) Δ/(i ω - iσΩ +Uσ)^2 - ξ^2_ k-σ q - |Δ|^2 , ξ_ k, σ (ω) = ξ_ k + δξ_ k,σ(ω)= ξ_ k - - V^2 T ∑_ q, Ω B_ q(Ω) ξ_ k-σ q/(i ω - iσΩ +Uσ)^2 - ξ^2_ k-σ q - |Δ|^2 , ε_ k, σ (ω) = i ω - Uσ + δε_ k,σ(ω)= i ω - Uσ + + V^2 T ∑_ q, Ω B_ q(Ω) i ω - iσΩ +Uσ/(i ω - iσΩ +Uσ)^2 - ξ^2_ k-σ q - |Δ|^2 . For the problem under consideration all the in-plane directions of k are equivalent. For this reason the magnonic corrections only depend on the absolute value k of the wave vector. Further in order to study the quasiparticle spectra and density of states we turn from Matsubara frequencies to the real energies in the Green's functions i ω→ε + i δ, where δ is an infinitesimal positive number. The magnonic corrections for spin-up electrons δΔ_ k, ↑, δξ_ k, ↑ and δε_ k, ↑ are presented in Figs. <ref>-<ref> as functions of the quasiparticle energy ε and ξ_ k≡ξ, which after linearization in the vicinity of the Fermi surface takes the form ξ_ k ≈v_F ( k - k_F). The key features of the corrections, which can be see in the presented plots are: (i) The dependence of the corrections on ξ is very weak. The reason is that the most important range of the magnonic wave numbers contributing to the corrections is q ≲ 1/ξ_S, where ξ_S = v_F/Δ is the superconducting coherence length. Then taking parameters of the magnon spectrum corresponding to YIG ω_0,YIG∼ 10^-1Δ, D_YIG≈ 5*10^-40J*m^2 or EuS ω_0,EuS∼ 10^-2Δ, D_EuS≈ 3*10^-42J*m^2, we obtain that D q^2 ≪ω_0 to very good accuracy for all reasonable parameters. Consequently, one can disregard D q^2 with respect to ω_0 in the magnonic Green's function B_ q and after linearization of ξ_ k - σ q≈v_F ( k - σ q - k_F) in the vicinity of the Fermi surface we see that the dependence on k drops from Eqs. (<ref>)-(<ref>). (ii) The correction to the normal state electron dispersion δξ is small with respect to all other corrections and is neglected below. (iii) The important corrections δΔ and δε have peaks at the energies corresponding to the superconducting coherence peaks of the opposite spin subbands. While the coherence peaks for the spin-up subband are located at ε = ±Δ +Ũ, the peaks of the corrections are at ε = ±Δ -Ũ. It is an obvious consequence of the process of electron spin flip accompanied by emission or absorption of a magnon. (iv) Correction δΔ represents an effective contribution to the superconducting order parameter induced from the pure singlet pairing Δ via the electron-magnon interaction. It depends on the Matsubara frequency and contains both singlet and triplet components. As can be seen from Eq. (<ref>), the correction obeys the condition δΔ_↑(ω) = δΔ_↓(-ω). It means that the triplet component δΔ_t (ω) = δΔ_↑(ω) - δΔ_↓(ω) = -δΔ_t(-ω) works as an effective odd-frequency superconducting order parameter. This situation is rather unusual because typically in F/S hybrid systems we encounter an odd-frequency anomalous Green's function, but at the same time the order parameter is still even frequency in the framework of the conventional BCS weak coupling theory. § QUASIPARTICLE SPECTRA Now we turn to discussion of how quasiparticle spectra in the S layer are modified by the electron-magnon interaction. In Fig. <ref>(a) we present the spectral functions for the both spins in the S layer calculated from the Green's function (<ref>) according to the relation A_σ(ε, k) = -1/π Tr{1+τ_z/2 Im[Ĝ_ k,σ^R(ε)]}. 
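To illustrate how the dressed Green's function above translates into such curves, the following sketch (not the authors' code; energies in units of Δ, Ũ = 0.5Δ, and the small imaginary part δ are illustrative) evaluates A_σ(ε, ξ) and its ξ-integral. The magnonic corrections enter only as placeholder constants here; their actual frequency dependence requires performing the Matsubara sums written above, and with the corrections switched off the result reduces to the mean-field spin-split BCS case.

```python
import numpy as np

def spectral_function(eps, xi, U, Delta, sigma, delta=1e-2, d_eps=0.0, d_Delta=0.0):
    """A_sigma(eps, xi) = -(1/pi) Im G^R_11 for the 2x2 Nambu Green's function.
    d_eps and d_Delta stand in for the magnonic corrections (constants here;
    frequency dependent in the full theory)."""
    w = eps + 1j * delta - sigma * U + d_eps        # renormalized energy (retarded)
    Dt = Delta + d_Delta                            # renormalized gap
    G11 = (w + xi) / (w**2 - xi**2 - Dt**2)         # electron-electron component
    return -G11.imag / np.pi

def dos(eps, U, Delta, sigma, xi_max=20.0, n_xi=8001, **kw):
    """Momentum-integrated DOS, normalized to the normal-state value."""
    xi = np.linspace(-xi_max, xi_max, n_xi)
    A = spectral_function(eps[:, None], xi[None, :], U, Delta, sigma, **kw)
    return np.sum(A, axis=1) * (xi[1] - xi[0])

Delta, U = 1.0, 0.5                                 # energies in units of Delta
eps = np.linspace(-3.0, 3.0, 1201)
N_up, N_dn = dos(eps, U, Delta, +1), dos(eps, U, Delta, -1)
pos = eps > 0
print("spin-up coherence peak (eps > 0) near:", eps[pos][np.argmax(N_up[pos])])
print("spin-down coherence peak (eps > 0) near:", eps[pos][np.argmax(N_dn[pos])])
```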
The spectral function is isotropic in momentum space and for this reason we plot it as a function of ξ_ k≡ξ. The electron-like and hole-like quasiparticle branches are clearly seen at positive and negative energies, respectively. Black dashed lines represent the quasiparticle spectra in the absence of the electron-magnon interaction. The electron-magnon interaction leads to the following main modifications of the quasiparticle spectra: (i) The Zeeman splitting of spin-up and spin-down quasiparticle branches is reduced due to the magnon-mediated interaction between quasiparticles with opposite spins. (ii) For positive energy branches, corresponding to electron-like quasiparticles, the lifetime of spin-up quasiparticles and quasiparticles at the upper part of the spin-down branch is considerably suppressed, what is seen as a broadening of the corresponding branches. For negative energies, corresponding to hole-like quasiparticles, the situation is symmetric if we interchange spins. The broadening of the spin-down branch only occurs in the energy region, where the spin-up branch also exists. The physical reason is that the spin-flip processes providing the broadening are nearly horizontal due to the fact that ω_0 + Dq^2 ≪Δ, that is the magnon energies are small as compared Δ in the whole range of ξ, considered in Fig. (<ref>). The lower (upper) part of the spin-down (up) positive (negative) energy branch is not broadened because there are no available states for the opposite spin quasiparticles at the appropriate energies and, consequently, the spin-flip processes are not allowed. (iii) In Fig. <ref>(a) we also see a reconstruction of the spin-down spectral branch in the energy range of the bottom of the spin-up branch. In order to investigate this effect in more detail we plot the same figure on a logarithmic scale in Fig. <ref>(b), what allows to clearly see weak spectral features. Figs. <ref>(c) and (d) represent the spectral functions for the spin-up band on the normal and on the logarithmic scale, respectively. From Figs. <ref>(b) and (d) it is seen that due to the electron-magnon interaction in the energy region of the extremum of the spin-up (down) branch, a nonzero density of states appears for the opposite spin branch. It looks like a horizontal line starting from the bottom of the corresponding branch. This line is horizontal due to the independence of the electron-magnon self-energy corrections (<ref>) and (<ref>) on ξ. This mixing of the spin-up and spin-down bands resulting from the magnon-mediated spin-flip processes is natural and exists at all energies, but the spectral weight of the opposite spin branch is too small except for the regions of the extrema of the bands corresponding to the coherence peaks of the superconducting DOS. Intersection of the additional lines with the original spin-down band results in its reconstruction, which looks like an avoided crossing point. The results for the spectral function presented and discussed above correspond to T=0.1Δ. This temperature is higher than the gap in the magnonic spectrum ω_0=0.03Δ, which we take in our calculations. Therefore, a large number of thermal magnons are excited at this temperature. In Fig. <ref> the spectral function is demonstrated for lower temperature T=0.01Δ<ω_0. 
It is seen that the characteristic signatures of the magnon-mediated spin-flip processes, that is, the mixing, reconstruction and broadening of the branches, are much less pronounced due to the suppression of the thermally excited magnons at such low temperatures. § DOS IN THE PRESENCE OF MAGNONS Now we turn to the discussion of the local density of states (LDOS) in the S layer, which is calculated as the momentum-integrated spectral function: N(ε) = ∫d^2k/(2π)^2 A(ε, k). Fig. <ref>(a) demonstrates the LDOS in the presence of the electron-magnon interaction (solid line) as compared to the LDOS calculated at V=0 (dashed line). The LDOS at V=0, that is, calculated assuming the mean-field approximation for the exchange field, takes the conventional BCS-like shape. It manifests Zeeman-split coherence peaks, and the outer peak is always higher than the inner one. The electron-magnon interaction inverts the relative ratio of the peak heights and broadens the outer peaks, while the width of the inner peaks remains unchanged. The reason is the same as for the broadening of the spectra in Fig. <ref>: electron spin-flip processes accompanied by a magnon emission or absorption. The outer coherence peaks in Fig. <ref>(a) correspond to the energy regions of the bottom (top) of the positive(negative)-energy spin-up(down) bands. This type of broadening, which only affects the outer peaks, differs from other physical mechanisms resulting in the broadening of the coherence peaks, such as the orbital effect of the magnetic field, inelastic scattering or magnetic impurities, which affect all the peaks <cit.> and can be roughly described by the Dynes parameter. The other important manifestation of the electron-magnon interaction is that the shape of the LDOS strongly depends on temperature even at very low temperatures ∼ω_0 ≪Δ, in agreement with the behavior of the spectral function discussed above. The temperature evolution of the LDOS is presented in Fig. <ref>. It is seen that the broadening of the outer peak develops with increasing temperature in the temperature range ∼ω_0. This is clear if we recall that the broadening is caused by the spin-flip processes, which are mediated by the thermally excited magnons. We do not consider larger temperatures T ≫ω_0 comparable to the critical temperature of the superconducting film, because in this temperature range the temperature dependence of the superconducting gap comes into play and the correct consideration of the problem requires solving the self-consistency equation for the order parameter. Now let us discuss numerical estimates of the dimensionless constant K=V^2 A / (4 πħ v_F √(D Δ)), which controls the strength of the electron-magnon coupling. Substituting V = J√(M_s/(2|γ|d_FI A))(1/d_S) and expressing the interface exchange coupling constant via the experimentally accessible quantity Ũ as |J| = 2 |γ| Ũ d_S/M_s (where to the leading approximation we neglect the magnonic contribution to the magnetization), we obtain K = Ũ^2 (2|γ|/M_s) 1/(4 πħ√(DΔ)v_F d_FI) for one transverse magnon mode. The effective number of working transverse modes is N_⊥∼ d_FI/a, where a is the interatomic distance in the ferromagnet. According to our estimates, for d_FI≈ 10 nm one has N_⊥∼ 2 ÷ 5.
One can take the following parameters for YIG/Nb heterostructures: Ũ/Δ = 0.5, v_F = 10^6m/s, Δ_Nb = 2.7*10^-22J, a=1.2 nm, 2|γ|/M_s = 3.3*10^-27m^3, D = D_bare,YIG-δ D_YIG, where D_bare,YIG = 5*10^-40J*m^2<cit.> is the exchange stiffness of YIG and δ D_YIG is the renormalization of the stiffness in FI/S bilayers due to the formation of magnon-Cooparon quasiparticles <cit.>. As was predicted <cit.>, for the material parameters of YIG/Nb heterostructures δ D_YIG can be ∼ (0.5 ÷ 1) D_YIG,bare for d_FI∼ (1 ÷ 0.5) d_S. Therefore, the electron-magnon coupling constant for YIG/Nb heterostructures can vary in a wide range, K_YIG/Nb≳ 10^-4. The values K ∼ 0.01 considered here can be realized in the regime of strong renormalization of the exchange stiffness constant D. For EuS/Al heterostructures one can take Ũ/Δ = 0.25 <cit.>, v_F = 10^6m/s, Δ_Al = 3.5*10^-23J, a=10^-10m, 2|γ|/M_s = 3.3*10^-28m^3, D = D_bare,EuS, where D_bare,EuS = 3*10^-42J*m^2<cit.>. The superconducting renormalization of the stiffness due to the formation of magnon-Cooparon quasiparticles is predicted to be small for the parameters corresponding to EuS/Al heterostructures at reasonable thicknesses d_FI, due to the smaller value of Δ and larger M_s. Substituting these parameters into the expression for K we come to the conclusion that for EuS/Al heterostructures K_EuS/Al∼ 10^-7÷ 10^-6, that is, the electron-magnon effects are unlikely to be observed in such structures. In general, the electron-magnon effects in the LDOS and quasiparticle spectra should be more pronounced in ultra-thin superconducting films with high critical temperatures, where large absolute values of the effective exchange field Ũ can be realized. Smaller values of the exchange stiffness of the ferromagnet will also enhance the effect. The manifestations of the electron-magnon coupling become more pronounced at T ≳ω_0 and grow with temperature. Now we discuss the influence of the electron-magnon interaction on the effective Zeeman splitting, which is defined as the distance between the split coherence peaks of the LDOS divided by 2. Experimentally, a low-temperature reduction of the effective Zeeman splitting at T ≪Δ for EuS/Al heterostructures has been reported <cit.>. It was ascribed to the presence of weakly bound spins at the EuS/Al interface. The renormalization of the effective exchange field in the superconductor by thermal magnons can also contribute to this effect. Indeed, the fit of the experimentally observed temperature dependence of the distance between the Zeeman-split coherence peaks Δ V_peak(T) by 2|Ũ| = J (M_s-N_m |γ|)/(2|γ|d_S ) with the magnon density N_m = (1/S d_FI)∑_ q{exp[(ω_0+Dq^2)/T]-1}^-1 and ω_0 ≈ 0.03K is in reasonable agreement with the experimental data. In addition, the broadening of the outer coherence peaks, predicted in this work, leads to an enhancement of the distance between the spin-split coherence peaks. The broadening becomes stronger with increasing temperature. This effect leads to an apparent growth of the peak splitting with temperature and, therefore, acts opposite to the renormalization of the effective Zeeman field by magnons. However, our numerical estimates suggest that this temperature growth is unlikely to be observed, at least for heterostructures consisting of the materials discussed above, because the renormalization of the effective Zeeman field by magnons dominates.
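As a rough cross-check of the quoted orders of magnitude, the snippet below evaluates the per-mode coupling K = Ũ^2 (2|γ|/M_s)/(4πħ√(DΔ) v_F d_FI) with the parameter values listed above; the choice d_FI = 10 nm, the use of the unrenormalized YIG stiffness, and the SI unit handling are assumptions of this sketch rather than the authors' exact procedure.

```python
import numpy as np

hbar = 1.054571817e-34  # J*s

def K_per_mode(U_tilde, two_gamma_over_Ms, D, Delta, v_F=1e6, d_FI=10e-9):
    """Per-mode coupling K = U^2 (2|gamma|/M_s) / (4*pi*hbar*sqrt(D*Delta)*v_F*d_FI)."""
    return U_tilde**2 * two_gamma_over_Ms / (4 * np.pi * hbar * np.sqrt(D * Delta) * v_F * d_FI)

Delta_Nb, Delta_Al = 2.7e-22, 3.5e-23  # J
K_yig = K_per_mode(0.5 * Delta_Nb, 3.3e-27, 5e-40, Delta_Nb)   # unrenormalized D_bare,YIG
K_eus = K_per_mode(0.25 * Delta_Al, 3.3e-28, 3e-42, Delta_Al)

print(f"K per mode, YIG/Nb: {K_yig:.1e}")  # ~1e-5; several modes and a strongly reduced D push this above 1e-4
print(f"K per mode, EuS/Al: {K_eus:.1e}")  # ~1e-7, consistent with the estimate quoted above
```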
§ CONCLUSIONS In this work the influence of the electron-magnon interaction at the superconductor/ferromagnetic insulator interface in thin-film FI/S heterostructures on the quasiparticle spectrum and the LDOS in the superconducting layer is studied. It is predicted that, due to the magnon-mediated electron spin-flip processes, the spin-split quasiparticle branches are partially mixed and reconstructed. The reconstruction is most pronounced in the region of the bottom of the energetically unfavorable spin band because of the enhanced density of electronic states and the existence of available states in the opposite-spin band. The BCS-like Zeeman-split shape of the superconducting DOS, which is typical for superconductors in an effective exchange field, is strongly modified by the electron-magnon interaction. The outer spin-split coherence peaks are broadened, while the inner peaks remain intact. This type of broadening is a clear signature of the magnon-mediated spin flips and differs strongly from other mechanisms of coherence-peak broadening, which usually influence all peaks. The broadening grows with temperature due to the thermal excitation of magnons. The features in the electronic DOS described above are mainly caused by the magnonic contributions to the electron self-energy that are diagonal in particle-hole space, that is, by quasiparticle processes. Besides that, we have also found an off-diagonal (in particle-hole space) magnonic contribution to the electronic self-energy. It mimics an odd-frequency superconducting order-parameter admixture to the leading singlet order parameter. The study of its influence on the superconducting properties of the system may be an interesting direction for future research. § ACKNOWLEDGMENTS We acknowledge discussions of the exchange interaction Hamiltonian with Akashdeep Kamra. The work was supported by the Russian Science Foundation via the RSF project No. 22-42-04408.
http://arxiv.org/abs/2307.04988v3
20230711025810
Benchmarking Bayesian Causal Discovery Methods for Downstream Treatment Effect Estimation
[ "Chris Chinenye Emezue", "Alexandre Drouin", "Tristan Deleu", "Stefan Bauer", "Yoshua Bengio" ]
cs.LG
[ "cs.LG", "stat.ME" ]
Benchmarking Bayesian Causal Discovery Methods for Downstream Treatment Effect Estimation. Chris Chinenye Emezue† (Technical University of Munich, Munich, Germany; Mila - Quebec AI Institute, Montreal, Canada), Alexandre Drouin (ServiceNow Research, Montreal, Canada; Mila - Quebec AI Institute), Tristan Deleu (Université de Montréal, Montreal, Canada; Mila - Quebec AI Institute), Stefan Bauer (Technical University of Munich; Helmholtz AI), Yoshua Bengio (Université de Montréal; Mila - Quebec AI Institute; CIFAR AI Chair; CIFAR Senior Fellow). Correspondence: Chris Chinenye Emezue <[email protected]>. Keywords: GFlowNets, treatment effect, causal discovery, DAG-GFlowNet, causal inference. †Work done as a visiting research student at Mila. The practical utility of causality in decision-making is widespread and brought about by the intertwining of causal discovery and causal inference. Nevertheless, a notable gap exists in the evaluation of causal discovery methods, where insufficient emphasis is placed on downstream inference. To address this gap, we evaluate seven established baseline causal discovery methods including a newly proposed method based on GFlowNets, on the downstream task of treatment effect estimation. Through the implementation of a distribution-level evaluation, we offer valuable and unique insights into the efficacy of these causal discovery methods for treatment effect estimation, considering both synthetic and real-world scenarios, as well as low-data scenarios. The results of our study demonstrate that some of the algorithms studied are able to effectively capture a wide range of useful and diverse ATE modes, while some tend to learn many low-probability modes which impacts the (unrelaxed) recall and precision. § INTRODUCTION Causal inference has a wide variety of real-world applications in domains such as healthcare <cit.>, marketing <cit.>, political science, and online advertising <cit.>. Treatment effect estimation, the process of estimating the effect or impact of a treatment on an outcome in the presence of other covariates as potential confounders (and mediators), is a fundamental problem in causal inference that has received widespread interest for decades <cit.>. The existing powerful methods for treatment effect estimation from data require a complete (or partial) a priori knowledge of the causal graph <cit.>. When the graph is unknown, this requires solving a problem of causal structure learning, also known as causal discovery. Structure learning involves learning a graph (typically characterized by a directed acyclic graph or DAG for short) that best describes the dependence structure in a given data set <cit.>. In this approach, structure learning is required to learn a causal graph, which can then be applied to infer the influence of treatments on the outcomes of interest <cit.>. It should be noted that the actual causal graph can only be inferred up to its Markov Equivalence class (MEC), and the available observational data does not offer any means of further differentiation <cit.>. Learning a single graph has been shown to lead to poor predictions in a downstream causal inference task <cit.>. Instead of learning a single causal graph, the problem of structure learning can be tackled from a Bayesian perspective where we learn a posterior over the causal graphs. This has the unique advantage of accounting for epistemic uncertainty over the causal graphs in the MEC, thereby leading to a more enriching predictive performance in a downstream causal inference task. However, learning such a posterior over the causal graphs is plagued by challenges.
One major issue is the combinatorially large sample space of causal graphs. The second major challenge is related to MCMC mode-mixing <cit.>: the mode-mixing problem occurs when the chances of going from one mode to a neighboring one may become exponentially small and require exponentially long chains, if the modes are separated by a long sequence of low-probability configurations. Therefore by using MCMC, there is an important set of distributions for which finite chains are unlikely to provide enough diversity of the modes of the distribution <cit.>. While there are a number of existing causal discovery methods (both Bayesian and non-Bayesian), our benchmark study centers on DAG-GFlowNet <cit.>, which is a unique method that leverages a novel class of probabilistic models called Generative Flow Networks  <cit.> to approximate the posterior distribution over causal graphs. Although causal inference is an inherent downstream application of causal discovery, most causal discovery evaluation methods are not aligned with causal inference because these two fields are typically studied independently <cit.>. For example, many causal discovery evaluation methods use the structural hamming distance (SHD) which compares the learned causal DAG (or the samples from the posterior distribution of DAGs in Bayesian structure learning) to the true DAG of the data generating process. Measuring the proximity of the learned DAGs, however, does not reveal much about their actual performance in treatment effect estimation given a treatment and outcome variable of interest, which is a predominantly downstream evaluation. In this work, we set out to benchmark causal discovery methods for the downstream task of treatment effect estimation, specifically the average treatment effect. As an extension to the DAG-GFlowNet, we offer insights on the application of GFlowNets to average treatment effect estimation, by comparing it with six other baseline methods for causal discovery. § BACKGROUND We provide a detailed background, in <ref>, on some of the key concepts used in this paper: Bayesian network, interventional distribution, Bayesian causal discovery, average treatment effect and our structure learning baselines. The structure learning baselines employed in our study follow <cit.>. In addition to DAG-GFlowNet <cit.>, we leveraged six baseline causal discovery algorithms: PC <cit.>, GES <cit.>, MC3 <cit.>, BCDNets <cit.>, Gadget <cit.>, and DiBS <cit.>. Due to space restrictions, we move our explanation of the causal discovery methods to Section <ref> in the Appendix. § EXPERIMENTAL SETUP Figure <ref> provides an illustrative overview of our experimental pipeline. The initial step involves Bayesian causal discovery, where, as discussed in Section <ref>, the objective is to learn a posterior distribution of the directed acyclic graphs (DAGs) that provide the most plausible explanations for the training dataset. The subsequent stage involves the estimation of the average treatment effect (ATE). Here, the ATE for each DAG in the posterior is estimated for every pair of distinct variables. In addition, the DAGs within the Markov equivalence class (MEC) of the true graph are enumerated and used to calculate the ATE estimates for each of them. The evaluation process, in stage 3, then involves a comparison of the average treatment effect (ATE) distributions between the true graph Markov equivalence class (MEC) and the learned posterior distribution of DAGs. 
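To make stage 2 of this pipeline concrete, the sketch below shows how the ATE of one treatment-outcome pair could be computed for a single sampled DAG with DoWhy's identify/estimate pipeline, and how such calls would be looped over all ordered variable pairs and posterior DAG samples, as detailed in the ATE experiments subsection below. The dataset columns, the GML serialization of the graph, and the backdoor linear-regression estimator are illustrative assumptions rather than the exact configuration used in the experiments.

```python
import networkx as nx
import pandas as pd
from dowhy import CausalModel

def estimate_ate(df: pd.DataFrame, dag: nx.DiGraph, treatment: str, outcome: str) -> float:
    """ATE of `treatment` on `outcome` under one candidate DAG, contrasting do(T=1.0) and do(T=0.0)."""
    model = CausalModel(
        data=df,
        treatment=treatment,
        outcome=outcome,
        graph="\n".join(nx.generate_gml(dag)),  # GML serialization of the sampled DAG
    )
    estimand = model.identify_effect(proceed_when_unidentifiable=True)
    estimate = model.estimate_effect(
        estimand,
        method_name="backdoor.linear_regression",  # assumed estimator for the linear-Gaussian setting
        treatment_value=1.0,
        control_value=0.0,
    )
    return estimate.value

def posterior_ates(df, sampled_dags, variables):
    """ATE samples for every ordered pair of distinct variables, one value per posterior DAG."""
    ates = {}
    for dag in sampled_dags:
        for t in variables:
            for y in variables:
                if t != y:
                    ates.setdefault((t, y), []).append(estimate_ate(df, dag, t, y))
    return ates
```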
For our experiments on synthetic data, we worked with 6 baselines in total and 26 seeds for each baseline. Each seed corresponds to a causal discovery experiment with a randomly sampled truth graph and observational data. §.§ Causal discovery experiments Following <cit.>, we performed causal discovery experiments on synthetic and real-world scenarios. For PC and GES we implement bootstrapping to achieve DAG posterior samples. Analysis on synthetic data: Following <cit.>, we performed experimental analyses using synthetic graphs and simulated data. We sampled synthetic data from linear Gaussian Bayesian networks with randomly generated structures. We experimented with Bayesian networks of size d=20 variables and considered two different sample sizes of n=20 and n=100. A small sample size of 20 was specifically chosen to evaluate the capabilities of the causal discovery algorithms in a low-data regime. The ground-truth graphs are sampled according to an Erdos-Rényi model. Analysis on flow cytometry data: DAG-GFlowNet was evaluated against the baselines on real-world flow cytometry data <cit.> to learn protein signaling pathways. The data consists of continuous measurements of d = 11 phosphoproteins in individual T-cells. They used the first n = 853 observations and the DAG, inferred by <cit.> and containing 11 nodes and 17 edges, as the dataset and ground-truth graph respectively for their causal discovery experiments. We continued with this direction in our experimental analysis and our goal was to show the downstream performance of DAG-GFlowNet on average treatment effect of the phosphoproteins in the protein signaling pathways. §.§ ATE experiments For our ATE experiments, we utilized all pairs of distinct variables: the rationale behind this was to thoroughly explore the possible treatment effects across various combinations. Therefore given d random variables {X_1,...,X_d}, we performed ATE evaluations on d^2 - d variable pairs. To achieve this in practice, we leveraged the DoWhy package <cit.>, which facilitated the implementation of the do-calculus algorithm. To ensure consistency and clarity in our results, we set the treatment values at 1.0 and 0.0 for all our experiments. The choice of values 1.0 and 0.0 does not relate to the existence or absence of a treatment, as is commonly used in most causal inference literature. Performing such a robust experiment involved a huge computation load. For example, for our baselines, each with 26 random seeds, each consisting of 1000 DAG samples from the posterior, we had to do d*(d-1) * 1000 * 26 * 6 ATE estimations. For the synthetic graph with 20 nodes, this leads to 57M estimations. In order to optimize the computational efficiency of our experiments, we implemented parallelism techniques. The GNU parallel computing tool <cit.> enabled us to distribute the computational workload across multiple processors or cores, thereby significantly reducing the overall computation time. §.§ Evaluation framework Our evaluation methodology goes beyond single-point ATE estimation, which is employed in standard causal inference benchmarking, by performing ATE evaluations based on posterior samples. This approach aims to provide a more comprehensive assessment of the quality of the learned posterior average treatment effect (ATE). 
Specifically, our evaluation pipeline involves the following metrics: Wasserstein distance (WD): To obtain a quantitative measure of the similarity between the true sample-based ATE distribution and that of the learned ATE, we calculate and report their Wasserstein distance <cit.> using their samples[We utilize the Python implementation available https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wasserstein_distance.html here.]. Precision and Recall: We compute the precision and recall of the modes present in the learned ATE distribution and compare them to the modes in the true ATE distribution. In order to calculate the precision and recall, we first identify the unique modes for each of the true, A_T, and learned, A_', ATE samples. Then, based on these sets of modes, we calculate the true positives (modes from A_T that are found in A_'), false negatives (modes from A_T that are missed in A_'), and false positives (modes from A_' that are not in A_T). Note that the lists A_T and A_' have been regrouped prior to running the evaluation (see Section <ref>). §.§ Additional settings Enumerating the MEC of the true graph: In order to achieve our evaluation using our strategy (see Section <ref>), it is necessary not to work with just one true graph. For a given ground-truth graph, we enumerate all the DAGs in its Markov equivalence class (MEC). Regrouping ATE values: The estimation of average treatment effects (ATE) through regression analysis is susceptible to generating estimates that may exhibit slight variations within numerical precision (e.g., 1.000000001 and 1). As our precision and recall metrics essentially perform "hard matches" on floating point values, it becomes crucial to consider the influence of numerical precision. In order to accomplish this objective, we group ATE values that are numerically close. More details are in <ref>. § RESULTS & DISCUSSION The results presented in Table <ref> illustrate the Wasserstein distance (WD), precision, and recall metrics of all baseline methods in terms of their learned ATE samples. Upon examining the Wasserstein distance, PC achieves the lowest Wasserstein distance, while GES attains the highest. When focusing on precision, we observe that, apart from BCDNets, all the methods seem to be performing very poorly. However, all the methods attain relatively high recall scores, with the highest achieved by GES, closely followed by DAG-GFlowNet. This high recall indicates the ability of the methods to capture diverse modes within their ATE distribution. The WD, precision, and recall for the synthetic data experiments with 100 samples are presented in Table <ref>. Given an increased number of observational samples compared to the previous table, it is anticipated that the task of causal discovery will be simpler. This is evidenced in the lower WD scores compared to Table <ref>. In a manner similar to the scenario involving 20 samples, it is observed that the methods, with the exception of BCDNets, exhibit a considerably low precision score, while concurrently displaying high recall values. Table <ref> presents the evaluation results of the analysis on flow cytometry using the Sachs dataset. Overall, all methods demonstrate comparable performance in terms of the Wasserstein distance: the range of the WD is 0.004, unlike in Table <ref>, where it is 0.072, or Table <ref>, where it is 0.144. When considering precision, BCDNets and PC outperform DAG-GFlowNet, which exhibits lower performance.
Notably, DAG-GFlowNet achieves the highest recall, indicating its ability to learn samples from diverse modes within the true ATE distribution. §.§ Filtering Low-Probability Modes In all our evaluations (Tables <ref>, <ref>, <ref>), we witness a trend of DAG-GFlowNet and other methods exhibiting very low precision scores. In Figure <ref> we observe that DAG-GFlowNet (and other baselines like GES and DiBS) tends to learn new modes, but those modes have a very low probability in the estimated distribution. In our current evaluation framework, however, we include all values in the list that have non-zero densities, which leads to unfair penalization of methods that exhibit multimodal diversity. Consequently, these methods receive disproportionately low precision values. However, when we apply a filtering approach that removes the low-probability modes before calculating the metrics, a more insightful narrative emerges for these methods, as shown in Figure <ref>. In particular, we notice a significant increase in precision for all the methods that initially exhibited very low precision values (in Tables <ref>, <ref>, and <ref>) when we apply a density relaxation tolerance of 0.05 (i.e., for any list of ATEs, we only consider ATE values that have a mass of at least 0.05). This trend is consistent across all the experimental settings (100 samples, 20 samples, Sachs dataset). § CONCLUSION In conclusion, the practical importance of causality in decision-making is widely acknowledged, and the interplay between causal discovery and inference is evident. In order to bridge the gap in the evaluation of causal discovery methods, where limited attention is given to downstream inference tasks, we conducted a comprehensive evaluation that assessed seven established baseline causal discovery methods, including a novel approach utilizing GFlowNets. By incorporating a Bayesian perspective in our evaluation, we offer a unique form of distribution-level insight into their effectiveness for downstream treatment effect estimation. § RELATED WORK Benchmarking methods: Benchmarks have played a crucial role in advancing entire research fields, for instance computer vision with the introduction of ImageNet <cit.>. When it comes to causal discovery, benchmarks usually come in the form of research surveys <cit.>, benchmark datasets <cit.>, learning environments <cit.>, and software packages or platforms <cit.>. However, these methods only evaluate the closeness of the causal DAG, or of the samples from the posterior distribution of DAGs in Bayesian structure learning, produced by various causal discovery methods to the ground-truth DAG. Measuring the proximity of the learned DAGs, however, does not reveal much about their actual performance in treatment effect estimation given a treatment and outcome variable of interest, which is a predominantly downstream evaluation. In causal inference, datasets <cit.>, frameworks <cit.>, and software packages <cit.> provide valuable tools for predicting the causal effects of treatments on outcomes. Causal inference plays a crucial role in decision-making and finds numerous practical applications in various domains such as healthcare, advertising, and decision-making processes. This implies that causal inference has a more downstream impact. In causal inference, the graph represents the structure of the joint distribution of variables, which is then leveraged to identify the causal estimand.
Therefore, the evaluation of causal discovery methods on downstream causal inference tasks provides more practical insights into the effectiveness and practicality of causal methods within real-world scenarios. Typically, the fields of causal discovery and inference are approached separately, resulting in limited intertwined evaluation methods. This is the aspect that distinguishes our work. Similar approaches can be found in studies that jointly integrate causal discovery and inference in an end-to-end manner, such as the notable example of DECI <cit.>. However, our work differs in two key aspects: firstly, we employ the novel GFlowNets for causal inference, increasing our span and secondly, we specifically focus on linear noise structural equation models, whereas DECI addresses the problem of end-to-end causal inference in non-linear additive noise structural equation models (SEM). § BACKGROUND We offer a detailed background, in this section, on some of the key concepts used in this paper. Bayesian network: A (causal) Bayesian network <cit.> is a probabilistic model over d random variables {X_1,...,X_d}, whose joint probability distribution factorizes according to a DAG G (whose edges express causal dependencies) as: P(X_1,...,X_d) = ∏_k=1^d P(X_k | Pa_G(X_k)), where Pa_G(X) is the set of parents of the node X, i.e the nodes with an edge onto X in G, interpreted as the direct causes of X. Interventional distribution: Given a random variable X_k, a (hard) intervention on X_k, denoted by do(X_k = a), is obtained by replacing the conditional probability distribution (CPD) P(X_k | Pa_G(X_k)) with a Dirac distribution δ_X_k = a which forces X_k to take on the value of a. Note that intervening on a variable, in a graphical sense, results in a mutilated graph where all incoming edges to the node corresponding to that variable are removed <cit.>. §.§ (Bayesian) Causal discovery Given a dataset D {^(i)}_i = 1^n of n observations, such that ^(j)∼ P(X_1,...,X_d), the goal of structure learning is to learn the DAG G corresponding to the causal Bayesian network that best models D. It is important to note that D could be observational samples or interventional data samples (got from performing hard or soft interventions). In a Bayesian structure learning setting, the task is to approximate the posterior distribution P(G | D) over Bayesian networks that model these observations. A distribution over the DAGs allows quantifying the epistemic uncertainty and the degree of confidence in any given Bayesian network model, which is especially useful when the amount of data to learn from is small <cit.>. §.§ Average treatment effect (ATE) estimation The average treatment effect (ATE) is a quantity that allows us to estimate the impact of a treatment variable on an outcome variable. Given X_T and X_Y, our treatment and effect variables of interest respectively, the ATE on targets X_Y for treatment X_T = a given a reference X_T = b is given by <cit.>: ATE(a,b) = 𝔼[X_Y|do(X_T =b)] - 𝔼[X_Y|do(X_T = a)] In practice, this causal inference is broken down into two steps: identification and estimation. Identification deals with converting the causal estimand P(X_Y|do(X_T =b) into a statistical estimand that can be estimated using the dataset D. Some identification methods include the back-door criterion, front-door criterion <cit.>, instrumental variables <cit.> and mediation. Causal estimation then computes the identified statistical estimand from the data set using a range of statistical methods. 
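As a side note for the linear Gaussian SEMs used in the synthetic experiments, the ATE defined above also has a simple closed form: it equals (b-a) times the sum over all directed paths from X_T to X_Y of the products of edge weights, which the identity (I-B)^{-1} = I + B + B^2 + ... collects in a single matrix entry. The sketch below is not part of the benchmarked pipeline; it is only a hedged sanity check one could run against the regression-based estimates, with B[j, i] assumed to hold the weight of the edge X_i -> X_j.

```python
import numpy as np

def linear_sem_ate(B: np.ndarray, treatment: int, outcome: int, a: float = 0.0, b: float = 1.0) -> float:
    """Closed-form ATE(a, b) = E[X_Y | do(X_T=b)] - E[X_Y | do(X_T=a)] in a linear Gaussian SEM.

    B[j, i] holds the weight of the edge X_i -> X_j.  Since (I - B)^{-1} = I + B + B^2 + ...,
    its (outcome, treatment) entry sums the products of edge weights over all directed paths
    from the treatment to the outcome, i.e. the total causal effect per unit of treatment.
    """
    d = B.shape[0]
    total_effect = np.linalg.inv(np.eye(d) - B)[outcome, treatment]
    return (b - a) * total_effect
```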
The do-calculus algorithm <cit.> provides a powerful, systematic, programmable framework for the identification and estimation of the causal estimand. §.§ Causal discovery baseline algorithms In Table <ref> we briefly describe the structure learning algorithms we use in this work. The structure learning baselines employed in our study follow those utilized by <cit.>. For PC and GES we implement bootstrapping to achieve DAG posterior samples. DAG-GFlowNet: DAG-GFlowNet <cit.> employs GFlowNets <cit.> as a substitute for MCMC in order to estimate the posterior distribution of Bayesian network structures, based on a set of observed data. An overview of GFlowNets is presented in Section <ref> of the Appendix. The process of creating a sample DAG from an approximate distribution is considered a sequential decision task. This involves constructing the graph incrementally, one edge at a time, by utilizing transition probabilities that have been learned by a GFlowNet. We refer the reader to <cit.> for a comprehensive study of DAG-GFlowNet. DiBS: The DiBS framework <cit.> is an approach to Bayesian structure learning that is fully differentiable. It operates within the continuous space of a latent probabilistic graph representation. In contrast to prior research, the DiBS method does not rely on a specific format for the local conditional distributions. Additionally, it enables the simultaneous estimation of the graph structure and the parameters of the conditional distributions. MC3: In the MC3 algorithm (also known as structured MCMC) <cit.>, the authors present a hierarchical Bayesian approach to structure learning that leverages a prior over the classes of variables using nonparametric block-structured priors over Bayes net graph structures. This approach relies heavily on the assumption that variables come in one or more classes and that the prior probability of an edge existing between two variables is a function only of their classes <cit.>. GES: The Greedy Equivalence Search (GES) algorithm <cit.> is a score-based method for causal discovery that has been in use for a considerable amount of time. It operates by performing a greedy search across the set of equivalence classes of DAGs. The representation of each search state is accomplished through a completed partially directed acyclic graph (CPDAG), which includes operators for the insertion and deletion of edges. These operators enable the addition or removal of a single edge, respectively <cit.>. PC: The Peter-Clark (PC) algorithm <cit.> is a prominent constraint-based method for causal discovery. It leverages conditional independence (CI) tests to infer the underlying causal structure. The algorithm yields a completed partially directed acyclic graph (CPDAG) that represents the relationships between variables. It follows a three-step process: 1) identifying the skeleton of the graph, 2) determining v-structures or colliders (X ⟶ Y ⟵ Z) based on d-separation, and 3) propagating edge orientations. Initially, the algorithm creates a fully connected undirected graph using all variables in the dataset. It then eliminates edges that are unconditionally or conditionally independent (skeleton detection), identifies and orients v-structures using the d-separation set, and finally orients the remaining edges while ensuring the absence of new v-structures and cycles. The PC algorithm relies on the assumptions of acyclicity, causal faithfulness, and causal sufficiency. BCDNets: BCDNets <cit.> is another variational inference framework like DiBS. 
In their work they focus on estimating a distribution over DAGs characterizing a linear-Gaussian SEM and propose techniques to scale to high dimensions, such as using deep neural networks to model a variational family of factorized posterior distributions over the SEM parameters (including the edge weights and noise variance), and a horseshoe prior <cit.> on the edge weights, which promotes sparsity. Gadget: Gadget <cit.> is based on MCMC: sampling DAGs by simulating a Markov chain whose stationary distribution is the posterior distribution. However, to enhance the mixing of the chain, and to reduce the space and time requirements, they build a Markov chain on the smaller space of ordered partitions of the node set, each state being associated with multiple DAGs. § GENERATIVE FLOW NETWORKS (GFLOWNETS) Generative Flow Networks <cit.>, also known as GFlowNets, are a class of inference models that have a broad range of applications. GFlowNets are capable of generating samples with a probability that is proportional to a given reward function. GFlowNets have been extensively studied and discussed in research papers such as <cit.> and <cit.>. The models facilitate the process of selecting a varied pool of potential candidates, while adhering to a training objective that ensures a nearly proportional sampling based on a specified reward function. GFlowNets are characterized by unique training objectives, like the flow-matching condition <cit.>, the detailed balance condition <cit.>, etc., through which a policy is learned. Through the training objectives, this policy is designed to ensure that the probability P_T(s) of sampling an object s is roughly proportional to the value R(s) of a specified reward function applied to that object. The GFlowNets technique is designed to reduce the computational burden of MCMC methods by performing the necessary work in a single generative pass that has been trained for this purpose. GFlowNets are well-suited for modeling and sampling from distributions over sets and graphs, as well as estimating free energies and marginal distributions <cit.>. They excel in problem scenarios with specific characteristics <cit.>: (1) the ability to define or learn a non-negative or non-marginalized reward function that determines the distribution to sample from, (2) the presence of a highly multimodal reward function, showcasing GFlowNets' strength in generating diverse samples, and (3) the benefit of sequential sampling, where compositional structure can be leveraged for sequential generation. Since their inception, GFlowNets have exhibited promising results in diverse domains such as discrete probabilistic modeling <cit.>, molecular design <cit.>, and causal discovery <cit.>. The aim of our research is to provide significant findings on the feasibility of employing GFlowNets for causal inference. § REGROUPING ATE VALUES The estimation of average treatment effects (ATE) through regression analysis is susceptible to generating estimates that may exhibit slight variations within numerical precision (e.g., 1.000000001 and 1). As our precision and recall metrics essentially perform "hard matches" on floating point values, it becomes crucial to consider the influence of numerical precision. In order to accomplish this objective, we group ATE values that are numerically close.
We use the following equation to test whether two floating point values, a and b, are equivalent: |a - b| <= (atol + rtol * |b|), where rtol is the relative tolerance parameter and atol is the absolute tolerance parameter. Practically, we use the `isclose' function from the Numpy package[<https://numpy.org/doc/stable/reference/generated/numpy.isclose.html>] which uses the equation above and returns a boolean indicating whether a and b are equal within the given tolerance. We used the default values from Numpy, rtol=1e-05, atol=1e-08. We apply regrouping to the list of ATEs for precision and recall evaluation, but not for Wasserstein distance.
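Putting the regrouping rule and the metrics of the main text together, a minimal sketch of the evaluation could look as follows; the greedy single-pass grouping, the exact-equality mode matching after regrouping, and the optional 0.05 mass threshold (the relaxation used when filtering low-probability modes) are assumptions about the implementation rather than a verbatim reproduction of it.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def regroup(ate_values, rtol=1e-05, atol=1e-08):
    """Snap numerically close ATE estimates (e.g. 1.000000001 and 1.0) to a common representative."""
    out, rep = [], None
    for v in sorted(ate_values):
        if rep is None or not np.isclose(v, rep, rtol=rtol, atol=atol):
            rep = v                      # start a new group
        out.append(rep)
    return out

def modes(ate_values, min_mass=0.0):
    """Unique regrouped values, optionally dropping modes whose probability mass is below min_mass."""
    values, counts = np.unique(np.asarray(regroup(ate_values)), return_counts=True)
    mass = counts / counts.sum()
    return set(values[mass >= min_mass])

def evaluate(true_ates, learned_ates, min_mass=0.0):
    """WD on raw samples; precision/recall on (optionally filtered) regrouped modes."""
    wd = wasserstein_distance(true_ates, learned_ates)   # regrouping is not applied for the WD
    m_true, m_learned = modes(true_ates, min_mass), modes(learned_ates, min_mass)
    tp = len(m_true & m_learned)      # true modes recovered in the learned ATE distribution
    fp = len(m_learned - m_true)      # spurious modes
    fn = len(m_true - m_learned)      # missed modes
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return wd, precision, recall

# evaluate(ates_true, ates_learned)                  # unrelaxed metrics
# evaluate(ates_true, ates_learned, min_mass=0.05)   # relaxed metrics after filtering low-probability modes
```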
http://arxiv.org/abs/2307.05249v1
20230711132937
DRMC: A Generalist Model with Dynamic Routing for Multi-Center PET Image Synthesis
[ "Zhiwen Yang", "Yang Zhou", "Hui Zhang", "Bingzheng Wei", "Yubo Fan", "Yan Xu" ]
eess.IV
[ "eess.IV", "cs.CV", "cs.LG" ]
DRMC: A Generalist Model with Dynamic Routing for Multi-Center PET Image Synthesis. Zhiwen Yang^1, Yang Zhou^1, Hui Zhang^2, Bingzheng Wei^3, Yubo Fan^1, Yan Xu^1 (corresponding author). ^1School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China ([email protected]). ^2Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China. ^3Xiaomi Corporation, Beijing 100085, China. August 12, 2023. Multi-center positron emission tomography (PET) image synthesis aims at recovering low-dose PET images from multiple different centers. The generalizability of existing methods can still be suboptimal for a multi-center study due to domain shifts, which result from non-identical data distribution among centers with different imaging systems/protocols. While some approaches address domain shifts by training specialized models for each center, they are parameter inefficient and do not well exploit the shared knowledge across centers. To address this, we develop a generalist model that shares architecture and parameters across centers to utilize the shared knowledge. However, the generalist model can suffer from the center interference issue, i.e. the gradient directions of different centers can be inconsistent or even opposite owing to the non-identical data distribution. To mitigate such interference, we introduce a novel dynamic routing strategy with cross-layer connections that routes data from different centers to different experts. Experiments show that our generalist model with dynamic routing (DRMC) exhibits excellent generalizability across centers. Code and data are available at: https://github.com/Yaziwel/Multi-Center-PET-Image-Synthesis. § INTRODUCTION Positron emission tomography (PET) image synthesis <cit.> aims at recovering high-quality full-dose PET images from low-dose ones. Despite great success, most algorithms <cit.> are specialized for PET data from a single center with a fixed imaging system/protocol. This poses a significant problem for practical applications, which are not usually restricted to any one of the centers. Towards filling this gap, in this paper, we focus on multi-center PET image synthesis, aiming at processing data from multiple different centers. However, the generalizability of existing models can still be suboptimal for a multi-center study due to domain shift, which results from non-identical data distribution among centers with different imaging systems/protocols (see Fig. <ref> (a)). Though some studies have shown that a specialized model (i.e. a convolutional neural network (CNN) <cit.> or Transformer <cit.> trained on a single center) exhibits certain robustness to different tracer types <cit.>, different tracer doses <cit.>, or even different centers <cit.>, such generalizability of a center-specific knowledge is only applicable to small domain shifts. It will suffer a severe performance drop when exposed to new centers with large domain shifts <cit.>.
There are also some federated learning (FL) based <cit.> medical image synthesis methods that improve generalizability by collaboratively learning a shared global model across centers. Especially, federated transfer learning (FTL) <cit.> first successfully applies FL to PET image synthesis in a multiple-dose setting. Since the resultant shared model of the basic FL method <cit.> ignores center specificity and thus cannot handle centers with large domain shifts, FTL addresses this by finetuning the shared model for each center/dose. However, FTL only focuses on different doses and does not really address the multi-center problem. Furthermore, it still requires a specialized model for each center/dose, which ignores potentially transferable shared knowledge across centers and scales up the overall model size. A recent trend, known as generalist models, is to request that a single unified model works for multiple tasks/domains, and even express generalizability to novel tasks/domains. By sharing architecture and parameters, generalist models can better utilize shared transferable knowledge across tasks/domains. Some pioneers <cit.> have realized competitive performance on various high-level vision tasks like classification <cit.>, object detection <cit.>, etc. Nonetheless, recent studies <cit.> report that conventional generalist <cit.> models may suffer from the interference issue, i.e. different tasks with shared parameters potentially conflict with each other in the update directions of the gradient. Specific to PET image synthesis, due to the non-identical data distribution across centers, we also observe the center interference issue that the gradient directions of different centers may be inconsistent or even opposite (see Fig. <ref>). This will lead to an uncertain update direction that deviates from the optimal, resulting in sub-optimal performance of the model. To address the interference issue, recent generalist models <cit.> have introduced dynamic routing <cit.> which learns to activate experts (i.e. sub-networks) dynamically. The input feature will be routed to different selected experts accordingly so as to avoid interference. Meanwhile, different inputs can share some experts, thus maintaining collaboration across domains. In the inference time, the model can reasonably generalize to different domains, even unknown domains, by utilizing the knowledge of existing experts. In spite of great success, the study of generalist models rarely targets the problem of multi-center PET image synthesis. In this paper, inspired by the aforementioned studies, we innovatively propose a generalist model with Dynamic Routing for Multi-Center PET image synthesis, termed DRMC. To mitigate the center interference issue, we propose a novel dynamic routing strategy to route data from different centers to different experts. Compared with existing routing strategies, our strategy makes an improvement by building cross-layer connections for more accurate expert decisions. Extensive experiments show that DRMC achieves the best generalizability on both known and unknown centers. Our contribution can be summarized as: * A generalist model called DRMC is proposed, which enables multi-center PET image synthesis with a single unified model. * A novel dynamic routing strategy with cross-layer connection is proposed to address the center interference issue. It is realized by dynamically routing data from different centers to different experts. 
* Extensive experiments show that DRMC exhibits excellent generalizability over multiple different centers. § METHOD §.§ Center Interference Issue Due to the non-identical data distribution across centers, different centers with shared parameters may conflict with each other in the optimization process. To verify this hypothesis, we train a baseline Transformer with 15 base blocks (Fig. <ref> (b)) over four centers. Following the paper <cit.>, we calculate the gradient direction interference metric ℐ_i, j of the j-th center C_j on the i-th center C_i. As shown in Fig. <ref> (b), interference is observed between different centers at different layers. This will lead to inconsistent optimization and inevitably degrade the model performance. Details of ℐ_i, j <cit.> are shown in the supplement. §.§ Network Architecture The overall architecture of our DRMC is shown in Fig. <ref> (a). DRMC firstly applies a 3×3×3 convolutional layer for shallow feature extraction. Next, the shallow feature is fed into N blocks with dynamic routing (DRBs), which are expected to handle the interference between centers and adaptively extract the deep feature with high-frequency information. The deep feature then passes through another 3×3×3 convolutional layer for final image synthesis. In order to alleviate the burden of feature learning and stabilize training, DRMC adopts global residual learning as suggested in the paper <cit.> to estimate the image residual from different centers. In the subsequent subsection, we will expatiate the dynamic routing strategy as well as the design of the DRB. §.§ Dynamic Routing Strategy We aim at alleviating the center interference issue in deep feature extraction. Inspired by prior generalist models <cit.>, we specifically propose a novel dynamic routing strategy for multi-center PET image synthesis. The proposed dynamic routing strategy can be flexibly adapted to various network architectures, such as CNN and Transformer. To utilize the recent advance in capturing global contexts using Transformers <cit.>, without loss of generality, we explore the application of the dynamic routing strategy to a Transformer block, termed dynamic routing block (DRB, see Fig. <ref> (c)). We will introduce our dynamic routing strategy in detail from four parts: base expert foundation, expert number scaling, expert dynamic routing, and expert sparse fusion. Base Expert Foundation. As shown in <ref> (b), we first introduce an efficient base Transformer block (base block) consisting of an attention expert and a feed-forward network (FFN) expert. Both experts are for basic feature extraction and transformation. To reduce the complexity burden of the attention expert, we follow the paper <cit.> to perform global channel attention with linear complexity instead of spatial attention <cit.>. Notably, as the global channel attention may ignore the local spatial information, we introduce depth-wise convolutions to emphasize the local context after applying attention. As for the FFN expert, we make no modifications to it compared with the standard Transformer block <cit.>. It consists of a 2-layer MLP with GELU activation in between. Expert Number Scaling. Center interference is observed on both attention experts and FFN experts at different layers (see Fig. <ref> (b)). This indicates that a single expert can not be simply shared by all centers. Thus, we increase the number of experts in the base block to M to serve as expert candidates for different centers. 
Specifically, each Transformer block has an attention expert bank 𝐄_ATT = [𝐄^1_ATT, 𝐄^2_ATT, ..., 𝐄^M_ATT] and an FFN expert bank 𝐄_FFN = [𝐄^1_FFN, 𝐄^2_FFN, ..., 𝐄^M_FFN], both of which have M base experts. However, it does not mean that we prepare specific experts for each center. Although using center-specific experts can address the interference problem, it is hard for the model to exploit the shared knowledge across centers, and it is also difficult to generalize to new centers that did not emerge in the training stage <cit.>. To address this, we turn to different combinations of experts. Expert Dynamic Routing. Given a bank of experts, we route data from different centers to different experts so as to avoid interference. Prior generalist models <cit.> in high-level vision tasks have introduced various routing strategies to weigh and select experts. Most of them are independently conditioned on the information of the current layer feature, failing to take into account the connectivity of neighboring layers. Nevertheless, PET image synthesis is a dense prediction task that requires a tight connection of adjacent layers for accurate voxel-wise intensity regression. To mitigate the potential discontinuity <cit.>, we propose a dynamic routing module (DRM, see Fig. <ref> (c)) that builds cross-layer connection for expert decisions. The mechanism can be formulated as: W=𝐑𝐞𝐋𝐔(𝐌𝐋𝐏([𝐆𝐀𝐏(X), H])), where X denotes the input; 𝐆𝐀𝐏(·) represents the global average pooling operation to aggregate global context information of the current layer; H is the hidden representation of the previous MLP layer. ReLU activation generates sparsity by setting the negative weight to zero. It is a more suitable gating function in comparison with the commonly used softmax activation <cit.> and top-k gating <cit.> in our study (see Table <ref>). W is a sparse weight used to assign weights to different experts. In short, DRM sparsely activates the model and selectively routes the input to different subsets of experts. This process maximizes collaboration and meanwhile mitigates the interference problem. On the one hand, the interference across centers can be alleviated by sparsely routing X to different experts (with positive weights). The combinations of selected experts can be thoroughly different across centers if violent conflicts appear. On the other hand, experts in the same bank still cooperate with each other, allowing the network to best utilize the shared knowledge across centers. Expert Sparse Fusion. The final output is a weighted sum of each expert's knowledge using the sparse weight W=[W^1, W^2, ..., W^M] generated by DRM. Given an input feature X, the output X̂ of an expert bank can be obtained as: X̂=∑_m=1^M W^m ·𝐄^m(X), where 𝐄^m(·) represents an operator of 𝐄^m_ATT(·) or 𝐄^m_FFN(·). §.§ Loss Function We utilize the Charbonnier loss <cit.> with hyper-parameter ϵ as 10^-3 to penalize pixel-wise differences between the full-dose (Y) and estimated (Ŷ) PET images: ℒ=√(‖ Y-Ŷ‖^2+ϵ^2). § EXPERIMENTS AND RESULTS §.§ Dataset and Evaluation Full-dose PET images are collected from 6 different centers (C_1– C_6) at 6 different institutions[I_1 and I_5 are Peking Union Medical College Hospital; I_2 is Beijing Hospital; I_3 is Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine; I_4 is Department of Nuclear Medicine, University of Bern; I_6 is Beijing Friendship Hospital.].
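A compact PyTorch sketch of the routing mechanism and loss defined above is given below; the tensor shapes, the hidden width shared between consecutive DRMs, and the per-voxel averaging of the Charbonnier loss are illustrative assumptions and not necessarily the authors' exact implementation.

```python
import torch
import torch.nn as nn

class DynamicRoutingModule(nn.Module):
    """W = ReLU(MLP([GAP(X), H])): sparse expert weights conditioned on the current feature
    and on the hidden representation H handed over from the previous layer's DRM."""
    def __init__(self, channels: int, hidden: int, num_experts: int):
        super().__init__()
        self.fc1 = nn.Linear(channels + hidden, hidden)
        self.fc2 = nn.Linear(hidden, num_experts)

    def forward(self, x, h_prev):
        # x: (B, C, D, H, W) volumetric feature; h_prev: (B, hidden)
        gap = x.mean(dim=(2, 3, 4))                          # global average pooling
        h = torch.relu(self.fc1(torch.cat([gap, h_prev], dim=1)))
        w = torch.relu(self.fc2(h))                          # ReLU gate: non-negative, sparse weights
        return w, h                                          # h feeds the next layer's DRM

def fuse_experts(x, experts, w):
    """X_hat = sum_m W^m * E^m(X); experts whose weight is zero for the whole batch are skipped."""
    out = torch.zeros_like(x)
    for m, expert in enumerate(experts):
        if torch.any(w[:, m] > 0):
            out = out + w[:, m].view(-1, 1, 1, 1, 1) * expert(x)
    return out

def charbonnier_loss(pred, target, eps=1e-3):
    """L = sqrt(||Y - Y_hat||^2 + eps^2), here in the common per-voxel averaged form."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()
```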
The data of C_3 and C_4 <cit.> are borrowed from the Ultra-low Dose PET Imaging Challenge[Challenge site: https://ultra-low-dose-pet.grand-challenge.org/https://ultra-low-dose-pet.grand-challenge.org/. The investigators of the challenge contributed to the design and implementation of DATA, but did not participate in analysis or writing of this paper. A complete listing of investigators can be found at:https://ultra-low-dose-pet.grand-challenge.org/Description/https://ultra-low-dose-pet.grand-challenge.org/Description/.], while the data from other centers were privately collected. The key information of the whole dataset is shown in Table. <ref>. Note that C_1– C_4 are for both training and testing. We denote them as C_kn as these centers are known to the generalist model. C_5 and C_6 are unknown centers (denote as C_ukn) that are only for testing the model generalizability. The low-dose PET data is generated by randomly selecting a certain portion of the raw scans according to the dose reduction factor (DRF), e.g. the portion is 25% when DRF=4. Then we reconstruct low-dose PET images using the standard OSEM method <cit.>. Since the voxel size differs across centers, we uniformly resample the images of different centers so that their voxel size becomes 2×2×2 mm^3. In the training phase, we unfold images into small patches (uniformly sampling 1024 patches from 20 patients per center) with a shape of 64×64×64. In the testing phase, the whole estimated PET image is acquired by merging patches together. To evaluate the model performance, we choose the PSNR metric for image quantitative evaluation. For clinical evaluation, to address the accuracy of the standard uptake value (SUV) that most radiologists care about, we follow the paper <cit.> to calculate the bias of SUV_mean and SUV_max (denoted as B_mean and B_max, respectively) between low-dose and full-dose images in lesion regions. §.§ Implementation Unless specified otherwise, the intermediate channel number, expert number in a bank, and Transformer block number are 64, 3, and 5, respectively. We employ Adam optimizer with a learning rate of 10^-4. We implement our method with Pytorch using a workstation with 4 NVIDIA A100 GPUs with 40GB memory (1 GPU per center). In each training iteration, each GPU independently samples data from a single center. After the loss calculation and the gradient back-propagation, the gradients of different GPUs are then synchronized. We train our model for 200 epochs in total as no significant improvement afterward. §.§ Comparative Experiments We compare our method with five methods of two types. (i) 3D-cGAN <cit.> and 3D CVT-GAN <cit.> are two state-of-the-art methods for single center PET image synthesis. (ii) FedAVG<cit.>, FL-MRCM<cit.>, and FTL<cit.> are three federated learning methods for privacy-preserving multi-center medical image synthesis. All methods are trained using data from C_kn and tested over both C_kn and C_ukn. For methods in (i), we regard C_kn as a single center and mix all data together for training. For federated learning methods in (ii), we follow the "Mix" mode (upper bound of FL-based methods) in the paper <cit.> to remove the privacy constraint and keep the problem setting consistent with our multi-center study. Comparison Results for Known Centers. As can be seen in Table. <ref>, in comparison with the second-best results, DRMC boosts the performance by 0.77 dB PSNR, 0.0078 B_mean, and 0.0135 B_max. 
This is because our DRMC not only leverages shared knowledge by sharing some experts but also preserves center-specific information with the help of the sparse routing strategy. Further evaluation can be found in the supplement. Comparison Results for Unknown Centers. We also test the model generalization ability to unknown centers C_5 and C_6. C_5 consists of normal brain data (without lesions), which is challenging for generalization, as the brain region only occupies a small portion of the whole-body data in the training dataset but has more sophisticated structural information. C_6 is a center similar to C_1 but has different working locations and imaging preferences. The quantitative results are shown in Table <ref> and the visual results are shown in Fig. <ref> (a). DRMC achieves the best results by dynamically utilizing existing experts' knowledge for generalization. On the contrary, most comparison methods process data in a static pattern and inevitably mishandle out-of-distribution data. Furthermore, we evaluate the performance of different models on various DRF data on C_6, and the results are available in the supplement. These results indicate that our method demonstrates strong robustness. §.§ Ablation Study Specialized Model vs. Generalist Model. As can be seen in Table <ref>, the baseline model (using 15 base blocks) individually trained for each center achieves good performance on its source center. But it suffers a performance drop on other centers. The baseline model trained over multiple centers greatly enhances the overall results. But due to the center interference issue, its performance on a specific center is still far from that of the corresponding specialized model. DRMC mitigates the interference with dynamic routing and achieves performance comparable to the specialized model of each center. Ablation Study of Routing Strategy. To investigate the roles of the major components in our routing strategy, we conduct ablation studies by (i) removing the conditioning on the hidden representation H that builds the cross-layer connection, and by replacing the ReLU activation with (ii) softmax activation <cit.> and (iii) top-2 gating <cit.>. The results are shown in Table <ref>. We also analyze the interpretability of the routing by showing the distribution of different layers' top-1 weighted experts using the testing data. As shown in Fig. <ref> (b), different centers show similarities and differences in the expert distribution. For example, C_6 shows the same distribution as C_1, since their data share many similarities, while C_5 presents a very different pattern since brain data differ a lot from whole-body data. Ablation Study of Hyperparameters. In Fig. <ref> (c) and (d), we show ablation results on the expert number (M) and the block number (N). We set M=3 and N=5, as this configuration has demonstrated good performance while maintaining acceptable computational complexity. § CONCLUSION In this paper, we innovatively propose a generalist model with dynamic routing (DRMC) for multi-center PET image synthesis. To address the center interference issue, DRMC sparsely routes data from different centers to different experts. Experiments show that DRMC achieves excellent generalizability. § SUPPLEMENT Center Interference.
To quantify the interference of the j-th center task on the i-th center task, we estimate the change in the loss ℒ_i of the i-th center task when the shared parameters θ are optimized according to the j-th center task's loss ℒ_j as follows: Δ_j ℒ_i(X_i) ≐𝔼_X_j(ℒ_i(X_i ; θ)-ℒ_i(X_i ; θ-λ∇_θℒ_j(X_j)/‖∇_θℒ_j(X_j)‖)) ≈λ𝔼_X_j((∇_θℒ_j(X_j)/‖∇_θℒ_j(X_j)‖)^T ∇_θℒ_i(X_i)), where X_i and X_j are the sampled training batches of the i-th and j-th centers, respectively. In the implementation, we sample 100 batches from each center for the interference calculation. The interference of the j-th center task on the i-th center task can then be quantified as ℐ_i, j=𝔼_X_i(Δ_j ℒ_i(X_i)/Δ_i ℒ_i(X_i)), where the denominator is used to normalize the scale of the loss change. SSIM Evaluation. To further assess the performance of our method, we compare the SSIM of our method with that of the comparison methods. The results are presented in Table <ref> and indicate that DRMC achieves the highest SSIM. Evaluation on Different DRF Data. To verify the robustness of the model on different dosage data, we conducted tests on the unknown center C_6. Table <ref> presents the comparison results, demonstrating that our DRMC exhibits superior generalizability across different DRF data.
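The interference estimate above reduces to simple gradient bookkeeping and is straightforward to prototype. The following PyTorch sketch is only an illustration under stated assumptions: a toy linear model and random per-center batches stand in for the restoration network and PET patches, and only the first-order (dot-product) approximation of Δ_j ℒ_i is evaluated.

```python
# Rough sketch of the center-interference estimate: the change in center i's
# loss when the shared parameters take a small step along center j's
# normalized gradient, approximated by the gradient dot product.
# Toy model and random data are placeholders for the real networks and patches.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(8, 1)                       # stand-in for shared parameters
loss_fn = nn.MSELoss()

def center_batch(center_id, n=32):
    """Placeholder for a training batch sampled from one center."""
    x = torch.randn(n, 8) + 0.5 * center_id   # crude center-specific shift
    y = x.sum(dim=1, keepdim=True)
    return x, y

def flat_grad(center_id):
    """Flattened gradient of that center's loss w.r.t. the shared parameters."""
    x, y = center_batch(center_id)
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def delta_loss(i, j, lam=1e-2, n_batches=10):
    """First-order estimate of Delta_j L_i, averaged over sampled batches."""
    vals = []
    for _ in range(n_batches):
        g_j = flat_grad(j)
        g_i = flat_grad(i)
        vals.append(lam * torch.dot(g_j / g_j.norm(), g_i))
    return torch.stack(vals).mean()

# Interference of center j on center i, normalized by the "self" change.
i, j = 0, 1
I_ij = delta_loss(i, j) / delta_loss(i, i)
print(f"interference I_{i}{j} = {I_ij.item():.3f}")
```

The same bookkeeping applies to the full network; only the model and the batch sampler change.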
http://arxiv.org/abs/2307.05576v1
20230710121906
Bulk viscous universe with cosmological constant
[ "Athira Sasidharan", "Titus K Mathew" ]
gr-qc
[ "gr-qc" ]
Bulk viscous universe with cosmological constant Athira Sasidharan^* and Titus K Mathew^+ e-mail:[email protected], [email protected] ^*Department of Physics, NSS Hindu College, Changanacherry, Kerala, India ^+Department of Physics, Cochin University of Science and Technology, India. In this paper we consider dissipative effects in the ΛCDM model, i.e., we consider a universe with a cosmological constant and viscous matter. We assume the most general form for the bulk viscous coefficient, ζ=ζ_0+ζ_1ȧ/a+ζ_2ä/ȧ, and obtain various constraints on the ζ's. We also study the background evolution of the model with ζ=ζ_0 and with ζ=ζ_1ȧ/a. We extract the value of ζ_1 using the Pantheon data and also obtain the thermodynamic evolution and the age of the universe. § INTRODUCTION Since the discovery of the accelerating universe <cit.>, active research has been taking place to identify the cause of the acceleration and also to find a model that would incorporate this acceleration. To date, there are many models that fit this acceleration. Of these, the simplest and the most successful is the interpretation of dark energy as the cosmological constant. However, the discrepancy between the observed and calculated values of the dark energy density, known as the cosmological constant problem<cit.>, and the unexplained coincidence of the two dark sectors, dark energy and dark matter, known as the cosmic coincidence problem <cit.>, make room for other models in explaining the current acceleration. Some of these models include quintessence <cit.>, k-essence <cit.> and perfect fluid models (like the Chaplygin gas model) <cit.>, f(R) gravity <cit.>, f(T) gravity <cit.>, Gauss-Bonnet theory <cit.>, Lovelock gravity <cit.>, Horava-Lifshitz gravity <cit.>, scalar-tensor theories <cit.>, braneworld models <cit.>, etc. A less complicated class of unified dark energy models is the bulk viscous models. In <cit.>, a bulk viscous matter-dominated universe is considered, and it was found that this viscosity alone can produce acceleration in the expansion of the universe. Phase space analysis of this model indicates that only viscosity with a constant bulk viscous coefficient predicts all the conventional phases of the universe, i.e., a prior radiation-dominated phase, followed by a decelerated matter-dominated phase, and then finally evolving to a de Sitter type universe <cit.>. A Bayesian analysis of this model shows that it is not markedly superior to the ΛCDM model, but has only a slight advantage over it<cit.>. However, Maartens <cit.> has pointed out that these viscous models violate the near-equilibrium condition (NEC), Π/P≪ 1. There are works <cit.> showing that Λ is an inevitable content of the universe. The matter content of the universe has dissipation, so it is worthwhile to consider a universe filled with viscous matter having a cosmological constant <cit.>. Also, recent papers <cit.> showed that introducing Λ with viscosity can satisfy the NEC. We neglect other dissipative phenomena like shear viscosity, as it is inconsistent with the isotropic nature of the universe. So the only viscosity component to be considered is the bulk viscosity. In this paper we first analyse the basic formalism of the bulk viscous matter dominated universe with a cosmological constant. We consider the general form for the bulk viscous coefficient and, using the Eckart formalism, obtain the expressions for the Hubble parameter and the scale factor. We also analyse the equation of state parameter and the deceleration parameter, and from the behavior of these parameters, constraints on the viscous parameters are obtained. 
In section <ref>, we did the background study of the model for constant viscosity and constrains on the parameter is also obtained. We also analysed the age, thermodynamic behavior and asymptotic behavior of the model. In section <ref>,we consider the viscous coefficient as a function of Hubble parameter, i.e., ζ=ζ_1 H, extracted the value of ζ_1 and studied the background evolution and cosmological parameters and the age of the universe. In section <ref>, the results and conclusion are discussed. § VISCOUS MATTER WITH COSMOLOGICAL CONSTANT We consider a spatially flat universe described by FLRW metric. We assume that the universe contains viscous matter (both dark and baryonic) and cosmological constant as dark energy. We neglect the radiation component since its percentage composition is very small and also we are dealing with the late time acceleration. Eckart formalism <cit.> is used for the bulk viscous pressure and is given by, P^*=P-3ζ H where P is the normal pressure, which we assume as zero for the whole matter component of the universe (both dark and baryonic) and ζ is the coefficient of bulk viscosity. So the effective pressure will only be that from the bulk viscosity. The coefficient ζ is basically a transport coefficient, hence it would depend on the dynamics of the cosmic fluid. Since the exact form of ζ is unknown, we consider the most general form for the bulk viscous coefficient ζ <cit.>, which is a linear combination of the three terms as, ζ=ζ_0+ζ_1ȧ/a+ζ_2ä/ȧ The first term is a constant ζ_0, the second term is proportional to the Hubble parameter, which characterizes the dependence of the bulk viscosity on velocity, and the third is proportional to ä/ȧ, characterizing the effect of acceleration on the bulk viscosity.In terms of Hubble parameter H=ȧ/a, this can be written as, ζ=ζ_0+ζ_1H+ζ_2(Ḣ/H+H) The Friedmann equations governing the bulk viscous universe with cosmological constant are given as, H^2=ρ_m+ρ_Λ/3 2ä/a+(ȧ/a)^2=ρ_Λ-P^* where we have taken 8π G = 1, ρ_m and ρ_Λ=Λ/8π G are the densities of matter and cosmological constant Λ, respectively and overdot represents the derivative with respect to cosmic time t. We consider separate conservation equations for matter and dark energy and are given below, ρ̇_m+3H(ρ_m+P^*)=0. ρ̇_Λ=0 where we have assumed a constant equation of state for Λ, ω_Λ=-1. Using the Friedmann equations (<ref>) and (<ref>) and using equations. (<ref>) and (<ref>), we get the differential equation for the Hubble parameter as, Ḣ=1/2-ζ̃_2(ζ̃_0 HH_0+(ζ̃_1+ζ̃_2-3)H^2+3H_0^2 Ω_Λ 0) where we have defined the dimensionless bulk viscous parameters ζ̃_0, ζ̃_1, ζ̃_2 as, ζ̃_0=3ζ_0/H_0, ζ̃_1=3ζ_1, ζ̃_2=3ζ_2 H_0 is the present value of the Hubble parameter and Ω_Λ 0 is the present density parameter of dark energy. Integrating equation (<ref>) we can get the expression for the Hubble parameter as, H=H_0[(y+ζ̃_0)(y-2(ζ̃_1+ζ̃_2-3)-ζ̃_0)e^H_0(t-t_0)y/2-ζ̃_2-(y-ζ̃_0)(y+2(ζ̃_1+ζ̃_2-3)+ζ̃_0)/2(ζ̃_1+ζ̃_2-3)(e^H_0(t-t_0)y/2-ζ̃_2(-y+2(ζ̃_1+ζ̃_2-3)+ζ̃_0)-(y+2(ζ̃_1+ζ̃_2-3)+ζ̃_0))] where y=√(ζ̃_0^2-12Ω_Λ 0(ζ̃_1+ζ̃_2-3)) and t_0 is the present cosmic time. As t-t_0→∞, H→ H_0[y+ζ̃_0/2(ζ̃_1+ζ̃_2-3)], a constant provided ζ̃_2<2. When t-t_0 is small, H evolves as H_0[2(2-ζ̃_2)+H_0(t-t_0)(ζ̃_0+6Ω_Λ 0+y)/2(2-ζ̃_2)+H_0(t-t_0)(y-2(ζ̃_1+ζ̃_2-3)-ζ̃_0)]. 
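The differential equation for the Hubble parameter given above is also easy to integrate numerically, which provides a quick cross-check that H(t) settles to a constant (de Sitter-like) value when ζ̃_2 < 2. The snippet below is a minimal sketch using scipy with illustrative placeholder values for the dimensionless viscous parameters and Ω_Λ0, not fitted values.

```python
# Numerical integration of
#   dH/dt = [zeta0*H*H0 + (zeta1 + zeta2 - 3)*H^2 + 3*H0^2*OmegaL] / (2 - zeta2),
# i.e. the differential equation for the Hubble parameter quoted above.
# The parameter values are illustrative placeholders, not fitted values.
import numpy as np
from scipy.integrate import solve_ivp

H0 = 1.0                                  # units with present Hubble rate = 1
zeta0, zeta1, zeta2 = 0.5, 0.2, 0.1       # dimensionless viscous parameters
OmegaL = 0.7

def dH_dt(t, y):
    H = y[0]
    return [(zeta0 * H * H0 + (zeta1 + zeta2 - 3.0) * H**2
             + 3.0 * H0**2 * OmegaL) / (2.0 - zeta2)]

sol = solve_ivp(dH_dt, (0.0, 20.0), [H0], dense_output=True, rtol=1e-8)
for t in (0.0, 2.0, 5.0, 10.0, 20.0):
    print(f"t - t0 = {t:5.1f} H0^-1  ->  H/H0 = {sol.sol(t)[0]:.4f}")
```

With these placeholder values the Hubble rate levels off to a constant at late times, consistent with the asymptotic behavior described above.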
Using the definition of the Hubble parameter, we could obtain the expression for the scale factor from equation (<ref>) as, a=e^H_0(t-t_0)(y-ζ̃_0)/2(ζ̃_1+ζ̃_2-3)[y+2(ζ̃_1+ζ̃_2-3)+ζ̃_0+e^H_0(t-t_0)y/2-ζ̃_2(y-2(ζ̃_1+ζ̃_2-3)-ζ̃_0)/2y]^ζ̃_2-2/ζ̃_1+ζ̃_2-3 When Ω_Λ 0=0, the scale factor reduces to a(t)=[(ζ̃_0+ζ̃_12-3/ζ̃_0)+(3-ζ̃_12/ζ̃_0) e^ζ̃_0/2-ζ̃_2H_0(t-t_0)]^2-ζ̃_2/3-ζ̃_12 which is the expression obtained in <cit.>. When t-t_0 is small, the scale factor evolves as a∼[1+H_0(t-t_0)(y-ζ̃_0)/2(ζ̃_1+ζ̃_2-3)][1+H_0(t-t_0)/2-ζ̃_2(y-2(ζ̃_1+ζ̃_2-3)-ζ̃_0)]^ζ̃_2-2/ζ̃_1+ζ̃_2-3 When t-t_0 is very large, from the expression of scale factor we see that it will increases exponentially. §.§ Equation of state and Deceleration parameter The equation of state parameter ω and the deceleration parameter q can be obtained using the following relation, ω=-1-2/3Ḣ/H^2 q=-1-Ḣ/H^2 Using the expression (<ref>) and (<ref>), we get the expressions for ω and q as, ω=-1+2y^2 (ζ̃_0+ζ̃_1+ζ̃_2-3+3 Ω _Λ 0) /3 (ζ̃_2-2) (Sinh[H_0 (t-t_0) y/2 (2-ζ̃_2)] (ζ̃_0+6 Ω _Λ 0)+Cosh[H_0 (t-t_0) y/2 (2-ζ̃_2)] y)^2 q=-1+y^2(ζ̃_0+ζ̃_1+ζ̃_2-3+3 Ω _Λ 0)/(ζ̃_2-2) (Sinh[H_0 (t-t_0) y/2 (2-ζ̃_2)] (ζ̃_0+6 Ω _Λ 0)+Cosh[H_0 (t-t_0) y/2 (2-ζ̃_2)] y)^2 The present value of ω and q can be obtained by putting t=t_0 and are, ω_0=2ζ̃_0+2ζ̃_1-ζ̃_2+6Ω_Λ 0/3(ζ̃_2-2) q_0=ζ̃_0+ζ̃_1-1+3Ω_Λ 0/ζ̃_2-2 The present universe will be accelerating only if 3ω_0+1<0 and q_0<0 and for the universe to be in quintessence region and to avoid big rip, it should satisfy the relation q_0>-1. Using these conditions and from the behaviour of the Hubble parameter and the scale factor, for a universe to begin from the big bang and then entering it to decelerated epoch and then making a transition to the accelerated epoch in the past, a set of conditions has to be satisfied by the ζ̃'s. These conditions are, * ζ̃_0>0, ζ̃_2<2, ζ̃_0+ζ̃_1>1-3Ω_Λ 0, ζ̃_1+ζ̃_2<3, ζ̃_0+ζ̃_1+ζ̃_2<3-3Ω_Λ 0 * ζ̃_0<0, ζ̃_2>2, ζ̃_0+ζ̃_1<1-3Ω_Λ 0, ζ̃_1+ζ̃_2>3, ζ̃_0+ζ̃_1+ζ̃_2>3-3Ω_Λ 0 If we neglect the cosmological constant i.e., Ω_Λ 0=0, then these would reduce to the conditions obtained in the reference <cit.>. § WITH CONSTANT BULK VISCOSITY Let us consider the case when bulk viscous coefficient is a constant, i.e., when ζ=ζ_0 . The expression for Hubble parameter becomes, H=H_0y-ζ̃_0-6 Ω _Λ0+e^1/2 H_0 (t-t_0) y(y+ζ̃ _0+6 Ω _Λ0)/y+ζ̃ _0-6+e^1/2 H_0 (t-t_0) y(y-ζ̃ _0+6) where y=√(ζ̃_0^2+36Ω_Λ 0) Similarly, one could obtained the expression for scale factor for constant ζ as, a=e^1/6 H_0(t-t_0) (ζ̃ _0-y)((y+ζ̃ _0-6)+e^H_0(t-t_0)y/2( y-ζ̃ _0+6)/2 y)^2/3 Similarly, the corresponding equation of state and the deceleration parameter for constant viscosity becomes, ω=(-1-(ζ̃ _0-3+3 Ω _Λ 0) y^2/3(y Cosh[1/4 H_0 (t-t_0) y]+(ζ̃ _0+6 Ω _Λ 0) Sinh[1/4 H_0 (t-t_0) y])^2) q=(-1-(ζ̃ _0-3+3 Ω _Λ 0) y^2/2(y Cosh[1/4 H_0 (t-t_0)y]+(ζ̃ _0+6 Ω _Λ 0) Sinh[1/4H_0 (t-t_0)y])^2) As mentioned before, for an accelerating universe, the present value of equation of state ω_0<-1/3 and the present value of the deceleration parameter q_0<0. To avoid big rip, the equation of state parameter ω_0>-1, above the phantom limit. These conditions help us to constrain the value of ζ̃_0 as 1-3Ω_Λ 0<ζ̃_0<3(1-Ω_Λ 0). From observation Ω_Λ is constrained in the range 0.65-0.75 <cit.>. This constrains the ζ̃_0 in between -1.25<ζ̃_0<1.05. §.§ Age of the universe Age of the universe in this case can be obtained by equating a=1 in the equation (<ref>) and is found to be, Age≡(2/H_0 y)Log[1-2 y/6+y-ζ̃_0]. 
The plot of the age of the universe for different values of (ζ̃_0,Ω_Λ), subject to the constraint (<ref>), is shown in figure (<ref>). The age plot shows reasonably good agreement for (ζ_0,Ω_Λ)=(-0.5,0.7), slightly weaker agreement for (ζ_0,Ω_Λ)=(0.1,0.68), and poor agreement for the third choice. However, for the best-agreement pair the viscosity is negative. Whether this is physically feasible or not may become evident from further consideration of the entropy evolution and the dynamical system behaviour. §.§ Thermodynamics We now check the validity of the generalized second law (GSL) and the entropy maximization condition in this case. Assuming the apparent horizon as the boundary of the universe and obtaining the horizon entropy using the Bekenstein relation and the matter entropy using the Gibbs equation, we calculate the expressions for the first and second derivatives of the total entropy with respect to time. The relations obtained are as follows: Ṡ=64 π ^2e^t' ỹb^2 ỹ^4 (ỹ-6+ζ̃_0 +e^1/2 t' ỹ (ỹ+6-ζ̃_0))/H_0 (ỹ-ζ̃_0 -6 Ω_Λ +e^1/2 t' ỹ (6 Ω_Λ +ỹ+ζ̃_0))^5, S̈=-384 π ^2 b^2 ỹ^5 e^3/2t' ỹ(b ỹ+2 (1+Ω_Λ) ỹCosh[1/2t' ỹ]+2 d Sinh[1/2t'ỹ])/((-1+e^1/2t'ỹ)ζ̃_0 -6 Ω_Λ+ỹ+e^1/2t'ỹ (6 Ω_Λ +ỹ))^6, where b=ζ̃_0+3Ω_Λ 0-3, d=ζ̃_0+12 Ω_Λ-ζ̃_0Ω_Λ and t'=H_0(t-t_0). The evolution of Ṡ and S̈ with respect to the scale factor for different values of Ω_Λ and ζ̃_0, subject to the constraint (<ref>), is plotted in figures (<ref>) and (<ref>), respectively. From the figures, it is clear that the GSL and the entropy maximization condition are valid for the model. §.§ Phase space analysis We also study the asymptotic behavior of the model. We choose u and v as the phase space variables, defined as u =Ω_m=ρ_m/3H^2, v =1/(H_0/H+1), which vary in the ranges 0≤ u≤1 and 0≤ v≤1. Using the conservation equation and the differential equation for the Hubble parameter, we obtain the autonomous equations for u and v as, u' =(1-v)/v^2(v(1-u)ζ̃_0 -3Ω_Λ u (1-v)), v' =(1-v)/2 v(3Ω_Λ(1-v)^2+ζ̃_0 v(1-v)-3v^2). There are three critical points for the above autonomous equations, and the corresponding eigenvalues are listed in Table <ref>. In order to represent a universe with an unstable matter-dominated phase and a stable, physically feasible accelerated phase, ζ̃_0 must be positive, subject to the constraint (<ref>). In determining the age corresponding to this model, we noted that the best fit arises both for a negative value of ζ_0 and for a positive value (the black line in the age plot). The asymptotic analysis presented here, however, supports only a positive value of ζ_0. In the earlier analysis without the cosmological constant, we also concluded that the case ζ=ζ_0 is preferred over the other cases. Thus, even though the age prediction has changed slightly, the present model also predicts a conventional evolution of the universe with constant viscosity, as in the case of the model without the cosmological constant. § WITH Ζ=Ζ_1H Let us consider another special case, ζ=ζ_1 H. Here ζ depends only on the velocity component of the expansion of the universe. 
The expression for the Hubble Parameter and the scale factor are as follows, H=-√(3) H_0Ω_Λ 0(6-2 ζ̃ _1-2√(3(3-ζ̃ _1)Ω_Λ 0)+2e^ H_0(t-t_0) √(3(3-ζ̃ _1)Ω_Λ 0)(3-ζ̃ _1+√(3(3-ζ̃ _1)Ω_Λ 0)))/√((3-ζ̃ _1)Ω_Λ 0)(6-2 ζ̃ _1-2√(3(3-ζ̃ _1)Ω_Λ 0)-2 e^ H_0(t-t_0) √(3(3-ζ̃_1)Ω_Λ 0) (3-ζ̃ _1+ √(3(3-ζ̃ _1)Ω_Λ 0))) ł a=12^1/ζ̃_1-3e^-√(3) H_0 (t-t_0)Ω_Λ 0/√((3-ζ̃_1)Ω_Λ 0)(ζ̃_1-3+√(3(3-ζ̃_1)Ω_Λ 0)+e^ H_0(t-t_0)√(3(3-ζ̃_1)Ω_Λ 0)(3-ζ̃_1+√(3(3-ζ̃_1)Ω_Λ 0))/√((3-ζ̃_1)Ω_Λ 0))^2/3-ζ̃_1 From the expression of Hubble parameter and the scale factor, we see that inorder to represent the conventional behavior of the universe, ζ̃_1 should be less than 3. In this case one could obtain the expression for the Hubble parameter in terms of the scale factor a. And it is found to be, H=H_0√([a^ζ̃_1-3(ζ̃_1-3+3Ω_Λ 0)-3Ω_Λ 0/ζ̃_1-3]) Since a direct relation between the Hubble parameter H and the scale factor a is found out, it is possible to extract the value of ζ_1. §.§ Extraction of ζ̃_1 To extract the value of ζ̃_1, we use the latest Pantheon Type Ia Supernova data consisting of 1048 data points.. The method used is the χ^2 minimization technique and is defined as, χ^2≡∑^n_k=1[μ_t-μ_k]^2/σ_k^2, where μ_k is the observational distance modulus for the k-th Supernova (obtained from the data) with red shift z_k, σ_k^2 is the variance of the measurement, n is the total number of data and μ_t is the theoretical distance modulus for the k-th Supernova with the same redshift z_k, which is given as μ_t=m-M=5log_10[d_L/Mpc]+25 where, m and M are the apparent and absolute magnitudes of the SNe respectively. d_L is the luminosity distance and is defined as d_L=c(1+z)∫_0^zdz'/H, where c is the speed of light. Using the expression for H from equation (<ref>), we construct the χ^2 function. We extract the values of Ω_Λ 0 and H_0 along with ζ̃_1. The values are given in the table below <ref>. §.§ Evolution of equation of state parameter and deceleration parameter The expression for the equation of state parameter and the deceleration parameter for this model can be obtained by making ζ̃_0=ζ̃_2=0 in the equations (<ref>) and (<ref>) respectively. ω=-1-(ζ̃_1-3)(ζ̃_1-3+3 Ω_Λ 0)/(√(3(ζ̃_1-3))Cos[1/2 H_0 (t-t_0) √(3Ω_Λ 0(ζ̃_1-3))] +3√(Ω_Λ 0)Sin[1/2H_0 (t-t_0) √(3Ω_Λ 0(ζ̃_1-3))] )^2 q=-1-3 (ζ̃_1-3)(ζ̃_1-3+3 Ω _Λ 0)/2 (√(3(ζ̃_1-3))Cos[1/2H_0(t-t_0)√(3Ω _Λ 0(ζ̃_1-3))] +3 √(Ω_Λ 0)Sin[1/2√(3) H_0 (t-t_0) √(ζ̃_1-3)√(Ω_Λ 0)] )^2 The equation of state parameter ω and the deceleration parameter q, in terms of scale factor are given as, ω=9 a^3 Ω_Λ 0 -a^ζ̃_1ζ̃_1 (ζ̃_1 -3+3 Ω_Λ 0)/-9 a^3 Ω_Λ 0 +3 a^ζ̃_1 (ζ̃_1 -3+3 Ω_Λ 0), q=-1-a^ζ̃_1 (-3+ζ̃_1) (ζ̃_1 -3+3 Ω_Λ )/-6 a^3 Ω_Λ +2 a^ζ̃_1 (ζ̃_1-3+3 Ω_Λ ). The plot of ω and q for the best estimated values of ζ̃_1 and Ω_Λ are shown in the figures <ref> and <ref> respectively. The equation of state is zero in the recent past. It decreases to the negative values and finally saturated at ω=-1 corresponding to a de Sitter epoch in the extreme future. The evolution of the deceleration parameter starts from around q ∼ 0.5 in the past, which corresponds to decelerated epoch and decreasing as the universe expands. It saturates at q=-1 corresponding the future de Sitter phase. The present value of ω and q can be obtained by putting a=1 in the expressions given by equation (<ref>) and (<ref>), respectively and are obtained as, ω_0=-ζ̃_1 /3-Ω_Λ, q_0=1/2 (1-ζ̃_1 -3Ω_Λ ). Using the best estimated values of ζ̃_1 and Ω_Λ, we get ω_0=-0.867033 and q_0=-0.80055, which is near to concordance value obtained by WMAP observation. 
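The χ² extraction described above can be prototyped with standard scientific-Python tools. The sketch below is not the analysis behind Table <ref>: synthetic supernova distance moduli generated from assumed "true" parameters stand in for the Pantheon compilation (which must be obtained separately), H(z) follows the relation for ζ=ζ_1 H quoted above with a=1/(1+z), and scipy is used to minimize the χ² over (ζ̃_1, Ω_Λ0, H_0).

```python
# Rough sketch of the chi^2 extraction of (zeta1_tilde, OmegaL, H0) from
# type Ia supernova distance moduli.  Synthetic data generated from assumed
# "true" parameters stand in for the Pantheon sample.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

C_KM_S = 299792.458                       # speed of light in km/s

def hubble(z, zeta1, OmegaL, H0):
    """H(z) in km/s/Mpc from the H(a) relation above, with a = 1/(1+z)."""
    a = 1.0 / (1.0 + z)
    return H0 * np.sqrt((a**(zeta1 - 3.0) * (zeta1 - 3.0 + 3.0 * OmegaL)
                         - 3.0 * OmegaL) / (zeta1 - 3.0))

def distance_modulus(z, zeta1, OmegaL, H0):
    integral, _ = quad(lambda zp: 1.0 / hubble(zp, zeta1, OmegaL, H0), 0.0, z)
    d_L = C_KM_S * (1.0 + z) * integral   # luminosity distance in Mpc
    return 5.0 * np.log10(d_L) + 25.0

# --- synthetic "observations" in place of the Pantheon compilation ---
rng = np.random.default_rng(1)
z_obs = np.sort(rng.uniform(0.01, 1.5, 60))
sigma = 0.15
mu_obs = np.array([distance_modulus(z, 0.3, 0.7, 70.0) for z in z_obs])
mu_obs += rng.normal(0.0, sigma, z_obs.size)

def chi2(params):
    zeta1, OmegaL, H0 = params
    # stay on the physical branch where H(z)^2 > 0 for all redshifts
    if not (0.0 < zeta1 < 3.0 and 0.0 < OmegaL < 1.0 and H0 > 0.0
            and zeta1 + 3.0 * OmegaL < 3.0):
        return np.inf
    mu_t = np.array([distance_modulus(z, zeta1, OmegaL, H0) for z in z_obs])
    val = np.sum((mu_t - mu_obs)**2 / sigma**2)
    return val if np.isfinite(val) else np.inf

fit = minimize(chi2, x0=[0.5, 0.65, 72.0], method="Nelder-Mead",
               options={"maxiter": 400, "xatol": 1e-3, "fatol": 1e-2})
print("best-fit (zeta1_tilde, OmegaL, H0):", np.round(fit.x, 3))
```

Replacing the synthetic arrays with the actual Pantheon redshifts, distance moduli, and uncertainties recovers the setup described in the text.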
§.§ Age of the universe The age of the universe in this model can be obtained by setting the scale factor (equation (<ref>)) equal to one and is found to be Age≡Log[3-ζ̃_1-√(3)√((3-ζ̃_1) Ω _Λ 0)/3-ζ̃_1+√(3)√((3-ζ̃_1) Ω _Λ 0)]/√(3) p √(-(-3+ζ̃_1) Ω _Λ 0). Using the best estimated values of ζ̃_1 and Ω_Λ, the age is found to be 18.44 Gyr, which matches the concordance value of the age of the universe obtained from the oldest globular cluster observations. In this way the model is promising in predicting the age. § CONCLUSION We analyse a universe with a cosmological constant and bulk viscous matter. By considering the general form ζ=ζ_0+ζ_1ȧ/a+ζ_2ä/ȧ, we obtain constraints on the viscous parameters from the evolution of the Hubble parameter, the scale factor and the cosmological parameters. Two special cases of the viscous coefficient ζ are considered: ζ=ζ_0, a constant, and ζ=ζ_1 H, which depends on the velocity of the expanding universe. For ζ=ζ_0, the constraint is -1.25<ζ̃_0<1.05. It is also found that, under this constraint, the age of the universe is in accordance with the galactic observations. The GSL and the entropy maximization condition are also found to be valid for the model. For ζ=ζ_1H, the value of ζ_1 is extracted using the Pantheon data and is found to be 0.351. The present values of the deceleration parameter and the equation of state are found to be q_0=-0.80055 and ω_0=-0.867033, respectively, which are close to the concordance values obtained from the WMAP observations. The age is found to be 18.44 Gyr, which matches the observations. The addition of the cosmological constant to the bulk viscous matter-dominated universe improves the predicted age of the universe as well as the other cosmological parameters. Riess1 A. G. Riess et al., Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant, Astron. J., 116, 1009 (1998). Perl1 S. Perlmutter et al., Measurements of Ω and Λ from 42 High-Redshift Supernovae, Astrophys. J., 517, 565 (1999). Bennet1 C. L. Bennett et al., First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Preliminary Maps and Basic Results, Astrophys. J. Suppl. Ser., 148, 1 (2003). Tegmark1 Tegmark et al., Cosmological parameters from SDSS and WMAP, Phys. Rev. D, 69, 103501 (2004). Seljak Seljak et al., Cosmological parameter analysis including SDSS Ly forest and galaxy bias: Constraints on the primordial spectrum of fluctuations, neutrino mass, and dark energy, Phys. Rev. D, 71, 103515 (2005). Komatsu1 E. Komatsu et al., Seven-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation, Astrophys. J. Suppl. Ser., 192, 18 (2011). Weinberg S. Weinberg, The cosmological constant, Rev. Mod. Phys., 61, 1 (1989). Carroll S. M. Carroll, Living Rev. Rel., 4, 1 (2001). Zlatev Zlatev, L. Wang and P. J. Steinhardt, Quintessence, Cosmic Coincidence and the Cosmological constant, Phys. Rev. Lett., 82, 896 (1999). fujii Yasunori Fujii, Origin of the gravitational constant and particle masses in a scale-invariant scalar-tensor theory, Phys. Rev. D, 26, 2580 (1982). carroll Sean M. Carroll, Quintessence and the Rest of the World: Suppressing Long-Range Interactions, Phys. Rev. Lett., 81, 3067 (1998). chiba1 Takeshi Chiba, Takahiro Okabe and Masahide Yamaguchi, Kinetically driven quintessence, Phys. Rev. D, 62, 023511 (2000). kamen1 Alexander Kamenshchik, Ugo Moschella and Vincent Pasquier, An alternative to quintessence, Phys. Lett. B, 511 265 (2001). capo1 Salvatore Capozziello, Curvature quintessence, Int. J. Mod Phys D, 11 483 (2002). 
ferraro1 R. Ferraro and F. Fiorini, Modified teleparallel gravity: Inflation without an inflaton, Phys. Rev. D, 75 084031 (2007). nojiri Shin'ichi Nojiri, Sergei D. Odintsov, and Misao Sasaki, Gauss-Bonnet dark energy, Phys. Rev. D, 71 123509 (2005). pad2 T. Padmanabhan and D. Kothawala, Lanczos-Lovelock models of gravity, Phys. Rep., 531 115 (2013). horava1 Petr Hořřava, Quantum gravity at a Lifshitz point, Phys. Rev. D, 79 084008 (2009). amendola1 Luca Amendola, Scaling solutions in general nonminimal coupling theories, Phys. Rev. D, 60 043501 (1999). dvali1 Gia Dvali, Gregory Gabadadze and Massimo Porrati, 4D gravity on a brane in 5D Minkowski space, Phys. Lett. B, 485 208 (2000). fabris1 J. C. Fabris, and S. V. B. Gonçalves, and R.de Sá Ribeiro, Bulk viscosity driving the acceleration of the Universe, Gen. Relat. Gravit., 38 495 (2006). li1 Baojiu Li and John D. Barrow, Does bulk viscosity create a viable unified dark matter model?, Phys. Rev. D, 79 103521 (2009). Hiplito1 W. S. Hipólito-Ricaldi and H. E. S. Velten, and W. Zimdahl, Viscous dark fluid universe, Phys. Rev. D, 82 063507 (2010). av1 Arturo Avelino and Ulises Nucamendi, Can a matter-dominated model with constant bulk viscosity drive the accelerated expansion of the universe?, JCAP, 04 006 (2009). av2 Arturo Avelino and Ulises Nucamendi, Exploring a matter-dominated model with bulk viscosity to drive the accelerated expansion of the Universe, JCAP, 08 009 (2010). Athira1 Athira Sasidharan and Titus K. Mathew, Bulk viscous matter and recent acceleration of the universe, Eur. Phys. J. C, 75, 348 (2015). Jerin1 N D Jerin Mohan, Athira Sasidharan and Titus K. Mathew,Bulk viscous matter and recent acceleration of the universe based on causal viscous theory, Eur. Phys. J. C, 77, 849 (2017). Athira2 Athira Sasidharan and Titus K. Mathew, Phase space analysis of bulk viscous matter dominated universe, JHEP, 06, 138 (2016). Athira3 Athira Sasidharan, N. D. Jerin Mohan, Moncy V. John and Titus K. Mathew, Bayesian analysis of bulk viscous matter dominated universe, Eur. Phys. J. C, 78, 628 (2018). Maartens R Maartens, Dissipative Cosmology, Classical and Quantum Gravity, 12, 1455 (1995). Gron N. Mostafapoor and O Gron, Viscous ΛCDM universe models, Astrophys. Space Sci, 333, 357-368 (2011). Cruz N Cruz, E Gonzalez and J Jovel, Study of a viscous ΛWDM model : Near-Equilibrium Condition, Entropy Production and Cosmological constrains, Symmetry, 14 1866 (2022). Cruz1 N Cruz, E Gonzalez and J Jovel, Singularities and soft- Big Bang in a viscous ΛCDM model, Phys. Rev. D, 105 024047 (2022). Eckart1 Carl Eckart, The Thermodynamics of Irreversible Processes. III. Relativistic Theory of the Simple Fluid, Phys. Rev. 58 (1940) 919. weinberg2 S. Weinberg, Gravitation and cosmology: principles and applications of the general theory of relativity, John Wiley & sons Inc., New york U.S.A. (1972). ren1 Jie Ren and Xin-He Meng, Cosmological model with viscosity media (dark fluid) described by an effective equation of state, Phys. Lett. B, 633 1 (2006). Singh J.P. Singh, Pratibha Singh, Raj Bali, Bulk viscosity and decaying vacuum density in Friedmann universe, Int J Theor Phys, 51 3828 (2012). Avelino A. Avelino et.al, Bulk Viscous Matter-dominated Universes: Asymptotic Properties, JCAP, 1308 12 (2013).
http://arxiv.org/abs/2307.04881v1
20230710200317
Ab initio methods for polariton chemistry
[ "Jonathan J. Foley IV", "Jonathan F. McTague", "A. Eugene DePrince III" ]
physics.chem-ph
[ "physics.chem-ph" ]
Department of Chemistry, University of North Carolina Charlotte, Charlotte, North Carolina, 28223 [email protected] Department of Chemistry William Paterson University Wayne, New Jersey, 07470 Department of Chemistry and Biochemistry, Florida State University, Tallahassee, FL 32306-4390 [email protected] Polariton chemistry exploits the strong interaction between quantized excitations in molecules and quantized photon states in optical cavities to affect chemical reactivity. Molecular polaritons have been experimentally realized by the coupling of electronic, vibrational, and rovibrational transitions to photon modes, which has spurred tremendous theoretical effort to model and explain how polariton formation can influence chemistry. This tutorial review focuses on a particular thrust in theoretical chemistry and chemical physics aimed at merging familiar techniques from ab initio electronic structure theory with cavity quantum electrodynamics, toward the goal of supplying predictive theories for polariton chemistry. Our aim is to emphasize the relevant theoretical details with enough clarity for newcomers to the field to follow, and to present simple and practical code examples to catalyze further development work. Ab initio methods for polariton chemistry A. Eugene DePrince III July 10, 2023 ========================================= § INTRODUCTION Strong interactions between nanoconfined photons and molecular systems<cit.> can lead to the creation of hybrid light–matter states known as polaritons that may display remarkably different chemical and physical properties than their parent components.<cit.> The technological and chemical applications of these strongly-coupled light–matter states are wide ranging. Recent examples of cavity control of chemical reactivity and catalysis,<cit.> polariton lasing,<cit.> manipulation of non-linear optical effects in organic molecules <cit.>, optical energy propagation,<cit.> plasmon-based photostabilization,<cit.> plasmon-based multimode vibrational strong couplingz ,<cit.> Bose-Einstein condensation of molecular exciton-polaritons,<cit.> and protection against decoherence processes<cit.> offer only a glimpse into the transformative potential of polaritonic approaches to chemistry and materials science. In order for the field to fully live up to its promise, the experimental realization of strong and ultra-strong light–matter coupling must be accompanied by high-quality theoretical descriptions of the emergence and properties of molecular polaritons. There have been several excellent review and perspective articles focusing on theoretical advances related to polaritonic chemistry. Theoretical challenges in polaritonic chemistry bridge most domains of chemical physics, including polaritonic structure, dynamics, statistical thermodynamics, and rate theories as pointed out by a recent comprehensive review by Huo and co-workers <cit.> and an incisive perspective by Feist and co-workers <cit.>. Ruggenthaler et al. have contributed a rigorous review of several promising directions in ab initio cavity quantum electrodynamics (QED) methods with a particular emphasis on real-space approaches to bridge density functional theory and its real-time extensions with cavity QED; the resulting QEDFT approach<cit.> has played an important role in simulating polaritonic structure. 
In this tutorial review, we also focus on the problem of simulating polaritonic structure through the lens of ab initio cavity QED, but we emphasize emerging methods implemented with Gaussian basis sets. Throughout, we refer to ab initio cavity QED methods (whether in Guassian or real-space grid bases) as those where the starting point is a single time-independent Schrödinger equation for charged particles comprising a molecular system coupled to quantized photonic degrees of freedom. These methods can be seen to be complementary to parameterized cavity QED (pCQED) methods where one essentially considers solving two Schrödinger equations in series: a first for the molecular system, and the second for the coupled molecular-photonic system that is parameterized by the solutions to the first.<cit.> As a tutorial review, our aim is to provide a level of technical detail sufficient for newcomers to the field to implement some of the more introductory methods and start applying them as-is, or to leverage these implementations to seed new or more elaborate methodological developments. In addition to the discussion of the related theory in text, we provide example code in a tutorial style that utilizes the Psi4Numpy framework for a QED-Hartree-Fock self-consistent field method and a QED-Configuration Interaction Singles method. We will present some illustrative calculations utilizing these methods, and also discuss results from the literature for methods beyond those for which we have provided tutorial implementations. Historically, theoretical descriptions of strong light–matter interactions have been built upon simple model Hamiltonians that describe interactions between two- or few-level quantum emitters and a single photon modes. For electronic strong coupling in polariton chemistry, the Jaynes-Cummings model provides such an example. Here two states of the quantum emitter are parameterized by the ground- and excited-state energies, and these states couple to the photon mode through a dipolar transition; see, for example, Ref. Huo22_Chemrxiv for a derivation and detailed discussion of this model. Such models are powerful tools for simulating qualitative changes to properties of molecular systems strongly coupled to nanoconfined photons,<cit.> offering essential insight, for example, into optical changes that can be induced by manipulating the energy content of an external field<cit.> or into changes in chemical reactivity<cit.> or rates of electron transfer reactions.<cit.> While such simulations improve our qualitative understanding of many problems, quantitative predictions of chemical reactivity or orbital-specific quantities (e.g., ionization potentials) within optical cavities or other nanoconfined environments necessitate an ab inito approach to light–matter interactions or a pCQED treatement with a sufficiently large basis of molecular and photonic eigenstates<cit.>. The most conceptually straightforward strategy to realize an ab initio polaritonic model is to generalize an existing methodology to treat more than one type of quantum-mechanical particle – namely, for the description of both electrons and photons. Following this scheme, approaches based on quantum electrodynamics generalizations of density functional theory (QEDFT<cit.> and QED-DFT<cit.>), configuration interaction (QED-CIS) <cit.>, and coupled cluster (QED-CC) <cit.> have emerged. 
An alternative and perhaps more direct description of polaritonic structure could be obtained from a theory designed from the outset with a different particle type, the polariton, in mind.<cit.> This approach could be the more natural one, but, in the framework outlined in Ref. , the technical challenge of designing algorithms for treating multiple types of quantum-mechanical particles is supplanted by a new problem: enforcing the correct Fermi-Bose statistics on the polaritonic wave function. In either case, the vast majority of polaritonic quantum chemical models are built upon density functional theory (DFT). For many applications, DFT offers an excellent balance of accuracy and computational affordability. However, DFT suffers from a number of well-known deficiencies<cit.> that are no doubt inherited by polaritonic extensions of the model and potentially limit its applicability to arbitrary polaritonic problems. Hence, while this review article touches on QED generalizations of DFT, the main focus is wave function methods. § THE PAULI-FIERZ HAMILTONIAN The starting point for our presentation of ab initio polaritonic structure theory is the Pauli-Fierz (PF) Hamiltonian,<cit.> represented in the length gauge and within the dipole and Born-Oppenheimer approximations. An excellent pedagogical discussion and derivation of this Hamiltonian from the minimal coupling Hamiltonian in the Coulomb gauge can be found in recent papers and reviews by Huo and co-workers.<cit.> Here, we briefly outline some key details, assuming a single photon mode for simplicity, but the Hamiltonian we derive can be generalized to multiple modes. It has been shown that the inclusion of multiple modes can profoundly impact ground-state and excited-state polariton surfaces, and physicochemical processes in model systems.<cit.> Most ab initio cavity QED studies to date have considered only a single mode, so multi-mode effects represent an important area to explore in future work. We begin with the minimal coupling Hamiltonian in the Coulomb gauge, Ĥ_ p · A = ∑_i^N 1/2m_i(p̂_i - z_i Â_⊥)^2 + V̂(x̂) + ħω_ cavb̂^†b̂, where the subscript on the Hamiltonian denotes that this operator is also referred to as the "p· A" Hamiltonian.<cit.> The sum runs over all charged particles (electrons and nuclei in molecular systems), p̂_i and z_i are the momentum operator and charge for particle i, respectively, Â_⊥ is the transverse component of the vector potential, V̂(x̂) is the Coulomb potential operator for all pairs of charged particles, and ħω_ cav captures the photon energy. The symbols b̂^† and b̂ are photonic creation and annihilation operators, respectively. Important properties of the photonic creation and annihilation operators include their action on photon number states, b̂^†|n⟩ = √(n+1)|n+1⟩, b̂|n⟩ = √(n)|n-1⟩, b̂^†b̂|n⟩ = n|n⟩, and their commutation relations [b̂, b̂^†] = 1 and [b̂^†, b̂] = -1. In Eq. <ref>, the coupling between light and matter is captured by the first term, which includes the matter momenta and the product of the matter charges and the vector potential; note that, in the Coulomb gauge, the vector potential is purely transverse. 
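The photon-operator algebra quoted above is easy to verify in a truncated number basis, which is also how photon degrees of freedom are typically represented in the Gaussian-basis methods discussed later. The numpy sketch below is a minimal illustration: it builds b̂ and b̂^† for a finite photon space, checks their action on a number state, and shows that the commutation relation holds everywhere except in the highest retained number state (a truncation artifact).

```python
# Truncated-number-basis check of the photon operator algebra used above:
#   bdag|n> = sqrt(n+1)|n+1>,  b|n> = sqrt(n)|n-1>,  [b, bdag] = 1.
# The commutator identity fails only in the highest retained number state,
# which is the usual truncation artifact.
import numpy as np

N_PH = 10                                   # number of retained photon states
n = np.arange(N_PH)

b = np.diag(np.sqrt(n[1:]), k=1)            # annihilation operator
bdag = b.T.copy()                           # creation operator
number = bdag @ b                           # photon number operator

ket = lambda m: np.eye(N_PH)[:, m]          # photon number state |m>

m = 3
print(np.allclose(bdag @ ket(m), np.sqrt(m + 1) * ket(m + 1)))   # True
print(np.allclose(b @ ket(m), np.sqrt(m) * ket(m - 1)))          # True
print(np.allclose(number @ ket(m), m * ket(m)))                  # True

comm = b @ bdag - bdag @ b
print(np.allclose(comm[:-1, :-1], np.eye(N_PH - 1)))             # True
print(comm[-1, -1])                         # truncation artifact: 1 - N_PH
```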
The "p· A" Hamiltonian in the Coulomb gauge is quite natural for formulations of ab initio QED represented in a real-space grid basis, and so approaches such as QEDFT are formulated in this gauge.<cit.> However, because momentum eigenfunctions are delocalized functions, capturing the coupling matrix elements in the "p· A" representation is challenging for formulations that utilize Gaussian basis sets, which are inherently localized in space. Therefore, the PF Hamiltonian in the length gauge that we seek may be obtained from Ĥ_ p · A via a gauge transformation, known as the Power-Zienau-Woolley (PZW) transformation, followed by a unitary phase transformation. The PZW transformation operator is Û_ PZW = exp( -i/ħμ̂·Â), where  = A_ 0(b̂^† + b̂) and A_ 0 = √(ħ/2ω_ cavϵ_0 V)ê is the vector potential of the cavity photon, which is still purely transverse, but we drop the ⊥ for simplicity. Let us consider the PZW transform of each term in Eq. <ref>. As noted in Ref. Huo20_9215, the PZW operator boosts the momentum operator by an amount z Â. To see why this is the case, consider the BCH expansion of this transformation for the light-matter coupling term for a single particle with charge z: Û_ PZW(p̂ - z Â) Û^†_ PZW = e^B̂Ĉ e^-B̂ = Ĉ + [B̂,Ĉ] + 1/2 [B̂,[B̂,Ĉ]] + ... where Ĉ = (p̂ - z Â) and B̂ = -i/ħzÂx̂, and we have used the fact that the dipole operator μ̂ = zx̂. Because  commutes with itself, we have [B̂,Ĉ] = -i/ħzÂ[x̂,p̂] = zÂ, and all subsequent commutators are equal to zero. Thus, we can see that Û_ PZW(p̂ - z Â) Û^†_ PZW = (p̂ - z Â) + z = p̂. Consequently, the first term in the PZW transformation of Eq. <ref> becomes Û_ PZW∑_i^N 1/2m_i(p̂_i - z_i Â)^2 Û^†_ PZW = ∑_i^N p̂_i^2/2m_i. Both  and x̂ commute with V̂(x̂), so we have Û_ PZWV̂(x̂) Û^†_ PZW = V̂(x̂). Finally, we have Û_ PZW ħω_ cavb̂^†b̂ Û^†_ PZW = e^B̂Ĉ e^-B̂ = Ĉ + [B̂, Ĉ] + 1/2 [B̂,[B̂,Ĉ]] + ... where we will call Ĉ = b̂^†b̂ and B̂ = g(b̂^† + b̂), where g = -i/ħμ̂· A_0. The first commutator gives g [(b̂^† + b̂),b̂^†b̂] = - g (b̂^† - b̂), and the second commutator gives -1/2 g^2 [(b̂^† + b̂), (b̂^† - b̂)] = - g^2, so that this term overall reads Û_ PZW ħω_ cavb̂^†b̂ Û^†_ PZW = ħω_ cavb̂^†b̂ + iω_ cavμ̂· A_0(b̂^† - b̂) +ω_ cav/ħ ( μ̂· A_0)^2. Combining all terms gives the Hamiltonian in the dipole gauge, also called the "d· E" Hamiltonian:<cit.> Ĥ_ d· E = ∑_i^N p̂_i^2/2m_i + V̂(x̂) + ħω_ cavb̂^†b̂ + iω_ cavμ̂· A_0(b̂^† - b̂) +ω_ cav/ħ ( μ̂· A_0)^2. To derive the Pauli-Fierz Hamiltonian from Eq. <ref> we apply a unitary phase transformation defined by the operator Û_ϕ = exp( i πb̂^†b̂/2), which transforms the photonic operators as follows: Û_ϕb̂^†Û^†_ϕ = i b̂^†, Û_ϕb̂Û^†_ϕ = -i b̂, Û_ϕb̂^†b̂Û^†_ϕ = b̂^†b̂. Thus, the Pauli-Fierz Hamiltonian can be defined as Ĥ_PF = Û_ϕĤ_ d· EÛ^†_ϕ = ∑_i^N p̂_i^2/2m_i + V̂(x̂) + ħω_ cavb̂^†b̂ - ω_ cavμ̂· A_0(b̂^† +b̂) +ω_ cav/ħ ( μ̂· A_0)^2. It is common to define the coupling vector λ = √(ħ/ϵ_0 V)ê, and so after recalling the definition A_ 0 = √(ħ/2ω_ cavϵ_0 V)ê, we can write A_ 0 = √(1/2ω_ cav)λ. At this point, the sum ∑_i^N p̂_i^2/2m_i = T̂_ e + T̂_ N runs over the electrons and nuclei, and the potential operator V̂(x̂) = V̂_ ee + V̂_ eN + V̂_ NN includes the electron-electron repulsion, electron-nuclear attraction, and nuclear-nuclear repulsion operators. We will invoke the Born-Oppenheimer approximation, which fixes the nuclei and eliminates the nuclear kinetic energy operator, and makes the nuclear-nuclear repulsion a constant for a given molecular geometry. 
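The algebra of the phase transformation Û_ϕ is also easy to confirm numerically in a truncated photon number basis. The short sketch below is a minimal illustration (not part of any production code) that verifies the three transformation rules quoted above; on a truncated space they hold exactly because b̂^†b̂ is diagonal in the number basis.

```python
# Numerical check of the phase transformation U_phi = exp(i*pi*bdag*b/2):
#   U_phi bdag U_phi^dag = +i bdag,  U_phi b U_phi^dag = -i b,
#   U_phi bdag b U_phi^dag = bdag b,
# which is what converts the "d.E" Hamiltonian into the Pauli-Fierz form above.
import numpy as np
from scipy.linalg import expm

N_PH = 12
b = np.diag(np.sqrt(np.arange(1, N_PH)), k=1)
bdag = b.T
num = bdag @ b

U = expm(1j * np.pi * num / 2.0)

print(np.allclose(U @ bdag @ U.conj().T, 1j * bdag))    # True
print(np.allclose(U @ b @ U.conj().T, -1j * b))         # True
print(np.allclose(U @ num @ U.conj().T, num))           # True
```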
With these definitions in mind, we write the Pauli-Fierz Hamiltonian <cit.> in the length gauge and within the dipole and Born-Oppenheimer approximations and in atomic units as follows: Ĥ = Ĥ_ e + ω_ cavb̂^†b̂ - √(ω_ cav/2) (λ·μ̂ )(b̂^† +b̂) + 1/2 (λ·μ̂ )^2. Here, Ĥ_ e represents the electronic Hamiltonian that arises in standard electronic structure theories when the Born-Oppenheimer approximation is imposed on the charged particles captured by the ∑_i^N p̂_i^2/2m_i + V̂(x̂) term in Eq. <ref>. The second term Ĥ_ cav = ω_ cavb̂^†b̂ represents the Hamiltonian for the cavity mode, which is a harmonic oscillator with fundamental frequency ω_ cav. The last two terms are the bilinear coupling, Ĥ_ blc = √(ω_ cav/2) (λ·μ̂ )(b̂^† +b̂), and dipole self-energy terms Ĥ_ DSE = 1/2( λ·μ̂)^2, respectively. We will assume a Cartesian coordinate system where λ and μ̂ will have x, y, and z components. The molecular dipole operator μ̂ has an electronic and a nuclear contributions, i.e., μ̂ = μ̂_ e + μ_ n. In the Born-Oppenheimer approximation, the nuclear contribution is a constant for a given geometry. In the following sections, we use standard labeling notation for molecular spin orbitals, i.e., labels i, j, k, and l refer to electronic molecular spin-orbitals that are occupied in a reference configuration, and labels a, b, c, d refer to unoccupied electronic molecular spin-orbitals. General electronic molecular orbitals will be indexed by p, q, r, and s, and electronic atomic orbitals will be indexed by Greek labels. Unless otherwise noted, all electronic orbital labels refer to spin-orbitals. The symbols â^† and â will represent fermionic creation and annihilation operators, respectively, while b̂^† and b̂ will represent the bosonic equivalents. § MEAN-FIELD CAVITY QED As our first step in approximating the energy eigenstates of Eq. (<ref>), we introduce the cavity quantum electrodynamics Hartree-Fock (QED-HF) method based on the reference wavefunction |0^ e0^ p⟩ = |0^ e⟩⊗ |0^ p⟩ which is a direct product of a Slater determinant of electronic spin orbitals (|0^ e⟩) and a zero-photon state (|0^ p⟩). This zero-photon state is defined as a linear combination of photon-number states |0^ p⟩ = ∑_n (b̂^†)^n |0⟩ c_n where |0⟩ represents the photon vacuum. The functions |0^ e⟩ and |0^ p⟩ can be determined via the following modified Roothaan-Hall procedure. In the first step, the electronic wavefunction can be determined as the Slater determinant that minimizes the expectation value of Eq. (<ref>), given a fixed zero-photon state. Second, given |0^ e⟩, we integrate out the electronic degrees of freedom of Eq. (<ref>) to obtain a photon Hamiltonian Ĥ_ p = ⟨ 0^ e | Ĥ | 0^ e⟩ the lowest eigenfunction of which is |0^ p⟩. In practice, |0^ p⟩ can be determined by expanding Ĥ_ p in a basis of photon-number states and bringing it to diagonal form. This two-step procedure should be repeated until self-consistency. One key detail in this procedure is that incorrect behavior can be recovered if the photon space is not fully converged. As an example, Fig. <ref>(a) illustrates the QED-HF energy for a cavity-bound hydrogen fluoride cation (described by the cc-pVQZ basis set) as the molecule is moved away from the origin. Here, the cation is coupled to a single-mode cavity with a fundamental frequency of 2 eV, the cavity mode is polarized along the molecular axis, the coupling strength, λ, is 0.05 atomic units, and the H–F distance is fixed at 0.917 Å throughout the translation. 
The QED-HF energy should be origin invariant, but, as is evident from the data, the correct invariance properties are only observed in the limit that the photon basis is complete. Figure <ref>(b) illustrates the error in the QED-HF energy, with respect to calculations carried out in the so-called “coherent-state basis,”<cit.> which, as discussed below, yields results that are equivalent to those obtained with a complete photon basis. Here, we can see that even with 20 photon number states, the QED-HF energy is still not strictly origin invariant, and this issue is more pronounced the farther from the origin the molecule is placed. Aside from origin invariance, the QED-HF energy should be independent of the photon frequency;<cit.> any polaritonic wave function that is factorizable as a product of an electronic wave function and a photonic wave function should have this property. Figure <ref> illustrates the frequency dependence of the QED-HF energy for the same cavity-bound hydrogen fluoride cation when the molecule is placed 10 Å from the origin. Clearly, an incomplete photon basis leads to an incorrect frequency dependence in the QED-HF energy. The errors with respect to calculations carried out in the coherent-state basis depicted in Fig. <ref>(b) demonstrate that errors due to the incompleteness of the photon basis can be quite large, even when considering 20 photon number states. In this case, errors larger than 10^-3 E_ h are observed for cavity mode frequencies less than 1.5 eV; these errors become much smaller as the photon frequency increases. As alluded to above, an equivalent representation of ground-state QED-HF involves representing the problem within the coherent-state basis,<cit.> which is the basis that diagonalizes Ĥ_ p. In this way, we avoid the need to solve the second step of the modified Roothaan-Hall procedure described above and automatically ensure convergence of the procedure with respect to the number of photon-number states. In the coherent-state basis, we need only solve the electronic problem with a transformed Hamiltonian, the form of which is derived in the next subsection. §.§ Coherent-State Transformation of the Hamiltonian As noted in Ref. , |0^ p⟩ can be exactly defined with a unitary coherent-state transformation operator of the form Û_ CS = exp( z(b̂^† - b̂) ) where z is a parameter defined such that Û_ CSĤ_ pÛ^†_ CS is a diagonal operator: z = -λ·⟨μ̂⟩/√(2 ω_ cav). The term ⟨μ̂⟩ in Eq. <ref> represents the expectation value of the molecular dipole moment (with respect to the Slater determinant, |0^ e⟩), which is also a vector quantity. We can relate the photon vacuum to the zero-photon state through the unitary transformation defined in Eq. <ref>, |0^ p⟩ = Û^†_ CS |0⟩ where |0⟩ represents the photon vacuum. Now, consider the expectation value of the PF Hamiltonian with respect to the QED-HF wavefunction: ⟨ 0^ e0^ p | Ĥ | 0^ e0^ p⟩ = ⟨ 0^ e | ⊗⟨ 0 | Û_ CSĤÛ^†_ CS |0 ⟩⊗ | 0^ e⟩ From the right-hand side of this expression, it is evident that the electronic wave function, |0^ e⟩, could be determined by minimizing the expectation value of the transformed Hamiltonian, ⟨ 0 | Û_ CSĤÛ^†_ CS | 0⟩, with respect to variations in the orbitals, without any explicit consideration of the photon degrees of freedom. Hence, by applying the coherent-state transformation to the full PF Hamiltonian, we avoid the second step of the modified Roothaan-Hall procedure for QED-HF that is outlined above. 
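The statement that Û_CS brings Ĥ_p to diagonal form can be checked numerically once the electronic degrees of freedom have been integrated out, i.e., once λ·μ̂ is replaced by the number λ·⟨μ̂⟩. The sketch below is a minimal illustration with placeholder values of ω_cav and λ·⟨μ̂⟩: it builds the resulting displaced-oscillator photon Hamiltonian in a truncated number basis, applies Û_CS with z = -λ·⟨μ̂⟩/√(2ω_cav), and confirms that the transformed operator is (up to truncation error at the edge of the photon space) diagonal with spacing ω_cav.

```python
# Numerical check that the coherent-state transformation with
#   z = -lam*<mu> / sqrt(2*omega)
# diagonalizes the photon Hamiltonian obtained after integrating out the
# electrons (lam.mu -> lam*<mu>, a placeholder number here).
import numpy as np
from scipy.linalg import expm

N_PH = 40                                  # photon space large enough that
omega, lam_mu = 0.1, 0.2                   # truncation edge effects are tiny

b = np.diag(np.sqrt(np.arange(1, N_PH)), k=1)
bdag = b.T
I = np.eye(N_PH)

# H_p = w b'b - sqrt(w/2) lam<mu> (b' + b) + 1/2 (lam<mu>)^2
H_p = (omega * bdag @ b
       - np.sqrt(omega / 2.0) * lam_mu * (bdag + b)
       + 0.5 * lam_mu**2 * I)

z = -lam_mu / np.sqrt(2.0 * omega)
U_cs = expm(z * (bdag - b))                # coherent-state transformation

H_tilde = U_cs @ H_p @ U_cs.conj().T
offdiag = H_tilde - np.diag(np.diag(H_tilde))

print("max off-diagonal element (low block):", np.abs(offdiag[:20, :20]).max())
print("first diagonal entries / omega:", np.round(np.diag(H_tilde)[:4] / omega, 6))
```

The transformed diagonal entries come out as 0, 1, 2, ... in units of ω_cav, which is the bare harmonic-oscillator spectrum expected once the displacement has been absorbed by Û_CS.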
To transform Ĥ_PF to the coherent-state basis, we note that Û_ CSb̂Û^†_ CS = b̂ - z[b̂, (b̂^† - b̂)] = b̂ - z, Û_ CSb̂^†Û^†_ CS = b̂^† - z[b̂^†, (b̂^† - b̂)] = b̂^† - z, and Û_ CSb̂^†b̂Û^†_ CS = Û_ CSb̂^†Û^†_ CSÛ_ CSb̂Û^†_ CS = (b̂^† - z) (b̂ - z). So, applying this transformation to Eq. <ref> yields Ĥ_ CS = Ĥ_e + ω_ cav (b̂^† - z) (b̂ - z) - √(ω_ cav/2)λ·μ̂ (b̂^† + b̂ - 2z) + 1/2 (λ·μ̂)^2, and substituting Eq. <ref> gives the specific form of the Pauli-Fierz Hamiltonian in the coherent-state basis: Ĥ_ CS = Ĥ_e + ω_ cavb̂^†b̂ - √(ω_ cav/2) [λ· (μ̂ - ⟨μ̂⟩ )] (b̂^† + b̂) + 1/2 [λ· (μ̂ - ⟨μ̂⟩ )]^2 . Although we see in Figure <ref> that the total energy for charged systems remains origin invariant in the coherent-state basis, the orbitals and the Fock matrix itself are not origin invariant for charged systems in this formulation. This presents challenges for introducing perturbative corrections for electron-electron and electron-photon correlation. This was recently observed by Riso et al., who developed a strong coupling quantum electrodynamics Hartree-Fock theory (SC-QED-HF) that leads to a fully origin-invariant formulation <cit.> based on the following ansatz: |Φ_SCQEDHF⟩ = exp( -λ/√(2ω_ cav)∑_p ση_pσâ^†_pσâ_pσ(b̂^† - b̂) ) | 0^ e⟩ |0⟩ where â^†_pσ and â_pσ are fermionic creation and annihilation operators for spin orbital pσ and the η_pσ are orbital-specific coherent-state coefficients. §.§ Cavity QED Hartree-Fock (QED-HF) in the Coherent-State Basis Consider a QED-HF wave function of the form of Eq. <ref>. We express the photon state using the coherent-state transformation (Eq. <ref>) and take the expectation value of the Pauli-Fierz Hamiltonian to give E_QED-HF = ∑_μν ( T_μν + V_μν + 1/2 J_μν - 1/2 K_μν) γ_μν + ⟨1/2 [λ· (μ̂_ e - ⟨μ̂_ e⟩ )]^2⟩ Here, μ and ν represent atomic basis functions, and T_μν, V_μν, J_μν, and K_μν are electron kinetic energy integrals, electron-nucleus potential energy integrals, elements of the Coulomb matrix, and elements of the exchange matrix, respectively. The elements of the Coulomb and exchange matrices are defined by J_μν = ∑_λσ (μν|λσ) γ_λσ and K_μν = ∑_λσ (μλ | σν) γ_λσ where the symbol (μν|λσ) represents a two-electron repulsion integral in chemists' notation, and γ_μν = ∑_i^N_ e c^*_μ i c_ν i is the one-particle reduced density matrix (with {c_μ i} and N_ e being molecular orbital coefficients and the number of electrons, respectively). The last term in Eq. <ref> is the dipole self-energy; note that, in the coherent-state basis, this quantity depends on only electronic degrees of freedom. Note also that the bilinear coupling term in Eq. <ref> does not contribute to the QED-HF total energy when the Hamiltonian is represented in the coherent-state basis. This property is shared by all QED approaches where the wave function is represented as a product of electron and photon functions (e.g., in the QED-DFT approach described in Ref. and in Sec. <ref>). The implementation of the dipole self-energy term is not consistent across the literature, with the difference being the treatment of the square of the electric dipole operator. To appreciate these differences, we first expand the dipole self-energy operator as 1/2 [λ· (μ̂_e - ⟨μ̂_e ⟩)]^2 = 1/2 ( λ·μ̂_ e ) ^2 - ( λ·μ̂_ e ) ( λ·⟨μ̂_ e⟩ )+ 1/2 ( λ·⟨μ̂_ e⟩ ) ^2. Now, the square of the electric dipole operator (the first term on the right-hand side of Eq. <ref>) can be expanded in terms of one- and two-electron contributions as ( λ·μ̂_ e ) ^2 = ∑_i ≠ j [ λ·μ̂_ e(i) ][ λ·μ̂_ e(j)] + ∑_i [ λ·μ̂_ e(i) ]^2. where i and j represent different electrons. The right-hand side of Eq. 
<ref> can be expressed in second-quantized notation as ( λ·μ̂_ e ) ^2 = ∑_μνλσ d_μν d_λσâ^†_μâ^†_λâ_σâ_ν - ∑_μν q_μνâ^†_μâ_ν. where â^† and â represent fermionic creation and annihilation operators, respectively. The symbols d_μν and q_μν represent modified electric dipole and electric quadrupole integrals, which have the form d_μν = - ∑_a ∈{x,y,z}λ_a ∫χ^*_μ r_a χ_ν dτ, and q_μν = - ∑_ab ∈{x,y,z}λ_a λ_b ∫χ^*_μ r_a r_b χ_ν dτ. respectively, and are evaluated over atomic basis functions, χ_μ. Here, λ_a is a cartesian component of λ, and r_a is a cartesian component of the position vector [e.g., for 𝐫 = (x, y, z), r_x = x]. As is well known, the square of an operator expanded initially in first quantization and then represented in second quantization is not necessarily the same as the square of the second quantized form of the operator; these representations are only equivalent in the limit that the one-electron basis set is complete. Equation <ref> makes no assumptions about the completeness of the one-particle basis set and is the form of the square of the dipole operator employed in Refs. . On the other hand, many other studies take the second-quantized form of the square of the electric dipole operator to be the product of second-quantized electric dipole operators, which leads to ( λ·μ̂_ e ) ^2 = ∑_μνλσ d_μν d_λσâ^†_μâ_νâ^†_λâ_σ = ∑_μνλσ d_μν d_λσâ^†_μâ^†_λâ_σâ_ν +∑_μνâ^†_μâ_ν∑_σ d_μσ d_σν. In these studies, the assumption that the basis set is assumed to be complete is never stated, but this choice is evident in the form of the Fock matrix (see Eq. 30 of Ref. , for example). In this review, we choose the form of ( λ·μ̂_ e ) ^2 given by Eq. <ref>. Given that choice, and the fact that ( λ·μ̂_ e ) = ∑_μν d_μνâ^†_μâ_ν, we arrive at 1/2 [λ· (μ̂_e - ⟨μ̂_e ⟩)]^2 = 1/2∑_μνλσ d_μν d_λσâ^†_μâ^†_λâ_σâ_ν + ∑_μν O^ DSE_μνâ^†_μâ_ν + 1/2 (λ·⟨μ_ e⟩)^2. where O^ DSE_μν = -( λ·⟨μ̂_ e⟩ ) d_μν - 1/2 q_μν. Now, we can evaluate the expectation of Eq. <ref> with respect to a single determinant, which gives ⟨1/2 [λ· (μ̂_e - ⟨μ̂_e ⟩)]^2 ⟩ = ∑_μν (1/2 J^ DSE_μν - 1/2 K^ DSE_μν + O^ DSE_μν ) γ_μν + 1/2 (λ·⟨μ_ e⟩)^2 Here, J^ DSE_μν and K^ DSE_μν are elements of dipole self-energy matrices that are analogies of the usual Coulomb and exchange matrices: J^ DSE_μν = d_μν∑_λσ d_λσγ_λσ = (λ·⟨μ̂_ e⟩) d_μν K^ DSE_μν = ∑_λσ d_μσ d_λνγ_λσ. With all of the components of the energy (Eq. <ref>) defined, we can make this energy stationary with respect to the molecular orbital expansion coefficients, {c_μ i}, while enforcing orthogonality of the molecular orbitals, which leads to a set of Hartree-Fock equations that resembles those in the ordinary electronic problem, augmented by the dipole self-energy contributions. As such, QED-HF orbitals are eigenfunctions of a modified Fock matrix, F_μν = T_μν + V_μν + J_μν - K_μν + O^ DSE_μν + J_μν^ DSE - K_μν^ DSE For organizational purposes, it will become convenient to partition the Fock matrix into contributions that define the canonical Fock operator, F^ C_μν = T_μν + V_μν + J_μν - K_μν, plus terms that derive from the dipole self energy, F^ DSE_μν = O^ DSE_μν + J_μν^ DSE - K_μν^ DSE. Upon solving the QED-HF equations, one obtains a set of molecular orbitals corresponding to the (mean-field) ground state of a many-electron system coupled to an optical cavity. For sufficiently large coupling strengths, the cavity can induce significant changes in these orbitals, as compared to orbitals obtained from a standard HF procedure on the isolated many-electron system. 
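For readers implementing QED-HF on top of an existing SCF code (for example with integrals taken from Psi4, as in the Psi4Numpy tutorial that accompanies this review), the dipole self-energy contribution to the Fock matrix reduces to a few lines of linear algebra once the λ-scaled dipole and quadrupole integrals are available. The numpy sketch below uses random symmetric matrices as stand-ins for d_μν, q_μν, and γ_μν; it is only meant to show how O^DSE, J^DSE, K^DSE, and F^DSE are assembled from the expressions above.

```python
# Assembly of the dipole self-energy (DSE) contribution to the QED-HF Fock
# matrix:
#   O^DSE = -(lam.<mu_e>) d - q/2,   J^DSE = (lam.<mu_e>) d,
#   K^DSE_{mn} = sum_{ls} d_{ms} d_{ln} gamma_{ls},   F^DSE = O + J - K.
# Random symmetric matrices stand in for the lambda-scaled dipole (d),
# quadrupole (q), and density (gamma) matrices from a real integrals engine.
import numpy as np

rng = np.random.default_rng(0)
nbf = 6                                          # number of basis functions

def sym(a):
    return 0.5 * (a + a.T)

d = sym(rng.normal(size=(nbf, nbf)))             # lambda-scaled dipole ints
q = sym(rng.normal(size=(nbf, nbf)))             # lambda-scaled quadrupole ints
gamma = sym(rng.normal(size=(nbf, nbf)))         # one-particle density matrix

lam_mu = np.sum(d * gamma)                       # lam . <mu_e> = tr(d gamma)

O_dse = -lam_mu * d - 0.5 * q
J_dse = lam_mu * d                               # Coulomb-like DSE term
K_dse = d @ gamma @ d                            # exchange-like DSE term
F_dse = O_dse + J_dse - K_dse

# corresponding DSE energy contribution
E_dse = np.sum((0.5 * J_dse - 0.5 * K_dse + O_dse) * gamma) + 0.5 * lam_mu**2
print("||F_DSE|| =", np.linalg.norm(F_dse), " E_DSE =", E_dse)
```

In a production code, F_dse is simply added to the canonical Fock matrix at each SCF iteration, exactly as indicated by the partitioning above.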
Here, we examine such changes for a formaldehyde molecule that has been coupled to a single-mode optical cavity. Excited states of this system have been explored using QED generalizations of time-dependent density functional theory<cit.> (see Sec. <ref> for a description of the relevant theory). Here, we adapt the results of Ref.  and focus on cavity-induced changes to the ground state (i.e., to the molecular orbitals). We supplement this discussion with a tutorial implementation of QED-HF that the interested reader can find https://github.com/FoleyLab/psi4polaritonic/blob/cpr/QED-HF_Tutorial.ipynbonline.<cit.> The tutorial provides a benchmark calculation on the water molecule, and can be modified to study other systems. As described in Ref. , the geometry of isolated formaldehyde was optimized using restricted HF (RHF) theory and the cc-pVDZ basis set, and the principal symmetry axis of the molecule is aligned along the z-axis. At this level, the RHF ground-state has a dipole moment oriented along the z-axis with ⟨μ⟩_z = -1.009 a.u. We consider solutions to the QED-HF equations for a coupling vector with fixed magnitude, (i.e., |λ| = 0.1 a.u.), and three different cavity mode polarizations: λ_y = 0.1 ê_y a.u., λ_z = 0.1 ê_z a.u., and λ_yz = √(1/2) (λ_y + λ_z) a.u., with ê_y=(0, 1, 0) and ê_z=(0, 0, 1). As compared to the HF energy, the QED-HF energy is higher in all cases, with the largest increase occurring for λ_z (see Table I). Going back to the explicit expressions for the QED-HF dipole self energy derived above, we can see that this large change likely originates from the permanent dipole moment that is oriented along the z-axis, which contributes to the last term in Eq. <ref>. The cavity-induced changes to the energy for the other polarizations point to important effects arising from the other contributions to Eq. <ref>. Specifically, in the case of λ_y, we should see no permanent dipole moment contributions to the dipole self energy, which indicates that the cavity effects stem entirely from the quadrupolar contribution to O^ DSE (Eq. <ref>) and the exchange-like contribution (Eq. <ref>). To quantify cavity-induced changes to the energy, Ref.  considered how various contributions to the QED-HF energy change with and without coupling to the photon field. Specific formulae for these couplings are given in reference Foley_154103. The quadrupolar contribution to O^ DSE (Δ_1qe) and the Coulomb-like and exchange-like contributions (Eqs. <ref> and <ref>), the combination of which is denoted Δ_2de in Table I, typically account for the largest changes to the QED-HF energy for the three polarizations considered in Table I. However, the changes in the one- and two-electron contributions to the canonical RHF energy (denoted Δ_1E and Δ_2E) suggest that cavity-induced changes to the orbitals themselves can have appreciable energetic consequences. We note that the various components of the energetic changes largely cancel with one other (i.e. Δ_1E≈ -Δ_2E in all three cases), leading to more modest changes in the total energy (see Table I). Aside from the energy, we can also visualize the impact that the cavity has on the real-space form of the molecular orbitals. As an example, Fig. <ref> depicts HF orbitals for the highest occupied molecular orbital (HOMO, 2B_2) and the second-lowest unoccupied molecular orbital (LUMO+1, 6A_1) for an isolated formaldehyde molecule and the corresponding QED-HF orbitals for the λ_yz case ( 7A^' and 8A^' ). 
The QED-HF orbitals are noticeably distorted compared to the HF ones, which results in a reduction of symmetry from C_2v to C_s and impacts both ground-state energy and properties. The direct inclusion of these cavity-induced effects on the orbital basis is one appealing advantage of ab initio QED methods. §.§ Cavity QED Density Functional Theory (QED-DFT) The QED-HF theory outlined above can easily be adapted to develop a QED generalization of Kohn-Sham DFT, or QED-DFT.<cit.> To do so, one can simply follow the basic premise of Kohn-Sham DFT:<cit.> there exists a fictitious system of non-interacting photons and electrons that has the same density as the fully-interacting system. The QED-DFT ground-state is then taken to have the form of Eq. <ref>, except that |0^ e⟩ now refers to a determinant of Kohn-Sham orbitals. As with QED-HF, the photon part of the wave function can be exactly represented using the coherent-state transformation operator, see Eq. <ref>. All electron-electron correlation and exchange effects and electron-photon correlation effects can then, in principle, be accounted for by appropriate functionals of the density (and gradient of the density, etc.), as in standard Kohn-Sham DFT. Historically, QED-DFT was predated by a different generalization of DFT for cavity QED applications, called QEDFT,<cit.> which, rather than following the Kohn-Sham scheme, represents the electronic and photonic degrees of freedom directly in real space. QED-DFT studies typically employ standard exchange-correlation functionals used in electronic structure theory (i.e., they ignore electron-photon correlation effects), while, for QEDFT, a few examples of electron-photon correlation functionals have been put forward.<cit.> § SINGLE-PARTICLE POST-SCF CAVITY QED METHODS §.§ Cavity QED-Configuration Interaction with Single Excitations (QED-CIS) A general correlated wave function for a many-electron system coupled to a single-mode cavity could take the form |Ψ⟩ = ∑_μ∑_A c_μ^A | μ^ e⟩⊗ | A^ p⟩ where |μ^ e⟩ represents a determinant of electronic orbitals, |A^ p⟩ is a photon-number state corresponding to A photons in the cavity mode, and c_μ^A is an expansion coefficient. If {|μ^ e⟩} includes all possible determinants and {|A^ p⟩} includes all possible photon-number states, then this full configuration interaction (CI) wave function provides an exact description of the electronic/polaritonic structure, within a given one-electron basis set. However, as in the usual electronic case, a full CI description of a cavity-coupled many-electron system is, in general, an intractable prospect. The simplest solution to this problem is to truncate both the many-electron basis and the photon basis at some level. McTague and Foley proposed<cit.> a truncated cavity QED-CI approach wherein the sum over Slater determinants, μ, in Eq. <ref> was restricted to include only the reference electronic configuration, |0^ e⟩, and all single electronic excitations out of this configuration, and the sum over photon-number states was restricted to include only states representing zero or one photon in the cavity (|0⟩ and |1⟩, respectively). Those authors termed this approach cavity QED configuration interaction with single excitations, or CQED-CIS, but, following the naming convention used in some QED coupled-cluster approaches<cit.> (see Sec. <ref>), we adopt the name QED-CIS-1. 
The QED-CIS-1 wave function for state I takes the form |Ψ_I⟩ = c_0^0 |0^ e⟩⊗ |0⟩ + ∑_i,a c_ia^0 |Φ_i^a⟩⊗ |0⟩ + c_0^1 |0^ e⟩⊗ |1⟩ + ∑_i,a c_ia^1 |Φ_i^a⟩⊗ |1⟩. Following Ref. , |Φ_i^a⟩ = 1/√(2)(|Φ_i_α^a_α⟩ + |Φ_i_β^a_β⟩) represents a singlet spin-adapted basis function, where |Φ_i_σ^a_σ⟩ is a determinant generated by exciting an electron with spin σ from a spatial orbital that is occupied in |0^ e⟩, ϕ_i, to an unoccupied spatial orbital, ϕ_a. For multiple cavity modes, QED-CIS-1 is defined such that the photon basis includes all possible combinations of zero or one photon in each of the modes. The expansion coefficients in Eq. <ref> can be determined as the elements of the eigenvectors of the matrix representation of the Pauli-Fierz Hamiltonian represented within the coherent-state basis (Ĥ_CS, Eq. <ref>), i.e., by solving the eigenvalue problem [ 0 0 0 ħ g; 0 A + Δ ħ g^† ħ G; 0 ħ g ħω 0; ħ g^† ħ G 0 A + Δ + ħΩ ][ c^0_0; c^0_ia; c^1_0; c^1_ia ] = Ω_QED-CIS-1[ c^0_0; c^0_ia; c^1_0; c^1_ia ] Note that the matrix on the left-hand side of Eq. <ref> is actually the matrix representation of Ĥ_ CS - E_QED-HF, where E_QED-HF is the energy of the QED-HF reference state. The elements of A are similar to those encountered in canonical CIS theory, A_ia,jb = F^ C_abδ_ij - F^ C_ij δ_ab + 2(ia|jb) - (ij|ab), with important differences being that (i) the two-electron integrals are performed over QED-HF orbitals, and (ii) F^ C is not diagonal in the QED-HF basis when the coupling strength is non-zero. The dipole self energy contribution to the Hamiltonian in the subspace of spin-adapted singly-excited functions is contained in the Δ matrix, with elements Δ_ia,jb = F^ DSE_abδ_ij - F^ DSE_ij δ_ab + 2 d_ia d_jb - d_ij d_ab. Again, we note that F^ DSE is not necessarily diagonal in the QED-HF basis. The symbol Ω represents a diagonal matrix of photon energy contributions, defined by Ω_ia,jb = ωδ_ijδ_ab. The symbols g and G arise from the bilinear coupling term in Ĥ_ CS and are defined by g_ia = -√(ω) d_ia and G_ia,jb = √(ω/2)( d_ijδ_ab - d_abδ_ij + ⟨ d ⟩δ_ijδ_ab) The g term couples the reference to |Φ_i^a⟩ |1⟩, while G couples singly-excited configurations with different photon numbers, i.e., |Φ_i^a⟩ |0⟩ and |Φ_i^a⟩ |1⟩. Note that the fact that g couples the reference to |Φ_i^a⟩ |1⟩ implies that QED-CIS-1 captures some electron-photon correlation effects. Indeed, the lowest eigenvalue, Ω_QED-CIS-1, obtained from solving Eq. <ref> is nonpositive and represents an electron-photon correlation energy. §.§ Cavity QED Time-Dependent Density Functional Theory (QED-TDDFT) Given the popularity of time-dependent DFT (TDDFT) for the electronic structure problem, it is not surprising that multiple generalizations of TDDFT have been proposed and applied to cavity-embedded molecular systems. Both real-time<cit.> and linear-response<cit.> formulations have been put forward; here, we focus on the linear-response approaches because they more closely resemble the QED-CIS-1 method discussed above. Both real-space<cit.> and atom-centered Gaussian basis function<cit.> representations of the electronic structure have been used within linear-response QED-TDDFT. In the latter category, Refs.  and have considered QED-TDDFT calculations on top of canonical Kohn-Sham reference configurations (i.e., |0^ e⟩⊗ |0⟩, where |0^ e⟩ is a Kohn-Sham determinant optimized in the absence of the cavity), while Refs. 
and have considered fully relaxed QED-DFT reference functions and represented the QED-TDDFT problem in the coherent-state basis, similar to what is done in QED-CIS-1. As discussed in Ref. , significant differences in excitation energies obtained from these “unrelaxed” and “relaxed” QED-TDDFT protocols can occur when considering large coupling strengths. In either case, linear-response QED-TDDFT can be implemented as a solution to a generalization of Casida's equations [ A +Δ B + Δ' ħ g^† ħg̃^†; B +Δ' A + Δ ħ g^† ħg̃^†; ħ g ħ g ħω 0; ħg̃ ħg̃ 0 ħω ][ X; Y; M; N ] = Ω^QED-TDDFT[ 1 0 0 0; 0 - 1 0 0; 0 0 1 0; 0 0 0 - 1 ][ X; Y; M; N ] Assuming a spin-adapted basis, the A matrix is the same as that given in Eq. <ref>, except that the exchange term (ij|ab) is replaced with appropriate derivatives of the exchange-correlation energy. For a cavity QED random phase approximation (RPA), the B matrix has elements B_ia,jb = 2(ia|jb) - (ib|ja) and, for QED-TDDFT, the exchange term (ib|ja) is again replaced by the appropriate derivatives of the exchange-correlation energy. The Δ' matrix has elements Δ^'_ai,bj = 2 d_ai d_bj - d_aj d_ib and, lastly, g̃ = g. As described, the QED-TDDFT formalism corresponds to the “relaxed” one developed in Ref. . The “unrelaxed” QED-TDDFT method proposed in Ref.  can be obtained by ignoring the effects of the cavity in the underlying ground-state Kohn-Sham problem and taking Δ_ia,jb = Δ'_ia,jb = 2 d_ai d_bj The elements of X, Y, M, and N parametrize the QED-TDDFT excited states; the elements of X and Y correspond to the usual electronic excitation and de-excitation amplitudes encountered in conventional TDDFT, while M and N refer to photon creation and annihilation amplitudes, respectively. We see clear connections to QED-CIS-1, where the CI coefficients c_ai^0 and c_0^1 play roles that are similar to those of the elements of X and M, respectively. Unlike QED-CIS-1, however, the linear-response QED-TDDFT equations do not couple the QED-DFT reference to any excited configurations. Hence, this approach does not account for any explicit electron-photon correlation effects, absent any that are included via the exchange-correlation functional. Such effects were ignored in Refs. ; all calculations reported therein used standard density functional approximations designed for non-QED applications. §.§ The QED-TDDFT and QED-CIS prisms As mentioned above, some coefficients from the QED-CIS-1 problem map directly onto amplitudes that arise in QED-TDDFT. However, QED-CIS-1 lacks analogues to the de-excitation and annihilation amplitudes (Y and N, respectively). That said, in Ref. , Shao and coworkers explored an approximation to QED-TDDFT that ignored these terms, called the Tamm-Dancoff - Rotating Wave Approximation (TDA-RWA) in that work, which has a simpler structure that is more similar to QED-CIS-1. The TDA-RWA eigenvalue problem is [ A +Δ ħ g^†; ħ g ħω ][ X; M; ] = Ω^TDA-RWA[ X; M ]. The primary differences between QED-CIS-1 and TDA-RWA are (i) the different definitions of the A matrix that we have already discussed and (ii) the fact that TDA-RWA, like QED-TDDFT, does not account for simultaneous electronic excitations and photon creation, which would couple the QED-DFT reference to excited configurations. Other subtle differences exist, depending on whether the TDA-RWA is done in a fully relaxed way or not (as discussed in the context of QED-TDDFT above). The TDA-RWA approach is only one of eight possible approximations to QED-TDDFT that Shao and co-workers analyzed in Ref. 
; these approximations live on what those authors describe as the QED-TDDFT prism (see Figure <ref>). The facets of their prism include all possible combinations of including or neglecting the B matrix, the Δ/Δ' matrices, and g̃. An analogous family of approximations to QED-CIS-1 can be developed by neglecting Δ or the bilinear coupling terms in Eq. <ref> or by excluding simultaneous electron excitation and photon creation terms (|Φ_i^a⟩⊗ |1⟩) in Eq. <ref>. For example, excluding |Φ_i^a⟩⊗ |1⟩ from the wave function expansion results in a QED-CIS method that has the same structure as TDA-RWA: [ A +Δ ħ g^†; ħ g ħω ][ c^0_ia; c^1_0 ] = Ω_QED-CIS[ c^0_ia; c^1_0 ] On the other hand, neglecting Δ in Eq. <ref> leads to a Jaynes-Cummings-like approximation to QED-CIS-1 (JC-CIS-1): [ 0 0 0 ħ g; 0 A ħ g^† ħ G; 0 ħ g ħω 0; ħ g^† ħ G 0 A + ħΩ ][ c^0_0; c^0_ia; c^1_0; c^1_ia ] = Ω_JC-CIS-1[ c^0_0; c^0_ia; c^1_0; c^1_ia ] and if we neglect Δ from Eq. <ref>, we arrive at a JC-CIS method that has the same structure as the TDA-JC method of Shao and co-workers <cit.>: [ A ħ g^†; ħ g ħω ][ c^0_ia; c^1_0 ] = Ω_JC-CIS[ c^0_ia; c^1_0 ] Ref.  provides a detailed analysis of the behavior of different facets of the QED-TDDFT prism for several cavity-coupled molecular systems. Here, we consider how the description of an MgH^+ cation coupled to a single-mode cavity differs for facets of the QED-CIS-1 prism. The cavity mode frequency is chosen to be resonant with the S_0 → S_1 transition in MgH^+ at an Mg–H distance of 2.2 Å (4.75 eV, as evaluated at the CIS/cc-pVDZ level of theory). The molecule is chosen to be oriented along the cavity mode polarization axis, and we consider two coupling strengths, |λ| = 0.01 a.u. and |λ| = 0.05 a.u. For the smaller coupling strength (|λ| = 0.01 a.u.), all facets of the prism provide a similar description of the upper and lower polariton states (see Fig. <ref>). On the other hand, clear differences between each model become evident for the stronger coupling strength (|λ| = 0.05 a.u.). Not surprisingly, energies from Jaynes-Cummings approximations (JC-CIS-1 and JC-CIS) are consistently lower than those from the Pauli-Fierz approaches (QED-CIS-1 and QED-CIS) because the Jaynes-Cummings model neglects the quadratic dipole self energy contributions, which are non-negative. We also see that QED-CIS-1 energies are consistently lower than the corresponding QED-CIS energies; the reason is that simultaneous electron excitations and photon creation terms in QED-CIS-1 account for electron-photon correlation effects that lower the energy. For large coupling strengths, these effects can be quite large; at an Mg–H bond length of 2.2 Å and |λ| = 0.05 a.u., for example, the energies of the upper- and lower-polariton states computed by QED-CIS and QED-CIS-1 differ by 12.4 mE_h and 5.35 mE_h, respectively. As mentioned above, simultaneous electron excitations and photon creation terms in QED-CIS-1 incorporate electron-photon correlation effects into the approach and, as a result, the lowest-energy eigenvalue associated with Eq. <ref> is nonpositive and corresponds to an electron-photon correlation contribution to the ground-state energy. Table <ref> quantifies these effects for a formaldehyde molecule coupled to a single-mode cavity with two different coupling vectors, λ_z and λ_yz, which both have magnitudes of 0.1 a.u. and were defined in <ref>. The geometry for formaldehyde was taken from Ref. , with the principal axis of the molecule aligned in the z-direction. The authors of Ref. 
considered a photon mode with ω = 10.4 eV, which is approximately resonant with the first two dipole-allowed transitions at the CIS/cc-pVDZ level of theory. The changes to the ground-state energy as predicted by QED-CIS-1 are given relative to the canonical RHF method and the QED-HF method in Table <ref>. A Jupyter-notebook-based tutorial implementing the prism of QED-CIS-1 methods can be found online at https://github.com/FoleyLab/psi4polaritonic/blob/cpr/QED-CIS-1.ipynb.<cit.> The tutorial provides a benchmark calculation on the MgH^+ ion, and it can easily be modified to study other systems. § CAVITY QED COUPLED CLUSTER (QED-CC) Beyond the single-particle theories discussed in the previous sections, a number of groups have considered many-body frameworks for ab initio cavity QED calculations. Many of these efforts have focused on the coupled-cluster (CC)<cit.> ansatz, which has enjoyed great success in conventional (non-QED) quantum chemistry applications. CC methods exhibit a number of desirable features that have contributed to this success, including the size-extensivity of truncated CC expansions, the size-intensivity of equation of motion (EOM)<cit.> or linear-response<cit.> CC excitation energies, and systematic convergence of the approach toward the full CI limit. Two slightly different generalizations of CC theory for use with the PF Hamiltonian appeared in the literature at roughly the same time.<cit.> The polaritonic coupled-cluster theory of Mordovina, Bungey, Appel, Knowles, Rubio, and Manby <cit.> considered an exponential parametrization of the ground-state polaritonic wave function that included single and double electronic transition operators, as well as photon creation operators and coupled electron transition and photon creation operators. They applied this ansatz, along with QED full CI, to the description of strong coupling between a single photon mode and a four-site Hubbard model. It should be noted that this work did not use typical boson creation operators, but, rather, nilpotent operators that lead to a linear parametrization of the photon space. On the other hand, the QED-CCSD-1 model presented by Haugland, Ronca, Kjønstad, Rubio, and Koch<cit.> used an exponential parametrization of similar complexity, along with more familiar (non-nilpotent) boson creation operators, and they applied this approach to strong coupling problems involving an ab initio molecular Hamiltonian. The ground-state QED-CCSD-1 wave function is |Ψ_ CC⟩ = e^T̂|Φ_0⟩ with T̂ = ∑_ia t_i^a â^†_a â_i + 1/4∑_ijab t_ij^abâ^†_a â^†_b â_j â_i + u_0 b̂^† + ∑_ia u_i^a â^†_a â_i b̂^† + 1/4∑_ijab u_ij^abâ^†_a â^†_b â_j â_i b̂^† where |Φ_0⟩ is a reference configuration of the form |Φ_0⟩ = |0^ e⟩⊗ |0⟩ In Eq. <ref>, the symbols t_i^a, t_ij^ab, u_0, u_i^a, and u_ij^ab represent the cluster amplitudes, and we can see that QED-CCSD-1 is an extension of the usual CCSD model<cit.> that includes both photon creation operators and products of electronic transition and photon creation operators. Excited states in QED-CC theory are represented within the EOM-CC framework,<cit.> in which we define both left- and right-hand excited states of the form | Ψ_I ⟩ = R̂_I e^T̂ | Φ_0 ⟩ ⟨Ψ̃_I | = ⟨Φ_0 | L̂_I e^-T̂ where the label I denotes the state. These functions satisfy left- and right-hand eigenvalue equations ⟨Φ_0 | L̂_I H̅ = ⟨Φ_0 | L̂_I E_I and H̅R̂_I |Φ_0 ⟩ = E_I R̂_I |Φ_0⟩ involving the similarity transformed PF Hamiltonian, H̅ = e^-T̂Ĥe^T̂. Here, Ĥ is represented in the coherent-state basis.
At the EOM-QED-CCSD-1 level of theory, the R̂_I and L̂_I operators are defined by L̂_I = l_0 + ∑_ail^i_a â^†_iâ_a + 1/4∑_abijl_ab^ijâ^†_i â^†_j â_b â_a + m_0 b̂ + ∑_aim^i_a â^†_iâ_a b̂ + 1/4∑_abijm_ab^ijâ^†_i â^†_j â_b â_a b̂ and R̂_I = r_0 + ∑_air^a_i â^†_aâ_i + 1/4∑_abijr_ij^abâ^†_a â^†_b â_j â_i + s_0 b̂^† + ∑_ais^a_i â^†_aâ_i b̂^† + 1/4∑_abijs_ij^abâ^†_a â^†_b â_j â_i b̂^† respectively, and the amplitudes appearing in Eqs. <ref> and <ref> are determined by solving Eqs. <ref> and <ref>. Since 2020, several groups have developed implementations of similar QED-CC approaches and explored the influence of cavity effects on various ground-state properties. DePrince<cit.> used QED-CCSD-1 to demonstrate that strong coupling leads to appreciable changes in electron affinities in sodium halide compounds and that QED-HF significantly overestimates these effects. Ionization potentials were found to be less sensitive to cavity effects in these systems. Pavošević and Flick<cit.> also explored the influence of cavity effects on electron affinities using a unitary formulation of QED-CCSD-1, implemented using the variational quantum eigensolver (VQE)<cit.> algorithm, on a quantum computer. They also extended the framework to include up to two photon creation operators plus single and double electronic excitations (termed QED-CCSD-2). These works led to a study on the features of ionization in QED environments by Riso, Haugland, Ronca, and Koch<cit.> that highlighted the importance of an appropriate treatment of the ionized electron. Beyond these studies on ionization/electron attachment, a number of works have used QED-CC approaches to explore how vacuum fluctuations can be leveraged in chemical contexts. Here, it is important to note that we are referring to changes to ground states of cavity-embedded systems, without driving transitions or creating polariton states via the addition of photons to the cavity. Pavošević, Hammes-Schiffer, Rubio, and Flick<cit.> used non-unitary QED-CCSD-2 to show that strong coupling leads to non-negligible changes in proton transfer reaction barrier heights; changes as large as 20% were reported in Ref. . These authors also introduced an approximation to QED-CCSD-2 in which single electron transitions appear with up to two photon creation operators, but double electron transitions only appear with up to single photon creation operators (termed QED-CCSD-21). This QED-CCSD-21 model has a similar structure to the approach of White, Gao, Minnich, and Chan,<cit.> which was developed to model electron-phonon interactions. Pavošević, Smith, and Rubio applied an approximate QED-CCSD-1 model (that ignores coupled two-electron plus photon interactions) to two cycloaddition reactions. In that work, the authors demonstrated that sufficiently strong coupling, along with precise control over the relative orientation of molecules and the cavity mode axis, could influence the major products of these reactions. Pavošević and Rubio have also incorporated QED-CCSD-1 into an embedding protocol<cit.> that treats a subset of a cavity-embedded molecular system using QED-CC and the remainder of the system via QED-DFT or QED-HF (termed “QED-CC-in-QED-SCF”). Assuming that electron-photon correlations are limited to the embedded region, this protocol could circumvent the high computational cost of the many-body ab initio cavity QED framework.
Haugland, Schäfer, Ronca, Rubio, and Koch used QED-CCSD-1, QED-DFT, and QED full CI to model the effects of vacuum fluctuations on the nature of intermolecular interactions.<cit.> Not surprisingly, QED-HF and QED-DFT do not provide good descriptions of intermolecular interactions in a cavity, particularly for van der Waals interactions. Additional notable observations include an R^-3 contribution to van der Waals interactions (which display R^-6 dependence in the absence of a cavity), stemming from electron-photon correlations, and an apparently infinite distance over which cavity-embedded molecules remain correlated, which results from the dipole self-energy contribution to the interaction energy. It should be noted that the coupling strength employed in this study was quite large: λ = 0.1 a.u., which, assuming a single cavity mode, corresponds to an effective mode volume of ≈ 0.2 nm^3. The authors correctly note that, at the mean-field level, multiple modes polarized along the same axis can be treated as a single effective mode with coupling strength λ_ eff^2 = ∑_i λ_i^2. Even so, some conclusions regarding long-range correlation effects involve inter-molecule distances on the order of hundreds of Å, which seems inconsistent with such large coupling strengths. More recently, Philbin, Haugland, Ghosh, Ronca, Chen, Narang, and Koch<cit.> used machine learning (ML) techniques to learn intermolecular potentials for cavity-embedded dimers of H_2 molecules, which were treated using QED-CCSD-1 plus two-photon creation operators (termed QED-CCSD-12-SD1 in that work) and QED full CI with up to five photon creation operators (QED-FCI-5). Interestingly, comparisons between QED-CCSD-1 and QED-CCSD-12-SD1 revealed that two-photon transitions are crucial for recovering the correct sign on interaction energies for H_2 molecules separated by large distances; QED-CCSD-12-SD1 and QED-FCI-5 predict these interactions to be attractive, while QED-CCSD-1 predicts a repulsive interaction. Given machine-learned potentials, path integral molecular dynamics simulations on hundreds of cavity-embedded molecules revealed that cavity-modified van der Waals interactions result in orientational order not seen in cavity-free simulations. An important caveat to note, though, is that these authors used potentials learned for large single-molecule coupling strengths (λ = 0.1 a.u.), which may not be entirely consistent with the large cavity volumes occupied by hundreds of molecules. In 2022, Riso, Grazioli, Ronca, Giovannini, and Koch<cit.> developed a formulation of QED-CCSD-1 that models interactions between electronic degrees of freedom and the quantized photon field of a chiral cavity mode. They found that a proper description requires that the photon field be treated beyond the dipole (or even multipolar) approximation, which results in a complex-valued Hamiltonian that depends on two cavity modes (for a single resonant frequency). These complications aside, Ref.  demonstrated that circularly polarized light can discriminate between enantiomers of chiral molecules embedded within a chiral cavity (e.g., via changes to the energies of the ground states of the enantiomers or their rotational spectra). Moreover, the discriminating power of the cavity increases with the number of molecules. Clearly, a large body of work has considered the effects of strong light-matter interactions on ground states of cavity-embedded systems. Somewhat less work has considered the excited-state electronic/polaritonic structure of such systems.
The initial papers<cit.> describing generalizations of CC theory for use with the PF Hamiltonian developed and applied QED-EOM-CC formalisms to cavity-embedded systems. In particular, Ref.  describes how polariton formation can manipulate conical intersections; QED-CCSD-1 calculations on a cavity-coupled pyrrole molecule show that sufficiently strong coupling can open a gap at a conical intersection between the ^1 B_1 and ^1 A_2 states. An exciting chemical consequence is that such modifications to the energy landscape could lead to changes in relaxation pathways or dynamics in chemical reactions. This idea has also been put forward in the context of linear-response QEDFT;<cit.> QEDFT simulations on cavity-embedded formaldehyde<cit.> have shown that different combinations of cavity parameters can move or suppress avoided crossings between excited states. While we have limited this discussion to consider descriptions of purely electronic strong coupling, we recognize that Vidal, Manby, and Knowles<cit.> have used similar QED-EOM-CC approaches to explore how coupling to a cavity mode can affect vibronic structure. Liebenthal and DePrince<cit.> extended QED-EOM-CC theory to consider non-particle-conserving excitation operators. Specifically, they developed a QED-EOM-CCSD-1 model for electron attachment (EA), which is a cavity QED generalization of the EOM-EA-CC approach<cit.> from electronic structure theory. One of the key findings in Ref.  was that, in order to recover electron affinities obtained from separate QED-CCSD-1 calculations on different charge states,<cit.> QED-EOM-EA-CCSD-1 calculations starting from an N-electron reference must employ the coherent-state basis defined for the (N+1)-electron state. This finding suggests that the coherent-state basis should be chosen with care in any QED-EOM-CC model that samples non-particle-conserving or non-spin-conserving sectors of Fock space. This work also revealed defects in the similarity-transformed PF Hamiltonian (i.e., complex eigenvalues) at a same-symmetry conical intersection in magnesium fluoride (MgF), involving the lower-polariton state. Such defects can emerge in standard EOM-CC theories that make use of truncated cluster expansions; the MgF example highlights that this issue persists in the cavity QED generalization of EOM-CC. We note that most QED-CC studies are formulated within the coherent-state basis introduced in Sec. <ref>. The primary reason for this choice is that it guarantees that the correlated calculation will be strictly origin invariant, even for charged species. Liebenthal, Vu, and DePrince<cit.> studied the numerical consequences of this choice by comparing QED-CCSD-1 and QED-EOM-CCSD-1 calculations in the coherent-state basis, using a QED-HF reference (termed “relaxed”), to calculations performed in the canonical Hartree-Fock basis, using a Hartree-Fock wave function that was not perturbed by cavity interactions (termed “unrelaxed”). For the unrelaxed case, they found that the presence of exponentiated single electron transitions (e^T̂_1) does a good job of accounting for orbital relaxation effects from QED-HF, while exponentiated boson creation operators (e^u_0b̂^†) can mimic the effects of the coherent-state transformation itself.
For example, ground-state unrelaxed QED-CCSD-1 energies on charged species acquire only modest origin dependence; for a cavity-bound HF^+ cation, described by a cc-pVDZ basis set and a large coupling strength of λ = 0.05 a.u., that work showed that the energy changes by less than 1× 10^-3 E_ h when shifting the molecule 10 Å from the origin. Moreover, for the most part, excitation energies from relaxed and unrelaxed QED-EOM-CCSD-1 are similar, particularly for experimentally feasible coupling strengths (i.e., λ < 0.05). These results stand in stark contrast to results obtained from unrelaxed and relaxed formulations of QED-DFT and QED-TDDFT. First, unrelaxed QED-DFT acquires a substantial origin dependence in the energy (stemming from the dipole self energy contribution). Second, relaxed and unrelaxed QED-TDDFT yield significantly different spectra, with relaxed QED-TDDFT generally doing a better job of reproducing some trends from relaxed QED-EOM-CCSD-1. These observations are important, given that multiple formulations of QED-TDDFT can be found in the literature, and not all of them account for the cavity self-consistently in the ground state.<cit.> Fregoni, Haugland, Pipolo, Giovannini, Koch, and Corni have applied QED-EOM-CCSD-1 to interactions between a molecular system and a plasmonic nano/picocavity.<cit.> Their protocol is similar to that discussed throughout this Section, except for the precise form of the Hamiltonian. First, a polarized continuum model for nanoparticles<cit.> is applied to describe the plasmon mode. Second, the dipole self-energy contribution is not included in the Hamiltonian for the coupled system. The argument for neglecting the dipole self energy is that the collective electronic oscillations comprising the plasmon excitation interact with the molecule through longitudinal Coulomb interactions, and this interaction dominates over the coupling between the molecule and the transverse component of the vector potential.<cit.> It should also be noted that, in the case of strong coupling to a cavity mode with a significant material contribution to the excitation (such as a plasmonic mode), Eq. <ref> should be augmented to include coupling between the charged particles of the molecular subsystem and the electric scalar potential ϕ(x) associated with the plasmon excitation: Ĥ_ p · A = ∑_i^N 1/2m_i(p̂_i - z_i Â_⊥)^2 + z_i ϕ(x_i) + V̂(x̂) + ħω_ cav b̂^†b̂. We note that the dipole self energy term (even if very small) still emerges upon PZW transformation of this Hamiltonian, particularly through transformation of the energy of the cavity mode, ħω_ cav b̂^†b̂ (see Eq. <ref>). Third, the bilinear coupling term takes a slightly different form. Despite these differences, the QED-EOM-CCSD-1 wave function ansatz is the same as that discussed herein. Building upon this work, Romanelli, Riso, Haugland, Ronca, Corni, and Koch<cit.> have developed a QED-CC model that folds the effects of multiple plasmonic modes into a single effective mode. Other models for plasmon-molecule interactions that make use of quantized radiation fields and parametrized plasmon modes have been proposed as well.<cit.> Lastly, a cavity QED extension of second-order perturbation theory (MP2) and the algebraic diagrammatic construction (ADC) has been developed by Bauer and Dreuw.<cit.> QED-MP2 is an approximation to QED-CCSD-1, and, like conventional ADC, QED-ADC can be thought of as a Hermitian approximation to QED-EOM-CCSD-1. The data presented in Ref. 
suggest that the QED-MP2 correlation energy is much more sensitive to the frequency of the cavity mode than the correlation energy from QED-CCSD-1. This sensitivity is increased if the QED-MP2 calculations are performed on top of Hartree-Fock reference wave functions evaluated in the absence of the cavity. Hence, it appears that, like QED-DFT and QED-TDDFT, the QED-MP2 ansatz is not as robust as QED-CCSD-1 to the description of cavity effects at the mean-field level. § TRANSFORMATION OF OPERATORS In the preceding sections, we have obtained (approximate) eigenstates of Ĥ_ CS, where Ĥ_ CS results from a unitary transformation of our original Hamiltonian in Eq. <ref>. In the following, we discuss relationships that hold between the exact eigenstates of Ĥ_ CS (which could be obtained, for example, through full configuration interaction in a complete single-particle basis) and those of Ĥ_ p · A. Although it is generally not possible to obtain the exact eigenfunctions of Ĥ_ CS or Ĥ_ p · A, we will work out practical relationships for the photonic character and the dipole operator and apply them to expectation values taken with approximate eigenfunctions obtained from the QED-CIS-1 method. The exact eigenvalues of an operator are preserved under unitary rotations, while the eigenfunctions of Ĥ_ CS are related to the eigenfunctions of Ĥ_ p · A by a unitary transformation. In particular, we have: Ĥ_ p · A⟶Ĥ_ CS via ÛĤ_ p · AÛ^† |Ψ_I⟩⟶ |Ψ^'_I⟩ via Û|Ψ_I⟩ Ĥ_ p · A |Ψ_I⟩ = E_I |Ψ_I⟩ Ĥ_ CS |Ψ^'_I⟩ = E_I |Ψ^'_I⟩. Therefore, in order for expectation values computed with these transformed eigenstates to have correspondence with the expectation values computed with the eigenstates of Ĥ_ p · A, we must transform the operators as follows: ⟨Ψ_I | Ô | Ψ_I ⟩ = ⟨Ψ_I^' | Ô^' | Ψ_I^'⟩ = ⟨Ψ_I | Û^†Ô^'Û| Ψ_I ⟩ = ⟨Ψ_I | Û^†ÛÔÛ^†Û| Ψ_I ⟩. Thus, we see that the transformation for operators to use with our transformed eigenstates is also Ô^' = ÛÔÛ^†. Specifically, following transformation of the Hamiltonian from the minimal coupling Hamiltonian in Eq. <ref> to the Pauli-Fierz Hamiltonian in the length gauge and to the coherent-state basis, we must apply the same transformations to operators for the purposes of computing expectation values with the eigenfunctions of Eq. <ref>. Some operators will commute with the operators that provide these transformations (Û_ PZW, Û_ϕ, and Û_ CS) and will be unchanged, while others will be transformed. It is common to compute the photonic character of a polaritonic state, and so here we investigate the behavior of the photon number operator, N̂_ p = b̂^†b̂, for a single photon mode. Furthermore, the dipole moment expectation value of the polariton system can be of interest,<cit.> so we will also investigate the behavior of the dipole moment operator μ̂. For a single photonic mode: Û_ PZW b̂^†b̂ Û^†_ PZW = b̂^†b̂ + i/ħ√(1/2ω_ cav)λ·μ̂ (b̂^† - b̂) + 1/ħ^21/2ω_ cav(λ·μ̂)^2, Û_ϕÛ_ PZW b̂^†b̂ Û^†_ PZWÛ^†_ϕ = b̂^†b̂ - 1/ħ√(1/2ω_ cav)λ·μ̂ (b̂ + b̂^†) + 1/ħ^21/2ω_ cav(λ·μ̂)^2, and N̂_CS = b̂^†b̂ - 1/ħ√(1/2ω_ cav) [λ· (μ̂ - ⟨μ̂⟩ )] (b̂ + b̂^†) + 1/ħ^21/2ω_ cav[λ· (μ̂ - ⟨μ̂⟩ )]^2, where N̂_CS = Û_ CSÛ_ϕÛ_ PZW b̂^†b̂ Û^†_ PZWÛ^†_ϕÛ^†_ CS.
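As a small numerical illustration of this bookkeeping (our own sketch, not taken from the references above), one can represent the photon operators in a truncated Fock space and check that an expectation value is unchanged when the state and the operator are transformed together, while the transformed number operator itself picks up displacement-type terms of the kind shown above.

import numpy as np
from scipy.linalg import expm

nph = 40                                        # truncated photon Fock space
b = np.diag(np.sqrt(np.arange(1.0, nph)), k=1)  # annihilation operator
bd = b.T                                        # creation operator
N = bd @ b                                      # photon number operator

z = 0.35                      # stand-in for the coherent-state shift parameter
U = expm(z * (bd - b))        # displacement-type unitary (orthogonal for real z)

psi = np.zeros(nph)
psi[0] = 1.0                  # photon vacuum as a test state
psi_t = U @ psi               # transformed state

# <psi|N|psi> equals <psi'|U N U^T|psi'> when state and operator are both transformed
assert np.isclose(psi @ N @ psi, psi_t @ (U @ N @ U.T) @ psi_t)

# transforming only the state changes the apparent photon number (~ z**2 here),
# which is why N itself must be transformed to N_CS for consistent expectation values
print(psi_t @ N @ psi_t)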
On the other hand, the PZW transformation preserves the expectation value of the dipole operator, because μ̂ commutes with μ̂·Â: Â acts only on photon degrees of freedom, and μ̂ must commute with itself. Similarly, since the phase and coherent-state transformations involve only photon operators and μ̂ involves only electron operators, the dipole operator is unchanged by these transformations, and we have Û_CSÛ_ϕÛ_PZWμ̂Û^†_PZWÛ^†_ϕÛ^†_CS = μ̂. Of course, we are not typically able to obtain the exact eigenfunctions for Ĥ_ CS; for example, we will perform some truncation in the single-particle basis and/or in the many-particle basis. We will derive explicit expressions in the case that we have truncated the many-particle basis consistent with QED-CIS-1; these expressions are independent of the level of truncation of the single-particle basis. Recalling the form of the QED-CIS-1 wavefunction (Eq. <ref>), we will examine the explicit expressions for the photonic occupation of a given electronic state Ψ_I, which can be defined as ⟨ N_CS⟩ = ⟨Ψ_I | N̂_CS | Ψ_I ⟩ = ⟨Ψ_I | b̂^†b̂ | Ψ_I ⟩ - 1/√(2ω_cav)⟨Ψ_I | λ· (μ̂ - ⟨μ̂⟩ )(b̂ + b̂^†) | Ψ_I ⟩ + 1/2ω_cav⟨Ψ_I | [λ· (μ̂ - ⟨μ̂⟩ )]^2 | Ψ_I ⟩. The first expectation value can be computed as follows: ⟨Ψ_I | b̂^†b̂ | Ψ_I ⟩ = |c_0^1|^2 + ∑_ia |c_ia^1|^2. The second expectation value can be computed as follows: -1/√(2ω_cav)⟨Ψ_I | λ· (μ̂_ e - ⟨μ⟩_e)(b̂ + b̂^†) | Ψ_I ⟩ = -1/√(2ω_cav) c^ T H_ blc c, where c denotes the QED-CIS-1 eigenvector for state I and H_ blc is the contribution to the Hamiltonian matrix in Eq. <ref> that contains only the elements given in Eqs. <ref> and <ref>. The third expectation value can be computed as 1/2ω_cav⟨Ψ_I | (λ·(μ̂_ e -⟨μ_ e⟩))^2 | Ψ_I ⟩ = 1/2ω_cav c^ T H_ dse c, where H_ dse is the contribution to the Hamiltonian matrix in Eq. <ref> that contains only the elements given in Eq. <ref>. We plot these various contributions and the total photon occupation of the QED-CIS-1 ground state of the MgH^+ ion as a function of the fundamental coupling strength λ = √(ħ/ϵ_0 V) for a photon polarized purely along the principal axis of the molecule in Figure <ref>. Here we denote the 0^ th order contribution as arising from Eq. <ref>, the 1^ st order contribution as arising from Eq. <ref>, the 2^ nd order contribution as arising from Eq. <ref>, and the Total as arising from the sum of these three terms, i.e., Eq. <ref>. § CONCLUDING REMARKS Despite the impressive surge of theoretical and experimental advances in polariton chemistry and molecular polaritonics, many challenges and opportunities remain to advance the field towards its full promise. While it may seem daunting to span the chasm that exists between the majority of polariton experiments (done in the regime of 10^6 to 10^9 molecules within the cavity mode volume) and the regime accessible by even large-scale atomistic methods <cit.> (∼100s of molecules), we assert that all advances in the theoretical treatment of cavity-molecule interactions provide value towards the goal of understanding and controlling polariton chemistry. In particular, single- and few-molecule strong coupling has been experimentally realized with several different cavity platforms,<cit.> and, as the limits of this regime are expanded, there is an urgent need for rigorous and non-perturbative quantum mechanical methods that can accurately capture modifications to ground- and excited-state properties and emergent phenomena.
The techniques described in this review provide such a rigorous foundation, although we should note that there are additional advances required for plasmonic nanocavities, such as rigorous inclusion of longitudinal scalar potential coupling to capture the material contribution of the plasmon excitation, and inclusion of the modified chemical environment that molecules experience in the vicinity of plasmonic particles in the dark.<cit.> Some of these effects are more naturally included in the real-space Coulomb gauge formulations described in Refs. , which then leaves us with an intriguing theoretical challenge for formulations based on Gaussian basis sets and in the length gauge, or Coulomb gauge formulations with Gaussian basis sets, as reported by Koch and co-workers.<cit.> Moreover, theoretical approaches (quantum and classical) can be deployed to approach collective strong coupling from the bottom up, which may provide valuable insights into some of the phenomena that are observed in this regime. In this case, the availability of rigorous methods to benchmark lower-scaling methods (e.g., density-functional-based approaches, parameterized and semi-empirical approaches, and classical force fields) will be paramount. We hope that this tutorial review will serve to orient researchers towards these varied areas of development, as well as to provide the foundation for further development of ab initio QED approaches and the sound deployment of these methods. Author Information Present Address: Department of Chemistry, Texas A&M University, College Station, TX 77843 Acknowledgments This material is based upon work supported by the National Science Foundation under Grant No. CHE-2100984. J.J.F. acknowledges support from the Research Corporation for Scientific Advancement Cottrell Scholar Award. J.J.F. and J.M. acknowledge support from NSF CAREER Award CHE-2043215. J.J.F. acknowledges support from the Center for MAny-Body Methods, Spectroscopies, and Dynamics for Molecular POLaritonic Systems (MAPOL) under subcontract from FWP 79715, which is funded as part of the Computational Chemical Sciences (CCS) program by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences at Pacific Northwest National Laboratory (PNNL). PNNL is a multi-program national laboratory operated by Battelle Memorial Institute for the United States Department of Energy under DOE contract number DE-AC05-76RL1830.
http://arxiv.org/abs/2307.05758v1
20230711192359
Formal and Fuzzing Amplification: Targeting Vulnerability Detection in 5G and Beyond
[ "Jingda Yang", "Ying Wang" ]
cs.CR
[ "cs.CR" ]
Formal and Fuzzing Amplification: Targeting Vulnerability Detection in 5G and Beyond Jingda Yang, Student Member, IEEE, and Ying Wang, Member, IEEE School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, USA [email protected] and [email protected] August 12, 2023 Softwarization and virtualization in 5G and beyond require rigorous testing against vulnerabilities and unintended emergent behaviors for critical infrastructure and network security assurance. Formal methods operate efficiently on protocol-level abstract specification models, and fuzz testing offers comprehensive experimental evaluation of system implementations. In this paper, we propose a novel framework that leverages the respective advantages and coverage of both formal and fuzzing methods to efficiently detect vulnerabilities from protocol logic to implementation stacks hierarchically. The attack traces detected by formal verification in critical protocols guide the case generation of fuzz testing, and the feedback from fuzz testing further broadens the scope of the formal verification. We examine the proposed framework with the 5G Non Standard-Alone (NSA) security processes, focusing on the Radio Resource Control (RRC) connection process. We first identify protocol-level vulnerabilities of user credentials via formal methods. Following this, we implement bit-level fuzzing to evaluate the potential impacts and risks of integrity-vulnerable identifier variation. Concurrently, we conduct command-level mutation-based fuzzing, with the identifiers from the formal assumptions held fixed, to assess the potential impacts and risks of confidentiality-vulnerable identifiers. Using this approach, we established one attack model and detected 53 vulnerabilities. The identified vulnerabilities, used to fortify protocol-level assumptions, can further refine the search space for subsequent detection cycles. Compared to state-of-the-art fuzz testing, this unified methodology significantly reduces computational complexity, transforming the computational cost from exponential to linear growth. Consequently, it addresses the prevalent scalability challenges in detecting vulnerabilities and unintended emergent behaviors in large-scale systems in 5G and beyond. Formal verification, fuzz testing, reinforcing loop, integrated solution, Non Standard-Alone 5G Network § INTRODUCTION Verticals in 5G and next-generation infrastructure create a diverse and intricate environment consisting of software, hardware, configurations, instruments, data, users, and various stakeholders <cit.>. Given this system complexity and the limited security emphasis among domain scientists, the resulting ecosystem requires comprehensive evaluation and in-depth validation to improve the transitional Critical Infrastructure (CI) security posture <cit.>. Two major state-of-the-art approaches, formal verification and fuzz testing, have been proposed to detect various vulnerabilities and unintended emergent behaviors of the 5G network. Formal verification provides a high-level protocol abstraction and logical proof of security and vulnerability <cit.>. For example, Hussain et al.<cit.> proposed a cross-layer formal verification framework, which integrates model checkers and cryptographic protocol verifiers by applying the abstraction-refinement principle.
In contrast, fuzz testing can offer a detailed and comprehensive experimental evaluation and detect potential vulnerabilities in the 5G code implementation platform <cit.>, and it has been proven successful in discovering critical security bugs in implemented software<cit.>. However, the limitations of conventional pick-and-choose fuzz testing and formal analysis still exist, especially with the large-scale software stacks in the system. Given that each approach possesses unique strengths, we have proposed a tandem connection between fuzz testing and formal methods to achieve more comprehensive vulnerability detection and enable high assurance in the security analysis. Further, we propose an integrated framework fortified by a reinforcing loop to detect vulnerabilities and unintended emergent behaviors in system design and implementations. Our approach advocates for a harmonized application of fuzz testing and formal analysis, aiming to establish a symbiotic cycle between these two methods. This integrated strategy is designed to facilitate the identification of vulnerabilities throughout the entire search space, thereby providing a comprehensive and robust mechanism for vulnerability detection. Formal verification provides valuable guidance and assumptions for reducing and directing fuzz testing. Conversely, fuzz testing broadens formal verification's scope by classifying uncertainty areas. Importantly, this integrated approach enables mutual amplification between the two methodologies. Following this approach, we discover multiple vulnerabilities due to the absence of rudimentary MITM protection within the protocol, which is unexpected considering that the TLS solution to this issue has been in existence for well over a decade <cit.>. Our framework, characterized by robust automation, scalability, and usability, promises to enhance security assurance and resilience across both infrastructure and domain levels, striving to guarantee the absence of additional security issues within the system. Additionally, the proposed approach could be applied to various open programmable communication platforms <cit.>. Our major contributions are summarized below: * We designed and implemented an integrated formal-guided fuzz testing framework, which significantly improves efficiency, achieves scalability for large-scale 5G systems, and enables the discovery of new and exploited vulnerabilities in the NSA 5G communication authentication process. * We performed in-depth formal analysis on the NSA 5G authentication process by converting informal protocols into a symbolic flowchart (Fig. <ref>), enabling comprehensive formal analysis. * We discovered multiple vulnerabilities due to the absence of rudimentary MITM protection that need to be addressed in the 3GPP technical standards and protocols, despite the fact that the TLS solution to this issue has been in existence for well over a decade. * With the proposed integrated formal and fuzz testing framework, we connected vulnerabilities detected by formal analysis to real-life attack models and discovered new vulnerabilities. The rest of the paper is organized as follows. Section <ref> introduces the structure of our proposed framework. We then discuss our design and formal symbolic transfer of the NSA 5G communication establishment process in Section <ref>, followed by a detailed analysis and illustration of the formal verification results in Section <ref>. Then, we propose proven solutions for each detected formal attack model, along with some novel suggestions.
In Section <ref>, we use the assumptions as a guide to apply our proposed fuzz testing framework. Lastly, in Section <ref>, we use mathematical proof to analyze the efficiency of different fuzzing strategies across varied scopes of fuzz testing. § SYSTEM DESIGN §.§ Architecture Overview We design and implement a hybrid multi-model framework for detecting vulnerabilities and unintended emergent behaviors in 5G and other communication systems. As shown in Fig. <ref>, to achieve the amplification and cross-validation of fuzz testing and formal verification, the proposed framework comprises the following components, which together build up a reinforcing loop: * Protocol Abstraction: At the beginning of the pipeline, we abstract the protocol into a symbolic language. This logical transfer makes design-level vulnerabilities easier to expose. * Formal Analysis: In the formal verification process, we employed ProVerif <cit.>, a robust tool, to conduct an in-depth analysis of our system's protocols. ProVerif offers a logical proof of security properties and potential vulnerabilities, facilitating a robust and comprehensive evaluation of the system's security integrity. * Search Space Isolation: The output of formal verification divides the search space into three sets: no vulnerabilities, attack trace detected, and uncertain areas that need further investigation. The division of the search space effectively narrows down the uncertain regions and enables the scalability of vulnerability detection. * Formal Guided Fuzz Framework: With the guidance of the formal verification conclusions, we initiate fuzz testing on runtime binary systems, focusing particularly on the predefined uncertain areas and those areas where attack traces have been detected. Fuzz testing serves a dual purpose: it is not only deployed to identify runtime vulnerabilities, thereby complementing the detection of vulnerabilities through logical proofs on protocols, but it also functions as a stochastic approach for those uncertain areas that cannot be verified through formal methods. * Fortification of Protocol and Formal Verification: We verify the vulnerabilities detected by fuzz testing and feed the results back to the formal verification and its search space. By defining the space more precisely, formal verification can be further optimized, consequently extending the scope of the security assurance area. The proposed framework, interconnected with our previous fuzzing platform <cit.><cit.>, is capable of performing mutation-based identifier fuzzing and permutation-based command fuzzing following the direction of the formal-method conclusions. Formal verification, guided fuzzing analysis of results from the actual 5G testbed, and the real-time analysis and feedback construct a reinforcing loop in our system. §.§ Abstraction of NSA 5G Authentication Protocol Compared to the SA 5G network architecture, the NSA 5G architecture is still widely adopted but more vulnerable because of the complexity introduced by LTE compatibility in protocol designs and infrastructure implementation, especially for authentication and authorization. Therefore, we focus on the authentication process in the NSA 5G architecture. As shown in Fig. <ref>, the abstracted protocol authentication process in the NSA architecture includes four parts: RRC Connection Setup, Mutual Authentication, NAS Security Setup, and AS Security Setup. Considering the scope of this paper and the relative criticality of these parts, we pilot on the RRC connection setup for in-depth analysis.
The RRC Connection Setup is a pivotal step in the initial establishment of communication between a mobile device and the network in the LTE and 5G NR frameworks. This procedure is initiated by the network upon receiving a connection request from the UE, commonly in response to an initiating event such as a call or data session initiation. The RRC connection setup process aims to establish a connection at the RRC layer. Further, we abstract and derive the dependency table, presented as Table <ref>, from the defined protocol, considering four essential security properties: confidentiality, integrity, authentication, and accounting. Utilizing Table <ref>, we construct the corresponding dependency graph, as depicted in Fig. <ref>, to provide a visual representation of the security dependency relationships. §.§ Formal Guided Fuzz Framework Compared to traditional fuzz testing, which requires a complete understanding of the code implementation, such as LZFUZZ <cit.>, we propose a novel formal-guided, identifier-based fuzzing framework. In our proposed fuzzing framework, we first fix the values of critical identifiers under the assumptions proved by the formal verification and collect the communicated commands. Then we set up a relay attack mechanism on the srsRAN platform <cit.> following the attack traces detected by formal verification. § FORMAL DETECTED ATTACK MODEL AND ANALYSIS In this section, we present a proof-of-concept via an illustrative attack model detected using ProVerif <cit.>. A comprehensive summary of all attack models identified in the 5G authentication and authorization process from our findings is depicted in Table <ref>. We specifically focus on the RRC connection setup for an in-depth demonstration. §.§ User Credentials Disclosure In this attack, the adversary can exploit the transparency of the RRC Connection Setup process to effortlessly access critical user identity information, which includes but is not limited to the UE identity and establishment cause. This illicit access enables the adversary to acquire user information and use the ensuing session key for nefarious activities such as eavesdropping and manipulation of subsequent communications. Assumption. Analyzing Fig. <ref>, we can conclude that the adversary can exploit the transparency of the RRC Connection Setup process to directly access any identifier within the message. Furthermore, the adversary is also capable of establishing a fake UE or a MITM relay to eavesdrop on and manipulate the messages within the RRC Connection Setup process. To verify the security properties of identifiers within the RRC Connection Setup process, including aspects such as confidentiality and consistency, we converted the aforementioned assumptions into ProVerif code. Vulnerability. As depicted in Fig. <ref>, the UE initiates the process by sending an RRC connection request to the CN. Upon receiving this request, the CN responds by transmitting the radioResourceConfigDedicated back to the UE. The UE, in turn, obtains authentication from the CN and responds with the RRC-Transaction Identifier, selectedPLMN-Identity, and dedicatedInfoNAS to finalize the RRC connection setup. Nevertheless, this process presents an exploitable vulnerability, as an adversary can access all message identifiers. Such unprotected identifiers run the risk of being eavesdropped upon and modified, potentially enabling the adversary to orchestrate a MITM relay attack. Attack Trace Description.
Employing formal verification, we analyzed the confidentiality of identifiers within the RRC Connection Setup process. Through this methodical investigation, we identified two categories of identifiers with the most significant impact: user identities and RRC configuration identifiers. As illustrated in Fig. <ref>, an attacker can access the identifiers marked in red, delineating the pathway of the attack. In the first scenario, an adversary with access to a user identity, such as the UE-identity, is capable of launching a DoS attack with a real UE-identity. Contrary to traditional DoS attacks, which aim to overwhelm a system's capacity, a UE-identity-based DoS attack efficiently disrupts the CN verification mechanism through repeated use of the same UE-identity, leading to authentication confusion. In the second case, with a computationally derived RRC-Transaction Identifier, the adversary can establish a fake base station or perform a MITM relay attack by manipulating these identifiers. In the latter case, the adversary positions itself between the UE and the CN, intercepting and modifying communications in real time. Consequently, this attack model presents a severe threat to the security and integrity of the mobile network's communication. Fortification via Formally Traced Vulnerability. Given the significance and susceptibility of identifiers within the RRC Connection Setup process, it is imperative to implement integrity protection measures for the RRC-Transaction Identifier. Additionally, adopting a hash value approach can assist in preventing the disclosure of the UE identity, further reinforcing security measures in this critical process. § FORMAL GUIDED FUZZING ANALYSIS As detailed in Section <ref>, formal verification delineates the system's security landscape into three zones: safe, non-safe, and undetermined. While the safe area necessitates no further scrutiny, the non-safe and undetermined areas warrant further investigation using fuzz testing. Specifically, we leverage fuzz testing to evaluate the impact of the non-safe areas within implementation stacks, as well as to ascertain the security level within the regions previously undetermined. By leveraging our previously proposed framework <cit.>, we effectively assess the security status of regions initially verified through formal methods. Due to the constraints of page length, we present a single example to illustrate the operation of our formally guided fuzzing framework. This example specifically demonstrates how the framework assesses the impact of provable attacks that have been identified through formal verification. §.§ MITM bit-level fuzzing In light of the identified vulnerabilities relating to confidentiality and integrity, we have developed a bit-level fuzzing test to examine the effects of the exposed UE-identity and EstablishmentCause. The results, as displayed in Table <ref>, highlight two distinct outcomes. Modification of the UE-identity has a minimal impact on authentication and communication, albeit with some added latency. Conversely, alterations to the EstablishmentCause lead to a change in authentication type, a factor critical to the authentication establishment process, such as transforming an emergency request into data mode. There are a total of 8 types of vulnerabilities that leverage the EstablishmentCause.
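To make the bit-level procedure concrete, the following Python sketch illustrates the kind of single-bit mutation a MITM relay can perform on the intercepted RRC Connection Request fields (the 40/4/1-bit field split is detailed in the next subsection). This is an illustration written for this discussion rather than the authors' released tooling, and the send() hook standing in for the srsRAN-based relay is hypothetical.

import random

# field layout of the 45-bit RRC Connection Request payload: (bit offset, width)
FIELDS = {"ue_identity": (0, 40), "establishment_cause": (40, 4), "spare": (44, 1)}

def flip_bit(payload: int, bit: int) -> int:
    """Return the payload with one bit flipped (bit 0 = most significant bit)."""
    return payload ^ (1 << (44 - bit))

def bit_level_fuzz(payload: int, trials: int, send):
    """Flip one random bit per trial, relay the mutated message, and log the field hit."""
    log = []
    for _ in range(trials):
        bit = random.randrange(45)
        field = next(name for name, (off, width) in FIELDS.items()
                     if off <= bit < off + width)
        log.append((field, bit, send(flip_bit(payload, bit))))
    return log

Grouping the resulting log by field is what allows the outcomes above to be separated into low-impact UE-identity mutations and high-impact EstablishmentCause mutations.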
Based on these bit-level fuzzing results, we can partition the provably insecure areas of the RRC Connection Request into two categories: areas with less impact, including UE-identity, and areas with substantial impact, encompassing EstablishmentCause. Consequently, in subsequent fuzzing tests, we can strategically exclude UE-identity fuzzing, focusing instead on the chain effects generated by high-risk identifiers. §.§ Command-level fuzzing Assuming complete disclosure of all necessary UE identities and an unprotected RAND in the Authentication Request of the Mutual Authentication process, it is a reasonable deduction that an adversary can acquire the RNTI, which is derived from UE identities, and RAND, a crucial identifier for generating a session key. Unlike the boundless scenarios possible with black-box fuzzing, our approach uses a fixed session key to concentrate on the impact of a MITM attack, thereby eliminating the computational waste associated with guessing random identifiers and UE identities. Building on our previously proposed probability-based fuzzing strategy <cit.>, we have established a more efficient method for identifying unintended vulnerabilities that prove challenging to detect via formal verification. A comparison between random fuzzing and our probability-based approach (Fig. <ref>) reveals that our proposed probability-based framework requires only 36.5% of the number of fuzzing cases used in a random fuzzing strategy to detect all 43 vulnerabilities <cit.>. § PERFORMANCE AND EFFICIENCY ASSESSMENT Our proposed fuzz testing framework, guided by formal verification, affirms the viability of our integrative approach combining both formal and fuzz testing frameworks. In this section, we analyze the efficiency of fuzz testing and explore the relationship between formal verification and fuzz testing, underscoring the potential benefits of our innovative strategy. Fuzz testing is a methodical, brute-force approach to detecting vulnerabilities, accomplished by supplying an extensive range of random data to uncover potential security threats. However, due to computational constraints, exhaustive vulnerability detection for the entire 5G NSA protocol, even for a singular command, is not practical. To increase the efficiency of fuzz testing, the rule-based mutation fuzz testing strategy has been proposed <cit.>. This strategy refines the scope of fuzz testing to specific identifiers in line with protocol rules. Although the rule-based mutation fuzz testing strategy yields a substantial reduction in computational complexity, it can still produce meaningless, randomly generated inputs. As a response, we introduce a formal-guided fuzz testing strategy. This strategy complies with formal verification assumptions and generates three categories of representative inputs: formal-based legal inputs, formal-based illegal inputs, and randomly generated inputs. While formal-based inputs must adhere to the protocol-defined rules or format, randomly generated inputs are not bound by these restrictions. The comparative efficiency of different fuzz strategies across four distinct processes is depicted in Figure <ref>. A detailed performance analysis of these varied fuzzing strategies is provided in the following section. Based on the guidance of formal verification in Section <ref>, the RRC Connection Request command, which includes 40 bits of UE-Identity, 4 bits of EstablishmentCause, and 1 bit of spare, is vulnerable to DoS or MITM attacks. 
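The case-count comparison that follows can be illustrated with a minimal sketch of formal-guided case generation; the specific legal and illegal values below are placeholders rather than values taken from the 3GPP specification, and only the per-identifier structure (one legal, one illegal, and one random out-of-rule case each) reflects the strategy described next.

import random

def formal_guided_cases():
    """Representative per-identifier cases used in place of exhaustive enumeration."""
    cases = []
    # UE-Identity (40 bits): legal, illegal, and random out-of-rule placeholder values
    cases += [("ue_identity", v) for v in (0x0123456789, 0xFFFFFFFFFF, random.getrandbits(40))]
    # EstablishmentCause (4 bits): legal, illegal/reserved, and random placeholder values
    cases += [("establishment_cause", v) for v in (0b0001, 0b1111, random.getrandbits(4))]
    # spare (1 bit): legal, illegal, and random values
    cases += [("spare", v) for v in (0, 1, random.getrandbits(1))]
    return cases

print(len(formal_guided_cases()), "formal-guided cases versus", 2**45, "brute-force cases")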
Traditional brute-force fuzz testing generates more than 2^45 fuzzing cases, and rule-based fuzzing generates 2^40+2^4+1 fuzzing cases based on the defined identifiers. However, our formal guided fuzzing strategy requires only 9 fuzzing cases, including one legal UE-Identity case, one illegal UE-Identity case, one random out-of-rule UE-Identity case, 2 legal/illegal EstablishmentCause cases, one random out-of-rule EstablishmentCause case, one legal spare case, one illegal spare case, and one out-of-rule spare case. Our proposed framework has the capacity not only to validate the impact and security of identifiers, but also to detect unintended vulnerabilities based on high-risk assumptions, such as an identifier set that is accessible to an adversary. As corroborated by the evidence presented in Section <ref>, our framework proves highly efficient in detecting vulnerabilities, underscoring its potential utility in enhancing system security. § CONCLUSION In this paper, we have introduced an innovative framework that integrates formal verification and fuzz testing to fortify the security of 5G systems, effectively addressing the vulnerabilities from protocol logic to implementation stacks. The dynamic feedback loop within this framework has demonstrated its strength in both the refinement of undefined areas and the exhaustive detection of potential vulnerabilities. This work has been illuminated through an application on a continuous loop in the RRC Connection Setup process, illustrating the practicability and effectiveness of our proposed methodology. In the initial phase, our framework identifies a formal attack model through the application of formal verification. Subsequently, leveraging the protocol-level exposure of user credentials, the proposed framework employs bit-level and command-level fuzzing to execute comprehensive impact identification and simulate plausible attacks. As a result, by relying on the verified impact and the security status of the identifier or command determined by the fuzz test, our framework robustly reinforces protocol-level assumptions and refines the detection area. Notably, this integrated approach significantly mitigates computational complexity, transitioning it from exponential to linear growth. This scalability ensures that the framework can accommodate larger datasets or more complex scenarios without a drastic increase in computational resources or processing time, making it suitable for extensive applications in 5G security testing. To conclude, our research presents a pioneering step towards bolstering 5G security by employing an integrated, hierarchical approach to vulnerability detection. This work contributes substantially to the ongoing efforts to secure the next generation of wireless communications and provides a foundation for future research in this domain. Further studies might explore extending this approach to other advanced wireless technologies to ensure robust security in our increasingly connected world. § ACKNOWLEDGMENT This effort was sponsored by the Defense Advanced Research Project Agency (DARPA) under grant no. D22AP00144. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. IEEEtran
http://arxiv.org/abs/2307.04130v1
20230709090053
The 21-cm forest as a simultaneous probe of dark matter and cosmic heating history
[ "Yue Shao", "Yidong Xu", "Yougang Wang", "Wenxiu Yang", "Ran Li", "Xin Zhang", "Xuelei Chen" ]
astro-ph.CO
[ "astro-ph.CO" ]
Ultrasonic Image's Annotation Removal: A Self-supervised Noise2Noise Approach Yuanheng Zhang, Nan Jiang, Zhaoheng Xie, Junying Cao*, Yueyang Teng* Y. Zhang is with the College of Medicine and Biological Information Engineering, Northeastern University, China. N. Jiang is with the Department of Ultrasound, General Hospital of Northern Theater Command, China. Z. Xie is with the Institute of Medical Technology, Peking University, China. J. Cao is with the Department of Ultrasound, General Hospital of Northern Theater Command, China. Y. Teng is with the College of Medicine and Biological Information Engineering, Northeastern University, China. J. Cao and Y. Teng contributed equally to this work. This work is supported by the Natural Science Foundation of Liaoning Province (2022-MS-114). This work is supported by the Key R&D Plan Projects of Liaoning Province in 2020 (Project No. 2020JH2/10300122). August 12, 2023 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== * Key Laboratory of Cosmology and Astrophysics (Liaoning) & College of Sciences, Northeastern University, Shenyang 110819, China * National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China * Key Laboratory of Radio Astronomy and Technology, Chinese Academy of Sciences, A20 Datun Road, Chaoyang District, Beijing 100101, China * School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China * Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China * National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Northeastern University, Shenyang 110819, China * Key Laboratory of Data Analytics and Optimization for Smart Industry (Ministry of Education), Northeastern University, Shenyang 110819, China * Center for High Energy Physics, Peking University, Beijing 100871, China The absorption features in spectra of high-redshift background radio sources, caused by hyperfine structure lines of hydrogen atoms in the intervening structures, are known collectively as the 21-cm forest. They provide a unique probe of small-scale structures during the epoch of reionization, and can be used to constrain the properties of the dark matter (DM) thought to govern small-scale structure formation. However, the signals are easily suppressed by heating processes that are degenerate with a warm DM model. Here we propose a probe of both the DM particle mass and the heating history of the Universe, using the one-dimensional power spectrum of the 21-cm forest. 
The one-dimensional power spectrum measurement not only breaks the DM model degeneracy but also increases the sensitivity, making the probe actually feasible. Making 21-cm forest observations with the upcoming Square Kilometre Array has the potential to simultaneously determine both the DM particle mass and the heating level in the early Universe, shedding light on the nature of DM and the first galaxies. The 21-cm line of neutral hydrogen (HI) traces various structures throughout cosmic history. Complementary to the 21-cm tomographic observation, the 21-cm absorption signal against high-redshift radio point sources probes intervening structures along individual lines of sight <cit.>. The structures located at different distances along the sightline resembles forest structure on the background source spectrum, which is called 21-cm forest in analogy to the Lyman α (Lyα) forest. The high frequency resolution of radio telescopes allows the 21-cm forest to be a promising probe to small-scale structures during the epoch of reionization (EoR) <cit.>. In warm dark matter (WDM) models, the small-scale power is suppressed by free-streaming effect compared with the standard cold dark matter (CDM) model <cit.>. Using Lyα forest as a tracer of small-scale structures, this effect has been used to constrain the WDM particle mass at low redshifts <cit.>. Similarly, the 21-cm forest can potentially be used deep into the EoR <cit.>, as the decreased number of low-mass halos leads to weaker 21-cm forest signals. Methods have been developed to improve the detection of 21-cm forest signal <cit.>. However, the 21-cm forest signal can also be suppressed by heating effects during the early galaxy formation<cit.>. While this means that it is a sensitive probe of the temperature of the intergalactic medium (IGM) <cit.>, it is degenerate with the WDM suppression effect <cit.>, making the interpretation of observations ambiguous. Nevertheless, the WDM reduces mainly the number density of 21-cm absorption lines <cit.>, whereas a higher IGM temperature suppresses both the absorption depth and line number density <cit.>. This difference makes it possible to distinguish these two effects statistically. In this Article, we simulate 21-cm forest signals during the EoR under the influence of different dark matter (DM) particle masses and different heating histories of the IGM. We show that although the IGM heating and WDM both suppress the 21-cm signal, they behave differently. By measuring the one-dimensional (1D) power spectrum along lines of sight, it is possible to break the degeneracy, and constrain the DM particle mass and the IGM temperature (hence the early heating history) simultaneously. To simulate the 21-cm forest from the EoR, a high dynamic range is required to model the large-scale structures in density and ionization fields on ≳ 100 comoving-megaparsec scales, while resolving small-scale halos and their ambient gas on approximately kiloparsec scales. We use a hybrid approach to achieve this. 
The cosmological evolution of large-scale density and ionization fields is simulated with the semi-numerical simulation 21cmFAST <cit.>, with a comoving box size of 1 Gpc with 500^3 grids, where the initial density fluctuations are set by DM properties, while each of the (2 Mpc)^3 grid is further divided into 500^3 voxels, and populated with halos of various masses according to the local grid density and the conditional halo mass function <cit.>, which depends on the matter power spectrum regulated by the DM particle mass. The density in each voxel is determined by the Navarro-Frenk-White profile <cit.> or the infall model profile <cit.> according to its distance to the nearest halo (Methods). § RESULTS Recent astrophysical observations have put lower limits on the WDM particle mass (m_ WDM) of a few kiloelectronvolts <cit.>. We simulate the 21-cm forest signals assuming m_ WDM = 10 keV, 6 keV, and 3 keV, respectively, to be compared with the signals from a CDM model. The 21-cm optical depth depends on the density, the neutral fraction of hydrogen gas, and the spin temperature T_ S. The density field and ionization field are simulated according to the DM properties as described above, with more details given in Methods. T_ S is assumed to be fully-coupled to the gas kinetic temperature T_ K by the early Lyα background<cit.>, and T_ K is determined by the heating history of the IGM, or the virial temperature of halos, depending on the gas location (Methods). The heating history of neutral IGM during the EoR is computed taking into account the adiabatic expansion of the universe, the Compton heating/cooling, and the X-ray heating. We model the X-ray emissivity as proportional to the formation rate of early non-linear structures <cit.>, normalized by an X-ray production efficiency parameter f_ X (Methods). Assuming an unheated IGM (f_ X = 0), the 21-cm optical depth (top panels) and the differential brightness temperature (negative, bottom panels) along a line of sight at z=9 are shown in Fig. <ref>, for CDM (left column) and various WDM particle masses (right columns), respectively. In the lower panels, the flux density of the background source, scaled to 150 MHz assuming a power-law spectrum, is assumed to be S_150 = 1 mJy, 10 mJy, and 100 mJy, from top to bottom respectively. The overall 21-cm absorption depth in WDM models is comparable to the signal level in the CDM model, both corresponding to the absorption depth by the unheated IGM. However, the small-scale fluctuations are notably reduced in WDM models, due to the more suppressed formation of low-mass halos. Note that the major contribution to the 21-cm forest signal is from the overdense gas in the halo surroundings which is not heated by virialization shocks <cit.>. These small-scale fluctuations are also suppressed, resulting in sparser absorption lines in the spectra. Figure <ref> shows the 21-cm optical depth (top panels) and brightness temperature (bottom panels) spectra at z∼ 9 in the CDM model, assuming different X-ray efficiency parameters. As f_ X increases, the IGM is increasingly heated, increasing the spin temperature and notabley reducing the 21-cm forest signal. 
The dotted and dashed lines in the lower panels correspond to the thermal noise levels expected for phase-one and phase-two low-frequency arrays of the Square Kilometre Array (denoted by SKA1-LOW and SKA2-LOW), for which array sensitivities of A_ eff / T_ sys = 800 m^2 K^-1 <cit.> and 4000 m^2 K^-1 <cit.> (with A_ eff being the total effective area and T_ sys being the system temperature) are adopted, respectively. For both arrays, we assume a maximum baseline of 65 km, a channel width of 1 kHz, and an integration time of 100 hours (hr). For the case with negligible early X-rays, the 21-cm forest signal can be marginally detected by the SKA1-LOW for sources with S_150∼ 1 mJy, while the same signal will be easily detected with SKA2-LOW. However, the heating will notably diminish the detectability of individual absorption lines, weakening the probing power of the 21-cm forest on either the DM properties, or the thermal history of the IGM. Even if f_ X = 0.1, i.e. the early star formation has only ∼ 10% X-ray productivity as that of nearby starburst galaxies, the IGM will be heated to about 56 K at z = 9, then direct measurement of the 21-cm forest would only be possible for extremely bright quasars with S_150≳ 100 mJy for SKA1-LOW, or S_150≳ 10 mJy for SKA2-LOW, otherwise a much longer integration time would be required. If f_ X≳ 1, the IGM would be heated to ≳ 650 K at z = 9, then direct detection of the forest signal will be challenging even for SKA2-LOW. The heating would be weaker at higher redshifts, but then it would be more difficult to find a suitable quasar as background source. Moreover, only the fluctuating part of the absorption is measurable in 21-cm forest observation, while the overall absorption depth from the homogeneous IGM would be effectively subtracted when comparing with the intrinsic continuum <cit.>. If we simply count the absorption lines with a certain threshold of optical depth or equivalent width, the effects of a WDM model and a more heated IGM would be degenerate, both reducing the number of detectable absorbers <cit.>. A statistical variable with more distinguishing power is needed. As we shall show below, the 1D power spectrum of 21-cm forest along the line of sight <cit.> can serve this purpose. The left panel of Fig. <ref> compares the 1D power spectra of 21-cm forest in the CDM model with different f_ X. The 21-cm optical depth is inversely proportional to the gas temperature, and proportional to the density. As f_ X increases, the IGM is increasingly heated, and the 1D power spectrum is notably suppressed on all scales. When the IGM is cold, the high contrast in temperature between gas in halos and gas in the IGM far from halos dominates the large-scale fluctuations in the optical depth, with typical scales corresponding to the clustering scales of halos of various masses. As f_ X increases from 0 to 1, the IGM far from halos with the lowest temperature is heated first, suppressing the temperature contrast on scales of halo clustering, which results in the flattening of 1D power spectrum on large scales. When f_ X = 1, the IGM temperature is about 650 K at z = 9, comparable to the virial temperature (∼ 1000 K) of the smallest halos holding gas (with mass M_ min∼ 10^6 M_⊙, Methods), then the large-scale fluctuations in the temperature are mostly smoothed, leaving only a flatter power spectrum originated from density fluctuations. The 21-cm forest and its 1D power spectrum are further reduced when f_ X increases from 1 to 3. 
The 1D power spectra all drop off on small scales corresponding to the clustering scale of the smallest halos holding gas, and the cut-off at the small-scale end is set by the spectral resolution assumed. The right panel of Fig. <ref> shows the results for different DM properties assuming an un-heated IGM (f_ X = 0). The lower m_ WDM results in a much lower level of small-scale density fluctuations, thus suppressing the small-scale 21-cm forest power spectrum. Note that with the same thermal history, the overall amplitude of the 1D power spectrum remains similar for different m_ WDM, while the slope will be steeper for a warmer DM model. This behavior is distinct from the heating effect, which suppresses the 1D power spectrum more dramatically on all scales. The dotted and dashed lines in Fig. <ref> indicate the thermal noise in the power spectrum, P^ N, expected for SKA1-LOW and SKA2-LOW respectively, utilizing 10 background sources. The error bars include both the thermal noise of SKA2-LOW and the sample variance (see Methods). As shown in Fig. <ref>, for a background source with S_150 = 10 mJy, direct measurement of 21-cm forest becomes difficult if f_ X≳ 0.1, and almost impossible even for SKA2-LOW if f_ X≳ 1. However, the 1D power spectrum of 21-cm forest can be measured precisely by SKA1-LOW over a broad range of wavenumber k if f_ X∼ 0.1, and it is still detectable by SKA2-LOW with S_150∼ 10 mJy sources even if f_ X = 3 at z=9. This is because the absorption appears as an increased variance and can be measured statistically from the power spectrum even if individual absorbers are too weak to be detected with notableness <cit.>. The 1D power spectrum measurement also allows extraction of the scale-dependent information encoded in the density and temperature fields, in contrast to the flatter thermal noise. So the observation of the 21-cm forest by 1D power spectrum is not only more feasible, but also has better discriminating power for the effects of IGM heating and the WDM. Fig. <ref> shows the 1D power spectra for different f_ X and m_ WDM assuming S_ 150 = 1 mJy, 10 mJy, and 100 mJy, respectively. Using 1D power spectrum, with 10 background sources of S_ 150∼ 1 mJy and a moderate integration time of ∼ 100 hr, the 21-cm forest signal will be detectable by SKA2-LOW if f_ X≲ 0.1, for all DM particle masses considered here. For brighter sources with S_ 150≳ 10 mJy, the full shape of 1D power spectrum can be well characterized, and a broader range of possible f_ X values can be probed. Therefore the 21-cm forest 1D power spectrum will not only break the degeneracy between the effects of WDM and heating, but also be vital to make the probe feasible in practice. The Universe may also accommodate both a heated IGM and WDM particles, both regulating the amplitude and shape of the 1D power spectrum of 21-cm forest. We simulate the signals for various combinations of f_ X and m_ WDM values, and measure the amplitude P and the slope β = dlogP(k)/ dlogk of the 1D power spectra at k = 40 Mpc^-1. The top panels of Fig. <ref> show that the amplitude of the 1D power spectra roughly determines f_ X, or the IGM temperature, with a weak degeneracy between a higher f_ X and a smaller m_ WDM. On the other hand, the slope in the bottom panels shows a different degeneracy; a flatter power spectrum indicates a higher f_ X and/or a larger m_ WDM, while a steeper one implies a lower f_ X and/or a smaller m_ WDM. 
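As a rough illustration of how these two diagnostics could be extracted from a simulated sightline, the following Python sketch computes the 1D power spectrum with an FFT-based estimator matching the definition P(k) = |delta T(k)|^2 / Delta r_z given later in the Methods, and then reads off the amplitude and the log-log slope beta near k = 40 Mpc^-1. The 4 kpc pixel size follows the voxel size quoted in the Methods; the placeholder spectrum is an assumption for demonstration only.

```python
import numpy as np

def one_d_power_spectrum(delta_tb_mk, dr_mpc):
    """P(k) = |FT[delta T_b]|^2 / (segment length), for one sightline segment."""
    n = delta_tb_mk.size
    length = n * dr_mpc                                   # comoving length of segment
    ft = np.fft.rfft(delta_tb_mk) * dr_mpc                # discrete approx. of the integral
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dr_mpc)        # k in Mpc^-1
    power = np.abs(ft) ** 2 / length                      # mK^2 Mpc
    return k[1:], power[1:]                               # drop the k = 0 mode

def amplitude_and_slope(k, power, k0=40.0):
    """Amplitude P(k0) and slope beta = dlogP/dlogk around k0 = 40 Mpc^-1."""
    logk, logp = np.log10(k), np.log10(power)
    i = np.searchsorted(logk, np.log10(k0))
    # central finite difference of log P(k) around log k0
    beta = (logp[i + 1] - logp[i - 1]) / (logk[i + 1] - logk[i - 1])
    return 10 ** np.interp(np.log10(k0), logk, logp), beta

# Illustrative usage with a fake sightline: 2500 pixels of 4 kpc = 10 comoving Mpc.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    delta_tb = -5.0 * rng.random(2500)        # placeholder absorption spectrum in mK
    k, p = one_d_power_spectrum(delta_tb, dr_mpc=0.004)
    amp, beta = amplitude_and_slope(k, p)
    print(f"P(40/Mpc) = {amp:.3e} mK^2 Mpc, slope = {beta:.2f}")
```

In practice the estimate would be averaged over many 10-comoving-Mpc neutral segments before measuring P and beta, as done in the text.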
Therefore, the amplitude and slope of 21-cm forest 1D power spectrum can be diagnostic characters for the DM particle mass and the IGM temperature. When combined, one can effectively break the degeneracy and determine f_ X and m_ WDM simultaneously. With the 21-cm forest 1D power spectrum measured from 100 neutral patches of 10 comoving megaparsec at z = 9, we use the Fisher matrix formalism to forecast constraints on m_ WDM and T_ K as expected for both SKA1-LOW and SKA2-LOW, including the thermal noise and sample variance. Fig. <ref> shows that if the IGM was only weakly heated, then very tight constraints can be put on both m_ WDM and T_ K, with σ_m_WDM = 1.3 keV and σ_T_K = 3.7 K for the fiducial model of m_ WDM = 6 keV and T_ K = 60 K after a total observation time of δ t = 100 hr on each source using SKA1-LOW, and σ_m_WDM = 0.3 keV and σ_T_K = 0.6 K using SKA2-LOW. σ_m_ WDM and σ_T_ K are marginalized absolute errors. If the IGM was heated up to 600 K at z = 9 (corresponding to f_ X = 1), then SKA2-LOW would be required, and we expect to have σ_m_WDM = 0.6 keV and σ_T_K = 88 K. The probe is more sensitive for lower values of m_ WDM. Note that these constrains can be obtained by measurements on segments of neutral patches along sightlines against 10 background sources with S_ 150 = 10 mJy. The constraints would be better if more sources at different redshifts, or brighter sources, are available. § DISCUSSION The 21-cm signal from the EoR can potentially be used to constrain DM properties <cit.>, but the degeneracies with astrophysical effects can be an obstacle<cit.>. During the EoR, there are various feedback effects <cit.>. Here we consider primarily radiative feedbacks, including Lyα photons coupling T_ S to T_ K, ionizing photons determining the large-scale ionization field, and X-ray photons heating the IGM. The mechanical and chemical feedbacks affect the density profiles and the cooling mechanisms, but have minor influences on the 21-cm forest. The main focus of this work is the heating effect that is most important in reducing the 21-cm forest signal and is degenerate with the WDM effect. Using a set of semi-numerical simulations covering a high dynamic range, we show that both the presence of WDM and an early X-ray heating can reduce the number of observable 21-cm absorbers. This degeneracy hinders the 21-cm forest from being an effective probe to either the DM properties or the thermal history of the universe. We have demonstrated that the 1D power spectrum of 21-cm forest is a good observable to break this degeneracy, and is even effective in high heating-rate cases in which the number of 21-cm forest lines is severely diminished. By quantifying the fluctuations, the 1D power spectrum of the 21-cm forest is also immune to subtraction of the overall absorption from the homogeneous IGM in practical observations. The DM particle mass and the IGM temperature at a specific redshift can be simultaneously constrained. Although in our simulation the gas density profile surrounding a halo is based on simple models, this does not have much impact on the number density and the clustering properties of absorption lines, which determines the main characteristics of the 1D power spectrum. We also note that the overall signal level is dependent on the local density δ_0 in the large-scale environment. We investigate the effect of local density by computing the 21-cm forest signals on different grids, with various densities on the 2 Mpc scale. As shown in Extended Data Figs. 
1 and 2, the local density affects the overall magnitude of signals, but the effect is much weaker than the heating, even in the extreme case of δ_0 = 2 in a grid of ∼ 2 Mpc. Meanwhile, the local density has almost negligible effect on the shape of 1D power spectrum, making the effect distinguishable from the WDM effect. While direct detection of individual 21-cm absorption lines will be challenging if the early IGM is heated, the 1D power spectrum measurement is more promising. The observation relies on the availability of high-redshift radio-bright sources prior to reionization. Quite a number of radio-loud quasars have been detected beyond redshift 5 <cit.>, including nine at z > 6<cit.>. A few hundred radio quasars with > 8 mJy at z ∼ 6 are expected to be spectroscopically observed in the near future <cit.>. As there is no evidence for the evolution in the radio loudness fraction of high-z quasars <cit.>, one can expect about ∼ 2000 sources with > 6 mJy at 8 < z < 12 <cit.>. The long-duration gamma-ray bursts (GRBs) are also possible high-redshift sources. Several cases have been discovered beyond redshift 8 <cit.>. For future missions like the High-z Gamma-ray bursts for Unraveling the Dark Ages Mission and the Transient High-Energy Sky and Early Universe Surveyor, the expected detection rate of luminous GRBs from Population III stars is 3 – 20 yr^-1 at z > 8 <cit.>. Given the higher sensitivity of 1D power spectrum observation, radio afterglows of high-z GRBs could also be used. The fast radio bursts, though brighter, are however too brief to allow long integration required. Current combination of astrophysical probes of strong gravitational lensing, Lyα forest, and luminous satellites of our Galaxy indicates that m_ WDM may be larger than 6 keV<cit.>, but models with m_ WDM of a few keV are still not excluded. On the other hand, tomographic 21-cm power spectrum measurement, in combination with complementary probes, yield a constraint on the IGM temperature of 8.9 K < T_ K < 1.3× 10^3 K at z∼ 8 at 68% confidence<cit.>. With the upcoming SKA-LOW, the 21-cm forest observation, especially the 1D power spectrum, can improve the constraints on both the properties of DM and the thermal history of the early universe simultaneously, providing an effective probe to the DM in an unexplored era in the structure formation history, and to the first galaxies interplaying with the early IGM. § METHODS §.§ The 21-cm forest signal. Using high-redshift quasars or radio afterglows of GRBs as background radio sources <cit.>, the HI in halos and in the IGM absorbs 21-cm photons along the line of sight. The 21-cm forest signal is the flux decrements due to 21-cm absorption with respect to the continuum of a background radio source, which in the Rayleigh-Jeans limit is characterized by the differential brightness temperature. In the optically-thin limit, which is usually the case for the 21-cm transition, the observed differential brightness of the 21-cm absorption signal, relative to the brightness temperature of the background radiation T_γ(ŝ, ν_0, z) at a specific direction ŝ and redshift z, is δ T_ b(ŝ, ν) ≈T_ S(ŝ, z)-T_γ(ŝ, ν_0, z)/1+zτ_ν_0(ŝ, z). Here ν_0 = 1420.4 MHz is the rest-frame frequency of 21-cm photons, T_ S is the spin temperature of the absorbing HI gas, and τ_ν_0 is the 21-cm optical depth. 
In terms of the average gas properties within each voxel, the 21-cm optical depth can be written as <cit.> τ_ν_0(ŝ, z) ≈ 0.0085[1+δ(ŝ, z)] (1+z)^3/2[x_ HI(ŝ, z)/T_ S(ŝ, z)] [H(z) /(1+z)/ d v_ / d r_] (Ω_ bh^2/0.022)(0.14/Ω_ mh^2), where δ(ŝ, z), x_ HI(ŝ, z), and H(z) are the gas overdensity, the neutral fraction of hydrogen gas, and the Hubble parameter, respectively, and d v_/ d r_ is the gradient of the proper velocity projected to the line of sight. Ω_ b, Ω_m and h are baryon density parameter, matter density parameter and dimensionless Hubble constant, respectively. The brightness temperature of the background radiation at the rest frame of the 21-cm absorption T_γ(ŝ, ν_0, z) is related to the observed brightness temperature at a redshifted frequency ν, T_γ(ŝ, ν, z=0), by T_γ(ŝ, ν_0, z)=(1+z) T_γ(ŝ, ν, z=0), and it has contributions from both the background point source and the cosmic microwave background (CMB), i.e. T_γ(ŝ, ν, z=0)=T_ rad(ŝ, ν, z=0)+T_ CMB(z=0), where T_ rad(ŝ, ν, z=0) represents the observed brightness temperature of the point source, and it usually dominates over the CMB temperature (T_ CMB). For a given radio telescope resolving a solid angle of Ω, the observed brightness temperature of a source is related to the flux density S_ rad(ν) by T_ rad(ŝ, ν, z=0) =c^2/2 k_ Bν^2S_ rad(ν)/Ω, where c is the speed of light and k_ B is the Boltzmann constant. The flux density of the background source is modeled to have a power-law spectrum scaled to 150 MHz, i.e. S_ rad(ν) = S_150(ν / ν_150)^η <cit.>, where ν_150 = 150 MHz and a spectral index of η=-1.05 is assumed as appropriate for a powerful radio source like Cygnus A <cit.>. Note that the spectral index of high-redshift quasars has a large scatter, and their spectra may be flatter than Cygnus A at low frequencies <cit.>, but the detailed spectral index makes only a negligible difference to our results. In this work, we take the flux densities of S_150 = 1 mJy, 10 mJy, and 100 mJy for the background point sources as examples, and assume the maximum baseline of 65 km for both the SKA1-LOW and SKA2-LOW for calculating the angular resolution for a given redshift. Assuming that T_ S is fully coupled to T_ K by the early Lyα background, the 21-cm optical depth τ_ν_0 and the forest signal δ T_ b are then dependent on the density δ, neutral fraction x_ HI, gas temperature T_ K, and the velocity gradient d v_/ d r_, of each voxel along the line of sight. Here we account only for the Hubble expansion for the velocity field, but neglect the peculiar velocity, as the peculiar velocity mainly shifts the contribution of the absorption from individual segments of gas. We note that the peculiar velocity may affect the individual line profiles <cit.>, but we expect that its effect on the overall amplitude of the signal and the 1D power spectrum is small. The density field, ionization field, and the gas temperature field are modeled as follows. Throughout this study, we adopted the set of cosmological parameters consistent with the Planck 2018 results<cit.>: Ω_ m = 0.3153, Ω_ b h^2 = 0.02236, Ω_Λ = 0.6847, h = 0.6736, σ_8 = 0.8111. Ω_Λ and σ_8 are dark-energy density parameter and matter fluctuation amplitude, respectively. §.§ The density field. The evolution of the large-scale density field is simulated with linear theory using the 21cmFAST <cit.>, for both the CDM and WDM models. The simulation box has a comoving size of (1 Gpc)^3, and (500)^3 grids. The influence of DM properties on the density field is mainly on small scales. 
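Returning to the expressions for tau_nu0 and delta T_b above, a minimal Python sketch of the per-voxel calculation is given below, assuming pure Hubble flow (so the bracketed velocity term equals 1, as stated in the text) and the quoted Planck 2018 parameters. The CMB temperature value and the example voxel (mean-density neutral IGM at z = 9 with T_S = 60 K against a 10 mJy source) are assumptions of ours, not values stated explicitly in the text.

```python
import numpy as np

# Planck 2018 values quoted in the text; beam assumptions as in the paper.
OMEGA_M, OMEGA_B_H2, H_LITTLE = 0.3153, 0.02236, 0.6736
T_CMB0 = 2.725            # K (assumed standard value, not quoted in the text)
C = 2.998e8               # m/s
K_B = 1.381e-23           # J/K

def tau_21cm(delta, x_hi, t_spin_k, z, velocity_term=1.0):
    """21-cm optical depth per voxel; velocity_term = [H/(1+z)] / (dv/dr),
    equal to 1 for pure Hubble flow as assumed in the text."""
    return (0.0085 * (1.0 + delta) * (1.0 + z) ** 1.5 * (x_hi / t_spin_k)
            * velocity_term * (OMEGA_B_H2 / 0.022)
            * (0.14 / (OMEGA_M * H_LITTLE ** 2)))

def t_background(s150_mjy, z, d_km=65.0, spectral_index=-1.05):
    """Observed brightness temperature of the source plus the CMB, in K."""
    nu = 1420.4e6 / (1.0 + z)                             # observed frequency, Hz
    lam = C / nu
    theta = 1.22 * lam / (d_km * 1e3)                     # angular resolution, rad
    omega = np.pi * (theta / 2.0) ** 2
    s_nu = s150_mjy * 1e-29 * (nu / 150e6) ** spectral_index   # W m^-2 Hz^-1
    t_rad = C ** 2 * s_nu / (2.0 * K_B * nu ** 2 * omega)
    return t_rad + T_CMB0

def delta_tb(delta, x_hi, t_spin_k, z, s150_mjy=10.0):
    """Differential brightness temperature (negative for absorption), in K."""
    t_gamma = (1.0 + z) * t_background(s150_mjy, z)       # background temperature at the rest frame
    return (t_spin_k - t_gamma) / (1.0 + z) * tau_21cm(delta, x_hi, t_spin_k, z)

if __name__ == "__main__":
    # Illustrative voxel: mean-density neutral IGM at z = 9 with T_S = 60 K.
    print(f"tau  = {tau_21cm(0.0, 1.0, 60.0, 9.0):.4f}")
    print(f"dT_b = {delta_tb(0.0, 1.0, 60.0, 9.0):.1f} K")
```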
In each of the 2 Mpc grids, the small-scale density distribution is simulated by randomly populating halos according to the conditional halo mass function and the local density of the grid from the 21cmFAST simulation, and assigning density profiles to the gas in the halos as well as in the IGM, as detailed below. §.§.§ Halo mass function. In the framework of the CDM model, the number density of halos per mass interval in the range (M, M + dM), in a simulation grid with mass M_0 and overdensity δ_0 at redshift z, can be modeled by the conditional halo mass function <cit.> of the Press-Schechter form <cit.>, i.e. d n(M|δ_0,M_0;z)/ d M=√(1/2 π)ρ̅_ m0 (1+δ_0)/M| d S/ d M| δ_ c(z) - δ_0/(S-S_0)^3/2 exp{-[δ_ c(z) - δ_0]^2/2(S-S_0)}, where ρ̅_ m0 is the average density of matter in the universe today, S=σ^2(M) is the variance of mass scale M, S_0=σ^2(M_0), and δ_ c(z)=1.686/D(z) is the critical overdensity for collapse at redshift z extrapolated to the present time using the linear theory, in which D(z) is the linear growth factor. In the WDM model, the structure formation is suppressed below the free streaming scale λ_ fs of DM particles, and the conditional halo mass function can be approximately written as <cit.> d n(M|δ_0,M_0;z)/ d M=1/2{1+erf[log _10(M / M_ fs)/σ_log M]}[ d n (M|δ_0,M_0;z)/ d M]_ PS, where σ_log M=0.5, and M_ fs is the suppressing mass scale of halo formation corresponding to λ_ fs, i.e. M_ fs=4 π (λ_ fs/2)^3ρ_ m0 /3. PS represents Press-Schechter form in CDM model. The comoving free streaming scale is approximately <cit.> λ_ fs≈ 0.11(Ω_ WDM h^2/0.15)^1 / 3(m_ WDM/ keV)^-4 / 3( Mpc), where Ω_ WDM is the WDM density normalized by the critical density. The Press-Schechter mass function [ d n (M|δ_0,M_0;z)/ d M]_ PS in Eq. (<ref>) takes the form of Eq. (<ref>), but the variance of density fluctuations is evaluated with the matter power spectrum fitted for WDM <cit.>: P_ WDM(k)=P_ CDM(k){[1+(α k)^2 β]^-5 / β}^2, where β = 1.12 and α is given by <cit.> α=0.049(m_ WDM/ keV)^-1.11(Ω_ WDM/0.25)^0.11(h/0.7)^1.22 h^-1 ( Mpc). Supplementary Fig. 1 shows the halo mass function, evaluated at δ_0 = 0 and S_0 = 0, for both CDM and WDM models. The halo number is obviously suppressed below the free streaming scale in the WDM models, with the lower m_ WDM resulting in larger suppressing scale. Especially, the WDM models notably reduce the total number of halos by suppressing the small ones, thus suppressing the small-scale fluctuations in the neutral hydrogen density, which have a major contribution to the 21-cm forest signals. The major contribution to the 21-cm forest signal comes from the gas in and around the large number of low-mass halos that are not producing ionizing photons and reside in neutral environments<cit.>. Therefore, we focus on neutral patches along a given line of sight, and select neutral grids from the large-scale ionization field simulated by 21cmFAST. Then we randomly populate each of these 2 Mpc grids with halos according to the conditional mass function determined by the DM models. We consider only the halos with the mass upper limit M_4 corresponding to the virial temperature of T_ vir = 10^4 K, so that the atomic cooling is not efficient enough to enable substantial star formation. The lower limit of halo mass M_ min is set by the filtering mass scale, so that the halos could retain most of its gas and the gas in the ambient IGM to contribute to the 21-cm absorption. 
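The WDM suppression entering the conditional mass function and the matter power spectrum can be written down directly from the expressions above; the following Python sketch implements them. Treating all dark matter as warm (Omega_WDM = Omega_m - Omega_b) and the standard value adopted for the critical density are choices of ours, not stated in the text.

```python
import numpy as np
from math import erf

H = 0.6736
OMEGA_M, OMEGA_B_H2 = 0.3153, 0.02236
OMEGA_WDM = OMEGA_M - OMEGA_B_H2 / H**2          # treat all dark matter as warm (assumption)
RHO_CRIT0 = 2.775e11 * H**2                      # M_sun / Mpc^3 (assumed standard value)
RHO_M0 = OMEGA_M * RHO_CRIT0

def free_streaming_scale_mpc(m_wdm_kev):
    """Comoving free-streaming scale lambda_fs, as given in the Methods."""
    return 0.11 * (OMEGA_WDM * H**2 / 0.15) ** (1.0 / 3.0) * m_wdm_kev ** (-4.0 / 3.0)

def m_fs_msun(m_wdm_kev):
    """Suppression mass scale M_fs = (4 pi / 3) (lambda_fs / 2)^3 rho_m0."""
    lam = free_streaming_scale_mpc(m_wdm_kev)
    return 4.0 * np.pi / 3.0 * (lam / 2.0) ** 3 * RHO_M0

def wdm_power_suppression(k_per_mpc, m_wdm_kev, beta=1.12):
    """Ratio P_WDM(k) / P_CDM(k) = [1 + (alpha k)^(2 beta)]^(-10/beta)."""
    alpha = (0.049 * m_wdm_kev ** (-1.11) * (OMEGA_WDM / 0.25) ** 0.11
             * (H / 0.7) ** 1.22 / H)                    # Mpc
    return (1.0 + (alpha * k_per_mpc) ** (2 * beta)) ** (-10.0 / beta)

def mass_function_suppression(m_msun, m_wdm_kev, sigma_logm=0.5):
    """Extra erf factor applied on top of the Press-Schechter mass function,
    which itself already uses the WDM-suppressed power spectrum."""
    return 0.5 * (1.0 + erf(np.log10(m_msun / m_fs_msun(m_wdm_kev)) / sigma_logm))

if __name__ == "__main__":
    for m in (3.0, 6.0, 10.0):
        print(f"m_WDM = {m:4.1f} keV: lambda_fs = {free_streaming_scale_mpc(m):.4f} Mpc, "
              f"M_fs = {m_fs_msun(m):.2e} M_sun, "
              f"P_WDM/P_CDM at k=40/Mpc = {wdm_power_suppression(40.0, m):.3f}")
```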
The filtering mass is mainly determined by thermal history of the universe, and it is of order ∼ 10^6 M_⊙ for the redshifts of interest (7≲ z≲ 11) for f_ X≲ 1 in the CDM model<cit.>. It would be higher for higher f_ X, and the different density profiles in WDM models may also slightly modify its value. In the present work, we set the same M_ min = 10^6 M_⊙ for all the models for simplicity, but we expect that the dependence of filtering mass on f_ X will make the probe more sensitive to the thermal history of the universe, while more challenging to discriminate WDM models for cases with high f_ X. §.§.§ Gas profile. Each grid along the line of sight is further divided into (500)^3 voxels, each with a size of (4 kpc)^3, then the gas density of each voxel is determined by its distance to the nearby halos. Inside the virial radius r_ vir, we assume that the dark matter follows the NFW density profile <cit.>, and the gas is in hydrostatic equilibrium with the dark matter <cit.>. Thus, the gas density distribution can be derived analytically <cit.>: lnρ_ g(r)=lnρ_ gc-μ m_ p/2 k_ B T_ vir[v_ e^2(0)-v_ e^2(r)], where ρ_ gc denotes the central gas density, μ is the mean molecular weight of the gas, m_ p is the proton mass, and v_ e(r) is the gas escape velocity at radius r, given by v_ e^2(r)=2 ∫_r^∞G M(r^')/r^' 2 d r^'=2 V_ c^2F(y x)+y x/1+y x/x F(y). Here V_ c^2 ≡ G M/r_ vir is the circular velocity at the virial radius, G is gravitational constant, x ≡ r / r_ vir, y is the halo concentration, and F(y)=ln(1+y)-y/(1+y). The central gas density is determined by normalizing the total baryonic mass fraction of the halo to the cosmic mean value, which gives ρ_ gc =(Δ_c / 3) y^3(Ω_ b / Ω_ m) e^A/∫_0^y(1+t)^A / t t^2 d tρ̅_ m(z), where ρ̅_ m(z) is the mean matter density of the universe at redshift z, A ≡ 2 y/F(y), e is the mathematical constant (base of natural log), and Δ_c=18 π^2 + 82(Ω_ m^z-1) - 39(Ω_ m^z-1)^2 is the mean density of a virialized halo with respect to the cosmic mean value <cit.>, in which Ω_ m^z=Ω_ m(1+z)^3 /[Ω_ m(1+z)^3+Ω_Λ]. The gas density in the halo surroundings is enhanced because of the gravitational potential. Outside the virial radii of halos, we assume that the gas density profile follows the dark matter distribution, and it can be computed by using the infall model which is based on the excursion set theory <cit.>. The gas density profiles in and around halos of different masses are plotted in Supplementary Fig. 2 for z = 9. It is seen that there is density discontinuity at the virial radius in our model. This is expected at the virialization shock near the virial radius <cit.>, though the exact location of the shock may vary from halo to halo <cit.>. The infall model was developed for the matter density and velocity distribution around density peaks <cit.>. Directly applying it to arbitrary environments may over-predict the gas density in under-dense regions. Therefore, we normalize the density field to ensure that the minimum density is 0, and the average density of the (500)^3 voxels in each 2 Mpc grid equals the grid density from the large-scale 21cmFAST simulation. To test the reliability for the small-scale density field, we run a small-scale high-resolution hydrodynamical simulation with the GADGET (GAlaxies with Dark matter and Gas intEracT) <cit.> for high redshifts. The simulation has a box size of 4 h^-1 Mpc and 2×800^3 gas and DM particles <cit.>. 
We compare the probability density distribution of our analytical gas density field with the one from the simulated gas density in Supplementary Fig. 3, at the same resolution at z = 17. It shows that our gas density model closely recovers the probability distribution of the gas density fluctuations from the hydrodynamical simulations. The line-of-sight density distribution in the CDM model is illustrated in the left panel of Extended Data Fig. 1 for three grids with different local overdensities δ_0 on the 2 Mpc scale at z=9. The density distributions for different DM properties are shown in Supplementary Fig. 4. §.§ The ionization field. The large-scale ionization field is simulated with the semi-numerical simulation 21cmFAST assuming ionizing sources with a minimum halo mass of M_4 and an ionizing efficiency parameter of ζ = 11 <cit.>. By suppressing the formation of small-scale halos, the WDM models may possibly speed up or delay the large-scale reionization process by modifying both the abundances of ionizing sources and sinks <cit.>. In the present work, we use the basic version of 21cmFAST in which the effect of sinks is incorporated by a homogeneous recombination number, and the reionization is delayed in the WDM models as shown in Supplementary Fig. 5. It shows that the effect of WDM on the large-scale reionization history becomes obvious only if m_ WDM≲ 3 keV, and this is consistent with the fact that atomic-cooling halos are effectively suppressed in WDM models with m_ WDM≲ 3 keV as shown in Supplementary Fig. 1. Note that the 21-cm forest signals mainly come from neutral regions, and we pick up neutral patches in the large-scale simulation box to analyze the small-scale structures in the 21-cm forest signals. The large-scale reionization history only determines the probability of getting a neutral patch of the IGM with a certain length along a line of sight. In order to have consistent source properties when comparing the results for the same f_ X, we set the same ionizing efficiency parameter for all the models considered here, while the global reionization history would be slightly different among WDM and CDM models. On the other hand, a different reionization scenario may change the minimum source mass, for example, in a reionization model with stronger feedback effects would have a minimum halo mass for collapse higher than M_4, thus changing the reionization history. However, the large-scale ionization field and the overall reionization history have only a minor effect on the small-scale 21-cm forest signals we are interested in. For each of the neutral grids in the simulation box, we assume that the gas is in collisional ionization equilibrium (CIE), so that the ionized fraction of each voxel is determined by its local density and temperature, i.e. n_ e n_ HIγ=α_ B n_ e n_ p, where n_ HI, n_ e and n_ p represent the number densities of neutral hydrogen, electron and proton, respectively, γ is the collisional ionization coefficient <cit.>, and α_ B is the case B recombination coefficient <cit.> which is appropriate for low-mass halos and the incompletely ionized IGM. Here both γ and α_ B are functions of temperature. §.§ The temperature field. The gas temperature T_ K of each voxel is determined by the thermal history of the early universe and the location of the voxel with respect to halos. 
While the photoionization heating by the UV background dominates the gas heating in ionized regions <cit.>, it is the X-rays that can penetrate deep into the neutral IGM and dominate the heating of the neutral gas contributing to 21-cm signals. For the gas in the neutral IGM, its temperature is mainly determined by the cosmic expansion, the heating or cooling from the Compton scattering, and the X-ray heating. The global evolution of the IGM temperature can be written as <cit.> d T_ K/ d t=-2 H(z) T_ K+2/3ϵ_ comp/k_ B n+2/3ϵ_ X,h/k_ B n, where n is the total particle number density, ϵ_ comp is the Compton heating/cooling rate per unit physical volume <cit.>, and ϵ_ X,h represents the part of the X-ray emissivity ϵ_ X that contributes to heating, for which we adopt a fitted formula to simulations, i.e. ϵ_ X,h = [1-0.8751(1-x_i^0.4052)] ϵ_ X <cit.>, where x_i is the ionized fraction. Assuming that the X-ray productivity is proportional to the star formation rate, and hence to the matter collapse rate, the total X-ray emissivity ϵ_ X can be written as <cit.>: 2/3ϵ_ X/k_ B n H(z) = 5 × 10^4 K f_ X(f_⋆/0.1 d f_ coll / d z/0.011+z/10). Here f_⋆ is the star formation efficiency approximately evaluated at M_4 <cit.>, as appropriate for the most abundant star-forming halos, f_ coll is the fraction of matter collapsed into atomic-cooling halos with M>M_4, and f_ X is the normalization parameter describing the uncertain nature of X-ray productivity in the early universe as compared to the local universe<cit.>. The global evolution of the IGM temperature T_ K is shown in Supplementary Fig. 6 for different values of f_ X. The curve with f_ X = 0 denotes the case with purely adiabatic cooling and Compton heating. Inside the virial radius, the gas kinetic temperature T_ K equals to the virial temperature T_ vir of the halo. As for the gas in the overdense regions near halos, it will be adiabatically heated depending on the local density. In the absence of X-rays, the temperature profiles for halos with 10^6 M_⊙, 10^7 M_⊙, and 10^8 M_⊙ are illustrated in Supplementary Fig. 7 for z = 9. Similar to the density profiles, the gas temperature also shows discontinuity at the virialization shocks as expected, but the exact location of the virialization shocks has negligible effects on our main results. In the cases with X-ray heating, the gas temperature outside the halos is set by the maximum between the adiabatic temperature and the heated IGM temperature. §.§ Thermal noise of direct measurement. In the direct measurement of individual absorption lines, the noise flux density averaged over two polarizations can be expressed as <cit.>: δ S^ N≈2 k_ B T_ sys/A_ eff√(2 δνδ t), where A_ eff is the effective collecting area of the telescope, T_ sys is the system temperature, δν is the channel width, and δ t is the integration time. The corresponding thermal noise temperature is: δ T^ N = δ S^ N(λ_z^2 /2 k_ BΩ) ≈λ_z^2 T_ sys/A_ effΩ√(2 δνδ t), where λ_z is the observed wavelength, and Ω=π (θ/2)^2 is the solid angle of the telescope beam, in which θ = 1.22λ_z/D is the angular resolution with D being the longest baseline of the radio telescope/array. For the SKA1-LOW, we adopt A_ eff / T_ sys= 800 m^2 K^-1 <cit.>, and A_ eff / T_ sys= 4000 m^2 K^-1 is expected for SKA2-LOW <cit.>. For both arrays, we assume D = 65 km and δ t = 100 hr, and δν = 1 kHz is assumed in order to resolve individual 21-cm lines. Correspondingly, the synthetic spectra shown in Figs. <ref> and <ref> are smoothed with the same channel width. 
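A small Python helper, assuming only the telescope parameters quoted above (A_eff/T_sys of 800 and 4000 m^2 K^-1, a 65 km maximum baseline, 1 kHz channels, and 100 hr of integration), evaluates the per-channel noise temperature and the angular resolution quoted immediately below for z = 9.

```python
import numpy as np

C = 2.998e8          # m/s
NU_21 = 1420.4e6     # Hz

def beam_solid_angle(z, d_km=65.0):
    """Observed wavelength, angular resolution, and beam solid angle."""
    lam = C * (1.0 + z) / NU_21                  # observed wavelength, m
    theta = 1.22 * lam / (d_km * 1e3)            # angular resolution, rad
    return lam, theta, np.pi * (theta / 2.0) ** 2

def thermal_noise_temperature(z, aeff_over_tsys, d_km=65.0,
                              dnu_hz=1e3, t_int_hr=100.0):
    """delta T^N = lambda_z^2 T_sys / (A_eff Omega sqrt(2 dnu dt)), in K."""
    lam, _, omega = beam_solid_angle(z, d_km)
    dt_s = t_int_hr * 3600.0
    return lam ** 2 / (aeff_over_tsys * omega * np.sqrt(2.0 * dnu_hz * dt_s))

if __name__ == "__main__":
    z = 9.0
    lam, theta, _ = beam_solid_angle(z)
    print(f"angular resolution at z = 9: {np.degrees(theta) * 3600:.2f} arcsec")
    for name, ratio in (("SKA1-LOW", 800.0), ("SKA2-LOW", 4000.0)):
        print(f"{name}: dT^N = {thermal_noise_temperature(z, ratio):.0f} K per 1 kHz channel")
```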
At redshift z = 9, the angular resolution is about 8.17 arcsec, and the noise temperature is plotted with dotted and dashed lines in the lower panels in Figs. <ref> and <ref>, for SKA1-LOW and SKA2-LOW respectively. §.§ 1D power spectrum of 21-cm forest. It is seen from Fig. <ref> that the direct measurement of individual absorption lines is vulnerably hampered by the early X-ray heating. In order to improve the sensitivity for detecting the 21-cm forest signal, and to reveal the clustering properties of the absorption lines so as to distinguish the effects between heating and WDM models, we follow the algorithm in Ref. <cit.>, and compute the 1D power spectrum of the brightness temperature on hypothetical spectra against high-redshift background sources. The brightness temperature δ T_b(ŝ, ν) as a function of observed frequency ν can be equivalently expressed in terms of line-of-sight distance r_z, δ T_ b^'(ŝ, r_z), and the Fourier transform of δ T_b ^'(ŝ, r_z) is δT^'(ŝ, k_)=∫δ T_ b^'(ŝ, r_z) e^-i k_ r_z d r_z. The 1D power spectrum along the line of sight is defined as: P(ŝ, k_) = |δT^'(ŝ, k_)|^2(1/Δ r_z). The term 1/Δ r_z is the normalization factor, in which Δ r_z is the length of sightline under consideration. To reveal the small-scale structures we are interested in, we select neutral patches with Δ r_z = 10 comoving Mpc, and compute the 1D power spectra from segments of 10 comoving Mpc along the line of sight. For a reasonable number of 𝒪(10) high-z background sources, the expected value of the power spectrum is obtained by averaging over 100 neutral patches on lines of sight penetrating various environments, i.e. P(k_) ≡⟨ P(ŝ, k_)⟩. On each quasar spectrum, we will be able to select ∼ 10 segments of 10 comoving Mpc length in neutral patches; as the neutral patches are intermittently separated by ionized regions during the EoR, we may need a spectrum covering ∼ 200 comoving Mpc along the line of sight. A length of 200 comoving Mpc projects to a total bandwidth of about 14 MHz at redshift 9, corresponding to Δ z ∼ 0.8, which is reasonable in practice. For the rest of the paper, we abbreviate k_ as k, as here we are always interested in the k-modes along the line of sight. Supplementary Fig. 8 shows the evolution of the 1D power spectrum with redshift. The solid lines in the left and middle panels show the power spectra in the CDM model and in the WDM model with m_ WDM = 3 keV respectively, in the absence of X-rays. As the redshift increases, the halo abundance decreases, and the small-scale fluctuations in the forest signal decrease, resulting in steeper power spectra. The small-scale power is slightly more notably suppressed in the WDM model, as the halo formation is more delayed. However, the redshift evolution has only a weak effect on the 1D power spectrum in the absence of X-ray heating. The right panel of Supplementary Fig. 8 illustrates the evolution of the 1D power spectrum in the CDM model with f_ X = 3. In the case of strong X-ray heating, the 1D power spectrum of the 21-cm forest is dramatically suppressed with the decreasing redshift, and the dominant reason is the rapidly increasing IGM temperature. It implies that for the purpose of constraining DM properties, the 1D power spectrum measurement at higher redshift is preferred, as long as a radio-bright source at an even higher redshift is available. §.§ Measurement error on 1D power spectrum. 
The observational uncertainties in the 21-cm forest include the thermal noise, the sample variance, the contaminating spectral structures from foreground sources in the chromatic sidelobes, and the bandpass calibration error. The bandpass calibration error depends on specific calibration strategies, and mainly affects the broadband amplitude of the continuum, so we expect that it has a negligible effect on the small-scale features we are interested in. The contaminating spectral structures from foregrounds are not likely affecting the small structures we are aiming at, as the discriminating features locate at k ≳ 3 Mpc^-1, which are well within the “EoR window”<cit.>. Therefore, we consider only the thermal noise of an interferometer array, and the sample variance in the power spectrum measurement. The sample variance on the 1D power spectrum is P^S=σ_P(k)/√(N_s × N_m), where σ_P(k) is the standard deviation of P(k) from N_s× N_m measurements of the 1D power spectrum at k, in which N_s is the number of 1D power spectrum measurements on different neutral patches of Δ r_z, and N_m is the number of independent modes in each k-bin from each measurement. Using 10 high-redshift background radio sources, it is reasonable to expect about 100 independent measurements of 1D power spectra from segments of spectra, each corresponding to a comoving length of 10 Mpc. We adopt N_s = 100, and σ_P(k) is obtained by simulating 21-cm forest signals from N_s neutral segments of 10 comoving Mpc length penetrating various environments covering grid densities from δ = -0.7 to δ = +1.5. As for the thermal noise error, we follow the approach taken by Ref. <cit.>, and assume that each spectrum is measured for two times separately, or the total integration time is divided into two halves, and the cross-power spectrum is practically measured in order to avoid noise bias. Then the observing time for each measurement of the spectrum is δ t_0.5 = 0.5 δ t, and the thermal noise on the spectrum is increased by a factor of √(2). Then the thermal noise uncertainty on the 1D power spectrum is given by <cit.> P^N = 1/√(N_s)(λ_z^2 T_ sys/ A_ effΩ)^2(Δ r_z/2 Δν_zδ t_0.5), where Δν_z is the total observing bandwidth corresponding to Δ r_z. A distance of 10 comoving Mpc along the line of sight corresponds to a bandwidth of Δν_z = 0.56 MHz at z = 9. Assuming the same telescope parameters of SKA1-LOW and SKA2-LOW as those for the direct measurement, and the same observation time of δ t = 100 hr (δ t_0.5 = 50 hr) on each source, the expected thermal noise on the 1D power spectrum of 21-cm forest is plotted in Figs. <ref> and <ref>, as well as in Supplementary Fig. 8, with dotted lines for SKA1-LOW and dashed lines for SKA2-LOW, respectively. The total measurement errors including the thermal noises of SKA2-LOW and sample variance are shown with the error bars in these figures. We have tested the extraction of 21-cm forest 1D power spectrum by simulating mock quasar spectra with thermal noises, and calculating the 1D power spectra from the noisy spectra. The results are shown in Supplementary Fig. 9, with upper panels from mock spectra with SKA1-LOW noises, and lower panels from mock spectra with SKA2-LOW noises, respectively. In each row, the left panel shows the results from mock spectra with both 21-cm absorption signals and thermal noises, and the right panel shows the results from mock spectra with only thermal noises. The measured noise power spectra agree well with the theoretical predictions. 
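For completeness, a sketch of the thermal-noise power spectrum P^N and the sample-variance term defined above, with the z = 9 numbers used in the text (Delta r_z = 10 comoving Mpc, Delta nu_z = 0.56 MHz, delta t_0.5 = 50 hr, N_s = 100). The two error terms are returned separately here, since the text only states that the plotted error bars include both.

```python
import numpy as np

C, NU_21 = 2.998e8, 1420.4e6

def noise_power_spectrum(z, aeff_over_tsys, n_s=100, delta_r_mpc=10.0,
                         delta_nu_hz=0.56e6, t_half_hr=50.0, d_km=65.0):
    """Thermal-noise 1D power spectrum P^N (K^2 Mpc), following the Methods."""
    lam = C * (1.0 + z) / NU_21
    omega = np.pi * (1.22 * lam / (d_km * 1e3) / 2.0) ** 2
    t_term = lam ** 2 / (aeff_over_tsys * omega)          # lambda_z^2 T_sys / (A_eff Omega), K
    return (t_term ** 2 / np.sqrt(n_s)
            * delta_r_mpc / (2.0 * delta_nu_hz * t_half_hr * 3600.0))

def sample_variance(p_k_samples, n_m=1):
    """P^S = sigma_P(k) / sqrt(N_s * N_m) from N_s simulated power spectra."""
    n_s = p_k_samples.shape[0]
    return np.std(p_k_samples, axis=0) / np.sqrt(n_s * n_m)

if __name__ == "__main__":
    for name, ratio in (("SKA1-LOW", 800.0), ("SKA2-LOW", 4000.0)):
        print(f"{name}: P^N = {noise_power_spectrum(9.0, ratio):.1f} K^2 Mpc at z = 9")
```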
It is seen that the measurement of the 1D power spectrum notably improves the observability of the 21-cm forest signals as compared to the direct measurement of individual absorption lines. With about 10 moderately bright quasars with S_ 150≳ 10 mJy at redshift around 9, the 1D power spectrum can be measured by SKA2-LOW even if the IGM was heated as efficiently as in the model with f_ X = 3, and can reach a high signal-to-noise ratio if f_ X≲ 1. Note that the measurement error can be further suppressed if more sources are available beyond reionization, and more power spectra can be averaged to suppress both the thermal noise and the sample variance.

Data Availability The main data that support the results in this work are provided with this paper, and are also available at https://doi.org/10.57760/sciencedb.08093. Further datasets are available from the corresponding authors upon reasonable request.

Code Availability The code 21cmFAST used for large-scale simulation is publicly available at https://github.com/andreimesinger/21cmFAST, the codes for simulating small-scale structures and 21-cm forest signals are available from the corresponding authors upon reasonable request, and the GADGET code is available at https://wwwmpa.mpa-garching.mpg.de/gadget.

Additional information Correspondence and requests for materials should be addressed to Yidong Xu (email: [email protected]), Xin Zhang (email: [email protected]) or Xuelei Chen (email: [email protected]).

We thank the anonymous referees for very constructive comments and suggestions. We thank Yichao Li, Peng-Ju Wu, Jing-Zhao Qi, and Bin Yue for helpful discussions. This work was supported by National Key R&D Program of China (Grant No. 2022YFF0504300), the National Natural Science Foundation of China (Grant Nos. 11973047, 11975072, 11835009, 11988101, and 12022306), and the National SKA Program of China (Grant Nos. 2020SKA0110401, 2020SKA0110100, 2022SKA0110200, and 2022SKA0110203). Y.X. and X.C. also acknowledge support by the CAS grant (Grant No. ZDKYYQ20200008). Y.W. acknowledges support by the CAS Interdisciplinary Innovation Team (Grant No. JCTD-2019-05). R.L. acknowledges support by the CAS grant (Grant No. YSBR-062) and the grant from K.C.Wong Education Foundation.

Author contributions Y.S. performed most of the computation and analysis, and wrote part of the manuscript. Y.X. led the study, contributed to the simulations, and wrote the majority of the manuscript. Y.W. and W.Y. contributed to the computation of the 1D power spectrum. Y.X. and R.L. proposed the study. X.Z. and X.C. contributed to the collaboration organization, the Fisher forecasts, and the manuscript writing, and supervised the study. All authors discussed the results and commented on the manuscript.

Competing Interests The authors declare no competing interests.

[Figure] The density (left panel), optical depth (middle panel) and brightness temperature (right panel) for a line of sight of 2 comoving Mpc in the CDM model at z = 9. The green, yellow and red lines correspond to local overdensities of δ_0 = 0, 1 and 2, respectively. The flux density of the background source in the right panel is assumed to be S_150 = 10 mJy.

[Figure] 1-D power spectrum of a synthetic 21-cm forest spectrum in the CDM model, for a line of sight penetrating through an un-heated IGM (f_ X = 0) with different local overdensities at z = 9. The green, yellow and red curves correspond to δ_0 = 0, 1 and 2, respectively. The flux density of the background source is assumed to be S_150 = 10 mJy.

[Supplementary Figure] Halo mass function for different DM particle masses at z = 9. The red, yellow, blue and pink curves correspond to the CDM model and WDM models with m_ WDM = 10 keV, 6 keV, and 3 keV, respectively.

[Supplementary Figure] Neutral hydrogen overdensity profiles inside and outside the virial radius of a halo at z = 9. The green, yellow and red lines correspond to halo masses of 10^6 M_⊙, 10^7 M_⊙ and 10^8 M_⊙, respectively.

[Supplementary Figure] Probability density distribution of the gas overdensity at z = 17. The black solid line is the probability density distribution from the GADGET simulation with a box size of 4 h^-1 Mpc and 2×800^3 gas and DM particles. The blue dashed line is the one derived from our hybrid approach with the same resolution as the GADGET simulation.

[Supplementary Figure] Density distribution of a patch of 10 comoving Mpc at z = 9 along the line of sight, for an un-heated IGM (f_ X = 0). The four panels, from left to right, correspond to the CDM model and the WDM models with m_ WDM = 10 keV, 6 keV and 3 keV, respectively.

[Supplementary Figure] Reionization history simulated by 21cmFAST. The black, red, yellow and green curves correspond to the average neutral fraction x̅_ HI as a function of redshift z in the CDM model and the WDM models with m_ WDM = 10 keV, 6 keV and 3 keV, respectively.

[Supplementary Figure] Evolution of the global gas temperature with redshift. The blue, green, yellow and red lines correspond to f_ X = 0, 0.1, 1 and 3, respectively.

[Supplementary Figure] Temperature profiles of gas inside and outside the virial radii of halos at z = 9 with an un-heated IGM (f_ X = 0). The green, yellow and red lines correspond to halo masses of 10^6 M_⊙, 10^7 M_⊙ and 10^8 M_⊙, respectively.

[Supplementary Figure] Evolution of the 1-D power spectrum of the 21-cm forest averaged over 100 measurements on segments of 10 comoving Mpc length in neutral patches along lines of sight against background sources with S_150 = 10 mJy. The solid lines in the left and central panels show the power spectra in the CDM model and those in the WDM model with m_ WDM = 3 keV, respectively, assuming an un-heated IGM (f_ X = 0). The solid lines in the right panel show the power spectra in the CDM model assuming an efficiently-heated IGM (f_ X = 3). In each panel, the blue, green and yellow lines correspond to z = 7, 9 and 11, respectively. The dotted and dashed lines with the corresponding colors are the expected thermal noises P^ N for SKA1-LOW and SKA2-LOW, respectively, and the error bars show the total measurement errors of SKA2-LOW.

[Supplementary Figure] 1-D cross-power spectrum computed from mock spectra simulated with thermal noises expected for SKA1-LOW (upper panels) and SKA2-LOW (lower panels), respectively. The left plots show the results in which the mock spectra contain both 21-cm forest signal and thermal noise, and the right plots show the results from mock spectra with only thermal noise. Same as Fig. 3, the 1-D power spectra are averaged over 100 measurements on segments of 10 comoving Mpc length in neutral patches along lines of sight against 10 background sources with S_150 = 10 mJy. The blue, green, yellow and red curves correspond to f_ X = 0, 0.1, 1 and 3, respectively. The dotted and dashed lines are the theoretical thermal noises P^N expected for SKA1-LOW and SKA2-LOW, respectively.
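The averaging procedure underlying these measurements — estimating a 1-D power spectrum per segment and then averaging over many segments and sight lines to beat down the thermal noise and sample variance — can be sketched numerically. The snippet below is an illustration added in this rewrite, not the analysis code of this work: the FFT normalization, the Hann taper and the array names (`segments`, `dx_mpc`) are assumptions, and real mock spectra would replace the synthetic Gaussian segments used here.

```python
import numpy as np

def one_d_power_spectrum(delta_tb, dx):
    """Schematic 1-D power spectrum estimate for a single spectrum segment.

    delta_tb : 1D array of brightness-temperature fluctuations (arbitrary units)
    dx       : sampling interval along the line of sight (comoving Mpc)
    """
    n = delta_tb.size
    window = np.hanning(n)                      # taper to reduce edge effects
    fk = np.fft.rfft(delta_tb * window) * dx    # discrete approximation of the Fourier integral
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)  # wavenumbers in Mpc^-1
    power = np.abs(fk) ** 2 / (n * dx)          # one possible normalization convention
    return k, power

def averaged_power_spectrum(segments, dx):
    """Average per-segment estimates to suppress thermal noise and sample variance."""
    results = [one_d_power_spectrum(seg, dx) for seg in segments]
    k = results[0][0]
    p_avg = np.mean([p for _, p in results], axis=0)
    return k, p_avg

# Example with synthetic segments (e.g. 100 segments of 10 comoving Mpc each)
rng = np.random.default_rng(0)
dx_mpc = 10.0 / 512
segments = [rng.normal(0.0, 1e-3, 512) for _ in range(100)]
k, p_avg = averaged_power_spectrum(segments, dx_mpc)
```

Averaging N statistically independent segments reduces the variance of each band power by roughly a factor of N, which is the effect exploited when more background sources become available.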
http://arxiv.org/abs/2307.04306v1
20230710020559
The Category of reduced imaginary Verma modules
[ "Juan Camilo Arias", "Vyacheslav Futorny", "André de Oliveira" ]
math.RT
[ "math.RT" ]
Institute of Mathematics and Statistics, University of São Paulo, São Paulo, BRAZIL. [email protected] Shenzhen International Center for Mathematics, Southern University of Science and Technology, China and Institute of Mathematics and Statistics, University of São Paulo, São Paulo, BRAZIL. [email protected] Institute of Mathematics and Statistics, University of São Paulo, São Paulo, BRAZIL. [email protected] [2020]Primary 17B10, 17B67, 17B22 The category of reduced imaginary Verma modules Juan Camilo Arias, Vyacheslav Futorny and André de Oliveira August 12, 2023 =============================================================== For an arbitrary affine Lie algebra we study an analog of the category 𝒪 for the natural Borel subalgebra and zero central charge. We show that such category is semisimple having the reduced imaginary Verma modules as its simple objects. This generalizes the result of Cox, Futorny, Misra in the case of affine sl_2. § INTRODUCTION Let A=(a_ij)_0≤ i,j≤ N be a generalized affine Cartan matrix over with associated affine Lie algebra and Cartan subalgebra . Let Π ={α_0, α_1, ⋯, α_N} be the set of simple roots, δ the indivisible imaginary root and Δ the root system of . A subset S⊆Δ is a closed partition if for any α, β∈ S and α + β∈Δ then α + β∈ S, Δ = S ∪ (-S) and S∩ (-S) = ∅. The classification of closed partitions for root system of affine Lie algebras was obtained by H. Jakobsen and V. Kac in <cit.> and <cit.> and independently by V. Futorny in <cit.> and <cit.>. They show that closed partitions are parameterized by subsets X⊆Π and that (contrary to what happens in the finite case) there exists a finite number (greater than 1) of inequivalent Weyl group orbits of closed partitions. When X=Π we get that S=Δ_+ and we can developed the standard theory of Verma modules, but in the case X⊊Π we obtain new Verma-type modules called non-standard Verma modules. The theory of non-standard Verma modules was initiated by V. Futorny in <cit.> (see also <cit.>) in the case X=∅ and continued by B. Cox in <cit.> for arbitrary X⊊Π. The case X=∅ give rise to the natural Borel subalgebra associated to the natural partition Δ_nat = {α + nδ | α∈Δ_0,+ , n ∈}∪{ kδ | k ∈ℤ_>0}. The Verma module M(λ), of highest weight λ, induced by the natural Borel subalgebra is called imaginary Verma module for , when it is not irreducible it has an irreducible quotient called reduced imaginary Verma module. Unlike the standard Verma modules, imaginary Verma modules contain both finite and infinite dimensional weight spaces. Similar results hold for more general non-standard Verma modules. In <cit.>, while studying crystal bases for reduced imaginary Verma modules of ŝl̂_̂2̂, it was consider a suitable category of modules, denoted O_red,im, with the properties that any module in this category is a reduced imaginary Verma module or it is a direct sum of these modules. In this paper, by appropriate modifications we first define a category O_red,im for any affine Lie algebra and we show that all irreducible modules in this category are reduced imaginary Verma modules and, moreover, that any arbitrary module in O_red,im is a direct sum of reduced imaginary Verma modules. It should be noted that the results presented in this paper hold for both untwisted and twisted affine Lie algebras. The paper is organized as follows. In Sections 2 and 3, we define, set the notations and summarize the basic results for affine algebras, closed partitions and imaginary Verma modules. 
In section 4 we introduce the category O_red,im and present some of its properties. Finally, in section 5 we present the main results of this paper. § PRELIMINARIES In this section we fixed some notation and the preliminaries about affine algebras and root datum are set up. §.§ Affine algebras Let A=(a_ij)_0≤ i,j≤ N be a generalized affine Cartan matrix over with associated affine Lie algebra . Let D=diag(d_0, …, d_N) be a diagonal matrix with relatively primes integer entries such that DA is symmetric. The Lie algebra has a Chevalley-Serre presentation given by generators e_i, f_i, h_i for 0≤ i ≤ N and d which are subject to the defining relations: [h_i,h_j]=0 [d,h_i]=0 [h_i,e_j]=a_ije_j [h_i,f_j]=-a_ijf_j [e_i,f_j]=δ_i,jh_i [d,e_i]=δ_0,ie_i [d,f_i]=-δ_0,if_i ( e_i)^1-a_ij(e_j)=0 ( f_i)^1-a_ij(f_j)=0 Let be the Cartan subalgebra of which is the span of {h_0, …, h_N,d}. Recall that affine Lie algebras are classified into two classes: untwisted and twisted, see <cit.>. In the untwisted case, has a natural realization known as loop space realization which is defined by = ⊗[t,t^-1]⊕ c ⊕ d where is the simple finite dimensional Lie algebra with Cartan matrix (a_ij)_1≤ i,j ≤ N, c is a central element, d is a degree derivation such that [d,x⊗ t^n]=nx⊗ t^n for any x∈ and n∈ and we have [x⊗ t^n, y⊗ t^m] = [x,y]⊗ t^n+m + δ_n,-mn(x|y)c for all x,y∈, n,m∈ where (-|-) is a symmetric invariant bilinear form on . On the other hand, twisted affine Lie algebras are described as fixed points of automorphisms of untwisted algebras. Concretely, let μ̃ be an automorphism of order r=2 or r=3 of the Coxeter-Dynkin diagram of and let μ be the corresponding diagram automorphism of . Then μ can be extended to an automorphism μ on = ⊗[t,t^-1]⊕ c ⊕ d defined as μ(x ⊗ t^m) = (-1)^m (μ(x) ⊗ t^m), for x ∈, m ∈ℤ, μ(c) = c, μ(d) = d and extended by linearity. The twisted affine Lie algebra ()^μ is the subalgebra of fixed points of μ. For example, when r = 2, ()^μ = (∑_m ∈ℤμ_0⊗ t^2m) ⊕(∑_m ∈ℤμ_1⊗ t^2m+1) ⊕ℂc ⊗ℂd where μ_0 = {x ∈ | μ(x) = x} and μ_1 = {x ∈ | μ(x) = -x} (see <cit.>). §.§ Root datum and closed partitions Let I_0 = {1, …, N} and Δ_0 be the root system of with θ being the longest positive root. We denote by Q_0 and P_0 the root and weight lattices of . Let I={0,1,…, N}, Δ the root system of with simple roots Π={α_0, α_1, …, α_N} and let δ=α_0+θ be the indivisible imaginary root. Q denotes the root lattice, P the weight lattice, and Q̌, P̌ denotes the coroot and coweight lattices, respectively. Δ^re and Δ^im denotes the real and the imaginary sets of roots for Δ. A subset S of Δ is said to be closed if whenever α, β∈ S and α + β∈Δ then α + β∈ S. We also say that S is a closed partition if S is closed, Δ = S ∪ (-S) and S∩ (-S) = ∅. Closed partitions were classified in <cit.> and <cit.> (see also <cit.> and <cit.>). For an untwisted affine Lie algebra , there are two interesting closed partitions of the root system Δ, the standard partition and the natural partition, which give rise to two distinct Borel subalgebras that are not conjugate. The standard partition is defined by Δ_st = {α + nδ | α∈Δ_0 , n ∈_>0}∪Δ_0,+∪{ kδ | k ∈ℤ_>0} and the natural partition by Δ_nat = {α + nδ | α∈Δ_0,+ , n ∈}∪{ kδ | k ∈ℤ_>0} The respective Borel subalgebras, called standard Borel subalgebra and natural Borel subalgebra, are defined by _̱st = ( ⊗ tℂ[t]) ⊕⊕⊕ℂc ⊕ℂd and _̱nat = ( ⊗ℂ[t,t^-1]) ⊕( ⊗ tℂ[t]) ⊕⊕ℂc ⊕ℂd where n = ⊕_α∈Δ_0,+_α, is the nilpotent Lie subalgebra of the finite Lie algebra . 
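For orientation, the definitions above can be made explicit in the smallest case. The display below is an illustrative specialization added in this rewrite (it is not part of the original text): for the affine Lie algebra \widehat{\mathfrak{sl}}_2, with e, h, f the standard basis of \mathfrak{sl}_2 and \Delta_{0,+} = \{\alpha\}, the natural partition and its Borel subalgebra read as follows.

```latex
% Illustrative specialization to \widehat{\mathfrak{sl}}_2 (added for orientation only)
\[
\Delta_{\mathrm{nat}} \;=\; \{\alpha + n\delta \mid n \in \mathbb{Z}\}\;\cup\;\{k\delta \mid k \in \mathbb{Z}_{>0}\},
\qquad
\mathfrak{b}_{\mathrm{nat}} \;=\;
\bigl(\mathbb{C}e \otimes \mathbb{C}[t,t^{-1}]\bigr)\;\oplus\;
\bigl(\mathbb{C}h \otimes t\,\mathbb{C}[t]\bigr)\;\oplus\;
\mathbb{C}h\;\oplus\;\mathbb{C}c\;\oplus\;\mathbb{C}d .
\]
```

Here e ⊗ t^n is a root vector for α + nδ and h ⊗ t^k (k > 0) spans the imaginary root space of kδ, so the displayed subalgebra is exactly the span of the root spaces attached to Δ_nat together with the Cartan subalgebra; the opposite part -Δ_nat is spanned by the f ⊗ t^n and h ⊗ t^{-k}.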
As already mentioned above, a twisted affine algebra is a fixed point set in of a non-trivial symmetry of Chevalley generators and, in this case, _̱nat is the intersection of the fixed point set with the natural Borel subalgebra of . For more details see <cit.>. In this paper, we are going to work with the natural partition of the root system Δ_nat. § IMAGINARY VERMA MODULES Let S be a closed partition of the root system Δ. Let be the untwisted affine Lie algebra which has, with respect to the partition S, the triangular decomposition =_S⊕⊕_-S, where _S = ⊕_α∈ S_α and = ⊕ℂc⊕ℂd is an affine Cartan subalgebra. Let U(_S) and U(_-S) be, respectively, the universal enveloping algebras of _S and _-S. Let λ∈ P. A weight U()-module V is called an S-highest weight module with highest weight λ if there is some non-zero vector v∈ V such that: * u· v = 0 for all u ∈_S. * h · v = λ(h)v for all h ∈. * V=U()· v ≅ U(_-S)· v. In what follows, let us consider S to be the natural closed partition of Δ, i.e., S=Δ_nat and so b_nat = _Δ_nat⊕ĥ. We make into a 1-dimensional U(b_nat)-module by picking a generating vector v and setting (x+h)· v = λ(h)v, for all x∈_Δ_nat and h∈. The induced module M(λ) = U()⊗_U(b_nat) v ≅ U(_-Δ_nat)⊗ v is called an imaginary Verma module with Δ_nat-highest weight λ. Equivalently, we can define M(λ) as follows: Let I_Δ_nat(λ) the ideal of U() generated by e_ik:= e_i⊗ t^k, h_il:= h_i⊗ t^l for i∈ I_0, k∈, l ∈ℤ_>0, and by h_i - λ(h_i)· 1, d-λ(d)· 1 and c-λ(c)· 1. Then M(λ) = U()/I_Δ_nat(λ). The main properties of this modules, which hold for any affine Lie algebra, were proved in <cit.> (see also <cit.> for more properties on this modules), we summarize them in the following. Let λ∈ P and let M(λ) be the imaginary Verma module of Δ_nat-highest weight λ. Then M(λ) has the following properties: * The module M(λ) is a free U(_-Δ_nat)-module of rank 1 generated by the Δ_nat-highest weight vector 1⊗ 1 of weight λ. * M(λ) has a unique maximal submodule. * Let V be a U()-module generated by some Δ_nat-highest weight vector v of weight λ. Then there exists a unique surjective homomorphism ϕ: M(λ) → V such that 1⊗ 1 ↦ v. * M(λ)_λ = 1. For any μ=λ-kδ, k∈_>0, 0< M(λ)_μ < ∞. If μ≠λ - kδ for any integer k≥ 0 and M(λ)_μ≠ 0, then M(λ)_μ = ∞. * Let λ, μ∈^*. Any non-zero element of _U()(M(λ), M(μ)) is injective. * The module M(λ) is irreducible if and only if λ(c)≠ 0. Suppose now that λ(c)=0 and consider the ideal J_Δ_nat(λ) generated by I_Δ_nat(λ) and h_il, i∈ I_0 and l∈∖{0}. Set M̃(λ) = U()/J_Δ_nat(λ) Then M̃(λ) is a homomorphic image of M(λ) which we call reduced imaginary Verma module. The following is proved in <cit.>, Theorem 1. M̃(λ) is irreducible if and only if λ(h_i)≠ 0 for all i∈ I_0. § THE CATEGORY O_RED,IM Consider the Heisenberg subalgebra G which by definition is G= ⊕_k∈∖{0}_kδ⊕ c We will say that a -module V is G-compatible if: (i) V has a decomposition V=T(V)⊕ TF(V) where T(V) and TF(V) are non-zero G-modules, called, respectively, torsion and torsion free module associated to V. (ii) h_im for i∈ I_0, m∈∖{0} acts bijectively on TF(V), i.e., they are bijections on TF(V). (iii) TF(V) has no non-zero -submodules. (iv) G· T(V)=0. Consider the set ^*_red = {λ∈^* | λ(c)=0, λ(h_i)∉_≥ 0 i∈ I_0 } We define the category O_red,im as the category whose objects are -modules M such that * M is ^*_red-diagonalizable, that means, M = ⊕_ν∈^*_red M_ν, M_ν = { m∈ M | h_im=ν(h_i)m, dm = ν(d)m, i∈ I_0 } * For any i∈ I_0 and any n∈, e_in acts locally nilpotently. * M is G-compatible. 
* The morphisms between modules are -homomorphisms Reduced imaginary Verma modules belongs to O_red,im. Indeed, for M̃(λ) consider T(M̃(λ)) = v_λ and TF(V) = ⊕_k∈, n_1, …, n_N∈_≥0M̃(λ)_λ+kδ - n_1α_1 - … -n_Nα_N, and at least one n_j≠ 0. Moreover, direct sums of reduced imaginary Verma modules belongs to O_red,im. Recall that a loop module for is any representation of the form M̂ := M ⊗ℂ[t,t^-1] where M is a 𝔤-module and the action of on M̂ is given by (x ⊗ t^k)(m ⊗ t^l) := (x · m) ⊗ t^k+l , c(m ⊗ t^l) = 0 for x ∈𝔤, m ∈ M and k,l ∈ℤ. Here x · m is the action of x ∈𝔤 on m ∈ M. Let M is a 𝔤-module in the BGG category 𝒪. Then the loop module M̂ can not lie in O_red,im. Let M ∈𝒪 and let M̂ be its associated loop module. If M is finite dimensional, it is a direct sum of finite dimensional irreducible -modules, and these have highest weights which are non-negative integers when evaluated in h_i for any i∈ I_0. So, condition (1) is not satisfied and M̂ does not belongs to O_red,im. Assume now that M is an infinite dimensional -module. Note that condition (2) is satisfied as acts locally nilpotently on M. If condition (1) does not hold, we are done. Suppose that (1) holds and that M̂ is G-compatible. We have M̂ = T(M̂) ⊕ TF(M̂) satisfying (i) - (iv) above. Take any nonzero element ∑_i=-k^km_i⊗ t^i∈ T(M̂) with m_i∈ M_μ for some weight μ̅∈^*_red. Then by (iv) we have 0 = (h_j⊗ t^r)(∑_i=-k^km_i⊗ t^i) = ∑_i=-k^k(h_j· m_i) ⊗ t^i+r = μ̅(h_j)(∑_i=-k^km_i⊗ t^i+r) where j ∈ I_0, r ∈ℤ∖{0}. Hence μ̅(h_j) = 0, for any j ∈ I_0, which contradicts to the fact that μ̅∈^*_red. Then T(M̂) = 0 and M̂ = TF(M̂) which is a -module contradicting (i) and (iii), and thus (3). This completes the proof. § MAIN RESULTS In this section we will show that the category O_red,im is a semisimple category having reduced imaginary Verma modules as its simple objects. First we will show that reduced imaginary Verma modules have no nontrivial extensions in O_red,im. If λ,μ∈^*_red then _O_red,im^1(M̃(λ), M̃(μ)) = 0. Let M be an extension of M̃(λ) and M̃(μ) that fits in the following short exact sequence 0 [r] M̃(λ) [r]^ι M [r]^π M̃(μ) [r] 0 Suppose μ = λ +kδ- ∑_i=1^N s_iα_i, for s_i∈ and k∈, and all s_i's have the same sign or equal to 0. First, consider the case when s_i=0 for all i∈ I_0. Then μ = λ + kδ and so, in M there will be two vectors v_λ and v_μ of weights λ and μ respectively, annihilated by ⊗[t, t^-1]. Moreover, because of the condition (iv) in the definition of G-compatibility, these two points are isolated. So, v_λ and v_μ are highest weight vectors, each of which generates an irreducible subrepresentation (isomorphic to M̃(λ) and M̃(μ) respectively), and the extension splits. Hence, we can assume that not all s_i are equal to zero and that the map ι: M̃(λ) → M in the short exact sequence is an inclusion. Assume that s_i∈_≥ 0 for all i. Let v_μ∈ M be a preimage under the map π of a highest weight vector v_μ∈M̃(μ) of weight μ. We have (⊗[t, t^-1])v_μ=Gv_μ=0, and we are going to show that Gv_μ=0. Assume that v_μ∉ T(M). Then we claim that T(M)= v_λ. Indeed, we have v_λ⊂ T(M). If u∈ T(M)∖ v_λ is some nonzero weight element, then G· u=0 and π(u) belongs to T(M̃(μ))= v_μ. If π(u)=0 then u∈M̃(λ) which is a contradiction. If π(u) is a nonzero multiple of v_μ, then u has weight μ and thus u is a multiple of v_μ which is again a contradiction. So, we assume T(M)= v_λ. Note that for any i∈ I_0 and m∈∖{0} we have π (h_imv_μ)=h_imπ (v_μ)= h_im v_μ = 0. Then h_imv_μ∈M̃(λ). Suppose there exists j∈ I_0 such that h_jmv_μ≠ 0 for m∈∖{0}. 
Because h_jmv_μ∈M̃(λ) and has weight μ+mδ, it belongs to TF(M̃(λ)). Hence, there exists a nonzero v'∈M̃(λ) of weight μ such that h_jmv_μ = h_jm v'. Hence, h_jm (v_μ - v')=0 implying v_μ - v' ∈ T(M) ≅ v_λ. Then v_μ - v' = p v_λ, for some p ∈ℂ. Comparing the weight we arrive to a contradiction. Hence, h_inv_μ=0. So, we get Gv_μ=0. Recall that the operators e_im acts locally nilpotently on M̃(λ). We claim that e_imv_μ=0 for all possible i and n. Indeed, assume that e_jmv_μ≠ 0 for some j∈ I_0 and some integer m. Then e_imv_μ∈M̃(λ). Consider the ŝl̂_2-subalgebra s(j) generated by f_jn, e_jn and h_jl for n,l∈. Let M_j be an s(j)-submodule of M generated by v_μ. Then M_j is an extension of reduced imaginary Verma s(j)-modules, one of which of highest weight μ. Since M∈O_red,im, we immediately see that M_j is an object of the corresponding reduced category O_red,im(s(j)) for s(j). But this category is semisimple by <cit.>. Hence, e_imv_μ=0 for all i and m. Therefore, v_μ generates a 𝔤-submodule of M isomorphic to M̃(μ) and the short exact sequence splits. Assume now that s_i∈_≤ 0 for all i and not all of them are 0. As M̃(μ) is irreducible and M̃(λ) is a 𝔤-submodule of M, the short exact sequence splits completing the proof. Observe that modules M̃(λ) and M̃(λ-kδ) have a nontrivial extension in the category of -modules for any integer k. If M is an irreducible module in the category O_red,im, then M≅M̃(λ) for some λ∈ĥ_red^*. Let M be an irreducible module in O_red,im. As a G-module, M≅ T(M)⊕ TF(M) where both summands are non-zero. Let v∈ T(M) be a non-zero element of weigh λ∈ĥ_red^*. Then h_imv=0 for all i∈ I_0 and all m∈∖{0}. For each i∈ I_0 let p_i ∈_>0 be the minimum possible integer such that e_i0^p_iv=0. If all p_i=1 we have e_i0v=0 and then, because [h_in,e_i0]=2e_in we get that e_inv=0 for all i∈ I_0 and n∈∖{0}. Hence, we have an epimorphism M̃(λ) ↠ M, since λ∈ĥ_red^*, M̃(λ) is simple and so M≅M̃(λ). On the other hand, assume there exists at least one p_i such that p_i>1. We are going to construct a set of elements in M which are killed by e_i0 for all i∈ I_0. First of all, set p^(1) = max{p_i|i∈ I_0} and set w_i:=e_i0^p^(1)-1v. Note that w_i=0 if p^(1)>p_i and w_i≠ 0 if p^(1) = p_i, so at least one w_i in non-zero. If for all j∈ I_0, e_j0w_i=0 we are done, if not there exists numbers p_ij∈_>0 such that e_j0^p_ijw_i=0 and some of the p_ij are strictly bigger than 1. Set p^(2)=max{p_ij | i,j∈ I_0} and set w_ij = e_j0^p^(2)-1 w_i, note that at least one w_ij is non-zero. If e_k0w_ij=0 for all k∈ I_0 we are done, if not we repeat the process. Because of the locally nilpotency of the e_l0 for l∈ I_0, in finitely many steps, let say ℓ steps, we can find at least one non-zero element w_ i, for i = i_1i_2… i_ℓ a string of elements in I_0 such that e_l0w_ i=0. Moreover, if i^- denotes the string i_1i_2… i_ℓ -1, then w_ i = e_i_ℓ0^p^(ℓ)-1w_ i^- and so, for all n∈∖{0}, 0=h_i_ℓne_i_ℓ0^p^(ℓ)w_ i^- = 2p^(ℓ)e_i_ℓnw_ i, i.e., e_i_ℓnw_ i =0. Now, 0 = h_j0e_jme_i_ℓ0^p^(ℓ)w_ i^- = e_jmh_j0e_i_ℓ0^p^(ℓ)w_ i^- + 2e_jme_i_ℓ0^p^(ℓ)w_ i^- = 2p^(ℓ)e_jme_i_ℓ0^p^(ℓ)-1w_ i^- = 2p^(ℓ)e_jmw_ i. Pick one of the non-zero w_ i constructed above and let W_ i = U(G)w_ i be a G-submodule of M. By construction e_lnW_ i=0 for all l∈ I_0 and n∈. Considered the induced module I(W_ i) = _G⊕ H⊕ N_+^ W_ i, where N_+ = ⊕_i∈ I_0, n∈ Z e_in acts by 0, H = ⊕_i∈ I_0 h_i ⊕ d acts by h_iw_ i = μ(h_i)w_ i, dw_ i = μ(d)w_ i, for some weight μ. Because M is simple, it is a quotient of I(W_ i). 
If w_ i∈ T(M), we have W_ i = w_ i, and so M is a quotient of I(W_ i) = M̃(λ) and we are done. In case w_ i∉ T(M), as in the proof of Proposition 6.0.3. of <cit.> we get a contradiction. This completes the proof. If M is an arbitrary object in O_red,im, then M≅⊕_λ_i ∈ĥ^*_redM̃(λ_i), for some λ_i's. Because M is in O_red,im, it is a G-compatible and so, it has a decomposition as a G-module given by M≅ T(M) ⊕ TF(M). Since all the weights of M are in ĥ^*_red, T(M) is not a -submodule of M. Indeed, suppose T(M) is a -module. let v∈ T(M) and consider f_0 v∈ T(M). Then h_0mf_0v=0 and f_mv=0 for any m≠ 0. Applying h_0,-m we get h_0,-mf_m v=0 and f_0 v=0. Since the weight of v is in ĥ^*_red, e_0^p v≠ 0 for any p>0. But if p is sufficiently large the weigh of e_0^p v will not be in ĥ^*_red and we get a contradiction. Let v∈ T(M) non-zero. As in the proof of the previous statement there exists a string i of elements of I_0 and a vector w_ i such that e_jmw_ i =0 for all j∈ I_0 and m∈. Let W_ i = U(G)w_ i. Then we have two possibilities: either w_ i∉ T(M) or w_ i∈ T(M). In the first case, consider the induced module I(W_ i). Clearly TF(I(W_ i))⊆ I(W_ i). Now, if w∈ I(W_ i), because w_ i∉ T(M) we have gw≠ 0 for g∈ G and so w∈ TF(I(W_ i)). Then TF(I(W_ i)) = I(W_ i). By the five lemma, any quotient and subquotient of I(W_ i) also satisfies this property. Set M' := U()w_ i which is a subquotient of I(W_ i). Then M' is a -submodule of M and so M' = TF(M') is a -submodule of TF(M), but TF(M) does not have proper -submodule and so M'=TF(M). But, W_ i is a proper G-submodule of M' which is not possible because M is in O_red,im. And so, this case does not occur. In the second case, W_ i = w_ i⊆ T(M). So, as -modules I(W_ i) ≅M̃(λ_ i) for some λ_ i, is a -submodule of M. Then, any non-zero element of T(M) generates an irreducible reduced imaginary Verma module which is a -submodule of M and because there are no extensions between them, they are direct summands on M. The category O_red,im is closed under taking subquotients and direct sums, so it is a Serre subcategory. The proofs on the above statements depends on the structure of reduced imaginary Verma modules, the closed partition Δ_nat and the associated Borel subalgebra b_nat. But, the properties of reduced imaginary Verma modules hold for both untwisted or twisted affine Lie algebras. Moreover, the natural Borel subalgebra for the twisted Lie algebra is properly contained in the natural Borel subalgebra for the untwisted case. So, the results above hold for any affine Lie algebra. § ACKNOWLEDGEMENT JCA has been support by the FAPESP Grant 2021/13022-9. plain
http://arxiv.org/abs/2307.06257v1
20230712155205
The physical acceptability conditions and the strategies to obtain anisotropic compact objects
[ "Daniel Suárez-Urango", "Laura M. Becerra", "Justo Ospino", "Luis A. Núñez" ]
gr-qc
[ "gr-qc" ]
The physical acceptability conditions and the strategies to obtain anisotropic compact objects Daniel Suárez-Urango, Laura M. Becerra Escuela de Física, Universidad Industrial de Santander, Bucaramanga 680002, Colombia; Justo Ospino Departamento de Matemática Aplicada and Instituto Universitario de Física Fundamental y Matemáticas, Universidad de Salamanca, Salamanca, Spain; and Luis A. Núñez Escuela de Física, Universidad Industrial de Santander, Bucaramanga 680002, Colombia and Departamento de Física, Universidad de Los Andes, Mérida 5101, Venezuela. August 12, 2023 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ We studied five methods to include anisotropy, or unequal stress distributions, in general relativistic matter configurations. We used nine acceptability conditions that the metric and physical variables must meet to determine if our models were astrophysically viable. Our analysis found the most effective way to introduce anisotropy while keeping a simple density profile. We also found a practical “rule of thumb” that relates the density at the boundary to the density at the centre of relativistic matter distributions. Additionally, we calculated the configuration radius and encountered that values observed by NICER for PSR J0740+6620 are consistent with several acceptable matter configurations, both isotropic and anisotropic. PACS: Keywords: § INTRODUCTION General Relativity is experiencing an extraordinary era where what was once considered a mathematical curiosity, such as black holes, and faint phenomena, like gravitational waves, have transformed into observable astrophysical entities <cit.>. Significant efforts explore the properties of physically viable matter configurations that may describe general relativistic compact objects in various states: static, stationary, or undergoing collapse. Any exact solution to the Einstein Equations has certain restrictions, constraining the metric and the energy-momentum tensor to ensure that emerging space-time geometry is astrophysically reasonable. Since the seminal result of M.S.R. Delgaty and K. Lake <cit.>, several works have expanded the set of acceptability conditions to obtain more meaningful solutions <cit.>. These conditions are elaborated under the assumption that there are two distinct components for the pressure, one radial and the other tangential, which yields a richer and more realistic description of the internal structure of a compact object. The consideration of local anisotropy, where the radial and tangential stresses are unequal (P ≠ P_⊥), has gained recognition as a relevant concept in describing general relativistic stars. This idea can be traced back to the pioneering works of J. H. Jeans <cit.> and G. Lemaître <cit.> and has continued to be explored in both Newtonian and relativistic frameworks (see <cit.> and references therein). Notably, a recent paper <cit.> presents intriguing insights into the instability of isotropic pressure distribution in self-gravitating matter systems. 
Various heuristic strategies have been employed to describe anisotropic microphysics in astrophysical relativistic matter configurations <cit.>). First, there is the initial method proposed by Bowers & Liang <cit.>; followed by other schemes such as the proportional-to-gravitation approach <cit.>; the quasilocal method <cit.>; the covariant approach using proportional pressure gradient <cit.>; the complexity factor method <cit.>, and the Karmarkar embedding class I <cit.>. Finally, there is another strategy for implementing anisotropic fluids for General Relativistic matter configurations: the gravitational decoupling approach <cit.>. Throughout this work, we shall consider the first five common assumptions to model non-pascalian fluids in general relativistic matter configurations and examine, through extensive modelling, the consequences of the acceptability conditions. We identify the relevant parameters for a particular equation of state, their range and relevance. We integrate the structure equations implementing every anisotropic equation of state with the same density profile ρ(r) for all configurations. We also identify a comparable set of parameter ranges so as to compare all the physical acceptabilities of the different anisotropic modelling strategies. Within this framework, for a particular common density distribution, we explore answers to the following two questions: * Which type of anisotropy strategy leads to more acceptable matter configurations? * Are these acceptable models consistent with the Neutron Star Interior Composition Explorer (NICER) observations <cit.>? This paper answers the above questions by organizing our subject matter into several sections. The next section describes the notation and framework of General Relativity. In Section <ref>, we list the acceptability conditions that our models must meet to be considered candidates for compact stellar objects. Section <ref> discusses five approaches to include anisotropy in a general relativistic matter configuration. Next, in Section <ref>, we explore the parameter space while fulfilling several acceptability conditions and answer the above queries. Finally, Section <ref> summarizes our closing remarks and conclusions. § THE FIELD EQUATIONS Let us consider the interior of a dense star described by a spherically symmetric line element written as ds^2 = e^2ν(r) dt^2- e^2λ(r) dr^2- r^2 (dθ^2+sin^2(θ)dϕ^2), with regularity conditions at r=r_c=0, i.e. e^2ν_c= constant, e^-2λ_c= 1, and ν^'_c=λ^'_c=0. We shall consider a distribution of matter consisting of a non-Pascalian fluid represented by an energy-momentum tensor: T_μ^ν = [ρ(r),-P(r),-P_⊥(r),-P_⊥(r) ] , where ρ(r) is energy density, with P(r) and P_⊥(r) the radial and tangential pressures, respectively. From Einstein's field equations, we obtain the physical variables in terms of the metric functions as ρ(r) = e^-2λ(2 r λ^'-1)+1 /8π r^2 , P(r) = e^-2 λ(2r ν^' +1) -1/8 π r^2 and P_⊥(r) = - e^-2λ/8π[ λ^'-ν^'/r-ν^''+ν^'λ^'-(ν^')^2] , where primes ^' denote differentiation with respect to r. Now, assuming the metric function λ(r) is expressed in terms of the Misner “mass” <cit.> as m(r)=r^2/2R^3_232 ⇔ m(r)=4π∫ ^r_0 T^0_0r^2dr ⇒ e^-2λ= 1-2 m(r)/r , Additionally, the interior metric should continuously match the Schwarzschild exterior solution at the sphere's surface, r=r_b=R. This implies that e^2ν_b= e^-2λ_b=1-2𝒞_⋆ = 1 -2M/R, where M = m_b is the total mass and 𝒞_⋆=M/R the compactness of the configuration. 
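The algebraic relations above can be checked mechanically. The following sympy sketch is an illustration added in this rewrite (not code from the paper): it takes trial metric functions ν(r) and λ(r) and returns ρ, P and P_⊥ from the displayed field equations, which is convenient for verifying candidate solutions before imposing the acceptability conditions.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
nu = sp.Function('nu')(r)      # metric function nu(r)
lam = sp.Function('lam')(r)    # metric function lambda(r)

def physical_variables(nu, lam, r):
    """Energy density and pressures from the Einstein equations quoted above."""
    e2l = sp.exp(-2 * lam)
    rho = (e2l * (2 * r * sp.diff(lam, r) - 1) + 1) / (8 * sp.pi * r**2)
    P   = (e2l * (2 * r * sp.diff(nu, r) + 1) - 1) / (8 * sp.pi * r**2)
    Pt  = -e2l / (8 * sp.pi) * (
            (sp.diff(lam, r) - sp.diff(nu, r)) / r
            - sp.diff(nu, r, 2)
            + sp.diff(nu, r) * sp.diff(lam, r)
            - sp.diff(nu, r)**2)
    return sp.simplify(rho), sp.simplify(P), sp.simplify(Pt)

# Consistency check with the mass function: setting e^{-2 lambda} = 1 - 2 m(r)/r,
# the density reduces to m'(r) / (4*pi*r**2), i.e. dm/dr = 4*pi*r**2*rho.
m = sp.Function('m')(r)
rho_m, _, _ = physical_variables(nu, -sp.log(1 - 2 * m / r) / 2, r)
```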
From now on, the subscripts b and c indicate the variable's evaluation at the boundary and the centre of the matter distribution. The Tolman-Oppenheimer-Volkoff equation (i.e. T^μ_r ; μ = 0, the hydrostatic equilibrium equation) for this anisotropic fluid can be written as dP/dr = -F_g + F_a, with the "gravitational force" F_g = (ρ + P)(m + 4π r^3 P)/[r(r - 2m)] and the "anisotropic force" F_a = (2/r)(P_⊥ - P). Thus, we can identify two forces competing in compensating the pressure gradient. Equation (<ref>), together with dm/dr = 4π r^2 ρ, constitute the relativistic stellar structure equations. From equation (<ref>), notice that the pressure gradient becomes less steep when the anisotropy is positive, Δ_+ = P_⊥ - P > 0, and conversely, it changes more rapidly when the anisotropy is negative, Δ_- = P_⊥ - P < 0. The only possibility for negative anisotropy is that the tangential and radial pressures vanish at r = r_b. Thus, for a fixed central stiffness, σ = P_c/ρ_c, the compactness, 𝒞_⋆, of the sphere increases when there is positive anisotropy Δ_+, and decreases when there is negative anisotropy Δ_-. Concerning positive anisotropy, we can accommodate more massive configurations than in the isotropic Δ_0 = 0 scenarios. If both forces balance, i.e., F_g = F_a, we obtain a specific matter configuration characterised by vanishing radial pressures and solely sustained by tangential stresses <cit.>. This is because the tangential stresses support the mass shells, reducing the required radial pressure in such circumstances <cit.>. § THE PHYSICAL ACCEPTABILITY CONDITIONS The emerging physical variables have to comply with the various acceptability conditions <cit.>, which are crucial when considering self-gravitating stellar models. Only acceptable self-gravitating objects are of astrophysical interest and, in this work, those models have to comply with nine requirements expressed as <cit.>:
C1: 2m/r < 1, which implies <cit.>:
* That the metric potentials e^λ and e^ν are positive, finite and free from singularities within the matter distribution, satisfying e^λ_c = 1 and e^ν_c = constant at the centre of the configuration.
* The inner metric functions match the exterior Schwarzschild solution at the boundary surface.
* The interior redshift should decrease with increasing r.
C2: Positive density and pressures, finite at the centre of the configuration, with P_c = P_⊥ c <cit.>.
C3: ρ^' < 0, P^' < 0, P_⊥^' < 0, with density and pressures having maxima at the centre, thus ρ^'_c = P^'_c = P^'_⊥ c = 0, with P_⊥ ≥ P.
C4: The causality conditions on the radial and tangential sound speeds, 0 < v_s^2 ≤ 1 and 0 < v_s ⊥^2 ≤ 1, respectively <cit.>.
C5: The trace energy condition ρ - P - 2P_⊥ ≥ 0, which is more restrictive than the strong energy condition, ρ + P + 2P_⊥ ≥ 0, for imperfect fluids <cit.>. This condition has several interesting consequences for isotropic EoS <cit.>.
C6: The dynamic perturbation analysis restricts the adiabatic index <cit.> Γ = [(ρ + P)/P] v_s^2 ≥ 4/3.
C7: The Harrison-Zeldovich-Novikov stability condition: dM(ρ_c)/dρ_c > 0 <cit.>.
C8: Stability against cracking induced by local density perturbations, δρ = δρ(r) (for more details, the reader is referred to <cit.>).
C9: The adiabatic convective stability condition ρ^'' ≤ 0, which is more restrictive than the outward decreasing density and pressure profiles <cit.>.
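Several of the conditions above reduce to simple pointwise inequalities once a model has been integrated on a radial grid. The sketch below is illustrative only and is an addition of this rewrite: it screens a model against a subset of C1-C5, while C6-C9 would require the adiabatic index, the M(ρ_c) sequence, and the cracking and convection analyses, which are not reproduced here.

```python
import numpy as np

def check_basic_acceptability(r, m, rho, P, Pt):
    """Screen a numerically integrated model against a subset of C1-C5.

    All inputs are 1D arrays on the same radial grid (geometric units), with rho
    assumed strictly decreasing so that dP/drho along the profile is well defined.
    """
    drho = np.gradient(rho, r)
    dP = np.gradient(P, r)
    dPt = np.gradient(Pt, r)
    v2r = np.gradient(P, rho)    # radial sound speed squared along the profile
    v2t = np.gradient(Pt, rho)   # tangential sound speed squared along the profile
    return {
        "C1: 2m/r < 1":               bool(np.all(2.0 * m[1:] / r[1:] < 1.0)),
        "C2: rho, P, P_perp >= 0":    bool(np.all(rho >= 0) and np.all(P >= 0) and np.all(Pt >= 0)),
        "C3: outward decreasing, P_perp >= P":
            bool(np.all(drho <= 0) and np.all(dP <= 0) and np.all(dPt <= 0) and np.all(Pt >= P)),
        "C4: causality":              bool(np.all((v2r > 0) & (v2r <= 1)) and np.all((v2t > 0) & (v2t <= 1))),
        "C5: rho - P - 2P_perp >= 0": bool(np.all(rho - P - 2.0 * Pt >= 0)),
    }
```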
Acceptability conditions for general relativistic spheres refer to the criteria that must be satisfied by the metric and physical variables in a relativistic matter distribution to be considered astrophysically viable and consistent within the framework of General Relativity. They are motivated by * Regularity conditions on the physical and metric variables, i.e. C1 and C2: A physically acceptable solution should exhibit regular behaviour, particularly at the centre of the sphere, avoiding singularities or divergences in physical quantities such as energy density, pressure, and metric components. * Energy conditions and equation of state, i.e. C2, C3, C4 and C5: Relativistic matter distributions are typically required to satisfy certain energy conditions, which impose constraints on the stress-energy tensor components. These conditions ensure the energy density and pressures associated with the matter distribution are within physically reasonable bounds. * Stability, i.e. C6, C7, C8 and C9: This involves assessing the stability of the matter distribution against perturbations or dynamic changes, ensuring that it remains in a state of equilibrium and does not collapse, cracks or other undesirable behaviours. § ANISOTROPY HEURISTIC STRATEGIES This section will introduce several assumptions and heuristic strategies to model anisotropy in relativistic matter configurations. Local anisotropy in compact objects is a hypothesis that has gained relevance over time. Nowadays, it is well understood that unequal radial and tangential stresses may increase the stability of neutron star models. However, a complete description of the complex interactions in the fluid that cause such phenomena is still unknown. The most common approaches in introducing anisotropy for modelling relativistic matter configuration are: * Anisotropy proportional to gravitational force. M. Cosenza et al.  <cit.> inspired by the work of Bowers and Liang <cit.> proposed suitable models for anisotropic matter by considering the anisotropic force proportional to the gravitational one. This relationship leads to the following expression for the difference between the tangential and radial pressures: P_⊥ - P = C_GF(ρ + P)(m + 4π r^3P)/r - 2m = Δ_GF , * Quasi-local anisotropy Local anisotropy can also be considered as the influence of quasi-local variables, which are quantities that are not solely dependent on the state of the fluid at a specific point in space-time <cit.>. These variables, such as the curvature radius r or the compactness μ (=2m/r), are employed as a quasi-local equation of state to describe anisotropy <cit.>. Within this approach, a particular type of anisotropy is: P_⊥ - P = C_QL P μ = 2 C_QL P m /r = Δ_QL . * Anisotropy proportional to a pressure gradient. Another potential form for the anisotropic force, considering equation (<ref>), is for it to be proportional to the pressure gradient. Raposo and collaborators <cit.> proposed an anisotropy proportional to the covariant derivative of pressure as: P_⊥ - P = -C_PG f(ρ)k^μ∇_μP = -C_PG f(ρ)√(1 - 2m/r)dP/dr = Δ_PG , where f(ρ) (see appendix <ref> for details) is a generic function of the energy density and k^μ = (0,k^1,0,0) is a unitary space-like vector orthogonal to the fluid four-velocity. * Complexity factor anisotropy. This factor is a quantity defined by decomposing the Riemann tensor, which measures the level of complexity in self-gravitating systems <cit.>. It reflexes the impact of local anisotropy and density inhomogeneity on the active gravitational mass. 
Consequently, systems with minimal complexity are represented by homogeneous and isotropic fluids. In the case of anisotropic fluids, satisfying the condition of a vanishing complexity factor with minimal complexity, the local anisotropy can be expressed as follows: P_⊥ - P= - C_CF/2 r^3∫_0^rr̃^3ρ^'dr̃ = Δ_CF . * Karmarkar anisotropy. The Karmarkar condition <cit.> is a relationship among components of the Riemann tensor, given by R_0303 R_1212 -R_0101 R_2323 -R_0313 R_0212 = 0 . This condition provides a geometric mechanism for incorporating anisotropy into matter configurations. To express equation (<ref>) in a scalar form, we introduce a set of scalar functions known as structure scalars, obtained from the orthogonal splitting of the Riemann tensor (refer to <cit.> and <cit.> for more a detailed discussion). Hence, the scalar Karmarkar condition for spherically symmetric static configurations is Y_0 X_1+(X_0+X_1)Y_1=0 , with Y_0 = 4π(ρ + 3𝐏), Y_1 = ℰ_1 - 4πΔ, X_0 = 8πρ and X_1 = -(ℰ_1 + 4πΔ), where 𝐏 = P + 2P_⊥/3 and ℰ_1 = -4π/r^3(∫_0^rr^3ρ^'dr + r^3Δ) . Thus, the induced anisotropy by the Karmarkar condition, written in terms of the physical variables, is given by P_⊥ - P = C_KC/r^3∫_0^rr^3ρ^'dr ((3P -ρ) - 1/r^3∫_0^rr^3ρ^'dr/4 ρ) = Δ_KC . In the above equations (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), we denoted the corresponding anisotropic parameter by C_GF, C_QL, C_PG, C_CF and C_KC, respectively. The relationship between the complexity and Karmarkar anisotropies is evident when we re-write equation (<ref>) in terms of Δ_CF, i.e. Δ_KC = -Δ_CF( C(3P-ρ) + 2Δ_CF/2 ρ C) . Where we have set C = C_CF = C_KC. It is worth mentioning that when Δ_CF = 0 ⇔ Δ_KC =0 and the only matter configuration for both anisotropic strategies corresponds to the Schwarzchild homogeneous isotropic solution. Another strategy for implementing anisotropic fluids in General Relativistic matter configurations is the gravitational decoupling approach <cit.>. This procedure assumes that the energy-momentum tensor splits into two parts as T_μ^ν = T̂_μ^ν + θ_μ^ν , where T̂_μ^ν corresponds to the perfect fluid contribution and θ_μ^ν describes any other coupled form of gravitational source. Implementing the anisotropic parameter C for modelling this method is unattainable. Thus, comparing the models emerging from this strategy with those executed with all previous techniques is impossible. It deserves a more detailed consideration which will be developed elsewhere. § ANISOTROPY AND PHYSICAL ACCEPTABILITY In this section, we discuss the physical acceptability of relativistic anisotropic models. We numerically integrate the structure equations (<ref>) and (<ref>) implementing every equation of state for anisotropy from the previous section, (i.e. Δ_GF, Δ_QL, Δ_PG, Δ_CF, and Δ_KC) and selecting a common density profile ρ(r) = ρ_c(1 - α r^2) , where the central density ρ_c and the constant α are free parameters. This simple Tolman VII density profile <cit.> is not deprived of physical interest <cit.> and has a long tradition of modelling compact objects. It corresponds to the Gokhroo-Mehra <cit.> solution used in several anisotropic static spheres in General Relativity <cit.>. Additionally, under some circumstances <cit.>, it leads to densities and pressures that give rise to an equation of state similar to the Bethe-Börner-Sato Newtonian equation for nuclear matter <cit.>. It also describes radiating anisotropic fluid spheres <cit.> representing the Kelvin-Helmholtz phase in the birth of a neutron star <cit.>. 
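The integration just described can be made concrete with a short script. The sketch below is an illustration written for this rewrite, not the authors' code: it integrates the structure equations inward from the surface with scipy's solve_ivp, using the density profile above and, as one example closure, the quasi-local anisotropy Δ_QL = 2 C m P / r. Geometric units (G = c = 1, lengths in km) and all parameter values are assumptions of the sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Geometric units (G = c = 1), lengths in km. A central density of about
# 1e15 g/cm^3 corresponds to roughly 7.4e-4 km^-2; the values below are illustrative.
rho_c = 7.4e-4           # central density [km^-2]
R = 12.0                 # boundary radius [km]
kappa = 0.3              # rho_b / rho_c
C_ql = 0.5               # quasi-local anisotropy parameter
alpha = (1.0 - kappa) / R**2

def rho(r):
    return rho_c * (1.0 - alpha * r**2)

def mass(r):
    return 4.0 * np.pi * rho_c * (r**3 / 3.0 - alpha * r**5 / 5.0)

def dP_dr(r, y):
    P = y[0]
    m = mass(r)
    delta = 2.0 * C_ql * m * P / r                       # Delta_QL closure (one of the five strategies)
    grav = (rho(r) + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    return [-grav + 2.0 * delta / r]

# Integrate inward from the surface, P(R) = 0, towards the centre
sol = solve_ivp(dP_dr, (R, 1e-6), [0.0], method="RK45", rtol=1e-8, atol=1e-14)
P_c = sol.y[0, -1]        # central pressure [km^-2]
M_total = mass(R)         # total mass in km (1 M_sun is about 1.477 km)
```

Swapping the `delta` line for any of the other closures (Δ_GF, Δ_PG, Δ_CF, Δ_KC) and checking the resulting profiles against C1-C9 reproduces, in outline, the comparison carried out in this section.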
Now, from equation (<ref>), we obtain the boundary radius of the configuration as a function of the physical parameters of the problem, i.e. ρ_c and ϰ = ρ_b/ρ_c as ρ̃ = ρ̃_c(1-α̃ x^2) ⇒m̃ = 4πρ̃_c(x^3/3 - αx^5/5) ⇒ R = { M /4πρ_c[1/3-(1-ϰ)1/5]}^1/3 . Where we have defined the following quantities α =α R^2 , ϰ = 1 - α =ρ_b/ρ_c , m = Rm̃ , ρ = 1/R^2ρ̃ , and r = R x , where R and M are the structure boundary radius and total mass, respectively. In appendix <ref>, we present the dimensionless expressions for the structure equations and each anisotropic EoS, including some information about the simple numerical integration techniques. §.§ The range of the parameters The solution of equation (<ref>) for the density profile (<ref>) is sensitive to ρ_c, ϰ = ρ_b/ρ_c and the anisotropy factor, C <cit.>. A variation of these three factors generates a parameter space, exhibiting several acceptability conditions satisfied by each model. We shall identify which anisotropy delivers more physically acceptable configurations, i.e. satisfy more acceptability conditions. Thus, we shall identify a common set of parameter variations so as to compare the physical acceptability of the different anisotropic modelling strategies. We start determining the possible variation of a common anisotropic parameter, C. Regarding the case of Δ_GF, observe that equation (<ref>) leads to dP/dr = - h (ρ + P)(m + 4 π r^3 P)/r(r-2m) , with h = 1 - 2 C, and when h = 1 the isotropic case is recovered. Notice that condition C3 and equation (<ref>) implies h > 0, therefore if ρ_b≠ 0 we have, h = 1 - 2 C > 0 ⇒ C < 1/2 , and since P_⊥≥ 0 ⇒ 0 ≤ C < 1/2 . The tangential pressure should be positive at the boundary P_⊥ b≥ 0 within the matter distribution and from equation (<ref>), it restricts the anisotropic parameter to 0 ≤ C < 1/2 for any EoS having ρ_b≠ 0. We selected six values for the anisotropy parameter, i.e. C = 0.000, 0.050, 0.150, 0.250, 0.350, and 0.450. In addition to the anisotropic parameter, C, there are two other significant elements: the central density, ρ_c, and ϰ. According to typical values in compact objects/neutron stars, ρ_c could go from 0.1 × 10^15 to 2.5 × 10^15 g/cm^3. The scale variation for ϰ runs from 0.0 (vanishing density at the surface) to 0.9 (almost homogeneous density). Finally, we have to provide the total mass, M, of the configuration (≈ 2.08 M_⊙, the highest reliable gravitational mass of any neutron star <cit.>) to determine the central density of each model. §.§ The best method to introduce anisotropy To answer the first question, we shall follow two lines of reasoning in identifying which of the above anisotropy strategies is best suited in providing more acceptable models. The next section identifies regions in the parameter space (C, ϰ and ρ_c), that comply with the acceptability criteria. Figure <ref> displays, in a colour scale, those patches for five different values of the anisotropy factor C. For example, in the isotropic case, i.e. C = 0, we obtain 33 of these physically fully acceptable models. More acceptable matter configurations are placed below the red line in all cases shown, i.e. when ρ_b ≤9/10ρ_c ( 1 - 2 ρ_c/5) . As displayed in Figure <ref>, the second approach is to sum up the total number of models satisfying all nine-acceptability criteria. This method, discussed in section <ref>, complements the previous criterion because we explore the number of possible acceptable models for the whole range of variation of the anisotropic parameter. 
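Before turning to the model distributions, note that the boundary-radius relation given at the start of this section can be evaluated directly. The sketch below is an addition of this rewrite; it assumes CGS inputs (grams and g/cm^3), for which the G/c^2 factors entering the geometric-unit mass and density cancel, so that R comes out in centimetres.

```python
import numpy as np

M_SUN_G = 1.989e33   # solar mass in grams

def boundary_radius_km(M_solar, rho_c_cgs, kappa):
    """Boundary radius from R^3 = M / (4*pi*rho_c*[1/3 - (1 - kappa)/5]).

    Assumes the G/c^2 factors cancel between M and rho_c, so CGS inputs give R in cm.
    """
    bracket = 1.0 / 3.0 - (1.0 - kappa) / 5.0
    R_cm = (M_solar * M_SUN_G / (4.0 * np.pi * rho_c_cgs * bracket)) ** (1.0 / 3.0)
    return R_cm / 1.0e5   # cm -> km

# Scan illustrative central densities and density ratios for a 2.08 M_sun configuration
for rho_c in (0.7e15, 1.0e15, 1.3e15):
    for kappa in (0.0, 0.4, 0.8):
        print(rho_c, kappa, round(boundary_radius_km(2.08, rho_c, kappa), 1))
```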
§.§.§ Acceptable model distribution in a parameter space In this section, we shall discuss the acceptable model distribution in the common parameter space defined by 0.000 ≤ C ≤ 0.450; 0.1 × 10^15 ≤ρ_c ≤ 2.5 × 10^15 g/cm^3 and 0.0 ≤ϰ≤ 0.9. Figure <ref> displays, in a colour scale, this model distribution for six different values of the anisotropy factor C. As will be clear in the following discussion, this range of variation in the parameter plane (ρ_c, ϰ) is due to the NICER-acceptable models <cit.> when we considered the total mass of the configuration M ≈ 2.08 M_⊙. See figure <ref> to grasp the rationale of the parameter variation. For a low anisotropic presence (C = 0.050 displayed in Figure <ref>), Δ_GF and Δ_QL strategies deliver more acceptable models (yellow patches represent models satisfying the nine criteria) than in the isotropic case. Several models with anisotropy proportional to the pressure gradient, Δ_PG, become unacceptable because they do not meet the adiabatic index's stability criterium C6. On the other hand, configurations with anisotropy defined by the complexity factor have nonphysical negative pressure and positive tangential pressure gradient. Moreover, configurations with vanishing density at the boundary (ρ_b = 0 ⇌ϰ = 0.0) do not comply with C5, C6 or C8. The Karmarkar anisotropy scheme, Δ_KC, produces unsuitable configurations having negative tangential pressures. When ϰ = 0.0 the corresponding matter distributions do not comply with C4, C6 or C8. In general, as ϰ increases, the speed of sound exceeds the light speed, failing C4. The increase in the central density oversteps the condition on the trace of the energy-momentum tensor, and the darker region in the upper right corner is due to the models' cracking (C8). Raising C to 0.150 enhances the acceptability when anisotropy is proportional to the gravitational force, and 65 out of 90 models satisfy all the conditions. On the other hand, quasi-local anisotropy does not show variation in the acceptable configurations, while the number of acceptable models with Δ_PG decreases drastically by failing with condition C6. The increased anisotropy in Δ_CF makes acceptable models, now to have positive tangential pressure gradient and negative radial pressure. One model becomes acceptable for Δ_KC by fulfilling condition C5. Increasing the anisotropic factor C to 0.250 further causes a maximum in Δ_GF. As seen from Figure <ref>, almost all models, Δ_GF, meet the nine criteria for the considered values of ϰ and ρ_c. This is also evident from Figure <ref>. It attains 76 fully acceptable models when C=0.225. The remaining 12 configurations mainly do not satisfy the causality condition of radial sound speed C4. Quasi-local anisotropy also increases, to a lesser extent, the number of acceptable models. Only one model for Δ_PG becomes unacceptable due to condition C6. Regarding Δ_CF, a few models are no longer acceptable, breaking the tangential pressure condition C3. Acceptable models with Δ_KC remain unchanged. As it is clear from Figure <ref>, when C=0.350, there is no Δ_GF-model satisfying the nine acceptability criteria; the Δ_QL-strategy allows a few new models, while Δ_PG and Δ_CF decrease in acceptable models. In contrast, Δ_KC remains unchanged. Finally, as displayed in Figure <ref>, i.e. for C=0.450, only the Δ_QL-strategy provides more acceptable models than the isotropic condition. Several models under the red line, for Δ_CF, become acceptable by fulfilling condition C4. 
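The bookkeeping behind maps of this kind is a simple grid scan. The skeleton below is purely illustrative and was added in this rewrite: `integrate_model` and `check_all_conditions` are hypothetical helpers standing in for the integration of the structure equations and for the full C1-C9 battery, and the 9 × 10 grid mirrors the parameter ranges quoted above (90 models per value of C).

```python
import numpy as np

# Hypothetical helpers (placeholders, not defined here):
#   integrate_model(rho_c, kappa, C, strategy) -> radial profiles (r, m, rho, P, Pt)
#   check_all_conditions(profiles)             -> True if all nine criteria hold

def count_acceptable(strategy, C, integrate_model, check_all_conditions):
    """Count fully acceptable models on the (rho_c, kappa) grid for a given anisotropy C."""
    rho_c_grid = np.linspace(0.1e15, 2.5e15, 9)   # g/cm^3, as in the text
    kappa_grid = np.linspace(0.0, 0.9, 10)
    accepted = 0
    for rho_c in rho_c_grid:
        for kappa in kappa_grid:
            profiles = integrate_model(rho_c, kappa, C, strategy)
            if check_all_conditions(profiles):
                accepted += 1
    return accepted
```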
§.§.§ Total number of acceptable models and the best anisotropy strategy This section extends our analysis by determining the total number of entirely acceptable models. To achieve this, we redefine the range for the anisotropic parameter, starting from isotropy (C=0.000) and continuing until the last value (C=C_0) where no other fully acceptable model exists, i.e. satisfying all nine criteria for acceptability. In Figure <ref> and Table <ref>, we show the most effective methods for introducing anisotropy. The most useful approach is the quasi-local Δ_QL method <cit.>, followed by the Karmarkar scheme <cit.>, and finally the complexity factor approach <cit.>. These strategies include varying degrees of effectiveness due to the significant range of the anisotropic parameter C: 0.000 ≤ C_QL≤ 3.345, 0.000 ≤ C_KC≤ 4.760, and 0.000 ≤ C_CF≤ 3.570, respectively. Anisotropy leads to more acceptable models than isotropic ones. The most distinct scheme is the anisotropy proportional to gravitational force, Δ_GF. It has a narrow range of variation for the anisotropic parameter, with 0.000 ≤ C_GF≤ 0.250, and a pronounced peak with 76 models for C_GF = 0.225. Around half of the models fall within 0.050 ≤ C_GF≤ 0.250. Models with C_GF > 0.250 are considered unacceptable due to their positive tangential pressure gradient, which violates condition C3. Regarding the quasi-local approach, Δ_QL, the number of models satisfying all requirements increases with the level of anisotropy, reaching a peak of 53 for C_QL = 0.910. This method has a significant range in anisotropy 0.000 ≤ C_QL≤ 3.345. The simplest but least effective method is anisotropy proportional to pressure, Δ_PG. It has a limited anisotropic range of 0.000 ≤ C_PG≤ 0.578, with a maximum of 34 models at very low anisotropy, C_PG = 0.025. The following method is complexity anisotropy, Δ_CF, which has a considerable range, 0.000 ≤ C_CF≤ 3.570, but no anisotropic parameter generates more acceptable models than the isotropic case. Finally, the geometric Karmarkar strategy is Δ_KC related to the complexity anisotropy and has the broadest range, i.e., 0.000 ≤ C_CF≤ 4.760. §.§ NICER acceptable models The Neutron Star Interior Composition Explorer is an X-ray telescope on the International Space Station which studies the X-ray emissions from neutron stars, helping to determine their size, mass, and the properties of their dense interiors. By measuring the mass and radius of multiple neutron stars, NICER refines our understanding of the equation of state, providing valuable constraints on the properties of ultra-dense matter. NICER also detects the pulsation of neutron stars, permitting scientists to explore the dynamics of their atmospheres, unravelling the physical processes occurring in and around them. NICER has been employed to obtain the first precise (and dependable) measurements of a pulsar's size and mass and the first-ever map of hot spots on its surface (see <cit.> and references therein). In figure <ref>, we include a region covering observational estimates for the radius of the PSR J0740+6620, with mass 2.08 ± 0.07 M_⊙, which is the highest reliable gravitational mass of any neutron star <cit.>. In table <ref>, we indicate the number of NICER-compatible models for different anisotropic strategies. From equation (<ref>), we calculate the configuration radius as a function of ρ_c and ϰ. We find that the central density, 0.7 × 10^15  g/cm^3 ≤ ρ_c ≤ 1.3 × 10^15  g/cm^3, is consistent with observational data. 
The range for ϰ associated with various central densities for different anisotropy strategies are displayed in table <ref>. We also find various acceptable configurations with different ρ_c, ranging from almost homogeneous density profiles, i.e. ϰ≈ 0.8, to others with vanishing density at the configuration boundary where ϰ≈ 0.0. Assuming a simple density (<ref>), the observed radius for PSR J0740+6620 can be described by acceptable isotropic matter configurations and several anisotropic approaches. Models with the highest ϰ correspond only to anisotropic configurations within Δ_GF strategy. The same results emerge from figure <ref>. All possible radii in the range 11.6 km ≤ R ≤ 13.1 km agree with several NICER-acceptable models within the selected parameter space, fulfilling all physically acceptable conditions. The NICER acceptable region overlaps more yellow models in Δ_PG and Δ_QL plots than in other anisotropic strategies. The red arrows indicate the possible values for the central density, 0.7 × 10^15  g/cm^3 ≤ρ_c ≤ 1.3 × 10^15  g/cm^3, and the corresponding values of ϰ consistent with the assumed density profile (<ref>). § FINAL REMARKS This work introduces the most common assumptions in modelling non-pascalian fluids in general relativistic matter configurations. Local anisotropy in compact objects is a hypothesis that has gained relevance over time. So far, however, it is still not well known how unequal radial and tangential stresses may increase the stability of neutron star models. The complete description of the complex interactions in the fluid that cause such phenomena is unknown <cit.>. We explore five different heuristic methods to include anisotropy in general relativistic matter configurations. We found that the most effective approach in introducing anisotropy, with a physically meaningful density profile (<ref>) is the quasi-local Δ_QL method <cit.>; followed by the Karmarkar scheme <cit.>; and last by the complexity factor approach <cit.>. Incorporating any of the five types of anisotropy schemes considered in this study results in a significantly greater number of acceptable configurations than their isotropic counterparts within the specified range of critical parameters (C, ϰ and ρ_c). Furthermore, as shown in Figure <ref> and from equation (<ref>), we have established a “rule of thumb” that provides a simple relationship between the density at the boundary, ρ_b, and the centre, ρ_c, for relativistic matter distributions. This rule can serve as a helpful tool for identifying potentially realistic and acceptable models of compact objects. By leveraging this relationship, researchers can make informed judgments about the physical viability of different matter configurations. From equation (<ref>), we calculate the configuration radius as a function of ρ_c and ϰ. We found that the central density, 0.7×10^15  g/cm^3 ≤ ρ_c ≤ 1.3×10^15  g/cm^3, is consistent with NICER-observational data. In table <ref>, we introduce the corresponding ranges for ϰ. All the possible radii values, 11.6 km ≤ R ≤ 13.1 km, correspond to several NICER-acceptable models within the selected parameter space, fulfilling all physically acceptable conditions. Assuming a simple density profile (<ref>), the observed radius for PSR J0740+6620 can be described by acceptable isotropic matter configurations and several anisotropic approaches. Models with the highest ϰ correspond only to anisotropic configurations with Δ_GF strategy. § ACKNOWLEDGMENTS L.A.N. 
acknowledges the financial support of the Vicerrectoría de Investigación y Extensión de la Universidad Industrial de Santander and Universidad de Salamanca through the research mobility programs. L.A.N. also thanks the hospitality of the Departamento de Matemáticas Aplicadas, Universidad de Salamanca. The Vicerrectoría de Investigación y Extensión, Universidad Industrial de Santander Postdoctoral Fellowship Program No. 2023000359 supported L.M.B. J.O. acknowledges financial support from Ministerio de Ciencia, Innovación y Universidades (grant PGC2018-096038-B-100) and Junta de Castilla y León (grant SA083P17). D.S.U and L.A.N gratefully thank the internship program of the ERASMUS+ project, Latin-American alliance for Capacity buildiNG in Advance physics (LA-CoNGA physics), where this paper's first ideas and calculations began. D.S.U. thanks for the hospitality of the Departamento de Física of the Colegio de Ciencias e Ingeniería, Universidad San Francisco de Quito and especially to Dr Ernesto Contreras for the fruitful discussions. § APPENDICES § THE STRUCTURE EQUATIONS We determine the physical variables (ρ, m, P, P_⊥) and check the acceptability conditions. We compare the physical acceptability among models with the same parameters (ρ_c, α, C) having different anisotropy strategies. The more acceptable models an anisotropy generates, the more it may represent observable compact objects. Now, expressing the structure equations (<ref>) and (<ref>) in term of dimensionless quantities we have dP̃/dx = -(ρ̃ + P̃)(m̃ + 4πP̃x^3)/x(x-2m̃) + 2 (P̃_⊥ - P̃)/x and dm̃/dx = 4πρ̃x^2 , with ρ̃ = ρ̃_c(1-α̃ x^2) ⇒ m̃ = 4πρ̃_c(x^3/3 - αx^5/5) , leaving only equation (<ref>) to be integrated. We have this new set of dimensionless physical variables: m = Rm̃ , P = 1/R^2P̃ , P_⊥ = 1/R^2P̃_⊥ , ρ = 1/R^2ρ̃ , and r = R x , where R is the boundary radius of the configuration. It is convenient to transform the parameter α (=α R^2) into a quantity with greater physical meaning. Evaluating the dimensionless density in (<ref>) at the surface of the configuration, where x=1, we can define ϰ = 1 - α = ρ_b/ρ_c as the density ratio at the surface to the density at the centre. § DIMENSIONLESS EQUATIONS OF STATE FOR ANISOTROPY The change of variables proposed in (<ref>) to express equations in dimensionless form has the virtue of preserving the equations without additional constants. That is, we can directly put the tilde mark on the variables (and swap r for x in the case of the radial coordinate) to obtain the dimensionless version. Equations (<ref>) and (<ref>) are an example of what has just been stated. However, here are some simple calculations that prove it. * Anisotropy proportional to gravitational force.Carrying out the change of variables (<ref>) on the anisotropy proportional to gravitational force yields P̃_⊥ - P̃/R^2 = C_GF(ρ̃/R^2 + P̃/R^2)(Rm̃ + 4π R^3x^3P̃/R^2)/Rx - 2Rm̃ . Now, rearranging the constant R to the right-hand side, we have P̃_⊥ - P̃ = C_GFR^2(ρ̃/R^2 + P̃/R^2)R(m̃ + 4π R^2x^3P̃/R^2)/R(x - 2m̃) , from where we have that Δ̃_GF = C_GF(ρ̃ + P̃)(m̃ + 4π x^3P̃)/x - 2m̃ . * Quasi-local anisotropy. Quasi-local anisotropy is a more straightforward case since compactness is a dimensionless variable. Implementing the change of variables (<ref>) in equation (<ref>) leads us to Δ̃_QL/R^2 = 2C_QLRm̃P̃/R^2/Rx , and therefore Δ̃_QL = 2 C_QLm̃P̃/x . * Anisotropy proportional to a pressure gradient. 
In this particular case, we first choose the function f(ρ) = ρ as in <cit.>, leaving anisotropy (<ref>) as Δ_PG = -C_PGρ√(1 - 2m/r)dP/dr . Therefore, the anisotropy factor C_PG has dimensions length cubed. Thus, substituting the change of variables into the last equation gives Δ̃_PG/R^2 = -R^3C̃_3ρ̃/R^2√(1 - 2Rm̃/Rx)1/R^3dP̃/dx , and consequently, we get Δ̃_PG = -C̃_PGρ̃√(1 - 2m̃/x)dP̃/dx . * Complexity factor anisotropy We can solve the integral in the anisotropy (<ref>) since density profile is given, yielding Δ_CF = ρ_cα r^2/5 . Now, applying the change of variables, we get Δ̃_CF/R^2 = 1/5ρ̃_c/R^2α̃/R^2R^2x^2 , and therefore Δ̃_CF = ρ̃_c(1-ϰ) x^2/5 . * Karmarkar anisotropy. Given the density profile (<ref>), we can compute derivatives and integrals to obtain Δ_KC = ρ_cα r^2/5ρ(ρ - 3P/2 - ρ_cα r^2/5) . Therefore the dimensionless induced anisotropy by the Karmarkar condition is given by Δ̃_KC = ρ̃_c(1-ϰ) x^2/5ρ̃(ρ̃ - 3P̃/2 - ρ̃_c(1-ϰ) x^2/5) . § NUMERICAL INTEGRATION Equation (<ref>) was numerically integrated with Python, implementing the RK45 method through the solve_ivp function. The solution was started at the surface of the model, with initial values x_b = 1 and P_b = 0, and proceeded with an adaptive step towards the centre, with final values x_c = 10^-15 and P(x_c) = P_c. Since x takes values between 10^-15 and 1 we can identify R as the total radius. 10 AbbottEtalLIGOVIRGCol2019 B. P. Abbott, R. Abbott, T. D. Abbott, et al, LIGO Scientific Collaboration, and Virgo Collaboration. Properties of the binary neutron star merger gw170817. Phys. Rev. X, 9:011001, Jan 2019. GendreauEtal2022 K. Gendreau, Z. Arzoumanian, E. Ferrara, and C.B. Markwardt. NICER: The Neutron Star Interior Composition Explorer, pages 1–21. Springer Nature Singapore, Singapore, 2022. DelgatyLake1998 M. S. R. Delgaty and K. Lake. Physical acceptability of isolated, static, spherically symmetric, perfect fluid solutions of Einstein's equations. Comput. Phys. Commun., 115:395, 1998. Ivanov2017 B. V. Ivanov. Analytical study of anisotropic compact star models. Eur. Phys. J. C, 77(11):738, 2017. Ivanov2018 B. V. Ivanov. A conformally flat realistic anisotropic model for a compact star. The European Physical Journal C, 78(4):332, 2018. HernandezNunezVasquez2018 H. Hernández, L. A. Núñez, and A. Vásquez-Ramírez. Convection and cracking stability of spheres in general relativity. Eur. Phys. J. C, 78(11):883, 2018. HernandezSuarezurangoNunez2021 H. Hernández, D. Suárez-Urango, and L.A. Núñez. Acceptability conditions and relativistic barotropic equation of state. Eur. Phys. J. C, 81(241), 2021. SuarezurangoEtal2022 D. Suárez-Urango, J. Ospino, H. Hernández, and L.A. Núñez. Acceptability conditions and relativistic anisotropic generalized polytropes. The European Physical Journal C, 82(2):1–22, 2022. Jeans1922 J. H. Jeans. The motions of stars in a Kapteyn universe. Mon. Not. R. Astron. Soc., 82:122–132, 1922. Lemaitre1933 G. Lemaıtre. L'univers en expansion. Ann. Soc. Sci.(Bruxelles) A, 53:51–85, 1933. Ruderman1972 M. Ruderman. Pulsars: Structure and dynamics. Annual Review of Astronomy and Astrophysics, 10:427–476, 1972. BowersLiang1974 R. L. Bowers and E. P. T. Liang. Anisotropic spheres in general relativity. Astrophys. J., 188:657–665, 1974. CosenzaEtal1981 M. Cosenza, L. Herrera, M. Esculpi, and L. Witten. Some models of anisotropic spheres in general relativity. Journal of Mathematical Physics, 22:118, 1981. HerreraNunez1989 L. Herrera and L. Núñez. 
Modeling “hydrodynamic phase transitions” in a radiating spherically symmetric distribution of matter. Astrophys. J., 339:339–353, April 1989. HerreraSantos1997 L. Herrera and N. O. Santos. Local anisotropy in self-gravitating systems. Phys. Rep., 286(2):53–130, 1997. MartinezRojasCuesta2003 A. P. Martínez, H. P. Rojas, and H. M. Cuesta. Magnetic collapse of a neutron gas: Can magnetars indeed be formed? Eur. Phys. J. C, 29(1):111–123, 2003. HerreraBarreto2004 L. Herrera and W. Barreto. Evolution of relativistic polytropes in the post-quasi-static regime. Gen. Relativ. Gravitation, 36(1):127–150, 2004. HerreraEtal2014 L. Herrera et al. Dissipative collapse of axially symmetric, general relativistic sources: a general framework and some applications. Phys. Rev. D, 89(8):084034, 2014. Setiawan2019 A. M. Setiawan and A. Sulaksono. Anisotropic neutron stars and perfect fluid's energy conditions. Eur. Phys. J. C, 79(9):755, 2019. RahmansyahEtal2020 A. Rahmansyah, A. Sulaksono, A.B. Wahidin, and A.M. Setiawan. Anisotropic neutron stars with hyperons: implication of the recent nuclear matter data and observations of neutron stars. The European Physical Journal C, 80(8):769, 2020. RahmansyahSulaksono2021 A. Rahmansyah and A. Sulaksono. Recent multimessenger constraints and the anisotropic neutron star. Physical Review C, 104(6):065805, 2021. Das2022 H. C. Das. I -Love -C relation for an anisotropic neutron star. Physical Review D, 106(10):103518, November 2022. KumarBharti2022 J. Kumar and P. Bharti. Relativistic models for anisotropic compact stars: A review. New Astronomy Reviews, page 101662, 2022. RayEtal2023 S. Ray, S. Das, K.K. Ghosh, B.K. Parida, S.K. Pal, and M. Indra. Study of anisotropic compact stars by exploring tidal deformability. New Astronomy, 104:102069, 2023. Herrera2020 L. Herrera. Stability of the isotropic pressure condition. Physical Review D, 101(10):104024, 2020. Sulaksono2015 A. Sulaksono. Anisotropic pressure and hyperons in neutron stars. International Journal of Modern Physics E, 24(01):1550007, 2015. SetiawanSulaksono2017 A.M. Setiawan and A. Sulaksono. Cracking on anisotropic neutron stars. In AIP Conference Proceedings, volume 1862, page 030001. AIP Publishing LLC, 2017. BiswasBose2019 B. Biswas and S. Bose. Tidal deformability of an anisotropic compact star: Implications of gw170817. Physical Review D, 99(10):104002, 2019. DonevaYazadjiev2012 D. D. Doneva and S. S. Yazadjiev. Nonradial oscillations of anisotropic neutron stars in the cowling approximation. Phys. Rev. D, 85(12):124023, 2012. RaposoEtal2019 G. Raposo, P. Pani, M. Bezares, C. Palenzuela, and V. Cardoso. Anisotropic stars as ultracompact objects in general relativity. Phys.Rev.D, 99:104072, 2019. Herrera2018 L. Herrera. New definition of complexity for self-gravitating fluid distributions: The spherically symmetric, static case. Physical Review D, 97(4):044010, 2018. Karmarkar1948 K. R. Karmarkar. Gravitational metrics of spherical symmetry and class one. Proceedings of the Indian Academy of Sciences-Section A, 27(1):56, 1948. OspinoNunez2020 J. Ospino and L. A. Núñez. Karmarkar scalar condition. The European Physical Journal C, page 166, January 2020. Ovalle2017 J. Ovalle. Decoupling gravitational sources in general relativity: from perfect to anisotropic fluids. Physical Review D, 95(10):104019, 2017. MisnerSharp1964 C. W. Misner and D. H. Sharp. Relativistic Equations for Adiabatic, Spherically Symmetric Gravitational Collapse. Physical Review, 136:571–576, October 1964. Florides1974 P. S. Florides. 
A new interior schwarzschild solution. Proceeding of the Royal Society of London, A337:529 – 535, 1974. HerreraEtal2001 L. Herrera, A. Di Prisco, J. Ospino, and E. Fuenmayor. Conformally flat anisotropic spheres in general relativity. J. Math. Phys., 42:2129–2143, 2001. Buchdahl1959 H. A. Buchdahl. General relativistic fluid spheres. Phys. Rev., 116(4):1027–1034, 1959. Ivanov2002B B. V. Ivanov. Maximum bounds on the surface redshift of anisotropic stars. Phys. Rev. D, 65(10):104011, 2002. AbreuHernandezNunez2007b H. Abreu, H. Hernández, and L. A. Núñez. Sound speeds, cracking and stability of self-gravitating anisotropic compact objects. Classical Quantum Gravity, 24(18):4631–4646, 2007. KolassisSantosTsoubelis1998 C. A. Kolassis, N. O. Santos, and D. Tsoubelis. Energy conditions for an imperfect fluid. Classical Quantum Gravity, 5(10):1329–1338, 1988. PimentelLoraGonzalez2017 O. M. Pimentel, F. D. Lora-Clavijo, and G. A. González. Ideal magnetohydrodynamics with radiative terms: energy conditions. Classical Quantum Gravity, 34(7):075008, 2017. PodkowkaMendesPoisson2018 D.M. Podkowka, R.F.P. Mendes, and E. Poisson. Trace of the energy-momentum tensor and macroscopic properties of neutron stars. Physical Review D, 98(6):064057, 2018. HeintzmannHillebrandt1975 H. Heintzmann and W. Hillebrandt. Neutron stars with an anisotropic equation of state: Mass, redshift and stability. Astron. Astrophys., 38:51–55, 1975. ChanHerreraSantos1993 R. Chan, L. Herrera, and N. O. Santos. Dynamical instability for radiating anisotropic collapse. Mon. Not. R. Astron. Soc., 265(3):533–544, 1993. ChanHerreraSantos1994 R. Chan, L. Herrera, and N.O. Santos. Dynamical instability for shearing viscous collapse. Mon. Not. R. Astron. Soc., 267(3):637–646, 1994. HarrisonThorneWakano1965 B. K. Harrison et al. Gravitation theory and gravitational collapse. University of Chicago Press, Chicago, 1965. ZeldovichNovikov1971 Y. B. Zeldovich and I. D. Novikov. Relativistic astrophysics. Vol.1: Stars and relativity. University of Chicago Press, Chicago, 1971. GonzalezNavarroNunez2015 G. A. González, A. Navarro, and L. A. Núñez. Cracking of anisotropic spheres in general relativity revisited. J. Phys. Conf. Ser., 600(1):012014, 2015. GonzalezNavarroNunez2017 G. A. González, A. Navarro, and L. A. Núñez. Cracking isotropic and anisotropic relativistic spheres. Can. J. Phys., 95(11):1089–1095, 2017. HernandezNunez2004 H. Hernández and L. A. Núñez. Nonlocal equation of state in anisotropic static fluid spheres in general relativity. Can. J. Phys., 82(1):29–51, 2004. herrera2021 L. Herrera. Complexity of self-gravitating systems. Entropy, 23(7):802, 2021. herrera2023 L. Herrera. Complexity and simplicity of self-gravitating fluids. arXiv preprint arXiv:2304.05870, 2023. OspinoHernandezNunez2017 J. Ospino, J. L. Hernández-Pastora, and L. A. Núñez. An equivalent system of Einstein Equations. In Journal of Physics Conference Series, volume 831 of Journal of Physics Conference Series, page 012011, March 2017. Tolman1939 R. C. Tolman. Static Solutions of Einstein's Field Equations for Spheres of Fluid. Physical Review, 55(4):364–373, 1939. RaghoonundunHobill2015 A. M. Raghoonundun and D. W. Hobill. Possible physical realizations of the Tolman VII solution. Physical Review D, 92(12):124005, December 2015. GokhrooMehra1994 M. K. Gokhroo and A. L. Mehra. Anisotropic spheres with variable energy density in general relativity. Gen. Rel. Grav., 26(1):75 – 84, 1994. Stewart1982 B. W. Stewart. 
Conformally flat, anisotropic spheres in general relativity. J. Phys. A: Math. Gen., 15(8):2419–2427, 1982. Martinez1996 J. Martínez. Transport processes in the gravitational collapse of an anisotropic fluid. Phys. Rev. D, 53:6921 – 6940, 1996. HohlerEtal1973 G. Höhler, A. Fujimori, J. Kühn, T. Müller, F. Steiner, W.C. Stwalley, J.E. Trümper, P. Wölfle, U. Woggon, and G. Börner. On the properties of matter in neutron stars. Springer, 1973. HernandezNunezPercoco1999 H. Hernández, L. A. Núñez, and U. Percoco. Non-local equation of state in general relativistic radiating spheres. Class. Quantum Grav, 16(3):871 – 896, 1999. HerreraMartinez1998A L. Herrera and J. Martínez. Dissipative fluids out of hydrostatic equilibrium. Class.Quantum Grav., 15:407–420, February 1998. HerreraMartinez1998B L. Herrera and J. Martínez. Gravitational collapse: a case for thermal relaxation. General Relativity and Gravitation, 30(3):445–471, 1998. MillerEtal2021 M. C. Miller, F. K. Lamb, A. J. Dittmann, S. Bogdanov, Z. Arzoumanian, K. C. Gendreau, S. Guillot, W. C. G. Ho, J. M. Lattimer, M. Loewenstein, S. M. Morsink, P. S. Ray, M. T. Wolff, C. L. Baker, T. Cazeau, S. Manthripragada, C. B. Markwardt, T. Okajima, S. Pollard, I. Cognard, H. T. Cromartie, E. Fonseca, L. Guillemot, M. Kerr, A. Parthasarathy, T. T. Pennucci, S. Ransom, and I. Stairs. The radius of PSR j0740+6620 from NICER and XMM-newton data. The Astrophysical Journal Letters, 918(2):L28, September 2021. RileyEtal2021 T. E. Riley, A.L. Watts, P.S. Ray, S. Bogdanov, S. Guillot, S.M. Morsink, A.V. Bilous, Z. Arzoumanian, D. Choudhury, J.S. Deneva, K.C. Gendreau, A.K. Harding, W.C.G. Ho, J.M. Lattimer, M. Loewenstein, R.M. Ludlam, C.B. Markwardt, T. Okajima, C. Prescod-Weinstein, R.A. Remillard, M.T. Wolff, E. Fonseca, H.T. Cromartie, M. Kerr, T.T. Pennucci, A. Parthasarathy, S. Ransom, I. Stairs, L. Guillemot, and I. Cognard. A NICER View of the Massive Pulsar PSR J0740+6620 Informed by Radio Timing and XMM-Newton Spectroscopy. The Astrophysical Journal Letters, 918(2):L27, September 2021.
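As a concrete companion to the integration scheme described in the Numerical Integration appendix, the steps can be sketched in a few lines of Python using solve_ivp. The quasi-local ansatz Δ̃_QL = 2 C_QL m̃ P̃ / x is used for definiteness, and the parameter values (ρ̃_c, ϰ, C_QL) are purely illustrative rather than taken from the models discussed in this work; the sketch shows the method, not the reported results.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative dimensionless parameters (not taken from the models in this work).
rho_c = 0.068      # central density, rho_c * R^2
kappa = 0.5        # surface-to-centre density ratio, kappa = 1 - alpha
C_QL  = 0.4        # quasi-local anisotropy factor

def density(x):
    """Tolman VII-like profile: rho = rho_c * (1 - (1 - kappa) * x**2)."""
    return rho_c * (1.0 - (1.0 - kappa) * x**2)

def mass(x):
    """m(x) = 4*pi*rho_c*(x^3/3 - (1 - kappa)*x^5/5), from dm/dx = 4*pi*rho*x^2."""
    return 4.0 * np.pi * rho_c * (x**3 / 3.0 - (1.0 - kappa) * x**5 / 5.0)

def dP_dx(x, P):
    """Hydrostatic equilibrium with quasi-local anisotropy Delta = 2*C_QL*m*P/x."""
    rho, m = density(x), mass(x)
    delta = 2.0 * C_QL * m * P[0] / x
    grad = -(rho + P[0]) * (m + 4.0 * np.pi * P[0] * x**3) / (x * (x - 2.0 * m))
    return [grad + 2.0 * delta / x]

# Integrate from the surface (x_b = 1, P_b = 0) towards the centre (x_c = 1e-15).
sol = solve_ivp(dP_dx, t_span=(1.0, 1e-15), y0=[0.0], method="RK45",
                rtol=1e-10, atol=1e-12)

print(f"central pressure P_c ~ {sol.y[0, -1]:.6e}")
print(f"dimensionless total mass m(1) = {mass(1.0):.4f}")

The acceptability conditions (energy conditions, causality, cracking, and so on) would then be checked on the resulting profiles of ρ̃, m̃, P̃ and P̃_⊥ in a separate post-processing step.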
http://arxiv.org/abs/2307.04016v1
20230708171122
Cellular LTE and Solar Energy Harvesting for Long-Term, Reliable Urban Sensor Networks: Challenges and Opportunities
[ "Alex Cabral", "Vaishnavi Ranganathan", "Jim Waldo" ]
cs.NI
[ "cs.NI" ]
Explicit a posteriori error representation for variational problems and application to TV-minimization [ August 12, 2023 ======================================================================================================== empty § INTRODUCTION As the global urban population continues to grow, cities are increasingly interested in monitoring urban processes such as vehicular traffic, and public health and environmental harms including air pollution and noise, to help cities grow in a healthy and sustainable fashion <cit.>. The lowering cost of sensing infrastructure and recent digital twin capabilities have encouraged city officials, researchers, and urban residents to use large-scale, low-cost sensor networks to monitor hyperlocal phenomena, inform policy and planning decisions, and collect data to help transition to being considered smart cities <cit.>. We identify that, to be successful, a smart city network must be: * reliable: the network should continue to operate and transmit data over long periods of time and across the city to ensure equitable node distribution <cit.> * scalable: it should be easy to add/replace nodes within the network at any new location in the city <cit.> * easy to maintain: nodes should be outfitted with hardware and firmware that minimize the need for in-person maintenance <cit.> * real-time: data must be transmitted as quickly as possible, particularly for applications such as emergency services <cit.>, and the network must be monitored in real-time for maintenance <cit.> * low-cost: by using existing infrastructure and services, the network can avoid added costs in installation and maintenance <cit.> We determine that two key features of an urban sensor network's design can help to make the network fit within the aforementioned criteria. The first is connectivity, which is essential for data transmission, real-time node monitoring, and software updates. The second is power, which provides for reliable operation and data collection. The decisions that cities and network designers make in these two areas have a direct and significant impact on the criteria for a successful smart city network. For example, an urban sensor network that uses a low-power wide-area network (LPWAN) for connectivity may not satisfy the criteria of low cost because the backhaul infrastructure required, although low in per-unit cost, quickly becomes expensive when considering the number of cells required for a large, dense sensor network <cit.>. Similarly, a smart city network that relies on wired power may not be scalable, as nodes will be limited to locations that already have wired mains <cit.> and will involve additional installation and maintenance cost. Based on a review of prior urban sensor network deployments and our experience working on a large-scale sensor network, we establish that LTE networks and solar panels are the appropriate connectivity and power choice for most urban sensor networks given the available options and necessary criteria. Although LTE performance for mobile communication in urban areas is well-researched <cit.>, the performance of IoT-specific networks when implemented in a city-scale long-term sensor network deployment is yet to be characterized. Solar power in urban sensor networks has also been evaluated on a small scale <cit.>, but not in a large-scale long-term deployment. Moreover, there are no established guidelines that can ensure reliable performance for future deployments of such large-scale LTE-connected, solar-powered sensor networks. 
Finally, researchers have not looked into the overlap between technical issues that arise in LTE connectivity or solar power and the socioeconomic factors that make up many “sensor deserts" <cit.>, or areas that lack nodes in cities with sensor networks. In this work we describe the design and analyze the connectivity and power performance of a stationary 118-node LTE-M connected, solar-powered sensor network deployed for one year in Chicago, Illinois. We find that 11 of the 118 original node locations could not support LTE connectivity, despite all FCC and network provider connectivity maps indicating otherwise. A small number of cell towers and node locations are disproportionately affected by significantly delayed readings, and 44 of the 118 nodes experienced issues charging in the winter months. Furthermore, we discover that connectivity and power related issues are not equitably spread around the city, but rather are more prominent in areas that are classified as socioeconomically disadvantaged and have a larger racial minority population. Our primary contribution is an in-depth analysis of a long-term real-world deployment assessing the feasibility and reliability of a large-scale LTE-connected and solar-powered urban sensor network. Additional contributions include: 1) highlighting the overlap between technical challenges in urban sensor networks and socioeconomic inequality, 2) revealing the inherent challenges in relying upon open data sources that are commonly used to predict connectivity and power availability for urban sensor network deployments, and 3) identifying strengths and weaknesses to define future research directions in energy harvesting systems and equitable network infrastructure deployments to ensure the just future of smart city networks. This paper is structured as follows: Section 2 offers an overview of Related Works; Section 3 highlights why the city of Chicago is a useful case study for urban sensor networks; Section 4 highlights the design of the sensor network and datasets used; Section 5 discusses the connectivity of the sensor network, including the hardware, network carrier information, and insights from the year-long deployment; Section 6 details the powering of the sensor network, including the hardware, energy management techniques, and insights from the deployment; Section 7 provides a discussion, focusing on the implications of the challenges we discovered and the limitations of our study. § RELATED WORKS In this section, we first review former and existing sensor network deployments to identify necessary criteria, prior evaluations, and known issues around inequality. We then examine LTE connectivity and solar power in urban areas, as these are the technologies we use for our sensor network. §.§ Criteria for Urban Sensor Networks By examining prior urban sensor network deployments, we have identified five criteria necessary for success—reliability, scalability, ease of maintenance, real-time communication, and low cost. The shortcomings of prior sensor networks has often been caused by a lack of reliability, either in terms of not functioning over time, as with malfunctioning hardware <cit.>, or not communicating data reliably over space and time <cit.>. Many prior networks have also raised the issue of scalability, which is especially prevalent when relying on electrical cables and wired power, which may be available at street lamps or traffic signals, but ultimately limits the node placement locations <cit.>. 
Similar initiatives have shown that reliance on these specific locations can additionally make installation and maintenance more difficult, which then increases the cost of operation <cit.>. The issue of maintenance is particularly important in urban settings, where the cost of accessing a node can be very high <cit.>. Conversely, we find that some deployments are more successful because they achieve low-cost via the use of existing infrastructure. For example, officials in New York City chose to use an existing public safety wireless network for a new traffic control system <cit.> and Chicago's Array of Things relied on cellular networks <cit.>, decisions that helped ease installation and thus save costs. §.§ Evaluations of Urban Sensor Network Deployments The evaluations of real-world sensor network deployments in urban settings have often been small-scale and short-term. A small number of researchers have shared the lessons and challenges learned from urban sensor network deployments, but many of these are focused on specific data such as noise <cit.> and water quality <cit.>. Furthermore, many of these studies rely on the power grid for high computation tasks <cit.>, or use technologies such as Wi-Fi or Zigbee for data transfer <cit.>. The works that evaluate LTE-connected or solar-powered urban sensor networks are small scale and short duration studies that do not offer extended insights on reliability <cit.>. §.§ Inequality of Sensor Networks As smart city networks are increasingly explored and deployed, sociology and urban planning researchers have begun to evaluate the potential social implications of urban sensor networks. For example, one group of researchers evaluated prior urban sensor network deployments and identified areas deemed “sensor deserts", which are those that lack nearby sensors based on a straight line distance <cit.>. As the researchers state, sensor deserts not only add to existing forms of inequality, but the placement of sensor nodes can also affect resident perception of the distribution of resources and harms throughout a city <cit.>, creating potential political or social strife if nodes are not visible in certain areas. Similarly, others have noted the potential for smart city technologies to “further deepen the splintering of urban networks, creating deep divides between those with access to 'smart' and those without" and raising questions about the “politics of urban exclusion" <cit.>. Thus, there is an increasing push for equity as a consideration in practical sensor network deployment <cit.>. §.§ LTE Connectivity in Urban Areas Extensive research around mobile connectivity has revealed a variety of factors known to affect RSS and limit propagation distance for LTE signals. These include physical features such as high-rise buildings <cit.>, the distance between the cell tower and receiver <cit.>; meteorological conditions such as precipitation <cit.>, humidity <cit.>, strong winds <cit.>, temperature <cit.> and sudden weather changes <cit.>; and environmental measures such as high particulate matter concentrations <cit.>. Another major factor that affects signal strength is inter-cell interference (ICI) <cit.>, which occurs when a node moves to the edge of one cell tower's range while moving closer to another cell tower. We include all these factors in our analysis of connectivity issues in section 5. 
§.§ Solar Charging in Urban Areas Due to the vast quantity of previously deployed solar powered sensor networks and the numerous papers published about these networks, it seems guaranteed that solar power is reliable for most sensor network deployments. However, there have been very few studies looking into the long-term reliability of solar power in urban settings. Dehwah et al. <cit.> evaluate the performance of a traffic monitoring sensor network in a desert city, and describe the effect of dust storms and building shadows on solar charging. However, they do not do a deep analysis into the locations that were most affected by shadows to determine how the issue may be prevented in future deployments and the potential social implications. To our knowledge, this work presents the first in-depth analysis of a large-scale, long-term cellular, solar-powered urban sensor network towards understanding the broader impact of the technical challenges for urban communities. § CHICAGO AS A CASE STUDY §.§ Building Height According to the Council on Tall Buildings and Urban Habitat <cit.>, amongst cities around the world, Chicago has the 10th most buildings 150 meters and higher, 11th most buildings 200 meters and higher, and 5th most buildings 300 meters and higher. However, its place on those lists is expected to fall within the coming years—Chicago has only three buildings 150 meters and higher under construction and twelve proposed for construction. By comparison, Wuhan, Shenyang, and Bangkok—cities just below Chicago on the list of most 150+ meter buildings—have 49, 14, and 17, buildings under construction respectively, and dozens more proposed in both Wuhan and Shenyang. In addition, development in cities such as Mumbai, Nanning, and Nanjing, which all have several 150+ meter buildings under and proposed for construction will propel them past Chicago in the list in the coming decades. This puts Chicago currently in a unique position for evaluating the impact of built environment towards planning global urban sensor networks. §.§ Latitude and Sunlight Hours Chicago has a latitude of 41.88 degrees, where the sun is visible for 15 hours, 15 minutes during the summer solstice and 9 hours, 6 minutes during the winter solstice. According to data from the World Economic Forum <cit.>, the top five most populous latitudes are between the 22nd and 27th parallel north, which are all much closer to the equator and thus have more sunlight on the winter solstice, with an average of 10 hours 35 minutes. Nevertheless, a number of highly populated cities reside at or above the 42nd parallel north, including London, Moscow, Harbin, and Toronto, as well as much of Western Europe. Cities such as New York and Beijing are also located at nearly the same latitude, receiving 9 hours 13 minutes sunlight on the winter solstice. Furthermore, as the effects of climate change disproportionately affect populations who live closer to the equator, mass migration away from the equator is expected <cit.>. Thus, understanding the performance of solar-powered sensor networks at northern latitudes is essential for future urban environmental sensing. §.§ Segregation and Inequality Based on 2020 United States Census Data, Chicago is the fourth most racially segregated large city (population at least 200,000) in the United States <cit.>. Fig. <ref>a highlights Chicago's racial segregation, showing where the white and non-white—primarily Black and Latine—populations live relative to each other. 
There is limited data comparing racial segregation in global cities, likely because many countries are more racially homogeneous than the United States. However, segregation based on income or social status exists in many global cities, with the highest levels of inequality and segregation often found in cities of lower income countries <cit.>. According to Gini Index data from the 2019 American Community Survey <cit.>, Chicago has the 10th greatest income inequality amongst US cities, with a Gini index of 0.53 (where a 0 indicates perfect equality and 1 indicates perfect inequality). Compared to cities such as London and Johannesburg, which have the highest global Gini index values—both over 0.7—Chicago has a relatively medium-high level of income inequality <cit.>. As seen in Fig. <ref>b, the areas of Chicago that are considered most socioeconomically disadvantaged based on factors such as unemployment and poverty level also overlap with many of the areas that have a majority Black or Latine population. Thus, we believe that Chicago provides a useful case study by which to examine the potential social and equity implications that sensing technologies can introduce in cities around the globe. § SENSOR NETWORK AND DATA §.§ Sensor Network Design The sensor network, described in further detail in [blinded] and shown in Fig. <ref>, was designed and deployed to collect air pollution data across Chicago. The network comprised of 118 unique sensor node locations, with 20 nodes allocated to local environmental justice groups for placement according to their priorities, 12 nodes at four EPA stations (3 nodes at each station) for collocation to perform calibration, and the rest placed based on locations chosen through stratified random sampling, as described in NYCCAS <cit.>, with a small subset chosen by partner organizations. All devices that were not at EPA stations were installed at bus shelters throughout the city, as shown in Fig <ref>. These nodes were placed at the same height, about 2.5 meters above ground. Nodes at EPA stations were located on the rooftops near the EPA monitors, several meters above ground and at different heights based on the height of the building or structure housing the EPA monitor. Most of the devices were installed at their respective locations in July and August 2021, with 98 nodes (over 83%) placed by July 3rd, 2021. §.§ Datasets The node-related data for each reading, including the time, received signal strength (RSS), battery level, internal node temperature, and air pollutant readings were all logged with each reading and stored in an cloud server. We calculated the latency by comparing the time of the sensor reading to the time of the data's insertion into the server. Cell tower information, such as the cell tower ID, were collected when making a connection with the tower. We used OpenCellID <cit.> to link the cell tower information with locations, OSM (Open Street Maps) Buildings <cit.> to gather data about buildings surrounding the nodes, FCC Broadband <cit.> and nPerf <cit.> data to examine AT&T connectivity, Meteostat <cit.> to collect external weather data, and the Shadow Accrual Maps tool <cit.> to calculate the amount of shadow hours at each node location. Socioeconomic data were pulled from the City of Chicago Open Data Portal <cit.>. 
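For reference, the latency and tower-location fields described in this subsection can be assembled with a short pandas routine. The sketch below is only illustrative: the column names (reading_time, inserted_at, cell_id, and so on) are assumptions, since the exact schema of the cloud-server export is not given here.

import pandas as pd

# Hypothetical column names; the actual schema of the cloud export is not specified.
readings = pd.read_csv("readings.csv", parse_dates=["reading_time", "inserted_at"])
towers = pd.read_csv("opencellid_chicago.csv")   # tower records exported from OpenCellID

# Latency = time between the sensor taking a reading and the record reaching the server.
readings["latency_s"] = (readings["inserted_at"]
                         - readings["reading_time"]).dt.total_seconds()

# Attach tower coordinates; readings whose tower is absent from OpenCellID keep NaN
# coordinates, so they can still be used in analyses that do not need tower location.
readings = readings.merge(towers[["cell_id", "lat", "lon"]], on="cell_id", how="left")

print(readings["latency_s"].describe())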
§.§ Data Cleaning We removed readings that had no connectivity data (N = 9,393, 0.2% of readings), readings where the signal was equal to zero (N = 11,626, 0.12%), readings where the tower location was clearly outside of Chicago, possibly due to sensors being shipped back and forth when there were issues (N = 11,778, 0.12%), and readings with a delay of more than 24 hours (N = 54,900, 0.63%), as this was likely indicative of a device issue, rather than connectivity or charging issue. We also identified 565,371 readings (12.7%) where the cell tower could not be located in the OpenCellID database; we kept these readings in for all analyses except ones involving distance and general direction of the cell tower. § CONNECTIVITY §.§ Motivation for an LTE-Connected Urban Sensor Network Despite recent advances in WiFi and low-power wide-area networks (LPWAN), such as LoRaWAN <cit.>, most urban sensor networks will rely on cellular networks in the coming years for the following reasons: 1) Dependence on existing urban cellular networks ensures city-wide coverage without additional infrastructure. 2) Widespread global availability and flexible data plans with each generation. 3) Lower cost and ease of setup and scaling—for technologies such as LoRaWAN, scalability is a particularly pressing issue due to the cross-technology interferences that will arise from other technologies <cit.> and potential packet collisions with large sensor networks <cit.>. In addition, LPWAN require dedicated infrastructure that have a low per-unit cost, but quickly add up in costs based on the cells required to support high node density <cit.>. Thus, to support the necessary criteria of reliability, real-time, and low cost, we use an LTE network for communication. LTE networks propose great coverage in most cities around the globe <cit.>, providing means for scaling reliably. Because the cellular infrastructure is already built and evolving, networks are easy to set up and remain low-cost, especially with the variety of LTE plans available. Finally, with the fast evolving generations of cellular communication, such networks are increasingly seen as dedicated low latency connectivity for massive IoT deployments in growing cities <cit.>. §.§ Materials: Antenna and LTE Carrier The sensing nodes connected via AT&T's 4G IoT LTE-M One network, which uses LTE Bands 2, 4, and 12, and operates at frequencies of 700, 1700, and 1900 MHz. Each node used a SIM card and Ignion NN03-310 antenna <cit.>, which transmits data over 3G and 4G, is tuned for channels 2, 3, 4, 5, 9, 12, 20, and 28, and operates on frequencies from 698-960 MHz and 1710-2690 MHz. The antenna was placed at the top right of the printed circuit board (PCB) [After conversations with the antenna manufacturer and a small series of tests, it was determined that antenna placement on a PCB can have a significant effect on the RSS values. It is imperative for sensing node designers to consult with antenna manufacturers to ensure correct antenna placement on custom PCB for the best connectivity.], as shown in Fig <ref>. §.§ Methods: Node Connectivity and Data Transmission The sensing node preserved battery life by periodically waking up to record a sample and transmit data to the cloud, as further described in Section <ref>. For this deployment, the nodes were set to transmit data every five minutes from the last recorded sample time. 
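Before turning to the transmission pipeline, it is worth noting that the rules listed in the Data Cleaning subsection translate directly into a handful of dataframe filters. The sketch below continues the illustrative schema used earlier; the column names and the pre-computed tower_in_chicago flag are assumptions, not the actual export format.

import pandas as pd

# Continues the illustrative schema from the previous sketch; names are assumptions.
readings = pd.read_csv("readings_with_towers.csv",
                       parse_dates=["reading_time", "inserted_at"])
readings["latency_s"] = (readings["inserted_at"]
                         - readings["reading_time"]).dt.total_seconds()
n0 = len(readings)

readings = readings.dropna(subset=["rss"])               # readings with no connectivity data
readings = readings[readings["rss"] != 0]                 # zero signal strength
readings = readings[~readings["tower_in_chicago"].eq(False)]  # tower clearly outside Chicago
readings = readings[readings["latency_s"] < 24 * 3600]    # delayed by more than 24 hours

print(f"kept {len(readings)} of {n0} readings")
# Readings whose tower cannot be located in OpenCellID are kept here and excluded
# only from distance- and direction-based analyses.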
The data transmission process included the following series of steps: 1) The microprocessor woke up and kicked off two processes on separate threads, 2a) One thread sampled the sensor with the longest latency, typically about 8 seconds, 2b) A separate thread simultaneously initiated connection to the cloud, 3) Another array of low latency sensors were sampled, 4) The data were then packaged and transmitted to the IoT endpoint going through the cell tower, AT&T network routers etc. §.§ Methods: Retry Logic If a node could not connect to the cloud, it stored the reading locally, went back to sleep for five minutes, and tried to connect again. After 10 retries, if the node still could not connect, then the node was set to reboot itself. After a reboot, the node would immediately try to make a connection to the cloud and would not record local readings until it did because the node lacked a real time clock. Once the node could connect again, it transmitted all locally stored data and errors that were logged in the absence of connectivity. §.§ Results: Readings and Cell Towers For the one-year period and 118 nodes in our network, our dataset included 8,684,756 readings. We linked the readings to 417 unique cell tower locations, 65 with only 1 associated reading, 179 with 500 (0.0057%) or more readings, and 165 with 1000 (0.011%) or more readings. §.§ Results: “Dead Zones" Over the course of our deployment, we identified 11 locations (9.32%) at which the sensor nodes reported consistently low RSS values and ultimately failed to connect, generally within a few days of installation. These 11 locations include 10 from the main deployment beginning in July 2021 and one node location from an earlier pilot program in April 2021. 3 of the 11 locations were selected for deployment by local community groups, a significant percentage more than in the overall deployment. Initial mitigation strategies involved moving the nodes to the closest bus shelter, which was often directly across the street. However, we discovered that the nodes had to be moved even further—sometimes multiple blocks away—to establish a connection. We examined a number of factors to determine the potential cause of these “dead zones", including the distance between the node and cellular tower, the number of towers close to a node, evidence of inter-cell interference (ICI) <cit.>, and nearby physical urban structures, including the distance and height of the closest building to the node, and the number, tallest height, mean and median building height within 100, 250, and 500 meters of each node. We found no evidence to suggest that any of these features had an effect on a node's ability to connect, when comparing all “dead zones" to all other node locations. When comparing “dead zone" locations to the new locations each of those nodes was moved to, we found a statistically significant difference in the height of the tallest building within 100 meters of the node after relocation versus before, as shown in Fig. <ref>. This indicates that land use and urban form close to the location of stationary sensors are likely factors impacting connectivity, fitting in line with observation from prior work <cit.>. In addition, we investigated the role of line-of-sight as a primary factor contributing to “dead zones". We examined the relation between the sensor node, cellular tower, and tallest nearby building for the two nodes found to connect to the same primary cellular tower at their original (“dead zone") and new location. 
We found that one of these node configurations exhibited line-of-sight interference, as shown in Fig. <ref>, as the tallest building (11.9 meters) was clearly in the path between the cellular tower and sensing node. Due to the limited number of examples to examine, there is a need for further investigation in larger datasets, however, this evidence supports the key role of line-of-sight impediments in contributing to “dead zones". Finally, we examine the socioeconomic factors around the node locations without connectivity. We do not find a significant difference in the socioeconomic factors when comparing node locations that can and cannot connect, likely because there are a large number of nodes around the city. However, we do note that many of the dead zone locations are in socioeconomically disadvantaged and majority Black and Latine neighborhoods, as shown in Fig. <ref>a. §.§ Results: Signal Strength As shown in Fig. <ref>, the yearly median signal strength for each node ranged from -61 dBm to -113 dBm, with a network-wide median of -87 dBm. There was no significant difference in the median signal strength for community-selected versus randomly-selected nodes and we did not identify a statistical relationship between surrounding physical features, such as building height or distance to buildings, and the median signal strength for the sensor node or corresponding cell tower location. As with “dead zones", we found that the node locations with the lowest median signal strength—those less than 100 dBm—were nearly all sited in neighborhoods that are socioeconomically disadvantaged and have a higher percentage racial minority population. In fact, only one of the eight locations with a low median signal strength was sited in a majority white neighborhood, as shown in Fig. <ref>b. §.§ Results: Latency We found that over the entire year's worth of data, the minimum latency was 2 seconds, the median latency was 5 seconds, and the interquartile range fell between 4 and 6 seconds (our data allowed only for estimating seconds, and not milliseconds for latency). When examining the median latency for each sensor node over the course of the study, we found a much tighter distribution then we saw for median signal strength. In fact, the interquartile range all falls at the exact same value of 5 seconds. There are only three sensor locations with a median latency greater than that value, shown in Fig. <ref>c, and two of those locations overlap with those that have poor median signal strength, suggesting a correlation between signal strength and latency. We find that only 7.24% of readings have a latency of 10 or more seconds, 1.18% have a latency of 30 or more seconds, and less than 1% (0.88%) have a latency of one minute or longer. Although these are low percentages, we examined the significantly delayed readings to determine if they occur randomly or follow a pattern. We found that the delayed readings do not occur randomly, but rather appeared disproportionately on certain dates, at certain sensor locations, and with certain cellular towers, as seen in Fig. <ref>. Interestingly, the sensor locations with the most delayed readings have no overlap with the locations that have either the lowest median signal strength or the highest median latency. However, when looking at the map of the sensor locations in Fig <ref>d, we see again that most of these locations are in neighborhoods with a majority Black or Latine population. 
We could not identify any temporal or location-based events events, such as sporting games, that have previously been associated with cellular network delays and may have caused these significant events. Coupled with the lack of empirical evidence from the cellular service providers , we are led to determine that the delays are likely caused to carrier-specific issues such as cell tower maintenance. § POWER §.§ Motivation for a Solar-Powered Urban Sensor Network Nodes must be continuously running to collect data over time, yet many outdoor urban spaces are not equipped with accessible wired mains <cit.>. Solar power is the most ubiquitous form of renewable energy for sensor networks, and will remain prevalent in the coming years for the following reasons: 1) Solar panels are relatively inexpensive and easy to install. 2) Solar panels can power sensors that need to operate continuously in remote or hard-to-reach locations where it may be difficult or expensive to run electrical cables or replace batteries. 3) Using solar power eliminates the need for frequent battery replacements, which creates an added burden for cities looking to deploy sensor networks. Thus we use solar energy to power our sensor network to achieve reliability through continuous power, scalability in allowing for power in locations that do not have outlets, ease of maintenance by limiting battery replacements, and low-cost by requiring no new infrastructure. §.§ Materials: Battery, Solar Panel, and Power Usage   Each sensing node was outfitted with a rechargeable 2000 mAh lithium polymer battery and a 10×13 cm Voltaic Systems P126 6W solar panel. The solar panel was attached horizontally, in a flat position, to the top of the node's respective bus shelter to maximize solar absorption, maintain security of the panel, and provide ease of installation. To optimize for low power consumption, the microcontroller operated in a duty cycled mode, consuming as little as 40 µA between measurements. The device's four electrochemical gas sensors consume microwatts of power, while the particulate matter (PM) sensor consumes up to 80 mA power as it relies on an internal fan to circulate air. Thus to optimize the overall power usage, we sampled the gases every 60 seconds and sampled the PM and transmitted data every 5 minutes. On average, the device drew 4mA current over a 24 hour period, allowing the battery to power the sensing node, including communications, for approximately 15 days at the aforementioned sampling rate. §.§ Methods: Power Saving Strategies In October 2021, we noticed that one of the devices was no longer charging. After sending the local maintenance team to investigate, we discovered that the sun was no longer reaching the solar panel due to the change in the sun's position and the node's location surrounded by skyscrapers. We anticipated that this issue would begin to show up in other nodes as well, so determined three potential solutions to ensure the network still collected useful data throughout the winter months: * Set the sampling interval to be more than every five minutes, which would deplete the battery less quickly by running the PM sensor and data transmission less often. * Implement a power-saving mode to ensure devices only run when they have a certain amount of battery and sleep when they are below that value. * Schedule devices to only run at certain times of the day, i.e. for a few hours in the middle of the day when there is sunlight. 
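To give a rough sense of what the first option buys, the arithmetic can be sketched with a simple duty-cycle model before weighing the trade-offs below. The 2000 mAh capacity, 40 µA sleep current, and up-to-80 mA particulate-matter draw are taken from the hardware description above; the roughly 15 s active window per cycle is an assumption chosen so that the model reproduces the observed ~4 mA average draw, and the resulting lifetimes are ideal upper bounds that come out somewhat above the roughly 15 days observed in practice at the 5-minute interval.

# Back-of-envelope battery-life model for the duty-cycled node.
SLEEP_MA    = 0.040    # 40 uA sleep current (from the hardware description)
ACTIVE_MA   = 80.0     # PM sensor fan plus radio burst (assumed worst-case draw)
ACTIVE_S    = 15.0     # active seconds per cycle (assumption, tuned to ~4 mA average)
BATTERY_MAH = 2000.0

def battery_days(interval_s: float) -> float:
    """Ideal battery-only lifetime for a given sampling/transmission interval."""
    avg_ma = SLEEP_MA + ACTIVE_MA * ACTIVE_S / interval_s
    return BATTERY_MAH / avg_ma / 24.0

for minutes in (5, 10, 15):
    print(f"{minutes:>2}-minute interval -> ~{battery_days(minutes * 60):.0f} days (ideal)")
# The 5-minute interval gives ~21 ideal days; doubling the interval roughly doubles the lifetime.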
Naturally, each option comes with its own trade-offs that had to be considered. Sampling less often would provide less temporal coverage which could cause cities to potentially miss timely notifications from sensors, make it more difficult to identify noisy or anomalous readings through techniques such as moving averages, and introduce calibration errors from datasets with different resolutions. A power-saving mode could result in large time spans with no data, creating difficulty in comparing data from different seasons and potentially resulting in a lack of data needed for calibration. Scheduling devices to only run at certain times would limit data collection to only specific hours of the day, and may not solve the issue if the number of hours is not chosen correctly. Based on the tradeoffs and our need of data for sensor calibration, we implemented a power-saving mode to put devices into a deep sleep to avoid depleting the batteries in low- or no-light conditions. Power-saving mode was initiated when a battery's power level fell to 15% or less of its total capacity then turned off when the battery's power level had recharged to at least 40%. §.§ Results: Data Loss due to Power Saving Mode Between the autumn and spring equinox of the year long study period, 44 devices (37.29%) went into power saving mode (PSM), with most devices entering PSM between January and March. Seven of these devices were at community selected sites, representing about 16% of the devices in PSM, indicating the community selected sites were not disproportionately affected. In total, devices in the networks spent 19,450,915 seconds — over 33,180 hours or 1382.5 days—in PSM, resulting in about 398,000 potential sensor readings that were not captured. Most devices entered PSM numerous times, with several entering more than five times during the study period. Thus, in many locations there was adequate sunlight to keep the devices charged throughout the winter months if a larger solar panel had been used or the devices had better energy harvesting to extend the battery life with the limited charge they received. §.§ Results: Location of Solar Charging Issues As expected, the node locations in downtown Chicago entered PSM for a long duration of the winter due to the high number of very tall buildings in the neighborhood. However, several node locations in neighborhoods outside of downtown Chicago, that lack a high density of tall buildings, also experienced solar charging issues. In fact, the node location with the second highest amount of time spent in PSM was not in a location near tall buildings, and 8 of the 12 node locations that had the most power saving hours were outside of the downtown area, as shown in Fig. <ref>f. The figure also shows that they mostly fall in neighborhoods with a majority Black or Latine population. As seen in Fig. <ref>, shadows from trees for large portions of the day could be a potential cause for charging issues in some areas. In addition, ice build up on solar panels may cause charging issues, but this is difficult to diagnose without visiting every node location while it is in PSM. Thus, further analysis is required to determine the exact cause of charging issues in these locations that obviously lack tall buildings in the vicinity. The important takeaway is that the dynamic physical environment of solar IoT deployments need to be considered by tools that are currently being developed to estimate solar energy availability using historic data or satellite/map images <cit.>. 
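For completeness, the hysteresis rule governing power-saving mode can be written out explicitly. Only the two thresholds (enter at 15% of capacity or less, resume at 40% or more, as described above) are taken from the deployment; the surrounding logic is a hypothetical sketch rather than the actual firmware.

# Hedged sketch of the power-saving-mode (PSM) hysteresis; thresholds from the deployment,
# everything else illustrative.
ENTER_PSM_AT = 0.15
EXIT_PSM_AT  = 0.40

def next_state(in_psm: bool, battery_fraction: float) -> bool:
    """Return True if the node should be (or remain) in power-saving mode."""
    if not in_psm and battery_fraction <= ENTER_PSM_AT:
        return True            # deep-sleep until the panel recovers the battery
    if in_psm and battery_fraction >= EXIT_PSM_AT:
        return False           # enough charge to resume 5-minute sampling
    return in_psm              # no change between the two thresholds

# Example trace: a node dipping to 12% enters PSM and stays there until it reaches 40%.
in_psm, trace = False, []
for level in (0.30, 0.16, 0.12, 0.20, 0.35, 0.41):
    in_psm = next_state(in_psm, level)
    trace.append(in_psm)
print(trace)   # [False, False, True, True, True, False]

The gap between the two thresholds is what keeps a node from oscillating rapidly in and out of deep sleep when charging conditions are marginal.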
§.§ Results: Predicting Solar Charging Issues We used the OSM Buildings data <cit.> and Shadow Accrual Maps tool <cit.> to determine how well we would be able to predict a sensor location having power saving issues. With the OSM Buildings data, we examined the distance to the closest building, height of the closest building, and mean and median height of buildings within 100, 250, and 500 meters of each node location. For shadows, we used the tool to calculate the amount of time each node location was in shadow on the winter equinox. Using both a logistic regression model for the binary case of power saving or not, and a linear regression model for the amount of time spent in PSM, we found no statistical significance for either the amount of time spent in shadow, or any data related to buildings around the node locations, as highlighted for one data point in Fig. <ref>. Upon further examination, we discovered that one of the issues around using crowdsourced and open source resources is that they are not consistently updated. For example, one sensor node that was indicated to have shadow issues but did not enter PSM likely had a building present when the data were uploaded, but no longer has a building there as discovered on Google Maps. Likewise, as seen in Fig. <ref>, a node location with no building nearby that entered PSM was likely affected by the presence of a tree near the bus shelter, which was not captured in the tools we used, which are focused on buildings. This points to an additional shortcoming of the data available, which focus on buildings and do not account for foliage, hyperlocal snowfall, and other physical phenomena that may impede solar charging. § DISCUSSION §.§ The Potential of LTE-Connected, Solar-Powered Urban Sensor Networks The results show immense promise for LTE-connected urban sensor networks. Most node locations had adequate signal strength to achieve connectivity, and the vast majority of sensor readings were transmitted to the cloud server within five seconds. Furthermore, there were no noticeable issues around connectivity due to temporal features such as weather or traffic patterns. We also had success using LTE to detect errors and perform software updates, including a firmware patch to add the power saving mode. These findings all point to the potential of LTE in creating reliable, scalable, easily maintainable, and real-time sensing in cities. Solar panels proved to be a reliable energy source for over half of the year-long study, and most devices that experienced charging issues only did so between January and March. Chicago is at a more northern latitude than most of the global population, so we expect that many cities, and especially those in the Global South, would experience fewer solar charging issues. Additional improvements with solar panel efficiency <cit.> and research on smart power management strategies for renewable energy in IoT establish solar charging as a viable powering option. The nodes that were collocated at EPA stations all experienced no charging or connectivity issues, suggesting that placing nodes on rooftops could be a viable solution to improve reliability. However, node placement is highly dependent on the application, and many cities may choose or need to place nodes closer to street level. Future research could include interpolation and machine learning techniques to correlate data from street level to rooftop nodes to address the technical issues and still collect useful data. 
Additionally, passive wireless reflector and relay research can find application in routing network availability from cell towers and around built infrastructure to end devices. §.§ Implications of Connectivity and Charging Issues Despite the success we had in using 4G LTE-M to transmit data, we discovered issues around “dead zones", delayed readings, and unequal signal strength. The cause of these issues could not often be easily identified and data sources from AT&T and the FCC indicate widespread support of the LTE network across Chicago, as seen in Fig. <ref>. Thus, the discovery of these issues raises questions on the reliability of LTE networks, especially in cities that do not have as much cellular infrastructure as Chicago. However, we did not identify significant data loss from the connection-related issues, suggesting that LTE-connected sensor networks are likely appropriate for applications that do not rely on instant or near instant data. For applications that cannot afford to have any delayed data, such as emergency support services, network designers will want to think about building robustness into the system to ensure real-time communication for all readings. Despite the ubiquity of solar panels as the power source for wireless sensor networks, we found that they are not a reliable power source for urban sensor networks for cities that have limited sunlight in winter months. In addition, urban areas at latitudes closer to the equator will also experience solar charging issues if they have numerous tall buildings blocking the path of the sun. Thus, we need to continue research in alternative charging options, energy harvesting techniques, and battery-less sensors to ensure reliability and scalability in powering urban sensor networks. In our study, we found that cellular connection and solar charging issues are not all localized to areas with tall buildings and may be spread inequitably around a city. Thus, urban sensor network deployments have the potential to exacerbate existing societal inequalities by allowing for networks to be scaled more easily in some neighborhoods than others. In turn, this can increase mistrust between residents and governments <cit.> and drive residents to make assumptions about the distribution of resources and harms based on the physical presence of sensors <cit.>. Thus, to serve people in all communities, sensor network designers should consider working with local service providers, using repeaters, multiple sensors, and other technologies to improve reliability in underserved areas. Furthermore, networking researchers and designers need to focus on equality, and not just quality or area coverage when building and deploying infrastructure. §.§ Challenges around Data Access Due to the lack of official up-to-date building information, we relied on open crowdsourced data to determine the location and height of buildings in the city. Similarly, because the location of cellular towers is not publicly available, we relied on data from OpenCellID. As with many open crowdsourced datasets, these data were not completely accurate or up-to-date <cit.>. This was especially clear when examining FCC carrier connectivity information, as the entire city of Chicago seemingly has coverage (Fig. <ref>, yet we found that was not the case, likely because the data are reported by carriers <cit.>. We also discovered data accuracy issues in shadow prediction using the Shadow Accrual Maps <cit.>. 
Other crowdsourced data, such as nPerf, presented an alternative usage issue in incompleteness, as seen in Fig. <ref>. Particularly in Chicago, there is significantly more data available in the northern part of the city and along highways, likely attributed to the increased usage of crowdsourced platforms by white people and high-income earners <cit.>. Thus, relying on crowdsourced data makes it difficult to predict locations with solar charging or connectivity issues that may arise due to building height and other urban interferences, made further difficult by the social inequities that exist in many cities and are exacerbated in crowdsourced technologies. The difficulty in working with open crowdsourced data points to a need for new methods to obtain up-to-date urban data. For example, researchers can help develop ways to obtain building height or cell tower location from satellite imagery or Google Maps. We may also look to develop easier ways for cities to create their own databases that are kept up-to-date or develop better community science incentives to keep crowdsourced data sources such as OSM Buildings, OpenCellID, and nPerf up-to-date and to reach new users who do not currently contribute to these datasets. §.§ Limitations of this Study We acknowledge that this work is limited, as it focuses on a single-city case study. Although we believe that Chicago is representative of many other large cities, we lack the empirical evidence needed to “assess the implications and potentially transformative consequences" of how similar smart city networks would emerge in different urban contexts <cit.>. An additional limitation is that we use weather data from US government agencies and there are only three weather stations in the Chicago area. Although we also had temperature and humidity readings at each node, these sensors were located inside the node enclosures, and thus did not always provide accurate external measurements. Thus, our weather-related analyses are not hyperlocalized to most of the sensors, and it is possible that there are hyperlocal weather correlations, such as urban heat islands, that affected sensor connectivity. § CONCLUSION In this work, we present the challenges and opportunities from a year-term city-wide urban sensor network deployment. The network was created based on five specific criteria of success that we identified from past work. We provide an in-depth analysis of deployment data from the aspect of cellular connectivity and solar energy harvesting, which are the two key features that help meet the success criteria. In addition we highlight inherent challenges with open data sources available for root-cause analysis of failure nodes, and identify strengths and weaknesses to define future research directions that will support large-scale, real-time energy harvesting deployments in achieving reliable, equitable smart city networks. acm